LabVIEW Idea Exchange

NASA Matt

Automatic Memory Management Upgrade Needed

Status: Declined

Too many times, a generic "Out of Memory" error pops up without explanation, source, or traceability.  Sometimes it occurs intermittently when executing the exact same process.  Tracking these mystery errors down takes more time than it should and undermines the efficiency gains the automatic memory manager within LabVIEW was designed to deliver.  After some research and help from an application engineer, it is apparent that the memory manager is not well suited for modern PCs and OSes when larger amounts of data need to be processed.

 

LabVIEW should be able to use all the application memory offered by the OS, not just the contiguous parcels it is lucky enough to find.  Not only should it be able to use fragmented virtual memory but it should also be able to exploit more than just 75% of a 1GB application segment, particularly when 16 GB is installed on the motherboard.

 

For example, simple arrays of I16s are sometimes denied when they are only tens of MB long, and denied every time when they reach hundreds of MB.  That doesn't come close to the available memory capacity of the PC.  Granted, those arrays are large compared to VIs written for simple GPIB devices twenty years ago, but the need for larger arrays is now more prevalent with high-speed data acquisition and high-resolution imaging.

 

Why can't the memory manager grow with the latest PC memory capacities, motherboard architectures, modern OSes, and modern instruments that can acquire and transmit data at those array sizes?  Isn't it time to challenge the need for contiguous memory?  Can't more intelligence be added to the memory management strategy so that large arrays aren't copied redundantly, causing "out of memory" errors?  Can't a memory manager work within the fragmented virtual memory space of a Windows OS without requiring a reboot?  Shouldn't it adapt to the OS environment instead of requiring every other application to be shut down just to statistically gain more contiguous memory?  Can't better automatic tracing and error messaging be delivered to the programmer to prevent so much wasted time?

 

I have been impressed by the quality of service and the detail of the online help for tiptoeing around these limitations.  However, it seems time to graduate from building contraptions that avoid the problem and instead apply that effort toward solving it.  Are there plans to issue a new automatic memory manager that exploits the potential of modern PCs and OSes?

19 Comments
NASA Matt
Member

The mandatory IN and OUT error connections on array VIs that allocate memory sound like a GREAT idea!

 

Our testing shows that most PC/OS/LV combos fling this out-of-memory error up on the screen at different size thresholds and at seemingly random time durations since the last reboot.  It would be prudent to introduce error handling to the programmer/user rather than let the whole thing fail at random times (not very productive).  Following the example posted on 2/27, LV 2009 (32-bit) will crash on a WinXP 32-bit OS with 4 GB physical RAM when requesting 64M items in that I16 array initialization.  That should be roughly 122 MB requested from the VM in XP.  However, even with over 3 GB free, LV stops with that anonymous out-of-memory error.  When the same thing is tried on LV 2009 (32-bit) on a Win7 OS (64-bit) with 16 GB, it has worked so far.
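For reference, the 122 MB figure above can be checked directly: a 64M-element I16 array is 64,000,000 elements at 2 bytes each, which a 32-bit process must then find as one contiguous block of virtual memory.  A trivial C sketch of the arithmetic (assuming `short` is 2 bytes, matching LabVIEW's I16 on mainstream platforms):

```c
#include <stddef.h>

/* Bytes needed for an n-element I16 array; sizeof(short) == 2 is an
 * assumption about the host compiler, true on Win32/Win64. */
static size_t i16_array_bytes(size_t n_elems)
{
    return n_elems * sizeof(short);
}
```

64,000,000 × 2 = 128,000,000 bytes, i.e. roughly 122 MiB, consistent with the comment above.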

 

The solution is not to run away from the problem and simply purchase new PCs, RAM, OSes, and new LV whenever a problem arises.  This is a fundamental architecture and error-handling issue that will keep coming up as memory needs increase.  It is also a legacy maintenance issue for all of us with slightly older copies of LV and Windows who don't have the time and resources to buy everything new every year or for each release.  Many of us are under schedule and budget pressure to produce and don't have the luxury of turning a memory manager investigation into a full-time research project.  We are counting on NI and their expertise to excel in supporting this product.

 

GregSands
Active Participant

At the very least, I'd like a VI that would tell me definitively whether a proposed array allocation (or better still, set of allocations) will succeed or fail, before I try to allocate it/them.  For example, I do 3D deconvolution of reasonably large arrays, which requires not just the input array, but several other real and complex arrays to be created during the process.  If I can't work on the whole array at once, then I can do a segment at a time, but with a reasonable cost in overhead.  At the moment, I take an "educated" guess as to how big these portions can be based on the OS and the memory available, but it would be far preferable to be sure as to what could or could not be allocated.
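The "educated guess" segmenting Greg describes can be sketched in C terms (the memory budget and per-element working-set cost are hypothetical inputs chosen by the programmer, not values LabVIEW exposes):

```c
#include <stddef.h>

/* Given a guessed memory budget and the per-element working-set cost
 * (input array plus the real and complex scratch arrays the algorithm
 * needs), compute how many elements fit in one processing segment. */
static size_t segment_elems(size_t budget_bytes, size_t bytes_per_elem)
{
    return bytes_per_elem ? budget_bytes / bytes_per_elem : 0;
}

/* Number of segments needed to cover the whole array (rounded up). */
static size_t segment_count(size_t total_elems, size_t seg_elems)
{
    return seg_elems ? (total_elems + seg_elems - 1) / seg_elems : 0;
}
```

The guess still carries the overhead cost Greg mentions: too small a segment wastes time on per-segment setup, too large a segment risks the very allocation failure the guess was meant to avoid.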

AristosQueue (NI)
NI Employee (retired)

> At the very least, I'd like a VI that would tell me definitively whether

> a proposed array allocation (or better still, set of allocations) will

> succeed or fail, before I try to allocate it/them.

 

Between the moment you ask "do I have enough space to do this?" and the moment you actually do the reservation, a parallel thread may have allocated memory and you may then fail. In a parallel system, it is *always* a race condition to ask "is the resource available?" and not actually claim the resource. For a whole chain of resources, you have to claim them one by one and if any one of them fails, you then back up and release the ones you already acquired. Yes, this does mean keeping track of which ones you've successfully gotten and then freeing only those (lest you generate more errors from closing resources you never opened). I will freely admit that the error handling on such code is not pretty.
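The claim-one-by-one, back-up-and-release pattern described above can be sketched in C (this illustrates the general technique, not LabVIEW's internal code; `claim_buffers` and `release_buffers` are hypothetical names):

```c
#include <stdlib.h>
#include <stddef.h>

/* Claim a chain of buffers one by one.  On any failure, free only the
 * buffers already obtained and report that the chain as a whole failed,
 * so the caller never racily "checks then allocates". */
static int claim_buffers(void *bufs[], const size_t sizes[], size_t n)
{
    for (size_t i = 0; i < n; i++) {
        bufs[i] = malloc(sizes[i]);
        if (bufs[i] == NULL) {
            while (i > 0)          /* back up: release what we hold */
                free(bufs[--i]);
            return 0;              /* chain failed as a whole */
        }
    }
    return 1;                      /* every buffer is claimed */
}

static void release_buffers(void *bufs[], size_t n)
{
    for (size_t i = 0; i < n; i++)
        free(bufs[i]);
}
```

Note the rollback loop frees only indices below the failing one, which is exactly the "freeing only those you successfully got" bookkeeping AQ mentions.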

GregSands
Active Participant

> For a whole chain of resources, you have to claim them one by one

> and if any one of them fails, you then back up and release the ones

> you already acquired.

 

OK, that makes sense - I hadn't considered race conditions.  The problem is that at the moment there's no way to recover from a failed allocation, or to tell LabVIEW to release allocated memory.

NASA Matt
Member

Aren't these tasks supposed to be the job of the automatic memory manager and the OS?  Why do we as programmers and end-users have to spend so much time and effort doing this?

AristosQueue (NI)
NI Employee (retired)

> Aren't these tasks supposed to be the job of the automatic

> memory manager and the OS?  Why do we as programmers

> and end-users have to spend so much time and effort doing this?

 

Tradition! 🙂

Seriously though... more and more of it is being handled, but we haven't figured out how to handle all the issues. One example: on real-time systems, in order to achieve determinism, there is no compaction of memory once it is allocated; when you run out of sizable blocks, you have to reboot, and that has to be under direct programmer control so the reboot can be scheduled for a convenient time.

MaryH
Member
Status changed to: Declined
 
X.
Trusted Enthusiast

Just a quick follow-up on the suggestion of error in and out connectors: GerdW had suggested precisely this, and it is still possible to kudo the suggestion (just to feel good).

SteenSchmidt
Trusted Enthusiast

@AQ: There exist deterministic memory compaction algorithms for real-time systems: memory defragmentation that runs in the "shadow" of deallocation, meaning that at most a block equal in size to the deallocated one may be moved as well. There are usually restrictions based on block and page sizes, for instance, depending on the specific memory architecture and layout.

 

Traditionally you're right, but I think real-time in the industry is moving toward dynamic memory management as demand rises.

 

I'd be happy with some sort of error output that could simply tell me whether an allocation failed. Not whether the requested allocation is possible (since that would create the race condition), but attempt the malloc and then report to me if it failed, so I can retry the operation with a less memory-demanding algorithm. It could be a handful of special prims in the Memory Control palette, for instance (Init Array, Build Array, and a few cousins). It wouldn't really work for real-time, as this is far from deterministic, but it might save the day for applications that would otherwise run head first into the "insufficient contiguous memory" wall.
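The try-then-fall-back pattern Steen describes might look like this in C terms (`alloc_with_fallback` is a hypothetical helper, not an existing LabVIEW primitive; the point is attempting the allocation and reacting to failure, rather than asking in advance):

```c
#include <stdlib.h>
#include <stddef.h>

/* Try to allocate `want` bytes; on failure, halve the request until it
 * succeeds or drops below `min`.  Writes the size actually obtained so
 * the caller can switch to a chunked, less memory-hungry variant of
 * its algorithm instead of dying with an out-of-memory error. */
static void *alloc_with_fallback(size_t want, size_t min, size_t *got)
{
    while (want >= min) {
        void *p = malloc(want);
        if (p != NULL) {
            *got = want;
            return p;
        }
        want /= 2;   /* allocation failed: retry with a smaller block */
    }
    *got = 0;
    return NULL;     /* even the minimum request failed */
}
```

As Steen notes, this is far from deterministic (the retry count is unbounded by any time budget), so it suits desktop applications rather than real-time targets.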

 

Cheers,

Steen

CLA, CTA, CLED & LabVIEW Champion