Dynamic Signal Acquisition


Why do I have to close and re-open LabVIEW in order to run my VI multiple times? I get a "not enough memory" error if I try to run the VI twice without closing LabVIEW

I have acquired a lot of data (~500,000 samples at a 4 kS/s rate for 2 min, on 64 channels). Once I've got my data collected I do some processing, which all seems to work well, but when I try to run the VI a second time I get the "not enough memory" error.

If I close LabVIEW and open it again, I can run the VI once before having to restart.

 

Is there a way around this restarting?

Message 1 of 3

For the second run, LabVIEW can't allocate new (big) memory buffers.

There are some knowledge base articles on how to handle large memory buffers. The trick is to allocate the needed memory only once and reuse those buffers.
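LabVIEW code is graphical, so a text version can only sketch the idea, but the pattern Henrik describes looks like this in Python/NumPy terms (read_block is a hypothetical stand-in for your DAQ read call, not a real API): allocate the buffer once, then overwrite it in place on every run.

```python
import numpy as np

CHANNELS = 64
SAMPLES_PER_CHANNEL = 480_000   # 4 kS/s * 120 s

# Allocate the big buffer exactly once, when the program starts.
buffer = np.empty((CHANNELS, SAMPLES_PER_CHANNEL), dtype=np.float64)

def acquire_run(read_block, block_size=4_000):
    """Fill the preallocated buffer in place on every run.

    read_block(n) is a hypothetical stand-in for the DAQ read;
    it should return a (CHANNELS, n) array of fresh samples.
    """
    for start in range(0, SAMPLES_PER_CHANNEL, block_size):
        # Overwrite existing storage -- no new allocation per run.
        buffer[:, start:start + block_size] = read_block(block_size)
    return buffer
```

In LabVIEW itself the equivalent is to create the array once with Initialize Array and write into it with Replace Array Subset inside the loop, rather than letting Build Array grow a new buffer on every run.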

 

Greetings from Germany
Henrik

LV since v3.1

“ground” is a convenient fantasy

'˙˙˙˙uıɐƃɐ lɐıp puɐ °06 ǝuoɥd ɹnoʎ uɹnʇ ǝsɐǝld 'ʎɹɐuıƃɐɯı sı pǝlɐıp ǝʌɐɥ noʎ ɹǝqɯnu ǝɥʇ'


Message 2 of 3

Once the memory is allocated to hold the data, it is retained until all VIs are closed. This applies to subVIs as well (the data is retained until the calling VI is closed). Your multiple runs are allocating new buffers while the previously filled ones are retained. It becomes obvious if you watch your memory usage in the Windows Task Manager.
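For a sense of scale (a rough estimate, assuming the samples are stored as LabVIEW's default 8-byte DBL):

```python
samples_per_channel = 4_000 * 120      # 4 kS/s for 2 min = 480,000
channels = 64
bytes_per_run = samples_per_channel * channels * 8
print(bytes_per_run / 2**20)           # ~234 MiB per run, before any copies
```

A second run that can't reuse the first run's buffers roughly doubles that, and any processing copies multiply it further, which is easily enough to run a 32-bit LabVIEW process out of contiguous address space.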

One way I cut down on usage is to put the acquisition in a subVI and use the 'call by reference' technique, which lets the memory used by the subVI be deallocated as soon as it finishes. You also have to be careful with how you manipulate the data, to avoid making copies (search for info on 'show buffer allocations').

Graphing the data will also increase memory usage, and there are tricks to limit the amount of data plotted to the resolution of the plot, i.e. decimating the data so you only get one point per pixel (see the sketch below).

Adding more memory to your computer may or may not help, because you're limited by the largest chunk of contiguous memory available. I've had identical systems behave differently because Windows laid out its allocations differently, leaving one machine with a larger contiguous block than the other (very frustrating when trying to deploy multiple, identical systems).
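The decimation trick is easy to show in text. A minimal sketch in Python/NumPy (the function name and pixel count are illustrative, not from any toolkit): keep the minimum and maximum of each pixel-wide bucket, so peaks and troughs survive while the graph never holds more than two points per pixel.

```python
import numpy as np

def decimate_for_plot(y, plot_width_px):
    """Reduce y to at most 2 points per horizontal pixel (the min and
    max of each bucket) so the rendered trace looks the same while the
    graph no longer keeps the full waveform in memory."""
    n = len(y)
    if n <= 2 * plot_width_px:
        return y                          # already small enough
    bucket = n // plot_width_px
    chunks = y[:bucket * plot_width_px].reshape(plot_width_px, bucket)
    out = np.empty(2 * plot_width_px, dtype=y.dtype)
    out[0::2] = chunks.min(axis=1)        # troughs survive
    out[1::2] = chunks.max(axis=1)        # peaks survive
    return out

# e.g. for an 800-pixel-wide graph:
# plot(decimate_for_plot(channel_data, 800))
```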

 

Message 3 of 3