Memory full

What I do is collect data from a DAQ5102, filter it, and display it.

But the program hangs after running for a while; I captured the memory usage as attached. According to the Windows Task Manager, my program starts at about 50 MB and ends up using about 70 MB.

There are some insert and delete array operations on big arrays, especially in Update_Hist_Plot.vi.

How can I improve it?
Message 1 of 12
Message 2 of 12

This thing needs major surgery!

The keyword is "in place", and you are not doing it. 😉 You need to initialize an array at a fixed size and never insert or delete elements. All you need to do is replace array elements, keeping the array size constant. You can always rotate the array by a certain amount and overwrite the oldest data. All in place.

Another cause of extra data copies is coercion. You need to make sure that data types match.

  1. "Filter channel" has SGL inputs, but you feed it DBL.
  2. You basically output two entire copies of the data out of "update...", one for the plot and one for the output. Why?
  3. The plot output is DBL, but the output connector is SGL, causing a coercion, then there is another coercion when you feed it into the property node, which again expects DBL.
  4. Get rid of that request deallocation. Once things are in-place, it is expensive to deallocate after each call, just to reallocate the same memory at the next call.
  5. Shuffling data via value properties is very inefficient.

Remember that arrays must be contiguous in memory. This means that each array resize (insert, delete) will force a copy of the entire array to be created in memory at a new location.

What does the calling program look like? Ideally you would keep the data in a shift register in the top-level program and graph it right there instead of pushing large arrays deep into subVIs and shuffling the data via property nodes.

Could you explain in more general terms what the program is supposed to do exactly?

Message 3 of 12
You mentioned I "can always rotate the array by a certain amount and overwrite the oldest data". I can rotate the array, but how do I overwrite the old data? Thanks.

1. For the data types, I have now changed everything to DBL to avoid data conversion.

2. I will keep only one output.

3. Feeding the property node is how I pass data to the waveform plot. The plot is located in the main program for the user interface, and this is the only way I have to pass data to it; otherwise it would have to go through many VI levels.

4. I will get rid of the Request Deallocation. May I know how the same memory gets reallocated at the next call?

5. Are you referring to the property node?

This program collects data at a high sampling rate and displays it in two formats: one is a live (online) refresh, the other is an accumulated historical plot that clears the data when it reaches a limit.
Message 4 of 12
I have changed the program accordingly. Now the PC does not hang, but performance goes down after 8 minutes. Is there any further way to improve it?
 
 
Message 5 of 12
I don't see anything that would cause major problems like continually building and enlarging arrays. I know the one array is 60,000 elements. How large is the array that gets replaced into it?
 
I see one possible way to improve the code in your update_history VI if the array that gets popped in is particularly large. It looks like you place the elements into the 60k-row array one row at a time. If you exceed 60,000, you wrap around (perhaps you should just set the index to 0 rather than computing 60,000 − 60,000 in that case). Why not replace larger chunks of the array at once? If that array has 1000 rows, replace a 1000-row chunk one time rather than a 1-row chunk 1000 times. Of course there is an issue if the 1000-row array would cause it to wrap around the larger array, in which case you check how many rows you have left before hitting 60,000. If you have enough room, replace in the whole array; if less, replace in only what you have room for, reset the index to 0, then replace in the rest (see the sketch at the end of this post).
 
I don't know if this would help or explain your problems at the 8-minute mark (how many rows would have been manipulated after running the code for 8 minutes?), but it would be a way to optimize the code one step further.
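In case it helps to see the wrap-around bookkeeping spelled out, here is a rough Python sketch of that chunked replacement (sizes, names, and the buffer shape are made up; in the actual VI this would be Replace Array Subset on the buffer held in the shift register):

```python
import numpy as np

HISTORY_ROWS = 60000                      # illustrative buffer size
history = np.zeros((HISTORY_ROWS, 2))     # made-up shape: 60,000 rows of 2 values
write_idx = 0

def append_chunk(history, write_idx, chunk):
    """Replace a whole block of rows at once, splitting only when it wraps."""
    n = len(history)
    room = n - write_idx                  # rows left before the end of the buffer
    if len(chunk) <= room:
        history[write_idx:write_idx + len(chunk)] = chunk   # fits: one block replace
    else:
        history[write_idx:] = chunk[:room]                  # fill to the end...
        history[:len(chunk) - room] = chunk[room:]          # ...then wrap to index 0
    return (write_idx + len(chunk)) % n

# 1000 new rows go in as one (or at most two) block replacements
# instead of 1000 single-row replacements:
write_idx = append_chunk(history, write_idx, np.random.rand(1000, 2))
```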
Message 6 of 12
One other thing: in the current history display VI, you use two different index functions on the same array when you can do the same thing with one function (see below). This might be creating an extra data copy of a large array.
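Since the picture may not come through, here is the same idea as a rough Python analogue (data is made up): one operation that produces both outputs, instead of two separate index operations over the same large array.

```python
# pairs stands in for the large 2-column history array (made-up data)
pairs = [(i * 0.001, (i % 7) - 3) for i in range(10)]

# Two separate index operations, each walking the same data again:
xs = [p[0] for p in pairs]
ys = [p[1] for p in pairs]

# One operation producing both outputs at once (like expanding a single
# Index Array node to give two outputs):
xs, ys = zip(*pairs)
```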


Message 7 of 12
Since I don't have your hardware, there is no way for me to test your VI. Would it be possible for you to substitute a diagram constant of the 2D array containing typical data?
 
Could you do the following:
  • Place an indicator on the 2D array right after the fetch operation.
  • Run your VI so the indicator contains typical data.
  • Abort the VI.
  • Right-click the terminal of the new indicator and do "change to constant".
  • Delete all DAQ stuff.
  • Now we have a VI that no longer relies on any hardware, and I can test it live with typical data.
  • Save under a new name and attach.
Some additional observations:
  • You still do way too much data juggling.
  • You are still coercing going into the filter VI.
  • Instead of your homebrew FIFO buffer, have you considered the "Collector" express VI?
  • Remember that xy graphs take complex data directly (re=x, im=y), so you don't need any of this indexing and bundling you currently do. All you need is a 1D complex array containing your history.
  • Even simple things such as the order of operations are backwards: you take the absolute value of a 2D array and then slice out a column, discarding the rest. Wouldn't it make more sense to swap these two operations, taking the absolute value of the 1D array? (See the sketch after this list.)
  • A 2D xy graph with 60000 points contains about 1 MB of data, so you're shuffling 5 MB per second through subVIs and property nodes (the property node most likely requires another data copy in memory, and there are a few more copies scattered over the various VIs).
  • It all ends up on an xy graph that is probably less than 1000 pixels wide, meaning only about 1/60 of the data is really shown.
  • There is no purpose in having the shift register in the main VI, because all history processing is done in the subVI. Design the subVI (current...vi) as an action engine, eliminating the entire 2D (or 1D complex) array in the main program.
  • You should also make sure that none of the subVIs have their panel open when you are running. This costs extra.
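Regarding the order-of-operations point above, here is a minimal (non-LabVIEW) numpy sketch with made-up data showing why slicing first is cheaper: the absolute value then only touches the one column that is actually kept.

```python
import numpy as np

data = np.random.randn(60000, 2)     # stands in for the fetched 2D array (made up)

# Backwards order: abs() is computed for every element of the 2D array,
# then most of that work is discarded when a single column is sliced out.
col_a = np.abs(data)[:, 0]

# Better order: slice out the 1D column first, then take abs() of just that.
col_b = np.abs(data[:, 0])

assert np.allclose(col_a, col_b)     # same result, far less work on the 2D path
```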
 
 
Message 8 of 12
The 60000-point array only has 2 rows, as attached. I have not found the function to replace a large chunk of data yet.
Message 9 of 12
The Collector VI looks good, but it seems it can only process one-dimensional data; for two-dimensional data, I would need to split the array and build it up again.
Message 10 of 12