08-02-2013 09:21 AM
Is there an easy way to find and reduce the number of Buffer Allocations in a LabVIEW application?
Let me give you some background onto my approach before getting into my issues: I create a waveform and send it to a PXI-6552 for generation. At the same time that it outputs this waveform, it is also acquiring another waveform. I process this acquired waveform's information and then save it to disk. This loops over and over.
I want to run this VI for periods of time AT LEAST equal to 24 hours. (Maybe longer!)
I know that you can't really control memory management much in LabVIEW, but is there a way to reduce the number of allocations between iterations? Or, at least, free up some memory between for-loop iterations?
When I use the Profile >> Show Buffer Allocations, it seems like everything and their sister VIs all blink with dots. 😕
I would prefer not to run the Memory Profiler and my application for 24 straight hours to examine its memory management and potential to crash. There has to be a better way to go about this.
The common places I find these Buffer Allocations are:
Am I really, really bad at LabVIEW memory management, or is there something obvious I am missing? Please help me out 😄 I cannot find much literature on the subject aside from "use the profile tools".
08-02-2013 09:36 AM
I pretty much feel the same as you do, but I may have some light to shed. Just hoping I'm not shining the wrong light(s).
First, and maybe foremost: the compiler is generally ridiculously good at constant folding and at re-using buffers. Just because you have a dot indicating a buffer allocation does not mean that it is allocating a NEW buffer; it indicates that a buffer MAY be created. For the buffer dots in your loops, it is fairly safe to say that new buffers are not being allocated on every iteration. I too looked at my code with these allocation dots once, and asked on the forum, but the answers then and my experience since have led me to no longer overly concern myself with them.
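To make that concrete, here is a rough C analogy (a sketch, not what LabVIEW's compiler literally emits): the dot corresponds to the one-time allocation before the loop, and every iteration then re-uses that same buffer.

    #include <stdlib.h>
    #include <string.h>

    #define N 1000000

    int main(void)
    {
        /* One allocation up front -- this is the "dot". */
        double *buf = malloc(N * sizeof *buf);
        if (buf == NULL) return 1;

        for (int i = 0; i < 1000; i++) {
            /* Each iteration overwrites the SAME buffer;
               no new allocation happens here. */
            memset(buf, 0, N * sizeof *buf);
            /* ... process buf ... */
        }

        free(buf);
        return 0;
    }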
Also, you don't need to run for 24 hours to see if you will have a problem. Use Performance Monitor and/or Windows Task Manager: start the app before lunch and note the memory usage once it has initialized (loaded and running for a minute or so), go to lunch, and when you get back (or at some other arbitrary time) check the memory usage again. Unless it is growing uncontrollably, you should be fine.
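If you would rather log it than eyeball Task Manager, a small Windows watcher works too (a C sketch; assumes MSVC and linking against psapi.lib, and that you pass LabVIEW's PID from Task Manager). If the working set levels off after warm-up, you are fine.

    /* memwatch.c -- poll another process's memory use.
       Build (MSVC): cl memwatch.c psapi.lib
       Usage: memwatch <pid>   (find LabVIEW's PID in Task Manager) */
    #include <windows.h>
    #include <psapi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) { fprintf(stderr, "usage: memwatch <pid>\n"); return 1; }

        HANDLE h = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ,
                               FALSE, (DWORD)atoi(argv[1]));
        if (h == NULL) { fprintf(stderr, "OpenProcess failed\n"); return 1; }

        for (;;) {
            PROCESS_MEMORY_COUNTERS pmc;
            if (GetProcessMemoryInfo(h, &pmc, sizeof pmc))
                printf("working set: %lu KB\n",
                       (unsigned long)(pmc.WorkingSetSize / 1024));
            Sleep(60 * 1000);   /* one sample per minute */
        }
        /* not reached */
    }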
There is generally not a way to manage memory from your end, because LabVIEW is designed to do memory garbage collection automatically. In rare cases it may be necessary to flag a particular subVI to force it to unload after it completes its execution, which frees up the memory space used by that VI. This would typically only be useful if you have a subVI that eats a huge amount of memory, runs once, and then never runs again. The compiler may or may not be able to recognize that it will only run once, and may (or may not) keep that memory space allocated. The VI for forcing this unload is called "Request Deallocation" and is found in the Application Control --> Memory Control palette. Make sure to read the help for this function.
08-02-2013 09:39 AM - last edited on 04-27-2025 07:24 PM by Content Cleaner
Pay attention to strings, arrays, clusters, and other complex datatypes, especially if they are inside a loop.
You can use the In Place Element Structure to get rid of those dots and increase performance: https://www.ni.com/en/support/documentation/supplemental/08/labview-block-diagram-explained.html
If you are working with scalar types, there's nothing to worry about. The real problems are with the complex datatypes above.
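In textual terms, the In Place Element Structure is a promise to the compiler that you will operate on the data where it lives instead of copying it first. A minimal C sketch of the difference:

    #include <stddef.h>

    /* Copying version: two buffers are alive at once. */
    void scale_copy(const double *in, double *out, size_t n, double k)
    {
        for (size_t i = 0; i < n; i++)
            out[i] = in[i] * k;      /* writes into a second buffer */
    }

    /* In-place version: one buffer, modified where it lives. */
    void scale_inplace(double *data, size_t n, double k)
    {
        for (size_t i = 0; i < n; i++)
            data[i] *= k;            /* no second allocation needed */
    }

    int main(void)
    {
        double a[4] = {1, 2, 3, 4}, b[4];
        scale_copy(a, b, 4, 2.0);    /* needs two buffers */
        scale_inplace(a, 4, 2.0);    /* needs one */
        return 0;
    }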
Can you share more details? Can you show some code?
08-02-2013 09:41 AM - last edited on 04-27-2025 07:26 PM by Content Cleaner
That was very informative.
I also stumbled upon this by jumping from link to link in the help documents https://www.ni.com/docs/en-US/bundle/labview/page/vi-memory-usage.html. So I am reading through this now!
I also considered running it on a more microscopic scale just to see how rapidly the memory will grow. It still might be worthwhile to test.
My question about the "buffers":
In a textual programming language, I believe I could re-use almost all of the allocated memory without duplication. Is there a way to analyze the code to check where or if new memory is being allocated? And is it possible to enforce a waveform data type (I believe it's just a cluster) of size X to be overwritten next iteration by the new waveform data type also of size X? Or do you believe this might already be happening?
EDIT:
I cannot release actual code due to Company policies, but I am using the following general set-up:
Create a relatively simple I2C waveform using the I2C Waveform Reference Library. This is passed through a tunnel to the while loop. I also pass a reference to the NI HSDIO generation and acquisition channels and an error cluster via tunnels as well.
Within the actual loop, I use the NI HSDIO to send the same waveform every iteration to the PXI. I use NI HSDIO Fetch Waveform to return a DWDT (digital waveform data type) which is the same size as the original waveform but naturally with different elements.
I post-process this DWDT by converting it to a Boolean array with the Digital to Boolean Array VI, followed by a few Build Arrays, array transposes, and comparison functions.
Lastly I convert this Boolean array into a string of 0's and 1's and then take substrings from this string (around 10 per iteration). These 10 substrings are converted to decimal numbers, which are built into an array that is saved to disk with Write To Spreadsheet File.
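(For scale, the string step is roughly this in C terms -- a hypothetical sketch with made-up sizes NBITS and FIELD, since I can't post the real code -- just to show the per-iteration string churn:)

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define NBITS 80      /* hypothetical waveform length */
    #define FIELD 8       /* hypothetical substring width -> 10 fields */

    int main(void)
    {
        int bits[NBITS];                 /* stand-in for the Boolean array */
        char str[NBITS + 1];
        long values[NBITS / FIELD];

        for (int i = 0; i < NBITS; i++) bits[i] = i % 2;  /* dummy data */

        /* Boolean array -> string of '0'/'1' characters */
        for (int i = 0; i < NBITS; i++) str[i] = bits[i] ? '1' : '0';
        str[NBITS] = '\0';

        /* ~10 substrings per iteration, each parsed as a base-2 number */
        for (int f = 0; f < NBITS / FIELD; f++) {
            char field[FIELD + 1];
            memcpy(field, str + f * FIELD, FIELD);
            field[FIELD] = '\0';
            values[f] = strtol(field, NULL, 2);
        }

        for (int f = 0; f < NBITS / FIELD; f++) printf("%ld\n", values[f]);
        return 0;
    }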
I do not use any shift registers or anything. It contains no inner loops but does use an inner case structure.
I do not suspect any large allocations before or after this main loop either.
I also do not have any front panel objects (except for the file path control, but this could be made a constant if need be).
08-02-2013 10:16 AM
In a textual programming language, I believe I could re-use almost all of the allocated memory without duplication.
In other programming languages, the biggest part of memory management is a manual affair (for lovers of the heap-and-stack model). There is duplication there too; that is the main reason pointers exist (along with some reference mechanisms).
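A minimal C sketch of that point: passing a pointer hands the callee the original buffer, so nothing is duplicated.

    #include <stddef.h>

    /* Passing this struct by value would copy all 4096 samples;
       passing a pointer does not. */
    typedef struct {
        double t0, dt;
        double y[4096];          /* the big part of the "waveform" */
    } Waveform;

    /* No duplication: the callee works on the caller's bytes. */
    void normalize(Waveform *w)
    {
        double max = 1e-30;
        for (size_t i = 0; i < 4096; i++)
            if (w->y[i] > max) max = w->y[i];
        for (size_t i = 0; i < 4096; i++)
            w->y[i] /= max;      /* modifies the original in place */
    }

    int main(void)
    {
        static Waveform w = { 0.0, 1e-3, { 1.0, 2.0, 4.0 } };
        normalize(&w);           /* no copy of the samples is made */
        return 0;
    }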
Is there a way to analyze the code to check where or if new memory is being allocated?
Answer: the LabVIEW Desktop Execution Trace Toolkit.
And is it possible to enforce a waveform data type (I believe it's just a cluster) of size X to be overwritten next iteration by the new waveform data type also of size X?
In other words, we call that an "in place operation". If you know the final size of an array (the waveform's Y array), preallocate it and use the Replace Array Subset function to feed it the real data, working with a shift register (see the C sketch after the tips below for the same pattern in textual form).
Some tips to reduce overall memory use:
Use shift registers: especially if you are working with big datatypes. You can also wire an empty array into the shift register once you have finished with the data; that will free up some memory.
Avoid resizing arrays: especially inside of a loop. LabVIEW will have to allocate more and more memory at every iteration...
Use small datatypes: U8, U16...
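Here is that C sketch (illustrative sizes, minimal error handling; not LabVIEW's actual mechanics). The fixed version is the textual analogue of a preallocated array held in a shift register and fed by Replace Array Subset; the growing version is what resizing inside a loop costs.

    #include <stdlib.h>
    #include <string.h>

    #define CHUNK 1024

    /* Anti-pattern: resize the array every iteration. Each realloc may
       move the whole block, and the footprint climbs with the count. */
    static void growing(int iterations, const double *chunk)
    {
        double *data = NULL;
        size_t n = 0;
        for (int i = 0; i < iterations; i++) {
            double *tmp = realloc(data, (n + CHUNK) * sizeof *data);
            if (tmp == NULL) { free(data); return; }
            data = tmp;
            memcpy(data + n, chunk, CHUNK * sizeof *data);
            n += CHUNK;
        }
        free(data);
    }

    /* Replace-subset pattern: allocate once, overwrite in place. */
    static void fixed(int iterations, const double *chunk)
    {
        double *data = malloc(CHUNK * sizeof *data);   /* preallocate once */
        if (data == NULL) return;
        for (int i = 0; i < iterations; i++) {
            memcpy(data, chunk, CHUNK * sizeof *data); /* "Replace Array Subset" */
            /* ... process data, save results to disk ... */
        }
        free(data);
    }

    int main(void)
    {
        static double chunk[CHUNK];   /* dummy per-iteration data */
        growing(1000, chunk);         /* memory footprint grows */
        fixed(1000, chunk);           /* memory footprint stays flat */
        return 0;
    }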
I'm going to work right now. Good Luck 😄
08-02-2013 10:17 AM
You seem to be confusing buffer allocations with memory use that grows over time. These two are independent. While a buffer needs to be allocated for certain operations, that buffer can be re-used for the duration of the run unless e.g. the size changes.
You should focus on smart coding where you operate on fixed-size arrays as much as possible, and you're not likely to run into any problems.
08-02-2013 10:19 AM
@altenbach wrote:
You seem to be confusing buffer allocations with memory use that grows over time. These two are independent. While a buffer needs to be allocated for certain operations, that buffer can be re-used for the duration of the run unless e.g. the size changes.
You should focus on smart coding where you operate on fixed-size arrays as much as possible, and you're not likely to run into any problems.
That is exactly my issue.
Is there a way, other than following guidelines, to ensure an operation will re-use a buffer instead of allocating more memory over time?
It seems like it might be a run-time decision that is out of my hands aside from just coding in a more predictable way.
08-05-2013 01:42 PM - last edited on 04-27-2025 07:27 PM by Content Cleaner
Hello MrHappyAsthma,
This is something LabVIEW is going to handle for you behind the scenes. After a wire ends and the data is no longer being used, LabVIEW will automatically free up this memory.
Also, here is a link to VI Memory Usage if you are interested: https://www.ni.com/docs/en-US/bundle/labview/page/vi-memory-usage.html
08-06-2013 08:03 AM
One thing to be careful of (and it isn't so much a buffer allocation as a memory allocation) is building arrays: "a few Build Arrays, array transposes, and comparison functions. Lastly I convert this Boolean array into a string of 0's and 1's and then take substrings from this string (around 10 per iteration). These 10 substrings are converted to decimal numbers, which are built into an array."

Array manipulation is definitely an area where we do have control over memory usage, and where improper usage can cause a "memory leak" in which memory use grows with each iteration. As altenbach suggests, using fixed-size, preallocated arrays will help prevent this, as well as improve overall execution performance. Without seeing your code it is hard to tell what is going on with the Build Arrays, but if you can preallocate the arrays and then substitute the new data for whatever initial values they hold, rather than building them up each time, it will help ensure the compiler isn't reallocating memory for the array on every execution.
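To put the whole pattern together, here is one last C sketch (hypothetical file name and sizes): each iteration fills the same fixed-size results array and flushes it to disk, so nothing accumulates in memory over a 24-hour run.

    #include <stdio.h>

    #define NVALUES 10      /* the ~10 decimal values per iteration */

    int main(void)
    {
        FILE *fp = fopen("results.txt", "a");  /* hypothetical output file */
        if (fp == NULL) return 1;

        long values[NVALUES] = {0};   /* fixed-size, reused every iteration */

        for (long iter = 0; ; iter++) {
            /* ... acquire, process, fill values[0..NVALUES-1] ... */
            for (int i = 0; i < NVALUES; i++)
                fprintf(fp, "%ld%c", values[i],
                        i == NVALUES - 1 ? '\n' : '\t');
            fflush(fp);     /* results land on disk; nothing accumulates */
        }
        /* not reached in this sketch */
        fclose(fp);
        return 0;
    }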