05-09-2017 10:17 AM - last edited 12-19-2024 09:29 AM by Content Cleaner
I'm writing high-speed data to a cRIO-9035. NI literature and training both recommend using TDMS for this type of data. My application logs 4 channels in 7-second bursts at a sample rate of 1 MHz. I use 32-bit floats to cut down on memory, which results in a file size of about 106 MB per file.
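(Sanity check on the math: 4 channels × 1,000,000 samples/s × 7 s × 4 bytes per single-precision float = 112,000,000 bytes ≈ 106.8 MiB, so the file size adds up.)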
I've noticed my memory continues to grow and isn't freed after I close the file. Is the following article saying that TDMS leaks memory and NI has known about it since 2009!? The term "presents as a memory leak" is confusing. It either is or isn't a memory leak. Not sure why the word "presents" is used here. Maybe to soften the blow?
Memory Growth with TDMS
https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019MfNSAU&l=en-US
05-09-2017 11:06 AM - edited 05-09-2017 11:07 AM
I think "presents as" means "it looks/behaves like", but the behavior is intentional (caching for faster access); at least that is what I understood from the documentation.
05-09-2017 12:09 PM - last edited 12-19-2024 09:29 AM by Content Cleaner
@craige wrote:
... The term "presents as a memory leak" is confusing. It either is or isn't a memory leak. Not sure why the word "presents" is used here. Maybe to soften the blow?
Memory Growth with TDMS
https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019MfNSAU&l=en-US
In the world of medicine, doctors use the term "presents" when speaking of the symptoms associated with an illness or condition. I read it in the same way in that article.
I have not heard of any memory leaks in TDMS lately but that does not mean there aren't any.
Under most circumstances, LV is hesitant to return memory once it is allocated because re-allocating memory is very expensive time-wise.
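As a loose analogy, the policy looks something like the C below: once you have paid for a big allocation, keep it around for reuse instead of freeing it after every burst. This is hypothetical illustration code, emphatically not how LabVIEW is implemented internally.

/* Loose analogy (hypothetical code, not LabVIEW internals): grow a
   scratch buffer on demand and reuse it across bursts instead of
   freeing and re-allocating every time. */
#include <stdlib.h>

static float *scratch = NULL;
static size_t scratch_len = 0;   /* samples currently allocated */

float *get_scratch(size_t needed)
{
    if (needed > scratch_len) {
        /* Pay the allocation cost only when the requirement grows. */
        float *p = realloc(scratch, needed * sizeof *scratch);
        if (!p) return NULL;
        scratch = p;
        scratch_len = needed;
    }
    /* Never freed between calls: a memory monitor sees this as
       memory that "doesn't come back", even though it is reused. */
    return scratch;
}

int main(void)
{
    float *a = get_scratch(1000000);  /* first burst allocates */
    float *b = get_scratch(500000);   /* later bursts reuse it  */
    return (a && b) ? 0 : 1;
}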
Ben
05-09-2017 12:32 PM - edited 05-09-2017 12:32 PM
Around 2010, and I know in 2011, there were a couple of memory leaks that could be worked around either with an Always Copy or with a patch, and that were fully fixed in newer versions of LabVIEW. But I'm unaware of any recent ones. If you can put together a minimal set of code that reproduces the issue, we'd be very interested in getting it resolved. Is there any chance you can post the relevant code?
Unofficial Forum Rules and Guidelines
Get going with G! - LabVIEW Wiki.
17 Part Blog on Automotive CAN bus. - Hooovahh - LabVIEW Overlord
05-09-2017 12:54 PM
We have an application running on a cRIO that writes TDMS files to a network shared drive, and we see the free memory decrease steadily down to about 5 MB. Looking at top (a Linux process-monitoring tool), it appears the additional memory is the operating system taking advantage of unused RAM to cache the file, rather than memory growth in the LabVIEW application, so I haven't worried about it. The application has run continuously for at least a month without issues.
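If anyone wants to check the same thing without top, here is a rough C sketch that parses /proc/meminfo on NI Linux Real-Time to separate the reclaimable page cache from memory actually used by applications. The field names are the standard Linux ones; treat the breakdown as approximate.

/* Rough sketch: parse /proc/meminfo to separate reclaimable page
   cache from memory actually used by applications. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/meminfo", "r");
    if (!f) { perror("fopen"); return 1; }

    char line[128];
    long total = 0, free_kb = 0, buffers = 0, cached = 0;
    while (fgets(line, sizeof line, f)) {
        sscanf(line, "MemTotal: %ld kB", &total);
        sscanf(line, "MemFree: %ld kB", &free_kb);
        sscanf(line, "Buffers: %ld kB", &buffers);
        sscanf(line, "Cached: %ld kB", &cached);
    }
    fclose(f);

    /* The page cache counts as "used" in naive readings, but the
       kernel hands it back the moment an application needs it. */
    printf("used by applications: %ld kB\n", total - free_kb - buffers - cached);
    printf("reclaimable page cache: %ld kB\n", buffers + cached);
    return 0;
}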
05-09-2017 01:01 PM - edited 05-09-2017 01:11 PM
@NathanD Keep in mind you have to account for the misleading memory reporting NI Linux does when looking at your memory. Maybe you have more free than you think. (I see that you're using top. Never mind.) My system, on the other hand, eventually crashes after about 7 recording bursts.
Thanks for the perspectives and replies. I'm out of town working and away from the lab where the code exists, but I'll try two things upon my return next week.
1) Send a dumbed-down version of the code
2) Update the TDMS system with the patch that NI recommends
This patch mitigates but does not eliminate the memory growth issue, as mentioned in the whitepaper. (Notice again that NI is not calling it a leak or a bug. After 20 years in the software development industry, I have no idea what "memory growth" is supposed to mean.)
If TDMS is going to leak...ahem, excuse me..."grow" in my memory space, then I need to use a different file-writing technique. Plain and simple.
07-01-2019 03:51 AM - edited 07-01-2019 03:52 AM
Well, there are really two approaches to writing data the way TDMS does it:
1) Keep a cache of the indices where data is currently stored for each stream; as the file gets bigger and bigger, that cache grows (see the sketch after this list).
2) Don't cache anything and seek through the file again every time a new data set needs to be written. That seek time grows quickly as the file gets larger and larger, and your fast TDMS streaming will eventually start to feel like a snail instead.
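To make option 1 concrete, here is a hypothetical sketch in plain C (an illustration of the idea, not NI's actual implementation) of the per-channel bookkeeping a TDMS-style writer keeps, and why it grows with the file:

/* Hypothetical illustration (not NI's actual code): each channel
   keeps a growing list of file offsets where its data segments
   live, so a new write can append without re-scanning the file.
   The cost is memory that scales with the number of segments. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    long  *offsets;    /* file offset of each segment written */
    size_t count;
    size_t capacity;
} ChannelIndex;

static void index_append(ChannelIndex *idx, long offset)
{
    if (idx->count == idx->capacity) {
        idx->capacity = idx->capacity ? idx->capacity * 2 : 16;
        long *p = realloc(idx->offsets, idx->capacity * sizeof *p);
        if (!p) { perror("realloc"); exit(1); }
        idx->offsets = p;
    }
    idx->offsets[idx->count++] = offset;  /* one entry per segment, forever */
}

int main(void)
{
    ChannelIndex ch = {0};
    for (long seg = 0; seg < 1000; seg++)      /* simulate 1000 segments */
        index_append(&ch, seg * 1048576L);
    printf("segments indexed: %zu, index memory: %zu bytes\n",
           ch.count, ch.capacity * sizeof *ch.offsets);
    free(ch.offsets);
    return 0;
}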
If what you need is really fast streaming of a fixed set of channels, without any need for random access to channels or variable data rates per channel, the solution would be to stream the data to disk as raw binary: basically just a 2D array of data flattened to disk in one go. That will be consistently fast no matter how big your file grows.
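For illustration, a minimal C sketch of that flat binary approach (in LabVIEW you would wire the 2D array straight into Write to Binary File; the channel count and burst length are just the numbers from this thread, and the interleaved layout is an assumption the reader of the file has to match):

/* Minimal sketch of flat binary streaming: one fwrite flattens the
   whole burst to disk with no per-channel index or metadata, so
   write time stays flat no matter how large the file already is. */
#include <stdio.h>
#include <stdlib.h>

#define NUM_CHANNELS  4
#define BURST_SAMPLES 7000000UL   /* 7 s at 1 MS/s per channel */

int write_burst(FILE *f, const float *interleaved, size_t samples_per_channel)
{
    size_t rows = fwrite(interleaved, sizeof(float) * NUM_CHANNELS,
                         samples_per_channel, f);
    return rows == samples_per_channel ? 0 : -1;
}

int main(void)
{
    float *burst = calloc(BURST_SAMPLES * NUM_CHANNELS, sizeof(float));
    if (!burst) return 1;

    FILE *f = fopen("burst.bin", "ab");   /* append each new burst */
    if (!f) { free(burst); return 1; }

    int rc = write_burst(f, burst, BURST_SAMPLES);
    fclose(f);
    free(burst);
    return rc;
}

The trade-off is that the file is no longer self-describing: channel count, data type, and sample rate all have to be known out of band.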
If you need the flexibility of a solution like TDMS, there is no way to avoid some bookkeeping as time goes by, and that bookkeeping requires memory.