
Memory Leak or Not in TDMS?


I'm writing high-speed data on a cRIO-9035. NI literature and training both recommend TDMS for this type of data. My application logs 4 channels in 7-second bursts at a sample rate of 1 MHz. I use 32-bit floats to cut down on memory, which results in a file size of about 106 MB per file.
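(As a sanity check on those numbers, not something from the original post: 4 channels × 7 s × 1 MS/s × 4 bytes works out almost exactly to the stated file size.)

```python
# Back-of-the-envelope check of the stated file size.
channels = 4
sample_rate = 1_000_000      # samples per second (1 MHz)
burst_seconds = 7
bytes_per_sample = 4         # 32-bit float (SGL in LabVIEW)

total_bytes = channels * sample_rate * burst_seconds * bytes_per_sample
print(total_bytes)                      # 112000000 bytes
print(round(total_bytes / 2**20, 1))   # ~106.8 MiB, matching the ~106 MB per file
```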

 

I've noticed my memory continues to grow and isn't freed after I close my file. Is the following article saying that TDMS leaks memory and NI has known since 2009? The term "presents as a memory leak" is confusing. It either is or isn't a memory leak. Not sure why the word "present" is used here. Maybe to soften the blow?

 

Memory Growth with TDMS

https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019MfNSAU&l=en-US

 

CAR# 239631

Message 1 of 7

I think "presents as" means "it looks/behaves like", but the growth is intentional (caching for faster access).

At least that is what I understood from the documentation.


If Tetris has taught me anything, it's errors pile up and accomplishments disappear.
Message 2 of 7

@craige wrote:

... The term "presents as a memory leak" is confusing. It either is or isn't a memory leak. Not sure why the word "present" is used here. Maybe to soften the blow?

 

Memory Growth with TDMS

https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019MfNSAU&l=en-US

 

CAR# 239631


In the world of medicine, doctors use the term "presents" when speaking of the symptoms associated with an illness or condition. I read it in the same way in that article.

 

I have not heard of any memory leaks in TDMS lately but that does not mean there aren't any.

 

Under most circumstances, LV is hesitant to return memory once it is allocated because re-allocating memory is very expensive time-wise.

 

Ben 

Retired Senior Automation Systems Architect with Data Science Automation LabVIEW Champion Knight of NI and Prepper LinkedIn Profile YouTube Channel
Message 3 of 7

Around 2010, and I know in 2011, there were a couple of memory leaks that could be worked around with an Always Copy or with a patch, and that were fixed in newer versions of LabVIEW. But I'm unaware of any recent ones. If you can put together a minimal set of code that reproduces the issue, we'd be very interested in getting it resolved. Is there any chance you can post the relevant code?

Message 4 of 7

We have an application running on a cRIO that writes TDMS files to a network shared drive, and we see that the free memory decreases steadily down to about 5 MB. Looking at "top" (a Linux process monitoring tool), it appears that the additional memory is the operating system taking advantage of unused memory to cache the file, rather than memory growth of the LabVIEW application, so I haven't worried about it. The application has run continuously for at least a month without issues.
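That interpretation can be checked directly from `/proc/meminfo` on any Linux target, including NI Linux RT: `MemFree` shrinks as the kernel caches file pages, while `MemAvailable` accounts for reclaimable cache. A minimal parser sketch (the helper name is mine, not from the thread):

```python
def parse_meminfo(text):
    """Parse /proc/meminfo-style text into a {field: kilobytes} dict."""
    info = {}
    for line in text.splitlines():
        if ":" not in line:
            continue
        key, rest = line.split(":", 1)
        parts = rest.split()
        if parts and parts[0].isdigit():
            info[key] = int(parts[0])
    return info

# On the target: info = parse_meminfo(open("/proc/meminfo").read())
# A large Cached value alongside a healthy MemAvailable means the
# "missing" memory is just the page cache holding your TDMS file data.
```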

Message 5 of 7
Solution
Accepted by topic author craige

@NathanD Keep in mind you have to make up for the inaccurate memory reporting on NI Linux when looking at your memory. Maybe you have more than you think. (I see that you're using top. Never mind.) My system, on the other hand, eventually crashes after about 7 recording bursts.

 

Thanks for the perspectives and replies. I'm out of town working and away from the lab where the code exists, but I'll try two things upon my return next week.

1) Send a dumbed-down version of the code

2) Update the TDMS system with the patch that NI recommends

 

This patch mitigates but does not eliminate the memory growth issue, as mentioned in the whitepaper. (Notice again that NI is not calling it a leak or a bug. I have no idea what "memory growth" is after 20 years in the software development industry.)

 

If TDMS is going to leak... ahem, excuse me... "grow" in my memory space, then I need to use a different file-writing technique. Plain and simple.

Message 6 of 7

Well, there are really only two ways to handle data writing the way TDMS does it.

 

1) Keep a cache of the indices where data is currently stored for each stream; as the file gets bigger and bigger, that cache grows.

 

2) Don't cache anything and seek through the file again every time a new data set needs to be written. That seek time grows very quickly as the file gets larger and larger, and your fast TDMS streaming will eventually start to feel like a snail instead.

 

If what you need is really fast streaming of a fixed set of channels, without any need for accessing individual channels or for variable data rates per channel, the solution would be to stream the data as raw binary to disk: basically just a 2D array of data flattened to disk in one go. That will stay consistently fast no matter how big your file grows.

 

If you need the flexibility of a solution like TDMS there is no way to avoid having to do some bookkeeping as time goes by and that bookkeeping requires some memory.
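Option (1) above is easy to model. In this toy sketch (names are mine, purely illustrative of the bookkeeping, not TDMS internals), an in-memory index recording where each channel's data lives gains one entry per channel for every segment written, so its footprint necessarily tracks file size:

```python
from collections import defaultdict

index = defaultdict(list)   # channel name -> list of (file_offset, n_samples)
offset = 0

def append_segment(n_samples, channels, bytes_per_sample=4):
    """Record where each channel's chunk landed; mimics index bookkeeping."""
    global offset
    for ch in channels:
        index[ch].append((offset, n_samples))
        offset += n_samples * bytes_per_sample

for _ in range(1000):                    # 1000 segments appended...
    append_segment(1000, ["ch0", "ch1"])
print(len(index["ch0"]))                 # ...1000 index entries per channel
```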

Rolf Kalbermatter  My Blog
DEMO, Electronic and Mechanical Support department, room 36.LB00.390
Message 7 of 7