
Read from Binary File: memory leakage?

 

Hello,

When I run the attached VI, which reads a 66 MB binary file (12 MB when zipped), I cannot understand the different stages of memory usage that I see as a result (see attached picture). The attached VI is called several consecutive times, each time to read and analyze a different binary file. From one call to the next the memory never gets fully released, and although it does not grow as much as on the first call, it still increases significantly, up to about 1.2 GB, at which point all operations on the PC become completely clogged.

My question is thus two-fold:

Is there a better way to program what I did?

Is there a way to release the memory completely from one call to the next?

Thank you very much for your ideas

Christophe

Message 1 of 9

We don't have any of the subVIs or a sample file. Do you have a short example file?

Why do you use request deallocation? Why do you need the FOR loop?

Message 2 of 9

Hello,

I am attaching the full library and also one example of file that I read.

The reason for the FOR loop is that, because of this ‘memory leakage’ that I have seen since the beginning, I was wondering whether I could reduce it by splitting the reading of the entire file into reading successive chunks of it…

Also, for Request Deallocation, I have tried with and without it and it did not change the situation...

Thank you for your insights

Christophe

Message 3 of 9

Thanks for including the data file.  It enabled me to write a VI consisting of an Open File function (which, strictly speaking, I didn't need), a single Read from Binary File function, and a Close File function.  The result was an Array of Data consisting of records that appear to be 4 I32 integers (the first two being identical "index counts", 0, 1, 2, ..., and the next two being large negative numbers, the first one increasing steadily and the second increasing in broad "steps") and a string "          No_TSN".  There are 208757 records of this type, and to read them all into an array took less than a second.

Here's the code: Repertoire detaille Read.png
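For readers without LabVIEW handy, the single-pass read above can be sketched in text form. This is a Python sketch, not the actual VI, and it assumes the record layout described in the thread: four big-endian 32-bit integers followed by a fixed 16-character string (LabVIEW writes big-endian by default).

```python
import struct

# Assumed record layout, per the thread: four 32-bit integers followed by a
# fixed 16-character string. LabVIEW writes big-endian by default ('>').
RECORD = struct.Struct(">4i16s")  # 4*4 + 16 = 32 bytes per record

def read_records(path):
    """Read an entire file of fixed-size records into a list of tuples."""
    records = []
    with open(path, "rb") as f:
        while chunk := f.read(RECORD.size):
            if len(chunk) < RECORD.size:
                break  # ignore a trailing partial record
            *ints, name = RECORD.unpack(chunk)
            records.append((*ints, name.decode("ascii", errors="replace")))
    return records
```

As in the VI, the whole 66 MB file fits in one array in memory; at 32 bytes per record, roughly 208,000 records read in well under a second.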

 

Bob Schor

 

P.S. -- the only "hard part" was guessing the format of the file.  Without the file, it would have been hopeless, and even with the file, it could have been much more difficult.  This is one drawback of the Binary File format -- it is blindingly fast and has a small file size, but without "extra knowledge" (or good sleuth work), it is troublesome to reconstruct how it was written (and how it needs to be read).

 

 

Message 4 of 9

Hi Bob,

In my last message, together with the data file, I had also posted the LabVIEW code that shows the structure of the binary file (4 × U32 and one string of 16 characters). Your code is exactly what I am doing, and this is the source of the problem (not the time to read the file: the reading AND the subsequent operations cause what I call this memory leakage, as explained in my original message).

Thank you
Christophe

Message 5 of 9

Christophe,

     You are correct -- I didn't carefully read your original Post, and missed the "tricky part" of how to process really enormous data files.  If it is possible to process the data in "chunks", say reading 100,000 records at a time (and either processing all of them before reading another "chunk", or processing, say, 90% of them by stopping at a "convenient point", whatever that means), then something like the following "sequential" code should work.

The main routine works by reading 100,000 records and passing the "New Data" to the Process Data VI.  If it reads past EOF, it will throw an EOF Error, which we use to stop the loop after we process any "good" data.  The Process VI takes two Data inputs -- the New Data we just read and any "Old Data" we didn't process the last time we called it.  It also passes through the File Reference (just to keep the wires neat and straight).

Inside Process Data, it checks for and clears the EOF Error (Error 4).  The While Loop picks off the top element of the Array and processes it, stopping itself "when convenient", and returning the unprocessed data as "Unused Data Elements".  Of course, if you can process all the data at once without worrying about where you start and stop, you can simply use a For Loop and an indexing tunnel, and forget about the Old Data and Unused Data inputs and outputs.

The top Snippet is the Main (with the Cluster defined as a TypeDef); the bottom is the sub-VI Process Data.

 

Process Chunks of Data.pngUTIL Process Data.png
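The chunking pattern in the snippets above (read a fixed number of records, process most of them, carry the remainder into the next pass) can be sketched in Python. The record layout, the chunk size, and the "stop after ~90%" rule are illustrative assumptions, not part of the original VIs.

```python
import struct

# Assumed record layout: four big-endian I32s plus a fixed 16-byte string.
RECORD = struct.Struct(">4i16s")
CHUNK_RECORDS = 100_000  # records read per iteration, as in the example above

def read_chunk(f, n=CHUNK_RECORDS):
    """Read up to n records from an open binary file.
    An empty list plays the role of the EOF error that stops the loop."""
    data = f.read(RECORD.size * n)
    usable = len(data) - len(data) % RECORD.size  # drop a trailing partial record
    return [RECORD.unpack_from(data, off) for off in range(0, usable, RECORD.size)]

def process(new_data, old_data):
    """Stand-in for the 'Process Data' sub-VI: handle records up to a
    'convenient point' and return the unused elements for the next call."""
    todo = old_data + new_data
    cut = (len(todo) * 9) // 10  # e.g. stop after ~90% of the records
    for rec in todo[:cut]:
        pass  # real per-record analysis would go here
    return todo[cut:]

def run(path):
    """Drive the loop; returns the total number of records read.
    Peak memory stays near one chunk, since each chunk list is dropped
    before the next read."""
    total, leftover = 0, []
    with open(path, "rb") as f:
        while chunk := read_chunk(f):
            total += len(chunk)
            leftover = process(chunk, leftover)
    for rec in leftover:  # final flush of the carried-over records
        pass
    return total
```

The key point mirrors the LabVIEW design: only one chunk plus a small carry-over is alive at any moment, so memory use is bounded by the chunk size rather than the file size.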

Bob Schor

Message 6 of 9

Hello,

I understand that I can split my data, of course. But this does not explain the issues that I referred to in my original message. Also, I still do not understand how to free the memory used between the readings...

The intermediate memory usages that I display on the front panel still puzzle me...

Thanks

Christophe

Message 7 of 9

Requesting memory from the OS is expensive, so LV has its own memory manager and holds on to memory for reuse.

https://www.ni.com/docs/en-US/bundle/labview/page/vi-memory-usage.html

/Y

G# - Award winning reference based OOP for LV, for free! - Qestit VIPM GitHub

Qestit Systems
Certified-LabVIEW-Developer
Message 8 of 9

I also have no idea how LabVIEW manages memory efficiently, but the folks designing the code "behind the scenes" know what they are doing and know how to reuse memory, so you should not have to worry about it.  In the example I showed, if the Process Data VI processes most of the data, returning very little, then each time through the loop essentially the same memory will be used over and over again to hold the 100,000 data points being read.  The memory is automatically released when the Process Data loop exits, except for the (small number of) points you decide to process on the next go-around.

 

Bob Schor

Message 9 of 9