

“Write to Binary File” Overhead

I need to know whether there is enough per-call overhead in “Write to Binary File” to justify accumulating data into a larger array so that more data is written in fewer calls. An A-to-D converter produces 51,200 array elements of 32 bits each every 128 ms, and they need to be written to a binary file. The acquisition runs for several hours, so the writes must be able to keep up.
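
A quick back-of-the-envelope check of the data rate, sketched in Python purely for convenience (the numbers are just the ones quoted above):

    # Rough throughput arithmetic based on the figures above (not LabVIEW code).
    samples_per_block = 51200        # elements per A-to-D block
    bytes_per_sample = 4             # 32-bit samples
    block_period_s = 0.128           # one block every 128 ms

    bytes_per_block = samples_per_block * bytes_per_sample     # 204,800 bytes
    rate_mb_per_s = bytes_per_block / block_period_s / 1e6     # ~1.6 MB/s
    gb_per_hour = rate_mb_per_s * 3600 / 1000                  # ~5.8 GB/hour

    print(f"{bytes_per_block} bytes per block, {rate_mb_per_s:.1f} MB/s, "
          f"{gb_per_hour:.2f} GB/hour")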

 

After each write of the A-to-D data there is another write of a few hundred bytes of processed data. So that would be two calls to “Write to Binary File” every 128 ms.

 

I am not sure I would come out ahead, because batching the data for fewer writes has its own time cost: the data still has to be copied into the larger buffer.
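
To make the trade-off concrete, here is a minimal sketch in Python rather than LabVIEW; the flush threshold, file name, and block size are illustrative, not from my application. The idea is simply to accept one extra memory copy per block in exchange for fewer, larger file writes:

    import os

    FLUSH_THRESHOLD = 4 * 204_800    # hypothetical: write once ~4 blocks have accumulated

    staging = bytearray()

    def queue_block(block: bytes, f) -> None:
        """Copy the new block into the staging buffer (this copy is the extra cost),
        and only hit the file once enough data has accumulated."""
        staging.extend(block)                # the copy I am worried about
        if len(staging) >= FLUSH_THRESHOLD:
            f.write(staging)                 # one large write instead of several small ones
            staging.clear()

    # usage sketch
    with open("acq.bin", "wb") as f:
        for _ in range(32):
            queue_block(os.urandom(204_800), f)   # stand-in for one A-to-D block
        if staging:
            f.write(staging)                 # flush whatever is left at the end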

 

I am already using a producer-consumer architecture and the “In Place Element Structure.”
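
For reference, the same producer-consumer split expressed as a Python sketch (queue size, file name, and block size are made up; in LabVIEW this corresponds to the queue between the acquisition loop and the file-writing loop):

    import os
    import queue
    import threading

    blocks = queue.Queue(maxsize=64)     # bounded queue between producer and consumer

    def producer(n_blocks: int) -> None:
        """Stands in for the acquisition loop: one 204,800-byte block per iteration."""
        for _ in range(n_blocks):
            blocks.put(os.urandom(204_800))
        blocks.put(None)                 # sentinel: acquisition finished

    def consumer(path: str) -> None:
        """Stands in for the file-writing loop: drains the queue and writes to disk."""
        with open(path, "wb") as f:
            while (block := blocks.get()) is not None:
                f.write(block)

    t = threading.Thread(target=consumer, args=("acq.bin",))
    t.start()
    producer(16)
    t.join()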

 

The computer is an LX800-based SBC running at 500 MHz. The operating system is Windows XP Embedded.

Message 1 of 5

Your data rate is fairly low, 1.6 MBytes/sec, so you should have no problem keeping up, provided you optimize your write sizes. Writing chunks of 65,000 bytes should give you the best performance on Windows (tested with the LabVIEW binary file write primitive on Windows XP and 2000 using FAT32 and NTFS). Be aware that the number you get will depend most strongly on your disk drive and on where on the disk you are writing. For reference, in 2001 I achieved 12 MBytes/sec continuous with then-current hardware; you should be able to achieve four or five times that rate, or more, with modern hardware.
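
Since the optimal chunk size is hardware-dependent, it is worth re-measuring on your own drive. Here is a sketch of that kind of test in Python (chunk sizes, total size, and file name are arbitrary; unbuffered opens are used so each write actually reaches the OS at the stated size):

    import os
    import time

    def write_rate(chunk_size: int, total_bytes: int = 32 * 1024 * 1024,
                   path: str = "bench.bin") -> float:
        """Write roughly total_bytes in chunk_size pieces and return MB/s."""
        chunk = os.urandom(chunk_size)
        n_chunks = total_bytes // chunk_size
        start = time.perf_counter()
        with open(path, "wb", buffering=0) as f:    # unbuffered, so write sizes are real
            for _ in range(n_chunks):
                f.write(chunk)
            os.fsync(f.fileno())                    # make sure the data reaches the disk
        elapsed = time.perf_counter() - start
        os.remove(path)
        return n_chunks * chunk_size / elapsed / 1e6

    for size in (512, 4_096, 65_000, 65_536, 1_048_576):
        print(f"{size:>9} bytes/write: {write_rate(size):6.1f} MB/s")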

 

Writing very small chunks can reduce your write rate by an order of magnitude or more.

 

Using TDMS would automatically give you buffering.
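
If it helps to see the shape of that outside LabVIEW, the closest Python analogue would be something like the third-party npTDMS package; this is only a sketch of the idea, and the group/channel names here are made up:

    import numpy as np
    from nptdms import TdmsWriter, ChannelObject   # third-party: pip install npTDMS

    data = np.zeros(51200, dtype=np.int32)         # stand-in for one A-to-D block

    with TdmsWriter("acq.tdms") as writer:
        # Each write_segment call appends a segment; TDMS takes care of the file layout.
        writer.write_segment([ChannelObject("acquisition", "raw", data)])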

 

Do you need to produce several files?  If you are using the FAT32 disk format, you will be limited to 4 GBytes per file, and at your rate you will hit that limit in under an hour.  To avoid missing data while you open a new file, you may want to open all the files you need up front.  However, given your relatively low data rate, if you have the internal memory you could simply buffer the data until the new file is open; your write process should be able to catch up pretty easily.
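
A sketch of the "open the next file ahead of time" idea, again in Python rather than LabVIEW (the file naming scheme and the rollover size are illustrative):

    class RollingFile:
        """Minimal sketch: switch to a pre-opened file once the current one is full.
        The switch itself is just a handle swap; the replacement "next" file is
        opened right after, and the upstream queue absorbs that brief delay."""
        LIMIT = 100 * 1024 * 1024          # illustrative rollover size

        def __init__(self, base: str):
            self.base = base
            self.index = 0
            self.current = open(self._name(0), "wb")
            self.next_file = open(self._name(1), "wb")   # opened ahead of time
            self.written = 0

        def _name(self, i: int) -> str:
            return f"{self.base}_{i:03d}.bin"

        def write(self, block: bytes) -> None:
            self.current.write(block)
            self.written += len(block)
            if self.written >= self.LIMIT:
                self.current.close()
                self.current, self.written = self.next_file, 0
                self.index += 1
                self.next_file = open(self._name(self.index + 1), "wb")

        def close(self) -> None:
            self.current.close()
            self.next_file.close()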

 

Good luck.  Let us know if you need more help.

 

 

Message 2 of 5

To keep the file size manageable, the file is closed and another one opened when it exceeds 100 MB.

 

Was that actually 65,000 bytes, or 64 × 1024 = 65,536, making it a power of 2?

 

The files are being written to a 65GB USB flash drive.

 

Message 3 of 5

It was 65,000 bytes, not 65,536.  That is an empirically determined number and I do not know why, but it was consistent across all the Windows platforms I tried (no guarantees on other platforms).  The drop-off is pretty gradual as you go higher and much steeper if you go lower.

Message 4 of 5

Answering the "why" part of that question would require knowing more about the hardware. In the "old days," writing a complete sector (which was 512 bytes) was optimal. By your numbers it looks like the file structure is set up for what used to be 128 sectors (I think I have read the term "clump" used for modern file allocation units), and then there is the question of the driver caches and hardware buffer sizes.
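
For anyone who wants to check the sector and allocation-unit sizes on a given Windows drive, a small Python/ctypes sketch using the Win32 GetDiskFreeSpaceW call (Windows only; the drive letter is just an example):

    import ctypes

    sectors_per_cluster = ctypes.c_ulong(0)
    bytes_per_sector = ctypes.c_ulong(0)
    free_clusters = ctypes.c_ulong(0)
    total_clusters = ctypes.c_ulong(0)

    # Query the geometry of the C: volume.
    ctypes.windll.kernel32.GetDiskFreeSpaceW(
        ctypes.c_wchar_p("C:\\"),
        ctypes.byref(sectors_per_cluster),
        ctypes.byref(bytes_per_sector),
        ctypes.byref(free_clusters),
        ctypes.byref(total_clusters),
    )

    cluster_bytes = sectors_per_cluster.value * bytes_per_sector.value
    print(f"{bytes_per_sector.value} bytes/sector, {cluster_bytes} bytes/cluster")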

 

Ben

Retired Senior Automation Systems Architect with Data Science Automation | LabVIEW Champion | Knight of NI and Prepper
Message 5 of 5