Writing to a large file takes an increasingly long time.

We are acquiring a large amount of data and streaming it to disk.  We have noticed that once the file reaches a certain size, each write operation takes increasingly longer to complete.  During these slow writes our DAQ backlog grows large, and although we can normally process any backlog quickly enough, when a write takes long enough we overwrite our buffer and the DAQ fails.  We have looked at numerous examples of high-speed DAQ and feel that we are following them as given.  This behavior happens on a variety of computers and under different programming strategies (data as 1D waveforms, raw data, etc.).  On one system (hardware and software) we can get to almost 1.5 GB flawlessly before our write speed drops off severely enough to affect the DAQ, while on another, much more capable system we can reach 20 GB before the write speed starts to decline.

We've implemented a work-around: we limit the file size, write to a new file when the limit is reached (multiple 10 GB files, for example), and then rework the data files during postprocessing.  We would like to know why this is happening.  I do not believe this is a G issue, since the information I have is that you can open a file and write to it with "position current" as many bytes as you like, then close it when done, and I have read that you can do this "until your disk is full".  I have searched the NI KnowledgeBase and the Microsoft Knowledge Base without finding any relevant information on this behavior.
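For anyone reading along in a text language, a minimal sketch of that rollover work-around might look like the following (the real code is LabVIEW/G; the 10 GB limit, file names, and write_block helper here are just illustrative placeholders):

```python
MAX_FILE_BYTES = 10 * 1024**3            # roll over to a new file at ~10 GB
file_index = 0
current_file = open(f"daq_run_{file_index:03d}.bin", "wb")

def write_block(block: bytes) -> None:
    """Append one acquired block, starting a new file when the size limit is hit."""
    global current_file, file_index
    if current_file.tell() + len(block) > MAX_FILE_BYTES:
        current_file.close()
        file_index += 1
        current_file = open(f"daq_run_{file_index:03d}.bin", "wb")
    current_file.write(block)
```

The individual files can then be stitched back together or indexed during postprocessing, as described above.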
 
Here is a little detail about our setup.  We use PXI chassis with 4472 cards acquiring data at 102.4 kS/s.  One system (call it Junior) has a controller in slot 1 (128 MB RAM, 1.3 GHz CPU) and two 4472 cards; another at the high end (call it Senior) has four 4472 cards and an MXI-4 connection to a computer with 2 GB RAM, dual AMD Opterons at 2.0 GHz, and a 400 GB RAID.  All systems run Windows XP Pro SP2, LabVIEW 7.1.1, and NI-DAQ 7.4.  The programming methodology follows the many high-speed data logger examples found in the KnowledgeBase, and it works flawlessly up until the file reaches a critical size that differs between systems of differing capabilities (Junior and Senior); the rate of performance degradation differs as well.  Obviously we are using a high sample rate on a lot of channels for a long time, but we do not see an obvious increase in memory usage until we pass our "critical" file size, so I am pretty confident that our program is OK and that LabVIEW is also behaving itself.  I am suspicious of Windows XP, but I have no good information that reliably points to it as the culprit.
 
If you can shed some light on this issue, I would appreciate it.  I know it seems odd, even to me, that being unable to write a 50-60 GB file should be a concern; it wasn't that long ago that I thought 500 MB files were huge, but the things our engineers want to be able to do these days would stun me if I took the time to think about them instead of solving them.  Thanks for the efforts!
Message 1 of 5
The OS is probably reallocating space for the file from time to time. Say it had space for 10 GB at location 1. When the file size exceeds 10 GB, the OS goes looking for a larger location 2 and rewrites the file there. Or it may begin fragmenting the file, which adds extra overhead for keeping track of all the fragments. If you can allocate space bigger than the largest file you expect to produce at file-creation time, you may avoid the slowdown.
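For what it's worth, in a text language a preallocation step might look roughly like the sketch below (Python here; the file name and the 60 GB figure are arbitrary placeholders, and whether the reserved space ends up contiguous still depends on the file system). If I remember right, LabVIEW's Advanced File Functions palette has a Set File Size function that can serve the same purpose.

```python
PREALLOC_BYTES = 60 * 1024**3            # expected maximum file size (example)

# Create the file and extend it to its full size up front, so the file
# system reserves the space before any data is streamed into it.
with open("daq_run.bin", "wb") as f:
    f.truncate(PREALLOC_BYTES)

# Later, reopen without truncating and stream data into the reserved space.
with open("daq_run.bin", "r+b") as f:
    f.seek(0)
    # f.write(next_block)                # write each acquired block in turn
```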

I have not used files this big and do not use Windows, so these comments are generic based on things I have heard from others and have observed in other systems.

Lynn
Message 2 of 5
aaahhhhh...you must mean that I need to "preallocate" a large enough (contiguous?) space on my disk prior to writing to it?  I'll look into how to do this, thanks!
Message 3 of 5
If you can manage to get a contiguous space large enough, allocate it that way... although that's very nearly impossible unless you're using a clean drive.


Message 4 of 5

Sorry to resurrect this old thread, but it's important that people understand that as a conventional hard disk fills up, the write performance degrades because you are accessing the 'inner' portion of the drive, which can store less data per revolution.  You can expect to see up to a 50% degradation in throughput as you approach the capacity of the drive.
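If you want to see this effect on your own hardware, a rough throughput check might look like the sketch below (Python; the chunk size, total size, and file name are arbitrary). Note that on a mostly empty drive the numbers will stay roughly flat, since the slowdown only shows up as the drive itself approaches capacity.

```python
import os
import time

CHUNK = b"\0" * (64 * 1024**2)           # write 64 MB at a time
TOTAL_BYTES = 20 * 1024**3               # stop after ~20 GB

with open("throughput_test.bin", "wb") as f:
    written = 0
    while written < TOTAL_BYTES:
        start = time.perf_counter()
        f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())             # force the data out to the disk
        elapsed = time.perf_counter() - start
        written += len(CHUNK)
        print(f"{written / 1024**3:6.1f} GB written, "
              f"{len(CHUNK) / 1024**2 / elapsed:7.1f} MB/s")
```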

 

A solid state drive does not have this limitation, and it will generally result in better all-around performance.

Message 5 of 5