We are acquiring a large amount of data and streaming it to disk. We have noticed that once the file reaches a certain size, each write operation takes increasingly longer to complete. During these slow writes our DAQ backlog grows large, and although we can normally process any backlog quickly enough, when a write takes long enough we overwrite our buffer and the DAQ fails. We have looked at numerous examples of high-speed DAQ and feel that we are following them as given. This behavior happens on a variety of computers and under different programming strategies (data as 1D waveform, raw, etc.). On one system (hardware and software) we can get to almost 1.5 GB flawlessly before our write speed drops off severely and affects the DAQ, while on another (much more capable) system we can reach 20 GB before the write speed starts to decline.

We have implemented a workaround by limiting the file size and writing to a new file when the limit is reached (multiple 10 GB files, for example), then reassembling the data files during post-processing. We would like to know why this is happening. I do not believe this is a G issue: the information I have is that you can open a file and write to it with "position current" as many bytes as you like, then close it when done, and I have read that you can do this "until your disk is full". I have searched the NI knowledgebase without finding any relevant information on this behavior, and the MS knowledge base with the same results.
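For reference, here is a minimal text-language sketch of the rotation logic in our workaround. Our actual implementation is LabVIEW G, so this Python is only illustrative; the 10 GB limit and the file naming scheme are assumptions for the example, not our exact values.

```python
class RotatingWriter:
    """Append data blocks to a file, rolling over to a new numbered file
    whenever the current one reaches max_bytes (our size-limit workaround)."""

    def __init__(self, base_path, max_bytes=10 * 1024**3):
        self.base_path = base_path
        self.max_bytes = max_bytes
        self.index = 0
        self.bytes_written = 0
        self._file = open(self._name(), "wb")

    def _name(self):
        return f"{self.base_path}_{self.index:03d}.bin"

    def write(self, block: bytes):
        # Start a fresh file once the size limit would be exceeded.
        if self.bytes_written + len(block) > self.max_bytes:
            self._file.close()
            self.index += 1
            self.bytes_written = 0
            self._file = open(self._name(), "wb")
        self._file.write(block)  # always appending at "position current"
        self.bytes_written += len(block)

    def close(self):
        self._file.close()
```

During post-processing the numbered segments are simply concatenated back into one data set.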
Here is a little detail about our setup. PXI chassis with 4472 cards acquiring data at 102.4 kS/s. One system (call it Junior) has XP Pro, a controller in slot 1 (128 MB RAM, 1.3 GHz CPU), and two 4472 cards; another at the high end (call it Senior) has XP Pro, four 4472 cards, and an MXI-4 connection to a computer with 2 GB RAM, dual AMD Opterons @ 2.0 GHz, and a 400 GB RAID. All systems run XP Pro SP2, LabVIEW 7.1.1, and NI-DAQ 7.4. The programming methodology follows the many high-speed data logger examples found in the KBase, and it works flawlessly until the file reaches a critical size that differs between systems of differing capabilities (Junior and Senior); the rate of performance degradation differs as well. Obviously we are using a high sample rate on a lot of channels for a long time, but we do not see an obvious increase in memory usage until we pass our "critical" file size, so I am fairly confident that our program is OK and that LabVIEW is behaving itself. I am suspicious of WinXP, but I have no good information that reliably points to it as the culprit.
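For completeness, the structure we follow is the usual producer/consumer pattern from the high-speed logger examples. A rough text-language equivalent is sketched below; the queue depth, block size, and the fake_read_block stand-in are illustrative assumptions (the real acquisition is the NI-DAQ read from the 4472 channels in LabVIEW).

```python
import queue
import threading

data_queue = queue.Queue(maxsize=64)   # illustrative buffer depth

def acquisition_loop(read_block, blocks):
    """Producer: pull fixed-size blocks from the DAQ driver and queue them.
    If the queue backs up because disk writes stall, this is where the
    hardware buffer eventually overflows."""
    for _ in range(blocks):
        data_queue.put(read_block())   # blocks once the queue is full
    data_queue.put(None)               # sentinel: acquisition finished

def logging_loop(path):
    """Consumer: append each block to the open file at 'position current'."""
    with open(path, "wb") as f:
        while True:
            block = data_queue.get()
            if block is None:
                break
            f.write(block)

def fake_read_block():
    # Hypothetical stand-in for the DAQ driver read.
    return bytes(1024 * 1024)          # illustrative 1 MiB block

writer = threading.Thread(target=logging_loop, args=("stream.bin",))
writer.start()
acquisition_loop(fake_read_block, blocks=10)
writer.join()
```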
If you can shed some light on this issue, I would appreciate it. I know it seems odd, even to me, that being unable to write a 50-60 GB file should be a concern; it wasn't that long ago that I thought 500 MB files were huge, but the things our engineers want to be able to do these days would stun me if I took the time to think about them instead of solving them. Thanks for the efforts!