12-22-2005 07:35 AM
At the data rates you mention, the 5124 interface is fast enough to stream to memory or disk. I have successfully streamed to disk at speeds in excess of 20 MBytes/sec (10 MHz, single channel); streaming to memory is faster still. Your actual speed will depend on your computer hardware. I have attached an example of how to stream to disk. It requires the NI-HWS API, available on the driver CD that came with your 5124. You can modify the lower loop to store data in memory instead of on disk if you wish, but given the size of your data, I am not sure I would recommend it. The two-loop structure is used to get the full benefit of LabVIEW's multithreaded environment.
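The two-loop structure above is the classic producer/consumer pattern: one loop fetches from the digitizer, the other writes to disk, and a queue between them keeps a slow write from stalling acquisition. A minimal sketch of the idea in Python, purely for illustration — the record size, queue depth, and function names are all made up, not NI APIs:

```python
# Producer/consumer sketch of the two-loop streaming pattern.
# In the LabVIEW example the producer loop fetches from the 5124 and
# the consumer loop writes via NI-HWS; here a bounded queue decouples
# the two so disk latency never blocks acquisition.
import queue
import threading

def acquire(q, n_records):
    """Producer: stand-in for the digitizer fetch loop."""
    for i in range(n_records):
        record = bytes([i % 256]) * 1024   # fake 1 KB record
        q.put(record)                      # blocks if consumer falls behind
    q.put(None)                            # sentinel: acquisition finished

def stream_to_disk(q, path):
    """Consumer: stand-in for the NI-HWS write loop."""
    with open(path, "wb") as f:
        while True:
            record = q.get()
            if record is None:
                break
            f.write(record)

q = queue.Queue(maxsize=64)                # bounded buffer between the loops
t = threading.Thread(target=acquire, args=(q, 100))
t.start()
stream_to_disk(q, "stream.bin")
t.join()
```

The bounded queue is the important part: if the disk can't keep up, the producer blocks instead of growing memory without limit.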
One other point. If you haven't already, you will want to read Managing Large Data Sets in LabVIEW. Your data set, for a single channel, is going to be close to half a gigabyte, so even one extra copy will probably cause an out-of-memory problem. This is why I recommend streaming to disk: you can store as much as you want that way, and the data remains easily available for later use. Note that NI-HWS supports file sizes greater than 2 GBytes on all versions of LabVIEW it can be used with.
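As a back-of-envelope check of that half-gigabyte figure (assuming 10 MS/s, 16-bit samples, and a capture on the order of 25 seconds — the actual record length isn't stated in the thread):

```python
# Rough size estimate for a single-channel capture.
# Assumptions (not from the original post): 10 MS/s, 2 bytes/sample,
# 25-second record.
sample_rate = 10_000_000       # samples per second
bytes_per_sample = 2           # 16-bit digitizer samples
duration_s = 25

total_bytes = sample_rate * bytes_per_sample * duration_s
print(total_bytes)             # 500000000
print(total_bytes / 2**30)     # ~0.47 GiB -- close to half a gigabyte
```

With numbers that size, a single extra in-memory copy doubles the footprint, which is why streaming to disk is the safer route.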
Good luck. Let us know if you have any further problems.
02-01-2008 08:44 AM
Yeah, that write speed seems more like it. It might just be that Vista's resource monitor isn't capturing the peak write spike, but it still takes too long to write the data, in my opinion. The data RAID array runs through the motherboard, and there's a three-drive RAID 0 array for swap on a PCIe RAID card (for number-crunching the vast GBs of data files after it's all taken). The swap array shouldn't be involved in writing to disk, though, so I think I'm good there.
As for the chunk size I'm writing to the binary file, I'm just using LabVIEW's built-in Write to Binary File (left over from when I wrote the VI four years ago), and there's no mention of an alterable chunk size. It definitely sounds like I'll need to check out the NI-HWS VIs. The one caveat is that all these files will later be opened in Matlab, and the current algorithm works with the way the data is laid out by the Write to Binary File VI. Would a file saved by NI-HWS be any different (i.e., contain header information about the waveform), requiring a change to the Matlab code?
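For comparison, the existing files (plain Write to Binary File output) are just a flat run of samples, which is why a simple fread in Matlab works today; a structured format like NI-HWS stores metadata alongside the samples, so a raw read would no longer line up and the reader would need updating (check the NI-HWS documentation for the actual layout). A sketch of the equivalent flat read in Python, assuming big-endian int16 samples (LabVIEW's default byte order) with no header — both assumptions about this particular VI:

```python
# Sketch: reading the existing flat binary files outside LabVIEW.
# Assumes big-endian int16 samples and no header/array-size prefix;
# adjust the format character if your VI wrote little-endian data.
import struct

def read_samples(path):
    with open(path, "rb") as f:
        raw = f.read()
    n = len(raw) // 2                      # 2 bytes per int16 sample
    return struct.unpack(f">{n}h", raw)    # ">" = big-endian, "h" = int16

# Round-trip a tiny demo file to show the layout.
with open("demo.bin", "wb") as f:
    f.write(struct.pack(">4h", -1, 0, 1, 32767))
print(read_samples("demo.bin"))            # (-1, 0, 1, 32767)
```

The point is just that the current Matlab code depends on this raw layout, so switching the writer to NI-HWS means switching the reader too.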
Creating the file while acquiring is an outstanding idea (one I should have thought of before, but I didn't have the system working to the point where I could notice that the file-save time was too long).
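One way to take that idea further is to preallocate the file to its final size before acquisition starts, so the file system isn't extending it in the middle of the stream. A minimal, portable sketch (OS-specific calls can preallocate more thoroughly; the path and size here are made up):

```python
# Sketch: preallocate the output file to its final size up front,
# so writes during acquisition only fill already-allocated blocks.
import os

def preallocate(path, size_bytes):
    with open(path, "wb") as f:
        f.seek(size_bytes - 1)   # jump to the last byte position
        f.write(b"\0")           # writing it extends the file to full size

preallocate("capture.bin", 1_000_000)
print(os.path.getsize("capture.bin"))   # 1000000
```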
As for the large data set, the output from the fetch goes into a hidden control on the front panel (for viewing/zooming, etc.) and then directly into the save routine. Any requests for the data (viewing in a waveform chart, user-initiated saving, histograms, etc.) read directly from the local variable, so there shouldn't be too many wire branches making extra copies. In auto acquisition it skips the front-panel control altogether and saves directly to a file.
I'll try out those suggestions tonight and report back on any speed increase.