High-Speed Digitizers


NI 5124, max points measured with 256MB memory

Thanks for fielding all my questions, DF.

That's wonderful that HWS automatically chunks the data. I will definitely change the fetch chunk size. As far as writing to disk in a separate loop, using a queue is the best method to do that, right?

As an aside (and as background leading up to a question about VI design), right now I'm using a modified Producer/Consumer template to which I've added two 'Generator' loops that handle either continuous monitoring & processing, or high-bandwidth operations (like high-speed acquisition or video) that shouldn't bog down the consumer loop. So right now I've got four loops: Producer, Consumer, Generator 1 (reading of a spindle speed & some processing), and Generator 2 (niScope acquisition).

Generator 1 essentially runs mindlessly, but communication with Generator 2 is via a queue w/ the # of times and amount of data to acquire. In manual mode, the data is saved in the Consumer loop (since the data is stored in a hidden front panel variable), but 'auto' mode has the saving routines inside the Generator 2 loop. After thinking about it a bit, and after your last post, I'm considering getting rid of all the saving routines in the Consumer and Generator 2 loops and putting them into a third "De-Generator" loop that is dedicated to streaming info in a queue (which would include the HWS waveform ID) to disk. My question is, since I'd be generating data faster than I can send it to disk, will LabVIEW have conniptions when I start reaching the 16G limit of my RAM? I imagine it would, and that some kind of communication about "done writing to disk" would be needed between the two loops. An occurrence or notifier, maybe?

Message 11 of 17
Hi Matt,

Sorry to jump in a little late in the game, but I thought I would share a few streaming resources we have that are designed to benchmark your system.  There are three main applications: 1) streaming to memory from your scope, 2) streaming to disk from your scope, and 3) streaming from memory to disk.  These VIs will give you a good idea of how fast you can stream to disk, and will also help determine whether the bottleneck is the PCI bus or the file-writing operations.  One thing to note is that the file-writing VIs are not the standard LabVIEW binary file VIs; these examples use more optimized Win32 file operations.
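The benchmark VIs themselves are LabVIEW-specific, but the idea behind a write-speed test (time a stream of fixed-size chunk writes and report MB/s) can be sketched in plain Python. Everything below (file name, sizes) is made up for illustration, and Python's unbuffered raw writes plus `fsync` only loosely approximate the Win32 no-caching mode:

```python
import os
import tempfile
import time

def write_speed_test(path, total_mb=64, chunk_mb=4):
    """Write total_mb of zeros in chunk_mb pieces and return the rate in MB/s."""
    chunk = b"\x00" * (chunk_mb * 1024 * 1024)
    writes = total_mb // chunk_mb
    start = time.perf_counter()
    # buffering=0 bypasses Python's userspace buffer, roughly analogous to
    # the Win32 examples bypassing the OS write cache (only roughly!).
    with open(path, "wb", buffering=0) as f:
        for _ in range(writes):
            f.write(chunk)
        os.fsync(f.fileno())  # include the flush-to-disk cost in the timing
    elapsed = time.perf_counter() - start
    return (writes * chunk_mb) / elapsed

path = os.path.join(tempfile.gettempdir(), "stream_bench.bin")
rate = write_speed_test(path, total_mb=16, chunk_mb=4)
print(f"{rate:.1f} MB/s")
os.remove(path)
```

Running it with a few different `chunk_mb` values shows the same effect the VIs demonstrate: very small chunks waste time on per-call overhead, while moderate chunks approach the disk's sequential limit.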

Streaming resources:
ni.com/streaming
NI-SCOPE Stream to Disk Using Win32 File I/O (download the ni-scope-streaming-advanced.zip file).

Hope this helps.  The VI is already set up with the producer/consumer architecture.  Just another alternative which I think is worth a try.
Message 12 of 17
Hi Mlang,

Using queues is a good way to ensure that your "producer" loop gathers data at the rate you specify while the "consumer" loop handles that data at another rate.  The idea is that you would be writing this data to file fast enough that you wouldn't fill 16G of RAM by the end of your test.  Yes, LabVIEW can run out of memory if you aren't able to write the data fast enough, which would throw an error or cause unexpected behavior.  Queues store all the data passed to them.  If you used notifiers instead, you would be sure not to run out of memory, but you would lose data in your "consumer" loop: while the consumer is busy handling one notification, anything sent to the notifier in the meantime (such as entire chunks of data) is overwritten and lost.

I would recommend taking a look at our streaming website found here.  In addition, take a look here for a specific example using our National Instruments digitizers (such as the 5114), and download the advanced streaming example.  It contains some of the Win32 write-to-file VIs that significantly increase the rate at which you can stream to disk over the regular LabVIEW VIs.  There are also some benchmark VIs included that will test the rate at which you can stream to disk.  I would run both the Win32 Write to File Speed Test.vi example and the LV Write to File Speed Test.vi; notice that the Win32 write-to-file speed test is much faster.
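The queue behavior described above also answers the memory question: if the queue is bounded, the producer simply waits when the writer falls behind, instead of buffering until RAM runs out. Here is a rough Python analog of that producer/consumer pattern (chunk sizes and counts are made up for illustration; the "disk write" is a stand-in):

```python
import queue
import threading

# Bounded queue: when the disk writer falls behind, put() blocks the
# producer instead of letting unconsumed chunks fill all of RAM.
data_q = queue.Queue(maxsize=8)   # at most 8 chunks buffered at once
SENTINEL = None
written = []

def producer(n_chunks, chunk_size):
    for i in range(n_chunks):
        chunk = bytes([i % 256]) * chunk_size  # stand-in for a niScope fetch
        data_q.put(chunk)                      # blocks once 8 chunks are queued
    data_q.put(SENTINEL)                       # tell the consumer we're done

def consumer():
    while True:
        chunk = data_q.get()
        if chunk is SENTINEL:
            break
        written.append(len(chunk))             # stand-in for a disk write

t = threading.Thread(target=consumer)
t.start()
producer(100, 1024)
t.join()
print(sum(written))  # 100 * 1024 = 102400 bytes "written", none dropped
```

A notifier would correspond to a one-slot buffer where `put` overwrites instead of blocking: the producer never waits, but any chunk the consumer did not pick up in time is gone.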

I would also recommend taking a look at the niScope Stream to Disk Queues Win32 File IO.vi example, which demonstrates using queues to write to disk. 

Please let me know if you have any questions using these VIs, etc.

Regards,
Paul C.
Message 13 of 17
OK, I did a few tests myself. While I didn't have enough time to give both HWS and the customized Win32 VIs a thorough workout, I did a very quick test using the write-to-disk benchmark, and then repeated it with the file write VIs replaced by HWS VIs. Result? Using Win32 w/ an optimum chunk size of ~5MB, I couldn't really break 22MB/s. Using HWS, I got ~40-50MB/s.

So now my last crazy problem (I'm shipping the computer tomorrow) is that the separate "data consumer" loop works just fine, except that even though I'm bundling the waveform reference w/ the I8 data, on even loop iterations the waveform reference that gets bundled reads as "0", which results in an error and only half my data being written. I'm attaching a screenshot; for the meantime, sticking the waveform ID into a front panel control and using local variables seems to be an acceptable (though kludgy) workaround. But any idea why a probe would read "1000000" pre-bundle and "0" post-bundle on every other loop iteration, while the remaining iterations are fine?


Message 14 of 17
I don't see any obvious reason this should happen, but at this point, you just need it to work.  Here are other things you could do to pass the data, from easiest to hardest:
  1. Create a front panel control to use to pass the reference.  Use a local variable to set the control and then read it in your streaming consumer loop.  Wrap the local variable in a single-frame sequence.  Run the error wire from the dataset creation VI to the acquisition loop through this single-frame sequence.  This will ensure that the control gets updated before it is needed at the next write to disk.  This method has the unfortunate problem of doing a switch to the UI thread every time the control is read or the local variable is set, causing slowdowns.  If this does not cause you problems, go for it.  If not, we go to plan 2.
  2. Create a simple LabVIEW 2 global which contains your waveform reference.  Use error in/out to create a dataflow dependency.  Use it to set the reference before entering your data acquisition loop and read it in your streaming loop.  I have attached an example LabVIEW 2 global which should work for you.  This method does not go through the UI thread, so it will be faster.
  3. Create a single-element-queue global and use it the same way you would use the LabVIEW 2 global.  This requires more initialization and does not have much, if any, speed advantage in this situation.
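All three options are LabVIEW-specific, but option 3 can be sketched in Python for readers unfamiliar with the pattern: a one-element queue acts as a global holding the current reference, where writes replace the stored value and reads put it back so it stays available. Names and the `1000000` value are illustrative only, and this single-threaded sketch ignores the locking a real multi-loop version would need:

```python
import queue

# Single-element "global": holds exactly one value (here, a stand-in for
# the HWS waveform reference).
ref_q = queue.Queue(maxsize=1)

def set_ref(ref):
    # Flush whatever is stored, then enqueue the new value.
    try:
        ref_q.get_nowait()
    except queue.Empty:
        pass
    ref_q.put(ref)

def get_ref():
    # Dequeue and immediately re-enqueue, so reads don't consume the value.
    ref = ref_q.get()
    ref_q.put(ref)
    return ref

set_ref(1000000)    # acquisition side stores the dataset reference
print(get_ref())    # streaming side reads it: 1000000
print(get_ref())    # still there: 1000000
```

This mirrors why the single-element queue behaves like the LabVIEW 2 global: there is exactly one storage slot, so every loop that reads it sees the latest reference rather than a stale copy.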
Good luck!
Message 15 of 17

I was reading this thread and this page http://zone.ni.com/devzone/cda/epd/p/id/5273 (NI-SCOPE Stream to Disk Using Win32 File IO) and am still a bit confused by the stream-to-disk operation.  The application example "writes data to disk using the LabVIEW primitive Write to Binary File.vi".  I am trying to adapt it to use "Array to Spreadsheet String" or "Write to Text File", and came up with this, which generates an error.  Is there a special Win32 Open File, or am I missing something?

 

Message 16 of 17

Hi Douglas,

 

I am interested in knowing what kind of error you are seeing, whether it may be coming from the File I/O operations or the NI-SCOPE functions. I could not tell from the picture, but I am also interested in knowing what function you are using to open the file before calling the Write to Text File VI.

In the normal example that you would download at the page you linked, it does use a special Win32 Open File function, which is optimized by disabling Windows caching when writing to a binary file. This is the suggested method of streaming to disk since it will provide the least amount of overhead from the software when writing to the hard drive. However, this Win32 function should not be used when writing to a text file; instead, you would want to just use the standard LabVIEW Open/Create/Replace File primitive. Also, keep in mind that you will not be able to achieve high streaming rates from your device if you are streaming this data to a text file. If you need the data in a spreadsheet-type format, you may want to consider post-processing to read the binary file and convert the binary data to a spreadsheet, which is not going to be a time-dependent process.
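The post-processing step suggested above is straightforward once the acquisition is done. As a language-neutral sketch (in Python rather than LabVIEW), assuming the streamed file is a flat sequence of signed 8-bit (I8) samples as in the earlier posts; file names and the row width are made up for illustration:

```python
import struct

def binary_to_spreadsheet(bin_path, txt_path, samples_per_row=8):
    """Read a flat file of I8 samples and write tab-separated text rows."""
    with open(bin_path, "rb") as f:
        raw = f.read()
    # "b" = signed char, so each byte becomes a signed sample in -128..127.
    samples = struct.unpack(f"{len(raw)}b", raw)
    with open(txt_path, "w") as out:
        for i in range(0, len(samples), samples_per_row):
            row = samples[i:i + samples_per_row]
            out.write("\t".join(str(s) for s in row) + "\n")
```

Because this runs after the acquisition, it can be as slow as it likes: the time-critical path stays binary, and the spreadsheet conversion happens offline.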

 

I hope this helps, and please let me know if you have any additional information/details/questions.

Regards,

Daniel S.
National Instruments
Message 17 of 17