
Most efficient method to acquire data...

What is the most efficient (least uP time) method to acquire data from an E-Series board in LabVIEW RT: using a single-point read in a while loop, or setting up an acquisition to a memory buffer and reading from the buffer in a while loop?

Message 1 of 7

For any OS, a buffered acquisition is more efficient than a single-point acquisition. Single-point acquisitions are generally interrupt-driven (or worse, programmed I/O), so every time the E-Series scan clock initiates an A/D conversion, the uP is interrupted so that it can collect the sample directly off the board.

On the other hand, a buffered acquisition uses DMA to transfer several points at a time to a software buffer on your PC. The software buffer is user-configurable and will store points until you are ready to collect them with AI Read.vi. Although some uP time is required to execute AI Read.vi, the "efficiency" (the ratio of data acquired to uP time spent acquiring) is much higher for buffered acquisitions. This is why a buffered acquisition can achieve sample rates hundreds of times faster than a single-point acquisition.
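
To make the contrast concrete, here is a minimal sketch of a buffered read loop in C. The later NI-DAQmx C API is used as a stand-in for the LabVIEW VIs discussed in this thread; the device name ("Dev1/ai0"), rate, and block size are illustrative assumptions, not anything from these posts.

    /* Buffered-acquisition sketch (NI-DAQmx C API, error handling omitted).
       The driver DMAs samples into a host buffer; the uP only wakes up
       to copy out a whole block per read call. */
    #include <NIDAQmx.h>

    int main(void)
    {
        TaskHandle task;
        float64    data[1000];
        int32      read;

        DAQmxCreateTask("", &task);
        DAQmxCreateAIVoltageChan(task, "Dev1/ai0", "", DAQmx_Val_Cfg_Default,
                                 -10.0, 10.0, DAQmx_Val_Volts, NULL);
        /* Hardware-clocked, continuous: samples accumulate via DMA. */
        DAQmxCfgSampClkTiming(task, "", 60000.0, DAQmx_Val_Rising,
                              DAQmx_Val_ContSamps, 1000);
        DAQmxStartTask(task);

        while (1) {
            /* One call returns 1000 points; the per-call overhead is
               amortized over the whole block - the "efficiency" above. */
            DAQmxReadAnalogF64(task, 1000, 10.0, DAQmx_Val_GroupByChannel,
                               data, 1000, &read, NULL);
            /* ...process the block... */
        }
    }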

The reason you might want to perform a single-point acquisition is if you need to respond with an analog output at regularly scheduled intervals (i.e., perform control).

Message 2 of 7

But if your DAQ rate is high enough, wouldn't it be sufficient to perform control by reading the newest data from the buffer, executing the control algorithm, and then updating the DAC accordingly?

Message 3 of 7

If you're performing single-point control, AI SingleScan.vi is actually faster than AI Read.vi up to a certain buffer size.

Take the extreme case of acquiring only one point. AI SingleScan.vi can keep up with a 60 kHz hardware acquisition on an 850 MHz PXI-8170. On the other hand, AI Read.vi runs at about 20 kHz (on the same PXI-8170) when acquiring one point at a time from the buffer. It's not until you're acquiring larger chunks of data that AI Read.vi becomes more "efficient".

Given a constant overhead of 50 us per call (1/20 kHz), AI Read.vi would surpass AI SingleScan.vi in efficiency at around 3 points per read, and in buffered applications it's not uncommon to read 500-1000 points at a time.
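
As a back-of-envelope check of that break-even figure, here is a small C calculation using only the numbers quoted above:

    /* AI SingleScan.vi keeps up with 60 kHz, so ~16.7 us per point;
       AI Read.vi tops out near 20 kHz per call, so ~50 us of overhead. */
    #include <stdio.h>

    int main(void)
    {
        const double per_point_us = 1e6 / 60000.0;  /* ~16.7 us per point */
        const double per_call_us  = 1e6 / 20000.0;  /* ~50 us per read call */
        printf("break-even: ~%.1f points per read\n",
               per_call_us / per_point_us);         /* prints ~3.0 */
        return 0;
    }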

Several major points, however, have been overlooked thus far. First, AI SingleScan.vi sleeps while waiting on the next point - something AI Read.vi does not do. Also, because AI SingleScan.vi is interrupt-driven, it gives you a way to synchronize your software loop to the precise scan-clock timing in hardware - again, something AI Read.vi does not do. Finally, AI SingleScan.vi lets you know whether you're actually keeping real time via its "scans remaining" parameter - AI Read.vi has a similar parameter, but it applies to the points remaining in the software buffer, not the onboard hardware FIFO of the E-Series board.
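
Here is a rough sketch of such a synchronized single-point control loop in C, again using the later NI-DAQmx API (its hardware-timed single-point mode plays the role AI SingleScan.vi plays here); the channel names, rate, and control law are illustrative assumptions:

    /* Hardware-timed single-point sketch (error handling omitted).
       Each read sleeps until the next scan-clock tick, so the software
       loop stays locked to the hardware clock. */
    #include <NIDAQmx.h>

    int main(void)
    {
        TaskHandle ai, ao;
        float64    sample, output;

        DAQmxCreateTask("", &ai);
        DAQmxCreateAIVoltageChan(ai, "Dev1/ai0", "", DAQmx_Val_Cfg_Default,
                                 -10.0, 10.0, DAQmx_Val_Volts, NULL);
        DAQmxCfgSampClkTiming(ai, "", 1000.0, DAQmx_Val_Rising,
                              DAQmx_Val_HWTimedSinglePoint, 1);

        DAQmxCreateTask("", &ao);
        DAQmxCreateAOVoltageChan(ao, "Dev1/ao0", "", -10.0, 10.0,
                                 DAQmx_Val_Volts, NULL);

        DAQmxStartTask(ai);
        DAQmxStartTask(ao);

        while (1) {
            /* Blocks (with sleep) until the next conversion completes. */
            DAQmxReadAnalogScalarF64(ai, 10.0, &sample, NULL);
            output = 0.5 * sample;               /* stand-in control law */
            DAQmxWriteAnalogScalarF64(ao, 0, 10.0, output, NULL);
        }
    }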

Message 4 of 7

Hi Troy,

In the case of the one-point acquisition, your processor is interrupted when the I/O completes. In LV-RT this gives you very precise timing control over when the output is updated. This is generally referred to as determinism.

You could do a buffered acquisition in the standard manner used in non-RT OSes (reading the backlog size, reading only that amount, etc.), but the determinism of your output would vary.

If your logic takes this into consideration, you can do what you want.

I have done a variation on both methods: I read a fixed number of values in a time-critical loop and used that in my control algorithm. This let me oversample a noisy signal that I had to use in a PID loop where all three terms (P, I, D) were required. Worked out real nice.
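
In C terms, that pattern looks roughly like the sketch below; the struct, gains, and block size are hypothetical illustrations, not the actual application code:

    /* Oversample-then-PID sketch: read a fixed block each iteration,
       average it to knock down noise, then run a full P+I+D step. */
    #include <stddef.h>

    typedef struct {
        double kp, ki, kd;          /* illustrative gains */
        double integral, prev_err;
    } Pid;

    static double pid_step(Pid *p, double setpoint, double pv, double dt)
    {
        double err   = setpoint - pv;
        p->integral += err * dt;
        double deriv = (err - p->prev_err) / dt;
        p->prev_err  = err;
        return p->kp * err + p->ki * p->integral + p->kd * deriv;
    }

    /* Called once per time-critical loop iteration with the block just
       read from the acquisition buffer. */
    static double control_iteration(Pid *p, const double *block, size_t n,
                                    double setpoint, double dt_block)
    {
        double mean = 0.0;
        for (size_t i = 0; i < n; i++)
            mean += block[i];
        mean /= (double)n;          /* averaging suppresses the noise */
        return pid_step(p, setpoint, mean, dt_block);
    }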


Trying to help,

Ben
Retired Senior Automation Systems Architect with Data Science Automation. LabVIEW Champion. Knight of NI and Prepper.

Message 5 of 7

Expounding upon the idea of efficient data acquisition: is it more efficient to use NI-DAQ virtual channels to scale the inputs, or is it better to scale the inputs in LabVIEW code (under LV-RT)?

Message 6 of 7

As a personal preference, I'd scale the inputs in LabVIEW code. I have never benchmarked virtual channels, so I'm not sure which is faster.

I just don't like the idea of having to carry around a config file that defines my scales.
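
For what it's worth, scaling in code amounts to a one-line linear map; the slope and offset below are hypothetical stand-ins for what a virtual channel would otherwise carry around in its config file:

    /* Scale a raw voltage to engineering units in code rather than
       via an NI-DAQ virtual-channel configuration. */
    static inline double scale_to_eng_units(double volts)
    {
        const double slope  = 25.0;   /* e.g. 25 PSI per volt (hypothetical) */
        const double offset = -12.5;  /* sensor zero offset (hypothetical) */
        return slope * volts + offset;
    }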

Message 7 of 7