
What is delay using AI Single Scan?

Hello,
I have a NI PXI-1002 chassis with a PXI-8176 controller (PIII 1.2 GHz) and a 6052E I/O board, running LabVIEW Real-Time.

When I use "AI SingleScan.vi" to acquire a data point at 8000 S/s, what is the delay between the real-world signal and that data point being available for processing? Is it less than 1/8000 s, or is there some buffering going on between the real world and my program?

I would also like to know the answer for outputting a single data point with "AO SingleUpdate.vi". When I update a point, is it an analogue voltage in the real world within 1/8000 s, or is there buffering?

Thanks in advance,
Frenk
Message 1 of 3
The delay between the A/D conversion and the data becoming available for processing is the overhead of AI SingleScan.vi (and the NI-DAQ calls underneath it). To measure AI SingleScan's overhead, put it in a loop by itself and check the scan backlog. Try several scan rates to find the maximum for your system. There is an example to get you started:

\RT Timing.llb\Hardware-Timed Acquisition.vi

The best approach is to put that example in a loop and run a binary search on the scan rate until you zero in on the maximum rate with no error and no scan backlog. If memory serves, AI SingleScan's overhead on a PXI-8170 (850 MHz) is about 15 µs.
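
In text form, the binary search might look like the sketch below (C is used here only because the loop itself is a LabVIEW diagram). run_loop_at() is a hypothetical stand-in for running the Hardware-Timed Acquisition example at a given scan rate and reporting whether it finished with no error and no scan backlog:

/* Sketch: binary search for the maximum sustainable scan rate.
 * run_loop_at() is a hypothetical stand-in for running the
 * Hardware-Timed Acquisition example at the given rate; here it
 * simulates a system whose true maximum is 25000 S/s. */
#include <stdio.h>

static int run_loop_at(double scan_rate)
{
    return scan_rate <= 25000.0;   /* 1 = no error, no backlog */
}

int main(void)
{
    double lo = 1000.0;            /* known-good rate (S/s) */
    double hi = 100000.0;          /* known-failing rate (S/s) */

    while (hi - lo > 100.0) {      /* stop at 100 S/s resolution */
        double mid = (lo + hi) / 2.0;
        if (run_loop_at(mid))
            lo = mid;              /* sustained: search higher */
        else
            hi = mid;              /* backlog or error: search lower */
    }
    printf("Maximum sustainable scan rate: about %.0f S/s\n", lo);
    return 0;
}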

Now for AO SingleUpdate.vi, the task is a little trickier, because AO SingleUpdate does not "wait" like AI SingleScan and does not have a scan backlog we can check to determine whether the I/O is in lockstep with the software.

With that said, you can measure the overhead of AO SingleUpdate by running it in a loop for several thousand iterations. Take a tick count before and after, then divide by the iteration count to get the average overhead. Keep in mind, though, that AO SingleUpdate has three primary modes of operation: Output only, Update only, and Output and Update. You'll notice the RT examples use "Output only". Basically, that means AO SingleUpdate loads the output register with a new value but does NOT latch the analog voltage to the I/O connector. This latching action is the responsibility of the timing source, and in the RT examples, the timing source is usually the AI scan clock or the output of a counter.
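
As a sketch of that measurement (again in C rather than a LabVIEW diagram, and with ao_single_update() as a hypothetical placeholder for the AO SingleUpdate call), the before/after tick count and the division look like this:

/* Sketch: average per-call overhead of a single-point output call.
 * ao_single_update() is a hypothetical stand-in for AO SingleUpdate.vi. */
#include <stdio.h>
#include <time.h>

static volatile double last_value;     /* volatile so the call isn't optimized away */

static void ao_single_update(double v) /* placeholder for the real call */
{
    last_value = v;
}

int main(void)
{
    const long iterations = 100000;
    clock_t start = clock();           /* tick count before */

    for (long i = 0; i < iterations; i++)
        ao_single_update(1.0);

    clock_t end = clock();             /* tick count after */
    double total_s = (double)(end - start) / CLOCKS_PER_SEC;
    printf("Average overhead: %.2f usec/call\n",
           1e6 * total_s / iterations);
    return 0;
}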

So, to make a long story short, AO has two measurable rates in this case: 1) how fast we can load new values into the output register on the board, and 2) how fast the board can perform the D/A conversion. Hope that helps.
Message 2 of 3
A typical control loop acquires one point, processes it, and generates an output. You can implement a true real-time control loop in LabVIEW by routing the scan clock to the update clock (or vice versa), so that analog input (AI) and analog output (AO) share a clock and therefore occur at exactly the same time. The disadvantage of this method is that each update (analog output) actually depends on the scan (analog input) from one loop iteration earlier: because it is impossible to scan, process, and update at the same instant, you have to accept a delay between the scan and the corresponding update in which the processing happens. Because both clocks share the same source and are therefore tightly coupled, this delay is known (usually one clock cycle), and you can account for it in your control algorithm. Assuming you use continuous control algorithms, this one-cycle delay is no problem as long as your desired loop rate is approximately 5-10 times faster than the fastest dynamics of the process being controlled.
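
As a hypothetical worked example: if the fastest time constant of your process is around 10 ms, the 5-10x rule of thumb suggests a loop rate of roughly 500-1000 S/s, and the known one-cycle delay then contributes a fixed 1-2 ms dead time that your control design can compensate for.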

Optimized Hardware-Timed PID Loop
http://sine.ni.com/apps/we/niepd_web_display.display_epd4?p_guid=B45EACE3E93B56A4E034080020E74861&p_node=174823&p_source=external

The timing and triggering functionality of the RTSI bus on our DAQ hardware is used to route the analog input scan clock internally to the analog output update clock. As a result, the analog input scan clock serves as the "master" that determines the loop cycle time and puts the loop to sleep between iterations.
During the first iteration of the control loop, the scan clock triggers sampling of the analog input value. The data is brought into software, where a PID calculation is performed, and the analog output value is written to the board. At the next rising edge of the scan clock, the analog output channel is updated with the value from the previous iteration and a new analog input value is read.
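
The ordering inside the loop can be sketched as follows; every function below is a hypothetical placeholder for the corresponding VI or driver call, and the comments mark where the one-iteration delay enters:

/* Sketch of the hardware-timed control loop described above.
 * All functions are hypothetical placeholders for RT VIs/driver calls;
 * the structure is what matters: it shows the one-iteration delay. */
#include <stdio.h>

static double ai_single_scan(void)      /* blocks until the next scan clock edge */
{
    return 0.0;                         /* stub measurement */
}

static void ao_write_register(double v) /* "Output only": loads register, no latch */
{
    (void)v;
}

static double pid(double measurement)   /* stand-in for your control calculation */
{
    return -measurement;
}

int main(void)
{
    for (int i = 0; i < 8000; i++) {    /* e.g. one second at 8000 S/s */
        /* The same clock edge that triggers this scan also latches
         * the value loaded into the AO register last iteration. */
        double x = ai_single_scan();

        /* Process and load the next output; it reaches the I/O
         * connector on the NEXT scan clock edge (one-cycle delay). */
        ao_write_register(pid(x));
    }
    return 0;
}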
This example uses analog input and analog output on the same board; however, the same technique can be used with RTSI and PXI to synchronize input and output events on multiple boards.
Loop cycle time with this implementation can be specified in microseconds. As long as your control code (including the time it takes to talk to the I/O) executes within the time specified by your desired loop rate, your external process sees only the hardware jitter of the I/O clock, which is in the nanosecond range.
Depending on your control algorithm, 1/8000 s shouldn't be a problem with your hardware. We achieve up to 1/40000 s for one PID loop with this hardware (single input/single output).
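
For scale: 8000 S/s corresponds to a 125 µs loop period, and 40000 S/s to 25 µs, so the PID calculation plus both I/O calls must complete within that budget on every iteration.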

Regards

Stephan A.
System Engineer Control & Simulation
National Instruments
Message 3 of 3