Confusion regarding hardware-software interfacing using DAQmx

I tried a variety of things, and this was about the best I managed when aiming for a 20 kHz loop rate governed by DAQ timing, including both AI and AO.  (See the file with "in sequence" in the name.)  The actual average loop iteration time was ~112 microsec (~9 kHz rate).  Individual iteration times stayed bounded between about 50 and 200 microsec, but were highly variable within those bounds.

HW-Timed Single Point control loop - in sequence.png
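
Since I can't paste a block diagram as text, here's a rough sketch of that fully-sequenced structure using the Python nidaqmx API instead.  The device/channel names, setpoint, and simple proportional control calc are placeholders I made up for illustration -- they aren't from the attached VI.  The key behavior is that in HW-Timed Single Point mode, the DAQmx read blocks until the next sample clock edge, so it paces the loop, and the read, control calc, and write all have to fit inside one 50 microsec period:

import nidaqmx
from nidaqmx.constants import AcquisitionType

RATE = 20_000     # desired loop rate, Hz (50 microsec period)
SETPOINT = 0.0    # made-up control target, volts
KP = 0.5          # made-up proportional gain

with nidaqmx.Task() as ai, nidaqmx.Task() as ao:
    ai.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    ao.ao_channels.add_ao_voltage_chan("Dev1/ao0")

    # Hardware-timed single-point mode on both tasks, with AO slaved
    # to the AI sample clock so the two stay phase-locked.
    ai.timing.cfg_samp_clk_timing(
        RATE, sample_mode=AcquisitionType.HW_TIMED_SINGLE_POINT)
    ao.timing.cfg_samp_clk_timing(
        RATE, source="/Dev1/ai/SampleClock",
        sample_mode=AcquisitionType.HW_TIMED_SINGLE_POINT)

    ao.start()   # start the slaved task first so it catches the first edge
    ai.start()

    while True:
        meas = ai.read()              # blocks until the sample clock ticks
        out = KP * (SETPOINT - meas)  # control calc in the same cycle
        ao.write(out)                 # must complete before the next tick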

 

Then I tried a small pipelining cheat by letting the AI run in parallel with the code that waited for the next sample clock.  The "cheat" aspect is that this parallelism means the control calculation is based on AI data that's 1 sample older than in the sequenced case.  (See the file with "small cheat" in the name.)  Average loop time was only ~51 microsec (~19.5 kHz rate).  Iteration-to-iteration variation was pretty well bounded between 2 and 14 microsec, so consistency was quite a bit better too.

HW-Timed Single Point control loop - small cheat.png
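
Same caveats and made-up names as the sketch above; the only change for the pipelined version is the loop body.  The AO write goes out first, using the value computed from the previous iteration's AI sample, so the control calc overlaps the time spent waiting on the clock.  That's the 1-sample-older data the "cheat" refers to:

import nidaqmx
from nidaqmx.constants import AcquisitionType

RATE, SETPOINT, KP = 20_000, 0.0, 0.5   # same made-up values as above

with nidaqmx.Task() as ai, nidaqmx.Task() as ao:
    ai.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    ao.ao_channels.add_ao_voltage_chan("Dev1/ao0")
    ai.timing.cfg_samp_clk_timing(
        RATE, sample_mode=AcquisitionType.HW_TIMED_SINGLE_POINT)
    ao.timing.cfg_samp_clk_timing(
        RATE, source="/Dev1/ai/SampleClock",
        sample_mode=AcquisitionType.HW_TIMED_SINGLE_POINT)
    ao.start()
    ai.start()

    out = 0.0                          # nothing computed yet on cycle 0
    while True:
        ao.write(out)                  # output based on the PREVIOUS sample
        meas = ai.read()               # wait for the next sample clock edge
        out = KP * (SETPOINT - meas)   # ready for the next cycle's write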

 

I know of some problems in the attached code, and don't have time to prettify it or thoroughly explain it now.  It works well enough as-is to further this conversation, though.

 

My overall takeaway: without the "cheat", 20 kHz doesn't look feasible.  With the cheat, the loop can approach reasonable speed and consistency.  However, the nature of the cheat means you aren't really controlling to a 20 kHz bandwidth.  It's more like 10 kHz, because there are nearly 2 cycles between the AI sample and the corresponding AO generation.
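
For anyone who wants the arithmetic behind that ~10 kHz figure spelled out, here's the back-of-envelope version, assuming the nominal 50 microsec period at 20 kHz:

period_us = 50                    # one sample clock period at a 20 kHz rate
cycles_of_delay = 2               # AI sample to AO update, with the cheat
latency_us = cycles_of_delay * period_us   # ~100 microsec of loop delay
effective_rate_hz = 1e6 / latency_us       # ~10,000 Hz, i.e. ~10 kHz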

 

Still, quite a bit better than I would have anticipated...

 

 

-Kevin P


Thanks a lot, Kevin. I just need to try this out myself. I'll get back to you after that.
