08-01-2019 08:01 PM
I tried a variety of things, and this was about the best I managed when trying for a 20 kHz loop rate governed by DAQ timing, including both AI and AO. (See the file with "in sequence" in the name.) Actual average loop iteration time was ~112 microsec (~9 kHz rate). Loop times were bounded between about 50 and 200 microsec, but were highly variable within those bounds.
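Since I can't paste a LabVIEW diagram as text, here's a rough sketch of the sequenced dataflow using the nidaqmx Python API. Treat it all as placeholder assumptions: "Dev1", the channels, and hardware-timed single-point timing may not match the attached VIs, and plain Python on a desktop OS won't actually keep up at 20 kHz. It's only meant to show the ordering:

import nidaqmx
from nidaqmx.constants import AcquisitionType

RATE = 20000.0  # target 20 kHz sample clock

with nidaqmx.Task() as ai, nidaqmx.Task() as ao:
    ai.ai_channels.add_ai_voltage_chan("Dev1/ai0")  # placeholder channel
    ao.ao_channels.add_ao_voltage_chan("Dev1/ao0")  # placeholder channel

    # Hardware-timed single point on both tasks, AO slaved to the AI
    # sample clock (HWTSP mode is only supported on some devices)
    ai.timing.cfg_samp_clk_timing(
        RATE, sample_mode=AcquisitionType.HW_TIMED_SINGLE_POINT)
    ao.timing.cfg_samp_clk_timing(
        RATE, source="/Dev1/ai/SampleClock",
        sample_mode=AcquisitionType.HW_TIMED_SINGLE_POINT)

    ao.start()  # start AO first so it's ready for the first shared clock edge
    ai.start()

    while True:
        x = ai.read()   # blocks until the next sample clock edge
        y = 0.5 * x     # stand-in for the real control calculation
        ao.write(y)     # AO based on the AI sample from this same cycle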
Then I tried a small pipelining cheat by letting the AI read run in parallel with the code that waited for the next sample clock. The "cheat" aspect is that this parallelism means the control calculation is based on AI data that's 1 sample older than in the sequenced case. (See the file with "small cheat" in the name.) Average loop time was only ~51 microsec (~19.5 kHz rate). Loop-to-loop variation was pretty well bounded between about 2 and 14 microsec, and consistency was quite a bit better.
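The "small cheat" version has the same setup, except the AI read sits in a parallel branch of the diagram. Sequential text can't show true parallelism, but the equivalent dataflow is just a one-sample pipeline; as a drop-in replacement for the loop in the sketch above (same placeholder assumptions):

x_prev = 0.0  # AI sample carried over from the previous iteration
while True:
    x_new = ai.read()       # blocks on the sample clock, as before
    ao.write(0.5 * x_prev)  # control calc uses AI data that's 1 sample old...
    x_prev = x_new          # ...which frees the calc from the read-to-write window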
I know of some problems in the attached code, and don't have time to prettify it or thoroughly explain it now. It works well enough as-is to further this conversation, though.
My overall takeaway: without the "cheat", 20 kHz doesn't look feasible. With the cheat, the loop can approach reasonable speed and consistency. However, the nature of the cheat means you aren't really controlling to a 20 kHz bandwidth. It's more like 10 kHz, because nearly 2 sample-clock cycles elapse between the AI sample and the corresponding AO generation.
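To put numbers on that:

period = 1.0 / 20000.0       # 50 us per sample clock cycle at 20 kHz
ai_to_ao_delay = 2 * period  # ~2 cycles between an AI sample and its AO update
print(ai_to_ao_delay)        # 0.0001 s, i.e. ~100 us -- one full period at 10 kHz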
Still, quite a bit better than I would have anticipated...
-Kevin P
08-02-2019 02:57 AM
Thanks a lot, Kevin. I just need to try this out myself. I'll get back to you after that.