Hello all,
I think I am in need of some basic LabVIEW advice regarding execution time. We have a main program loop that currently runs every 100 ms. On each iteration it pulls N samples from four AI voltage channels and computes the average value (i.e., oversample and average), while also updating the output value of one AO voltage channel. There is a small amount of "downstream" math, such as the averaging. When I track the "previous iteration duration," it averages between 6 and 12 ms to accomplish these tasks. Removing the downstream math and writing the raw voltages straight to shared variables did not *seem* to make a large difference. I was hoping someone could look at the code and suggest how to perform these tasks more efficiently. We'd like to reduce the loop execution time as much as possible, by at least an order of magnitude, if not more.
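For reference, here is a rough text sketch of what one iteration of the loop does. The real code is a LabVIEW VI, so this Python/nidaqmx version is only an approximation for discussion; the "Dev1" channel names and the N_SAMPLES and SAMPLE_RATE values are placeholders, not our actual configuration.

# Rough sketch of one loop iteration, for discussion only.
# All names and numbers below are placeholders, not our real setup.
import time
import numpy as np
import nidaqmx
from nidaqmx.constants import AcquisitionType

N_SAMPLES = 100         # placeholder oversample count per channel
SAMPLE_RATE = 100000.0  # placeholder sample clock rate (S/s per channel)

with nidaqmx.Task() as ai_task, nidaqmx.Task() as ao_task:
    ai_task.ai_channels.add_ai_voltage_chan("Dev1/ai0:3")   # four AI voltage channels
    ai_task.timing.cfg_samp_clk_timing(SAMPLE_RATE,
                                       sample_mode=AcquisitionType.FINITE,
                                       samps_per_chan=N_SAMPLES)
    ao_task.ao_channels.add_ao_voltage_chan("Dev1/ao0")     # one AO voltage channel

    t0 = time.perf_counter()
    # Oversample & average: N samples from each of the 4 channels -> 4 mean values
    raw = np.array(ai_task.read(number_of_samples_per_channel=N_SAMPLES))
    averages = raw.mean(axis=1)
    # Update the AO value (placeholder: write back the first channel's mean)
    ao_task.write(float(averages[0]))
    iteration_ms = (time.perf_counter() - t0) * 1000.0      # "previous iteration duration"
    print(averages, iteration_ms)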
Does the acquisition occur sequentially (4 channels @ 1 ms of sampling each = 4 ms; see the rough arithmetic sketch below)? If so, it makes sense that I would need to increase the sample rate / decrease the total time per read, etc.
Is the math playing a larger role than I assume?
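To make the arithmetic behind the first question concrete, here is a back-of-the-envelope sketch; the numbers are illustrative assumptions, not measurements from our setup.

# Illustrative only: assumes "1 ms of sampling" means N samples per channel
# take about 1 ms at the sample rate, and that a single multiplexed task can
# run all four channels off one sample clock.
channels = 4
acq_time_per_channel_ms = 1.0
sequential_reads_ms = channels * acq_time_per_channel_ms  # four back-to-back single-channel reads -> ~4 ms
single_task_read_ms = acq_time_per_channel_ms             # one read covering all four channels -> ~1 ms
print(sequential_reads_ms, single_task_read_ms)           # 4.0 vs 1.0

If the reads really do happen sequentially, that ~4 ms alone would explain a large share of the 6-12 ms iteration time.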
Regards,
Matthew Pausley
NC State University
Raleigh, NC