11-29-2013 08:26 AM
Hello,
I have a control application that uses queues to push data from a processing loop, where an output voltage is generated, to a consumer loop, where I update my analog output voltage. I have noticed that the DAQ seems to have regular timing "blips" every 65 writes to the analog output channel (suspiciously close to 64). I have written example code that simulates the error here:
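For readers less familiar with the pattern, the architecture described above can be sketched in plain Python (the original application is LabVIEW; all names and values here are illustrative, not taken from the attached code). A producer thread generates voltages and enqueues them; a consumer thread dequeues them and stands in for the DAQmx write:

```python
import queue
import threading

def producer(q, n_samples):
    """Simulates the processing loop: generate output voltages and enqueue them."""
    for i in range(n_samples):
        voltage = 0.001 * i  # placeholder control signal
        q.put(voltage)
    q.put(None)  # sentinel: tells the consumer there is no more data

def consumer(q, written):
    """Simulates the DAQ loop: dequeue voltages and 'write' them to the AO channel."""
    while True:
        v = q.get()
        if v is None:
            break
        written.append(v)  # stand-in for the actual analog output write

q = queue.Queue()
written = []
t = threading.Thread(target=consumer, args=(q, written))
t.start()
producer(q, 1000)
t.join()
print(len(written))  # 1000
```

The queue decouples the two loops, which is exactly why a periodic slowdown in the consumer is surprising: nothing in the software architecture itself ties the consumer's pace to a 64-sample boundary.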
and here is the output
I have found this behaviour regardless of:
Can someone please explain to me why this is happening and how I can avoid/eliminate it? I am open to other suggestions for how to output my control voltage but our current design requirements dictate that we need the producer/consumer architecture with the queues rather than reading and writing to a single variable. This glitch is currently causing our control algorithm to become irrecoverably behind. I have attached the code and I am running with the following software:
Thank you in advance!
12-02-2013 06:20 PM
It's a bit unclear what the issue is... what exactly do you mean by timing blips? Are they noticeable on the output waveform? I see you are graphing time between iterations vs. number of iterations; is that correct?
12-03-2013 08:12 AM
Hi Chris, thanks for the response.
The output graph I am showing is the "loop iteration time" vs. "loop iteration". I realized that the particular set of parameters I used for this screenshot does not illustrate the problem well enough. For some sets of parameters, the increase in consumer loop iteration time can be larger than the amount of time it takes our producer loop to create the data.
In a typical producer/consumer architecture, this shouldn't be a problem: if the consumer loop is usually faster (despite the occasional increases in its iteration time), it should eventually catch up. In our larger application, however, this increase in the consumer (DAQ) loop time seems to be causing delays in our producer loop, which contains a camera (via IMAQ software with a PCIe-1433) triggered by a pulse train generated by the same DAQ. Somehow, the delay in our consumer loop causes a delay in our producer loop, our consumer loop starts to fall behind, and we lose data.
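The "a mostly-faster consumer eventually catches up" argument can be made concrete with a toy discrete-time simulation (Python; the rates and the stall period are illustrative numbers, not measurements from the real system). The producer adds one sample per tick; the consumer can normally drain two per tick but stalls completely on every 64th tick:

```python
def simulate(ticks, stall_every=64):
    """Toy backlog model: producer enqueues 1 sample/tick; consumer drains
    up to 2 samples/tick except on every `stall_every`-th tick, when it
    stalls entirely (mimicking the periodic timing blip)."""
    backlog = 0
    max_backlog = 0
    for t in range(1, ticks + 1):
        backlog += 1                       # producer enqueues one sample
        if t % stall_every != 0:
            backlog = max(backlog - 2, 0)  # consumer drains and catches up
        max_backlog = max(max_backlog, backlog)
    return backlog, max_backlog

print(simulate(1000))  # (0, 1): backlog stays bounded and returns to zero
```

In this model the backlog never exceeds one sample, which is the expected healthy behavior. The pathology described in the thread, where the consumer's stall feeds back into the producer (because the camera trigger comes from the same DAQ), breaks the independence this model assumes, so the backlog there can grow without bound.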
So my goal is to understand why these timing delays are happening. I can't tell whether it is because we are not using real-time hardware designed for this kind of application, whether there are issues with the PCI Express bus, or whether this is simply a Windows interrupt.
12-04-2013 06:19 PM
Hi ColeV,
The consumer should be able to fall behind and still collect all of the data. When is the data lost? Is it at random times or always at the end? If the producer loop is generating more data than the consumer loop can keep up with, it could eventually use up all the memory on the computer and slow down everything else.
Also, is hardware timing being used on your output instead of software timing? You may have to explicitly wire in DAQmx VIs to make sure this is done correctly.
So far, there still seem to be a lot of factors.
Regards,
12-06-2013 11:45 AM
@ColeV wrote:
This glitch is currently causing our control algorithm to become irrecoverably behind.
If this ~750 us timing glitch is enough to completely derail your control algorithm, you should probably look toward RT instead of Windows.
Having said that... you could try configuring hardware-timed single point timing on the output task. This way, the output would be clocked out synchronously with the card's timebase. Once you add hardware-timed acquisition to the code (instead of the simulated acquisition you are running now) you can synchronize the tasks together and have a deterministic amount of time between the sample being taken and the sample being generated. Of course, getting this to work depends on the OS being able to handle the input and write it back to the output quickly enough to fit in your allotted timing window--on Windows this really can't be guaranteed.
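The key property of hardware-timed single point output is that each write blocks until the card's sample clock fires, so pacing comes from the hardware timebase rather than from software sleeps. A rough software analogue of that pacing idea, sketched in Python purely for illustration (the function names and period are invented; real DAQmx configuration would be done through the DAQmx Timing VI or driver API, and on Windows software pacing is far coarser and non-deterministic):

```python
import time

def paced_writes(n, period_s, write):
    """Pace n writes at a fixed period using absolute deadlines computed
    from the start time, so one late iteration does not permanently shift
    all later ones (no cumulative drift). This imitates, in software, how
    a hardware-timed write blocks until the next sample clock edge."""
    start = time.perf_counter()
    for i in range(1, n + 1):
        deadline = start + i * period_s
        while time.perf_counter() < deadline:
            time.sleep(0)  # yield; Windows timer resolution is coarse (ms-scale)
        write(i)           # stand-in for the actual analog output write

out = []
paced_writes(5, 0.01, out.append)
print(out)  # [1, 2, 3, 4, 5]
```

Even with absolute deadlines, the loop above is only as good as the OS scheduler lets it be, which is the point being made in the post: on Windows the "write it back within the allotted window" guarantee simply cannot be made, whereas the hardware clock on the card can.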
Best Regards,