Multifunction DAQ


PCIe 6351 Timing Blip when Writing to Analog Output

Hello,

 

I have a control application that uses queues to push data from a processing loop, where an output voltage is generated, to a consumer loop, where I update my analog output voltage. I have noticed that the DAQ seems to have regular timing "blips" every 65 writes to the analog output channel (suspiciously close to 64). I have written example code that reproduces the problem:

 

[Attachment: DAQ Output Block Diagram.png]

 

 

and here is the output:

 

[Attachment: DAQ Output Front Panel.png]

 

I have found this behaviour regardless of:

  • Changing the "Number of DAQ Writes"
  • Changing the "Time Between Producer Events (ms)"
  • Resetting the DAQ
  • Using a different PCIe-6351 on a different computer
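
For reference, here is a rough text-mode sketch of the same producer/consumer pattern using the nidaqmx Python API (the channel name and parameters are placeholders; the actual code is the attached LabVIEW VI):

    # Rough text equivalent of the attached VI: a producer thread queues one
    # voltage per period, and a consumer thread performs one on-demand
    # (software-timed) DAQmx write per dequeued value.
    import queue
    import threading
    import time

    import nidaqmx

    data_q = queue.Queue()

    def producer(n_writes, period_s):
        for i in range(n_writes):
            data_q.put(0.1 * (i % 10))  # stand-in for the real control signal
            time.sleep(period_s)
        data_q.put(None)                # sentinel: tell the consumer to stop

    def consumer():
        with nidaqmx.Task() as task:
            task.ao_channels.add_ao_voltage_chan("Dev1/ao0")  # placeholder channel
            while True:
                value = data_q.get()
                if value is None:
                    break
                task.write(value)       # one driver/bus transaction per write

    threads = [threading.Thread(target=producer, args=(500, 0.005)),
               threading.Thread(target=consumer)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()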

 

Can someone please explain why this is happening and how I can avoid or eliminate it? I am open to other suggestions for how to output my control voltage, but our current design requirements dictate the producer/consumer architecture with queues rather than reading and writing a single variable. This glitch is currently causing our control algorithm to fall irrecoverably behind. I have attached the code, and I am running the following software:

 

  • Windows 7, 64-bit
  • LabVIEW 2013 f3, 64-bit
  • DAQmx 9.7.5 device drivers

Thank you in advance!

Message 1 of 5

It's a bit unclear what the issue is... what exactly do you mean by timing blips? Are they noticeable on the output waveform? I see you are graphing time between iterations vs. number of iterations; is that correct?

Chris S.
Message 2 of 5

Hi Chris, thanks for the response.

The output graph I am showing is "loop iteration time" vs. "loop iteration". I realize that the particular set of parameters I used for this screenshot does not illustrate the problem well enough: for some sets of parameters, the increase in the consumer loop time can be larger than the amount of time it takes our producer loop to create the data.

In a typical producer/consumer architecture this shouldn't be a problem: if the consumer loop is usually faster (despite the occasional increases in its iteration time), it should eventually catch up. In our larger application, however, this increase in the consumer (DAQ) loop time seems to be causing delays in our producer loop, which contains a camera (via IMAQ software with a PCIe-1433) triggered by a pulse train generated by the same DAQ. Somehow the delay in our consumer loop propagates back to our producer loop, the consumer loop falls behind, and we lose data.

So my goal is to understand why these timings are happening. I can't tell whether it is because we are not using real-time hardware designed for this kind of application, whether there are issues with the PCI Express bus, or whether this is simply a Windows interrupt.
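
For what it's worth, the measurement behind the graph is just the elapsed time between consecutive writes; a hypothetical Python (nidaqmx) equivalent of my LabVIEW timing code, with a placeholder channel name, would be:

    # Hypothetical text equivalent of the timing measurement: record the elapsed
    # time between consecutive on-demand writes and flag the outliers.
    import time
    import nidaqmx

    with nidaqmx.Task() as task:
        task.ao_channels.add_ao_voltage_chan("Dev1/ao0")  # placeholder channel
        last = time.perf_counter()
        for i in range(500):
            task.write(0.0)
            now = time.perf_counter()
            dt_ms = (now - last) * 1000.0
            last = now
            if dt_ms > 0.5:            # crude outlier threshold
                print(f"write {i}: {dt_ms:.3f} ms")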

Message 3 of 5

Hi ColeV,

 

The consumer should be able to fall behind and still collect all of the data. When is the data lost? Is it at random times, or always at the end? If the producer loop is generating data faster than the consumer loop can keep up, the queue could eventually use up all of the memory on the computer and slow everything else down.

 

Also, is hardware timing being used on your output instead of software timing? You may have to explicitly wire in DAQmx VIs to make sure this is done correctly.
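
In text form, explicitly configuring hardware timing would look something like the following sketch (nidaqmx Python API; the channel name, rate, and chunk size are placeholders):

    # Sketch of a buffered, hardware-timed continuous output task: the card's
    # sample clock paces the samples, and the consumer just keeps the buffer fed.
    import numpy as np
    import nidaqmx
    from nidaqmx.constants import AcquisitionType, RegenerationMode

    with nidaqmx.Task() as task:
        task.ao_channels.add_ao_voltage_chan("Dev1/ao0")  # placeholder channel
        task.timing.cfg_samp_clk_timing(
            rate=1000.0, sample_mode=AcquisitionType.CONTINUOUS)
        # Disallow regeneration so the task always outputs fresh data.
        task.out_stream.regen_mode = RegenerationMode.DONT_ALLOW_REGENERATION
        task.write(np.zeros(1000), auto_start=False)      # prime the buffer
        task.start()
        for _ in range(10):
            task.write(0.1 * np.ones(1000))               # stream new chunks
        task.stop()

With a setup like this, a slow loop iteration eats into the buffer instead of delaying a sample.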

 

So far there still seem to be a lot of factors at play.

 

Regards,

Noah | Applications Engineer | National Instruments
Message 4 of 5

@ColeV wrote:

    This glitch is currently causing our control algorithm to fall irrecoverably behind.

If this ~750 µs timing glitch is enough to completely derail your control algorithm, you should probably look toward RT instead of Windows.

 

Having said that... you could try configuring hardware-timed single-point timing on the output task. This way, the output would be clocked out synchronously with the card's timebase. Once you add hardware-timed acquisition to the code (instead of the simulated acquisition you are running now), you can synchronize the tasks together and have a deterministic amount of time between the sample being taken and the sample being generated. Of course, getting this to work depends on the OS being able to handle the input and write it back to the output quickly enough to fit in your allotted timing window; on Windows this really can't be guaranteed.
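
In text form (nidaqmx Python API, placeholder channel name and rate), hardware-timed single point would look roughly like this sketch:

    # Sketch of hardware-timed single-point output: each write blocks until the
    # next edge of the card's sample clock, so timing comes from the 6351's
    # timebase rather than from when Windows schedules the loop.
    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    with nidaqmx.Task() as task:
        task.ao_channels.add_ao_voltage_chan("Dev1/ao0")  # placeholder channel
        task.timing.cfg_samp_clk_timing(
            rate=1000.0, sample_mode=AcquisitionType.HW_TIMED_SINGLE_POINT)
        task.start()
        for i in range(500):
            task.write(0.1 * (i % 10))  # one sample per clock period
        task.stop()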

 

 

Best Regards,

John Passiak
Message 5 of 5