
DAQmx Analog Output at 1 kHz is "glitching"

Hello experts,

 

I have a cRIO-9045 with analog output (NI-9265) and analog input (NI-9203) boards.  I am trying to create a PID controller with it. I have attached a simplified example of my code (not my actual code, which is far more complex).  This example demonstrates my exact problem, though.  I have an analog input task and an analog output task with a function block in between them.  So my output is continually changing based on my inputs.  If I run this code as-is, my AO task throws this error:

 

Error -200019: ADC Conversion was attempted before prior conversion was completed

 

This KB article doesn't seem to help me much.  I tried reducing the output buffer size (it doesn't help), and I really don't want to reduce my update rate (1 kHz is a serious goal).

 

If I switch to an "on demand" AO task, it can't execute at 1 kHz.  And if I enable "regeneration mode" then my code does execute at 1 kHz, but I am clearly getting the glitching regeneration problem (which is not acceptable).  So... how can I get my AO task to output at 1 kHz without errors and without glitching?
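
For reference, here is what I mean by those configurations, spelled out as a minimal sketch in the Python nidaqmx API (my real code is LabVIEW, and the channel name here is a placeholder):

import nidaqmx
from nidaqmx.constants import AcquisitionType, RegenerationMode

ao_task = nidaqmx.Task()
ao_task.ao_channels.add_ao_current_chan("cRIO1Mod2/ao0")   # NI-9265, placeholder name

# Buffered, clocked output at 1 kHz (what the attached example does):
ao_task.timing.cfg_samp_clk_timing(1000, sample_mode=AcquisitionType.CONTINUOUS)

# "Regeneration mode" is this property; enabling it lets DAQmx repeat old
# buffer data when no new samples arrive, which is the glitching I described:
ao_task.out_stream.regen_mode = RegenerationMode.ALLOW_REGENERATION

# "On demand" means skipping cfg_samp_clk_timing entirely and writing one
# sample per loop iteration, which can't keep up with 1 kHz.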

AO task is glitching.png

http://www.medicollector.com
Message 1 of 8

You have at least 4(!) different (and conflicting) Timing sources in the picture of your Block diagram:

  • The AO "clock", running at 1 kHz for 1000 samples (so it should "tick" once per second).
  • The AI "clock" running at 1 kHz for 1000 samples (so it should also "tick" once/second)
  • The Timed Loop, which appears to be set for 1 msec (1000 ticks of a 1 MHz clock)
  • A Time Delay inside the Timed Loop, which appears to be set for 10 sec (!!?), and is "unanchored" (so you don't know when, relative to everything else in the Timed Loop, its clock starts ticking).

If you want tasks to run synchronously, you want one timing structure to set the timing, and have the other tasks use Data Flow to run "relative" to that "Master" task.  I've not actually done much "hardware-in-the-loop" programming, but the notion is that your next "stimulus" output is based on the previous output plus any correction you need to make based on the current "input signal" from your AI channel.  Both should take 1 ms to run, and I would expect them to stay "in sync" (though it might work better if only one were on a Timing source, i.e. its internal clock, and the other were "on demand"; I'll leave that to colleagues with more experience in these matters).
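
To make the "one master clock" idea concrete, here is a minimal sketch using the Python nidaqmx API (the same settings live on the DAQmx Timing VI in LabVIEW).  The device, channel, and terminal names are assumptions, not taken from your VI:

import nidaqmx
from nidaqmx.constants import AcquisitionType

RATE = 1000  # 1 kHz update rate

ai_task = nidaqmx.Task()
ao_task = nidaqmx.Task()

ai_task.ai_channels.add_ai_current_chan("cRIO1Mod1/ai0")   # NI-9203, assumed name
ao_task.ao_channels.add_ao_current_chan("cRIO1Mod2/ao0")   # NI-9265, assumed name

# The AI task is the "Master": it owns the one and only 1 kHz sample clock.
ai_task.timing.cfg_samp_clk_timing(RATE, sample_mode=AcquisitionType.CONTINUOUS)

# The AO task borrows that clock instead of generating its own
# (the terminal name "/cRIO1/ai/SampleClock" is an assumption for this chassis).
ao_task.timing.cfg_samp_clk_timing(
    RATE, source="/cRIO1/ai/SampleClock", sample_mode=AcquisitionType.CONTINUOUS)

ao_task.write([0.0] * RATE)   # pre-fill the AO buffer so it has data at start
ao_task.start()               # armed first, so it waits for the shared clock
ai_task.start()               # starting AI starts the clock for both tasks

# The control loop would then read AI and keep writing fresh AO samples here.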

 

I'm sure you'll get additional useful feedback from Forum members who've done these things.

 

Bob Schor

Message 2 of 8

Thank you, Bob.  Though I'm not sure why you are saying the timing sources are conflicting.  The two hardware clocks execute at 1 kHz, which is a 1 msec period (not sure where you get 1 second from).  The timed loop is also 1 msec.  And that "time delay" is for 10 microseconds (not milliseconds), which is so small compared to our period that it can be ignored (I deleted it, and we get the same error).

 

The crux of my problem appears to be Error -200019.  Can someone explain that error in more basic terms?  I thought this was a buffered output task, so I would expect to see a buffer overrun or buffer underrun error.  But this error appears to be something else?  Is it just a poorly worded buffer overflow error?  Or does it mean the hardware can't keep up with the hardware clock?  Is there something about my hardware that is preventing it from outputting at 1 kHz?

http://www.medicollector.com
Message 3 of 8

LabVIEW RT allows PID control loops up to a couple hundred Hz, so 1 kHz is difficult/impossible. If you want faster control, you need to switch to FPGA.

 

What are you controlling?

 

Here's the answer from Google Search AI:

 

altenbach_0-1758467074573.png

 

Message 4 of 8

A problem with attaching a picture of a VI is that you can't really be sure what the icons mean.  When I saw the picture of the Time Delay, I looked in the Timing palette and saw the "N second" Time Delay.  I failed to notice that the icon was slightly different, and might be "something else", but I couldn't just "right-click on the image" and see what you were using.  My bad.

 

Your two Analog channels appear to run "one sample at a time", so why do you even have a clock signal and "Continuous Samples" (instead of "On Demand" and in a Timed Loop)?
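
By "On Demand" I mean single-sample reads and writes with no sample clock configured, roughly like this Python nidaqmx sketch (the channel names and the stand-in gain are placeholders, not your actual PID code):

import time
import nidaqmx

with nidaqmx.Task() as ai_task, nidaqmx.Task() as ao_task:
    ai_task.ai_channels.add_ai_current_chan("cRIO1Mod1/ai0")   # NI-9203, assumed name
    ao_task.ao_channels.add_ao_current_chan("cRIO1Mod2/ao0")   # NI-9265, assumed name

    setpoint = 0.010   # 10 mA target, arbitrary for the sketch
    gain = 0.5         # stand-in for the real PID block

    # No cfg_samp_clk_timing calls: every read/write is "on demand".
    while True:
        measured = ai_task.read()                      # one sample, right now
        output = gain * (setpoint - measured)
        output = max(0.0, min(0.020, output))          # clamp to the 0-20 mA range
        ao_task.write(output)                          # one sample, updated right now
        time.sleep(0.001)                              # ~1 ms, only as good as the OS timing

The loop period is then purely software-timed, which is presumably why your "on demand" attempt couldn't hold 1 kHz.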

 

The Three Laws of Data Flow ensure that the AI code (after passing through an unknown sub-VI whose icon is too small for me to see what it does, other than extract a Float from an Array of Floats) precedes the AO code.  When does the (unnecessary) Time Delay occur?  After the AI?  After the mysterious VI?  After the AO?  If the Timed Loop is really running at 1 kHz, why do you even need an internal Time Delay?

 

Bob Schor

Message 5 of 8

I recommend you use Hardware-Timed Single Point Mode instead of buffered sample clock mode for a high-speed PID controller application.
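
Roughly, that mode looks like this (a minimal sketch in the Python nidaqmx API; in LabVIEW you choose "Hardware Timed Single Point" as the sample mode on the DAQmx Timing VI).  Device and channel names are assumptions:

import nidaqmx
from nidaqmx.constants import AcquisitionType

RATE = 1000  # Hz

with nidaqmx.Task() as ai_task, nidaqmx.Task() as ao_task:
    ai_task.ai_channels.add_ai_current_chan("cRIO1Mod1/ai0")   # NI-9203, assumed name
    ao_task.ao_channels.add_ao_current_chan("cRIO1Mod2/ao0")   # NI-9265, assumed name

    # Hardware-Timed Single Point: no buffer, exactly one sample per clock tick,
    # so there is nothing to regenerate and no buffer to under- or overrun.
    ai_task.timing.cfg_samp_clk_timing(
        RATE, sample_mode=AcquisitionType.HW_TIMED_SINGLE_POINT)
    ao_task.timing.cfg_samp_clk_timing(
        RATE, sample_mode=AcquisitionType.HW_TIMED_SINGLE_POINT)

    ai_task.start()
    ao_task.start()

    setpoint = 0.010   # arbitrary 10 mA target for the sketch
    gain = 0.5         # stand-in for the real PID block

    while True:
        measured = ai_task.read()    # blocks until the next sample clock edge
        output = max(0.0, min(0.020, gain * (setpoint - measured)))
        ao_task.write(output)        # latched on the following AO clock edge

If the loop ever takes longer than 1 ms, DAQmx can report a missed sample clock instead of silently glitching, which is usually what you want in a control loop.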

-------------------------------------------------------
Applications Engineer | TME Systems
https://tmesystems.net/
-------------------------------------------------------
https://github.com/ZhiYang-Ong
Message 6 of 8

Thank you ZYOng!

 

That was the tip I needed!  I didn't know that Hardware-Timed Single Point Mode existed!  I'm gonna try it out and will report back.

http://www.medicollector.com
Message 7 of 8

Also, thanks to ZYOng from me.  Despite a lot of years with LabVIEW and USB DAQ devices, I never ran into the Timed Loop functions!  You can teach an Old Dog New Tricks!

 

Bob Schor

Message 8 of 8