
DAQmx Analog Output at 1 kHz is "glitching"

Hello experts,

 

I have a cRIO-9045 with analog output (NI-9265) and analog input (NI-9203) boards.  I am trying to create a PID controller with it. I have attached a simplified example of my code (not my actual code, which is far more complex).  This example demonstrates my exact problem, though.  I have an analog input task and an analog output task with a function block in between them, so my output is continually changing based on my inputs.  If I run this code as-is, my AO task throws this error:

 

Error -200019: ADC Conversion was attempted before prior conversion was completed

 

This KB article doesn't seem to help me much. I tried reducing the output buffer size (that doesn't help).  And I really don't want to reduce my update rate (1 kHz is a serious goal).

 

If I switch to an "on demand" AO task, it can't execute at 1 kHz.  And if I enable "regeneration mode" then my code does execute at 1 kHz, but I am clearly getting the glitching regeneration problem (which is not acceptable).  So... how can I get my AO task to output at 1 kHz, without errors and without glitching?
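
For readers who can't open the attachment, here is a rough text-API sketch of the setup as I've described it: buffered 1 kHz AI and AO tasks with regeneration disabled, reading one sample, computing a correction, and writing one sample per iteration.  Channel names are placeholders and the PID math is omitted.

```c
#include <NIDAQmx.h>

int main(void)
{
    TaskHandle ai = 0, ao = 0;
    float64 measured = 0.0, command = 0.004;   /* start at 4 mA */

    /* Buffered AI task: NI-9203 current input, 1 kHz sample clock */
    DAQmxCreateTask("", &ai);
    DAQmxCreateAICurrentChan(ai, "cRIO1Mod1/ai0", "", DAQmx_Val_Cfg_Default,
                             0.0, 0.02, DAQmx_Val_Amps, DAQmx_Val_Internal, 0.0, NULL);
    DAQmxCfgSampClkTiming(ai, "", 1000.0, DAQmx_Val_Rising, DAQmx_Val_ContSamps, 1000);

    /* Buffered AO task: NI-9265 current output, 1 kHz sample clock, no regeneration */
    DAQmxCreateTask("", &ao);
    DAQmxCreateAOCurrentChan(ao, "cRIO1Mod2/ao0", "", 0.0, 0.02, DAQmx_Val_Amps, NULL);
    DAQmxCfgSampClkTiming(ao, "", 1000.0, DAQmx_Val_Rising, DAQmx_Val_ContSamps, 1000);
    DAQmxSetWriteRegenMode(ao, DAQmx_Val_DoNotAllowRegen);

    DAQmxWriteAnalogScalarF64(ao, 0, 10.0, command, NULL);  /* prime the AO buffer */
    DAQmxStartTask(ai);
    DAQmxStartTask(ao);

    for (;;) {
        DAQmxReadAnalogScalarF64(ai, 10.0, &measured, NULL);   /* one sample in */
        /* command = PID(measured);  -- controller omitted in this sketch */
        DAQmxWriteAnalogScalarF64(ao, 0, 10.0, command, NULL); /* one sample out */
    }

    return 0;  /* never reached; real code would stop and clear both tasks */
}
```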

AO task is glitching.png

http://www.medicollector.com
0 Kudos
Message 1 of 10
(321 Views)

You have at least 4(!) different (and conflicting) Timing sources in the picture of your Block diagram:

  • The AO "clock", running at 1 kHz for 1000 samples (so it should "tick" once per second).
  • The AI "clock" running at 1 kHz for 1000 samples (so it should also "tick" once/second)
  • The Timed Loop, which appears to be set for 1 msec (1000 ticks of a 1 MHz clock)
  • A Time Delay inside the Timed Loop, which appears to be set for 10 sec (!!?), and is "unanchored" (so you don't know when, relative to everything else in the Timed Loop, its clock starts ticking).

If you want tasks to run synchronously, you want one timing structure to set the timing, and have the other tasks use Data Flow to run "relative" to the "Master" task.  I've not actually done much "hardware-in-the-loop" programming, but the notion is that your next "stimulus" output is based on the previous output plus any correction you need to make based on the current "input signal" from your AI channel.  Both should take 1 ms to run, and I would expect them to stay "in sync" (though it might work better if only one were on a Timing source, i.e. its internal clock, and the other were "on demand" -- I'll leave that to my colleagues with more experience in these matters).

 

I'm sure you'll get additional useful feedback from Forum members who've done these things.

 

Bob Schor

0 Kudos
Message 2 of 10
(273 Views)

Thank you, Bob.  Though I'm not sure why you are saying the timing sources are conflicting.  The two hardware clocks execute at 1 kHz, which is a 1 msec period (not sure where you get 1 second from).  The timed loop is also 1 msec.  And that "time delay" is for 10 microseconds (not milliseconds), which is so small compared to our period that it can be ignored (I deleted it, and we get the same error).

 

The crux of my problem appears to be the Error -200019.  Can someone explain that error in more basic terms?  I thought this was a buffered output task, so I would expect to see a buffer overrun or buffer underrun error.  But this error appears to be something else?  Is it just a poorly worded buffer overflow error?  Or does it mean the hardware can't keep up with the hardware clock?  Is there something about my hardware that is preventing it from outputting at 1 kHz?

http://www.medicollector.com
0 Kudos
Message 3 of 10
(224 Views)

LabVIEW RT allows PID control up to a couple hundred Hz, so 1 kHz is difficult, if not impossible. If you want faster control, you need to switch to FPGA.

 

What are you controlling?

 

Here's the answer from Google Search AI:

 

altenbach_0-1758467074573.png

 

0 Kudos
Message 4 of 10
(206 Views)

A problem with attaching a picture of a VI is that you can't really be sure what the icons mean.  When I saw the picture of the Time Delay, I looked in the Timing palette and saw the "N second" Time Delay.  I failed to notice that the icon was slightly different, and might be "something else", but I couldn't just "right-click on the image" and see what you were using.  My bad.

 

Your two Analog channels appear to run "one sample at a time", so why do you even have a clock signal and "Continuous Samples" (instead of "On Demand" inside a Timed Loop)?

 

The Three Laws of Data Flow ensure that the AI code (after passing through an unknown sub-VI whose icon image is too small for me to see what it does, other than extract a Float from an Array of Floats) precedes the AO code.  When does the (unnecessary) Time Delay occur?  After AI?  After the mysterious VI?  After AO?  If the Timed Loop is really running at 1 kHz, why do you even need an internal Time Delay?

 

Bob Schor

0 Kudos
Message 5 of 10
(195 Views)

I recommend you use Hardware-Timed Single Point Mode instead of sample clock mode for high-speed PID controller applications.
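
Not your code, but a minimal sketch of what that might look like through the DAQmx C API: the AO task runs in hardware-timed single-point mode at 1 kHz (no buffer, so no regeneration issue), the AI read stays software-timed, and DAQmxWaitForNextSampleClock paces the loop and flags late iterations.  Channel names are placeholders and the PID math is omitted.

```c
#include <NIDAQmx.h>

int main(void)
{
    TaskHandle ai = 0, ao = 0;
    float64 measured = 0.0, command = 0.004;
    bool32  late = 0;

    /* AI task left software-timed (on demand): no DAQmxCfgSampClkTiming call */
    DAQmxCreateTask("", &ai);
    DAQmxCreateAICurrentChan(ai, "cRIO1Mod1/ai0", "", DAQmx_Val_Cfg_Default,
                             0.0, 0.02, DAQmx_Val_Amps, DAQmx_Val_Internal, 0.0, NULL);

    /* AO task in hardware-timed single-point mode at 1 kHz */
    DAQmxCreateTask("", &ao);
    DAQmxCreateAOCurrentChan(ao, "cRIO1Mod2/ao0", "", 0.0, 0.02, DAQmx_Val_Amps, NULL);
    DAQmxCfgSampClkTiming(ao, "", 1000.0, DAQmx_Val_Rising,
                          DAQmx_Val_HWTimedSinglePoint, 1);

    DAQmxStartTask(ai);
    DAQmxStartTask(ao);

    for (;;) {
        DAQmxWaitForNextSampleClock(ao, 10.0, &late);  /* paces the loop at 1 kHz */
        DAQmxReadAnalogScalarF64(ai, 1.0, &measured, NULL);
        /* command = PID(measured);  -- controller omitted */
        DAQmxWriteAnalogScalarF64(ao, 0, 1.0, command, NULL);
        if (late) { /* missed a sample clock tick: log or handle as needed */ }
    }
    return 0;
}
```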

-------------------------------------------------------
Applications Engineer | TME Systems
https://tmesystems.net/
-------------------------------------------------------
https://github.com/ZhiYang-Ong
Message 6 of 10
(162 Views)

Thank you ZYOng!

 

That was the tip I needed!  I didn't know that Hardware-Timed Single Point Mode existed!  I'm gonna try it out and will report back.

http://www.medicollector.com
0 Kudos
Message 7 of 10
(142 Views)

Also, thanks to ZYOng from me.  Despite a lot of years with LabVIEW and USB DAQ devices, I never ran into the Timed Loop functions!  You can teach an Old Dog New Tricks!

 

Bob Schor

0 Kudos
Message 8 of 10
(128 Views)

Definitely try the hardware-timed single point, but you do NOT want a timed loop AND a DAQmx timer going at the same time. Since you wired -1 into the "read" terminal, it will read all available samples; but if there is some delay anywhere, you could wind up reading 2 samples or 0 samples.

 

Like Bob said, you need ONE timing source, not three sources set to the same value. If hardware-timed single point doesn't work, look up using shared clocks: configure your Read loop, export its clock to the Write loop, and use a normal While loop. Timed loops can sometimes do wonky things to their contents (I think they single-thread everything? I may be wrong there).
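
As a rough illustration of the shared-clock idea (assuming the same ai/ao task handles and channel setup as the sketches earlier in this thread): let the AI task own the sample clock and point the AO task's clock source at the AI sample clock terminal.  The terminal name "/cRIO1/ai/SampleClock" is a guess -- check the routes your cRIO-9045 actually exposes in NI MAX.

```c
/* AI owns the 1 kHz sample clock */
DAQmxCfgSampClkTiming(ai, "", 1000.0, DAQmx_Val_Rising, DAQmx_Val_ContSamps, 1000);

/* AO follows the AI clock (terminal name is an assumption -- verify in MAX) */
DAQmxCfgSampClkTiming(ao, "/cRIO1/ai/SampleClock", 1000.0, DAQmx_Val_Rising,
                      DAQmx_Val_ContSamps, 1000);

/* Prime the AO buffer, then start AO first so it is armed and waiting
   for clock edges before AI starts producing them. */
DAQmxWriteAnalogScalarF64(ao, 0, 10.0, command, NULL);
DAQmxStartTask(ao);
DAQmxStartTask(ai);
```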

 

At any rate, you already have something providing timing (DAQmx) so get rid of other stuff that's doing the timing.

 

(But yeah, try to get HTSP running first; I think that's exactly what it's for.)

0 Kudos
Message 9 of 10
(75 Views)

1. Kudos to both ZYOng and BertMcMahan.  Hardware-Timed Single Point is the best thing to try first.  (More detail later).

 

2. Bert's other point about having ONLY ONE timing source is also crucial.

 

I'm not well-versed in cRIO so the rest of my remarks may need some tweaks here and there.  But they should be at least *mostly* helpful.

 

3. For a PID control application, I'd start by considering the analog output signal to be the most critical one for consistent hardware-level timing.  So AO should be the first task you configure for HTSP sampling mode.  At least initially, leave the AI task in software-timed on-demand mode (by never calling DAQmx Timing to configure a clock).

    First try this with the AI and AO executing in parallel (no data dependency).  If that works at a 1000 Hz loop rate, try again with them executing in series (similar to your original screenshot).  If not, figure out what kind of loop rate you *can* sustain with both parallel and series execution.

 

4. If you eventually want to run *both* tasks in HTSP mode, I'd recommend that you use an internal counter from the cRIO chassis to generate the clock that both AI and AO use for sampling.  But you should have them sample on different *edges* of that clock.

    Using a counter gives you the ability to dial in a preferred duty cycle, which lets you make explicit tradeoffs between latency (the time from AI measurement of system state until AO control output) and system response allowance (the time from AO control output until measuring the system's response behavior with AI).  A rough sketch of this counter-clocked arrangement is at the end of this post.

 

5. The purpose of testing loop rates with AI and AO executing in *parallel* is that it opens up the possibility of a pipelining approach, especially if you *also* control your duty cycle by using a counter for your sample clock(s).   That could look something like this:

 

                                                 Kevin_Price_0-1758592699499.png

It's a little subtle, but if you work through it carefully, you can see how a parallel pipelining arrangement might be able to support a faster PID loop rate.
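
Here's a minimal text-API sketch of the counter-clocked arrangement from point 4, under a few assumptions: the counter name "cRIO1/ctr0", the terminal "/cRIO1/Ctr0InternalOutput", and the channel names are all placeholders (check the routes your cRIO-9045 actually exposes in NI MAX), and the PID math is omitted.  A 25% duty cycle puts the AO update 250 us after the AI sample, leaving 750 us of system response time before the next measurement.

```c
#include <NIDAQmx.h>

int main(void)
{
    TaskHandle clk = 0, ai = 0, ao = 0;
    float64 measured = 0.0, command = 0.004;
    bool32  late = 0;

    /* 1 kHz clock, 25% duty cycle: AI (rising edge) leads AO (falling edge) by 250 us */
    DAQmxCreateTask("", &clk);
    DAQmxCreateCOPulseChanFreq(clk, "cRIO1/ctr0", "", DAQmx_Val_Hz,
                               DAQmx_Val_Low, 0.0, 1000.0, 0.25);
    DAQmxCfgImplicitTiming(clk, DAQmx_Val_ContSamps, 1000);

    /* AI samples on the rising edge of the counter clock, HTSP mode */
    DAQmxCreateTask("", &ai);
    DAQmxCreateAICurrentChan(ai, "cRIO1Mod1/ai0", "", DAQmx_Val_Cfg_Default,
                             0.0, 0.02, DAQmx_Val_Amps, DAQmx_Val_Internal, 0.0, NULL);
    DAQmxCfgSampClkTiming(ai, "/cRIO1/Ctr0InternalOutput", 1000.0,
                          DAQmx_Val_Rising, DAQmx_Val_HWTimedSinglePoint, 1);

    /* AO updates on the falling edge of the same clock, HTSP mode */
    DAQmxCreateTask("", &ao);
    DAQmxCreateAOCurrentChan(ao, "cRIO1Mod2/ao0", "", 0.0, 0.02, DAQmx_Val_Amps, NULL);
    DAQmxCfgSampClkTiming(ao, "/cRIO1/Ctr0InternalOutput", 1000.0,
                          DAQmx_Val_Falling, DAQmx_Val_HWTimedSinglePoint, 1);

    DAQmxStartTask(ai);
    DAQmxStartTask(ao);
    DAQmxStartTask(clk);          /* start the clock last so nothing misses edges */

    for (;;) {
        DAQmxWaitForNextSampleClock(ai, 10.0, &late);
        DAQmxReadAnalogScalarF64(ai, 1.0, &measured, NULL);
        /* command = PID(measured);  -- controller omitted */
        DAQmxWriteAnalogScalarF64(ao, 0, 1.0, command, NULL);
    }
    return 0;
}
```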

 

 

-Kevin P

 

 

ALERT! LabVIEW's subscription-only policy came to an end (finally!). Unfortunately, pricing favors the captured and committed over new adopters -- so tread carefully.
Message 10 of 10
(52 Views)