
DAQmx: read value(s) with delay

Sorry, I've been fortunate in dealing with "simple timing" situations.  However, I'm sure other Forum readers with the proper experience will see this and chime in (and maybe even point out flaws in my code -- where did the extra 25 msec come from?)

 

Bob Schor

Message 11 of 28

Oops, I spoke too soon -- I should have tested my own code (and my own device).  So here's what I did --

 

  1. I sprinkled 5 High Precision Timers in the code that I posted.  Two were back-to-back before the D/A loop to estimate how much time the "Timer" took to produce a time.
  2. The third was right after the D/A (I was curious how long it took to output a sample).
  3. The fourth was right after the 50 msec delay.
  4. The last was right after the 100-point (at 10KHz) A/D sample.
  5. All of the Timers were tied to Error lines.  The first three were on the Error Line through the D/A function, while the last two were before and after the A/D read.

Here are the results after running about 10 seconds (generating about 100 results), expressed as Mean ± Standard Deviation.

 

Delay caused by the timers:  2 ± 1 microseconds
D/A conversion:              1.0 ± 0.2 milliseconds
Delay:                       56 ± 0.3 milliseconds
A/D conversion:              100 ± 31 microseconds

 

Aha!  If the A/D conversion had started when I did the DAQmx Read, the timing should have been 10 milliseconds (100 points at 10 kHz), not 0.1 milliseconds.  So the code is wrong -- the Start belongs on the other side of the Wait.
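In text form, the corrected ordering is roughly this (a sketch using the nidaqmx Python API rather than LabVIEW; "Dev1" and the channel names are placeholders, not taken from the posted VI):

    import time
    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    # D/A: a single on-demand voltage update (the stimulus)
    ao = nidaqmx.Task()
    ao.ao_channels.add_ao_voltage_chan("Dev1/ao0")

    # A/D: 100 points at 10 kHz, finite acquisition
    ai = nidaqmx.Task()
    ai.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    ai.timing.cfg_samp_clk_timing(10000, sample_mode=AcquisitionType.FINITE,
                                  samps_per_chan=100)

    ao.write(1.0)                 # D/A conversion
    time.sleep(0.050)             # the 50 msec delay

    t0 = time.perf_counter()
    ai.start()                    # Start *after* the Wait, not before it
    data = ai.read(number_of_samples_per_channel=100)   # ~10 ms of sampling
    print("A/D read took %.1f ms" % (1000.0 * (time.perf_counter() - t0)))

    ai.stop()
    for t in (ai, ao):
        t.close()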

 

That's much better.  Here are revised timings:

 

Delay caused by the timers:  4 ± 2 microseconds
D/A conversion:              0.6 ± 0.05 milliseconds
Delay:                       49 ± 0.08 milliseconds
A/D conversion:              18 ± 0.7 milliseconds

 

So the Delay is pretty accurate (provided you have the rest of the code right, of course).  Yes, now that we are starting the A/D after the delay, we may have to do "timing tests" such as this to figure out how much time (8 milliseconds?) is taken up "getting ready" (unless we discover how to "start" the A/D but tell it to wait for a hardware trigger signal).

 

It is also not a bad idea to test your code (as I initially failed to do, but now have done), as it might prove to be illuminating.

 

Bob Schor

 

 

Message 12 of 28

Pavel,

 

I think the following is what you were describing in the original message.  Generate an AO stimulus, wait 50 msec for your system to respond and settle, capture AI to do averaging, output the average as AO.

 

I included partial code below.  It assumes the tasks are *NOT* using any kind of triggering.  The AO tasks should be started before the loop, and the AI task should be committed before the loop (as shown).

 

The expectation is that the 2nd AO will occur about 60-61 msec after the 1st: 50 msec for the explicit delay, plus about 10 msec for the 100 samples at 10 kHz.  Adjust as needed.
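Since the snippet is attached as an image, here is roughly the same software-timed sequence sketched with the nidaqmx Python API (device/channel names and the example stimulus values are placeholders, not part of the original snippet):

    import time
    import nidaqmx
    from nidaqmx.constants import AcquisitionType, TaskMode

    stim_ao = nidaqmx.Task()          # AO that delivers the stimulus
    stim_ao.ao_channels.add_ao_voltage_chan("Dev1/ao0")

    avg_ao = nidaqmx.Task()           # AO that outputs the averaged response
    avg_ao.ao_channels.add_ao_voltage_chan("Dev1/ao1")

    ai = nidaqmx.Task()               # AI: 100 samples at 10 kHz per iteration
    ai.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    ai.timing.cfg_samp_clk_timing(10000, sample_mode=AcquisitionType.FINITE,
                                  samps_per_chan=100)

    stim_ao.start()                   # AO tasks started before the loop
    avg_ao.start()
    ai.control(TaskMode.TASK_COMMIT)  # AI committed before the loop, so the
                                      # start/stop inside the loop stays cheap

    for stimulus in [0.5, 1.0, 1.5]:  # placeholder stimulus sequence
        stim_ao.write(stimulus)                              # 1st AO
        time.sleep(0.050)                                    # 50 msec settling delay
        ai.start()
        data = ai.read(number_of_samples_per_channel=100)    # ~10 msec of AI
        ai.stop()
        avg_ao.write(sum(data) / len(data))                  # 2nd AO, ~60 ms after the 1st

    for t in (stim_ao, avg_ao, ai):
        t.close()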

 

 

-Kevin P

 

easy sw timing.png

Message 13 of 28

Thanks Kevin and Bob,

 

I've tried the enhanced solution proposed by Kevin (snippet below), and the delay seems to be perfect ... probably because I'm using a virtual device (I tried it at home).

 

I will try it next Monday on a real device and report the results.

 

But besides that, do you really think that HW synchronization (i.e. triggering with a specified delay) is unachievable for this kind of setup?

I've looked through a number of examples and all of them use HW synchronization.

Of course, all of them use a predefined waveform for the DA output (not one generated inside a loop as in my case).

So, does this particularity of my setup make HW synchronization unachievable?

 

Thanks

 

acquisition_with_delay_iteration_time_measurement.JPG

 

 

acquisition_with_delay [Kevin_Price].png

Message 14 of 28

Hi Pavel,

a few comments: 

1) The error -200284 most probably appeared because the trigger works only once, at the beginning of the acquisition. To have it armed again you would need to set the task as retriggerable and use a different signal as the trigger (the start trigger will also be generated only once - at the beginning of the acquisition).

2) If you want your code to run continuously, the AI acquisition cannot be finite (starting the acquisition/generation inside the loop is not the best idea, especially if you want tight synchronization at the millisecond level).

3) Usually the best way to create unusual timing configurations is to use counters. Counter functionality depends on the device family - you can find more details on this topic in the manual (information relevant to the X Series is here: http://www.ni.com/pdf/manuals/370784g.pdf#page=155 ). The general idea is the following: you configure AO to use the counter output as its sample clock, connect the AI sample clock to the counter input, and create a counter task that outputs a single pulse with 50 ms of low time every time it gets a rising edge on its input. The outcome? AO will output a single sample exactly 50 ms after the AI is sampled (this is the effective delay we create). A rough sketch of this wiring is below. Please let me know if something is unclear. I can elaborate more if you are interested.
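In text form, the wiring might look roughly like this (a sketch using the nidaqmx Python API rather than LabVIEW; "Dev1", the channels and the rates are placeholder assumptions, and the exact retriggerable-pulse setup can vary by device):

    import nidaqmx
    from nidaqmx.constants import AcquisitionType, Edge, Level

    # Counter: one pulse with 50 ms of low time each time the AI sample clock
    # ticks; retriggering re-arms it for every new AI sample.
    ctr = nidaqmx.Task()
    ctr.co_channels.add_co_pulse_chan_time("Dev1/ctr0", low_time=0.050,
                                           high_time=0.001, idle_state=Level.HIGH)
    ctr.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev1/ai/SampleClock",
                                                       trigger_edge=Edge.RISING)
    ctr.triggers.start_trigger.retriggerable = True

    # AI: hardware-clocked; its sample clock is what triggers the counter.
    ai = nidaqmx.Task()
    ai.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    ai.timing.cfg_samp_clk_timing(10, sample_mode=AcquisitionType.CONTINUOUS)

    # AO: clocked by the counter's internal output, so each AO update lands
    # about 50 ms after the AI sample that fired the pulse.
    ao = nidaqmx.Task()
    ao.ao_channels.add_ao_voltage_chan("Dev1/ao0")
    ao.timing.cfg_samp_clk_timing(10, source="/Dev1/Ctr0InternalOutput",
                                  active_edge=Edge.RISING,
                                  sample_mode=AcquisitionType.CONTINUOUS)
    ao.write([0.0, 1.0, 2.0, 1.0])    # preload stimulus values (regenerated here)

    ctr.start()
    ao.start()                        # AO now waits for counter edges
    ai.start()                        # AI samples start triggering the counter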

Message 15 of 28

 

I'm fully in agreement with stockson that continuous AI would be a better choice.  I didn't want to get into it b/c it looks like your app wants to ignore most of the AI data, and only pay attention for about 10 msec out of every 100.  The way *I* would approach that situation is to use DAQmx Read properties to always ask for the most recent sampled data rather than gathering a continuous stream and throwing out 90% of it after the fact.  Here's an older post that shows how to set that up.
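The "read only the most recent data" idea can be sketched like this (nidaqmx Python API; the RelativeTo/Offset read properties are the ones meant above, and the channel name and rate are placeholders):

    import time
    import nidaqmx
    from nidaqmx.constants import AcquisitionType, ReadRelativeTo

    ai = nidaqmx.Task()
    ai.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    ai.timing.cfg_samp_clk_timing(10000, sample_mode=AcquisitionType.CONTINUOUS)

    # Read relative to the most recent sample, backed up by 100 samples, so each
    # read returns the newest ~10 ms of data and silently skips everything else.
    ai.in_stream.relative_to = ReadRelativeTo.MOST_RECENT_SAMPLE
    ai.in_stream.offset = -100

    ai.start()
    time.sleep(0.05)                  # let at least 100 samples accumulate
    latest = ai.read(number_of_samples_per_channel=100)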

 

As to the hardware timing sync, yes there are probably ways to accomplish what you need but it'll take a relatively advanced bit of DAQmx programming to pull it off.  Doable, but not trivial.  Spend a little time learning about  "Hardware Timed Single Point" sampling mode, which is what you'll need for the AO task(s).  On a given board you'll probably only be allowed a single AO task that uses hw timing, so you'll need to figure out how to handle 2 channels of output that need to be updated at different instants.  (Hint: calculate once, write twice)   The exact timing relationship between the AO and AI isn't clear to me yet either, but you'll probably want something more repeatable than a msec wait timer.  Syncing them in hw is complicated by the fact that you want the AI running at a much faster rate so you can do some noise-suppressing averaging.  Therefore, the AI and AO can't simply share a sample clock.

 

In short, yes you can probably get there from here, but there's a lot to be learned and tried along the way.  I suggest you start by taking the software-timed example and converting the AI to a continuous sampling task.   Next, I'd suggest playing around with hw-timed single point AO as a separate dedicated program before integrating it in with this one.   And so on, one step at a time.

 

 

-Kevin P

Message 16 of 28

stockson wrote:

Hi Pavel,

a few comments: 

1) The error -200284 most probably appeared because the trigger works only once, at the beginning of the acquisition. To have it armed again you would need to set the task as retriggerable and use a different signal as the trigger (the start trigger will also be generated only once - at the beginning of the acquisition).

2) If you want your code to run continuously, the AI acquisition cannot be finite (starting the acquisition/generation inside the loop is not the best idea, especially if you want tight synchronization at the millisecond level).

3) Usually the best way to create unusual timing configurations is to use counters. Counter functionality depends on the device family - you can find more details on this topic in the manual (information relevant to the X Series is here: http://www.ni.com/pdf/manuals/370784g.pdf#page=155 ). The general idea is the following: you configure AO to use the counter output as its sample clock, connect the AI sample clock to the counter input, and create a counter task that outputs a single pulse with 50 ms of low time every time it gets a rising edge on its input. The outcome? AO will output a single sample exactly 50 ms after the AI is sampled (this is the effective delay we create). Please let me know if something is unclear. I can elaborate more if you are interested.


Hi Stockson,

 

I'm trying to implement your scenario.

Indeed, there are some points that I didn't properly understand.

Here is a draft that I've composed, based on your suggestions (at least as I understood them).

As you can see, it's only a rough draft ... only a few connections are wired.

For example, the 2nd point was not clear to me: how should the sampling source for the AI be managed?

Counter task is configured as follows:

Output terminal: PFI12

Trigger Source: PFI0

Edge: rising

Trigger Type: Digital Edge

High time: 50msec

Low time: 50msec

Idle State: high

 

Should I create an additional task that outputs a square waveform on PFI0 (with a period that corresponds to the timing of my while loop) and also make it the sampling source for the AI?

 

Thanks in advance.

 

Pavel.

 

read_input_with_delay_HW (0) [Stockson scenario].png

 

Message 17 of 28

Kevin_Price wrote:

 

I'm fully in agreement with stockson that continuous AI would be a better choice.  I didn't want to get into it b/c it looks like your app wants to ignore most of the AI data, and only pay attention for about 10 msec out of every 100.  The way *I* would approach that situation is to use DAQmx Read properties to always ask for the most recent sampled data rather than gathering a continuous stream and throwing out 90% of it after the fact.  Here's an older post that shows how to set that up.

 

As to the hardware timing sync, yes there are probably ways to accomplish what you need but it'll take a relatively advanced bit of DAQmx programming to pull it off.  Doable, but not trivial.  Spend a little time learning about  "Hardware Timed Single Point" sampling mode, which is what you'll need for the AO task(s).  On a given board you'll probably only be allowed a single AO task that uses hw timing, so you'll need to figure out how to handle 2 channels of output that need to be updated at different instants.  (Hint: calculate once, write twice)   The exact timing relationship between the AO and AI isn't clear to me yet either, but you'll probably want something more repeatable than a msec wait timer.  Syncing them in hw is complicated by the fact that you want the AI running at a much faster rate so you can do some noise-suppressing averaging.  Therefore, the AI and AO can't simply share a sample clock.

 

In short, yes you can probably get there from here, but there's a lot to be learned and tried along the way.  I suggest you start by taking the software-timed example and converting the AI to a continuous sampling task.   Next, I'd suggest playing around with hw-timed single point AO as a separate dedicated program before integrating it in with this one.   And so on, one step at a time.

 

 

-Kevin P


Kevin,

 

I've tried your suggestion on HW and it works quite properly - the delay is about 60 ms (measured with an oscilloscope), which is explained by the conversion time.

Concerning my application, it's actually simpler than what I described in my previous posts.

The idea is to take measurements in a loop:

  1. AO generates a voltage
  2. The test bench reacts to this stimulus (the settling time isn't defined, so the acquisition delay should be adjustable)
  3. The system acquires input data - many samples (adjustable ... can be up to the end of the while loop iteration)
  4. The acquired data is averaged and then processed

To me this kind of scenario is one of the most generic, so I was surprised that there are no templates for it (here I mean HW synchronization).

 

Thanks

 

Pavel

Message 18 of 28

Hi Pavel,

The configuration of the counter should be Retriggerable Single Pulse Generation and the trigger should be set to AI sample clock (you started well; if you need help there is an example in LabVIEW that shows how to do it - Counter - Single Pulse Output).

 

It would result in something like this (screenshot is taken from the M Series manual):

 

Untitled.png

 

You don't need to worry about the AI clock; it will be generated automatically.
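So, relative to the earlier draft, only the counter's trigger needs to change - something roughly like this (again a nidaqmx Python sketch with placeholder names; no extra square-wave task on PFI0 is needed):

    import nidaqmx
    from nidaqmx.constants import Edge, Level

    # Retriggerable single pulse: 50 ms of low time, re-armed on every rising edge
    # of the AI task's own sample clock (not a PFI line).
    ctr = nidaqmx.Task()
    ctr.co_channels.add_co_pulse_chan_time("Dev1/ctr0", low_time=0.050,
                                           high_time=0.001, idle_state=Level.HIGH)
    ctr.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev1/ai/SampleClock",
                                                       trigger_edge=Edge.RISING)
    ctr.triggers.start_trigger.retriggerable = True
    ctr.start()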

Message 19 of 28

stockson wrote:

Hi Pavel,

The configuration of the counter should be Retriggerable Single Pulse Generation and the trigger should be set to AI sample clock (you started well; if you need help there is an example in LabVIEW that shows how to do it - Counter - Single Pulse Output).

 

It would result in something like this (screenshot is taken from the M Series manual):

 

 

 

You don't need to worry about the AI clock; it will be generated automatically.



Hi Stockson,

 

Thanks for the feedback.

I've tried the "Retriggerable Single Pulse Generation" example. It works.

Then I tried to implement something similar for my case.

Here is the code:

 

read_input_with_delay_HW (1) [Stockson scenario].png

 

 

 

Unfortunately, an error is generated when I run it:

counter_task_settings_error.JPG

 

I didn't understand what this error means, as I have only one virtual channel. Here is a snapshot of the counter task settings:

counter_task_settings.JPG

 

 

 

Message 20 of 28