LabVIEW


DAQmx: read value(s) with delay

I'm a bit confused - if the task is created in MAX, why do you create it again programmatically? That will surely give you an error 🙂

 

And please remember the order in which all tasks need to be started. All slaves need to be running before the master is started ("master" in this context is the task which supplies the initial timing signal that is later used to generate other timing signals).
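
In text-API terms, the rule amounts to this; a minimal sketch using the Python nidaqmx package, where the device name "Dev1", the rates, and the clock routing are illustrative assumptions rather than details from this thread:

    import numpy as np
    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    with nidaqmx.Task() as master, nidaqmx.Task() as slave:
        # The master AO task supplies the sample clock that the slave AI task reuses.
        master.ao_channels.add_ao_voltage_chan("Dev1/ao0")
        master.timing.cfg_samp_clk_timing(1000.0, sample_mode=AcquisitionType.CONTINUOUS)
        master.write(np.sin(2 * np.pi * np.arange(1000) / 1000).tolist(), auto_start=False)

        slave.ai_channels.add_ai_voltage_chan("Dev1/ai0")
        slave.timing.cfg_samp_clk_timing(
            1000.0, source="/Dev1/ao/SampleClock", sample_mode=AcquisitionType.CONTINUOUS)

        slave.start()   # slave first: it arms and waits for clock edges
        master.start()  # master last: its clock now drives both tasks
        data = slave.read(number_of_samples_per_channel=1000)
        master.stop()
        slave.stop()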

Message 21 of 28

stockson wrote:

I'm a bit confused - if the task is created in MAX, why do you create it again programmatically? That will surely give you an error 🙂

 

And please remember the order in which all tasks need to be started. All slaves need to be running before the master is started ("master" in this context is the task which supplies the initial timing signal that is later used to generate other timing signals).


Rather, I completed the task created in MAX programmatically.

The reason: in MAX I couldn't select the AI sample clock as the trigger source; it simply doesn't appear in the "Trigger source" drop-down list.

 

counter_task_settings - triggering.JPG
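
A rough text-API rendering of that programmatic completion, as a Python nidaqmx sketch (the counter name "Dev1/ctr0", the 50 ms delay, and the retrigger-delay property are assumptions about the setup, not details read from the screenshot):

    import nidaqmx

    with nidaqmx.Task() as ctr:
        # One short pulse per trigger, delayed 50 ms from the trigger edge.
        ch = ctr.co_channels.add_co_pulse_chan_time(
            "Dev1/ctr0", initial_delay=0.050, low_time=0.001, high_time=0.001)
        ch.co_enable_initial_delay_on_retrigger = True  # honor the delay on every retrigger
        # MAX's drop-down doesn't offer the AI sample clock, but in code the
        # terminal can simply be named as the digital-edge trigger source:
        ctr.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev1/ai/SampleClock")
        ctr.triggers.start_trigger.retriggerable = True
        ctr.start()  # armed: one delayed pulse per AI Sample Clock edge
        input("Counter armed; press Enter to stop...")
        ctr.stop()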

 

 

I've modified the VI to eliminate the MAX stuff. Here is the new version:

 

read_input_with_delay_HW (1a) [Stockson scenario].png

 

 

Unfortunately, other problems appeared.

Error -201025
Non-buffered hardware-timed operations are not supported for this device and Channel Type.

The workaround that I found on http://digital.ni.com/ consists of "calling a DAQmx Write before the DAQmx Start function".

 

When I apply this workaround (i.e. insert a DAQmx Write into the AO path just before DAQmx Start Task), another error appears:

Error -200609

Generation can't be started because the selected buffer is too small.

 

Then I added a block that configures the output buffer ... and another error appeared:

Error -200479

Specified operation cannot be performed while the task is running
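
For the record, an operation order that avoids this whole error chain looks roughly like the following Python nidaqmx sketch ("Dev1" and the numbers are illustrative assumptions): buffered sample-clock timing instead of non-buffered (-201025), a write before start so the buffer is sized automatically (-200609), and all configuration done before the task runs (-200479).

    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    with nidaqmx.Task() as ao:
        ao.ao_channels.add_ao_voltage_chan("Dev1/ao0")
        # Buffered hardware timing, not non-buffered (avoids -201025):
        ao.timing.cfg_samp_clk_timing(
            1000.0, sample_mode=AcquisitionType.FINITE, samps_per_chan=100)
        # Writing before start sizes the output buffer (avoids -200609),
        # and nothing is reconfigured after start (avoids -200479):
        ao.write([1.0] * 100, auto_start=False)
        ao.start()
        ao.wait_until_done()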

 

Thanks in advance

 

 

Message 22 of 28

 

I'd suggest taking a step back for a moment.  We're starting to get to the point where we're trying to chase all the rabbits at the same time.  I don't think we'll be able to catch them all at once so let's get back to some basics.

 

1. What DAQ device are you using?

2. Will your real app have a 2nd AO occurring in the same loop iteration as the 1st AO?  Or is that just a temporary task used only to help you troubleshoot and check timing?

3. How precise do you need the time to be from the 1st AO until the AI sampling?    What matters more, time to 1st sample or time to last sample used in the averaging?  

4. How precise do you need the time to be from the 1st AO until the 2nd AO?

5. How precise do you need the time to be from 1st AO on this loop iteration to 1st AO on next loop iteration?

6. Do you need to *control* the timing precisely, or would it be sufficient to *measure* it precisely?

 

Any place where you consider timing precision to be crucial, please explain clearly exactly *why* it's so important.  Low-latency hw-sync over multiple tasks of varying rate is not trivial to implement, and may not even be possible with your particular DAQ device.   Low-latency software timing over multiple tasks is fairly simple, and will generally perform pretty well.  Probably within a msec or two 90-99+% of the time.  In other words, I want to be sure we really *need* to catch the rabbits we chase.
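
As a sense check of what "low-latency software timing" means in practice, a sketch along these lines (Python nidaqmx; the device name and the 50 ms delay are assumptions) is about all it takes:

    import time
    import nidaqmx

    # Two on-demand (software-timed) tasks sequenced by the OS scheduler.
    with nidaqmx.Task() as ao, nidaqmx.Task() as ai:
        ao.ao_channels.add_ao_voltage_chan("Dev1/ao0")
        ai.ai_channels.add_ai_voltage_chan("Dev1/ai0")
        for level in (0.0, 1.0, 2.0):
            ao.write(level)      # software-timed AO update
            time.sleep(0.050)    # ~50 ms delay, good to a msec or two on a desktop OS
            reading = ai.read()  # software-timed single-point AI read
            print(f"AO = {level:.1f} V -> AI = {reading:.4f} V")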

 

 

-Kevin P

 

Message 23 of 28
OK, my answers are below.
@Kevin_Price wrote:

 

I'd suggest taking a step back for a moment.  We're starting to get to the point where we're trying to chase all the rabbits at the same time.  I don't think we'll be able to catch them all at once so let's get back to some basics.

 

1. What DAQ device are you using?

The main device is a USB-6343, but as it is connected to the test bench, I do some exercises with a less capable USB DAQ (I don't remember the model, as I'm at home right now).

2. Will your real app have a 2nd AO occurring in the same loop iteration as the 1st AO?  Or is that just a temporary task used only to help you troubleshoot and check timing?

Yes, the 2nd AO is just for troubleshooting ... it lets me check the timing with an oscilloscope.

3. How precise do you need the time to be from the 1st AO until the AI sampling?    What matters more, time to 1st sample or time to last sample used in the averaging?

Let's say several milliseconds ... we have different setups (based on lasers) and we haven't yet tried all of them (also, please see point 5).

4. How precise do you need the time to be from the 1st AO until the 2nd AO?

The 2nd AO is just for troubleshooting.

5. How precise do you need the time to be from 1st AO on this loop iteration to 1st AO on next loop iteration?

For the moment we don't know ... it will become clearer as we advance with our experiment.

6. Do you need to *control* the timing precisely, or would it be sufficient to *measure* it precisely?

For the current experiment it isn't so critical, but my objective is also to learn DAQmx timing techniques for future, more complex setups.

 

Thanks

Pavel

 


Message 24 of 28

I think I got confused along the way, and I am sorry for that. Somehow in my head it switched and I was sure it was the "second" AO that needs to wait 50 ms after the AI acquires the sample. I can now see this is not the case. So what I wrote before needs to be corrected: the AO is the master and the AI is the slave (which means all the wiring has to be corrected).

 

When it comes to the error: did you disable "auto start" in the first DAQmx Write block? You shouldn't have to set the buffer on your own, though it is important to set correct values for "samples per channel" and "rate".

Message 25 of 28

 

Pavel,

 

Here's some code that appeared to function correctly for me on a PCIe-6341, an X-series board much like your own.  There's a possibility that your USB-based board won't support the hw-timed single point sample mode I used.  On my end, it runs without error, and both the AI readings and msec timing check look correct.

 

If you get errors from hw-timed single point mode, I'm not sure where to go next.  The regular hw-timed sampling mode for AO uses a buffer and your newly calculated AO values will have to wait at the end of the line, leading to a variable amount of latency from the time you calculate your desired AO out value until it goes into the real world as a signal.

 

Snippet, front panel screenshot, and attached code follow.

 

 

-Kevin P

 

hw timed stim response.png

 

hw timed stim response - front panel.png
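
The snippet itself is LabVIEW; as a rough text-API rendering of the pattern Kevin describes (not a transcription of his VI), a Python nidaqmx sketch might look like this, with the device name, rates, and the 50 ms trigger delay all assumed:

    import nidaqmx
    from nidaqmx.constants import AcquisitionType, DigitalWidthUnits

    with nidaqmx.Task() as ao, nidaqmx.Task() as ai:
        ao.ao_channels.add_ao_voltage_chan("Dev1/ao0")
        # Hardware-timed single point: one sample per AO clock tick, no buffer.
        ao.timing.cfg_samp_clk_timing(
            10.0, sample_mode=AcquisitionType.HW_TIMED_SINGLE_POINT)

        ai.ai_channels.add_ai_voltage_chan("Dev1/ai0")
        ai.timing.cfg_samp_clk_timing(
            10000.0, sample_mode=AcquisitionType.FINITE, samps_per_chan=100)
        trig = ai.triggers.start_trigger
        trig.cfg_dig_edge_start_trig("/Dev1/ao/SampleClock")
        trig.retriggerable = True   # re-arm for every AO update
        trig.delay_units = DigitalWidthUnits.SECONDS
        trig.delay = 0.050          # 50 ms from AO update to AI burst

        ai.start()                  # slave armed first
        ao.start()                  # master last
        for level in (0.0, 1.0, 2.0):
            ao.write(level)         # paced by the 10 Hz AO sample clock
            data = ai.read(number_of_samples_per_channel=100)
            print(f"AO = {level:.1f} V -> mean AI = {sum(data) / len(data):.4f} V")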

Message 26 of 28

Kevin_Price wrote:

 

Pavel,

 

Here's some code that appeared to function correctly for me on a PCIe-6341, an X-series board much like your own.  There's a possibility that your USB-based board won't support the hw-timed single point sample mode I used.  On my end, it runs without error, and both the AI readings and msec timing check look correct.

 

If you get errors from hw-timed single point mode, I'm not sure where to go next.  The regular hw-timed sampling mode for AO uses a buffer and your newly calculated AO values will have to wait at the end of the line, leading to a variable amount of latency from the time you calculate your desired AO out value until it goes into the real world as a signal.

 

Snippet, front panel screenshot, and attached code follow.

 

 

-Kevin P

 

 

 

 


Thanks, Kevin.

 

I've tried your example on my lower-performance device (USB-6251).

Unfortunately it produces an error:

 

error_while_running_KevinPrice_code.JPG

 

Obviously it doesn't like the Hardware-Timed Single Point value for the AO Sample Mode parameter.

I will try it on the more capable device (also USB); at the moment it's being used for measurements.

 

Another question: in your code "StartRetriggerable" (in the AI path) is set to FALSE. Why use this block at all ... I suppose FALSE is the default value?

Thanks.

Message 27 of 28

 

First, nice catch.  Setting Retriggerable == False was simply a mistake on my part; I meant to set it to True.  Everything runs without errors here either way, but the real-world signal timing would have been wrong when set to False.

 

I suspect that HW-Timed Single Point acquisition isn't supported on any USB device, due to the nature of the USB bus and its latency issues.  You might need to use SW timing for your AO signal.  If so, there still might be an internal timing signal we can extract from the AO task to control the delay until the AI.  I'll look around a bit later when I have a little time.

 

 

-Kevin P
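
Pending that, the plain software-timed AO fallback Kevin mentions, while keeping the AI burst itself hardware-clocked for the averaging, might look like this sketch (Python nidaqmx; names and numbers assumed):

    import time
    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    with nidaqmx.Task() as ao, nidaqmx.Task() as ai:
        ao.ao_channels.add_ao_voltage_chan("Dev1/ao0")  # on-demand, software-timed
        ai.ai_channels.add_ai_voltage_chan("Dev1/ai0")
        ai.timing.cfg_samp_clk_timing(                  # the burst stays HW-clocked
            10000.0, sample_mode=AcquisitionType.FINITE, samps_per_chan=100)

        ao.write(1.0)      # software-timed AO update
        time.sleep(0.050)  # software 50 ms delay, with msec-level jitter
        ai.start()
        data = ai.read(number_of_samples_per_channel=100)
        ai.stop()
        print(sum(data) / len(data))  # average of the 100-sample burst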

Message 28 of 28