Multifunction DAQ


Random channel skew on a USB-6251

Solved!

Hello everybody

I hope you will be able to help me out with this problem that has been haunting me for over two years now.

I am using a USB-6251 to acquire 3 channels at 100 kS/s (finite N-sample acquisition, N ≈ 250 kS), while simultaneously generating a signal on one of the analogue output channels.

In my application the inter-channel delay is critical, i.e., it must be known precisely (not necessarily zero, although that would be nice).

Not convinced by the poor results I was obtaining, I decided to run a simple test to measure the impulse response (and thus the channel skew) between the AO and the first 4 AIs of my 6251. To avoid ghosting and keep the 4 AIs isolated from each other, I connected the outputs of the 4 op-amps in a TL084 IC, each configured as a voltage follower, to the AIs, while feeding their inputs with the signal from the AO.

I sent a chirp and observed its cross-correlation with the acquired signals. The chirp was designed to generate a "lowpass filtered" impulse response (IR), in order to better appreciate delays of fractions of a sample.
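For what it's worth, the delay-estimation step can be sketched in plain Python (an illustrative toy, not my actual code: a Hann-windowed sine stands in for the chirp, and the fractional delay is recovered from the cross-correlation peak by parabolic interpolation):

```python
# Toy illustration of fractional-delay estimation via cross-correlation.
# A Hann-windowed sine stands in for the actual chirp; values are made up.
import math

FS = 100_000       # sample rate, S/s
N = 1024           # record length, samples
TRUE_DELAY = 0.3   # fractional-sample delay to recover

def burst(n, delay=0.0):
    """Hann-windowed 1 kHz sine, delayed by `delay` samples."""
    t = (n - delay) / FS
    if t < 0:
        return 0.0
    w = 0.5 - 0.5 * math.cos(2.0 * math.pi * n / N)
    return w * math.sin(2.0 * math.pi * 1000.0 * t)

ref = [burst(n) for n in range(N)]               # what the AO sends
acq = [burst(n, TRUE_DELAY) for n in range(N)]   # what one AI sees

def xcorr(a, b, lag):
    """Cross-correlation of a with b at the given integer lag."""
    s = 0.0
    for n in range(N):
        m = n - lag
        if 0 <= m < N:
            s += a[n] * b[m]
    return s

c = {k: xcorr(acq, ref, k) for k in range(-4, 5)}
k0 = max(c, key=c.get)                  # integer-sample peak
# Parabolic interpolation around the peak recovers the fractional part.
y0, y1, y2 = c[k0 - 1], c[k0], c[k0 + 1]
frac = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
delay = k0 + frac
```

With `TRUE_DELAY = 0.3` this recovers a delay of about 0.3 samples, which is the kind of sub-sample resolution the chirp trick is meant to give.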

 

Some very confusing results: comparing different acquisitions, the IRs land randomly at t=0 or t=1, very rarely at 0<t<1, whereas I was expecting to see 4 IRs equi-spaced within 1 sample, since the 6251 contains a single, multiplexed ADC.

The same random behaviour can be observed by sending a simple sine wave and verifying that the 4 acquired signals are randomly out of phase with each other by exactly 1 sample.

 

I repeated the same tests in Matlab (both using the Data Acquisition Toolbox and calling the DAQmx library directly) and in LabVIEW 8.2, always obtaining the same result. I also tried (only in Matlab) changing the sampling frequency (as low as 8 kS/s) and the number of samples acquired, without any appreciable change in the results.

 

So finally my question is: is there some sort of post-processing going on behind the scenes that aims at aligning the inputs, and somehow fails? Or is it due to some incorrect setting? Could it be due to missing samples?

 

Has anybody else had similar issues with multiplexed DAQs?

 

Any thoughts on this would be much appreciated.

 

Kind Regards,

 

Giovanni

Message 1 of 16

It's important to note that the channel skew in Matlab is consistent and in LabVIEW it is not. This suggests that neither the communications latency nor DAQmx is the gating factor!

 

Matlab can be optimized for execution speed. It's a bit harder to do in LabVIEW, especially on a non-deterministic OS! What I think you are seeing is an artifact of how LabVIEW will tend to hog as much CPU time as is available, thus yielding different execution times for iterations of the same loop. Matlab executes loop iterations in a much more predictable fashion (same # of threads each iteration).

 

Some things to try:

1) Use a timed loop structure and set its execution properties to achieve more consistent iteration speeds.

2) Adjust the VI execution priority.

3) Consider an M series DAQ device that supports hardware timing.

4) If timing needs cannot be achieved with 1-3, consider moving to a deterministic OS (RT module).


"Should be" isn't "Is" -Jay
Message 2 of 16

Hi Jeff,

Thanks for your prompt reply.

Probably my OP was not very clear: I see exactly the same results in both Matlab and LabVIEW, but I think you got the point.

In fact, later yesterday I tried the same experiment without using the AO, using the system's sound card instead. I got exactly what I was expecting: 4 acquisitions slightly delayed relative to one another, with the sum of these delays smaller than 1 sample. I could even use the Data Acquisition Toolbox's option to switch between 'minimum delay' (I believe 0.1 samples) and 'equispaced delay' (1/N samples, where N is the number of channels used).
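As a back-of-the-envelope illustration of the two modes (the property names and the 0.1-sample figure are quoted from memory above, not checked against the toolbox docs):

```python
# Illustrative arithmetic for the two channel-skew modes mentioned above.
# The 0.1-sample 'minimum delay' figure is as recalled in the post.
FS = 8_000       # sound-card-style sample rate, S/s
NCHAN = 4        # channels scanned by a single multiplexed ADC

period = 1.0 / FS
equisample_skew = period / NCHAN    # channels spread evenly: 1/(FS*N)
minimum_skew = 0.1 * period         # 'minimum delay': ~0.1 samples apart

# Total spread across all channels stays below one sample period.
total_spread = (NCHAN - 1) * equisample_skew
assert total_spread < period
```

Either way, the sum of the inter-channel delays stays below one sample, which is exactly what I saw with the sound card and not with the 6251.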

So I believe now that it is an issue with having the two processes (AI and AO) running at the same time, which probably causes missing samples or something of the kind.


>1) Use a timed loop structure and set its execution properties to achieve more consistent iteration speeds
Do you mean acquiring several smaller chunks of data instead of a single one? Shouldn't this be the same as taking just one small(er) acquisition?

>2) adjust the vi execution priority

 

I will try this, thank you.

 

>3)Consider an M series DAQ device that supports hardware timing

 

I thought I was using the 10MHz internal clock...?

 

>4)If timing needs cannot be achieved with 1-3 consider moving to a deterministic OS (RT module)

 

Errr... I'm afraid it's a bit too late for that kind of decision... Money and time for the project are up 😞


 
Thank you once again
 
Giovanni

 

Message 3 of 16

Pasu wrote:


>1) Use a timed loop structure and set its execution properties to achieve more consistent iteration speeds
Do you mean acquiring several smaller chunks of data instead of a single one? Shouldn't this be the same as taking just one small(er) acquisition?
>> Well yes, it's pretty typical to read back the waveform array in sections in a "producer" loop and pass off the data to a "consumer" loop for analysis (that way you can operate on the sections on-the-fly and reduce memory needs). Using a timed loop you have direct access to the loop's execution priority and can report warnings if an iteration runs late, among many other options. Generally speaking, it's the operations on large data sets that cause the greatest timing hits or performance issues. You might want to check out this article.
 

>2) adjust the vi execution priority

 

I will try this, thank you.

 

>3)Consider an M series DAQ device that supports hardware timing

 

I thought I was using the 10MHz internal clock...?

>> My bad - I thought you had stated you had an NI USB-6008 (reading too many posts I guess)

 

>4)If timing needs cannot be achieved with 1-3 consider moving to a deterministic OS (RT module)

 

Errr... I'm afraid it's a bit too late for those kind of decisions... Money and time for the project are up 😞


 
Thank you once again
 
Giovanni

 



"Should be" isn't "Is" -Jay
Message 4 of 16

Hi Giovanni,

 

I was curious as to how you were synchronizing your AO generation with your AI acquisition.  If these tasks are not sharing at least a start trigger, then there is no guarantee which task gets started first.  If you are not sharing a sample clock between these tasks, keep in mind that AO can update at any time with respect to the sequence of converts required to read your three AI channels.  If you're not controlling this synchronization, I would expect that you might see some skew.

 

If you need more detail on how to do this, please reply (unfortunately I'm out of time at the moment, but will check back later).

 

Hope this helps,
Dan

Message 5 of 16
Dan,
Thank you very much for your enlightening post.

>I was curious as to how you were synchronizing your AO generation with your AI acquisition.

 

In fact I don't synchronise at all. I just start the two processes, wait for the acquisition time, and stop both (some zero padding on the AO sequence and cropping of the acquisition "fixes" everything). I trusted that the first process on the list would consistently start first and that the updates would then stay in sync at each sample (ideally AO, AI0, AI1, AI2, ...). Silly me 😞

 

>If these tasks are not sharing at least a start trigger, then there is no guarantee which task gets started first.  If you are not sharing a sample clock between these tasks, keep in mind that AO can update at any time with respect to the sequence of converts required to read your three AI channels.

 

That explains everything.

 

>If you're not controlling this synchronization, I would expect that you might see some skew.

 

Yeah... I can post you some plots if you like... Very entertaining (not).

 

>If you need more detail on how to do this, please reply (unfortunately I'm out of time at the moment, but will check back later).

 

That would be great!

As I was mentioning, I'm at the end of the project, so moving to another environment is not an option (I'm working in Matlab). So if you could tell me more about how to do this by calling a bunch of functions from the DAQmx library, I would really appreciate it.

 

Giovanni

 

 

 

Message 6 of 16
Thinking back on what you said:
>If you are not sharing a sample clock between these tasks, keep in mind that AO can update at any time with respect to the sequence of converts required to read your three AI channels.

I think I am synchronising AO and AI, since I am using the 10MHz clock for both of them, am I not?

 


>If these tasks are not sharing at least a start trigger, then there is no guarantee which task gets started first.


I don't need AO and AI to start in sync, as long as, once running, they are updated at the same instants. I don't care if there is a random latency between AO and AI (due to when the OS decides to start the two processes).

 

 

Giovanni

 

Message 7 of 16

@Jeff Bohrer wrote:
[...] it's pretty typical to read back the waveform array in sections in a "producer" loop and pass off the data to a "consumer" loop for analysis (that way you can operate on the sections on-the-fly and reduce memory needs). Using a timed loop you have direct access to the loop's execution priority and can report warnings if an iteration runs late, among many other options.


That makes perfect sense to me; I simply thought it was managed by the libraries/driver/Data Acquisition Toolbox. Anyhow, all my problems should disappear when I use a smaller sequence (say a few kS), shouldn't they?

 

Thank you

 

Giovanni

Message 8 of 16

Hi Pasu,

 

The AI and AO timing engines divide down the sample clock timebase (20 MHz by default) to produce the sample clock. If you don't synchronize the tasks using a shared start trigger or shared sample clock, then the timing engines start dividing the timebase at different times, so your AI and AO sample clocks are out of phase with each other. 
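As a rough numeric illustration of this mechanism (the 100 kS/s rate is taken from the original post; the rest is just arithmetic, not device-spec data):

```python
# Rough numbers for the out-of-phase mechanism: both timing engines
# divide the 20 MHz timebase down to the 100 kS/s sample clock, but
# start counting at different times. Illustrative arithmetic only.
TIMEBASE = 20_000_000      # Hz, sample-clock timebase
FS = 100_000               # Hz, AI and AO sample rate
divisor = TIMEBASE // FS   # 200 timebase ticks per sample-clock period
tick = 1.0 / TIMEBASE      # 50 ns

def phase_offset(k_ticks):
    """If the AO engine starts k ticks after the AI engine, the AO
    clock is permanently offset by k ticks modulo one sample period."""
    return (k_ticks % divisor) * tick      # seconds, in [0, 1/FS)

worst = (divisor - 1) * tick               # nearly a full sample period
```

So an unsynchronised AO sample clock can sit anywhere within the 10 µs AI sample period, in 50 ns steps, which would look exactly like a random 0-to-1-sample skew in the cross-correlation.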

 

Brad

---
Brad Keryan
NI R&D
Message 9 of 16

Hi Giovanni,

 

I'm not sure my suggestion to synchronize AI and AO is correct. Since you're trying to measure inter-channel delay, you need to be measuring a signal that changes during the time between the sampling of your individual AI channels. To make this happen, you'll want to be running your generation much faster than your acquisition. The synchronization I was proposing would have held AO at the same voltage while your AI channels were sampled, which would not reveal the inter-channel delay.

 

If all you want to know is what that delay is, you can query DAQmx, and it should provide this information to you. To do this, you'd call DAQmxGetAIConvRate. The reciprocal of this value gives you your inter-channel delay. Essentially, each time a sample clock occurs, the channels in your task are converted at the rate returned by this property. If you want to change this rate, you can do so by calling DAQmxSetAIConvRate. Additionally, you can get/set the delay from the sample clock to the convert on your first channel by calling DAQmx(Get/Set)DelayFromSampClkDelay.
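For concreteness, the arithmetic here is just the following (the convert-clock rate below is a made-up placeholder; the real value is whatever the driver query returns for your task):

```python
# Illustrative arithmetic only: the reciprocal of the convert-clock
# rate is the delay between consecutive channels within one
# sample-clock period. The 1 MHz figure is a hypothetical placeholder.
FS = 100_000               # sample clock, S/s (from the original post)
NCHAN = 3                  # AI channels in the task
convert_rate = 1_000_000   # Hz, hypothetical convert-clock rate

inter_channel_delay = 1.0 / convert_rate   # 1 us between ai0, ai1, ai2

# Sanity check: all channels must be converted within one sample period.
assert NCHAN * inter_channel_delay <= 1.0 / FS
```

Raising the convert rate packs the channels closer together at the start of each sample period; lowering it spreads them out, up to the limit where they no longer fit in one period.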

 

Again, to measure this you would have to measure a signal that is being updated at least as fast as the convert rate.

 

I hope that helps,

Dan

 

Message 10 of 16