Multifunction DAQ


Timing problem (latency)

We are experiencing a strange problem that manifests itself on some computers, I would appreciate if there are any ideas:

 

We use an NI USB multifunction DAQ card to control and collect data from a device. At each sampling period, several digital lines are set and an analog signal is sampled. The sampling period is 500 ms, which should be more than enough even including latency. However, on some computers the period can stretch to 700-800 ms or even more. Interestingly, this can happen even on a computer that is the exact same model/brand as a properly working one.

 

More interestingly, on the computers that are NOT working properly, launching MAX (Measurement and Automation Explorer) even just once and then closing it can resolve the issue (though only on some of them).

 

Apparently, MAX is initializing/fixing something in the driver, and we need to identify what.

 

We have also found that, if the BIOS has the option, turning off legacy USB support also fixes the problem.

 

Sampling and setting digital lines are fairly simple operations and should work fine, yet they may not even on the finest computer with the fastest CPU. We believe this is not a hardware issue; rather, something in the driver is either not initialized or is using a legacy-compatible communication protocol that introduces the latency.

 

We would appreciate any ideas on how to fix this!

 

We have tried all versions of NI-DAQmx up to 9.1.

 

 

Thanks..

 

Message 1 of 8

Hello hayazh-

 

     Which USB device are you using?  The 6008/6009?

 

     Are you using LabVIEW?  Can you describe/post your program so we can take a quick glance at it?  I am not yet convinced that this is a driver issue.  It is very possible that the latency is attributable to the execution of your code on your OS.  When you say sampling period, what do you mean?  Are you relying on a while loop to execute this sampling?  Is there a lot of overhead in the while loop?  Leaning out your while loop will decrease latency and speed up code execution.

 

     I would suggest trying an example program.  If you are using LabVIEW, click on Help»Find Examples.  Once the Example Finder opens, click on Hardware Input and Output»DAQmx»Analog Measurements»Voltage.  Try running an example in this folder that closely matches your code.  A good one to start with is Cont Acq&Graph Voltage-Int. Clk.vi

 

     Hopefully this helps.  Best of luck with your application!

 

    

Gary P.
Applications Engineer
National Instruments
Message 2 of 8

Dear Gary,

 

Thanks for your reply. I am using both the USB-6221 and the USB-6221 (OEM) version.

 

The program is written in C using the NI-DAQmx API. It runs in a separate thread with the thread priority set to THREAD_PRIORITY_TIME_CRITICAL.

 

I am using the following calls each time:

DAQmxReadAnalogF64

DAQmxWriteDigitalU32

 

 

From initialization function:

 

    sprintf(name,"Dev%d/ai0:3",iDeviceID);   /* e.g. "Dev1/ai0:3" */
    DAQmxErrChk (DAQmxCreateTask("myA",&taskHandleA));
    /* Referenced single-ended, 0-10 V range */
    DAQmxErrChk (DAQmxCreateAIVoltageChan(taskHandleA,name,"",DAQmx_Val_RSE,0.0,10.0,DAQmx_Val_Volts,NULL));
    if( DAQmxFailed(errorCode) ) return 0;
    /* Finite acquisition of BurstNum samples at sRate samples/s */
    DAQmxErrChk (DAQmxCfgSampClkTiming(taskHandleA,"",sRate,DAQmx_Val_Rising,DAQmx_Val_FiniteSamps,BurstNum));
    if( DAQmxFailed(errorCode) ) return 0;
    DAQmxSetChanAttribute (taskHandleA, "", DAQmx_AI_Gain, adGain);
    if( DAQmxFailed(errorCode) ) return 0;

 

.. then create taskHandleB ... etc..

 

.. then create digital port handles:

 

    sprintf(name,"Dev%d/port0",iDeviceID);
    DAQmxErrChkP0 (DAQmxCreateTask("myP0",&taskHandleP0));
    DAQmxErrChkP0 (DAQmxCreateDOChan(taskHandleP0,name,"",DAQmx_Val_ChanForAllLines));
    if( DAQmxFailed(errorCode) ) return 0;

 

 

 

And, here's key lines for getting data:

 

DAQmxErrChk (DAQmxReadAnalogF64(taskHandleA,BurstNum,0.500,DAQmx_Val_GroupByChannel,data,BurstNumQuadrant,&read,NULL));

 

and

 

DAQmxErrChkP0 (DAQmxWriteDigitalU32(taskHandleP0,1,1,DAQmx_Val_WaitInfinitely,DAQmx_Val_GroupByChannel,&digitalPortPattern,&digitalPortWritten,NULL));

 

 

 

Thanks...

Message 3 of 8

Hello hayazh-

 

     I notice that in your DAQmxCfgSampClkTiming function, you are not using the proper call for the timing source to refer to the onboard clock for the 6221.  In the NI-DAQmx C Reference Help, it states that in order to use the internal clock for your device, you must use NULL or OnboardClock.  It seems that your code is software timed, which would explain the code iterating at different rates on different computers.  Even the same exact configuration on two computers can run code at different rates - especially on Windows.  There is so much that the processor is doing in the background that causes code iteration to be non-deterministic.  Additionally, opening and closing MAX or disabling Legacy support should have nothing to do with how your device functions.  You need to be sure that you are referencing the timing source for the card in order to have a more precise acquisition.
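     As a sketch, using your own variable names (taskHandleA, sRate, BurstNum), the hardware-timed setup would look like this.  This is a configuration fragment only, not a complete program; it requires NIDAQmx.h and the NI-DAQmx driver:

```c
/* Configuration fragment (assumes NIDAQmx.h, the DAQmxErrChk macro, and the
   taskHandleA/sRate/BurstNum variables from the code posted above).
   Per the NI-DAQmx C Reference Help, pass NULL or "OnboardClock" as the
   clock source to use the device's internal timebase. */
DAQmxErrChk (DAQmxCfgSampClkTiming(taskHandleA,
                                   "OnboardClock",         /* or NULL */
                                   sRate,                  /* samples/s */
                                   DAQmx_Val_Rising,
                                   DAQmx_Val_FiniteSamps,
                                   BurstNum));             /* samples per read */
```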

 

     I also notice that you are configuring the acquisition to be a finite acquisition with BurstNum representing the number of samples to acquire and sRate representing the sampling rate.  Are these variables, or is the value of BurstNum and sRate the same every time?  If you want to have a specific sampling period, these should be the same every time.

 

     You also say that your 'sampling period' is 500 ms.  Are you getting this number from the value of BurstNum and sRate, or are you getting it from the value of your timeout, which is set to 0.5 sec?  The value of 0.5 that you set in the DAQmxReadAnalogF64 simply sets the timeout (in sec) of your acquisition, or the amount of time to wait before throwing an error and terminating the code execution.  It will not affect how fast you actually acquire your samples.

 

     Try these suggestions.  I think you will find that your code will be much more precise by moving your timing to hardware based as opposed to software based.

 

     If you need further help, you can refer to the NI-DAQmx C Reference Help.  If you are using Windows, go to Start»Programs»National Instruments»NI-DAQ»Text-Based Code Support»NI-DAQmx C Reference Help.

 

Gary P.
Applications Engineer
National Instruments
Message 4 of 8

Hi Gary,

 

Thanks for suggestions.  Here are my responses

 

 

 I notice that in your DAQmxCfgSampClkTiming function, you are not using the proper call for the timing source to refer to the onboard clock for the 6221.  In the NI-DAQmx C Reference Help, it states that in order to use the internal clock for your device, you must use NULL or OnboardClock.  It seems that your code is software timed, which would explain the code iterating at different rates on different computers.  Even the same exact configuration on two computers can run code at different rates - especially on Windows. 

 

I used "" (a zero-length string) as the timing source, since that is what the NI-DAQmx sample code uses for internal-clock sampling. I tried NULL and "OnBoardClock" as you mentioned. Unfortunately, neither made any difference.

 

And I have to repeat: the same code runs fine on many computers but fails on some others. These problematic computers can even be more powerful machines, so this issue is not about CPU or memory...

 

 

There is so much that the processor is doing in the background that causes code iteration to be non-deterministic.  Additionally, opening and closing MAX or disabling Legacy support should have nothing to do with how your device functions.  You need to be sure that you are referencing the timing source for the card in order to have a more precise acquisition.

 

I agree that running MAX shouldn't have any effect, but it does! On computers that have difficulty keeping up with the sampling, running MAX just once (after the computer boots) fixes the issue. However, this fix works only on a portion of the computers that need it. But the effect is clear and repeatable...

 

 

     I also notice that you are configuring the acquisition to be a finite acquisition with BurstNum representing the number of samples to acquire and sRate representing the sampling rate.  Are these variables, or is the value of BurstNum and sRate the same every time?  If you want to have a specific sampling period, these should be the same every time.

 

BurstNum and sRate are fixed all the time. They do not change.

 

 

     You also say that your 'sampling period' is 500 ms.  Are you getting this number from the value of BurstNum and sRate, or are you getting it from the value of your timeout, which is set to 0.5 sec?  The value of 0.5 that you set in the DAQmxReadAnalogF64 simply sets the timeout (in sec) of your acquisition, or the amount of time to wait before throwing an error and terminating the code execution.  It will not affect how fast you actually acquire your samples.

 

No, 500 ms is the interval between calls to DAQmxReadAnalogF64 for the same channel. So I set the digital lines, read an analog channel, repeat this for a few more channels, and wait until the next iteration.

 

So, on the problematic computers, these operations do not complete within 500 ms due to USB latency, etc. Launching MAX or changing the BIOS setting helped on some computers. I am trying to find a better fix and also to let NI know about this problem, so you can look into it as well. In fact, I have been trying to find a solution for over a year and am really out of options now. I would appreciate your comments and feedback.

 

Thanks,

Hasan

 

 

Message 5 of 8

Hasan-

 

     The next step is to try running a piece of example code.  Try running the Cont Acq-Int Clk example.  It can be found in the ANSI C examples from Start»Programs»National Instruments»NI-DAQ»Text-Based Code Support»ANSI C Examples»Analog In»Measure Voltage.  You will notice that this program calls a function named DAQmxRegisterEveryNSamplesEvent.  It waits for 1000 values to be loaded into the buffer between each DAQmxRead function.  I don't know how you specify in code to wait 500 ms, but I am guessing that it is software timed.  What this means is that you are relying on the program (software) to determine the time between each successive DAQmxRead call, correct?  The example code I am referring to relies on 1000 hardware-timed events (values) between each DAQmxRead.  This makes the acquisition deterministic.  The onboard clock of your hardware is used as a timebase to take samples, and as soon as a finite number of hardware-timed samples have been taken, the software reads what is in the buffer.

 

     Run this program and let us know how it goes.  You may need to adjust a few parameters in the example to fit your application, but it should be a very deterministic acquisition.
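     For reference, the overall shape of that event-driven approach in C is roughly the following.  This is a sketch only, not the shipped example: it requires NIDAQmx.h and an installed device, and "Dev1/ai0", the 2000 S/s rate, and the 1000-sample event interval are placeholder assumptions, not values from this thread:

```c
#include <NIDAQmx.h>
#include <stdio.h>

/* Called by the driver after every 1000 hardware-timed samples are in
   the buffer; the read then drains exactly that many samples. */
static int32 CVICALLBACK EveryN(TaskHandle task, int32 type,
                                uInt32 nSamples, void *ctx)
{
    float64 data[1000];
    int32 read = 0;
    DAQmxReadAnalogF64(task, 1000, 10.0, DAQmx_Val_GroupByChannel,
                       data, 1000, &read, NULL);
    printf("read %d samples\n", (int)read);
    return 0;
}

int main(void)
{
    TaskHandle task = 0;
    DAQmxCreateTask("", &task);
    DAQmxCreateAIVoltageChan(task, "Dev1/ai0", "", DAQmx_Val_RSE,
                             0.0, 10.0, DAQmx_Val_Volts, NULL);
    /* Continuous, hardware-timed acquisition from the onboard clock. */
    DAQmxCfgSampClkTiming(task, "OnboardClock", 2000.0, DAQmx_Val_Rising,
                          DAQmx_Val_ContSamps, 1000);
    DAQmxRegisterEveryNSamplesEvent(task, DAQmx_Val_Acquired_Into_Buffer,
                                    1000, 0, EveryN, NULL);
    DAQmxStartTask(task);
    getchar();                      /* run until a key is pressed */
    DAQmxStopTask(task);
    DAQmxClearTask(task);
    return 0;
}
```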

Gary P.
Applications Engineer
National Instruments
Message 6 of 8

I have a similar problem and am wondering if you got yours solved?

See: http://forums.ni.com/t5/Multifunction-DAQ/A-USB-6221-Visual-C-application-NIDAQmx-9-0-amp-Windows-XP...

 

Message 7 of 8

Hi hayazh,

 

This is unrelated to your problem, but I would expect this line to error on an M Series board:

 

DAQmxSetChanAttribute (taskHandleA, "", DAQmx_AI_Gain, adGain);

 

In DAQmx, only DSA and SCXI devices support the AI Gain property.

 

Brad

---
Brad Keryan
NI R&D
Message 8 of 8