Multifunction DAQ

read and write in Binary I16

Hi,

 

As noted in other threads, your problem is about the scaling coefficients. Each device has its own calibration coefficients, which you can read with DAQmxGetAIDevScalingCoeff or DAQmxGetAODevScalingCoeff. They are polynomial coefficients: http://digital.ni.com/public.nsf/allkb/0FAD8D1DC10142FB482570DE00334AFB

 

This example shows how to get the coefficient values for an analog input: http://forums.ni.com/ni/attachments/ni/250/39680/1/main.cpp
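
For reference, here is a minimal sketch (untested; "Dev1/ai0" is a placeholder channel name) of querying the coefficients, along the lines of the linked main.cpp:

    /* Sketch: query the AI device scaling coefficients for one channel.
       M Series devices report up to 4 polynomial terms. */
    #include <stdio.h>
    #include <NIDAQmx.h>

    int main(void)
    {
        TaskHandle task = 0;
        float64 coeffs[4] = {0};

        DAQmxCreateTask("", &task);
        DAQmxCreateAIVoltageChan(task, "Dev1/ai0", "", DAQmx_Val_Cfg_Default,
                                 -5.0, 5.0, DAQmx_Val_Volts, NULL);

        /* The coefficients map raw binary codes to volts:
           volts = c0 + c1*raw + c2*raw^2 + c3*raw^3 */
        DAQmxGetAIDevScalingCoeff(task, "Dev1/ai0", coeffs, 4);

        for (int i = 0; i < 4; i++)
            printf("c%d = %.9g\n", i, coeffs[i]);

        DAQmxClearTask(task);
        return 0;
    }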

 

Regards

Message 11 of 14

I just retrieved the calibration coefficients from the two PXI-6250 cards that I'm using, and I'm a little concerned by the results. On both cards, the linear scaling coefficient for a gain of 1 (+/-5 V input range) is reported as 0.000161108, whereas if no calibration were required, the linear scale factor for an ideal 16-bit A/D converter would be

 

    5.0V / 32768 = 0.000152587

 

so the effect of using calibrated versus raw sample values is

 

    0.000161108 / 0.000152587 ≈ 1.0558, i.e. a difference of about 5.6%

 

Am I understanding this correctly? I.e., are the raw values from a PXI-6250 AI channel really that bad?

 

Also, the NI doc "Is DAQmx Raw Data Calibrated and/or Scaled?" says that the first calibration coefficient for an E Series card will always be zero. Doesn't this imply that any DC offset isn't being corrected? When we called whatever the calibration function was in Traditional NI-DAQ on an E Series card, the most obvious benefit was that it nulled out most of the DC offset, which was sometimes quite noticeable if a card hadn't been calibrated in a long time.

 

Thanks,

Larry

Message 12 of 14

Hey Larry,

 

Seems to be the season for scaling questions. Check out my post on this recent thread: analog in calibration on PCI-6221. That should explain the ~5% difference.

 

For E Series, those cards did calibration in the hardware: it corrected for non-linearity and offsets (especially when you self-calibrated often) before the data was converted to a binary number. However, that circuitry is expensive and not perfect, so Mcal is actually cheaper and gives you a more accurate scaled measurement. It has been a point of confusion for many users, though.
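
To illustrate, here's a sketch (assuming a configured, running task and the coefficients from DAQmxGetAIDevScalingCoeff above) of what Mcal scaling amounts to in software: read the raw I16 codes, then evaluate the calibration polynomial:

    /* Sketch: scale raw binary codes to volts using the device's
       calibration polynomial (c[0]..c[3] from DAQmxGetAIDevScalingCoeff). */
    #include <NIDAQmx.h>

    void scale_raw_samples(TaskHandle task, const float64 c[4],
                           int16 raw[], float64 volts[], int32 n)
    {
        int32 read = 0;
        DAQmxReadBinaryI16(task, n, 10.0, DAQmx_Val_GroupByChannel,
                           raw, (uInt32)n, &read, NULL);

        for (int32 i = 0; i < read; i++) {
            float64 x = (float64)raw[i];
            /* volts = c0 + c1*x + c2*x^2 + c3*x^3, in Horner form */
            volts[i] = ((c[3] * x + c[2]) * x + c[1]) * x + c[0];
        }
    }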

 

Hope this helps, 

Andrew S
Message 13 of 14

Thanks for the pointer. I'm reading the sample values as 64-bit doubles and then scaling and rounding them back to 16-bit integers, which is what our legacy code requires. For 32 channels at 40 kHz per channel (two PXI-6250 cards), it took very little CPU time (on a 3.4 GHz Xeon) to convert the doubles to shorts, so this looks like a reasonable workaround for our application.
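
In case it's useful to others, a sketch of that workaround (the function name and the ideal-LSB constant are illustrative, not our actual code):

    /* Sketch: read calibrated float64 samples, then rescale and round
       them back to the 16-bit integers the legacy code expects. */
    #include <math.h>
    #include <NIDAQmx.h>

    void read_as_legacy_i16(TaskHandle task, float64 buf[], int16 out[], int32 n)
    {
        const float64 lsb = 5.0 / 32768.0;  /* ideal LSB for a +/-5 V, 16-bit range */
        int32 read = 0;

        DAQmxReadAnalogF64(task, n, 10.0, DAQmx_Val_GroupByChannel,
                           buf, (uInt32)n, &read, NULL);

        for (int32 i = 0; i < read; i++) {
            float64 v = floor(buf[i] / lsb + 0.5);  /* round to nearest code */
            if (v >  32767.0) v =  32767.0;         /* clamp to int16 range */
            if (v < -32768.0) v = -32768.0;
            out[i] = (int16)v;
        }
    }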

 

Thanks,

Larry

 

Message 14 of 14