Multifunction DAQ

Calibrating M-Series Analog Inputs

I'm working with an NI PCIe-6251 M-series digitizer.
My data acquisition and analysis app calls the NI-DAQmx API from C++.
I have connected AO 0 to AI 0, and to a monitoring oscilloscope.
When I deliver a nominal 10 V output to AO 0, an accurate 10 V signal appears on the oscilloscope.
But when I read the same signal from AI 0, the result is 3% too high (10.3 V).
The same 3% error is seen on every AI channel.
This error is large enough to be a serious concern to my science research customers.
I'm confused: is the error specific to my particular PCIe-6251 digitizer, is it a design issue, or is it something I'm doing wrong?


My main question:
If I introduce a 3% correction in my software, will this work for all M-series digitizers?

Secondary question:
Is there some standard approach for calibrating the AI inputs on an M-series digitizer? I read something about 'Virtual Calibration' but it was a bit vague about the detailed procedure involved.
Dr John Clements
Lead Programmer
AxoGraph Scientific
Message 1 of 3
Please disregard my question.

If I use DAQmxBaseWriteAnalogF64() to write floating point values, then use DAQmxBaseReadAnalogF64() to read back floating point values, the errors are only 0.1% - totally acceptable.

My confusion arose because I am used to interacting with digitizers from other manufacturers using integer values (typically +-30,000 for a +-10 V range). Because of limitations in NI-DAQmx Base, I was writing 10.0 V to the digitizer using DAQmxBaseWriteAnalogF64(). When you read the result back as an integer using DAQmxBaseReadBinaryI16(), the result is 30,100. Kind of a strange value - 3% bigger than I was expecting. I would have expected either 30,000 or 32,767.
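
In case it helps anyone else, here is roughly what my loopback check boils down to. This is a condensed sketch, not my actual app code: error checking is stripped out, and the "Dev1" device name, channel names and RSE terminal configuration are just from my test setup.

#include "NIDAQmxBase.h"
#include <cstdio>

int main()
{
    TaskHandle aoTask = 0, aiTask = 0;
    float64 outVal = 10.0;   // nominal 10 V delivered to AO 0
    float64 inVal  = 0.0;    // scaled (F64) read-back from AI 0
    int16   rawVal = 0;      // unscaled 16-bit read-back from AI 0
    int32   nRead  = 0;

    // Drive AO 0 with a single on-demand sample of 10.0 V
    DAQmxBaseCreateTask("", &aoTask);
    DAQmxBaseCreateAOVoltageChan(aoTask, "Dev1/ao0", "", -10.0, 10.0, DAQmx_Val_Volts, NULL);
    DAQmxBaseStartTask(aoTask);
    DAQmxBaseWriteAnalogF64(aoTask, 1, 0, 10.0, DAQmx_Val_GroupByChannel, &outVal, NULL, NULL);

    // Read it back on AI 0, first scaled (F64), then as a raw binary code
    DAQmxBaseCreateTask("", &aiTask);
    DAQmxBaseCreateAIVoltageChan(aiTask, "Dev1/ai0", "", DAQmx_Val_RSE, -10.0, 10.0, DAQmx_Val_Volts, NULL);
    DAQmxBaseStartTask(aiTask);
    DAQmxBaseReadAnalogF64(aiTask, 1, 10.0, DAQmx_Val_GroupByChannel, &inVal, 1, &nRead, NULL);
    DAQmxBaseReadBinaryI16(aiTask, 1, 10.0, DAQmx_Val_GroupByChannel, &rawVal, 1, &nRead, NULL);

    printf("scaled: %.4f V   raw: %d\n", inVal, (int)rawVal);   // I see ~10.0 V and ~30,100

    DAQmxBaseClearTask(aiTask);
    DAQmxBaseClearTask(aoTask);
    return 0;
}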

It's disappointing that NI-DAQmx Base doesn't support writing integer values to the digitizer, but I can live with it.
Dr John Clements
Lead Programmer
AxoGraph Scientific
Message 2 of 3

Hi John-

M Series devices use a complex polynomial scaling algorithm to convert raw ADC values to their equivalents in engineering units (floating point voltage readings in the case of most M Series devices, including yours).  This removes the need to perform hardware calibration on raw readings and also improves overall measurement accuracy.  This article has a bit of information about NI M-Cal under the Analog Input section.
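
To make that concrete, the scaling is conceptually just a polynomial applied to each raw reading, along the lines of the sketch below. This is purely illustrative: the coefficient values are made up, and while the full NI-DAQmx driver can report the real per-channel coefficients (DAQmxGetAIDevScalingCoeff), NI-DAQmx Base cannot.

// Purely illustrative: M-Cal style scaling is a polynomial in the raw ADC code,
//   volts = c[0] + c[1]*raw + c[2]*raw^2 + c[3]*raw^3
// The coefficients below are made-up numbers; the real per-channel values are
// stored on the device and are not accessible from NI-DAQmx Base.
#include <cstdio>

static double ScaleRawToVolts(int raw, const double c[], int nCoeffs)
{
    double volts = 0.0;
    double term  = 1.0;                 // raw^0, raw^1, raw^2, ...
    for (int i = 0; i < nCoeffs; ++i) {
        volts += c[i] * term;
        term  *= raw;
    }
    return volts;
}

int main()
{
    const double c[4] = { -1.2e-3, 3.322e-4, 1.0e-12, 1.0e-18 };   // hypothetical values
    printf("raw 30100 -> %.4f V\n", ScaleRawToVolts(30100, c, 4)); // ~10 V with these numbers
    return 0;
}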

This has a number of benefits, including reduced cost of M Series hardware relative to legacy DAQ devices and longer recommended intervals between external calibrations (typically 2 years for M Series vs. 1 year or less for legacy devices).  The tradeoff with this software-based per-sample calibration is that raw ADC values are somewhat less useful than you might be used to.  If you need a raw-count-style integer value (e.g. +-32,767 for a 16-bit device) in your application, you could simply read back F64 values and then scale them back to raw-style values.
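
For example, if the rest of your application expects +-32,767-style integers over a +-10 V range, you could derive them from the scaled readings with something like this (the span and full-scale count below are just whatever convention your app already uses, not values queried from the device):

// Sketch: map calibrated F64 voltage readings onto the +-32,767 integer
// convention your application already uses (span and count are your app's
// convention, not anything read from the hardware).
#include <cmath>

inline short VoltsToAppCounts(double volts,
                              double fullScaleVolts  = 10.0,
                              double fullScaleCounts = 32767.0)
{
    double counts = volts / fullScaleVolts * fullScaleCounts;
    if (counts >  fullScaleCounts) counts =  fullScaleCounts;   // clip to range
    if (counts < -fullScaleCounts) counts = -fullScaleCounts;
    return static_cast<short>(std::lround(counts));
}
// e.g. VoltsToAppCounts(10.0) == 32767 and VoltsToAppCounts(5.0) == 16384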

M Series DAC values, on the other hand, are scaled based on a linear relationship and are calibrated against two points (offset and gain).  In order to use this functionality in NI-DAQmx Base we would need to support both querying those scaling coefficients and performing raw DAC writes, neither of which is supported currently.  If you need this functionality, I would appreciate it if you filed a product suggestion for that feature here so that we can evaluate it for future versions of the driver.  Of course, from an ease of use standpoint I would definitely recommend using the scaled DAC write functions.
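
Just to illustrate the two-point relationship, recovering a raw DAC code from a target voltage would look something like the following. The gain and offset numbers are hypothetical placeholders; as mentioned, NI-DAQmx Base currently exposes neither the real coefficients nor a raw DAC write.

// Purely illustrative: M Series AO scaling is linear,
//   volts = gain * code + offset
// so the raw DAC code for a target voltage would be recovered as below.
// The gain and offset here are placeholders, not values from a real device.
#include <cmath>

inline int VoltsToDacCode(double volts,
                          double gain   = 10.0 / 32768.0,  // volts per code (placeholder)
                          double offset = 0.0)             // volts at code 0 (placeholder)
{
    return static_cast<int>(std::lround((volts - offset) / gain));
}
// With these placeholder numbers, VoltsToDacCode(10.0) == 32768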

Hopefully this helps-

Tom W
National Instruments
Message 3 of 3