11-22-2007 02:20 AM
Hi John-
M Series devices use a complex polynomial scaling algorithm to convert raw ADC values to their equivalents in engineering units (floating-point voltage readings in the case of most M Series devices, including yours). This eliminates the need to perform hardware calibration on raw readings and also improves overall measurement accuracy. This article has a bit of information about NI M-Cal under the Analog Input section.
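To give a rough picture of what polynomial scaling looks like, here is a small sketch in Python. The coefficient values below are invented for illustration; on real hardware the coefficients are device-specific calibration constants, not these numbers.

```python
# Illustration of polynomial scaling: a raw ADC code is converted to volts by
# evaluating a calibration polynomial c0 + c1*x + c2*x^2 + ...
# These coefficient values are made up, NOT real M Series calibration data.
RAW_TO_VOLTS_COEFFS = [0.0012, 3.05e-4, -2.1e-11, 5.4e-17]

def raw_to_volts(raw_code, coeffs=RAW_TO_VOLTS_COEFFS):
    """Evaluate the scaling polynomial at the raw ADC code (Horner's method)."""
    result = 0.0
    for c in reversed(coeffs):
        result = result * raw_code + c
    return result
```

Note that a raw code of 0 maps to the polynomial's constant term rather than exactly 0 V, which is precisely the kind of offset error this scheme corrects for in software.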
This approach has a number of benefits, including reduced cost of M Series hardware relative to legacy DAQ devices and longer recommended intervals between external calibrations (typically 2 years for M Series versus 1 year or less for legacy devices). The tradeoff with all of this software-based sample calibration is that raw ADC values are somewhat less useful than you may be used to. If you need to use or interpret a ±2^[resolution]-style value in your application, you could simply read back F64 values and then scale them back to raw values.
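One simple way to scale F64 readings back to raw-style codes is an idealized linear mapping of the nominal input range onto the ADC's signed code range. This ignores the device's actual calibration polynomial, so it is only a nominal conversion; the resolution and range values below are assumptions, not queried from hardware.

```python
# Map a scaled F64 voltage back onto a +-2^(resolution-1)-style signed code.
# Idealized linear mapping; the range/resolution below are assumed values.
RESOLUTION_BITS = 16        # assuming a 16-bit M Series ADC
V_MIN, V_MAX = -10.0, 10.0  # assuming the nominal +-10 V input range

def volts_to_nominal_code(volts):
    full_scale_codes = 2 ** RESOLUTION_BITS
    code = round((volts - V_MIN) / (V_MAX - V_MIN) * full_scale_codes
                 - 2 ** (RESOLUTION_BITS - 1))
    # Clamp to the representable signed range.
    return max(-(2 ** (RESOLUTION_BITS - 1)),
               min(2 ** (RESOLUTION_BITS - 1) - 1, code))
```

With these assumptions, 0 V maps to code 0 and the range endpoints map to the extremes of the signed 16-bit code range.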
M Series DAC values, on the other hand, are scaled with a linear relationship and are calibrated against two points (offset and gain). To use this functionality in NI-DAQmx Base, we would need to support both querying those scaling coefficients and raw DAC writes, neither of which is currently supported. If you need this functionality, please file a product suggestion for the feature here so that we can evaluate it for future versions of the driver. Of course, from an ease-of-use standpoint I would definitely recommend using the scaled DAC write functions.
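The two-point (offset/gain) DAC scaling described above amounts to volts = offset + gain × code, which inverts trivially. A minimal sketch, with placeholder offset and gain values rather than real device calibration coefficients:

```python
# Two-point linear DAC scaling: volts = offset + gain * code.
# Offset and gain here are invented placeholders; on real hardware they
# would come from the device's calibration coefficients.
DAC_OFFSET_V = 0.003            # hypothetical: volts at code 0
DAC_GAIN_V_PER_CODE = 3.052e-4  # hypothetical: volts per DAC code

def code_to_volts(code):
    return DAC_OFFSET_V + DAC_GAIN_V_PER_CODE * code

def volts_to_code(volts):
    return round((volts - DAC_OFFSET_V) / DAC_GAIN_V_PER_CODE)
```

Because the relationship is linear, only these two coefficients would need to be queried to support raw DAC writes, which is the feature request discussed above.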
Hopefully this helps-