05-28-2025 04:42 AM
I'm trying to calculate the temperature inaccuracy caused by quantization error in NI DAQ modules like the NI 9217 (PT100) and NI 9213 (thermocouples).
They advertise 24-bit ADCs, but don't clearly state the actual input voltage range that's digitized. For example, the NI 9217 measures 0–400 Ω with 1 mA excitation, so the voltage should be 0–0.4 V — but a PT100 never drops below 100 Ω, so in practice it's more like 0.1–0.4 V.
My goal is to calculate the °C error from quantization alone. But to do that, I need to know:
Is the ADC really digitizing 0.1–0.4 V?
Or is the signal internally scaled to something like ±10 V?
Should I use the full 0–0.4 V range or just the realistic 0.1–0.4 V span?
Thanks for any clarification — I just want to calculate the actual °C/bit resolution.
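To make the question concrete, here is a minimal sketch of the calculation I have in mind; the 0.385 Ω/°C PT100 sensitivity is a nominal value I am assuming, and the two candidate voltage spans are exactly what I am asking about:

```python
# Quantization step in degC/LSB for the NI 9217 under two possible
# assumptions about what the 24-bit ADC actually digitizes.
# 0.385 Ohm/degC is the nominal PT100 sensitivity I am assuming here.

ADC_BITS = 24
EXCITATION_A = 1e-3            # 1 mA excitation per the datasheet
PT100_SENSITIVITY = 0.385      # Ohm/degC (nominal, near 0 degC)

for label, v_span in [("full 0-0.4 V range", 0.4),
                      ("realistic 0.1-0.4 V span", 0.3)]:
    volts_per_lsb = v_span / 2**ADC_BITS
    ohm_per_lsb = volts_per_lsb / EXCITATION_A
    degc_per_lsb = ohm_per_lsb / PT100_SENSITIVITY
    print(f"{label}: {volts_per_lsb*1e9:.1f} nV/LSB "
          f"-> {degc_per_lsb*1e6:.1f} udegC/LSB")
```

Depending on which span is correct, I get roughly 62 µ°C/LSB or 46 µ°C/LSB, so I mainly want to confirm which assumption matches what the hardware actually does.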
Here are the datasheets for the two modules mentioned above:
https://www.ni.com/en-us/support/model.ni-9217.html
https://www.ni.com/en-us/support/model.ni-9213.html
06-02-2025 02:48 AM
@konradkloesch wrote: ... but a PT100 never drops below 100 Ω ...
unless you measure temperatures below 0°C 😄
The module measures resistance; the input range is 400 Ω. Add another 5% to 10% of over-range and divide that by 2^24, and you get a very small and practically useless number, since the last few bits will be noise anyway.
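A rough sketch of that number, assuming 10% over-range and a nominal PT100 sensitivity of 0.385 Ω/°C (the module's real internal scaling is not published, so treat both values as assumptions):

```python
# Resistance and temperature per LSB, assuming the 24-bit converter spans
# the 400 Ohm range plus an assumed 10% over-range.
ohm_per_lsb = 400.0 * 1.10 / 2**24     # ~26.2 uOhm per LSB
degc_per_lsb = ohm_per_lsb / 0.385     # ~68 udegC per LSB at the nominal PT100 slope
print(f"{ohm_per_lsb*1e6:.1f} uOhm/LSB, {degc_per_lsb*1e6:.1f} udegC/LSB")
```

That is orders of magnitude below the module's specified noise and gain/offset errors, which is the point above: the last bits are just noise, and quantization is not what limits the temperature accuracy.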
There is not much information on how the ADC reference voltage is managed. However, it is good practice to tie the excitation current and the ADC reference together (a ratiometric measurement) ... and even then, drift and noise are always present.
And always remember: the sensor reports the temperature it feels; that might be the temperature you are interested in, but it doesn't have to be 😄