12-31-2025 06:05 AM
Hello everyone,
I am working on a system that acquires signals using an NI-9239 analog input module, and I am trying to accurately evaluate the input delay introduced by the ADC path by comparing the measured signal against a reference.
In the NI-9239 datasheet, I found the delay expression shown in Figure 2:
$\frac{40 + \frac{5}{512}}{f_s} + 3.3\,\mu s$
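For concreteness, this is how I currently evaluate that expression numerically (a minimal Python sketch; the 50 kS/s rate is only an example, chosen because it is the module's maximum data rate):

```python
# Sketch: evaluate the NI-9239 input-delay expression at a given data rate.
fs = 50_000.0                        # data rate in S/s (example value only)

group_delay_samples = 40 + 5 / 512   # integer + fractional sample delay from the datasheet
analog_delay_s = 3.3e-6              # fixed 3.3 us analog front-end term

total_delay_s = group_delay_samples / fs + analog_delay_s

print(f"Filter delay: {group_delay_samples:.6f} samples")
print(f"Total delay : {total_delay_s * 1e6:.3f} us")    # ~803.5 us at 50 kS/s
```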
I understand that part of this expression corresponds to an integer number of sample delays, but I am not fully clear on the origin of the specific constants:
Why is the integer delay equal to 40 samples?
What is the physical or architectural meaning of the 5/512 fractional-sample term?
Does the 512 correspond to the decimation factor or the structure of the internal digital filter?
I assume this delay is related to the delta-sigma ADC and its internal digital decimation filter, but I have not been able to find an official explanation that justifies these values in detail.
Is there any documentation or application note from NI that explains how this delay is derived, or how these constants relate to the ADC architecture? Additionally, are these values specific to the NI-9239, or are similar delays (and fractional terms) common across other NI C-Series delta-sigma input modules?
I am not a digital-filter expert, but understanding the origin of these terms is important for precise time-alignment in my application.
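For context, this is roughly how I plan to apply the delay for alignment once I understand it. Splitting it into an integer-sample shift plus a linearly interpolated fractional shift is my own assumption, not something taken from NI documentation:

```python
import numpy as np

def compensate_input_delay(samples: np.ndarray, fs: float) -> np.ndarray:
    """Shift an acquired NI-9239 record earlier by the datasheet input delay.

    Sketch only: the 3.3 us analog term is folded into a fractional-sample
    shift and removed with linear interpolation, which assumes the signal
    content is slow compared to fs.
    """
    delay_samples = (40 + 5 / 512) + 3.3e-6 * fs   # total delay in samples
    n_int = int(np.floor(delay_samples))           # whole-sample part
    frac = delay_samples - n_int                   # fractional remainder

    shifted = samples[n_int:]                      # remove the whole-sample delay
    # Linear interpolation removes the remaining fractional-sample delay.
    idx = np.arange(len(shifted) - 1)
    return shifted[idx] * (1 - frac) + shifted[idx + 1] * frac
```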
Any guidance or references would be greatly appreciated.
Thank you very much.
01-01-2026 02:11 AM
01-08-2026 04:28 AM
You are right. There is no point in having this information. Thank you for the response.