06-29-2008 10:28 PM - last edited on 11-10-2025 01:11 PM by Content Cleaner
Hello,
For an M-series PCI-62xx card, scanning 8 channels at 20kHz each ("AI Sample Clock") results in an ADC rate ("AI Convert Clock") of 160kHz. This can be verified with this VI, which uses the AIConv.Rate property node to display the actual convert clock rate.
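In case the attachment does not come through, here is roughly what that VI does, sketched with the NI-DAQmx C API instead of LabVIEW (error checking is omitted, and "Dev1/ai0:7" is just a placeholder for my device and 8 channels):

#include <stdio.h>
#include <NIDAQmx.h>

int main(void)
{
    TaskHandle task = 0;
    float64 convRate = 0.0;

    /* 8 voltage channels; "Dev1/ai0:7" is a placeholder name */
    DAQmxCreateTask("", &task);
    DAQmxCreateAIVoltageChan(task, "Dev1/ai0:7", "", DAQmx_Val_Cfg_Default,
                             -10.0, 10.0, DAQmx_Val_Volts, NULL);

    /* AI Sample Clock: 20 kHz per channel, continuous acquisition */
    DAQmxCfgSampClkTiming(task, "", 20000.0, DAQmx_Val_Rising,
                          DAQmx_Val_ContSamps, 1000);

    /* Commit the task so that DAQmx settles on the actual timing */
    DAQmxTaskControl(task, DAQmx_Val_Task_Commit);

    /* Text equivalent of the AIConv.Rate property node */
    DAQmxGetAIConvRate(task, &convRate);
    printf("AI Convert Clock rate: %.1f Hz\n", convRate);  /* 160000.0 on my card */

    DAQmxClearTask(task);
    return 0;
}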
However, various NI documents (such as this one) state that after a sample clock pulse there are two or three "delay" ticks at the convert clock rate, which set a delay before the first actual conversion is performed. The number of delay ticks used can be read from the DelayFromSampleClk.Delay property node; it is typically 3 ticks for an M-series card and 2 ticks for an E-series card, according to this figure from the NI website.
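(Continuing the sketch above, and assuming I have the C property accessor names right, the same value the property node shows should be readable like this:)

/* Delay between the AI Sample Clock edge and the first conversion,
   together with the units that value is expressed in. */
float64 delay = 0.0;
int32   delayUnits = 0;
DAQmxGetDelayFromSampClkDelay(task, &delay);            /* DelayFromSampleClk.Delay */
DAQmxGetDelayFromSampClkDelayUnits(task, &delayUnits);
printf("Delay from sample clock: %.0f (units code %d)\n", delay, (int)delayUnits);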
So my question is: if I scan 8 channels and there are three additional "delay" pulses, shouldn't the appropriate Convert Clock rate be SampleRate x (NumberOfChannels + NumberOfDelayTicks)? More precisely, in my sample code below I scan 8 channels at 20kHz each, and I see from the indicator that my card has 3 delay ticks. However, the delay ticks are apparently not taken into account: in this setup, NI-DAQmx sets the Convert Clock to 8 x 20kHz = 160kHz, whereas I think it should choose (8 + 3) x 20kHz = 220kHz as the appropriate Convert Clock rate.
Note that data acquisition might still work in this setup even at 160kHz, because according to NI documentation, if the next sample clock tick arrives too early, i.e. before all 8 + 3 Convert Clock ticks have completed, the early sample clock tick is ignored. In practice, this would mean that only every other sample clock tick actually starts a scan.
Once again: given that there are three empty "delay" ticks for each sample clock tick, shouldn't NI-DAQmx choose a Convert Clock of 220kHz instead of 160kHz in my example?
07-01-2008 09:41 AM
Hi Gustep,
Just to clear things up on how the convert clock is chosen, please refer to the KnowledgeBase article found here. By default, the convert clock is based on the device's maximum conversion rate with a 10µs delay added between channels to allow for settling. Above a certain sample rate there is no longer room for this extra delay, and the convert clock is then simply the requested sample rate times the number of channels being scanned, which is what you are seeing: 8 x 20kHz = 160kHz.
I think the biggest point of confusion about the delay between the sample clock and the convert clock is that the delay is 3 ticks of the master timebase, not 3 ticks of the convert clock. That timebase runs much faster than the convert clock, so skipping 3 ticks of it does not affect the convert clock rate noticeably. Hope this helps,
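P.S. As a back-of-the-envelope illustration of that rule (the 4µs minimum conversion period below is just an assumed figure for a 250 kS/s device, not something read from your board; the exact behavior is described in the KnowledgeBase article):

#include <stdio.h>

/* Rough illustration of the convert clock selection described above.
 * Assumed figures: 4 us minimum conversion period (a 250 kS/s device)
 * and the 10 us of inter-channel settling delay mentioned in the KB. */
int main(void)
{
    const double minConvPeriod = 4e-6;    /* assumed: 1 / 250 kS/s           */
    const double settlingDelay = 10e-6;   /* extra delay between channels    */
    const double sampleRate    = 20000.0; /* requested AI Sample Clock rate  */
    const int    numChannels   = 8;

    double samplePeriod = 1.0 / sampleRate;              /* 50 us  */
    double paddedPeriod = minConvPeriod + settlingDelay; /* 14 us  */

    double convRate;
    if (numChannels * paddedPeriod <= samplePeriod)
        convRate = 1.0 / paddedPeriod;        /* room for the 10 us padding */
    else
        convRate = sampleRate * numChannels;  /* conversions evenly spaced  */

    printf("Chosen convert clock: %.0f Hz\n", convRate);  /* prints 160000 Hz */
    return 0;
}

The 3-tick delay is negligible for the same reason: if the timebase runs at 20 MHz, for example, three ticks come to just 150 ns, compared with the 6.25µs conversion period in your 160kHz case.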
07-02-2008 06:13 PM - edited 07-02-2008 06:19 PM
