01-19-2011 09:50 AM
A quick question regarding the implementation of decimation in the 5660...
When the BW is set to 1.25MHz or below...the DDC in the A/D kicks in. The resultant decimated sample rate is then...
Decimated Sample Rate = 64MS/s / Decimation
where...
BW ---> Decimation ---> Resultant sample rate
1.25MHz ---> 32 ---> 2 MS/s
800kHz ---> 64 ---> 1 MS/s
400kHz ---> 128 ---> 500kS/s
etc.
My question.....how is it that the decimated sample rate is not twice the BW?
For example....if my BW is only 400kHz...I don't need all 64MS/s...I could just as well reconstruct that BW with 800kS/s....so I could just pick out every 80th sample in my 64MS/s stream (64M/800k=80). However...the actual decimation factor is more than this, and the resultant sample rate is less.
There seems to be something more at play...or...decimation isn't just keeping every n-th sample and throwing away the rest. Is there some kind of filtering/averaging going on here too? Why can the resultant decimated sample rate be LESS than twice the highest frequency component? It seems the only way this would be possible is if the anti-aliasing filter placed BEFORE the decimator was actually of LOWER BW than what you input. i.e. - when I say 400kHz, the anti-aliasing filter is set to something much lower...like 250kHz. I'm conjecturing at this point.
What's going on here?
---
Brandon
01-20-2011 09:48 AM
Hello Brandon,
The Resulting Sample Rate is referring to the sample rate of the IQ data returned from the DDC. This means that the input signal is not only decimated, but also converted from real to complex data.
The bandwidth of an IQ signal is 0.8*sample rate, which gives the results you are seeing in the table.
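The rule above can be sketched in a few lines of Python. This is a toy illustration, not NI driver code: it assumes a fixed 64 MS/s ADC rate, power-of-two decimation, and that the driver picks the smallest IQ rate whose usable bandwidth (0.8 × rate) still covers the requested BW. The function name is my own.

```python
ADC_RATE = 64e6  # 5620 ADC sample rate, always 64 MS/s (per the thread)

def choose_decimation(bw_hz):
    """Largest power-of-two decimation whose IQ rate still satisfies
    bw <= 0.8 * (ADC_RATE / decimation)."""
    dec = 1
    while 0.8 * (ADC_RATE / (dec * 2)) >= bw_hz:
        dec *= 2
    return dec

for bw in (1.25e6, 800e3, 400e3):
    dec = choose_decimation(bw)
    print(f"BW {bw/1e3:7.1f} kHz -> decimation {dec:4d} "
          f"-> {ADC_RATE/dec/1e3:7.1f} kS/s")
```

Running this reproduces the table from the first post: 1.25 MHz → 32 → 2 MS/s, 800 kHz → 64 → 1 MS/s, 400 kHz → 128 → 500 kS/s.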
Regards,
Dan King
01-20-2011 10:00 AM
Hi Brandon,
I believe what Dan is trying to say is that the situation changes when you are dealing with complex data centered at 0 Hz. With real data, you have to sample faster than twice the highest frequency component of the signal to avoid aliasing.
However, when that same signal is moved to be centered at 0 Hz via digital downconversion, resulting in complex IQ data, you now have to start thinking in terms of negative frequencies. What I mean is best illustrated via example.
Say you have a 1 MHz wide signal centered at 5 MHz, so the signal spans 4.5 to 5.5 MHz. The highest frequency component is 5.5 MHz, so you would want to sample at > 11 MSPS to satisfy Nyquist.
If you digitally downconvert the signal to be centered at 0 Hz, the signal ranges from -500 kHz to +500 kHz. The maximum frequency component is 500 kHz, so you would sample > 1 MSPS (IQ sample rate). You can acquire a 1 MHz bandwidth sampling at 1 MSPS with complex data. Because of digital downconversion, you can base the sampling rate purely on the signal bandwidth and not the center frequency, allowing you to sample at 1 MSPS instead of 11 MSPS.
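This can be demonstrated numerically with NumPy. The sketch below is my own toy example (the tone frequency, filter design, and variable names are not from NI): a real tone inside the 4.5 to 5.5 MHz band is mixed to complex baseband, low-pass filtered, and decimated to a 1 MS/s IQ rate, and the spectral peak lands at the tone's offset from the 5 MHz center.

```python
import numpy as np

fs = 64e6        # ADC sample rate
f_center = 5e6   # band center = digital LO frequency
f_tone = 5.2e6   # a real tone inside the 4.5-5.5 MHz band
dec = 64         # 64 MS/s -> 1 MS/s complex (IQ) rate

n = np.arange(2**16)
x = np.cos(2 * np.pi * f_tone * n / fs)           # real ADC samples

# Digital downconversion: mix the band center down to 0 Hz
bb = x * np.exp(-2j * np.pi * f_center * n / fs)  # tone moves to +200 kHz

# Low-pass filter (windowed-sinc FIR, ~500 kHz cutoff) to reject the
# mixing image at -10.2 MHz, then decimate to the 1 MS/s IQ rate
fc = 500e3 / fs
k = np.arange(257) - 128
h = 2 * fc * np.sinc(2 * fc * k) * np.hamming(257)
iq = np.convolve(bb, h, mode="same")[::dec]

# The complex spectrum spans -500 kHz .. +500 kHz at 1 MS/s, and the
# peak sits at the tone's offset from the band center (+200 kHz)
spec = np.abs(np.fft.fft(iq))
freqs = np.fft.fftfreq(len(iq), d=dec / fs)
peak = freqs[np.argmax(spec)]
print(f"spectral peak at {peak / 1e3:.1f} kHz")
```

Note the 5.2 MHz tone is represented without ambiguity at only 1 MS/s of complex data, because the complex spectrum covers negative frequencies too.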
Regards,
Andy Hinde
RF Systems Engineer
National Instruments
01-20-2011 11:12 AM
I think I see the confusion here...which is in regards to the definition of "bandwidth".
From your note above (which I agree with), the definition of BW is the complex bandwidth.
And...using your example...the highest frequency component (after downconversion to DC) is 500kHz. This might not be a good term to use...but let's call this 500kHz the "real" bandwidth. Maybe data bandwidth is a better term? For example, if I was transmitting a binary message (1 bit per symbol)...my baseband data rate is 500kHz. Or...if I had an analog signal, the highest baseband frequency component would be 500kHz.
So...the question remains....when you set the BW in LabVIEW...are you setting the complex bandwidth or the "real"/data bandwidth? I was under the assumption that it was the 'real'/data bandwidth...but from your message it looks like it's more the complex bandwidth.
As an example...if I want to do real time continuous acquisition...I need to set the bandwidth 1.25MHz or less. Let's assume I set it at 1MHz for ease of example. Does this then mean that the highest baseband frequency I can expect to transmit is on the order of 500kHz? By the decimation table...a 1MHz BW results in 2MS/s...which is now 4x my highest baseband frequency, and more in-line with what I'd expect.
---
Brandon
01-20-2011 11:54 AM
Hi Brandon,
I'm not sure what you mean by your different definitions of bandwidth. The signal BW is the signal BW, and the instrument's input bandwidth is its input bandwidth.
Setting the BW in LabVIEW for the NI-5660 determines a) if the DDC is on or off and b) if the DDC is on, what the complex IQ sample rate ends up being, which determines the final, effective NI-5660 input bandwidth.
If you set the NI-5660 bandwidth to 1 MHz, it will be coerced to 1.25 MHz BW with a 2 MSPS final IQ sample rate, as shown in the table you posted above. This 1.25 MHz BW is centered around whatever you select to be your RF center frequency. If you set an RF center frequency of 2 GHz, you will acquire 1.25 MHz of signal BW around 2 GHz with a final 2 MSPS sample rate.
2 GHz gets analog downconverted to an IF frequency of 15 MHz at the 5600 output/5620 input. If you input a 2 GHz tone to the 5600 and set the RF frequency to 2 GHz, the 5600 output would be a 15 MHz tone.
The 5620 ADC samples this 15 MHz tone at a sample rate of 64 MSPS. The 5620 ADC always runs at 64 MSPS. At this point the data is either ready for retrieval, or if the DDC is on, the data is then fed into the DDC.
If the DDC is on, the 15 MHz tone is (digitally) downconverted to 0 Hz and is now represented using complex numbers I + jQ. If you performed a complex power spectrum on this data, you would see a tone at 0 Hz. The DDC also applies a low-pass filter followed by decimation, which reduces the sample rate from 64 MSPS to 2 MSPS (in this case), as the higher sample rate is unnecessary to represent the 1.25 MHz BW centered around 0 Hz.
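The low-pass-then-decimate order is exactly why the "just keep every n-th sample" scheme speculated about earlier in the thread would fail: anything outside the final IQ bandwidth would alias into it. A toy NumPy sketch of the 15 MHz IF case (my own frequencies and filter, not the actual DDC implementation) makes the difference visible by adding a second tone outside the 1.25 MHz BW:

```python
import numpy as np

fs = 64e6     # 5620 ADC rate, always 64 MS/s
f_if = 15e6   # IF tone (the desired signal)
f_adj = 16.8e6  # an adjacent tone OUTSIDE the 1.25 MHz BW
dec = 32      # 64 MS/s -> 2 MS/s IQ

n = np.arange(2**15)
x = np.cos(2 * np.pi * f_if * n / fs) + np.cos(2 * np.pi * f_adj * n / fs)

lo = np.exp(-2j * np.pi * f_if * n / fs)  # digital LO at the IF
bb = x * lo  # desired tone now at 0 Hz, adjacent tone at +1.8 MHz

# Naive decimation (keep every 32nd sample): the +1.8 MHz tone
# aliases to -200 kHz inside the 2 MS/s output band.
naive = bb[::dec]

# DDC-style decimation: low-pass first (windowed-sinc FIR, ~625 kHz
# cutoff = half of the 1.25 MHz BW), then decimate.
fc = 625e3 / fs
k = np.arange(257) - 128
h = 2 * fc * np.sinc(2 * fc * k) * np.hamming(257)
filtered = np.convolve(bb, h, mode="same")[::dec]

def level(iq, f_hz):
    """Spectral magnitude near frequency f_hz of the 2 MS/s complex data."""
    spec = np.abs(np.fft.fft(iq)) / len(iq)
    idx = int(round(f_hz * len(iq) * dec / fs)) % len(iq)
    return spec[idx]

print("aliased energy at -200 kHz, naive:   ", level(naive, -200e3))
print("aliased energy at -200 kHz, filtered:", level(filtered, -200e3))
```

With the filter in place the out-of-band tone is suppressed before decimation, so it never aliases into the IQ data; without it, the tone lands squarely inside the output band.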
Regards,
Andy Hinde
RF Systems Engineer
National Instruments
08-15-2012 04:31 AM
Wonderful! Thank you very much!