12-08-2009 02:43 AM
Hi,
I am trying to understand how signals are affected by the ADC and the DAC, respectively, of the NI 5641R. From the datasheets I know that both the ADC and the DAC support 14-bit conversion. Now, from the ADC I receive two I16 values (I and Q), so I guess the binary representation of these samples looks something like 00xx'xxxx'xxxx'xxxx b.
For a first application I just want to have a NULL modem, so those bits are sent directly to the DAC. But some examples state that the I16 values need to be adapted to a 14-bit value by shifting the I16 right by 2. If I do that with my value from the ADC, my output signal would be something like 0000'xxxx'xxxx'xxxx b, which is thus NOT the same as my input signal. Or is something wrong with my reasoning?
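The concern above can be sketched in a few lines. This is only an illustration with a made-up sample value, assuming the I16 container already holds a 14-bit ADC value in its lower bits:

```python
# Illustrative sample value only; real ADC data will of course differ.
adc_sample = 0b0010_1101_0110_1011  # 14-bit value sitting in an I16 container
shifted = adc_sample >> 2           # the right shift by 2 some examples suggest

print(format(adc_sample, "016b"))   # 0010110101101011
print(format(shifted, "016b"))      # 0000101101011010
print(shifted == adc_sample)        # False: the shift has divided the value by 4
```

So if the 14-bit ADC word is passed straight through and then shifted, the output sample is indeed no longer numerically equal to the input sample.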
Another question concerns the power levels of the ADC and DAC: if I build a NULL modem and connect a signal with a power level of -20 dBm to the ADC, the signal on my FPGA has a level of -30 dBm, and after the DAC a level of -40 dBm. Is a loss of 10 dB in each stage normal?
Thanks for the help.
Kind regards
Roman Gassmann
12-08-2009 01:40 PM
Hi Roman,
I believe I can clear things up for you. The data port for the DAC is a 16-bit data type, but it only uses the lower 14 bits, so the easiest/quickest way is to right-shift the 16-bit value by 2 and write it to the DAC. If you are only using the lower 14 bits to begin with, you can simply write them without the need for a shift. The examples assume 16-bit data is used, which I think is what caused the confusion. With regard to the losses you are seeing, I believe these are due to the shifts you are performing on the data, which effectively divide each sample. I hope this helps to clear things up; please feel free to reply if you have any other questions or concerns.
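To put a rough number on that division (a back-of-the-envelope sketch, not an NI-documented figure): a right shift by 2 divides each sample's amplitude by 4, which corresponds to about 12 dB in power per shifting stage.

```python
import math

shift = 2
amplitude_factor = 2 ** shift                      # >> 2 divides amplitude by 4
power_loss_db = 20 * math.log10(amplitude_factor)  # amplitude ratio -> dB

print(round(power_loss_db, 2))  # 12.04 dB per shift-by-2 stage
```

That is in the same ballpark as the ~10 dB per stage reported above, which supports the shifts being the culprit on the DAC side.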
12-09-2009 03:52 AM
Hi JaceD,
Thanks for your answer.
Although you cleared up the loss at the DAC, the 10 dB loss at the ADC input (from outside the NI 5641R to the FPGA) is still unclear to me. Any ideas on that? I don't think this loss is caused by a shifting operation, since that simply would not make sense in the ADC path, right?
Roman
12-09-2009 12:56 PM
Hi Roman,
How are you converting the data from the ADC into dBm? This seems like a good candidate for the source of the error. A screenshot or the code you are using for this conversion would be helpful.
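For readers following along: the thread does not show the actual conversion VI, but a typical I/Q-sample-to-dBm conversion has this general shape. Everything here is an assumption for illustration (the full-scale code, full-scale voltage, and 50 Ω load all depend on the NI 5641R front-end calibration, which this sketch does not capture):

```python
import math

def iq_to_dbm(i, q, full_scale=2**13, v_full_scale=1.0, r_ohms=50.0):
    """Rough dBm estimate for one I/Q sample pair.

    full_scale, v_full_scale and r_ohms are illustrative assumptions;
    the real scaling comes from the device's calibration data.
    """
    v_i = i / full_scale * v_full_scale    # map ADC codes to volts (peak)
    v_q = q / full_scale * v_full_scale
    v_rms_sq = (v_i ** 2 + v_q ** 2) / 2.0 # I/Q magnitude -> mean-square volts
    p_watts = v_rms_sq / r_ohms            # power into the assumed load
    return 10 * math.log10(p_watts / 1e-3) # referenced to 1 mW

print(round(iq_to_dbm(4096, 0), 2))  # ~3.98 dBm under these assumptions
```

Note how an unintended division of the raw codes anywhere before this conversion (for example, a leftover shift) lands directly in the reported dBm figure.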
12-10-2009 10:28 AM
Hi JaceD,
Well, that specific part of the code I copied out of an example. Furthermore, I don't think the error is in that code, since the measurements of the output and the LabVIEW calculations agree (as far as I can tell).
Thanks.
Roman
12-10-2009 11:37 AM
Hi Roman,
You are correct, that is exactly what we do in the example program. I do have a few questions, though: Are you processing the data on the FPGA? If you run the example program, do you see the same 10 dB loss? How are you verifying that the input signal is -20 dBm? If you are willing, you can post your project either to the forums (file size permitting) or to our FTP site at ftp://ftp.ni.com/incoming/ and I will gladly take a look and see if I can reproduce this issue on my end. I have run some quick tests here and have not seen a drop.
12-10-2009 02:47 PM
Hi JaceD,
Yes, I process the data on my FPGA, but I implemented the processing in a way that I can turn it off. Unfortunately, the loss still remains.
Since I am at home right now I cannot check the examples, but I will check them tomorrow morning.
I tested the input signal's power by connecting it directly to the spectrum analyzer; I will check this again tomorrow morning as well. In the meantime, I uploaded my project to your FTP server; it is called 09.11.zip. It is possible that the version I sent still has the shifting part in it, as I don't have the newest version with me. I hope that is not too big a problem.
Thanks
Roman
12-16-2009 01:34 PM
Hi Roman,
Sorry for the delay in this reply. I have been looking over your code and believe I have found the cause of the issue I was seeing, which in turn sheds some light on what you have been experiencing. I did see a drop of about 10 dB, but only at lower AI spans; at higher spans I saw no drop and also no spurs. You did not mention any spurs in your previous post, but when I used a sine tone to test your VI I saw images equal in power to the fundamental.

After tracking this down, I believe I know what is causing your issues. When you move from the ADC clock to the RTSI clock you are not gating your FIFOs off of a data-valid signal (read: inverted timeout). This causes your FIFO to fill with zeros intertwined with your data on some clock cycles, essentially bringing every other sample back to zero and creating an aliasing effect.

My suggestion is to use logic similar to the example below to ensure that when you are crossing clock domains you only write valid data into the FIFO; the false case contains only a false constant wired to the boolean output. I hope this clears things up; let me know if you have any further questions.
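The LabVIEW snippet referred to above is not reproduced in the thread, but the failure mode it fixes can be demonstrated numerically. The sketch below (plain Python, tone frequency and length chosen arbitrarily) zeros every other sample of a sine tone, mimicking a FIFO that accepts writes on clock cycles with no valid ADC data, and shows the two symptoms described: the fundamental drops (here by a factor of 2, about 6 dB) and an image as strong as the fundamental appears.

```python
import cmath
import math

N = 64
f_bin = 5                                   # tone frequency, in DFT bins
x = [math.cos(2 * math.pi * f_bin * n / N) for n in range(N)]

# Every other sample replaced by zero: a FIFO written on every clock cycle
# instead of only when the data-valid (inverted timeout) signal is true.
x_gated = [v if n % 2 == 0 else 0.0 for n, v in enumerate(x)]

def dft_mag(samples, k):
    """Magnitude of DFT bin k, computed directly."""
    return abs(sum(v * cmath.exp(-2j * math.pi * k * n / len(samples))
                   for n, v in enumerate(samples)))

fund_clean = dft_mag(x, f_bin)              # fundamental of the clean tone
fund_gated = dft_mag(x_gated, f_bin)        # fundamental after bad gating
image = dft_mag(x_gated, N // 2 - f_bin)    # image folded around fs/2

print(fund_gated / fund_clean)  # 0.5 -> fundamental down ~6 dB
print(image / fund_gated)       # 1.0 -> image equal in power to fundamental
```

This is only an illustration of the mechanism, not a reproduction of the exact 10 dB figure from the thread; the fix is the gating logic described above, so that only valid samples ever enter the FIFO.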