01-16-2013 04:09 PM
I am working on some FPGA code that uses the PCI-7833R FPGA card. Its analog output accepts an integer. I am also working on some FPGA code for a NI 9263 in a 9148 chassis, and its analog output only accepts a floating point value. Why would the 7833R analog output accept an integer?
01-16-2013 05:03 PM
Just to confirm - are you writing a floating point (orange wire) to the 9263, or are you writing a fixed-point (grey wire)?
I'm going to guess that the 7833R is older hardware that for some reason can write only raw, unscaled values to its DAC, in which case you need to incorporate appropriate scaling into your code. When LabVIEW FPGA was first available, all analog IO was integer only.
01-16-2013 05:30 PM - edited 01-16-2013 05:34 PM
I agree with Nathan. The crudest way to do this (I'm just guessing without seeing the code/device) is to do the conversion to an integer in your RT program, based on the scaling of the device. So, if you want to output 3.2 volts on a 0-10 V range and the integer being output is a U16, you would compute 3.2/(10-0)*65536 (about 20972). Then take that decimal value and convert it to a U16 using the "To Unsigned Word Integer" primitive, and pass the integer value to the FPGA.
Basically, you are just scaling a 0-10 V decimal value to an integer output representing the same range. If the output is an I16 you have to do things a little differently, but not much. There's a rough text-language sketch of the U16 case just below that should give you a good idea of what to do.
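Since LabVIEW code can't really be pasted as text, here is a minimal C-style sketch of the same scaling idea, just to show the arithmetic. It assumes a unipolar 0-10 V output mapped onto a full 16-bit unsigned code; check your device's actual range and code width, since they may differ.

#include <stdint.h>

/* Sketch only: convert a requested voltage to a raw U16 DAC code.
 * Assumes a unipolar 0-10 V range and a 16-bit unsigned code. */
uint16_t volts_to_u16(double volts)
{
    double code = volts / 10.0 * 65535.0;   /* 0 V -> 0, 10 V -> 65535 */
    if (code < 0.0)     code = 0.0;          /* clamp out-of-range requests */
    if (code > 65535.0) code = 65535.0;
    return (uint16_t)(code + 0.5);           /* round to nearest code */
}

So volts_to_u16(3.2) gives roughly 20972, which is the same number as the 3.2/(10-0)*65536 calculation above (give or take one count depending on whether you scale by 65535 or 65536 and how you round).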
Sorry if this was a bit confusing. I think there's an NI article on it somewhere, I'll see if I can link to it. Someone better with words than I may be able to help clarify.
Edit:
From the last line here
Binary Code = (Output Voltage x 32768) / 10.0V
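And a quick sketch of that bipolar formula in the same C-style pseudo-code, assuming a +/-10 V range and a signed 16-bit (I16) code; again, verify the range and code width for your particular device:

#include <stdint.h>

/* Sketch only: Binary Code = (Output Voltage x 32768) / 10.0 V,
 * assuming a bipolar +/-10 V range and a signed 16-bit code. */
int16_t volts_to_i16(double volts)
{
    double code = volts * 32768.0 / 10.0;    /* e.g. 3.2 V -> about 10486 */
    if (code >  32767.0) code =  32767.0;    /* clamp to the I16 range */
    if (code < -32768.0) code = -32768.0;
    return (int16_t)(code >= 0.0 ? code + 0.5 : code - 0.5);  /* round */
}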