Measurement Studio for VC++


Why is a 24-bit A/D giving 32-bit readings?

I confess I am new to NI-DAQmx in general as well as Measurement Studio, so please be patient. I am having difficulty with the following problem.

I am trying to read the raw 24-bit samples out of an NI 4474 (a 24-bit, 4-channel A/D). Using the NI-DAQmx VC++ software, I am doing 32-bit signed integer reads. When I do this, I am getting what is close to a 32-bit result (it peaks at 0x43D0C958 at the positive end and 0xBC2CE3F0 at the negative end). I would have expected 24 bits sign-extended to 32 bits. All the digits are toggling, so the data is not simply shifted upward. I presume this has to do with calibration. When I plot the data using autoscaling, my test signal is clearly present in the data.

So what is going on? Do I need to retrieve calibration data to know how to scale my data? Please advise...
Hey Schlew,
Raw reads from the board in DAQmx return a "level" within the full range of the input (±10 V). The 0xBXXXX value is bigger than the 0x43XXX value, so instead of thinking of the 0xBXXXX as being negative, think of it as a higher level with respect to the full range of the board. Instead of me going through the theory here, I suggest reading the tutorial found here: http://zone.ni.com/devzone/conceptd.nsf/webmain/139dfa3645b29be586256865004e742a

Especially read the part about Analog Inputs under DAQ Hardware. The graph there gives a good explanation of this behavior. If you think of it this way, you will see that you are getting back the correct data. Because you are reading the "raw" value, this is what you get back, as opposed to getting back a signed positive/negative value. You can do some simple math to figure out what the "level" corresponds to in voltage, as in the sketch below.
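
For anyone finding this thread later, here is a minimal sketch of that "simple math", assuming the raw reading is an unsigned code that spans the full ±10 V range and that the code is 32 bits wide; both are illustrative assumptions, not something taken from the 4474 specs.

#include <stdint.h>
#include <stdio.h>

// Map an unsigned raw "level" onto the ±10 V input range.
double LevelToVolts(uint32_t level)
{
    const double minV  = -10.0;          // bottom of the input range
    const double maxV  =  10.0;          // top of the input range
    const double codes = 4294967296.0;   // 2^32 possible codes (assumed width)
    return minV + ((double)level / codes) * (maxV - minV);
}

int main()
{
    printf("%f\n", LevelToVolts(0x80000000u)); // mid-scale maps to ~0 V
    return 0;
}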

Sincerely,
Gavin Goodrich
Applications Engineer
National Instruments
Gavin,

Thanks for the reply, although I'm not sure I communicated the problem effectively. What was confusing to me was that the 24 bits of dynamic range did not seem to map in any logical way into the 32-bit integer results I was getting, especially when I examined the positive and negative rails. I finally came across an obscure post that explained it to me. Apparently, the 24 bits are shifted upward to occupy 31 bits, not 32 as I thought they might (and yes, the sign extension is done correctly; the 0xBXXX number was indeed negative). The board applies its calibration factors internally before the host reads the "raw" values, so you can actually read values in excess of the expected dynamic range. The reason the 24 bits are not shifted all the way up to 32 bits is to leave headroom for these small calibration-induced variations, which might otherwise cause an overrange. The long and the short of it is that I can compute the measured voltage at the input from the formula voltage = 20.0 * iVal / 2^31, where 20.0 is the full peak-to-peak input range in volts (±10 V).
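
For reference, here is a rough sketch of what that ends up looking like in NI-DAQmx C/C++ code. The device name "Dev1", the single ai0 channel, the 10 kS/s finite acquisition, and the omission of all error checking are assumptions for illustration only; the scaling line is just the 20.0 * iVal / 2^31 formula above.

#include <NIDAQmx.h>
#include <stdio.h>

int main()
{
    TaskHandle task = 0;
    int32      data[1000];
    int32      read = 0;

    DAQmxCreateTask("", &task);
    // ±10 V input on the first channel of the (assumed) device "Dev1".
    DAQmxCreateAIVoltageChan(task, "Dev1/ai0", "", DAQmx_Val_Cfg_Default,
                             -10.0, 10.0, DAQmx_Val_Volts, NULL);
    DAQmxCfgSampClkTiming(task, "", 10000.0, DAQmx_Val_Rising,
                          DAQmx_Val_FiniteSamps, 1000);
    DAQmxStartTask(task);

    // Unscaled, sign-extended 32-bit reads; the 24-bit sample sits in the
    // upper bits as described above.
    DAQmxReadBinaryI32(task, 1000, 10.0, DAQmx_Val_GroupByChannel,
                       data, 1000, &read, NULL);

    for (int32 i = 0; i < read; ++i)
    {
        // 20.0 = full peak-to-peak input range (±10 V); 2147483648.0 = 2^31.
        double volts = 20.0 * data[i] / 2147483648.0;
        printf("%f\n", volts);
    }

    DAQmxClearTask(task);
    return 0;
}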

A note to NI, this could have been a little better documented, ya know?
Hey Schlew,
My apologies for the confusion. Your statement "I am trying to read the raw 24 bit..." threw me off. Doing raw reads from the board and signed reads from the board are completely different things. If you were indeed acquiring raw reads, my statements would have been correct. Again, I apologize for the confusion, and I'm glad you figured it out.
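
To make the distinction concrete, here is a rough sketch of the two calls side by side, assuming an already-configured and started task; the buffer sizes are placeholders and error checking is omitted.

#include <NIDAQmx.h>

void ReadBothWays(TaskHandle task)
{
    int32 i32Data[1000];      // sign-extended, unscaled 32-bit samples
    uInt8 rawData[4000];      // raw device bytes
    int32 sampsRead    = 0;
    int32 bytesPerSamp = 0;

    // Signed 32-bit reads (what the original post was actually doing):
    DAQmxReadBinaryI32(task, 1000, 10.0, DAQmx_Val_GroupByChannel,
                       i32Data, 1000, &sampsRead, NULL);

    // Raw reads (what my first reply assumed):
    DAQmxReadRaw(task, 1000, 10.0, rawData, sizeof(rawData),
                 &sampsRead, &bytesPerSamp, NULL);
}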
-gavin