Measurement Studio for VB6


DAQ analog input precision

In Visual Basic, when using the ComponentWorks AIPoint control to read an analog input, why does the returned value have 7 or 8 digits of precision when the ADC's LSB accuracy is only 3 digits?

Example: I have a channel set up for an input range of +/- 10 volts (gain = 0.5), which has an LSB accuracy of roughly 0.005 V. Yet readings returned by the AIPoint read function for, say, a 5 V signal will look like "5.02745678". Where are the digits past the 3rd decimal place coming from, if the 12-bit ADC can only resolve 1 LSB = 0.005 V?
Message 1 of 2
Short answer: The value returned only has three statistically significant digits. Due to the way a computer handles non-integer numbers, though, it will display the number with as many digits of precision as it can generate unless you explicitly tell it otherwise. To do this, look at the VB FormatNumber function.
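For example, using the sample reading from the question (a minimal sketch; the variable name is just for illustration):

```vb
' A Double returned by an analog input read carries far more
' digits than the ADC can actually resolve.
Dim voltage As Double
voltage = 5.02745678          ' raw value as returned by AIPoint

' FormatNumber rounds to a fixed number of decimal places for display
Debug.Print voltage                  ' prints all the digits: 5.02745678
Debug.Print FormatNumber(voltage, 3) ' prints "5.027"
```

Note that FormatNumber only affects how the number is displayed; the underlying Double still holds the extra digits.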

More info: Numbers are stored in computers in one of a few well-defined formats. In the case of non-integer (floating-point) numbers, this is usually either a 32- or 64-bit representation. In the 64-bit (double-precision) representation, for instance, 52 bits are used for the mantissa (53 counting the implicit leading bit), 11 for the exponent, and 1 for the sign. This gives it a precision of log10(2^53) =~ 15-16 digits. There is no room in a computer's standard representation of numbers to record which digits are statistically significant, so when it is asked to display a number, you get all 15 or so (minus any trailing zeroes).

OK, so you may ask "why aren't all of the trailing digits except the significant 3 set to zero?"

The ADC's accuracy is set by its number of bits. A 12-bit ADC will give you about 3 digits of accuracy for unsigned numbers (log10(2^12) =~ 3.6). The ADC communicates its reading to the computer as an integer from 0 to 4095. The computer takes this integer and scales it to the actual range of the ADC (let's say 0 to 10.0 V). To do this, it uses a formula like y = (10.0/4096)x, where y is the scaled voltage and x is the integer reading from the ADC. Because the computer doesn't know anything about statistical significance, and 10.0/4096 = 0.00244140625, the number returned in this case will always be an exact multiple of that long decimal.
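A sketch of that scaling arithmetic, assuming a 0-10 V unipolar range and a hypothetical 12-bit ADC code:

```vb
' The ADC code is a clean integer, but the scale factor
' (10 V / 4096 counts) is a long decimal, so the scaled
' voltage picks up digits well past the 3rd decimal place.
Dim code As Long
Dim volts As Double
code = 2059                   ' hypothetical 12-bit reading, 0 to 4095
volts = (10# / 4096#) * code  ' 1 LSB = 0.00244140625 V
Debug.Print volts             ' prints 5.02685546875
```

Every possible reading is an integer multiple of 0.00244140625 V, which is why the trailing digits are nonzero but carry no extra information.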

I hope that makes some sense. In general, though, go by the stated precision of your card and use the FormatNumber function to round it to the correct number of digits for display.

TonyH
Message 2 of 2