LabVIEW

Decimal precision in analog input VIs

Hi,
I am using AI Config with AI Start and AI Read VIs to acquire my data at a desired rate. However, the output from my AI Read VI has no more than 2 digits after the decimal point, and I need more precision. I tried to increase the precision by looking inside the AI Buffer Read VI, but was unable to, since it ultimately calls a DLL function, AI_Buffer_Read_WDTInterface, and I don't know what happens inside it. Do you know of any other way to increase the precision of my acquired data? I am attaching a sample code too.
Thanks,
Shyam Menon.
Message 1 of 6
If you open up your AI Read VI, you will see the waveform data indicator connected as an output. On this indicator, right-click the Y values and you will see Format & Precision at the bottom of the menu. Select this and you can specify the digits of precision. This then becomes the format of the output wired into your Index Array. Hope this helps.
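Since LabVIEW is graphical, here is the same idea sketched in Python purely as an illustration (not LabVIEW code): the digits-of-precision setting on an indicator is a display format, and the underlying double keeps its full value either way.

```python
value = 3.14159265358979   # full-precision double, like a Y value from AI Read

# "Digits of precision" on an indicator only changes the display format:
print(f"{value:.2f}")      # 3.14     -- what a 2-digit indicator shows
print(f"{value:.6f}")      # 3.141593 -- same value, 6 displayed digits

# The stored value itself is never truncated by the display setting:
assert value == 3.14159265358979
```

So the acquired data was at full precision all along; only the indicator's format was hiding the extra digits.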
BJD1613

Lead Test Tools Development Engineer

Philips Respironics

Certified LV Architect / Instructor
Message 2 of 6
You can also do this on the output indicator of the AI Read in your top-level VI.
BJD1613

Lead Test Tools Development Engineer

Philips Respironics

Certified LV Architect / Instructor
Message 3 of 6
Thanks, it works now!
Shyam.
Message 4 of 6
Hi Shyam,
Just an additional note about precision in LabVIEW.

LabVIEW was coded to use 10 digits of precision. Essentially, accuracy is not guaranteed for a number with more than 10 digits. Usually it still works fine, but occasionally some of the less significant digits are incorrect when a large number of digits is displayed and the values change rapidly. The behavior becomes more apparent as the number of digits of precision increases and as the value changes more often.

However, if you use more than 10 digits of precision (specified under Format & Precision), the number will display with the correct number of digits, but the value will sometimes be incorrect.

Feroz
Message 5 of 6
> LabVIEW was coded to use 10 digits of precision. Essentially, accuracy
> is not guaranteed for a number with more than 10 digits. Usually it
> still works fine, but occasionally some of the lesser significant
> digits are incorrect when using a large number of digits and rapidly
> changing the values. The behavior is more apparent as the number of
> digits of precision increases and as the number of changes to the
> value increases.
>

This is a good comment, reminding others to be aware of numeric precision, but the ten-digit rule is a bit simplified. LV uses the IEEE 754 floating-point format, as do the computers that it runs on. This means that the precision and range are well specified and will match what you would get with the same datatypes in most other languages.

For more details, look up IEEE 754 on the web.
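To make the point concrete, here is a short Python sketch (a Python float is the same IEEE 754 binary64 double that a LabVIEW DBL uses; this is an illustration added for clarity, not part of the original thread):

```python
import sys

# An IEEE 754 double has a 53-bit significand: roughly 15-17 significant
# decimal digits, regardless of how many digits an indicator displays.
print(sys.float_info.dig)       # 15 -- digits guaranteed to survive a round trip
print(sys.float_info.epsilon)   # ~2.22e-16 -- gap between 1.0 and the next double

# Asking for more digits than the format carries just exposes
# representation error; it does not add precision:
x = 0.1 + 0.2
print(f"{x:.17f}")              # 0.30000000000000004
```

So the practical limit is the roughly 15 significant decimal digits of a double, not a fixed 10-digit rule.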

Greg McKaskle
Message 6 of 6