
Format into string converts some whole numbers wrong

Hi,

 

I stumbled into this issue while trying to write to file some values that had very small variations centered around 1. I decided to write all my values formatted as %#.16f (hide trailing zeroes, 16 digits of precision). Here is a sample snippet:

[Snippet image: Format into string.png]

Later I found that some whole values were written to file incorrectly. I then noticed that certain ranges of whole numbers get converted to non-whole numbers, written with a repeating .9 fraction. For example, 78 is converted to 77.9999999999999999.

 

The same happens within these ranges:

  • 78 to 99
  • 529 to 999
  • 4225 to 9999
  • 36045 to 99999
  • 298302 to 999999

Reducing the precision from 16 to 15 digits solves the conversion issue for the range from 78 to 99. Decreasing it further, from 15 to 14, also solves it for the range from 529 to 999, and so on for the remaining ranges. Following that pattern, 11 digits is the highest precision that avoids the issue for all of the ranges above.
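
For reference, a minimal C sketch (not LabVIEW code; %.16f is used here as the nearest plain-C equivalent, without LabVIEW's trailing-zero trimming) that formats the first value of each affected range. A standard C printf renders all of them with zero fractional digits, since each of these whole numbers is exactly representable as a double:

#include <stdio.h>

int main(void)
{
    /* First value of each range listed above; each is exactly
       representable as a double, and printf shows all-zero
       fractional digits for every one of them. */
    double affected[] = { 78.0, 529.0, 4225.0, 36045.0, 298302.0 };
    for (int i = 0; i < 5; i++)
        printf("%.16f\n", affected[i]);
    return 0;
}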

 

This happens on LabVIEW 2017 and 2019, 32-bit versions. I am wondering if others have seen this and/or if there is a bug already reported about it.

 

Best regards,

Sergio

---
SergioR
Message 1 of 12

It's not a bug. As already pointed out thousands of times on this forum, representation of integer numbers in the float or double binary format has some limitations. If you know that these are really whole numbers, use a %d format or explicitly convert the double to an integer.
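
As a rough sketch of that suggestion in textual form (C rather than LabVIEW, with a hypothetical value that is known to be whole):

#include <stdio.h>

int main(void)
{
    double value = 78.0;                 /* hypothetical value known to be whole */

    printf("%.16f\n", value);            /* decimal formatting of the double */
    printf("%lld\n", (long long)value);  /* explicit conversion to an integer, then %lld */
    return 0;
}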

Paolo
-------------------
LV 7.1, 2011, 2017, 2019, 2021
Message 2 of 12

Hi Sergio,

 

oh, all those wonders of floating point precision (and related issues when converting floating point values to/from string)…

 

One more thought: where do those values come from?

Which kind of DAQ device do you use that requires saving data with 16 decimal digits of precision?

(A 16bit ADC gives you ~4.8 decimal digits, a 24bit ADC ~7.2 digits…)
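
(Those digit counts follow from N·log10(2) for an N-bit converter; a quick sketch of the arithmetic:)

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Effective decimal digits of an N-bit ADC: N * log10(2). */
    printf("16-bit: %.1f decimal digits\n", 16 * log10(2.0));   /* ~4.8 */
    printf("24-bit: %.1f decimal digits\n", 24 * log10(2.0));   /* ~7.2 */
    return 0;
}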

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 3 of 12

Thanks for the reply Paolo.

 

I looked around the forum for a while and did not find posts with a similar discussion, but I have now found a couple of posts by searching for issues with double precision rather than with integers.

 

As far as knowing the type of number, I have a generic write method with the input type as double, such that it accepts any type of numeric input. But a check can be added as a workaround, to decide if the value is a whole number and act upon that.
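
A minimal sketch of such a check, written in C rather than LabVIEW and using a hypothetical helper name, just to illustrate the idea (test whether the value is whole, then pick the format accordingly):

#include <math.h>
#include <stdio.h>

/* Hypothetical helper: use an integer format when the value is whole,
   otherwise fall back to a decimal format that trims trailing zeroes. */
static void format_value(char *buf, size_t len, double x)
{
    if (floor(x) == x && fabs(x) < 9.0e15)      /* whole, and safe to cast */
        snprintf(buf, len, "%lld", (long long)x);
    else
        snprintf(buf, len, "%.16g", x);
}

int main(void)
{
    char line[64];

    format_value(line, sizeof line, 78.0);
    printf("%s\n", line);                /* prints 78 */

    format_value(line, sizeof line, 78.5);
    printf("%s\n", line);                /* prints 78.5 */
    return 0;
}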

 

Regards,

---
SergioR
Message 4 of 12

Hi Sergio,

 


@SergioR wrote:

As far as knowing the type of number, I have a generic write method with the input type as double, such that it accepts any type of numeric input. But a check can be added as a workaround, to decide if the value is a whole number and act upon that.


What about using polymorphic (or malleable) VIs to handle different datatypes? No need for (too) generic functions…

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 5 of 12

Hi Gerd,

 

Not a super secret DAQ device, hehe. The values in question were the gain correction factors for a digital decimation filter, which were provided from simulation with 16 digits of precision. While debugging, I needed to record the gain correction value applied to each data point captured from an ADC, to double-check that the right correction value was indeed selected. My original 6 digits of precision would truncate the value in my results file, so I increased the number of digits in my string conversion library to 16. Later I decreased it to 15 digits, and now I am considering decreasing it further to 11 digits, or implementing a workaround.

 

Best regards,

---
SergioR
Message 6 of 12

@GerdW wrote:

Hi Sergio,

 


@SergioR wrote:

As far as knowing the type of number, I have a generic write method with the input type as double, such that it accepts any type of numeric input. But a check can be added as a workaround, to decide if the value is a whole number and act upon that.


What about using polymorphic (or malleable) VIs to handle different datatypes? No need for (too) generic functions…


Polymorphic VIs could be used in my string conversion library; I haven't worked with malleable VIs... I will look into them. However, the method where I use the number-to-string conversion always uses the double data type. I would need an additional method that takes an integer data type instead of a double, which would provide a way to handle integers and doubles separately.

 

But in the implementation the issue remains, mainly because during measurements the numbers written to file (results, test conditions, etc.) can be whole numbers or fractions... it would still require a check to see whether the number is whole.

 

Thanks,

---
SergioR
Message 7 of 12

I understand where the calls for "IEEE 754" understanding and the infinite wonders of floating point numbers are coming from, BUT....

 

If I enter the number "78" into a DBL, it should be correctly represented in DBL data space. As should 529. Yet configuring a FP control to 16 digits of precision makes the entered value of "529" change to 528.99999999999....

 

https://www.h-schmidt.net/FloatConverter/IEEE754.html

 

Have a look here for an example of which values ARE exactly representable in IEEE 754 format. Spoiler: 529 is one of them. IF 529 is exactly represented, then surely even a display format with 1 million digits of precision should still show all zeroes after the decimal point.... Unless I (after all these years) still fundamentally misunderstand how this stuff works.
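
A quick C-side check of that claim, assuming a C99 printf that supports the %a hex-float specifier:

#include <stdio.h>

int main(void)
{
    /* %a prints the exact binary value stored in the double; 529 needs
       only a handful of significand bits, so it is represented exactly. */
    printf("%a\n", 529.0);   /* prints 0x1.088p+9 */
    return 0;
}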

 

I think this IS a bug in the display code for floating point numbers in LabVIEW. The exact same thing happens without parsing to a string: just set the display format of the FP control accordingly and the exact same issue occurs.

Message 8 of 12

@Intaris wrote:

I think this IS a bug in the display code for floating point numbers in LabVIEW. The exact same thing happens without parsing to a string: just set the display format of the FP control accordingly and the exact same issue occurs.


Ah, you are right, it is the same behavior on the front panel numeric controls.

---
SergioR
Message 9 of 12

@Intaris wrote:

I think this IS a bug in the display code for floating point numbers in LabVIEW. The exact same thing happens without parsing to a string: just set the display format of the FP control accordingly and the exact same issue occurs.


Trying it in an online C compiler:

#include <stdio.h>

int main()
{
    printf("%.30f", 529.0);
    return 0;
}

Outputs 529.000000000000000000000000000000.

 

That online IEEE-754 converter seems to use singles (hex value is 0x44044000). Although that doesn't seem significant here.
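
For what it's worth, a small C check of that single-precision bit pattern, assuming 32-bit IEEE-754 floats:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    /* Reinterpret 529.0f as its raw bits to confirm the 0x44044000
       pattern shown by the online converter. */
    float f = 529.0f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    printf("0x%08X\n", (unsigned)bits);   /* prints 0x44044000 */
    return 0;
}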

Message 10 of 12