06-09-2008 07:15 PM - edited 06-09-2008 07:16 PM
Yes, I understand what you are saying, and you are completely correct. The problem is not that I expect a mathematically perfect representation of a number; it's that LabVIEW computes an 80-bit extended-precision number on my computer and then appears to convert it to a 64-bit representation before displaying it!
If you convert the extended-precision value into a flattened string in order to get at the binary representation of the data, you'll find that it is represented by 80 bits: a 64-bit fraction plus a 15-bit exponent plus one bit for the sign. Delightfully, the Flatten To String function appears to scramble the bits into "noncontiguous" pieces, so about all I can tell for certain is that we have, as expected, an 80-bit extended-precision number in memory. The documentation for the other number-to-Boolean-array and bit-manipulation functions I looked at (even the mantissa-exponent function) claims a maximum input of a 64-bit number (double-precision float max). Correct me if I'm wrong on this one, because I'd really like to be able to see the contiguous binary representation of 80-bit extended floats.
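Incidentally, if anyone wants to poke at the raw bits outside of LabVIEW, here is a minimal C sketch that dumps the bytes of a long double. This assumes a compiler like GCC on x86, where long double is the x87 80-bit extended format; on other platforms long double may be a plain 64-bit double (MSVC, for one) or a 128-bit type, so the byte count and layout will differ.

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        long double x = 0.1111111111L;
        unsigned char bytes[sizeof x];

        memcpy(bytes, &x, sizeof x);

        /* On x86 the 80-bit value lives in the first 10 bytes
           (little-endian); any bytes beyond that are padding.
           The top two bytes hold the sign bit and 15-bit exponent;
           the lower eight bytes hold the 64-bit fraction. */
        printf("sizeof(long double) = %zu bytes\n", sizeof x);
        for (int i = 9; i >= 0; i--)   /* most significant byte first */
            printf("%02X", bytes[i]);
        printf("\n");
        return 0;
    }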
It turns out, though, that what you said about not being able to tell whether we have twenty digits of precision without bit fiddling is not true at all. If you look at the program I wrote, simple addition and subtraction prove beyond a shadow of a doubt that the extended numbers are being stored and calculated with twenty digits of precision on my computer, yet displayed with less.
As you can plainly see in the example I sent earlier:
A = 0.1111111111
B = 0.00000000001111111111
A + B = C = 0.11111111111111111111
We know that
C - A = B
The actual answer we get is
C - A = 0.00000000001111111110887672
instead of the unattainable ideal of
C - A = 0.00000000001111111111
The first nineteen digits of the calculated answer are exactly correct, and the remainder of the actual answer is equal to 88.7672% of the remainder of the ideal answer, so we effectively have 19.887672 digits of accuracy.
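If anyone wants to reproduce this outside LabVIEW, here is the same A + B - A experiment as a quick C sketch. It assumes long double is the 80-bit x87 extended type (true for GCC on x86, but not for MSVC, which treats long double as a plain double), so the exact trailing digits may differ slightly from the values above.

    #include <stdio.h>

    int main(void)
    {
        long double a = 0.1111111111L;
        long double b = 0.00000000001111111111L;
        long double c = a + b;

        printf("c     = %.26Lf\n", c);
        printf("c - a = %.26Lf\n", c - a);   /* ~19-20 correct digits */
        return 0;
    }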
That all sounds well and good until you realize that no individual number on the front panel ever seems to be displayed with more than 16-17 significant digits of accuracy.
As you see below, the number displayed for the value of A+B was definitely not as close to being the right answer as the number LabVIEW stores internally in memory.
A + B = 0.11111111111111111111 (the mathematically ideal result)
A + B = 0.111111111111111105 (what LabVIEW displays as its result)
We know darned well that if the final answer of A+B-A is accurate to twenty digits, then the intermediate step of A+B did not have a huge error in the seventeenth or eighteenth digit! The value being displayed by LabVIEW cannot be the value in the LabVIEW variable, because if it were, the result of the subtract operation would be drastically different!
0.11111111111111110500 (this is what LabVIEW shows as A+B)
0.11111111110000000000 (this is what we entered and what LabVIEW shows for A)
0.00000000001111110500 (this is the best we can expect for A+B-A)
0.00000000001111111110887672 (this is what LabVIEW actually calculates)
The final number LabVIEW calculates magically has extra accuracy conjured back into it somehow! It's roughly three orders of magnitude more accurate than a perfect calculation using the corrupted value of A+B that the display shows us; those three extra correct digits should be impossible unless LabVIEW is displaying a less accurate version of A+B than it is actually using!
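You can watch that counterfactual play out in C, too, under the same 80-bit long double assumption as before: round the extended sum to a 64-bit double (roughly what the display appears to be doing) and redo the subtraction.

    #include <stdio.h>

    int main(void)
    {
        long double a = 0.1111111111L;
        long double b = 0.00000000001111111111L;
        long double c = a + b;             /* full extended-precision sum */
        long double c_shown = (double)c;   /* c rounded to double: roughly what
                                              the front panel appears to show */

        printf("c - a       = %.26Lf\n", c - a);        /* good to ~20 digits */
        printf("c_shown - a = %.26Lf\n", c_shown - a);  /* error near digit 17 */
        return 0;
    }

If the indicator were really showing the stored value, those two printed lines would match; they don't come close.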
This would be like making a huge mistake at the beginning of a math problem, then making another huge mistake at the end, and having them cancel each other out. Except imagine getting that lucky on every answer to every question. No matter what numbers I plug into my LabVIEW program, the intermediate step of A+B has only about 16-17 digits of accuracy, but miraculously the final step of A+B-A has 19-20 digits of accuracy. The final box at the bottom of the program shows why.
If you convert the numbers to double and use doubles to calculate the final answer, you only get 16-17 digits of accuracy. That's no surprise, because 16 digits is about the best you can do with a 64-bit floating-point representation. So it's no wonder every extended number I display appears to have only the accuracy of a 64-bit representation: the display routine is using double-precision numbers, not extended precision.
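For completeness, here is the all-double version of the experiment in C; it tops out around 16-17 significant digits, which is exactly the behavior the front-panel indicator exhibits.

    #include <stdio.h>

    int main(void)
    {
        /* The whole computation carried out in 64-bit doubles. */
        double a = 0.1111111111;
        double b = 0.00000000001111111111;
        double c = a + b;

        printf("c     = %.26f\n", c);
        printf("c - a = %.26f\n", c - a);   /* only ~16-17 digits survive */
        return 0;
    }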
This is not cool at all. The indicator is labeled as accepting an extended-precision number, and it lets the user crank out a ridiculous number of significant digits. There is no little red coercion dot on the input wire telling me, 'Hey, I'm converting to a less accurate representation here!' Instead, the icon shows me 'EXT,' as in 'Hey, I'm set to extended precision!'
The irony is that the documentation for the Add function says it converts its inputs to double, yet the addition obviously can handle extended precision just fine.
I've included a modified version of the VI for you to tinker with. Enter some different numbers on the front panel and see what I mean.
Regardless of all this jazz, if someone knows the real scoop on the original question, please end our suffering: Can LabVIEW display extended floating point numbers properly, or is it converting to double precision somewhere before numerals get written to the front panel indicator?
06-19-2008 02:56 PM
"When I receive more information, I will let you know what we figure out and what course of action we decide to take."
If there really is a precision problem here, I don't imagine this minor issue is liable to adversely affect anyone other than myself. However, if you are reading this post to find out whether there is something fishy about displaying extended-precision numbers, the best I can do for you is relay the tentative guess of a support engineer. He told me over the phone that it looked like the numeric indicator cannot display more precision than can be held in a 64-bit double-precision numeric value, but that he wasn't really sure.
I will post the official determination promptly upon its imminent arrival, assuming that they have email in hell. Cheers.