02-03-2009 01:45 PM
I'm sure there's a good reason for this, and I'm sure it has to do with floating-point math and rounding, but take a look at this VI. It seems to calculate two different results for the same operations:
Before you ask, yes, I have to use singles.
02-03-2009 01:57 PM
02-03-2009 02:32 PM
It all starts with the 1/x operation. For 0.001, it doesn't give you 1000 like you'd expect; it gives you 999.99993896484375.
What are you trying to accomplish with this equation? I'm wondering if it can be re-written. Could you also temporarily convert to DBL, perform the operation and convert back to SGL?
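The coercion is easy to reproduce outside LabVIEW. Here's a small Python sketch (my illustration, not from the thread) that emulates the SGL type by round-tripping a value through a 32-bit float with the standard struct module:

```python
import struct

def to_sgl(x):
    # Emulate LabVIEW's SGL type: round a Python float (a double)
    # to the nearest 32-bit single-precision value.
    return struct.unpack('f', struct.pack('f', x))[0]

# 1/x on the single-precision 0.001 lands just below 1000.
recip = to_sgl(1.0 / to_sgl(0.001))
print(f"{recip:.17g}")   # 999.99993896484375
```

The round-trip through `struct.pack('f', ...)` is what forces the rounding: Python floats are doubles, so without it the reciprocal would stay much closer to 1000.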
02-03-2009 02:44 PM - edited 02-03-2009 02:47 PM
Actually, as a work around, I'm using that bottom formula. That seems to give me the desired output.
The intent is basically to calculate the precision of a number. I'm expecting values like 0.1, 0.05, 0.01, 0.001, and it needs to return 1, 1, 2, 3 for those values.
If the problem stemmed from the 1/x operation, I would expect the bottom formula to give the same answer as the top. As you can see, with an input of 0.001 they give different answers. Any thoughts?
02-03-2009 02:51 PM
I've seen a situation like this before. I was writing a macro in MS Access. I wanted to take a number that could be between 1 and perhaps 3000 and have it zero-filled with the year in front of it, so that it would be 09-0001 for the first item of the year, then 09-0010, 09-0156, 09-1000, 09-1856, etc. I used an equation with the log function to determine how many zeroes it would need. It almost always worked, but failed for 1000. I figured it was a round-off error in dealing with the log function and just created a special if-then structure to deal with it the one time a year that 1000 would be the next number up.
Thanks for your message, as it basically confirmed and gave me insight as to why I had problems when I first created that database macro so many years ago.
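For what it's worth, the digit-count-via-log trick can be made robust by counting characters of the decimal string instead of taking a log at all. A quick Python sketch (my own, not from the original Access macro):

```python
import math

def digits_via_log(n):
    # Fragile: if log10 lands just under an integer (as it can with
    # limited precision), floor() truncates and the count is off by one.
    return math.floor(math.log10(n)) + 1

def digits_via_str(n):
    # Robust: count the characters of the decimal representation.
    return len(str(n))

for n in (1, 9, 10, 999, 1000, 3000):
    print(n, digits_via_log(n), digits_via_str(n))
```

In CPython's double-precision math the two happen to agree for these inputs, but the string version can't fail the way the log version did in the macro, because it never goes through floating point at all.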
02-03-2009 03:46 PM
0.001 is not an exact value in binary. If you connect an indicator showing lots of digits you get 0.001000000047497451.
If you implement the function with LabVIEW primitives and the log function, you get different results depending upon whether you do the calculation with single or double precision data.
The fun of finite binary representation of numbers!
Lynn
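Lynn's number is easy to reproduce in any language that exposes 32-bit floats. A Python illustration (my sketch) using the struct module:

```python
import struct

# Round 0.001 to the nearest single-precision value, then read it back
# as a double to see the digits the 32-bit float actually stores.
sgl_001 = struct.unpack('f', struct.pack('f', 0.001))[0]
print(f"{sgl_001:.18f}")   # 0.001000000047497451
```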
02-03-2009 03:48 PM
02-03-2009 04:09 PM
I was just pointing out that many seemingly unexpected effects can occur when you do not adequately account for the binary representation of numbers. Numbers such as 0.1 and 0.05 have infinitely repeating expansions in binary even though they are exact in decimal. You need to think carefully about what you mean by the precision of a number. If you are talking about the precision when expressed in decimal but calculating it in binary, then these effects must be considered.
Can you specify exactly what you are trying to do? Perhaps someone can offer a more robust solution.
Lynn
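On the "more robust solution" question: one possibility (my sketch, not something proposed in the thread) is to do the math in double precision and add a small epsilon before the floor, so a log result that lands a hair under an integer, because the input was rounded to single, still floors to the intended value. The 1e-6 epsilon here is an assumption, sized to swamp single-precision rounding error (relative error around 6e-8) without disturbing legitimate inputs:

```python
import math
import struct

def to_sgl(x):
    # emulate a single-precision (SGL) input
    return struct.unpack('f', struct.pack('f', x))[0]

def precision_digits(x):
    # floor(log10(1/|x|)) computed in double, with an epsilon so an
    # input like SGL 0.001 still gives 3 instead of 2.
    return math.floor(math.log10(abs(1.0 / x)) + 1e-6)

for v in (0.1, 0.05, 0.01, 0.001):
    print(v, precision_digits(to_sgl(v)))   # 1, 1, 2, 3
```

In LabVIEW terms this corresponds to converting the SGL to DBL, doing the whole calculation in DBL with the epsilon added before Floor, and converting back if needed.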
02-05-2009 10:18 AM - edited 02-05-2009 10:21 AM
OK, here is exactly why this happens.
First, let's look at how this calculation works with real numbers:
x=.001
abs(1/.001)=1000
log(1000) = 3
floor(3) = 3
When we set x to a single-precision number, .001 becomes 0.00100000004749745131.
These are the results if you do the steps one at a time in separate expression nodes
(I will use ans to represent the answer from the previous line)
x = .00100000004749745131 (because of the single precision limit)
abs(1/ans) = 999.99993896484375
log(ans) = 3.00000000000000000
floor(ans) = 3
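This step-by-step walkthrough can be mimicked in Python (my sketch) by casting every intermediate result back to single precision with a 32-bit struct round-trip, the way LabVIEW coerces the output of each SGL node:

```python
import math
import struct

def to_sgl(x):
    # cast a double to the nearest single-precision value, as LabVIEW
    # does to the output of each SGL node
    return struct.unpack('f', struct.pack('f', x))[0]

x   = to_sgl(0.001)             # 0.0010000000474974513
ans = to_sgl(abs(1.0 / x))      # 999.99993896484375
ans = to_sgl(math.log10(ans))   # just-under-3 rounds to exactly 3.0
print(math.floor(ans))          # 3
```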
But if you do the whole calculation in one expression node:
x = 0.00100000004749745131
floor(log(abs(1/x))) = 2
This is actually the more correct answer, because in double precision:
if x = 0.00100000004749745131
then abs(1/x)= 999.999938964843749
log(ans) = 2.99999997937211971000
floor(ans) = 2
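And the single-expression path, where the intermediates stay in double, again as a Python sketch of these numbers:

```python
import math
import struct

# single-precision 0.001, viewed as a double
x = struct.unpack('f', struct.pack('f', 0.001))[0]

ans = abs(1.0 / x)      # ≈ 999.99995250255 (stays in double)
ans = math.log10(ans)   # ≈ 2.99999997937, just under 3
print(math.floor(ans))  # 2
```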
When you do this calculation one step at a time, the result is cast to a single after each node, and when log(ans) comes out just below 3 (2.99999997…), single precision cannot represent that number; the closest value it can represent is exactly 3.
But when you do the steps all at once, it maintains double-precision accuracy internally and gives the mathematically correct answer, 2.
This is not an error in LabVIEW, just a demonstration of the limits of precision, though an odd and interesting one. Let me know if this needs any further clarification.
-Hunter
02-05-2009 11:16 AM