10-12-2007 08:38 PM
10-15-2007 05:45 AM
Instead of just blindly following the IEEE standard, you might actually think about this issue.
You know that when using floating point, the last few digits have no meaning. They consist purely of quantization errors, plus additional rounding errors from any calculations. Normally you don't care about that, because you round the end results: you discard those meaningless digits, and then get the same results as you would with exact math.
And that's where floating-point comparisons go wrong... Suddenly those meaningless last digits start to influence the end result, and make the comparisons invalid. Obviously that's an unacceptable situation. You cannot just ignore them and say it's OK because it's defined by IEEE. That's ridiculous. The answers are wrong. It's stupid to just define them as OK.
Presumably, the IEEE standard assumes that you take care of rounding errors yourself before calling these operators. But that's difficult without a built-in rounding primitive in LabVIEW. And in the case of the QR routine, NI should have implemented a rounding step before the comparison, and they haven't. (Neither has MATLAB, which just shows how dangerous it is to make such assumptions...)
Like I already indicated, there are multiple ways to solve these issues. You can extend the primitives by adding a rounding option that indicates how many (if any) of those last meaningless digits should be ignored. Or you can add a rounding primitive, and then make sure you call it before the comparison operators. Since you always need to call this rounding operation, it makes more sense to include it in the comparison operators.
BTW... you remove rounding errors by selecting the number of significant digits to which you round the end results. That's the weird thing... people apparently don't realize that this is inherent to using floating point. You define a certain required accuracy for your input and output. Then you select a number representation with enough extra accuracy that the quantization and subsequent rounding errors don't influence the result. That's how you determine what needs 'fixing'.
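The "round to significant digits before comparing" idea above is easy to sketch. Here is a minimal Python illustration (the names `round_sig` and `approx_equal`, and the 9-digit default, are mine, not an NI or thread API):

```python
import math

def round_sig(x: float, digits: int) -> float:
    """Round x to the given number of significant (base-10) digits."""
    if x == 0.0 or not math.isfinite(x):
        return x
    magnitude = math.floor(math.log10(abs(x)))
    return round(x, digits - 1 - magnitude)

def approx_equal(a: float, b: float, digits: int = 9) -> bool:
    """Compare after discarding the meaningless trailing digits."""
    return round_sig(a, digits) == round_sig(b, digits)

# Raw IEEE 754 doubles: 0.1 + 0.2 != 0.3 ...
print(0.1 + 0.2 == 0.3)              # False
# ...but the values agree once rounding noise is discarded.
print(approx_equal(0.1 + 0.2, 0.3))  # True
```

The `digits` parameter plays the role of the proposed rounding option on the comparison primitives: it says how many digits you trust, and everything below that is treated as error buffer.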
10-15-2007 07:48 AM
The way to have a chance at progress here, I think, is for any interested users to hash out a decent workaround as open source. A key reason I favor a separate "LSBs rounding" function rather than a "rounding spec" input on all the comparison-based functions is that it focuses the effort in one place. Later, a set of user-defined comparison functions can expose an additional "rounding spec" input and simply call this one common converter internally. That approach helps keep the rounding behavior consistent.
I tend to favor rounding in terms of base-2 bits for the (likely) boost in speed and efficiency, which will be quite important in a user implementation that handles biggish arrays one element at a time. I grant that base-10 significant figures would make a more intuitive interface, but I think the target audience ought to be able to make the mental stretch.
-Kevin P.
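A base-2 "LSBs rounding" converter of the kind described here can be sketched by working directly on the IEEE 754 bit pattern. This Python version (the function name and the round-half-up detail are my assumptions, not from the thread) rounds away the `n_bits` least-significant mantissa bits:

```python
import struct

def round_away_lsbs(x: float, n_bits: int) -> float:
    """Round a finite IEEE 754 double so its n_bits least-significant
    mantissa bits become zero, using round-half-up on the dropped bits.
    Caveat: doubles are sign-magnitude, so this effectively rounds |x|;
    a sketch, not production code (no NaN/overflow handling)."""
    if n_bits <= 0:
        return x
    (u,) = struct.unpack("<Q", struct.pack("<d", x))
    half = 1 << (n_bits - 1)
    u = (u + half) & ~((1 << n_bits) - 1)
    return struct.unpack("<d", struct.pack("<Q", u))[0]

# Two values that differ only by rounding noise agree once the
# bottom 12 mantissa bits are rounded away:
print(round_away_lsbs(0.1 + 0.2, 12) == round_away_lsbs(0.3, 12))  # True
```

Because the whole operation is an integer add and mask, it should be cheap enough to apply element-by-element over large arrays, which is the speed argument made above.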
10-15-2007 08:05 AM
Kevin,
Is LSB rounding valid when using mantissa and exponents? What if two neighbouring numbers (in IEEE notation) have different exponents?
Shane.
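For what it's worth, the usual way to make an LSB/ULP notion robust across exponent boundaries is to reinterpret the bit patterns as ordered integers, under which adjacent doubles always differ by exactly 1 even when their exponent fields differ. A Python sketch (function names are mine):

```python
import math
import struct

def ordered(x: float) -> int:
    """Map a double onto an integer scale where adjacent floats
    differ by exactly 1, even across exponent boundaries."""
    (u,) = struct.unpack("<Q", struct.pack("<d", x))
    return u if u < (1 << 63) else (1 << 63) - u  # fold negatives below zero

def ulps_apart(a: float, b: float) -> int:
    return abs(ordered(a) - ordered(b))

# 2.0 and its immediate lower neighbour (1.9999999999999998) have
# different exponent fields, yet they are exactly 1 ULP apart:
below_two = math.nextafter(2.0, 0.0)
print(ulps_apart(2.0, below_two))  # 1
```

So an LSB-based comparison defined on this integer scale stays valid at exponent boundaries; it is only naive mantissa masking, applied without the carry into the exponent, that breaks there.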
10-15-2007 09:21 AM
shoneill,
The difference between 2.0 internally represented as 1.999999999995 and the real number 1.999999 is that the latter is internally represented as 1.99999900003.
Your internal floating-point representation should always have higher precision than the highest precision of your input!
After some calculation, the first might be 1.99999999456 and the second 1.99999900274. If you want to present them as the end result, you round to the required level of precision and get 2.0 and 1.999999 as the answers. The same applies if you want to compare them.
That's the way to get valid results from floating points.
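Plugging the post's numbers into any language with a decimal rounding function shows the effect (Python's built-in `round` is used here purely for illustration):

```python
a = 1.99999999456   # "2.0" plus accumulated rounding error
b = 1.99999900274   # "1.999999" plus accumulated rounding error

# Round to the required precision (six decimal places) before
# presenting or comparing:
print(round(a, 6))                  # 2.0
print(round(b, 6))                  # 1.999999
print(round(a, 6) == round(b, 6))   # False: genuinely different values
```

After rounding, the comparison reflects the real difference between the values instead of the noise in their trailing digits.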
The last digits are only a buffer for quantization/rounding errors. They should be removed before you do a comparison; otherwise the comparisons become dependent upon pure artefacts. Removing the rounding errors before comparison, or in the final answer, is basically a simple form of the interval arithmetic that Matt W describes. But instead of precisely tracking the precision level, you simply choose a precision that you can be sure the rounding errors will never reach.
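Some environments already ship a comparison primitive of exactly this shape; Python's `math.isclose` with a relative tolerance is one example. In this sketch, the 1e-8 tolerance is my assumed "buffer" choice: coarser than the ~16 digits a double carries, finer than the 6 digits the inputs are good to:

```python
import math

a = 1.99999999456   # internally "2.0" with rounding errors
b = 1.99999900274   # internally "1.999999" with rounding errors

# Comparing at 8 significant digits: rounding noise (down around the
# 9th digit and beyond) can never flip the result, while the real
# difference at the 7th digit still registers.
print(math.isclose(a, 2.0, rel_tol=1e-8))  # True
print(math.isclose(a, b,   rel_tol=1e-8))  # False
```

This is the "choose a precision the rounding errors can never reach" rule expressed as a single tolerance parameter.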