Change/add math vi's to prevent rounding errors

You know, your solution doesn't fix the problem; it just changes the numbers it breaks on to less obvious ones. For instance, your method with the single conversion returns the wrong answer for 1713 / 0.0259 (it returns 66139 rem 0 when it should return 66138 rem 0.0258).
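This failure mode isn't LabVIEW-specific; a quick Python sketch (Python only because G code can't be pasted as text) shows the same floor/remainder surprise with 1 and 0.2:

```python
import math

# The double nearest to 0.2 is slightly ABOVE 0.2, so the exact
# ratio 1 / 0.2000...0111 is just under 5 and the floor drops to 4,
# even though decimal math on paper says quotient 5, remainder 0.
quotient = 1.0 // 0.2            # floor division on doubles
remainder = math.fmod(1.0, 0.2)  # the matching remainder

print(quotient)   # 4.0
print(remainder)  # just under 0.2, not 0.0
```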

There's a reason there's an entire field (numerical analysis) of mathematics for dealing with rounding errors.

Now, if you wanted to make it so all user-enterable numbers worked for quotient/remainder, you could use binary-coded decimals, which make the math work the same as if written out on paper (assuming you do your math in base 10; BCD is often used in financial systems). But if you need a number that can't be represented perfectly in base 10 (like 1/3), then you're in trouble again. We could change the system to use rational numbers (fairly easy if you have support for very large integers); then 1/3 works out, as will anything you can write out finitely in any base (1/3 is easy to write in base 3). But if you need irrational numbers (like pi or sqrt(2)), then you're in big trouble. You could use symbolic math; now you have exact numbers, but they're not in a useful form for most applications.
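For a feel of what those alternatives buy you, here is a sketch in Python, whose standard library happens to ship both a decimal and a rational type (illustration only):

```python
from decimal import Decimal
from fractions import Fraction

# Base-10 (BCD-style) arithmetic: exact wherever paper math is exact.
decimal_exact = Decimal("0.1") + Decimal("0.2") == Decimal("0.3")

# The same sum in binary doubles misses by one unit in the last place.
float_exact = (0.1 + 0.2 == 0.3)

# Rationals handle anything finitely expressible in SOME base, 1/3 included.
rational_exact = (Fraction(1, 3) * 3 == 1)

print(decimal_exact, float_exact, rational_exact)  # True False True
```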

One potentially useful tool I've seen (but which is very rare as a built-in in programming languages) is interval arithmetic, which doesn't remove rounding errors but can tell you how big a problem they could be causing. Basically, with interval arithmetic you give a range of possible values, and then all operations on those ranges are very careful about reflecting the effect of rounding errors on their output.

For instance, if the user enters .2 you could create the interval [.2-(10^-9), .2+(10^-9)] (in real use the interval would reflect the accuracy of the hardware representation of that number). The quotient of 1 divided by that would be the interval [4,5], so it doesn't give you an answer, but at least you know there was a problem (note: as long as the bound on .2 is greater than 0, it'll return [4,5]). Now, had the user entered .20001 (or something like that), then the answer would be [4,4], which means the quotient must be exactly 4.
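A bare-bones version of that quotient-with-intervals idea, sketched in Python (a real interval package would also round the division endpoints outward to absorb the rounding of the division itself; this sketch skips that detail):

```python
import math

def quotient_interval(x, lo, hi):
    """Possible integer quotients of x by any divisor in [lo, hi], 0 < lo <= hi."""
    return (math.floor(x / hi), math.floor(x / lo))

eps = 1e-9  # stand-in for the representation-error bound on the entry

print(quotient_interval(1.0, 0.2 - eps, 0.2 + eps))          # (4, 5): ambiguous
print(quotient_interval(1.0, 0.20001 - eps, 0.20001 + eps))  # (4, 4): certain
```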

I've never used interval arithmetic heavily, so I don't know how practical it really is. My gut instinct is that it makes rounding errors appear far worse than they really are (since it always looks at the worst possible case). But if you get it to work, then you can declare yourself safe from rounding errors.

PS. There are times when it's OK (if you're extremely careful) to test for equality between floating-point numbers. The most common is treating them as integers larger than 32 bits, since doubles can perfectly represent integers up to 53 bits wide (52 stored mantissa bits plus the implicit leading bit). But if you do an operation that doesn't return an integer, then you probably can't use equality. You're technically safe if all inputs, intermediate values, and outputs can be represented perfectly in the floating-point format. An example operation would be dividing or multiplying by two (as long as the number isn't too small or too large, respectively).
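The 53-bit claim is easy to check (Python floats are IEEE-754 binary64, the same format as LabVIEW's DBL):

```python
# A double's significand is 53 bits (52 stored + 1 implicit), so every
# integer up to 2**53 is exact and equality on them is safe...
assert 2.0 ** 53 == 2 ** 53

# ...but 2**53 + 1 is the first integer that cannot be represented:
assert 2.0 ** 53 + 1 == 2.0 ** 53

# Dividing or multiplying by two only shifts the exponent, so it is
# exact until underflow/overflow:
x = 12345.0
halving_exact = ((x / 2) * 2 == x)
print(halving_exact)  # True
```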


Message 11 of 45
You're both right and wrong, Matt. What I propose has nothing to do with shifting the number breaks; those stay absolutely the same. It's only about removing rounding errors before comparison, which is something completely different.
 
However... investigating your observations, I noticed that the way it's implemented with the SGL conversion does shift the number breaks. Not because the concept is wrong, but because the DBL-to-SGL conversion does something weird.
 
The DBL and SGL controls with 0.0259 as input both give 0.02589999999999999.
However, the DBL converted to SGL gives 0.0259000007.
 
That's very surprising to me... but that is the reason why the method with the SGL conversion goes wrong. It simply means that you need a better rounding routine, like the "round to # signif. digits".
 
(It also means that floating-point number conversions are dangerous...)
 
 
 
NB: These kinds of problems are exactly the reason why I suggest that LabVIEW needs alternative math primitives built in. Because the comparison operators are so terribly sensitive to quantization errors, you get into these issues all the time, even if you try to work around them yourself. All the more reason for NI to implement it, because it can be solved with the proper rounding routines.
 
At the moment, the comparison operators are sensitive to quantization errors. You either solve that problem by (1) having the comparison operator ignore those last bits, or by (2) making sure that the input values have exactly the same quantization errors.
 
Obviously, the first solution is much easier and more reliable than the second, and that's why I suggest NI implements that. The second option is possible, but that means we need a good (built-in) rounding method that we can call before calling a comparison operator. My "round to # signif. digits" is a first suggestion, but I'm sure there are better/faster methods.
NB: When choosing the second option, routines like the QR should be (re)implemented to include that rounding method before the floor operation.
 
I like the first approach better. By having an option to set how many bits to ignore in the comparison, we immediately get the IEEE implementation again when we select to ignore 0 bits.
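Anthony's "round to # signif. digits" VI isn't attached to the thread, so the following Python sketch of option (2) is only a guess at the same idea (the names round_sig and quotient_remainder are mine, and a production version would handle negative remainders and zero divisors):

```python
import math

def round_sig(x, digits=12):
    """Round x to `digits` significant base-10 digits.

    Note the abs() before the log10, needed for negative inputs
    (as discussed later in the thread).
    """
    if x == 0.0:
        return 0.0
    scale = digits - 1 - math.floor(math.log10(abs(x)))
    return round(x, scale)

def quotient_remainder(x, y, digits=12):
    """QR with the ratio rounded before the floor (option 2 above)."""
    q = math.floor(round_sig(x / y, digits))
    return q, x - q * y

# On IEEE doubles, 0.6 / 0.2 typically lands just below 3, so a plain
# floor gives 2; rounding first recovers the intended quotient 3.
print(quotient_remainder(0.6, 0.2))
```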
Message 12 of 45
Good that you talked about negative numbers, Kevin. That made me realize that I need an 'absolute' conversion before the 10-log in my "round to # signif. digits"!
 
I guess the preferred answer for negative numbers depends on how you use it... I can imagine both approaches being useful.
 
Maybe the QR should be left how it is in this respect... You can get your result very easily by checking the sign of the input before you use QR. But you can't get the other way without rewriting the QR.
Could make a polymorphic QR with both options, of course...

Message Edited by Anthony de Vries on 10-15-2007 11:03 AM

Message Edited by Anthony de Vries on 10-15-2007 11:04 AM

Message 13 of 45
Anthony,

I'm glad you've realised the trials and tribulations of rounding errors in floating-point representation. Many people don't know this.

So if I summarise:

1) The IEEE behaviour is counter-intuitive for you
2) You've made a work-around
3) You want NI to introduce a completely non-standard set of math VIs

I would guess that the easiest way would be to tweak your intuition rather than upset an internationally recognised and tried-and-trusted method of dealing with floating point numbers.

BTW, how does one "remove" rounding errors?  DBLs can only represent certain values.  How does one determine what needs "fixing"? I think a set of VIs of your own making should satisfy your every need, no?

Shane.

PS: you wrote "However, the limited accuracy of the number representation generally should not influence the actual answers we expect." in your original post. I'd be interested in any suggestion you have to get this up and running.

Message Edited by shoneill on 10-15-2007 11:55 AM

Using LV 6.1 and 8.2.1 on W2k (SP4) and WXP (SP2)
Message 14 of 45

Instead of just blindly following IEEE standards, you might actually think about this issue.

You know that when using floating point, the last few digits have no meaning. They consist purely of quantization errors, with additional rounding errors from doing any calculations. Normally, you don't care about that, because you will round the end results; i.e., you discard those meaningless digits, and then get the same results as you had with exact math.

And that's where floating-point comparisons go wrong... Suddenly, those meaningless last digits start to influence the end result and make the comparisons invalid. Obviously, that's an unacceptable situation. You cannot just ignore them, saying that it's OK because it's defined by IEEE. That's ridiculous. The answers are wrong. It's stupid to just define them as OK.

 

Presumably, the IEEE standard assumes that you take care of rounding errors yourself before calling these operators. But that's difficult without a built-in rounding primitive in LabVIEW. And in the case of the QR routine, NI should have implemented a rounding before the comparison, and they haven't. (Neither has MATLAB, which just shows how dangerous it is to make such assumptions...)

 

Like I already indicated, there are multiple ways to solve these issues. You can extend the primitives by adding a rounding option that indicates how many (if any) of those last meaningless digits should be ignored. Or you can add a rounding primitive, and then you must make sure that you call it before the comparison operators. As you always need to call this rounding operation, it makes more sense to include it in the comparison operators.
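Some languages already ship the "comparison with tolerance" primitive this is asking for; Python's math.isclose gives a feel for what such an operator could look like (illustration only, not a LabVIEW feature):

```python
import math

a = 0.1 + 0.2   # carries one unit-in-the-last-place of rounding error
b = 0.3

exact_equal = (a == b)                             # False: last-bit noise
tolerant_equal = math.isclose(a, b, rel_tol=1e-9)  # True: noise ignored

print(exact_equal, tolerant_equal)  # False True
```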

 

BTW... you remove rounding errors by selecting the number of significant digits to which you round the end results. That's the weird thing... people apparently don't realize that this is inherent to using floating point. You define a certain required accuracy for your input and output. Then you select a number representation with enough extra accuracy that the quantization and subsequent rounding errors don't influence the result. That's the way you determine what needs 'fixing'.

Message 15 of 45
Anthony,
 
your proposal, "by selecting the number of significant digits to which you round the end results", may not stand up to scrutiny.
 
Take the number (this is a randomly chosen example and may not correspond to reality...) 0.199999.....9. Perhaps this cannot be represented, so we get 0.200000.....1 instead. You can hack off significant digits all you like, but you won't be able to get back to 0.19999999999..... In fact, the more digits you hack off, the greater the error becomes. Rounding up or down propagates through ALL digits in certain circumstances.
 
If the pros and cons of FP calculations don't suit, maybe it's your way of doing your calculations which needs improving.
 
There are plenty of opportunities to do these routines yourself if you deem them necessary. I still don't understand why you want NI to do them for you.
 
"Obviously, that's an unacceptable situation". I don't find it unacceptable at all.
 
I'm pretty sure that the IEEE approach was used to minimise overall errors.  Maybe you should download some information on the IEEE approach and the reasons behind it before ranting on any further.
 
Shane.
Message 16 of 45
Hi Anthony,

those digits have a meaning - otherwise they wouldn't be there...

It's a fact that IEEE can't cope with each and every number - it's the very nature of binary numbers. Everybody knows about it (or should, at least). Everybody has to handle this.
(It's also a fact that you cannot cope with every number when using hand-calculated decimal numbers; you may start with PI.)

Your problem is: You want a special set of functions. Maybe you're the only one who wants such functions, maybe there are some others too. But (I guess) less than 1% of all programmers (not only LV, all programming languages) want such functions...

Your solution is: don't blame the well-known, well-defined standard for acting differently from your needs... Make your own VIs. Use them whenever you think you need them. Move them into your own user.lib. And don't forget to validate them for each and every use case! And when you think your functions may be needed by other programmers as well, you can distribute them on the internet! OpenG can be a great place to do so...

Message Edited by GerdW on 10-15-2007 02:33 PM

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 17 of 45

The way to have a chance at progress here, I think, is that any users with interest should hash out a decent workaround as open source.  A key reason I favor a separate "LSB's rounding" function rather than a "rounding spec" input to all the comparison-based functions is that I think it helps focus the effort in one place.  Later, a set of user-defined comparison functions can expose an additional input for "rounding spec" and simply call this one common converter internally.  Such an approach helps support consistency of rounding behavior.

I tend to favor rounding in terms of base-2 bits for the (likely) boost in speed and efficiency, which will be quite important in a user implementation when handling biggish arrays one element at a time. I grant that base-10 significant figures would make a more intuitive interface, but I think the target audience ought to be able to make the mental stretch.
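A rough picture of base-2 LSB rounding: mask off the low significand bits of the IEEE-754 bit pattern. This is a truncating sketch, not Kevin's actual proposal; a real version would round rather than truncate, handle NaN/inf, and deal with Shane's later question about neighbouring values that straddle an exponent boundary.

```python
import struct

def drop_lsbs(x, bits):
    """Zero out the lowest `bits` significand bits of a double (truncation)."""
    (pattern,) = struct.unpack("<Q", struct.pack("<d", x))
    pattern &= ~((1 << bits) - 1) & 0xFFFFFFFFFFFFFFFF
    (result,) = struct.unpack("<d", struct.pack("<Q", pattern))
    return result

# 0.1 + 0.2 and 0.3 differ by exactly one unit in the last place...
unequal_raw = ((0.1 + 0.2) != 0.3)
# ...but compare equal once a few low bits are ignored:
equal_masked = (drop_lsbs(0.1 + 0.2, 4) == drop_lsbs(0.3, 4))

print(unequal_raw, equal_masked)  # True True
```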

-Kevin P.

ALERT! LabVIEW's subscription-only policy came to an end (finally!). Unfortunately, pricing favors the captured and committed over new adopters -- so tread carefully.
Message 18 of 45

Kevin,

Is LSB rounding valid when using mantissa and exponents?  What if two neighbouring numbers (in IEEE notation) have different exponents?

Shane.

Message 19 of 45

shoneill,

The difference between 2.0 internally represented as 1.999999999995 and the real number 1.999999 is that the latter is internally represented as 1.99999900003.

Your internal floating-point representation should always have higher precision than the highest precision of your input!

After some calculation, the first might be 1.99999999456 and the second 1.99999900274. If you want to present them as the end result, you round to the required level of precision and get 2.0 and 1.999999 as the answers. The same if you want to compare them.

That's the way to get valid results from floating points. 

The last digits are only a buffer for quantization/rounding errors. They should be removed if you want to do comparisons; otherwise the comparisons become dependent upon pure artefacts. Removing the rounding errors before comparison, or in the final answer, is basically a simple form of the interval arithmetic that Matt W describes. But instead of precisely tracking the precision level, you simply choose a precision that you can be sure rounding errors will never reach.
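Anthony's worked numbers check out: rounding both internal values to the six decimals of required precision recovers the intended answers (plain Python round, for illustration):

```python
a = 1.99999999456  # internal value that "means" 2.0
b = 1.99999900274  # internal value that "means" 1.999999

print(round(a, 6))  # 2.0
print(round(b, 6))  # 1.999999

# The comparison now reflects the intent instead of the error buffer:
assert round(a, 6) != round(b, 6)
```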

Message 20 of 45