> 3. LabView should have no problem representing 0.1 as a double
> precision number. It's not a number like 1/3.
> LabView 6.1 behaves the same way.
Actually, I'm glad you brought up 1/3. Most of us are comfortable with
the fact that 1/3 is an exact quantity, but that no finite decimal
representation of it is. But remember that LV didn't make up its own
numeric type; we just use the IEEE numerics that Intel or Motorola or
Sun built into the HW.
And IEEE decided many years ago that the low-level format of floating
point numbers is binary. Guess what: 1/10 in base two is an infinitely
repeating binary fraction. It is no different from 1/3 in base ten.
And just for completeness, 1/3 isn't that special: in base nine it is
precisely represented as 0.3.
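
If you want to see the exact value the double actually holds, here is
a minimal sketch in Python (text code only because a block diagram
won't paste into a post; the stored value is the same IEEE 754 double
that LV uses):

# Decimal(float) shows the exact binary value the double really holds.
from decimal import Decimal

print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
# -> the nearest representable double, not one tenth exactly

print(Decimal(1) / Decimal(3))
# 0.3333333333333333333333333333 -> 1/3 gets truncated the same way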
So what you have stumbled across is simply mathematical magic where
finite-precision math tries to pretend it has infinite precision, but
doesn't really. It can carry off the illusion most of the time, but
introduce our friend 0.1 and most of the magic stops working.
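
Here is the illusion breaking down, sketched the same way: add 0.1 ten
times and compare against 1.0, just as an Equal? node on the diagram
would:

# Accumulate ten copies of 0.1 and compare with 1.0.
total = 0.0
for _ in range(10):
    total += 0.1

print(total)          # 0.9999999999999999
print(total == 1.0)   # False -- the rounding errors accumulated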
My recommendation would be to use remainder and quotient with integers,
or to be sure to round your numbers appropriately afterwards.
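
For example (the values here are made up, but the pattern carries over
to the G diagram):

# 1) Quotient & Remainder on integers is exact: e.g. split 1234 tenths
#    into whole units and leftover tenths instead of multiplying by 0.1.
tenths = 1234
whole, remainder = divmod(tenths, 10)
print(whole, remainder)       # 123 4

# 2) Or keep the doubles, but round to the precision you actually need
#    before comparing or displaying.
x = 0.1 + 0.2
print(x == 0.3)               # False
print(round(x, 6) == 0.3)     # True after rounding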
Greg McKaskle