10-18-2007 05:54 AM
10-18-2007 06:17 AM
10-18-2007 09:19 AM
10-18-2007 12:45 PM
In hopes of making progress on the technical aspects of implementation, I did a little playing around with the modified Q-R posted earlier in the thread. Some observations / questions / comments:
1. My own preference for handling negative x and/or y terms is not always the same as that produced by the floor() rounding function. I would personally like to optionally specify the use of a "round toward 0" vi in place of the floor() function. [My simple implementation of "round toward zero" is to compare the input >= 0. If true use floor(), if false use ceil().]
Example: I've had apps with encoders measuring cumulative angles over thousands of revs. I have a custom control that contains a simple numeric and a round dial gauge. If I start at position 0 and rotate +390, the Q-R will break that down as +1 rev and an additional +30 degrees. If I were to rotate by -390, the floor() version of Q-R would break that down as -2 revs and an additional +330 degrees. In these cases, I'd prefer a "round-toward-zero" version of Q-R that would report -1 rev and an additional -30 degrees. (There's a small sketch contrasting the two behaviors at the end of this point.)
I would propose this as a non-default behavior that can be specified on an input terminal of a modified Q-R. Unless there are specific compelling mathematical reasons why "remainders" should only be expressed as values >= 0...
That would cover the negative numerator case for me. Negative denominators would also need to be handled in a clear, consistent way, but I can't think of a use case for them myself and have no reason to suggest treating them any differently than outlined above.
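In case a concrete comparison helps, here is a rough sketch of the two behaviors using the encoder numbers above (Python used purely as pseudocode, and the function names are mine, not anything from the posted VI):

    import math

    def qr_floor(x, y):
        # Standard floor-based Q-R: the remainder always takes the sign of y
        iq = math.floor(x / y)
        return iq, x - iq * y

    def qr_toward_zero(x, y):
        # Proposed option: round the quotient toward zero instead of toward -inf,
        # so the remainder keeps the sign of the numerator
        q = x / y
        iq = math.floor(q) if q >= 0 else math.ceil(q)
        return iq, x - iq * y

    print(qr_floor(390.0, 360.0))         # (1, 30.0)    -> +1 rev, +30 deg
    print(qr_floor(-390.0, 360.0))        # (-2, 330.0)  -> -2 revs, +330 deg
    print(qr_toward_zero(-390.0, 360.0))  # (-1, -30.0)  -> -1 rev, -30 deg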
2. Using a specified # of significant digits does a nice job of producing the expected integer quotient. However, I'm not so sure about the way the remainder term can end up many orders of magnitude smaller than the place value of the numerator's rightmost significant figure.
Let's say your numerator is on the order of 1e6, your denominator is on the order of 1e0, and you specify 10 significant figures. Ok, you compute the quotient, round it to 10 significant figures, then round it to an integer. Then, to find the remainder, you subtract IQ * denom from the numerator. So now you've got a subtraction of two values on the order of 1e6, and in some cases the difference may come out on the order of 1e-9, which is reported as R, the remainder.
So we've specified that we only want to consider 10 significant figures when computing the integer quotient IQ. But our remainder R demonstrates that when subtracting the order-1e6 values, we treated digits down at the 1e-9 place as significant, which implies 15 or 16 significant figures. Wouldn't it be more consistent to round off the R term after the subtraction so that its last retained digit is at the 1e-3 place, i.e. ten significant figures counting down from the 1e6 place of the values being subtracted?
I've gone back and forth on this, but I lean toward thinking that if there's a modified Q-R with an input spec about significant figures, then the rules for significant figures should be followed both for the division which produces the IQ term *AND* for the subtraction which produces the R term.
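To make the scenario concrete, here is a rough sketch of what I mean (Python as pseudocode again, and the function names are made up, not from the posted VIs). The quotient is rounded at 10 significant figures, but the raw subtraction still produces a remainder far below the numerator's tenth significant figure unless R gets rounded as well:

    import math

    def round_sig(x, digits):
        # Round x to the given number of significant figures
        if x == 0:
            return 0.0
        exp = math.floor(math.log10(abs(x)))
        return round(x, digits - 1 - exp)

    def qr_sig(x, y, digits=10):
        # Round the raw quotient to 'digits' significant figures, then to integer
        iq = math.floor(round_sig(x / y, digits))
        r = x - iq * y   # raw remainder, full DBL precision
        # The extra step proposed above: also limit R to the same significant-figure
        # budget, measured from the magnitude of the numerator
        place = math.floor(math.log10(abs(x))) - (digits - 1)
        return iq, r, round(r, -place)

    # Numerator of order 1e6, denominator of order 1e0
    iq, r_raw, r_rounded = qr_sig(3000000.000000001, 3.0)
    print(iq)         # 1000000
    print(r_raw)      # ~1e-9, far below the 10th significant figure of the numerator
    print(r_rounded)  # 0.0 once R is held to the same 10-figure budget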
3. What are good defaults for the # of significant figures to use for DBL's? I mostly deal with measurement data. The stuff that comes off an A/D converter rarely has more than 5 or 6. However, if I measure encoder angles in degrees, significant digits aren't really the best measure of precision. Instead I get absolute precision down to a specific power-of-10 decimal position, like maybe the 1e-3 place. The number of additional significant digits to the left of the decimal is, in principle, unbounded.
Another consideration is the amount of error that can creep into the low-order bits from other common processing functions. I think of filters first, though a good argument can be made that the choice to perform filtering implies a wanton disregard for the original data's lowest significant digits anyway.
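A quick illustration of the difference between those two notions of precision (Python as pseudocode, numbers made up):

    import math

    angle = 123456.789123   # degrees accumulated over many revolutions

    # (a) 6 significant figures: the rounding scales with the magnitude,
    #     so the fractional part is lost once the accumulated angle gets large
    exp = math.floor(math.log10(abs(angle)))
    sig6 = round(angle, 6 - 1 - exp)    # -> 123457.0

    # (b) a fixed decimal place (the 1e-3 place): absolute precision,
    #     independent of how many revolutions have accumulated
    milli = round(angle, 3)             # -> 123456.789

    print(sig6, milli)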
-Kevin P.
10-18-2007 01:48 PM
10-19-2007 08:36 AM
Shane wrote: "Kevin, I still have a strong suspicion that simply limiting the 'significant digits' won't work because what happens when X.999...99 gets rounded to (X+1).000...01?"
I'm not sure I follow. Significant digits aren't just truncated; there's a step of rounding to integer. And integer values can be represented exactly in floating point. (Right? Maybe I need to look at the raw bit patterns...) Maybe the modified Q-R should return an integer datatype for IQ?
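A quick sanity check along those lines (Python here just because it's handy for poking at the representation; a LabVIEW DBL is the same IEEE 754 double):

    # A DBL has a 53-bit mantissa, so integers are exact up to 2**53
    print(float(2**53) == 2**53)          # True: 2**53 is still exact
    print(float(2**53 + 1) == 2**53 + 1)  # False: one past the limit gets rounded

    # Rounding X.999...99 to integer lands exactly on X+1
    v = 12345.999999999999
    print(round(v))                       # 12346 exactly, no .000...01 tail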
10-19-2007 10:23 AM
10-22-2007 07:34 AM
I think both VIs should be completely safe now. I did some testing, and both are still very fast, especially considering the typical usage.
For easy use with the comparison primitives, I added a 'Round two values to # sign. digits' VI, so that you can easily create the following construction:
I think that's a nice, small construction which makes it easy to use, and it prevents having to make a complete set of replacement VIs. You can't do this trick for Q-R, though, so that one still needs to be replaced completely.
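For anyone reading along without the VIs attached, the construction amounts to something like this (a Python stand-in with my own names; the real thing is just the new subVI wired in front of the comparison primitive):

    import math

    def round_sig(x, digits):
        # Round a single value to 'digits' significant figures
        if x == 0:
            return 0.0
        return round(x, digits - 1 - math.floor(math.log10(abs(x))))

    def round_two_to_sig(a, b, digits):
        # 'Round two values to # sign. digits': both outputs feed a comparison primitive
        return round_sig(a, digits), round_sig(b, digits)

    a, b = 0.1 + 0.2, 0.3
    ra, rb = round_two_to_sig(a, b, 12)
    print(a == b)    # False: the raw DBLs differ in the last couple of bits
    print(ra == rb)  # True: equal once both are rounded to 12 significant figures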
I wonder... is it possible to make a polymorphic VI yourself that works like 'Build Array', where you can resize it to add more inputs?
10-22-2007 07:49 AM
10-22-2007 08:13 AM
Ah yes... I always forget about the LabVIEW version thing. I'd better post LabVIEW 8.0 versions as well, otherwise I'll just get a request for those too.
P.S. Because I frequently use values with units, I chose to add polymorphic unit support to the round VIs.