I am using DBL precision floats (the default control settings) with the display precision set to 3 decimal places. I fill an array or control with numbers like .001, .002, .003, etc. But if you look at them with a precision of 20 decimal places, you will see a number like 0.01299999999999999940 instead of the 0.013 that I typed into the cell. Incidentally, I copied the number above directly from the array; I hand-typed .013 into the cell.
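A minimal sketch of the same effect, written in Python only because its floats are the same IEEE 754 doubles as LabVIEW's DBL (this is an illustration, not LabVIEW code):

    # 0.013 has no exact binary representation, so the nearest
    # 64-bit double gets stored instead of the value you typed.
    print(f"{0.013:.20f}")  # prints 0.01299999999999999940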
Of course, if you send these numbers around a loop in a shift register and add a value like .001 on each iteration, the result becomes very polluted. If you set the display precision to 3, you do not see these invisible errors, but your program sure knows about them. So when you test whether .013 = .013, it may actually be comparing 0.01299999999999999940 to .013, and of course you do not get an equal result.
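The shift-register accumulation and the failed comparison can be reproduced the same way; again this is a Python sketch standing in for the LabVIEW diagram, not actual LabVIEW code:

    # Mimic a shift register: add 0.001 on each of 13 iterations.
    x = 0.0
    for _ in range(13):
        x += 0.001  # each addition rounds to the nearest double

    print(x)           # 0.013000000000000005 -- the drift has accumulated
    print(x == 0.013)  # False: the running sum and the typed constant are
                       # different doubles, even though both display as
                       # 0.013 at 3 digits of precision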
What is my solution?