You get a star for that answer... very good. It took me long enough to prove this is what was really happening, but I got there.
The problem here is that the coercion routines for the float values aren't actually coercing the slider value onto exact 5 mV increments. When two values differ only in the last few (four, as far as I could see) least significant digits, they aren't interpreted as numerically different, and so aren't coerced.
So when the slider is moved to -5 mV, the value may be exactly -5 mV, or it may be -5 mV +/- a few bits. Either way it's interpreted as -5 mV, but the off value is just left at -5 mV +/- a few bits instead of being set to -5 mV exactly (and by "exactly," I mean whatever the repeating fraction comes out to when you go from decimal to binary). That's the problem--the value isn't set exactly. When it's off by a few bits and you then move the pointer towards zero, those few bits are left over, yet the increment still falls within the exact-5 mV coercion test. So instead of computing (5 mV exactly) - (5 mV exactly) = 0, it computes (5 mV +/- a few bits) - (5 mV exactly) = -6.07 aV.
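To make the mechanism concrete, here's a minimal sketch of what I suspect is going on (in Python rather than LabVIEW, since I can't paste a diagram here; the 1e-12 tolerance and the function name are my own stand-ins, not anything from NI):

```python
STEP = 0.005   # 5 mV increment
TOL = 1e-12    # hypothetical comparison tolerance, stand-in for NI's

def coerce(value):
    """Tolerance-based coercion that tests against the 5 mV grid
    but leaves a 'close enough' value untouched -- the suspected bug."""
    nearest = round(value / STEP) * STEP
    if abs(value - nearest) < TOL:
        return value   # within tolerance: kept as-is, NOT snapped exactly
    return nearest

slightly_off = -0.005 + 5e-18   # -5 mV +/- a few bits in the last digits
stored = coerce(slightly_off)   # passes the test, stays slightly off
exact = coerce(-0.005)          # already on the grid

print(stored - exact)           # ~5e-18 instead of 0 -- the leftover "aV"
```

Returning `nearest` unconditionally would snap the value exactly every time, which is what the coercion apparently isn't doing.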
I don't see how NI would even fix this--it's just annoying. I can't multiply by some constant on the block diagram without a lot of trouble, because I'm dynamically modifying the slider's limit values. There might be a related workaround, though--thanks for the suggestion.
For the curious, to see the guts of the floating-point problem and how I arrived at the above conclusion, see the second attached VI, which basically calculates the value of a float from its binary representation and shows how LabVIEW interprets two different numbers as one and the same when the digits that differ are out around E-18.
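For anyone who wants a text version of that idea, here's a rough Python equivalent of what the VI does (my own sketch, not the attached VI itself): pull apart a double's 64 bits, rebuild the value by hand, and watch two "equal-looking" numbers differ only out around 1e-18.

```python
import struct

def bits_of(x):
    """Raw 64-bit IEEE 754 pattern of a double, as a binary string."""
    (n,) = struct.unpack(">Q", struct.pack(">d", x))
    return f"{n:064b}"

def value_from_bits(b):
    """Rebuild a (normal) double from its sign, exponent, and fraction."""
    sign = -1.0 if b[0] == "1" else 1.0
    exponent = int(b[1:12], 2) - 1023                  # unbias the exponent
    fraction = 1 + sum(int(c) * 2.0 ** -(i + 1)        # implicit leading 1
                       for i, c in enumerate(b[12:]))
    return sign * fraction * 2.0 ** exponent

a = -0.005          # -5 mV, as exact as binary allows
b = -0.005 + 5e-18  # -5 mV +/- a few bits

print(bits_of(a))   # differs from the next line only in the trailing bits
print(bits_of(b))
print(value_from_bits(bits_of(a)) - value_from_bits(bits_of(b)))  # ~ -5e-18
```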