06-12-2012 09:20 AM
Hello
I am having some difficulties fitting an exponential curve. For example, with the Linear, Exp, and Power Fitting VI (in labview\examples\math\curvefit.llb) and the parameters:
- number of points: 3000
- a: 1E-9
- b: -0.0006
- c: 1
- and no noise
The results are not really what I expected. Any idea why it doesn't work, or what other solution could be used?
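For reference, here is a quick way to reproduce the data from the question and see why it is awkward (a Python/NumPy sketch; the model used by the LabVIEW example is y = a*exp(b*x) + c):

import numpy as np

# model used by the example: y = a*exp(b*x) + c
a, b, c = 1e-9, -0.0006, 1.0
x = np.arange(3000, dtype=float)
y = a * np.exp(b * x) + c

# the exponential part sits ~9 orders of magnitude below the offset c,
# so only about 7 of its significant digits survive in the DBL values of y
# (1e-9 relative to 1, with eps ~ 2.2e-16)
print(y.min(), y.max())                  # both ~1.0
print((y - c).min(), (y - c).max())      # ~1.65e-10 ... 1e-9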
Thanks in advance
06-12-2012 04:40 PM
I am afraid it could be a problem of accuracy related to the IEEE 754 floating-point representation.
With the chosen coefficients (especially the number of points), it seems that the Levenberg-Marquardt algorithm accumulates errors that lead to a "bad" result.
If better accuracy is really needed, one way to improve this could be to modify the calculation, which is done in DBL (64 bits, corresponding to ~15.9 decimal digits), so that it is performed in EXT (128 bits, corresponding to ~34.0 decimal digits).
jihef
06-13-2012 01:35 AM
Thanks! I understand the problem and the solution, that's a good point.
I tried to change DBL to EXT in Exponential Fit.vi, but modifications to this file and its subVIs are not possible (they open in "copy" mode). I don't like to modify the original files, so I copied these VIs, renamed them, and modified the copies. But now I can't execute them, because the renamed VIs are linked to a library that doesn't reference them...
06-13-2012 02:48 AM - edited 06-13-2012 10:35 AM
EXT will give you about 18-21 decimal digits under Windows (the mantissa is 64 bits), not 34 as falsely claimed above. This won't really help you unless you rewrite everything used inside the entire fitting hierarchy. Even if you do, there will be limitations.
The real problem is that your data causes the internal matrices to become ill-conditioned. The condition number is larger than 1E19, making it impossible to determine improved parameters (because you would be losing more than 19 digits of accuracy, even EXT would not be sufficient).
You need to scale your data into a more reasonable range, then transform the found fitting parameters back to get the original values. For example, if you set c=0 in the fit and also change the termination condition to a much smaller value (e.g. tolerance = 1e-14), the fit will succeed.
If you have real data, subtract a suitable constant from the data before fitting (e.g. the value of the last point), then add it to the found c later.
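A minimal sketch of that offset trick, with SciPy's curve_fit standing in for the LabVIEW VI (ftol/xtol play the role of the termination tolerance mentioned above):

import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b, c):
    return a * np.exp(b * x) + c

# same synthetic data as in the question
x = np.arange(3000, dtype=float)
y = 1e-9 * np.exp(-0.0006 * x) + 1.0

# subtract a suitable constant (here the last point) and tighten the tolerances
offset = y[-1]
popt, _ = curve_fit(model, x, y - offset,
                    p0=[y[0] - offset, -1e-3, 0.0],
                    xtol=1e-14, ftol=1e-14)
a_fit, b_fit, c_fit = popt
c_fit += offset              # add the constant back to the found c
print(a_fit, b_fit, c_fit)   # close to 1e-9, -0.0006, 1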
06-13-2012 04:29 AM
Thank you!
It works much better when the data is kept in a reasonable dynamic range (for example between 1 and 10).
Operations on the data (just subtracting the last value doesn't work if a is too small); see the sketch after this list:
- subtract the last value (c)
- divide by the largest value (a)
- fit
- multiply the fit result by a and add c.
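In the same Python/SciPy stand-in as above, those four steps look roughly like this (c0 and a0 are only rough estimates of c and a, which is fine because only the back-transformation has to be exact):

import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b, c):
    return a * np.exp(b * x) + c

# same synthetic data as in the question
x = np.arange(3000, dtype=float)
y = 1e-9 * np.exp(-0.0006 * x) + 1.0

c0 = y[-1]                # subtract the last value (rough estimate of c)
a0 = (y - c0).max()       # divide by the largest value (rough estimate of a)
y_norm = (y - c0) / a0    # normalized data now spans roughly 0..1

popt, _ = curve_fit(model, x, y_norm, p0=[1.0, -1e-3, 0.0],
                    xtol=1e-14, ftol=1e-14)
a_n, b_n, c_n = popt

# undo the scaling: y = a0*y_norm + c0
a_fit = a_n * a0
b_fit = b_n
c_fit = c_n * a0 + c0
print(a_fit, b_fit, c_fit)   # close to 1e-9, -0.0006, 1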
06-13-2012 06:01 AM - edited 06-13-2012 06:08 AM
@jihef wrote:
I am afraid it could be a problem of accuracy related to the IEEE 754 floating-point representation.
With the chosen coefficients (especially the number of points), it seems that the Levenberg-Marquardt algorithm accumulates errors that lead to a "bad" result.
If better accuracy is really needed, one way to improve this could be to modify the calculation, which is done in DBL (64 bits, corresponding to ~15.9 decimal digits), so that it is performed in EXT (128 bits, corresponding to ~34.0 decimal digits).
jihef
The only problem with this is that EXT is not really a 128-bit floating-point number but an 80-bit floating-point number as implemented by the x86 mathematical coprocessor; the upper six bytes are unused. The 68k architecture, while technically using 128 bits for extended-precision numbers, really only used 80 bits too, padding the number between mantissa and exponent to fill up 128 bits. LabVIEW extends the floating point to the same 128-bit format used on the 68k architecture when flattening data, to make sure flattened data are platform independent, but internally it simply uses the native 80-bit format on x86 CPUs.
Not sure about PPC platforms, but as far as I know the only LabVIEW platform that ever supported true 128-bit floating-point numbers was SPARC, and even there the implementation used a software floating-point library from Sun, in the absence of any hardware support for such floats, which was supposedly VERY slow.
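For anyone who wants to see what the extended type actually gives on their machine, here is a small check using NumPy's longdouble, which maps to the C compiler's native long double (the 80-bit x87 type on typical x86 Linux builds; MSVC/Windows builds of NumPy map it to a plain 64-bit double, so the two rows will match there):

import numpy as np

for name, t in (("DBL / float64", np.float64), ("EXT / longdouble", np.longdouble)):
    info = np.finfo(t)
    print(name)
    print("  significant bits :", info.nmant + 1)        # incl. the leading mantissa bit
    print("  machine epsilon  :", info.eps)
    print("  decimal digits   :", info.precision)
    print("  storage bytes    :", np.dtype(t).itemsize)  # 80-bit values are usually padded to 12 or 16 bytes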