Levenberg-Marquardt Fit - Scaling the data, number of iterations, stop criterion

Hi there,
I am currently working on some extensive L-M fits involving 8 and 11 coefficients, using LabVIEW 7.1. All fits run without any error message (took some time, though ;)). It would be great if you could answer the following questions:

1. In the help file under "Levenberg Marquardt" it says: "Scale the X-data so that delta X is at least 1E-2."
This sentence is quite ambiguous. Should delta X be larger or smaller than 1E-2? In my opinion it should be smaller, let's say e.g. 1E-4, shouldn't it?
The results of the fit vary if I change the delta X! That's why I somewhat doubt my fit results so far.
How should I scale it? I have 11000 measuring points, and the corresponding Y-values are normalized and thus lie between 0 and 1.

2. Connected to the fit described above:
Is there a possibility to display the number of iterations used for a certain fit?

3. What is the stop criterion of the L-M VI? Normally it would be given by an accuracy goal or something comparable.
Is there a possibility to change it somehow?

As written above, I don't trust the fit results as long as I am not sure I am using the VI properly.
Thanks in advance for your help.

Message 1 of 5
I no longer have 7.1 installed, but all fitting algorithms received a significant face lift in LabVIEW 8.0 and are now much better. Could you possibly attach your VIs? (Or you can send them to me via e-mail.)
 
If I remember right, even in 7.1 there were a few possibilities. Are you using the express VI?
 
(1) I don't understand the "scaling" comment. The fitting criterion is chi-square and is completely independent of x; are you talking about scaling in Y perhaps (later you do)? You can open the lev-mar subVI and inspect the code. You will see that if the fit is better than 1e-8 or 1e-6 (I don't remember), the fit is considered converged. This means that if your y-data is e.g. in nanovolts, the fit is considered converged at iteration 1 and you basically get the initial estimates back. The easy solution would be to wire the standard deviation array input with a reasonable value. Have you tried that?
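To illustrate the point about an absolute convergence threshold being scale-dependent, here is a small hypothetical sketch (the numbers and the 1e-8 threshold are taken from this discussion, not from the VI itself): the same data expressed in volts versus nanovolts gives wildly different raw chi-square values, and wiring a realistic standard deviation restores a scale-free criterion.

```python
import numpy as np

# Hypothetical illustration: an absolute chi-square threshold (here 1e-8,
# as discussed for the old LabVIEW Lev-Mar subVI) is scale-dependent.
y_volts = np.array([1.0, 2.0, 3.0])      # data expressed in volts
y_nano  = y_volts * 1e-9                  # the same data in nanovolts
guess_v = np.array([1.1, 1.9, 3.2])       # some initial model values
guess_n = guess_v * 1e-9

def chi_square(y, model, sigma=1.0):
    """Unweighted (sigma=1) or weighted sum of squared residuals."""
    return np.sum(((y - model) / sigma) ** 2)

print(chi_square(y_volts, guess_v))       # well above 1e-8: the fit iterates
print(chi_square(y_nano, guess_n))        # below 1e-8: "converged" before starting

# Wiring a realistic standard deviation makes the criterion scale-free:
sigma_n = 0.1e-9                          # assumed 0.1 nV measurement noise
print(chi_square(y_nano, guess_n, sigma_n))   # iterations proceed again
```

The weighted value is identical whichever unit the data is stored in, which is exactly why wiring the standard deviation array input helps.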
 
Scaling in x can matter if you use numeric partial derivatives since it uses a fixed dx (1e-6?). So again if your function wildly changes over a much smaller range, your partial derivatives go to hell.
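The finite-difference problem can be sketched in a few lines. This is a hypothetical stand-alone example (the 1e-8 step is the value mentioned above, and `g` is an artificially badly scaled function, not the poster's model): with a fixed absolute step, the derivative is fine when x varies on the order of 1, and nonsense when the function changes over a much smaller x range than the step.

```python
import numpy as np

# A forward difference with a fixed absolute step (the old VI reportedly
# hard-codes something like dx = 1e-8) fails when the function varies on
# a much smaller x scale than the step itself.
def numeric_derivative(f, x, dx=1e-8):
    return (f(x + dx) - f(x)) / dx

f = np.sin   # true derivative: cos

# x on the order of 1: the fixed step works well.
print(numeric_derivative(f, 0.5), np.cos(0.5))

# The same curve compressed into a ~1e-10 wide x range: one fixed step of
# 1e-8 jumps across many oscillations and the result is garbage.
g = lambda x: np.sin(x / 1e-10)       # hypothetical badly scaled model
true_g_prime = np.cos(0.5) / 1e-10    # analytic derivative at x = 0.5e-10
print(numeric_derivative(g, 0.5e-10), true_g_prime)
```

Rescaling x so the model varies on the order of 1 before fitting avoids this failure mode entirely.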
 
In any case, I would strongly suggest upgrading to 8.0 or higher.
 
(2) In LabVIEW 8.0 you get the number of function calls (which is more than the number of iterations). In all versions you can customize the existing functions to output the number of iterations and save it elsewhere.
 
(3) In LabVIEW 8.0, you can define the termination criteria via an input. In 7.1, the only available termination is the number of iterations, but you can modify the stock VI for anything you want.
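The structure of such a customizable termination is easy to sketch. The following is a simplified Gauss-Newton-style loop in Python, not the LabVIEW implementation; the point is how a chi-square improvement tolerance and a maximum iteration count combine, and how the iteration count can be reported as an extra output (question 2 above).

```python
import numpy as np

# Minimal sketch (not the LabVIEW code) of an iterative fit loop whose
# termination is controlled by both a chi-square improvement tolerance
# and a maximum iteration count.
def fit_line(x, y, tol=1e-8, max_iter=100):
    a, b = 0.0, 0.0                # initial estimates for y = a*x + b
    prev_chi2 = np.inf
    for iteration in range(1, max_iter + 1):
        # For a linear model one least-squares step solves exactly, but
        # the loop/termination structure is what matters here.
        J = np.column_stack([x, np.ones_like(x)])    # Jacobian
        r = y - (a * x + b)                          # residuals
        da, db = np.linalg.lstsq(J, r, rcond=None)[0]
        a, b = a + da, b + db
        chi2 = np.sum((y - (a * x + b)) ** 2)
        if abs(prev_chi2 - chi2) < tol:              # improvement criterion
            break
        prev_chi2 = chi2
    return a, b, iteration        # iteration count reported as an output

x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0
a, b, n_iter = fit_line(x, y)
print(a, b, n_iter)
```

Exposing `tol` and `max_iter` as inputs, and `iteration` as an output, is exactly the kind of modification that could be made to the stock 7.1 VI.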
Message 2 of 5
Hi Altenbach,

thanks for the fast answer...

@LabVIEW 8.0 upgrade: That was what I thought when I checked the new features of 8.0... I'll talk to my Prof...
 
@express VI: No, I am using the normal "Levenberg-Marquardt.vi" found under "Analyze/Mathematics/Curve Fittings"

@delta X: see the attached "help-file.jpg" image. It shows the part of the help file containing the "delta X" passage. X- and Y-values are used correctly in my question.
I have checked the code of the L-M fit VI, and in the subVI "Levenberg Marquardt P.D.VI" I've found the constant "1E-8" followed by a "Logarithm Base 2" and "Power Of 2", which is finally used as the numerical step width for the derivatives.
 
I understand the situation now as follows: my delta X should be >1E-2 (e.g. 0.1 or 1) in order to be much greater than the numerical step width (1E-8) with which the algorithm works. Then my function changes on a larger scale than the step the Lev-Mar fit uses internally, which would prevent my partial derivatives from "going to hell" 🙂
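Under that interpretation, the scaling rule reduces to a quick sanity check. The numbers below are hypothetical (an assumed raw spacing of 1e-9 and an assumed rescale factor), but they show the idea: keep the spacing between adjacent x values far above the fixed internal step width.

```python
# Sanity check (hypothetical numbers) of the interpretation above: the
# spacing between adjacent x values (delta X) should stay far above the
# fixed numerical-derivative step width found in the subVI.
n_points = 11000
numeric_step = 1e-8                           # step width found in the subVI

x_raw = [i * 1e-9 for i in range(n_points)]   # assumed raw x values, 1e-9 spacing
delta_x_raw = x_raw[1] - x_raw[0]
print(delta_x_raw > 1e-2)                     # False: raw spacing is far too small

scale = 1e8                                   # rescale so the spacing becomes ~0.1
x_scaled = [xi * scale for xi in x_raw]
delta_x_scaled = x_scaled[1] - x_scaled[0]
print(delta_x_scaled > 1e-2)                  # True: now safely above the step
```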

There is another "1E-8" in a case structure of the subVI "Lev Mar Prep.vi" which is probably the accuracy limitation of the complete fit-algorithm.

I haven't wired a constant standard deviation array so far, and I don't think it would be necessary, since my Y-values are between 0 and 1, which should give the algorithm enough digits to work properly.
 
Nevertheless, I had several fits running with deltaX=1 first, followed by deltaX=0.01, and the results varied. What should I do now? Which are the correct ones?
Unfortunately the last fit of my VI contains coefficients which depend on each other, which makes the standard deviation values for the best-fit parameters useless (they are MUCH larger than the original values of the best-fit coefficients). Thus I cannot compare the "absolute errors" of the fits in order to check whether they cover the same values.
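That blow-up of the standard deviations is exactly what near-degenerate coefficients produce. A hypothetical sketch (not the poster's model): for a model whose Jacobian columns for two parameters are almost identical, the normal matrix J^T J is nearly singular and the covariance diagonal for those parameters explodes, while a well-determined parameter keeps a small error.

```python
import numpy as np

# Hypothetical sketch: for f(x) = a + b*(1 + eps*x^2) + c*x, the Jacobian
# columns for a and b are nearly identical, so the covariance entries for
# a and b blow up even though the fit itself can still look fine.
x = np.linspace(0.0, 1.0, 100)
sigma = 0.01            # assumed measurement noise
eps = 1e-4              # strength of the tiny difference between the a and b columns

J = np.column_stack([
    np.ones_like(x),        # d f / d a
    1.0 + eps * x**2,       # d f / d b  (almost the same as the a column)
    x,                      # d f / d c
])

cov = sigma**2 * np.linalg.inv(J.T @ J)   # linear-model parameter covariance
stderr = np.sqrt(np.diag(cov))
print(stderr)   # the a and b errors dwarf the c error by orders of magnitude
```

When this happens, reparametrizing the model (e.g. fitting the sum of the coupled coefficients instead of each one separately) is usually the only way to get meaningful error estimates.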

The version of the VI attached is an older one; the later ones are more complicated, since they include additional calculations which have no influence on the problem discussed here.

Unfortunately, running the program with my settings takes quite a long time, which makes it difficult to quickly change some settings and then have it recalculated again and again 😞

Thanks again for the help.
 


 

Message 3 of 5
Do you have a typical data file too?
 
OK, you are using the version that uses a formula string. I never use that because I don't like it at all. I guess the formula parsing is quite slow. I used the subVI version for lev-mar. I highly prefer wires over text formulas. Do you have a link explaining the modeling?
 
For some reason you have about 50% of the data in EXT precision, and you are constantly converting from DBL to EXT and back. This always causes extra data copies. All analysis VIs operate strictly on DBL, so it would be worth cleaning this up. What is your reason for using EXT?
 
Much of your code is overly complicated. It seems you are using a flat sequence purely for documentation purposes. Sometimes you read from multiple instances of the same local variable in the same frame. Why not use a single wire?
 
Look at the two examples below shown with a simpler alternative next to it.
 
(1) Why do you need to read from the same local variable at each iteration of the loop (11000 times!)? If the variable changed while the loop is executing, the data would be garbage anyway. Read once before the loop!
(2) You don't need to wire 20 indices to get the diagonal elements, just use a FOR loop. :)
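For readers more at home in text languages, point (2) translates roughly as follows (a hypothetical analogy, not LabVIEW code): instead of writing out one index operation per element by hand, loop over the indices, or use the library's built-in diagonal extraction.

```python
import numpy as np

# Text-language analogy of the block-diagram fix: don't "wire" one index
# operation per element; loop, or use the built-in.
m = np.arange(16).reshape(4, 4)    # a small stand-in matrix

# hand-wired equivalent: m[0][0], m[1][1], ... written out one by one
diagonal_loop = [m[i][i] for i in range(len(m))]   # the FOR-loop way
diagonal_np = np.diag(m)                           # the built-in way

print(diagonal_loop)   # both give the same diagonal elements
```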
 

Message Edited by altenbach on 03-06-2007 04:56 PM

Message 4 of 5
You were right, I used the flat sequence as well as multiple local variables in a frame only for documentation and ordering purposes. I've changed it in the VI I am currently working with. Thanks!

Actually there is no good reason for the mixed usage of EXT and DBL values. If the analysis VIs work with doubles, there's no need to use EXT anywhere. Changed it as well...

Thank you for the simpler alternatives! I started learning LabVIEW only a couple of weeks ago, and now I am gaining my first experience by dealing with such a VI. Anyway, it was 10 indices and not 20 ;)

Attached is a typical data set (the first column contains the important values) showing 6 peaks (in ascending x direction): small, large, small, small, large, small.
The periodic large peaks are described by the first part of the fitting formula:
o = offset
a = amplitude
r = not important
v = offset in x-direction
l = period (distance between the large peaks)
The small peaks are represented by 4 individual Lorentz curves:
w = FWHM
z = distance to the left/right of one of the large peaks (two small peaks are located symmetrically around each large peak)

The coefficients e, f, g are used for a non-linear scaling of the x-axis (= time). That's why the complete polynomial represents the x-value of the first L-M fit.
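The model structure described above can be sketched roughly as follows. This is a hypothetical reconstruction: the exact form of the periodic large-peak term is not given in the thread, so a Lorentzian comb stands in for it, and the polynomial time scaling is assumed to be cubic. Symbol names follow the post (`o`, `a`, `v`, `l`, `w`, `z`); `a_small` is an invented name for the small-peak amplitude.

```python
import numpy as np

def lorentzian(x, x0, w):
    """Unit-amplitude Lorentzian with FWHM w centered at x0."""
    return (w / 2) ** 2 / ((x - x0) ** 2 + (w / 2) ** 2)

# Hypothetical sketch of the described model: offset + periodic large
# peaks + two small peaks placed symmetrically around each large peak,
# evaluated on a polynomially rescaled time axis.
def model(t, o, a, v, l, w, z, a_small, e, f, g):
    x = e * t + f * t**2 + g * t**3          # assumed polynomial x-axis scaling
    y = o
    for k in (0, 1):                         # two large-peak periods in the window
        center = v + k * l
        y = y + a * lorentzian(x, center, w)            # large peak
        y = y + a_small * lorentzian(x, center - z, w)  # small peak to the left
        y = y + a_small * lorentzian(x, center + z, w)  # small peak to the right
    return y
```

With a trivial time axis (e=1, f=g=0) the function peaks at the large-peak centers and decays to the offset in between, matching the small/large/small pattern described for the data.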
Message 5 of 5