LabVIEW

Optimize Levenberg for gaussian fit... (initial guess,...)

Solved!
Go to solution
I only want to fit a Gaussian distribution (approx. 400 data points) with offset, using the example VI (Fit Sum of 3 Gaussians with Offset), which I modified. Everything works fine, but it is too slow for "real-time".
To determine the initial guess I calculate the FWHM and the weighted average, so that I can use:

a1 - the difference between the max and min of my data array (amplitude)
a2 - my weighted average (mean)
a3 - FWHM divided by 2.35 (deviation)
a4 - the min of my data array (offset)
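In pseudo-code (well, Python/NumPy, since I cannot paste the block diagram here), my guess calculation looks roughly like this; the function name and the half-maximum threshold detail are just my illustration, not the actual VI:

```python
import numpy as np

def initial_guess(y, x=None):
    """Estimate (amplitude, mean, sigma, offset) for a Gaussian with offset.
    Mirrors the a1..a4 estimates above; illustration only, not the VI."""
    y = np.asarray(y, dtype=float)
    x = np.arange(len(y), dtype=float) if x is None else np.asarray(x, dtype=float)
    a4 = y.min()                        # offset: min of the data array
    a1 = y.max() - a4                   # amplitude: max - min
    w = y - a4
    a2 = np.sum(x * w) / np.sum(w)      # mean: weighted average
    above = x[w >= 0.5 * a1]            # samples at or above half maximum
    fwhm = above[-1] - above[0]         # width of the half-maximum region
    a3 = fwhm / 2.35                    # deviation: FWHM / 2.35
    return a1, a2, a3, a4
```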

These values roughly match the output values of the Gaussian fit, for example:

        Guess     Fit
a1         38      46     (amplitude)
a2         93     103     (mean)
a3        180     186     (deviation)
a4         31      20     (offset)

My problem is that the whole Levenberg fit needs 30 ms up to 70 ms, and that is really too much! The least-squares Gaussian fit (without offset), for example, needs less than 5 ms, but is not as good as the Levenberg fit due to the missing offset.

There are two questions:

1. Is there a way to combine the least-squares Gaussian fit with some additional calculation to get the same precision as the Levenberg fit, above all with offset (!), but faster?

2. How can I optimize the Levenberg fit or the initial guess, or what else can I do to increase fit performance?


In the attachment you can find the data array with the Levenberg-fitted Gaussian curve in red.

Hope someone can help!

Message 1 of 8
There are a few things you can do to improve performance:
1) In your model function, explicitly calculate the Jacobian if you are not doing so already (the f'(X,a) output from your model function VI). That way the Jacobian does not need to be computed by finite differences, saving quite a few function evaluations.
2) What precision do you really need? Try adjusting the tolerance in the termination criteria cluster. This may save a little, but not as much as 1).
3) Your initial guess of the offset looks overestimated. Maybe scale it by a fixed amount?
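Not LabVIEW, but for reference, here is a Python/NumPy sketch of a Gaussian-with-offset model together with its analytic Jacobian — the same quantities your model function VI would return as f(X,a) and f'(X,a). The function names are illustrative only:

```python
import numpy as np

def model(x, a):
    """Gaussian with offset: a = (amplitude, mean, sigma, offset)."""
    a1, a2, a3, a4 = a
    return a1 * np.exp(-((x - a2) ** 2) / (2 * a3 ** 2)) + a4

def model_jacobian(x, a):
    """Analytic partial derivatives of the model w.r.t. each parameter.
    Supplying these spares Lev-Mar the finite-difference model evaluations."""
    a1, a2, a3, a4 = a
    e = np.exp(-((x - a2) ** 2) / (2 * a3 ** 2))
    return np.stack([
        e,                                  # d/d(amplitude)
        a1 * e * (x - a2) / a3 ** 2,        # d/d(mean)
        a1 * e * (x - a2) ** 2 / a3 ** 3,   # d/d(sigma)
        np.ones_like(x),                    # d/d(offset)
    ], axis=1)
```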

If you can, please post your actual data/VIs so we can take a look and try some things out.

-Jim
Message 2 of 8
Hello,
 
You might want to try NLSSOL in TOMVIEW (http://tomopt.com/tomview/products/npsol/solvers/NLSSOL.php).
 
The problem is set up with assign CLS; nllsQG.vi can be used as a template.
 
Feel free to email me your VIs (medvREMall@tomREMopt.cREMom).
 
Best wishes, Marcus
 
Marcus M. Edvall
Tomlab Optimization Inc.
855 Beech St #121
San Diego, CA 92101-2886
USA

web: http://tomopt.com
e-mail: medREMvall@tomREMopt.coREMm
Office1: (619) 203-2037
Office2: (619) 595-0472
Fax: (619) 245-2476
Message 3 of 8
Another thought is to use Peak Detector.vi to estimate the peak amplitude and location. I am not sure whether the additional overhead in the initial guess will offset the reduced Lev-Mar execution time, but it is something to try, and it should produce a very high-quality estimate.

The Lev-Mar algorithm is a parameterization between steepest descent and Gauss-Newton. The degree to which Lev-Mar behaves as one or the other at a given iteration is governed by the "lambda" parameter. By default the initial steps are more steepest-descent-like, which is conservative but robust. If the initial guess is of good quality, it is possible to alter the initial lambda value to be more Gauss-Newton-like from the start. This will make the first several steps more productive and accelerate convergence. Look on the diagram of "Nonlinear Curve Fit LM.vi" and find the constant labeled lambda (it feeds a shift register in the main loop). Larger means more steepest-descent-like; smaller means more Gauss-Newton-like. The default is 10; try something like 0.001. Please note you will be modifying a shipping VI, so take precautions and save your work...
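For reference, lambda enters one common textbook formulation of the Lev-Mar step (generic notation, not the VI's internal variable names) as the damping term: each iteration solves

```latex
\left(J^{\mathsf{T}} J + \lambda \,\operatorname{diag}\!\left(J^{\mathsf{T}} J\right)\right)\delta = J^{\mathsf{T}} r
```

where J is the Jacobian and r the residual vector. A large lambda shrinks the step toward scaled steepest descent, while lambda approaching 0 recovers the Gauss-Newton step.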

-Jim
Message 4 of 8
I did some benchmarking on an implementation I have here, taking a stab at roughly duplicating your problem (see attached VI): 400 points, using the "Fit" values to generate data and the "Guess" values as the Lev-Mar initial guess:

        Guess     Fit
a1         38      46     (amplitude)
a2         93     103     (mean)
a3        180     186     (deviation)
a4         31      20     (offset)


Execution times (Lev-Mar only):

initial lambda=10 (default), termination criteria->tolerance=1E-8
numeric derivatives: 50 ms, 162 function calls
analytic derivatives: 42 ms, 18 function calls

initial lambda=10, tolerance=1E-2
numeric derivatives: 44 ms, 144 function calls
analytic derivatives: 38 ms, 16 function calls

initial lambda=0.001, tolerance=1E-8
numeric derivatives: 22 ms, 72 function calls
analytic derivatives: 19 ms, 8 function calls

initial lambda=0.001, tolerance=1E-2
numeric derivatives: 16 ms, 54 function calls
analytic derivatives: 14 ms, 6 function calls

I included the number of function calls for comparison.  In this case the model function is quite simple to compute, and so the function evaluations do not completely dominate the fitting time.  However, if the model function were more difficult to compute then the analytic derivatives would make a bigger difference.  Of course the biggest difference here is the initial lambda value.

Please let me know if this reduces your execution times to be where you need them.  I have not tried getting a better estimate of the parameters.

-Jim



Message 5 of 8
@All

I am going to post my VI as soon as possible, but I am not at work until Friday. There is nothing special about my VI; it is the standard Lev-Mar. The only interesting part could be the model function, which is the standard one.

For more information:

My program grabs pictures and tries to analyze a specified region of interest (ROI). In most cases there is a light spot with a horizontal and vertical Gaussian distribution (see attachment "spot.png"). I cannot calculate the Gaussian fit during each iteration because it takes too long (it decreases the FPS). To increase the FPS I am trying to minimize the fitting time.


@Medvall

I think I would have to buy something?! That was not my intention...

@DSPGuy

Thx for the tips. I already tried a model-function VI that calculated f'(x) explicitly (found in the forum). I could not see a performance boost; in fact I would say it slowed my VI down a little. The model-function VI I found is attached; maybe you can say something about it.

Regarding precision, I abort the fit after 20 iterations, with a tolerance of 10^-7. I cannot relax the tolerance because, if the light spot is very faint (and it often is), the VI would always try to fit something within a looser tolerance. That would lead to unnecessary, strange fits, so the tight tolerance and the 20-iteration cap force the VI to fit very well or not at all.

The last point is that I need the FWHM and the first moment (weighted average) anyway, so I can reuse them for the initial guess. But I have no clue how to determine the offset in a better way. I have tried a lot of things (e.g. Get Waveform Offset). The problem is that the Gaussian offset parameter is not the total offset. Any idea?
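One idea I have not tried yet, sketched in Python since I cannot post the VI right now (the helper name is made up): when the peak is broad, the data minimum still contains Gaussian signal, so the mean and sigma guesses can be used to back the true offset out of the min/max values:

```python
import numpy as np

def refine_offset(y, x, mean, sigma):
    """Solve the pair  y_max ~= offset + amplitude  and
    y_min ~= offset + amplitude * tail  for offset and amplitude,
    where tail is the unit Gaussian evaluated where the minimum occurs.
    Assumes the peak itself lies inside the data range."""
    i = int(np.argmin(y))
    tail = np.exp(-((x[i] - mean) ** 2) / (2 * sigma ** 2))
    y_min, y_max = y.min(), y.max()
    amplitude = (y_max - y_min) / (1 - tail)
    offset = y_min - amplitude * tail
    return offset, amplitude
```

With the example numbers from my first post (sigma around 186 over 400 points), this kind of correction would pull the min-based guess of 31 down toward the fitted offset of 20.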

In some cases my initial guess values are almost exactly the Gaussian output parameters, but I would say there is no huge performance difference between exact initial guesses and rough ones. Maybe you can convince me there is.
I am going to try the peak detection to determine the amplitude and center (maximum position) more accurately.

OK, that lambda could be an opportunity to speed everything up. The tolerance does not seem to make much of a difference, does it? I am going to check that.

Thx for any more advice...


Message 6 of 8
I was thinking that maybe someone has programmed the Gaussian fit with offset in C++ or similar, so I could integrate it as external code. Maybe someone from LabVIEW :). That would be the best way for me.

Message 7 of 8
Solution
Accepted by topic author Bauch@Berkeley
Lambda change successfully accomplished.
The performance increased similarly to your performance test. Now the VI runs in 13-16 ms, approx. 15 ms.
This is much better, though less than 5 ms would be better still. So thx for the help; I am happy with the result.

Message 8 of 8