05-19-2014 03:37 PM
Hello,
I have a task which requires using global optimization; however, I am not really familiar with it. I've tried looking at the example VIs, but I have a hard time understanding where my data would go.
My actual task involves measuring mechanical impedance (as a spectrum of complex numbers) and fitting a four-parameter model to it, with global optimization finding the best parameter values. (My predecessors described this as the approach that works best, which is why I'm trying to implement it this way.) The outcome of this process would be minimizing the difference between the measured impedance and the fitted model.
Our research group uses a program with a working implementation of this, but we want to implement this model fitting in LabVIEW.
The impedance data consists of complex numbers, so it has real and imaginary parts, with some model parameters being determined by both the real and the imaginary part of the data.
As a first trial I tried a "simple" Lev-Mar fit on my data by modifying the VIs from this topic (using typecasting to feed complex data into the Lev-Mar VI), but it only found values similar to my desired fit (done with the existing program) if I set the initial estimate very close to the values I get from my program. That's why I wanted to move on to global optimization, so that LabVIEW finds these estimates within some boundaries.
And that's where I am stuck:
I have a complex impedance (real and imaginary parts at each frequency point) and an equation that is supposed to give the output. If I take, for example, the "Two Circles Optimization" example VI, where should I drop my impedance spectrum for the VI to analyze? The Lev-Mar VI had clear inputs for my data (X and Y), so I knew where to wire it, but with the global optimization I do not know where to wire my data.
I have the impression that I should use a VI for the objective function that handles the complex input as well.
The objective function for the optimization should be the mean of [Z(omega)-Zfit(omega)]/Z(omega) over all frequency points, with Z being the measured impedance and Zfit being the fitted impedance from the parameter estimates: Zfit = A + j*omega*B + (C - j*D)/omega^((2/pi)*arctan(C/D)) (j is the imaginary unit).
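For reference, that objective can be sketched in Python (LabVIEW being graphical, this is only runnable pseudocode). One assumption is made explicit here: the relative residual (Z - Zfit)/Z is complex, so its magnitude is taken before averaging, since an optimizer needs a real-valued cost.

```python
import math

# Parameter names A, B, C, D and the model follow the equation in the post.
def z_model(omega, A, B, C, D):
    """Zfit = A + j*omega*B + (C - j*D)/omega^((2/pi)*arctan(C/D))."""
    exponent = (2.0 / math.pi) * math.atan(C / D)
    return A + 1j * omega * B + (C - 1j * D) / (omega ** exponent)

def objective(params, omegas, z_measured):
    """Mean magnitude of the relative residual (Z - Zfit)/Z over the spectrum.
    Taking abs() is an assumption: the cost handed to the optimizer must be real."""
    A, B, C, D = params
    residuals = [abs((z - z_model(w, A, B, C, D)) / z)
                 for w, z in zip(omegas, z_measured)]
    return sum(residuals) / len(residuals)

# Illustrative data (made-up values): a spectrum generated by the model
# itself fits perfectly, giving an objective of exactly zero.
omegas = [10.0, 100.0, 1000.0]
z_meas = [z_model(w, 1.0, 2e-3, 50.0, 30.0) for w in omegas]
```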
So where should I put my impedance data in the example vi to be used by the parameter function?
Thank you, I hope my explanation is more or less clear.
05-20-2014 02:51 PM
The short answer is to use the 'function data' variant to pass the data being fitted to your cost function.
The longer answer:
The non-linear curve fit VI solves a problem of the form:
min ||y-f(x,a)||
where x and y are arrays of data, a is the array of model parameters, and ||.|| indicates the 2-norm. Because the form of function being minimized is known, the only thing that a user must define is the model function f(x,a).
In the case of the more general optimization VIs, including global optimization, there is no assumed form. To implement curve fitting, your cost function must be the entire ||y-f(x,a)||. Please look at '\examples\Mathematics\Fitting\Nonlinear Spring Constant fit.vi'. This does not use global optimization, but it is an example of fitting using the 'constrained non-linear optimization.vi'. Notice on the block diagram that the data being fitted is bundled into a cluster and passed to the 'function data' input. Looking at the cost function block diagram (\examples\Mathematics\Fitting\support\nonlinear spring constant objective function.vi), notice that the objective function output is a simple sum of squares of the difference between the data and the model function. Unfortunately this example is more complicated than you need due to the ODE solver, but hopefully it gives you a better starting point.
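In textual form the difference can be sketched like this (Python standing in for the block diagram; the closure plays the role of the cluster wired to 'function data', and a simple linear model is an illustrative stand-in for f):

```python
def make_cost(x_data, y_data):
    """Bundle the measured data with the cost function, the way the example
    bundles data into a cluster for the 'function data' input: the optimizer
    only ever sees the parameter vector a."""
    def cost(a):
        # The cost is the entire ||y - f(x, a)||^2, not just the model f.
        # Here f(x, a) = a[0] + a[1]*x is an illustrative stand-in.
        return sum((y - (a[0] + a[1] * x)) ** 2
                   for x, y in zip(x_data, y_data))
    return cost

cost = make_cost([0.0, 1.0, 2.0], [1.0, 3.0, 5.0])  # data lie on y = 1 + 2x
```

With the data captured this way, `cost([1.0, 2.0])` evaluates to zero, and any other parameter vector gives a positive sum of squares.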
05-26-2014 04:52 AM
Thank you for your help, I'll look into the example you mentioned and try to adapt it to my task.
Do these optimization tasks accept CDB input or do I have to typecast it to DBL?
05-27-2014 11:37 AM
You will be bundling data into a cluster and passing to a variant, so the cluster can contain anything you wish, including array(s) of CDB.
07-08-2014 03:47 AM
Hello,
I had some time to continue with this part of the project. I finally figured out how to use these VIs to take my data, calculate the objective function, and return the parameters in the end. However, I am not really familiar with optimization and have a hard time tweaking them to work.
I have created a simple VI that includes a few kinds of optimization, together with the values I receive from an old existing program I want to replace. (It is said to include a kind of global optimization with a random search method; however, I am not sure of this and cannot ask the original programmer.)
However, there are huge differences in the parameters and in how closely the curves adhere to the original data. What should I do to get results from LabVIEW that are more or less similar to those of my previous program (MyProgram)? (Something like less than 5 percent difference.)
And I have a strange problem as well: with some initial values I receive an error that says "error in Armijo stepsize reduction", and I do not know what values make it appear or disappear. So I randomly changed the initial values until the errors finally disappeared.
Can you have a look at my code and tell me what I should improve? (I know it is a complete mess; it is just for demonstration purposes, to show the differences between the optimization VIs.)
(My measurement is an impedance measurement and I want to get some parameters in the end: A, B, C and D).
07-08-2014 01:28 PM
A few suggestions. In your cost functions, use a subVI to encapsulate the actual distance function. For example, a VI that has A, B, C, D, Z[], and w[] as inputs, and distance as output. Then the cost function for each type of optimization can call into the same common subVI. If possible, structure things so that your graphing code uses the same VIs as your optimization code.
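As a sketch of that structure (Python as pseudocode; `toy_model` is a hypothetical placeholder, not the poster's impedance equation):

```python
def model_distance(params, omegas, z_measured, z_model):
    """The shared 'distance' subVI: every optimizer's cost function and the
    graphing code call this one routine, so the quantities being compared
    can never drift apart between copies."""
    A, B, C, D = params
    total = 0.0
    for w, z in zip(omegas, z_measured):
        diff = z - z_model(w, A, B, C, D)
        total += diff.real ** 2 + diff.imag ** 2  # squared magnitude of a complex residual
    return total

def toy_model(w, A, B, C, D):
    # Hypothetical placeholder model, just to make the sketch runnable.
    return complex(A + B * w, C - D * w)
```

Each optimization algorithm's cost VI then reduces to a thin wrapper around `model_distance`, and the plotting path evaluates the same function, which is exactly the consistency check that caught the graph/cost mismatch above.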
In your graphs, the 'Original' and 'MyProgram' curves appear to be closer than the rest, but when I input the 'MyProgram' parameters in one of the cost functions, the function output is ~0.16, which is larger than any of the optimization results. It looks like there is a mismatch between the graph and the cost function, if I am interpreting it correctly. This may mean that the cost function is not quite what you intended.
I suspect the optimization functions are not converging well, at least for the unconstrained algorithm. This may be why you are getting the Armijo error message. The most robust unconstrained algorithm is the Nelder-Mead simplex algorithm, so try that one until you are sure things are working well, then switch to something else for better/faster convergence.
-Jim
07-08-2014 04:47 PM
Hi Jim,
Thanks for looking at my code. It seemed that I had made a few mistakes when writing the equations for the demonstration; when I looked inside the formula it appeared fine, but as you recommended, I created a subVI to encapsulate the calculation of Zmodel, redoing the wiring from scratch, and the optimization results became better, so it seems I had made a mistake in the previous objective function.
The MyProgram data is the fit created by the previous program, and the parameters were also calculated by that program. (As far as I am informed, it uses clustering and global optimization; however, that program needs boundaries, so it seems to be constrained.) I have now added a further plot, MyProgramCalc, with the curve calculated by LabVIEW from those parameters, and it is practically the same as MyProgram.
I attach my new version with some corrections of the equations.
Does it matter what I give as initial guesses? For now I have tried some random numbers in constants to avoid the Armijo errors.
Thanks
07-09-2014 10:41 AM
The initial guess will definitely matter. A strategy I have had some success with is to use the globalization to generate the initial guess for the unconstrained optimization. I made this change to your code, and then looked at the number of function evaluations for the unconstrained optimization using the QN or the Downhill Simplex algorithm, and they were quite close. Given this, I would recommend using the Downhill Simplex.
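The two-stage idea can be sketched in pure Python (a hedged stand-in: `coarse_global_search` imitates a random-search global stage on a made-up toy cost; in LabVIEW the first stage would be the global optimization VI and the second the unconstrained optimization VI):

```python
import random

def coarse_global_search(cost, bounds, n_samples=2000, seed=0):
    """Cheap 'global' stage: random search inside the bounds. Its best point
    becomes the initial guess handed to a local (unconstrained) optimizer."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(n_samples):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        f = cost(x)
        if f < best_f:
            best_x, best_f = x, f
    return best_x

# Toy cost whose minimum sits at (3, -1); a bad fixed initial guess can strand
# a line-search method (and trigger errors like the Armijo step failure).
def cost(p):
    return (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2

x0 = coarse_global_search(cost, [(-10.0, 10.0), (-10.0, 10.0)])
```

The returned `x0` lands near the basin of the minimum, so the subsequent local refinement starts from a point where line searches behave well.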
-Jim
07-09-2014 12:15 PM
Meant to say:
A strategy I have had some success with is to use the global optimization to generate the initial guess for the unconstrained optimization.
07-09-2014 04:13 PM
This approach looks like a good way to get a good initial guess. Yesterday I tried using a Lev-Mar fit to get some parameters, but this approach is definitely easier. (In the meantime it turned out that I could fit the data a bit better with the old program, so I could reach 5-15% differences on most parameters.)
However, it is still a question how well it will fit other data sets.