LabVIEW

Calling vi from formula node

I re-wrote the Optimization VI so that it now dynamically calls the objective function (the error measurement in your case). Now all you need to do is supply the path to the VI that calculates the error, and make sure the VI connector terminals match the VI reference in Pattern Search. This can now be applied to any n-dimensional non-linear, non-analytical problem easily.
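For anyone following along who doesn't have the VI handy, the idea of "supply a path, call the function behind it" is loosely like LabVIEW's Open VI Reference / Call By Reference pattern. Here is a minimal Python sketch of the same idea (the file-based `load_objective` and the `objective(x)` naming convention are my assumptions, not part of Nate's VI):

```python
import importlib.util

def load_objective(path):
    """Load an objective function from a file path at run time,
    loosely analogous to opening a VI reference from a path and
    calling it by reference in LabVIEW."""
    spec = importlib.util.spec_from_file_location("objective_mod", path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)
    # Convention assumed here: the file defines objective(x) -> float,
    # where x is a 1-D sequence of doubles (the design variables).
    return mod.objective
```

The optimizer itself then never needs to know anything about the objective beyond its connector pattern, which is exactly what makes the approach work for non-analytical problems.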

- Nate
Message 11 of 26
(1,579 Views)
To process numeric variables, use the Eval Formula Node. This VI can accept strings as inputs. It is very useful, but it is not in the base package.
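For readers without that VI: Eval Formula Node takes a formula as a string and evaluates it for given variable values. A rough Python sketch of the concept (not the real parser; the supported function list and the `^` translation are simplifying assumptions):

```python
import math

def eval_formula(expr, **variables):
    """Evaluate a formula string such as 'x^2 + 3*y - sin(x)' for the
    given variable values, roughly like LabVIEW's Eval Formula Node."""
    expr = expr.replace("^", "**")  # formula-node power operator -> Python
    # Expose a small whitelist of math functions; everything else is blocked.
    names = {k: getattr(math, k) for k in ("sin", "cos", "tan", "exp", "log", "sqrt")}
    names.update(variables)
    return eval(expr, {"__builtins__": {}}, names)
```

As the rest of the thread points out, this only helps when the objective really is an analytical expression.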
Mark
Message 12 of 26
(1,559 Views)
Mark,

This is a useful function, but it can still only evaluate analytical expressions (x^2 + 3*y - sin(x), etc.). In this case I needed to evaluate the goodness of fit to a data set, and thus did not have an analytical expression. Is there a way to make this work without such an expression?

- Nate
Message 13 of 26
(1,556 Views)
Nate,

Just a comment: I believe you'll get much better performance if you implement your function in plain G (see attached; please double-check for errors). Using "Eval Formula Node" seems like a very, very long detour 😉
Message 14 of 26
(1,550 Views)
Toni,

A better algorithm for fitting arbitrarily complex problems is Levenberg-Marquardt. Have a look at this thread from last summer, which fits a 2D surface.

I am sure you can adapt it easily for your complicated function. Please ask if anything is not clear. 🙂
Message 15 of 26
(1,546 Views)
I have the same problem that Toni does.

I have a complex error function F(x,y) which I need to minimize to solve for several parameters. The function includes a double summation over thousands of data points. It cannot be rewritten as y = F(x), so I believe Levenberg-Marquardt cannot be used to solve it. The only solution I can think of is to use the Conjugate Gradient nD VI and write a subVI that expands the summations and generates a huge formula string containing all of the data points. I'm afraid that parsing this string will be agonizingly slow...
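As later replies suggest, the string expansion isn't actually necessary: the summation can stay numeric if the optimizer calls a function that closes over the data. A small sketch of that idea in Python (`model(params, x)` is a hypothetical user-supplied model, not anything from this thread's attachments):

```python
def make_error_function(xs, ys, model):
    """Build a sum-of-squared-residuals objective that closes over the
    data, so the optimizer evaluates the summation numerically instead
    of parsing thousands of data points expanded into a formula string."""
    def error(params):
        return sum((y - model(params, x)) ** 2 for x, y in zip(xs, ys))
    return error
```

The optimizer only ever sees `error(params)`; no string parsing is involved, however many data points there are.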
Message 16 of 26
(1,530 Views)
Ken,

The pattern-search dynamic function I attached above should work for your problem. Make a copy of your error function F(x,y) so that it outputs the error as a single variable (double float) and takes a 1D double numeric array as input. If it has more inputs, I suggest global variables so you don't have to hack my VI. Just select your error function VI in the "Objective function path" and you should be good to go. If you have any other issues, please let me know. If your objective function computes relatively quickly, this algorithm should run very quickly.
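The connector adaptation Nate describes (two named inputs in, one array in / one scalar out) is just a thin wrapper. A Python sketch of the same adapter, assuming the two-argument form `F(x, y)`:

```python
def wrap_objective(f_xy):
    """Adapt a two-argument error function F(x, y) to the connector
    pattern described above: one 1-D array of doubles in, one double
    float out, so a generic optimizer can call it by reference."""
    def objective(params):
        x, y = params[0], params[1]
        return float(f_xy(x, y))
    return objective
```

Any extra fixed inputs (the "global variables" suggestion) would simply be captured by the wrapper instead of appearing in `params`.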

- Nate

Message Edited by S Nate dx on 04-05-2005 02:52 PM

Message 17 of 26
(1,523 Views)
Don't underestimate Levenberg-Marquardt.
Just hack into it and replace chi-square with your error function to be minimized. How you calculate it as a function of your fitting parameters is entirely up to you. 🙂
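To make the "replace chi-square" suggestion concrete: one Levenberg-Marquardt iteration only needs a residual vector and its Jacobian, whatever those residuals mean physically. A minimal sketch of a single damped step (this is the textbook update, not the internals of NI's VI; `residuals` and `jacobian` are user-supplied):

```python
import numpy as np

def lm_step(residuals, jacobian, params, lam):
    """One Levenberg-Marquardt step: solve the damped normal equations
    (J'J + lam*I) dp = J'r and step params by -dp. The residual vector
    can be anything you want minimized in the least-squares sense."""
    r = residuals(params)            # shape (m,)  residual vector
    J = jacobian(params)             # shape (m, n) d r_i / d p_j
    A = J.T @ J + lam * np.eye(J.shape[1])
    g = J.T @ r
    return params - np.linalg.solve(A, g)
```

With `lam` near zero this is a Gauss-Newton step; increasing `lam` blends toward gradient descent, which is how L-M stays stable far from the optimum.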
Message 18 of 26
(1,520 Views)
I've attached the error function that I'm trying to minimize.

Nate, your optimization function looks viable except for the step size. M in my function could be as high as 25, meaning I could have as many as 53 design variables. The optimal values for these variables could range from 0.0001 to 1000, so applying the same step size to every design variable probably does not make sense for my application. I would have to use a very small step size, which would probably take forever to converge, especially since I have thousands of x and y data points as inputs.
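One common workaround for variables spanning 0.0001 to 1000 is to optimize in log space, so that a single step size moves every variable by the same relative amount. A tiny sketch of that transform (my suggestion, not a feature of Nate's VI, and only valid for strictly positive variables):

```python
import math

def to_log_space(params):
    """Map strictly positive design variables to log10 space, so one
    uniform step size is meaningful across variables spanning
    several orders of magnitude (e.g. 1e-4 to 1e3)."""
    return [math.log10(p) for p in params]

def from_log_space(logs):
    """Map log10-space variables back to the original scale before
    evaluating the error function."""
    return [10.0 ** v for v in logs]
```

The optimizer steps in log space, and the objective wrapper converts back before evaluating the error.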

On the other hand, Levenberg-Marquardt optimization uses partial derivatives for each design variable to compute the step for each variable. I think this makes more sense for my application. However, I can't figure out how to hack NI's version as altenbach suggests. It uses y - F to compute Alpha and Beta, and these values are fed into a "black box", i.e. the Get New Coefficients VI. Computing y - F makes no sense for my application, since y is not meant to equal F; F is an error function. Since I am not intimately familiar with the Levenberg-Marquardt approach, I have no idea how Alpha and Beta are used, and therefore no idea how to hack this VI for my purposes. Any other suggestions?

Thanks,
Ken
Message 19 of 26
(1,509 Views)
Ken,

If you can send me your error function VI, I'll take a look and try it. The step size is actually not a huge issue. If you start with a large step size, the algorithm first finds the region of optimality in the design variables that are farthest away, then dynamically reduces the step size to home in on the final solution. This means it takes the large steps first, then the smaller steps. This is a little less efficient than a gradient-based method; however, in practice I have found it faster to converge, since gradient computations, and particularly the line searches required by a conjugate gradient or steepest descent method, are computationally expensive. Give it a try and have it output its progress; you may be surprised.
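The large-steps-first, then-shrink behavior described above is the core of a compass (pattern) search. A minimal Python sketch of the idea (my own illustration of the general algorithm, not Nate's VI):

```python
def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
    """Minimal compass/pattern search: probe +/- step along each axis,
    move whenever the objective improves, and halve the step size when
    no probe improves, so large moves happen first and small moves
    refine the final solution. No gradients or line searches needed."""
    x, fx = list(x0), f(x0)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
        if not improved:
            step *= 0.5
            if step < tol:
                break
    return x, fx
```

Note the trade-off Nate describes: each sweep costs up to 2n objective evaluations, but every one of them is just a function call, with no derivative estimation.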

- Nate
Message 20 of 26
(1,492 Views)