I've attached the error function that I'm trying to minimize.
Nate, your optimization function looks viable except for the step size. M in my function could be as high as 25, meaning I could have as many as 53 design variables. The optimal values for these variables could range from 0.0001 to 1000, so applying the same step size to every design variable probably doesn't make sense for my application. I would have to use a very small step size, which would probably take forever to converge, especially since I have thousands of x and y data points as inputs.
Levenberg-Marquardt optimization, on the other hand, uses the partial derivatives with respect to each design variable to compute a separate step for each variable, which seems to make more sense for my application. However, I can't figure out how to hack NI's version as altenbach suggests. It uses y - F to compute Alpha and Beta, and those values are then fed into a "black box", i.e. the get new coefficients.vi. Computing y - F makes no sense for my application, because y is not supposed to match F: F is an error function, not a model of y. Since I'm not intimately familiar with the Levenberg-Marquardt method, I have no clue how Alpha and Beta are used, so I have no idea how to adapt this VI for my purposes. Any other suggestions?
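For what it's worth, here is my rough understanding of what a Levenberg-Marquardt step does with Alpha and Beta, written out as a sketch. This is only the textbook formulation, not NI's actual implementation (the function name `lm_step` and the exact damping scheme are my assumptions): Alpha is approximately J^T J (the curvature matrix built from the partial derivatives) and Beta is J^T r, where r = y - F is the residual vector.

```python
import numpy as np

def lm_step(jac, resid, lam):
    """One textbook Levenberg-Marquardt update.

    jac   : Jacobian J of F with respect to the coefficients (n_points x n_coeffs)
    resid : residual vector r = y - F
    lam   : damping factor (large -> small, gradient-descent-like step;
            small -> Gauss-Newton step)
    """
    alpha = jac.T @ jac                   # curvature matrix (approx. Hessian)
    beta = jac.T @ resid                  # gradient-like vector
    # Marquardt's damping: inflate the diagonal of alpha by a factor (1 + lam)
    damped = alpha + lam * np.diag(np.diag(alpha))
    return np.linalg.solve(damped, beta)  # delta to add to the coefficients

# Toy check: with J = identity and no damping, the step equals the residual
delta = lm_step(np.eye(2), np.array([1.0, 2.0]), lam=0.0)
print(delta)  # [1. 2.]
```

If that reading is right, then for an error function F that should be driven to zero, one plausible hack is to feed y = 0 so that the "residual" y - F is just -F, and the VI minimizes the sum of squared errors. I'd want someone who knows the VI internals to confirm that, though.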
Thanks,
Ken