LabVIEW


Non Linear fit vi slows down computation

Solved!

Hello you all,

 

I am currently working with the nonlinear fit VI (NLF) to approximate a noisy sine wave. I have created a circular array so that the NLF is only fed the most recent values. That is probably obvious to you all (it's my first time here and I am a newbie to LabVIEW). However, as soon as the array is filled up, the NLF slows down by a factor of about 20. FYI, the NLF with its attachments is a modified version of a VI that was posted here, in case something seems odd. I would appreciate a hint as to why the computation slows down as soon as the array is full. Thank you!

Message 1 of 9

Formula parsing is very expensive. You will gain orders of magnitude in speed by using the version with the VI model. In addition, you can of course calculate all partial derivatives from first principles.

 

If you look at your formula (a0*x^(1/3)+a1*x^(1/2)+a2*x+a3*x^2+a4*x^3), you can see that it is actually a problem that can be solved using the General Linear Fit. Just form the H matrix (based on x) and solve it without any iterations, nearly instantly.
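As a text-based sketch of the same idea (Python/NumPy here, since G code does not paste well as text; the data and noise level below are made up):

```python
import numpy as np

# Made-up noisy data; x must be > 0 because of the fractional exponents.
x = np.linspace(0.1, 10.0, 200)
y = np.sin(x) + 0.05 * np.random.randn(x.size)

# Each column of H is one basis term of the model a0*x^(1/3) + ... + a4*x^3.
H = np.column_stack([x**(1/3), x**(1/2), x, x**2, x**3])

# Linear least squares: one direct solve, no iterations, no initial guesses.
a, *_ = np.linalg.lstsq(H, y, rcond=None)
y_fit = H @ a
print("a0..a4:", a)
```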

 

 

 

Message 2 of 9
Solution
Accepted by mikoborn

See if this can give you some ideas. The fit takes well under a millisecond!

 

(Of course I question the use of such fancy terms for such a simple function segment. It looks way overparametrized.)

 

 

(Attached image: altenbach_0-1665772139800.png)

 

(Attached image: altenbach_0-1665772339019.png)

 

Message 3 of 9

Dear Altenbach,

 

thank you for your great help! It is a very neat and compact VI from what I can tell. However, I tried to rebuild your VI from the picture you attached (learning by doing) and ran into some issues/uncertainties. How does the General Linear Fit VI get the function (a0*x^(1/3)+...) it is supposed to approximate the data points with? And how does a linear fit end up being a sine wave? My guess was that with each loop iteration the best linear fit is calculated and then the results are merged together?

 

Thank you!

Message 4 of 9

We are fitting the last N points. The oldest points are discarded while new ones are added.

(You used a chart, which also has only a fixed history length.)

 

You did not say if you were able to get it working. Did you?

 

Most of your questions are mathematical in nature and have nothing to do with LabVIEW. You simply need to brush up on your linear algebra.

 

The approximation is just linear algebra. Your model is very similar to a polynomial, but with some non-integer exponents. The function is just a sum of terms where each term is a function of x multiplied by a coefficient. These terms are calculated in the FOR loop only once, because the actual x values are irrelevant for the fit quality (though the coefficients depend on the x scaling, of course). Each term can be nonlinear in x as long as the model is linear in the coefficients a0, a1, etc.
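A rough text sketch of that point (Python/NumPy, with a made-up window length; reusing a precomputed pseudoinverse is just one way to exploit it): because the basis terms depend only on the fixed x grid of the window, they can be built once and reused for every new block of y values.

```python
import numpy as np

N = 500                                  # made-up window length (last N points)
x = np.arange(1.0, N + 1.0)              # fixed x grid of the window (must stay > 0)

# The basis matrix depends only on x, so build it (and its pseudoinverse) once.
H = np.column_stack([x**(1/3), x**(1/2), x, x**2, x**3])
H_pinv = np.linalg.pinv(H)

def fit_window(y):
    """Fit the current window; only the y values change between calls."""
    return H_pinv @ y                    # coefficients a0..a4

# Each loop iteration is then just one matrix-vector multiplication.
y = np.sin(0.05 * x) + 0.1 * np.random.randn(N)
print(fit_window(y))
```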

 

Where does your model come from? Compared to a simple polynomial fit, it is more fragile because, due to the fractional exponents, all x values need to be greater than zero. In my casual testing, a polynomial fit seems to work better.

 

There is no need to guess anything. Just follow the wires!

 

Obviously, you cannot approximate an infinite sine function with any such model. For example, if you do a polynomial fit, you need more and more terms as you try to fit more periods (have a look at e.g. the Taylor series of sin(x)). You get a good fit with a reasonable number of terms as long as you have a subsection with not too many turning points.

 

In the attached example, you can see that a 9th-order polynomial works great for a sine with more than one period.

 

 

(Attached image: altenbach_0-1666019100897.png)
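For reference, the same experiment sketched in Python/NumPy (the x range and noise level are illustrative):

```python
import numpy as np

# A bit more than one period of a noisy sine.
x = np.linspace(0.0, 8.0, 400)
y = np.sin(x) + 0.05 * np.random.randn(x.size)

# 9th-order polynomial fit.
coeffs = np.polyfit(x, y, 9)
y_fit = np.polyval(coeffs, x)

rms_residual = np.sqrt(np.mean((y - y_fit) ** 2))
print(f"RMS residual: {rms_residual:.4f}")   # comparable to the noise level
```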

 

Message 5 of 9

There were some issues, but I finally got it to work. I also realised how the function is generated. I am a bit ashamed I asked, given how obvious it is.

 

I am not trying to achieve anything in particular, just trying out the VI for learning purposes. I just happened to try to fit it to a sine wave; it could have been any other function. To give you the bigger picture: this learning exercise is part of my bachelor thesis. The finished product should be a program that detects anomalies in sensor data. Maybe you have some hints on what to look for?

 

I have one more question though: all the terms have coefficients. Is there an option for the coefficients to become zero in order to "delete" some terms?

In general, I'm searching for an approximation function that is suited to a wide variety of sensor data behavior.

 

 

Message 6 of 9

If any of the coefficients is near zero, you could omit that term and see how good the fit is afterwards. (For example, if the x values are chosen in a certain way, the odd or even polynomial coefficients could vanish.) Note that the coefficients of these fits are otherwise not really informative and quite meaningless. If you really have a partial section of a sine function, doing a nonlinear fit using a sine model would give you real parameters (amplitude, phase, offset). If you have many periods, an FFT can give you that too.
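A rough sketch of such a sine-model fit in Python/SciPy (made-up data; curve_fit stands in for a nonlinear fit VI, and the FFT is used only to get a starting frequency):

```python
import numpy as np
from scipy.optimize import curve_fit

def sine_model(x, amplitude, frequency, phase, offset):
    return amplitude * np.sin(frequency * x + phase) + offset

# Made-up data: a few periods of a noisy sine.
x = np.linspace(0.0, 20.0, 500)
y = 2.0 * np.sin(1.3 * x + 0.5) + 0.7 + 0.1 * np.random.randn(x.size)

# Use the FFT peak for a reasonable starting frequency; nonlinear fits need good guesses.
spectrum = np.abs(np.fft.rfft(y - y.mean()))
freqs = np.fft.rfftfreq(x.size, d=x[1] - x[0])        # cycles per x unit
f0 = 2.0 * np.pi * freqs[np.argmax(spectrum)]         # angular frequency guess

p0 = [y.std() * np.sqrt(2.0), f0, 0.0, y.mean()]
params, _ = curve_fit(sine_model, x, y, p0=p0)
print("amplitude, frequency, phase, offset:", params)
```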

Message 7 of 9

All these methods assume that I know what the actual function I'm trying to approximate looks like. However, I'm searching for a solution that can dynamically approximate the behavior of datasets that are unknown to me. As I said, the sine function is just an example of behavior that occurs in sensor data (a vibration sensor, for example). Logarithmic functions are just as likely (temperature decrease/increase).

 

I was thinking that a function like a0*sin(x) + a1*ln(x) + "polynomial" would be able to perform such a task, if parameters can become zero.

 

However, I haven't seen such a function in my research yet.

Message 8 of 9

@mikoborn wrote:

All these methods assume that I know what the actual function I'm trying to approximate looks like. However, I'm searching for a solution that can dynamically approximate the behavior of datasets that are unknown to me.

 

I was thinking that a function like a0*sin(x) + a1*ln(x) + "polynomial" would be able to perform such a task, if parameters can become zero.

 


Well, this seems like a contradiction. First you want a "general" solution, and then you come up with a very "specific" function. The question is: What do you actually want???

 

  1. If you want to retrieve parameters that have meaning (time constants, amplitudes, etc.), you need a specific parameterized model based on theory. 
  2. If you just want to get rid of noise, you can use polynomials, filtering, convolution with a smoothing kernel, and many other things, even PtByPt solutions, but there won't be any interesting parameters. All you get is a smoother approximation of the data (see the sketch after this list). For example, your sin(x) term very strongly depends on the x scaling and only makes sense if you know that there is exactly that known fundamental frequency at exactly that known phase. In your earlier example, the phase constantly changes, so that won't work at all! Also, your ln(x) term is restricted to positive x, i.e. not "general" at all!
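A minimal sketch of option 2 (Python/NumPy; the moving-average kernel and its width are chosen arbitrarily for illustration):

```python
import numpy as np

# Noisy data with no assumed model (illustrative).
x = np.linspace(0.0, 10.0, 1000)
y = np.sin(x) + 0.3 * np.random.randn(x.size)

# Convolution with a normalized smoothing kernel (simple moving average).
width = 25
kernel = np.ones(width) / width
y_smooth = np.convolve(y, kernel, mode="same")   # smoother data, but no model parameters
# Note: the ends are distorted where the kernel runs off the data.
```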

If your data section of interest resembles a banana (relatively featureless, few turning points), a simple low-order polynomial will always work extremely well. You simply need to be careful that x is in a reasonable range. For example, if your x values are in the millions, the 0th-order term will be 1 while the 9th-order term will be on the order of 1e54, and your matrix will become very ill-conditioned, making a solution impossible.
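A quick Python/NumPy illustration of that conditioning problem (the x range and polynomial order are just examples):

```python
import numpy as np

x_raw = np.linspace(1e6, 2e6, 100)                                # x in the millions
x_scaled = (x_raw - x_raw.mean()) / (x_raw.max() - x_raw.min())   # roughly -0.5 .. 0.5

# Design matrices for a 9th-order polynomial: columns 1, x, x^2, ..., x^9.
H_raw = np.vander(x_raw, 10, increasing=True)
H_scaled = np.vander(x_scaled, 10, increasing=True)

# The raw matrix is hopelessly ill-conditioned; the rescaled one is fine.
print("condition number, raw x:    %.3e" % np.linalg.cond(H_raw))
print("condition number, scaled x: %.3e" % np.linalg.cond(H_scaled))
```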

 

For some general topics in fitting data, have a look at my group. Also have a look at my NI Week talk from a few years ago.

Message 9 of 9