LabVIEW


peak detection VI skips regions

Hi to everybody,

This is my first post here and I hope to be able to learn more about LabVIEW and its detailed workings in due course.

For the moment I would like to start off with a question about the peak detection VI.

We have been using this VI in many scripts and have only recently discovered an odd behaviour of the VI when trying to locate the exact peak position.

We have signals which consist of fairly noise-free (S/N > 100), well-defined Gaussian peaks, typically 20 data points wide at FWHM (note: the data points are equispaced).

We use peak detection to locate the peak and want to know the peak position to an accuracy of 1/100 of the data point spacing.  

The peak position varies slightly from measurement to measurement by a small fraction of the data point spacing, and over the long term over a region equivalent to ~3 data points. Statistically, the measured peak positions should therefore be evenly scattered over a region equivalent to 3 data points.

We find to our great surprise that there are regions where the peak detection VI generates a jump in the apparent peak position, creating a kind of avoided band. Initially we suspected an error in our experiment, but after careful checking we were unable to find any problem with our experiment/system.

We have set up a simulation in LabVIEW which demonstrates the problem. The VI is attached.

A Gaussian peak of width 0.02 (default) is scanned across a spectral region (xmin, xmax) by incrementing the peak position xc by xstep. The peak position is measured using the peak detection VI with the threshold and width parameters, and compared to the actual peak position (upper graph in the VI).
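For readers without the VI to hand, here is a minimal Python sketch of the same simulation (an approximation, not the attached VI): a Gaussian is stepped across a fixed sample grid and its position is recovered by a quadratic least-squares fit around the highest sample, which is how the peak detection VI's documentation describes its fit. All numeric values below are illustrative, not the VI's defaults.

```python
import numpy as np

def quad_fit_peak(x, y, i, half_width):
    """Fit a parabola around index i and return the position of its vertex."""
    sl = slice(max(i - half_width, 0), i + half_width + 1)
    a, b, _ = np.polyfit(x[sl], y[sl], 2)   # y = a*x^2 + b*x + c
    return -b / (2.0 * a)                   # vertex where dy/dx = 0

dx = 0.005                                  # sample spacing (illustrative)
x = np.arange(0.0, 1.0, dx)
sigma = 0.02                                # Gaussian width, as in the demo VI
errors = []
for xc in np.arange(0.3, 0.7, dx / 20):     # step the true peak across the grid
    y = np.exp(-0.5 * ((x - xc) / sigma) ** 2)
    found = quad_fit_peak(x, y, int(np.argmax(y)), half_width=10)
    errors.append(found - xc)               # sawtooth-like error vs. xc

print("max |error| in units of dx:", np.max(np.abs(errors)) / dx)
```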

Using the default values as set in the VI, it is clear that the peak detection does something strange: depending on the settings, it lags behind the actual peak position and then suddenly jumps by quite a margin to far ahead of the peak, as shown in the middle graph.

The lower graph plots the error of the recovered peak position. 

A Boolean switch (interpolate) allows the raw (simulated) signal to be interpolated (dt = 0.1 means 10-point interpolation), which significantly reduces the observed error.

I am quite surprised to see this kind of behaviour, and I am wondering whether this is an error in the peak detection VI and the underlying code, or whether there is an essential limit to the maths used in the VI.

Although the interpolation routine does assist in reducing the error, it doesn't eliminate it.

I am wondering if anyone out there or at NI has any further information on this. 

I would be happy to hear from you and can provide more details on the problem/observation in the form of prints of our measurement results from when we initially observed this problem.

Regards

Robert

Message 1 of 11
(3,757 Views)

The error is due to your settings, specifically to the width parameter you have been using.

Change it to 3, the default value, and you will find that the error is clearly much lower. The reason is simple: the algorithm finds the peak position by fitting a parabolic curve to the data points, then looks for the position of the maximum (first derivative = 0). With your settings (20 points), a bias was introduced, since a parabolic curve cannot properly represent a Gaussian! The jitter around the exact position was simply due to the symmetry/asymmetry of the data points with regard to the exact peak position: the error was close to zero each time the peak position corresponded to an actual data point. Close, but not equal, since there was always some asymmetry in the point positions, except when the curve was perfectly centred in the 20-point window.
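Here is a quick numeric check of that explanation, as a Python sketch (the window sizes and Gaussian width are assumptions chosen to mimic the original signal): the fitted vertex is exact when the Gaussian is centred on a sample, and increasingly biased as the fit window grows relative to the peak width.

```python
import numpy as np

def vertex_error(delta, half_width, sigma=8.5):
    """Peak-position error (in samples) for a Gaussian centred at offset delta."""
    n = np.arange(-half_width, half_width + 1, dtype=float)
    y = np.exp(-0.5 * ((n - delta) / sigma) ** 2)
    a, b, _ = np.polyfit(n, y, 2)           # local quadratic least-squares fit
    return -b / (2.0 * a) - delta           # fitted vertex minus true centre

# sigma = 8.5 samples gives a FWHM of ~20 samples, as in the original signal
for hw in (3, 10, 20):                      # fit windows of 7, 21 and 41 points
    errs = [abs(vertex_error(d, hw)) for d in np.linspace(-0.5, 0.5, 21)]
    print(f"half-width {hw:2d}: max |error| = {max(errs):.4f} samples")
```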

No bug, just a nice demo showing that the documentation is not clear enough! 😉

Chilly Charly    (aka CC)
Message 2 of 11
(3,742 Views)
Maybe you didn't read the documentation carefully enough:
width specifies the number of consecutive data points to use in the quadratic least squares fit. width is coerced to a value greater than or equal to 3. The value should be no more than about 1/2 of the half-width of the peaks/valleys and can be much smaller (but > 2) for noise-free data. Large widths can reduce the apparent amplitude of peaks and shift the apparent location. For noisy data, this modification is unimportant since the noise obscures the actual peak. Ideally, width should be as small as possible but must be balanced against the possibility of false peak detection due to noise.
:D:D:D
Chilly Charly    (aka CC)
Message 3 of 11
(3,741 Views)
Hello Chilly Charly 
 
Thanks for taking time out from your vacation to reply to my question.
 
We have been looking at quite a range of values for the peak width parameter.
 
The simulation is only an approximation of the real situation, where we have a peak which has flanks similar to a Gaussian's but with a flat top of approximately the same width as the FWHM of the half-Gaussian leading up to the flat top.
In addition we might have a hump on the shoulder leading up to the flat top.
                   
If we use a low number for peak width we start to pick up on the ripples in the "flat top" or on the hump. 
 
Our key interest is the CHANGE of the peak position as a function of external effects on a sensor system, and not so much its absolute position.
 
Unfortunately, the shape of the peak is not perfectly constant when it is shifted. External effects start to induce additional ripples (mostly in the flat top), increase the amplitude of humps on the shoulders of the peak, and start to modify the slope at the foot of the peak, making it more Lorentzian.
 
In order to reduce the influence of these effects on the signal we have to go to a larger peak width, but I absolutely take your point that using a value larger than the FWHM of the peak is wrong. To be honest, we actually use a value which is ~1.3x FWHM; only in the simulation did I enter a value as large as 20 to make the effect very pronounced.
 
I have attached a one-page Word document with some more details.
 
After discovering this effect we are now using a resampling routine where we interpolate by a factor of 10 (using a cubic spline), which brings the total error down to < 1/100 of the original data point interval (0.005 nm), which is where we want (need) to be. Unfortunately this makes the data sets rather big at 160,000 points per dataset (the experiment is running at 10 sets per second per channel, with 4 channels acquired in parallel), and we need a (much) faster PC to do the analysis on the fly.
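As an illustration, here is a short Python sketch of that resampling step, with SciPy's CubicSpline standing in for the interpolation we do in LabVIEW; the 10x factor follows the description above, and the variable names are illustrative.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def resample(x, y, factor=10):
    """Interpolate the signal onto a grid `factor` times finer."""
    spline = CubicSpline(x, y)
    x_fine = np.linspace(x[0], x[-1], (len(x) - 1) * factor + 1)
    return x_fine, spline(x_fine)

# e.g. a 16,000-point spectrum becomes 160,000 points after 10x resampling;
# the quadratic peak fit is then run on the fine grid
```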
 
Nevertheless, I must say, LabVIEW has never let us down and is a brilliant piece of software engineering.
 
 
Robert
0 Kudos
Message 4 of 11
(3,726 Views)
That means you have noisy data, but you still want accurate values for the peak position. Hum...
 
What I would try is to fit a 4th-order polynomial to the data (because it holds two inflection points, one on either side of the peak), using the point-by-point VIs and adjusting the number of points to the half-width of the expected peak, and then search for a maximum close to the interval center.
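A rough sketch of this idea (in Python with NumPy rather than the point-by-point VIs; the window-selection heuristic around the highest sample is an assumption):

```python
import numpy as np

def quartic_peak(x, y, half_width):
    """Locate a peak via a local 4th-order polynomial fit around the maximum."""
    i = int(np.argmax(y))
    sl = slice(max(i - half_width, 0), i + half_width + 1)
    coeffs = np.polyfit(x[sl], y[sl], 4)
    roots = np.roots(np.polyder(coeffs))             # stationary points
    roots = roots[np.isreal(roots)].real
    maxima = roots[np.polyval(np.polyder(coeffs, 2), roots) < 0]  # maxima only
    return maxima[np.argmin(np.abs(maxima - x[i]))]  # maximum nearest the centre
```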
 
 
Chilly Charly    (aka CC)
Message 5 of 11
(3,720 Views)

A 4th-order polynomial is a good idea.

We tried general curve fitting using a Gaussian function, but it suffered from problems with automatically seeding the initial guess coefficients.

I suspect a 4th-order polynomial is more robust.
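For reference, the seeding problem can often be tamed by deriving the initial guesses from the data itself; here is a moment-based sketch using SciPy's curve_fit (an illustration, not the LabVIEW fit we actually tried):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, xc, sigma, offset):
    return amp * np.exp(-0.5 * ((x - xc) / sigma) ** 2) + offset

def fit_gaussian(x, y):
    """Fit a Gaussian with initial guesses derived from the data itself."""
    offset0 = float(np.min(y))
    amp0 = float(np.max(y)) - offset0
    xc0 = float(x[np.argmax(y)])                    # seed centre: highest sample
    w = y - offset0                                 # baseline-corrected weights
    sigma0 = float(np.sqrt(np.sum(w * (x - xc0) ** 2) / np.sum(w)))  # 2nd moment
    popt, _ = curve_fit(gaussian, x, y, p0=[amp0, xc0, sigma0, offset0])
    return popt                                     # popt[1] is the peak position
```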

And yes, we try to push the system to the very limit (and beyond).

In the end application we need to monitor the peak position (nominally at 1550 nm) to an accuracy of 1x10-4. Under normal circumstances the peak position is stable to this accuracy over 12 hours, but we expect statistically (in time) distributed excursions from the nominal peak position of 2 to 3x10-4 over a period of a few seconds (acoustic emissions from material degradation, i.e. cracking), with the whole system running fully automated over a period of 5 years.

Tall order? YYYYYEEEEESSSSS

 

Robert
0 Kudos
Message 6 of 11
(3,718 Views)


RRJMaier wrote: ...In the end application we need to monitor the peak position (nominally at 1550 nm) to an accuracy of 1x10-4. Under normal circumstances the peak position is stable to this accuracy over 12 hours, but we expect statistically (in time) distributed excursions from the nominal peak position of 2 to 3x10-4 over a period of a few seconds (acoustic emissions from material degradation, i.e. cracking), with the whole system running fully automated over a period of 5 years.
Ouch! What sort of temperature control do you use?!
Chilly Charly    (aka CC)
Message 7 of 11
(3,714 Views)

A nested set of 4 boxes, with the outer 3 boxes independently and actively temperature-controlled using NTCs and thermoelectric elements, with different PID settings going gradually towards longer and longer time constants. The innermost box is connected to the one surrounding it by low-thermal-conductivity posts, and the volume is at low pressure (a few mbar). The actual system under test sits in the centre of the innermost box on top of a 5 kg lump of aluminium which "floats" in there.

Temperature stability is better than 1 mK at 25 degC over 3 weeks. Temperature is recorded using fibre-optic sensor technology (fibre Bragg gratings) attached to a high-thermal-expansion material.

Robert
0 Kudos
Message 8 of 11
(3,709 Views)

Ouuuuch! Never leave the door open!

Fortunately, my bug cultures don't need those kinds of crazy precautions! 😄

I'm impressed!

 

 

Chilly Charly    (aka CC)
Message 9 of 11
(3,705 Views)
Thanks for your compliments and your help, and YES, we agree it's crazy,
but we are quite proud of the system -
it's got its own name: she is called Babushka
 
R
Robert
0 Kudos
Message 10 of 11
(3,702 Views)