LabVIEW


How to make a formula node faster?

Solved!

If you still need a further speed increase, it's also worth looking at which computations are expensive in your loop - in your case it will be the "exp" function that costs the most time, and the argument of the exponential is always in the same range (0 to -10 or so).  One way of speeding that up would be to replace the exponential computation with a lookup table - i.e. precompute the exponentials and then interpolate the result from the stored array, rather than compute it each time.  I get a speedup of another 50% using a 2500-element table, with approximation errors < 10^-6.  The speedup will be even more pronounced as your problem gets "larger" - i.e. if you have more Gaussians, or more points in your spectrum.

 

[Attached image: MkGauss Lookup Table_BD.png - block diagram of the lookup-table version of MkGauss]
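For anyone reading along without LabVIEW open, here is a minimal Python sketch of the same idea - the names are mine, and I assume the full -10 to 0 argument range at the 1/500 step, which gives 5001 entries rather than the 2500 mentioned above:

```python
import numpy as np

# Precompute exp() once over the argument range, with a step of 1/500
# (each table entry corresponds to a change in x of 0.002).
X_MIN, X_MAX = -10.0, 0.0            # assumed argument range
N = int((X_MAX - X_MIN) * 500) + 1   # 5001 entries
table = np.exp(np.linspace(X_MIN, X_MAX, N))

def fast_exp(x):
    """Approximate exp(x) by linear interpolation into the table."""
    pos = (x - X_MIN) * 500.0        # fractional table index (the "x500")
    i = min(max(int(pos), 0), N - 2) # clamp so i+1 stays in range
    frac = pos - i
    return table[i] + frac * (table[i + 1] - table[i])
```

With a step h of 0.002, the worst-case linear-interpolation error for exp on this range is about h^2/8 = 5x10^-7, consistent with the < 10^-6 figure above.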

Message 11 of 21

That is a brilliant idea - I like it.  I don't quite grok the x500 (in the calculation) and the /500 (in the formula for the lookup table).  Still, that is sweet - and I may not even need it to be that accurate.

 

Thanks for showing me how to implement it without an inner loop, as well.

 

RipRock 

Message 12 of 21
Solution
Accepted by RipRock99
The /500 and x500 map between the input number ("x") and the lookup-table index - i.e. each lookup-table value corresponds to a change in "x" of 0.002 (1/500).  The mapping could easily be anything else, depending on how big you want your lookup table to be.  By the way, the Interpolate 1D Array function can also define the lookup table as an array of clusters, where each cluster holds both the x value and its result - however, that turns out to be at least a hundred times slower!
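To make the mapping concrete, here is a toy worked example (the numbers are illustrative only, not taken from the actual VI):

```python
# table[i] holds the function value at x = i / 500.0 (the "/500"),
# so an input x lands at fractional table index x * 500 (the "x500").
x = 1.2345
pos = x * 500.0   # 617.25
i = int(pos)      # table[617] holds the value at x = 617/500 = 1.234
frac = pos - i    # 0.25 -> interpolate a quarter of the way
                  # from table[617] to table[618]
```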
Message 13 of 21

GregS -

 

Your solution works very well, along with some other improvements I made.

 

Specifically, I was using programmatic cursors to mark the locations of the peaks found by the code, updating their positions every cycle.  I changed that to a second line on the graph - set to -1000 everywhere except at the peaks found by the cross-correlation, which are marked with +1000 - and the loop time improved to 70 ms.  Still too slow.
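The same trick, sketched in Python for reference (the names are mine; peak_indices stands in for wherever the cross-correlation hits land):

```python
import numpy as np

def marker_trace(n_points, peak_indices):
    """Second plot line: -1000 everywhere, +1000 at the found peaks."""
    trace = np.full(n_points, -1000.0)
    trace[peak_indices] = 1000.0
    return trace

# Rebuild this array each cycle and overlay it on the graph,
# instead of repositioning programmatic cursors one by one.
```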

 

But when I put in your lookup table for mkgauss, it improved to 40 ms!  That is just under the bar (50 ms - though I have other activities that need servicing and take time as well).  A little more speed improvement would still be useful, but I will need to think hard about where to find it.  Perhaps decimating the number of data points before plotting them (see the sketch below), as well as implementing a producer-consumer loop.
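For the decimation idea, one approach that would not swallow narrow peaks is min/max decimation - a rough Python sketch under that assumption:

```python
import numpy as np

def decimate_minmax(y, factor):
    """Reduce points for plotting while keeping each block's min and max,
    so narrow peaks still show up on the graph."""
    n = (len(y) // factor) * factor      # drop the ragged tail
    blocks = y[:n].reshape(-1, factor)
    out = np.empty(2 * blocks.shape[0])
    out[0::2] = blocks.min(axis=1)
    out[1::2] = blocks.max(axis=1)
    return out
```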

 

Thanks for all of your help, everyone!

 

RipRock 

 

PS:  If anyone is interested, I could post the whole peak-identification code.  It is a subroutine of a larger data-collection program for a prototype in-situ Mars dating instrument; I have modified it to use a previously recorded data set to simulate real data.  It looks for the characteristic shape of strontium isotope peaks in a potential morass of other peaks.  If you are interested, let me know, and I will put it and a sample data set on the board.

Message 14 of 21
It would be interesting to see the whole code - do upload it if you can.
Message 15 of 21

Hi GregS -

 

Well, here is the whole brouhaha as a .zip file.  It includes a few subroutines and one fairly optimal data file with its header.  I would welcome any input on speeding up or refining the other parts.  My apologies for the clumsy implementation (while I have been coding for years, I am still relatively new to LV).  I have tried to provide plenty of comments.

 

A couple of notes:

1)  My real goal is to minimize the standard deviation of the 87Sr/86Sr ratio (the values marked by the red lines).  So when the XY graph goes down, I am doing something right on the instrument; when it goes up, something is going wrong.

2)  The 87Sr/86Sr ratio should be ~0.70

3)  This data set is about as good as it gets.  It is often worse, typically with additional non-Sr peaks present.

4)  From day to day, the absolute position and width of the peaks may change within known bounds, which are reflected in the code.

Message 16 of 21

And thanks for your thoughts and help!

 

RipRock 

Message 17 of 21

Whoopsie-doodle!

 

I tried downloading the zip file I posted earlier and ran into a few problems.  

 

1)  It seems that LV wants some of the included subroutines to be in a library.  Just ignore the error, then open each problem subroutine and disconnect it from the library under the File menu.  I thought I had fixed this before posting, so I am unclear on why the error occurs.

 

2)  The low-pass filter needs to be set to 25e6 or higher.

 

3)  The main program is called RingBufferTest3.vi; everything else is a subroutine or a data file.

 

Hope this helps,

 

RipRock 

Message 18 of 21

OK - that previous zip file was totally goofed, due to an earlier entanglement with an lvlib, which made it very difficult to work on the same code on multiple computers.  Lesson learned: avoid projects/libraries - just zip up the subroutines.  Second grouse: what is up with the totally non-standard and incomprehensible "Save As" file-menu option?  OK - rant off.

 

Here is a fixed zip file.  All you should need to do is run RingBufferTest3.vi and select the data file that is in the zip directory.

 

My bad,

 

RipRock 

Message 19 of 21

Hi RipRock --

 

Here are a few more ideas that might shave a few more milliseconds off the computation.  They're all fairly small changes, but worth considering for any large project.

  1. Keep all data types (representations) consistent - so leave your spectra as DBLs and all indices as I32.
  2. Rather than transposing the buffer at each iteration, just set it up the other way around to start with.
  3. The next most costly computation you had was the histogram in your Baseline Correction.  You don't lose any accuracy by computing a 20-bin histogram rather than a 100-bin one, but it saves several ms.  It's worth using the profiler to see where the time is spent.
  4. After that, the next most expensive calculation was the Mean of each sample in your buffer at every iteration.  The Mean is just the Sum divided by the Number of Spectra, and it's straightforward to maintain the Sum as a separate array - simply add the new spectrum and subtract the one it replaces in the buffer.  The Mean calculation then reduces to a single Division (see the sketch after this list).
  5. Once the buffer is full, there's no need to extract a part of it - just use the full buffer.
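A sketch of the running-sum idea from point 4, in Python with hypothetical names (assuming the ring buffer holds n_spectra rows of n_samples each):

```python
import numpy as np

class RingBufferMean:
    """Ring buffer whose Mean costs one subtraction, one addition,
    and one division per update, instead of a full re-sum."""

    def __init__(self, n_spectra, n_samples):
        self.buffer = np.zeros((n_spectra, n_samples))
        self.total = np.zeros(n_samples)   # running Sum across the buffer
        self.pos = 0                       # next slot to overwrite
        self.count = 0                     # spectra stored so far

    def add(self, spectrum):
        self.total += spectrum - self.buffer[self.pos]  # add new, drop replaced
        self.buffer[self.pos] = spectrum
        self.pos = (self.pos + 1) % len(self.buffer)
        self.count = min(self.count + 1, len(self.buffer))

    def mean(self):
        return self.total / max(self.count, 1)  # the single Division
```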

I've attached the code saved back to LV 8.2 - hope that worked.

Cheers ~ Greg

 

Message 20 of 21