LabVIEW MathScript RT Module


FFT performance

Solved!

Hi everybody,

 

I compared the performance of the FFT algorithm in MathScript and in native LabVIEW. For this I calculated a 128k FFT 100 times. The calculation time needed by MathScript is 4-5 times higher. I used a very simple MathScript script that just calculates the FFT; no loops etc. are involved. Does anybody know a good reason for this? It cannot be explained by the data exchange between LabVIEW and MathScript. Is MathScript using a less efficient FFT algorithm than LabVIEW?
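
A minimal MathScript sketch of such a benchmark (the variable names and the tic/toc timing are assumptions for illustration, not the code that was actually attached) might look like this:

    % Hypothetical MathScript benchmark sketch: time 100 FFTs of a 128k-point signal.
    N = 131072;              % 128k samples
    x = randn(1, N);         % random test data so nothing can be precomputed
    tic;                     % assuming tic/toc are available in this MathScript version
    for k = 1:100
        y = fft(x);          % plain FFT, no windowing or scaling
    end
    t = toc;                 % elapsed seconds for 100 FFTs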

 

Thanks

Ulf

Message 1 of 17

Hey, that's not even an order of magnitude. 😉 My guess is that MathScript makes slightly more internal data copies.

 

  • How are you measuring the speed?
  • What is your LabVIEW version? MathScript received performance improvements with nearly every upgrade.
  • Can you attach your actual benchmark code? (e.g. are you looping 100x inside the same MathScript node, or are you calling the MathScript node 100 times? See the sketch after this list.)
  • Make sure there is no constant folding, for a fair comparison.
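
To illustrate the third bullet, here is a hedged sketch of the "loop inside one node" variant (not the poster's code; x just stands in for the node's input):

    % Variant 1 (sketch): all 100 iterations inside a single MathScript node,
    % so data crosses the LabVIEW/MathScript boundary only once per run.
    x = randn(1, 131072);    % stands in for the array wired into the node's input terminal
    for k = 1:100
        y = fft(x);
    end
    % Variant 2 would wrap the node in a LabVIEW For Loop (N = 100) and pay
    % the node-call and data-conversion overhead on every iteration.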
Message Edited by altenbach on 03-07-2009 12:09 AM
Message 2 of 17

Thanks for the reply. I use the FFT for analysing long time series sampled at high rates, with overlapping. So waiting 30 minutes versus 150 minutes makes some difference, even if it is not an entire order of magnitude.

 

  • I measure the speed with the LabVIEW function "Get Date/Time In Seconds" used in a
    "Stacked Sequence" (see attached code)
  • LabVIEW 8.6.1
  • What is constant folding? Could you please check my code?
Message 3 of 17

So, you want to compare the LabVIEW FFT with the MathScript FFT (don't call it MATLAB on the indicators! 😉).

 

To compare raw FFT performance, you should eliminate all other operations, for example:

  • Don't include the graph update inside the frame; it might steal CPU cycles.
  • Avoid the dynamic data conversions (you coerce the waveform to dynamic data, back to a waveform, then extract the Y array). Dynamic data conversions are often expensive.
  • Don't compare with the Express VI, which is also slower because it carries more baggage. Use the plain FFT.
  • Don't do scaling (to dB, trimming, etc.).
  • Don't use stacked sequences, they make the code clumsy. 😉

 

Here's a quick example of how a better benchmark could look; it compares only the FFT and not all the extra stuff. You will see that now LabVIEW is even faster. 😄

FYI, constant folding is an internal optimization where certain data is folded into constants at compile time (example) (details). While there is folding in the FOR loop, that's OK, because the FFT itself does not seem to be folded.
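
As a hypothetical illustration (my own example, not from the linked pages) of how constant input data can spoil a benchmark:

    % Risky: a constant input could in principle be folded/precomputed,
    % so the loop would no longer measure the FFT at all.
    x = zeros(1, 131072);
    % Safer for benchmarking: data the compiler cannot know in advance.
    x = randn(1, 131072);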


us09 wrote:

Thanks for the reply. I use the FFT for analysing long time series sampled at high rates, with overlapping. So waiting 30 minutes versus 150 minutes makes some difference, even if it is not an entire order of magnitude.


 

Ah, now we are getting to the interesting part. As you can see, it would be best to use all low-level functions and to code in a way that keeps things in place as much as possible and re-uses data buffers. Use plain arrays instead of complicated data types and keep track of dt and df elsewhere. Coding style alone will make a huge difference. Do you have more details on how your data is arranged? You'll be surprised how fast these things can go when done properly. 😄
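
A small hedged sketch of the "plain arrays plus scalar dt/df" idea on the MathScript side (all values made up; the overlapped averaging itself is sketched later in the thread):

    % Sketch: keep the data as a plain array and carry dt/df as scalars,
    % instead of pushing a waveform/dynamic-data type through every function.
    dt = 1/50000;              % sample interval, tracked outside the array
    x  = randn(1, 131072);     % plain 1-D array of samples
    Y  = fft(x);               % operates directly on the array, no conversions
    df = 1/(length(x)*dt);     % frequency spacing, derived only when needed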

Message Edited by altenbach on 03-09-2009 08:02 AM
Message 4 of 17

Thanks a lot. This is very, very instructive.

But what about my original question 😉? Now LabVIEW is even faster and MathScript even slower. So what is the reason for this difference? I would expect that MathScript and LabVIEW use the same FFT core routines. But in that case they wouldn't differ that much. Right?

Message 5 of 17

I just found a built-in LabVIEW function that would solve my data analysis problem. How good is the performance of NI_AdvSigProcTSA.lvlib::TSA Welch.vi? Is this function already speed-optimized, or is its performance degraded as it is for Express VIs?

Message 6 of 17
I would just run a benchmark... 😉
Message 7 of 17

Hmm... That means I must implement the Welch algorithm myself, just to check how good the built-in function is. I hoped I could avoid this 😉
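
For reference, a hand-rolled Welch estimate is not much code. Here is a hedged MathScript sketch (sample rate, segment length, overlap and the Hann window are all assumed values) that could serve as a yardstick against TSA Welch.vi:

    % Sketch of Welch's method: window, FFT and average overlapped segments.
    fs   = 50000;                                % assumed sample rate
    x    = randn(1, 8*131072);                   % stand-in for the measured signal
    N    = 131072;  hop = N/2;                   % 128k segments, 50% overlap
    w    = 0.5 - 0.5*cos(2*pi*(0:N-1)/(N-1));    % Hann window built from basic functions
    U    = sum(w.^2);                            % window power for normalization
    nSeg = floor((length(x) - N)/hop) + 1;
    acc  = zeros(1, N);                          % preallocated accumulator, reused each pass
    for s = 1:nSeg
        seg = x((s-1)*hop + 1 : (s-1)*hop + N) .* w;
        acc = acc + abs(fft(seg)).^2;            % accumulate the periodograms
    end
    psd = acc / (nSeg * U * fs);                 % two-sided PSD estimate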

Message 8 of 17

The TSA Welch VI is a polymorphic VI. Is a polymorphic VI as fast as a non-polymorphic one? The Welch VI can process waveforms (sorry, I have a German LabVIEW, so I don't know the correct translation) and plain arrays. I suppose that the plain-array processing is the better choice in terms of processing speed. Right?

Message 9 of 17

A polymorphic VI is just a collection of similar VIs, and the correct one is selected e.g. based on its inputs (or the polymorphic selector status). Once it is placed, the correct instance is used, which should be just as fast as a plain VI; after all, it now IS a plain VI. 😄 The magic occurs during editing, not at run time.

 

I am not familiar with your TSA Welch VI. Can you give the full path? Is it from a toolkit?

 

Of course, everything else being equal, working on plain arrays is typically fastest, but you can still get orders of magnitude difference between a poor and an excellent implementation.

Message 10 of 17