08-27-2020 02:56 AM
I've seen it but never used it. Since this part only accounts for a few % of the total time, the gain will be marginal.
There is even the inverse function.... 🙂
I'll test it out...
08-27-2020 04:00 AM
Hi GerdW,
I had never even noticed that primitive. After 21 years of using LabVIEW, there are new things to learn all the time - and goodness knows how long ago that one appeared. Thanks for pointing it out.
So, I have put the Ln(x+1) and Exp(x)-1 primitives in and, strangely, they are slightly slower than using the separate increment/decrement and ln/exp functions. I've tried benchmarking them on different sizes of data and they seem to be about 10% slower. The test ran 100 loops, timing each method processing the same 10M values, and I calculated the mean and standard deviation for each route from the 100 measurements.
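For readers who want to reproduce the comparison outside LabVIEW, here is a rough NumPy analogue of the benchmark described above. This is a sketch, not the actual LabVIEW code: np.log1p stands in for the combined Ln(x+1) primitive, np.log(x + 1) for the separate increment-then-ln route, and the run count and array size are taken from the post.

```python
import numpy as np
import time

# 10M random values, timed 100 times per method, as in the post above.
x = np.random.rand(10_000_000)

def bench(f, runs=100):
    """Return (mean, std dev) of the per-run wall-clock time of f(x)."""
    times = []
    for _ in range(runs):
        t0 = time.perf_counter()
        f(x)
        times.append(time.perf_counter() - t0)
    t = np.asarray(times)
    return t.mean(), t.std()

combined = bench(lambda v: np.log1p(v))      # Ln(x+1) as one operation
separate = bench(lambda v: np.log(v + 1.0))  # increment, then Ln

print("combined Ln(x+1): mean=%.4fs sd=%.4fs" % combined)
print("separate +1, Ln : mean=%.4fs sd=%.4fs" % separate)
```

The same pattern works for the inverse pair (np.expm1 versus np.exp(x) - 1.0); whether the combined or separate route wins will depend on the platform and library versions, just as it evidently does in LabVIEW.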
Thanks for the new knowledge though,
David
08-27-2020 04:03 AM
Hi FireFist-Redhawk,
Here's a LV 17 version with the improvements suggested by Altenbach incorporated.
Thanks for looking at this too,
David
08-27-2020 06:36 AM
Um, um... it appears to still be in 2019 😟
Saying "Thanks that fixed it" or "Thanks that answers my question" and not giving a Kudo or Marked Solution, is like telling your waiter they did a great job and not leaving a tip. Please, tip your waiters.
08-27-2020 06:43 AM
Apologies - I used the Save for Previous Version option. Here is another attempt.
08-27-2020 11:57 AM
@dpak wrote:
So, I have put the Ln(x+1) and Exp(x)-1 in and strangely they are slightly slower than using the separate increment/decrement and ln/exp functions. I've tried benchmarking them on different sizes of data and they seem to be about 10% slower.
I noticed the same slowdown. It is possible that these functions are relatively old, while e.g. the +1 and -1 primitives can take advantage of SSE instructions and thus operate on multiple array elements at once. Just guessing.
08-27-2020 01:30 PM - edited 08-27-2020 01:32 PM
For comparison, I still think that you do too much data shuffling (it's the overall time that counts!).
Here's a cleaned-up version, now wrapped into a subVI. Much less scattered code, and no big penalty
(I am sure improvements are still possible, see the #add bookmarks):
A view of the caller:
08-28-2020 05:17 AM
Hi Altenbach,
It's running really nicely now. Thank you for your help!
Regarding some of your questions:
The key improvement you have suggested is the new (to me) way of calculating the moving point average. I had tried various other methods, but not this one, and the NI-supplied point-by-point VI was always the fastest. Now I know a much better way - thank you. I think there is a case to be made for a dedicated moving point average tool in the toolbox, as we know there is a significant speed differential between the methods.
Kind regards,
David
08-28-2020 10:11 AM
@dpak wrote:
The key improvement you have suggested is the new (to me) way of calculating the moving point average.
I was actually surprised that it was so fast. I tried other well-known methods (e.g. convolution based) and they were a few times slower. Once the window gets much wider, certain adjustments could be made, because the current code repeats many of the same additions when taking each sum. One could keep the running sum in a scalar shift register, then add the newest element and subtract the oldest with each iteration, requiring only two operations per point instead of [2w+1]. A sketch of this idea follows below.
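Here is a minimal Python sketch of that running-sum idea, with a scalar variable playing the role of the LabVIEW shift register. The half-width parameter w and the valid-region-only edge handling are assumptions for illustration, not details from the thread.

```python
import numpy as np

def moving_average(x, w):
    """Running-sum moving average: the window holds 2w+1 samples,
    but each new output point costs only one add and one subtract."""
    n = 2 * w + 1                  # full window width
    s = float(np.sum(x[:n]))       # initial window sum (the "shift register")
    out = [s / n]
    for i in range(n, len(x)):
        s += x[i] - x[i - n]       # add newest, subtract oldest
        out.append(s / n)
    return np.array(out)

# Usage: smooth 10 samples with half-width 1 (window of 3)
print(moving_average(np.arange(10.0), 1))   # [1. 2. 3. 4. 5. 6. 7. 8.]
```

One caveat with this approach: for floating-point data, the running sum can accumulate rounding error over very long arrays, so a periodic re-sum of the window is sometimes added in practice.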
@dpak wrote:
- We don't need to do the NaN check until the inner loop output
Yes, it is probably a good idea to eliminate NaNs early on. In the past, we had scenarios where certain operations were dramatically slowed when NaNs were present in a 32-bit application (64-bit applications did not have that problem!). I am not sure if this issue still exists in modern versions, but have a look here. (Matlab had the same problem.)
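As a rough illustration of "eliminating NaNs early", here is a hypothetical NumPy sketch; the LabVIEW equivalent would be a comparison-plus-select pass over the array before the heavy math. The choice of 0.0 as the substitute value is an assumption for the example.

```python
import numpy as np

x = np.array([1.0, np.nan, 3.0, np.nan, 5.0])

clean  = x[~np.isnan(x)]             # drop NaN samples entirely
filled = np.nan_to_num(x, nan=0.0)   # or substitute a neutral value

print(clean)    # [1. 3. 5.]
print(filled)   # [1. 0. 3. 0. 5.]
```

Either way, later processing stages never see a NaN, so any denormal/NaN slow path in the math routines is avoided.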