LabVIEW


Scaling an Array of Numbers

Solved!

@crossrulz wrote:

wiebe@CARYA wrote:

B being faster than C? Weird!


They are within the noise. I would expect them to actually compile to the same thing due to optimizations the compiler does, such as moving that first multiply outside of the loop.


Within the noise? I disagree.

 

I did all of those measurements repeatedly. They did not differ by more than 10% (more like 1%) between measurements.

 

And those measurements were already the result of 100 iterations.

 

Never mind, I thought you meant all measurements were within the noise. Still, those results were pretty reproducible, so I do think B and C differ.

 

I guess we could have a look at the DFIR or assembler code that it compiles to... But I also have to get some work done every now and then 🙁.
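For illustration only, here is a C sketch of the kind of loop-invariant hoisting being described (hypothetical function names, not what LabVIEW's compiler actually emits; note that with floating point a C compiler will only reassociate like this under relaxed math settings such as -ffast-math):

```c
#include <stddef.h>

/* Naive form: both the subtract and the multiply happen per element. */
void scale_naive(const double *in, double *out, size_t n,
                 double offset, double scale)
{
    for (size_t i = 0; i < n; i++)
        out[i] = (in[i] - offset) * scale;
}

/* Hoisted form: (in[i] - offset) * scale == in[i] * scale - offset * scale,
 * so the loop-invariant product offset * scale is computed once up front.
 * Two diagrams that look different can end up as the same machine code if
 * the compiler performs this kind of rewrite on its own. */
void scale_hoisted(const double *in, double *out, size_t n,
                   double offset, double scale)
{
    const double bias = offset * scale;   /* computed once, outside the loop */
    for (size_t i = 0; i < n; i++)
        out[i] = in[i] * scale - bias;
}
```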

Message 11 of 16

Also, did you set the VI to Subroutine priority and close the diagrams before running? Getting a correct measurement can be trickier than you'd think at first. And there's a Mean.vi that calculates the mean/average. 😉

/Y

G# - Award winning reference based OOP for LV, for free! - Qestit VIPM GitHub

Qestit Systems
Certified-LabVIEW-Developer
Message 12 of 16

@Yamaeda wrote:

Also, did you set the VI to Subroutine priority and close diagrams before running?


No. There's lots of room for experimenting with options. But most of my code doesn't run as a subroutine (and I find that it usually doesn't improve the overall speed at all), so for me a benchmark using it wouldn't be representative. As for the front panel, as long as nothing is updated in the loop, I doubt it makes much difference. My guess is it all stays relative.

 

I've found in the past that even after making a change and saving and closing the VI, benchmarks showed N to be twice as fast as M, only to find after a restart of LabVIEW that M was twice as fast as N...

 


@Yamaeda wrote:

 Also, there's a Mean.vi that calculates the mean/average. 😉


I know, but this is more intuitive for me. I actually used it before replacing it with the median, and then took the mean manually on that result. Go figure...
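As a rough sketch of the median-of-runs idea (plain C with a hypothetical helper, not Median.vi itself), applied to the per-iteration timings of a benchmark:

```c
#include <stdlib.h>

/* qsort comparator for doubles */
static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

/* Median of n timing samples: sort a copy and take the middle element
 * (average of the two middle elements when n is even). The median is far
 * less sensitive to the occasional slow outlier run than the mean. */
double timing_median(const double *samples, size_t n)
{
    if (n == 0) return 0.0;
    double *tmp = malloc(n * sizeof *tmp);
    if (!tmp) return 0.0;
    for (size_t i = 0; i < n; i++) tmp[i] = samples[i];
    qsort(tmp, n, sizeof *tmp, cmp_double);
    double m = (n % 2) ? tmp[n / 2] : 0.5 * (tmp[n / 2 - 1] + tmp[n / 2]);
    free(tmp);
    return m;
}
```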

Message 13 of 16

@Yamaeda wrote:

That it's slower when you wire the scalars on top isn't that surprising. There are some threads on LV memory management, and if you wire the array on top it usually works in place, while a scalar on top forces it to create a new array.


I seriously doubt that wire order makes a difference. The compiler knows how to order things optimally.

(I think decades ago, it made a difference, but in my experience it does not matter any more. Your theory could easily be tested by looking at buffer allocation dots).
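To make the in-place vs. new-buffer distinction concrete, here is a loose C analogy (hypothetical functions; in LabVIEW the real check is the buffer allocation dots from Tools » Profile » Show Buffer Allocations):

```c
#include <stddef.h>
#include <stdlib.h>

/* "In place": the input buffer is reused for the result, no new allocation.
 * Roughly what LabVIEW can do when the compiler decides the incoming array
 * wire is not needed anywhere else. */
void scale_in_place(double *data, size_t n, double offset, double scale)
{
    for (size_t i = 0; i < n; i++)
        data[i] = (data[i] - offset) * scale;
}

/* "New buffer": the result goes into a freshly allocated array, leaving the
 * input untouched. This corresponds to an extra buffer allocation dot on the
 * output wire. The caller owns (and must free) the returned buffer. */
double *scale_copy(const double *data, size_t n, double offset, double scale)
{
    double *out = malloc(n * sizeof *out);
    if (!out) return NULL;
    for (size_t i = 0; i < n; i++)
        out[i] = (data[i] - offset) * scale;
    return out;
}
```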

Message 14 of 16

wiebe@CARYA wrote:

Here's my test bench. And it's tricky: sometimes any change makes the time double, and a random change makes it go back.


Seems most of your results are quantized to ~1x, ~2x, ~3x of your fastest, maybe hinting at SSE differences. Hard to test.

 

In any case, a really (really!) smart compiler could get at the gist of your algorithm and substitute something equivalent that is yet another 50x faster 😄 We can dream! 😄

 

fasterfaster.png
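Purely speculative, but to show why vectorization could produce those neat integer-ratio steps, here is a scalar scaling loop next to a hand-written SSE2 version that handles two doubles per instruction (an illustrative C sketch, not what LabVIEW necessarily generates):

```c
#include <stddef.h>
#include <emmintrin.h>   /* SSE2 intrinsics */

/* Scalar reference: one (subtract, multiply) pair per element. */
void scale_scalar(const double *in, double *out, size_t n,
                  double offset, double scale)
{
    for (size_t i = 0; i < n; i++)
        out[i] = (in[i] - offset) * scale;
}

/* SSE2 version: two doubles per iteration, which is roughly where a clean
 * ~2x step in a timing plot could come from if the compiler sometimes
 * vectorizes a loop and sometimes does not. */
void scale_sse2(const double *in, double *out, size_t n,
                double offset, double scale)
{
    const __m128d voff = _mm_set1_pd(offset);
    const __m128d vscl = _mm_set1_pd(scale);
    size_t i = 0;
    for (; i + 2 <= n; i += 2) {
        __m128d v = _mm_loadu_pd(&in[i]);
        v = _mm_mul_pd(_mm_sub_pd(v, voff), vscl);
        _mm_storeu_pd(&out[i], v);
    }
    for (; i < n; i++)                    /* remaining odd element */
        out[i] = (in[i] - offset) * scale;
}
```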

Message 15 of 16

@altenbach wrote:

wiebe@CARYA wrote:

Here's my test bench. And it's tricky: sometimes any change makes the time double, and a random change makes it go back.


Seems most of your results are quantized to ~1x, ~2x, ~3x of your fastest, maybe hinting at SSE differences. Hard to test.

 

In any case, a really (really!) smart compiler could get at the gist of your algorithm and substitute something equivalent that is yet another 50x faster 😄 We can dream! 😄


Yes, algorithmic optimizations traditionally gain orders of magnitude more than simply optimizing instructions. Guess we'll have to wait for AI to take over...

Message 16 of 16