03-25-2019 08:58 AM
Why is the native "Polar to Complex" function twice as slow as calculating via cosine and sine?
Solved! Go to Solution.
03-25-2019 09:36 AM
03-25-2019 09:37 AM
I work a lot with the complex datatypes in LabVIEW, but I must confess I have never benchmarked these primitives.
There are some conversions that may go on behind the scenes. For example, when you provide R and Theta, R should not be negative; if you provide a negative R, it will be converted to a positive R with Theta offset by Pi.
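That normalization is easy to check numerically. Here is a small Python sketch (illustrating the identity, not LabVIEW's internal implementation): a point with negative radius is the same complex number as one with positive radius and the angle offset by pi.

```python
import cmath
import math

# A negative radius r with angle theta describes the same point as
# radius |r| with angle theta + pi:  -r*e^(i*theta) == r*e^(i*(theta + pi))
z_negative = cmath.rect(-2.0, 0.5)             # rect(r, phi) = r*cos(phi) + i*r*sin(phi)
z_normalized = cmath.rect(2.0, 0.5 + math.pi)

assert abs(z_negative - z_normalized) < 1e-12

# Converting back, polar() always reports a non-negative radius:
r, phi = cmath.polar(z_negative)
assert r >= 0.0
```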
Can you share the code you used for your benchmark?
0xDEAD
03-25-2019 01:01 PM
Benchmark code attached.
Versus
03-25-2019 01:34 PM
On my system running your code, both the sine/cosine and polar-to-complex versions came out in the low 50 ms range. The exp version was around 110 ms.
My results don't seem to confirm yours.
03-25-2019 03:20 PM
Aha. The speed boost only appears in LV 64-bit. I'm using 2018 18.0f2 (64-bit).
When I try the same code in LV 32-bit, both indeed run at an identical ~48 ms.
03-25-2019 07:39 PM
I wonder if it could have anything to do with the way the functions do "array handling". I just generated a million pairs of numbers (not worrying about whether I used the same set for each test) and converted them once with the "Polar to Complex" function and once by computing the real and imaginary parts as r cos(theta) and r sin(theta). The difference was a few percent, with trig winning, but I only tested once. (Hold on, I'll run it again... Nope, the sine/cosine method still wins by about 8-10 msec per million numbers, or about 10 nsec per conversion.)
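Bob's comparison can be sketched in scalar Python (an illustration, not the attached VI): convert a million pre-generated (r, theta) pairs one at a time, once with the library's polar constructor and once with explicit sine/cosine.

```python
import cmath
import math
import random
import time

# A million (r, theta) pairs, generated up front so only the conversion is timed.
pairs = [(random.random(), random.uniform(0.0, 2.0 * math.pi))
         for _ in range(1_000_000)]

t0 = time.perf_counter()
via_rect = [cmath.rect(r, th) for r, th in pairs]            # library polar-to-complex
t1 = time.perf_counter()
via_trig = [complex(r * math.cos(th), r * math.sin(th))      # explicit trig route
            for r, th in pairs]
t2 = time.perf_counter()

# Both routes must agree before any timing comparison is meaningful.
assert all(abs(a - b) < 1e-12 for a, b in zip(via_rect, via_trig))
print(f"rect: {(t1 - t0) * 1e3:.0f} ms, trig: {(t2 - t1) * 1e3:.0f} ms")
```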
Polar to Complex
This was done on Windows 10 x64, LabVIEW 2018 (32-bit).
Bob Schor
03-26-2019 09:01 AM
@Bob: The random number generation can be expensive. In this case, it might even be comparable to what we are trying to measure. Perhaps see how much of a performance boost you get if you either:
A: Replace the random number generators with front panel controls. (LabVIEW should still have to read the front panel control every iteration.)
B: Move the random number generation outside of the performance test.
Regarding "array handling":
Modern CPUs can optimize for applying a single operation to an array of data. The LabVIEW compiler takes advantage of this (remember the old SSE2 checkbox in the .exe builds? That's what this is: Streaming SIMD (Single Instruction, Multiple Data) Extensions 2). Operating on arrays is also the case I'm most interested in. Putting the input number generation inside the for loop may prevent that optimization, since a random number must be created (or a front panel control read) between each polar-to-complex conversion.
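Suggestion B above can be sketched like this (a hypothetical Python/NumPy bench, not the LabVIEW VI): the inputs are generated once outside the timed region, and each conversion is applied to the whole array, so the vectorized (SIMD-friendly) path can do its work.

```python
import time
import numpy as np

N = 1_000_000
rng = np.random.default_rng(0)
r = rng.random(N)                     # inputs generated once, outside the timed region
theta = rng.random(N) * 2.0 * np.pi

t0 = time.perf_counter()
z_trig = r * np.cos(theta) + 1j * r * np.sin(theta)   # sine/cosine route
t1 = time.perf_counter()
z_exp = r * np.exp(1j * theta)                        # polar-to-complex via exp
t2 = time.perf_counter()

assert np.allclose(z_trig, z_exp)
print(f"trig: {(t1 - t0) * 1e3:.1f} ms, exp: {(t2 - t1) * 1e3:.1f} ms")
```

Whole-array operations like these are exactly the case where an SSE2/AVX code path can pay off, which is why moving per-element random generation out of the loop matters for a fair measurement.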
Side note: Be careful with the debugging switch in your test bench.
I'm still curious why it's faster, and why it's only faster on a 64-bit OS. But I won't look a gift horse in the mouth. Enjoy the boost!
-D
03-26-2019 01:54 PM
When I run the attached Test polar to complex.vi (10 KB):
@D* wrote:
Aha. The speed boost is only apparent using LV 64-bit. I'm using 2018 18.0f2 (64-bit).
When I try the same code using LV 32-bit indeed both go at the identical ~48ms.
"Polar to Complex" appears to be a bit slower than "Sine and Cosine":
60..65 ms in comparison to 55..60 ms,
with LabVIEW 2018 (32-bit) on Windows 10 x64:
03-26-2019 02:09 PM