
Matrix power slower than Matrix Multiplication

I am looking for ways to optimise the speed of a data analysis program I am developing, and noticed that the Matrix Power VI appeared to take twice as long to run as the Matrix Multiplication (Real A x B) VI, as reported by the Performance and Memory Profile tool.

 

To verify this, I created a test VI (attached). In this VI, I tried both a single-threaded version (one subVI call per loop iteration) as well as the attached version, which has two calls per loop iteration, to see whether it would benefit from my dual-core processor. Matrix Power didn't, but the Matrix Multiplication function did, so in that case the difference in execution time was 4:1!

 

Why is Matrix Power so much slower? (I am only squaring a matrix.) Watching the Windows Task Manager, both CPUs were fully utilized during Matrix Multiplication, but only ~50% utilized during Matrix Power (in the attached dual-call version). In the single-threaded version, CPU utilization was still higher for Matrix Multiplication than for Matrix Power.

 

I have tried this in both LabVIEW 8.2 and 8.5 on Windows, and got similar results.

Message 1 of 12

Raising to a power is more computationally intensive than a simple multiplication. Just try to do it by hand on a simple example:

1.23 x 3.1 compared with 1.23 ^ 3.1: you'll have to calculate Ln(1.23) (without a calculator, enjoy!), multiply by 3.1 (easy), then calculate Exp(result) (enjoy again)! 😮

Not exactly what's done by the processor, but you get the idea. Of course your intention is to calculate A^2, but LV can't tell the difference from A^9999999... (which should be significantly faster than multiplying A by itself 9999999 times! 😉)
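The scalar trick described above can be checked in a couple of lines. This is a hedged sketch in Python (not LabVIEW, which is graphical): the point is only that a^b computed as exp(b * ln(a)) agrees with the direct power, at the cost of a log and an exp.

```python
import math

a, b = 1.23, 3.1

# Scalar power the "expensive" way: a^b = exp(b * ln(a))
via_exp_log = math.exp(b * math.log(a))
direct = a ** b

print(via_exp_log, direct)  # the two values agree to machine precision
```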

 

You'll get similar results using an addition instead of a multiplication (A + A is faster than A x 2), or a multiplication instead of a division (more tricky, but A x 1/B is faster than A/B), although the integrated math coprocessor and wide data buses make the difference thinner than in the past 😉

 

Now the fact that both CPU's are not fully used is another story...

Message edited by chilly charly on 09-11-2008 12:46 PM
Chilly Charly    (aka CC)
Message 2 of 12

Thanks for your reply, Chilly Charly.

 

Matrix Power is defined as A x A x A x ... x A (n - 1 multiplications), so it should be comparable to Matrix Multiplication. In the case of A^2, it is A x A; A^3 is A x A x A. And if A = B = C, A^2 should take the same time as A x B, A^3 the same as A x B x C, etc.
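The equivalence for n = 2 is easy to demonstrate outside LabVIEW. A hedged NumPy sketch (NumPy's `matrix_power` playing the role of the Matrix Power VI, `@` the role of Real A x B):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((200, 200))

# A^2 computed as a matrix power and as a single matrix multiplication
power_result = np.linalg.matrix_power(A, 2)
multiply_result = A @ A

print(np.allclose(power_result, multiply_result))  # True
```

Both routes produce the same matrix, so for n = 2 the power should cost exactly one multiplication.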

Message 3 of 12

You could be right... if raising to a power were done only with integers. However, try using Pi (= 3.14159... 😉) as the exponent! There is no time penalty for the power operation... I'm not sure how you would handle that with mere multiplications!

 

On my Macintosh (Core 2 Duo), both processors are fully used...

Message edited by chilly charly on 09-11-2008 01:00 PM
Chilly Charly    (aka CC)
Message 4 of 12

Matrix powers can only be taken with integer exponents, by definition, because matrix multiplication is different from scalar multiplication (each entry of the product is the dot product of a row of the first matrix with a column of the second), rather than simply raising every entry of the matrix to some power.

 

For example, you cannot define the square root of a matrix as such, but you can sometimes find a matrix A such that A x A = B.
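A small NumPy illustration of that last point (a sketch, not LabVIEW code): a matrix square root is not an entrywise operation, it can exist, and when it exists it need not be unique.

```python
import numpy as np

B = np.array([[4.0, 0.0],
              [0.0, 9.0]])

# One matrix A with A x A = B ...
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
print(np.allclose(A @ A, B))  # True

# ... and it is not unique: flipping the sign of an entry
# gives another square root of the same B.
A2 = np.array([[-2.0, 0.0],
               [0.0, 3.0]])
print(np.allclose(A2 @ A2, B))  # True
```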

Message 5 of 12

Well, of course you are right... I think I must apologize for not paying enough attention... I believe one of my friends here (the one with a blinking eye and a fur) will be extremely happy...
This being said, the calculation time is far from linearly dependent on the exponent value (see below).
And on my Mac, with n = 3 the ratio falls to 1.2...

So, I was not entirely off track 😄

 

Add 1 to the abscissa!

Message edited by chilly charly on 09-11-2008 04:08 PM
Chilly Charly    (aka CC)
Message 6 of 12

Maybe my apologies came too quickly: it seems that non-integer exponents can be used after all! 😮 😄 (though not in the LabVIEW implementation)

For an in-depth discussion, see here.
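For the curious, here is one way a non-integer matrix power can be defined; this is a hedged NumPy sketch (again, not what LabVIEW implements), and it assumes the matrix is diagonalizable with eigenvalues for which the scalar power makes sense:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [0.0, 9.0]])

# A^t for non-integer t via eigendecomposition: A^t = V diag(w^t) V^-1
w, V = np.linalg.eig(A)
t = 0.5
A_half = (V @ np.diag(w ** t) @ np.linalg.inv(V)).real

# Squaring the "half power" recovers A.
print(np.allclose(A_half @ A_half, A))  # True
```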

Message edited by chilly charly on 09-11-2008 04:25 PM
Chilly Charly    (aka CC)
Message 7 of 12
And I forgot to specify that to establish the graph above, the reference time (used for the ratio) was the calculation of A x A...
Chilly Charly    (aka CC)
Message 8 of 12

It is interesting that the ratio can fall for n = 3 - I suspect that is because Matrix Power performs the operation in one call, whereas it requires two successive calls to Matrix Multiplication.

 

The point remains, however: for the current implementation of Matrix Power (integer powers only), Matrix Power should be no slower than the equivalent sequence of Matrix Multiplications.
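In fact, the sub-linear growth of the time ratio reported earlier in the thread is what one would expect if the power is computed by repeated squaring, which needs only O(log n) multiplications instead of n - 1. A hedged NumPy sketch of that technique (I have no visibility into what the Matrix Power VI actually does internally):

```python
import numpy as np

def matpow_by_squaring(A, n):
    """A^n using O(log n) matrix multiplications (binary exponentiation)."""
    result = np.eye(A.shape[0])
    base = A
    while n > 0:
        if n & 1:            # current bit of n is set: fold base into result
            result = result @ base
        n >>= 1
        if n:                # square the base for the next bit
            base = base @ base
    return result

A = np.random.default_rng(1).random((50, 50))
print(np.allclose(matpow_by_squaring(A, 9),
                  np.linalg.matrix_power(A, 9)))  # True
```

With this scheme A^9 costs about 5 multiplications rather than 8, so the time per unit of exponent shrinks as n grows.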

Message Edited by pauldavey on 09-12-2008 11:45 AM
Message 9 of 12

Hi, pauldavey

 

Thanks for bringing up this topic! I have verified that the Matrix Power VI takes twice as long as the Matrix Multiply VI when power = 2. We will fix this issue in the next release of LV. However, the Matrix Power VI can run in parallel on my dual-core PC; as a result, the time ratio is 1:2, not 1:4 as you reported. Can you tell me the configuration of your machine (OS, CPU)?


Message 10 of 12