a lot of time to multiply two arrays

So I was playing with this again today to see if my new RAM would make any difference. It didn't do anything for the matrix calculation (maybe it would with a much larger matrix), and LV8 seemed to take about the same time to start up. :(

But one thing I did notice is that when you connect a matrix data type to a standard function like 'multiply', it actually gets replaced with a polymorphic VI. This seems to be true for all the functions. So underneath the hood, the standard multiply function is using the "A x B.vi". Go ahead, just double-click on it and see for yourself.


Ed Dickens - Certified LabVIEW Architect
Lockheed Martin Space
Using the Abort button to stop your VI is like using a tree to stop your car. It works, but there may be consequences.
Message 21 of 24
Hi all,

Thought I might jump in here and cover some of the differences between LabVIEW versions and the underlying linear algebra code. Prior to LabVIEW 7.1, all linear algebra functionality was computed in lvanlys.dll using code that was developed in-house. Beginning with LabVIEW 7.1, lvanlys.dll calls into Intel's MKL library. The Intel libraries use a lot of techniques to wring performance out of their implementation: for example, blocking the algorithm to preserve data locality in the caches, and using the vector instruction sets (SSE, SSE2, etc.) where available. Check out Intel's website for details. For large matrices this really makes a big difference in execution speed, as demonstrated in this example. For small problems (e.g. 10x10) there can actually be a slight decrease in performance compared to LabVIEW 7.0, but the execution time for that problem size is so small anyway that it is typically not an issue.
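
To make the blocking idea concrete, here is a minimal C sketch of a blocked matrix multiply. It is not the actual lvanlys.dll or MKL code; the tile size BS and the function name matmul_blocked are illustrative placeholders.

```c
/*
 * Minimal sketch of cache blocking, in plain C (not the lvanlys.dll or MKL
 * source). Working on BS x BS tiles keeps the operands resident in cache so
 * each element is reused many times before it is evicted; the innermost loop
 * is also the kind of loop a tuned library would vectorize with SSE/SSE2.
 */
#include <stddef.h>

#define BS 64  /* placeholder tile edge; real libraries tune this per cache size */

/* C += A * B for n x n row-major matrices, processed tile by tile. */
void matmul_blocked(size_t n, const double *A, const double *B, double *C)
{
    for (size_t ii = 0; ii < n; ii += BS)
        for (size_t kk = 0; kk < n; kk += BS)
            for (size_t jj = 0; jj < n; jj += BS)
                /* multiply one tile of A by one tile of B into a tile of C */
                for (size_t i = ii; i < ii + BS && i < n; ++i)
                    for (size_t k = kk; k < kk + BS && k < n; ++k) {
                        double a = A[i * n + k];
                        for (size_t j = jj; j < jj + BS && j < n; ++j)
                            C[i * n + j] += a * B[k * n + j];
                    }
}
```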

-Jim


Message 22 of 24
Hi, Jim,

Does that mean the matrix multiplication implementation is different between the Intel MKL and Intel IPP libraries? Why is it so slow when ippmMul is called from the Intel IPP library?

Andrey.
Message 23 of 24
Hi Andrey,

I don't have any direct experience with the Intel IPP libs, but looking at their manual I noticed that the matrix operations claim to be optimized for very small sizes, probably to support the specific sub-problems encountered in image processing and gaming applications. The matrix sizes listed as highly optimized are 3x3, 4x4, 5x5, and 6x6. This is a very different use case from the one targeted by MKL, which is intended for larger problems and optimizes accordingly.
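
To see the size effect described here, a rough C sketch like the following could time MKL's cblas_dgemm (the CBLAS interface to the BLAS general matrix multiply that MKL provides) at a tiny size and a large one. The header name, the use of clock(), and the repetition counts are assumptions about a typical MKL installation, not anything LabVIEW itself exposes.

```c
/*
 * Rough benchmark sketch: time MKL's cblas_dgemm at a tiny size and a large
 * size to see where the heavy optimizations start to pay off. Assumes a
 * typical MKL install (include <mkl.h>, link against MKL). Note that clock()
 * measures CPU time, so with a multithreaded MKL the per-multiply figure can
 * exceed wall-clock time; this is only meant as a ballpark comparison.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <mkl.h>

static double seconds_per_multiply(int n, int reps)
{
    /* calloc gives zero-filled matrices, which is fine for timing purposes */
    double *A = calloc((size_t)n * n, sizeof *A);
    double *B = calloc((size_t)n * n, sizeof *B);
    double *C = calloc((size_t)n * n, sizeof *C);

    clock_t t0 = clock();
    for (int r = 0; r < reps; ++r)
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n, 1.0, A, n, B, n, 0.0, C, n);
    double secs = (double)(clock() - t0) / CLOCKS_PER_SEC / reps;

    free(A); free(B); free(C);
    return secs;
}

int main(void)
{
    printf("6x6:       %g s per multiply\n", seconds_per_multiply(6, 100000));
    printf("1000x1000: %g s per multiply\n", seconds_per_multiply(1000, 10));
    return 0;
}
```

With something like this you would expect the 1000x1000 case to show the full benefit of MKL's blocking and vectorization, while the 6x6 case is dominated by per-call overhead, which is exactly the regime IPP's small-matrix routines target.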

-Jim
Message 24 of 24