10-17-2015 12:32 PM - edited 10-17-2015 12:33 PM
@LukasW wrote:
I'm also curious how your benchmark works.
It's just a plain three-frame sequence feeding a chart to see the variations.
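For anyone who wants a text-language picture of that pattern, here is a rough Python sketch of a frame-style benchmark, assuming the usual tick-before / operate / tick-after idiom (the operation, sizes, and repeat count are placeholders, not the actual VI):

```python
import time
import numpy as np

data = np.random.rand(8 * 7200)        # placeholder test data

for _ in range(100):                   # repeated runs feed the "chart"
    t0 = time.perf_counter_ns()        # frame 1: start tick
    out = data.reshape(7200, 8).T      # frame 2: operation under test
    dt_us = (time.perf_counter_ns() - t0) / 1000.0  # frame 3: end tick
    print(f"{dt_us:.1f} us")           # printed here instead of charted
```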
10-19-2015 08:59 AM
Altenbach,
It seems there is a problem in the last loop (at least in the screenshot): the number of rows (8) is wired as both the iteration count and the element count for the Array Subset function, while the number of columns (100,000) is not used at all. So you are only replacing 64 elements (the first 8 columns), which can affect the benchmark results.
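A minimal sketch of what that wiring does, with NumPy standing in for the LabVIEW loop (sizes taken from the post; the names are hypothetical):

```python
import numpy as np

rows, cols = 8, 100_000
flat = np.ones(rows * cols)
out = np.zeros((rows, cols))

# Bug: the loop count N is wired to rows (8), so only
# 8 iterations x 8 elements = 64 elements get replaced.
for i in range(rows):                  # should be range(cols)
    out[:, i] = flat[i * rows : (i + 1) * rows]

print(np.count_nonzero(out))           # 64, not 800000
```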
10-19-2015 11:27 AM - edited 10-19-2015 04:43 PM
@Alexander_Sobolev wrote:
Altenbach,
It seems there is a problem in the last loop....
Thanks. Yes, you are right: N needs to be wired with the number of columns, not the number of rows. Now it's about the same speed as reshape. The single-element loop is still significantly slower, though. Makes more sense.
And, yes, the decimate/build version is about 2x faster, with the disadvantage that it is not really scalable without a change in code.
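To make the scalability point concrete, a NumPy sketch of the two approaches (my own illustration, not the actual VI): Decimate 1D Array has a fixed number of output terminals, so the channel count is baked into the diagram, whereas reshape/transpose takes the row count as data.

```python
import numpy as np

rows, cols = 8, 7200
flat = np.random.rand(rows * cols)     # interleaved samples

# Reshape + transpose: the row count is just a number, so it scales.
a = np.ascontiguousarray(flat.reshape(cols, rows).T)

# Decimate/build analogue: one strided slice per output terminal,
# hard-coded to 8 channels; a 9th channel means editing the code.
b = np.vstack((flat[0::8], flat[1::8], flat[2::8], flat[3::8],
               flat[4::8], flat[5::8], flat[6::8], flat[7::8]))

assert np.array_equal(a, b)
```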
10-21-2015 02:29 AM - edited 10-21-2015 02:33 AM
It took me a while but here are my results:
I used altenbach's benchmark and included the decimate and build just out of interest. It's not really applicable because scalability is more important than speed (at least if we're talking µs).
The test was run on a PXIe-8135 Target with 8 rows and 7200 columns of data.
(The results on my LabVIEW VM are pretty much the same but with a lot more jitter).
Would reshape and transpose be my best option according to these results? Regarding memory allocations, altenbach's column loop would be better, yet reshape and transpose runs a lot faster.
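To spell out the trade-off as I read it (a sketch with assumed names, not the benchmark code itself): reshape/transpose produces a fresh output buffer on each call, while the column loop replaces into a preallocated array that can be reused.

```python
import numpy as np

rows, cols = 8, 7200
flat = np.random.rand(rows * cols)

# Reshape + transpose: fastest here, but materializing the result
# allocates a new rows x cols buffer on every call.
out_a = np.ascontiguousarray(flat.reshape(cols, rows).T)

# Column loop: writes into a buffer allocated once up front, so
# repeated calls cause no new allocations, at the cost of the loop.
out_b = np.empty((rows, cols))
for i in range(cols):
    out_b[:, i] = flat[i * rows : (i + 1) * rows]

assert np.array_equal(out_a, out_b)
```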
Thanks for your input!
Lukas