05-25-2021 10:38 AM - edited 05-25-2021 10:42 AM
So this code was mentioned elsewhere and I had a look. Completely wrong approach!
If this is all the code needs to do, it can be sped up by orders of magnitude while using very little memory.
There is absolutely no reason to store all these elements in an array of clusters (size = 20935). All we need is a 2D array with 20935 rows and 3 columns (one to keep the count, one to sum the elements, and one to sum the squares of the elements). That's all you need to calculate the mean and stdev for each row later! Now all operations are fully in place!
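For anyone reading this in text form, here is a minimal sketch of that bookkeeping in Python/NumPy (the real code is a LabVIEW diagram; the function names and the population-vs-sample stdev choice below are just illustrative assumptions):

import numpy as np

N_ROWS = 20935                     # number of rows mentioned above
acc = np.zeros((N_ROWS, 3))        # fixed-size accumulator, updated fully in place

def add_sample(row, x):
    # Fold one new value into the accumulator for that row.
    acc[row, 0] += 1.0             # column 0: count
    acc[row, 1] += x               # column 1: running sum
    acc[row, 2] += x * x           # column 2: running sum of squares

def mean_and_stdev(row):
    # Recover mean and (population) stdev from the three sums; use n-1 for sample stdev.
    n, s, sq = acc[row]
    mean = s / n
    var = max(sq / n - mean * mean, 0.0)   # clamp tiny negative round-off
    return mean, np.sqrt(var)

The memory cost stays constant (20935 x 3 doubles) no matter how many values stream in, which is exactly why every operation can be done in place.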
05-25-2021 11:39 AM
@altenbach wrote:
All we need is a 2D array with 20935 rows and 3 columns (one to keep the count, one to sum the elements, and one to sum the squares of the elements). That's all you need to calculate the mean and stdev for each row later! Now all operations are fully in place!
Here's a quick comparison crudely implementing my idea (I am sure it can be optimized further). The results are identical (within numerical errors due to slight changes in execution order), and the new code is ~40x faster on the first iteration (and hundreds of times faster later, as the original code slows down).
05-25-2021 11:58 AM
Hi altenbach,
Thank you for revisiting that old code.
I was trying to figure out how these are equal, but I might as well give up and just replace it 😁
05-25-2021 12:15 PM
The ptbypt version that ships with LabVIEW is quite similar to my approach when length=0 (infinite horizon). Just open the panel and study it. 😉
(We cannot really use it for our approach, because you would need a huge number of parallel instances, one for each row. :D)
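For illustration only, here is roughly what a single infinite-horizon (length = 0) point-by-point instance boils down to, written as a Python sketch; this is not the shipping VI, and it handles exactly one stream, which is why you would need one instance per row:

import math

class RunningStats:
    # One instance tracks one stream: count, sum, and sum of squares.
    def __init__(self):
        self.n = 0
        self.s = 0.0
        self.sq = 0.0

    def update(self, x):
        # Add one point and return the current mean and (population) stdev.
        self.n += 1
        self.s += x
        self.sq += x * x
        mean = self.s / self.n
        var = max(self.sq / self.n - mean * mean, 0.0)
        return mean, math.sqrt(var)

With 20935 rows you would need 20935 of these running in parallel, while the single 2D accumulator above covers all rows at once.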
05-25-2021 12:19 PM