04-16-2019 09:47 AM - edited 04-16-2019 09:48 AM
On the subject of "recursion", I guess there is a distinction to make that could be important to readers. The algorithm I used is, I believe, recursive in that the function is symbolically applied within itself. In other words, the code I wrote is literally calculating GCD(GCD(GCD(...))), which is a recursive approach to finding the GCD of N numbers.
The implementation of the code I wrote is categorically NOT recursive in the programming sense of the word because the GCD function does not literally call itself.
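To make that distinction concrete, here is a minimal Python sketch (not the original LabVIEW code) of the same idea: the fold symbolically computes GCD(GCD(GCD(...))), but the implementation is an ordinary loop, so the function never calls itself.

```python
from math import gcd

def gcd_of_array(values):
    """Fold GCD over a list: result = GCD(GCD(GCD(a, b), c), ...).
    Symbolically recursive, but implemented as an iterative loop --
    this function never calls itself."""
    result = values[0]
    for v in values[1:]:
        result = gcd(result, v)
    return result

print(gcd_of_array([12, 18, 24]))  # 6
```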
This distinction does not absolve me of my original sloppy use of the term. (This is, after all, a programming forum.) That said, I like words and think they're important and found the above to be a clarifying distinction for me looking back on the post, so I thought I'd share it.
04-16-2019 09:48 AM
@altenbach wrote:
@croohcifer wrote:
Yes, I did benchmark it per Darren's brainless LabVIEW benchmarking best practices to be multiple orders of magnitude faster than the original solution.
Most likely things could be improved even further by taking advantage of the fact that we are dealing with an array. For example if we would sort the array in a certain way first and then use code tailored to that fact. I have not tried.
Or decimate the array into two arrays, and do the GCD of the pairs in parallel. Then recurse (yes!) over the results. Of course, take care of arrays with an odd number of elements.
04-16-2019 09:57 AM
wiebe@CARYA wrote:
Or decimate the array into two arrays, and do the GCD of the pairs in parallel. Then recurse (yes!) over the results. Of course, take care of arrays with an odd number of elements.
What would you do for an odd number of elements? I'm thinking that for a GCD function, if you just repeated one of your elements to make the count even again, the end result should still be the same.
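A quick Python sketch (again, not the LabVIEW code from the thread) of the pairwise approach, with one way to handle an odd count: carrying the leftover element forward is equivalent to pairing it with a copy of itself, since gcd(x, x) == x.

```python
from math import gcd

def pairwise_gcd(values):
    """Halve the array by taking GCDs of adjacent pairs, then repeat
    on the results. With an odd count, the last element is carried
    over unchanged -- the same as pairing it with a duplicate of
    itself, because gcd(x, x) == x."""
    while len(values) > 1:
        pairs = [gcd(values[i], values[i + 1])
                 for i in range(0, len(values) - 1, 2)]
        if len(values) % 2:          # odd count: carry the leftover element
            pairs.append(values[-1])
        values = pairs
    return values[0]

print(pairwise_gcd([12, 18, 24, 30, 42]))  # 6
```

In LabVIEW the pairs in each pass could go through a parallel for loop, which is the point of splitting the array this way.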
04-16-2019 10:46 AM
Nice and clean.
But if you have a large set of numbers, this should be faster:
04-17-2019 02:55 AM
I was thinking along the lines of this:
Note that we can skip GCD-ing the rest if any of the intermediate results is 1, since GCD(1, anything) is 1.
Also note the parallel for loop, which should improve the speed.
This would be an alternative, benefitting from 'normal' parallel execution:
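The early-exit idea can be sketched in Python (a stand-in for the LabVIEW diagram, not the actual code): once the running result hits 1, the overall GCD is 1 and the rest of the array can be skipped.

```python
from math import gcd

def gcd_with_early_exit(values):
    """Running GCD that stops as soon as the result reaches 1:
    GCD(1, anything) == 1, so the remaining elements cannot
    change the answer."""
    result = values[0]
    for v in values[1:]:
        result = gcd(result, v)
        if result == 1:
            break                    # rest of the array is irrelevant
    return result

print(gcd_with_early_exit([9, 6, 4, 1000000]))  # 1
```

How much the early exit helps depends on the data: for coprime inputs it can stop after a couple of elements, while for arrays sharing a large factor it never triggers.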
I wonder how these would perform in a benchmark (but not enough to spend much time on it). It would depend highly on the data set anyway.
95% untested, as usual...