06-15-2017 12:46 PM
Hello,
I'm working on Project Euler problem 14 right now and have code that I think works, but I'm running out of memory and am hoping for guidance on how to better manage this. The problem is to find the number less than 1 million that produces the longest Collatz sequence. The Collatz sequence is:
n -> n/2 (even n)
n -> 3n+1 (odd n)
and the sequence always (probably) ends at 1. I have attached my VI. I am brute-forcing it: checking every number less than 1 million, producing its Collatz sequence, finding the length, and keeping track of the longest by comparing the current array size to the largest so far.
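The attached VI is LabVIEW, so it can't be shown here, but the brute-force approach described above can be sketched in Python like this (function names are mine, purely for illustration):

```python
def collatz_length(n):
    # Count the terms in the Collatz sequence from n down to 1.
    length = 1
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        length += 1
    return length

def longest_collatz_below(limit):
    # Brute force: try every start value and keep the best so far,
    # mirroring the compare-against-largest approach described above.
    best_start, best_length = 1, 1
    for start in range(1, limit):
        length = collatz_length(start)
        if length > best_length:
            best_start, best_length = start, length
    return best_start

# longest_collatz_below(1_000_000) runs the full brute-force search.
```

Note that this sketch only tracks the length, not the whole sequence array, which already avoids the memory problem described in the post.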
Solved!
06-15-2017 05:03 PM
The first problem I see is that the first number you are checking is 0 so you are going to be stuck in an infinite loop immediately.
Besides that, I would be careful with floating-point comparisons, which could cause the same problem.
The final problem I can see from a quick glance is that you are never clearing the array you are storing your Collatz sequence in, which means that each number you check will just append its sequence onto the previous one.
06-15-2017 05:18 PM
Oops, I am starting at 0 and forgot to clear the array. I've attached a new draft. It, however, is not giving the correct answer to the problem.
06-15-2017 05:58 PM - edited 06-15-2017 06:25 PM
I'll have a look.
06-15-2017 07:07 PM
I did a quick rewrite and it finds the correct solution in under 500ms and no memory problems. Try to go below 1 second. 😄
Many optimizations are still possible ....
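The respondent's actual LabVIEW changes aren't shown in the thread, but one classic optimization for this problem is memoization: cache the chain length of every number you've already resolved, so each new chain stops as soon as it reaches a known value. A Python sketch of that idea (names are mine, not from the thread):

```python
lengths = {1: 1}  # known chain lengths, seeded with the terminal value

def collatz_length(n):
    # Follow the chain until we reach a value already in the cache,
    # then walk back and fill in a length for everything we visited.
    path = []
    while n not in lengths:
        path.append(n)
        n = n // 2 if n % 2 == 0 else 3 * n + 1
    length = lengths[n]
    for m in reversed(path):
        length += 1
        lengths[m] = length
    return length
```

Because roughly half of each chain revisits numbers seen before, this typically cuts the work by an order of magnitude or more compared with the pure brute force.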
06-15-2017 07:11 PM
I'll try to! Thanks for your advice. I'll post my results tomorrow when I do it. How do you measure the elapsed time by the way?
06-15-2017 07:19 PM - edited 06-15-2017 07:19 PM
Just use a three-frame flat sequence and the "High Resolution Relative Seconds" VI as follows.

(also have a look at our NI-Week presentation from 2016)
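The LabVIEW pattern reads the timer in the first and third frames of the flat sequence and subtracts the two readings, with the code under test in the middle frame. The equivalent in a text language (a Python sketch; the dummy workload is just an example) looks like:

```python
import time

t0 = time.perf_counter()                    # frame 1: read the timer
total = sum(i * i for i in range(100_000))  # frame 2: code under test
elapsed = time.perf_counter() - t0          # frame 3: read again, subtract
print(f"elapsed: {elapsed:.6f} s")
```

`time.perf_counter()` is a monotonic high-resolution clock, analogous to the "High Resolution Relative Seconds" node: only differences between readings are meaningful, not the absolute values.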
06-16-2017 07:01 AM
I got it right! But mine is still a lot slower than yours. I converted everything to integers and did bit operations where I could. Would you mind looking at mine and giving me some pointers on where I can improve efficiency? My elapsed time was 29.3 seconds.
Also, can you tell me if I'm correct about checking the even/oddness of an int with AND? I'm thinking the mechanism is that AND takes the integer's binary representation and compares its last bit to 1. If both bits are 1, it returns 1, meaning the number is odd. If the integer ends in a 0, it must be even, so the AND function returns 0. Is it correct that it only compares the last bit of the int to 1?
06-16-2017 07:10 AM
@TheStrangeQuark wrote:
Also, can you tell me if I'm correct about checking the even/oddness of an int with AND? I'm thinking the mechanism is that AND takes the integer's binary representation and compares its last bit to 1. If both bits are 1, it returns 1, meaning the number is odd. If the integer ends in a 0, it must be even, so the AND function returns 0. Is it correct that it only compares the last bit of the int to 1?
Think of the AND operation as converting both inputs to boolean arrays, ANDing each element and then converting the boolean array back into an integer.
06-16-2017 08:57 AM
Most of the speed gains are not due to the LabVIEW code itself.
First, close the front panels of the subVIs when benchmarking so there are no GUI indicators or controls to update millions of times. Then go to File-->VI Properties... and choose Category="Execution". There you can turn off debugging, consider inlining, etc.
Have a go at these settings and your execution speed will improve dramatically.
-Kevin P