in reply to Optimizing with Caching vs. Parallelizing (MCE::Map)

Hi dear fellow monks,

As a follow-up to my previous post, I implemented my new caching strategy, which consists of storing in the cache the lengths of the sequences, rather than the full sequences.
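To spell the idea out: if the sequence starting at the successor of $n has length L, then the sequence starting at $n has length L + 1, so a single integer per number is enough to reconstruct any sequence length. Here is a minimal standalone sketch of that recurrence (my own illustration, not the program below):

#!/usr/bin/perl
# Illustration only: the length of the Collatz sequence for $n
# is 1 plus the length for the next term, so caching one integer
# per number suffices.
use strict;
use warnings;
use feature qw/say/;

my %len = (1 => 1);    # the sequence "1" has a single term

sub len {
    my $n = shift;
    $len{$n} //= 1 + len($n % 2 ? 3 * $n + 1 : $n / 2);
    return $len{$n};
}

say len(6);    # 9, for the sequence 6 3 10 5 16 8 4 2 1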

On the computer where I'm running my tests right now, my original program has the following timing:

$ time perl collatz.pl
837799: 525
626331: 509
[lines omitted for brevity]
906175: 445
922524: 445
922525: 445

real    1m37,551s
user    1m9,375s
sys     0m21,031s
My laptop is obviously much slower than Nick's computer (where this program took 22 seconds).

Here is my first implementation of the new caching strategy:

#!/usr/bin/perl
use strict;
use warnings;
use feature qw/say/;
use constant MAX => 1000000;

my %cache = (2 => 2);    # the sequence 2 -> 1 has 2 terms

sub collatz_seq {
    my $input = shift;
    my $n = $input;
    my $result = 0;
    while ($n != 1) {
        if (exists $cache{$n}) {
            # cache hit: the length of the rest of the sequence is known
            $result += $cache{$n};
            last;
        } else {
            my $new_n = $n % 2 ? 3 * $n + 1 : $n / 2;
            $result++;
            # populate the cache on the way down when possible
            $cache{$n} = $cache{$new_n} + 1
                if defined $cache{$new_n} and $n < MAX;
            $n = $new_n;
        }
    }
    $cache{$input} = $result if $input < MAX;
    return $result;
}

my @long_seqs;
for my $num (1..1000000) {
    my $seq_length = collatz_seq $num;
    push @long_seqs, [ $num, $seq_length ] if $seq_length > 400;
}
@long_seqs = sort { $b->[1] <=> $a->[1] } @long_seqs;
say "$_->[0]: $_->[1]" for @long_seqs[0..19];
This program produces the same outcome, but is nearly 3 times faster:
real    0m34,207s
user    0m34,108s
sys     0m0,124s
But we now end up with a cache holding essentially one entry per input number in the 1..1_000_000 range. So, I thought, it might be better to use an array rather than a hash for the cache: accessing an array item by index should be faster than a hash lookup, since no hashing of the key is needed.
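To sanity-check that intuition before rewriting the program, here is a minimal benchmark sketch (my own addition, using the core Benchmark module; the exact numbers will of course vary by machine), comparing a plain array fetch with a hash lookup on pre-filled caches of the same size:

#!/usr/bin/perl
# Sketch: compare array indexing vs. hash lookup on caches
# of comparable size. Benchmark is a core module.
use strict;
use warnings;
use Benchmark qw/cmpthese/;

my (@array_cache, %hash_cache);
$array_cache[$_] = $_ for 1 .. 1_000_000;
$hash_cache{$_}  = $_ for 1 .. 1_000_000;

# run each snippet for at least 3 CPU seconds and compare rates
cmpthese(-3, {
    array => sub { my $v = $array_cache[500_000]; },
    hash  => sub { my $v = $hash_cache{500_000};  },
});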

This is the code for this new implementation:

#!/usr/bin/perl
use strict;
use warnings;
use feature qw/say/;
use constant MAX => 1000000;

my @cache = (0, 1, 2);    # $cache[1] = 1, $cache[2] = 2; slot 0 is a placeholder

sub collatz_seq {
    my $input = shift;
    my $n = $input;
    my $result = 0;
    while ($n != 1) {
        if (defined $cache[$n]) {
            # cache hit: the length of the rest of the sequence is known
            $result += $cache[$n];
            last;
        } else {
            my $new_n = $n % 2 ? 3 * $n + 1 : $n / 2;
            $result++;
            # populate the cache on the way down when possible
            $cache[$n] = $cache[$new_n] + 1
                if defined $cache[$new_n] and $n < MAX;
            $n = $new_n;
        }
    }
    $cache[$input] = $result if $input < MAX;
    return $result;
}

my @long_seqs;
for my $num (1..1000000) {
    my $seq_length = collatz_seq $num;
    push @long_seqs, [ $num, $seq_length ] if $seq_length > 400;
}
@long_seqs = sort { $b->[1] <=> $a->[1] } @long_seqs;
say "$_->[0]: $_->[1]" for @long_seqs[0..19];
With this new implementation, we still obtain the same result, but the program is now more than 55 times faster than my original one (and almost 20 times faster than the solution using a hash for the cache):
$ time perl collatz3.pl
837799: 525
626331: 509
[lines omitted for brevity]
922524: 445
922525: 445

real    0m1,755s
user    0m1,687s
sys     0m0,061s
I strongly suspected that using an array would be faster, but I frankly did not expect such a huge gain until I tested it.

So, it is true that throwing more CPU cores at the problem makes the solution faster (although to a limited extent on my computer, which has only 4 cores). But using a better algorithm can often be a better solution.
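For comparison, this is roughly what the parallel variant discussed in the thread looks like (a sketch of my own, assuming the CPAN module MCE::Map, and not tuned in any way). Note that each forked worker keeps a private copy of the cache, so the cores partly duplicate each other's work, which is one reason the algorithmic improvement wins here:

#!/usr/bin/perl
# Sketch only: parallel version using MCE::Map (CPAN module).
# Each worker process has its own private @cache, so cache
# entries are not shared between cores.
use strict;
use warnings;
use feature qw/say/;
use constant MAX => 1000000;
use MCE::Map;

my @cache = (0, 1, 2);

sub collatz_seq {
    my $input = shift;
    my ($n, $result) = ($input, 0);
    while ($n != 1) {
        if (defined $cache[$n]) {
            $result += $cache[$n];
            last;
        }
        my $new_n = $n % 2 ? 3 * $n + 1 : $n / 2;
        $result++;
        $cache[$n] = $cache[$new_n] + 1
            if defined $cache[$new_n] and $n < MAX;
        $n = $new_n;
    }
    $cache[$input] = $result if $input < MAX;
    return $result;
}

# mce_map works like map, but spreads the work across workers
my @long_seqs = grep { $_->[1] > 400 }
    mce_map { [ $_, collatz_seq($_) ] } 1 .. MAX;
@long_seqs = sort { $b->[1] <=> $a->[1] } @long_seqs;
say "$_->[0]: $_->[1]" for @long_seqs[0 .. 19];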