Re: Optimizing with Caching vs. Parallelizing (MCE::Map)
by Laurent_R (Canon) on Apr 14, 2020 at 13:53 UTC
Hi dear fellow monks,
as a follow-up to my previous post, I implemented my new caching strategy, which consists of storing in the cache the lengths of the sequences rather than the full sequences.
My laptop, where I'm running my tests right now, is obviously much slower than Nick's computer, where my original program took 22 seconds.
This is my first implementation of the new caching strategy, using a hash for the cache.
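The original Perl code is not part of this copy of the post, so as an illustration only, here is a rough Python sketch of the length-caching idea, assuming the underlying task is the Collatz sequence-length computation for the numbers 1 to 1,000,000 discussed in the parent thread (the function and variable names are mine, not the original's):

```python
def collatz_length(n, cache):
    """Length of the Collatz sequence starting at n.

    The cache maps a number to the length of its sequence, so once we
    reach a cached number we know how long the rest of the sequence is
    without ever storing the sequence itself.
    """
    chain = []
    while n not in cache:
        chain.append(n)
        n = n // 2 if n % 2 == 0 else 3 * n + 1
    length = cache[n]
    for m in reversed(chain):      # walk back up, caching each length
        length += 1
        cache[m] = length
    return length

cache = {1: 1}                     # the sequence for 1 is just [1]
lengths = [collatz_length(n, cache) for n in range(1, 1_000_001)]
```

The key point matching the post: each cache entry is a single integer (a length), not a whole sequence, which keeps memory bounded even though almost every number in the range ends up cached.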
This program produces the same results but is nearly 3 times faster.
But we now end up with a cache that has essentially one entry per input number in the 1..1_000_000 range. So, I thought, it might be better to use an array rather than a hash for the cache, since accessing an array item should be faster than a hash lookup.
The code for this new implementation follows the same logic, with an array replacing the hash as the cache.
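Again, the original Perl listing is missing from this copy; a self-contained Python sketch of the array-based variant (hypothetical names, same Collatz-length assumption as above) might look like the following. One wrinkle the flat array forces into the open: intermediate values can exceed 1,000,000, so out-of-range numbers are simply left uncached.

```python
LIMIT = 1_000_000

def collatz_lengths(limit):
    """Collatz sequence lengths for 1..limit, cached in a flat list.

    A list indexed by the starting number replaces a hash: for dense
    integer keys, direct indexing is cheaper than hashing.
    cache[n] == 0 means "not computed yet".
    """
    cache = [0] * (limit + 1)
    cache[1] = 1
    for start in range(2, limit + 1):
        chain = []
        n = start
        while n > limit or cache[n] == 0:
            chain.append(n)
            n = n // 2 if n % 2 == 0 else 3 * n + 1
        length = cache[n]
        for m in reversed(chain):  # walk back up, caching in-range lengths
            length += 1
            if m <= limit:         # values above limit stay uncached
                cache[m] = length
    return cache

lengths = collatz_lengths(LIMIT)
```

The design choice here mirrors the post's reasoning: since the keys are exactly the dense integers 1..1_000_000, a flat array wastes no space and turns every cache probe into a plain index operation.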
With this new implementation we still obtain the same result, but the program is now more than 55 times faster than my original one (and almost 20 times faster than the version using a hash for the cache).
I strongly suspected that using an array would be faster, but I frankly did not expect such a huge gain until I tested it.
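That intuition can be sanity-checked in isolation. This hedged Python sketch (Python's dict is not Perl's hash, and the size of the gap will differ by machine and language) simply compares the cost of one million dict lookups against one million list lookups, with everything else held identical:

```python
import time

N = 1_000_000
as_dict = {i: i for i in range(N)}
as_list = list(range(N))

def total(cache):
    # One million lookups; only the lookup mechanism differs
    # between the two cache types.
    s = 0
    for i in range(N):
        s += cache[i]
    return s

t0 = time.perf_counter(); dict_sum = total(as_dict); dict_time = time.perf_counter() - t0
t0 = time.perf_counter(); list_sum = total(as_list); list_time = time.perf_counter() - t0
print(f"dict: {dict_time:.3f}s  list: {list_time:.3f}s")
```

On a typical CPython build the list comes out ahead, though by a much smaller factor than the near-20x the post reports for Perl; a real cache also pays for insertions and hash-table growth, which a micro-benchmark of lookups alone does not capture.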
So it is true that throwing more CPU cores at the problem makes the solution faster (although only to a limited extent on my computer, which has just 4 cores). But using a better algorithm is often the better solution.