It won't work as well, but it works well enough. My benchmark for 5000 says:
Method 6: 14 wallclock secs (14.31 usr + 0.01 sys = 14.32 CPU)
Method 7: 15 wallclock secs (14.28 usr + 0.01 sys = 14.29 CPU)
Method 8: 14 wallclock secs (14.25 usr + 0.01 sys = 14.26 CPU)
I got rid of the Math::BigInt call on the list of 1, then ran it a bunch of times, and there seemed to be some fluctuation. Another run gave me:
Method 6: 14 wallclock secs (14.08 usr + 0.02 sys = 14.10 CPU)
Method 7: 14 wallclock secs (14.14 usr + 0.01 sys = 14.15 CPU)
Method 8: 15 wallclock secs (14.12 usr + 0.01 sys = 14.13 CPU)
Not bad for such a stupid approach, huh?
In truth, yours is better by about as much as you lose by using the overloading interface. If I use the overloading interface as well, then
sub fact6b {
    my ($m, $n) = @_;
    if ($m < $n) {
        # Split the range at the midpoint and multiply the halves,
        # so the two sub-products stay roughly the same size.
        my $k = int($m/2 + $n/2);
        return fact6b($m, $k) * fact6b($k + 1, $n);
    }
    else {
        return Math::BigInt->new($m);
    }
}
is very straightforward to anyone who has seen divide and conquer before. And recursion is only bad because Perl penalizes function calls horribly.
Of course at this point we are both slow mainly because Perl doesn't implement a good algorithm for big-integer multiplication.
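For comparison, here is a sketch of the same divide-and-conquer range product in Python (my own illustration, not from this thread). CPython's big integers do use Karatsuba multiplication above a size threshold, so the balanced split actually pays off there: the final multiplications combine numbers of similar length, which is where sub-quadratic multiplication beats the schoolbook method.

```python
def fact_range(m, n):
    """Compute the product m * (m+1) * ... * n by divide and conquer.

    Splitting the range at the midpoint keeps the two sub-products
    roughly the same size, so the big multiplications near the top of
    the recursion combine similarly sized integers.
    """
    if m < n:
        k = (m + n) // 2
        return fact_range(m, k) * fact_range(k + 1, n)
    return m

# fact_range(1, n) is n!, e.g. fact_range(1, 10) == 3628800
```

The structure mirrors fact6b above; the only real difference is that Python's native integers are already arbitrary precision, so there is no Math::BigInt wrapper at the base case.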