### Re^2: bigint == horrible performance?

by BrowserUk (Pope)
on Nov 06, 2011 at 11:02 UTC ( #936262 )

in reply to Re: bigint == horrible performance?
in thread bigint == horrible performance?

It's not a hard problem to solve:

```perl
#! perl -slw
use strict;
use Time::HiRes qw[ time ];

my $start = time;

## Memoise: $res[ $n ] caches the number of steps from $n down to 1.
my @res;
for ( 1 .. 1e6 ) {
    my $c = 0;
    my $n = $_;
    while( $n > 1 ) {
        ++$c;
        $n = $n & 1 ? 3 * $n + 1 : $n / 2;
        if( defined $res[ $n ] ) {
            $c += $res[ $n ];
            last;
        }
    }
    $res[ $_ ] = $c;
}

printf "Took %.6f seconds\n", time() - $start;

my( $n, $i ) = 0;
$n < $res[ $_ ] and ( $n, $i ) = ( $res[ $_ ], $_ )
    for 1 .. $#res;

print "longest sequence is $n steps starting with $i";

__END__
c:\test>collatz.pl
Took 2.000000 seconds
longest sequence is 524 steps starting with 837799
```

And the OP gave a link to his own 3-second solution, though I'm not sure why he pastebin'd it rather than posting it here.

The intriguing thing is why the OP felt the need for bigint in the first place, given that with Perl's native scalars he could have calculated this(*) for numbers up to 2,251,799,813,685,248 before running into precision problems.

(*)Assuming of course he had 400+ years and the exabytes of ram required to store the hash :)
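That native-precision boundary is easy to demonstrate; a minimal sketch (the `2.0` forces floating-point, since `**` on integer operands can stay integral on 64-bit builds):

```perl
#! perl -slw
use strict;

# Perl's NV (a C double) represents every integer exactly only up to
# 2**53; beyond that, adjacent integers collapse together. The figure
# above (2**51) leaves headroom so intermediate 3*n+1 values stay exact.
print 2.0**52 + 1 == 2.0**52 ? 'collides' : 'distinct';   # distinct
print 2.0**53 + 1 == 2.0**53 ? 'collides' : 'distinct';   # collides
```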

With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

#### Re^3: bigint == horrible performance?
by mojotoad (Monsignor) on Nov 08, 2011 at 09:39 UTC
I've bumped into some of this lately: XS extensions with no particular expectation of whether you're on an actual 64-bit architecture or not. Perl itself actually recommends being compiled in 32-bit mode even on a 64-bit architecture, because the docs suppose it "will rarely be actually required" (those are semi-quotes).

So now, in the XS world, I've found myself facing a couple of things: 64 bit stuff and ipv6 (128 bits).

In the XS world, Math::Int64 works quite nicely. However, I get far too many explosions with Math::Int128 at this point, so I end up using packed arrays and timely casting, while trying not to misplace network byte order.

32-bit (and, I guess, 64-bit?) perl has unsigned, int, double, and string scalar types (i.e. UV, IV, NV, and PV). There is no native perl guarantee for anything 64-bit or 128-bit other than a string.
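For what it's worth, the packed-string juggling looks something like this - a minimal sketch, with the IPv6 address purely illustrative:

```perl
#! perl -slw
use strict;

# With no native 128-bit type, carry the value as a 16-byte string in
# network byte order and unpack whatever view of it you need.
my $ipv6   = pack 'N4', 0x20010db8, 0, 0, 1;    # 2001:db8::1
my @groups = unpack 'n8', $ipv6;                # eight 16-bit groups
printf "%x:%x:%x:%x:%x:%x:%x:%x\n", @groups;    # 2001:db8:0:0:0:0:0:1
```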

I understand the perils of assuming too much about the data types. But in the world of XS, <whimper>

Cheers,
Matt

salva produces great code and enthusiastically maintains and extends his modules at the drop of a hat.

Indeed, Math::Int128 came about because I asked a simple question here, and he offered to write one.

And lo, two days later it was so.

If the module is giving you problems, help him to help you. You will not regret the effort.

I am really interested in knowing the details of any "explosions" related to Math::Int128. Report them through the CPAN bugtracker, please!
> I am really interested in knowing the details of any "explosions" related to Math::Int128

Me, too. Some elaboration would be appreciated - even if it's just a link to the bug report that salva has requested.

Cheers,
Rob
Hi salva, sorry for the delay in responding.

I don't have the particular hardware in front of me at the moment, but as I recall: The problem wasn't in the runtime, it was during compilation. There was a particular pragma or definition required which was dependent on the version of glib installed. It was explicitly mentioned in a thread here on PM which I found while googling. I didn't have the time to deep-dive into it at the time.

I'll update this node once I get back on the system in question and give you the specific reference. Perhaps you can save me some time with the deep-diving.

update: Okay, found it. It's this business regarding how TI gets manipulated with __attribute__ and __mode__. Salient details:

I didn't pursue it much further because the project I was working on has to be very portable (mostly across linuxes, some solaris) across 32 and 64 bit architectures. I'm not saying your code isn't portable, but I'd like to avoid exceptional cases if possible.

On a more general note, for both Math::Int64 and Math::Int128, I would very much like a way to catch overflow conditions at the XS level; without this I have to punt to Math::BigInt when I'd really rather have the speed of your modules.
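Until such an option exists, one stopgap is to pre-check the operand rather than inspect the result, since Perl doesn't wrap on overflow but silently upgrades the value to a lossy NV. A minimal sketch with a deliberately conservative bound (the helper name is made up):

```perl
#! perl -slw
use strict;

# Guard one 3*n+1 step. Any n at or below 2**62 - 1 (i.e. ~0 >> 2 on a
# 64-bit build) keeps 3*n + 1 within 64 bits: conservative but cheap.
sub mul3add1_checked {
    my $n = shift;
    die "risk of 64-bit overflow\n" if $n > ( ~0 >> 2 );
    return 3 * $n + 1;
}

print mul3add1_checked( 27 );    # 82
```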

Thanks,
Matt

#### Re^3: bigint == horrible performance?
by moodywoody (Novice) on Nov 06, 2011 at 22:21 UTC

Ok ... to clear things up ... I am _not_ interested in better libraries or algorithms for that problem. I used bigint because I was not sure whether one of the Collatz chains would produce an intermediate number exceeding the integer range (probably my take on pessimisation). Only after reading on the projecteuler forum that everyone's solution was running in 1-5 sec while mine took 10 min, despite using the same algorithm as many others, did I get suspicious. So I removed "use bigint" ... lo, and behold! It runs ... fast. Googling "perl bigint performance" didn't turn up anything useful, which is why I asked here.

PS. I used pastebin because I wasn't sure about the "post code" conventions ;-)
> So I removed "use bigint" .. lo, and behold! It runs .. fast.

Arbitrary precision math is, regardless of the language in which it is written, much, much slower than fixed precision. It is the nature of the beast. If you know how arbitrary precision math works it will not come as any surprise.

It therefore falls to the developer to understand that and only use arbitrary precision when it is actually required.

I have a tendency to get somewhat exasperated when I see people offering Math::Big* as a solution to problems where there is a perception of a "precision problem", when 95% of them have far simpler and more efficient solutions.
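The scale of that difference is easy to see with a quick benchmark. A rough sketch, stepping one Collatz chain (27 takes 111 steps to reach 1) with native scalars versus Math::BigInt; exact ratios will vary with the build and the Math::BigInt backend:

```perl
#! perl -slw
use strict;
use Benchmark qw[ cmpthese ];
use Math::BigInt;

# One Collatz chain, native fixed precision vs arbitrary precision.
cmpthese( -1, {
    native => sub {
        my $n = 27;
        $n = $n & 1 ? 3 * $n + 1 : $n / 2 for 1 .. 111;
    },
    bigint => sub {
        my $n = Math::BigInt->new( 27 );
        $n = $n->is_odd ? 3 * $n + 1 : $n / 2 for 1 .. 111;
    },
} );
```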

> Arbitrary precision math is, regardless of the language in which it is written, much, much slower than fixed precision

I didn't do any benchmarking but my hunch is that the real slowdown wrt the op's code (when used with bigint) came about as a result of the multiplication and division operators being replaced by function calls.

But then ... replacement of "operators" with "function calls" is really part and parcel of how arbitrary precision math works, so I'm not sure that stating my "hunch" actually contributes anything terribly useful ... even if it *is* correct.

Cheers,
Rob
Of course you're right. Pray excuse me: I come from Java land, where everything not totally foolproof comes with giant warning signs. One gets used to that.
