Bug when undefining a large hash
by oxone (Friar) on Aug 22, 2008 at 13:09 UTC
oxone has asked for the wisdom of the Perl Monks concerning the following question:

It seems that it takes a long time to undefine a large hash on some operating systems. The following code illustrates the problem:
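(The code below is a minimal sketch of the kind of test described; the hash size is illustrative and the timing uses Time::HiRes.)

    use strict;
    use warnings;
    use Time::HiRes qw(time);

    # Build a large hash; the key count here is illustrative
    my %hash;
    $hash{$_} = $_ for 1 .. 5_000_000;
    print "Hash built\n";

    # Time how long the undef itself takes
    my $start = time();
    undef %hash;
    printf "undef took %.2f seconds\n", time() - $start;

    print "Done\n";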
Now, on Windows this does what I'd expect: the time taken for the script to undefine the hash is minimal. However, on FreeBSD using Perl 5.8.8 the same script takes around 14 seconds to complete that 'undef'.
Searching on PM, there is some relevant discussion in this node, which notes "there is/was a malloc bug that led to slow destruction of big data structures".

My first question is: is this bug documented anywhere, in terms of which OSes and Perls are affected, and exactly when the problem arises? Second question: is this fixed in a version of Perl later than 5.8.8? Final question: what are the possible workarounds?

From my own investigations, I can delay the problem to the end of the script by not undefining any large hashes, AND making sure all large hashes are globals (so they don't get undefined when they go out of scope at the end of a function block, for example). However, on doing this (e.g. by commenting out the 'undef' line in the script above), the problem still occurs in that there is a long delay after the script prints "Done" but before it actually finishes.

I could combine this approach with the POSIX-based hack suggested here, thereby pushing the problem to the end of script execution and then skipping past Perl's own garbage collection with "POSIX::_exit(0);" (see the sketch below). However, this all feels very hack-y, especially making all large hashes globals, which goes directly against the best practice of limiting the scope of variables wherever appropriate.

Anybody know of any better workarounds?
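For reference, the POSIX-based hack amounts to something like the sketch below. Note that POSIX::_exit bypasses END blocks, object destructors, and normal buffer flushing as well as the slow hash teardown, so output needs to be flushed explicitly.

    use strict;
    use warnings;
    use POSIX ();

    $| = 1;  # flush STDOUT as we go, since _exit skips the normal flush

    # Large hashes are kept as globals and never undef'd, so their
    # teardown would otherwise happen during global destruction.
    our %big_hash;
    $big_hash{$_} = $_ for 1 .. 5_000_000;

    print "Done\n";

    # Exit immediately, skipping Perl's global destruction
    # (and with it the slow freeing of %big_hash).
    POSIX::_exit(0);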