Re: Improving memory performance
by Zaxo (Archbishop) on Oct 04, 2002 at 02:59 UTC
Also,
- Use tightly scoped lexicals. They are automatically destroyed as they go out of scope, leaving space for the next scope's lexicals.
- Read files record by record instead of slurping them. Related to that,
- Use pronouns. $_ and @_ do very well for many uses of temporary lexicals.
- The memory number you need to worry about is peak usage. Perl hangs on to any memory the system gives it, and doesn't let go till the perl process ends. That has major importance for long-running processes, as with mod_perl and other daemons.
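The first two points can be combined into one habit. A self-contained sketch (the "file" here is an in-memory scalar just so it runs anywhere; with a real file you'd open a path the same way):

```perl
use strict;
use warnings;

# Sketch: process a stream record by record instead of slurping it whole.
my $data  = "alpha\nbeta\ngamma\n";
my $count = 0;
{
    # $fh and $line are tightly scoped lexicals: both are released
    # when execution leaves this block.
    open my $fh, '<', \$data or die "open: $!";
    while (my $line = <$fh>) {
        chomp $line;
        $count++;              # only one record is held at a time
    }
    close $fh;
}
print "$count records\n";
```

Peak usage stays at one record's worth, instead of the whole file at once.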
After Compline, Zaxo
The memory number you need to worry about is peak usage. Perl hangs on to any memory the system gives it, and doesn't let go till the perl process ends. That has major importance for long-running processes, as with mod_perl and other daemons.
To expand further on this one, this is actually the operating system's fault. In *nix, a process address space can only be extended. I believe the same is true in Win32 systems, unless specialized APIs are used (which nobody uses anyway ;)
(BTW, ++ to Zaxo)
Regards.
-lem
|
Good points, but one question. I was under the impression that Perl normally hangs onto the last pad values as an optimization. Is this only in subs, or does it happen with all scopes? I think it does hang on to them, or the my $x if 0 behaviour wouldn't occur. If that is the case and you have very large lexical data structures, you would want to explicitly undef them at the end of the scope.
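If that reading of pads is right, being explicit costs nothing and guarantees the release. A sketch (the million-element list is just an illustrative stand-in for a genuinely large structure):

```perl
use strict;
use warnings;

sub crunch {
    my @big = (1) x 1_000_000;   # large temporary lexical
    my $sum = 0;
    $sum += $_ for @big;
    undef @big;    # release the elements now, rather than leaving them
                   # in the sub's pad for possible reuse on the next call
    return $sum;
}

my $result = crunch();
print "$result\n";
```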
-Lee
"To be civilized is to deny one's nature."
|
There's only one pad per subroutine, so it's per-sub.
Re: Improving memory performance
by dws (Chancellor) on Oct 04, 2002 at 02:03 UTC
How do I free up memory (e.g. objects) I no longer need?
@array = (); # empty the array
%hash = (); # empty the hash
$var = undef; # empty the variable
You'll also see people using undef to undefine things. Consult perlfunc for details on undef.
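Both forms work; undef on an aggregate additionally asks perl to give the storage back to its internal pool, whereas assigning an empty list keeps the container around for reuse. A quick sketch of both:

```perl
use strict;
use warnings;

my @array = (1 .. 5);
my %hash  = (a => 1, b => 2);
my $var   = "some value";

@array = ();      # empty the array (the container sticks around)
%hash  = ();      # empty the hash
undef $var;       # undefine the scalar

# undef also works on aggregates, releasing their storage:
undef @array;
undef %hash;

my $summary = scalar(@array) . " elements, " . scalar(keys %hash) . " keys";
print "$summary\n";
```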
Re: Improving memory performance
by fokat (Deacon) on Oct 04, 2002 at 05:05 UTC
Something you can also try in some cases, is to use the same data. Let's say you have a %hash or a @list you want to work with.
You might be tempted to do something along the lines of:
my @newlist = &perform_fancy_op(@list);
But this has the side effect of pushing all of the list elements onto the stack (potentially growing the stack segment and your memory footprint) and also allocating a new list for the result.
If you can throw away the old contents of @list, perhaps you could change your code to do something like:
&perform_efficient_op(\@list);
In this form, you only pass a reference to the whole list (a scalar) and could probably operate on the old list in place, saving the memory required for the additional copies of the list going up and down your stack.
This will usually make your code faster too, especially when dealing with long lists.
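To make the idea concrete, here is one hypothetical perform_efficient_op (the name and the doubling operation are just for illustration) that works through the reference and mutates the caller's array in place:

```perl
use strict;
use warnings;

# Hypothetical in-place operation: double every element through the ref,
# rather than copying the list in and a new list out.
sub perform_efficient_op {
    my ($aref) = @_;
    $_ *= 2 for @$aref;    # modifies the caller's array directly
    return;
}

my @list = (1, 2, 3);
perform_efficient_op(\@list);
print "@list\n";
```

Only one scalar (the reference) crosses the sub boundary, no matter how long @list is.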
Regards.
I have a way (not thoroughly tested) to minimize memory usage for a long-running script: I fork children to take care of the memory-intensive stuff. I then kill the forked child, which brings my perl script back down to a small size and lets it sleep without consuming resources. The next time it has to process very large data, I refork, and so on.
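A minimal sketch of that fork-and-discard pattern (Unix-only; the "memory-intensive work" here is a placeholder allocation):

```perl
use strict;
use warnings;

# The child does the memory-hungry work; when it exits, its entire
# address space goes back to the OS, so the parent stays small.
defined(my $pid = fork()) or die "fork: $!";
if ($pid == 0) {
    my @huge = (1) x 100_000;     # stand-in for the real big job
    exit 0;                       # child's memory is released here
}
waitpid($pid, 0);                 # parent reaps the child
my $exit = $? >> 8;
print "child $pid exited with status $exit\n";
```

The parent never grows, because the big allocation only ever happened in the child's copy of the process.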
Re: Improving memory performance
by lestrrat (Deacon) on Oct 04, 2002 at 08:19 UTC
Also, according to the mod_perl Developer's Cookbook, you want to be careful with modules that automatically export symbols into your namespace, like POSIX:
# original code (from mod_perl book)
use POSIX;
setlocale(LC_ALL, 'en_US');
# now stop symbol exporting... (from mod_perl book)
use POSIX(); # instead of use POSIX;
POSIX::setlocale(&POSIX::LC_ALL, 'en_US');
According to the same book, doing "use POSIX" consumes about 140KB just for exporting those symbols. If you have many modules that use such modules, this will easily compound to a few megabytes...
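You can get a rough feel for the cost yourself by counting the names a bare import dumps into main:: (the exact number varies by perl version, but it is in the hundreds):

```perl
use strict;
use warnings;

my $before = keys %main::;
require POSIX;
POSIX->import;                    # this is what a bare "use POSIX;" does
my $after = keys %main::;
my $added = $after - $before;
print "POSIX added $added symbols to main::\n";
```

With use POSIX (), none of those symbols are created, at the price of the fully qualified calls shown above.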
I thought that if you used a module once (in your main, for example) it wouldn't increase the memory taken by subsequent uses in other modules also called by the main? Am I wrong?
Remember, use also does an import. If the module exports any subs, they will take up a small amount of space in each namespace they are imported into. That's what he was referring to.
Re: Improving memory performance
by mce (Curate) on Oct 04, 2002 at 09:57 UTC
Hi all,
good points about lexical scoping, but I would append:
use references
@a = @b; # the array is now in memory twice
$a = \@b; # the array is in memory once
This is especially useful when passing large arrays (or hashes or scalars) to subroutines.
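For the subroutine case, that looks like this (total and the 100_000-element array are just illustrative):

```perl
use strict;
use warnings;

my @big = (1 .. 100_000);

sub total {
    my ($aref) = @_;              # the sub sees the same array, no copy
    my $sum = 0;
    $sum += $_ for @$aref;
    return $sum;
}

# Pass one reference (a single scalar) instead of 100_000 list elements.
my $sum = total(\@big);
print "$sum\n";
```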
I hope this helps.
---------------------------
Dr. Mark Ceulemans
Senior Consultant
IT Masters, Belgium
Re: Improving memory performance
by fsn (Friar) on Oct 04, 2002 at 11:00 UTC
If you want to remove a hash element, array element, hash slice, or array slice, rather than a whole hash or array, I guess you should use delete. Read more in perlfunc.
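For hashes that looks like this (for array elements, splice is usually the better tool, since delete in the middle of an array leaves holes rather than shifting elements down):

```perl
use strict;
use warnings;

my %hash = (a => 1, b => 2, c => 3, d => 4);

delete $hash{a};                  # remove one element
delete @hash{qw(b c)};            # remove a hash slice

my $remaining = join ",", sort keys %hash;
print "$remaining\n";
```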
Re: Improving memory performance
by rir (Vicar) on Oct 04, 2002 at 19:28 UTC
Also beware of circular references. A variable that refers to itself, or a set of variables that form a closed loop of references, can be a memory leak.
What not to do:
{
my ($x,$y);
$x = \$y; # These will last until exit
$y = \$x;
}
{
my $x;
$x = \$x; # This will last also
}
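One way out, if you need the loop while the structure is alive, is Scalar::Util's weaken, which turns one link of the cycle into a reference that doesn't count toward the refcount. A sketch using a hypothetical Node class with a destruction counter to show that cleanup actually happens:

```perl
use strict;
use warnings;
use Scalar::Util qw(weaken);

my $destroyed = 0;                # file-scoped, visible to Node::DESTROY

package Node;
sub new     { return bless {}, shift }
sub DESTROY { $destroyed++ }

package main;
{
    my $n1 = Node->new;
    my $n2 = Node->new;
    $n1->{peer} = $n2;            # strong reference
    $n2->{peer} = $n1;            # closes the loop...
    weaken($n2->{peer});          # ...so weaken one side to break it;
                                  # without this, neither node is freed
}
print "$destroyed nodes destroyed\n";
```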
I've also had problems with a script using $sth->fetchrow_hashref (DBI). It didn't clean up correctly, so each iteration swallowed a little chunk of memory. I think there is a method that allows you to clean up, though (perhaps $sth->finish).