Hello Monks
I have a strange problem here. I read a huge file (1+ GB) 1000 records at a time. I read the records into a hash, process them, and then try to free the memory with undef. Then I process the next 1000 records, and so on.
But strangely, the code runs out of memory after a while. It looks like garbage collection is not being triggered in time. Is there some way I can force garbage collection?
Have you faced such a problem?
The code is something like this...
my %map;
my $count = 0;
while (sysread(CSV, $record, 66)) {
    $map{substr($record, 18, 14)}->{substr($record, 3, 15)} = substr($record, 36, 29);
    if ($count++ > 1000) {
        &process();
        undef %map;   # trying to free the batch here
    }
}
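In case it helps anyone reading later, here is a minimal sketch of an alternative to the snippet above: instead of undef-ing the hash, empty it in place and reset the counter after each batch. This assumes process() takes a reference to the current batch and that the CSV filehandle is already open, neither of which is shown in the original snippet.
# Hypothetical reworking (not the original code): clear the hash in place
# and reset the counter after every batch.
my $record;
my %map;
my $count = 0;
while (sysread(CSV, $record, 66)) {
    $map{substr($record, 18, 14)}->{substr($record, 3, 15)} = substr($record, 36, 29);
    if (++$count >= 1000) {
        &process(\%map);
        %map   = ();   # drop all keys; Perl reuses the freed memory internally
        $count = 0;    # start counting the next batch
    }
}
&process(\%map) if %map;   # don't forget the final, partial batch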
Update:
The problem was solved by using lexical scope rather than delete/undef. I updated the code to the following, and it performs much better!!
while (sysread(CSV, $record, 66)) {
    my %map;       # lexical hash: its memory is released when this scope ends
    my $count = 0;
    $map{substr($record, 18, 14)}->{substr($record, 3, 15)} = substr($record, 36, 29);
    while (sysread(CSV, $record, 66)) {
        $map{substr($record, 18, 14)}->{substr($record, 3, 15)} = substr($record, 36, 29);
        last if $count++ > 1000;
    }
    &process(\%map);   # process this batch (full, or the final partial one)
}
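If you want to confirm how much memory each batch actually occupies, a quick check (assuming the Devel::Size module is installed) is to print the total size of the hash just before it is processed:
use Devel::Size qw(total_size);
# inside the loop, just before &process(\%map):
printf "batch uses %d bytes\n", total_size(\%map);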
Thank you all for your help!!