perlquestion
ezekiel
<p>I have a really simple script that queries a database then loops over each row like this:
<code>
my $sth = $dbh->prepare("select COLUMN_1 from table where COLUMN_2 < 2500");
$sth->execute();
while (my ($column_1_value) = $sth->fetchrow_array()) {
# do some processing on the column 1 value here
}
</code>
</p>
<p>There is nothing fancy here, except that the query brings back 60 million rows! As a result, running the script brings the database, and the machine it runs on, to their knees while the SQL statement executes.</p>
<p>Short of database-level changes or memory upgrades (both largely outside my control), does anyone have suggestions for handling queries this big in a more effective manner?</p>
<p>One thought I had was to break the query up into smaller chunks as such:
<code>
my $sth = $dbh->prepare("select COLUMN_1 from table where COLUMN_2 between ? and ?");
for (my $i = 0; $i < 2500; $i += 100) {
$sth->execute($i, $i + 99); # BETWEEN is inclusive, so end at $i + 99 to avoid fetching boundary rows twice
while (my ($column_1_value) = $sth->fetchrow_array()) {
# do some processing on the column 1 value here
}
}
</code>
but that seems ugly to me. Surely there must be a more elegant solution?</p>
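<p>For what it's worth, another idea I toyed with is paging by the last key value seen instead of by fixed ranges, so each pass picks up where the previous one stopped. This is only a sketch (the DSN, chunk size, and variable names are mine, and it assumes COLUMN_2 is indexed, numeric, non-negative, and unique):
<code>
use strict;
use warnings;
use DBI;

# Hypothetical connection details; substitute the real DSN and credentials.
my $dbh = DBI->connect('dbi:Oracle:mydb', 'user', 'pass', { RaiseError => 1 });

my $chunk = 10_000;  # rows to process per pass (a guess; tune to taste)
my $last  = -1;      # assumes COLUMN_2 values are numeric and >= 0

# Also select COLUMN_2 so we can remember where each pass stopped.
my $sth = $dbh->prepare(
    "select COLUMN_1, COLUMN_2 from table
     where COLUMN_2 > ? and COLUMN_2 < 2500
     order by COLUMN_2"
);

while (1) {
    $sth->execute($last);
    my $rows = 0;
    while (my ($column_1_value, $column_2_value) = $sth->fetchrow_array()) {
        # do some processing on the column 1 value here
        $last = $column_2_value;           # remember the last key we handled
        last if ++$rows >= $chunk;         # stop this pass after $chunk rows
    }
    $sth->finish;                          # discard any unfetched rows
    last unless $rows;                     # no rows came back: we're done
}
</code>
It avoids the arbitrary 100-wide buckets, but it still feels like I'm working around the problem rather than solving it.</p>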
<p>Thanks.</p>