http://qs321.pair.com?node_id=193952

ezekiel has asked for the wisdom of the Perl Monks concerning the following question:

I have a really simple script that queries a database then loops over each row like this:

$sth = $dbh->prepare("select COLUMN_1 from table where COLUMN_2 < 2500");
$sth->execute();
while (my ($column_1_value) = $sth->fetchrow_array()) {
    # do some processing on the column 1 value here
}

There is nothing fancy here, except that the query brings back 60 million rows! As a result, when I try to run the script, it brings the database and the machine running it to their knees during the execution of the SQL statement.

Short of database-level changes or memory upgrades (which are largely outside my control), does anyone have any suggestions for handling these big queries more effectively?
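As far as I can tell, whether the whole result set gets buffered on the client before the first fetch is often a driver setting rather than something the fetch loop controls. If this were MySQL via DBD::mysql, something like the following sketch would stream rows from the server instead of buffering them all client-side (mysql_use_result is specific to that driver, and the connection details here are made up for illustration):

use strict;
use warnings;
use DBI;

# Hypothetical connection details, purely for illustration.
my $dbh = DBI->connect('dbi:mysql:database=mydb', 'user', 'password',
                       { RaiseError => 1 });

# With mysql_use_result set, DBD::mysql fetches rows from the server as
# they are consumed instead of buffering the entire result set in
# client memory first.
my $sth = $dbh->prepare(
    'select COLUMN_1 from table where COLUMN_2 < 2500',
    { mysql_use_result => 1 },
);
$sth->execute();
while (my ($column_1_value) = $sth->fetchrow_array()) {
    # do some processing on the column 1 value here
}

Other drivers have their own knobs (DBI's RowCacheSize attribute is a hint some drivers honour), so this only helps where the driver supports it.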

One thought I had was to break the query up into smaller chunks, like this:

$sth = $dbh->prepare("select COLUMN_1 from table where COLUMN_2 between ? and ?");
for (my $i = 0; $i < 2500; $i += 100) {
    # BETWEEN is inclusive at both ends, so stop at $i + 99 to avoid
    # fetching the boundary rows twice (assumes integer values)
    $sth->execute($i, $i + 99);
    while (my ($column_1_value) = $sth->fetchrow_array()) {
        # do some processing on the column 1 value here
    }
}
but that seems ugly to me. Surely there must be a more elegant solution?
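Another variant I have seen in the DBI documentation is batch fetching with fetchall_arrayref and a row limit, which bounds client memory per batch without splitting the query itself. A minimal sketch (the 10_000 batch size is an arbitrary choice for illustration):

my $sth = $dbh->prepare('select COLUMN_1 from table where COLUMN_2 < 2500');
$sth->execute();

my $rows = [];    # cache holding one batch of rows
while (my $row = shift(@$rows)
       || shift(@{ $rows = $sth->fetchall_arrayref(undef, 10_000) || [] })) {
    my ($column_1_value) = @$row;
    # do some processing on the column 1 value here
}

This still pulls all 60 million rows through a single statement handle, so it cuts per-fetch overhead and memory use but will not by itself stop the query from hammering the database.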

Thanks.