I have a mod_perl process that slurps a large dataset from a database, sorts it, then displays the first N items. This balloons the memory consumption of the Apache process to an unacceptable level.
An obvious question, perhaps, but have you ruled out getting the database to do the work for you? An ORDER BY with a LIMIT clause would be a no-brainer if the column you're interested in is indexed; I'm guessing it isn't. Even so, having the database do the sort may be faster than scanning the entire table, pulling the data across a process boundary a row at a time, and then filtering it in the Apache process.
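To make the point concrete, here's a minimal sketch using Python's stdlib sqlite3 in place of your real database and driver; the table and column names (`items`, `score`) are hypothetical stand-ins:

```python
import sqlite3

# In-memory table standing in for the real dataset; names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER, score REAL)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [(i, (i * 37) % 101) for i in range(1000)])

N = 5
# Push both the sort and the truncation to the database: only N rows
# ever cross the process boundary, so the client process stays small.
rows = conn.execute(
    "SELECT id, score FROM items ORDER BY score LIMIT ?", (N,)
).fetchall()
print(rows)
```

With an index on the sort column, the database can satisfy this by walking the index and stopping after N rows, never materializing the full sorted set on either side.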
Assuming you have a fairly large table, here's an ugly heuristic you might try: probe the first 2*M rows of the table, sort them the same way, and note the value at position M. Then run a query that does the sort on the database side, adding a WHERE clause that rejects any row whose target column sorts after that M'th value; the probe guarantees at least M rows survive the filter. You can tune this by trying 3*M rows, 4*M rows, etc. The idea is to shrink the set that the database needs to sort.
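The two-phase heuristic above can be sketched like this, again with sqlite3 and hypothetical table/column names, assuming an ascending sort (so "sorts after" means "greater than"):

```python
import sqlite3

# Hypothetical table; the probe factor (2) and M are the tuning knobs.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER, score REAL)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [(i, (i * 37) % 101) for i in range(10000)])

M = 10
# Phase 1: probe the first 2*M rows (no ORDER BY, so "first" is whatever
# the scan returns), sort them client-side, and take the value at
# position M as a cutoff.
probe = [s for (s,) in conn.execute("SELECT score FROM items LIMIT ?", (2 * M,))]
cutoff = sorted(probe)[M - 1]

# Phase 2: let the database sort only the rows at or below the cutoff.
# The probe guarantees at least M such rows exist.
rows = conn.execute(
    "SELECT id, score FROM items WHERE score <= ? ORDER BY score LIMIT ?",
    (cutoff, M),
).fetchall()
print(len(rows))
```

The win depends on how selective the cutoff is: if the probed rows happen to be unrepresentative (say, the table is roughly sorted already, in the wrong direction), the WHERE clause may reject almost nothing, which is why the probe size is worth tuning.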