http://qs321.pair.com?node_id=1000796


in reply to Processing ~1 Trillion records

First of all, I would look at tuning the database and pushing part of the work into the database itself. Databases are usually good at aggregation, for example, but depending on the size of your key space, aggregation or sorting may be prohibitively expensive on the database server unless the indices needed for it already exist.
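As a rough sketch of what pushing the aggregation into the database could look like (the DSN, the table name "events" and the columns "cust_id"/"amount" are made up here; substitute your own schema):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # Placeholder connection - use whatever DSN/user/password your
    # database actually needs.
    my $dbh = DBI->connect('dbi:SQLite:dbname=big.db', '', '',
                           { RaiseError => 1 });

    # Let the database do the grouping; only one row per group comes back
    # instead of every raw record.
    my $sth = $dbh->prepare(<<'SQL');
        SELECT cust_id,
               COUNT(*)    AS cnt,
               SUM(amount) AS total,
               MAX(amount) AS biggest
          FROM events
      GROUP BY cust_id
SQL
    $sth->execute;

    while (my ($id, $cnt, $total, $max) = $sth->fetchrow_array) {
        print "$id: $cnt rows, sum $total, max $max\n";
    }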

If working with the database is not a good option, I would write the query results to disk into one or more files and simply launch the processing in parallel, one job per file. This approach relies on all your aggregators being symmetric. For example, sum(), count() and max() are symmetric aggregators, because it doesn't matter in which order you visit the rows to compute them.
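A minimal sketch of that scheme, assuming the query results have already been dumped into files named chunk_*.txt with one numeric value per line; each child computes the same partial aggregates, and because they are symmetric, the parent can merge the partial results in any order:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my @chunks = glob 'chunk_*.txt';   # one numeric value per line (assumption)

    my @pids;
    for my $chunk (@chunks) {
        my $pid = fork;
        die "fork failed: $!" unless defined $pid;
        if ($pid == 0) {                       # child: aggregate one chunk
            my ($count, $sum, $max) = (0, 0, undef);
            open my $fh, '<', $chunk or die "$chunk: $!";
            while (my $line = <$fh>) {
                chomp $line;
                $count++;
                $sum += $line;
                $max = $line if !defined $max or $line > $max;
            }
            open my $out, '>', "$chunk.partial" or die "$chunk.partial: $!";
            print $out "$count $sum $max\n";
            exit 0;
        }
        push @pids, $pid;
    }
    waitpid $_, 0 for @pids;

    # Merge the partial results; order does not matter for sum/count/max.
    my ($count, $sum, $max) = (0, 0, undef);
    for my $chunk (@chunks) {
        open my $fh, '<', "$chunk.partial" or die "$chunk.partial: $!";
        my $line = <$fh>;
        my ($c, $s, $m) = split ' ', $line;
        $count += $c;
        $sum   += $s;
        $max    = $m if !defined $max or $m > $max;
    }
    print "count=$count sum=$sum max=$max\n";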

If you need more complicated aggregators, like a TOP 10 or a HAVING filter, things will require much more thought. The Google Sawzall and MapReduce projects have published papers on symmetric aggregators for more than the trivial cases.
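The reason a HAVING-style filter is harder is that it can only be applied after the partial results have been merged; a key may only cross the threshold once the contributions from all chunks are added up. A sketch, assuming each worker dumped per-key partial sums into chunk_*.sums files as "key sum" lines (the threshold of 1000 is made up):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my %total;
    for my $partial (glob 'chunk_*.sums') {
        open my $fh, '<', $partial or die "$partial: $!";
        while (<$fh>) {
            my ($key, $sum) = split ' ';
            $total{$key} += $sum;           # combine first ...
        }
    }
    my @keep = grep { $total{$_} > 1000 } keys %total;   # ... filter last
    print "$_ $total{$_}\n" for @keep;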

I've used small SQLite databases to produce and store the intermediate results, and then ATTACHed those databases to create the final results from them. In my experience, SQLite is not suitable for holding (and JOINing) data once the file size goes above 2GB, but that experience was with SQLite 2; changes to the Btree backend may have raised that limit since.
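Roughly like this, assuming each worker wrote its partial aggregates into its own SQLite file containing a table partial(grp_key, cnt, total); note that SQLite only allows a limited number of simultaneously attached databases (10 by default):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('dbi:SQLite:dbname=result.db', '', '',
                           { RaiseError => 1 });

    # Attach each per-chunk database under its own alias.
    my @parts = glob 'chunk_*.db';
    for my $i (0 .. $#parts) {
        $dbh->do("ATTACH DATABASE '$parts[$i]' AS p$i");
    }

    # Combine the attached partial tables into the final aggregate.
    my $union = join "\nUNION ALL\n",
        map { "SELECT grp_key, cnt, total FROM p$_.partial" } 0 .. $#parts;

    $dbh->do(<<"SQL");
        CREATE TABLE final AS
        SELECT grp_key, SUM(cnt) AS cnt, SUM(total) AS total
          FROM ( $union )
      GROUP BY grp_key
SQL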

Without knowing how the data is fetched from the database and how the restructuring is to be done, it's hard to suggest a proper optimization. Just be warned that, for example, some tied hashes do not like holding more than 4GB of keys.