PerlMonks: Keep It Simple, Stupid

Re: Processing ~1 Trillion records
by mpeppler (Vicar) on Oct 26, 2012 at 06:43 UTC (#1001007)
I've only glanced at the various answers quickly, so maybe I'm off the mark, but: my immediate reaction to needing to process that many rows is to parallelize the process. It will put a higher load on the DB, but that's what the DB is really good at. Obviously your dataset needs to be partitionable, but I can't imagine a dataset of that size that can't be split in some way.

Michael
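A minimal sketch of the partition-and-fork idea above, assuming the table has a numeric key you can split into contiguous ranges. The key range, worker count, and the per-partition work are all placeholders; in practice each child would open its own DBI handle and run a range-bounded query.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Split [$min, $max] into $n contiguous ranges; the last range
# absorbs any remainder so the whole key space is covered.
sub partition_range {
    my ($min, $max, $n) = @_;
    my $chunk = int( ($max - $min + 1) / $n );
    my @ranges;
    for my $w (0 .. $n - 1) {
        my $lo = $min + $w * $chunk;
        my $hi = ($w == $n - 1) ? $max : $lo + $chunk - 1;
        push @ranges, [ $lo, $hi ];
    }
    return @ranges;
}

# Fork one worker per partition (4 workers over a made-up id range).
my @pids;
for my $range ( partition_range(1, 1_000_000, 4) ) {
    my ($lo, $hi) = @$range;
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        # Child: placeholder for the real per-partition work, e.g.
        #   my $dbh = DBI->connect(...);  # one connection per child
        #   my $sth = $dbh->prepare('SELECT ... WHERE id BETWEEN ? AND ?');
        #   $sth->execute($lo, $hi);
        print "worker $$ processing ids $lo..$hi\n";
        exit 0;
    }
    push @pids, $pid;    # parent: remember the child's pid
}
waitpid($_, 0) for @pids;    # parent waits for all workers to finish
```

Each child gets its own connection because DBI handles can't be shared across a fork; the DB server then works the four range scans in parallel.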
In Section: Seekers of Perl Wisdom