PerlMonks |
Re: Efficient search through a huge dataset
by lhoward (Vicar)
on Oct 20, 2004 at 00:16 UTC ( [id://400712] )
If the files can be stored in sorted order (or you can maintain an index, a la a B-tree, that lets you access them in sorted order quickly, or you don't mind paying the cost of sorting both files before the comparison) based on the fields you want to compare, then you can step through the two files in lock-step fashion, basically like the merge step of the mergesort algorithm. The pseudo-Perl code for this assumed that the entire line is the key you want to match.
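The original code block from the post is not preserved here; a minimal sketch of that lock-step walk, assuming both files are already sorted in the same collation and using hypothetical file names, could look like this:

```perl
use strict;
use warnings;

# Walk two sorted files in lock step (like the merge step of mergesort)
# and return a reference to the lines common to both.  Assumes the
# entire line is the comparison key.
sub common_lines {
    my ($path_a, $path_b) = @_;
    open my $fa, '<', $path_a or die "$path_a: $!";
    open my $fb, '<', $path_b or die "$path_b: $!";

    my @common;
    my $la = <$fa>;
    my $lb = <$fb>;
    while (defined $la and defined $lb) {
        if    ($la lt $lb) { $la = <$fa> }   # advance the file holding the smaller key
        elsif ($la gt $lb) { $lb = <$fb> }
        else {                               # keys equal: a common record
            push @common, $la;
            $la = <$fa>;
            $lb = <$fb>;
        }
    }
    return \@common;
}
```

For example, `common_lines('a.txt', 'b.txt')` on sorted files returns the matching lines in order, having read each file exactly once.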
That way, checking for common records costs only a single pass over each file, and you never have to hold more than one record from each file in memory at a time.
In Section: Seekers of Perl Wisdom