PerlMonks
Although not a Perl solution, if you're on a Unix-like platform, you
have access to the standard tool comm, which does exactly
what you want, provided that the input files are sorted. Comm can
tell you which lines are shared by the files and which are unique to
either file. For example, here is how you would find the records
that are shared by the files:
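The original code blocks were lost from this copy of the post, but the invocations it describes can be sketched as follows. The file names a.txt and b.txt are placeholders; comm's -1, -2, and -3 options suppress, respectively, the column of lines unique to the first file, the column unique to the second, and the column common to both:

```shell
# File names are hypothetical; substitute your own record files.
# comm requires sorted input, so sort each file on the fly using
# the bash shell's <(command) process-substitution syntax.

# Records shared by both files (suppress columns 1 and 2,
# leaving only the "common to both" column):
comm -12 <(sort a.txt) <(sort b.txt)

# Records unique to the first file (suppress columns 2 and 3,
# leaving only the "unique to the first file" column):
comm -23 <(sort a.txt) <(sort b.txt)
```

Note that process substitution is a bash feature; under a plain POSIX shell you would sort into temporary files first and pass those to comm.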
(If the files are already sorted, you can pass them directly to comm, without first processing them with sort. Here, I'm using the bash shell's <(command) syntax to avoid having to deal with temporary files for holding the sorted records.)

Here's how to find the records that are unique to the first file: suppress comm's second and third output columns instead, leaving only the lines that appear solely in the first file.

Most sort implementations are fast and will use external (file-based) sorting algorithms when the input is large, so you don't need to worry about input size.

Cheers,
Tom Moertel : Blog / Talks / CPAN / LectroTest / PXSL / Coffee / Movie Rating Decoder

In reply to Re: Efficient search through a huge dataset
by tmoertel