PerlMonks
Re: Removing duplicates in large files
by lestrrat (Deacon)
on Jan 30, 2004 at 20:29 UTC ( [id://325381] )
I suppose that if you must use Perl for this, you could use DB_File (or one of the other *DB_File modules) and keep writing the email addresses as keys into the tied hash. Since the file is backed by a hash, duplicates are weeded out automatically. Some code fragments...
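A minimal sketch of the idea, assuming the addresses arrive one per line on stdin and that the database filename `emails.db` is just a placeholder:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Fcntl;      # for O_RDWR, O_CREAT
use DB_File;    # ties a hash to an on-disk Berkeley DB file

# Tie %seen to a DB file; hash keys are unique, so storing the
# same address twice is a harmless overwrite, not a duplicate.
my %seen;
tie %seen, 'DB_File', 'emails.db', O_RDWR | O_CREAT, 0644, $DB_HASH
    or die "Cannot tie emails.db: $!";

while ( my $addr = <> ) {
    chomp $addr;
    $seen{$addr} = 1;
}

untie %seen;
```

Because the hash lives on disk rather than in memory, this keeps working even when the input is far larger than RAM, at the cost of a disk seek per lookup.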
Then you can open the DB that DB_File created and dump its keys to a file, or whatever you need. That said, if you have that much data I would use SQL ;)
In Section: Seekers of Perl Wisdom