Re: Compare two large files and extract a matching row...by mrguy123 (Hermit) |
on Apr 17, 2012 at 12:57 UTC ( [id://965509]=note: print w/replies, xml ) | Need Help?? |
The problem here is that storing too much data in an array will give you a memory problem very fast, especially for large files.

Placing the data in a hash is a great solution (as mentioned above), but for very large files even that might be a problem. What can help you out here is that Perl is amazingly fast at reading files: I've seen it go through more than 40 GB of data in minutes. This means that if your hash is too big, you can split File1 into smaller hashes and then go over File2 a few times, and it still won't take very long.

Hope this helps,
Mr Guy
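A minimal sketch of the chunked, multi-pass idea described above. Assumptions are mine, not the poster's: the join key is each line's first whitespace-separated field, File1's keys are distinct, and the matching rows of File2 are what you want back.

```perl
use strict;
use warnings;

# Build a hash from File1 one chunk at a time; for each chunk, make one
# full pass over File2 and collect the rows whose key is in the chunk.
# $chunk_size caps how many keys are held in memory at once.
sub match_rows {
    my ($file1, $file2, $chunk_size) = @_;
    my @matches;
    open my $f1, '<', $file1 or die "$file1: $!";
    while (1) {
        my %seen;    # keys from the current chunk of File1
        while (keys(%seen) < $chunk_size) {
            my $line = <$f1>;
            last unless defined $line;
            my ($key) = split ' ', $line;
            $seen{$key} = 1 if defined $key;
        }
        last unless %seen;    # File1 exhausted

        # One full pass over File2 for this chunk
        open my $f2, '<', $file2 or die "$file2: $!";
        while (my $line = <$f2>) {
            my ($key) = split ' ', $line;
            push @matches, $line if defined $key and $seen{$key};
        }
        close $f2;
    }
    close $f1;
    return @matches;
}
```

The trade-off is I/O for memory: with a chunk size of N keys, File2 is re-read ceil(keys-in-File1 / N) times, which the post suggests is cheap given how fast Perl scans files. If File1's keys can repeat across chunks, de-duplicate the output or the keys first.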
In Section: Seekers of Perl Wisdom