http://qs321.pair.com?node_id=400712


in reply to Efficient search through a huge dataset

If the files can be stored in sorted order on the fields you want to compare (or you can maintain an index, such as a B-tree, that lets you access them in sorted order quickly, or you don't mind the overhead of sorting both files before the comparison), then you can step through the two of them in lock-step, essentially like the merge step of the mergesort algorithm. Pseudo-Perl code (assuming the entire line is the key you want to match):

open FILE1,"<file1"; open FILE2,"<file2"; my $key1=<FILE1>; my $key2=<FILE2>; while((!eof(FILE1)&&(!eof(FILE2))){ if($key1 gt $key2){ # do something when you find a key in FILE2 and not in FILE1 # read a line from FILE2 $key2=<FILE2> }elsif($key1 lt $key2){ # do something when you find a key in FILE1 and not in FILE2 # read a line from FILE1 $key1=<FILE1>; }else{ #found a match, read a line from FILE1 and FILE2 #this behavior may vary depending on how you want to #handle multiple matches, i.e. a given line is in both #files more than once $key1=<FILE1>; $key2=<FILE2>; } } close FILE1; close FILE2;

That way the check for common records costs only a single sequential pass over each file, and you never hold more than one record from each file in memory at a time.
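If the files are not already sorted, a minimal sketch of a pre-sort step (the file names and the reliance on an external sort(1) utility are assumptions, not part of the original post) is to shell out to the system sort, which can handle files far larger than memory, and then run the lock-step loop above on the sorted copies:

#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical pre-sort step: sort each input file on disk with the
# external sort(1) utility, writing file1.sorted and file2.sorted.
# The lock-step comparison then reads the *.sorted files instead.
for my $file ("file1", "file2") {
    system("sort", "-o", "$file.sorted", $file) == 0
        or die "sort failed for $file: exit status $?";
}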
