If you load the current file's 50,000 records into a hash keyed by the first field, it will consume roughly 50 MB, provided you only extract the key and store the entire record, unsplit, as the value:
my %current;
open CURRENT, '<', $currentFile or die $!;
while ( <CURRENT> ) {
    chomp;
    ## Capture everything up to the first '|' as the key;
    ## store the whole (unsplit) record as the value.
    m[^ ( [^|]+ ) \| ]x and $current{ $1 } = $_;
}
close CURRENT;
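If the regex feels opaque, split with a limit of 2 is an equivalent way to peel off the key: it stops scanning at the first separator and leaves the rest of the record in one piece (the sample record here is made up):

```perl
use strict;
use warnings;

my $record = "ABC123|field2|field3\n";    ## made-up sample record

## A limit of 2 means split stops after the first '|',
## so only the key is separated out.
my ( $key ) = split /\|/, $record, 2;
print "$key\n";    ## prints "ABC123"
```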
If you pre-split each record and store the fields as an array, the memory consumption goes way up, so don't do that :)
Then you can process the master file one line at a time, extracting the key and looking it up in the hash. When a matching key is found, you split both records into their fields, compare the required six, and take the appropriate action according to the result.
Altogether it might look something like the following pseudo-code. Note: I didn't really understand your description of what you need to do when the records do or don't match, so I took your "file X" and "file Y" literally for this example code:
#! perl -slw
use strict;

my $currentFile = 'current.dat';
my $masterFile  = 'master.dat';
my $matched     = 'matched.dat';
my $nomatch     = 'nomatch.dat';

## Load the current file into a hash keyed by the first field.
my %current;
open CURRENT, '<', $currentFile or die $!;
while ( <CURRENT> ) {
    chomp;    ## -l restores the newline on output
    m[^ ( [^|]+ ) \| ]x and $current{ $1 } = $_;
}
close CURRENT;

open X, '>', $matched or die $!;
open Y, '>', $nomatch or die $!;

open MASTER, '<', $masterFile or die $!;
while ( <MASTER> ) {
    chomp;
    ## Extract the key and see whether the current file has it.
    if ( m[^ ( [^|]+ ) \| ]x and exists $current{ $1 } ) {
        my $key     = $1;
        my @master  = split /\|/, $_;
        my @current = split /\|/, $current{ $key };

        ## grep in scalar context counts how many of the
        ## six required fields are equal.
        if ( 6 == grep { $current[ $_ ] eq $master[ $_ ] } 7, 14, 21, 30, 50, 119 ) {
            print X;    ## matched records
        }
        else {
            print Y;    ## non-matched records
        }
    }
}
close MASTER;
close X;
close Y;
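The six-field comparison above leans on grep returning a count of matches in scalar context. A minimal standalone sketch of that idiom, using toy records and field indices rather than your real layout:

```perl
use strict;
use warnings;

## Two pipe-delimited records differing in exactly one field.
my @a = split /\|/, 'x|1|2|3|y';
my @b = split /\|/, 'x|1|9|3|y';

## In scalar context, grep returns how many of the chosen
## indices hold equal values in both records.
my @indices = ( 1, 2, 3 );
my $equal = grep { $a[ $_ ] eq $b[ $_ ] } @indices;
print "$equal of ", scalar @indices, " fields match\n";    ## prints "2 of 3 fields match"
```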
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.