As Herkum pointed out above, you are making this too complicated. You said yourself:
What I want to do is check if a VALUE in Ahash is also in Bhash (or a line in File1 is also in File2).
So just using whole lines from each file as the hash keys (like Herkum does) is the thing to do. In case the same string can occur multiple times in one file -- and in case it's important to keep track of how many times it occurs -- here's a simple variation on his approach to handle that:
my %strings;
while (<FILE1>) {
    chomp;
    $strings{$_} .= '1';  # if the same line appears three times, the value is "111"
}
while (<FILE2>) {
    chomp;
    $strings{$_} .= '2';  # same idea, but appending "2" instead of "1"
}
# get hash keys (lines) that occur in both files:
my @common = grep { $strings{$_} =~ /12/ } sort keys %strings;
# report findings:
for my $key ( @common ) {
    my ( $n1, $n2 ) = ( $strings{$key} =~ /(1+)(2+)/ );
    printf("%s found %d times in file1, %d times in file2\n",
        $key, length($n1), length($n2));
}
# you can also pick out strings unique to file1 (/1$/)
# and/or strings unique to file2 (/^2/), along with their
# frequency of occurrence. This also scales fairly well to
# handling three or more files.
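To make the "unique to one file" idea concrete, here is a small self-contained sketch using made-up sample data in place of the two filehandles. It relies on the fact that all the '1's are appended before any '2's, so a value ending in '1' means the line never appeared in file2, and a value starting with '2' means it never appeared in file1:

```perl
use strict;
use warnings;

# Hypothetical sample data standing in for the lines of the two files.
my %strings;
$strings{$_} .= '1' for qw(apple banana apple cherry);  # "file1" lines
$strings{$_} .= '2' for qw(banana date date);           # "file2" lines

# Lines unique to file1: value ends in '1' (no '2' was ever appended).
my @only1 = grep { $strings{$_} =~ /1$/ } sort keys %strings;

# Lines unique to file2: value starts with '2' (no '1' came first).
my @only2 = grep { $strings{$_} =~ /^2/ } sort keys %strings;

print "only in file1: @only1\n";  # apple cherry
print "only in file2: @only2\n";  # date
```

The frequency of each unique line is still available as the length of its hash value, exactly as in the report loop above.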