http://qs321.pair.com?node_id=1084515

Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:

Hey monks!

I work in a Unix environment all day and have grown very efficient through use of Perl and Unix at the command prompt.

I still have not developed a good method of joining data in two separate files based on similarity or dissimilarity criteria. For example, does anyone have a good solution for implementing the following process on the command line?
use strict;
use warnings;

######################
# Read in the files
######################
open(my $fh1, '<', $ARGV[0]) || die("Could not open input file: $!");
my @File1 = <$fh1>;
close($fh1);

open(my $fh2, '<', $ARGV[1]) || die("Could not open input file: $!");
my @File2 = <$fh2>;
close($fh2);

my %hash1;

# Read the first file into a hash, with the first field as the key and
# the entire line as the value
foreach (@File1) {
    chomp;
    my @file1_elements = split(/\t/, $_);
    push(@{ $hash1{ $file1_elements[0] } }, $_);
}

foreach (@File2) {
    chomp;
    my @file2_elements = split(/\t/, $_);
    if (exists $hash1{ $file2_elements[5] }) {
        # Print the current line of file 2, then append the line(s) of
        # file 1 where file1[0] eq file2[5].  I understand that if there
        # is more than one value I will need a loop to print them all,
        # but let's leave that out just to simplify.
        print $_ . "\t" . "@{ $hash1{ $file2_elements[5] } }" . "\n";
    }
}

Now again, I'm just looking to do this kind of thing on the command line. If you can throw in the print loop in some smart way, that would be even better.

Thank you so much for your time!