This is even less memory efficient, but I couldn't resist turning your problem into a golfed one-liner. I'm sure someone else will squeeze a few extra characters out of it:
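A sketch of the sort of one-liner described below (the exact golfing may differ; %h and the file names file1 and file2 are placeholders):

    perl -ane 'push @{$h{$F[0]}}, $_; END { @{$h{$_}} > 1 and print @{$h{$_}} for keys %h }' file1 file2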
-a autosplits each input line into @F. -n wraps the -e code in a while (<>) { ... } loop. So as this one-liner iterates over the two (or more) files, it pushes each line onto an anonymous array held in a hash whose keys are the "objectN" identifiers (the first element of @F). After the implicit while loop (-n) finishes, the END {} block is executed. There we test each hash element to see whether its anonymous array holds more than one element; if it does, we print the array. We're taking advantage of the fact that each array element still contains the \n newline from the original file's line endings, which is why "print @array" produces one element per output line. I hope my description of this solution helps, but you can also brush up on perlrun for more details.

There are a couple of caveats with this one-liner. First, both files are slurped into a hash in their entirety. Second, the output comes out in no particular order.
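If the golfed form is hard to read, here is a longhand sketch of the same technique (the script and names like %lines_by_key are mine for illustration, not from the original):

    #!/usr/bin/perl
    # Usage: perl dupes.pl file1 file2 [more files ...]
    use strict;
    use warnings;

    my %lines_by_key;

    while ( my $line = <> ) {
        # The first whitespace-separated field plays the role of $F[0] under -a.
        my ($key) = split ' ', $line;
        next unless defined $key;               # skip blank lines
        push @{ $lines_by_key{$key} }, $line;   # lines keep their trailing \n
    }

    # The longhand equivalent of the END{} block: print a key's lines
    # only when that key was seen more than once.
    for my $key ( keys %lines_by_key ) {
        print @{ $lines_by_key{$key} } if @{ $lines_by_key{$key} } > 1;
    }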
Dave

In reply to Re: comparing two files for duplicate entries by davido