
Re: Use 2 files and print each line of each one side-by-side

by Kenosis (Priest)
on Feb 18, 2014 at 19:05 UTC

in reply to Use 2 files and print each line of each one side-by-side

Always, and without fail, include the following at the top of your Perl scripts:

use strict;
use warnings;

These pragmas will likely save you many headaches by showing you problematic areas in your script.
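For instance, strict turns an easy typo into a compile-time error instead of a silently wrong result (a minimal illustration; the variable names are invented):

```perl
use strict;
use warnings;

my $total = 42;
# print "Total: $tota1\n";   # typo: $tota1 -- under strict this aborts compilation
#                            # with: Global symbol "$tota1" requires explicit package name
print "Total: $total\n";
```

Without strict, $tota1 would simply interpolate as an empty string and the script would run anyway.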

Use the three-argument form of open. For example:

open my $USER1, '<', $userfile1 or die "Couldn't open file: $userfile1 - $!";
...
while (<$USER1>) {
    ...

Note also the use of $!. It's good that you're handling errors; it's better if you're shown exactly what produced an exception if there's an error, and $! will let you know.

You need to do the same when opening a file for writing. However, you can do just the following when opening files:

open my $OUT, '>', 'CONCATENATED_FILES';
...
open my $USER1, '<', $userfile1;

if you include the following pragma:

use autodie;
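With autodie in effect, a failed open throws an exception on its own, and you can still trap it with eval if you want to handle the failure yourself. A small sketch (the filename here is invented, chosen so the open fails):

```perl
use strict;
use warnings;
use autodie;

my $opened = eval {
    open my $FH, '<', 'no_such_file.txt';   # invented name; no manual "or die" needed
    1;                                      # reached only if the open succeeds
};
print "open failed: $@" unless $opened;     # $@ holds autodie's descriptive exception
```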

Looping through both files is a good way to achieve your desired results, and roboticus provided a solution for handling reading from both files. Since, however, you're running this script from the command line, here's another option, in case you're interested:

use strict;
use warnings;
use File::Slurp qw/read_file/;
use List::MoreUtils qw/zip/;

chomp( my @file1 = read_file $ARGV[0] );
chomp( my @file2 = read_file $ARGV[1] );

my @combined = zip @file1, @file2;

while ( my ( $line1, $line2 ) = splice @combined, 0, 2 ) {
    last unless defined $line1 and defined $line2;
    print "$line1\t$line2\n";
}

Usage: perl scriptName.pl file1 file2 > combinedFile

The script uses File::Slurp to read each file's contents into an array. Next, it uses List::MoreUtils to 'zip' or interleave the elements of the two arrays. When iterating through the interleaved pairs for printing, the script checks that both elements are defined, in case one file has more lines than the other.

The above script will work just fine with small or not-too-large files, since their entire contents are read into arrays. If, however, your files are large, stick with just iterating through each file, a line at a time.
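If you do need the streaming approach, a minimal line-at-a-time sketch could look like the following (the sub name paste_lines is invented; like the slurp version above, it stops at the end of the shorter file):

```perl
use strict;
use warnings;

# Pair up lines from two already-open filehandles, tab-separated.
# Reads one line from each per iteration, so memory use stays constant.
sub paste_lines {
    my ( $fh1, $fh2 ) = @_;
    my @out;
    while (1) {
        my $line1 = <$fh1>;
        my $line2 = <$fh2>;
        last unless defined $line1 and defined $line2;
        chomp( $line1, $line2 );
        push @out, "$line1\t$line2";
    }
    return @out;
}

# Usage: perl scriptName.pl file1 file2 > combinedFile
if ( @ARGV == 2 ) {
    open my $FH1, '<', $ARGV[0] or die "Couldn't open file: $ARGV[0] - $!";
    open my $FH2, '<', $ARGV[1] or die "Couldn't open file: $ARGV[1] - $!";
    print "$_\n" for paste_lines( $FH1, $FH2 );
}
```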

Hope this helps!
