PerlMonks
Re^6: Split tab-separated file into separate files, based on column name (open on demand) (updated)

by haukex (Archbishop)
on Aug 28, 2020 at 14:00 UTC ( id://11121147 )


in reply to Re^5: Split tab-separated file into separate files, based on column name (open on demand)
in thread Split tab-separated file into separate files, based on column name

... do the work in less time than Perl needs for startup/shutdown overhead. Perl is more flexible and powerful, but that power does come at a cost ...

If that really was the point you were trying to make here, then it probably would have been better to benchmark and show a solution that's actually faster than Perl. On a longer input file (the OP never specified the file length, but the fact that the number of columns grew from 3 to 20 is a hint), this pure Perl solution I whipped up is twice as fast as the awk code you showed:

use warnings;
use strict;
my @cols = split /\t/, <>;
chomp($cols[-1]);
my @fh = map { open my $fh, '>', $_ or die $!; $fh } @cols;
while ( my $line = <> ) {
    chomp($line);
    my @row = split /\t/, $line;
    print {$fh[$_]} $row[$_], "\n" for 0..$#row;
}
The benchmark script:

#!/usr/bin/env perl
use warnings;
use strict;
use FindBin;
use File::Spec::Functions qw/catfile/;
use File::Temp qw/tempfile tempdir/;
use IPC::System::Simple qw/systemx/;
my $COLS = 20;
my $ROWS = 1_000_000;
my $AWKSCRIPT  = catfile($FindBin::Bin,'11121118.awk');
my $PERLSCRIPT = catfile($FindBin::Bin,'example.pl');
my $expdir = tempdir(CLEANUP=>1);
my ($tmpinfh, $infn) = tempfile(UNLINK=>1);
{
    warn "Generating data...\n";
    chdir $expdir or die $!;
    my $c = 'a';
    my @cols = map { $c++ } 1..$COLS;
    print $tmpinfh join("\t", @cols), "\n";
    my %fh;
    open $fh{$_}, '>', $_ or die $! for @cols;
    for ( 1..$ROWS ) {
        my @row = map { int rand 1000 } 1..$COLS;
        print $tmpinfh join("\t", @row), "\n";
        print {$fh{$cols[$_]}} $row[$_], "\n" for 0..$COLS-1;
    }
    close $fh{$_} for @cols;
    close $tmpinfh;
}
{
    warn "Running awk...\n";
    my $workdir = tempdir(CLEANUP=>1);
    chdir $workdir or die $!;
    systemx('/usr/bin/time', 'awk', '-f', $AWKSCRIPT, $infn);
    systemx('diff','-rq',$expdir,$workdir);
}
{
    warn "Running perl...\n";
    my $workdir = tempdir(CLEANUP=>1);
    chdir $workdir or die $!;
    systemx('/usr/bin/time','perl',$PERLSCRIPT,$infn);
    systemx('diff','-rq',$expdir,$workdir);
}
I firmly believe that every Perl programmer should learn Awk because learning Awk will make you a better Perl programmer.

Sure, in general, the more programming languages a programmer is exposed to, the better they (usually) become. And yet, there are other situations:

Some time ago I suggested that another questioner either use sed in his shell script ...

And I once showed someone who was writing an installer shell script how to use a one-liner to do a search-and-replace on a configuration variable. And what happened? As the installation script grew, the one-liner just got called over and over again for different variables. While you, I, and the OP may know there are better solutions (as you said yourself, "rewrite the entire script in Perl"), these posts are public and may be read by people who don't know better; and particularly in comparison to awk, I disagree with an unqualified "Sometimes Perl is not the best tool for the job."
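For illustration, the kind of in-place search-and-replace one-liner being described might look like this (the file name and variable are hypothetical, not from the thread):

```shell
# hypothetical config file
printf 'HOST=old\nPORT=80\n' > app.conf
# in-place search and replace of a single configuration variable
perl -pi -e 's/^HOST=.*/HOST=new/' app.conf
cat app.conf
```

The trouble described above is that, as the installer grows, lines like that accumulate, one per variable.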

Update - I also wanted to mention: In environments where there are several programmers on a team, most of whom are only focused on one language, having a product consist of code written in several different languages is more likely to cause maintenance problems. These are the reasons I said "throwing yet another new language into the mix" isn't necessarily a good thing. (Also, just in case there's any confusion with non-native speakers, the definition of "unqualified" I was using is "not modified or restricted by reservations", as in an "unqualified statement", and not "not having requisite qualifications", as in an "unqualified person".)


Replies are listed 'Best First'.
Re^7: Split tab-separated file into separate files, based on column name (open on demand)
by LanX (Saint) on Aug 28, 2020 at 15:46 UTC
    > "Sometimes Perl is not the best tool for the job."

    OK, this is a "Jein" (German: yes-and-no) situation.

    Perl is certainly often not the best tool.

    But when it comes to sed and awk, that's hard to believe, because Larry meticulously copied all their features.

    I bet I could easily translate the given awk script one-to-one into Perl, by encapsulating the open-on-demand logic in a short sub.
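As a minimal sketch (hypothetical names, not code from the thread), such an open-on-demand sub could look like this:

```perl
use warnings;
use strict;

# Hypothetical open-on-demand helper: returns a cached write handle,
# opening the file only the first time that name is requested.
my %fh_cache;
sub fh_for {
    my ($name) = @_;
    $fh_cache{$name} //= do {
        open my $fh, '>', $name or die "open $name: $!";
        $fh;
    };
    return $fh_cache{$name};
}

# Example use, mirroring the awk column-splitting idea:
my @header = ('a', 'b');
my @row    = (1, 2);
print { fh_for($header[$_]) } $row[$_], "\n" for 0 .. $#row;
close $_ for values %fh_cache;
```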

    Just look at perlvar, perlrun and perltrap for all the details given concerning awk. Now, regarding the startup argument for short data, where overhead counts ...

    ... startup isn't the same issue anymore that it was 25 years ago.

    To make it matter, we'd need to start a script over and over again. The realistic approach in that case is to write a persistent service, which doesn't even need to start up.

    We are not talking about heavy apps like perltidy, which may need a second to initialize.

    Cheers Rolf
    (addicted to the Perl Programming Language :)
    Wikisyntax for the Monastery

      Perl is certainly often not the best tool. But when it comes to sed and awk, that's hard to believe

      Yes, this was my point.

      I bet I could easily translate the given awk script one-to-one into Perl, by encapsulating the open-on-demand logic in a short sub.

      You don't need to; Larry did that already :-) a2p was part of the Perl core until 5.20; now it lives on CPAN.

      $ a2p 11121118.awk
      #!/usr/bin/perl
      eval 'exec /usr/bin/perl -S $0 ${1+"$@"}'
          if $running_under_some_shell;
                  # this emulates #! processing on NIH machines.
                  # (remove #! line above if indigestible)

      eval '$'.$1.'$2;' while $ARGV[0] =~ /^([A-Za-z_0-9]+=)(.*)/ && shift;
                  # process any FOO=bar switches

      $FS = ' ';        # set field separator
      $, = ' ';         # set output field separator
      $\ = "\n";        # set output record separator

      $FS = "\t";

      line: while (<>) {
          chomp;    # strip record separator
          @Fld = split($FS, $_, -1);
          if (($.-$FNRbase) == 1) {
              @Fields = split($FS, '', -1);   # clear fields array
              for ($i = 1; $i <= ($#Fld+1); $i++) {
                  $Fields[($i)-1] = $Fld[$i];
              }
              next line;
          }
          for ($i = 1; $i <= ($#Fld+1); $i++) {
              &Pick('>', $Fields[($i)-1]) && (print $fh $Fld[$i]);
          }
      }
      continue {
          $FNRbase = $. if eof;
      }

      sub Pick {
          local($mode,$name,$pipe) = @_;
          $fh = $name;
          open($name,$mode.$name.$pipe) unless $opened{$name}++;
      }

      Unfortunately, there's apparently a bug in the translator, and the above script needs a s/\$Fld\[\$i\K\]/-1]/g to fix it.

        Unfortunately, there's apparently a bug in the translator, and the above script needs a s/\$Fld\[\$i\K\]/-1]/g to fix it.
        Patches are welcome!
        I know, and I didn't mention a2p on purpose ;p

        As you can see, it produces Perl 4 code.

        I'd implement Pick() differently, and the whole script is twice as long as it needs to be.

        Cheers Rolf
        (addicted to the Perl Programming Language :)
        Wikisyntax for the Monastery

      Larry meticulously copied all features

      Not quite: (quoting perlvar) "Remember: the value of $/ is a string, not a regex. awk has to be better for something."

      There is also a broader (information-theoretic?) issue where Awk can, in some cases, be more concise because it is less powerful than Perl.

      I could easily translate the given awk script one-to-one into Perl, by encapsulating the open-on-demand logic in a short sub.

      You probably could, but the awk script had one other feature that might take some extra code in Perl: awk's FNR is reset at the beginning of each input file, so that script will correctly process multiple input files given on the command line, extracting the header from each file.

      On the other hand, it also accumulates open files, so if you have enough distinct columns across a multi-file input set, you will run out of file descriptors. :-)
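A hypothetical Perl fragment (my sketch, not code from the thread) for the per-file header handling that awk's FNR gives for free:

```perl
use warnings;
use strict;

# Re-read the header at the start of each named file, mimicking
# awk's FNR == 1 test across multiple input files.
sub split_columns {
    my @files = @_;
    my @out;
    for my $file (@files) {
        open my $in, '<', $file or die "open $file: $!";
        chomp( my @cols = split /\t/, scalar <$in> );  # this file's header
        while ( my $line = <$in> ) {
            chomp $line;
            my @row = split /\t/, $line;
            push @out, "$cols[$_]=$row[$_]" for 0 .. $#row;
        }
        close $in;
    }
    return @out;
}

# Demo: two files, each with its own header line.
open my $fh, '>', 'f1.tsv' or die $!; print $fh "a\tb\n1\t2\n"; close $fh;
open    $fh, '>', 'f2.tsv' or die $!; print $fh "x\ty\n3\t4\n"; close $fh;
my @got = split_columns('f1.tsv', 'f2.tsv');
# @got holds "a=1", "b=2", "x=3", "y=4"
```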

      To make it matter, we'd need to start a script over and over again.

      In the case of a one-liner simple enough to be replaced using sed in a shell script, we were talking about running it over and over again. The better answer is usually to rewrite the entire script in Perl, but sometimes a shell script is the right tool for the job, if the job consists almost entirely of running external programs with very little "local" data processing.
