http://qs321.pair.com?node_id=50537


in reply to Re: Re: Re: Re: Slurp a file
in thread Slurp a file

Yes, they were :) But I'm not sure I like the results. Is there a problem with this benchmarking code?

#!/usr/bin/perl -w
use strict;
use Benchmark;

my $file = $0;
open IN, $file or die "$file: $!\n";

sub joinit {
    my $content = join '', <IN>;
}

sub dollarslash {
    my $content = do { local $/; <IN> };
}

timethese(100_000, {join => \&joinit, slash => \&dollarslash});

because it implies that the join is faster...

Benchmark: timing 100000 iterations of join, slash...
      join:  2 wallclock secs ( 1.52 usr + 0.29 sys =  1.81 CPU)
     slash:  4 wallclock secs ( 2.99 usr + 0.35 sys =  3.34 CPU)
--
<http://www.dave.org.uk>

"Perl makes the fun jobs fun
and the boring jobs bearable" - me

Replies are listed 'Best First'.
(tye)Re2: Slurp a file
by tye (Sage) on Jan 08, 2001 at 22:13 UTC

    Yes, you forgot to seek, so most of the iterations don't read anything and you are only measuring overhead. (:
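
    The effect is easy to see outside Benchmark: once a filehandle has been read to EOF, further reads return nothing until you seek back to the start. A minimal sketch (reading the script itself, as the benchmark code does):

```perl
#!/usr/bin/perl -w
use strict;

open IN, $0 or die "$0: $!\n";

my $first  = join '', <IN>;   # reads the whole file
my $second = join '', <IN>;   # filehandle is at EOF: reads nothing

print "first read:  ", length($first), " bytes\n";
print "second read: ", length($second), " bytes\n";

seek(IN, 0, 0);               # rewind to the beginning of the file
my $third = join '', <IN>;    # now the whole file is read again
print "third read:  ", length($third), " bytes\n";
```

    So every iteration after the first in the original benchmark was comparing two ways of reading an empty stream.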

            - tye (but my friends call me "Tye")

      Thanks. I knew I was doing something wrong. Here's another attempt.

      #!/usr/bin/perl -w
      use strict;
      use Benchmark;

      my $file = $0;
      open IN, $file or die "$file: $!\n";

      sub joinit {
          seek(IN, 0, 0);
          my $content = join '', <IN>;
      }

      sub dollarslash {
          seek(IN, 0, 0);
          my $content = do { local $/; <IN> };
      }

      timethese(100_000, {join => \&joinit, slash => \&dollarslash});

      Which gives these (much more believable) results:

      Benchmark: timing 100000 iterations of join, slash...
            join: 12 wallclock secs ( 9.98 usr + 1.33 sys = 11.31 CPU)
           slash:  6 wallclock secs ( 3.79 usr + 1.53 sys =  5.32 CPU)
      --
      <http://www.dave.org.uk>

      "Perl makes the fun jobs fun
      and the boring jobs bearable" - me
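
      For completeness, a sketch of the winning `local $/` idiom as a reusable sub, using a lexical filehandle and three-argument open (modern touches not in the benchmark above; the `slurp` name is just an example):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Slurp a whole file into a scalar by undefining the
# input record separator, so one <$fh> reads everything.
sub slurp {
    my ($file) = @_;
    open my $fh, '<', $file or die "$file: $!\n";
    local $/;                 # undef $/ for the rest of this sub
    return scalar <$fh>;      # a single read grabs the whole file
}

my $content = slurp($0);      # slurp this script itself
print length($content), " bytes\n";
```

      Because `$/` is localized inside the sub, the global record separator is restored automatically when the sub returns.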

        Only twice as long.... ;-)

        <duck>

        <run>

        .....

        I see.
        Hmmm... time to update that one node, I guess.

        Jeroen
        I was dreaming of irritating hickups of the local gateway