Excellent points! Several people have pointed out egregious
logic errors in the initial code. Some of them are not
errors in my case, because I know the specifics of the files I
am using (no blank lines, no trailing '\n' at the end of the file).
I really like blogan's comment. Does <$fh> hit the disk each
time, or does it read from a cached block? Does anyone know?
I guess my initial point was flawed for the general case, but
I can reformulate it into a better, stronger statement:
If you know certain aspects of the files you are reading
(e.g. average line size, whether there are blank lines in the file, etc.),
you could implement a bare-bones, lightning-fast read method
that outperforms the traditional <$fh>. But for a basic, system-independent
file reader, <$fh> is a strong contender.
Anyone agree?
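To make the claim concrete, here is a minimal sketch of what such a bare-bones reader might look like, using sysread with a fixed chunk size instead of <$fh>. This is my own illustrative code, not from the thread: the name read_lines_sysread and the 64 KiB chunk size are arbitrary choices, and it assumes newline-separated records.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Illustrative sketch: read a file in fixed-size chunks with sysread,
# splitting lines manually. Assumes "\n"-separated records; the 64 KiB
# chunk size is a tunable, arbitrary choice.
sub read_lines_sysread {
    my ($path) = @_;
    open my $fh, '<', $path or die "open $path: $!";
    binmode $fh;
    my ($buf, $tail, @lines) = ('', '');
    while (sysread($fh, $buf, 65536)) {
        $tail .= $buf;
        # Split on newlines, keeping any partial trailing line
        # in $tail for the next chunk (-1 preserves trailing fields).
        my @parts = split /\n/, $tail, -1;
        $tail = pop @parts;
        push @lines, @parts;
    }
    # The file may lack a final newline.
    push @lines, $tail if length $tail;
    close $fh;
    return \@lines;
}
```

Whether this actually beats <$fh> depends on the perlio buffering layer and your data, so benchmark it on your own files.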
Update: fixed a typo and a factual error; I mis-read my own benchmark.
As (was: if) you know your files are not too big to fit in memory and you really need the speed, then add this to your benchmark. It beats your code by 60% (was: 400%). Standard Perl.
sub sub3 {
    open FILE, 'yourfile' or die $!;
    binmode FILE;
    # Slurp the whole file in one read, then split it into lines.
    my @lines = split $/, do { local $/; <FILE> };
    close FILE or warn $!;
}
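For reference, here is one way the comparison might be set up with the standard Benchmark module. This is a hedged sketch, not the poster's actual benchmark: 'yourfile' is a placeholder path, and the subs return line counts only so the two approaches can be sanity-checked against each other.

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Placeholder file path; point this at your own test data.
my $file = 'yourfile';

# Traditional line-at-a-time read via <$fh>.
sub by_line {
    open my $fh, '<', $file or die $!;
    my @lines = <$fh>;
    close $fh;
    return scalar @lines;
}

# Slurp-and-split, as in sub3 above.
sub slurp_split {
    open my $fh, '<', $file or die $!;
    binmode $fh;
    my @lines = split /\n/, do { local $/; <$fh> };
    close $fh;
    return scalar @lines;
}

# Run each sub for about one CPU second and print a comparison table.
cmpthese(-1, { by_line => \&by_line, slurp_split => \&slurp_split })
    if -e $file;
```

The relative numbers will vary with line length and file size, which is exactly why benchmarking on your own data matters.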
Cor! Like yer ring! ... HALO dammit! ... 'Ave it yer way! Hal-lo, Mister la-de-da. ... Like yer ring!