Due to your manual $cnt this is surprisingly annoying to follow. Write a loop over the indices instead: for my $i ( 0 .. $#pre ). And why do a splice+push (both linear operations) on every reorder when a bit of trivial index math will do?
my @pre  = map "../../pics/$_/", '00' .. '16';
my $offs = 0;
while ( <> ) {
    chomp;
    for my $i ( 0 .. $#pre ) {
        local $_ = $pre[ ( $i + $offs ) % @pre ] . $_;
        next if not -e;
        print;
        $offs = ( $i + $offs ) % @pre;  # remember the true index of the hit
        last;
    }
}
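To make the index math concrete, here is a tiny standalone sketch (array contents made up for illustration) of what the ( $i + $offs ) % @pre expression does: it visits the array in rotated order, starting from the offset of the previous hit, without ever moving any elements around.

```perl
#!/usr/bin/perl
use strict;
use warnings;

my @pre  = qw( A B C D );
my $offs = 2;    # pretend the last successful prefix sat at index 2

# Walking 0 .. $#pre through the modular index yields the array
# rotated so that the previous hit comes first.
my @order = map { $pre[ ( $_ + $offs ) % @pre ] } 0 .. $#pre;
print "@order\n";    # C D A B
```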
Makeshifts last the longest.
Due to your manual $cnt it is surprisingly annoying to follow. Write a loop over the indices instead: for my $i ( 0 .. $#pre ).
I partly agree with you, which is why I voted your post ++ anyway. But isn't "surprisingly annoying" a bit excessive?!? As with lots of other things relating to readability and aesthetics in Perl (and not only in Perl!), I think these are largely matters of personal taste. As far as I'm concerned, I do not like to iterate over indices: not that I've never done it, but I tend to avoid it when possible...
However, I would like to stress that the point I was focusing on was what to do rather than exactly how to do it, i.e. use some trick to make the "most probable" choice the first one to be tested. Of course this only applies to situations (like the one I had under consideration in my personal experience) in which you know a priori that this strategy is likely to be more efficient.
As a side note if you want to see a yet more awkward (but IMHO still interesting) way to do it, please see the USENET article cited in the previous post!
And why splice+push (both linear operations) frequently if you can just do a bit of trivial index math?
Of course splice() and push() are expensive, but I use them only when needed, whereas you always do your "bit of trivial index math": all in all I do not expect my code to be less efficient than yours; nay, to be fair, I expect it to be slightly more efficient. Of course we can verify this right away:
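For readers unfamiliar with the idiom being benchmarked, here is a minimal standalone sketch (array contents made up) of what push @pre, splice @pre, 0, $cnt does: it removes the first $cnt elements and appends them at the end, i.e. it rotates the array left so that the matched element lands at the front.

```perl
#!/usr/bin/perl
use strict;
use warnings;

my @pre = qw( A B C D E );
my $cnt = 2;    # the match was found at index 2

# splice removes the leading $cnt elements (A B) and returns them;
# push appends them at the tail, rotating the matched element to front.
push @pre, splice @pre, 0, $cnt;
print "@pre\n";    # C D E A B
```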
#!/usr/bin/perl
use strict;
use warnings;

use Benchmark qw/ :all :hireswallclock /;

my @ARGV0 = @ARGV;
my @pre   = map "../../pics/$_/", '00' .. '16';

cmpthese 1000, {
    blazar    => \&blazar,
    aristotle => \&aristotle,
}, 'all';
sub blazar {
    @ARGV = @ARGV0;
    while (<>) {
        chomp;
        my $cnt;
        for my $p (@pre) {
            local $_ = $p . $_;
            if (-e) {
                # print;
                push @pre, splice @pre, 0, $cnt if $cnt;
                last;
            }
            $cnt++;
        }
    }
}
sub aristotle {
    @ARGV = @ARGV0;
    my $offs = 0;
    while ( <> ) {
        chomp;
        for my $i ( 0 .. $#pre ) {
            local $_ = $pre[ ( $i + $offs ) % @pre ] . $_;
            next if not -e;
            # print;
            $offs = $i;
            last;
        }
    }
}
__END__
I ran this on perl 5.8.6 under Linux (kernel 2.6.9) with a sample input file. This gives me:
Benchmark: timing 1000 iterations of aristotle, blazar...
 aristotle: 20.2232 wallclock secs (14.14 usr  6.08 sys +  0.00 cusr  0.00 csys = 20.22 CPU) @ 49.46/s (n=1000)
    blazar: 16.3845 wallclock secs ( 9.93 usr  6.45 sys +  0.00 cusr  0.00 csys = 16.38 CPU) @ 61.05/s (n=1000)
              Rate aristotle    blazar
aristotle   49.5/s        --      -19%
blazar      61.1/s       23%        --
UPDATE: It is only honest of me to point out that I made a mistake preparing this benchmark. Please see a corrected one in a later post of mine, below...
But isn't "surprisingly annoying" a bit excessive?!?
It's what it is. It took me a moment of staring to figure out that you were doing something much simpler than the code suggested at first glance.
As far as I'm concerned I do not like to iterate over indices
Instead you iterate over elements and increment $cnt for each one. A rose by any other name… Neither do I like to, btw. I'm usually the one telling people not to. But when that's what's gotta be done, then that's what's gotta be done. (Perl6, as always, will have the solution, but alas, it's so far away yet…)
i.e. use some trick to let the "most probable" choice be the first one to be tested.
In that case I'd bubble single elements to the top. That retains a memory of the second-, third-, etc. most likely matches based on previous data. Depending on the patterns in your data, that may be more, less, or equally efficient.
while ( <> ) {
    chomp;
    for my $i ( 0 .. $#pre ) {
        local $_ = $pre[ $i ] . $_;
        next if not -e;
        print;
        @pre[ 0 .. $i ] = @pre[ $i, 0 .. $i - 1 ] if $i;
        last;
    }
}
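The slice assignment is the whole trick, so here is a minimal standalone sketch of it (array contents made up): it moves the element found at index $i to the front while sliding the elements before it one slot to the right, preserving their relative order.

```perl
#!/usr/bin/perl
use strict;
use warnings;

my @pre = qw( A B C D E );
my $i   = 3;    # the match was found at index 3

# RHS slice @pre[ 3, 0, 1, 2 ] is ( D A B C ); assigning it to
# @pre[ 0 .. 3 ] moves the hit to the front, everything else intact.
@pre[ 0 .. $i ] = @pre[ $i, 0 .. $i - 1 ];
print "@pre\n";    # D A B C E
```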
I'm too lazy to set up a test environment to benchmark this, sorry. Depending on how much the array lookup costs it might pay to maintain the counter explicitly as you did in your code, though I can't quite believe that.
Note that in your case, a chdir '../../pics/' might speed things up quite a bit if you're testing a lot of files, since stat(2) won't have to traverse the same directories over and over for each and every single file test.
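A self-contained illustration of the chdir point, using a throwaway directory tree in place of the real ../../pics/ layout (the pics/NN structure here is fabricated for the demo): after a single chdir into the common parent, every -e test uses a short relative path, sparing stat(2) the repeated traversal of the same prefix directories.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw( tempdir );

# Build a fake pics-like tree: parent dir with subdirs 00 .. 16,
# and one file living in subdir 03.
my $root = tempdir( CLEANUP => 1 );
mkdir "$root/$_" or die "mkdir: $!" for '00' .. '16';
open my $fh, '>', "$root/03/foo.jpg" or die "open: $!";
close $fh;

# One chdir up front; all subsequent -e tests are relative and cheap.
chdir $root or die "chdir failed: $!";

my @pre = map "$_/", '00' .. '16';
for my $p (@pre) {
    next if not -e $p . 'foo.jpg';
    print $p . "foo.jpg\n";    # 03/foo.jpg
    last;
}
```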
Makeshifts last the longest.
#!/usr/bin/perl -li.bak
use strict;
use warnings;

my @pre = map "../../pics/$_/", '00' .. '16';

while (<>) {
    chomp;
    my $cnt;
    for my $p (@pre) {
        local $_ = $p . $_;
        if (-e) {
            print;
            push @pre, splice @pre, 0, $cnt if $cnt;
            last;
        }
        $cnt++;
    }
}
__END__