C + NM, which was what your benchmark was measuring
Having read your answers and this thread -- 269035 -- maybe it's better to say that, in the case of "use_str", the benchmark was measuring
C + 2268 * ( 5 - time_for_C ) * 11 * ( M + E )
where E is a "light-weight" compilation consisting only of a string equality (eq) check? The next example shows that "proper" compilation happens just once, if I'm reading the output correctly:
use strict;
use warnings;
use feature 'say';
use re 'debug';    # print regexp debugging output to STDERR

my $str = 'foobar';

# 3 match attempts per call, 2 calls: 6 attempts in total
sub foo { map /$str/, 1 .. 3 }

foo for 1 .. 2;
Compiling REx "foobar"
Final program:
1: EXACT <foobar> (4)
4: END (0)
anchored "foobar" at 0 (checking anchored isall) minlen 6
Compiling REx "foobar"
Compiling REx "foobar"
Compiling REx "foobar"
Compiling REx "foobar"
Compiling REx "foobar"
("use_re" benchmarked concatenation of 45000 words, BTW. Similar, to above, script shows proper compilation happens only once, too.) And "time_for_C" is negligible, we can compile regexp placing dummy /$str/ at the top of the script, and benchmark result won't change. Moreover:
use strict;
use warnings;
use Benchmark qw( cmpthese timethese );
open my $words, "<", "linuxwords.txt" or die "$!";
my @words = <$words>;
chomp @words;
my @search = @words[0..10];
$" = "|";
my $re = qr/^(?:@words)$/;
my $str = "^(?:@words)\$";
my $str1 = $str;
my $r = timethese(
    -5,
    {
        use_qr   => sub { map /$re/, @search },
        use_str  => sub {
            substr $str1, rand 400_000, 1, '#';  # 'equalize' conditions with 'use_str1'
            map /$str/, @search
        },
        use_str1 => sub {
            substr $str1, rand 400_000, 1, '#';  # force proper re-compilation below
            map /$str1/, @search
        },
    }
);
cmpthese $r;
             Rate use_str1 use_str use_qr
use_str1   3.65/s       --    -99%  -100%
use_str     436/s   11853%      --   -99%
use_qr    31592/s  865014%   7138%     --
To summarize, if I may: if the pattern is "long enough", then D (the time to duplicate the regex structure) is much less than E (the time to compare patterns with a simple "eq"). For short patterns (as in the OP), D is more expensive than E (even multiple E's), so the attempt to optimize through the use of "qr" failed.
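For the short-pattern end of that claim, a quick check could look like the sketch below; the 'foobar' pattern and the tiny target list are made-up stand-ins for the OP's data, not the original benchmark:

use strict;
use warnings;
use Benchmark qw( cmpthese );

my $short_re  = qr/foobar/;
my $short_str = 'foobar';
my @targets   = qw( foo bar foobar barfoo quux );

cmpthese(
    -2,
    {
        # per match: fetch/duplicate the compiled structure behind $short_re (the D above)
        short_qr  => sub { map /$short_re/,  @targets },
        # per match: "eq" a 6-character pattern against the cached one (the E above)
        short_str => sub { map /$short_str/, @targets },
    }
);

With a pattern this short, E is only a few character comparisons, so -- per the summary above -- the qr// variant should no longer win.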
Edit. People "lucky" enough to hit 0..3 with "rand 400_000" should change the offset in "substr" to "4 + rand 400_000" (a '#' landing inside the leading "^(?:" can corrupt the pattern). :(
Edit 2. I mean, you say: if "N(M+D)" is so much faster than "C+NM", then it must be C that is so slow. But I think, no: C is negligible; rather, "N(M+D)" is faster than "C+N(M+E)" because D is faster than E. Sorry I couldn't communicate this thought without edits.