http://qs321.pair.com?node_id=1205979


in reply to Re^2: Performance penalty of using qr//
in thread Performance penalty of using qr//

Um ... so ... when is the sometime the optimization kicks in
What optimisation are you referring to?

Dave.


Replies are listed 'Best First'.
Re^4: Performance penalty of using qr//
by Eily (Monsignor) on Dec 21, 2017 at 14:56 UTC

    I'm guessing the optimization of using qr// over a plain string (ie: "Since Perl may compile the pattern at the moment of execution of the qr() operator, using qr() may have speed advantages in some situations ...")
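    To make the quoted documentation concrete, here is a minimal sketch (mine, not from the docs): qr// compiles the pattern once, at the point where the qr// expression is evaluated, and the resulting object can then be matched against repeatedly.

    ```perl
    use strict;
    use warnings;

    # The pattern is compiled here, once, when qr// is evaluated.
    my $re = qr/^(\w+)=(\d+)$/;

    my @pairs;
    for my $line ('a=1', 'junk', 'b=22') {
        # Each match reuses the already-compiled regex object.
        push @pairs, "$1:$2" if $line =~ $re;
    }
    print "@pairs\n";    # prints "a:1 b:22"
    ```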

    From your previous post I would say that this happens when there is a big compilation overhead, so I thought about this, and used this list of words for testing:

    use strict;
    use warnings;
    use Benchmark qw( cmpthese timethese );

    open my $words, "<", "linuxwords.txt" or die "$!";
    my @words = <$words>;
    chomp @words;
    my @search = @words[0..10];
    $" = "|";
    my $re  = qr/^(?:@words)$/;
    my $str = "^(?:@words)\$";
    my $r = timethese( -5, {
        use_qr  => sub { map /$re/,  @search },
        use_str => sub { map /$str/, @search },
        use_re  => sub { map /^(?:@words)$/, @search },
    } );
    cmpthese $r;
    Benchmark: running use_qr, use_re, use_str for at least 5 CPU seconds...
       use_qr:  5 wallclock secs ( 5.23 usr +  0.00 sys =  5.23 CPU) @ 98736.51/s (n=515997)
       use_re:  5 wallclock secs ( 5.33 usr +  0.00 sys =  5.33 CPU) @ 23.99/s (n=128)
      use_str:  5 wallclock secs ( 5.23 usr +  0.00 sys =  5.23 CPU) @ 2268.47/s (n=11855)
                  Rate  use_re use_str  use_qr
    use_re      24.0/s      --    -99%   -100%
    use_str     2268/s   9355%      --    -98%
    use_qr     98737/s 411431%   4253%      --
    The re case is pretty bad because of the systematic interpolation, but I can't help feeling I might be missing something, because the difference between qr and str is absurd. If this is correct, though, then qr is a clear winner for dictionary search.

      You've created a benchmark where the cost of compiling a single regex containing 45000 alternations is overwhelmingly more expensive than running that compiled regex 11 times.

      In the qr case, the pattern is compiled once, *before* the benchmark is run. The benchmark cost is 11 matches, plus 11 clonings of the compiled regex's internal structure.

      In the str case, the benchmark includes compiling the pattern before matching the first word. For the subsequent 10 matches it attempts to recompile /$str/, but each time an optimisation checks whether the string has changed since last time and, if it hasn't, skips the recompilation.

      So sometimes /$str/ in a loop for unchanging $str can be faster than /$qr/, but that benchmark won't show it.
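      That point can be sketched with a hedged benchmark (mine, not from the thread; relative results will vary with perl version and pattern length): with a *short* pattern, the compile cost C is tiny, so the comparison comes down to D (cloning the compiled regex on each /$re/ match) versus E (the cheap "has $str changed?" check).

      ```perl
      use strict;
      use warnings;
      use Benchmark qw( cmpthese );

      # Short pattern: C is negligible, so per-iteration cost dominates.
      my $str  = 'ab+c';
      my $re   = qr/ab+c/;
      my @data = ('xxxabbcxxx') x 10;

      cmpthese( -1, {
          # Pays D (regex-struct duplication) on each match loop.
          use_qr  => sub { my @m = grep /$re/,  @data },
          # Pays only E (the 'unchanged string' eq check) after the
          # first compilation.
          use_str => sub { my @m = grep /$str/, @data },
      } );
      ```

      Depending on the perl version, use_str can come out ahead here, which the 45000-alternation benchmark above cannot show.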

      Or more formally, if C is the time to compile a pattern, M is the time to run (match) against the compiled pattern, and D is the time to duplicate the regex structure, then

        $qr = qr/.../; /$qr/ for 1..N

      takes C + N(M+D), but your benchmark was measuring only N(M+D); while

        $str = "..."; /$str/ for 1..N

      takes C + NM (corrected from N(C+M): I had forgotten to include the 'unchanged pattern' optimisation), which was what your benchmark was measuring.

      Dave.

        C + NM, which was what your benchmark was measuring

        Having read your answers and this -- 269035 -- thread, maybe it's better to say that in the case of "use_str" the benchmark was measuring

        C + 2268 * ( 5 - time_for_C ) * 11 * ( M + E )

        where E is the "light-weight" compilation, consisting only of a string equality (eq) check? The next example shows that "proper" compilation happens just once, if I'm reading the output correctly:

        use strict;
        use warnings;
        use feature 'say';
        use re 'debug';

        my $str = 'foobar';
        sub foo { map /$str/, 1 .. 3 }
        foo for 1 .. 2;

        Compiling REx "foobar"
        Final program:
           1: EXACT <foobar> (4)
           4: END (0)
        anchored "foobar" at 0 (checking anchored isall) minlen 6
        Compiling REx "foobar"
        Compiling REx "foobar"
        Compiling REx "foobar"
        Compiling REx "foobar"
        Compiling REx "foobar"

        ("use_re" benchmarked the concatenation of 45000 words, BTW. A script similar to the above shows that proper compilation happens only once there, too.) And "time_for_C" is negligible: we can compile the regexp by placing a dummy /$str/ at the top of the script, and the benchmark result won't change. Moreover:

        use strict;
        use warnings;
        use Benchmark qw( cmpthese timethese );

        open my $words, "<", "linuxwords.txt" or die "$!";
        my @words = <$words>;
        chomp @words;
        my @search = @words[0..10];
        $" = "|";
        my $re   = qr/^(?:@words)$/;
        my $str  = "^(?:@words)\$";
        my $str1 = $str;
        my $r = timethese( -5, {
            use_qr   => sub { map /$re/, @search },
            use_str  => sub {
                substr $str1, rand 400_000, 1, '#';  # 'equalize' conditions with 'use_str1'
                map /$str/, @search
            },
            use_str1 => sub {
                substr $str1, rand 400_000, 1, '#';  # force proper re-compilation below
                map /$str1/, @search
            },
        } );
        cmpthese $r;

                     Rate use_str1 use_str  use_qr
        use_str1   3.65/s       --    -99%   -100%
        use_str     436/s   11853%      --    -99%
        use_qr    31592/s  865014%   7138%      --

        To summarize, if I may: if the pattern is "long enough", then D (the time to duplicate the regex structure) is much less than E (the time to compare patterns with a simple "eq"). For short patterns (as in the OP), D is more expensive than E (even multiple E's), so the attempt to optimize through use of "qr" failed.

        Edit. People "lucky" enough to hit 0..3 with "rand 400_000" should change the offset in "substr" to "4 + rand 400_000". :(

        Edit 2. I mean, you say: if "N(M+D)" is so much faster than "C+NM", then it must be C that is so slow. But I think, no, C is negligible; rather, "N(M+D)" is faster than "C+N(M+E)" because D is faster than E. Sorry I haven't communicated this thought without edits.