in reply to Re^2: Performance penalty of using qr//
in thread Performance penalty of using qr//
"Um ... so ... when is the sometime the optimization kicks in"

What optimisation are you referring to?
Dave.
Re^4: Performance penalty of using qr//
by Eily (Monsignor) on Dec 21, 2017 at 14:56 UTC
I'm guessing the optimization of using qr// over a plain string (i.e.: "Since Perl may compile the pattern at the moment of execution of the qr() operator, using qr() may have speed advantages in some situations ..."). From your previous post I would say that this happens when there is a big compilation overhead, so I thought about this and used this list of words for testing:
The re case is pretty bad because of the systematic interpolation, but I can't help feeling that I might be missing something, given how absurd the difference between qr and str is. But if this is correct, then qr is a clear winner for dictionary search.
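A dictionary-search comparison along these lines can be sketched as follows. The word list below is made up for illustration (the actual list used in the benchmark is not shown in this thread), and the benchmark names mirror the use_str/use_qr distinction discussed later in the thread:

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Stand-in word list; the real test data from the post is not reproduced here.
my @words = qw(apple banana cherry date elderberry fig grape);

my $str = join '|', @words;   # plain string pattern, interpolated each match
my $qr  = qr/$str/;           # pattern compiled once, up front

my @targets = map { "some text containing $_ somewhere" } @words;

cmpthese(-1, {
    use_str => sub { my $n = 0; $n++ for grep { /$str/ } @targets },
    use_qr  => sub { my $n = 0; $n++ for grep { /$qr/  } @targets },
});
```

With a long alternation the qr// variant avoids re-interpolating the pattern string on every match, which is the effect being measured above.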
by dave_the_m (Monsignor) on Dec 21, 2017 at 20:01 UTC
In the qr case, the pattern is compiled once, *before* the benchmark is run. The benchmark cost is 11 matches, plus 11 clonings of the compiled regex's internal structure. In the str case, the benchmark includes compiling the pattern before matching the first word. For the subsequent 10 matches it attempts to recompile /$str/, but each time it uses an optimisation where it checks whether the string has changed since last time and, if not, skips recompiling it. So sometimes /$str/ in a loop for an unchanging $str can be faster than /$qr/, but that benchmark won't show it. Or more formally, if C is the time to compile a pattern, M is the time to run (match) against the compiled pattern, and D is the time to duplicate the regex structure, then the qr case takes C + N(M+D), but your benchmark was measuring only N(M+D); while the str case takes C + NM, which was what your benchmark was measuring.

Dave.
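The "skip recompiling if the string is unchanged" behaviour can be observed directly. This is a sketch using the real `re` pragma's debug mode, which prints a "Compiling REx" line to STDERR each time a pattern is actually compiled; with an unchanging $str, that line should appear only once despite the loop:

```perl
use strict;
use warnings;

# 'use re "debug"' makes the regex engine report compilation and
# execution steps on STDERR, so repeated compilation would be visible.
use re 'debug';

my $str = 'foo|bar|baz';

for my $word (qw(foo bar baz qux)) {
    # First iteration compiles /$str/; later iterations pass the
    # string-unchanged check and reuse the compiled pattern.
    my $hit = $word =~ /$str/;
}
```

The output is verbose, but counting the "Compiling REx" lines distinguishes a single compilation from one per iteration.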
by vr (Curate) on Dec 22, 2017 at 19:26 UTC
"C + NM, which was what your benchmark was measuring"

Having read your answers and this thread -- 269035 -- maybe it's better to say that in the case of the "use_str" benchmark it was measuring C + 2268 * ( 5 - time_for_C ) * 11 * ( M + E ), where E is a "light-weight" compilation, consisting only of a string equality (eq) check? The next example shows that "proper" compilation happens just once, if I'm reading the output correctly:
("use_re" benchmarked concatenation of 45000 words, BTW. Similar, to above, script shows proper compilation happens only once, too.) And "time_for_C" is negligible, we can compile regexp placing dummy /$str/ at the top of the script, and benchmark result won't change. Moreover:
To summarize, if I may: if the pattern is "long enough", then D (the time to duplicate the regex structure) is much less than E (the time to compare patterns with a simple "eq"). For short patterns (as in the OP), D is more expensive than E (even multiple E's), so the attempt to optimize through the use of "qr" failed.

Edit. People "lucky" enough to hit 0..3 with "rand 400_000" should change the offset in "substr" to "4 + rand 400_000". :(

Edit 2. I mean, you say: if "N(M+D)" is so much faster than "C+NM", then it's C that is so slow; but I think, no, C is negligible; rather, "N(M+D)" is faster than "C+N(M+E)" because D is faster than E. Sorry I haven't communicated this thought without edits.
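The D-versus-E trade-off described above can be probed with a long pattern. This is a sketch, not the script from the thread: the word list is generated rather than taken from a dictionary, and the claim being illustrated is that the per-match "eq" check on an unchanged interpolated string (E) scales with pattern length, while cloning an already-compiled qr// structure (D) does not:

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Long alternation: the eq check must compare the whole pattern string
# on every /$long/ match, whereas /$qr/ only clones the compiled regex.
my $long = join '|', map { sprintf 'word%06d', $_ } 1 .. 45_000;
my $qr   = qr/$long/;

my $target = 'xxx word000123 yyy';

cmpthese(-1, {
    use_str => sub { $target =~ /$long/ for 1 .. 10 },
    use_qr  => sub { $target =~ /$qr/   for 1 .. 10 },
});
```

With a short pattern the ordering should flip, matching the summary: for short patterns the duplication cost D dominates the cheap eq check E.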