http://qs321.pair.com?node_id=11111223


in reply to Re^7: "exists $hash{key}" is slower than "$hash{key}"
in thread "exists $hash{key}" is slower than "$hash{key}"

Thanks for the comments and clarifications.

If I update the benchmark code to use the ternary operator and only a global assignment, then the general pattern remains on Windows, but less so on Linux.

Is anyone able to replicate these results?

use Benchmark qw {:all};
use 5.016;

my %hash;

#  set up the hash
for (1001..2000) {
    $hash{$_}++;
}

#  two keys we use below
our $key1 = 1001;
our $key2 = 1002;

#  hash key 1 is SV
$hash{$key1} = 1;
#  hash key 2 is RV
$hash{$key2} = {1..10};

#  assign to global as a baseline
our $xx_global;

#  keys are short for formatting reasons
#  (and kept the same as in the original post)
#  char 1:    e = exists check, v = value check
#  chars 2,3: ck = constant key, vk = variable key
#  chars 4,5: sv = key contains scalar value, rv = key contains reference
#  char 6:    l = assign to lexical, g = assign to global
#  thus
#  ecksvl means "exists check using constant key, scalar value, assigned to lexical"
my %checks = (
    evksvg => '$xx_global = exists $hash{$key1} ? 1 : 2',
    vvksvg => '$xx_global = $hash{$key1} ? 1 : 2',
    evkrvg => '$xx_global = exists $hash{$key2} ? 1 : 2',
    vvkrvg => '$xx_global = $hash{$key2} ? 1 : 2',
);

cmpthese ( -2, \%checks );

I ran the above code on both a Linux box (perlbrew 5.30.0, CentOS 7) and a Windows laptop (Windows 10, Strawberry Perl 5.30.0). Each was repeated four times. (In previous posts I used Strawberry 5.28.0, but the Windows machine is the same.)

The relative differences on the Linux machine are very small and the order changes between runs. One of the value calls is fastest in each of the four runs, but not by much in absolute terms, and an exists call is second fastest in three of the four runs. On Windows the value calls are always faster than the exists calls, and the relative differences are much larger.

Linux results:

          Rate evkrvg vvksvg evksvg vvkrvg
evkrvg 25263440/s  --   -5%   -6%   -6%
vvksvg 26588209/s  5%    --   -1%   -1%
evksvg 26896356/s  6%    1%    --   -0%
vvkrvg 26916138/s  7%    1%    0%    --

          Rate evksvg evkrvg vvksvg vvkrvg
evksvg 25969403/s  --   -1%   -3%   -4%
evkrvg 26171165/s  1%    --   -2%   -3%
vvksvg 26639554/s  3%    2%    --   -1%
vvkrvg 26943132/s  4%    3%    1%    --

          Rate evkrvg vvkrvg evksvg vvksvg
evkrvg 26418063/s  --   -2%   -3%   -5%
vvkrvg 27020119/s  2%    --   -1%   -3%
evksvg 27265486/s  3%    1%    --   -2%
vvksvg 27927270/s  6%    3%    2%    --

          Rate vvksvg evkrvg evksvg vvkrvg
vvksvg 22526952/s  --   -7%   -9%  -15%
evkrvg 24292879/s  8%    --   -2%   -8%
evksvg 24670397/s 10%    2%    --   -7%
vvkrvg 26535260/s 18%    9%    8%    --

 

Windows results:

          Rate evkrvg evksvg vvkrvg vvksvg
evkrvg 11983737/s  --  -13%  -21%  -21%
evksvg 13772688/s 15%    --   -9%  -10%
vvkrvg 15178025/s 27%   10%    --   -0%
vvksvg 15245990/s 27%   11%    0%    --

          Rate evksvg evkrvg vvksvg vvkrvg
evksvg 14079720/s  --   -2%  -12%  -13%
evkrvg 14319917/s  2%    --  -11%  -12%
vvksvg 16084312/s 14%   12%    --   -1%
vvkrvg 16231025/s 15%   13%    1%    --

          Rate evksvg evkrvg vvkrvg vvksvg
evksvg 16847142/s  --   -6%  -13%  -14%
evkrvg 17937967/s  6%    --   -7%   -9%
vvkrvg 19322856/s 15%    8%    --   -2%
vvksvg 19628993/s 17%    9%    2%    --

          Rate evkrvg evksvg vvkrvg vvksvg
evkrvg 14452940/s  --   -4%   -8%  -14%
evksvg 14982490/s  4%    --   -5%  -11%
vvkrvg 15741756/s  9%    5%    --   -6%
vvksvg 16757529/s 16%   12%    6%    --

And I should reiterate my point from the original post that the relative differences remain very small. If the difference is real, then one would have to be running a very large number of calls for the choice of idiom to make any meaningful difference.
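To put those relative differences into absolute terms, here is a back-of-envelope sketch. The rates are rounded from the Windows tables above, and the lookup count is an arbitrary illustration, not taken from any real program:

```perl
use strict;
use warnings;

# Back-of-envelope only: rates rounded from the Windows results above.
my $exists_rate = 14_000_000;   # ops/sec, roughly an exists-call run
my $value_rate  = 16_000_000;   # ops/sec, roughly a value-call run
my $lookups     = 1_000_000;    # arbitrary number of lookups

my $delta_ms = ($lookups / $exists_rate - $lookups / $value_rate) * 1000;
printf "extra time for %d lookups: %.2f ms\n", $lookups, $delta_ms;
# prints "extra time for 1000000 lookups: 8.93 ms"
```

So even a 10-15% relative gap costs on the order of milliseconds per million lookups.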

Addendum:

After writing the above, I decided to run more replications on Windows to get a better sense of how consistent the results are on my machine, and got the results below for 30 replications. I could have pared the benchmark down to one check of each type, but have left the code as-is for consistency with the earlier runs.

Of the 30 reps, 23 show both value calls being faster than either exists call. In only one case was exists fastest.

Runs are numbered for reference. A number with suffix "n" denotes a run in which an exists call was first or second fastest.

1
          Rate evksvg evkrvg vvkrvg vvksvg
evksvg 14016791/s  --   -3%  -10%  -10%
evkrvg 14502095/s  3%    --   -7%   -7%
vvkrvg 15591928/s 11%    8%    --   -0%
vvksvg 15647458/s 12%    8%    0%    --

2
          Rate evksvg evkrvg vvksvg vvkrvg
evksvg 13784646/s  --   -4%  -14%  -15%
evkrvg 14302757/s  4%    --  -11%  -12%
vvksvg 16046915/s 16%   12%    --   -1%
vvkrvg 16215803/s 18%   13%    1%    --

3
          Rate evkrvg evksvg vvksvg vvkrvg
evkrvg 14185097/s  --   -2%  -11%  -13%
evksvg 14413155/s  2%    --   -9%  -12%
vvksvg 15911268/s 12%   10%    --   -3%
vvkrvg 16342816/s 15%   13%    3%    --

4
          Rate evkrvg evksvg vvksvg vvkrvg
evkrvg 14685190/s  --   -0%   -7%  -10%
evksvg 14718917/s  0%    --   -6%  -10%
vvksvg 15741756/s  7%    7%    --   -4%
vvkrvg 16386200/s 12%   11%    4%    --

5
          Rate evksvg evkrvg vvksvg vvkrvg
evksvg 14680062/s  --   -1%  -10%  -11%
evkrvg 14760674/s  1%    --  -10%  -10%
vvksvg 16326796/s 11%   11%    --   -1%
vvkrvg 16474871/s 12%   12%    1%    --

6n
          Rate vvkrvg evkrvg evksvg vvksvg
vvkrvg 13132131/s  --   -4%  -13%  -14%
evkrvg 13640527/s  4%    --  -10%  -10%
evksvg 15088261/s 15%   11%    --   -1%
vvksvg 15199441/s 16%   11%    1%    --

7
          Rate evkrvg evksvg vvkrvg vvksvg
evkrvg 13587332/s  --   -4%   -8%  -16%
evksvg 14171127/s  4%    --   -4%  -12%
vvkrvg 14811159/s  9%    5%    --   -8%
vvksvg 16155605/s 19%   14%    9%    --

8n
          Rate evksvg vvkrvg vvksvg evkrvg
evksvg 11707959/s  --  -22%  -24%  -27%
vvkrvg 14986647/s 28%    --   -2%   -6%
vvksvg 15311861/s 31%    2%    --   -4%
evkrvg 16023862/s 37%    7%    5%    --

9
          Rate evksvg evkrvg vvksvg vvkrvg
evksvg 14244165/s  --   -3%  -10%  -12%
evkrvg 14642711/s  3%    --   -8%   -9%
vvksvg 15830158/s 11%    8%    --   -2%
vvkrvg 16152573/s 13%   10%    2%    --

10
          Rate evksvg evkrvg vvkrvg vvksvg
evksvg 14121311/s  --   -3%  -11%  -12%
evkrvg 14591640/s  3%    --   -8%   -9%
vvkrvg 15795394/s 12%    8%    --   -2%
vvksvg 16083963/s 14%   10%    2%    --

11n
          Rate vvkrvg evkrvg evksvg vvksvg
vvkrvg 16824099/s  --   -1%   -3%   -5%
evkrvg 17014413/s  1%    --   -2%   -4%
evksvg 17336475/s  3%    2%    --   -2%
vvksvg 17730661/s  5%    4%    2%    --

12
          Rate evkrvg evksvg vvkrvg vvksvg
evkrvg 16192077/s  --   -7%  -15%  -16%
evksvg 17375149/s  7%    --   -9%  -10%
vvkrvg 19072920/s 18%   10%    --   -1%
vvksvg 19278927/s 19%   11%    1%    --

13n
          Rate evksvg vvkrvg evkrvg vvksvg
evksvg 17275592/s  --   -3%   -3%   -7%
vvkrvg 17805106/s  3%    --   -0%   -4%
evkrvg 17812893/s  3%    0%    --   -4%
vvksvg 18638037/s  8%    5%    5%    --

14n
          Rate evkrvg vvksvg evksvg vvkrvg
evkrvg 15942681/s  --   -5%   -7%  -11%
vvksvg 16845763/s  6%    --   -2%   -5%
evksvg 17115273/s  7%    2%    --   -4%
vvkrvg 17818450/s 12%    6%    4%    --

15
          Rate evksvg evkrvg vvkrvg vvksvg
evksvg 17140881/s  --   -1%   -8%  -12%
evkrvg 17322469/s  1%    --   -7%  -11%
vvkrvg 18615359/s  9%    7%    --   -4%
vvksvg 19401535/s 13%   12%    4%    --

16
          Rate evkrvg evksvg vvkrvg vvksvg
evkrvg 15344534/s  --   -4%  -17%  -18%
evksvg 16026386/s  4%    --  -13%  -14%
vvkrvg 18390436/s 20%   15%    --   -2%
vvksvg 18743663/s 22%   17%    2%    --

17
          Rate evksvg evkrvg vvkrvg vvksvg
evksvg 16758062/s  --   -7%  -10%  -10%
evkrvg 17948695/s  7%    --   -3%   -4%
vvkrvg 18591773/s 11%    4%    --   -0%
vvksvg 18662780/s 11%    4%    0%    --

18
          Rate evkrvg evksvg vvkrvg vvksvg
evkrvg 15643772/s  --   -4%  -11%  -11%
evksvg 16250242/s  4%    --   -7%   -8%
vvkrvg 17501894/s 12%    8%    --   -0%
vvksvg 17582178/s 12%    8%    0%    --

19
          Rate evksvg evkrvg vvksvg vvkrvg
evksvg 15403076/s  --   -2%  -17%  -19%
evkrvg 15723064/s  2%    --  -15%  -17%
vvksvg 18470873/s 20%   17%    --   -3%
vvkrvg 18964468/s 23%   21%    3%    --

20
          Rate evksvg evkrvg vvksvg vvkrvg
evksvg 13764563/s  --   -6%  -12%  -15%
evkrvg 14649482/s  6%    --   -7%   -9%
vvksvg 15692295/s 14%    7%    --   -3%
vvkrvg 16184162/s 18%   10%    3%    --

21
          Rate evksvg evkrvg vvkrvg vvksvg
evksvg 14479288/s  --   -3%   -6%  -10%
evkrvg 14950644/s  3%    --   -3%   -7%
vvkrvg 15471803/s  7%    3%    --   -3%
vvksvg 16022650/s 11%    7%    4%    --

22
          Rate evkrvg evksvg vvkrvg vvksvg
evkrvg 13734297/s  --   -6%  -12%  -12%
evksvg 14573226/s  6%    --   -7%   -7%
vvkrvg 15677646/s 14%    8%    --   -0%
vvksvg 15687019/s 14%    8%    0%    --

23
          Rate evksvg evkrvg vvksvg vvkrvg
evksvg 13885612/s  --   -9%  -12%  -14%
evkrvg 15188495/s  9%    --   -4%   -6%
vvksvg 15800737/s 14%    4%    --   -3%
vvkrvg 16232594/s 17%    7%    3%    --

24
          Rate evksvg evkrvg vvkrvg vvksvg
evksvg 13927062/s  --  -10%  -16%  -17%
evkrvg 15464966/s 11%    --   -7%   -7%
vvkrvg 16566376/s 19%    7%    --   -1%
vvksvg 16710598/s 20%    8%    1%    --

25
          Rate evkrvg evksvg vvksvg vvkrvg
evkrvg 13409022/s  --   -5%  -13%  -15%
evksvg 14156262/s  6%    --   -9%  -11%
vvksvg 15472862/s 15%    9%    --   -2%
vvkrvg 15859452/s 18%   12%    2%    --

26
          Rate evkrvg evksvg vvksvg vvkrvg
evkrvg 14159040/s  --   -6%   -8%  -10%
evksvg 15036462/s  6%    --   -3%   -4%
vvksvg 15449998/s  9%    3%    --   -1%
vvkrvg 15669876/s 11%    4%    1%    --

27
          Rate evksvg evkrvg vvksvg vvkrvg
evksvg 14418525/s  --   -1%  -11%  -12%
evkrvg 14622963/s  1%    --  -10%  -11%
vvksvg 16220206/s 12%   11%    --   -1%
vvkrvg 16421489/s 14%   12%    1%    --

28
          Rate evkrvg evksvg vvksvg vvkrvg
evkrvg 13445842/s  --   -4%  -12%  -14%
evksvg 14037792/s  4%    --   -8%  -10%
vvksvg 15321097/s 14%    9%    --   -2%
vvkrvg 15604214/s 16%   11%    2%    --

29n
          Rate vvkrvg evkrvg evksvg vvksvg
vvkrvg 11093337/s  --  -20%  -22%  -30%
evkrvg 13940970/s 26%    --   -2%  -12%
evksvg 14248088/s 28%    2%    --  -10%
vvksvg 15858079/s 43%   14%   11%    --

30n
          Rate evkrvg vvkrvg evksvg vvksvg
evkrvg 12109210/s  --  -17%  -17%  -24%
vvkrvg 14554599/s 20%    --   -0%   -9%
evksvg 14593796/s 21%    0%    --   -9%
vvksvg 16004545/s 32%   10%   10%    --

Replies are listed 'Best First'.
Re^9: "exists $hash{key}" is slower than "$hash{key}"
by dave_the_m (Monsignor) on Jan 09, 2020 at 08:17 UTC
    At this point I think you're mainly measuring noise. You've also still got the bug whereby you populate the lexical %hash, but the benchmarks get run against the *empty* global %hash.

    By "noise", I mean a combination of timing noise and (for lack of a better term) "compiler noise". How C code gets compiled can affect the alignment of machine code bytes across cache-line boundaries, which means that different compilers can compile the same source code of the perl interpreter into different executables with different instruction-cache and branch-prediction miss patterns. I have personally seen adding a line of code to a part of the perl interpreter that wasn't being executed (e.g. in dump.c) cause a 10% change in benchmark speed for a simple benchmark.

    These days I mostly benchmark the perl interpreter using a tool of mine (Porting/bench.pl) built on top of cachegrind, which profiles the execution in terms of how many individual machine code instructions, branches etc. it performs. Under that, 'exists' takes slightly fewer instructions, data reads and writes, and branches than a hash lookup.

    Dave.

      Thanks once again.

      Changing the my %hash line to our %hash makes the results much more variable, with exists being fastest about half the time across ten runs.

      If the Porting/bench.pl tool shows fewer instructions, branches etc. for exists, then I'll take that as the more authoritative test.

      For future readers: adding an explicit use warnings; to the script does not raise any warnings about the lexical hash in the benchmark code. Benchmark.pm does not use warnings, and it explicitly disables strict when evaling strings of benchmark code (see sub _doeval in its source). String-form benchmark code can avoid subroutine-call overhead, but it needs more care.
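      A minimal sketch of the scoping pitfall (the helper compile_elsewhere and the variable names are made up for illustration; the helper stands in for Benchmark.pm's _doeval): a string eval performed in a scope compiled before a lexical was declared cannot see that lexical, so the eval'd code silently falls through to the empty package variable.

```perl
use strict;
use warnings;

# This sub stands in for Benchmark::_doeval: its body is compiled
# before %lex below is declared, so the string eval inside it cannot
# see the file's later lexicals -- only package (global) variables.
sub compile_elsewhere {
    my ($code) = @_;
    no strict;      # Benchmark.pm likewise disables strict here
    no warnings;
    return eval "package main; sub { $code }";
}

my  %lex  = (k => 1);   # lexical: invisible to compile_elsewhere's eval
our %glob = (k => 1);   # package global: reachable as %main::glob

my $sees_lex  = compile_elsewhere('exists $lex{k}  ? 1 : 0');
my $sees_glob = compile_elsewhere('exists $glob{k} ? 1 : 0');

print "lexical visible: ", $sees_lex->(),  "\n";   # prints 0
print "global visible: ",  $sees_glob->(), "\n";   # prints 1
```

      So a benchmark string that mentions a my-declared %hash quietly tests the empty global %hash instead, with no warning, which is why switching the declaration to our changes the results.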

      For purposes of posterity, the compilers used to compile the perls I used were gcc 7.1.0 for Strawberry Perl on Windows and gcc 6.2.0 on Linux.

Re^9: "exists $hash{key}" is slower than "$hash{key}"
by Anonymous Monk on Jan 09, 2020 at 09:15 UTC

    Is it noise?

    sub BenchIt {
        print "\n\n## $^O $]\n";
        use Benchmark qw {:all};

        our %hash;
        for (1001..2000) {
            $hash{$_}++;
        }

        our $key1 = 2000 - int rand 1001;
        our $key2 = 2000 - int rand 1001;
        $hash{$key1} = 1;
        $hash{$key2} = {1..10};

        our $xx_global;

        cmpthese ( -2, {
            svExist  => 'for(1..10_000){$xx_global = exists $hash{$key1} ? 1 : 2}',
            svValue  => 'for(1..10_000){$xx_global = $hash{$key1} ? 1 : 2}',
            refExist => 'for(1..10_000){$xx_global = exists $hash{$key2} ? 1 : 2}',
            refValue => 'for(1..10_000){$xx_global = $hash{$key2} ? 1 : 2}',
        } );

        return;
    }

    a few old perls

    laptops fluctuate :)