G'day eyepopslikeamosquito,
I tried using a matrix: $cells{x_coord}{y_coord}.
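For anyone without the parent post handy, here's a minimal sketch of the two layouts being compared. The `:` separator in the flat key is an assumption on my part; the original string-keyed code may join the coordinates differently.

```perl
use strict;
use warnings;

my ($x, $y) = (3, 5);

# Flat, string-keyed hash: one joined key per point.
my %str_cells;
$str_cells{"$x:$y"} = undef;    # separator is an assumption

# Nested "matrix" hash: one hash level per coordinate.
my %mat_cells;
$mat_cells{$x}{$y} = undef;

print exists $str_cells{"3:5"} ? "str: found\n" : "str: missing\n";
print exists $mat_cells{3}{5}  ? "mat: found\n" : "mat: missing\n";
```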
For comparison, here's the benchmark using your original code:
Benchmark: timing 200000 iterations of Big, Pak, Str...
Big: 10 wallclock secs (10.36 usr + 0.02 sys = 10.38 CPU) @ 19267.82/s (n=200000)
Pak:  8 wallclock secs ( 9.81 usr + 0.03 sys =  9.84 CPU) @ 20325.20/s (n=200000)
Str:  9 wallclock secs ( 8.30 usr + 0.02 sys =  8.32 CPU) @ 24038.46/s (n=200000)
So, while it looks like your machine is about twice as fast as mine, the results are roughly equivalent.
I then changed the last part of your code to:
sub mat_hash {
    my %cells;
    $cells{ $_->[0] }{ $_->[1] } = undef for @points;

    # Sanity checks: every point stored, nothing spurious found.
    my $ncells = 0;
    $ncells += keys %{ $cells{$_} } for keys %cells;
    $ncells == $npoints or die;
    exists $cells{ $_->[0] }{ $_->[1] } or die for @points;
    exists $cells{'notfound'}  and die;
    exists $cells{'notfound2'} and die;
    exists $cells{'notfound3'} and die;
    return \%cells;
}

sub mat_look {
    my $cells = shift;
    exists $cells->{ $_->[0] }{ $_->[1] } or die for @points;
    exists $cells->{'notfound'}  and die;
    exists $cells->{'notfound2'} and die;
    exists $cells->{'notfound3'} and die;
}

my $str_ref = str_hash();
my $big_ref = big_hash();
my $pak_ref = pak_hash();
my $mat_ref = mat_hash();

timethese 200000, {
    Str => sub { str_look($str_ref) },
    Big => sub { big_look($big_ref) },
    Pak => sub { pak_look($pak_ref) },
    Mat => sub { mat_look($mat_ref) },
};
I'm pretty sure mat_hash and mat_look perform the same operations as your *_hash and *_look functions; still, a second look wouldn't hurt (preferably with a non-popping eye :-).
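One subtlety worth confirming in that second look: multi-level exists autovivifies the intermediate levels, so merely *testing* for a point can insert an outer key as a side effect. It doesn't bite the code above, since every two-level lookup there is for a point that exists, but it's easy to get wrong:

```perl
use strict;
use warnings;

my %h;

# Testing existence at the inner level...
my $found = exists $h{x}{y};

# ...returns false, but autovivifies the outer key as a side effect.
print $found ? "found\n" : "not found\n";    # not found
print scalar(keys %h), "\n";                 # 1 -- key 'x' now exists
```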
Other than the shebang line
#!/usr/bin/env perl
the remainder of the code is unchanged.
I ran the benchmark three times:
Benchmark: timing 200000 iterations of Big, Mat, Pak, Str...
Big: 10 wallclock secs (10.06 usr + 0.02 sys = 10.08 CPU) @ 19841.27/s (n=200000)
Mat:  8 wallclock secs ( 7.51 usr + 0.01 sys =  7.52 CPU) @ 26595.74/s (n=200000)
Pak: 10 wallclock secs ( 9.63 usr + 0.01 sys =  9.64 CPU) @ 20746.89/s (n=200000)
Str:  8 wallclock secs ( 7.78 usr + 0.02 sys =  7.80 CPU) @ 25641.03/s (n=200000)
Benchmark: timing 200000 iterations of Big, Mat, Pak, Str...
Big: 10 wallclock secs (10.01 usr + 0.02 sys = 10.03 CPU) @ 19940.18/s (n=200000)
Mat:  8 wallclock secs ( 7.52 usr + 0.01 sys =  7.53 CPU) @ 26560.42/s (n=200000)
Pak: 10 wallclock secs ( 9.72 usr + 0.01 sys =  9.73 CPU) @ 20554.98/s (n=200000)
Str:  8 wallclock secs ( 8.00 usr + 0.02 sys =  8.02 CPU) @ 24937.66/s (n=200000)
Benchmark: timing 200000 iterations of Big, Mat, Pak, Str...
Big: 10 wallclock secs (10.05 usr + 0.01 sys = 10.06 CPU) @ 19880.72/s (n=200000)
Mat:  8 wallclock secs ( 7.47 usr + -0.01 sys = 7.46 CPU) @ 26809.65/s (n=200000)
Pak:  9 wallclock secs ( 9.66 usr + 0.02 sys =  9.68 CPU) @ 20661.16/s (n=200000)
Str:  9 wallclock secs ( 8.09 usr + 0.01 sys =  8.10 CPU) @ 24691.36/s (n=200000)
So, it looks like Mat is slightly faster than Str (the previous fastest).
Your post stressed speed but, in case it matters, the matrix layout holds more keys overall (one outer key per distinct x coordinate plus one inner key per point, versus one joined key per point), so it will use more memory.
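To make that key-count difference concrete, here's a core-only sketch using a hypothetical four-point data set (not the benchmark's real points):

```perl
use strict;
use warnings;

# Hypothetical four-point data set (not the benchmark's real points).
my @points = ( [1,1], [1,2], [2,1], [3,5] );

# Flat, string-keyed hash: one key per point.
my %flat;
$flat{"$_->[0]:$_->[1]"} = undef for @points;

# Nested "matrix" hash: one outer key per distinct x, plus inner keys
# (and one inner hash structure per outer key).
my %mat;
$mat{ $_->[0] }{ $_->[1] } = undef for @points;

my $flat_keys = keys %flat;                       # 4
my $mat_keys  = keys %mat;                        # 3 outer keys...
$mat_keys += keys %{ $mat{$_} } for keys %mat;    # ...plus 4 inner keys = 7

print "flat: $flat_keys keys\n";    # flat: 4 keys
print "mat:  $mat_keys keys\n";     # mat:  7 keys
```

Counting keys is only a proxy, of course; the flat version pays for longer key strings, while the matrix version pays for extra hash structures.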
I'm using:
$ perl -v | head -2 | tail -1
This is perl 5, version 26, subversion 0 (v5.26.0) built for darwin-thread-multi-2level