PerlMonks
RE: Doesn't anyone do their homework anymore?

by Caillte (Friar)
on Oct 17, 2000 at 14:29 UTC ( [id://37118] )


in reply to Doesn't anyone do their homework anymore?
in thread better way to find list members?

The hash test seems a little unfair: it is the only one where you construct the data structure as part of the test itself.

If you construct the hash before testing, the results are somewhat different.

I used the following, a slight modification of your program.

Assuming that you can use the hash table elsewhere in the program (and why not?), hash tables appear to make more sense when looked at this way. I cannot see any use for dynamically creating a hash table just for one search, and if you are going to search a hash anyway, you may as well structure your data as a hash in the first place.
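The build-once, search-many pattern might look something like this (a minimal sketch; the member list and lookup values are made up for illustration):

```perl
use strict;
use warnings;

# Build the lookup hash once, up front...
my @members   = qw(alpha beta gamma delta);
my %is_member = map { $_ => 1 } @members;

# ...then every later membership test is a single hash lookup,
# instead of a scan over @members each time.
for my $candidate (qw(beta epsilon)) {
    print "$candidate: ", ($is_member{$candidate} ? "found" : "not found"), "\n";
}
```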

Of course there are exceptions to this; there are exceptions to everything in the Perl world (TIMTOWTDI).

Anyway, the code and the data:

    use Benchmark;
    use strict;

    # Times array searches. Assumes search element is in the
    # dead center of the array
    my $x;
    my @array;
    my %hash;

    for (10, 50, 100, 500, 1000, 5000) {
        @array = 1 .. $_;
        $x = $_ / 2;
        $hash{$_}++ for @array;
        print "Array size is $_\n";
        timethese(5000, {
            'grep'   => \&grep_test,
            'hash'   => \&hash_test,
            'manual' => \&manual_test,
        });
    }

    sub grep_test   { return 1 if grep { $x == $_ } @array; }
    sub hash_test   { return 1 if $hash{$x}; }
    sub manual_test { for (@array) { return 1 if $x == $_ } }

    Array size is 10
    Benchmark: timing 5000 iterations of grep, hash, manual...
          grep:  0 wallclock secs ( 0.03 usr + 0.00 sys = 0.03 CPU) (warning: too few iterations for a reliable count)
          hash:  0 wallclock secs ( 0.00 usr + 0.00 sys = 0.00 CPU) (warning: too few iterations for a reliable count)
        manual:  0 wallclock secs ( 0.04 usr + 0.00 sys = 0.04 CPU) (warning: too few iterations for a reliable count)
    Array size is 50
    Benchmark: timing 5000 iterations of grep, hash, manual...
          grep:  0 wallclock secs ( 0.15 usr + 0.00 sys = 0.15 CPU) (warning: too few iterations for a reliable count)
          hash:  0 wallclock secs ( 0.01 usr + 0.00 sys = 0.01 CPU) (warning: too few iterations for a reliable count)
        manual:  0 wallclock secs ( 0.11 usr + 0.00 sys = 0.11 CPU) (warning: too few iterations for a reliable count)
    Array size is 100
    Benchmark: timing 5000 iterations of grep, hash, manual...
          grep:  0 wallclock secs ( 0.27 usr + 0.00 sys = 0.27 CPU) (warning: too few iterations for a reliable count)
          hash:  0 wallclock secs ( 0.02 usr + 0.00 sys = 0.02 CPU) (warning: too few iterations for a reliable count)
        manual:  1 wallclock secs ( 0.20 usr + 0.00 sys = 0.20 CPU) (warning: too few iterations for a reliable count)
    Array size is 500
    Benchmark: timing 5000 iterations of grep, hash, manual...
          grep:  1 wallclock secs ( 1.39 usr + 0.00 sys = 1.39 CPU)
          hash:  0 wallclock secs (-0.00 usr + 0.00 sys = -0.00 CPU) (warning: too few iterations for a reliable count)
        manual:  1 wallclock secs ( 0.90 usr + 0.00 sys = 0.90 CPU)
    Array size is 1000
    Benchmark: timing 5000 iterations of grep, hash, manual...
          grep:  3 wallclock secs ( 2.79 usr + 0.00 sys = 2.79 CPU)
          hash:  0 wallclock secs ( 0.02 usr + 0.00 sys = 0.02 CPU) (warning: too few iterations for a reliable count)
        manual:  2 wallclock secs ( 1.77 usr + 0.00 sys = 1.77 CPU)
    Array size is 5000
    Benchmark: timing 5000 iterations of grep, hash, manual...
          grep: 15 wallclock secs (15.39 usr + 0.00 sys = 15.39 CPU)
          hash:  0 wallclock secs ( 0.01 usr + 0.00 sys = 0.01 CPU) (warning: too few iterations for a reliable count)
        manual: 10 wallclock secs ( 9.03 usr + 0.05 sys = 9.08 CPU)

RE (tilly) 2: Doesn't anyone do their homework anymore?
by tilly (Archbishop) on Oct 17, 2000 at 17:42 UTC
    A note on hash performance.

    Hashing is a bucket algorithm. The idea is that you look at the key, produce a number from it, then drop the name/value pair into a bucket chosen by that number. To check whether a particular key is in the hash, all you need to do is look at the key, compute that number again, and look in the right bucket. Assuming that there are enough buckets (Perl adjusts this dynamically) and the hashing function does a good job of distributing keys (Perl has a very carefully designed hashing function), that bucket will have very few things in it and therefore will be very fast to search.

    In fact Perl does a pretty good job of keeping that down to 0-3 elements per bucket.
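    You can peek at the bucket usage yourself: on the perls of this era (behavior that changed in later releases, where scalar context instead returns the key count), evaluating a hash in scalar context reports used/allocated buckets. A quick sketch; the exact numbers printed will vary by perl version and build:

```perl
use strict;
use warnings;

my %hash;
$hash{$_} = 1 for 1 .. 1000;

# On older perls this prints something like "716/1024":
# 716 of 1024 allocated buckets are occupied, so the average
# occupied bucket holds only a key or two.
print scalar(%hash), "\n";
```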

    So no matter how large your dataset (OK, within the bounds of physical memory), accessing a hash is basically constant time. It is slower than accessing an array, but not all that much slower.

    So comparing building a hash to a linear search is actually pretty fair. If you expect to need to find things more than a (pretty small) fixed number of times, the hash should be faster. For a single lookup it is slower. If the number of lookups is dynamic, then the hash definitely makes sense.

    (Yeah, I know. I am ignoring all sorts of things in that broad claim. But Perl is largely designed around the assumption that a hash lookup is constant-time and fast. In the real world that works out pretty well.)

RE: RE: Doesn't anyone do their homework anymore?
by indigo (Scribe) on Oct 17, 2000 at 22:44 UTC
    Ugh! Now the hash test is useless!

    A hash is only a win if you are doing multiple searches. My benchmark provides enough information to suggest you need to do 4 or 5 searches before you break even. Your benchmark, by completely ignoring hash creation overhead, erroneously indicates a hash is best in all situations!
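    A fairer comparison for the single-search case would time the hash construction too, something like this (a sketch; the array size and iteration count are arbitrary):

```perl
use strict;
use warnings;
use Benchmark qw(timethese);

my @array = 1 .. 1000;
my $x     = 500;

timethese(1000, {
    # Build the hash inside the timed sub, so construction
    # cost is charged to the hash approach.
    'build+hash' => sub {
        my %hash = map { $_ => 1 } @array;
        return 1 if $hash{$x};
    },
    # Linear scan pays no setup cost at all.
    'grep' => sub {
        return 1 if grep { $x == $_ } @array;
    },
});
```

Timed this way, a one-off hash lookup loses to the linear scan, and the hash only pulls ahead once its construction cost is amortized over several searches.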
