Re^2: elsif chain vs. dispatch

by JavaFan (Canon)
on Apr 27, 2009 at 12:01 UTC ( [id://760333] )


in reply to Re: elsif chain vs. dispatch
in thread elsif chain vs. dispatch

When you say that something is O(N) or O(N^2) or whatever, you are saying that as N changes then the resource in question (time or memory normally) changes with that relation to it. So if something is O(N) and N doubles, then the time taken doubles.
To be pedantic, that's not true. O(N) means that the growth is at most linear. O(N^2) means that the growth is at most quadratic. This means that any algorithm that is O(N) is also O(N log N) and O(N^2).

If you want to express that an algorithm is linear (and not merely bounded above by linear), the correct notation to use is Θ.

Note also that hash lookups are, worst case, Θ(N). There's always a chance that all hash keys map to the same value, resulting in a linear list that needs to be searched.
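As a toy illustration (a hand-rolled chained hash, not how perl's hv.c actually stores entries): if every key lands in the same bucket, a lookup has to walk the whole chain.

    # Toy chained hash whose worst-case "hash function" sends every
    # key to bucket 0, so a fetch degenerates to a linear scan.
    my @buckets = map { [] } 1 .. 8;
    sub worst_hash { 0 }    # pathological: every key collides

    sub toy_store {
        my ($key, $value) = @_;
        push @{ $buckets[ worst_hash($key) ] }, [ $key, $value ];
    }

    sub toy_fetch {
        my ($key) = @_;
        for my $pair (@{ $buckets[ worst_hash($key) ] }) {  # Θ(N) walk
            return $pair->[1] if $pair->[0] eq $key;
        }
        return undef;
    }

    toy_store("k$_", $_) for 1 .. 1000;
    print toy_fetch('k1000'), "\n";    # walks all 1000 entries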

Replies are listed 'Best First'.
Re^3: elsif chain vs. dispatch
by grinder (Bishop) on Apr 27, 2009 at 14:04 UTC
    Note also that hash lookups are, worst case, Θ(N). There's always a chance that all hash keys map to the same value, resulting in a linear list that needs to be searched.

    I recall that perls from 5.8.3 or so have code in place to watch out for this sort of degenerate case, and will rehash to prevent this from occurring.

    • another intruder with the mooring in the heart of the Perl

      I recall that perls from 5.8.3 or so have code in place to watch out for this sort of degenerate case

      I think it's the HV_MAX_LENGTH_BEFORE_SPLIT, currently set to 14.

      /* hv.c */
      #define HV_MAX_LENGTH_BEFORE_SPLIT 14
      ...
      Perl_hv_common( ... ) {
          ...
          while ((counter = HeNEXT(counter)))
              n_links++;
          if (n_links > HV_MAX_LENGTH_BEFORE_SPLIT) {
              /* Use only the old HvKEYS(hv) > HvMAX(hv) condition to limit
                 bucket splits on a rehashed hash, as we're not going to
                 split it again, and if someone is lucky (evil) enough to
                 get all the keys in one list they could exhaust our memory
                 as we repeatedly double the number of buckets on every
                 entry. Linear search feels a less worse thing to do. */
              hsplit(hv);
          }
          ...
      }

      (the comment seems to be a left-over from an earlier implementation, though...)

      I don't know what is new in Perl 5.8.3 regarding re-sizing algorithms based upon buckets used, but if you are curious about what is happening, the scalar value of a hash, e.g. my $x = %hash;, returns a string like "(10/1024)" showing the number of buckets used out of the total number of buckets.
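      For instance, a minimal sketch of watching that value (note that on perls from 5.26 onward, %hash in scalar context returns the key count instead of this bucket string):

          my %hash;
          $hash{$_} = 1 for 1 .. 100;
          my $usage = %hash;             # e.g. "64/128" on older perls
          print "bucket usage: $usage\n";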

      To pre-size a hash or force it to get bigger, assign to keys %hash, e.g.: keys(%hash) = 8192;.
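      For example, something like this should pre-extend the bucket array before a bulk load (perl rounds the requested size up to a power of two if needed):

          my %hash;
          keys(%hash) = 8192;                  # allocate buckets up front
          $hash{"item$_"} = $_ for 1 .. 5000;  # no doubling during the load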

      The Perl hash algorithm is:

      /* of course C code */
      int i = klen;
      unsigned int hash = 0;
      char *s = key;
      while (i--)
          hash = hash * 33 + *s++;
      Perl cuts the above value down to the number of bits needed for the hash array size, which in Perl is always a power of 2. As mentioned above, the "(10/1024)" string shows the number of buckets used and the total number of buckets. There is another value, xhv_keys, accessible in the Perl "guts", that contains the total number of hash entries.
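      As a rough illustration, here is the same times-33 scheme written in Perl itself, as the post describes it (ignoring the seeding and later refinements real perls add); bucket selection just masks the value down to the power-of-two table size:

          sub hash33 {
              my ($key) = @_;
              my $hash = 0;
              $hash = ($hash * 33 + $_) & 0xFFFF_FFFF for unpack 'C*', $key;
              return $hash;                        # stays within 32 bits
          }

          my $buckets = 1024;                      # always a power of two
          my $bucket  = hash33('some key') & ($buckets - 1);
          print "bucket $bucket of $buckets\n";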

      If the total number of entries exceeds the number of buckets used, Perl will increase the hash size by one more bit and recalculate all hash keys again.

      So let's say that we have a hash with 8 buckets and for some reason only one of those buckets is being used. When the ninth entry shows up, Perl will see (9 > 8) and will re-size the hash by adding one more bit to the hash key. In practice, this algorithm appears to work pretty well. I guess there are some improvements in Perl 5.8.3 and later.
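      A hypothetical little simulation of that doubling rule (comparing the entry count to the bucket count, as in the 9 > 8 example above):

          my $buckets = 8;
          for my $entries (1 .. 40) {
              if ($entries > $buckets) {
                  $buckets *= 2;     # one more bit in the mask
                  print "entry $entries: grew to $buckets buckets\n";
              }
          }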

      Anyway, I often work with hashes of, say, 100,000 things and haven't seen the need yet to override the Perl hash algorithm.

        The degenerate case has nothing to do with the ratio of used buckets to the number of total buckets.

        The degenerate case occurs when the number of elements in the hash (0+keys(%hash)) is much greater than the number of buckets in use (0+%hash) because most of keys hash to the same value.

        Locating a key in the degenerate case is a linear search since they're all in the same bucket.
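        On a perl of that vintage one could sketch a rough check along those lines (0+%hash numifies the "used/total" string to the used-bucket count; the 50x threshold here is arbitrary):

            my %h = map { $_ => 1 } 1 .. 10_000;
            my $entries = 0 + keys %h;  # total entries in the hash
            my $used    = 0 + %h;       # buckets in use, pre-5.26 only
            warn "keys are clustering badly\n"
                if $used && $entries > 50 * $used;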

      A measure was added to 5.8.1 to thwart the intentional exercise of the degenerate case.

      I don't see anything in there or in the linked section of perlsec about detecting the accidental exercise of the degenerate case, but it's possible. (It's even likely.)

      if (n_links > HV_MAX_LENGTH_BEFORE_SPLIT) in hv.c in perl.git might be that very check.


      However, this does appear to point out a pitfall in pre-sizing a hash. Perl starts a hash with 8 "buckets". If you start it yourself with, say, 128 buckets, it is possible to wind up with a lot more things associated with a hash key than if you let Perl just grow the hash on its own.

      update: As a small update, I would add that I haven't found much performance difference between just letting Perl do its hash thing and pre-sizing a hash. The hash key computation effort (which, as the code above shows, is actually very efficient) tends to get dwarfed by the input effort to read the, say, 100,000 keys and the computation required on those keys!
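      For anyone who wants to test that claim on their own perl, a quick comparison with the core Benchmark module (numbers will vary with perl version and data):

          use Benchmark qw(cmpthese);

          cmpthese(-2, {
              grown    => sub { my %h; $h{$_} = 1 for 1 .. 100_000 },
              presized => sub {
                  my %h;
                  keys(%h) = 131_072;   # next power of two above 100_000
                  $h{$_} = 1 for 1 .. 100_000;
              },
          });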
