http://qs321.pair.com?node_id=760354


in reply to Re^2: elsif chain vs. dispatch
in thread elsif chain vs. dispatch

Note also that hash lookups are, worst case, Θ(N). There's always a chance that all hash keys map to the same bucket, resulting in a linear list that needs to be searched.

I recall that perls from 5.8.3 or so have code in place to watch out for this sort of degenerate case, and will rehash to prevent this from occurring.
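
For illustration, here's a hypothetical way such collisions can arise. Under the simple multiply-by-33 hash discussed further down in this thread (a sketch, not necessarily the exact function your perl uses), single-character keys whose byte values are congruent mod 8 all land in the same bucket of an 8-bucket table:

    # toy illustration: 'a' (97), 'i' (105), 'q' (113), 'y' (121)
    # are all 1 mod 8, so they share one bucket out of 8
    for my $key ('a', 'i', 'q', 'y') {
        my $h = ord($key);    # the *33 hash of a 1-char key is 0*33 + byte
        printf "%s -> bucket %d of 8\n", $key, $h & 7;
    }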

• another intruder with the mooring in the heart of the Perl

Re^4: elsif chain vs. dispatch
by almut (Canon) on Apr 27, 2009 at 20:59 UTC
    I recall that perls from 5.8.3 or so have code in place to watch out for this sort of degenerate case

    I think it's the HV_MAX_LENGTH_BEFORE_SPLIT, currently set to 14.

    /* hv.c */
    #define HV_MAX_LENGTH_BEFORE_SPLIT 14
    ...
    Perl_hv_common( ... ) {
        ...
        while ((counter = HeNEXT(counter)))
            n_links++;

        if (n_links > HV_MAX_LENGTH_BEFORE_SPLIT) {
            /* Use only the old HvKEYS(hv) > HvMAX(hv) condition to
               limit bucket splits on a rehashed hash, as we're not
               going to split it again, and if someone is lucky
               (evil) enough to get all the keys in one list they
               could exhaust our memory as we repeatedly double the
               number of buckets on every entry.  Linear search
               feels a less worse thing to do.  */
            hsplit(hv);
        }
        ...
    }

    (the comment seems to be a left-over from an earlier implementation, though...)

Re^4: elsif chain vs. dispatch
by Marshall (Canon) on Apr 27, 2009 at 19:58 UTC
    I don't know what is new in Perl 5.8.3 regarding new re-sizing algorithms based upon buckets used, but if you are curious as to what is happening, the scalar value of a hash, e.g. my $x = %hash;, returns a string like "(10/1024)", showing the number of buckets used / the total number of buckets.

    To pre-size a hash or force it to get bigger, assign a scalar to keys %hash, e.g.: keys(%hash) = 8192;.
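
    For instance, a minimal sketch (this assumes a perl of this era, where a non-empty hash in scalar context reports used/total buckets):

        my %hash = map { $_ => 1 } 1 .. 10;
        my $x = %hash;          # e.g. "8/16": 8 of 16 buckets in use
        print "bucket usage: $x\n";

        my %big;
        keys(%big) = 8192;      # pre-extend to at least 8192 buckets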

    The Perl hash algorithm is:

    /* of course C code */
    int i = klen;
    unsigned int hash = 0;
    char *s = key;

    while (i--)
        hash = hash * 33 + *s++;
    Perl masks the above value down to the size of the hash array, which in Perl is always a power of 2. As mentioned above, this "(10/1024)" string shows the number of "buckets" used and the total number of "buckets". There is another value, xhv_keys, accessible via the Perl "guts", that contains the total number of hash entries.
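
    As a rough illustration, the same computation and masking can be sketched in Perl (a toy re-implementation for experimenting; perl itself of course does this in C):

        sub hash33 {
            my ($key) = @_;
            my $hash = 0;
            $hash = ($hash * 33 + $_) & 0xFFFF_FFFF    # stay within 32 bits
                for unpack 'C*', $key;
            return $hash;
        }

        # mask down to a power-of-two table of, say, 1024 buckets
        my $bucket = hash33('some key') & (1024 - 1);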

    If the total number of entries exceeds the number of buckets, Perl will increase the hash size by one more bit and recalculate all the hash keys again.

    So let's say that we have a hash with 8 buckets and for some reason only one of those buckets is being used. When the ninth entry shows up, Perl will see (9 > 8) and will re-size the hash by adding one more bit to the hash key. In practice, this algorithm appears to work pretty well. I guess there are some improvements in Perl 5.8.3 and later.
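
    A small demo of that growth (again assuming a perl where a non-empty hash in scalar context yields "used/total"; the key names are made up):

        my %h;
        for my $n (1 .. 40) {
            $h{"key$n"} = 1;
            my ($used, $total) = scalar(%h) =~ m{(\d+)/(\d+)};
            printf "%2d entries: %s of %s buckets used\n", $n, $used, $total;
        }

    Watching the output, the total bucket count should double (8, 16, 32, ...) roughly as the number of entries overtakes it.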

    Anyway, I often work with hashes of, say, 100,000 things and haven't yet seen the need to override the Perl hash algorithm.

      The degenerate case has nothing to do with the ratio of used buckets to the number of total buckets.

      The degenerate case occurs when the number of elements in the hash (0+keys(%hash)) is much greater than the number of buckets in use (0+%hash), because most of the keys hash to the same bucket.

      Locating a key in the degenerate case is a linear search since they're all in the same bucket.
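
      In that spirit, a rough health check one could run on a suspect hash (assuming a perl where a non-empty hash in scalar context numifies to the used-bucket count):

          my $entries = 0 + keys %hash;
          my $used    = 0 + %hash;   # "10/1024" numifies to 10 buckets in use
          warn "degenerate-looking hash: $entries entries in $used buckets\n"
              if $used && $entries / $used > 10;    # threshold is arbitrary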

        If you let Perl grow the hash, this super-degenerate case will be detected and Perl will add bits to the hash key. The number of buckets starts at 8, then goes to 16, 32, 64, etc. The 9th entry hashing to the same value, with buckets = 8, would re-generate the entire hash. Now, I suppose some case could be constructed where, at each bit addition, the same thing not only happens again, but becomes harder for earlier versions of Perl to detect!

        I think my general advice about checking these parameters (#buckets used, #total buckets, and #total entries) is good when dealing with very large or performance-sensitive hashes.

        Completely correct! Yes this could happen. If it keeps happening, then the 17th entry would cause the hash to be re-sized. Then again on the 33rd entry.

        It sounds like Perl 5.8.3+ has made some improvements! Great!

        For Perl versions before that, and even on Perl 5.8.3, I don't think a user can see more than #buckets, #buckets used, and #total entries (i.e., the user can't know the maximum number of entries in any one bucket), but given those three things, a user can make a judgment call about increasing the hash table size, and is able to do so.

Re^4: elsif chain vs. dispatch
by ikegami (Patriarch) on Apr 27, 2009 at 20:05 UTC

    A measure was added to 5.8.1 to thwart the intentional exercise of the degenerate case.

    I don't see anything in there or in the linked section of perlsec about detecting the accidental exercise of the degenerate case, but it's possible. (It's even likely.)

    if (n_links > HV_MAX_LENGTH_BEFORE_SPLIT) in hv.c in perl.git might be that very check.

Re^4: elsif chain vs. dispatch
by Marshall (Canon) on Apr 27, 2009 at 20:31 UTC
    Ooops. I think I screwed up here and pushed the create/update button at the wrong level; most of what followed duplicated my reply above. A goof...

    As I said above, I haven't yet seen the need to override the Perl hash algorithm for hashes with, say, 100,000 things. However, this does appear to point out a pitfall in pre-sizing a hash: Perl starts a hash with 8 "buckets", and if you start it yourself with, say, 128 buckets, it is possible to wind up with a lot more things associated with a single hash value than if you let Perl just grow the hash on its own.

    Update: I would add that I haven't found much performance difference between just letting Perl do its hash thing and pre-sizing a hash. The hash key computation (which is actually very efficient, as the C code above shows) tends to get dwarfed by the effort of reading in the, say, 100,000 keys and whatever computation is then done on them!
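
    For the curious, a quick Benchmark sketch along those lines (the key list and sizes are made up; results will vary by machine and perl version):

        use Benchmark qw(cmpthese);

        my @keys = map { "key$_" } 1 .. 100_000;

        cmpthese(-2, {
            grown => sub {
                my %h;
                $h{$_} = 1 for @keys;
            },
            presized => sub {
                my %h;
                keys(%h) = 131_072;    # next power of two above 100,000
                $h{$_} = 1 for @keys;
            },
        });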