PerlMonks  

Re^7: elsif chain vs. dispatch

by JavaFan (Canon)
on Apr 27, 2009 at 22:40 UTC ( [id://760456] )


in reply to Re^6: elsif chain vs. dispatch
in thread elsif chain vs. dispatch

But that would mean that N/4 keys hashing to the same bucket isn't detected, which means the worst case is still Θ(N). In fact, if there's an ε > 0 such that more than εN keys must hash to a single bucket before Perl reorders the hash, the worst-case lookup is still Θ(N).
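
To see why this matters, here is a minimal sketch of a chained hash in plain Perl (a toy, not Perl's actual hv.c implementation; the bucket function and the toy_* names are made up for illustration). The loop inside toy_fetch is where the Θ(N) cost lives: it grows with the length of the chain in one bucket, not with the number of buckets.

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $NBUCKETS = 8;

    # Deliberately poor bucket function: keys of the same length collide.
    sub toy_bucket { return length($_[0]) % $NBUCKETS }

    my @buckets;    # each bucket holds a chain of [key, value] pairs

    sub toy_store {
        my ($key, $value) = @_;
        push @{ $buckets[ toy_bucket($key) ] }, [ $key, $value ];
    }

    sub toy_fetch {
        my ($key) = @_;
        my $chain = $buckets[ toy_bucket($key) ] or return;
        # With εN keys in one bucket this loop does Θ(N) comparisons,
        # no matter how many buckets the table has.
        for my $pair (@$chain) {
            return $pair->[1] if $pair->[0] eq $key;
        }
        return;
    }

    toy_store("key$_", $_) for 1 .. 20;
    print toy_fetch("key7"), "\n";    # walks the chain in bucket 4, prints 7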

Replies are listed 'Best First'.
Re^8: elsif chain vs. dispatch
by Marshall (Canon) on Apr 27, 2009 at 23:30 UTC
    Yes, if I understand your point correctly: There is no absolute guarantee that all the keys won't hash to the same bucket, even if the keys themselves are absolutely unique! Correct!

    However in a practical sense, I think that you are going to be hard pressed to come up with a realistic example for this user's input data.

    Of course there is a "trick" here. Even if the hash table has to compare, say, 16 things to get a result, it is still going to be very fast!

    The idea that, say, 256 things will hash into the same hash table entry is unlikely. Now, "very, very seldom" doesn't mean "never".

    But as the hash grows, the probability of this decreases exponentially.
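
    As a rough sanity check of that (a sketch, and on modern Perls the reporting differs): on Perls before 5.26, a hash in scalar context reports its used/total bucket counts, so with ordinary keys you can see them spread over hundreds of buckets rather than piling into one.

        #!/usr/bin/perl
        use strict;
        use warnings;

        # 1000 ordinary-looking keys; on pre-5.26 Perls scalar(%h) reports
        # "used buckets/total buckets", e.g. something like "614/1024".
        my %h = map { ("user$_" => $_) } 1 .. 1000;
        print "bucket usage: ", scalar(%h), "\n";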

      However in a practical sense, I think that you are going to be hard pressed to come up with a realistic example for this user's input data.

      Accidentally, sure. But done intentionally, you have a DOS attack. That's why the fix is called a security fix.

        Accidentally, sure. But done intentionally, you have a DOS attack. That's why the fix is called a security fix.

        I am not a Windoze fan. Microsoft calls "security fixes" O/S updates or O/S upgrades. I don't know what you mean by "DOS attack"?

        update: I mis-interpreted a previous post because I had just missed an update to the thread.

      Yes, if I understand your point correctly: There is no absolute guarantee that all the keys won't hash to the same bucket, even if the keys themselves are absolutely unique! Correct!
      You understood me utterly wrong. The claim was made that if Perl detects too many keys hashing to the same bucket, the hash is expanded in size and the keys are reinserted, spreading them over more buckets. I then pointed out that the way this detection is described still means you can have enough keys mapping to the same bucket that your lookup isn't constant anymore.
      The idea that, say, 256 things will hash into the same hash table entry is unlikely. Now, "very, very seldom" doesn't mean "never".
      Yes, and? We were talking about a worst-case scenario. And a worst-case scenario can be anything that isn't impossible.
        The claim was made that if Perl detects too many keys hashing to the same bucket, the hash is expanded in size and the keys are reinserted, spreading them over more buckets.

        Where did that idea come from?

        If Perl detects num_entries > num_buckets, that will trigger the allocation of more buckets.
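
        One way to watch that normal growth (a sketch; the exact split points vary by Perl version, and the bucket report via scalar(%h) only exists on Perls before 5.26): as keys are added, the total number of buckets doubles. That is the num_entries vs. num_buckets growth, separate from any per-bucket overload detection discussed above.

            #!/usr/bin/perl
            use strict;
            use warnings;

            my %h;
            for my $n (1 .. 64) {
                $h{"k$n"} = $n;
                # On pre-5.26 Perls this prints used/total buckets; the
                # total doubles (8, 16, 32, ...) as more keys go in.
                print "$n keys: ", scalar(%h), "\n"
                    if $n == 8 or $n == 16 or $n == 32 or $n == 64;
            }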
