http://qs321.pair.com?node_id=944920


in reply to "Just use a hash": An overworked mantra?

But just because hash key lookups/increments are O(1) operations, that doesn't make them computationally cheap.
...
An array lookup is also O(1), but it turns out it's a much computationally cheaper task to perform these array operations.

That's not correct.

A hash is not O(1).

Also, this example with the integers 0..999 looks too artificial to me, and it is so obviously a case for an array that, for me, it does not even deserve to start the discussion in the first place :)
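
(For reference, the comparison being quoted can be reproduced with something like the sketch below, using the core Benchmark module; it does show the array version winning on constant factors, which is exactly why an array is the obvious choice for dense integer keys like 0..999.)

    # A sketch of the comparison under discussion: counting occurrences of
    # the integers 0..999 with a hash versus an array. Both lookups are
    # O(1) in the usual hand-waving sense; the array simply has a smaller
    # constant factor. Exact numbers will vary by machine and perl version.
    use strict;
    use warnings;
    use Benchmark qw(cmpthese);

    my @data = map { int rand 1000 } 1 .. 100_000;

    cmpthese( -2, {
        hash  => sub { my %count; $count{$_}++ for @data },
        array => sub { my @count; $count[$_]++ for @data },
    } );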

I do appreciate the good English of the original post, however, which I personally do not possess
:) :)

Re^2: "Just use a hash": An overworked mantra?
by davido (Cardinal) on Dec 23, 2011 at 19:19 UTC
      Yes, I am sure.

      First, the third link you pointed to reads:
      Thus—formally—hash tables often run in time O(log n).

      Next, if the question is "is it possible to create an algorithm that, given arbitrary strings of finite length, performs associative store/lookup in O(1)?", then the answer is "yes", but such algorithms are not realistic.
      But, back to reality, real-life algorithms are O(log N) at best, and I am afraid Perl's is worse than that.
      And, have you ever heard of the so-called "hash complexity attack" that was fixed, as a security measure, by hash seed randomization in 5.8.0?
      For your information, pre-5.8.0 hashes looked like O(1) but were actually O(n) (or maybe even worse in the worst-case scenario).
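
      To illustrate what that degradation means (this is only a toy model of the worst case, not Perl's real internals): if every key lands in the same bucket, as an attacker's deliberately colliding keys would, each lookup becomes a linear scan, i.e. O(n) per lookup.

          # Toy single-bucket "hash": all keys collide, so get() is a linear
          # scan. Doubling the number of keys roughly quadruples the total
          # time for n lookups, because each lookup is itself O(n).
          use strict;
          use warnings;
          use Time::HiRes qw(time);

          my @bucket;                              # one shared bucket = worst case
          sub put { push @bucket, [ $_[0], $_[1] ] }
          sub get {
              my ($key) = @_;
              for my $pair (@bucket) {             # scan every colliding entry
                  return $pair->[1] if $pair->[0] eq $key;
              }
              return undef;
          }

          for my $n ( 1_000, 2_000, 4_000, 8_000 ) {
              @bucket = ();
              put( "key$_", $_ ) for 1 .. $n;
              my $t0 = time;
              get("key$_") for 1 .. $n;            # n lookups over n colliding keys
              printf "%5d colliding keys: %.3f s for %d lookups\n",
                  $n, time - $t0, $n;
          }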

      Next, your "stackoverflow.com" link simply contains wrong information.
      In other words, I do not see any proof there that convinces me.

      And your last link does not even mention hashes.
      Would you care to explain what it is doing here? TIA!

      And finally, do you see something strange in your phrase
      hash operations are O(1) operations, where 'n' is the number of elements in the hash
      ?
      I may be joking a bit, but I do not trust phrases like that.
      You explain a parameter 'n' that is not present in your formula - what other mistakes/typos have you made?
      :)

        real-life algorithms are O(log N) at best, and I am afraid Perl's is worse than that

        I'd like to see you demonstrate that Perl's hashes are O(log N), let alone "worse than that".
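
        One way to check, rather than argue (just a sketch using the core Time::HiRes module; the absolute numbers will vary by machine): time a fixed number of random lookups against hashes of increasing size. O(log N) per lookup should show the times growing as the hash grows; amortized O(1) should keep them roughly flat.

            # Time 1,000,000 random lookups against hashes of increasing size.
            # If per-lookup cost were O(log N) the timings should climb with N;
            # if it is amortized O(1) they should stay roughly constant.
            use strict;
            use warnings;
            use Time::HiRes qw(time);

            my $lookups = 1_000_000;
            for my $size ( 125_000, 250_000, 500_000, 1_000_000 ) {
                my %h;
                $h{"key$_"} = $_ for 1 .. $size;

                # 1000 pre-built random probe keys, reused round-robin
                my @probe = map { 'key' . ( 1 + int( rand($size) ) ) } 1 .. 1_000;

                my $t0  = time;
                my $sum = 0;
                $sum += $h{ $probe[ $_ % 1_000 ] } for 1 .. $lookups;
                printf "%9d keys: %.3f s for %d lookups\n",
                    $size, time - $t0, $lookups;
            }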


        With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
        Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
        "Science is about questioning the status quo. Questioning authority".
        In the absence of evidence, opinion is indistinguishable from prejudice.

        The start of some sanity?

        And your last link does not even mention hashes. Would you care to explain what it is doing here? TIA!

        Look at the table:

        Here are some well-known algorithms and their order functions (adapted from [2]):

        Notation    Name        Example
        O(1)        constant    array or hash index
        ...

        2. Kernighan, B. W. & Pike, R. (1999). The Practice of Programming, p. 41. Addison-Wesley.

        Yes, I am sure.

        Great, prove it :)