http://qs321.pair.com?node_id=1007020


in reply to Re^11: Hash order randomization is coming, are you ready?
in thread Hash order randomization is coming, are you ready?

Prevention of algorithmic complexity attacks.

Hm. That is reasoning for randomising the seed for the hashing algorithm; but not reasoning for changing the hash algorithm itself.

It also doesn't explain why you would do it on a hash-by-hash basis rather than a per-process basis.

I don't get the reluctance to share this information?


With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

RIP Neil Armstrong

Re^13: Hash order randomization is coming, are you ready?
by demerphq (Chancellor) on Dec 04, 2012 at 07:45 UTC

    but not reasoning for changing the hash algorithm itself

    Sure it is. A strong hash function is harder to attack.

    why you would do it on a hash-by-hash basis rather than a per-process basis.

    Concerns over information exposure of key order to an attacker.

    I don't get the reluctance to share this information?

    If there is any reluctance it is purely that of me wanting to avoid a long dialog repeating what has already been said elsewhere. I have a lot of demands on my time these days.

    ---
    $world=~s/war/peace/g

      If there is any reluctance it is purely that of me wanting to avoid a long dialog repeating what has already been said elsewhere. I have a lot of demands on my time these days.

      Great, link?

      but not reasoning for changing the hash algorithm itself -- Sure it is. A strong hash function is harder to attack.

      With respect, that is garbage. The way the original algorithmic complexity attack was constructed was simply to hash a mess of random strings of a given length and see which ones hashed to the same value. As soon as anyone gets their hands on the release that contains a different hashing function, the "strength of the hashing function" -- a totally meaningless measure in this context -- is completely negated.

      Only the reliability of the randomised seed provides any protection whatsoever.
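The attack described above can be sketched concretely. This is an illustrative Python sketch, not Perl's actual implementation; the hash is the Jenkins one-at-a-time function (the style older perls used), and the bucket count, key length, and RNG seed are arbitrary choices for the demo:

```python
import random
import string

def one_at_a_time(key: bytes, seed: int = 0) -> int:
    """Jenkins one-at-a-time hash, truncated to 32 bits."""
    h = seed
    for byte in key:
        h = (h + byte) & 0xFFFFFFFF
        h = (h + (h << 10)) & 0xFFFFFFFF
        h ^= h >> 6
    h = (h + (h << 3)) & 0xFFFFFFFF
    h ^= h >> 11
    h = (h + (h << 15)) & 0xFFFFFFFF
    return h

def find_colliding_keys(n_buckets: int, wanted: int, seed: int = 0):
    """Hash random strings and keep the ones that land in bucket 0."""
    rng = random.Random(42)
    found = []
    while len(found) < wanted:
        s = ''.join(rng.choices(string.ascii_lowercase, k=8)).encode()
        if one_at_a_time(s, seed) % n_buckets == 0:
            found.append(s)
    return found

keys = find_colliding_keys(n_buckets=64, wanted=10)
# All ten keys fall into the same bucket -- a cheap linear search
# once the seed is fixed and known.
assert all(one_at_a_time(k) % 64 == 0 for k in keys)
```

With the seed fixed and known, piling keys into one bucket costs only a linear search; change the (secret) seed and the same keys scatter, which is exactly why the randomised seed carries the protection.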

      why you would do it on a hash-by-hash basis rather than a per-process basis. -- Concerns over information exposure of key order to an attacker.

      Unfounded (and illogical) concerns. If the "attacker" has sufficient access to be able to determine the per-process seeding, they have sufficient access to have far simpler and more effective attack vectors.

      Like fitting an anchor to a car or an air brake to a submarine, the extra prophylactic serves no purpose.

      If there is any reluctance it is purely that of me wanting to avoid a long dialog repeating what has already been said elsewhere.

      I see. So we users of this modification shouldn't be concerning our simple selves with the difficult details of this change huh?

      Would copy/pasting take so much time and effort? Even a link to the existing discussion would have sufficed.

      But fear not, I'm not asking you to argue your case here. I've already heard enough to realise that this is tinkering for its own sake, rather than justifiable development.


        With respect, that is garbage

        With respect, I think you are under-informed. See SipHash and the documented attacks on various hash functions. A strong hash does not allow one to predict the hash value of a given string even if one knows the hash value of any other string, assuming one does not know the seed.
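The keyed-hash property described here can be seen in CPython itself, which uses SipHash keyed by PYTHONHASHSEED for its string hashing. A small Python sketch, spawning child interpreters under chosen seeds (the string "orange" and the seed values are arbitrary):

```python
import os
import subprocess
import sys

def hash_under_seed(s: str, seed: str) -> int:
    """Run a child interpreter with a fixed PYTHONHASHSEED and report hash(s)."""
    out = subprocess.run(
        [sys.executable, "-c", f"print(hash({s!r}))"],
        env={**os.environ, "PYTHONHASHSEED": seed},
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout)

h1 = hash_under_seed("orange", "1")
h2 = hash_under_seed("orange", "2")
same = hash_under_seed("orange", "1")

assert h1 == same   # deterministic for a given seed
assert h1 != h2     # but different, and unpredictable, under another seed
```

Knowing the hash of "orange" under one seed tells an attacker nothing about its hash under another; that unpredictability is what "strong" buys once the seed is secret.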

        If the "attacker" has sufficient access to be able to determine the per-process seeding

        Exposing key order provides an attacker information that can be used to eventually deduce the seed. Randomizing per hash means that this information is useless. We know that much code exposes key order without realizing it.
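The seed-leakage concern can be made concrete with a toy model. This hypothetical Python sketch uses a deliberately tiny 8-bit seed so the brute force is visible; real seeds are far larger, but each leaked key ordering narrows the candidate set in the same way:

```python
def toy_hash(key: bytes, seed: int) -> int:
    """A deliberately weak hash with only 8 bits of state, for demonstration."""
    h = seed
    for b in key:
        h = (h * 31 + b) & 0xFF
    return h

def bucket_order(keys, seed, n_buckets=8):
    """The order keys emerge from a bucket-array walk, like hash iteration."""
    buckets = [[] for _ in range(n_buckets)]
    for k in keys:
        buckets[toy_hash(k, seed) % n_buckets].append(k)
    return [k for b in buckets for k in b]

keys = [b"alpha", b"beta", b"gamma", b"delta", b"epsilon"]
secret_seed = 173
observed = bucket_order(keys, secret_seed)   # what leaks to an observer

# Brute-force every possible seed, keeping those consistent with the leak.
candidates = [s for s in range(256) if bucket_order(keys, s) == observed]
assert secret_seed in candidates
# Each further leaked ordering intersects and shrinks this candidate set.
```

With one shared per-process seed, every leaked ordering chips away at the same secret; randomizing per hash means an exposed ordering constrains only that one hash's seed and is useless against any other.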

        Would copy/pasting take so much time and effort?

        Would *reading* what has been written be so much time and effort? I don't mind explaining if you genuinely do not understand what has been said, but the impression I have is that you are unwilling to read what has already been written and would prefer to interrogate me about the same points while being offensive in the process. E.g., using big bold to repeat things I already said, ignoring what has been said (such as "per process randomization"), and accusing me of talking garbage.

        ---
        $world=~s/war/peace/g