I just threw these hashing algorithms in as examples - they seemed at least tangentially relevant to the discussion of "how does a Perl hash work?". The first one is close to what Perl used to do. The second one works differently, but the two are equivalent at the interface level. A performance note: the left-shift-then-add trick is not necessary. To multiply by 5, you can left shift by 2 (multiply by 4) and then add the original value: 4x + x = 5x. But on a modern processor, integer shift, add, and multiply are all about equally fast. That's quite astonishing, but true. The math unit has far more transistors in it than the rest of the CPU, and Intel and AMD have spent a lot of effort making arithmetic fast. In any case, the code shown is so fast that this doesn't matter. The upshot: multiply is faster than you might think, and that applies to Perl as well.
Both algorithms operate on a binary memory pointer. In the first one, I turned the binary pointer into a string with sprintf() because the Perl algorithm is optimized for strings. That version is "slow" because sprintf is slow. The second algorithm is specifically designed to generate hash bits directly from a binary number.
There is no DoS security issue with my application. I'm not building hash tables based upon user-supplied data; all data comes from C/C++ memory allocation operations (binary memory pointers). If I were accepting data from a user application, and that application knew the hash algorithm, it would be possible for it to generate ASCII data strings that cause lots of hash "collisions", which in turn make the hash double in size way, way more often than one would normally expect.
I don't know what you mean by "keys are indeed mutable". In my C application, all the hash keys are the same size, true. The binary hash key produces a binary hash value. The size of the bucket array depends upon how many bits of that binary hash value are used as an index into it. If a key "changes", the entry will usually move to a different hash bucket. There is no special-case code for "the hash key changed, but by chance the new key hashes to the same hash bucket"; by design of the hash algorithm, that case is rare. "Change hash key" really means: delete the old key, create a new key. That may or may not cause the hash size (the size of the bucket array) to double.
I don't know what you mean by this: "Like integrity of the hash after being swapped?". If you mean "does Perl ensure the integrity of the hash when it doubles?", yes! If you mean "if some physical memory is swapped to disk, does the O/S MMU figure that out and make it transparent to the application?", also yes. All of that has gotten much easier over the years. One project I had required 16 MB of physical memory on an 80186 processor, which could only address 1 MB. I had to work with the O/S and application folks to design some effective, fast, easy-to-use hardware to make that possible. Now the problem is the inverse: the processor can produce more address bits than exist in the physical sense. | [reply] |
OK, sorry - I was under the impression you were showing parts of Perl's implementation ...
... and this raised some security doubts.
It also surprised me to see that the address to the key is hashed and not the string of the key.
Because this has some implications.
- For example, these pointers must stay "stable" after memory re-allocations, and I have seen Perl ports listed for machines without an MMU (like Motorola 68000 boxes).
- It's also the case that you can normally replace the string behind the pointer with a shorter one, which would make keys mutable (in a limited way).
Anyway ...
Thanks for the enlightening conversation! :)
| [reply] |
The first hashing algorithm WAS Perl's implementation more than a decade ago. It is more complex now, but I'm sure that it is still very, very fast.
Again, this code is NOT Perl. But it does use Perl concepts. I am sorry if this is more confusing than helpful.
It also surprised me to see that the address to the key is hashed and not the string of the key.
There is a misunderstanding here - perhaps caused by some C confusion? The address of the key is not hashed; a binary memory address IS the key. In Perl, a hash key must be (or at least used to have to be) ASCII. In version 1, I converted this binary hash key to ASCII in order to run the Perl algorithm on it. The second algorithm skips the ASCII conversion and uses binary operations directly.
We are getting way "off topic" if we talk about 68000 vs. modern Intel/AMD processors. I worked at one time with Bob Childs, who invented the Intel 286 memory segmentation stuff - making it do as much as Intel could manage within the limits of backward compatibility and so on. Bob was a very smart, albeit argumentative, guy. I miss him.
| [reply] |