Yes and No.
And this means a stored key must be immutable, because otherwise that calculated array-index would change too.
The stored key is immutable in this sense, but the calculated array-index for the linked list containing that particular key can change based upon the size of the hash.
This is true:
Consequently there can't be a faster way than replacing the whole key-value pair with "delete OLD" + "store NEW".
One hashing algorithm that has been used before in Perl is:
unsigned int hash_string(const char *s)
{
    unsigned int hash = 0;
    while (*s)
        hash = hash * 33 + *s++;
    return hash;
}
s is a pointer to a null-terminated ASCII string.
This runs "rocket fast".
On a modern processor, a multiply by 33 is at least as fast as a left shift by 5 (times 32) followed by an add.
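To make the connection to the earlier point about array indexes concrete, here is a minimal sketch (my own illustration, not Perl's actual internals): the bucket index is derived from the hash value by masking it down to the table size, so the stored key and its hash never change, but the index derived from them changes when the table doubles.

```c
/* Classic "times 33" string hash, as in the fragment above.
 * The cast guards against sign extension on platforms where
 * char is signed. */
unsigned int hash33(const char *s)
{
    unsigned int hash = 0;
    while (*s)
        hash = hash * 33 + (unsigned char)*s++;
    return hash;
}

/* Bucket index: the hash value masked down to the table size,
 * which is a power of two. Doubling nbuckets can change the
 * index even though the key's hash is unchanged. */
unsigned int bucket_index(unsigned int hash, unsigned int nbuckets)
{
    return hash & (nbuckets - 1);
}
```

For example, hash33("h") is 104 (0b1101000): with 8 buckets the key lands in bucket 0, with 16 buckets the same key lands in bucket 8.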
--- UPDATE ---
Just for fun, I attach actual C code from memtracker.h from 2010 and the 2013 update.
The program uses an integer as the "hash key". Some obvious performance enhancements mentioned in the comments to version 2 were never actually coded because it just didn't matter! The longer version is much, much faster: shorter code does not equal faster code. The HTTP addresses in the code probably don't work anymore, since they are more than a decade old. Also noteworthy: this code was only tested on a 32-bit machine.
> but the calculated array-index for the linked list containing that particular key can change based upon the size of the hash
Well yes, but when the hash table is doubled, the whole internal data structure is overhauled anyway and effectively a new hash is created.
The old linked lists can't be reused.
(At least I don't see how this could be done, the collisions need to be reduced by using the new slots)
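A sketch of that rebuild, assuming a simple chained table (the names `node` and `rehash` are mine, not Perl's): when the bucket array doubles, the old chains cannot be reused as-is, because each node's bucket index changes with the new mask, so every node is unlinked and re-linked into the chain its stored hash now selects.

```c
#include <stdlib.h>

struct node {
    unsigned int  hash;   /* stored hash value of the key */
    struct node  *next;
    /* key/value payload omitted for brevity */
};

/* Double the bucket array and redistribute every node.
 * Returns the new array (the old one is freed), or NULL
 * if allocation fails (old table left intact). */
struct node **rehash(struct node **old, unsigned int old_n)
{
    unsigned int new_n = old_n * 2;
    struct node **new_tab = calloc(new_n, sizeof *new_tab);
    if (!new_tab)
        return NULL;
    for (unsigned int i = 0; i < old_n; i++) {
        struct node *p = old[i];
        while (p) {
            struct node *next = p->next;
            unsigned int j = p->hash & (new_n - 1);
            p->next = new_tab[j];   /* push onto the new chain */
            new_tab[j] = p;
            p = next;
        }
    }
    free(old);
    return new_tab;
}
```

With 2 buckets, keys hashing to 1 and 3 collide in bucket 1; after doubling to 4 buckets they separate into buckets 1 and 3, which is exactly why the doubling reduces collisions.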
Regarding your update:
Thanks, very interesting! :)
Please pardon my C ignorance, but AFAICS the second hashing version operates on the pointer to the key's string, not on the string itself.
Right?
This would have 3 consequences
°) or the function is stable modulo memory page boundaries. But this might make them predictable again
I just threw these hashing algorithms in as examples - they seemed at least tangentially relevant to the discussion of "how does a Perl hash work?". The first one is close to what Perl used to do. The second one works differently, but they are equivalent at the interface level. Performance note: the left-shift-then-add stuff is not necessary. To multiply by 5: left shift by 2 (multiply by 4), then add the original => a multiply by 5. On a modern processor, integer shift, add, and multiply are all about equally fast. That's quite astonishing, but true. The math unit has way more transistors in it than the rest of the CPU, and Intel and AMD have spent a lot of time making math work faster. However, the shown code is so fast that this doesn't matter. Anyway, multiply is faster than you might think - and that applies to Perl as well.
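The shift-and-add equivalence is easy to check for yourself (a trivial demonstration, not taken from the attached code); which form actually executes faster is the compiler's and CPU's business, not the source code's:

```c
/* times 5: shift left 2 (times 4), then add the original */
unsigned int times5(unsigned int x)
{
    return (x << 2) + x;
}

/* times 33: shift left 5 (times 32), then add the original */
unsigned int times33(unsigned int x)
{
    return (x << 5) + x;
}
```

A modern compiler will freely rewrite `x * 5` into the shift form or vice versa, whichever its cost model prefers for the target CPU.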
Both algorithms operate on a binary memory pointer. In the first one, I made the binary pointer into a string with sprintf() because the Perl algorithm is optimized for strings. That one is "slow" because sprintf is slow. The second algorithm is specifically designed to generate hashing bits from a binary number.
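A sketch of the two approaches described (the function names, shift amount, and mixing constant are my illustrations; the actual memtracker.h code may differ): the slow path formats the pointer as text and string-hashes it, the fast path mixes the pointer's bits directly.

```c
#include <stdio.h>
#include <stdint.h>

/* Slow path: turn the pointer into a string, then string-hash it.
 * snprintf dominates the cost here. */
unsigned int hash_ptr_via_string(const void *p)
{
    char buf[2 + 2 * sizeof p + 1];   /* "0x" + hex digits + NUL */
    const char *s = buf;
    unsigned int hash = 0;
    snprintf(buf, sizeof buf, "%p", p);
    while (*s)
        hash = hash * 33 + (unsigned char)*s++;
    return hash;
}

/* Fast path: mix the pointer's bits directly. The low bits of an
 * allocated pointer are poor (alignment zeros), so fold higher bits
 * down first, then spread with a multiplicative mix. */
unsigned int hash_ptr_direct(const void *p)
{
    uintptr_t v = (uintptr_t)p;
    v ^= v >> 4;                              /* fold out alignment zeros */
    return (unsigned int)(v * 2654435761u);   /* Knuth-style multiplier */
}
```

Both are deterministic for a given pointer, but the second does no formatting at all, which is the whole point when the keys are binary memory pointers rather than user-supplied strings.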
There is no DoS security issue with my application. I'm not building hash tables based upon user-supplied data; all data comes from C/C++ memory allocation operations (binary memory pointers). If I were accepting data from a user application and that application knew the hash algorithm, it would be possible for them to generate ASCII data strings which cause lots of hash "collisions", which in turn cause the hash to double in size way, way more often than one would normally expect.
I don't know what you mean by "keys are indeed mutable". In my C application, all the hash keys are the same size, true. The binary hash key results in a binary hash value. The size of the bucket array depends upon how many bits of the binary hash value are used as an index into that bucket array. If a key "changes", usually it will cause the entry to move to a different "hash bucket". There is no special-case code for "hash key changed, but by chance the new key hashes to the same hash bucket"; that case, by hash algorithm design, is rare. "Change hash key" means: delete old key, create new key. That may or may not cause the hash size (the size of the bucket array) to double.
I don't know what you mean by this: "integrity of the hash after being swapped". If you mean, does Perl ensure the integrity of the hash when it doubles? Yes! If you mean, "if some physical memory is swapped to disk, does the O/S MMU figure that out to make it transparent to the application?", also yes. All of that has gotten much easier over the years. One project I had required 16 MB of physical memory on an 80186 processor, which only handled 1 MB. I had to work with the O/S and application folks to design some effective, fast, easy-to-use hardware to make this possible. Now the problem is the inverse: the processor can produce more address bits than exist in the physical sense.