Re: Best way to allocate some memory first
by Marshall (Canon)
on Dec 03, 2021 at 22:11 UTC
I would not consider 41 KB "large". Some years ago I spent literally days benchmarking and working with pre-sizing hash structures. Perl starts off with 8 hash "buckets". Every time the hash doubles in size (8, 16, 32, 64 buckets, etc.), Perl has to re-visit every item in the hash to compute its new "hash index" (an entry in the hash table, which is a pointer to a linked list). My conclusion was that pre-sizing a hash table to hold, say, 100K hash keys didn't matter at all in the scheme of things. I don't remember all of the initial "bucket array" sizes that I experimented with. A major factor in my analysis was that the time spent actually using the hash completely dwarfed the time spent creating it in the first place. I took the pre-sizing out because it made no perceptible difference to the application.

Perl itself calls low-level C functions to do its internal data structure management. These are "very fast" functions. Yes, I can write an ASM function that will beat C's memcpy(), but so what? (memcpy() is good C code: it uses byte operations to align onto a memory boundary and then proceeds from there. The compiler, except at extreme optimization levels, cannot recognize that there is a specialized ASM instruction that is much faster at these block memory operations.) Again, so what? Anyway, I am very skeptical that pre-sizing a 41K Perl array will make any difference at all in terms of overall application performance.
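If you want to repeat that sort of experiment yourself, here is a minimal sketch of hash pre-sizing using the lvalue form of keys(). The 100_000 key count and the "key$i" names are just placeholders, not the data I benchmarked; timing the two fill loops (e.g. with the Benchmark module) is the actual experiment.

    use strict;
    use warnings;

    # Assigning to keys(%hash) asks Perl to allocate at least that many
    # buckets up front, so the 8 -> 16 -> 32 -> ... doubling (and the
    # re-bucketing of every existing key) is skipped while filling.
    my %presized;
    keys(%presized) = 100_000;

    my %grown;    # starts with the default 8 buckets and doubles as needed

    for my $i ( 1 .. 100_000 ) {
        $presized{"key$i"} = $i;
        $grown{"key$i"}    = $i;
    }

    printf "presized: %d keys, grown: %d keys\n",
        scalar( keys %presized ), scalar( keys %grown );

In my testing the difference between those two fill loops was lost in the noise once the application actually started using the hash, which is why I took the pre-sizing back out.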
Update: I don't think that this will achieve the intended purpose, because undef is a value. It will mess things up when you try to decide how many elements the array currently holds.
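To make that concrete, here is a small illustration, assuming the proposal was something like pre-extending the array via $#array (which fills the new slots with undef):

    use strict;
    use warnings;

    my @pre;
    $#pre = 40_999;              # pre-extend to 41,000 slots, all undef

    push @pre, 'real data';      # lands at index 41000, not index 0
    print scalar(@pre), "\n";    # prints 41001 -- the undef slots count too

    my @plain;                   # no pre-extension
    push @plain, 'real data';
    print scalar(@plain), "\n";  # prints 1, the number of stored elements

Because every pre-extended slot already holds undef, scalar(@array) (or $#array) no longer tells you how many elements you have actually stored.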
In Section: Seekers of Perl Wisdom