|Perl: the Markov chain saw|
I actually think you would be hard pressed to even demonstrate an advantage in pre-allocating the array.
Update: The short answer is that even with five million elements, growing the non-pre-allocated array one element at a time, pre-allocating is a little SLOWER. What follows is my testing methodology and findings.
If I build up one array by pre-allocating and then assigning elements one by one within the pre-allocated range of indexes, and build up another array simply by assigning elements one by one, I can compare the elapsed times. When I do, the non-preallocation method wins every time, presumably because there's one less statement to execute. Here's an example that attempts to isolate just the parts that involve preallocating and populating the array:
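A sketch of that kind of comparison, using Time::HiRes for timing (the five-million-element count follows the text above; the exact loop bodies and output format are my own assumptions, not the original listing):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);

my $size = 5_000_000;

# Pre-allocated: extend the array first, then assign within that range.
my $t0 = [gettimeofday];
my @prealloc;
$#prealloc = $size - 1;                 # the pre-allocation step
$prealloc[$_] = $_ for 0 .. $size - 1;
printf "preallocated:     %.3fs\n", tv_interval($t0);

# Not pre-allocated: assign one element at a time and let Perl grow it.
$t0 = [gettimeofday];
my @grow;
$grow[$_] = $_ for 0 .. $size - 1;
printf "not preallocated: %.3fs\n", tv_interval($t0);
```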
On my system the output is:
As you can see, the version without preallocation is actually faster. One might wonder why I didn't use push. The reason is that push appends after the highest index of the array, so on the preallocated array it was adding elements beyond the preallocated region rather than filling it in. That raises one other reason not to preallocate (aside from it not actually being faster): it may trick you.
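A minimal demonstration of that trap (my own sketch, not from the original post): push appends after the highest existing index, so on a pre-extended array it skips the pre-allocated slots entirely:

```perl
use strict;
use warnings;

my @a;
$#a = 4;            # "pre-allocate" five slots: indexes 0..4, all undef
push @a, 'x';       # push appends after the last index...

print scalar @a, "\n";                              # 6: 'x' landed at index 5
print defined $a[0] ? "defined\n" : "undef\n";      # undef: slot 0 untouched
```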
Interestingly, the non-preallocated array consumes more memory than the preallocated one. That observation falls out of the same explanation for why preallocation isn't significantly faster than building the array up incrementally.
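One way to see the difference is with the CPAN module Devel::Size; this sketch (my own, not from the post) compares the reported footprints of the two arrays. The grown array typically reports more bytes, because Perl over-allocates it to leave room for further growth:

```perl
use strict;
use warnings;
use Devel::Size qw(total_size);   # CPAN module

my $size = 1_000_000;

my @prealloc;
$#prealloc = $size - 1;
$prealloc[$_] = 1 for 0 .. $size - 1;

my @grow;
$grow[$_] = 1 for 0 .. $size - 1;

printf "preallocated: %d bytes\n", total_size(\@prealloc);
printf "grown:        %d bytes\n", total_size(\@grow);
```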
It would be grossly inefficient to grow the memory for an array one element at a time. As an array grows, if a new contiguous chunk had to be allocated for every new element, the contents of the previous chunk would have to be copied over each time. To avoid this, when Perl allocates space for an array, it leaves room for the array to grow. I don't know the exact growth schedule it uses, but suppose Perl initially leaves enough room for 50 elements. When those 50 are used, Perl allocates a chunk large enough for 100 elements and copies the array into it. When that fills, it allocates enough space for 200, and so on. These numbers are totally made up, but the point is that Perl almost always allocates more space than the array currently needs.
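The real growth schedule is an internal detail of the interpreter, but the arithmetic can be illustrated by simulating a hypothetical doubling policy and counting how many element copies the reallocations would cost:

```perl
use strict;
use warnings;

my $n        = 5_000_000;
my $capacity = 50;      # made-up initial room, as in the text
my $copies   = 0;       # elements copied across all reallocations

for my $used (1 .. $n) {
    if ($used > $capacity) {
        $copies += $used - 1;   # copy the existing elements to the new chunk
        $capacity *= 2;
    }
}

printf "%d appends cost %d copies (%.2f per element)\n",
    $n, $copies, $copies / $n;
```

Under a doubling policy the total number of copies stays below 2n, which is why appending is cheap on average even though an individual reallocation copies the whole array.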
With pre-allocation, the growth algorithm never comes into play: request a million elements and the array occupies exactly a million elements' worth of space, not 1.2 million (with headroom to grow without further allocations). So in terms of memory consumption, the pre-allocation method is a little more efficient. But in terms of speed, the growth algorithm is already good enough that the time spent growing the array is inconsequential next to the time it takes to interpret and execute the statement that preallocates it.
So the pre-allocation micro-optimization, even for an array of five million elements, is useless if your goal is to improve speed. Perl's array growth strategy is similar to that of the C++ std::vector container, with similar time-complexity characteristics: appending elements at the end, letting the array grow as needed, is amortized O(1).
If speed is an issue, profile your code with Devel::NYTProf and attack the hotspots. Array allocation isn't going to be one of them.
I modified the code above to remove the "extra" stuff and to put the guts into a main() subroutine so that I could profile that subroutine with Devel::NYTProf. Profiling shows the loop over the pre-allocated array taking 601 milliseconds and the loop over the non-preallocated array taking 603 milliseconds; however, the preallocation step itself takes 6.69 milliseconds. So the preallocated version (allocating, then iterating) takes about 608ms in total, versus 603ms for the non-preallocated one. This confirms my earlier assertion: the act of pre-allocating consumes more wallclock time interpreting and executing that line of code than just letting Perl allocate as needed.
You can do this too: See the documentation for Devel::NYTProf.
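A typical invocation looks like this (the script name is a placeholder):

```
# Run under the profiler; writes ./nytprof.out
perl -d:NYTProf myscript.pl

# Turn the raw profile into an HTML report in ./nytprof/
nytprofhtml
```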