As to the complexity, (to me) it looks very much like O(n^2) in time.
I agree. The Perl version certainly looks to be tending that way: time versus number of records, with the number of buffers fixed.
But the number of buffers also has a non-linear effect that is harder to quantify.
These are timings taken using the Perl prototype:
                    Number of buffers
Array
Size          2              4              8             16
-----  -------------  -------------  -------------  --------------
10000    9.710932970   14.441396952   16.980458975    17.980459929
20000   38.186435938   54.618164063   65.470575094    68.913941145
40000  148.056374073  224.696855068  244.073187113   260.217727184
80000  310.495392084  882.082360983  969.694694996  1098.488962889
But Perl's internal overheads tend to cloud the issue.
These are a similar set of timings (though with 1000 times as many records) from a straight Perl-to-C conversion:

                     Number of buffers
Array
Size             2            4             8             16
---------  -----------  -----------  ------------  ------------
 10000000  0.248736811  0.497881829   0.799180768   1.335607230
 20000000  0.285630879  1.024963784   1.879727758   3.588316686
 40000000  0.402336371  1.211978628   3.943654854   7.762311276
 80000000  0.966428312  2.093228984  10.674847289  15.208667508
160000000  3.682530644  7.499381800  16.773807549  34.454965607
That paints a much less pessimistic picture of the algorithm.
Based on the timings alone, it looks to be linear(ish) in the number of buffers and (um) log-quadratic in the number of records?
With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.