Easier and simpler, but less efficient
Woah! I would benchmark that before making such a statement. I expect you'll need a seriously huge list before the O(n log n) starts to lose to your straight O(n).
There's no debate that it will, but the problem is that the O(n) algorithm uses a comparatively large number of slow ops, whereas the sort compiles down into a single op, and there you're running at C speed (simple {$a <=> $b} blocks are recognised and special-cased during the parse). This drowns out the extra cost for a long, long time (that is: for a long list of values). On my machine, the cross-over occurs between 100 000 and 1 000 000 elements (and I had to run the million-element benchmark for 15 seconds in order to give it enough time to settle down):
               Rate with_scan_1 with_sort_1
with_scan_1 35662/s          --        -53%
with_sort_1 76332/s        114%          --

               Rate with_scan_2 with_sort_2
with_scan_2  6838/s          --        -40%
with_sort_2 11437/s         67%          --

               Rate with_scan_3 with_sort_3
with_scan_3   759/s          --        -11%
with_sort_3   853/s         12%          --

               Rate with_sort_4 with_scan_4
with_sort_4  62.6/s          --        -18%
with_scan_4  76.2/s         22%          --

               Rate with_sort_5 with_scan_5
with_sort_5  3.38/s          --        -47%
with_scan_5  6.38/s         89%          --

            s/iter with_sort_6 with_scan_6
with_sort_6   4.88          --        -66%
with_scan_6   1.65        196%          --
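For anyone who wants to reproduce numbers like these, here is a minimal sketch of the kind of harness that produces such tables. The subroutine names mirror the output above, but the list sizes, iteration count, and random data are my own assumptions, not the original test setup:

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

for my $size (100, 1000) {            # assumed sizes; the real test went up to 1e6
    my @data = map { int rand 1e9 } 1 .. $size;

    cmpthese(1000, {
        # O(n): one pass, tracking both extremes as we go
        with_scan => sub {
            my ($min, $max) = ($data[0]) x 2;
            for (@data) {
                $min = $_ if $_ < $min;
                $max = $_ if $_ > $max;
            }
            return ($min, $max);
        },
        # O(n log n): numeric sort, then take the two ends
        with_sort => sub {
            my @sorted = sort { $a <=> $b } @data;
            return ($sorted[0], $sorted[-1]);
        },
    });
}
```

The `{ $a <=> $b }` comparator is exactly the form the parser special-cases, so the sorting variant spends almost all of its time in C rather than in Perl opcodes.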
And since either choice is more than fast enough for me, I'd throw my lot in with the more succinct version -- less chance of introducing semantic mistakes and off-by-one errors. For instance, I had to think for a little while about how you initialised $min and $max...
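On that last point, one way to sidestep the initialisation question entirely is to seed both extremes from the first element of the list. This is a sketch of that idiom, not necessarily how the code under discussion did it:

```perl
use strict;
use warnings;

# Seed $min and $max from the first element: no sentinel values needed,
# and an empty list is rejected explicitly instead of silently
# returning undef.
sub minmax {
    my @values = @_;
    die "minmax() needs at least one value\n" unless @values;
    my ($min, $max) = ($values[0]) x 2;
    for my $v (@values) {
        $min = $v if $v < $min;
        $max = $v if $v > $max;
    }
    return ($min, $max);
}

my ($min, $max) = minmax(3, 9, -2, 7);   # → (-2, 9)
```

A single-element list then correctly yields the same value for both extremes, with no magic "infinity" constants anywhere.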
• another intruder with the mooring in the heart of the Perl