Unexpectedly good results, we are told, can be obtained by using “an old COBOL trick”: namely, using an external disk-sort to sort the file first.
How many times do you need to be told? No, they cannot! Going by the timings below, the sort alone takes about 14 times longer!
- Using a hash takes about 39 seconds:

    [ 1:27:09.40] c:\test>wc -l rands.dat
    100000000 rands.dat

    [ 1:27:48.30] c:\test>perl -nlE"++$h[ $_ ]" rands.dat

    [ 1:28:27.24] c:\test>

- Just sorting the same file takes over 9 minutes:

    [ 1:29:54.32] c:\test>sort -n rands.dat >rands.dat.sorted

    [ 1:39:03.08] c:\test>
And that's before you run another process to perform the actual counting!
To anyone with half a brain this is obvious.
- Using a hash requires:
100e6 X the average time taken to read a record (IO).
100e6 X the time taken to hash the number (H) and increment its count (I).
~Total time required: 100e6 * ( IO + H + I )
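Spelled out as a full script (a minimal sketch; the one-liner above actually uses an array, @h, indexed by the integer itself, which amounts to the same single pass), the hash approach is exactly one read plus one increment per record:

```perl
use strict;
use warnings;

# count_values: one pass over the records, one hash increment each
# ( IO + H + I per record, per the tally above ).
sub count_values {
    my @records = @_;
    my %count;
    ++$count{ $_ } for @records;    # H + I
    return \%count;
}

# Reading line-by-line from a file handle instead of a list would be:
#     chomp, ++$count{ $_ } while <$fh>;
```
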
- Using a sort and then count (at least):
100e6 X the average time taken to read a record (IOR).
100e6 X log2( 100e6 ) = 2,657,542,476 X the time taken compare two lines (COMPL).
100e6 X the average time taken to write a record (IOW).
100e6 X the average time to read the sorted file (IOR).
100e6 X ( the time taken to compare two lines (COMPL) + the time taken to increment a count (I) + the time taken to record that count (R) ).
~Total time required: 200e6*IOR + 100e6*IOW + 2,757e6*COMPL + 100e6*I + 100e6*R
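For comparison, here is a sketch of the counting pass that still has to follow the sort (assuming the records are already sorted): every record is read again (IOR) and compared to its predecessor (COMPL), with an increment (I) per repeat and a write (R) at each run boundary.

```perl
use strict;
use warnings;

# count_sorted: the pass that must follow the external sort.
# Per record: one read (IOR) plus one comparison (COMPL), plus an
# increment (I), and a record of the count (R) at each run boundary.
sub count_sorted {
    my @sorted = @_;
    my @runs;                                          # [ value, count ] pairs
    my ( $prev, $n );
    for my $line ( @sorted ) {
        if ( defined $prev && $line eq $prev ) {       # COMPL
            ++$n;                                      # I
        }
        else {
            push @runs, [ $prev, $n ] if defined $prev;    # R
            ( $prev, $n ) = ( $line, 1 );
        }
    }
    push @runs, [ $prev, $n ] if defined $prev;        # final run
    return \@runs;
}
```
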
And that assumes the whole dataset can be held and sorted in memory, thus avoiding the additional, costly spill and merge stages. And if it could be, there would be no point in using an external sort in the first place.
And please note: this isn't a personal attack. I will respond in the same fashion to anyone pushing this outdated piece of bad information. It only seems personal because you keep on doing it!
By now, I'm half tempted to believe you are only doing it to provoke this response. But I dismiss that notion, as it would require me to attribute some kind of Machiavellian intent to you, and I prefer to apply Hanlon's Razor in these cases.