http://qs321.pair.com?node_id=741004


in reply to mathematical proof

So it's understood that:

  * the hash approach does ~ n work inserting the keys, plus ~ 2m to pull out the m unique keys and their counts;
  * the array approach does ~ n work loading the keys, ~ n log n sorting them, and ~ n scanning the sorted result for the unique keys and their counts.

So hash: ~ n + 2m; array: ~ 2n + n log n.

Not all work is the same, however. Scanning the array certainly requires ~ n comparisons, but this is in a Perl loop -- which may dominate the running time. Further, this does not account for the cost of reorganising the array to remove duplicates, if that's what you want to do...
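For concreteness, here is a minimal sketch of the two approaches being compared; the HUM filehandle and key_part() extractor are placeholders borrowed from the pseudo-code at the end, not working code from the original problem:

    # hash: one insert per line (~ n), then pull out the m keys and counts (~ 2m)
    my %count ;
    while (<HUM>) {
        $count{ key_part($_) }++ ;
    }

    # array: load (~ n), sort (~ n log n), then scan for runs of equal keys (~ n)
    my @array ;
    while (<HUM>) {
        push @array, key_part($_) ;
    }
    @array = sort @array ;
    my (@keys, @counts, $prev) ;
    for my $k (@array) {                # the scan is a Perl-level loop
        if (!defined $prev or $k ne $prev) {
            push @keys, $k ;
            push @counts, 1 ;
            $prev = $k ;
        }
        else {
            $counts[-1]++ ;
        }
    }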

However, that may not be the complete story. What do you intend to do with the information ?

Suppose you want to print out the unique keys in key order:
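A sketch of what that might look like (again with HUM and key_part() as placeholders):

    # hash: sort the m unique keys (~ m log m) and print them (~ m)
    my %count ;
    while (<HUM>) { $count{ key_part($_) }++ }
    print "$_ $count{$_}\n" for sort keys %count ;

    # array: the big sort has already put the keys in order, so printing
    # falls out of the duplicate-removal scan
    my @array ;
    while (<HUM>) { push @array, key_part($_) }
    @array = sort @array ;
    my ($prev, $c) ;
    for my $k (@array) {
        if (defined $prev and $k eq $prev) { ++$c ; next }
        print "$prev $c\n" if defined $prev ;
        ($prev, $c) = ($k, 1) ;
    }
    print "$prev $c\n" if defined $prev ;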

So the totals now are hash: ~ n + 4m + m log m; array: ~ 3n + n log n + m. Where there are a lot of duplicates, then, the hash approach is doing less work; where there are few, there's apparently not a lot in it.

Suppose you want to print out the unique keys in original order (duplicates sorted by first instance): here the hash only has to remember each key the first time it is seen, in the order it is seen, while the array has to be sorted on key and original position and then re-scanned -- see the pseudo-code below.

In this case the hash is a clear winner: not only is there no n log n component on the hash side, but the work in the array's sorting and scanning has clearly increased.

So what's the point of all this ? In short: before embarking on big-O, you do have to consider at least the major components of the whole problem, and beware of counting apples and oranges.

FWIW, the pseudo-code for keys in original file order is:

    # hash...
    my (%hash, @keys, @counts) ;
    while (<HUM>) {
        my $k = key_part($_) ;
        push @keys, $k if !$hash{$k} ;      # remember keys in first-seen order
        $hash{$k}++ ;
    } ;
    @counts = map $hash{$_}, @keys ;

    # array...
    my (@array, @keys, @counts) ;
    while (<HUM>) {
        push @array, key_part($_) ;
    } ;
    my $p = '' ;
    my $c = 0 ;
    # sort the indices by key, then by position, so equal keys are grouped
    # together starting from their first instance
    foreach (sort { $array[$a] cmp $array[$b] || $a <=> $b } (0 .. $#array)) {
        my $k = $array[$_] ;
        if ($p ne $k) {
            if ($c) { push @keys, $p ; push @counts, $c ; } ;
            $p = $k ;
            $c = 1 ;
        }
        else {
            ++$c ;
        } ;
    } ;
    if ($c) { push @keys, $p ; push @counts, $c ; } ;
    # (note that this leaves @keys in key order; a further sort of the
    # (key, count) pairs by first instance would still be needed to get
    # them back into original file order)
... and with Perl, if it's less code it's generally faster !
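If you want to put numbers on the constant factors rather than argue about them, the core Benchmark module will do the comparison. A minimal sketch, with made-up data and a made-up key_part() standing in for the real input:

    use strict ;
    use warnings ;
    use Benchmark qw(cmpthese) ;

    # made-up sample: 100_000 lines drawn from 1_000 distinct keys
    my @lines = map { sprintf "key%04d rest of line\n", int rand 1000 } 1 .. 100_000 ;
    sub key_part { (split ' ', $_[0])[0] }

    cmpthese(-3, {
        hash => sub {
            my %count ;
            $count{ key_part($_) }++ for @lines ;
            my @keys = sort keys %count ;
        },
        array => sub {
            my @array = sort map { key_part($_) } @lines ;
            my (@keys, $prev) ;
            for (@array) {
                push @keys, $_ if !defined $prev or $_ ne $prev ;
                $prev = $_ ;
            }
        },
    }) ;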