So it's understood that:
reading the file is ~ n work and inserting the keys into a hash is ~ 2m work (where m is the number of unique keys, taking into account the rearrangements required as the hash grows, assuming no pathology).
reading the file into an array is ~ 2n work (one to read and one to insert), but to sort it is ~ n log n (assuming no pathology), then you need to scan the array to spot the duplicates, which is a further ~ n work.
So hash: ~ n + 2m; array: ~ 2n + n log n.
Not all work is the same, however. Scanning the array certainly requires ~ n comparisons, but this is in a Perl loop, which may dominate the running time. Further, this does not account for the cost of reorganising the array to remove duplicates, if that's what you want to do...
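To put rough numbers on that, here's a sketch using the core Benchmark module to compare the two fills on synthetic data. Note the assumptions: key_part() is a hypothetical extractor (substitute your own), and n = 50_000, m ≈ 1_000 are arbitrary choices.

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Synthetic stand-in for the file: n = 50_000 lines, m ~ 1_000 unique keys.
my @lines = map { "k" . int(rand 1_000) . " payload\n" } 1 .. 50_000;

# Hypothetical key extractor -- substitute whatever suits your data.
sub key_part { (split ' ', $_[0])[0] }

# Hash fill: ~ n + 2m work.
my %by_hash;
$by_hash{ key_part($_) }++ for @lines;

# Array fill + sort + scan: ~ 2n + n log n + n work.
my %by_scan;
{
    my ($p, $c) = ('', 0);
    for (sort map { key_part($_) } @lines) {
        if ($p ne $_) { $by_scan{$p} = $c if $p ne '' ; ($p, $c) = ($_, 1) }
        else          { ++$c }
    }
    $by_scan{$p} = $c if $p ne '';
}

printf "unique keys: %d, methods agree: %s\n",
    scalar keys %by_hash,
    ( keys(%by_hash) == keys(%by_scan) ? "yes" : "NO" );

# Relative speed of the two fills alone:
cmpthese(10, {
    hash       => sub { my %h ; $h{ key_part($_) }++ for @lines },
    array_sort => sub { my @a = sort map { key_part($_) } @lines },
});
```

The cmpthese() table is only a rough guide, of course -- the point is that the sort's n log n and the extra Perl-level work show up together in the array_sort timing.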
However, that may not be the complete story. What do you intend to do with the information?
Suppose you want to print out the unique keys in key order:
the keys need to be extracted from the hash (~ m work), sorted (~ m log m), and then printed (~ m).
the array can simply be printed for ~ m work, most efficiently if that is combined with the scan-for-duplicates step.
so the totals now are hash: ~ n + 4m + m log m; and the array ~ 3n + n log n + m. So, where there are a lot of duplicates the hash approach is doing less work; where there are few duplicates there's apparently not a lot in it.
Suppose you want to print out the unique keys in original order (duplicates sorted by first instance):
with the hash you keep an auxiliary array with the keys in original order, so we have ~ n to read, ~ 2m to add to the hash, ~ m to add to the auxiliary array, ~ m to print, total: ~ n + 4m, with not a log n or log m in sight, so the total work decreases!
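A minimal, runnable sketch of that hash-plus-auxiliary-array approach -- the sample lines and the first-field key extraction are invented for illustration:

```perl
use strict;
use warnings;

# Stand-in for the file read; the keys and payloads are invented.
my @lines = ("banana x\n", "apple y\n", "banana z\n",
             "cherry w\n", "apple v\n", "banana u\n");

my (%count, @keys);
for my $line (@lines) {
    my ($k) = split ' ', $line;      # hypothetical key_part()
    push @keys, $k if !$count{$k};   # ~ m pushes, first-seen order kept
    $count{$k}++;                    # ~ 2m amortised hash work over the run
}

# ~ m to print -- no sort anywhere.
print "$_: $count{$_}\n" for @keys;
# prints:
#   banana: 3
#   apple: 2
#   cherry: 1
```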
with an array... this requires at least an auxiliary array containing the original array indexes; then the array indexes are sorted according to the key values; then the array indexes are scanned to count the number of times each key value appears... From a work perspective this is not apparently a big increase, but the extra code to work with the index array will affect the running time.
in this case the hash is a clear winner: not only is there no n log n component, but the work in the array sorting and scanning has clearly increased.
So what's the point of all this? Well:
big-O-wise, filling the hash is O(n) and filling the array and sorting it is O(n log n). But, for the whole process (assuming key order output) the hash is O(n + m log m) and the array is O(n + n log n). Big-O is broad-brush, but you do need to consider the entire problem.
hashes carry quite a hefty memory overhead (and for key order output the entire hash is overhead)... so a hash implementation will start to page earlier than an array one. Big-O is all very well, but it's not the whole story.
while the array implementation appears little different (unless m is a lot smaller than n), it does involve the scan step, which is a Perl loop, so we might have, roughly:
# hash...
while (<HUM>) {
    $hash{key_part($_)}++ ;
} ;
foreach (sort keys %hash) {
    print "$_\: $hash{$_}\n" ;
} ;

# array...
while (<HUM>) {
    push @array, key_part($_) ;
} ;
my $p = '' ;
my $c = 1 ;
foreach (sort @array) {
    if ($p ne $_) {
        print "$p\: $c\n" if $p ne '' ;   # guard against the empty initial group
        $p = $_ ;
        $c = 1 ;
    }
    else {
        ++$c ;
    } ;
} ;
print "$p\: $c\n" if $p ne '' ;
which puts the array implementation in its most favourable light, but the extra code for the array version will carry weight not accounted for in the big-O analysis.
Or, in short: before embarking on big-O, you do have to consider at least the major components of the whole problem, and beware of counting apples and oranges.
FWIW, the pseudocode for keys in original file order is:
# hash...
while (<HUM>) {
    my $k = key_part($_) ;
    push @keys, $k if !$hash{$k} ;
    $hash{$k}++ ;
} ;
@counts = map $hash{$_}, @keys ;

# array...
while (<HUM>) {
    push @array, key_part($_) ;
} ;
my $p = '' ;
my $c = 1 ;
foreach (sort { $array[$a] cmp $array[$b]
             || $a         <=> $b         } (0..$#array)) {
    my $k = $array[$_] ;
    if ($p ne $k) {
        if ($p ne '') {                   # guard against the empty initial group
            push @keys,   $p ;
            push @counts, $c ;
        } ;
        $p = $k ;
        $c = 1 ;
    }
    else {
        ++$c ;
    } ;
} ;
push @keys,   $p ;
push @counts, $c ;
# NB: @keys/@counts come out here in sorted key order -- a further pass over
#     the first-instance indexes is still needed to restore original order.
... and with Perl, if it's less code it's generally faster!
