in reply to Re: Re: Re: Perl's pearls
in thread Perl's pearls

I have to confess that, given my relatively brief Perl experience, this second example is not as easy to read as the first one.
Anyway, this second script is even slower, because it accesses the hash more than necessary. Access during the input phase is minimal, but during the output phase the hash is used twice: once to extract the keys and once to get its values (@list = sort @{$words{$_}}). That adds up to scanning an 85_000-item hash twice.
Moreover, this script sorts the complete hash, instead of filtering out the items with anagrams first and sorting only those.
As an example, if we replace the second block in the script with this line
print map {"@$_\n" } sort {$a->[0] cmp $b->[0]} grep { @$_ > 1} values %words;
it becomes almost 20% faster.
Any further difference in speed should be attributed to the relative efficiency of strings vs arrays (more on this topic later).
My school of programming is quite pragmatic. Since I usually work with large chunks of data that I get from databases, I learned how to minimize the heavy loads in a program.
In this particular case, I know that we are dealing with a potentially huge hash. Every unnecessary access to such structure makes the program slower.
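To illustrate the point, here is a minimal sketch (the key and words are hypothetical) of one way to avoid repeated lookups into a large hash: cache a reference to the slot and work through it.

```perl
#!/usr/bin/perl
use strict;
use warnings;

my %words;
my $key = "abt";   # hypothetical signature key

# two lookups: the hash is searched on each access
$words{$key} = "bat";
$words{$key} .= " tab";

# one lookup: cache a reference to the slot, then append through it
my $slot = \$words{$key};
$$slot .= " abt";

print "$words{$key}\n";   # bat tab abt
```

Each `$words{$key}` access costs a hash lookup; the reference form pays that cost once, which matters when the same slot is touched repeatedly inside a loop.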
I want to find an acceptable compromise between readability and efficiency. Maximum efficiency may come at the price of readability, and maximum readability may come at the price of efficiency.
About readability, while there are style principles that might give a common base, reading advanced programs requires some advanced knowledge. Therefore, readability is subjective, and it is a blend of language knowledge and style principles.

Back to business: I made a new version of my script, modified for speed. No warnings, no strict and no declarations (but I tried it with everything enabled before presenting it here). I think it is easily readable, except maybe the last line, for which I provided an explanation in the main node (remembering my first days with Perl).
while (<>) {
    chomp;
    $_ = lc $_;
    $signature = pack "C*", sort unpack "C*", $_;
    if (exists $words{$signature}) {
        next if $words{$signature} =~ /\b$_\b/;
        $words{$signature} .= " ";
    }
    $words{$signature} .= $_;
}
print join "\n", sort grep {tr/ //} values %words;
print "\n";
This script touches the hash three times directly, plus two times conditionally. The first access is made with the exists function. If this test is true, two more accesses are performed, but only for those items that have anagrams or duplicates (in our case, about 15% of the items). Beyond that, the hash is accessed to insert the words and, only once, to get the results.
It runs in under 4 seconds on the 100_000 words that I collected, while merlyn's second script runs in 6.7 seconds.
I don't want to start a competition with anybody (especially not merlyn, whom I admire and respect), but I would just like to point out that my script, more than a matter of taste, is the result of some research on efficiency issues, as I have already stated in my main node.
I benchmarked the resource consuming parts of this short script, and my choice of pack vs split and strings vs arrays is due to the timing of the relative performance.
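As a side note, here is a minimal sketch (the word is hypothetical) of the two equivalent ways of building the anagram signature: with pack/unpack, as in the script above, and with split/join as the alternative. A numeric sort is used on the character codes so the two forms agree character for character.

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $word = lc "Listen";   # hypothetical input word

# pack/unpack: sort the character codes numerically, repack as a string
my $sig_pack = pack "C*", sort { $a <=> $b } unpack "C*", $word;

# split/join: sort the characters, rejoin them
my $sig_split = join "", sort split //, $word;

print "$sig_pack $sig_split\n";   # eilnst eilnst
```

Both produce identical signatures for any word, so anagrams map to the same hash key either way; the choice between them is purely one of speed.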
In particular, I extensively benchmarked the performance of hashes of strings vs hashes of arrays.
There are three operations that affect this data structure in our anagrams application:
1. append an item at the end;
2. count how many items are in the array or string;
3. fetch all the items at once (string interpolation).
In two of these operations (1 and 3), strings are faster. If my application only needs operations 1 to 3 (i.e. with no access to the items individually), strings are still faster, since the speed gain in insertion and fetching compensates for the slower counting. Arrays are faster only if I want to access items one by one.
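The three operations above can be sketched side by side for the two representations (the key and items are hypothetical):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my %with_str;      # hash of space-separated strings
my %with_arr;      # hash of array references
my $sig = "abt";   # hypothetical signature key

# 1. append an item at the end
$with_str{$sig} = "bat";
$with_str{$sig} .= " tab";           # separator plus new item
$with_arr{$sig} = ["bat"];
push @{$with_arr{$sig}}, "tab";

# 2. count the items (for strings, count separators and add one)
my $str_count = 1 + ($with_str{$sig} =~ tr/ //);
my $arr_count = scalar @{$with_arr{$sig}};

# 3. fetch all the items at once (string interpolation)
my $str_all = "$with_str{$sig}";     # already a single string
my $arr_all = "@{$with_arr{$sig}}";  # implicit join with spaces

print "$str_count $arr_count\n";     # 2 2
print "$str_all | $arr_all\n";       # bat tab | bat tab
```

The string version counts via tr/ //, which is why the anagrams script can use grep {tr/ //} to keep only the hash values holding more than one word.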

An explanation is necessary for the slower performance of arrays in string interpolation.
my @array = qw(one two three);

print @array;     # output: 'onetwothree'
                  # same as: foreach (@array) { print $_ }

print "@array";   # output: 'one two three'
                  # same as: print join " ", @array;
The above code fragment shows the effects of string interpolation. An array is merged into a string with its items separated by a space. This is standard Perl behavior. This operation is roughly the same as using join on the array explicitly and this fact should account for the slower performance.
For small hashes the difference is almost insignificant, and in that case I would prefer an array, for a cleaner data structure. In my anagrams script I preferred strings because I am dealing with potentially huge input.
The following is the benchmarking code that I used to evaluate the relative speed of these structures.
#!/usr/bin/perl -w
use strict;
use Benchmark;

my $iterations = 200_000;
my %with_str;      # hash containing strings
my %with_arr;      # hash containing arrays
my $strcount = 0;  # counter for hash of strings
my $arcount  = 0;  # counter for hash of arrays
my ($constant1, $constant2) = ("abcd", "dcba"); # strings used to fill the items

timethese ($iterations,  # inserts two elements per each hash value
  {
    "insert string" => sub {
        $with_str{$strcount} .= "$constant1$strcount";
        $with_str{$strcount++} .= " $constant2$strcount"
    },
    "push array" => sub {
        push @{$with_arr{$arcount}}, "$constant1$arcount";
        push @{$with_arr{$arcount++}}, "$constant2$arcount"
    }
  });

my $count = 0;
$arcount  = 0;
$strcount = 0;

timethese ($iterations,  # counts items for each hash value
  {
    "count string items" => sub {
        $count = $with_str{$strcount++} =~ tr/ //;
    },
    "count array items" => sub {
        $count = scalar @{$with_arr{$arcount++}}
    }
  });

$arcount  = 0;
$strcount = 0;
my $output = "";

timethese ($iterations,  # string interpolation
  {
    "fetch string" => sub { $output = "$with_str{$strcount++}" },
    "fetch array"  => sub { $output = "@{$with_arr{$arcount++}}" }
  });

$count    = 0;
$arcount  = 0;
$strcount = 0;

timethese ($iterations,  # access separate items
  {
    "items from string" => sub {
        foreach (split / /, $with_str{$strcount}) { $output = $_; }
        $strcount++;
    },
    "items from array" => sub {
        foreach (@{$with_arr{$arcount}}) { $output = $_; }
        $arcount++;
    }
  });

=pod
Benchmark: timing 200000 iterations of insert string, push array...
 insert string: 3 wallclock secs ( 1.92 usr + 0.14 sys = 2.06 CPU)
    push array: 3 wallclock secs ( 2.39 usr + 0.15 sys = 2.54 CPU)
timing 200000 iterations of count array items, count string items...
 count string items: 2 wallclock secs ( 0.83 usr + 0.00 sys = 0.83 CPU)
  count array items: 0 wallclock secs ( 0.64 usr + 0.00 sys = 0.64 CPU)
timing 200000 iterations of fetch array, fetch string...
 fetch string: 1 wallclock secs ( 0.59 usr + 0.00 sys = 0.59 CPU)
  fetch array: 2 wallclock secs ( 1.13 usr + 0.00 sys = 1.13 CPU)
timing 200000 iterations of items from array, items from string...
 items from string: 2 wallclock secs ( 2.65 usr + 0.07 sys = 2.72 CPU)
  items from array: 1 wallclock secs ( 2.02 usr + 0.07 sys = 2.09 CPU)

totals (inserting, counting items in each hash value,
        and fetching all the values at once)
string: 3.34
array : 4.16

totals (inserting, counting items, and fetching items
        one by one from each hash value)
string: 5.40
array : 5.05
=cut
 _  _ _  _  
(_|| | |(_|><