Find out which words in your dictionary can be rearranged the most times to form other "valid" words. The script makes one iteration per word in the file, plus one iteration per distinct sorted word (join '', sort split //, $word). So if your dictionary contains no anagrams, it takes two passes through the entire dictionary. If every word has one anagram (two words per sorted word), it takes 1.5 passes through your dictionary. If every word has three anagrams (four words per sorted word), it takes 1.25 passes.
passes = (1/words-per-sorted-word) + 1
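The sorted-word trick behind all of this is worth seeing on its own: two words are anagrams exactly when their sorted-letter signatures are equal. A minimal illustration (the sample words here are mine, not from the dictionary):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sort a word's letters to get its anagram "signature".
my $sig = sub { join '', sort split //, shift };

print $sig->("listen"), "\n";   # eilnst
print $sig->("silent"), "\n";   # eilnst -- same signature, so anagrams
```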
My guess is that there is a more efficient algorithm as far as computation goes, and I would like to think the memory requirements could be cut.
Note: Depending on your dictionary, you can probably drop the sort in the inner print loop (foreach my $word (sort @{$word_ref}) {) ... if you care about that sort of thing.
#!/usr/bin/perl -w
use strict;
use Getopt::Std;

my (%options);
getopts("d:h", \%options);
if ($options{h}) {
    print <<'eof';
-d file: dictionary file
eof
    exit;
}

my ($sorted_word, %word, @most_ref);
my ($number)    = 0;
my ($word_file) = $options{d} || "/usr/share/dict/words";

open(WORDS_FH, "<" . $word_file) or die("Can't open $word_file: $!\n");

# Bucket each word by length, then by its sorted-letter signature.
while (<WORDS_FH>) {
    chomp;
    $sorted_word = join '', sort split //, $_;
    push @{$word{length($sorted_word)}{$sorted_word}}, $_;
}

# Find the signature(s) with the most words behind them.
foreach my $length (keys %word) {
    foreach $sorted_word (keys %{$word{$length}}) {
        if ($number < $#{$word{$length}{$sorted_word}} + 1) {
            $number   = $#{$word{$length}{$sorted_word}} + 1;
            @most_ref = ();
            push @most_ref, $word{$length}{$sorted_word};
        } elsif ($number == $#{$word{$length}{$sorted_word}} + 1) {
            push @most_ref, $word{$length}{$sorted_word};
        }
    }
}

print $number . ":\n";
foreach my $word_ref (@most_ref) {
    foreach my $word (sort @{$word_ref}) {
        print " " x 2 . $word . "\n";
    }
    print "\n";
}
Update 2003:01:29 7:54:36: Fixed $option{d}. jmcnamara++ && Coruscate++
Update 2003:01:29 10:43:12: Dropped the length key of %word as per boo radley's suggestion. I am wondering whether memory usage will go up with one large hash as opposed to many (~20 for my dictionary) smaller hashes. New code below (untested)
#!/usr/bin/perl -w
use strict;
use Getopt::Std;

my (%options);
getopts("d:h", \%options);
if ($options{h}) {
    print <<'eof';
-d file: dictionary file
eof
    exit;
}

my ($sorted_word, %word, @most_ref);
my ($number)    = 0;
my ($word_file) = $options{d} || "/usr/share/dict/words";

open(WORDS_FH, "<" . $word_file) or die("Can't open $word_file: $!\n");

# Bucket each word by its sorted-letter signature (no length key).
while (<WORDS_FH>) {
    chomp;
    $sorted_word = join '', sort split //, $_;
    push @{$word{$sorted_word}}, $_;
}

# Find the signature(s) with the most words behind them.
foreach $sorted_word (keys %word) {
    if ($number < $#{$word{$sorted_word}} + 1) {
        $number   = $#{$word{$sorted_word}} + 1;
        @most_ref = ();
        push @most_ref, $word{$sorted_word};
    } elsif ($number == $#{$word{$sorted_word}} + 1) {
        push @most_ref, $word{$sorted_word};
    }
}

print $number . ":\n";
foreach my $word_ref (@most_ref) {
    foreach my $word (sort @{$word_ref}) {
        print " " x 2 . $word . "\n";
    }
    print "\n";
}
Update 2003:01:29 16:19:06: Tested the above code (it works)
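For anyone who wants to try the bucket-by-signature idea without a dictionary file, here is a compact, self-contained sketch of the same algorithm reading sample words from __DATA__ (the word list is just an illustration, not from the node above):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Bucket words by their sorted-letter signature.
my %bucket;
while (my $w = <DATA>) {
    chomp $w;
    push @{ $bucket{ join '', sort split //, $w } }, $w;
}

# Keep the largest bucket(s) in a single pass over the values.
my $max = 0;
my @best;
for my $group (values %bucket) {
    if    (@$group >  $max) { $max = @$group; @best = ($group) }
    elsif (@$group == $max) { push @best, $group }
}

print "$max:\n";
for my $group (@best) {
    print "  $_\n" for sort @$group;
    print "\n";
}

__DATA__
pots
stop
tops
listen
silent
```

With the sample data, "pots"/"stop"/"tops" all sort to "opst", so that bucket of three wins over the "listen"/"silent" pair.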