in reply to Re: challanging the dictionary
in thread challenging the dictionary
blokhead,
This seems to be a relatively fast approximation algorithm (64K words in about 5 seconds):
#!/usr/bin/perl
use strict;
use warnings;

my $file = $ARGV[0] || 'words.txt';
open(my $fh, '<', $file) or die "Unable to open '$file' for reading: $!";

# Tally how often each letter appears across the whole list
my %freq;
while (<$fh>) {
    tr/a-z//cd;
    ++$freq{$_} for split //;
}

# Repeatedly take the rarest uncovered letter, then rescan the file for
# the word containing it that covers the most still-uncovered letters
while (%freq) {
    my ($small) = sort { $freq{$a} <=> $freq{$b} } keys %freq;
    seek $fh, 0, 0;
    my ($max, $word) = (-1, '');
    while (<$fh>) {
        last if $max == keys %freq;
        next if index($_, $small) == -1;
        tr/a-z//cd;
        my %uniq = map { $_ => undef } split //;
        delete $uniq{$small};
        my $cnt = grep defined $freq{$_}, keys %uniq;
        ($max, $word) = ($cnt, $_) if $cnt > $max;
    }
    print "$word\n";
    delete @freq{ split //, $word };
}

__END__
photojournalism
quarterbacked
detoxifying
blowzier
aardvark
The algorithm is quite simple. Take the rarest letter not yet covered, find the word containing it that has the most unique letters not yet covered, emit that word, and remove its letters from the pool. Wash-Rinse-Repeat until every letter is covered.
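One round of the greedy step can be seen on a toy list (a sketch only, not the posted script; the word list and variable names here are made up for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical toy word list
my @words = qw(cat cot dog bird);

# Count how often each letter appears across all words
my %freq;
for my $w (@words) {
    $freq{$_}++ for split //, $w;
}

# Rarest letter; break frequency ties alphabetically so the pick is stable
my ($rarest) = sort { $freq{$a} <=> $freq{$b} or $a cmp $b } keys %freq;

# Among words containing the rarest letter, pick the one covering
# the most distinct still-needed letters (the greedy step)
my ($best, $max) = ('', -1);
for my $w (@words) {
    next if index($w, $rarest) == -1;
    my %uniq = map { $_ => 1 } split //, $w;
    my $cnt  = grep { exists $freq{$_} } keys %uniq;
    ($best, $max) = ($w, $cnt) if $cnt > $max;
}
print "rarest=$rarest best=$best\n";
```

Here the rarest letter (ties broken alphabetically) is "a", so the first word chosen is "cat"; a full run would then drop c, a, and t from %freq and repeat.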
Cheers - L~R
In Section: Seekers of Perl Wisdom