Your approach does n^2 comparisons with n = (number of words in the list). I took a different approach: sort the list, then compare each word only with the words that follow it in the array. That roughly halves the number of comparisons, which matters as n gets large, as it does here. I'm curious how grep with the regex will stand up against your approach.
As for the extra overhead of sorting the word list, my guess is that the up-front sorts pay for themselves as n grows. What do you think? I've been wanting to try out benchmarking for some time after seeing you use it so much. This could be a good opportunity.
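Something along these lines might work for the comparison, using the core Benchmark module's cmpthese. This is only a sketch: the sub names and the tiny word sample are mine, and for a fair test you'd load the real word list instead.

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Hypothetical sample list; swap in the real word file for a meaningful test.
my @words = map { lc } qw(at ate crate tree trip tread read ad ads);

sub all_pairs {    # n^2 style: compare every word against every other word
    my $hits = 0;
    for my $w (@words) {
        $hits += grep { $_ ne $w and /\Q$w\E/ } @words;
    }
    return $hits;
}

sub sorted_pairs {    # sort first, then compare only with following words
    my @sorted = sort { length($a) <=> length($b) } sort @words;
    my $hits = 0;
    for my $i ( 0 .. $#sorted - 1 ) {
        my $w = $sorted[$i];
        $hits += grep { /\Q$w\E/ } @sorted[ $i + 1 .. $#sorted ];
    }
    return $hits;
}

# Run each sub for at least 2 CPU seconds and print a comparison table.
cmpthese( -2, {
    all_pairs    => \&all_pairs,
    sorted_pairs => \&sorted_pairs,
} );
```

Including the sort inside sorted_pairs charges its overhead to that approach, which is the point of the comparison.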
use warnings;
use strict;

open my $log, '>', 'wordsinwords_LOG.txt' or die "Can't open log: $!";

# Sort alphabetically first so identical words end up together,
# then by length so each word precedes any longer word containing it.
chomp( my @words = sort { length($a) <=> length($b) } sort <DATA> );
@words = map { lc } @words;

for my $i ( 0 .. $#words - 1 ) {
    my $word = $words[$i];
    next if $word eq $words[ $i + 1 ];    # skip duplicate words

    # \Q...\E quotes regex metacharacters in case the word list has any
    my @matched = grep { /\Q$word\E/ and $_ ne "${word}s" and $_ ne "${word}'s" }
                  @words[ $i + 1 .. $#words ];
    print $log "$word => @matched\n" if @matched;
}
__DATA__
at
Ate
crate
tree
Trip
tread
read
ad
at
ads
crate's
The log file contains:
ad => read tread
at => ate crate crate's
ate => crate crate's
read => tread
This is interesting: SCOWL (Spell Checker Oriented Word Lists) at http://wordlist.sourceforge.net/ has a 652,000-word list plus a range of smaller lists.