Hi, I'm trying to create a tag cloud from a collection of texts. That is, given a set of texts (e.g., sentences), I need to count the frequency of each phrase. It's easy enough to count frequencies of individual words, but I'm wondering whether there is a good algorithm for counting phrases. Here's a naive way to count them:
my %words   = ();
my %phrases = ();

while (<DATA>) {
    chomp;
    my @words = split /\s+/;

    # count individual words
    ++$words{lc $_} foreach @words;

    # count phrases: every contiguous run of words (note this also
    # counts each single word a second time, as a one-word "phrase")
    for (my $i = 0; $i < @words; ++$i) {
        for (my $j = $i; $j < @words; ++$j) {
            ++$phrases{lc join(" ", @words[$i .. $j])};
        }
    }
}

print "Words:\n";
foreach my $wd (sort { $words{$b} <=> $words{$a} } keys %words) {
    print "$wd => $words{$wd}\n";
}

print "\n\nPhrases:\n";
foreach my $p (sort { $phrases{$b} <=> $phrases{$a} } keys %phrases) {
    print "$p => $phrases{$p}\n";
}
__DATA__
Mary had a little lamb
little lamb
John had a lamb
Mary and John both had a lamb
Mary and John had two little lambs
For my particular case, each sentence is about 50 words long, and there can be up to a few thousand such sentences. Since the naive approach generates n(n+1)/2 sub-phrases per n-word sentence (1,275 for 50 words), I doubt it will scale well.
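One mitigation I've sketched below: cap the phrase length, since a tag cloud presumably never needs phrases more than a handful of words long. To be clear, MAX_PHRASE_LEN is a name I made up for this sketch and 5 is an arbitrary cutoff; with the cap, each sentence contributes roughly n * MAX_PHRASE_LEN phrases instead of n(n+1)/2.

use strict;
use warnings;

# Sketch: cap the maximum phrase length. MAX_PHRASE_LEN is a constant
# I made up for this sketch; 5 is an arbitrary choice.
use constant MAX_PHRASE_LEN => 5;

my %phrases;
while (my $line = <DATA>) {
    chomp $line;
    my @words = split /\s+/, $line;
    for my $i (0 .. $#words) {
        # only extend the phrase up to MAX_PHRASE_LEN words
        my $last = $i + MAX_PHRASE_LEN - 1;
        $last = $#words if $last > $#words;
        for my $j ($i .. $last) {
            ++$phrases{lc join(" ", @words[$i .. $j])};
        }
    }
}

print "$_ => $phrases{$_}\n"
    for sort { $phrases{$b} <=> $phrases{$a} } keys %phrases;

__DATA__
Mary had a little lamb
little lamb

This also keeps the %phrases hash from filling up with long, once-only substrings that would never make it into the cloud anyway.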
Related to this, is there a good way to rule out common words (e.g., "a", "the", etc.)?
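The simplest thing I can think of is a stopword hash; the qw() list below is just a small sample I typed in, not a standard list. Filtering single words is a straightforward grep, and for phrases one option is to skip any phrase that starts or ends with a stopword:

use strict;
use warnings;

# Sketch: a stopword hash. The list below is just a small sample;
# a real list would be longer or loaded from a file.
my %stop = map { $_ => 1 } qw(a an the and of to had both);

my @words = qw(Mary had a little lamb);

# drop stopwords before counting individual words
my %words;
++$words{lc $_} for grep { !$stop{lc $_} } @words;

# for phrases, skip any phrase that starts or ends with a stopword
my %phrases;
for my $i (0 .. $#words) {
    for my $j ($i .. $#words) {
        next if $stop{lc $words[$i]} or $stop{lc $words[$j]};
        ++$phrases{lc join(" ", @words[$i .. $j])};
    }
}

print "Words: ",   join(", ", sort keys %words),   "\n";
print "Phrases: ", join(", ", sort keys %phrases), "\n";

Checking only the endpoints still allows stopwords inside a phrase (e.g., "mary had a little lamb" survives), which seems right for a tag cloud, but I'd be interested in better approaches.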