http://qs321.pair.com?node_id=186585


in reply to Spell Check Logic

First of all, you have to build the set of suggestions.

For this purpose, you should avoid string-distance measures such as Hamming distance, Text::Levenshtein, or Text::WagnerFischer at first (but you can use them later).
Instead, use a phonetic algorithm like Text::Soundex (which you should find along with Perl), Text::Metaphone, or Text::DoubleMetaphone.
These transform a word into a phonetic code that you can use to retrieve suggested words:

hello -> (phonetic alg.) -> H400
hullo -> (phonetic alg.) -> H400 (note: same code, because the words are phonetically similar)
harry -> (phonetic alg.) -> H600

(These are the actual Soundex codes.) So you can build a hash on disk (e.g. with DB_File) with phonetic codes as keys and, as values, the words that share that phonetic code:

{H400}->hello,hullo
{H600}->harry

and so on.
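The index-building step can be sketched in pure Perl. Text::Soundex provides a ready-made soundex() function; below is a minimal hand-rolled version of classic Soundex instead, so the sketch is self-contained, followed by the code-to-words index (the toy word list and the in-memory hash stand in for the real corpus and the DB_File tie):

    use strict;
    use warnings;

    # Minimal classic Soundex: first letter + up to three consonant digits.
    sub soundex {
        my ($word) = @_;
        my %digit = (
            b => 1, f => 1, p => 1, v => 1,
            c => 2, g => 2, j => 2, k => 2, q => 2, s => 2, x => 2, z => 2,
            d => 3, t => 3,
            l => 4,
            m => 5, n => 5,
            r => 6,
        );
        my @letters = split //, lc $word;
        my $first   = shift @letters;
        my $prev    = $digit{$first} // '';
        my $code    = uc $first;
        for my $l (@letters) {
            if (defined $digit{$l}) {
                $code .= $digit{$l} if $digit{$l} ne $prev;
                $prev = $digit{$l};
            }
            elsif ($l ne 'h' && $l ne 'w') {
                $prev = '';   # vowels separate equal digits; h/w do not
            }
        }
        return substr($code . '000', 0, 4);
    }

    # Build the phonetic-code => words index. With DB_File you would first do:
    #   tie my %index, 'DB_File', 'suggest.db';   # values must then be plain strings
    my %index;
    for my $word (qw(hello hullo harry)) {
        my $code = soundex($word);
        $index{$code} = defined $index{$code} ? "$index{$code},$word" : $word;
    }
    # %index is now ( H400 => 'hello,hullo', H600 => 'harry' )

The values are comma-joined strings rather than array references because a DB_File-tied hash can only store flat strings.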

Once you have built the hash (ahead of time, before running the spell checker), you parse the text. For each word you have to:

1) isolate the word
2) calculate the phonetic code of the word
3) retrieve the suggestions for this word (i.e. the words that have the same phonetic code)
4) see if the word is in the set of suggestions
4a) if yes, parse the next word (i.e. the word is in my corpus)
4b) if not, prompt the user with THIS set of suggestions
4ba) at this point you can use the string-distance algorithms to sort the suggested words (e.g. with Levenshtein)
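Steps 3-4ba can be sketched like this. The index is a toy in-memory one (in practice it would be tied to DB_File with keys from Text::Soundex or similar), and the levenshtein sub is a hand-rolled stand-in for Text::Levenshtein's distance():

    use strict;
    use warnings;
    use List::Util qw(min);

    # toy phonetic-code => words index (in practice: the DB_File hash built earlier)
    my %index = (H400 => 'hello,hullo', H600 => 'harry');

    # Classic dynamic-programming edit distance.
    sub levenshtein {
        my ($s, $t) = @_;
        my @prev = (0 .. length $t);
        for my $i (1 .. length $s) {
            my @cur = ($i);
            for my $j (1 .. length $t) {
                my $cost = substr($s, $i - 1, 1) eq substr($t, $j - 1, 1) ? 0 : 1;
                push @cur, min($prev[$j] + 1, $cur[$j - 1] + 1, $prev[$j - 1] + $cost);
            }
            @prev = @cur;
        }
        return $prev[-1];
    }

    my $word = 'hillo';                         # a misspelling whose code is H400
    my @suggestions = split /,/, $index{H400};  # step 3: same-code words
    if (grep { $_ eq $word } @suggestions) {    # step 4
        # step 4a: the word is in the corpus, move on
    }
    else {
        # step 4ba: sort the suggestions by edit distance before prompting
        my @sorted = sort { levenshtein($a, $word) <=> levenshtein($b, $word) } @suggestions;
        print "Did you mean: @sorted\n";
    }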

To isolate the words you can do, for example:

my $sentence = "worda wordb, wordc";
my @chars = split //, $sentence;
my $current_word = '';
foreach my $c (@chars) {
    if ($c =~ /[\W\d_]/) {   # any non-letter ends the current word
        # process $current_word here (if it is non-empty)
        $current_word = '';
    }
    else {
        $current_word .= $c;
    }
}
# and process the final $current_word after the loop
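Simpler still, split can hand you the words directly, using the same letter-only definition of a word:

    use strict;
    use warnings;

    my $sentence = "worda wordb, wordc";
    # split on runs of non-letters; grep drops a possible leading empty field
    my @words = grep { length } split /[\W\d_]+/, $sentence;
    # @words is now ("worda", "wordb", "wordc")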

Replies are listed 'Best First'.
Re: Re: Spell Check Logic
by robobunny (Friar) on Jul 31, 2002 at 20:28 UTC
    you could speed this up a bit by maintaining a list of the words themselves in a separate hash, and checking that before you calculate the phonetic code (step 2). that way, you only have to calculate the code for words that are not in the list. of course, that only helps if most words are spelled correctly :)
      You are right! :)

      But a hash (DB_File) with 2-3 million words plus phonetic codes is around 150+ MB.
      So to gain some speed-up you have to double the database: one keyed by phonetic codes and a second keyed by the correctly spelled words.
      And that is not always a good thing.

      But in this particular case, with only 100,000+ words, your suggestion is fine :)
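      In outline, the two-database idea looks like this (toy in-memory hashes standing in for the two DB_File files):

          use strict;
          use warnings;

          my %words = map { $_ => 1 } qw(hello hullo harry);    # keyed by correct words
          my %index = (H400 => 'hello,hullo', H600 => 'harry'); # keyed by phonetic codes

          sub check {
              my ($word) = @_;
              return 'ok' if exists $words{$word};  # fast path: no phonetic code needed
              # slow path: compute the phonetic code and look it up in %index ...
              return 'suggest';
          }

      Correctly spelled words never pay for a phonetic-code computation; only the misspellings fall through to the %index lookup.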