in reply to Markov Chain Program
$you = new YOU;
honk() if $you->love(perl)
Re: Re: Markov Chain Program
by Anonymous Monk on Aug 29, 2001 at 19:34 UTC
But I'm planning on rewriting it shortly after it's done, and will do it with digraphs (two-character pairs, including punctuation and spaces, though probably not line breaks) after learning how nice those can be for old-school cryptography; digraphs should theoretically be better than trigraphs and monographs (single letters).
Just a thought. (My current problems lie less in the above theory or programming, which is all very easy, than in the way I strip the text and from where... I'm trying several boards, as well as doing it in an amusing way in newsgroups.)
You ask how this works and she explains:
Find some body of text (in our case, text files) that you want to imitate. For every pair of words that occurs in the text, keep track of each word that can follow that pair of words. So, for every pair of words, you would know a) which words followed that pair of words AND b) know at what probability those words might follow the pair of words. (See examples below.)
Using the information gathered in the previous step, start with a pair of two consecutive words ($word_one and $word_two) that occur in the text and print those two words. Then randomly choose the next word ($next_word) according to the probability that it would follow those two words, and print that word. Now use the second word ($word_two) and the new word ($next_word) as your two consecutive words, and repeat this process until you have generated the amount of text you want or hit a word pair that has no next word.
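A minimal Perl sketch of that generation loop, assuming the pair-to-next-word data has already been gathered (the %next_words name and the tiny hard-coded table are my own, not part of the exercise):

```perl
use strict;
use warnings;

# "word1 word2" => list of observed next words; repeats encode frequency,
# so a uniform pick from the list reproduces the observed probabilities.
my %next_words = (
    "He didn't"   => ['come', 'come', 'come'],
    "didn't come" => ['in', 'in', 'in'],
    'come in'     => ['a', 'a', 'a'],
    'in a'        => ['Jeep.', 'pouch.', 'plane.'],
);

my ($word_one, $word_two) = ('He', "didn't");
my @generated = ($word_one, $word_two);

while (@generated < 20) {                        # stop at the word limit...
    my $candidates = $next_words{"$word_one $word_two"}
        or last;                                 # ...or at a pair with no next word
    my $next_word = $candidates->[ rand @$candidates ];
    push @generated, $next_word;
    ($word_one, $word_two) = ($word_two, $next_word);
}
print "@generated\n";
```

With this toy table the chain always dies out after six words, since nothing follows "a Jeep." and friends; on a real corpus the --words limit is what usually stops it.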
Let us look at an example from The New Testament According to Dr. Seuss:

    He didn't come in a Jeep.
    He didn't come in a pouch.
    He didn't come in a plane.

If we were to analyze the word pairs, we see the following pairs of words in the text:

    He didn't   => come (3)
    didn't come => in (3)
    come in     => a (3)
    in a        => Jeep. (1), pouch. (1), plane. (1)
    a Jeep.     => He (1)
    Jeep. He    => didn't (1)
    a pouch.    => He (1)
    pouch. He   => didn't (1)
We can see that the word pair He didn't occurred three times, each time followed by the word come (at 100% probability). And the word pair in a occurred three times, followed by either Jeep., pouch, or plane (each of these with a 33.3% probability).
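In Perl, these pair-to-next-word counts fit naturally in a hash of hashes; here is a sketch using text matching the counts described in this paragraph (%seen is a name of my choosing):

```perl
use strict;
use warnings;

my %seen;    # $seen{"word1 word2"}{$next_word} = count
my @words = split ' ',
    "He didn't come in a Jeep. He didn't come in a pouch. He didn't come in a plane.";

# Every consecutive triple of words contributes one (pair => next word) count.
for my $i (0 .. $#words - 2) {
    $seen{"$words[$i] $words[$i+1]"}{ $words[ $i + 2 ] }++;
}

# "He didn't" was followed by "come" all 3 times, i.e. 100% probability.
my $pair  = $seen{"He didn't"};
my $total = 0;
$total += $_ for values %$pair;
printf "%s: %.1f%%\n", $_, 100 * $pair->{$_} / $total for sort keys %$pair;
```

Dividing each count by the pair's total turns the raw tallies into the probabilities quoted above (100% for "come", 33.3% each for "Jeep.", "pouch", and "plane").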
Your task is to write a program called babble that will read text from <> and apply the Markov Chain algorithm to generate random text that reads like the input text.
Your program will also take three options (you are advised to use Getopt::Long qw(GetOptions) but you may use other methods if you insist):
--words (the total number of words to generate)
You are advised to implement --show_pairs first. This will require designing a data structure to store the "word pair" to "next word" mappings (when you hear "map", you might think "hash" or "hashref") and then writing a subroutine to load/build this data structure from the input text. Don't worry about capitalization and punctuation -- you can treat anything that's not whitespace as word characters (i.e., @words = split() is a perfectly acceptable construct to use to get your words). Once you have --show_pairs working, you should be able to do something like this:
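Putting those pieces together, one possible skeleton looks like this (the sub names load_pairs/show_pairs and the exact --show_pairs output format are my guesses, not mandated by the assignment):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Getopt::Long qw(GetOptions);

GetOptions(
    'words=i'    => \my $words,        # total number of words to generate
    'show_pairs' => \my $show_pairs,   # dump the pair => next-word table
) or die "usage: babble [--words N] [--show_pairs] file ...\n";

my %pairs;    # $pairs{"word1 word2"}{$next_word} = count

# Read every word from <>; anything that isn't whitespace is a word.
sub load_pairs {
    my @words = map { split ' ' } <>;
    for my $i (0 .. $#words - 2) {
        $pairs{"$words[$i] $words[$i+1]"}{ $words[ $i + 2 ] }++;
    }
}

sub show_pairs {
    for my $pair (sort keys %pairs) {
        my $next = $pairs{$pair};
        print "$pair => ",
            join(', ', map { "$_ ($next->{$_})" } sort keys %$next), "\n";
    }
}

load_pairs() if @ARGV;    # only read when input files were named
show_pairs() if $show_pairs;
```

You would run it as, say, perl babble --show_pairs some_text.txt (the filename is illustrative). Slurping all the words at once, rather than splitting line by line, means word pairs that straddle a line break are counted too.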
Edit by dws to rescue formatting