First, you have to remove all the markup. That alone does not seem trivial, since the MediaWiki format is a big mess to begin with.
All I need is to parse the text out of an XML dump of the articles
(enwiki-latest-pages-articles.xml) to create a clean dictionary with good term statistics. I had hoped there was a module that retrieves the pure text from the content. Once I have that, building the dictionary is a one-liner.
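For reference, the dictionary step really is nearly a one-liner once the text is clean. A minimal sketch, assuming whitespace/punctuation-separated terms (the sample string is just made-up input):

```perl
use strict;
use warnings;

# Term-frequency dictionary from already-cleaned text.
my $clean_text = "the quick brown fox jumps over the lazy dog the";
my %count;
$count{lc $_}++ for grep { length } split /\W+/, $clean_text;
```

After this, `%count` maps each lowercased term to its frequency.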
Yes, I already found that the MediaWiki parser does not do it, but at least it gracefully reads a multi-gigabyte file. I think I probably need to apply some filtering to what the parser gives me: say, keep only lines without special characters, hoping those contain only plain text. So something like this:
use Parse::MediaWikiDump;

my $pages = Parse::MediaWikiDump::Pages->new("enwiki-latest-pages-articles.xml");
while (defined(my $page = $pages->next))
{
    my $text = ${ $page->text };  # text() returns a reference to the wikitext
    ## process $text, which is quite messy
}
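As a rough illustration of that filtering step, here is a sketch of a crude cleanup pass plus term counting. The regexes are my own approximations for the most common markup (templates, links, tags, quotes, list/heading markers), not a faithful wikitext parser, and the sample string is made up:

```perl
use strict;
use warnings;

# Crude wikitext cleanup: a rough approximation, not a real MediaWiki parser.
sub clean_wikitext {
    my ($text) = @_;
    $text =~ s/\{\{[^{}]*\}\}//gs;                   # non-nested templates like {{stub}}
    $text =~ s/\[\[(?:[^|\]]*\|)?([^\]]*)\]\]/$1/g;  # [[target|label]] -> label
    $text =~ s/<[^>]+>//g;                           # HTML-ish tags like <ref>
    $text =~ s/'{2,}//g;                             # bold/italic quote markup
    $text =~ s/^[=*#:;]+//mg;                        # heading and list markers
    return $text;
}

# Count term frequencies in the cleaned text.
sub term_counts {
    my ($text) = @_;
    my %count;
    $count{lc $_}++ for $text =~ /\b[a-zA-Z]+\b/g;
    return \%count;
}

my $counts = term_counts(
    clean_wikitext("'''Perl''' is a [[programming language|language]].{{stub}}")
);
```

Templates with nested braces and tables will slip through this, so it only gets you "mostly text", which may still be good enough for term statistics over a large dump.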