First, you have to remove all the markup. That alone does not seem trivial, since the MediaWiki format is a big mess to begin with.
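To give a sense of the problem: even a minimal, lossy stripper needs several passes, and still misses nested templates, tables, refs and so on. A sketch (the patterns are my own guess and handle only the simplest, non-nested constructs):

use strict;
use warnings;

# Hypothetical sample; real articles are far messier.
my $text = "'''Perl''' is a [[programming language|language]] {{citation needed}}.";

$text =~ s/\{\{[^{}]*\}\}//gs;                   # drop simple {{templates}}
$text =~ s/\[\[(?:[^|\]]*\|)?([^\]]*)\]\]/$1/g;  # [[target|label]] -> label
$text =~ s/'{2,}//g;                             # ''italic'' / '''bold''' quotes
$text =~ s/<[^>]*>//gs;                          # inline HTML/XML tags

print $text, "\n";   # prints "Perl is a language ."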
All I need is to parse the text from an XML dump of the articles, enwiki-latest-pages-articles.xml, to build a clean dictionary with good term statistics. I kind of hoped there was a module that retrieves the plain text from the content. Once I have that, creating the dictionary is a one-liner.
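And indeed that step really is that short; a sketch, assuming $clean_text already holds de-markup'd article text:

use strict;
use warnings;

my $clean_text = "the quick brown fox jumps over the lazy dog the fox";

# The "one line" term-frequency dictionary:
my %count;
$count{ lc $1 }++ while $clean_text =~ /(\w+)/g;

print "$_: $count{$_}\n" for sort keys %count;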
Yes, I already found that Parse::MediaWikiDump does not do it, but at least it reads a multi-gigabyte file gracefully. I think I probably need to apply some filtering on what Parse::MediaWikiDump gives me: say, keep only lines without special characters, hoping those contain only plain text. Something like this:
use strict;
use warnings;
use Parse::MediaWikiDump;

my $pages = Parse::MediaWikiDump::Pages->new('enwiki-latest-pages-articles.xml');

while (defined(my $page = $pages->next)) {
    # text() returns a reference to the raw wikitext string
    my $text = ${ $page->text };
    ## process $text, which is still quite messy
}
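That "no special characters" filter could be as crude as a per-line character-class test. A heuristic sketch (the character set is my guess at what marks a markup-bearing line, and it will both over- and under-filter):

# Meant to run on $text inside the loop above.
sub plain_lines {
    my ($wikitext) = @_;
    return join "\n", grep { !/[{}\[\]<>|=*#]/ } split /\n/, $wikitext;
}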