http://qs321.pair.com?node_id=240634

Hello all,

I'm writing a technical paper on the use of Perl as a data mining tool, and I want your massed wisdom.

My intention is to write a paper that tantalizes people with the power of Perl and provides community wisdom for all of us who write data-processing scripts. I am not under contract to write this, nor will I make a dime off of it. In fact, I don't even have a place to submit this to... I'll probably just post it on my website.

For those who want a real publication, I think a book such as Data Munging with Perl would do. I just want to coalesce some experience into practical advice and example.

I've been programming Perl for a long time now. I could classify most of my work as 'data management' or perhaps 'data mining' -- bringing order and meaning to text data. Be it a logfile, a list, or a database dump, it seems that the method of extracting data can be codified. Or at least organized. Well, maybe some principles can be gleaned. Or how about just sympathy?

Casting about for a buzzword, I'd like to call it "Bottom-Up Data Mining". Or maybe "Bottom-Up Data Analysis". Akin to the idea of bottom-up programming, bottom-up data mining is an iterative process by which data, starting out as 'unknown', proceeds along a path of extraction to a final 'known' state, where all meaningful data (for the task at hand) can be extracted with acceptable accuracy. Starting at the bottom, the Perl programmer has some text files of data to mine, and perhaps written requirements. As the script is iteratively written and refined, data from the source files resolves into greater detail.
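To make that concrete, here's a minimal sketch of one such iteration (the file names and patterns are hypothetical, just to illustrate the loop): lines that match the patterns known so far are treated as 'known' data, and everything else is set aside as the 'unknown' residue that drives the next refinement of the script.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # One pass of bottom-up mining (file names and patterns are
    # hypothetical): lines matching a known pattern are extracted,
    # everything else is saved as residue for the next refinement.
    my @known = (
        qr/^(\d{4}-\d{2}-\d{2})\s+ERROR\s+(.*)$/,   # dated error lines
        qr/^(\d{4}-\d{2}-\d{2})\s+INFO\s+(.*)$/,    # dated info lines
    );

    open my $in,      '<', 'data.log'    or die "data.log: $!";
    open my $residue, '>', 'unknown.txt' or die "unknown.txt: $!";

    my ($matched, $total) = (0, 0);
    while (my $line = <$in>) {
        $total++;
        my $hit = 0;
        for my $re (@known) {
            if (my ($date, $msg) = $line =~ $re) {
                print "$date: $msg\n";    # 'known' data, ready for use
                $hit = 1;
                last;
            }
        }
        print {$residue} $line unless $hit;    # still 'unknown'
        $matched += $hit;
    }
    printf "matched %d of %d lines (%.1f%%)\n",
        $matched, $total, $total ? 100 * $matched / $total : 0;

Each pass shrinks unknown.txt; when what's left is noise you can safely ignore, the data has reached its 'known' state for the task at hand.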

There's nothing earth-shattering there. But the sheer volume of experience among Perlmonks could contribute a gazillion suggestions and hints. Writing a paper is my way of organizing my thoughts. It can also be a contribution to the world at large, showing how any programmer can learn to mine data with Perl in an afternoon.

Subjects I've been thinking about covering in the paper include the do's and don'ts of text processing, among them what I call "The Burrito Principle" (stolen from the Pareto Principle), which basically says to aim for the most useful data first ("80% of the meat is in 20% of the burrito"). I'll mention some good ways of parsing text, including paying careful attention to the record separator (see the sketch below), and modules on CPAN that may have already invented your wheel for you. I'm also thinking about a section on programming languages that are likely to be helpful for data mining, and languages that are likely to be more trouble than they're worth for it (Java).
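On the record-separator point, here's a small sketch of two useful settings of $/ (the file name and the %% delimiter are made up for illustration); for fancier formats, CPAN modules such as Text::CSV have already invented the wheel:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Two ways of telling Perl what a "record" is, via $/ (the input
    # record separator). File name and delimiter are hypothetical.

    {
        local $/ = '';    # paragraph mode: records separated by blank lines
        open my $fh, '<', 'records.txt' or die "records.txt: $!";
        while (my $para = <$fh>) {
            chomp $para;
            # $para is a whole blank-line-delimited stanza, not one line
        }
        close $fh;
    }

    {
        local $/ = "\n%%\n";    # custom delimiter, fortune-file style
        open my $fh, '<', 'records.txt' or die "records.txt: $!";
        while (my $rec = <$fh>) {
            chomp $rec;    # chomp removes the whole $/ string
            # process one %%-delimited record at a time
        }
        close $fh;
    }

Setting $/ once up front often turns a tricky multi-line parsing job into a simple one-record-at-a-time loop.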

Needless to say, the paper will have a subjective feel to it, but I'm offsetting that with code snippets to inject a dose of reality into my arguments.

So, your comments, suggestions, and offerings would be nice.

Rob