Re: Framework for News Articles

by smalhotra (Scribe)
on Mar 25, 2004 at 00:36 UTC


in reply to Framework for News Articles

There seems to be some confusion about the intentions of this project; I apologize for not being clear. Thanks for raising these issues. They help clear up ambiguities as well as implementation problems (copyrights, changing websites).

1. Khabar itself is not a crawler or an aggregator. Given a page from a website, it parses out data you can use for whatever purpose, separating article-specific content from things like page headers, menus, etc. What you do with the data is up to you, in accordance with the site's usage license. You could use Khabar to read pages found by a crawler or aggregator. (A rough sketch of what a parser might look like follows this list.)
2. In most cases downloading this content for personal use is fair. It is really no different from Finance::Quote.
3. Dealing with page structure changes is up to the person who writes the parser. Good idea; perhaps it should be suggested that they write tests to ensure the parser/format is still valid. (A sketch of such a test also follows this list.) I like the XML idea, but it's perhaps too much for the first version.
4. I suggest that the parsers return any advertisements they can read from the page as part of fair use. The person using the data can decide what to do with it. In general, the more details you can accurately parse out, the better.
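
To make point 1 concrete, here is a rough sketch of what a site-specific parser might look like. Khabar doesn't exist in this form yet, so the package name, the parse method, the returned fields, and the HTML class names are all hypothetical; the regexes are naive placeholders for whatever extraction a real parser would do:

    # Sketch of a hypothetical site-specific parser. The package name,
    # method, and returned fields are assumptions, not a published API.
    package Khabar::Parser::ExampleNews;
    use strict;
    use warnings;

    sub new { my $class = shift; return bless {}, $class }

    # Take raw HTML and return only the article-specific data,
    # leaving page headers, menus, ads, etc. behind.
    sub parse {
        my ($self, $html) = @_;
        my %article;
        ($article{title})  = $html =~ m{<h1[^>]*>(.*?)</h1>}s;
        ($article{byline}) = $html =~ m{<span class="byline">(.*?)</span>}s;
        ($article{body})   = $html =~ m{<div class="story">(.*?)</div>}s;
        return \%article;
    }

    1;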
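And for point 3, a sketch of the kind of regression test a parser author might ship: run it against a freshly fetched page, and the fields come back empty when the site changes its layout. The parser module and URL here are the hypothetical ones from the sketch above:

    # Sketch of a parser regression test (point 3). Assumes the
    # hypothetical Khabar::Parser::ExampleNews above; the URL is made up.
    use strict;
    use warnings;
    use Test::More tests => 3;
    use LWP::Simple qw(get);
    use Khabar::Parser::ExampleNews;

    my $html = get('http://example.com/news/12345.html')
        or die 'could not fetch sample page';

    my $article = Khabar::Parser::ExampleNews->new->parse($html);

    ok( defined $article->{title}  && length $article->{title},  'title still parses'  );
    ok( defined $article->{byline} && length $article->{byline}, 'byline still parses' );
    ok( defined $article->{body}   && length $article->{body},   'body still parses'   );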

Keep it coming ...

Replies are listed 'Best First'.
same prob. different approach
by g00n (Hermit) on Mar 25, 2004 at 07:31 UTC
    at the mercy of change

    My problem is I want data, not *pretty web pages*: raw data in feed format that I can process. I'm pretty much getting the results you are looking for now, without beating my head against parsing HTML with all its problems, namely that you are at the mercy of a web designer's whim to change the layout.

    use rdf, rss or pda feeds

    So I avoid HTML. I'm lazy. I look for the RSS, RDF, or PDA HTML pages, point my spider at them, and dump them in a directory for later parsing. Most news sites have RSS feeds (though my local newspaper, The Age, supplies RSS feeds for a fee but produces a lite page for PDAs), so some parsing is still necessary.
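
    Roughly, the spider-and-dump step can be as small as this; a sketch only, with a made-up feed URL and directory:

        # Sketch of the spider-and-dump approach: mirror a feed to disk,
        # then parse it later with XML::RSS. Feed URL and paths are made
        # up, and the feeds/ directory is assumed to exist.
        use strict;
        use warnings;
        use LWP::Simple qw(mirror);
        use XML::RSS;

        my $feed = 'http://example.com/news.rss';
        my $file = 'feeds/news.rss';

        mirror($feed, $file);    # re-fetches only if the feed changed

        my $rss = XML::RSS->new;
        $rss->parsefile($file);
        for my $item (@{ $rss->{items} }) {
            print "$item->{title}\n  $item->{link}\n";
        }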

    Now suppose I do want to parse a page (in Perl); why wouldn't I use Andy Lester's fine WWW::Mechanize? (WWW::Mechanize article).
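
    Something like this, say (a sketch; the URL and link text are placeholders):

        # Sketch of fetching a story page with WWW::Mechanize.
        # URL and link text are placeholders.
        use strict;
        use warnings;
        use WWW::Mechanize;

        my $mech = WWW::Mechanize->new;
        $mech->get('http://example.com/news/');
        $mech->follow_link( text_regex => qr/full story/i );
        print $mech->content;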

    questions, questions, devil's advocate

    I'm not actually knocking the idea.

    • is there an existing CPAN module that already does a subset of this?
    • could you build upon such a module?
        I ask this for two reasons. The first is that RSS feeds and web APIs are gaining traction. The second is that for quick hacks, tools already exist. Take, for example, Andy's hack to get and sort the Perl Haiku results.
    • is the intention to build it to scratch your own itch, or to solve a generic problem?
    • if you are using data structures to store the data, could you investigate using/supporting YAML (for multi-language support)? (a sketch follows this list)
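
    On the YAML point, dumping whatever structure the parser returns is nearly a one-liner, and the file on disk is readable from other languages; a sketch with made-up fields:

        # Sketch: dump a parsed-article hashref to YAML and read it back.
        # The fields are made up for illustration.
        use strict;
        use warnings;
        use YAML qw(DumpFile LoadFile);

        my $article = {
            title  => 'Example headline',
            byline => 'A. Reporter',
            body   => '...',
        };

        DumpFile('article.yaml', $article);
        my $reloaded = LoadFile('article.yaml');
        print $reloaded->{title}, "\n";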

    now you may say, goon, you're an idiot, be quiet. but ...

    -1 is what you get for having the update button near the vote button :(
