PerlMonks
Re: Starting Internet related scripts, by ajt (Prior)
on Oct 18, 2002 at 08:18 UTC [id://206244]
Depending on how many hoops you have to jump through, this could be very easy or very hard. If you don't have to contend with logging in and cookies, it may be easier to use a tool such as wget to get the work done. It runs on most flavours of Unix, and on Windows if you have Cygwin installed.

A more Perlish solution would be to use LWP to interact with the web server for you: it can handle logins and cookies, and grab the pages you want. There is a range of HTML parsing tools at your disposal, and to the list already suggested I would add HTML::TreeBuilder, which I think is quite good and often overlooked.

One thing I would recommend is Sean Burke's excellent "Perl and LWP" (ISBN 0596001789) from O'Reilly. It's a little on the slim side, but it does cover the LWP modules and several of the HTML parsing modules, with plenty of examples and useful explanation. There are reviews: Perl & LWP and Perl and LWP.

Other good resources are davorg's "Data Munging with Perl" (ISBN 1930110006), which has a good chunk on grabbing and parsing web pages, and the long-defunct "Web Client Programming with Perl".

-- ajt
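To make that concrete, here is a minimal sketch of the LWP-plus-HTML::TreeBuilder approach: a user agent with an in-memory cookie jar (so cookie-based logins persist across requests) fetching a page and printing every link it finds. The URL is a placeholder; substitute the site you actually need, and add a login request before the fetch if the site requires one.

```perl
#!/usr/bin/perl
use strict;
use warnings;

use LWP::UserAgent;
use HTTP::Cookies;
use HTML::TreeBuilder;

# Placeholder URL -- replace with the page you want to grab.
my $url = 'http://www.example.com/';

# A user agent with an in-memory cookie jar, so any cookies the
# server sets (e.g. after a login) are sent on later requests.
my $ua = LWP::UserAgent->new;
$ua->cookie_jar( HTTP::Cookies->new );

my $response = $ua->get($url);
die 'Fetch failed: ', $response->status_line
    unless $response->is_success;

# Parse the fetched HTML and walk the tree for anchor tags.
my $tree = HTML::TreeBuilder->new_from_content( $response->decoded_content );
for my $anchor ( $tree->look_down( _tag => 'a' ) ) {
    my $href = $anchor->attr('href');
    print "$href\n" if defined $href;
}

# HTML::TreeBuilder trees are self-referential; free them explicitly.
$tree->delete;
```

`look_down` is the general-purpose search method; you can match on any attribute, not just the tag name, which is what makes HTML::TreeBuilder handy once you need more than "give me all the links".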
In Section: Seekers of Perl Wisdom