PerlMonks
Joost is right on this. I did something similar a while back, grabbing newspaper headlines, and LWP::Simple did the trick for me. Of course, at the time I didn't know about HTML::TokeParser, which would have made my job a whole lot easier.
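For the fetching side, LWP::Simple really is this short. A minimal sketch (the URL here is just a placeholder, not a real feed):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use LWP::Simple;

# Hypothetical URL for illustration only
my $url  = 'http://www.example.com/headlines.html';

# get() returns the page body as a string, or undef on failure
my $html = get($url)
    or die "Couldn't fetch $url\n";

print length($html), " bytes fetched\n";
```

From there you hand `$html` to whatever parsing approach you settle on.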
You will want to save a copy of the source for a few days to make sure the information you're looking for is in the same place every time. What you're going to want to look for is HTML comments. Hopefully the page you're scraping has those around what you want. Then it's just a simple matter of reading until you get to the point you want to parse, parsing it, and you're done. In addition, if you look here, this node contains a small program I wrote using HTML::TokeParser, so you can see what you're going to get as output from that module. That may help if you go in that direction. Hope that helps!
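The comment-marker approach can be sketched like this with HTML::TokeParser. The `BEGIN HEADLINES` / `END HEADLINES` markers and the sample page in `__DATA__` are made up for illustration; you'd substitute whatever comments the real page uses:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use HTML::TokeParser;

# Stand-in for the page source you fetched; the comment
# markers below are hypothetical.
my $html = do { local $/; <DATA> };

my $p = HTML::TokeParser->new(\$html);

my $inside = 0;
while (my $token = $p->get_token) {
    if ($token->[0] eq 'C') {                 # comment token
        $inside = 1 if $token->[1] =~ /BEGIN HEADLINES/;
        $inside = 0 if $token->[1] =~ /END HEADLINES/;
    }
    elsif ($inside and $token->[0] eq 'T') {  # text between the markers
        my $text = $token->[1];
        print "$text\n" if $text =~ /\S/;
    }
}

__DATA__
<html><body>
<!-- BEGIN HEADLINES -->
<p>First headline</p>
<p>Second headline</p>
<!-- END HEADLINES -->
</body></html>
```

The `$inside` flag is the "read until you get to the point you want" part: everything before the opening comment is skipped, and parsing stops mattering once the closing comment goes by.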
There is no emoticon for what I'm feeling now.
In reply to Re: Easiest Way To Cut Info from Webpages
by Popcorn Dave