http://qs321.pair.com?node_id=13054

Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question: (http and ftp clients)

I'm trying to parse all the links in a web page into an array organized like this: ($link, $description) where:
<a href="http://www.mysite.com/mypage.html">Come <b>visit</b> my <u>web page</u>!</a>
gets parsed into:

    $link = "http://www.mysite.com/mypage.html"
    $description = "Come visit my web page!"

Thanks very much for the help!

Originally posted as a Categorized Question.

Replies are listed 'Best First'.
Re: How do I parse links out of a web page
by tokpela (Chaplain) on Jun 24, 2006 at 06:34 UTC
    Or you can use WWW::Mechanize
    use strict;
    use warnings;
    use WWW::Mechanize;

    my $url = "file:///D:/webpage.html";
    #my $url = "http://www.domain.com/webpage.html";

    my $mech = WWW::Mechanize->new();
    $mech->get( $url );

    my @links = $mech->links();
    foreach my $link (@links) {
        print "LINK: " . $link->url() . "\n";
        print "DESCRIPTION: " . $link->text() . "\n";
    }
Re: How do I parse links out of a web page
by gregorovius (Friar) on May 19, 2000 at 04:45 UTC
    Unfortunately HTML::LinkExtor does not offer a way to extract the link text from inside the 'A' tag. You can use HTML::TokeParser instead.

    The HTML::TokeParser perldoc contains a snippet that does exactly what you ask for, except that the link URLs it extracts can be relative, so you need to resolve them against the page's base URL.
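
    A minimal sketch along those lines, assuming a local copy of the page and a made-up base URL (both are placeholders, not from the original post):

    use strict;
    use warnings;
    use HTML::TokeParser;
    use URI;

    my $base = 'http://www.mysite.com/';               # placeholder base URL
    my $p = HTML::TokeParser->new('mypage.html')       # placeholder local file
        or die "Can't open: $!";

    # Walk every <a> tag, grab its href and the text up to the closing </a>
    while (my $token = $p->get_tag('a')) {
        my $link = $token->[1]{href} or next;          # skip anchors without an href
        my $description = $p->get_trimmed_text('/a');  # nested tags like <b> are dropped
        $link = URI->new_abs($link, $base);            # resolve relative URLs
        print "$link => $description\n";
    }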
Re: How do I parse links out of a web page
by Anonymous Monk on Sep 25, 2004 at 17:48 UTC

    You could try this as well

    #!/usr/bin/perl -w
    use LWP::UserAgent;
    use HTML::LinkExtor;
    use URI::URL;

    $url = "http://www.google.ca/";   # for instance
    $ua = LWP::UserAgent->new;

    # Set up a callback that collects links from <a> tags
    my @links = ();
    sub callback {
        my ($tag, %attr) = @_;
        return if $tag ne 'a';        # we only look closer at <a ...>
        push(@links, values %attr);
    }

    # Make the parser. Unfortunately, we don't know the base yet
    # (it might be different from $url)
    $p = HTML::LinkExtor->new(\&callback);

    # Request the document and parse it as it arrives
    $res = $ua->request(HTTP::Request->new(GET => $url),
                        sub { $p->parse($_[0]) });

    # Expand all URLs to absolute ones
    my $base = $res->base;
    @links = map { $_ = url($_, $base)->abs; } @links;

    # Print them out
    print join("\n", @links), "\n";
Re: How do I parse links out of a web page
by agent00013 (Pilgrim) on Jun 22, 2001 at 19:39 UTC
    The Perl Cookbook has a good example:
    #!/usr/local/bin/perl
    # xurl - extract unique, sorted list of links from URL

    use HTML::LinkExtor;
    use LWP::Simple;

    $base_url = shift;
    $parser = HTML::LinkExtor->new(undef, $base_url);
    $parser->parse(get($base_url))->eof;
    @links = $parser->links;

    foreach $linkarray (@links) {
        local(@element)  = @$linkarray;
        local($elt_type) = shift @element;
        while (@element) {
            local($attr_name, $attr_value) = splice(@element, 0, 2);
            $seen{$attr_value}++;
        }
    }

    for (sort keys %seen) { print $_, "\n" }
    Hope this helps. /msg me if you need anything else.
Re: How do I parse links out of a web page
by merlyn (Sage) on May 18, 2000 at 21:00 UTC
    See HTML::LinkExtor in the LWP module in the CPAN.
Re: How do I parse links out of a web page
by Anonymous Monk on May 10, 2002 at 00:24 UTC
    I say snag all your HTML with:

    use LWP::Simple;
    $webpage = get($url);   # set $url to the URL of your web site

    Then search the results for "href=" and use the split() function to chop up those lines into whatever sections you want.
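
    A minimal sketch of that approach (the URL below is a placeholder; a regex like this is fragile compared to a real HTML parser, but it shows the idea):

    use strict;
    use warnings;
    use LWP::Simple;

    my $url = 'http://www.example.com/';   # placeholder URL
    my $webpage = get($url) or die "Couldn't fetch $url";

    # Pull out each anchor's href and its text with a simple pattern
    while ($webpage =~ m{<a\s[^>]*href\s*=\s*["']?([^"'>\s]+)["']?[^>]*>(.*?)</a>}gis) {
        my ($link, $description) = ($1, $2);
        $description =~ s/<[^>]+>//g;      # strip nested tags like <b> or <u>
        print "$link => $description\n";
    }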

    Originally posted as a Categorized Answer.