Re: Simple link extraction tool

by ikegami (Patriarch)
on Jan 02, 2007 at 21:43 UTC ( [id://592651] )


in reply to Simple link extraction tool

How am I supposed to use your program?

  • This clobbers any existing listurls.txt, gives me two copies of the data, and puts a useless status message in preferredname.txt:

    linkextractor http://www.blah.com/ > preferredname.txt
  • This clobbers any existing listurls.txt and puts a useless status message in preferredname.txt:

    linkextractor http://www.blah.com/ > preferredname.txt & del listurls.txt
  • This clobbers any existing listurls.txt and loses any error status message:

    linkextractor http://www.example.com/ > nul & move listurls.txt preferredname.txt

Suggestions:

  • Don't say it's OK when it isn't. Use the correct message.
  • Don't say it's OK when it is. Only send the URIs to STDOUT.
  • Send error messages (including non-200 status messages) to STDERR.
  • Convert the URIs to absolute URIs.
  • Remove duplicate URIs.
  • Replace my $url = <@ARGV>; with my ($url) = @ARGV; (see the sketch after this list).
  • The domain www.example.com (among others) was set aside for examples. It's better to use that than www.blah.com, a real live domain.
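
For what it's worth, here is a minimal sketch of why <@ARGV> is a bug (the braced URL is a made-up example). Perl parses <EXPR> with anything other than a filehandle as a glob() call, so the arguments undergo filename expansion:

    use strict;
    use warnings;

    @ARGV = ('http://www.example.com/{a,b}');   # simulate a command-line argument

    # <@ARGV> is parsed as glob("@ARGV"): filename globbing on the
    # space-joined arguments, not "read the first argument".
    my $globbed = <@ARGV>;
    print "globbed:  $globbed\n";    # prints http://www.example.com/a

    # List assignment takes the first argument verbatim.
    my ($url) = @ARGV;
    print "verbatim: $url\n";        # prints http://www.example.com/{a,b}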

Suggestions applied:

    use strict;
    use warnings;
    use List::MoreUtils qw( uniq );
    use WWW::Mechanize qw( );

    # usage: linkextractor http://www.example.com/ > listurls.txt

    my ($url) = @ARGV;

    my $mech = WWW::Mechanize->new();
    my $response = $mech->get($url);
    $response->is_success()
        or die($response->status_line() . "\n");

    print
        map { "$_\n" }
        sort { $a cmp $b }
        uniq
        map { $_->url_abs() }
        $mech->links();
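
With that rewrite, the caller chooses the destination and errors compose cleanly with shell redirection; a usage sketch, assuming the script is saved as linkextractor on the PATH (the filenames are arbitrary):

    linkextractor http://www.example.com/ > listurls.txt
    linkextractor http://www.example.com/ > preferredname.txt 2> errors.log

No clobbering, no duplicate copy, and a status line only appears on failure, on STDERR.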

Update: At first, I didn't realize it was outputting to STDOUT in addition to listurls.txt, so I recommended that the output be sent to STDOUT. This is a rewrite.

Re^2: Simple link extraction tool
by Scott7477 (Chaplain) on Jan 02, 2007 at 23:38 UTC
    Thanks for taking the time to educate me and produce working code per your suggestions. Prior to posting my code, what I found with Super Search was that any queries about the existence of code like this were simply referred to CPAN modules, which was mildly surprising, since many SoPWs get responses with code snippets that solve their problem.

    I later found brian d foy's Re: Creating a web crawler (theory), which points to his webreaper, a tool apparently designed to download entire websites.

      One of the things you want to do when previewing a post is check that all your links go where you meant them to go. If you had done this, you would have found that your "webreaper" link doesn't work. You could have even simply copied the link from the source node: webreaper.

      Instead, you (apparently) wrote [cpan://dist/webreaper/]. ++ for a good guess, but it's wrong. The PerlMonks way to link efficiently to a distribution on CPAN is with [dist://webreaper] (⇒ webreaper). This is documented at What shortcuts can I use for linking to other information?

      Moral: Verify your links when you post.

