As you can see, what I originally posted was a piece of pseudo-code. It described the logic, the algorithm. I used Perl syntax and made the block into a sub to help you with the actual implementation.
Below is a piece of REAL, executable code in real-world Perl. It actually crawls the web (give it URLs on the command line).
Just run it as a separate script, no need to "put it into" your code.
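For example, assuming you save it as crawler.pl (the file name is my choice, anything works) and example.com stands in for a site you actually want to crawl:

perl crawler.pl http://example.com/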
#!/usr/bin/perl -w
use strict;
use LWP::RobotUA;
use HTML::SimpleLinkExtor;
use vars qw/$http_ua/;
sub crawl {
    my @queue = @_;
    my %visited;

    while (my $url = shift @queue) {
        next if $visited{$url};
        $visited{$url} = 1;

        my $response = $http_ua->get($url);
        next unless $response->is_success;    # skip failed or disallowed fetches
        my $content = $response->decoded_content;

        # do useful things with $content
        # for example, save it into a file or index or whatever
        # i just print the url
        print qq{Downloaded: "$url"\n};

        # a fresh extractor per page, with $url as base so relative
        # links come back absolute and fetchable
        my $link_extractor = HTML::SimpleLinkExtor->new($url);
        $link_extractor->parse($content);
        push @queue, $link_extractor->a;
    }
}
$http_ua = LWP::RobotUA->new('theusefulbot', 'bot@theusefulnet.com');
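# Note: LWP::RobotUA honours robots.txt and, by default, waits one
# minute between requests to the same server, so the crawl will look
# slow; that is the polite behaviour you want from a robot. If you
# need it faster while testing, the delay method takes minutes:
# $http_ua->delay(10/60);    # about ten seconds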
crawl(@ARGV);
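If you want to do something more useful with $content than printing the URL, here is a minimal sketch of the "save it into a file" variant mentioned in the comments. The MD5-based file naming is just an assumption of mine; pick whatever scheme fits your index:

use Digest::MD5 qw/md5_hex/;

sub save_page {
    my ($url, $content) = @_;
    # derive a flat, filesystem-safe file name from the URL
    # (hypothetical naming scheme; Digest::MD5 is core Perl)
    my $file = md5_hex($url) . '.html';
    # content was decoded by decoded_content, so re-encode on write
    open my $fh, '>:encoding(UTF-8)', $file or die qq{Cannot write "$file": $!};
    print {$fh} $content;
    close $fh;
}

Call save_page($url, $content) in place of (or next to) the print inside crawl.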