I'm writing a web spider, and I'm currently trying to make it check itself against a file so that it does not visit the same page twice. Here is what I have (perltidied, too :)):
#!/usr/bin/perl -w
use strict;
use diagnostics;
use LWP::RobotUA;
use URI::URL;

#use HTML::Parser ();
use HTML::SimpleLinkExtor;

my $a = 0;
my $links;
my $visited;
my $base;
my $u;

for ( $u = 1 ; $u < 1000000000 ; $u++ ) {
    open( FILE1, "</var/www/links/file$u.txt" ) || die;
    while (<FILE1>) {
        my $ua = LWP::RobotUA->new( 'theusefulbot', 'bot@theusefulnet.com' );

        #my $p = HTML::Parser->new();
        $ua->delay( 10 / 6000 );
        my $content = $ua->get($_)->content;

        #my $text = $p->parse($content)->parse;
        open( VISITED, ">>/var/www/links/visited.txt" ) || die;
        print VISITED "$_\n";
        close(VISITED);
        open( VISITED, "</var/www/links/visited.txt" ) || die;
        my $extor = HTML::SimpleLinkExtor->new($base);
        $extor->parse($content);
        my @links = $extor->a;
        $u++;
        open( FILE2, ">/var/www/links/file$u.txt" ) || die;

        foreach $links (@links) {
            my @visited = <VISITED>;
            foreach $visited (@visited) {
                if ( $visited eq $links ) {
                    print "Duplicate found";
                }
                else {
                    open( OUTPUT, ">/var/www/data/$a.txt" ) || die;
                    print OUTPUT "$_\n\n";
                    print OUTPUT "$content";
                    close(OUTPUT);
                    print FILE2 url("$links")->abs("$_");
                    print FILE2 "\n";
                }
            }
        }
        $a++;
        $u--;
    }
    close(FILE1);
    close(FILE2);
    close(VISITED);
    print "File #: $a\n";
}
This still lets duplicate files exist. I know people have told me to use an array, but that would get rather large, so I'm using a file instead. If you know exactly how to do it with an array (or something similar in memory), that would be fine too; so far the only advice I've gotten is "use shift".

Thanks
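In case it helps clarify what I'm asking about: here is a minimal sketch of the in-memory seen-check I think people are suggesting, using a hash rather than an array or a file. The `%visited` hash and the `seen` sub are names I made up for illustration, not from any module:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Track every URL we have already fetched. Hash lookup is O(1),
# so this stays fast even with many URLs (though it does live in memory).
my %visited;

# Returns false the first time a URL is passed in, true on every
# later call. The post-increment records the visit as a side effect.
sub seen {
    my ($url) = @_;
    return $visited{$url}++ ? 1 : 0;
}

print seen('http://example.com/') ? "dup\n" : "new\n";    # first visit
print seen('http://example.com/') ? "dup\n" : "new\n";    # second visit
```

The hash could presumably be loaded from visited.txt at startup and written back out at the end, so the on-disk file would only be read once per run instead of once per link.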