Hi fellow monks. My problem is that I am stripping all the HTML content from a website and saving it in the first element of an array. When I use print to check whether it really works, the behaviour is odd: if I leave the print statement inside the loop and run the script, I enter a URL and the stripped content is printed. If I move the print statement outside the loop, nothing is printed at all.
use strict;
use warnings;
use WWW::Mechanize;
use URI;
use HTML::TokeParser;

# Create an instance of the webcrawler
my $webcrawler = WWW::Mechanize->new();
my $url_name = <STDIN>; # The user inputs the URL to be searched
chomp $url_name;        # Strip the trailing newline before building the URI
my $uri = URI->new($url_name); # Process the URL and make it a URI
# Grab the contents of the URL given by the user
$webcrawler->get($uri);
# Use the HTML::TokeParser module to extract the contents from the website
my @stripped_html;
my $x = 0;
my $content = $webcrawler->content;
my $parser  = HTML::TokeParser->new(\$content);
while ($parser->get_tag) {
    $stripped_html[0] = $parser->get_trimmed_text() . "\n";
    print $stripped_html[0];
}
exit;
Here I have left the print $stripped_html[0]; inside the loop and it works. If I move that statement outside the loop, it won't print anything. Any ideas? Thanks in advance.
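For reference, here is a minimal, self-contained sketch of the same loop run against an inline HTML string (a hypothetical stand-in for the fetched page, so no network access is needed). In this variant each text chunk is pushed onto the array instead of reassigned to element 0, so the collected text is still there when printing after the loop:

```perl
use strict;
use warnings;
use HTML::TokeParser;

# A small inline document stands in for the fetched page (hypothetical HTML).
my $content = '<html><body><h1>Title</h1>'
            . '<p>First paragraph.</p><p>Second paragraph.</p>'
            . '</body></html>';

my $parser = HTML::TokeParser->new(\$content);

my @stripped_html;
while ($parser->get_tag) {
    # push keeps every chunk; assigning to $stripped_html[0] would
    # overwrite the previous chunk on each pass through the loop
    push @stripped_html, $parser->get_trimmed_text();
}

# Printing after the loop now shows all of the collected text
print join("\n", @stripped_html), "\n";
```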