http://qs321.pair.com?node_id=864475


in reply to Re^2: getting LWP and HTML::TokeParser to run
in thread getting started with LWP and HTML::TokeParser

Here is an example that uses WWW::Mechanize to visit the page, populate the field and submit the form. Error checking is left as an exercise for you; this is a short example to get you started:

#!/usr/bin/perl
use strict;
use warnings;

use WWW::Mechanize;

my $url = 'http://www.kultusportal-bw.de/servlet/PB/menu/1188427_pfhandler_yno/index.html';

my $mech = WWW::Mechanize->new();
$mech->get( $url );
$mech->field( 'einfache_suche', '*' );
$mech->submit();
# $mech->content now contains the results page.
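For the error checking: WWW::Mechanize can die on any failed request if you pass autocheck => 1 to the constructor, or you can test each step yourself via success() and status(). A minimal sketch, reusing the URL and field name from the example above (the rest is just one way you might wire it up):

#!/usr/bin/perl
use strict;
use warnings;

use WWW::Mechanize;

my $url = 'http://www.kultusportal-bw.de/servlet/PB/menu/1188427_pfhandler_yno/index.html';

# With autocheck => 1 every failed request dies on its own; it is
# off here so the explicit checks below have something to do.
my $mech = WWW::Mechanize->new( autocheck => 0 );

$mech->get( $url );
die 'GET failed: ' . $mech->status unless $mech->success;

$mech->field( 'einfache_suche', '*' );
$mech->submit();
die 'Submit failed: ' . $mech->status unless $mech->success;

print $mech->content;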

I can't read German, so you'd better check that you're not breaking any site policy regarding automation.

Re^4: getting LWP and HTML::TokeParser to run
by BrimBorium (Friar) on Oct 10, 2010 at 18:04 UTC

    I'm not an expert in robots.txt, but I would read http://www.kultusportal-bw.de/robots.txt as 'no agents allowed'.

    # robots.txt von 17-8 Uhr
    # email Sammler draussenbleiben
    User-agent:EmailCollector
    Disallow: /

    # Robots die durchdrehen fliegen raus
    User-agent: GagaRobot
    Disallow: /

    # Allow anything
    User-agent: *
    Disallow:
    Disallow: *ROOT=1161830$
    Disallow: */servlet/PB/-s/*

      Really, IIRC this looks as though only the user agents 'EmailCollector' and 'GagaRobot' are disallowed entirely. All other user agents are disallowed only from '*ROOT=1161830$' and '*/servlet/PB/-s/*', and are otherwise allowed anything, as specified under the '# Allow anything' comment.

      Update: In my previous post I was really warning about documented terms of use; as I said, I can't read German, so I am unable to tell whether the site has any.

      Funny, now they serve a different robots.txt:

      # cat robots.txt.8-17
      # robots.txt Tagsueber von 8-17 Uhr
      # Disallow robots thru 17
      User-agent: kmcrawler
      Disallow:

      User-agent: *
      Disallow: /
      Disallow: *ROOT=1161830$
      Disallow: */servlet/PB/-s/*

      Apart from that, I'm not sure about the ROOT and servlet lines. They look like patterns, not like URL path prefixes. Robots don't have to implement pattern matching, and most probably don't, even if Google's does. So many robots may consider these lines junk and simply ignore them.

      With the 17-8 robots.txt, only EmailCollector and GagaRobot are excluded from the entire site, and all other robots are expected to avoid only URLs matching the ROOT and servlet patterns. Robots without a pattern matching engine will treat those two lines as junk and ignore them.

      With the 8-17 robots.txt, only kmcrawler is allowed; all other robots have to avoid the site entirely.
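      For what it's worth, the stock Perl parser behaves exactly this way: WWW::RobotRules (part of libwww-perl) keeps Disallow values as literal path prefixes, so the pattern lines can never match a real URL. A minimal sketch against the 17-8 file quoted above (the robot name and test URL are made up):

      #!/usr/bin/perl
      use strict;
      use warnings;

      use WWW::RobotRules;

      # The 17-8 robots.txt from above, one directive per element.
      my $robots_txt = join "\n",
          'User-agent: EmailCollector',
          'Disallow: /',
          '',
          'User-agent: GagaRobot',
          'Disallow: /',
          '',
          'User-agent: *',
          'Disallow:',
          'Disallow: *ROOT=1161830$',
          'Disallow: */servlet/PB/-s/*';

      my $rules = WWW::RobotRules->new('MyBot/1.0');
      $rules->parse('http://www.kultusportal-bw.de/robots.txt', $robots_txt);

      # The wildcard lines survive only as literal prefixes, so they never
      # match and an agent that is not explicitly banned may go anywhere.
      print $rules->allowed('http://www.kultusportal-bw.de/servlet/PB/-s/foo')
          ? "allowed\n" : "disallowed\n";

      Fed the 8-17 variant instead, allowed() returns false for every URL on that host unless the robot calls itself kmcrawler, matching the reading above.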

      From the text fragments it is obvious that you are expected to spider only at night, and that you should behave: don't collect e-mail addresses, don't waste server resources, don't cause a large server load.
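      If you do spider, LWP::RobotUA is the polite way to do it: it fetches and honors robots.txt by itself and waits between requests to the same host. A minimal sketch (the agent name and e-mail address are placeholders for your own):

      #!/usr/bin/perl
      use strict;
      use warnings;

      use LWP::RobotUA;

      # LWP::RobotUA fetches robots.txt itself, refuses disallowed URLs
      # and sleeps between requests to the same server.
      my $ua = LWP::RobotUA->new('ExampleSpider/0.1', 'you@example.com');
      $ua->delay(1);    # at least 1 minute between requests

      my $response = $ua->get('http://www.kultusportal-bw.de/');
      if ($response->is_success) {
          print $response->decoded_content;
      }
      else {
          # During 8-17 this should be '403 Forbidden by robots.txt'.
          die $response->status_line, "\n";
      }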

      There is an imprint claiming some (equivalents of) copyrights; in particular, non-private use of the layout and the content is prohibited, except for press releases. There is also a contact page that you should use when in doubt.


      Rough translations of the text fragments:

      von 17-8 Uhr
      from 17:00 to 08:00 (local time in Germany, I think)
      email Sammler draussenbleiben
      e-mail collector(s) stay outside
      Robots die durchdrehen fliegen raus
      robots running amok are kicked out
      Tagsueber von 8-17 Uhr
      during the day from 08:00 to 17:00

      Alexander

      --
      Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)