http://qs321.pair.com?node_id=864501


in reply to Re^3: getting LWP and HTML::TokeParser to run
in thread getting started with LWP and HTML::TokeParser

I'm no expert on robots.txt, but I would read http://www.kultusportal-bw.de/robots.txt as 'no agents allowed'.

# robots.txt von 17-8 Uhr
# email Sammler draussenbleiben
User-agent:EmailCollector
Disallow: /
# Robots die durchdrehen fliegen raus
User-agent: GagaRobot
Disallow: /
# Allow anything
User-agent: *
Disallow:
Disallow: *ROOT=1161830$
Disallow: */servlet/PB/-s/*

Re^5: getting LWP and HTML::TokeParser to run
by marto (Cardinal) on Oct 11, 2010 at 08:15 UTC

    Really? IIRC, this looks as though only the user agents 'EmailCollector' and 'GagaRobot' are disallowed entirely. All other user agents are disallowed only from '*ROOT=1161830$' and '*/servlet/PB/-s/*', and are otherwise allowed, as specified under the '# Allow anything' comment.
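    This reading can be checked mechanically with WWW::RobotRules, the robots.txt parser that ships with the libwww-perl distribution LWP belongs to. A minimal sketch, assuming the 17-8 robots.txt quoted above (comment lines omitted; 'SomeOtherBot' is a made-up name standing in for any other robot):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use WWW::RobotRules;   # ships with libwww-perl

# The 17-8 robots.txt quoted above, comment lines omitted.
my $robots_txt = <<'END';
User-agent:EmailCollector
Disallow: /

User-agent: GagaRobot
Disallow: /

User-agent: *
Disallow:
Disallow: *ROOT=1161830$
Disallow: */servlet/PB/-s/*
END

for my $agent (qw(EmailCollector GagaRobot SomeOtherBot)) {
    my $rules = WWW::RobotRules->new("$agent/1.0");
    $rules->parse('http://www.kultusportal-bw.de/robots.txt', $robots_txt);
    my $ok = $rules->allowed('http://www.kultusportal-bw.de/some/page');
    printf "%-15s %s\n", $agent, $ok ? 'allowed' : 'disallowed';
}
```

    Only EmailCollector and GagaRobot come out as disallowed; any other agent falls through to the '# Allow anything' record.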

    Update: In my previous post I was really warning about documented terms of use; as I say, I can't read German, so I am unable to tell whether the site has any.

Re^5: getting LWP and HTML::TokeParser to run
by afoken (Canon) on Oct 11, 2010 at 14:20 UTC

    Funny, now they serve a different robots.txt:

    # cat robots.txt.8-17
    # robots.txt Tagsueber von 8-17 Uhr
    # Disallow robots thru 17
    User-agent: kmcrawler
    Disallow:
    User-agent: *
    Disallow: /
    Disallow: *ROOT=1161830$
    Disallow: */servlet/PB/-s/*

    Apart from that, I'm not sure about the ROOT and servlet lines. They look like patterns, not like URL path prefixes. Robots are not required to implement pattern matching, and most probably don't, even if Google's robot does. So many robots may consider these lines junk and simply ignore them.

    With the 17-8 robots.txt, only EmailCollector and GagaRobot are excluded from the entire site; all other robots are expected to avoid only URLs containing the ROOT and servlet patterns. Robots without a pattern-matching engine will treat those two lines as junk and ignore them.
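    For what it's worth, WWW::RobotRules (the parser LWP itself uses) behaves exactly that way: Disallow values are compared as literal path prefixes. A small sketch with just the two pattern lines (the empty 'Disallow:' is left out here, since WWW::RobotRules treats it as 'allow everything' for that record, and the bot name is made up):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use WWW::RobotRules;   # ships with libwww-perl

my $robots_txt = <<'END';
User-agent: *
Disallow: *ROOT=1161830$
Disallow: */servlet/PB/-s/*
END

my $rules = WWW::RobotRules->new('SomeOtherBot/1.0');  # hypothetical name
$rules->parse('http://www.kultusportal-bw.de/robots.txt', $robots_txt);

# This URL matches the */servlet/PB/-s/* pattern in spirit, but no real
# path starts with a literal '*', so a prefix-matching parser never
# blocks it -- the pattern lines are effectively ignored.
my $url = 'http://www.kultusportal-bw.de/servlet/PB/-s/xyz';
print $rules->allowed($url) ? "allowed\n" : "disallowed\n";
```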

    With the 8-17 robots.txt, only kmcrawler is allowed; all other robots have to avoid the site entirely.
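    That reading can be verified the same way. A sketch assuming the 8-17 file quoted above (comment lines omitted, 'SomeOtherBot' made up):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use WWW::RobotRules;   # ships with libwww-perl

# The 8-17 robots.txt quoted above, comment lines omitted.
my $robots_txt = <<'END';
User-agent: kmcrawler
Disallow:

User-agent: *
Disallow: /
Disallow: *ROOT=1161830$
Disallow: */servlet/PB/-s/*
END

for my $agent (qw(kmcrawler SomeOtherBot)) {
    my $rules = WWW::RobotRules->new("$agent/1.0");
    $rules->parse('http://www.kultusportal-bw.de/robots.txt', $robots_txt);
    my $ok = $rules->allowed('http://www.kultusportal-bw.de/some/page');
    printf "%-12s %s\n", $agent, $ok ? 'allowed' : 'disallowed';
}
```

    Only kmcrawler comes out allowed; everyone else is caught by the 'Disallow: /' line of the '*' record.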

    From the text fragments it is obvious that you are expected to spider only at night, and that you should behave: don't collect e-mail addresses, don't waste server resources, and don't cause a large server load.
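    If you do spider the site, LWP::RobotUA (also bundled with libwww-perl) is the polite way: it fetches robots.txt itself, refuses disallowed URLs, and rate-limits requests. A sketch; the bot name and contact address are placeholders you must replace:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use LWP::RobotUA;   # ships with libwww-perl

# Both arguments are placeholders; use your real bot name and address.
my $ua = LWP::RobotUA->new('KultusSpider/0.1', 'you@example.invalid');
$ua->delay(1);   # wait at least 1 minute between requests to the same site

# A real run (at night, per the robots.txt comments) would then be, e.g.:
# my $res = $ua->get('http://www.kultusportal-bw.de/');
# print $res->status_line, "\n";

print $ua->agent, "\n";
print $ua->delay, " minute(s) between requests\n";
```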

    There is an imprint claiming some (equivalents of) copyrights; in particular, non-private use of the layout and the content is prohibited, except for press releases. There is also a contact page that you should use when in doubt.


    Rough translations of the text fragments:

    von 17-8 Uhr
    from 17:00 to 08:00 (local time in Germany, I think)
    email Sammler draussenbleiben
    e-mail collector(s) stay outside
    Robots die durchdrehen fliegen raus
    robots running amok are kicked out
    Tagsueber von 8-17 Uhr
    during the day from 08:00 to 17:00

    Alexander

    --
    Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)