I would go with marto's advice about WWW::Mechanize. I haven't used it myself, but I hear it is great, and I suspect you will find it easier than any advice I could give about decoding the raw HTML to find the next pages to "click" on. You are fetching about 5K pages from a huge government website that performs very well, so I wouldn't worry too much about fancy error recovery with retries unless you are going to run this program often.
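Getting started with it is only a few lines; something like this sketch (the start URL here is just the portal's front page, not the actual search form):

use strict;
use warnings;
use WWW::Mechanize;

my $mech = WWW::Mechanize->new( autocheck => 1 );  # die loudly on HTTP errors
$mech->get('http://www.kultusportal-bw.de/');      # placeholder: start at the portal
print $mech->title, "\n";                          # quick sanity check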
You can, of course, parse the HTML content of the search results with a regex, but this is a mess...
my (@hrefs) = $mech->content =~ m|COMPLETEHREF=(http://www\.kultus-bw\.de[^"&]*)|g;  # the tail of this pattern was truncated in the original; this guess captures each link up to the next quote or ampersand
print "$_\n" foreach @hrefs; #there are 5081 of these
# these COMPLETEHREFs can be appended to a base URL like this:
my $example_url = 'http://www.kultusportal-bw.de/servlet/PB/menu/11884'  # menu path truncated in the original
              . "?COMPLETEHREF=$hrefs[0]";                               # re-attach one captured link (the query form is a guess)
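If those rebuilt URLs resolve, walking the full list is then a short loop (a sketch; the sleep is only there to be gentle on the server):

foreach my $href (@hrefs) {
    $mech->get("http://www.kultusportal-bw.de/servlet/PB/menu/11884?COMPLETEHREF=$href");  # same guessed base as above
    # ... extract what you need from $mech->content here ...
    sleep 1;  # roughly 5K requests; no need to hammer the site
}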
Then things get hairy, and you will want to whip out some of that HTML parser voodoo to parse the resulting table. Also, the character encodings aren't consistent: for example, the page has a literal ä, but ü does not appear literally; it is coded as the mojibake sequence ü.
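HTML::TableExtract is one of the usual tools for that voodoo. Here is a rough sketch; the column headers ('Name', 'Ort') are only guesses at what the results table uses, and the substitution shows one blunt way to fold the double-encoded ü back into a real ü:

use HTML::TableExtract;

my $html = $mech->response->decoded_content;  # let HTTP::Message decode per the headers
$html =~ s/\x{C3}\x{BC}/\x{FC}/g;             # repair mojibake: "ü" -> ü

my $te = HTML::TableExtract->new( headers => [ 'Name', 'Ort' ] );  # hypothetical headers
$te->parse($html);
for my $ts ($te->tables) {
    for my $row ($ts->rows) {
        print join("\t", map { defined $_ ? $_ : '' } @$row), "\n";
    }
}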