PerlMonks  

Re: Does fatalsToBrowser give too much information to a cracker?

by strat (Canon)
on Apr 10, 2002 at 11:11 UTC [id://157995]


in reply to Does fatalsToBrowser give too much information to a cracker?

I agree with George; warnings and errors are nothing for a normal user, and may sometimes even become dangerous
(e.g. open (FILE, $file) or die "can't read password from $file: $!"; or the like), because with some providers you have to keep "sensitive" data in directories that might end up being accessible from the web.
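A safer pattern is to keep the revealing detail in a warn(), which only goes to the server's error log, and die() with a generic message. A minimal sketch (open_config and the message wording are hypothetical names, not from the thread):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch: the full detail, including the path and $!, goes only to the
# server error log via warn; the user-visible die message reveals
# nothing about the filesystem.
sub open_config {
    my ($file) = @_;
    open( my $fh, '<', $file ) or do {
        warn "can't read config from $file: $!";          # error log only
        die "Internal error, please try again later.\n";  # user sees this
    };
    return $fh;
}
```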

Another reason why I always remove -w (use warnings) in production systems, as well as qw(fatalsToBrowser), and instead try to do some defensive programming to catch all errors that might happen, is that I don't want to confront users with error messages they won't understand or won't be able to do anything about.
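One way to get that split between development and production is to import fatalsToBrowser only on development machines. A sketch, assuming a hypothetical SCRIPT_ENV environment variable that you would set yourself on the dev box:

```perl
#!/usr/bin/perl
use strict;
# -w / use warnings deliberately left out for production, as above

# Sketch: SCRIPT_ENV is an assumed variable name, not a CGI::Carp
# convention; only a box marked 'development' gets fatalsToBrowser.
sub in_development {
    return ( $ENV{SCRIPT_ENV} || '' ) eq 'development';
}

BEGIN {
    if ( ( $ENV{SCRIPT_ENV} || '' ) eq 'development' ) {
        require CGI::Carp;
        CGI::Carp->import('fatalsToBrowser');
    }
}
```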

In CGI scripts, I only use die for really serious errors; more often, I write my own error-output routine that takes care of returning a complete HTML page.
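Such a routine might look like the following sketch (error_page and its wording are hypothetical): the sensitive detail is only ever warn()ed to the error log, while the browser always receives a complete, harmless page:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch: returns a full HTTP response (headers plus HTML body);
# the detailed reason goes only to the server error log via warn.
sub error_page {
    my ( $public_msg, $detail ) = @_;
    warn "error_page: $detail" if defined $detail;   # server log only
    return join '',
        "Content-Type: text/html\r\n\r\n",
        "<html><head><title>Error</title></head>\n",
        "<body><h1>Sorry</h1><p>$public_msg</p></body></html>\n";
}

# Typical use in a CGI script:
#   print error_page( 'Please try again later.', "can't open config: $!" );
#   exit;
```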

Best regards,
perl -le "s==*F=e=>y~\*martinF~stronat~=>s~[^\w]~~g=>chop,print"


Replies are listed 'Best First'.
Re: Does fatalsToBrowser give too much information to a cracker?
by Smylers (Pilgrim) on Apr 10, 2002 at 12:56 UTC
    Another reason why I always remove -w (use warnings) in production systems ... is that I don't want to confront users with error messages they won't understand or won't be able to do anything about.

    How does removing -w aid that? Warnings (even with fatalsToBrowser, since warnings by their very nature aren't fatal) only appear in the server error log, which isn't going to be seen by users.

    In deployed code there shouldn't be any warnings generated, but should something of a dubious nature occur surely it's better for the warnings to be available in the log than not at all?

      Actually, warnings produced by -w can often be viewed in the raw data returned by the HTTP server. They won't display in the browser, but are visible with GET (or is it HEAD?) or with a utility such as curl or Sam Spade. These warnings can reveal paths and filenames, which may or may not be a problem. I tend to use a BEGIN{} block to suppress spurious warnings ("$xxx used only once...") and/or carpout() to redirect the warnings to a file.
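        carpout() comes from CGI::Carp. A sketch of that redirection (the log path is made up, and the code falls back to a plain __WARN__ handler if CGI::Carp isn't installed):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch: send warnings to a private log file rather than the server's
# shared error log. The path is hypothetical; CGI::Carp's carpout() is
# the documented mechanism, with a __WARN__ handler as a fallback.
my $logfile = "/tmp/mycgi-$$.log";
open( my $log, '>>', $logfile ) or die "can't open $logfile: $!";

if ( eval { require CGI::Carp; CGI::Carp->import('carpout'); 1 } ) {
    carpout($log);    # STDERR (and thus warn) now goes to $logfile
}
else {
    $SIG{__WARN__} = sub { print {$log} @_ };
}

warn "this warning lands in $logfile, not in the HTTP response\n";
```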
        Actually, warnings produced by -w can often be viewed in the raw data returned by the HTTP server. They won't display in the browser, but are visible with GET (or is it HEAD?)

        Are you sure? I've just tried creating a script which causes an uninitialized value warning. Whenever it's run as a CGI script it indeed spews said warning to the server error log. But I can't provoke the server into yielding the warning in any headers.

        I've tried telnetting directly to port 80 on the server and using both HEAD and GET (though I'm fairly sure it would violate the HTTP spec for those two to return different sets of headers) and don't see any warnings.

        What do they look like when you see them — what HTTP header do they use? I'm just getting Date:, Server:, Connection:, and Content-Type:, exactly the same as I do with warnings turned off.

        Smylers
