is I/O checking worth it?

by Beatnik (Parson)
on Jan 13, 2001 at 20:56 UTC

I've seen a number of scripts (mostly poorly written ones) blindly overwriting files without actually checking whether they should (or can) be read or overwritten. Is it worth doing a lot of checking (file existence, file permissions, files vs. directories, files being (sym)links, etc.) before doing some important file I/O? One of the more common exploits appears to be replacing a file with a (sym)link to another file, so that a different file is accessed than the one intended.

(tye)Re: is I/O checking worth it?
by tye (Sage) on Jan 14, 2001 at 00:15 UTC

    First, don't bother trying to check file permissions. The only reliable way to check file permissions is to try the actual operation. So check for failure and report a good error message.

    You might see advice to use access() to check permissions but that is a bit of a misleading API. It was never meant for checking generic file permissions. It was meant only for use by set-UID programs to make a rough guess about whether the file would be accessible by the user once set-UID was taken away. And it only gives a rough guess, ignoring lots of potential reasons for an access to fail or succeed.
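    For instance, here's a minimal sketch (the filename is made up) that gets you everything access() would have told you, plus the real reason when it fails:

    # Let the open itself do the permission check, and report the
    # filename and $! on failure.
    my $name = "some/file";
    open(my $fh, '<', $name)
        or die "Cannot read '$name': $!\n";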

    Second, the way you talk about this checking makes it sound like you are just going to generate a bunch of race conditions. For example, checking whether a file exists (say, with -e) before attempting to overwrite it causes a race condition. Instead, open the file in a way that will fail if the file exists; then you can check whether a pre-existing file is what caused the open to fail, and deal with it at that point.

    But if, for example, your desired way to avoid overwriting a file is to rename the current file before opening the new one, then you still need to open the new file in a way that will fail if the file already exists, or you get a race condition again.
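    A minimal sketch of the non-racy version, using sysopen with O_EXCL so that the existence test and the creation are a single atomic operation (the filename is made up):

    use Fcntl qw(O_WRONLY O_CREAT O_EXCL);
    use Errno qw(EEXIST);

    my $name = "report.txt";
    if (sysopen(my $fh, $name, O_WRONLY | O_CREAT | O_EXCL)) {
        print $fh "fresh contents\n";
        close($fh) or die "Cannot close '$name': $!";
    }
    elsif ($! == EEXIST) {
        # The file was already there: rename it aside, then retry
        # the sysopen above (still with O_EXCL) to stay race-free.
        rename($name, "$name.old")
            or die "Cannot rename '$name' aside: $!";
    }
    else {
        die "Cannot create '$name': $!";
    }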

    The symlink trick only works when a privileged program (such as a set-UID program) keeps its privileges while working with files in a directory that it doesn't need its privileges for (such as /tmp). After all, if you could use a symlink to redirect files outside of /tmp, then you would already have the privilege to move the file directly and wouldn't need a symlink trick.

    A much better solution than checking for symlinks is to have your set-UID program remove its privileges whenever it deals with places where it doesn't need them.
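    In Perl that boils down to juggling the real and effective UIDs, roughly like this sketch (Unix-specific; restoring works because of the saved set-user-ID):

    my $saved_euid = $>;    # effective UID the set-UID bit gave us
    $> = $<;                # run as the real user for now
    die "Could not drop privileges: $!" if $> != $<;

    # ... work with files in /tmp here ...

    $> = $saved_euid;       # regain privilege only where truly needed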

    I don't think checking for tricky things is usually a good idea. If your program isn't set-UID (or privileged in some other way), then these tricks really don't pose a security hole and may actually be used legitimately to work around some temporary system problem.

    But even if there are no security issues to worry about, it is a good idea to avoid race conditions.

            - tye (but my friends call me "Tye")
Re (tilly) 1: is I/O checking worth it?
by tilly (Archbishop) on Jan 14, 2001 at 05:14 UTC
    The first step if security matters is to read perlsec and then turn on taint checking.
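    Under -T anything that comes from outside the program is tainted and must be explicitly untainted (via a regex capture) before it may touch the filesystem. Here's a minimal sketch; the pattern is only an example, so tighten it to exactly the names you expect:

    #!/usr/bin/perl -T
    use strict;

    my $input = shift @ARGV;                  # tainted
    my ($safe) = $input =~ /\A([\w.-]{1,32})\z/
        or die "Bad filename '$input'\n";     # refuse anything else
    open(my $fh, '>', $safe)                  # writing requires untainted data
        or die "Cannot write '$safe': $!\n";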

    A good step regardless is to test the result of every open. I believe in doing it the way perlstyle says, having the error message include the filename, the attempted operation, and $!.

    If you need to read and write files but don't want to follow symlinks, this can get fairly tricky. The following code (which will fail horribly on systems without symlinks) demonstrates how to do it safely:

    #! /usr/bin/perl -w
    use strict;
    use Carp;
    use Symbol; # Needed on 5.005 and less

    sub clear_file {
        my ($fh, $name) = @_;
        seek($fh, 0, 0)
            or confess("Cannot seek to beginning of '$name': $!");
        truncate($fh, 0)
            or confess("Cannot truncate '$name': $!");
    }

    sub deny_symlink {
        my ($fh, $name) = @_;
        # In the following testing the filehandle avoids a race
        # condition, but I think that whether it works is OS
        # specific. :-(
        if (-l $fh or -l $name) {
            my $real = readlink($name);
            confess("Refusing to follow symlink from $name to $real");
        }
    }

    sub open_read {
        my $name = shift;
        my $fh = gensym();
        open($fh, "< $name") or confess("Cannot read '$name': $!");
        deny_symlink($fh, $name);
        return $fh;
    }

    sub open_write {
        my $name = shift;
        my $fh = gensym();
        open($fh, "+>> $name") or confess("Cannot write '$name': $!");
        deny_symlink($fh, $name);
        clear_file($fh, $name);
        return $fh;
    }

    my $filename = "whatever";
    *FH = open_write($filename);
    print FH "Hello world\n";
    close FH;
    *FH = open_read($filename);
    print <FH>;
    In general if you need temporary files, do not attempt to roll that yourself. Use File::Temp. Really.
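    A minimal example (the template and directory are arbitrary):

    use File::Temp qw(tempfile);

    # tempfile() creates the file with O_EXCL and a randomized name,
    # so none of the races discussed above can happen.
    my ($fh, $tmpname) = tempfile("workXXXXXX", DIR => "/tmp", UNLINK => 1);
    print $fh "scratch data\n";
    seek($fh, 0, 0) or die "Cannot rewind '$tmpname': $!";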

    Also note that if you are concerned with security then you may want to think about locking. For an example (which could easily be improved) that I came up with a while ago see Simple Locking.
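    For reference, the core flock pattern looks like this sketch (the filename is made up; the locks are only advisory, and flock is unimplemented on some platforms, so wrap it in eval if that matters to you):

    use Fcntl qw(:flock);

    # Advisory locking: only processes that also call flock are kept out.
    open(my $fh, '>>', "spool/log") or die "Cannot append to 'spool/log': $!";
    flock($fh, LOCK_EX)             or die "Cannot lock 'spool/log': $!";
    print $fh "one record\n";
    close($fh)                      or die "Cannot close 'spool/log': $!"; # releases the lock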

    With luck this should give you some ideas of how to improve the security of your programs.

      I usually do a lot of checking on more critical file I/O, instead of blindly opening, and every so often I even do a forced check on file permissions (which of course will break the script on platforms that don't support it).
      Locking only works between processes that understand the concept. If applications don't obey file locking, they can do whatever they want with the files. Perl, of course, obeys the locking.
      Not all OSes have flock implemented; Windows is a good example (not that I use it). flock will actually break your script if it's run on a platform that doesn't support it.
      What about the file versus directory check? A file can be opened; a directory can't (in the file sense). Will -d suffice? =)
        Actually, locks on Unix are only advisory, and while Perl scripts may obey them, that depends on the script writer properly calling flock.

        As for the rest, generally it is a far sounder strategy to open in a non-destructive manner, then test. Testing first opens up race conditions.

        Beyond that, putting in a ton of paranoid checks tends to create unmanageable messes. The harder you make security, the less likely it is to happen. Make it easy to be secure (e.g. through a small number of functions like I wrote above) and think about how it fits into your overall policy. (I might work as a non-privileged user in directory structures whose permissions are locked down to just that user, and then leave it at that. If I want to put a symlink in there, that is probably OK.)

        In general, make sure that things are sane, program in a way where unexpected inputs cannot be misunderstood, and make it simple to keep things that way. But if you set up a complex scheme that is supposed to be followed (and without seeing what you do I have no idea whether this applies in your case), you have set yourself up for failure. Complex schemes tend to erode security.

Re: is I/O checking worth it?
by moen (Hermit) on Jan 13, 2001 at 21:50 UTC
    First, I think you answered your own question =)
    Second, yes, I really think one should; it's a matter of security and good programming practice.

    Using symlinks when compromising a machine is very common, yes, and easy if you can locate scripts that don't check for symlinks and are run/executed as root (or any other user, for that matter).

    So messing around deleting, creating, and modifying files on your system without checking whether it's sane or not is just plain stupid.
    methinks
