http://qs321.pair.com?node_id=258980

In #perlhelp on EFnet, someone said:

14:01 < mauke> Is this a feature? mkdir tmp; cd tmp; touch 'echo Hello, world |'; perl -pe1 *
14:03 < mauke> Even better: touch 'rm -rf `echo \\\\57` |'; perl -pe1 *
The first one is safe to run. It prints "Hello, world". The second one is not safe to run on most systems: it deletes everything it can, starting at /.

This feature is documented in perlop:

You can even set them to pipe commands. For example, this automatically filters compressed arguments through gzip:

    @ARGV = map { /\.(gz|Z)$/ ? "gzip -dc < $_ |" : $_ } @ARGV;

So all code using the filehandle ARGV (this includes one-liners using -n or -p) is unsafe if used with shell globs.
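The core of the difference, as a minimal sketch with a harmless payload (the "filename" here is made up):

    # Two-arg open honors mode characters embedded in the string;
    # three-arg open treats the whole string as a literal filename.
    my $name = 'echo pwned |';
    open my $bad, $name or die $!;    # two-arg: runs echo as a command
    print <$bad>;                     # prints "pwned"
    open my $good, '<', $name         # three-arg: just an odd filename
        or warn "open failed: $!\n";  #   that safely fails to open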

Unfortunately, -p and -n are used in a LOT of places. Often in scripts that cron starts once a day. Often running as root. I found 5 root holes on my server system.

The fix is easy but requires a lot of typing, which isn't handy for one-liners: open the files explicitly, using three-argument open.
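For a script, the longhand replacement for -n looks something like this (a minimal sketch):

    # Iterate @ARGV yourself and open each file read-only with
    # three-argument open, so names are never interpreted as modes
    # or pipes.
    for my $file (@ARGV) {
        open my $fh, '<', $file or die "Can't open $file: $!\n";
        while (my $line = <$fh>) {
            print $line;    # whatever the -n/-p body would have done
        }
        close $fh;
    }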

Too bad this is a feature. If it weren't documented, I'd say it's a bug and a very scary one too.

:(

Juerd # { site => 'juerd.nl', plp_site => 'plp.juerd.nl', do_not_use => 'spamtrap' }

Re: Dangerous diamonds!
by PodMaster (Abbot) on May 18, 2003 at 13:18 UTC
    I'm a little surprised people are still being surprised by the magic of the open call. If you're worried, turn on taint.
    C:\>perl -Tpe1 "echo asdf|"
    Insecure $ENV{PATH} while running with -T switch.

    Also, what if a file has nasty shell escapes? Never mind. BTW, who runs one-liners as root? (I'd consider that a bug.)


    MJD says you can't just make shit up and expect the computer to know what you mean, retardo!
    I run a Win32 PPM repository for perl 5.6x+5.8x. I take requests.
    ** The Third rule of perl club is a statement of fact: pod is sexy.

      BTW, who runs one-liners as root? (I'd consider that a bug.)

      It's not just one-liners, and it's not just root. Any script that doesn't untaint ARGV is vulnerable. Partly, that vulnerability is incidental, given that once someone has broken into an account it is a lot easier for them to do damage directly, rather than wasting time attacking some Perl script.

      Very few Perl books talk about ARGV being a vulnerability. Or if they do, it's in passing in one part of the book, with examples in other parts ignoring the hazard.

        Any script that doesn't untaint ARGV is vulnerable.

        Which is this thread's lesson :)

        But I still think magic ARGV should not use two-arg open.

        Juerd # { site => 'juerd.nl', plp_site => 'plp.juerd.nl', do_not_use => 'spamtrap' }

      I'm a little surprised people are still being surprised by the magic of the open call.

      I'm not surprised by the open call. I'm surprised that Perl uses this way to open files with magic ARGV. Three argument open would have been a lot safer.

      I'm very sure I'm not the only one who forgot that magic ARGV uses normal two-arg open internally. The number of exploitable scripts written by my customers and myself proves that most people are unaware of the security problems or choose simply to ignore them. I have found 15 so far.

      If you're worried, turn on taint.

      Thanks. Even though I hate Perl's tainting mechanism, I'll use it here. It still cannot really fix the problem, since scripts will now die if they encounter an invalid file.

      BTW, who runs one-liners as root? (I'd consider that a bug.)

      Everyone who needs a script to run as root runs scripts as root.

      Users can't do everything root can, and sometimes you need to be root to do what you want to do.
      Not everything can be done by a user; some things need to be done by root.

      And some of those things are made by me, and those things made by me are written in Perl.
      Perl is a powerful language that lets me do those things in simple one-liners, so I do do that with simple one-liners.

      The one-liners run as root because they need to do things that only root can do.
      The one-liners couldn't do what they need to do if they were not run as root.
      And THAT would be a bug.

      Juerd # { site => 'juerd.nl', plp_site => 'plp.juerd.nl', do_not_use => 'spamtrap' }

        I'm surprised that Perl uses this way to open files with magic ARGV.

        Why? This feature existed long before open had 3 arguments; why would it suddenly change?

        It still cannot really fix the problem, since scripts will now die if they encounter an invalid file

        There is no problem to fix. Sanitize your @ARGV if you insist on magic.
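        One sanitizing recipe, as a sketch (this is the perlfaq trick for making two-argument open treat a string literally):

            # Protect a leading whitespace character with "./", then
            # force read-only, literal treatment with a "< " prefix
            # and a trailing "\0".
            @ARGV = map { my $f = $_; $f =~ s{^(\s)}{./$1}; "< $f\0" } @ARGV;
            while (<>) { print }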


        MJD says you can't just make shit up and expect the computer to know what you mean, retardo!
        I run a Win32 PPM repository for perl 5.6x+5.8x. I take requests.
        ** The Third rule of perl club is a statement of fact: pod is sexy.

Re: Dangerous diamonds! (accident)
by tye (Sage) on May 19, 2003 at 15:43 UTC

    I boggle every time I see this defended as a feature. Why is it a feature that:

    perl -ne '...' *
    is broken?? It doesn't process the files matching * as any sane person would expect it to. It can't even handle files that have leading spaces in their names. It is quite simply stupid, dangerous, and counter-intuitive.

    Sure, it is cute to have a feature where you can load up @ARGV with qw( >this >that >the >other ) and have the "read files" operator create a bunch of empty files for you. You can even go out of your way to come up with a useful invocation of it.
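    Spelled out, the cute trick is just this (a sketch):

        # Each ">name" element makes magic open create/truncate that
        # file for writing; the "read" loop itself reads nothing.
        @ARGV = qw( >this >that >the >other );
        1 while <>;    # side effect: four empty files now exist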

    But that shouldn't come at the expense of breaking the default case of dealing with @ARGV coming from the command line that contains a list of file names! If you want to play such games, then you should be allowed to but you shouldn't break the basic functionality of perl -ne '...' * in order to support that! Especially not in a way that warrants a CERT advisory!

    I'd patch this so that the "magic open" is applied to <> iff use open IN=>':magic'; were in effect. I started down that road when I started a patch to fix use open IN=>':raw'; to work on <> because it is currently impossible to use binmode with <>.

    But the reaction from p5p made me think that my efforts would be wasted as such a patch would not be accepted.

    Someone really should have CERT file a report on this. It is a serious security bug in Perl that should be fixed and should be advertised more.

    How this works is clearly an accident of implementation and not an intentional design. The fact that people have come up with creative uses for this accident doesn't mean that the tons and tons of legitimate uses of <> (in an attempt to read the files named in @ARGV) should be left broken just because we didn't realize that they were broken when we were telling people to use <> to iterate over the lines in files named in @ARGV.

    Those who really feel that this misfeature should continue to be the default behavior need to update tons of documentation that encourages the use of <> for iterating over a list of files matched by a wildcard.

    The fact is that *both* magical and non-magical expectations for <> are documented. Any place that mentions perl -ne '...' * is documenting the sane behavior that was always expected/intended and this is by far the most common desired behavior when <> is used.

    Making <> sane by default (instead of magical) would fix more existing code than it would break! And the code that would be broken would be simple to fix (add a single use open IN=>':magic';) and would be code that was written with awareness of how strange <> can behave, and so would be more likely to learn of the need for this change.

    It isn't hard to find nodes by people who claim to not be surprised by this mis-feature that contain code that seems to clearly indicate that they don't expect magical behavior. I did a quick super search for nodes by merlyn that mention "local" and "*ARGV" and I found a bunch of uses of <> that don't set @ARGV = "< input.file" nor mention the dangers of not doing that.

    And if you change all of your code so @ARGV is always populated with "< $filename", then $^I becomes useless! How $^I works clearly indicates that the writers of Perl did not take magical open into account during the design. We just need to fix it, not document how it has always been this way and shame on you for not realizing it (we didn't realize it either, but we refuse to admit that it was a mistake).
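    To see the conflict, a sketch (the filename is hypothetical):

        # In-place editing renames each file in @ARGV and reopens a
        # replacement under the original name, so @ARGV must hold
        # bare filenames.
        $^I   = '.bak';
        @ARGV = ('notes.txt');      # works: edits notes.txt in place
        # @ARGV = ('< notes.txt');  # the "safe" spelling breaks $^I:
        #                           # no file is literally named
        #                           # "< notes.txt" to rename
        while (<>) {
            s/foo/bar/;
            print;                  # goes to the replacement file
        }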

                    - tye

      It doesn't process the files matching * as any sane person would expect it to. It can't even handle files that have leading spaces in their names. It is quite simply stupid, dangerous, and counter-intuitive.

      All of these would be fixed if it used three-arg open with an explicit "<" as its second arg. It's a shame that some people think the current way of handling things is okay. :(

      Juerd # { site => 'juerd.nl', plp_site => 'plp.juerd.nl', do_not_use => 'spamtrap' }

      I did a quick super search for nodes by merlyn that mention "local" and "*ARGV" and I found a bunch of uses of <> that don't set @ARGV= "< input.file" nor mention the dangers of not doing that.

      Did merlyn go back and fix them? The super search turned up 9 nodes and he sets @ARGV explicitly in each one. (Not to "< input.file" but there is no danger in forgoing the '<' if the filename is explicit, right?)

      -sauoq
      "My two cents aren't worth a dime.";
      

        Having heard merlyn complain many times about people promoting dangerous memes, I expected to see merlyn at least mention this danger of <> that so many seem to be saying "Oh, sure, I expected that all along; after all it *is* documented" about. I couldn't find a single one. Perhaps I just missed it.

        What I did find was what I described. I picked something I knew I could find with Super Search, merlyn doing local(*ARGV), setting @ARGV, then using <>. I found no use of "< filename" nor any mention of these dangers.

        I had expected that merlyn would realize that posting code that does @ARGV = "filename"; would invite someone to copy and modify his code and end up with @ARGV = $filename; and so realize he was promoting a dangerous meme and address this point somewhere.

        Especially in something like •Re: XML log files, which includes code meant to be copied and modified and was in reply to a node that used $logfn not some hard-coded log file name. So merlyn should have expected "mylogfile" to be replaced with $logfn and yet didn't even mention this risk.

        I didn't expect him to always mention this risk, I was just looking for any indication that he had realized this risk and couldn't find any despite finding several nodes where <> is used and @ARGV is set. That certainly doesn't prove that merlyn hasn't always been keenly aware of this risk. But I think it indicates that even merlyn probably usually thought about @ARGV containing filenames and (at least until the issue was raised recently) usually didn't worry about <> sending filenames to the shell. In any case, I think most users of Perl usually think about @ARGV and <> that way and I have yet to find any evidence of many (any) other people doing otherwise until quite recently.

        So I did some more searching looking for any places where someone has said "oh, and be careful because <> can pass your filenames to the shell for interpolation, of course (everyone knows that, it is spelled out explicitly in the documentation!)". I searched for nodes that contain both '"<' and '<>' in hopes of finding nodes that use <> defensively. I looked at about half of the matches and none of them were using <> defensively.

        But several of them show evidence of the opposite, of people knowing full well that open FH, $filename is a bad idea and then doing the equivalent Bad Idea™ of @ARGV = $filename; then using <>. That is, nodes that do open FH, "< $file" and yet don't follow the same precaution when using @ARGV and <>.

        I found Dominus (well-respected Perl author) doing this in How do I insert a line into a file?. And Adam (very careful Perl programmer that I respect) doing it (via the command line) in Re: Populating an array. And pjf doing it in Re: Searching a whole directory of databases.

        So I've got hard evidence that people have expected <> to interpret @ARGV as containing names of files to be read and not expressions to be interpreted by 2-argument open, yet still no hard evidence of anyone interpreting the vague documentation as "the above pseudo code used 2-argument open so <> will also behave like it used 2-argument open and do stupid things for files with names beginning with > or |, even though that would be dangerous and, well, stupid". q-:

                        - tye
Re: Dangerous diamonds!
by Abigail-II (Bishop) on May 18, 2003 at 21:38 UTC
    The problem doesn't lie in magic open. The problem lies in assuming world writeable directories are safe. Consider the following program:
    foreach my $file (@ARGV) {
        open my $fh => ">", $file or die "Failed to open $file: $!\n";
        print $fh "Buzzle\n";
        close $fh or die "Failed to close $file: $!\n";
    }

    Or even:

    foreach my $file (@ARGV) {
        truncate $file, 0 or die "Failed to truncate $file: $!\n";
    }

    which doesn't even open a file, let alone use magic open. If you call either of those programs in a world writeable directory with * as argument as root, you're open to a DoS attack. All the attacker needs to do is create a symbolic link in the directory, pointing to an important file like /etc/passwd or /vmunix, and KABOOM!

    It would be very insecure to think that using 3-arg open will fix your problems.

    Abigail

      Sigh. I KNOW THAT.

      The problem doesn't lie in magic open. The problem lies in assuming world writeable directories are safe.

      No.

      The problem that I am discussing is that it's using 2-arg open without me knowing it is. Now that I know it does do that, I won't make the mistake of EVER doing any -pe'something' * in something that is automated again. Too bad, since Perl really is nice as a one-liner crafting tool. Or I thought it was.

      Consider the following program:

      Blah blah. Those two examples are completely unrelated. They only happen to use @ARGV. I was *NOT* discussing the array @ARGV, but the magic filehandle that opens implicitly.

      Your examples change the files, which is by definition less secure. My concern is with scripts/one-liners that readline *ARGV without knowing it could be any mode. I'm talking about implicit open, you're talking about explicit open and explicit truncate.

      Please step into the real world and realise people make mistakes, and that people sometimes think they know how something works but do not. I thought I knew what magic ARGV did (Note again: @ARGV is not magic. I'm talking about *ARGV{IO} here, and only that.), but apparently did not.

      My search on my server, on which multiple people automate tasks using Perl one-liners, proves that I'm not the only one that opened up huge security holes by assuming -n and -p were safe (again assuming no $ARGV, no @ARGV and no $^I or ANYTHING that *changes* files).

      It would very insecure to think that using 3-arg open will fix your problems.

      Pedant. Let me rephrase: 3-arg open with "<" as its second argument would fix most of the problem that I describe. It possibly still has exploits with null bytes and such, but at least those are real exploits, and not some stupid Perl bugfeature that can very easily be abused.

      To anyone reading my post: I acknowledge that it is a feature (after all, it's documented and sometimes useful) and not a bug. I also agree that you shouldn't assume things. But people do assume a lot and my message serves as a warning for people like me. In some IRC channels some people were quite shocked and started editing their scripts immediately.

      Sometimes I wonder why it is that in the Perl world you cannot warn people or express your wishes without getting replies saying that things are supposed to work the way they do, that any change would break legacy scripts, that I should have been perfect in the first place, and that worse situations are possible too.

      So, to avoid further confusion:

      • Beware: magic ARGV (implied by -p and -n) uses two-arg open and can open files in a not-read-only mode and can even execute external commands.
      • How it works now is a feature, documented in perlop.
      • I wish it were different (explicit read-only using three-arg open).
      • I'm only talking about the magic ARGV filehandle, not about $ARGV, @ARGV or its elements.

      Juerd # { site => 'juerd.nl', plp_site => 'plp.juerd.nl', do_not_use => 'spamtrap' }

        To anyone reading my post: I acknowledge that it is a feature (after all, it's documented and sometimes useful) and not a bug.
        I disagree. That is like saying that carrying a loaded and unlocked gun with you all the time is sometimes useful. You'll accidentally shoot someone that way, likely even yourself, while not even handling the gun.

        If people want to make use of the effect that this kind of "feature" achieves now, they should program the loop explicitly. How much work is it? Is it really worth the savings?

        Again, in summary: <> should only try to open existing files, and for reading only.

        Please step into the real world and realise people make mistakes, and that people sometimes think they know how something works but do not. I thought I knew what magic ARGV did (Note again: @ARGV is not magic. I'm talking about *ARGV{IO} here, and only that.), but apparently did not.

        Please step into the real world, where it is your responsibility to know what you are running when you are logged in as root.

        Update: tilly has pointed out to me that the interface is partly to blame whether or not this is documented behavior. I agree that the blame lies _partly_ with the interface. The other part of the blame, I believe, still lies with the user of the interface. The point is, when you do things as root, to be extremely careful that you know what is going on.

        tye points out that it's nearly impossible to tell if one of some large number of modules uses the diamond operator. It is similarly difficult to tell if some portion of a C library does something stupid. This doesn't mean that C is inherently insecure, nor does it mean that Perl is. The furor has been over one-liners, which are simple enough that you CAN tell everything they are doing. Taint.pm is your friend in either case.

        I do agree it's worth fixing. I don't think it's entirely outside the responsibility of the user to be aware of shortcomings before they are fixed, though. End of update

        Christopher E. Stith
        use coffee;
Re: Dangerous diamonds!
by sauoq (Abbot) on May 21, 2003 at 02:25 UTC

    I think this should be changed. The default behavior should be just as you, tye, and others have pointed out.

    But, to put it in perspective, the problem isn't as terribly nasty as it has been made out to be in some posts here. Boil it down and it is simply a question of how much you can or should trust the source of your data. (And I think that was the real point Abigail was trying to make, even if he went off on a bit of a tangent.)

    In other words, it's the same old issue that pops up time and again with CGI scripts. We constantly remind people that they can't trust the data submitted to their scripts so they really should use taint checking. We educate them. Continually. The only differences in the case of Perl, the diamond operator, and shell globs are:

    1. Intuitively, it seems such a thing would be innocuous because we use shell globs all the time with other programs, and...
    2. We can usually place a higher level of trust in the names of the files we are working with than we can in input from some random websurfer.

    A reasonable effort at following best practices will almost eliminate any potential danger from this infamous little "feature." There is no need for hyper-rigorous draconian super-sysadmining, which we all know is unrealistic anyway. Good habits are sufficient.

    Limit file and, especially, directory permissions. Use system accounts and groups to create sandboxes and segregate users.¹ Don't run processes, particularly automated ones, with greater privileges than necessary. Look at the files in a directory before you leap at them all willy-nilly with a splat on the command line.²

    These things are (or should be) second nature to experienced administrators. They are, after all, the same measures that protect us against many far more subtle threats than a file named 'chown root:root somefile && chmod 4555 somefile|' which sits around waiting to get executed by an unsuspecting root-privileged perl script foolishly making use of ARGV.

    Besides, whenever (or if) this is fixed, we'll still have to educate people on the dangers of using two argument open. After all, perl -e 'open F, $_ and print <F> for @ARGV' is no better than using -p. (Though, admittedly, perl isn't making the decision for you in that case.)

    The security implications are real, but the magnitude of the threat is actually small and completely avoidable. This behavior of perl's isn't exactly news but, as you point out, lots of experienced Perl coders are unaware of it. That means two things: 1) there is room for more education and 2) it hasn't caused much of a problem over the years. I think that tye's call for a CERT advisory is a bit melodramatic.

    So, in summary, yeah; I think it should be changed. It's a minor security risk and, just as damnable and maybe more so, it doesn't work as you'd expect. As tye pointed out, -p and its ilk don't play nicely with filenames that start with whitespace. (They don't like files ending with whitespace either.) That's good enough reason to change it.

    ¹ In another post you said:

    The one-liners run as root because they need to do things that only root can do.
    The one-liners couldn't do what they need to do if they were not run as root.
    And THAT would be a bug.
    Do you have an example? There is likely a better way of configuring things so that root doesn't have to do the task.

    ² Excuses like, "there are too many files in the directory to see all of them easily" don't hold up. If there are, then one shouldn't be using * anyway. There are always other choices like ls | less or a better constructed pattern to match exactly the desired files.

    -sauoq
    "My two cents aren't worth a dime.";
    
      Excuses like, "there are too many files in the directory to see all of them easily" don't hold up.

      Classic security exploits often involve race conditions:

      root:                            hacker:
      # cd /user/joe
      # ls -a
      (a few normal files listed)
                                       # >'adduser x 0|'
      # pgrep '\b\d{3}-\d{4}\b' *

      (Where pgrep is a Perl-based 'grep' command such as this one that you might want because you like Perl regular expressions.)

      So I don't think looking at what * will expand to before you use it makes that much sense. But mostly I still consider it a very poor tool that will leak file name contents into the execution stream. That is just such a bad idea that I think most people will find such behavior very surprising and easy to forget.

      I don't feel like I'm being dramatic in saying that this should be a CERT advisory. That something as simple as using "pgrep" as root on files whose names you don't control can run arbitrary code (as root) is a serious security risk that could easily result in security being breached somewhere.

      It is easy to come up with many different ways this could end up breaching security. So far, I haven't come up with a really plausible way that I could use this to gain privileges somewhere. But the huge number of implausible ways that are so easy to come up with convince me that this is a real risk; that someone will figure out a plausible way to use this to "break in" somewhere. It is a larger security hole than many items that have been the subject of CERT advisories.

                      - tye
        Classic security exploits often involve race conditions:

        How do you suppose user blackhat will manage to predict 1) that a root user will be using pgrep in a blackhat-writable directory and 2) exactly when he should create his evil file?

        Yes, exploiting race conditions is a classic attack strategy... against processes that run with elevated privileges and which are in some way predictable. Generally, they involve doing something repeatedly in a tight loop, like creating a symlink for instance. Attempting to use this strategy against a human being does not pose a realistic threat.

        But mostly I still consider it a very poor tool that will leak file name contents into the execution stream.

        I agree that it's a misfeature. This one issue isn't enough to make me call perl¹ a "poor tool" though.

        That something as simple as using "pgrep" as root on files whose names you don't control can run arbitrary code (as root) is a serious security risk that could easily result in security being breached somewhere.

        It's true that there is a security risk present here, but that risk is really very small. There aren't even simple criteria by which to determine if any particular system has a security vulnerability due to this behavior. Building on your example, even if pgrep is installed it may be that the root user only uses it responsibly or not at all.

        But the huge number of implausible ways that are so easy to come up with convince me that this is a real risk; that someone will figure out a plausible way to use this to "break in" somewhere.

        Even if someone figures out how to use this behavior to "break in" somewhere, their attack will be specific to the system they are violating. If someone were able to write an exploit based on it that would affect any significant number of machines, the chances are that it would already have been done² a dozen times over and they'd all be available on every script-kiddy site on the web. If, on the other hand, a widely distributed perl script is found to misuse two-argument open(), then CERT should issue an advisory or at least a vulnerability note about the guilty script. In fact, there are several of those already. (e.g. VU#453475, VU#181907, VU#671444, etc.)

        It is a larger security hole than many items that have been the subject of CERT advisories.

        I respectfully disagree. Most CERT advisories address specific vulnerabilities which have well-defined exploits. There have, however, been a few general advisories such as CA-1997-25: Sanitizing User-Supplied Data in CGI Scripts which address a whole class of vulnerabilities. By the way, that one mentions Perl; it says,

        "The cause of the problem is not the CGI scripting language (such as Perl and C). Rather, the problem lies in how an individual writes his or her script. In many cases, the author of the script has not sufficiently sanitized user-supplied input."
        Let's face it though, the threat of an authorized user gaining elevated privileges on a system by seeding a directory with poisoned filenames is not nearly the same risk posed by a web user being able to gain unauthorized access to a system by feeding a CGI script a poisoned query.

        I'll say again that I do think there needs to be a change. But let's keep a realistic view of the security implications. There is no cause for a panic inducing advisory. In fact, there is nothing here that should prevent a slow graceful transition from the current default behavior to something sane. That seems to be the direction things are already going. At least we have the 3-arg open() now. I advocate educating people and I agree that there hasn't been enough of that. I'll try to do my part from here on out.


        [1] Nor, for that matter, would I call the -p or -n switches or the diamond operator "poor tools." They are just tools that require a little more caution... like a band saw or a blow torch.

        [2] This "feature" is not new; in fact, it's old. The problem is with two-argument open() not just that perl uses it with <>, -p, and such. Chip wrote about it here in Two-arg open() considered dangerous a year and a half ago. The 3-arg form was only introduced about a year and a half prior to that, iirc, when 5.6 came out. From perl56delta: "This is primarily useful for protecting against unintended magic behavior of the traditional two-argument form."

        -sauoq
        "My two cents aren't worth a dime.";
        

      Do you have an example? There is likely a better way of configuring things so that root doesn't have to do the task.

      Scripts that clean up after users. System-wide /tmp and per-user ~/tmp directories, for example. And scripts that md5sum some user files. Perhaps the smallish log rotator could be run as apache. Let's see, nope, Apache writes its logs as root.

      Juerd # { site => 'juerd.nl', plp_site => 'plp.juerd.nl', do_not_use => 'spamtrap' }

        It is time to learn about sudo, then. Write short scripts that do exactly as much as whatever extended privileges (root or otherwise) are necessary for, and no more, and give exactly specified users the permission to execute them under another exactly specified account without being asked for a password. Your /etc/sudoers might grow somewhat, but the result is a completely controlled environment.

        Makeshifts last the longest.

        Scripts that clean up after users. System wide /tmp and per-user ~/tmp directories, for example.

        Make /tmp owned by 'sys' or create a system user for it. You can do the same for ~/tmp directories, just make them group writable by a system group... But really, users should be left to clean up after themselves. Institute quotas if they refuse to do so. Give them access to cron so they can automate cleanup if they like. This has an added benefit: since it is their ~/tmp directory, they can choose how old files may get before they are removed.

        And scripts that md5sum some user files.

        I'm sure there is an easy solution, but it's hard to say what it is without more information. Why are you doing it? Which user files? Do you really need a glob to describe them or do they have well-defined names? Is it a service to users that they can be given control of (like cleaning up their ~/tmp dirs)? Can the files in question be group readable?

        Perhaps the smallish log rotator could be run as apache. Let's see, nope, Apache writes its logs as root.

        Out of the numerous ways you can handle that one, I'll point out the easiest: make the logs directory writable only by root. You shouldn't have to do anything because that's the default anyway. Since someone would need root before creating a file with an evil filename in that directory, it would be pointless for them to do so.

        -sauoq
        "My two cents aren't worth a dime.";
        
Re: Dangerous diamonds!
by diotalevi (Canon) on May 22, 2003 at 15:27 UTC

    This is a short scratch list of easily implementable ideas for defending against readline(*ARGV). The ultimate form would be a module which, when loaded, subsequently "fixes" everything else. I'm imagining it could be used like perl -MSafeARGV -ne '...'. This "fix" should have no effect on any future fixing of this behaviour in the actual codebase.

    Idea #0

    die("Unsafe readline(*ARGV)") when readline(*ARGV) is detected:

    $ perl -MO=Concise -e 'print <>'
    7  <@> leave[t1] vKP/REFC ->(end)
    1     <0> enter ->2
    2     <;> nextstate(main 1 -e:1) v ->3
    6     <@> print vK ->7
    3        <0> pushmark s ->4
    5        <1> readline[t1] lK/1 ->6
    4           <$> gv(*ARGV) s ->5

    Idea #1

    Override readline() and pass the filenames to sysopen() instead. Or detect GvNAME for the passed-in filehandle and die("Unsafe readline(*ARGV)") if *ARGV is detected.

    package SafeReadline;
    require Exporter;
    @ISA    = 'Exporter';
    @EXPORT = 'readline';

    sub import {
        my $pkg = shift;
        return unless @_;
        my $sym = shift;
        $pkg->export("CORE::GLOBAL", $sym, @_);
    }

    sub readline { ... }
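    The body left as "..." might look like this (a hedged sketch of the GvNAME variant; it assumes a CORE::GLOBAL readline override also intercepts <> in code compiled after the module loads):

        # Refuse to service the magic ARGV handle; delegate every
        # other handle to the real readline.
        sub readline {
            my ($fh) = @_;
            die "Unsafe readline(*ARGV)\n" if "$fh" eq '*main::ARGV';
            return CORE::readline($fh);
        }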

    Idea #2

    Alter the optree to get the effect of overriding readline() without actually doing that. This would get around the nullifying effects of multiple readline()-overriding modules. OK, this is harder, would require some tuits, and leaves anything less than 5.8.0 out in the cold (B::Generate is only for 5.8.0 and above).


Re: Dangerous diamonds!
by choroba (Cardinal) on Nov 02, 2017 at 10:29 UTC
    Fixed by the double diamond <<>> in 5.22.
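    In use, it is a drop-in replacement (a minimal sketch):

        # <<>> treats every element of @ARGV as a literal filename
        # and opens it with three-argument, read-only open, so a
        # name like 'echo pwned |' is just a file that fails to open.
        while (<<>>) {
            print;
        }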

    ($q=q:Sq=~/;[c](.)(.)/;chr(-||-|5+lengthSq)`"S|oS2"`map{chr |+ord }map{substrSq`S_+|`|}3E|-|`7**2-3:)=~y+S|`+$1,++print+eval$q,q,a,
Re: Dangerous diamonds!
by ambrus (Abbot) on Feb 14, 2004 at 16:51 UTC

    I don't quite like this feature either. Nevertheless, I used it in One-liner japh.

Re: Dangerous diamonds!
by Anonymous Monk on Nov 23, 2012 at 07:43 UTC