PerlMonks  

script suicide?

by AssFace (Pilgrim)
on Jul 21, 2003 at 01:05 UTC ( [id://276121]=perlquestion )

AssFace has asked for the wisdom of the Perl Monks concerning the following question:

I have had an account at pair.com for a few years. Until very recently, it was a shared account, so any script that ran for more than 30 seconds was killed.

I now have my own pair dedicated server. One of the great benefits of that is that my scripts can now run as long as they want... One of the downsides is that my scripts can now run as long as they want.

I have a script that connects to eBay and grabs some feedback from random users, and then connects to Yahoo and grabs the headlines. This script works perfectly most of the time.
But sometimes it just runs pretty much forever and takes up all of the CPU. I can log in every now and then, check uptime to see how things are going, and then kill the script if need be... but aren't all us Perl users lazy by default? I'd rather automate this script's death.

The script has loops, so dumping out checkpoints to see where it is snagging up is a real hassle, especially since I've never gotten it to hang on me while testing; it only does it "in the wild" sometimes. The loops also generate a lot of output. Sure, I could set things up to log only every N iterations, but then I might miss whatever happens between those N times, so I would rather avoid that.

As far as I can tell, aside from rewriting the script or spending a lot of time debugging it to find the odd times it goes astray, I have two easy options. I could write my own script, run it via cron, and have it check for this thing hogging the CPU and kill it. Or, and this is what I would rather do, I could have the script itself track how long it has been alive, and if that exceeds some amount of time (30 seconds or so), kill itself.


I post here to ask how either of those options might be done, or whether there is an even better way that I'm just not seeing at this point.
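For what it's worth, the cron-watchdog option can be sketched roughly like this. The script name ebay_feedback.pl and the 30-second CPU budget are made-up placeholders:

```perl
#!/usr/bin/perl
# Hypothetical watchdog, run from cron every minute or so.  It parses
# the output of ps and TERMs any ebay_feedback.pl process whose
# accumulated CPU time exceeds a budget.
use strict;
use warnings;

my $limit = 30;    # seconds of CPU time allowed (an assumption)

# Convert ps's [[HH:]MM:]SS cputime field to plain seconds.
sub cpu_secs {
    my $secs = 0;
    $secs = $secs * 60 + $_ for split /:/, $_[0];
    return $secs;
}

for my $line (`ps -eo pid,time,args`) {
    next unless $line =~ /ebay_feedback\.pl/;    # hypothetical name
    my ($pid, $time) = $line =~ /^\s*(\d+)\s+(\S+)/ or next;
    kill 'TERM', $pid if cpu_secs($time) > $limit;
}
```

The exact ps flags vary by platform, so check your ps manpage before trusting this.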

-------------------------------------------------------------------
There are some odd things afoot now, in the Villa Straylight.

Replies are listed 'Best First'.
Re: script suicide?
by bobn (Chaplain) on Jul 21, 2003 at 01:09 UTC

    perldoc -f alarm

    alarm allows a program to set a timer; when the timer expires, a SIGALRM is sent to the process.

    By eval'ing a block that includes re-assignment of the $SIG{ALRM} handler, then setting the alarm, the block can be made to exit if the timer expires before the block finishes normally. Examining $@ allows you to ensure that the block died for the expected reason. Very useful for anything that interacts over the network.
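In case it helps, here is one way that idiom might look; the two-second timeout and the sleep standing in for the slow network fetch are just for illustration:

```perl
use strict;
use warnings;

# The eval/alarm timeout idiom from perlipc.  The "\n" in the die
# message keeps Perl from appending "at FILE line N", so the $@ test
# below can use a plain string comparison.
eval {
    local $SIG{ALRM} = sub { die "timeout\n" };
    alarm(2);          # would be 30 in the real script
    sleep 10;          # stands in for the slow eBay/Yahoo fetch
    alarm(0);          # cancel the timer if we finished in time
};
if ($@) {
    die $@ unless $@ eq "timeout\n";    # propagate unexpected errors
    warn "fetch timed out, moving on\n";
}
```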

    I think a process can only have one alarm running at a time, so if any of your routines, or a module you're use'ing, already uses it, you may be out of luck. But if the module is using alarm, it should be doing so to implement a timeout of its own, so the trick is to make it time out the way you want it to.

    --Bob Niederman, http://bob-n.com

      Or you could assign a handler to $SIG{ALRM} and exit that way. I think that's the name anyway. Read perlipc for the details.

      <plug> If you'd like to be able to have multiple alarms running, without affecting (or being affected by) other code you use which sets alarms, you could use the :OVERRIDE option of Alarm::Concurrent. One should be wary of using this option in production-level code, as it overrides %SIG, but the OP's situation doesn't seem like it would be a problem. And, of course, this is just a standard warning, as I've never actually had it cause a problem for me.

      bbfu
      Black flowers blossom
      Fearless on my breath

Re: script suicide?
by dws (Chancellor) on Jul 21, 2003 at 04:42 UTC
    The script has loops, so dumping out checkpoints to see where it is snagging up is a real hassle ... it generates a lot of output with the loops

    So write a script to analyze the log. Looking in logfiles for evidence of accidental loops isn't rocket science. If you follow a bit of discipline in how you format your log entries, parsing them should be straightforward.
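As a rough illustration (the loop=NAME iter=N log format here is invented), a few lines of Perl are enough to pull the last reported iteration out of such a log and flag a runaway loop:

```perl
use strict;
use warnings;

# Hypothetical log excerpt; in practice you'd read the real logfile.
my $log = <<'LOG';
2003-07-21 01:05:09 loop=ebay iter=12
2003-07-21 01:05:10 loop=yahoo iter=100041
LOG

# Remember the last iteration count reported by each loop.
my %last;
for my $line (split /\n/, $log) {
    $last{$1} = $2 if $line =~ /loop=(\w+)\s+iter=(\d+)/;
}

# Anything past a sane bound is probably stuck.
for my $loop (sort keys %last) {
    print "$loop looks stuck ($last{$loop} iterations)\n"
        if $last{$loop} > 1000;
}
```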

Re: script suicide?
by Abigail-II (Bishop) on Jul 21, 2003 at 07:43 UTC
    To have a script kill itself after 30 seconds:
    $SIG {ALRM} = sub {exit};
    alarm (30);

    There's one caveat, however: if you use Perl 5.8.0 and your script is "hanging" inside a single Perl operation, the signal will be queued until that operation finishes.

    Abigail

      But then you can get the "old" dangerous pre 5.8.0 signal handling back by using POSIX::sigaction. YMMV.

      Liz
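If I understand Liz correctly, that looks something like the following sketch; the two-second timeout is illustrative, and note that older versions of POSIX::SigAction may want a handler name rather than a code ref:

```perl
use strict;
use warnings;
use POSIX qw(SIGALRM);

# Install the ALRM handler via sigaction rather than %SIG, so that it
# is delivered immediately instead of being deferred by 5.8.0's
# safe-signal queue.
POSIX::sigaction(SIGALRM,
        POSIX::SigAction->new(sub { die "timeout\n" }))
    or die "Error setting SIGALRM handler: $!\n";

eval {
    alarm(2);
    sleep 10;      # stands in for the operation that hangs
    alarm(0);
};
warn "timed out\n" if $@ eq "timeout\n";
```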

      Hmm, trying this in a small test script didn't seem to work. Which means I'm probably not doing it right.
      The code below would make me think that it will print that out for 5 seconds and then stop via the exit call.
      Instead it just runs and runs, happy as can be.
      $SIG {ALRM} = sub {exit};
      alarm (5);
      while (1) {
          print "hairy fishnuts\n";
      }
      "perl -v" tells me "This is perl, v5.6.1 built for i386-freebsd"

      What am I doing wrong?

      -------------------------------------------------------------------
      There are some odd things afoot now, in the Villa Straylight.
        You either have a bug in your particular installation of Perl or in the OS (I tried it with the same version of Perl, but a different OS), or you aren't patient enough. Did you by any chance run this without redirecting standard output? Then it might take more than 5 seconds, and if you are doing this remotely over a slow network, it can take even longer. For me, the program gets aborted after 5 seconds if I redirect output, but it takes 8 seconds before I get a prompt back if I don't. With a slow network, or a slow terminal, it might take minutes.

        Abigail

Re: script suicide?
by hatter (Pilgrim) on Jul 21, 2003 at 11:26 UTC
    I suspect the shared hosting you had used rlimit or ulimit to set the resources available to a process. For CGI scripts, this can be set with the RLimit* directives in Apache; for system commands such as your cron jobs, have a poke around the various manpages for ulimit/rlimit/setrlimit/etc. The correct or most appropriate ones depend on which OS and version you're running.

    A quick flick through CPAN finds that the psh perl shell has a ulimit function, but not much else. You might want to look in closer detail at the Proc:: hierarchy in case there's something useful hidden in there, if you'd rather do it within the script than in the shell that calls the script.
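One CPAN possibility along those lines is BSD::Resource (not a core module, so treat this as untested speculation for the OP's box): it exposes setrlimit, so the script can cap its own CPU time and let the kernel do the killing:

```perl
use strict;
use warnings;
use BSD::Resource qw(setrlimit getrlimit RLIMIT_CPU);   # CPAN module

# Soft limit: the kernel sends SIGXCPU after 30s of CPU time.
# Hard limit: SIGKILL after 35s, in case SIGXCPU is ignored.
setrlimit(RLIMIT_CPU, 30, 35)
    or die "setrlimit failed: $!\n";

my ($soft, $hard) = getrlimit(RLIMIT_CPU);
print "CPU time capped at ${soft}s (hard limit ${hard}s)\n";
```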

    the hatter
