
using $SIG as an IPC interrupt?

by bobbob (Acolyte)
on Aug 21, 2008 at 17:46 UTC ( [id://705900] : perlquestion )

bobbob has asked for the wisdom of the Perl Monks concerning the following question:

Hi, I have implemented an interrupt using SIGUSR1. It works well except for one problem - the latency is very bad, typically 150-200ms and sometimes as much as 500ms :-( Is this typical? I know that Perl defers the signal to handle it at a safe time, but I find it difficult to believe the latency for this should be so high. Thoughts? Thanks!

Replies are listed 'Best First'.
Re: using $SIG as an IPC interrupt?
by samtregar (Abbot) on Aug 21, 2008 at 18:04 UTC
    You could definitely use signals for this, but I probably wouldn't. The easiest way to do this kind of thing is to have your C program write to a pipe (anonymous or named) instead of a real file and have your Perl program use select() to wake up as soon as there's anything to read. I like IO::Select for making select() a little easier to manage. Rumor has it EV performs better, but I haven't tried it yet.

    If you're stuck using a real file for the data you need to pass, you could still use the pipe trick by having your C program write its "wake up" notice down the pipe.
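    A minimal sketch of the select()-on-a-pipe idea. A forked Perl child stands in for the C writer here; in the real setup the C program would write to a named pipe and the path would be whatever you choose:

```perl
use strict;
use warnings;
use IO::Select;
use IO::Handle;

# An anonymous pipe stands in for the named pipe the C program would write to.
pipe(my $reader, my $writer) or die "pipe: $!";
$writer->autoflush(1);

my $pid = fork;
die "fork: $!" unless defined $pid;
if ($pid == 0) {                # child: pretends to be the C producer
    close $reader;
    sleep 1;
    print {$writer} "new data ready\n";
    exit 0;
}

close $writer;
my $sel = IO::Select->new($reader);

# Block until there is something to read -- no polling, no latency floor.
if (my @ready = $sel->can_read(5)) {
    my $line = readline($ready[0]);
    print "woke up: $line";
}
waitpid($pid, 0);
```

    The point is that can_read() returns the moment data hits the pipe, so the wake-up latency is bounded by the kernel's scheduler, not by a polling interval.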


      Thanks. Unfortunately the perl program is not asleep waiting for the signal, it is doing other things. I thought I could fork off a process inside the perl program that would sleep until there is something to read, but it seems like it will be problematic having that process asynchronously tell the parent process to jump into the handler loop. Perhaps not?


        Yup, that's definitely problematic. I think signals are your only hope here, but you've got some significant problems to solve. For example, recent Perls do not guarantee immediate signal delivery; they only check for new signals between op-codes. Start up a long-running regex and you won't see a signal until it returns. That's better than older Perls though, which had immediate delivery but also offered a side of seg-faults with that meal.

        You may need to re-think your design - trying to have a single process handle IPC and do "other stuff" simultaneously is not a good design. Maybe you can fork off that "other stuff" and send it a signal to stop, or pause, when something more important arrives.
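        For reference, the basic signal route looks like the sketch below, assuming a modern (5.8+) perl with deferred "safe" signals. The forked child stands in for the C program, which would call kill(2) with the parent's pid in the real setup:

```perl
use strict;
use warnings;

my $got_signal = 0;
$SIG{USR1} = sub { $got_signal = 1 };   # deferred: runs between op-codes

my $pid = fork;
die "fork: $!" unless defined $pid;
if ($pid == 0) {                        # child: the "C program"
    sleep 1;
    kill USR1 => getppid();
    exit 0;
}

until ($got_signal) {
    sleep 5;    # "other stuff"; sleep returns early when a signal arrives
}
print "handled SIGUSR1\n";
waitpid($pid, 0);
```

        Note the handler only sets a flag; doing real work inside a signal handler is where the old seg-faults came from.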


        In theory threads are better for this since they both have the same data available and you just have to make sure to synchronize access to the data.

        In reality Perl interpreter threads share only data you declare as shared, but that might be just right for your application: a second thread could sleep until data arrives and then process it independently of the main thread, or it could set a shared variable, which might be pollable faster than a file descriptor; that would have to be tested.

        Also, Perl threads seem to have a bad reputation(??), but I might be mixing them up with the older threads model from before 5.6.
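        The shared-variable idea could look something like this minimal sketch, assuming a threads-enabled perl (the worker's one-second sleep stands in for blocking on a pipe or socket read):

```perl
use strict;
use warnings;
use threads;
use threads::shared;

my $new_data :shared = 0;

# Worker thread blocks until data "arrives", then flags the shared variable.
my $worker = threads->create(sub {
    sleep 1;                            # stand-in for a blocking read
    { lock($new_data); $new_data = 1; }
});

# Main thread polls the shared flag -- cheaper than stat()ing a file.
until ($new_data) {
    select(undef, undef, undef, 0.05);  # 50ms nap between checks
}
print "main thread saw the flag\n";
$worker->join;
```

        This is still polling, but polling a shared scalar in memory rather than the filesystem, so the interval can be much tighter for the same load.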

Re: using $SIG as an IPC interrupt?
by Tanktalus (Canon) on Aug 21, 2008 at 17:59 UTC

    You have a lot of options, since you apparently can control both the C and the perl code.

    IIRC, if your Perl code simply sleeps forever, any interrupt will stop the sleep early and allow your code to continue. If sleep doesn't do this, using select will.

    Speaking of select, you could have your C program spawn the perl program such that the C program can write to the perl program's stdin (and possibly read from the stdout). The perl code will be able to just wait until something is written, and then continue off merrily until it's ready for the next chunk of data. The only time that this will cause the C code to block is if it's looking for output from perl (if that's a problem, don't - doesn't sound like you need it currently) or if perl is getting really backed up and isn't already waiting for additional data, and its buffer is full (unlikely to be a problem if you're already polling every 200ms). If that's a serious problem, you can read each record from stdin and fork off a child to handle it while you wait for the next chunk, but your system will likely perform better overall if you don't do this (if the perl code is falling behind, you probably don't want dozens of processes all handling huge chunks of data).

    If parent/child is a poor choice, perhaps setting up the perl code to listen on a socket would be better. Here you have two choices - you can continue to write to the file and use the socket as the interrupt (which is useful if the C code doesn't have permissions to send the perl code a signal), or you can use that as the pipe for data directly.
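    The "socket as the interrupt" variant could be sketched with socketpair() standing in for a real listening socket; in the actual setup the C side would connect and write a byte whenever it appends to the file:

```perl
use strict;
use warnings;
use Socket;
use IO::Select;

# socketpair() gives us a connected socket pair without a listener;
# a forked child stands in for the connecting C program.
socketpair(my $app, my $notifier, AF_UNIX, SOCK_STREAM, PF_UNSPEC)
    or die "socketpair: $!";

my $pid = fork;
die "fork: $!" unless defined $pid;
if ($pid == 0) {                  # child: the notifier
    close $app;
    sleep 1;
    syswrite($notifier, "!");     # "new data in the file" ping
    exit 0;
}

close $notifier;
my $sel = IO::Select->new($app);
if ($sel->can_read(5)) {
    sysread($app, my $ping, 1);
    print "notified; now re-read the data file\n";
}
waitpid($pid, 0);
```

    The socket carries only the one-byte ping; the actual data still travels through the file, as in the original design.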

    To be honest, I don't think blocking is going to be your issue, though that, too, can be solved.

      Thanks for the response. I do have control of the code for both applications. The perl code is not sleeping forever - it is a full application.

      I'm happy to continue to pass the actual information through a file. What I want to do is simply notify the Perl app that new data is in the file, and then the Perl app would read the file as it normally does. The goal is to reduce that potential 200ms latency by an order of magnitude.

      I was hoping there might be some simpler method than setting up socket communication just to generate an interrupt :-) but if you think that's the best fit for what I'm doing I'll look into it.


Re: using $SIG as an IPC interrupt?
by lidden (Curate) on Aug 21, 2008 at 17:56 UTC
    There are SIGUSR1 and SIGUSR2, which I think are for this purpose.
      Thanks for the response. I am also a little concerned though that some other programs on the same machine could also be using these signals (or some programs in the future).
        Sending signals is restricted by the operating system - so no random user can send these signals, but only root and the user that your program runs under. (At least on unixish systems).
Re: using $SIG as an IPC interrupt?
by moritz (Cardinal) on Aug 21, 2008 at 18:01 UTC
    It depends on how frequent these events are. If it's perhaps one per second, I don't see a problem with sending SIGUSR1 or so to the script.

    Otherwise I'd just suggest using sockets, which tend to be easier to use non-blockingly than named pipes.

      The events are asynchronous. The hope is to reduce the response time from 200ms (worst case) by an order of magnitude or so. Tightening the polling loop much below 200ms is unpalatable due to system load. Thanks!
Re: using $SIG as an IPC interrupt?
by pileofrogs (Priest) on Aug 21, 2008 at 19:05 UTC

    What OS are you on? If it's a reasonably modern Linux, you could use Linux::Inotify. It does exactly what you want: The perl code gets prodded when specified files change.

    How are you checking the file for changes? If you're checking the modify time, the file system needs to have a granularity finer than the loop interval, i.e. if your fs only keeps track of modify time in seconds, a loop shorter than 1 second is pointless.

    Another OS-side option would be a fifo, AKA named pipe. This is a thing that looks like a file to your processes, but it actually takes the writes of one process and hands them as reads to another process. You'd need to use select again for this to work. This is really no different from the other pipe suggestions, except you wouldn't have to modify your C code.
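    A minimal fifo sketch: POSIX::mkfifo creates the named pipe, and a forked child stands in for the unmodified C program, which would just open and write the fifo's path as if it were a regular file:

```perl
use strict;
use warnings;
use POSIX qw(mkfifo);
use File::Temp qw(tempdir);

my $dir  = tempdir(CLEANUP => 1);
my $fifo = "$dir/notify";
mkfifo($fifo, 0700) or die "mkfifo: $!";

my $pid = fork;
die "fork: $!" unless defined $pid;
if ($pid == 0) {                       # child: the "C program"
    open my $w, '>', $fifo or die "open for write: $!";
    print {$w} "appended 42 bytes\n";
    close $w;
    exit 0;
}

# Opening a fifo for reading blocks until a writer appears --
# that block replaces the 200ms polling loop.
open my $r, '<', $fifo or die "open for read: $!";
my $msg = <$r>;
print "fifo says: $msg";
close $r;
waitpid($pid, 0);
```

    In a real event-driven application you'd open the fifo non-blocking and hand its descriptor to select rather than letting open block the whole process.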

    When you say it doesn't sleep, but does stuff in the background, are you saying it never has to wait for new data, i.e. the new data comes in so fast your script can't keep up? Or are you saying it does other unrelated jobs while it waits for new data? What kind of data is this? Why do you want to react to it faster than 200ms if your script is already busy? Does new data stream in in large volumes, or is it a rare event that signals the script should change what it's doing?

      Right now I am polling the file every 200ms and looking for the contents to change (i.e. to be appended to).

      The Perl application includes a substantial Tk GUI component; it also writes files to send messages to other applications, does some computations for the GUI, etc. It's doing other things, but when a message comes in from this C application, that is the 'high priority' task and should be processed as soon as possible.

      Data does not stream in large volumes per se, but when new data is there it is highly desirable to process it as soon as possible.

        You should seriously consider multiple threads or multiple processes. This is the kind of thing they do really well.

        Did you say what OS you're on? Can you use Inotify?

        Since you are using Tk, you could hook in POE's POE::Component::DirWatch. The DirWatch session is given time slices within the Tk event loop, so unless the GUI is clogged it should deliver directory changes fairly quickly. For immediate processing you would use signals and a signal handler.

Re: using $SIG as an IPC interrupt?
by shmem (Chancellor) on Aug 21, 2008 at 20:20 UTC


    If your program is driven by an event loop, e.g. Tk, you should use that. If it isn't, you could roll your own using alarm - or ualarm() from Time::HiRes - and use a named pipe ( see mknod(1) or mkfifo(1) ) for data transfer.

    use strict;
    use IO::File;
    use Fcntl;
    use Time::HiRes qw(ualarm);

    my $interval = 10000;    # microseconds

    {
        my ($fh, $rout, $rin);
        my $pipe = '/path/to/pipe';

        sub pipe_open {
            $fh = IO::File->new( $pipe, O_RDONLY | O_NONBLOCK )
                or die "Can't open $pipe: $!\n";
            $rin = '';
            vec($rin, fileno($fh), 1) = 1;
        }
        pipe_open();

        sub alarm_handler {
            # look if there's input at the pipe, non-blocking mode
            my ($nfound, $timeleft) = select($rout = $rin, undef, undef, 0);
            if ($nfound) {
                print STDERR "data arrived at $pipe.\n";
                while (my $line = <$fh>) {
                    print $line;
                }
                close $fh;
                pipe_open();
            }
            ualarm($interval);
        }
    }

    # set up signal handler
    $SIG{ALRM} = \&alarm_handler;
    ualarm($interval);

    # main program
    sleep while 1;    # the real program certainly does something more interesting

    You control the latency by tweaking $interval.

Re: using $SIG as an IPC interrupt?
by SilasTheMonk (Chaplain) on Aug 21, 2008 at 19:38 UTC
    Have you considered rewriting the Perl program as a SOAP server? I have to admit I am familiar with an environment where a lot of work has gone into this, so I cannot comment on the CPAN offering.
      SOAP doesn't solve bobbob's problem, because it still requires the data exchange to happen somehow. Instead it adds an unnecessary bloat layer.
Re: using $SIG as an IPC interrupt?
by jdporter (Chancellor) on Aug 22, 2008 at 13:08 UTC

    For sending simple messages between cooperating processes, you could use semaphores. They're somewhat more robust than signals, and are specifically designed for coordinating access to shared resources (such as a file or shared memory).
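    A minimal SysV semaphore sketch using the core IPC::SysV and IPC::Semaphore modules. A forked child stands in for the C program, which would call semop(2) on a semaphore created with an agreed-upon key rather than IPC_PRIVATE:

```perl
use strict;
use warnings;
use IPC::SysV qw(IPC_PRIVATE IPC_CREAT S_IRUSR S_IWUSR);
use IPC::Semaphore;

# One semaphore, initialized to 0: "no new data yet".
my $sem = IPC::Semaphore->new(IPC_PRIVATE, 1, S_IRUSR | S_IWUSR | IPC_CREAT)
    or die "semget: $!";
$sem->setval(0, 0);

my $pid = fork;
die "fork: $!" unless defined $pid;
if ($pid == 0) {                # producer: the "C program"
    sleep 1;
    $sem->op(0, 1, 0);          # post: new data is in the file
    exit 0;
}

$sem->op(0, -1, 0);             # wait: blocks until the producer posts
print "semaphore posted; reading the file now\n";
waitpid($pid, 0);
$sem->remove;
```

    The consumer blocks in op() instead of polling, and the semaphore count means a post is never lost even if the consumer is busy when it arrives, which plain signals can't guarantee.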

    Between the mind which plans and the hands which build, there must be a mediator... and this mediator must be the heart.