
Handling long running operations started by CGI requests

by oyse (Monk)
on Jun 15, 2009 at 15:46 UTC ( #771691=perlquestion: print w/replies, xml ) Need Help??

oyse has asked for the wisdom of the Perl Monks concerning the following question:

Hello fellow monks,

I am currently implementing a system where a CGI request can initiate a long-running operation. This operation involves sending data to or retrieving data from other systems using web services. I don't know exactly how long the operation will take, but I expect it can take anywhere from 1 to 30 minutes, split over several smaller operations.

A simple way to implement this seems to be to put all the operations in a queue implemented as a database table. A separate task on the web server will be responsible for checking the queue; if there are any operations in it, the task performs them. If the queue is empty, it simply waits.
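A minimal sketch of that queue-table approach, assuming a hypothetical `jobs` table, DBI over ODBC (a reasonable fit for Windows/IIS), and SQL Server-style `TOP 1` syntax; table, DSN, and column names are all invented for illustration:

```perl
use strict;
use warnings;
use DBI;

# Hypothetical schema:
#   CREATE TABLE jobs (
#       id      INTEGER IDENTITY PRIMARY KEY,
#       type    VARCHAR(50)  NOT NULL,
#       payload VARCHAR(MAX),
#       status  VARCHAR(20)  NOT NULL DEFAULT 'pending',
#       created DATETIME     NOT NULL DEFAULT GETDATE()
#   );

my ($user, $pass) = ('user', 'secret');   # placeholders
my $dbh = DBI->connect('dbi:ODBC:MyDSN', $user, $pass, { RaiseError => 1 });

# The CGI request only enqueues the work and returns immediately.
sub enqueue_job {
    my ($type, $payload) = @_;
    $dbh->do('INSERT INTO jobs (type, payload) VALUES (?, ?)',
             undef, $type, $payload);
}

# The separate queue-checking task: claim one pending job at a time.
sub run_worker {
    while (1) {
        my ($id, $type, $payload) = $dbh->selectrow_array(
            q{SELECT TOP 1 id, type, payload FROM jobs
              WHERE status = 'pending' ORDER BY created});
        if (defined $id) {
            $dbh->do(q{UPDATE jobs SET status = 'running' WHERE id = ?},
                     undef, $id);
            process_job($type, $payload);   # the long-running web service calls
            $dbh->do(q{UPDATE jobs SET status = 'done' WHERE id = ?},
                     undef, $id);
        }
        else {
            sleep 10;   # nothing queued; a short poll interval keeps startup latency well under a minute
        }
    }
}
```

A 10-second poll interval is a compromise: frequent enough to satisfy the responsiveness requirement below, infrequent enough not to load the database.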

So far so good. The problem is implementing the task on the web server that will check the queue. I could implement this as a client (CGI request) and server (task checking the queue), or as a periodic script initiated by some service, but what is the common way to do this? If anyone has experience with this type of thing and knows of any best practices, I would like to hear them before I start.

Some requirements that must be taken into account to some degree:

  • Responsive. The user should not have to wait long for the task to start (more than one minute is probably too long).
  • Easy to monitor. It should be easy to check whether the server task is running or not.
  • Easy to implement. I have limited time, and this is for only one customer, so ease of implementation matters more than flexibility.
  • Robust w.r.t. system restarts. If the system restarts, the server task should come back online at the same time as the web server becomes available.

BTW, the application will run on Windows and IIS, so Unix solutions will not help me much.


Re: Handling long running operations started by CGI requests
by fenLisesi (Priest) on Jun 15, 2009 at 16:56 UTC
    We have such a system as you describe that runs on Windows/Apache as well as Unix variants. We have never tried IIS. We use CGI scripts to control the services. (Win32::Service and Win32::Daemon are involved in these.) The queue-checking and job-implementing code runs as a service. We provide a page that allows you to follow what this service has been up to. We support regularly scheduled as well as one-time jobs in the queue. I can't promise you that it will be easy to build, unless you take most of it from a libre project. For us, it was a relatively small part of a big multi-year development effort. I remember that we have had some problems with Win32::Daemon, but the whole thing is working pretty well now, in production for a year or so. Cheers.
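    Where Win32::Service comes in: the CGI side can query (or start/stop) the queue-runner service directly, which also covers the "easy to monitor" requirement. A sketch, with "MyJobService" as a placeholder service name:

```perl
use strict;
use warnings;
use Win32::Service;

# CGI-side health check: ask the Service Control Manager for the
# service's status. CurrentState 4 means SERVICE_RUNNING.
my %status;
Win32::Service::GetStatus('', 'MyJobService', \%status);

print "Content-type: text/plain\n\n";
if (($status{CurrentState} || 0) == 4) {
    print "queue service is running\n";
}
else {
    print "queue service is NOT running\n";
}
```

    Win32::Service::StartService('', 'MyJobService') could likewise restart it from an admin page, given sufficient privileges.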
Re: Handling long running operations started by CGI requests
by afoken (Canon) on Jun 15, 2009 at 18:24 UTC

    I would make the CGI create a new process for the long operation. Write the PID of the new process to a place where a later CGI call can find it (e.g. the database). Use some kind of IPC to ask the long-running process for its status, to abort it, and so on. OLE could work, or local TCP or UDP sockets. But this may be a little too simple, because it does not take system load into account.
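    On Windows, that spawn-and-record step could look like the following sketch using Win32::Process; the perl path, worker script, job id, and `pid` column are all placeholders:

```perl
use strict;
use warnings;
use Win32;
use Win32::Process;
use DBI;

# Spawn the long-running worker as a detached process so it outlives
# the CGI request, then record its PID for later status queries.
my $proc;
Win32::Process::Create(
    $proc,
    'C:\\Perl\\bin\\perl.exe',                  # placeholder perl path
    'perl C:\\app\\long_job.pl --job-id 42',    # placeholder worker script
    0,                                          # don't inherit handles
    DETACHED_PROCESS | NORMAL_PRIORITY_CLASS,
    '.',
) or die Win32::FormatMessage(Win32::GetLastError());

my $dbh = DBI->connect('dbi:ODBC:MyDSN', 'user', 'secret',
                       { RaiseError => 1 });
$dbh->do('UPDATE jobs SET pid = ? WHERE id = ?',
         undef, $proc->GetProcessID, 42);
```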

    Another way would be a permanently running daemon / Windows service. That service has to implement an IPC interface (OLE, local TCP or UDP sockets). The CGI is just managing that service; when a new long job has to start, it just tells the service to start the job. The service decides when and how to start the job.
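    The IPC interface of such a service can be as small as a line-oriented protocol on a loopback TCP socket. A sketch with an invented port and commands; queue_job and status_of stand in for the real queue logic:

```perl
use strict;
use warnings;
use IO::Socket::INET;

# Placeholder stubs; the real service would read/write the jobs table.
my %state;
sub queue_job { $state{ $_[0] } = 'queued' }
sub status_of { $state{ $_[0] } || 'unknown' }

# Loopback-only listener: the CGI connects and sends one command line.
my $listener = IO::Socket::INET->new(
    LocalAddr => '127.0.0.1',   # local only; not reachable from outside
    LocalPort => 9000,          # invented port
    Listen    => 5,
    Reuse     => 1,
) or die "listen failed: $!";

while (my $client = $listener->accept) {
    my $line = <$client>;
    if ($line =~ /^START\s+(\d+)/) {
        queue_job($1);          # the service decides when to actually run it
        print $client "OK\n";
    }
    elsif ($line =~ /^STATUS\s+(\d+)/) {
        print $client status_of($1), "\n";
    }
    close $client;
}
```

    Keeping the listener bound to 127.0.0.1 means only processes on the web server itself (i.e. the CGI scripts) can issue commands.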

    Robustness w.r.t. system restarts as you describe it is not too hard with the service approach, as services can be set to auto-start. If you mean that jobs aborted by the reboot have to be restarted, you need to track the job statuses in the service.
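    With status tracking in the database, reboot recovery reduces to one statement run at service startup (assuming a jobs table with a status column, as in the question's design; DSN and credentials are placeholders):

```perl
use strict;
use warnings;
use DBI;

# At service startup, put jobs that were mid-flight when the machine
# went down back into the queue, so a reboot doesn't lose work.
my $dbh = DBI->connect('dbi:ODBC:MyDSN', 'user', 'secret',
                       { RaiseError => 1 });
$dbh->do(q{UPDATE jobs SET status = 'pending' WHERE status = 'running'});
```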

    If you want to delay the startup until the webserver runs, the web server has to notify the long job service. mod_perl offers a startup hook that could do this.
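    For the mod_perl (Apache) case, that notification could live in a startup file loaded with PerlRequire, which runs once when the server starts; the port and message are invented, and on IIS an equivalent hook would have to be found:

```perl
# startup.pl, loaded via "PerlRequire startup.pl" in httpd.conf.
# Runs once at server start, so it can wake the job service.
use strict;
use warnings;
use IO::Socket::INET;

my $svc = IO::Socket::INET->new(
    PeerAddr => '127.0.0.1',
    PeerPort => 9000,           # invented service port
);
print $svc "WEBSERVER_UP\n" if $svc;   # ignore failure if service is down

1;   # startup files must return a true value
```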


    Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)
      Thanks to both of you for the input. This was just the type of information I needed.

Node Type: perlquestion [id://771691]
Approved by wfsp