Re: Closing stdout on CGI early.

by dga (Hermit)
on Aug 31, 2001 at 02:05 UTC


in reply to Closing stdout on CGI early.

If Apache requires your program to exit before it will close the connection to the browser, one possibility is to close STDOUT and fork(): the child does the work, and the parent can exit as soon as it is done with the output.

If you ever move to mod_perl, this will defeat the reduction in startup overhead, since you will need to start a new process every time anyway; that is, if exiting early doesn't break mod_perl entirely.

For normal CGI this may work OK for you.
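
A minimal sketch of that approach for a plain CGI under Apache (do_slow_work is a hypothetical stand-in for the real job):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use POSIX 'setsid';

    # Send the complete response first.
    print "Content-Type: text/plain\r\n\r\n";
    print "Your request has been queued.\n";

    # Close the handles Apache is waiting on, so it can finish
    # the response to the browser.
    close STDOUT;
    close STDERR;

    # Fork; the parent exits immediately, so Apache sees the CGI
    # finish. The child no longer holds the pipe to Apache open,
    # because STDOUT was closed before the fork.
    defined(my $pid = fork) or exit 1;
    exit 0 if $pid;

    # Child: detach from Apache's process group, then do the work.
    setsid();
    do_slow_work();

    sub do_slow_work { sleep 30 }    # stand-in for the real job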

Update: Another thought. I have a system which needs some slowly changing data from another database, so I have a daemon process that listens on a FIFO; my CGI script writes an information request to the FIFO and then goes on with its life.

The daemon process then reads the FIFO and fetches the updates completely independently of the web process. The CGI checks the FIFO in non-blocking mode for obvious reasons, and simply skips the update request if the FIFO isn't ready to take the data for any reason.

As a side benefit, the web system can run 'offline' for a while, completely transparently to the end users.
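
For illustration, the two ends of the FIFO might look something like this; the path, the request format, and handle_request are all made up for the example:

    use Fcntl qw(O_WRONLY O_NONBLOCK);

    my $fifo = '/var/run/updates.fifo';    # hypothetical path

    # CGI side: O_NONBLOCK makes the open fail right away when no
    # daemon has the FIFO open for reading, so the CGI skips the
    # update request instead of hanging.
    if (sysopen my $fh, $fifo, O_WRONLY | O_NONBLOCK) {
        syswrite $fh, "refresh user=42\n";    # small FIFO writes are atomic
        close $fh;
    }

    # Daemon side: block reading the FIFO, and reopen it whenever
    # the last writer closes (a FIFO read returns EOF then).
    while (1) {
        open my $reader, '<', $fifo or die "open $fifo: $!";
        while (my $line = <$reader>) {
            handle_request($line);
        }
        close $reader;
    }

    sub handle_request { my ($req) = @_; warn "got: $req" }    # stand-in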

Re: Closing stdout on CGI early. - how better to start daemon?
by pmas (Hermit) on Aug 31, 2001 at 17:33 UTC
    My solution would be to use a database - I am a database guy... ;-)

    When you want another process to handle something, delegate: write the parameters of what needs to be done into some table, and set the status to 'Scheduled'. Then the "boss" process is done and can return.
    Later (nightly), you can run the "subordinate" script via cron. It will read all the scheduled tasks and try to process them, changing the status to 'In Process' and then, after completion, to 'OK' or 'Failure' (saving the result, sending email, or whatever you need), and it will exit when no more tasks are scheduled. I do not have experience with Apache; if starting a new process via cron is too expensive, or you need results processed ASAP, just leave the "subordinate" running all the time.

    Later "boss" process can check status of tasks scheduled (you may sort them by userID or something), and even try to re-schedule rejected tasks (after fixing error).

    Database transactions will handle all record locking.

    Does it make sense? I plan to use something like that in my system later, not now, so treat the sketch below as rough and untested.
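
    A rough sketch of that loop, assuming a task_queue table with (id, params, status) columns, DBI against a database that supports SELECT ... FOR UPDATE, and a hypothetical process_task routine:

        use DBI;

        # Hypothetical DSN and credentials.
        my $dbh = DBI->connect('dbi:Pg:dbname=tasks', 'worker', 'secret',
                               { AutoCommit => 0, RaiseError => 1 });

        while (1) {
            # FOR UPDATE locks the claimed row until we commit, so two
            # workers can never grab the same task.
            my ($id, $params) = $dbh->selectrow_array(q{
                SELECT id, params FROM task_queue
                 WHERE status = 'Scheduled'
                 ORDER BY id LIMIT 1 FOR UPDATE
            });
            last unless defined $id;    # no more tasks scheduled

            $dbh->do(q{UPDATE task_queue SET status = 'In Process'
                        WHERE id = ?}, undef, $id);
            $dbh->commit;

            my $ok = eval { process_task($params); 1 };
            $dbh->do(q{UPDATE task_queue SET status = ? WHERE id = ?},
                     undef, $ok ? 'OK' : 'Failure', $id);
            $dbh->commit;
        }
        $dbh->disconnect;

        sub process_task { my ($params) = @_; sleep 1 }    # stand-in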

    Can you monks with expertise in Apache tell me which method is better (less of a resource hog) for running the "subordinate": leaving it running all the time (no need to start a new process), sleeping between checks, or via cron, which requires starting a new process occasionally? I understand it is a trade-off; what would be the guidelines for the right decision?

    pmas
    To make errors is human. But to make a million errors per second, you need a computer.
