DarkBlue has asked for the wisdom of the Perl Monks concerning the following question:
I believe it's possible to start a new and independent(?) thread in Perl.
Is this also possible with a Perl script running as a CGI application?
I have a script that handles some basic database maintenance on a Berkeley
DB via a web-interface (developed in-house). I would like to be able to
order a backup of the database through the browser. This much I have
already enabled... but, the database is quite large (2.7 GB) and the backup
takes a couple of minutes to complete. In the meantime, the user is left
with a "Please Wait" message in their inactive browser, before the
inevitable time-out. This looks very unprofessional and the powers that be
want it cleaned up! :-(
I'd like to run my backup subroutine as a separate process so that the user
can initiate the backup and then carry on working. How do I do this?
In theory, there is no difference between theory and practise. But in practise, there is. Jonathan M. Hollin Digital-Word.com
Re: Starting a new process (fork?)
by Dominus (Parson) on Apr 17, 2001 at 15:46 UTC
I don't think the suggestions people have advanced so far are going to work.
Here's a copy of a usenet article I posted
a while back explaining why not.
In article <39E82ADC.79E61D63@biochem.usyd.edu.au>,
Joel Mackay <j.mackay@biochem.usyd.edu.au> wrote:
>I tried simply using:
>
>`dyana &`
>and a number of variations thereof, but the browser always waits till
>the program is finished running before continuing.
I suspect that
system("dyana >/dev/null 2>&1 &");
will do what you want.
There are several things wrong with your try. The simplest is that
the
`command`
notation specifically instructs Perl to wait until it has received all
the data from the command. Even though you 'ran the command in the
background', you sabotaged that by forcing the main program to wait
until it had received all the command's output.
Because of this sort of situation, it is usually considered bad style
to use `...` unless you are interested in getting the output of the
command back into the master program. If you are not interested in
the command output, you should probably use system instead.
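A small demonstration of the difference, assuming a POSIX shell: even when the command is backgrounded with &, backticks still wait for the pipe to close, while system with its output redirected returns at once. (The sleep/echo commands here are stand-ins for a real long-running job.)

```perl
#!/usr/bin/perl -w
use strict;

# Backticks read until EOF on the pipe; the backgrounded subshell
# still holds the write end, so this blocks for ~2 seconds.
my $t0  = time;
my $out = `(sleep 2; echo done) &`;
printf "backticks waited %d s, captured '%s'\n", time - $t0, $out;

# With stdout/stderr pointed away from the pipe, nothing holds it
# open and system() returns immediately.
$t0 = time;
system("sleep 2 >/dev/null 2>&1 &");
printf "system with redirect returned after %d s\n", time - $t0;
```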
Using
system("dyana &");
will fix this part of the problem. However, in the case of CGI
programs, that is not usually enough to get the browser to continue.
Here's what is happening.
The browser sends the request to the server. The server runs your CGI
program. It is waiting for the program to complete; then it will
gather all the output data from the command, package them up, and send
the package back to the browser. The browser will display the data
and return control to the user.
The server will not send the package until it has all the data from
the command. It is attached to the CGI program via a pipe which is
hooked into the standard output of the program. When the CGI program
prints data to the standard output, as with
print "Content-type: text/html\n\n";
or whatever, the string actually goes into the pipe, and is received
by the server. The server is waiting for this pipe to be closed.
When the pipe is closed, the server knows that no more information is
forthcoming, so it can send all the data off over the network to the
browser. Until the pipe closes, there might be more data, so the
server must wait.
Pipes close automatically when there is nobody left to write to them.
Normally, the CGI program is the only thing attached to the writing
end of the pipe, so when it exits, the pipe closes and the server
sends the data.
However, when a process runs a subprocess, the subprocess inherits all
the open files and pipes from its parent. If you did:
system("dyana &");
then dyana would inherit the pipe back to the server. When the main
CGI program exited, dyana would still be attached to the pipe, so it
wouldn't close, and the server would continue to wait for the end
of the command output. This wouldn't occur until dyana had also
exited or otherwise closed STDOUT.
The solution I suggested was:
system("dyana >/dev/null 2>&1 &");
The >/dev/null detaches dyana's standard output from the pipe and
points it into /dev/null. 2>&1 means "make the standard error go to
the same place that standard output is going to"---in this case,
/dev/null. (2>&1 may be unnecessary, but some web servers attach
standard error to the server pipe also.) The & puts the command in
the background.
>Any suggestions out there?
Hope this helps.
--
Mark Dominus
Perl Paraphernalia
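The complete pattern from Dominus's post can be sketched as a minimal CGI script (dyana stands in for whatever long-running command you need; paths and page text are placeholders):

```perl
#!/usr/bin/perl -w
use strict;

# Send the page back to the browser first.
print "Content-type: text/html\n\n";
print "<html><body>Backup started; you can keep working.</body></html>\n";

# Detach the command's stdout and stderr from the server pipe and
# background it. system() returns immediately, and when this script
# exits nothing is left holding the pipe open, so the server replies.
system("dyana >/dev/null 2>&1 &");
```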
NT has the "nul" (yes, three characters) "device":
echo "I can see ducks." > nul
Dunno how much that helps... too early in the morning...
- Zoogie
As Zoogie points out, you can use "nul" (or even
"/dev/nul"). In NT, you can use ">nul" and
"2>&1", but you can't use "&".
But you can do any of these:
system(1,"dyana >nul 2>&1");
system("start dyana");
# something with Win32::Process
-
tye
(but my friends call me "Tye")
Re: Starting a new process (fork?)
by ColtsFoot (Chaplain) on Apr 17, 2001 at 13:46 UTC
If you have a Perl script that just performs the backup, you could just "system" the script in the background
#!/usr/bin/perl -w
use strict;
use CGI;
my $page = new CGI;
print $page->header();
print $page->start_html();
my $command = qq(script/that_performs/the/backup.pl &);
system ($command);
print qq(Finished);
print $page->end_html();
Hope this helps.
Perfect! Why the hell didn't I think of that? Thanks mate.
Re: Starting a new process (fork?)
by repson (Chaplain) on Apr 17, 2001 at 14:17 UTC
Instead of relying on the shell background metacharacter like
ColtsFoot does, you can use the perl fork function.
use CGI ':standard';
my $pid = fork;
if ($pid == 0) { # we are the child
close; # close all filehandles so server won't try to stay open
exec 'backupprogram' or exit ; # transfer execution
}
elsif ($pid) { # we are the original process
print header, start_html,
'Backup initiated. You can close this window at any time',
end_html;
}
else { die "Fork failed: $!\n"; } # something went wrong
You could write that as:
fork==0 and exec 'backupprogram';
print header,start_html,'etc...'
But that wouldn't be as reliable.
Update: Remembered close behaviour wrongly, that line should read
close STDOUT; close STDERR; close STDIN;
close; # close all filehandles so server won't try to stay open
Not a big deal, but this only closes the currently selected filehandle... which is usually STDOUT,
though I think that's all you need to close for the httpd to be happy.
- Ant
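Putting repson's code together with the update, a corrected sketch might look like this (backupprogram is a placeholder; the header is printed by hand here just to keep the sketch free of the CGI.pm dependency):

```perl
#!/usr/bin/perl -w
use strict;

my $pid = fork;
die "Fork failed: $!\n" unless defined $pid;

if ($pid == 0) {     # child: run the backup detached
    close STDIN;
    close STDOUT;    # release the pipe back to the server
    close STDERR;
    exec 'backupprogram' or exit;   # transfer execution
}

# parent: answer the browser and exit normally
print "Content-type: text/html\n\n";
print "Backup initiated. You can close this window at any time.\n";
```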