PerlMonks
Re: Perl Daemons
by jsadusk (Acolyte) on Jul 29, 2004 at 02:37 UTC ( [id://378272] )
One way of making Perl an effective language for daemons is to implement your daemon in the traditional UNIX manner (something that has rather fallen out of style lately). Have your listener be the same Perl program that does the work, but when it gets a request, fork(). Don't exec after that; just fork and handle the request in the child, and have each request handler exit when it's done.

This buys you several things. You get parallel processing of requests. Any code likely to leak memory (possibly from variables not being garbage collected) exits immediately after finishing, taking the leak with it. And since you fork without exec'ing, you're not restarting the Perl interpreter. On any modern UNIX (Linux, Solaris, BSD, whatever), fork() uses copy-on-write pages, so you're not making a huge in-memory copy of your entire Perl process; this is especially the case if your listener sits in a tight loop and doesn't update much data.

One drawback to this approach is getting information back from the handler processes. You can use pipes, or named pipes, but then you're limited to a stream of text that needs to be parsed. If your handlers are mostly atomic, though, this is a non-issue. Another drawback is that a ton of requests means a ton of processes, and your system scheduler and VM start slowing down; that's only a drawback compared to a threaded system, however. Still, for medium-load daemons with medium amounts of communication, this approach works extremely well, and I've used it multiple times.
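A minimal sketch of that fork-per-request loop, using core modules. The port, the one-line echo protocol, and the `handle_request` body are hypothetical stand-ins; only the fork-then-exit pattern (and reaping children so they don't linger as zombies) is the technique from the post:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::INET;
use POSIX ":sys_wait_h";

# Reap finished handler children so they don't accumulate as zombies.
$SIG{CHLD} = sub { 1 while waitpid(-1, WNOHANG) > 0 };

# Hypothetical request handler: read one line, do the "work", reply.
sub handle_request {
    my ($client) = @_;
    my $line = <$client>;
    print $client "OK: $line";
    close $client;
}

sub serve {
    my ($port) = @_;
    my $listener = IO::Socket::INET->new(
        LocalPort => $port,
        Listen    => 5,
        ReuseAddr => 1,
    ) or die "listen: $!";

    # The parent stays in this tight accept loop, updating little data,
    # so copy-on-write keeps each fork cheap.
    while (my $client = $listener->accept) {
        my $pid = fork();
        die "fork: $!" unless defined $pid;
        if ($pid == 0) {
            close $listener;          # child doesn't need the listening socket
            handle_request($client);
            exit 0;                   # any leaked memory dies with the process
        }
        close $client;                # parent: drop its copy, go back to accept
    }
}

serve($ARGV[0]) if @ARGV;             # e.g. ./daemon.pl 7000
```

The `exit 0` in the child is the key line: a handler never lives long enough for slow leaks to matter.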
In Section: Seekers of Perl Wisdom