Going back to the original problem yet again: you have a daemon that accepts connections but misses new requests while servicing the last one. The next idea, following the way Apache functionally solves the same problem, is to have a master daemon spawn eight eager daemonets, all grabbing connections and servicing them. They aren't busy for long, because they spawn another process to actually handle the request (e.g. a Perl interpreter, if a CGI program is mapped to the URL) and then go back to grabbing the next one. Translated back to your situation: if one daemonet is busy, there is always another one available to make the accept call. The loop each daemonet runs would be something like:
use IO::Socket::INET;
my $listen = IO::Socket::INET->new(LocalPort => 8080, Listen => 8) or die $!;
while (my $request = $listen->accept()) {
    # handle request ... BUT ... read on
}
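To make the master/daemonet idea concrete, here is a minimal sketch of it in Perl. The port is picked by the OS, the child count and the "handled by daemonet" reply are made up for the demo, and each daemonet exits after one request so the script terminates; a real daemonet would loop back to accept(). The key point it demonstrates: all children block in accept() on the one listening socket the master opened, so a busy child never blocks the others.

```perl
use strict;
use warnings;
use IO::Socket::INET;

# Master opens the listening socket once; every forked daemonet
# inherits it and blocks in accept() on it.
my $listen = IO::Socket::INET->new(
    LocalAddr => '127.0.0.1',
    Listen    => 8,
    ReuseAddr => 1,
) or die "listen: $!";
my $port = $listen->sockport;

my $CHILDREN = 3;    # Apache-style prefork would use 8 or more
my @kids;
for my $n (1 .. $CHILDREN) {
    defined(my $pid = fork) or die "fork: $!";
    if ($pid == 0) {                       # child = daemonet
        while (my $request = $listen->accept()) {
            print {$request} "handled by daemonet $n\n";
            close $request;
            exit 0;   # demo only: a real daemonet loops back to accept()
        }
        exit 0;
    }
    push @kids, $pid;
}

# The master doubles as a client here, just to drive the demo.
my @replies;
for (1 .. $CHILDREN) {
    my $c = IO::Socket::INET->new(PeerAddr => '127.0.0.1', PeerPort => $port)
        or die "connect: $!";
    chomp(my $line = <$c>);
    push @replies, $line;
    close $c;
}
waitpid $_, 0 for @kids;
print scalar(@replies), " requests serviced\n";
```

Note that having every child call accept() on the same socket is exactly what pre-forking servers do; the kernel hands each incoming connection to one of the blocked children.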
Instead of spawning new processes to handle requests like Apache does, or forking, or threading, the daemonets could write each request to a FIFO. That way you separate the asynchronous process of collecting and filing requests from the synchronous process of reading them off the queue and servicing them. It is generally a good idea not to mix asynchronous and synchronous processing in the same process (which seems to be at the root of it all) - in fact, that would be an oxyMoron ;)
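A small sketch of that FIFO split, with illustrative names throughout (the fifo path and the request strings are invented for the demo): the forked child plays the asynchronous daemonet side, appending one request per line, and the parent plays the synchronous worker draining the queue in order. Writes of a line well under PIPE_BUF are atomic, which is what keeps multiple writers from interleaving.

```perl
use strict;
use warnings;
use POSIX qw(mkfifo);
use File::Temp qw(tempdir);

my $dir  = tempdir(CLEANUP => 1);
my $fifo = "$dir/requests.fifo";
mkfifo($fifo, 0700) or die "mkfifo: $!";

defined(my $pid = fork) or die "fork: $!";
if ($pid == 0) {
    # Asynchronous side (the daemonet): file requests and move on.
    open my $out, '>', $fifo or die "open fifo for write: $!";
    print {$out} "GET /status\n";
    print {$out} "GET /index.html\n";
    close $out;                  # EOF tells the reader we're done
    exit 0;
}

# Synchronous side: read requests off the queue and service them.
open my $in, '<', $fifo or die "open fifo for read: $!";
my @serviced;
while (my $request = <$in>) {
    chomp $request;
    push @serviced, $request;    # service the request here
}
close $in;
waitpid $pid, 0;
print "serviced: @serviced\n";
```

Opening a FIFO blocks until both ends are open, so neither side races ahead of the other; the reader simply works through whatever the daemonets have filed, in order.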