Re^6: Help designing a threaded service

by zwon (Abbot)
on Jan 26, 2014 at 15:42 UTC


in reply to Re^5: Help designing a threaded service
in thread Help designing a threaded service

With forks (*nix), when you have multiple processes all waiting to accept on a shared socket, when a client connects, *every* listening process receives the connect.
That would be horrible, but fortunately it's not true. If multiple processes are waiting for a connection on the same socket, then when a client connects, *only one* listening process accepts the connection. Here's a simple example that demonstrates this:
    use 5.010;
    use strict;
    use warnings;
    use IO::Socket::INET;

    my $sock = IO::Socket::INET->new(LocalPort => 7777, Listen => 10);
    for (1..3) {
        my $pid = fork;
        unless ($pid) {
            my $cli = $sock->accept;
            say "Process $$ accepted connection from " . $cli->peerport;
            print while <$cli>;
            exit 0;
        }
    }
Try to connect to port 7777 and you will see that only one process accepts the connection. Hence there's no need for any global mutexes.
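To try it, a minimal client sketch could look like the following (this is an illustration assuming the server example above is already running on the same host, on port 7777 as in that example):

```perl
use strict;
use warnings;
use IO::Socket::INET;

# Connect to the forked-accept server above; only one of its
# three children will report accepting this connection.
my $cli = IO::Socket::INET->new(
    PeerAddr => 'localhost',
    PeerPort => 7777,
) or die "connect failed: $!";

print $cli "hello from client $$\n";
close $cli;
```

Running it three times should show each connection being accepted by exactly one child process per connection.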

Re^7: Help designing a threaded service
by BrowserUk (Patriarch) on Jan 26, 2014 at 16:06 UTC

    Hm. The description was based upon the implementation of nginx server.

    Which states that:

    After the main NGINX process reads the configuration file and forks into the configured number of worker processes, each worker process enters into a loop where it waits for any events on its respective set of sockets.

    Each worker process starts off with just the listening sockets, since there are no connections available yet. Therefore, the event descriptor set for each worker process starts off with just the listening sockets.

    When a connection arrives on any of the listening sockets (POP3/IMAP/SMTP), each worker process emerges from its event poll, since each NGINX worker process inherits the listening socket. Then, each NGINX worker process will attempt to acquire a global mutex. One of the worker processes will acquire the lock, whereas the others will go back to their respective event polling loops.

    Meanwhile, the worker process that acquired the global mutex will examine the triggered events, and will create necessary work queue requests for each event that was triggered. An event corresponds to a single socket descriptor from the set of descriptors that the worker was watching for events from.

    *nix isn't my world, so I'll leave it to you and others to decide if your observations or the implementation of a widely used and well tested server is correct here.


    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.

      On at least some versions of some Unix systems, multiple processes waiting on the same socket will all be woken up, but only the first one to ask will get the connection or data that triggered the wake-up. Since nginx is setting up "necessary work queue requests" in order to handle the connection coming in, it is useful for only one process to do that. Though I'm not completely convinced that the nginx authors didn't implement this protection out of misunderstanding rather than real need.

      I believe you don't need to worry about this implementation detail, at least in most cases.

      My vague memory of one report of this "every process wakes up" "problem" was just noting the wasted resources and that only one of the waiting processes would return from select(2) (or equivalent). I certainly don't expect more than one process to actually return from accept() when many of them are blocked inside an accept() call.

      - tye        
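The select-then-accept pattern tye describes can be sketched as follows. This is a hedged illustration, not nginx's actual code: with a non-blocking listening socket, several workers may wake from the readiness check, but only one accept() returns the connection; the losers see EAGAIN and go back to waiting.

```perl
use strict;
use warnings;
use IO::Socket::INET;
use IO::Select;
use Errno qw(EAGAIN EWOULDBLOCK);

my $sock = IO::Socket::INET->new(
    LocalPort => 7777,
    Listen    => 10,
    Blocking  => 0,            # non-blocking: a "losing" accept returns undef
) or die "listen: $!";

for (1 .. 3) {
    next if fork;              # parent keeps forking; children fall through
    my $sel = IO::Select->new($sock);
    while ($sel->can_read) {   # several workers may wake up here...
        if (my $cli = $sock->accept) {   # ...but only one accept succeeds
            print "worker $$ accepted " . $cli->peerport . "\n";
        }
        elsif ($! != EAGAIN && $! != EWOULDBLOCK) {
            die "accept: $!";
        }
    }
    exit 0;
}
```

Whether the spurious wake-ups matter in practice is the wasted-CPU question, not a correctness one, which is consistent with both tye's and zwon's reading.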

        I believe...

        Again I say, this is a well used, well-tested server.

        I'll add, it is also, by all accounts, extremely well designed and implemented, such that it is known and proven to knock the spots off Apache and most other servers for concurrency and throughput.

        Your "belief" that it might be badly implemented out of "misunderstanding" does not accord with *any* other opinion I have found. Accordingly, I give your "belief" due weight commensurate with that finding.


      Hm. The description was based upon the implementation of nginx server.
      The article you linked is factually incorrect (it looks to me like the work of some intern from Zimbra). Nginx workers don't fight for the accept_mutex after they get events from the listening socket; they lock this mutex before they subscribe to events from it (see the implementation). The reason is to avoid wasting CPU, not to avoid accepting the same connection in different workers, which won't happen even if you disable this option. Anyway, nginx runs an event loop, and the OP seems more interested in traditional prefork options (otherwise he should look at AnyEvent or Mojolicious instead of Net::Server), which simply block in accept, so it is hardly relevant.

      PS is it this guy? That's funny
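The lock-before-subscribing idea zwon describes can be illustrated in miniature. This is a hedged sketch, not nginx code: flock(2) on a temporary file stands in for nginx's shared-memory accept_mutex, and the event loop itself is elided. A worker that fails to take the mutex simply would not add the listening socket to its poll set.

```perl
use strict;
use warnings;
use Fcntl qw(:flock);
use File::Temp qw(tempfile);

# flock(2) on a temp file stands in for nginx's shared-memory mutex.
my ($fh, $lockfile) = tempfile();

# This "worker" takes the mutex, so it alone would watch the
# listening socket for new connections.
flock($fh, LOCK_EX | LOCK_NB) or die "first worker should win: $!";

my $pid = fork // die "fork: $!";
unless ($pid) {
    # A second "worker": its non-blocking attempt fails while the
    # mutex is held, so it would poll only its existing connections.
    open my $fh2, '>>', $lockfile or die "open: $!";
    exit(flock($fh2, LOCK_EX | LOCK_NB) ? 1 : 0);   # expect failure
}
waitpid($pid, 0);
print +($? >> 8) == 0
    ? "mutex held by one worker; the other backed off\n"
    : "unexpected: both workers got the mutex\n";
flock($fh, LOCK_UN);
```

The point of the real thing is the same: only the mutex holder is subscribed to events on the listening socket, so the other workers never wake up for a new connection at all.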

        The reason is to not waste CPU, not to avoid accepting the same connection in different workers, which won't happen even if you disable this option.

        Hm. I don't see how you draw that conclusion from the docs you linked.

        This very clearly states pretty much the opposite: "If accept_mutex is enabled, worker processes will accept new connections by turn. Otherwise, all worker processes will be notified about new connections."


