PerlMonks
Re: Multi-client approaches by Joost (Canon) on Jan 31, 2008 at 22:50 UTC ([id://665474])
I've used a variant of b) with good results on a production system, but note that it only really pays off if you have a substantial or concurrent (threaded, for instance) process that needs to be synchronized.
Also note that you can always abstract away the communication by using a client library/module, but you'll still need to make sure the clients are using the current protocol, i.e. the latest client module.

Actually, what I did was something like this: the clients and workers connect to the server whenever they're ready, and pass simple messages around to start jobs and report job status. The server itself is about 150 lines of single-threaded Perl code that organizes requests so that duplicates are ignored, and schedules each job on the first worker that becomes available.

The protocol (client) mechanisms are really pretty simple: just a few lines to connect to the server (using IO::Socket::INET) and a couple of methods that convert a request to a single line.

The reason for this setup was that a single worker process can take up to 6 GB of memory, so purely for memory efficiency the workers had to be multi-threaded (on multi-core machines) to get the best performance out of them. It also meant we could spread the workers across other nodes.
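To make the "couple of methods that convert a request to a single line" idea concrete, here's a minimal sketch of that kind of client. The package name, port, and message words (`JOB`, and the reply strings) are my own illustrative assumptions, not the actual protocol from the system described above:

```perl
# Minimal line-based job client sketch (hypothetical protocol).
package JobClient;
use strict;
use warnings;
use IO::Socket::INET;

sub new {
    my ($class, %args) = @_;
    # Connect to the scheduling server; host/port are assumptions.
    my $sock = IO::Socket::INET->new(
        PeerAddr => $args{host} || 'localhost',
        PeerPort => $args{port} || 9000,
        Proto    => 'tcp',
    ) or die "cannot connect to server: $!";
    return bless { sock => $sock }, $class;
}

# Convert a request into a single protocol line, send it,
# and return the server's one-line reply.
sub start_job {
    my ($self, $name) = @_;
    my $sock = $self->{sock};
    print $sock "JOB $name\n";
    my $reply = <$sock>;
    chomp $reply if defined $reply;
    return $reply;    # e.g. "QUEUED", "DUPLICATE" -- assumed replies
}

1;
```

A caller would just do `my $c = JobClient->new(host => 'scheduler'); $c->start_job('rebuild-index');`. Keeping the whole protocol to one line per message is what lets the server side stay a small single-threaded loop.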
In Section: Seekers of Perl Wisdom