PerlMonks: Re^2: Need suggestion on problem to distribute work
by perlfan (Vicar) on Jun 14, 2020 at 23:31 UTC ( #11118064 )
A work queue is also what I suggest. But don't use a database as the queue. Use something like Redis's list-based FIFO queue (LPUSH on one end, BRPOP on the other). You could get fancy and make a priority queue using sorted sets, but it sounds like you want something straightforward, and I agree.
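For what it's worth, here's a minimal sketch of that queue using the CPAN Redis client and JSON::PP. The key name work:queue, the server address, and the do_work() handler are just placeholders, not anything standard. The point is that BRPOP is atomic, so any number of worker daemons can block on the same list and no two of them will ever receive the same job.

    #!/usr/bin/env perl
    # Minimal sketch: producer LPUSHes JSON jobs onto a Redis list; workers
    # block on BRPOP. The pop is atomic, so jobs are handed out exactly once.
    use strict;
    use warnings;
    use Redis;                               # CPAN Redis client
    use JSON::PP qw(encode_json decode_json);

    my $redis = Redis->new( server => 'localhost:6379' );

    # --- producer side ---
    sub enqueue_job {
        my (%job) = @_;
        # LPUSH + BRPOP across the same list gives FIFO behaviour.
        $redis->lpush( 'work:queue', encode_json( \%job ) );
    }

    # --- worker side (run as many of these daemons as you like) ---
    sub worker_loop {
        while (1) {
            # BRPOP blocks until a job arrives; timeout 0 means wait forever.
            # The reply is (key, value), so discard the key.
            my ( undef, $raw ) = $redis->brpop( 'work:queue', 0 );
            my $job = decode_json($raw);
            do_work($job);                   # hypothetical job handler
        }
    }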
The producer process puts work on the atomic queue, and worker daemons spin and pop work off to do. Sure, you could have the worker daemons fork off children to do the work, but as long as you have the atomic queue you can just have any number of worker daemons checking for work in a loop, so there is no need to get fancy with the worker processes. Redis (and the Perl client) is not the only way to do this, but it's the one I have the most experience with. As I stated above, don't use a database to serve the queue. You don't have to use Redis, but do not use a database; it's terribly inefficient for this type of middleware.

If you want the worker processes to communicate back to the work producer, you can use a private Redis channel specified in the chunk of work. However, if you want real messaging you're better off with something built for that, like RabbitMQ or something similar but lighter weight. Work can be pushed onto the queue by the producer in JSON or some other easily deserialized format; it can include a "private" Redis channel or "mailbox" for the worker to send a message back to the producer or some other listener. You could actually set up a private mailbox scheme so that the initial contact through the work on the queue lets the producer and consumer carry on any sort of meaningful conversation you wish.

Also note that Redis 6.x supports SSL/TLS natively, plus some level of access control. I'd use both if you're going over the public internet or crossing any sort of untrusted network.
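And here's a rough sketch of that mailbox idea, assuming the same Redis client as above. Instead of a pub/sub channel it uses a per-job list (reply:<id>) as the mailbox, which has the nice property that the reply sits there even if the producer isn't listening at that exact moment; the key names, payload fields, and timeout are all arbitrary, not part of any convention.

    #!/usr/bin/env perl
    # Sketch of a per-job "private mailbox": the producer names a reply key
    # in the payload, then blocks on that key for the worker's answer.
    use strict;
    use warnings;
    use Redis;
    use JSON::PP qw(encode_json decode_json);

    my $redis = Redis->new( server => 'localhost:6379' );

    # Producer: enqueue a job that names its own mailbox, then await the reply.
    my $job_id   = "$$-" . time();
    my $reply_to = "reply:$job_id";
    $redis->lpush( 'work:queue', encode_json( {
        id       => $job_id,
        payload  => 'crunch this',
        reply_to => $reply_to,
    } ) );
    my ( undef, $raw_reply ) = $redis->brpop( $reply_to, 30 );  # give up after 30s
    warn "no reply within 30s\n" unless defined $raw_reply;

    # Worker: after finishing a job it popped off work:queue, it answers into
    # the mailbox named in that job ($job being the decoded JSON payload):
    #   $redis->lpush( $job->{reply_to},
    #       encode_json( { id => $job->{id}, status => 'done' } ) );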
In Section: Seekers of Perl Wisdom