PerlMonks
Personally, I would keep only one boss thread that feeds the workers. Since you seem to have the filesystem shared between the machines, you can keep your queue either as a file, or have the second machine connect via TCP to the boss thread to fetch available files from it. Having two boss threads scan the same directory is a recipe for disaster, or at least for lots of interesting failure scenarios.

My favourite approach is to organize the queues as directories and have the workers move the files between the directories:
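A minimal sketch of the directory-queue idea. The directory names, the sample job file, and the worker loop are my assumptions for illustration, not from the original post; the key point is that rename() within one filesystem is atomic, so exactly one worker wins the race to claim any given file:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Path ();

# Hypothetical layout -- these directory names are assumptions:
my $incoming = 'queue/incoming';   # boss drops files to process here
my $working  = 'queue/working';    # this worker's private directory
my $done     = 'queue/done';       # finished work ends up here

File::Path::make_path($incoming, $working, $done);

# Create a sample job so the sketch is runnable end-to-end.
open my $job, '>', "$incoming/job1.txt" or die "Can't create job: $!";
print {$job} "example payload\n";
close $job;

opendir my $dh, $incoming or die "Can't open $incoming: $!";
my @files = grep { -f "$incoming/$_" } readdir $dh;
closedir $dh;

for my $file (@files) {
    # rename() is atomic on a single (local) filesystem: only one
    # worker can successfully move a given file out of $incoming.
    if (rename "$incoming/$file", "$working/$file") {
        # ... process "$working/$file" here ...
        rename "$working/$file", "$done/$file"
            or warn "Can't move $file to $done: $!";
    }
    # else: another worker claimed $file first; just skip it.
}
```

Because a worker only ever touches files it has successfully moved into its own working directory, no locking is needed beyond the rename itself.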
This will be problematic if your (network) filesystem does not support atomic rename; I think NFS does not. If your filesystem supports atomic append, you can instead have each worker append a line with its machine name, PID, and the filename to one common file, and then reread that file to see whether another worker snatched the file before it. This means the common file grows, but you can either move it away or truncate it from time to time. Truncating loses the claim records, though, so files may get processed twice, which might not be acceptable.

In reply to Re: Sync item list between perl scripts, across servers
by Corion