
Sync item list between perl scripts, across servers

by reinaldo.gomes (Beadle)
on Nov 14, 2016 at 05:50 UTC ( #1175852=perlquestion )

reinaldo.gomes has asked for the wisdom of the Perl Monks concerning the following question:

I have a multi-threaded script which does the following:

1) One boss thread searches through a folder structure on an external server. For each file it finds, it adds its path/name to a thread queue. If the path/file is already in the queue, or being processed by the worker threads, the enqueuing is skipped.

2) A dozen worker threads dequeue from the above queue, process the files, and remove them from the hard disk.

It runs on a single physical server, and everything works fine.
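In sketch form, the single-server arrangement described above might look roughly like this (a hedged sketch only: it does one scan pass so it terminates, whereas the real boss loops forever, and the processing step is a placeholder for the actual work):

```perl
use strict;
use warnings;
use threads;
use threads::shared;
use Thread::Queue;

my $queue = Thread::Queue->new;
my %in_flight : shared;    # path => 1 while queued or being processed

# Boss: one scan pass over the tree; skip anything already in flight.
sub boss {
    my ($dir) = @_;
    for my $path (glob "$dir/*") {
        lock %in_flight;
        next if $in_flight{$path};
        $in_flight{$path} = 1;
        $queue->enqueue($path);
    }
    $queue->end;    # let the workers drain the queue and exit
}

# Worker: process each file, remove it from disk, clear its mark.
sub worker {
    while (defined(my $path = $queue->dequeue)) {
        # ... real processing (e.g. the ffmpeg conversion) goes here ...
        unlink $path;
        lock %in_flight;
        delete $in_flight{$path};
    }
}

sub run_once {
    my ($dir, $nworkers) = @_;
    my @workers = map { threads->create(\&worker) } 1 .. $nworkers;
    boss($dir);
    $_->join for @workers;
}
```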

Now I want to add a second server, which will work concurrently with the first one, searching through the same folder structure, looking for files to enqueue/process. I need a means to make both servers aware of what each other is doing, so that they don't process the same files. The queue is minimal, ranging from 20 to 100 items. The list is very dynamic and changes many times per second.

Do I simply write to/read from a regular file to keep them sync'ed about the current items list? Any ideas?


Re: Sync item list between perl scripts, across servers
by Corion (Pope) on Nov 14, 2016 at 08:53 UTC

    Personally, I would only keep one boss thread that feeds the workers. As you seem to have the filesystem shared between the machines, you can keep your queue either as a file or have the second machine connect via TCP to the boss thread to read available files from it.

    Having two boss threads scan the same directory is a recipe for disaster, or at least for lots of interesting failure scenarios.

    My favourite approach is to organize the queues as directories and have the workers move the files between the directories:

    /work/incoming
    /work/processing
    /work/done

    This will be problematic if your (network) filesystem does not support atomic rename. I think that NFS does not support atomic rename.
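    A minimal sketch of that directory-queue idea (the directory layout is from above; claim_file is a hypothetical helper, and the sketch assumes rename is atomic on your filesystem):

```perl
use strict;
use warnings;
use File::Basename qw(basename);

# Try to claim a file by moving it from incoming/ to processing/.
# On a filesystem with atomic rename, exactly one worker's rename
# succeeds; the losers get a failed rename and just skip the file.
sub claim_file {
    my ($path, $processing_dir) = @_;
    my $claimed = "$processing_dir/" . basename($path);
    return rename($path, $claimed) ? $claimed : undef;
}

for my $path (glob '/work/incoming/*') {
    my $claimed = claim_file($path, '/work/processing')
        or next;    # another worker got it first
    # ... process $claimed, then move it on to /work/done ...
}
```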

    If your filesystem supports atomic append, you can simply have each worker append a line with its machine name, pid and the filename to one common file and then reread the file to see if another thread snatched the file before it. This would mean that the file grows, but you can either move that file away or just truncate it from time to time. Truncating would mean that files get processed twice, which might not be acceptable.
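    The append-and-reread scheme could be sketched like this (the claim-log path and line format are illustrative; it assumes appending one short line is atomic on your filesystem, which is exactly the caveat above):

```perl
use strict;
use warnings;
use IO::Handle;
use Sys::Hostname qw(hostname);

# Append one claim line per file to a shared log, then reread it:
# whichever claim line appears FIRST for a given file wins.
sub try_claim {
    my ($claims_log, $file) = @_;
    my $me = hostname() . ":$$";

    open my $out, '>>', $claims_log or die "append $claims_log: $!";
    $out->autoflush(1);
    print {$out} "$me $file\n";
    close $out;

    open my $in, '<', $claims_log or die "read $claims_log: $!";
    while (my $line = <$in>) {
        chomp $line;
        my ($who, $what) = split ' ', $line, 2;
        next unless defined $what && $what eq $file;
        return $who eq $me;    # first claimant for this file wins
    }
    return 0;
}
```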

Re: Sync item list between perl scripts, across servers
by GrandFather (Sage) on Nov 14, 2016 at 09:06 UTC

    A "regular file" isn't going to cut it for you. For such concurrent access to work and be robust, at the very least every update to the file must be serialized to ensure the queue's integrity.

    There are several ways around the problem, but none are altogether trivial. Probably the easiest is to set up a transacted database to manage the queue. Even with transacted updates, some care needs to be taken to ensure queue item insertion and removal are handled correctly.
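    One way the transacted-database approach might look, as a sketch only (assumes DBI with a hypothetical queue table `queue(path, state, worker)`; the database serializes the UPDATE, so only one worker can claim a given row):

```perl
use strict;
use warnings;
use DBI;

# Atomically claim the next pending item by UPDATE-ing its state.
# Assumes each worker claims one item at a time and deletes (or
# re-marks) the row when it finishes processing.
sub claim_next {
    my ($dbh, $worker_id) = @_;
    my $rows = $dbh->do(
        q{UPDATE queue SET state = 'claimed', worker = ?
          WHERE path = (SELECT path FROM queue
                        WHERE state = 'pending' LIMIT 1)},
        undef, $worker_id);
    return undef unless $rows && $rows > 0;    # queue was empty
    my ($path) = $dbh->selectrow_array(
        q{SELECT path FROM queue
          WHERE state = 'claimed' AND worker = ? LIMIT 1},
        undef, $worker_id);
    return $path;
}
```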

    Premature optimization is the root of all job security
Re: Sync item list between perl scripts, across servers (rename)
by tye (Sage) on Nov 14, 2016 at 19:58 UTC

    For each system, create a subdirectory for use just by that system. When a thread picks up a filename, use rename to perform an atomic move of that file into the subdirectory for the system that the thread is hosted on. If the rename fails, then just skip that filename as the other system beat you to it.

    Even better, when rename() fails, compare $! against ENOENT() from Errno. If you are on a Unix system, then you can check "man 2 rename" on your particular system to verify that ENOENT is the appropriate choice (but if you are on a Unix system, then I'm pretty sure it will be).

    In the unlikely event that you are not on a Unix system and are not using MS Windows, then Perl's rename might not be atomic.
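    A sketch of that rename-with-ENOENT-check (the per-system subdirectory argument is a placeholder name):

```perl
use strict;
use warnings;
use Errno qw(ENOENT);
use File::Basename qw(basename);

# Claim a file by renaming it into this system's private
# subdirectory. Returns 1 on success, 0 if the other system beat
# us to it, and dies on any genuinely unexpected error.
sub claim {
    my ($path, $my_subdir) = @_;
    return 1 if rename $path, "$my_subdir/" . basename($path);
    return 0 if $! == ENOENT;    # other system already moved it: skip
    die "rename $path: $!";      # anything else is a real problem
}
```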

    - tye        

Re: Sync item list between perl scripts, across servers
by reinaldo.gomes (Beadle) on Nov 14, 2016 at 17:19 UTC

    I see. Now I have more options, and even more questions. But those are for me to meditate upon.

    If I ever get this project going (it's still a prototype to automate ffmpeg's audio conversion), I'll drop by and tell how I handled things.

      Well, I said I would be back if I had any news, so here I am.

      I've decided to try building a UDP communication system between my servers, and so far I haven't had any serious issues with this setup.

      I chose this approach considering my specific situation, which might not apply to others using other sorts of IPC.

      I have no need for persisting the queue items between application restarts, so I avoided the I/O activity caused by a file/database-based system. The servers are always on the same network, so UDP is just fine.

      Although I think Corion's advice would probably be the most secure approach, it would also mean a lot of additional work, such as having satellite servers depend on a single master to distribute the work, or setting up another system just to elect a new master when the primary goes offline, which would obviously add a whole new level of complexity to the application.

      So I just chose what I found to be the most "cost-effective" solution: an ad-hoc, masterless system, where each server asks its peers (with a single tiny UDP datagram) whether they are working on the item it is about to enqueue. Across the many runs I've done, after many tweaks, I've seen duplicate processing at an average rate of about 1 in several thousand.

      Since the main problem with duplicate processing, in my case, is wasted CPU cycles from two servers doing the same job (no data loss, corruption or anything like that), it's perfectly fine for me.
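      The masterless peer check described above could be sketched like this (host, port, message format, and timeout are all illustrative assumptions, not the actual protocol; a timeout is treated as "no", since a dead peer can't be working on the item):

```perl
use strict;
use warnings;
use IO::Socket::INET;

# Ask one peer, via a single UDP datagram, whether it already owns
# $item. Returns true only on an explicit "YES" reply.
sub peer_owns {
    my ($peer_host, $peer_port, $item, $timeout) = @_;
    my $sock = IO::Socket::INET->new(
        Proto    => 'udp',
        PeerAddr => $peer_host,
        PeerPort => $peer_port,
    ) or die "socket: $!";
    $sock->send("OWNS? $item");

    my $reply = '';
    eval {
        local $SIG{ALRM} = sub { die "timeout\n" };
        alarm $timeout;
        $sock->recv($reply, 512);
        alarm 0;
    };
    return $reply eq "YES $item";
}
```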
