Re: Sync item list between perl scripts, across servers

by reinaldo.gomes (Beadle)
on Nov 14, 2016 at 17:19 UTC


in reply to Sync item list between perl scripts, across servers

I see. Now I have more options, and even more questions. But those are for me to meditate upon.

If I ever get this project going (it's still a prototype to automate ffmpeg's audio conversion), I'll drop by and tell you how I handled things.


Re^2: Sync item list between perl scripts, across servers
by reinaldo.gomes (Beadle) on Sep 20, 2018 at 14:36 UTC

    Well, I said I would be back if I had any news, so here I am.

    I've decided to try to build a UDP communication system between my servers, and so far I haven't had any serious issues with this setup.

    I chose this approach considering my specific situation, which might not apply to others using other sorts of IPC.

    I have no need to persist queue items across application restarts, so I avoided the I/O activity of a file- or database-based system. The servers are always on the same network, so UDP is just fine.

    Although I think Corion's advice would probably be the most secure approach, it would also mean a lot of additional work, such as having satellite servers depend on a single master to distribute the work, or setting up another system just to elect a new master when the primary goes offline. That would obviously add a whole new level of complexity to the application.

    So I just went with what I found to be the most "cost-effective" solution: an ad-hoc, masterless system, where each server asks its peers (with a single tiny UDP datagram) whether they are already working on the item it is about to enqueue. Over the several runs I've done, after many tweaks, I've seen duplicated processing at an average rate of about one in several thousand items.

    Since the main problem with duplicated processing, in my case, is wasted CPU cycles from two servers doing the same job (no data loss, corruption, or anything like that), that's perfectly fine for me.
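
    For what it's worth, here is a minimal sketch of the peer-query idea. This is not my actual code: the peer list, the port number, and the "ASK <item>" / "BUSY" / "FREE" wire format are all made up for illustration, using only core IO::Socket::INET and IO::Select.

        #!/usr/bin/perl
        use strict;
        use warnings;
        use IO::Socket::INET;
        use IO::Select;

        my @peers = qw(192.168.0.11 192.168.0.12);  # hypothetical peer addresses
        my $port  = 5005;                           # hypothetical agreed-upon port

        # Before enqueuing $item, ask each peer with a single tiny datagram.
        # Returns true if any peer answers BUSY within the timeout; a dead
        # or silent peer simply times out, so no master is needed.
        sub peer_is_working_on {
            my ($item) = @_;
            for my $peer (@peers) {
                my $sock = IO::Socket::INET->new(
                    Proto    => 'udp',
                    PeerAddr => $peer,
                    PeerPort => $port,
                ) or next;
                $sock->send("ASK $item");
                next unless IO::Select->new($sock)->can_read(0.2);  # 200 ms, tune to taste
                $sock->recv(my $reply, 64);
                return 1 if defined $reply && $reply eq 'BUSY';
            }
            return 0;  # nobody claimed it (or every reply got lost)
        }

        # Responder loop run by every server: answer for the items it owns.
        # The worker would set $in_progress{$item} before starting a
        # conversion and delete it when done.
        my %in_progress;
        sub serve_queries {
            my $listen = IO::Socket::INET->new(
                Proto     => 'udp',
                LocalPort => $port,
            ) or die "listen: $!";
            while (1) {
                my $from = $listen->recv(my $msg, 64) or next;
                my ($item) = $msg =~ /^ASK (.+)/ or next;
                $listen->send($in_progress{$item} ? 'BUSY' : 'FREE', 0, $from);
            }
        }

    Note the window between a FREE answer and the asker marking the item as its own: two servers can still grab the same item in that gap, which is consistent with the rare duplicates I've been seeing.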
