Re: Paging with REST::Client?
by sundialsvc4 (Abbot) on Jan 13, 2020 at 05:14 UTC
“Allowed me to write a new daemon that checks for queued changes and decides whether to kick off a DHCP restart ...” Huh?!
In all my many years, I have never seen fit to “restart a DHCP daemon ... anywhere on any device” in order to get my intended work done. And so, if you think that you are routinely needing to do that, you are IMHO already fighting a losing battle against the fire.
Reading further: “Allowed me to write a new daemon that checks for queued changes and decides whether to kick off a DHCP restart. It usually works great, but there are times when we are importing a ton of changes (e.g. 1800+) and I'm hitting the max amount of data REST::Client will accept.”
Okay, there’s your actual problem: throttling. Which is to say, your present strategy – built around simple curl calls and request loads that used to be routinely much smaller – was never engineered to handle this volume intelligently. Your simplistic present strategy is what I call “flaming arrows”: when a new request arrives, ignite another flaming arrow, launch it into the air, and hope for the best.
Maybe that was, at its time, a great strategy. Maybe it worked – maybe for years. Not anymore. Strategies like this one do not “scale up.”
What you actually have here – on the requestor side, not the server side – is a basic workload management problem, for which a plenitude of solutions already exist on CPAN. No matter how many thousands of changes your system might be asked to process at any one time, your software must throttle, and therefore effectively manage, that “queue.” It must present an appropriate volume of requests to the back-end server and hold the rest of them back, so as to maintain a predictable and sustainable level of service. Exactly as every fast-food restaurant does every single day at lunch hour.
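To make the idea concrete, here is a minimal sketch of requestor-side throttling: instead of firing all 1800+ changes at the server at once, drain the queue in fixed-size batches. This is only an illustration – `submit_batch()` is a hypothetical stand-in for your actual REST::Client call, and the batch size and pacing are assumptions you would tune to your environment.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Tune this to what the back-end server comfortably accepts per request.
my $BATCH_SIZE = 100;

# Hypothetical stand-in for the real work: POST one batch via
# REST::Client, check the response, retry or back off on failure.
sub submit_batch {
    my (@changes) = @_;
    printf "submitted batch of %d change(s)\n", scalar @changes;
}

# e.g. 1800+ queued changes, however your daemon accumulates them
my @queue = map { "change-$_" } 1 .. 1800;

# Drain the queue a batch at a time instead of all at once.
while (@queue) {
    my @batch = splice @queue, 0, $BATCH_SIZE;
    submit_batch(@batch);
    sleep 1;   # crude pacing; a real scheduler would adapt to server load
}
```

The point is not this particular loop – it is that *something* on your side must own the queue and decide when each batch goes out, rather than handing the whole pile to the server in one oversized request.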
What you need to select and deploy right now is a workload-management system which puts these 1,800+ requests into a queue which is then serviced by an algorithm that knows how to manage the request stream to the back-end side. CPAN is full of excellent candidates. Perhaps other Monks would like to now suggest their favorites?
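As one example of what such a candidate looks like, Parallel::ForkManager (a long-standing CPAN module) caps how many requests are in flight at any moment. This is a sketch, not a recommendation of that module over others – `process_change()` is a hypothetical stand-in for the actual REST::Client call, and the worker limit of 5 is an arbitrary assumption.

```perl
use strict;
use warnings;
use Parallel::ForkManager;

# Allow at most 5 concurrent workers: the throttle.
my $pm = Parallel::ForkManager->new(5);

my @changes = map { "change-$_" } 1 .. 1800;

for my $change (@changes) {
    $pm->start and next;        # parent: loop on; a free slot was claimed
    process_change($change);    # child: perform one unit of work
    $pm->finish;                # child exits; its slot is freed
}
$pm->wait_all_children;         # block until every queued change is done

# Hypothetical per-change worker; in real code this would make the
# REST call and handle its response.
sub process_change {
    my ($c) = @_;
    warn "processing $c\n";
}
```

Other Monks will have their own favorites – job-queue systems like Minion, for instance, move the queue out of process entirely – but any of them beats launching flaming arrows.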