PerlMonks |
WWW::Mechanize timeout problem
by Marshall (Canon) on Aug 25, 2022 at 07:19 UTC
Marshall has asked for the wisdom of the Perl Monks concerning the following question:
I have an LWP-type program using Mechanize that has been running every hour for the past 7 years without problems. All of a sudden, I got an error report that I traced to a corrupted DB: a duplicate value appeared in a column where all the values should be unique. The code is specifically designed to prevent this. I theorized that if 2 instances of the program were running, they could "fight" and cause this problem, but I didn't see how that could possibly happen, until I saw this in the log file:
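One way to guarantee that two hourly runs never fight over the DB is an exclusive, non-blocking lock on a sentinel file: if the previous hour's run is still alive, the new instance exits immediately instead of racing it. A minimal sketch (the lock-file path is hypothetical, not from the original post):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Fcntl qw(:flock);

# Hypothetical lock-file path; adjust for your setup.
my $lockfile = '/tmp/hourly_fetch.lock';

open my $lock_fh, '>', $lockfile
    or die "Cannot open $lockfile: $!";

# LOCK_NB makes flock return immediately instead of waiting,
# so an overlapping run bails out rather than queuing up.
unless (flock $lock_fh, LOCK_EX | LOCK_NB) {
    warn "Another instance is still running; exiting.\n";
    exit 0;
}

# ... normal fetch-and-update work goes here ...
# The lock is released automatically when $lock_fh closes at exit.
```

The lock is advisory, so it only protects against other copies of this same program, which is exactly the overlap scenario described above.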
Retry #1 took about 90 minutes!! WOW! The retry ultimately succeeds, but the long wait has the effect of pushing the run time into the next hour's timeslot. This is a normal HTTP (not HTTPS) URL. Over the years, a lot of things could have changed at the website's end; I have no idea. The default timeout for Mechanize is supposed to be about 3 minutes, and the exact value really doesn't matter to me as long as it is not measured in hours! I don't know how often this super-long-request problem happens. A retry historically happens about every 2-3K requests with this particular site, and a couple of seconds later, all is well. I have no idea what is actually causing the hang. Thoughts and ideas are welcome.
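One plausible explanation for a request outliving the timeout by so much: LWP's timeout (which WWW::Mechanize inherits, default 180 seconds) bounds each individual socket read, not the whole request, so a server that trickles bytes slowly can keep one request alive for a very long time. A hard wall-clock cap can be imposed with alarm(). A minimal sketch, with a hypothetical URL standing in for the real one:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use WWW::Mechanize;

my $mech = WWW::Mechanize->new(
    timeout => 60,   # per-read timeout in seconds (LWP default is 180)
);

# Hypothetical URL for illustration.
my $url = 'http://example.com/data';

# Wrap the fetch in alarm() for a hard overall limit, since the
# per-read timeout alone cannot bound total request time.
my $resp = eval {
    local $SIG{ALRM} = sub { die "hard timeout\n" };
    alarm 120;                 # total wall-clock budget in seconds
    my $r = $mech->get($url);
    alarm 0;
    $r;
};
alarm 0;                       # belt and braces: clear any pending alarm

if (!defined $resp) {
    warn "Request aborted: $@";
    # retry or skip this cycle here
}
```

One caveat: on some platforms, interrupted system calls are restarted, which can blunt alarm(); if that bites, running the fetch in a child process that the parent kills after the deadline is the more robust variant of the same idea.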
Back to Seekers of Perl Wisdom