http://qs321.pair.com?node_id=1224860


in reply to Re: Parallel::ForkManager takes too much time to start 'finish' function
in thread Parallel::ForkManager takes too much time to start 'finish' function

I agree. That's the same reason webservers use pre-forking, which, depending on the task at hand, could also be a solution to this problem.

Fork the children at the start, send them work, and have the parent just manage the number of currently running children so there are always some spares, while reaping any unneeded extras. Done right, this can even reduce the total number of processes running at any one time. Again, depending on how long each task takes, it can be more efficient to wait for a child to finish its current task and hand it the next one in the queue instead of spinning up a new child.
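Something along these lines, as a minimal sketch of that pre-forking idea (not Parallel::ForkManager itself). The worker count, the task list and process_task() are made-up placeholders; a real pool would also want to track which workers are idle rather than dealing tasks out blindly:

<code>
#!/usr/bin/perl
use strict;
use warnings;
use IO::Handle;

my $workers = 4;                         # arbitrary pool size
my @tasks   = map { "task $_" } 1 .. 20; # stand-in work items

my @pipes;
for (1 .. $workers) {
    pipe(my $read, my $write) or die "pipe failed: $!";
    $write->autoflush(1);

    my $pid = fork() // die "fork failed: $!";
    if ($pid == 0) {                 # child: process tasks until EOF
        close $write;
        close $_ for @pipes;         # drop write ends inherited from earlier forks
        while (my $task = <$read>) {
            chomp $task;
            process_task($task);
        }
        exit 0;
    }
    close $read;                     # parent keeps only the write end
    push @pipes, $write;
}

# Hand the tasks out round-robin over the already-running children.
my $i = 0;
print { $pipes[ $i++ % $workers ] } "$_\n" for @tasks;

close $_ for @pipes;                 # EOF tells the children to exit
1 while wait() != -1;                # reap everything

sub process_task { my ($task) = @_; print "[$$] did $task\n"; }
</code>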

And it's always a good idea to have some code in place to manage and limit the number of concurrent tasks. The moment your system runs out of RAM and starts swapping to disk, all hope of speedy performance is lost. The same goes for any other resource. And it can lead to other rather unfortunate "features".
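If you're already using Parallel::ForkManager, the argument to new() is exactly that limit: the maximum number of children alive at once. A minimal sketch, where the cap of 4 and the do_work() sub are arbitrary stand-ins:

<code>
use strict;
use warnings;
use Parallel::ForkManager;

# Never more than 4 children at a time, no matter how long the job list is.
my $pm = Parallel::ForkManager->new(4);

for my $job (1 .. 100) {
    $pm->start and next;    # parent: move on to the next job
    do_work($job);          # child: do the actual task
    $pm->finish;            # child exits here
}
$pm->wait_all_children;

sub do_work { my ($job) = @_; print "[$$] job $job\n"; sleep 1; }
</code>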

I once had to do a lengthy database repair, because someone-who-shall-not-be-named-but-looks-like-me had a major memory leak and the Linux kernel started killing random processes, including some rather essential PostgreSQL ones.

perl -e 'use MIME::Base64; print decode_base64("4pmsIE5ldmVyIGdvbm5hIGdpdmUgeW91IHVwCiAgTmV2ZXIgZ29ubmEgbGV0IHlvdSBkb3duLi4uIOKZqwo=");'