"be consistent" | |
PerlMonks |
comment on |
( [id://3333]=superdoc: print w/replies, xml ) | Need Help?? |
Yes sir, I have! There's a _check() routine in the script which runs by default after every 50th child to verify that all the kids it thinks are alive are in fact still alive, in case one manages to evade the reaper! That process, along with the overhead of checking the environment for safety, seems to delay things just enough to avoid fork()ing a billion processes at once. Of course, I can see that this might pose a problem down the road on some systems. I'm definitely going to address this in a more permanent way by replacing the current call to '/usr/bin/uptime' with something more reliable and more intelligent. Granted, there will be a certain overhead associated with determining the current load/mem/cpu usage before every fork() call, but the safeguards it provides should more than pay off. Additionally, I'll probably provide a mechanism to forgo the safety net, because sometimes I want enough rope to hang myself.
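One candidate replacement for shelling out to '/usr/bin/uptime' is to read the load average straight from the kernel. This is just a sketch of the idea, not the actual Parallel::ForkControl code: it's Linux-specific (/proc/loadavg), so a portable version would need a per-OS fallback.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch: read the 1-minute load average directly from /proc/loadavg
# (Linux-specific) instead of forking a shell to run /usr/bin/uptime.
# Returns undef if the file isn't available (e.g. non-Linux systems).
sub current_load {
    open my $fh, '<', '/proc/loadavg' or return undef;
    my ($one_min) = split ' ', scalar <$fh>;   # first field is 1-min avg
    close $fh;
    return $one_min;
}

my $load = current_load();
printf "1-minute load average: %s\n", defined $load ? $load : 'unknown';
```

Avoiding the external uptime call also sidesteps the cost of an extra fork/exec on every check, which matters when you're checking before every fork().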
It's difficult to provide a decent temporary solution for this problem, as my children might only use 1% mem and 5% CPU, whereas someone else's might use 54% mem and 70% CPU during processing. I suppose it might help to profile the children as well. If we're processing a big list, we might fork() only 5-10 processes for the first 60 seconds, gather information on their peak resource usage, make an educated guess as to the resources these children will consume, and dynamically adjust the number of concurrent processes based on that data. I have a ton of ideas to make this the easiest to use, most flexible process controller for Perl. Granted, poor coding in the children will almost always blow up in your face; I'm just aiming to make it harder to do so.
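The profiling idea above could boil down to something like this once the first batch has been measured. All names here are hypothetical, not the Parallel::ForkControl API: given the observed peak memory per child and the memory you're willing to spend, pick a pool size.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch of the dynamic pool-sizing idea (hypothetical helper, not the
# module's real API): after profiling an initial batch of children,
# derive a concurrency cap from their observed peak memory footprint.
sub estimate_max_kids {
    my ($peak_mem_per_kid_mb, $free_mem_mb, $headroom) = @_;
    $headroom = 0.75 unless defined $headroom;  # leave 25% of free mem alone
    my $max = int( ($free_mem_mb * $headroom) / $peak_mem_per_kid_mb );
    return $max > 0 ? $max : 1;                 # always allow at least one
}

# e.g. children peaking at 54 MB each, 1000 MB free:
print estimate_max_kids(54, 1000), " concurrent children\n";
```

A real version would fold CPU usage into the cap as well (say, the minimum of the memory-derived and load-derived limits) and re-profile periodically in case the workload shifts mid-run.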
-brad..
In reply to Re: Re: My First Submission to CPAN (Parallel::ForkControl)
by reyjrar