PerlMonks |
Re: Re: Re: Re: 3 weeks wasted? - will threads help? by Limbic~Region (Chancellor)
on Jan 28, 2003 at 08:47 UTC ( [id://230529] )
BrowserUK's solution was not to fork a process for each globbed directory, but to create one large glob of all the directories. That is what I am claiming is not feasible.
I admit that I thought the enormous sz shown by ps for each child process I forked came from its own instance of the Perl interpreter, but I never claimed that HP-UX doesn't use copy-on-write. My problem is that I have no way of profiling it: how can I tell how much memory is really being used and how much is shared? I have thought of a few more ways to optimize speed and memory allocation in the original code, but that won't get rid of the overhead I mentioned in my example of just a simple script.
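For what it's worth, here is a hedged sketch of the kind of shared-vs-private breakdown I'm after. It is Linux-only (it reads /proc/PID/smaps, which HP-UX's ps has no equivalent of, which is exactly the problem); `smaps_breakdown` is a hypothetical helper name, not anything from my script:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical sketch, modern-Linux-only: sum the Shared_* and Private_*
# fields of /proc/PID/smaps to split a process's resident memory into
# pages shared with other processes (e.g. via copy-on-write after fork)
# and pages private to it. Returns undef where smaps doesn't exist.
sub smaps_breakdown {
    my ($pid) = @_;
    my %sum = ( shared => 0, private => 0 );
    open my $fh, '<', "/proc/$pid/smaps" or return undef;
    while (<$fh>) {
        $sum{shared}  += $1 if /^Shared_(?:Clean|Dirty):\s+(\d+) kB/;
        $sum{private} += $1 if /^Private_(?:Clean|Dirty):\s+(\d+) kB/;
    }
    close $fh;
    return \%sum;    # kB shared vs. kB private for this process
}

if ( my $mem = smaps_breakdown($$) ) {
    printf "shared: %d kB, private: %d kB\n", $mem->{shared}, $mem->{private};
}
else {
    print "no /proc/$$/smaps on this platform\n";
}
```

With a breakdown like that, the sz number from ps stops mattering: only the private portion is what each extra forked child really costs.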
That tiny program shows up in ps with a sz comparable to my full-blown script. If I can't tell how much of that is shared when I fork another process, I have no idea whether the project is viable or should be scrapped. Now, your proposal is a bit different from the others: your forked children die, returning all their memory to the system, and a new one is spawned each iteration. That means the memory is MORE available to the system (during the sleep), and since all the variables will be pretty stagnant once the child is forked, it won't start dirtying pages before it's dead. This is food for thought. Thanks and cheers - L~R
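The per-iteration fork approach described above can be sketched as follows. This is a minimal illustration under my own assumptions, not anyone's actual code; `scan_once` is a placeholder for the real directory-scanning work, and three passes stand in for the real loop:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Placeholder for the real per-pass work; returns the child's exit status.
sub scan_once { return 0 }

my @child_statuses;
for my $pass ( 1 .. 3 ) {    # three passes for illustration only
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;

    if ( $pid == 0 ) {
        # Child: shares the parent's pages copy-on-write; because it
        # exits quickly, few pages are dirtied before they are returned
        # to the system.
        exit scan_once();
    }

    waitpid( $pid, 0 );                 # parent reaps the child...
    push @child_statuses, $? >> 8;
    sleep 1;                            # ...and the child's memory is
                                        # free for the system during the sleep
}
print "all passes done\n";
```

The appeal is exactly what the paragraph above says: the child's footprint exists only for the duration of one pass, so between passes the system gets everything back.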
In Section: Seekers of Perl Wisdom