Fair enough....
In my readmore tags, I explained that each copy works on a different directory. Each directory is very transient, and that is a race condition beyond my control.
The extra memory overhead is coming from the Perl interpreter, not the code itself (or at least that is my belief) - see below:
#!/usr/bin/perl -w
use strict;

while (1) {
    print "I am only printing and sleeping\n";
    sleep 1;
}
The above code shows up in ps -el with almost the same SZ as the code in my readmore tags.
Forking will not buy me anything as I understand it, since I will be making an exact duplicate (memory and all). I was thinking threads might help, but as I understand them, each thread gets its own copy of the interpreter - so no memory savings there either.
So my question stated more clearly is:
Given a piece of code to parse a single directory, how can I parse multiple directories concurrently (or very nearly so) without the memory overhead of each piece requiring its own interpreter?
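To make the "or very nearly" part concrete, here is the kind of thing I have in mind: one process, one interpreter, reading an entry at a time from each directory in round-robin fashion so no directory has to wait for another to finish. This is only a sketch - parse_dirs and parse_file are hypothetical names standing in for the real per-file work, and the transient-directory race is handled only by skipping directories that vanish:

    #!/usr/bin/perl -w
    use strict;

    # Hypothetical sketch: interleave several directories inside one
    # interpreter by taking one entry from each per pass (round-robin).
    sub parse_dirs {
        my @dirs = @_;
        my (%dh, @files);
        for my $dir (@dirs) {
            # Directories are transient; one that disappeared is skipped
            opendir my $h, $dir or next;
            $dh{$dir} = $h;
        }
        while (%dh) {
            for my $dir ( keys %dh ) {
                my $entry = readdir $dh{$dir};
                if ( !defined $entry ) {    # this directory is exhausted
                    closedir $dh{$dir};
                    delete $dh{$dir};
                    next;
                }
                next if $entry eq '.' or $entry eq '..';
                push @files, parse_file("$dir/$entry");
            }
        }
        return @files;
    }

    # Stand-in for the real parsing work on a single file
    sub parse_file {
        my ($path) = @_;
        return $path;
    }

    # Usage: perl script.pl dir1 dir2 dir3
    parse_dirs(@ARGV) if @ARGV;

That keeps a single SZ footprint in ps, but it is cooperative interleaving, not true concurrency - a parse_file that blocks on one file stalls every directory.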
Concatenating the files in each directory into one long list isn't feasible either.
I freely admit that I may be asking to get something for nothing, but it seems like an awful waste not to be able to use the Perl code and continue using the shell script :-(
Cheers - L~R