Take the following section of code:
my $pid = open(my $first_child, "-|");
if ($pid) {
    # Parent: collect everything the child pipes back
    while (<$first_child>) { $lines .= $_; }
} else {
    # Child: fork one grandchild per external program
    my $i = 0;
    foreach (@gets) {
        $cpid[$i] = fork;
        unless ($cpid[$i]) {
            # Grandchild: run the external program, send its output up the pipe
            print `perl $_`;
            exit;
        }
        $i++;
    }
    exit;
}
@gets contains a list of external Perl programs to be called; each one's output is printed back up the pipe to the main program. Everything works as it should: the forked child merrily creates its own children, and they all run concurrently. But I'm running into a performance issue.
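For illustration only (the script names below are made up, not my real programs), @gets holds entries along these lines:

my @gets = ('report_daily.pl', 'report_weekly.pl', 'report_monthly.pl');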
If I run a single program, its running time is usually 1-3 seconds. However, if I pass in several programs, each one slows down dramatically, frequently taking double or more its normal time. So instead of a process taking 1-3 seconds, it takes 2-6, sometimes more. In fact, once the number of called programs exceeds four, the ones launched last tend to take 6-8 seconds to run.
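To be concrete about the timing, each program's wall-clock time could be reported to STDERR with a sketch like this, assuming it sits inside the foreach loop over @gets where $_ is the program name (Time::HiRes ships with Perl; everything else here is just for illustration):

use Time::HiRes qw(gettimeofday tv_interval);

# Sketch: time one external call and report it on STDERR
my $t0     = [gettimeofday];
my $output = `perl $_`;
printf STDERR "%s took %.2f seconds\n", $_, tv_interval($t0);
print $output;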
I suspect this may be a memory-related issue, but I don't know how to check memory usage while this is running (UNIX environment). I'd prefer something that prints to STDERR as it goes, so I can watch the values while the main program runs.
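Roughly what I'm after is something like the sketch below, which prints the resident set size to STDERR. It assumes a Linux-style /proc filesystem and falls back to ps(1); the name report_mem and the labels are just placeholders, and I'm not sure this is the right approach:

# Sketch: report resident memory to STDERR; assumes Linux-style /proc,
# falls back to ps(1). report_mem is a placeholder name.
sub report_mem {
    my ($label) = @_;
    my $kb;
    if (open my $fh, '<', "/proc/$$/status") {
        ($kb) = map { /^VmRSS:\s+(\d+)/ ? $1 : () } <$fh>;
        close $fh;
    }
    $kb = (split ' ', `ps -o rss= -p $$`)[0] unless defined $kb;
    print STDERR "$label (pid $$): $kb kB resident\n";
}

report_mem("before fork");   # e.g. call before each fork and inside each child

Calling it in the parent before forking and again inside each grandchild would show whether resident memory actually climbs as more children run.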
Anyone out there with code-tuning experience who can give me some pointers here?