Based on my reading of the docs, Net::OpenSSH already forks off separate processes, so I'm not sure you gain anything by spawning your own child processes in parallel. Why not fire off each connection in a loop? Net::OpenSSH appears to support this directly (taken from its POD; I added an example for $cmd and tested it):
#!/usr/bin/perl
use strict;
use warnings;
use Net::OpenSSH;
my @hosts = ( 'user@server1.com', 'user@server2.com', 'user@server3.com' );
my $cmd='uptime';
# new() does not die on failure; check error() on each connection
my %conn = map { $_ => Net::OpenSSH->new($_) } @hosts;
$conn{$_}->error and die "unable to connect to $_: " . $conn{$_}->error
    for @hosts;
my @pid;
for my $host (@hosts) {
open my($fh), '>', "/tmp/out-$host.txt"
or die "unable to create file: $!";
push @pid, $conn{$host}->spawn({stdout_fh => $fh}, $cmd);
}
waitpid($_, 0) for @pid;
exit;
The spawn method runs each remote session directly and asynchronously. I'm always leery of fork: it duplicates the parent's open descriptors, including the control sockets, into each child's process space, which is probably why you lose them when one child closes. (Hmm, isn't there reference counting?)
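On the reference-counting question, here is a minimal sketch (assumption: a plain pipe stands in for Net::OpenSSH's control socket). The kernel does reference-count open file descriptions across fork, so a child closing its inherited copy does not invalidate the parent's descriptor; what actually breaks shared state is Perl-level cleanup, e.g. an object's DESTROY running in the exiting child:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Parent opens a pipe, forks; child closes its copies and exits.
pipe my $r, my $w or die "pipe: $!";

my $pid = fork;
die "fork: $!" unless defined $pid;

if ($pid == 0) {
    # child: closing inherited descriptors only drops the child's
    # reference; the kernel keeps the pipe alive for the parent
    close $r;
    close $w;
    exit 0;
}

waitpid $pid, 0;

# parent: its descriptors still work after the child's close
print {$w} "still open\n";
close $w;
print scalar <$r>;
```

So the raw sockets survive the child; it is the per-object teardown (which Net::OpenSSH's POD discusses in its notes on forking) that you have to watch for.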
HTH,
SSF