http://qs321.pair.com?node_id=512879

swares has asked for the wisdom of the Perl Monks concerning the following question:

This code seems to work, but it breaks on large files due to out-of-memory issues. I think the data needs to be written out to the file periodically instead of being cached in memory until the transfer completes or fails, but I do not know quite how to go about it. We would use this to make backups of directories on remote machines without creating a tarball on the remote machine.

Does anyone know how to make it work with very large files?

use Net::SSH::Perl;

# vars to set: $user, $password, $host, $local_fqn, $rmt_dir
open STDOUT, ">$local_fqn" or die "Can't redirect stdout";
select STDOUT;
$| = 1;    # make unbuffered

$cmd = "tar -czf - $rmt_dir";
my $ssh = Net::SSH::Perl->new($host);
$ssh->login($user, $password);
my($stdout, $stderr, $exit) = $ssh->cmd($cmd);
close STDOUT;

Re: Problem getting a large compressed tar file over ssh.
by Corion (Patriarch) on Nov 30, 2005 at 08:13 UTC

    You have an error in your script - the command you're sending to the remote machine is not interpolated. You likely want:

    $cmd="tar -czf - $rmt_dir";

    Depending on how demanding your error-handling requirements are, avoiding Perl might be the most convenient way to resolve your memory issues:

    #!/bin/sh
    # $host and $rmt_dir are assumed to already be set in the shell
    ssh $host "tar -czf - $rmt_dir" >backup.tar.gz

    This will run the tar command remotely and write the created .tar.gz file to its STDOUT; on the local end the output goes straight into a file instead of being buffered in memory. You will still need some error checking afterwards, though.
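
    For example, a minimal sketch of that error checking (not from the original thread, and assuming $host and $rmt_dir are set as shell variables) could rely on ssh exiting with the remote command's status:

    #!/bin/sh
    # ssh exits with the exit status of the remote command (or 255 if the
    # connection itself failed), so a non-zero status means either the
    # session or the remote tar went wrong.
    if ssh "$host" "tar -czf - $rmt_dir" >backup.tar.gz; then
        echo "backup of $rmt_dir complete"
    else
        status=$?
        echo "backup failed with exit status $status" >&2
        rm -f backup.tar.gz
        exit $status
    fi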

      I used to do this as you suggested, but that method requires a trusted-host type relationship, which I do not have in this environment. I have to connect via ssh to a number of different systems using username/password authentication.

        You don't really need a trusted-host relationship, you just need passwordless keys. If passwordless keys are impossible, then you will have to use the interactive method. Maybe you can get further by hacking Net::SSH::Perl to redirect the STDOUT part of the connection. Looking into the source of Net::SSH::Perl::SSH1, there are handlers for packet types like SSH_SMSG_STDOUT_DATA, and replacing the default handler with one that doesn't accumulate the string might help:

        # Original code from Net::SSH::Perl::SSH1
        sub cmd {
            ...
            unless ($ssh->handler_for(SSH_SMSG_STDOUT_DATA)) {
                $ssh->register_handler(SSH_SMSG_STDOUT_DATA,
                    sub { $ssh->{_cmd_stdout} .= $_[1]->get_str });
            }
            ...
        }

        I would try to supply my own callback like this:

        my $ssh = Net::SSH::Perl->new(...);
        open my $outfh, ">", $filename
            or die "Couldn't create '$filename': $!";
        binmode $outfh;
        $ssh->register_handler('stdout', sub {
            # $_[1] carries the stdout data; write its contents, not the object itself
            print $outfh $_[1]->bytes;
        });
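
        Putting those pieces together, an untested sketch of a streaming version might look like the following. It assumes the SSH-2 protocol, where register_handler takes the names 'stdout'/'stderr' and hands the callback a Net::SSH::Perl::Buffer object; under SSH-1 you would register for SSH_SMSG_STDOUT_DATA and call $_[1]->get_str instead, as in the code quoted above. The variables are the ones from the original post.

        use Net::SSH::Perl;

        # Assumes $user, $password, $host, $local_fqn and $rmt_dir are set
        # as in the original post.
        open my $outfh, ">", $local_fqn
            or die "Couldn't create '$local_fqn': $!";
        binmode $outfh;

        my $ssh = Net::SSH::Perl->new($host, protocol => 2);
        $ssh->login($user, $password);

        # Write each chunk of remote stdout straight to disk instead of
        # letting cmd() accumulate the whole tarball in memory.
        $ssh->register_handler("stdout", sub {
            my($channel, $buffer) = @_;
            print $outfh $buffer->bytes;
        });

        # With a custom stdout handler installed, the returned $stdout is
        # not populated; stderr and the exit status still come back.
        my($stdout, $stderr, $exit) = $ssh->cmd("tar -czf - $rmt_dir");
        close $outfh or die "Couldn't close '$local_fqn': $!";
        die "remote tar exited with status $exit: $stderr" if $exit;
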
      I was following this thread, and I never saw anything about the GNU tar portion.

      I've had issues in the past where GNU tar chokes if the tar file created exceeds 2GB in size. I don't know if this has been fixed, but I was seeing this issue as late as Sept of 2004.