Re: Perl Program to efficiently process 500000 small files in a Directory (AIX)

by rminner (Chaplain)
on Mar 17, 2018 at 07:11 UTC ( [id://1211103] )


in reply to Perl Program to efficiently process 500000 small files in a Directory (AIX)

Some time ago I wrote a program which also read a large number of files into memory by setting $/ to undef. At the time, the program took 45 minutes to complete. After changing the reading mechanism to File::Slurp, the runtime went down to 3 minutes; I did not change anything else. This will of course depend on a number of factors, but maybe you could give it a try. Your example adapted to File::Slurp:
use File::Copy;   # provides move()
use File::Slurp;  # Update: or use File::Slurper, which Athanasius mentioned.

opendir(DIR, $dir) or die "$!\n";
while ( defined( my $txtFile = readdir DIR ) ) {
    next if ( $txtFile !~ /\.txt$/ );
    $cnt++;
    my $data = read_file("$dir/$txtFile");
    my ($channel) = $data =~ /A\|CHNL_ID\|(\d+)/i;
    move( "$dir/$txtFile", "$outDir/$channel" ) or die $!, $/;
}
closedir(DIR);
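For reference, the "set $/ to undef" reading mechanism mentioned above looks roughly like this; a minimal sketch, where $file is a hypothetical path, just to contrast it with read_file():

# Classic slurp idiom: locally undef the input record separator
# so that <$fh> returns the entire file in one read.
my $data = do {
    local $/;
    open my $fh, '<', $file or die "Cannot open '$file' ($!)\n";
    <$fh>;
};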
Since your program blocks while reading and moving each file, you might also want to parallelize it, e.g. with Parallel::ForkManager or with MCE. Then you can do the reading and the moving of the files in parallel. To some extent you are of course I/O bound, but I think it should still give you some improvement, if implemented correctly.
Update: I whipped up a quick (untested) example for Parallel::ForkManager:
use strict;
use warnings;
use File::Copy;   # provides move()
use File::Slurp;
use Parallel::ForkManager;

sub read_next_batch_of_filenames {
    my ($DH, $MAX_FILES) = @_;
    my @files = ();
    while (my $fn = readdir $DH) {
        next if ($fn !~ m/\.txt\z/);
        push @files, $fn;
        last if (scalar(@files) >= $MAX_FILES);
    }
    if (@files) {
        return \@files;
    }
    else {
        return;
    }
}

sub move_files {
    my ($dir, $outDir, $files) = @_;
    foreach my $f (@$files) {
        my $data = read_file("$dir/$f");
        my ($channel) = $data =~ /A\|CHNL_ID\|(\d+)/i;
        move("$dir/$f", "$outDir/$channel")
            or die "Failed to move '$f' to '$outDir/$channel' ($!)\n";
    }
}

sub parallelized_move {
    my $dir    = 'FIXME';
    my $outDir = 'FIXME';
    my $MAX_PROCESSES     = 4;    # tweak this to find the best number
    my $FILES_PER_PROCESS = 1000; # process in batches of 1000, to limit forking

    my $pm = Parallel::ForkManager->new($MAX_PROCESSES);

    opendir my $DH, $dir or die "Failed to open '$dir' for reading ($!)\n";

    DATA_LOOP:
    while (my $files = read_next_batch_of_filenames($DH, $FILES_PER_PROCESS)) {
        # Forks and returns the pid for the child:
        my $pid = $pm->start and next DATA_LOOP;
        move_files($dir, $outDir, $files);
        $pm->finish; # Terminates the child process
    }
    $pm->wait_all_children; # reap all remaining children
    closedir $DH or die "Failed to close directory handle for '$dir' ($!)\n";
}
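MCE was mentioned above but not shown. Here is an equally quick (and equally untested) sketch of the same batching idea using MCE::Loop; the FIXME paths, worker count, and chunk size are placeholders to tune for your system:

use strict;
use warnings;
use File::Copy;
use File::Slurp;
use MCE::Loop;

my $dir    = 'FIXME';
my $outDir = 'FIXME';

# Collect the .txt filenames up front; 500000 names fit easily in memory.
opendir my $DH, $dir or die "Failed to open '$dir' for reading ($!)\n";
my @files = grep { m/\.txt\z/ } readdir $DH;
closedir $DH;

MCE::Loop::init {
    max_workers => 4,      # tweak this to find the best number
    chunk_size  => 1000,   # hand each worker 1000 files at a time
};

mce_loop {
    my ($mce, $chunk_ref, $chunk_id) = @_;
    foreach my $f (@$chunk_ref) {
        my $data = read_file("$dir/$f");
        my ($channel) = $data =~ /A\|CHNL_ID\|(\d+)/i;
        move("$dir/$f", "$outDir/$channel")
            or die "Failed to move '$f' to '$outDir/$channel' ($!)\n";
    }
} @files;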

Replies are listed 'Best First'.
Re^2: Perl Program to efficiently process 500000 small files in a Directory (AIX)
by Athanasius (Archbishop) on Mar 17, 2018 at 07:43 UTC
      Which slurping module is used cannot possibly be the bottleneck for the OP's code
Re^2: Perl Program to efficiently process 500000 small files in a Directory (AIX)
by Anonymous Monk on Mar 17, 2018 at 07:46 UTC
    File::Slurp cannot make that kind of speed improvement
      Actually, it did give exactly this performance gain. I changed the reading mechanism after benchmarking the program. The benchmarks showed me that more than 90% of the real runtime of my program was spent on I/O. The files were, however, larger (a few thousand XML files) and were located on a normal HDD, not an SSD. The OP could simply try it and see whether it changes anything for him.
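      A quick way to check whether the reading mechanism matters on your own data is to time both approaches with the core Benchmark module; a minimal, untested sketch (the glob pattern is a hypothetical test set):

use strict;
use warnings;
use Benchmark qw(timethese);
use File::Slurp qw(read_file);

my @files = glob('*.txt');   # hypothetical sample of the files in question

timethese(10, {
    'local $/'    => sub {
        for my $f (@files) {
            my $data = do {
                local $/;
                open my $fh, '<', $f or die "Cannot open '$f' ($!)\n";
                <$fh>;
            };
        }
    },
    'File::Slurp' => sub {
        for my $f (@files) {
            my $data = read_file($f);
        }
    },
});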
        :) There ain't no way the OP's way of slurping is 15 times slower (45/3) than any of the File::Slurp... modules

        Yes, I believe you saw a time reduction ... whatever part the slurp module played was by accident. It's not gonna provide a 15x speedup
