Running multiple sed commands concurrently enables parallelism, as can be seen in the output. Notice that the user time is greater than the real time.
% time ./re.sh

real    0m5,201s
user    0m43,394s
sys     0m1,302s
The following is a demonstration of processing a huge file (e.g. > 700 MB) using MCE, as that is the scenario given by the OP to tackle. I wrote this to incur minimal overhead from MCE itself. For example, workers write directly to the output handle in an orderly fashion rather than passing results back to the manager process.
#!/usr/bin/env perl
# https://www.perlmonks.org/?node_id=11147200

use strict;
use warnings;

use MCE;

die "usage: $0 infile1.txt [ infile2.txt ... ]\n" unless @ARGV;

my $OUT_FH;  # output file-handle used by workers

# Spawn worker pool.
my $mce = MCE->new(
    max_workers => MCE::Util::get_ncpu(),
    chunk_size  => '64K',
    init_relay  => 0,   # specifying init_relay loads MCE::Relay
    use_slurpio => 1,   # enable slurpio
    user_begin  => sub {
        # worker begin routine per each file to be processed
        my ($outfile) = @{ MCE->user_args() };
        open $OUT_FH, '>>', $outfile
            or die "ERROR: cannot open '$outfile': $!\n";
    },
    user_end => sub {
        # worker end routine per each file to be processed
        close $OUT_FH if defined $OUT_FH;
    },
    user_func => sub {
        # worker chunk routine
        my ($mce, $chunk_ref, $chunk_id) = @_;
        process_chunk($chunk_ref);
    }
)->spawn;
The above spawns a pool of workers. Let's process a file.
# first, truncate the output file
{ open my $fh, '>', "out-sed.dat" or die "$!\n"; }

$mce->process("in.txt", { user_args => [ "out-sed.dat" ] });
$mce->shutdown;
Or replace the prior four lines with the following to process a list of files, re-using the worker pool.
# Process file(s).
my $status = 0;

while (my $infile = shift @ARGV) {
    if (-d $infile) {
        warn "WARN: '$infile': Is a directory, skipped\n";
        $status = 1;
    }
    elsif (! -f $infile) {
        warn "WARN: '$infile': No such file, skipped\n";
        $status = 1;
    }
    else {
        my $outfile = $infile;  $outfile =~ s/\.txt$/.dat/;
        if ($outfile eq $infile) {
            warn "WARN: '$outfile': matches input name, skipped\n";
            $status = 1;
            next;
        }
        # truncate output file
        open my $fh, '>', $outfile or do {
            warn "WARN: '$outfile': $!, skipped\n";
            $status = 1;
            next;
        };
        close $fh;
        # process file; pass argument(s) to workers
        $mce->process($infile, { user_args => [ $outfile ] });
    }
}

$mce->shutdown;  # reap workers
exit $status;
Next is the function in which workers process a chunk line by line. Since we specified use_slurpio, $chunk_ref is a scalar reference. The result for each line is appended to the $output scalar. Finally, upon exiting the loop, workers write to the output file handle serially, in chunk order.
# Worker function.
sub process_chunk {
    my ($chunk_ref) = @_;
    my $output = '';

    open my $fh, '<', $chunk_ref;
    while (<$fh>) {
        s/[[:punct:]]//g;
        s/[0-9]//g;
        s/w(as|ere)/be/gi;
        ...
        # append to output var
        $output .= $_;
    }
    close $fh;

    # Output orderly and serially.
    MCE->relay_lock;
    print $OUT_FH $output;  $OUT_FH->flush;
    MCE->relay_unlock;
}
Another way is to process the chunk all at once, omitting the while loop.
# Worker function.
sub process_chunk {
    my ($chunk_ref) = @_;

    $$chunk_ref =~ s/[[:punct:]]//g;
    $$chunk_ref =~ s/[0-9]//g;
    $$chunk_ref =~ s/w(as|ere)/be/gi;
    ...

    # Output orderly and serially.
    MCE->relay_lock;
    print $OUT_FH $$chunk_ref;  $OUT_FH->flush;
    MCE->relay_unlock;
}