I benchmarked your code with the same input file that I was using for other tests, with the following results:
.../ManySubstitutions>perl mceparallel.pl nightfall.txt
file: nightfall.txt mins: 0 secs: 5.763
file: nightfall.txt mins: 0 secs: 4.290
file: nightfall.txt mins: 0 secs: 4.179
file: nightfall.txt mins: 0 secs: 4.293
So this is about 3x the speed of my previous single-threaded version (exec time was about 12 seconds with that one). The slightly longer time for the first run is to be expected, as it came right after a clean reboot and Perl was not yet in memory, etc. I have 16 GB of RAM on this Win10 machine, and after one run a lot of the disk data winds up in the read cache.
From what I gather, your MCE code processes things in 64K chunks. How does it go about determining the boundary for a 64K chunk? Is there a chance that a word or a line could get split between chunks? I guess there is some slight inefficiency because MCE has to enforce sequential finishing - and again, I wasn't really sure how that sequencing is done/guaranteed?
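For what it's worth, my guess (an assumption on my part, not something I verified in the MCE source) is that a chunking reader avoids splitting lines by reading roughly the chunk size and then extending to the next newline, and that sequencing can be restored by tagging each chunk with an id and reassembling in id order. A core-Perl sketch of both ideas:

```perl
use strict;
use warnings;

# Sketch only: read a handle in ~$size-byte chunks, extending a chunk
# to the next newline so no line is ever split across chunks.
sub read_line_safe_chunks {
    my ($fh, $size) = @_;
    my @chunks;
    while (read($fh, my $buf, $size)) {
        if ($buf !~ /\n\z/) {       # chunk ended mid-line:
            my $rest = <$fh>;       # extend to the next newline
            $buf .= $rest if defined $rest;
        }
        push @chunks, $buf;
    }
    return @chunks;
}

# Demo: five short lines, 8-byte chunks
my $data = "alpha\nbravo\ncharlie\ndelta\necho\n";
open my $fh, '<', \$data or die $!;
my @chunks = read_line_safe_chunks($fh, 8);
print "chunk: [$_]\n" for @chunks;   # every chunk ends on a line boundary

# Ordered reassembly: even if workers finish out of order, sorting the
# (chunk_id, result) pairs by id restores the original sequence.
my @results = map { [ $_, uc $chunks[$_] ] } reverse 0 .. $#chunks;
my $output  = join '', map { $_->[1] }
              sort { $a->[0] <=> $b->[0] } @results;
print $output;
```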
In the OP's situation, he/she says that there could be millions of files to process. We are just using a single file for benchmarking purposes. In practice, I would think that a scheme that processes, say, X streams of files simultaneously would be a simpler processing model, where X is the number of cores/CPUs. My machine has 4 cores, so I split the big input file into 4 smaller ones of approximately the same size and show some forking code below.
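For reference, the split itself should land on line boundaries rather than exact byte counts, so each piece stays valid for line-oriented processing. A sketch of how I'd do that (the function name and callback are my own invention, not part of any module):

```perl
use strict;
use warnings;

# Sketch: split one big line-oriented file into $n pieces of roughly
# equal byte size, never breaking a line across pieces.
sub split_file {
    my ($in_name, $n, $make_out_name) = @_;
    my $target = (-s $in_name) / $n;   # rough byte budget per piece
    open my $in, '<', $in_name or die "can't open $in_name: $!";
    my @out_names;
    for my $i (1 .. $n) {
        my $out_name = $make_out_name->($i);
        open my $out, '>', $out_name or die "can't open $out_name: $!";
        my $written = 0;
        while ($written < $target && defined(my $line = <$in>)) {
            print $out $line;               # whole lines only
            $written += length $line;
        }
        print $out $_ while <$in> if $i == $n;   # last piece drains the rest
        close $out;
        push @out_names, $out_name;
    }
    close $in;
    return @out_names;
}
```

Usage would be something like `split_file('nightfall.txt', 4, sub { "nightfall$_[0].txt" })`; pieces differ slightly in size because a line is never cut, which matches the not-quite-equal file sizes shown at the end of this post.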
I had expected this to be faster than the MCE version because there are no conflicts between forks (or actually virtual forks, i.e. threads on Windows). However, that turned out not to be the case. I am a bit disappointed in the performance, as I have seen other applications get much closer to the theoretical (but not reachable) limit of 4x. Still, ~3x isn't bad. This fork business is weird on Windows, and this code may work a lot better on Unix, which can do "real" forks.
use strict;
use warnings;
use Fcntl qw(:flock);
use File::Copy 'move';
use POSIX "sys_wait_h"; #for waitpid FLAGS
use Time::HiRes 'time';
$|=1;
my @in_files = qw(nightfall1.txt nightfall2.txt nightfall3.txt nightfall4.txt );
my $start_epoch = time();
# Fire off number of child processes equal to the
# number of files in @in_files;
# Then the parent who started these little guys,
# goes into a blocking wait until they all finish
# In this simple example, they will finish at about the same time
# because all the input files are roughly identical
# Each child can return a status code via exit($code_number).
# Each child writes its own output file, so the only real "bottleneck"
# is max avg throughput of the file system.
############
## Common code for all forked processes
#
# substitute whole word only
my %w1 = qw{
going go
getting get
goes go
knew know
trying try
tried try
told tell
coming come
saying say
men man
women woman
took take
lying lie
dying die
made make
};
# substitute on prefix
my %w2 = qw{
need need
talk talk
tak take
used use
using use
};
# substitute on substring
my %w3 = qw{
mean mean
work work
read read
allow allow
gave give
bought buy
want want
hear hear
came come
destr destroy
paid pay
selve self
cities city
fight fight
creat create
makin make
includ include
};
my $re1 = qr{\b(@{[ join '|', reverse sort keys %w1 ]})\b}i;
my $re2 = qr{\b(@{[ join '|', reverse sort keys %w2 ]})\w*}i;
my $re3 = qr{\b\w*?(@{[ join '|', reverse sort keys %w3 ]})\w*}i;
#my $re3 = qr{\w*?(@{[ join '|', reverse sort keys %w3 ]})\w*}i; # half speed of the \b version
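A side note on the `reverse sort` used when building these alternations: Perl alternation is leftmost-first, so when one key is a prefix of another (as with the patterns that allow trailing `\w*`), the longer key has to come first or `$1` captures the short one and the wrong hash entry gets looked up. A minimal demonstration:

```perl
use strict;
use warnings;

my @keys = qw(go going);

my $asc  = join '|', sort @keys;          # "go|going" - short key first
my $desc = join '|', reverse sort @keys;  # "going|go" - long key first

my ($short) = 'going' =~ /($asc)/;   # captures "go"    - wrong key
my ($long)  = 'going' =~ /($desc)/;  # captures "going" - the key we want
print "$short vs $long\n";
```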
#########
## Fork off the children
#
$SIG{CHLD} = 'IGNORE';
open(my $fh_log, '>>', "Alogfile.txt") or die "unable to open Alogfile.txt $!";
#$fh_log->autoflush; # not needed - this is automatic before locking or unlocking a file!
foreach my $file_name (@in_files)
{
if(my $pid = fork)
{ # parent
safe_print ($fh_log, "Spawned child pid: $pid for $file_name\n");
}
elsif(defined $pid ) # pid==0
{ # child
safe_print ($fh_log, "This is child pid $$ for $file_name. I am alive and working!\n");
process_file($file_name);
exit(0); # child must exit itself - never fall back into the fork loop
}
else
{ # fork failed pid undefined
die "MASSIVE ERROR - FORK FAILED with $!";
}
}
### now wait for all children to finish, no matter who they are
1 while wait != -1; # blocking wait until every child is reaped; avoids zombies
safe_print ($fh_log, "Parent talking...all my children are finished! Hooray!\n");
close $fh_log;
sub safe_print
{
my ($fh, @text) = @_;
my $now_epoch = time();
my $delta_secs = $now_epoch - $start_epoch;
flock $fh, LOCK_EX or die "flock can't get lock $!";
printf $fh "%.3f secs %s", $delta_secs, $_ foreach @text;
printf "%.3f secs %s", $delta_secs, $_ foreach @text;
flock $fh, LOCK_UN or die "flock can't release lock $!";
}
sub process_file
{
my $filename = shift;
open my $IN, '<', $filename or die "can't open input $filename $!";
my $outfile = $filename;
$outfile =~ s/\.txt$/\.out/;
open my $OUT, '>', $outfile or die "can't open output $outfile $!";
safe_print ($fh_log, "opened $filename and $outfile\n");
while (<$IN>)
{
tr/-!"#%&'()*,.\/:;?@\[\\\]_{}0123456789//d; # no punctuation, no digits
s/w(as|ere)/be/gi;
s{$re1}{ $w1{lc $1} }g; #this ~2-3 sec
s{$re2}{ $w2{lc $1} }g; #this ~3 sec
s{$re3}{ $w3{lc $1} }g; #this ~6 sec
print $OUT "$_";
}
close $IN;
close $OUT;
safe_print ($fh_log, "Child $$ finished working on $filename!\n");
}
__END__
0.007 secs Spawned child pid: -1620 for nightfall1.txt
0.008 secs This is child pid -1620 for nightfall1.txt. I am alive and working!
0.011 secs opened nightfall1.txt and nightfall1.out
0.013 secs Spawned child pid: -5660 for nightfall2.txt
0.014 secs This is child pid -5660 for nightfall2.txt. I am alive and working!
0.017 secs opened nightfall2.txt and nightfall2.out
0.019 secs Spawned child pid: -20048 for nightfall3.txt
0.020 secs This is child pid -20048 for nightfall3.txt. I am alive and working!
0.022 secs opened nightfall3.txt and nightfall3.out
0.029 secs Spawned child pid: -4840 for nightfall4.txt
0.031 secs This is child pid -4840 for nightfall4.txt. I am alive and working!
0.034 secs opened nightfall4.txt and nightfall4.out
4.818 secs Child -4840 finished working on nightfall4.txt!
4.827 secs Child -1620 finished working on nightfall1.txt!
4.835 secs Child -5660 finished working on nightfall2.txt!
4.842 secs Child -20048 finished working on nightfall3.txt!
4.844 secs Parent talking...all my children are finished! Hooray!
4.844 secs Parent talking...all my children are finished! Hooray!
Note that the last process to be started finished first. The file sizes aren't exactly equal, and there is also some run-to-run variability depending on how the O/S assigns cores and what else is going on in the machine.
File sizes for reference:
10/04/2022 06:44 PM 80,748,006 nightfall.txt
10/08/2022 01:30 PM 20,187,006 nightfall1.txt
10/08/2022 01:30 PM 20,187,078 nightfall2.txt
10/08/2022 01:30 PM 20,187,057 nightfall3.txt
10/08/2022 01:30 PM 20,186,865 nightfall4.txt
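One caution about the reaping strategy above: as I understand it, on real Unix forks, setting $SIG{CHLD} = 'IGNORE' tells the kernel to auto-reap children, so wait still blocks until they are all gone but you lose access to each child's exit status (Windows pseudo-forks may behave differently). If the per-child exit($code_number) values mentioned earlier actually matter, an explicit non-blocking reap of known pids via waitpid with WNOHANG (which is what the POSIX sys_wait_h import provides) sidesteps that. A sketch, not tested on Windows:

```perl
use strict;
use warnings;
use POSIX ':sys_wait_h';   # WNOHANG for non-blocking waitpid

# Sketch: track the pids we spawned and reap them explicitly, so each
# child's exit status is available in $? when it is collected.
my %alive;
for my $i (1 .. 3) {
    my $pid = fork // die "fork failed: $!";
    if ($pid == 0) {                            # child: pretend to work
        select undef, undef, undef, 0.1 * $i;   # sub-second sleep
        exit $i;                                # report status $i
    }
    $alive{$pid} = 1;                           # parent: remember the child
}

my %status;
while (%alive) {
    for my $pid (keys %alive) {
        if (waitpid($pid, WNOHANG) == $pid) {   # returns pid once it has exited
            $status{$pid} = $? >> 8;            # high byte holds the exit code
            delete $alive{$pid};
        }
    }
    select undef, undef, undef, 0.05;           # poll gently, don't spin hot
}
print "child $_ exited with $status{$_}\n" for sort keys %status;
```

The polling loop also gives the parent a natural place to hand out the next file from a long queue as each worker finishes, which is closer to what the OP's millions-of-files case would need.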