Re: Need to speed up many regex substitutions and somehow make them a here-doc list (MCE solution)

by marioroy (Prior)
on Oct 05, 2022 at 22:10 UTC ( [id://11147266] )


in reply to Need to speed up many regex substitutions and somehow make them a here-doc list

Running multiple sed commands enables parallel execution, as can be seen in the output: notice that the user time is greater than the real time.

% time ./re.sh

real    0m5,201s
user    0m43,394s
sys     0m1,302s

The following is a demonstration of processing a huge file (e.g. > 700 MB) using MCE, as that is the case the OP gave to tackle. I made it to consume minimal overhead from MCE itself; for example, workers write directly to the output handle in an orderly fashion rather than passing results back to the manager process.

#!/usr/bin/env perl
# https://www.perlmonks.org/?node_id=11147200

use strict;
use warnings;

use MCE;

die "usage: $0 infile1.txt [ infile2.txt ... ]\n" unless @ARGV;

my $OUT_FH;  # output file-handle used by workers

# Spawn worker pool.
my $mce = MCE->new(
    max_workers => MCE::Util::get_ncpu(),
    chunk_size  => '64K',
    init_relay  => 0,   # specifying init_relay loads MCE::Relay
    use_slurpio => 1,   # enable slurpio
    user_begin  => sub {
        # worker begin routine per each file to be processed
        my ($outfile) = @{ MCE->user_args() };
        open $OUT_FH, '>>', $outfile;
    },
    user_end => sub {
        # worker end routine per each file to be processed
        close $OUT_FH if defined $OUT_FH;
    },
    user_func => sub {
        # worker chunk routine
        my ($mce, $chunk_ref, $chunk_id) = @_;
        process_chunk($chunk_ref);
    }
)->spawn;

The above spawns a pool of workers. Let's process a file.

# first, truncate output file
{ open my $fh, '>', "out-sed.dat" or die "$!\n"; }

$mce->process("in.txt", { user_args => [ "out-sed.dat" ] });
$mce->shutdown;

Or replace the snippet above with the following to process a list of files, re-using the worker pool.

# Process file(s).
my $status = 0;

while (my $infile = shift @ARGV) {
    if (-d $infile) {
        warn "WARN: '$infile': Is a directory, skipped\n";
        $status = 1;
    }
    elsif (! -f $infile) {
        warn "WARN: '$infile': No such file, skipped\n";
        $status = 1;
    }
    else {
        my $outfile = $infile;
        $outfile =~ s/\.txt$/.dat/;
        if ($outfile eq $infile) {
            warn "WARN: '$outfile': matches input name, skipped\n";
            $status = 1;
            next;
        }
        # truncate output file
        open my $fh, '>', $outfile or do {
            warn "WARN: '$outfile': $!, skipped\n";
            $status = 1;
            next;
        };
        close $fh;
        # process file; pass argument(s) to workers
        $mce->process($infile, { user_args => [ $outfile ] });
    }
}

$mce->shutdown;  # reap workers

exit $status;

Next is the function in which workers process a chunk line by line. Since we specified use_slurpio, $chunk_ref is a scalar reference; the worker opens an in-memory filehandle on it and appends the result of each line to the $output scalar. Finally, upon exiting the loop, workers write to the output file handle serially and in order.

# Worker function.

sub process_chunk {
    my ($chunk_ref) = @_;
    my $output = '';

    open my $fh, '<', $chunk_ref;
    while (<$fh>) {
        s/[[:punct:]]//g;
        s/[0-9]//g;
        s/w(as|ere)/be/gi;
        # ... remaining substitutions ...

        # append to output var
        $output .= $_;
    }
    close $fh;

    # Output orderly and serially.
    MCE->relay_lock;
    print $OUT_FH $output;
    $OUT_FH->flush;
    MCE->relay_unlock;
}

Another way is to process the chunk all at once, omitting the while loop.

# Worker function.

sub process_chunk {
    my ($chunk_ref) = @_;

    $$chunk_ref =~ s/[[:punct:]]//g;
    $$chunk_ref =~ s/[0-9]//g;
    $$chunk_ref =~ s/w(as|ere)/be/gi;
    # ... remaining substitutions ...

    # Output orderly and serially.
    MCE->relay_lock;
    print $OUT_FH $$chunk_ref;
    $OUT_FH->flush;
    MCE->relay_unlock;
}
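The OP's other wish was to keep the substitution list in a here-doc. Below is a minimal sketch of one way to combine that with the chunk-at-once worker above; the ':::' separator, the @RULES name, and the parsing code are my own illustration, not something taken from MCE or from the thread. Leading and trailing spaces in the rules are significant, and the rule table must be built before calling ->spawn so that the forked workers inherit it.

# Substitution rules in a here-doc: "PATTERN ::: REPLACEMENT", one per line.
# Build this table before ->spawn so forked workers inherit it.
my @RULES = map {
    my ($pat, $rep) = split /:::/, $_, 2;
    [ qr/$pat/i, $rep ]
} grep { /:::/ } split /\n/, <<'END_RULES';
w(as|ere):::be
 going ::: go 
 knew ::: know 
 men ::: man 
END_RULES

sub process_chunk {
    my ($chunk_ref) = @_;

    $$chunk_ref =~ s/[[:punct:]]//g;
    $$chunk_ref =~ s/[0-9]//g;

    # apply each here-doc rule to the whole chunk
    for my $rule (@RULES) {
        $$chunk_ref =~ s/$rule->[0]/$rule->[1]/g;
    }

    # Output orderly and serially.
    MCE->relay_lock;
    print $OUT_FH $$chunk_ref;
    $OUT_FH->flush;
    MCE->relay_unlock;
}

Precompiling the patterns once with qr// avoids recompiling them for every chunk.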

Replies are listed 'Best First'.
Re^2: Need to speed up many regex substitutions and somehow make them a here-doc list (MCE solution)
by marioroy (Prior) on Oct 05, 2022 at 22:49 UTC

    Sometimes I like to know what the overhead for MCE really is. So here is a way to measure just the chunking done by MCE: simply comment out user_begin, user_end, and the call to the process routine. That's it.

    #!/usr/bin/env perl

    use strict;
    use warnings;

    use MCE;
    use Time::HiRes 'time';

    die "usage: $0 infile1.txt\n" unless @ARGV;

    my $OUT_FH;  # output file-handle used by workers

    # Spawn worker pool.
    my $mce = MCE->new(
        max_workers => MCE::Util::get_ncpu(),
        chunk_size  => '64K',
        init_relay  => 0,   # specifying init_relay loads MCE::Relay
        use_slurpio => 1,   # enable slurpio
      # user_begin  => sub {
      #     # worker begin routine per each file to be processed
      #     my ($outfile) = @{ MCE->user_args() };
      #     open $OUT_FH, '>>', $outfile;
      # },
      # user_end => sub {
      #     # worker end routine per each file to be processed
      #     close $OUT_FH if defined $OUT_FH;
      # },
        user_func => sub {
            # worker chunk routine
            my ($mce, $chunk_ref, $chunk_id) = @_;
          # process_chunk($chunk_ref);
        }
    )->spawn;

    my $start = time;
    $mce->process($ARGV[0]);
    printf "%0.3f seconds\n", time - $start;

    I have a big file which is 767 MB. The overhead is a fraction of a second.

    $ ls -lh big.txt
    -rw-r--r-- 1 mario mario 767M Oct 5 10:07 big.txt

    $ perl demo.pl big.txt
    0.154 seconds

    Edit: That timing benefits from the OS-level file cache, as I had already read the file during prior testing.

      The OP mentioned a large number of text files (thousands to millions at a time, up to a couple of MB each). I think parallelization is better broken down at the file level here. Basically, create a list of input files and chunk that list instead. Since the list may range from thousands to millions of entries, go with a chunk_size of 1 or 2.

      Notice that workers are spawned early, before the large array is created. Create the array and pass an array reference to MCE so that no extra copy is made. This is how to tackle a big job while keeping overhead low. Then fasten your seat belt and enjoy the parallelization in top or htop.

      use strict;
      use warnings;

      use MCE;
      use Time::HiRes 'time';

      sub process_file {
          my ($file) = @_;
      }

      my $mce = MCE->new(
          max_workers => MCE::Util::get_ncpu(),
          chunk_size  => 2,
          user_func   => sub {
              my ($mce, $chunk_ref, $chunk_id) = @_;
              process_file($_) for @{ $chunk_ref };
          }
      )->spawn;

      my @file_list = (1 .. 1_000_000);  # simulate a list of 1 million files

      my $start = time;
      $mce->process(\@file_list);
      printf "%0.3f seconds\n", time - $start;

      $mce->shutdown;  # reap workers
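      In the timing demo above, process_file is intentionally empty. Below is a minimal sketch of what a real process_file might look like, assuming each file is small enough to slurp whole (the OP said up to a couple of MB) and re-using the .txt to .dat output naming from the earlier example; the error handling here is illustrative only. Because every input file gets its own output file, no relay locking is needed in this variant.

      sub process_file {
          my ($file) = @_;

          # slurp the whole file; the OP's files are at most a couple of MB
          open my $in, '<', $file or do { warn "WARN: '$file': $!\n"; return };
          my $text = do { local $/; <$in> };
          close $in;

          # same style of substitutions as the chunk-based version above
          $text =~ s/[[:punct:]]//g;
          $text =~ s/[0-9]//g;
          $text =~ s/w(as|ere)/be/gi;
          # ... remaining substitutions ...

          # write the result next to the input, swapping .txt for .dat
          (my $outfile = $file) =~ s/\.txt$/.dat/;
          open my $out, '>', $outfile or do { warn "WARN: '$outfile': $!\n"; return };
          print $out $text;
          close $out;
      }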

      Let's find out the IPC overhead. I am curious myself.

      chunk_size  1     3.773 seconds     1 million chunks
      chunk_size  2     1.930 seconds     500 thousand chunks
      chunk_size 10     0.423 seconds     100 thousand chunks
      chunk_size 20     0.234 seconds      50 thousand chunks

      It is mind-boggling nonetheless: just a fraction of a second for 50 thousand chunks. Moreover, 2 seconds of overhead will not be felt when processing 500 thousand files, nor will 4 seconds when handling 1 million files.

Re^2: Need to speed up many regex substitutions and somehow make them a here-doc list (MCE solution)
by marioroy (Prior) on Oct 05, 2022 at 22:28 UTC

    I ran with the following to compare with the sed output. See the complementary post for a version based on Marshall's implementation.

    # Worker function.

    sub process_chunk {
        my ($chunk_ref) = @_;

        $$chunk_ref =~ s/[[:punct:]]//g;
        $$chunk_ref =~ s/[0-9]//g;
        $$chunk_ref =~ s/w(as|ere)/be/gi;
        $$chunk_ref =~ s/ need.* / need /gi;
        $$chunk_ref =~ s/ .*meant.* / mean /gi;
        $$chunk_ref =~ s/ .*work.* / work /gi;
        $$chunk_ref =~ s/ .*read.* / read /gi;
        $$chunk_ref =~ s/ .*allow.* / allow /gi;
        $$chunk_ref =~ s/ .*gave.* / give /gi;
        $$chunk_ref =~ s/ .*bought.* / buy /gi;
        $$chunk_ref =~ s/ .*want.* / want /gi;
        $$chunk_ref =~ s/ .*hear.* / hear /gi;
        $$chunk_ref =~ s/ .*came.* / come /gi;
        $$chunk_ref =~ s/ .*destr.* / destroy /gi;
        $$chunk_ref =~ s/ .*paid.* / pay /gi;
        $$chunk_ref =~ s/ .*selve.* / self /gi;
        $$chunk_ref =~ s/ .*self.* / self /gi;
        $$chunk_ref =~ s/ .*cities.* / city /gi;
        $$chunk_ref =~ s/ .*fight.* / fight /gi;
        $$chunk_ref =~ s/ .*creat.* / create /gi;
        $$chunk_ref =~ s/ .*makin.* / make /gi;
        $$chunk_ref =~ s/ .*includ.* / include /gi;
        $$chunk_ref =~ s/ .*mean.* / mean /gi;
        $$chunk_ref =~ s/ talk.* / talk /gi;
        $$chunk_ref =~ s/ going / go /gi;
        $$chunk_ref =~ s/ getting / get /gi;
        $$chunk_ref =~ s/ start.* / start /gi;
        $$chunk_ref =~ s/ goes / go /gi;
        $$chunk_ref =~ s/ knew / know /gi;
        $$chunk_ref =~ s/ trying / try /gi;
        $$chunk_ref =~ s/ tried / try /gi;
        $$chunk_ref =~ s/ told / tell /gi;
        $$chunk_ref =~ s/ coming / come /gi;
        $$chunk_ref =~ s/ saying / say /gi;
        $$chunk_ref =~ s/ men / man /gi;
        $$chunk_ref =~ s/ women / woman /gi;
        $$chunk_ref =~ s/ took / take /gi;
        $$chunk_ref =~ s/ tak.* / take /gi;
        $$chunk_ref =~ s/ lying / lie /gi;
        $$chunk_ref =~ s/ dying / die /gi;
        $$chunk_ref =~ s/ made /make /gi;
        $$chunk_ref =~ s/ used.* / use /gi;
        $$chunk_ref =~ s/ using.* / use /gi;

        # Output orderly and serially.
        MCE->relay_lock;
        print $OUT_FH $$chunk_ref;
        $OUT_FH->flush;
        MCE->relay_unlock;
    }
