Help me beat NodeJS

by rickyw59 (Novice)
on Feb 12, 2016 at 17:09 UTC

rickyw59 has asked for the wisdom of the Perl Monks concerning the following question:

Hello, I'm trying to write a script to go through hundreds of "log.gz" files, roughly 500,000 lines per file. Is there something limiting me? How can perl be three times faster on a single file, yet when I start forking, its performance tanks? Below are the results of timing the parse of a single file. When timing 70 files, nodejs takes 20 seconds and perl takes 60 seconds.

zcat &> /dev/null  0.54s user 0.01s system  99% cpu 0.549 total
node test.js       0.79s user 0.05s system 130% cpu 0.646 total
perl test.pl       0.23s user 0.03s system  38% cpu 0.686 total

I've tried forking for each file (limited to the number of CPUs, 24). I've also tried dividing the logs evenly by the number of forks, i.e. forking 24 times with each fork working through n files; somehow this was slightly slower (a sketch of that variant appears after the code below). Both node and perl are spawning zcats and parsing line by line. I'm unable to use zlib, because the files are zipped incorrectly by the device generating the logs.

*Edit: the directory is an nfsv3-mounted SAN. For the tests I'm only reading, not printing, so IO on the test server should not be an issue. Also, both the node and perl tests are being run in the same environment.

#!/usr/local/bin/perl
use strict;
use warnings;
use Parallel::ForkManager;

my $pm    = new Parallel::ForkManager(24);
my $dir   = '/data/logs/*.log.gz';
my @files = sort(glob "$dir");

for my $file (@files) {
    $pm->start and next;
    open( FH, "-|" ) || exec "/bin/zcat", $file;
    while ( my $line = <FH> ) {
        my @matches = $line =~ /".*?"|\S+/g;
        # print "$matches[0],$matches[1],$matches[3],$matches[4]";
        # matches[0] = date, matches[1] = time, matches[3] = source IP
        # matches[4] = dest IP; some other matches are used or may be used.
        # line is space separated, but any field with a space is inside "",
        # hence regex instead of split.
    }
    $pm->finish;
}
$pm->wait_all_children;
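
For reference, a rough sketch of the "divide the logs evenly" variant mentioned above. It is untested and assumes the same glob pattern, /bin/zcat and 24 workers as the code above; it simply deals the files round-robin into one bucket per worker:

#!/usr/local/bin/perl
use strict;
use warnings;
use Parallel::ForkManager;

my $workers = 24;
my @files   = sort glob '/data/logs/*.log.gz';
my $pm      = Parallel::ForkManager->new($workers);

# Deal the files round-robin into one bucket per worker.
my @buckets;
push @{ $buckets[ $_ % $workers ] }, $files[$_] for 0 .. $#files;

for my $bucket (@buckets) {
    $pm->start and next;
    for my $file (@$bucket) {
        open( my $fh, "-|", "/bin/zcat", $file )
            or die "Cannot open pipe for $file: $!";
        while ( my $line = <$fh> ) {
            my @matches = $line =~ /".*?"|\S+/g;
            # ... work with @matches here ...
        }
        close $fh;
    }
    $pm->finish;
}
$pm->wait_all_children;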

Re: Help me beat NodeJS (sleep)
by tye (Sage) on Feb 12, 2016 at 20:34 UTC

      Parallel::ForkManager is only sleeping during " $pm->wait_all_children; ", correct? Or are you saying during every fork there is a hard-limit sleep?

Re: Help me beat NodeJS
by AnomalousMonk (Archbishop) on Feb 12, 2016 at 19:02 UTC

    Please see the many postings of BrowserUk in response to questions of the general form "Why doesn't multithreading/multiprocessing solve all my problems?" My impression is that he handles two or three applications like the one you describe before he finishes his first mug of coffee each morning — or maybe it's tea in his case.


    Give a man a fish:  <%-{-{-{-<

Re: Help me beat NodeJS
by runrig (Abbot) on Feb 12, 2016 at 18:46 UTC
    Slightly better is:
    open( my $fh, "-|", '/bin/zcat' => $file ) or die "Error opening $file: $!";
    But since most of the time is spent reading and parsing, this will make a negligible difference performance-wise.
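
    For context, a minimal sketch (untested) of how that lexical, list-form pipe open would slot into the loop from the original post, assuming the same @files and $pm:

    for my $file (@files) {
        $pm->start and next;
        open( my $fh, "-|", '/bin/zcat' => $file )
            or die "Error opening $file: $!";
        while ( my $line = <$fh> ) {
            my @matches = $line =~ /".*?"|\S+/g;
            # ... same field handling as before ...
        }
        close $fh;
        $pm->finish;
    }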
Re: Help me beat NodeJS
by Laurent_R (Canon) on Feb 12, 2016 at 18:57 UTC
    Hmm, your process is probably limited by IO throughput, so using more CPUs won't help much against the disk-access bottleneck. In such cases it often ends up somewhat slower, because of the extra process-management overhead.
Re: Help me beat NodeJS
by marioroy (Prior) on Feb 13, 2016 at 17:23 UTC

    Update: added pattern matching to the demonstration.

    Greetings,

    At first glance, I am not seeing NodeJS performing the regex for fields containing spaces. That's not really fair, IMHO :) The regex in Perl is likely the main reason it takes longer. Having 24 workers reading simultaneously from the nfsv3-mounted SAN may be another reason, even though NodeJS seems unaffected by it; my bet, though, is still on the regex statement in Perl.
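
    One quick way to check that hypothesis is to benchmark the catch-all regex against a plain split on a representative line. A rough, untested sketch using the core Benchmark module; the sample line is made up to match the format described in this thread:

    #!/usr/local/bin/perl
    use strict;
    use warnings;
    use Benchmark qw(cmpthese);

    # Made-up sample line in the format described elsewhere in the thread.
    my $line = '2016-02-12 17:09:00 1.1.1.1 "A multi-word field" 2.2.2.2';

    cmpthese( -2, {
        regex => sub { my @f = $line =~ /".*?"|\S+/g },
        split => sub { my @f = split ' ', $line },
    } );

    The split variant does not honour the quoted field, of course; it is only there to show what the quote-aware regex costs relative to the simplest possible parse.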

    The following is an alternative solution. IO is faster in the upcoming 1.700 release than in MCE 1.608, hence the note about using 1.699_010. Remarks for the Windows platform are inlined as well.

    #!/usr/local/bin/perl
    use strict;
    use warnings;

    # IO performance in 1.699_010 is faster than MCE 1.608.
    # https://metacpan.org/release/MARIOROY/MCE-1.699_010
    use MCE::Loop;
    use MCE::Candy;

    my $dir     = 'logs/*.log.gz';
    my @files   = sort(glob "$dir");
    my $pattern = "some_string";

    MCE::Loop::init {
        gather      => MCE::Candy::out_iter_fh(\*STDOUT),
        chunk_size  => '240k',
        max_workers => 24,
        use_slurpio => 1,
    };

    open( my $fh, "-|", "zcat", @files ) or die "open error: $!\n";

    mce_loop {
        my ( $mce, $slurp_ref, $chunk_id ) = @_;
        my $buf = '';

        # Quickly determine if a match is found...
        # ...and process slurped chunk only if true.
        if ( $$slurp_ref =~ /$pattern/m ) {

            # The following is fast on Unix, but performance degrades
            # drastically on Windows beyond 4 workers.
            open my $MEM_FH, '<', $slurp_ref;
            while ( my $line = <$MEM_FH> ) {
                if ( $line =~ /$pattern/ ) {
                    my @matches = $line =~ /".*?"|\S+/g;
                    $buf .= "$matches[0],$matches[1],$matches[3],$matches[4]\n";
                }
            }
            close $MEM_FH;

            # Therefore, use the following construction on Windows.
            # while ( $$slurp_ref =~ /([^\n]+\n)/mg ) {
            #     # my $line = $1;  # possibly save $1 to not lose the value
            #     #                 # not necessary for this demonstration
            #     my @matches = $1 =~ /".*?"|\S+/g;
            #     $buf .= "$matches[0],$matches[1],$matches[3],$matches[4]\n";
            # }
        }

        # Send output to the manager process for orderly output to STDOUT.
        $mce->gather($chunk_id, $buf);

    } $fh;

    close $fh;

    There is a single reader across NFS, the manager process only, not the workers. The effect is sequential IO, which is typically faster than random IO.

    Regards, Mario

      Update: added pattern matching to the demonstration

      If parallel reads are desired, the following demonstration does the same thing. Basically, specifying chunk_size => 1 is all one needs to make MCE run like Parallel::ForkManager.

      #!/usr/local/bin/perl
      use strict;
      use warnings;

      use MCE::Loop;

      my $dir     = 'logs/*.log.gz';
      my @files   = sort(glob "$dir");
      my $pattern = "some_string";

      MCE::Loop::init {
          max_workers => 24,
          chunk_size  => 1,
      };

      mce_loop {
          my ( $mce, $chunk_ref, $chunk_id ) = @_;
          open( my $fh, "-|", "zcat", $chunk_ref->[0] ) or die "open error: $!\n";
          while ( my $line = <$fh> ) {
              if ( $line =~ /$pattern/ ) {
                  my @matches = $line =~ /".*?"|\S+/g;
                  print "$matches[0],$matches[1],$matches[3],$matches[4]\n";
              }
          }
          close $fh;
      } @files;

      Regards, Mario

        Wow, thanks, I'll give this a shot. I will have to read up more on MCE; it looks very useful. I should have clarified: in the "parser" function in nodejs, I'm applying the same regex as in perl, to be fair. I've run the tests both looking for a specific string (/some_string/) and with the regex in the above code (/".*?"|\S+/g), which captures everything into an array, since the lines are in this format: ' 1970-01-01 00:00:00 1.1.1.1 "A multi-word field" 2.2.2.2 '
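
        Just to make the capture concrete, here is a tiny (untested) snippet showing what that regex yields for the sample line above. Note the surrounding quotes are kept on the quoted field; the indices in the real logs evidently differ, since there $matches[3] and $matches[4] are the IPs:

        use strict;
        use warnings;

        my $line = '1970-01-01 00:00:00 1.1.1.1 "A multi-word field" 2.2.2.2';
        my @matches = $line =~ /".*?"|\S+/g;
        # yields: '1970-01-01', '00:00:00', '1.1.1.1',
        #         '"A multi-word field"', '2.2.2.2'
        print join( ',', @matches ), "\n";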

Re: Help me beat NodeJS
by BrowserUk (Patriarch) on Feb 12, 2016 at 22:59 UTC
    When timing 70 files, nodejs takes 20 seconds

    Can you post the nodejs code?


    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority". I knew I was on the right track :)
    In the absence of evidence, opinion is indistinguishable from prejudice.

      This was typed up quickly from memory. I haven't tested this code yet, but you get the general idea. I'm also brand new at JavaScript so please don't judge.

      'use strict';
      var cluster = require('cluster');
      var spawn = require('child_process').spawn;
      var stream = require('stream');
      var glob = require('glob');

      var files = glob.sync('/data/logs/*.log.gz');

      function Parser() {
          var t = new stream.Transform( { objectMode: true } );
          t._transform = transform;
          t._flush = flush;
          t.parse = parse;
          return t;
      }

      function transform(chunk, enc, done) {
          var data = chunk.toString();
          // if flush gave us remaining data from the last line, prepend it to this chunk
          if (this._lastLineData) data = this._lastLineData + data;
          var lines = data.split('\n');
          // keep the trailing partial line for the next chunk
          this._lastLineData = lines[lines.length - 1];
          for (var i = 0; i < lines.length - 1; i++) {
              this.push(parse(lines[i]));
          }
          done();
      }

      function flush(done) {
          // if there is remaining data, push it before ending the stream
          if (this._lastLineData) this.push(this._lastLineData);
          this._lastLineData = null;
          done();
      }

      function parse(line) {
          // split line into fields, or apply regex to filter results
          return line;
      }

      if (cluster.isMaster) {
          var workers = 24; // number of CPUs
          var logsPerWorker = Math.floor(files.length / workers);
          var remainder = files.length % workers;
          var logSlice = {};
          var sliceStart = 0;
          var sliceEnd = 0;
          for (var i = 0; i < workers; i++) {
              sliceEnd += logsPerWorker;
              if (remainder) {
                  sliceEnd++;
                  remainder--;
              }
              logSlice[i] = files.slice(sliceStart, sliceEnd);
              sliceStart = sliceEnd;
              if (sliceStart >= files.length) {
                  // no more logs to distribute, break out of loop
                  break;
              }
          }
          for (var i = 0; i < workers; i++) {
              // fork and pass the list of logs for the child to process
              if (logSlice[i] && logSlice[i].length) {
                  cluster.fork({ logs: logSlice[i].join(',') });
              }
          }
      } else {
          var liner = Parser();
          var logs = process.env.logs.split(','); // read logs passed by parent
          for (var i = 0; i < logs.length; i++) {
              var child = spawn('zcat', [ logs[i] ]);
              child.stdout.pipe(liner); // pipe this to stdout or to a writeStream to write to a file
          }
      }
