PerlMonks
in reply to Re: Use of do() to run lots of perl scripts
in thread Use of do() to run lots of perl scripts

Even with fork, "do" is still much faster:

    use Benchmark qw(cmpthese);
    cmpthese( -3, {
        do  => sub { unless (fork) { do 'script.pl'; exit } wait },
        sys => sub { system 'perl script.pl' },
    } );
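For reference, a minimal sketch of what `do FILE` buys you here: the file is compiled inside the already-running interpreter (no new perl startup), and the value of its last evaluated expression is returned. The temp-file setup below is purely illustrative:

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# Illustrative only: write a tiny script to a temp file.
my ($fh, $path) = tempfile(SUFFIX => '.pl');
print {$fh} "my \$x = 21;\n\$x * 2;\n";
close $fh;

# "do FILE" compiles and runs FILE in the current interpreter and
# returns the last expression it evaluated; $@/$! report failures.
my $result = do $path;
die "do failed: $@ $!" unless defined $result;
print "do returned $result\n";   # 42

unlink $path;
```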
map{substr$_->[0],$_->[1]||0,1}[\*||{},3],[[]],[ref qr-1,-,-1],[{}],[sub{}^*ARGV,3]

Re^3: Use of do() to run lots of perl scripts
by LanX (Saint) on Mar 03, 2021 at 23:59 UTC
    > Even with fork "do" is still much faster.

    Hmm... the whole picture might be more complicated.

    I just remembered that modern OSes optimize fork with a copy-on-write of the process's address space.

    This means that while the start of the fork might be very fast, it can slow down as soon as writes occur.

    OTOH this could also mean that large parts of the engine don't need to be physically copied, because they are static and never written to.
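    A minimal sketch of what copy-on-write means in practice: after fork, parent and child share the same physical pages until one of them writes, at which point the writer gets its own private copy and the other side is unaffected:

```perl
use strict;
use warnings;

# Sketch of copy-on-write semantics (data here is illustrative).
# After fork, parent and child share the pages holding @data; the
# child's writes make the kernel copy just the touched pages, and
# the parent's view stays unchanged.
my @data = (1 .. 5);

my $pid = fork;
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {            # child
    $_ *= 10 for @data;     # breaks CoW: child now owns private pages
    exit 0;
}

waitpid $pid, 0;            # parent
print "parent still sees: @data\n";   # 1 2 3 4 5
```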

    Cheers Rolf
    (addicted to the Perl Programming Language :)
    Wikisyntax for the Monastery

      There will be brief latency spikes as CoW page links are broken. Assuming enough RAM is available that this does not spill into swap, there is no lasting slowdown. The latency spikes can hit either the parent or the child process, whichever first writes to a CoW page.

      Note that newer Linux kernels also have a "kernel same-page merging" feature that opportunistically scans physical memory for pages that happen to have identical contents and replaces them with a single CoW page. If this is enabled, CoW-break latencies can hit even unrelated processes, if the kernel happened to notice that they had pages with the same contents. Note also that a CoW break should be much faster than hitting swap (and CoW pages can still be swapped out like any others), so this should not be a significant performance concern.

      The Perl runtime itself is written in C, compiled in advance, and demand-loaded by mmapping libperl. Read-only mappings like those used for executable machine code are (or should be...) always shared between all processes that map the same file, so you should have only one copy of libperl in RAM no matter how many (unrelated) perl processes are running. Each Perl interpreter, however, builds considerable data structures that are not mapped from the filesystem, so those will probably not be shareable between unrelated processes, although fork will "copy" them and "same-page merging" could combine them if two processes happen to have byte-identical structures.
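      A quick Linux-only way to see the shared mapping (this sketch assumes a dynamically linked perl and a /proc filesystem; a static build simply shows no match): the read-only executable segments of libperl appear with `r-xp` permissions, which the kernel shares between all processes mapping the file.

```perl
use strict;
use warnings;

# Linux-only sketch: list this process's mappings of libperl.
# Assumes perl is dynamically linked against libperl and /proc
# exists; on a static build nothing matches, which is also fine.
open my $maps, '<', '/proc/self/maps'
    or die "no /proc/self/maps here: $!";
my @libperl = grep { /libperl/ } <$maps>;
close $maps;

if (@libperl) {
    print for @libperl;   # r-xp segments are shared machine code
}
else {
    print "perl appears to be statically linked; no libperl mapping\n";
}
```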