PerlMonks  

Re^2: Use of do() to run lots of perl scripts

by LanX (Saint)
on Mar 02, 2021 at 18:44 UTC ( id://11129026 )


in reply to Re: Use of do() to run lots of perl scripts
in thread Use of do() to run lots of perl scripts

If the intention of using do() is to run all scripts on the same runtime engine, how are problems with individual scripts changing global state avoided?
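To make the concern concrete, here is a minimal sketch (with two throwaway scripts written to a temp directory) showing how scripts run via do() share the same package globals on one interpreter:

```perl
use strict;
use warnings;
use File::Temp qw(tempdir);

# Two tiny scripts that both touch the same package variable $main::counter.
my $dir = tempdir( CLEANUP => 1 );

open my $fh1, '>', "$dir/first.pl" or die $!;
print $fh1 '$main::counter = 42; 1;';
close $fh1;

open my $fh2, '>', "$dir/second.pl" or die $!;
print $fh2 '$main::counter++; 1;';
close $fh2;

do "$dir/first.pl"  or die $@ || $!;
do "$dir/second.pl" or die $@ || $!;

# Both scripts ran on the same interpreter, so state leaked across:
print "counter is $main::counter\n";  # counter is 43
```

Any package variable, %ENV change, chdir, or redefined sub made by one script is visible to every script run afterwards in the same process.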

Cheers Rolf
(addicted to the Perl Programming Language :)
Wikisyntax for the Monastery

Replies are listed 'Best First'.
Re^3: Use of do() to run lots of perl scripts
by jcb (Parson) on Mar 03, 2021 at 03:29 UTC

    That is the easy part that our questioner already mentioned: either fork or do $script; or, more robustly, unless (fork) { do $script; exit; } so that the child cannot fall through into the parent's code.

    Performance gains here will depend on how well the system implements fork — all modern real operating systems use copy-on-write, so fork itself will be very quick, but each child will execute do FILE independently. This latter step means that perl will still need to compile every script for each request, which is probably our questioner's actual overhead problem.
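A self-contained sketch of that fork-then-do pattern (a temp script stands in for one of the questioner's real scripts; the paths are invented for illustration):

```perl
use strict;
use warnings;
use File::Temp qw(tempdir);
use POSIX ();

my $dir = tempdir( CLEANUP => 1 );

# A stand-in for one of the 500 scripts: it records which PID ran it.
open my $fh, '>', "$dir/job.pl" or die $!;
print $fh <<"EOS";
open my \$out, '>', '$dir/out.txt' or die \$!;
print \$out "done by \$\$\\n";
close \$out;
1;
EOS
close $fh;

my $pid = fork;
die "fork failed: $!" unless defined $pid;
if ($pid == 0) {
    # Child: compile and run the script, then exit immediately so
    # control never falls back into the parent's dispatch code.
    do "$dir/job.pl" or die $@ || $!;
    # _exit skips END blocks, so the child does not trigger the
    # parent's File::Temp cleanup on its way out.
    POSIX::_exit(0);
}
waitpid $pid, 0;   # parent reaps the child

open my $in, '<', "$dir/out.txt" or die $!;
my $line = <$in>;
print $line;
```

Note that the child still pays the full compile cost of job.pl on every request; only the master's own state is shared via copy-on-write.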

    The best solution is probably to refactor the Perl scripts into modules that can be loaded into the master process, duplicated with everything else at fork, and then executed quickly in the forked child.
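A sketch of that refactoring, with a hypothetical Local::Job package standing in for one converted script (inlined here so the example is self-contained; in practice it would live in its own .pm file and be loaded with use Local::Job; in the master, before any fork):

```perl
use strict;
use warnings;

# Hypothetical result of converting one script into a module:
# the file-level code moves into a named entry point.
package Local::Job;

sub run {
    my (%args) = @_;
    # ... whatever the original script did ...
    return "processed $args{input}";
}

package main;

# The master compiles Local::Job once; after fork, every child
# inherits the compiled code via copy-on-write and only has to
# call an already-compiled sub.
my $result = Local::Job::run( input => 'request-1' );
print "$result\n";  # processed request-1
```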

    Another possible workaround for compiling the scripts may be B::Bytecode and ByteLoader, although they do have some limitations. In this case, you would want the master process to have already loaded ByteLoader before forking: use ByteLoader (); will load the module without calling its import method.
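For reference, the compile step looks roughly like this (a sketch: B::Bytecode left the core distribution around perl 5.10 and now lives in the unmaintained B::C suite on CPAN, so this may not work on a modern perl at all):

```shell
# Compile once, ahead of time. -H embeds a "use ByteLoader" header
# so the resulting .plc file can be run or do()-ed directly.
perl -MO=Bytecode,-H,-oscript.plc script.pl

# Later runs load the precompiled op tree instead of recompiling:
perl script.plc
```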

      Yes, I overlooked the fork part until choroba posted his other benchmark.

      do or even require alone are not fast. Reducing perl's startup time might help, but it won't change the RAM consumption.

      My bet for the biggest time consumer is the filesystem, not the compilation. Precompiling really paid off in the 90s, but now?

      So using a RAM-disk could have the best cost benefit ratio.

      But we are all speculating here. As others have repeated over and over, the OP should be more explicit about:

      • what his problems are (startup time, RAM, ...)
      • how frequently this happens
      • what he benchmarked.

      I have my doubts that refactoring 500 scripts is an option and even then...

      Precompiling them all into the master process would make them vulnerable to global effects in the BEGIN-phase.

      Cheers Rolf
      (addicted to the Perl Programming Language :)
      Wikisyntax for the Monastery

        So using a RAM-disk could have the best cost benefit ratio.

        While our questioner has not told us what operating system they are using, modern systems already cache frequently used files in RAM. Linux, in particular, keeps recently read file contents in its page cache (with the dentry cache, or "dcache", speeding up the path lookups), available RAM permitting — effectively a transparent RAM disk, and Linux has had it for decades.

        I have my doubts that refactoring 500 scripts is an option

        The work could be done incrementally. The Pareto principle suggests that 20% of the scripts are probably used 80% of the time, with a long tail into the noise. Our questioner also mentioned that they are not all Perl scripts, so presumably it is already known that the largest contributors to server load are Perl scripts, otherwise the entire question is pointless.

        Precompiling them all into the master process would make them vulnerable to global effects in the BEGIN-phase.

        ByteLoader does not work like that. You use B::Bytecode to compile the script in advance, load ByteLoader itself in the master process, and use ByteLoader in each child to load and run the precompiled script. (Actually, the precompiled script can include use ByteLoader; so that fork or do "script.plc"; will cause ByteLoader to be correctly invoked.)
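A sketch of the load order being described (script.plc stands for a file produced in advance by B::Bytecode with -H; this assumes a perl on which ByteLoader is still installable, which is not the case for modern perls):

```perl
use strict;
use warnings;

# Master process: load ByteLoader once, before forking, so every
# child inherits it already compiled. The empty list suppresses
# the call to its import method.
use ByteLoader ();

# ... per request ...
unless (fork) {
    # Child: because the .plc file was compiled with -H, it starts
    # with its own "use ByteLoader" line, so a plain do() suffices
    # and no source compilation happens here.
    do 'script.plc' or warn $@ || $!;
    exit;
}
```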

        Global effects in the BEGIN-phase could be an issue for refactoring the scripts into modules, but addressing those issues would be part of refactoring scripts into modules. :-)
