So the best would be to always have a sleeping Perl process with the most frequently used modules loaded with use? :)
#!/usr/bin/perl
use ....;
use ....;
use ....;
while(1){
    sleep(60);   # just keep the process (and its loaded modules) resident
}
| [reply] [d/l] |
That's basically what these "office quickstarter" thingies do, but it's not a good idea generally. If you have enough RAM, the stuff you want to load quickly will likely be in the buffer cache anyway. If you don't, forcing it to stay resident will just slow down other things.
| [reply] |
That depends on what the module is... (and what you mean by "best")...
In your example above, you 'use' all the modules, which reads them in initially, but then you just sleep in a while loop, never touching those modules.
Perl code isn't like a binary module, since it can be executed, modified, and executed again.
It's possible, but I very much doubt that all of the read-only text parts of a module are stored in one area where they could be mapped to a read-only, copy-on-write memory segment.
I'd say your best and maybe easiest bet for keeping module copies in memory is to create a dir in /dev/shm/. (I once needed a space to store some tmp info that I could examine later -- the shm usage was eventually removed and I switched to straight pipes, but I had a master process that forked off 'n' copies of itself to do queries on a large file list in rpm. I wanted to let them all dump their results into tmp files and exit when done, so the parent wouldn't have to multiplex the streams, which would have created contention in my code, with the parent and children contending for the lock. So instead, I created a tmpdir in /dev/shm for the tmp files -- no memory contention... and the great thing was I could examine all of the intermediate results!)
So -- if you REALLY need to keep something in memory -- put a tmp dir in there and create a perl lib tree with your needed modules. On my machine, /usr/lib/perl5 -- ALL OF IT (vendor, site, and a few archived previous releases) -- only takes up ~446M. That's less than 0.5G; on a modern machine, that's not a major dent. It all depends on how important it is to keep things in memory!
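Something along these lines (a minimal sketch only -- /dev/shm/perllib and MyModule.pm are made-up names, and a real setup would copy a whole lib tree rather than one file):

#!/usr/bin/perl
# Sketch: stage a module under a tmpfs-backed lib tree and load it from there.
use strict;
use warnings;
use File::Path qw(make_path);
use File::Copy qw(copy);

my $ramlib = '/dev/shm/perllib';
make_path($ramlib);                                   # no-op if it already exists
copy('/usr/lib/perl5/MyModule.pm', "$ramlib/MyModule.pm")
    or die "copy failed: $!";

# A script that should pick up the RAM-resident copy would then start with:
#   use lib '/dev/shm/perllib';
#   use MyModule;

Whether that buys you anything over the normal page cache is another question (as noted above), but it does make the copy easy to inspect and easy to drop when you're done.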
| [reply] |
No. If a process running the static binary is already running, the binary will not be loaded from disk again; the same physical memory will simply be mapped into the new process's virtual address space. The cost is the time to set up a few MMU tables, and you're ready.
Um... no... only the R/O sections. Programs have initialized R/W spaces that, once written, are gone. There'd be no reason to mark those sections copy-on-write unless someone were already sharing the page (like a forked copy). And any unrelated process likely wouldn't want a COW copy anyway, since it needs the pristine data as it was when the program loaded.
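For the Perl case, that forked-copy situation is the one you can actually exploit: load the modules once in a parent, then fork your workers. A rough sketch (Data::Dumper is just a stand-in for whatever you'd really preload):

#!/usr/bin/perl
use strict;
use warnings;
use Data::Dumper;                 # loaded and compiled once, before the fork

for my $n (1 .. 3) {
    defined(my $pid = fork()) or die "fork failed: $!";
    if ($pid == 0) {
        # child: starts with the parent's pages mapped copy-on-write,
        # so the already-compiled module is not re-read or re-compiled
        print "child $$: ", Dumper({ worker => $n });
        exit 0;
    }
}
wait() for 1 .. 3;                # parent reaps its children

Pages only get copied when a child actually writes to them, so the compiled module code stays shared for as long as nobody modifies it.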
| [reply] |
"Um... no... only the R/O sections. [...]"
Sure, any overview of virtual memory at this length is bound to be oversimplified in some places. But we can safely ignore this detail, as it doesn't differ between dynamically and statically linked programs.
| [reply] |