PerlMonks  

Re: Re: Re: 3 weeks wasted? - will threads help?

by waswas-fng (Curate)
on Jan 28, 2003 at 05:34 UTC ( [id://230508] )


in reply to Re: Re: 3 weeks wasted? - will threads help?
in thread 3 weeks wasted? - will threads help?

Why not glob, then fork and process the globbed data in the child while sleeping 2 seconds in the parent, and then loop all over again? Also, to answer your question about forking above, check out your system's man page for fork(2). I am pretty sure HPUX has used copy-on-write (only the page tables and changed memory locations are copied) since 10.x.
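Something like this, roughly -- process_files() is a hypothetical stand-in for whatever you do with the list, and the SIGCHLD line assumes your system auto-reaps ignored children, which SysV-derived systems such as HPUX do:

    #!/usr/bin/perl -w
    use strict;

    $SIG{CHLD} = 'IGNORE';   # let exited children be reaped automatically

    sub process_files {      # hypothetical worker, put the real processing here
        my @files = @_;
        print scalar(@files), " files found\n";
    }

    while (1) {
        my @files = glob("all_dir_list/*");   # stand-in for the real directories

        my $pid = fork;
        die "fork failed: $!" unless defined $pid;
        if ($pid == 0) {
            process_files(@files);   # child works on its snapshot of the list
            exit 0;                  # then dies, giving its memory back
        }
        sleep 2;                     # parent just naps and globs again
    }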

-Waswas

Re: Re: Re: Re: 3 weeks wasted? - will threads help?
by Limbic~Region (Chancellor) on Jan 28, 2003 at 08:47 UTC
    BrowserUK's solution was not to fork each globbed directory, but to create one large glob of all the directories. This is what I am claiming is not feasible.

    I admit that I thought the enormous sz from ps in each child process I forked came from its own instance of the Perl interpreter, but I never claimed that HPUX didn't use copy-on-write.

    My problem is that I have no way of profiling it - how can I tell how much memory is really being used and how much is shared?

    I have thought of a few more ways to optimize the speed and memory allocation of the original code, but that won't get rid of the overhead I showed with this simple example of a script:

    #!/usr/bin/perl -w
    use strict;

    while (1) {
        print "I am only printing and sleeping\n";
        sleep 1;
    }

    That tiny program shows up in ps with a sz comparable to my full-blown script.

    If I can't tell how much of that is shared when forking another process, I have no idea if the project is viable or if it should be scrapped.
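    For what it's worth, here is a minimal probe of that dilemma (assuming a ps that accepts the POSIX -o vsz= format option; like sz, it only reports a total virtual size, so parent and child look nearly identical even when most of the child's pages are still shared copy-on-write):

    #!/usr/bin/perl -w
    # Print the virtual size ps reports for the parent and for a freshly
    # forked child. The numbers can't tell you which pages are shared,
    # which is exactly why sz alone doesn't settle the question.
    use strict;

    sub vsz_kb {
        my $pid = shift;
        my $kb  = `ps -o vsz= -p $pid`;   # assumes POSIX-style ps -o support
        chomp $kb;
        $kb =~ s/^\s+//;
        return $kb;
    }

    print "parent vsz: ", vsz_kb($$), " KB\n";

    my $pid = fork;
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        print "child  vsz: ", vsz_kb($$), " KB\n";
        exit 0;
    }
    waitpid($pid, 0);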

    Now, your proposal is a tad different from the others: your forked children die, returning all their memory to the system, and a new one is spawned each iteration. That means the memory is MORE available to the system (during the sleep), and since all the variables will be pretty stagnant once the child is forked, its memory won't start getting dirty before it's dead. This is food for thought.

    Thanks and cheers - L~R

      Not exactly, my solution (although badly worded that late at night) was to (in pseudocode):
      while (1) {
          $totallistoffiles = glob all_dir_list/*;
          fork;
          if child {
              process file list
              exit
          }
          else {   # parent
              sleep 2
          }
      }
      The parent's task is to grab the list of files (read all 20 directories) every x seconds, fork, and create a child that processes that list -- only to sleep, loop, and do it all over in x seconds. Your only downside here is that the child has to (on average) finish faster than the parent's sleep time, otherwise you will slowly gain procs. I hope this clears it up.
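      A runnable version of that sketch might look something like this (process_files() and the all_dir_list/* pattern are hypothetical stand-ins; the waitpid loop is only there to reap finished children and keep a rough count of how many are still alive, since slowly gaining procs is the failure mode to watch for):

      #!/usr/bin/perl -w
      use strict;
      use POSIX ":sys_wait_h";   # for WNOHANG

      my $live = 0;              # children forked but not yet reaped

      sub process_files {        # hypothetical worker
          my @files = @_;
          print scalar(@files), " files at ", scalar(localtime), "\n";
      }

      while (1) {
          my @files = glob("all_dir_list/*");   # stand-in for the 20 directories

          my $pid = fork;
          die "fork failed: $!" unless defined $pid;
          if ($pid == 0) {
              process_files(@files);
              exit 0;            # child dies, handing its memory back
          }
          $live++;

          sleep 2;

          # Reap whatever has finished; if $live keeps climbing, the children
          # are running longer than the parent's sleep interval.
          $live-- while waitpid(-1, WNOHANG) > 0;
          warn "$live child(ren) still running\n" if $live > 1;
      }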

      -Waswas
