Re^2: Wait for individual sub processes

by crackerjack.tej (Novice)
on Apr 25, 2015 at 10:19 UTC


in reply to Re: Wait for individual sub processes
in thread Wait for individual sub processes [SOLVED]

Sod's Law guarantees that it will always be the first, middle and last chunks of the files that take the longest, so you'll still end up with 13 cpus standing idle while those 3 run on for hours trying to catch up.

As far as I can see, the speed of the chunks is totally random. Sometimes, it is only one part (let's say part 3 of 16) that keeps running whereas the remaining 15 CPUs are idle. The reason I don't want to split the files further is because I have to deal with enough files already, and I would rather avoid the confusion. Also, each process starts by loading a huge file (about 10GB) into memory, which takes some time in itself, and I would like to minimize that time as well. So I am not sure if I can follow this path.


Re^3: Wait for individual sub processes
by BrowserUk (Patriarch) on Apr 25, 2015 at 11:03 UTC
    Sometimes, it is only one part (let's say part 3 of 16) that keeps running whereas the remaining 15 CPUs are idle

    My numbers were only by way of example.

    Let's say your chunk 3 takes an hour whereas the other 15 chunks take 5 minutes each. Your overall processing time is 1 hour, with 15*55 minutes = 13.75 hours of wasted, idle processor time by the time you complete.

    Now let's say that you split the 16 chunks into 16 bits for a total of 256 bits; and assume that the cost of processing those smaller bits is 1/16th of the time of the larger chunks.

    You now have 240 bits that take 0.3125 minutes each; and 16 bits that take 3.75 minutes each.

    1. The 16 bits of chunk 1 are processed in parallel in 0.3125 minutes.
    2. The 16 bits of chunk 2 are processed in parallel in 0.3125 minutes.
    3. The 16 bits of chunk 3 are processed in parallel in 3.75 minutes.
    4. The 16 bits of chunks 4 through 16 are processed in parallel in 0.3125 minutes each.

    Total processing time is 15*0.3125 + 1*3.75 = 8.4375 minutes, with zero wasted CPU time. You've saved about 85% of your runtime and fully utilised the cluster.

    It won't split as nicely and evenly as that, but the smaller the bits you process, the more evenly the processing will be divided between the processors, no matter how the slow bits are (randomly) distributed through the file.
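
    A minimal sketch of that finer-grained dispatch, assuming Parallel::ForkManager for the worker pool and a hypothetical process_chunk() sub that does the real work on one small piece; with far more chunks than workers, a slow chunk ties up only one CPU while the rest keep pulling new chunks:

        use strict;
        use warnings;
        use Parallel::ForkManager;

        my @chunks = glob 'input.part*';              # e.g. 256 small pieces instead of 16
        my $pm     = Parallel::ForkManager->new(16);  # one worker per CPU

        for my $chunk (@chunks) {
            $pm->start and next;      # parent: schedule the next chunk immediately
            process_chunk($chunk);    # child: do the work (hypothetical sub)
            $pm->finish;              # child exits, freeing a slot for another chunk
        }
        $pm->wait_all_children;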

    The reason I don't want to split the files further is because I have to deal with enough files already, and I would rather avoid the confusion.

    More seriously, the simple way to avoid the too-many-files syndrome is: don't create many files. Instead of having each process write a separate output file, have them all write to a single output file.

    I know, I know. You're thinking that the output file will get all mixed up and that it involves a mess of locking to prevent corruption; but it doesn't have to be that way!

    Instead of writing variable length output lines (records) sequentially, you write fixed length records using random access. (Bear with me!)

    Let's say you normally write a variable length output line, of between 30 and 120 characters, for each input record. Instead, you round that up (and pad with spaces) to the largest possible output record (say 128 bytes), and then seek to position input_record_number * 128 and write this fixed-length, space-padded record.

    Because each input record is only ever read by one process, each output position will only ever be written by one process, so you don't need locking. And as each output record has a known (calculated) fixed position in the file, it doesn't matter in what order, or by which processor, they are written.
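
    In code terms, the writing side might look like this sketch; the 128-byte record length, the output filename, and the write_record() helper are illustrative assumptions:

        use strict;
        use warnings;

        my $RECLEN = 128;    # assumed fixed record length (largest line, rounded up)

        # shared output file, pre-created and opened read-write by every worker
        open my $out, '+<', 'combined.out' or die "open: $!";

        # $recno is the input record number, $line its variable-length result
        sub write_record {
            my ( $recno, $line ) = @_;
            my $padded = sprintf "%-*s\n", $RECLEN - 1, $line;   # space-pad to 127 chars plus newline
            seek $out, $recno * $RECLEN, 0 or die "seek: $!";    # jump to this record's fixed slot
            print {$out} $padded           or die "print: $!";
        }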

    Once the input file is processed and the output file completely written, you run a simple, line-oriented filter program on the output file that reads each line, trims off any trailing space-padding, and writes the result to a new output file. You then delete the padded one. You end up with your required variable-length record output file, all in the correct order.
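
    The final trimming pass could be as simple as this sketch (filenames assumed):

        use strict;
        use warnings;

        open my $in,  '<', 'combined.out' or die "open: $!";
        open my $new, '>', 'final.out'    or die "open: $!";

        while ( my $rec = <$in> ) {
            $rec =~ s/\s+$//;             # strip the trailing space padding (and newline)
            print {$new} $rec, "\n";      # write the variable-length line back out
        }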

    This final processing is far more efficient than merging many small output files together, and the whole too-many-small-files problem simply disappears.

    Your call.


    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority". I'm with torvalds on this
    In the absence of evidence, opinion is indistinguishable from prejudice. Agile (and TDD) debunked

      That makes sense. I understand what you mean: the discrepancies will be much smaller with smaller inputs. In fact, I have seen that happen with my tool too. Considering I'm using Parallel::ForkManager now (please see Update 2 of my original post), I should be able to implement this idea relatively easily.

      But I won't be able to modify the output of the tool. As I mentioned in my other post, I cannot edit this tool in any way. So I guess I will still have to deal with multiple files, which is not too much of a problem, though. I can use a specific prefix for the output files generated by a single input and merge them later. Not the best way, but certainly an easier one (given the noob I am).

      Will update how it goes. Thanks a lot for the detailed explanation.
