Re: Performance Trap - Opening/Closing Files Inside a Loop

by tmoertel (Chaplain)
on Dec 10, 2004 at 06:51 UTC


in reply to Performance Trap - Opening/Closing Files Inside a Loop

Limbic~Region axed:
Leaving Java aside, is there a more run-time efficient way than my second suggestion in Perl?
Probably. (But in order to answer your question with confidence, I would need to know more about the OS and filesystem that you are using, the input size, and the distribution of files that must be opened for writing. Lacking that information, here is my best guess.)

Assuming sufficiently small input size, we can load the entire input into RAM and build an optimal write plan before attempting further I/O. The plan's goal would be to minimize disk seek time, which is likely the dominant run-time factor under our control. An optimal strategy would probably be to open one file at a time, write all of its lines, close it, and then move on to the next file. If input size is larger than RAM, the speediest approach would then be to divide the input into RAM-sized partitions and process each according to its own optimal write plan.
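
For concreteness, here is a minimal sketch of that one-file-at-a-time plan, assuming the input fits in RAM; the tab-separated "filename, data" input format is my assumption, since the original problem statement doesn't give one:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Load the entire input into RAM, grouping lines by target file.
    my %plan;
    while ( my $line = <STDIN> ) {
        chomp $line;
        my ( $file, $data ) = split /\t/, $line, 2;
        push @{ $plan{$file} }, $data;
    }

    # Execute the plan: open one file at a time, write all of its
    # lines, and close it, so the disk head writes instead of seeks.
    for my $file ( sort keys %plan ) {
        open my $fh, '>', $file or die "Can't open $file: $!";
        print $fh map { "$_\n" } @{ $plan{$file} };
        close $fh or die "Can't close $file: $!";
    }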

Caching the output filehandles (as in your second implementation) probably will not be competitive. Even if you can hold all of the output files open simultaneously, a write pattern that jumps among files seemingly at random will probably kill you with seek time. Your OS will do its best to combine writes and reduce head movement with elevator (and better) algorithms, but you'll still pay a heavy price. You'll do much better if you can keep the disk head writing instead of seeking.

If it turns out that the number of distinct files to be created is nearly the same as the number of input lines, no strategy is likely to improve performance significantly over the naive strategy of opening and closing files as you walk line by line through the input.

One more thing. If the input that Mr. Java tested your program against was millions of lines long, does that imply that your code may have been creating thousands of files? If so, you might want to determine whether the filesystem you were using has a hashed or tree-based directory implementation. If not, your run time may have been dominated by filesystem overhead. Many filesystems (e.g., ext2/3) bog down once you start getting more than a hundred or so entries in a directory.

Cheers,
Tom


Re^2: Performance Trap - Opening/Closing Files Inside a Loop
by EverLast (Scribe) on Dec 10, 2004 at 12:16 UTC
    ... Many filesystems (e.g., ext2/3) bog down once you start getting more than a hundred or so entries in a directory.

    Actually, I have found VFAT to pale in comparison to ext3 and even ext2. ReiserFS should be even better, I've heard. YMMV, of course, depending on RAM, processor(s), and so on.

    Update:

    A well-known approach to this 'many files' problem is to create an n-level directory structure based on the filenames. File abc goes into a/b/abc, def goes into d/e/def, etc. (for n=2). The filenames are typically randomly generated; if they're not, you can apply some transformation to the name to derive the directory path. Reportedly, ReiserFS does this internally.
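
    A rough sketch of that fan-out scheme in Perl (the helper name, and using leading characters rather than a hash of the name, are my own choices):

        use strict;
        use warnings;
        use File::Basename qw(dirname);
        use File::Path;    # exports mkpath() by default

        # Map a name to an n-level path built from its leading
        # characters: fanout_path('abc', 2) gives "a/b/abc".
        # Assumes names are at least $levels characters long.
        sub fanout_path {
            my ( $name, $levels ) = @_;
            my $dirs = join '/',
                map { substr( $name, $_, 1 ) } 0 .. $levels - 1;
            return "$dirs/$name";
        }

        my $path = fanout_path( 'abc', 2 );    # "a/b/abc"
        mkpath( dirname($path) );              # create "a/b" as needed
        open my $fh, '>', $path or die "Can't open $path: $!";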

    ---Lars

      CPAN does something similar. For example, my uploads are placed in http://www.cpan.org/authors/id/R/RK/RKINYON/. Only the first few authors are actually in the authors directory. The rest of us are in the id directory. :-)

      Being right, does not endow the right to be rude; politeness costs nothing.
      Being unknowing, is not the same as being stupid.
      Expressing a contrary opinion, whether to the individual or the group, is more often a sign of deeper thought than of cantankerous belligerence.
      Do not mistake your goals as the only goals; your opinion as the only opinion; your confidence as correctness. Saying you know better is not the same as explaining you know better.

Re^2: Performance Trap - Opening/Closing Files Inside a Loop
by tachyon (Chancellor) on Dec 10, 2004 at 08:30 UTC

    We found that ext3 with the 2.4.x Linux kernel was reasonably happy with 10,000 files in a directory but obviously unhappy with 1 million. By reasonably happy I mean that other bottlenecks dominated affairs. I would be interested if anyone has done a study, a little more precise than that, on the relationship between the number of files per directory and access time for different filesystems.

    cheers

    tachyon

      Out of curiosity, did you test with the 'dir_index' feature flag set? It allows the filesystem to use hashed b-trees for lookups in large directories.

      mhoward - at - hattmoward.org
Re^2: Performance Trap - Opening/Closing Files Inside a Loop
by iburrell (Chaplain) on Dec 10, 2004 at 18:03 UTC
    There is something to be said for letting the I/O system and the OS handle the buffering and writes. Every filehandle has a write buffer that is only written out when it fills. One way to reduce seek time is to increase the buffer size for the opened files. The advantage is that Perl decides when to write the buffers, the OS decides when to write them to disk, and both are pretty good at this.

    The big advantage of caching filehandles is that the open files can hold output in their buffers until the buffers are full. If the files are continually being closed and reopened, each line is written out individually.
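
    As a middle ground, one could buffer lines per file in application memory and flush each file's buffer in a single open-append-close pass once it grows past a threshold. A sketch (the threshold and helper names are arbitrary):

        use strict;
        use warnings;

        my $FLUSH_AT = 1000;    # lines to accumulate per file
        my %buffer;

        sub buffered_print {
            my ( $file, $line ) = @_;
            push @{ $buffer{$file} }, $line;
            flush_file($file) if @{ $buffer{$file} } >= $FLUSH_AT;
        }

        sub flush_file {
            my ($file) = @_;
            return unless $buffer{$file} && @{ $buffer{$file} };
            open my $fh, '>>', $file or die "Can't append to $file: $!";
            print $fh @{ $buffer{$file} };
            close $fh or die "Can't close $file: $!";
            @{ $buffer{$file} } = ();
        }

        # Don't forget whatever is left over at the end of the run:
        sub flush_all { flush_file($_) for keys %buffer }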

    What is needed is some way to keep a limited number of filehandles open so that we never hit the limit. An LRU cache would be perfect. I see a couple of modules that implement this, and reimplementing one would be pretty easy.
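
    For what it's worth, the core FileCache module does something along these lines. A hand-rolled sketch (the $MAX_OPEN limit and the tick-based eviction are my own choices):

        use strict;
        use warnings;

        my $MAX_OPEN = 100;    # stay well under the per-process fd limit
        my %fh;                # file => open filehandle
        my %last_used;         # file => access tick, for LRU eviction
        my $tick = 0;

        # Return an append-mode filehandle for $file, evicting the
        # least recently used handle when the cache is full.
        sub get_fh {
            my ($file) = @_;
            unless ( $fh{$file} ) {
                if ( keys %fh >= $MAX_OPEN ) {
                    my ($lru) = sort { $last_used{$a} <=> $last_used{$b} }
                        keys %fh;
                    my $old = delete $fh{$lru};
                    delete $last_used{$lru};
                    close $old or die "Can't close $lru: $!";
                }
                open $fh{$file}, '>>', $file
                    or die "Can't open $file: $!";
            }
            $last_used{$file} = ++$tick;
            return $fh{$file};
        }

        # Usage: print { get_fh($filename) } $line;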
