Re^5: greater efficiency required (ls, glob, or readdir?)

by shmem (Chancellor)
on Aug 27, 2008 at 20:55 UTC


in reply to Re^4: greater efficiency required (ls, glob, or readdir?)
in thread greater efficiency required (ls, glob, or readdir?)

Recently, I had to remove a directory containing 2.75 million files (some PHP debug blunder) in a vserver subdirectory of a machine that was already running under heavy I/O load. None of

    rm -r $dir
    find $dir -exec rm {} \;
    ls $dir | xargs rm

was an option, since each would hog I/O, and the resulting delay for productive tasks was unacceptable. Buffer cache was not the problem: plenty of memory was always available, so the directory's large chain of multiple indirect blocks could be held in memory and each batch of entries returned by getdents(2) processed as it was delivered. Not so with ls, find et al., since those hogged memory as well and invalidated parts of the file system buffer cache while reading all the entries up front.

Using perl with a readdir() loop, select() and unlink() solved that. My point is that readdir() gives you finer control than shelling out to ls.
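
For concreteness, here is a minimal sketch of such a loop. This is not the original code from the post; it assumes select() refers to the four-argument form used as a sub-second sleep to throttle I/O between unlink() calls, and the directory path is hypothetical.

    #!/usr/bin/perl
    # Sketch: delete a huge directory one entry at a time, throttled.
    # On Linux, readdir() is backed by getdents(2), so entries are
    # consumed in batches as the kernel delivers them, without first
    # building a multi-million-element list the way ls or find would.
    use strict;
    use warnings;

    my $dir = '/var/www/debug_output';   # hypothetical path

    opendir my $dh, $dir or die "opendir $dir: $!";
    while (defined(my $entry = readdir $dh)) {
        next if $entry eq '.' or $entry eq '..';
        unlink "$dir/$entry" or warn "unlink $dir/$entry: $!";
        # Four-argument select() sleeps for a fraction of a second,
        # yielding I/O bandwidth back to productive tasks.
        select(undef, undef, undef, 0.01);
    }
    closedir $dh;
    rmdir $dir or warn "rmdir $dir: $!";

The per-file sleep is the knob: stretch it and the deletion takes longer but stays invisible under load; shrink it and you approach plain rm behavior. That tunability is exactly the finer control a shell one-liner does not give you.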
