Re^3: Perl solution for storage of large number of small files
by BrowserUk (Patriarch) on Apr 30, 2007 at 16:49 UTC ( [id://612807] )
I concur with rhesa's method. I've used this before with considerable success. I just untarred a test structure containing a million files distributed this way, using 3 levels of subdirectory to give an average of ~250 files per directory. I then ran a quick test of opening and reading 10,000 files at random and got an average time to locate, open, read and close each file of 12ms.
The files in this case are all 4k, but that doesn't affect your seek time. If you envisage needing to deal with much more than 1 million files, moving to 4 levels of hierarchy would distribute the million files at just ~15 per directory. A rough sketch of the bucketing scheme follows at the end of this node.

Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority."
In the absence of evidence, opinion is indistinguishable from prejudice.
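rhesa's scheme isn't reproduced in this node, so the following is only a minimal sketch of the general idea, assuming the buckets are taken from the leading hex digits of an MD5 of the file name (3 hex digits gives 4096 directories, ~250 files each for a million files; 4 digits gives 65536 directories, ~15 files each). The bucketed_path helper, the /data/store root, and the sample file name are hypothetical.

#!/usr/bin/perl
use strict;
use warnings;
use Digest::MD5    qw(md5_hex);
use File::Path     qw(make_path);
use File::Basename qw(dirname);

# Map a logical file name to a bucketed path: the first $levels hex
# digits of its MD5 become nested directory names under $root.
sub bucketed_path {
    my ( $root, $name, $levels ) = @_;
    $levels = 3 unless defined $levels;
    my @dirs = split //, substr( md5_hex($name), 0, $levels );
    return join '/', $root, @dirs, $name;
}

my $path = bucketed_path( '/data/store', 'some_file.dat', 3 );
# e.g. /data/store/a/7/f/some_file.dat

# Create the directory chain on demand before writing the file.
make_path( dirname($path) );

open my $fh, '>', $path or die "open '$path': $!";
print {$fh} "payload\n";
close $fh;

Because the MD5 digest is effectively uniform, files spread evenly across the buckets regardless of how similar their names are, which is what keeps the per-directory counts small and the open times flat.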
In Section: Seekers of Perl Wisdom