http://qs321.pair.com?node_id=612707

isync has asked for the wisdom of the Perl Monks concerning the following question:

I need to store a LOT of small files, around 40K each.

To circumvent filesystem limits on the number of files, and to keep everything manageable and backup-able, I decided on a set of tied DB_File hashes as a kind of pseudo-filesystem, each DB file holding thousands of the small files as records.
Currently I use 16 hashes and store files into these buckets round-robin. But with about 100,000 files spread over the 16 buckets (roughly 50MB each), I run into speed problems. (DB_File seems practical only up to about 5,000 records after all; insert times grow as the files grow, from about 0.04 seconds each under good conditions and upwards.)
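Roughly, my setup looks like the sketch below (the bucket file names are just for illustration, and for brevity I only show the insert path, which is where it gets slow):

    use strict;
    use warnings;
    use DB_File;

    # Tie 16 DB_File hashes as buckets (file names are illustrative).
    my $n_buckets = 16;
    my @buckets;
    for my $i (0 .. $n_buckets - 1) {
        my %h;
        tie %h, 'DB_File', "bucket_$i.db", O_RDWR|O_CREAT, 0644, $DB_HASH
            or die "Cannot tie bucket_$i.db: $!";
        push @buckets, \%h;
    }

    # Round-robin the incoming files over the buckets.
    my $next = 0;
    sub store_file {
        my ($name, $content) = @_;
        $buckets[$next]{$name} = $content;   # one 40K record per key
        $next = ($next + 1) % $n_buckets;
    }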

Does anyone know of a fast Perl solution, a module or a procedure, to manage 1,000,000+ files in a relatively small set of data buckets? Or is the large quantity of files no problem as long as I sort them into enough subdirectories (am I fighting demons here..)? A sketch of the kind of fan-out I mean follows below.
I would use a single archive file, like a .gz, but I need to be able to delete individual files from the archive and reclaim the freed space...
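For the subdirectory idea, I mean something like this: hash the name and fan out over two directory levels so no single directory holds too many entries (base path and the 2-char/2-char split are just assumptions for the example):

    use strict;
    use warnings;
    use Digest::MD5 qw(md5_hex);
    use File::Path qw(make_path);
    use File::Spec;

    # Map a logical file name to e.g. "ab/cd/<md5>", giving 256*256
    # directories, so 1,000,000 files means only ~15 entries per directory.
    sub path_for {
        my ($base, $name) = @_;
        my $hex = md5_hex($name);
        my $dir = File::Spec->catdir($base, substr($hex, 0, 2), substr($hex, 2, 2));
        make_path($dir) unless -d $dir;
        return File::Spec->catfile($dir, $hex);
    }

    # Example usage (base directory is made up):
    my $path = path_for('/var/spool/smallfiles', 'some-document-0001');
    open my $fh, '>', $path or die "Cannot write $path: $!";
    print {$fh} 'x' x 40_000;    # stand-in for the real 40K payload
    close $fh;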

Another thought:
Tests show that 0.04 seconds per insert seems to be the barrier the filesystem imposes for writing to disk, the same as writing a ~40K file directly. Larger files seem to write faster (~200K in 0.02 sec). Does this ring a bell for anyone and point to a solution?
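This is roughly how I measured it (file names and sizes are illustrative, the 0.04s/0.02s figures are from my box and will vary; note this times buffered writes, not a sync to disk):

    use strict;
    use warnings;
    use Time::HiRes qw(gettimeofday tv_interval);

    # Time N plain-file writes of a given size, return the average per write.
    sub time_writes {
        my ($size, $count) = @_;
        my $payload = 'x' x $size;
        my $t0 = [gettimeofday];
        for my $i (1 .. $count) {
            open my $fh, '>', "bench_${size}_$i.tmp" or die "open: $!";
            print {$fh} $payload;
            close $fh;
        }
        my $elapsed = tv_interval($t0);
        unlink glob "bench_${size}_*.tmp";
        return $elapsed / $count;
    }

    printf "~40K write:  %.4f s each\n", time_writes(40_000,  100);
    printf "~200K write: %.4f s each\n", time_writes(200_000, 100);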