

in reply to Perl solution for storage of large number of small files

My first suggestion would be to go with the filesystem, opening up levels of subdirectories as needed; a sketch of one common way to do this follows below. (My second note would be that if this is an email storage system, there is a lot of prior code for this; check out some of the free IMAP servers.)
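A minimal sketch of the kind of layout I mean, assuming each file can be named from some key. The key, base directory, hash function, and the two-level/two-hex-character split are just illustrative choices, not anything your application requires:

    use strict;
    use warnings;
    use Digest::MD5 qw(md5_hex);
    use File::Path  qw(make_path);
    use File::Spec;

    # Map a key to a nested path like <base>/ab/cd/abcdef...,
    # so no single directory ends up with millions of entries.
    sub path_for_key {
        my ($base, $key) = @_;
        my $digest = md5_hex($key);
        my $dir = File::Spec->catdir(
            $base,
            substr($digest, 0, 2),
            substr($digest, 2, 2),
        );
        make_path($dir);    # create the subdirectory levels on demand
        return File::Spec->catfile($dir, $digest);
    }

    # Usage: write one small file under the hashed path.
    my $path = path_for_key('/var/spool/smallfiles', 'message-12345');
    open my $fh, '>', $path or die "open $path: $!";
    print {$fh} "file contents here\n";
    close $fh or die "close $path: $!";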

Note that not all filesystems are created equal. They each make different tradeoffs and some will have much better performance for this load than others.

A journalling filesystem should perform better for a write-heavy load than a non-journalling fs. Disks are fastest when you are writing (or reading) lots of sequential data, and file creation and updates will go to the (hopefully sequential) journal, which should help a lot. If you simultaneously have a heavy read load, you'll lose a lot of performance to seeking, unless you can satisfy most reads from cache (which is, for example, the case in an MTA email queue under most sensible queueing strategies).

Your measurement showing that writing larger files is quicker is surprising. Can you try an 'strace' on the two cases in question to see whether the application code is doing anything different between them?
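In case it helps to reproduce the comparison outside your application, here is a rough sketch using the core Benchmark module; the payload sizes, iteration count, and temporary directory are placeholders to adjust to your actual workload, not your real numbers:

    use strict;
    use warnings;
    use Benchmark  qw(timethese);
    use File::Temp qw(tempdir);

    my $dir   = tempdir(CLEANUP => 1);
    my $small = 'x' x 2_000;      # ~2 KB payload (assumed size)
    my $large = 'x' x 200_000;    # ~200 KB payload (assumed size)

    my ($i, $j) = (0, 0);

    # Time writing many small files against writing fewer, larger ones.
    timethese(500, {
        small_files => sub {
            my $path = "$dir/small-" . $i++;
            open my $fh, '>', $path or die "open $path: $!";
            print {$fh} $small;
            close $fh or die "close $path: $!";
        },
        large_files => sub {
            my $path = "$dir/large-" . $j++;
            open my $fh, '>', $path or die "open $path: $!";
            print {$fh} $large;
            close $fh or die "close $path: $!";
        },
    });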

Can you tell us any more about the application and its expected access patterns? It sounds interesting.