http://qs321.pair.com?node_id=612748


in reply to Re^2: Perl solution for storage of large number of small files
in thread Perl solution for storage of large number of small files

The maximum number of files on a filesystem is limited by the number of inodes allocated when you create it (see 'mke2fs' and the output of 'df -i'). You can also tweak various settings on ext2/ext3 with tune2fs.
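As a quick illustration, here is a minimal sketch of checking inode headroom from Perl before committing to a many-small-files design. It simply shells out to df and parses its output, so it assumes a Unix-like system whose df supports -i (Linux coreutils does); the path is just an example.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Report inode usage for the filesystem holding $path.
    my $path = shift || '.';
    my @df   = `df -i $path`;
    die "df -i failed" if $? != 0 || @df < 2;

    # Typical data line: /dev/sda1  6553600  123456  6430144  2%  /
    # (A very long device name may wrap onto its own line and shift the fields.)
    my (undef, $inodes, $iused, $ifree) = split ' ', $df[-1];
    printf "%s: %s of %s inodes used, %s free\n", $path, $iused, $inodes, $ifree;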

As you probably already know, written data is buffered in many places between your application and the disk. First, the perlio layer (and/or stdio in the C library) may buffer data; this is controlled by $| or by the equivalent methods in the more modern I/O modules (e.g. autoflush in IO::Handle).
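For instance, a minimal sketch of turning that buffering off (the filename is just a placeholder):

    use strict;
    use warnings;
    use IO::Handle;        # gives filehandles an autoflush() method

    open my $fh, '>', 'out.log' or die "open: $!";
    $fh->autoflush(1);     # flush the perlio buffer after every print to $fh

    $| = 1;                # same idea for the currently selected handle (usually STDOUT)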

Flushing will ensure the data is written to the kernel, but it won't ensure the kernel writes it to disk. You need the 'fsync' system call for this (and/or the 'sync' system call). You can get access to these via the File::Sync module.

Note that closing a filehandle only *flushes* it (writes out the userland buffers); it does not *sync* it (force the kernel buffers onto disk).
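Putting those two steps together, a minimal sketch (assuming File::Sync's fsync() accepts a Perl filehandle, which is what its documentation describes; the filename and data are placeholders):

    use strict;
    use warnings;
    use IO::Handle;                      # for $fh->flush
    use File::Sync qw(fsync);

    my $data = "something worth keeping\n";

    open my $fh, '>', 'important.dat' or die "open: $!";
    print {$fh} $data                 or die "print: $!";

    $fh->flush                        or die "flush: $!";   # userland buffers -> kernel
    fsync($fh)                        or die "fsync: $!";   # kernel buffers   -> disk
    close $fh                         or die "close: $!";   # close alone would only flush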

(If you're paranoid and/or writing email software, you may also want to note that syncing only guarantees that the kernel has successfully written the data to the disk. Most/all disks these days have a write buffer - there isn't a guarantee that data in that write buffer makes it onto persistent storage in the event of a power failure. You can get around this in various ways, but I'm drifting just a bit out of scope here...)

All of the above is meant to suggest an explanation for 'untie' taking a long time (flushing lots of buffered data on close), and it's also something anyone doing performance-related work on disk systems should know about. In particular, it may explain why sqlite seemed slow on your workload. For robustness, sqlite calls 'fsync' (resulting in real disk I/O) at appropriate times (i.e. when it tells you that an insert has completed).

(Looking at one of your other replies...) If you are writing a lot of data to sqlite, you'll probably want to investigate the use of transactions and/or the 'async' mode. By default, sqlite is slow-but-safe. By default, writing data to a bunch of files is quick-but-unsafe. (But both systems can operate in either mode; you just need to make the right system calls or set the right config options.)
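For example, a sketch of both knobs via DBD::SQLite (the database name, table, and records are invented for illustration):

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('dbi:SQLite:dbname=index.db', '', '',
                           { RaiseError => 1, AutoCommit => 1 });
    $dbh->do('CREATE TABLE IF NOT EXISTS files (name TEXT, data BLOB)');

    # Quick-but-unsafe: tell sqlite not to fsync after each commit.
    # $dbh->do('PRAGMA synchronous = OFF');

    my @records = ( { name => 'a.txt', data => 'aaa' } );   # placeholder data

    # Slow-but-safe inserts become fast when batched into one transaction,
    # because sqlite then syncs once per batch instead of once per row.
    $dbh->begin_work;
    my $sth = $dbh->prepare('INSERT INTO files (name, data) VALUES (?, ?)');
    $sth->execute( $_->{name}, $_->{data} ) for @records;
    $dbh->commit;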

If you're going to be doing speed comparisons between storage approaches, you need to be clear about your data-integrity requirements and then put each storage system into the mode that suits them before comparing. (You may well be doing all this already - apologies for the lengthy response if so.)

Re^4: Perl solution for storage of large number of small files
by isync (Hermit) on Apr 30, 2007 at 12:45 UTC
    Actually, thank you for the lengthy reply!

    I already learned about sqlite's async mode, but was too lazy to recompile it and just switched the design to in-memory (sqlite was used only for the index part - I am not such a big fan of binary data in databases yet...)
    Pooling updates/writes (as in your transactions hint) was planned to streamline sqlite, but I pulled the plug on this when I opted for the in-memory approach.

    Thanks for all your help guys! Until I need to handle more than 25,000,000 files, plain fs will do (without re-inventing the wheel..)
      You're very welcome.

      For completeness, I should mention that modern versions of sqlite can be put into and out of synchronous mode with a pragma, rather than recompilation.
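      For example, via DBD::SQLite (a sketch; the pragma values are documented in the sqlite manual):

          $dbh->do('PRAGMA synchronous = OFF');    # skip the per-commit fsync: quick-but-unsafe
          $dbh->do('PRAGMA synchronous = FULL');   # full durability again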

      (I've been very impressed with sqlite. It has limitations, but the docs are up-front about them; it is so easy to get started with, and it feels very robust to me.)