http://qs321.pair.com?node_id=612724


in reply to Perl solution for storage of large number of small files

You can try using another database backend, for instance DBI + DBD::SQLite.
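
A minimal sketch of that approach, in case it's useful - assuming DBD::SQLite is installed; the database file, table layout and key below are invented for illustration:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI qw(:sql_types);

    # Open (or create) the SQLite database file.
    my $dbh = DBI->connect('dbi:SQLite:dbname=files.db', '', '',
                           { RaiseError => 1, AutoCommit => 1 });

    # One row per small file: a key plus the raw content as a BLOB.
    $dbh->do(q{
        CREATE TABLE IF NOT EXISTS files (
            name TEXT PRIMARY KEY,
            data BLOB
        )
    });

    # Slurp one small file in binary mode...
    my $content = do {
        open my $fh, '<:raw', 'example.bin' or die "open: $!";
        local $/;
        <$fh>;
    };

    # ...and store it, binding the value explicitly as a BLOB.
    my $sth = $dbh->prepare(
        'INSERT OR REPLACE INTO files (name, data) VALUES (?, ?)');
    $sth->bind_param(1, 'example.bin');
    $sth->bind_param(2, $content, SQL_BLOB);
    $sth->execute;

    # Fetch it back by key.
    my ($data) = $dbh->selectrow_array(
        'SELECT data FROM files WHERE name = ?', undef, 'example.bin');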

Re^2: Perl solution for storage of large number of small files
by isync (Hermit) on Apr 30, 2007 at 10:01 UTC
    Been there, done that - actually for the meta-data index of the heavy-load storage...
    The first incarnation was an MLDBM file. The second version used SQLite, with which I ran into heavy disk I/O overhead when inserting/updating meta-data. Now the index is an in-memory plain data structure...

    So, do you actually recommend SQLite as storage for binary data?
      So, do you actually recommend SQLite as storage for binary data?

      Well, I neither recommend nor disrecommend it. I was only suggesting that you try another backend!

      Which database is best for a given problem depends not only on the data structures but also on the usage pattern.

      Anyway, if you need to access 2GB of data randomly, there is probably nothing you can do to stop disk thrashing other than adding more RAM to your machine, so that all the disk sectors used for the database remain cached.

        Hi isync and salva, interesting topic.

        Anyway, if you need to access 2GB of data randomly, there is probably nothing you can do to stop disk thrashing other than adding more RAM to your machine, so that all the disk sectors used for the database remain cached.

        In this situation - more data than memory, but not loads more - I've found that memory mapping works well. In my case the data accesses were randomly scattered but with a non-uniform distribution - that is, although the access wasn't sequential, some data was accessed more often than other data. So memory mapping meant that the often-accessed data stayed cached in RAM.
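
        For what it's worth, here's a minimal sketch of that from Perl using the CPAN module File::Map (the file name, offset and length are made-up example values):

            use strict;
            use warnings;
            use File::Map 'map_file';

            # Map the data file into a scalar. Nothing is read up front:
            # pages are faulted in on access and kept in the OS page cache,
            # so the often-accessed regions stay in RAM.
            map_file my $map, 'data.bin';

            # Random access: substr only touches the pages it needs.
            my ($offset, $length) = (1_048_576, 4096);    # example values
            my $record = substr $map, $offset, $length;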

        Any decent database should be able to do pretty much the same thing - as long as you configure it with a big query cache - although disk access will be slower than for memory mapping.
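
        With SQLite, for example, the page cache can be enlarged via a pragma on the $dbh handle from the sketch above (the figure is an arbitrary example; a negative value means an amount in KiB rather than a page count):

            # Keep roughly 512MB of database pages cached in memory.
            $dbh->do('PRAGMA cache_size = -524288');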

        The real problem comes if you're making a lot of changes to the data, which busts your cache...

        Best wishes, andye