in reply to When is a flat file DB not enough?

Berkeley DB has been mentioned a few times here. I think that this is probably the best idea in your situation (though I would strongly encourage migrating to a good sturdy DBMS if you are sorting that much data). For just a couple hundred records you probably won't notice much difference, but if this sucker is getting big... Well...

Of course, you should use a database hash (i.e., tie a hash to the database file with DB_File), and that will sort things out nicely.
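A minimal sketch of what I mean (the file name and keys here are made up for the example):

```perl
use strict;
use warnings;
use DB_File;
use Fcntl;    # for O_CREAT and O_RDWR

# Tie a hash to a Berkeley DB file -- lookups and stores go to disk,
# but the code reads like ordinary hash access.
tie my %db, 'DB_File', 'users.db', O_CREAT|O_RDWR, 0644, $DB_HASH
    or die "Cannot tie users.db: $!";

$db{'jdoe'} = 'John Doe|jdoe@example.com';   # store a record
print $db{'jdoe'}, "\n";                     # fetch it back

untie %db;
```

The nice part is that existing code written against a plain hash barely has to change.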

Another thought is that you can breathe a LOT of life into flat files with a few simple methods.
One simple one is to use your filesystem to provide some of the services of a DBMS (e.g., directories named after user numbers), but I don't really recommend that.
Another idea is to put some forethought into your file format. You can put tables at the front to index the data, run simple hashes over files to speed up searching, use tree implementations, and such. An important fact is that you don't actually need to visit each record in order to search a file. If the data is ordered, you can search positionally (kind of like the number game where the computer says "higher" or "lower"). This takes your search time from linear to logarithmic: a binary search (or B-tree lookup) over n sorted records needs on the order of log2(n) visits, and through the judicious use of file pointers (seek), this can be applied any which way you like. The simple fact of the matter, though, is that implementing such a system by hand is rather nerve-wracking (there is probably a pm that does something to this effect; DB_File's $DB_BTREE flavour, for one, gives you a B-tree for free).
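Here is a sketch of that positional search, assuming a file of sorted, fixed-width records (the record layout and file name are invented for the example):

```perl
use strict;
use warnings;

# Binary search over a file of sorted, fixed-width records.
# Each record here is a 16-byte key padded with spaces, plus "\n".
my $RECLEN = 17;

sub find_record {
    my ($fh, $want) = @_;
    my $nrecs = (-s $fh) / $RECLEN;
    my ($lo, $hi) = (0, $nrecs - 1);
    while ($lo <= $hi) {
        my $mid = int(($lo + $hi) / 2);
        seek $fh, $mid * $RECLEN, 0 or die "seek: $!";
        read $fh, my $rec, $RECLEN;
        my ($key) = $rec =~ /^(\S+)/;              # strip the padding
        if    ($key lt $want) { $lo = $mid + 1 }   # "higher"
        elsif ($key gt $want) { $hi = $mid - 1 }   # "lower"
        else                  { return $mid }      # found it
    }
    return -1;    # not found
}

# Build a small sorted sample file, then search it.
open my $out, '>', 'sorted.dat' or die $!;
printf $out "%-16s\n", $_ for qw(alpha bravo charlie delta echo);
close $out;

open my $fh, '<', 'sorted.dat' or die $!;
print find_record($fh, 'charlie'), "\n";   # record index 2
close $fh;
unlink 'sorted.dat';
```

Each probe seeks straight to a record's byte offset, so five records or five million, you only ever touch a handful of them.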

I personally would try splitting your file on some criterion into smaller files, which can be searched linearly, and hand it off to IT for a proper DBMS if access time gets to be a problem.
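The split could be as simple as bucketing records on the first character of the key (the file names and colon-delimited layout here are my own invention):

```perl
use strict;
use warnings;

# Split one big colon-delimited flat file into per-bucket files, keyed
# on the first character of each record's key field.  A later lookup
# then only has to scan the one (much smaller) bucket file.
sub split_into_buckets {
    my ($bigfile) = @_;
    my %bucket_fh;
    open my $in, '<', $bigfile or die "$bigfile: $!";
    while (my $line = <$in>) {
        my ($key) = split /:/, $line, 2;
        my $b = lc substr($key, 0, 1);
        $b = '_' unless $b =~ /^[a-z0-9]$/;    # oddballs go to '_'
        my $fh = $bucket_fh{$b} //= do {
            open my $o, '>', "bucket_$b.dat" or die "bucket_$b.dat: $!";
            $o;
        };
        print {$fh} $line;
    }
    close $_ for values %bucket_fh;
    return sort keys %bucket_fh;
}

# Tiny demo: three records land in two bucket files.
open my $demo, '>', 'big.dat' or die $!;
print $demo "adams:Ann Adams\nbaker:Bob Baker\navery:Al Avery\n";
close $demo;
print join(',', split_into_buckets('big.dat')), "\n";   # a,b
unlink 'big.dat', glob 'bucket_*.dat';
```

With 26-odd buckets of roughly even size, a flat scan only has to wade through a twenty-sixth of the data, which buys a lot of time before you need the real DBMS.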

Just Another Perl Backpacker