in reply to speeding up a file-based text search

When dealing with flat files this large, a big part of the game is managing disk head motion. If the disk seeks elsewhere, getting the head back to the right place in your file (so you can keep reading) is relatively expensive. Since you're on a box that shares the disk with lots of other processes, the read head isn't something you can control, but you can influence it.

One trick is to read in big chunks (but not so big that you risk paging, which only causes more disk activity). You're on the right track thinking about 4K chunks, but I'd try something larger, like 8K, 16K, or more. Use sysread() rather than read(), since read() goes through Perl's buffered I/O and will issue a sequence of OS-level reads when you ask for a large chunk; you can see this for yourself with ktrace (or its equivalent on your platform). By making each read large, you reduce the chance that some other process sneaks in and moves the disk head between your reads.
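
To make that concrete, here's a minimal sketch of the large-chunk sysread() loop (the filename and the 16K chunk size are placeholders; tune the chunk size to your system):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $file  = 'huge.txt';    # placeholder filename
    my $chunk = 16 * 1024;     # try 8K, 16K, or larger

    open my $fh, '<', $file or die "open $file: $!";
    binmode $fh;               # byte-oriented; sysread bypasses PerlIO buffering anyway

    my $buf;
    while (1) {
        my $got = sysread($fh, $buf, $chunk);
        die "sysread $file: $!" unless defined $got;
        last if $got == 0;     # end of file
        # ... process the $got bytes now sitting in $buf ...
    }
    close $fh;

Note that sysread() may return fewer bytes than you asked for (especially near end of file), so always work from its return value rather than assuming a full chunk.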

Matching in huge files demonstrates a trick for scanning a large file for a pattern through a sliding window, using sysread() to pull the file in 8K chunks. You might be able to adapt it to your problem.
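
The heart of that trick, if you don't want to chase the link, is to carry a small tail of each chunk over into the next scan, so a match that straddles a chunk boundary isn't lost. Here's a rough sketch in that spirit (not the node's actual code; the pattern, filename, and overlap size are all made up, and $overlap must be at least as long as the longest possible match):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $file    = 'huge.txt';      # placeholder filename
    my $pattern = qr/needle/;      # placeholder pattern
    my $chunk   = 8 * 1024;        # 8K reads, as in the node
    my $overlap = 64;              # must be >= longest possible match

    open my $fh, '<', $file or die "open $file: $!";
    binmode $fh;

    my $window = '';               # current chunk plus carried-over tail
    my $base   = 0;                # file offset of $window's first byte
    while (1) {
        my $got = sysread($fh, my $buf, $chunk);
        die "sysread $file: $!" unless defined $got;
        my $eof = ($got == 0);
        $window .= $buf unless $eof;

        # Skip matches that begin inside the tail we're about to keep;
        # they'll be found again (with full context) on the next pass.
        my $limit = $eof ? length($window) : length($window) - $overlap;
        while ($window =~ /$pattern/g) {
            my $start = $-[0];     # offset of this match within $window
            last if $start >= $limit;
            print "match at byte ", $base + $start, "\n";
        }
        last if $eof;

        # Slide the window: keep only the last $overlap bytes.
        if (length($window) > $overlap) {
            $base  += length($window) - $overlap;
            $window = substr($window, -$overlap);
        }
    }
    close $fh;

Printing byte offsets is just for demonstration; the point is that the reads stay large and sequential, while the regex still sees every boundary-spanning match.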