http://qs321.pair.com?node_id=435392


in reply to Re: Displaying/buffering huge text files
in thread Displaying/buffering huge text files

In fact, you can stat the file to get back the filesystem's preferred block size, make your index a mapping from line ranges to block numbers, then seek to block * block_size and skip lines from there.
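Roughly like this (untested, and all the names are illustrative; @nl_before is an index holding, for each block boundary, the number of newlines that occur before offset $block * $blksize - the sketches further down show one way to build it):

    use strict;
    use warnings;

    # e.g. my $blksize = (stat $fh)[11] || 4096;   # slot 11: preferred I/O block size

    # Fetch one line (0-based $want) by seeking to the deepest useful
    # block boundary and skipping lines from there.
    sub fetch_line {
        my ($fh, $blksize, $nl_before, $want) = @_;

        # deepest block whose first complete line is still at or before $want
        my $block = 0;
        for my $b (1 .. $#$nl_before) {
            last if $nl_before->[$b] + 1 > $want;   # the off-by-one lives here
            $block = $b;
        }

        seek $fh, $block * $blksize, 0 or die "seek: $!";
        my $line = 0;
        if ($block) {
            my $partial = <$fh>;    # finish the line cut by the block boundary
            $line = $nl_before->[$block] + 1;
        }

        # skip forward line by line to the one we actually want
        while ($line < $want) {
            defined(my $skip = <$fh>) or return undef;   # past EOF
            $line++;
        }
        return scalar <$fh>;
    }

A binary search over the index would be the obvious refinement once it gets big, but a linear scan shows the idea.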

Skipping even 500 lines is nearly instantaneous, probably less than a screen redraw, and I'd guess that's about as many lines as you can expect to fit in a 4k block, which is pretty much the standard size.

The advantage is that (if you're careful about the off-by-ones) you can minimize disk access so cleanly that after several seeks the relevant blocks will all be sitting in cached pages, and subsequent reads will be cheap.

As for implementation, Event has good I/O handling, idle callbacks, and Tk integration; it could be used to index the file incrementally, without threading, if threading seems scary.
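Something along these lines (untested; it assumes Event's idle watcher takes cb and repeat arguments and can be cancelled - check the Event docs, my memory of the exact interface is fuzzy):

    use strict;
    use warnings;
    use Event;

    my $path = 'huge.log';                      # illustrative
    open my $fh, '<', $path or die "open $path: $!";
    my $blksize = (stat $fh)[11] || 4096;

    my @nl_before = (0);    # newlines before each block boundary
    my $sum       = 0;

    my $indexer;
    $indexer = Event->idle(
        repeat => 1,
        cb     => sub {
            # one block per callback keeps the GUI responsive
            my $n = sysread($fh, my $buf, $blksize);
            if (!$n) {                  # EOF (or error): stop indexing
                $indexer->cancel;
                return;
            }
            $sum += ($buf =~ tr/\n//);
            push @nl_before, $sum if $n == $blksize;
        },
    );

    Event::loop();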

Update: I suddenly remembered a snippet in one of the perlfaqs that uses tr to count the number of lines in a file efficiently. It reads in 4k increments. That should make indexing very quick: keep a running sum of newlines, update it for every block, and record the intermediate totals somewhere.
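The plain, non-incremental version, adapted straight from that faq snippet (untested; same illustrative @nl_before index as in the earlier sketches, and this is exactly what the idle callback above does one tick at a time):

    use strict;
    use warnings;

    my $file = 'huge.log';                      # illustrative
    open my $fh, '<', $file or die "open $file: $!";
    my $blksize = (stat $fh)[11] || 4096;

    my @nl_before = (0);
    my $sum       = 0;
    while (my $n = sysread($fh, my $buf, $blksize)) {
        $sum += ($buf =~ tr/\n//);              # count newlines in this block
        push @nl_before, $sum if $n == $blksize;    # full block: boundary reached
    }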

-nuffin
zz zZ Z Z #!perl