http://qs321.pair.com?node_id=938499


in reply to "Just use a hash": An overworked mantra?

One comment that I would make, also, (tangential to the immediate discussion though it be ...) is that unexpectedly good results can be obtained by using “an old COBOL trick,” namely, use an external disk-sort to sort the file first.   Then, you can simply read the file sequentially.   Every occurrence of every value in every group will be adjacent ... just count ’em up until the value changes (or until end-of-file).   Any gap in the sequence indicates a complete absence of values, anywhere in the original file, that fall into that gap.   The amount of memory required is:   “basically, none.”
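Here is a minimal Perl sketch of that sequential pass, assuming the file has already been sorted numerically and holds one integer key per line (the filename is made up for illustration):

    use strict;
    use warnings;

    # Assumes one integer key per line, already sorted numerically;
    # 'sorted_keys.txt' is a hypothetical filename.
    open my $fh, '<', 'sorted_keys.txt' or die "Can't open file: $!";

    my ( $prev, $count );
    while ( my $line = <$fh> ) {
        chomp $line;
        if ( defined $prev and $line == $prev ) {
            $count++;                             # same value: keep counting
        }
        else {
            print "$prev occurs $count time(s)\n" if defined $prev;
            if ( defined $prev and $line > $prev + 1 ) {
                printf "  gap: %d .. %d never appear\n", $prev + 1, $line - 1;
            }
            ( $prev, $count ) = ( $line, 1 );     # start a new run
        }
    }
    print "$prev occurs $count time(s)\n" if defined $prev;    # final run
    close $fh;

Notice that the only state carried from one line to the next is the previous key and its running count ... hence, “basically, none.”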

And, yes ... you can sort a file of millions of entries, and you’ll be quite pleasantly surprised at how well even a run-of-the-mill disk sorter (or Perl module) can get the job done.   It isn’t memory-intensive (although it will efficiently use whatever memory is available).   It is disk-space intensive, since it creates and discards many temporary spill-files as it works, but only moderately so.
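You can even let the operating system’s own sort utility do the heavy lifting and simply stream its output into Perl.   A sketch, assuming a Unix-like system with sort(1) on the PATH (the input filename is again hypothetical):

    use strict;
    use warnings;

    # Open a pipe from the external sort; it manages its own temporary
    # spill files on disk and uses only the memory it is allowed.
    open my $sorted, '-|', 'sort', '-n', 'big_input.txt'
        or die "Can't launch sort: $!";

    while ( my $line = <$sorted> ) {
        # ... process each line in sorted order, one at a time,
        #     never holding the whole data set in memory ...
    }

    close $sorted or die "External sort failed: exit status $?";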

The same technique is also ideally suited to comparing large files, or to merging them, because there is no “searching” to be done at all.   Merely sort all of the files the same way, and the process is once again sequential.
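For instance, here is a sketch of a sequential compare of two files that have been sorted the same way (again with made-up filenames); a merge works the same way, except that you write records instead of reporting differences:

    use strict;
    use warnings;

    # Walk two already-sorted files in step; no searching, no hashes.
    open my $fh_a, '<', 'sorted_a.txt' or die "Can't open A: $!";
    open my $fh_b, '<', 'sorted_b.txt' or die "Can't open B: $!";

    my $a_line = <$fh_a>;
    my $b_line = <$fh_b>;
    while ( defined $a_line and defined $b_line ) {
        chomp( my $ka = $a_line );
        chomp( my $kb = $b_line );
        if    ( $ka lt $kb ) { print "only in A: $ka\n"; $a_line = <$fh_a>; }
        elsif ( $ka gt $kb ) { print "only in B: $kb\n"; $b_line = <$fh_b>; }
        else                 { print "in both:   $ka\n";
                               $a_line = <$fh_a>;
                               $b_line = <$fh_b>; }
    }

    # Whatever remains in the longer file is, by definition, unmatched.
    while ( defined $a_line ) { chomp $a_line; print "only in A: $a_line\n"; $a_line = <$fh_a>; }
    while ( defined $b_line ) { chomp $b_line; print "only in B: $b_line\n"; $b_line = <$fh_b>; }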