http://qs321.pair.com?node_id=963008


in reply to [OT] The statistics of hashing.

How much time do you have available to come up with a solution, given “billions of” records?

If the order of the records cannot be controlled, then a meaningful result can only be obtained by processing the entire dataset, and that cannot be done in memory in a single pass. Therefore, I see no alternative but to perform multiple passes through the data, processing a mutually exclusive subset of the records on each pass.
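Something along these lines, perhaps (a minimal sketch only, assuming newline-delimited records in a file and the whole record as the key; the partition count and file name are mine, not yours). Each record is assigned to exactly one partition by its digest, so each pass only has to hold that partition's keys in memory:

    use strict;
    use warnings;
    use Digest::MD5 qw(md5);

    my $PASSES = 16;    # assumed number of mutually exclusive subsets

    for my $pass ( 0 .. $PASSES - 1 ) {
        my %seen;       # holds only this partition's keys
        open my $in, '<', 'records.dat' or die $!;    # hypothetical input file
        while ( my $record = <$in> ) {
            chomp $record;
            # Every record maps to exactly one partition via its digest,
            # so the subsets are mutually exclusive and jointly exhaustive.
            my $partition = unpack( 'N', md5($record) ) % $PASSES;
            next unless $partition == $pass;
            print "possible duplicate: $record\n" if $seen{$record}++;
        }
        close $in;
    }

Sixteen sequential reads of the file is painful, but each pass needs only a sixteenth of the memory, and the per-pass answers are exact rather than “maybes.”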

The results you describe in reply #2 seem to point to a reasonable compromise solution: if 300K records produce 61 maybes, and if (say) 3 million records at a time produce a few thousand, you have reduced the problem for that pass to a serviceable level. However, as the algorithm begins to saturate, the ink-blots start to run together and you’re left with a black piece of paper instead of a picture. That will force you into a possibly large number of passes, which could at least be processed in parallel, say on a cluster.
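To put a rough number on the ink-blot effect (a back-of-the-envelope sketch; the vector size and hash count below are my assumptions, not your figures), the standard Bloom-filter estimate p = (1 - e^(-kn/m))^k shows how fast the false positives climb as n keys are packed into a fixed m bits:

    use strict;
    use warnings;

    my $m = 2**32;    # assumed bits in the vector (a 512MB string)
    my $k = 4;        # assumed hash functions per key
    for my $n ( 1e8, 5e8, 1e9, 2e9, 5e9 ) {
        my $p = ( 1 - exp( -$k * $n / $m ) )**$k;
        printf "%.0e keys: false-positive rate %.4f (~%.0f spurious maybes)\n",
            $n, $p, $p * $n;
    }

Past a certain fill factor nearly every “maybe” is noise, which is exactly the black piece of paper.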

You say that “a disk-based solution isn’t feasible,” but a fair amount of disk work is going to happen regardless. The information content of any hash key, however derived, is considerably less than that of the record itself, and the mere fact that you haven’t yet seen what appears to be a duplicate of a particular record does not usefully predict that you won’t stumble into one with the very next record. There will, unfortunately, be an extremely narrow “sweet spot” in which the discriminating power of the algorithm is usefully preserved.
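Which is why I would plan for the spill from the outset. A sketch of the idea (the single bit-vector filter, the file names, and the verification step are my assumptions, standing in for whatever structure you actually use): anything the in-memory filter flags as a maybe goes straight to disk, and a later exact pass over that much smaller file settles the question.

    use strict;
    use warnings;
    use Digest::MD5 qw(md5);

    my $vec = '';    # in-memory bit vector; grows as bits are set (up to 512MB)
    open my $spill, '>', 'maybes.dat' or die $!;    # hypothetical spill file

    while ( my $record = <STDIN> ) {
        chomp $record;
        my $bit = unpack( 'N', md5($record) );      # 0 .. 2**32 - 1
        # A set bit is only a *maybe*: defer it to the exact, disk-based pass.
        print {$spill} "$record\n" if vec( $vec, $bit, 1 );
        vec( $vec, $bit, 1 ) = 1;
    }
    close $spill;
    # A later pass over the full data, counting only the keys found in
    # maybes.dat, confirms which of them are genuine duplicates.

The spill file costs you sequential writes, which disks are good at; what it buys you is that the expensive exact check is confined to the maybes instead of the whole dataset.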