optimizing a linear search by indexed or bucketed hashing
by princepawn (Parson) on Oct 04, 2007 at 23:12 UTC
princepawn has asked for the wisdom of the Perl Monks concerning the following question:
OK, let's say file A has a series of strings, one per line, and file B likewise. The goal is, for each line in A, to return the best match from B using a subroutine named fuzzy_match, which takes two strings and returns a float from 0 to 1.
Now, let's assume that file B is enormous, making the prospect of applying fuzzy_match to every member infeasible. But let's also assume that the best match in B for a given record of A will always share that record's first character. This means that instead of scanning all of B, you simply need to retrieve the records from B which start with the same first letter as the current record in A.
Hence you only search a "bucket" of B instead of all of B, saving you a good bit of time.
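The bucketing idea above can be sketched in a few lines. This is a minimal illustration, not the module I'm asking for: the subroutine names are mine, and fuzzy_match here is a toy stand-in (fraction of matching positions) for whatever real 0-to-1 scorer you'd use.

```perl
use strict;
use warnings;

# Toy stand-in for a real fuzzy matcher: fraction of positions
# (up to the shorter length) where the two strings agree.
sub fuzzy_match {
    my ($x, $y) = @_;
    my $len = length($x) < length($y) ? length($x) : length($y);
    return 0 unless $len;
    my $same = grep { substr($x, $_, 1) eq substr($y, $_, 1) } 0 .. $len - 1;
    return $same / $len;
}

# Index the records of B by first character: one linear pass.
sub build_buckets {
    my (@b_records) = @_;
    my %bucket;
    push @{ $bucket{ substr($_, 0, 1) } }, $_ for @b_records;
    return \%bucket;
}

# For a record of A, score only the candidates in its bucket
# instead of all of B, and return the highest scorer.
sub best_match {
    my ($a_record, $bucket) = @_;
    my $candidates = $bucket->{ substr($a_record, 0, 1) } || [];
    my ($best, $best_score) = (undef, -1);
    for my $cand (@$candidates) {
        my $score = fuzzy_match($a_record, $cand);
        ($best, $best_score) = ($cand, $score) if $score > $best_score;
    }
    return $best;
}
```

So matching 'appla' against qw(apple apricot banana) only ever scores the two 'a' records; 'banana' is never touched.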
Now, I could write such an indexing / bucketing routine myself pretty easily, but I'm surprised that it hasn't already been written. However, CPAN showed no results for such a beast... any leads?
UPDATE: One thing to note is that the assumption about the two data sets should be parameterizable. In the description above, the assumption was that corresponding records share the same first character, but other assumptions are possible.
So, the best interface would be usable under a variety of hashing strategies... the desired interface would be along the lines of:
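One hypothetical shape for such an interface (the package and method names are invented for illustration, and the scorer is a toy): the caller supplies the bucketing strategy as a code ref that maps a record to its bucket key, so first-character bucketing becomes just one plug-in among many.

```perl
use strict;
use warnings;

package Fuzzy::Bucketed;   # hypothetical name, not a real CPAN module

sub new {
    my ($class, %args) = @_;
    my $self = {
        key_for => $args{key_for},   # coderef: record -> bucket key
        scorer  => $args{scorer},    # coderef: (a, b) -> 0..1
        buckets => {},
    };
    return bless $self, $class;
}

# Bucket each record of B under the key the strategy assigns it.
sub index_records {
    my ($self, @records) = @_;
    push @{ $self->{buckets}{ $self->{key_for}->($_) } }, $_ for @records;
}

# Score only the candidates that share the record's bucket key.
sub best_match {
    my ($self, $record) = @_;
    my $candidates = $self->{buckets}{ $self->{key_for}->($record) } || [];
    my ($best, $best_score) = (undef, -1);
    for my $cand (@$candidates) {
        my $score = $self->{scorer}->($record, $cand);
        ($best, $best_score) = ($cand, $score) if $score > $best_score;
    }
    return $best;
}

package main;

# First-character bucketing as one strategy among many:
my $matcher = Fuzzy::Bucketed->new(
    key_for => sub { lc substr($_[0], 0, 1) },
    scorer  => sub { lc $_[0] eq lc $_[1] ? 1 : 0 },  # toy scorer
);
```

Swapping in a different assumption is then just a different key_for coderef (first two characters, a Soundex code, a length bucket, ...), with no change to the matching loop.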
Carter's compass: I know I'm on the right track when by deleting something, I'm adding functionality