Re^2: Fuzzy Searching: Optimizing Algorithm Selection

by hv (Prior)
on Nov 12, 2004 at 11:48 UTC


in reply to Re: Fuzzy Searching: Optimizing Algorithm Selection
in thread Fuzzy Searching: Optimizing Algorithm Selection

If you double the number of 25-ers, you double the time.

Yes.

If you double the average length of the strings, you slightly less than double the time.

No: 1 + 50C1 + 50C2 = 1 + 50 + 1225 = 1276, nearly 4 times the 326 different regexps required for a length-25 string.
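
As a quick sanity check of those counts, here is a small sketch of my own (not code from either node):

    use strict;
    use warnings;

    # n-choose-k, computed iteratively
    sub choose {
        my ($n, $k) = @_;
        my $c = 1;
        $c = $c * ($n - $_ + 1) / $_ for 1 .. $k;
        return $c;
    }

    # patterns needed for 0, 1 or 2 mismatches in a length-$len needle
    for my $len (25, 50) {
        my $count = 1 + choose($len, 1) + choose($len, 2);
        print "length $len: $count patterns\n";
    }
    # length 25: 326 patterns
    # length 50: 1276 patterns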

Of course there are two ways of allowing for fuzziness: replace /x/ with /[^x]/, or replace it with /./; the latter approach means you don't also need the 0- and 1-mismatch patterns, leaving 300 patterns (for length-25, up to 2 errors) or 1225 (for length-50).
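
To illustrate the /./ approach, a sketch of mine (the needle is made up): only the two-wildcard patterns need generating, since each subsumes the corresponding 0- and 1-mismatch matches:

    use strict;
    use warnings;

    my $needle = 'ACGT' x 6 . 'A';    # made-up 25-char needle
    my @patterns;
    for my $i (0 .. length($needle) - 2) {
        for my $j ($i + 1 .. length($needle) - 1) {
            my $pat = $needle;
            substr($pat, $i, 1) = '.';
            substr($pat, $j, 1) = '.';
            push @patterns, qr/$pat/;
        }
    }
    printf "%d patterns for a %d-char needle\n",
        scalar @patterns, length $needle;
    # 300 patterns for a 25-char needle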

Hugo

Re^3: Fuzzy Searching: Optimizing Algorithm Selection
by BrowserUk (Patriarch) on Nov 12, 2004 at 12:13 UTC
    No: 1 + 50C1 + 50C2 = 1 + 50 + 1225 = 1276, nearly 4 times the 326 different regexps required for a length-25 string.

    Sorry, I was lax in my language. Instead of "...length of the strings...", I should have said sequences. That is, if you double the length of the sequences being searched, from 1000 to 2000 chars (average), but search for the same 25-char needle, then the time taken is slightly less than double.

    I agree that if the length of the needles doubles, the number of regexes required nearly quadruples.

    As I understand the problem, using /./ for the fuzziness is the right thing. However, whilst a regex with 2 wildcards will match anything that the same regex with only one wildcard would match, the problem with discarding the latter is that you can no longer record how fuzzy the match was.

    The alternative is perhaps to capture the wildcards and then use substr to determine which of them would have matched had they not been wild. But between the slowdown from capturing and the additional work of determining the fuzz factor, it is probably quicker to retain the 1-mismatch and 2-mismatch regexes?
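
    Something along these lines is what I mean, as a sketch only, with a made-up needle and sequence: capture at the two wildcard positions, then compare the captures against the needle to recover the fuzz factor:

        use strict;
        use warnings;

        my $needle = 'ACGTACGTAC';            # made-up 10-char needle
        my ($i, $j) = (2, 7);                 # the two wildcard positions
        my $pat = $needle;
        # replace right-to-left so the earlier offset stays valid
        substr($pat, $_, 1) = '(.)' for $j, $i;

        my $seq = 'TTACGTACGGACTT';           # made-up sequence to search
        while ($seq =~ /$pat/g) {
            # ne returns 1 or '', so the sum is the number of real mismatches
            my $fuzz = ($1 ne substr($needle, $i, 1))
                     + ($2 ne substr($needle, $j, 1));
            my $pos = pos($seq) - length $needle;
            print "match at $pos with $fuzz mismatch(es)\n";
        }
        # match at 2 with 1 mismatch(es)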


    Examine what is said, not who speaks.
    "Efficiency is intelligent laziness." -David Dunham
    "Think for yourself!" - Abigail        "Time is a poor substitute for thought"--theorbtwo
    "Memory, processor, disk in that order on the hardware side. Algorithm, algorithm, algorithm on the code side." - tachyon
