PerlMonks
Re^5: Fuzzy Searching: Optimizing Algorithm ( A few improvements).

by demerphq (Chancellor)
on Dec 09, 2004 at 09:00 UTC [id://413456]


in reply to Re^4: Fuzzy Searching: Optimizing Algorithm ( A few improvements).
in thread Fuzzy Searching: Optimizing Algorithm Selection

The math you do to correct this is the same as I had, but your positioning of it (in the if condition) is not.

I wasn't able to spend much time analysing the optimal fix for the bug. Sorry.

If you intend to compare my code against other algorithms that rely upon all the keys in any given test having the same length, as your original does, and as the version I've seen of ysth's extremely clever XOR/hash-index hybrid does, then for a fair comparison I should supply you with an updated version of mine that also relies upon this assumption.

Feel free to provide an updated version. If I haven't started the new thread by the time you are ready to post, then do it here. Also, while ysth's solution and my current solution can both be modified to handle variable-length string sets, I don't think either of us has the time to actually put the changes into place, so I would prefer that we stick to fixed-length words. However, if you do want to post a version supporting variable-length words, then please treat the size provided to the constructor as the MINIMUM size of any string to be stored, and I suppose you'll get bonus points or something in the comparison. :-)

If the keys will be invariant length, and the other algorithms make this assumption, then that code need only be executed once, rather than 100,000 (scalar @keys) times, and so should be lifted out of the for my $keys ( @keys )... loop.

I think this could be done in an overloaded prepare() method, if it only needs doing once, called after all the words to match are loaded but before the searching starts. Since this is an OO framework you should be able to work out some form of caching pretty easily.
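A minimal sketch of that idea, assuming a hypothetical FuzzySearch class (the names new, add_keys, prepare, fuzz_search, and min_zeros are illustrative, not from either posted implementation): the per-key constants are computed once in prepare(), after loading and before searching, instead of inside the key loop.

```perl
use strict;
use warnings;

package FuzzySearch;

sub new {
    my ( $class, $key_len ) = @_;
    return bless { key_len => $key_len, keys => [] }, $class;
}

sub add_keys {
    my ( $self, @keys ) = @_;
    push @{ $self->{keys} }, @keys;
    $self->{prepared} = 0;    # new keys invalidate the cached setup
}

# Called once, after all keys are loaded and before searching starts.
# With invariant-length keys this work need not be repeated per key.
sub prepare {
    my $self = shift;
    $self->{min_zeros} = "\0" x $self->{key_len};    # cached constant
    $self->{prepared}  = 1;
}

sub fuzz_search {
    my ( $self, $string ) = @_;
    $self->prepare unless $self->{prepared};    # lazy one-time setup
    my @results;
    for my $key ( @{ $self->{keys} } ) {
        # ... per-key matching would use the cached $self->{min_zeros} ...
    }
    return wantarray ? @results : \@results;
}

package main;

my $fs = FuzzySearch->new(4);
$fs->add_keys( 'AGCT', 'TTAG' );
my $r = $fs->fuzz_search('AGCTTTAG');
print ref($r), "\n";    # ARRAY: scalar context returns the arrayref
```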

However, returning the results as a list, rather than by reference, seems a retrograde step, as this implies replicating the results array at least once, with all the memory-allocation and time costs that implies, which, from my brief perusal of the test harness, will be incorporated into the timings.

I've changed the test framework a fair bit, I think, so wait for that. Regarding the list return, I would have thought that since each routine would have basically the same overhead, it wouldn't matter that much. Nevertheless I'll try to switch things over if I have time. How about we say that fuzz_search should do a wantarray ? @results : \@results, so that if I get the time to change the harness and the running implementations I have, whatever you might be working on will take advantage of it?
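The proposed calling convention can be sketched in a few lines (the dummy hit data here is made up for illustration): list context copies the results out, while scalar context hands back a single arrayref and avoids the replication cost.

```perl
use strict;
use warnings;

# Sketch of the wantarray convention: a list in list context,
# an arrayref in scalar context.
sub fuzz_search {
    my @results = ( [ 0, 1, 'AGCT' ], [ 7, 0, 'TTAG' ] );    # dummy hits
    return wantarray ? @results : \@results;
}

my @list = fuzz_search();    # list context: the array is copied
my $ref  = fuzz_search();    # scalar context: one arrayref, no copy
print scalar(@list), " hits; first via ref: $ref->[0][2]\n";
# prints "2 hits; first via ref: AGCT"
```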

Couldn't you record the index of the key, rather than its value? Wouldn't either packing or stringifying the 3 record values reduce the overhead of the results array by 2/3rds and make for easy comparison?

I was actually thinking of using packed records as the return, so that each record would be a pack "NNA*", $ofs, $diff, $string, but for various reasons I think it's not appropriate for these tests. And yes, it does make results comparison a lot easier. But I've got that in the test harness and not in the object. :-)
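A small sketch of the packed-record idea, with made-up hit data: each hit becomes one string via pack "NNA*", and because "N" is big-endian, plain string sorting orders the records by ($ofs, $diff), while duplicate checking is a trivial hash-of-strings test.

```perl
use strict;
use warnings;

# Dummy ($ofs, $diff, $string) hits, including one duplicate.
my @hits = ( [ 258, 1, 'AGCT' ], [ 7, 0, 'TTAG' ], [ 7, 0, 'TTAG' ] );

my %seen;
my @packed = grep { !$seen{$_}++ }                 # dupe check is trivial
             map  { pack 'NNA*', @$_ } @hits;

for my $rec ( sort @packed ) {                     # lexicographic sort
    my ( $ofs, $diff ) = unpack 'NN', $rec;        # first two longs
    my $str = unpack 'x8 A*', $rec;                # skip 8 bytes, rest
    print "$ofs $diff $str\n";
}
# prints "7 0 TTAG" then "258 1 AGCT"
```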

You mentioned returning an index instead of a string, which I avoided just for simplicity's sake. I think the only way that would be workable would be to add an accessor method to the object for converting the number to a string, which was one reason I avoided it. However, if you are OK with that provision then I don't see why not. The base class can be updated to just do a lookup into the str_array, and then your solutions don't need an overloaded method for it at all. Doing the same to mine and ysth's might be moderately harder, but still doable. I'll work it in. (So the list will be triplets of longs.) If we decide we want to use packed results for high-density tests we can switch to pack "NNN". (BTW, I say 'N' because it has the useful property of sorting lexicographically.)
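The useful property of 'N' mentioned here is easy to demonstrate: because "N" packs a 32-bit unsigned integer big-endian into a fixed four bytes, sorting the packed strings lexicographically gives exactly the numeric order, so packed (ofs, diff, index) triplets can be sorted and compared as plain strings.

```perl
use strict;
use warnings;

my @nums   = ( 3, 256, 41, 70000, 1 );
my @packed = map { pack 'N', $_ } @nums;

# Plain string sort on the packed bytes...
my @by_string = map { unpack 'N', $_ } sort @packed;
# ...matches a numeric sort on the original values.
my @by_number = sort { $a <=> $b } @nums;

print "@by_string\n";    # 1 3 41 256 70000
print "@by_number\n";    # same order
```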

I'm hoping I can get the time to start the new thread this evening. Till then, hopefully you have enough info to slot into what I publish when I do.

---
demerphq

Re^6: Fuzzy Searching: Optimizing Algorithm ( A few improvements).
by ysth (Canon) on Dec 09, 2004 at 11:42 UTC
    Also, while ysth's solution and my current solution can both be modified to handle variable-length string sets, I don't think either of us has the time to actually put the changes into place, so I would prefer that we stick to fixed-length words.
    FWIW, my solution will lose efficiency with mixed length words.

    I would actually like to see us come up with a module providing all three methods, possibly including variants supporting fixed vs. mixed length (assuming each has at least some advantage, of which I'm not sure as regards mine). I'm kind of expecting BrowserUk's to best handle high amounts of fuzz.

    Re: packing the return values: I seriously doubt a pack in pure Perl is going to be a net win over just returning multiple elements, so I think that would benefit only the XS solution. It's also a lousy interface for a Perl module to use. As far as returning an index goes, I suppose that's possible, but as it is, my algorithm has no need to keep the array around.

      FWIW, my solution will lose efficiency with mixed length words.

      Well, AFAIUI the efficiency will be determined by the ratio MIN_KEY_LEN/FUZZ. The smaller it is, the less efficient, with the degenerate case being a slower version of a brute-force XOR.

      Re: packing the return values: I seriously doubt a pack in pure Perl is going to be a net win over just returning multiple elements, so I think that would benefit only the XS solution.

      I'm happy with either way. And yes, the benefit to an XS solution was one of the reasons I didn't do it originally. But I don't entirely agree that it's a lousy interface. For large numbers of hits and strings it means a lot less string copying is involved, and it has the inherent property of being lexicographically sortable, and easy to dupe-check and compare.

      As far as returning an index goes, I suppose that's possible, but as it is, my algorithm has no need to keep the array around.

      OK, then we'll leave it as unpacked triplets of ($ofs, $diff, $string) returned via an arrayref.

      ---
      demerphq

Re^6: Fuzzy Searching: Optimizing Algorithm ( A few improvements).
by BrowserUk (Patriarch) on Dec 09, 2004 at 10:11 UTC
    However if you want to post a version supporting variable length words...

    You misunderstand (or, more likely, I wasn't clear): every version I've posted so far quite happily accepts mixed-length keys (strings). My point was that if we're getting into hard-core performance testing, and I'm up against code that only handles fixed-length keys, then I can gain a little by following suit.

    I'll wait and see what the final test framework, and you guys' stuff, looks like before going any further.


    Examine what is said, not who speaks.        The end of an era!
    "But you should never overestimate the ingenuity of the sceptics to come up with a counter-argument." -Myles Allen
    "Think for yourself!" - Abigail        "Time is a poor substitute for thought"--theorbtwo         "Efficiency is intelligent laziness." -David Dunham
    "Memory, processor, disk in that order on the hardware side. Algorithm, algorithm, algorithm on the code side." - tachyon

      I'd say if you can speed things up by assuming only a fixed-width keyset, then do so. However, I was intending at some point to convert mine and ysth's to a variable-width set, so it might be worthwhile going both ways. *shrug* For now it's safe to assume the search keys are fixed width. :-)

      I looked at the optimisation you mentioned regarding moving certain logic outside of the key loop in your second version. I'm not sure it's a good idea to cache those strings: although it will of course speed things up, I think it may also be problematic, as it dramatically mushrooms the amount of memory your solution needs. For instance, with 100_000 keys searching 100k strings you are going to have serious memory issues. So I guess it's a trade-off. I may build a memory ceiling into the test suite so that an object may be at most 400MB or so. While this may be somewhat small, it's necessary IMO, because it's around there that my machine will start thrashing and thus blow the utility of any benchmark.

      But yeah, sure, feel free to wait to see the full picture. I just figured you'd prefer to get a contender suited up. I have already converted your original solution, and the uncached second solution you posted, and I thought you should have right of reply before I posted them in the new thread.

      ---
      demerphq

        I'm not sure it's a good idea to cache those strings: although it will of course speed things up, I think it may also be problematic, as it dramatically mushrooms the amount of memory your solution needs.

        It's not the keys array I would move out, just the calculations and the $minZeros string, all of which would be constants if the keys are fixed length. I have done that locally, and it is worth doing.

        My latest variation is better still, but has a bug in the logic that means it finds a few duplicates (again). Still trying to crack that. Basically, it removes the inner ($offset2) loop, which has a dramatic effect on performance--if only I can get the accuracy back.

