 
PerlMonks  

Re^2: Fuzzy Searching: Optimizing Algorithm ( A few improvements).

by BrowserUk (Patriarch)
on Dec 04, 2004 at 08:56 UTC ( [id://412372] )


in reply to Re: Fuzzy Searching: Optimizing Algorithm Selection
in thread Fuzzy Searching: Optimizing Algorithm Selection

In case Itatsumaki should ever come back, here's a somewhat improved implementation of my Xor algorithm. The original projection of 3 1/2 years runtime to process 100,000 x 1,000 x 30,000 is now reduced to 7.6 days:
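The core of the Xor approach can be shown in a few lines: XOR-ing a key against an equal-length window of the sequence yields NUL ("\0") bytes exactly where the characters agree, so counting the non-NUL bytes of the result gives the fuzziness of that window. A minimal sketch (the names $key, $window and $fuzz are mine, for illustration only):

```perl
use strict;
use warnings;
use bytes;

my $key    = 'ACGT';
my $window = 'ACCT';                    # differs from the key in one place

my $xored = $key ^ $window;             # "\0" wherever the bytes agree
my $fuzz  = length( $key ) - ( $xored =~ tr/\0/\0/ );   # count mismatches

print "fuzziness: $fuzz\n";             # prints "fuzziness: 1"
```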

Update (2004/12/08): Updated code to correct an error that manifested itself when the sequences were not an exact multiple of the key length. (As noted below by demerphq)

#! perl -slw
use strict;
use bytes;

our $FUZZY ||= 2;

open KEYS, '<', $ARGV[ 0 ] or die "$ARGV[ 0 ] : $!";
my @keys = <KEYS>;
close KEYS;
chomp @keys;
warn "Loaded ${ \scalar @keys } keys";

open SEQ, '<', $ARGV[ 1 ] or die "$ARGV[ 1 ] : $!";
my( $masked, $pos );
my $totalLen = 0;
my $count = 0;

while( my $seq = <SEQ> ) {
    chomp $seq;
    my $seqLen = length $seq;
    $totalLen += $seqLen;

    for my $key ( @keys ) {
        my $keyLen   = length $key;
        my $mask     = $key x ( int( $seqLen / $keyLen ) + 1 );
        my $maskLen  = length $mask;
        my $minZeros = chr( 0 ) x int( $keyLen / ( $FUZZY + 1 ) );
        my $minZlen  = length $minZeros;

        for my $offset1 ( 0 .. $keyLen - 1 ) {
            $masked = $mask ^ substr( $seq, $offset1, $maskLen );
            $pos = 0;
            while( $pos = 1 + index $masked, $minZeros, $pos ) {
                $pos--;
                my $offset2 = $pos - ( $pos % $keyLen );
                last unless $offset1 + $offset2 + $keyLen <= $seqLen;
                my $fuz = $keyLen
                    - ( substr( $masked, $offset2, $keyLen ) =~ tr[\0][\0] );
                if( $fuz <= $FUZZY ) {
                    printf "\tFuzzy matched key:'$key' -v- '%s' in line:"
                         . "%2d @ %6d (%6d+%6d) with fuzziness: %d\n",
                        substr( $seq, $offset1 + $offset2, $keyLen ),
                        $., $offset1 + $offset2, $offset1, $offset2, $fuz;
                }
                $pos = $offset2 + $keyLen;
            }
        }
    }
}
warn "\n\nProcessed $. sequences";
warn "Average length: ", $totalLen / $.;
close SEQ;
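One non-obvious trick above is $minZeros. By the pigeonhole principle, a key-length window containing at most $FUZZY mismatches must contain an unbroken run of at least int( $keyLen / ( $FUZZY + 1 ) ) NUL bytes after masking, so a fast index() for such a run skips windows that cannot possibly match. A small sketch of that reasoning (the masked bytes here are made up for illustration):

```perl
use strict;
use warnings;
use bytes;

my $FUZZY    = 2;
my $keyLen   = 9;
my $minZeros = chr( 0 ) x int( $keyLen / ( $FUZZY + 1 ) );   # "\0\0\0"

# A XOR-masked window with 2 mismatches: the 2 non-NUL bytes split the
# 9 positions into at most 3 runs of NULs, so one run has >= 3 NULs.
my $masked = "\0\0\0\0\1\0\0\1\0";

print index( $masked, $minZeros ) >= 0 ? "possible match\n" : "prune\n";
```

Only windows that pass this cheap index() test pay for the full tr/// fuzziness count.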

A couple of runs (on single sequences, for timing purposes) on data comparable to (produced by the same code as) other timings published elsewhere:

[ 6:57:56.46] P:\test\demerphq> ..\406836-3 Fuzz-Words-W0025-S100000-WC100000-SC0001.fuzz Fuzz-Strings-W0025-S100000-WC100000-SC0001.fuzz
Loaded 100000 keys at P:\test\406836-3.pl line 12.
seq:00001 (100000)
 1 @  69364 (    14+ 69350) with fuzziness: 0
 1 @  24886 (    11+ 24875) with fuzziness: 0
 1 @  40056 (     6+ 40050) with fuzziness: 0
 1 @  68870 (    20+ 68850) with fuzziness: 0
 1 @   3264 (    14+  3250) with fuzziness: 0
 1 @   8744 (    19+  8725) with fuzziness: 0
 1 @   7493 (    18+  7475) with fuzziness: 0
 1 @  28209 (     9+ 28200) with fuzziness: 0
 1 @  91337 (    12+ 91325) with fuzziness: 0
 1 @  63018 (    18+ 63000) with fuzziness: 0
 1 @  61025 (     0+ 61025) with fuzziness: 0
 1 @  32114 (    14+ 32100) with fuzziness: 0
 1 @  30461 (    11+ 30450) with fuzziness: 0
 1 @  59174 (    24+ 59150) with fuzziness: 0
 1 @  74084 (     9+ 74075) with fuzziness: 0
 1 @  58322 (    22+ 58300) with fuzziness: 0
 1 @  78465 (    15+ 78450) with fuzziness: 0
 1 @  56190 (    15+ 56175) with fuzziness: 0
 1 @  14968 (    18+ 14950) with fuzziness: 0
 1 @  31986 (    11+ 31975) with fuzziness: 0
 1 @  60748 (    23+ 60725) with fuzziness: 0
 1 @  93369 (    19+ 93350) with fuzziness: 0
 1 @   6242 (    17+  6225) with fuzziness: 0
 1 @  15282 (     7+ 15275) with fuzziness: 0
 1 @  13293 (    18+ 13275) with fuzziness: 0
Processed 1 sequences at P:\test\406836-3.pl line 57, <SEQ> line 1.
Average length: 100000 at P:\test\406836-3.pl line 58, <SEQ> line 1.
[ 7:28:22.37] P:\test\demerphq>
[ 8:36:32.71] P:\test\demerphq> ..\406836-3 Fuzz-Words-W0025-S1000-WC100000-SC0010.fuzz Fuzz-Strings-W0025-S1000-WC100000-SC0010.fuzz
Loaded 100000 keys at P:\test\406836-3.pl line 12.
seq:00001 (01000)
 1 @     94 (    19+    75) with fuzziness: 0
 1 @    692 (    17+   675) with fuzziness: 0
 1 @    326 (     1+   325) with fuzziness: 0
 1 @     35 (    10+    25) with fuzziness: 0
 1 @    826 (     1+   825) with fuzziness: 0
 1 @    598 (    23+   575) with fuzziness: 0
 1 @    860 (    10+   850) with fuzziness: 0
 1 @    489 (    14+   475) with fuzziness: 0
 1 @    370 (    20+   350) with fuzziness: 0
 1 @    745 (    20+   725) with fuzziness: 0
 1 @    297 (    22+   275) with fuzziness: 0
 1 @    415 (    15+   400) with fuzziness: 0
 1 @    119 (    19+   100) with fuzziness: 0
 1 @    957 (     7+   950) with fuzziness: 0
 1 @    646 (    21+   625) with fuzziness: 0
 1 @    779 (     4+   775) with fuzziness: 0
Processed 1 sequences at P:\test\406836-3.pl line 57, <SEQ> line 1.
Average length: 1000 at P:\test\406836-3.pl line 58, <SEQ> line 1.
[ 8:36:54.42] P:\test\demerphq>

Examine what is said, not who speaks.
"But you should never overestimate the ingenuity of the sceptics to come up with a counter-argument." -Myles Allen
"Think for yourself!" - Abigail        "Time is a poor substitute for thought"--theorbtwo         "Efficiency is intelligent laziness." -David Dunham
"Memory, processor, disk in that order on the hardware side. Algorithm, algorithm, algorithm on the code side." - tachyon

Replies are listed 'Best First'.
Re^3: Fuzzy Searching: Optimizing Algorithm ( A few improvements).
by demerphq (Chancellor) on Dec 08, 2004 at 18:14 UTC

    As far as I can tell this code fails the "A" x 10/"A" x 11 test that you challenged my code with in a different thread. I was able to fix the bug by changing the if ($fuz <= $FUZZY) test to the following:

    if( $fuz <= $FUZZY and $offset1+$offset2+$keyLen<=$seqLen) {

    I have put together a test harness and framework for developing Fuzzy::Matcher implementations, which I will post in a new thread (when I get sufficient time), as IMO this one has become too polluted with acrimony to be worth continuing. Your code, as massaged to fit into this framework (along with the base class), is in the following readmore. Note that the test harness monitors memory utilization post-prepare(), which is why the default prepare() is the way it is (to reduce memory overhead).

    ---
    demerphq

      ... this code fails the "A" x 10/"A" x 11 test...

      Yes. As I pointed out to ysth offline a couple of days ago, if the sequences are not an exact multiple of the key length, then the code produces some extra, erroneous matches.

      I've updated the post and code above to note and correct that.

      The math you do to correct this is the same as I had, but your positioning of it (in the if condition) is not. Once the math determines that the end of the sequence has been reached, there is no point in allowing the loop to continue. It simply produces further bad matches and wastes time. The conditional last statement in the corrected code above does this.
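      The failure mode is easy to reproduce with the "A" x 10 / "A" x 11 case. When the sequence is shorter than the repeated mask, substr returns a short string and Perl's string ^ pads the shorter operand with NUL bytes, so the tail of the masked string is leftover mask bytes rather than a real comparison; a window straddling the sequence end then still picks up enough genuine NULs to pass the fuzziness test. A sketch (variable names follow the code above; the offsets are chosen to trigger the bug):

      ```perl
      use strict;
      use warnings;
      use bytes;

      my $key    = 'A' x 10;
      my $seq    = 'A' x 11;              # the "A" x 10 / "A" x 11 test
      my $keyLen = length $key;
      my $seqLen = length $seq;

      my $mask    = $key x ( int( $seqLen / $keyLen ) + 1 );    # 20 bytes
      my $offset1 = 2;
      my $masked  = $mask ^ substr( $seq, $offset1, length $mask );
      # substr() returned only 9 bytes; '^' padded it with NULs, so
      # bytes 9..19 of $masked are mask bytes, not a real comparison.

      my $offset2 = 0;
      my $fuz = $keyLen - ( substr( $masked, $offset2, $keyLen ) =~ tr/\0/\0/ );
      print "fuzziness $fuz, ",
          $offset1 + $offset2 + $keyLen <= $seqLen
              ? "genuine match\n"
              : "window past end: spurious\n";
      ```

      The 9 genuine NULs give a fuzziness of 1, well under $FUZZY, even though the window runs past the end of the sequence; hence the need to stop the scan at that point.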

      A few further comments re the test harness.

      1. If you intend comparing my code against other algorithms that rely upon all the keys in any given test having the same length (as your original does, and as the version I've seen of ysth's extremely clever Xor/hash-index hybrid does), then for a fair comparison I should supply you with an updated version of mine that also relies upon this assumption.

        As coded above, the algorithm recalculates various pieces of information derived from the length of the keys on each iteration of the key-processing loop. If the keys will be of invariant length, and the other algorithms make this assumption, then that code need only be executed once, rather than 100,000 (scalar @keys) times, and so should be lifted out of the for my $key ( @keys ) ... loop.

      2. I applaud that you are avoiding producing a big array of lots of little arrays, by stacking the recorded information (offset/fuzziness/keymatched) sequentially.

        However, returning the results as a list, rather than by reference, seems a retrograde step, as it implies replicating the results array at least once, with all the memory-allocation and time costs that implies; from my brief perusal of the test harness, those costs will be incorporated into the timings.

        Probably pretty even for all algorithms on any given run, but implying a disproportionate overhead on the quicker algorithms in any high-density tests.

        I have had the same results from randomly generated sequences as ysth reported: matches are sparsely populated in them. But in a generic fuzzy-match application unrelated to the OP's problem, this may not be the case, so any comparisons should include high-density datasets as well.

        My algorithm does particularly well in this regard relative to others, so this is me watching out for number one here, but I think a generic solution should cater for this scenario.

      3. Couldn't you record the index of the key, rather than its value?
      4. Wouldn't either packing or stringifying the 3 recorded values reduce the overhead of the results array by two-thirds and make for easy comparison?

      I agree that further investigations of this should be done in a new thread, divorced from association with some parts of this one. I'll let you start it, and I'll post a version of the latest incarnation of my code there, adapted to your test harness, once I see what changes, if any, occur in the test harness that I need to adapt it to.



        The math you do to correct this is the same as I had, but your positioning of it (in the if condition) is not.

        I wasn't able to spend much time analysing the optimal solution to the bug. Sorry.

        If you intend comparing my code against other algorithms that rely upon all the keys in any given test having the same length, as your original does, and the version I've seen of ysth's extremely clever Xor/hash index hybrid does, then for a fair comparison, I should supply you with an updated version of mine that also relies upon this assumption.

        Feel free to provide an updated version. If I haven't started the new thread by the time you are ready to post, then do it here. Also, while ysth's solution and my current solution can both be modified to handle variable-length string sets, I don't think either of us has the time to actually put the changes into place, so I would prefer that we stick to fixed-length words. However, if you want to post a version supporting variable-length words, then please treat the size provided to the constructor as the MINIMUM size of any string to be stored, and I suppose you'll get bonus points or something in the comparison. :-)

        If the keys will be of invariant length, and the other algorithms make this assumption, then that code need only be executed once, rather than 100,000 (scalar @keys) times, and so should be lifted out of the for my $key ( @keys ) ... loop.

        I think this could be done in an overloaded prepare() method, if it only needs to be done once and should happen after all the words to match are loaded but before the searching starts. Since this is an OO framework, you should be able to work out some form of caching pretty easily.

        However, returning the results as a list, rather than by reference, seems a retrograde step, as it implies replicating the results array at least once, with all the memory-allocation and time costs that implies; from my brief perusal of the test harness, those costs will be incorporated into the timings.

        I've changed the test framework a fair bit, I think, so wait for that. Regarding the list return: I would have thought that since each routine would have basically the same overhead, it wouldn't matter that much. Nevertheless, I'll try to switch things over if I have time. How about we say that fuzz_search should do a wantarray ? @results : \@results, so that if I get the time to change the harness and the running implementations I have, whatever you might be working on will take advantage of it?
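        The proposed convention is simple to sketch; fuzz_search below is a hypothetical stand-in for the framework method, not the real implementation:

        ```perl
        use strict;
        use warnings;

        # Hypothetical stand-in returning (offset, fuzziness, key) triplets.
        sub fuzz_search {
            my @results = ( 69364, 0, 'key1' );
            return wantarray ? @results : \@results;
        }

        my @list = fuzz_search();   # list context: results copied out
        my $ref  = fuzz_search();   # scalar context: one reference, no copy

        print scalar @list, " results, ref has ", scalar @$ref, "\n";
        ```

        Callers that care about the copying cost simply take the reference; existing list-context callers keep working unchanged.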

        Couldn't you record the index of the key, rather than its value? Wouldn't either packing or stringifying the 3 recorded values reduce the overhead of the results array by two-thirds and make for easy comparison?

        I was actually thinking of using packed records as the return, so that each record would be a pack "NNA*", $string, but for various reasons I think it's not appropriate for these tests. And yes, it does make it a lot easier to compare results. But I've got that in the test harness and not in the object. :-)

        You mentioned returning an index instead of a string, which I avoided just for simplicity's sake. I think the only way that would be workable would be to add an accessor method to the object for converting the number to a string, which was one reason I avoided it. However, if you are OK with that provision, then I don't see why not. The base class can be updated to just do a lookup into the str_array, and then your solutions don't need an overloaded method for it at all. Doing the same to mine and ysth's might be moderately harder, but still doable. I'll work it in. (So the list will be triplets of longs.) If we decide we want to use packed results for high-density tests, we can switch to pack "NNN". (BTW, I say 'N' because it has the useful property of sorting lexicographically.)
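        The aside about 'N' is easy to demonstrate: because 'N' is a big-endian (most-significant byte first) 32-bit layout, byte-wise string comparison of the packed records agrees with numeric, field-by-field comparison of the original triplets (the triplet values here are made up for illustration):

        ```perl
        use strict;
        use warnings;

        # Pack (offset, fuzziness, key-index) triplets as big-endian longs.
        my @triplets = ( [ 300, 2, 7 ], [ 4, 9, 1 ], [ 4, 0, 2 ] );
        my @packed   = map { pack 'NNN', @$_ } @triplets;

        # A plain string sort now orders records numerically, field by field.
        my @sorted = map { [ unpack 'NNN', $_ ] } sort @packed;

        print join( ',', @{ $sorted[0] } ), "\n";   # prints "4,0,2"
        ```

        A little-endian layout ('V') would not have this property, since the least significant byte would dominate the string comparison.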

        I'm hoping I can get the time to start the new thread this evening. Till then, hopefully you have enough info to slot into what I publish when I do.

        ---
        demerphq
