There is one difference between your algorithm and Grandfather's. His code returns the longest substring for each pair of input strings.
With my original data set, your code returns one substring, whereas Grandfather's code returned over three thousand (where $minmatch = 256). On the other hand, your code finds every occurrence of the longest common substring when several share the same length, which I like.
Re^5: Fast common substring matching
by Roy Johnson (Monsignor) on Nov 29, 2005 at 17:08 UTC
The (reasonably) obvious way to get the longest substring for each pair of input strings would be to run my algorithm using each pair of strings as input rather than the whole list of strings. That's probably more work than GF's method, though. I thought about trying it, but something shiny caught my attention...
Update: but now I've done it. It runs on 20 strings of 1000 characters in something under 10 seconds for me. 100 strings of 1000 characters takes about 4 minutes.
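The pairwise approach described above can be sketched in plain Perl. This is only an illustrative sketch, not the code posted in the thread; the subroutine names `lcs_pair` and `all_pairs_lcs` are mine, and `lcs_pair` uses a simple dynamic-programming scan rather than whatever the actual posted code does.

```perl
use strict;
use warnings;

# Longest common substring of two strings via dynamic programming.
# prev[j] / cur[j] hold the length of the common suffix ending at
# position i of $a and position j of $b.
sub lcs_pair {
    my ($a, $b) = @_;
    my @prev = (0) x (length($b) + 1);
    my ($best_len, $best_end) = (0, 0);
    for my $i (1 .. length $a) {
        my @cur = (0);
        my $ca = substr($a, $i - 1, 1);
        for my $j (1 .. length $b) {
            if ($ca eq substr($b, $j - 1, 1)) {
                $cur[$j] = $prev[$j - 1] + 1;
                ($best_len, $best_end) = ($cur[$j], $i)
                    if $cur[$j] > $best_len;
            }
            else {
                $cur[$j] = 0;
            }
        }
        @prev = @cur;
    }
    return substr($a, $best_end - $best_len, $best_len);
}

# Run the pairwise search over every pair in a list of strings,
# keyed by the pair's indices.
sub all_pairs_lcs {
    my @strings = @_;
    my %result;
    for my $i (0 .. $#strings - 1) {
        for my $j ($i + 1 .. $#strings) {
            $result{"$i,$j"} = lcs_pair($strings[$i], $strings[$j]);
        }
    }
    return \%result;
}
```

The O(n*m) inner loop is what makes the all-pairs run slower than a single pass over the whole list, which matches the timings reported above.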
Caution: Contents may have been coded under pressure.
I had thought it was just some sort of cryptic progress meter. :-)
LOL - I know what you mean.
I'm still going over your original code to see how you did what you did -- trying to learn some Perl :-)
I'll give the new code a try. I also see that the minimum length in your code doesn't have to be a power of 2. This should allow me to analyze a limit boundary that appears to be present in my data. Grandfather's code gave me what I feel is a pretty good estimate of the limit's value, but this should allow a closer examination of it.
Actually, as far as I can remember, my code doesn't require a power of 2 for the minimum size either. That may have been more important in earlier versions than in the current one.
Somewhere on my todo list is an item to look at Roy's code, but I've not got down to that item on the list yet. :)
DWIM is Perl's answer to Gödel
Thanks for the clarification. For some reason I got it in my head that the minimum length of the substring had to be a power of 2. That idea must have come from someone else's algorithm for the longest common substring search.
Nonetheless, your script has been very useful to me.
Update: On Windows, it is important to start the shared-manager process immediately if the shared variable is constructed after loading the data. Unix platforms benefit from the copy-on-write feature, which is great.
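The ordering being described can be sketched like this. It is a minimal configuration sketch assuming MCE::Shared is installed; the `$strings` variable name is mine, and the data-loading step is elided.

```perl
use strict;
use warnings;
use MCE::Shared;

# Windows has no fork with copy-on-write, so spawn the shared-manager
# process *before* loading large data; otherwise the manager process
# would inherit a full copy of everything loaded so far.
MCE::Shared->start();

# Construct the shared hash, then load the sequence data into it.
my $strings = MCE::Shared->hash();
# ... load the sequences into $strings here ...
```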
Your 2nd demonstration scales wonderfully across multiple cores after loading the strings hash. For testing, I made a file containing 48 sequences. The serial and parallel code complete in 22.6 and 6.1 seconds, respectively. My laptop has 4 real cores plus 4 hyper-threads.
First, the construction using MCE::Hobo. This requires the 1.699_011 dev release or later, or the final MCE 1.7 release once it is out.
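The excerpt stops before the code itself, so as a rough illustration of what an MCE::Hobo construction looks like, here is my own sketch (not the demonstration from the thread): workers are spawned with `MCE::Hobo->create` and their return values collected with `join`. The worker count and the placeholder body are assumptions.

```perl
use strict;
use warnings;
use MCE::Hobo;    # requires MCE 1.699_011 (dev) or the MCE 1.7 release

# Spawn one worker per core; each worker would handle its own slice
# of the string pairs and return its results when joined.
my @hobos;
for my $wid (1 .. 4) {
    push @hobos, MCE::Hobo->create(sub {
        # ... process this worker's share of the pairs ...
        return "worker $wid done";
    });
}

# Collect the results as the workers finish.
my @results = map { $_->join } @hobos;
```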