in reply to Re: String Comparison & Equivalence Challenge
in thread String Comparison & Equivalence Challenge
That does appear to be an interesting function with relevant output. How would one find a similar function for MariaDB?
Blessings,
~Polyglot~
Re^3: String Comparison & Equivalence Challenge
by erix (Prior) on Mar 14, 2021 at 10:33 UTC
Sorry, no idea; I haven't tried. (Most likely it doesn't exist.) Loading your file into Postgres wasn't hard; I just tried it. YMMV, of course, especially if you don't have PostgreSQL installed yet. In case it helps, here is a quick-'n-dirty load of your file into a (Postgres!) table.
Output from that is:
by Polyglot (Chaplain) on Mar 14, 2021 at 10:51 UTC
Thank you for taking the time to answer, and for your suggestions. I did a little searching and found nGram, but that doesn't seem to be quite the same thing. The math on LanX's "tf-idf" option goes well over my head, to the point where it's not an option for me. (I got an "A" in college calculus only by studying four hours for it every day and asking peers lots of questions, but I never really understood it, and it's all long gone from my memory.)

I'm sorely tempted to arrange for some access to a PostgreSQL DB just to try this. But how would one go about checking the similarity index for each verse? By iterating through the table 31,102-squared times? Does the DB generate the index on the fly? How could the results be efficiently stored? (I've never used indexes before, so I'm entirely unfamiliar with the process.)

Blessings,

~Polyglot~
by erix (Prior) on Mar 14, 2021 at 11:11 UTC
That ngram looks to be a similar thing (although details will differ), but it's for MySQL, not MariaDB. I don't know whether it's available for your MariaDB. (I won't be able to help you with it, but perhaps some other monk will step up.)

In any case, it seems to me you cannot easily compare everything with everything; it would amount to around a billion comparisons, no? (As you said: 31,000 squared, minus a few.) So that's hardly feasible whichever route you take. You need a reduced plan (I think...).

As for the 'indexing': pg_trgm in Postgres (and MySQL's ngram as well, I imagine) works by converting the words of a text into triples of characters (trigrams; or, in the case of ngrams, perhaps some number other than n=3), and then comparing the sets of such triples that resulted from each line/verse/record. You can do that without an index, on the fly, or with an index, where all the triples are stored beforehand for later use. Of course, that generates large index files (but with this smallish table of 31,000 records that's still OK).
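In case it helps to see the mechanics, here is a rough Python sketch of that trigram comparison. It imitates pg_trgm's word padding and its set-overlap similarity(), but the real extension's normalization details may differ, so treat this as an illustration only:

```python
def trigrams(text):
    """Split text into words, pad each word, and collect 3-character grams.
    (pg_trgm pads with two leading spaces and one trailing space; this
    sketch imitates that, but its exact normalization may differ.)"""
    grams = set()
    for word in text.lower().split():
        padded = "  " + word + " "
        grams.update(padded[i:i + 3] for i in range(len(padded) - 2))
    return grams

def similarity(a, b):
    """Ratio of shared trigrams to total distinct trigrams,
    in the spirit of pg_trgm's similarity() function."""
    ta, tb = trigrams(a), trigrams(b)
    if not ta and not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)
```

With an index, the database stores the trigram sets up front; without one, it builds them on the fly per comparison, as this sketch does.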
by LanX (Saint) on Mar 14, 2021 at 16:10 UTC
But the explanation is good, and there are plenty more articles on the web. The basic idea is simple: for each search term, like "God", you calculate tf(God) for each "document" and multiply it by the globally precalculated idf(God) of your "corpus":

    tf-idf(term, doc) = tf(term, doc) * idf(term, corpus)

"God" is a very frequent term, hence its idf will be low. "Gomorrah" is far less frequent, hence its idf will be high, near 1. A document with no mention of "God" will have tf(God) = 0. Here:

    $rank += tf_idf($_) foreach @terms;

Tf-idf is a cornerstone of NLP; the majority of search engines use it. The model is simple and robust and will lead quickly to good results, but you may need to adjust it to your needs for better results.
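A minimal sketch of that scoring loop, in Python rather than Perl; the log-based idf shown is just one common variant, and the corpus layout (documents as lists of lowercase words) is purely illustrative:

```python
import math
from collections import Counter

def idf(term, corpus):
    """idf(term) = log(N / df), where df is the number of documents
    containing the term; frequent terms like "god" score low."""
    df = sum(1 for doc in corpus if term in doc)
    return math.log(len(corpus) / df) if df else 0.0

def rank(query_terms, doc, corpus):
    """Sum tf-idf over the search terms, mirroring the
    '$rank += tf_idf($_) foreach @terms' loop above."""
    counts = Counter(doc)
    score = 0.0
    for term in query_terms:
        tf = counts[term] / len(doc) if doc else 0.0
        score += tf * idf(term, corpus)
    return score
```

A document that never mentions a term contributes tf = 0 for it, so its rank for that term is 0, exactly as described above.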
Cheers,
Rolf
by erix (Prior) on Mar 15, 2021 at 13:00 UTC
It turns out this text is not so interesting to compare, because there is so much 'formula' in it, for want of a better word; these large 'formula' parts of a sentence, while meaning little, give the comparison a high hit number -- see below. By the way, the same will hold for the Algorithm::Diff LCS approach (the Tk program that tybalt89 made for you). I think so, anyway.

Yesterday, I generated the comparisons and kept all pairs scoring above 0.25. This produced a table of almost 40M comparisons with their 'similarity' numbers. It 'worked', in a way, but the result is still a bit disappointing because of the type of text this is (I think). A more informational text with less repetition, less fluff, if you see what I mean, might be more interesting.
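That keep-above-0.25 pass might be sketched as follows; this uses Python's difflib (an LCS-based ratio, much like Algorithm::Diff) as a stand-in similarity measure, and the function name and threshold are illustrative only:

```python
from difflib import SequenceMatcher
from itertools import combinations

def similar_pairs(verses, threshold=0.25):
    """All-pairs comparison, keeping only pairs whose similarity
    exceeds the threshold. This is O(n^2), which is why ~31,000
    verses amount to roughly half a billion ordered pairs."""
    kept = []
    for (i, a), (j, b) in combinations(enumerate(verses), 2):
        score = SequenceMatcher(None, a, b).ratio()
        if score > threshold:
            kept.append((i, j, score))
    return kept
```

Repeated 'formula' phrases inflate these scores: two verses sharing a long stock phrase match heavily even when the distinctive content differs, which is the effect described above.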
by Polyglot (Chaplain) on Mar 15, 2021 at 13:40 UTC
by LanX (Saint) on Mar 15, 2021 at 13:58 UTC