
Re^3: String Comparison & Equivalence Challenge

by erix (Prior)
on Mar 14, 2021 at 10:33 UTC ( #11129608 )

in reply to Re^2: String Comparison & Equivalence Challenge
in thread String Comparison & Equivalence Challenge

Sorry, no idea. I haven't tried. (Most likely it doesn't exist.)

Loading your file into Postgres wasn't hard; I just tried it. YMMV, of course, especially if you don't have PostgreSQL installed yet.

In case it helps, here is a quick-'n-dirty load of your file into a (Postgres!) table.

#!/bin/bash
file=KJV_fixed.csv
schema=public
table=kjv
t=$schema.$table

echo "
drop table if exists $t ;
create table if not exists $t (
  recordnum serial PRIMARY KEY
, book      int
, chapter   int
, verse     int
, text      text
)
" | psql -X

< $file perl -ne 'chomp; my @arr = split(/[,]/, $_, 4); print join("\t", @arr), "\n";' \
| psql -c "
copy $t(book, chapter, verse, text) from stdin (format csv, header false, delimiter E'\t');
analyze $t;
"

Output from that is:

DROP TABLE
CREATE TABLE
Timing is on.
ANALYZE
Time: 326.648 ms

Re^4: String Comparison & Equivalence Challenge
by Polyglot (Friar) on Mar 14, 2021 at 10:51 UTC

    Thank you for taking the time to answer, and for your suggestions.

    I did a little searching and found nGram, but that doesn't seem quite the same thing.

    The math on LanX's "tf-idf" option goes well over my head--to the point where it's not an option for me. (I got an "A" in college Calculus only by studying four hours for it every day, and asking peers lots of questions--but never really understood it, and it's all long gone from my memory.)

    I'm sorely tempted to arrange for some access to a PostgreSQL DB just to try this. But how would one go about checking the similarity index for each verse? By iterating through the table 31,102-squared times? Does the DB generate the index on the fly? How could the results be efficiently stored? (I've never used indexes before, so I'm entirely unfamiliar with the process.)



      That ngram looks to be a similar thing (although details will differ), but it's for MySQL, and I don't know whether it's available for your MariaDB. (I won't be able to help you with it, but perhaps some other monk will step up.)

      In any case, it seems to me you cannot easily compare everything with everything -- it would amount to around a billion comparisons, no? (As you said, 31,000 squared, minus a few.) So that's hardly feasible whichever route you take. You need a reduced plan (I think...).

      As for the 'indexing': pg_trgm in Postgres (and MySQL's ngram as well, I imagine) works by converting the words of a text into triples of characters (trigrams; in the case of ngrams, possibly some number other than n=3), and then comparing the sets of triples that result from each line/verse/record. You can do that without an index, on the fly, or with an index, where all the triples are stored beforehand for later use. Of course, that generates large index files (but with this smallish table of 31,000 records that's still OK).
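      The trigram idea can be sketched in a few lines of Perl. This is only an approximation of pg_trgm's behaviour (assumed here: lowercase the text, pad each word with two leading blanks and one trailing blank, take every 3-character window, and score similarity as shared trigrams divided by total distinct trigrams):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Build the set of character trigrams for a string, roughly the way
# pg_trgm does: lowercase, split into words, pad each word with two
# leading blanks and one trailing blank, then take 3-char windows.
sub trigrams {
    my ($text) = @_;
    my %set;
    for my $word (grep { length } split /\W+/, lc $text) {
        my $padded = "  $word ";
        $set{ substr $padded, $_, 3 } = 1 for 0 .. length($padded) - 3;
    }
    return \%set;
}

# pg_trgm-style similarity: shared trigrams / distinct trigrams overall.
sub similarity {
    my ($s1, $s2) = @_;
    my ($t1, $t2) = (trigrams($s1), trigrams($s2));
    my $shared = grep { exists $t2->{$_} } keys %$t1;
    my %union  = (%$t1, %$t2);
    return %union ? $shared / keys(%union) : 0;
}

printf "%.3f\n", similarity(
    'The grace of our Lord Jesus Christ be with you all. Amen.',
    'The grace of our Lord Jesus Christ be with your spirit. Amen.',
);
</code>
```

      On the verse pair above this prints a number in the same ballpark as the 0.833 that Postgres reported, though not necessarily identical, since pg_trgm's exact tokenization may differ.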

      It only looks complicated because the Wikipedia article lists multiple options for both tf and idf, in order to adjust for different use cases.

      But the explanation is good and there are plenty more articles on the web.

      The basic idea is simple:

      For each search term, like God, you calculate tf(God) for every "document" and multiply it by the globally precalculated idf(God) of your "corpus".

      tf-idf(term, doc) = tf(term, doc) * idf(term, corpus)

      God is a very frequent term, hence its idf will be low. Gomorrah is far less frequent, hence its idf will be high. A document with no mention of God will have tf(God) = 0.


      • Docs = verse
      • Corpus = bible
      A ranking function will combine the tf-idf for all relevant terms, e.g. most trivially by summation:

      $rank += tf_idf($_) for @terms;
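      That summation can be fleshed out into a minimal, self-contained Perl sketch. The toy corpus, the naive whitespace tokenization, and the plain log-ratio idf are my own simplifications; a real implementation would tokenize and weight more carefully:

```perl
use strict;
use warnings;

# Toy corpus: one "document" per verse.
my @docs = (
    'In the beginning God created the heaven and the earth',
    'And God said Let there be light and there was light',
    'Then the Lord rained upon Sodom and upon Gomorrah brimstone and fire',
);

# tf(term, doc): relative frequency of the term within the document.
sub tf {
    my ($term, $doc) = @_;
    my @words = split ' ', lc $doc;
    my $n = grep { $_ eq lc $term } @words;
    return @words ? $n / @words : 0;
}

# idf(term, corpus): log of (documents / documents containing the term).
sub idf {
    my ($term, @corpus) = @_;
    my $df = grep { tf($term, $_) > 0 } @corpus;
    return $df ? log(@corpus / $df) : 0;
}

# Rank one document against a list of search terms by summing tf-idf.
sub rank {
    my ($doc, @terms) = @_;
    my $score = 0;
    $score += tf($_, $doc) * idf($_, @docs) for @terms;
    return $score;
}

printf "%.4f  %s\n", rank($_, qw(God Gomorrah)), $_ for @docs;
```

      As the text above predicts, the frequent term God gets a low idf while the rare Gomorrah gets a high one, so the Sodom-and-Gomorrah verse ranks first for that query.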

      Tf-idf is a cornerstone of NLP; the majority of search engines use it.

      The model is simple, robust and will lead quickly to good results. But you may need to adjust it to your needs for better results.

      Cheers Rolf
      (addicted to the Perl Programming Language :)
      Wikisyntax for the Monastery

      It turns out this text is not so interesting because there is so much 'formula', for want of a better word, and these large 'formula' parts of sentences, while meaning little, give the comparison a high hit number -- see below. By the way, the same will hold for the Algorithm::Diff LCS approach (the Tk program that tybalt89 made you). I think, anyway.

      Yesterday I generated comparisons and kept all pairs scoring above 0.25. This produced a table of almost 40M comparisons with their 'similarity' numbers.

      It 'worked', in a way, but the result is still a bit disappointing because of the type of text this is (I think). A more informational text, with less repetition, less fluff, if you see what I mean, might be more interesting.

      select substring(cmp.sim::text, 1, 5) sim
           , k1.text || chr(10) || k2.text || chr(10)
        from kjv_simil_0_25 cmp   -- big table
        join kjv k1 on            -- table from the KJV file
        join kjv k2 on            -- ,,
       where       = 50           -- look at just this book
         and sim > 0.7            -- remove too different
         and sim < 0.85           -- remove too identical
      ;

        sim  |                           ?column?
      -------+------------------------------------------------------------------------
       0.833 | The grace of our Lord Jesus Christ be with you all. Amen.
             | Brethren the grace of our Lord Jesus Christ be with your spirit. Amen.
             |
       0.833 | The grace of our Lord Jesus Christ be with you all. Amen.
             | The grace of our Lord Jesus Christ be with your spirit. Amen.
             |
       0.769 | Grace be unto you and peace from God our Father and from the Lord Jesus Christ.
             | To Timothy my dearly beloved son: Grace mercy and peace from God the Father and Christ Jesus our Lord.
             |
       0.769 | Grace be unto you and peace from God our Father and from the Lord Jesus Christ.
             | To Titus mine own son after the common faith: Grace mercy and peace from God the Father and the Lord Jesus Christ our Saviour.
             |
       0.736 | Grace be unto you and peace from God our Father and from the Lord Jesus Christ.
             | Grace be with you mercy and peace from God the Father and from the Lord Jesus Christ the Son of the Father in truth and love.
             |
       0.723 | Grace be unto you and peace from God our Father and from the Lord Jesus Christ.
             | Unto Timothy my own son in the faith: Grace mercy and peace from God our Father and Jesus Christ our Lord.
             |
       0.714 | The grace of our Lord Jesus Christ be with you all. Amen.
             | The Lord Jesus Christ be with thy spirit. Grace be with you. Amen.
             |
       0.702 | Now unto God and our Father be glory for ever and ever. Amen.
             | Saying Amen: Blessing and glory and wisdom and thanksgiving and honour and power and might be unto our God for ever and ever. Amen.
      (8 rows)

      Time: 16.557 ms

        You make some very worthwhile observations. From my perspective, the extensive similarities among the various texts are what provide the incentive to identify them. If the text were much more diverse, it would be difficult to find comparisons with a sufficient similarity index to pass the threshold, at which point the results would lose significance.

        Any algorithm that gives less weight to the more common words will, in my opinion, be less able to handle this particular corpus and yield quality results. It is true that many of the keywords are used ubiquitously throughout the Bible: words like Lord, God, father, son, praise, faith, etc. There are also words that are much more common here than in other types of text, such as inherit, blessing, perpetual, covenant, sacrifice, etc. And then, as has been pointed out, many of the words are of Old English form and would not be easy to accommodate with a stemmer: verb forms like hadst, hast, hath, gavest, giveth, rebuketh, etc.; pronouns like thee, thou, thine, and ye; and words like hence, hither, whither, whence, lest, whatsoever, whomsoever, etc. To add a little spice, there are the more unusual vocabulary words, like apothecary and winnowing, and all the wonderful names like Belteshazzar, Nebuchadnezzar, Abaddon, Apollyon, Zaphnathpaaneah, Mahershalalhashbaz, Sennacherib, etc. There are some interesting features of the language, along with some of the more repetitious aspects.
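        As a toy illustration of why a stock stemmer struggles with those archaic forms, one workable fallback is a small hand-built normalization table applied before comparison (the entries below are illustrative, not a complete list):

```perl
use strict;
use warnings;

# A stock stemmer won't recognize KJV-era inflections, but a small
# hand-built lookup can normalize the most common ones before any
# similarity comparison. Entries here are only a sample.
my %archaic = (
    hath   => 'has',   hast   => 'have', hadst => 'had',
    thee   => 'you',   thou   => 'you',  thine => 'your', ye => 'you',
    giveth => 'gives', gavest => 'gave',
);

# Lowercase the text and swap each archaic form for its modern one.
sub normalize {
    my ($text) = @_;
    return join ' ', map { $archaic{$_} // $_ } split ' ', lc $text;
}

print normalize('Thou hast what the Lord giveth'), "\n";
# -> you have what the lord gives
```

        Running both verses of a candidate pair through such a table first would keep, say, "thou hast" and "you have" from dragging down an otherwise high similarity score.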

        The entire corpus contains close to 13,000 words--not far from the average vocabulary of a high-school graduate these days.

        A list of all the words in the KJV Bible, showing the number of occurrences for each, can be found HERE. It helps one to understand the peculiarities of this particular lexicon.


