CLucene module for perl

by dpavlin (Friar)
on Dec 01, 2003 at 19:43 UTC ( [id://311384] )


in reply to Simple Text Indexing

Ever since I found CLucene - a C++ search engine - I have been dreaming of a Perl module for it. Since XS is a mystery to me, I started examining Inline::CPP. However, my C++ skills are not quite up to that task yet.
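For the curious, the general Inline::CPP binding pattern looks something like this -- a toy stand-in class, not actual CLucene bindings:

    use strict;
    use warnings;
    use Inline CPP => q{
        class Searcher {
          public:
            Searcher() { }
            int count(char *term) {
                // a real binding would call into CLucene here
                return 0;
            }
        };
    };

    my $s = Searcher->new;            # Inline::CPP generates new() for us
    print $s->count("perl"), "\n";    # prints 0 from the stub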

I'm aware that there is the GNU mifluz engine, which can also do the job. However, the Perl module for it, Search::Mifluz, is again XS, and it doesn't work with the current version of mifluz.

Any help with either of these issues from the Perl community would be greatly appreciated.


Replies are listed 'Best First'.
Re: CLucene module for perl
by cyocum (Curate) on Dec 02, 2003 at 22:18 UTC

    Thanks for the information! The only issue is that mifluz still has the same problem as before: it does not store where in the file a word is, only that it is in the file. Take a look at the Introduction.

    I am beginning to believe that there needs to be a fundamental change in the way people think about text indexing. All the text indexing projects that I have seen only store that a word is in a file. They need to start behaving more like an index found in the back of a good academic book.

      Several points from some similar work I have been dabbling in, on and off, for some time now...

      If you have a file format which allows comments or anchors (HTML etc.), I've found it really is easiest for indexing to set up a two-pass process: the first pass sets up appropriate markers at reasonable intervals, the second pulls your word list out, ideally as a hash of words pointing to lists of markers or tags. Alternatively, a process which uses paragraph numbers, line numbers, or simply file offsets usable by a seek may suffice -- see the sketch below.
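      A minimal sketch of the offset-based variant, assuming plain text, one marker per line, and crude \W tokenisation (the search term is a placeholder):

          #!/usr/bin/perl
          use strict;
          use warnings;

          # Build a hash of words pointing to lists of byte offsets,
          # so a later seek() can jump straight to each occurrence.
          my %index;
          my $file = shift or die "usage: $0 file\n";
          open my $fh, '<', $file or die "$file: $!";
          while (1) {
              my $offset = tell $fh;            # marker for this line
              my $line   = <$fh>;
              last unless defined $line;
              for my $word (split /\W+/, lc $line) {
                  next unless length $word;
                  push @{ $index{$word} }, $offset;
              }
          }

          # Lookup: seek straight to each recorded offset.
          my $term = 'index';                   # placeholder term
          for my $pos (@{ $index{$term} || [] }) {
              seek $fh, $pos, 0;                # seek also clears EOF
              print "$pos: ", scalar <$fh>;
          }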

      You will need a more extensive stop-list for large bodies of text -- in fact, for really large ones you need to develop your own, suited to the text concerned. Some frequency analysis may assist here. Also see perlindex, which uses the __DATA__ area as a store for a longer list.
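      For the frequency-analysis part, something as blunt as the following can suggest stop-list candidates; the 0.5% cutoff is entirely arbitrary and wants tuning per corpus:

          #!/usr/bin/perl
          use strict;
          use warnings;

          # Count word frequencies across the files named on the command
          # line (or STDIN); the most common words are stop-list candidates.
          my (%freq, $total);
          while (<>) {
              for my $w (split /\W+/, lc) {
                  next unless length $w;
                  $freq{$w}++;
                  $total++;
              }
          }
          die "no words found\n" unless $total;
          print "$_\n"
              for sort grep { $freq{$_} / $total > 0.005 } keys %freq;

          # A fixed list can then live under __DATA__, as perlindex does:
          #   my %stop = map { chomp; $_ => 1 } <DATA>;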

      My preferred technique with a corpus of plain text is actually to convert it (using perl, naturally) into HTML, inserting copious anchors for indexed points. This means I can view segments in a browser for context checking.
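      A rough sketch of that conversion, dropping one named anchor per paragraph so index entries can point at, say, "#p42" (the naming scheme is my own):

          #!/usr/bin/perl
          use strict;
          use warnings;

          $/ = '';                              # paragraph mode
          my $n = 0;
          print "<html><body>\n";
          while (my $para = <>) {
              $n++;
              chomp $para;
              for ($para) {                     # minimal HTML escaping
                  s/&/&amp;/g;                  # (ampersands first)
                  s/</&lt;/g;
                  s/>/&gt;/g;
              }
              print qq{<p><a name="p$n"></a>$para</p>\n};
          }
          print "</body></html>\n";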

      (I assume you can always convert back, recording, say, paragraph numbers, if you need the text back.)

      Frankly, the above for me is the easy bit. The hard bit is the establishment of context for an index marker, and the correct addition of synonyms to the index for extra terms not otherwise included in the text. That's why I find the HTML conversion and viewing really works best for me. There's still no substitute for human judgement on the context indexing question...

      WordNet modules may be the answer to the synonym problem here. That's the bit I'm looking at now.
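      If WordNet::QueryData turns out to be the one, the lookup side is pleasantly short -- though sense numbering and output depend on the WordNet release installed:

          use strict;
          use warnings;
          use WordNet::QueryData;

          # Gather the synset members for each noun sense of a term,
          # so they can be merged into the index beside the literal word.
          my $wn   = WordNet::QueryData->new;
          my $term = shift || 'book';
          for my $sense ($wn->querySense("$term#n")) {   # e.g. "book#n#1"
              my @syns = $wn->querySense($sense, 'syns');
              print "$sense: @syns\n";
          }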
