Well, I have to regularly index a few million documents for a small intranet search engine.
Then you asked the wrong question. The right one is: "What is the fastest way to index a few million documents for a small intranet search engine?"
The answer, as I recently learned from tachyon, is Swish-e. Of course, you'll also want to grab the Perl interface, SWISH, from CPAN.
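For flavor, here is a minimal configuration sketch for Swish-e. The directive names (`IndexDir`, `IndexFile`, `IndexOnly`) come from the SWISH-E documentation; the paths and suffixes are just example values, not anything from the original post.

```
# swish.conf -- minimal example; adjust paths to your site
IndexDir  /var/www/intranet     # directory tree to crawl
IndexFile ./intranet.index      # where the index is written
IndexOnly .html .htm .txt       # restrict indexing by file suffix
```

You would then build the index with `swish-e -c swish.conf` and query it from Perl through the CPAN interface.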
-sauoq
"My two cents aren't worth a dime.";
I concur on Swish as well. Granted, I used it 8 years ago, but it was an excellent tool.
------
We are the carpenters and bricklayers of the Information Age.
Don't go borrowing trouble. For programmers, this means: worry only about what you need to implement.
Please remember that I'm crufty and crochety. All opinions are purely mine and all code is untested, unless otherwise specified.
Are you trying to actually parse the HTML, or just strip it to text for indexing? How are you doing the indexing? You may want to run some benchmarks to see where your code is spending the most time. Look at Devel::Profile and Benchmark to help see where the actual slowdowns are happening. In my experience the strip-to-text step is very fast, and the indexing and updating the db is the slow part.
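A quick way to confirm which phase dominates is Benchmark's `cmpthese`. This is a sketch with toy stand-in subs (`strip_html`, `index_words` are hypothetical placeholders, not code from the original post); swap in your real strip and index routines:

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Stand-in for the strip phase: crude tag removal on a copy of the text.
sub strip_html {
    my $t = shift;
    $t =~ s/<[^>]+>//g;
    return $t;
}

# Stand-in for the index phase: count words into an in-memory hash.
my %index;
sub index_words { $index{$_}++ for split ' ', shift }

my $doc = '<p>some sample intranet text</p>' x 200;

# Run each variant 500 times and print a comparison table.
cmpthese(500, {
    strip_only      => sub { strip_html($doc) },
    strip_and_index => sub { index_words( strip_html($doc) ) },
});
```

If `strip_and_index` is far slower than `strip_only`, the time is going into indexing, not parsing, and that is where to optimize.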
-Waswas
You made me think about another possible improvement. As I said, I only use text and some layout tags, so I could use the report_tags() method of HTML::Parser to suppress all the unneeded junk.
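That idea might look something like the sketch below. It uses HTML::Parser's `report_tags` to limit start/end events to a few layout tags, plus `ignore_elements` to drop script/style content entirely; the tag list is an example, not the poster's actual set:

```perl
use strict;
use warnings;
use HTML::Parser;

my @text;
my $p = HTML::Parser->new(
    api_version => 3,
    # Collect decoded text chunks (entities like &amp; are expanded).
    text_h => [ sub { push @text, shift }, 'dtext' ],
);

# Only report start/end events for tags we care about; the rest are
# skipped inside the parser, which is where the speedup comes from.
$p->report_tags(qw(title h1 h2 h3 p a));

# Drop the contents of script and style blocks entirely.
$p->ignore_elements(qw(script style));

$p->parse('<html><script>junk()</script><p>Hello &amp; welcome</p></html>');
$p->eof;
print join(' ', @text), "\n";
```

For pure text extraction the text handler plus `ignore_elements` does most of the work; `report_tags` mainly helps when you also handle start/end events for the layout tags.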