Re^2: (almost) preserving a web page

by punkish (Priest)
on Jun 18, 2011 at 22:31 UTC [id://910365]


in reply to Re: (almost) preserving a web page
in thread (almost) preserving a web page

It is possible that my original question was not clear enough, and hence, something else got answered. On the other hand, it is also possible that your answers are actually leading me to the right solution, but I can't see it yet. So, more discussion follows --

I don't really want to get text via JavaScript on a page by page basis. If I had only one, predictable web site, perhaps I could devise a mechanism to work around its idiosyncrasies.

However, what I have is an application that visits 30 different web sites on a periodic basis. It extracts the links from the "front page" of each of these web sites, discarding all the links that point outside the base domain. Then, it follows each one of those links. So, if we have an average of 10 links in the text of each web site's front page, the program will visit roughly 30 * 10 = 300 web pages.
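
To make the crawl step concrete, here is a rough sketch of how it might look with LWP::UserAgent and HTML::LinkExtor. The site list and the same-domain test are placeholders for illustration, not the actual application code:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use LWP::UserAgent;
    use HTML::LinkExtor;
    use URI;

    # placeholder list -- this would really be the 30 front pages
    my @front_pages = ('http://example.com/');

    my $ua = LWP::UserAgent->new( timeout => 30 );

    for my $front (@front_pages) {
        my $resp = $ua->get($front);
        next unless $resp->is_success;

        my $base = URI->new($front);
        my @links;

        # collect <a href="..."> targets, resolved against the page URL
        my $extor = HTML::LinkExtor->new( sub {
            my ( $tag, %attr ) = @_;
            return unless $tag eq 'a' and $attr{href};
            my $abs = URI->new_abs( $attr{href}, $base );
            # discard anything that points outside the base domain
            push @links, $abs
                if $abs->can('host') and $abs->host eq $base->host;
        } );
        $extor->parse( $resp->decoded_content );

        # ... follow each of @links and process the pages (see below)
    }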

For each of the web pages that it visits, it downloads the content, makes a copy, and strips out all the HTML tags from the copy. Then, it searches the plain text for certain keywords. If the keywords are present, it stores the plain-text version in a full-text search (FTS) table (using SQLite's FTS4 implementation), and also stores the original web source, HTML tags and all.
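
The store step could then look roughly like this, assuming HTML::Strip for the tag stripping and DBD::SQLite for the FTS4 table; the schema, database file, and keyword list are made up for illustration:

    use strict;
    use warnings;
    use DBI;
    use HTML::Strip;

    my $dbh = DBI->connect( 'dbi:SQLite:dbname=pages.db', '', '',
                            { RaiseError => 1, AutoCommit => 1 } );

    # one FTS4 table for the searchable plain text, one plain table
    # for the original HTML source
    $dbh->do('CREATE VIRTUAL TABLE IF NOT EXISTS page_fts USING fts4(url, body)');
    $dbh->do('CREATE TABLE IF NOT EXISTS page_html (url TEXT PRIMARY KEY, html TEXT)');

    my @keywords = qw(foo bar);    # placeholder keyword list

    sub store_page {
        my ( $url, $html ) = @_;

        # strip the tags from a copy of the page
        my $hs    = HTML::Strip->new;
        my $plain = $hs->parse($html);
        $hs->eof;

        # keep only pages that mention at least one keyword
        return unless grep { $plain =~ /\Q$_\E/i } @keywords;

        $dbh->do( 'INSERT INTO page_fts (url, body) VALUES (?, ?)',
                  undef, $url, $plain );
        $dbh->do( 'INSERT OR REPLACE INTO page_html (url, html) VALUES (?, ?)',
                  undef, $url, $html );
    }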

At a later time, the user arrives at the application web page and is able to search the FTS content for various terms. If matching content is found, a link is presented to the user so the original web page may be examined. On clicking the link, the original web page (also stored in the database) is presented in an iFrame.
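
The later search is then just an FTS MATCH query against that table, along these lines (table and column names as assumed above; the viewer URL is hypothetical):

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect( 'dbi:SQLite:dbname=pages.db', '', '',
                            { RaiseError => 1 } );

    my $search_term = 'example';    # whatever the user typed into the form

    # full-text search against the FTS4 table sketched above
    my $sth = $dbh->prepare('SELECT url FROM page_fts WHERE page_fts MATCH ?');
    $sth->execute($search_term);

    while ( my ($url) = $sth->fetchrow_array ) {
        # each hit links to a viewer page that loads the stored HTML in an iframe
        print qq{<a href="/view?url=$url">$url</a><br>\n};
    }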

For the most part, having the exact content as it originally was is a good thing: it allows reconstructing the original web page as truthfully as possible. Sometimes this tactic fails, and more often than not the failure is because of JavaScript in the original page firing off and doing something wonky.

So, the intent is to be able to view, in a fool-proof and universally applicable manner, the original web page as it appeared when it was published.



when small people start casting long shadows, it is time to go to bed

Re^3: (almost) preserving a web page
by Anonymous Monk on Oct 14, 2011 at 08:28 UTC

    httrack does that by mining the JavaScript for links; it gets the more common ones but doesn't get them all, and some JavaScript will redirect you from your local copy back to the internet.

    http://crawler.archive.org/ does that by inserting its own JavaScript, which does URL rewriting so the images show up (even the dynamic ones), but like httrack, the actual links are rewritten ...

    Then there is Mozilla Archive Format (with Faithful Save), which does a much better version of save-as; it's close to perfect :)

    Another common tactic is to print-to-PDF from a browser like Firefox via automation; a rough sketch follows.
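
    A minimal sketch of scripting that from Perl, using Chromium's headless print flag as a stand-in (the binary name and flag are assumptions; Firefox would need a different automation route):

        use strict;
        use warnings;

        my $url = 'http://example.com/';    # placeholder URL
        my $pdf = 'page.pdf';

        # hand the page to a headless browser and let it render the PDF
        system( 'chromium', '--headless', '--disable-gpu',
                "--print-to-pdf=$pdf", $url ) == 0
            or warn "headless print failed: $?\n";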
