
(OP here)

salva, yes, that was my thinking too. With the advice here and quite a few optimisations, it looks as if I can push the rate up further; more tweaking, I think. That exit advice is new to me, though. How would I use it in the context of Parallel::ForkManager?
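
My naive guess at how that would look -- purely a sketch, assuming the advice means ending each child with POSIX::_exit instead of $pm->finish, so the child skips END blocks, destructors, and stdio flushing (fetch_and_store is a placeholder):

    use strict;
    use warnings;
    use POSIX ();
    use Parallel::ForkManager;

    my @urls = @ARGV;                          # the batch, for illustration
    my $pm   = Parallel::ForkManager->new(20); # 20 concurrent children (arbitrary)

    for my $url (@urls) {
        $pm->start and next;     # parent: spawn a child, move to the next URL
        fetch_and_store($url);
        POSIX::_exit(0);         # leave at once: no global destruction in the child
        # ($pm->finish would call exit() here, which does run destruction)
    }
    $pm->wait_all_children;      # parent still reaps the children normally

    sub fetch_and_store {        # stub for the real per-URL work
        my ($url) = @_;
        # ... fetch $url and write the response to disk ...
    }

Whether skipping $pm->finish loses anything important is exactly what I'd like to know.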

matija, good point. I'll eventually move to ReiserFS, which has superb support for large numbers of files, but I should probably adopt your approach now. I agree that it would probably give better performance.
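
Something like this is the layout I have in mind in the meantime -- a sketch only; spool/ and path_for are names I've made up, and the two-level split on the URL's MD5 keeps any one directory small:

    use strict;
    use warnings;
    use Digest::MD5 qw(md5_hex);
    use File::Path qw(mkpath);

    # Map a URL to spool/<aa>/<bb>/<md5>, where aa and bb are the first
    # two pairs of hex digits of the URL's MD5. 65536 buckets means even
    # millions of files leave each directory with only a handful.
    sub path_for {
        my ($url) = @_;
        my $hash  = md5_hex($url);
        my ($a, $b) = (substr($hash, 0, 2), substr($hash, 2, 2));
        my $dir = "spool/$a/$b";
        mkpath($dir) unless -d $dir;
        return "$dir/$hash";
    }

    print path_for('http://www.example.com/'), "\n";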

Regarding HTTP::GHTTP and HTTP::MHTTP: MHTTP doesn't support the Host header and thus can't handle name-based virtual hosts. GHTTP is indeed nice, but it supports neither HTTPS nor the feature set of LWP. (My main attraction to HTTP::Lite was that it was pure Perl and easy enough to hack to get the remote IP address; now that I can get that from LWP, Lite is less useful.) It looks as if LWP can use GHTTP internally, though, which sounds like a win-win. :-) I'll have to run some benchmarks on this...
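
The benchmark would be along these lines -- a sketch, assuming LWP::Protocol::implementor is the right way to swap in the LWP::Protocol::GHTTP backend, with www.example.com standing in for a real test URL:

    use strict;
    use warnings;
    use Time::HiRes qw(time);
    use LWP::UserAgent;
    use LWP::Protocol;

    my $url = 'http://www.example.com/';   # stand-in test URL

    sub time_fetches {
        my ($label, $n) = @_;
        my $ua = LWP::UserAgent->new(timeout => 30);
        my $t0 = time;
        $ua->get($url) for 1 .. $n;
        printf "%-8s %d fetches in %.2fs\n", $label, $n, time - $t0;
    }

    time_fetches('default', 50);

    # Switch the http scheme over to the libghttp-backed implementation
    # and measure the same fetches again.
    require LWP::Protocol::GHTTP;
    LWP::Protocol::implementor(http => 'LWP::Protocol::GHTTP');
    time_fetches('ghttp', 50);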

merlyn, I'm afraid I do have to hit this number of external URLs. :-) It's for a research project that has real merit. (I don't agree that we don't need a better search engine, but I suppose that's academic.) I'm going some way toward supporting the Robots Exclusion Protocol: I pre-process the list of URLs to identify the few hosts that will be hit more than a couple of times, then fetch their robots.txt; if they forbid crawling, I nix them from the input. Working with batches indexed by the hash of the URL also sharply reduces the risk of hitting any server too hard: a host would have to have more than a trivial number of URLs in the index whose hashes share at least the same two leading characters (even more once I implement matija's suggestion). I just wrote a script to double-check this, and only two hosts have multiple URLs in the same job bin: one has 2, the other 3. I appreciate your concern -- I run large sites myself and am perfectly aware of the damage a runaway spider can cause. ;-)
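
For the curious, the robots.txt pre-filtering amounts to roughly this -- a sketch, with the agent string made up and WWW::RobotRules doing the actual parsing:

    use strict;
    use warnings;
    use LWP::Simple qw(get);
    use WWW::RobotRules;
    use URI;

    my @urls = @ARGV;                      # the batch, for illustration
    my %by_host;
    $by_host{ URI->new($_)->host }++ for @urls;

    my $rules = WWW::RobotRules->new('ResearchCrawler/0.1');

    # Only hosts hit more than a couple of times are worth a robots.txt fetch.
    for my $host (grep { $by_host{$_} > 2 } keys %by_host) {
        my $robots_url = "http://$host/robots.txt";
        my $txt = get($robots_url);
        $rules->parse($robots_url, $txt) if defined $txt;
    }

    # Nix anything the rules forbid before the job bins are built.
    @urls = grep { $rules->allowed($_) } @urls;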


In reply to Re: Advice on Efficient Large-scale Web Crawling by Anonymous Monk
in thread Advice on Efficient Large-scale Web Crawling by Anonymous Monk
