Re: What is the fastest way to download a bunch of web pages?

by Anonymous Monk
on Mar 03, 2005 at 12:28 UTC


in reply to What is the fastest way to download a bunch of web pages?

Maybe. There are many parameters that determine the most efficient way to download a number of webpages. Some of the more important ones are the capacity of your box (number of CPUs, amount of memory, disk I/O, network I/O, what else is running on it), the capacity of the network between you and the servers you are downloading from, and the setup of the servers you are querying.

If you are really serious about the speed issue, you need to look at your infrastructure. All we can do here is guess, or present our own experience as the absolute truth; both are not uncommon on Perl forums, but neither is very useful to anyone.


Re^2: What is the fastest way to download a bunch of web pages?
by tphyahoo (Vicar) on Mar 03, 2005 at 12:35 UTC
    Thanks for the quick answer.

    Well, like I said, I'm developing on WinXP with ActiveState Perl. The box is modern but nothing special: 512 MB RAM, 1 CPU; GHz I don't know, whatever was standard for new desktops in 2004.

    I have a vanilla 512MB DSL connection with Deutsche Telekom, as far as I know.

    Why would disk I/O matter, and how do I find out my disk I/O? Ditto for the capacity of the network.

    If I accomplish what I want to accomplish, then when this leaves the development phase I may be running the code on a Linux box with more juice. Basically I just want to keep things flexible for the future.

      If you want solid advice based on just a few raw specs, hire a consultant; there are plenty of consultants who will make a quick buck by giving advice based on nothing but the numbers. You're mistaken if you think there's a table that says that for those specs, this and that is the best algorithm.

      As for why disk I/O matters: I'm assuming you want to store your results, and you're downloading a significant amount of data, enough that you can't keep it all in memory. So you have to write to disk, which makes the disk a potential bottleneck (if all the servers you download from are on your local LAN, you could easily get more data per second over the network than your disk can write, depending of course on the disk(s) and the network).
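
      If you want a quick ballpark for sequential write speed, a few lines of Perl along these lines will do; the file name and sizes below are arbitrary, and OS caching will make the number look optimistic:

      use strict;
      use warnings;
      use Time::HiRes qw(time);

      my $file  = 'io_test.tmp';         # hypothetical scratch file
      my $chunk = 'x' x (1024 * 1024);   # 1 MB of data
      my $mb    = 100;                   # write 100 MB in total

      open my $fh, '>', $file or die "open: $!";
      my $start = time;
      print {$fh} $chunk for 1 .. $mb;
      close $fh or die "close: $!";
      my $elapsed = time - $start;

      printf "wrote %d MB in %.2f seconds (%.1f MB/s)\n", $mb, $elapsed, $mb / $elapsed;
      unlink $file;

      Compare that figure against what your network connection can deliver to see which side is the likely limit.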

      Of course, if all you care about is downloading a handful of pages, each from a different server, in a reasonably short time, perhaps something as simple as:

      system "wget $_ &" for @urls;
      will be good enough. But that doesn't work well if you need to download 10,000 documents, all from the same server.
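
      For thousands of documents, one possible (untested) sketch is to cap the number of simultaneous downloads with Parallel::ForkManager and LWP::Simple; the limit of 10 workers and the output file names are arbitrary, and since fork is only emulated on Windows, this fits a Linux box better than WinXP:

      use strict;
      use warnings;
      use LWP::Simple qw(getstore);
      use Parallel::ForkManager;

      my @urls = @ARGV;                          # list of URLs to fetch
      my $pm   = Parallel::ForkManager->new(10); # at most 10 children at once

      for my $i (0 .. $#urls) {
          $pm->start and next;                   # fork; parent moves on to the next URL
          my $status = getstore($urls[$i], "page_$i.html");
          warn "$urls[$i]: $status\n" unless $status == 200;
          $pm->finish;                           # child exits
      }
      $pm->wait_all_children;

      Even then, it pays to keep the number of concurrent requests modest when everything comes from the same server.
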
        I had monkeyed with wget, but wget doesn't handle cookies, POST requests, and redirects nearly as well as LWP, hence I'm doing it this way. My real program does more complicated stuff than what I presented in this post, but I just wanted to keep things simple.
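
        For the record, a minimal sketch of the LWP features I mean (cookie jar, POST, redirects); the URL and form fields here are made up for illustration:

        use strict;
        use warnings;
        use LWP::UserAgent;
        use HTTP::Cookies;

        my $ua = LWP::UserAgent->new(
            cookie_jar => HTTP::Cookies->new(file => 'cookies.txt', autosave => 1),
        );
        # LWP only follows redirects for GET/HEAD by default; allow them for POST too.
        push @{ $ua->requests_redirectable }, 'POST';

        # POST a form; any cookies the server sets are kept for later requests.
        my $res = $ua->post(
            'http://www.example.com/login',        # made-up URL
            { user => 'me', pass => 'secret' },    # made-up form fields
        );
        die 'POST failed: ', $res->status_line unless $res->is_success;
        print $res->content;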

        With regards to the bottleneck, I don't think this will be a problem. I'm not writing a web crawler; this is just something to automate a bunch of form POST requests and massage the data I get back. But that doesn't matter for getting the threading part right.
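
        As a rough, untested sketch of the threading part, a fixed pool of worker threads can pull form-POST jobs off a queue; the pool size and the form field below are made up:

        use strict;
        use warnings;
        use threads;
        use Thread::Queue;
        use LWP::UserAgent;

        my $queue   = Thread::Queue->new;
        my $workers = 5;                           # arbitrary pool size

        sub worker {
            my $ua = LWP::UserAgent->new;          # one user agent per thread
            while (defined(my $url = $queue->dequeue)) {
                my $res = $ua->post($url, { query => 'example' });  # made-up form field
                print "$url: ", $res->status_line, "\n";
            }
        }

        my @pool = map { threads->create(\&worker) } 1 .. $workers;

        $queue->enqueue($_) for @ARGV;             # the URLs to POST to
        $queue->enqueue(undef) for 1 .. $workers;  # one undef per worker ends its loop
        $_->join for @pool;

        ActiveState's 5.8 builds ship with ithreads enabled, so this should run on the WinXP box as well.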

        I will eventually be storing stuff in MySQL, but this is a future PM question....
