Eyck has asked for the wisdom of the Perl Monks concerning the following question:
I'm trying to imitate a web browser downloading a page. I have an array containing all of the page's components, and then use WWW::Mechanize to download them:
    use strict;
    use warnings;
    use WWW::Mechanize;
    use Time::HiRes qw(time);

    my $mech = WWW::Mechanize->new;   # was missing from the original snippet

    my $links = [
        "http://web-page.to.download.to/",
        "http://static.to.download.to/background.jpg",
        "http://static.to.download.to/first.css",
        "http://www.google-analytics.com/ga.js",
        "http://static.ak.fbcdn.net/rsrc.php/v2/yl/r/6KM-54hh6R2.css",
    ];

    my $start = time;
    $mech->get($_) for @$links;
    my $stop = time;
This works more or less as intended, but there are two problems. Since the list of links is dynamic, and partly created by javascript, I had to use a real browser to compile it.
I need a way of parsing a web page and getting a list of all its components; that is my first problem.
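For the static part of that list, something like the following rough sketch works. It pulls `src`/`href` attributes out of the HTML with a regex; `static_components` and the sample HTML are my own illustration, not an existing API, and a regex is only a stopgap: for real work a proper parser such as HTML::TreeBuilder or Mojo::DOM is safer, and WWW::Mechanize itself offers `find_all_links` and `find_all_images` on a fetched page. No static parse, regex or otherwise, will see the resources that javascript adds at runtime.

```perl
use strict;
use warnings;

# Rough sketch: collect candidate sub-resources (images, scripts,
# stylesheets) from static HTML.  Hypothetical helper, not a CPAN API.
sub static_components {
    my ($html) = @_;
    my @urls;
    # <img src=...> and <script src=...>
    push @urls, $html =~ /<(?:img|script)[^>]*\bsrc\s*=\s*["']([^"']+)["']/gi;
    # <link href=...> (stylesheets, icons, ...)
    push @urls, $html =~ /<link[^>]*\bhref\s*=\s*["']([^"']+)["']/gi;
    return @urls;
}

# Sample page using the same URLs as the hand-built list above.
my $html = <<'HTML';
<html><head>
<link rel="stylesheet" href="http://static.to.download.to/first.css">
<script src="http://www.google-analytics.com/ga.js"></script>
</head><body>
<img src="http://static.to.download.to/background.jpg">
</body></html>
HTML

my @components = static_components($html);
# @components now holds the three sub-resource URLs from the sample page
```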
The other problem is that I'm serializing all the downloads here. I should be doing something closer to what browsers do, perhaps running 4 concurrent downloaders. How can I emulate 4 concurrent download threads?
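One way, using only core `fork`/`waitpid`, is a small worker pool: fork a child per URL but never let more than 4 run at once. `fetch_concurrently` and its parameters are names I made up for this sketch; Parallel::ForkManager from CPAN packages the same pattern ready-made.

```perl
use strict;
use warnings;

# Sketch of a poor man's worker pool: run $work->($link) for every
# link, with at most $max_workers child processes alive at a time.
sub fetch_concurrently {
    my ($links, $max_workers, $work) = @_;
    my %running;    # pid => 1 for children still alive
    my $reaped = 0;

    for my $link (@$links) {
        # Throttle: when the pool is full, wait for one child to finish.
        while (keys %running >= $max_workers) {
            my $pid = waitpid(-1, 0);
            delete $running{$pid};
            $reaped++;
        }
        my $pid = fork;
        die "fork failed: $!" unless defined $pid;
        if ($pid == 0) {            # child: handle one URL, then exit
            $work->($link);
            exit 0;
        }
        $running{$pid} = 1;         # parent: remember the busy worker
    }
    # Drain the remaining workers.
    while (keys %running) {
        my $pid = waitpid(-1, 0);
        delete $running{$pid};
        $reaped++;
    }
    return $reaped;                 # number of children that completed
}
```

In the callback each forked child gets its own copy of the parent's state, so you can either reuse a `$mech` created before the loop or build a fresh `WWW::Mechanize->new` inside `$work`; with `Time::HiRes::time` around the call to `fetch_concurrently($links, 4, sub { ... })` you get the wall-clock figure you were after.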
Replies are listed 'Best First'.
Re: Timing web page download.
by Anonymous Monk on Jul 11, 2012 at 08:32 UTC
Re: Timing web page download.
by tospo (Hermit) on Jul 11, 2012 at 08:25 UTC
  by Eyck (Priest) on Jul 11, 2012 at 09:21 UTC
  by Anonymous Monk on Jul 11, 2012 at 09:32 UTC
Re: Timing web page download.
by phatWares (Initiate) on Jul 11, 2012 at 12:07 UTC
  by Eyck (Priest) on Jul 11, 2012 at 12:14 UTC
Re: Timing web page download.
by Sinistral (Monsignor) on Jul 11, 2012 at 13:19 UTC
  by Eyck (Priest) on Jul 12, 2012 at 12:23 UTC
  by Sinistral (Monsignor) on Jul 12, 2012 at 18:32 UTC
Re: Timing web page download.
by mrguy123 (Hermit) on Jul 11, 2012 at 15:24 UTC
Re: Timing web page download.
by sundialsvc4 (Abbot) on Jul 12, 2012 at 12:48 UTC