PerlMonks  

Re^3: Advice on Efficient Large-scale Web Crawling

by salva (Abbot)
on Dec 19, 2005 at 14:22 UTC ( #517731=note )


in reply to Re^2: Advice on Efficient Large-scale Web Crawling
in thread Advice on Efficient Large-scale Web Crawling

To achieve this with the current architecture I'm limited to about 12 -15 concurrent processes

that limit seems too low for the task you want to accomplish, especially if you have a good internet connection. Have you actually tried increasing it to 30 or even 50? Forking is not so expensive on modern Unix/Linux systems with support for COW (copy-on-write).
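A minimal sketch of what raising that limit looks like with a plain fork() pool. The worker count, the URL list and the fetch() stub are illustrative placeholders, not anything from the original thread:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical settings: 50 workers instead of the 12-15 in the question,
# and a fake work list standing in for the real crawl frontier.
my $MAX_WORKERS = 50;
my @urls = map { "http://example.com/page$_" } 1 .. 100;

sub fetch {
    my ($url) = @_;
    # a real crawler would use LWP or similar here; this is a stub
    return 1;
}

my %kids;
for my $url (@urls) {
    # block until a slot frees up when the pool is full
    while (keys %kids >= $MAX_WORKERS) {
        my $done = waitpid(-1, 0);
        delete $kids{$done};
    }
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {        # child: do one unit of work and leave
        fetch($url);
        exit 0;
    }
    $kids{$pid} = 1;        # parent: track the child by pid
}
# reap whatever is still running
while ((my $done = waitpid(-1, 0)) > 0) {
    delete $kids{$done};
}
```

On a COW system each child shares the parent's pages until it writes to them, so the per-fork cost is mostly bookkeeping, not a copy of the whole interpreter.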

update: actually, much of the overhead generated by the forked processes can be caused by perl cleaning up everything at exit. On Unix, this cleanup is mostly useless, and you can get rid of it by calling

exec $ok ? '/bin/true' : '/bin/false';
instead of exit($ok) to finalize child processes. Just remember to first close any files you have written to.
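The exec trick above, sketched end to end: the child does its work, flushes and closes what it wrote, then replaces the perl image with /bin/true or /bin/false so perl's global destruction never runs. The $ok flag and the child's "work" are placeholders:

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    # ... child does its crawling work here ...
    my $ok = 1;                 # hypothetical success flag for this child

    # close anything written to before exec wipes the process image
    close STDOUT or $ok = 0;

    # exec replaces the process without running perl's global destruction,
    # so the (mostly useless on Unix) cleanup pass is skipped entirely;
    # the exit status still reaches the parent via /bin/true or /bin/false
    exec $ok ? '/bin/true' : '/bin/false';
    die "exec failed: $!";      # only reached if exec itself fails
}

waitpid($pid, 0);
my $status = $? >> 8;           # 0 when the child exec'd /bin/true
```

The parent still sees a normal exit status, so nothing changes on the reaping side; only the child's shutdown path gets cheaper.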

Replies are listed 'Best First'.
Re^4: Advice on Efficient Large-scale Web Crawling
by Celada (Monk) on Dec 19, 2005 at 15:28 UTC

    That's what POSIX::_exit is for. Exit the process without giving Perl (or anything else) a chance to clean up.
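A small sketch of that alternative, assuming the same fork-per-task setup: POSIX::_exit leaves the process immediately, skipping END blocks, destructors and stdio flushing, so any buffered output has to be flushed by hand first.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Handle;
use POSIX ();

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    # ... child work here ...

    # _exit skips perl-level and stdio cleanup, so flush explicitly
    STDOUT->flush;

    # leave right now: no END blocks, no destructors, no global destruction
    POSIX::_exit(0);
}

waitpid($pid, 0);
my $status = $? >> 8;   # 0: the child reached _exit(0)
```

Compared with the exec trick, this avoids spawning /bin/true, at the price of remembering that nothing buffered survives the call.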
