PerlMonks
Re: Can your site handle this?

by Sewi (Friar)
on Nov 05, 2011 at 21:48 UTC ( [id://936192] )


in reply to Can your site handle this?

Don't post DoS source in public.

People who understand the problem are usually able to write this little script themselves, but now anyone looking to abuse the internet can simply copy your sample and (try to) take down any site.

I don't think this source would do the job either, because all 128 tasks would run on the client, and few client machines can handle that. LWP::Parallel::UserAgent might do a better job here.
Finally, Apache and most other webservers can queue some pending connection requests and thus handle more incoming connections than they have workers.
If the starting page isn't too heavy, the webserver won't even reach a DoS state, because one request will be processed before the client can give another task enough CPU time to send the next request.
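The queuing Sewi describes happens in the kernel's TCP accept backlog, before the webserver application ever sees the connection. A minimal, self-contained sketch in core Perl (the backlog of 5 and the 3 clients are arbitrary illustrative numbers):

```perl
use strict;
use warnings;
use IO::Socket::INET;

# Listen => 5 asks the kernel to queue up to ~5 completed connections
# before refusing new ones; Apache's ListenBacklog directive tunes the
# same kernel parameter.
my $server = IO::Socket::INET->new(
    LocalAddr => '127.0.0.1',
    LocalPort => 0,            # any free port
    Listen    => 5,
    Proto     => 'tcp',
) or die "listen: $!";

my $port = $server->sockport;

# Open several client connections without the server accept()ing any of
# them - they sit in the kernel's accept queue, not in the application.
my @clients;
for my $i (1 .. 3) {
    my $c = IO::Socket::INET->new(
        PeerAddr => '127.0.0.1',
        PeerPort => $port,
        Proto    => 'tcp',
    ) or die "connect $i: $!";
    push @clients, $c;
}
print scalar(@clients), " connections queued without a single accept()\n";
```

This is why a burst of simultaneous requests doesn't instantly overwhelm a server with fewer workers than requests: the excess simply waits in the backlog.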

Replies are listed 'Best First'.
Re^2: Can your site handle this?
by Anonymous Monk on Nov 05, 2011 at 23:25 UTC
    Don't post DoS source in public.

    pfft, even the most lame-ass script kiddie can download far better DoS tools than this from a plethora of locations.

    The script is more about stress testing than DoSing, tbh: very quickly you run into your own pipe's bandwidth limit anyway, and even a tiny $20-a-month server that is coded properly can handle the worst anyone could throw at it with a script like this.
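The pipe-limit argument is easy to quantify. A back-of-envelope sketch (the 10 Mbit/s uplink and 500-byte request size are made-up illustrative numbers, not measurements):

```perl
use strict;
use warnings;

# Hypothetical numbers for illustration only.
my $uplink_bits_per_sec = 10_000_000;   # a 10 Mbit/s home connection
my $request_bytes       = 500;          # one small GET including headers

# The attacker's own upload bandwidth caps the request rate, no matter
# how many worker processes the script forks.
my $max_requests_per_sec = $uplink_bits_per_sec / ($request_bytes * 8);
printf "upload-bound ceiling: %d requests/sec\n", $max_requests_per_sec;
```

A couple of thousand small requests per second is well within what a properly coded cheap server can absorb, which is the commenter's point.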

    A Quad Core Phenom II can easily handle running 128 workers, and when you remove the bandwidth bottleneck by running this script against localhost, the result is 100% CPU utilisation for quite a short period of time before the test is complete. (a matter of seconds rather than minutes)

    The point is to see how well the server copes with a large number of concurrent requests, which an inefficient system would struggle with: it would exceed the available memory and start to grind around in virtual memory before crashing.
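That memory-exhaustion failure mode is easy to put in numbers. A rough sketch (the 25 MB-per-worker figure is a hypothetical example for a prefork-style server, not a measurement):

```perl
use strict;
use warnings;

my $rss_per_worker_mb = 25;    # hypothetical private RSS of one worker process
my $concurrent        = 128;   # simultaneous in-flight requests

# A server that dedicates one heavyweight process per connection needs
# this much RAM just to hold the concurrent requests.
my $needed_mb = $rss_per_worker_mb * $concurrent;
printf "%d workers x %d MB = %d MB needed\n",
    $concurrent, $rss_per_worker_mb, $needed_mb;
```

On a box with less RAM than that, the server starts swapping, which is exactly the thrashing-before-crashing scenario the comment describes.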

    That's why, on modern hardware with multi-core, multi-GHz processors, CPU utilisation matters less these days than memory utilisation; older code that was optimised for slow single-core CPUs through heavy caching and the like is less efficient on modern hardware than code that spends more CPU whilst using memory sparingly.

    The computer industry is going through a change in the way it works and thinks because of the relentless increase in processor power. The game keeps changing and progress marches forwards without stopping. Anyone who thinks otherwise is a fool to be ignored.
