Stress testing a web server

by ibanix (Hermit)
on Jan 04, 2003 at 22:44 UTC ( [id://224323] )

ibanix has asked for the wisdom of the Perl Monks concerning the following question:

A few days ago I needed to do some very basic testing on a web-server: simulate a number of connections to it at once. The server was configured for a hard maximum number of active connections, and I needed to verify it would enforce that.

Here's the 10-minute script I wrote:
#!/usr/bin/perl -w

use strict;

my $runs = 100;

while ($runs) {
    my $pid = fork;

    # Parent
    if ($pid) {
        $runs--;
    }

    # Child
    if ($pid == 0) {
        my $output = `wget -S http://www.mysitegoeshere.org -O /dev/null 2> /dev/null`;
        exit;
    }
}
As you can see, the script forks a number of children and calls out to wget to handle the dirty work (this was needed in a hurry, ok?).

I noticed that the script would never produce more than 40 connections/s to my web server, no matter what I set the $runs variable to.

So I'm wondering where my bottleneck is. Calling out to wget? The time it takes to fork the process? The network bandwidth? The OS's speed in creating sockets?

This is a bit more than a Perl problem, and I would love any feedback. For reference, the server running this script is a single-CPU 400MHz machine with 384MB of RAM and 10Mbit of bandwidth, running FreeBSD. The server it is attempting connections to is a dual-CPU 2.8GHz Xeon with 2GB of RAM on a 100Mbit network. Peak bandwidth used on the script server was ~80KB/s -- well below its maximum.
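One way to isolate the fork question from the rest is to time bare fork-and-reap on this box. A minimal sketch (the loop count of 200 is arbitrary):

#!/usr/bin/perl -w
# Sketch: measure raw fork throughput on the client machine, to see whether
# fork speed alone could explain a ~40/s ceiling. No wget involved at all.
use strict;
use Time::HiRes qw(time);

my $n = 200;
my $start = time;
for (1 .. $n) {
    my $pid = fork;
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) { exit }      # child does nothing
    waitpid($pid, 0);            # parent reaps it immediately
}
printf "%.1f forks/s\n", $n / (time - $start);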

Thanks all!

ibanix

$ echo '$0 & $0 &' > foo; chmod a+x foo; foo;

edited: Sun Jan 5 00:26:29 2003 by jeffa - title change (was: Limitation: perl, my code, or something else?)

Replies are listed 'Best First'.
Re: Stress testing a web server
by tachyon (Chancellor) on Jan 05, 2003 at 00:14 UTC

    You are forking a child that then does another fork (inside the backticks that run wget). Assuming that the limitation is based on process count or fork speed, you can halve the number of processes by using LWP::Simple.

    I have added an optional time delay - the children will all sleep until, say, 20 seconds have elapsed (by which time you should have lots of kids), at which point they all hit the server simultaneously, which ought to have the desired effect. You could add a random element to space the hits out over X seconds just by adding rand(X) to the delay....

#!/usr/bin/perl -w

use strict;
use LWP::Simple;

my $runs   = 100;
my $time   = time();
my $delay  = 20;
my $x      = 1;
my $hammer = 0;    # set this to peak your network out!

while ($runs) {
    my $pid = fork;

    # Parent
    if ($pid) {
        $runs--;
    }
    elsif ($pid == 0) {
        sleep 1 while $hammer and time() < $time + $delay + rand($x);
        my $output = get('http://www.mysitegoeshere.org');
        exit;
    }
    else {
        die "Fork failed $!\n";
    }
}

    cheers

    tachyon

    s&&rsenoyhcatreve&&&s&n.+t&"$'$`$\"$\&"&ee&&y&srve&&d&&print

Re: Stress testing a web server
by atcroft (Abbot) on Jan 04, 2003 at 23:00 UTC

    As to doing what you are looking for, you might consider looking at merlyn's Web Techniques column 28, which was looking at stress-testing a CGI.

    It may be that your script is running into a maximum number of processes for a single user - just a guess. You might want to send stdout or stderr from wget somewhere you can look at, to see whether wget itself is running into a problem.
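    One way to do that - just a sketch reworking the original loop, with an arbitrary log path - is to redirect wget's stderr to a per-child file and to warn when fork itself fails:

#!/usr/bin/perl -w
# Sketch: same fork loop as the original question, but each child appends
# wget's -S output (which goes to stderr) to a per-child log, and failed
# forks are reported instead of silently ignored.
use strict;

my $runs = 100;
while ($runs) {
    my $pid = fork;
    if (!defined $pid) {
        warn "fork failed with $runs runs remaining: $!\n";   # e.g. per-user process limit hit
        sleep 1;
        next;
    }
    if ($pid) { $runs--; next; }
    system("wget -S http://www.mysitegoeshere.org -O /dev/null 2>> /tmp/wget.$$.log");
    exit;
}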

Re: Stress testing a web server
by Aristotle (Chancellor) on Jan 05, 2003 at 01:38 UTC
    You might want to have a look at LWP::Parallel - something like this:
#!/usr/bin/perl -w

use strict;
use LWP::Parallel::UserAgent;
use HTTP::Request;

my $url = "http://localhost/foo/";

my $pua = LWP::Parallel::UserAgent->new();
$pua->nonblock(1);   # accelerated connections
$pua->redirect(1);   # follow redirects
$pua->max_req(1000); # simultaneous requests

while (1) {
    foreach (0 .. 100) {
        my $res = $pua->register(HTTP::Request->new(GET => $url));
        die $res->error_as_HTML if $res;
    }
    $pua->wait(0); # returns hashref
    $pua->initialize;
}
    and expanding that to report the number of connections:
#!/usr/bin/perl -w

use strict;

# make sure END block is run
BEGIN { $SIG{INT} = sub { exit } }

package myPUA;

use Exporter ();
use LWP::Parallel::UserAgent qw(:CALLBACK);

our @ISA    = qw(LWP::Parallel::UserAgent Exporter);
our @EXPORT = @LWP::Parallel::UserAgent::EXPORT_OK;

our $connections = 0;

sub on_return { ++$connections; return }

package main;

use HTTP::Request;
use Time::HiRes qw(time);

my $url = "http://localhost/foo/";

my $pua = myPUA->new();
$pua->nonblock(1);   # accelerated connections
$pua->redirect(1);   # follow redirects
$pua->max_req(1000); # simultaneous requests

my $start = time;

while (1) {
    foreach (0 .. 100) {
        my $res = $pua->register(HTTP::Request->new(GET => $url));
        die $res->error_as_HTML if $res;
    }
    $pua->wait(0); # returns hashref
    $pua->initialize;
}

END {
    my $sec = time - $start;
    print "$myPUA::connections connections in $sec seconds\n";
    printf "%.1f conn/s\n", $myPUA::connections / $sec;
}
    Most of this is lifted straight out of the module's POD. Took about 25 minutes to write without any real prior exposure to LWP::Parallel. :) Untested.

    Makeshifts last the longest.

Re: Stress testing a web server
by fokat (Deacon) on Jan 05, 2003 at 02:04 UTC

    All of the good solutions offered so far are heavily geared toward the idea of "time" (i.e., connections per second). That is not what is being asked. The maximum connections limit is a per-server parameter that controls, more or less, how many accept()ed TCP connections exist on the server side at once.

    What ibanix needs to know is whether the server will process more than 100 concurrent connections in any time frame, which is not exactly the same problem.

    In order to test this, execute this simple script (I called it nsock) and pay attention to what happens...

#!/usr/bin/perl

use strict;
use warnings;
use Getopt::Std;
use IO::Socket::INET;
use vars qw($opt_s $opt_p $opt_n);

getopts("s:p:n:");

die <<EOF
nsock -s server [-p port] [-n number]

    -s: server to connect to
    -p: port to connect to. Defaults to 80
    -n: number of concurrent connections. Defaults to 100

EOF
    unless $opt_s;

$opt_p ||= 80;
$opt_n ||= 100;

my @socks = ();

$| = 1;

for my $c (1 .. $opt_n) {
    print ">>> Socket #$c\n";
    my $s = IO::Socket::INET->new(
        PeerAddr => $opt_s,
        PeerPort => $opt_p,
        Proto    => 'tcp',
        Timeout  => 30,
    );
    unless ($s) {
        warn "Failed to create socket $c: $!\n";
        sleep 2;
        redo;
    }
    print "<<< $c connected ok\n";
    push @socks, $s;
}

print "*** All concurrent connections done!\n";

    What it does is simple: open a TCP socket to the desired server and keep it open. If you run it with a slightly larger number of connections (say, 101) you should see the 101st connection take forever to establish. This happens because it goes into the accept() queue and is not established right away.
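    To see that queueing effect locally, a rough companion listener (a sketch, not part of nsock; the port and limits are made up) could be pointed at with nsock -s localhost -p 8080:

#!/usr/bin/perl -w
# Sketch: accept at most $max sockets, then stop calling accept(), so
# further clients sit in the small listen backlog (and beyond that may
# time out) - the same "takes forever" behaviour described above.
use strict;
use IO::Socket::INET;

my $max = 5;                       # stand-in for the server's connection cap
my $srv = IO::Socket::INET->new(
    LocalPort => 8080,             # arbitrary test port
    Listen    => 2,                # small backlog so the queueing is visible
    Reuse     => 1,
) or die "listen failed: $!";

my @clients;
while (@clients < $max) {
    my $c = $srv->accept or next;
    push @clients, $c;
    print scalar(@clients), " accepted\n";
}
print "Cap reached; new connections now wait in the accept() queue\n";
sleep;                             # hold the accepted sockets open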

    If you run it with 100 connections, all of them should be established. Hope this helps.

    Best regards

    -lem, but some call me fokat

Re: Stress testing a web server
by IlyaM (Parson) on Jan 05, 2003 at 11:50 UTC
Re: Stress testing a web server
by pg (Canon) on Jan 05, 2003 at 00:59 UTC
    The HTTP 1.1 spec does allow the client to specify Connection: Keep-Alive, but the server has a choice whether or not to really keep the connection alive. Also, even if the two sides agree on this, the connection can still be dropped for various reasons.

    One part of his test has to be conducted from the server side, something like a counter of connections.
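    A crude counter along those lines - just a sketch, assuming Perl is available on the server box, that its netstat marks established sockets with ESTABLISHED, and that the virtual server listens on port 80 - might be:

#!/usr/bin/perl -w
# Sketch: poll netstat once a second and count sockets established on the
# local web port. Adjust the port and the regex for your netstat's output.
use strict;

my $port = 80;
while (1) {
    my @lines = `netstat -an`;
    my $count = grep { /[:.]$port\b.*ESTABLISHED/ } @lines;
    print scalar(localtime), ": $count established connections on port $port\n";
    sleep 1;
}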
      Hi, let me add some information.

      The web server in question is IIS 5.0, and the maximum connections limit is being enforced via its virtual server properties. The number of connection attempts per second and of established (active) connections is monitored via PerfMon.

      The original script was able to generate a maximum of 40 connection attempts per second, for a very short period of time (around 2 seconds). After that, it dropped off to 15 or 20 connection attempts per second. This was enough for my purposes at the time: I was able to show that the server setting for maximum active connections would be enforced.

      As I mentioned below, the question that started this was: 'What is my limiting factor in the number of attempted connections per second?' All of the responses posted here have been excellent, and I will use them later if I conduct further testing. I was especially hoping that someone would reply and tell me fork was holding me back, or that I should use system() instead of backticks, etc.

      Another poster mentioned LWP as being more efficient, another one used IO::Socket. I love these -- but I wonder why they work better? Do they consume fewer CPU cycles? Take less time to allocate memory? Cause less context switching?

      Once again, my thanks.

      ibanix

      $ echo '$0 & $0 &' > foo; chmod a+x foo; foo;

        When you do a system call you actually do a fork() followed by an exec(). From the docs for system():

        Does exactly the same thing as ``exec LIST'' except that a fork is done first, and the parent process waits for the child process to complete.

        The OS requires a reasonable amount of time to fire up a new process (this is why mod_perl is faster than vanilla Perl - you avoid this overhead) - certainly less than a second, but often on the order of hundreds of milliseconds.

        With your fork/system-call example you have 2 forks (with the attendant time overhead) per hit on the web server. With the basic LWP method you have only one fork, so in rough terms you should get double the kids hitting the server, given that forking is the bottleneck, not loading/running code or memory.
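        If you want to put a rough number on that difference, a Benchmark comparison of the two approaches (a sketch only; the URL is a placeholder and the counts are arbitrary, and both timings include the network round trip) would look something like:

#!/usr/bin/perl -w
# Sketch: compare one request via backticks+wget (fork plus exec) against
# one request via LWP::Simple::get (no extra process), as a rough gauge of
# how much of the per-hit cost is process startup.
use strict;
use Benchmark qw(timethese);
use LWP::Simple;

my $url = 'http://www.mysitegoeshere.org';

timethese(50, {
    'wget (fork+exec)' => sub { `wget -q -O /dev/null $url` },
    'LWP::Simple::get' => sub { get($url) },
});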

        As a variation on the hammer theme described above just make the kids hit the server more than once. If each child hit the server say 50 times you will get a steadily increasing load (as more kids fork off); a sustained peak load (all kids hitting the server); and then a taper as the kids die off. This will give you a very sustained peak load. Don't blame me if your server cries - it is IIS ;-)

        <DISCLAIMER>Quite seriously, this sort of code can bring your system to its knees, so use it with due care. It will in all likelihood saturate the network, effectively performing a DoS attack on yourself. Perhaps best run after hours.</DISCLAIMER>

#!/usr/bin/perl -w

package Brutal::Web::Server::Test;

use strict;
use LWP::Simple;

my $kids         = 100;
my $hits_per_kid = 50;
my $output;

while ($kids) {
    my $pid = fork;

    # Parent
    if ($pid) {
        $kids--;
    }
    elsif ($pid == 0) {
        do { $output = get('http://www.mysitegoeshere.org') } for 1 .. $hits_per_kid;
        exit;
    }
    else {
        die "Fork failed $!\n";
    }
}

        cheers

        tachyon

        s&&rsenoyhcatreve&&&s&n.+t&"$'$`$\"$\&"&ee&&y&srve&&d&&print

        Multi-process is definitely very expensive. I would suggest a single-process, single-thread solution.
use IO::Socket::INET;
use strict;

use constant MAX_CONN => 25;

$|++;

my @connections;
my $request = "HEAD / HTTP/1.1\r\nConnection: Keep-Alive\r\nHost: www.google.com\r\n\r\n";

for (1 .. MAX_CONN) {
    $connections[$_] = IO::Socket::INET->new(
        Proto    => "tcp",
        PeerAddr => "www.google.com",
        PeerPort => 80,
    );
    print ".";
}
print "\n";

while (1) {
    my $sum = 0;
    for (1 .. MAX_CONN) {
        my $buffer;
        if ($connections[$_]) {
            $sum++;
            $connections[$_]->send($request);
            $buffer = "";
            while ($buffer !~ m/\r\n\r\n/) {
                my $piece;
                sysread($connections[$_], $piece, 10000);
                if ($piece eq "") {
                    $connections[$_]->close();
                    $connections[$_] = undef;
                    last;
                }
                else {
                    $buffer .= $piece;
                }
            }
        }
        else {
            # get those disconnected back
            $connections[$_] = IO::Socket::INET->new(
                Proto    => "tcp",
                PeerAddr => "www.google.com",
                PeerPort => 80,
            );
        }
    }
    print "there are $sum connections now\n";
}
•Re: Stress testing a web server
by merlyn (Sage) on Jan 05, 2003 at 03:48 UTC
Re: Stress testing a web server
by ibanix (Hermit) on Jan 05, 2003 at 03:11 UTC
    Wow! Thanks for all the ideas!

    (Actually, I was just wondering why I capped at 40 connections/s -- what was the limiting factor?)

    But all your advice helps!

    Cheers,
    ibanix

    $ echo '$0 & $0 &' > foo; chmod a+x foo; foo;
Re: Stress testing a web server
by osama (Scribe) on Jan 05, 2003 at 21:16 UTC
    Why not use ab (ApacheBench)? It is a tool made specifically for this purpose (and it also works on non-Apache web servers...).
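    For example (numbers chosen purely for illustration), something like:

$ ab -n 1000 -c 100 http://www.mysitegoeshere.org/

    makes 1000 requests with 100 running concurrently, which ought to drive considerably more load than the fork/wget loop.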
