While rooting about in a TCP packet trace, I discovered something that I had not heard mentioned before in discussions about building Web Applications using Perl: The "standard wisdom" on building CGIs prevents webapps from taking advantage of an HTTP/1.1 optimization.

Connection: Keep-Alive

HTTP/1.1 supports a network optimization. If a client (e.g., a browser) sends "Connection: Keep-Alive" in a request header, the web server can keep the connection persistent, so that the underlying socket can be reused to service subsequent requests. This avoids having to set up and tear down a socket every time the client needs to request something from the server. To avoid exhausting its socket pool, an HTTP/1.1-compliant server will eventually time out a persistent connection and close the socket.
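
As a client-side illustration (the host and paths below are made up), LWP::UserAgent's keep_alive option turns on a connection cache, so that successive requests to the same server can reuse one socket if the server cooperates:

use strict;
use warnings;
use LWP::UserAgent;

# keep_alive => 1 gives the agent a connection cache, so the underlying
# socket can be reused for later requests to the same host when the
# server keeps the connection open.
my $ua = LWP::UserAgent->new(keep_alive => 1);

for my $path ('/frameset', '/left-frame', '/right-frame', '/logo.png') {
    my $response = $ua->get("http://www.example.com$path");
    print $response->code, " $path\n";
}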

Content-Length

For connection reuse to work, a browser needs to know when it has completely digested a prior response. To do this, it relies on the Content-Length field in the response header. After reading the header, the browser extracts the Content-Length, then reads the socket until it has consumed exactly that many bytes. The very next byte will be the beginning of the next response header.
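
As a rough sketch of what that read looks like (the host and request below are invented, and chunked transfer encoding is ignored entirely), the client-side logic boils down to something like:

use strict;
use warnings;
use IO::Socket::INET;

my $sock = IO::Socket::INET->new(
    PeerAddr => 'www.example.com',
    PeerPort => 80,
    Proto    => 'tcp',
) or die "connect: $!";

print $sock "GET / HTTP/1.1\r\n",
            "Host: www.example.com\r\n",
            "Connection: Keep-Alive\r\n\r\n";

# Pull in the response header, one byte at a time, up to the blank line.
my $header = '';
while ($header !~ /\r\n\r\n/) {
    my $c = getc($sock);
    last unless defined $c;
    $header .= $c;
}

# Read exactly Content-Length bytes of body. If the connection is kept
# alive, the next byte on the socket begins the next response header.
my ($length) = $header =~ /^Content-Length:\s*(\d+)/mi;
my $body = '';
while (defined $length && length($body) < $length) {
    read($sock, $body, $length - length($body), length($body)) or last;
}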

What if there isn't a Content-Length in the response header? In HTTP/1.1, the final fallback method of determining the message body length is to read bytes from the socket until the server closes it. This doesn't cause a problem for the browser -- it will simply open a new socket if and when it needs one.

What does this have to do with Perl?

Just this: the standard wisdom on how to code Perl CGI scripts prevents web applications from taking advantage of the HTTP/1.1 Keep-Alive optimization.

The standard advice says to unbuffer STDOUT and then immediately print "Content-Type: text/html\n\n"; (or, when using CGI.pm, print header();), followed by the HTML. This has the effect of sending a response that omits a Content-Length, which means that even if the browser sent a Keep-Alive request, the socket will be closed after the response is sent, and the browser will need to open a new socket for subsequent requests. If multiple script invocations are needed to render a page (e.g., if a page is framed and each frame's contents are generated dynamically), the effect is multiplied.
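
For concreteness, here is a bare-bones sketch of the kind of script that advice produces (the page content is hypothetical):

#!/usr/bin/perl
use strict;
use warnings;
use CGI qw(:standard);

$| = 1;    # unbuffer STDOUT, per the standard advice

# The header goes out immediately, before the body has been generated,
# so there is no way to include a Content-Length with it.
print header();    # Content-Type only

print start_html('A hypothetical page');
# ... generate the rest of the page piecemeal ...
print end_html();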

Losing connection persistence isn't an issue during development, where the benefit of visibility into script behavior far outweighs the barely measurable socket setup/teardown overhead, and in most low-volume situations that overhead remains minimal. But in a high-latency situation, the difference in behavior can be noticeable. To understand why, we need to dig a bit.

Counting Packets

When a browser requests a page from a web server, the transaction takes a minimum of two TCP packets: one to carry the HTTP GET request, and one to carry the response from the server. (A large response is split across multiple packets.) But this doesn't count the packet overhead of opening (and later closing) the socket. Establishing a TCP connection takes 3 packets; closing it takes 4 packets (consult Stevens for the grisly details). Each request packet requires an acknowledgement, but this is typically piggybacked on the reply (data) packet. (There are some other tricks for boxcarring ACKs. I'm going to beg forgiveness and ignore them, which will throw the math below off just a bit.)

A Web Application Scenario

To see why Keep-Alive might matter, consider a simple web application that consists of a frameset and two frames, all of which are generated dynamically. One of the frames includes an image. It takes 4 HTTP requests to get all of the pieces into a browser.

Without Keep-Alive, these 4 HTTP requests (issued serially, each on its own socket) require a minimum of (3 + 2 + 4) * 4 = 36 packets. With Keep-Alive, all 4 requests share one socket, and 3 + (2 * 4) + 4 = 15 packets suffice, the final 4 of which are deferred until the connection either times out or is closed by the browser.

In reality, the math doesn't work out quite this way, in part because browsers keep multiple sockets open so that HTTP requests can be made in parallel. (IE uses 2 sockets.) But the effect is the same. If the response to an HTTP GET doesn't include a Content-Length, then the socket gets closed, and a new one will be opened.

Now consider the impact of a hundred browsers running a more complicated web application that periodically polls the server. Are you going to want to keep those connections alive?

The Moral

If you're building a web application that might be deployed in a high network latency situation, consider taking advantage of HTTP/1.1 Keep-Alive. This requires building up the HTML that your CGI will emit, then emitting it in one piece, with a Content-Length header prepended. Something along the lines of

binmode(STDOUT);
$html = ...;
print "Content-Type: text/html\r\n",
      "Content-Length: ", length($html),
      "\r\n\r\n",
      $html;
or, if using CGI,
$html = ...;
print header(-content_length => length($html)), $html;
will do the trick.
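
One refinement worth noting, which goes beyond the snippets above: Content-Length is a count of bytes, while Perl's length() counts characters. If the page can contain non-ASCII text, encode it to octets first and measure those (the literal page below is just a stand-in):

use strict;
use warnings;
use CGI qw(:standard);
use Encode qw(encode);

my $html = "<html><body>caf\x{e9}</body></html>";    # contains a non-ASCII character

# Encode to a byte string so that length() and the bytes on the wire agree.
my $octets = encode('UTF-8', $html);

binmode(STDOUT);
print header(
    -type           => 'text/html; charset=UTF-8',
    -content_length => length($octets),
), $octets;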

Or, at least do a packet trace so that you can see what's really going on under the covers.

References

RFC 2616, "Hypertext Transfer Protocol -- HTTP/1.1" (persistent connections and message length rules)
W. Richard Stevens, TCP/IP Illustrated, Volume 1: The Protocols (TCP connection setup and teardown)

Corrections to any of the above will be appreciated.

In reply to Connection: Keep-Alive and Perl by dws
