[Chicken-users] Re: Very slow keep-alive behaviour on Spiffy


From: Graham Fawcett
Subject: [Chicken-users] Re: Very slow keep-alive behaviour on Spiffy
Date: Fri, 19 May 2006 14:16:47 -0400

On 5/19/06, Graham Fawcett <address@hidden> wrote:
> Hi folks,
>
> I'm seeing strange behaviour in Spiffy regarding Keep-alive
> connections. On my Linux server, subsequent requests on Keep-alive
> connections are taking much longer to complete than requests on new
> connections -- the wall-clock time is more than 10x greater (CPU usage
> on client and server is almost identical). Tests and results are below.

All right, I've gotten to the bottom of it. It definitely has to do
with small messages, and has little to do (directly) with Spiffy,
though Spiffy's design (or tcp-server's) does aggravate the problem.

Small content can be sent extremely efficiently if the entire HTTP
response fits into a single IP frame -- that is, the header and body
together are smaller than the MTU of the interface, and the message
is written to the output port in one go, not dribbled out via
multiple write calls (which can force the output to be fragmented
across multiple packets). It may be a Linux-specific problem -- there
appear to be numerous posts on the Web about small-message passing
over TCP on Linux being something of a bottleneck.
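
For illustration, this is the kind of dribbled write pattern that
triggers the problem -- a hypothetical handler, not Spiffy's actual
code, writing to the TCP output port one piece at a time:

;; Hypothetical sketch (not Spiffy's code): `out' is the TCP output
;; port handed to the handler, e.g. by tcp-accept. Each display call
;; can be pushed to the network on its own, so a tiny response may be
;; split across several packets.
(define (send-response-dribbled out body)
  (display "HTTP/1.1 200 OK\r\n" out)
  (display "Content-Type: text/plain\r\n" out)
  (display (string-append "Content-Length: "
                          (number->string (string-length body))
                          "\r\n\r\n")
           out)
  (display body out)
  (flush-output out))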

Retooling my Web stack with a (short-response) procedure, which writes
only once if the message length <= MTU, immediately solved the
performance problem here.
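
Roughly, the idea is something like this sketch (not my exact code;
the 1500-byte MTU is an assumption for a typical Ethernet interface,
and the names are placeholders):

;; Assemble header+body and write once when the whole message fits
;; under the MTU; otherwise fall back to writing the parts separately.
(define +assumed-mtu+ 1500)

(define (short-response out header body)
  (let ((msg (string-append header body)))
    (if (<= (string-length msg) +assumed-mtu+)
        (display msg out)      ; one write, so at most one frame
        (begin                 ; too big for one frame anyway
          (display header out)
          (display body out)))
    (flush-output out)))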

I'm not a TCP wizard, so I'm not entirely clear why this behaviour
affects persistent connections more than newly-created ones, nor why
the performance difference at the client-side is so huge. It's
significant enough that, if anyone is using tcp-server (or
http-server, or Spiffy) for small-message passing (e.g. an Ajax
application, where a client might ask for a lot of very small
messages), they might see a significant performance drop -- like I did!

A general solution would be to introduce a buffer in front of the
tcp-output port, where output queues up until the content reaches a
size near the MTU, and then flushes itself out. The buffer could have
port semantics, and so could be flushed, closed, etc., by client code,
passing these effects onward to the tcp-output port. Adding such an
element to tcp-server could improve performance significantly for a
number of network apps, with little effort. If I get time, I'll try to
whip up such a buffer, and do some tests; but I think my R&D time has
run out for the week.
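
Something along these lines, say -- a rough sketch only, assuming
CHICKEN's make-output-port (which unit it lives in, and the exact
callback signatures, depend on the CHICKEN version), and a threshold
a little under a typical 1500-byte MTU:

;; Wrap a TCP output port in a buffering port that collects output
;; until it nears the MTU, then writes it out in one go. Names and
;; the 1400-byte threshold are placeholders.
(define (make-buffered-port tcp-out #!optional (threshold 1400))
  (let ((chunks '())   ; pending output, most recent chunk first
        (len 0))       ; total buffered length
    (define (flush!)
      (unless (null? chunks)
        (display (apply string-append (reverse chunks)) tcp-out)
        (flush-output tcp-out)
        (set! chunks '())
        (set! len 0)))
    (make-output-port
     (lambda (str)                        ; write: just accumulate
       (set! chunks (cons str chunks))
       (set! len (+ len (string-length str)))
       (when (>= len threshold) (flush!)))
     (lambda ()                           ; close: drain, close TCP port
       (flush!)
       (close-output-port tcp-out))
     flush!)))                            ; flush: drain on demand

tcp-server (or Spiffy) would then hand handlers the wrapped port in
place of the raw TCP output port, and flush or close it at the end of
each response.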

Thanks for listening to me talk to myself in public! :-)

Graham