RE: [lwip-devel] Re: [task #7040] Work on tcp_enqueue


From: bill
Subject: RE: [lwip-devel] Re: [task #7040] Work on tcp_enqueue
Date: Mon, 2 Feb 2009 12:32:40 -0500

>We should decide what our goal is here: When I first started task #7040,
>I just wanted to make sure lwIP uses full segments where possible,
>regardless of the length of data tcp_write is called with. To reach that,
>we would first make the simple change to create the first pbuf with such
>a length that it fits the last segment (if that is not full). I would
>even see that as a bug-fix, since it has the potential to be a large
>performance hole.

It needs some more benchmarking, but for those sending a lot of data at
once, I think the sub-MSS pbufs hurt performance less than avoiding them
by searching to the end of the chain to split a pbuf and append to the
last segment, especially when the chains are long.  With a 64k-1
TCP_SND_BUF the list could be quite large.  Is that cost worth it compared
to a one-line test in the tcp_sent callback to ensure it calls tcp_write
with a multiple of the MSS?  If the API is changed, it's easier to return
the number of bytes queued and keep the size-multiple adherence in
tcp_enqueue.  But again, it's trivial for the tcp_sent caller to maintain
it.  And if a caller sending large data doesn't adhere, only a few (2%?)
packets go out at less than the full MSS.  For tcp_write calls with small
amounts of data, it's totally different.  So which camp benefits: those
sending lots of small data, or those sending huge amounts of data in
bursts?

I'm beginning to think I shouldn't have opened this can of worms.  It's
just that I'm trying to eke out every Mbit/s I can from lwIP.

Are everyone's goals a little different?  Do we want a tiny footprint with
no concern for throughput, or maximum throughput without regard to code
size?  Do we need LWIP_BUILD_FAST and LWIP_BUILD_SMALL options where these
factors can be separated?

Bill





