[lwip-devel] [bug #23726] pbuf pool exhaustion on slow recv()


From: Jim Pettinato
Date: Fri, 27 Jun 2008 14:30:09 +0000

Follow-up Comment #3, bug #23726 (project lwip):


Well, with the callback (raw) API there is typically no packet queueing
between the stack and the application (unless the application implements one
itself for some reason), so it seems to me that this issue would not be
present.

Typically, I would think that pbuf pool depletion when using the raw API comes
when the driver (at the interrupt level) is queueing packets faster than the
combined lwIP + application task can process them. Careful tuning of the pool
and driver queue sizes can avoid problems here (at least I finally figured out
how to strike a balance).
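
For reference, the knobs involved live in lwipopts.h. The values below are
purely illustrative assumptions on my part - the right balance depends on
available RAM, link speed, and how quickly the lwIP task gets scheduled:

    /* lwipopts.h -- illustrative values only */

    /* Number of pbufs in the RX pool; must cover the worst case of
       packets held at the ISR/driver level before the stack task runs. */
    #define PBUF_POOL_SIZE      16

    /* Payload size of each pool pbuf, sized so a full-MTU frame fits. */
    #define PBUF_POOL_BUFSIZE   1536

    /* The advertised window should not promise more in-flight data
       than the pool can actually absorb. */
    #define TCP_MSS             1460
    #define TCP_WND             (4 * TCP_MSS)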

This brings me to two related issues when using the raw API: one, the
potential for broadcast flooding, and two, a disconnect in how the TCP receive
window is advertised.

With the raw API, since there is typically no app-level queue, incoming
packets must be buffered at the ISR/driver level until the stack task runs and
processes them. A high level of broadcast traffic really adds to the driver
buffering requirements. I don't know about your corporate LAN, but ours
carries a ridiculous amount of broadcast traffic and makes a good stress-test
platform for our lwIP devices!

Would it be beneficial for raw API users to have a function or macro to block
broadcast packets at the ISR level - packets that will eventually be discarded
up the chain anyway? It would save buffering them and tying up pbufs;
something like the sketch below.
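
To illustrate, here is a minimal sketch of what such an early filter might
look like, called on the raw frame before a pbuf is allocated for it. The
eth_rx_early_filter name is hypothetical (nothing like it exists in lwIP
today), and note that a blanket broadcast drop would break ARP, so the sketch
lets EtherType 0x0806 through:

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    /* Hypothetical early filter, run in the RX ISR on the first bytes
       of a received frame.  Returns true if the frame should be dropped
       before any pbuf is allocated for it. */
    static bool eth_rx_early_filter(const uint8_t *frame, uint16_t len)
    {
        static const uint8_t bcast[6] =
            {0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF};

        if (len < 14) {
            return true;   /* runt frame: drop */
        }
        /* ARP requests arrive as broadcasts; drop those too and the
           device stops answering ARP, so let EtherType 0x0806 pass. */
        if (frame[12] == 0x08 && frame[13] == 0x06) {
            return false;
        }
        /* Destination MAC is the first six bytes of the frame. */
        if (memcmp(frame, bcast, sizeof(bcast)) == 0) {
            return true;   /* broadcast: drop before it ties up a pbuf */
        }
        return false;      /* unicast/multicast: buffer as usual */
    }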

Regarding the TCP window advertising... if I watch a large file download via
FTP (for example) to my lwIP raw-API-based system, I don't see the window
being reduced by more than one TCP_MSS - the stack hasn't seen the buffered
packets yet, and by the time it does, the app is processing each packet
immediately via the callback and hence restoring the original window size.
That makes it difficult to get TCP working smoothly, since we have no way to
tell a fast sender that the driver (ISR) receive buffer is full other than to
drop packets. Am I missing something here? I can't think of any way to provide
TCP receive window updates at the ISR level, yet with the raw API that seems
to be the only place where it is known how much data has already been received
and buffered but not yet acknowledged.
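
To make that concrete: in a typical raw-API receive callback the data is
consumed and acknowledged in one step, so the advertised window snaps right
back. A minimal sketch using the standard raw-API calls (error handling
trimmed):

    #include "lwip/tcp.h"
    #include "lwip/pbuf.h"

    /* Typical raw-API receive callback: data is consumed on the spot,
       so tcp_recved() immediately re-opens whatever window the segment
       used -- the sender never learns the ISR buffer is filling up. */
    static err_t my_recv(void *arg, struct tcp_pcb *pcb,
                         struct pbuf *p, err_t err)
    {
        if (p == NULL) {              /* remote host closed */
            tcp_close(pcb);
            return ERR_OK;
        }
        /* ... process p->payload here ... */
        tcp_recved(pcb, p->tot_len);  /* window grows right back */
        pbuf_free(p);
        return ERR_OK;
    }

If the callback instead held on to the pbuf and only called tcp_recved() once
the data was really drained, the window would shrink as intended - but that is
exactly the kind of queueing the raw API avoids.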

Speaking of TCP windows - wouldn't that be a good way to handle the socket
buffer problem? With UDP and raw IP, just dropping packets posted to a full
mailbox would be okay; with TCP, the receive window could be synched to the
predetermined available mailbox size, rather than initializing every
connection's rcv_wnd to some generic value. A rough sketch of the idea
follows.
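
Roughly, something like this - with the big caveat that
set_rcv_wnd_for_mailbox is a made-up helper and that rcv_wnd is an lwIP
internal, so this is a sketch of the proposal, not a supported API:

    #include "lwip/tcp.h"

    /* Hypothetical helper: size a new connection's receive window to
       match the space actually reserved for its mailbox, instead of
       the global TCP_WND default.  Touching pcb->rcv_wnd directly
       assumes stack internals. */
    static void set_rcv_wnd_for_mailbox(struct tcp_pcb *pcb,
                                        u16_t mbox_slots)
    {
        /* Promise no more in-flight data than the mailbox can hold:
           one full-sized segment per free slot, capped at 64K-1. */
        u32_t wnd = (u32_t)mbox_slots * TCP_MSS;
        pcb->rcv_wnd = (wnd > 0xFFFF) ? 0xFFFF : (u16_t)wnd;
    }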



    _______________________________________________________

Reply to this item at:

  <http://savannah.nongnu.org/bugs/?23726>
