
Re: [lwip-devel] PPPOS and PBUF_POOL


From: Joel Cunningham
Subject: Re: [lwip-devel] PPPOS and PBUF_POOL
Date: Wed, 30 Aug 2017 11:23:19 -0500

> On Aug 29, 2017, at 10:43 AM, Axel Lin <address@hidden> wrote:
> 
> 2017-08-29 6:21 GMT+08:00 Joel Cunningham <address@hidden>:
>> I’m getting more familiar with the PPPOS implementation in one of my 
>> projects and was surprised to find that PBUF_POOL appears to be used in the 
>> transmit pathway of PPPOS (i.e. pppos_write and pppos_netif_output). This 
>> doesn’t match my understanding of PBUF_POOL which is that it should only be 
>> used for RX.
>> 
>> Also, if I look at pppoe.c and pppol2tp.c, both of those are using PBUF_RAM 
>> in their netif_output functions (pppoe_netif_output, pppol2tp_netif_output).
>> 
>> Is this usage of PBUF_POOL intentional?
> 
> I think it's to avoid memory fragmentation.
> The way pppos_write/pppos_netif_output use PBUF_POOL is different from
> other places, because the pbuf is freed immediately by pppos_output_last().
> It's just being used as a temporary buffer.
> 

Thanks for the explanation.  Does anyone have insight on whether there is a 
fragmentation problem?  I don’t know of other cases in LwIP where we avoid 
using PBUF_RAM when making a copy of the pbuf.
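To make the comparison concrete, here is a rough sketch of the two styles (illustration only, not the actual lwIP source; the helper function names are invented):

#include "lwip/err.h"
#include "lwip/pbuf.h"

/* Illustration only -- not the real pppos_write()/pppoe_netif_output() code. */

/* PPPoS TX style discussed above: borrow a temporary buffer from the
 * RX-oriented PBUF_POOL, fill it, push it out, free it right away. */
static err_t tx_copy_from_pool(const u8_t *data, u16_t len)
{
  struct pbuf *nb = pbuf_alloc(PBUF_RAW, len, PBUF_POOL);
  if (nb == NULL) {
    return ERR_MEM;           /* pool exhausted -> transmit failure */
  }
  pbuf_take(nb, data, len);   /* copy payload into the pool pbuf(s) */
  /* ... byte-stuff and hand to the serial driver here ... */
  pbuf_free(nb);              /* freed immediately, as Axel described */
  return ERR_OK;
}

/* pppoe/pppol2tp TX style: same copy-and-free pattern, but the copy is
 * heap-allocated (PBUF_RAM) instead of coming from the RX pool. */
static err_t tx_copy_from_ram(const u8_t *data, u16_t len)
{
  struct pbuf *nb = pbuf_alloc(PBUF_RAW, len, PBUF_RAM);
  if (nb == NULL) {
    return ERR_MEM;           /* heap exhausted (or too fragmented) */
  }
  pbuf_take(nb, data, len);
  /* ... transmit ... */
  pbuf_free(nb);
  return ERR_OK;
}

Both paths make a copy and free it immediately; the only difference is which allocator backs the copy, which is why I would have expected PBUF_RAM here unless fragmentation is a demonstrated problem.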

I believe in my particular product, I’m seeing the exact behavior PBUF_POOL was 
designed to avoid.

I have a UDP socket which receives high-frequency input from the modem.  The 
thread servicing the socket doesn’t always keep up, so the socket’s receive 
buffer fills up with packets (data stored in PBUF_POOL pbufs). A separate TCP 
socket then experiences transmit failures in pppos_netif_output because PBUF_POOL 
is exhausted.
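For context, both the datagrams queued on the UDP socket and the temporary TX copies made by PPPoS come out of the same fixed-size pool, dimensioned in lwipopts.h (the values below are made-up examples, not my actual configuration):

/* lwipopts.h (illustrative values only) */
#define PBUF_POOL_SIZE     16    /* total number of PBUF_POOL pbufs */
#define PBUF_POOL_BUFSIZE  512   /* payload bytes per pool pbuf */

Once queued RX data is holding most of those pbufs, the pbuf_alloc(..., PBUF_POOL) in the PPPoS transmit path starts returning NULL.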

I’m looking at limiting the UDP socket with SO_RCVBUF, and this will work for my 
use case since the UDP datagrams are large, but SO_RCVBUF is a byte limit, not 
a datagram (PBUF_POOL pbuf) limit.
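In case it's useful to anyone else, the workaround I'm considering looks roughly like this (sketch only; it requires LWIP_SO_RCVBUF to be enabled, and the limit value is just an example):

#include "lwip/sockets.h"

/* Illustrative helper (not from my actual code): cap a UDP socket's receive
 * backlog so queued datagrams can't consume the whole PBUF_POOL.
 * Note the limit is in bytes, not in datagrams/pbufs. */
static int limit_udp_rcvbuf(int sock)
{
  int rcvbuf = 4 * 1024;   /* example value; tune for the datagram size/rate */
  return lwip_setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf));
}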

Joel

