Re: [lwip-devel] Q. PBUF_NEEDS_COPY relevant for input pbufs?


From: address@hidden
Subject: Re: [lwip-devel] Q. PBUF_NEEDS_COPY relevant for input pbufs?
Date: Mon, 31 Jul 2017 19:55:21 +0200
User-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:45.0) Gecko/20100101 Thunderbird/45.8.0

Douglas wrote:
> Could use some help understanding PBUF_TYPE_FLAG_DATA_VOLATILE and
> PBUF_NEEDS_COPY usage
> [..]
> So a question is: should these RX pbufs be flagged as
> PBUF_TYPE_FLAG_DATA_VOLATILE, so that they are copied if the data is to
> be kept around for long?

That would indeed be a good idea. The flag is not obeyed on RX yet (as you already saw), but it should be honoured wherever input data is buffered (e.g. when TCP queues OOSEQ pbufs or "refused_data", but also by applications).
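To illustrate (a sketch only, not code that exists in lwIP today): code that wants to keep a received pbuf around beyond the current call could honour the flag with the same PBUF_NEEDS_COPY()/pbuf_clone() pattern the stack already uses when queueing on the TX side; the helper name keep_rx_pbuf is made up for the example.

#include "lwip/pbuf.h"

/* Keep hold of a received pbuf 'q' beyond the current call stack. */
static struct pbuf *
keep_rx_pbuf(struct pbuf *q)
{
  struct pbuf *p;
  if (PBUF_NEEDS_COPY(q)) {
    /* the driver may reuse the underlying buffer: take a private copy */
    p = pbuf_clone(PBUF_RAW, PBUF_RAM, q);
  } else {
    /* the data is stable: keeping a reference is enough */
    pbuf_ref(q);
    p = q;
  }
  return p; /* NULL if pbuf_clone() failed */
}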

> Looking at the code it appears that PBUF_TYPE_FLAG_DATA_VOLATILE and
> PBUF_NEEDS_COPY are only used in the TX paths, but might there be some
> data flow paths that are relevant to RX pbufs?

Not yet.

> On the TX side [..]
> So another question is: does that pattern of TX pbuf use require
> PBUF_TYPE_FLAG_DATA_VOLATILE or is the reference alone enough to prevent
> it being re-used?

DATA_VOLATILE is counterproductive here: the TX pattern (sketched in code after the list) is
- application allocates a pbuf and fills it
- passes it to TX function
- does *not* use it any more (except releasing its own reference via pbuf_free)
- stack can do with the pbuf what it wants for TX
- stack/driver frees it after transmission (or after retransmission etc)
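
In code, that pattern looks roughly like this (a sketch using the raw UDP API; the pcb is assumed to be an already connected struct udp_pcb, and send_datagram is just an example name):

#include "lwip/pbuf.h"
#include "lwip/udp.h"

static err_t
send_datagram(struct udp_pcb *pcb, const void *data, u16_t len)
{
  err_t err;
  /* application allocates a pbuf and fills it */
  struct pbuf *p = pbuf_alloc(PBUF_TRANSPORT, len, PBUF_RAM);
  if (p == NULL) {
    return ERR_MEM;
  }
  pbuf_take(p, data, len);
  /* passes it to the TX function; the stack takes its own reference
     if it needs to queue the pbuf */
  err = udp_send(pcb, p);
  /* does not touch the pbuf any more, only releases its own reference */
  pbuf_free(p);
  return err;
}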

DATA_VOLATILE would mean the stack has to allocate a new pbuf and copy the data into that new pbuf whenever the data is queued, i.e. whenever it is needed *after* the tx function returns (e.g. a DMA transfer that takes place a short time after the tx function has returned).

DATA_VOLATILE is *really* only needed in cases where you don't have control over the external usage of the TX data buffer. This is the case with the (dumb) socket API, for example.
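
As a concrete sketch (assuming the pbuf type definitions of current lwIP, where PBUF_REF includes PBUF_TYPE_FLAG_DATA_VOLATILE and PBUF_ROM does not; the function name is made up):

#include "lwip/pbuf.h"

static const char rom_msg[] = "data that never changes";

void
volatile_vs_rom_example(void)
{
  struct pbuf *rom;
  struct pbuf *ref;
  char app_buf[64]; /* the application may overwrite this right after the call */

  /* data guaranteed to stay valid and unchanged: PBUF_ROM carries no
     volatile flag, so queueing code may simply take a reference */
  rom = pbuf_alloc(PBUF_RAW, sizeof(rom_msg), PBUF_ROM);
  if (rom != NULL) {
    rom->payload = (void *)rom_msg;
    /* PBUF_NEEDS_COPY(rom) evaluates to 0 */
    pbuf_free(rom);
  }

  /* a buffer the application may reuse (socket-API style): PBUF_REF carries
     PBUF_TYPE_FLAG_DATA_VOLATILE, so queueing code must clone the data */
  ref = pbuf_alloc(PBUF_RAW, sizeof(app_buf), PBUF_REF);
  if (ref != NULL) {
    ref->payload = app_buf;
    /* PBUF_NEEDS_COPY(ref) evaluates to non-zero */
    pbuf_free(ref);
  }
}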

Cheers,
Simon


