

From: R. Diez
Subject: Re: [lwip-users] How to limit the UDP Rx packet size to avoid big RAM allocations
Date: Wed, 27 Jun 2018 16:12:49 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.8.0


>> I'm no expert,
>
> Yet you try to sound like one ;-)

I don't. I do not know where you got that impression from. I already said a few times that I'm out of my depth here. But mind you, that still hasn't prevented me from finding the odd bug in lwIP. 8-)


> As a matter of fact, it does. That's why I wrote it. MEM_LIBC_MALLOC is of no
> interest here. MEMP_MEM_MALLOC uses the heap to allocate chunks of the element
> size the pool would normally have. As such, PBUF_POOL pbufs still allocate a
> constant size (PBUF_POOL_BUFSIZE + struct pbuf + alignment). That might not be
> what you expected, but that is how it currently works.

I must admit that I am always in a rush. Did I miss some place in the documentation where this is explained?

Otherwise, please help me understand how this works. If the remote host decides to send 10 fragments, each with just 4 bytes of payload, will lwIP allocate (with or without malloc) 10 pbufs and leave them almost empty? Or will it append to the first pbuf and only allocate the next one when the first is full?
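
To put numbers on my worry (assuming the answer is "one fixed-size pbuf per fragment", and taking the common defaults of TCP_MSS = 536 and PBUF_POOL_BUFSIZE = 592 on a 32-bit target; the exact figures obviously depend on the configuration):

  /* Back-of-the-envelope worst case, not real lwIP code:
     10 fragments x (sizeof(struct pbuf) + PBUF_POOL_BUFSIZE)
     = 10 x (~16 + 592) = ~6080 bytes of pool memory
     pinned down to hold only 10 x 4 = 40 bytes of payload. */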

I seem to remember that appending to pbufs is not always possible, because some pbufs may actually be DMA buffers underneath that you cannot touch. I also found this comment, which I do not yet understand:

 * Currently, the pbuf_custom code is only needed for one specific configuration
 * of IP_FRAG, unless required by external driver/application code. */


When I realised that lwIP was fragmenting quite a lot, I increased TCP_MSS to 1460, because lwIP itself says that the default of 536 bytes is probably too conservative. It happens that PBUF_POOL_BUFSIZE is derived from TCP_MSS (at least by default). Is that potentially making my PBUF_POOL_BUFSIZE too big then?

It is not explicitly mentioned in the comment above PBUF_POOL_BUFSIZE, but I hope that I can make it smaller, and each fragment will then be split across several pbufs if necessary.
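
For reference, this is roughly how opt.h derives the default in the lwIP sources I am looking at (older versions did not have PBUF_LINK_ENCAPSULATION_HLEN); with TCP_MSS = 1460 and a standard Ethernet header it works out to about 1516 bytes per pool element:

  /* Default from opt.h: TCP_MSS plus 40 bytes for the IP and TCP
     headers, plus room for the link-layer header(s): */
  #define PBUF_POOL_BUFSIZE  LWIP_MEM_ALIGN_SIZE(TCP_MSS + 40 + \
                             PBUF_LINK_ENCAPSULATION_HLEN + PBUF_LINK_HLEN)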


> That's your assumption. My assumption is not that an embedded
> system has little RAM but that an embedded system has to
> run stable and cope with what it has. Of course, wasting RAM
> is no good, but using pool elements of constant size
> has advantages over using different-sized allocations.

This is a bold generalisation, and such assertions do not always hold true. Depending on your system, the disadvantages of fixed-size allocations can outweigh their advantages. In any case, it would be good to have a section in the documentation explaining the upsides and downsides, and why a fixed-size allocation was ultimately chosen.

In my scenario, CPU load (or reaction time) is not so important, but overall memory usage is. I also did not want to overload the network with unnecessary fragmentation, which is why I increased TCP_MSS. I am trying to find out the right strategy here. I do not find it easy to grasp all of this with lwIP.
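
Concretely, this is the kind of decoupling I have in mind for my lwipopts.h (just a sketch; whether lwIP and my Ethernet driver actually cope with such small pool elements is exactly what I am trying to find out):

  /* lwipopts.h sketch: keep a large MSS so that TCP itself does not
     cause unnecessary fragmentation on the wire... */
  #define TCP_MSS            1460

  /* ...but override the derived default, so that small received
     packets do not each pin down a ~1.5 KiB pool element: */
  #define PBUF_POOL_BUFSIZE  256
  #define PBUF_POOL_SIZE     32   /* more, but smaller, elements */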


> So you propose to abort receiving the packet when the first fragment arrives
> that is beyond the configured size range. That would work, but I would not see
> this as preventing an attack, as it's much easier to just send multiple
> fragments with different IP_IDs and still waste all your memory. We'll need a
> different approach here, I think.

What else would you propose? Doing nothing at the moment leaves lwIP extremely vulnerable to memory exhaustion due to fragmentation.

So far I have identified 2 weaknesses in this respect (but my understanding is not yet complete):


Weakness 1) Exhaustion of pbuf memory if an attacker sends a huge packet fragmented into pieces. The fragments are not discarded until the packet has been completely reassembled, and there is currently no limit on the maximum size that an IP packet can have. IP_REASS_MAX_PBUFS limits the number of pbufs, not the number of bytes.

It may not take an attacker to trigger this, just someone happily assuming that the target can take 64 KiB UDP (or whatever) packets (because jumbo Ethernet frames are in use on the local network).

Or it may be an accidental oversight or implementation error, because the user did not read in the manual that the device cannot take UDP packets over 1,000 bytes. Just send one big packet by mistake, and pbuf memory is exhausted, impacting all other UDP/TCP/whatever connections.
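
To make my earlier proposal concrete, this is roughly the check I had in mind (hypothetical code, not from lwIP; IP_REASS_MAX_DATAGRAM_SIZE and the helper name are made up):

  #include "lwip/arch.h"  /* for u16_t/u32_t */

  /* Hypothetical limit on the reassembled packet size this device
     is willing to accept: */
  #define IP_REASS_MAX_DATAGRAM_SIZE  1000

  /* Every fragment already implies a lower bound on the final datagram
     size: its byte offset plus its length.  Reject the fragment (and
     ideally free the whole partial queue) as soon as that bound
     exceeds the limit: */
  static int reass_fragment_acceptable(u16_t frag_offset_bytes, u16_t frag_len)
  {
    u32_t implied_size = (u32_t)frag_offset_bytes + frag_len;
    return implied_size <= IP_REASS_MAX_DATAGRAM_SIZE;
  }

As you point out, this alone does not stop an attacker who spreads small fragments over many different IP_IDs, but it would at least protect against the honest-mistake scenarios above.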


Weakness 2) Exhaustion of pbufs due to too many fragments for too many incomplete packets.

Again, it may not take an attacker, just an overloaded network that randomly drops many fragments. That is actually a weakness in IPv4 fragmentation that I have seen documented in many places, and probably one of the reasons why IPv6 severely limits where fragmentation can take place.

From your earlier comments I gather that a mitigation strategy would be to garbage-collect the reassembly queues in the same way lwIP garbage-collects the ooseq queue when it runs low on input pbufs. That seems "easy" enough.
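
Something like this is what I picture (hypothetical code, modelled on the ooseq trimming; struct reass_queue, reass_list and reass_free_queue are names I made up for the reassembly bookkeeping):

  #include "lwip/pbuf.h"

  struct reass_queue;                           /* one incomplete datagram */
  extern struct reass_queue *reass_list;        /* oldest incomplete first */
  void reass_free_queue(struct reass_queue *q); /* frees all its pbufs     */

  /* When a pbuf for a new fragment cannot be allocated, reclaim the
     oldest incomplete reassembly queue and retry once: */
  static struct pbuf *reass_alloc_with_gc(u16_t len)
  {
    struct pbuf *p = pbuf_alloc(PBUF_RAW, len, PBUF_POOL);
    if ((p == NULL) && (reass_list != NULL)) {
      reass_free_queue(reass_list);
      p = pbuf_alloc(PBUF_RAW, len, PBUF_POOL);
    }
    return p;
  }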


I hope I can get to the bottom of this. I do want to develop perfect little devices. 8-)

Best regards,
  rdiez


