Re: [lwip-users] mem_malloc(): memory fragmentation
From: Jonathan Larmour
Subject: Re: [lwip-users] mem_malloc(): memory fragmentation
Date: Mon, 23 Oct 2006 13:42:41 +0100
User-agent: Thunderbird 1.5.0.7 (X11/20060913)
Goldschmidt Simon wrote:
> Hi,
> I'm currently working on an embedded product which should use the
> lwIP stack. Since the application should be running for a _very_ long
> time, the memory allocation used by mem_malloc() is not really
> appropriate, since it seems to be using a normal malloc()-like heap.
> My question is: has anyone ever bothered to somehow avoid using a heap
> and use pools instead?
> I would favor solving this problem by 'fixing' savannah bug #3031,
> submitted by Leon Woestenberg, which basically proposes getting rid of
> PBUF_RAM and using pools instead. As other modules also use
> mem_malloc() (dhcp/snmp/loopif),
DHCP and loopif at least hardly use any space.
> maybe a better solution might be to implement
> mem_malloc() as different pools and leave the PBUF_RAM implementation
> as it is, since it would then be allocated from pools.
Note that despite what's implied in that bug, IMHO you can't actually let
it be the current pbuf_alloc(..., PBUF_POOL): otherwise, if you use up all
the pbufs with TX data, you won't have room for any RX packets, including
the TCP ACK packets that would allow you to free some of your TX packets.
So either RX and TX packets should be allocated from different pools, or
there should be a low-water mark on the pool for TX allocations, in order
to reserve a minimum number of packets for RX.
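As a rough sketch of that low-water-mark idea (the names and pool sizes here are made up for illustration; this is not lwIP API):

```c
#include <stddef.h>

/* Example values; real numbers would come from the port's configuration. */
#define PBUF_POOL_SIZE        16  /* total pbufs in the shared pool */
#define PBUF_POOL_RX_RESERVE   4  /* minimum pbufs kept back for RX */

static size_t pbufs_free = PBUF_POOL_SIZE;

/* TX allocation: refuse once granting the pbuf would eat into the RX
 * reserve, so the ACKs for in-flight TX data can still be received. */
static int tx_pbuf_alloc(void)
{
    if (pbufs_free <= PBUF_POOL_RX_RESERVE) {
        return 0;               /* would dip into the RX reserve */
    }
    pbufs_free--;
    return 1;
}

/* RX allocation: may use the whole pool, including the reserve. */
static int rx_pbuf_alloc(void)
{
    if (pbufs_free == 0) {
        return 0;
    }
    pbufs_free--;
    return 1;
}

/* Returning a pbuf (either direction) just refills the pool. */
static void pbuf_release(void)
{
    pbufs_free++;
}
```

The point is only that TX and RX take different failure paths against the same counter; a real implementation would of course hand out actual pbufs and need locking.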
> The downside to this is that more RAM is needed, since only three or
> four pools with different block sizes would be created, but this way
> you can calculate the memory needs based on application throughput and
> e.g. TCP_WND, which you can't do if your memory gets fragmented...
> Any comments?
I agree that if there were just a set of fixed size pbuf pools of various
sizes, it could waste a lot of memory.
One good solution, if using 2^n sized pools, is to use a buddy allocator[1]
to divide up a larger contiguous space, so it may not be as wasteful as you
think. One difference is that a normal buddy allocator would, for example,
return a 2 Kbyte buffer if you request 1025 bytes. An lwIP implementation
could instead aim for maximum efficiency and allocate that as a 1024-byte
buffer plus a 64-byte buffer (or whatever the lowest granularity is)
chained together.
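The "chain instead of round up" part could look roughly like this; the function only computes the chunk sizes to chain, and the granularities are invented example values, not anything from lwIP:

```c
#include <stddef.h>

/* Example pool granularities; a real port would match these to its
 * configured pbuf pool block sizes. */
#define MIN_CHUNK   64u   /* smallest pool block size */
#define MAX_CHUNK 2048u   /* largest pool block size  */

/* Fill sizes[] with the block sizes that would be chained together to
 * cover 'len' bytes, always picking the largest power-of-two block that
 * does not exceed the remainder (never going below MIN_CHUNK).
 * Returns the number of chunks written. */
static size_t split_into_chunks(size_t len, size_t sizes[], size_t max_chunks)
{
    size_t n = 0;
    while (len > 0 && n < max_chunks) {
        size_t c = MAX_CHUNK;
        while (c > MIN_CHUNK && c > len) {
            c /= 2;             /* shrink toward the remainder */
        }
        sizes[n++] = c;
        len = (len > c) ? len - c : 0;
    }
    return n;
}
```

For the 1025-byte request from the example above this yields 1024 + 64 rather than rounding up to 2048, at the cost of the caller having to walk a two-buffer chain.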
But all this would be a non-trivial bit of coding, so I'm sure people would
be grateful if you have time to do it (unfortunately I don't, as I have a
lot of other things to address in my own lwIP work).
I could also believe the result will use a fair bit more code space
than the present mem_malloc().
Jifl
[1] Just in case: http://en.wikipedia.org/wiki/Buddy_memory_allocation
--
eCosCentric http://www.eCosCentric.com/ The eCos and RedBoot experts
------["The best things in life aren't things."]------ Opinions==mine