From: Muhamad Ikhwan Ismail
Subject: RE: [lwip-users] Transferring large data fast and pointing pbufs directly to Ethernet receive buffer
Date: Wed, 12 Dec 2007 06:51:09 +0000

I am myself working on PowerPC and know a little bit about the buffer descriptors.
What I did was:

1. Upon reception, the data is copied from the BDs to preallocated pbufs (type PBUF_POOL) using memcpy. Since the Dual Port RAM where the BDs are located is rather limited, I made 3 pbufs of 512 bytes each, with the payloads pointing to external memory. That takes care of the alignment problem too.

2. Upon transmission, I just gave the address of the pbufs' payload to the Transmit BD and set the flags and length (a sketch of both steps follows below).
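
For concreteness, a minimal sketch of both steps against the lwIP 1.x pbuf API; the struct bd layout, the BD_FLAG_READY bit and the function names are placeholders for the PowerPC-specific descriptor handling, not the real definitions:

#include <string.h>
#include "lwip/pbuf.h"

/* Placeholder descriptor layout and flag -- the real PowerPC BDs differ. */
struct bd {
  void  *addr;
  u16_t  len;
  u16_t  flags;
};
#define BD_FLAG_READY 0x8000

/* 1. Reception: copy the frame out of the (small) Dual Port RAM buffer
 *    into a pool pbuf whose payload lives in external memory. */
static struct pbuf *rx_copy_from_bd(const struct bd *rxbd)
{
  struct pbuf *p, *q;
  u16_t copied = 0;

  p = pbuf_alloc(PBUF_RAW, rxbd->len, PBUF_POOL);
  if (p == NULL) {
    return NULL;                        /* pool exhausted: drop the frame */
  }
  for (q = p; q != NULL; q = q->next) { /* also handles a chained pbuf */
    memcpy(q->payload, (u8_t *)rxbd->addr + copied, q->len);
    copied += q->len;
  }
  return p;                             /* hand this to netif->input() */
}

/* 2. Transmission: no copy -- point one transmit BD at each pbuf in the
 *    chain and let the DMA read straight out of the pbuf memory.
 *    The pbuf must not be freed until the DMA has finished with it
 *    (pbuf_ref() now, pbuf_free() from the transmit-complete handler). */
static void tx_point_bds_at_pbuf(struct bd *txbd, struct pbuf *p)
{
  struct pbuf *q;

  for (q = p; q != NULL; q = q->next, txbd++) {
    txbd->addr  = q->payload;
    txbd->len   = q->len;
    txbd->flags = BD_FLAG_READY;        /* "last"/wrap bits omitted */
  }
}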




> Date: Tue, 11 Dec 2007 22:40:39 +0000
> From: address@hidden
> To: address@hidden
> Subject: Re: [lwip-users] Transferring large data fast and pointing pbufs directly to Ethernet receive buffers
>
> address@hidden wrote:
> >> Secondly, the PowerPC loads packets directly into buffer descriptor
> >> memory. It looks possible (best) at low_level_input to point pbufs
> >> right into the received packet without copying. If I’m correct that
> >> this can be done, the question I have is, where can I know the data
> >> was read by the upper layer, to be able to free the pbuf and at the same
> >> time to be able to free memory for the Ethernet controller?
> >>
> > Normally, you would allocate a pbuf including data buffer: type
> > PBUF_POOL and make sure the data buffer is in one piece (p->next ==
> > NULL; for that, PBUF_POOL_BUFSIZE has to be set to 1516 for ethernet).
>
> Just being pedantic I know, but that isn't strictly true - I'm working on
> one board which can scatter/gather pbufs and where the memory pointed to
> by the buffer descriptors is fixed at 128 bytes.
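
For what it's worth, a rough sketch of that scatter/gather arrangement, assuming PBUF_POOL_BUFSIZE has been set to the fixed per-descriptor buffer size (128 here); program_rx_bd() is only a stand-in for the board-specific descriptor setup:

#include "lwip/pbuf.h"

/* Placeholder: program one receive descriptor with a buffer address/size.
 * The real version writes the board-specific BD fields. */
static void program_rx_bd(int idx, void *buf, u16_t len)
{
  (void)idx; (void)buf; (void)len;
}

/* With PBUF_POOL_BUFSIZE equal to the descriptor buffer size, pbuf_alloc()
 * hands back a chain of pool pbufs; one descriptor is then pointed at each
 * segment's payload. */
static struct pbuf *setup_scatter_rx(u16_t max_frame_len)
{
  struct pbuf *p, *q;
  int i = 0;

  p = pbuf_alloc(PBUF_RAW, max_frame_len, PBUF_POOL);
  if (p == NULL) {
    return NULL;
  }
  for (q = p; q != NULL; q = q->next) {
    program_rx_bd(i++, q->payload, q->len);
  }
  return p;   /* keep this around to pass to netif->input() later */
}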
>
> However if you do have the memory it is more efficient to use the full
> 1516. The question then is whether you have the memory because _every_
> packet received will occupy the full 1516 bytes, and if you're keeping
> them around till you've got the lot, that can add up. If you are receiving
> your 4MB image in 256 byte chunks, you'd need at least 24MB of RAM ;).
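
(For reference, the arithmetic behind that figure: 4 MB / 256 B per chunk is 16384 packets, each pinning a full 1516-byte pool pbuf until the rest arrive, so 16384 x 1516 B comes to roughly 24 MB of pbuf pool.)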
>
> NB the value of 1516 bytes can depend on your hardware, e.g. if your
> hardware also transfers the CRCs. See
> http://sd.wareonearth.com/~phil/net/jumbo/ for example.
>
> > Then you set the ETH DMA to receive to p->payload. When a packet is
> > received, subtract the offset from p->payload so you get the pbuf
> > pointer, which you can pass into the stack.
>
> Just to clarify what Simon is saying, pool pbufs consist of a 'struct
> pbuf' immediately followed in memory by the payload - so if you have the
> payload pointer, you know where the struct pbuf is.
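
Putting Simon's allocation advice and that layout detail together, a rough sketch of the zero-copy receive path, assuming PBUF_POOL_BUFSIZE holds a full frame, the DMA can reach the pbuf pool, and with the descriptor handling reduced to a plain buffer address:

#include "lwip/pbuf.h"
#include "lwip/debug.h"

static u16_t payload_offset;  /* distance from a pool pbuf to its payload */

/* Allocate a full-frame pool pbuf and return the address to program into
 * the receive descriptor. */
static struct pbuf *arm_rx_buffer(void **dma_addr_out)
{
  struct pbuf *p = pbuf_alloc(PBUF_RAW, PBUF_POOL_BUFSIZE, PBUF_POOL);
  if (p == NULL) {
    return NULL;
  }
  LWIP_ASSERT("need a single-segment pbuf", p->next == NULL);
  payload_offset = (u16_t)((u8_t *)p->payload - (u8_t *)p);
  *dma_addr_out = p->payload;           /* DMA writes straight into the pbuf */
  return p;
}

/* Receive handler: the descriptor only reports the buffer address it was
 * programmed with, so subtract the offset to recover the owning pbuf. */
static struct pbuf *pbuf_from_dma_addr(void *dma_addr, u16_t received_len)
{
  struct pbuf *p = (struct pbuf *)((u8_t *)dma_addr - payload_offset);

  pbuf_realloc(p, received_len);        /* trim to the actual frame length */
  return p;                             /* pass to netif->input(); the stack
                                           frees the pbuf once it is done */
}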
>
> > Of course, this only works
> > if your ETH DMA can receive to the memory where the memp pools are
> > allocated.
> >
> > Unfortunately, using a PBUF_REF for receiving input packets doesn't work
> > so well, I think...
>
> Indeed. I've certainly only been working in terms of (inherently
> preallocated) pool pbufs.
>
> Jifl
> --
> eCosCentric Limited http://www.eCosCentric.com/ The eCos experts
> Barnwell House, Barnwell Drive, Cambridge, UK. Tel: +44 1223 245571
> Registered in England and Wales: Reg No 4422071.
> ------["The best things in life aren't things."]------ Opinions==mine
>
>
> _______________________________________________
> lwip-users mailing list
> address@hidden
> http://lists.nongnu.org/mailman/listinfo/lwip-users

