
Re: [lwip-users] snd_queuelen underflow frustration


From: JM
Subject: Re: [lwip-users] snd_queuelen underflow frustration
Date: Sun, 25 Mar 2018 00:29:51 +0000 (UTC)

Coincidentally, I solved my problem just today and posted about it. I was freeing the pbuf in the low-level transmit function, which is apparently not allowed: lwIP frees the pbuf itself after the output call returns. It works great now. However, I now get an Rx overflow under load, so I have some other issue, but I think that will be easier to track down as it's a hardware/DMA issue.
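For anyone who hits the same thing: lwIP's linkoutput hook does not own the pbuf it is handed, so the driver must not free it; the stack releases it after the call returns. A minimal sketch of the corrected shape (mac_write_tx_buffer() and mac_start_transmit() are hypothetical hardware hooks, not lwIP API):

    /* Driver transmit hook registered as netif->linkoutput.
     * lwIP keeps ownership of p and frees it after this returns,
     * so do NOT call pbuf_free(p) here. */
    static err_t
    low_level_output(struct netif *netif, struct pbuf *p)
    {
      struct pbuf *q;
      LWIP_UNUSED_ARG(netif);

      /* a frame may be chained across several pbufs */
      for (q = p; q != NULL; q = q->next) {
        mac_write_tx_buffer(q->payload, q->len);  /* hypothetical */
      }
      mac_start_transmit(p->tot_len);             /* hypothetical */

      /* freeing p here causes a double free when TCP later releases
       * the queued segment -- that is what underflowed snd_queuelen */
      return ERR_OK;
    }

If the hardware still needs the buffer after this returns (zero-copy DMA), the usual pattern is pbuf_ref(p) here and pbuf_free(p) in the transmit-complete interrupt instead.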

On Sunday, March 11, 2018, 5:10:34 PM CDT, Blind Bob <address@hidden> wrote:


You don't say whether the packets are 1514 bytes in total or 1514 bytes each.

But be clear that the usual maximum MTU for a TCP/IP packet is 1500 bytes. Once you add in the protocol headers (IP and TCP), the usable payload is less, and if you are running on a VM, where the packet gets an outer wrapper to pass it up the VM's stack, it is worse still; 1450 is a safe value.
If you try to pack more than 1450 bytes at a time into a TCP/IP packet, the packet has to be split by the hardware or the wrapper software.

So I think you are seeing an issue with the data being split; increasing the packet size takes you away from the boundary condition, and then it works.
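If you want to enforce that from the lwIP side, TCP_MSS in lwipopts.h is the knob; a fragment along the lines of the 1450-byte suggestion above (values are illustrative, not a recommendation):

    /* lwipopts.h fragment -- illustrative sizing only */
    #define TCP_MSS            1450   /* below the 1460-byte Ethernet MSS limit */
    #define TCP_SND_BUF        (4 * TCP_MSS)
    #define TCP_SND_QUEUELEN   (4 * (TCP_SND_BUF) / (TCP_MSS))

With the MSS capped, lwIP never builds a segment that the hardware or wrapper software would have to split.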

On Mar 11, 2018, at 2:27 PM, goldsimon <address@hidden> wrote:

If the error persists with the current version, please open a bug at Savannah and upload everything required to reproduce it.

Thanks,
Simon



On 11 March 2018 at 03:59:09 CET, JM <address@hidden> wrote:
I'm trying to port lwIP 1.4.1 and httpserver_raw to a PIC32MZ. No RTOS.

After a web client establishes a TCP connection, it sends a GET to httpserver_raw/lwIP, which responds with two large packets (1514 bytes each); the client then sends an ACK. At that point an underflow occurs in tcp_in.c at line 1026: pbuf_clen() returns MEMP_NUM_PBUF no matter what MEMP_NUM_PBUF is set to.
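(For reference, pcb->snd_queuelen is a u16_t, so decrementing it below zero wraps around instead of going negative; a trivial standalone illustration with made-up numbers:)

    #include <stdio.h>

    typedef unsigned short u16_t;     /* matches lwIP's u16_t on this port */

    int main(void)
    {
      u16_t snd_queuelen = 1;         /* TCP thinks one pbuf is queued */
      snd_queuelen -= 3;              /* an ACK frees a 3-pbuf chain */
      printf("%u\n", snd_queuelen);   /* prints 65534 -- the "crazy value" */
      return 0;
    }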

Here's the thing: I captured the exchange with Wireshark, converted the packets sent from the client to lwIP into C arrays, and put them into the Microchip MPLAB simulator with optimization turned off. After initializing lwIP I simply feed each packet into ethernet_input(), and it *still* does the same thing. I am not simulating any microcontroller hardware; I'm only running code. No interrupts, nothing. Should it even be possible to cause this by only feeding simulated packets into lwIP?
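The injection step is nothing exotic; it looks roughly like this (inject_frame() is just my wrapper name, and the frame arrays come from Wireshark's C-array export):

    #include "lwip/pbuf.h"
    #include "lwip/netif.h"
    #include "netif/etharp.h"   /* declares ethernet_input() in 1.4.1 */

    /* Copy one captured Ethernet frame into a fresh pbuf and hand it
     * to lwIP, just as a real driver's receive path would. */
    static void
    inject_frame(struct netif *netif, const unsigned char *frame, u16_t len)
    {
      struct pbuf *p = pbuf_alloc(PBUF_RAW, len, PBUF_POOL);
      if (p == NULL) {
        return;                  /* pbuf pool exhausted */
      }
      pbuf_take(p, frame, len);  /* copy the captured bytes in */
      if (ethernet_input(p, netif) != ERR_OK) {
        pbuf_free(p);            /* lwIP did not take ownership */
      }
    }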

I deleted the lwIP and httpserver_raw source from my project and replaced it with fresh copies retrieved from the lwIP website, and it still fails in the same way. Other than inserting a simple if() statement in tcp_in.c to catch pcb->snd_queuelen hitting some crazy value, and using a slightly larger web page for httpserver_raw, it's all stock.
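The catch itself is nothing clever; roughly (the 1000 threshold is arbitrary, just far above any sane queue length):

    /* dropped into tcp_in.c near the snd_queuelen accounting */
    if (pcb->snd_queuelen > 1000) {
      LWIP_DEBUGF(TCP_DEBUG, ("snd_queuelen blew up: %"U16_F"\n",
                              pcb->snd_queuelen));
      for (;;);   /* trap here so the debugger can inspect the pcb */
    }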

I tried the fsdata.c I'm testing with on another lwIP 1.4.1 implementation that's successfully running on a TI ARM, and it worked beautifully. I'm at my wit's end with this. The nice thing is that the simulation behaves just like the real hardware, so I can step through the entire process. The difficulty is following what's going on.


What can I look for? This isn't making any sense. I'd really like to get this working.

