http_recv receives a GET request, opens the requested file, and calls http_send_data, which calls http_write, which in turn calls tcp_write.
At this point http_sent is registered as the callback for TCP to invoke once it has received the last ack for all the data it was asked to send. http_sent then calls http_send_data, and the cycle repeats.
So in a high-latency environment, even if the entire send buffer is filled in one go, httpd will not pass more data to TCP until TCP has received the final ack for the previous round of data. Increasing the amount of data handed to TCP per cycle certainly speeds things up, but it does not address the root of the problem.
On Thu, Feb 14, 2013 at 9:07 AM, Simon Goldschmidt
<address@hidden> wrote:
Louis Wells wrote:
> An issue that I have noticed however is that this webserver is incredibly slow in situations with high latency. I initially thought it might be an issue with tcp in lwip, but discovered it is in the way the server is written.
[..]
>
> If anyone has any input of a good way to do this, I'd love to hear it, and I will be sure to send out an updated version of httpd when I find a way to speed it up.
Can't you simply leave out that whole limitation test? I think it's only in there because the webserver was originally meant as a small addition with low priority, and the test was there to prevent it from consuming too much memory that would then not be available for higher-priority applications...
Simon
_______________________________________________
lwip-users mailing list
address@hidden
https://lists.nongnu.org/mailman/listinfo/lwip-users