On Mon, 2010-04-12 at 23:00 +0800, edgar wrote:
> There are two shortcomings:
> 1. The variable time_needed only accounts for the time spent blocked in
> sys_arch_mbox_fetch. timeouts->next->time must also be reduced by the
> time spent waiting for and processing the message, such as the time
> spent in tcp_input.
Any time spent processing the packets should be very small compared to
the time spent waiting. We're talking about times in milliseconds here:
1 ms would be a long time to spend processing a packet, so including
that time makes almost no difference.
The problem is exactly that: if tcpip_thread receives messages very quickly, the fetch from the mbox never blocks, so the expiry keeps being pushed back and tcp_slowtmr is no longer called every 500 ms. tcp_ticks counts the invocations of tcp_slowtmr, and many functions, such as keepalive, depend on tcp_ticks. That was my original point; I did not describe it clearly last time :).
> 2. The timer handler is delayed when the flow enters the blue branch.
What's the blue branch?
Sorry, I forgot that code highlighted in blue cannot be seen in email.
} else {
  timeouts->next->time = 0;
}
I mean the above code: if the fetch returns with a message and the elapsed time (time_needed != 0) equals or exceeds
timeouts->next->time, the timeout has already expired, so the handler at the head of the list must be invoked at that moment, not on the next loop iteration. If each message takes more time to process, the timer is delayed further.
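One way to express the suggestion is the following hedged sketch. The struct is a simplified stand-in loosely modeled on lwIP's timeout list, and credit_elapsed is an invented helper, not lwIP's API: instead of setting the head timeout's time to 0 and waiting for the next loop, a timeout that is already due is unlinked and handed back for immediate handling:

```c
#include <stddef.h>

/* Simplified stand-in for an lwIP-style timeout list entry. */
struct sys_timeo {
  struct sys_timeo *next;
  unsigned int time;            /* ms until this timeout expires */
  void (*handler)(void *arg);
  void *arg;
};

/* Credit the time that elapsed during the mbox fetch against the head
 * timeout.  Returns the timeout that became due, so the caller can run
 * its handler immediately (rather than zeroing "time" and only firing
 * on the next loop iteration), or NULL if the head was only shortened. */
static struct sys_timeo *
credit_elapsed(struct sys_timeo **list, unsigned int time_needed)
{
  struct sys_timeo *t = *list;

  if (t == NULL || time_needed < t->time) {
    if (t != NULL)
      t->time -= time_needed;   /* not due yet: just shorten it */
    return NULL;
  }
  *list = t->next;              /* due now: unlink it            */
  return t;                     /* caller runs t->handler(t->arg) */
}
```

The caller would invoke the returned entry's handler straight away, so a burst of messages cannot push an already-expired timeout into a later loop iteration.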
Are you seeing a problem with the way that timeout times are handled, or
do you just think the code could be improved here? If there's a real
problem you're trying to sort out it would be good to know what it is.
I had implemented an FTP server on lwIP, but the connection between server and client was dropped from time to time. While tracking down that bug I found that tcp_ticks does not reflect the real running time, and I think this code could be improved.
Edgar