From: Frédéric Bernon
Subject: [lwip-devel] [task #6935] Problems to be solved with the current socket/netconn API
Date: Tue, 29 May 2007 17:23:49 +0000
User-agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; fr; rv:1.8.1.3) Gecko/20070309 Firefox/2.0.0.3

Follow-up Comment #12, task #6935 (project lwip):

Some details about "where the time is spent":

First, the "sendto" measure is done inside lwip_sendto, from the first
instruction (just after the local variables declaration) until the return
(just before it). That's what David call the "total elapsed time". The
application thread and tcpip_thread have the same priority (to be in the
worth case, it's important for this bench).
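For reference, here is roughly how such a measure can be taken; a minimal
sketch, measuring around the call rather than inside it, and where
sys_now_us() is a hypothetical helper returning a microsecond timestamp from
the target's high-resolution timer (it is not an lwIP function):

  #include <stdio.h>
  #include "lwip/sockets.h"

  extern u32_t sys_now_us(void);   /* hypothetical microsecond timer */

  static int timed_sendto(int s, const void *data, int size,
                          const struct sockaddr *to, socklen_t tolen)
  {
    u32_t start, elapsed;
    int ret;

    start = sys_now_us();
    ret = lwip_sendto(s, (void *)data, size, 0,
                      (struct sockaddr *)to, tolen);
    elapsed = sys_now_us() - start;          /* "total elapsed time" */
    printf("lwip_sendto: ~%lu us\n", (unsigned long)elapsed);
    return ret;
  }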

Some details on the current CVS HEAD code. Of the 204µs total:

* ~31µs before the sys_mbox_post in tcpip_apimsg (so: get_socket,
netbuf_ref+pbuf_alloc, netconn_send, preparing the api_msg struct, and
tcpip_apimsg itself; see the sketch after this list).

* The application task keeps the CPU for ~6µs more, entering the
sys_arch_mbox_fetch.

* Here, there is the "task switch".

* tcpip_thread spends ~6µs finishing its sys_arch_mbox_fetch (note that the
delay between the "application post" and the "tcpip_thread fetch" is around
~12µs).

* There are ~18µs between the previous step and udp_sendto (decrementing the
next timeout, unpacking tcpip_msg, unpacking api_msg, some checks in api_msg).

* udp_sendto takes ~74µs.

* The return and the post take ~6µs; tcpip_thread keeps the CPU for up to
~32µs.

* Here, there is the "task switch".

* The application task terminates the "fetch" in ~12µs.

* Terminating lwip_sendto takes another ~17µs.

Note that the sum comes out slightly different from 204µs, simply because I'm
not giving exact measurements. So treat these as "relative" values...
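To situate these numbers, here is a rough sketch of the path being measured;
the two "task switches" above are the two post/fetch pairs:

  /* Rough sketch (from memory, not the exact CVS HEAD source) of the
     message-passing round trip measured above. */
  err_t tcpip_apimsg(struct api_msg *apimsg)
  {
    struct tcpip_msg msg;

    msg.type = TCPIP_MSG_API;
    msg.msg.apimsg = apimsg;
    /* first "task switch": wake tcpip_thread */
    sys_mbox_post(mbox, &msg);
    /* second "task switch": block until tcpip_thread posts the completion */
    sys_arch_mbox_fetch(apimsg->msg.conn->mbox, NULL, 0);
    return ERR_OK;
  }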

About priority, I would like to measure the worst concurrent case,
to maximize the delay before the end of tcpip_apimsg.

About the idea of changing tcpip_apimsg: once again, I repeat that it's
experimental, and just meant to give ideas for the "next" sequential layer. And
there are some exceptions, the main one being of course "connect": in all the
other cases, the processing is synchronous, done within a single iteration of
the tcpip_thread main loop. But "connect" is different: the real "end of
processing" happens later, after an ip_input or a timer (by invoking the
"do_connected" or "err_tcp" callback). So we could still lock the core in the
same way, but perhaps we should keep the current mechanism for that case...
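
For concreteness, the kind of change I mean looks roughly like this; a sketch
only, assuming a global "core" mutex (the names are illustrative, this is not
the actual experimental patch):

  /* Sketch only: the application thread takes a global core lock and runs
     the operation directly, instead of posting it to tcpip_thread and
     blocking. This removes both task switches from the breakdown above. */
  static sys_sem_t lock_tcpip_core;   /* assumed created at tcpip_init() */

  err_t tcpip_apimsg_locked(struct api_msg *apimsg)
  {
    sys_sem_wait(lock_tcpip_core);     /* exclude tcpip_thread */
    apimsg->function(&(apimsg->msg));  /* run the do_* function in-thread */
    sys_sem_signal(lock_tcpip_core);
    return ERR_OK;
  }

"connect" is the exception described above: since do_connected/err_tcp fire
later from tcpip_thread's own context, the completion would still have to be
waited for separately.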

I attach a JPEG image of the tool I use for the analysis (to give you an
idea)...

(file #12899)
    _______________________________________________________

Additional Item Attachment:

File name: API Layers measures.JPG        Size: 59 KB


    _______________________________________________________

Reply to this item at:

  <http://savannah.nongnu.org/task/?6935>

_______________________________________________
  Message posted via Savannah
  http://savannah.nongnu.org/




