From: Christian Grothoff
Subject: Re: [libmicrohttpd] single-threaded daemon, multiple pending requests, responses batched
Date: Sun, 10 Apr 2016 23:32:36 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Icedove/38.6.0

On 04/10/2016 10:11 PM, Frank Ch. Eigler wrote:
> Hi -
> 
> One of the programs I work on uses libmicrohttpd to serve a web
> protocol.  It uses vanilla single-threaded select mode.
> 
> Some of the requests require time-consuming operations on the order
> of seconds.  A web browser might make multiple such requests in
> parallel to the MHD-based process (on separate sockets).  The problem
> is that the responses to the requests are batched, so nothing is sent
> out until all requests have finished processing.

That is, eh, odd, and suggests you have some bug in your application
logic (or found a bug in ours).  Responses should be delivered as soon
as they are available and the respective TCP socket is ready for
transmission.  However, especially if you use "external select", your
application might be responsible for "kicking" MHD into action (by
unblocking a select() call that might be hanging while you "finished"
assembling the reply).
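
For concreteness, here is a minimal (untested) sketch of one iteration
of such an external-select loop; the `daemon' handle is assumed to come
from your MHD_start_daemon() call:

  fd_set rs, ws, es;
  MHD_socket max = 0;
  MHD_UNSIGNED_LONG_LONG mhd_timeout;
  struct timeval tv;
  struct timeval *tvp = NULL;

  FD_ZERO (&rs);
  FD_ZERO (&ws);
  FD_ZERO (&es);
  /* ask MHD which FDs it currently cares about */
  if (MHD_YES != MHD_get_fdset (daemon, &rs, &ws, &es, &max))
    abort (); /* daemon is in a broken state */
  if (MHD_YES == MHD_get_timeout (daemon, &mhd_timeout))
    {
      tv.tv_sec = mhd_timeout / 1000;
      tv.tv_usec = (mhd_timeout % 1000) * 1000;
      tvp = &tv;
    }
  /* If a response is finished from outside this loop, select() may
     keep sleeping on a stale interest set; add e.g. a self-pipe FD
     to `rs' and write a byte to it to wake the loop up. */
  if ( (-1 == select (max + 1, &rs, &ws, &es, tvp)) &&
       (EINTR != errno) )
    abort ();
  MHD_run (daemon);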

> What seems to be happening is that the MHD_run function recognizes
> that the various incoming http connections are all ready for reading,
> does the recvfrom() to fetch the HTTP instruction, then dispatches
> each one, one at a time, to the MHD app for handling.  Each callback
> enqueues proper response documents (say 30K each), and returns
> MHD_YES.  But MHD_run does not try to send those responses right away.
> 
> I suspect this may be because MHD_run_from_select changes
> event_loop_info state of the connection from _READ to _WRITE at the
> bottom of the while (pos ... next) loop.  Even if the state changes,
> the loop still just proceeds to the next connection, instead of trying
> to send the available data via the pos->write_handler().

Sure, but then the FD should end up in the select() set for writing,
select() should tell us the FD is ready, and then we re-enter the loop
and do the write.  We don't immediately go to write() as we cannot be
sure that the TCP connection isn't still full from the previous response
(theoretically possible with pipelining).  Still, that's one extra run
around the event loop (<1ms), and not something that should be visible
to your application at all.  As you experience (presumably) much longer
delays, the cause ought to be something else.

> The effect is that each MHD_run's worth of output takes until the next
> invocation of MHD_run in order to actually transmit output.  This
> increases the TTFB latency of *all* responses to the sum of the
> processing time of *all* concurrently-arriving requests.

Are you, by chance, doing significant blocking operations within the
single-threaded event loop? (Ouch...)  I understand that our extra run
through select() might then be, eh, "annoying", but the big issue is
your blocking behavior in a single-threaded server.
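
If so, the cleaner fix is to move the slow work out of the event loop:
suspend the connection, compute on a worker thread, and resume the
connection once the response is ready.  Roughly like the sketch below,
where enqueue_slow_work() and take_finished_response() are hypothetical
glue you would have to write; note that with external select you also
have to wake your select() once the worker resumes the connection:

  static int
  handler (void *cls, struct MHD_Connection *connection,
           const char *url, const char *method, const char *version,
           const char *upload_data, size_t *upload_data_size,
           void **con_cls)
  {
    struct MHD_Response *response;
    int ret;

    if (NULL == *con_cls)
      {
        /* First call for this request: park the connection so the
           event loop stays responsive, and hand the slow job to a
           worker thread (enqueue_slow_work() is hypothetical). */
        *con_cls = connection;
        MHD_suspend_connection (connection);
        enqueue_slow_work (connection);
        return MHD_YES;
      }
    /* We only get here again after the worker thread has called
       MHD_resume_connection(); take_finished_response() is the
       hypothetical hand-off of the response the worker built. */
    response = take_finished_response (connection);
    ret = MHD_queue_response (connection, MHD_HTTP_OK, response);
    MHD_destroy_response (response);
    return ret;
  }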

Anyway, we could probably do something about this (reducing your pain,
but not solving the main problem). I'm thinking of something along the
lines of the attached patch.  It MAY help, but ONLY if you're using
epoll() and TURBO mode, so you must pass both:

 MHD_USE_EPOLL_LINUX_ONLY | MHD_USE_EPOLL_TURBO

to MHD_start_daemon() for this diff to do anything at all, and even
then only IF it works as intended and IF my understanding of your
problem is correct.
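
For illustration, the daemon start would then look roughly like this
(the port and the handler name are placeholders from your application):

  struct MHD_Daemon *d;

  d = MHD_start_daemon (MHD_USE_EPOLL_LINUX_ONLY | MHD_USE_EPOLL_TURBO,
                        8080,           /* placeholder port */
                        NULL, NULL,     /* no accept policy callback */
                        &handler, NULL, /* your access handler */
                        MHD_OPTION_END);
  if (NULL == d)
    {
      /* epoll is Linux-only; the daemon fails to start elsewhere */
    }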

Note that I won't push this change unless you do tell me it helped.

> I realize that single-threaded mode is not a great fit for this
> application.  But is there anything we can hope for in terms of
> tweaking the state machine, using MHD options or funky flow-control
> MHD calls, or reentrant MHD_run* calls to improve the latency?

The above may reduce latency a bit, but it'll still suck because
obviously the server won't do much in terms of accepting connections
while you're blocking the one thread. OTOH, if it is useful, I don't
think this will hurt anyone in practice and could thus be put into SVN HEAD.

Happy hacking!

Christian

Attachment: turbo-extend.diff
Description: Text Data

Attachment: signature.asc
Description: OpenPGP digital signature

