libmicrohttpd
Re: [libmicrohttpd] Trouble getting a response sent from a separate worker thread (with external select)


From: Tom Cornell
Subject: Re: [libmicrohttpd] Trouble getting a response sent from a separate worker thread (with external select)
Date: Wed, 5 Nov 2014 17:29:55 +0000

> Date: Wed, 5 Nov 2014 07:47:19 +0000
> From: Marcos Pindado Sebastian <address@hidden>
> To: Christian Grothoff <address@hidden>, "address@hidden"
>       <address@hidden>
> Subject: Re: [libmicrohttpd] Trouble getting a response sent from a
>       separate worker thread (with external select)
> 
> Hi all, may I share some comments on this scenario?
> We have just implemented the external select with asynchronous tasks
> performed in different threads on CentOS/Red Hat with 0.37-0.38, so it
> certainly works.
> 
> 1. You have to supply your own file descriptor to the external select in order
> to notify the select thread of the arrival of new messages. So the scenario
> should be:
>  - In the access_handler you create an async task (thread).
>  - The task finishes and notifies the main thread (waiting in select) by
> writing to an FD.
>  - The main thread calls MHD_run and the access_handler gets called again,
> this time queuing the response.
> 
> So one thing you must do to make this work is to call MHD functions only on the
> external select thread, and to keep some "state" in the access_handler,
> knowing that it can be called several times and should only queue the
> response once the worker thread has finished.
> 
> 2. This supplied file descriptor can be anything readable+writeable and
> should be set in your fdset. We have used pipes (the old way) and eventfds
> (in recent Linux kernels), which are optimized.
> In each iteration, the external select thread calls MHD_get_fdset and then the
> eventfd is added to the set. This fd is known by the worker threads:
>       - The worker threads do the job and insert the response into a shared
> memory object (synchronized with a mutex, obviously).
>       - The worker threads write to the fd => the select is notified and the
> main thread wakes up.
> 
> 3. The shared memory should obviously be created using the void** con_cls
> facility in the access_handler, creating a structure or class. The transaction
> state should be stored there as well (similar to the POST examples).
> 
> 4. About suspend/resume: if using external SELECT and NOT EPOLL (Linux
> only), we found it unnecessary to use suspend+resume. In fact, suspending
> the connection has one disadvantage: you do not notice when the client
> disconnects, or rather, you do not notice at the moment the disconnect
> happens.
> BUT, if using external SELECT with EPOLL (Linux only), MHD does busy-waiting,
> and to avoid this you should call suspend+resume. By busy-waiting I mean
> that MHD begins to call the access_handler in a loop until a response is
> queued.
> To handle disconnects you should pass a request_completed_callback to
> MHD and check the termination code.
> 
> EPOLL+external+busy-waiting is something I would like to check against the
> sources when I have time; fixing that would permit MHD to handle thousands
> of connections in this mode.
> 
> Best regards
> Marcos

Thank you Marcos. I had actually tried using an eventfd to signal from 
the worker thread when it was ready. (I got that from an earlier 
conversation on the mailing list between you and Christian, I believe.)
It didn't work, but it turns out that that was my fault (no big surprise 
there), and I was in fact basically one line of code away from working code.
Turns out that although I took great care to make sure the 'max' argument 
to select included both my eventfd and MHD's fds, I neglected to actually 
set the bit for my eventfd in the read-fds bitmask. So my attempts to 
signal the main thread were getting masked out (D'oh!).
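Since the fix really was one line, here is a minimal standalone sketch of the mistake and its cure, using a bare eventfd without MHD (function and variable names are mine, not from MHD): passing `do_fd_set = 0` reproduces the bug, where select() never reports the signal because the bit was never set.

```c
#include <assert.h>
#include <stdint.h>
#include <sys/eventfd.h>
#include <sys/select.h>
#include <unistd.h>

/* Returns 1 if the eventfd signal is seen by select(), 0 if it was
   "masked out" -- i.e. reproduces the bug when do_fd_set is 0. */
int demo_select_wakeup(int do_fd_set)
{
    int evt_fd = eventfd(0, 0);

    /* A worker signals readiness by writing an 8-byte counter value. */
    uint64_t one = 1;
    write(evt_fd, &one, sizeof(one));

    fd_set rs;
    FD_ZERO(&rs);
    /* In real code, MHD_get_fdset() fills rs/ws/es and updates max first. */
    if (do_fd_set)
        FD_SET(evt_fd, &rs);     /* <-- the one line I had forgotten */
    int max = evt_fd;            /* max of MHD's max and evt_fd */

    struct timeval tv = { 0, 0 };   /* poll, don't block, for the demo */
    int n = select(max + 1, &rs, NULL, NULL, &tv);
    int seen = (n > 0) && FD_ISSET(evt_fd, &rs);
    close(evt_fd);
    return seen;
}
```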

For reference, in case someone is searching the archive in the future, 
this is the essential outline of my little experiment. Wiser heads may 
want to suggest corrections.

1. Setup

* For this simple experiment, I just made a single global eventfd, initialized 
to 0.
* Call MHD_start_daemon with the MHD_USE_SUSPEND_RESUME flag (though 
according to Marcos, since I'm using select, not epoll, this may not be necessary?
Or even advisable?).

2. Main Event Loop

* Construct and zero out the fd_set objects, call MHD_get_fdset 
to initialize them with the fds that MHD cares about.
* Set the bit corresponding to the global eventfd.
* Make sure 'max' is the max of MHD's value and the eventfd. 
(Though if the eventfd is created early on, this is probably always equal 
to MHD's max.)
* Call select().
* If FD_ISSET(evt_fd, &rs), read it (to clear it).
(If multiple workers have written to it, then maybe a single read will not 
clear it? 
I am only working with a single worker thread in my experiments, so this is 
still a messy area.)
* Call MHD_run.
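On the "maybe a single read will not clear it" worry above: in the default (non-EFD_SEMAPHORE) mode, a single 8-byte read returns the accumulated counter and resets it to zero, so one read per wakeup is enough even with many writers. A small standalone check (no MHD involved; names are mine):

```c
#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

/* Two writers, one reader: shows that a single read drains the counter. */
uint64_t demo_eventfd_drain(void)
{
    /* EFD_NONBLOCK so a read on an empty counter fails instead of blocking. */
    int evt_fd = eventfd(0, EFD_NONBLOCK);

    uint64_t one = 1, val = 0;
    write(evt_fd, &one, sizeof(one));   /* worker #1 signals */
    write(evt_fd, &one, sizeof(one));   /* worker #2 signals */

    read(evt_fd, &val, sizeof(val));    /* returns 2 and resets counter to 0 */

    /* A second read now fails (EAGAIN): there is nothing left to clear. */
    uint64_t again = 0;
    ssize_t rc = read(evt_fd, &again, sizeof(again));

    close(evt_fd);
    return (rc == -1) ? val : 0;        /* val == 2 on success */
}
```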

3. MHD_AccessHandlerCallback

Called multiple times. 
First call:
* Construct a Task object, assign it to *con_cls.
Subsequent calls:
* Append data to the Task object's data field.
Last call:
* Store the MHD_Connection pointer in the task object.
* Suspend the connection (but see Marcos' post).
* Push the task object onto a shared queue. 
In all these cases, return MHD_YES.
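The multi-call contract above can be sketched as a state machine keyed off *con_cls. This is a simplified stand-in, not the real MHD_AccessHandlerCallback signature (which also takes the connection, url, method, etc.); it only models the per-connection state handling, and the Task struct and return value 1 (standing in for MHD_YES) are my own simplifications:

```c
#include <stdlib.h>
#include <string.h>

/* Toy per-connection state; the real one would also hold the
   MHD_Connection pointer for the worker to resume later. */
struct Task {
    char data[1024];
    size_t len;
    int done;   /* set once the final (zero-size) call is seen */
};

static int access_handler(void **con_cls,
                          const char *upload_data, size_t *upload_data_size)
{
    struct Task *task = *con_cls;
    if (task == NULL) {
        /* First call: only allocate the per-connection state. */
        task = calloc(1, sizeof(*task));
        *con_cls = task;
        return 1;   /* MHD_YES */
    }
    if (*upload_data_size > 0) {
        /* Subsequent calls: append the upload data... */
        memcpy(task->data + task->len, upload_data, *upload_data_size);
        task->len += *upload_data_size;
        *upload_data_size = 0;   /* ...and mark it consumed */
        return 1;
    }
    /* Last call (no more upload data): in the real handler this is where
       the connection is suspended and the task pushed to the queue. */
    task->done = 1;
    return 1;
}
```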

4. Worker Thread

* Pop a task off the shared queue.
* Do some work (or pretend to, in the case of this little experiment).
* Create an MHD_Response (currently via a call to 
MHD_create_response_from_buffer).
* Resume the connection.
* Queue the response.
* Delete the task object, destroy the response 
(just decrements its reference counter, don't be alarmed!)
* Write '1' to the eventfd. (Which I believe will actually increment
the contents by 1, so this is not just a simple Boolean flag.)
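The worker-side handoff can be sketched end to end with a mutex-protected slot and the eventfd wakeup. The "queue" here is a one-slot toy, the blocking read stands in for the select() loop waking up, and the MHD_resume_connection/MHD_queue_response calls are elided as comments; run_demo and the value 42 are mine:

```c
#include <pthread.h>
#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

static int evt_fd;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int result = 0;           /* one-slot "shared queue" for the demo */

static void *worker(void *arg)
{
    (void)arg;
    /* Do some work (or pretend to), then publish under the mutex. */
    pthread_mutex_lock(&lock);
    result = 42;
    pthread_mutex_unlock(&lock);

    /* In the real code: resume the connection and queue the response here.
       Writing 1 increments the eventfd counter and wakes the select loop. */
    uint64_t one = 1;
    write(evt_fd, &one, sizeof(one));
    return NULL;
}

int run_demo(void)
{
    evt_fd = eventfd(0, 0);
    pthread_t th;
    pthread_create(&th, NULL, worker, NULL);

    /* Main-loop side: a blocking read stands in for select() waking up. */
    uint64_t val = 0;
    read(evt_fd, &val, sizeof(val));

    pthread_mutex_lock(&lock);
    int r = result;
    pthread_mutex_unlock(&lock);

    pthread_join(th, NULL);
    close(evt_fd);
    return r;   /* the worker's published result */
}
```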

This is pretty crude, but it does return a response (when it should) and 
does not totally fall on its face, so that's good!

Thanks to Marcos and Christian for your help and guidance. I expect
I still need more of it...

-Tom
