
From: Evgeny Grin
Subject: Re: [libmicrohttpd] MHD threading models: what model is similar to Least Connected?
Date: Fri, 2 Dec 2016 22:40:07 +0300
User-agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:45.0) Gecko/20100101 Thunderbird/45.5.0

You will need something like:

-----------------
  struct sockaddr_storage addr;
  socklen_t addrlen;
  size_t N = 0; /* number of started daemons */
  size_t n = 0; /* index of the daemon currently receiving new connections */
  struct MHD_Daemon *daemons[MAX_DAEMONS];

  /* First worker: internal polling thread, but no listen socket of its own */
  daemons[N++] = MHD_start_daemon (MHD_USE_NO_LISTEN_SOCKET |
                                   MHD_USE_SELECT_INTERNALLY, ....);
  while (processingAllowed ())
  {
    addrlen = sizeof (addr);
    int fd = accept (listen_fd, (struct sockaddr *) &addr, &addrlen);
    if (-1 == fd)
      continue;

    if (! isSomeFunctionOfMyAppResponding ())
    {
      if (N < MAX_DAEMONS)
      { /* Add a new daemon if space is available */
        daemons[N++] = MHD_start_daemon (MHD_USE_NO_LISTEN_SOCKET |
                                         MHD_USE_SELECT_INTERNALLY, ....);
      }
      n++; /* Switch to the next worker */
      if (MAX_DAEMONS == n)
      {
        n = 0; /* Return processing to the first daemon */
      }
    }
    /* Hand the accepted socket to the selected daemon;
       note that addrlen is passed by value here. */
    MHD_add_connection (daemons[n], fd, (struct sockaddr *) &addr, addrlen);
  }
-----------------

"Slow" daemons will continue processing their connections, when slowdown
is detected, you will switch new connections to next daemon.
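
A minimal sketch of how the listen_fd used above could be created, assuming a
plain POSIX TCP socket; the make_listen_socket() helper name, its port argument
and the backlog of 128 are only illustrative placeholders:

-----------------
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* Create and bind the listening socket that the accept() loop above polls. */
static int make_listen_socket (uint16_t port)
{
  struct sockaddr_in sin;
  int listen_fd = socket (AF_INET, SOCK_STREAM, 0);
  if (-1 == listen_fd)
    return -1;

  memset (&sin, 0, sizeof (sin));
  sin.sin_family = AF_INET;
  sin.sin_port = htons (port);
  sin.sin_addr.s_addr = htonl (INADDR_ANY);

  if ( (-1 == bind (listen_fd, (struct sockaddr *) &sin, sizeof (sin))) ||
       (-1 == listen (listen_fd, 128)) ) /* placeholder backlog */
  {
    close (listen_fd);
    return -1;
  }
  return listen_fd;
}
-----------------

When processingAllowed() returns false and the loop exits, each of the N
started daemons should be shut down with MHD_stop_daemon() and listen_fd
should be closed.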

-- 
Best Wishes,
Evgeny Grin

On 02.12.2016 0:39, silvioprog wrote:
> On Thu, Dec 1, 2016 at 5:49 PM, Evgeny Grin <address@hidden> wrote:
> 
>     It's a basic principle of telecommunication traffic theory,
>     developed together with the first automatic telephone exchanges.
>     In an overloaded situation, just reject some part of the incoming
>     traffic, as retries only prevent the overload from ending.
> 
> 
> Exactly. :-)
> 
> But instead of rejecting them, it redirects them to a new server.
> 
>     You can do it manually.
>     Start MHD with MHD_USE_NO_LISTEN_SOCKET and poll the listen socket in
>     your own thread. Use MHD_add_connection() when a new connection arrives.
> 
> 
> Hm I didn't know about MHD_add_connection()... it seems awesome. I'm
> going to check how to use it.
>  
> 
>     As soon as you detect "overload" of MHD, start a new MHD instance without
>     a listen socket and use MHD_add_connection() with the new instance.
> 
> 
> This is the problem: how can I detect that a function of my application
> is not responding (overloaded), so that new requests are not redirected
> to it? :-/
> 
> (this is a little bit funny: my app can't know that it is not responding,
> because it is not responding... but MHD/nginx can! :-D)
> 
> I think nginx uses some system call to check if the TCP destination
> (proxy) is responding, but I don't know how it does that. See my
> environment:
> 
> . nginx running on port 443; // in Least Connected mode
> . fastcgiapp1 running on port 9000; // primary
> . fastcgiapp2 running on port 9001. // backup
> 
> (both fastcgiapps are blocking and don't have any threading support,
> so I think nginx creates the required threads)
> 
> When fastcgiapp1 (primary) is not responding, fastcgiapp2 (backup) is
> used (nginx creates a new thread, so now I have two threads). Some time
> later (after a timeout) nginx tries to use fastcgiapp1 again (back to one
> thread). Nginx never redirects new requests to fastcgiapp1 while it is not
> responding... it uses fastcgiapp2 instead; however, fastcgiapp1 is the
> primary app, so nginx retries it after some time.
> 
> Suppose this pseudo code:
> 
> static int ahc_echo(void * cls, struct MHD_Connection * connection ...
> other params) {
>   if (isSomeFunctionOfMyAppResponding()) {
>     do something ...
>   } else {
>     create a new thread with blocking server and finally do something ...
>   }
> }
> 
> MHD_start_daemon(MHD_USE_SELECT_INTERNALLY ...
> 
> The pseudo model above looks like MHD_USE_THREAD_PER_CONNECTION, but it
> creates new threads only when they are really required.
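
For reference, a minimal sketch of what such an access handler looks like with
the full MHD callback signature; the "OK" response body and the comment marking
where the application logic would go are only placeholders:

-----------------
static int
ahc_echo (void *cls, struct MHD_Connection *connection,
          const char *url, const char *method, const char *version,
          const char *upload_data, size_t *upload_data_size, void **con_cls)
{
  static const char page[] = "OK"; /* placeholder body */
  struct MHD_Response *response;
  int ret;

  /* isSomeFunctionOfMyAppResponding() / thread creation would go here */
  response = MHD_create_response_from_buffer (sizeof (page) - 1,
                                              (void *) page,
                                              MHD_RESPMEM_PERSISTENT);
  ret = MHD_queue_response (connection, MHD_HTTP_OK, response);
  MHD_destroy_response (response);
  return ret;
}
-----------------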
> 
>     --
>     Best Wishes,
>     Evgeny Grin
> 
> 
> -- 
> Silvio Clécio


