From: Marcus Brinkmann
Subject: Re: async thread creation
Date: Thu, 21 Oct 2004 18:42:43 +0200
User-agent: Wanderlust/2.10.1 (Watching The Wheels) SEMI/1.14.6 (Maruoka) FLIM/1.14.6 (Marutamachi) APEL/10.6 Emacs/21.3 (i386-pc-linux-gnu) MULE/5.0 (SAKAKI)
At Wed, 20 Oct 2004 23:24:34 +0200,
Johan Rydberg wrote:
> One solution would be to have a fixed number of pre-allocated worker
> threads for physmem. If the RPC server runs out of worker threads it
> will block and wait for a worker thread to become available. This
> idea was dismissed since it would place too heavy a limit on the server.
>
> Another idea was to have a dedicated worker thread for communication
> between the task server and physmem. This idea was rejected because
> of some argument I can't remember right now.
It would require a deep hack in libhurd-cap-server, at a place where
it would potentially add some minor overhead to every RPC if you don't
use it (unless you compile the library twice). It could be done, but
I don't particularly fancy it.
> The third solution involved having a separate single-threaded RPC
> management (server) thread, just serving requests from the task
> server. But it would be troublesome to share data structures between
> the dedicated task RPC manager and the multi-threaded RPC manager that
> serves all other requests.
>
> It would also be possible to have a separate thread that performs
> worker thread allocation. Whenever the server runs out of worker
> threads, it contacts the worker creator thread and lets it know that
> the server is all out of threads. The creator thread will in turn
> contact the task server (through a regular RPC) and create a new
> thread and insert it into the worker thread pool. If the task server
> tries to contact physmem while creating the new thread, the RPC
> manager thread would block until a worker thread becomes available.
Clarification: The RPC manager will always, after waking up the worker
creator thread (unless it already is in the process of allocating a
worker thread), wait until a worker thread becomes available.
Depending on timing, this will either be the thread created by the
worker creator thread, or some existing worker thread adding itself
back to the free list.
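To make that ordering concrete, here is a minimal sketch of the
manager's slow path. All the struct fields and names below are made up
for this mail and don't claim to match the actual libhurd-cap-server
code:

#include <pthread.h>
#include <stddef.h>

struct worker
{
  struct worker *next;           /* Free list link; everything else elided.  */
};

/* Hypothetical bucket fields, only to make the protocol explicit;
   the real structure looks different.  */
struct bucket
{
  pthread_mutex_t lock;
  pthread_cond_t worker_cond;    /* Signalled when a worker is freed.  */
  pthread_cond_t allocator_cond; /* Wakes the worker creator thread.  */
  pthread_t allocator_tid;       /* Allocator thread, 0 in sync mode.  */
  int allocator_busy;            /* Non-zero while it is allocating.  */
  struct worker *free_workers;   /* Free list of worker threads.  */
};

/* What the RPC manager does when the free list is empty.
   Called with BUCKET->lock held.  */
static void
wait_for_worker (struct bucket *bucket)
{
  if (!bucket->allocator_busy)
    {
      /* Wake the worker creator thread, unless it is already in the
         process of allocating a worker thread.  */
      bucket->allocator_busy = 1;
      pthread_cond_signal (&bucket->allocator_cond);
    }

  /* Wait until a worker becomes available: either the one the
     creator thread is about to make, or an existing worker adding
     itself back to the free list.  */
  while (bucket->free_workers == NULL)
    pthread_cond_wait (&bucket->worker_cond, &bucket->lock);
}

The single allocator_busy flag is what implements the "unless it
already is in the process of allocating" part.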
> Sooner or later a worker thread will become free and serve task's
> request. Progress will be made, up to the point where the new worker
> thread is fully created and inserted into the thread pool.
The reason progress is made is of course that RPC worker threads in
physmem never block indefinitely, i.e. on some outside event. This is just a
sanity requirement on the physmem implementation/interface.
> What is critical in this design, and different from the other
> solutions described above, is that physmem is allowed to keep
> processing RPC requests (making it possible for the task server to
> communicate with physmem while creating a new worker thread), modulo
> creating new worker threads.
>
> Also, the (worker) thread doing the initial request for more workers
> may continue with its task. When done, it will put itself back on the
> pool. So there's a _big_ chance there's a free worker thread if the
> task server decides to contact physmem for more memory (provided that
> there isn't too much RPC pressure on physmem).
Yes. This is of course a performance consideration: You already had
a situation where you needed one more worker thread. So it makes
sense to allocate one more. Even if the allocation request cannot be
fulfilled immediately, it will eventually succeed, and then you will
have reduced the chance of stumbling upon the same situation next time.
<SIDENOTE>
Alternatively, you could try to cancel the allocation request, but
that doesn't really work (if you want to find out why yourself, skip
to the next paragraph). The reason is that if the task manager thread
itself is blocked on worker thread creation, it will not be able to
receive any RPCs, and thus both the thread allocation RPC and the
cancel RPC will be blocked in the send, not in the receive. :) The
other reason it doesn't work is that allocating a thread is not
actually something we consider an operation that can block, so it
will be effectively uncancellable (it will always "succeed in a short
time" - haha, an inside joke if you consider the problem we are trying
to solve). So, cancelling it has no effect. (This assumes that for
such RPCs which succeed in a short time, we return success instead of
ECANCEL, even if we notice the cancel request, because it is better to
return the result of the operation, and let the user thread notice its
cancellation at some other cancellation point further down the road).
Understanding all the side effects of cancellation and IPC is not
crucial here, if we just accept that we don't want to do that anyway.
</SIDENOTE>
> This is what I call the async-worker-thread-creation policy. I hope
> I did not miss anything.
Quite a good summary. I was planning to stuff this code into an #ifdef
HURD_CAP_SERVER_ASYNC_WORKER_ALLOCATION or so.
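Just as an illustration, only the macro name is real; the calls around
it are invented for this mail:

#ifdef HURD_CAP_SERVER_ASYNC_WORKER_ALLOCATION
  /* Async: wake the allocator thread and wait for any free worker,
     as in the wait_for_worker sketch above.  */
  wait_for_worker (bucket);
#else
  /* Sync: allocate the new worker right here, blocking this thread.  */
  allocate_worker (bucket);
#endif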
> Was your (Marcus) idea that this async-worker-thread-creation policy
> should only be implemented in physmem, or as a general part of the RPC
> facility?
It should be implemented in libhurd-cap-server, using a public
interface. Because allocating a worker thread is already in the slow
path, adding an extra test there for whether a separate allocator
thread should be used does no harm at all. The idea is that if you want to
use it, you call a function after creating the bucket, but before
starting the manager, that starts the allocator thread and registers
it in the bucket structure.
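The intended call order would then be something like the following,
where all three function names are placeholders I am inventing here,
not the actual interface:

struct bucket;                  /* Opaque for this sketch.  */

/* Placeholders for the real calls; names and signatures invented.  */
extern int bucket_create (struct bucket **r_bucket);
extern int bucket_start_worker_allocator (struct bucket *bucket);
extern int bucket_manage_mt (struct bucket *bucket);

static int
setup_physmem_server (struct bucket **r_bucket)
{
  int err = bucket_create (r_bucket);
  if (err)
    return err;

  /* Start the allocator thread and register it in the bucket.  This
     happens after creating the bucket, but before starting the
     manager, and is what switches the bucket to async mode.  */
  err = bucket_start_worker_allocator (*r_bucket);
  if (err)
    return err;

  /* Only now start the multi-threaded RPC manager.  */
  return bucket_manage_mt (*r_bucket);
}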
Then, when we need to allocate a new worker, there is a test for the
allocator thread id: if it is not pthread null, we know that we are
in async mode.
If we are in async mode, we use conditions to wake up the allocator.
In fact, all the locks and conditions I need are already there. I
drafted a flow chart of the necessary code yesterday evening.
It's just a few lines.
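Roughly, and reusing the made-up bucket fields from the sketch above
(so again, not the real code, and with all error handling omitted),
those few lines could look like this:

/* Stand-in for the actual worker creation, which goes through a
   regular RPC to the task server.  */
extern struct worker *create_worker_thread (struct bucket *bucket);

/* Body of the allocator thread: loop over condition sleep and
   thread allocation.  */
static void *
allocator_thread (void *arg)
{
  struct bucket *bucket = arg;

  pthread_mutex_lock (&bucket->lock);
  for (;;)
    {
      /* Sleep until the RPC manager asks for a new worker.  */
      while (!bucket->allocator_busy)
        pthread_cond_wait (&bucket->allocator_cond, &bucket->lock);

      /* Drop the lock while talking to the task server, so that
         physmem keeps processing RPCs (including task's) meanwhile.  */
      pthread_mutex_unlock (&bucket->lock);
      struct worker *new_worker = create_worker_thread (bucket);
      pthread_mutex_lock (&bucket->lock);

      /* Insert the new worker into the pool and wake any waiter.  */
      new_worker->next = bucket->free_workers;
      bucket->free_workers = new_worker;
      bucket->allocator_busy = 0;
      pthread_cond_broadcast (&bucket->worker_cond);
    }
}

/* Slow path when we run out of workers.  Assumes 0 stands for
   "pthread null", i.e. no allocator thread is registered.  */
static void
allocate_or_wait (struct bucket *bucket)
{
  if (bucket->allocator_tid != 0)
    /* Async mode: see the wait_for_worker sketch above.  */
    wait_for_worker (bucket);
  else
    {
      /* Sync mode: create the worker right here, which is exactly
         the call that can block on task for a long time.  */
      struct worker *new_worker = create_worker_thread (bucket);
      new_worker->next = bucket->free_workers;
      bucket->free_workers = new_worker;
    }
}

Broadcasting on worker_cond (rather than signalling) is what lets the
waiter take whichever worker shows up first, new or recycled.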
Note that the cost of this is: one extra thread that loops over
condition sleep and thread allocation, and some runtime overhead when
actually running out of worker threads. That's not much at all.
However, I would only use it in physmem, as for other servers there is
no need. Thread allocation should be fast. Still, you could use it
in other servers, too, if we feel it is worth it.
> If the latter is true, the question is how often this scenario would
> arise. I would imagine that after a while the number of worker
> threads would stabilize. What worries me is bursts of RPC requests.
Yes.
> To stress a server you could simply spawn a large number of threads
> and then do parallel RPC's to the server.
Note that this is the only way to stress a server. Worker threads are
only used after it is verified that the sender thread is not already
in an RPC. What a malicious client could do is to send IPCs to every
server in the system (without going into a reply), and try to create
extra worker threads in all of them.
But in each server, the number of worker threads will never exceed the
number of other threads in the system (excluding the server's threads,
unless the server makes RPCs to objects it provides itself, which is
theoretically possible, but kinda weird).
> But if you do that, I'm
> pretty sure you would put larger constraints on the system, and the
> starvation of the target server would be a small problem in
> comparison.
Well, thread allocation can be restricted by quota. Still, there is a
question whether server thread allocation should be limited or not. You
may want to throttle thread allocation (avoid it for a short time
span, and only do it if you really don't make progress).
This would be a straightforward extension of the current code,
regardless of whether we use async allocation or not. Instead of
allocating a new thread (or waking up the allocator), you could sleep for a short
time on the respective condition, to check if a worker thread wakes up
soon. It's a good idea to do that, but it's not yet in the code (in
particular, as our pthread is not fully implemented yet, and I am not
sure if conditions, or even conditions with timeouts, actually work).
I will put this on the TODO list.
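If conditions with timeouts do work, the throttled variant could look
something like this, again only a sketch on top of the made-up
structures above:

#include <errno.h>
#include <time.h>

/* Throttled slow path: give an existing worker a short chance to
   come back before waking the allocator.  Called with the bucket
   lock held; the 10 ms are arbitrary.  */
static void
wait_for_worker_throttled (struct bucket *bucket)
{
  struct timespec deadline;
  clock_gettime (CLOCK_REALTIME, &deadline);
  deadline.tv_nsec += 10 * 1000 * 1000;
  if (deadline.tv_nsec >= 1000000000)
    {
      deadline.tv_sec++;
      deadline.tv_nsec -= 1000000000;
    }

  while (bucket->free_workers == NULL)
    if (pthread_cond_timedwait (&bucket->worker_cond, &bucket->lock,
                                &deadline) == ETIMEDOUT)
      break;

  /* Only if nobody came back in time, fall back to waking the
     allocator and waiting for any worker, as before.  */
  if (bucket->free_workers == NULL)
    wait_for_worker (bucket);
}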
> I would prefer if we did not have any special cases for task/physmem,
> especially when it comes to fundamental things such as RPC handling
> and management. I understand how hard it will be to accomplish this,
> though.
The only reason not to use async worker thread allocation in the
normal case is to avoid having an extra thread for it.
Also, usually thread allocation should be _fast_, so that it is not
worth starting a thread allocation and then, instead of just waiting
for it to complete (synchronous), waiting for _any_ worker thread to
become available (asynchronous). If worker thread allocation is fast,
it doesn't matter if we wait for it to complete (we wouldn't save a
lot of time by waiting for _any_ worker thread).
The only case where worker thread allocation is not fast is if it
deadlocks :) Especially if we run task and physmem at a higher (L4)
priority than normal tasks (not sure if we want to, but it's an
option, I guess).
If you get my drift.
Thanks,
Marcus