
Re: auth handshake and rendezvous objects


From: Niels Möller
Subject: Re: auth handshake and rendezvous objects
Date: 05 Nov 2002 21:01:37 +0100
User-agent: Gnus/5.09 (Gnus v5.9.0) Emacs/21.2

Marcus Brinkmann <address@hidden> writes:

> On Tue, Nov 05, 2002 at 06:39:45PM +0100, Niels Möller wrote:
> > Do we really expect a high volume of rpc calls that transfer handles?
> 
> The question is, do you really want to implement a known race condition,
> that is just bound to be triggered at some time?

You have to loop around and retry, in some way. Either in B, which
should cause problems only when the system is seriously overloaded, or
in A, in which case you could fall back to serializing all handle
transfers. But I agree it doesn't seem quite satisfactory.
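
To make the fallback concrete, here is a rough sketch of the
retry-in-A variant in C; try_transfer_handle and serialized_transfer
are made-up names standing in for whatever the real transfer paths
would be, and the retry count is arbitrary:

  /* Illustrative only: these are stand-in declarations, not a real
     L4 or Hurd interface.  try_transfer_handle is assumed to attempt
     a zero-timeout send and return nonzero instead of blocking.  */
  extern int try_transfer_handle (int handle);   /* hypothetical */
  extern int serialized_transfer (int handle);   /* hypothetical,
                                                    slow but reliable */

  #define TRANSFER_RETRIES 3   /* arbitrary */

  int
  transfer_handle (int handle)
  {
    for (int i = 0; i < TRANSFER_RETRIES; i++)
      if (try_transfer_handle (handle) == 0)
        return 0;

    /* Give up on the fast path and serialize the transfer instead.  */
    return serialized_transfer (handle);
  }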

And I'm afraid I'm not quite sure what counts as a timeout. If S
sends a message to A, with zero timeout, and A is ready to receive but
isn't scheduled for a while, will the call time out? If A is ready to
receive, and a dozen threads send RPCs to it at about the same time,
all with zero timeout, can one be sure that all but one of the RPCs
will time out?

> > It's not obvious to me that one global thread won't do (either because
> > the probability that it's busy is low, or because rpc calls involving
> > handle transfers are serialized).
> 
> On the server side?  That would still not work because those notification
> messages would be processed asynchronously.

Here I was thinking about a single thread in A, and S sending it
messages with a timeout of zero. So S also has only one thread doing
this protocol, getting messages from B (and other clients), relaying
them to A (with zero timeout), and replying back to B.
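
In rough C, the relay loop in S could look something like this;
receive_request, send_with_zero_timeout and reply_to_sender are
hypothetical stand-ins for the real IPC primitives, the point being
only that the send to A fails immediately instead of blocking when A
is not ready:

  struct message;

  /* Hypothetical IPC stand-ins, not a real interface.  */
  extern struct message *receive_request (void);
  extern int send_with_zero_timeout (int dest, struct message *msg);
  extern void reply_to_sender (struct message *msg, int error);

  /* The single thread in S doing the handle-transfer protocol.  */
  void
  relay_loop (int a_thread)
  {
    for (;;)
      {
        /* A request from B (or any other client).  */
        struct message *msg = receive_request ();

        /* Relay to A with zero timeout; if A's thread is busy, this
           fails immediately instead of blocking S.  */
        int err = send_with_zero_timeout (a_thread, msg);

        /* B learns whether A accepted the handle, and can retry.  */
        reply_to_sender (msg, err);
      }
  }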

> I didn't study your example right now.  I think we are touching here the
> generic question of how to safely receive notification messages from servers
> in clients.  I have pondered this in the past, and did not come to a
> conclusion yet.  In Mach, we have the concept of buffering and the
> possibility to receive a notification when the receiver is ready.

Would it help to have a single one-message buffer per receiver (in our
case, A) in S, and a corresponding thread? When B asks for one of A's
handles, it will block until the server's buffer is empty. Then the
server thread will receive the message from B, and it can block while
delivering it to A (using the same timeout as B used when calling S).
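
As a toy model of that buffer (using POSIX threads rather than real
IPC, just to make the blocking behaviour explicit): buffer_put is
where B's call into S ends up, and buffer_get is what the
per-receiver server thread runs before it blocks delivering to A.

  #include <pthread.h>
  #include <stddef.h>

  /* One-message buffer for a single receiver (A) inside S.  */
  struct one_slot_buffer
  {
    pthread_mutex_t lock;
    pthread_cond_t changed;
    void *msg;                  /* NULL means the slot is empty.  */
  };

  #define ONE_SLOT_BUFFER_INIT \
    { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, NULL }

  /* Called on behalf of B: blocks until A's slot is free.  */
  void
  buffer_put (struct one_slot_buffer *b, void *msg)
  {
    pthread_mutex_lock (&b->lock);
    while (b->msg != NULL)
      pthread_cond_wait (&b->changed, &b->lock);
    b->msg = msg;
    pthread_cond_broadcast (&b->changed);
    pthread_mutex_unlock (&b->lock);
  }

  /* Run by the per-receiver server thread; after this it is free to
     block while delivering the message to A.  */
  void *
  buffer_get (struct one_slot_buffer *b)
  {
    pthread_mutex_lock (&b->lock);
    while (b->msg == NULL)
      pthread_cond_wait (&b->changed, &b->lock);
    void *msg = b->msg;
    b->msg = NULL;
    pthread_cond_broadcast (&b->changed);
    pthread_mutex_unlock (&b->lock);
    return msg;
  }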

There seem to be two problems one gets into pretty easily:

* A number of threads that is linear in some potentially pretty large
  parameter, like the number of open files, the number of clients,
  etc. I don't know if this is a problem; it seems to be a basic
  assumption in the Hurd design that threads are cheap.

* Unpredictable behaviour if a malicious task floods other threads
  with messages. I'm afraid that is a problem that's hard to solve,
  and which we should perhaps ignore for now. I think it's basically a
  resource limits and resource allocation problem. Sending a message
  should cost you some CPU time in the scheduling algorithm, to
  compensate for the CPU time it costs the receiver to check whether
  you're authorized to talk to it (see the rough sketch after this
  list).
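
The kind of accounting I mean could be as simple as the following,
with completely made-up names and numbers; the point is only that a
send debits the sender's scheduling credit by roughly what the
receiver's access check will cost:

  /* Toy model of "sender pays"; not tied to any real scheduler.  */
  struct task
  {
    long cpu_credit;            /* remaining scheduling credit */
  };

  #define AUTH_CHECK_COST 10    /* assumed cost of the receiver's
                                   authorization check */

  /* Return 0 if the send may proceed, -1 if the sender is out of
     credit and the send should be refused or deferred.  */
  int
  charge_sender (struct task *sender)
  {
    if (sender->cpu_credit < AUTH_CHECK_COST)
      return -1;
    sender->cpu_credit -= AUTH_CHECK_COST;
    return 0;
  }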

/Niels



