Re: Comments on the hurd-on-l4 document


From: Marcus Brinkmann
Subject: Re: Comments on the hurd-on-l4 document
Date: Wed, 08 Jun 2005 22:23:15 +0200
User-agent: Wanderlust/2.10.1 (Watching The Wheels) SEMI/1.14.6 (Maruoka) FLIM/1.14.6 (Marutamachi) APEL/10.6 Emacs/21.4 (i386-pc-linux-gnu) MULE/5.0 (SAKAKI)

At 08 Jun 2005 18:49:38 +0200,
Niels Möller wrote:
> This sounds reasonable. That the receiver endpoint is fixed is a
> limitation that must be removed in some higher layer, since at least
> one has to be able to inherit a receive end point through exec.

No, you are wrong.  Only send points are inherited.  Well, if you
really want to retain some end points, you will have to keep the
thread.

BTW, threads are also managed via mappings in Espen's design.

> > The receiver gets the information on which receive end point the
> > message was received, but he gets _no_ information about the sender.
> 
> Well, the common case is that the server for a capability needs to
> reply to whoever used the capability to send a message. I guess the
> kernel has to automatically create a capability in the receiver's local
> name space? Perhaps even a single-use capability.

I see what you are getting at, but the kernel does no such thing, IIRC.
You have to do it yourself, for example by mapping a "reply send
point" to the server in your RPC.  Other models that you could think
of are send-only interfaces which don't require a reply, or stateful
communication channels with long-lived reply capabilities.
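
As a rough sketch in C, the pattern I mean looks like this.  None of
these names are real L4 or Hurd interfaces; they are just
placeholders for the primitives we have been discussing:

    /* Hypothetical primitives: create a receive end point, derive a
       send point from it, and do IPC that maps a capability along
       with the message.  */
    typedef unsigned long cap_t;

    cap_t endpoint_create (void);
    cap_t send_point_for (cap_t receive_point);
    void ipc_send_with_map (cap_t dest, cap_t mapped_cap,
                            const char *payload);
    void ipc_receive (cap_t receive_point, char *buf, unsigned len);

    void
    client_rpc (cap_t server_send_point)
    {
      cap_t reply_receive = endpoint_create ();
      cap_t reply_send = send_point_for (reply_receive);

      /* The reply send point travels as a mapping inside the
         request, so the server learns where to answer without the
         kernel telling it who the sender is.  */
      ipc_send_with_map (server_send_point, reply_send, "request");

      char answer[64];
      ipc_receive (reply_receive, answer, sizeof answer);
    }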

> > The second part, the death notifications, that is the "reference
> > counting" problem, and should be optional (reference counting may leak
> > information and thus add covert channels).  There is a fundamental
> > issue here: Microkernel designs usually despise reference counting,
> > and require explicit destruction, which makes a lot of sense.  But the
> > question is not if you have reference counting at the lowest level,
> > but if you can add it on top of that, and this is where I have not yet
> > seen a convincing solution.
> 
> Hmm. Maybe you can see the capability reference counting service as an
> optional service.

Some people want to do that.

> A server gets to choose to kill the corresponding object when the
> capability server sends it a no-senders notification, or it can keep
> it open, causing no great harm to anybody else.
> 
> Clients get to choose to either register their capabilities with the
> capability server, and be guaranteed that as long as the server
> doesn't crash or misbehave, the capability will work. Or it can choose
> to not register it, and then all that happens is that it risks that
> the capability may stop working at any time. An unregistered reference
> will in effect behave precisely as a "weak reference" to the server
> object. I don't think there's a security problem with allowing this
> choice, it might even turn out to be useful.

Right.  In my email, I talked about "temporary" capability mappings,
which are capabilities for which you didn't get your own reference
(and direct mapping from the cap server).  They can be extremely
useful as an optimization.  You don't always need your own reference;
you can work on the reference of the task providing the mapping.

> To make this robust, the cap server needs to cooperate with the task
> server so that references are deleted on task death.

Yes, this is true.  But as both are system services which trust each
other, this is easy to accomplish.

> And we must
> ensure that a task can't register a reference to a capability that
> nobody granted it (or else, a malicious task could create "fake"
> references and, in effect, disable no-senders notifications for random
> objects). This latter requirement might be non-trivial.

Well, this is why when registering a reference with the cap server,
the caller must map the capability to the cap server.  If the cap
server sees the mapping in the RPC, then it knows that the caller has
the capability.  The problem is identifying the object behind the
mapping.  I have at least three ideas how to do that, but that's still
something that is pretty much an open problem.
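
To make the "prove you have it" requirement concrete, here is a
sketch of the cap server's side of the registration call.  All names
are made up; the one real assumption is that the IPC layer tells the
receiver whether a capability was actually mapped along with the
request:

    typedef unsigned long cap_t;
    typedef unsigned long task_id_t;

    /* What the cap server sees when a client registers a reference:
       the caller's identity and the capability it mapped along, if
       any.  */
    struct register_request
    {
      task_id_t caller;
      cap_t mapped_cap;     /* 0 if nothing was mapped.  */
    };

    int record_reference (task_id_t caller, cap_t cap);

    int
    cap_server_register_ref (struct register_request *req)
    {
      /* No mapping seen means the caller cannot prove it holds the
         capability, so it must not be able to register a fake
         reference and suppress no-senders notifications.  */
      if (req->mapped_cap == 0)
        return -1;

      /* Which object stands behind this mapping is exactly the open
         problem mentioned above; assume here it is solved somehow.  */
      return record_reference (req->caller, req->mapped_cap);
    }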
 
> If capabilities can be unilaterally revoked by the server, this must
> also be handled properly. Perhaps it's sufficient to give the server
> the responsibility of informing the cap server of any revoked
> capabilities.

Indeed.
 
> If we compare the set of mapped/granted capabilities according to the
> kernel, and the view of the capability server, we can probably not
> maintain a perfect correspondence, but we can probably use some
> relaxed constraints on this correspondence, and still get things to
> work out.

The correspondence can be quite loose if you have misbehaving clients
or servers, but it never matters.  For example, a client can request a
reference and a mapping, and then just unmap the capability without
destroying the reference.  But then it only shoots itself in the foot.

Likewise, as you said, the server can revoke the mapping without
telling the cap server.  But that just means the clients won't be able
to send messages, and that's something the server can do in other
ways, too: for example, by simply never receiving.

The cap server only needs to act unilaterally on task death, as you said.
 
> > There are some aspects of the design
> > which are obvious, but consider this call to the capability server:
> > 
> > cap_get_ref (cap_server, cap)
> > 
> > The cap server needs to identify what the object "cap" is.  If this is
> just a mapping, it can't know that.  So, again, you need to look up
> > objects associated as caps when the caps are given as arguments.
> 
> I don't think I fully understand this. If it helps, one could have the
> cap server maintain a unique id to each capability, and pass this id
> around in all ipc that needs it (no idea about the security concerns
> though).

Well, to really clarify this, we would have to establish what a
capability exactly is, and that is of course unclear.  However,
imagine a capability were just represented by a send point.  Then it
works very much like memory.  But imagine you had a mapping of a
memory page, and then at some point received, via some other task, a
mapping of the same physical page.  How could you find out?  Let's
say the memory is mapped read-only.  There is no way to establish
identity.

For the cap server to manage the capability objects, it needs some way
to securely identify objects in messages.  That is, we need a unique
way to "talk" about capabilities, while also proving that we have the
capability.

One way to do this is something that is called "protected payload" in
EROS.  That is, the server can associate a machine word with each
capability, and if the server has a mapping of a capability, it can
"unwrap" it and read out the protected payload.  That protected
payload would not be modifiable by the clients (i.e., the mappers).
It would be kernel protected.  The server could set it to an object ID.
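
In C, the idea would look roughly like this.  cap_create,
cap_set_payload and cap_unwrap_payload are assumed kernel primitives,
not a real EROS or L4 API:

    typedef unsigned long cap_t;
    typedef unsigned long word_t;

    /* Assumed kernel primitives; the unwrap only succeeds for the
       server that created the capability.  */
    cap_t cap_create (void);
    void cap_set_payload (cap_t cap, word_t payload);
    int cap_unwrap_payload (cap_t cap, word_t *payload);

    #define MAX_OBJECTS 1024
    struct object;
    extern struct object *object_table[MAX_OBJECTS];

    /* Server side: stamp each capability with the table index of
       the object it denotes.  Clients that receive a mapping of the
       capability can neither read nor change the payload.  */
    cap_t
    export_object (word_t object_id)
    {
      cap_t cap = cap_create ();
      cap_set_payload (cap, object_id);
      return cap;
    }

    /* When the capability comes back as an RPC argument, the server
       unwraps it to find its own object again.  */
    struct object *
    lookup_object (cap_t cap)
    {
      word_t id;
      if (cap_unwrap_payload (cap, &id) != 0 || id >= MAX_OBJECTS)
        return 0;           /* Not one of our capabilities.  */
      return object_table[id];
    }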

However, this is still not quite good enough.  With the cap server,
we do not identify objects that we provide ourselves; we just manage
objects provided by others.  So we need to be able to identify
communication send points without ever seeing the end point.  And the
cap server needs to ensure the identifier is unique among all objects
it manages.

So, there is some extra hair here, which is a bit specific to the way
the Hurd is designed.  Other systems with less of an asymmetric trust
relationship problem may have it easier.

Here is one idea I had today.  Consider this mapping tree:

    server S
     |
     v
    cap server
    P1       P2
     |       ^
     v       |
 client A -> client B 

The cap server must be able to see that the mapping P2 from B to
itself is based on the mapping P1 it gave to A: both mappings
represent the same capability.  One way to do it would be a kernel
system call that allows one to "unroll" such "loops" and lets the cap
server look up the origin in its own address space for such
cyclically mapped objects.  That is, it would look up P2, and the
kernel would walk back the tree, find the mapping P1 which originates
from the cap server, and return that information to the cap server.
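
As a sketch, such a kernel call would be used like this.
l4_map_lookup_origin does not exist in any L4 API; it stands for a
kernel walk up the mapping tree that stops at the first mapping
originating in the caller's own address space:

    typedef unsigned long cap_t;

    /* Hypothetical: given a capability mapped to us (P2), return
       the mapping in our own address space it derives from (P1), or
       0 if the mapping tree never passes through us.  */
    cap_t l4_map_lookup_origin (cap_t mapped_cap);

    /* Cap server: decide whether the capability B mapped back to us
       is the one we originally handed out to A.  */
    int
    caps_identical (cap_t p1, cap_t p2)
    {
      return l4_map_lookup_origin (p2) == p1;
    }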

In the simpler case of the container_copy call, it would look like this:

   server S
   P1 P2  P2'
    | |   ^
    v v   | (container_copy RPC)
   client A

The container_copy RPC would be invoked on the object P1, which the
server can figure out in some way.  The P2 object would be mapped back
to the server as an argument to the RPC.  The server does the magic
system call, which tells it that P2' is really the same as the
original P2.
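
In the same hypothetical terms as the sketch above, the server side
of container_copy would then be:

    typedef unsigned long cap_t;
    struct container;

    cap_t l4_map_lookup_origin (cap_t mapped_cap); /* as above */
    struct container *container_behind (cap_t cap);
    void do_copy (struct container *from, struct container *to);

    void
    container_copy_handler (cap_t p1, cap_t p2_prime)
    {
      /* The kernel walk identifies P2' with our own mapping P2.  */
      cap_t p2 = l4_map_lookup_origin (p2_prime);
      do_copy (container_behind (p1), container_behind (p2));
    }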

I wish I could express all this more clearly; I just lack the
vocabulary to talk about this stuff.

> > Disclaimer: I may change my mind anytime :)
> 
> I'm afraid it's not useful to get too much into details until we know
> which services L4 will and will not provide.
> 
> Does this mean that l4-hurd development is effectively stalled until an
> updated L4 spec is published?

No.  We have to get a clearer picture of what our requirements are.
To that end, based on what we already know, we must anticipate
future L4 developments and actively talk with the L4 people about the
problems we have.  To some extent, this has already happened, but it
will continue.

Also, at a high level, we already know how capabilities should work,
and we can work on the rest of the system even without nailing this
down in detail.

In particular, I am contemplating writing a cap+notification server
which works on L4 Pistachio, using random object IDs instead of
mappings, but otherwise following protocols that will likely transfer
well to future kernel versions plus our enhancements if needed.
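
The random-object-ID stand-in is simple enough to sketch in plain C.
Security here rests on the ID being unguessable in practice rather
than on kernel protection; the server structure is of course made up:

    #include <stdint.h>
    #include <stdio.h>

    /* Draw an unguessable 64-bit object ID.  */
    static uint64_t
    fresh_object_id (void)
    {
      uint64_t id = 0;
      FILE *f = fopen ("/dev/urandom", "rb");
      if (f != NULL)
        {
          if (fread (&id, sizeof id, 1, f) != 1)
            id = 0;
          fclose (f);
        }
      return id;
    }

    #define TABLE_SIZE 1024
    struct object { uint64_t id; void *state; };
    static struct object *table[TABLE_SIZE];

    /* A client "holds a capability" by knowing the ID it was given;
       the server finds the object behind an ID quoted in an RPC by
       hashing it.  */
    static struct object *
    lookup (uint64_t id)
    {
      struct object *obj = table[id % TABLE_SIZE];
      return (obj != NULL && obj->id == id) ? obj : NULL;
    }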

There is some upheaval, but it's manageable.  It all just means that
we didn't really get the fundamentals right, and have to put
everything under the magnifying glass again.

Thanks,
Marcus
