l4-hurd

Re: instance and instantiator


From: Jonathan S. Shapiro
Subject: Re: instance and instantiator
Date: Thu, 13 Oct 2005 17:23:35 -0400

On Thu, 2005-10-13 at 21:52 +0100, Neal H. Walfield wrote:
> At Mon, 10 Oct 2005 09:11:37 -0400,
> Jonathan S. Shapiro wrote:
> > It is often true that subprograms trust their instantiator, but it is
> > not always true. In EROS and Coyotos this assumption is not necessary.
> > We have a number of programs that wield capabilities that their users do
> > not (and must not ever) possess. The ability to use and guard these
> > capabilities against a hostile creator is one of the foundations of our
> > security.
> > 
> > These "suspicious subsystems" do *not* trust capabilities provided by
> > their creator. They verify them. In particular, almost *all* of our
> > programs test their space bank to learn whether it was a valid space
> > bank.
> 
> Are these program instances those started via a meta-constructor?  If
> not, how do they get these other capabilities that the instantiator
> didn't possess?

In EROS, *all* program instances are started by a constructor. Most
program instances serve a single master, keeping no secrets from that
master, and have no reason to be suspicious or worry about capability
authenticity.

For the rest, capabilities provided by the constructor itself were, in
effect, provided at developer request, so the program really has no
issue with these. Capabilities provided from the client are
authenticated when it matters. For example, the space bank capability
comes from the instantiator, but it is trusted because it is an
authentic space bank.

The capabilities used to perform the authentication come from the
constructor. They are installed there by the developer (or the
installation program) at the time of constructor creation. These are
considered "safe" by the constructor because they are constructor
capabilities (note the identify operation poking its head up again
here).
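The identify-style check can be modeled in a few lines. This is an
illustrative sketch only, not EROS code: the `identify` function, the
dictionary representation of capabilities, and the name
`verify_space_bank` are all assumptions made for exposition.

```python
# Hypothetical model of authenticating a client-provided capability
# against the authentic one installed at constructor-creation time.

def identify(trusted_cap, candidate_cap):
    # Stand-in for the kernel identify operation: does the candidate
    # capability denote the same server object as the trusted one?
    return candidate_cap.get("server") == trusted_cap.get("server")

# Installed by the developer (or installation program) when the
# constructor was created; held by the program, not by its clients.
authentic_bank = {"server": "prime-space-bank"}

def verify_space_bank(candidate):
    # The program trusts the client-supplied bank only if it
    # identifies as the authentic space bank.
    return identify(authentic_bank, candidate)
```

The point is that the trusted reference used for comparison comes from
the constructor, not from the (possibly hostile) instantiator.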

The constructor decides whether a capability is "safe" by the
following tests:

  1. Is the capability one of a small number of kernel-implemented
     capabilities that is considered safe by definition? => SAFE

  2. Is the capability read-only and weak? => SAFE

  3. Is the capability a constructor capability to a sealed
     constructor that certifies that its yield is in turn SAFE
     by these rules? => SAFE

  4. All other cases => UNCONFINED

If all of the initial capabilities are "safe" according to this test,
then the yield created is known to be confined. This test is actually
performed one capability at a time at constructor fabrication time. The
answer is already precomputed at program instantiation time, and the
client-requested check is done with a simple boolean test.
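The classification above can be sketched as a small recursive check.
This is a model for exposition, not EROS code: the `Capability` class,
the `KERNEL_SAFE` set, and the `is_safe` function are assumed names,
and real constructors precompute the result at fabrication time rather
than walking the graph on each query.

```python
# Illustrative model of the constructor's confinement test.

KERNEL_SAFE = {"Number", "Null"}  # stand-ins for the small set of
                                  # kernel capabilities safe by definition

class Capability:
    def __init__(self, kind, read_only=False, weak=False, yield_caps=None):
        self.kind = kind
        self.read_only = read_only
        self.weak = weak
        # For a constructor capability: the initial caps of its yield.
        self.yield_caps = yield_caps or []

def is_safe(cap):
    # Test 1: kernel-implemented capability, safe by definition.
    if cap.kind in KERNEL_SAFE:
        return True
    # Test 2: read-only AND weak.
    if cap.read_only and cap.weak:
        return True
    # Test 3: sealed constructor whose yield is in turn safe.
    if cap.kind == "Constructor":
        return all(is_safe(c) for c in cap.yield_caps)
    # All other cases: unconfined.
    return False

# The answer is precomputed once; the client-requested check at
# instantiation time is then a single boolean test.
inner = Capability("Constructor", yield_caps=[Capability("Number")])
caps = [Capability("Node", read_only=True, weak=True), inner]
confined = all(is_safe(c) for c in caps)
```

Note how test 3 makes the definition inductive: a constructor is safe
exactly when everything its yield would start with is safe.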

The "weak" part needs explanation.

Suppose that you hold a read-only capability to a node that contains a
read-write capability. If you fetch the read-write capability, you
have now broken confinement.

The effect of the "weak" restriction is to downgrade each capability as
it is fetched, ensuring that the fetched capability is both weak and
read-only. This downgrade is conservative. Endpoint capabilities, for
example, become Void capabilities.

The end result of this restriction is to ensure that all capabilities
that are reachable starting from a weak, read-only capability are
transitively read-only. This is the essential foundation of the
confinement check.
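The downgrade-on-fetch rule can be sketched as follows. Again this is
a model, not EROS code: `Cap`, `weak_fetch`, and the treatment of
specific capability kinds are assumptions chosen to illustrate the
conservative downgrade described above.

```python
# Illustrative model: every capability fetched through a weak,
# read-only capability is itself downgraded to weak and read-only.

class Cap:
    def __init__(self, kind, read_only=False, weak=False, slots=None):
        self.kind = kind
        self.read_only = read_only
        self.weak = weak
        self.slots = slots or []

def weak_fetch(node_cap, index):
    assert node_cap.weak and node_cap.read_only
    c = node_cap.slots[index]
    if c.kind == "Endpoint":
        # Conservative: kinds that cannot be safely weakened become Void.
        return Cap("Void")
    # Otherwise the fetched capability is downgraded in transit.
    return Cap(c.kind, read_only=True, weak=True, slots=c.slots)

rw = Cap("Node", slots=[Cap("Page")])   # read-write cap stored in the node
top = Cap("Node", read_only=True, weak=True, slots=[rw, Cap("Endpoint")])
fetched = weak_fetch(top, 0)
# fetched is weak and read-only, so everything reachable through it
# remains transitively read-only.
```

The read-write capability stored in the node never escapes intact,
which is exactly what closes the hole described above.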

However, the inductive part of the definition is astoundingly powerful,
because it lets you build entire complex subsystems that are
collectively confined.

> What if the instantiator deallocates the space bank in the middle of a
> critical operation (thus rendering the object in a partially updated
> state)?

If this is a "homogeneous storage" object, then the entire object is
going to disappear along with the partially updated state.

If this is a "heterogeneous storage" object, then it better be prepared
to do something sensible when the memory fault hits, and it probably
needs to restrict itself to a carefully designed transactional interface
with proper isolation.

This may explain why we work really, really hard to keep storage
homogeneous. At the moment, the two examples of applications that
actually deal with this are the low-level ethernet driver and the window
system. Both manage it by keeping the client-supplied memory segregated
and touching it only through a single procedure that should be (but
isn't) called "ReallyDamnedScaryParanoidMemcpy()".
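The discipline behind that single paranoid routine can be sketched in
a few lines. This is a hedged model, not the actual driver or window
system code: the names `paranoid_copy_in`, `ClientVanished`, and the
callback-based view of client memory are assumptions for illustration.

```python
# Model of touching client-supplied memory through exactly one guarded
# routine, so that a fault (e.g. the client's space bank being
# deallocated mid-operation) never leaves server state half-updated.

class ClientVanished(Exception):
    pass

def paranoid_copy_in(read_client_byte, length, limit=4096):
    # Validate everything before touching client memory at all.
    if not (0 <= length <= limit):
        raise ValueError("bad length")
    buf = bytearray()
    for i in range(length):
        try:
            b = read_client_byte(i)   # the ONLY point touching client memory
        except Exception:
            raise ClientVanished()    # client memory faulted or disappeared
        buf.append(b)
    return bytes(buf)

def server_op(state, read_client_byte, length):
    # Transactional discipline: copy in completely, then update state.
    try:
        data = paranoid_copy_in(read_client_byte, length)
    except ClientVanished:
        return state                  # server state is never partially updated
    return state + data
```

The essential property is that the copy either completes or fails
before any server-side mutation, which is the isolation discipline the
paragraph above describes.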

Definitely not an appropriate design challenge for a first year
programmer...

shap




