l4-hurd

Re: The Perils of Pluggability (was: capability authentication)


From: Jonathan S. Shapiro
Subject: Re: The Perils of Pluggability (was: capability authentication)
Date: Tue, 11 Oct 2005 11:30:45 -0400

On Tue, 2005-10-11 at 12:34 +0200, Bas Wijnen wrote:

> 
> > It is often true that subprograms trust their instantiator, but it is
> > not always true. In EROS and Coyotos this assumption is not necessary.
> > We have a number of programs that wield capabilities that their users do
> > not (and must not ever) possess. The ability to use and guard these
> > capabilities against a hostile creator is one of the foundations of our
> > security.
> 
> I was talking solely about the Hurd.  Of course it is possible to write
> systems which work differently.  But in the Hurd, capabilities from the parent
> process are trusted (at least the ones that are passed at process startup
> time).  Other capabilities are not trusted in general, or at least not fully.
> Of course the user may for example force passing an untrusted capability to a
> child, which can make it a trusted capability for the child.

Okay. I understand that this is the Hurd design. You may not agree with
the alternative position that EROS/Coyotos takes, but have I described
it clearly enough?

> > Trust in a capability is not a function of the source of that
> > capability. It is a function of knowledge of behavior. Trust in the
> > source of the capability only becomes interesting when knowledge of
> > behavior is impossible or unavailable.
> 
> In the Hurd we chose to not make this knowledge available.  Trusting the
> parent process seems a good solution for all flexibility to me, without
> compromising security.

Until you got to "without compromising security", I agreed. I will note that
the Plan 9 team agreed with my assessment that restricting themselves to
strict hierarchies of this sort made security impossible. Note: not
"they didn't get it right", but rather, "it can't be accomplished within
a strict hierarchy."

> > 2. If the service process has capabilities that it does not get from the
> > parent, then the parent *can't* do all of the malicious things that the
> > service might do. This turns out to be a very useful design pattern.
> 
> There are two sources of capabilities for a process: From the parent, and from
> third parties.  Any capability that can be acquired from a third party can be
> acquired by the parent itself as well.  So this scenario is not possible in the
> Hurd.

Okay. I understand what you are describing for Hurd. In EROS/Coyotos, it
is not the case that a parent has access to the capabilities of its
children.

> > By convention, EROS does provide an operation on (nearly) all
> > capabilities: getAllegedType(). This returns a unique identifier for the
> > interface type. Note, however, the word "alleged". Just because a
> > process *says* that it implements an interface does not mean that it
> > *does*.
> 
> In the above example, not only what it does, but also on which server, is
> important.  A question "for which server is this capability?" seems
> unnecessary to me, but "is this yours?" may be useful, I think.

Yes. The GetAllegedType() information specifically does NOT answer "on
which server", which is why we call it the *alleged* type.

The problem with an "is this yours" operation is this: if you are in a
position where you need it, you probably cannot rely on having gotten a
valid capability to the server either. The party who provided you this
capability is in a position to lie.

So in EROS, you ask the constructor of the server instead. Constructors
are part of the system TCB, and can be obtained independent of your
parent or any other source.
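
To make the distinction concrete, here is a minimal sketch in plain C (all
names invented for illustration; none of this is actual EROS or Coyotos API).
getAllegedType() only tells you which interface an object *claims* to
implement; a constructor that you obtained independently of the party handing
you the capability is what can tell you whether it actually built the server
behind it:

/* Toy model: a capability records an alleged interface type and an opaque
 * reference to whatever server stands behind it. */
#include <stdbool.h>
#include <stdio.h>

typedef unsigned long type_id_t;

struct capability {
    type_id_t alleged_type;   /* whatever the object *claims* to implement */
    const void *server;       /* opaque; the client cannot inspect this */
};

#define IOSTREAM_TYPE 0x1001UL

/* getAllegedType(): answers "which interface is claimed?", nothing more. */
static type_id_t get_alleged_type(const struct capability *cap)
{
    return cap->alleged_type;
}

/* The constructor is part of the TCB and was obtained independently of the
 * party that handed us the capability, so its answer about provenance can
 * be trusted. */
struct constructor {
    const void *built_server;   /* the server instance this constructor built */
};

static bool constructor_built(const struct constructor *c,
                              const struct capability *cap)
{
    return cap->server == c->built_server;
}

int main(void)
{
    static const int real_server = 1, imposter = 2;
    struct constructor io_ctor = { .built_server = &real_server };

    struct capability honest_cap   = { IOSTREAM_TYPE, &real_server };
    struct capability imposter_cap = { IOSTREAM_TYPE, &imposter };

    /* Both capabilities allege the same interface type... */
    printf("alleged types match: %d\n",
           get_alleged_type(&honest_cap) == get_alleged_type(&imposter_cap));

    /* ...but only the constructor can say which one it actually built. */
    printf("honest cap built by trusted ctor:   %d\n",
           constructor_built(&io_ctor, &honest_cap));
    printf("imposter cap built by trusted ctor: %d\n",
           constructor_built(&io_ctor, &imposter_cap));
    return 0;
}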

> > > I don't see where security is lost if you trust the creator of your 
> > > process.
> > 
> > Please explain how /sbin/passwd would work in this model.
> 
> passwd is started by the filesystem, which has access to /etc/passwd and
> /etc/shadow.  It gets its capabilities for stdin/stdout from an untrusted
> source (the process which runs the command, not the parent), but that's ok, as
> it doesn't use them for any security-sensitive operations *that it's
> responsible for*.  If stdin is monitored by some malicious process, then the
> process which runs the command made that possible.  This is not the
> responsibility of /bin/passwd, and there's nothing it can do about it anyway.
> This is akin to running passwd over an unencrypted network connection.  While
> passwd may behave well, there's still a huge security problem.

It is apparent that I chose a poor example. Your implementation
of /sbin/passwd *is* able to trust its parent, because its parent is
part of the system TCB.

However, do you see that there might be other programs that need to
guard things that are NOT started by the system TCB? The design pattern
you propose does not generalize.

> > Then try to explain it in a system that does not have set[ug]id.
> 
> There is no way to run Hurd and still be secure on such a system.  I do not
> see this as a problem, it is simply a requirement for the Hurd.

There is no way to have uids and gids as a basis for access control in
such a system and be secure at all. Ever. This is not an implementation
deficiency. It is a mathematical certainty. Depending on Hurd's design
objectives, this may not be a problem.

> > But you often *do* have rights that your parent does not, and there
> > sometimes *are* more appropriate places.
> 
> Not in the Hurd. :-)  But if you disagree, please give an example.

I gave an example yesterday with our login agent. I will explain shortly
how the EROS constructor works. This may provide a clearer answer in
practice.

> > > >   2. I do not know that information sent on this stream will remain
> > > >      private. The implementor of the IOstream interface could very
> > > >      well broadcast it to the world.
> > > 
> > > In case of an external untrusted server, this is necessarily the case.  I
> > > see no other way...
> > 
> > I think you have not yet considered the role of confinement. All of the
> > properties that you identified for a library implementation can be
> > achieved for a completely untrusted service, provided the instance of
> > the service is confined. The confinement check can be done without code
> > inspection.
> 
> When I wrote "external", I meant that it is beyond the control of the client.
> So it cannot confine it, and doesn't have any guarantees about its behaviour.

I would argue that the existence of such an unconfinable but untrusted
server is almost always a mistake. It is sometimes necessary, but it is
something that one should go to great effort to minimize.
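
For reference, here is a toy sketch of the kind of confinement check I mean
(plain C, invented structures; this is not how EROS actually spells it).  The
instantiator examines only the plugin's *initial* capabilities, never its
code: if nothing in that initial set can carry information to the outside,
the instance is confined regardless of what its code does:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Toy classification of the capabilities a new instance starts with. */
enum cap_kind {
    CAP_READ_ONLY_CONST,   /* cannot transmit information outward */
    CAP_CONFINED_CTOR,     /* yields instances already known to be confined */
    CAP_COMM_CHANNEL       /* could leak data to an outside party */
};

struct initial_caps {
    const enum cap_kind *caps;
    size_t count;
};

/* Confinement check: no code inspection, only a scan of initial authority. */
static bool is_confined(const struct initial_caps *ic)
{
    for (size_t i = 0; i < ic->count; i++)
        if (ic->caps[i] == CAP_COMM_CHANNEL)
            return false;
    return true;
}

int main(void)
{
    enum cap_kind codec_caps[] = { CAP_READ_ONLY_CONST, CAP_CONFINED_CTOR };
    enum cap_kind leaky_caps[] = { CAP_READ_ONLY_CONST, CAP_COMM_CHANNEL };

    struct initial_caps codec = { codec_caps, 2 };
    struct initial_caps leaky = { leaky_caps, 2 };

    printf("codec plugin confined: %d\n", is_confined(&codec));
    printf("leaky plugin confined: %d\n", is_confined(&leaky));
    return 0;
}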

> > > > The current state of the art gives us only three mechanisms for dealing
> > > > with this:
> > > > 
> > > >   3. Risk: we recognize that we have no reasonable basis for trust,
> > > >      and we decide to use something anyway. The key to this is to
> > > >      arrive at a system architecture where risk is survivable.
> > > 
> > > This is up to the user, not the process IMO.
> > 
> > It is up to a combination of parties:
> > 
> >   + The user who decides what to run
> >   + The administrator, who decides what is *available* to run
> >   + The architect of the application (in your example, xmms), who
> >     decides what plugins they will run and what environment the plugin
> >     will run within.
> 
> Ok, that makes sense.  But in the end, all this comes down to "the user" IMO
> (where I assume the user can go to the administrator and have things installed
> if she wants to).  The architect of the application can be bypassed by using a
> different application. :-)

Yes. Actually, there is a fourth party: the architect of the operating
system. Since they write the installer, and the installer can (in
principle) be constructed in a way that cannot be bypassed, they are
also in a position to impose policy. Whether they *should* do so, and
what kind of policy, is very much dependent on your design philosophy.

> > Actually, I think you have given a very nice example. All I really want
> > for this example is a system architecture that makes running *confined*
> > plugins so easy and efficient that it becomes the default approach used
> > by developers.
> 
> That sounds good.  Forking and dropping (almost) all capabilities seems easy
> enough for me.

ARRGGHH!! NOOOO!!! This design is how every UNIX server in history has
come to have holes!

No. The design you want is to start with *no* authority and provide
exactly what is needed. Trying to throw authority away *always* leads to
compromise.
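
Here is a sketch of the difference, using invented names rather than any real
Hurd or EROS interface.  The "drop" style starts from the parent's full
authority and subtracts whatever the author remembers to subtract; the
"grant" style starts from nothing and adds exactly what the child needs, so a
forgotten capability fails closed instead of open:

#include <stdio.h>

/* Toy capability bits held by a process. */
#define CAP_STDIO     (1u << 0)
#define CAP_AUDIO_OUT (1u << 1)
#define CAP_HOME_DIR  (1u << 2)
#define CAP_NETWORK   (1u << 3)
#define CAP_PASSWD_DB (1u << 4)   /* the one the author forgot about */

typedef unsigned cap_set;

/* Style 1: fork, then try to drop.  Anything not explicitly dropped leaks. */
static cap_set drop_style(cap_set parent)
{
    return parent & ~(CAP_HOME_DIR | CAP_NETWORK);   /* forgot CAP_PASSWD_DB */
}

/* Style 2: start empty, grant exactly what the plugin needs. */
static cap_set grant_style(void)
{
    return CAP_STDIO | CAP_AUDIO_OUT;
}

int main(void)
{
    cap_set parent = CAP_STDIO | CAP_AUDIO_OUT | CAP_HOME_DIR |
                     CAP_NETWORK | CAP_PASSWD_DB;

    printf("drop-style child still holds passwd db: %d\n",
           (drop_style(parent) & CAP_PASSWD_DB) != 0);
    printf("grant-style child holds passwd db:      %d\n",
           (grant_style() & CAP_PASSWD_DB) != 0);
    return 0;
}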

> I very much like the UNIX approach: Be secure by default, be flexible by
> request.  The request often comes in the form of --force.  If someone messes
> up when using --force, I don't think the developer is to blame.  I am very
> much against limiting possibilities because they can be abused.

As long as the consequences can be contained to the person who used
--force, I tend to agree. It would be pleasant if I, as a user, also had
the option to first build a safe box to use --force inside.

> > > > In general, pluggability must not be opaque: if you change a
> > > > contract that I rely on, I need to be able to detect this.
> > > 
> > > You mean if the implementation of the interface changes?  I do not see the
> > > difference between having an interface which was defined from the start as
> > > "I'll do A, but after 15 minutes I'll be doing B" and not changing it, and
> > > "I'll do A", and after 15 minutes the implementation is changed into 
> > > "I'll do
> > > B".  I can understand that it matters for verification, but I'm assuming 
> > > here
> > > that that's not possible.
> > 
> > The difference is that the first one is testable and broken. You can go
> > to the implementor and demand a fix, or you can replace the broken
> > component. The second happens *after* your program ships, and it
> > violates your dependency relationships.
> 
> So you want some version checking of the interface?  That sounds reasonable..

That is completely useless here. I want it to be IMPOSSIBLE for you to
swap code out from under me without my consent. Usually, I will not
care, and you can swap as much as you like. However, if I need to make a
guarantee of behavior, I cannot do it if you can change the
implementations out from under me.

> ...It shouldn't prevent intentional interface changes though.

Yes. In the situation I am concerned about it MUST prevent intentional
interface changes and also intentional changes of implementation. If you
wish to change these for *your* programs, that is fine. You should not
have the authority to change them for *my* programs.
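
As a rough illustration only (ordinary C, nothing capability-specific): a
client that re-resolves an implementation through a mutable registry on every
use can have the code swapped out from under it, while a client that captured
the binding once, with its consent, keeps the behavior it depended on:

#include <stdio.h>

static int honest_checksum(int x)    { return x * 31 + 7; }
static int malicious_checksum(int x) { (void)x; return 0; }

/* A registry that third parties may rebind at any time. */
static int (*registry_checksum)(int) = honest_checksum;

int main(void)
{
    /* Client B captures the implementation once, with its consent. */
    int (*pinned)(int) = registry_checksum;

    /* A third party swaps the registry entry behind everyone's back. */
    registry_checksum = malicious_checksum;

    /* Client A re-resolves through the registry and gets the swapped code. */
    printf("client A (re-resolved): %d\n", registry_checksum(42));
    printf("client B (pinned):      %d\n", pinned(42));
    return 0;
}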

> > > >   2. The consequences of failure are manageable. For example, the
> > > >      risk of introducing a new audio CODEC is acceptable because the
> > > >      worst that happens is you kill the player.
> > > 
> > > Even that is not possible if the player doesn't trust the plugin, see my
> > > xmms example above.
> > 
> > Actually, this is exactly the scenario that we *can* make manageable if
> > we can do identify on a small number of very low level services.
> 
> I don't have a clear picture of what you mean here.  Could you clarify a bit?

Probably not until we have a better picture of how confinement works.
Since we already have enough to consider, I propose that we come back to
this when the current discussions have stabilized.

shap




