Re: The Perils of Pluggability (was: capability authentication)


From: Bas Wijnen
Subject: Re: The Perils of Pluggability (was: capability authentication)
Date: Tue, 11 Oct 2005 19:49:26 +0200
User-agent: Mutt/1.5.11

On Tue, Oct 11, 2005 at 11:30:45AM -0400, Jonathan S. Shapiro wrote:
> > I was talking solely about the Hurd.  Of course it is possible to write
> > systems which work differently.  But in the Hurd, capabilities from the
> > parent process are trusted (at least the ones that are passed at process
> > startup time).  Other capabilities are not trusted in general, or at least
> > not fully.  Of course the user may for example force passing an untrusted
> > capability to a child, which can make it a trusted capability for the
> > child.
> 
> Okay. I understand that this is the Hurd design. You may not agree with the
> alternative position that EROS/Coyotos takes, but have I described it
> clearly enough?

Not really, I guess.  I still don't understand what makes something
trustworthy.  In the Hurd it's easy: If the parent says it is, then it is.
Some processes may accept other sources (a shell, for example, will take stdin
as an authoritative source), but the rule is clear: when a process has started
up, anything out there is untrusted by default.  Only if we already hold some
proof to the contrary (a capability, in general) can we trust something.

In the case of your login program, who decides what can be trusted?  Is it the
system administrator?  Is it the operating system designer?  Both sound very
restrictive to me compared to the Hurd's design.

> > > Trust in a capability is not a function of the source of that
> > > capability. It is a function of knowledge of behavior. Trust in the
> > > source of the capability only becomes interesting when knowledge of
> > > behavior is impossible or unavailable.
> > 
> > In the Hurd we chose to not make this knowledge available.  Trusting the
> > parent process seems a good solution for all flexibility to me, without
> > compromising security.
> 
> Until you got to "without trusting security", I agreed. I will note that
> the Plan 9 team agreed with my assessment that restricting themselves to
> strict hierarchies of this sort made security impossible. Note: not
> "they didn't get it right", but rather, "it can't be accomplished within
> a strict hierarchy."

Ok, I'll summarise what I think about this:
- For any action there is a capability which allows it.  All capabilities a
  process starts with are given to it by its parent at startup.  For system
  servers, the parent is the root server (wortel), and startup happens at
  system boot time.
- A process which needs to do something will need to get the capability from
  somewhere.  Usually this will be its parent, because the child is typically
  started by the parent to perform a certain task.  When it starts up, the
  child never has any rights that the parent doesn't.  Of course it may
  acquire them later, but the parent could have acquired anything the child
  did.
- If some process needs to do something for which it doesn't hold a
  capability, it needs to get one.  It can only get it from a process which
  does have it.  For example, there could be a "su server" which gives out a
  capability to files owned by root in exchange for a password (sketched in C
  below).  However, to support flexibility, the "su server" capability which
  is used to ask the question should come from the parent.  If the parent
  decides to use a different su server, then the parent will have a good
  reason for it.  It is none of the child's business to mistrust things it
  received from the parent.
- If some process has a capability, it should not give it away to anyone
  unless it trusts the one it's giving it to (usually that's because it has
  some other capability, sometimes it may additionally need a password).
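
To make the third point concrete, here is a rough C sketch of how a child
might use such a "su server".  The whole API (cap_t, startup_cap,
su_get_file_cap) is invented for this illustration; it is not an existing
Hurd or L4 interface:

    /* Hypothetical capability type, invented for this sketch. */
    typedef struct cap *cap_t;

    /* Look up a capability handed to us by our parent at startup,
       e.g. "stdin" or "su-server".  (Hypothetical.)  */
    extern cap_t startup_cap (const char *name);

    /* Ask the su server to trade a password for a file capability.
       (Hypothetical.)  */
    extern cap_t su_get_file_cap (cap_t su_server, const char *path,
                                  const char *password);

    static cap_t
    get_shadow_cap (const char *password)
    {
      /* The su server capability came from our parent, so we trust it. */
      cap_t su = startup_cap ("su-server");
      if (su == NULL)
        return NULL;  /* The parent gave us no way to do this. */

      /* Whatever comes back is only as trustworthy as the su server
         itself, i.e. as trustworthy as our parent chose it to be. */
      return su_get_file_cap (su, "/etc/shadow", password);
    }

The point is only the flow: the child asks a server it got from its parent,
and never has to judge trust on its own.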

Probably there's nothing new there for you, I just wanted to make sure we're
not mixing up definitions.

So what is the security problem with it?

> > > 2. If the service process has capabilities that it does not get from the
> > > parent, then the parent *can't* do all of the malicious things that the
> > > service might do. This turns out to be a very useful design pattern.
> > 
> > There are two sources of capabilities for a process: From the parent, and
> > from third parties.  Any capability that can be acquired from a third party
> > can be acquired by the parent itself as well.  So this scenario is not
> > possible in the Hurd.
> 
> Okay. I understand what you are describing for Hurd. In EROS/Coyotos, it
> is not the case that a parent has access to the capabilities of its
> children.

I understand that.  Note that in the Hurd a parent doesn't have access to the
capabilities of its children either; it is just that the child cannot do
anything that the parent itself couldn't.  If the child somehow received a
capability from a third party, then that capability is not available to the
parent (unless it is copied).  However, since the parent chooses which
children to create, it can easily create one which does copy all its
translators back to it.  I think if this happens it should be regarded as a
security hole in the parent (and in the children crafted for this purpose).

> The problem with an "is this yours" operation is this: if you are in a
> position where you need it, you probably cannot rely on having gotten a
> valid capability to the server either. The party who provided you this
> capability is in a position to lie.

The capability to the server should come from the parent, which makes it
trustworthy.  The other capability, the one we ask "is this yours?" about,
doesn't need to come from a trusted source (see the sketch below).
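
A minimal sketch of what such a check could look like (cap_is_yours and
cap_t are invented names, not operations we have actually defined):

    /* Hypothetical capability type and "is this yours?" operation. */
    typedef struct cap *cap_t;

    /* Ask `trusted_server' whether `questionable' is one of its own
       capabilities.  Returns 1 for yes, 0 for no.  (Hypothetical.)  */
    extern int cap_is_yours (cap_t trusted_server, cap_t questionable);

    static int
    accept_third_party_cap (cap_t server_from_parent, cap_t offered)
    {
      /* We trust server_from_parent because the parent gave it to us;
         `offered' came from a third party and is only accepted if the
         trusted server vouches for it. */
      return cap_is_yours (server_from_parent, offered);
    }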

> So in EROS, you ask the constructor of the server instead. Constructors
> are part of the system TCB, and can be obtained independent of your
> parent or any other source.

So the answer to my question above, "where is the trust", would be the system
administrator?

> > > > I don't see where security is lost if you trust the creator of your
> > > > process.
> > > 
> > > Please explain how /sbin/passwd would work in this model.
> > 
> > passwd is started by the filesystem, which has access to /etc/passwd and
> > /etc/shadow.  It gets its capabilities for stdin/stdout from an untrusted
> > source (the process which runs the command, not the parent), but that's
> > ok, as it doesn't use them for any security-sensitive operations *that
> > it's responsible for*.  If stdin is monitored by some malicious process,
> > then the process which runs the command made that possible.  This is not
> > the responsibility of /bin/passwd, and there's nothing it can do about it
> > anyway.  This is akin to running passwd over an unencrypted network
> > connection.  While passwd may behave well, there's still a huge security
> > problem.
> 
> It is apparent that I chose a poor example. Your implementation
> of /sbin/passwd *is* able to trust its parent, because its parent is
> part of the system TCB.

That depends on your definition of "system".  The file system may be
something which was started by the user.  However, the top-level /bin/passwd
would access /etc/passwd in its own root filesystem, which cannot be masked
by anyone but root.

> However, do you see that there might be other programs that need to
> guard things that are NOT started by the system TCB? The design pattern
> you propose does not generalize.

No, I do not see this.  If something should be guarded, then someone with
access to it should guard it.  It should give out the capability only to
processes which it can trust.

> > > Then try to explain it in a system that does not have set[ug]id.
> > 
> > There is no way to run Hurd and still be secure on such a system.  I do not
> > see this as a problem, it is simply a requirement for the Hurd.
> 
> There is no way to have uids and gids as a basis for access control in
> such a system and be secure at all. Ever. This is not an implementation
> deficiency. It is a mathematical certainty. Depending on Hurd's design
> objectives, this may not be a problem.

I think it is a good idea to use capabilities, not UIDs and GIDs, as much as
possible.  However, we will need to emulate the POSIX layer.  If that cannot
be done securely, we may want to make it easy to disable certain POSIX parts
in order to get a secure system.  One way would be to drop all UID/GID
capabilities, so processes cannot use them to get other capabilities.
Instead, they would need to use the normal way of acquiring capabilities.
I'm assuming here that UIDs and GIDs are implemented through a server which
gives anyone who holds the right UID capability any capability that that user
should have access to (see the sketch below).
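
A sketch of what I have in mind, with invented names (uid-server,
uid_server_lookup and so on are not existing interfaces):

    /* Hypothetical capability API for UID emulation. */
    typedef struct cap *cap_t;

    /* Capabilities handed to us by our parent at startup, for example
       "uid-server" and "uid:1000".  (Hypothetical.)  */
    extern cap_t startup_cap (const char *name);

    /* The UID server hands out object capabilities to holders of the
       matching UID capability; without one it refuses.  (Hypothetical.)  */
    extern cap_t uid_server_lookup (cap_t uid_server, cap_t uid_cap,
                                    const char *object);

    static cap_t
    open_home_dir (void)
    {
      cap_t server = startup_cap ("uid-server");
      cap_t uid    = startup_cap ("uid:1000");

      /* Dropping `uid' (or never receiving it) is how a process opts out
         of the POSIX layer: this call then fails, and only explicitly
         granted capabilities remain usable. */
      if (server == NULL || uid == NULL)
        return NULL;
      return uid_server_lookup (server, uid, "home-directory");
    }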

> > > But you often *do* have rights that your parent does not, and there
> > > sometimes *are* more appropriate places.
> > 
> > Not in the Hurd. :-)  But if you disagree, please give an example.
> 
> I gave an example yesterday with our login agent. I will explain shortly
> how the EROS constructor works. This may provide a clearer answer in
> practice.

Ok.

> > When I wrote "external", I meant that it is beyond the control of the
> > client.  So it cannot confine it, and doesn't have any guarantees about
> > its behaviour.
> 
> I would argue that the existence of such an unconfinable but untrusted
> server is almost always a mistake. It is sometimes necessary, but it is
> something that one should go to great effort to minimize.

Not at all.  An example of such a server is a webserver.  You aren't sending
your password file to it, but that doesn't mean you don't want to use it.  If
you mean that they shouldn't be part of the core of the operating system, then
I agree.  But it should be possible (and easy) to use them.

> Yes. Actually, there is a fourth party: the architect of the operating
> system. Since they write the installer, and the installer can (in
> principle) be constructed in a way that cannot be bypassed, they are
> also in a position to impose policy. Whether they *should* do so, and
> what kind of policy, is very much dependent on your design philosophy.

Agreed.

> > > Actually, I think you have given a very nice example. All I really want
> > > for this example is a system architecture that makes running *confined*
> > > plugins so easy and efficient that it becomes the default approach used
> > > by developers.
> > 
> > That sounds good.  Forking and dropping (almost) all capabilities seems
> > easy enough for me.
> 
> ARRGGHH!! NOOOO!!! This design is how every UNIX server in history has
> come to have holes!
> 
> No. The design you want is to start with *no* authority and provide
> exactly what is needed. Trying to throw authority away *always* leads to
> compromise.

Well, given that you cannot get a capability back once you've dropped it,
you're going to have to drop the things you don't want to use; it can't be
"drop everything, pick up what you still need".

Of course, this should be done by specifying what you want to keep, not what
you want to drop (sketched below).
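
Something along these lines, say (spawn_with_caps and the rest are invented
names, just to show the keep-list idea):

    /* Hypothetical API for starting a confined child. */
    typedef struct cap *cap_t;

    struct cap_entry { const char *name; cap_t cap; };

    /* Start a child that begins life with exactly the `n' listed
       capabilities and nothing else.  (Hypothetical.)  */
    extern int spawn_with_caps (const char *path,
                                const struct cap_entry *keep, int n);

    static int
    run_confined_plugin (cap_t stdio_cap, cap_t data_cap)
    {
      /* A keep-list: anything not named here simply does not exist in
         the child, so there is nothing to forget to drop. */
      struct cap_entry keep[] = {
        { "stdio", stdio_cap },
        { "data",  data_cap  },
      };
      return spawn_with_caps ("/libexec/plugin", keep, 2);
    }

In practice that comes very close to "start with no authority and provide
exactly what is needed": the child never holds the parent's full set, not
even for a moment.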

> > I very much like the UNIX approach: Be secure by default, be flexible by
> > request.  The request often comes in the form of --force.  If someone messes
> > up when using --force, I don't think the developer is to blame.  I am very
> > much against limiting possibilities because they can be abused.
> 
> As long as the consequences can be contained to the person who used
> --force, I tend to agree.

If they cannot, then the user is probably root.  If someone uses --force, then
it's his responsibility not to mess things up, not the program's.  If that
person is root, then you have a problem with your system administrator.
However, this is a social problem, not a technical one.

> It would be pleasant if I, as a user, also had the option to first build a
> safe box to use --force inside.

Yes, that sounds good.  Dropping capabilities seems like a good way of
building such a box to me, and it should be easy to do it from the shell (the
shell needs to be changed to allow it, of course).

> > > > > In general, pluggability must not be opaque: if you change a
> > > > > contract that I rely on, I need to be able to detect this.
> > > > 
> > > > You mean if the implementation of the interface changes?  I do not see
> > > > the difference between having an interface which was defined from the
> > > > start as "I'll do A, but after 15 minutes I'll be doing B" and not
> > > > changing it, and "I'll do A", and after 15 minutes the implementation
> > > > is changed into "I'll do B".  I can understand that it matters for
> > > > verification, but I'm assuming here that that's not possible.
> > > 
> > > The difference is that the first one is testable and broken. You can go
> > > to the implementor and demand a fix, or you can replace the broken
> > > component. The second happens *after* your program ships, and it
> > > violates your dependency relationships.
> > 
> > So you want some version checking of the interface?  That sounds
> > reasonable..
> 
> That is completely useless here.

It's like library versioning.  It's very annoying to get right, but useful
nevertheless.
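
For example, something like this at the point where a program first obtains
a capability to a server it depends on (cap_get_interface and the interface
name are invented for the sketch):

    /* Hypothetical interface-version query on a capability. */
    typedef struct cap *cap_t;

    struct iface_version { int major; int minor; };

    /* Ask the server behind `server' which version of interface `iface'
       it implements.  Returns 0 on success, -1 on failure.  (Hypothetical.) */
    extern int cap_get_interface (cap_t server, const char *iface,
                                  struct iface_version *out);

    #define FS_IFACE_MAJOR 2   /* the major version we were built against */

    static int
    check_fs_interface (cap_t fs)
    {
      struct iface_version v;
      if (cap_get_interface (fs, "hurd-fs", &v) < 0)
        return -1;
      /* Same rule as shared-library versioning: a different major
         version is a hard error, a newer minor version is fine. */
      return (v.major == FS_IFACE_MAJOR) ? 0 : -1;
    }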

> I want it to be IMPOSSIBLE for you to swap code out from under me without my
> consent.

It normally is.  Only if you are running code from my filesystem (which is
not something that happens by accident) could I do such things, because the
filesystem provides the shared libraries.  Even then, I think we came up with
procedures so that physmem doesn't allow me to do anything to them: the pages
are copy-on-write, so when my filesystem changes the library, it only changes
its own copy of it.

I can certainly do it if you are using my physmem instead of the system one,
but that is definitely something you would notice doing.

> Usually, I will not care, and you can swap as much as you like. However, if
> I need to make a guarantee of behavior, I cannot do it if you can change the
> implementations out from under me.

These kinds of setups (where you run things in someone else's untrusted
sub-hurd) aren't the kind of situations people make guarantees about.  If
your program is meant to be used with a certain library, then the system will
not allow anyone to swap it out from under you.  Only in specially crafted
subsystems is that possible, and that doesn't happen accidentally.

> > ...It shouldn't prevent intentional interface changes though.

I meant implementation changes, sorry.

> Yes. In the situation I am concerned about it MUST prevent intentional
> interface changes and also intentional changes of implementation. If you
> wish to change these for *your* programs, that is fine. You should not
> have the authority to change them for *my* programs.

As I said above, *you* must go through some trouble to allow me to do this.
By default it will not and must not be possible.

> > > Actually, this is exactly the scenario that we *can* make manageable if
> > > we can do identify on a small number of very low level services.
> > 
> > I don't have a clear picture of what you mean here.  Could you clarify a
> > bit?
> 
> Probably not until we have a better picture of how confinement works.
> Since we already have enough to consider, I propose that we come back to
> this when the current discussions have stabilized.

Ok.

Thanks,
Bas

-- 
I encourage people to send encrypted e-mail (see http://www.gnupg.org).
If you have problems reading my e-mail, use a better reader.
Please send the central message of e-mails as plain text
   in the message body, not as HTML and definitely not as MS Word.
Please do not use the MS Word format for attachments either.
For more information, see http://129.125.47.90/e-mail.html


