l4-hurd

Re: Design principles and ethics (was Re: Execute without read (was [...


From: Bas Wijnen
Subject: Re: Design principles and ethics (was Re: Execute without read (was [...]))
Date: Sun, 30 Apr 2006 18:53:39 +0200
User-agent: Mutt/1.5.11+cvs20060403

On Sun, Apr 30, 2006 at 10:19:08AM -0400, Jonathan S. Shapiro wrote:
> > >   Does the mere fact that the child was instantiated by the parent
> > >   imply that the child consents to disclose state to the parent?
> > 
> > And the answer is: We assume that it does.  Is there anything that breaks if
> > we assume this?  Yes, there is.  But so far, for all the things in that
> > category one of these is true:
> > - It can be implemented through some other mechanism
> > - We do not want to support this case, because we find it morally
> >   objectionable.
> > 
> > If you have a use case where both these are not true, please share it with 
> > us.
> 
> Three:
> 
>   1. The ability for an administrator to back up my content without
>      being able to examine it.

That is perfectly possible using a system call which doesn't allow examination
of the backed-up data.  If the place the backup goes to is not under the
control of the system (which is likely), the data can be encrypted.
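
To make this concrete, here is a minimal sketch of what such an interface
could look like.  All names are invented for illustration (this is not an
existing Hurd or Coyotos interface), and the "encryption" is only a
placeholder for a real authenticated cipher:

/* Hypothetical sketch: backup without examination.  The backup
 * capability held by the administrator only ever returns ciphertext;
 * the key stays inside the user's session. */

#include <stddef.h>
#include <stdint.h>

typedef struct { uint8_t bytes[32]; } session_key_t;  /* never leaves the session */

/* Placeholder cipher; runs inside the trusted backup service,
 * not in the administrator's session. */
static void encrypt_page(const session_key_t *key,
                         const uint8_t in[4096], uint8_t out[4096])
{
    for (size_t i = 0; i < 4096; i++)
        out[i] = in[i] ^ key->bytes[i % sizeof key->bytes];
}

/* The only operation the administrator's backup capability exposes:
 * copy out page n of the session, already encrypted. */
int backup_read_page(const session_key_t *session_key,
                     const uint8_t pages[][4096], size_t npages,
                     size_t n, uint8_t out[4096])
{
    if (n >= npages)
        return -1;
    encrypt_page(session_key, pages[n], out);
    return 0;
}

The administrator can call backup_read_page as often as he likes and still
never sees anything but ciphertext.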

>   2. The ability to safely store cryptographic keys on a machine having
>      more than one user.

I already explained how this can be done: by making sure that, recursively,
there is no untrusted parent anywhere in the chain.  This is easier than it
might sound.  See also below.
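
To illustrate what "recursively no untrusted parent" means, here is a tiny
sketch.  The structure and function are invented for this mail, not part of
any proposed interface:

#include <stdbool.h>
#include <stddef.h>

struct process {
    struct process *parent;   /* NULL for the TCB root */
    bool trusted;             /* part of the TCB, will not inspect its children */
};

/* Keys may safely be stored in p only if p and every ancestor are trusted. */
bool safe_for_keys(const struct process *p)
{
    for (; p != NULL; p = p->parent)
        if (!p->trusted)
            return false;
    return true;
}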

>   3. The ability to securely manipulate a password database.

This is pretty much equivalent to 2, I would think.  Unless by "securely" you
mean:
- The password database itself is not a trusted program (unlikely: why else
  would you submit your password to it?  You must at least trust that it
  gives sensible results).
- You don't want the database program to reveal your password.
- You don't actually want to do anything except check the password.  In
  practice, I would expect that you get a capability in response to the
  correct password, which makes the program unconfined anyway.

This is actually a use case which I suggested to Marcus as well (untrusted
confined password checkers).  But it simply doesn't make sense to use such a
thing in a real environment.
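
For what it's worth, here is a sketch of why the capability you get back
makes the checker unconfined.  Everything is invented for illustration;
nothing here is a real interface:

#include <string.h>

typedef struct { int handle; } cap_t;      /* opaque capability stand-in */
static const cap_t CAP_NONE = { -1 };

struct account {
    const char *name;
    const char *secret;      /* in reality a salted hash, not plaintext */
    cap_t session_cap;       /* the capability released on a correct password */
};

/* Returns the session capability on success, CAP_NONE otherwise.
 * Because the checker must already hold session_cap in order to hand
 * it out, it cannot be a confined, untrusted program to begin with. */
cap_t check_password(const struct account *acct,
                     const char *name, const char *password)
{
    if (strcmp(acct->name, name) != 0 || strcmp(acct->secret, password) != 0)
        return CAP_NONE;
    return acct->session_cap;
}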

> > Marcus posed a theorem, namely that there exist no use cases of the child
> > hiding data from the parent that we want to support.  If you have an
> > abstract way of proving or disproving that, please go ahead.  As far as I
> > can see, the way to go is to come up with use cases and see if they work.
> > If not, it disproves the theorem.
> 
> Well, I have offered the first two examples above several times.

Yes, but they're invalid. :-)

> > Please name a use case where the party which worries about the confinement
> > (that is, the one that doesn't want capabilities getting out) cannot be
> > the parent.
> 
> This is not the definition of confinement. Confinement is not a question
> of capabilities escaping. It is a question of *data* escaping.

And how is data supposed to escape without a capability escaping?  Again, we
are considering a trusted chain of parents, so the fact that they could
inspect it is irrelevant: they will not do that.

> > > It is also not confinement if the parent can read the child without the
> > > consent of the child. Therefore it is not confinement at all.
> > 
> > If the child doesn't trust the parent, then you have chosen the wrong
> > parent for your child.
> 
> Your user shell is the parent of /sbin/passwd when you
> execute /sbin/passwd. It is entirely proper that /sbin/passwd should not
> trust its parent.

You have indeed chosen the wrong parent. ;-)

In the current Hurd, for setuid applications the filesystem is the parent,
not the process that triggers the startup.  The filesystem is perhaps not an
ideal choice of parent, but the parent must be a party which holds the
capabilities that this particular "setuid" application uses.  And that's not
the process triggering the startup; otherwise it wouldn't need to be setuid.
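
A rough sketch of what that looks like, with all names invented for
illustration (this is not the actual Hurd code, and fs_spawn is only a stub
for the primitive assumed here):

#include <stddef.h>

typedef struct { int handle; } cap_t;

struct stored_program {
    const char *path;           /* e.g. /sbin/passwd */
    cap_t privileged_caps[4];   /* e.g. access to the password database */
    size_t ncaps;
};

/* Stub: a real filesystem would create the child from its own resources,
 * pass it the listed capabilities, and return only an IPC capability
 * for talking to the new process. */
static cap_t fs_spawn(const struct stored_program *prog,
                      const cap_t *caps, size_t ncaps)
{
    (void)prog; (void)caps; (void)ncaps;
    return (cap_t){ 1 };
}

/* What "exec of a setuid program" reduces to: the filesystem is the
 * parent; the triggering process never sees the privileged capabilities. */
cap_t exec_setuid(const struct stored_program *prog)
{
    return fs_spawn(prog, prog->privileged_caps, prog->ncaps);
}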

> > > > > Marcus proposes that any "parent" should have intrinsic access to the
> > > > > state of its "children". This property is necessarily recursive. It
> > > > > follows that the system administrator has universal access to all user
> > > > > state, and that "safe" backups are impossible.
> > > > 
> > > > Nonsense.  As you said yourself a few months ago, the administrator
> > > > might not have the right to touch everything.
> > > 
> > > In the purely hierarchical model that Marcus proposes, this property is
> > > not achieved. That is the problem that I am objecting to.
> > 
> > Of course it is.  You nicely cut out my comment that the kernel also has
> > access to all memory, so I'll say it again. ;-)  In your model, the kernel
> > has access to all memory in the system.  The administrator doesn't have
> > the right to change the kernel, so he cannot abuse this fact to get
> > access.  There is no reason that this can't be true for other parts of the
> > system as well.
> 
> I am not sure that the system administrator does not have the right to
> change the kernel. I think that they should not, but some of the strong
> opinions on this subject have said "the owner of the machine must have
> unconditional control."

Well, he usually will.  You would need something like a DRM chip to be able
to do otherwise at all, but I'm ignoring that for the moment.  Also, I wasn't
actually talking about what happens when the machine is not running the Hurd.
Anyone who can power the machine down and take out the hard drive to inspect
it has ultimate power.  The system cannot change this.

However, while the system is running, things are different.  The system _can_
prevent anyone (including the machine owner) from accessing data.  If we
choose to give the machine owner unconditional control while the system is
running, then by definition no data (encryption keys, passwords, etc.
included) is exempt from this.  But there's no reason to make that choice; in
fact there are many reasons not to.  Of course there are capabilities for
this kind of control, and it is pretty trivial to give them to someone (by
changing the snapshot while the system is not running), but doing so would be
a serious breach of security.

> > The administrator needs to create user sessions.  Fine.  But this can be
> > done by making a call to the system, so he doesn't himself become the
> > parent of them.
> 
> What is this "the system" that you are discussing?

The Hurd.  Some microkernel, for example Coyotos, with stuff around it to make
it do useful things.

> In a system without confinement, the administrator *controls* that!

This _is not_ a system without confinement.  The person who controls it is the
one who sets up and possibly maintains the snapshot image from which the
system boots.  This is _not_ the administrator in the sense of the account
which gives out new accounts and installs applications.

> In order to have this conversation usefully, you need to draw a system
> block diagram showing the processes and their relationships (the rights
> that they hold)

Ok, I did.  It is attached as an xfig file.  The capability held by the
administrator's session manager allows creating new sessions for users, but
it doesn't actually give the administrator access to the top-level space
bank.  (He can of course log into the user account before giving it to the
actual user and send capabilities to his own session, but the session will
refuse to give away the "session memory" capability, so anything created
there after account creation is in fact safe from this attack.)  The
encryption key storage in the diagram is just an example.
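
Here is a rough sketch of the interface the diagram implies.  The names are
invented for this mail and the bodies are stubs; the point is only which
capability is returned and which is refused:

#include <stdbool.h>

typedef struct { int handle; } cap_t;
static const cap_t CAP_NONE = { -1 };

struct session {
    cap_t space_bank;    /* the "session memory" capability; never handed out */
    cap_t user_entry;    /* what the administrator and, later, the user receive */
};

/* The only thing the administrator's session-manager capability can do:
 * set up a session and get back the entry capability -- not the bank. */
cap_t create_user_session(struct session *s, cap_t fresh_bank, cap_t entry)
{
    s->space_bank = fresh_bank;
    s->user_entry = entry;
    return s->user_entry;
}

/* The session refuses to give away its space bank to anyone but itself,
 * so capabilities smuggled in before the account was handed over cannot
 * reach storage created after that point. */
cap_t session_get_space_bank(const struct session *s, bool caller_is_session)
{
    return caller_is_session ? s->space_bank : CAP_NONE;
}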

> and demonstrate that there are no leaks.

The only way that there could be leaks is if there are untrusted parties with
access to the space bank, or a parent of the space bank.  As you can see,
there aren't.

> You must then explain how this system state is bootstrapped.

Bootstrapped?  We have a persistent system; we don't do bootstrapping.  You
just put the state in the snapshot before booting it.  Or do you mean account
creation by the administrator?  I described that above.

> > > > > If I have a right to choice, it is a right to *stupid* choice.
> > > > 
> > > > Choice is not a right in all situations.
> > > 
> > > I agree. However, choice is a right in all situations where no
> > > *overwhelming* third party harm can be shown to the satisfaction of the
> > > consensus of the society.
> > 
> > No, it isn't.  Choice is wrong in situations where the people who choose
> > are not knowledgeable enough to understand what they're doing, or they
> > can't actually use it for something good.
> 
> Then you should certainly stop making choices about confinement. :-) :-)

So how can you know what to implement if you're not making a choice? ;-)

> > > > I do.  Evil is when a person acts in a way that is against his or her
> > > > own moral values.
> > > 
> > > No. This is the second type of evil. The first type is when a person
> > > acts in a way that imposes their values on others without sufficient
> > > evidence of universal merit.
> > 
> > That doesn't fit with my meaning of evil, and depending on the details, it
> > may not even be a bad thing at all.
> > 
> > If someone believes that what he does is good, then that is _by (my)
> > definition_ not evil.  Evil is intentionally doing morally objectionable
> > things.
> 
> Ah. So if I cut you into small pieces and hang you from trees, it is not
> evil so long as I believe that doing this is good.

Indeed.  That doesn't mean I consider it a good idea though. ;-)

> Indeed, I might imagine that allowing your definition of evil to propagate
> is bad, and justify myself by imagining that I am pruning the moral garden
> (lovely image, but even *I* don't know what it means).

Note that there's nothing to be gained by "not being evil" in this sense.  If
you only pretend to consider it a good thing, that doesn't make it any less
evil.  It's just that nobody knows.

> The fact that this statement is consistent with your definition of evil
> suggests that the definition needs re-examination.

In particular, I think the crusaders were not evil, because they considered
what they did good.  That doesn't mean I think it was good that they did it.

Thanks,
Bas

-- 
I encourage people to send encrypted e-mail (see http://www.gnupg.org).
If you have problems reading my e-mail, use a better reader.
Please send the central message of e-mails as plain text
   in the message body, not as HTML and definitely not as MS Word.
Please do not use the MS Word format for attachments either.
For more information, see http://129.125.47.90/e-mail.html
