l4-hurd



From: Marcus Brinkmann
Subject: Re: Design principles and ethics (was Re: Execute without read (was [...]))
Date: Sun, 30 Apr 2006 17:52:48 +0200
User-agent: Wanderlust/2.14.0 (Africa) SEMI/1.14.6 (Maruoka) FLIM/1.14.7 (Sanjō) APEL/10.6 Emacs/21.4 (i486-pc-linux-gnu) MULE/5.0 (SAKAKI)

At Sun, 30 Apr 2006 10:19:08 -0400,
"Jonathan S. Shapiro" <address@hidden> wrote:
>   1. The ability for an administrator to back up my content without
>      being able to examine it.

This requires more precise definitions.  First, it is not the
administrator you are worried about, but whoever decides what is
initially installed on the machine.  I call this entity the "machine
owner".

Second, the machine owner can give up partial control over the
machine only voluntarily.  He can always do this, with or without
trusted computing hardware.  However, only with trusted computing
hardware can the user _verify_ (by remote attestation) that the
machine owner has given up partial control over the machine.
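
To make the attestation step concrete, here is a minimal, schematic C
sketch of the measured-boot idea behind it.  The names are
hypothetical and the hash is a toy stand-in; it only illustrates how a
hash chain commits to the boot configuration, not an actual chip
interface.

/* A minimal, schematic sketch of the "measured boot" idea behind remote
   attestation.  The names are hypothetical and the hash is a toy
   stand-in (FNV-1a); a real trusted computing chip uses a cryptographic
   hash and signs the final register value with a hardware-protected
   key.  Only the hash chain is modeled here, not signing or the
   network protocol.  */

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Toy stand-in for a cryptographic hash; illustrative only.  */
static uint64_t
toy_hash (const void *data, size_t len, uint64_t seed)
{
  const unsigned char *p = data;
  uint64_t h = seed ^ 0xcbf29ce484222325ULL;
  size_t i;

  for (i = 0; i < len; i++)
    {
      h ^= p[i];
      h *= 0x100000001b3ULL;
    }
  return h;
}

/* The chip keeps a measurement register that can only be extended,
   never reset, so the final value commits to the whole boot chain.  */
static uint64_t measurement;

static void
extend_measurement (const char *component)
{
  measurement = toy_hash (component, strlen (component), measurement);
}

int
main (void)
{
  /* Each boot stage measures the next component before running it.  */
  extend_measurement ("firmware");
  extend_measurement ("bootloader");
  extend_measurement ("kernel-with-partial-owner-lockout");

  /* In real attestation the chip signs this value; the remote user
     compares it against the measurement of a configuration known to
     restrict the machine owner, and only then hands over private
     data.  */
  printf ("attested measurement: %016llx\n",
          (unsigned long long) measurement);
  return 0;
}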

So, the requirements you need to stipulate to reach a situation that
my model cannot support are much tighter than you make them sound.

As a side note: In the model you are proposing, the administrator can
take your backups hostage, because you rely on him for durability.  In
the same fashion, the trusted computing chip manufacturer can hold your
data hostage, because they are the only party who can transfer the
root key from one computer to another.  In fact, the backup chain in
such a setup relies on the integrity (and durability!) of the trusted
computing chip.  This is one reason why the backup scenario does not
make much sense in the first place.  Trusted computing is a threat to
long term storage of digital data.  This may or may not concern you,
but:

I have not yet heard a convincing use case that I want to support
which combines non-durable remote storage with privacy.

>   2. The ability to safely store cryptographic keys on a machine having
>      more than one user.

Again, you are stating the wrong problem.  What you mean to say,
presumably (I am just fixing it up so it makes sense), is that you
want to store a cryptographic key on a machine where the machine
owner is different from the user who wants to store the key.

Well, I further suppose that the keys themselves are not protected,
or are at some point stored in unprotected form.  The same comments
as above apply, because it is exactly the same problem.

Again, it is questionable whether you should do this in the first
place.  What's the use case?  It seems to me that this is a matter of
"don't do that".

>   3. The ability to securely manipulate a password database.

Same problem, same answer, same reservations.

> > Marcus posed a theorem, namely that there exist no use cases of the child
> > hiding data from the parent that we want to support.  If you have an
> > abstract way of proving or disproving that, please go ahead.  As far as I
> > can see, the way to go is to come up with use cases and see if they work.
> > If not, it disproves the theorem.
> 
> Well, I have offered the first two examples above several times.

A complete use case would give information on the actual parties
involved performing the operations, and their relationships with each
other.

As you stated the three examples, they are fully equivalent.  They
are equivalent for a reason: they are just the abstract model you
have in mind, with some words substituted for others.  That is
meaningless.  A use case would give us information (and presumably
requirements or conditions) beyond those intrinsic to the abstract
case.

> > > It is also not confinement if the parent can read the child without the
> > > consent of the child. Therefore it is not confinement at all.
> > 
> > If the child doesn't trust the parent, then you have chosen the wrong parent
> > for your child.
> 
> Your user shell is the parent of /sbin/passwd when you
> execute /sbin/passwd. It is entirely proper that /sbin/passwd should not
> trust its parent.

True, but that shows that the Unix model is broken.  It is in fact
broken in several ways:

(1) passwords should not be used in the first place
(2) passwords, if they are used, should not be stored in system storage,
    but in user storage (ssh does get this right)
(3) if passwords are stored in system storage, they should be updated
    by a program instantiated by the system, running on system
    resources, not instantiated by a user program running on user
    resources (which are not durable).  In other words, passwd should
    be an advertised service.

Quick, answer this: Can a user kill a setuid program on Unix?  Pick
any quota-extension to Unix: How does it attribute the memory and cpu
time consumed by a setuid program?  How does it attribute the disk
space?  How does this behaviour fit with your model of resource
accounting?

Short answer: setuid is a kludge.  Setuid programs should be
eliminated or replaced by system services that are advertised (via
capabilities) to the user's session.
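
To illustrate the shape of such an advertised service, here is a
minimal, schematic C sketch.  All names are hypothetical, and the
"capability" is just a struct standing in for a kernel-protected
object; it only shows the relationship between a system-instantiated
passwd service and the session that holds a capability to it.

/* A schematic sketch of "passwd as an advertised service".  All names
   are hypothetical; on a real system the service would run in its own
   address space on system resources, and the capability would be a
   kernel-protected object rather than a C struct.  */

#include <stdio.h>
#include <string.h>

/* State owned by the system-instantiated service; the user's session
   never gets direct access to it.  */
struct passwd_service
{
  char stored_password[64];
};

/* The only operation the service advertises: replace the caller's
   password after checking the old one.  */
static int
passwd_change (struct passwd_service *svc,
               const char *old_pw, const char *new_pw)
{
  if (strcmp (svc->stored_password, old_pw) != 0)
    return -1;                  /* old password does not match */
  snprintf (svc->stored_password, sizeof svc->stored_password,
            "%s", new_pw);
  return 0;
}

/* The capability handed to the user's session at login: an opaque
   handle plus the operations the holder may invoke, nothing more.  */
struct passwd_cap
{
  struct passwd_service *svc;
  int (*change) (struct passwd_service *, const char *, const char *);
};

int
main (void)
{
  /* System side: instantiate the service on system storage.  */
  static struct passwd_service svc = { "old-secret" };

  /* Advertise a capability for it to the user's session.  */
  struct passwd_cap cap = { &svc, passwd_change };

  /* User side: the session invokes the advertised operation through
     the capability.  In the real design this call would cross into
     the system-instantiated service, so the shell is never the
     parent of a privileged process.  */
  if (cap.change (cap.svc, "old-secret", "new-secret") == 0)
    printf ("password changed via advertised service\n");
  else
    printf ("old password rejected\n");
  return 0;
}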

Thanks,
Marcus




