Re: Design principles and ethics (was Re: Execute without read (was [...


From: Bas Wijnen
Subject: Re: Design principles and ethics (was Re: Execute without read (was [...]))
Date: Sun, 30 Apr 2006 12:17:14 +0200
User-agent: Mutt/1.5.11+cvs20060403

On Sat, Apr 29, 2006 at 10:23:16PM -0400, Jonathan S. Shapiro wrote:
> On Sun, 2006-04-30 at 03:52 +0200, Bas Wijnen wrote:
> > > What Marcus describes is a situation where (a) the parent establishes
> > > the authorized channels and (b) the parent can spy on the child's state.
> > > The second provision violates the requirement for intent.
> > 
> > Huh?  Why can't the child intend to transmit if it was started by the
> > parent?
> 
> You have it backwards. The correct question is:
> 
>   Does the mere fact that the child was instantiated by the parent
>   imply that the child consents to disclose state to the parent?

And the answer is: we assume that it does.  Is there anything that breaks if
we assume this?  Yes, there is.  But so far, for everything in that category,
one of the following is true:
- It can be implemented through some other mechanism.
- We do not want to support the case, because we find it morally
  objectionable.

If you have a use case for which neither of these is true, please share it
with us.  However, you will probably want to hold off until Marcus has
explained everything about it (and although he has talked with me about it,
I'm sure I haven't heard the whole story yet either).

> > We are talking here about things like browser plugins.
> 
> You were, but my comment is in the broader context of a debate about
> confinement. It is not limited to subordinate subsystems. These are a
> useful special case, but not instructive for purposes of the broader
> debate.

Marcus posed a theorem, namely that there are no use cases that we want to
support in which the child hides data from the parent.  If you have an
abstract way of proving or disproving that, please go ahead.  As far as I can
see, the way to go is to come up with use cases and see if they work.  If one
doesn't, that disproves the theorem.  If they do, it strengthens our
confidence that the theorem is correct.  But proving it seems impossible.

So indeed, browser plugins are not all we are talking about, but they are the
best example we can come up with.  Please name a use case where the party
which worries about the confinement (that is, the one that doesn't want
capabilities to get out) cannot be the parent.  If there is more than one
party who wants this, _and_ none of these parties trusts the others, then you
have a case.  And I (and Marcus) say it's probably something we don't want to
support. ;-)

> > > So: what Marcus calls "trivial confinement" is not confinement at all. I
> > > do not agree with what he proposes, but the policy that he proposes is
> > > not morally wrong. I *do* object very strongly to calling it
> > > confinement, because it is not confinement. What Marcus actually
> > > proposes is hierarchical exposure.
> > 
> > That too, but that's not the reason it's confinement.  It's confinement
> > because the child process cannot communicate with anyone, except with
> > explicit permission of the parent (in the form of a capability transfer).
> 
> It is also not confinement if the parent can read the child without the
> consent of the child. Therefore it is not confinement at all.

If the child doesn't trust the parent, then you have chosen the wrong parent
for your child.

Assume the constructor is just another process.  If A wants to start a client
B, but B needs capabilities that A doesn't have, A calls C (the constructor),
which starts B on A's behalf.  Now C, not A, is the parent of B.  And yes, B
can trust C not to inspect its code.

The difference from the constructor design pattern is that the confinement
check is no longer possible for A.  C knows that B is confined, because C is
the one starting it.  A will have to trust C, or not use the service.
Theoretically this is a limitation, but again, we have so far not seen a
single use case where it is a problem.
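
To make that concrete, here is a minimal sketch in plain Python.  Process and
Capability are invented names for illustration only, not real Hurd or L4
interfaces; the point is just to show who can inspect whom in the two
situations.

# Purely illustrative: plain Python objects standing in for processes and
# capabilities; none of these names are real Hurd or L4 interfaces.

class Capability:
    """An opaque token granting access to some service."""
    def __init__(self, name):
        self.name = name

class Process:
    """A process whose state is visible to its parent only."""
    def __init__(self, parent, capabilities):
        self.parent = parent                    # the instantiator
        self.capabilities = list(capabilities)  # only what the parent granted
        self.private_state = {}

    def inspect(self, requester):
        # Trivial confinement: the parent supplied the child's memory,
        # so the parent (and only the parent) may read the child's state.
        if requester is self.parent:
            return self.private_state
        raise PermissionError("only the parent may inspect this process")

# Trivial confinement: A is the parent, grants nothing, and can inspect.
A = Process(parent=None, capabilities=[Capability("network")])
plugin = Process(parent=A, capabilities=[])  # no capabilities given: confined
assert plugin.inspect(A) == {}               # A can check the child itself

# Constructor as just another process: C, not A, is the parent of B.
C = Process(parent=None, capabilities=[Capability("secret-service")])
B = Process(parent=C, capabilities=C.capabilities)  # caps that A doesn't have
try:
    B.inspect(A)               # A is not the parent, so A cannot perform the
except PermissionError:        # confinement check itself; it has to trust C,
    print("A must trust C")    # or not use the service.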

> > > Marcus proposes that any "parent" should have intrinsic access to the
> > > state of its "children". This property is necessarily recursive. It
> > > follows that the system administrator has universal access to all user
> > > state, and that "safe" backups are impossible.
> > 
> > Nonsense.  As you said yourself a few months ago, the administrator might
> > not have the right to touch everything.
> 
> In the purely hierarchical model that Marcus proposes, this property is
> not achieved. That is the problem that I am objecting to.

Of course it is.  You nicely cut out my comment that the kernel also has
access to all memory, so I'll say it again. ;-)  In your model, the kernel
has access to all memory in the system.  The administrator doesn't have the
right to change the kernel, so he cannot abuse this fact to get access.
There is no reason that this can't be true for other parts of the system as
well.

The administrator needs to create user sessions.  Fine.  But this can be done
by making a call to the system, so he doesn't himself become their parent.
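
To illustrate what I mean, here is another purely illustrative Python sketch
(SessionServer and UserSession are invented names, not actual Hurd
interfaces): the session-creating service, not the administrator, ends up
being the parent of the user session.

class UserSession:
    def __init__(self, parent):
        self.parent = parent
        self.private_memory = {}

    def read_memory(self, requester):
        # Only the parent, i.e. the trusted session service, may read this.
        if requester is self.parent:
            return self.private_memory
        raise PermissionError("only the parent may read this session")

class SessionServer:
    """Trusted system component, comparable to the kernel: the administrator
    may ask it to create sessions, but has no right to modify it."""
    def create_session(self):
        return UserSession(parent=self)

server = SessionServer()
admin = object()                   # the administrator is just another client

session = server.create_session()  # the admin triggers creation via a call...
try:
    session.read_memory(admin)     # ...but is not the parent, so no access
except PermissionError:
    print("the administrator cannot read the user's memory")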

> > > Further, it follows that cryptography is impractical, because there
> > > exists no location on the machine where a cryptographic key can be
> > > stored without exposure to the administrator.
> > > 
> > > That is: in Marcus's proposal, there is no possibility of privacy.
> > 
> > I believe I have disproven that statement.
> 
> Sorry. You have not.

You probably still think so, but I want to hear why. :-)

> > > > My position on the confined constructor design pattern, ie non-trivial
> > > > confinement, is NOT that "it supports DRM, therefore it should be
> > > > banned".  My position on the confined constructor pattern is: "I have
> > > > looked at ALL use cases that people[*] suggest for it, and find all of
> > > > them either morally objectionable, or, in the context of the Hurd,
> > > > replaceable by other mechanisms which don't require it." 
> > > 
> > > Excellent. Please propose an alternative mechanism -- ANY alternative
> > > mechanism -- in which it is possible for a user to store cryptography
> > > keys without fear of exposure. If we can solve this, then I am prepared
> > > to concede that we can store private data in general.
> > 
> > In general, keep the chain of parents short and trusted.
> 
> Since all processes are (ultimately) in some chain derived from
> processes that the administrator controls, no privacy against the
> administrator is possible.

No, he doesn't.  You seem to propose that the administrator should control
the top-level space bank or something.  He certainly shouldn't.  The
administrator is just another user in many respects, with some rights which
make his job possible.  That doesn't include the right to inspect all memory
in the system.  You agree with that.  The fact that the method you planned to
use for disallowing it doesn't work anymore doesn't mean we're just going to
forget about it.  We'll use another method to accomplish it.  This is a
trivial thing, in fact.

> > > We are discussing a very important, foundational point. I believe that
> > > this debate should be public, that it should be uncompromising, and that
> > > it should evolve over time. Your ideas are incomplete. So are mine. Let
> > > us start a Wiki page for this discussion that will allow us to evolve
> > > it. Such decisions NEED the light of day.
> > 
> > Personally, I prefer the mailing list for discussions.  It would be a very
> > good idea if the resulting conclusions are archived in a better way than
> > "somewhere in the list archives".  For that a wiki is useful.  But I
> > wouldn't want to need to poll web pages in order to see if someone said
> > something.
> 
> Yes. But the result needs to be edited and maintained as well.

Ok.  As long as I don't need to poll anything except my e-mail, I'm happy. :-D

> > > If I have a right to choice, it is a right to *stupid* choice.
> > 
> > Choice is not a right in all situations.
> 
> I agree. However, choice is a right in all situations where no
> *overwhelming* third party harm can be shown to the satisfaction of the
> consensus of the society.

No, it isn't.  Choice is wrong in situations where the people who choose are
not knowledgeable enough to understand what they're doing, or where they
can't actually use it for anything good.

Both of these make me think that we should not give people this choice.

And all this doesn't need some consensus in society.  We're not some
dictatorship forcing people to use our program!  We design things, and people
can choose to use them or not.  That's how things work.  There's no intrinsic
right that people have which forces us to give them any choice at all.

Also, of course I don't propose to force people to use the Hurd.  If they
know what they want, and they want non-trivial confinement, then they can try
to get it from somewhere else.  But as long as I haven't seen a single use
case that needs it and that I want to support, I'm not going to build it or
support building it.  I believe that not supporting it may make the world a
better place, and that's what I want to do.

> > > You propose to solve *your* long-term social objectives by undermining the
> > > social process of consensus.
> > 
> > What consensus?
> 
> Yes. That is the point. In the absence of social consensus it is immoral
> to impose *any* dogma on society in the absence of demonstrated harm to
> third parties.

You are thinking like a benevolent monopolist. ;-)  There's no problem at all
in making certain choices for people.  If they don't like it, they should
simply not use this product.  You sound as if that is impossible.  In other
words: I'm not imposing anything on society, I'm just not offering something.
As much as I think the gun analogy doesn't quite work, I do want to extend it
a bit: if I believe that guns are bad for society, but there is no societal
consensus about this, then that doesn't mean I must open a gun shop.  It does
mean that other people may do so.  But I can do what I think is good.

> > > If there is a better definition of evil, I do not know it.
> > 
> > I do.  Evil is when a person acts in a way that is against his or her own
> > moral values.
> 
> No. This is the second type of evil. The first type is when a person
> acts in a way that imposes their values on others without sufficient
> evidence of universal merit.

That doesn't fit my meaning of evil, and depending on the details, it may not
even be a bad thing at all.

If someone believes that what he does is good, then that is _by (my)
definition_ not evil.  Evil is intentionally doing morally objectionable
things.  Hmm, my dictionary isn't too clear about this either (both our
definitions seem to fit its description).  Anyway, I don't think discussing
definitions is that interesting, so I hope you understood my definition, I
think I understood yours, and let's leave it at that. :-)

Thanks,
Bas

-- 
I encourage people to send encrypted e-mail (see http://www.gnupg.org).
If you have problems reading my e-mail, use a better reader.
Please send the central message of e-mails as plain text
   in the message body, not as HTML and definitely not as MS Word.
Please do not use the MS Word format for attachments either.
For more information, see http://129.125.47.90/e-mail.html
