From: Jonathan S. Shapiro
Subject: Re: Design principles and ethics (was Re: Execute without read (was [...]))
Date: Sat, 29 Apr 2006 20:09:09 -0400

On Sat, 2006-04-29 at 11:11 +0200, Marcus Brinkmann wrote:
> At Sat, 29 Apr 2006 01:22:27 -0400,
> "Jonathan S. Shapiro" <address@hidden> wrote:
> > Perhaps I have misunderstood your position on confinement. If so, I
> > invite clarification.
> 
> You did.

Marcus later wrote:

> Going back to confinement, let me state it very clearly, once and for
> all, because you keep getting it wrong:
> 
>   * * *   Every process in the Hurd will be confined.   * * *
> 
> It will be confined because it was created by its parent, so it meets
> the definition of confinement in the most trivial sense.

This is complete nonsense. The confinement property states:

  A confined application can only transmit data through authorized
  channels.

However, any reading of the original paper makes clear that the
definition of confinement occurs in a context:

  - There is a process that is attempting to transmit.
  - The process is free from external coercion in regard to
    transmission. That is: transmission requires both permission
    **and intent**.
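
To make this concrete, here is a minimal sketch, in C, of the kind of
structural check a constructor-style design can perform. Everything in
it is a hypothetical toy model -- these are not the Hurd's, EROS's, or
any real system's types or interfaces. The idea: a program image is
confined with respect to a set of authorized capabilities only if every
capability it holds is either in that set, or names a constructor whose
own yield is recursively confined to the same set.

  /* Toy model; all names hypothetical. Cycle detection omitted. */
  #include <stddef.h>

  struct program;

  struct cap {
      int is_constructor;          /* capability names a constructor */
      const struct program *yield; /* image that constructor produces */
  };

  struct program {
      const struct cap *caps;      /* initial capabilities of the image */
      size_t ncaps;
  };

  static int authorized(const struct cap *c,
                        const struct cap *const *auth, size_t nauth)
  {
      for (size_t i = 0; i < nauth; i++)
          if (c == auth[i])
              return 1;
      return 0;
  }

  /* Nonzero iff IMG can transmit only through the authorized set. */
  int is_confined(const struct program *img,
                  const struct cap *const *auth, size_t nauth)
  {
      for (size_t i = 0; i < img->ncaps; i++) {
          const struct cap *c = &img->caps[i];
          if (authorized(c, auth, nauth))
              continue;
          /* An unauthorized capability is tolerable only if it names
             a constructor whose own yield is confined to the same set. */
          if (c->is_constructor && c->yield
              && is_confined(c->yield, auth, nauth))
              continue;
          return 0;   /* a hole: an unauthorized channel exists */
      }
      return 1;
  }

Note that a check of this kind covers only the *permission* half of the
definition; the *intent* half is precisely what parental spying
destroys, as the next paragraph explains.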

What Marcus describes is a situation where (a) the parent establishes
the authorized channels and (b) the parent can spy on the child's state.
The second provision violates the requirement for intent.

So: what Marcus calls "trivial confinement" is not confinement at all. I
do not agree with what he proposes, but the policy itself is not
morally wrong. I *do* object very strongly to calling it confinement,
because it is not confinement. What Marcus actually proposes is
hierarchical exposure.

Let me now go back to our discussion of system administrators and
backup, which will illustrate why hierarchical exposure is insufficient.

Marcus proposes that any "parent" should have intrinsic access to the
state of its "children". This property is necessarily recursive. It
follows that the system administrator has universal access to all user
state, and that "safe" backups are impossible. Further, it follows that
cryptography is impractical, because there exists no location on the
machine where a cryptographic key can be stored without exposure to the
administrator.
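
The transitivity is easy to see even in a toy model. The following
sketch (hypothetical types; not any real Hurd interface) shows that if
every parent can read its children's state, then a simple walk from the
root of the process tree -- i.e., by the administrator -- reaches every
key held anywhere on the machine:

  #include <stddef.h>

  struct proc {
      struct proc *const *children;
      size_t nchildren;
      const unsigned char *key;  /* e.g. a private key held in memory */
  };

  /* Under hierarchical exposure this walk visits every process:
     no descendant can hold state -- keys included -- that the
     root cannot read. */
  const unsigned char *find_any_key(const struct proc *p)
  {
      if (p->key)
          return p->key;
      for (size_t i = 0; i < p->nchildren; i++) {
          const unsigned char *k = find_any_key(p->children[i]);
          if (k)
              return k;
      }
      return NULL;
  }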

That is: in Marcus's proposal, there is no possibility of privacy.

[Picture of silly proposal plummeting to the ground, leaving large
cartoon crater.]

> My position on the confined constructor design pattern, i.e. non-trivial
> confinement, is NOT that "it supports DRM, therefore it should be
> banned".  My position on the confined constructor pattern is: "I have
> looked at ALL use cases that people[*] suggest for it, and find all of
> them either morally objectionable, or, in the context of the Hurd,
> replaceable by other mechanisms which don't require it."

Excellent. Please propose an alternative mechanism -- ANY alternative
mechanism -- in which it is possible for a user to store cryptographic
keys without fear of exposure. If we can solve this, then I am prepared
to concede that we can store private data in general.

However, I do not believe that this can be solved in principle without
true confinement (as opposed to trivial non-confinement). There is a
fundamental bootstrapping problem.

> I have some theories about _why_ there are no use cases I find
> legitimate, but they are still somewhat immature.  It has to do with
> questions of ownership and control, which are intrinsically political,
> non-technical subject matters.  I will give a hint at the end of this
> mail.

If you wish to do so, I would be sincerely interested to test these
ideas, in the context of understanding that they are early ideas subject
to revision and enhancement. I propose that this should be done with a
new subject heading.

> > If I really wanted to ban something, I would ban software. Software has
> > been responsible for *far* greater harm than DRM. Think about it.
> 
> Which just proves that your metric (or let me say your insinuated
> metric) of how to decide what to ban and why is completely false.
> Luckily, it is not my metric.

Mainly, it proves that people advocating dogma never have a sense of
humor....

> > I believe that rejecting confinement as a basic building block is a
> > profoundly unethical decision.
> 
> Interesting.  Because the exact line of argument leads me to
> vehemently reject "non-trivial confinement".

There is some risk in what I am about to propose. I believe that it is
outweighed by the benefit.

We are discussing a very important, foundational point. I believe that
this debate should be public, that it should be uncompromising, and that
it should evolve over time. Your ideas are incomplete. So are mine. Let
us start a Wiki page for this discussion that will allow us to evolve
it. Such decisions NEED the light of day.

> Non-trivial confinement is a grave security threat.  It compromises
> the user's autonomy, or what I called the "principles of user freedom":
> The ability of the user to use, inspect, copy, modify and distribute
> the content of the resources that are attributed to him.  It wrests
> control over the resources out of the user's hands, and thus amplifies
> existing imbalances of power.

Nonsense. In true confinement, the user remains in a position to say "I
elect not to run an undisclosed application". Please explain in what
sense this constitutes a loss of control or a loss of security.

No. Your real concern here is that the user will not *choose* to
*exercise* this control. Your objection, fundamentally, is that users
will not accept the dogma that you propose, and you therefore plan to
preempt their right to choice. Ultimately, you justify this on the basis
that these short-term decisions imply bad long-term outcomes, but you
neglect the point that the users have a *right* to choose those bad
long-term outcomes, even when they do not understand them.

This is not defense of morality or ethics. It is a classic example of
unethical behavior. If I have a right to choice, it is a right to
*stupid* choice. The ethics you propose are the ethics of Mao and
Stalin, not the ethics of reasonable adults. You propose to solve *your*
long-term social objectives by undermining the social process of
consensus.

If there is a better definition of evil, I do not know it. The MORAL
behavior would be to escalate the issue into public awareness, and seek
to change the public decision process OPENLY.



shap