l4-hurd

Re: The gun analogy (Was: Design Principles)


From: Marcus Brinkmann
Subject: Re: The gun analogy (Was: Design Principles)
Date: Sun, 30 Apr 2006 23:55:12 +0200
User-agent: Wanderlust/2.14.0 (Africa) SEMI/1.14.6 (Maruoka) FLIM/1.14.7 (Sanjō) APEL/10.6 Emacs/21.4 (i486-pc-linux-gnu) MULE/5.0 (SAKAKI)

At Sun, 30 Apr 2006 14:58:28 -0400,
"Jonathan S. Shapiro" <address@hidden> wrote:
> Fundamentally, however, the point that we disagree on appears to be
> this:
> 
>   You believe that it is proper behavior to lecture others on why
>   they should not use "immoral" devices (technical means) in order
>   to solve legitimate problems.
>
>   I believe that this behavior is merely invasive, rude, and foolish.
>   The obligation of the truly moral actor is to find or build a more
>   appropriate tool.

In these short sentences, you are wrong on just about every count that
I can imagine.

First, what you describe is not what I believe.  The question of
whether an immoral device should be used to solve a legitimate problem
is complex, and the answer must take the circumstances of the
situation into account.

Furthermore, your statement carries two implications, both of which
are wrong.  First, you speak of ""immoral" devices".  However, as I
said before (in the context of DRM), I do not believe that tools have
intrinsic moral value.  Second, you speak of legitimate problems.  But
I believe there are "problems" which are not legitimate.

I don't know what "true" morality is, so I can't really say anything
about the obligations of "truly moral actors".  For me personally,
however, it is clear that if somebody asks me to solve an illegitimate
problem, the right answer is to refuse.

To put this into the context of the actual discussion taking place,
the right question to ask is whether non-trivial confinement (or
trusted computing) solves a legitimate problem.  I have looked at all
the use cases I could find, and none of them posed a problem that was
legitimate (to me).  I have also looked at the nature of the
non-trivial confinement mechanism, and at the nature of trusted
computing, and found something in the nature of these tools that makes
me doubt that any legitimate problem exists which they solve and which
cannot adequately be addressed by a different, less troubling (to me)
tool.  Such a suspicion cannot, at first, count as conclusive proof.
It is, however, a strong indication that it is not safe to bet on the
speculation that legitimate use cases will simply pop into existence.

What really baffles me is that whenever you try to recount my position
on this issue, you get it terribly wrong.  This is hard to understand,
because you give no indication that you have trouble understanding any
of my explanations.  I ask you, again, not to speak for me.  It is
tedious to move forward from such a misrepresentation, and the
discussion will be much easier to follow if we work it out
incrementally.

To make this work in practice, I will refrain from explaining and
commenting on my position based on speculation about what needs
explaining.  At some point or another in the discussion, I have said
almost everything I wanted to say about the moral dimensions of the
topic, and I think the right step from here is to answer specific
questions only.  This will also help to reduce repetition.

Thanks,
Marcus