From: Marcus Brinkmann
Subject: Re: Potential use case for opaque space bank: domain factored network stack
Date: Sun, 07 Jan 2007 05:10:15 +0100
User-agent: Wanderlust/2.14.0 (Africa) SEMI/1.14.6 (Maruoka) FLIM/1.14.7 (Sanjō) APEL/10.6 Emacs/21.4 (i486-pc-linux-gnu) MULE/5.0 (SAKAKI)

At Sun, 7 Jan 2007 03:02:23 +0100,
Pierre THIERRY <address@hidden> wrote:
> 
> Marcus Brinkmann wrote on 07/01/2007 at 02:35:
> > The right question to ask is if there is a significant difference in
> > the harm that can result from such an arrangement if I am out of
> > control compared to when I retain control.  I believe that to be the
> > case.
> 
> I may not remember the past debate well, but have you already given
> facts supporting that belief? I'm not sure that opaque memory, as we
> have discussed it so far, can do any harm per se.

I don't claim that it can do harm per se.  I claim that it can do harm
if used as a tool for exercising inappropriate power over somebody
else.  I certainly have given many arguments supporting that; see my
mail "Part 1: Ownership and contracts", which is essentially about
these issues.

> > No, no, I agree that within the system there is no permanent, or
> > more specifically irreversible, change in your arrangement.  The
> > change happens outside the system, involving the actors, that is,
> > real humans like you and me.
> 
> Well, then what is the irreversible change involving actors that can
> occur on a Hurd with opaque memory but couldn't on a Hurd without
> opaque memory?

You mean hypothetically?  That is more easily answered when looking at
a specific proposal.  The canonical example for opaque storage used to
be the construction of (possibly confined) isolated program instances.
In that case the harm is that I am using software which I cannot
inspect or modify, and thus I am making myself dependent on
proprietary technology.  This gives control over my computational
activities to somebody else, who can use this control to extort money
from me for services and complementary products, etc., and who can
threaten the very basis of my activities by denying these services.
This strategy is commonly known as "lock-in".  Quantifying the
economic harm caused by it is a new discipline, and I don't know of
careful studies, but there are many examples where this strategy has
worked out quite well (of course only for some of the involved
parties).

I wish I had known about this one earlier: Ross Anderson seems to have
done studies of the economics of "trusted computing".  I have not yet
read the paper below in full, but what I've seen mirrors what I have
said on the topic before, expressed in a better way and probably with
more authority.
http://www.cl.cam.ac.uk/~rja14/#Econ
http://www.cl.cam.ac.uk/~rja14/Papers/tcpa.pdf

> > This assumes that revocation is feasible, which may not always be the
> > case, depending on what the relationship of power is between the
> > actors and the application.  Another point to consider.
> 
> In what case wouldn't revocation be possible? I think it's up to the
> designers of the space bank to guarantee that revocation is always
> possible. And I don't see any sensible reason to do otherwise.

I did not say "possible", but "feasible".  Presumably you are using a
service.  The question is whether the switching cost of giving up that
service is higher than the cost it inflicts on you.  Unfortunately, in
many important cases the answer is yes, not because the harm is small,
but because the switching cost is so damn high.  This is basic
economics.

Pierre, I have the impression that you are focusing exclusively on
mechanisms and technology.  If that is true, we will keep talking past
each other, because I am focusing on policies and people.

Thanks,
Marcus