From: Marcus Brinkmann
Subject: Re: Separate trusted computing designs
Date: Thu, 31 Aug 2006 16:07:34 +0200
User-agent: Wanderlust/2.14.0 (Africa) SEMI/1.14.6 (Maruoka) FLIM/1.14.7 (Sanjō) APEL/10.6 Emacs/21.4 (i486-pc-linux-gnu) MULE/5.0 (SAKAKI)

At Wed, 30 Aug 2006 19:44:58 +0200,
Christian Stüble <address@hidden> wrote:
> > > > If they are open issues, where does your research group's confidence
> > > > come from that they not only can be solved, but in fact are solved
> > > > in your design?  From the EMSCB home page (under "Benefits"):
> > >
> > > I am sure we cannot solve all problems. However, some of the problems
> > > have already been solved (e.g., one design requirement is to ensure
> > > that applications cannot violate the user's security policy, as
> > > described by Ross). But I don't want to discuss one of our projects,
> > > nor the group itself; only the given privacy use case.
> >
> > If you know that you cannot solve all problems, why does your group
> > claim repeatedly on your web page that you have, indeed, found a
> > solution that "guarantees a balance among interests"?
> From the main web pages:
> 
> "PERSEUS is a trustworthy computing framework that aims at establishing an 
> open security architecture..."
> 
> "European Multilaterally Secure Computing Base (EMSCB) aims at developing a 
> trustworthy computing platform with open standards that.."
> 
> No, we do not claim to provide a perfectly secure system, and we do not
> claim to ultimately solve all existing problems. But I do not want to
> continue a discussion here about the text of the group and project web
> pages. Many different people with different interests decide which text
> is used on a web page, and this is IMO completely OT.

The reason I insisted on an explanation is that I want you to open up
to the type of criticism that I am actually making.

So far, you have said that you see interesting use cases for the
technology and just want to improve its implementation.  So far, I
have said that I see no interesting use cases, but rather fundamental
flaws in the ideology behind the technology.  These intellectual goals
are not very compatible.  As long as we each focus on our own special
interest, we are not making much progress.

Part of my criticism is an explanation of why achieving a "balance" in
a "trusted computing" application is *impossible*.  Thus, I consider it
important and interesting to discuss why (or whether) you think that
such a balance is at least achievable.  It is not just a single quote
on a web page: It is a fundamental ideological assumption behind the
technology that information can and should be proprietarized when
shared with other people.

To many of the questions I asked, you responded that you don't want to
discuss them, or at least not on this forum.  That's fine, but it
leaves us with very little to discuss.

As the projects we are talking about are publicly funded research
projects at a university, I would expect that any extraordinary claims
on your side have at least a referenceable justification, and that
challenging the fundamental assumptions is not unheard of, but in fact
expected and welcomed, even sought after.

> > > > I asked for use cases that have a clear benefit for the public
> > > > as a whole or the free software community.
> > >
> > > I personally would like to be able to enforce my privacy rules even on
> > > platforms that have another owner.
> >
> > If you can enforce a property about a system, then it is not owned
> > exclusively by another party.  That's a contradiction in terms.
> Maybe this depends on your definition of "owned", or maybe you have not
> read my text carefully. I said that I want to enforce "my rules", not
> "my properties". If the platform has certain properties that I accept
> (and that imply that the platform owner cannot bypass the security
> mechanisms enforced by my agent), my agent will execute. Otherwise it
> will not.

My definition of ownership is quite narrow: It means the exclusive
right to possess, use and destroy something.  In the proposed technical
measures to implement such mechanisms as you describe, the other party
is not in complete possession and control of the computer: Rather,
there is a chip in the computer whose content they cannot read out.
Thus, this chip is not part of their property.  How much this affects
the rest of the computer depends on whether and how the chip is used.
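
To make this concrete, here is a minimal sketch of such a chip.  It is
a toy model in Python, not any real TPM interface: the names
SealedChip, seal and unseal are hypothetical, and the XOR step merely
stands in for an authenticated cipher.  The point is that the
interface offers no way to read out the internal key, so the possessor
of the machine can use the chip's effects without ever inspecting it.

import hashlib
import hmac
import os

class SealedChip:
    # Toy model of a TC chip (hypothetical, not a real TPM API):
    # it holds an internal key that no operation exposes.

    def __init__(self):
        self._internal_key = os.urandom(32)   # never leaves the chip

    def _state_key(self, platform_state):
        # Derive a key bound to both the hidden key and the measured
        # platform state.
        return hmac.new(self._internal_key, platform_state,
                        hashlib.sha256).digest()

    def seal(self, secret, platform_state):
        # XOR keeps the toy self-contained; a real chip would use an
        # authenticated cipher.  The secret must fit in one key.
        key = self._state_key(platform_state)
        assert len(secret) <= len(key)
        return bytes(s ^ k for s, k in zip(secret, key))

    def unseal(self, blob, platform_state):
        # Succeeds only if the platform is in the same measured state
        # it was sealed against.
        key = self._state_key(platform_state)
        return bytes(b ^ k for b, k in zip(blob, key))

chip = SealedChip()
trusted = hashlib.sha256(b"approved software stack").digest()
blob = chip.seal(b"remote party's secret", trusted)

# The machine's possessor holds the chip and the blob, but cannot
# extract the internal key, and a modified software stack yields
# garbage:
tampered = hashlib.sha256(b"owner-modified stack").digest()
assert chip.unseal(blob, trusted) == b"remote party's secret"
assert chip.unseal(blob, tampered) != b"remote party's secret"

Whether the rest of the computer is still "owned" then reduces to how
much of consequence is keyed to that unreadable state.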

> > What you can do is to engage in a contract with somebody else, where
> > this other party will, for the purpose of the contract (ie, the
> > implementation of a common will), alienate his ownership of the
> > machine so that it can be used for the duration and purpose of the
> > contract.  The contract may have provisions that guarantee your
> > privacy for the use of it.
> >
> > But, the crucial issue is that for the duration the contract is
> > engaged under such terms, the other party will *not* be the owner of
> > the machine.
> Is the owner of a house not the owner because s/he is not allowed to
> open the electric meter? If you sign the contract with the power
> supplier, you accept not to open it, and it becomes a part of your
> house. Are you not the house owner any more? Sorry, but I do not
> understand why a platform owner alienates his ownership by accepting a
> contract not to access the internal state of an application.

You are definitely not the owner of your house, at least not in
Germany.  You do not even need a power meter for that: In Germany,
there exist laws that regulate how you can modify your house, what
types of extensions you are allowed to build, etc, and you need
permits to do so.  This is because there is an acknowledged public
interest in the safety and appearance of your structural
modifications.  And this is only one small example; there are many
other reasons why your house is not owned by you.

You may have some (even extensive) property rights to your house, but
you do not have the exclusive right to possess, use and destroy it.
The other rights are contracted away to the public.  In fact, you were
only allowed to build your house in the first place because the public
allowed you to do so.

Of course, with such a narrow definition of ownership, we do not own
very much, as any of our property is subject to outside interference
in the case of emergencies etc, even our own body.  However, it is
useful for the sake of discussion to simplify away some of the more
remote public rights, to make clearer where the major parts of control
come from.  But this is my point: The security measures contained in
"trusted computing" are overreaching.  Even our very dearest material
property is not owned as strongly by us as is proposed for the bits
and bytes in a remote computer subjected to "trusted computing"
policies.

I like the power meter example, by the way.  The analogy to trusted
computing is that the whole interior of your house is cut into many
small slices and blocks, and that for each piece of furniture, each
painting on the wall, each book on the shelf, and every recipe in the
kitchen you have to negotiate a contract with a provider, and these
contracts contain things like having to pay a buck every time you look
at the painting for longer than 10 seconds, or every time you open the
cupboard and take out a plate.  In other words: You will still own the
bricks the house is made of, but everything inside it will be owned by
somebody else.

> > > > > If there are two comparable open operating systems - one
> > > > > providing these features and one that does not - I would select
> > > > > the one that does. I do not want to discuss the opinion of the
> > > > > government or the industry. And I don't want to discuss whether
> > > > > people are intelligent enough to use privacy-protecting features
> > > > > or not. If other people do not want to use them, they don't have
> > > > > to. My requirement is that they have the chance to decide
> > > > > (explicitly, or by defining, or using a predefined, privacy
> > > > > policy enforced by the system).
> > > >
> > > > I am always impressed how easily some fall for the fallacy that
> > > > the use of this technology is voluntary for the people.  It is
> > > > not.  First, the use of the technology will be required to access
> > > > the content.  And people will need to access the content to be
> > > > able to participate in our culture and society.  All the major
> > > > cultural distribution channels are completely owned by the big
> > > > industry, exactly because this allows these industries to keep a
> > > > grip-hold on our culture.  There is the option of popular struggle
> > > > against this, but it will require a huge effort, and success is by
> > > > no means guaranteed.
> > >
> > > I did not talk about TC in general, but about the "privacy-protecting
> > > agent".
> >
> > I am not sure what you mean by that term.  The crucial point here is
> > that TC removes the choice from the people which software to run.
> I never said that I think users will have a free choice whether to use
> TC technology or not. Different circumstances may force them to use it,
> e.g., their employer, or the fact that they prefer an operating system
> that does not allow disabling TC support.
> 
> I suggested a use case that uses TC in a meaningful sense (at least in
> my opinion), and as a response people are asking me whether users will
> be able to use this technology. My statement was that I would like to
> have such a system, and that I am currently not interested in the
> opinions of the industry or the government, or in whether other people
> need this feature.

I have trouble following you here.  If nobody else uses this
technology, how will _you_ be able to use it?  The technology only
makes sense if more than one party takes part in it.

This is, by the way, the reason that "trusted computing" technology is
inseparably tied to social issues and politics: It is a technology
that affects the relationships of power between people.  Its very
existence is in the domain of contracts.  Thus, every question related
to "trusted computing" is a social question.  In comparison, the
technical issues pale in relevance.

[...]

> > The views of the FSF on DRM and TC are well-published, and easily
> > available.  For example, search for "TiVo-ization".
> >
> > What is incompatible with the free software principles is exactly
> > this: I am only free to run the software that I want to run, with my
> > modifications, if the hardware obeys my command.  If the hardware puts
> > somebody else's security policy over mine, I lose my freedom to run
> > modified versions of the software.  This is why any attempt to enforce
> > the security policy of the author or distributor of a free software
> > work is in direct conflict with the rights given by the free software
> > license to the recipient of the work.
> What does the view say about a user who freely accepts a policy? I
> _never_ talked about a system that "puts somebody else's security
> policy over mine".

First, again, the user never "freely" accepts a security policy.  The
only reason to accept a security policy is to get at the information
subjected to it.  The acceptance of a security policy is always a
means to a goal, not a goal in itself.

Furthermore, any mode of distribution of free software that
effectively restricts the way in which the software can be
redistributed, modified or executed violates the free software
principles.  It does not matter whether the user accepts the policy or
not: The author of the software does not accept the provider's policy,
and that's where the buck stops (assuming that the intent of the FSF
as I understand it is implemented in the final version of the GPLv3).

As for how to define the distinctions between the various use cases:
The GPL is not a technical document, but a legal document.  Thus, it
responds to the various real-world challenges that are considered a
threat to the free software assets of the FSF.  For example, a couple
of years ago, web services were considered a threat in a similar way
that DRM is considered a threat now.  Insofar you may be right that,
from a GPL point of view, examples such as a hosted service and DRM
are not easily distinguishable.  In practice, however, people have not
built such hosted systems and marketed them at any significant scale
in a way that posed a threat to the free software principles.  This is
because there is usually no cryptographic coupling between the
software and the data it processes in such hosted services.  If there
were, we would probably see a reaction to that.  Well, with DRM there
is such a coupling, and, surprise, there is a reaction.
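
To illustrate what "cryptographic coupling" means here, a minimal
sketch under stated assumptions: the content key is derived from a
measurement of the exact program binary, so a modified build, however
legitimate under a free license, can no longer decrypt the work.  The
names and the key derivation are hypothetical, and the XOR step again
just keeps the sketch self-contained.

import hashlib

def data_key(program_image):
    # Hypothetical coupling: the content key is bound to a hash
    # ("measurement") of the exact software allowed to process it.
    return hashlib.sha256(b"content-key:" + program_image).digest()

def crypt(data, key):
    # Toy XOR stream; it is its own inverse.
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

approved = b"player v1.0, vendor-approved build"
modified = b"player v1.0, user-modified build"

ciphertext = crypt(b"the protected work", data_key(approved))

# The approved binary recovers the work; the modified one does not,
# even though the modification may be permitted by the license:
assert crypt(ciphertext, data_key(approved)) == b"the protected work"
assert crypt(ciphertext, data_key(modified)) != b"the protected work"

A hosted service without such a coupling leaves the data usable with
whatever software the user runs; it is the derivation of the key from
the measured binary that turns a policy into a technical restriction.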

I am not sure how we ended up here, but let me stress again that for
me the problems with "trusted computing" are far worse than its
inherent conflict with the free software principles in some of its
applications.  I have given sufficient examples and rationale
elsewhere.

Thanks,
Marcus