
Re: Separate trusted computing designs


From: Christian Stüble
Subject: Re: Separate trusted computing designs
Date: Wed, 30 Aug 2006 19:44:58 +0200
User-agent: KMail/1.9.1

Hi,

> > IMO not. Maybe this is an influence of my PhD advisor(s), but I would try
> > to _prove_ that the technology is fundamentally flawed. BTW, the abstract
> > security properties it provides are IMO useful.
>
> Well, I believe that my arguments, as disclosed in the email I
> referenced, come as close to a proof as you can get in a discussion
> about social issues, where not much is known with scientific
> confidence.
[..] and
> The policy that you are suggesting is, in my opinion, quite dangerous.
> Before a technology is deployed we should try to prove that the
> technology is not fundamentally flawed.  I do not believe that proof
> that a technology is fundamentally flawed should be the requirement by
> which we prevent deployment; reasonable suspicion is sufficient.
I agree that this would be the better way. Unfortunately, the TCG did not ask 
me whether they should publish the spec or create TPMs. They claim that
the technology is meaningful. From my position, the only chance I have is
to prove that it is fundamentally flawed and/or to suggest better solutions.

Until now, I have not seen an alternative solution providing the security 
properties needed to realize the same use cases. And the TCG spec has been 
public for about six years now...

> > > > You are asking a lot of questions that I cannot answer, because
> > > > they are the well-known "open issues". The challenge is to be able
> > > > to answer them sometimes...
> > >
> > > If they are open issues, where does your research group's confidence
> > > come from that they not only can be solved, but in fact are solved in
> > > your design?  From the EMSCB home page (under "Benefits"):
> >
> > I am sure we cannot solve all problems. However, some of the problems
> > have already been solved (e.g., one design requirement is to ensure that
> > applications cannot violate the user's security policy, as described by
> > Ross). But I don't want to discuss one of our projects, nor the group
> > itself. Only the given privacy use case.
>
> If you know that you cannot solve all problems, why does your group
> claim repeatedly on your web page that you have, indeed, found a
> solution that "guarantees a balance among interests"?
From the main web pages:

"PERSEUS is a trustworthy computing framework that aims at establishing an 
open security architecture..."

"European Multilaterally Secure Computing Base (EMSCB) aims at developing a 
trustworthy computing platform with open standards that.."

No, we do not claim to provide a perfectly secure system, and we do not claim 
to ultimately solve all existing problems. But I do not want to continue a 
discussion about the text of group and project web pages here. Many different 
people with different interests decide which text will be used on a web page,
and this is IMO completely OT.

>
> > > I asked for use cases that have a clear benefit for the public as a
> > > whole or the free software community.
> >
> > I personally would like to be able to enforce my privacy rules even on
> > platforms that have another owner.
>
> If you can enforce a property about a system, then it is not owned
> exclusively by another party.  That's a contradiction in terms.
Maybe this depends on your definition of "owned", or maybe you have not read 
my text carefully. I said that I want to enforce "my rules", not "my 
properties". If the platform has certain properties that I accept (and that 
imply that the platform owner cannot bypass security mechanisms enforced by 
my agent), my agent will execute. Otherwise it will not.

If a program requires a specific amount of memory (or a specific VGA adapter) 
for execution, does this mean that the platform is not owned exclusively? In 
my scenario, the owner can define any security policy he or she would like. I 
will never be able to change this technically. But I can decide whether my 
agent will be executed in this environment or not.
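
To make this concrete, here is a minimal sketch (Python, with purely
hypothetical property names; this is not our actual implementation) of an
agent that checks attested platform properties against its own policy
before deciding to run:

    import sys

    # Properties the platform attests to. In practice these would be
    # derived from TPM-based remote attestation; here they are just a
    # hypothetical dictionary.
    attested_properties = {
        "gui_enforces_isolation": True,
        "owner_can_inspect_agent_state": False,
    }

    # The agent's policy: refuse to run unless the platform guarantees
    # that the owner cannot bypass the agent's security mechanisms.
    def acceptable(props):
        return (props.get("gui_enforces_isolation", False)
                and not props.get("owner_can_inspect_agent_state", True))

    if not acceptable(attested_properties):
        # The owner keeps full control of the platform; the agent
        # merely declines to execute in this environment.
        sys.exit("platform properties rejected, agent will not run")

    print("platform accepted, agent starts")

The owner's policy always wins on the owner's machine; the only thing the
agent controls is its own decision to run there.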

>
> What you can do is to engage in a contract with somebody else, where
> this other party will, for the purpose of the contract (ie, the
> implementation of a common will), alienate his ownership of the
> machine so that it can be used for the duration and purpose of the
> contract.  The contract may have provisions that guarantee your
> privacy for the use of it.
>
> But, the crucial issue is that for the duration the contract is
> engaged under such terms, the other party will *not* be the owner of
> the machine.
Is the owner of a house not the owner because s/he is not allowed to
open the electric meter? If you sign the contract with the power supplier,
you accept not to open it, and it becomes a part of your house. Are you
then no longer the owner of the house? Sorry, but I do not understand why a 
platform owner alienates his ownership by accepting a contract not to
access the internal state of an application.

> > > > If there are two comparable open operating systems - one providing
> > > > these features and one that does not, I would select the one that
> > > > does. I do not want to discuss the opinion of the government or the
> > > > industry. And I don't want to discuss whether people are intelligent
> > > > enough to use privacy-protecting features or not. If other people do
> > > > not want to use them, they don't have to. My requirement is that
> > > > they have the chance to decide (explicitly or by defining, or using
> > > > a predefined, privacy policy enforced by the system).
> > >
> > > I am always impressed how easily some fall for the fallacy that the
> > > use of this technology is voluntary for the people.  It is not.  First,
> > > the use of the technology will be required to access the content.  And
> > > people will need to access the content, to be able to participate in
> > > our culture and society.  All the major cultural distribution channels
> > > are completely owned by the big industry, exactly because this allows
> > > these industries to have a grip-hold over our culture.  There is an
> > > option for popular struggle against this, but it will require a huge
> > > effort, and success is by no means guaranteed.
> >
> > I did not talk about TC in general, but about the "privacy-protecting
> > agent".
>
> I am not sure what you mean by that term.  The crucial point here is
> that TC removes the choice from the people which software to run.
I never said that I think users will have a free choice whether to use TC 
technology or not. Different circumstances may force them to use it,
e.g., their employer, or the fact that they prefer an operating system that 
does not allow disabling TC support.

I suggested a use case that uses TC in a meaningful way (at least in my 
opinion), and as a response people are asking me whether users will be able 
to use this technology. My statement was that I would like to have such a 
system and that I am currently not interested in the opinions of the industry 
or the government, or in whether other people need this feature.


> > > In the end, this technology, if it succeeds, will be pushed down
> > > people's throat.  Everybody seems to know and admit this except the
> > > "intelligent researchers" (well, and the marketing departments of the
> > > big corporations).
> >
> > Do you think that the open-source community will implement such features?
>
> They may, either because they support them or because there is a
> tactical reason to do so.  I am not part of the open-source community,
> I am part of the free software community, and there the only reason to
> support such features would be tactical to not give ground back to
> proprietary software vendors.  However, it seems that the free
> software community forcefully rejects this technology, and implements
> various means to protect itself from these developments.  The
> Defective By Design movement does political activism against it, and
> the GPL v3 will contain provisions that prohibit the use of free
> software on restricted systems.
I have been following the discussion about GPLv3 for a long time now and I am 
really interested in the results. But no, I do not want to continue this 
discussion here...

> > Do you expect that nobody will use open-source, if the industry will
> > implement such features?
>
> I fully expect them to do so.  In fact, they are already doing it.
> This is why the GPLv3 will contain provisions against it.
>
> > Do you expect that the european governments will use software
> > violating privacy laws if there is a better and secure alternative?
>
> Privacy laws are not violated by software, they are violated by people.
>
> The decision which software to use for any given project will
> (hopefully) be guided by many factors, including questions of
> protection, but also including questions of access to data.
>
> > > > This is (except for the elementary security properties provided by
> > > > the underlying virtualization layer, e.g., a microkernel) an
> > > > implementation detail of the appropriate service. There may be
> > > > implementations enforcing strong isolation between compartments and
> > > > others that do not. That's the basic idea behind our high-level
> > > > design for providing multilateral security: the system enforces the
> > > > user-defined security policy with one exception: applications can
> > > > decide themselves whether they want to continue execution based on
> > > > the (integrity) information they get (e.g., whether the GUI enforces
> > > > isolation or not). But this requires that users cannot access the
> > > > applications' internal state.
> > >
> > > That's incompatible with my ideas on user freedom and on protecting
> > > the user from the malicious influences of applications.
> >
> > I know. But this is IMO a basic requirement to be able to provide some
> > kind of multilateral security. A negotiation of policies 'before' the
> > application is executed.
>
> It's not a requirement to provide multilateral security, it is only a
> requirement for an attempt to enforce multilateral security by
> technological means.  Issues of multilateral security have existed since
> the first time people engaged in contracts with each other.
Yes.

>
> The problem with negotiation of policies is that balanced policies as
> they exist in our society are not representable in a computer, 
Of course not. And nobody wants to replace the judge with a computer.
But if rights can be enforced technically, I prefer this solution over the
good will of the software vendor or the judges. Moreover, I think that
we often have to prove that a better solution exists to convince judges
to "ban" the not-so-good solutions. In the real world, even the bad 
solutions solve some problems.


> and 
> that the distribution of power today will often do away with
> negotiation altogether.
>
> I think it is very important to understand what "balanced policies"
> means in our society.  For example, if an employer asks in a job
> interview if the job applicant is pregnant or wants a child in the
> near future, the applicant is allowed to consciously lie.  
Good example. In one of our papers we suggested encrypting the PCR values,
and/or allowing users to lie about their values, and allowing decryption
only in the context of a conflict, e.g., before a judge.
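
A minimal sketch of that idea (Python with the pyca/cryptography library;
the key handling is simplified and the values are made up): the PCR values
are encrypted under the public key of a trusted third party, so a verifier
learns nothing about the real configuration unless a conflict arises and
the third party decrypts.

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    # Key pair of the trusted third party (the "judge"). In a real
    # deployment the platform would only hold the public key.
    judge_key = rsa.generate_private_key(public_exponent=65537,
                                         key_size=2048)

    # Stand-in for PCR values reported by the TPM (TPM 1.2 uses 20-byte
    # SHA-1 digests per register); two all-zero registers here.
    pcr_values = bytes(20) * 2

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # The user hands out only the encrypted blob, which reveals nothing
    # about the actual configuration - the user may even lie about it.
    blob = judge_key.public_key().encrypt(pcr_values, oaep)

    # Only in the context of a conflict does the judge decrypt:
    assert judge_key.decrypt(blob, oaep) == pcr_values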

> Similarly, 
> shrink-wrap licenses often contain unenforceable provisions.  However,
> one does not need to negotiate the provisions, one can simply "accept"
> them and then violate them without violating the law.  Our social
> structure allows for bending of the rules in all sorts of places,
> including situations which involve an imbalance of power (as in the above
> examples), emergencies, customary law, and cases where simply no one
> cares.
This is exactly the main motivation behind our work. 

> Thus, it is completely illusory to expect that a balanced policy
> can be defined in the terms of a computing machine, and that it is the
> result of _prior_ negotiation.  Life is much more complicated than
> that.  
It is. But in my opinion computing machines can do better than they do today.

> Thus, "Trusted computing" and the assumptions underlying its 
> security model are a large-scale assault on our social fabric if
> deployed in a socially significant scope.
An illogical conclusion, since TC does not aim to solve the problems discussed 
above.

> > > It is also incompatible with the free software principles.
I am sure TC itself is not. But some implementations based on it may be.

> >
> > What exactly is in your opinion incompatible with the free software
> > principles?
>
> From the current GPLv3 draft:
>
> "Some computers are designed to deny users access to install or run
> modified versions of the software inside them. This is fundamentally
> incompatible with the purpose of the GPL, which is to protect users'
> freedom to change the software. Therefore, the GPL ensures that the
> software it covers will not be restricted in this way."
No problem: my "privacy agent" does not prevent users from installing
or running modified versions of software, and neither does the underlying
TCB.

But what about a Linux user who is not allowed to install software? This can 
be configured by root. Is Linux incompatible with GPLv3? Or only certain
configurations?

The problem is that you can implement everything the GPLv3 wants to prevent 
on a platform without TC. Most often, it is only a configuration option. 
But does the GPLv3 restrict users regarding the allowed configurations?
 
>
> The views of the FSF on DRM and TC are well-published, and easily
> available.  For example, search for "TiVo-ization".
>
> What is incompatible with the free software principles is exactly
> this: I am only free to run the software that I want to run, with my
> modifications, if the hardware obeys my command.  If the hardware puts
> somebody else's security policy over mine, I lose my freedom to run
> modified versions of the software.  This is why any attempt to enforce
> the security policy of the author or distributor of a free software
> work is in direct conflict with the rights given by the free software
> license to the recipient of the work.
What does this view say about a user who freely accepts a policy? I _never_ 
talked about a system that "puts somebody else's security policy over mine".

Regards,
Chris




