
Re: Separate trusted computing designs


From: Christian Stüble
Subject: Re: Separate trusted computing designs
Date: Tue, 29 Aug 2006 10:41:22 +0200
User-agent: KMail/1.9.1

Hi,

On Thursday, 17 August 2006 09:18, Marcus Brinkmann wrote:
> Christian Stüble <address@hidden> wrote:
> > Hi Marcus, hi all,
> >
> > thanks for the questions. As I said in my last message, I would
> > prefer to discuss only OS-related questions in this list...
>
> I think it's a mistake for a practical project to ignore the implications
> and connections to real-world issues.  Western intellectual culture
> suffers dramatically from too strong a focus on domain expertise, and
> too little cross-thinking.
Of course it is. But I did not suggest skipping the discussion, only 
continuing it on a platform that allows non-OS developers to participate.

>
> > To prevent misunderstandings: I don't want to promote TC, nor do I
> > like its technical instantiation completely. IMO there are a lot of
> > technical and social issues to be corrected; that's the reason why I
> > am working on this topic. Nevertheless, a lot of intelligent
> > researchers have worked on it, and therefore it makes IMO sense to
> > analyse what can be done with this technology. In fact, who else
> > should do this?
>
> If the technology is fundamentally flawed, then the correct answer is
> "nobody", and instead it should be rejected outright.  
IMO not. Maybe this is the influence of my PhD advisor(s), but I would try 
to _prove_ that the technology is fundamentally flawed. BTW, the abstract 
security properties it provides are IMO useful.

> History shows that 
> intelligence and morality are completely independent properties.
Yes. So what?

> Of course, it makes sense to analyse what can be done with the technology
> even if one rejects it.  However, one would come to very different
> conclusions.  In this sense, "what can be done" translates for me to "what
> are the combined effects of its application on society" rather than "what
> is the best we can make out of it".
We have different assumptions and goals. Based on my goals, I am currently 
trying to find out 'what we can make out of it'. I am sure that even the 
inventors of the current TCG spec are not fully aware of what that is. If I 
find a way to fulfill the requirements I need without the disadvantages of 
the current spec, I will tell you and the TCG.

> > You are asking a lot of questions that I cannot answer, because they
> > are the well known "open issues". The challenge is to be able to
> > answer them sometime...
>
> If they are open issues, where does your research group's confidence
> come from that they not only can be solved, but in fact are solved in
> your design?  From the EMSCB home page (under "Benefits"):
I am sure we cannot solve all problems. However, some of the problems have 
already been solved (e.g., one design requirement is to ensure that 
applications cannot violate the user's security policy, as described by 
Ross). But I don't want to discuss one of our projects, nor the group 
itself; only the given privacy use case. 

> I asked for use cases that have a clear benefit for the public as a whole
> or the free software community.
I personally would like to be able to enforce my privacy rules even on 
platforms that have another owner.
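
To make that use case concrete, here is a minimal sketch of the idea: my 
data only leaves my machine if the remote platform can show a configuration 
that satisfies my privacy policy. This is only an illustration, not EMSCB 
code; every type and function in it is a hypothetical stand-in (a real 
design would verify a TPM attestation rather than trusting a plain struct):

/* Hypothetical sketch of the "privacy-protecting agent" idea.  All
 * names here are illustrative stand-ins, not a real TCG or EMSCB API. */
#include <stdbool.h>
#include <stdio.h>

struct privacy_policy {
    bool require_isolated_gui;   /* GUI must enforce compartment isolation */
    bool forbid_policy_override; /* owner may not weaken my policy later   */
};

struct attestation {             /* stand-in for a verified TPM quote */
    bool gui_isolated;
    bool policy_override_possible;
};

/* Decide whether the attested remote configuration satisfies the
 * user's privacy policy before any data is released. */
static bool policy_satisfied(const struct privacy_policy *p,
                             const struct attestation *a)
{
    if (p->require_isolated_gui && !a->gui_isolated)
        return false;
    if (p->forbid_policy_override && a->policy_override_possible)
        return false;
    return true;
}

int main(void)
{
    struct privacy_policy mine = { .require_isolated_gui = true,
                                   .forbid_policy_override = true };
    struct attestation remote = { .gui_isolated = true,
                                  .policy_override_possible = false };

    if (policy_satisfied(&mine, &remote))
        puts("release data: remote platform enforces my privacy rules");
    else
        puts("refuse: remote platform cannot enforce my privacy rules");
    return 0;
}

The point of the sketch is only the order of operations: the policy check 
happens on the data owner's side, before the data ever reaches the platform 
with another owner.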

>
> > If there are two comparable open operating systems - one providing
> > these features and one that does not, I would select the one that
> > does. I do not want to discuss the opinion of the government or the
> > industry. And I don't want to discuss whether people are intelligent
> > enough to use privacy-protecting features or not. If other people do
> > not want to use them, they don't have to. My requirement is that
> > they have the chance to decide (explicitly or by defining, or using
> > a predefined, privacy policy enforced by the system).
>
> I am always impressed how easily some fall to the fallacy that the use of
> this technology is voluntarily for the people.  It is not.  First, the use
> of the technology will be required to access the content.  And people will
> need to access the content, to be able to participate in our culture and
> society.  All the major cultural distribution channels are completely owned
> by the big industry, exactly because this allows these industries to have a
> grip-hold over our culture.  There is an option for popular struggle
> against this, but it will require a huge effort, and success is by no means
> guaranteed.
I did not talk about TC in general, but about the "privacy-protecting agent".

>
> In the end, this technology, if it succeeds, will be pushed down people's
> throat.  Everybody seems to know and admit this except the "intelligent
> researchers" (well, and the marketing departments of the big corporations).
Do you think that the open-source community will implement such features?
Do you expect that nobody will use open source if the industry implements 
such features? Do you expect that the European governments will use software 
that violates privacy laws if there is a better, secure alternative? They 
are currently thinking about using Linux instead of Windows.

> Even the publications of the "trusted computing" group admits this quite
> explicitly.  The "Best Practices and Principles" document says a lot about
> how bad it is to use this technology to coerce people into use of the
> technology, but then frankly admits that "preventing potentially coercive
> and anticompetitive behavior is outside the scope of TCG", and earlier that
> "there are inherent limitations that a specification setting organization
> has with respect to enforcement".
I agree; no discussion about that. A secure operating system enforces a 
security policy and thus always allows misuse. In a military environment, 
e.g., information can be kept confidential even without TC technology.


> > This is (apart from the elementary security properties provided by
> > the underlying virtualization layer, e.g., a microkernel) an
> > implementation detail of the appropriate service. There may be
> > implementations enforcing strong isolation between compartments and
> > others that do not. That's the basic idea behind our high-level
> > design for providing multilateral security: the system enforces the
> > user-defined security policy with one exception: applications can
> > decide themselves whether they want to continue execution based on
> > the (integrity) information they get (e.g., whether the GUI enforces
> > isolation or not). But this requires that users cannot access the
> > applications' internal state.
>
> That's incompatible with my ideas on user freedom and protecting the
> user from the malicious influences of applications. 
I know. But this is IMO a basic requirement for providing some kind of 
multilateral security: a negotiation of policies 'before' the application 
is executed.
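
A small sketch of what that negotiation could look like, under stated 
assumptions: the application inspects the properties the platform reports 
about itself and refuses to run if they don't meet its own policy. The 
property names and the query function are illustrative inventions, not a 
real L4 or EMSCB interface:

/* Hypothetical sketch of "policy negotiation before execution". */
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

enum platform_property {
    PROP_GUI_ISOLATION,       /* does the GUI isolate compartments?       */
    PROP_STRONG_COMPARTMENTS  /* is inter-compartment isolation enforced? */
};

/* Stand-in for a query against the (attested) platform configuration. */
static bool platform_reports(enum platform_property p)
{
    switch (p) {
    case PROP_GUI_ISOLATION:       return true;
    case PROP_STRONG_COMPARTMENTS: return true;
    }
    return false;
}

int main(void)
{
    /* The application's side of the negotiation: refuse to execute if
     * the platform does not provide the properties it depends on. */
    if (!platform_reports(PROP_GUI_ISOLATION) ||
        !platform_reports(PROP_STRONG_COMPARTMENTS)) {
        fputs("platform does not meet this application's policy; exiting\n",
              stderr);
        return EXIT_FAILURE;
    }
    puts("negotiation succeeded; continuing execution");
    return 0;
}

Note that the decision belongs to the application, exactly as described 
above; the system still enforces the user-defined policy for everything 
else, which is why the application's internal state must stay opaque to 
the user for this check to mean anything.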

> It is also incompatible with the free software principles.
What exactly is, in your opinion, incompatible with the free software 
principles?

Regards,
Chris



