From: Bas Wijnen
Subject: Re: Confinement (even with TPMs) and DRM are not mutually exclusive
Date: Tue, 6 Jun 2006 23:13:01 +0200
User-agent: Mutt/1.5.11+cvs20060403

On Tue, Jun 06, 2006 at 11:13:55AM -0400, Eric Northup wrote:
> I have been very concerned to see the discussions leaning towards
> abandoning the security benefits associated with the design patterns
> from KeyKOS and its descendants.

The security you speak of, as far as I understand (but I agree with Marcus
that it's better to be specific, so I will be), is the security of programs
against the users who own them (where owning means that the program received
all its capabilities and its initial code image from that user).  This will
never work.  Trying it does, however, lead to undesirable behaviour: programs
will try to protect themselves against the user, and the user will need to do
a lot of work to prevent that.  It'll be a war.  Some wars need to be fought,
but this one does not.  The sole reason this program exists is the user.  It
shouldn't have protection against her.

What we do want is security with user approval.  This is perfectly possible
in the system we have discussed.  What is not possible is protection against
the user's will.  And that's a good thing.

(Things get complicated when more than one user is involved.  That's not what
I'm speaking of here.)

> I think there may be a design which supports both goals.
> 
> It seems to me that DRM applications have two requirements:
> 
> 1) Private storage for crypto keys and the cleartext of the protected
> data.
> 
> 2) Private communication channels to trusted output devices, so
> that the protected data isn't captured.
> 
> Several desirable scenarios have been identified which require #1 -
> storing the users' crypto keys, client programs providing server
> programs with storage, etc.

But never against the will of the user.  So it is perfectly acceptable to let
the user decide, and to let the program assume that the user decided
correctly.  The program mustn't be able to check this decision, because it
mustn't be able to react to that decision being "wrong" (in the program's
opinion).

The system we suggested already lets the user do this, but it specifically
doesn't let the program check it.
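
To make that concrete, here is a minimal sketch in Python (all names are
invented for illustration, they are not actual Hurd or Coyotos interfaces):
the program asks its power box for storage, the user's shell decides what
capability to hand back, and the capability offers no way to authenticate
what is behind it.

class Capability:
    """An opaque handle: it can be invoked, but not inspected or verified."""
    def __init__(self, invoke):
        self._invoke = invoke

    def __call__(self, *args):
        return self._invoke(*args)

def real_space_bank(pages):
    return "allocated %d pages from the real bank" % pages

def logging_proxy(pages):
    # The shell interposed this proxy; the program cannot tell.
    print("[shell] program asked for %d pages" % pages)
    return real_space_bank(pages)

def power_box(want_logging):
    # The shell (acting for the user) decides which capability to hand out.
    return Capability(logging_proxy if want_logging else real_space_bank)

# The program's side: it simply uses whatever the user decided to give it.
storage = power_box(want_logging=True)
print(storage(16))

The point is that interposing the proxy is the shell's decision; the program
cannot even express the question "is this the real space bank?".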

> #2 seems rarer to me among desirable programs, and might be an
> appropriate place to put restrictions.

#2 needs trusted communication partners and a communication channel.  If those
partners want to set up a secure connection, they can use encryption.  But
they don't even need to: it's not the programs which want it, it's the user.
The user can simply make sure that the capabilities aren't sniffed, because
the system guarantees that the session is the only one holding them (they are
revoked when the terminal is reset, for example).  So if they are sniffed,
then the user wants them to be sniffed.  Again, that is not something to
protect against.  Protecting against the user directly means that the user's
freedom is limited.
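
As a sketch of what I mean by that guarantee (Python again, invented names;
the real mechanism would of course live in the system, not in an
application): the session wraps every capability it hands out and revokes
them all when the terminal is reset, so any copy a sniffer might have kept
becomes useless.

class RevocableCap:
    def __init__(self, target):
        self._target = target

    def __call__(self, *args):
        if self._target is None:
            raise PermissionError("capability revoked")
        return self._target(*args)

    def revoke(self):
        self._target = None

class Session:
    def __init__(self):
        self._granted = []

    def grant(self, target):
        # Everything handed out through the session is revocable by it.
        cap = RevocableCap(target)
        self._granted.append(cap)
        return cap

    def terminal_reset(self):
        for cap in self._granted:
            cap.revoke()
        self._granted.clear()

session = Session()
keyboard = session.grant(lambda: "key press")
print(keyboard())        # works while the session is live
session.terminal_reset()
try:
    keyboard()
except PermissionError as e:
    print(e)             # any sniffed copy is equally dead now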

> There are situations where programs want to know that they have a
> *mostly* private communication channel to an output device.  For
> example, a spreadsheet which stores patient information in a medical
> practice must be careful that random applications don't take
> screenshots or steal their clipboard contents.

This is sort of a special case, because the "user" in system terms is perhaps
not exactly the person sitting behind the terminal.  The user's session can be
designed to limit the abilities of the person in such a case.  However,
protecting against screenshots is nonsense, as has been said: a screenshot can
always be taken with a digital camera, or the information simply memorized.
If the person isn't trusted, she mustn't be able to get that information on
screen at all.

> Also, password entry dialog boxes, etc.  But these applications do not want
> to prohibit the *user* (ie, the shell) from taking screen dumps.

And they don't want to prohibit the user from doing other strange things
either.  If some other process has access to the capability, it must have
received it from the user (possibly indirectly).

> They want to protect their data from other applications (including, perhaps,
> the application which initiated their execution)

No, they don't[1].  They simply trust the user (or the program which started
them) to do the right thing.  If the user starts a program A, which starts
another program B, which needs a password, then there are two options (see
the sketch after this list):

1. A is itself allowed to ask for a password.  In that case there is no need
to protect against A, and A can be trusted to pass on its power box (which has
the ability to ask for a password, or more likely only to check it).
2. A is not allowed to ask for a password.  In that case, B cannot be started
by A (the user can only give things to B via A, and she doesn't trust A to
pass on the capability; besides, B would be under complete control of A, so
she doesn't trust B either).  So B will be started by the user directly, not
by A (although this will probably happen in response to A requesting it).  B
will then have its own power box, presumably with more rights than A has.
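
Here is the sketch I promised (Python, all names invented): the only
difference between the two options is who hands B its power box.

def ask_password(prompt):
    return "secret"      # stand-in for a trusted password dialog

def check_password(pw):
    return pw == "secret"

class PowerBox(dict):
    """A bundle of capabilities, chosen by whoever instantiates the program."""

def program_b(power_box):
    pw = power_box["ask_password"]("login: ")
    return power_box["check_password"](pw)

def program_a(power_box):
    # Option 1: A may ask for passwords itself, so it simply forwards its
    # own power box to B.  There is nothing to protect from A here.
    return program_b(power_box)

# Option 1: the user's shell gives A a power box that can ask for passwords.
a_box = PowerBox(ask_password=ask_password, check_password=check_password)
print(program_a(a_box))

# Option 2: A may not ask for passwords; the shell starts B directly with a
# power box A never sees (A's own box would only contain check_password).
b_box = PowerBox(ask_password=ask_password, check_password=check_password)
print(program_b(b_box))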

[1] As I said, I'm assuming a single user here.  The discussion we have had
was mostly about multi-user systems, where one user (or the system) wants to
protect against another user.  That happens all the time.  But if a program
instantiates a child and provides it with all its capabilities (and its code),
then there is no defense against that parent, and it doesn't make sense to
even try (as a programmer; the child program itself _can't_ even try).

> Capabilities that can be Authenticated:
> 
> Space Bank.
> 
>   (Described in earlier threads already)

We haven't quite decided (AFAIK) whether we want it even for space banks, but
I'm very certain we don't want to allow programs to authenticate against the
will of the user.

> Human Input Device.

The user session knows if the device is trusted, because the terminal tells it
when it connects.  That is, if I log in, that terminal's capabilities will be
given to my session (and revoked when I log out).  Not just anyone can give
capabilities through that channel.  Only the system can.  And it will tell if
they are trusted.  For example, when I log in over an ssh connection, it will
say that they aren't, because it doesn't know where my ssh client is running
(there may be a key logger installed).

So the user knows if the device is "direct".  The program doesn't need to
know.
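
A small sketch of what I mean (Python, invented names): the trust flag
travels with the capability from the system to the session, and stops there;
the program only ever sees the bare capability.

class LoginSession:
    def __init__(self):
        self.input_cap = None
        self.input_is_direct = False

    def attach_terminal(self, input_cap, direct):
        # Called by the system at login, revoked again at logout.
        self.input_cap = input_cap
        self.input_is_direct = direct

    def start_program(self, program):
        # The program gets the bare capability, not the trust flag.
        return program(self.input_cap)

def editor(read_key):
    return read_key()

session = LoginSession()
session.attach_terminal(lambda: "x", direct=False)   # e.g. an ssh login
print("direct input?", session.input_is_direct)      # the user can see this
print(session.start_program(editor))                 # the program just reads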

> Output Device (window system session, audio output, printer, etc...)
> 
>   I'm not sure exactly what guarantees we want to make here, but
>   probably they would include:
> 
>       -If output is monitored/logged, it is done with the explicit
>        approval of the user's shell.

This is automatically the case.  Only the shell has access to these
capabilities.  If a program gets them, the shell has given them to it.

>       -Some devices may offer limited guarantees of exclusivity.  For
>        example, that while printing a contract, no other program can
>        insert the word "not".

I'd say that invoking a capability to a printer should send a whole document,
not only a part of one.  There is no need for exclusivity in that case.

>        Or that other programs can not change the display of a window (but
>        rather, they must display their content in separate windows).

Most programs shouldn't get capabilities to the whole desktop, but only to
their own window.  Again, if the party which owns a program thinks it is a
good idea to mess with that program's window contents, then the program must
not prevent it.  All its parents either are the user, or are working on behalf
of the user.  If the parents of a program are not trustworthy, then (from the
program's point of view) all is lost.  In other words, this isn't about trust.
A program (including the environment it lives in) is defined by its parent.
Not trusting the parent means not trusting (any part of) the world.
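
To illustrate (Python again, invented names): the window system can simply
hand each program a capability that is closed over its own window, so the
program holds nothing it could use to reach the rest of the desktop, and
nothing it could use to check what its parent does with the window.

class Desktop:
    def __init__(self):
        self.windows = {}

    def create_window(self, name):
        self.windows[name] = []
        def draw(text):                  # a capability to this window only
            self.windows[name].append(text)
        return draw

desktop = Desktop()                      # held by the user's session
draw_in_my_window = desktop.create_window("spreadsheet")
draw_in_my_window("patient data")        # the program can draw here...
# ...but it holds nothing naming the desktop, so it cannot read or change
# other windows; equally, it cannot stop its parent from doing so.
print(desktop.windows)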

I want to write some things down here, as I see them.  I think Jonathan
doesn't agree with me, but I'd like to hear from anyone (including Jonathan)
what's wrong with it.

A programmer writes a program.  The programmer guarantees (within limits)
that the program works according to her specification.  However, she only
guarantees this if the program is properly used.

The system administrator installs the program.  She proxies the guarantee of
the programmer: if it's properly used, there is some guarantee.

The user uses the program.  She receives the guarantee.

Now when the program is run as it should be by the user, everything is fine.
But what happens if the user runs it "improperly", for example in a debugger?
Then the guarantee is no longer valid.  However, that doesn't mean that it's a
wrong thing to do.  In particular, when the user starts a program, that's the
user's business, not the administrator's, and certainly not the programmer's.
If the user wants to run the program in a debugger, then it mustn't start
protesting.  It must do what the user wants, which in this case means "notice
nothing, just behave as if there is no debugger".  We cannot leave this to
programmers.  Some of them will check for it, and they will let their programs
protest.  In the end this will not work: the user can change the code and do
what she wants anyway.  But that is expensive.  And there is no reason that it
is ever useful (in a way that I care about), AFAICS.  So it simply shouldn't
be possible at all.

The result of allowing these protections is that normal users have trouble
doing things slightly differently from how the programmers expected them to be
done, while there is no protection for the programmer anyway, because some
people will "crack" their programs and then the sensitive data is public (or
at least as public as those people want it to be).

It's like those "copy-protected" CDs: people who buy them have lots of
trouble using them, while for people who copy them everything works fine.

Thanks,
Bas

-- 
I encourage people to send encrypted e-mail (see http://www.gnupg.org).
If you have problems reading my e-mail, use a better reader.
Please send the central message of e-mails as plain text
   in the message body, not as HTML and definitely not as MS Word.
Please do not use the MS Word format for attachments either.
For more information, see http://129.125.47.90/e-mail.html
