
DRM vs. Privacy


From: Jonathan S. Shapiro
Subject: DRM vs. Privacy
Date: Mon, 07 Nov 2005 14:06:34 -0500

There are two types of secrecy in computational systems:

  1. Secrecy by encryption, and
  2. Secrecy by protection.

Let me consider these in turn, and then summarize what I perceive as the
implications for Hurd.

SECRECY BY ENCRYPTION

If you have encrypted a document, and I wish to read it, then I have two
tasks:

  1. To identify the encrypted content, and
  2. To discover the key (perhaps by brute force).

Similarly, if I have applied DRM to a movie and you want to extract
the movie, you must perform the same two steps. As far as I can tell,
there is only one difference between these two scenarios:

  In the privacy case, the key *may* be known to the
  user. In the DRM case, the goal is to ensure that the
  key *must not* be disclosed to the user.
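
To make the two tasks concrete, here is a deliberately trivial sketch
in C. It uses a single-byte XOR "cipher" so that the entire keyspace
has only 256 entries; a real cipher changes the cost of the search,
not its structure, and everything in the sketch is invented for
illustration.

    /* Toy illustration: recognize the protected content (task 1) and
     * recover the key by exhausting the keyspace (task 2). */
    #include <stdio.h>
    #include <string.h>

    static void xor_buf(unsigned char *buf, size_t n, unsigned char key)
    {
        for (size_t i = 0; i < n; i++)
            buf[i] ^= key;
    }

    int main(void)
    {
        const char *movie = "MAGIC: the movie bits";
        unsigned char ct[64];
        size_t n = strlen(movie) + 1;

        memcpy(ct, movie, n);
        xor_buf(ct, n, 0x5a);                /* "protect" the content */

        for (int k = 0; k < 256; k++) {      /* task 2: try every key */
            unsigned char trial[64];
            memcpy(trial, ct, n);
            xor_buf(trial, n, (unsigned char)k);
            if (memcmp(trial, "MAGIC:", 6) == 0) {  /* task 1: recognize it */
                printf("key = 0x%02x, content = %s\n", k, trial);
                break;
            }
        }
        return 0;
    }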

So we must now ask: what technical means is used by DRM systems to
prevent the disclosure of the key?

If disk forensics is practical, then the answer boils down to:
"obscurity". If I can read the disk and simulate the execution of the
system, I can eventually discover the key.

If disk forensics is NOT practical, then the answer boils down to:
"encryption". This simply reduces us to the previous problem:
discovering the key.
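
As a rough picture of what "reading the disk" buys the attacker, one
standard forensic pass is simply to flag regions of the image whose
bytes look statistically random, since that is where obfuscated keys
and ciphertext tend to live. The sketch below is a toy version of
that idea, not a real forensics tool:

    /* Scan a raw disk image for near-random 4 KB windows; print the
     * offsets as candidate key or ciphertext regions.
     * Build with -lm.  Usage: ./scan disk.img */
    #include <stdio.h>
    #include <math.h>

    #define WIN 4096

    static double entropy(const unsigned char *b, size_t n)
    {
        size_t count[256] = {0};
        double h = 0.0;
        for (size_t i = 0; i < n; i++)
            count[b[i]]++;
        for (int v = 0; v < 256; v++) {
            if (count[v] == 0)
                continue;
            double p = (double)count[v] / n;
            h -= p * log2(p);            /* bits per byte; 8.0 = random */
        }
        return h;
    }

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s image\n", argv[0]);
            return 1;
        }
        FILE *f = fopen(argv[1], "rb");
        if (!f) {
            perror("fopen");
            return 1;
        }

        unsigned char buf[WIN];
        long off = 0;
        while (fread(buf, 1, WIN, f) == WIN) {
            if (entropy(buf, WIN) > 7.9)
                printf("candidate key/ciphertext region at offset %ld\n", off);
            off += WIN;
        }
        fclose(f);
        return 0;
    }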

So for secrecy by encryption, we come down to the following statements:

  => In order for DRM to be effective, disk forensics must be prevented.

  => The only currently known or proposed technology that is
     pragmatically capable of preventing disk forensics is the TPM/TCPA
     chip.

  => Even in the presence of this technology, the operating system must
     collude. In particular, the operating system must do *all* of the
     following things in order to support DRM:

       1. It must implement the remote authentication mechanism.
          [DRM can be done without this, but the content provider cannot
           *verify* that it is done.]

       2. It must use secrecy by protection combined with the TPM/TCPA
          mechanism to prevent the user from learning the master
          disk encryption key.

       3. It must enable an application to store an encryption key
          in such a way that the user cannot extract it.
          [A toy sketch of this "sealing" step follows the list.]
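
To make requirements 2 and 3 a little more concrete, here is a
schematic model of sealed storage. Every identifier in it is invented
for illustration -- the real TCG interfaces look nothing like this --
and XOR stands in for real authenticated encryption. The point is
only the shape of the mechanism: the chip's secret never leaves the
chip, and the sealed blob is useful only to software whose
measurement matches the one it was sealed under.

    /* Schematic model of requirements 2 and 3: the OS plus a TPM-like
     * chip keep an application key that the user (even with the disk
     * in hand) cannot simply read back.  All names are invented. */
    #include <stdio.h>
    #include <stdint.h>

    /* Secret burned into the chip; by assumption it never leaves it. */
    static const uint8_t chip_secret[16] = "not-extractable";

    /* Stand-in for the hash the OS would take of a program's code. */
    static uint32_t measure(const char *program_image)
    {
        uint32_t h = 2166136261u;                 /* FNV-1a, toy hash */
        for (; *program_image; program_image++)
            h = (h ^ (uint8_t)*program_image) * 16777619u;
        return h;
    }

    /* Seal/unseal: bind a key to the chip secret and the measurement.
     * (XOR is symmetric, so one function does both directions.) */
    static void seal_or_unseal(const uint8_t in[16], uint32_t meas,
                               uint8_t out[16])
    {
        for (int i = 0; i < 16; i++)
            out[i] = in[i] ^ chip_secret[i] ^ (uint8_t)(meas >> (8 * (i % 4)));
    }

    static void show(const char *label, const uint8_t k[16])
    {
        printf("%-14s", label);
        for (int i = 0; i < 16; i++)
            printf("%02x", k[i]);
        printf("\n");
    }

    int main(void)
    {
        const uint8_t app_key[16] = "movie-masterkey";
        uint8_t blob[16], out[16];

        /* The OS seals the key on behalf of the trusted player... */
        seal_or_unseal(app_key, measure("trusted-player"), blob);

        /* ...and the chip releases it only to the same measurement. */
        seal_or_unseal(blob, measure("trusted-player"), out);
        show("player gets:", out);

        seal_or_unseal(blob, measure("users-debugger"), out);
        show("anyone else:", out);               /* junk, not the key */
        return 0;
    }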

So let us attempt to remove these mechanisms one at a time and discover
what we lose.

0. If no TPM/TCPA chip is present, we are done. We are now arguing only
about the difficulty of undoing the key obfuscation. I do not know any
general way to prevent programs from obfuscating data, but I don't think
that this is made any worse in a more secure system.
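
For illustration, one of the countless obfuscation tricks a program
can play: never keep the key itself in memory, only two random-looking
shares that are recombined at the moment of use. A debugger or a
forensic pass can still undo this by following the program's own
logic; it only raises the cost.

    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>

    int main(void)
    {
        uint8_t key[16] = "do-not-find-me!";     /* toy secret        */
        uint8_t share_a[16], share_b[16];

        for (int i = 0; i < 16; i++) {           /* split into shares */
            share_a[i] = (uint8_t)rand();
            share_b[i] = key[i] ^ share_a[i];
            key[i] = 0;                          /* original key gone */
        }

        for (int i = 0; i < 15; i++)             /* recombine at use time */
            putchar(share_a[i] ^ share_b[i]);
        putchar('\n');
        return 0;
    }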

1. If we remove the authenticate operation, then we lose the ability to
create mutually trusting federations of Hurd systems. A trusted
distributed system is only possible if each node can verify that the
other nodes support the expected behavior contract.

It is a feasible solution for the Hurd project to declare that it will
not support highly trustworthy federation of this kind, but there are
many *legitimate* uses of this mechanism. Consider, for example, a bank
server authenticating an ATM (which is simply DRM applied to
money, when you think about it). Or *you* authenticating your home
machine when you log in (which is DRM applied to *your* content).
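
In the ATM example, "authenticating" means something like the toy
challenge-response below: the bank proceeds only when the other end
proves possession of a key it is never asked to reveal. The mixing
function and the key here are invented stand-ins, not a real MAC; in
a full remote-attestation exchange the response would additionally
cover a measurement of the software on the other machine, which is
roughly what requirement 1 above asks the operating system to
support.

    #include <stdio.h>
    #include <stdint.h>

    /* Toy keyed mixing function; a real protocol would use an HMAC. */
    static uint64_t toy_mac(uint64_t key, uint64_t nonce)
    {
        uint64_t x = key ^ nonce;
        for (int i = 0; i < 8; i++)
            x = (x * 6364136223846793005ull + 1442695040888963407ull)
                ^ (x >> 31);
        return x;
    }

    int main(void)
    {
        const uint64_t shared_key = 0x1122334455667788ull; /* provisioned in both */
        uint64_t nonce = 0x123456789abcdef0ull;            /* fresh challenge */

        /* ATM side: answer the challenge without disclosing the key. */
        uint64_t response = toy_mac(shared_key, nonce);

        /* Bank side: recompute and compare. */
        if (response == toy_mac(shared_key, nonce))
            printf("counterparty holds the key; proceed\n");
        else
            printf("reject\n");
        return 0;
    }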

2. If we disclose the master disk encryption key, then we similarly
cannot build highly trusted federations, and we expose our users to
various forms of search -- some legal, others not. I am not sure that I
want to build a system in which an employer can examine the disk of an
employee without restriction.

3. If we prevent the storage of unrecoverable keys, then the strength of
cryptography is reduced to the strength of login authentication, which
we know is extremely weak.
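
To put a number on that: a key that must be re-derivable from a login
password is only as strong as the password, and even a truly random
8-character password drawn from the 95 printable ASCII characters
gives roughly 95^8, or about 2^52, possibilities -- against 2^128 for
a key the system can hold but never hand back.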


SECRECY BY PROTECTION

The basic scenarios of "secrecy by protection" all boil down to
variations on one idea: a program holds something in its memory that it
does not wish to disclose.

It seems obvious that we *want* programs to hold such secrets in some
cases. ssh, for example, must be able to hold a cryptographic key
without fear that the key will be exposed. Similarly, I would like to be
able to grant you write access to a file, and later to revoke it.
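
For a sense of what this looks like in practice today: on a Linux
system an ssh-like program typically does something in the following
spirit (the Hurd mechanism would differ, and none of this makes the
secret unreachable from a sufficiently privileged position; it only
narrows the exposure):

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/prctl.h>

    int main(void)
    {
        static unsigned char key[32];

        if (mlock(key, sizeof key) != 0)     /* keep the key off swap */
            perror("mlock");
        if (prctl(PR_SET_DUMPABLE, 0) != 0)  /* no core dump, no casual ptrace */
            perror("prctl");

        memset(key, 0xA5, sizeof key);       /* stand-in for loading a real key */

        /* ... use the key ... */

        memset(key, 0, sizeof key);          /* scrub before exit */
        munlock(key, sizeof key);
        return 0;
    }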

There are two means by which this type of protection might be bypassed:

  1. Forensics, which reduces us to the previously discussed case
  2. Debugging -- examining the memory image of a program during
     execution.

I think it is pretty clear that in a multiuser system we *must* be able
to prevent debugging. We do not, for example, want ordinary users to be
able to debug "sudo" (or its equivalent). In any system where privacy is
desired, we do not want the system administrator to be able to examine
arbitrary programs either.

The problem is that all software in a system is subject to the same
rules, and the software that implements DRM can hide its secrets too.
Since we must permit the system administrator to install non-disclosing
programs, I do not see a way to prevent the administrator from
installing a non-disclosing program whose purpose is to implement DRM.
The best I think we can do is alert the system administrator that such
programs do not always operate with the user's interests in mind.


Discussion?


shap




