
[Plash] Re: [cap-talk] Plash: Empowering Security


From: Mark Seaborn
Subject: [Plash] Re: [cap-talk] Plash: Empowering Security
Date: Mon, 07 Apr 2008 21:11:56 +0100 (BST)

Toby Murray <address@hidden> wrote:

> Anyone interested in POLA needs to know about Plash. It's woefully
> under-hyped and much more powerful than I believe many (including those
> in the POLA community) are aware. I've tried to write something short,
> sharp and sweet to address this.

Thanks for writing this!

> Please read it if you're interested and give me feedback.
> Eventually, I'd like to push this to a wider audience to spread the
> word further but want to get more of a mandate for doing so first.

What audiences do you have in mind?


> Introducing Plash
> 
> This tool is called Plash [2] and currently runs on Debian-compatible
> Linux distributions such as Debian and Ubuntu [*].

"Debian-based distributions" or "Debian and derivative distributions"
might be better terms than "Debian-compatible".

> Plash enables ordinary users to install software packages that might
> have been built by anyone in the world, ensuring that the software
> cannot harm the user or the rest of the system.  This allows
> non-Administrators to install any software they might require in
> order to get their work done.  With Plash, Administrators,
> meanwhile, need not lie awake fretting that their users will have
> rendered their systems insecure by doing so.

Maybe I'll lie awake fretting that you've made too-strong claims about
Plash. ;-)

Instead of saying the software cannot harm the user, it would be
better to say that the damage can be limited to whatever the user has
granted to it.
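To make that concrete, here's a hypothetical sketch (plain Python, not
Plash code) of what "limited to whatever the user has granted" means in
capability terms -- the untrusted code gets objects, not ambient
authority:

```python
import io

def untrusted_word_count(doc):
    # The untrusted code receives only this file-like object (a
    # capability).  It has no ambient authority: no way to name,
    # open or delete any other file on the system.
    return len(doc.read().split())

# The user grants exactly one document, nothing more.
granted = io.StringIO("the quick brown fox")
print(untrusted_word_count(granted))  # prints 4
```

The worst the code can do is misuse that one document, which is the
claim I think the announcement should be making.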

OLPC seem to be distinguishing between their security architecture,
Bitfrost, and its implementation, Rainbow.  I wonder if we should do
something similar.


> The trick lies in how Plash provides its security. We'll use an
> example to illustrate. Suppose Bob, an ordinary user, needs to install
> a new wordprocessor to enable him to work more productively. He checks
> to see whether the wordprocessor is available as a package for his
> system, e.g. by using "apt-cache search" etc. and is pleased to learn
> that it is.  However, his delight is soon dampened when he realises
> that he doesn't have permission to install the package and must ask
> the Administrator, Alice, to install it for him.  Alice must now
> decide whether the wordprocessor can be trusted. In almost all cases,
> unless the software is well known and widely used, Alice has no choice
> but to err on the side of caution and assume it could be dangerous --
> either because it is purposefully malicious or because it contains
> vulnerabilities that, if exploited, could allow an attacker to compromise
> the system's security. Inevitably this leads Alice to deny Bob's
> request to have the package installed. Alice and Bob are both left
> frustrated with Bob unable to do his work. In short, nobody wins.
> Bob's PC is rendered impotent by its archaic requirement that all
> software it runs be trustworthy.

I'm not sure this story will be convincing, although I can see where
you are coming from with your introduction.  Are we just talking about
Unix here?  For many Unix users, especially in your audience, the
administrator and the user are the same person, so this conflict does
not happen as much as it might have done in the past.  A user might
hold back from installing a package themselves, but that's a different
scenario.

If you've found a package using "apt-cache search" it's probably part
of the distribution you are already using, so trustworthiness is
usually less of an issue.

Maybe it's worth talking about threat models here.  I can think of
four kinds of problems installing a package could cause:

 a) The package messes stuff up, with no malice required on anyone's part
 b) It makes users vulnerable to other users on the system
 c) It makes the user vulnerable to the outside world
 d) The package is malicious

A sysadmin is most directly concerned with (a) and (b), but they're
also concerned with (c) and (d) because they have to deal with some of
the consequences.

However, the user-sysadmin on a single-user system is not concerned
with (b).  For systems like Zero-Install, (b) does not apply because
software is installed under a user account, not as root.

(a) can be quite a problem sometimes.  If you mix and match Debian
binary packages too much you can get into frustrating situations where
installing or upgrading package A causes another package B, which you
want to keep, to be removed.

Centralised distributions such as Ubuntu mitigate these problems.  For
(d), there is some minimal review of code, and the distribution's
reputation is at stake.  For (b) and (c), there is some limited
review, but probably more importantly, they provide an upgrade path to
fix bugs, although that's as far as it goes because of the lack of
POLA.

(c) is probably the most important use case to talk about.  Most
programs these days are exposed to potentially-malicious data
downloaded from the Internet, even if they're not exposed to the
Internet directly.  The number of vulnerabilities of this kind is
huge.

(d) is a realistic threat model for Javascript in browsers, but I'm
not sure it is useful to talk about this for normal applications.
People's expectations are low.  Few people expect systems to cope with
malicious desktop applications.  Maybe it would be worthwhile to
compare desktop apps with web apps and question why web apps can't do
as much as desktop apps, and why desktop apps aren't as easy to deploy
as web apps.

This reminds me of the wonderful "safety vs. functionality" graph with
labelled transitions in the draft Caja spec. [1]

[1] http://google-caja.googlecode.com/files/caja-spec-2007-10-11.pdf
    though there is a newer version at
    http://google-caja.googlecode.com/files/caja-spec-2008-01-15.pdf


> Finally, Plash grants access to standard, innocuous, facilities that
> the application might require when it is run, such as the X display
> system and the network.

X11 access is not quite innocuous. :-)  X is a big can of worms that
will require a lot of work to make safe. [2]

Network access is not innocuous either because processes can use it to
send spam, attack other systems and receive instructions.

It would be better to describe Plash as a work-in-progress in these
areas.

[2] http://plash.beasts.org/wiki/X11Security


> Unlike other sandbox approaches, Plash removes the need to specify
> detailed policy information for each application by leveraging the
> information that is already available about the application in the form
> of standard package dependencies and by making smart use of existing
> facilities like the "Open File" dialog to infer security information. 

That's a good description.  Can you reference CapDesk, Polaris,
Bitfrost and earlier stuff about powerboxes?
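For readers who haven't met the powerbox idea before, here's a
hypothetical sketch (in Python, not Plash's actual implementation) of
the key point: the trusted file chooser runs outside the sandbox and
returns an open handle, so the user's act of choosing a file *is* the
act of granting access -- no per-application policy file is needed:

```python
import os
import tempfile

def powerbox_open(chosen_path):
    # Trusted side: stands in for the "Open File" dialog.  It runs
    # outside the sandbox, so it may open any path the user picks.
    # Returning the open handle, rather than the path, is the grant.
    return open(chosen_path)

def sandboxed_app(doc):
    # Untrusted side: receives only the handle the user chose and
    # cannot reach any other file.
    return doc.read()

# Simulate the user picking a file in the dialog.
fd, path = tempfile.mkstemp()
os.write(fd, b"user-chosen document")
os.close(fd)
print(sandboxed_app(powerbox_open(path)))  # prints "user-chosen document"
os.unlink(path)
```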

On terminology:  I picked the term "sandbox" to describe Plash, but I
know others, such as the authors of the Polaris paper, have used the
term pejoratively to describe environments such as Java applets, where
processes can't acquire enough authority to do useful work.

I avoided the term for a while, but then decided that saying
"sandboxed process" and "unsandboxed process" was easier than saying
"process running with limited authority" and "process running with all
the user's authority".  Maybe we can find a better term for
limited-usefulness sandboxes or better adjectives than "sandboxed" and
"unsandboxed"?

Regards,
Mark



