l4-hurd

Re: setuid vs. EROS constructor


From: Jun Inoue
Subject: Re: setuid vs. EROS constructor
Date: Fri, 14 Oct 2005 23:23:41 -0700

On Thu, 13 Oct 2005 12:22:46 -0400
"Jonathan S. Shapiro" <address@hidden> wrote:

> We do NOT have anything that would automatically instantiate a new copy
> of foo when you open foocap. The problem is that the protocol for
> starting a program requires providing a source of storage and a schedule
> capability.
>
> If we added such a function, we certainly would NOT do it in the file
> system, because the file system does not (and should not) receive my
> source of storage and/or my schedule. The closest we might come is to
> add an advisory "active" bit to the directory entry so that my C library
> might be told to transparently instantiate the "foo" program.
> 
> In general, however, this is EXTREMELY dangerous. What we are creating
> here is a convention where you set a bit (the active bit) that my
> library code will obey without consulting me. In effect, this means that
> you get to take over my execution. There are certainly times that this
> is an appropriate thing to do, but I don't think it is something that
> should EVER be done transparently!

I didn't mean that simply accessing the settrans'ed (setcap'ed) file
would automagically spawn a new process, whether that be a read/write
request or a file open.  What I meant by "whenever someone types
/pub/jun/foo, [it spawns]" was that whenever someone types that in bash
(or csh, sh, whatever) a child is created.  In that case, the user is
explicitly asking for a new process out of that particular node in the
VFS.  If the node (or its creator) can't be trusted, the user should
sandbox it (which the shell might do by default for every executable not
on the standard path).

;; But I changed my mind on this.  "Trust it xor imprison it" isn't too
;; flexible, as you say.

From a programmer's perspective, accesses to that constructor node
would be the same as with any file: you just get a capability.  No
process creation there.  Only if the program asks libc to create a
process out of the file will a child be created and resources granted.

I'm sorry I worded it so obscurely.
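To pin down what I mean, here's a toy Python model of the sequence I
have in mind.  All the names here are my own invention, not actual
EROS or Hurd interfaces; the point is only that open() is passive and
that the *caller* supplies storage and schedule at spawn time:

```python
class Capability:
    """An unforgeable handle to an object; holding it grants access."""
    def __init__(self, obj):
        self._obj = obj

class ConstructorNode:
    """A VFS node wrapping a program image.  Opening it is passive."""
    def __init__(self, image):
        self.image = image

    def open(self):
        # Plain file access: hand back a capability, spawn nothing.
        return Capability(self)

def spawn(node_cap, storage, schedule):
    """Explicit process creation: the caller provides a source of
    storage and a schedule capability, as in the constructor protocol."""
    node = node_cap._obj
    return {"image": node.image, "storage": storage, "schedule": schedule}

node = ConstructorNode(image="foo")
cap = node.open()        # just a capability; no child yet
child = spawn(cap, storage="my-space-bank", schedule="my-schedule")
```

So reading or stat'ing the node never instantiates anything; only the
explicit spawn() call does, and only with resources the caller chose
to hand over.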


> I am not sure whether this response makes things clearer, so I would
> like to pause to get your reaction.

I still feel I'm missing some thing(s).  This sentence seems to be the
root of my confusion:

> > > The only thing that Bas missed is that if you have persistence
> > > you do not need a constructor server.

What is meant by this sentence?  What is the "constructor server" in
this context?

Initially, I interpreted Bas's "constructor server" to be a central
server (part of the TCB) handing out capabilities of constructor
objects.  I thought it was the equivalent of a metaconstructor.  Then,
I interpreted your sentence to mean that persistence makes such a TCB
component unnecessary, which was really dumb considering EROS *does*
have a metaconstructor.  I should have known that you weren't
schizophrenic. :-)

Reading subsequent posts, Bas and you both seem to be using the word
"constructor server" to mean the thing I called "constructor process",
i.e. a server that *implements* a constructor object.  So perhaps I
was just being annoying...sorry.


> What it tests is whether the initial program image contains any
> capability (excluding those that came from the instantiating requestor)
> that would permit write authority. The test handles the transitive case,
> so really, it tests whether any operation on those capabilities would
> allow the new program to ever *obtain* write authority.
> 
> The confinement test is precise but conservative, because it is a static
> test. It is possible to write programs that hold leaky capabilities but
> do not use them. The constructor test will reject these programs,
> because in general we don't have the technology to check this property
> robustly.
> 
> I'll be happy to describe the mechanism, but I think it is better to get
> the idea across first.

Somehow I was thinking that EROS tests for confinement, but given that
it actually tests for leaky capabilities, I could imagine a pretty
straightforward mechanism.  ...which you went ahead and described in
another thread ("Re: instance and instantiator").
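The way I picture that test is as a reachability check over the
initial capability graph.  This is purely my sketch of the idea, not
the actual EROS mechanism: reject the program if any capability
*transitively obtainable* from its initial image could yield write
authority.

```python
def confined(initial_caps, holds, writes):
    """Static, conservative confinement test.
    holds:  cap -> set of caps obtainable through that cap
    writes: set of caps that directly permit write authority
    Rejects (returns False) if write authority is reachable, even if
    the program would never actually exercise it."""
    seen, stack = set(), list(initial_caps)
    while stack:
        cap = stack.pop()
        if cap in seen:
            continue
        seen.add(cap)
        if cap in writes:
            return False          # leaky: write authority reachable
        stack.extend(holds.get(cap, ()))
    return True

holds = {"dir": {"file_rw"}, "ro": set()}
writes = {"file_rw"}
confined({"ro"}, holds, writes)    # read-only image: accepted
confined({"dir"}, holds, writes)   # write reachable via dir: rejected
```

That also makes the conservatism obvious: a program holding "dir" is
rejected even if its code never touches "file_rw".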


> The number is manageably small. It is not driven by user interfaces.
> Here is an intuition:
> 
> If you are already planning to hand a specific file to the sub-program,
> simply open the file and pass the capability. There isn't any
> negotiation required here, and you have already restricted access as far
> down as you can practically go.
> [...]
> So in practice, a mediating agent is required when an application has
> justified need for some capability that is contained in one of these
> aggregates. That is: mediating agents exist to guard aggregates.
> 
> There is one other case: a mediating agent must exist to restrict
> communication across user-established confinement boundaries. In the
> same way that you do not want XMMS scribbling on all of your files,
> there is no reason why it should be sending arbitrary cut and paste
> buffers to other programs through the window system. In the EROS Window
> System, cut and paste still works, but *only* when the user has actually
> executed the necessary actions. In X11, programs can do cut&paste
> without the user ever seeing the interaction at all.
> 
> So yes: the guard agents can be seen as sub-programs that serve the
> shell.

That intuition was pretty much what I had in mind :)

I can see it works pretty nicely in most cases, but how can this work
with applications featuring scripting?  Maybe I've got it all wrong
again, so here's my understanding:

A UID is a (usually) large, static aggregate that consists of all (or
most) capabilities held by a user.  A traditional, uid-based system
gives the entire aggregate associated with a uid to any program
started by a user, because that program is a representative working on
behalf of that user.  Only, it isn't; the program is an agent hired
tentatively by the user to exercise *some* of his or her rights, not
all.
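In toy Python terms (my own framing, not any real system's API), the
contrast between the two models is just this:

```python
user_caps = {"mail", "photos", "code", "dotfiles"}   # the uid "aggregate"

def run_with_uid(program):
    # Traditional model: the child inherits *everything* the user holds.
    return program(user_caps)

def run_with_caps(program, caps):
    # Capability model: the child gets only what was explicitly passed.
    assert caps <= user_caps
    return program(caps)

leak = run_with_uid(lambda caps: caps)                    # all four
least = run_with_caps(lambda caps: caps, caps={"photos"}) # just one
```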

We wish to hand out capabilities in a way that more accurately
reflects the (sub)set of the user's rights being exercised by the
program (I imagine that's what an "authority boundary" is).  In many
programs, the capabilities required for operation can't be determined
at program startup.  We can't just let the program grab additional
capabilities at will, so some trusted component must intervene.
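Here's the shape of the trusted component as I understand it, again as
a hypothetical Python sketch (the class and method names are mine): the
mediator takes the user's instruction over a channel the application
cannot spoof, and grants a capability only when the application's
request matches what the user actually ordered.

```python
class Mediator:
    """Trusted intermediary between an application and the user's
    capabilities.  The application cannot forge the user's input."""
    def __init__(self, user_caps, ask_user):
        self._caps = user_caps
        self._ask = ask_user      # input channel the app cannot spoof

    def request(self, app_claim):
        granted = self._ask()     # what did the user actually order?
        if app_claim != granted:
            raise PermissionError("application's claim != user's order")
        return self._caps[granted]

m = Mediator({"report.txt": "cap:report"},
             ask_user=lambda: "report.txt")
m.request("report.txt")           # honest claim: capability granted
```

If the application lies (claims "photos" when I ordered "report.txt"),
the request is refused; that's exactly why M must see my input
directly.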

The question I want to ask, then, is: "how far should that trusted
mediator intervene?"  Let's say I want to add some capabilities to
application A, using mediator M.  If M doesn't directly accept input
from the user, A can lie to M about what I ordered.  So M needs to
inspect and interpret input on its own.

I don't think an agent that always interposes itself can play
well with scripting.  How would rename(1) work?  Or emacs?  GIMP?
Should they ask the user if they can temporarily fork a child with a
huge number of capabilities?  How do we know what to give to that
child?  Should we put the target files in a directory with some
trusted utility?  What's the point of scripting then?

Maybe scripting can be dismissed as fundamentally insecure, but the
line would be hard to draw, I think.

-- 
Jun Inoue
address@hidden



