l4-hurd

Re: design goals vs mechanisms (was: Re: Let's do some coding :-)


From: Marcus Brinkmann
Subject: Re: design goals vs mechanisms (was: Re: Let's do some coding :-)
Date: Wed, 26 Oct 2005 23:28:17 +0200
User-agent: Wanderlust/2.14.0 (Africa) SEMI/1.14.6 (Maruoka) FLIM/1.14.7 (Sanjō) APEL/10.6 Emacs/21.4 (i386-pc-linux-gnu) MULE/5.0 (SAKAKI)

At Wed, 26 Oct 2005 22:43:06 +0200,
Bas Wijnen <address@hidden> wrote:
> On Wed, Oct 26, 2005 at 10:16:47PM +0200, Marcus Brinkmann wrote:
> Agreed.  Actually the main two reasons I see persistence as a good thing at
> the moment are removing the need for boot scripts (including rebooting to test
> if they work as expected), and the fact that they can remove the need for
> passive translators.

Just for completeness, let me add one of the reasons why KeyKOS/EROS
have persistence: One of the goals was to have some evidence that the
system is secure.  This is very hard to do in normal bootstrap
procedures, where the system starts out with full authority, and then
gradually removes authorities when passing capabilities to other
processes.  It's very hard to argue successfully that the privileges
are correctly removed at all the right places.

Persistence "solves" this by first creating a checkpoint of the
system in its initial state manually, then arguing about its validity
and correctness based on the static disk image, and then "starting"
the system by simply running it.

In other words: Such a system is never booted, not even the first time
it starts up.  The initial capabilities exist at all the right places
with the right authorities at the moment you write the disk image out.
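As a toy illustration of this idea (nothing here is KeyKOS/EROS code;
all names are invented for this mail): build the initial capability
graph offline, argue about the static image, and treat every start as
a resume of that checkpoint.

```python
# Toy illustration only: not KeyKOS/EROS code, all names invented.
import copy

def build_initial_image():
    # Capabilities are placed exactly where they belong, once, by hand.
    return {
        "root_fs":  {"caps": ["disk.read", "disk.write"]},
        "login":    {"caps": ["root_fs.lookup"]},
        "user_app": {"caps": []},        # starts with no authority
    }

def verify_image(image):
    # Static argument about the image, e.g.: only root_fs may hold
    # raw disk capabilities.
    for name, proc in image.items():
        if name == "root_fs":
            continue
        assert not any(c.startswith("disk.") for c in proc["caps"]), \
            f"{name} must not hold raw disk capabilities"

def start(checkpoint):
    # "Booting" is just resuming the verified checkpoint; the system
    # is never bootstrapped from a state of full authority.
    return copy.deepcopy(checkpoint)

checkpoint = build_initial_image()
verify_image(checkpoint)             # argue correctness offline
running = start(checkpoint)          # every start is a resume
```

The point of the sketch is that no code path ever holds more
authority than the verified image grants.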

> Ok, so the current list of goals is:

I hope, for now, you mean "possible goals" :)

> - security (needs to be more specific)

I agree very strongly that this needs to be more specific.  It's a
very vague term.

> - confinement
> - confinement with endogenous verification

- flexible support for a broad variety of security policies

(this goal is a tad generic, but is intended to hint at the use of
a capability system :)

> - persistence
> - no ACLs

Well, you can implement ACLs on top of capabilities.  So maybe
"fine-grained authority delegation possible" would be a goal.
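To illustrate that claim with a hypothetical sketch (Capability,
AclWrapper and friends are invented names, not any real kernel
interface): an ACL layer can simply hold a powerful capability and
hand out restricted copies based on a principal list.

```python
# Hypothetical sketch: ACL semantics layered on top of capabilities.
# All names are illustrative, not from any real kernel interface.

class Capability:
    """An unforgeable token granting direct access to an object."""
    def __init__(self, obj, rights):
        self.obj = obj
        self.rights = frozenset(rights)

    def invoke(self, op, *args):
        if op not in self.rights:
            raise PermissionError(f"capability lacks right: {op}")
        return getattr(self.obj, op)(*args)

class AclWrapper:
    """ACL semantics built purely out of capability delegation: the
    wrapper holds the real capability and hands out restricted copies
    to principals that appear on the list."""
    def __init__(self, cap):
        self._cap = cap
        self._acl = {}                # principal -> set of rights

    def grant(self, principal, rights):
        self._acl.setdefault(principal, set()).update(rights)

    def capability_for(self, principal):
        rights = self._acl.get(principal, set())
        if not rights:
            raise PermissionError(f"{principal} not on ACL")
        # Delegate a capability restricted to the listed rights.
        return Capability(self._cap.obj, rights & self._cap.rights)

class File:
    def __init__(self, data):
        self.data = data
    def read(self):
        return self.data
    def write(self, new):
        self.data = new

f = File("hello")
full = Capability(f, {"read", "write"})
acl = AclWrapper(full)
acl.grant("alice", {"read"})

alice_cap = acl.capability_for("alice")
print(alice_cap.invoke("read"))      # alice holds a read-only capability
```

Note that the reverse direction does not hold: you cannot get the
fine-grained delegation of capabilities back out of a pure ACL
system, which is why this might be the better way to state the goal.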

> - persistent sessions for users
> - hard real time
> - soft real time
> - stability
> - robustness (what's the difference with stability?)

Stability means the system does not crash.  Robustness means that if a
part of the system does crash, you can recover.

For example, the Hurd filesystem server design has some robustness
built into it.  If a filesystem server crashes, it can be restarted
(as long as it is not the root filesystem!).  But, they are not
necessarily stable.  We have little confidence in the correctness of
the code, and the code is not written to deal very well with lack of
resources, etc.

If you want stability, you probably want to do some of the following:

* try formal verification
* allocate a fixed amount of resources statically up front,
  instead of dynamically at run time
* keep the source code small and simple, to allow easier verification
* etc

If you want robustness, you may want to try to:

* implement fault detection, recovery and tolerance
* add redundancy
* be careful about your dependency hierarchies
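The first robustness point could look roughly like this toy
supervisor (all names invented for illustration): it detects a dead
service and restarts it, with a bounded retry count so a persistently
failing service does not loop forever.

```python
# Toy sketch of fault detection and recovery; names are invented.

class Service:
    def __init__(self, name):
        self.name = name
        self.alive = True
    def crash(self):
        self.alive = False

class Supervisor:
    """Detects dead services and restarts them, bounding restarts so
    a service that keeps failing does not loop forever."""
    def __init__(self, factory, max_restarts=3):
        self.factory = factory
        self.max_restarts = max_restarts
        self.restarts = 0
        self.service = factory()

    def check(self):
        if not self.service.alive:
            if self.restarts >= self.max_restarts:
                raise RuntimeError("giving up: restart limit reached")
            self.restarts += 1
            self.service = self.factory()   # recover by restarting
        return self.service

sup = Supervisor(lambda: Service("ext2fs"))
sup.service.crash()                 # simulated fault
recovered = sup.check()             # fault detected, service restarted
```

This is exactly the kind of recovery a restartable filesystem server
admits, and exactly what the root filesystem, sitting at the bottom
of the dependency hierarchy, does not.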

> - small memory footprint
> - support for legacy applications

I forgot to add:

- resource accountability
- setting diverse resource distribution policies

The first goal is a precondition for the second.  It means that I
_know_ which resources are consumed by which process or class of
processes (to clarify: the administrator doesn't need to know how
many resources my individual processes consume, he just needs to know
how many resources my user consumes.  As a user, I might need to know
how much my individual processes use, in a hierarchical fashion).

The second goal means that I can control the consumption of resources.
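A hedged sketch of what hierarchical accountability might look like
(Account and the rest are invented names, not a proposed interface):
usage is charged to leaf accounts and aggregates up the tree, so the
administrator sees the per-user total while the user can still
inspect individual processes.

```python
# Invented sketch of hierarchical resource accounting.

class Account:
    def __init__(self, name, parent=None):
        self.name = name
        self.used = 0
        self.children = []
        if parent:
            parent.children.append(self)

    def charge(self, amount):
        # Resources are always charged to the account that consumed them.
        self.used += amount

    def total(self):
        """Resources consumed by this account and everything below it."""
        return self.used + sum(c.total() for c in self.children)

root = Account("system")
user = Account("bas", parent=root)
shell = Account("shell", parent=user)
editor = Account("editor", parent=user)

shell.charge(10)
editor.charge(25)

print(user.total())    # the admin's view: 35 for the whole user
print(editor.total())  # the user's view: 25 for one process
```

With accounting in this shape, a distribution policy (the second
goal) reduces to comparing `total()` against a quota at whatever
level of the hierarchy the policy cares about.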

> I'd say all of them are nice. :-)  But it may be possible that we need to
> choose some and drop others.

Well, of course this list is hand-picked based on present and past
discussion.  My feeling is that we are missing a great many potential
goals here.  Does anybody know of an "operating system feature
matrix" where we can cross lots of little check boxes?
 
> > I think in the end we will be faced with a dilemma: Either we have to accept
> > the logical consequences of our stated design goals, which may be deep.
> > This includes removing any paradoxes by dropping some in a set of
> > conflicting design goals.  Or we drop some of our design goals to open up
> > space for other engineering mechanisms.
> > 
> > What's it gonna be?
> 
> Eh...  What did you just say?  Either we drop some goals so the rest is
> possible, or we make the rest possible by dropping some goals?  I don't think
> I understand the dilemma...

It sounded a bit strange to me when I wrote it, too :) I wasn't being
very clear.  This is actually a two-stage process.  Clearly, you have
to eliminate conflicts first, to get a feasible design at all.  Then,
if you notice that you don't like the mechanisms you need to
implement the design, you have to cut some of the design goals to
bring more mechanisms into your tool set.

For example, let's say that the only feasible mechanism to support
the security policies we want is a capability system.  Then you are
pretty much forced to implement one, even if you don't want to,
because you said you wanted to support those security policies,
right?  Or you can say: ah, no, I don't want those security
properties at all, and then you may not need to implement a
capability system.

This depends on how important the goals are to us, of course, and how
good our knowledge of the engineering techniques is.  In the end, we
may have to prioritize our goals.

My current feeling is that many of us would happily subscribe to the
above goals and then some, but have a certain tendency to reject the
consequences.  Maybe because we don't understand why the consequences
are necessary---we may believe that there are alternative mechanisms
achieving the same.  Or maybe because we are afraid that the required
mechanisms are too hard to realize.  All of these are valid concerns.
In each case we have to decide whether we try to get a better
understanding and accept the consequences (if they really are
consequences ;), whether we can realize them, or whether we want to
step down from some of our design goals.

What troubles me is when someone says they want a very secure system
(insert specification of security requirements here), and then says
that the native interface should be POSIX.  That appears to be a
contradiction.  It can be resolved by showing how to make POSIX
secure (there is a lot of experience with that, apparently negative),
by dropping security requirements, or by deemphasizing POSIX.  In the
end, you have to pick.

This is a very difficult road, and I have a lot of sympathy for
outright disbelief and rejection.  After all, I have been in denial of
the lessons from the KeyKOS/EROS project for a couple of years.  The
first time I heard of it, I said: "Uh, persistence, that's just too
weird."  The second and third time I said the same.  Only much later
did I make the connection that some security requirements very
strongly motivate a persistent design.  And even then I had a hard
time keeping in mind the arguments that led to these conclusions.  It
took me a while to get from "secure, but persistence" to "secure
because of persistence".

It's like unlearning a bad behaviour.  It's hard, it's discomforting,
it takes time.  You keep dropping back into old patterns.  You might
temporarily find some replacement bad behaviour.  And eventually you
wonder how you could ever not see it ;)

And at that point you can start to take a critical look at your new
situation.

Thanks,
Marcus
