l4-hurd
From: Marcus Brinkmann
Subject: Re: Alternative network stack design (was: Re: Potential use case for opaque space bank: domain factored network stack)
Date: Mon, 08 Jan 2007 04:58:56 +0100
User-agent: Wanderlust/2.14.0 (Africa) SEMI/1.14.6 (Maruoka) FLIM/1.14.7 (Sanjō) APEL/10.6 Emacs/21.4 (i486-pc-linux-gnu) MULE/5.0 (SAKAKI)

At Mon, 8 Jan 2007 03:04:55 +0100,
Pierre THIERRY <address@hidden> wrote:
> 
> Marcus Brinkmann wrote on 08/01/2007 at 00:51:
> > I don't think so.  The effect of the proposal is intended to be the
> > following: pages can be "tagged" so that they can be made opaque by
> > certain designated (and thus authorized) subsystems only.  Ownership
> > of the resource remains with the party who did the tagging.  I think
> > that this description captures the main idea.
> 
> So basically you introduce opaque memory, but only some parts of the
> system can use it.

Yes.
 
> How will they be differentiated, and how will it be extensible?

The decision about which processes can access a tagged resource with
extra privileges would be made by the process doing the tagging,
according to its own security policy.
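
To make that concrete, here is a tiny sketch in C.  Everything in it is
invented for illustration; none of these types or functions are actual
Hurd, L4 or Coyotos interfaces.  It only shows the shape of the idea:
the tagging party keeps ownership of the pages, and its policy alone
decides which designated subsystems may make them opaque.

#include <stdbool.h>
#include <stddef.h>

typedef unsigned long subsys_id_t;   /* hypothetical subsystem identity */

typedef struct tag
{
  subsys_id_t owner;        /* the party that did the tagging; ownership
                               of the pages never moves away from it    */
  subsys_id_t allowed[8];   /* subsystems designated by the owner as
                               permitted to make the pages opaque       */
  size_t n_allowed;
} tag_t;

/* The tagging process applies its own security policy here: only the
   subsystems it has explicitly designated may request opacity; every
   other requester is refused.  */
static bool
may_make_opaque (const tag_t *tag, subsys_id_t requester)
{
  for (size_t i = 0; i < tag->n_allowed; i++)
    if (tag->allowed[i] == requester)
      return true;
  return false;
}

/* Withdrawing all designations returns the pages to fully transparent
   use; the owner can do this at any time, because ownership stayed
   with it.  */
static void
revoke_designations (tag_t *tag)
{
  tag->n_allowed = 0;
}

Note that the default, with no designations at all, leaves the pages
fully transparent; opacity only ever appears where the owner granted
it.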

> If it
> is, what will prevent the system from being configured so that anyone
> can use it?

Pierre, no matter how often you ask this question, the answer will
always be the same: "If anything, then only the license."  I can only
build a system that is secure by default.  You have to break it
yourself.  That is true no matter what.

> What I don't understand in your proposal is that it looks like "I don't
> want opaque memory, but I need it, so I'll use it and pretend it's not
> there".
> 
> The net effect is that if someone wants to use opaque memory to do the
> harm you want to make impossible, it seems to me that his task to make
> it possible will be quite simple.

I can't and don't want to make anything "impossible".  I have said so
many, many times, but you keep bringing it up.  I notice that there
are quite a few such ideas that you keep bringing up again and
again, and the answer is always the same, but somehow I am not getting
through to you.  Do you have the same impression?  And do you have any
suggestion for how we can work this out and avoid endless repetition?

The best we can achieve in principle is to make a system safe by
default, and require any deviation from the safe default to be
authorized by explicit user action.  That is luckily also sufficient
from a practical point of view.
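
To illustrate what I mean, here is a minimal sketch in C (again, the
names are made up for illustration and are not a real Hurd interface):
the only way to reach the unsafe path is an explicit, interactive
answer from the user, and no answer counts as "no".

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical prompt; a real system would route this through a
   trusted path to the user rather than plain stdio.  */
static bool
user_explicitly_authorized (const char *action)
{
  char answer[8] = "";
  printf ("Allow \"%s\"?  This deviates from the safe default. [y/N] ",
          action);
  fflush (stdout);
  if (fgets (answer, sizeof answer, stdin) == NULL)
    return false;               /* no answer counts as "no" */
  return answer[0] == 'y' || answer[0] == 'Y';
}

int
main (void)
{
  /* The safe behaviour is the default; the deviation happens only on
     explicit user action, never silently.  */
  if (user_explicitly_authorized ("grant opaque memory to this program"))
    puts ("deviation authorized by the user");
  else
    puts ("staying with the safe default");
  return 0;
}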

> Even if the discrimination between services that are able to use opaque
> memory and services that can't is hardwired into Hurd's source code in a
> way that prevents any extensibility that could be used to bypass that
> protection, it will be a matter of recompiling the kernel or the space
> bank, and the resulting "harmful" one could be transparently used in
> place of the "harmless" one[1].
> 
> And patches to do so would probably be very easy to manage and keep
> up-to-date.
> 
> I don't agree with the transparent memory design, but it could probably
> make it much harder to run undebuggable proprietary software on the
> Hurd.  On the other hand, I don't see what your restricted opaque
> memory can really achieve, apart from making the Hurd less powerful
> than it could be (or just as powerful, but with that power a bit harder
> to use).

It is precisely the goal of security systems to make systems less
"powerful" than insecure ones.  That's why one approach to secure
design is to first take everything away, and then only add back little
bits of power where they are truly needed.  Of course, what is needed
depends on both technical issues and policies.

Thanks,
Marcus




