l4-hurd

Server granularity


From: Jonathan S. Shapiro
Subject: Server granularity
Date: Sat, 15 Oct 2005 16:18:22 -0400

On Fri, 2005-10-14 at 23:23 -0700, Jun Inoue wrote:
> Reading subsequent posts, Bas and you both seem to be using the word
> "constructor server" to mean the thing I called "constructor process",
> i.e. a server that *implements* a constructor object.  So perhaps I
> was just being annoying...sorry.

There is no need to apologize. Confusion is something we should all work
together to cure! :-)


I would like to describe another difference between EROS/Coyotos and
Hurd: our assumptions about the granularity of servers. From various
statements that have been made by Hurd people in this discussion, it
sounds like the Hurd structure is traditionally client/server: a server
serves many objects. This design is very natural -- and probably
unavoidable -- in a non-persistent system.

In EROS and Coyotos, we very often adopt designs where one process
implements only one object. For example, each EROS constructor is a
process. There is no "constructor server" that implements all
constructors. [This was the source of my reaction to the term
"constructor server" when Bas used it.]

We do not do this universally. All space banks, for example, are
implemented by a single server. We conventionally refer to the root of
the space bank hierarchy as the "prime space bank". In practice, we
often refer to the *server* by this name too; it is rarely ambiguous in
context.
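
To make this concrete, here is a minimal sketch of the kind of state
that single server holds: a hierarchy of banks rooted at the prime
bank, where destroying a bank reclaims the storage allocated from it
and from all of its sub-banks. This is plain C with invented names,
not the actual EROS implementation.

  /* Minimal sketch: the state of a single space-bank server process.
     Plain C with invented names; not the EROS implementation. */

  #include <stdlib.h>

  struct bank {
    struct bank  *parent;
    struct bank  *first_child;     /* sub-banks created from this bank */
    struct bank  *next_sibling;
    unsigned long pages_allocated; /* pages handed out from this bank  */
  };

  /* The root of the hierarchy: the prime space bank. */
  static struct bank prime_bank;

  static struct bank *bank_create_child(struct bank *parent)
  {
    struct bank *b = calloc(1, sizeof *b);
    if (!b)
      return NULL;
    b->parent = parent;
    b->next_sibling = parent->first_child;
    parent->first_child = b;
    return b;
  }

  /* Destroying a bank reclaims its own pages and, recursively, those
     of every bank created from it.  (Unlinking from the parent's child
     list is omitted here for brevity.) */
  static void bank_destroy(struct bank *b)
  {
    for (struct bank *c = b->first_child; c != NULL; ) {
      struct bank *next = c->next_sibling;
      bank_destroy(c);
      c = next;
    }
    /* ...return b->pages_allocated pages to the underlying store... */
    if (b != &prime_bank)
      free(b);
  }

All of this state lives inside one process; that is what "one process,
many objects" means in the space bank case.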

Let me give some examples of which objects are done which way in our
system, and then try to give a rationale for each case (a sketch
contrasting the two shapes follows the list):

  spacebank:     one process, many objects
  window system: one process, many objects
  TCP stack:     one process per stack
  constructor:   one process per constructor
  file server:   one process, one file (KeyKOS)
                 one process, many files (EROS)
  directory:     one process, one object  [1,2]
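
To make the structural difference concrete, here is a sketch of the
two shapes in plain C. None of this is EROS code; the names are
invented purely for illustration.

  #include <stddef.h>

  struct object { int state; };       /* stand-in for per-object state */

  /* Shape 1: one process, many objects.  The server holds a table of
     all the objects it implements, and every request must name which
     object it is about; keeping clients' objects apart is the server's
     own responsibility. */
  struct multi_object_server {
    struct object *objects;
    size_t         n_objects;
  };

  static struct object *
  server_lookup(struct multi_object_server *s, size_t id)
  {
    /* The server itself must route -- and access-check -- every request. */
    return (id < s->n_objects) ? &s->objects[id] : NULL;
  }

  /* Shape 2: one process, one object.  The process *is* the object: its
     address space holds that object's state and nothing else, so the
     kernel's isolation of processes is the isolation of objects. */
  struct single_object_process {
    struct object the_object;
  };

In these terms, the constructor is shape 2; the space bank and the
window system are shape 1.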

In the case of the space bank and the window system, the server is
guarding some real resource that is part of the TCB (the trusted
computing base).

In KeyKOS, the rule was that each file was a separate process. In EROS
we have relaxed this, because we eventually realized that a shared
collection of files already requires homogeneous storage sources and
mutual trust among the participants. Using a single server in this case
lets us be more space efficient. However, there may be many
instantiations of the file server -- one for each user's files, for
example.

Also, we have no single process that implements a traditional file
system. If you think about it, you will conclude that a traditional file
system has three jobs (a sketch of the split follows the list):

  implementing the file data structure and organization (i.e. indirect
    and content blocks) => EROS file server
  implementing a human-compatible naming and organization scheme
    (directories) => EROS directory object
  implementing I/O in a way that preserves consistency
    => handled by checkpoint mechanism
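
Here is a sketch of the resulting split, again in plain C with
invented names; these are not the actual EROS interfaces.

  #include <stddef.h>
  #include <stdint.h>

  /* Job 1 -- the file server: content and indirection blocks, i.e. how
     a file's bytes are stored.  It knows nothing about names. */
  struct file;
  size_t file_read (struct file *f, uint64_t offset, void *buf, size_t len);
  size_t file_write(struct file *f, uint64_t offset,
                    const void *buf, size_t len);

  /* Job 2 -- the directory object: mapping human-readable names to
     capabilities for files (or for other directories). */
  struct directory;
  struct file *dir_lookup(struct directory *d, const char *name);
  int          dir_link  (struct directory *d, const char *name,
                          struct file *f);
  int          dir_unlink(struct directory *d, const char *name);

  /* Job 3 -- consistency -- has no interface here at all: it is
     provided system-wide by the checkpoint mechanism, so neither
     server needs fsync-style entry points. */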

In practice, I think we need to reconsider the directory case in the
same way that we reconsidered the file server case, because traversing
directory hierarchies is too expensive in the current design.


The advantage of the "one process, one object" design is that it
simplifies sandboxing. Any time you have a single process implementing
multiple objects, you are forced to choose between two design positions:

  (1) The server is trusted, and collaborates fully in the enforcement
      of security policies. That is, it is part of the *systemwide* TCB.

  (2) The server is untrusted, and therefore does NOT cooperate in
      the enforcement of isolation contracts.

Case (2) is acceptable only if you have reason to know that all of the
objects implemented by that server exist in a coherently isolatable
"group" that you will never need to subdivide for purposes of isolation.

For this reason, we tend to prefer the "one process implements one
object" approach unless there is a really compelling reason to aggregate
things together.


shap




