  [ Should I make the distinction between a ``process'' in
    a monolithic-kernel and a ``task'' in a micro-kernel? ]

Compared to a micro-kernel, resource accounting in a monolithic-kernel
is easy.  The monolithic-kernel has control over everything needed to
track and control the resources each process is using.  All processor
time the kernel spends executing on behalf of a process can be
attributed directly to the process that caused the kernel to do the
work, and because memory is also controlled by the kernel, it too can
be easily attributed and tracked.  Either it is associated with a
single process, or it is a shared resource (like the file-system
cache) that the kernel can manage so that applications get enough
memory when they want it and the kernel retains enough for its own
needs.  A micro-kernel is a slightly different beast.  Mutually
untrusting tasks are suddenly expected to cooperate with each other,
and one of the abstractions that allows this to happen safely is
known, suitably, as a ``Container''.

In a monolithic-kernel, if a process needs to read data out of a file,
it can simply allocate a buffer and ask the kernel to read the
contents of the file into it.  The kernel can make all sorts of
assurances that the buffer is valid, and it can attribute the work
spent reading the data back to the process that initiated the request.
In a micro-kernel, if a task asks the file-system to read a block of
data from a file, we need some way for the file-system task to
describe to the kernel that ``this memory I've got allocated here
should really be attributed to that task over there, because I'm doing
something it asked me to''.  However, we want users to be able to
control their own parts of the file-system, and if any task that looks
like a file-system can start saying ``I'm really doing work for that
task over there'', how is the kernel to know that it is telling the
truth?  What's to stop a file-system lookalike from lying about all of
its memory?

Likewise, if the file-system can somehow be made responsible, there
can be no way for the users of the file-system to say ``no, I didn't
ask the file-system to do that for me''.  If a task could do that,
then whenever it needed more memory it could simply ask the
file-system to read data off disk, ignore the data it actually got
back, and use the memory the file-system placed the data in.  The
solution to this conundrum is surprisingly subtle.

We force the task that initiates the request to allocate the necessary
memory and give this memory directly to the file-system.  The
file-system can then safely read the contents of the file into this
memory and, when it finishes, tell the initiating task that the
requested data is now available.  This solves the problems noted
above: the file-system task cannot attribute the expended resources
(memory, at least) inaccurately, and the file-system is safe from
tasks abusing it because the client task is forced to expend its own
resources to get work done.
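The accounting consequence of this arrangement can be sketched in a
few lines of C.  This is a user-space simulation, not kernel code: the
task structure, client_alloc and fs_read are invented names used only
to show that the buffer's cost lands on the client no matter what the
file-system does.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* Simplified picture of a task's resource account. */
typedef struct {
    size_t bytes_owned;   /* memory charged to this task */
} task_t;

/* The client allocates the buffer from its own quota... */
static void *client_alloc(task_t *client, size_t size)
{
    client->bytes_owned += size;
    return malloc(size);
}

/* ...and the file-system only fills the buffer it was handed.  It
   never allocates on the client's behalf, so it has nothing to lie
   about when resources are accounted. */
static void fs_read(const char *file_data, void *client_buf, size_t size)
{
    memcpy(client_buf, file_data, size);
}
```

Because the file-system never touches the allocator, a malicious
client cannot use read requests to obtain memory charged to anyone but
itself.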

The next problem is how the memory gets allocated and passed to the
file-system in a safe way.  The obvious answer is to use a trusted
intermediary.  As previously noted, physmem is the logical choice, as
trust in physmem is one of the bases (axioms?) of this system's
design.  To initiate the request, the task contacts physmem and asks
it for a container of the appropriate size; physmem, if it is able to,
gives a container back to the originating task.  Now that the task has
a container, it can pass it down to the file-system, and the
file-system in turn can fill the container with data.
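The round trip might look roughly like the following sketch.  The
names physmem_alloc_container and fs_fill_container are invented
stand-ins for the real IPC, and the page bookkeeping is deliberately
naive; the point is only the shape of the protocol: physmem grants the
container, the client hands it on, the file-system fills it.

```c
#include <assert.h>
#include <stddef.h>

#define PHYSMEM_TOTAL_PAGES 16

/* A container: pages granted by physmem, later filled by a server. */
typedef struct {
    size_t pages;       /* pages backing this container */
    int    filled;      /* has the file-system written data yet? */
} container_t;

static size_t physmem_free_pages = PHYSMEM_TOTAL_PAGES;

/* physmem grants a container only if it can back it with real pages;
   an over-large request is simply refused. */
static int physmem_alloc_container(size_t pages, container_t *out)
{
    if (pages > physmem_free_pages)
        return -1;
    physmem_free_pages -= pages;
    out->pages = pages;
    out->filled = 0;
    return 0;
}

/* The file-system fills the container it was handed; it never has to
   allocate, or account for, the memory itself. */
static void fs_fill_container(container_t *c)
{
    c->filled = 1;
}
```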

Each container has a pair of capabilities associated with it.  The
``Control'' capability allows the task to resize a container (add and
remove pages); the ``Access'' capability allows the container to be
mapped into the task's address space and some limited locking to be
performed, but nothing else.
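One way to picture the split is as a rights bitmask checked on every
operation.  This is an illustrative sketch, not the real capability
system: the types and functions are invented, and a genuine kernel
would enforce the check on its side of the IPC boundary rather than in
library code.

```c
#include <assert.h>
#include <stddef.h>

/* The two rights a container capability may carry. */
typedef enum {
    CAP_CONTROL = 1 << 0,   /* may add and remove pages */
    CAP_ACCESS  = 1 << 1    /* may map and perform limited locking */
} cap_rights_t;

typedef struct {
    size_t pages;
} container_t;

/* A capability names a container and carries a set of rights. */
typedef struct {
    container_t *container;
    unsigned     rights;
} cap_t;

/* Resizing demands the Control capability. */
static int container_resize(cap_t cap, size_t new_pages)
{
    if (!(cap.rights & CAP_CONTROL))
        return -1;              /* permission denied */
    cap.container->pages = new_pages;
    return 0;
}

/* Mapping demands the Access capability; it confers nothing else. */
static int container_map(cap_t cap)
{
    return (cap.rights & CAP_ACCESS) ? 0 : -1;
}
```

Keeping the two rights on separate capabilities means a task can hand
a file-system the Access side of a container without also handing it
the power to grow the container at the task's expense.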
