Re: capability address space and virtualizing objects


From: Jonathan S. Shapiro
Subject: Re: capability address space and virtualizing objects
Date: Thu, 28 Aug 2008 11:48:56 -0400

> From: Neal H. Walfield <address@hidden>
> 
> I'm currently working on IPC in Viengoos.  I've decided to mostly
> divorce IPC from threads by reifying message buffers.  Thus, instead
> of a thread sending a message to another thread, a thread loads a
> message into a kernel message buffer and invokes another message
> buffer specifying the first as an argument.

Interesting. Out of curiosity:

  1. Are the buffers bounded in size?
  2. Who allocates their storage?
  3. Are message boundaries preserved?

Also, have you concluded that the double-copy cost associated with
buffering is acceptable? If so, I might suggest a mild refinement to the
protocol you seem to be describing:

  1. Sender allocates/obtains a message and specifies the destination
     queue *before* copying the payload.
  2. Sender copies the payload.
  3. Sender "presses transmit".

By knowing the destination early, it becomes possible for the kernel
IPC path to perform a direct copy-through in many cases. This is
conceptually similar to the pipe optimizations that were added to Linux.
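
For concreteness, here is a minimal sketch of that refined send
sequence, in illustrative C. Nothing below is Viengoos or Coyotos API;
the names and types are assumptions. The point is only that binding the
destination before the payload copy lets the copy write straight
through to the receiver's buffer, turning the double copy into a
single one.

    #include <stddef.h>
    #include <string.h>

    #define MSG_MAX 256                    /* hypothetical size bound */

    struct msg_buf {
        size_t        len;
        unsigned char payload[MSG_MAX];
    };

    struct msg_send_ctx {
        struct msg_buf *dest;              /* bound before any copy */
    };

    /* Step 1: name the destination queue up front. */
    void msg_bind_dest(struct msg_send_ctx *ctx, struct msg_buf *dest)
    {
        ctx->dest = dest;
    }

    /* Step 2: because the destination is already known, the payload
       copy can write through to the receiver's buffer: one copy, not
       a staging copy plus a delivery copy. */
    int msg_copy_payload(struct msg_send_ctx *ctx,
                         const void *data, size_t len)
    {
        if (len > MSG_MAX)
            return -1;
        memcpy(ctx->dest->payload, data, len);
        ctx->dest->len = len;
        return 0;
    }

    /* Step 3: "press transmit" - commit the message, e.g. by marking
       the destination full and activating its designated thread. */
    void msg_transmit(struct msg_send_ctx *ctx)
    {
        (void)ctx;                         /* activation elided */
    }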

> A message buffer contains a capability slot designating
> a thread to optionally activate when a message transfer occurs.

I am not clear what "optionally activate" means here. If it is important
to the question that you are trying to ask, then could you clarify?

> When the message in SRC is delivered to DEST, the thread designated by
> SRC is activated, indicating that the message in SRC has been
> delivered, and the thread designated by DEST is activated indicating
> that a message in DEST has arrived.

Ah. So what you mean to say is not that the activation is optional, but
that the presence of a thread capability in the buffer is optional?

If so, I would suggest a change of terms. What you are describing as
"buffers" have traditionally been called ports or mailboxes. Generally,
a buffer holds payload, while the thing it is queued on is a port,
queue, or mailbox.

> This interface poses a problem for virtualization.  One of the goals
> of Viengoos is that all interfaces be virtualizable.  This has (so
> far) included the ability to fully virtualize kernel objects.
> Virtualizing an object is done by way of a message buffer, on which
> the same interface is implemented as the object that is being
> virtualized.
> 
> This means that to virtualize a cappage...

Initially I thought that you were concerned with virtualizing
buffers/mailboxes, but now you seem to be speaking about virtualizing
cappages. I will proceed on the assumption that your goal is to
virtualize cappages. If I have misunderstood, please clarify.

> , it must also be possible to
> virtualize cappage indexing.  Imagine that to find the source message
> buffer in the above msg_buf_enqueue invocation includes traversing a
> virtualized cappage.  That is, after translating some bits of the
> address, the kernel encounters a message buffer.  This means that
> instead of a kernel-implemented cappage, a message buffer is
> encountered while traversing the address space.  Instead of failing,
> the kernel should conceptually index the message buffer.  This means
> sending it an index message with the residual address.
> 
> This is a problem.  The kernel cannot wait for the message buffer to
> reply.  It can also not allocate another message buffer.  The reason
> is that these techniques violate other Viengoos design principles.
> Also, this cannot be reflected to the sender as a "retry using" as the
> virtualization would not be transparent.
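
To restate the failing case in code: a minimal sketch (illustrative C,
hypothetical names) of the traversal described above, in which the
walk meets a user-level message buffer and is left holding residual
address bits that the kernel can neither block on nor buffer.

    #include <stdint.h>

    #define SLOT_BITS 8
    #define SLOTS     (1u << SLOT_BITS)

    enum obj_type { OBJ_CAPPAGE, OBJ_MSG_BUF };

    struct cap     { enum obj_type type; void *object; };
    struct cappage { struct cap slot[SLOTS]; };

    /* Walk the capability address space; 'bits' counts the address
       bits still to translate (assumed a multiple of SLOT_BITS).
       Returns 0 on success, -1 in the problem case: a user-level
       message buffer is met mid-walk, and the remaining 'bits' of
       'addr' would have to be shipped to it in an "index" message -
       which the kernel can neither allocate a buffer for nor wait
       on. */
    int translate(struct cap *c, uint64_t addr, unsigned bits,
                  struct cap **out)
    {
        while (bits > 0) {
            if (c->type == OBJ_MSG_BUF)
                return -1;                 /* the problem case */
            bits -= SLOT_BITS;
            struct cappage *pg = c->object;
            c = &pg->slot[(addr >> bits) & (SLOTS - 1)];
        }
        *out = c;
        return 0;
    }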

I am not sure if the following will prove to be helpful, but let me
blather for a moment.

There is a situation in Coyotos that may be analogous: the sender sends
a message and is willing to block for delivery, but the receiver's
buffer contains invalid pages. The appropriate keeper must be notified,
but the kernel will not hold any storage.

In the Coyotos case, what we do is roll the transmission back (in an
unbounded message system we could instead leave the two processes in
mid-transfer). The kernel up-calls the handler, attributing the call to
the sender (it could equally well be attributed to the receiver). The
handler, on reply, restarts the alleged sender, thereby resuming the
message transfer.
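
In outline, the pattern looks like the following. This is illustrative
C with hypothetical names; it shows the shape of the recovery, not
Coyotos code.

    struct thread;

    enum ipc_status { IPC_DONE, IPC_FAULT };

    /* Attempt the transfer; fails if the receive buffer contains
       invalid pages.  Declared but not defined: a stand-in. */
    enum ipc_status try_transfer(struct thread *sender,
                                 struct thread *receiver);
    void upcall_keeper(struct thread *faulter);  /* notify handler */

    void ipc_send(struct thread *sender, struct thread *receiver)
    {
        if (try_transfer(sender, receiver) == IPC_FAULT) {
            /* Roll back: the kernel holds no storage across the
               fault; the sender looks as if it had not yet sent. */
            upcall_keeper(sender);   /* attributed to the sender */
            /* The keeper's reply restarts the sender, which simply
               re-executes the send, resuming the transfer. */
        }
    }

Because the rollback leaves the sender exactly where it started, the
keeper needs no knowledge of in-flight message state, and the kernel
never pins storage while waiting on a user-level handler.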


Now the problem that you face in managing mailboxes is not quite
analogous. Ultimately, the problem you are really dealing with is that
you cannot use the communication substrate primitives to simulate
themselves. There is a reductio problem.

It appears to me that there are (qualitatively) only two solutions to
this reductio:

  1. Define the messaging architecture in such a way that the transient
     message body can be elided in some cases, and ensure that the
     traversal reductio can be implemented entirely within these cases.

     In particular, kernel-implemented objects such as cappages are
     invariably very simple, and you may be able to exploit the fact
     that all of the required operations for this object are both
     unit-time and involve very small messages (see the sketch after
     this list).

     OR

  2. Define the messaging queues as a *cache* backed by the respective
     applications, and design the traversal solution in such a way that
     the caches involved in the traversal are likely to converge.
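
As an illustration of option 1, consider that the entire cappage
"index" request is about two words. A sketch follows in illustrative
C; the fixed-slot embedding mechanism is my assumption, not a proposal
from either system. The idea is that a message this small can live in
a preallocated slot rather than in a freshly allocated transient
message, so posting it requires no allocation and no blocking.

    #include <stdint.h>

    /* The entire "index" message: small enough to embed. */
    struct index_req {
        uint64_t residual;    /* untranslated address bits */
        unsigned bits;        /* how many of them          */
    };

    struct msg_buf {
        /* ... payload, capability slots ...                     */
        struct index_req pending_index;  /* fixed, preallocated  */
        int              have_pending;
    };

    /* Instead of building and queuing a fresh message, the kernel
       drops the two-word request into the fixed slot and activates
       the server implementing the virtualized cappage.  Unit-time,
       no allocation, no waiting. */
    void post_index(struct msg_buf *server_mb,
                    uint64_t residual, unsigned bits)
    {
        server_mb->pending_index =
            (struct index_req){ residual, bits };
        server_mb->have_pending = 1;
        /* activate(server thread) - elided in this sketch */
    }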

> Note further that the message may contain capabilities, which need to
> be looked up in the source address space and saved in the target
> address space.

And just to make life fun, this may block.

> What we'd like then is to iterate over each capability that we need to
> lookup and if we encounter such a scenario, save the state in the
> source message buffer, invoke the message buffer with the index method
> specifying the source buffer as the reply buffer.  And then, when the
> reply comes in, recognize that we are in the middle of processing a
> message and resume.
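
The scheme quoted above amounts to parking a continuation in the
source buffer. A minimal sketch, in illustrative C with all fields
hypothetical, of how the kernel could resume transparently when the
index reply arrives:

    #include <stdbool.h>
    #include <stddef.h>

    struct cap;

    struct xfer_state {
        bool   in_progress;   /* mid-transfer?                   */
        size_t next_cap;      /* first capability not yet copied */
    };

    struct msg_buf {
        size_t            cap_count;
        struct cap       *caps[16];
        struct xfer_state resume;    /* continuation lives here */
    };

    /* True if cap i resolved immediately; false if a virtualized
       cappage was hit and an "index" request was posted instead.
       Declared but not defined: a stand-in. */
    bool lookup_cap(struct msg_buf *src, size_t i);

    /* Called on initial delivery and again on each index reply;
       the saved state makes the retries transparent. */
    bool deliver_caps(struct msg_buf *src)
    {
        size_t i = src->resume.in_progress ? src->resume.next_cap : 0;

        for (; i < src->cap_count; i++) {
            if (!lookup_cap(src, i)) {
                src->resume.in_progress = true;  /* park transfer */
                src->resume.next_cap    = i;
                return false;                    /* resume later  */
            }
        }
        src->resume.in_progress = false;
        return true;
    }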

I would need to understand the structure of the messaging system much
better to offer any opinion. Unfortunately I am under a deadline at the
moment, and I will not have time to look soon.


shap




