Re: capability address space and virtualizing objects


From: Jonathan S. Shapiro
Subject: Re: capability address space and virtualizing objects
Date: Fri, 29 Aug 2008 11:23:46 -0400

On Fri, 2008-08-29 at 17:01 +0200, Neal H. Walfield wrote:
> At Fri, 29 Aug 2008 09:47:41 -0400,
> Jonathan S. Shapiro wrote:
> > I can see no reason why this revocation should be required. None of the
> > content that you describe as existing in this frame is in any way
> > sensitive, and there is no hazard to the kernel if the sender alters the
> > payload on the fly, provided minimal care is taken in kernel accesses to
> > the frame.
> 
> Isn't exposing the capability addresses in the target message
> problematic?

Yes. I meant that it wasn't necessary to revoke the page in order for
the *kernel* to consult it.

> > >   - the kernel finds the first MIN(source.cap_count, target.cap_count)
> > >     capabilities specified in the source message buffer and copies
> > >     them into the slots specified in the target message buffer,
> > 
> > Unless there is a very small bound on cap_count, this phase needs to be
> > preemptible.
> 
> It's bounded in that there is only a page worth of space.  I have not
> yet decided whether to further restrict this.  But this is a
> convincing reason.

Depending on how your address spaces are implemented, a page's worth of
capabilities involving suitably scattered sources and destinations can
entangle an *awesome* number of kernel data structures.
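
To make this concrete, a preemptible copy phase might look something
like the following (a rough sketch only; every name here is invented
for illustration and none of it is a real interface):

  /* Sketch: copy up to MIN(src,tgt) capabilities, with a preemption
     check between slots.  Illustrative names throughout.  */
  static error_t
  copy_caps (struct msg_buf *src, struct msg_buf *tgt, int *done)
  {
    int n = src->cap_count < tgt->cap_count
              ? src->cap_count : tgt->cap_count;

    for (int i = *done; i < n; i++)
      {
        /* Each lookup may walk a different capability address space,
           touching (and possibly faulting on) separate kernel mapping
           structures; this is where the entanglement comes from.  */
        struct cap *c = cap_lookup (src->space, src->cap_addr[i]);
        if (c == NULL)
          return EINVAL;
        cap_copy (cap_slot (tgt->space, tgt->cap_addr[i]), c);

        *done = i + 1;
        if (preemption_pending ())
          return EAGAIN;   /* resume from *done later */
      }
    return 0;
  }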

> > >   - the kernel frees the frame associated with the target user message
> > >     buffer object and assigns it the frame that was associated with
> > >     the source user message buffer object.
> > 
> > Somewhere in all this I am reasonably certain that a data payload gets
> > copied, but that description seems to have gone missing.
> 
> The source frame is modified and transferred.  The bytes are not
> touched, however.

Then there is a one-page data payload limit? How are long messages
handled?
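
For reference, the transfer step as I am reading it amounts to roughly
this (purely illustrative names):

  /* The bytes are never copied; the source buffer's frame simply
     becomes the target's, and the old target frame is dropped.  */
  frame_free (tgt->frame);
  tgt->frame = src->frame;
  src->frame = NULL;   /* sender allocates a replacement before its
                          next send */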

> > I would not have expected the old target frame to be freed. Given the
> > road you seemed to be proceeding down, I anticipated that the protocol
> > would clear the target frame and then execute a frame exchange.
> 
> Why would it clear the target frame? Do you mean to appropriately
> account the CPU cycles since it has to be cleared eventually?

Before any page swap, the page needs to be cleansed of any originator
state that should not be disclosed. Example: residual payload from a
previous, unrelated message.

Arguably, the kernel can push this responsibility to the receiver, who
can then elect not to zero if (as in most cases) they don't care.
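
So the exchange variant I was anticipating is roughly the following
(again, only a sketch; the scrub could equally be deferred to the
receiver as noted above):

  /* "Clear and swap": scrub the receiver's old frame of residual
     state it should not disclose, then exchange frames so neither
     side needs a replacement allocation.  */
  struct frame *old = tgt->frame;
  page_clear (old);          /* or left to the receiver, which may
                                elect to skip it */
  tgt->frame = src->frame;   /* message moves to the receiver */
  src->frame = old;          /* scrubbed frame returns to the sender */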

> A frame exchange would also be possible but as frames are second
> class, that seems to me to just be an optimization of some sort.  Or
> is there another reason that I am missing?

If there is no frame swap, then the sender is constantly doing
replacement allocations and the receiver is constantly doing
deallocations (due to overwrite). It is not obvious whether "clear and
swap" improves matters, but it may.

> > I definitely think the name needs to change. When people hear the term
> > "buffer", what leaps to mind is "some resource that contains the payload
> > of a message". They definitely do not think "a thing on which a message
> > can be enqueued", and I cannot envision a scenario in which it makes
> > sense to enqueue one piece of payload on a second piece of payload. I
> > can envision useful scenarios in which queues might be first class and
> > capabilities to them might be transferred, but I cannot envision a
> > scenario in which a queue should get enqueued on another queue.
> > 
> > The concepts of "the message being transferred" and "the destination of
> > transfer" seem (to me) to want to be clearly separated. If there is a
> > reason not to do this, I would be interested to understand it, but
> > offhand I can see only complications and confusions arising from what
> > you seem to be describing.
> 
> Do you mean to have two kernel object types instead of one?  One for
> messages buffers and one for queues?

They are distinct concepts whose temporal scope and extent are quite
different. This would seem to warrant separate kernel abstractions, but
I probably still do not understand your scheme.
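
Roughly, the separation I have in mind is this (a sketch only; the
field names are invented):

  /* A message buffer: one frame of payload plus capability slots,
     with a lifetime bounded by a single transfer.  */
  struct message_buffer
  {
    struct frame *frame;          /* one page of payload */
    int cap_count;
    cap_addr_t cap_addr[CAP_MAX]; /* slots named by sender/receiver */
  };

  /* A queue (or port): a first-class rendezvous point on which
     message buffers are enqueued, with a lifetime independent of
     any one message.  */
  struct message_queue
  {
    struct message_buffer *head;  /* pending transfers */
    struct message_buffer *tail;
    struct thread *waiters;       /* receivers blocked here */
  };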

Part of my confusion is that prior systems seem to have faced a "one or
the other" choice: either (a) there were no ports, and messages went
directly to receiving processes, or (b) ports were first-class entities,
or at least in hindsight clearly ought to have been.

shap




