
Re: Comparing "copy" and "map/unmap"


From: Matthieu Lemerre
Subject: Re: Comparing "copy" and "map/unmap"
Date: Tue, 11 Oct 2005 01:15:22 +0200
User-agent: Gnus/5.11 (Gnus v5.11) Emacs/22.0.50 (gnu/linux)

"Jonathan S. Shapiro" <address@hidden> writes:

> On Mon, 2005-10-10 at 16:09 +0200, Matthieu Lemerre wrote:
>
>> OK.  This works because you arrange for every single byte of
>> resource to be allocated by the client.  Thus, upon client
>> destruction, every object is automatically destroyed without the
>> server having to know about it.
>> 
>> We originally planned for the metadata used to manage the resources
>> to be allocated by the server (because, I think, we did not know
>> how to achieve this on L4, where the smallest allocation unit is
>> the page).
>
> The smallest unit of allocation in Coyotos is also (for practical
> purposes) the page. For example, our current FS implementation allocates
> in units of 4K blocks for efficiency. A more sophisticated
> implementation could certainly fix this.
>
> But I think that the real difference lies in the fact that our system
> tends to favor designs where storage used by a server is allocated from
> a homogeneous source. This greatly simplifies matters.

What do you mean by homogeneous source?

>
>> BTW, I'm interested in your space bank solution from the other
>> mail.  How do you ensure that the server does not write beyond the
>> space bank, for instance?  Does each server have one page where its
>> space banks are allocated, but on behalf of the client by the
>> kernel?
>
> I think that I have not described the space bank clearly.
>
> From the perspective of the client, a space bank is a server. In actual
> implementation, it is an object implemented by the space bank server.
> The operations on this object are things like:
>
>       buy [123] page(s) => cap[]
>       buy [123] node(s) => cap[]
>       destroy page <cap> => void
>
> There is nothing to overrun.
>
> I think I have said previously that storage management is all handled at
> user level. I meant that quite literally. The kernel does not do
> allocation.

If I understand you correctly, then for each object allocated on a
server, the client has to allocate a whole page frame and supply it
to the server?
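
To be concrete, here is how I currently picture the protocol, as a
toy C sketch (all names are hypothetical, and malloc stands in for
the space bank; nothing here is the real EROS or Coyotos interface):

/* Toy model of my reading of the protocol.  All names are
   hypothetical; malloc simulates the space bank.  */
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE 4096

typedef struct { void *storage; } cap_t;  /* stand-in for a capability */

/* The client buys a page frame from its space bank.  */
static cap_t
spacebank_buy_page (void)
{
  cap_t page = { malloc (PAGE_SIZE) };
  return page;
}

/* The server builds its per-client object inside storage that the
   client supplied, so destroying the client's storage destroys the
   object without the server knowing it.  */
static cap_t
server_create_object (cap_t page)
{
  return page;             /* the object lives in the client's page */
}

int
main (void)
{
  cap_t page = spacebank_buy_page ();
  cap_t object = server_create_object (page);
  printf ("object lives in client-paid storage at %p\n",
          object.storage);
  free (page.storage);     /* client teardown reclaims everything */
  return 0;
}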

If so, isn't that a waste of memory for objects that don't need that
much data?  Take, for instance, a time server whose objects record
the current time upon creation (this is a theoretical example).
Should the client then allocate one whole page for each of the three
objects it wants to create?

(Sorry if my question seems naively simple, but I didn't manage to
find anything that explains in detail how this works.)

>
>> > For POSIX, you definitely need reference counts, but these are not
>> > capability-layer reference counts. These are reference counts that are
>> > implemented by the POSIX server, which is an entirely different thing.
>> > There is absolutely no need for these to be implemented in the kernel.
>> 
>> In the Hurd, we don't have anything like a POSIX server.  I hope
>> that it would still work if this POSIX server were split into
>> several servers, but I would have to study how you do reference
>> counting in your POSIX server first.
>
> When I talked about this with Neal and Marcus, I made the following
> observation:
>
> The POSIX API assumes a fairly tight integration around the process
> structure. In particular, there are very close interactions involving
> the signal mask. While a multiserver implementation can be built,
> portions of the process state tend to end up in memory that is shared
> across these servers in any efficient implementation.
>
> Further, process teardown is always done in one particular server. This
> is the place that should be responsible for the reference counting.
>

I don't know either POSIX or the Hurd well enough to answer you on
this.  Sorry.

>> I think that my example (again :)) with the notification server does
>> not fall into these two categories.  A client would allocate some
>> space on the notification server, to receive messages.
>
> Can you describe for me what the notification server does in your
> design?

We want to be able to abort any blocking operation, and this seemed
quite complex to manage in a server.  So Marcus came up with the idea
of a notification server: instead of sending a message to the server
to tell it to abort the RPC, we could just have the thread stop
waiting on the notification server.  The server wouldn't even notice.
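
As a rough sketch of the idea, with a POSIX condition variable
standing in for the wait on the notification server (all names are
mine, not part of any real design):

/* Sketch of abort-by-ceasing-to-wait.  A pthread condition variable
   stands in for the wait on the notification server; all names are
   hypothetical.  */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t posted = PTHREAD_COND_INITIALIZER;
static bool message_ready = false;
static bool abort_requested = false;

/* The client blocks here, on the notification object, instead of
   blocking inside the server's RPC.  */
static void
wait_for_notification (void)
{
  pthread_mutex_lock (&lock);
  while (!message_ready && !abort_requested)
    pthread_cond_wait (&posted, &lock);
  pthread_mutex_unlock (&lock);
}

/* Aborting the RPC means making our own thread stop waiting.  No
   message is sent to the server; it does not even notice.  */
static void
abort_rpc (void)
{
  pthread_mutex_lock (&lock);
  abort_requested = true;
  pthread_cond_broadcast (&posted);
  pthread_mutex_unlock (&lock);
}

int
main (void)
{
  abort_rpc ();              /* the abort arrives first in this demo */
  wait_for_notification ();  /* returns at once: the wait is aborted */
  puts ("RPC abandoned purely on the client side");
  return 0;
}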

Sorry, I have not had time to read your mails on asynchronous
messaging yet, but I will as soon as I can.

>
>> So when the client is done with a server, it could revoke the
>> capability it gave to that server in order to give it to another
>> one.  By doing so, it ensures that there is always only one sender
>> of messages to a given message box.
>
> If you need one sender and you potentially have many, it sounds like a
> queueing mistake somewhere, but I would like to wait until I understand
> the messaging server protocol better.
>

In the notification server I describe, you allocate some space for
the message, and the notification server gives you back a capability
to send messages into the allocated space.  It's then up to you to
give the capability to only one server at a time.

Revoking a capability from a server is only a way to avoid allocating
new space in the notification buffer and to reuse an older slot
instead.
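
A toy model of that allocate/hand-out/revoke cycle (all names are
hypothetical, and an epoch counter stands in for real capability
revocation):

/* Toy model of reusing one notification buffer slot across servers.
   An epoch counter stands in for real capability revocation; all
   names are hypothetical.  */
#include <stdio.h>
#include <stdlib.h>

typedef struct
{
  void *buffer;  /* space allocated on the notification server */
  int epoch;     /* bumped on revocation; old copies go stale  */
} send_cap_t;

/* Allocate message space once; get back a capability for it.  */
static send_cap_t
notify_allocate (size_t size)
{
  send_cap_t cap = { malloc (size), 0 };
  return cap;
}

/* Revoke the copies previously handed out.  The space itself is
   kept and reused; only the outstanding copies become invalid.  */
static void
notify_revoke (send_cap_t *master)
{
  master->epoch++;
}

int
main (void)
{
  send_cap_t master = notify_allocate (128);

  send_cap_t for_server_a = master;  /* hand the send cap to A  */
  notify_revoke (&master);           /* done with A: revoke     */
  send_cap_t for_server_b = master;  /* reuse same space for B  */

  printf ("A stale: %d, B current: %d\n",
          for_server_a.epoch != master.epoch,
          for_server_b.epoch == master.epoch);
  free (master.buffer);
  return 0;
}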

>
>> This is just an example; maybe we don't need a message server on
>> EROS (we planned to use this for blocking RPCs).  But still, a
>> similar case could occur (revocation to ensure exclusivity).
>
> Let me take this up separately. There *is* an issue here. I am not sure
> if our solution would work for you, but at least it will provide
> something to consider and possibly something to react to.
>

Well, the forwarding object solution seems to work perfectly in this
case.

Thanks, 
Matthieu



