l4-hurd
From: Neal H. Walfield
Subject: space banks and DMA
Date: Thu, 13 Oct 2005 14:24:19 +0100
User-agent: Wanderlust/2.14.0 (Africa) SEMI/1.14.6 (Maruoka) FLIM/1.14.6 (Marutamachi) APEL/10.6 Emacs/21.4 (i386-pc-linux-gnu) MULE/5.0 (SAKAKI)

At Sun, 09 Oct 2005 15:37:02 -0400,
Jonathan S. Shapiro wrote:
> In EROS/Coyotos, we do not consider any of these outcomes acceptable. We
> have a very foundational, common sense design rule in EROS/Coyotos:
> 
>   The party who pays must always be permitted to deallocate. If
>   this results in a violation of behavior contract, then either
>   the contract is mis-designed or the wrong party is paying.
> 
> In practice, the problem usually turns out to be that the wrong party is
> paying. This is why we have first-class storage allocators (space banks)
> that a client can pass to a server.

In <address@hidden> (Approaches to
storage allocation), you note that:

> - When a server allocates storage this way, it must be prepared for
>   the storage to spontaneously disappear. In some cases, this can
>   restrict the choice of algorithms and data structures (e.g. linked
>   lists).

How do you deal with DMA?  The specific scenario I'm thinking of is a
task that wants to read a block from disk.  I imagine that the task
invokes an appropriate capability, supplying a space bank from which
the server will allocate a frame to store the read data.  The problem
I see is that if the client can revoke the space bank while the DMA
operation is in progress, the frame to which the DMA operation is
targeted could be reallocated and reused by another task before the
operation finishes, resulting in leaked information and corrupted
data.

It seems to me that the server needs a mechanism to lock the resource
for the duration of the DMA operation.  (Physmem provides a mechanism
like this: the client specifies a maximum locking time and the server
specifies how long it needs the lock.  If the server's required
maximum is less than the client's imposed maximum, the lock is
granted.)

There might be other times when the ability to lock a resource for a
short period would be useful.  I think that under this model, a lot of
code will be dedicated to backing out of scenarios where a resource
that is being actively used suddenly disappears.  If we were able to
lock resources for short periods, that code could be eliminated.

The difficult problem I see in both cases (for both the server and the
client) is determining what length of time is appropriate.  An added
dimension is distinguishing between wall time and CPU time.

Thanks,
Neal



