
Re: libhurd-cap-server/class-alloc.c


From: Neal H. Walfield
Subject: Re: libhurd-cap-server/class-alloc.c
Date: Mon, 17 Jan 2005 11:01:53 +0000
User-agent: Wanderlust/2.10.1 (Watching The Wheels) SEMI/1.14.6 (Maruoka) FLIM/1.14.6 (Marutamachi) APEL/10.6 Emacs/21.3 (i386-pc-linux-gnu) MULE/5.0 (SAKAKI)

At Mon, 17 Jan 2005 10:43:02 +0100,
Johan Rydberg wrote:
> 
> Marcus Brinkmann <address@hidden> writes:
> 
> >> hurd_cap_class_alloc manipulates the object slab
> >> (i.e. CAP_CLASS->OBJ_SPACE) but does not lock the class.  Since the
> >> locking interface is not exported, callers can't be expected to lock
> >> the class.  Here is a patch to change hurd_cap_class_alloc to lock
> >> CAP_CLASS.  Okay to check in?
> >
> > Funny enough, locking is not required, as the slab space already is
> > protected by its own internal lock.
> 
> It's Neal's opinion that there is no need for an internal lock in the
> slab space.  If I recall correctly, we (Marcus and I) have discussed
> this off-line earlier.

If we serialize access to a slab space, there is no need to do locking
internally. (This is similar to the hash and btree interfaces.)
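
For example (just a sketch: hurd_slab_space_t and hurd_slab_alloc
stand in for the slab interface here, and the prototypes are not
meant to be authoritative):

  #include <pthread.h>
  #include <hurd/slab.h>   /* Assumed header for the slab interface.  */

  /* The caller, not the slab code, serializes access to the slab
     space; LOCK is whatever already protects the class.  */

  struct my_class
  {
    pthread_mutex_t lock;
    hurd_slab_space_t obj_space;
  };

  static error_t
  my_class_alloc (struct my_class *cap_class, void **obj)
  {
    error_t err;

    pthread_mutex_lock (&cap_class->lock);
    err = hurd_slab_alloc (cap_class->obj_space, obj);
    pthread_mutex_unlock (&cap_class->lock);

    return err;
  }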

> Anyway, the reason for having a lock in the slab space is to protect
> it when the system goes low on memory and needs to reap any slabs
> that are fully free (i.e., with no outstanding allocated objects).
> This function, which at this point is imaginary, needs to be invoked
> from the pager.

Yes, this makes sense and motivates the per-space lock.

> There are, as I see it, three roads to go:
> 
>   1) keep the lock,
>   2) provide a hook to hurd_slab_space that will lock any other 
>      structures or,
>   3) provide a pthread_mutex_t* to hurd_slab_space.

The reaper assumes that the slab structures and the buffers are wired.
In the case of a task's memory manager, this will be true.  It would
be useful to allow user code to use libhurd-slab and to allow that
memory to be paged.  If we permit this without making the slab reaper
aware of the MM internals, then the reaper may access data which is
paged out (if the pager invokes the reaper because there is memory
pressure, for example, we may end up with a livelock or deadlock
scenario, depending on the implementation).

I see several choices:

 - add an attribute indicating that a slab's meta-data structure and
   buffers are wired, allowing the reaper to be smarter
 - include up to two implementations of libhurd-slab in a binary: one
   for wired memory (i.e., for the memory manager) and one for
   non-wired memory (i.e., for the application proper), removing the
   need to change the implementation
 - only reap on a per-slab basis, shifting the burden to the pager

I find the first two options excessive.  The first will introduce
additional complexity into the API and the implementation and be a
source of difficult-to-track bugs: if a user indicates that a slab is
wired but only wires the buffers and not the struct hurd_slab, the
reaper may blow up.  The second option gratuitously wastes memory.

The last option seems more realistic: the memory manager has a limited
number of slab spaces which it can easily iterate over when it needs
to reap unused frames from the slabs.  If we go with this option, we
can require that callers do any required locking.
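
To make that concrete, the memory manager's reap path might look
something like this (again only a sketch; hurd_slab_reap and the slab
list are made-up names, not the current interface):

  #include <pthread.h>
  #include <hurd/slab.h>   /* Assumed header for the slab interface.  */

  /* The memory manager knows its handful of slab spaces and reaps
     them explicitly, taking the corresponding lock itself;
     libhurd-slab does no locking of its own.  */

  struct mm_slab
  {
    pthread_mutex_t lock;
    hurd_slab_space_t space;
  };

  extern struct mm_slab mm_slabs[];
  extern int mm_nr_slabs;

  static void
  mm_reap_slabs (void)
  {
    int i;

    for (i = 0; i < mm_nr_slabs; i++)
      {
        pthread_mutex_lock (&mm_slabs[i].lock);
        /* Return fully free slabs (no outstanding objects) to the
           system.  */
        hurd_slab_reap (mm_slabs[i].space);
        pthread_mutex_unlock (&mm_slabs[i].lock);
      }
  }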

Finally, it is worth adding that caching buffers for non-wired spaces
may negatively impact performance: if the buffer is paged out and
eventually reused, the page-in operation may be more expensive than
just initializing some fresh memory.  Going with the third option
above, we can add an attribute to the slab creation function
indicating whether to cache unused buffers.  Even if a user gets this
option wrong, it won't affect the pager; it will just cause some
gratuitous paging.

In brief, I suggest we:

 - have callers invoke the reap function on specific slabs
 - require callers to serialize access to slabs
 - modify hurd_slab_create to take an additional attribute indicating
   whether unused buffers should be cached or immediately deallocated
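
Roughly, the resulting interface could look like this (the parameter
lists are illustrative only, not the actual prototypes):

  /* If CACHE_BUFFERS is zero, unused buffers are deallocated
     immediately instead of being kept around for reuse.  */
  error_t hurd_slab_create (size_t size, size_t alignment,
                            int cache_buffers,
                            hurd_slab_constructor_t constructor,
                            hurd_slab_destructor_t destructor,
                            hurd_slab_space_t *r_space);

  /* Release any fully unused slabs in SPACE.  The caller must
     serialize access to SPACE; there is no internal locking.  */
  error_t hurd_slab_reap (hurd_slab_space_t space);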

Comments?

Thanks,
Neal




