Re: self-paging


From: Jonathan S. Shapiro
Subject: Re: self-paging
Date: Tue, 06 Dec 2005 21:28:32 -0500

On Tue, 2005-12-06 at 18:28 +0100, Bas Wijnen wrote:
> On Tue, Dec 06, 2005 at 11:16:12AM -0500, Jonathan S. Shapiro wrote:

> > When sharing is considered, this stops being true. Suppose you and I both
> > need a page and have assigned it a position. I need it badly, you need it
> > less badly. You fault, and your pager says "shoot that one". The problem is
> > that I still need it. Getting this right requires a unified management
> > scheme for shared items.
> 
> My pager can say "shoot that one", for example, by swapping it with a page that
> is currently not mapped in my memory.  If the old one is shared, and the new
> one isn't, then I should get a new page if it is available (and my quota
> should probably decrease a bit in the next adjustment round), or my quota
> should be decreased immediately if there is no new page available.  As long as
> you have the page in your memory, it will not really be thrown out.  It's just
> thrown out of my address space.
> 
> I think this isn't a big thing: most shared pages will be libraries, and I
> think they will usually be somewhere at the start of the list.  And anyway,
> they can be mapped back in from "swap" extremely fast, because they don't
> actually have to come from disk.

It is a *very* big thing. You have just created a design where one
real-time (RT) process can evict a frame required by another.

The reason your design has this problem is that you have the quotas
attached in the wrong place. What you need to resolve this is a
subdivisible quota mechanism that allows you to attach different quotas
to different pools. A shared pool of frames needs to have exactly one
quota, and this quota should not (usually) be used to cover any other
pool.
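
To make this concrete, here is a rough sketch in C of what I mean. All
the names (quota_split, pool_charge_frame, and so on) are hypothetical,
not EROS or L4 API; the point is only the shape of the accounting:

#include <stddef.h>

struct quota {
    struct quota *parent;       /* quotas subdivide tree-wise          */
    size_t limit;               /* frames this quota may hold resident */
    size_t used;                /* frames currently charged to it      */
};

struct pool {
    struct quota *quota;        /* exactly one quota covers this pool  */
    /* ... resident-frame list, etc. ... */
};

/* Split 'amount' frames off 'q' into a fresh child quota, e.g. to give
 * a shared library pool its own budget.  Returns nonzero on success. */
static int quota_split(struct quota *q, struct quota *child, size_t amount)
{
    if (q->limit - q->used < amount)
        return 0;               /* parent cannot spare that much */
    q->limit -= amount;
    child->parent = q;
    child->limit = amount;
    child->used = 0;
    return 1;
}

/* Charging a frame touches only the pool's own quota: */
static int pool_charge_frame(struct pool *p)
{
    if (p->quota->used == p->quota->limit)
        return 0;               /* caller must evict within this pool */
    p->quota->used++;
    return 1;
}

Because eviction pressure is resolved against a single pool's quota, a
fault in your pool can never shoot down a frame covered by mine.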

It's a granularity issue. You want the granularity of residency to be
independent of the granularity of either process or address space.

Another example: you definitely want two processes that share an address
space to share all of the same residency properties (think: symmetric
multithreading).
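
Again as a hedged sketch with invented names: residency is held by a
domain object that a process merely points at, so it is decoupled from
both the process and the address space:

struct pool;                       /* as in the quota sketch above     */
struct address_space;              /* opaque here; a mapping object    */

struct residency_domain {
    struct pool  *private_pool;    /* this program's own frames        */
    struct pool **shared_pools;    /* pools it shares, e.g. libraries  */
    unsigned      n_shared;
};

struct process {
    struct address_space    *as;   /* what is mapped where             */
    struct residency_domain *rd;   /* what may stay resident           */
};

Threads that share one address_space point at one residency_domain, so
their residency properties agree by construction; conversely, a pool
can be shared by processes whose address spaces are entirely distinct.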

> 
> > > Ok, perhaps I didn't understand what a space bank is.  Here's what I
> > > thought: A space bank gives space to store things.  Storage is either on
> > > disk, or in memory, or both.  Storage on disk is "swapped out".  Storage
> > > in memory and on disk is normal memory, with a reserved place to put it
> > > when it will be swapped out.  Storage only in memory is cache, which is
> > > lost when it would be swapped out.
> > 
> > Okay. I understand what you thought, but it doesn't work that way. In a
> > persistent system, *all* pages are disk pages, and residency is managed
> > orthogonally.
> 
> I think that depends on the level at which you look at it.  Space banks are
> apparently a bit lower level than I expected.  However, at the address space
> level, there are pages in memory which have storage in a space bank.

This is not so. Address spaces have no intrinsic relationship to memory
in the sense that you mean. An address space is an object (actually, a
collection of objects) that defines a mapping from page address to
[disk] pages. The residency of the components of the address space is a
separate thing. Address spaces can even exist without any process.
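
Schematically, and with purely illustrative types (this is not the
actual EROS node layout), the relationship looks like this:

#include <stdint.h>
#include <stdbool.h>

typedef uint64_t oid_t;           /* persistent disk-page identifier  */

#define AS_FANOUT 16

struct as_node {                  /* one level of the mapping tree    */
    bool is_leaf;
    union {
        struct as_node *child[AS_FANOUT]; /* interior: sub-trees      */
        oid_t           page[AS_FANOUT];  /* leaf: disk page names    */
    } slot;
};

struct address_space {            /* needs no process and no frames   */
    struct as_node *root;
};

/* Residency is recorded per physical frame, naming the disk page the
 * frame currently caches -- owned by no address space or process: */
struct frame_entry {
    oid_t backing;
};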

> You said before that the list management would need to be in the kernel if it
> were implemented.  I now agree with that. :-)

Good. Now tackle the *rest* of the challenge problem: how to manage this
list sensibly in constant per-frame storage in the face of sharing.
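
As a hedged sketch of the constraint, assume the structure below is the
*only* per-frame state available.  The frame carries fixed-size
intrusive links into exactly one pool's eviction list, so the cost of
sharing has to be carried by the pool, never by per-frame lists of
sharers:

struct pool;                           /* holds the list head          */

struct frame {
    struct frame *lru_prev, *lru_next; /* intrusive links: O(1) space  */
    struct pool  *pool;                /* the single pool charged      */
    unsigned      map_count;           /* mappings, for accounting     */
};

/* Unlinking a frame from its pool's NULL-terminated list is O(1): */
static void frame_unlink(struct frame **head, struct frame *f)
{
    if (f->lru_prev) f->lru_prev->lru_next = f->lru_next;
    else             *head = f->lru_next;
    if (f->lru_next) f->lru_next->lru_prev = f->lru_prev;
    f->lru_prev = f->lru_next = 0;
}

The open question is then which pool a doubly-shared frame gets charged
to, and how the remaining sharers are found on eviction, without ever
growing beyond this fixed record.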

shap
