l4-hurd
[Top][All Lists]
Advanced

[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: self-paging


From: Bas Wijnen
Subject: Re: self-paging
Date: Wed, 7 Dec 2005 09:53:50 +0100
User-agent: Mutt/1.5.11

On Tue, Dec 06, 2005 at 09:28:32PM -0500, Jonathan S. Shapiro wrote:
> > > When sharing is considered, this stops being true. Suppose you and I
> > > both need a page and have assigned it a position. I need it badly, you
> > > need it less badly. You fault, and your pager says "shoot that one". The
> > > problem is that I still need it. Getting this right requires a unified
> > > management scheme for shared items.
> > 
> > My pager can say "shoot that one" for example by swapping it with a page
> > which is currently not mapped in my memory.  If the old one is shared, and
> > the new one isn't, then I should get a new page if it is available (and my
> > quota should probably decrease a bit in the next adjustment round), or my
> > quota should be decreased immediately if there is no new page available.
> > As long as you have the page in your memory, it will not really be thrown
> > out.  It's just thrown out of my address space.
> > 
> > I think this isn't a big thing: most shared pages will be libraries, and I
> > think they will usually be somewhere at the start of the list.  And
> > anyway, they're extremely fast to be mapped in from "swap", because they
> > don't have to actually come from disk.
> 
> It is a *very* big thing. You have just created a design where one RT
> process can evict a frame required by another.

No, I haven't.  If two processes A and B both hold a page in memory, then A
can throw it out of its own address space.  It cannot force the page to be
really swapped out.  Assume A has 10 pages, 3 of which are swapped out (so
only P[0]...P[6] are actually in memory).  P[5] and P[6] is are shared pages,
P[8] is not shared, and there are no free pages at all (I think this is a rare
thing, but it does happen every now and then of course).  Now A exchanges P[6]
and P[8].  Then its quota immediately decreases by one, so only P[0]...P[5]
are still in memory.  B doesn't notice a thing.

Note that A still doesn't have access to P[8] (now at position 6).  To be sure
you can use a page, it shouldn't be moved to the top of the quota, since the
quota can change.

Perhaps you meant that B is giving A problems by holding the shared page,
which means A's quota decreases on unmapping the page.  I don't think this is
a problem.  Real-time processes will have a guaranteed number of pages.  The
extra quota they get due to shared memory can indeed be revoked again, but a
clever real-time process doesn't use more than the guaranteed amount, so the
whole problem isn't relevant for them.  Obviously, the sum of guaranteed pages
given to real-time processes must always be lower than the amount of physical
memory in the machine, and preferably much lower.

> The reason your design has this problem is that you have the quotas attached
> in the wrong place. What you need to resolve this is a subdivisible quota
> mechanism that allows you to attach different quotas to different pools. A
> shared pool of frames needs to have exactly one quota, and this quota should
> not (usually) be used to cover any other pool.

If you have a pool of pages which is shared, and residency is a property of
the pool, then who will decide when to swap out a page?  The file system
providing the shared library?  The first process to use it?  Any process which
uses it?  The user agent (incorrectly assuming that the pool cannot be shared
by multiple users)?  I don't think any of these can both make the correct
decision and be trusted enough to get the power.  In my system, a page is
swapped out when no process needs it enough that it will use its quota for it.
That sounds like a good criterion to me.  If you want to implement this with
the pool approach, then there needs to be an extra accounting process which
seems to add a lot of unneeded complexity.

> It's a granularity issue. You want the granularity of residency to be
> independent of the granularity of either process or address space.

I want each process (the collective threads of an address space) to decide for
itself which pages it wants in memory.  If processes on the system will share
pages, then after filling all the quota there are still some pages available.
So we increase the quota (using the usual priority scheme, so if A shares a
page with B, it is very well possible that neither A nor B gets the extra
quota that results from it).

> Another example: you definitely want to processes that share an address
> space to share all of the same residency properties (think: symmetric
> multithreading).

There are too many grammatical errors here for me to parse it unambiguously.
Are you saying I do want two processes to share the same residency properties?
That doesn't correspond to your previous statement that ganularity of
residence must be independant of granularity of address space.  Anyway I don't
think it must be.  I have no problem with having pages resident as a property
of an address space.  However, swapping a page out is something that should
only result from the collective actions of all processes holding the page.
Perhaps this is what you mean with independant granularity?

> > > Okay. I understand what you thought, but it doesn't work that way. In a
> > > persistent system, *all* pages are disk pages, and residency is managed
> > > orthogonally.
> > 
> > I think that depends on the level at which you look at it.  Space banks
> > are appearantly a bit lower level than I expected.  However, at the
> > address space level, there are pages in memory which have storage in a
> > space bank.
> 
> This is not so. Address spaces have no intrinsic relationship to memory in
> the sense that you mean. An address space is an object (actually, a
> collection of objects) that defines a mapping from page address to [disk]
> pages. The residency of the components of the address space is a separate
> thing. Address spaces can even exist without any process.

Eh, I'm not sure if this isn't what I meant.  As I said, there are pages in
memory which have storage in a space bank, so they have a mapping to disk
pages.  What I didn't say is that there are also pages which aren't currently
in memory, which do have a mapping to disk pages (those are currently "swapped
out", although they weren't necessarily loaded into memory before).  I think
there should also be "cache pages", which don't have a mapping to disk at all,
but that's just an optimization.

> > You said before that the list management would need to be in the kernel if
> > it would be implemented.  I now agree with that. :-)
> 
> Good. Now tackle the *rest* of the challenge problem: how to manage this
> list sensibly in constant per-frame storage in the face of sharing.

What exactly do you mean with "constant per-frame storage"?  That a shared
page has only one mapping to disk?  I was always assuming that.  I can see
some problems with respect to the storage needed to administrate who has which
pages mapped, but they should be solvable.

Thanks,
Bas

-- 
I encourage people to send encrypted e-mail (see http://www.gnupg.org).
If you have problems reading my e-mail, use a better reader.
Please send the central message of e-mails as plain text
   in the message body, not as HTML and definitely not as MS Word.
Please do not use the MS Word format for attachments either.
For more information, see http://129.125.47.90/e-mail.html

Attachment: signature.asc
Description: Digital signature


reply via email to

[Prev in Thread] Current Thread [Next in Thread]