l4-hurd

Re: self-paging


From: Jonathan S. Shapiro
Subject: Re: self-paging
Date: Wed, 07 Dec 2005 12:06:38 -0500

On Wed, 2005-12-07 at 09:53 +0100, Bas Wijnen wrote:
> On Tue, Dec 06, 2005 at 09:28:32PM -0500, Jonathan S. Shapiro wrote:
> > > I think this isn't a big thing: most shared pages will be libraries, and I
> > > think they will usually be somewhere at the start of the list.  And
> > > anyway, they can be mapped back in from "swap" extremely fast, because
> > > they don't actually have to come from disk.
> > 
> > It is a *very* big thing. You have just created a design where one RT
> > process can evict a frame required by another.
> 
> No, I haven't.  If two processes A and B both hold a page in memory, then A
> can throw it out of its own address space.  It cannot force the page to be
> really swapped out.  Assume A has 10 pages, 3 of which are swapped out (so
> only P[0]...P[6] are actually in memory).  P[5] and P[6] are shared pages,
> P[8] is not shared, and there are no free pages at all (I think this is a rare
> thing, but it does happen every now and then of course).  Now A exchanges P[6]
> and P[8].  Then its quota immediately decreases by one, so only P[0]...P[5]
> are still in memory.  B doesn't notice a thing.

Perhaps this is what you intended to describe all along, but it's a
significant change from what I understood. What you now appear to be
saying is that a frame pool is really a sponsorship mechanism. A given
frame can be dominated by different frame pools in different address
spaces, and if so it may be multiply sponsored. It remains pinned in
memory as long as it is sponsored by at least one frame pool.
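
To make that concrete, here is a rough sketch of what I mean (all names
are hypothetical, illustration only): a frame stays pinned as long as at
least one pool sponsors it, and a pool dropping its sponsorship affects
only its own accounting.

  #include <assert.h>
  #include <stdbool.h>

  /* Illustration only; hypothetical names, not actual kernel code.
   * A physical frame remains pinned while its sponsor count is
   * non-zero; each frame pool sponsoring it holds one count. */
  typedef struct frame {
      unsigned sponsors;   /* number of frame pools sponsoring this frame */
  } frame_t;

  static bool frame_is_pinned(const frame_t *f) { return f->sponsors > 0; }

  static void pool_sponsor(frame_t *f)   { f->sponsors++; }

  static void pool_unsponsor(frame_t *f)
  {
      assert(f->sponsors > 0);
      if (--f->sponsors == 0) {
          /* Last sponsor gone: the frame becomes eligible for
           * eviction.  A real pager would queue it for reclaim here. */
      }
  }

In the A/B example above, A dropping its sponsorship of P[6] affects
only A's accounting; B's sponsorship keeps the frame pinned, so B
notices nothing.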

Conceptually, this is a workable design. We are going to run into three
very complex implementation issues:

1. Underutilization

We will soon conclude that every RT process must sponsor libc (actually,
libc may be important enough to pin as a system matter, but there will
turn out to be some library that gets multiply sponsored by a bunch of
apps yet isn't important enough to sponsor globally). This requires a
surprisingly large number of sponsorship tickets.

The problem with this is that the total number of sponsorship tickets in
the system must never exceed the number of pinnable frames.
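
As a rough admission-control sketch of that invariant (hypothetical
names, illustration only): every ticket handed to a pool must be counted
against the pinnable frame supply, even when the frames it will sponsor
are already sponsored by someone else.

  #include <stdbool.h>

  /* Illustration only; hypothetical names.  A pool asking for `request`
   * sponsorship tickets is admitted only if the system-wide ticket
   * total stays within the number of pinnable frames.  Multiple
   * sponsorship means tickets are consumed faster than frames are. */
  struct pin_accounting {
      unsigned long tickets_issued;    /* sum of all pool ticket grants */
      unsigned long pinnable_frames;   /* frames the system may pin */
  };

  static bool admit_pool(struct pin_accounting *acct, unsigned long request)
  {
      if (acct->tickets_issued + request > acct->pinnable_frames)
          return false;                /* would overcommit the pin guarantee */
      acct->tickets_issued += request;
      return true;
  }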

2. Reverse fan-out

This is a kernel implementation issue. We will now need a mechanism to
back-trace from a physical frame to all of its current sponsors. This is
very hard to handle under a constant storage constraint.
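
For contrast, the obvious reverse map looks something like the sketch
below (hypothetical names). It answers the back-trace question, but it
costs one record per (pool, frame) sponsorship, so per-frame storage
grows with the number of sponsors, which is exactly what the constant
storage constraint rules out.

  #include <stdlib.h>

  /* Illustration only; hypothetical names.  Naive reverse map: one
   * link per (pool, frame) sponsorship, chained off the frame.
   * Back-tracing sponsors is trivial, but storage per frame is
   * proportional to its sponsor count, not constant. */
  struct pool;

  struct sponsor_link {
      struct pool         *pool;   /* the sponsoring frame pool */
      struct sponsor_link *next;   /* next sponsor of the same frame */
  };

  struct frame {
      struct sponsor_link *sponsors;   /* head of the sponsor chain */
  };

  static int add_sponsor(struct frame *f, struct pool *p)
  {
      struct sponsor_link *l = malloc(sizeof *l);
      if (!l)
          return -1;
      l->pool = p;
      l->next = f->sponsors;
      f->sponsors = l;                 /* O(1) insert, O(#sponsors) storage */
      return 0;
  }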

3. Checkpointing

Pinning a page isn't enough. We also have to ensure that it remains
promptly accessible. If the page is clean, this is no problem. If the
page is dirty and subject to checkpoint, it becomes subject to in-memory
copy-on-write behavior, and we need to ensure that we reserve enough
frames for this to happen. One saving grace here is that the redundant
sponsorship will tend to lead to lots of available frames that can be
used for copy-on-write purposes.
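
Roughly, the copy-on-write step looks like this sketch (hypothetical
names; it assumes a dedicated reserve of frames set aside for checkpoint
copies):

  #include <string.h>

  #define PAGE_SIZE 4096

  /* Illustration only; hypothetical names.  On a write fault against a
   * dirty page captured by an in-progress checkpoint, copy the
   * pre-modification contents into a frame taken from a reserved pool
   * so the checkpoint can still write out the old image. */
  struct cow_reserve {
      void    **free_frames;   /* frames set aside for checkpoint COW */
      unsigned  nfree;
  };

  static void *cow_on_checkpointed_write(struct cow_reserve *r, void *frame)
  {
      if (r->nfree == 0)
          return NULL;          /* reserve exhausted: the writer must wait */
      void *copy = r->free_frames[--r->nfree];
      memcpy(copy, frame, PAGE_SIZE);   /* checkpoint keeps this copy */
      return copy;
  }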

In most RT situations requiring large amounts of memory, however, the
bulk of dirty pages will be exempted from checkpoint. There is no point
checkpointing a frame buffer or audio buffer, for example.

> > The reason your design has this problem is that you have the quotas attached
> > in the wrong place. What you need to resolve this is a subdivisible quota
> > mechanism that allows you to attach different quotas to different pools. A
> > shared pool of frames needs to have exactly one quota, and this quota should
> > not (usually) be used to cover any other pool.
> 
> If you have a pool of pages which is shared, and residency is a property of
> the pool, then who will decide when to swap out a page?

The fault handler for the pool. The key idea here is that if you and I
are sharing a resource, and we both have a residency requirement, we can
often manage the residency issues jointly. Perhaps more important, we
want to separate the sponsoring container from the process so that a
multithreaded application can manage sponsorship.
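
Schematically, something like this (hypothetical names; only a sketch of
the idea that the eviction policy lives with the pool rather than with
any one process):

  #include <stddef.h>

  /* Illustration only; hypothetical names.  All processes (threads)
   * attached to the pool fault into the same handler, which evicts
   * strictly within the pool's own quota. */
  struct pool {
      unsigned quota;       /* frames this pool may keep resident */
      unsigned resident;    /* frames currently resident */
      void *(*pick_victim)(struct pool *self);   /* pool's shared policy */
  };

  /* Returns the frame to evict to make room, or NULL if the pool still
   * has headroom under its quota. */
  static void *pool_make_room(struct pool *p)
  {
      if (p->resident < p->quota) {
          p->resident++;               /* room left: just grow */
          return NULL;
      }
      return p->pick_victim(p);        /* full: victim comes from this pool */
  }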

> I don't think any of these can both make the correct
> decision and be trusted enough to get the power.  In my system, a page is
> swapped out when no process needs it enough that it will use its quota for it.
> That sounds like a good criterion to me.  If you want to implement this with
> the pool approach, then there needs to be an extra accounting process which
> seems to add a lot of unneeded complexity.

I think you are forgetting that *any* process that can pin resources is
partially trusted. This tends to make the problem a little easier. In
truth, I'm not so convinced about shared management. What I *am*
convinced about is that I want a finer granularity of control than
"one process, one pool", and I also want to be able to share a pool among
threads. This means that the pool abstraction needs to be first class,
independent of the process abstraction.
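
In sketch form (again hypothetical names), "first class" just means that
several processes can hold a reference to the same pool object:

  /* Illustration only; hypothetical names.  The pool is an object in
   * its own right; N processes acting as threads of one application
   * can all attach to it and share its single quota. */
  struct pool;    /* as sketched above */

  struct process {
      struct pool *residency_pool;   /* may be shared with sibling threads */
  };

  static void attach_pool(struct process *proc, struct pool *shared)
  {
      proc->residency_pool = shared;
  }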

> > It's a granularity issue. You want the granularity of residency to be
> > independent of the granularity of either process or address space.
> 
> I want each process (the collective threads of an address space) to decide for
> itself which pages it wants in memory.

We have a term collision. Remember that EROS doesn't have threads. The
way you set up conventional multithreading is to have multiple processes
that share an endpoint and an address space. So when I say that you want
multiple processes to be able to operate out of the same pool(s), I am
basically saying the same thing that you are.

But come to think of it, you also want sharing. You want to be able to
have two codecs in your video player. Both rely on libc, but you don't
want different parts of the same application sponsoring libc
redundantly. As long as we are talking about behavior within an
application, this is all very doable without any management problems.

>   If processes on the system share
> pages, then after all the quotas are filled there are still some pages available.
> So we increase the quota (using the usual priority scheme, so if A shares a
> page with B, it is very well possible that neither A nor B gets the extra
> quota that results from it).

You cannot do this. What will happen is that the processes will expand
to *use* this quota, and at some point somebody will discover that the
guarantee was a lie. Think about it.


> > Another example: you definitely want to processes that share an address
> > space to share all of the same residency properties (think: symmetric
> > multithreading).
> 
> There are too many grammatical errors here for me to parse it unambiguously.
> Are you saying I do want two processes to share the same residency properties?

Yes, if the two processes are acting as two threads of a single
application. This is the thread/process term collision again.

> > > You said before that the list management would need to be in the kernel if
> > > it would be implemented.  I now agree with that. :-)
> > 
> > Good. Now tackle the *rest* of the challenge problem: how to manage this
> > list sensibly in constant per-frame storage in the face of sharing.
> 
> What exactly do you mean by "constant per-frame storage"?  That a shared
> page has only one mapping to disk?  I was always assuming that.

Neither. I meant that you need to find a way to (a) record all sponsors
of a frame, but (b) accomplish this MxN relationship (M sponsors, N
frames) in O(N) storage.

shap




