
Re: VMM


From: Neal H. Walfield
Subject: Re: VMM
Date: 13 Oct 2002 14:00:18 -0400
User-agent: Gnus/5.0808 (Gnus v5.8.8) Emacs/21.2

> 1. The physical memory server will need an algorithm to decide:
>    * which pages to revoke once memory becomes scarce

Yes and no.  First, (from the physical memory server's perspective)
there are two types of memory: guaranteed memory and extra memory.
Guaranteed memory is just that: guaranteed.  When a task starts, it
(or its parent) negotiates for some guaranteed memory on a medium-term
contract.  Extra memory is short-term memory that a task may be
granted by the physical memory server.  The catch is that the physical
memory server may ask for that memory back at any time: the client
knows this and has to make the appropriate decisions.  All memory is
allocated lazily, using copy on write (in fact, the actual physical
address is not guaranteed except when the memory is explicitly
locked).
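
To make that concrete, the client-visible interface would look
roughly like the following.  All of these names are invented for
illustration; this is not actual Hurd or L4 code, just the shape of
the guaranteed/extra split and the "give it back" rule.

/* A count of physical frames.  */
typedef unsigned long pm_frames_t;

/* Negotiate a medium-term contract for FRAMES frames of guaranteed
   memory.  The server never overcommits guaranteed memory, so this
   fails rather than promising frames it cannot deliver.  */
int pm_guarantee (pm_frames_t frames);

/* Ask for up to FRAMES frames of extra (short-term) memory.  The
   server may grant fewer than requested and may later ask for any of
   them back.  Returns the number of frames actually granted.  */
pm_frames_t pm_request_extra (pm_frames_t frames);

/* Upcall from the server: return at least FRAMES extra frames before
   DEADLINE.  Which pages the client flushes, swaps or discards in
   order to do so is entirely the client's decision.  */
int pm_reclaim_extra (pm_frames_t frames, unsigned long deadline);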

What all of this means is that the physical memory server never
revokes specific pages: it may ask a particular client for a number of
pages back, but it is up to the client to decide which pages to send
to the file system, push to swap, discard, etc.
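
A client-side sketch of that, with an invented page list and invented
helpers, might look like this; the only point is that the victims are
chosen by the client's own policy, not by the server:

#include <stdbool.h>
#include <stddef.h>

struct page
{
  struct page *next;
  void *addr;
  bool dirty;         /* Must be written back before being dropped.  */
  bool discardable;   /* E.g. a clean cache page we can regenerate.  */
};

extern struct page *page_list;                  /* This task's pages.  */
extern int write_back (struct page *);          /* To the FS or swap.  */
extern void give_frame_to_server (void *addr);

/* Return up to WANTED frames to the physical memory server: discard
   discardable pages first, then clean pages, then dirty pages after
   writing them back.  Returns the number of frames actually given up.
   (Freeing the struct page itself is omitted.)  */
size_t
return_frames (size_t wanted)
{
  size_t returned = 0;

  for (int pass = 0; pass < 3 && returned < wanted; pass++)
    {
      struct page **pp = &page_list;
      while (*pp && returned < wanted)
        {
          struct page *p = *pp;
          bool victim = (pass == 0 && p->discardable)
            || (pass == 1 && !p->dirty)
            || (pass == 2 && write_back (p) == 0);

          if (!victim)
            {
              pp = &p->next;
              continue;
            }

          *pp = p->next;        /* Unlink before handing the frame back.  */
          give_frame_to_server (p->addr);
          returned++;
        }
    }

  return returned;
}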

>    * what resources to allocate to clients, without risking
>      starving those clients (e.g. wired memory, etc...)

All allocated memory is wired: the physical memory server never
overcommits guaranteed memory.  As for being fair, I have not done
much research in this area; however, I hope to explore some type of
economic model.

> 2. Competing VM managers (possibly with non-compatible
>    strategies) that access a physical memory pool (or server)
>    can have conflicting goals. This behavior has not been carefully
>    studied yet (AFAIK. If you know of any paper, please let me
>    know).

You seem confused.  If I have interpreted this paragraph and the rest
of your email correctly, you think I am suggesting a single physical
memory server, multiple VMMs and then clients talking to the different
VMMs.  This is not what I am aiming for at all.  Rather, in my model,
there is normally a single physical memory server and all of the tasks
in the system interact with it.  Each task is then its own VMM: it
locally implements paging, page eviction, sbrk, mmap, etc.
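
For instance, sbrk lives entirely inside the task, on top of frames
obtained from the physical memory server.  A rough sketch follows;
pm_map_frame and the break address are invented, in the real design
the frames would arrive as L4 mappings and page faults on the region
would go to the task's own pager, and shrinking the break (unmapping
and returning frames) is omitted:

#include <stdint.h>
#include <errno.h>

#define PAGE_SIZE 4096UL

/* Invented: back the page at VADDR with one frame from this task's
   guaranteed memory; returns 0 on success.  */
extern int pm_map_frame (void *vaddr);

static uintptr_t brk_end = 0x40000000;  /* Arbitrary local choice.  */

void *
my_sbrk (intptr_t increment)
{
  uintptr_t old_end = brk_end;
  uintptr_t new_end = old_end + increment;

  /* Back every newly covered page; which frames to use, and when to
     give them back, is entirely this task's business.  */
  for (uintptr_t page = (old_end + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
       page < new_end; page += PAGE_SIZE)
    if (pm_map_frame ((void *) page) != 0)
      {
        errno = ENOMEM;
        return (void *) -1;
      }

  brk_end = new_end;
  return (void *) old_end;
}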

> It may be wise to keep strategies out of a physical memory server.
> Consider UVM: The whole system uses a physical memory management
> module called pmap, which implements a policy-free interface.
> [Actually, pmap is very easy to implement in L4]. The decisions
> which pages to hand out, which ones to page out/in and which ones
> to evict, are not pmap's but UVM's.

In our case, pmap would be L4's interface: no matter what architecture
you are on, you get the same semantics.  I have read the pmap
interface and I am not sure what is to be gained by implementing it on
top of the L4 semantics.

> Actually, separating the physical memory server from the (multiple!)
> VM servers would lead to more inefficiencies w.r.t. zeroing memory:
> In an integrated pmap+VM system, the VM system knows when zeroed
> pages are needed and when uninitialized pages would be enough.
> In the distributed case, the 'pmap' would always need to zero
> pages before handing them over to the competing VM servers (for
> obvious security reasons). That's a lot more cycles to the memory
> bus. Because the phys-server would probably be accessed very often
> by VM servers that do aggressive memory allocation/deallocation
> [of scratch buffers e.g.]; that would lead to a _lot_ of unnecessary
> zeroing. Hmmm....

How is that true?  When you mmap anonymous memory, it is guaranteed to
be zeroed.  As you state, Unix makes this requirement for security
reasons.  When the memory is reused locally, we do not have that type
of concern.

As for aggressive allocation/deallocation: fix your damn server.
There is no reason to do that instead of keeping the memory in a local
cache.
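
A trivial local cache is all that is needed.  For example (mmap
stands in here for getting a fresh, zeroed page from the physical
memory server; recycled pages never leave the task and never need to
be zeroed again):

#include <stddef.h>
#include <sys/mman.h>

#define PAGE_SIZE 4096UL

static void *free_pages;    /* Singly linked list of cached pages.  */

void *
get_scratch_page (void)
{
  if (free_pages)
    {
      /* Reuse one of our own pages: no trip to the server, no zeroing.  */
      void *page = free_pages;
      free_pages = *(void **) page;
      return page;
    }

  /* Cache empty: get a fresh page.  It arrives zeroed, for the usual
     security reasons.  */
  void *page = mmap (NULL, PAGE_SIZE, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  return page == MAP_FAILED ? NULL : page;
}

void
put_scratch_page (void *page)
{
  /* Keep the page for the next allocation instead of handing it back.
     A real task would trim this cache when asked to return extra
     frames.  */
  *(void **) page = free_pages;
  free_pages = page;
}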

> Sure, clients know best what to do with their memory.
> BUT, they'll still compete with other clients, who have
> conflicting goals.

And how does this change when a monolithic VMM is managing memory?

> The "magic" of VM really lives in the VM module/task/phys-server/...
> which resolves the conflicts by determining which pages are more
> recently used, etc. No single client can do this, because it simply
> doesn't see the "big picture". 

"Most recently used" is a farce: it does not reflect the "big picture."
Many applications will benefit from the ability to make decisions
about the memory they use: there are many papers covering this topic.

> It all boils down to this: You partition
> the memory space into sub-spaces that would be individually managed
> and locally optimized, but you won't get global optimization here.

No, I don't.

> Basically, if you allocate a "fixed" amount of physical memory to
>    clients, this would almost always be sub-optimal:

I don't do this either.

> Consider an algorithm by which
> the mem-server IPCs the clients, requiring them to clean up some
> pages and return them to the mem-server. Some clients may be able
> to do so, others won't, and some may be buggy.  So finally, it will
> still be the mem-server's call to evict more pages, should the pages
> returned of the clients' own free will not be enough... etc.

Nope.  If the client fails to evict the required pages in the amount
of time given, the physical memory server revokes all of the client's
contracts and reclaims the memory.
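
From the physical memory server's side the enforcement is deliberately
simple.  Roughly (the types and helpers here are invented for
illustration, not actual server code):

struct client
{
  unsigned long extra_owed;   /* Frames asked back, not yet returned.  */
  unsigned long deadline;     /* Absolute time by which they are due.  */
};

extern unsigned long now (void);
extern void revoke_all_contracts (struct client *);   /* Reclaim everything.  */

/* Run when a reclaim deadline expires.  Guaranteed memory is never
   overcommitted, so this is the only case in which the server itself
   decides what a client loses: an uncooperative client loses it all.  */
void
check_deadline (struct client *c)
{
  if (c->extra_owed > 0 && now () > c->deadline)
    revoke_all_contracts (c);
}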

> Think also about the benefits that resulted from the merge of VM
> and buffer cache. Sure, the result is an ugly mess of dependencies,
> yet it is blindingly fast. Moreover, it's practical too, not only
> for mmap(2) & friends. Here again, unifying the memory requirements
> into a single VM management space (like UVM, VM+Buffercache, etc...)
> is a Good Thing(tm), because of the "Big Picture" mentioned earlier.
> The system simply adapts better to global requirements and adjusts
> itself more gracefully.

This is unified, but in the tasks themselves: the physical memory
server does not know about buffers, etc.

Please read the presentation again; I thought it was relatively clear
on some of the points that you seem to have confused.

Thanks.




