l4-hurd

Re: VMM


From: Farid Hajji
Subject: Re: VMM
Date: Sun, 13 Oct 2002 20:21:39 +0200 (CEST)

> In our case, pmap would be L4's interface: no matter what architecture
> you are on, you get the same semantics.  I have read the pmap
> interface and I am not sure what is to be gained by implementing it on
> top of the L4 semantics.
Yes, I have a (theoretical) pmap.c that uses the X.2 mapping semantics.
It is effectively a null layer, so you could say that pmap is already
part of L4 ;)
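
To make the null-layer point concrete, here is a minimal sketch. All
names in it (l4_map_fpage, the exact pmap_enter signature) are invented
for illustration; in real X.2, mappings travel as MapItems inside IPC,
so this only shows the shape of the pass-through:

/* pmap as a null layer: pmap_enter() keeps no machine-dependent
 * page tables of its own and collapses into one L4-style map op. */
#include <stdint.h>
#include <stdio.h>

typedef uintptr_t vaddr_t;
typedef uintptr_t paddr_t;

/* Hypothetical stand-in for an X.2 mapping (really a MapItem in IPC). */
static void l4_map_fpage(paddr_t phys, vaddr_t virt, unsigned log2size,
                         int writable)
{
    printf("map %#lx -> %#lx (2^%u bytes, %s)\n",
           (unsigned long)phys, (unsigned long)virt, log2size,
           writable ? "rw" : "ro");
}

void pmap_enter(paddr_t pa, vaddr_t va, int prot_write)
{
    l4_map_fpage(pa, va, 12 /* one 4 KiB page */, prot_write);
}

int main(void)
{
    pmap_enter(0x100000, 0x40000000, 1);
    return 0;
}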

> > Actually, separating the physical memory server from the (multiple!)
> > VM servers would lead to more inefficiencies w.r.t. zeroing memory:
> > In an integrated pmap+VM system, the VM system knows when zeroed-
> > pages are needed and when uninitialized pages would be enough.
> > In the distributed case, the 'pmap' would always need to zero
> > pages before handing them over to the competing VM servers (for
> > obvious security reasons). That's a lot more cycles to the memory
> > bus. Because the phys-server would probably be accessed very often
> > by VM servers that do aggressive memory allocation/deallocation
> > [of scratch buffers e.g.]; that would lead to a _lot_ of unnecessary
> > zeroing. Hmmm....
> 
> How is that true?  When you mmap anonymous memory, it is guaranteed to
> be zeroed.  As you state, Unix makes this requirement for security
> reasons.  When the memory is reused locally, we do not have that type
> of concern.

Of course you don't need to zero memory within local servers that
do their own VM (though even there it would be risky not to).
The point is that as soon as memory crosses a management boundary,
zeroing must be done. How much that costs depends on the number of
pages floating between the physical server and the clients...
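
A tiny sketch of that asymmetry (all names invented): a page recycled
inside one management domain may skip the memset, while a page crossing
the phys-server boundary may not:

#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Reuse within the same VM server: contents stay private, so zero
 * only when the caller needs anonymous-memory semantics. */
void *local_alloc_page(void *recycled, int need_zero)
{
    if (need_zero)
        memset(recycled, 0, PAGE_SIZE);
    return recycled;
}

/* Handoff across domains: the phys-server cannot know what the
 * previous owner left behind, so it must always scrub. */
void *phys_server_grant_page(void *page)
{
    memset(page, 0, PAGE_SIZE);       /* unconditional, for security */
    return page;
}

int main(void)
{
    void *p = aligned_alloc(PAGE_SIZE, PAGE_SIZE);
    p = phys_server_grant_page(p);    /* always zeroed */
    p = local_alloc_page(p, 0);       /* may stay dirty */
    free(p);
    return 0;
}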

> As for aggressive allocation/deallocation: fix your damn server.
> There is no reason to do this and not keep the memory in a local
> cache.

...which may or may not use caching etc., but which, more
importantly, will interact with the phys-server more often than not
(agreed, it depends upon the clients). There is a trade-off here.
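
For what it's worth, the local cache you suggest could look like the
sketch below (names invented; real code would bound the cache size and
eventually hand surplus pages back to the phys-server):

#include <stdlib.h>

#define PAGE_SIZE 4096

struct free_page { struct free_page *next; };
static struct free_page *cache;       /* client-local free list */

/* Hypothetical stand-in for the cross-domain (IPC) allocation. */
static void *phys_server_alloc(void)
{
    return aligned_alloc(PAGE_SIZE, PAGE_SIZE);
}

void *client_alloc_page(void)
{
    if (cache) {                      /* fast path: no IPC */
        struct free_page *p = cache;
        cache = p->next;
        return p;
    }
    return phys_server_alloc();       /* slow path: cross domain */
}

void client_free_page(void *page)
{
    struct free_page *p = page;       /* keep it instead of returning it */
    p->next = cache;
    cache = p;
}

int main(void)
{
    void *a = client_alloc_page();
    client_free_page(a);              /* goes into the cache... */
    void *b = client_alloc_page();    /* ...and comes back, IPC-free */
    return b == a ? 0 : 1;
}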

> > Sure, clients know best what to do with their memory.
> > BUT, they'll still compete with other clients, who have
> > conflicting goals.
> 
> And how does this change when a monolithic VMM is managing memory?

The monolithic (or single, if you prefer) VMM can make educated
guesses about the working set of the whole set of applications, not
just the working set of one application. Individual apps don't see
what's going on globally and can't adapt themselves without some
hints from the outside.

> > The "magic" of VM really lives in the VM module/task/phys-server/...
> > which resolves the conflicts by determining which pages are more
> > recently used, etc. No single client can do this, because it simply
> > doesn't see the "big picture". 
> 
> Most recently used is a farce: it does not reflect the "big picture."
> Many applications will benefit from the ability to make decisions
> about the memory they use: there are many papers covering this topic.

Of course, LRU, MRU, best hit etc. are all optimized for a specific
usage pattern. The point is that such algorithms only make sense when
they operate on the whole set of applications, not on each one
separately.
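
As a toy illustration of "operating on the whole set" (not any real
pager; the data and names are made up): a single second-chance clock
sweeping frames owned by *all* tasks finds a global victim that no
per-task policy could identify:

#include <stdio.h>

struct page { int task; int referenced; };

static struct page frames[] = {
    { .task = 1, .referenced = 1 },
    { .task = 2, .referenced = 0 },   /* the global victim */
    { .task = 1, .referenced = 1 },
};
#define NFRAMES (sizeof frames / sizeof frames[0])

/* Classic clock sweep: clear reference bits until an unreferenced
 * frame turns up; which task owns it is irrelevant to the sweep. */
int choose_victim(void)
{
    static unsigned hand;
    for (;;) {
        unsigned i = hand++ % NFRAMES;
        if (!frames[i].referenced)
            return (int)i;
        frames[i].referenced = 0;     /* second chance */
    }
}

int main(void)
{
    int v = choose_victim();
    printf("evict frame %d (owned by task %d)\n", v, frames[v].task);
    return 0;
}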

> > Consider an algorithm by which
> > the mem-server IPCs the clients, requiring them to clean up some
> > pages and return them to the mem-server. Some clients may be able
> > to do so, others won't and some may be buggy. So finally, it will
> > still be mem-server's call to evict more pages, should the pages
> > that are returned on client's free will not be enough... etc.
> 
> Nope.  If the client fails to evict the required pages in the amount of
> time given, the physical memory server revokes all of the client's
> contracts and reclaims the memory.

That is what I meant: additional interaction (like forcibly killing
the client) would be necessary. So you give the client an opportunity
to optimize-or-die. I don't see why this should be more efficient than
simply doing statistical page-out over the complete set.
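
Sketched out, that optimize-or-die round might look like this (every
name and the callback shape are invented; a real phys-server would
bound the request with an IPC timeout instead of a return value):

#include <stdio.h>

struct client {
    const char *name;
    /* Ask the client to give back n pages; returns how many it
     * actually freed (a buggy client may return 0 or hang). */
    int (*release_pages)(int n);
};

static int cooperative(int n) { return n; }     /* frees all asked */
static int buggy(int n)       { (void)n; return 0; }

void reclaim(struct client *c, int wanted)
{
    int freed = c->release_pages(wanted);
    if (freed < wanted)
        /* Deadline missed: revoke the contract, take it all back. */
        printf("%s: revoking contract, reclaiming by force\n", c->name);
    else
        printf("%s: returned %d pages voluntarily\n", c->name, freed);
}

int main(void)
{
    struct client a = { "editor",        cooperative };
    struct client b = { "broken-daemon", buggy };
    reclaim(&a, 8);
    reclaim(&b, 8);
    return 0;
}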

> Please read the presentation again, I thought it was relatively clear
> on some of the points that you seem to have confused.

Please give me more time to think about it. This was but a first
ad-hoc reply ;)

Regards,

-Farid.

-- 
Farid Hajji -- Unix Systems and Network Admin | Phone: +49-2131-67-555
Broicherdorfstr. 83, D-41564 Kaarst, Germany  | address@hidden
- - - - - - - - - - - - - - - - - - - - - - - + - - - - - - - - - - - -
Due to budget cuts, light at end of tunnel will be out. --Unknown.
