bug-hurd

Re: providing memory objects to users


From: Thomas Bushnell, BSG
Subject: Re: providing memory objects to users
Date: 12 Jun 2002 10:35:16 -0700
User-agent: Gnus/5.09 (Gnus v5.9.0) Emacs/21.2

Marcus Brinkmann <Marcus.Brinkmann@ruhr-uni-bochum.de> writes:

> Ok ;)  I do this in my current version, and it seems to work fine.  I will
> check in the code right now, but I have one more question.  When using
> libpager, I must provide my own backing store, right?  At any time, it can
> happen that pager_write_page is called with my pages, and then they are
> removed.

Yes, this is correct.

> So I need to provide my own simple backing store for the pages on the one
> hand to save the pages when they are "swapped out", and my own server side
> mapping on the other hand to access the pages.  I will use a malloc'ed
> area as backing store, and make all server side accesses to the buffer
> through the mapping.  Then pager_read_page and pager_write_page only need
> to read from and write to the malloc'ed area.

Yes, this is the right strategy.  The system-wide behavior is
suboptimal this way: when the kernel needs to free memory, it will
page things out to you, and you will effectively just sit on them,
which delays pageout.  But as long as your total number of pages
isn't large (and I see no reason it should be), this is fine.

> Seems a bit redundant to me :) but it should work.

It is redundant!

Ideally, we want a default pager to be handing out pages for things
like this.  It's just not set up to do that at present.

Hey, another strategy is for you to use an Actual File for it.  Maybe
this is the right thing: create a file in /tmp, the usual way, and
just map it.  

Roland, what do you think?  This is mostly untravelled ground, and
what Marcus does here is likely to become the "usual way" to handle
it.

Thomas


