From: Michael R. Hines
Subject: Re: [Qemu-devel] [RFC PATCH RDMA support v4: 03/10] more verbose documentation of the RDMA transport
Date: Wed, 20 Mar 2013 12:08:40 -0400
User-agent: Mozilla/5.0 (X11; Linux i686; rv:17.0) Gecko/20130106 Thunderbird/17.0.2


On 03/20/2013 11:55 AM, Michael S. Tsirkin wrote:
On Wed, Mar 20, 2013 at 11:15:48AM -0400, Michael R. Hines wrote:
OK, can we make a deal? =)

I'm willing to put in the work to perform the dynamic registration
on the destination side,
but let's go a step further and piggy-back on the effort:

We need to couple this registration with a very small modification
to save_ram_block():

Currently, save_ram_block does:

1. is RDMA turned on?     if yes, unconditionally add to next chunk
                          (will be made to dynamically register on destination)
2. is_dup_page()?         if yes, skip
3. in xbzrle cache?       if yes, skip
4. still not sent?        if yes, transmit
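
Roughly, in C, that order of checks looks like this (an illustrative
sketch only, not the actual QEMU code; the helper names are stand-ins
for the real ones):

#include <stdbool.h>
#include <stdint.h>

/* Stand-ins for the real QEMU helpers (is_dup_page(), the XBZRLE
 * cache lookup, the RDMA chunk machinery). */
bool migrate_use_rdma(void);
bool is_dup_page(const uint8_t *page);
bool in_xbzrle_cache(uint64_t offset);
void rdma_add_to_chunk(const uint8_t *page);
void transmit_page(const uint8_t *page);

void save_ram_block_sketch(const uint8_t *page, uint64_t offset)
{
    if (migrate_use_rdma()) {
        rdma_add_to_chunk(page);   /* 1. unconditionally add to next chunk */
        return;
    }
    if (is_dup_page(page)) {
        return;                    /* 2. duplicate page: skip */
    }
    if (in_xbzrle_cache(offset)) {
        return;                    /* 3. xbzrle cache hit: skip */
    }
    transmit_page(page);           /* 4. still not sent: transmit */
}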

I propose adding a "stub" function that adds:

0. is page mapped?        if no, skip   (the stub always answers "yes" for now)
1. same
2. same
3. same
4. same
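
Something like this is what I have in mind for the stub (the name is
hypothetical), slotted in as step 0 ahead of the checks above:

/* Stub for step 0: always claim the page is mapped, so nothing is
 * skipped yet.  A later patch would consult the kernel's pagemap
 * interface here instead. */
static bool migrate_page_is_mapped(const uint8_t *page)
{
    (void)page;
    return true;
}

and then at the top of save_ram_block():

    if (!migrate_page_is_mapped(page)) {
        return;                    /* 0. unmapped: skip */
    }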

Then, later, in a separate patch, I can implement /dev/pagemap support.

When that's done, RDMA dynamic registration will actually take effect
and benefit from verifying whether each page is mapped.
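
To sketch what that could look like: Linux exposes one 64-bit entry per
virtual page under /proc/<pid>/pagemap, with bit 63 set when the page
is present in RAM, so the eventual check might read something like this
(a sketch, not a patch):

#include <fcntl.h>
#include <stdbool.h>
#include <stdint.h>
#include <unistd.h>

/* Sketch: is 'addr' currently backed by a physical page?  Reads the
 * page's 64-bit entry from /proc/self/pagemap; bit 63 is the
 * "page present" bit. */
static bool page_is_mapped(const void *addr)
{
    long page_size = sysconf(_SC_PAGESIZE);
    uint64_t entry;
    off_t off = ((uintptr_t)addr / page_size) * sizeof(entry);
    bool mapped = true;            /* if we can't tell, assume mapped */
    int fd = open("/proc/self/pagemap", O_RDONLY);

    if (fd < 0) {
        return mapped;
    }
    if (pread(fd, &entry, sizeof(entry), off) == sizeof(entry)) {
        mapped = (entry >> 63) & 1;
    }
    close(fd);
    return mapped;
}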

- Michael
Mapped into guest? You mean e.g. for ballooning?


No, not just ballooning. Overcommit too (e.g. via cgroups).

Anytime cgroups kicks out a page (or anytime the balloon kicks in),
the page would become unmapped.

To make dynamic registration useful, we have to have something
in place in the future that knows how to *check* whether a page is unmapped
from the virtual machine, either because it has never been dirtied before
(and might be pointing to the zero page), because it has been madvise()d
out, or because it has been detached by a cgroup limit.
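
For a concrete picture of the madvise() case (essentially what the
balloon does to guest RAM): after MADV_DONTNEED on an anonymous
mapping, the backing page is dropped, and a check like the pagemap
sketch above would report it unmapped:

#include <sys/mman.h>

int main(void)
{
    size_t len = 4096;
    /* anonymous mapping standing in for a piece of guest RAM */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    if (p == MAP_FAILED) {
        return 1;
    }
    p[0] = 1;                       /* fault in a real page: now mapped */
    madvise(p, len, MADV_DONTNEED); /* backing dropped: unmapped again */
    munmap(p, len);
    return 0;
}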

- Michael




