Re: [Qemu-devel] [RFC] Split migration bitmaps by ramblock


From: Dr. David Alan Gilbert
Subject: Re: [Qemu-devel] [RFC] Split migration bitmaps by ramblock
Date: Fri, 31 Mar 2017 18:50:40 +0100
User-agent: Mutt/1.8.0 (2017-02-23)

* Juan Quintela (address@hidden) wrote:
> "Dr. David Alan Gilbert" <address@hidden> wrote:
> > * Juan Quintela (address@hidden) wrote:
> >> Note that there are two reasons for this: ARM and PPC do things like
> >> running guests with 4kb pages on hosts with 16/64kb pages, and then
> >> we have HugePages.  Note all the workarounds that postcopy has to do
> >> to work at HugePage size.
> >
> > There are some fun problems with changing the bitmap page size;
> > off the top of my head, the ones I can remember include:
> >     a) I'm sure I've seen rare cases where a target page is marked as
> >        dirty inside a host page; I'm guessing that was qemu's doing, but
> >        there are more subtle cases, e.g. running a guest with 4kb pages
> >        on a 64kb host; it's legal - and 4kb power guests used to exist;
> >        I think in those cases you see KVM only marking one target page
> >        as dirty.
> 
>         /*
>          * bitmap-traveling is faster than memory-traveling (for addr...)
>          * especially when most of the memory is not dirty.
>          */
>         for (i = 0; i < len; i++) {
>             if (bitmap[i] != 0) {
>                 c = leul_to_cpu(bitmap[i]);
>                 do {
>                     j = ctzl(c);
>                     c &= ~(1ul << j);
>                     page_number = (i * HOST_LONG_BITS + j) * hpratio;
>                     addr = page_number * TARGET_PAGE_SIZE;
>                     ram_addr = start + addr;
>                     cpu_physical_memory_set_dirty_range(ram_addr,
>                                        TARGET_PAGE_SIZE * hpratio, clients);
>                 } while (c != 0);
>             }
>         }
> 
> 
> This is the code that we end up using when we are synchronizing from
> KVM, so if we don't have all the pages of a host page set to one (or
> zero), I think we are doing something wrong, no?  Or am I
> misunderstanding the code?

Hmm, it does look that way - so perhaps the case I was seeing was just
qemu setting it somewhere?
(I definitely remember seeing it, because I remember dumping the bitmaps
and checking them; but I can't remember the circumstances.)
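
For reference, here is a minimal, standalone sketch of what that hpratio
expansion does (the page sizes below are just example values and the
helper is simplified - this is not the real QEMU code):

    #include <stdio.h>

    #define HOST_PAGE_SIZE    (64 * 1024)   /* example: 64kb host pages  */
    #define TARGET_PAGE_SIZE  (4 * 1024)    /* example: 4kb target pages */
    #define HOST_LONG_BITS    (sizeof(unsigned long) * 8)

    /* Expand one KVM dirty bit (word i, bit j) into hpratio target-page
     * bits of the migration bitmap, mirroring the loop quoted above. */
    static void expand_host_bit(unsigned long *migration_bitmap,
                                size_t i, int j, unsigned long hpratio)
    {
        unsigned long page_number = (i * HOST_LONG_BITS + j) * hpratio;

        /* Every target page inside the host page gets marked dirty,
         * which is why the sub-pages should always be set together. */
        for (unsigned long k = 0; k < hpratio; k++) {
            unsigned long p = page_number + k;
            migration_bitmap[p / HOST_LONG_BITS] |= 1ul << (p % HOST_LONG_BITS);
        }
    }

    int main(void)
    {
        unsigned long hpratio = HOST_PAGE_SIZE / TARGET_PAGE_SIZE;  /* 16 */
        unsigned long migration_bitmap[16] = { 0 };

        /* Pretend KVM reported host page 3 (word 0, bit 3) as dirty. */
        expand_host_bit(migration_bitmap, 0, 3, hpratio);

        printf("hpratio=%lu word0=%#lx\n", hpratio, migration_bitmap[0]);
        return 0;
    }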

> >     b) Are we required to support migration across hosts of different
> >        pagesize; and if we do that, what size should a bit represent?
> >        People asked about it during postcopy but I think it's restricted
> >        to matching sizes.  I don't think precopy has any requirement for
> >        matching host pagesize at the moment.  64bit ARM does 4k, 64k and
> >        I think 16k was added later.
> 
> With current precopy, we should work independently of the host page size
> (famous last words), and as a first step I will only send pages of
> TARGET_PAGE_SIZE.  I will only change the bitmaps.  We can add bigger
> pages later.
> 
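
A rough sketch of what "only change the bitmaps" might look like - one
dirty bitmap per RAMBlock, indexed by TARGET_PAGE_SIZE pages within the
block; the struct and function names below are hypothetical, not the
actual patch:

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    #define TARGET_PAGE_BITS 12              /* example: 4kb target pages */
    #define BITS_PER_LONG    (sizeof(unsigned long) * 8)

    /* Hypothetical per-block state; a real RAMBlock has many more fields. */
    struct block_bitmap {
        uint64_t used_length;                /* bytes of guest RAM in block */
        unsigned long *dirty;                /* one bit per target page */
    };

    static int block_bitmap_alloc(struct block_bitmap *b, uint64_t used_length)
    {
        uint64_t pages = used_length >> TARGET_PAGE_BITS;
        size_t words = (pages + BITS_PER_LONG - 1) / BITS_PER_LONG;

        b->used_length = used_length;
        b->dirty = calloc(words, sizeof(unsigned long));
        return b->dirty ? 0 : -1;
    }

    /* Bits are indexed by the offset *within* the block, so the bitmap no
     * longer cares where the block sits in ram_addr space, nor about the
     * host page size. */
    static void block_bitmap_set_dirty(struct block_bitmap *b, uint64_t offset)
    {
        uint64_t page = offset >> TARGET_PAGE_BITS;
        b->dirty[page / BITS_PER_LONG] |= 1ul << (page % BITS_PER_LONG);
    }

    int main(void)
    {
        struct block_bitmap b;

        if (block_bitmap_alloc(&b, 1ull << 30)) {    /* example: 1GB block */
            return 1;
        }
        block_bitmap_set_dirty(&b, 0x20000);         /* page at offset 128kb */
        printf("word0=%#lx\n", b.dirty[0]);
        free(b.dirty);
        return 0;
    }
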
> >     c) Hugepages have similar issues; precopy doesn't currently have any
> >        requirement for the hugepage selection on the two hosts to match,
> >        but it does on postcopy.  Also you don't want to have a single dirty
> >        bit for a 1GB host hugepage if you can handle detecting changes at
> >        a finer grain level.
> 
> I agree here; I was thinking more about the Power/ARM case than the
> HugePage case.  For 2MB pages we could think about doing it; for the
> 1GB case it is not going to work.
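
As a back-of-the-envelope illustration of why one dirty bit per 1GB page
is too coarse (the 16GB guest size below is just an example):

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t ram = 16ull << 30;                  /* example: 16GB guest */
        uint64_t page[] = { 4ull << 10, 2ull << 20, 1ull << 30 };
        const char *name[] = { "4kb", "2MB", "1GB" };

        for (int i = 0; i < 3; i++) {
            /* One dirty bit forces re-sending a whole page of this size. */
            printf("%s pages: %" PRIu64 " dirty bits, %" PRIu64
                   " bytes re-sent per dirty bit\n",
                   name[i], ram / page[i], page[i]);
        }
        return 0;
    }

At 2MB granularity that 16GB guest still has 8192 bits to work with; at
1GB it has only 16, and a single dirty byte costs a gigabyte on the wire.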

Yep,

Dave

> Later, Juan.
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK


