From: Dr. David Alan Gilbert
Subject: Re: [Qemu-devel] [PATCH 1/1] Count used RAMBlock pages for migration_dirty_pages
Date: Fri, 21 Mar 2014 13:22:27 +0000
User-agent: Mutt/1.5.21 (2010-09-15)

* Juan Quintela (address@hidden) wrote:
> "Dr. David Alan Gilbert (git)" <address@hidden> wrote:
> > From: "Dr. David Alan Gilbert" <address@hidden>
> >
> > This is a fix for a bug* triggered by a migration after hot unplugging
> > a few virtio-net NICs, which caused migration never to converge
> > because 'migration_dirty_pages' was incorrectly initialised.
> 
> Good catch.
> 
> > 'migration_dirty_pages' is used as a tally of the number of outstanding
> > dirty pages, to give the migration code an idea of how much more data
> > will need to be transferred, and thus whether it can end the iterative
> > phase.
> >
> > It was initialised to the total size of the RAMBlock address space;
> > however, hot-unplug can leave this space sparse, and hence
> > migration_dirty_pages ended up too large.
> >
> > Note that the code tries to be careful when counting to deal with
> > RAMBlocks that share the same end/start page - I don't know
> > if this is actually possible and it does complicate the code,
> > but since there was other code that dealt with unaligned RAMBlocks
> > it seemed possible.
> 
> Couldn't we just check at block addition that they don't overlap?
> 
> What code do you mean?
> 
> My understanding is that the "normal" way of creating new RAMBlocks is
> with qemu_ram_alloc_from_ptr(), and my reading is that blocks never
> overlap.  (Important words of the sentence: "my reading".)

I don't think they overlap, but I worry that the end of one block
and the start of the next might be on the same page.
The code that got me worried was migration_bitmap_sync_range,
which seemed to be general; but actually that's worrying about 64-bit
words, not pages.
What happens with things like '/address@hidden/table-loader', which is
only 4k on x86, when they are on boxes with a bigger target page?
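
To make the worry concrete, here's a minimal standalone sketch (hypothetical
block layout, not QEMU code): if one block ends mid-page and the next starts
on that same page, summing each block's page span counts the shared page
twice, which is what the bitmap-clearing in the patch guards against.

#include <stdint.h>
#include <stdio.h>

#define TARGET_PAGE_BITS 12
#define TARGET_PAGE_SIZE (1ULL << TARGET_PAGE_BITS)

/* Number of target pages a block touches, first to last inclusive. */
static uint64_t span_pages(uint64_t start, uint64_t len)
{
    uint64_t first = start >> TARGET_PAGE_BITS;
    uint64_t last  = (start + len - 1) >> TARGET_PAGE_BITS;
    return last - first + 1;
}

int main(void)
{
    /* Block A is 4k + 512 bytes from offset 0; block B is 4k starting
     * right after it, so page 1 holds A's tail and B's head. */
    uint64_t a_start = 0, a_len = TARGET_PAGE_SIZE + 512;
    uint64_t b_start = a_start + a_len, b_len = TARGET_PAGE_SIZE;

    /* A spans pages 0-1, B spans pages 1-2: the per-block sum is 4,
     * but only 3 distinct pages are used - page 1 is counted twice. */
    printf("per-block sum: %llu, distinct pages: 3\n",
           (unsigned long long)(span_pages(a_start, a_len) +
                                span_pages(b_start, b_len)));
    return 0;
}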


> > Signed-off-by: Dr. David Alan Gilbert <address@hidden>
> >
> > (* https://bugzilla.redhat.com/show_bug.cgi?id=1074913 )
> > ---
> >  arch_init.c | 41 +++++++++++++++++++++++++++++++++++++----
> >  1 file changed, 37 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch_init.c b/arch_init.c
> > index f18f42e..ef0e98d 100644
> > --- a/arch_init.c
> > +++ b/arch_init.c
> > @@ -727,11 +727,8 @@ static void reset_ram_globals(void)
> >  static int ram_save_setup(QEMUFile *f, void *opaque)
> >  {
> >      RAMBlock *block;
> > -    int64_t ram_pages = last_ram_offset() >> TARGET_PAGE_BITS;
> > +    int64_t ram_bitmap_pages;
> >  
> > -    migration_bitmap = bitmap_new(ram_pages);
> > -    bitmap_set(migration_bitmap, 0, ram_pages);
> > -    migration_dirty_pages = ram_pages;
> >      mig_throttle_on = false;
> >      dirty_rate_high_cnt = 0;
> >  
> > @@ -770,6 +767,42 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
> >      bytes_transferred = 0;
> >      reset_ram_globals();
> >  
> > +    ram_bitmap_pages = last_ram_offset() >> TARGET_PAGE_BITS;
> > +    migration_bitmap = bitmap_new(ram_bitmap_pages);
> > +    bitmap_set(migration_bitmap, 0, ram_bitmap_pages);
> > +    /*
> > +     * Count the total number of pages used by ram blocks. We clear the dirty
> > +     * bit for the start/end of each ramblock as we go so that we don't double
> > +     * count ramblocks that have overlapping pages - at entry the whole dirty
> > +     * bitmap is set.
> > +     */
> > +    migration_dirty_pages = 0;
> > +    QTAILQ_FOREACH(block, &ram_list.blocks, next) {
> > +        uint64_t block_pages = 0;
> > +        ram_addr_t saddr, eaddr;
> > +
> > +        saddr = block->mr->ram_addr;
> > +        eaddr = saddr + block->length - 1;
> 
> If my assumption is true:  block->length-1 / TARGET_PAGE_SIZE (rounded
> up) should be enough, no?
> 
> The reason for this is that migration bitmap handling is already slow,
> and we are adding a whole two passes here?

Oh yes, if we are sure the ramblocks never meet on the same page, all this
becomes a lot simpler (indeed it was a lot simpler yesterday, before I got
worried about them touching).
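
For reference, the simpler version Juan is suggesting would be roughly this
untested fragment (same loop shape as the patch hunk above; only valid if
blocks never share a target page):

    migration_dirty_pages = 0;
    QTAILQ_FOREACH(block, &ram_list.blocks, next) {
        /* Round each block's length up to whole target pages; this is
         * only correct if blocks never share a page. */
        migration_dirty_pages +=
            (block->length + TARGET_PAGE_SIZE - 1) >> TARGET_PAGE_BITS;
    }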

Dave
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK
