
Re: [Qemu-devel] [PATCH] Fix off-by-1 error in RAM migration code


From: David Gibson
Subject: Re: [Qemu-devel] [PATCH] Fix off-by-1 error in RAM migration code
Date: Sun, 4 Nov 2012 02:00:54 +1100
User-agent: Mutt/1.5.21 (2010-09-15)

On Fri, Nov 02, 2012 at 11:58:32AM +0100, Juan Quintela wrote:
> David Gibson <address@hidden> wrote:
> > On Wed, Oct 31, 2012 at 01:08:16PM +0200, Orit Wasserman wrote:
> >> On 10/31/2012 05:43 AM, David Gibson wrote:
> >> > The code for migrating (or savevm-ing) memory pages starts off by 
> >> > creating
> >> > a dirty bitmap and filling it with 1s.  Except, actually, because bit
> >> > addresses are 0-based it fills every bit except bit 0 with 1s and puts an
> >> > extra 1 beyond the end of the bitmap, potentially corrupting unrelated
> >> > memory.  Oops.  This patch fixes it.
> >> > 
> >> > Signed-off-by: David Gibson <address@hidden>
> >> > ---
> >> >  arch_init.c |    2 +-
> >> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >> > 
> >> > diff --git a/arch_init.c b/arch_init.c
> >> > index e6effe8..b75a4c5 100644
> >> > --- a/arch_init.c
> >> > +++ b/arch_init.c
> >> > @@ -568,7 +568,7 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
> >> >      int64_t ram_pages = last_ram_offset() >> TARGET_PAGE_BITS;
> >> >  
> >> >      migration_bitmap = bitmap_new(ram_pages);
> >> > -    bitmap_set(migration_bitmap, 1, ram_pages);
> >> > +    bitmap_set(migration_bitmap, 0, ram_pages);
> >> >      migration_dirty_pages = ram_pages;
> >> >  
> >> >      bytes_transferred = 0;
> >> > 
> >> You are correct, good catch.
> >> Reviewed-by: Orit Wasserman <address@hidden>
> >
> > Juan,
> >
> > Sorry, forgot to CC you on the original mailing here, which I should
> > have done.  This is a serious bug in the migration code and we should
> > apply to mainline ASAP.
> 
> Reviewed-by: Juan Quintela <address@hidden> 
> 
> Good catch, I misunderstood the function when fixing a different bug,
> and never understood why that change fixed it.

Actually... it just occurred to me that there has to be another
bug here somewhere.

I haven't actually observed any effects from the memory corruption -
though it's certainly a real bug.  I found this because another effect
of the bug is that the migration_dirty_pages count was set to 1 more
than the actual number of set bits in the bitmap.  That meant the
dirty pages count never reached zero, and so the migration/savevm
never terminated.

Except... every so often the migration *did* terminate (maybe 1
time in 5).  Also, I'd have hoped somebody would have noticed this
earlier if migrations never terminated on x86 too.  But as far as I
can tell, with an initial mismatch like this it ought to be impossible
for the dirty page count to ever reach zero.  Which suggests there is
another bug in the dirty count tracking :(.

It's possible the memory corruption could account for this, of course
- since, in theory at least, it could have almost any strange effect
on the program's behaviour.  But that doesn't seem particularly likely
to me.

-- 
David Gibson                    | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
                                | _way_ _around_!
http://www.ozlabs.org/~dgibson



