From: Dr. David Alan Gilbert
Subject: Re: [Qemu-devel] [pve-devel] QEMU Live Migration - swap_free: Bad swap file entry
Date: Fri, 14 Feb 2014 09:06:13 +0000
User-agent: Mutt/1.5.21 (2010-09-15)

* Stefan Priebe (address@hidden) wrote:
> 
> On 13.02.2014 21:06, Dr. David Alan Gilbert wrote:
> >* Stefan Priebe (address@hidden) wrote:
> >>On 10.02.2014 17:07, Dr. David Alan Gilbert wrote:
> >>>* Stefan Priebe (address@hidden) wrote:
> >>>>I could fix it by explicitly disabling xbzrle - it seems it's
> >>>>automatically on if I do not set the migration caps to false.
> >>>>
> >>>>So it seems to be an xbzrle bug.
> >>>
> >>>Stefan, can you give me some more info on your hardware and
> >>>migration setup?  That stressapptest (which is a really nice
> >>>find!) really batters the memory, and it means the migration
> >>>isn't converging for me, so I'm curious what your setup is.
> >>
> >>That one was developed by Google and has been known to me for a
> >>few years.  Google found that memtest and co. are not good enough
> >>to stress-test memory.
> >
> >Hi Stefan,
> >   I've just posted a patch to qemu-devel that fixes two bugs that
> >we found; I've only tried a small stressapptest run and it seems
> >to survive with them (where it didn't before);  you might like to try
> >it if you're up for rebuilding qemu.
> >
> >It's the one entitled '[PATCH] Fix two XBZRLE corruption issues'
> >
> >I'll try and get a larger run done myself, but I'd be interested to
> >hear if it fixes it for you (or anyone else who hit the problem).
> 
> Yes, works fine - no crash now, but it's slower than without XBZRLE ;-)
> 
> Without XBZRLE: I needed migrate_downtime 4 and around 60s
> With XBZRLE: I needed migrate_downtime 16 and around 240s

Hmm; how did that compare with the previous (broken) XBZRLE time?
(i.e. was XBZRLE always slower for you?  The extra encode and cache
work per page can easily make it slower if the cache hit rate is poor.)

If you're driving this from the HMP/monitor interface, then the
output of the

      info migrate

command at the end of each of those runs would be interesting,
particularly the XBZRLE cache statistics.
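
In case it helps, the rough monitor sequence I mean is something like
this (tcp:dest:4444 just stands in for your own migration target, and
pick a cache size to suit your guest):

      (qemu) migrate_set_capability xbzrle on
      (qemu) migrate_set_cache_size 256m
      (qemu) migrate -d tcp:dest:4444
      ... wait for it to complete ...
      (qemu) info migrate

and 'migrate_set_capability xbzrle off' gives you the non-XBZRLE run
for comparison.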

Another thing you could try is changing the xbzrle_cache_zero_page
function in arch_init.c that I added so that it reads as:
static void xbzrle_cache_zero_page(ram_addr_t current_addr)
{
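    /* The page cache isn't used during the bulk stage, and with
     * XBZRLE disabled there's no cached copy to update. */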
    if (ram_bulk_stage || !migrate_use_xbzrle()) {
        return;
    }

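    /* Only refresh pages that are already in the cache; don't
     * insert new entries just because a page has become zero. */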
    if (!cache_is_cached(XBZRLE.cache, current_addr)) {
        return;
    }

    /* We don't care if this fails to allocate a new cache page
     * as long as it updated an old one */
    cache_insert(XBZRLE.cache, current_addr, ZERO_TARGET_PAGE);
}
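
(With the extra cache_is_cached() test there, a zero page only
overwrites an entry that's already in the cache, rather than
inserting a fresh entry and possibly evicting something useful.)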

Dave
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK


