From: Stefan Priebe - Profihost AG
Subject: Re: [Qemu-devel] [pve-devel] QEMU Live Migration - swap_free: Bad swap file entry
Date: Tue, 11 Feb 2014 15:49:55 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.2.0

On 11.02.2014 14:45, Orit Wasserman wrote:
> On 02/11/2014 03:33 PM, Stefan Priebe - Profihost AG wrote:
>>
>> On 11.02.2014 14:32, Orit Wasserman wrote:
>>> On 02/08/2014 09:23 PM, Stefan Priebe wrote:
>>>> I could fix it by explicitly disabling XBZRLE - it seems it is
>>>> automatically on if I do not set the migration caps to false.
>>>>
>>>> So it seems to be an XBZRLE bug.
>>>>
>>>
>>> XBZRLE is disabled by default (actually all capabilities are off by
>>> default).
>>> What version of QEMU are you using that you need to disable it
>>> explicitly?
>>> Maybe you ran a migration with XBZRLE and canceled it, so it stays on?
>>
>> No real idea why this happens - but yes this seems to be a problem for
>> me.
>>
> 
> I checked upstream QEMU and it is still off by default (it always has been).

Maybe I had it on in the past and the VM was still running from an
older migration.
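For what it's worth, capability state can be forced off explicitly before every migration via the QMP `migrate-set-capabilities` command, so nothing left over from an earlier (e.g. canceled) migration attempt is silently reused. A minimal sketch of the command a management layer would send (the surrounding QMP socket handshake is omitted here):

```python
import json

# QMP command that explicitly turns the xbzrle capability off.
# Issuing this before every `migrate` guards against a capability
# that was left enabled by a previous migration attempt.
cmd = {
    "execute": "migrate-set-capabilities",
    "arguments": {
        "capabilities": [
            {"capability": "xbzrle", "state": False},
        ],
    },
}

wire = json.dumps(cmd)  # this JSON line is what goes over the QMP socket
```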

>> But the bug in XBZRLE is still there ;-)
>>
> 
> We need to understand the exact scenario in order to understand the
> problem.
> 
> What exact version of QEMU are you using?

QEMU 1.7.0

> Can you try with the latest upstream version, there were some fixes to the
> XBZRLE code?

Sadly not - I have some custom patches (not related to XBZRLE) which
won't apply to current upstream.

But I could cherry-pick the ones you have in mind - if you give me the
commit IDs.

Stefan

>> Stefan
>>
>>> Orit
>>>
>>>> Stefan
>>>>
>>>> On 07.02.2014 21:10, Stefan Priebe wrote:
>>>>> On 07.02.2014 21:02, Dr. David Alan Gilbert wrote:
>>>>>> * Stefan Priebe (address@hidden) wrote:
>>>>>>> anything i could try or debug? to help to find the problem?
>>>>>>
>>>>>> I think the most useful thing would be to see whether the problem
>>>>>> is new in the 1.7 you're using or has existed for a while;
>>>>>> depending on the machine type you used, it might be possible to
>>>>>> load that image on an earlier (or newer) QEMU and try the same
>>>>>> test. However, if the problem doesn't repeat reliably, that can
>>>>>> be hard.
>>>>>
>>>>> I first saw this with QEMU 1.5 but was not able to reproduce it
>>>>> for months. 1.4 was working fine.
>>>>>
>>>>>> If you have any way of simplifying the configuration of the
>>>>>> VM it would be good; e.g. if you could get a failure on
>>>>>> something without graphics (-nographic) and USB.
>>>>>
>>>>> Sadly not ;-(
>>>>>
>>>>>> Dave
>>>>>>
>>>>>>>
>>>>>>> Stefan
>>>>>>>
>>>>>>> On 07.02.2014 14:45, Stefan Priebe - Profihost AG wrote:
>>>>>>>> It's always the same "pattern": there are too many 0s instead of
>>>>>>>> the expected values.
>>>>>>>>
>>>>>>>> only seen:
>>>>>>>>
>>>>>>>> read:0x0000000000000000 ... expected:0xffffffffffffffff
>>>>>>>>
>>>>>>>> or
>>>>>>>>
>>>>>>>> read:0xffffffff00000000 ... expected:0xffffffffffffffff
>>>>>>>>
>>>>>>>> or
>>>>>>>>
>>>>>>>> read:0x0000bf000000bf00 ... expected:0xffffbfffffffbfff
>>>>>>>>
>>>>>>>> or
>>>>>>>>
>>>>>>>> read:0x0000000000000000 ... expected:0xb5b5b5b5b5b5b5b5
>>>>>>>>
>>>>>>>> no idea if this helps.
>>>>>>>>
>>>>>>>> Stefan
>>>>>>>>
>>>>>>>> On 07.02.2014 14:39, Stefan Priebe - Profihost AG wrote:
>>>>>>>>> Hi,
>>>>>>>>> On 07.02.2014 14:19, Paolo Bonzini wrote:
>>>>>>>>>> On 07/02/2014 14:04, Stefan Priebe - Profihost AG wrote:
>>>>>>>>>>> first of all i've now a memory image of a VM where i can
>>>>>>>>>>> reproduce it.
>>>>>>>>>>
>>>>>>>>>> You mean you start that VM with -incoming 'exec:cat
>>>>>>>>>> /path/to/vm.img'?
>>>>>>>>>> But the Google stress test doesn't report any error until you
>>>>>>>>>> start migration _and_ it finishes?
>>>>>>>>>
>>>>>>>>> Sorry, no - I meant I have a VM where I saved the memory to
>>>>>>>>> disk, so I don't need to wait hours until I can reproduce it,
>>>>>>>>> as it does not happen with a freshly started VM. So it's a
>>>>>>>>> state file, I think.
>>>>>>>>>
>>>>>>>>>> Another test:
>>>>>>>>>>
>>>>>>>>>> - start the VM with -S, migrate, do errors appear on the
>>>>>>>>>> destination?
>>>>>>>>>
>>>>>>>>> I started with -S and the errors appear AFTER
>>>>>>>>> resuming/unpausing the VM.
>>>>>>>>> So it is fine until I resume it on the "new" host.
>>>>>>>>>
>>>>>>>>> Stefan
>>>>>>>>>
>>>>>>>
>>>>>> -- 
>>>>>> Dr. David Alan Gilbert / address@hidden / Manchester, UK
>>>>>>
>>>>
>>>
> 
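For readers following the thread: XBZRLE transfers each dirty page as an XOR delta against a cache of the previously sent copy, and the destination applies that delta to its own cached copy. The toy model below (a simplified sketch with hypothetical helper names, not QEMU's actual encoder) illustrates why a cache that is out of sync on one side decodes into exactly the kind of zero-filled values quoted above:

```python
def xbzrle_encode(old: bytes, new: bytes):
    """Encode `new` as a sparse XOR delta against `old` (same length).

    Returns a list of (offset, patch_bytes) runs covering only the
    bytes that changed -- a simplified model of the XBZRLE idea.
    """
    assert len(old) == len(new)
    runs, i = [], 0
    while i < len(new):
        if old[i] == new[i]:
            i += 1
            continue
        start = i
        while i < len(new) and old[i] != new[i]:
            i += 1
        # Store the XOR of the changed run; XOR-ing it back restores `new`.
        runs.append((start, bytes(o ^ n for o, n in
                                  zip(old[start:i], new[start:i]))))
    return runs

def xbzrle_decode(cache: bytes, runs) -> bytes:
    """Apply an XOR delta onto the receiver's cached copy of the page."""
    page = bytearray(cache)
    for off, patch in runs:
        for j, b in enumerate(patch):
            page[off + j] ^= b
    return bytes(page)

PAGE = 16
old = bytes([0xFF] * PAGE)             # page as of the previous transfer
new = bytes([0xFF] * 8 + [0xB5] * 8)   # guest modified the second half

delta = xbzrle_encode(old, new)

# Destination cache in sync: decoding reproduces the page exactly.
assert xbzrle_decode(old, delta) == new

# Destination cache stale (e.g. an all-zero page): the XOR patch lands
# on the wrong base and the page is silently corrupted -- untouched
# regions read back as zeros instead of the expected 0xFF bytes.
stale = bytes(PAGE)
assert xbzrle_decode(stale, delta) != new
```

The corruption is silent because XOR decoding always "succeeds"; only a memory checker like the stress test in this thread notices the wrong bytes.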


