
Re: [Qemu-devel] [PATCH V1 1/1] tests: Add migration test for aarch64


From: Marc Zyngier
Subject: Re: [Qemu-devel] [PATCH V1 1/1] tests: Add migration test for aarch64
Date: Mon, 29 Jan 2018 10:32:12 +0000
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.5.2

On 29/01/18 10:04, Peter Maydell wrote:
> On 29 January 2018 at 09:53, Dr. David Alan Gilbert <address@hidden> wrote:
>> * Peter Maydell (address@hidden) wrote:
>>> On 26 January 2018 at 19:46, Dr. David Alan Gilbert <address@hidden> wrote:
>>>> * Peter Maydell (address@hidden) wrote:
>>>>> I think the correct fix here is that your test code should turn
>>>>> its MMU on. Trying to treat guest RAM as uncacheable doesn't work
>>>>> for Arm KVM guests (for the same reason that VGA device video memory
>>>>> doesn't work). If it's RAM your guest has to arrange to map it as
>>>>> Normal Cacheable, and then everything should work fine.
>>>>
>>>> Does this cause problems with migrating at just the wrong point during
>>>> a VM boot?
>>>
>>> It wouldn't surprise me if it did, but I don't think I've ever
>>> tried to provoke that problem...
>>
>> If you think it'll get the RAM contents wrong, it might be best to fail
>> the migration if you can detect the cache is disabled in the guest.
> 
> I guess QEMU could look at the value of the "MMU disabled/enabled" bit
> in the guest's system registers, and refuse migration if it's off...
> 
> (cc'd Marc, Christoffer to check that I don't have the wrong end
> of the stick about how thin the ice is in the period before the
> guest turns on its MMU...)

Once the MMU and caches are on, we should be in a reasonable place for QEMU
to have a consistent view of the memory. The trick is to prevent the
vcpus from changing that. A guest is perfectly entitled to turn its MMU
off at any time if it needs to (and doing so is actually required on some
HW if you want to mitigate headlining CVEs), and KVM won't know about it.

You may have to pause the vcpus before starting the migration, or
introduce a new KVM feature that would automatically pause a vcpu that
is trying to disable its MMU while the migration is on. This would
involve trapping all the virtual memory related system registers, with
an obvious cost. But that cost would be limited to the time it takes to
migrate the memory, so maybe that's acceptable.

Thoughts?

        M.
-- 
Jazz is not dead. It just smells funny...


