Re: [Qemu-devel] [PATCH] powerpc iommu: enable multiple TCE requests


From: Alexey Kardashevskiy
Subject: Re: [Qemu-devel] [PATCH] powerpc iommu: enable multiple TCE requests
Date: Mon, 19 Aug 2013 18:44:03 +1000
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130625 Thunderbird/17.0.7

On 08/19/2013 06:01 PM, Alexander Graf wrote:
> 
> 
> Am 19.08.2013 um 09:30 schrieb Alexey Kardashevskiy <address@hidden>:
> 
>> On 08/19/2013 01:22 AM, Paolo Bonzini wrote:
>>> On 16/08/2013 11:49, Alexey Kardashevskiy wrote:
>>>>> +     * With KVM, we could fall back to the qemu implementation
>>>>> +     * when KVM doesn't support them, but that would be much slower
>>>>> +     * than just using the KVM implementations of the single TCE
>>>>> +     * hypercalls. */
>>>>> +    if (kvmppc_spapr_use_multitce()) {
>>>>> +        _FDT((fdt_property(fdt, "ibm,hypertas-functions", hypertas_propm,
>>>>> +                           sizeof(hypertas_propm))));
>>>>> +    } else {
>>>>> +        _FDT((fdt_property(fdt, "ibm,hypertas-functions", hypertas_prop,
>>>>> +                           sizeof(hypertas_prop))));
>>>>> +    }
>>>
>>> This prevents migration from newer kernel to older kernel.  Can you
>>> ensure that the fallback to the QEMU implementation works, even though
>>> it is not used in practice?
>>
>> How would it break? By having a device tree with "multi-tce" in it and not
>> having KVM PPC capability for that?
>>
>> If this is the case, it will not prevent migration, as the "multi-tce"
>> feature is supported anyway by this patch. The only reason for not
>> advertising it to the guest is that the host kernel already has
>> acceleration for H_PUT_TCE (single page map/unmap), and advertising
>> "multi-tce" without having it in the host kernel (but only in QEMU) would
>> slow things down (though it would still work).
> 

> It means that if you use the same QEMU version with the same command
> line on a different kernel version, your guest looks different because
> we generate the dtb differently.

Oh. Sorry for my ignorance again, I am not playing dumb or anything like
that - I do not understand how the device tree (which we cook in QEMU) on
the destination can possibly survive migration and not be overwritten by
the one from the source. Whatever was in the destination RAM before
migration does not matter at all (including the device tree); the device
tree generated by QEMU is what matters, and it does not change. As it is
"the same QEMU version", the hypercalls are supported either way; the only
difference is where they are handled - in the host kernel or in QEMU. What
am I missing?
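
For reference, the guest-visible bit under discussion boils down to roughly
the following. This is only a minimal sketch, not the actual spapr.c code:
the helper host_kernel_accelerates_multitce() and the exact string list are
illustrative stand-ins for kvmppc_spapr_use_multitce() and the real
hypertas list from the patch. The point is that the property value is
produced by QEMU from its own view of the host kernel, not taken from
anything left in guest RAM:

    #include <string.h>
    #include <libfdt.h>

    /* hypothetical stand-in for kvmppc_spapr_use_multitce() */
    extern int host_kernel_accelerates_multitce(void);

    static int add_hypertas(void *fdt)
    {
        /* "ibm,hypertas-functions" is a list of NUL-terminated strings */
        char buf[256];
        static const char base[] =
            "hcall-pft\0hcall-term\0hcall-dabr\0hcall-interrupt\0"
            "hcall-tce\0hcall-vio\0hcall-splpar\0hcall-bulk";
        size_t len = sizeof(base);

        memcpy(buf, base, sizeof(base));

        if (host_kernel_accelerates_multitce()) {
            static const char multi[] = "hcall-multi-tce";
            memcpy(buf + len, multi, sizeof(multi));
            len += sizeof(multi);
        }

        return fdt_property(fdt, "ibm,hypertas-functions", buf, len);
    }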


> The usual way to avoid this is to have a command line option to at least
> make it possible for a management tool to nail down feature flags
> regardless of the host configuration.


> Considering that IIRC we haven't actually flagged -M pseries as
> backwards compatible (avoid breaking migration, etc) we can probably get
> away with enabling multi-tce always and live with the performance
> penalty on older host kernels.

We have had H_PUT_TCE accelerated in older kernels for quite a while, and
we do not want guests running on older hosts to become slower for no good
reason; this is why we added this capability in the first place.
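
The host-side check itself is tiny; here is a rough sketch of the idea,
assuming the kernel advertises the feature as the KVM_CAP_SPAPR_MULTITCE
capability (the real helper in QEMU is kvmppc_spapr_use_multitce(); this
is only an illustration):

    #include <stdbool.h>
    #include "sysemu/kvm.h"   /* kvm_enabled(), kvm_check_extension(), kvm_state */

    static bool spapr_use_multitce(void)
    {
        /* Only advertise "hcall-multi-tce" when the host kernel can
         * handle H_PUT_TCE_INDIRECT/H_STUFF_TCE itself; otherwise QEMU
         * still implements them, just without the in-kernel fast path. */
        return kvm_enabled() &&
               kvm_check_extension(kvm_state, KVM_CAP_SPAPR_MULTITCE);
    }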



-- 
Alexey


