Re: [Qemu-devel] [libvirt] inconsistent handling of "qemu64" CPU model
From: Kashyap Chamarthy
Subject: Re: [Qemu-devel] [libvirt] inconsistent handling of "qemu64" CPU model
Date: Thu, 26 May 2016 11:45:46 +0200
User-agent: Mutt/1.6.0.1 (2016-04-01)
On Wed, May 25, 2016 at 11:13:24PM -0600, Chris Friesen wrote:
[...]
> However, if I explicitly specify a custom CPU model of "qemu64" the
> instance refuses to boot and I get a log saying:
[Not a direct answer to the exact issue you're facing, but a related
issue that is currently being investigated...]
Currently there's a related regression in upstream libvirt 1.3.4: the
crux of the issue is that libvirt's custom 'gate64' model is not being
translated into a CPU definition that QEMU can recognize (you can list
the recognized models with `qemu-system-x86_64 -cpu help`).
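For reference, that list can be pulled directly from the binary (a
quick sanity check; I'm assuming the x86-64 system emulator binary
name here):

```shell
# List every CPU model this QEMU binary accepts with -cpu; a custom
# libvirt model must resolve to one of these plus feature flags.
qemu-system-x86_64 -cpu help
```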
See this bug (it has a reproducer and discussion):
https://bugzilla.redhat.com/show_bug.cgi?id=1339680 -- "libvirt CPU
driver fails to translate a custom CPU model into something that
QEMU recognizes"
Jiri Denemark bisected the regression to this commit:
v1.2.9-31-g445a09b ("qemu: Don't compare CPU against host for TCG").
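One quick way to see the mismatch is to compare the model names each
side knows about (a rough cross-check; this assumes `virsh` can reach
libvirtd and that the x86-64 QEMU binary is installed):

```shell
# Models libvirt's CPU driver knows for x86_64 (from its cpu_map.xml)
virsh cpu-models x86_64

# Models the QEMU binary itself advertises for -cpu
qemu-system-x86_64 -cpu help
```

A custom model that appears in the first list but fails to translate
into something in the second is what the bug above is about.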
> libvirtError: unsupported configuration: guest and host CPU are not
> compatible: Host CPU does not provide required features: svm
>
> When this happens, some of the XML for the domain looks like this:
> <os>
> <type arch='x86_64' machine='pc-i440fx-utopic'>hvm</type>
> ....
>
> <cpu mode='custom' match='exact'>
> <model fallback='allow'>qemu64</model>
> <topology sockets='1' cores='1' threads='1'/>
> </cpu>
>
> Of course "svm" is an AMD flag and I'm running an Intel CPU. But why does
> it work when I just rely on the default virtual CPU? Is
> kvm_default_unset_features handled differently when it's implicit vs
> explicit?
>
> If I explicitly specify a custom CPU model of "kvm64" then it boots, but of
> course I get a different virtual CPU from what I get if I don't specify
> anything.
>
> Following some old suggestions I tried turning off nested kvm, deleting
> /var/cache/libvirt/qemu/capabilities/*, and restarting libvirtd. Didn't
> help.
>
> So...anyone got any ideas what's going on? Is there no way to explicitly
> specify the model that you get by default?
>
>
> Thanks,
> Chris
>
> --
> libvir-list mailing list
> address@hidden
> https://www.redhat.com/mailman/listinfo/libvir-list
--
/kashyap