From: Daniel P. Berrange
Subject: Re: [Qemu-devel] [PATCH 0/2] target-i386: "custom" CPU model + script to dump existing CPU models
Date: Wed, 24 Jun 2015 11:31:37 +0100
User-agent: Mutt/1.5.23 (2014-03-12)

On Wed, Jun 24, 2015 at 12:21:57PM +0200, Michael S. Tsirkin wrote:
> On Wed, Jun 24, 2015 at 11:20:50AM +0200, Jiri Denemark wrote:
> > On Tue, Jun 23, 2015 at 14:32:00 +0200, Andreas Färber wrote:
> > > On 08.06.2015 at 22:18, Jiri Denemark wrote:
> > > >> To help libvirt in the transition, an x86-cpu-model-dump script is
> > > >> provided that will generate a config file that can be loaded using
> > > >> -readconfig, based on the -cpu and -machine options provided on the
> > > >> command line.
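[ For illustration only: the exact arguments x86-cpu-model-dump takes and
the layout of the file it writes are assumptions here, not taken from the
patch; the intended flow would presumably look roughly like this, reusing
the [global] sections that -readconfig understands for setting device
properties:

    # hypothetical invocation: pass it the -machine/-cpu pair of interest
    ./x86-cpu-model-dump ... -machine pc-i440fx-2.3 -cpu Haswell > cpu-haswell.conf

    # the generated file is expected to contain property settings along
    # the lines of:
    #   [global]
    #     driver   = "Haswell-x86_64-cpu"
    #     property = "vme"
    #     value    = "on"

    # which later runs can load back instead of relying on the built-in model:
    qemu-system-x86_64 -machine pc-i440fx-2.3 -readconfig cpu-haswell.conf ...
]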
> > > > 
> > > > Thanks Eduardo, I was never a big fan of moving (or copying) all the CPU
> > > > configuration data to libvirt, but now I think it actually makes sense.
> > > > We already have a partial copy of CPU model definitions in libvirt
> > > > anyway, but as QEMU changes some CPU models in some machine types (and
> > > > libvirt does not do that), we have no real control over the guest CPU
> > > > configuration, while what we really want is full control so we can
> > > > enforce a stable guest ABI.
> > > 
> > > That sounds like FUD to me. Any concrete data points where QEMU does not
> > > have a stable ABI for x86 CPUs? That's what we have the pc*-x.y machines
> > > for.
> > 
> > QEMU provides a stable ABI for x86 CPUs only if you use -cpu ...,enforce.
> > Without enforce, the CPU may change every time a domain is started or
> > migrated. A small example: let's say a CPU model called "Model" includes
> > feature "xyz"; when QEMU is started with -cpu Model (no enforce) on a
> > host which supports xyz, the guest OS will see a CPU with xyz, but when
> > you migrate it to a host which does not support xyz, QEMU will just
> > silently drop xyz. In other words, we need to use enforce to make sure
> > the CPU ABI does not change.
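[ To make the difference concrete (the model and feature names are just the
placeholders from the example above; the enforce flag itself is real -cpu
syntax):

    # no enforce: if the host lacks "xyz", QEMU filters it out and the
    # guest silently ends up with a different CPU than the model promises
    qemu-system-x86_64 -cpu Model ...

    # with enforce: QEMU refuses to start unless it can provide every
    # feature in the model definition, and the same check applies to the
    # QEMU started on the migration destination
    qemu-system-x86_64 -cpu Model,enforce ...
]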
> 
> Are there really many examples like this?  Could someone supply some
> examples? Eduardo gave examples of CPU changes across machine types
> but I haven't seen examples where we would break runnability.
> 
> > But the problem is we can't use enforce because we don't know what a
> > specific CPU model looks like for a given machine type. Remember, while
> > libvirt allows users to explicitly ask for a specific CPU model and
> > features, it also has a mode in which libvirt itself computes the right
> > CPU model and features. And this is impossible with enforce without us
> > knowing all the details about CPU models.
> > 
> > So there are two possible ways to address this:
> > 1. enhance QEMU to give us all we need
> >     - either by providing commands that would do all the computations
> >       (CPU model comparison, intersection, or a common denominator,
> >       something like -cpu best)
> >     - or by providing a way to probe all (currently 700+) combinations
> >       of CPU model and machine type without actually having to start
> >       QEMU with each combination separately (a rough sketch of such a
> >       probe follows after this list)
> > 
> > 2. manage CPU models in libvirt (aka -cpu custom)
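[ To give an idea of what the brute-force probing mentioned in option 1
amounts to: essentially one QEMU invocation per (machine type, CPU model)
pair. The QOM path and the "feature-words" property read back below are my
guess at what such a probe would inspect, not something defined by this
patch series:

    for mt in pc-i440fx-2.3 pc-i440fx-2.4; do   # ...plus every other machine type
      for model in SandyBridge Haswell; do      # ...plus every other CPU model
        printf '%s\n' \
          '{"execute":"qmp_capabilities"}' \
          '{"execute":"qom-get","arguments":{"path":"/machine/unattached/device[0]","property":"feature-words"}}' \
          '{"execute":"quit"}' |
        qemu-system-x86_64 -machine "$mt" -cpu "$model" \
            -nodefaults -display none -S -qmp stdio
      done
    done

With 700+ combinations, this is exactly the kind of probing nobody wants to
run every time the host's capabilities need to be refreshed. ]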
> > 
> > During the past several years Eduardo tried to do (1) without getting
> > anywhere close to something that QEMU would be willing to accept.
> 
> And the reason, presumably, is because it's a hard problem to solve.
> Why is it easier to solve at the libvirt level?

One of the main reasons it is hard is that QEMU machine types are
not statically introspectable - you have to actually instantiate the
machine type to determine what config it produces. This is ultimately
a limitation of QOM, and while it could be fixed, it would be a pretty
significant design change for QEMU at this point. So the reason it
would be simpler in libvirt is that we would not have any need to
attempt such introspection - the data we need would be immediately
available to libvirt in the format in which it needs to use it.
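As a small illustration of the gap (the output fields shown are from memory
and abbreviated): without building a machine, about the only thing QEMU will
report is the list of machine type names, e.g.

    printf '%s\n' \
      '{"execute":"qmp_capabilities"}' \
      '{"execute":"query-machines"}' \
      '{"execute":"quit"}' |
    qemu-system-x86_64 -machine none -nodefaults -display none -qmp stdio

The reply lists entries along the lines of {"name": "pc-i440fx-2.3",
"cpu-max": 255}, but says nothing about the compat properties or CPU model
tweaks each machine type applies - for that you have to instantiate the
machine, as in the probe sketch earlier in this mail.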

The OpenStack scheduling example I mentioned elsewhere is another
case where the current scheme causes pain - at the point where
OpenStack wants to make decisions about host/guest CPU compatibility,
we don't even have a guest configuration available yet, so we don't
know what machine type we'd want to use, and QEMU isn't even installed
on the hosts doing this decision making. Currently OpenStack just has
to pretend that CPU models don't change based on machine type. Most of
the time we'll be lucky and that won't hurt us, but obviously it is
not a desirable thing to have to do.

> > On the
> > other hand (2) is a pretty minimal change to QEMU and is more flexible
> > than (1) because it allows CPU model versions to be decoupled from
> > machine types (but this was already discussed a lot in the other emails
> > in this thread).
> > 
> > Jirka
> 
> I'm fine with the change itself; it's useful, e.g., for testing.
> 
> But how is it a solution for libvirt's problems?
> What is libvirt going to do in the above cases?

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|


