qemu-devel

Re: [Qemu-devel] [libvirt] Modern CPU models cannot be used with libvirt


From: Andrew Cathrow
Subject: Re: [Qemu-devel] [libvirt] Modern CPU models cannot be used with libvirt
Date: Sat, 10 Mar 2012 19:55:25 -0500 (EST)


----- Original Message -----
> From: "Anthony Liguori" <address@hidden>
> To: "Daniel P. Berrange" <address@hidden>, address@hidden, address@hidden, 
> "Gleb Natapov"
> <address@hidden>, "Jiri Denemark" <address@hidden>, "Avi Kivity" 
> <address@hidden>, address@hidden
> Sent: Saturday, March 10, 2012 1:24:47 PM
> Subject: Re: [libvirt] [Qemu-devel] Modern CPU models cannot be used with 
> libvirt
> 
> On 03/10/2012 09:58 AM, Eduardo Habkost wrote:
> > On Sat, Mar 10, 2012 at 12:42:46PM +0000, Daniel P. Berrange wrote:
> >>>
> >>> I could have sworn we had this discussion a year ago or so, and had
> >>> decided that the default CPU models would be in something like
> >>> /usr/share/qemu/cpu-x86_64.conf and loaded regardless of the
> >>> -nodefconfig setting. /etc/qemu/target-x86_64.conf would be solely for
> >>> end user configuration changes, not for QEMU builtin defaults.
> >>>
> >>> But looking at the code in QEMU, it doesn't seem we ever implemented
> >>> this?
> >>
> >> Arrrgggh. It seems this was implemented as a patch in RHEL-6 qemu RPMs
> >> but, contrary to our normal RHEL development practice, it was not based
> >> on a cherry-pick of an upstream patch :-(
> >>
> >> For the sake of reference, I'm attaching the two patches from the RHEL6
> >> source RPM that do what I'm describing.
> >>
> >> NB, I'm not necessarily advocating these patches for upstream. I still
> >> maintain that libvirt should write out a config file containing the
> >> exact CPU model description it desires and specify that with -readconfig.
> >> The end result would be identical from QEMU's POV and it would avoid
> >> playing games with QEMU's config loading code.
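
For illustration, such a -readconfig file would presumably reuse the same
[cpudef] syntax that QEMU already ships in target-x86_64.conf. A rough sketch
(the field names follow that format, but the values and feature lists below are
illustrative, not a tested model definition):

    [cpudef]
       name = "Westmere"
       level = "11"
       vendor = "GenuineIntel"
       family = "6"
       model = "44"
       stepping = "1"
       feature_edx = "sse2 sse fxsr mmx clflush pse36 pat cmov mca pge mtrr sep apic cx8 mce pae msr tsc pse de fpu"
       feature_ecx = "aes popcnt sse4.2 sse4.1 cx16 ssse3 sse3"
       extfeature_edx = "i64 syscall xd"
       extfeature_ecx = "lahf_lm"
       xlevel = "0x8000000A"
       model_id = "Westmere E56xx/L56xx/X56xx (Nehalem-C)"

libvirt would then invoke QEMU with something along the lines of
"-readconfig /path/to/that/file -cpu Westmere", so the model name only needs to
exist in the file libvirt itself wrote.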
> >
> > I agree that libvirt should just write the config somewhere. The problem
> > here is to define: 1) what information should be mandatory in that
> > config data; 2) who should be responsible for testing and maintaining
> > sane defaults (and where they should be maintained).
> >
> > The current cpudef definitions are simply too low-level to require them
> > to be written from scratch. Lots of testing has to be done to make sure
> > we have working combinations of CPUID bits defined, so they can be used
> > as defaults or templates. Not facilitating reuse of those tested
> > defaults/templates by libvirt is a duplication of effort.
> >
> > Really, if we expect libvirt to define all the CPU bits from scratch in
> > a config file, we might as well expect libvirt to open /dev/kvm itself
> > and call all the CPUID setup ioctl()s itself. That's how low-level some
> > of the cpudef bits are.
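
To make "low-level" concrete: programming the CPUID bits directly through
/dev/kvm means building every leaf by hand and handing the table to the
KVM_SET_CPUID2 ioctl. A deliberately abbreviated, untested sketch (a real
caller creates the VM and VCPU first and fills in dozens of leaves; the two
entries below only carry a Westmere-style vendor/family/model signature):

    /* Sketch only: assumes vcpu_fd came from KVM_CREATE_VCPU on a VM fd. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    static void set_vcpu_cpuid(int vcpu_fd)
    {
        int nent = 2;    /* a real model definition has dozens of entries */
        struct kvm_cpuid2 *cpuid =
            calloc(1, sizeof(*cpuid) + nent * sizeof(struct kvm_cpuid_entry2));

        cpuid->nent = nent;

        /* Leaf 0: maximum basic leaf ("level") and the vendor string. */
        cpuid->entries[0].function = 0;
        cpuid->entries[0].eax = 0xb;           /* level = 11 */
        cpuid->entries[0].ebx = 0x756e6547;    /* "Genu" */
        cpuid->entries[0].edx = 0x49656e69;    /* "ineI" */
        cpuid->entries[0].ecx = 0x6c65746e;    /* "ntel" */

        /* Leaf 1: family/model/stepping packed by hand; feature bits elided. */
        cpuid->entries[1].function = 1;
        cpuid->entries[1].eax = 0x000206c1;    /* family 6, model 44, stepping 1 */

        if (ioctl(vcpu_fd, KVM_SET_CPUID2, cpuid) < 0)
            perror("KVM_SET_CPUID2");
        free(cpuid);
    }

Every field in that table is guest-visible, which is exactly the kind of detail
the tested cpudef defaults are meant to encapsulate.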
> 
> Let's step back here.
> 
> Why are you writing these patches?  It's probably not because you have a
> desire to say -cpu Westmere when you run QEMU on your laptop.  I'd wager
> that no human has ever done that, or that if they had, they did so by
> accident because they read documentation and thought they had to.
> 
> Humans probably do one of two things: 1) no cpu option or 2) -cpu
> host.
> 
> So then why are you introducing -cpu Westmere?  Because ovirt-engine has
> a concept of datacenters and the entire datacenter has to use a compatible
> CPU model to allow migration compatibility.  Today, the interface that
> ovirt-engine exposes is based on CPU codenames.  Presumably ovirt-engine
> wants to add a Westmere CPU group and as such has levied a requirement
> down the stack to QEMU.
> 
> But there's no intrinsic reason why it uses CPU model names.  VMware
> doesn't do
> this.  It has a concept of compatibility groups[1].

s/has/had

That was back in the 3.5 days and it was hit and miss: it relied on users 
putting the same kind of machines in the resource groups, and it often caused 
issues.
Now they've moved to a model very similar to what we're using:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1003212


> 
> oVirt could just as well define compatibility groups like GroupA, GroupB,
> GroupC, etc., and then the -cpu option we would be discussing would be
> -cpu GroupA.
> 
> This is why it's a configuration option and not built into QEMU.  It's a
> user interface and, as such, should be defined at a higher level.
> 
> Perhaps it really should be VDSM that is providing the model info to
> libvirt?  Then they can add whatever groups they want, whenever they want,
> as long as we have the appropriate feature bits.

I think the "real" (model-specific) names are the best place to start.
But if a user wants to override those with their own specific types, then that 
should be allowed.


> 
> P.S. I spent 30 minutes the other day helping a user who was attempting
> to figure out whether his processor was a Conroe, Penryn, etc.  Making
> this determination is fairly difficult, and it makes me wonder whether
> having CPU code names is even the best interface for oVirt.

I think that was more about a bad choice in UI than a bad choice in the 
architecture.
It should be made clear to a user what kind of machine they have and what its 
capabilities are.
This bug arose from that issue:
https://bugzilla.redhat.com/show_bug.cgi?id=799708


> 
> [1]
> http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1991
> 
> Regards,
> 
> Anthony Liguori
> 
> >
> > (Also, there are additional low-level bits that really have to be
> > maintained somewhere, just to have sane defaults. Currently many CPUID
> > leaves are exposed to the guest without letting the user control them,
> > and worse: without keeping guest-visible bits stable when upgrading
> > QEMU or the host kernel. And that's what machine-types are for: to have
> > sane defaults to be used as a base.)
> >
> > Let me give you a practical example: I had a bug report about improper
> > CPU topology information[1]. After investigating it, I found out that
> > the "level" cpudef field is too low; CPU core topology information is
> > provided in CPUID leaf 4, and most of the Intel CPU models in QEMU have
> > level=2 today (I don't know why). So, QEMU is responsible for exposing
> > the CPU topology set using '-smp' to the guest OS, but libvirt would
> > have to be responsible for choosing a proper "level" value that makes
> > that information visible to the guest. We can _allow_ libvirt to fiddle
> > with these low-level bits, of course, but requiring every management
> > layer to build this low-level information from scratch is just a recipe
> > for wasting developer time.
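
The mechanics behind that example: CPUID leaf 0 reports the maximum basic leaf
the VCPU supports, which is what the cpudef "level" field sets, and a guest
will generally only look at the core-topology data in leaf 4 when that maximum
is at least 4, so level=2 effectively hides the -smp topology. A minimal
guest-side sketch, using GCC's <cpuid.h> (illustrative; the bit layout is from
the Intel SDM):

    /* Guest-side check: is CPUID leaf 4 (core topology) visible at all? */
    #include <stdio.h>
    #include <cpuid.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        __cpuid(0, eax, ebx, ecx, edx);
        unsigned int max_leaf = eax;    /* the "level" the VCPU advertises */

        if (max_leaf < 4) {
            printf("max basic leaf is %u: leaf 4 topology is hidden\n", max_leaf);
            return 0;
        }

        /* Leaf 4, subleaf 0: EAX[31:26] = max addressable core IDs per package - 1. */
        __cpuid_count(4, 0, eax, ebx, ecx, edx);
        printf("addressable core IDs per package: %u\n", ((eax >> 26) & 0x3f) + 1);
        return 0;
    }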
> >
> > (And I really hope that there's no plan to require all those low-level
> > bits to appear as-is in the libvirt XML definitions, because that would
> > require users to read the Intel 64 and IA-32 Architectures Software
> > Developer's Manual, or the AMD64 Architecture Programmer's Manual and
> > the BIOS and Kernel Developer's Guides, just to understand why something
> > is not working on their virtual machine.)
> >
> > [1] https://bugzilla.redhat.com/show_bug.cgi?id=689665
> >
> 
> --
> libvir-list mailing list
> address@hidden
> https://www.redhat.com/mailman/listinfo/libvir-list
> 


