Re: [Qemu-devel] [PATCH 2/2] KVM: Use -cpu best as default on x86


From: Ryan Harper
Subject: Re: [Qemu-devel] [PATCH 2/2] KVM: Use -cpu best as default on x86
Date: Mon, 16 Jan 2012 15:33:12 -0600
User-agent: Mutt/1.5.6+20040907i

* Alexander Graf <address@hidden> [2012-01-16 14:52]:
> 
> On 16.01.2012, at 21:13, Ryan Harper wrote:
> 
> > * Alexander Graf <address@hidden> [2012-01-16 13:52]:
> >> 
> >> On 16.01.2012, at 20:46, Ryan Harper wrote:
> >> 
> >>> * Alexander Graf <address@hidden> [2012-01-16 13:37]:
> >>>> 
> >>>> On 16.01.2012, at 20:30, Ryan Harper wrote:
> >>>> 
> >>>>> * Alexander Graf <address@hidden> [2012-01-08 17:53]:
> >>>>>> When running QEMU without -cpu parameter, the user usually wants a sane
> >>>>>> default. So far, we're using the qemu64/qemu32 CPU type, which 
> >>>>>> basically
> >>>>>> means "the maximum TCG can emulate".
> >>>>> 
> >>>>> it also means we allow the maximum possible set of migration
> >>>>> targets.  Have you given any thought to migration with -cpu best?
> >>>> 
> >>>> If you have the same boxes in your cluster, migration just works. If
> >>>> you don't, you usually use a specific CPU model that is the least
> >>>> common denominator between your boxes either way.
> >>> 
> >>> Sure, but the idea behind -cpu best is to not have to figure that out;
> >>> you had suggested that the qemu64/qemu32 were just related to TCG, and
> >>> what I'm suggesting is that it's also the most compatible w.r.t
> >>> migration.  
> >> 
> >> Then the most compatible wrt migration is -cpu kvm64 / kvm32.
> >> 
> >>> it sounds like if migration is a requirement, then -cpu best probably
> >>> isn't something that would be used.  I suppose I'm OK with that, or at
> >>> least I don't have a better suggestion on how to carefully push up the
> >>> capabilities without at some point breaking migration.
> >> 
> >> Yes, if you're interested in migration, then you're almost guaranteed to 
> >> have a toolstack on top that has knowledge of your whole cluster and can 
> >> do the least-common-denominator detection over all of your nodes. On
> >> the QEMU level we don't know anything about other machines.
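
A toolstack doing that least-common-denominator detection essentially
intersects the CPUID feature words its nodes report. A minimal sketch
in C; the struct, the helper name and the sample feature words are
illustrative, not a QEMU or libvirt interface:

#include <stdio.h>

/* One entry per host: the CPUID.1 ECX/EDX feature words it reports. */
struct host_features {
    unsigned int ecx;
    unsigned int edx;
};

/* AND the words together so only bits every host has survive. */
static struct host_features
common_baseline(const struct host_features *hosts, int n)
{
    struct host_features common = { ~0u, ~0u };

    for (int i = 0; i < n; i++) {
        common.ecx &= hosts[i].ecx;
        common.edx &= hosts[i].edx;
    }
    return common;
}

int main(void)
{
    struct host_features cluster[] = {
        { 0x80982201u, 0xbfebfbffu },   /* made-up newer node */
        { 0x00802001u, 0x178bfbffu },   /* made-up older node */
    };
    struct host_features base = common_baseline(cluster, 2);

    printf("baseline ecx=%#x edx=%#x\n", base.ecx, base.edx);
    return 0;
}

Whatever bits survive the intersection then get mapped back onto the
closest predefined CPU model.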
> >> 
> >>> 
> >>>> 
> >>>> The current kvm64 type is broken. Libgmp just abort()s when we pass it
> >>>> in. So anything is better than what we do today on AMD hosts :).
> >>> 
> >>> I wonder if it breaks with Cyrix CPUs... other tools tend to do runtime
> >>> detection (mplayer).
> >> 
> >> It probably does :). But then again those don't do KVM, do they?
> > 
> > not following; mplayer issues SSE2, 3 and 4 instructions to see what
> > works to figure out how to optimize; it doesn't care if the cpu is
> > called QEMU64 or Cyrix or AMD.  I'm not saying that we can't do better
> > than qemu64 w.r.t best cpu to select by default, but there are plenty of
> > applications that want to optimize their code based on what's available,
> > but this is done via code execution instead of string comparison.
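
The probing Ryan describes looks roughly like the following; mplayer
historically does it by trial-executing instructions, but reading the
CPUID feature bits gives the same answer and is easier to show
(GCC/Clang on x86, bit positions per the architectural CPUID leaf 1
layout; the program itself is just a sketch):

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Leaf 1 carries the basic feature flags in ECX/EDX. */
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        fprintf(stderr, "CPUID leaf 1 not supported\n");
        return 1;
    }
    printf("SSE2:   %s\n", (edx & (1u << 26)) ? "yes" : "no");
    printf("SSE3:   %s\n", (ecx & (1u << 0))  ? "yes" : "no");
    printf("SSE4.1: %s\n", (ecx & (1u << 19)) ? "yes" : "no");
    printf("SSE4.2: %s\n", (ecx & (1u << 20)) ? "yes" : "no");
    return 0;
}

Nothing in there compares vendor strings, which is why this style
keeps working on a CPU model it has never heard of.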
> 
> The problem with -cpu kvm64 is that we choose a family/model that
> doesn't exist in the real world, and then glue AuthenticAMD or
> GenuineIntel in the vendor string. Libgmp checks for existing CPUs,
> finds that this CPU doesn't match any real world IDs and abort()s.
> 
> The problem is that there is not a single CPU on this planet in
> silicon that has the same model+family numbers, but exists in
> AuthenticAMD _and_ GenuineIntel flavors. We need to pass the host
> vendor in though, because the guest uses it to detect if it should
> execute SYSCALL or SYSENTER, because Intel and AMD screwed up heavily
> on that one.

I forgot about this one.  =(
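
For anyone reconstructing the failure mode Alexander describes: the
guest-visible bits in question come from CPUID leaves 0 and 1, and
reading them looks like this (standard family/model decoding per the
Intel and AMD manuals; only the program around it is a sketch):

#include <cpuid.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    unsigned int family, model;
    char vendor[13];

    /* Leaf 0: the 12-byte vendor string, laid out EBX:EDX:ECX. */
    __get_cpuid(0, &eax, &ebx, &ecx, &edx);
    memcpy(vendor + 0, &ebx, 4);
    memcpy(vendor + 4, &edx, 4);
    memcpy(vendor + 8, &ecx, 4);
    vendor[12] = '\0';

    /* Leaf 1: family/model in EAX, with the extended-field rules. */
    __get_cpuid(1, &eax, &ebx, &ecx, &edx);
    family = (eax >> 8) & 0xf;
    model  = (eax >> 4) & 0xf;
    if (family == 0xf) {
        family += (eax >> 20) & 0xff;
    }
    if (family == 0x6 || family == 0xf) {
        model |= ((eax >> 16) & 0xf) << 4;
    }
    printf("%s, family %u, model %u\n", vendor, family, model);
    return 0;
}

A table-driven consumer such as libgmp keys off that (vendor, family,
model) triple, so kvm64's synthetic family/model paired with either
real vendor string falls through every entry.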


-- 
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
address@hidden



