Hi Igor,
On Wed, Jul 24, 2024 at 02:54:32PM +0200, Igor Mammedov wrote:
On Wed, 24 Jul 2024 12:13:28 +0100
John Levon <john.levon@nutanix.com> wrote:
On Wed, Jul 24, 2024 at 03:59:29PM +0530, Manish wrote:
Leaf 0x1f is a superset of 0xb, so it makes sense to set 0x1f equivalent
to 0xb by default and work around the Windows issue. This change adds a
new property 'cpuid-0x1f-enforce' to set leaf 0x1f equivalent to 0xb in
case extended CPU topology is not configured, and to behave as before
otherwise.
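
For context, the described behaviour boils down to something like the
toy sketch below. This is not the actual patch code; all names here are
hypothetical, with 'cpuid_0x1f_enforce' standing in for the proposed
property:

#include <stdbool.h>
#include <stdint.h>

/* Toy sketch only -- hypothetical names, not QEMU's actual code. */
struct vcpu {
    bool cpuid_0x1f_enforce;     /* the proposed property */
    bool has_extended_topology;  /* dies/modules configured? */
};

static void encode_leaf_0xb(const struct vcpu *cpu, uint32_t subleaf,
                            uint32_t regs[4])
{
    /* ...fill regs[] with the v1 (SMT/core) topology levels... */
}

static void encode_leaf_0x1f(const struct vcpu *cpu, uint32_t subleaf,
                             uint32_t regs[4])
{
    if (cpu->has_extended_topology) {
        /* ...fill regs[] with the v2 levels (SMT/core/die/...)... */
        return;
    }
    if (cpu->cpuid_0x1f_enforce) {
        /* 0x1f is a superset of 0xb: reuse the 0xb data */
        encode_leaf_0xb(cpu, subleaf, regs);
    } else {
        /* old behaviour: all zeroes, which confuses Windows */
        regs[0] = regs[1] = regs[2] = regs[3] = 0;
    }
}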
Repeating the question: why do we need an extra property instead of
just adding the 0x1f leaf for the CPU models that are supposed to have
it?
As I mentioned in an earlier response: "Windows expects it only when we
have set the max CPUID level greater than or equal to 0x1f. I mean, if
it is exposed, it should not be all zeros. The SapphireRapids CPU
definition raised the CPUID level to 0x20, so we started seeing it with
SapphireRapids."
Windows does not expect 0x1f to be present for any CPU model. But if it is
exposed to the guest, it expects non-zero values.
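
Put differently, the visibility rule is roughly this (a toy sketch of
the Intel-style clamping behaviour, not QEMU's actual code):

#include <stdbool.h>
#include <stdint.h>

/* Toy sketch, not QEMU's code: an out-of-range basic leaf is clamped
 * to the max basic leaf, so leaf 0x1f is only reachable once
 * cpuid_level >= 0x1f. */
static bool basic_leaf_visible(uint32_t cpuid_level, uint32_t leaf)
{
    return leaf <= cpuid_level;
}

With cpuid_level raised to 0x20, as SapphireRapids now does, leaf 0x1f
falls in range, so whatever the 0x1f handler returns becomes
guest-visible -- and an all-zero answer there is what trips Windows.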
I think Igor is suggesting:
- leave x86_cpu_expand_features() alone completely
yep, drop that if possible
- change the 0x1f handling to always report topology, i.e. never report
all zeroes (see the sketch after this list)
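
Reusing the toy struct and helper from the earlier sketch (still
hypothetical, not an actual patch), that suggestion would look roughly
like:

/* Drop the property and unconditionally mirror leaf 0xb when no
 * extended topology is configured, so the guest never sees an
 * all-zero 0x1f. */
static void encode_leaf_0x1f_always(const struct vcpu *cpu,
                                    uint32_t subleaf, uint32_t regs[4])
{
    if (cpu->has_extended_topology) {
        /* ...v2 levels as before... */
        return;
    }
    encode_leaf_0xb(cpu, subleaf, regs); /* never report zeroes */
}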
Do this, but only for CPU models that have this leaf per the spec.
To avoid live migration issues, create a new version of the CPU model,
so that it applies only to the new version. This way older versions
and migration won't be affected.
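
For reference, versioned CPU models in target/i386/cpu.c already have a
shape for this. A new version could flip the leaf on via a compat
property, roughly as below; X86CPUVersionDefinition and PropValue are
QEMU's real structures, but the property name here is made up:

.versions = (X86CPUVersionDefinition[]) {
    { .version = 1 },
    {
        .version = 2,
        .note = "with CPUID leaf 0x1f",
        .props = (PropValue[]) {
            { "x-cpuid-0x1f", "on" }, /* hypothetical property name */
            { /* end of list */ },
        },
    },
    { /* end of list */ },
},

Only guests started on the new version would see leaf 0x1f; existing
machine types and incoming migrations would keep the old behaviour.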
So in the future, every new Intel CPU model will need to always enable
0x1f. That sounds like an endless game. So my question is: at what
point is it OK to consider defaulting to always enabling 0x1f and just
disabling it for the old CPU models?
Thanks,
Zhao