qemu-devel
Re: [PATCH v3] i386/cpu: fixup number of addressable IDs for processor cores in the physical package


From: Zhao Liu
Subject: Re: [PATCH v3] i386/cpu: fixup number of addressable IDs for processor cores in the physical package
Date: Mon, 1 Jul 2024 15:11:11 +0800

On Mon, Jul 01, 2024 at 09:48:02AM +0300, Michael Tokarev wrote:
> Date: Mon, 1 Jul 2024 09:48:02 +0300
> From: Michael Tokarev <mjt@tls.msk.ru>
> Subject: Re: [PATCH v3] i386/cpu: fixup number of addressable IDs for
>  processor cores in the physical package
> 
> 11.06.2024 06:23, Chuang Xu wrote:
> > When QEMU is started with:
> > -cpu host,host-cache-info=on,l3-cache=off \
> > -smp 2,sockets=1,dies=1,cores=1,threads=2
> > Guest can't acquire maximum number of addressable IDs for processor cores in
> > the physical package from CPUID[04H].
> > 
> > When creating a CPU topology of 1 core per package, host-cache-info only
> > uses the Host's addressable core IDs field (CPUID.04H.EAX[bits 31-26]),
> > resulting in a conflict (on the multicore Host) between the Guest core
> > topology information in this field and the Guest's actual core count.
> > 
> > Fix it by removing the unnecessary condition so that the 1 core per package
> > case is also covered. This is safe because cores_per_pkg is always at least 1
> > and therefore never 0.
> > 
> > Fixes: d7caf13b5fcf ("x86: cpu: fixup number of addressable IDs for logical processors sharing cache")
> > Signed-off-by: Guixiong Wei <weiguixiong@bytedance.com>
> > Signed-off-by: Yipeng Yin <yinyipeng@bytedance.com>
> > Signed-off-by: Chuang Xu <xuchuangxclwt@bytedance.com>
> > ---
> >   target/i386/cpu.c | 6 ++----
> >   1 file changed, 2 insertions(+), 4 deletions(-)
> > 
> > diff --git a/target/i386/cpu.c b/target/i386/cpu.c
> > index bc2dceb647..b68f7460db 100644
> > --- a/target/i386/cpu.c
> > +++ b/target/i386/cpu.c
> > @@ -6426,10 +6426,8 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
> >               if (*eax & 31) {
> >                   int host_vcpus_per_cache = 1 + ((*eax & 0x3FFC000) >> 14);
> > -                if (cores_per_pkg > 1) {
> > -                    *eax &= ~0xFC000000;
> > -                    *eax |= max_core_ids_in_package(&topo_info) << 26;
> > -                }
> > +                *eax &= ~0xFC000000;
> > +                *eax |= max_core_ids_in_package(&topo_info) << 26;
> >                   if (host_vcpus_per_cache > threads_per_pkg) {
> >                       *eax &= ~0x3FFC000;
> 
> In qemu 9.0, the context is a bit different here:
> 
> 
>             if (*eax & 31) {
>                 int host_vcpus_per_cache = 1 + ((*eax & 0x3FFC000) >> 14);
>                 int vcpus_per_socket = cs->nr_cores * cs->nr_threads;
>                 if (cs->nr_cores > 1) {
>                     *eax &= ~0xFC000000;
>                     *eax |= (pow2ceil(cs->nr_cores) - 1) << 26;
>                 }
>                 if (host_vcpus_per_cache > vcpus_per_socket) {
> 
> I.e., there is no max_core_ids_in_package(), cores_per_pkg, etc.; those were
> introduced in v9.0.0-790-gf602eb925a ("i386/cpu: Use CPUCacheInfo.share_level
> to encode CPUID[4]") and nearby commits.
> 
> Am I right the above change becomes
> 
>              if (*eax & 31) {
>                  int host_vcpus_per_cache = 1 + ((*eax & 0x3FFC000) >> 14);
>                  int vcpus_per_socket = cs->nr_cores * cs->nr_threads;
> -                if (cs->nr_cores > 1) {
> -                    *eax &= ~0xFC000000;
> -                    *eax |= (pow2ceil(cs->nr_cores) - 1) << 26;
> -                }
> +                *eax &= ~0xFC000000;
> +                *eax |= (pow2ceil(cs->nr_cores) - 1) << 26;
>                  if (host_vcpus_per_cache > vcpus_per_socket) {
>                      *eax &= ~0x3FFC000;
>                      *eax |= (pow2ceil(vcpus_per_socket) - 1) << 14;
> 
> in 9.0 -- in other words, just remove the nr_cores condition check
> and do the *eax assignment unconditionally?
> 
> From the patch description it seems like it is, but I thought I'd
> ask anyway :)

Hi Michael,

I can confirm that your change is correct.

-Zhao
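
For readers following along, below is a minimal, self-contained sketch of the
EAX[31:26] rewrite discussed above. It is not QEMU code: pow2ceil() is
re-implemented here to mirror the semantics of QEMU's helper of the same name,
and fixup_eax_core_ids() is an invented name used purely for illustration. It
shows why applying the rewrite unconditionally is safe even for a guest with
1 core per package.

/*
 * Illustrative sketch only, not QEMU code.
 *
 * CPUID.04H:EAX[31:26] encodes "maximum number of addressable IDs for
 * processor cores in the physical package" minus 1; EAX[25:14] encodes
 * the addressable logical-CPU IDs sharing the cache, also minus 1.
 */
#include <stdint.h>
#include <stdio.h>

/* Round up to the next power of two, mirroring QEMU's pow2ceil(). */
static uint32_t pow2ceil(uint32_t v)
{
    uint32_t p = 1;
    while (p < v) {
        p <<= 1;
    }
    return p;
}

/*
 * Overwrite EAX[31:26] with the guest's core count, unconditionally,
 * as the proposed 9.0 change does.  Safe for guest_cores == 1: the
 * field becomes pow2ceil(1) - 1 == 0, i.e. "1 addressable core ID".
 */
static uint32_t fixup_eax_core_ids(uint32_t eax, uint32_t guest_cores)
{
    eax &= ~0xFC000000u;                         /* clear bits 31..26 */
    eax |= (pow2ceil(guest_cores) - 1) << 26;    /* encode guest cores */
    return eax;
}

int main(void)
{
    /* Pretend the host reported 8 addressable core IDs (field value 7),
     * with the low cache-type bits (EAX[4:0]) non-zero. */
    uint32_t host_eax = (7u << 26) | 0x1;
    uint32_t guest_eax = fixup_eax_core_ids(host_eax, 1 /* guest cores */);

    printf("host  EAX[31:26] = %u\n", (unsigned)(host_eax >> 26));   /* 7 */
    printf("guest EAX[31:26] = %u\n", (unsigned)(guest_eax >> 26));  /* 0 */
    return 0;
}

Compiled and run, this prints 7 for the host field (8 addressable core IDs)
and 0 for the guest field (1 addressable core ID), which is the behaviour the
patch intends for a 1-core-per-package guest.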
