From: Salil Mehta
Subject: RE: [PATCH RFC V2 23/37] arm/virt: Release objects for *disabled* possible vCPUs after init
Date: Mon, 16 Oct 2023 23:28:38 +0000

Hi Gavin,

> From: Gavin Shan <gshan@redhat.com>
> Sent: Friday, September 29, 2023 12:58 AM
> To: Salil Mehta <salil.mehta@huawei.com>; qemu-devel@nongnu.org; 
> qemu-arm@nongnu.org
> Cc: maz@kernel.org; jean-philippe@linaro.org; Jonathan Cameron
> <jonathan.cameron@huawei.com>; lpieralisi@kernel.org;
> peter.maydell@linaro.org; richard.henderson@linaro.org;
> imammedo@redhat.com; andrew.jones@linux.dev; david@redhat.com;
> philmd@linaro.org; eric.auger@redhat.com; will@kernel.org; ardb@kernel.org;
> oliver.upton@linux.dev; pbonzini@redhat.com; mst@redhat.com;
> rafael@kernel.org; borntraeger@linux.ibm.com; alex.bennee@linaro.org;
> linux@armlinux.org.uk; darren@os.amperecomputing.com;
> ilkka@os.amperecomputing.com; vishnu@os.amperecomputing.com;
> karl.heubaum@oracle.com; miguel.luis@oracle.com; salil.mehta@opnsrc.net;
> zhukeqian <zhukeqian1@huawei.com>; wangxiongfeng (C)
> <wangxiongfeng2@huawei.com>; wangyanan (Y) <wangyanan55@huawei.com>;
> jiakernel2@gmail.com; maobibo@loongson.cn; lixianglai@loongson.cn
> Subject: Re: [PATCH RFC V2 23/37] arm/virt: Release objects for *disabled*
> possible vCPUs after init
> 
> Hi Salil,
> 
> On 9/26/23 20:04, Salil Mehta wrote:
> > During machvirt_init(), QOM ARMCPU objects are pre-created, along with
> > the corresponding KVM vCPUs in the host, for all possible vCPUs. This
> > is necessary because of an architectural constraint: KVM restricts the
> > deferred creation of KVM vCPUs and VGIC initialization/sizing after VM
> > init. Hence, the VGIC is pre-sized with the possible vCPUs.
> >
> > After machine initialization is complete, the disabled possible KVM
> > vCPUs are parked in the per-virt-machine list "kvm_parked_vcpus", and
> > the QOM ARMCPU objects for the disabled vCPUs are released. These are
> > re-created when a vCPU is hotplugged again; the new QOM ARMCPU object
> > is then re-attached to the corresponding parked KVM vCPU.
> >
> > Alternatively, we could have never released the QOM CPU objects and
> > kept reusing them. That approach would require some modification of
> > the qdevice_add() interface to fetch the old ARMCPU object instead of
> > creating a new one for the hotplug request.
> >
> > Each of the above approaches comes with its own pros and cons. This
> > prototype uses the first approach (suggestions are welcome!).
> >
> > Co-developed-by: Keqian Zhu <zhukeqian1@huawei.com>
> > Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
> > Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
> > ---
> >   hw/arm/virt.c | 32 ++++++++++++++++++++++++++++++++
> >   1 file changed, 32 insertions(+)
> >
> > diff --git a/hw/arm/virt.c b/hw/arm/virt.c
> > index f1bee569d5..3b068534a8 100644
> > --- a/hw/arm/virt.c
> > +++ b/hw/arm/virt.c
> > @@ -1965,6 +1965,7 @@ static void virt_cpu_post_init(VirtMachineState *vms, MemoryRegion *sysmem)
> >   {
> >       CPUArchIdList *possible_cpus = vms->parent.possible_cpus;
> >       int max_cpus = MACHINE(vms)->smp.max_cpus;
> > +    MachineState *ms = MACHINE(vms);
> >       bool aarch64, steal_time;
> >       CPUState *cpu;
> >       int n;
> > @@ -2025,6 +2026,37 @@ static void virt_cpu_post_init(VirtMachineState *vms, MemoryRegion *sysmem)
> >               }
> >           }
> >       }
> > +
> > +    if (kvm_enabled() || tcg_enabled()) {
> > +        for (n = 0; n < possible_cpus->len; n++) {
> > +            cpu = qemu_get_possible_cpu(n);
> > +
> > +            /*
> > +             * Now that the GIC has been sized with the possible vCPUs, the
> > +             * disabled vCPU objects need not be represented in the QOM. Release
> > +             * the disabled ARMCPU objects used earlier during init for pre-sizing.
> > +             *
> > +             * Through ACPI, we fake the presence (_STA.PRES=1) of these
> > +             * non-existent vCPUs to the guest and present them as disabled
> > +             * vCPUs (_STA.ENA=0) so that they can't be used. These vCPUs can
> > +             * later be added to the guest through hotplug exchanges, when the
> > +             * ARMCPU objects are created again using the 'device_add' QMP
> > +             * command.
> > +             */
> > +            /*
> > +             * RFC: Question: Another approach could have been to keep these
> > +             * objects forever and release them only when QEMU exits, as part of
> > +             * finalize, or when a new vCPU is hotplugged. In the latter case,
> > +             * the old object could be released for the newly created one?
> > +             */
> > +            if (!qemu_enabled_cpu(cpu)) {
> > +                CPUArchId *cpu_slot;
> > +                cpu_slot = virt_find_cpu_slot(ms, cpu->cpu_index);
> > +                cpu_slot->cpu = NULL;
> > +                object_unref(OBJECT(cpu));
> > +            }
> > +        }
> > +    }
> >   }
> >
> >
> 
> Needn't we release those CPU instances for hvf and qtest? Besides, I think
> it's hard to reuse those objects because they're managed by QOM, which is
> almost transparent to us, correct?

For now, this code path won't be hit for TCG, HVF, or qtest, as these are not
supported in this release. I might enable TCG support in the coming months;
some fixes and testing are still required.
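
As background on the "parking" mentioned in the commit message: the mechanism
is the pre-existing kvm_parked_vcpus list in accel/kvm. A simplified sketch of
how a parked vCPU fd gets reused, close to (but not exactly) the code in
accel/kvm/kvm-all.c:

#include "qemu/queue.h"        /* QLIST_* helpers */

struct KVMParkedVcpu {
    unsigned long vcpu_id;     /* architectural id of the parked vCPU */
    int kvm_fd;                /* fd originally returned by KVM_CREATE_VCPU */
    QLIST_ENTRY(KVMParkedVcpu) node;
};

/* On (re-)creation, adopt a previously parked fd if one exists, since KVM
 * does not allow KVM_CREATE_VCPU once the VGIC has been initialized. */
static int kvm_get_vcpu(KVMState *s, unsigned long vcpu_id)
{
    struct KVMParkedVcpu *cpu;

    QLIST_FOREACH(cpu, &s->kvm_parked_vcpus, node) {
        if (cpu->vcpu_id == vcpu_id) {
            int kvm_fd = cpu->kvm_fd;

            QLIST_REMOVE(cpu, node);
            g_free(cpu);
            return kvm_fd;     /* reuse the parked vCPU fd */
        }
    }
    return kvm_vm_ioctl(s, KVM_CREATE_VCPU, (void *)vcpu_id);
}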

I had created two working prototypes earlier, in 2020. One of them did not
release the CPU objects, so the ACPI CPU hotplug state was always in sync with
the QOM CPUState. But it required some changes in the qdev creation path to
reuse the existing CPU object, which Igor was not in favor of. It also had
some issues with live migration (which were left unresolved at that time).
Hence, I used the current approach as the primary one.
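
To make the trade-offs concrete, the idea behind that reuse-based prototype
was roughly the following. This is an illustrative sketch only; the helper
name is made up, and this is not code from either prototype:

/* Illustrative: on device_add, hand back the ARMCPU object kept in the
 * possible_cpus slot instead of allocating a fresh one. */
static Object *virt_get_cold_booted_cpu(MachineState *ms, int cpu_index)
{
    CPUArchId *slot = &ms->possible_cpus->cpus[cpu_index];

    if (slot->cpu) {
        /* reuse the object pre-created in machvirt_init() */
        return slot->cpu;
    }
    /* otherwise fall back to creating a new CPU object */
    return object_new(ms->cpu_type);
}

The cost of that approach is exactly what is described above: the device_add
path must learn to return an existing object, and the migration state of the
kept-but-disabled vCPUs has to be handled carefully.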

Thanks
Salil.