Re: [Qemu-devel] [PATCH v7 05/13] cpu: Reclaim vCPU objects


From: Thomas Huth
Subject: Re: [Qemu-devel] [PATCH v7 05/13] cpu: Reclaim vCPU objects
Date: Fri, 19 Feb 2016 16:21:21 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.5.0

On 28.01.2016 06:49, Bharata B Rao wrote:
> From: Gu Zheng <address@hidden>
> 
> In order to deal properly with KVM vCPUs (which cannot be removed without any
> protection), we do not close the KVM vCPU fd; we just record it and mark it as
> stopped in a list, so that it can be reused for a subsequent CPU hot-add
> request if possible. This is also the approach that the KVM folks suggested:
> https://www.mail-archive.com/address@hidden/msg102839.html
> 
> Signed-off-by: Chen Fan <address@hidden>
> Signed-off-by: Gu Zheng <address@hidden>
> Signed-off-by: Zhu Guihua <address@hidden>
> Signed-off-by: Bharata B Rao <address@hidden>
>                [- Explicit CPU_REMOVE() from qemu_kvm/tcg_destroy_vcpu()
>                   isn't needed as it is done from cpu_exec_exit()
>                 - Use iothread mutex instead of global mutex during
>                   destroy
>                 - Don't cleanup vCPU object from vCPU thread context
>                   but leave it to the callers (device_add/device_del)]
> Reviewed-by: David Gibson <address@hidden>
> ---
>  cpus.c               | 38 +++++++++++++++++++++++++++++++++++
>  include/qom/cpu.h    | 10 +++++++++
>  include/sysemu/kvm.h |  1 +
>  kvm-all.c            | 57 +++++++++++++++++++++++++++++++++++++++++++++++++++-
>  kvm-stub.c           |  5 +++++
>  5 files changed, 110 insertions(+), 1 deletion(-)
> 
> diff --git a/cpus.c b/cpus.c
> index 1e97cc4..c5631f0 100644
> --- a/cpus.c
> +++ b/cpus.c
> @@ -953,6 +953,18 @@ void async_run_on_cpu(CPUState *cpu, void (*func)(void *data), void *data)
>      qemu_cpu_kick(cpu);
>  }
>  
> +static void qemu_kvm_destroy_vcpu(CPUState *cpu)
> +{
> +    if (kvm_destroy_vcpu(cpu) < 0) {
> +        error_report("kvm_destroy_vcpu failed");
> +        exit(EXIT_FAILURE);
> +    }
> +}
> +
> +static void qemu_tcg_destroy_vcpu(CPUState *cpu)
> +{

Will this be populated by a later patch in your series? If not, maybe
add a debugging statement here so that it is clear that there is a TODO
left here?
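
Something along these lines, perhaps (completely untested sketch, just to
make the gap visible; uses error_report() from "qemu/error-report.h"):

  static void qemu_tcg_destroy_vcpu(CPUState *cpu)
  {
      /* TODO: TCG vCPU removal is not implemented yet */
      error_report("qemu_tcg_destroy_vcpu: CPU %d not really destroyed",
                   cpu->cpu_index);
  }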

> +}
> +
>  static void flush_queued_work(CPUState *cpu)
>  {
>      struct qemu_work_item *wi;
> @@ -1053,6 +1065,11 @@ static void *qemu_kvm_cpu_thread_fn(void *arg)
>              }
>          }
>          qemu_kvm_wait_io_event(cpu);
> +        if (cpu->exit && !cpu_can_run(cpu)) {
> +            qemu_kvm_destroy_vcpu(cpu);
> +            qemu_mutex_unlock_iothread();
> +            return NULL;
> +        }
>      }

You could improve the readability of the code by changing the condition of
the loop instead - currently it is a "while (1)" ... you could turn that
into a "do { ... } while (!cpu->exit || cpu_can_run(cpu))" and then destroy
the CPU after the loop, along the lines of the sketch below.
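
Roughly like this (untested; keeping your "cpu->exit" name here just for
the sake of the example):

    do {
        if (cpu_can_run(cpu)) {
            r = kvm_cpu_exec(cpu);
            if (r == EXCP_DEBUG) {
                cpu_handle_guest_debug(cpu);
            }
        }
        qemu_kvm_wait_io_event(cpu);
    } while (!cpu->exit || cpu_can_run(cpu));

    /* Only reached once the vCPU has been asked to go away */
    qemu_kvm_destroy_vcpu(cpu);
    qemu_mutex_unlock_iothread();
    return NULL;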

>      return NULL;
...
> diff --git a/include/qom/cpu.h b/include/qom/cpu.h
> index 2e5229d..32a2c71 100644
> --- a/include/qom/cpu.h
> +++ b/include/qom/cpu.h
> @@ -232,6 +232,7 @@ struct kvm_run;
>   * @halted: Nonzero if the CPU is in suspended state.
>   * @stop: Indicates a pending stop request.
>   * @stopped: Indicates the CPU has been artificially stopped.
> + * @exit: Indicates the CPU has exited due to an unplug operation.

There is also an "exit_request" member in this struct already ... maybe
you could name your new variable differently to avoid confusion?
Something like "remove_request" or "unplug_request"?
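
I.e. just to illustrate what I mean (hypothetical, untested):

    bool unplug_request;    /* instead of "exit" */

and then correspondingly in qemu_kvm_cpu_thread_fn():

    if (cpu->unplug_request && !cpu_can_run(cpu)) {
        qemu_kvm_destroy_vcpu(cpu);
        qemu_mutex_unlock_iothread();
        return NULL;
    }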

>   * @crash_occurred: Indicates the OS reported a crash (panic) for this CPU
>   * @tcg_exit_req: Set to force TCG to stop executing linked TBs for this
>   *           CPU and return to its top level loop.
> @@ -284,6 +285,7 @@ struct CPUState {
>      bool created;
>      bool stop;
>      bool stopped;
> +    bool exit;
>      bool crash_occurred;
>      bool exit_request;
>      uint32_t interrupt_request;
...

Apart from that, the patch looks fine to me.

 Thomas



