Re: [Qemu-devel] [PATCH V2] qemu-xen: HVM domain S3 bugfix
From: Paolo Bonzini
Subject: Re: [Qemu-devel] [PATCH V2] qemu-xen: HVM domain S3 bugfix
Date: Thu, 05 Sep 2013 21:57:41 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130805 Thunderbird/17.0.8
On 29/08/2013 10:25, Liu, Jinsong wrote:
> Currently HVM S3 has a bug coming from the difference between
> qemu-traditional and qemu-xen. For qemu-traditional, the way
> to resume from HVM S3 is via the 'xl trigger' command. However,
> for qemu-xen, the way to resume from HVM S3 is inherited from
> standard qemu, i.e. via QMP, and it doesn't work under Xen.
>
> The root cause is that, for qemu-xen, the 'xl trigger' command
> didn't reset devices, while QMP performed a qemu system reset but
> didn't unpause the HVM domain.
>
> We have 2 patches to fix the HVM S3 bug: a qemu-xen patch and an xl
> patch. This patch is the qemu-xen patch. It registers a wakeup-later
> notifier, so that when the 'xl trigger' command invokes QMP
> system_wakeup, after the qemu system reset it issues a hypercall to
> the hypervisor to unpause the domain, and HVM S3 resumes successfully.
>
> Signed-off-by: Liu Jinsong <address@hidden>
> ---
> vl.c | 13 +++++++++++++
> xen-all.c | 9 +++++++++
> 2 files changed, 22 insertions(+), 0 deletions(-)
>
> diff --git a/vl.c b/vl.c
> index 5314f55..aeebd83 100644
> --- a/vl.c
> +++ b/vl.c
> @@ -1478,6 +1478,8 @@ static NotifierList suspend_notifiers =
> NOTIFIER_LIST_INITIALIZER(suspend_notifiers);
> static NotifierList wakeup_notifiers =
> NOTIFIER_LIST_INITIALIZER(wakeup_notifiers);
> +static NotifierList wakeup_later_notifiers =
> + NOTIFIER_LIST_INITIALIZER(wakeup_later_notifiers);
> static uint32_t wakeup_reason_mask = ~0;
> static RunState vmstop_requested = RUN_STATE_MAX;
>
> @@ -1625,6 +1627,11 @@ static void qemu_system_suspend(void)
> monitor_protocol_event(QEVENT_SUSPEND, NULL);
> }
>
> +static void qemu_system_wakeup(void)
> +{
> + notifier_list_notify(&wakeup_later_notifiers, NULL);
> +}
> +
> void qemu_system_suspend_request(void)
> {
> if (runstate_check(RUN_STATE_SUSPENDED)) {
> @@ -1668,6 +1675,11 @@ void qemu_register_wakeup_notifier(Notifier *notifier)
> notifier_list_add(&wakeup_notifiers, notifier);
> }
>
> +void qemu_register_wakeup_later_notifier(Notifier *notifier)
> +{
> + notifier_list_add(&wakeup_later_notifiers, notifier);
> +}
> +
> void qemu_system_killed(int signal, pid_t pid)
> {
> shutdown_signal = signal;
> @@ -1744,6 +1756,7 @@ static bool main_loop_should_exit(void)
> cpu_synchronize_all_states();
> qemu_system_reset(VMRESET_SILENT);
> resume_all_vcpus();
> + qemu_system_wakeup();
Does Xen work if the hypercall is placed before resume_all_vcpus? If
so, you can just move the wakeup_notifiers invocation from
qemu_system_wakeup_request to here, and avoid introducing a separate list.
Paolo
> monitor_protocol_event(QEVENT_WAKEUP, NULL);
> }
> if (qemu_powerdown_requested()) {
> diff --git a/xen-all.c b/xen-all.c
> index 15be8ed..3353f63 100644
> --- a/xen-all.c
> +++ b/xen-all.c
> @@ -97,6 +97,7 @@ typedef struct XenIOState {
>
> Notifier exit;
> Notifier suspend;
> + Notifier wakeup_later;
> } XenIOState;
>
> /* Xen specific function for piix pci */
> @@ -139,6 +140,11 @@ static void xen_suspend_notifier(Notifier *notifier, void *data)
> xc_set_hvm_param(xen_xc, xen_domid, HVM_PARAM_ACPI_S_STATE, 3);
> }
>
> +static void xen_wakeup_later_notifier(Notifier *notifier, void *data)
> +{
> + xc_set_hvm_param(xen_xc, xen_domid, HVM_PARAM_ACPI_S_STATE, 0);
> +}
> +
> /* Xen Interrupt Controller */
>
> static void xen_set_irq(void *opaque, int irq, int level)
> @@ -1106,6 +1112,9 @@ int xen_hvm_init(void)
> state->suspend.notify = xen_suspend_notifier;
> qemu_register_suspend_notifier(&state->suspend);
>
> + state->wakeup_later.notify = xen_wakeup_later_notifier;
> + qemu_register_wakeup_later_notifier(&state->wakeup_later);
> +
> xc_get_hvm_param(xen_xc, xen_domid, HVM_PARAM_IOREQ_PFN, &ioreq_pfn);
> DPRINTF("shared page at pfn %lx\n", ioreq_pfn);
> state->shared_page = xc_map_foreign_range(xen_xc, xen_domid,
> XC_PAGE_SIZE,
>