From: Markus Armbruster
Subject: Re: [Qemu-devel] [PATCH v7 3/5] shutdown: Add source information to SHUTDOWN and RESET
Date: Tue, 09 May 2017 13:56:46 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/25.1 (gnu/linux)

Resending because the first send didn't get through to some recipients...

Eric Blake <address@hidden> writes:

> Time to wire up all the call sites that request a shutdown or
> reset to use the enum added in the previous patch.
>
> It would have been less churn to keep the common case with no
> arguments as meaning guest-triggered, and to modify only the
> host-triggered code paths via a wrapper function, but then we'd
> still have to audit that I didn't miss any host-triggered spots;
> changing the signature forces us to double-check that I correctly
> categorized all callers.
>
> Since command line options can change whether a guest reset request
> causes an actual reset vs. a shutdown, it's easy to also add the
> information to reset requests.
>
> Replay adds a FIXME about preserving the cause across the replay
> stream; that will be tackled in the next patch.
>
> Signed-off-by: Eric Blake <address@hidden>
> Acked-by: David Gibson <address@hidden> [ppc parts]
> Reviewed-by: Mark Cave-Ayland <address@hidden> [SPARC part]
[...]
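
For context, the interface change under review looks roughly like this,
reconstructed from the hunks quoted below.  The four cause values are the
ones visible in this mail; SHUTDOWN_CAUSE_NONE and the member order are
assumptions, not a verbatim copy of the v7 patch:

    /* Sketch of the new interface (reconstructed; see caveats above). */
    typedef enum ShutdownCause {
        SHUTDOWN_CAUSE_NONE,            /* assumed: no request pending */
        SHUTDOWN_CAUSE_HOST_ERROR,      /* internal error forced the stop */
        SHUTDOWN_CAUSE_HOST_QMP,        /* QMP 'quit' or 'system_reset' */
        SHUTDOWN_CAUSE_GUEST_SHUTDOWN,  /* guest-initiated, e.g. ACPI S5 */
        SHUTDOWN_CAUSE_GUEST_RESET,     /* guest-initiated reset */
    } ShutdownCause;

    /* Both requesters now take a cause, which is what forces the audit
     * of every caller that the commit message describes. */
    void qemu_system_shutdown_request(ShutdownCause reason);
    void qemu_system_reset_request(ShutdownCause reason);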
> diff --git a/hw/acpi/core.c b/hw/acpi/core.c
> index e890a5d..95fcac9 100644
> --- a/hw/acpi/core.c
> +++ b/hw/acpi/core.c
> @@ -561,7 +561,7 @@ static void acpi_pm1_cnt_write(ACPIREGS *ar, uint16_t val)
>          uint16_t sus_typ = (val >> 10) & 7;
>          switch(sus_typ) {
>          case 0: /* soft power off */
> -            qemu_system_shutdown_request();
> +            qemu_system_shutdown_request(SHUTDOWN_CAUSE_GUEST_SHUTDOWN);
>              break;
>          case 1:
>              qemu_system_suspend_request();
> @@ -569,7 +569,7 @@ static void acpi_pm1_cnt_write(ACPIREGS *ar, uint16_t val)
>          default:
>              if (sus_typ == ar->pm1.cnt.s4_val) { /* S4 request */
>                  qapi_event_send_suspend_disk(&error_abort);
> -                qemu_system_shutdown_request();
> +                qemu_system_shutdown_request(SHUTDOWN_CAUSE_GUEST_SHUTDOWN);

I'm fine with using SHUTDOWN_CAUSE_GUEST_SHUTDOWN for suspend, but have
you considered SHUTDOWN_CAUSE_GUEST_SUSPEND?

>              }
>              break;
>          }
[...]
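
For reference, the "soft power off" arm in the hunk above fires when the
guest writes SLP_EN together with a SLP_TYP value into the PM1a control
register.  A minimal guest-side sketch; the port address and the S5
SLP_TYP value come from the guest's FADT and \_S5 package, so the
constants here (QEMU piix4-style defaults) are assumptions:

    /* Guest-side sketch: request ACPI S5 (soft off) via PM1a_CNT.
     * Port 0x604 and SLP_TYP 0 match QEMU's piix4 defaults, but a real
     * guest must read them from the FADT and the DSDT's \_S5 package. */
    #define PM1A_CNT_PORT 0x604
    #define SLP_TYP(x)    ((x) << 10)   /* bits 10-12: the 'sus_typ' above */
    #define SLP_EN        (1 << 13)

    static void acpi_soft_off(void)
    {
        outw(SLP_TYP(0) | SLP_EN, PM1A_CNT_PORT);
    }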
> diff --git a/qmp.c b/qmp.c
> index ab74cd7..95949d0 100644
> --- a/qmp.c
> +++ b/qmp.c
> @@ -84,7 +84,7 @@ UuidInfo *qmp_query_uuid(Error **errp)
>  void qmp_quit(Error **errp)
>  {
>      no_shutdown = 0;
> -    qemu_system_shutdown_request();
> +    qemu_system_shutdown_request(SHUTDOWN_CAUSE_HOST_QMP);
>  }
>
>  void qmp_stop(Error **errp)
> @@ -105,7 +105,7 @@ void qmp_stop(Error **errp)
>
>  void qmp_system_reset(Error **errp)
>  {
> -    qemu_system_reset_request();
> +    qemu_system_reset_request(SHUTDOWN_CAUSE_HOST_QMP);

This is the only place where we pass something other than
SHUTDOWN_CAUSE_GUEST_RESET.  We could avoid churn the obvious way, but I
guess having the churn eases patch review.  Okay.
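
For the record, the churn-avoiding route Eric's commit message mentions
would presumably have looked something like the following; both names
here are invented for illustration, not taken from the patch:

    /* Hypothetical low-churn variant: the bare call keeps its old
     * guest-triggered meaning, and only host-triggered call sites move
     * to the extended function. */
    void qemu_system_shutdown_request_with_cause(ShutdownCause reason);

    static inline void qemu_system_shutdown_request(void)
    {
        qemu_system_shutdown_request_with_cause(SHUTDOWN_CAUSE_GUEST_SHUTDOWN);
    }

The cost is that an unconverted host-triggered caller would keep
compiling silently with the wrong cause; breaking the signature instead
makes every caller fail to build until it has been categorized, which is
the audit the commit message is after.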

>  }
>
>  void qmp_system_powerdown(Error **erp)
> diff --git a/replay/replay.c b/replay/replay.c
> index f810628..604fa4f 100644
> --- a/replay/replay.c
> +++ b/replay/replay.c
> @@ -51,7 +51,8 @@ bool replay_next_event_is(int event)
>          switch (replay_state.data_kind) {
>          case EVENT_SHUTDOWN:
>              replay_finish_event();
> -            qemu_system_shutdown_request();
> +            /* FIXME - store actual reason */
> +            qemu_system_shutdown_request(SHUTDOWN_CAUSE_HOST_ERROR);

The temporary replay breakage is no big deal.  Still, can we avoid it by
extending replay first, using a dummy value like
SHUTDOWN_CAUSE_HOST_ERROR until the real cause becomes available?  Not
sure it's worth a respin, though.
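
Concretely, extending replay first might amount to recording the cause
next to the event, along these lines.  replay_put_event(),
replay_put_byte() and replay_get_byte() are existing helpers declared in
replay/replay-internal.h, but the on-stream layout is a guess; the
eventual patch may well encode the cause differently:

    /* Record side (sketch): persist the cause with the shutdown event. */
    void replay_shutdown_request(ShutdownCause cause)
    {
        if (replay_mode == REPLAY_MODE_RECORD) {
            replay_put_event(EVENT_SHUTDOWN);
            replay_put_byte((uint8_t)cause);    /* new */
        }
    }

    /* Replay side (sketch): read the recorded cause back, replacing the
     * SHUTDOWN_CAUSE_HOST_ERROR dummy in the hunk above. */
    case EVENT_SHUTDOWN:
        replay_finish_event();
        qemu_system_shutdown_request(replay_get_byte());
        break;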

>              break;
>          default:
>              /* clock, time_t, checkpoint and other events */
[...]

Reviewed-by: Markus Armbruster <address@hidden>


