
From: Daniel P. Berrange
Subject: Re: [Qemu-devel] [PATCH] migration: Fix race of image locking between src and dst
Date: Mon, 19 Jun 2017 15:49:32 +0100
User-agent: Mutt/1.8.0 (2017-02-23)

On Sat, Jun 17, 2017 at 12:06:58AM +0800, Fam Zheng wrote:
> Previously, the dst side would immediately try to lock the write byte
> upon receiving QEMU_VM_EOF, but on the src side, bdrv_inactivate_all()
> is only done after sending it. If the src host is under load, dst may
> fail to acquire the lock because it races with src unlocking it.
> 
> Fix this by hoisting the bdrv_inactivate_all() operation before
> QEMU_VM_EOF.
> 
> N.B. A further improvement could possibly be done to cleanly handover
> locks between src and dst, so that there is no window where a third QEMU
> could steal the locks and prevent src and dst from running.
> 
> Reported-by: Peter Maydell <address@hidden>
> Signed-off-by: Fam Zheng <address@hidden>
> ---
>  migration/colo.c      |  2 +-
>  migration/migration.c | 19 +++++++------------
>  migration/savevm.c    | 19 +++++++++++++++----
>  migration/savevm.h    |  3 ++-
>  4 files changed, 25 insertions(+), 18 deletions(-)

[snip]

> @@ -1695,20 +1695,15 @@ static void migration_completion(MigrationState *s, int current_active_state,
>          ret = global_state_store();
>  
>          if (!ret) {
> +            bool inactivate = !migrate_colo_enabled();
>              ret = vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
>              if (ret >= 0) {
>                  qemu_file_set_rate_limit(s->to_dst_file, INT64_MAX);
> -                qemu_savevm_state_complete_precopy(s->to_dst_file, false);
> +                ret = qemu_savevm_state_complete_precopy(s->to_dst_file, false,
> +                                                         inactivate);
>              }
> -            /*
> -             * Don't mark the image with BDRV_O_INACTIVE flag if
> -             * we will go into COLO stage later.
> -             */
> -            if (ret >= 0 && !migrate_colo_enabled()) {
> -                ret = bdrv_inactivate_all();
> -                if (ret >= 0) {
> -                    s->block_inactive = true;
> -                }
> +            if (inactivate && ret >= 0) {
> +                s->block_inactive = true;
>              }
>          }
>          qemu_mutex_unlock_iothread();

[snip]

> @@ -1173,6 +1174,15 @@ void qemu_savevm_state_complete_precopy(QEMUFile *f, bool iterable_only)
>          json_end_object(vmdesc);
>      }
>  
> +    if (inactivate_disks) {
> +        /* Inactivate before sending QEMU_VM_EOF so that the
> +         * bdrv_invalidate_cache_all() on the other end won't fail. */
> +        ret = bdrv_inactivate_all();
> +        if (ret) {
> +            qemu_file_set_error(f, ret);
> +            return ret;
> +        }
> +    }

IIUC, as well as fixing the race condition, you're also improving
error reporting by using qemu_file_set_error(), which was not done
previously. It would be nice to mention that in the commit message
too if you respin for any other reason, but that's just a nit-pick,
so

  Reviewed-by: Daniel P. Berrange <address@hidden>

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
