Re: [Qemu-devel] [PATCH] block/rbd: add .bdrv_reopen_prepare() stub


From: Kevin Wolf
Subject: Re: [Qemu-devel] [PATCH] block/rbd: add .bdrv_reopen_prepare() stub
Date: Wed, 18 May 2016 10:19:31 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

On 05/17/2016 08:48 PM, Josh Durgin wrote:
> On 05/17/2016 03:03 AM, Sebastian Färber wrote:
> >Hi Kevin,
> >
> >>A correct reopen implementation must consider all options and flags that
> >>.bdrv_open() looked at.
> >>
> >>The options are okay, as both "filename" and "password-secret" aren't
> >>things that we want to allow a reopen to change. However, in the flags
> >>BDRV_O_NOCACHE makes a difference:
> >>
> >>     if (flags & BDRV_O_NOCACHE) {
> >>         rados_conf_set(s->cluster, "rbd_cache", "false");
> >>     } else {
> >>         rados_conf_set(s->cluster, "rbd_cache", "true");
> >>     }
> >>
> >>A reopen must either update the setting, or if it can't (e.g. because
> >>librbd doesn't support it) any attempt to change the flag must fail.
> 
> Updating this setting on an open image won't do anything, but if you
> rbd_close() and rbd_open() it again the setting will take effect.
> rbd_close() will force a flush of any pending I/O in librbd and
> free the memory for librbd's ImageCtx, which may or may not be desired
> here.

Doing rbd_close() first and then rbd_open() risks that the rbd_open()
fails and we end up with no usable image at all. Can we open a second
instance of the image first and close the original only if that
succeeds?

We already flush all requests before calling this, so that part
shouldn't make a difference.
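
Something like the following, as a minimal sketch of that approach (it
assumes librbd's rbd_open()/rbd_close() and the BDRVRBDState fields
from block/rbd.c; the helper name and error handling are illustrative,
not from the patch):

    /* Reopen the rbd image without losing the old handle on failure. */
    static int qemu_rbd_reopen_image(BDRVRBDState *s, int flags,
                                     Error **errp)
    {
        rbd_image_t new_image;
        int r;

        /* rbd_cache is read from the cluster conf at rbd_open() time;
         * setting it has no effect on the already-open image. */
        rados_conf_set(s->cluster, "rbd_cache",
                       (flags & BDRV_O_NOCACHE) ? "false" : "true");

        /* Open a second handle first; the original stays usable if
         * this fails. */
        r = rbd_open(s->io_ctx, s->name, &new_image, s->snap);
        if (r < 0) {
            error_setg_errno(errp, -r, "error reopening image");
            return r;
        }

        /* rbd_close() flushes any pending I/O in librbd before
         * freeing its ImageCtx. */
        rbd_close(s->image);
        s->image = new_image;
        return 0;
    }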

> >Thanks for the feedback.
> >As far as I can tell it's not possible to update the cache settings
> >without reconnecting. I've added a check in the following patch.
> >Would be great if someone who knows the internals of ceph/rbd could
> >have a look as well.
> 
> There's no need to reset the librados state, so connections to the
> cluster can stick around. I'm a bit unclear on the bdrv_reopen_*
> functions though - what is their intended use and semantics?

They change the options and flags that were specified in .bdrv_open().
The most important use case today is switching between read-only and
read-write, but changing the cache mode or any other option can be
requested as well.
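
For context, the core drives this as a three-phase cycle over driver
callbacks, roughly like this (signatures as in QEMU's
include/block/block_int.h; the comments are my summary):

    /* prepare: validate the requested state->flags and set up any new
     * state, without making it visible yet. The only phase that may
     * fail. */
    static int drv_reopen_prepare(BDRVReopenState *state,
                                  BlockReopenQueue *queue, Error **errp);

    /* commit: make the prepared state permanent. */
    static void drv_reopen_commit(BDRVReopenState *state);

    /* abort: undo whatever prepare() did, if anything in the reopen
     * queue failed. */
    static void drv_reopen_abort(BDRVReopenState *state);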

> >Sebastian
> >
> >-- >8 --
> >Subject: [PATCH] block/rbd: add .bdrv_reopen_prepare() stub
> >
> >Add support for reopen() by adding the .bdrv_reopen_prepare() stub
> >
> >Signed-off-by: Sebastian Färber <address@hidden>
> >---
> >  block/rbd.c | 14 ++++++++++++++
> >  1 file changed, 14 insertions(+)
> >
> >diff --git a/block/rbd.c b/block/rbd.c
> >index 5bc5b32..8ecf096 100644
> >--- a/block/rbd.c
> >+++ b/block/rbd.c
> >@@ -577,6 +577,19 @@ failed_opts:
> >      return r;
> >  }
> >
> >+/* Note that this will not re-establish a connection with the Ceph cluster
> >+   - it is effectively a NOP.  */
> >+static int qemu_rbd_reopen_prepare(BDRVReopenState *state,
> >+                                   BlockReopenQueue *queue, Error **errp)
> >+{
> >+    if (state->flags & BDRV_O_NOCACHE &&
> >+        ((state->bs->open_flags & BDRV_O_NOCACHE) == 0)) {

This misses the other direction, where you try to turn on caching. If we
don't implement the real functionality, we should always error out if
the bit changes. The most readable check is probably:

(state->flags & BDRV_O_NOCACHE) != (state->bs->open_flags & BDRV_O_NOCACHE)

> >+        error_setg(errp, "Cannot turn off rbd_cache during reopen");
> >+        return -EINVAL;
> >+    }
> >+    return 0;
> >+}
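
Concretely, the stub with the symmetric check could look like this (a
sketch combining your patch with the check above; untested):

    static int qemu_rbd_reopen_prepare(BDRVReopenState *state,
                                       BlockReopenQueue *queue, Error **errp)
    {
        /* Reject any change of BDRV_O_NOCACHE, in either direction,
         * as long as the real switching isn't implemented. */
        if ((state->flags & BDRV_O_NOCACHE) !=
            (state->bs->open_flags & BDRV_O_NOCACHE)) {
            error_setg(errp, "Cannot change rbd_cache setting during reopen");
            return -EINVAL;
        }
        return 0;
    }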

Kevin


