From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [Nbd] [PATCH v3] doc: Add NBD_CMD_BLOCK_STATUS extension
Date: Tue, 29 Nov 2016 09:17:14 +0000
User-agent: Mutt/1.7.1 (2016-10-04)

On Mon, Nov 28, 2016 at 06:33:24PM +0100, Wouter Verhelst wrote:
> Hi Stefan,
> 
> On Mon, Nov 28, 2016 at 11:19:44AM +0000, Stefan Hajnoczi wrote:
> > On Sun, Nov 27, 2016 at 08:17:14PM +0100, Wouter Verhelst wrote:
> > > Quickly: the reason I haven't merged this yet is twofold:
> > > - I wasn't thrilled with the proposal at the time. It felt a bit
> > >   hackish, and bolted onto NBD so you could use it, but without defining
> > >   everything in the NBD protocol. "We're reading some data, but it's not
> > >   about you". That didn't feel right.
> > >
> > > - There were a number of questions still unanswered (you're answering a
> > >   few below, so that's good).
> > > 
> > > For clarity, I have no objection whatsoever to adding more commands if
> > > they're useful, but I would prefer that they're also useful with NBD on
> > > its own, i.e., without requiring an initiation or correlation of some
> > > state through another protocol or network connection or whatever. If
> > > that's needed, that feels like I didn't do my job properly, if you get
> > > my point.
> > 
> > The out-of-band operations you are referring to are for dirty bitmap
> > management.  (The goal is to read out blocks that changed since the last
> > backup.)
> > 
> > The client does not access the live disk, instead it accesses a
> > read-only snapshot and the dirty information (so that it can copy out
> > only blocks that were written).  The client is allowed to read blocks
> > that are not dirty too.
> 
> I understood as much, yes.
> 
> > If you want to implement the whole incremental backup workflow in NBD
> > then the client would first have to connect to the live disk, set up
> > dirty tracking, create a snapshot export, and then connect to that
> > snapshot.
> > 
> > That sounds like a big feature set and I'd argue it's for the control
> > plane (storage API) and not the data plane (NBD).  There were
> > discussions about transferring the dirty information via the control
> > plane but it seems more appropriate to do it in the data plane since it is
> > block-level information.
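
(To make that workflow concrete: the client-side flow is roughly the sketch
below.  Everything here is hypothetical pseudocode -- block_status() and
read() stand in for NBD_CMD_BLOCK_STATUS and NBD_CMD_READ issued by some NBD
client, and the mgmt.* calls stand in for whatever control-plane API sets up
dirty tracking and the snapshot export.)

    # Hypothetical sketch, not a real API.
    def incremental_backup(mgmt, backup_path, chunk=2 * 1024 * 1024):
        mgmt.start_dirty_tracking("disk0")                # control plane
        export = mgmt.create_snapshot_export("disk0")     # control plane

        snap = nbd_connect(export.address, export.name)   # data plane (NBD)
        with open(backup_path, "r+b") as out:
            offset = 0
            while offset < snap.size:
                # One extent starting at offset: (length, is_dirty),
                # i.e. the dirty-bitmap answer for that range.
                length, dirty = snap.block_status(offset, chunk)
                if dirty:
                    out.seek(offset)
                    out.write(snap.read(offset, length))  # copy dirty data only
                offset += length
        snap.close()
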
> 
> I agree that creating and managing snapshots is out of scope for NBD. The
> protocol is not set up for that.
> 
> However, I'm arguing that if we're going to provide information about
> snapshots, we should be able to properly refer to these snapshots from
> within an NBD context. My previous mail suggested adding a negotiation
> message that would essentially ask the server "tell me about the
> snapshots you know about", giving them an NBD identifier in the process
> (accompanied by a "foreign" identifier that is decidedly *not* an NBD
> identifier and that could be used to match the NBD identifier to
> something implementation-defined). This would be read-only information;
> the client cannot ask the server to create new snapshots. We can then
> later in the protocol refer to these snapshots by way of that NBD
> identifier.
> 
> My proposal also makes it impossible to get updates of newly created
> snapshots without disconnecting and reconnecting (due to the fact that
> you can't go from transmission back to negotiation), but I'm not sure
> that's a problem.
> 
> Doing so has two advantages:
> - If a client is accidentally (due to misconfiguration or implementation
>   bugs or whatnot) connecting to the wrong server after having created a
>   snapshot through a management protocol, we have an opportunity to
>   detect this error, due to the fact that the "foreign" identifiers
>   passed to the client during negotiation will not match with what the
>   client was expecting.
> - A future version of the protocol could possibly include an extended
>   version of the read command, allowing a client to read information
>   from multiple storage snapshots without requiring a reconnect, and
>   giving current clients information about allocation status across
>   various snapshots (although a first implementation could very well
>   limit itself to only having one snapshot).

Sorry, I misunderstood you.
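
If I read your proposal right, negotiation would grow something along these
lines.  The option, field layout and names below are made up purely to
restate the idea; none of this is in the spec:

    import struct

    # Made-up encoding of one snapshot entry in the reply to a
    # hypothetical "list snapshots" negotiation option: a 32-bit NBD
    # identifier that later requests would refer to, plus a
    # variable-length "foreign" identifier that the client matches
    # against whatever the management plane told it to expect.
    def pack_snapshot_entry(nbd_id, foreign_id):
        foreign = foreign_id.encode("utf-8")
        return struct.pack(">II", nbd_id, len(foreign)) + foreign

    # A client doing a backup would refuse to proceed if this foreign
    # identifier does not match the snapshot it just asked for, which
    # catches the wrong-server case you describe.
    entry = pack_snapshot_entry(1, "backup-job-42")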

Snapshots are not very different from NBD exports, especially if the
storage system supports writable snapshots (aka cloning).  Should we
just use named exports?
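
For instance the server could expose each snapshot as just another
(read-only) export, and the client would pick it with ordinary export-name
negotiation (NBD_OPT_EXPORT_NAME); the export name itself then plays the
role of the "foreign" identifier.  The names below are invented for
illustration:

    # Invented export names; the point is only that no new NBD
    # machinery is needed if each snapshot is simply a named export.
    EXPORTS = {
        "disk0":                "live disk",
        "disk0@backup-job-42":  "read-only snapshot for this backup job",
    }

    def export_name_for_job(disk, job_id):
        # The management plane hands the client this name; connecting
        # to an export that does not exist simply fails, which also
        # catches the wrong-server case.
        return "%s@%s" % (disk, job_id)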

Stefan


