qemu-block

Re: x-blockdev-reopen & block-dirty-bitmaps


From: Peter Krempa
Subject: Re: x-blockdev-reopen & block-dirty-bitmaps
Date: Tue, 18 Feb 2020 16:35:26 +0100
User-agent: Mutt/1.13.0 (2019-11-30)

On Tue, Feb 18, 2020 at 15:25:33 +0100, Kevin Wolf wrote:
> Am 18.02.2020 um 13:58 hat Peter Krempa geschrieben:
> > On Mon, Feb 17, 2020 at 10:52:31 +0100, Kevin Wolf wrote:
> > > Am 14.02.2020 um 21:32 hat John Snow geschrieben:

[...]

> > Well, while we probably want it to be stable before accepting it
> > upstream, that didn't prevent me from actually trying to use reopening.
> > It would probably be frowned upon if I tried to use it upstream, though.
> > 
> > The problem is that we'd have to carry the compatibility code for at
> > least the two possible names of the command if nothing else changes and
> > also the fact that once the command is declared stable, some older
> > libvirt versions might not know to use it.
> 
> I think this is exactly the thing we need before we can mark it stable:
> Some evidence that it actually provides the functionality that
> management tools need. So thanks for giving it a try.

Yes, this is the unfortunate circular dependency :). We want to use it
only once it's stable, and you want some testing for it first. Finding a
good use case on our side is usually the hardest part.

> > The implementation was surprisingly easy though and works well to reopen
> > the backing files in RW mode. The caveat was that the reopen somehow
> > still didn't reopen the bitmaps and qemu ended up reporting:
> > 
> > libvirt-1-format: Failed to make dirty bitmaps writable: Cannot update bitmap directory: Bad file descriptor
> > 
> > So unfortunately it didn't work out for that scenario.
> 
> I'm not completely sure, but this sounds a bit like a reopen bug in the
> file-posix driver to me, where we keep using the old file descriptor
> somewhere?
> 
> Someone (TM) should turn this into a qemu-iotests case and then we can
> debug it.

I'm not sure I'd be of much use turning it into a test myself, but I can
provide the (rough) steps in case that helps:

The images both had some bitmaps already present and active:

      "dirty-bitmaps": [
        {
          "name": "b",
          "recording": true,
          "persistent": true,
          "busy": false,
          "status": "active",
          "granularity": 65536,
          "count": 0
        }
      ],
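As a side note, a persistent bitmap like the one in the listing above would typically have been created with the block-dirty-bitmap-add QMP command. A minimal sketch of the wire format, reusing the node and bitmap names from the listing (sending it over a real QMP socket is left out):

```python
import json

# Sketch: build the block-dirty-bitmap-add command that would create a
# bitmap matching the listing above.  "granularity" is optional in QMP;
# it is spelled out here to match the queried value.
cmd = {
    "execute": "block-dirty-bitmap-add",
    "arguments": {
        "node": "libvirt-1-format",
        "name": "b",
        "persistent": True,     # stored in the qcow2 file across restarts
        "granularity": 65536,   # matches the "granularity" in the listing
    },
}

# Render the wire format as it would be sent to the monitor.
print(json.dumps(cmd, sort_keys=True))
```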

I've started qemu with:

-blockdev '{"driver":"file","filename":"/tmp/copy4.qcow2","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-2-format","read-only":true,"driver":"qcow2","file":"libvirt-2-storage","backing":null}' \
-blockdev '{"driver":"file","filename":"/tmp/copy4.1582023995","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-1-format","read-only":false,"driver":"qcow2","file":"libvirt-1-storage","backing":"libvirt-2-format"}' \
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0xa,drive=libvirt-1-format,id=virtio-disk0

Tried to reopen the backing image:

{"execute":"x-blockdev-reopen","arguments":{"node-name":"libvirt-2-format","read-only":false,"driver":"qcow2","file":"libvirt-2-storage","backing":null},"id":"libvirt-370"}

I suspect that at that point this was printed to stderr (but I don't
have timestamps in the log):

  libvirt-2-format: Failed to make dirty bitmaps writable: Cannot update bitmap directory: Bad file descriptor

I then tried to remove the bitmaps, but that failed too:

{"execute":"transaction","arguments":{"actions":[{"type":"block-dirty-bitmap-remove","data":{"node":"libvirt-1-format","name":"b"}},{"type":"block-dirty-bitmap-remove","data":{"node":"libvirt-2-format","name":"b"}}]},"id":"libvirt-373"}
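For anyone turning this into a qemu-iotests case, the two QMP commands from the steps above can be expressed as plain data first (node and bitmap names taken from the command line above; actually driving a live QEMU is left out of this sketch):

```python
import json

def reopen_backing_rw(node, file_node, backing):
    """Build the x-blockdev-reopen command that triggered the
    'Bad file descriptor' error on the persistent bitmaps."""
    return {
        "execute": "x-blockdev-reopen",
        "arguments": {
            "node-name": node,
            "read-only": False,
            "driver": "qcow2",
            "file": file_node,
            "backing": backing,
        },
    }

def remove_bitmaps(pairs):
    """Build the transaction that subsequently failed to delete the
    bitmaps on the affected nodes."""
    return {
        "execute": "transaction",
        "arguments": {
            "actions": [
                {"type": "block-dirty-bitmap-remove",
                 "data": {"node": node, "name": name}}
                for node, name in pairs
            ],
        },
    }

cmds = [
    reopen_backing_rw("libvirt-2-format", "libvirt-2-storage", None),
    remove_bitmaps([("libvirt-1-format", "b"), ("libvirt-2-format", "b")]),
]
for cmd in cmds:
    print(json.dumps(cmd))
```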


> > <sidetrack alert>
> > 
> > Also I'm afraid I have another use case for it:
> > 
> > oVirt when doing their 'live storage migration' actually uses libvirt to
> > mirror only the top layer in shallow mode and copies everything else
> > while the mirror is running using qemu-img.
> > 
> > Prior to libvirt's use of -blockdev this worked well, because qemu
> > reopened the mirror destination (which caused to open the backing files)
> > only at the end. With -blockdev we have to open the backing files right
> > away so that they can be properly installed as backing of the image
> > being mirrored and oVirt's qemu-img instance gets a locking error as the
> > images are actually opened for reading already.
> > 
> > I'm afraid that we won't be able to restore the previous semantics
> > without actually opening the backing files after the copy is
> > synchronized before completing the job and then installing them as the
> > backing via blockdev-reopen.
> > 
> > Libvirt's documentation partially covered our backs here [1]: it asked
> > the user to actually provide a working image, but oVirt exploited the
> > qemu behaviour to allow folding the two operations together.
> > 
> > [1] https://libvirt.org/html/libvirt-libvirt-domain.html#virDomainBlockCopy
> 
> This sounds like a case that blockdev-snapshot might be able to solve:
> After the offline copy has completed, you blockdev-add the whole backing
> chain for the target and then use blockdev-snapshot to add the active
> layer (that had 'backing': null) to it.

Interesting idea! I'll give it a try. If you think that trying
blockdev-reopen for this case might also be of some value, I can give
that a try too, since I have some of the framework prepared now.
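If I understood the suggestion correctly, the sequence would be roughly the following: blockdev-add the copied backing chain, then blockdev-snapshot to install it under the mirror target that was added with "backing": null. The node names ("target-*") are made up for illustration:

```python
import json

# Sketch of the suggested blockdev-snapshot approach.  blockdev-snapshot
# makes "node" the new backing file of "overlay"; the overlay here stands
# for the mirror target that was originally added with "backing": null.
cmds = [
    {"execute": "blockdev-add",
     "arguments": {"driver": "file",
                   "filename": "/tmp/copy4.qcow2",
                   "node-name": "target-backing-storage"}},
    {"execute": "blockdev-add",
     "arguments": {"driver": "qcow2",
                   "node-name": "target-backing-format",
                   "file": "target-backing-storage",
                   "backing": None}},
    {"execute": "blockdev-snapshot",
     "arguments": {"node": "target-backing-format",
                   "overlay": "target-format"}},
]
for cmd in cmds:
    print(json.dumps(cmd))
```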



