From: Fam Zheng
Subject: Re: [Qemu-devel] [PATCH for-2.12 0/4] qmp dirty bitmap API
Date: Tue, 26 Dec 2017 17:45:09 +0800
User-agent: Mutt/1.9.1 (2017-09-22)

On Tue, 12/26 11:57, Vladimir Sementsov-Ogievskiy wrote:
> 26.12.2017 10:07, Fam Zheng wrote:
> > On Wed, 12/20 11:20, Vladimir Sementsov-Ogievskiy wrote:
> > > external backup:
> > > 
> > > 0. we have active_disk with a dirty bitmap bitmap0 attached to it
> > > 1. qmp blockdev-add tmp_disk (backing=active_disk)
> > > 2. guest fsfreeze
> > > 3. qmp transaction:
> > >          - block-dirty-bitmap-add node=active_disk name=bitmap1
> > >          - block-dirty-bitmap-disable node=active_disk name=bitmap0
> > >          - blockdev-backup drive=active_disk target=tmp_disk sync=none
> > > 4. guest fsthaw
> > > 5. (? not designed yet) qmp blockdev-add filter_node - a special filter node
> > > over tmp_disk for synchronizing NBD reads with the backup(sync=none) COW
> > > requests (like it is done in block/replication)
> > > 6. qmp nbd-server-start
> > > 7. qmp nbd-server-add filter_node (there should be a way to export the
> > > bitmap of a child node: filter_node->tmp_disk->active_disk->bitmap0)
> > > 
> > > then, an external tool can connect to the NBD server, get the exported
> > > bitmap (including bitmap0), and read data accordingly, per the NBD
> > > specification. (Also, the external tool may get a merge of several
> > > bitmaps, if we already have a sequence of them.)
> > > then, after the backup finishes, the following can be done:
> > > 
> > > 1. qmp block-job-cancel device=active_disk (stop our backup(sync=none))
> > > 2. qmp nbd-server-stop (or qmp nbd-server-remove filter_node)
> > > 3. qmp blockdev-remove filter_node
> > > 4. qmp blockdev-remove tmp_disk
> > > 
> > > on successful backup, you can drop the old bitmap if you want (or keep
> > > it, if you need to preserve the sequence of disabled bitmaps):
> > > 1. block-dirty-bitmap-remove node=active_disk name=bitmap0
> > > 
> > > on failed backup, you can merge the bitmaps, to make it look like nothing
> > > happened:
> > > 1. qmp transaction:
> > >         - block-dirty-bitmap-merge node=active_disk name-source=bitmap1
> > > name-target=bitmap0
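[Editor's sketch: the transaction in step 3 above, expressed as the JSON payload a client would send over the QMP socket. Node and bitmap names ("active_disk", "tmp_disk", "bitmap0", "bitmap1") are the placeholders from this thread; note the shorthand above writes "drive=", while the blockdev-backup schema parameter is "device". Parameter names are my reading of the QMP schema, not a tested command.]

```python
import json

# QMP "transaction" groups the three actions so they take effect atomically
# with respect to guest I/O: add the new bitmap, freeze the old one, and
# start the sync=none backup into the temporary overlay.
transaction = {
    "execute": "transaction",
    "arguments": {
        "actions": [
            {"type": "block-dirty-bitmap-add",
             "data": {"node": "active_disk", "name": "bitmap1"}},
            {"type": "block-dirty-bitmap-disable",
             "data": {"node": "active_disk", "name": "bitmap0"}},
            {"type": "blockdev-backup",
             "data": {"device": "active_disk", "target": "tmp_disk",
                      "sync": "none"}},
        ]
    },
}

print(json.dumps(transaction, indent=2))
```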
> > Done in a transaction, will merging a large-ish bitmap synchronously hurt
> > responsiveness? We hold the BQL here, which pauses all device emulation.
> > 
> > Have you measured how long it takes to merge two typical bitmaps, say, for
> > a 1TB disk?
> > 
> > Fam
> 
> We don't need merge in a transaction.

Yes. Either way, the command is synchronous and the whole merge process is done
with the BQL held, so my question still stands. But your numbers have answered
it: the time is negligible.

Bitmap merging doesn't even have to be synchronous if it really matters, but we
can live with a synchronous implementation for now.

Thanks!

Fam

> 
> Anyway, good question.
> 
> two all-ones bitmaps, 64k granularity, 1 TB disk:
> # time virsh qemu-monitor-command tmp '{"execute":
> "block-dirty-bitmap-merge", "arguments": {"node": "disk", "src_name": "a",
> "dst_name": "b"}}'
> {"return":{},"id":"libvirt-1181"}
> real    0m0.009s
> user    0m0.006s
> sys     0m0.002s
> 
> and this is fine:
> for the last level of the hbitmap we will have
>    disk_size / granularity / nb_bits_in_long = (1024 ^ 4) / (64 * 1024) / 64
> = 262144
> operations, which is not many
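[Editor's sketch: the back-of-the-envelope count above, spelled out. One dirty bit per 64 KiB of disk, 64 bits per machine word, so the merge loop over the last hbitmap level touches this many words.]

```python
disk_size = 1024 ** 4      # 1 TiB in bytes
granularity = 64 * 1024    # 64 KiB of disk per dirty bit
bits_per_long = 64         # bits per word in the bottom hbitmap level

# Number of word-sized OR operations to merge the bottom levels.
words = disk_size // granularity // bits_per_long
print(words)  # 262144
```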
> 
> 
> 
> bitmaps in gdb:
> 
> (gdb) p bdrv_lookup_bs ("disk", "disk", 0)
> $1 = (BlockDriverState *) 0x7fd3f6274940
> (gdb) p *$1->dirty_bitmaps.lh_first
> $2 = {mutex = 0x7fd3f6277b28, bitmap = 0x7fd3f5a5adc0, meta = 0x0, successor
> = 0x0,
>   name = 0x7fd3f637b410 "b", size = 1099511627776, disabled = false,
> active_iterators = 0,
>   readonly = false, autoload = false, persistent = false, list = {le_next =
> 0x7fd3f567c650,
>     le_prev = 0x7fd3f6277b58}}
> (gdb) p *$1->dirty_bitmaps.lh_first ->bitmap
> $3 = {size = 16777216, count = 16777216, granularity = 16, meta = 0x0,
> levels = {0x7fd3f6279a90,
>     0x7fd3f5506350, 0x7fd3f5affcb0, 0x7fd3f547a860, 0x7fd3f637b200,
> 0x7fd3f67ff5c0, 0x7fd3d8dfe010},
>   sizes = {1, 1, 1, 1, 64, 4096, 262144}}
> (gdb) p *$1->dirty_bitmaps.lh_first ->list .le_next
> $4 = {mutex = 0x7fd3f6277b28, bitmap = 0x7fd3f567cb30, meta = 0x0, successor
> = 0x0,
>   name = 0x7fd3f5482fb0 "a", size = 1099511627776, disabled = false,
> active_iterators = 0,
>   readonly = false, autoload = false, persistent = false, list = {le_next =
> 0x0,
>     le_prev = 0x7fd3f6c779e0}}
> (gdb) p *$1->dirty_bitmaps.lh_first ->list .le_next ->bitmap
> $5 = {size = 16777216, count = 16777216, granularity = 16, meta = 0x0,
> levels = {0x7fd3f5ef8880,
>     0x7fd3f5facea0, 0x7fd3f5f1cec0, 0x7fd3f5f40a00, 0x7fd3f6c80a00,
> 0x7fd3f66e5f60, 0x7fd3d8fff010},
>   sizes = {1, 1, 1, 1, 64, 4096, 262144}}
> 
> -- 
> Best regards,
> Vladimir
> 


