
Re: [PATCH 1/2] blockdev: qmp_transaction: harden transaction properties


From: Vladimir Sementsov-Ogievskiy
Subject: Re: [PATCH 1/2] blockdev: qmp_transaction: harden transaction properties for bitmaps
Date: Tue, 3 Oct 2023 10:18:09 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.15.1

On 19.09.23 13:02, Andrey Zhadchenko wrote:
> Hi!
>
> Thanks for the review
>
> On 9/12/23 21:29, Vladimir Sementsov-Ogievskiy wrote:
>> On 04.09.23 11:31, Andrey Zhadchenko wrote:
>>> Unlike other transaction commands, bitmap operations do not drain the
>>> target bds. If we have an IOThread, this may result in some
>>> inconsistencies, as the bitmap content may change during the
>>> transaction command.
>>> Add bdrv_drained_begin()/end() to bitmap operations.
>>>
>>> Signed-off-by: Andrey Zhadchenko <andrey.zhadchenko@virtuozzo.com>
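For context, the shape of the proposed change is roughly the following
sketch. The drained-section helpers are the real QEMU API; the function
and its arguments are illustrative, not the actual patch:

    #include "block/block.h"  /* bdrv_drained_begin(), bdrv_drained_end() */

    /*
     * Sketch only: each bitmap transaction action quiesces its target
     * node, so guest I/O cannot modify the bitmap while the action runs.
     */
    static void bitmap_action_sketch(BlockDriverState *bs)
    {
        bdrv_drained_begin(bs);    /* stop and flush guest I/O to bs */

        /* ... the bitmap operation itself (add/clear/merge/...) ... */

        bdrv_drained_end(bs);      /* resume guest I/O */
    }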

>> Hi!
>>
>> First, please always include a cover letter when there is more than
>> one patch.
>>
>> Next. Hmm. Good idea, but I'm afraid that's still not enough.

>> Assume you have two BSs A and B in two different iothreads. Then the
>> sequence may be like this:
>>
>> 1. drain_begin A
>>
>> 2. do operation with bitmap in A
>>
>> 3. guest writes to B, B is modified and bitmap in B is modified as well
>>
>> 4. drain_begin B
>>
>> 5. do operation with bitmap in B
>>
>> 6. drain_end B
>>
>> 7. drain_end A
>>
>> The user may expect that all the operations are done atomically with
>> respect to any guest I/O. And if the operations are dependent, the
>> intermediate write [3] may break the result.

> I see your point, but can the difference really be observed in this
> case? It is probably only relevant if the user can copy/merge bitmaps
> between block nodes (as far as I know, this is not the case for now).

The user can (see the QMP command block-dirty-bitmap-merge). Also
consider, for example, starting backups of all disks, which are in
different iothreads. The whole backup should be consistent and should
correspond to a single point in time, so all the backup jobs should be
started inside one drained_all section.



>> So, we should drain all participant drives during the whole
>> transaction. The simplest solution is a bdrv_drain_all_begin() /
>> bdrv_drain_all_end() pair in qmp_transaction(). Could we start with it?
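Concretely, that amounts to something like the sketch below. The
drain-all helpers are the real QEMU API; the surrounding function is a
simplification, not the actual qmp_transaction() signature:

    #include "block/block.h"  /* bdrv_drain_all_begin(), bdrv_drain_all_end() */

    /*
     * Sketch of the "simplest solution": quiesce every block node, in
     * every iothread, for the whole transaction, so that all actions
     * observe one consistent point in time.
     */
    static void transaction_drained_sketch(TransactionActionList *actions,
                                           Error **errp)
    {
        bdrv_drain_all_begin();   /* stop guest I/O on all nodes */

        /* ... the existing prepare / commit-or-abort / clean sequence ... */

        bdrv_drain_all_end();     /* resume guest I/O everywhere */
    }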


> That would definitely get rid of all the problems, but I do not really
> like the idea of draining unrelated nodes.
>
> What do you think if I add a new function prototype to
> TransactionActionDrv which returns the bds involved in the
> transaction? Then we can get a list of all bds and lock them before
> prepare and release them afterwards in clean.
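The proposal would look roughly like this (purely hypothetical, not
existing QEMU code; the abort/commit/clean callbacks mirror the current
TransactionActionDrv, and get_bds would be the new addition):

    #include <glib.h>   /* GSList */

    /*
     * Hypothetical sketch of the proposal: each action driver reports
     * the block nodes it touches, so the transaction could drain
     * exactly those and no others.
     */
    typedef struct TransactionActionDrvSketch {
        void (*abort)(void *opaque);
        void (*commit)(void *opaque);
        void (*clean)(void *opaque);
        GSList *(*get_bds)(void *opaque);  /* list of BlockDriverState * */
    } TransactionActionDrvSketch;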

That doesn't look generic. The Transaction API is not part of the block
layer, and I'd rather not change it. Besides, you would add it to
TransactionActionDrv, but use it in blockdev.c? You would have to add
something like transaction_action() with a big switch-case. And then
there is no reason to add anything to TransactionActionDrv, as it may
just be a function defined in blockdev.c.
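In other words, the alternative keeps the block-layer knowledge local
to blockdev.c, along these lines (a hypothetical helper; the action
kind constants are the QAPI-generated names, the rest is a sketch):

    #include "qapi/qapi-types-transaction.h"   /* TransactionAction */

    /*
     * Hypothetical helper local to blockdev.c: map each transaction
     * action to the block nodes it touches, without extending the
     * generic Transaction API.
     */
    static GSList *transaction_action_get_bds(TransactionAction *action)
    {
        switch (action->type) {
        case TRANSACTION_ACTION_KIND_BLOCK_DIRTY_BITMAP_ADD:
        case TRANSACTION_ACTION_KIND_BLOCK_DIRTY_BITMAP_MERGE:
            /* ... look up the named node(s) and return them in a list ... */
            return NULL;   /* placeholder */
        /* ... other action kinds ... */
        default:
            return NULL;
        }
    }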

--
Best regards,
Vladimir



