qemu-devel


From: Max Reitz
Subject: Re: [Qemu-devel] [PATCH for-3.1 1/2] block: Don't inactivate children before parents
Date: Mon, 26 Nov 2018 13:40:30 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.3.0

On 26.11.18 13:05, Max Reitz wrote:
> On 26.11.18 12:28, Kevin Wolf wrote:
>> bdrv_child_cb_inactivate() asserts that parents are already inactive
>> when children get inactivated. This precondition is necessary because
>> parents could still issue requests in their inactivation code.
>>
>> When block nodes are created individually with -blockdev, all of them
>> are monitor owned and will be returned by bdrv_next() in an undefined
>> order (in practice, in the order of their creation, which is usually
>> children before parents), which obviously fails the assertion.
>>
>> This patch fixes the ordering by skipping nodes with still active
>> parents in bdrv_inactivate_recurse() because we know that they will be
>> covered by recursion when the last active parent becomes inactive.
>>
>> Signed-off-by: Kevin Wolf <address@hidden>
>> ---
>>  block.c | 22 ++++++++++++++++++++++
>>  1 file changed, 22 insertions(+)
>>
>> diff --git a/block.c b/block.c
>> index 5ba3435f8f..0569275e31 100644
>> --- a/block.c
>> +++ b/block.c
>> @@ -4612,6 +4612,22 @@ void bdrv_invalidate_cache_all(Error **errp)
>>      }
>>  }
>>  
>> +static bool bdrv_has_active_bds_parent(BlockDriverState *bs)
>> +{
>> +    BdrvChild *parent;
>> +
>> +    QLIST_FOREACH(parent, &bs->parents, next_parent) {
>> +        if (parent->role->parent_is_bds) {
>> +            BlockDriverState *parent_bs = parent->opaque;
>> +            if (!(parent_bs->open_flags & BDRV_O_INACTIVE)) {
>> +                return true;
>> +            }
>> +        }
>> +    }
> 
> Now I see why you say this might make sense as a permission.
> 
>> +
>> +    return false;
>> +}
>> +
>>  static int bdrv_inactivate_recurse(BlockDriverState *bs,
>>                                     bool setting_flag)
>>  {
>> @@ -4622,6 +4638,12 @@ static int bdrv_inactivate_recurse(BlockDriverState *bs,
>>          return -ENOMEDIUM;
>>      }
>>  
>> +    /* Make sure that we don't inactivate a child before its parent.
>> +     * It will be covered by recursion from the yet active parent. */
>> +    if (bdrv_has_active_bds_parent(bs)) {

Another thing I found while testing: I think this should only return
early if setting_flag is true.  BDRV_O_INACTIVE will only be set on the
second pass.  If you skip nodes with active parents unconditionally,
bs->drv->bdrv_inactivate() will not be called for any non-root BDS
(because bdrv_has_active_bds_parent() returns true for all non-root
BDSs during the first pass).

>> +        return 0;
>> +    }
>> +
> 
> Hm.  Wouldn't it make more sense to always return early when there are
> any BDS parents?  Because if there are any BDS parents and none of them
> are active (anymore), then this child will have been inactivated
> already, and we can save ourselves the trouble of going through the rest
> of the function again.

Hm, well, no, at least not directly here.  (Otherwise
bdrv_inactivate_recurse() will not really recurse when it calls
itself...)  But bdrv_inactivate_all() could check that before invoking
this function.

Ah, but then it is possible to still run into the exact bug you're
fixing here, because a node might inactivate a child that has another
parent still (which is inactivated later).

But still, I think bdrv_inactivate_all() should skip non-root BDSs,
because calling bs->drv->bdrv_inactivate() and parent->role->inactivate()
multiple times cannot be right.

Max

> Do drivers support multiple calls to .bdrv_inactivate() at all?
> 
> Max
> 
>>      if (!setting_flag && bs->drv->bdrv_inactivate) {
>>          ret = bs->drv->bdrv_inactivate(bs);
>>          if (ret < 0) {
>>
> 
> 

