From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [PATCH v2] throttle-groups: update tg->any_timer_armed[] on detach
Date: Wed, 20 Sep 2017 15:16:43 +0100
User-agent: Mutt/1.8.3 (2017-05-23)

On Wed, Sep 20, 2017 at 02:17:51PM +0200, Alberto Garcia wrote:
> On Wed 20 Sep 2017 01:39:02 PM CEST, Manos Pitsidianakis wrote:
> >>>  void throttle_group_detach_aio_context(ThrottleGroupMember *tgm)
> >>>  {
> >>>      ThrottleTimers *tt = &tgm->throttle_timers;
> >>> +    ThrottleGroup *tg = container_of(tgm->throttle_state, ThrottleGroup, ts);
> >>> +
> >>> +    qemu_mutex_lock(&tg->lock);
> >>> +    if (timer_pending(tt->timers[0])) {
> >>> +        tg->any_timer_armed[0] = false;
> >>> +    }
> >>> +    if (timer_pending(tt->timers[1])) {
> >>> +        tg->any_timer_armed[1] = false;
> >>> +    }
> >>> +    qemu_mutex_unlock(&tg->lock);
> >>> +
> >>>      throttle_timers_detach_aio_context(tt);
> >>>      tgm->aio_context = NULL;
> >>>  }
> >>
> >>I'm sorry that I didn't notice this in my previous e-mail, but after
> >>this call you might have destroyed the timer that was set for that
> >>throttling group, so if there are requests waiting in the queue it can
> >>happen that no one ever wakes them up.
> >>
> >>I think that the queue needs to be restarted after this, probably
> >>after the context has been reattached (or right after detaching it,
> >>but then what happens if you try to restart the queue while
> >>aio_context is NULL?).
> >
> > aio_co_enter in the restart queue function requires that aio_context
> > is non-NULL. Perhaps calling throttle_group_restart_tgm in
> > throttle_group_attach_aio_context will suffice.
> 
> But can we guarantee that everything is safe between the _detach() and
> _attach() calls?
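
For concreteness, Manos' suggestion above would amount to something like this
(just a sketch; the throttle_group_restart_tgm() call at the end is the
assumption, the rest follows the current shape of
throttle_group_attach_aio_context()):

void throttle_group_attach_aio_context(ThrottleGroupMember *tgm,
                                       AioContext *new_context)
{
    ThrottleTimers *tt = &tgm->throttle_timers;

    throttle_timers_attach_aio_context(tt, new_context);
    tgm->aio_context = new_context;

    /* Assumed addition: kick the queue once the new context is in place,
     * so requests that lost their timer during detach get woken up.
     */
    throttle_group_restart_tgm(tgm);
}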

Here is a second patch that I didn't send because I was unsure whether it's
needed.

The idea is to move the throttle_group_detach/attach_aio_context() calls into
bdrv_set_aio_context() so they become part of the bdrv_drained_begin/end()
region.

The guarantees we have inside the drained region:

1. Throttling has been temporarily disabled.
2. throttle_group_restart_tgm() has been called to kick throttled reqs.
3. All requests are completed.
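
For context, the call sequence inside bdrv_set_aio_context() is roughly the
following (a simplified sketch of the ordering only, not the exact code in
the tree):

void bdrv_set_aio_context(BlockDriverState *bs, AioContext *new_context)
{
    bdrv_parent_drained_begin(bs);  /* blk_root_drained_begin(): disables
                                     * throttling and kicks throttled reqs */
    bdrv_drain(bs);                 /* all requests complete here */

    bdrv_detach_aio_context(bs);
    aio_context_acquire(new_context);
    bdrv_attach_aio_context(bs, new_context);
    bdrv_parent_drained_end(bs);    /* blk_root_drained_end(): with the patch
                                     * below, detach/attach the throttle
                                     * timers here */
    aio_context_release(new_context);
}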

This is a nice, controlled environment in which to do the detach/attach.  That
said, Berto's point still stands: what about other ThrottleGroupMembers that
have throttled requests queued?

The main reason I didn't publish this patch is that Manos' "block: remove
legacy I/O throttling" series ought to remove this code soon anyway.

diff --git a/block/block-backend.c b/block/block-backend.c
index 45d9101be3..624eb3dc15 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -1747,10 +1747,5 @@ void blk_set_aio_context(BlockBackend *blk, AioContext *new_context)
-    ThrottleGroupMember *tgm = &blk->public.throttle_group_member;
 
     if (bs) {
-        if (tgm->throttle_state) {
-            throttle_group_detach_aio_context(tgm);
-            throttle_group_attach_aio_context(tgm, new_context);
-        }
         bdrv_set_aio_context(bs, new_context);
     }
 }
@@ -2029,6 +2025,14 @@ static void blk_root_drained_end(BdrvChild *child)
     BlockBackend *blk = child->opaque;
+    ThrottleGroupMember *tgm = &blk->public.throttle_group_member;
     assert(blk->quiesce_counter);
 
+    if (tgm->throttle_state) {
+        AioContext *new_context = bdrv_get_aio_context(blk_bs(blk));
+
+        throttle_group_detach_aio_context(tgm);
+        throttle_group_attach_aio_context(tgm, new_context);
+    }
+
     assert(blk->public.throttle_group_member.io_limits_disabled);
     atomic_dec(&blk->public.throttle_group_member.io_limits_disabled);


