From: Fam Zheng
Subject: [Qemu-devel] [PULL 11/22] throttle-groups: only start one coroutine from drained_begin
Date: Fri, 26 May 2017 15:52:35 +0800
From: Paolo Bonzini <address@hidden>
Starting all waiting coroutines from bdrv_drain_all is unnecessary;
throttle_group_co_io_limits_intercept calls schedule_next_request as
soon as the coroutine restarts, which in turn will restart the next
request if possible.
If we only start the first request and let the coroutines dance from
there the code is simpler and there is more reuse between
throttle_group_config, throttle_group_restart_blk and timer_cb. The
next patch will benefit from this.
We also stop accessing the blkp->throttled_reqs CoQueues from
throttle_group_restart_blk when there is no attached throttling
group. This worked but was not pretty.
The only thing that can interrupt the dance is the QEMU_CLOCK_VIRTUAL
timer when switching from one block device to the next, because the
timer is set to "now + 1" but QEMU_CLOCK_VIRTUAL might not be running.
Set that timer to point in the present ("now") rather than the future
and things work.
Reviewed-by: Alberto Garcia <address@hidden>
Reviewed-by: Stefan Hajnoczi <address@hidden>
Signed-off-by: Paolo Bonzini <address@hidden>
Message-Id: <address@hidden>
Signed-off-by: Fam Zheng <address@hidden>
---
block/throttle-groups.c | 45 +++++++++++++++++++++++++--------------------
1 file changed, 25 insertions(+), 20 deletions(-)
diff --git a/block/throttle-groups.c b/block/throttle-groups.c
index 69bfbd4..85169ec 100644
--- a/block/throttle-groups.c
+++ b/block/throttle-groups.c
@@ -292,7 +292,7 @@ static void schedule_next_request(BlockBackend *blk, bool is_write)
} else {
ThrottleTimers *tt = &blk_get_public(token)->throttle_timers;
int64_t now = qemu_clock_get_ns(tt->clock_type);
- timer_mod(tt->timers[is_write], now + 1);
+ timer_mod(tt->timers[is_write], now);
tg->any_timer_armed[is_write] = true;
}
tg->tokens[is_write] = token;
@@ -340,15 +340,32 @@ void coroutine_fn throttle_group_co_io_limits_intercept(BlockBackend *blk,
qemu_mutex_unlock(&tg->lock);
}
+static void throttle_group_restart_queue(BlockBackend *blk, bool is_write)
+{
+ BlockBackendPublic *blkp = blk_get_public(blk);
+ ThrottleGroup *tg = container_of(blkp->throttle_state, ThrottleGroup, ts);
+ bool empty_queue;
+
+ aio_context_acquire(blk_get_aio_context(blk));
+ empty_queue = !qemu_co_enter_next(&blkp->throttled_reqs[is_write]);
+ aio_context_release(blk_get_aio_context(blk));
+
+ /* If the request queue was empty then we have to take care of
+ * scheduling the next one */
+ if (empty_queue) {
+ qemu_mutex_lock(&tg->lock);
+ schedule_next_request(blk, is_write);
+ qemu_mutex_unlock(&tg->lock);
+ }
+}
+
void throttle_group_restart_blk(BlockBackend *blk)
{
BlockBackendPublic *blkp = blk_get_public(blk);
- int i;
- for (i = 0; i < 2; i++) {
- while (qemu_co_enter_next(&blkp->throttled_reqs[i])) {
- ;
- }
+ if (blkp->throttle_state) {
+ throttle_group_restart_queue(blk, 0);
+ throttle_group_restart_queue(blk, 1);
}
}
@@ -376,8 +393,7 @@ void throttle_group_config(BlockBackend *blk, ThrottleConfig *cfg)
throttle_config(ts, tt, cfg);
qemu_mutex_unlock(&tg->lock);
- qemu_co_enter_next(&blkp->throttled_reqs[0]);
- qemu_co_enter_next(&blkp->throttled_reqs[1]);
+ throttle_group_restart_blk(blk);
}
/* Get the throttle configuration from a particular group. Similar to
@@ -408,7 +424,6 @@ static void timer_cb(BlockBackend *blk, bool is_write)
BlockBackendPublic *blkp = blk_get_public(blk);
ThrottleState *ts = blkp->throttle_state;
ThrottleGroup *tg = container_of(ts, ThrottleGroup, ts);
- bool empty_queue;
/* The timer has just been fired, so we can update the flag */
qemu_mutex_lock(&tg->lock);
@@ -416,17 +431,7 @@ static void timer_cb(BlockBackend *blk, bool is_write)
qemu_mutex_unlock(&tg->lock);
/* Run the request that was waiting for this timer */
- aio_context_acquire(blk_get_aio_context(blk));
- empty_queue = !qemu_co_enter_next(&blkp->throttled_reqs[is_write]);
- aio_context_release(blk_get_aio_context(blk));
-
- /* If the request queue was empty then we have to take care of
- * scheduling the next one */
- if (empty_queue) {
- qemu_mutex_lock(&tg->lock);
- schedule_next_request(blk, is_write);
- qemu_mutex_unlock(&tg->lock);
- }
+ throttle_group_restart_queue(blk, is_write);
}
static void read_timer_cb(void *opaque)
--
2.9.4