From: Paolo Bonzini
Subject: Re: [Qemu-block] [PATCH v4] migration/block: move bdrv_is_allocated() into bb's AioContext
Date: Tue, 9 May 2017 18:01:48 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.0
On 08/05/2017 22:54, Stefan Hajnoczi wrote:
> On Fri, May 05, 2017 at 04:03:49PM +0800, address@hidden wrote:
>> From: Lidong Chen <address@hidden>
>>
>> When block migration runs at high speed, mig_save_device_bulk holds the
>> BQL and invokes bdrv_is_allocated frequently. This patch moves
>> bdrv_is_allocated() into bb's AioContext, so it executes without
>> blocking other I/O activity.
>>
>> Signed-off-by: Lidong Chen <address@hidden>
>> ---
>> v4 changelog:
>> Use the prototype code written by Stefan and fix some bugs.
>> Move bdrv_is_allocated() into bb's AioContext.
>> ---
>> migration/block.c | 48 +++++++++++++++++++++++++++++++++++++++---------
>> 1 file changed, 39 insertions(+), 9 deletions(-)
>
> Added Paolo because he's been reworking AioContext and locking.
>
> The goal of this patch is to avoid waiting for bdrv_is_allocated() to
> complete while holding locks. Do bdrv_is_allocated() in the AioContext
> so event processing continues after yield.
Since raw_co_get_block_status is coroutine-based, I think you should
instead use the thread pool (thread_pool_submit_co) in
block/file-posix.c. This way the lseek is done in a separate thread.
The problem is that find_allocation() is not atomic, so you need either
a CoMutex around raw_co_get_block_status's call to
thread_pool_submit_co, or a QemuMutex in find_allocation itself.
Thanks,
Paolo
>>
>> diff --git a/migration/block.c b/migration/block.c
>> index 060087f..c871361 100644
>> --- a/migration/block.c
>> +++ b/migration/block.c
>> @@ -263,6 +263,30 @@ static void blk_mig_read_cb(void *opaque, int ret)
>> blk_mig_unlock();
>> }
>>
>> +typedef struct {
>> + int64_t *total_sectors;
>> + int64_t *cur_sector;
>> + BlockBackend *bb;
>> + QemuEvent event;
>> +} MigNextAllocatedClusterData;
>> +
>> +static void coroutine_fn mig_next_allocated_cluster(void *opaque)
>> +{
>> + MigNextAllocatedClusterData *data = opaque;
>> + int nr_sectors;
>> +
>> + /* Skip unallocated sectors; intentionally treats failure as
>> + * an allocated sector */
>> + while (*data->cur_sector < *data->total_sectors &&
>> + !bdrv_is_allocated(blk_bs(data->bb), *data->cur_sector,
>> + MAX_IS_ALLOCATED_SEARCH, &nr_sectors)) {
>> + *data->cur_sector += nr_sectors;
>> + }
>> +
>> + bdrv_dec_in_flight(blk_bs(data->bb));
>> + qemu_event_set(&data->event);
>> +}
>> +
>> /* Called with no lock taken. */
>>
>> static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
>> @@ -274,17 +298,23 @@ static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
>> int nr_sectors;
>>
>> if (bmds->shared_base) {
>> + AioContext *bb_ctx;
>> + Coroutine *co;
>> + MigNextAllocatedClusterData data = {
>> + .cur_sector = &cur_sector,
>> + .total_sectors = &total_sectors,
>> + .bb = bb,
>> + };
>> + qemu_event_init(&data.event, false);
>> +
>> qemu_mutex_lock_iothread();
>> - aio_context_acquire(blk_get_aio_context(bb));
>> - /* Skip unallocated sectors; intentionally treats failure as
>> - * an allocated sector */
>> - while (cur_sector < total_sectors &&
>> - !bdrv_is_allocated(blk_bs(bb), cur_sector,
>> - MAX_IS_ALLOCATED_SEARCH, &nr_sectors)) {
>> - cur_sector += nr_sectors;
>> - }
>> - aio_context_release(blk_get_aio_context(bb));
>> + bdrv_inc_in_flight(blk_bs(bb));
>
> Please add a comment explaining why bdrv_inc_in_flight() is invoked.
>
>> + bb_ctx = blk_get_aio_context(bb);
>> + co = qemu_coroutine_create(mig_next_allocated_cluster, &data);
>> + aio_co_schedule(bb_ctx, co);
>> qemu_mutex_unlock_iothread();
>> +
>> + qemu_event_wait(&data.event);
>> }
>>
>> if (cur_sector >= total_sectors) {
>> --
>> 1.8.3.1
>>
>>