From: Eric Blake
Subject: Re: [Qemu-devel] [PATCH 7/9] mirror: allow to save buffer for QEMUIOVector in MirrorOp
Date: Tue, 14 Jun 2016 22:11:17 -0600
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.3.0

On 06/14/2016 09:25 AM, Denis V. Lunev wrote:

In the subject: 'allow to save buffer' is not idiomatic English; better
would be 'allow saving the buffer' or simply 'save the buffer'.

> Properly cook MirrorOp initialization/deinitialization. The field is not
> yet used actually.

Another "what" but not "why" - expanding the commit message to mention
"why" makes it easier to review.

> 
> Signed-off-by: Denis V. Lunev <address@hidden>
> Reviewed-by: Vladimir Sementsov-Ogievskiy <address@hidden>
> CC: Stefan Hajnoczi <address@hidden>
> CC: Fam Zheng <address@hidden>
> CC: Kevin Wolf <address@hidden>
> CC: Max Reitz <address@hidden>
> CC: Jeff Cody <address@hidden>
> CC: Eric Blake <address@hidden>
> ---
>  block/mirror.c | 32 +++++++++++++++++++-------------
>  1 file changed, 19 insertions(+), 13 deletions(-)
> 
> diff --git a/block/mirror.c b/block/mirror.c
> index d8be80a..7471211 100644
> --- a/block/mirror.c
> +++ b/block/mirror.c
> @@ -73,6 +73,7 @@ typedef struct MirrorOp {
>      QEMUIOVector qiov;
>      int64_t sector_num;
>      int nb_sectors;
> +    void *buf;
>  } MirrorOp;
>  
>  static BlockErrorAction mirror_error_action(MirrorBlockJob *s, bool read,
> @@ -100,24 +101,28 @@ static void mirror_iteration_done(MirrorOp *op, int ret)
>      s->in_flight--;
>      s->sectors_in_flight -= op->nb_sectors;
>      iov = op->qiov.iov;
> -    for (i = 0; i < op->qiov.niov; i++) {
> -        MirrorBuffer *buf = (MirrorBuffer *) iov[i].iov_base;
> -        QSIMPLEQ_INSERT_TAIL(&s->buf_free, buf, next);
> -        s->buf_free_count++;
> -    }
>  
> -    sectors_per_chunk = s->granularity >> BDRV_SECTOR_BITS;
> -    chunk_num = op->sector_num / sectors_per_chunk;
> -    nb_chunks = DIV_ROUND_UP(op->nb_sectors, sectors_per_chunk);
> -    bitmap_clear(s->in_flight_bitmap, chunk_num, nb_chunks);
> -    if (ret >= 0) {
> -        if (s->cow_bitmap) {
> -            bitmap_set(s->cow_bitmap, chunk_num, nb_chunks);
> +    if (op->buf == NULL) {
> +        for (i = 0; i < op->qiov.niov; i++) {
> +            MirrorBuffer *buf = (MirrorBuffer *) iov[i].iov_base;
> +            QSIMPLEQ_INSERT_TAIL(&s->buf_free, buf, next);
> +            s->buf_free_count++;
> +        }
> +
> +        sectors_per_chunk = s->granularity >> BDRV_SECTOR_BITS;
> +        chunk_num = op->sector_num / sectors_per_chunk;
> +        nb_chunks = DIV_ROUND_UP(op->nb_sectors, sectors_per_chunk);
> +        bitmap_clear(s->in_flight_bitmap, chunk_num, nb_chunks);

I still think it might be smarter to fix bitmap operations to work on
byte inputs (still sectors, or rather granularity chunks, under the
hood, but no need to make users scale things just to have it scaled again).
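
Just as a rough sketch of what I mean (mirror_clear_in_flight is a
made-up helper name, not existing code): the caller would pass a byte
offset and length, and the scaling to granularity chunks would live in
one place:

    /* Hypothetical helper: takes a byte offset/length and scales to
     * granularity-sized chunks internally, so callers never have to
     * convert to sectors or chunk indexes themselves. */
    static void mirror_clear_in_flight(MirrorBlockJob *s,
                                       int64_t offset, uint64_t bytes)
    {
        int64_t chunk_num = offset / s->granularity;
        int nb_chunks = DIV_ROUND_UP(bytes, s->granularity);

        bitmap_clear(s->in_flight_bitmap, chunk_num, nb_chunks);
    }

mirror_iteration_done() could then call something like
mirror_clear_in_flight(s, op->sector_num * BDRV_SECTOR_SIZE,
(uint64_t)op->nb_sectors * BDRV_SECTOR_SIZE), and the
sectors_per_chunk arithmetic drops out of the caller entirely.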

> +        if (ret >= 0) {
> +            if (s->cow_bitmap) {
> +                bitmap_set(s->cow_bitmap, chunk_num, nb_chunks);
> +            }
> +            s->common.offset += (uint64_t)op->nb_sectors * BDRV_SECTOR_SIZE;
>          }
> -        s->common.offset += (uint64_t)op->nb_sectors * BDRV_SECTOR_SIZE;
>      }
>  
>      qemu_iovec_destroy(&op->qiov);
> +    g_free(op->buf);
>      g_free(op);
>  
>      if (s->waiting_for_io) {
> @@ -255,6 +260,7 @@ static int mirror_do_read(MirrorBlockJob *s, int64_t sector_num,
>      op->s = s;
>      op->sector_num = sector_num;
>      op->nb_sectors = nb_sectors;
> +    op->buf = NULL;
>  
>      /* Now make a QEMUIOVector taking enough granularity-sized chunks
>       * from s->buf_free.
> 

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org

