From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [RFC V6 24/33] qcow2: Integrate deduplication in qcow2_co_writev loop.
Date: Fri, 8 Feb 2013 11:43:24 +0100
User-agent: Mutt/1.5.21 (2010-09-15)

On Wed, Feb 06, 2013 at 01:31:57PM +0100, Benoît Canet wrote:
>      while (remaining_sectors != 0) {
>  
>          l2meta = NULL;
>  
>          trace_qcow2_writev_start_part(qemu_coroutine_self());
> +
> +        if (atomic_dedup_is_running && ds.nb_undedupable_sectors == 0) {
> +            /* Try to deduplicate as much clusters as possible */
> +            deduped_sectors_nr = qcow2_dedup(bs,
> +                                             &ds,
> +                                             sector_num,
> +                                             dedup_cluster_data,
> +                                             dedup_cluster_data_nr);
> +
> +            if (deduped_sectors_nr < 0) {
> +                goto fail;
> +            }
> +
> +            remaining_sectors -= deduped_sectors_nr;
> +            sector_num += deduped_sectors_nr;
> +            bytes_done += deduped_sectors_nr * 512;
> +
> +            /* no more data to write -> exit */
> +            if (remaining_sectors <= 0) {
> +                goto fail;
> +            }

goto fail?  Should this be "break" so we get ret = 0?

> +
> +            /* if we deduped something trace it */
> +            if (deduped_sectors_nr) {
> +                trace_qcow2_writev_done_part(qemu_coroutine_self(),
> +                                             deduped_sectors_nr);
> +                trace_qcow2_writev_start_part(qemu_coroutine_self());
> +            }
> +        }
> +
>          index_in_cluster = sector_num & (s->cluster_sectors - 1);
> -        n_end = index_in_cluster + remaining_sectors;
> +        n_end = atomic_dedup_is_running &&
> +                ds.nb_undedupable_sectors < remaining_sectors ?
> +                index_in_cluster + ds.nb_undedupable_sectors :
> +                index_in_cluster + remaining_sectors;

I find this expression hard to understand.

We only get here if ds.nb_undedupable_sectors > 0.  In other words, we
tried to dedup but failed, so we must write data into the image file.

Can we ensure that ds.nb_undedupable_sectors is limited to at most
remaining_sectors?  Then the expression becomes clearer:

if (atomic_dedup_is_running) {
    n_end = index_in_cluster + ds.nb_undedupable_sectors;
} else {
    n_end = index_in_cluster + remaining_sectors;
}

> +
>          if (s->crypt_method &&
>              n_end > QCOW_MAX_CRYPT_CLUSTERS * s->cluster_sectors) {
>              n_end = QCOW_MAX_CRYPT_CLUSTERS * s->cluster_sectors;
> @@ -852,6 +915,24 @@ static coroutine_fn int qcow2_co_writev(BlockDriverState *bs,
>                  cur_nr_sectors * 512);
>          }
>  
> +        /* Write the non duplicated clusters hashes to disk */
> +        if (atomic_dedup_is_running) {
> +            int count = cur_nr_sectors / s->cluster_sectors;
> +            int has_ending = ((cluster_offset >> 9) + index_in_cluster +
> +                             cur_nr_sectors) & (s->cluster_sectors - 1);
> +            count = index_in_cluster ? count + 1 : count;

An if statement is easier to read:

if (index_in_cluster) {
    count++;
}

> +            count = has_ending ? count + 1 : count;

Same here.

> +            ret = qcow2_dedup_store_new_hashes(bs,
> +                                               &ds,
> +                                               count,
> +                                               sector_num,
> +                                               (cluster_offset >> 9));

Is it safe to store the new hashes before the data itself is written?
What happens if there is a power failure before the data is written?

> +            if (ret < 0) {
> +                goto fail;
> +            }
> +        }
> +
> +        BLKDBG_EVENT(bs->file, BLKDBG_WRITE_AIO);

Merge conflict?  BLKDBG_EVENT() is already called below.


