From: John Snow
Subject: Re: [Qemu-block] [PATCH v22 19/30] qcow2: add persistent dirty bitmaps support
Date: Fri, 30 Jun 2017 13:58:55 -0400
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.2.0


On 06/30/2017 01:47 PM, Eric Blake wrote:
> On 06/29/2017 09:23 PM, Max Reitz wrote:
>> On 2017-06-30 04:18, Eric Blake wrote:
>>> On 06/28/2017 07:05 AM, Vladimir Sementsov-Ogievskiy wrote:
>>>> Store persistent dirty bitmaps in qcow2 image.
>>>>
>>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <address@hidden>
>>>> Reviewed-by: Max Reitz <address@hidden>
>>>> ---
> 
>>>
>>> This grabs the size (currently in sectors, although I plan to fix it to
>>> be in bytes)...
>>>
>>>> +    const char *bm_name = bdrv_dirty_bitmap_name(bitmap);
>>>> +    uint8_t *buf = NULL;
>>>> +    BdrvDirtyBitmapIter *dbi;
>>>> +    uint64_t *tb;
>>>> +    uint64_t tb_size =
>>>> +            size_to_clusters(s,
>>>> +                bdrv_dirty_bitmap_serialization_size(bitmap, 0, bm_size));
>>>
>>> ...then finds out how many bytes are required to serialize the entire
>>> image, where bm_size should be the same as before...
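For concreteness, a tiny standalone sketch of that sizing arithmetic (the
64 KiB cluster size, 64 KiB granularity and 10 GiB disk are made-up example
values; the real code rounds via bdrv_dirty_bitmap_serialization_size() and
size_to_clusters(), which this only approximates):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t cluster_size = 64 * 1024;            /* qcow2 cluster size in bytes  */
    uint64_t granularity  = 64 * 1024;            /* bytes of disk per bitmap bit */
    uint64_t disk_size    = UINT64_C(10) << 30;   /* 10 GiB virtual disk          */

    uint64_t bm_size   = disk_size / 512;         /* bitmap size in sectors       */
    uint64_t bits      = disk_size / granularity; /* 163840 dirty-tracking bits   */
    uint64_t ser_bytes = (bits + 7) / 8;          /* ~20 KiB of serialized bitmap */
    uint64_t tb_size   = (ser_bytes + cluster_size - 1) / cluster_size; /* 1 cluster */

    printf("bm_size=%" PRIu64 " sectors -> %" PRIu64 " bits -> %" PRIu64
           " bytes -> %" PRIu64 " cluster(s)\n", bm_size, bits, ser_bytes, tb_size);
    return 0;
}
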
>>>
>>>> +    while ((sector = bdrv_dirty_iter_next(dbi)) != -1) {
>>>> +        uint64_t cluster = sector / sbc;
>>>> +        uint64_t end, write_size;
>>>> +        int64_t off;
>>>> +
>>>> +        sector = cluster * sbc;
>>>> +        end = MIN(bm_size, sector + sbc);
>>>> +        write_size =
>>>> +            bdrv_dirty_bitmap_serialization_size(bitmap, sector, end - sector);
>>>
>>> But here, rather than tackling the entire image at once, you are
>>> subdividing your queries along arbitrary lines. But nowhere do I see a
>>> call to bdrv_dirty_bitmap_serialization_align() to make sure your
>>> end-sector value is properly aligned; if it is not aligned, you will
>>> trigger an assertion failure here...
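Something along these lines is what I would expect (purely a sketch of one
possible guard in the context of the quoted loop, not code from the patch; it
assumes bdrv_dirty_bitmap_serialization_align() returns the required chunk
alignment in the same sector units used by the loop):

/* Sketch only: check each chunk's length against the serialization
 * alignment before serializing it. */
uint64_t align = bdrv_dirty_bitmap_serialization_align(bitmap);
uint64_t end   = MIN(bm_size, sector + sbc);

/* Either the chunk length is a multiple of the required alignment,
 * or this is the final, possibly short, chunk ending at bm_size. */
assert(QEMU_IS_ALIGNED(end - sector, align) || end == bm_size);

write_size = bdrv_dirty_bitmap_serialization_size(bitmap, sector, end - sector);
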
>>
>> It's 4:21 am here, so I cannot claim to be right, but if I remember
>> correctly, it will automatically be aligned because sbc is the number
>> of bits (and thus sectors) a bitmap cluster covers.
> 
> Okay, I re-read the spec.  First thing I noticed: we have a potential
> conflict if an image is allowed to be resized:
> 
> "All stored bitmaps are related to the virtual disk stored in the same
> image, so each bitmap size is equal to the virtual disk size."
> 
> If you resize an image, does the bitmap size have to be adjusted as
> well?  What if you create one bitmap, then take an internal snapshot,
> then resize?  Or do we declare that (at least for now) the presence of a
> bitmap is incompatible with the use of an internal snapshot?
> 
> Conversely, we state that:
> 
> 
> "Structure of a bitmap directory entry:
> ...
>          8 - 11:    bitmap_table_size
>                     Number of entries in the bitmap table of the bitmap."
> 

This is the number of bitmaps stored in the qcow2, not the size of one
particular bitmap.

> Since a bitmap therefore tracks its own size, I think the earlier
> statement that all bitmap sizes are equal to the virtual disk size is
> too strict (there seems to be no technical reason why a bitmap can't
> have a different size than the image).
> 
> 
> But, having read that, you are correct that we are subdividing our
> bitmaps according to what fits in a qcow2 cluster, and the smallest
> qcow2 cluster we can come up with is 512 bytes (or 4k bits of the
> bitmap).  Checking hbitmap.c, we are merely asserting that our alignment
> is always a multiple of 64 << hb->granularity.  But since we are
> partitioning a cluster at a time, our minimum alignment will be 512 <<
> hb->granularity, which is always aligned.
> 
> So all I need to do is add an assertion.
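To make that divisibility argument concrete (illustrative sketch only; the
variable names and the example granularity are mine, not hbitmap.c's or the
patch's):

/* Why per-cluster chunks satisfy hbitmap's serialization alignment. */
#include <assert.h>
#include <stdint.h>

int main(void)
{
    unsigned granularity = 7;                            /* example hb->granularity */
    uint64_t min_cluster_bytes = 512;                    /* smallest qcow2 cluster  */
    uint64_t bits_per_cluster = min_cluster_bytes * 8;   /* 4096 bitmap bits        */

    uint64_t chunk = bits_per_cluster << granularity;    /* span of one chunk       */
    uint64_t required = UINT64_C(64) << granularity;     /* hbitmap alignment       */

    /* 4096 is a multiple of 64, so any cluster-sized chunk stays aligned. */
    assert(chunk % required == 0);
    return 0;
}
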
> 
>> I'm for fixing it later. I would have announced the series "applied" an
>> hour ago if I hadn't had to bisect iotest 055 breakage...
> 
> I'm working it into my v4 posting of dirty-bitmap cleanups.
> 


