qemu-devel

Re: [Qemu-devel] [PATCH 5/8] qcow2: Return real error in qcow2_snapshot_goto


From: Kevin Wolf
Subject: Re: [Qemu-devel] [PATCH 5/8] qcow2: Return real error in qcow2_snapshot_goto
Date: Fri, 18 Nov 2011 17:26:12 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:7.0) Gecko/20110927 Thunderbird/7.0

On 18.11.2011 17:08, Stefan Hajnoczi wrote:
> On Thu, Nov 17, 2011 at 3:13 PM, Kevin Wolf <address@hidden> wrote:
>> Signed-off-by: Kevin Wolf <address@hidden>
>> ---
>>  block/qcow2-snapshot.c |   50 +++++++++++++++++++++++++++++++++++++----------
>>  1 files changed, 39 insertions(+), 11 deletions(-)
>>
>> diff --git a/block/qcow2-snapshot.c b/block/qcow2-snapshot.c
>> index 066d56b..9f6647f 100644
>> --- a/block/qcow2-snapshot.c
>> +++ b/block/qcow2-snapshot.c
>> @@ -392,17 +392,32 @@ int qcow2_snapshot_goto(BlockDriverState *bs, const char *snapshot_id)
>>     QCowSnapshot *sn;
>>     int i, snapshot_index;
>>     int cur_l1_bytes, sn_l1_bytes;
>> +    int ret;
>>
>> +    /* Search the snapshot */
>>     snapshot_index = find_snapshot_by_id_or_name(bs, snapshot_id);
>> -    if (snapshot_index < 0)
>> +    if (snapshot_index < 0) {
>>         return -ENOENT;
>> +    }
>>     sn = &s->snapshots[snapshot_index];
>>
>> -    if (qcow2_update_snapshot_refcount(bs, s->l1_table_offset, s->l1_size, -1) < 0)
>> +    /* Decrease refcount of clusters of current L1 table.
>> +     * FIXME This is too early! */
>> +    ret = qcow2_update_snapshot_refcount(bs, s->l1_table_offset,
>> +                                         s->l1_size, -1);
> 
> Just following along your comments:
> 
> Here we may free clusters.  Should any of the following intermediate
> steps fail, we're left without a backup plan ;).
> 
> If this function fails we're in trouble.  We still have the l1 table
> in memory but refcounts are broken, especially if we execute
> qcow2_alloc_clusters() and freed clusters get reallocated.
> 
> So if this function fails I think the image is in a dangerous state.
> It may not be possible to recover data referenced by the current l1
> table.

Correct. This patch doesn't do anything other than make this clear (and
fix the return codes, of course). The next one fixes the order.

Initially, I had both of them in the same patch, but I found it hard to
understand because fixing the order is really hard stuff. So I decided
to do the boring stuff in this patch so that it doesn't distract
reviewers when they try to understand the hard part.

I guess I should have mentioned this in the commit log.

>> +    if (ret < 0) {
>>         goto fail;
>> +    }
>>
>> -    if (qcow2_grow_l1_table(bs, sn->l1_size, true) < 0)
>> +    /*
>> +     * Make sure that the current L1 table is big enough to contain the whole
>> +     * L1 table of the snapshot. If the snapshot L1 table is smaller, the
>> +     * current one must be padded with zeros.
>> +     */
>> +    ret = qcow2_grow_l1_table(bs, sn->l1_size, true);
>> +    if (ret < 0) {
>>         goto fail;
>> +    }
>>
>>     cur_l1_bytes = s->l1_size * sizeof(uint64_t);
>>     sn_l1_bytes = sn->l1_size * sizeof(uint64_t);
>> @@ -411,19 +426,31 @@ int qcow2_snapshot_goto(BlockDriverState *bs, const char *snapshot_id)
>>         memset(s->l1_table + sn->l1_size, 0, cur_l1_bytes - sn_l1_bytes);
>>     }
>>
>> -    /* copy the snapshot l1 table to the current l1 table */
>> -    if (bdrv_pread(bs->file, sn->l1_table_offset,
>> -                   s->l1_table, sn_l1_bytes) < 0)
>> +    /*
>> +     * Copy the snapshot L1 table to the current L1 table.
>> +     *
>> +     * Before overwriting the old current L1 table on disk, make sure to
>> +     * increase all refcounts for the clusters referenced by the new one.
>> +     */
>> +    ret = bdrv_pread(bs->file, sn->l1_table_offset, s->l1_table, sn_l1_bytes);
>> +    if (ret < 0) {
>>         goto fail;
>> -    if (bdrv_pwrite_sync(bs->file, s->l1_table_offset,
>> -                    s->l1_table, cur_l1_bytes) < 0)
>> +    }
>> +
>> +    ret = bdrv_pwrite(bs->file, s->l1_table_offset, s->l1_table, cur_l1_bytes);
> 
> Now this function does not issue an explicit bdrv_flush() anymore.  Is
> this really okay?

No. The next patch reintroduces the sync. Yes, I should learn how to
split patches properly. :-)

Kevin


