From: Kevin Wolf
Subject: Re: [Qemu-devel] [BUG] qemu-1.1.2 [FIXED-BY] qcow2: Fix avail_sectors in cluster allocation code
Date: Wed, 23 Jan 2013 12:43:47 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:13.0) Gecko/20120605 Thunderbird/13.0

On 22.01.2013 15:14, Kevin Wolf wrote:
> The series of requests needed to get into this state is somewhat tricky,
> as it requires a specific physical ordering of clusters in the image file:
> 
> Host cluster h:     Guest cluster g is mapped to here
> Host cluster h + 1: Free, can be allocated for not yet mapped g + 1
> Host cluster h + 2: Guest cluster g + 2 is mapped to here
> 
> I can get there with the following command on a fresh qcow2 image (64k cluster size):
> 
> ./qemu-io
>   -c 'write -P 0x66 0 64k'
>   -c 'write 64k 64k'
>   -c 'write -P 0x88 128k 64k'
>   -c 'discard 64k 64k'
>   -c 'write -P 0x77 0 160k'
>   /tmp/test.qcow2
> 
> Now I get an additional COW for the area from 96k to 128k. However, this
> alone doesn't corrupt an image - a copy on write by itself is harmless
> because it only rewrites the data that is already there.
> 
> The two things I'm going to check next are:
> 
> a) What happens with concurrent requests now? Are requests for the
>    same area still correctly serialised?
> 
> b) Can I modify the parameters so that copy_sectors() works with
>    values it wasn't designed for, so that data is copied to the wrong
>    place or something like that? m->nb_available is used as an
>    argument to it, so this certainly seems realistic.
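
To make a) a bit more concrete, here is a rough sketch (made-up names, not
the actual qcow2 code) of the kind of check I mean: in-flight cluster
allocations are tracked in a list, and a new request touching one of their
ranges has to wait until that allocation, including its COW, has completed.
The protected range ends at guest_start + nb_available, so if the
nb_available-style value describes the wrong range, overlapping requests
are no longer serialised against the COW.

/*
 * Made-up sketch, not the actual qcow2 code: roughly the kind of
 * serialisation check that question a) is about.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct in_flight_alloc {
    uint64_t guest_start;           /* first guest sector covered        */
    int      nb_available;          /* sectors covered, incl. COW tail   */
    struct in_flight_alloc *next;
};

static bool ranges_overlap(uint64_t a, uint64_t na, uint64_t b, uint64_t nb)
{
    return a < b + nb && b < a + na;
}

/* Returns the in-flight allocation the new request must wait for, or NULL. */
static struct in_flight_alloc *
find_dependency(struct in_flight_alloc *list, uint64_t sector, int nb_sectors)
{
    for (struct in_flight_alloc *m = list; m != NULL; m = m->next) {
        if (ranges_overlap(sector, nb_sectors,
                           m->guest_start, m->nb_available)) {
            return m;
        }
    }
    return NULL;
}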

I can reliably reproduce a bug with a) at least. It requires some
backports to qemu-io in order to get more control of the timing of AIO
requests, but then it works like this:

write -P 0x66 0 320k
write 320k 128k
write -P 0x88 448k 128k
discard 320k 128k
aio_flush

break cow_write A
aio_write -P 0x77 0 480k
wait_break A
write -P 0x99 480k 32k
resume A
aio_flush

read -P 0x77 0 480k
read -P 0x99 480k 32k
read -P 0x88 512k 64k

What happens is that the write of 0x99 to 480k doesn't detect an overlap
with the running AIO request (because, obviously, it doesn't overlap), but
the COW actually takes place in the wrong cluster and is therefore
unprotected. Stop the AIO request between the COW read and the COW write,
write 0x99 to that area in the meantime, and the misplaced COW will
overwrite it, i.e. corrupt the image.
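
To illustrate the effect (not the qcow2 internals), here is a stand-alone
toy program that replays the damaging part of the sequence in memory; all
names in it are made up. The COW reads the old data, the concurrent write
that the overlap check missed lands in the same area, and the COW write
then puts the stale data back:

/*
 * Stand-alone toy model of the race above, not QEMU code.  One host
 * cluster already contains 0x88.  A misplaced COW first reads it, then
 * a concurrent guest write (which the overlap check did not catch)
 * stores 0x99 there, and finally the COW write puts the stale 0x88
 * back, i.e. the 0x99 data is lost.
 */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define CLUSTER_SIZE 65536
#define COW_LEN      32768             /* the 480k..512k tail in the reproducer */

static uint8_t cluster[CLUSTER_SIZE];  /* host cluster holding guest 448k..512k */

int main(void)
{
    uint8_t cow_buf[COW_LEN];

    /* Initial content, written by 'write -P 0x88 448k 128k'. */
    memset(cluster, 0x88, sizeof(cluster));

    /* 1. COW read: the running request copies the old tail (480k..512k). */
    memcpy(cow_buf, cluster + CLUSTER_SIZE - COW_LEN, COW_LEN);

    /* 2. Concurrent 'write -P 0x99 480k 32k'.  It doesn't overlap the
     *    guest range of the running request, so nothing serialises it. */
    memset(cluster + CLUSTER_SIZE - COW_LEN, 0x99, COW_LEN);

    /* 3. COW write: the stale data from step 1 overwrites the new data. */
    memcpy(cluster + CLUSTER_SIZE - COW_LEN, cow_buf, COW_LEN);

    /* 'read -P 0x99 480k 32k' now fails: the byte is 0x88 again. */
    printf("byte at guest offset 480k: 0x%02x (expected 0x99)\n",
           cluster[CLUSTER_SIZE - COW_LEN]);
    return 0;
}

Compile and run it and the final byte comes out as 0x88 instead of 0x99,
which corresponds to the failing 'read -P 0x99 480k 32k' above.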

Basing a real test case on this is not trivial, though, because I have
to stop on the blkdebug event cow_write - which obviously isn't even
emitted in the fixed version.

I still have to look into option b).

Kevin


