From: Kevin Wolf
Subject: [Qemu-devel] [PATCH][RESEND] qcow2: Fix cluster allocation
Date: Wed, 07 Jan 2009 18:16:58 +0100
User-agent: Thunderbird 2.0.0.17 (X11/20080922)

When allocating multiple clusters at once, the qcow2 implementation
tries to find as many physically contiguous clusters as possible to
allow larger writes. This search covers both clusters that are already
allocated at the right physical offset and clusters that are still
free. If the range to allocate spans clusters in a pattern like
"10 allocated, then 10 free, then again 10 allocated", the code only
checks that each chunk of allocated clusters is contiguous within
itself.

However, what is actually needed is that _all_ allocated clusters are
contiguous, starting at the first cluster of the allocation, even when
they span multiple such chunks. This patch changes the check so that
each offset is compared not to the offset of the first cluster in its
own chunk, but to that of the first cluster in the whole allocation.

I haven't seen it happen, but without this fix data corruption on qcow2
images is possible.
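
To illustrate the difference, here is a simplified standalone sketch of
the fixed check. This is only an illustration, not part of the patch:
the function name, offsets and the plain entry format are made up for
the example, and endianness as well as the QCOW_OFLAG_* bits are left
out.

/*
 * Simplified sketch: count how many L2 entries, starting at index
 * 'start', are physically contiguous with the first cluster of the
 * whole allocation (entry 0). An entry value of 0 means "free".
 */
#include <stdint.h>
#include <stdio.h>

static int count_contiguous(uint64_t nb_clusters, int cluster_size,
        const uint64_t *l2_table, uint64_t start)
{
    uint64_t i;
    uint64_t offset = l2_table[0];

    if (!offset)
        return 0;

    for (i = start; i < start + nb_clusters; i++)
        if (offset + i * cluster_size != l2_table[i])
            break;

    return (int)(i - start);
}

int main(void)
{
    /* "10 allocated, then 10 free, then again 10 allocated", with a
     * cluster size of 0x10000. The second allocated chunk is contiguous
     * within itself, but not with the first cluster of the allocation. */
    uint64_t l2[30] = {0};
    int i;

    for (i = 0; i < 10; i++)
        l2[i] = 0x100000 + (uint64_t)i * 0x10000;         /* first chunk */
    for (i = 20; i < 30; i++)
        l2[i] = 0x900000 + (uint64_t)(i - 20) * 0x10000;  /* second chunk */

    /* Counting relative to entry 0, the second chunk does not qualify,
     * so this prints 0. Counting it relative to its own first entry
     * (the old behaviour) would have reported 10. */
    printf("%d\n", count_contiguous(10, 0x10000, l2, 20));
    return 0;
}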

Signed-off-by: Kevin Wolf <address@hidden>
Acked-by: Gleb Natapov <address@hidden>
Index: qemu-svn/block-qcow2.c
===================================================================
--- qemu-svn.orig/block-qcow2.c
+++ qemu-svn/block-qcow2.c
@@ -615,7 +615,7 @@ static int size_to_clusters(BDRVQcowStat
 }
 
 static int count_contiguous_clusters(uint64_t nb_clusters, int cluster_size,
-        uint64_t *l2_table, uint64_t mask)
+        uint64_t *l2_table, uint64_t start, uint64_t mask)
 {
     int i;
     uint64_t offset = be64_to_cpu(l2_table[0]) & ~mask;
@@ -623,11 +623,11 @@ static int count_contiguous_clusters(uin
     if (!offset)
         return 0;
 
-    for (i = 0; i < nb_clusters; i++)
+    for (i = start; i < start + nb_clusters; i++)
         if (offset + i * cluster_size != (be64_to_cpu(l2_table[i]) & ~mask))
             break;
 
-       return i;
+       return (i - start);
 }
 
 static int count_contiguous_free_clusters(uint64_t nb_clusters, uint64_t *l2_table)
@@ -714,7 +714,7 @@ static uint64_t get_cluster_offset(Block
     } else {
         /* how many allocated clusters ? */
         c = count_contiguous_clusters(nb_clusters, s->cluster_size,
-                &l2_table[l2_index], QCOW_OFLAG_COPIED);
+                &l2_table[l2_index], 0, QCOW_OFLAG_COPIED);
     }
 
    nb_available = (c * s->cluster_sectors);
@@ -968,7 +968,7 @@ static uint64_t alloc_cluster_offset(Blo
 
     if (cluster_offset & QCOW_OFLAG_COPIED) {
         nb_clusters = count_contiguous_clusters(nb_clusters, s->cluster_size,
-                &l2_table[l2_index], 0);
+                &l2_table[l2_index], 0, 0);
 
         cluster_offset &= ~QCOW_OFLAG_COPIED;
         m->nb_clusters = 0;
@@ -985,7 +985,7 @@ static uint64_t alloc_cluster_offset(Blo
 
     while (i < nb_clusters) {
         i += count_contiguous_clusters(nb_clusters - i, s->cluster_size,
-                &l2_table[l2_index + i], 0);
+                &l2_table[l2_index], i, 0);
 
         if(be64_to_cpu(l2_table[l2_index + i]))
             break;
