
Re: [Qemu-devel] [RFC] qed: Add QEMU Enhanced Disk format


From: Khoa Huynh
Subject: Re: [Qemu-devel] [RFC] qed: Add QEMU Enhanced Disk format
Date: Thu, 16 Sep 2010 22:51:36 -0500

address@hidden wrote on Mon, 6 Sep 2010 11:04:38 +0100:

> QEMU Enhanced Disk format is a disk image format that forgoes features
> found in qcow2 in favor of better levels of performance and data
> integrity.  Due to its simpler on-disk layout, it is possible to safely
> perform metadata updates more efficiently.
>
> Installations, suspend-to-disk, and other allocation-heavy I/O workloads
> will see increased performance due to fewer I/Os and syncs.  Workloads
> that do not cause new clusters to be allocated will perform similarly to
> raw images due to in-memory metadata caching.
>
> The format supports sparse disk images.  It does not rely on the host
> filesystem holes feature, making it a good choice for sparse disk images
> that need to be transferred over channels where holes are not supported.
>
> Backing files are supported so only deltas against a base image can be
> stored.
>
> The file format is extensible so that additional features can be added
> later with graceful compatibility handling.
>
> Internal snapshots are not supported.  This eliminates the need for
> additional metadata to track copy-on-write clusters.
>
> Compression and encryption are not supported.  They add complexity and
> can be implemented at other layers in the stack (i.e. inside the guest
> or on the host).
>
> The format is currently functional with the following features missing:
>  * Resizing the disk image.  The capability has been designed in but the
>    code has not been written yet.
>  * Resetting the image after backing file commit completes.
>  * Changing the backing filename.
>  * Consistency check (fsck).  This is simple due to the on-disk layout.

I no longer have Stefan's original post about the QED (QEMU Enhanced Disk)
format in my inbox, so I'm replying to it from the mail archive.  I hope
this post threads correctly; if not, please let me know and I'll resend.
(Here's the link to Stefan's original post:
http://lists.nongnu.org/archive/html/qemu-devel/2010-09/msg00310.html)

In any case, Stefan Hajnoczi and Anthony Liguori proposed this new qed
format earlier this month, and it has generated quite a bit of discussion
on this mailing list.  Now I'd like to add some performance data comparing
qed against qcow2 and the raw format.
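
For reference, each image was a 512GB file on the LVM volume.  Images of
each format can be created with qemu-img along these lines (a sketch; the
file names and path are illustrative, and the qed format requires Stefan's
RFC patches applied on top of qemu):

    qemu-img create -f raw   /mnt/lvm/test.raw   512G
    qemu-img create -f qcow2 /mnt/lvm/test.qcow2 512G
    qemu-img create -f qed   /mnt/lvm/test.qed   512G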

TEST ENVIRONMENT:
- Server (KVM host): IBM x3650 M2 (8 x E5530 @ 2.40 GHz, 16 CPU threads,
  12GB memory).
- Physical storage: IBM DS3400 (with 8 x 24-disk RAID10 arrays, 4 Gbps
  Fibre Channel host links to the server); a single LVM volume was created
  across these 8 disk arrays (LUNs).
- KVM guest: 2 virtual CPUs, 4GB memory.
- Virtual storage: a 512GB sparse file was created on the LVM volume on
  the host and passed to the KVM guest as a block device (/dev/vdb); the
  guest formatted and mounted this block device as an ext4 virtual disk
  with write barriers disabled (barrier=0).  (See the example commands
  after this list.)
- Benchmark: the open-source Flexible File System Benchmark (FFSB), run in
  the KVM guest against the ext4 virtual disk.
- Kernel (for both KVM host and guest): 2.6.32
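
For anyone reproducing a similar setup, the host and guest commands would
look roughly like this (a sketch under assumptions: the exact qemu command
line, image path, and mount point are illustrative, not copied from my
scripts):

    # host: expose the image file to the guest as a virtio block device
    qemu-system-x86_64 -m 4096 -smp 2 \
        -drive file=/mnt/lvm/test.qed,if=virtio ...

    # guest: /dev/vdb is the virtio disk; format it as ext4 and mount it
    # with write barriers disabled
    mkfs.ext4 /dev/vdb
    mount -o barrier=0 /dev/vdb /mnt/ffsb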

PERFORMANCE RESULTS:

The following throughput data (in MB/sec) was reported by FFSB:

(Note: The qcow2 version I tested did have Kevin Wolf's change to
eliminate unnecessary flushes, but did not have the zero-copy change.)

Sequential Writes (block size = 256KB)
1 thread:  Raw = 127.7; QED = 121.5; QCOW2 = 93.7
8 threads:  Raw = 455.7; QED = 156.5; QCOW2 = 96.7
16 threads:  Raw = 575.3; QED = 134.5; QCOW2 = 95.1

Sequential Writes (block size = 8KB)
1 thread:  Raw = 20.7; QED = 16.3; QCOW2 = 11.8
8 threads:  Raw = 94.6; QED = 41.0; QCOW2 = 18.3
16 threads:  Raw = 121.5; QED = 43.3; QCOW2 = 17.3

Sequential Reads (block size = 256KB)
1 thread:  Raw = 117.3; QED = 149.0; QCOW2 = 73.6
8 threads:  Raw = 733.7; QED = 817.5; QCOW2 = 253.0
16 threads:  Raw = 1016.3; QED = 922.0; QCOW2 = 242.0

Sequential Reads (block size = 8KB)
1 thread:  Raw = 22.1; QED = 22.2; QCOW2 = 16.5
8 threads:  Raw = 134.5; QED = 131.5; QCOW2 = 12.6
16 threads:  Raw = 177.0; QED = 160.0; QCOW2 = 12.6

Random Reads (block size = 8KB)
1 thread:  Raw = 3.5; QED = 3.6; QCOW2 = 3.5
8 threads:  Raw = 25.8; QED = 25.7; QCOW2 = 12.2
16 threads:  Raw = 48.8; QED = 48.4; QCOW2 = 13.0

Random Writes (block size = 8KB)
1 thread:  Raw = 23.5; QED = 22.6; QCOW2 = 22.3
8 threads:  Raw = 116.8; QED = 112.7; QCOW2 = 12.8
16 threads:  Raw = 135.0; QED = 126.7; QCOW2 = 11.9

Mixed I/O (70% Reads, 30% Writes, block size = 8KB)
1 thread:  Raw = 8.5; QED = 8.8; QCOW2 = 8.9
8 threads:  Raw = 60.5; QED = 62.9; QCOW2 = 23.5
16 threads:  Raw = 112.9; QED = 111.9; QCOW2 = 23.4

Mail Server (mix of file creates, deletes, reads, writes,
etc. to very small files)
1 thread:  Raw = 8.6; QED = 8.3; QCOW2 = 7.2
8 threads:  Raw = 50.4; QED = 46.4; QCOW2 = 10.0
16 threads:  Raw = 78.2; QED = 73.2; QCOW2 = 11.9

A few quick observations:

1) QED performed better than QCOW2 in every scenario tested.
2) QCOW2 did not scale well with the number of threads in most scenarios;
   for example, at 16 threads its 8KB sequential read throughput was
   12.6 MB/sec versus 160.0 MB/sec for QED, a gap of roughly 12.7x.
3) QED was able to keep up with the raw format in all scenarios except
   sequential writes, where the overhead of allocating new clusters was
   substantial.  Even in those sequential write scenarios, however, QED
   still outperformed QCOW2 by quite a bit.

Please let me know if you have any comments or suggestions, or if you need
more info/data.  Thanks.

-Khoa



