From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [Qemu-block] [PATCH RFC 0/1] Allow storing the qcow2 L2 cache in disk
Date: Mon, 12 Dec 2016 16:53:00 +0000
User-agent: Mutt/1.7.1 (2016-10-04)

On Fri, Dec 09, 2016 at 03:47:03PM +0200, Alberto Garcia wrote:
> As we all know, one of the main things that can make the qcow2 format
> slow is the need to load entries from the L2 table in order to map a
> guest offset (on the virtual disk) to a host offset (on the qcow2
> image).
> 
> We have an L2 cache to deal with this, and as long as the cache is big
> enough then the performance is comparable to that of a raw image.
> 
> For large qcow2 images the amount of RAM we need in order to cache all
> L2 tables can be big (128 MB per TB of disk image if we're using the
> default cluster size of 64 KB). In order to solve this problem we have
> a setting that allows the user to clean unused cache entries after a
> certain interval of time. This works fine most of the time, although
> we can still have peaks of RAM usage if there's a lot of I/O going on
> in one or more VMs.
> 
> In some scenarios, however, there's a different alternative: if the
> qcow2 image is stored in a slow backend (e.g. an HDD), we could save
> memory by putting the L2 cache in a faster one (SSD) instead of in
> RAM.
> 
> I have been running some tests with exactly that scenario, and the
> results look good: storing the cache in disk gives roughly the same
> performance as storing it in memory.
> 
> |---------------------+------------+------+------------+--------|
> |                     | Random 4k reads   | Sequential 4k reads |
> |                     | Throughput | IOPS | Throughput |  IOPS  |
> |---------------------+------------+------+------------+--------|
> | Cache in memory/SSD | 406 KB/s   |   99 | 84 MB/s    |  21000 |
> | Default cache (1MB) | 200 KB/s   |   60 | 83 MB/s    |  21000 |
> | No cache            | 200 KB/s   |   49 | 56 MB/s    |  14000 |
> |---------------------+------------+------+------------+--------|
> 
> I'm including the patch that I used to get these results. This is the
> simplest approach that I could think of.
> 
> Opinions, questions?
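
As a quick check on the 128 MB per TB figure above: each qcow2 L2 entry is
8 bytes and maps exactly one cluster, so caching the metadata for the whole
image costs disk_size / cluster_size * 8 bytes.  A minimal sketch with
illustrative numbers only (1 TB image, default 64 KB clusters):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t disk_size    = 1ULL << 40;  /* 1 TB image */
    uint64_t cluster_size = 64 * 1024;   /* default 64 KB clusters */
    uint64_t l2_bytes     = disk_size / cluster_size * 8;

    printf("L2 metadata for full coverage: %" PRIu64 " MB\n", l2_bytes >> 20);
    return 0;  /* prints 128 */
}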

The root of the performance problem is the L2 table on-disk format,
which also happens to be used as the in-memory L2 table format.  It does
not scale to large disk images.

The simplest tweak is to use larger cluster sizes.  64 KB has been the
default for a long time, and it may be time to evaluate the performance
effects of increasing it.  I suspect this doesn't solve the problem,
though; instead we need to decouple metadata scalability from the
cluster size...
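
For scale, the same arithmetic parameterised by cluster size (a sketch,
assuming 8-byte entries throughout):

/* L2 metadata needed for full coverage scales inversely with the cluster
 * size, since each 8-byte entry maps exactly one cluster. */
static uint64_t l2_metadata_bytes(uint64_t disk_size, uint64_t cluster_size)
{
    return disk_size / cluster_size * 8;
}

/* 1 TB with 64 KB clusters -> 128 MB of L2 metadata
 * 1 TB with  2 MB clusters ->   4 MB of L2 metadata */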

Is it time for a new on-disk representation?  Modern file systems seem
to use extent trees instead of offset tables.  That brings a lot of
complication, because a good B-tree implementation would require quite a
few code changes.
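
A minimal sketch of what an extent-style entry could look like (the struct
and field names here are hypothetical, not an existing qcow2 structure):

/* Hypothetical extent record, for illustration only: one entry maps a
 * contiguous run of guest clusters to a contiguous run of host clusters,
 * so a large allocated region needs one entry instead of one per cluster. */
typedef struct {
    uint64_t guest_offset;  /* start of the run on the virtual disk */
    uint64_t host_offset;   /* start of the run in the image file */
    uint32_t nb_clusters;   /* run length, in clusters */
} ExtentRecord;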

Maybe a more modest change to the on-disk representation could solve
most of the performance problem.  In a very sparsely allocated L2 table
something like run-length encoding is more space-efficient than an
offset table.  In a very densely allocated L2 table it may be possible
to choose a "base offset" and then use much smaller offset entries
relative to the base.  For example:

typedef struct {
    uint64_t base_offset;   /* host offset all entries are relative to */
    uint16_t rel_offset[];  /* in cluster units; covers 4 GB with 64 KB cluster size */
} L2TableRelative;

/* Mapping guest cluster index i to a host offset becomes: */
uint64_t offset = l2->base_offset + l2->rel_offset[i] * cluster_size;

A final option is to leave the on-disk representation alone but
convert to an efficient in-memory representation when loading from disk.
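
A rough sketch of what that conversion could look like, reusing the
hypothetical L2TableRelative layout above and glossing over qcow2's
per-entry flag bits (it also needs <stdint.h> and <stdbool.h>):

/* Hypothetical load-time conversion: pack a standard 8-byte-per-entry L2
 * table into the compact form when every allocated offset fits within
 * 4 GB of the smallest one.  Returns false when the table doesn't fit
 * and the full-size entries must be kept instead.  Note that 0 doubles
 * as "unallocated" here; a real version would need a distinct marker. */
static bool l2_table_compact(const uint64_t *disk_l2, int nb_entries,
                             uint64_t cluster_size, L2TableRelative *out)
{
    uint64_t base = UINT64_MAX;

    for (int i = 0; i < nb_entries; i++) {
        if (disk_l2[i] && disk_l2[i] < base) {
            base = disk_l2[i];
        }
    }
    out->base_offset = (base == UINT64_MAX) ? 0 : base;

    for (int i = 0; i < nb_entries; i++) {
        uint64_t delta = disk_l2[i] ? disk_l2[i] - out->base_offset : 0;
        if (delta % cluster_size || delta / cluster_size > UINT16_MAX) {
            return false;
        }
        out->rel_offset[i] = delta / cluster_size;
    }
    return true;
}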

Stefan
