qemu-devel
From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [PATCH v3 0/7] Manual writethrough cache and cache mode toggle
Date: Fri, 8 Jun 2012 15:52:36 +0100

On Tue, Jun 5, 2012 at 11:04 PM, Paolo Bonzini <address@hidden> wrote:
> This is v3 of the alternative implementation of writethrough caching
> for QEMU 1.2.  By always opening drivers in writethrough mode and
> doing flushes manually after every write, it achieves three objectives:
> 1) it makes flipping the cache mode extremely easy; 2) it lets formats
> control flushes during metadata updates even in writethrough mode,
> which makes the updates more efficient; 3) it makes cache=writethrough
> automatically flush metadata without needing extra work in the formats.
>
> The last point should also make implementation of "QED mode" a little
> bit simpler.
>
> v2->v3: patch 3 changed again to always add the flag.  Patches reordered;
>    the new order is better now that BDRV_O_CACHE_WB is added to all
>    BlockDriverStates.
>
> v1->v2: only patch 3 changed, was completely backwards in v1
>
>
> Paolo Bonzini (7):
>  block: flush in writethrough mode after writes
>  savevm: flush after saving vm state
>  block: copy enable_write_cache in bdrv_append
>  block: add bdrv_set_enable_write_cache
>  block: always open drivers in writeback mode
>  ide: support enable/disable write cache
>  qcow2: always operate caches in writeback mode
>
>  block.c                |   29 +++++++++++++++++++++++++----
>  block.h                |    1 +
>  block/qcow2-cache.c    |   25 ++-----------------------
>  block/qcow2-refcount.c |   12 ------------
>  block/qcow2.c          |    7 ++-----
>  block/qcow2.h          |    5 +----
>  hw/ide/core.c          |   21 ++++++++++++++++++---
>  savevm.c               |    2 +-
>  8 files changed, 50 insertions(+), 52 deletions(-)

Could you run sequential and random read and write tests with fio on a
raw image file, LVM volume, or partition?  Those raw cases perform
best, and I'm curious whether pwritev()+fdatasync() is noticeably
different from open(..., O_DSYNC)+pwritev().  I understand why this is
a nice change for image formats, but for raw we need to make sure there
is no regression.

Stefan
