
Re: [Qemu-devel] [RFC] Disk integrity in QEMU


From: Anthony Liguori
Subject: Re: [Qemu-devel] [RFC] Disk integrity in QEMU
Date: Fri, 10 Oct 2008 07:34:31 -0500
User-agent: Thunderbird 2.0.0.17 (X11/20080925)

Avi Kivity wrote:
> Anthony Liguori wrote:
>> [O_DSYNC, O_DIRECT, and 0]
>>
>> Thoughts?

> There are (at least) three usage models for qemu:
>
> - OS development tool
> - casual or client-side virtualization
> - server partitioning
>
> The last two uses are almost always in conjunction with a hypervisor.
>
> When using qemu as an OS development tool, data integrity is not very important. On the other hand, performance and caching are, especially as the guest is likely to be restarted multiple times, so the guest page cache is of limited value. For this use model the current default (write-back cache) is fine.
>
> The 'casual virtualization' use is when the user has a full native desktop and is also running another operating system. In this case, the host page cache is likely to be larger than the guest page cache. Data integrity is important, so write-back is out of the picture. I guess for this use case O_DSYNC is preferred, though O_DIRECT might not be significantly slower for long-running guests: reads are unlikely to be cached, and writes will not benefit much from the host page cache.
>
> For server partitioning, data integrity and performance are critical. The host page cache is significantly smaller than the guest page cache; if you have spare memory, give it to your guests.
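(An aside, to keep the three options concrete: they correspond to the open(2) flags below. This is a minimal sketch, not qemu's actual block-driver code; the cache_mode enum and open_disk_image are made-up names for illustration.)

#define _GNU_SOURCE        /* for O_DIRECT on Linux */
#include <fcntl.h>

/* The three policies under discussion, as open(2) flags:
 *   write-back   : no extra flag; the host page cache buffers writes
 *   write-through: O_DSYNC, so a write returns only after reaching
 *                  stable storage, but reads still use the host cache
 *   none         : O_DIRECT, bypassing the host page cache entirely
 */
enum cache_mode { CACHE_WRITEBACK, CACHE_WRITETHROUGH, CACHE_NONE };

static int open_disk_image(const char *path, enum cache_mode mode)
{
    int flags = O_RDWR;

    switch (mode) {
    case CACHE_WRITETHROUGH:
        flags |= O_DSYNC;   /* data integrity: write-through to disk */
        break;
    case CACHE_NONE:
        flags |= O_DIRECT;  /* no host caching; I/O buffers must be
                               sector-aligned */
        break;
    default:
        break;              /* write-back: the current qemu default */
    }
    return open(path, flags);
}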

I don't think this wisdom is bullet-proof. In the case of server partitioning, if you're designing for the future then you can assume some form of host data deduplication, whether through qcow deduplication, a proper content-addressable storage mechanism, or file-system-level deduplication. It's becoming more common to see large amounts of homogeneous consolidation, whether because of cloud computing, virtual appliances, or simply because most x86 virtualization involves Windows consolidation and there aren't that many versions of Windows.

In this case, there is an awful lot of opportunity to increase overall system throughput by caching data that is common across virtual machines.
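To make the sharing argument concrete, here is a toy sketch of a content-addressed block cache; everything in it is hypothetical, the FNV-1a digest stands in for a real content hash, and none of the names correspond to existing qcow code:

#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE  4096
#define CACHE_SLOTS 1024

struct cache_entry {
    uint64_t key;               /* digest of the block's contents */
    uint8_t  data[BLOCK_SIZE];
    int      valid;
};

static struct cache_entry cache[CACHE_SLOTS];

/* FNV-1a, standing in for a real content digest. */
static uint64_t digest(const uint8_t *buf, size_t len)
{
    uint64_t h = 1469598103934665603ULL;
    for (size_t i = 0; i < len; i++)
        h = (h ^ buf[i]) * 1099511628211ULL;
    return h;
}

/* Read path: a content-addressed image names each block by its digest,
 * so identical blocks in different guests' images hit the same slot. */
static const uint8_t *cache_lookup(uint64_t key)
{
    struct cache_entry *e = &cache[key % CACHE_SLOTS];
    return (e->valid && e->key == key) ? e->data : NULL;
}

static void cache_insert(const uint8_t *block)
{
    uint64_t key = digest(block, BLOCK_SIZE);
    struct cache_entry *e = &cache[key % CACHE_SLOTS];
    e->key = key;
    memcpy(e->data, block, BLOCK_SIZE);
    e->valid = 1;
}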

> O_DIRECT is practically mandated here; the host page cache does nothing except impose an additional copy.
>
> Given the rather small difference between O_DSYNC and O_DIRECT, I favor not adding O_DSYNC, as it will add only marginal value.

The difference isn't small. Our fio runs are defeating the host page cache on writes, so we're adjusting the working set size. But the difference in read performance between O_DSYNC and O_DIRECT is many-fold when the data can be cached.
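fio is the right tool for real numbers, but the effect is easy to reproduce with a crude microbenchmark along these lines (a sketch, all names made up; point it at a file of 64 MB or more):

/* Reads the same 64 MB region twice through the page cache and once
 * with O_DIRECT; the cached pass is served from host RAM. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define REGION  (64 << 20)
#define CHUNK   (1 << 16)

static double read_region(const char *path, int flags)
{
    int fd = open(path, O_RDONLY | flags);
    if (fd < 0) { perror("open"); exit(1); }

    void *buf;
    /* O_DIRECT requires sector-aligned buffers; 4096 covers it. */
    if (posix_memalign(&buf, 4096, CHUNK)) exit(1);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (off_t off = 0; off < REGION; off += CHUNK) {
        if (pread(fd, buf, CHUNK, off) != CHUNK) { perror("pread"); exit(1); }
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    close(fd);
    free(buf);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <image>\n", argv[0]); return 1; }
    read_region(argv[1], 0);                       /* warm the page cache */
    printf("cached:   %.3fs\n", read_region(argv[1], 0));
    printf("O_DIRECT: %.3fs\n", read_region(argv[1], O_DIRECT));
    return 0;
}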

> Regarding choosing the default value, I think we should change the default to be safe, that is, O_DIRECT. If that is regarded as too radical, the default should be O_DSYNC, with options to change it to O_DIRECT or write-back. Note that some disk formats, like qcow2, will need updating if they are not to have abysmal performance.

I think qcow2 will be okay because the only issue is image expansion, and that is a relatively uncommon case whose cost is amortized over the lifetime of the VM. So far, while there has been objection to using O_DIRECT by default, I haven't seen any objection to O_DSYNC by default; as long as no one objects in the next few days, I think that's what we'll end up doing.
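(Back-of-envelope on the amortization point, assuming 64 KB qcow2 clusters: fully populating a 10 GB image costs about 10 GB / 64 KB = 163,840 cluster allocations in total, spread over however long the guest takes to touch every cluster; once a cluster is allocated, rewrites to it involve no further metadata I/O.)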

Regards,

Anthony Liguori




