
Re: [Qemu-devel] [RFC]QEMU disk I/O limits


From: Anthony Liguori
Subject: Re: [Qemu-devel] [RFC]QEMU disk I/O limits
Date: Tue, 31 May 2011 13:39:47 -0500
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.17) Gecko/20110424 Lightning/1.0b2 Thunderbird/3.1.10

On 05/31/2011 12:59 PM, Vivek Goyal wrote:
On Tue, May 31, 2011 at 09:25:31AM -0500, Anthony Liguori wrote:
On 05/31/2011 09:04 AM, Vivek Goyal wrote:
On Tue, May 31, 2011 at 08:50:40AM -0500, Anthony Liguori wrote:
On 05/31/2011 08:45 AM, Vivek Goyal wrote:
On Mon, May 30, 2011 at 01:09:23PM +0800, Zhi Yong Wu wrote:
Hello, all,

     I have prepared to work on a feature called "Disk I/O limits" for the qemu-kvm 
project.
     This feature will enable the user to cap the disk I/O performed by a 
VM. It is important when storage resources are shared among multiple VMs: as 
you know, if some VMs do excessive disk I/O, they hurt the 
performance of the other VMs.


Hi Zhiyong,

Why not use kernel blkio controller for this and why reinvent the wheel
and implement the feature again in qemu?

blkio controller only works for block devices.  It doesn't work when
using files.

So can't we come up with something to easily determine which device backs
this file? Though that still won't work for NFS-backed storage.
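
For comparison, here is roughly what the kernel-side approach looks like when the guest image does sit on a raw block device. This is a hedged sketch using the cgroup v1 blkio controller; the device numbers (8:16) and cgroup name (vm1) are placeholders, and exact paths may vary by distribution:

```shell
# Create a cgroup for the VM (cgroup v1 blkio hierarchy assumed).
mkdir /sys/fs/cgroup/blkio/vm1

# Cap reads and writes to 200 IOPS on the backing device (8:16 = /dev/sdb here).
echo "8:16 200" > /sys/fs/cgroup/blkio/vm1/blkio.throttle.read_iops_device
echo "8:16 200" > /sys/fs/cgroup/blkio/vm1/blkio.throttle.write_iops_device

# Move the QEMU process into the cgroup ($QEMU_PID is a placeholder).
echo $QEMU_PID > /sys/fs/cgroup/blkio/vm1/tasks
```

This only throttles at the block layer, which is exactly the limitation being discussed: it has no device to attach to for file-backed or NFS-backed storage.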

Right.

Additionally, in QEMU, we can rate limit based on concepts that make
sense to a guest.  We can limit the actual I/O ops visible to the
guest which means that we'll get consistent performance regardless
of whether the backing file is qcow2, raw, LVM, or raw over NFS.


Are you referring to request merging, which can change the definition
of IOPS as seen by the guest?

No, with qcow2, it may take multiple real IOPs for what the guest sees as an IOP.

That's really the main argument I'm making here. The only entity that knows what a guest IOP corresponds to is QEMU. On the backend, it may end up being a network request, multiple BIOs to physical disks, file access, etc.

That's why QEMU is the right place to do the throttling for this use case. That doesn't mean device-level throttling isn't useful, just that for virtualization it makes more sense to do it in QEMU.

Regards,

Anthony Liguori


