Re: [Qemu-devel] [RFC]QEMU disk I/O limits


From: Zhi Yong Wu
Subject: Re: [Qemu-devel] [RFC]QEMU disk I/O limits
Date: Thu, 2 Jun 2011 14:29:29 +0800
User-agent: Mutt/1.5.20 (2009-08-17)

On Thu, Jun 02, 2011 at 09:17:06AM +0300, Sasha Levin wrote:
>Date: Thu, 02 Jun 2011 09:17:06 +0300
>From: Sasha Levin <address@hidden>
>To: Zhi Yong Wu <address@hidden>
>Cc: address@hidden, address@hidden, address@hidden,
>       address@hidden, address@hidden,
>       address@hidden, address@hidden, address@hidden,
>       address@hidden, address@hidden, address@hidden,
>       address@hidden, address@hidden, address@hidden
>Subject: Re: [Qemu-devel] [RFC]QEMU disk I/O limits
>X-Mailer: Evolution 2.32.2 
>
>Hi,
>
>On Mon, 2011-05-30 at 13:09 +0800, Zhi Yong Wu wrote:
>> Hello, all,
>> 
>>     I have prepared to work on a feature called "Disk I/O limits" for the 
>> qemu-kvm project.
>>     This feature will enable the user to cap the amount of disk I/O performed 
>> by a VM. It is important when storage resources are shared among multiple VMs: 
>> as you know, if some VMs do excessive disk I/O, they will hurt the performance 
>> of the other VMs.
>> 
>>     More detail is available here:
>>     http://wiki.qemu.org/Features/DiskIOLimits
>> 
>>     1.) Why we need per-drive disk I/O limits 
>>     As you know, on Linux the cgroup blkio-controller already supports I/O 
>> throttling on block devices. However, there is no single mechanism for disk 
>> I/O throttling across all underlying storage types (image file, LVM, NFS, 
>> Ceph), and for some types there is no way to throttle at all. 
>> 
>>     The disk I/O limits feature introduces QEMU block-layer I/O limits together 
>> with command-line and QMP interfaces for configuring them. This allows I/O 
>> limits to be imposed across all underlying storage types through a single 
>> interface.
>> 
>>     2.) How disk I/O limits will be implemented
>>     The QEMU block layer will introduce a per-drive disk I/O request queue for 
>> those disks whose "disk I/O limits" feature is enabled. Limits can be controlled 
>> individually for each disk when multiple disks are attached to a VM, enabling 
>> use cases such as unlimited local disk access combined with rate-limited access 
>> to shared storage. 
>>     In a multiple-I/O-thread scenario, when an application in a VM issues a 
>> block I/O request, the request is intercepted by the QEMU block layer, which 
>> calculates the drive's runtime I/O rate and determines whether it has gone 
>> beyond its limits. If so, the request is placed on the per-drive queue; 
>> otherwise it is serviced immediately (see the sketch below).
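
For illustration, here is a rough sketch of the enqueue-or-service decision.
The names (ThrottleState, throttle_must_wait) are invented for this example
and are not the actual QEMU block-layer API:

    /* Rough sketch only: not the real QEMU block-layer API. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct ThrottleState {
        uint64_t iops_limit;     /* requests per second, 0 = unlimited */
        uint64_t bps_limit;      /* bytes per second, 0 = unlimited */
        uint64_t ios_in_slice;   /* requests serviced in the current 1s slice */
        uint64_t bytes_in_slice; /* bytes serviced in the current 1s slice */
        int64_t  slice_start_ns; /* start time of the current slice */
    } ThrottleState;

    /* Returns true if the request exceeds the drive's budget for the
     * current one-second slice and must go onto the per-drive queue. */
    static bool throttle_must_wait(ThrottleState *ts, uint64_t bytes,
                                   int64_t now_ns)
    {
        if (now_ns - ts->slice_start_ns >= 1000000000LL) {
            /* A new accounting slice has started: reset the counters. */
            ts->slice_start_ns = now_ns;
            ts->ios_in_slice = 0;
            ts->bytes_in_slice = 0;
        }
        if (ts->iops_limit && ts->ios_in_slice + 1 > ts->iops_limit) {
            return true;   /* over the iops budget: enqueue */
        }
        if (ts->bps_limit && ts->bytes_in_slice + bytes > ts->bps_limit) {
            return true;   /* over the throughput budget: enqueue */
        }
        ts->ios_in_slice += 1;        /* account for the serviced request */
        ts->bytes_in_slice += bytes;
        return false;                 /* within limits: service immediately */
    }

    int main(void)
    {
        ThrottleState ts = { .iops_limit = 2 };
        for (int i = 0; i < 4; i++) {
            printf("request %d: %s\n", i,
                   throttle_must_wait(&ts, 4096, 0) ? "queued" : "serviced");
        }
        return 0;
    }

In the real implementation the queued requests would be resubmitted once the
next slice starts; the sketch only shows the admission decision.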
>> 
>>     3.) How users enable and use it
>>     The QEMU -drive option will be extended so that disk I/O limits can be 
>> specified on the command line, e.g. -drive [iops=xxx,][throughput=xxx] or 
>> -drive [iops_rd=xxx,][iops_wr=xxx,][throughput=xxx]. When any of these 
>> arguments is specified, the "disk I/O limits" feature is enabled for that 
>> drive.
>>     The feature will also provide users with the ability to change per-drive 
>> disk I/O limits at runtime using QMP commands.
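
For example, the command line could look like this (the option names are the
ones proposed above; the image file name is just a placeholder):

    qemu -drive file=vm.img,if=virtio,iops=100,throughput=10485760

and the per-drive limits could later be adjusted at runtime over QMP; the
command name below is purely illustrative, since no QMP interface has been
defined yet:

    { "execute": "block_set_io_limits",
      "arguments": { "device": "virtio0", "iops": 200, "throughput": 20971520 } }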
>
>I'm wondering if you've considered adding a 'burst' parameter -
>something which will not limit (or limit less) the io ops or the
>throughput for the first 'x' ms in a given time window.
Currently no. Could you let us know in what scenario it would make sense?
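If such a scenario exists, one natural way to express a burst would be a token
bucket whose depth is larger than its per-second refill. A sketch, illustrative
only and not part of the current proposal (the names are made up):

    #include <stdbool.h>

    typedef struct TokenBucket {
        double tokens;       /* requests currently available */
        double burst_max;    /* bucket depth: headroom for short bursts */
        double refill_per_s; /* steady-state iops limit */
    } TokenBucket;

    bool bucket_allows(TokenBucket *b, double elapsed_s)
    {
        b->tokens += b->refill_per_s * elapsed_s; /* refill since last request */
        if (b->tokens > b->burst_max) {
            b->tokens = b->burst_max;             /* cap at the burst depth */
        }
        if (b->tokens < 1.0) {
            return false;                         /* out of budget: queue it */
        }
        b->tokens -= 1.0;                         /* consume one request */
        return true;                              /* service now */
    }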

Regards,

Zhiyong Wu
>
>> Regards,
>> 
>> Zhiyong Wu
>> 
>
>-- 
>
>Sasha.
>


