qemu-devel



From: Andrey Korolyov
Subject: Re: [Qemu-devel] is there a limit on the number of in-flight I/O operations?
Date: Fri, 18 Jul 2014 20:30:03 +0400

On Fri, Jul 18, 2014 at 8:26 PM, Chris Friesen
<address@hidden> wrote:
> On 07/18/2014 09:54 AM, Andrey Korolyov wrote:
>>
>> On Fri, Jul 18, 2014 at 6:58 PM, Chris Friesen
>> <address@hidden> wrote:
>>>
>>> Hi,
>>>
>>> I've recently run up against an interesting issue where I had a number of
>>> guests running and when I started doing heavy disk I/O on a virtio disk
>>> (backed via ceph rbd) the memory consumption spiked and triggered the
>>> OOM-killer.
>>>
>>> I want to reserve some memory for I/O, but I don't know how much it can
>>> use
>>> in the worst-case.
>>>
>>> Is there a limit on the number of in-flight I/O operations?  (Preferably
>>> as
>>> a configurable option, but even hard-coded would be good to know as
>>> well.)
>>>
>>> Thanks,
>>> Chris
>>>
>>
>> Hi, are you using per-VM cgroups, or did this happen on a bare system?
>> The Ceph backend has a writeback cache setting; you may be hitting
>> that, but it would have to be set enormously large for this to happen.
>>
>
> This is without cgroups.  (I think we had tried cgroups and ran into some
> issues.)  Would cgroups even help with iSCSI/rbd/etc?
>
> The "-drive" parameter in qemu was using "cache=none" for the VMs in
> question.  But I'm assuming it keeps the buffer around until acked by the
> far end in order to be able to handle retries.
>
> Chris
>
>
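For context, the librbd writeback cache mentioned above is configured in ceph.conf; the values below are, to my understanding, the upstream defaults at the time, so it would indeed need to be raised dramatically to account for an OOM. Note also that QEMU's cache=none is expected to translate to rbd_cache=false, disabling this cache entirely (sketch for illustration, not a recommended config):

```
[client]
rbd cache = true                 ; enable the librbd writeback cache
rbd cache size = 33554432        ; 32 MiB of cached data per image (default)
rbd cache max dirty = 25165824   ; flush once dirty data exceeds ~24 MiB
```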

This is probably a bug even if legitimate mechanisms are causing it:
the peak memory footprint of an emulator should be predictable. I have
never hit anything like this on any kind of workload; I will try to
reproduce it myself.
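On the cgroups question: a memory cgroup caps the QEMU process's RSS, which includes the userspace buffers allocated for in-flight rbd I/O, so a per-VM limit along these lines would at least confine the OOM to one guest rather than the host. A minimal sketch using cgroup v1 (current at the time); the path and the 2 GiB figure are illustrative, and QEMU_PID is a placeholder:

```shell
# Sketch only: cap one QEMU process with a v1 memory cgroup so runaway
# in-flight I/O buffers OOM that VM instead of the whole host.
CG=/sys/fs/cgroup/memory/vm-example
mkdir -p "$CG"
echo $((2 * 1024 * 1024 * 1024)) > "$CG/memory.limit_in_bytes"  # 2147483648 bytes
echo "$QEMU_PID" > "$CG/cgroup.procs"  # move the whole QEMU process in
```

Whether this helps in practice depends on where the buffers live: it covers userspace allocations (as with librbd), but not memory pinned on the kernel side for in-kernel block drivers.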


