From: Jan Kiszka
Subject: Re: [Qemu-devel] [PATCH] Add option to mlock guest and qemu memory
Date: Tue, 22 Jan 2013 15:58:30 +0100
User-agent: Mozilla/5.0 (X11; U; Linux i686 (x86_64); de; rv:1.8.1.12) Gecko/20080226 SUSE/2.0.0.12-1.1 Thunderbird/2.0.0.12 Mnenhy/0.7.5.666

On 2013-01-22 15:45, Satoru Moriya wrote:
> On 01/21/2013 04:43 PM, Marcelo Tosatti wrote:
>> On Fri, Sep 28, 2012 at 10:05:09AM +0200, Jan Kiszka wrote:
>>> On 2012-09-28 01:21, Satoru Moriya wrote:
>>>> This is the first time I have posted a patch to qemu-devel.
>>>> If anything is missing or wrong, please let me know.
>>>>
>>>> We have plans to migrate old enterprise systems that require
>>>> low latency (millisecond order) to a KVM virtualized environment.
>>>> We usually use mlock to preallocate and pin down process memory
>>>> in order to avoid page allocation in the latency-critical path.
>>>> In a KVM environment, however, mlocking inside the guest is not
>>>> effective, because it cannot prevent page reclaim on the host.
>>>> To avoid guest memory reclaim, qemu already has the "mem-path"
>>>> option, but that is really meant for using hugepages, and qemu's
>>>> own memory regions are not allocated on hugepages, so they may
>>>> still be reclaimed. That can cause a latency problem.
>>>>
>>>> To avoid both guest and qemu memory reclaim, this patch introduces
>>>> a new "mlock" option. With this option, we can preallocate and pin
>>>> down guest and qemu memory before booting the guest OS.
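
On the host side, such an "mlock" option boils down to a single
process-wide pinning call. A minimal sketch, assuming an illustrative
helper named os_mlock_all (not the patch's actual code):

    /* Minimal sketch, not the patch's actual code: pin all current
     * and future allocations of the QEMU process. */
    #include <stdio.h>
    #include <sys/mman.h>

    static int os_mlock_all(void)
    {
        /* MCL_CURRENT pins every page mapped so far; MCL_FUTURE pins
         * everything mapped afterwards, so guest RAM and QEMU's own
         * memory can no longer be reclaimed or swapped out by the
         * host kernel. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) < 0) {
            perror("mlockall");
            return -1;
        }
        return 0;
    }

MCL_FUTURE matters here: any mappings created after the call,
including guest RAM allocated later during startup, are pinned as
well.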
>>>
>>> I guess this reduces the likelihood of multi-millisecond latencies
>>> for you but does not eliminate them. Of course, mlockall is part of
>>> our local changes for real-time QEMU/KVM, but it is just one of the
>>> many pieces required. I'm wondering how the situation is on your side.
>>>
>>> I think mlockall should eventually be enabled automatically as soon
>>> as you ask for real-time support for QEMU guests. How that should
>>> be controlled is another question. I'm currently carrying a
>>> top-level switch "-rt maxprio=x[,policy=y]" here, likely not the
>>> final solution. I'm not really convinced we need to control memory
>>> locking separately. And since we are very reluctant to add new
>>> top-level switches, this matters even more.
>>
>> In certain scenarios, the latency induced by paging is significant,
>> and memory locking alone is sufficient.
>>
>> Moreover, in scenarios with untrusted guests where the latency
>> improvement due to mlock is desired, realtime priority is
>> problematic (guests whose QEMU threads have realtime priority can
>> abuse the host system).
> 
> Right, our use case involves multiple untrusted guest VMs.

If you cannot dedicate resources (CPU cores) to the guest, you can still
throttle its RT bandwidth.
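
Throttling RT bandwidth can be done through the cgroup cpu controller
(cgroup v1, current at the time). A minimal sketch, with illustrative
cgroup names and values:

    /* Sketch: cap the RT bandwidth of one guest's cgroup. The path
     * and the chosen runtime are illustrative only. */
    #include <stdio.h>

    static int set_rt_bandwidth(const char *cgroup, long runtime_us)
    {
        char path[256];
        FILE *f;

        /* cpu.rt_runtime_us limits how much of each cpu.rt_period_us
         * (default 1,000,000 us) the SCHED_FIFO/SCHED_RR tasks in
         * this group may consume; the remainder stays available to
         * SCHED_OTHER tasks. */
        snprintf(path, sizeof(path),
                 "/sys/fs/cgroup/cpu/%s/cpu.rt_runtime_us", cgroup);
        f = fopen(path, "w");
        if (!f) {
            perror(path);
            return -1;
        }
        fprintf(f, "%ld\n", runtime_us);
        fclose(f);
        return 0;
    }

For example, set_rt_bandwidth("qemu/guest1", 950000) would leave 5% of
each period for non-RT tasks even if that guest's vCPU threads spin at
realtime priority.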

Nevertheless, I'm also fine with making this property separately
controllable via -realtime. Enabling -realtime need not require setting
a priority > 0; in that case all threads stay at SCHED_OTHER. But it
would default to enabling mlockall. In addition, if you feel like it,
-realtime mlock=true|false could be provided to make even this
configurable.
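
A minimal sketch of that behaviour, with illustrative names
(RealtimeOpts, configure_realtime) rather than the actual QEMU code:

    /* Sketch of the proposal above: -realtime defaults to mlockall
     * even at priority 0, with an mlock=on|off override. */
    #include <stdbool.h>
    #include <stdio.h>
    #include <sys/mman.h>

    typedef struct {
        int max_prio;   /* 0 keeps every thread at SCHED_OTHER */
        bool mlock;     /* defaults to true once -realtime is given */
    } RealtimeOpts;

    static int configure_realtime(const RealtimeOpts *opts)
    {
        if (opts->mlock && mlockall(MCL_CURRENT | MCL_FUTURE) < 0) {
            perror("mlockall");
            return -1;
        }
        if (opts->max_prio > 0) {
            /* here the vCPU and I/O threads would be moved to
             * SCHED_FIFO with priorities up to max_prio */
        }
        return 0;
    }

The point of the split is that memory locking and scheduling policy
are independent knobs: an untrusted guest can get the former without
ever being granted the latter.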

Jan

-- 
Siemens AG, Corporate Technology, CT RTC ITP SDP-DE
Corporate Competence Center Embedded Linux


