From: Satoru Moriya
Subject: Re: [Qemu-devel] [PATCH] Add option to mlock guest and qemu memory
Date: Mon, 1 Oct 2012 21:24:17 +0000

Hi Jan,

Thank you for reviewing.

On 09/28/2012 04:05 AM, Jan Kiszka wrote:
> On 2012-09-28 01:21, Satoru Moriya wrote:
>> We have some plans to migrate old enterprise systems which require
>> low latency (msec order) to a kvm virtualized environment. Usually
>> we use mlock to preallocate and pin down process memory in order to
>> avoid page allocation in latency-critical paths. In a kvm environment,
>> however, mlocking inside the guest is not effective because it cannot
>> prevent page reclaim on the host. To keep guest memory from being
>> reclaimed, qemu already has the "mem-path" option, but that is really
>> intended for hugepage backing, and qemu's own memory regions are not
>> allocated from hugepages, so they may still be reclaimed. That can
>> cause a latency problem.
>>
>> To avoid both guest and qemu memory reclaim, this patch introduces a
>> new "mlock" option. With this option, we can preallocate and pin down
>> guest and qemu memory before the guest OS boots.
>
> I guess this reduces the likelihood of multi-millisecond latencies for
> you but does not eliminate them. Of course, mlockall is part of our local
> changes for real-time QEMU/KVM, but it is just one of the many pieces
> required. I'm wondering how the situation is on your side.

You're right. I think this is a first step toward solving the latency
issue on qemu/kvm.
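
For reference, the heart of the patch is essentially a single
mlockall() call made during startup, before the guest begins running.
A minimal sketch of the idea (the function name and hook point are
simplified here, not the patch verbatim):

    #include <stdio.h>
    #include <sys/mman.h>

    /* Pin all current and future mappings -- guest RAM and qemu's own
     * allocations -- so the host cannot reclaim or swap them out. */
    static int os_mlock(void)
    {
        int ret = mlockall(MCL_CURRENT | MCL_FUTURE);

        if (ret < 0) {
            perror("mlockall");
        }
        return ret;
    }

MCL_FUTURE matters here: guest RAM mapped after the call gets pinned
as well, at the cost of later allocations failing if RLIMIT_MEMLOCK is
set too low.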

> I think mlockall should simply be enabled automatically as soon as you ask
> for real-time support for QEMU guests. How that should be controlled is
> another question. I'm currently carrying a top-level switch "-rt
> maxprio=x[,policy=y]" here, likely not the final solution. I'm not

Could you please tell me what that option actually does?
Do you have a public repository or something similar where I can look
at your real-time qemu/kvm changes?
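
My naive guess is that, beyond the implied mlockall, the maxprio and
policy values end up in something like pthread_setschedparam() on the
VCPU threads. This is purely a guess at what your local changes might
look like, nothing more:

    #include <pthread.h>
    #include <sched.h>
    #include <string.h>

    /* Hypothetical mapping of "-rt maxprio=x,policy=y" onto a VCPU
     * thread -- a guess, not the actual implementation. */
    static int vcpu_set_rt(pthread_t thread, int policy, int maxprio)
    {
        struct sched_param param;

        memset(&param, 0, sizeof(param));
        param.sched_priority = maxprio;  /* maxprio=x */
        /* policy=y would be e.g. SCHED_FIFO or SCHED_RR */
        return pthread_setschedparam(thread, policy, &param);
    }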

> really convinced we need to control memory locking separately. And as we
> are very reluctant to add new top-level switches, this is even more
> important.

Regards,
Satoru


