
[Qemu-devel] Re: [PATCH 09/10] Exit loop if we have been there too long


From: Avi Kivity
Subject: [Qemu-devel] Re: [PATCH 09/10] Exit loop if we have been there too long
Date: Tue, 30 Nov 2010 15:58:35 +0200
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.12) Gecko/20101103 Fedora/1.0-0.33.b2pre.fc14 Lightning/1.0b3pre Thunderbird/3.1.6

On 11/30/2010 03:47 PM, Anthony Liguori wrote:
> On 11/30/2010 01:15 AM, Paolo Bonzini wrote:
>> On 11/30/2010 03:11 AM, Anthony Liguori wrote:
>>>
>>> BufferedFile should hit the qemu_file_rate_limit check when the socket
>>> buffer gets filled up.
>>
>> The problem is that the file rate limit is not hit, because the work is
>> done elsewhere. The rate limit can cap the bandwidth used, and it makes
>> QEMU aware that socket operations may block (because that's what the
>> buffered file freeze/unfreeze logic does); but it cannot be used to limit
>> the _time_ spent in the migration code.

> Yes, it can, if you set the rate limit sufficiently low.
>
> The caveats are: 1) the kvm.ko interface for dirty bits doesn't scale to
> large-memory guests, so we spend far more CPU time walking it than we
> should; 2) zero pages cause us to burn far more CPU time than we otherwise
> would, because compressing them is so effective.

What's the problem with burning that CPU? Per guest page, compressing takes less time than sending. Is it just an issue of qemu_mutex hold time?


> In the short term, fixing (2) by accounting zero pages as full-sized pages
> should "fix" the problem.
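A minimal sketch of that accounting change (the function names here are hypothetical, not the actual QEMU code): charge a zero page at full size against the rate limit even though only a marker byte goes over the wire, so the limiter bounds scan time as well as bandwidth.

```c
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096

/* Detect an all-zero page, as the migration path does before it
 * compresses the page down to a one-byte marker. */
static int page_is_zero(const uint8_t *page)
{
    for (size_t i = 0; i < PAGE_SIZE; i++) {
        if (page[i]) {
            return 0;
        }
    }
    return 1;
}

/* Current behaviour: a zero page costs almost nothing against the
 * limit, so a mostly-zero guest lets the scan run nearly unthrottled. */
static uint64_t account_page_old(const uint8_t *page)
{
    return page_is_zero(page) ? 1 : PAGE_SIZE;
}

/* Proposed fix: charge full size either way, so the rate limit also
 * bounds the CPU time spent walking and compressing pages. */
static uint64_t account_page_fixed(const uint8_t *page)
{
    (void)page;
    return PAGE_SIZE;
}
```

The wire format is unchanged; only the number fed to qemu_file_rate_limit's byte counter differs.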

> In the long term, we need a new dirty bit interface from kvm.ko that uses
> a multi-level table. That should dramatically improve scan performance.
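Presumably something like the following two-level scheme is meant (a sketch, not an existing kvm.ko interface): a summary bit per bottom-level bitmap word lets the scan skip large clean regions in a single test instead of walking every word.

```c
#include <stdint.h>

#define PAGES         (1u << 20)            /* 4 GB guest at 4 KB pages */
#define BITS_PER_WORD 64
#define L0_WORDS      (PAGES / BITS_PER_WORD)
#define L1_WORDS      (L0_WORDS / BITS_PER_WORD)

static uint64_t l0[L0_WORDS];   /* one bit per page */
static uint64_t l1[L1_WORDS];   /* one bit per l0 word that has any bit set */

static void mark_dirty(uint32_t page)
{
    uint32_t w = page / BITS_PER_WORD;
    l0[w] |= 1ull << (page % BITS_PER_WORD);
    l1[w / BITS_PER_WORD] |= 1ull << (w % BITS_PER_WORD);
}

/* Scan: a clean run of 4096 pages costs one test of an l1 word instead
 * of 64 tests of l0 words, so a mostly-clean guest is scanned much
 * faster than with a flat bitmap. */
static uint32_t count_dirty(void)
{
    uint32_t n = 0;
    for (uint32_t i = 0; i < L1_WORDS; i++) {
        if (!l1[i]) {
            continue;            /* skip 4096 clean pages at once */
        }
        for (uint32_t j = 0; j < BITS_PER_WORD; j++) {
            if (l1[i] & (1ull << j)) {
                n += (uint32_t)__builtin_popcountll(l0[i * BITS_PER_WORD + j]);
            }
        }
    }
    return n;
}
```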

Why would a multi-level table help? (or rather, please explain what you mean by a multi-level table).

Something we could do is divide memory into more slots, and poll each slot when we start to scan its page range. That reduces the time between sampling a page's dirtiness and sending it off, and reduces the latency incurred by the sampling. There are also non-interface-changing ways to reduce this latency, such as O(1) write protection, or using dirty bits instead of write protection when available.
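The per-slot polling idea can be sketched like this (the helpers are stand-ins, not the real KVM_GET_DIRTY_LOG ioctl): fetch each slot's dirty log only when the scan reaches that slot, so the staleness of a sample shrinks from a whole pass to one slot's worth of work.

```c
#include <stdint.h>

#define SLOTS      8
#define SLOT_PAGES 64

/* Stand-in for KVM_GET_DIRTY_LOG: the kernel would fill and clear the
 * slot's dirty bitmap atomically; here a test hook supplies the data. */
typedef uint64_t (*get_dirty_log_fn)(int slot);

static uint32_t sent_pages;

static void send_page(int slot, int page)
{
    (void)slot;
    (void)page;
    sent_pages++;    /* placeholder for the actual copy-out */
}

/* Sample each slot's dirtiness just before sending its pages, instead
 * of snapshotting all of guest memory up front. */
static void migrate_pass(get_dirty_log_fn get_log)
{
    for (int s = 0; s < SLOTS; s++) {
        uint64_t dirty = get_log(s);        /* just-in-time sample */
        for (int p = 0; p < SLOT_PAGES; p++) {
            if (dirty & (1ull << p)) {
                send_page(s, p);
            }
        }
    }
}

/* Example dirty-log source for demonstration. */
static uint64_t sample_log(int slot)
{
    return slot == 0 ? 0x5 : (slot == 3 ? 0x1 : 0);
}
```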

> We also need to implement live migration in a separate thread that doesn't
> carry qemu_mutex while it runs.

IMO that's the biggest hit currently.
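A rough sketch of that threading model, with placeholder names (qemu_mutex_stub and the chunk sizes are illustrative, not QEMU's actual locking): the migration worker takes the global lock per chunk of pages and drops it between chunks, so the vcpu threads are never starved for the length of a whole pass.

```c
#include <pthread.h>
#include <stdint.h>

#define TOTAL_PAGES 1024
#define CHUNK       64

static pthread_mutex_t qemu_mutex_stub = PTHREAD_MUTEX_INITIALIZER;
static uint32_t pages_sent;

static void send_page_range(uint32_t start, uint32_t n)
{
    (void)start;
    pages_sent += n;   /* placeholder for the actual copy-out */
}

/* Migration worker: instead of one long critical section under the
 * global lock, take it per chunk and release it between chunks. */
static void *migration_thread(void *arg)
{
    (void)arg;
    for (uint32_t p = 0; p < TOTAL_PAGES; p += CHUNK) {
        pthread_mutex_lock(&qemu_mutex_stub);
        send_page_range(p, CHUNK);
        pthread_mutex_unlock(&qemu_mutex_stub);   /* vcpus can run here */
    }
    return NULL;
}
```

The lock hold time per acquisition is bounded by CHUNK pages rather than by guest size, which is the property the single-threaded design lacks.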

--
error compiling committee.c: too many arguments to function



