From: Anthony Liguori
Subject: [Qemu-devel] Re: KVM call agenda for Feb 9
Date: Tue, 09 Feb 2010 08:15:57 -0600
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.5) Gecko/20091209 Fedora/3.0-4.fc12 Lightning/1.0pre Thunderbird/3.0
On 02/09/2010 12:56 AM, Avi Kivity wrote:
> On 02/09/2010 03:28 AM, Chris Wright wrote:
>> Please send in any agenda items you are interested in covering.
>
> hpet overhead on large smp guests
>
> I measured hpet consuming about half a core's worth of cpu on an idle
> Windows 2008 R2 64-way guest.  This is mostly due to futex contention,
> likely from the qemu mutex.
>
> Options:
> - ignore, this is about 1% of the entire system (but overhead might
>   increase greatly if a workload triggers more hpet accesses)
> - push hpet into the kernel; with virtio-net, virtio-blk, and an
>   in-kernel hpet, there's little reason to exit into qemu
Security, shamurity, let's just stick all of qemu in the kernel :-)
> - rcuify/fine-grain qemu locks
Should be pretty straightforward.  It would start with removing the locking within kvm*.c such that qemu_mutex isn't acquired until we dispatch I/O operations.  Then we can add lockless paths for dispatch as we convert device models over.
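A rough sketch of what per-device lockless dispatch could look like; all names and structure below are invented for illustration and are not actual QEMU symbols:

```c
#include <pthread.h>
#include <stdint.h>

/* Hypothetical stand-in for the global qemu_mutex. */
static pthread_mutex_t qemu_mutex = PTHREAD_MUTEX_INITIALIZER;

typedef struct Device Device;
struct Device {
    const char *name;
    int lockless;  /* has this device model been converted yet? */
    int (*read)(Device *dev, uint64_t addr);
};

/* Example MMIO-read callback for a made-up device model. */
static int toy_read(Device *dev, uint64_t addr)
{
    (void)dev;
    return (int)addr + 1;
}

/* The idea: instead of taking qemu_mutex around the entire vmexit
 * handler, dispatch first and take the global lock only for device
 * models that have not yet been converted to a lockless path. */
static int dispatch_mmio_read(Device *dev, uint64_t addr)
{
    int val;

    if (dev->lockless) {
        return dev->read(dev, addr);   /* converted path: no qemu_mutex */
    }
    pthread_mutex_lock(&qemu_mutex);   /* legacy path: global lock held */
    val = dev->read(dev, addr);
    pthread_mutex_unlock(&qemu_mutex);
    return val;
}
```

Device models would flip to the lockless path one at a time, once their internal state is protected by their own locking (or RCU), which is what makes the conversion incremental.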
Regards, Anthony Liguori