Re: [Qemu-devel] Block I/O optimizations
From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] Block I/O optimizations
Date: Mon, 4 Mar 2013 09:45:53 +0100
On Sun, Mar 3, 2013 at 10:35 AM, Abel Gordon <address@hidden> wrote:
>
>
> Stefan Hajnoczi <address@hidden> wrote on 01/03/2013 12:54:54 PM:
>
>> On Thu, Feb 28, 2013 at 08:20:08PM +0200, Abel Gordon wrote:
>> > Stefan Hajnoczi <address@hidden> wrote on 28/02/2013 04:43:04 PM:
>> > > I think extending and tuning the existing mechanisms is the way
>> > > to go. I don't see obvious advantages other than reducing context
>> > > switches.
>> >
>> > Maybe it is worth checking...
>> > We did experiments using vhost-net and vhost-blk. We measured and
>> > compared the traditional model (kernel thread per VM/virtual device)
>> > to the shared-thread model with fine-grained I/O scheduling (a single
>> > kernel thread used to serve multiple VMs). We noticed improvements of
>> > up to 2.5x in throughput and almost half the latency when running up
>> > to 14 VMs.
>>
>> Can you post patches?
>
> We will publish the code soon, but note that the patches are for the
> vhost kernel back-end, not for the QEMU user-space back-end.
That's fine. The only difference the codebase makes is which mailing list
to post to:
* address@hidden - QEMU userspace
* address@hidden - kvm kernel module
* address@hidden - broader-scope Linux kernel virtualization (vhost,
  virtio, hyperv drivers, etc.)
Stefan
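The shared-thread model discussed above can be sketched in miniature: a
single worker thread round-robins over the pending request queues of
several virtual devices, taking one request per device per turn
(fine-grained scheduling) instead of dedicating a kernel thread to each
device. This is a hypothetical toy illustration of the idea only, not
vhost code; all names (Device, serve_all) are invented for the sketch.

```python
import queue
import threading

class Device:
    """Stand-in for one VM's virtual device and its request queue."""
    def __init__(self, name):
        self.name = name
        self.requests = queue.Queue()
        self.completed = []

def serve_all(devices, stop):
    """Single shared worker: round-robin over every device's queue,
    serving at most one request per device per pass."""
    while not stop.is_set():
        idle = True
        for dev in devices:
            try:
                req = dev.requests.get_nowait()
            except queue.Empty:
                continue
            dev.completed.append(req)  # "process" the I/O request
            idle = False
        if idle:
            stop.wait(0.001)           # nothing pending; back off briefly

# Three "VMs", four queued requests each.
devices = [Device(f"vm{i}") for i in range(3)]
for dev in devices:
    for n in range(4):
        dev.requests.put((dev.name, n))

stop = threading.Event()
worker = threading.Thread(target=serve_all, args=(devices, stop))
worker.start()
while any(not d.requests.empty() for d in devices):
    pass                               # wait until every queue is drained
stop.set()
worker.join()
print(sum(len(d.completed) for d in devices))  # 12 requests, one thread
```

The point of the round-robin pass is that no single busy device can
monopolize the shared worker: each device gets at most one request
serviced per turn, which is one plausible form of the fine-grained
scheduling mentioned in the thread.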