From: Christian Borntraeger
Subject: Re: [Qemu-devel] [PATCH 0/3] virtio: disable notifications in blk and scsi
Date: Fri, 18 Nov 2016 12:36:43 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.4.0

On 11/18/2016 12:02 PM, Stefan Hajnoczi wrote:
> On Thu, Nov 17, 2016 at 12:01:30PM +0100, Christian Borntraeger wrote:
>> On 11/16/2016 10:53 PM, Stefan Hajnoczi wrote:
>>> Disabling notifications during virtqueue processing reduces the number of
>>> exits.  The virtio-net device already uses virtio_queue_set_notification()
>>> but virtio-blk and virtio-scsi do not.
>>>
>>> The following benchmark shows a 15% reduction in virtio-blk-pci MMIO exits:
>>>
>>>   (host)$ qemu-system-x86_64 \
>>>               -enable-kvm -m 1024 -cpu host \
>>>               -drive if=virtio,id=drive0,file=f24.img,format=raw,\
>>>                      cache=none,aio=native
>>>   (guest)$ fio # jobs=4, iodepth=8, direct=1, randread
>>>   (host)$ sudo perf record -a -e kvm:kvm_fast_mmio
>>>
>>> Number of kvm_fast_mmio events:
>>> Unpatched: 685k
>>> Patched: 592k (-15%, lower is better)
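
For reference, a fio invocation matching those parameters could look roughly
like the following; the device path, block size and runtime are illustrative
assumptions, not values taken from the report above:

  (guest)$ fio --name=randread --filename=/dev/vda --ioengine=libaio \
               --direct=1 --rw=randread --bs=4k --numjobs=4 --iodepth=8 \
               --runtime=60 --time_based --group_reporting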
>>>
>>> Note that a workload with iodepth=1 and a single thread will not benefit -
>>> this is a batching optimization.  The effect should be strongest with large
>>> iodepth and multiple threads submitting I/O.  The guest I/O scheduler also
>>> affects the optimization.
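
The shape of the change in a device's virtqueue handler is roughly the
following sketch.  virtio_queue_set_notification(), virtqueue_pop() and
virtio_queue_empty() are the QEMU virtqueue APIs referred to above, while
handle_vq() and process_request() are illustrative placeholders rather than
the actual virtio-blk/scsi functions:

  static void handle_vq(VirtIODevice *vdev, VirtQueue *vq)
  {
      VirtQueueElement *elem;

      do {
          /* Suppress guest->host notifications while draining the ring, so
           * back-to-back submissions do not each trigger an exit. */
          virtio_queue_set_notification(vq, 0);

          while ((elem = virtqueue_pop(vq, sizeof(*elem)))) {
              process_request(elem);    /* placeholder per-request handler */
          }

          /* Re-enable notifications, then re-check the ring to close the
           * race with requests the guest submitted after the last pop. */
          virtio_queue_set_notification(vq, 1);
      } while (!virtio_queue_empty(vq));
  }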
>>
>> I have trouble seeing any difference in terms of performance or CPU load
>> (other than a reduced number of kicks).  I was expecting some benefit from
>> reducing the spinlock hold times in virtio-blk, but this needs some more
>> setup to actually find the sweet spot.
> 
> Are you testing on s390 with ccw?

Yes.

> I'm not familiar with the performance
> characteristics of the kick under ccw.

The kick is a diagnose instruction that exits the guest into the host kernel.
In the host kernel it notifies an eventfd and returns to the guest, so in
essence it should be the same as x86.  I was using host ramdisks, which may
have affected the performance.

> 
>> Maybe it will show its benefit with the polling thing?
> 
> Yes, I hope it will benefit polling.  I'll build patches for polling on
> top of this.
> 



