
Re: [Qemu-devel] [question] virtio-blk performance degradation happened with virtio-serial


From: Zhang Haoyu
Subject: Re: [Qemu-devel] [question] virtio-blk performance degradation happened with virtio-serial
Date: Mon, 22 Sep 2014 21:23:24 +0800

>> > >>> Hi, all
>> > >>> 
>> > >>> I started a VM with virtio-serial (default number of ports: 31) and 
>> > >>> found that virtio-blk performance degraded by about 25%; this 
>> > >>> problem is 100% reproducible.
>> > >>> without virtio-serial:
>> > >>> 4k-read-random 1186 IOPS
>> > >>> with virtio-serial:
>> > >>> 4k-read-random 871 IOPS
>> > >>> 
>> > >>> But if I use the max_ports=2 option to limit the maximum number of 
>> > >>> virtio-serial ports, the I/O performance degradation is much less 
>> > >>> serious, about 5% (see the example command lines below).
>> > >>> 
>> > >>> Also, IDE performance does not degrade when virtio-serial is present.
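>> > >>> 
>> > >>> For reference, the configurations compared are roughly the following 
>> > >>> (disk image and memory size are illustrative, not my exact command 
>> > >>> line):
>> > >>> 
>> > >>>   # without virtio-serial
>> > >>>   qemu-system-x86_64 -enable-kvm -m 4096 \
>> > >>>       -drive file=test.img,if=virtio
>> > >>> 
>> > >>>   # with virtio-serial (default max_ports=31)
>> > >>>   qemu-system-x86_64 -enable-kvm -m 4096 \
>> > >>>       -drive file=test.img,if=virtio \
>> > >>>       -device virtio-serial-pci
>> > >>> 
>> > >>>   # with virtio-serial limited to 2 ports
>> > >>>   qemu-system-x86_64 -enable-kvm -m 4096 \
>> > >>>       -drive file=test.img,if=virtio \
>> > >>>       -device virtio-serial-pci,max_ports=2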
>> > >>
>> > >>Pretty sure it's related to the MSI vectors in use.  It's possible that
>> > >>the virtio-serial device takes up all the available vectors in the
>> > >>guest, leaving old-style IRQs for the virtio-blk device.
>> > >>
>> > >I don't think so.
>> > >I used iometer to test the 64k sequential read (or write) case: if I 
>> > >disable virtio-serial dynamically via Device Manager -> virtio-serial 
>> > >-> disable, performance immediately improves by about 25%; when I 
>> > >re-enable virtio-serial via Device Manager -> virtio-serial -> enable, 
>> > >performance drops back again, very noticeably.
>> > One more note:
>> > Although virtio-serial is enabled, I don't use it at all, yet the 
>> > degradation still happens.
>> 
>> Using the vectors= option as mentioned below, you can restrict the
>> number of MSI vectors the virtio-serial device gets.  You can then
>> confirm whether MSI is related to these issues.
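>> 
>> For example, something like this (the value 4 is only an illustration):
>> 
>>   -device virtio-serial-pci,vectors=4
>> 
>> If performance with the reduced vector count matches the
>> no-virtio-serial case, that would point at MSI; if the degradation
>> stays, the cause is elsewhere.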
>
>Amit,
>
>It's related to the large number of ioeventfds used by virtio-serial-pci. With
>virtio-serial-pci's ioeventfd=off, performance is not affected regardless of
>whether the guest initializes the device or not.
>
>In my test, qemu_poll_ns has 12 fds to poll before the guest loads
>virtio_console.ko, but 76 once virtio_console is modprobed.
>
>It looks like ppoll takes more time when it has more fds to poll.
>
>Some trace data with systemtap:
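>
>A script along these lines can produce such traces (the qemu binary path
>is an assumption; substitute your own build):
>
>  probe process("/usr/bin/qemu-system-x86_64").function("qemu_poll_ns")
>  {
>      printf("%d qemu_poll_ns [enter]\n", gettimeofday_us())
>  }
>  probe process("/usr/bin/qemu-system-x86_64").function("qemu_poll_ns").return
>  {
>      printf("%d qemu_poll_ns [return]\n", gettimeofday_us())
>  }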
>
>12 fds:
>
>time  rel_time      symbol
>15    (+1)          qemu_poll_ns  [enter]
>18    (+3)          qemu_poll_ns  [return]
>
>76 fds:
>
>12    (+2)          qemu_poll_ns  [enter]
>18    (+6)          qemu_poll_ns  [return]
>
>I haven't looked at the virtio-serial code, so I'm not sure whether we should
>reduce the number of ioeventfds in virtio-serial-pci or focus on lower-level
>polling efficiency.
>
Does ioeventfd=off hamper the performance of virtio-serial itself?
In my opinion, virtio-serial's use cases do not require high throughput, 
so ioeventfd=off should have only a slight impact on its performance.
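
For reference, ioeventfd can be disabled per device on the command line,
e.g.:

  -device virtio-serial-pci,ioeventfd=off

With ioeventfd=on, each virtqueue kick is signalled through an eventfd
that the main loop has to poll; with ioeventfd=off, the kick is handled
synchronously in the vcpu thread, which costs more per kick but adds no
fds to the poll set.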

Thanks,
Zhang Haoyu

>I haven't compared with g_poll, but I think the underlying syscall should be
>the same.
>
>Any ideas?
>
>Fam
>
>
>> 
>> > >So I think it has nothing to do with legacy interrupt mode, right?
>> > >
>> > >I am going to compare the perf top data on the host (for qemu) and the 
>> > >perf kvm stat data with virtio-serial disabled/enabled in the guest, 
>> > >and likewise the perf top data inside the guest with virtio-serial 
>> > >disabled/enabled.
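>> > >
>> > >Perhaps with commands along these lines (the pid selection is just an
>> > >example):
>> > >
>> > >  perf kvm stat record -p $(pidof qemu-system-x86_64)
>> > >  perf kvm stat report
>> > >  perf top -p $(pidof qemu-system-x86_64)
>> > >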
>> > >Any ideas?
>> > >
>> > >Thanks,
>> > >Zhang Haoyu
>> > >>If you restrict the number of vectors the virtio-serial device gets
>> > >>(using the -device virtio-serial-pci,vectors= parameter), does that
>> > >>make things better for you?
>> 
>> 
>> 
>>              Amit



