

From: Fam Zheng
Subject: Re: [Qemu-devel] [question] virtio-blk performance degradation happened with virtio-serial
Date: Tue, 23 Sep 2014 09:29:35 +0800
User-agent: Mutt/1.5.23 (2014-03-12)

On Mon, 09/22 21:23, Zhang Haoyu wrote:
> >
> >Amit,
> >
> >It's related to the big number of ioeventfds used in virtio-serial-pci. With
> >virtio-serial-pci's ioeventfd=off, the performance is not affected whether or
> >not the guest initializes it.
> >
> >In my test, there are 12 fds to poll in qemu_poll_ns before loading guest
> >virtio_console.ko, whereas 76 once modprobe virtio_console.
> >
> >Looks like ppoll takes more time when there are more fds to poll.
> >
> >Some trace data with systemtap:
> >
> >12 fds:
> >
> >time  rel_time      symbol
> >15    (+1)          qemu_poll_ns  [enter]
> >18    (+3)          qemu_poll_ns  [return]
> >
> >76 fds:
> >
> >time  rel_time      symbol
> >12    (+2)          qemu_poll_ns  [enter]
> >18    (+6)          qemu_poll_ns  [return]
> >
> >I haven't looked at the virtio-serial code, so I'm not sure whether we should
> >reduce the number of ioeventfds in virtio-serial-pci or focus on lower-level
> >efficiency.
> >
> Does ioeventfd=off hamper the performance of virtio-serial?

In theory it has an impact, but I have no data on this. If you have performance
requirements, it's best to test it against your use case to answer this
question.
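[Editor's note: ioeventfd is a per-device property of virtio PCI devices, so the comparison suggested above amounts to toggling it on the qemu command line. The fragment below is illustrative only; machine type, memory size, and the disk image path are placeholders:]

```shell
# Baseline: virtio-serial-pci with ioeventfd enabled (the default).
qemu-system-x86_64 -enable-kvm -m 2G \
    -drive file=guest.img,if=none,id=drive0 \
    -device virtio-blk-pci,drive=drive0 \
    -device virtio-serial-pci

# Comparison run: same guest, ioeventfd disabled on the serial device.
qemu-system-x86_64 -enable-kvm -m 2G \
    -drive file=guest.img,if=none,id=drive0 \
    -device virtio-blk-pci,drive=drive0 \
    -device virtio-serial-pci,ioeventfd=off
```

[Measure virtio-blk throughput in the guest under both configurations, with and without virtio_console loaded.]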

Fam


