From: Huaicheng Li
Subject: Re: [Qemu-devel] QEMU GSoC 2018 Project Idea (Apply polling to QEMU NVMe)
Date: Tue, 27 Feb 2018 08:36:02 -0600

Sounds great. Thanks!

On Tue, Feb 27, 2018 at 5:04 AM, Paolo Bonzini <address@hidden> wrote:

> On 27/02/2018 10:05, Huaicheng Li wrote:
> >     Including a RAM disk backend in QEMU would be nice too, and it may
> >     interest you as it would reduce the delta between upstream QEMU and
> >     FEMU.  So this could be another idea.
> >
> > Glad you're also interested in this part. This can definitely be part of
> > the project.
> >
> >     For (3), there is work in progress to add multiqueue support to
> >     QEMU's
> >     block device layer.  We're hoping to get the infrastructure part in
> >     (removing the AioContext lock) during the first half of 2018.  As you
> >     say, we can see what the workload will be.
> >
> > Thanks for letting me know this. Could you provide a link to the ongoing
> > multiqueue implementation? I would like to learn how this is done. :)
>
> Well, there is no multiqueue implementation yet, but for now you can see
> a lot of work in block/ regarding making drivers and BlockDriverState
> thread safe.  We can't just do it for null-co:// so we have a little
> preparatory work to do. :)
>
> >     However, the main issue that I'd love to see tackled is interrupt
> >     mitigation.  With higher rates of I/O ops and high queue depth (e.g.
> >     32), it's common for the guest to become slower when you introduce
> >     optimizations in QEMU.  The reason is that lower latency causes higher
> >     interrupt rates and that in turn slows down the guest.  If you have any
> >     ideas on how to work around this, I would love to hear about it.
> >
> > Yeah, indeed interrupt overhead (host-to-guest notification) is a headache.
> > I thought about this, and one intuitive optimization in my mind is to add
> > interrupt coalescing support to QEMU NVMe. We may use some heuristic to
> > batch I/O completions back to the guest, thus reducing the number of
> > interrupts. The heuristic can be time-window based (i.e., for I/Os
> > completed in the same time window, we only raise one interrupt per CQ).
> >
> > I believe there are several research papers that achieve direct interrupt
> > delivery without exits for para-virtual devices, but those need KVM-side
> > modifications. That might not be a good fit here.
>
> No, indeed.  But the RAM disk backend and interrupt coalescing (for
> either NVMe or virtio-blk... or maybe a generic scheme that can be
> reused by virtio-net and others too!) is a good idea for the third part
> of the project.
>
> >     In any case, I would very much like to mentor this project.  Let me
> >     know if you have any more ideas on how to extend it!
> >
> >
> > Great to know that you'd like to mentor the project! If so, can we make it
> > an official project idea and put it on the QEMU GSoC page?
>
> Submissions need not come from the QEMU GSoC page.  You are free to
> submit any idea that you think can be worthwhile.
>
> Paolo
>
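[Editor's note: the time-window coalescing heuristic discussed above could be
sketched roughly as follows. This is a minimal illustration only, not QEMU
code; the names (CoalescedCQ, cq_post_completion, COALESCE_WINDOW_NS) are
hypothetical, and a real device model would also need a timer to flush a
window that never fills.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define COALESCE_WINDOW_NS 100000  /* 100 us window; tunable heuristic */

/* Per-completion-queue coalescing state (hypothetical, per the idea above). */
typedef struct CoalescedCQ {
    uint64_t window_start_ns;  /* start of the current batching window */
    unsigned pending;          /* completions batched since the last IRQ */
} CoalescedCQ;

/* Called once per completed I/O.  Returns true when the device model
 * should raise a single interrupt covering every completion batched in
 * the current window; otherwise the completion is queued silently. */
static bool cq_post_completion(CoalescedCQ *cq, uint64_t now_ns)
{
    if (cq->pending == 0) {
        cq->window_start_ns = now_ns;  /* first completion opens a window */
    }
    cq->pending++;

    if (now_ns - cq->window_start_ns >= COALESCE_WINDOW_NS) {
        cq->pending = 0;  /* one IRQ for the whole batch */
        return true;
    }
    return false;  /* keep batching within this window */
}
```

With a high queue depth, many completions land inside one window, so the
guest sees one interrupt per window per CQ instead of one per I/O.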

