qemu-devel

From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [PATCH RFC] vhost: fix vhost force with msix=off
Date: Wed, 17 Apr 2013 16:42:33 +0300

On Wed, Apr 17, 2013 at 02:03:12PM +0200, Christian Borntraeger wrote:
> On 15/04/13 16:40, Michael S. Tsirkin wrote:
> > In response to a bug report on IRC: this should fix it I think?
> > Need to test properly but out of time for today,
> > compiled only. Hope this helps.
> > 
> > Signed-off-by: Michael S. Tsirkin <address@hidden>
> > 
> > diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
> > index 8bba0f3..d0fcc6c 100644
> > --- a/hw/virtio/virtio-pci.c
> > +++ b/hw/virtio/virtio-pci.c
> > @@ -758,6 +758,10 @@ static int virtio_pci_set_guest_notifier(DeviceState *d, int n, bool assign,
> >          event_notifier_cleanup(notifier);
> >      }
> > 
> > +    if (!msix_enabled(&proxy->pci_dev) && proxy->vdev->guest_notifier_mask) {
> > +        proxy->vdev->guest_notifier_mask(proxy->vdev, n, !assign);
> > +    }
> > +
> >      return 0;
> >  }
> > 
> > 
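
For context: for a vhost-backed device, the transport's guest_notifier_mask
callback lands in vhost's masking helper, which switches the eventfd that the
kernel backend signals. A simplified sketch of that mechanism (modelled on
hw/virtio/vhost.c from this era; exact field names may differ):

    /* When masked, point VHOST_SET_VRING_CALL at an internal "masked"
     * notifier that qemu polls itself; when unmasked, point it back at
     * the real guest notifier so the backend signals the guest path
     * directly. */
    void vhost_virtqueue_mask(struct vhost_dev *hdev, VirtIODevice *vdev,
                              int n, bool mask)
    {
        VirtQueue *vvq = virtio_get_queue(vdev, n);
        int index = n - hdev->vq_index;
        struct vhost_vring_file file = { .index = index };

        if (mask) {
            file.fd = event_notifier_get_fd(&hdev->vqs[index].masked_notifier);
        } else {
            file.fd = event_notifier_get_fd(virtio_queue_get_guest_notifier(vvq));
        }
        /* tell the vhost kernel backend which eventfd to signal */
        ioctl(hdev->control, VHOST_SET_VRING_CALL, &file);
    }
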
> 
> FYI, we have done a similar thing in our private tree to repair vhost
> with virtio-ccw (which still has to go through qemu to inject an interrupt).
> The host notifiers are fine, but the guest notifiers are faked by
> reconnecting the eventfd back to qemu, which boils down to the same
> problem (no-msix also uses qemu irq injection, if I understand that
> correctly).
> 
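"Reconnecting the eventfd back to qemu" here means registering a read handler
on the guest notifier in qemu's main loop, so that when vhost signals the
eventfd, qemu performs the actual injection through the transport. A minimal
sketch; inject_interrupt_for() is a hypothetical placeholder for the
transport-specific injection:

    /* qemu consumes the guest notifier itself and injects the interrupt
     * via the transport (ccw, or legacy INTx on PCI). */
    static void guest_notifier_read(EventNotifier *n)
    {
        if (event_notifier_test_and_clear(n)) {
            inject_interrupt_for(n);   /* hypothetical, for illustration */
        }
    }

    /* wire the eventfd into qemu's main loop */
    event_notifier_set_handler(notifier, guest_notifier_read);
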
> IIRC, the problem was introduced with:
> 
> commit f56a12475ff1b8aa61210d08522c3c8aaf0e2648
> Author: Michael S. Tsirkin <address@hidden>
> Date:   Mon Dec 24 17:37:01 2012 +0200
> 
>     vhost: backend masking support
> 
> 
> Our notifier code looks like this:
> 
> Christian
> [...]
> +static int virtio_ccw_set_host_notifier(DeviceState *d, int n, bool assign)
> +{
> +    VirtioCcwDevice *dev = VIRTIO_CCW_DEVICE(d);
> +
> +    /* Stop using the generic ioeventfd, we are doing eventfd handling
> +     * ourselves below */
> +    dev->ioeventfd_disabled = assign;
> +    if (assign) {
> +        virtio_ccw_stop_ioeventfd(dev);
> +    }
> +    return virtio_ccw_set_guest2host_notifier(dev, n, assign, false);
> +}
> +
> +static int virtio_ccw_set_guest_notifier(VirtioCcwDevice *dev, int n,
> +                                         bool assign, bool with_irqfd)
> +{
> +    VirtQueue *vq = virtio_get_queue(dev->vdev, n);
> +    EventNotifier *notifier = virtio_queue_get_guest_notifier(vq);
> +
> +    if (assign) {
> +        int r = event_notifier_init(notifier, 0);
> +
> +        if (r < 0) {
> +            return r;
> +        }
> +        virtio_queue_set_guest_notifier_fd_handler(vq, true, with_irqfd);
> +        /* We do not support irqfd for classic I/O interrupts, because the
> +         * classic interrupts are intermixed with the subchannel status, which
> +         * is queried with test subchannel. We want to use vhost, though.
> +         * Let's make sure to have vhost running and wire up the irq fd to
> +         * land in qemu (and only the irq fd) in this code.

Still, this is not optimal.
Isn't there some way to actually use irqfd to bypass qemu?
Is there some other kind of interrupt virtio can use that
works fine with irqfd?

> +         */
> +        if (dev->vdev->guest_notifier_mask) {          <------------- Same code
> +            dev->vdev->guest_notifier_mask(dev->vdev, n, false);
> +        }
> +        /* get lost events and re-inject */
> +        if (dev->vdev->guest_notifier_pending &&
> +            dev->vdev->guest_notifier_pending(dev->vdev, n)) {
> +            event_notifier_set(notifier);
> +        }
> +    } else {
> +        if (dev->vdev->guest_notifier_mask) {
> +            dev->vdev->guest_notifier_mask(dev->vdev, n, true);
> +        }
> +        virtio_queue_set_guest_notifier_fd_handler(vq, false, with_irqfd);
> +        event_notifier_cleanup(notifier);
> +    }
> +    return 0;
> +}
> +
> +static int virtio_ccw_set_guest_notifiers(DeviceState *d, int nvqs,
> +                                          bool assigned)
> +{
> +    VirtioCcwDevice *dev = VIRTIO_CCW_DEVICE(d);
> +    VirtIODevice *vdev = dev->vdev;
> +    int r, n;
> +
> +    for (n = 0; n < nvqs; n++) {
> +        if (!virtio_queue_get_num(vdev, n)) {
> +            break;
> +        }
> +        /* false -> true, as soon as irqfd works */
> +        r = virtio_ccw_set_guest_notifier(dev, n, assigned, false);
> +        if (r < 0) {
> +            goto assign_error;
> +        }
> +    }
> +    return 0;
> +
> +assign_error:
> +    while (--n >= 0) {
> +        virtio_ccw_set_guest_notifier(dev, n, !assigned, false);
> +    }
> +    return r;
> +}
> 
> [...]
> 
> 
> So it might be an alternative to have a look at the vhost code, which
> currently requires the virtio transport to actively maintain the masking.
> 
> Christian

I suggest we merge your code as is, then look for
similarities with PCI to move into the virtio core.


-- 
MST


