From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [PATCH 3/3] vhost-net: force guest_notifier_mask bypass in vhost-user case
Date: Thu, 4 Feb 2016 15:06:37 +0200

On Thu, Dec 03, 2015 at 10:53:19AM +0100, Didier Pallard wrote:
> Since guest_notifier_mask cannot be used in vhost-user
> mode, due to the buffering implied by the unix control
> socket, clear VIRTIO_PCI_FLAG_USE_NOTIFIERMASK on the
> virtio-pci device of vhost-user interfaces, and send
> the correct callfd to the guest at vhost start.
> 
> Signed-off-by: Didier Pallard <address@hidden>
> Reviewed-by: Thibaut Collet <address@hidden>

I queued this now so we have a bugfix, but I think we
should clean this up using a property and avoid
depending on virtio pci.

> ---
>  hw/net/vhost_net.c | 19 ++++++++++++++++++-
>  hw/virtio/vhost.c  | 13 +++++++++++++
>  2 files changed, 31 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
> index 318c3e6..74318dc 100644
> --- a/hw/net/vhost_net.c
> +++ b/hw/net/vhost_net.c
> @@ -36,9 +36,11 @@
>  #include <stdio.h>
>  
>  #include "standard-headers/linux/virtio_ring.h"
> +#include "hw/pci/pci.h"
>  #include "hw/virtio/vhost.h"
>  #include "hw/virtio/virtio-bus.h"
>  #include "hw/virtio/virtio-access.h"
> +#include "hw/virtio/virtio-pci.h"
>  
>  struct vhost_net {
>      struct vhost_dev dev;
> @@ -314,7 +316,22 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
>      }
>  
>      for (i = 0; i < total_queues; i++) {
> -        vhost_net_set_vq_index(get_vhost_net(ncs[i].peer), i * 2);
> +        struct vhost_net *net = get_vhost_net(ncs[i].peer);
> +        vhost_net_set_vq_index(net, i * 2);
> +
> +        /* Force VIRTIO_PCI_FLAG_USE_NOTIFIERMASK off in the vhost-user
> +         * case. Must be done before the set_guest_notifiers call.
> +         */
> +        if (net->nc->info->type == NET_CLIENT_OPTIONS_KIND_VHOST_USER) {
> +            BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(dev)));
> +            DeviceState *d = DEVICE(qbus->parent);
> +            if (!strcmp(object_get_typename(OBJECT(d)), TYPE_VIRTIO_NET_PCI)) {
> +                VirtIOPCIProxy *proxy = VIRTIO_PCI(d);
> +
> +                /* Force proxy to not use mask notifier */
> +                proxy->flags &= ~VIRTIO_PCI_FLAG_USE_NOTIFIERMASK;
> +            }
> +        }
>      }
>  
>      r = k->set_guest_notifiers(qbus->parent, total_queues * 2, true);
> diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
> index de29968..7a4c1d3 100644
> --- a/hw/virtio/vhost.c
> +++ b/hw/virtio/vhost.c
> @@ -854,8 +854,21 @@ static int vhost_virtqueue_start(struct vhost_dev *dev,
>      /* Clear and discard previous events if any. */
>      event_notifier_test_and_clear(&vq->masked_notifier);
>  
> +    /* For vhost-user, register the call eventfd now: the
> +     * guest_notifier_mask function is not used anymore.
> +     */
> +    if (dev->vhost_ops->backend_type == VHOST_BACKEND_TYPE_USER) {
> +        file.fd = event_notifier_get_fd(virtio_queue_get_guest_notifier(vvq));
> +        r = dev->vhost_ops->vhost_set_vring_call(dev, &file);
> +        if (r) {
> +            r = -errno;
> +            goto fail_call;
> +        }
> +    }
> +
>      return 0;
>  
> +fail_call:
>  fail_kick:
>  fail_alloc:
>      cpu_physical_memory_unmap(vq->ring, virtio_queue_get_ring_size(vdev, idx),
> -- 
> 2.1.4
> 


