Re: [Qemu-devel] QEMU -netdev vhost=on + -device virtio-net-pci bug


From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] QEMU -netdev vhost=on + -device virtio-net-pci bug
Date: Wed, 6 Mar 2013 12:31:35 +0200

On Wed, Mar 06, 2013 at 09:57:40AM +1100, Alexey Kardashevskiy wrote:
> On 06/03/13 01:23, Michael S. Tsirkin wrote:
> >On Wed, Mar 06, 2013 at 12:21:47AM +1100, Alexey Kardashevskiy wrote:
> >>On 05/03/13 23:56, Michael S. Tsirkin wrote:
> >>>>The patch f56a12475ff1b8aa61210d08522c3c8aaf0e2648 "vhost: backend
> >>>>masking support" breaks virtio-net + vhost=on on PPC64 platform.
> >>>>
> >>>>The problem command line is:
> >>>>1) -netdev tap,id=tapnet,ifname=tap0,script=qemu-ifup.sh,vhost=on \
> >>>>-device virtio-net-pci,netdev=tapnet,addr=0.0 \
> >>>
> >>>I think the issue is irqfd is not supported on kvm ppc.
> >>
> >>How can I make sure this is the case? Some work has been done there
> >>recently but midnight is quite late to figure this out :)
> >
> >Look in virtio_pci_set_guest_notifiers, what is the
> >value of with_irqfd?
> >   bool with_irqfd = msix_enabled(&proxy->pci_dev) &&
> >         kvm_msi_via_irqfd_enabled();
> >
> >Also check what each of the values in the expression above is.
> 
> Yes, ppc does not have irqfd: kvm_msi_via_irqfd_enabled() returned "false".
> 
> >>>Could you please check this:
> >>>
> >>>+        /* If guest supports masking, set up irqfd now.
> >>>+         * Otherwise, delay until unmasked in the frontend.
> >>>+         */
> >>>+        if (proxy->vdev->guest_notifier_mask) {
> >>>+            ret = kvm_virtio_pci_irqfd_use(proxy, queue_no, vector);
> >>>+            if (ret < 0) {
> >>>+                kvm_virtio_pci_vq_vector_release(proxy, vector);
> >>>+                goto undo;
> >>>+            }
> >>>+        }
> >>>
> >>>
> >>>Could you please add a printf before "undo" and check whether the
> >>>error path above is triggered?
> >>
> >>
> >>Checked, it is not triggered.
> >>
> >>
> >>--
> >>Alexey
> >
> >I think I get it.
> >Does the following help (probably not the right thing to do, but just
> >for testing):
> 
> 
> It did not compile (no "queue_no") :) I changed it a bit and now
> vhost=on works fine:
> 
> diff --git a/hw/virtio-pci.c b/hw/virtio-pci.c
> index a869f53..df1e443 100644
> --- a/hw/virtio-pci.c
> +++ b/hw/virtio-pci.c
> @@ -798,6 +798,10 @@ static int virtio_pci_set_guest_notifiers(DeviceState *d, int nvqs, bool assign)
>          if (r < 0) {
>              goto assign_error;
>          }
> +
> +        if (!with_irqfd && proxy->vdev->guest_notifier_mask) {
> +            proxy->vdev->guest_notifier_mask(proxy->vdev, n, !assign);
> +        }
>      }
> 
>      /* Must set vector notifier after guest notifier has been assigned */
> 
> 

I see, OK: the issue is that vhost now starts in a masked state
and no one unmasks it. While I think the patch will work,
it does not benefit from backend masking; the right thing
to do is to add mask notifiers, like what the irqfd path does.
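
To make this concrete, a minimal self-contained sketch of the failure
mode (toy names, illustration only; this is not the actual QEMU code):

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for a vhost-style backend, which starts out masked. */
struct backend {
    bool masked;
};

/* Stand-in for vdev->guest_notifier_mask(): called whenever the
 * transport masks or unmasks the queue's vector. */
static void guest_notifier_mask(struct backend *b, bool mask)
{
    b->masked = mask;
    printf("backend is now %s\n", mask ? "masked" : "unmasked");
}

int main(void)
{
    struct backend vhost = { .masked = true };

    /* On the irqfd path a mask notifier eventually delivers an
     * unmask.  Without irqfd, nothing does, so the backend stays
     * masked forever.  The workaround above forces one unmask at
     * assign time: */
    bool assign = true;
    guest_notifier_mask(&vhost, !assign);   /* !assign -> unmask */

    return vhost.masked ? 1 : 0;
}

The mask-notifier approach described above would instead have the
transport call such a hook on every mask and unmask, so the backend
tracks the vector state rather than getting a single forced unmask.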

Will look into this, thanks.

-- 
MST


