Re: [Qemu-devel] [PATCH v2 2/2] vhost: double check used memslots number


From: Zhoujian (jay)
Date: Wed, 3 Jan 2018 14:19:29 +0000

> -----Original Message-----
> From: Igor Mammedov [mailto:address@hidden]
> Sent: Thursday, December 28, 2017 7:29 PM
> To: Michael S. Tsirkin <address@hidden>
> Cc: Huangweidong (C) <address@hidden>; wangxin (U)
> <address@hidden>; address@hidden; Liuzhe (Cloud Open
> Labs, NFV) <address@hidden>; address@hidden; Gonglei (Arei)
> <address@hidden>; Zhoujian (jay) <address@hidden>
> Subject: Re: [Qemu-devel] [PATCH v2 2/2] vhost: double check used memslots
> number
> 
> On Fri, 22 Dec 2017 23:15:09 +0200
> "Michael S. Tsirkin" <address@hidden> wrote:
> 
> > On Fri, Dec 22, 2017 at 07:48:55PM +0100, Igor Mammedov wrote:
> > > On Fri, 15 Dec 2017 16:45:55 +0800
> > > Jay Zhou <address@hidden> wrote:
> > >
> > > > If the VM already has N(N>8) available memory slots for vhost
> > > > user, the VM will be crashed in vhost_user_set_mem_table if we try
> > > > to hotplug the first vhost user NIC.
> > > > This patch checks if memslots number exceeded or not after
> > > > updating vhost_user_used_memslots.
> > > Can't understand commit message, pls rephrase (what is being fixed,
> > > and how it's fixed) also include reproducing steps for crash and
> > > maybe describe call flow/backtrace that triggers crash.
> > >
> > > PS:
> > > I wasn't able to reproduce crash
> > >
> > > >
> > > > Signed-off-by: Jay Zhou <address@hidden>
> > > > ---
> > > >  hw/virtio/vhost.c | 27 +++++++++++++++++++++++----
> > > >  1 file changed, 23 insertions(+), 4 deletions(-)
> > > >
> > > > diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c index
> > > > 59a32e9..e45f5e2 100644
> > > > --- a/hw/virtio/vhost.c
> > > > +++ b/hw/virtio/vhost.c
> > > > @@ -1234,6 +1234,18 @@ static void vhost_virtqueue_cleanup(struct
> vhost_virtqueue *vq)
> > > >      event_notifier_cleanup(&vq->masked_notifier);
> > > >  }
> > > >
> > > > +static bool vhost_dev_used_memslots_is_exceeded(struct vhost_dev
> > > > +*hdev) {
> > > > +    if (hdev->vhost_ops->vhost_get_used_memslots() >
> > > > +        hdev->vhost_ops->vhost_backend_memslots_limit(hdev)) {
> > > > +        error_report("vhost backend memory slots limit is less"
> > > > +                " than current number of present memory slots");
> > > > +        return true;
> > > > +    }
> > > > +
> > > > +    return false;
> > > > +}
> > > > +
> > > >  int vhost_dev_init(struct vhost_dev *hdev, void *opaque,
> > > >                     VhostBackendType backend_type, uint32_t
> > > > busyloop_timeout)  { @@ -1252,10 +1264,7 @@ int
> > > > vhost_dev_init(struct vhost_dev *hdev, void *opaque,
> > > >          goto fail;
> > > >      }
> > > >
> > > > -    if (hdev->vhost_ops->vhost_get_used_memslots() >
> > > > -        hdev->vhost_ops->vhost_backend_memslots_limit(hdev)) {
> > > > -        error_report("vhost backend memory slots limit is less"
> > > > -                " than current number of present memory slots");
> > > > +    if (vhost_dev_used_memslots_is_exceeded(hdev)) {
> > > why do you keep this check?
> > > it seems always be false
> > >
> > >
> > >
> > > >          r = -1;
> > > >          goto fail;
> > > >      }
> > > > @@ -1341,6 +1350,16 @@ int vhost_dev_init(struct vhost_dev *hdev,
> void *opaque,
> > > >      hdev->memory_changed = false;
> > > >      memory_listener_register(&hdev->memory_listener,
> &address_space_memory);
> > > >      QLIST_INSERT_HEAD(&vhost_devices, hdev, entry);
> > > > +
> > > > +    if (vhost_dev_used_memslots_is_exceeded(hdev)) {
> > > > +        r = -1;
> > > > +        if (busyloop_timeout) {
> > > > +            goto fail_busyloop;
> > > > +        } else {
> > > > +            goto fail;
> > > > +        }
> > > > +    }
> > > seem to be right thing to do, since after registering listener for
> > > the first time used_memslots will be updated to actual value.
> > >
> > >
> > > I did some testing and without this hunk/patch
> > >
> > > on 'device_add  virtio-net-pci,netdev=net0' qemu prints:
> > >
> > > qemu-system-x86_64: vhost_set_mem_table failed: Argument list too
> > > long (7)
> > > qemu-system-x86_64: unable to start vhost net: 7: falling back on
> > > userspace virtio

Error code 7 is E2BIG, which means

        if (mem.nregions > max_mem_regions)
                return -E2BIG;

happened in the kernel.
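
For reference, the shape of that kernel check can be modeled in a small user-space sketch (the limit in the kernel comes from the max_mem_regions module parameter in drivers/vhost/vhost.c, default 64; the macro below is an assumption for illustration):

```c
#include <errno.h>

/* Default vhost memslot limit: in the kernel this is the
 * max_mem_regions module parameter (default 64). */
#define MAX_MEM_REGIONS 64

/* Mirrors the kernel check quoted above: a memory table with more
 * regions than the limit is rejected with -E2BIG, which QEMU then
 * reports as error code 7. */
int check_mem_table(unsigned int nregions)
{
    if (nregions > MAX_MEM_REGIONS)
        return -E2BIG;
    return 0;
}
```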

> > >
> > > and network is operational in guest, but with this patch
> > >
> > > "netdev_add ...,vhost-on" prints:
> > >
> > > vhost backend memory slots limit is less than current number of
> > > present memory slots vhost-net requested but could not be
> > > initialized
> > >
> > > and following "device_add  virtio-net-pci,netdev=net0" prints:
> > >
> > > TUNSETOFFLOAD ioctl() failed: Bad file descriptor TUNSETOFFLOAD
> > > ioctl() failed: Bad file descriptor
> > >
> > > adapter is still hot-plugged but guest networking is broken (can't
> > > get IP address via DHCP)
> > >
> > > so patch seems introduces a regression or something broken elsewhere
> > > and this exposes issue, not sure what qemu reaction should be in
> > > this case  i.e. when netdev_add fails
> > >     1: should we fail followed up device_add or
> > >     2: make it fall back to userspace virtio
> > >
> > > I'd go for #2,
> > > Michael what's your take on it?
> >
> > OK but there's a vhost force flag, if that is set we definitely should
> > fail device_add.
> >
> > Also, hotplug can follow device_add, should be handled similarly.
> I was testing with vhost-kernel (as it doesn't need extra environment to
> setup) and it's able to fallback to virtio transport.
> 
> However in case of vhost-user, is there even an option to fallback to?

Using an error code (i.e. doing it like vhost-kernel does) instead of asserting
in vhost_user_set_mem_table(), I have tested:
"netdev_add vhost-user,chardev=charnet0,id=hostnet0" is successful,
following "device_add virtio-net-pci,netdev=hostnet0,id=net0,bus=pci.0" prints:

"qemu-system-x86_64: vhost_set_mem_table failed: Interrupted system call (4)
qemu-system-x86_64: unable to start vhost net: 4: falling back on userspace 
virtio"

or

"qemu-system-x86_64: vhost_set_mem_table failed: Resource temporarily 
unavailable (11)
qemu-system-x86_64: unable to start vhost net: 11: falling back on userspace 
virtio"

The adapter is still hot-plugged but guest networking is broken (can't get an
IP address via DHCP). Does this mean it makes no sense for vhost-user to fall
back?

> Perhaps our only choice here is to fail backend creation cleanly, so no
> one would be able to add a frontend referring to a non-existing backend.

Not sure what to do.

> 
> 
> PS:
> even if we have to fail on error for vhost-user, this patch shouldn't
> change current vhost-kernel behavior (fallback should still work)

Does it mean vhost-kernel doesn't need to care about the value of used_memslots
(because it is able to fall back to userspace virtio)?

Is it enough to return an error code in vhost_user_set_mem_table() and
vhost_kernel_set_mem_table()?
  1. If yes, how about removing the check of used_memslots entirely?
  2. If no, is it enough to check used_memslots for vhost-user only after
    the memory listener is registered?


Regards,
Jay

> 
> >
> > > > +
> > > >      return 0;
> > > >
> > > >  fail_busyloop:
> >



