qemu-devel

Re: [Qemu-devel] [PATCH v3] vhost: add used memslot number for vhost-user and vhost-kernel separately


From: Zhoujian (jay)
Subject: Re: [Qemu-devel] [PATCH v3] vhost: add used memslot number for vhost-user and vhost-kernel separately
Date: Fri, 29 Dec 2017 12:54:44 +0000


> -----Original Message-----
> From: Igor Mammedov [mailto:address@hidden]
> Sent: Friday, December 29, 2017 7:22 PM
> To: Zhoujian (jay) <address@hidden>
> Cc: address@hidden; Huangweidong (C) <address@hidden>;
> address@hidden; wangxin (U) <address@hidden>; Liuzhe (Cloud
> Open Labs, NFV) <address@hidden>; Gonglei (Arei)
> <address@hidden>
> Subject: Re: [Qemu-devel] [PATCH v3] vhost: add used memslot number for
> vhost-user and vhost-kernel separately
> 
> On Fri, 29 Dec 2017 10:37:40 +0000
> "Zhoujian (jay)" <address@hidden> wrote:
> 
> > Hi Igor,
> >
> > > -----Original Message-----
> > > From: Igor Mammedov [mailto:address@hidden]
> > > Sent: Friday, December 29, 2017 5:31 PM
> > > To: Zhoujian (jay) <address@hidden>
> > > Cc: address@hidden; Huangweidong (C)
> > > <address@hidden>; address@hidden; wangxin (U)
> > > <address@hidden>; Liuzhe (Cloud Open Labs, NFV)
> > > <address@hidden>; Gonglei (Arei) <address@hidden>
> > > Subject: Re: [Qemu-devel] [PATCH v3] vhost: add used memslot number
> > > for vhost-user and vhost-kernel separately
> > >
> > > On Fri, 29 Dec 2017 10:35:11 +0800
> > > Jay Zhou <address@hidden> wrote:
> > >
> > > > Used_memslots is currently equal to dev->mem->nregions, which is
> > > > correct for vhost-kernel but not for vhost-user, since vhost-user
> > > > only uses memory regions that have a file descriptor, and not all
> > > > memory regions have one.
> > > > This matters in some scenarios: e.g. used_memslots is 8 while only
> > > > 5 memory slots are actually usable by vhost-user, so hotplugging a
> > > > new DIMM fails because vhost_has_free_slot() returns false, even
> > > > though the hotplug would in fact be safe.
> > > >
> > > > Meanwhile, instead of asserting in vhost_user_set_mem_table(), an
> > > > error number is used to gracefully prevent the device from
> > > > starting. This fixes the VM crash issue.
> > >
> > > mostly style-related comments below; otherwise the patch looks good
> > > to me
> > > >
> > > > Suggested-by: Igor Mammedov <address@hidden>
> > > > Signed-off-by: Jay Zhou <address@hidden>
> > > > Signed-off-by: Zhe Liu <address@hidden>
> > > > ---
> > > >  hw/virtio/vhost-backend.c         | 14 +++++++
> > > >  hw/virtio/vhost-user.c            | 84 +++++++++++++++++++++++++++++----------
> > > >  hw/virtio/vhost.c                 | 16 ++++----
> > > >  include/hw/virtio/vhost-backend.h |  4 ++
> > > >  4 files changed, 91 insertions(+), 27 deletions(-)
> > > >
> > > > diff --git a/hw/virtio/vhost-backend.c b/hw/virtio/vhost-backend.c
> > > > index 7f09efa..866718c 100644
> > > > --- a/hw/virtio/vhost-backend.c
> > > > +++ b/hw/virtio/vhost-backend.c
> > > > @@ -15,6 +15,8 @@
> > > >  #include "hw/virtio/vhost-backend.h"
> > > >  #include "qemu/error-report.h"
> > > >
> > > > +static unsigned int vhost_kernel_used_memslots;
> > > > +
> > > >  static int vhost_kernel_call(struct vhost_dev *dev, unsigned long int request,
> > > >                               void *arg)
> > > >  {
> > > > @@ -233,6 +235,16 @@ static void vhost_kernel_set_iotlb_callback(struct vhost_dev *dev,
> > > >          qemu_set_fd_handler((uintptr_t)dev->opaque, NULL, NULL, NULL);
> > > >  }
> > > >
> > > > +static void vhost_kernel_set_used_memslots(struct vhost_dev *dev)
> > > > +{
> > > > +    vhost_kernel_used_memslots = dev->mem->nregions;
> > > > +}
> > > > +
> > > > +static unsigned int vhost_kernel_get_used_memslots(void)
> > > > +{
> > > > +    return vhost_kernel_used_memslots;
> > > > +}
> > > > +
> > > >  static const VhostOps kernel_ops = {
> > > >          .backend_type = VHOST_BACKEND_TYPE_KERNEL,
> > > >          .vhost_backend_init = vhost_kernel_init,
> > > > @@ -264,6 +276,8 @@ static const VhostOps kernel_ops = {
> > > >  #endif /* CONFIG_VHOST_VSOCK */
> > > >          .vhost_set_iotlb_callback = vhost_kernel_set_iotlb_callback,
> > > >          .vhost_send_device_iotlb_msg = vhost_kernel_send_device_iotlb_msg,
> > > > +        .vhost_set_used_memslots = vhost_kernel_set_used_memslots,
> > > > +        .vhost_get_used_memslots = vhost_kernel_get_used_memslots,
> > > >  };
> > > >
> > > >  int vhost_set_backend_type(struct vhost_dev *dev, VhostBackendType backend_type)
> > > > diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
> > > > index 093675e..0f913be 100644
> > > > --- a/hw/virtio/vhost-user.c
> > > > +++ b/hw/virtio/vhost-user.c
> > > > @@ -122,6 +122,8 @@ static VhostUserMsg m __attribute__ ((unused));
> > > >  /* The version of the protocol we support */
> > > >  #define VHOST_USER_VERSION    (0x1)
> > > >
> > > > +static unsigned int vhost_user_used_memslots;
> > > > +
> > > >  struct vhost_user {
> > > >      CharBackend *chr;
> > > >      int slave_fd;
> > > > @@ -289,12 +291,53 @@ static int vhost_user_set_log_base(struct vhost_dev *dev, uint64_t base,
> > > >      return 0;
> > > >  }
> > > >
> > > > +static int vhost_user_prepare_msg(struct vhost_dev *dev, VhostUserMsg *msg,
> > > > +                                  int *fds)
> > > > +{
> > > > +    int r = 0;
> > > > +    int i, fd;
> > > > +    size_t fd_num = 0;
> > > fd_num is redundant
> > > you can use msg->payload.memory.nregions as a counter
> >
> > If using msg->payload.memory.nregions as a counter, referencing a
> > member of msg->payload.memory.regions will look like this:
> >
> >    msg->payload.memory.regions[msg->payload.memory.nregions].userspace_addr = ...
> >    msg->payload.memory.regions[msg->payload.memory.nregions].memory_size = ...
> >
> > which would make the lines even longer...
> >
> > >
> > > > +
> > > > +    for (i = 0; i < dev->mem->nregions; ++i) {
> > >        for (i = 0, msg->payload.memory.nregions = 0; ...
> > >
> > > > +        struct vhost_memory_region *reg = dev->mem->regions + i;
> > > > +        ram_addr_t offset;
> > > > +        MemoryRegion *mr;
> > > > +
> > > > +        assert((uintptr_t)reg->userspace_addr == reg->userspace_addr);
> > > > +        mr = memory_region_from_host((void *)(uintptr_t)reg->userspace_addr,
> > > > +                                     &offset);
> > > > +        fd = memory_region_get_fd(mr);
> > > > +        if (fd > 0) {
> > > > +            if (fd_num < VHOST_MEMORY_MAX_NREGIONS) {
> > > instead of shifting the block below to the right, I'd write it like this:
> >
> > Without this patch, these two lines
> >
> >         msg.payload.memory.regions[fd_num].userspace_addr = reg->userspace_addr;
> >         msg.payload.memory.regions[fd_num].guest_phys_addr = reg->guest_phys_addr;
> >
> > are already more than 80 characters...
> >
> > >
> > >                if (msg->payload.memory.nregions == VHOST_MEMORY_MAX_NREGIONS) {
> > >                    return -1;
> > >                }
> >
> > msg->payload.memory.nregions is a counter for the vhost-user
> > set-mem-table message, while fd_num is a counter for
> > vhost_user_used_memslots; IIUC they cannot be merged into one
> > counter.
> >
> > If we return -1 when msg->payload.memory.nregions ==
> > VHOST_MEMORY_MAX_NREGIONS, vhost_user_used_memslots may not be
> > assigned correctly. fd_num should be incremented whenever fd > 0,
> > regardless of whether msg->payload.memory.nregions equals or
> > exceeds VHOST_MEMORY_MAX_NREGIONS.
> 
> why do you need to continue counting beyond VHOST_MEMORY_MAX_NREGIONS?
> 

I think there are two choices:
(1) stop counting beyond VHOST_MEMORY_MAX_NREGIONS, then
    vhost_user_used_memslots will never be larger than 8
(2) continue counting beyond VHOST_MEMORY_MAX_NREGIONS, then
    "hdev->vhost_ops->vhost_get_used_memslots() >
       hdev->vhost_ops->vhost_backend_memslots_limit(hdev)"
    can become true, failing backend initialization early,
    which keeps the behaviour of commit aebf81680b

which one do you prefer?

Regards,
Jay


