From: Igor Mammedov
Subject: Re: [Qemu-devel] [PATCH for-2.4 v2 0/2] vhost: check if vhost has capacity for hotplugged memory
Date: Fri, 31 Jul 2015 11:09:40 +0200

On Thu, 30 Jul 2015 18:54:40 +0300
"Michael S. Tsirkin" <address@hidden> wrote:

> On Thu, Jul 30, 2015 at 11:17:41AM +0200, Igor Mammedov wrote:
> > On Thu, 30 Jul 2015 10:58:23 +0300
> > "Michael S. Tsirkin" <address@hidden> wrote:
> > 
> > > On Thu, Jul 30, 2015 at 09:31:34AM +0200, Igor Mammedov wrote:
> > > > On Thu, 30 Jul 2015 09:29:56 +0300
> > > > "Michael S. Tsirkin" <address@hidden> wrote:
> > > > 
> > > > > On Thu, Jul 30, 2015 at 09:25:55AM +0300, Michael S. Tsirkin wrote:
> > > > > > On Thu, Jul 30, 2015 at 08:22:18AM +0200, Igor Mammedov wrote:
> > > > > > > On Wed, 29 Jul 2015 18:03:36 +0300
> > > > > > > "Michael S. Tsirkin" <address@hidden> wrote:
> > > > > > > 
> > > > > > > > On Wed, Jul 29, 2015 at 01:49:47PM +0200, Igor Mammedov wrote:
> > > > > > > > > v1->v2:
> > > > > > > > >   * replace probing with checking for
> > > > > > > > >     /sys/module/vhost/parameters/max_mem_regions and,
> > > > > > > > >     if it's missing or has a wrong value, return the
> > > > > > > > >     hardcoded legacy limit (64 slots).
> > > > > > > > > 
> > > > > > > > > it's a defensive patchset which helps to avoid QEMU
> > > > > > > > > crashing at memory hotplug time, by checking that vhost
> > > > > > > > > has free capacity for an additional memory slot.
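
[A minimal standalone sketch of that sysfs check; the file path is from the
series, but the helper name and parsing details here are assumptions, not
the posted code:

    #include <stdio.h>

    #define VHOST_MEMSLOTS_LEGACY_LIMIT 64

    /* Read the vhost module's max_mem_regions parameter; fall back to
     * the historical 64-slot limit when the file is missing or holds a
     * bogus (zero/unparseable) value. */
    static unsigned vhost_max_mem_regions(void)
    {
        unsigned limit = VHOST_MEMSLOTS_LEGACY_LIMIT;
        FILE *f = fopen("/sys/module/vhost/parameters/max_mem_regions", "r");

        if (f) {
            unsigned val;
            if (fscanf(f, "%u", &val) == 1 && val > 0) {
                limit = val;
            }
            fclose(f);
        }
        return limit;
    }
]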
> > > > > > > > 
> > > > > > > > What if vhost is added after memory hotplug? Don't you need
> > > > > > > > to check that as well?
> > > > > > > a vhost device can be hotplugged after memory hotplug as
> > > > > > > long as the current slot count doesn't exceed its limit;
> > > > > > > if the limit is exceeded, device_add would fail or the
> > > > > > > virtio device would fall back to non-vhost mode at its
> > > > > > > start-up (depends on how the particular device treats
> > > > > > > vhost_start failure).
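
[The fall-back path mentioned here, loosely modeled on how virtio-net treats
a vhost start failure; a sketch fragment, not verbatim code:

    /* If the vhost backend fails to start, report the error and keep
     * the device running via the userspace virtio path instead. */
    r = vhost_net_start(vdev, ncs, queues);
    if (r < 0) {
        error_report("unable to start vhost net: %d: "
                     "falling back on userspace virtio", -r);
        /* no return: the device continues in non-vhost mode */
    }
]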
> > > > > > 
> > > > > > Where exactly does it fail?
> > > > > > memory_listener_register returns void so clearly it's not that ...
> > > > > 
> > > > > Oh, dev_start fails. But that's not called at device_add time.
> > > > > And vhost-user can't fall back to anything.
> > > > Yes, it looks like it would lead to a non-functional
> > > > vhost-user-backed device, since there isn't any error handling
> > > > at that stage.
> > > > 
> > > > But it would be the same without memory hotplug too; one just has
> > > > to start QEMU with several -numa memdev=xxx options to cause that
> > > > condition.
> > > 
> > > Absolutely. And kvm has this problem too if using kernels before 2014.
> > > 
> > > But I have a question: do we have to figure out the number of
> > > chunks exactly? How about being blunt and just limiting the
> > > number of memory devices?
> > it would be guesswork; the number of chunks is not static and
> > changes during guest runtime as the guest configures devices, so
> > the chunk count at startup != the chunk count at any other time
> > 
> 
> But #chunks < # memory devices, correct?
nope, it's the other way around: 1 memdev = 1 MR, and that
1 MR can be fragmented into multiple chunks in the flat view
if something is overlaid over it.
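
[To illustrate with QEMU's memory API; region names and sizes are made up,
and this is a fragment, not buildable code:

    /* One memdev backs exactly one MemoryRegion ... */
    memory_region_init_ram(&ram, owner, "memdev-ram", ram_size, &error_fatal);
    memory_region_add_subregion(system_memory, 0, &ram);

    /* ... but overlaying something on part of it, e.g. an MMIO region
     * with higher priority, splits that single MR into two chunks in
     * the flattened view: [0, base) and [base + mmio_size, ram_size). */
    memory_region_add_subregion_overlap(system_memory, base, &mmio, 1);
]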

> 
> 
> > > 
> > > How about this:
> > >   - teach memory listeners about a new "max mem devices" field
> > # of mem devices != # of memory ranges
> 
> but # of memory ranges <= # of mem devices.
> If we can support them all separately, we are fine.
that should be # of memory ranges >= # of mem devices


> And the big advantage is it's easy to document.
> Just tell users "we support at most X mem devices" and that's all.
That won't work with #ranges >= #devices, I've already tried.

How about the other way around: teach vhost to handle up to 509
slots, like it's been done with KVM?
That should cover the near future, where we have at most 256
hotplugged memory devices and the rest would be free for use with
"-numa memdev" or for other purposes.


> > >   - when registering a listener, check that the # of mem devices
> > >     does not exceed this limit; if it does, fail to register the
> > >     listener
> > >   - when adding a mem device, check that no existing listener
> > >     has a limit that conflicts with it
> > > 
> > > Of course we could add a separate linked list+ register API with just this
> > > field instead of adding it to a memory listener, if that seems
> > > more appropriate.
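
[A rough sketch of that scheme, with hypothetical names for the new field
and helpers:

    struct MemoryListener {
        /* ... existing callbacks ... */
        int max_mem_devices;    /* hypothetical: 0 means "no limit" */
    };

    /* Registering a listener fails if memory devices already exceed
     * its advertised limit. */
    static int memory_listener_register_checked(MemoryListener *l)
    {
        if (l->max_mem_devices &&
            current_mem_device_count() > l->max_mem_devices) {
            return -EBUSY;
        }
        listener_list_add(l);   /* hypothetical helper */
        return 0;
    }

    /* Adding a memory device fails if any registered listener's limit
     * would be exceeded. */
    static int mem_device_pre_plug_check(void)
    {
        MemoryListener *l;

        QTAILQ_FOREACH(l, &memory_listeners, link) {
            if (l->max_mem_devices &&
                current_mem_device_count() + 1 > l->max_mem_devices) {
                return -EBUSY;
            }
        }
        return 0;
    }
]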
> > > 
> > > 
> > > > Probably the best place to add this check is vhost_net_init(),
> > > > so that backend creation fails when one tries to add it via the
> > > > monitor/CLI.
> > > 
> > > I'd say vhost_dev_init - it's not network specific at all.
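
[A sketch of such a check at vhost_dev_init() time; used_memslots and the
backend-limit hook are assumed names following the shape of the series:

    /* Refuse to create the vhost backend when the current number of
     * memory slots already exceeds what this backend can handle. */
    static int vhost_check_memslots(struct vhost_dev *hdev, Error **errp)
    {
        unsigned limit = hdev->vhost_ops->vhost_backend_memslots_limit(hdev);

        if (used_memslots > limit) {
            error_setg(errp, "vhost backend supports only %u memory slots,"
                       " but %u are already in use", limit, used_memslots);
            return -1;  /* backend creation / device_add fails cleanly */
        }
        return 0;
    }
]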
> > > 
> > > > > 
> > > > > 
> > > > > > > > 
> > > > > > > > 
> > > > > > > > > 
> > > > > > > > > Igor Mammedov (2):
> > > > > > > > >   vhost: add vhost_has_free_slot() interface
> > > > > > > > >   pc-dimm: add vhost slots limit check before committing to hotplug
> > > > > > > > > 
> > > > > > > > >  hw/mem/pc-dimm.c                  |  7 +++++++
> > > > > > > > >  hw/virtio/vhost-backend.c         | 21 ++++++++++++++++++++-
> > > > > > > > >  hw/virtio/vhost-user.c            |  8 +++++++-
> > > > > > > > >  hw/virtio/vhost.c                 | 21 +++++++++++++++++++++
> > > > > > > > >  include/hw/virtio/vhost-backend.h |  2 ++
> > > > > > > > >  include/hw/virtio/vhost.h         |  1 +
> > > > > > > > >  stubs/Makefile.objs               |  1 +
> > > > > > > > >  stubs/vhost.c                     |  6 ++++++
> > > > > > > > >  8 files changed, 65 insertions(+), 2 deletions(-)
> > > > > > > > >  create mode 100644 stubs/vhost.c
> > > > > > > > > 
> > > > > > > > > -- 
> > > > > > > > > 1.8.3.1
> > > > > 
> > > 
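
[For reference, the pc-dimm side of the series boils down to a check like
this before committing to hotplug; vhost_has_free_slot() is the interface
added by patch 1, while the surrounding hook shape is abbreviated here:

    /* In the pc-dimm plug path: bail out before the DIMM is mapped if a
     * vhost backend has no free memory slot for it, instead of letting
     * QEMU crash later when vhost tries to register the new region. */
    if (!vhost_has_free_slot()) {
        error_setg(errp, "a used vhost backend has no free memory slots left");
        return;
    }
]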



