From: Igor Mammedov
Subject: Re: [Qemu-devel] [PATCH v4 0/7] Fix QEMU crash during memory hotplug with vhost=on
Date: Wed, 15 Jul 2015 17:12:01 +0200

On Thu,  9 Jul 2015 13:47:17 +0200
Igor Mammedov <address@hidden> wrote:

There is also yet another issue with vhost-user: it has a
very low limit on the number of memory regions (8, if I recall correctly),
and it's possible to trigger it even without memory hotplug.
One just needs to start QEMU with several -numa memdev= options
to create enough memory regions to hit the limit.
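
For instance, something along these lines should already trip it
(untested sketch typed from memory; the sizes, hugepage path and socket
path are arbitrary, only the number of memdev-backed nodes matters):

  qemu-system-x86_64 -enable-kvm -m 4608M \
    -object memory-backend-file,id=mem0,size=512M,mem-path=/dev/hugepages,share=on \
    -numa node,memdev=mem0 \
    ...                      (repeat the -object/-numa pair for mem1 .. mem8)
    -chardev socket,id=chr0,path=/tmp/vhost-user.sock \
    -netdev type=vhost-user,id=net0,chardev=chr0 \
    -device virtio-net-pci,netdev=net0

With 9 NUMA nodes each backed by its own memdev, the memory table
vhost-user has to send already contains 9 regions, i.e. more than
the backend accepts.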

A low-risk option to fix it would be increasing the limit in the
vhost-user backend.
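
On the QEMU side that is essentially a one-liner, something like
(symbol name quoted from memory, not a tested patch):

  --- a/hw/virtio/vhost-user.c
  +++ b/hw/virtio/vhost-user.c
  -#define VHOST_MEMORY_MAX_NREGIONS    8
  +#define VHOST_MEMORY_MAX_NREGIONS    64

the catch being that, as far as I remember, the same constant sizes the
regions array of the VhostUserMemory message, so the vhost-user slave has
to be built with a matching value or the set-mem-table message layout
won't match.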

Another option is disabling vhost and falling back to virtio,
but I don't know enough about vhost to say whether it's possible
to switch it off without losing packets the guest was sending
at that moment, or whether such a fallback is workable with vhost at all.



> Changelog:
>  v3->v4:
>    * drop patch extending memory_region_subregion_add()
>      with error argument
>    * and add memory_region_add_subregion_to_hva() API instead
>    * add madvise(DONTNEED) when returning range to HVA container
>  v2->v3:
>    * fixed (worked around) unmapping issues,
>      now the memory subsystem keeps track of HVA-mapped
>      regions and doesn't allow mapping a new region
>      at an address where a previous one has been mapped until
>      the previous region is gone
>    * fixed offset calculations in memory_region_find_hva_range()
>      in 2/8
>    * redone MemorySection folding into HVA range for VHOST,
>      now compacted memory map is temporary and passed only to vhost
>      backend and doesn't touch original memory map used by QEMU
>  v1->v2:
>    * take into account Paolo's review comments
>      * do not overload ram_addr
>      * ifdef linux specific code
>    * reserve HVA using API from exec.c instead of calling
>      mmap() directly from memory.c
>    * support unmapping of HVA remapped region
> 
> When more than ~50 pc-dimm devices are hotplugged with
> vhost enabled, QEMU will assert in vhost_commit()
> due to the backend refusing to accept too many memory ranges.
> 
> The series introduces a reserved-HVA MemoryRegion container
> into which all hotplugged memory is remapped, and it passes
> that single container range to vhost instead of a separate
> memory range for each hotplugged pc-dimm device.
> 
> It's an alternative approach to increasing the backend's supported
> memory regions limit.
> 
> Tested it a bit more, so now:
>  - migration from current master to the patched version seems to work
>  - memory is returned to the host after a device_del+object_del sequence,
>    but I can't guarantee that cgroups won't still charge for it.
> 
> git branch for testing:
>   https://github.com/imammedo/qemu/commits/vhost_one_hp_range_v4
> 
> 
> Igor Mammedov (7):
>   memory: get rid of memory_region_destructor_ram_from_ptr()
>   memory: introduce MemoryRegion container with reserved HVA range
>   pc: reserve hotpluggable memory range with
>     memory_region_init_hva_range()
>   pc: fix QEMU crashing when more than ~50 memory hotplugged
>   exec: make sure that RAMBlock descriptor won't be leaked
>   exec: add qemu_ram_unmap_hva() API for unmapping memory from HVA area
>   memory: add support for deleting HVA mapped MemoryRegion
> 
>  exec.c                    |  71 +++++++++++++++++++----------
>  hw/i386/pc.c              |   4 +-
>  hw/mem/pc-dimm.c          |   6 ++-
>  hw/virtio/vhost.c         |  47 ++++++++++++++++++--
>  include/exec/cpu-common.h |   3 ++
>  include/exec/memory.h     |  67 +++++++++++++++++++++++++++-
>  include/exec/ram_addr.h   |   1 -
>  include/hw/virtio/vhost.h |   1 +
>  memory.c                  | 111 +++++++++++++++++++++++++++++++++++++++++++---
>  9 files changed, 272 insertions(+), 39 deletions(-)
> 



