From: Igor Mammedov
Subject: Re: [Qemu-devel] [PATCH v4 0/7] Fix QEMU crash during memory hotplug with vhost=on
Date: Thu, 16 Jul 2015 11:42:36 +0200
On Thu, 16 Jul 2015 10:35:33 +0300
"Michael S. Tsirkin" <address@hidden> wrote:
> On Thu, Jul 16, 2015 at 09:26:21AM +0200, Igor Mammedov wrote:
> > On Wed, 15 Jul 2015 19:32:31 +0300
> > "Michael S. Tsirkin" <address@hidden> wrote:
> >
> > > On Wed, Jul 15, 2015 at 05:12:01PM +0200, Igor Mammedov wrote:
> > > > On Thu, 9 Jul 2015 13:47:17 +0200
> > > > Igor Mammedov <address@hidden> wrote:
> > > >
> > > > There is also yet another issue with vhost-user: it has a very
> > > > low limit on the number of memory regions (8, if I recall
> > > > correctly), and it's possible to trigger it even without memory
> > > > hotplug. One just needs to start QEMU with several
> > > > -numa node,memdev= options to create enough memory regions.
> > > >
> > > > A low-risk option to fix it would be increasing the limit in
> > > > the vhost-user backend.
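
For context, that limit lives in the protocol's fixed-size memory
table. A minimal sketch, modeled on QEMU's hw/virtio/vhost-user.c of
this era; treat the exact field layout as illustrative rather than a
verbatim copy of the header:

    #include <stdint.h>

    #define VHOST_MEMORY_MAX_NREGIONS 8    /* the hardcoded limit */

    typedef struct VhostUserMemoryRegion {
        uint64_t guest_phys_addr;   /* GPA where the region starts */
        uint64_t memory_size;       /* length of the region in bytes */
        uint64_t userspace_addr;    /* HVA in the QEMU process */
        uint64_t mmap_offset;       /* offset into the passed fd */
    } VhostUserMemoryRegion;

    typedef struct VhostUserMemory {
        uint32_t nregions;          /* <= VHOST_MEMORY_MAX_NREGIONS */
        uint32_t padding;
        VhostUserMemoryRegion regions[VHOST_MEMORY_MAX_NREGIONS];
    } VhostUserMemory;

Each RAM region QEMU shares with the backend occupies one slot, so a
handful of memdev-backed NUMA nodes can fill the table before any
dimm is plugged.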
> > > >
> > > > Another option is disabling vhost and falling back to virtio,
> > > > but I don't know enough about vhost to say whether it's
> > > > possible to switch it off without losing packets the guest was
> > > > sending at that moment, or whether it would work at all with
> > > > vhost-user.
> > >
> > > With vhost-user you can't fall back to virtio: it's
> > > not an accelerator, it's the backend.
> > >
> > > Updating the protocol to support a bigger table
> > > is possible but old remotes won't be able to support it.
> > >
> > It looks like increasing the limit is the only option left.
> >
> > It's not ideal that old remotes (with the hardcoded limit) might
> > not be able to handle a bigger table, but at least new ones, and
> > ones that handle the VhostUserMsg payload dynamically, would be
> > able to work without crashing.
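
To make "handles the VhostUserMsg payload dynamically" concrete,
here is a minimal sketch. Only the header framing (request, flags,
size) is taken from the vhost-user spec; the helper name is made up,
and the SCM_RIGHTS fd passing real remotes also do is omitted:

    #include <stdint.h>
    #include <stdlib.h>
    #include <unistd.h>

    struct vhost_user_hdr {   /* wire framing per the vhost-user spec */
        uint32_t request;
        uint32_t flags;
        uint32_t size;        /* number of payload bytes that follow */
    };

    /* Size the payload from the header's 'size' field instead of a
     * fixed struct, so a larger memory table from a newer QEMU still
     * parses on an older remote. */
    static void *read_payload(int fd, struct vhost_user_hdr *hdr)
    {
        void *payload;

        if (read(fd, hdr, sizeof(*hdr)) != (ssize_t)sizeof(*hdr)) {
            return NULL;
        }
        payload = malloc(hdr->size);          /* sized dynamically */
        if (!payload ||
            read(fd, payload, hdr->size) != (ssize_t)hdr->size) {
            free(payload);
            return NULL;
        }
        return payload;
    }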
>
> I think we need a way for hotplug to fail gracefully. As long as we
> don't implement the HVA trick, it's needed for old kernels with
> in-kernel vhost, too.
I don't see a reliable way to fail hotplug, though.
On hotplug, the failure path comes from the memory listener, which
by design cannot fail; yet in the vhost case it does fail, i.e. the
vhost side doesn't follow the protocol.
We have already considered the idea of querying vhost for its limit
from the memory hotplug handler before mapping the memory region
(sketched below), but it has drawbacks:
1. The number of memory ranges changes during the guest's lifecycle
   as it initializes different devices, which leads to a case where
   we can hotplug more pc-dimms than we could cold-plug.
   That in turn makes it impossible to migrate a guest with
   hotplugged pc-dimms, since the target QEMU won't start with the
   source's number of dimms due to hitting the limit.
2. From a modeling point of view it's an ugly hack to query a random
   'vhost' entity when plugging a dimm device, but we can live with
   it if it helps QEMU not to crash.
If it's acceptable to break/ignore issue #1, I can post the related
QEMU patches that I have; at least QEMU won't crash with old vhost
backends.
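
A minimal sketch of that query-before-plug check. The two vhost
helpers are hypothetical names invented for illustration; only
error_setg() is QEMU's actual error API:

    /* Hypothetical pre-plug check: ask the vhost backend for its
     * region limit and fail the dimm plug gracefully instead of
     * crashing later in the memory listener.
     * vhost_backend_memslots_limit() and vhost_backend_used_memslots()
     * are made-up names. */
    static void pc_dimm_check_memslots(Error **errp)
    {
        int limit = vhost_backend_memslots_limit();   /* hypothetical */
        int used  = vhost_backend_used_memslots();    /* hypothetical */

        if (used + 1 > limit) {
            error_setg(errp, "vhost backend has no free memory slots "
                       "(%d of %d in use)", used, limit);
        }
    }

The catch, per drawback 1 above, is that the number of used slots
varies over the guest's lifetime, so a check that passes at one
point gives no guarantee for later hotplug or for migration.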