From: Igor Mammedov
Subject: Re: [Qemu-devel] [PATCH v4 4/7] pc: fix QEMU crashing when more than ~50 memory hotplugged
Date: Fri, 10 Jul 2015 12:12:36 +0200

On Thu, 9 Jul 2015 16:46:43 +0300
"Michael S. Tsirkin" <address@hidden> wrote:

> On Thu, Jul 09, 2015 at 03:43:01PM +0200, Paolo Bonzini wrote:
> > 
> > 
> > On 09/07/2015 15:06, Michael S. Tsirkin wrote:
> > > > QEMU asserts in vhost due to hitting vhost backend limit
> > > > on number of supported memory regions.
> > > > 
> > > > Describe all hotplugged memory as one continuous range
> > > > to vhost with linear 1:1 HVA->GPA mapping in backend.
> > > > 
> > > > Signed-off-by: Igor Mammedov <address@hidden>
> > >
> > > Hmm - a bunch of work here to recombine MRs that memory listener
> > > interface breaks up.  In particular KVM could benefit from this too (on
> > > workloads that change the table a lot).  Can't we teach memory core to
> > > pass hva range as a single continuous range to memory listeners?
> > 
> > Memory listeners are based on memory regions, not HVA ranges.
> > 
> > Paolo
> 
> Many listeners care about HVA ranges. I know KVM and vhost do.
I'm not sure about KVM; it works just fine with fragmented memory
regions, and the same will apply to vhost once the module parameter
to increase the limit is merged.

But changing the generic memory listener interface to replace
HVA-mapped regions with an HVA container would lead to a case where
listeners no longer see the exact layout that they might need.

In addition, vhost itself would suffer from working with one big HVA
range, since it allocates its dirty log based on the size of the
memory it covers, so a bigger range means a bigger log.
That's one of the reasons that in this patch HVA ranges in the
memory map are compacted only for backend consumption;
QEMU's side of vhost uses the exact map for internal purposes.
The other reason is that I don't know vhost well enough to rewrite
it to use one big HVA range for everything.
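To make the trade-off concrete, here is a hedged sketch (hypothetical helper names, not QEMU's actual code) of the idea being discussed: collapse all hotplugged regions into one linear HVA span for the backend, and compare the dirty-log cost of the exact layout versus the compacted span, assuming vhost logs one dirty bit per 4 KiB page:

```python
# Hypothetical sketch of the compaction idea and its log-size cost.
# Assumption: vhost's dirty log is a bitmap with one bit per 4 KiB page.
VHOST_LOG_PAGE = 4096

def compact_hva_range(regions):
    """regions: list of (hva_start, size) for hotplugged memory.
    Returns one (start, size) covering the whole span, 1:1 mapped."""
    start = min(hva for hva, _ in regions)
    end = max(hva + size for hva, size in regions)
    return start, end - start

def log_bytes(size):
    # One dirty bit per page, rounded up to whole bytes.
    pages = (size + VHOST_LOG_PAGE - 1) // VHOST_LOG_PAGE
    return (pages + 7) // 8

# Example: two 1 GiB DIMMs with a large hole between their HVAs.
GiB = 1 << 30
regions = [(0x100000000, GiB), (0x1100000000, GiB)]

exact = sum(log_bytes(size) for _, size in regions)  # log for exact layout
_, span = compact_hva_range(regions)
compacted = log_bytes(span)                          # log for one big range
```

With these numbers the exact layout needs 64 KiB of log while the compacted 65 GiB span needs about 2 MiB, which is the "bigger log" cost of describing everything as one range.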

> I guess we could create dummy MRs to fill in the holes left by
> memory hotplug?
It looks like a nice thing from vhost's point of view, but it
complicates the other side, so I dislike the idea of inventing dummy
MRs just for vhost's convenience.


> vhost already has logic to recombine
> consecutive chunks created by memory core.
That logic looks a bit complicated, and I was thinking about
simplifying it some time in the future.
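For readers unfamiliar with that logic, the recombination being referred to can be sketched roughly as follows (a hypothetical helper, not vhost's actual implementation): merge table entries whose guest-physical and host-virtual addresses are both contiguous, since those are one underlying mapping that the memory core split up:

```python
# Hedged sketch of "recombining consecutive chunks": merge entries
# that are contiguous in both GPA and HVA space, as they describe a
# single mapping fragmented by the memory core. Hypothetical code.

def merge_adjacent(entries):
    """entries: list of (gpa, hva, size), assumed sorted by gpa."""
    merged = []
    for gpa, hva, size in entries:
        if merged:
            pgpa, phva, psize = merged[-1]
            # Contiguous in both address spaces -> same mapping.
            if pgpa + psize == gpa and phva + psize == hva:
                merged[-1] = (pgpa, phva, psize + size)
                continue
        merged.append((gpa, hva, size))
    return merged
```

For example, two chunks `(0, 100, 10)` and `(10, 110, 5)` collapse into `(0, 100, 15)`, while a chunk whose HVA jumps elsewhere stays separate.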


