qemu-devel

From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [PATCH v4 4/7] pc: fix QEMU crashing when more than ~50 memory hotplugged
Date: Mon, 13 Jul 2015 09:55:18 +0300

On Fri, Jul 10, 2015 at 12:12:36PM +0200, Igor Mammedov wrote:
> On Thu, 9 Jul 2015 16:46:43 +0300
> "Michael S. Tsirkin" <address@hidden> wrote:
> 
> > On Thu, Jul 09, 2015 at 03:43:01PM +0200, Paolo Bonzini wrote:
> > > 
> > > 
> > > On 09/07/2015 15:06, Michael S. Tsirkin wrote:
> > > > > QEMU asserts in vhost due to hitting vhost backend limit
> > > > > on number of supported memory regions.
> > > > > 
> > > > > Describe all hotplugged memory as one continuous range
> > > > > to vhost with a linear 1:1 HVA->GPA mapping in the backend.
> > > > > 
> > > > > Signed-off-by: Igor Mammedov <address@hidden>
> > > >
> > > > Hmm - a bunch of work here to recombine MRs that memory listener
> > > > interface breaks up.  In particular KVM could benefit from this too (on
> > > > workloads that change the table a lot).  Can't we teach memory core to
> > > > pass hva range as a single continuous range to memory listeners?
> > > 
> > > Memory listeners are based on memory regions, not HVA ranges.
> > > 
> > > Paolo
> > 
> > Many listeners care about HVA ranges. I know KVM and vhost do.
> I'm not sure about KVM; it works just fine with fragmented memory regions,
> and the same will apply to vhost once the module parameter to increase the
> limit is merged.
> 
> But changing the generic memory listener interface to replace HVA-mapped
> regions with an HVA container would lead to a case where listeners
> won't see the exact layout that they might need.

I don't think they care, really.

> In addition, vhost itself will suffer from working with a big HVA range,
> since it allocates the log depending on the size of memory => bigger log.

Not really - it allocates the log depending on the PA range.
Leaving unused holes doesn't reduce its size.
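Michael's point here is that the dirty log is a bitmap indexed by guest-physical page number, so its size is governed by the highest PA the device must cover, not by how much of that range is actually backed. A minimal sketch of that relationship (the constants and function name are illustrative assumptions, not vhost's exact code):

```c
#include <stdint.h>

/* Illustrative constants; vhost uses similar but not identical ones. */
#define LOG_PGSIZE   0x1000ULL   /* one dirty bit per 4 KiB page */
#define BITS_PER_U64 64ULL

/* The log must cover every page up to the highest guest-physical
 * address in use; holes below that address still cost log space,
 * because only the upper bound enters the computation. */
static uint64_t log_size_u64s(uint64_t highest_pa)
{
    uint64_t pages = (highest_pa + LOG_PGSIZE - 1) / LOG_PGSIZE;
    return (pages + BITS_PER_U64 - 1) / BITS_PER_U64;
}
```

Since only `highest_pa` appears, punching holes out of the PA range below that bound leaves the log size unchanged.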


> That's one of the reasons why, in this patch, the HVA ranges in the
> memory map are compacted only for backend consumption;
> QEMU's side of vhost uses the exact map for internal purposes.
> The other reason is that I don't know vhost well enough to rewrite it
> to use a big HVA range for everything.
> 
> > I guess we could create dummy MRs to fill in the holes left by
> > memory hotplug?
> it looks like a nice thing from vhost's POV but complicates the other side,

What other side do you have in mind?

> hence I dislike the idea of inventing dummy MRs for vhost's convenience.
> 
> 
> > vhost already has logic to recombine
> > consecutive chunks created by memory core.
> which looks a bit complicated, and I was thinking about simplifying
> it some time in the future.
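The recombining logic discussed above can be sketched roughly as follows (a simplified illustration, not the actual vhost code; the struct and function names are made up for this example). The idea is to coalesce adjacent table entries that are contiguous in both GPA and HVA space, so the backend sees fewer regions:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical region descriptor; vhost's real tables differ. */
struct mem_region {
    uint64_t gpa;   /* guest-physical start */
    uint64_t hva;   /* host-virtual start   */
    uint64_t size;
};

/* Coalesce in place any entries that are contiguous in both GPA and
 * HVA space. Input must be sorted by gpa; returns the new count. */
static size_t merge_regions(struct mem_region *r, size_t n)
{
    size_t out = 0;
    for (size_t i = 0; i < n; i++) {
        if (out > 0 &&
            r[out - 1].gpa + r[out - 1].size == r[i].gpa &&
            r[out - 1].hva + r[out - 1].size == r[i].hva) {
            r[out - 1].size += r[i].size;   /* extend previous entry */
        } else {
            r[out++] = r[i];                /* start a new entry */
        }
    }
    return out;
}
```

Note that both conditions must hold: regions contiguous in GPA but not in HVA (or vice versa) cannot be merged without breaking the linear mapping.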


