

From: Paolo Bonzini
Subject: Re: [Qemu-devel] [PATCH v1 2/2] reduce qemu's heap Rss size from 12252kB to 2752KB
Date: Sat, 11 Mar 2017 12:04:15 -0500 (EST)

> > Subpages never have subregions, so the loop never runs.  The begin/commit
> > pair then becomes:
> > 
> >     ++memory_region_transaction_depth;
> >     --memory_region_transaction_depth;
> >     if (!memory_region_transaction_depth) {
> >         if (memory_region_update_pending) {
> >             ...
> >         } else if (ioeventfd_update_pending) {
> >             ...
> >         }
> >         // memory_region_clear_pending()
> >         memory_region_update_pending = false;
> >         ioeventfd_update_pending = false;
> >     }
> > 
> > If memory_region_transaction_depth is > 0 the begin/commit pair does
> > nothing.
> > 
> > But if memory_region_transaction_depth is == 0, there should be no update
> > pending because the loop has never run.  So I don't see what your patch can
> > change.
> 
> As I mentioned in PATCH 1, this patch fixes an issue that appears after we
> remove the global lock from the RCU callback. Once the global lock is gone,
> another thread may set an update pending, so memory_region_transaction_commit
> may try to rebuild the PhysPageMap even though the loop never ran, while
> another thread rebuilds the PhysPageMap at the same time: a race condition.
> A subpage MemoryRegion is a special MemoryRegion; it does not belong to any
> address space and is only used to handle subpages. We could use a new
> structure other than MemoryRegion for subpages to make the logic clearer.
> After that change, the RCU callback would not free any MemoryRegion.

This is not true.  Try hot-unplugging a device.

I'm all for reducing the scope of the global QEMU lock, but this needs a plan
and a careful analysis of the involved data structures across _all_
instance_finalize implementations.

Paolo


