
Re: [Qemu-devel] [patch v5 5/8] memory: introduce local lock for address space


From: liu ping fan
Subject: Re: [Qemu-devel] [patch v5 5/8] memory: introduce local lock for address space
Date: Fri, 2 Nov 2012 08:52:51 +0800

On Fri, Nov 2, 2012 at 2:44 AM, Jan Kiszka <address@hidden> wrote:
> On 2012-11-01 16:45, Avi Kivity wrote:
>> On 10/29/2012 11:46 AM, liu ping fan wrote:
>>> On Mon, Oct 29, 2012 at 5:32 PM, Avi Kivity <address@hidden> wrote:
>>>> On 10/29/2012 01:48 AM, Liu Ping Fan wrote:
>>>>> For those address spaces which want to run outside the big lock, they
>>>>> will be protected by their own local lock.
>>>>>
>>>>> Signed-off-by: Liu Ping Fan <address@hidden>
>>>>> ---
>>>>>  memory.c |   11 ++++++++++-
>>>>>  memory.h |    5 ++++-
>>>>>  2 files changed, 14 insertions(+), 2 deletions(-)
>>>>>
>>>>> diff --git a/memory.c b/memory.c
>>>>> index 2f68d67..ff34aed 100644
>>>>> --- a/memory.c
>>>>> +++ b/memory.c
>>>>> @@ -1532,9 +1532,15 @@ void memory_listener_unregister(MemoryListener *listener)
>>>>>      QTAILQ_REMOVE(&memory_listeners, listener, link);
>>>>>  }
>>>>>
>>>>> -void address_space_init(AddressSpace *as, MemoryRegion *root)
>>>>> +void address_space_init(AddressSpace *as, MemoryRegion *root, bool lock)
>>>>
>>>>
>>>> Why not always use the lock?  Even if the big lock is taken, it doesn't
>>>> hurt.  And eventually all address spaces will be fine-grained.
>>>>
>>> I had thought only MMIO would be out of the big lock's protection, while
>>> the other address spaces would incur extra expense. So leave them until
>>> they are ready to be moved out of the big lock.
>>
>> The other address spaces are pio (which also needs fine-grained locking)
>> and the dma address spaces (which are like address_space_memory, except
>> they are accessed via DMA instead of from the vcpu).
>
> The problem is with memory regions that don't do fine-grained locking
> yet, thus don't provide ref/unref. Then we fall back to taking BQL
> across dispatch. If the dispatch caller already holds the BQL, we will
> bail out.
>
Yes, these asymmetric callers are troublesome. Currently, I just make
exceptions for them, and would like to make the big lock recursive.
But that approach may make bugs harder to find.
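To make the fallback concrete, here is a minimal model of the dispatch-time
decision (the names `mr_has_ref` and `LockAction` are mine for illustration,
not from the series): regions that provide ref/unref are dispatched under the
address space's local lock; the rest fall back to the BQL, and a caller that
already holds the non-recursive BQL has to bail out:

```c
#include <stdbool.h>

/* Illustrative model only -- not actual QEMU code.  A region that
 * provides ref/unref can be dispatched under the address space's
 * local lock; otherwise we fall back to the BQL, and since the BQL
 * is not recursive, a caller already holding it must bail out. */
typedef enum { LOCK_LOCAL, LOCK_BQL, LOCK_BAIL } LockAction;

static LockAction dispatch_lock_action(bool mr_has_ref, bool caller_holds_bql)
{
    if (mr_has_ref) {
        return LOCK_LOCAL;   /* fine-grained path, BQL irrelevant */
    }
    if (caller_holds_bql) {
        return LOCK_BAIL;    /* re-taking the non-recursive BQL would deadlock */
    }
    return LOCK_BQL;         /* coarse-grained fallback */
}
```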

> As I understand the series, as->lock == NULL means that we will never
> take any lock during dispatch as the caller is not yet ready for
> fine-grained locking. This prevents the problem - for PIO at least. But
> this series should break TCG as it calls into MMIO dispatch from the
> VCPU while holding the BQL.
>
What about adding another condition, "dispatch_type == DISPATCH_MMIO", to
distinguish this situation?
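Roughly like the following sketch (the enum and all names here are only
illustrative, not from the series): an MMIO dispatch that arrives with the
BQL already held -- the TCG case -- would proceed under the caller's
existing lock instead of bailing out:

```c
#include <stdbool.h>

/* Illustrative sketch only.  With the extra dispatch_type condition,
 * a TCG vCPU doing MMIO while holding the BQL is allowed through,
 * while other callers that already hold the BQL still bail out. */
typedef enum { DISPATCH_PIO, DISPATCH_MMIO } DispatchType;

static bool dispatch_may_proceed(DispatchType type, bool caller_holds_bql)
{
    if (!caller_holds_bql) {
        return true;                   /* take the BQL as usual */
    }
    return type == DISPATCH_MMIO;      /* only the TCG MMIO path proceeds */
}
```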

Regards,
Pingfan
> Jan
>
>


