
Re: [Qemu-devel] [patch v5 5/8] memory: introduce local lock for address space


From: Jan Kiszka
Subject: Re: [Qemu-devel] [patch v5 5/8] memory: introduce local lock for address space
Date: Fri, 02 Nov 2012 09:00:50 +0100
User-agent: Mozilla/5.0 (X11; U; Linux i686 (x86_64); de; rv:1.8.1.12) Gecko/20080226 SUSE/2.0.0.12-1.1 Thunderbird/2.0.0.12 Mnenhy/0.7.5.666

On 2012-11-02 01:52, liu ping fan wrote:
> On Fri, Nov 2, 2012 at 2:44 AM, Jan Kiszka <address@hidden> wrote:
>> On 2012-11-01 16:45, Avi Kivity wrote:
>>> On 10/29/2012 11:46 AM, liu ping fan wrote:
>>>> On Mon, Oct 29, 2012 at 5:32 PM, Avi Kivity <address@hidden> wrote:
>>>>> On 10/29/2012 01:48 AM, Liu Ping Fan wrote:
>>>>>> For those address spaces which want to be able to run outside the big
>>>>>> lock, they will be protected by their own local lock.
>>>>>>
>>>>>> Signed-off-by: Liu Ping Fan <address@hidden>
>>>>>> ---
>>>>>>  memory.c |   11 ++++++++++-
>>>>>>  memory.h |    5 ++++-
>>>>>>  2 files changed, 14 insertions(+), 2 deletions(-)
>>>>>>
>>>>>> diff --git a/memory.c b/memory.c
>>>>>> index 2f68d67..ff34aed 100644
>>>>>> --- a/memory.c
>>>>>> +++ b/memory.c
>>>>>> @@ -1532,9 +1532,15 @@ void memory_listener_unregister(MemoryListener *listener)
>>>>>>      QTAILQ_REMOVE(&memory_listeners, listener, link);
>>>>>>  }
>>>>>>
>>>>>> -void address_space_init(AddressSpace *as, MemoryRegion *root)
>>>>>> +void address_space_init(AddressSpace *as, MemoryRegion *root, bool lock)
>>>>>
>>>>>
>>>>> Why not always use the lock?  Even if the big lock is taken, it doesn't
>>>>> hurt.  And eventually all address spaces will be fine-grained.
>>>>>
>>>> I had thought that only MMIO would be moved out of the big lock's
>>>> protection, while the other address spaces would incur extra expense.
>>>> So leave them alone until they are ready to be moved out of the big lock.
>>>
>>> The other address spaces are pio (which also needs fine-grained locking)
>>> and the dma address spaces (which are like address_space_memory, except
>>> they are accessed via DMA instead of from the vcpu).
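
For reference, a minimal sketch of what such an optional per-address-space lock could look like; the as->lock field name and the initialization below are illustrative assumptions, not taken from the patch itself:

    /* Sketch only: as->lock is an assumed field name, not the actual patch. */
    void address_space_init(AddressSpace *as, MemoryRegion *root, bool lock)
    {
        as->root = root;
        if (lock) {
            as->lock = g_new0(QemuMutex, 1);
            qemu_mutex_init(as->lock);
        } else {
            as->lock = NULL;   /* callers keep relying on the BQL */
        }
        /* ... rest of the existing initialization ... */
    }

An address space initialized with lock == false would behave exactly as before; only address spaces that opt in pay for the extra mutex.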
>>
>> The problem is with memory regions that don't do fine-grained locking
>> yet, thus don't provide ref/unref. Then we fall back to taking BQL
>> across dispatch. If the dispatch caller already holds the BQL, we will
>> bail out.
>>
> Yes, these asymmetric callers are bothersome. Currently, I just make
> exceptions for them, and would like to make the big lock recursive.
> But that approach may make bugs harder to find.
> 
>> As I understand the series, as->lock == NULL means that we will never
>> take any lock during dispatch as the caller is not yet ready for
>> fine-grained locking. This prevents the problem - for PIO at least. But
>> this series should break TCG as it calls into MMIO dispatch from the
>> VCPU while holding the BQL.
>>
> What about adding another condition, "dispatch_type == DISPATCH_MMIO", to
> detect this situation?
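
A rough sketch of the dispatch-time decision being debated above; DISPATCH_MMIO and the surrounding logic are assumptions inferred from this thread, not existing code:

    /* Sketch: choosing between the local lock and the BQL during dispatch. */
    if (as->lock) {
        /* Fine-grained path: the regions behind this AS provide ref/unref. */
        qemu_mutex_lock(as->lock);
    } else if (dispatch_type == DISPATCH_MMIO) {
        /* MMIO dispatch from the vCPU (e.g. TCG): the caller already holds
         * the BQL, so do not try to take it again. */
    } else {
        /* Legacy path: fall back to taking the BQL across dispatch. */
        qemu_mutex_lock_iothread();
    }

This encodes the caller's locking state implicitly in the dispatch type, which is what the alternative below avoids.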

An alternative pattern that we will also use for core services is to
provide an additional entry point, one that indicates that the caller
doesn't hold the BQL. Then we will gradually move things over until the
existing entry point is obsolete.
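
As a hedged illustration of that pattern (the unlocked variant's name below is hypothetical, not an existing QEMU API; the current entry point is shown only for contrast):

    /* Existing contract: the caller holds the BQL. */
    void address_space_rw(AddressSpace *as, hwaddr addr,
                          uint8_t *buf, int len, bool is_write);

    /* Hypothetical new contract: the caller holds no lock; dispatch takes
     * as->lock itself, or falls back to the BQL for regions that are not
     * yet converted to fine-grained locking. */
    void address_space_rw_unlocked(AddressSpace *as, hwaddr addr,
                                   uint8_t *buf, int len, bool is_write);

Callers would then be converted to the unlocked entry point one by one until the old one has no users left and can be removed.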

Jan


