From: liu ping fan
Subject: Re: [Qemu-devel] [patch v4 07/16] memory: make mmio dispatch able to be out of biglock
Date: Wed, 24 Oct 2012 14:56:16 +0800

On Wed, Oct 24, 2012 at 2:31 PM, liu ping fan <address@hidden> wrote:
> On Tue, Oct 23, 2012 at 8:36 PM, Avi Kivity <address@hidden> wrote:
>> On 10/23/2012 02:12 PM, Jan Kiszka wrote:
>>> On 2012-10-22 11:23, Liu Ping Fan wrote:
>>>> Without the big lock, we try to protect the mr by increasing its
>>>> refcount. If we cannot take a reference, we fall back to the big lock.
>>>>
>>>> Another point is that the memory radix tree can be flushed by another
>>>> thread, so we should take a copy of the terminal mr to survive such
>>>> an update.
>>>>
>>>> +
>>>>  void cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
>>>>                              int len, int is_write)
>>>>  {
>>>> @@ -3413,14 +3489,28 @@ void cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
>>>>      uint8_t *ptr;
>>>>      uint32_t val;
>>>>      target_phys_addr_t page;
>>>> -    MemoryRegionSection *section;
>>>> +    MemoryRegionSection *section, obj_mrs;
>>>> +    int safe_ref;
>>>>
>>>>      while (len > 0) {
>>>>          page = addr & TARGET_PAGE_MASK;
>>>>          l = (page + TARGET_PAGE_SIZE) - addr;
>>>>          if (l > len)
>>>>              l = len;
>>>> -        section = phys_page_find(page >> TARGET_PAGE_BITS);
>>>> +        qemu_mutex_lock(&mem_map_lock);
>>>> +        safe_ref = phys_page_lookup(page, &obj_mrs);
>>>> +        qemu_mutex_unlock(&mem_map_lock);
>>>> +        if (safe_ref == 0) {
>>>> +            qemu_mutex_lock_iothread();
>>>> +            qemu_mutex_lock(&mem_map_lock);
>>>> +            /* At the 2nd try, mem map can change, so need to judge it again */
>>>> +            safe_ref = phys_page_lookup(page, &obj_mrs);
>>>> +            qemu_mutex_unlock(&mem_map_lock);
>>>> +            if (safe_ref > 0) {
>>>> +                qemu_mutex_unlock_iothread();
>>>> +            }
>>>> +        }
>>>> +        section = &obj_mrs;
>>>>
>>>>          if (is_write) {
>>>>              if (!memory_region_is_ram(section->mr)) {
>>>> @@ -3491,10 +3581,16 @@ void cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
>>>>                  qemu_put_ram_ptr(ptr);
>>>>              }
>>>>          }
>>>> +
>>>> +        memory_region_section_unref(&obj_mrs);
>>>
>>> The mapping cannot change from not-referenced to reference-counted while
>>> we are dispatching? I mean the case where we found no ref callback on
>>> entry and took the big lock, but now there is an unref callback.
>>
>> We drop the big lock in that case, so we end up in the same situation.
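
To make the intended pairing easier to follow outside the diff context,
here is a minimal standalone sketch of the protocol (plain pthreads;
lookup_and_try_ref(), struct mr_copy, big_lock and unref() are
illustrative stand-ins for phys_page_lookup(), MemoryRegionSection and
the real QEMU locking, not the actual API):

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t mem_map_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;

struct mr_copy { int dummy; };

/* Stub: returns true and takes a reference when the region supports
   refcounting; false means the caller must fall back to the big lock. */
static bool lookup_and_try_ref(unsigned long page, struct mr_copy *out)
{
    (void)page;
    out->dummy = 0;
    return true;
}

/* Stub: no-op when no reference was taken, like the real unref. */
static void unref(struct mr_copy *mrs) { (void)mrs; }

static void dispatch_one(unsigned long page)
{
    struct mr_copy mrs;
    bool safe_ref;

    pthread_mutex_lock(&mem_map_lock);
    safe_ref = lookup_and_try_ref(page, &mrs);
    pthread_mutex_unlock(&mem_map_lock);

    if (!safe_ref) {
        /* Fall back to the big lock; the map may have changed before
           we acquired it, so the lookup has to be redone. */
        pthread_mutex_lock(&big_lock);
        pthread_mutex_lock(&mem_map_lock);
        safe_ref = lookup_and_try_ref(page, &mrs);
        pthread_mutex_unlock(&mem_map_lock);
        if (safe_ref) {
            /* The retry took a reference after all: drop the big lock
               and we end up in the refcounted situation. */
            pthread_mutex_unlock(&big_lock);
        }
    }

    /* ... dispatch against the private copy in mrs ... */

    unref(&mrs);                  /* balance the reference taken above */
    if (!safe_ref) {
        pthread_mutex_unlock(&big_lock);
    }
}

int main(void)
{
    dispatch_one(0);
    return 0;
}
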
>>
>>>
>>>>          len -= l;
>>>>          buf += l;
>>>>          addr += l;
>>>> +        if (safe_ref == 0) {
>>>> +            qemu_mutex_unlock_iothread();
>>>> +        }
>>>>      }
>>>> +
>>>>  }
>>>>
>>>>  /* used for ROM loading : can write in RAM and ROM */
>>>> @@ -3504,14 +3600,18 @@ void cpu_physical_memory_write_rom(target_phys_addr_t addr,
>>>>      int l;
>>>>      uint8_t *ptr;
>>>>      target_phys_addr_t page;
>>>> -    MemoryRegionSection *section;
>>>> +    MemoryRegionSection *section, mr_obj;
>>>>
>>>>      while (len > 0) {
>>>>          page = addr & TARGET_PAGE_MASK;
>>>>          l = (page + TARGET_PAGE_SIZE) - addr;
>>>>          if (l > len)
>>>>              l = len;
>>>> -        section = phys_page_find(page >> TARGET_PAGE_BITS);
>>>> +
>>>> +        qemu_mutex_lock(&mem_map_lock);
>>>> +        phys_page_lookup(page, &mr_obj);
>>>> +        qemu_mutex_unlock(&mem_map_lock);
>>>> +        section = &mr_obj;
>>>
>>> But here we don't care about the return code of phys_page_lookup and all
>>> related topics? Because we assume the BQL is held? Reminds me that we
>>> will need some support for assert(qemu_mutex_is_locked(&lock)).
>>
>> I guess it's better to drop that assumption than to have asymmetric APIs.
>>
> Yes, now the updater of the physmap is based on mem_map_lock, and the
> same will be true for the readers.
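
As a side note on the assert(qemu_mutex_is_locked(&lock)) point, here
is a minimal sketch of how such a check could look, tracking the owner
around pthreads since pthreads itself does not expose "is locked by
me". DebugMutex and these function names are made up for illustration,
not an existing QEMU API:

#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_t owner;   /* valid only while owned != 0 */
    int owned;         /* written only with the lock held */
} DebugMutex;

static void debug_mutex_init(DebugMutex *m)
{
    pthread_mutex_init(&m->lock, NULL);
    m->owned = 0;
}

static void debug_mutex_lock(DebugMutex *m)
{
    pthread_mutex_lock(&m->lock);
    m->owner = pthread_self();
    m->owned = 1;
}

static void debug_mutex_unlock(DebugMutex *m)
{
    m->owned = 0;
    pthread_mutex_unlock(&m->lock);
}

/* True only when the calling thread itself holds the mutex. */
static int debug_mutex_is_locked(DebugMutex *m)
{
    return m->owned && pthread_equal(m->owner, pthread_self());
}

/* usage: assert(debug_mutex_is_locked(&mem_map_lock)); */
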
>>>>
>>>> @@ -4239,9 +4345,12 @@ bool virtio_is_big_endian(void)
>>>>  #ifndef CONFIG_USER_ONLY
>>>>  bool cpu_physical_memory_is_io(target_phys_addr_t phys_addr)
>>>>  {
>>>> -    MemoryRegionSection *section;
>>>> +    MemoryRegionSection *section, mr_obj;
>>>>
>>>> -    section = phys_page_find(phys_addr >> TARGET_PAGE_BITS);
>>>> +    qemu_mutex_lock(&mem_map_lock);
>>>> +    phys_page_lookup(phys_addr, &mr_obj);
>>>> +    qemu_mutex_unlock(&mem_map_lock);
>>>> +    section = &mr_obj;
>>>
>>> Err, no unref needed here?
>>
>> Need _ref in the name to remind reviewers that it leaves the refcount
>> unbalanced.
>>
> Oh, here is a bug, we need an unref. As for the unbalanced refcount, it
> will be adopted for the virtio-blk listener (not implemented in this
> patchset).
>
It is like cpu_physical_memory_map/unmap: map will hold the unbalanced
ref, and unmap releases it.
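
A minimal sketch of that ownership convention (Region, region_map and
region_unmap are illustrative names, not the real
cpu_physical_memory_map/unmap signatures):

#include <assert.h>
#include <stdatomic.h>

typedef struct {
    atomic_int refcount;
} Region;

/* "map": takes a reference that the caller now owns; the refcount is
   deliberately left unbalanced when this returns. */
static Region *region_map(Region *r)
{
    atomic_fetch_add(&r->refcount, 1);
    return r;
}

/* "unmap": the matching release that balances the map. */
static void region_unmap(Region *r)
{
    int old = atomic_fetch_sub(&r->refcount, 1);
    assert(old > 0);
}

int main(void)
{
    Region r;
    atomic_init(&r.refcount, 0);

    Region *m = region_map(&r);   /* ref held across the whole I/O */
    /* ... access guest memory through m ... */
    region_unmap(m);              /* released only when the user is done */
    return 0;
}
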

> Regards,
> pingfan
>> --
>> error compiling committee.c: too many arguments to function
>>


