From: Avi Kivity
Subject: Re: [Qemu-devel] [PATCH 2/5] exec.c: use refcnt to protect device during dispatching
Date: Thu, 26 Jul 2012 16:13:13 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:13.0) Gecko/20120615 Thunderbird/13.0.1

On 07/26/2012 04:06 PM, liu ping fan wrote:
> On Wed, Jul 25, 2012 at 8:27 PM, Avi Kivity <address@hidden> wrote:
>> On 07/25/2012 01:58 PM, Avi Kivity wrote:
>>>>      while (len > 0) {
>>>>          page = addr & TARGET_PAGE_MASK;
>>>>          l = (page + TARGET_PAGE_SIZE) - addr;
>>>>          if (l > len)
>>>>              l = len;
>>>> +
>>>> +        qemu_rwlock_rdlock_devtree();
>>>>          section = phys_page_find(page >> TARGET_PAGE_BITS);
>>>
>>> Does the devtree lock also protect the data structures accessed by
>>> phys_page_find()?  Seems wrong.
>>
>> The right way is to object_ref() in core_region_add() and object_unref()
>> in core_region_del().  We're guaranteed that mr->object is alive during
>> _add(), and DeviceClass::unmap() ensures that the extra ref doesn't
>> block destruction.
>>
> OK, I see. I will try it that way.  But when
> memory_region_destroy()->..->core_region_del() runs, should we reset
> lp.ptr to phys_section_unassigned?  Otherwise, a lookup on the removed
> target_phys_addr_t would still return a pointer to an invalid
> MemoryRegion.

The intent was to use rcu: when we rebuild phys_map we build it as a new
tree, use rcu_assign_pointer() to switch to the new tree, then
synchronize_rcu() and drop the old tree.
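
Roughly like this, combined with the object_ref() pinning above (a
sketch only: phys_map_root, phys_page_find_in() and phys_map_free() are
illustrative names, and the rcu primitives are the Linux kernel API,
which we don't have yet):

static PhysPageEntry *phys_map_root;    /* the published tree */

/* reader side (memory dispatch) */
static MemoryRegion *lookup_and_ref(target_phys_addr_t addr)
{
    PhysPageEntry *map;
    MemoryRegionSection *section;
    MemoryRegion *mr;

    rcu_read_lock();
    map = rcu_dereference(phys_map_root);
    section = phys_page_find_in(map, addr >> TARGET_PAGE_BITS);
    mr = section->mr;
    object_ref(mr->object);    /* pin the device before leaving */
    rcu_read_unlock();
    /* mr stays valid: the ref pins the device it is embedded in,
       even if the map is rebuilt and freed meanwhile */
    return mr;                 /* caller dispatches, then unrefs */
}

/* writer side (topology change) */
static void phys_map_commit(PhysPageEntry *new_root)
{
    PhysPageEntry *old_root = phys_map_root;

    rcu_assign_pointer(phys_map_root, new_root);
    synchronize_rcu();         /* wait for in-flight readers */
    phys_map_free(old_root);   /* no reader can see it any more */
}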

Since we don't have rcu yet, we can emulate it with a lock.  We can
start with a simple mutex around the lookup and the rebuild, then switch
to a rwlock or rcu if needed.
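
Something like this to start with (QemuMutex is what we have today;
phys_map_lock and the exact scope of the critical sections are
illustrative):

static QemuMutex phys_map_lock;    /* protects phys_map */

/* dispatch path */
qemu_mutex_lock(&phys_map_lock);
section = phys_page_find(page >> TARGET_PAGE_BITS);
/* ... use section->mr while the lock holds the map stable ... */
qemu_mutex_unlock(&phys_map_lock);

/* topology change */
qemu_mutex_lock(&phys_map_lock);
/* ... tear down and rebuild phys_map under the same lock ... */
qemu_mutex_unlock(&phys_map_lock);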

(without the lock or rcu, just changing lp.ptr is dangerous, since it is
a bit field)
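
For reference, the entry looks roughly like this in exec.c, so a store
to lp.ptr compiles to a read-modify-write of the shared uint16_t, and an
unlocked concurrent reader can observe a half-updated value:

struct PhysPageEntry {
    uint16_t is_leaf : 1;
    /* index into phys_sections (is_leaf) or phys_map_nodes (!is_leaf) */
    uint16_t ptr : 15;
};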

-- 
error compiling committee.c: too many arguments to function
