From: Avi Kivity
Subject: Re: [Qemu-devel] [PATCH 2/5] exec.c: use refcnt to protect device during dispatching
Date: Wed, 25 Jul 2012 13:58:03 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:13.0) Gecko/20120615 Thunderbird/13.0.1

On 07/25/2012 06:31 AM, Liu Ping Fan wrote:
> From: Liu Ping Fan <address@hidden>
> 
> Acquire the device's refcnt under the qemu_device_tree_mutex rwlock, so
> we can safely handle the device during mmio dispatch.
> 
> If the leaf in the radix tree is a subpage, take one further step to
> acquire its opaque, which is of type DeviceState.
> 
>  
> +static MemoryRegionSection *subpage_get_backend(subpage_t *mmio,
> +    target_phys_addr_t addr)
> +{
> +    MemoryRegionSection *section;
> +    unsigned int idx = SUBPAGE_IDX(addr);
> +
> +    section = &phys_sections[mmio->sub_section[idx]];
> +    return section;
> +}
> +
> +void *get_backend(MemoryRegion* mr,  target_phys_addr_t addr)
> +{
> +    MemoryRegionSection *p;
> +    Object *ret;
> +
> +    if (mr->subpage) {
> +        p = subpage_get_backend(mr->opaque, addr);
> +        ret = OBJECT(p->mr->opaque);
> +    } else {
> +        ret = OBJECT(mr->opaque);
> +    }
> +    return ret;
> +}
> +

You don't enforce that mr->opaque is an object.

The name 'backend' is inappropriate here (actually I don't like it
anywhere).  If we can s/opaque/object/ (and change the type too), we can
call it get_object() (and return an Object *).
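
Something along these lines, purely as a sketch -- the get_object() name
and the object_dynamic_cast()/TYPE_DEVICE check are illustrative, not
taken from the posted patch, and it only works once every opaque handed
to these regions really is a QOM object:

    /* Sketch: same lookup as the posted get_backend(), but returning an
     * Object * and verifying the opaque via QOM instead of a blind cast. */
    Object *get_object(MemoryRegion *mr, target_phys_addr_t addr)
    {
        MemoryRegionSection *p;
        void *opaque;

        if (mr->subpage) {
            p = subpage_get_backend(mr->opaque, addr);
            opaque = p->mr->opaque;
        } else {
            opaque = mr->opaque;
        }
        /* Returns NULL if the opaque is not (derived from) TYPE_DEVICE. */
        return object_dynamic_cast(OBJECT(opaque), TYPE_DEVICE);
    }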

>  static const MemoryRegionOps subpage_ops = {
>      .read = subpage_read,
>      .write = subpage_write,
> @@ -3396,13 +3420,25 @@ void cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
>      uint32_t val;
>      target_phys_addr_t page;
>      MemoryRegionSection *section;
> +    Object *bk;
>  
>      while (len > 0) {
>          page = addr & TARGET_PAGE_MASK;
>          l = (page + TARGET_PAGE_SIZE) - addr;
>          if (l > len)
>              l = len;
> +
> +        qemu_rwlock_rdlock_devtree();
>          section = phys_page_find(page >> TARGET_PAGE_BITS);

Does the devtree lock also protect the data structures accessed by
phys_page_find()?  Seems wrong.

> +        if (!(memory_region_is_ram(section->mr) ||
> +            memory_region_is_romd(section->mr)) && !is_write) {
> +            bk = get_backend(section->mr, addr);
> +            object_ref(bk);
> +        } else if (!memory_region_is_ram(section->mr) && is_write) {
> +            bk = get_backend(section->mr, addr);
> +            object_ref(bk);
> +        }

Best push the ugliness that computes bk into a small helper, and do just
the object_ref() here.
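
For instance, a small helper like this (sketch only; the
mmio_object_to_ref() name is invented, the conditions just mirror the
posted code):

    /* Sketch: return the object to pin for an MMIO access, or NULL when
     * the access goes to RAM (or ROMD on the read side) and needs no ref. */
    static Object *mmio_object_to_ref(MemoryRegionSection *section,
                                      target_phys_addr_t addr, bool is_write)
    {
        if (is_write) {
            if (!memory_region_is_ram(section->mr)) {
                return get_backend(section->mr, addr);
            }
        } else if (!(memory_region_is_ram(section->mr) ||
                     memory_region_is_romd(section->mr))) {
            return get_backend(section->mr, addr);
        }
        return NULL;
    }

Then the loop body only does bk = mmio_object_to_ref(section, addr,
is_write); if (bk) { object_ref(bk); }.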

> +        qemu_rwlock_unlock_devtree();
>  
>          if (is_write) {
>              if (!memory_region_is_ram(section->mr)) {
> @@ -3426,6 +3462,7 @@ void cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
>                      io_mem_write(section->mr, addr1, val, 1);
>                      l = 1;
>                  }
> +                object_unref(bk);
>              } else if (!section->readonly) {
>                  ram_addr_t addr1;
>                  addr1 = memory_region_get_ram_addr(section->mr)
> @@ -3464,6 +3501,7 @@ void cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
>                      stb_p(buf, val);
>                      l = 1;
>                  }
> +                object_unref(bk);
>              } else {
>                  /* RAM case */
>                  ptr = qemu_get_ram_ptr(section->mr->ram_addr
> diff --git a/memory.h b/memory.h
> index 740c48e..e5a86dc 100644
> --- a/memory.h
> +++ b/memory.h
> @@ -748,6 +748,8 @@ void memory_global_dirty_log_stop(void);
>  
>  void mtree_info(fprintf_function mon_printf, void *f);
>  
> +void *get_backend(MemoryRegion* mr,  target_phys_addr_t addr);
> +

This is a private interface, shouldn't be in memory.h.
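
E.g. keep the prototype in an exec-internal header (the header name here
is only illustrative), or make it static if nothing outside exec.c ends
up needing it:

    /* memory-internal.h (illustrative) -- not exposed via memory.h */
    void *get_backend(MemoryRegion *mr, target_phys_addr_t addr);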


-- 
error compiling committee.c: too many arguments to function




