From: Laszlo Ersek
Subject: Re: [Qemu-devel] [PATCH v2 0/7] memory: Clean up MemoryRegion.ram_addr and optimize address_space_translate
Date: Mon, 7 Mar 2016 09:53:58 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.6.0

(CC Janosch)

Hi,

On 03/01/16 07:18, Fam Zheng wrote:
> v2: In the optimization patch, factor out section_covers_addr() and use it.
>     [Paolo, Peter]
>     Check "ram_block == NULL" in patch 3. [Gonglei]
>     Add Gonglei's rev-by in patches 1, 2, 4 and 5.
> 
> The first four patches drop ram_addr from MemoryRegion on top of Gonglei's
> optimization.
> 
> The next patch simplifies qemu_ram_free a bit by passing the RAMBlock pointer.
> 
> The last patch speeds up address_space_translate with a cache pointer inside
> the AddressSpaceDispatch.
> 
> Fam Zheng (7):
>   exec: Return RAMBlock pointer from allocating functions
>   memory: Move assignment to ram_block to memory_region_init_*
>   memory: Implement memory_region_get_ram_addr with mr->ram_block
>   memory: Drop MemoryRegion.ram_addr
>   exec: Pass RAMBlock pointer to qemu_ram_free
>   exec: Factor out section_covers_addr
>   exec: Introduce AddressSpaceDispatch.mru_section
> 
>  cputlb.c                |   4 +-
>  exec.c                  | 106 +++++++++++++++++++++++++-----------------------
>  hw/misc/ivshmem.c       |   9 ++--
>  include/exec/memory.h   |   9 +---
>  include/exec/ram_addr.h |  24 +++++------
>  kvm-all.c               |   3 +-
>  memory.c                |  56 ++++++++++++++-----------
>  7 files changed, 111 insertions(+), 100 deletions(-)
> 
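
(For my own understanding of the last two patches: I'd imagine the fast
path looks roughly like the sketch below. Only section_covers_addr() and
mru_section are taken from the patch titles; everything else, simplified
types included, is guesswork on my part, not the actual code.)

#include <stdbool.h>
#include <stdint.h>

typedef struct MemoryRegionSection {
    uint64_t offset_within_address_space;
    uint64_t size;
} MemoryRegionSection;

typedef struct AddressSpaceDispatch {
    MemoryRegionSection *mru_section;   /* most-recently-used section cache */
    /* ... phys page map, sections table, ... */
} AddressSpaceDispatch;

/* "exec: Factor out section_covers_addr": does this section cover addr? */
static bool section_covers_addr(const MemoryRegionSection *section,
                                uint64_t addr)
{
    return addr >= section->offset_within_address_space &&
           addr - section->offset_within_address_space < section->size;
}

/* Slow path: in QEMU this walks the phys page table; stubbed out here. */
static MemoryRegionSection *phys_page_find(AddressSpaceDispatch *d,
                                           uint64_t addr)
{
    (void)d;
    (void)addr;
    return NULL;   /* the real lookup returns the section covering addr */
}

/* "exec: Introduce AddressSpaceDispatch.mru_section": try the cached
 * section first, fall back to the full lookup, remember the result. */
static MemoryRegionSection *lookup_section(AddressSpaceDispatch *d,
                                           uint64_t addr)
{
    MemoryRegionSection *section = d->mru_section;

    if (section && section_covers_addr(section, addr)) {
        return section;      /* fast path: same section as last time */
    }
    section = phys_page_find(d, addr);
    d->mru_section = section;
    return section;
}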

Does this series preserve "scripts/dump-guest-memory.py" in working
shape? One of the patch titles above says "Drop MemoryRegion.ram_addr",
and I think that might break the script's memory_region_get_ram_ptr()
method, which reads mr->ram_addr through gdb.
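
If I read the series right, the C-side accessor presumably ends up
derived from mr->ram_block, along these lines -- again just my guess
from the patch title "Implement memory_region_get_ram_addr with
mr->ram_block", with the surrounding types heavily simplified:

#include <stdint.h>

typedef uint64_t ram_addr_t;
#define RAM_ADDR_INVALID (~(ram_addr_t)0)

typedef struct RAMBlock {
    ram_addr_t offset;      /* where the block lives in ram_addr_t space */
    /* ... host pointer, length, ... */
} RAMBlock;

typedef struct MemoryRegion {
    RAMBlock *ram_block;    /* what remains once ram_addr is dropped */
    /* ... */
} MemoryRegion;

static ram_addr_t memory_region_get_ram_addr(MemoryRegion *mr)
{
    /* the ram_addr_t offset now has to come from the RAMBlock itself */
    return mr->ram_block ? mr->ram_block->offset : RAM_ADDR_INVALID;
}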

... This might prove a false alarm, but I thought I'd ask.

Thanks
Laszlo