
From: Stefano Stabellini
Subject: Re: [Qemu-devel] [PATCH V2 2/5] xen mapcache: Check if a memory space has moved.
Date: Mon, 12 Dec 2011 12:53:16 +0000
User-agent: Alpine 2.00 (DEB 1167 2008-08-23)

On Fri, 9 Dec 2011, Anthony PERARD wrote:
> This patch changes the xen_map_cache behavior. Before trying to map a guest
> addr, the mapcache will look into the list of address ranges that have been
> moved (physmap/set_memory). There is currently one memory space like this,
> the vram, "moved" from where it is allocated to where the guest will look
> for it.
> 
> This helps to have a successful migration.
> 
> Signed-off-by: Anthony PERARD <address@hidden>
> ---
>  xen-all.c      |   18 +++++++++++++++++-
>  xen-mapcache.c |   22 +++++++++++++++++++---
>  xen-mapcache.h |    9 +++++++--
>  3 files changed, 43 insertions(+), 6 deletions(-)
> 
> diff --git a/xen-all.c b/xen-all.c
> index b5e28ab..b2e9853 100644
> --- a/xen-all.c
> +++ b/xen-all.c
> @@ -218,6 +218,22 @@ static XenPhysmap *get_physmapping(XenIOState *state,
>      return NULL;
>  }
>  
> +static target_phys_addr_t xen_phys_offset_to_gaddr(target_phys_addr_t start_addr,
> +                                                   ram_addr_t size, void *opaque)
> +{
> +    target_phys_addr_t addr = start_addr & TARGET_PAGE_MASK;
> +    XenIOState *xen_io_state = opaque;
> +    XenPhysmap *physmap = NULL;
> +    QLIST_FOREACH(physmap, &xen_io_state->physmap, list) {
> +        if (range_covers_byte(physmap->phys_offset, physmap->size, addr)) {
> +            return physmap->start_addr;
> +        }
> +    }
> +
> +    return start_addr;
> +}

Considering that xen_phys_offset_to_gaddr can be called with addresses
that are not page aligned, you should return the translated address with
the correct offset within the page, rather than just the translated base
address, which is incorrect.
Otherwise, if start_addr has to be page aligned, just add a BUG_ON at
the beginning of the function to spot cases in which it is not.


>  #if CONFIG_XEN_CTRL_INTERFACE_VERSION >= 340
>  static int xen_add_to_physmap(XenIOState *state,
>                                target_phys_addr_t start_addr,
> @@ -938,7 +954,7 @@ int xen_hvm_init(void)
>      }
>  
>      /* Init RAM management */
> -    xen_map_cache_init();
> +    xen_map_cache_init(xen_phys_offset_to_gaddr, state);
>      xen_ram_init(ram_size);

This patch is better than the previous version, but I think there is
still room for improvement.
For example, the only case in which xen_phys_offset_to_gaddr should
actually be used is for the videoram on restore, right?
In that case, why don't we set xen_phys_offset_to_gaddr only on restore
rather than in all cases?
We could even unset xen_phys_offset_to_gaddr after the videoram address
has been translated correctly, so that it ends up being called only once.



>      qemu_add_vm_change_state_handler(xen_hvm_change_state_handler, state);
> diff --git a/xen-mapcache.c b/xen-mapcache.c
> index 7bcb86e..d5f52b2 100644
> --- a/xen-mapcache.c
> +++ b/xen-mapcache.c
> @@ -76,6 +76,9 @@ typedef struct MapCache {
>      uint8_t *last_address_vaddr;
>      unsigned long max_mcache_size;
>      unsigned int mcache_bucket_shift;
> +
> +    phys_offset_to_gaddr_t phys_offset_to_gaddr;
> +    void *opaque;
>  } MapCache;
>  
>  static MapCache *mapcache;
> @@ -89,13 +92,16 @@ static inline int test_bits(int nr, int size, const unsigned long *addr)
>          return 0;
>  }
>  
> -void xen_map_cache_init(void)
> +void xen_map_cache_init(phys_offset_to_gaddr_t f, void *opaque)
>  {
>      unsigned long size;
>      struct rlimit rlimit_as;
>  
>      mapcache = g_malloc0(sizeof (MapCache));
>  
> +    mapcache->phys_offset_to_gaddr = f;
> +    mapcache->opaque = opaque;
> +
>      QTAILQ_INIT(&mapcache->locked_entries);
>      mapcache->last_address_index = -1;
>  
> @@ -191,9 +197,14 @@ uint8_t *xen_map_cache(target_phys_addr_t phys_addr, target_phys_addr_t size,
>                         uint8_t lock)
>  {
>      MapCacheEntry *entry, *pentry = NULL;
> -    target_phys_addr_t address_index  = phys_addr >> MCACHE_BUCKET_SHIFT;
> -    target_phys_addr_t address_offset = phys_addr & (MCACHE_BUCKET_SIZE - 1);
> +    target_phys_addr_t address_index;
> +    target_phys_addr_t address_offset;
>      target_phys_addr_t __size = size;
> +    bool translated = false;
> +
> +tryagain:
> +    address_index  = phys_addr >> MCACHE_BUCKET_SHIFT;
> +    address_offset = phys_addr & (MCACHE_BUCKET_SIZE - 1);
>  
>      trace_xen_map_cache(phys_addr);
>  
> @@ -235,6 +246,11 @@ uint8_t *xen_map_cache(target_phys_addr_t phys_addr, target_phys_addr_t size,
>      if(!test_bits(address_offset >> XC_PAGE_SHIFT, size >> XC_PAGE_SHIFT,
>                  entry->valid_mapping)) {
>          mapcache->last_address_index = -1;
> +        if (!translated && mapcache->phys_offset_to_gaddr) {
> +            phys_addr = mapcache->phys_offset_to_gaddr(phys_addr, size, mapcache->opaque);
> +            translated = true;
> +            goto tryagain;
> +        }
>          trace_xen_map_cache_return(NULL);
>          return NULL;
>      }

It would be interesting to check how many times we end up needlessly
calling phys_offset_to_gaddr during the normal life of a VM. It should
be very few, maybe only once, right?


