Re: [Qemu-devel] [PATCH 2/4] exec: add wrapper for host pointer access


From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [PATCH 2/4] exec: add wrapper for host pointer access
Date: Mon, 17 Nov 2014 13:36:33 +0200

On Mon, Nov 17, 2014 at 10:58:53AM +0000, Dr. David Alan Gilbert wrote:
> * Michael S. Tsirkin (address@hidden) wrote:
> > host pointer accesses force pointer math; let's
> > add a wrapper to make them safer.
> > 
> > Signed-off-by: Michael S. Tsirkin <address@hidden>
> > ---
> >  include/exec/cpu-all.h |  5 +++++
> >  exec.c                 | 10 +++++-----
> >  2 files changed, 10 insertions(+), 5 deletions(-)
> > 
> > diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
> > index c085804..9d8d408 100644
> > --- a/include/exec/cpu-all.h
> > +++ b/include/exec/cpu-all.h
> > @@ -313,6 +313,11 @@ typedef struct RAMBlock {
> >      int fd;
> >  } RAMBlock;
> >  
> > +static inline void *ramblock_ptr(RAMBlock *block, ram_addr_t offset)
> > +{
> > +    return (char *)block->host + offset;
> > +}
> 
> I'm a bit surprised you don't need to pass a length to this to be able
> to tell how much you can access.

This is because, at the moment, all accesses touch only a single page.
That assumption seems to be made all over the code, and it won't
be easy to remove.
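
For illustration only, a length-checked variant along the lines Dave
suggests might look like the sketch below. ramblock_ptr_len is a
hypothetical name, not part of this patch; it simply asserts that the
whole access fits inside the block:

    /* Hypothetical bounds-checked variant (needs <assert.h>). */
    static inline void *ramblock_ptr_len(RAMBlock *block, ram_addr_t offset,
                                         ram_addr_t len)
    {
        assert(offset < block->length);          /* start is inside the block */
        assert(len <= block->length - offset);   /* ...and so is the end */
        return (char *)block->host + offset;
    }

With every caller currently touching at most one page, the single-offset
wrapper in the patch is enough; a variant like this would merely make the
single-page assumption explicit and catch accesses running off the end of
a block.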

> >  typedef struct RAMList {
> >      QemuMutex mutex;
> >      /* Protected by the iothread lock.  */
> > diff --git a/exec.c b/exec.c
> > index ad5cf12..9648669 100644
> > --- a/exec.c
> > +++ b/exec.c
> > @@ -840,7 +840,7 @@ static void tlb_reset_dirty_range_all(ram_addr_t start, ram_addr_t length)
> >  
> >      block = qemu_get_ram_block(start);
> >      assert(block == qemu_get_ram_block(end - 1));
> > -    start1 = (uintptr_t)block->host + (start - block->offset);
> > +    start1 = (uintptr_t)ramblock_ptr(block, start - block->offset);
> >      cpu_tlb_reset_dirty_all(start1, length);
> >  }
> >  
> > @@ -1500,7 +1500,7 @@ void qemu_ram_remap(ram_addr_t addr, ram_addr_t length)
> >      QTAILQ_FOREACH(block, &ram_list.blocks, next) {
> >          offset = addr - block->offset;
> >          if (offset < block->length) {
> > -            vaddr = block->host + offset;
> > +            vaddr = ramblock_ptr(block, offset);
> >              if (block->flags & RAM_PREALLOC) {
> >                  ;
> >              } else if (xen_enabled()) {
> > @@ -1551,7 +1551,7 @@ void *qemu_get_ram_block_host_ptr(ram_addr_t addr)
> >  {
> >      RAMBlock *block = qemu_get_ram_block(addr);
> >  
> > -    return block->host;
> > +    return ramblock_ptr(block, 0);
> >  }
> >  
> >  /* Return a host pointer to ram allocated with qemu_ram_alloc.
> > @@ -1578,7 +1578,7 @@ void *qemu_get_ram_ptr(ram_addr_t addr)
> >                  xen_map_cache(block->offset, block->length, 1);
> >          }
> >      }
> > -    return block->host + (addr - block->offset);
> > +    return ramblock_ptr(block, addr - block->offset);
> >  }
> 
> which then makes me wonder if all the uses of this are safe near the
> end of the block.
> 
> >  /* Return a host pointer to guest's ram. Similar to qemu_get_ram_ptr
> > @@ -1597,7 +1597,7 @@ static void *qemu_ram_ptr_length(ram_addr_t addr, hwaddr *size)
> >              if (addr - block->offset < block->length) {
> >                  if (addr - block->offset + *size > block->length)
> >                      *size = block->length - addr + block->offset;
> > -                return block->host + (addr - block->offset);
> > +                return ramblock_ptr(block, addr - block->offset);
> >              }
> 
> but then this sounds like it's going to have partial duplication; it already
> looks like it's only going to succeed if it finds itself a block that the
> access fits in.
> 
> 
> Dave

Sorry, I don't really understand what you are saying here.

> >          }
> >  
> > -- 
> > MST
> > 
> --
> Dr. David Alan Gilbert / address@hidden / Manchester, UK


