qemu-devel

Re: [PATCH v1 08/13] util/mmap-alloc: Prepare for resizable mmaps


From: Murilo Opsfelder Araújo
Subject: Re: [PATCH v1 08/13] util/mmap-alloc: Prepare for resizable mmaps
Date: Thu, 06 Feb 2020 09:31:40 -0300

Hello, David.

On Thursday, February 6, 2020 5:52:26 AM -03 David Hildenbrand wrote:
> On 06.02.20 00:00, Murilo Opsfelder Araújo wrote:
> > Hello, David.
> >
> > On Monday, February 3, 2020 3:31:20 PM -03 David Hildenbrand wrote:
> >> When shrinking a mmap we want to re-reserve the already populated area.
> >> When growing a memory region, we want to populate starting with a given
> >> fd_offset. Prepare by allowing to pass these parameters.
> >>
> >> Also, let's make sure we always process full pages, to avoid
> >> unmapping/remapping pages that are already in use when
> >> growing/shrinking. (existing callers seem to guarantee this, but that's
> >> not obvious)
> >>
> >> Cc: "Michael S. Tsirkin" <address@hidden>
> >> Cc: Greg Kurz <address@hidden>
> >> Cc: Murilo Opsfelder Araujo <address@hidden>
> >> Cc: Eduardo Habkost <address@hidden>
> >> Cc: "Dr. David Alan Gilbert" <address@hidden>
> >> Signed-off-by: David Hildenbrand <address@hidden>
> >> ---
> >>
> >>  util/mmap-alloc.c | 32 +++++++++++++++++++++-----------
> >>  1 file changed, 21 insertions(+), 11 deletions(-)
> >>
> >> diff --git a/util/mmap-alloc.c b/util/mmap-alloc.c
> >> index f043ccb0ab..63ad6893b7 100644
> >> --- a/util/mmap-alloc.c
> >> +++ b/util/mmap-alloc.c
> >> @@ -83,12 +83,12 @@ size_t qemu_mempath_getpagesize(const char *mem_path)
> >>  }
> >>
> >>  /*
> >> - * Reserve a new memory region of the requested size to be used for mapping
> >> - * from the given fd (if any).
> >> + * Reserve a new memory region of the requested size or re-reserve parts
> >> + * of an existing region to be used for mapping from the given fd (if any).
> >>   */
> >> -static void *mmap_reserve(size_t size, int fd)
> >> +static void *mmap_reserve(void *ptr, size_t size, int fd)
> >>  {
> >> -    int flags = MAP_PRIVATE;
> >> +    int flags = MAP_PRIVATE | (ptr ? MAP_FIXED : 0);
> >>
> >>  #if defined(__powerpc64__) && defined(__linux__)
> >>      /*
> >> @@ -111,19 +111,23 @@ static void *mmap_reserve(size_t size, int fd)
> >>      flags |= MAP_ANONYMOUS;
> >>  #endif
> >>
> >> -    return mmap(0, size, PROT_NONE, flags, fd, 0);
> >> +    return mmap(ptr, size, PROT_NONE, flags, fd, 0);
> >>  }
> >>
> >>  /*
> >>   * Populate memory in a reserved region from the given fd (if any).
> >>   */
> >> -static void *mmap_populate(void *ptr, size_t size, int fd, bool shared,
> >> -                           bool is_pmem)
> >> +static void *mmap_populate(void *ptr, size_t size, int fd, size_t fd_offset,
> >> +                           bool shared, bool is_pmem)
> >>  {
> >>      int map_sync_flags = 0;
> >>      int flags = MAP_FIXED;
> >>      void *new_ptr;
> >>
> >> +    if (fd == -1) {
> >> +        fd_offset = 0;
> >> +    }
> >> +
> >>      flags |= fd == -1 ? MAP_ANONYMOUS : 0;
> >>      flags |= shared ? MAP_SHARED : MAP_PRIVATE;
> >>      if (shared && is_pmem) {
> >> @@ -131,7 +135,7 @@ static void *mmap_populate(void *ptr, size_t size, int fd, bool shared,
> >>      }
> >>
> >>      new_ptr = mmap(ptr, size, PROT_READ | PROT_WRITE, flags | map_sync_flags,
> >> -                   fd, 0);
> >> +                   fd, fd_offset);
> >>      if (new_ptr == MAP_FAILED && map_sync_flags) {
> >>          if (errno == ENOTSUP) {
> >>              char *proc_link = g_strdup_printf("/proc/self/fd/%d", fd);
> >> @@ -153,7 +157,7 @@ static void *mmap_populate(void *ptr, size_t size, int fd, bool shared,
> >>           * If mmap failed with MAP_SHARED_VALIDATE | MAP_SYNC, we will try
> >>           * again without these flags to handle backwards compatibility.
> >>           */
> >> -        new_ptr = mmap(ptr, size, PROT_READ | PROT_WRITE, flags, fd, 0);
> >> +        new_ptr = mmap(ptr, size, PROT_READ | PROT_WRITE, flags, fd, fd_offset);
> >>      }
> >>
> >>      return new_ptr;
> >>  }
> >> @@ -178,13 +182,16 @@ void *qemu_ram_mmap(int fd,
> >>      size_t offset, total;
> >>      void *ptr, *guardptr;
> >>
> >> +    /* we can only map whole pages */
> >> +    size = QEMU_ALIGN_UP(size, pagesize);
> >> +
> >
> > Caller already rounds up size to block->page_size.
> >
> > Why this QEMU_ALIGN_UP is necessary?
>
> Thanks for having a look
>
> I guess you read the patch description, right? :)
>
> "(existing callers seem to guarantee this, but that's
>   not obvious)"
>
> Do you prefer a g_assert(IS_ALIGNED()) instead?

I guess you mean QEMU_IS_ALIGNED().  But yes, I think we could just check the
alignment here, so that callers do the alignment first.

We could have, for example, a new helper function mmap_align(size_t size) that
returns the size already aligned.

>
> Thanks!

--
Murilo


