qemu-devel

Re: [PATCH v1 13/13] exec: Ram blocks with resizable anonymous allocations under POSIX


From: David Hildenbrand
Subject: Re: [PATCH v1 13/13] exec: Ram blocks with resizable anonymous allocations under POSIX
Date: Mon, 10 Feb 2020 11:12:11 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.4.1

On 03.02.20 19:31, David Hildenbrand wrote:
> We can now make use of resizable anonymous allocations to implement
> actually resizable ram blocks. Resizable anonymous allocations are
> not implemented under WIN32 yet and are not available when using
> alternative allocators. Fall back to the existing handling.
> 
> We also have to fall back to the existing handling if any ram block
> notifier does not yet support resizing (esp. AMD SEV, HAX). Remember
> in RAM_RESIZEABLE_ALLOC if we are using resizable anonymous allocations.
> 
> As the mmap()-hackery will invalidate some madvise settings, we have to
> re-apply them after resizing. After resizing, notify the ram block
> notifiers.
> 
> The benefit of actually resizable ram blocks is that e.g., under Linux,
> only the actual size will be reserved (even if
> "/proc/sys/vm/overcommit_memory" is set to "never"). Additional memory will
> be reserved when trying to resize, which makes it possible to have ram
> blocks that start small but can theoretically grow very large.
> 
> Cc: Richard Henderson <address@hidden>
> Cc: Paolo Bonzini <address@hidden>
> Cc: "Dr. David Alan Gilbert" <address@hidden>
> Cc: Eduardo Habkost <address@hidden>
> Cc: Marcel Apfelbaum <address@hidden>
> Cc: Stefan Weil <address@hidden>
> Signed-off-by: David Hildenbrand <address@hidden>
> ---
>  exec.c                    | 68 +++++++++++++++++++++++++++++++++++----
>  hw/core/numa.c            | 10 ++++--
>  include/exec/cpu-common.h |  2 ++
>  include/exec/memory.h     |  8 +++++
>  4 files changed, 79 insertions(+), 9 deletions(-)
> 
> diff --git a/exec.c b/exec.c
> index fc65c4f7ca..a59d1efde3 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -2053,6 +2053,16 @@ void qemu_ram_unset_migratable(RAMBlock *rb)
>      rb->flags &= ~RAM_MIGRATABLE;
>  }
>  
> +bool qemu_ram_is_resizable(RAMBlock *rb)
> +{
> +    return rb->flags & RAM_RESIZEABLE;
> +}
> +
> +bool qemu_ram_is_resizable_alloc(RAMBlock *rb)
> +{
> +    return rb->flags & RAM_RESIZEABLE_ALLOC;
> +}
> +
>  /* Called with iothread lock held.  */
>  void qemu_ram_set_idstr(RAMBlock *new_block, const char *name, DeviceState *dev)
>  {
> @@ -2139,6 +2149,8 @@ static void qemu_ram_apply_settings(void *host, size_t length)
>   */
>  int qemu_ram_resize(RAMBlock *block, ram_addr_t newsize, Error **errp)
>  {
> +    const uint64_t oldsize = block->used_length;
> +
>      assert(block);
>  
>      newsize = HOST_PAGE_ALIGN(newsize);
> @@ -2147,7 +2159,7 @@ int qemu_ram_resize(RAMBlock *block, ram_addr_t newsize, Error **errp)
>          return 0;
>      }
>  
> -    if (!(block->flags & RAM_RESIZEABLE)) {
> +    if (!qemu_ram_is_resizable(block)) {
>          error_setg_errno(errp, EINVAL,
>                           "Length mismatch: %s: 0x" RAM_ADDR_FMT
>                           " in != 0x" RAM_ADDR_FMT, block->idstr,
> @@ -2163,10 +2175,26 @@ int qemu_ram_resize(RAMBlock *block, ram_addr_t newsize, Error **errp)
>          return -EINVAL;
>      }
>  
> +    if (qemu_ram_is_resizable_alloc(block)) {
> +        g_assert(ram_block_notifiers_support_resize());
> +        if (qemu_anon_ram_resize(block->host, block->used_length,
>                                  newsize, block->flags & RAM_SHARED) == NULL) {
> +            error_setg_errno(errp, -ENOMEM,
> +                             "Could not allocate enough memory.");
> +            return -ENOMEM;
> +        }
> +    }
> +

I'll most probably rework this to have separate paths for growing and
shrinking, with a different sequence of steps (e.g., perform the actual
shrinking as the last step, so all mappings remain valid, and ignore
errors there, which are unlikely either way).


-- 
Thanks,

David / dhildenb



