
Re: [Qemu-devel] [PATCH 01/14] libqos: Split apart pc_alloc_init


From: John Snow
Subject: Re: [Qemu-devel] [PATCH 01/14] libqos: Split apart pc_alloc_init
Date: Tue, 13 Jan 2015 11:29:42 -0500
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.3.0



On 01/13/2015 03:54 AM, Marc Marí wrote:
On Mon, 12 Jan 2015 22:34:26 -0500, John Snow <address@hidden> wrote:
Move the list-specific initialization over into
malloc.c, to keep all of the list implementation
details within the same file.

The allocation and freeing of these structures are
now both back within the same layer.

Signed-off-by: John Snow <address@hidden>
---
  tests/libqos/malloc-pc.c | 20 ++++----------------
  tests/libqos/malloc.c    | 17 +++++++++++++++++
  tests/libqos/malloc.h    |  1 +
  3 files changed, 22 insertions(+), 16 deletions(-)
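
For readers without the full patch at hand, the 17 lines added to tests/libqos/malloc.c presumably introduce a constructor along these lines. This is a sketch only: the start/end and list fields, mlist_new(), and the QTAILQ entry name are assumptions inferred from the MemBlock code being removed from malloc-pc.c in the hunk below, not the literal contents of the patch.

    /* Sketch of the new constructor in tests/libqos/malloc.c (assumed
     * shape; field and helper names other than alloc_init and
     * QGuestAllocator are illustrative). */
    QGuestAllocator *alloc_init(uint64_t start, uint64_t end)
    {
        QGuestAllocator *s = g_malloc0(sizeof(*s));
        MemBlock *node;

        s->start = start;
        s->end = end;
        QTAILQ_INIT(&s->used);
        QTAILQ_INIT(&s->free);

        /* Seed the free list with one block spanning the whole range,
         * keeping all list bookkeeping inside malloc.c. */
        node = mlist_new(s->start, s->end - s->start);
        QTAILQ_INSERT_HEAD(&s->free, node, MLIST_ENTNAME);

        return s;
    }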

diff --git a/tests/libqos/malloc-pc.c b/tests/libqos/malloc-pc.c
index c9c48fd..36a0740 100644
--- a/tests/libqos/malloc-pc.c
+++ b/tests/libqos/malloc-pc.c
@@ -32,31 +32,19 @@ void pc_alloc_uninit(QGuestAllocator *allocator)

  QGuestAllocator *pc_alloc_init_flags(QAllocOpts flags)
  {
-    QGuestAllocator *s = g_malloc0(sizeof(*s));
+    QGuestAllocator *s;
      uint64_t ram_size;
      QFWCFG *fw_cfg = pc_fw_cfg_init();
-    MemBlock *node;
+
+    ram_size = qfw_cfg_get_u64(fw_cfg, FW_CFG_RAM_SIZE);
+    s = alloc_init(1 << 20, MIN(ram_size, 0xE0000000));

      s->opts = flags;
      s->page_size = PAGE_SIZE;

Is there a reason to leave page_size out of the function as well? (flags is handled in a later patch in this series, so that part is fine.) I think it would be worth passing both in, so that pc_alloc_init_flags no longer needs to know about the fields of QGuestAllocator.
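
Concretely, that would mean a signature along these lines (just to illustrate the idea; this is not code from the series):

    /* Illustrative only: let alloc_init take everything it needs, so
     * pc_alloc_init_flags never touches QGuestAllocator fields itself. */
    QGuestAllocator *alloc_init(QAllocOpts flags, uint32_t page_size,
                                uint64_t start, uint64_t end);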

Thanks
Marc


There was no strong motivation; I just saw page_size as something that other architectures might want to change.

A setter method would also work well, since callers would then not need to know the internal representation of the allocator object.
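
Something like this, for instance (a sketch, not code from this series; the function name is made up):

    /* Hypothetical setter: callers adjust the page size without knowing
     * the allocator's internal layout. */
    void alloc_set_page_size(QGuestAllocator *allocator, uint32_t page_size)
    {
        allocator->page_size = page_size;
    }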

Thanks,
--js


