
From: Xiao Guangrong
Subject: Re: [Qemu-devel] [PATCH v2 07/18] nvdimm: reserve address range for NVDIMM
Date: Sun, 6 Sep 2015 15:22:07 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.1.0



On 09/04/2015 08:02 PM, Igor Mammedov wrote:
On Fri, 14 Aug 2015 22:52:00 +0800
Xiao Guangrong <address@hidden> wrote:

NVDIMM reserves all the free range above 4G to do:
- Persistent Memory (PMEM) mapping
- implement NVDIMM ACPI device _DSM method

Signed-off-by: Xiao Guangrong <address@hidden>
---
  hw/i386/pc.c               | 12 ++++++++++--
  hw/mem/nvdimm/pc-nvdimm.c  | 13 +++++++++++++
  include/hw/mem/pc-nvdimm.h |  1 +
  3 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index 7661ea9..41af6ea 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -64,6 +64,7 @@
  #include "hw/pci/pci_host.h"
  #include "acpi-build.h"
  #include "hw/mem/pc-dimm.h"
+#include "hw/mem/pc-nvdimm.h"
  #include "qapi/visitor.h"
  #include "qapi-visit.h"

@@ -1302,6 +1303,7 @@ FWCfgState *pc_memory_init(MachineState *machine,
      MemoryRegion *ram_below_4g, *ram_above_4g;
      FWCfgState *fw_cfg;
      PCMachineState *pcms = PC_MACHINE(machine);
+    ram_addr_t offset;

      assert(machine->ram_size == below_4g_mem_size + above_4g_mem_size);

@@ -1339,6 +1341,8 @@ FWCfgState *pc_memory_init(MachineState *machine,
          exit(EXIT_FAILURE);
      }

+    offset = 0x100000000ULL + above_4g_mem_size;
+
      /* initialize hotplug memory address space */
      if (guest_info->has_reserved_memory &&
          (machine->ram_size < machine->maxram_size)) {
@@ -1358,8 +1362,7 @@ FWCfgState *pc_memory_init(MachineState *machine,
              exit(EXIT_FAILURE);
          }

-        pcms->hotplug_memory.base =
-            ROUND_UP(0x100000000ULL + above_4g_mem_size, 1ULL << 30);
+        pcms->hotplug_memory.base = ROUND_UP(offset, 1ULL << 30);

          if (pcms->enforce_aligned_dimm) {
              /* size hotplug region assuming 1G page max alignment per slot */
@@ -1377,8 +1380,13 @@ FWCfgState *pc_memory_init(MachineState *machine,
                             "hotplug-memory", hotplug_mem_size);
          memory_region_add_subregion(system_memory, pcms->hotplug_memory.base,
                                      &pcms->hotplug_memory.mr);
+
+        offset = pcms->hotplug_memory.base + hotplug_mem_size;
      }

+    /* all the space left above 4G is reserved for NVDIMM. */
+    pc_nvdimm_reserve_range(offset);
I'd drop 'offset' in this patch and just use:
   foo(pcms->hotplug_memory.base + hotplug_mem_size)


That works only if memory hotplug is used... however, we can enable nvdimm separately.
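To make the point concrete, here is a minimal, self-contained sketch of the address computation under discussion (ROUND_UP simplified to the power-of-two case; the function name and the sizes in the usage note are illustrative, not taken from the patch or a real machine config):

```c
#include <assert.h>
#include <stdint.h>

/* Power-of-two round-up, standing in for QEMU's ROUND_UP macro. */
#define ROUND_UP(n, d) (((n) + (d) - 1) & ~((uint64_t)(d) - 1))

/*
 * Start of the range reserved for NVDIMM: just above the RAM mapped
 * over 4G, or past the (1GB-aligned) hotplug region when one exists.
 * This is why the patch keeps 'offset' instead of always using
 * hotplug_memory.base + hotplug_mem_size.
 */
static uint64_t nvdimm_reserve_start(uint64_t above_4g_mem_size,
                                     int has_hotplug_mem,
                                     uint64_t hotplug_mem_size)
{
    uint64_t offset = 0x100000000ULL + above_4g_mem_size;

    if (has_hotplug_mem) {
        uint64_t base = ROUND_UP(offset, 1ULL << 30);
        offset = base + hotplug_mem_size;
    }
    return offset;
}
```

For example, with 1GB of RAM above 4G and no hotplug region the reserved range starts at 0x140000000, while adding a 2GB hotplug region moves it to 0x1C0000000.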

+
      /* Initialize PC system firmware */
      pc_system_firmware_init(rom_memory, guest_info->isapc_ram_fw);

diff --git a/hw/mem/nvdimm/pc-nvdimm.c b/hw/mem/nvdimm/pc-nvdimm.c
index a53d235..7a270a8 100644
--- a/hw/mem/nvdimm/pc-nvdimm.c
+++ b/hw/mem/nvdimm/pc-nvdimm.c
@@ -24,6 +24,19 @@

  #include "hw/mem/pc-nvdimm.h"

+#define PAGE_SIZE      (1UL << 12)
+
+static struct nvdimms_info {
+    ram_addr_t current_addr;
+} nvdimms_info;
no globals please; so far it looks like pcms->hotplug_memory,
so add a similar nvdimm_memory field to PCMachineState


Okay, that sounds good to me.
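Following that suggestion, the global would become a field on PCMachineState mirroring hotplug_memory; a trimmed-down sketch (stub types, and the nvdimm_memory field name is an assumption, not code from the patch):

```c
#include <stdint.h>

typedef uint64_t hwaddr;
typedef struct MemoryRegion { int dummy; } MemoryRegion; /* stub for the sketch */

/* Same shape QEMU already uses for the hotplug region. */
typedef struct MemoryHotplugState {
    hwaddr base;
    MemoryRegion mr;
} MemoryHotplugState;

/* PCMachineState trimmed to the fields relevant here. */
typedef struct PCMachineState {
    MemoryHotplugState hotplug_memory;
    MemoryHotplugState nvdimm_memory;  /* proposed replacement for the global */
} PCMachineState;
```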

+
+/* the address range [offset, ~0ULL) is reserved for NVDIMM. */
+void pc_nvdimm_reserve_range(ram_addr_t offset)
do you plan to reuse this function, if not then just inline it at call site

I prefer keeping it as an inline function and moving it to the nvdimm.h file,
since that makes it easier to port to other platforms: it avoids having to hunt
down the nvdimm-related pieces of code in the x86 arch, and other platforms
only need to implement the functions in nvdimm.h.


+{
+    offset = ROUND_UP(offset, PAGE_SIZE);
I'd suggest rounding up to 1GB, as we do with memory hotplug

Okay, that works for me.
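For contrast, here is how the same base would round under the patch's PAGE_SIZE alignment versus the suggested 1GB alignment (the helper name and the address in the usage note are made-up examples):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE  (1ULL << 12)   /* 4KB, as in the patch */
#define GB_ALIGN   (1ULL << 30)   /* alignment used by mem hotplug */

/* Power-of-two round-up, standing in for QEMU's ROUND_UP macro. */
#define ROUND_UP(n, d) (((n) + (d) - 1) & ~((uint64_t)(d) - 1))

/* Base of the reserved range for a given starting offset and alignment. */
static uint64_t reserve_base(uint64_t offset, uint64_t align)
{
    return ROUND_UP(offset, align);
}
```

An offset of 0x140001234 rounds to 0x140002000 with page alignment, but to 0x180000000 with 1GB alignment, matching what the hotplug region already does.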

Really appreciate all your time and comments on the whole patchset, Igor!


