
From: Xiao Guangrong
Subject: Re: [Qemu-devel] [PATCH v3 22/32] nvdimm: init the address region used by NVDIMM ACPI
Date: Tue, 20 Oct 2015 10:27:27 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.3.0



On 10/19/2015 06:42 PM, Igor Mammedov wrote:
On Mon, 19 Oct 2015 18:01:17 +0800
Xiao Guangrong <address@hidden> wrote:



On 10/19/2015 05:46 PM, Igor Mammedov wrote:
On Mon, 19 Oct 2015 12:17:22 +0300
"Michael S. Tsirkin" <address@hidden> wrote:

On Mon, Oct 19, 2015 at 03:44:13PM +0800, Xiao Guangrong wrote:


On 10/19/2015 03:39 PM, Michael S. Tsirkin wrote:
On Mon, Oct 19, 2015 at 03:27:21PM +0800, Xiao Guangrong wrote:
+        nvdimm_init_memory_state(&pcms->nvdimm_memory, system_memory,
+                                 machine, TARGET_PAGE_SIZE);
+

Shouldn't this be conditional on presence of the nvdimm device?


We will enable hotplug on nvdimm devices in the near future once the
Linux driver is ready. I'd keep it here for future development.

No, I don't think we should add stuff unconditionally. If not
nvdimm, some other flag should indicate user intends to hotplug
things.


Actually, it is not unconditional: it is only called when the parameter
"-m aaa,maxmem=bbb" (aaa < bbb) is used. It is on the same path as
memory-hotplug initialization.


Right, but that's not the same as nvdimm.


it could be a pc-machine property; then it could be turned on like
this: -machine nvdimm_support=on

Er, I do not understand why this separate switch is needed and why
nvdimm and pc-dimm are different. :(

NVDIMM reuses memory-hotplug's framework: maxmem, slots, the dimm
device, and even some of the ACPI logic to do hotplug, etc. Both
nvdimm and pc-dimm are built on the same infrastructure.
NVDIMM support consumes precious low RAM and MMIO resources, and no
small amount at that. So turning it on unconditionally with memory
hotplug, even if NVDIMM would never be used, isn't nice.

However, that concern could be dropped if, instead of allocating its
own control MMIO/RAM regions, NVDIMM reused memory hotplug's MMIO
region and replaced the RAM region with serializing/marshaling of label
data over the same MMIO interface (yes, it's slower, but it's not a
performance-critical path).

I really do not want to reuse all of memory-hotplug's resources. NVDIMM
and memory-hotplug do not have the same ACPI logic, and sharing would
make the AML code really complex.

Another point is that Microsoft uses the label data area in its own
way: the label data area will not be used as a namespace area at all,
and such slow access for _DSM is not acceptable for vNVDIMM usage.

The most important point is that we do not want to slow down booting a
system with NVDIMM attached (imagine accessing 128K of data with single
8-byte MMIO accesses: crazily slow). NVDIMM will be used as a boot
device, and it will be used for lightweight virtualization, such as
Clear Containers and Hyper, which require booting the system as fast as
possible.

I understand your concern that reserving big resources is not so
acceptable. Okay, then how about just reserving a 4 bit IO port and 1
RAM page?



