From: Xiao Guangrong
Subject: Re: [Qemu-devel] [PATCH v2 06/11] nvdimm acpi: initialize the resource used by NVDIMM ACPI
Date: Mon, 15 Feb 2016 19:22:13 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.5.1



On 02/15/2016 06:47 PM, Igor Mammedov wrote:
> On Mon, 15 Feb 2016 18:13:38 +0800
> Xiao Guangrong <address@hidden> wrote:

>> On 02/15/2016 05:18 PM, Michael S. Tsirkin wrote:
>>> On Mon, Feb 15, 2016 at 10:11:05AM +0100, Igor Mammedov wrote:
>>>> On Sun, 14 Feb 2016 13:57:27 +0800
>>>> Xiao Guangrong <address@hidden> wrote:

>>>>> On 02/08/2016 07:03 PM, Igor Mammedov wrote:
>>>>>> On Wed, 13 Jan 2016 02:50:05 +0800
>>>>>> Xiao Guangrong <address@hidden> wrote:

>>>>>>> A 32-bit IO port starting at 0x0a18 in the guest is reserved for
>>>>>>> NVDIMM ACPI emulation. The table, NVDIMM_DSM_MEM_FILE, will be
>>>>>>> patched into the NVDIMM ACPI binary code.
>>>>>>>
>>>>>>> OSPM uses this port to tell QEMU the final address of the DSM
>>>>>>> memory and to notify QEMU to emulate the DSM method.
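
(For illustration of the mechanism described in the commit message: a
QEMU device can expose such an IO window with memory_region_init_io().
This is a minimal sketch, not the actual patch; NVDIMMState,
state->io_mr and nvdimm_dsm_handle() are invented names.)

    static uint64_t nvdimm_io_read(void *opaque, hwaddr addr, unsigned size)
    {
        return 0;   /* the window is effectively write-only */
    }

    /* OSPM writes the 32-bit guest-physical address of the DSM buffer
     * to port 0x0a18; QEMU reads the request out of that buffer,
     * emulates the DSM method and fills in the result. */
    static void nvdimm_io_write(void *opaque, hwaddr addr,
                                uint64_t val, unsigned size)
    {
        NVDIMMState *state = opaque;              /* invented type */

        nvdimm_dsm_handle(state, (uint32_t)val);  /* invented helper */
    }

    static const MemoryRegionOps nvdimm_io_ops = {
        .read = nvdimm_io_read,
        .write = nvdimm_io_write,
        .endianness = DEVICE_LITTLE_ENDIAN,
        .valid = {
            .min_access_size = 4,
            .max_access_size = 4,
        },
    };

    /* at realize time: a 4-byte window at 0x0a18 in the system IO space */
    memory_region_init_io(&state->io_mr, NULL /* owner */, &nvdimm_io_ops,
                          state, "nvdimm-acpi-io", 4);
    memory_region_add_subregion(get_system_io(), 0x0a18, &state->io_mr);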
>>>>>> Would you need to pass control to QEMU if each NVDIMM had its whole
>>>>>> label area MemoryRegion mapped right after its storage MemoryRegion?


>>>>> No, the label data is not mapped into the guest's address space; it
>>>>> can only be accessed indirectly via the _DSM method.
>>>> Yep, per spec the label data should be accessed via _DSM, but the
>>>> question wasn't about that.

>> Ah, sorry, I missed your question.

>>>> Why would one map only a 4K window and serialize the label data
>>>> through it, if it could be mapped as a whole? That way the _DSM
>>>> method would be much less complicated and there would be no need to
>>>> add/support a protocol for its serialization.


>>> Is it ever accessed on the data path? If not, I prefer the current
>>> approach:

>> The label data is only accessed via two DSM commands - Get Namespace
>> Label Data and Set Namespace Label Data - so no other place needs to
>> be emulated.
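
(For concreteness: both commands reduce to an offset/length pair into
the label storage area plus a status-and-data reply, so the emulation
is essentially a bounds-checked memcpy. A rough sketch of the payloads;
the struct names are invented and the layout is only indicative:)

    #include <stdint.h>

    /* input to Get Namespace Label Data: which slice of the label area */
    struct get_label_data_in {
        uint32_t offset;   /* byte offset into the label storage area */
        uint32_t length;   /* bytes to read, capped by max-xfer */
    } __attribute__((packed));

    /* output: a status code followed by the requested bytes */
    struct get_label_data_out {
        uint32_t status;   /* 0 on success */
        uint8_t  data[];   /* 'length' bytes of label data */
    } __attribute__((packed));

    /* Set Namespace Label Data is symmetric: offset, length and the
     * bytes to write in, a bare status code out. */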

>>> limit the window used; the serialization protocol seems rather simple.


>> Yes.

>> The label data is at least 128K, which is quite big for the BIOS, as
>> the BIOS allocates memory in the 0 ~ 4G range, which is a tight
>> region. It also needs the guest OS to support a larger max-xfer (the
>> maximum size that can be transferred at one time); the size in the
>> current Linux NVDIMM driver is 4K.
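
(In concrete terms: with a 4K max-xfer, reading one 128K label area
already takes 128K / 4K = 32 Get Namespace Label Data round trips, each
of them a separate _DSM invocation.)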

>> However, using a larger DSM buffer can help us simplify NVDIMM hotplug
>> for the case where so many NVDIMM devices are present in the system
>> that their FIT info cannot be fitted into one page. Each PMEM-only
>> device needs 0xb8 bytes and we can append 256 memory devices at most,
>> so 12 pages are needed to contain this info. The prototype we
>> implemented uses a self-defined protocol to read pieces of the _FIT
>> and concatenate them before returning to the guest, please refer to:
>> https://github.com/xiaogr/qemu/commit/c46ce01c8433ac0870670304360b3c4aa414143a
>>
>> As 12 pages are not a small region for the BIOS, and the _FIT size may
>> be extended in future development (e.g. if PBLK is introduced), I am
>> not sure we need this. Of course, another approach to simplify it is
>> to limit the number of NVDIMM devices so that their _FIT stays under 4K.
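
(Checking the arithmetic above: 0xb8 = 184 bytes per PMEM-only device,
and 184 * 256 = 47104 bytes = 46K, which rounds up to 12 pages of 4K.)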
> My suggestion is not to have a single label area shared by every
> NVDIMM, but rather to map each label area right after its NVDIMM's
> data memory. That way _DSM can be made non-serialized and the guest
> could handle label data in parallel.


Sounds great to me. I like this idea. :D
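
(A minimal sketch of what this layout could look like on the QEMU side,
assuming invented names throughout; each DIMM's label MemoryRegion is
mapped directly behind its data MemoryRegion, so the guest reaches both
without a _DSM round trip:)

    /* carve the backend memory into data + label, mapped back to back;
     * backend_mr, container, base_addr and backend_size are invented */
    MemoryRegion data_mr, label_mr;
    uint64_t label_size = 128 * 1024;                /* per-DIMM label area */
    uint64_t data_size  = backend_size - label_size;

    memory_region_init_alias(&data_mr, NULL, "nvdimm-data",
                             &backend_mr, 0, data_size);
    memory_region_init_alias(&label_mr, NULL, "nvdimm-label",
                             &backend_mr, data_size, label_size);

    /* the label area starts right where the data area ends */
    memory_region_add_subregion(container, base_addr, &data_mr);
    memory_region_add_subregion(container, base_addr + data_size, &label_mr);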

> As for the _FIT, we can use the same approach as memory hotplug (an IO
> port window), or Michael's idea of adding a vendor-specific PCI config
> region to the current PM device to avoid using IO ports.

Thanks for the reminder, I will look into it.



