

From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [PATCH v2 06/11] nvdimm acpi: initialize the resource used by NVDIMM ACPI
Date: Fri, 19 Feb 2016 10:08:37 +0200

On Thu, Feb 18, 2016 at 11:05:23AM +0100, Igor Mammedov wrote:
> On Thu, 18 Feb 2016 12:03:36 +0800
> Xiao Guangrong <address@hidden> wrote:
> 
> > On 02/18/2016 01:26 AM, Michael S. Tsirkin wrote:
> > > On Wed, Feb 17, 2016 at 10:04:18AM +0800, Xiao Guangrong wrote:  
> > >>>>> As for the rest could that commands go via MMIO that we usually
> > >>>>> use for control path?  
> > >>>>
> > >>>> So both input data and output data go through a single MMIO; we need to
> > >>>> introduce a protocol to pass these data - isn't that complex?
> > >>>>
> > >>>> And is there any MMIO we can reuse (even more complex?), or should we
> > >>>> allocate this MMIO page (the old question - where to allocate it?)?  
> > >>> Maybe you could reuse/extend the memhotplug IO interface,
> > >>> or alternatively, as Michael suggested, add a vendor-specific PCI_Config.
> > >>> I'd suggest the PM device for that (hw/acpi/[piix4.c|ich9.c]),
> > >>> which I like even better since you won't need to care about which ports
> > >>> to allocate at all.  
> > >>
> > >> Well, if Michael does not object, i will do it in the next version. :)  
> > >
> > > Sorry, the thread's so long by now that I'm no longer sure what "it"
> > > refers to.  
> > 
> > Never mind, I saw you were busy with other threads.
> > 
> > "It" means the suggestion of Igor that "map each label area right after each
> > NVDIMM's data memory"
> Michael pointed out that putting the label right after each NVDIMM
> might burn up to 256GB of address space, due to DIMM alignment, for 256
> NVDIMMs.
> However, if the address for each label is picked with pc_dimm_get_free_addr()
> and the label's MemoryRegion alignment is the default 2MB, then all labels
> would be allocated close to each other within a single 1GB range.
> 
> That would burn only 1GB for 500 labels, which is more than the maximum
> possible 256 NVDIMMs.

I thought about it; once we support hotplug, this means that one will
have to pre-declare how much is needed so QEMU can mark the correct
memory reserved, and that would be nasty. Maybe we always pre-reserve 1GByte.
Okay, but next time we need something, do we steal another gigabyte?
It seems too much; I'll think it over on the weekend.

Really, most other devices manage to get by with 4K chunks just fine; I
don't see why we are so special and need to steal gigabytes of
physically contiguous address ranges.
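For reference, the 1GB-vs-256GB numbers in this exchange can be checked with a quick back-of-envelope sketch. This is not QEMU code; the 128KB label-area size and the 1GB DIMM alignment are assumptions for illustration:

```python
GB = 1 << 30
MB = 1 << 20

def align_up(addr, align):
    """Round addr up to the next multiple of align (align is a power of two)."""
    return (addr + align - 1) & ~(align - 1)

# Worst case: a label placed right after each NVDIMM inherits the DIMM's
# alignment (up to 1GB with hugepages), so each label pads out a 1GB slot.
interleaved = 256 * align_up(128 * 1024, GB)   # 256 NVDIMMs -> 256GB burned

# Igor's layout: labels packed back to back at the default 2MB MR alignment.
packed = 500 * align_up(128 * 1024, 2 * MB)    # 500 labels -> under 1GB

print(interleaved // GB)   # 256
print(packed // MB)        # 1000
```

Under these assumptions, 500 tightly packed 2MB-aligned labels fit in 1000MB, i.e. within a single 1GB range, while DIMM-aligned labels would waste a full gigabyte each.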

> Assuming labels are mapped before the storage MemoryRegion is mapped, the
> layout with a 1GB hugepage backend and the CLI
>   -device nvdimm,size=1G -device nvdimm,size=1G -device nvdimm,size=1G
> would look like:
> 
> 0  2M  4M       1G    2G    3G    4G
> L1 | L2 | L3 ... | NV1 | NV2 | NV3 |
> 
> > so we do not emulate it in QEMU, and it is good for label performance -
> > these are the points I like. However, it also brings complexity and
> > limitations for later DSM command support, since both DSM input and
> > output need to go through a single MMIO.
> > 
> > Your idea?
> > 
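The address layout in Igor's diagram can be reproduced with a minimal first-fit allocator sketch. This is not QEMU's actual pc_dimm_get_free_addr(); the 2MB label size matches the spacing shown in the diagram and is an assumption:

```python
GB = 1 << 30
MB = 1 << 20

def align_up(addr, align):
    """Round addr up to the next multiple of align (align is a power of two)."""
    return (addr + align - 1) & ~(align - 1)

def allocate(regions, base=0):
    """First-fit allocation: return (name, start) for each
    (name, size, align) tuple, placed in order from base."""
    out = []
    addr = base
    for name, size, align in regions:
        addr = align_up(addr, align)
        out.append((name, addr))
        addr += size
    return out

# Three 2MB-aligned label regions, then three 1GB-aligned NVDIMM data regions.
regions = [("L%d" % i, 2 * MB, 2 * MB) for i in (1, 2, 3)]
regions += [("NV%d" % i, 1 * GB, 1 * GB) for i in (1, 2, 3)]

for name, start in allocate(regions):
    print(name, start // MB, "MB")
# L1 0, L2 2, L3 4, NV1 1024, NV2 2048, NV3 3072
```

This reproduces the diagram: the labels pack into the first few megabytes, and the data regions start at the 1GB, 2GB, and 3GB boundaries.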


