Re: [Qemu-block] [PATCH] Added iopmem device emulation


From: Logan Gunthorpe
Subject: Re: [Qemu-block] [PATCH] Added iopmem device emulation
Date: Fri, 4 Nov 2016 09:47:33 -0600
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Icedove/45.4.0

Hi Stefan,

On 04/11/16 04:49 AM, Stefan Hajnoczi wrote:
> QEMU already has NVDIMM support (https://pmem.io/).  It can be used both
> for passthrough and fake non-volatile memory:
> 
>   qemu-system-x86_64 \
>     -M pc,nvdimm=on \
>     -m 1024,maxmem=$((4096 * 1024 * 1024)),slots=2 \
>     -object memory-backend-file,id=mem0,mem-path=/tmp/foo,size=$((64 * 1024 * 1024)) \
>     -device nvdimm,memdev=mem0
> 
> Please explain where iopmem comes from, where the hardware spec is, etc?

Yes, we are aware of nvdimm and, yes, there are quite a few
commonalities. The difference between nvdimm and iopmem is that the
memory backing iopmem lives on a PCI device rather than being attached
to the system memory bus. Currently we are working with prototype
hardware, so there is no open spec that I'm aware of, but the concept
is really simple: a single PCI BAR directly maps volatile or
non-volatile memory.
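
To make that concrete: bringing up the emulated device mirrors the
nvdimm invocation above. A rough sketch (the drive id and property
names here are illustrative, not quoted verbatim from the patch):

  qemu-system-x86_64 \
    -drive file=/tmp/iopmem.img,format=raw,if=none,id=drive0 \
    -device iopmem,drive=drive0

The guest then sees an ordinary PCI function whose single BAR is backed
by the file.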

One of the primary motivations behind iopmem is to provide memory for
peer-to-peer transactions between PCI devices so that, for example, an
RDMA NIC could transfer data directly to storage and bypass the system
memory bus altogether.


> Perhaps you could use nvdimm instead of adding a new device?

I'm afraid not. The main purpose of this patch is to let us test
kernel drivers for this type of hardware. With nvdimm there is no PCI
device for our driver to enumerate, so the existing NVDIMM drivers
would bind instead of ours.
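
For illustration, the kernel-side enumeration looks roughly like the
sketch below. This is not code from our driver; the vendor/device IDs
are placeholders, and the probe body is reduced to just the single-BAR
mapping the device exposes:

  #include <linux/module.h>
  #include <linux/pci.h>

  #define IOPMEM_VENDOR_ID 0x1234   /* placeholder, not the real ID */
  #define IOPMEM_DEVICE_ID 0x5678   /* placeholder, not the real ID */

  static const struct pci_device_id iopmem_ids[] = {
          { PCI_DEVICE(IOPMEM_VENDOR_ID, IOPMEM_DEVICE_ID) },
          { 0, }
  };
  MODULE_DEVICE_TABLE(pci, iopmem_ids);

  static int iopmem_probe(struct pci_dev *pdev,
                          const struct pci_device_id *id)
  {
          void __iomem *mem;
          int err;

          err = pcim_enable_device(pdev);
          if (err)
                  return err;

          /* The device's (non-)volatile memory is a single BAR. */
          mem = pcim_iomap(pdev, 0, pci_resource_len(pdev, 0));
          if (!mem)
                  return -ENOMEM;

          /* ... expose the region as a block/pmem device ... */
          return 0;
  }

  static struct pci_driver iopmem_driver = {
          .name     = "iopmem",
          .id_table = iopmem_ids,
          .probe    = iopmem_probe,
  };
  module_pci_driver(iopmem_driver);
  MODULE_LICENSE("GPL");

None of this is possible with the nvdimm device, since there is no PCI
function to hang the driver off of.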

Thanks for your consideration,

Logan


