On 04.11.24 21:56, Steven Sistare wrote:
On 11/4/2024 3:15 PM, David Hildenbrand wrote:
On 04.11.24 20:51, David Hildenbrand wrote:
On 04.11.24 18:38, Steven Sistare wrote:
On 11/4/2024 5:39 AM, David Hildenbrand wrote:
On 01.11.24 14:47, Steve Sistare wrote:
Allocate anonymous memory using mmap MAP_ANON or memfd_create depending
on the value of the anon-alloc machine property. This option applies to
memory allocated as a side effect of creating various devices. It does
not apply to memory-backend-objects, whether explicitly specified on
the command line, or implicitly created by the -m command line option.
The memfd option is intended to support new migration modes, in which the
memory region can be transferred in place to a new QEMU process, by sending
the memfd file descriptor to the process. Memory contents are preserved,
and if the mode also transfers device descriptors, then pages that are
locked in memory for DMA remain locked. This behavior is a prerequisite
for supporting vfio, vdpa, and iommufd devices with the new modes.
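A rough sketch of what that toggle amounts to (illustrative only; the function name and structure here are invented, not QEMU's actual allocation path, and it assumes glibc's memfd_create() wrapper):

```c
#define _GNU_SOURCE
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Illustrative sketch of the anon-alloc toggle (not QEMU's actual code):
 * "memfd" backs the RAM with an fd that can later be sent to a new QEMU
 * process; "anon" is plain anonymous memory that cannot be transferred. */
static void *alloc_ram(const char *anon_alloc, size_t size, int *fd_out)
{
    *fd_out = -1;
    if (strcmp(anon_alloc, "memfd") == 0) {
        int fd = memfd_create("qemu-ram", 0);
        if (fd < 0) {
            return MAP_FAILED;
        }
        if (ftruncate(fd, size) < 0) {
            close(fd);
            return MAP_FAILED;
        }
        *fd_out = fd;
        /* MAP_SHARED: the contents live in the memfd, so they survive
         * handing the fd to a new process. */
        return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    }
    /* Default: anonymous private memory, as with mmap MAP_ANON today. */
    return mmap(NULL, size, PROT_READ | PROT_WRITE,
                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
}
```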
A more portable, non-Linux-specific variant of this would be to use shm,
similar to backends/hostmem-shm.c.
Likely we should be using that instead of memfd, or try hiding the
details. See below.
For this series I would prefer to use memfd and hide the details. It's a
concise (and well-tested) solution, albeit Linux-only. The code you supply
for POSIX shm would be a good follow-on patch to support other Unices.
Unless there is a reason to use memfd, we should start with the more
generic POSIX variant that is available even on systems without memfd.
Factoring stuff out as I drafted does look quite compelling.
I can help with the rework, and send it out separately, so you can focus
on the "machine toggle" as part of this series.
Of course, if we find out we need the memfd internally instead under
Linux for whatever reason later, we can use that instead.
But IIUC, the main selling point for memfd are additional features
(hugetlb, memory sealing) that you aren't even using.
FWIW, I'm looking into some details, and one difference is that shm_open()
under Linux (glibc) goes to /dev/shm, while memfd/SysV go to the kernel's
internal tmpfs mount. There is not a big difference, but there can be some
(e.g., sizing of the /dev/shm mount).
Sizing is a non-trivial difference. One can by default allocate all memory
using memfd_create; to do so using shm_open requires configuring the mount.
One step harder to use.
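For reference, on a typical Linux setup the /dev/shm tmpfs defaults to 50% of RAM, and shm_open() allocations beyond that fail with ENOSPC unless the mount is resized (the size value below is just an example):

```shell
# Resize the existing mount (requires root):
mount -o remount,size=90% /dev/shm

# Or persistently via /etc/fstab:
# tmpfs  /dev/shm  tmpfs  size=90%  0  0
```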
Yes.
This is a real issue for memory-backend-ram, and becomes an issue for the
internal RAM if memory-backend-ram has hogged all the memory.
Regarding memory-backend-ram,share=on, I assume we can use memfd if available,
but then fallback to shm_open().
Yes, and if that is a good idea, then the same should be done for internal RAM
-- memfd if available and fallback to shm_open.
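A sketch of that fallback, assuming glibc's memfd_create() wrapper (the helper name is hypothetical; QEMU's actual code would go through its own utility layer):

```c
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <limits.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Hypothetical helper (not QEMU's actual API): prefer memfd_create(),
 * fall back to shm_open() where the syscall is unavailable. Either way
 * the caller gets an fd it can ftruncate, mmap, and pass to another
 * process. */
static int shareable_ram_fd(const char *name, size_t size)
{
    int fd = memfd_create(name, 0);
    if (fd < 0 && errno == ENOSYS) {
        char path[NAME_MAX];
        snprintf(path, sizeof(path), "/%s-%d", name, getpid());
        fd = shm_open(path, O_RDWR | O_CREAT | O_EXCL, 0600);
        if (fd >= 0) {
            shm_unlink(path); /* keep the fd, drop the name */
        }
    }
    if (fd >= 0 && ftruncate(fd, size) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}
```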
Yes.
I'm hoping we can find a way where it all is rather intuitive, like:

"default-ram-share=on": behave for internal RAM just like
"memory-backend-ram,share=on".

"memory-backend-ram,share=on": use whatever mechanism we have to give us
"anonymous" memory that can be shared using an fd with another process.
Thoughts?
Agreed, though I thought I had already landed at the intuitive
specification in my patch. The user must explicitly configure
memory-backend-* to be usable with CPR, and anon-alloc controls everything
else. Now we're just riffing on the details: memfd vs shm_open, spelling
of options, and words to describe them.
Well, yes, and making it all a bit more consistent, with the "machine
option" behaving just like "memory-backend-ram,share=on".