
From: Paolo Bonzini
Subject: Re: [Qemu-devel] [PATCH v18 13/14] memory backend: fill memory backend ram fields
Date: Wed, 26 Feb 2014 14:47:28 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.2.0

On 26/02/2014 14:43, Igor Mammedov wrote:
> On Wed, 26 Feb 2014 13:45:38 +0100
> Paolo Bonzini <address@hidden> wrote:
>
>> On 26/02/2014 13:31, Igor Mammedov wrote:
>>>> The problem is that some backends might not be handled the same way.
>>>> For example, not all backends might produce a single void*/size_t pair
>>>> for the entire region.  Think of a "composite" backend that produces a
>>>> large memory region from two smaller ones.
>>>
>>> I'd prefer to keep backends simple, with a 1:1 mapping to memory regions.
>>
>> I agree.  However, not all backends may have a mapping to a RAM memory
>> region.  A composite backend could create a container memory region
>> whose children are other HostMemoryBackend objects.
>>
>>> Is there a need for a composite one, or something similar?
>>
>> I've heard of users that want a node backed partially by hugetlbfs and
>> partially by regular RAM.  Not sure why.
>
> Isn't the issue here how the backend is mapped into GPA?  Well, that is
> not the backend's job.
>
> Once one starts to put layout into the backend (alignment,
> non-contiguously mapped memory regions inside a container, ...), the
> HPA->GPA mapping gets complicated.
>
> It would be better to use simple building blocks and model it as
> 2 separate backends (ram + hugetlbfs) and 2 corresponding DIMM devices.
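
For illustration, the "2 separate backends + 2 DIMM devices" split described
above maps roughly onto the -object/-device syntax that QEMU's memory hotplug
later used; the IDs, sizes and hugetlbfs path below are made up:

  qemu-system-x86_64 -m 2G,slots=2,maxmem=6G \
    -object memory-backend-ram,id=mem-ram0,size=1G \
    -object memory-backend-file,id=mem-huge0,size=1G,mem-path=/dev/hugepages \
    -device pc-dimm,id=dimm0,memdev=mem-ram0 \
    -device pc-dimm,id=dimm1,memdev=mem-huge0

Each backend stays a simple 1:1 source of host memory, and the two cold-plugged
DIMM devices carry the guest-visible (GPA) placement.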

Right, I had forgotten that you can have cold-plugged DIMM devices. That's a nice solution, also because it simplifies passing the GPA configuration down to the guest.

How would that translate to sharing HostMemoryBackend code for memory policies? Which of Hu Tao's proposals do you like best?

Paolo
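
The "composite backend" idea discussed above, a pure container memory region
whose children are the regions of other HostMemoryBackend objects, could look
roughly like the sketch below.  This is only an illustration: the helper name
composite_init_region is made up, and the accessors follow current QEMU headers
(exec/memory.h, sysemu/hostmem.h) rather than the code under review in 2014.

#include "qemu/osdep.h"
#include "exec/memory.h"
#include "sysemu/hostmem.h"

/* Build one large region out of two existing host-memory backends by
 * nesting their MemoryRegions inside a pure container region. */
static void composite_init_region(Object *owner, MemoryRegion *container,
                                  HostMemoryBackend *lower,
                                  HostMemoryBackend *upper)
{
    MemoryRegion *lo = host_memory_backend_get_memory(lower);
    MemoryRegion *hi = host_memory_backend_get_memory(upper);
    uint64_t lo_size = memory_region_size(lo);
    uint64_t hi_size = memory_region_size(hi);

    /* The container owns no RAM of its own; it only holds subregions. */
    memory_region_init(container, owner, "composite-ram",
                       lo_size + hi_size);

    /* Map the two children back to back.  The result has no single
     * pointer + size pair describing it, which is exactly the case
     * raised in the discussion above. */
    memory_region_add_subregion(container, 0, lo);
    memory_region_add_subregion(container, lo_size, hi);
}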


