[Qemu-devel] Re: [PATCHv2 10/12] tap: add vhost/vhostfd options


From: Paul Brook
Subject: [Qemu-devel] Re: [PATCHv2 10/12] tap: add vhost/vhostfd options
Date: Tue, 2 Mar 2010 14:33:51 +0000
User-agent: KMail/1.12.4 (Linux/2.6.32-trunk-amd64; KDE/4.3.4; x86_64; ; )

> > I think this assumption is unsafe. There are machines where RAM mappings
> > can change. It's not uncommon for a chip select (i.e. physical memory
> > address region) to be switchable to several different sources, one of
> > which may be RAM.  I'm pretty sure this functionality is present (but not
> > actually implemented) on some of the current qemu targets.
> 
> But I presume this is more about switching a DIMM to point at a different
> region in memory.  It's a rare event, similar to memory hotplug.

Approximately that, yes. One use is to start with ROM at address zero, then 
switch to RAM once you've initialised the DRAM controller (using a small 
internal SRAM as a trampoline). 
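
To make that concrete, here is a rough, purely illustrative sketch of such a 
switchable chip select; none of the names come from a real board model, it 
just shows the same window at address zero decoding to either ROM or DRAM 
depending on a remap bit:

/* Purely illustrative: a switchable chip select at guest physical
 * address zero.  None of these names come from a real QEMU board
 * model; the point is just that the same address range decodes to
 * ROM before the remap and to DRAM after it. */

#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint8_t *rom;         /* boot ROM image                         */
    uint8_t *dram;        /* DRAM, usable once the controller is up */
    bool     remap_done;  /* set when the guest flips the remap bit */
} BoardState;

/* Guest write to the (imaginary) remap register in the DRAM
 * controller: from now on, address zero decodes to RAM. */
static void cs0_remap_write(BoardState *s, uint64_t val)
{
    s->remap_done = (val & 1) != 0;
}

/* Translate an offset in the chip-select window to a host pointer.
 * A pointer obtained for DMA before the flip is stale afterwards,
 * which is exactly the hazard being discussed here. */
static uint8_t *cs0_translate(BoardState *s, uint64_t offset)
{
    return (s->remap_done ? s->dram : s->rom) + offset;
}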
 
> Either way, if there are platforms where we don't handle RAM with the new
> RAM API, that's okay.
> 
> > I agree that changing RAM mappings under an active DMA is a fairly
> > suspect thing to do. However, I think we need to avoid caching mappings
> > between separate DMA transactions, i.e. when the guest can know that no
> > DMA will occur, it must be able to safely remap things.
> 
> One thing I like about having a new RAM API is that it gives us a stronger
> interface than what we have today.  Today, we don't have a strong
> guarantee that mappings won't change during a DMA transaction.
> 
> With a new API, cpu_physical_memory_map() changes semantics: it only
> returns direct pointers for static RAM mappings.  Everything else is
> bounced, which guarantees that an address can't change during DMA.

Doesn't this mean that only the initial RAM is directly DMA-able?
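
As a point of reference, here is a minimal caller-side sketch of the 
"map directly or bounce" pattern being proposed.  The prototypes are written 
from memory of the current tree, the typedef is a stand-in for QEMU's own, 
and dma_read_sketch() is a made-up helper; it only illustrates the fast path 
(direct pointer into static RAM) versus the slow path (copy, i.e. bounced):

#include <stdint.h>
#include <string.h>

/* Assumed prototypes, written from memory of the current headers;
 * target_phys_addr_t stands in for QEMU's own typedef. */
typedef uint64_t target_phys_addr_t;
void *cpu_physical_memory_map(target_phys_addr_t addr,
                              target_phys_addr_t *plen, int is_write);
void cpu_physical_memory_unmap(void *buffer, target_phys_addr_t len,
                               int is_write, target_phys_addr_t access_len);
void cpu_physical_memory_read(target_phys_addr_t addr,
                              uint8_t *buf, int len);

/* Hypothetical helper: read 'len' bytes of guest memory for a device
 * model.  If the region is static RAM we get a direct host pointer;
 * otherwise we fall back to a plain copy, so no long-lived pointer
 * into guest memory exists while the guest might remap the area. */
static void dma_read_sketch(target_phys_addr_t addr, uint8_t *dst, int len)
{
    target_phys_addr_t plen = len;
    void *p = cpu_physical_memory_map(addr, &plen, 0 /* is_write */);

    if (p && plen == (target_phys_addr_t)len) {
        memcpy(dst, p, len);                        /* fast path: direct map */
        cpu_physical_memory_unmap(p, plen, 0, len);
    } else {
        if (p) {
            cpu_physical_memory_unmap(p, plen, 0, 0);
        }
        cpu_physical_memory_read(addr, dst, len);   /* slow path: bounced copy */
    }
}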

While memory hotplug (and unplug) may be an infrequent event, having the 
majority of RAM be hotplugged seems much more likely.  Giving each guest a 
small base allocation, then hotplugging the rest as required/paid for, seems 
an entirely reasonable setup.  I use VMs which are normally hosted on a 
multi-core machine with buckets of RAM, but may be migrated to a single-CPU 
host with limited RAM in an emergency.

Paul



