Re: [Qemu-devel] Re: [PATCHv2 10/12] tap: add vhost/vhostfd options


From: Marcelo Tosatti
Subject: Re: [Qemu-devel] Re: [PATCHv2 10/12] tap: add vhost/vhostfd options
Date: Tue, 2 Mar 2010 13:12:47 -0300
User-agent: Mutt/1.5.20 (2009-08-17)

On Sun, Feb 28, 2010 at 02:57:56PM -0600, Anthony Liguori wrote:
> On 02/28/2010 11:19 AM, Michael S. Tsirkin wrote:
> >>Both have  security implications so I think it's important that they
> >>be addressed.   Otherwise, I'm pretty happy with how things are.
> >Care suggesting some solutions?
> 
> The obvious thing to do would be to use the memory notifier in vhost
> to keep track of whenever something remaps the ring's memory region
> and if that happens, issue an ioctl to vhost to change the location
> of the ring.  Also, you would need to merge the vhost slot
> management code with the KVM slot management code.

There are no security implications as long as vhost uses the qemu
process mappings.

But your point is valid.
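
(For illustration, a minimal sketch of the notifier-driven update described
above.  The ring_region_remapped() hook and the gpa_to_hva() translation
helper are hypothetical; only struct vhost_vring_addr and the
VHOST_SET_VRING_ADDR ioctl come from the kernel's vhost interface.)

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

struct ring_state {
    int vhostfd;                    /* fd for /dev/vhost-net */
    unsigned int index;             /* virtqueue index */
    uint64_t desc_gpa, avail_gpa, used_gpa;  /* guest-physical ring addresses */
};

/* Hypothetical hook, invoked from a memory notifier whenever the region
 * backing the ring is remapped: recompute the host-virtual addresses and
 * push them back into the kernel. */
static int ring_region_remapped(struct ring_state *s,
                                void *(*gpa_to_hva)(uint64_t gpa))
{
    struct vhost_vring_addr addr = {
        .index           = s->index,
        .desc_user_addr  = (uint64_t)(uintptr_t)gpa_to_hva(s->desc_gpa),
        .avail_user_addr = (uint64_t)(uintptr_t)gpa_to_hva(s->avail_gpa),
        .used_user_addr  = (uint64_t)(uintptr_t)gpa_to_hva(s->used_gpa),
    };

    return ioctl(s->vhostfd, VHOST_SET_VRING_ADDR, &addr);
}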

> I'm sympathetic to your arguments though.  As qemu is today, the
> above is definitely the right thing to do.  But ram is always ram
> and ram always has a fixed (albeit non-linear) mapping within a
> guest.  We can probably be smarter in qemu.
> 
> There are areas of MMIO/ROM address space that *sometimes* end up
> behaving like ram, but that's a different case.  The one other case
> to consider is ram hot add/remove, in which case ram may be removed
> or added (but its location will never change during its lifetime).
> 
> Here's what I'll propose, and I'd really like to hear what Paul
> thinks about it before we start down this path.
> 
> I think we should add a new API that's:
> 
> void cpu_ram_add(target_phys_addr_t start, ram_addr_t size);
> 
> This API would do two things.  It would call qemu_ram_alloc() and
> cpu_register_physical_memory() just as code does today.  It would
> also add this region into a new table.
> 
> There would be:
> 
> void *cpu_ram_map(target_phys_addr_t start, ram_addr_t *size);
> void cpu_ram_unmap(void *mem);
> 
> These calls would use this new table to look up ram addresses.  These
> mappings are valid for as long as the guest is running.  Within the
> table, each region would have a reference count.  When it comes time
> to do hot add/remove, we would wait to remove a region until the
> reference count went to zero to avoid unmapping during DMA.
>
> cpu_ram_add() never gets called with overlapping regions.  We'll
> modify cpu_register_physical_memory() to ensure that a ram mapping
> is never changed after initial registration.
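
(To make the proposed interface concrete, a rough sketch of the refcounted
table it implies.  The RamRegion layout, MAX_RAM_REGIONS and the lookup code
are made up for illustration; qemu_ram_alloc(), qemu_get_ram_ptr() and
cpu_register_physical_memory() are assumed to keep their current signatures.)

#define MAX_RAM_REGIONS 64          /* illustrative limit */

typedef struct RamRegion {
    target_phys_addr_t start;       /* guest-physical base */
    ram_addr_t size;
    ram_addr_t offset;              /* offset returned by qemu_ram_alloc() */
    int refcount;                   /* outstanding cpu_ram_map() users */
} RamRegion;

static RamRegion ram_regions[MAX_RAM_REGIONS];
static int nb_ram_regions;

void cpu_ram_add(target_phys_addr_t start, ram_addr_t size)
{
    RamRegion *r = &ram_regions[nb_ram_regions++];

    r->start = start;
    r->size = size;
    r->refcount = 0;
    r->offset = qemu_ram_alloc(size);
    cpu_register_physical_memory(start, size, r->offset | IO_MEM_RAM);
}

void *cpu_ram_map(target_phys_addr_t start, ram_addr_t *size)
{
    int i;

    for (i = 0; i < nb_ram_regions; i++) {
        RamRegion *r = &ram_regions[i];

        if (start >= r->start && start < r->start + r->size) {
            *size = r->start + r->size - start;   /* clamp to the region */
            r->refcount++;
            return qemu_get_ram_ptr(r->offset + (start - r->start));
        }
    }
    return NULL;                    /* not ram */
}

void cpu_ram_unmap(void *mem)
{
    int i;

    for (i = 0; i < nb_ram_regions; i++) {
        RamRegion *r = &ram_regions[i];
        uint8_t *base = qemu_get_ram_ptr(r->offset);

        if ((uint8_t *)mem >= base && (uint8_t *)mem < base + r->size) {
            /* Hot-remove would only free the region once this drops to 0. */
            r->refcount--;
            return;
        }
    }
}

(Under this scheme a device such as virtio-net would call cpu_ram_map() once
at ring setup and keep the pointer, dropping the reference only when the ring
goes away.)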

What is the difference between your proposal and
cpu_physical_memory_map?

What I'd like to see is a binding between cpu_physical_memory_map and qdev
devices, so that you can use different host memory mappings for device
context and for CPU context (and provide the possibility of, say, mapping
a certain memory region as read-only).
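
(For comparison, roughly how a device maps guest memory for DMA with the
existing interface.  The dma_read_guest_buffer() helper and its consume
callback are invented for illustration; cpu_physical_memory_map() and
cpu_physical_memory_unmap() are the current calls.)

static int dma_read_guest_buffer(target_phys_addr_t gpa, target_phys_addr_t len,
                                 void (*consume)(const void *buf, target_phys_addr_t n))
{
    target_phys_addr_t plen = len;
    void *hva = cpu_physical_memory_map(gpa, &plen, 0 /* is_write */);

    if (!hva) {
        return -1;                  /* nothing mappable at this address */
    }
    if (plen < len) {
        /* Crosses a mapping boundary (or hit the bounce buffer); a real
         * device would loop or fall back to cpu_physical_memory_rw(). */
        cpu_physical_memory_unmap(hva, plen, 0, 0);
        return -1;
    }

    consume(hva, len);              /* device consumes the guest data */
    cpu_physical_memory_unmap(hva, plen, 0, 0);
    return 0;
}

(A per-qdev binding could hang the mapping, and whether it is writable, off
the device instead of passing is_write on every call.)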

> vhost no longer needs to bother keeping the dynamic table up to date
> so it removes all of the slot management code from vhost.  KVM still
> needs the code to handle rom/ram mappings but we can take care of
> that next.  virtio-net's userspace code can do the same thing as
> vhost and only map the ring once which should be a big performance
> improvement.

Pinning the host virtual addresses as you propose reduces flexibility.
See above about different mappings for DMA/CPU contexts.



