Re: [Qemu-devel] [PATCH v4 03/11] dataplane: add host memory mapping code


From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [PATCH v4 03/11] dataplane: add host memory mapping code
Date: Wed, 5 Dec 2012 09:13:39 +0100
User-agent: Mutt/1.5.21 (2010-09-15)

On Thu, Nov 29, 2012 at 02:57:05PM +0200, Michael S. Tsirkin wrote:
> On Thu, Nov 29, 2012 at 02:54:26PM +0200, Michael S. Tsirkin wrote:
> > On Thu, Nov 29, 2012 at 01:45:19PM +0100, Stefan Hajnoczi wrote:
> > > On Thu, Nov 29, 2012 at 02:33:11PM +0200, Michael S. Tsirkin wrote:
> > > > On Thu, Nov 22, 2012 at 04:16:44PM +0100, Stefan Hajnoczi wrote:
> > > > > The data plane thread needs to map guest physical addresses to host
> > > > > pointers.  Normally this is done with cpu_physical_memory_map() but the
> > > > > function assumes the global mutex is held.  The data plane thread does
> > > > > not touch the global mutex and therefore needs a thread-safe memory
> > > > > mapping mechanism.
> > > > > 
> > > > > Hostmem registers a MemoryListener similar to how vhost collects and
> > > > > pushes memory region information into the kernel.  There is a
> > > > > fine-grained lock on the regions list which is held during lookup and
> > > > > when installing a new regions list.
> > > > > 
> > > > > When the physical memory map changes the MemoryListener callbacks are
> > > > > invoked.  They build up a new list of memory regions which is finally
> > > > > installed when the list has been completed.
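
For illustration, a minimal sketch of that scheme in C (the type and
function names here are assumptions made for this sketch, not quotes
from the patch):

typedef struct {
    hwaddr guest_addr;
    hwaddr size;
    void *host_addr;
} HostmemRegion;

typedef struct {
    QemuMutex current_regions_lock;   /* fine-grained lock on the list */
    HostmemRegion *current_regions;   /* list used for lookups */
    size_t num_current_regions;

    HostmemRegion *new_regions;       /* built up by listener callbacks */
    size_t num_new_regions;

    MemoryListener listener;
} Hostmem;

/* Thread-safe guest physical address to host pointer translation */
static void *hostmem_lookup(Hostmem *hostmem, hwaddr phys, hwaddr len)
{
    void *host_addr = NULL;
    size_t i;

    qemu_mutex_lock(&hostmem->current_regions_lock);
    for (i = 0; i < hostmem->num_current_regions; i++) {
        HostmemRegion *r = &hostmem->current_regions[i];
        if (phys >= r->guest_addr &&
            phys + len <= r->guest_addr + r->size) {
            host_addr = (char *)r->host_addr + (phys - r->guest_addr);
            break;
        }
    }
    qemu_mutex_unlock(&hostmem->current_regions_lock);
    return host_addr;
}

/* .commit() callback: atomically install the completed regions list */
static void hostmem_commit(MemoryListener *listener)
{
    Hostmem *hostmem = container_of(listener, Hostmem, listener);

    qemu_mutex_lock(&hostmem->current_regions_lock);
    g_free(hostmem->current_regions);
    hostmem->current_regions = hostmem->new_regions;
    hostmem->num_current_regions = hostmem->num_new_regions;
    hostmem->new_regions = NULL;
    hostmem->num_new_regions = 0;
    qemu_mutex_unlock(&hostmem->current_regions_lock);
}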
> > > > > 
> > > > > Note that this approach is not safe across memory hotplug because
> > > > > mapped pointers may still be in use across memory unplug.  However,
> > > > > this is currently a problem for QEMU in general and needs to be
> > > > > addressed in the future.
> > > > 
> > > > Sounds like a serious problem.
> > > > I'm not sure I understand - are you saying this is currently a problem
> > > > for QEMU virtio?  Could you give an example please?
> > > 
> > > This is a limitation of the memory API but cannot be triggered by users
> > > today since we don't support memory hot unplug.  I'm just explaining
> > > that virtio-blk-data-plane has the same issue as hw/virtio-blk.c or any
> > > other device emulation code here.
> > > 
> > > Some more detail:
> > > 
> > > The issue is that hw/virtio-blk.c submits an asynchronous I/O request on
> > > the host with the guest buffer.  Then virtio-blk emulation returns back
> > > to the caller and continues QEMU execution.
> > > 
> > > It is unsafe to unplug memory while the I/O request is pending since
> > > there's no mechanism (e.g. refcount) to wait until the guest memory is
> > > no longer in use.
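
To make the window concrete, here is a simplified, hypothetical sketch
of the pattern (the Request struct and the submit_request() and
request_complete() names are made up for illustration; the map/unmap
and aio calls are the QEMU APIs involved):

typedef struct {
    void *buf;              /* host pointer into guest RAM */
    hwaddr len;
    QEMUIOVector qiov;
} Request;

static void request_complete(void *opaque, int ret)
{
    Request *req = opaque;

    /* Only here, possibly much later, does the guest memory become
     * unused again */
    cpu_physical_memory_unmap(req->buf, req->len, 1, req->len);
    qemu_iovec_destroy(&req->qiov);
    g_free(req);
}

static void submit_request(BlockDriverState *bs, int64_t sector,
                           hwaddr guest_addr, hwaddr len)
{
    Request *req = g_new0(Request, 1);

    /* Map the guest buffer to a host pointer (a real implementation
     * must also handle partial mappings and bounce buffers) */
    req->len = len;
    req->buf = cpu_physical_memory_map(guest_addr, &req->len, 1);

    qemu_iovec_init(&req->qiov, 1);
    qemu_iovec_add(&req->qiov, req->buf, req->len);

    /* The request is now in flight holding a pointer into guest RAM;
     * nothing (e.g. a refcount) pins that RAM until request_complete()
     * runs */
    bdrv_aio_readv(bs, sector, &req->qiov, req->len / BDRV_SECTOR_SIZE,
                   request_complete, req);
}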
> > > 
> > > This is a known issue.  There's no way to trigger a problem today but we
> > > need to eventually enhance QEMU's memory API to handle this case.
> > > 
> > > Stefan
> > 
> > For this problem we would simply need to flush outstanding aio
> > before freeing memory for unplug; no refcount is necessary.
> > 
> > This patch, however, introduces the issue in the frontend, and it
> > looks like there won't be any way to solve it without changing the
> > API.
> 
> To clarify: as you say, this is not triggerable, so I don't think it
> is strictly required to address it at this point, though it should
> not be too hard: just register a callback that flushes the frontend
> processing.
> 
> But if you can't code it at this point, please add a TODO comment in
> the code.

Yes, I'm adding a TODO, and your suggestion to flush the frontend
sounds like a simple solution - we already quiesce at other critical
points like live migration.

Stefan
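
For illustration, a rough sketch of what that flush could look like;
the unplug notifier hook is hypothetical, since no such API exists at
this point:

/* Hypothetical callback invoked before guest memory is unplugged */
static void virtio_blk_data_plane_quiesce(void *opaque)
{
    VirtIOBlockDataPlane *s = opaque;

    /* Stop the dataplane thread so no new requests are started */
    virtio_blk_data_plane_stop(s);

    /* Wait for all in-flight aio to complete, as is already done at
     * other quiesce points such as live migration */
    bdrv_drain_all();
}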


