
Re: [Qemu-block] [PATCH 3/4] util: Add VFIO helper library


From: Paolo Bonzini
Subject: Re: [Qemu-block] [PATCH 3/4] util: Add VFIO helper library
Date: Wed, 21 Dec 2016 18:02:24 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.5.1


On 21/12/2016 17:19, Fam Zheng wrote:
> It's clever! It'd be a bit more complicated than that, though. Things like
> queues etc. in block/nvme.c have to be preserved, and if we already ensure
> that, RAM blocks can be preserved similarly, but indeed bounce buffers can
> be handled that way. I still need to think about how to make sure none of
> the invalidated IOVA addresses are in use by other requests.

Hmm, that's true.  As you said, we'll probably want to split the IOVA
space in two, with a relatively small part for "volatile" addresses.

You can add two counters, one per phase, that track how many requests are
using volatile space.  When it's time to do the VFIO_IOMMU_UNMAP_DMA, you
do something like:

    if (vfio->next_phase == vfio->current_phase) {
        vfio->next_phase = !vfio->current_phase;
        while (vfio->request_counter[vfio->current_phase] != 0) {
            wait on CoQueue
        }
        ioctl(VFIO_IOMMU_UNMAP_DMA)
        vfio->current_phase = vfio->next_phase;
        wake up everyone on CoQueue
    } else {
        /* wait for the unmap to happen */
        while (vfio->next_phase != vfio->current_phase) {
            wait on CoQueue
        }
    }

As an optimization, incrementing/decrementing request_counter can be
delayed until you find an item of the QEMUIOVector that needs a volatile
IOVA.  Then it should never be incremented in practice during guest
execution.

Paolo

> Also I wonder how expensive the huge VFIO_IOMMU_UNMAP_DMA is. In the worst
> case the "throwaway" IOVAs can be limited to a small range.


