From: Alexey Kardashevskiy
Subject: Re: [Qemu-devel] [PATCH v4 11/12] spapr vfio: enable for spapr
Date: Fri, 13 Sep 2013 21:34:41 +1000
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130805 Thunderbird/17.0.8

On 09/11/2013 08:13 AM, Alex Williamson wrote:
> On Tue, 2013-09-10 at 19:00 +1000, Alexey Kardashevskiy wrote:
>> On 09/06/2013 05:05 AM, Alex Williamson wrote:
>>> On Fri, 2013-08-30 at 20:15 +1000, Alexey Kardashevskiy wrote:
>>>> This turns the sPAPR support on and enables VFIO container use
>>>> in the kernel.
>>>>
>>>> Signed-off-by: Alexey Kardashevskiy <address@hidden>
>>>> ---
>>>> Changes:
>>>> v4:
>>>> * fixed format string to use %m which is a glibc extension:
>>>> "Print output of strerror(errno). No argument is required."
>>>> ---
>>>>  hw/misc/vfio.c | 30 ++++++++++++++++++++++++++++++
>>>>  1 file changed, 30 insertions(+)
>>>>
>>>> diff --git a/hw/misc/vfio.c b/hw/misc/vfio.c
>>>> index 4210471..882da70 100644
>>>> --- a/hw/misc/vfio.c
>>>> +++ b/hw/misc/vfio.c
>>>> @@ -2815,6 +2815,36 @@ static int vfio_connect_container(VFIOGroup *group, VFIOAddressSpace *space)
>>>>  
>>>>          memory_listener_register(&container->iommu_data.listener,
>>>>                                   container->space->as);
>>>> +    } else if (ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_SPAPR_TCE_IOMMU)) {
>>>> +        ret = ioctl(group->fd, VFIO_GROUP_SET_CONTAINER, &fd);
>>>> +        if (ret) {
>>>> +            error_report("vfio: failed to set group container: %m");
>>>> +            g_free(container);
>>>> +            close(fd);
>>>> +            return -errno;
>>>> +        }
>>>> +
>>>> +        ret = ioctl(fd, VFIO_SET_IOMMU, VFIO_SPAPR_TCE_IOMMU);
>>>> +        if (ret) {
>>>> +            error_report("vfio: failed to set iommu for container: %m");
>>>> +            g_free(container);
>>>> +            close(fd);
>>>> +            return -errno;
>>>> +        }
>>>> +
>>>> +        ret = ioctl(fd, VFIO_IOMMU_ENABLE);
>>>> +        if (ret) {
>>>> +            error_report("vfio: failed to enable container: %m");
>>>> +            g_free(container);
>>>> +            close(fd);
>>>> +            return -errno;
>>>
>>> These (and the copies that already exist in this function) are screaming
>>> for a goto.
>>
>>
>> Heh. So there should be 2 patches then: one adding gotos to the existing
>> code, and another adding the new functionality with gotos already in place.
>> I can do that; is that what you suggest?
> 
> Yes please.
> 
>> What about the rest of the series? Next time I will split "[Qemu-devel]
>> [PATCH v4 05/12] spapr_pci: convert init to realize", but the rest will
>> stay the same. My understanding is that Alex Graf expects you to
>> review the whole thing (ack/sob? not sure how this all works) before he
>> pulls it into his tree.
> 
> Oh, I've been picking out the vfio ones to review.  Ok, on the next
> revision I'll review them all, but please still split it into a series for
> vfio and a series for ppc.  Thanks,
> 
> Alex
> 
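
To make sure we mean the same thing: below is a rough sketch of how the
error paths in vfio_connect_container() could collapse onto a single goto
label once both patches are in. The label name and the trimmed context are
only illustrative, not the actual patch:

    } else if (ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_SPAPR_TCE_IOMMU)) {
        ret = ioctl(group->fd, VFIO_GROUP_SET_CONTAINER, &fd);
        if (ret) {
            error_report("vfio: failed to set group container: %m");
            ret = -errno;                /* capture errno before cleanup */
            goto free_container_exit;
        }

        ret = ioctl(fd, VFIO_SET_IOMMU, VFIO_SPAPR_TCE_IOMMU);
        if (ret) {
            error_report("vfio: failed to set iommu for container: %m");
            ret = -errno;
            goto free_container_exit;
        }

        ret = ioctl(fd, VFIO_IOMMU_ENABLE);
        if (ret) {
            error_report("vfio: failed to enable container: %m");
            ret = -errno;
            goto free_container_exit;
        }
        /* ... rest of the sPAPR branch ... */
    }

    /* ... success path unchanged ... */
    return 0;

free_container_exit:
    g_free(container);
    close(fd);
    return ret;

As a side effect, capturing errno into ret before the cleanup avoids any
chance of close() clobbering it, which the current copy-pasted paths do
not guard against.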

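By the way, to spell out the %m change from the v4 changelog above: with
glibc, %m in a printf-style format expands to strerror(errno) and consumes
no argument. A tiny standalone illustration (plain fprintf here, not
QEMU's error_report):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    if (open("/nonexistent", O_RDONLY) < 0) {
        fprintf(stderr, "open failed: %m\n");                  /* glibc shortcut */
        fprintf(stderr, "open failed: %s\n", strerror(errno)); /* portable form  */
    }
    return 0;
}
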
This is the whole set of what I have on this matter:

KVM: spapr-vfio: enable in-kernel acceleration
spapr-vfio: enable for spapr

spapr-vfio: add spapr-pci-vfio-host-bridge to support vfio
spapr-iommu: add SPAPR VFIO IOMMU device
spapr-iommu: introduce SPAPR_TCE_TABLE class
spapr-pci: converts fprintf to error_report
spapr-pci: add spapr_pci trace
spapr-pci: introduce a finish_realize() callback
spapr-pci: convert init() callback to realize()

spapr vfio: add vfio_container_spapr_get_info()
vfio: Add guest side IOMMU support
vfio: Create VFIOAddressSpace objects as needed
vfio: Introduce VFIO address spaces
vfio: rework to have error paths
vfio: Fix 128 bit handling
vfio: Fix debug output for int128 values
int128: add int128_exts64()

There are 3 sets: vfio stuff (8 patches), spapr-pci rework (7 patches) and
finally the enablement (2 patches). Post them as 3 series? Or split the
128-bit things into a separate series as well? :)


-- 
Alexey


