Re: [Qemu-devel] [PATCH 3/4] arm: Add PCIe host bridge in virt machine


From: alvise rigo
Subject: Re: [Qemu-devel] [PATCH 3/4] arm: Add PCIe host bridge in virt machine
Date: Thu, 8 Jan 2015 14:36:10 +0100

On Thu, Jan 8, 2015 at 1:55 PM, Claudio Fontana
<address@hidden> wrote:
> (added cc: Alvise which I mistakenly assumed was in Cc: already)
>
> On 07.01.2015 22:47, Alexander Graf wrote:
>>
>>
>> On 07.01.15 16:52, Claudio Fontana wrote:
>>> On 06.01.2015 17:03, Alexander Graf wrote:
>>>> Now that we have a working "generic" PCIe host bridge driver, we can plug
>>>> it into ARM's virt machine to always have PCIe available to normal ARM VMs.
>>>>
>>>> I've successfully managed to expose a Bochs VGA device, XHCI and an e1000
>>>> into an AArch64 VM with this and they all lived happily ever after.
>>>>
>>>> Signed-off-by: Alexander Graf <address@hidden>
>>>>
>>>> ---
>>>>
>>>> Linux 3.19 only supports the generic PCIe host bridge driver for 32-bit ARM
>>>> systems. If you want to use it with AArch64 guests, please apply the following
>>>> patch or wait until upstream has cleaned up the code properly:
>>>>
>>>>   http://csgraf.de/agraf/pci/pci-3.19.patch
>>>> ---
>>>>  default-configs/arm-softmmu.mak |  2 +
>>>>  hw/arm/virt.c                   | 83 ++++++++++++++++++++++++++++++++++++++---
>>>>  2 files changed, 80 insertions(+), 5 deletions(-)
>>>>
>>>> diff --git a/default-configs/arm-softmmu.mak b/default-configs/arm-softmmu.mak
>>>> index f3513fa..7671ee2 100644
>>>> --- a/default-configs/arm-softmmu.mak
>>>> +++ b/default-configs/arm-softmmu.mak
>>>> @@ -82,6 +82,8 @@ CONFIG_ZYNQ=y
>>>>  CONFIG_VERSATILE_PCI=y
>>>>  CONFIG_VERSATILE_I2C=y
>>>>
>>>> +CONFIG_PCI_GENERIC=y
>>>> +
>>>>  CONFIG_SDHCI=y
>>>>  CONFIG_INTEGRATOR_DEBUG=y
>>>>
>>>> diff --git a/hw/arm/virt.c b/hw/arm/virt.c
>>>> index 2353440..b7635ac 100644
>>>> --- a/hw/arm/virt.c
>>>> +++ b/hw/arm/virt.c
>>>> @@ -42,6 +42,7 @@
>>>>  #include "exec/address-spaces.h"
>>>>  #include "qemu/bitops.h"
>>>>  #include "qemu/error-report.h"
>>>> +#include "hw/pci-host/gpex.h"
>>>>
>>>>  #define NUM_VIRTIO_TRANSPORTS 32
>>>>
>>>> @@ -69,6 +70,7 @@ enum {
>>>>      VIRT_MMIO,
>>>>      VIRT_RTC,
>>>>      VIRT_FW_CFG,
>>>> +    VIRT_PCIE,
>>>>  };
>>>>
>>>>  typedef struct MemMapEntry {
>>>> @@ -129,13 +131,14 @@ static const MemMapEntry a15memmap[] = {
>>>>      [VIRT_FW_CFG] =     { 0x09020000, 0x0000000a },
>>>>      [VIRT_MMIO] =       { 0x0a000000, 0x00000200 },
>>>>      /* ...repeating for a total of NUM_VIRTIO_TRANSPORTS, each of that size */
>>>> -    /* 0x10000000 .. 0x40000000 reserved for PCI */
>>>> +    [VIRT_PCIE] =       { 0x10000000, 0x30000000 },
>>>>      [VIRT_MEM] =        { 0x40000000, 30ULL * 1024 * 1024 * 1024 },
>>>>  };
>>>>
>>>>  static const int a15irqmap[] = {
>>>>      [VIRT_UART] = 1,
>>>>      [VIRT_RTC] = 2,
>>>> +    [VIRT_PCIE] = 3,
>>>>      [VIRT_MMIO] = 16, /* ...to 16 + NUM_VIRTIO_TRANSPORTS - 1 */
>>>>  };
>>>>
>>>> @@ -312,7 +315,7 @@ static void fdt_add_cpu_nodes(const VirtBoardInfo *vbi)
>>>>      }
>>>>  }
>>>>
>>>> -static void fdt_add_gic_node(const VirtBoardInfo *vbi)
>>>> +static uint32_t fdt_add_gic_node(const VirtBoardInfo *vbi)
>>>>  {
>>>>      uint32_t gic_phandle;
>>>>
>>>> @@ -331,9 +334,11 @@ static void fdt_add_gic_node(const VirtBoardInfo *vbi)
>>>>                                       2, vbi->memmap[VIRT_GIC_CPU].base,
>>>>                                       2, vbi->memmap[VIRT_GIC_CPU].size);
>>>>      qemu_fdt_setprop_cell(vbi->fdt, "/intc", "phandle", gic_phandle);
>>>> +
>>>> +    return gic_phandle;
>>>>  }
>>>>
>>>> -static void create_gic(const VirtBoardInfo *vbi, qemu_irq *pic)
>>>> +static uint32_t create_gic(const VirtBoardInfo *vbi, qemu_irq *pic)
>>>>  {
>>>>      /* We create a standalone GIC v2 */
>>>>      DeviceState *gicdev;
>>>> @@ -380,7 +385,7 @@ static void create_gic(const VirtBoardInfo *vbi, qemu_irq *pic)
>>>>          pic[i] = qdev_get_gpio_in(gicdev, i);
>>>>      }
>>>>
>>>> -    fdt_add_gic_node(vbi);
>>>> +    return fdt_add_gic_node(vbi);
>>>>  }
>>>>
>>>>  static void create_uart(const VirtBoardInfo *vbi, qemu_irq *pic)
>>>> @@ -556,6 +561,71 @@ static void create_fw_cfg(const VirtBoardInfo *vbi)
>>>>      g_free(nodename);
>>>>  }
>>>>
>>>> +static void create_pcie(const VirtBoardInfo *vbi, qemu_irq *pic,
>>>> +                        uint32_t gic_phandle)
>>>> +{
>>>> +    hwaddr base = vbi->memmap[VIRT_PCIE].base;
>>>> +    hwaddr size = vbi->memmap[VIRT_PCIE].size;
>>>> +    hwaddr size_ioport = 64 * 1024;
>>>> +    hwaddr size_ecam = PCIE_MMCFG_SIZE_MIN;
>>>> +    hwaddr size_mmio = size - size_ecam - size_ioport;
>>>> +    hwaddr base_mmio = base;
>>>> +    hwaddr base_ioport = base_mmio + size_mmio;
>>>> +    hwaddr base_ecam = base_ioport + size_ioport;
>>>> +    int irq = vbi->irqmap[VIRT_PCIE];
>>>> +    MemoryRegion *mmio_alias;
>>>> +    MemoryRegion *mmio_reg;
>>>> +    DeviceState *dev;
>>>> +    char *nodename;
>>>> +
>>>> +    dev = qdev_create(NULL, TYPE_GPEX_HOST);
>>>> +
>>>> +    qdev_prop_set_uint64(dev, "mmio_window_size", size_mmio);
>>>> +    qdev_init_nofail(dev);
>>>> +
>>>> +    sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, base_ecam);
>>>> +    sysbus_mmio_map(SYS_BUS_DEVICE(dev), 2, base_ioport);
>>>> +
>>>> +    /* Map the MMIO window at the same spot in bus and cpu layouts */
>>>> +    mmio_alias = g_new0(MemoryRegion, 1);
>>>> +    mmio_reg = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 1);
>>>> +    memory_region_init_alias(mmio_alias, OBJECT(dev), "pcie-mmio",
>>>> +                             mmio_reg, base_mmio, size_mmio);
>>>> +    memory_region_add_subregion(get_system_memory(), base_mmio, mmio_alias);
>>>> +
>>>> +    sysbus_connect_irq(SYS_BUS_DEVICE(dev), 0, pic[irq]);
>>>> +
>>>> +    nodename = g_strdup_printf("/pcie@%" PRIx64, base);
>>>> +    qemu_fdt_add_subnode(vbi->fdt, nodename);
>>>> +    qemu_fdt_setprop_string(vbi->fdt, nodename,
>>>> +                            "compatible", "pci-host-ecam-generic");
>>>
>>> is this the only compatible string we should set here? Is this not legacy 
>>> pci compatible?
>>> In other device trees I see this mentioned as compatible = 
>>> "arm,versatile-pci-hostbridge", "pci" for example,
>>> would it be sensible to make it a list and include "pci" as well?
>>
>> I couldn't find anything that defines what a "pci" compatible should
>> look like. We definitely don't implement the legacy PCI config space
>> accessor registers.
>
> I see, I assumed this patch would support both CAM and ECAM as configuration
> methods, but now I understand that Alvise's patches support only CAM, while
> these support only ECAM.

Hi Claudio,

I've run some tests using these patches with an ARM guest and Linux.
Both virtio-net-pci and lsi (the devices I used to test my series)
were working.
So I would say that at least from a Linux perspective these patches
support both CAM and ECAM.

Regards,
alvise
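
For reference, the two configuration mechanisms being discussed differ only in
how a config access is packed into an offset within the host bridge's config
window: CAM gives each function 256 bytes of config space, ECAM gives it 4 KiB.
A minimal sketch; the helper names are illustrative and not taken from either
series:

#include <stdint.h>

/* CAM: bus in bits 23:16, devfn in bits 15:8, 8-bit register offset. */
static inline uint64_t cam_offset(uint8_t bus, uint8_t devfn, uint16_t reg)
{
    return ((uint64_t)bus << 16) | ((uint64_t)devfn << 8) | (reg & 0xff);
}

/* ECAM: bus in bits 27:20, devfn in bits 19:12, 12-bit register offset. */
static inline uint64_t ecam_offset(uint8_t bus, uint8_t devfn, uint16_t reg)
{
    return ((uint64_t)bus << 20) | ((uint64_t)devfn << 12) | (reg & 0xfff);
}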

> So basically I should look at the compatible string and then choose 
> configuration method accordingly.
> I wonder if I should deal with the case where the compatible string contains 
> both ECAM and CAM.
>
>>>
>>>> +    qemu_fdt_setprop_string(vbi->fdt, nodename, "device_type", "pci");
>>>> +    qemu_fdt_setprop_cell(vbi->fdt, nodename, "#address-cells", 3);
>>>> +    qemu_fdt_setprop_cell(vbi->fdt, nodename, "#size-cells", 2);
>>>> +    qemu_fdt_setprop_cells(vbi->fdt, nodename, "bus-range", 0, 1);
>>>> +
>>>> +    qemu_fdt_setprop_sized_cells(vbi->fdt, nodename, "reg",
>>>> +                                 2, base_ecam, 2, size_ecam);
>>>> +    qemu_fdt_setprop_sized_cells(vbi->fdt, nodename, "ranges",
>>>> +                                 1, 0x01000000, 2, 0,
>>>> +                                 2, base_ioport, 2, size_ioport,
>>>> +
>>>> +                                 1, 0x02000000, 2, base_mmio,
>>>> +                                 2, base_mmio, 2, size_mmio);
>>>> +
>>>> +    qemu_fdt_setprop_cell(vbi->fdt, nodename, "#interrupt-cells", 1);
>>>> +    qemu_fdt_setprop_cells(vbi->fdt, nodename, "interrupt-map",
>>>> +                           0, 0, 0, /* device */
>>>> +                           0,       /* PCI irq */
>>>> +                           gic_phandle, GIC_FDT_IRQ_TYPE_SPI, irq,
>>>> +                             GIC_FDT_IRQ_FLAGS_LEVEL_HI /* system irq */);
>>>> +    qemu_fdt_setprop_cells(vbi->fdt, nodename, "interrupt-map-mask",
>>>> +                           0, 0, 0, /* device */
>>>> +                           0        /* PCI irq */);
>>>
>>> Interrupt map does not seem to work for me; incidentally this ends up being 
>>> the same kind of undocumented blob that Alvise posted in his series.
>>
>> How exactly is this undocumented? The "mask" is a mask over the first
>> fields of an interrupt-map row plus an IRQ offset. So the mask above
>> means "Any device with any function and any IRQ on it, map to device IRQ
>> 0" which maps to vbi->irqmap[VIRT_PCIE] (IRQ 3).
>
> (see my answer to Peter below in thread)
>
> this is a bit different to what Alvise's series is doing I think (see later).
>
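
To make the mask semantics described above concrete, here is a rough sketch
(not QEMU or kernel code) of how an OS applies interrupt-map-mask before
matching a row of interrupt-map. With the all-zero mask in this patch, every
(devfn, pin) combination matches the single row, so everything routes to the
one SPI:

#include <stdbool.h>
#include <stdint.h>

/*
 * child[0..2] is the PCI unit address (devfn lives in bits 15:8 of child[0]),
 * child[3] is the interrupt pin (INTA..INTD = 1..4).  The remaining cells of
 * a matching row name the parent (GIC) interrupt specifier.
 */
static bool imap_row_matches(const uint32_t child[4], const uint32_t mask[4],
                             const uint32_t row[4])
{
    for (int i = 0; i < 4; i++) {
        if ((child[i] & mask[i]) != row[i]) {
            return false;
        }
    }
    return true;
}
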
>>
>>> Can you add a good comment about what the ranges property contains (the 
>>> 0x01000000, 0x02000000 which I suspect means IO vs MMIO IIRC, but there is 
>>> no need to be cryptic about it).
>>
>> You're saying you'd prefer a define?
>
> Yes that would be helpful :)
>
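
A sketch of what such defines could look like, mirroring the setprop call in
the patch (the macro names are illustrative, not something the patch defines;
0x01000000 and 0x02000000 are the standard space codes in the first cell of a
PCI "ranges" entry):

/* Space code in the high cell of the child PCI address. */
#define PCI_RANGE_IOPORT 0x01000000  /* I/O space */
#define PCI_RANGE_MMIO   0x02000000  /* 32-bit memory space */

qemu_fdt_setprop_sized_cells(vbi->fdt, nodename, "ranges",
                             1, PCI_RANGE_IOPORT, 2, 0,
                             2, base_ioport, 2, size_ioport,
                             1, PCI_RANGE_MMIO, 2, base_mmio,
                             2, base_mmio, 2, size_mmio);
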
>>
>>> How does your interrupt map implementation differ from the patchset posted 
>>> by Alvise? I ask because that one works for me (tm).
>>
>> His implementation explicitly listed every PCI slot, while mine actually
>> makes use of the mask and simply routes everything to a single IRQ line.
>>
>> The benefit of masking devfn out is that you don't need to worry about
>> the number of slots you support - and anything performance critical
>> should go via MSI-X anyway ;).
>
> The benefit for me (but probably for me only) is that with one IRQ per slot
> I didn't have to implement shared IRQs and MSI/MSI-X in the guest yet. But
> that should be done eventually anyway.
>
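
As a point of comparison: an explicit per-slot map usually encodes the
conventional INTx swizzle, roughly the sketch below (illustrative only, not
taken from Alvise's series), whereas the masked map in this patch collapses
all slots and pins onto a single SPI:

/* pin is 0-based here (INTA = 0); each slot rotates through four IRQ lines. */
static int slot_pin_to_irq(int slot, int pin, int first_irq)
{
    return first_irq + (slot + pin) % 4;
}
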
>>
>> So what exactly do you test it with? I've successfully received IRQs
>> with a Linux guest, so I'm slightly puzzled you're running into problems.
>
> Of course, I am just implementing PCI guest support for OSv/AArch64.
> It works with the legacy PCI support using Alvise's series to do virtio-pci
> with block, net, rng, etc., but I am now considering going for PCIe support
> directly, although that means more implementation work for me.
>
>>
>>
>> Alex
>>
>
> Ciao & thanks,
>
> Claudio
>


