qemu-devel

Re: [PATCH v7 2/2] hw/riscv/virt: Add IOPMP support


From: Alistair Francis
Subject: Re: [PATCH v7 2/2] hw/riscv/virt: Add IOPMP support
Date: Mon, 8 Jul 2024 14:26:32 +1000

On Wed, Jun 26, 2024 at 3:53 PM Ethan Chen <ethan84@andestech.com> wrote:
>
> On Wed, Jun 26, 2024 at 11:22:46AM +1000, Alistair Francis wrote:
> >
> > On Mon, Jun 24, 2024 at 11:47 AM Ethan Chen <ethan84@andestech.com> wrote:
> > >
> > > Hi Alistair,
> > >
> > > The IOPMP can apply to all devices. In this patch series, PCI devices on
> > > the bridge can connect to the IOPMP via pci_setup_iommu(), but other
> > > devices need to change their memory access address space from system
> > > memory to the IOPMP themselves.
> >
> > We should be really clear about that then. The documentation and the
> > flag `iopmp=[on|off]` imply that either the IOPMP is on or off.
> >
> > For example, what happens in the future if we extend support to apply
> > to all devices? That will be a breaking change for anyone currently
> > using `iopmp=on`.
> >
> > Maybe we should use something like `iopmp=[pci|off]` instead, and
> > then be really clear in the docs about what is and isn't going through
> > the IOPMP.
> >
> > Alistair
>
> Hi Alistair,
>
> According to Zhiwei's suggestion on this patch series, we will remove
> iopmp_setup_pci because it is mutually exclusive with the IOMMU
> integration.
>
> We are looking for an interface that makes device memory accesses be
> checked by the IOPMP. After iopmp_setup_pci is removed, with the current
> method each device needs to retarget its memory accesses to the IOMMU
> memory region in the IOPMP itself. Therefore, by default, no device
> goes through the IOPMP even if iopmp=on.
>
> Another method is to replace the memory region of a protected device
> in system memory with the IOMMU memory region in the IOPMP (similar to
> the MPC in arm/mps2-tz) when iopmp=on. With this method, all devices go
> through the IOPMP by default when iopmp=on.
>
> Which method is more suitable for the RISC-V virt machine?
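The difference between the two methods above can be sketched with a self-contained toy model (this is not QEMU code: the callbacks, the 0xFF "bus error" value, and the allow-below-8 rule are all made up for illustration). An "address space" is reduced to a read callback; a checked path routes through the IOPMP rule before touching memory:

```c
#include <stdbool.h>
#include <stdint.h>

/* Toy model, not the QEMU memory API: an "address space" is just a
 * read callback, and the IOPMP rule is a made-up example entry. */
typedef uint8_t (*read_fn)(uint64_t addr);

/* Pretend RAM contents: value derived from the address. */
static uint8_t sysmem_read(uint64_t addr)
{
    return (uint8_t)(0xA0 | (addr & 0xF));
}

/* Hypothetical IOPMP entry: only addresses below 8 are permitted. */
static bool iopmp_allows(uint64_t addr)
{
    return addr < 8;
}

/* IOMMU-region analogue: checked accesses consult the IOPMP first;
 * a blocked access returns a bus-error-like value instead of data. */
static uint8_t iopmp_read(uint64_t addr)
{
    return iopmp_allows(addr) ? sysmem_read(addr) : 0xFF;
}
```

Under the first method, a device is checked only if it was explicitly rewired to call `iopmp_read`; an unmodified device still calls `sysmem_read` and bypasses the check entirely. Under the second (MPC-like) method, the `sysmem_read` slot for the protected region is itself replaced by `iopmp_read`, so every requestor is checked without per-device changes.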

Sorry for the delay here, I missed this question.

I think if a user specifies `iopmp=on` we should run everything
through the IOPMP. If we don't, we need to be really clear about what
does and does not go through the IOPMP.

It's unfortunately a little clunky, but I feel it's better to properly
support the IOPMP in this case. Maybe this is a good reason for a
"server" platform...

Alistair

>
> Thanks,
> Ethan
>
> >
> > >
> > > Thanks,
> > > Ethan
> > >
> > > On Fri, Jun 21, 2024 at 03:54:15PM +1000, Alistair Francis wrote:
> > > > On Wed, Jun 12, 2024 at 1:25 PM Ethan Chen via <qemu-devel@nongnu.org> wrote:
> > > > >
> > > > > If a requestor device is connected to the IOPMP device, its memory access will
> > > > > be checked by the IOPMP rule.
> > > > >
> > > > > - Add 'iopmp=on' option to add an iopmp device and make the Generic PCI Express
> > > > >   Bridge connect to IOPMP.
> > > >
> > > > I have only had a chance to have a quick look at this series and the spec.
> > > >
> > > > The IOPMP spec applies to all devices, right? But this series seems
> > > > to only work with PCI. Am I missing something?
> > > >
> > > > Alistair
> > > >
> > > > >
> > > > > Signed-off-by: Ethan Chen <ethan84@andestech.com>
> > > > > ---
> > > > >  docs/system/riscv/virt.rst |  6 ++++
> > > > >  hw/riscv/Kconfig           |  1 +
> > > > >  hw/riscv/virt.c            | 57 ++++++++++++++++++++++++++++++++++++--
> > > > >  include/hw/riscv/virt.h    |  5 +++-
> > > > >  4 files changed, 66 insertions(+), 3 deletions(-)
> > > > >
> > > > > diff --git a/docs/system/riscv/virt.rst b/docs/system/riscv/virt.rst
> > > > > index 9a06f95a34..3b2576f905 100644
> > > > > --- a/docs/system/riscv/virt.rst
> > > > > +++ b/docs/system/riscv/virt.rst
> > > > > @@ -116,6 +116,12 @@ The following machine-specific options are supported:
> > > > >    having AIA IMSIC (i.e. "aia=aplic-imsic" selected). When not specified,
> > > > >    the default number of per-HART VS-level AIA IMSIC pages is 0.
> > > > >
> > > > > +- iopmp=[on|off]
> > > > > +
> > > > > +  When this option is "on", an IOPMP device is added to machine. It checks dma
> > > > > +  operations from the generic PCIe host bridge. This option is assumed to be
> > > > > +  "off".
> > > > > +
> > > > >  Running Linux kernel
> > > > >  --------------------
> > > > >
> > > > > diff --git a/hw/riscv/Kconfig b/hw/riscv/Kconfig
> > > > > index a2030e3a6f..0b45a5ade2 100644
> > > > > --- a/hw/riscv/Kconfig
> > > > > +++ b/hw/riscv/Kconfig
> > > > > @@ -56,6 +56,7 @@ config RISCV_VIRT
> > > > >      select PLATFORM_BUS
> > > > >      select ACPI
> > > > >      select ACPI_PCI
> > > > > +    select RISCV_IOPMP
> > > > >
> > > > >  config SHAKTI_C
> > > > >      bool
> > > > > diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
> > > > > index 4fdb660525..53a1b71c71 100644
> > > > > --- a/hw/riscv/virt.c
> > > > > +++ b/hw/riscv/virt.c
> > > > > @@ -55,6 +55,7 @@
> > > > >  #include "hw/acpi/aml-build.h"
> > > > >  #include "qapi/qapi-visit-common.h"
> > > > >  #include "hw/virtio/virtio-iommu.h"
> > > > > +#include "hw/misc/riscv_iopmp.h"
> > > > >
> > > > >  /* KVM AIA only supports APLIC MSI. APLIC Wired is always emulated by QEMU. */
> > > > >  static bool virt_use_kvm_aia(RISCVVirtState *s)
> > > > > @@ -82,6 +83,7 @@ static const MemMapEntry virt_memmap[] = {
> > > > >      [VIRT_UART0] =        { 0x10000000,         0x100 },
> > > > >      [VIRT_VIRTIO] =       { 0x10001000,        0x1000 },
> > > > >      [VIRT_FW_CFG] =       { 0x10100000,          0x18 },
> > > > > +    [VIRT_IOPMP] =        { 0x10200000,      0x100000 },
> > > > >      [VIRT_FLASH] =        { 0x20000000,     0x4000000 },
> > > > >      [VIRT_IMSIC_M] =      { 0x24000000, VIRT_IMSIC_MAX_SIZE },
> > > > >      [VIRT_IMSIC_S] =      { 0x28000000, VIRT_IMSIC_MAX_SIZE },
> > > > > @@ -1006,6 +1008,24 @@ static void create_fdt_virtio_iommu(RISCVVirtState *s, uint16_t bdf)
> > > > >                             bdf + 1, iommu_phandle, bdf + 1, 0xffff - bdf);
> > > > >  }
> > > > >
> > > > > +static void create_fdt_iopmp(RISCVVirtState *s, const MemMapEntry *memmap,
> > > > > +                             uint32_t irq_mmio_phandle) {
> > > > > +    g_autofree char *name = NULL;
> > > > > +    MachineState *ms = MACHINE(s);
> > > > > +
> > > > > +    name = g_strdup_printf("/soc/iopmp@%lx", (long)memmap[VIRT_IOPMP].base);
> > > > > +    qemu_fdt_add_subnode(ms->fdt, name);
> > > > > +    qemu_fdt_setprop_string(ms->fdt, name, "compatible", "riscv_iopmp");
> > > > > +    qemu_fdt_setprop_cells(ms->fdt, name, "reg", 0x0, memmap[VIRT_IOPMP].base,
> > > > > +        0x0, memmap[VIRT_IOPMP].size);
> > > > > +    qemu_fdt_setprop_cell(ms->fdt, name, "interrupt-parent", irq_mmio_phandle);
> > > > > +    if (s->aia_type == VIRT_AIA_TYPE_NONE) {
> > > > > +        qemu_fdt_setprop_cell(ms->fdt, name, "interrupts", IOPMP_IRQ);
> > > > > +    } else {
> > > > > +        qemu_fdt_setprop_cells(ms->fdt, name, "interrupts", IOPMP_IRQ, 0x4);
> > > > > +    }
> > > > > +}
> > > > > +
> > > > >  static void finalize_fdt(RISCVVirtState *s)
> > > > >  {
> > > > >      uint32_t phandle = 1, irq_mmio_phandle = 1, msi_pcie_phandle = 1;
> > > > > @@ -1024,6 +1044,10 @@ static void finalize_fdt(RISCVVirtState *s)
> > > > >      create_fdt_uart(s, virt_memmap, irq_mmio_phandle);
> > > > >
> > > > >      create_fdt_rtc(s, virt_memmap, irq_mmio_phandle);
> > > > > +
> > > > > +    if (s->have_iopmp) {
> > > > > +        create_fdt_iopmp(s, virt_memmap, irq_mmio_phandle);
> > > > > +    }
> > > > >  }
> > > > >
> > > > >  static void create_fdt(RISCVVirtState *s, const MemMapEntry *memmap)
> > > > > @@ -1404,7 +1428,7 @@ static void virt_machine_init(MachineState *machine)
> > > > >      RISCVVirtState *s = RISCV_VIRT_MACHINE(machine);
> > > > >      MemoryRegion *system_memory = get_system_memory();
> > > > >      MemoryRegion *mask_rom = g_new(MemoryRegion, 1);
> > > > > -    DeviceState *mmio_irqchip, *virtio_irqchip, *pcie_irqchip;
> > > > > +    DeviceState *mmio_irqchip, *virtio_irqchip, *pcie_irqchip, *gpex_dev;
> > > > >      int i, base_hartid, hart_count;
> > > > >      int socket_count = riscv_socket_count(machine);
> > > > >
> > > > > @@ -1570,7 +1594,7 @@ static void virt_machine_init(MachineState *machine)
> > > > >              qdev_get_gpio_in(virtio_irqchip, VIRTIO_IRQ + i));
> > > > >      }
> > > > >
> > > > > -    gpex_pcie_init(system_memory, pcie_irqchip, s);
> > > > > +    gpex_dev = gpex_pcie_init(system_memory, pcie_irqchip, s);
> > > > >
> > > > >      create_platform_bus(s, mmio_irqchip);
> > > > >
> > > > > @@ -1581,6 +1605,14 @@ static void virt_machine_init(MachineState *machine)
> > > > >      sysbus_create_simple("goldfish_rtc", memmap[VIRT_RTC].base,
> > > > >          qdev_get_gpio_in(mmio_irqchip, RTC_IRQ));
> > > > >
> > > > > +    if (s->have_iopmp) {
> > > > > +        DeviceState *iopmp_dev = sysbus_create_simple(TYPE_IOPMP,
> > > > > +            memmap[VIRT_IOPMP].base,
> > > > > +            qdev_get_gpio_in(DEVICE(mmio_irqchip), IOPMP_IRQ));
> > > > > +
> > > > > +        iopmp_setup_pci(iopmp_dev, PCI_HOST_BRIDGE(gpex_dev)->bus);
> > > > > +    }
> > > > > +
> > > > >      for (i = 0; i < ARRAY_SIZE(s->flash); i++) {
> > > > >          /* Map legacy -drive if=pflash to machine properties */
> > > > >          pflash_cfi01_legacy_drive(s->flash[i],
> > > > > @@ -1684,6 +1716,21 @@ static void virt_set_aclint(Object *obj, bool value, Error **errp)
> > > > >      s->have_aclint = value;
> > > > >  }
> > > > >
> > > > > +static bool virt_get_iopmp(Object *obj, Error **errp)
> > > > > +{
> > > > > +    RISCVVirtState *s = RISCV_VIRT_MACHINE(obj);
> > > > > +
> > > > > +    return s->have_iopmp;
> > > > > +}
> > > > > +
> > > > > +static void virt_set_iopmp(Object *obj, bool value, Error **errp)
> > > > > +{
> > > > > +    RISCVVirtState *s = RISCV_VIRT_MACHINE(obj);
> > > > > +
> > > > > +    s->have_iopmp = value;
> > > > > +}
> > > > > +
> > > > > +
> > > > >  bool virt_is_acpi_enabled(RISCVVirtState *s)
> > > > >  {
> > > > >      return s->acpi != ON_OFF_AUTO_OFF;
> > > > > @@ -1794,6 +1841,12 @@ static void virt_machine_class_init(ObjectClass *oc, void *data)
> > > > >                                NULL, NULL);
> > > > >      object_class_property_set_description(oc, "acpi",
> > > > >                                            "Enable ACPI");
> > > > > +
> > > > > +    object_class_property_add_bool(oc, "iopmp", virt_get_iopmp,
> > > > > +                                   virt_set_iopmp);
> > > > > +    object_class_property_set_description(oc, "iopmp",
> > > > > +                                          "Set on/off to enable/disable "
> > > > > +                                          "iopmp device");
> > > > >  }
> > > > >
> > > > >  static const TypeInfo virt_machine_typeinfo = {
> > > > > diff --git a/include/hw/riscv/virt.h b/include/hw/riscv/virt.h
> > > > > index 3db839160f..81460e29c4 100644
> > > > > --- a/include/hw/riscv/virt.h
> > > > > +++ b/include/hw/riscv/virt.h
> > > > > @@ -55,6 +55,7 @@ struct RISCVVirtState {
> > > > >
> > > > >      int fdt_size;
> > > > >      bool have_aclint;
> > > > > +    bool have_iopmp;
> > > > >      RISCVVirtAIAType aia_type;
> > > > >      int aia_guests;
> > > > >      char *oem_id;
> > > > > @@ -84,12 +85,14 @@ enum {
> > > > >      VIRT_PCIE_MMIO,
> > > > >      VIRT_PCIE_PIO,
> > > > >      VIRT_PLATFORM_BUS,
> > > > > -    VIRT_PCIE_ECAM
> > > > > +    VIRT_PCIE_ECAM,
> > > > > +    VIRT_IOPMP,
> > > > >  };
> > > > >
> > > > >  enum {
> > > > >      UART0_IRQ = 10,
> > > > >      RTC_IRQ = 11,
> > > > > +    IOPMP_IRQ = 12,
> > > > >      VIRTIO_IRQ = 1, /* 1 to 8 */
> > > > >      VIRTIO_COUNT = 8,
> > > > >      PCIE_IRQ = 0x20, /* 32 to 35 */
> > > > > --
> > > > > 2.34.1
> > > > >
> > > > >


