
Re: [Qemu-devel] [RFC 1/2] hw/arm/virt: add a Calxeda XGMAC device


From: Eric Auger
Subject: Re: [Qemu-devel] [RFC 1/2] hw/arm/virt: add a Calxeda XGMAC device
Date: Wed, 5 Mar 2014 05:27:40 +0100

Hi Kim,

- "Parameter 'driver' expects pluggable device type":
this can be fixed by setting dc->cannot_instantiate_with_device_add_yet to false in vfio_platform_dev_class_init; the error then goes away. However, I must acknowledge that I have not studied all the possible side effects of this setting, and maybe there is a cleaner way to achieve the same result. You can then add additional properties in dc->props (a rough sketch follows the list below).
- location of the device in GPA space and impact on RAM size: my understanding is that the device could sit at a different location than its HPA; you are not compelled to use the same guest address.
- we do indeed need to work on fdt tree generation for VFIO devices
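
To illustrate the first point above, here is a rough sketch of the kind of class_init change I mean. The VFIOPlatformDevice type, its "name" field and the property list are only placeholders for whatever the real vfio-platform device ends up using, not actual code:

#include "hw/sysbus.h"
#include "hw/qdev-properties.h"

/* placeholder state structure, for the sketch only */
typedef struct VFIOPlatformDevice {
    SysBusDevice parent_obj;
    char *name;                       /* e.g. "fff51000.ethernet" */
} VFIOPlatformDevice;

static Property vfio_platform_dev_properties[] = {
    DEFINE_PROP_STRING("name", VFIOPlatformDevice, name),
    DEFINE_PROP_END_OF_LIST(),
};

static void vfio_platform_dev_class_init(ObjectClass *klass, void *data)
{
    DeviceClass *dc = DEVICE_CLASS(klass);

    dc->props = vfio_platform_dev_properties;
    /* allow the device to be created with -device / device_add */
    dc->cannot_instantiate_with_device_add_yet = false;
}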

Best Regards

Eric


On 26 February 2014 03:37, Kim Phillips <address@hidden> wrote:
This is a hack and only serves as an example of what needs to be
done to make the next RFC - add vfio-platform support - work
for development purposes on a Calxeda Midway system.  We don't want
mach-virt to always create this ethernet device - DO NOT APPLY, etc.

Initial attempts to convince QEMU to create a memory-mapped device
on the command line (e.g., -device vfio-platform,name=fff51000.ethernet)
would fail with "Parameter 'driver' expects pluggable device type".
Any guidance as to how to overcome this apparent design limitation
is welcome.

RAM is reduced from 30GiB to 1GiB so as not to overlap the xgmac
device's physical address.  Not sure whether the 30GiB of RAM (or
whatever the user sets with -m) could be placed above 0x1_0000_0000,
but there is probably extra work needed to resolve this type of
conflict.

Note: vfio-platform interrupt support development may want the
interrupt property data filled in; it is omitted here for the time
being.
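
For illustration, the interrupts property would presumably look
something like what create_uart() already does in virt.c; the irq
lookup below is a placeholder only (it assumes an a15irqmap entry
for VIRT_ETHERNET would be added, which this patch does not do):

    int irq = vbi->irqmap[VIRT_ETHERNET];   /* placeholder irq */
    qemu_fdt_setprop_cells(vbi->fdt, nodename, "interrupts",
                           GIC_FDT_IRQ_TYPE_SPI, irq,
                           GIC_FDT_IRQ_FLAGS_LEVEL_HI);

The device's irq would presumably also need wiring up via the pic[]
array passed to create_ethernet(), instead of the NULL currently
handed to sysbus_create_simple().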

Not-signed-off-by: Kim Phillips <address@hidden>
---
 hw/arm/virt.c | 24 +++++++++++++++++++++++-
 1 file changed, 23 insertions(+), 1 deletion(-)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index 517f2fe..75edf33 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -65,6 +65,7 @@ enum {
     VIRT_GIC_CPU,
     VIRT_UART,
     VIRT_MMIO,
+    VIRT_ETHERNET,
 };

 typedef struct MemMapEntry {
@@ -106,7 +107,8 @@ static const MemMapEntry a15memmap[] = {
     [VIRT_MMIO] = { 0xa000000, 0x200 },
     /* ...repeating for a total of NUM_VIRTIO_TRANSPORTS, each of that size */
     /* 0x10000000 .. 0x40000000 reserved for PCI */
-    [VIRT_MEM] = { 0x40000000, 30ULL * 1024 * 1024 * 1024 },
+    [VIRT_MEM] = { 0x40000000, 1ULL * 1024 * 1024 * 1024 },
+    [VIRT_ETHERNET] = { 0xfff51000, 0x1000 },
 };

 static const int a15irqmap[] = {
@@ -291,6 +293,25 @@ static void create_uart(const VirtBoardInfo *vbi, qemu_irq *pic)
     g_free(nodename);
 }

+static void create_ethernet(const VirtBoardInfo *vbi, qemu_irq *pic)
+{
+    char *nodename;
+    hwaddr base = vbi->memmap[VIRT_ETHERNET].base;
+    hwaddr size = vbi->memmap[VIRT_ETHERNET].size;
+    const char compat[] = "calxeda,hb-xgmac";
+
+    sysbus_create_simple("vfio-platform", base, NULL);
+
+    nodename = g_strdup_printf("/ethernet@%" PRIx64, base);
+    qemu_fdt_add_subnode(vbi->fdt, nodename);
+
+    /* Note that we can't use setprop_string because of the embedded NUL */
+    qemu_fdt_setprop(vbi->fdt, nodename, "compatible", compat, sizeof(compat));
+    qemu_fdt_setprop_sized_cells(vbi->fdt, nodename, "reg", 2, base, 2, size);
+
+    g_free(nodename);
+}
+
 static void create_virtio_devices(const VirtBoardInfo *vbi, qemu_irq *pic)
 {
     int i;
@@ -419,6 +440,7 @@ static void machvirt_init(QEMUMachineInitArgs *args)
     }

     create_uart(vbi, pic);
+    create_ethernet(vbi, pic);

     /* Create mmio transports, so the user can create virtio backends
      * (which will be automatically plugged in to the transports). If
--
1.9.0


