Re: [Qemu-devel] [PATCH v6 1/3] pci: Add support for Designware IP block


From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [PATCH v6 1/3] pci: Add support for Designware IP block
Date: Wed, 14 Feb 2018 00:15:52 +0200

On Tue, Feb 13, 2018 at 12:24:40PM -0800, Andrey Smirnov wrote:
> On Tue, Feb 13, 2018 at 10:13 AM, Michael S. Tsirkin <address@hidden> wrote:
> > On Tue, Feb 13, 2018 at 09:07:10AM -0800, Andrey Smirnov wrote:
> >> +static void designware_pcie_root_class_init(ObjectClass *klass, void *data)
> >> +{
> >> +    PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
> >> +    DeviceClass *dc = DEVICE_CLASS(klass);
> >> +
> >> +    set_bit(DEVICE_CATEGORY_BRIDGE, dc->categories);
> >> +
> >> +    k->vendor_id = PCI_VENDOR_ID_SYNOPSYS;
> >> +    k->device_id = 0xABCD;
> >> +    k->revision = 0;
> >> +    k->class_id = PCI_CLASS_BRIDGE_PCI;
> >> +    k->is_express = true;
> >> +    k->is_bridge = true;
> >> +    k->exit = pci_bridge_exitfn;
> >> +    k->realize = designware_pcie_root_realize;
> >> +    k->config_read = designware_pcie_root_config_read;
> >> +    k->config_write = designware_pcie_root_config_write;
> >> +
> >> +    dc->reset = pci_bridge_reset;
> >> +    /*
> >> +     * PCI-facing part of the host bridge, not usable without the
> >> +     * host-facing part, which can't be device_add'ed, yet.
> >> +     */
> >> +    dc->user_creatable = false;
> >> +    dc->vmsd = &vmstate_designware_pcie_root;
> >> +}
> >> +
> >> +static uint64_t designware_pcie_host_mmio_read(void *opaque, hwaddr addr,
> >> +                                               unsigned int size)
> >> +{
> >> +    PCIHostState *pci = PCI_HOST_BRIDGE(opaque);
> >> +    PCIDevice *device = pci_find_device(pci->bus, 0, 0);
> >> +
> >> +    return pci_host_config_read_common(device,
> >> +                                       addr,
> >> +                                       pci_config_size(device),
> >> +                                       size);
> >> +}
> >> +
> >> +static void designware_pcie_host_mmio_write(void *opaque, hwaddr addr,
> >> +                                            uint64_t val, unsigned int size)
> >> +{
> >> +    PCIHostState *pci = PCI_HOST_BRIDGE(opaque);
> >> +    PCIDevice *device = pci_find_device(pci->bus, 0, 0);
> >> +
> >> +    return pci_host_config_write_common(device,
> >> +                                        addr,
> >> +                                        pci_config_size(device),
> >> +                                        val, size);
> >> +}
> >> +
> >> +static const MemoryRegionOps designware_pci_mmio_ops = {
> >> +    .read       = designware_pcie_host_mmio_read,
> >> +    .write      = designware_pcie_host_mmio_write,
> >> +    .endianness = DEVICE_NATIVE_ENDIAN,
> >> +    .impl = {
> >> +        /*
> >> +         * Our device would not work correctly if the guest was doing
> >> +         * unaligned access. This might not be a limitation on the real
> >> +         * device but in practice there is no reason for a guest to access
> >> +         * this device unaligned.
> >> +         */
> >> +        .min_access_size = 4,
> >> +        .max_access_size = 4,
> >> +        .unaligned = false,
> >> +    },
> >> +};
> >
> > Could you please add some comments explaining why DEVICE_NATIVE_ENDIAN
> > is appropriate here?  Most of these cases are plain "we never bothered
> > about cross-endian setups". Some are "there's a mix of different
> > endianness values that needs to be handled in a special way".
> >
> > I suspect you really need DEVICE_LITTLE_ENDIAN.
> >
> 
> That MemoryRegion corresponds to a register file permanently mapped
> into the CPU's address space, so my assumption is that SoC designers
> will wire it according to the CPU's endianness, be it big or little. I
> am not aware of any big-endian-CPU-based SoC on the market using
> Designware's IP block, so I don't think there is any precedent
> confirming or denying the correctness of my assumption. IMHO, this is
> also the reason why all of the Linux driver code for that IP assumes
> little-endianness.

IMHO, if the Linux driver code does cpu_to_le, then it seems best to be
consistent with that.
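
For illustration, a minimal sketch of what that would look like, reusing
the ops structure and handler names from the patch above with only the
.endianness field changed (an assumption on my part, not part of the
posted patch):

static const MemoryRegionOps designware_pci_mmio_ops = {
    .read       = designware_pcie_host_mmio_read,
    .write      = designware_pcie_host_mmio_write,
    /*
     * The Linux driver for this IP block treats the register file as
     * little-endian, so model it the same way regardless of the guest
     * CPU's endianness.
     */
    .endianness = DEVICE_LITTLE_ENDIAN,
    .impl = {
        .min_access_size = 4,
        .max_access_size = 4,
        .unaligned = false,
    },
};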

> I can't say that I tested this code against a big-endian guest/CPU,
> but that is primarily because there's no real use case, and any test
> setup I could put together would be a contrived example pointlessly
> proving my point.
> 
> Anyway, I am more than happy to switch it to use DEVICE_LITTLE_ENDIAN,
> I just don't know if doing so is any more justified than keeping it
> DEVICE_NATIVE_ENDIAN.
> 
> Thanks,
> Andrey Smirnov

I agree it's probably not critical for a target-specific device.

-- 
MST


