From: Alexander Graf
Subject: Re: [Qemu-devel] [PATCH v2 3/4] arm: Add PCIe host bridge in virt machine
Date: Thu, 29 Jan 2015 15:49:58 +0100
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:31.0) Gecko/20100101 Thunderbird/31.3.0


On 29.01.15 15:45, Peter Maydell wrote:
> On 29 January 2015 at 14:37, Alexander Graf <address@hidden> wrote:
>> On 29.01.15 15:34, Peter Maydell wrote:
>>> I kind of see, but isn't this just a window from CPU address
>>> space into PCI address space, not vice-versa?
>>
>> Yup, exactly. But PCI devices need to map themselves somewhere into the
>> PCI address space. So if I configure a BAR to live at 0x10000000, it
>> should also show up at 0x10000000 when accessed from the CPU. That's
>> what the mapping above is about.
> 
> No, it doesn't have to. It's a choice to make the mapping
> be such that the system address for a BAR matches the address
> in PCI memory space, not a requirement. I agree it's a
> sensible choice, though.
> 
> But as I say, this code is setting up one mapping (the
> system address -> PCI space mapping), not two.

Yes, the other one is done implicitly by the OS, based on what the device
tree tells it to do. If we map it at 0, though, our good old 'if (BAR == 0)
break;' friend hits us again - and any other arbitrary offset is no better
than a 1:1 map.
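
For illustration only - not the actual GPEX code - a rough sketch of what
that window setup looks like with the MemoryRegion API; "dev", "s->io_mmio",
"base" and "size" are placeholder names, and the guest then learns the same
window from the "ranges" property we put into the device tree:

  #include "exec/memory.h"           /* MemoryRegion API    */
  #include "exec/address-spaces.h"   /* get_system_memory() */

  /* Sketch only: alias part of the host bridge's PCI memory space into
   * the system address space at the same offset, so a BAR programmed to
   * 0x10000000 in PCI space is also visible to the CPU at 0x10000000.
   */
  MemoryRegion *alias = g_new0(MemoryRegion, 1);

  memory_region_init_alias(alias, OBJECT(dev), "pcie-mmio-window",
                           &s->io_mmio, base, size);
  memory_region_add_subregion(get_system_memory(), base, alias);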

> 
>>> DMA by PCI devices bus-mastering into system memory must be
>>> being set up elsewhere, I think.
>>
>> Yes, that's a different mechanism that's not implemented yet for GPEX
>> :).
> 
> We can't not implement DMA, it would break lots of the usual
> PCI devices people want to use. In fact I thought the PCI
> core code implemented a default of "DMA by PCI devices goes
> to the system address space" if you didn't specifically
> set up something else by calling pci_setup_iommu(). This is
> definitely how it works for plain PCI host bridges, are
> PCIe bridges different?

No, no - this is exactly how it works. The thing that's not implemented yet
is the SMMU that would make that mapping dynamic.
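
Roughly, as a sketch against the pci_setup_iommu() hook (MySMMUState,
my_smmu_as_for_devfn, phb and smmu are made-up names, not real code): the
host bridge would register a callback that returns a per-device
AddressSpace, and as long as nobody registers one, bus-master DMA simply
goes to the system address space:

  #include "hw/pci/pci.h"   /* pci_setup_iommu(), PCIBus */

  typedef struct MySMMUState MySMMUState;   /* hypothetical SMMU model */

  /* Return the translated address space the SMMU would provide for this
   * device; my_smmu_as_for_devfn() is a made-up helper.
   */
  static AddressSpace *my_smmu_find_as(PCIBus *bus, void *opaque, int devfn)
  {
      MySMMUState *s = opaque;

      return my_smmu_as_for_devfn(s, devfn);
  }

  /* Registered once on the root bus; until then DMA hits system memory. */
  pci_setup_iommu(phb->bus, my_smmu_find_as, smmu);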

>> On ARM this would happen via SMMU emulation.
> 
> There's no requirement for a PCI host controller to be
> sat behind an SMMU -- that's a system design choice. We
> don't need to implement the SMMU yet (or perhaps ever?);

The main benefit of implementing a guest SMMU is that you don't have to
pin all guest memory at all times. Apart from that and the usual security
benefits, it only makes things slower ;).

> we definitely need to support PCI DMA.

We do.


Alex


