From: liu ping fan
Subject: Re: [Qemu-devel] [big lock] Discussion about the convention of device's DMA each other after breaking down biglock
Date: Sat, 29 Sep 2012 17:20:28 +0800

On Thu, Sep 27, 2012 at 5:16 PM, Avi Kivity <address@hidden> wrote:
> On 09/27/2012 05:13 AM, liu ping fan wrote:
>> On Mon, Sep 24, 2012 at 5:42 PM, Avi Kivity <address@hidden> wrote:
>>> On 09/24/2012 10:32 AM, liu ping fan wrote:
>>>> On Mon, Sep 24, 2012 at 3:44 PM, Avi Kivity <address@hidden> wrote:
>>>>> On 09/24/2012 08:33 AM, liu ping fan wrote:
>>>>>> On Wed, Sep 19, 2012 at 5:50 PM, Avi Kivity <address@hidden> wrote:
>>>>>> > On 09/19/2012 12:34 PM, Jan Kiszka wrote:
>>>>>> >>
>>>>>> >> What about the following:
>>>>>> >>
>>>>>> >> What we really need to support in practice is an MMIO access that
>>>>>> >> triggers a RAM access by the device model. Scenarios where a device
>>>>>> >> access triggers another MMIO access could likely just be rejected
>>>>>> >> without causing trouble.
>>>>>> >>
>>>>>> >> So, when we dispatch a request to a device, we mark that the current
>>>>>> >> thread is in an MMIO dispatch and reject any follow-up c_p_m_rw that
>>>>>> >> does _not_ target RAM, i.e. is another, nested MMIO request -
>>>>>> >> independent of its destination. How many of the known issues would
>>>>>> >> this solve? And what would remain open?
>>>>>> >
>>>>>> > Various iommu-like devices re-dispatch I/O, like changing endianness or
>>>>>> > bitband.  I don't know whether it targets I/O rather than RAM.
>>>>>> >
>>>>>> I have not found the exact code, but I think the call chain may look
>>>>>> like this: dev mmio-handler --> c_p_m_rw() --> iommu mmio-handler -->
>>>>>> c_p_m_rw()
>>>>>> And I think the case you worry about is "c_p_m_rw() --> iommu
>>>>>> mmio-handler". Right? How about introducing a member can_nest in the
>>>>>> MemoryRegionOps of the iommu's mr?
>>>>>>
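For reference, a minimal sketch of the per-thread guard Jan describes above, combined with such a can_nest flag, could look like this (all names here are hypothetical; none of this is existing QEMU code):

    /* Hypothetical sketch: track whether the current thread is already inside
     * an MMIO dispatch, and reject a nested dispatch unless it targets RAM or
     * the region explicitly allows nesting.
     */
    static __thread unsigned mmio_dispatch_depth;

    static bool mmio_dispatch_begin(MemoryRegion *mr)
    {
        if (mmio_dispatch_depth > 0 &&
            !memory_region_is_ram(mr) &&
            !(mr->ops && mr->ops->can_nest)) {   /* can_nest: proposed, not real */
            return false;                        /* nested MMIO request: reject */
        }
        mmio_dispatch_depth++;   /* caller pairs this with mmio_dispatch_end()
                                    only when we returned true */
        return true;
    }

    static void mmio_dispatch_end(void)
    {
        mmio_dispatch_depth--;
    }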
>>>>>
>>>>> I would rather push the iommu logic into the memory API:
>>>>>
>>>>>   memory_region_init_iommu(MemoryRegion *mr, const char *name,
>>>>>                            MemoryRegion *target,
>>>>>                            MemoryRegionIOMMUOps *ops,
>>>>>                            unsigned size)
>>>>>
>>>>>   struct MemoryRegionIOMMUOps {
>>>>>       target_physical_addr_t (*translate)(target_physical_addr_t addr,
>>>>>                                           bool write);
>>>>>       void (*fault)(target_physical_addr_t addr);
>>>>>   };
>>>>>
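To make the proposal above concrete, a purely illustrative translate/fault pair for a trivial remapping window might look like this (MemoryRegionIOMMUOps is the struct proposed above, not something in the tree yet; the my_* names and constants are made up):

    /* Illustrative only: an "iommu" that remaps a fixed window by a constant
     * offset, using the callback signatures proposed above.
     */
    #define MY_WINDOW_BASE  0x40000000
    #define MY_TARGET_BASE  0x80000000

    static target_physical_addr_t my_iommu_translate(target_physical_addr_t addr,
                                                     bool write)
    {
        /* map [MY_WINDOW_BASE, ...) onto [MY_TARGET_BASE, ...) */
        return addr - MY_WINDOW_BASE + MY_TARGET_BASE;
    }

    static void my_iommu_fault(target_physical_addr_t addr)
    {
        /* e.g. log the faulting address or raise an interrupt */
    }

    static const MemoryRegionIOMMUOps my_iommu_ops = {
        .translate = my_iommu_translate,
        .fault     = my_iommu_fault,
    };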
>>>> So I guess that, after introducing this, the code logic in c_p_m_rw()
>>>> will look like this:
>>>>
>>>> c_p_m_rw(dev_virt_addr, ...)
>>>> {
>>>>    mr = phys_page_lookup();
>>>>    if (mr->iommu_ops)
>>>>        real_addr = translate(dev_virt_addr,..);
>>>>
>>>>    ptr = qemu_get_ram_ptr(real_addr);
>>>>    memcpy(buf, ptr, sz);
>>>> }
>>>>
>>>
>>> Something like that.  It will be a while loop, to allow for iommus
>>> strung in series.
>>>
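In that case the translation step presumably becomes a loop, roughly like the sketch below, so that each iommu in the chain is applied in turn (phys_page_lookup() is the same pseudo-helper used above):

    /* Sketch: keep translating while the looked-up region is an iommu. */
    mr = phys_page_lookup(addr);
    while (mr->iommu_ops) {
        addr = mr->iommu_ops->translate(addr, is_write);
        mr = phys_page_lookup(addr);   /* may resolve to yet another iommu */
    }
    ptr = qemu_get_ram_ptr(addr);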
>> I will model the system like the following:
>>
>> -- Introduce an iommu address space. It will be the container of the
>> regions which are put under the management of the iommu.
>> -- In the system address space, use alias-iommu-mrX with priority=1
>> to expose the iommu address space and obscure the overlapped regions.
>> -- A device's access to an address managed by alias-iommu-mrX:
>> c_p_m_rw(target_physical_addr_t addrA, ..)
>> {
>>     while (len > 0) {
>>         mr = phys_page_lookup();
>>         if (mr->iommu_ops)
>>             addrB = translate(addrA,..);
>>
>>         ptr = qemu_get_ram_ptr(addrB);
>>         memcpy(buf, ptr, sz);
>>     }
>> }
>>
>> Is it correct?
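The overlay described in the first two items above could presumably be built with the existing memory API along these lines (iommu_as_root, IOMMU window base and size are made-up names; memory_region_init_alias(), memory_region_add_subregion_overlap() and get_system_memory() are the real API):

    /* Sketch: expose the iommu-managed space inside the system address space
     * through a higher-priority alias so it obscures the regions it overlaps.
     */
    static MemoryRegion alias_iommu_mr0;

    static void map_iommu_window(MemoryRegion *iommu_as_root,
                                 target_phys_addr_t base, uint64_t size)
    {
        memory_region_init_alias(&alias_iommu_mr0, "alias-iommu-mr0",
                                 iommu_as_root, 0, size);
        memory_region_add_subregion_overlap(get_system_memory(), base,
                                            &alias_iommu_mr0, 1 /* priority */);
    }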
>
> iommus only apply to device accesses, not cpu accesses (as in
> cpu_p_m_w()).  So we will need a generic dma function:
>
Yes, while modeling it I found that c_p_m_rw() operates on the MMU's result,
which is like the translation result of an IOMMU.  But the iommu here, I
think, is just a very limited device -- it adjusts bitband and endianness, so
it maps addresses onto itself.

>   typedef struct MemoryAddressSpace {
>       MemoryRegion *root;
>       PhysPageEntry phys_map;
>       ...
>       // linked list entry for list of all MemoryAddressSpaces
>   } MemoryAddressSpace;
>
>   void memory_address_space_rw(MemoryAddressSpace *mas, ...)
>   {
>      look up mas->phys_map
>      dispatch
>   }
>
>   void cpu_physical_memory_rw(...)
>   {
>       memory_address_space_rw(&system_memory, ...);
>   }
>
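As a usage sketch of that split, a converted device behind an iommu would then issue DMA through the address space it is attached to instead of calling cpu_physical_memory_rw() directly (everything below is hypothetical, following the MemoryAddressSpace sketch above):

    /* Hypothetical: device DMA goes through the device's own address space,
     * so any iommu on the path gets a chance to translate the address.
     */
    typedef struct MyDevState {
        MemoryAddressSpace *dma_as;    /* assigned by bus/machine init code */
    } MyDevState;

    static void my_dev_dma_read(MyDevState *s, target_physical_addr_t addr,
                                uint8_t *buf, int len)
    {
        memory_address_space_rw(s->dma_as, addr, buf, len, false /* is_write */);
    }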
> The snippet
>
>     if (mr->iommu_ops)
>         addrB = translate(addrA,..);
>
> needs to be a little more complicated.  After translation, we need to
> look up the address again in a different phys_map.  So a MemoryRegion
> that is an iommu needs to hold its own phys_map pointer for the lookup.
>
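Spelled out, the lookup would then be something like the sketch below, where iommu_phys_map and phys_page_lookup_in() are hypothetical names for the per-iommu dispatch map and for a lookup against a given map:

    /* Sketch: resolve the translated address in the phys_map of the iommu's
     * target space, not in the map the original access came from.
     */
    mr = phys_page_lookup_in(&mas->phys_map, addr);
    while (mr->iommu_ops) {
        addr = mr->iommu_ops->translate(addr, is_write);
        mr = phys_page_lookup_in(mr->iommu_phys_map, addr);
    }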
> But let's ignore the problem for now, we have too much on our plate.
> With a recursive big lock, there is no problem with iommus, yes?  So as

Do we have iommus in qemu now?  There are no separate phys_maps for the real
address and the device's virtual address, and I think the iommu is only
needed by the host, not the guest, so it need not be emulated by qemu.  If
not, we can just reject nested DMA, and then c_p_m_rw() can only be nested
once; so if we introduce a wrapper for c_p_m_rw(), we can avoid the recursive
big lock, right?
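Such a wrapper might look roughly like this (dma_memory_rw and the thread-local flag are hypothetical; cpu_physical_memory_rw() is the existing function):

    /* Hypothetical wrapper: device-initiated DMA goes through here, and a
     * nested call from inside an MMIO handler is rejected instead of taking
     * the big lock recursively.
     */
    static __thread bool in_device_dma;

    static int dma_memory_rw(target_phys_addr_t addr, uint8_t *buf,
                             int len, int is_write)
    {
        if (in_device_dma) {
            return -1;                 /* nested DMA: reject */
        }
        in_device_dma = true;
        cpu_physical_memory_rw(addr, buf, len, is_write);
        in_device_dma = false;
        return 0;
    }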

Regards,
pingfan

> long as there is no intersection between converted devices and platforms
> with iommus, we're safe.
>
> --
> error compiling committee.c: too many arguments to function


