Re: [Qemu-devel] [PATCH 13/13] iommu: Add a memory barrier to DMA RW function


From: Paolo Bonzini
Subject: Re: [Qemu-devel] [PATCH 13/13] iommu: Add a memory barrier to DMA RW function
Date: Sat, 19 May 2012 09:24:32 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:12.0) Gecko/20120430 Thunderbird/12.0.1

On 19/05/2012 00:26, Benjamin Herrenschmidt wrote:
>> In theory you would need a memory barrier before the first ld/st and one
>> after the last... considering virtio uses map/unmap, what about leaving
>> map/unmap and ld*_phys/st*_phys as the high performance unsafe API?
>> Then you can add barriers around ld*_pci_dma/st*_pci_dma.
> 
> So no, my idea is to make anybody using ld_* and st_* (non-_dma)
> responsible for their own barriers. The _dma variants are implemented
> in terms of cpu_physical_memory_rw, so they should inherit the barriers.

Yeah, after these patches they are.
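
For concreteness, that inheritance works roughly like this; a sketch
with simplified signatures, not the exact code from the series:

    /* Sketch: a _dma load accessor built on pci_dma_read(), which in
     * turn goes through cpu_physical_memory_rw().  If the barrier sits
     * inside cpu_physical_memory_rw(), every _dma caller inherits it. */
    static inline uint32_t ldl_pci_dma(PCIDevice *dev, dma_addr_t addr)
    {
        uint32_t val;

        pci_dma_read(dev, addr, &val, sizeof(val)); /* barrier inherited */
        return le32_to_cpu(val);
    }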

> As for map/unmap, there's an inconsistency: when map falls back to
> bounce buffering, it gets implicit barriers. My idea was to always
> put a barrier before; see below.

The bounce buffering case is never hit in practice.  Your reasoning
about always adding a barrier before makes sense, but it's probably
better to add (a) a variant of map with no barrier; (b) a variant that
takes an sglist and adds only one barrier.
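
Something like the following split would do it; the _nobarrier name
and the DMAContext-based signature are placeholders, not code from the
series:

    /* Sketch: a barrier-free map primitive for hot paths (e.g. virtio),
     * plus a wrapper that adds the barrier for everyone else. */
    void *dma_memory_map_nobarrier(DMAContext *dma, dma_addr_t addr,
                                   dma_addr_t *len, DMADirection dir);

    static inline void *dma_memory_map(DMAContext *dma, dma_addr_t addr,
                                       dma_addr_t *len, DMADirection dir)
    {
        smp_mb();   /* order against all earlier guest memory accesses */
        return dma_memory_map_nobarrier(dma, addr, len, dir);
    }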

I agree that a barrier in unmap is not needed.

>>> The full sync should provide all the synchronization we need
>>
>> You mean "sync; ld; sync" for load and "sync; st" for store?  That would
>> do, yes.
> 
> No, just sync; ld
> 
> I.e., if I put a barrier "before" in cpu_physical_memory_rw, I ensure
> ordering vs. all previous accesses.

Ok.
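
That is, a single full barrier at the top of cpu_physical_memory_rw
orders each access against everything before it, and the next call's
leading barrier covers the ordering after it.  As a sketch, assuming
an smp_mb()-style full-barrier macro:

    void cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
                                int len, int is_write)
    {
        /* One full barrier *before* the access: orders it against all
         * previous loads and stores.  Ordering after the access is
         * provided by the leading barrier of the next call. */
        smp_mb();
        /* ... existing implementation ... */
    }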

I guess the C11/C++ guys require an isync barrier after loads or
stores because they need to order the load/store vs. code accessing
other memory.  This is not needed in QEMU, because all guest accesses
go through cpu_physical_memory_rw (or have their own barriers).
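
For contrast, a C11 acquire load puts its ordering point after the
load, since arbitrary code following it may touch memory directly; on
POWER that typically compiles to a trailing lwsync or isync-based
sequence.  Illustration only, not QEMU code:

    #include <stdatomic.h>
    #include <stdint.h>

    /* C11 acquire load: later memory accesses cannot be reordered
     * before it, hence the barrier lands *after* the load. */
    static uint32_t load_acquire(_Atomic uint32_t *p)
    {
        return atomic_load_explicit(p, memory_order_acquire);
    }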

Paolo


