Re: [Qemu-devel] [RFC 1/2] pci-dma-api-v1


From: Andrea Arcangeli
Subject: Re: [Qemu-devel] [RFC 1/2] pci-dma-api-v1
Date: Fri, 28 Nov 2008 19:50:01 +0100

On Fri, Nov 28, 2008 at 07:59:13PM +0200, Blue Swirl wrote:
> I don't know, here's a pointer:
> http://lists.gnu.org/archive/html/qemu-devel/2008-08/msg00092.html

I'm in total agreement with it. The missing "proper vectored AIO
operations" are bdrv_aio_readv/writev ;).

I wonder how aio_readv/writev can possibly be missing from POSIX AIO.
Unbelievable. It'd be totally trivial to add them to glibc, much
easier in fact than doing the pthread_create dance by hand, but how
can we add a dependency on a particular glibc version? Ironically it
would be more user-friendly to depend on the Linux kernel-aio
implementation, which has been available for ages and is guaranteed
to run faster (or at least not slower).
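
Just to illustrate what the by-hand emulation amounts to (a rough
sketch only, not the patch; the vaio_* names are made up): a helper
thread walks the iovec calling pread() per element and then fires a
completion callback, i.e. exactly the pthread_create dance that a
proper glibc aio_readv would spare us:

#include <pthread.h>
#include <stdlib.h>
#include <sys/uio.h>
#include <unistd.h>

typedef void (*vaio_cb)(void *opaque, ssize_t ret);

struct vaio_request {
    int fd;
    const struct iovec *iov;
    int iovcnt;
    off_t offset;
    vaio_cb cb;
    void *opaque;
};

static void *vaio_read_worker(void *arg)
{
    struct vaio_request *req = arg;
    ssize_t total = 0;
    int i;

    /* naive emulation: one pread() per iovec element */
    for (i = 0; i < req->iovcnt; i++) {
        ssize_t r = pread(req->fd, req->iov[i].iov_base,
                          req->iov[i].iov_len, req->offset + total);
        if (r < 0) {
            total = r;
            break;
        }
        total += r;
        if ((size_t)r < req->iov[i].iov_len) {
            break;              /* short read, stop here */
        }
    }

    req->cb(req->opaque, total);
    free(req);
    return NULL;
}

static int vaio_readv(int fd, const struct iovec *iov, int iovcnt,
                      off_t offset, vaio_cb cb, void *opaque)
{
    pthread_t tid;
    struct vaio_request *req = malloc(sizeof(*req));

    if (!req) {
        return -1;
    }
    req->fd = fd;
    req->iov = iov;
    req->iovcnt = iovcnt;
    req->offset = offset;
    req->cb = cb;
    req->opaque = opaque;

    if (pthread_create(&tid, NULL, vaio_read_worker, req)) {
        free(req);
        return -1;
    }
    pthread_detach(tid);
    return 0;
}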

> Sorry, my description seems to have led you down a totally wrong track.
> I meant this scenario: device (Lance Ethernet) -> DMA controller
> (MACIO) -> IOMMU -> physical memory. (In this case vectored DMA won't
> be useful since there is byte swapping involved, but serves as an
> example about generic DMA). At each step the DMA address is rewritten.
> It would be nice if the interface between Lance and DMA, DMA and IOMMU
> and IOMMU and memory was the same.

No problem. So you think I should change it to qemu_dma_sg instead of
pci_dma_sg? We can decide it later, but surely we can think about it
in the meantime ;).
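
To make the idea concrete, here's a very rough sketch of what a
bus-agnostic layer could look like (the names are hypothetical, not
from the posted patch): every hop in the Lance -> MACIO -> IOMMU ->
RAM chain exposes the same translate hook, and a generic walker
rewrites the address at each step:

#include <stddef.h>
#include <stdint.h>

typedef uint64_t dma_addr_t;

typedef struct DMAContext DMAContext;

struct DMAContext {
    /* rewrite 'addr' for the next hop; return 0 on success */
    int (*translate)(DMAContext *dma, dma_addr_t addr, size_t len,
                     dma_addr_t *out);
    DMAContext *next;           /* e.g. MACIO -> IOMMU -> RAM */
};

/* walk the chain until we end up with a guest physical address */
static int qemu_dma_translate(DMAContext *dma, dma_addr_t addr,
                              size_t len, dma_addr_t *phys)
{
    while (dma) {
        if (dma->translate(dma, addr, len, &addr)) {
            return -1;          /* translation fault */
        }
        dma = dma->next;
    }
    *phys = addr;
    return 0;
}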

> Here's some history, please have a look.
> 
> My first failed attempt:
> http://lists.gnu.org/archive/html/qemu-devel/2007-08/msg00179.html
> 
> My second failed rough sketch:
> http://lists.gnu.org/archive/html/qemu-devel/2007-10/msg00626.html
> 
> Anthony's version:
> http://lists.gnu.org/archive/html/qemu-devel/2008-03/msg00474.html
> 
> Anthony's second version:
> http://lists.gnu.org/archive/html/qemu-devel/2008-04/msg00077.html

Thanks a lot for the pointers.

BTW, lots of the credit in the design of my current implementation
goes to Avi, I forgot to mention it in previous emails.

The little cache layer I added at the last minute was very buggy, so
don't look at it too closely; just assume it works when reading the
patch ;). I think I fixed it now in my tree, so the next version will
be much better. I've also noticed some problems with Windows (I
didn't test Windows before posting); those aren't related to the
cache layer, as I added a #define that disables it and replaces it
with plain malloc/free. As soon as Windows runs completely flawlessly
I'll post an update.

The iov cache layer is now also improved so that it caches at most N
elements, where N is the maximum number of simultaneously in-flight
DMAs that has ever occurred at runtime, so it's a bit smarter than a
generic slab cache.
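
In case it helps to see what I mean (sketch only, not the real code;
the names are invented): the free list refuses to grow past the
in-flight high-water mark, so memory use tracks actual peak
concurrency instead of growing like a generic slab would:

#include <stdlib.h>
#include <sys/uio.h>

typedef struct IovCacheEntry {
    struct IovCacheEntry *next;
    struct iovec *iov;          /* cached iov buffer, may be NULL */
} IovCacheEntry;

static IovCacheEntry *iov_free_list;
static int iov_free_count;      /* entries currently cached */
static int iov_in_flight;       /* entries currently handed out */
static int iov_peak_in_flight;  /* high-water mark == cache limit */

static IovCacheEntry *iov_cache_get(void)
{
    IovCacheEntry *e = iov_free_list;

    if (e) {
        iov_free_list = e->next;
        iov_free_count--;
    } else {
        e = calloc(1, sizeof(*e));
    }
    if (e && ++iov_in_flight > iov_peak_in_flight) {
        iov_peak_in_flight = iov_in_flight;
    }
    return e;
}

static void iov_cache_put(IovCacheEntry *e)
{
    iov_in_flight--;
    if (iov_free_count < iov_peak_in_flight) {
        /* keep it: we may need this many simultaneously again */
        e->next = iov_free_list;
        iov_free_list = e;
        iov_free_count++;
    } else {
        free(e->iov);
        free(e);
    }
}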

Last but not least, there is still one malloc in the direct fast
path, but the plan is to eliminate that too by embedding the iov
inside the param (keeping it at the end of the struct, as if I were
extending the linear_iov); then the cache layer will handle it all
and there will be zero mallocs. The bounce path will be penalized
because it will have to allocate the direct iov too, but we don't
care.
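
Roughly what that layout could look like (again a sketch; the
DMAParam name is made up, not the actual patch): the iov becomes a
flexible array member at the tail of the param, so a single
allocation, recyclable by the cache layer, covers both:

#include <stdlib.h>
#include <sys/uio.h>

typedef struct DMAParam {
    void *opaque;               /* completion callback argument */
    int iovcnt;
    /* direct-path iov lives right behind the struct */
    struct iovec iov[];
} DMAParam;

static DMAParam *dma_param_alloc(int max_iovcnt)
{
    /* one malloc covers both the param and its iov array */
    return malloc(sizeof(DMAParam) + max_iovcnt * sizeof(struct iovec));
}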

Thanks!
Andrea



