
Re: [Qemu-devel] [PATCH 1/5] Add target memory mapping API


From: Jamie Lokier
Subject: Re: [Qemu-devel] [PATCH 1/5] Add target memory mapping API
Date: Tue, 20 Jan 2009 18:08:19 +0000
User-agent: Mutt/1.5.13 (2006-08-11)

Avi Kivity wrote:
> Framebuffers?  Those are RAM.  USB webcams?  These can't be interrupted 
> by SIGINT.  Are you saying a guest depends on an O_DIRECT USB transfer 
> not affecting memory when a USB cable is pulled out?

The USB thing is probably more about emulated UHCI/EHCI pass-through
to real host USB devices, which aren't themselves emulated in QEMU.

> >>We don't have a reliable amount to pass.
> >>    
> >A device which _really_ doesn't have a reliable amount to pass, and
> >which is entitled to scribble all over the RAM it was to DMA to even
> >if it does only a partial transfer, can simply pass the total transfer
> >length.  That would be no different to your proposal.
> >  
> 
> I'm suggesting we do that unconditionally (as my patch does) and only 
> add that complexity when we know it's needed for certain.

Fair enough.  Things can be added if needed.  But please make it really
clear in the DMA API comments that the whole buffer may be
overwritten.  If a guest actually does depend on partial transfers
leaving the rest of the buffer intact, that won't show up in testing,
because the whole buffer is only overwritten when a bounce copy is
used.
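For example, the comment might say something like the sketch below.
The signature follows the style of the patch under review, but the
exact wording and types here are illustrative, not quoted from it:

    /*
     * Map guest physical memory for direct access by device emulation.
     *
     * WARNING: on unmap, the *entire* mapped buffer may be written
     * back to guest memory, even if the device completed only a
     * partial transfer.  This happens only when a bounce buffer was
     * used, so a guest that wrongly relies on the untransferred part
     * of the buffer staying intact will still appear to work in most
     * testing.
     */
    void *cpu_physical_memory_map(target_phys_addr_t addr,
                                  target_phys_addr_t *plen,
                                  int is_write);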

Linux itself had some issues with _its_ DMA API recently: people have
been writing drivers against the Linux DMA API with broken assumptions
that happen to work on x86, and work only some of the time on other
architectures.  These bugs don't show up during driver testing, and
are very difficult to track down later.  I suspect "overwrites the
rest of the buffer with randomness/zeros, but only under bounce-buffer
conditions" will be similarly unlikely to trigger, and very difficult
to track down if it ever causes a problem.

The recent solution in Linux was to add debugging options which check
that the API is used correctly, even on x86.
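
As a sketch of the class of misuse such debug checks catch (the
fragment below is hypothetical, though dma_map_single() and
dma_unmap_single() are the real Linux DMA API calls):

    #include <linux/dma-mapping.h>

    /* Hypothetical driver fragment: the unmap size does not match the
     * map size.  On x86 this usually happens to work anyway, so it
     * survives driver testing; a DMA API debug check can warn at
     * unmap time instead. */
    static void broken_rx(struct device *dev, void *buf)
    {
        dma_addr_t handle;

        handle = dma_map_single(dev, buf, 4096, DMA_FROM_DEVICE);
        if (dma_mapping_error(dev, handle))
            return;

        /* ... device DMAs into buf ... */

        dma_unmap_single(dev, handle, 2048, DMA_FROM_DEVICE); /* wrong size */
    }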

Here's a final thought, to do with performance:

Suppose e1000, configured for jumbo frames, receives lots of small
packets, and the DMA subsystem has to bounce-copy the data for some
reason (Ian suggested maybe always doing that with Xen for DMA to
guest RAM?).

Without a length passed to unmap, won't it copy 65536 bytes per packet
(most of it from cold cache), because that's the amount set up for DMA
to receive from /dev/tap, instead of the 256 or 1514 bytes which are
each packet's actual size?
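
To make the cost concrete, here is a sketch of the bounce-buffer
unmap path.  BounceBuffer and bounce_unmap() are illustrative
stand-ins rather than the patch's actual code, though
cpu_physical_memory_write() and qemu_free() are existing QEMU helpers:

    typedef struct BounceBuffer {
        target_phys_addr_t guest_addr; /* guest-physical destination */
        uint8_t *host_ptr;             /* temporary host-side copy */
        target_phys_addr_t len;        /* size set up for DMA, e.g. 65536 */
    } BounceBuffer;

    /* Illustrative unmap for the bounce case.  Given access_len, a
     * 1514-byte packet received into a 65536-byte jumbo-frame mapping
     * copies only 1514 bytes back to guest RAM.  Without it, the only
     * safe choice is to copy the whole b->len. */
    static void bounce_unmap(BounceBuffer *b, int is_write,
                             target_phys_addr_t access_len)
    {
        if (is_write) {
            cpu_physical_memory_write(b->guest_addr, b->host_ptr,
                                      access_len);
        }
        qemu_free(b->host_ptr);
    }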

-- Jamie



