
Re: [Qemu-devel] [PATCH 1/5] Add target memory mapping API


From: Ian Jackson
Subject: Re: [Qemu-devel] [PATCH 1/5] Add target memory mapping API
Date: Wed, 21 Jan 2009 16:50:58 +0000

Avi Kivity writes ("Re: [Qemu-devel] [PATCH 1/5] Add target memory mapping 
API"):
> Ian Jackson wrote:
> > Which devices ?  All devices ever that want to do zero-copy DMA into
> > the guest ?
> 
> IDE, scsi, virtio-blk, virtio-net, e1000, maybe a few more.

Yesterday I gave the example of a SCSI tape drive, which is
vanishingly unlikely to result in writes past the actual transfer
length, since the drive definitely produces all of its data in order.

> >> They are passed scatter-gather lists, and I don't think they make 
> >> guarantees about the order in which they're accessed.
> >
> > Many devices will be able to make such promises.
> 
> If they do, and if guests actually depend on these promises, then we 
> will not use the new API until someone is sufficiently motivated to send 
> a patch to enable it.

As I have already pointed out, we won't discover in testing that any
guest depends on those promises, because it's the kind of thing that
will only show up in practice in reasonably obscure situations,
including some error conditions.

So "let's only do this if we discover we need it" is not good enough.
We won't know that we need it.  What will probably happen is that some
user somewhere who is already suffering from some kind of problem will
experience additional apparently-random corruption.  Naturally that's
not going to result in a good bug report.

Even from our point of view as the programmers this isn't a good
approach, because the proposed fix is an API change.  What you're
suggesting is that we introduce a bug, and wait and see if it bites
anyone, in the full knowledge that by then fixing the bug will
involve either widespread changes to all of the DMA API users or
changing a particular driver to be much slower.
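To make that concrete, the fix being argued for is roughly one extra
argument on unmap.  A minimal sketch in C (the map/unmap names follow
Avi's patch; the access_len parameter is the addition under
discussion, and target_phys_addr_t is typedef'd here only so the
fragment stands alone):

  #include <stdint.h>

  typedef uint64_t target_phys_addr_t;  /* stand-in for qemu's type */

  /* Map a guest-physical range for DMA; *plen may come back shorter
   * than requested if only part of the range can be mapped directly. */
  void *cpu_physical_memory_map(target_phys_addr_t addr,
                                target_phys_addr_t *plen,
                                int is_write);

  /* Unmap, telling the core how many bytes were actually accessed,
   * so that a bounce-buffer implementation writes back only that
   * much rather than the whole mapping. */
  void cpu_physical_memory_unmap(void *buffer, target_phys_addr_t len,
                                 int is_write,
                                 target_phys_addr_t access_len);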

> >> This DMA will be into RAM, not mmio.
> >
> > As previously discussed, we might be using bounce buffers even for
> > RAM, depending on the execution model.  You said earlier:
> >
> >   Try it out. I'm sure it will work just fine (if incredibly slowly, 
> >   unless you provide multiple bounce buffers).
> >
> > but here is an example from Jamie of a situation where it won't work
> > right.
> 
> Framebuffers?  Those are RAM.  USB webcams?  These can't be interrupted 
> by SIGINT.  Are you saying a guest depends on an O_DIRECT USB transfer 
> not affecting memory when a USB cable is pulled out?

No, as I said earlier, and as you appeared to accept, it is quite
possible that in some uses of the qemu code - including some uses of
Xen - _all_ DMA will go through bounce buffers.
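The bounce-buffer failure mode is mechanical: if unmap is not told
how much was accessed, it has no choice but to flush the whole
buffer.  A sketch (the structure and helper names here are
hypothetical, not code from the patch):

  #include <stddef.h>
  #include <stdint.h>

  typedef uint64_t target_phys_addr_t;  /* as above, for illustration */

  /* Hypothetical bounce-buffer record. */
  struct bounce_buffer {
      void *host;              /* temporary host-side buffer      */
      target_phys_addr_t gpa;  /* guest-physical range it shadows */
      size_t len;              /* size of the mapping             */
  };

  /* Assumed primitive that copies bytes into guest memory (a
   * stand-in for qemu's cpu_physical_memory_write()). */
  void guest_memory_write(target_phys_addr_t gpa, const void *buf,
                          size_t len);

  /* Without an access length the whole buffer must go back,
   * scribbling on guest memory beyond the actual transfer. */
  void bounce_unmap_without_len(struct bounce_buffer *bb)
  {
      guest_memory_write(bb->gpa, bb->host, bb->len);
  }

  /* With an access length only the bytes the device really wrote go
   * back; the tail of the guest buffer is left untouched. */
  void bounce_unmap_with_len(struct bounce_buffer *bb, size_t access_len)
  {
      guest_memory_write(bb->gpa, bb->host, access_len);
  }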

> >> We don't have a reliable amount to pass.
> >
> > A device which _really_ doesn't have a reliable amount to pass, and
> > which is entitled to scribble all over the RAM it was to DMA to even
> > if it does only a partial transfer, can simply pass the total transfer
> > length.  That would be no different to your proposal.
> 
> I'm suggesting we do that unconditionally (as my patch does) and only 
> add that complexity when we know it's needed for certain.

At the moment there are no such devices (your claims about IDE
notwithstanding), but I think it will be easier to argue about the
specific case after we have agreed on a non-deficient API.
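And on the device side the extra argument costs nothing: a device
with no reliable count passes the total length, which degenerates to
exactly what the unamended patch does, while a sequential device such
as the tape drive passes the real count.  A hypothetical sketch,
reusing the declarations above (the completion functions are made up
for illustration):

  /* The drive produced its data in order, so the count is exact. */
  void tape_dma_complete(void *buffer, target_phys_addr_t len,
                         target_phys_addr_t bytes_transferred)
  {
      cpu_physical_memory_unmap(buffer, len, 1 /* is_write */,
                                bytes_transferred);
  }

  /* No reliable count: pass the total, i.e. today's behaviour. */
  void no_reliable_count_complete(void *buffer, target_phys_addr_t len)
  {
      cpu_physical_memory_unmap(buffer, len, 1 /* is_write */, len);
  }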

Ian.



