From: Michael S. Tsirkin
Subject: [Qemu-devel] Re: [PATCH 4/7] ide: use the PCI memory access interface
Date: Thu, 2 Sep 2010 12:58:13 +0300
User-agent: Mutt/1.5.20 (2009-12-10)

On Thu, Sep 02, 2010 at 12:12:00PM +0300, Eduard - Gabriel Munteanu wrote:
> On Thu, Sep 02, 2010 at 08:19:11AM +0300, Michael S. Tsirkin wrote:
> > On Sat, Aug 28, 2010 at 05:54:55PM +0300, Eduard - Gabriel Munteanu wrote:
> > > Emulated PCI IDE controllers now use the memory access interface. This
> > > also allows an emulated IOMMU to translate and check accesses.
> > > 
> > > Map invalidation results in cancelling DMA transfers. Since the guest OS
> > > can't properly recover the DMA results in case the mapping is changed,
> > > this is a fairly good approximation.
> > > 
> > > Signed-off-by: Eduard - Gabriel Munteanu <address@hidden>
> > > ---
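
To make the invalidation behaviour above concrete: a map-invalidation hook
for BMDMA could simply cancel the in-flight request, along these lines (a
sketch only; the hook name and its wiring are assumptions, not code from
this patch, while BMDMAState's aiocb field and bdrv_aio_cancel() are
existing QEMU interfaces):

    /* Sketch of the described invalidation behaviour: since the guest
     * cannot recover a transfer once the mapping changes, just cancel
     * the in-flight DMA.  The hook itself is hypothetical. */
    static void bmdma_map_invalidated(void *opaque)
    {
        BMDMAState *bm = opaque;

        if (bm->aiocb) {
            bdrv_aio_cancel(bm->aiocb);
            bm->aiocb = NULL;
        }
    }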
> 
> [snip]
> 
> > > +static inline void bmdma_memory_read(BMDMAState *bm,
> > > +                                     target_phys_addr_t addr,
> > > +                                     uint8_t *buf,
> > > +                                     target_phys_addr_t len)
> > > +{
> > > +    bm->rw(bm->opaque, addr, buf, len, 0);
> > > +}
> > > +
> > > +static inline void bmdma_memory_write(BMDMAState *bm,
> > > +                                      target_phys_addr_t addr,
> > > +                                      uint8_t *buf,
> > > +                                      target_phys_addr_t len)
> > > +{
> > > +    bm->rw(bm->opaque, addr, buf, len, 1);
> > > +}
> > > +
> > 
> > Here again, I am concerned about indirection and pointer chasing on
> > the data path.
> > Can we have an iommu pointer in the device, and do a fast path in case
> > there is no iommu?
> > 
> 
> See my other reply.

I don't insist on this solution, but what other way do you propose to
avoid the overhead for everyone not using an iommu?
I'm all for a solution that would help the iommu case as well,
but none has been proposed yet.
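
Concretely, the fast path being asked for might look something like this
(a sketch, not a proposal from the thread: the iommu field on BMDMAState
is hypothetical, while cpu_physical_memory_read() is QEMU's existing
direct-access helper):

    /* Hypothetical fast path: skip the indirect call when no IOMMU is
     * configured.  The 'iommu' member is assumed for illustration. */
    static inline void bmdma_memory_read(BMDMAState *bm,
                                         target_phys_addr_t addr,
                                         uint8_t *buf,
                                         target_phys_addr_t len)
    {
        if (likely(!bm->iommu)) {
            cpu_physical_memory_read(addr, buf, len); /* common case, direct */
        } else {
            bm->rw(bm->opaque, addr, buf, len, 0);    /* translated access */
        }
    }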

> > >  static inline IDEState *idebus_active_if(IDEBus *bus)
> > >  {
> > >      return bus->ifs + bus->unit;
> > > diff --git a/hw/ide/macio.c b/hw/ide/macio.c
> > > index bd1c73e..962ae13 100644
> > > --- a/hw/ide/macio.c
> > > +++ b/hw/ide/macio.c
> > > @@ -79,7 +79,7 @@ static void pmac_ide_atapi_transfer_cb(void *opaque, int ret)
> > >  
> > >      s->io_buffer_size = io->len;
> > >  
> > > -    qemu_sglist_init(&s->sg, io->len / MACIO_PAGE_SIZE + 1);
> > > +    qemu_sglist_init(&s->sg, io->len / MACIO_PAGE_SIZE + 1, NULL, NULL, NULL);
> > >      qemu_sglist_add(&s->sg, io->addr, io->len);
> > >      io->addr += io->len;
> > >      io->len = 0;
> > > @@ -141,7 +141,7 @@ static void pmac_ide_transfer_cb(void *opaque, int ret)
> > >      s->io_buffer_index = 0;
> > >      s->io_buffer_size = io->len;
> > >  
> > > -    qemu_sglist_init(&s->sg, io->len / MACIO_PAGE_SIZE + 1);
> > > +    qemu_sglist_init(&s->sg, io->len / MACIO_PAGE_SIZE + 1, NULL, NULL, NULL);
> > >      qemu_sglist_add(&s->sg, io->addr, io->len);
> > >      io->addr += io->len;
> > >      io->len = 0;
> > > diff --git a/hw/ide/pci.c b/hw/ide/pci.c
> > > index 4d95cc5..5879044 100644
> > > --- a/hw/ide/pci.c
> > > +++ b/hw/ide/pci.c
> > > @@ -183,4 +183,11 @@ void pci_ide_create_devs(PCIDevice *dev, DriveInfo **hd_table)
> > >              continue;
> > >          ide_create_drive(d->bus+bus[i], unit[i], hd_table[i]);
> > >      }
> > > +
> > > +    for (i = 0; i < 2; i++) {
> > > +        d->bmdma[i].rw = (void *) pci_memory_rw;
> > > +        d->bmdma[i].map = (void *) pci_memory_map;
> > > +        d->bmdma[i].unmap = (void *) pci_memory_unmap;
> > > +        d->bmdma[i].opaque = dev;
> > > +    }
> > >  }
> > 
> > These casts show something is wrong with the API, IMO.
> > 
> 
> Hm, here's an oversight on my part: I think I should provide explicit
> bmdma hooks, since pcibus_t is a uint64_t and target_phys_addr_t is a
> uint{32,64}_t depending on the guest machine, so it might be buggy on
> 32-bit wrt calling conventions. But that introduces yet another
> non-inlined function call :-(. That would drop the (void *) cast,
> though.
> 
> 
>       Eduard
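
A sketch of the explicit hooks Eduard describes, for illustration only
(bmdma_rw is a hypothetical name, and pci_memory_rw()'s prototype is
assumed from this patch series, not quoted from it):

    /* Hypothetical trampoline: converts the target_phys_addr_t arguments
     * to pci_memory_rw()'s pcibus_t at a real call site, instead of
     * casting the function pointer itself (undefined behaviour in C). */
    static void bmdma_rw(void *opaque, target_phys_addr_t addr,
                         uint8_t *buf, target_phys_addr_t len, int is_write)
    {
        PCIDevice *dev = opaque;

        pci_memory_rw(dev, addr, buf, len, is_write);
    }

    /* ... then in pci_ide_create_devs(), no (void *) casts needed: */
    d->bmdma[i].rw = bmdma_rw;
    d->bmdma[i].opaque = dev;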

So we get away without casts, but only because the C compiler
will silently convert the types, possibly discarding
data in the process. Or we add a check that tries to detect
this, but then there's no good way to report a DMA error to the user.
IOW, if our code only works because target_phys_addr_t fits in pcibus_t,
what good are the abstraction and the distinct types?

This is why I think we need a generic DMA API using DMA addresses.
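
A minimal sketch of what such an API could look like, with a dedicated
DMA address type wide enough for any bus and a return code so faults can
be reported (the names are illustrative, not a committed QEMU interface):

    typedef uint64_t dma_addr_t;            /* wide enough for any bus */

    typedef struct DMAContext DMAContext;   /* per-bus translation state */

    /* Returns 0 on success, nonzero on a translation or access fault,
     * so a DMA error can be reported instead of silently truncated. */
    int dma_memory_rw(DMAContext *dma, dma_addr_t addr,
                      void *buf, dma_addr_t len, int is_write);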

-- 
MST


