Re: [Qemu-devel] [PULL 14/28] exec: make address spaces 64-bit wide


From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [PULL 14/28] exec: make address spaces 64-bit wide
Date: Mon, 20 Jan 2014 22:37:18 +0200

On Mon, Jan 20, 2014 at 10:16:01AM -0700, Alex Williamson wrote:
> On Mon, 2014-01-20 at 19:04 +0200, Michael S. Tsirkin wrote:
> > On Mon, Jan 20, 2014 at 09:45:25AM -0700, Alex Williamson wrote:
> > > On Mon, 2014-01-20 at 11:20 -0500, Mike Day wrote:
> > > > Do you know which device is writing to the BAR below? From the trace
> > > > it appears it should be restoring the memory address to the BAR after
> > > > writing all 1s to the BAR and reading back the contents. (the protocol
> > > > for finding the length of the bar memory.)
> > > 
> > > The guest itself is writing to the BARs.  This is a standard sizing
> > > operation by the guest.
> > 
> > Question is, maybe device memory should be disabled?
> > Does Windows do this too (sizing while memory is enabled)?
> 
> Per the spec I would have expected memory & I/O to be disabled on the
> device during a sizing operation, but that's not the case here.  I
> thought you were the one that said Linux doesn't do this because some
> devices don't properly re-enable.

Yes. But maybe we can whitelist devices or something.
I'm guessing modern express devices are all sane
and let you disable/enable memory any number of times.

> I'm not sure how it would change our
> approach to this to know whether Windows behaves the same since sizing
> while disabled is not an issue and we apparently need to support sizing
> while enabled regardless.  Thanks,
> 
> Alex

I'm talking about changing Linux here.
If Windows is already doing this, that gives us more
hope that this will actually work.
Yes, we need the work-around in QEMU regardless.


> > > > On Thu, Jan 9, 2014 at 12:24 PM, Alex Williamson
> > > > <address@hidden> wrote:
> > > > > On Wed, 2013-12-11 at 20:30 +0200, Michael S. Tsirkin wrote:
> > > > >> From: Paolo Bonzini <address@hidden>
> > > > > vfio: vfio_pci_read_config(0000:01:10.0, @0x10, len=0x4) febe0004
> > > > > (save lower 32bits of BAR)
> > > > > vfio: vfio_pci_write_config(0000:01:10.0, @0x10, 0xffffffff, len=0x4)
> > > > > (write mask to BAR)
> > > > 
> > > > Here the device should restore the memory address (original contents)
> > > > to the BAR.
> > > 
> > > Sorry if it's not clear, the trace here is what the vfio-pci driver
> > > sees.  We're just observing the sizing operation of the guest, therefore
> > > we see:
> > > 
> > > 1) orig = read()
> > > 2) write(0xffffffff)
> > > 3) size_mask = read()
> > > 4) write(orig)
> > > 
> > > We're only at step 2)
> > > 
> > > > > vfio: region_del febe0000 - febe3fff
> > > > > (memory region gets unmapped)
> > > > > vfio: vfio_pci_read_config(0000:01:10.0, @0x10, len=0x4) ffffc004
> > > > > (read size mask)
> > > 
> > > step 3)
> > > 
> > > > > vfio: vfio_pci_write_config(0000:01:10.0, @0x10, 0xfebe0004, len=0x4)
> > > > > (restore BAR)
> > > 
> > > step 4)
> > > 
> > > > > vfio: region_add febe0000 - febe3fff [0x7fcf3654d000]
> > > > > (memory region re-mapped)
> > > > > vfio: vfio_pci_read_config(0000:01:10.0, @0x14, len=0x4) 0
> > > > > (save upper 32bits of BAR)
> > > > > vfio: vfio_pci_write_config(0000:01:10.0, @0x14, 0xffffffff, len=0x4)
> > > > > (write mask to BAR)
> > > > 
> > > > and here ...
> > > 
> > > This is the same operation as above, performed on the next BAR,
> > > which holds the upper 32 bits of the 64-bit BAR.
> > > 
> > > > > vfio: region_del febe0000 - febe3fff
> > > > > (memory region gets unmapped)
> > > > > vfio: region_add fffffffffebe0000 - fffffffffebe3fff [0x7fcf3654d000]
> > > > > (memory region gets re-mapped with new address)
> > > > > qemu-system-x86_64: vfio_dma_map(0x7fcf38861710, 0xfffffffffebe0000, 
> > > > > 0x4000, 0x7fcf3654d000) = -14 (Bad address)
> > > > > (iommu barfs because it can only handle 48bit physical addresses)
> > > > 
> > > > I looked around some but I couldn't find an obvious culprit. Could it
> > > > be that the BAR is getting unmapped automatically due to
> > > > x-intx-mmap-timeout-ms before the device has a chance to finish
> > > > restoring the correct value to the BAR?
> > > 
> > > No, this is simply the guest sizing the BAR, this is not an internally
> > > generated operation.  The INTx emulation isn't used here as KVM
> > > acceleration is enabled.  That also only toggles the enable setting on
> > > the mmap'd MemoryRegion, it doesn't change the address it's mapped to.
> > > Thanks,
> > > 
> > > Alex
> 
> 


