From: Tian, Kevin
Subject: Re: [Qemu-devel] [iGVT-g] <summary> RE: VFIO based vGPU(was Re: [Announcement] 2015-Q3 release of XenGT - a Mediated ...)
Date: Thu, 4 Feb 2016 04:16:06 +0000

> From: Neo Jia [mailto:address@hidden]
> Sent: Thursday, February 04, 2016 11:52 AM
> 
> > > > 4) Map/unmap guest memory
> > > > --
> > > > It's there for KVM.
> > >
> > > Map and unmap for who?  For the vGPU or for the VM?  It seems like we
> > > know how to map guest memory for the vGPU without KVM, but that's
> > > covered in 7), so I'm not entirely sure what this is specifying.
> >
> > Map guest memory for emulation purposes in the vGPU device model, e.g. to
> > r/w the guest GPU page table, command buffer, etc. It's a basic requirement
> > that we see in any device model.
> >
> > 7) provides the database (both GPA->IOVA and GPA->HPA), where GPA->HPA
> > can be used to implement this interface for KVM. However for Xen it's
> > different, as a special foreign-domain mapping hypercall is involved, which
> > is Xen-specific and so not appropriate to put in VFIO.
> >
> > That's why we list this interface separately as a key requirement (though
> > it's obvious for KVM)
> 
> Hi Kevin,
> 
> It seems you are trying to map the guest physical memory into your kernel 
> driver
> on the host, right?

Yes.
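
A minimal sketch of what this looks like for KVM (the function name is
hypothetical, and it assumes the device model holds a struct kvm handle for
the guest): the vGPU device model reads a guest GPU page-table entry through
KVM's guest-memory accessors, i.e. the GPA->HPA path mentioned above.

#include <linux/kvm_host.h>

/* Hypothetical: read one guest GTT/PPGTT entry at 'gtt_gpa' for emulation. */
static int vgpu_read_gtt_entry(struct kvm *kvm, gpa_t gtt_gpa, u64 *entry)
{
        /*
         * kvm_read_guest() resolves the GPA through KVM's memslots and
         * copies from the Qemu process address space, which is exactly the
         * "map guest memory for emulation" service described above.
         */
        return kvm_read_guest(kvm, gtt_gpa, entry, sizeof(*entry));
}

The Xen case cannot be written this way, since mapping another domain's memory
goes through a Xen-specific foreign-mapping hypercall instead.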

> 
> If yes, I think we already have the required information to achieve that.
> 
> The type1 IOMMU VGPU interface already provides <QEMU_VA, iova, qemu_mm>, which
> is enough for us to do any lookup.

As I said, it's easy for KVM, but not the same for Xen, which needs a special
hypercall to map guest memory in the kernel, and VFIO is not used by Xen today.
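
To illustrate why the KVM side is easy (illustrative code only: 'struct
vgpu_dma' is a hypothetical mirror of the records the type1 VGPU backend would
keep, not an existing structure): with <QEMU_VA, iova, qemu_mm> recorded and
GPA equal to IOVA in this scheme, the GPA->HVA lookup is pure arithmetic, and
qemu_mm can then be used to pin or map the backing page in the host kernel.

#include <linux/mm_types.h>

struct vgpu_dma {
        unsigned long iova;     /* guest physical address of the range */
        unsigned long vaddr;    /* QEMU_VA: Qemu virtual address of the range */
        unsigned long size;
        struct mm_struct *mm;   /* qemu_mm */
};

/* Return the Qemu VA backing 'gpa', or 0 if this range doesn't cover it. */
static unsigned long vgpu_gpa_to_hva(struct vgpu_dma *dma, unsigned long gpa)
{
        if (gpa < dma->iova || gpa >= dma->iova + dma->size)
                return 0;
        return dma->vaddr + (gpa - dma->iova);
}

For Xen there is no qemu_mm to walk; the mapping has to be established with a
foreign-domain mapping hypercall, which is why this service is listed as a
hypervisor-specific requirement rather than folded into VFIO.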

> 
> >
> > >
> > > > 5) Pin/unpin guest memory
> > > > --
> > > > IGD or any PCI passthru should have the same requirement, so we should be
> > > > able to leverage existing code in VFIO. The only tricky thing (Jike may
> > > > elaborate after he is back) is that KVMGT requires pinning the EPT entry
> > > > too, which requires some further change on the KVM side. But I'm not sure
> > > > whether it still holds true after the design changes made in this thread,
> > > > so I'll leave it to Jike to comment further.
> > >
> > > PCI assignment requires pinning all of guest memory, while I would think
> > > IGD would only need to pin selective memory, so is this simply stating
> > > that both have the need to pin memory, not that they'll do it to the
> > > same extent?
> >
> > For simplicity let's first pin all memory, while taking selective pinning 
> > as a
> > future enhancement.
> >
> > The tricky thing is that the existing 'pin' action in VFIO doesn't actually
> > pin the EPT entry too (it only pins the host page tables of the Qemu
> > process). There are various places where EPT entries might be invalidated
> > while the guest is running, whereas KVMGT requires EPT entries to be pinned
> > too. Let's wait for Jike to elaborate on whether this part is still required
> > today.
> 
> Sorry, I don't quite follow the logic here. The current VFIO TYPE1 IOMMU
> (including the API and the underlying IOMMU implementation) will pin the guest
> physical memory and install those pages into the proper device domain. Yes, it
> is only for the QEMU process, as that is the process the VM is running in.
> 
> Am I missing something here?

For Qemu there are two page tables involved: one is the host page table you
mentioned here, used in root mode; the other is the EPT page table used as the
second-level translation when the guest is running in non-root mode. I'm not
sure why KVMGT requires pinning EPT entries; Jike should know better here.
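
A rough sketch of the distinction (the helper below is hypothetical and only
mimics what the type1 backend does when it pins guest memory, using the GUP
interface of recent kernels): pinning goes through the Qemu process's host
page tables, while the EPT entry for the same GPA is managed separately by
KVM's MMU and can still be zapped and re-faulted later.

#include <linux/mm.h>

/* Hypothetical: pin one guest page via its Qemu virtual address. */
static struct page *vgpu_pin_page(unsigned long qemu_va)
{
        struct page *page;

        /* Takes a reference through the host (root-mode) page tables only. */
        if (get_user_pages_fast(qemu_va, 1, FOLL_WRITE, &page) != 1)
                return NULL;

        /*
         * Nothing here prevents KVM from invalidating the EPT entry for the
         * corresponding GPA (e.g. on mmu_notifier events); a guest access
         * would then simply re-fault. That gap is what KVMGT's "pin EPT
         * entry" requirement is about.
         */
        return page;
}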

> >
> > b) services to support device emulation, which are going to be hypervisor
> > specific, including:
> >     4) Map/unmap guest memory
> 
> I think we have the ability to support this already with VFIO; see my comments
> above.

Again, please don't consider only KVM/VFIO. We need to support both KVM and
Xen in this common framework.
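
One possible shape for that hypervisor-specific service layer (names are
illustrative, not an existing interface): the common vGPU device model calls
through a small set of hooks, a KVM backend implements them using the
GPA->IOVA/HPA information available via VFIO, and a Xen backend implements
them with foreign-domain mapping hypercalls.

struct vgpu_hypervisor_ops {
        int  (*map_guest_page)(void *vgpu, unsigned long gpa,
                               unsigned long len, void **va);
        void (*unmap_guest_page)(void *vgpu, void *va, unsigned long len);
        int  (*pin_guest_page)(void *vgpu, unsigned long gpa);
        void (*unpin_guest_page)(void *vgpu, unsigned long gpa);
};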



