
Re: [PATCH v6 6/8] KVM: Handle page fault for private memory


From: Nikunj A. Dadhania
Subject: Re: [PATCH v6 6/8] KVM: Handle page fault for private memory
Date: Fri, 24 Jun 2022 09:28:23 +0530
User-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101 Thunderbird/91.3.2

On 5/19/2022 9:07 PM, Chao Peng wrote:
> A page fault can carry the information of whether the access is private
> or not for a KVM_MEM_PRIVATE memslot; this can be filled in by architecture
> code (like TDX code). To handle a page fault for such an access, KVM maps the
> page only when this private property matches the host's view of the page,
> which is decided by checking whether the corresponding page is
> populated in the private fd or not. A page is considered private when
> it is populated in the private fd; otherwise it is shared.
> 
> For a successful match, the private pfn is obtained via the memfile_notifier
> callbacks from the private fd, and the shared pfn is obtained with the
> existing get_user_pages.
> 
> For a failed match, KVM causes a KVM_EXIT_MEMORY_FAULT exit to
> userspace. Userspace can then convert the memory between private/shared
> from the host's view and retry the access.
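
To restate the flow described above in code form, a rough sketch only (not
taken from the patch; gfn_backed_by_private_fd() and private_fd_get_pfn() are
placeholder names for the memfile_notifier-based helpers this series adds):

	/*
	 * Sketch of the described fault flow; helper names are illustrative,
	 * not the symbols introduced by this patch.
	 */
	static int faultin_pfn_sketch(struct kvm_vcpu *vcpu,
				      struct kvm_page_fault *fault)
	{
		/* Host's view: a gfn is private iff it is populated in the private fd. */
		bool host_private = gfn_backed_by_private_fd(fault->slot, fault->gfn);

		if (fault->is_private != host_private) {
			/*
			 * Mismatch: exit to userspace so it can convert the
			 * range between private and shared, after which the
			 * guest retries the access.
			 */
			vcpu->run->exit_reason = KVM_EXIT_MEMORY_FAULT;
			return -EFAULT;
		}

		if (fault->is_private)
			/* Private pfn comes from the private fd via memfile_notifier. */
			return private_fd_get_pfn(fault->slot, fault->gfn, &fault->pfn);

		/* Shared pfn comes from the existing get_user_pages() path. */
		fault->pfn = __gfn_to_pfn_memslot(fault->slot, fault->gfn, false,
						  NULL, fault->write,
						  &fault->map_writable, &fault->hva);
		return 0;
	}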
> 
> Co-developed-by: Yu Zhang <yu.c.zhang@linux.intel.com>
> Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
> Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
> ---
>  arch/x86/kvm/mmu.h              |  1 +
>  arch/x86/kvm/mmu/mmu.c          | 70 +++++++++++++++++++++++++++++++--
>  arch/x86/kvm/mmu/mmu_internal.h | 17 ++++++++
>  arch/x86/kvm/mmu/mmutrace.h     |  1 +
>  arch/x86/kvm/mmu/paging_tmpl.h  |  5 ++-
>  include/linux/kvm_host.h        | 22 +++++++++++
>  6 files changed, 112 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
> index 7e258cc94152..c84835762249 100644
> --- a/arch/x86/kvm/mmu.h
> +++ b/arch/x86/kvm/mmu.h
> @@ -176,6 +176,7 @@ struct kvm_page_fault {
>  
>       /* Derived from mmu and global state.  */
>       const bool is_tdp;
> +     const bool is_private;
>       const bool nx_huge_page_workaround_enabled;
>  
>       /*
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index afe18d70ece7..e18460e0d743 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -2899,6 +2899,9 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm,
>       if (max_level == PG_LEVEL_4K)
>               return PG_LEVEL_4K;
>  
> +     if (kvm_slot_is_private(slot))
> +             return max_level;

Can you explain the rationale behind the above change? 
AFAIU, this overrides the transparent_hugepage=never setting for both 
shared and private mappings.

>       host_level = host_pfn_mapping_level(kvm, gfn, pfn, slot);
>       return min(host_level, max_level);
>  }
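
To make the concern concrete: presumably the early return is there because
private pfns come from the private fd rather than the host page tables, so
host_pfn_mapping_level() does not apply to them. If so, a narrower guard that
skips the host_pfn_mapping_level() walk only for private accesses would keep
shared mappings in the same slot honouring transparent_hugepage=never. A
sketch only (kvm_mmu_max_mapping_level() does not take an is_private parameter
in this version, so the extra argument below is hypothetical):

	/* Illustrative sketch, not a function from this series. */
	static int max_mapping_level_sketch(struct kvm *kvm,
					    const struct kvm_memory_slot *slot,
					    gfn_t gfn, kvm_pfn_t pfn,
					    int max_level, bool is_private)
	{
		int host_level;

		if (max_level == PG_LEVEL_4K)
			return PG_LEVEL_4K;

		/*
		 * Only private pfns bypass host_pfn_mapping_level(); shared
		 * pfns in a KVM_MEM_PRIVATE slot still respect the host
		 * mapping level (and hence transparent_hugepage=never).
		 */
		if (is_private)
			return max_level;

		host_level = host_pfn_mapping_level(kvm, gfn, pfn, slot);
		return min(host_level, max_level);
	}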

Regards
Nikunj


