From: Paolo Bonzini
Subject: Re: [Qemu-devel] [RFC PATCH v1 19/22] exec: set debug attribute in SEV-enabled guest
Date: Wed, 14 Sep 2016 01:06:13 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.2.0


On 13/09/2016 16:49, Brijesh Singh wrote:
> When the debug versions of the physical memory read APIs are called on an
> SEV guest, set the MemTxAttrs.sev_debug attribute to indicate that the
> memory read/write is requested for debug purposes.
> 
> On an SEV guest, the memory region read/write callback will check this
> attribute and, if it is set, use the SEV DEBUG_DECRYPT/DEBUG_ENCRYPT
> commands to read from or write to guest memory.

You should always set it.

Paolo
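
For reference, a minimal sketch of the unconditional form. The
MemTxAttrs.sev_debug bit and the MEMTXATTRS_SEV_DEBUG macro are not shown
in this hunk, so the definitions below are assumptions modeled on
MEMTXATTRS_UNSPECIFIED in include/exec/memattrs.h:

    /* Assumed definition, following the MEMTXATTRS_UNSPECIFIED pattern;
     * the actual patch in this series may differ. */
    #define MEMTXATTRS_SEV_DEBUG ((MemTxAttrs) { .sev_debug = 1 })

    void cpu_physical_memory_rw_debug(hwaddr addr, uint8_t *buf,
                                      int len, int is_write)
    {
        /* Set the debug attribute unconditionally; a non-SEV backend
         * can simply ignore the extra bit. */
        MemTxAttrs attrs = MEMTXATTRS_SEV_DEBUG;

        address_space_rw(&address_space_memory, addr, attrs, buf, len,
                         is_write);
    }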

> Signed-off-by: Brijesh Singh <address@hidden>
> ---
>  exec.c |   11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
> 
> diff --git a/exec.c b/exec.c
> index 604bd05..b1df25d 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -3773,7 +3773,11 @@ void cpu_physical_memory_rw_debug(hwaddr addr, uint8_t *buf,
>  {
>      MemTxAttrs attrs;
>  
> -    attrs = MEMTXATTRS_UNSPECIFIED;
> +    if (kvm_sev_enabled()) {
> +        attrs = MEMTXATTRS_SEV_DEBUG;
> +    } else {
> +        attrs = MEMTXATTRS_UNSPECIFIED;
> +    }
>  
>      address_space_rw(&address_space_memory, addr, attrs, buf, len, is_write);
>  }
> @@ -3793,6 +3797,11 @@ int cpu_memory_rw_debug(CPUState *cpu, target_ulong addr,
>          page = addr & TARGET_PAGE_MASK;
>          phys_addr = cpu_get_phys_page_attrs_debug(cpu, page, &attrs);
>          asidx = cpu_asidx_from_attrs(cpu, attrs);
> +
> +        if (kvm_sev_enabled()) {
> +            attrs = MEMTXATTRS_SEV_DEBUG;
> +        }
> +
>          /* if no physical page mapped, return an error */
>          if (phys_addr == -1)
>              return -1;
> 
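
The commit message above says the memory region read/write callbacks
consult this attribute and use the SEV DEBUG_DECRYPT/DEBUG_ENCRYPT
commands. Here is a rough sketch of the read side, wired up through a
MemoryRegionOps .read_with_attrs hook; sev_debug_decrypt() is a
hypothetical helper wrapping the firmware command, and the real callback
in this series may look different:

    /* Sketch only: assumes the region was registered with itself as the
     * opaque pointer, and that sev_debug_decrypt(dst, src, len) wraps
     * the SEV DEBUG_DECRYPT firmware command (both are assumptions). */
    static MemTxResult sev_debug_mem_read(void *opaque, hwaddr addr,
                                          uint64_t *data, unsigned size,
                                          MemTxAttrs attrs)
    {
        MemoryRegion *mr = opaque;
        uint8_t *host = (uint8_t *)memory_region_get_ram_ptr(mr) + addr;
        uint8_t bounce[8];

        if (attrs.sev_debug) {
            /* Guest RAM holds ciphertext; ask the SEV firmware for a
             * plaintext copy the debugger can consume. */
            if (sev_debug_decrypt(bounce, host, size) < 0) {
                return MEMTX_ERROR;
            }
            host = bounce;
        }

        switch (size) {
        case 1: *data = ldub_p(host);    break;
        case 2: *data = lduw_le_p(host); break;
        case 4: *data = ldl_le_p(host);  break;
        case 8: *data = ldq_le_p(host);  break;
        default: return MEMTX_ERROR;
        }
        return MEMTX_OK;
    }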