[Qemu-devel] Re: [PATCH v3 05/10] KVM: x86: Restrict writeback of VCPU state

From: Jan Kiszka
Subject: [Qemu-devel] Re: [PATCH v3 05/10] KVM: x86: Restrict writeback of VCPU state
Date: Thu, 25 Feb 2010 00:51:43 +0100
User-agent: Mozilla/5.0 (X11; U; Linux i686 (x86_64); de; rv:1.8.1.12) Gecko/20080226 SUSE/2.0.0.12-1.1 Thunderbird/2.0.0.12 Mnenhy/0.7.5.666
Marcelo Tosatti wrote:
> On Wed, Feb 24, 2010 at 03:17:53PM +0100, Jan Kiszka wrote:
>> Do not write nmi_pending, sipi_vector, and mpstate unless we at least go
>> through a reset. And the TSC as well as the KVM wall clock should only be
>> written on full sync, otherwise we risk dropping some time during a
>> state read-modify-write.
>>
>> Signed-off-by: Jan Kiszka <address@hidden>
>> ---
>> kvm.h | 2 +-
>> qemu-kvm-x86.c | 2 +-
>> target-i386/kvm.c | 32 ++++++++++++++++++++------------
>> target-i386/machine.c | 2 +-
>> 4 files changed, 23 insertions(+), 15 deletions(-)
>>
>> diff --git a/kvm.h b/kvm.h
>> index 3ec5b59..3ee307d 100644
>> --- a/kvm.h
>> +++ b/kvm.h
>> @@ -44,7 +44,7 @@ int kvm_log_stop(target_phys_addr_t phys_addr, ram_addr_t size);
>> int kvm_has_sync_mmu(void);
>> int kvm_has_vcpu_events(void);
>> int kvm_has_robust_singlestep(void);
>> -int kvm_put_vcpu_events(CPUState *env);
>> +int kvm_put_vcpu_events(CPUState *env, int level);
>> int kvm_get_vcpu_events(CPUState *env);
>>
>> void kvm_cpu_register_phys_memory_client(void);
>> diff --git a/qemu-kvm-x86.c b/qemu-kvm-x86.c
>> index 4e6ae70..b0f9670 100644
>> --- a/qemu-kvm-x86.c
>> +++ b/qemu-kvm-x86.c
>> @@ -1391,7 +1391,7 @@ void kvm_arch_push_nmi(void *opaque)
>> void kvm_arch_cpu_reset(CPUState *env)
>> {
>> kvm_arch_reset_vcpu(env);
>> - kvm_put_vcpu_events(env);
>> + kvm_put_vcpu_events(env, KVM_PUT_RESET_STATE);
>> kvm_reset_mpstate(env);
>> if (!cpu_is_bsp(env) && !kvm_irqchip_in_kernel()) {
>> env->interrupt_request &= ~CPU_INTERRUPT_HARD;
>> diff --git a/target-i386/kvm.c b/target-i386/kvm.c
>> index 5f0829b..f1f44d3 100644
>> --- a/target-i386/kvm.c
>> +++ b/target-i386/kvm.c
>> @@ -541,7 +541,7 @@ static void kvm_msr_entry_set(struct kvm_msr_entry *entry,
>> entry->data = value;
>> }
>>
>> -static int kvm_put_msrs(CPUState *env)
>> +static int kvm_put_msrs(CPUState *env, int level)
>> {
>> struct {
>> struct kvm_msrs info;
>> @@ -555,7 +555,6 @@ static int kvm_put_msrs(CPUState *env)
>> kvm_msr_entry_set(&msrs[n++], MSR_IA32_SYSENTER_EIP, env->sysenter_eip);
>> if (kvm_has_msr_star(env))
>> kvm_msr_entry_set(&msrs[n++], MSR_STAR, env->star);
>> - kvm_msr_entry_set(&msrs[n++], MSR_IA32_TSC, env->tsc);
>> kvm_msr_entry_set(&msrs[n++], MSR_VM_HSAVE_PA, env->vm_hsave);
>> #ifdef TARGET_X86_64
>> /* FIXME if lm capable */
>> @@ -564,8 +563,12 @@ static int kvm_put_msrs(CPUState *env)
>> kvm_msr_entry_set(&msrs[n++], MSR_FMASK, env->fmask);
>> kvm_msr_entry_set(&msrs[n++], MSR_LSTAR, env->lstar);
>> #endif
>> - kvm_msr_entry_set(&msrs[n++], MSR_KVM_SYSTEM_TIME, env->system_time_msr);
>> - kvm_msr_entry_set(&msrs[n++], MSR_KVM_WALL_CLOCK, env->wall_clock_msr);
>> + if (level == KVM_PUT_FULL_STATE) {
>> + kvm_msr_entry_set(&msrs[n++], MSR_IA32_TSC, env->tsc);
>> + kvm_msr_entry_set(&msrs[n++], MSR_KVM_SYSTEM_TIME,
>> + env->system_time_msr);
>> + kvm_msr_entry_set(&msrs[n++], MSR_KVM_WALL_CLOCK, env->wall_clock_msr);
>> + }
>>
>> msr_data.info.nmsrs = n;
>>
>> @@ -783,7 +786,7 @@ static int kvm_get_mp_state(CPUState *env)
>> }
>> #endif
>>
>> -int kvm_put_vcpu_events(CPUState *env)
>> +int kvm_put_vcpu_events(CPUState *env, int level)
>> {
>> #ifdef KVM_CAP_VCPU_EVENTS
>> struct kvm_vcpu_events events;
>> @@ -807,8 +810,11 @@ int kvm_put_vcpu_events(CPUState *env)
>>
>> events.sipi_vector = env->sipi_vector;
>>
>> - events.flags =
>> - KVM_VCPUEVENT_VALID_NMI_PENDING | KVM_VCPUEVENT_VALID_SIPI_VECTOR;
>> + events.flags = 0;
>> + if (level >= KVM_PUT_RESET_STATE) {
>> + events.flags |=
>> + KVM_VCPUEVENT_VALID_NMI_PENDING | KVM_VCPUEVENT_VALID_SIPI_VECTOR;
>> + }
>>
>> return kvm_vcpu_ioctl(env, KVM_SET_VCPU_EVENTS, &events);
>
> What is the reason for writing back any vcpu-event state in the RUNTIME
> case again?
>
> The debug workaround?
Consistency and maximum flexibility.

I don't want to start fiddling with this again once we begin manipulating
some VCPU runtime state that does not yet require writeback (workarounds
like the guest debugging stuff can be a reason for that). Instead, we
should establish a clean concept now that knows only these three types
and their well-defined writeback points.
Jan
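The three-level writeback concept discussed above can be sketched as follows. The enum names mirror the constants used in the patch (KVM_PUT_RESET_STATE, KVM_PUT_FULL_STATE); the predicate helpers are purely illustrative models of which state each level may overwrite, not the actual qemu-kvm functions.

```c
#include <assert.h>

/* Illustrative sketch of the writeback levels this series establishes.
 * The enum values mirror the patch; the helpers below are hypothetical
 * and only model which guest state each level is allowed to overwrite. */
enum kvm_put_level {
    KVM_PUT_RUNTIME_STATE = 1,  /* regular sync while the guest runs */
    KVM_PUT_RESET_STATE   = 2,  /* additionally after a VCPU reset */
    KVM_PUT_FULL_STATE    = 3,  /* additionally on vmload/migration */
};

/* TSC and KVM clock MSRs: writing them back on every sync would drop
 * guest time, so the patch restricts them to full-state writeback. */
static int clock_msrs_written(enum kvm_put_level level)
{
    return level == KVM_PUT_FULL_STATE;
}

/* nmi_pending and sipi_vector: their VALID flags are only set from the
 * reset level upwards, so runtime syncs leave the in-kernel values
 * untouched. */
static int event_state_written(enum kvm_put_level level)
{
    return level >= KVM_PUT_RESET_STATE;
}
```

Because the levels are ordered, a `>=` comparison naturally expresses "this state is written at this level and every stronger one", while the clocks use strict equality so that even a reset does not clobber guest time.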