
Re: [Qemu-devel] [PATCH v2 1/8] kvm: Set cpu_single_env only once


From: Blue Swirl
Subject: Re: [Qemu-devel] [PATCH v2 1/8] kvm: Set cpu_single_env only once
Date: Sat, 11 Feb 2012 11:25:30 +0000

On Sat, Feb 11, 2012 at 10:06, Jan Kiszka <address@hidden> wrote:
> On 2012-02-11 11:02, Blue Swirl wrote:
>> On Fri, Feb 10, 2012 at 18:31, Jan Kiszka <address@hidden> wrote:
>>> As we have thread-local cpu_single_env now and KVM uses exactly one
>>> thread per VCPU, we can drop the cpu_single_env updates from the loop
>>> and initialize this variable only once during setup.
>>
>> I don't think this is correct. Maybe you missed the part that sets
>> cpu_single_env to NULL, which I think is to annoy broken code that
>> assumes that some CPU state is always globally available. This is not
>> true for monitor context.
>
> I did check this before changing it, and I see no such need, particularly
> as this old debugging aid prevents a valid use case.

It looks like the monitor code is safe now. But in several places there
are checks like this one in pc.c:
DeviceState *cpu_get_current_apic(void)
{
    if (cpu_single_env) {
        return cpu_single_env->apic_state;
    } else {
        return NULL;
    }
}

In cpu-exec.c, there are these lines:
    /* fail safe : never use cpu_single_env outside cpu_exec() */
    cpu_single_env = NULL;

I think using cpu_single_env is an indication of a problem, such as poorly
structured code, a layering violation, or a poor API (vmport). What is your
use case?

>
> Jan
>
>>
>>> Signed-off-by: Jan Kiszka <address@hidden>
>>> ---
>>>  cpus.c    |    1 +
>>>  kvm-all.c |    5 -----
>>>  2 files changed, 1 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/cpus.c b/cpus.c
>>> index f45a438..d0c8340 100644
>>> --- a/cpus.c
>>> +++ b/cpus.c
>>> @@ -714,6 +714,7 @@ static void *qemu_kvm_cpu_thread_fn(void *arg)
>>>     qemu_mutex_lock(&qemu_global_mutex);
>>>     qemu_thread_get_self(env->thread);
>>>     env->thread_id = qemu_get_thread_id();
>>> +    cpu_single_env = env;
>>>
>>>     r = kvm_init_vcpu(env);
>>>     if (r < 0) {
>>> diff --git a/kvm-all.c b/kvm-all.c
>>> index c4babda..e2cbc03 100644
>>> --- a/kvm-all.c
>>> +++ b/kvm-all.c
>>> @@ -1118,8 +1118,6 @@ int kvm_cpu_exec(CPUState *env)
>>>         return EXCP_HLT;
>>>     }
>>>
>>> -    cpu_single_env = env;
>>> -
>>>     do {
>>>         if (env->kvm_vcpu_dirty) {
>>>             kvm_arch_put_registers(env, KVM_PUT_RUNTIME_STATE);
>>> @@ -1136,13 +1134,11 @@ int kvm_cpu_exec(CPUState *env)
>>>              */
>>>             qemu_cpu_kick_self();
>>>         }
>>> -        cpu_single_env = NULL;
>>>         qemu_mutex_unlock_iothread();
>>>
>>>         run_ret = kvm_vcpu_ioctl(env, KVM_RUN, 0);
>>>
>>>         qemu_mutex_lock_iothread();
>>> -        cpu_single_env = env;
>>>         kvm_arch_post_run(env, run);
>>>
>>>         kvm_flush_coalesced_mmio_buffer();
>>> @@ -1206,7 +1202,6 @@ int kvm_cpu_exec(CPUState *env)
>>>     }
>>>
>>>     env->exit_request = 0;
>>> -    cpu_single_env = NULL;
>>>     return ret;
>>>  }
>>>
>>> --
>>> 1.7.3.4
>>>
>>>
>>
>>
>
>


