From: Alex Bennée
Subject: Re: [Qemu-devel] [PATCH v2] mttcg/i386: Patch instruction using async_safe_* framework
Date: Fri, 24 Feb 2017 14:36:14 +0000
User-agent: mu4e 0.9.19; emacs 25.2.5

Pranith Kumar <address@hidden> writes:

> Hi Alex,
>
> Alex Bennée writes:
>
>> Pranith Kumar <address@hidden> writes:
>>
>>> In mttcg, calling pause_all_vcpus() during execution from the
>>> generated TBs causes a deadlock if some vCPU is waiting for exclusive
>>> execution in start_exclusive(). Fix this by using the async_safe_*
>>> framework instead of pausing vcpus for patching instructions.
>>>
>
> <...>
>
>>> diff --git a/hw/i386/kvmvapic.c b/hw/i386/kvmvapic.c
>>> index 82a4955..11b0d49 100644
>>> --- a/hw/i386/kvmvapic.c
>>>
>>> -    resume_all_vcpus();
>>> +    g_free(info);
>>> +}
>>> +
>>> +static void patch_instruction(VAPICROMState *s, X86CPU *cpu, target_ulong ip)
>>> +{
>>> +    CPUState *cs = CPU(cpu);
>>> +    VAPICHandlers *handlers;
>>> +    struct PatchInfo *info;
>>> +
>>> +    if (smp_cpus == 1) {
>>> +        handlers = &s->rom_state.up;
>>> +    } else {
>>> +        handlers = &s->rom_state.mp;
>>> +    }
>>> +
>>> +    info  = g_new(struct PatchInfo, 1);
>>> +    info->handler = handlers;
>>> +    info->ip = ip;
>>>
>>>      if (!kvm_enabled()) {
>>> -        /* Both tb_lock and iothread_mutex will be reset when
>>> -         *  longjmps back into the cpu_exec loop. */
>>> -        tb_lock();
>>> -        tb_gen_code(cs, current_pc, current_cs_base, current_flags, 1);
>>> -        cpu_loop_exit_noexc(cs);
>>> +        const run_on_cpu_func fn = do_patch_instruction;
>>> +
>>> +        async_safe_run_on_cpu(cs, fn, RUN_ON_CPU_HOST_PTR(info));
>>> +        cs->exception_index = EXCP_INTERRUPT;
>>> +        cpu_loop_exit(cs);
>>>      }
>>> +
>>> +    pause_all_vcpus();
>>> +
>>> +    do_patch_instruction(cs, RUN_ON_CPU_HOST_PTR(info));
>>> +
>>> +    resume_all_vcpus();
>>> +    g_free(info);
>>
>> I don't know if there is any benefit in scheduling this as async work for
>> KVM but I'll leave that up to Paolo to decide. From a TCG point of view
>> I think it's good:
>>
>
> We are scheduling this as async work only for the non-KVM case. For KVM, we
> go down the pause/resume path above and patch the instruction there directly.

No, I mean would it be more efficient to do that for KVM as safe work too?
To be honest the pause_all_vcpus() code seems a little hokey, given
cpu_stop_current() somehow stops itself while cpu_exit'ing the rest of
the vcpus. But I guess that would involve an additional KVM transition
for the calling thread; I'm not sure, hence the deference to the KVM
experts ;-)
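
For anyone following along, this is roughly the pattern async_safe_run_on_cpu()
gives you (a minimal sketch; MyWork/do_work/queue_work are made-up names here,
only the run_on_cpu APIs are the real ones the patch uses):

  #include "qemu/osdep.h"
  #include "qom/cpu.h"

  typedef struct MyWork {
      int payload;              /* whatever the job needs */
  } MyWork;

  static void do_work(CPUState *cs, run_on_cpu_data data)
  {
      MyWork *w = data.host_ptr;
      /* Runs once every other vCPU has drained out of the execution
       * loop, so guest code/TBs can be modified without racing them. */
      g_free(w);
  }

  static void queue_work(CPUState *cs)
  {
      MyWork *w = g_new(MyWork, 1);
      w->payload = 42;
      async_safe_run_on_cpu(cs, do_work, RUN_ON_CPU_HOST_PTR(w));
      /* The requesting vCPU then has to leave the TB it is executing,
       * hence the cpu_loop_exit() in the patch above. */
  }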

>
> Thanks,


--
Alex Bennée



reply via email to

[Prev in Thread] Current Thread [Next in Thread]