From: Sergey Fedorov
Subject: Re: [Qemu-devel] [RFC v3 15/19] tcg: drop global lock during TCG code execution
Date: Tue, 28 Jun 2016 19:54:18 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.8.0
On 03/06/16 23:40, Alex Bennée wrote:
> From: Jan Kiszka <address@hidden>
>
> (See http://thread.gmane.org/gmane.comp.emulators.qemu/402092/focus=403090)
>
> This finally allows TCG to benefit from the iothread introduction: Drop
> the global mutex while running pure TCG CPU code. Reacquire the lock
> when entering MMIO or PIO emulation, or when leaving the TCG loop.
>
> We have to revert a few optimizations for the current TCG threading
> model, namely kicking the TCG thread in qemu_mutex_lock_iothread and not
> kicking it in qemu_cpu_kick. We also need to disable RAM block
> reordering until we have a more efficient locking mechanism at hand.
>
> Still, a Linux x86 UP guest and my Musicpal ARM model boot fine here.
> These numbers demonstrate where we gain something:
>
> 20338 jan 20 0 331m 75m 6904 R 99 0.9 0:50.95 qemu-system-arm
> 20337 jan 20 0 331m 75m 6904 S 20 0.9 0:26.50 qemu-system-arm
>
> The guest CPU was fully loaded, but the iothread could still run mostly
> independently on a second core. Without the patch we don't get beyond
>
> 32206 jan 20 0 330m 73m 7036 R 82 0.9 1:06.00 qemu-system-arm
> 32204 jan 20 0 330m 73m 7036 S 21 0.9 0:17.03 qemu-system-arm
>
> We don't benefit significantly, though, when the guest is not fully
> loading a host CPU.
>
> Signed-off-by: Jan Kiszka <address@hidden>
> Message-Id: <address@hidden>
> [FK: Rebase, fix qemu_devices_reset deadlock, rm address_space_* mutex]
> Signed-off-by: KONRAD Frederic <address@hidden>
> [EGC: fixed iothread lock for cpu-exec IRQ handling]
> Signed-off-by: Emilio G. Cota <address@hidden>
> [AJB: -smp single-threaded fix, rm old info from commit msg]
> Signed-off-by: Alex Bennée <address@hidden>
>
(snip)
> diff --git a/cpu-exec.c b/cpu-exec.c
> index 1613c63..e1fb9ca 100644
> --- a/cpu-exec.c
> +++ b/cpu-exec.c
> @@ -29,6 +29,7 @@
> #include "qemu/rcu.h"
> #include "exec/tb-hash.h"
> #include "exec/log.h"
> +#include "qemu/main-loop.h"
> #if defined(TARGET_I386) && !defined(CONFIG_USER_ONLY)
> #include "hw/i386/apic.h"
> #endif
> @@ -460,6 +461,8 @@ static inline void cpu_handle_interrupt(CPUState *cpu,
> int interrupt_request = cpu->interrupt_request;
>
> if (unlikely(interrupt_request)) {
> + qemu_mutex_lock_iothread();
> +
cpu_handle_halt() for target-i386 also needs to protect
'cpu->interrupt_request' with the global lock.
> if (unlikely(cpu->singlestep_enabled & SSTEP_NOIRQ)) {
> /* Mask out external interrupts for this step. */
> interrupt_request &= ~CPU_INTERRUPT_SSTEP_MASK;
> @@ -514,6 +517,10 @@ static inline void cpu_handle_interrupt(CPUState *cpu,
> the program flow was changed */
> *last_tb = NULL;
> }
> +
> + /* If we exit via cpu_loop_exit/longjmp it is reset in cpu_exec */
> + qemu_mutex_unlock_iothread();
> +
> }
> if (unlikely(cpu->exit_request || replay_has_interrupt())) {
> cpu->exit_request = 0;
(snip)
> diff --git a/exec.c b/exec.c
> index e23039c..b7744b9 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -2149,9 +2149,9 @@ static void check_watchpoint(int offset, int len, MemTxAttrs attrs, int flags)
> }
> cpu->watchpoint_hit = wp;
>
> - /* The tb_lock will be reset when cpu_loop_exit or
> - * cpu_resume_from_signal longjmp back into the cpu_exec
> - * main loop.
> + /* Both tb_lock and iothread_mutex will be reset when
> + * cpu_loop_exit or cpu_resume_from_signal longjmp
> + * back into the cpu_exec main loop.
> */
> tb_lock();
> tb_check_watchpoint(cpu);
> @@ -2387,8 +2387,14 @@ static void io_mem_init(void)
> memory_region_init_io(&io_mem_rom, NULL, &unassigned_mem_ops, NULL,
> NULL, UINT64_MAX);
> memory_region_init_io(&io_mem_unassigned, NULL, &unassigned_mem_ops, NULL,
> NULL, UINT64_MAX);
> +
> + /* io_mem_notdirty calls tb_invalidate_phys_page_fast,
> + * which must be called without the iothread mutex.
"must" or "can"?
> + */
> memory_region_init_io(&io_mem_notdirty, NULL, &notdirty_mem_ops, NULL,
> NULL, UINT64_MAX);
> + memory_region_clear_global_locking(&io_mem_notdirty);
> +
> memory_region_init_io(&io_mem_watch, NULL, &watch_mem_ops, NULL,
> NULL, UINT64_MAX);
> }
(snip)
> diff --git a/target-i386/smm_helper.c b/target-i386/smm_helper.c
> index 4dd6a2c..6a5489b 100644
> --- a/target-i386/smm_helper.c
> +++ b/target-i386/smm_helper.c
> @@ -18,6 +18,7 @@
> */
>
> #include "qemu/osdep.h"
> +#include "qemu/main-loop.h"
> #include "cpu.h"
> #include "exec/helper-proto.h"
> #include "exec/log.h"
> @@ -42,11 +43,14 @@ void helper_rsm(CPUX86State *env)
> #define SMM_REVISION_ID 0x00020000
> #endif
>
> +/* Called we iothread lock taken */
s/we/with/
> void cpu_smm_update(X86CPU *cpu)
> {
> CPUX86State *env = &cpu->env;
> bool smm_enabled = (env->hflags & HF_SMM_MASK);
>
> + g_assert(qemu_mutex_iothread_locked());
> +
> if (cpu->smram) {
> memory_region_set_enabled(cpu->smram, smm_enabled);
> }
> @@ -333,7 +337,10 @@ void helper_rsm(CPUX86State *env)
> }
> env->hflags2 &= ~HF2_SMM_INSIDE_NMI_MASK;
> env->hflags &= ~HF_SMM_MASK;
> +
> + qemu_mutex_lock_iothread();
> cpu_smm_update(cpu);
> + qemu_mutex_unlock_iothread();
I'm wondering if there are some other similar places to take the global
lock.
>
> qemu_log_mask(CPU_LOG_INT, "SMM: after RSM\n");
> log_cpu_state_mask(CPU_LOG_INT, CPU(cpu), CPU_DUMP_CCOP);
(snip)
Kind regards
Sergey