
Re: [Qemu-devel] [RFC PATCH V6 15/18] cpu: introduce tlb_flush*_all.


From: Frederic Konrad
Subject: Re: [Qemu-devel] [RFC PATCH V6 15/18] cpu: introduce tlb_flush*_all.
Date: Fri, 26 Jun 2015 17:54:56 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.5.0

On 26/06/2015 17:15, Paolo Bonzini wrote:

On 26/06/2015 16:47, address@hidden wrote:
+    CPU_FOREACH(cpu) {
+        if (qemu_cpu_is_self(cpu)) {
+            /* async_run_on_cpu handles this case too, but calling
+             * tlb_flush directly avoids a malloc here.
+             */
+            tlb_flush(cpu, flush_global);
+        } else {
+            params = g_malloc(sizeof(struct TLBFlushParams));
+            params->cpu = cpu;
+            params->flush_global = flush_global;
+            async_run_on_cpu(cpu, tlb_flush_async_work, params);
Shouldn't this be synchronous (which you cannot do straightforwardly
because of deadlocks---hence the need to hook cpu_has_work as discussed
earlier)?

Paolo

I think it doesn't need to be synchronous, as each vCPU only clears its own
TLB here:

void tlb_flush(CPUState *cpu, int flush_global)
{
    CPUArchState *env = cpu->env_ptr;

#if defined(DEBUG_TLB)
    printf("tlb_flush:\n");
#endif
    /* must reset current TB so that interrupts cannot modify the
       links while we are modifying them */
    cpu->current_tb = NULL;

    memset(env->tlb_table, -1, sizeof(env->tlb_table));
    memset(env->tlb_v_table, -1, sizeof(env->tlb_v_table));
    memset(cpu->tb_jmp_cache, 0, sizeof(cpu->tb_jmp_cache));

    env->vtlb_index = 0;
    env->tlb_flush_addr = -1;
    env->tlb_flush_mask = 0;
    tlb_flush_count++;
}
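
The work item scheduled above then just unpacks the parameters and calls
tlb_flush on the target vCPU from that vCPU's own thread. A minimal sketch
(TLBFlushParams and tlb_flush_async_work are named in the hunk but their
definitions are not shown there, so their exact shape is an assumption):

struct TLBFlushParams {
    CPUState *cpu;
    int flush_global;
};

static void tlb_flush_async_work(void *opaque)
{
    struct TLBFlushParams *params = opaque;

    /* Assumed body: this runs in the target vCPU's own thread, so it
     * only clears that vCPU's TLB, matching tlb_flush above. */
    tlb_flush(params->cpu, params->flush_global);
    g_free(params);
}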

So what happens is:
An ARM instruction wants to clear the TLBs of all vCPUs, e.g. the IS (inner
shareable) version of TLBIALL.
The vCPU which executes TLBIALL_IS can't flush the TLB of another vCPU.
It just asks every vCPU thread to exit and do tlb_flush itself, hence the
async work.
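
To make that concrete: with the new helper, the target-side code for such
an instruction could stay a one-liner. A hypothetical sketch, assuming the
helper introduced by this patch is spelled tlb_flush_all (as the subject
suggests) and using the usual ARMCPRegInfo write-hook shape; this is not
quoted from the patch:

static void tlbiall_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
                             uint64_t value)
{
    /* Broadcast flush: every vCPU is asked, asynchronously, to run
     * tlb_flush on itself; no thread touches another vCPU's TLB. */
    tlb_flush_all(1); /* 1 == flush_global, as in the hunk above */
}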

The big issue might be memory barrier instructions here, which I haven't
checked yet.

Fred
+        }
+    }



