From: Alex Bennée
Subject: [Qemu-devel] Rationalising exit_request, cpu->exit_request and tcg_exit_req?
Date: Wed, 16 Dec 2015 17:14:46 +0000
User-agent: mu4e 0.9.15; emacs 24.5.50.4

Hi,

While looking at Fred's current MTTCG WIP branch I ran into a problem
where:

  - async_safe_work_pending() was true
  - this triggered setting cpu->exit_request
  - however we never left tcg_exec_all()
  - because the global exit_request wasn't set
  - hence qemu_tcg_wait_io_event() never drained the async work queue
    (a standalone model of this pattern is sketched below)
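To check I'm describing the hang correctly, here is a tiny standalone
model of the pattern (not QEMU code, all the names are made up): the
outer loop only tests the global flag, so a request that only sets the
per-CPU flag gets consumed inside the loop and we never get back out
to where the work queue would be drained.

#include <stdbool.h>
#include <stdio.h>

static bool global_exit_request;        /* stands in for exit_request */

typedef struct {
    bool exit_request;                  /* stands in for cpu->exit_request */
} ModelCPU;

/* stands in for the per-CPU exec: the per-CPU flag gets set (async
 * safe work pending) and is consumed again before we return */
static void model_cpu_exec(ModelCPU *cpu)
{
    cpu->exit_request = true;
    cpu->exit_request = false;
}

/* stands in for the scheduling loop: only the *global* flag breaks it */
static void model_tcg_exec_all(ModelCPU *cpu)
{
    int spins = 0;

    while (!global_exit_request) {
        model_cpu_exec(cpu);
        if (++spins > 3) {              /* cut the demo short */
            printf("still spinning: global exit_request never set\n");
            return;
        }
    }
    printf("left the loop, async work queue could be drained now\n");
}

int main(void)
{
    ModelCPU cpu = { false };
    model_tcg_exec_all(&cpu);
    return 0;
}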

While trying to understand why we have both a per-CPU and a global
exit_request, I then discovered there is also cpu->tcg_exit_req, which
is the variable the generated TCG code actually examines. This leads
to sequences like:

void cpu_exit(CPUState *cpu)
{
    cpu->exit_request = 1;
    /* Ensure cpu_exec will see the exit request after TCG has exited.  */
    smp_wmb();
    cpu->tcg_exit_req = 1;
}

which itself is amusingly called from:

static void qemu_cpu_kick_no_halt(void)
{
    CPUState *cpu;
    /* Ensure whatever caused the exit has reached the CPU threads before
     * writing exit_request.
     */
    atomic_mb_set(&exit_request, 1);
    cpu = atomic_mb_read(&tcg_current_cpu);
    if (cpu) {
        cpu_exit(cpu);
    }
}

This seems to me to be slightly insane, as we now have three variables
that struggle to be kept in sync. Could all this not be rationalised
into a single variable, or am I missing a subtlety in their different
semantics?

One problem I can think of once we get to the MTTCG world is a race
when signalling other CPUs to exit: we need to make sure that request
is not dropped as we clear an old exit_request.
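
To make that concrete, the obvious way to avoid the drop would be an
atomic read-and-clear on the consumer side. A minimal standalone
sketch using C11 atomics (not QEMU's atomic.h helpers, and the
function names are made up):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool exit_req;    /* stands in for the exit request flag */

/* any thread asking the CPU thread to leave its exec loop */
static void kick(void)
{
    atomic_store(&exit_req, true);
}

/* CPU thread side: atomic_exchange returns the old value and clears
 * the flag in one step, so a kick that lands before the exchange is
 * reported now and one that lands after it survives for the next
 * pass; nothing is wiped by a separate "clear the old request" store.
 */
static bool drain_exit_request(void)
{
    return atomic_exchange(&exit_req, false);
}

int main(void)
{
    kick();
    printf("first drain:  %d\n", drain_exit_request());  /* 1: seen */
    printf("second drain: %d\n", drain_exit_request());  /* 0: idle  */
    return 0;
}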

The other complication is the main cpu_exec loop, which works hard to
avoid leaving the loop when processing interrupts (which need an
exit_request to trigger). This means there are potentially multiple
places where exit_requests are drained.
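
For reference, if I'm reading cpu_exec() right, the per-CPU flag is
one of the things consumed in the interrupt handling path, roughly
like this (quoting from memory, so the exact shape may be off):

if (unlikely(cpu->exit_request)) {
    cpu->exit_request = 0;
    cpu->exception_index = EXCP_INTERRUPT;
    cpu_loop_exit(cpu);
}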

I don't know if there is clean-up that can happen in master or if this
all needs to be done in the MTTCG work, but would it make sense just to
keep cpu->exit_request, make it visible to the TCG code, and make all
exits fall out to qemu_tcg_cpu_thread_fn, which would be the only place
to clear the flag?
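
Very roughly, and hand-waving how the flag would reach the generated
code, I imagine the body of the thread function boiling down to
something like this (a sketch only; the names are from my reading of
current cpus.c/atomic.h and may well be off, and the atomic
read-and-clear is the same idea as in the sketch above):

/* inside qemu_tcg_cpu_thread_fn, sketch only */
CPUState *cpu;

while (1) {
    tcg_exec_all();   /* generated code exits when some cpu->exit_request fires */

    CPU_FOREACH(cpu) {
        /* atomic read-and-clear so a racing kick isn't lost */
        if (atomic_xchg(&cpu->exit_request, 0)) {
            /* service interrupts, stop requests, pending safe work */
        }
    }

    /* then qemu_tcg_wait_io_event(): drain remaining work, park if idle */
}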

I did have a brief look at the KVM side of the code and it only seems
to reference cpu->exit_request, so I think the rest of this is a TCG
problem.

Thoughts?

--
Alex Bennée


