From: Emilio G. Cota
Subject: Re: [Qemu-devel] [RFC v1 03/11] tcg: comment on which functions have to be called with tb_lock held
Date: Mon, 21 Mar 2016 17:50:44 -0400
User-agent: Mutt/1.5.21 (2010-09-15)

On Fri, Mar 18, 2016 at 17:59:46 +0100, Paolo Bonzini wrote:
> On 18/03/2016 17:18, Alex Bennée wrote:
> > +
> > +    /* Protected by tb_lock.  */
> 
> Only writes are protected by tb_lock.  Reads happen outside the lock.
> 
> Reads are not quite thread safe yet, because of tb_flush.  In order to
> fix that, there's either the async_safe_run() mechanism from Fred or,
> preferably, the code generation buffer could be moved under RCU.

A third approach (which I prefer) is to protect tb_jmp_cache with
a seqlock. That way invalidations (via tlb_flush from other CPUs, or
via tb_flush) are picked up even when they race with concurrent reads.
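
Roughly, the reader retries its lookup whenever an invalidation raced with
it. Below is a minimal sketch using C11 atomics and made-up names, not
QEMU's actual seqlock helpers or tb_jmp_cache layout; a real version would
use the barriers from include/qemu/seqlock.h:

#include <stdatomic.h>
#include <stddef.h>

#define JC_SIZE 4096

typedef struct TranslationBlock TranslationBlock;

static _Atomic unsigned jc_seq;               /* even: stable, odd: write in progress */
static TranslationBlock * _Atomic jc_cache[JC_SIZE];

/* Writer side, e.g. from tb_flush or a cross-CPU tlb_flush. */
static void jc_invalidate_all(void)
{
    atomic_fetch_add(&jc_seq, 1);             /* seq goes odd */
    for (size_t i = 0; i < JC_SIZE; i++) {
        atomic_store(&jc_cache[i], NULL);
    }
    atomic_fetch_add(&jc_seq, 1);             /* seq even again */
}

/* Reader side: retry the lookup if an invalidation raced with it. */
static TranslationBlock *jc_lookup(size_t hash)
{
    TranslationBlock *tb;
    unsigned start;

    for (;;) {
        start = atomic_load(&jc_seq);
        if (start & 1) {
            continue;                         /* writer active; poll again */
        }
        tb = atomic_load(&jc_cache[hash & (JC_SIZE - 1)]);
        if (atomic_load(&jc_seq) == start) {
            return tb;                        /* no invalidation in between */
        }
    }
}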

> Because tb_flush is really rare, my suggestion is simply to allocate two
> code generation buffers and do something like
> 
> static int which_buffer_is_in_use_bit_mask = 1;
> ...
> 
>    /* in tb_flush */
>    assert (which_buffer_is_in_use_bit_mask != 3);
>    if (which_buffer_is_in_use_bit_mask == 1) {
>        which_buffer_is_in_use_bit_mask |= 2;
>        call_rcu(function doing which_buffer_is_in_use_bit_mask &= ~1);
>        point TCG to second buffer
>     } else if (which_buffer_is_in_use_bit_mask == 2) {
>        which_buffer_is_in_use_bit_mask |= 1;
>        call_rcu(function doing which_buffer_is_in_use_bit_mask &= ~2);
>        point TCG to first buffer
>     }
> 
> Basically, we just assert that call_rcu makes at least one pass between
> two tb_flushes.
> 
> All this is also a prerequisite for patch 1.
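
Spelled out in C, the flip you sketch would look roughly like the following.
The RCU registration and the buffer-switch helper are hypothetical
stand-ins, not QEMU's actual call_rcu() API, and everything here is assumed
to run under tb_lock:

static void *code_gen_buffers[2];             /* two pre-allocated buffers */
static int which_buffer_is_in_use_bit_mask = 1;

extern void tcg_switch_code_buffer(void *buf);                     /* hypothetical */
extern void defer_until_rcu_quiescent(void (*fn)(void *), void *); /* hypothetical */

static void release_buffer_0(void *opaque)
{
    (void)opaque;
    which_buffer_is_in_use_bit_mask &= ~1;
}

static void release_buffer_1(void *opaque)
{
    (void)opaque;
    which_buffer_is_in_use_bit_mask &= ~2;
}

static void tb_flush_flip(void)
{
    /* Both bits set would mean two tb_flushes raced within one RCU
     * grace period, which the scheme forbids. */
    assert(which_buffer_is_in_use_bit_mask != 3);

    if (which_buffer_is_in_use_bit_mask == 1) {
        which_buffer_is_in_use_bit_mask |= 2;
        defer_until_rcu_quiescent(release_buffer_0, NULL);
        tcg_switch_code_buffer(code_gen_buffers[1]);
    } else {
        which_buffer_is_in_use_bit_mask |= 1;
        defer_until_rcu_quiescent(release_buffer_1, NULL);
        tcg_switch_code_buffer(code_gen_buffers[0]);
    }
}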

The problem with this approach is that "point TCG to second buffer"
is not just a matter of pointing code_gen_buffer at a new address;
we'd have to set up a new tcg_ctx struct, since tcg_ctx has quite a few
fields that depend on code_gen_buffer (e.g. s->code_ptr, s->code_buf).
That could leave readers seeing a partially updated (i.e. corrupt)
tcg_ctx.

I know you're not enthusiastic about it, but I think a mechanism to "stop
all CPUs and wait until they have indeed stopped" is in this case justified.
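
As a toy illustration of such a rendezvous (plain pthreads, all names
hypothetical; the real thing would hook into QEMU's vCPU loop at a safe
point rather than use a dedicated check function):

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t stop_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  stop_cond = PTHREAD_COND_INITIALIZER;
static bool stop_requested;
static int  cpus_running;   /* incremented by each vCPU thread when it starts */

/* Called by each vCPU thread at a safe point in its execution loop. */
static void cpu_exec_check_stop(void)
{
    pthread_mutex_lock(&stop_lock);
    if (stop_requested) {
        cpus_running--;
        pthread_cond_broadcast(&stop_cond);   /* wake the stopping thread */
        while (stop_requested) {
            pthread_cond_wait(&stop_cond, &stop_lock);
        }
        cpus_running++;
    }
    pthread_mutex_unlock(&stop_lock);
}

/* Called by the thread that needs exclusive access, e.g. for tb_flush. */
static void stop_all_cpus_and_wait(void)
{
    pthread_mutex_lock(&stop_lock);
    stop_requested = true;
    while (cpus_running > 0) {
        pthread_cond_wait(&stop_cond, &stop_lock);
    }
    pthread_mutex_unlock(&stop_lock);
}

static void resume_all_cpus(void)
{
    pthread_mutex_lock(&stop_lock);
    stop_requested = false;
    pthread_cond_broadcast(&stop_cond);
    pthread_mutex_unlock(&stop_lock);
}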

I'm preparing an RFC with these two changes (seqlock and stop-all-CPUs
mechanism) on top of these base patches.

Thanks,

                Emilio
