Re: [Qemu-devel] [PATCH 2/3] cputlb: serialize tlb updates with env->tlb_lock


From: Emilio G. Cota
Subject: Re: [Qemu-devel] [PATCH 2/3] cputlb: serialize tlb updates with env->tlb_lock
Date: Wed, 3 Oct 2018 11:48:44 -0400
User-agent: Mutt/1.9.4 (2018-02-28)

On Wed, Oct 03, 2018 at 12:02:19 +0200, Paolo Bonzini wrote:
> On 03/10/2018 11:19, Alex Bennée wrote:
> >> Fix it by using tlb_lock, a per-vCPU lock. All updaters of tlb_table
> >> and the corresponding victim cache now hold the lock.
> >> The readers that do not hold tlb_lock must use atomic reads when
> >> reading .addr_write, since this field can be updated by other threads;
> >> the conversion to atomic reads is done in the next patch.
> > What about the inline TLB lookup code? The original purpose of the
> > cmpxchg was to ensure the inline code would either see a valid entry or
> > an invalid one, not a potentially torn read.
> > 
> 
> atomic_set also ensures that there are no torn reads.

Yes. On the reader side for inline TLB reads, we're emitting
appropriately sized loads that are guaranteed to be atomic
by the ISA. For oversized guests (e.g. a 64-bit guest on a 32-bit
host), a single tear-free load isn't possible, so we disable MTTCG.
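
For the archives, here is a minimal standalone sketch of that scheme in
portable C11 (illustrative names, not QEMU's actual cputlb code):
updaters publish .addr_write while holding the per-vCPU lock, and the
lockless fast path reads it back with a single naturally aligned atomic
load, so it observes either the old or the new value, never a torn one.

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-in for a TLB entry; only .addr_write is ever
 * read without holding the lock. */
typedef struct {
    uintptr_t addr_read;
    _Atomic uintptr_t addr_write;
    uintptr_t addr_code;
    uintptr_t addend;
} tlb_entry;

static pthread_mutex_t tlb_lock = PTHREAD_MUTEX_INITIALIZER;
static tlb_entry entry;

/* Updater path: always under the lock; .addr_write is published with
 * an atomic store so the lockless reader never sees a partial value. */
static void tlb_set_entry(uintptr_t read, uintptr_t write,
                          uintptr_t code, uintptr_t addend)
{
    pthread_mutex_lock(&tlb_lock);
    entry.addr_read = read;
    entry.addr_code = code;
    entry.addend = addend;
    atomic_store_explicit(&entry.addr_write, write, memory_order_release);
    pthread_mutex_unlock(&tlb_lock);
}

/* Lockless reader, modelling the inline fast path: one aligned,
 * ISA-sized atomic load cannot be torn. */
static bool tlb_hit_write(uintptr_t page)
{
    return atomic_load_explicit(&entry.addr_write,
                                memory_order_relaxed) == page;
}

The oversized-guest case is exactly where .addr_write is wider than the
host's native load width, so no such single tear-free load exists.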

>  However, here:
> 
> static void copy_tlb_helper_locked(CPUTLBEntry *d, const CPUTLBEntry *s)
> {
> #if TCG_OVERSIZED_GUEST
>     *d = *s;
> #else
>     if (atomic_set) {
>         d->addr_read = s->addr_read;
>         d->addr_code = s->addr_code;
>         atomic_set(&d->addend, atomic_read(&s->addend));
>         /* Pairs with flag setting in tlb_reset_dirty_range */
>         atomic_mb_set(&d->addr_write, atomic_read(&s->addr_write));
>     } else {
>         d->addr_read = s->addr_read;
>         d->addr_write = atomic_read(&s->addr_write);
>         d->addr_code = s->addr_code;
>         d->addend = atomic_read(&s->addend);
>     }
> #endif
> }
> 
> it's probably best to do all atomic_set instead of just the memberwise copy.

Atomics aren't necessary here, as long as the copy is protected by the
lock. This allows other vCPUs to see a consistent view of the data (since
they always acquire the TLB lock), and since copy_tlb is only called
by the vCPU that owns the TLB, regular reads from this vCPU will always
see consistent data.
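
To make that concrete, a minimal standalone sketch of the invariant
(illustrative names, not the actual patch): assuming every other thread
takes the lock before touching an entry, and the only lockless reads
come from the same thread that performs the copy, a plain struct
assignment under the lock is already enough.

#include <pthread.h>
#include <stdint.h>

typedef struct {
    uintptr_t addr_read;
    uintptr_t addr_write;
    uintptr_t addr_code;
    uintptr_t addend;
} tlb_entry_t;

static pthread_mutex_t tlb_lock = PTHREAD_MUTEX_INITIALIZER;

/* Called only by the owning vCPU thread; every other thread takes
 * tlb_lock before reading or writing either entry, so none of them can
 * observe a half-copied entry, and the owner's own plain reads cannot
 * race with its own copy. */
static void copy_entry(tlb_entry_t *d, const tlb_entry_t *s)
{
    pthread_mutex_lock(&tlb_lock);
    *d = *s;                        /* no per-field atomics needed */
    pthread_mutex_unlock(&tlb_lock);
}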

Thanks,

                Emilio


