
Re: [Qemu-devel] [PATCH] atomics: add volatile_read/volatile_set


From: Sergey Fedorov
Subject: Re: [Qemu-devel] [PATCH] atomics: add volatile_read/volatile_set
Date: Mon, 18 Jul 2016 19:57:04 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.8.0

On 18/07/16 19:53, Paolo Bonzini wrote:
>
> On 18/07/2016 18:52, Sergey Fedorov wrote:
>> So how are we going to use them?
> Instead of atomic_read/atomic_set when marking invalid TBs.

But shouldn't they be atomic to avoid reading torn writes?

Thanks,
Sergey

>
> diff --git a/cpu-exec.c b/cpu-exec.c
> index fd43de8..1275f3d 100644
> --- a/cpu-exec.c
> +++ b/cpu-exec.c
> @@ -292,10 +292,10 @@ static inline TranslationBlock *tb_find(CPUState *cpu,
>         always be the same before a given translated block
>         is executed. */
>      cpu_get_tb_cpu_state(env, &pc, &cs_base, &flags);
> -    tb = atomic_read(&cpu->tb_jmp_cache[tb_jmp_cache_hash_func(pc)]);
> -    if (unlikely(!tb || atomic_read(&tb->pc) != pc ||
> -                 atomic_read(&tb->cs_base) != cs_base ||
> -                 atomic_read(&tb->flags) != flags)) {
> +    tb = atomic_rcu_read(&cpu->tb_jmp_cache[tb_jmp_cache_hash_func(pc)]);
> +    if (unlikely(!tb || volatile_read(&tb->pc) != pc ||
> +                 volatile_read(&tb->cs_base) != cs_base ||
> +                 volatile_read(&tb->flags) != flags)) {
>          tb = tb_htable_lookup(cpu, pc, cs_base, flags);
>          if (!tb) {
>  
> diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
> index 8f0afcd..35e963b 100644
> --- a/include/exec/exec-all.h
> +++ b/include/exec/exec-all.h
> @@ -262,9 +262,9 @@ static inline void tb_mark_invalid(TranslationBlock *tb)
>      uint32_t flags = 0;
>  
>      cpu_get_invalid_tb_cpu_state(&pc, &cs_base, &flags);
> -    atomic_set(&tb->pc, pc);
> -    atomic_set(&tb->cs_base, cs_base);
> -    atomic_set(&tb->flags, flags);
> +    volatile_set(&tb->pc, pc);
> +    volatile_set(&tb->cs_base, cs_base);
> +    volatile_set(&tb->flags, flags);
>  }
>  
>  static inline bool tb_is_invalid(TranslationBlock *tb)
>
>
> Thanks,
>
> Paolo
>
>> Thanks,
>> Sergey
>>
>> On 18/07/16 17:17, Paolo Bonzini wrote:
>>> Signed-off-by: Paolo Bonzini <address@hidden>
>>> ---
>>>  docs/atomics.txt      | 19 ++++++++++++++++---
>>>  include/qemu/atomic.h | 17 +++++++++++++++++
>>>  2 files changed, 33 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/docs/atomics.txt b/docs/atomics.txt
>>> index c95950b..1f21d2e 100644
>>> --- a/docs/atomics.txt
>>> +++ b/docs/atomics.txt
>>> @@ -123,6 +123,14 @@ to do so, because it tells readers which variables are shared with
>>>  other threads, and which are local to the current thread or protected
>>>  by other, more mundane means.
>>>  
>>> +atomic_read() and atomic_set() only support accesses as large as a
>>> +pointer.  If you need to access variables larger than a pointer you
>>> +can use volatile_read() and volatile_set(), but be careful: these always
>>> +use volatile accesses, and 64-bit volatile accesses are not atomic on
>>> +several 32-bit processors such as ARMv7.  In other words, volatile_read
>>> +and volatile_set only provide "safe register" semantics when applied to
>>> +64-bit variables.
>>> +
>>>  Memory barriers control the order of references to shared memory.
>>>  They come in four kinds:
>>>  
>>> @@ -335,11 +343,16 @@ and memory barriers, and the equivalents in QEMU:
>>>    Both semantics prevent the compiler from doing certain transformations;
>>>    the difference is that atomic accesses are guaranteed to be atomic,
>>>    while volatile accesses aren't. Thus, in the volatile case we just cross
>>> -  our fingers hoping that the compiler will generate atomic accesses,
>>> -  since we assume the variables passed are machine-word sized and
>>> -  properly aligned.
>>> +  our fingers hoping that the compiler and processor will provide atomic
>>> +  accesses, since we assume the variables passed are machine-word sized
>>> +  and properly aligned.
>>> +
>>>    No barriers are implied by atomic_read/set in either Linux or QEMU.
>>>  
>>> +- volatile_read and volatile_set are equivalent to ACCESS_ONCE in Linux.
>>> +  No barriers are implied by volatile_read/set in QEMU, nor by
>>> +  ACCESS_ONCE in Linux.
>>> +
>>>  - atomic read-modify-write operations in Linux are of three kinds:
>>>  
>>>           atomic_OP          returns void
>>> diff --git a/include/qemu/atomic.h b/include/qemu/atomic.h
>>> index 7e13fca..8409bdb 100644
>>> --- a/include/qemu/atomic.h
>>> +++ b/include/qemu/atomic.h
>>> @@ -18,6 +18,12 @@
>>>  /* Compiler barrier */
>>>  #define barrier()   ({ asm volatile("" ::: "memory"); (void)0; })
>>>  
>>> +/* These will only be atomic if the processor does the fetch or store
>>> + * in a single issue memory operation
>>> + */
>>> +#define volatile_read(ptr)       (*(__typeof__(*ptr) volatile*) (ptr))
>>> +#define volatile_set(ptr, i)     ((*(__typeof__(*ptr) volatile*) (ptr)) = (i))
>>> +
>>>  #ifdef __ATOMIC_RELAXED
>>>  /* For C11 atomic ops */
>>>  
>>> @@ -260,6 +266,17 @@
>>>   */
>>>  #define atomic_read(ptr)       (*(__typeof__(*ptr) volatile*) (ptr))
>>>  #define atomic_set(ptr, i)     ((*(__typeof__(*ptr) volatile*) (ptr)) = (i))
>>> +#define atomic_read(ptr)                              \
>>> +    ({                                                \
>>> +    QEMU_BUILD_BUG_ON(sizeof(*ptr) > sizeof(void *)); \
>>> +    volatile_read(ptr);                               \
>>> +    })
>>> +
>>> +#define atomic_set(ptr, i)  do {                      \
>>> +    QEMU_BUILD_BUG_ON(sizeof(*ptr) > sizeof(void *)); \
>>> +    volatile_set(ptr, i);                             \
>>> +} while(0)
>>> +
>>>  
>>>  /**
>>>   * atomic_rcu_read - reads a RCU-protected pointer to a local variable



