Re: [Qemu-block] [RFC PATCH 4/4] tsan: various fixes for make check


From: Paolo Bonzini
Subject: Re: [Qemu-block] [RFC PATCH 4/4] tsan: various fixes for make check
Date: Mon, 25 Jan 2016 19:09:23 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.5.0


On 25/01/2016 17:49, Alex Bennée wrote:
> After building with the ThreadSanitizer I ran make check and started
> going through the failures reported. Most are failures to use atomic
> primitives when accessing variables that were previously set atomically.
> While this will likely work on x86, it could cause problems on other
> architectures.

It will work on other architectures too, because there are memory barriers
(either explicit, or implicit in other atomic_* ops).  In fact, using
atomic_read/atomic_set would actually fix bugs on x86 :) if it weren't
for commit 3bbf572 ("atomics: add explicit compiler fence in __atomic
memory barriers", 2015-06-03).

>  - async: use atomic reads for scheduled/notify_me
>  - thread-pool: use atomic_mb_read/set for thread->state

Please use atomic_read/atomic_set and keep the memory barriers.  That's
a fine way to convert code that uses non-atomic accesses together with
memory barriers (not just thread-pool, also e.g. virtio).
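To make the suggested conversion concrete, here is a minimal sketch using
C11 atomics to stand in for QEMU's atomic_read/atomic_set macros (which
are roughly relaxed accesses); the variable and function names are
hypothetical, not from the patch:

```c
#include <stdatomic.h>

/* Data protected by an ordering barrier, plus a flag that was previously
 * accessed with plain loads/stores.  The fix: make the flag accesses
 * atomic (relaxed, like QEMU's atomic_read/atomic_set) but KEEP the
 * memory barriers that order the data against the flag. */
static int payload;          /* plain field, ordered by the fences */
static _Atomic int flag;     /* was plain int; now accessed atomically */

static void producer(void)
{
    payload = 42;                                   /* write data */
    atomic_thread_fence(memory_order_release);      /* keep the smp_wmb() */
    atomic_store_explicit(&flag, 1,
                          memory_order_relaxed);    /* ~atomic_set() */
}

static int consumer(void)
{
    while (!atomic_load_explicit(&flag,
                                 memory_order_relaxed)) {  /* ~atomic_read() */
        /* spin; the real code would aio_poll() here */
    }
    atomic_thread_fence(memory_order_acquire);      /* keep the smp_rmb() */
    return payload;                                 /* ordered after flag */
}
```

The point is that the relaxed atomic accesses silence the data race
reported by TSan, while the pre-existing fences still provide the
ordering between payload and flag.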

Otherwise looks sane, thanks!

Paolo

>  - test-thread-pool: use atomic read for data.n
> 
> Signed-off-by: Alex Bennée <address@hidden>
> ---
>  async.c                  | 4 ++--
>  tests/test-thread-pool.c | 2 +-
>  thread-pool.c            | 9 ++++-----
>  3 files changed, 7 insertions(+), 8 deletions(-)
> 
> diff --git a/async.c b/async.c
> index e106072..8d5f810 100644
> --- a/async.c
> +++ b/async.c
> @@ -165,7 +165,7 @@ aio_compute_timeout(AioContext *ctx)
>      QEMUBH *bh;
>  
>      for (bh = ctx->first_bh; bh; bh = bh->next) {
> -        if (!bh->deleted && bh->scheduled) {
> +        if (!bh->deleted && atomic_read(&bh->scheduled)) {
>              if (bh->idle) {
>                  /* idle bottom halves will be polled at least
>                   * every 10ms */
> @@ -286,7 +286,7 @@ void aio_notify(AioContext *ctx)
>       * with atomic_or in aio_ctx_prepare or atomic_add in aio_poll.
>       */
>      smp_mb();
> -    if (ctx->notify_me) {
> +    if (atomic_read(&ctx->notify_me)) {
>          event_notifier_set(&ctx->notifier);
>          atomic_mb_set(&ctx->notified, true);
>      }
> diff --git a/tests/test-thread-pool.c b/tests/test-thread-pool.c
> index ccdee39..5694ad9 100644
> --- a/tests/test-thread-pool.c
> +++ b/tests/test-thread-pool.c
> @@ -46,7 +46,7 @@ static void test_submit(void)
>  {
>      WorkerTestData data = { .n = 0 };
>      thread_pool_submit(pool, worker_cb, &data);
> -    while (data.n == 0) {
> +    while (atomic_read(&data.n) == 0) {
>          aio_poll(ctx, true);
>      }
>      g_assert_cmpint(data.n, ==, 1);
> diff --git a/thread-pool.c b/thread-pool.c
> index 402c778..97b2c0c 100644
> --- a/thread-pool.c
> +++ b/thread-pool.c
> @@ -99,15 +99,14 @@ static void *worker_thread(void *opaque)
>  
>          req = QTAILQ_FIRST(&pool->request_list);
>          QTAILQ_REMOVE(&pool->request_list, req, reqs);
> -        req->state = THREAD_ACTIVE;
> +        atomic_mb_set(&req->state, THREAD_ACTIVE);
>          qemu_mutex_unlock(&pool->lock);
>  
>          ret = req->func(req->arg);
>  
>          req->ret = ret;
>          /* Write ret before state.  */
> -        smp_wmb();
> -        req->state = THREAD_DONE;
> +        atomic_mb_set(&req->state, THREAD_DONE);
>  
>          qemu_mutex_lock(&pool->lock);
>  
> @@ -167,7 +166,7 @@ static void thread_pool_completion_bh(void *opaque)
>  
>  restart:
>      QLIST_FOREACH_SAFE(elem, &pool->head, all, next) {
> -        if (elem->state != THREAD_DONE) {
> +        if (atomic_read(&elem->state) != THREAD_DONE) {
>              continue;
>          }
>  
> @@ -201,7 +200,7 @@ static void thread_pool_cancel(BlockAIOCB *acb)
>      trace_thread_pool_cancel(elem, elem->common.opaque);
>  
>      qemu_mutex_lock(&pool->lock);
> -    if (elem->state == THREAD_QUEUED &&
> +    if (atomic_mb_read(&elem->state) == THREAD_QUEUED &&
>          /* No thread has yet started working on elem. we can try to "steal"
>           * the item from the worker if we can get a signal from the
>           * semaphore.  Because this is non-blocking, we can do it with
> 
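For reference, the thread-pool hunks above follow the same write-ret-then-
publish-state pattern; a self-contained sketch of that handshake, with
hypothetical names (not QEMU's actual structs) and atomic_mb_set modeled
as a C11 sequentially consistent store:

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

enum { THREAD_QUEUED, THREAD_ACTIVE, THREAD_DONE };

typedef struct {
    int ret;             /* plain field, ordered by the barrier below */
    _Atomic int state;   /* accessed with atomic_mb_set/atomic_read */
} Req;

/* atomic_mb_set: a store with a full barrier, modeled as seq_cst */
static void mb_set(_Atomic int *p, int v)
{
    atomic_store_explicit(p, v, memory_order_seq_cst);
}

static void *worker(void *opaque)
{
    Req *req = opaque;
    mb_set(&req->state, THREAD_ACTIVE);
    req->ret = 42;                      /* stands in for req->func(req->arg) */
    /* Write ret before state. */
    mb_set(&req->state, THREAD_DONE);
    return NULL;
}

static int wait_done(Req *req)
{
    /* The completion side: an atomic read of state, after which ret is
     * guaranteed visible because DONE was stored with a full barrier. */
    while (atomic_load_explicit(&req->state,
                                memory_order_acquire) != THREAD_DONE) {
        /* spin; the real code runs in a bottom half via aio_poll() */
    }
    return req->ret;
}
```

As in the patch, the barrier formerly spelled smp_wmb() is folded into
atomic_mb_set, so the "write ret before state" ordering is preserved.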


