tcg_flush_jmp_cache replacing qatomic_set loop with memset
From: Richard W.M. Jones
Subject: tcg_flush_jmp_cache replacing qatomic_set loop with memset
Date: Mon, 16 Oct 2023 16:43:36 +0100
User-agent: Mutt/1.5.21 (2010-09-15)
Hey Paolo,

Quick question. I'm sure the transformation below is *not* correct,
because it doesn't preserve the invariant of the lockless structure.
Is there a way to do this while maintaining correctness? For example,
by putting barrier() after the memset? (Note I'm also zeroing .pc,
which may be a problem.)
The background to this is that I've been playing around with the very
hot tb_lookup function. Increasing the size of the jump cache (which
hasn't changed since, erm, 2005!) looks like it could improve
performance, plus a few other changes which I'm playing with. However,
increasing the size causes profiles to be dominated by the loop in
tcg_flush_jmp_cache, presumably because of all those serialized atomic ops.
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index 8cb6ad3511..6a21b3dba8 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -796,9 +796,7 @@ void tcg_flush_jmp_cache(CPUState *cpu)
return;
}
- for (int i = 0; i < TB_JMP_CACHE_SIZE; i++) {
- qatomic_set(&jc->array[i].tb, NULL);
- }
+ memset(jc->array, 0, TB_JMP_CACHE_SIZE * sizeof jc->array[0]);
}
/* This is a wrapper for common code that can not use CONFIG_SOFTMMU */
Rich.
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
nbdkit - Flexible, fast NBD server with plugins
https://gitlab.com/nbdkit/nbdkit