From: Zhao Liu
Subject: [RFC 01/13] bql: check that the BQL is not dropped within marked sections
Date: Thu, 5 Dec 2024 14:07:02 +0800
From: Paolo Bonzini <pbonzini@redhat.com>
The Big QEMU Lock (BQL) is used to provide interior mutability to Rust
code. While BqlCell performs indivisible accesses, an equivalent of
RefCell will allow the borrower to hold on to the interior content for
a longer time. If the BQL is dropped during that window, another thread
could come in and mutate the data from C code (Rust code, by contrast,
would panic on borrow_mut()). To prevent this, add a new BQL primitive
that marks BQL-atomic sections and aborts if the BQL is dropped within
them.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
include/qemu/main-loop.h | 15 +++++++++++++++
stubs/iothread-lock.c | 15 +++++++++++++++
system/cpus.c | 15 +++++++++++++++
3 files changed, 45 insertions(+)
diff --git a/include/qemu/main-loop.h b/include/qemu/main-loop.h
index 5764db157c97..646306c272f7 100644
--- a/include/qemu/main-loop.h
+++ b/include/qemu/main-loop.h
@@ -262,6 +262,21 @@ AioContext *iohandler_get_aio_context(void);
  */
 bool bql_locked(void);
 
+/**
+ * bql_block_unlock: Allow/deny releasing the BQL
+ *
+ * The Big QEMU Lock (BQL) is used to provide interior mutability to
+ * Rust code, but this only works if other threads cannot run while
+ * the Rust code has an active borrow. This is because C code in
+ * other threads could come in and mutate data under the Rust code's
+ * feet.
+ *
+ * @increase: Whether to increase or decrease the blocking counter.
+ *            Releasing the BQL while the counter is nonzero triggers
+ *            an assertion failure.
+ */
+void bql_block_unlock(bool increase);
+
 /**
  * qemu_in_main_thread: return whether it's possible to safely access
  * the global state of the block layer.
diff --git a/stubs/iothread-lock.c b/stubs/iothread-lock.c
index d7890e5581c5..54676598950f 100644
--- a/stubs/iothread-lock.c
+++ b/stubs/iothread-lock.c
@@ -1,6 +1,8 @@
#include "qemu/osdep.h"
#include "qemu/main-loop.h"
+static uint32_t bql_unlock_blocked;
+
bool bql_locked(void)
{
return false;
@@ -12,4 +14,17 @@ void bql_lock_impl(const char *file, int line)
 
 void bql_unlock(void)
 {
+    assert(!bql_unlock_blocked);
+}
+
+void bql_block_unlock(bool increase)
+{
+    uint32_t new_value;
+
+    assert(bql_locked());
+
+    /* check for overflow! */
+    new_value = bql_unlock_blocked + increase - !increase;
+    assert((new_value > bql_unlock_blocked) == increase);
+    bql_unlock_blocked = new_value;
 }
diff --git a/system/cpus.c b/system/cpus.c
index a1b46f68476a..793c4698c7ad 100644
--- a/system/cpus.c
+++ b/system/cpus.c
@@ -514,6 +514,20 @@ bool qemu_in_vcpu_thread(void)
 
 QEMU_DEFINE_STATIC_CO_TLS(bool, bql_locked)
 
+static uint32_t bql_unlock_blocked;
+
+void bql_block_unlock(bool increase)
+{
+    uint32_t new_value;
+
+    assert(bql_locked());
+
+    /* check for overflow! */
+    new_value = bql_unlock_blocked + increase - !increase;
+    assert((new_value > bql_unlock_blocked) == increase);
+    bql_unlock_blocked = new_value;
+}
+
 bool bql_locked(void)
 {
     return get_bql_locked();
@@ -540,6 +554,7 @@ void bql_lock_impl(const char *file, int line)
 void bql_unlock(void)
 {
     g_assert(bql_locked());
+    g_assert(!bql_unlock_blocked);
     set_bql_locked(false);
     qemu_mutex_unlock(&bql);
 }
--
2.34.1
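
For reference, a minimal sketch of how the blocking counter introduced
above behaves; the call sequence is made up for illustration and
assumes the BQL lock/unlock macros from include/qemu/main-loop.h:

    bql_lock();
    bql_block_unlock(true);      /* counter: 0 -> 1 */
    bql_block_unlock(true);      /* sections nest, counter: 1 -> 2 */
    bql_block_unlock(false);     /* counter: 2 -> 1 */
    /* bql_unlock() here would abort: the counter is still nonzero */
    bql_block_unlock(false);     /* counter: 1 -> 0 */
    bql_unlock();                /* fine, no BQL-atomic section is active */

Because the counter itself is protected by the BQL, bql_block_unlock()
asserts that the caller holds the lock.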