Re: [Qemu-devel] [RFC v5 3/6] softmmu: Add helpers for a new slowpath


From: Richard Henderson
Subject: Re: [Qemu-devel] [RFC v5 3/6] softmmu: Add helpers for a new slowpath
Date: Wed, 30 Sep 2015 13:58:31 +1000
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.2.0

On 09/24/2015 06:32 PM, Alvise Rigo wrote:
The new helpers rely on the legacy ones to perform the actual read/write.

The LoadLink helper (helper_ldlink_name) prepares the way for the
following SC operation. It sets the linked address and the size of the
access.
These helpers also update the TLB entry of the page involved in the
LL/SC for those vCPUs that have the bit set (dirty), so that
subsequent accesses made by any vCPU will follow the slow path.

The StoreConditional helper (helper_stcond_name) returns 1 if the
store has to fail due to a concurrent access to the same page by
another vCPU. A 'concurrent access' can be a store made by *any* vCPU
(although some implementations allow stores made by the CPU that issued
the LoadLink).

Suggested-by: Jani Kokkonen <address@hidden>
Suggested-by: Claudio Fontana <address@hidden>
Signed-off-by: Alvise Rigo <address@hidden>
---
  cputlb.c                |   3 ++
  softmmu_llsc_template.h | 124 ++++++++++++++++++++++++++++++++++++++++++++++++
  softmmu_template.h      |  12 +++++
  tcg/tcg.h               |  30 ++++++++++++
  4 files changed, 169 insertions(+)
  create mode 100644 softmmu_llsc_template.h

diff --git a/cputlb.c b/cputlb.c
index 1e25a2a..d5aae7c 100644
--- a/cputlb.c
+++ b/cputlb.c
@@ -416,6 +416,8 @@ static inline void lookup_and_reset_cpus_ll_addr(hwaddr addr, hwaddr size)

  #define MMUSUFFIX _mmu

+/* Generates LoadLink/StoreConditional helpers in softmmu_template.h */
+#define GEN_EXCLUSIVE_HELPERS
  #define SHIFT 0
  #include "softmmu_template.h"

@@ -428,6 +430,7 @@ static inline void lookup_and_reset_cpus_ll_addr(hwaddr addr, hwaddr size)
  #define SHIFT 3
  #include "softmmu_template.h"
  #undef MMUSUFFIX
+#undef GEN_EXCLUSIVE_HELPERS

  #define MMUSUFFIX _cmmu
  #undef GETPC_ADJ
diff --git a/softmmu_llsc_template.h b/softmmu_llsc_template.h
new file mode 100644
index 0000000..9f22834
--- /dev/null
+++ b/softmmu_llsc_template.h
@@ -0,0 +1,124 @@
+/*
+ *  Software MMU support (exclusive load/store operations)
+ *
+ * Generate helpers used by TCG for qemu_ldlink/stcond ops.
+ *
+ * Included from softmmu_template.h only.
+ *
+ * Copyright (c) 2015 Virtual Open Systems
+ *
+ * Authors:
+ *  Alvise Rigo <address@hidden>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+/* This template does not generate the LE and BE versions together, but only
+ * one of the two, depending on whether BIGENDIAN_EXCLUSIVE_HELPERS has been
+ * set.  The same nomenclature as softmmu_template.h is used for the
+ * exclusive helpers.  */
+
+#ifdef BIGENDIAN_EXCLUSIVE_HELPERS
+
+#define helper_ldlink_name  glue(glue(helper_be_ldlink, USUFFIX), MMUSUFFIX)
+#define helper_stcond_name  glue(glue(helper_be_stcond, SUFFIX), MMUSUFFIX)
+#define helper_ld glue(glue(helper_be_ld, USUFFIX), MMUSUFFIX)
+#define helper_st glue(glue(helper_be_st, SUFFIX), MMUSUFFIX)
+
+#else /* LE helpers + 8-bit helpers (generated only once for both LE and BE) */
+
+#if DATA_SIZE > 1
+#define helper_ldlink_name  glue(glue(helper_le_ldlink, USUFFIX), MMUSUFFIX)
+#define helper_stcond_name  glue(glue(helper_le_stcond, SUFFIX), MMUSUFFIX)
+#define helper_ld glue(glue(helper_le_ld, USUFFIX), MMUSUFFIX)
+#define helper_st glue(glue(helper_le_st, SUFFIX), MMUSUFFIX)
+#else /* DATA_SIZE <= 1 */
+#define helper_ldlink_name  glue(glue(helper_ret_ldlink, USUFFIX), MMUSUFFIX)
+#define helper_stcond_name  glue(glue(helper_ret_stcond, SUFFIX), MMUSUFFIX)
+#define helper_ld glue(glue(helper_ret_ld, USUFFIX), MMUSUFFIX)
+#define helper_st glue(glue(helper_ret_st, SUFFIX), MMUSUFFIX)
+#endif
+
+#endif
+
+WORD_TYPE helper_ldlink_name(CPUArchState *env, target_ulong addr,
+                                TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    WORD_TYPE ret;
+    int index;
+    CPUState *cpu;
+    hwaddr hw_addr;
+    unsigned mmu_idx = get_mmuidx(oi);
+
+    /* Use the proper load helper from cpu_ldst.h */
+    ret = helper_ld(env, addr, mmu_idx, retaddr);
+
+    index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
+
+    /* hw_addr = hwaddr of the page (i.e. section->mr->ram_addr + xlat)
+     * plus the offset (i.e. addr & ~TARGET_PAGE_MASK) */
+    hw_addr = (env->iotlb[mmu_idx][index].addr & TARGET_PAGE_MASK) + addr;
+
+    cpu_physical_memory_clear_excl_dirty(hw_addr, ENV_GET_CPU(env)->cpu_index);
+    /* If all the vCPUs have the EXCL bit set for this page there is no need
+     * to request any flush. */
+    if (cpu_physical_memory_excl_is_dirty(hw_addr, smp_cpus)) {
+        CPU_FOREACH(cpu) {
+            if (current_cpu != cpu) {
+                if (cpu_physical_memory_excl_is_dirty(hw_addr,
+                                                    cpu->cpu_index)) {
+                    cpu_physical_memory_clear_excl_dirty(hw_addr,
+                                                         cpu->cpu_index);
+                    tlb_flush(cpu, 1);
+                }

Why would you need to indicate that another cpu has started an exclusive operation on this page? That seems definitely wrong.

I think that all you wanted was to flush the other cpu, so that it notices that this cpu has started an exclusive operation.

It would be great if most of this were pulled out to a subroutine so that these helper functions didn't accrue so much duplicate code.

It would be fantastic to implement a tlb_flush_phys_page so that we didn't have to flush the entire tlb of the other cpus.
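
Something like the following is the sort of thing meant here -- a completely
untested sketch that would live in cputlb.c next to the existing flush code.
The function name is made up, it walks only the main TLB (no victim TLB,
large-page tracking or tb_jmp_cache handling), and it reuses the patch's own
assumption that iotlb[mmu_idx][index].addr holds the physical page address:

    /* Hypothetical helper: invalidate only the TLB entries that map the
     * given physical page, rather than dropping the whole TLB. */
    static void tlb_flush_phys_page(CPUState *cpu, hwaddr phys_addr)
    {
        CPUArchState *env = cpu->env_ptr;
        int mmu_idx, i;

        phys_addr &= TARGET_PAGE_MASK;
        for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
            for (i = 0; i < CPU_TLB_SIZE; i++) {
                if ((env->iotlb[mmu_idx][i].addr & TARGET_PAGE_MASK)
                    == phys_addr) {
                    /* -1 is the "invalid" pattern that tlb_flush() uses. */
                    memset(&env->tlb_table[mmu_idx][i], -1,
                           sizeof(env->tlb_table[mmu_idx][i]));
                }
            }
        }
    }

With that, the CPU_FOREACH loop above would only have to call
tlb_flush_phys_page(cpu, hw_addr & TARGET_PAGE_MASK) on the other vCPUs, which
also sidesteps the question of whether their dirty bit needs touching at all.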

+    /* We set it to true in advance to distinguish the following legacy
+     * access as one made by the store conditional wrapper. If the store
+     * conditional does not succeed, the value will be set to 0. */
+    env->excl_succeeded = 1;

Ah -- "distinguish" is the word that was missing from the comments in the previous patches.

diff --git a/softmmu_template.h b/softmmu_template.h
index e4431e8..ad65d20 100644
--- a/softmmu_template.h
+++ b/softmmu_template.h
@@ -640,6 +640,18 @@ void probe_write(CPUArchState *env, target_ulong addr, int mmu_idx,
  #endif
  #endif /* !defined(SOFTMMU_CODE_ACCESS) */

+#ifdef GEN_EXCLUSIVE_HELPERS
+
+#if DATA_SIZE > 1 /* The 8-bit helpers are generated along with the LE helpers */
+#define BIGENDIAN_EXCLUSIVE_HELPERS
+#include "softmmu_llsc_template.h"
+#undef BIGENDIAN_EXCLUSIVE_HELPERS
+#endif
+
+#include "softmmu_llsc_template.h"
+
+#endif /* GEN_EXCLUSIVE_HELPERS */

I'm not especially keen on this. Not that what we currently have in softmmu_template.h is terribly better.

I wonder what can be done to clean all of this up, short of actually transitioning to C++ templates...
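
For anyone trying to follow the preprocessor gymnastics (and assuming the
glue() chains are being read correctly here): cputlb.c defines
GEN_EXCLUSIVE_HELPERS only for the _mmu inclusions, each SHIFT then pulls in
softmmu_llsc_template.h twice (BE pass, then LE pass), and the endian-neutral
8-bit pair falls out of the SHIFT == 0 inclusion only.  So the 32-bit case,
for example, should boil down to roughly:

    /* SHIFT == 2 (DATA_SIZE == 4), MMUSUFFIX == _mmu:
     *   helper_be_ldlinkul_mmu / helper_be_stcondl_mmu   (BE pass)
     *   helper_le_ldlinkul_mmu / helper_le_stcondl_mmu   (LE pass)
     * plus, from SHIFT == 0 only, the shared 8-bit pair:
     *   helper_ret_ldlinkub_mmu / helper_ret_stcondb_mmu
     * each with the shape of the quoted definition, e.g.: */
    tcg_target_ulong helper_le_ldlinkul_mmu(CPUArchState *env,
                                            target_ulong addr,
                                            TCGMemOpIdx oi,
                                            uintptr_t retaddr);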


r~


