Re: [Qemu-devel] [PATCH v2] Improve the alignment check infrastructure


From: Sergey Sorokin
Subject: Re: [Qemu-devel] [PATCH v2] Improve the alignment check infrastructure
Date: Wed, 22 Jun 2016 19:30:20 +0300

   22.06.2016, 18:50, "Richard Henderson" <address@hidden>:

     On 06/22/2016 05:37 AM, Sergey Sorokin wrote:

     +/* Use this mask to check interception with an alignment mask
     + * in a TCG backend.
     + */
     +#define TLB_FLAGS_MASK (TLB_INVALID_MASK | TLB_NOTDIRTY | TLB_MMIO)

     I think we ought to check this in tcg-op.c, rather than wait until
     generating code in the backend.
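
   For reference, one way to make such a central check is to fold it into
   get_alignment_bits() itself, which tcg-op.c can call as soon as the memop
   is built. The following is only a sketch against the MO_AMASK/MO_ASHIFT
   encoding used by this series, not the committed code:

      static inline unsigned get_alignment_bits(TCGMemOp memop)
      {
          unsigned a = memop & MO_AMASK;

          if (a == MO_UNALN) {
              a = 0;                 /* no alignment required */
          } else if (a == MO_ALIGN) {
              a = memop & MO_SIZE;   /* natural alignment for the access size */
          } else {
              a = a >> MO_ASHIFT;    /* explicitly requested alignment */
          }
      #if defined(CONFIG_SOFTMMU)
          /* The requested alignment must not overlap the TLB flag bits. */
          tcg_debug_assert((TLB_FLAGS_MASK & ((1 << a) - 1)) == 0);
      #endif
          return a;
      }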

     --- a/tcg/aarch64/tcg-target.inc.c
     +++ b/tcg/aarch64/tcg-target.inc.c
     @@ -1071,19 +1071,21 @@ static void tcg_out_tlb_read(TCGContext *s, TCGReg addr_reg, TCGMemOp opc,
          int tlb_offset = is_read ?
                  offsetof(CPUArchState, tlb_table[mem_index][0].addr_read)
                  : offsetof(CPUArchState, tlb_table[mem_index][0].addr_write);
     -    int s_mask = (1 << (opc & MO_SIZE)) - 1;
     +    int a_bits = get_alignment_bits(opc);
          TCGReg base = TCG_AREG0, x3;
     -    uint64_t tlb_mask;
     +    target_ulong tlb_mask;

     Hum. I had been talking about i386 specifically when changing the type
     of tlb_mask. For aarch64, a quirk in the code generation logic requires
     that a 32-bit tlb_mask be sign-extended to 64-bit. The effect of the
     actual instruction will be zero-extension, however. See is_limm,
     tcg_out_logicali, and a related comment in tcg_out_movi for details. We
     should probably add a comment here in tlb_read for the next person that
     comes along...

   Thank you for the comment.
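
   Something along these lines, perhaps (only a sketch; the exact wording
   and the mask expression here are assumptions, not committed code):

      /* is_limm()/tcg_out_logicali() classify the immediate as a 64-bit
       * logical-immediate pattern, so for a 32-bit guest the mask must be
       * presented sign-extended to 64 bits.  The AND that applies it
       * operates on a 32-bit register and zero-extends its result, so the
       * extra high bits do not affect the comparison.  TARGET_PAGE_MASK is
       * a negative target_long, so the cast does the sign-extension.  */
      tlb_mask = (uint64_t)TARGET_PAGE_MASK | ((1 << a_bits) - 1);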


     diff --git a/tcg/ppc/tcg-target.inc.c b/tcg/ppc/tcg-target.inc.c
     index da10052..3dc38fa 100644
     --- a/tcg/ppc/tcg-target.inc.c
     +++ b/tcg/ppc/tcg-target.inc.c
     @@ -1399,6 +1399,7 @@ static TCGReg tcg_out_tlb_read(TCGContext *s, TCGMemOp opc,
          int add_off = offsetof(CPUArchState, tlb_table[mem_index][0].addend);
          TCGReg base = TCG_AREG0;
          TCGMemOp s_bits = opc & MO_SIZE;
     +    int a_bits = get_alignment_bits(opc);

          /* Extract the page index, shifted into place for tlb index. */
          if (TCG_TARGET_REG_BITS == 64) {
     @@ -1456,14 +1457,21 @@ static TCGReg tcg_out_tlb_read(TCGContext *s, TCGMemOp opc,
               * the bottom bits and thus trigger a comparison failure on
               * unaligned accesses
               */
     +        if (a_bits > 0) {
     +            tcg_debug_assert((((1 << a_bits) - 1) & TLB_FLAGS_MASK) == 0);
     +        } else {
     +            a_bits = s_bits;
     +        }
              tcg_out_rlw(s, RLWINM, TCG_REG_R0, addrlo, 0,
     +                    (32 - a_bits) & 31, 31 - TARGET_PAGE_BITS);

     ppc32 can certainly support over-alignment, just like every other
     target. It's just that there are some 32-bit parts that don't support
     unaligned accesses.


   I don't understand your point here.

   As the comment says, this case preserves all alignment bits so that any
   unaligned access goes to the slow path, regardless of whether an
   alignment requirement is enabled.
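
   For example, with TARGET_PAGE_BITS == 12 and a_bits == 2, the quoted call
   reads as follows (an illustrative decoding under assumed example values,
   not new code):

      /* tcg_out_rlw(s, RLWINM, TCG_REG_R0, addrlo, 0, 30, 19)
       *   -> rlwinm r0, addrlo, 0, 30, 19
       *
       * In PPC big-endian bit numbering, the wrap-around mask 30..19 keeps
       * value bits 31..12 (the page number) and value bits 1..0 (the
       * alignment bits).  The TLB entry's address field has those low bits
       * clear, so any set alignment bit makes the comparison fail and sends
       * the access to the slow path.  */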


   Also, I forgot about softmmu_template.h. This patch is not complete.

