From: Richard Henderson
Subject: Re: [Qemu-devel] [PATCH v3 2/2] target-mips: Misaligned memory accesses for MSA
Date: Wed, 13 May 2015 12:28:04 -0700
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.6.0

On 05/13/2015 08:37 AM, Yongbok Kim wrote:
> +static inline void ensure_atomic_msa_block_access(CPUMIPSState *env,
> +                                                  target_ulong addr,
> +                                                  int rw,
> +                                                  int mmu_idx)
>  {
> +#if !defined(CONFIG_USER_ONLY)
> +#define MSA_PAGESPAN(x) (unlikely((((x) & ~TARGET_PAGE_MASK)                \
> +                                   + MSA_WRLEN/8 - 1) >= TARGET_PAGE_SIZE))
> +    CPUState *cs = CPU(mips_env_get_cpu(env));
> +    target_ulong page_addr;
>  
> +    if (MSA_PAGESPAN(addr)) {
> +        /* first page */
> +        tlb_fill(cs, addr, rw, mmu_idx, 0);
> +        /* second page */
> +        page_addr = (addr & TARGET_PAGE_MASK) + TARGET_PAGE_SIZE;
> +        tlb_fill(cs, page_addr, rw, mmu_idx, 0);
>      }
> +#endif
>  }

This doesn't do quite what you think it does.  It does trap if the page isn't
mapped at all, but it doesn't trap if e.g. rw is set and the page is read-only.
That requires a subsequent check for what permissions were installed by
tlb_set_page.

I had thought there was a way to look this up besides duplicating the code in
softmmu_template.h, but I suppose that's in a patch set that never made it in.


> +    if (unlikely(addr & ((1 << DF) - 1))) {                             \
> +        /* work-around for misaligned accesses */                       \
> +        for (i = 0; i < DF_ELEMENTS(DF_BYTE); i++) {                    \
> +            pwd->b[i] = do_lbu(env, addr + (i << DF_BYTE), mmu_idx);    \
> +        }                                                               \
> +        correct_vector_endianness_ ## TYPE(pwd, pwd);                   \

Why byte accesses?  The softmmu helpers are guaranteed to support misalignment.

As an aside, consider moving away from

#define HELPER_LD(name, insn, type)                                     \
static inline type do_##name(CPUMIPSState *env, target_ulong addr,      \
                             int mem_idx)                               \
{                                                                       \
    switch (mem_idx)                                                    \
    {                                                                   \
    case 0: return (type) cpu_##insn##_kernel(env, addr); break;        \
    case 1: return (type) cpu_##insn##_super(env, addr); break;         \
    default:                                                            \
    case 2: return (type) cpu_##insn##_user(env, addr); break;          \
    }                                                                   \
}

to using helper_ret_*_mmu directly, which allows you to specify the mmu_idx
rather than bouncing around different thunks.  It also allows you to pass in
GETRA(), which would let these helpers use cpu_restore_state on faults.


r~


