From: Peter Maydell
Subject: Re: [Qemu-devel] [PATCH arm-devs v1 05/15] xilinx_spips: lqspi: Don't trash config register
Date: Fri, 5 Apr 2013 19:46:03 +0100

On 3 April 2013 05:32, Peter Crosthwaite <address@hidden> wrote:
> The LQSPI code currently manipulates the config register to achieve its
> ends. Some (aggressively designed) drivers assume that the config
> register preserves its state across a transition into and out of LQSPI
> mode. Fix this by restoring R_CONFIG to its original value after the
> LQSPI access completes.
>
> Signed-off-by: Peter Crosthwaite <address@hidden>
> ---
>
>  hw/xilinx_spips.c |    3 +++
>  1 files changed, 3 insertions(+), 0 deletions(-)
>
> diff --git a/hw/xilinx_spips.c b/hw/xilinx_spips.c
> index 29636ce..06c2ec5 100644
> --- a/hw/xilinx_spips.c
> +++ b/hw/xilinx_spips.c
> @@ -467,6 +467,7 @@ lqspi_read(void *opaque, hwaddr addr, unsigned int size)
>          int flash_addr = (addr / num_effective_busses(s));
>          int slave = flash_addr >> LQSPI_ADDRESS_BITS;
>          int cache_entry = 0;
> +        uint32_t r_config_save = s->regs[R_CONFIG];
>
>          DB_PRINT("config reg status: %08x\n", s->regs[R_LQSPI_CFG]);
>
> @@ -512,6 +513,8 @@ lqspi_read(void *opaque, hwaddr addr, unsigned int size)
>
>          s->regs[R_CONFIG] |= CS;
>          xilinx_spips_update_cs_lines(s);
> +        s->regs[R_CONFIG] = r_config_save;
> +        xilinx_spips_update_cs_lines(s);
>
>          q->lqspi_cached_addr = addr;
>          return lqspi_read(opaque, addr, size);
> --
> 1.7.0.4

Is this a "we don't implement this the way the hardware does, but
this is close enough" kind of thing? In particular, does the hardware
really do the same thing to the CS lines that those two calls to
xilinx_spips_update_cs_lines() presumably do? It might be worth a
comment in the code explaining what we do versus what the hardware
does (or what we theoretically ought to do), if you happen to know.

-- PMM
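
As an illustration of the pattern the patch relies on, here is a
minimal, self-contained sketch (not QEMU code: the register index, the
CS bit position, and the demo device struct are all made up to mirror
the quoted diff): save the guest-visible register, let the LQSPI path
borrow it, then restore it and re-derive the CS lines from the restored
value.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define R_CONFIG 0
#define CS       (1u << 10)   /* chip-select bit; position is illustrative */

typedef struct {
    uint32_t regs[1];
    int cs_asserted;          /* stand-in for the real CS GPIO lines */
} DemoSPIPS;

/* Stand-in for xilinx_spips_update_cs_lines(): re-derive the CS output
 * state from whatever R_CONFIG currently holds. */
static void update_cs_lines(DemoSPIPS *s)
{
    s->cs_asserted = !(s->regs[R_CONFIG] & CS);
}

static void lqspi_access(DemoSPIPS *s)
{
    /* Save the guest-visible register before the LQSPI path trashes it. */
    uint32_t r_config_save = s->regs[R_CONFIG];

    /* The LQSPI machinery borrows R_CONFIG to drive the transfer. */
    s->regs[R_CONFIG] = CS | 0xff;
    update_cs_lines(s);

    /* Deselect the flash, then restore the register and re-sync the CS
     * lines, so the guest never observes the temporary value. */
    s->regs[R_CONFIG] |= CS;
    update_cs_lines(s);
    s->regs[R_CONFIG] = r_config_save;
    update_cs_lines(s);
}

int main(void)
{
    DemoSPIPS s = { .regs = { 0x000000ff }, .cs_asserted = 0 };
    lqspi_access(&s);
    printf("R_CONFIG after LQSPI access: %08" PRIx32 "\n", s.regs[R_CONFIG]);
    return 0;
}

The double call to update_cs_lines() mirrors the diff: the first call
deselects the flash using the temporary value, and the second re-syncs
the CS lines against the restored register.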


