From: Peter Maydell
Subject: Re: [Qemu-devel] [PATCH 2/4 v5] block: Add device-width property to pflash_cfi01
Date: Thu, 28 Nov 2013 19:03:24 +0000

On 22 October 2013 17:35, Roy Franz <address@hidden> wrote:
> The width of the devices that make up the flash interface
> is required to mask certain commands, in particular the
> write length for buffered writes.  This length will be presented
> to each device on the interface by the program writing the flash,
> and the flash emulation code needs to be able to determine
> the length of the write as received by each flash device.
> The device-width defaults to the bank width, which should
> maintain existing behavior for platforms that don't need
> this change.
> This change is required to support buffered writes on the
> vexpress platform that has a 32-bit flash interface with
> two 16-bit devices on it.
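
(To make the arithmetic concrete, with illustrative numbers: on
vexpress, bank_width is 4 and device_width is 2, so a buffered-write
count of 7 is driven on the 32-bit bus as 0x00070007, and masking
with (1 << (2 * 8)) - 1 == 0xffff leaves each emulated x16 device
seeing a count of 0x0007.)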
>
> Signed-off-by: Roy Franz <address@hidden>
> ---
>  hw/block/pflash_cfi01.c |    5 +++++
>  1 file changed, 5 insertions(+)
>
> diff --git a/hw/block/pflash_cfi01.c b/hw/block/pflash_cfi01.c
> index d5e366d..cda8289 100644
> --- a/hw/block/pflash_cfi01.c
> +++ b/hw/block/pflash_cfi01.c
> @@ -72,6 +72,7 @@ struct pflash_t {
>      uint32_t nb_blocs;
>      uint64_t sector_len;
>      uint8_t bank_width;
> +    uint8_t device_width;
>      uint8_t be;
>      uint8_t wcycle; /* if 0, the flash is read normally */
>      int ro;
> @@ -378,6 +379,8 @@ static void pflash_write(pflash_t *pfl, hwaddr offset,
>
>              break;
>          case 0xe8:
> +            /* Mask writeblock size based on device width */
> +            value &= (1ULL << (pfl->device_width * 8)) - 1;

  value = extract32(value, 0, pfl->device_width * 8);

(you'll need to #include "qemu/bitops.h".)
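
For concreteness, a minimal sketch of how that case might look with
the change applied (assuming the surrounding code is otherwise as in
your patch):

    #include "qemu/bitops.h"    /* for extract32() */
    ...
        case 0xe8:
            /* Mask the buffered-write count to the width of one
             * device: each device only sees the bits driven on
             * its own byte lanes. */
            value = extract32(value, 0, pfl->device_width * 8);
            DPRINTF("%s: block write of %x bytes\n", __func__, value);
            pfl->counter = value;
            pfl->wcycle++;
            break;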

Other than this and the status reads (which you deal with
in the other patch), the accesses I wonder whether we
have correct are:
 * manufacturer & device ID code read
 * cfi_table[] accesses ("query mode")

http://www.delorie.com/agenda/specs/29220404.pdf
table 1 only talks about using two x8 devices for a
16-bit bus, but it definitely seems to imply that the QRY
reads from the cfi_table[] behave differently for
paired devices than for a single full-width one
(and that's logically what you'd expect, since a
single device will just return the one character but
a paired device will return that one character
mirrored up into each of the byte lanes).

Basically these two things, like the status read, are
returning fixed values which will be output by both
of the parallel devices; it's only pure data accesses
that behave the same way regardless.
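
To illustrate what I mean, a sketch of how the mirroring could be
done (the helper name and the interleaving assumption are mine, not
anything in your patch):

    /* Hypothetical helper: replicate one device's fixed byte (a CFI
     * query character, an ID byte, ...) into the low byte of each
     * device's lane, assuming identical interleaved devices. */
    static uint32_t pflash_fixed_value(pflash_t *pfl, uint8_t byte)
    {
        uint32_t ret = 0;
        int i;

        for (i = 0; i < pfl->bank_width; i += pfl->device_width) {
            ret |= (uint32_t)byte << (i * 8);
        }
        return ret;
    }

With bank_width 4 and device_width 2 this would return 0x00510051
for the 'Q' of "QRY" (0x51 mirrored into each device's lane), whereas
a single full-width device would return plain 0x00000051.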

>              DPRINTF("%s: block write of %x bytes\n", __func__, value);
>              pfl->counter = value;
>              pfl->wcycle++;
> @@ -707,6 +710,7 @@ static Property pflash_cfi01_properties[] = {
>      DEFINE_PROP_UINT32("num-blocks", struct pflash_t, nb_blocs, 0),
>      DEFINE_PROP_UINT64("sector-length", struct pflash_t, sector_len, 0),
>      DEFINE_PROP_UINT8("width", struct pflash_t, bank_width, 0),
> +    DEFINE_PROP_UINT8("device-width", struct pflash_t, device_width, 0),
>      DEFINE_PROP_UINT8("big-endian", struct pflash_t, be, 0),
>      DEFINE_PROP_UINT16("id0", struct pflash_t, ident0, 0),
>      DEFINE_PROP_UINT16("id1", struct pflash_t, ident1, 0),
> @@ -757,6 +761,7 @@ pflash_t *pflash_cfi01_register(hwaddr base,
>      qdev_prop_set_uint32(dev, "num-blocks", nb_blocs);
>      qdev_prop_set_uint64(dev, "sector-length", sector_len);
>      qdev_prop_set_uint8(dev, "width", bank_width);
> +    qdev_prop_set_uint8(dev, "device-width", bank_width);

This doesn't look right. We should just leave the property
at its default value. (In pflash_cfi01_realize you can catch
the 'device_width == 0' case and set it to bank_width there.)
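
i.e. something along these lines in pflash_cfi01_realize() (a
sketch; the exact placement is up to you):

    /* If the board didn't set a device-width, assume one device
     * spanning the whole bank; this preserves the old behaviour. */
    if (pfl->device_width == 0) {
        pfl->device_width = pfl->bank_width;
    }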

>      qdev_prop_set_uint8(dev, "big-endian", !!be);
>      qdev_prop_set_uint16(dev, "id0", id0);
>      qdev_prop_set_uint16(dev, "id1", id1);

thanks
-- PMM


