qemu-devel

From: Peter Crosthwaite
Subject: [Qemu-devel] [PATCH v2 0/6] QOMify pflash_cfi0x + PL353 for Xilinx Zynq
Date: Mon, 22 Oct 2012 17:18:58 +1000

This series adds the PL353 to Xilinx Zynq with both NAND and pflash devices attached.
I had to QOMify the pflash_cfi0x devices to get them working with the PL35x in the
least hackish way. pflash_cfi01 was regression tested using petalogix-ml605 and
pflash_cfi02 using zynq. Further testing by other users of the pflash devices would
be appreciated.

The pl35x is set up as a generalisation of the whole PL35x family (i.e. it
implements all of PL351-PL354). Once we get to actually implementing some of the
register ops of this SRAM interface, we could add it to vexpress for its PL354.
The PL35x model is incomplete at the moment (see the FIXMEs), but I'm pushing for
it now because the more controversial QOM-entangled aspects of this device model
are encapsulated by this series. The device does, however, fully work with Linux.

Changelog:
Changes from v1:
Addressed PMM's and Paolo's reviews (P3).
Fixed a compile error in pflash when debug was turned on (P6).
Removed the NAND READ_STATUS address reset patch (formerly P6).

Peter Crosthwaite (6):
  pflash_cfi0x: remove unused base field
  pflash_cfi01: remove unused total_len field
  pflash_cfi0x: QOMified
  hw: Model of Primecell pl35x mem controller
  xilinx_zynq: add pl353
  pflash_cfi01: Fix debug mode printfery

 default-configs/arm-softmmu.mak |    1 +
 hw/Makefile.objs                |    1 +
 hw/pflash_cfi01.c               |  149 ++++++++++++++------
 hw/pflash_cfi02.c               |  162 +++++++++++++++------
 hw/pl35x.c                      |  299 +++++++++++++++++++++++++++++++++++++++
 hw/xilinx_zynq.c                |   50 ++++++-
 6 files changed, 560 insertions(+), 102 deletions(-)
 create mode 100644 hw/pl35x.c



