Merge tag 'for-linus-20170713' of git://git.infradead.org/linux-mtd
author Linus Torvalds <torvalds@linux-foundation.org>
Thu, 13 Jul 2017 19:07:44 +0000 (12:07 -0700)
committer Linus Torvalds <torvalds@linux-foundation.org>
Thu, 13 Jul 2017 19:07:44 +0000 (12:07 -0700)
Pull MTD updates from Brian Norris:
 "General updates:
   - Cleanups and additional flash support for "dataflash" driver
   - new driver for mchp23k256 SPI SRAM device
   - improve handling of MTDs without eraseblocks (i.e., MTD_NO_ERASE)
   - refactor and improve "sub-partition" handling with TRX partition
     parser; partitions can now be created as sub-partitions of another
     partition

  SPI NOR updates, from Cyrille Pitchen and Marek Vasut:
   - introduce support for the SPI 1-2-2 and 1-4-4 protocols.
   - introduce support for Double Data Rate (DDR) mode.
   - introduce support for the Octo SPI protocols.
   - add support for new memory parts from Spansion, Macronix and Winbond.
   - add fixes for the Aspeed, STM32 and Cadence QSPI controller drivers.
   - clean up the st_spi_fsm driver.

  NAND updates, from Boris Brezillon:
   - addition of on-die ECC support to Micron driver
   - addition of helpers to help drivers choose the most appropriate ECC
     settings
   - deletion of dead code (cached programming and ->errstat() hook)
   - make sure drivers that do not support the SET/GET FEATURES command
     use a dummy ->set/get_features implementation returning -ENOTSUPP
     (required for Micron on-die ECC)
   - change the semantics of ecc->write_page() for drivers setting the
     NAND_ECC_CUSTOM_PAGE_ACCESS flag
   - support for exiting the 'GET STATUS' command in the default
     ->cmdfunc() implementations
   - change the prototype of ->setup_data_interface()

  A bunch of driver-related changes:
   - various cleanup, fixes and improvements of the MTK driver
   - OMAP DT bindings fixes
   - support for ->setup_data_interface() in the fsmc driver
   - support for imx7 in the gpmi driver
   - finalization of the denali driver rework (thanks to Masahiro for
     the work he's done on this driver)
   - fix "bitflips in erased pages" handling in the ifc driver
   - addition of PM ops and dynamic timing configuration to the atmel
     driver"

* tag 'for-linus-20170713' of git://git.infradead.org/linux-mtd: (118 commits)
  Documentation: ABI: mtd: describe "offset" more precisely
  mtd: Fix check in mtd_unpoint()
  mtd: nand: mtk: release lock on error path
  mtd: st_spi_fsm: remove SPINOR_OP_RDSR2 and use SPINOR_OP_RDCR instead
  mtd: spi-nor: cqspi: remove duplicate const
  mtd: spi-nor: Add support for Spansion S25FL064L
  mtd: spi-nor: Add support for mx66u51235f
  mtd: nand: mtk: add ->setup_data_interface() hook
  mtd: nand: mtk: remove unneeded mtk_ecc_hw_init from mtk_ecc_resume
  mtd: nand: mtk: remove unneeded mtk_nfc_hw_init from mtk_nfc_resume
  mtd: nand: mtk: disable ecc irq when writing page with hwecc
  mtd: nand: mtk: fix incorrect register setting order about ecc irq
  mtd: partitions: fixup some allocate_partition() whitespace
  mtd: parsers: trx: fix pr_err format for printing offset
  MAINTAINERS: Update SPI NOR subsystem git repositories
  mtd: extract TRX parser out of bcm47xxpart into a separated module
  mtd: partitions: add support for partition parsers
  mtd: partitions: add support for subpartitions
  mtd: partitions: rename "master" to the "parent" where appropriate
  mtd: partitions: remove sysfs files when deleting all master's partitions
  ...

79 files changed:
Documentation/ABI/testing/sysfs-class-mtd
Documentation/devicetree/bindings/mtd/denali-nand.txt
Documentation/devicetree/bindings/mtd/elm.txt
Documentation/devicetree/bindings/mtd/gpmc-nand.txt
Documentation/devicetree/bindings/mtd/gpmc-nor.txt
Documentation/devicetree/bindings/mtd/gpmc-onenand.txt
Documentation/devicetree/bindings/mtd/gpmi-nand.txt
Documentation/devicetree/bindings/mtd/microchip,mchp23k256.txt [new file with mode: 0644]
Documentation/devicetree/bindings/mtd/mtk-nand.txt
Documentation/devicetree/bindings/mtd/nand.txt
Documentation/devicetree/bindings/mtd/partition.txt
Documentation/devicetree/bindings/net/gpmc-eth.txt
MAINTAINERS
drivers/mtd/Kconfig
drivers/mtd/Makefile
drivers/mtd/bcm47xxpart.c
drivers/mtd/chips/cfi_cmdset_0020.c
drivers/mtd/devices/Kconfig
drivers/mtd/devices/Makefile
drivers/mtd/devices/m25p80.c
drivers/mtd/devices/mchp23k256.c [new file with mode: 0644]
drivers/mtd/devices/mtd_dataflash.c
drivers/mtd/devices/serial_flash_cmds.h
drivers/mtd/devices/st_spi_fsm.c
drivers/mtd/maps/physmap_of_gemini.c
drivers/mtd/mtdcore.c
drivers/mtd/mtdpart.c
drivers/mtd/nand/Kconfig
drivers/mtd/nand/atmel/nand-controller.c
drivers/mtd/nand/bcm47xxnflash/ops_bcm4706.c
drivers/mtd/nand/cafe_nand.c
drivers/mtd/nand/davinci_nand.c
drivers/mtd/nand/denali.c
drivers/mtd/nand/denali.h
drivers/mtd/nand/denali_dt.c
drivers/mtd/nand/denali_pci.c
drivers/mtd/nand/docg4.c
drivers/mtd/nand/fsl_elbc_nand.c
drivers/mtd/nand/fsl_ifc_nand.c
drivers/mtd/nand/fsmc_nand.c
drivers/mtd/nand/gpmi-nand/gpmi-lib.c
drivers/mtd/nand/gpmi-nand/gpmi-nand.c
drivers/mtd/nand/gpmi-nand/gpmi-nand.h
drivers/mtd/nand/hisi504_nand.c
drivers/mtd/nand/jz4780_nand.c
drivers/mtd/nand/mpc5121_nfc.c
drivers/mtd/nand/mtk_ecc.c
drivers/mtd/nand/mtk_ecc.h
drivers/mtd/nand/mtk_nand.c
drivers/mtd/nand/mxc_nand.c
drivers/mtd/nand/nand_base.c
drivers/mtd/nand/nand_micron.c
drivers/mtd/nand/orion_nand.c
drivers/mtd/nand/pxa3xx_nand.c
drivers/mtd/nand/qcom_nandc.c
drivers/mtd/nand/s3c2410.c
drivers/mtd/nand/sh_flctl.c
drivers/mtd/nand/sunxi_nand.c
drivers/mtd/nand/tango_nand.c
drivers/mtd/nand/vf610_nfc.c
drivers/mtd/parsers/Kconfig [new file with mode: 0644]
drivers/mtd/parsers/Makefile [new file with mode: 0644]
drivers/mtd/parsers/parser_trx.c [new file with mode: 0644]
drivers/mtd/spi-nor/Kconfig
drivers/mtd/spi-nor/aspeed-smc.c
drivers/mtd/spi-nor/atmel-quadspi.c
drivers/mtd/spi-nor/cadence-quadspi.c
drivers/mtd/spi-nor/fsl-quadspi.c
drivers/mtd/spi-nor/hisi-sfc.c
drivers/mtd/spi-nor/intel-spi.c
drivers/mtd/spi-nor/mtk-quadspi.c
drivers/mtd/spi-nor/nxp-spifi.c
drivers/mtd/spi-nor/spi-nor.c
drivers/mtd/spi-nor/stm32-quadspi.c
drivers/mtd/tests/subpagetest.c
drivers/staging/mt29f_spinand/mt29f_spinand.c
include/linux/mtd/nand.h
include/linux/mtd/partitions.h
include/linux/mtd/spi-nor.h

index 3b5c3bca9186d13e8cf5f1911b2fc88424c6970f..f34e592301d1dbd9190d8557e43fd05bcde08f80 100644 (file)
@@ -229,6 +229,6 @@ KernelVersion:      4.1
 Contact:       linux-mtd@lists.infradead.org
 Description:
                For a partition, the offset of that partition from the start
-               of the master device in bytes. This attribute is absent on
-               main devices, so it can be used to distinguish between
-               partitions and devices that aren't partitions.
+               of the parent (another partition or a flash device) in bytes.
+               This attribute is absent on flash devices, so it can be used
+               to distinguish them from partitions.
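
The new text above gives the rule for telling flash devices and partitions apart. A minimal userspace sketch of that check follows; the helper name and the hard-coded "mtd0" are illustrative, while the /sys/class/mtd/<name>/offset path is the attribute documented here.

    /* Sketch: an mtd device is a partition iff its sysfs "offset" attribute exists. */
    #include <stdio.h>
    #include <unistd.h>

    static int mtd_is_partition(const char *name)
    {
            char path[128];

            snprintf(path, sizeof(path), "/sys/class/mtd/%s/offset", name);
            return access(path, F_OK) == 0;
    }

    int main(void)
    {
            const char *name = "mtd0";      /* illustrative device name */

            printf("%s is %sa partition\n", name,
                   mtd_is_partition(name) ? "" : "not ");
            return 0;
    }
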
index e593bbeb2115deb92966d593c18f002e5f269e41..504291d2e5c2e5e02b880738c7f453b527dee63c 100644 (file)
@@ -3,10 +3,23 @@
 Required properties:
   - compatible : should be one of the following:
       "altr,socfpga-denali-nand"            - for Altera SOCFPGA
+      "socionext,uniphier-denali-nand-v5a"  - for Socionext UniPhier (v5a)
+      "socionext,uniphier-denali-nand-v5b"  - for Socionext UniPhier (v5b)
   - reg : should contain registers location and length for data and reg.
   - reg-names: Should contain the reg names "nand_data" and "denali_reg"
   - interrupts : The interrupt number.
 
+Optional properties:
+  - nand-ecc-step-size: see nand.txt for details.  If present, the value must be
+      512        for "altr,socfpga-denali-nand"
+      1024       for "socionext,uniphier-denali-nand-v5a"
+      1024       for "socionext,uniphier-denali-nand-v5b"
+  - nand-ecc-strength: see nand.txt for details.  Valid values are:
+      8, 15      for "altr,socfpga-denali-nand"
+      8, 16, 24  for "socionext,uniphier-denali-nand-v5a"
+      8, 16      for "socionext,uniphier-denali-nand-v5b"
+  - nand-ecc-maximize: see nand.txt for details
+
 The device tree may optionally contain sub-nodes describing partitions of the
 address space. See partition.txt for more detail.
 
index 8c1528c421d47b2ca3995cff8a4f80af4942c8fb..59ddc61c10768dc1dfe73adbd5cf462e0dbc2da8 100644 (file)
@@ -1,7 +1,7 @@
 Error location module
 
 Required properties:
-- compatible: Must be "ti,am33xx-elm"
+- compatible: Must be "ti,am3352-elm"
 - reg: physical base address and size of the registers map.
 - interrupts: Interrupt number for the elm.
 
index 174f68c26c1b2a66a09c1b88dc7762d16ee54380..dd559045593d7be3cd4a984c643c7733bd7a3f76 100644 (file)
@@ -5,7 +5,7 @@ the GPMC controller with a name of "nand".
 
 All timing relevant properties as well as generic gpmc child properties are
 explained in a separate documents - please refer to
-Documentation/devicetree/bindings/bus/ti-gpmc.txt
+Documentation/devicetree/bindings/memory-controllers/omap-gpmc.txt
 
 For NAND specific properties such as ECC modes or bus width, please refer to
 Documentation/devicetree/bindings/mtd/nand.txt
index 4828c17bb784bd78d61d0766e80e6ef5ded9938c..131d3a74d0bd453f48c3f31e5487db4e3c71c6e1 100644 (file)
@@ -5,7 +5,7 @@ child nodes of the GPMC controller with a name of "nor".
 
 All timing relevant properties as well as generic GPMC child properties are
 explained in a separate documents. Please refer to
-Documentation/devicetree/bindings/bus/ti-gpmc.txt
+Documentation/devicetree/bindings/memory-controllers/omap-gpmc.txt
 
 Required properties:
 - bank-width:          Width of NOR flash in bytes. GPMC supports 8-bit and
@@ -28,7 +28,7 @@ Required properties:
 
 Optional properties:
 - gpmc,XXX             Additional GPMC timings and settings parameters. See
-                       Documentation/devicetree/bindings/bus/ti-gpmc.txt
+                       Documentation/devicetree/bindings/memory-controllers/omap-gpmc.txt
 
 Optional properties for partition table parsing:
 - #address-cells: should be set to 1
index 5d8fa527c496a1e29674064805c66461b20f2eed..b6e8bfd024f461902efbc09e52502fb7089c3048 100644 (file)
@@ -5,7 +5,7 @@ the GPMC controller with a name of "onenand".
 
 All timing relevant properties as well as generic gpmc child properties are
 explained in a separate documents - please refer to
-Documentation/devicetree/bindings/bus/ti-gpmc.txt
+Documentation/devicetree/bindings/memory-controllers/omap-gpmc.txt
 
 Required properties:
 
index d02acaff3c35e98476b91c26757f30355736cb3b..b289ef3c1b7e4a8ae798fc1b7f7db38880b3df23 100644 (file)
@@ -4,7 +4,12 @@ The GPMI nand controller provides an interface to control the
 NAND flash chips.
 
 Required properties:
-  - compatible : should be "fsl,<chip>-gpmi-nand"
+  - compatible : should be "fsl,<chip>-gpmi-nand", chip can be:
+    * imx23
+    * imx28
+    * imx6q
+    * imx6sx
+    * imx7d
   - reg : should contain registers location and length for gpmi and bch.
   - reg-names: Should contain the reg names "gpmi-nand" and "bch"
   - interrupts : BCH interrupt number.
@@ -13,6 +18,13 @@ Required properties:
     and GPMI DMA channel ID.
     Refer to dma.txt and fsl-mxs-dma.txt for details.
   - dma-names: Must be "rx-tx".
+  - clocks : clocks phandle and clock specifier corresponding to each clock
+    specified in clock-names.
+  - clock-names : The "gpmi_io" clock is always required. Exactly which other
+    clocks are required depends on the chip:
+    * imx23/imx28 : "gpmi_io"
+    * imx6q/sx : "gpmi_io", "gpmi_apb", "gpmi_bch", "gpmi_bch_apb", "per1_bch"
+    * imx7d : "gpmi_io", "gpmi_bch_apb"
 
 Optional properties:
   - nand-on-flash-bbt: boolean to enable on flash bbt option if not
diff --git a/Documentation/devicetree/bindings/mtd/microchip,mchp23k256.txt b/Documentation/devicetree/bindings/mtd/microchip,mchp23k256.txt
new file mode 100644 (file)
index 0000000..7328eb9
--- /dev/null
@@ -0,0 +1,18 @@
+* MTD SPI driver for Microchip 23K256 (and similar) serial SRAM
+
+Required properties:
+- #address-cells, #size-cells : Must be present if the device has sub-nodes
+  representing partitions.
+- compatible : Must be one of "microchip,mchp23k256" or "microchip,mchp23lcv1024"
+- reg : Chip-Select number
+- spi-max-frequency : Maximum frequency of the SPI bus the chip can operate at
+
+Example:
+
+       spi-sram@0 {
+               #address-cells = <1>;
+               #size-cells = <1>;
+               compatible = "microchip,mchp23k256";
+               reg = <0>;
+               spi-max-frequency = <20000000>;
+       };
index 069c192ed5c2f1b629b91a0ad4b98b73fdfda5d1..dbf9e054c11c0f3a68ba2783a042ca25e12d6b78 100644 (file)
@@ -12,7 +12,8 @@ tree nodes.
 
 The first part of NFC is NAND Controller Interface (NFI) HW.
 Required NFI properties:
-- compatible:                  Should be "mediatek,mtxxxx-nfc".
+- compatible:                  Should be one of "mediatek,mt2701-nfc",
+                               "mediatek,mt2712-nfc".
 - reg:                         Base physical address and size of NFI.
 - interrupts:                  Interrupts of NFI.
 - clocks:                      NFI required clocks.
@@ -141,7 +142,7 @@ Example:
 ==============
 
 Required BCH properties:
-- compatible:  Should be "mediatek,mtxxxx-ecc".
+- compatible:  Should be one of "mediatek,mt2701-ecc", "mediatek,mt2712-ecc".
 - reg:         Base physical address and size of ECC.
 - interrupts:  Interrupts of ECC.
 - clocks:      ECC required clocks.
index b05601600083d91c17c649b8cc5603011628f2a3..133f3813719c26398b6aede5b2126e0df33df28a 100644 (file)
@@ -21,7 +21,7 @@ Optional NAND chip properties:
 
 - nand-ecc-mode : String, operation mode of the NAND ecc mode.
                  Supported values are: "none", "soft", "hw", "hw_syndrome",
-                 "hw_oob_first".
+                 "hw_oob_first", "on-die".
                  Deprecated values:
                  "soft_bch": use "soft" and nand-ecc-algo instead
 - nand-ecc-algo: string, algorithm of NAND ECC.
index 81a224da63be54a4b7a5b248f57a985423b36213..36f3b769a62675cccac1a23ec8287e08bd91653e 100644 (file)
@@ -1,29 +1,49 @@
-Representing flash partitions in devicetree
+Flash partitions in device tree
+===============================
 
-Partitions can be represented by sub-nodes of an mtd device. This can be used
+Flash devices can be partitioned into one or more functional ranges (e.g. "boot
+code", "nvram", "kernel").
+
+Different devices may be partitioned in different ways. Some may use a fixed
+flash layout set at production time. Some may use an on-flash table that
+describes the geometry and naming/purpose of each functional region. It is
+also possible to see these methods mixed.
+
+To assist system software in locating partitions, we allow describing which
+method is used for a given flash device. To describe the method there should be
+a subnode of the flash device that is named 'partitions'. It must have a
+'compatible' property, which is used to identify the method to use.
+
+We currently only document a binding for fixed layouts.
+
+
+Fixed Partitions
+================
+
+Partitions can be represented by sub-nodes of a flash device. This can be used
 on platforms which have strong conventions about which portions of a flash are
 used for what purposes, but which don't use an on-flash partition table such
 as RedBoot.
 
-The partition table should be a subnode of the mtd node and should be named
+The partition table should be a subnode of the flash node and should be named
 'partitions'. This node should have the following property:
 - compatible : (required) must be "fixed-partitions"
 Partitions are then defined in subnodes of the partitions node.
 
-For backwards compatibility partitions as direct subnodes of the mtd device are
+For backwards compatibility partitions as direct subnodes of the flash device are
 supported. This use is discouraged.
 NOTE: also for backwards compatibility, direct subnodes that have a compatible
 string are not considered partitions, as they may be used for other bindings.
 
 #address-cells & #size-cells must both be present in the partitions subnode of the
-mtd device. There are two valid values for both:
+flash device. There are two valid values for both:
 <1>: for partitions that require a single 32-bit cell to represent their
      size/address (aka the value is below 4 GiB)
 <2>: for partitions that require two 32-bit cells to represent their
      size/address (aka the value is 4 GiB or greater).
 
 Required properties:
-- reg : The partition's offset and size within the mtd bank.
+- reg : The partition's offset and size within the flash
 
 Optional properties:
 - label : The label / name for this partition.  If omitted, the label is taken
index ace4a64b3695930254570d742704eb2abc888c37..f7da3d73ca1b2e15d71160b9ec811f2aad274e93 100644 (file)
@@ -9,7 +9,7 @@ the GPMC controller with an "ethernet" name.
 
 All timing relevant properties as well as generic GPMC child properties are
 explained in a separate documents. Please refer to
-Documentation/devicetree/bindings/bus/ti-gpmc.txt
+Documentation/devicetree/bindings/memory-controllers/omap-gpmc.txt
 
 For the properties relevant to the ethernet controller connected to the GPMC
 refer to the binding documentation of the device. For example, the documentation
@@ -43,7 +43,7 @@ Required properties:
 
 Optional properties:
 - gpmc,XXX             Additional GPMC timings and settings parameters. See
-                       Documentation/devicetree/bindings/bus/ti-gpmc.txt
+                       Documentation/devicetree/bindings/memory-controllers/omap-gpmc.txt
 
 Example:
 
index 1347726cf9d531705926c4f95aeadb8962d9ee64..e1a22ed547b1b5751dbae50a70e7d611d9c7f17e 100644 (file)
@@ -3974,6 +3974,12 @@ M:       Pali Rohár <pali.rohar@gmail.com>
 S:     Maintained
 F:     drivers/platform/x86/dell-wmi.c
 
+DENALI NAND DRIVER
+M:     Masahiro Yamada <yamada.masahiro@socionext.com>
+L:     linux-mtd@lists.infradead.org
+S:     Supported
+F:     drivers/mtd/nand/denali*
+
 DESIGNWARE USB2 DRD IP DRIVER
 M:     John Youn <johnyoun@synopsys.com>
 L:     linux-usb@vger.kernel.org
@@ -12464,7 +12470,8 @@ M:      Marek Vasut <marek.vasut@gmail.com>
 L:     linux-mtd@lists.infradead.org
 W:     http://www.linux-mtd.infradead.org/
 Q:     http://patchwork.ozlabs.org/project/linux-mtd/list/
-T:     git git://github.com/spi-nor/linux.git
+T:     git git://git.infradead.org/linux-mtd.git spi-nor/fixes
+T:     git git://git.infradead.org/l2-mtd.git spi-nor/next
 S:     Maintained
 F:     drivers/mtd/spi-nor/
 F:     include/linux/mtd/spi-nor.h
index e83a279f1217e7b9959ed7caa7e4a8be13309a1d..5a2d71729b9ace7a67e8216f7d5ba61fddb471d7 100644 (file)
@@ -155,6 +155,10 @@ config MTD_BCM47XX_PARTS
          This provides partitions parser for devices based on BCM47xx
          boards.
 
+menu "Partition parsers"
+source "drivers/mtd/parsers/Kconfig"
+endmenu
+
 comment "User Modules And Translation Layers"
 
 #
index 99bb9a1f6e16fc324b7b0a794e06f6abbeb39575..151d60df303acdbab6db611440be753ab4987b2a 100644 (file)
@@ -13,6 +13,7 @@ obj-$(CONFIG_MTD_AFS_PARTS)   += afs.o
 obj-$(CONFIG_MTD_AR7_PARTS)    += ar7part.o
 obj-$(CONFIG_MTD_BCM63XX_PARTS)        += bcm63xxpart.o
 obj-$(CONFIG_MTD_BCM47XX_PARTS)        += bcm47xxpart.o
+obj-y                          += parsers/
 
 # 'Users' - code which presents functionality to userspace.
 obj-$(CONFIG_MTD_BLKDEVS)      += mtd_blkdevs.o
index d10fa6c8f074648e0a7ec5e05562e15f4695b849..fe2581d9d882f2f480f4f5ec4b458cb3d67268e9 100644 (file)
@@ -43,7 +43,8 @@
 #define ML_MAGIC2                      0x26594131
 #define TRX_MAGIC                      0x30524448
 #define SHSQ_MAGIC                     0x71736873      /* shsq (weird ZTE H218N endianness) */
-#define UBI_EC_MAGIC                   0x23494255      /* UBI# */
+
+static const char * const trx_types[] = { "trx", NULL };
 
 struct trx_header {
        uint32_t magic;
@@ -62,89 +63,6 @@ static void bcm47xxpart_add_part(struct mtd_partition *part, const char *name,
        part->mask_flags = mask_flags;
 }
 
-static const char *bcm47xxpart_trx_data_part_name(struct mtd_info *master,
-                                                 size_t offset)
-{
-       uint32_t buf;
-       size_t bytes_read;
-       int err;
-
-       err  = mtd_read(master, offset, sizeof(buf), &bytes_read,
-                       (uint8_t *)&buf);
-       if (err && !mtd_is_bitflip(err)) {
-               pr_err("mtd_read error while parsing (offset: 0x%X): %d\n",
-                       offset, err);
-               goto out_default;
-       }
-
-       if (buf == UBI_EC_MAGIC)
-               return "ubi";
-
-out_default:
-       return "rootfs";
-}
-
-static int bcm47xxpart_parse_trx(struct mtd_info *master,
-                                struct mtd_partition *trx,
-                                struct mtd_partition *parts,
-                                size_t parts_len)
-{
-       struct trx_header header;
-       size_t bytes_read;
-       int curr_part = 0;
-       int i, err;
-
-       if (parts_len < 3) {
-               pr_warn("No enough space to add TRX partitions!\n");
-               return -ENOMEM;
-       }
-
-       err = mtd_read(master, trx->offset, sizeof(header), &bytes_read,
-                      (uint8_t *)&header);
-       if (err && !mtd_is_bitflip(err)) {
-               pr_err("mtd_read error while reading TRX header: %d\n", err);
-               return err;
-       }
-
-       i = 0;
-
-       /* We have LZMA loader if offset[2] points to sth */
-       if (header.offset[2]) {
-               bcm47xxpart_add_part(&parts[curr_part++], "loader",
-                                    trx->offset + header.offset[i], 0);
-               i++;
-       }
-
-       if (header.offset[i]) {
-               bcm47xxpart_add_part(&parts[curr_part++], "linux",
-                                    trx->offset + header.offset[i], 0);
-               i++;
-       }
-
-       if (header.offset[i]) {
-               size_t offset = trx->offset + header.offset[i];
-               const char *name = bcm47xxpart_trx_data_part_name(master,
-                                                                 offset);
-
-               bcm47xxpart_add_part(&parts[curr_part++], name, offset, 0);
-               i++;
-       }
-
-       /*
-        * Assume that every partition ends at the beginning of the one it is
-        * followed by.
-        */
-       for (i = 0; i < curr_part; i++) {
-               u64 next_part_offset = (i < curr_part - 1) ?
-                                       parts[i + 1].offset :
-                                       trx->offset + trx->size;
-
-               parts[i].size = next_part_offset - parts[i].offset;
-       }
-
-       return curr_part;
-}
-
 /**
  * bcm47xxpart_bootpartition - gets index of TRX partition used by bootloader
  *
@@ -362,17 +280,10 @@ static int bcm47xxpart_parse(struct mtd_info *master,
        for (i = 0; i < trx_num; i++) {
                struct mtd_partition *trx = &parts[trx_parts[i]];
 
-               if (i == bcm47xxpart_bootpartition()) {
-                       int num_parts;
-
-                       num_parts = bcm47xxpart_parse_trx(master, trx,
-                                                         parts + curr_part,
-                                                         BCM47XXPART_MAX_PARTS - curr_part);
-                       if (num_parts > 0)
-                               curr_part += num_parts;
-               } else {
+               if (i == bcm47xxpart_bootpartition())
+                       trx->types = trx_types;
+               else
                        trx->name = "failsafe";
-               }
        }
 
        *pparts = parts;
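
The hunk above is the driver-side half of the sub-partition rework mentioned in the pull text: bcm47xxpart no longer parses the TRX payload itself, it only tags the partition with the names of the parsers that should run on it. Below is a hedged sketch of the same pattern from a static partition table, assuming the core applies the named parsers when the partition is added; the partition names, offsets and registration helper are invented for illustration, while the .types field and the "trx" parser name are taken from this series.

    #include <linux/kernel.h>
    #include <linux/sizes.h>
    #include <linux/mtd/mtd.h>
    #include <linux/mtd/partitions.h>

    /* Parsers, by name, that should run on the tagged partition. */
    static const char * const firmware_part_types[] = { "trx", NULL };

    static const struct mtd_partition example_parts[] = {
            {
                    .name   = "boot",
                    .offset = 0,
                    .size   = SZ_128K,
            },
            {
                    .name   = "firmware",
                    .offset = SZ_128K,
                    .size   = MTDPART_SIZ_FULL,
                    .types  = firmware_part_types,  /* sub-partitions come from "trx" */
            },
    };

    /* Register the table as usual; "firmware" then gets its TRX sub-partitions. */
    static int example_register(struct mtd_info *master)
    {
            return mtd_device_register(master, example_parts,
                                       ARRAY_SIZE(example_parts));
    }
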
index 94d3eb42c4d5f2c55dc6db811ae3a5280e58f95e..7d342965f392232d89919a0182982e7764982b82 100644 (file)
@@ -666,7 +666,7 @@ cfi_staa_writev(struct mtd_info *mtd, const struct kvec *vecs,
        size_t   totlen = 0, thislen;
        int      ret = 0;
        size_t   buflen = 0;
-       static char *buffer;
+       char *buffer;
 
        if (!ECCBUF_SIZE) {
                /* We should fall back to a general writev implementation.
index 58329d2dacd1f74da67e9c45e001b01de41d4e94..6def5445e03e185cdbb92a997bc0a0db95a37b41 100644 (file)
@@ -95,6 +95,16 @@ config MTD_M25P80
          if you want to specify device partitioning or to use a device which
          doesn't support the JEDEC ID instruction.
 
+config MTD_MCHP23K256
+       tristate "Microchip 23K256 SRAM"
+       depends on SPI_MASTER
+       help
+         This enables access to Microchip 23K256 SRAM chips, using SPI.
+
+         Set up your spi devices with the right board-specific
+         platform data, or a device tree description if you want to
+         specify device partitioning
+
 config MTD_SPEAR_SMI
        tristate "SPEAR MTD NOR Support through SMI controller"
        depends on PLAT_SPEAR
index 7912d3a0ee343b045daf1dbdf9ae93061afad4eb..f0f767624cc68d987e968e826b1c8d16d820e078 100644 (file)
@@ -12,6 +12,7 @@ obj-$(CONFIG_MTD_LART)                += lart.o
 obj-$(CONFIG_MTD_BLOCK2MTD)    += block2mtd.o
 obj-$(CONFIG_MTD_DATAFLASH)    += mtd_dataflash.o
 obj-$(CONFIG_MTD_M25P80)       += m25p80.o
+obj-$(CONFIG_MTD_MCHP23K256)   += mchp23k256.o
 obj-$(CONFIG_MTD_SPEAR_SMI)    += spear_smi.o
 obj-$(CONFIG_MTD_SST25L)       += sst25l.o
 obj-$(CONFIG_MTD_BCM47XXSFLASH)        += bcm47xxsflash.o
index c4df3b1bded0bac4acf44bbbe8bb4430c073f77a..00eea6fd379cc68d51dbe197eec9b8b310341fdb 100644 (file)
@@ -78,11 +78,17 @@ static ssize_t m25p80_write(struct spi_nor *nor, loff_t to, size_t len,
 {
        struct m25p *flash = nor->priv;
        struct spi_device *spi = flash->spi;
-       struct spi_transfer t[2] = {};
+       unsigned int inst_nbits, addr_nbits, data_nbits, data_idx;
+       struct spi_transfer t[3] = {};
        struct spi_message m;
        int cmd_sz = m25p_cmdsz(nor);
        ssize_t ret;
 
+       /* get transfer protocols. */
+       inst_nbits = spi_nor_get_protocol_inst_nbits(nor->write_proto);
+       addr_nbits = spi_nor_get_protocol_addr_nbits(nor->write_proto);
+       data_nbits = spi_nor_get_protocol_data_nbits(nor->write_proto);
+
        spi_message_init(&m);
 
        if (nor->program_opcode == SPINOR_OP_AAI_WP && nor->sst_write_second)
@@ -92,12 +98,27 @@ static ssize_t m25p80_write(struct spi_nor *nor, loff_t to, size_t len,
        m25p_addr2cmd(nor, to, flash->command);
 
        t[0].tx_buf = flash->command;
+       t[0].tx_nbits = inst_nbits;
        t[0].len = cmd_sz;
        spi_message_add_tail(&t[0], &m);
 
-       t[1].tx_buf = buf;
-       t[1].len = len;
-       spi_message_add_tail(&t[1], &m);
+       /* split the op code and address bytes into two transfers if needed. */
+       data_idx = 1;
+       if (addr_nbits != inst_nbits) {
+               t[0].len = 1;
+
+               t[1].tx_buf = &flash->command[1];
+               t[1].tx_nbits = addr_nbits;
+               t[1].len = cmd_sz - 1;
+               spi_message_add_tail(&t[1], &m);
+
+               data_idx = 2;
+       }
+
+       t[data_idx].tx_buf = buf;
+       t[data_idx].tx_nbits = data_nbits;
+       t[data_idx].len = len;
+       spi_message_add_tail(&t[data_idx], &m);
 
        ret = spi_sync(spi, &m);
        if (ret)
@@ -109,18 +130,6 @@ static ssize_t m25p80_write(struct spi_nor *nor, loff_t to, size_t len,
        return ret;
 }
 
-static inline unsigned int m25p80_rx_nbits(struct spi_nor *nor)
-{
-       switch (nor->flash_read) {
-       case SPI_NOR_DUAL:
-               return 2;
-       case SPI_NOR_QUAD:
-               return 4;
-       default:
-               return 0;
-       }
-}
-
 /*
  * Read an address range from the nor chip.  The address range
  * may be any size provided it is within the physical boundaries.
@@ -130,13 +139,20 @@ static ssize_t m25p80_read(struct spi_nor *nor, loff_t from, size_t len,
 {
        struct m25p *flash = nor->priv;
        struct spi_device *spi = flash->spi;
-       struct spi_transfer t[2];
+       unsigned int inst_nbits, addr_nbits, data_nbits, data_idx;
+       struct spi_transfer t[3];
        struct spi_message m;
        unsigned int dummy = nor->read_dummy;
        ssize_t ret;
+       int cmd_sz;
+
+       /* get transfer protocols. */
+       inst_nbits = spi_nor_get_protocol_inst_nbits(nor->read_proto);
+       addr_nbits = spi_nor_get_protocol_addr_nbits(nor->read_proto);
+       data_nbits = spi_nor_get_protocol_data_nbits(nor->read_proto);
 
        /* convert the dummy cycles to the number of bytes */
-       dummy /= 8;
+       dummy = (dummy * addr_nbits) / 8;
 
        if (spi_flash_read_supported(spi)) {
                struct spi_flash_read_message msg;
@@ -149,10 +165,9 @@ static ssize_t m25p80_read(struct spi_nor *nor, loff_t from, size_t len,
                msg.read_opcode = nor->read_opcode;
                msg.addr_width = nor->addr_width;
                msg.dummy_bytes = dummy;
-               /* TODO: Support other combinations */
-               msg.opcode_nbits = SPI_NBITS_SINGLE;
-               msg.addr_nbits = SPI_NBITS_SINGLE;
-               msg.data_nbits = m25p80_rx_nbits(nor);
+               msg.opcode_nbits = inst_nbits;
+               msg.addr_nbits = addr_nbits;
+               msg.data_nbits = data_nbits;
 
                ret = spi_flash_read(spi, &msg);
                if (ret < 0)
@@ -167,20 +182,45 @@ static ssize_t m25p80_read(struct spi_nor *nor, loff_t from, size_t len,
        m25p_addr2cmd(nor, from, flash->command);
 
        t[0].tx_buf = flash->command;
+       t[0].tx_nbits = inst_nbits;
        t[0].len = m25p_cmdsz(nor) + dummy;
        spi_message_add_tail(&t[0], &m);
 
-       t[1].rx_buf = buf;
-       t[1].rx_nbits = m25p80_rx_nbits(nor);
-       t[1].len = min3(len, spi_max_transfer_size(spi),
-                       spi_max_message_size(spi) - t[0].len);
-       spi_message_add_tail(&t[1], &m);
+       /*
+        * Set all dummy/mode cycle bits to avoid sending some manufacturer
+        * specific pattern, which might make the memory enter its Continuous
+        * Read mode by mistake.
+        * Based on the different mode cycle bit patterns listed and described
+        * in the JESD216B specification, the 0xff value works for all memories
+        * and all manufacturers.
+        */
+       cmd_sz = t[0].len;
+       memset(flash->command + cmd_sz - dummy, 0xff, dummy);
+
+       /* split the op code and address bytes into two transfers if needed. */
+       data_idx = 1;
+       if (addr_nbits != inst_nbits) {
+               t[0].len = 1;
+
+               t[1].tx_buf = &flash->command[1];
+               t[1].tx_nbits = addr_nbits;
+               t[1].len = cmd_sz - 1;
+               spi_message_add_tail(&t[1], &m);
+
+               data_idx = 2;
+       }
+
+       t[data_idx].rx_buf = buf;
+       t[data_idx].rx_nbits = data_nbits;
+       t[data_idx].len = min3(len, spi_max_transfer_size(spi),
+                              spi_max_message_size(spi) - cmd_sz);
+       spi_message_add_tail(&t[data_idx], &m);
 
        ret = spi_sync(spi, &m);
        if (ret)
                return ret;
 
-       ret = m.actual_length - m25p_cmdsz(nor) - dummy;
+       ret = m.actual_length - cmd_sz;
        if (ret < 0)
                return -EIO;
        return ret;
@@ -196,7 +236,11 @@ static int m25p_probe(struct spi_device *spi)
        struct flash_platform_data      *data;
        struct m25p *flash;
        struct spi_nor *nor;
-       enum read_mode mode = SPI_NOR_NORMAL;
+       struct spi_nor_hwcaps hwcaps = {
+               .mask = SNOR_HWCAPS_READ |
+                       SNOR_HWCAPS_READ_FAST |
+                       SNOR_HWCAPS_PP,
+       };
        char *flash_name;
        int ret;
 
@@ -221,10 +265,19 @@ static int m25p_probe(struct spi_device *spi)
        spi_set_drvdata(spi, flash);
        flash->spi = spi;
 
-       if (spi->mode & SPI_RX_QUAD)
-               mode = SPI_NOR_QUAD;
-       else if (spi->mode & SPI_RX_DUAL)
-               mode = SPI_NOR_DUAL;
+       if (spi->mode & SPI_RX_QUAD) {
+               hwcaps.mask |= SNOR_HWCAPS_READ_1_1_4;
+
+               if (spi->mode & SPI_TX_QUAD)
+                       hwcaps.mask |= (SNOR_HWCAPS_READ_1_4_4 |
+                                       SNOR_HWCAPS_PP_1_1_4 |
+                                       SNOR_HWCAPS_PP_1_4_4);
+       } else if (spi->mode & SPI_RX_DUAL) {
+               hwcaps.mask |= SNOR_HWCAPS_READ_1_1_2;
+
+               if (spi->mode & SPI_TX_DUAL)
+                       hwcaps.mask |= SNOR_HWCAPS_READ_1_2_2;
+       }
 
        if (data && data->name)
                nor->mtd.name = data->name;
@@ -241,7 +294,7 @@ static int m25p_probe(struct spi_device *spi)
        else
                flash_name = spi->modalias;
 
-       ret = spi_nor_scan(nor, flash_name, mode);
+       ret = spi_nor_scan(nor, flash_name, &hwcaps);
        if (ret)
                return ret;
 
diff --git a/drivers/mtd/devices/mchp23k256.c b/drivers/mtd/devices/mchp23k256.c
new file mode 100644 (file)
index 0000000..8956b7d
--- /dev/null
@@ -0,0 +1,236 @@
+/*
+ * mchp23k256.c
+ *
+ * Driver for Microchip 23k256 SPI RAM chips
+ *
+ * Copyright © 2016 Andrew Lunn <andrew@lunn.ch>
+ *
+ * This code is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+#include <linux/device.h>
+#include <linux/module.h>
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/partitions.h>
+#include <linux/mutex.h>
+#include <linux/sched.h>
+#include <linux/sizes.h>
+#include <linux/spi/flash.h>
+#include <linux/spi/spi.h>
+#include <linux/of_device.h>
+
+#define MAX_CMD_SIZE           4
+
+struct mchp23_caps {
+       u8 addr_width;
+       unsigned int size;
+};
+
+struct mchp23k256_flash {
+       struct spi_device       *spi;
+       struct mutex            lock;
+       struct mtd_info         mtd;
+       const struct mchp23_caps        *caps;
+};
+
+#define MCHP23K256_CMD_WRITE_STATUS    0x01
+#define MCHP23K256_CMD_WRITE           0x02
+#define MCHP23K256_CMD_READ            0x03
+#define MCHP23K256_MODE_SEQ            BIT(6)
+
+#define to_mchp23k256_flash(x) container_of(x, struct mchp23k256_flash, mtd)
+
+static void mchp23k256_addr2cmd(struct mchp23k256_flash *flash,
+                               unsigned int addr, u8 *cmd)
+{
+       int i;
+
+       /*
+        * Address is sent in big endian (MSB first) and we skip
+        * the first entry of the cmd array which contains the cmd
+        * opcode.
+        */
+       for (i = flash->caps->addr_width; i > 0; i--, addr >>= 8)
+               cmd[i] = addr;
+}
+
+static int mchp23k256_cmdsz(struct mchp23k256_flash *flash)
+{
+       return 1 + flash->caps->addr_width;
+}
+
+static int mchp23k256_write(struct mtd_info *mtd, loff_t to, size_t len,
+                           size_t *retlen, const unsigned char *buf)
+{
+       struct mchp23k256_flash *flash = to_mchp23k256_flash(mtd);
+       struct spi_transfer transfer[2] = {};
+       struct spi_message message;
+       unsigned char command[MAX_CMD_SIZE];
+
+       spi_message_init(&message);
+
+       command[0] = MCHP23K256_CMD_WRITE;
+       mchp23k256_addr2cmd(flash, to, command);
+
+       transfer[0].tx_buf = command;
+       transfer[0].len = mchp23k256_cmdsz(flash);
+       spi_message_add_tail(&transfer[0], &message);
+
+       transfer[1].tx_buf = buf;
+       transfer[1].len = len;
+       spi_message_add_tail(&transfer[1], &message);
+
+       mutex_lock(&flash->lock);
+
+       spi_sync(flash->spi, &message);
+
+       if (retlen && message.actual_length > sizeof(command))
+               *retlen += message.actual_length - sizeof(command);
+
+       mutex_unlock(&flash->lock);
+       return 0;
+}
+
+static int mchp23k256_read(struct mtd_info *mtd, loff_t from, size_t len,
+                          size_t *retlen, unsigned char *buf)
+{
+       struct mchp23k256_flash *flash = to_mchp23k256_flash(mtd);
+       struct spi_transfer transfer[2] = {};
+       struct spi_message message;
+       unsigned char command[MAX_CMD_SIZE];
+
+       spi_message_init(&message);
+
+       memset(&transfer, 0, sizeof(transfer));
+       command[0] = MCHP23K256_CMD_READ;
+       mchp23k256_addr2cmd(flash, from, command);
+
+       transfer[0].tx_buf = command;
+       transfer[0].len = mchp23k256_cmdsz(flash);
+       spi_message_add_tail(&transfer[0], &message);
+
+       transfer[1].rx_buf = buf;
+       transfer[1].len = len;
+       spi_message_add_tail(&transfer[1], &message);
+
+       mutex_lock(&flash->lock);
+
+       spi_sync(flash->spi, &message);
+
+       if (retlen && message.actual_length > sizeof(command))
+               *retlen += message.actual_length - sizeof(command);
+
+       mutex_unlock(&flash->lock);
+       return 0;
+}
+
+/*
+ * Set the device into sequential mode. This allows read/writes to the
+ * entire SRAM in a single operation
+ */
+static int mchp23k256_set_mode(struct spi_device *spi)
+{
+       struct spi_transfer transfer = {};
+       struct spi_message message;
+       unsigned char command[2];
+
+       spi_message_init(&message);
+
+       command[0] = MCHP23K256_CMD_WRITE_STATUS;
+       command[1] = MCHP23K256_MODE_SEQ;
+
+       transfer.tx_buf = command;
+       transfer.len = sizeof(command);
+       spi_message_add_tail(&transfer, &message);
+
+       return spi_sync(spi, &message);
+}
+
+static const struct mchp23_caps mchp23k256_caps = {
+       .size = SZ_32K,
+       .addr_width = 2,
+};
+
+static const struct mchp23_caps mchp23lcv1024_caps = {
+       .size = SZ_128K,
+       .addr_width = 3,
+};
+
+static int mchp23k256_probe(struct spi_device *spi)
+{
+       struct mchp23k256_flash *flash;
+       struct flash_platform_data *data;
+       int err;
+
+       flash = devm_kzalloc(&spi->dev, sizeof(*flash), GFP_KERNEL);
+       if (!flash)
+               return -ENOMEM;
+
+       flash->spi = spi;
+       mutex_init(&flash->lock);
+       spi_set_drvdata(spi, flash);
+
+       err = mchp23k256_set_mode(spi);
+       if (err)
+               return err;
+
+       data = dev_get_platdata(&spi->dev);
+
+       flash->caps = of_device_get_match_data(&spi->dev);
+       if (!flash->caps)
+               flash->caps = &mchp23k256_caps;
+
+       mtd_set_of_node(&flash->mtd, spi->dev.of_node);
+       flash->mtd.dev.parent   = &spi->dev;
+       flash->mtd.type         = MTD_RAM;
+       flash->mtd.flags        = MTD_CAP_RAM;
+       flash->mtd.writesize    = 1;
+       flash->mtd.size         = flash->caps->size;
+       flash->mtd._read        = mchp23k256_read;
+       flash->mtd._write       = mchp23k256_write;
+
+       err = mtd_device_register(&flash->mtd, data ? data->parts : NULL,
+                                 data ? data->nr_parts : 0);
+       if (err)
+               return err;
+
+       return 0;
+}
+
+static int mchp23k256_remove(struct spi_device *spi)
+{
+       struct mchp23k256_flash *flash = spi_get_drvdata(spi);
+
+       return mtd_device_unregister(&flash->mtd);
+}
+
+static const struct of_device_id mchp23k256_of_table[] = {
+       {
+               .compatible = "microchip,mchp23k256",
+               .data = &mchp23k256_caps,
+       },
+       {
+               .compatible = "microchip,mchp23lcv1024",
+               .data = &mchp23lcv1024_caps,
+       },
+       {}
+};
+MODULE_DEVICE_TABLE(of, mchp23k256_of_table);
+
+static struct spi_driver mchp23k256_driver = {
+       .driver = {
+               .name   = "mchp23k256",
+               .of_match_table = of_match_ptr(mchp23k256_of_table),
+       },
+       .probe          = mchp23k256_probe,
+       .remove         = mchp23k256_remove,
+};
+
+module_spi_driver(mchp23k256_driver);
+
+MODULE_DESCRIPTION("MTD SPI driver for MCHP23K256 RAM chips");
+MODULE_AUTHOR("Andrew Lunn <andre@lunn.ch>");
+MODULE_LICENSE("GPL v2");
+MODULE_ALIAS("spi:mchp23k256");
index f9e9bd1cfaa034a4e79f0d4458ca90d47236a6f3..5dc8bd042cc54b2d07407f123d0ee62bce738594 100644 (file)
 #define OP_WRITE_SECURITY_REVC 0x9A
 #define OP_WRITE_SECURITY      0x9B    /* revision D */
 
+#define CFI_MFR_ATMEL          0x1F
+
+#define DATAFLASH_SHIFT_EXTID  24
+#define DATAFLASH_SHIFT_ID     40
 
 struct dataflash {
-       uint8_t                 command[4];
+       u8                      command[4];
        char                    name[24];
 
        unsigned short          page_offset;    /* offset in flash address */
@@ -129,8 +133,7 @@ static int dataflash_waitready(struct spi_device *spi)
        for (;;) {
                status = dataflash_status(spi);
                if (status < 0) {
-                       pr_debug("%s: status %d?\n",
-                                       dev_name(&spi->dev), status);
+                       dev_dbg(&spi->dev, "status %d?\n", status);
                        status = 0;
                }
 
@@ -153,12 +156,11 @@ static int dataflash_erase(struct mtd_info *mtd, struct erase_info *instr)
        struct spi_transfer     x = { };
        struct spi_message      msg;
        unsigned                blocksize = priv->page_size << 3;
-       uint8_t                 *command;
-       uint32_t                rem;
+       u8                      *command;
+       u32                     rem;
 
-       pr_debug("%s: erase addr=0x%llx len 0x%llx\n",
-             dev_name(&spi->dev), (long long)instr->addr,
-             (long long)instr->len);
+       dev_dbg(&spi->dev, "erase addr=0x%llx len 0x%llx\n",
+               (long long)instr->addr, (long long)instr->len);
 
        div_u64_rem(instr->len, priv->page_size, &rem);
        if (rem)
@@ -187,11 +189,11 @@ static int dataflash_erase(struct mtd_info *mtd, struct erase_info *instr)
                pageaddr = pageaddr << priv->page_offset;
 
                command[0] = do_block ? OP_ERASE_BLOCK : OP_ERASE_PAGE;
-               command[1] = (uint8_t)(pageaddr >> 16);
-               command[2] = (uint8_t)(pageaddr >> 8);
+               command[1] = (u8)(pageaddr >> 16);
+               command[2] = (u8)(pageaddr >> 8);
                command[3] = 0;
 
-               pr_debug("ERASE %s: (%x) %x %x %x [%i]\n",
+               dev_dbg(&spi->dev, "ERASE %s: (%x) %x %x %x [%i]\n",
                        do_block ? "block" : "page",
                        command[0], command[1], command[2], command[3],
                        pageaddr);
@@ -200,8 +202,8 @@ static int dataflash_erase(struct mtd_info *mtd, struct erase_info *instr)
                (void) dataflash_waitready(spi);
 
                if (status < 0) {
-                       printk(KERN_ERR "%s: erase %x, err %d\n",
-                               dev_name(&spi->dev), pageaddr, status);
+                       dev_err(&spi->dev, "erase %x, err %d\n",
+                               pageaddr, status);
                        /* REVISIT:  can retry instr->retries times; or
                         * giveup and instr->fail_addr = instr->addr;
                         */
@@ -239,11 +241,11 @@ static int dataflash_read(struct mtd_info *mtd, loff_t from, size_t len,
        struct spi_transfer     x[2] = { };
        struct spi_message      msg;
        unsigned int            addr;
-       uint8_t                 *command;
+       u8                      *command;
        int                     status;
 
-       pr_debug("%s: read 0x%x..0x%x\n", dev_name(&priv->spi->dev),
-                       (unsigned)from, (unsigned)(from + len));
+       dev_dbg(&priv->spi->dev, "read 0x%x..0x%x\n",
+                 (unsigned int)from, (unsigned int)(from + len));
 
        /* Calculate flash page/byte address */
        addr = (((unsigned)from / priv->page_size) << priv->page_offset)
@@ -251,7 +253,7 @@ static int dataflash_read(struct mtd_info *mtd, loff_t from, size_t len,
 
        command = priv->command;
 
-       pr_debug("READ: (%x) %x %x %x\n",
+       dev_dbg(&priv->spi->dev, "READ: (%x) %x %x %x\n",
                command[0], command[1], command[2], command[3]);
 
        spi_message_init(&msg);
@@ -271,9 +273,9 @@ static int dataflash_read(struct mtd_info *mtd, loff_t from, size_t len,
         * fewer "don't care" bytes.  Both buffers stay unchanged.
         */
        command[0] = OP_READ_CONTINUOUS;
-       command[1] = (uint8_t)(addr >> 16);
-       command[2] = (uint8_t)(addr >> 8);
-       command[3] = (uint8_t)(addr >> 0);
+       command[1] = (u8)(addr >> 16);
+       command[2] = (u8)(addr >> 8);
+       command[3] = (u8)(addr >> 0);
        /* plus 4 "don't care" bytes */
 
        status = spi_sync(priv->spi, &msg);
@@ -283,8 +285,7 @@ static int dataflash_read(struct mtd_info *mtd, loff_t from, size_t len,
                *retlen = msg.actual_length - 8;
                status = 0;
        } else
-               pr_debug("%s: read %x..%x --> %d\n",
-                       dev_name(&priv->spi->dev),
+               dev_dbg(&priv->spi->dev, "read %x..%x --> %d\n",
                        (unsigned)from, (unsigned)(from + len),
                        status);
        return status;
@@ -308,10 +309,10 @@ static int dataflash_write(struct mtd_info *mtd, loff_t to, size_t len,
        size_t                  remaining = len;
        u_char                  *writebuf = (u_char *) buf;
        int                     status = -EINVAL;
-       uint8_t                 *command;
+       u8                      *command;
 
-       pr_debug("%s: write 0x%x..0x%x\n",
-               dev_name(&spi->dev), (unsigned)to, (unsigned)(to + len));
+       dev_dbg(&spi->dev, "write 0x%x..0x%x\n",
+               (unsigned int)to, (unsigned int)(to + len));
 
        spi_message_init(&msg);
 
@@ -328,7 +329,7 @@ static int dataflash_write(struct mtd_info *mtd, loff_t to, size_t len,
 
        mutex_lock(&priv->lock);
        while (remaining > 0) {
-               pr_debug("write @ %i:%i len=%i\n",
+               dev_dbg(&spi->dev, "write @ %i:%i len=%i\n",
                        pageaddr, offset, writelen);
 
                /* REVISIT:
@@ -356,13 +357,13 @@ static int dataflash_write(struct mtd_info *mtd, loff_t to, size_t len,
                        command[2] = (addr & 0x0000FF00) >> 8;
                        command[3] = 0;
 
-                       pr_debug("TRANSFER: (%x) %x %x %x\n",
+                       dev_dbg(&spi->dev, "TRANSFER: (%x) %x %x %x\n",
                                command[0], command[1], command[2], command[3]);
 
                        status = spi_sync(spi, &msg);
                        if (status < 0)
-                               pr_debug("%s: xfer %u -> %d\n",
-                                       dev_name(&spi->dev), addr, status);
+                               dev_dbg(&spi->dev, "xfer %u -> %d\n",
+                                       addr, status);
 
                        (void) dataflash_waitready(priv->spi);
                }
@@ -374,7 +375,7 @@ static int dataflash_write(struct mtd_info *mtd, loff_t to, size_t len,
                command[2] = (addr & 0x0000FF00) >> 8;
                command[3] = (addr & 0x000000FF);
 
-               pr_debug("PROGRAM: (%x) %x %x %x\n",
+               dev_dbg(&spi->dev, "PROGRAM: (%x) %x %x %x\n",
                        command[0], command[1], command[2], command[3]);
 
                x[1].tx_buf = writebuf;
@@ -383,8 +384,8 @@ static int dataflash_write(struct mtd_info *mtd, loff_t to, size_t len,
                status = spi_sync(spi, &msg);
                spi_transfer_del(x + 1);
                if (status < 0)
-                       pr_debug("%s: pgm %u/%u -> %d\n",
-                               dev_name(&spi->dev), addr, writelen, status);
+                       dev_dbg(&spi->dev, "pgm %u/%u -> %d\n",
+                               addr, writelen, status);
 
                (void) dataflash_waitready(priv->spi);
 
@@ -398,20 +399,20 @@ static int dataflash_write(struct mtd_info *mtd, loff_t to, size_t len,
                command[2] = (addr & 0x0000FF00) >> 8;
                command[3] = 0;
 
-               pr_debug("COMPARE: (%x) %x %x %x\n",
+               dev_dbg(&spi->dev, "COMPARE: (%x) %x %x %x\n",
                        command[0], command[1], command[2], command[3]);
 
                status = spi_sync(spi, &msg);
                if (status < 0)
-                       pr_debug("%s: compare %u -> %d\n",
-                               dev_name(&spi->dev), addr, status);
+                       dev_dbg(&spi->dev, "compare %u -> %d\n",
+                               addr, status);
 
                status = dataflash_waitready(priv->spi);
 
                /* Check result of the compare operation */
                if (status & (1 << 6)) {
-                       printk(KERN_ERR "%s: compare page %u, err %d\n",
-                               dev_name(&spi->dev), pageaddr, status);
+                       dev_err(&spi->dev, "compare page %u, err %d\n",
+                               pageaddr, status);
                        remaining = 0;
                        status = -EIO;
                        break;
@@ -455,11 +456,11 @@ static int dataflash_get_otp_info(struct mtd_info *mtd, size_t len,
 }
 
 static ssize_t otp_read(struct spi_device *spi, unsigned base,
-               uint8_t *buf, loff_t off, size_t len)
+               u8 *buf, loff_t off, size_t len)
 {
        struct spi_message      m;
        size_t                  l;
-       uint8_t                 *scratch;
+       u8                      *scratch;
        struct spi_transfer     t;
        int                     status;
 
@@ -538,7 +539,7 @@ static int dataflash_write_user_otp(struct mtd_info *mtd,
 {
        struct spi_message      m;
        const size_t            l = 4 + 64;
-       uint8_t                 *scratch;
+       u8                      *scratch;
        struct spi_transfer     t;
        struct dataflash        *priv = mtd->priv;
        int                     status;
@@ -689,14 +690,15 @@ struct flash_info {
        /* JEDEC id has a high byte of zero plus three data bytes:
         * the manufacturer id, then a two byte device id.
         */
-       uint32_t        jedec_id;
+       u64             jedec_id;
 
        /* The size listed here is what works with OP_ERASE_PAGE. */
        unsigned        nr_pages;
-       uint16_t        pagesize;
-       uint16_t        pageoffset;
+       u16             pagesize;
+       u16             pageoffset;
 
-       uint16_t        flags;
+       u16             flags;
+#define SUP_EXTID      0x0004          /* supports extended ID data */
 #define SUP_POW2PS     0x0002          /* supports 2^N byte pages */
 #define IS_POW2PS      0x0001          /* uses 2^N byte pages */
 };
@@ -734,54 +736,32 @@ static struct flash_info dataflash_data[] = {
 
        { "AT45DB642x",  0x1f2800, 8192, 1056, 11, SUP_POW2PS},
        { "at45db642d",  0x1f2800, 8192, 1024, 10, SUP_POW2PS | IS_POW2PS},
+
+       { "AT45DB641E",  0x1f28000100, 32768, 264, 9, SUP_EXTID | SUP_POW2PS},
+       { "at45db641e",  0x1f28000100, 32768, 256, 8, SUP_EXTID | SUP_POW2PS | IS_POW2PS},
 };
 
-static struct flash_info *jedec_probe(struct spi_device *spi)
+static struct flash_info *jedec_lookup(struct spi_device *spi,
+                                      u64 jedec, bool use_extid)
 {
-       int                     tmp;
-       uint8_t                 code = OP_READ_ID;
-       uint8_t                 id[3];
-       uint32_t                jedec;
-       struct flash_info       *info;
+       struct flash_info *info;
        int status;
 
-       /* JEDEC also defines an optional "extended device information"
-        * string for after vendor-specific data, after the three bytes
-        * we use here.  Supporting some chips might require using it.
-        *
-        * If the vendor ID isn't Atmel's (0x1f), assume this call failed.
-        * That's not an error; only rev C and newer chips handle it, and
-        * only Atmel sells these chips.
-        */
-       tmp = spi_write_then_read(spi, &code, 1, id, 3);
-       if (tmp < 0) {
-               pr_debug("%s: error %d reading JEDEC ID\n",
-                       dev_name(&spi->dev), tmp);
-               return ERR_PTR(tmp);
-       }
-       if (id[0] != 0x1f)
-               return NULL;
-
-       jedec = id[0];
-       jedec = jedec << 8;
-       jedec |= id[1];
-       jedec = jedec << 8;
-       jedec |= id[2];
+       for (info = dataflash_data;
+            info < dataflash_data + ARRAY_SIZE(dataflash_data);
+            info++) {
+               if (use_extid && !(info->flags & SUP_EXTID))
+                       continue;
 
-       for (tmp = 0, info = dataflash_data;
-                       tmp < ARRAY_SIZE(dataflash_data);
-                       tmp++, info++) {
                if (info->jedec_id == jedec) {
-                       pr_debug("%s: OTP, sector protect%s\n",
-                               dev_name(&spi->dev),
-                               (info->flags & SUP_POW2PS)
-                                       ? ", binary pagesize" : ""
-                               );
+                       dev_dbg(&spi->dev, "OTP, sector protect%s\n",
+                               (info->flags & SUP_POW2PS) ?
+                               ", binary pagesize" : "");
                        if (info->flags & SUP_POW2PS) {
                                status = dataflash_status(spi);
                                if (status < 0) {
-                                       pr_debug("%s: status error %d\n",
-                                               dev_name(&spi->dev), status);
+                                       dev_dbg(&spi->dev, "status error %d\n",
+                                               status);
                                        return ERR_PTR(status);
                                }
                                if (status & 0x1) {
@@ -796,12 +776,58 @@ static struct flash_info *jedec_probe(struct spi_device *spi)
                }
        }
 
+       return ERR_PTR(-ENODEV);
+}
+
+static struct flash_info *jedec_probe(struct spi_device *spi)
+{
+       int ret;
+       u8 code = OP_READ_ID;
+       u64 jedec;
+       u8 id[sizeof(jedec)] = {0};
+       const unsigned int id_size = 5;
+       struct flash_info *info;
+
+       /*
+        * JEDEC also defines an optional "extended device information"
+        * string for after vendor-specific data, after the three bytes
+        * we use here.  Supporting some chips might require using it.
+        *
+        * If the vendor ID isn't Atmel's (0x1f), assume this call failed.
+        * That's not an error; only rev C and newer chips handle it, and
+        * only Atmel sells these chips.
+        */
+       ret = spi_write_then_read(spi, &code, 1, id, id_size);
+       if (ret < 0) {
+               dev_dbg(&spi->dev, "error %d reading JEDEC ID\n", ret);
+               return ERR_PTR(ret);
+       }
+
+       if (id[0] != CFI_MFR_ATMEL)
+               return NULL;
+
+       jedec = be64_to_cpup((__be64 *)id);
+
+       /*
+        * First, try to match device using extended device
+        * information
+        */
+       info = jedec_lookup(spi, jedec >> DATAFLASH_SHIFT_EXTID, true);
+       if (!IS_ERR(info))
+               return info;
+       /*
+        * If that fails, make another pass using regular ID
+        * information
+        */
+       info = jedec_lookup(spi, jedec >> DATAFLASH_SHIFT_ID, false);
+       if (!IS_ERR(info))
+               return info;
        /*
         * Treat other chips as errors ... we won't know the right page
         * size (it might be binary) even when we can tell which density
         * class is involved (legacy chip id scheme).
         */
-       dev_warn(&spi->dev, "JEDEC id %06x not handled\n", jedec);
+       dev_warn(&spi->dev, "JEDEC id %016llx not handled\n", jedec);
        return ERR_PTR(-ENODEV);
 }
 
@@ -845,8 +871,7 @@ static int dataflash_probe(struct spi_device *spi)
         */
        status = dataflash_status(spi);
        if (status <= 0 || status == 0xff) {
-               pr_debug("%s: status error %d\n",
-                               dev_name(&spi->dev), status);
+               dev_dbg(&spi->dev, "status error %d\n", status);
                if (status == 0 || status == 0xff)
                        status = -ENODEV;
                return status;
@@ -887,8 +912,7 @@ static int dataflash_probe(struct spi_device *spi)
        }
 
        if (status < 0)
-               pr_debug("%s: add_dataflash --> %d\n", dev_name(&spi->dev),
-                               status);
+               dev_dbg(&spi->dev, "add_dataflash --> %d\n", status);
 
        return status;
 }
@@ -898,7 +922,7 @@ static int dataflash_remove(struct spi_device *spi)
        struct dataflash        *flash = spi_get_drvdata(spi);
        int                     status;
 
-       pr_debug("%s: remove\n", dev_name(&spi->dev));
+       dev_dbg(&spi->dev, "remove\n");
 
        status = mtd_device_unregister(&flash->mtd);
        if (status == 0)
index 8b81e15105dd4a6cc34b90ed8f50b08aa41ac9fa..eba125c9f23f485cf452739c5c5056cd440c1ccd 100644 (file)
@@ -13,7 +13,6 @@
 #define _MTD_SERIAL_FLASH_CMDS_H
 
 /* Generic Flash Commands/OPCODEs */
-#define SPINOR_OP_RDSR2                0x35
 #define SPINOR_OP_WRVCR                0x81
 #define SPINOR_OP_RDVCR                0x85
 
index 804313a33f2bec433010350f47ef063313354f94..21afd94cd904a0b0fe71e71a39345f938a0e9bc0 100644 (file)
@@ -1445,7 +1445,7 @@ static int stfsm_s25fl_config(struct stfsm *fsm)
        }
 
        /* Check status of 'QE' bit, update if required. */
-       stfsm_read_status(fsm, SPINOR_OP_RDSR2, &cr1, 1);
+       stfsm_read_status(fsm, SPINOR_OP_RDCR, &cr1, 1);
        data_pads = ((fsm->stfsm_seq_read.seq_cfg >> 16) & 0x3) + 1;
        if (data_pads == 4) {
                if (!(cr1 & STFSM_S25FL_CONFIG_QE)) {
@@ -1490,7 +1490,7 @@ static int stfsm_w25q_config(struct stfsm *fsm)
                return ret;
 
        /* Check status of 'QE' bit, update if required. */
-       stfsm_read_status(fsm, SPINOR_OP_RDSR2, &sr2, 1);
+       stfsm_read_status(fsm, SPINOR_OP_RDCR, &sr2, 1);
        data_pads = ((fsm->stfsm_seq_read.seq_cfg >> 16) & 0x3) + 1;
        if (data_pads == 4) {
                if (!(sr2 & W25Q_STATUS_QE)) {
index 9d371cd728ea122e47000139f35b451dd84df032..05b286b5289f547020622a6883318cad15ec010f 100644 (file)
@@ -59,7 +59,7 @@ int of_flash_probe_gemini(struct platform_device *pdev,
                          struct device_node *np,
                          struct map_info *map)
 {
-       static struct regmap *rmap;
+       struct regmap *rmap;
        struct device *dev = &pdev->dev;
        u32 val;
        int ret;
index 1517da3ddd7d0b9752f8b7d4ebea8aaa3a6ae298..956382cea2568b3ee9c8721e5ab2acd522ce66f5 100644 (file)
@@ -991,7 +991,7 @@ EXPORT_SYMBOL_GPL(mtd_point);
 /* We probably shouldn't allow XIP if the unpoint isn't a NULL */
 int mtd_unpoint(struct mtd_info *mtd, loff_t from, size_t len)
 {
-       if (!mtd->_point)
+       if (!mtd->_unpoint)
                return -EOPNOTSUPP;
        if (from < 0 || from >= mtd->size || len > mtd->size - from)
                return -EINVAL;
index ea5e5307f667f2a6669287377a633824bb67a4bb..5736b0c90b339b6bc3e8e1ae3189341910b958f2 100644 (file)
 static LIST_HEAD(mtd_partitions);
 static DEFINE_MUTEX(mtd_partitions_mutex);
 
-/* Our partition node structure */
+/**
+ * struct mtd_part - our partition node structure
+ *
+ * @mtd: struct holding partition details
+ * @parent: parent mtd - flash device or another partition
+ * @offset: partition offset relative to the *flash device*
+ */
 struct mtd_part {
        struct mtd_info mtd;
-       struct mtd_info *master;
+       struct mtd_info *parent;
        uint64_t offset;
        struct list_head list;
 };
@@ -67,15 +73,15 @@ static int part_read(struct mtd_info *mtd, loff_t from, size_t len,
        struct mtd_ecc_stats stats;
        int res;
 
-       stats = part->master->ecc_stats;
-       res = part->master->_read(part->master, from + part->offset, len,
+       stats = part->parent->ecc_stats;
+       res = part->parent->_read(part->parent, from + part->offset, len,
                                  retlen, buf);
        if (unlikely(mtd_is_eccerr(res)))
                mtd->ecc_stats.failed +=
-                       part->master->ecc_stats.failed - stats.failed;
+                       part->parent->ecc_stats.failed - stats.failed;
        else
                mtd->ecc_stats.corrected +=
-                       part->master->ecc_stats.corrected - stats.corrected;
+                       part->parent->ecc_stats.corrected - stats.corrected;
        return res;
 }
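
The ECC accounting in part_read() snapshots the parent's counters, performs the read, and credits only the delta to the partition. A small stand-alone sketch of that delta idea (struct and function names here are illustrative, not the MTD API):

#include <stdio.h>

struct ecc_stats { unsigned failed, corrected; };

/* attribute the parent's counter deltas to a single partition */
static void account(struct ecc_stats *part, const struct ecc_stats *before,
		    const struct ecc_stats *after, int res_is_eccerr)
{
	if (res_is_eccerr)
		part->failed += after->failed - before->failed;
	else
		part->corrected += after->corrected - before->corrected;
}

int main(void)
{
	struct ecc_stats part = { 0, 0 };
	struct ecc_stats before = { 2, 10 }, after = { 2, 13 };

	account(&part, &before, &after, 0);
	printf("corrected on this partition: %u\n", part.corrected);
	return 0;
}
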
 
@@ -84,7 +90,7 @@ static int part_point(struct mtd_info *mtd, loff_t from, size_t len,
 {
        struct mtd_part *part = mtd_to_part(mtd);
 
-       return part->master->_point(part->master, from + part->offset, len,
+       return part->parent->_point(part->parent, from + part->offset, len,
                                    retlen, virt, phys);
 }
 
@@ -92,7 +98,7 @@ static int part_unpoint(struct mtd_info *mtd, loff_t from, size_t len)
 {
        struct mtd_part *part = mtd_to_part(mtd);
 
-       return part->master->_unpoint(part->master, from + part->offset, len);
+       return part->parent->_unpoint(part->parent, from + part->offset, len);
 }
 
 static unsigned long part_get_unmapped_area(struct mtd_info *mtd,
@@ -103,7 +109,7 @@ static unsigned long part_get_unmapped_area(struct mtd_info *mtd,
        struct mtd_part *part = mtd_to_part(mtd);
 
        offset += part->offset;
-       return part->master->_get_unmapped_area(part->master, len, offset,
+       return part->parent->_get_unmapped_area(part->parent, len, offset,
                                                flags);
 }
 
@@ -132,7 +138,7 @@ static int part_read_oob(struct mtd_info *mtd, loff_t from,
                        return -EINVAL;
        }
 
-       res = part->master->_read_oob(part->master, from + part->offset, ops);
+       res = part->parent->_read_oob(part->parent, from + part->offset, ops);
        if (unlikely(res)) {
                if (mtd_is_bitflip(res))
                        mtd->ecc_stats.corrected++;
@@ -146,7 +152,7 @@ static int part_read_user_prot_reg(struct mtd_info *mtd, loff_t from,
                size_t len, size_t *retlen, u_char *buf)
 {
        struct mtd_part *part = mtd_to_part(mtd);
-       return part->master->_read_user_prot_reg(part->master, from, len,
+       return part->parent->_read_user_prot_reg(part->parent, from, len,
                                                 retlen, buf);
 }
 
@@ -154,7 +160,7 @@ static int part_get_user_prot_info(struct mtd_info *mtd, size_t len,
                                   size_t *retlen, struct otp_info *buf)
 {
        struct mtd_part *part = mtd_to_part(mtd);
-       return part->master->_get_user_prot_info(part->master, len, retlen,
+       return part->parent->_get_user_prot_info(part->parent, len, retlen,
                                                 buf);
 }
 
@@ -162,7 +168,7 @@ static int part_read_fact_prot_reg(struct mtd_info *mtd, loff_t from,
                size_t len, size_t *retlen, u_char *buf)
 {
        struct mtd_part *part = mtd_to_part(mtd);
-       return part->master->_read_fact_prot_reg(part->master, from, len,
+       return part->parent->_read_fact_prot_reg(part->parent, from, len,
                                                 retlen, buf);
 }
 
@@ -170,7 +176,7 @@ static int part_get_fact_prot_info(struct mtd_info *mtd, size_t len,
                                   size_t *retlen, struct otp_info *buf)
 {
        struct mtd_part *part = mtd_to_part(mtd);
-       return part->master->_get_fact_prot_info(part->master, len, retlen,
+       return part->parent->_get_fact_prot_info(part->parent, len, retlen,
                                                 buf);
 }
 
@@ -178,7 +184,7 @@ static int part_write(struct mtd_info *mtd, loff_t to, size_t len,
                size_t *retlen, const u_char *buf)
 {
        struct mtd_part *part = mtd_to_part(mtd);
-       return part->master->_write(part->master, to + part->offset, len,
+       return part->parent->_write(part->parent, to + part->offset, len,
                                    retlen, buf);
 }
 
@@ -186,7 +192,7 @@ static int part_panic_write(struct mtd_info *mtd, loff_t to, size_t len,
                size_t *retlen, const u_char *buf)
 {
        struct mtd_part *part = mtd_to_part(mtd);
-       return part->master->_panic_write(part->master, to + part->offset, len,
+       return part->parent->_panic_write(part->parent, to + part->offset, len,
                                          retlen, buf);
 }
 
@@ -199,14 +205,14 @@ static int part_write_oob(struct mtd_info *mtd, loff_t to,
                return -EINVAL;
        if (ops->datbuf && to + ops->len > mtd->size)
                return -EINVAL;
-       return part->master->_write_oob(part->master, to + part->offset, ops);
+       return part->parent->_write_oob(part->parent, to + part->offset, ops);
 }
 
 static int part_write_user_prot_reg(struct mtd_info *mtd, loff_t from,
                size_t len, size_t *retlen, u_char *buf)
 {
        struct mtd_part *part = mtd_to_part(mtd);
-       return part->master->_write_user_prot_reg(part->master, from, len,
+       return part->parent->_write_user_prot_reg(part->parent, from, len,
                                                  retlen, buf);
 }
 
@@ -214,14 +220,14 @@ static int part_lock_user_prot_reg(struct mtd_info *mtd, loff_t from,
                size_t len)
 {
        struct mtd_part *part = mtd_to_part(mtd);
-       return part->master->_lock_user_prot_reg(part->master, from, len);
+       return part->parent->_lock_user_prot_reg(part->parent, from, len);
 }
 
 static int part_writev(struct mtd_info *mtd, const struct kvec *vecs,
                unsigned long count, loff_t to, size_t *retlen)
 {
        struct mtd_part *part = mtd_to_part(mtd);
-       return part->master->_writev(part->master, vecs, count,
+       return part->parent->_writev(part->parent, vecs, count,
                                     to + part->offset, retlen);
 }
 
@@ -231,7 +237,7 @@ static int part_erase(struct mtd_info *mtd, struct erase_info *instr)
        int ret;
 
        instr->addr += part->offset;
-       ret = part->master->_erase(part->master, instr);
+       ret = part->parent->_erase(part->parent, instr);
        if (ret) {
                if (instr->fail_addr != MTD_FAIL_ADDR_UNKNOWN)
                        instr->fail_addr -= part->offset;
@@ -257,51 +263,51 @@ EXPORT_SYMBOL_GPL(mtd_erase_callback);
 static int part_lock(struct mtd_info *mtd, loff_t ofs, uint64_t len)
 {
        struct mtd_part *part = mtd_to_part(mtd);
-       return part->master->_lock(part->master, ofs + part->offset, len);
+       return part->parent->_lock(part->parent, ofs + part->offset, len);
 }
 
 static int part_unlock(struct mtd_info *mtd, loff_t ofs, uint64_t len)
 {
        struct mtd_part *part = mtd_to_part(mtd);
-       return part->master->_unlock(part->master, ofs + part->offset, len);
+       return part->parent->_unlock(part->parent, ofs + part->offset, len);
 }
 
 static int part_is_locked(struct mtd_info *mtd, loff_t ofs, uint64_t len)
 {
        struct mtd_part *part = mtd_to_part(mtd);
-       return part->master->_is_locked(part->master, ofs + part->offset, len);
+       return part->parent->_is_locked(part->parent, ofs + part->offset, len);
 }
 
 static void part_sync(struct mtd_info *mtd)
 {
        struct mtd_part *part = mtd_to_part(mtd);
-       part->master->_sync(part->master);
+       part->parent->_sync(part->parent);
 }
 
 static int part_suspend(struct mtd_info *mtd)
 {
        struct mtd_part *part = mtd_to_part(mtd);
-       return part->master->_suspend(part->master);
+       return part->parent->_suspend(part->parent);
 }
 
 static void part_resume(struct mtd_info *mtd)
 {
        struct mtd_part *part = mtd_to_part(mtd);
-       part->master->_resume(part->master);
+       part->parent->_resume(part->parent);
 }
 
 static int part_block_isreserved(struct mtd_info *mtd, loff_t ofs)
 {
        struct mtd_part *part = mtd_to_part(mtd);
        ofs += part->offset;
-       return part->master->_block_isreserved(part->master, ofs);
+       return part->parent->_block_isreserved(part->parent, ofs);
 }
 
 static int part_block_isbad(struct mtd_info *mtd, loff_t ofs)
 {
        struct mtd_part *part = mtd_to_part(mtd);
        ofs += part->offset;
-       return part->master->_block_isbad(part->master, ofs);
+       return part->parent->_block_isbad(part->parent, ofs);
 }
 
 static int part_block_markbad(struct mtd_info *mtd, loff_t ofs)
@@ -310,7 +316,7 @@ static int part_block_markbad(struct mtd_info *mtd, loff_t ofs)
        int res;
 
        ofs += part->offset;
-       res = part->master->_block_markbad(part->master, ofs);
+       res = part->parent->_block_markbad(part->parent, ofs);
        if (!res)
                mtd->ecc_stats.badblocks++;
        return res;
@@ -319,13 +325,13 @@ static int part_block_markbad(struct mtd_info *mtd, loff_t ofs)
 static int part_get_device(struct mtd_info *mtd)
 {
        struct mtd_part *part = mtd_to_part(mtd);
-       return part->master->_get_device(part->master);
+       return part->parent->_get_device(part->parent);
 }
 
 static void part_put_device(struct mtd_info *mtd)
 {
        struct mtd_part *part = mtd_to_part(mtd);
-       part->master->_put_device(part->master);
+       part->parent->_put_device(part->parent);
 }
 
 static int part_ooblayout_ecc(struct mtd_info *mtd, int section,
@@ -333,7 +339,7 @@ static int part_ooblayout_ecc(struct mtd_info *mtd, int section,
 {
        struct mtd_part *part = mtd_to_part(mtd);
 
-       return mtd_ooblayout_ecc(part->master, section, oobregion);
+       return mtd_ooblayout_ecc(part->parent, section, oobregion);
 }
 
 static int part_ooblayout_free(struct mtd_info *mtd, int section,
@@ -341,7 +347,7 @@ static int part_ooblayout_free(struct mtd_info *mtd, int section,
 {
        struct mtd_part *part = mtd_to_part(mtd);
 
-       return mtd_ooblayout_free(part->master, section, oobregion);
+       return mtd_ooblayout_free(part->parent, section, oobregion);
 }
 
 static const struct mtd_ooblayout_ops part_ooblayout_ops = {
@@ -353,7 +359,7 @@ static int part_max_bad_blocks(struct mtd_info *mtd, loff_t ofs, size_t len)
 {
        struct mtd_part *part = mtd_to_part(mtd);
 
-       return part->master->_max_bad_blocks(part->master,
+       return part->parent->_max_bad_blocks(part->parent,
                                             ofs + part->offset, len);
 }
 
@@ -363,63 +369,70 @@ static inline void free_partition(struct mtd_part *p)
        kfree(p);
 }
 
-/*
- * This function unregisters and destroy all slave MTD objects which are
- * attached to the given master MTD object.
+/**
+ * mtd_parse_part - parse MTD partition looking for subpartitions
+ *
+ * @slave: part that is supposed to be a container and should be parsed
+ * @types: NULL-terminated array with names of partition parsers to try
+ *
+ * Some partitions act as containers holding extra subpartitions (volumes).
+ * Such containers come in various formats. This function tries the specified
+ * parsers on the given partition and, on success, registers the
+ * subpartitions it finds.
  */
-
-int del_mtd_partitions(struct mtd_info *master)
+static int mtd_parse_part(struct mtd_part *slave, const char *const *types)
 {
-       struct mtd_part *slave, *next;
-       int ret, err = 0;
+       struct mtd_partitions parsed;
+       int err;
 
-       mutex_lock(&mtd_partitions_mutex);
-       list_for_each_entry_safe(slave, next, &mtd_partitions, list)
-               if (slave->master == master) {
-                       ret = del_mtd_device(&slave->mtd);
-                       if (ret < 0) {
-                               err = ret;
-                               continue;
-                       }
-                       list_del(&slave->list);
-                       free_partition(slave);
-               }
-       mutex_unlock(&mtd_partitions_mutex);
+       err = parse_mtd_partitions(&slave->mtd, types, &parsed, NULL);
+       if (err)
+               return err;
+       else if (!parsed.nr_parts)
+               return -ENOENT;
+
+       err = add_mtd_partitions(&slave->mtd, parsed.parts, parsed.nr_parts);
+
+       mtd_part_parser_cleanup(&parsed);
 
        return err;
 }
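
For context, a hypothetical board-level partition table that would exercise this path once add_mtd_partitions() sees a non-NULL ->types. The "trx" parser name refers to the TRX parser added in this series; offsets and sizes are illustrative only.

#include <linux/mtd/partitions.h>
#include <linux/sizes.h>

static const char * const fw_part_types[] = { "trx", NULL };

static const struct mtd_partition board_parts[] = {
	{
		.name	= "boot",
		.offset	= 0,
		.size	= SZ_256K,
	},
	{
		.name	= "firmware",
		.offset	= MTDPART_OFS_APPEND,
		.size	= MTDPART_SIZ_FULL,
		.types	= fw_part_types,	/* ask parsers to look for subpartitions */
	},
};
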
 
-static struct mtd_part *allocate_partition(struct mtd_info *master,
+static struct mtd_part *allocate_partition(struct mtd_info *parent,
                        const struct mtd_partition *part, int partno,
                        uint64_t cur_offset)
 {
+       int wr_alignment = (parent->flags & MTD_NO_ERASE) ? parent->writesize :
+                                                           parent->erasesize;
        struct mtd_part *slave;
+       u32 remainder;
        char *name;
+       u64 tmp;
 
        /* allocate the partition structure */
        slave = kzalloc(sizeof(*slave), GFP_KERNEL);
        name = kstrdup(part->name, GFP_KERNEL);
        if (!name || !slave) {
                printk(KERN_ERR"memory allocation error while creating partitions for \"%s\"\n",
-                      master->name);
+                      parent->name);
                kfree(name);
                kfree(slave);
                return ERR_PTR(-ENOMEM);
        }
 
        /* set up the MTD object for this partition */
-       slave->mtd.type = master->type;
-       slave->mtd.flags = master->flags & ~part->mask_flags;
+       slave->mtd.type = parent->type;
+       slave->mtd.flags = parent->flags & ~part->mask_flags;
        slave->mtd.size = part->size;
-       slave->mtd.writesize = master->writesize;
-       slave->mtd.writebufsize = master->writebufsize;
-       slave->mtd.oobsize = master->oobsize;
-       slave->mtd.oobavail = master->oobavail;
-       slave->mtd.subpage_sft = master->subpage_sft;
-       slave->mtd.pairing = master->pairing;
+       slave->mtd.writesize = parent->writesize;
+       slave->mtd.writebufsize = parent->writebufsize;
+       slave->mtd.oobsize = parent->oobsize;
+       slave->mtd.oobavail = parent->oobavail;
+       slave->mtd.subpage_sft = parent->subpage_sft;
+       slave->mtd.pairing = parent->pairing;
 
        slave->mtd.name = name;
-       slave->mtd.owner = master->owner;
+       slave->mtd.owner = parent->owner;
 
        /* NOTE: Historically, we didn't arrange MTDs as a tree out of
         * concern for showing the same data in multiple partitions.
@@ -429,80 +442,81 @@ static struct mtd_part *allocate_partition(struct mtd_info *master,
         * parent conditional on that option. Note, this is a way to
         * distinguish between the master and the partition in sysfs.
         */
-       slave->mtd.dev.parent = IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER) ?
-                               &master->dev :
-                               master->dev.parent;
+       slave->mtd.dev.parent = IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER) || mtd_is_partition(parent) ?
+                               &parent->dev :
+                               parent->dev.parent;
        slave->mtd.dev.of_node = part->of_node;
 
        slave->mtd._read = part_read;
        slave->mtd._write = part_write;
 
-       if (master->_panic_write)
+       if (parent->_panic_write)
                slave->mtd._panic_write = part_panic_write;
 
-       if (master->_point && master->_unpoint) {
+       if (parent->_point && parent->_unpoint) {
                slave->mtd._point = part_point;
                slave->mtd._unpoint = part_unpoint;
        }
 
-       if (master->_get_unmapped_area)
+       if (parent->_get_unmapped_area)
                slave->mtd._get_unmapped_area = part_get_unmapped_area;
-       if (master->_read_oob)
+       if (parent->_read_oob)
                slave->mtd._read_oob = part_read_oob;
-       if (master->_write_oob)
+       if (parent->_write_oob)
                slave->mtd._write_oob = part_write_oob;
-       if (master->_read_user_prot_reg)
+       if (parent->_read_user_prot_reg)
                slave->mtd._read_user_prot_reg = part_read_user_prot_reg;
-       if (master->_read_fact_prot_reg)
+       if (parent->_read_fact_prot_reg)
                slave->mtd._read_fact_prot_reg = part_read_fact_prot_reg;
-       if (master->_write_user_prot_reg)
+       if (parent->_write_user_prot_reg)
                slave->mtd._write_user_prot_reg = part_write_user_prot_reg;
-       if (master->_lock_user_prot_reg)
+       if (parent->_lock_user_prot_reg)
                slave->mtd._lock_user_prot_reg = part_lock_user_prot_reg;
-       if (master->_get_user_prot_info)
+       if (parent->_get_user_prot_info)
                slave->mtd._get_user_prot_info = part_get_user_prot_info;
-       if (master->_get_fact_prot_info)
+       if (parent->_get_fact_prot_info)
                slave->mtd._get_fact_prot_info = part_get_fact_prot_info;
-       if (master->_sync)
+       if (parent->_sync)
                slave->mtd._sync = part_sync;
-       if (!partno && !master->dev.class && master->_suspend &&
-           master->_resume) {
-                       slave->mtd._suspend = part_suspend;
-                       slave->mtd._resume = part_resume;
+       if (!partno && !parent->dev.class && parent->_suspend &&
+           parent->_resume) {
+               slave->mtd._suspend = part_suspend;
+               slave->mtd._resume = part_resume;
        }
-       if (master->_writev)
+       if (parent->_writev)
                slave->mtd._writev = part_writev;
-       if (master->_lock)
+       if (parent->_lock)
                slave->mtd._lock = part_lock;
-       if (master->_unlock)
+       if (parent->_unlock)
                slave->mtd._unlock = part_unlock;
-       if (master->_is_locked)
+       if (parent->_is_locked)
                slave->mtd._is_locked = part_is_locked;
-       if (master->_block_isreserved)
+       if (parent->_block_isreserved)
                slave->mtd._block_isreserved = part_block_isreserved;
-       if (master->_block_isbad)
+       if (parent->_block_isbad)
                slave->mtd._block_isbad = part_block_isbad;
-       if (master->_block_markbad)
+       if (parent->_block_markbad)
                slave->mtd._block_markbad = part_block_markbad;
-       if (master->_max_bad_blocks)
+       if (parent->_max_bad_blocks)
                slave->mtd._max_bad_blocks = part_max_bad_blocks;
 
-       if (master->_get_device)
+       if (parent->_get_device)
                slave->mtd._get_device = part_get_device;
-       if (master->_put_device)
+       if (parent->_put_device)
                slave->mtd._put_device = part_put_device;
 
        slave->mtd._erase = part_erase;
-       slave->master = master;
+       slave->parent = parent;
        slave->offset = part->offset;
 
        if (slave->offset == MTDPART_OFS_APPEND)
                slave->offset = cur_offset;
        if (slave->offset == MTDPART_OFS_NXTBLK) {
+               tmp = cur_offset;
                slave->offset = cur_offset;
-               if (mtd_mod_by_eb(cur_offset, master) != 0) {
-                       /* Round up to next erasesize */
-                       slave->offset = (mtd_div_by_eb(cur_offset, master) + 1) * master->erasesize;
+               remainder = do_div(tmp, wr_alignment);
+               if (remainder) {
+                       slave->offset += wr_alignment - remainder;
                        printk(KERN_NOTICE "Moving partition %d: "
                               "0x%012llx -> 0x%012llx\n", partno,
                               (unsigned long long)cur_offset, (unsigned long long)slave->offset);
@@ -510,25 +524,25 @@ static struct mtd_part *allocate_partition(struct mtd_info *master,
        }
        if (slave->offset == MTDPART_OFS_RETAIN) {
                slave->offset = cur_offset;
-               if (master->size - slave->offset >= slave->mtd.size) {
-                       slave->mtd.size = master->size - slave->offset
+               if (parent->size - slave->offset >= slave->mtd.size) {
+                       slave->mtd.size = parent->size - slave->offset
                                                        - slave->mtd.size;
                } else {
                        printk(KERN_ERR "mtd partition \"%s\" doesn't have enough space: %#llx < %#llx, disabled\n",
-                               part->name, master->size - slave->offset,
+                               part->name, parent->size - slave->offset,
                                slave->mtd.size);
                        /* register to preserve ordering */
                        goto out_register;
                }
        }
        if (slave->mtd.size == MTDPART_SIZ_FULL)
-               slave->mtd.size = master->size - slave->offset;
+               slave->mtd.size = parent->size - slave->offset;
 
        printk(KERN_NOTICE "0x%012llx-0x%012llx : \"%s\"\n", (unsigned long long)slave->offset,
                (unsigned long long)(slave->offset + slave->mtd.size), slave->mtd.name);
 
        /* let's do some sanity checks */
-       if (slave->offset >= master->size) {
+       if (slave->offset >= parent->size) {
                /* let's register it anyway to preserve ordering */
                slave->offset = 0;
                slave->mtd.size = 0;
@@ -536,16 +550,16 @@ static struct mtd_part *allocate_partition(struct mtd_info *master,
                        part->name);
                goto out_register;
        }
-       if (slave->offset + slave->mtd.size > master->size) {
-               slave->mtd.size = master->size - slave->offset;
+       if (slave->offset + slave->mtd.size > parent->size) {
+               slave->mtd.size = parent->size - slave->offset;
                printk(KERN_WARNING"mtd: partition \"%s\" extends beyond the end of device \"%s\" -- size truncated to %#llx\n",
-                       part->name, master->name, (unsigned long long)slave->mtd.size);
+                       part->name, parent->name, (unsigned long long)slave->mtd.size);
        }
-       if (master->numeraseregions > 1) {
+       if (parent->numeraseregions > 1) {
                /* Deal with variable erase size stuff */
-               int i, max = master->numeraseregions;
+               int i, max = parent->numeraseregions;
                u64 end = slave->offset + slave->mtd.size;
-               struct mtd_erase_region_info *regions = master->eraseregions;
+               struct mtd_erase_region_info *regions = parent->eraseregions;
 
                /* Find the first erase regions which is part of this
                 * partition. */
@@ -564,37 +578,40 @@ static struct mtd_part *allocate_partition(struct mtd_info *master,
                BUG_ON(slave->mtd.erasesize == 0);
        } else {
                /* Single erase size */
-               slave->mtd.erasesize = master->erasesize;
+               slave->mtd.erasesize = parent->erasesize;
        }
 
-       if ((slave->mtd.flags & MTD_WRITEABLE) &&
-           mtd_mod_by_eb(slave->offset, &slave->mtd)) {
+       tmp = slave->offset;
+       remainder = do_div(tmp, wr_alignment);
+       if ((slave->mtd.flags & MTD_WRITEABLE) && remainder) {
                /* Doesn't start on a boundary of major erase size */
                /* FIXME: Let it be writable if it is on a boundary of
                 * _minor_ erase size though */
                slave->mtd.flags &= ~MTD_WRITEABLE;
-               printk(KERN_WARNING"mtd: partition \"%s\" doesn't start on an erase block boundary -- force read-only\n",
+               printk(KERN_WARNING"mtd: partition \"%s\" doesn't start on an erase/write block boundary -- force read-only\n",
                        part->name);
        }
-       if ((slave->mtd.flags & MTD_WRITEABLE) &&
-           mtd_mod_by_eb(slave->mtd.size, &slave->mtd)) {
+
+       tmp = slave->mtd.size;
+       remainder = do_div(tmp, wr_alignment);
+       if ((slave->mtd.flags & MTD_WRITEABLE) && remainder) {
                slave->mtd.flags &= ~MTD_WRITEABLE;
-               printk(KERN_WARNING"mtd: partition \"%s\" doesn't end on an erase block -- force read-only\n",
+               printk(KERN_WARNING"mtd: partition \"%s\" doesn't end on an erase/write block -- force read-only\n",
                        part->name);
        }
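
A minimal user-space sketch of the do_div()-style rounding used for MTDPART_OFS_NXTBLK and the two alignment checks above, assuming wr_alignment is the erase block size of a conventional (non-MTD_NO_ERASE) device:

#include <stdint.h>
#include <stdio.h>

static uint64_t align_up(uint64_t offset, uint32_t wr_alignment)
{
	uint32_t remainder = offset % wr_alignment;	/* do_div() in the kernel */

	return remainder ? offset + wr_alignment - remainder : offset;
}

int main(void)
{
	/* e.g. 128 KiB erase blocks, next partition requested at 0x21000 */
	printf("0x%llx\n", (unsigned long long)align_up(0x21000, 0x20000));
	return 0;
}
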
 
        mtd_set_ooblayout(&slave->mtd, &part_ooblayout_ops);
-       slave->mtd.ecc_step_size = master->ecc_step_size;
-       slave->mtd.ecc_strength = master->ecc_strength;
-       slave->mtd.bitflip_threshold = master->bitflip_threshold;
+       slave->mtd.ecc_step_size = parent->ecc_step_size;
+       slave->mtd.ecc_strength = parent->ecc_strength;
+       slave->mtd.bitflip_threshold = parent->bitflip_threshold;
 
-       if (master->_block_isbad) {
+       if (parent->_block_isbad) {
                uint64_t offs = 0;
 
                while (offs < slave->mtd.size) {
-                       if (mtd_block_isreserved(master, offs + slave->offset))
+                       if (mtd_block_isreserved(parent, offs + slave->offset))
                                slave->mtd.ecc_stats.bbtblocks++;
-                       else if (mtd_block_isbad(master, offs + slave->offset))
+                       else if (mtd_block_isbad(parent, offs + slave->offset))
                                slave->mtd.ecc_stats.badblocks++;
                        offs += slave->mtd.erasesize;
                }
@@ -628,7 +645,7 @@ static int mtd_add_partition_attrs(struct mtd_part *new)
        return ret;
 }
 
-int mtd_add_partition(struct mtd_info *master, const char *name,
+int mtd_add_partition(struct mtd_info *parent, const char *name,
                      long long offset, long long length)
 {
        struct mtd_partition part;
@@ -641,7 +658,7 @@ int mtd_add_partition(struct mtd_info *master, const char *name,
                return -EINVAL;
 
        if (length == MTDPART_SIZ_FULL)
-               length = master->size - offset;
+               length = parent->size - offset;
 
        if (length <= 0)
                return -EINVAL;
@@ -651,7 +668,7 @@ int mtd_add_partition(struct mtd_info *master, const char *name,
        part.size = length;
        part.offset = offset;
 
-       new = allocate_partition(master, &part, -1, offset);
+       new = allocate_partition(parent, &part, -1, offset);
        if (IS_ERR(new))
                return PTR_ERR(new);
 
@@ -667,23 +684,69 @@ int mtd_add_partition(struct mtd_info *master, const char *name,
 }
 EXPORT_SYMBOL_GPL(mtd_add_partition);
 
-int mtd_del_partition(struct mtd_info *master, int partno)
+/**
+ * __mtd_del_partition - delete MTD partition
+ *
+ * @priv: internal MTD struct for partition to be deleted
+ *
+ * This function must be called with the partitions mutex locked.
+ */
+static int __mtd_del_partition(struct mtd_part *priv)
+{
+       struct mtd_part *child, *next;
+       int err;
+
+       list_for_each_entry_safe(child, next, &mtd_partitions, list) {
+               if (child->parent == &priv->mtd) {
+                       err = __mtd_del_partition(child);
+                       if (err)
+                               return err;
+               }
+       }
+
+       sysfs_remove_files(&priv->mtd.dev.kobj, mtd_partition_attrs);
+
+       err = del_mtd_device(&priv->mtd);
+       if (err)
+               return err;
+
+       list_del(&priv->list);
+       free_partition(priv);
+
+       return 0;
+}
+
+/*
+ * This function unregisters and destroys all slave MTD objects which are
+ * attached to the given MTD object.
+ */
+int del_mtd_partitions(struct mtd_info *mtd)
 {
        struct mtd_part *slave, *next;
-       int ret = -EINVAL;
+       int ret, err = 0;
 
        mutex_lock(&mtd_partitions_mutex);
        list_for_each_entry_safe(slave, next, &mtd_partitions, list)
-               if ((slave->master == master) &&
-                   (slave->mtd.index == partno)) {
-                       sysfs_remove_files(&slave->mtd.dev.kobj,
-                                          mtd_partition_attrs);
-                       ret = del_mtd_device(&slave->mtd);
+               if (slave->parent == mtd) {
+                       ret = __mtd_del_partition(slave);
                        if (ret < 0)
-                               break;
+                               err = ret;
+               }
+       mutex_unlock(&mtd_partitions_mutex);
+
+       return err;
+}
+
+int mtd_del_partition(struct mtd_info *mtd, int partno)
+{
+       struct mtd_part *slave, *next;
+       int ret = -EINVAL;
 
-                       list_del(&slave->list);
-                       free_partition(slave);
+       mutex_lock(&mtd_partitions_mutex);
+       list_for_each_entry_safe(slave, next, &mtd_partitions, list)
+               if ((slave->parent == mtd) &&
+                   (slave->mtd.index == partno)) {
+                       ret = __mtd_del_partition(slave);
                        break;
                }
        mutex_unlock(&mtd_partitions_mutex);
@@ -724,6 +787,8 @@ int add_mtd_partitions(struct mtd_info *master,
 
                add_mtd_device(&slave->mtd);
                mtd_add_partition_attrs(slave);
+               if (parts[i].types)
+                       mtd_parse_part(slave, parts[i].types);
 
                cur_offset = slave->offset + slave->mtd.size;
        }
@@ -799,6 +864,27 @@ static const char * const default_mtd_part_types[] = {
        NULL
 };
 
+static int mtd_part_do_parse(struct mtd_part_parser *parser,
+                            struct mtd_info *master,
+                            struct mtd_partitions *pparts,
+                            struct mtd_part_parser_data *data)
+{
+       int ret;
+
+       ret = (*parser->parse_fn)(master, &pparts->parts, data);
+       pr_debug("%s: parser %s: %i\n", master->name, parser->name, ret);
+       if (ret <= 0)
+               return ret;
+
+       pr_notice("%d %s partitions found on MTD device %s\n", ret,
+                 parser->name, master->name);
+
+       pparts->nr_parts = ret;
+       pparts->parser = parser;
+
+       return ret;
+}
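
For reference, the usual way a flash driver reaches this parsing path is through mtd_device_parse_register(); a hedged sketch with illustrative parser names and a fallback table:

#include <linux/mtd/mtd.h>
#include <linux/mtd/partitions.h>

static const char * const probe_types[] = { "cmdlinepart", "ofpart", NULL };

/* illustrative wrapper: parse_mtd_partitions() runs under the hood and the
 * static table is only used when no parser claims the device */
static int example_register(struct mtd_info *mtd,
			    const struct mtd_partition *fallback, int nr_parts)
{
	return mtd_device_parse_register(mtd, probe_types, NULL,
					 fallback, nr_parts);
}
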
+
 /**
  * parse_mtd_partitions - parse MTD partitions
  * @master: the master partition (describes whole MTD device)
@@ -839,16 +925,10 @@ int parse_mtd_partitions(struct mtd_info *master, const char *const *types,
                         parser ? parser->name : NULL);
                if (!parser)
                        continue;
-               ret = (*parser->parse_fn)(master, &pparts->parts, data);
-               pr_debug("%s: parser %s: %i\n",
-                        master->name, parser->name, ret);
-               if (ret > 0) {
-                       printk(KERN_NOTICE "%d %s partitions found on MTD device %s\n",
-                              ret, parser->name, master->name);
-                       pparts->nr_parts = ret;
-                       pparts->parser = parser;
+               ret = mtd_part_do_parse(parser, master, pparts, data);
+               /* Found partitions! */
+               if (ret > 0)
                        return 0;
-               }
                mtd_part_parser_put(parser);
                /*
                 * Stash the first error we see; only report it if no parser
@@ -899,6 +979,6 @@ uint64_t mtd_get_device_size(const struct mtd_info *mtd)
        if (!mtd_is_partition(mtd))
                return mtd->size;
 
-       return mtd_to_part(mtd)->master->size;
+       return mtd_get_device_size(mtd_to_part(mtd)->parent);
 }
 EXPORT_SYMBOL_GPL(mtd_get_device_size);
index c3029528063b8578e79746c51038d643631920b2..dbfa72d61d5aa7b955e6d0728b127f69e79b3012 100644 (file)
@@ -308,6 +308,7 @@ config MTD_NAND_CS553X
 config MTD_NAND_ATMEL
        tristate "Support for NAND Flash / SmartMedia on AT91"
        depends on ARCH_AT91
+       select MFD_ATMEL_SMC
        help
          Enables support for NAND Flash / Smart Media Card interface
          on Atmel AT91 processors.
@@ -542,6 +543,7 @@ config MTD_NAND_SUNXI
 
 config MTD_NAND_HISI504
        tristate "Support for NAND controller on Hisilicon SoC Hip04"
+       depends on ARCH_HISI || COMPILE_TEST
        depends on HAS_DMA
        help
          Enables support for NAND controller on Hisilicon SoC Hip04.
@@ -555,6 +557,7 @@ config MTD_NAND_QCOM
 
 config MTD_NAND_MTK
        tristate "Support for NAND controller on MTK SoCs"
+       depends on ARCH_MEDIATEK || COMPILE_TEST
        depends on HAS_DMA
        help
          Enables support for NAND controller on MTK SoCs.
index 3b24468961473e78f75acc254642973a8c15ff64..d922a88e407f119bbf52aae494c632e08d113e7f 100644 (file)
@@ -57,6 +57,7 @@
 #include <linux/interrupt.h>
 #include <linux/mfd/syscon.h>
 #include <linux/mfd/syscon/atmel-matrix.h>
+#include <linux/mfd/syscon/atmel-smc.h>
 #include <linux/module.h>
 #include <linux/mtd/nand.h>
 #include <linux/of_address.h>
@@ -64,7 +65,6 @@
 #include <linux/of_platform.h>
 #include <linux/iopoll.h>
 #include <linux/platform_device.h>
-#include <linux/platform_data/atmel.h>
 #include <linux/regmap.h>
 
 #include "pmecc.h"
@@ -151,6 +151,8 @@ struct atmel_nand_cs {
                void __iomem *virt;
                dma_addr_t dma;
        } io;
+
+       struct atmel_smc_cs_conf smcconf;
 };
 
 struct atmel_nand {
@@ -196,6 +198,8 @@ struct atmel_nand_controller_ops {
        void (*nand_init)(struct atmel_nand_controller *nc,
                          struct atmel_nand *nand);
        int (*ecc_init)(struct atmel_nand *nand);
+       int (*setup_data_interface)(struct atmel_nand *nand, int csline,
+                                   const struct nand_data_interface *conf);
 };
 
 struct atmel_nand_controller_caps {
@@ -912,7 +916,7 @@ static int atmel_hsmc_nand_pmecc_write_pg(struct nand_chip *chip,
        struct mtd_info *mtd = nand_to_mtd(chip);
        struct atmel_nand *nand = to_atmel_nand(chip);
        struct atmel_hsmc_nand_controller *nc;
-       int ret;
+       int ret, status;
 
        nc = to_hsmc_nand_controller(chip->controller);
 
@@ -954,6 +958,10 @@ static int atmel_hsmc_nand_pmecc_write_pg(struct nand_chip *chip,
                dev_err(nc->base.dev, "Failed to program NAND page (err = %d)\n",
                        ret);
 
+       status = chip->waitfunc(mtd, chip);
+       if (status & NAND_STATUS_FAIL)
+               return -EIO;
+
        return ret;
 }
 
@@ -1175,6 +1183,295 @@ static int atmel_hsmc_nand_ecc_init(struct atmel_nand *nand)
        return 0;
 }
 
+static int atmel_smc_nand_prepare_smcconf(struct atmel_nand *nand,
+                                       const struct nand_data_interface *conf,
+                                       struct atmel_smc_cs_conf *smcconf)
+{
+       u32 ncycles, totalcycles, timeps, mckperiodps;
+       struct atmel_nand_controller *nc;
+       int ret;
+
+       nc = to_nand_controller(nand->base.controller);
+
+       /* DDR interface not supported. */
+       if (conf->type != NAND_SDR_IFACE)
+               return -ENOTSUPP;
+
+       /*
+        * tRC < 30ns implies EDO mode. This controller does not support this
+        * mode.
+        */
+       if (conf->timings.sdr.tRC_min < 30)
+               return -ENOTSUPP;
+
+       atmel_smc_cs_conf_init(smcconf);
+
+       mckperiodps = NSEC_PER_SEC / clk_get_rate(nc->mck);
+       mckperiodps *= 1000;
+
+       /*
+        * Set write pulse timing. This one is easy to extract:
+        *
+        * NWE_PULSE = tWP
+        */
+       ncycles = DIV_ROUND_UP(conf->timings.sdr.tWP_min, mckperiodps);
+       totalcycles = ncycles;
+       ret = atmel_smc_cs_conf_set_pulse(smcconf, ATMEL_SMC_NWE_SHIFT,
+                                         ncycles);
+       if (ret)
+               return ret;
+
+       /*
+        * The write setup timing depends on the operation done on the NAND.
+        * All operations go through the same data bus, but the operation
+        * type depends on the address we are writing to (ALE/CLE address
+        * lines).
+        * Since we have no way to differentiate the different operations at
+        * the SMC level, we must consider the worst case (the biggest setup
+        * time among all operation types):
+        *
+        * NWE_SETUP = max(tCLS, tCS, tALS, tDS) - NWE_PULSE
+        */
+       timeps = max3(conf->timings.sdr.tCLS_min, conf->timings.sdr.tCS_min,
+                     conf->timings.sdr.tALS_min);
+       timeps = max(timeps, conf->timings.sdr.tDS_min);
+       ncycles = DIV_ROUND_UP(timeps, mckperiodps);
+       ncycles = ncycles > totalcycles ? ncycles - totalcycles : 0;
+       totalcycles += ncycles;
+       ret = atmel_smc_cs_conf_set_setup(smcconf, ATMEL_SMC_NWE_SHIFT,
+                                         ncycles);
+       if (ret)
+               return ret;
+
+       /*
+        * As with the write setup timing, the write hold timing depends on the
+        * operation done on the NAND:
+        *
+        * NWE_HOLD = max(tCLH, tCH, tALH, tDH, tWH)
+        */
+       timeps = max3(conf->timings.sdr.tCLH_min, conf->timings.sdr.tCH_min,
+                     conf->timings.sdr.tALH_min);
+       timeps = max3(timeps, conf->timings.sdr.tDH_min,
+                     conf->timings.sdr.tWH_min);
+       ncycles = DIV_ROUND_UP(timeps, mckperiodps);
+       totalcycles += ncycles;
+
+       /*
+        * The write cycle timing directly matches tWC, but also depends on
+        * the setup and hold timings we calculated earlier, which gives:
+        *
+        * NWE_CYCLE = max(tWC, NWE_SETUP + NWE_PULSE + NWE_HOLD)
+        */
+       ncycles = DIV_ROUND_UP(conf->timings.sdr.tWC_min, mckperiodps);
+       ncycles = max(totalcycles, ncycles);
+       ret = atmel_smc_cs_conf_set_cycle(smcconf, ATMEL_SMC_NWE_SHIFT,
+                                         ncycles);
+       if (ret)
+               return ret;
+
+       /*
+        * We don't want the CS line to be toggled between each byte/word
+        * transfer to the NAND. The only way to guarantee that is to have the
+        * NCS_{WR,RD}_{SETUP,HOLD} timings set to 0, which in turn means:
+        *
+        * NCS_WR_PULSE = NWE_CYCLE
+        */
+       ret = atmel_smc_cs_conf_set_pulse(smcconf, ATMEL_SMC_NCS_WR_SHIFT,
+                                         ncycles);
+       if (ret)
+               return ret;
+
+       /*
+        * As with the write setup timing, the read hold timing depends on the
+        * operation done on the NAND:
+        *
+        * NRD_HOLD = max(tREH, tRHOH)
+        */
+       timeps = max(conf->timings.sdr.tREH_min, conf->timings.sdr.tRHOH_min);
+       ncycles = DIV_ROUND_UP(timeps, mckperiodps);
+       totalcycles = ncycles;
+
+       /*
+        * TDF = tRHZ - NRD_HOLD
+        */
+       ncycles = DIV_ROUND_UP(conf->timings.sdr.tRHZ_max, mckperiodps);
+       ncycles -= totalcycles;
+
+       /*
+        * In ONFI 4.0 specs, tRHZ has been increased to support EDO NANDs and
+        * we might end up with a config that does not fit in the TDF field.
+        * Just take the max value in this case and hope that the NAND is more
+        * tolerant than advertised.
+        */
+       if (ncycles > ATMEL_SMC_MODE_TDF_MAX)
+               ncycles = ATMEL_SMC_MODE_TDF_MAX;
+       else if (ncycles < ATMEL_SMC_MODE_TDF_MIN)
+               ncycles = ATMEL_SMC_MODE_TDF_MIN;
+
+       smcconf->mode |= ATMEL_SMC_MODE_TDF(ncycles) |
+                        ATMEL_SMC_MODE_TDFMODE_OPTIMIZED;
+
+       /*
+        * Read pulse timing directly matches tRP:
+        *
+        * NRD_PULSE = tRP
+        */
+       ncycles = DIV_ROUND_UP(conf->timings.sdr.tRP_min, mckperiodps);
+       totalcycles += ncycles;
+       ret = atmel_smc_cs_conf_set_pulse(smcconf, ATMEL_SMC_NRD_SHIFT,
+                                         ncycles);
+       if (ret)
+               return ret;
+
+       /*
+        * The read cycle timing directly matches tRC, but also depends on
+        * the setup and hold timings we calculated earlier, which gives:
+        *
+        * NRD_CYCLE = max(tRC, NRD_PULSE + NRD_HOLD)
+        *
+        * NRD_SETUP is always 0.
+        */
+       ncycles = DIV_ROUND_UP(conf->timings.sdr.tRC_min, mckperiodps);
+       ncycles = max(totalcycles, ncycles);
+       ret = atmel_smc_cs_conf_set_cycle(smcconf, ATMEL_SMC_NRD_SHIFT,
+                                         ncycles);
+       if (ret)
+               return ret;
+
+       /*
+        * We don't want the CS line to be toggled between each byte/word
+        * transfer from the NAND. The only way to guarantee that is to have
+        * the NCS_{WR,RD}_{SETUP,HOLD} timings set to 0, which in turn means:
+        *
+        * NCS_RD_PULSE = NRD_CYCLE
+        */
+       ret = atmel_smc_cs_conf_set_pulse(smcconf, ATMEL_SMC_NCS_RD_SHIFT,
+                                         ncycles);
+       if (ret)
+               return ret;
+
+       /* The Txxx timings directly match the corresponding tXXX ones. */
+       ncycles = DIV_ROUND_UP(conf->timings.sdr.tCLR_min, mckperiodps);
+       ret = atmel_smc_cs_conf_set_timing(smcconf,
+                                          ATMEL_HSMC_TIMINGS_TCLR_SHIFT,
+                                          ncycles);
+       if (ret)
+               return ret;
+
+       ncycles = DIV_ROUND_UP(conf->timings.sdr.tADL_min, mckperiodps);
+       ret = atmel_smc_cs_conf_set_timing(smcconf,
+                                          ATMEL_HSMC_TIMINGS_TADL_SHIFT,
+                                          ncycles);
+       if (ret)
+               return ret;
+
+       ncycles = DIV_ROUND_UP(conf->timings.sdr.tAR_min, mckperiodps);
+       ret = atmel_smc_cs_conf_set_timing(smcconf,
+                                          ATMEL_HSMC_TIMINGS_TAR_SHIFT,
+                                          ncycles);
+       if (ret)
+               return ret;
+
+       ncycles = DIV_ROUND_UP(conf->timings.sdr.tRR_min, mckperiodps);
+       ret = atmel_smc_cs_conf_set_timing(smcconf,
+                                          ATMEL_HSMC_TIMINGS_TRR_SHIFT,
+                                          ncycles);
+       if (ret)
+               return ret;
+
+       ncycles = DIV_ROUND_UP(conf->timings.sdr.tWB_max, mckperiodps);
+       ret = atmel_smc_cs_conf_set_timing(smcconf,
+                                          ATMEL_HSMC_TIMINGS_TWB_SHIFT,
+                                          ncycles);
+       if (ret)
+               return ret;
+
+       /* Attach the CS line to the NFC logic. */
+       smcconf->timings |= ATMEL_HSMC_TIMINGS_NFSEL;
+
+       /* Set the appropriate data bus width. */
+       if (nand->base.options & NAND_BUSWIDTH_16)
+               smcconf->mode |= ATMEL_SMC_MODE_DBW_16;
+
+       /* Operate in NRD/NWE READ/WRITEMODE. */
+       smcconf->mode |= ATMEL_SMC_MODE_READMODE_NRD |
+                        ATMEL_SMC_MODE_WRITEMODE_NWE;
+
+       return 0;
+}
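
A worked example of the cycle-count arithmetic above, assuming a 133 MHz master clock and SDR timings expressed in picoseconds as in struct nand_sdr_timings (the tWP value is illustrative):

#include <stdio.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

int main(void)
{
	unsigned long mck_hz = 133000000;
	unsigned long mckperiodps = (1000000000UL / mck_hz) * 1000;	/* 7000 ps */
	unsigned long tWP_min = 15000;					/* ps */

	/* NWE_PULSE = tWP, rounded up to whole mck cycles -> 3 cycles here */
	printf("NWE_PULSE = %lu cycles\n", DIV_ROUND_UP(tWP_min, mckperiodps));
	return 0;
}
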
+
+static int atmel_smc_nand_setup_data_interface(struct atmel_nand *nand,
+                                       int csline,
+                                       const struct nand_data_interface *conf)
+{
+       struct atmel_nand_controller *nc;
+       struct atmel_smc_cs_conf smcconf;
+       struct atmel_nand_cs *cs;
+       int ret;
+
+       nc = to_nand_controller(nand->base.controller);
+
+       ret = atmel_smc_nand_prepare_smcconf(nand, conf, &smcconf);
+       if (ret)
+               return ret;
+
+       if (csline == NAND_DATA_IFACE_CHECK_ONLY)
+               return 0;
+
+       cs = &nand->cs[csline];
+       cs->smcconf = smcconf;
+       atmel_smc_cs_conf_apply(nc->smc, cs->id, &cs->smcconf);
+
+       return 0;
+}
+
+static int atmel_hsmc_nand_setup_data_interface(struct atmel_nand *nand,
+                                       int csline,
+                                       const struct nand_data_interface *conf)
+{
+       struct atmel_nand_controller *nc;
+       struct atmel_smc_cs_conf smcconf;
+       struct atmel_nand_cs *cs;
+       int ret;
+
+       nc = to_nand_controller(nand->base.controller);
+
+       ret = atmel_smc_nand_prepare_smcconf(nand, conf, &smcconf);
+       if (ret)
+               return ret;
+
+       if (csline == NAND_DATA_IFACE_CHECK_ONLY)
+               return 0;
+
+       cs = &nand->cs[csline];
+       cs->smcconf = smcconf;
+
+       if (cs->rb.type == ATMEL_NAND_NATIVE_RB)
+               cs->smcconf.timings |= ATMEL_HSMC_TIMINGS_RBNSEL(cs->rb.id);
+
+       atmel_hsmc_cs_conf_apply(nc->smc, cs->id, &cs->smcconf);
+
+       return 0;
+}
+
+static int atmel_nand_setup_data_interface(struct mtd_info *mtd, int csline,
+                                       const struct nand_data_interface *conf)
+{
+       struct nand_chip *chip = mtd_to_nand(mtd);
+       struct atmel_nand *nand = to_atmel_nand(chip);
+       struct atmel_nand_controller *nc;
+
+       nc = to_nand_controller(nand->base.controller);
+
+       if (csline >= nand->numcs ||
+           (csline < 0 && csline != NAND_DATA_IFACE_CHECK_ONLY))
+               return -EINVAL;
+
+       return nc->caps->ops->setup_data_interface(nand, csline, conf);
+}
+
 static void atmel_nand_init(struct atmel_nand_controller *nc,
                            struct atmel_nand *nand)
 {
@@ -1192,6 +1489,9 @@ static void atmel_nand_init(struct atmel_nand_controller *nc,
        chip->write_buf = atmel_nand_write_buf;
        chip->select_chip = atmel_nand_select_chip;
 
+       if (nc->mck && nc->caps->ops->setup_data_interface)
+               chip->setup_data_interface = atmel_nand_setup_data_interface;
+
        /* Some NANDs require a longer delay than the default one (20us). */
        chip->chip_delay = 40;
 
@@ -1677,6 +1977,12 @@ static int atmel_nand_controller_init(struct atmel_nand_controller *nc,
        if (nc->caps->legacy_of_bindings)
                return 0;
 
+       nc->mck = of_clk_get(dev->parent->of_node, 0);
+       if (IS_ERR(nc->mck)) {
+               dev_err(dev, "Failed to retrieve MCK clk\n");
+               return PTR_ERR(nc->mck);
+       }
+
        np = of_parse_phandle(dev->parent->of_node, "atmel,smc", 0);
        if (!np) {
                dev_err(dev, "Missing or invalid atmel,smc property\n");
@@ -1983,6 +2289,7 @@ static const struct atmel_nand_controller_ops atmel_hsmc_nc_ops = {
        .remove = atmel_hsmc_nand_controller_remove,
        .ecc_init = atmel_hsmc_nand_ecc_init,
        .nand_init = atmel_hsmc_nand_init,
+       .setup_data_interface = atmel_hsmc_nand_setup_data_interface,
 };
 
 static const struct atmel_nand_controller_caps atmel_sama5_nc_caps = {
@@ -2037,7 +2344,14 @@ atmel_smc_nand_controller_remove(struct atmel_nand_controller *nc)
        return 0;
 }
 
-static const struct atmel_nand_controller_ops atmel_smc_nc_ops = {
+/*
+ * The SMC reg layout of at91rm9200 is completely different, which prevents us
+ * from re-using atmel_smc_nand_setup_data_interface() for the
+ * ->setup_data_interface() hook.
+ * At this point, there's no support for the at91rm9200 SMC IP, so we leave
+ * ->setup_data_interface() unassigned.
+ */
+static const struct atmel_nand_controller_ops at91rm9200_nc_ops = {
        .probe = atmel_smc_nand_controller_probe,
        .remove = atmel_smc_nand_controller_remove,
        .ecc_init = atmel_nand_ecc_init,
@@ -2045,6 +2359,20 @@ static const struct atmel_nand_controller_ops atmel_smc_nc_ops = {
 };
 
 static const struct atmel_nand_controller_caps atmel_rm9200_nc_caps = {
+       .ale_offs = BIT(21),
+       .cle_offs = BIT(22),
+       .ops = &at91rm9200_nc_ops,
+};
+
+static const struct atmel_nand_controller_ops atmel_smc_nc_ops = {
+       .probe = atmel_smc_nand_controller_probe,
+       .remove = atmel_smc_nand_controller_remove,
+       .ecc_init = atmel_nand_ecc_init,
+       .nand_init = atmel_smc_nand_init,
+       .setup_data_interface = atmel_smc_nand_setup_data_interface,
+};
+
+static const struct atmel_nand_controller_caps atmel_sam9260_nc_caps = {
        .ale_offs = BIT(21),
        .cle_offs = BIT(22),
        .ops = &atmel_smc_nc_ops,
@@ -2093,7 +2421,7 @@ static const struct of_device_id atmel_nand_controller_of_ids[] = {
        },
        {
                .compatible = "atmel,at91sam9260-nand-controller",
-               .data = &atmel_rm9200_nc_caps,
+               .data = &atmel_sam9260_nc_caps,
        },
        {
                .compatible = "atmel,at91sam9261-nand-controller",
@@ -2181,6 +2509,24 @@ static int atmel_nand_controller_remove(struct platform_device *pdev)
        return nc->caps->ops->remove(nc);
 }
 
+static __maybe_unused int atmel_nand_controller_resume(struct device *dev)
+{
+       struct atmel_nand_controller *nc = dev_get_drvdata(dev);
+       struct atmel_nand *nand;
+
+       list_for_each_entry(nand, &nc->chips, node) {
+               int i;
+
+               for (i = 0; i < nand->numcs; i++)
+                       nand_reset(&nand->base, i);
+       }
+
+       return 0;
+}
+
+static SIMPLE_DEV_PM_OPS(atmel_nand_controller_pm_ops, NULL,
+                        atmel_nand_controller_resume);
+
 static struct platform_driver atmel_nand_controller_driver = {
        .driver = {
                .name = "atmel-nand-controller",
index f1da4ea88f2c01c4d28ae98a96df96b1a49eb3ef..54bac5b73f0ab39886ff2babe16f2a6c1bb49d4b 100644 (file)
@@ -392,6 +392,8 @@ int bcm47xxnflash_ops_bcm4706_init(struct bcm47xxnflash *b47n)
        b47n->nand_chip.read_byte = bcm47xxnflash_ops_bcm4706_read_byte;
        b47n->nand_chip.read_buf = bcm47xxnflash_ops_bcm4706_read_buf;
        b47n->nand_chip.write_buf = bcm47xxnflash_ops_bcm4706_write_buf;
+       b47n->nand_chip.onfi_set_features = nand_onfi_get_set_features_notsupp;
+       b47n->nand_chip.onfi_get_features = nand_onfi_get_set_features_notsupp;
 
        nand_chip->chip_delay = 50;
        b47n->nand_chip.bbt_options = NAND_BBT_USE_FLASH;
index d40c32d311d8f9cb95ca20fd07cee000d7b69956..2fd733eba0a30fdc6bdbfb30ecbcc5de8a104775 100644 (file)
@@ -654,6 +654,8 @@ static int cafe_nand_probe(struct pci_dev *pdev,
        cafe->nand.read_buf = cafe_read_buf;
        cafe->nand.write_buf = cafe_write_buf;
        cafe->nand.select_chip = cafe_select_chip;
+       cafe->nand.onfi_set_features = nand_onfi_get_set_features_notsupp;
+       cafe->nand.onfi_get_features = nand_onfi_get_set_features_notsupp;
 
        cafe->nand.chip_delay = 0;
 
index 531c51991e5747b35fc68114ace9d534a1ea430e..7b26e53b95b1188c96dd348fa4efcd0dc575679f 100644 (file)
@@ -771,11 +771,14 @@ static int nand_davinci_probe(struct platform_device *pdev)
                        info->chip.ecc.hwctl = nand_davinci_hwctl_4bit;
                        info->chip.ecc.bytes = 10;
                        info->chip.ecc.options = NAND_ECC_GENERIC_ERASED_CHECK;
+                       info->chip.ecc.algo = NAND_ECC_BCH;
                } else {
+                       /* 1bit ecc hamming */
                        info->chip.ecc.calculate = nand_davinci_calculate_1bit;
                        info->chip.ecc.correct = nand_davinci_correct_1bit;
                        info->chip.ecc.hwctl = nand_davinci_hwctl_1bit;
                        info->chip.ecc.bytes = 3;
+                       info->chip.ecc.algo = NAND_ECC_HAMMING;
                }
                info->chip.ecc.size = 512;
                info->chip.ecc.strength = pdata->ecc_bits;
index 16634df2e39a77ae1e34f68a7b4af0c642232b1e..d723be35214827edbada0ecf0d175c8ead3a62bd 100644 (file)
 #include <linux/mutex.h>
 #include <linux/mtd/mtd.h>
 #include <linux/module.h>
+#include <linux/slab.h>
 
 #include "denali.h"
 
 MODULE_LICENSE("GPL");
 
-/*
- * We define a module parameter that allows the user to override
- * the hardware and decide what timing mode should be used.
- */
-#define NAND_DEFAULT_TIMINGS   -1
+#define DENALI_NAND_NAME    "denali-nand"
 
-static int onfi_timing_mode = NAND_DEFAULT_TIMINGS;
-module_param(onfi_timing_mode, int, S_IRUGO);
-MODULE_PARM_DESC(onfi_timing_mode,
-          "Overrides default ONFI setting. -1 indicates use default timings");
+/* Host Data/Command Interface */
+#define DENALI_HOST_ADDR       0x00
+#define DENALI_HOST_DATA       0x10
 
-#define DENALI_NAND_NAME    "denali-nand"
+#define DENALI_MAP00           (0 << 26)       /* direct access to buffer */
+#define DENALI_MAP01           (1 << 26)       /* read/write pages in PIO */
+#define DENALI_MAP10           (2 << 26)       /* high-level control plane */
+#define DENALI_MAP11           (3 << 26)       /* direct controller access */
 
-/*
- * We define a macro here that combines all interrupts this driver uses into
- * a single constant value, for convenience.
- */
-#define DENALI_IRQ_ALL (INTR__DMA_CMD_COMP | \
-                       INTR__ECC_TRANSACTION_DONE | \
-                       INTR__ECC_ERR | \
-                       INTR__PROGRAM_FAIL | \
-                       INTR__LOAD_COMP | \
-                       INTR__PROGRAM_COMP | \
-                       INTR__TIME_OUT | \
-                       INTR__ERASE_FAIL | \
-                       INTR__RST_COMP | \
-                       INTR__ERASE_COMP)
+/* MAP11 access cycle type */
+#define DENALI_MAP11_CMD       ((DENALI_MAP11) | 0)    /* command cycle */
+#define DENALI_MAP11_ADDR      ((DENALI_MAP11) | 1)    /* address cycle */
+#define DENALI_MAP11_DATA      ((DENALI_MAP11) | 2)    /* data cycle */
 
-/*
- * indicates whether or not the internal value for the flash bank is
- * valid or not
- */
-#define CHIP_SELECT_INVALID    -1
+/* MAP10 commands */
+#define DENALI_ERASE           0x01
+
+#define DENALI_BANK(denali)    ((denali)->active_bank << 24)
+
+#define DENALI_INVALID_BANK    -1
+#define DENALI_NR_BANKS                4
 
 /*
- * This macro divides two integers and rounds fractional values up
- * to the nearest integer value.
+ * The bus interface clock, clk_x, is phase aligned with the core clock.  The
+ * clk_x is an integral multiple N of the core clk.  The value N is configured
+ * at IP delivery time, and its possible values are 4, 5, and 6.  We need to
+ * align to the largest value to make it work with any possible configuration.
  */
-#define CEIL_DIV(X, Y) (((X)%(Y)) ? ((X)/(Y)+1) : ((X)/(Y)))
+#define DENALI_CLK_X_MULT      6
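Because clk_x runs N times faster than the core clock, a timing given in nanoseconds becomes a clk_x cycle count through a round-up division by the clk_x period. A standalone illustration, assuming a 100 MHz core clock and a 70 ns tADL purely for the arithmetic (neither value comes from this patch):

    #include <stdio.h>

    #define CLK_X_MULT      6       /* worst-case multiplier, as above */

    int main(void)
    {
            unsigned long clk_x_period_ps = 10000 / CLK_X_MULT; /* 100 MHz core clock assumed */
            unsigned long tadl_ps = 70 * 1000;                  /* assumed tADL of 70 ns */
            unsigned long cycles = (tadl_ps + clk_x_period_ps - 1) / clk_x_period_ps;

            printf("tADL = 70 ns -> %lu clk_x cycles\n", cycles);
            return 0;
    }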
 
 /*
  * this macro allows us to convert from an MTD structure to our own
@@ -77,339 +70,11 @@ static inline struct denali_nand_info *mtd_to_denali(struct mtd_info *mtd)
        return container_of(mtd_to_nand(mtd), struct denali_nand_info, nand);
 }
 
-/*
- * These constants are defined by the driver to enable common driver
- * configuration options.
- */
-#define SPARE_ACCESS           0x41
-#define MAIN_ACCESS            0x42
-#define MAIN_SPARE_ACCESS      0x43
-
-#define DENALI_READ    0
-#define DENALI_WRITE   0x100
-
-/*
- * this is a helper macro that allows us to
- * format the bank into the proper bits for the controller
- */
-#define BANK(x) ((x) << 24)
-
-/* forward declarations */
-static void clear_interrupts(struct denali_nand_info *denali);
-static uint32_t wait_for_irq(struct denali_nand_info *denali,
-                                                       uint32_t irq_mask);
-static void denali_irq_enable(struct denali_nand_info *denali,
-                                                       uint32_t int_mask);
-static uint32_t read_interrupt_status(struct denali_nand_info *denali);
-
-/*
- * Certain operations for the denali NAND controller use an indexed mode to
- * read/write data. The operation is performed by writing the address value
- * of the command to the device memory followed by the data. This function
- * abstracts this common operation.
- */
-static void index_addr(struct denali_nand_info *denali,
-                               uint32_t address, uint32_t data)
-{
-       iowrite32(address, denali->flash_mem);
-       iowrite32(data, denali->flash_mem + 0x10);
-}
-
-/* Perform an indexed read of the device */
-static void index_addr_read_data(struct denali_nand_info *denali,
-                                uint32_t address, uint32_t *pdata)
-{
-       iowrite32(address, denali->flash_mem);
-       *pdata = ioread32(denali->flash_mem + 0x10);
-}
-
-/*
- * We need to buffer some data for some of the NAND core routines.
- * The operations manage buffering that data.
- */
-static void reset_buf(struct denali_nand_info *denali)
-{
-       denali->buf.head = denali->buf.tail = 0;
-}
-
-static void write_byte_to_buf(struct denali_nand_info *denali, uint8_t byte)
-{
-       denali->buf.buf[denali->buf.tail++] = byte;
-}
-
-/* reads the status of the device */
-static void read_status(struct denali_nand_info *denali)
-{
-       uint32_t cmd;
-
-       /* initialize the data buffer to store status */
-       reset_buf(denali);
-
-       cmd = ioread32(denali->flash_reg + WRITE_PROTECT);
-       if (cmd)
-               write_byte_to_buf(denali, NAND_STATUS_WP);
-       else
-               write_byte_to_buf(denali, 0);
-}
-
-/* resets a specific device connected to the core */
-static void reset_bank(struct denali_nand_info *denali)
-{
-       uint32_t irq_status;
-       uint32_t irq_mask = INTR__RST_COMP | INTR__TIME_OUT;
-
-       clear_interrupts(denali);
-
-       iowrite32(1 << denali->flash_bank, denali->flash_reg + DEVICE_RESET);
-
-       irq_status = wait_for_irq(denali, irq_mask);
-
-       if (irq_status & INTR__TIME_OUT)
-               dev_err(denali->dev, "reset bank failed.\n");
-}
-
-/* Reset the flash controller */
-static uint16_t denali_nand_reset(struct denali_nand_info *denali)
-{
-       int i;
-
-       for (i = 0; i < denali->max_banks; i++)
-               iowrite32(INTR__RST_COMP | INTR__TIME_OUT,
-               denali->flash_reg + INTR_STATUS(i));
-
-       for (i = 0; i < denali->max_banks; i++) {
-               iowrite32(1 << i, denali->flash_reg + DEVICE_RESET);
-               while (!(ioread32(denali->flash_reg + INTR_STATUS(i)) &
-                       (INTR__RST_COMP | INTR__TIME_OUT)))
-                       cpu_relax();
-               if (ioread32(denali->flash_reg + INTR_STATUS(i)) &
-                       INTR__TIME_OUT)
-                       dev_dbg(denali->dev,
-                       "NAND Reset operation timed out on bank %d\n", i);
-       }
-
-       for (i = 0; i < denali->max_banks; i++)
-               iowrite32(INTR__RST_COMP | INTR__TIME_OUT,
-                         denali->flash_reg + INTR_STATUS(i));
-
-       return PASS;
-}
-
-/*
- * this routine calculates the ONFI timing values for a given mode and
- * programs the clocking register accordingly. The mode is determined by
- * the get_onfi_nand_para routine.
- */
-static void nand_onfi_timing_set(struct denali_nand_info *denali,
-                                                               uint16_t mode)
-{
-       uint16_t Trea[6] = {40, 30, 25, 20, 20, 16};
-       uint16_t Trp[6] = {50, 25, 17, 15, 12, 10};
-       uint16_t Treh[6] = {30, 15, 15, 10, 10, 7};
-       uint16_t Trc[6] = {100, 50, 35, 30, 25, 20};
-       uint16_t Trhoh[6] = {0, 15, 15, 15, 15, 15};
-       uint16_t Trloh[6] = {0, 0, 0, 0, 5, 5};
-       uint16_t Tcea[6] = {100, 45, 30, 25, 25, 25};
-       uint16_t Tadl[6] = {200, 100, 100, 100, 70, 70};
-       uint16_t Trhw[6] = {200, 100, 100, 100, 100, 100};
-       uint16_t Trhz[6] = {200, 100, 100, 100, 100, 100};
-       uint16_t Twhr[6] = {120, 80, 80, 60, 60, 60};
-       uint16_t Tcs[6] = {70, 35, 25, 25, 20, 15};
-
-       uint16_t data_invalid_rhoh, data_invalid_rloh, data_invalid;
-       uint16_t dv_window = 0;
-       uint16_t en_lo, en_hi;
-       uint16_t acc_clks;
-       uint16_t addr_2_data, re_2_we, re_2_re, we_2_re, cs_cnt;
-
-       en_lo = CEIL_DIV(Trp[mode], CLK_X);
-       en_hi = CEIL_DIV(Treh[mode], CLK_X);
-#if ONFI_BLOOM_TIME
-       if ((en_hi * CLK_X) < (Treh[mode] + 2))
-               en_hi++;
-#endif
-
-       if ((en_lo + en_hi) * CLK_X < Trc[mode])
-               en_lo += CEIL_DIV((Trc[mode] - (en_lo + en_hi) * CLK_X), CLK_X);
-
-       if ((en_lo + en_hi) < CLK_MULTI)
-               en_lo += CLK_MULTI - en_lo - en_hi;
-
-       while (dv_window < 8) {
-               data_invalid_rhoh = en_lo * CLK_X + Trhoh[mode];
-
-               data_invalid_rloh = (en_lo + en_hi) * CLK_X + Trloh[mode];
-
-               data_invalid = data_invalid_rhoh < data_invalid_rloh ?
-                                       data_invalid_rhoh : data_invalid_rloh;
-
-               dv_window = data_invalid - Trea[mode];
-
-               if (dv_window < 8)
-                       en_lo++;
-       }
-
-       acc_clks = CEIL_DIV(Trea[mode], CLK_X);
-
-       while (acc_clks * CLK_X - Trea[mode] < 3)
-               acc_clks++;
-
-       if (data_invalid - acc_clks * CLK_X < 2)
-               dev_warn(denali->dev, "%s, Line %d: Warning!\n",
-                        __FILE__, __LINE__);
-
-       addr_2_data = CEIL_DIV(Tadl[mode], CLK_X);
-       re_2_we = CEIL_DIV(Trhw[mode], CLK_X);
-       re_2_re = CEIL_DIV(Trhz[mode], CLK_X);
-       we_2_re = CEIL_DIV(Twhr[mode], CLK_X);
-       cs_cnt = CEIL_DIV((Tcs[mode] - Trp[mode]), CLK_X);
-       if (cs_cnt == 0)
-               cs_cnt = 1;
-
-       if (Tcea[mode]) {
-               while (cs_cnt * CLK_X + Trea[mode] < Tcea[mode])
-                       cs_cnt++;
-       }
-
-#if MODE5_WORKAROUND
-       if (mode == 5)
-               acc_clks = 5;
-#endif
-
-       /* Sighting 3462430: Temporary hack for MT29F128G08CJABAWP:B */
-       if (ioread32(denali->flash_reg + MANUFACTURER_ID) == 0 &&
-               ioread32(denali->flash_reg + DEVICE_ID) == 0x88)
-               acc_clks = 6;
-
-       iowrite32(acc_clks, denali->flash_reg + ACC_CLKS);
-       iowrite32(re_2_we, denali->flash_reg + RE_2_WE);
-       iowrite32(re_2_re, denali->flash_reg + RE_2_RE);
-       iowrite32(we_2_re, denali->flash_reg + WE_2_RE);
-       iowrite32(addr_2_data, denali->flash_reg + ADDR_2_DATA);
-       iowrite32(en_lo, denali->flash_reg + RDWR_EN_LO_CNT);
-       iowrite32(en_hi, denali->flash_reg + RDWR_EN_HI_CNT);
-       iowrite32(cs_cnt, denali->flash_reg + CS_SETUP_CNT);
-}
-
-/* queries the NAND device to see what ONFI modes it supports. */
-static uint16_t get_onfi_nand_para(struct denali_nand_info *denali)
+static void denali_host_write(struct denali_nand_info *denali,
+                             uint32_t addr, uint32_t data)
 {
-       int i;
-
-       /*
-        * we needn't to do a reset here because driver has already
-        * reset all the banks before
-        */
-       if (!(ioread32(denali->flash_reg + ONFI_TIMING_MODE) &
-               ONFI_TIMING_MODE__VALUE))
-               return FAIL;
-
-       for (i = 5; i > 0; i--) {
-               if (ioread32(denali->flash_reg + ONFI_TIMING_MODE) &
-                       (0x01 << i))
-                       break;
-       }
-
-       nand_onfi_timing_set(denali, i);
-
-       /*
-        * By now, all the ONFI devices we know support the page cache
-        * rw feature. So here we enable the pipeline_rw_ahead feature
-        */
-       /* iowrite32(1, denali->flash_reg + CACHE_WRITE_ENABLE); */
-       /* iowrite32(1, denali->flash_reg + CACHE_READ_ENABLE);  */
-
-       return PASS;
-}
-
-static void get_samsung_nand_para(struct denali_nand_info *denali,
-                                                       uint8_t device_id)
-{
-       if (device_id == 0xd3) { /* Samsung K9WAG08U1A */
-               /* Set timing register values according to datasheet */
-               iowrite32(5, denali->flash_reg + ACC_CLKS);
-               iowrite32(20, denali->flash_reg + RE_2_WE);
-               iowrite32(12, denali->flash_reg + WE_2_RE);
-               iowrite32(14, denali->flash_reg + ADDR_2_DATA);
-               iowrite32(3, denali->flash_reg + RDWR_EN_LO_CNT);
-               iowrite32(2, denali->flash_reg + RDWR_EN_HI_CNT);
-               iowrite32(2, denali->flash_reg + CS_SETUP_CNT);
-       }
-}
-
-static void get_toshiba_nand_para(struct denali_nand_info *denali)
-{
-       /*
-        * Workaround to fix a controller bug which reports a wrong
-        * spare area size for some kind of Toshiba NAND device
-        */
-       if ((ioread32(denali->flash_reg + DEVICE_MAIN_AREA_SIZE) == 4096) &&
-               (ioread32(denali->flash_reg + DEVICE_SPARE_AREA_SIZE) == 64))
-               iowrite32(216, denali->flash_reg + DEVICE_SPARE_AREA_SIZE);
-}
-
-static void get_hynix_nand_para(struct denali_nand_info *denali,
-                                                       uint8_t device_id)
-{
-       switch (device_id) {
-       case 0xD5: /* Hynix H27UAG8T2A, H27UBG8U5A or H27UCG8VFA */
-       case 0xD7: /* Hynix H27UDG8VEM, H27UCG8UDM or H27UCG8V5A */
-               iowrite32(128, denali->flash_reg + PAGES_PER_BLOCK);
-               iowrite32(4096, denali->flash_reg + DEVICE_MAIN_AREA_SIZE);
-               iowrite32(224, denali->flash_reg + DEVICE_SPARE_AREA_SIZE);
-               iowrite32(0, denali->flash_reg + DEVICE_WIDTH);
-               break;
-       default:
-               dev_warn(denali->dev,
-                        "Unknown Hynix NAND (Device ID: 0x%x).\n"
-                        "Will use default parameter values instead.\n",
-                        device_id);
-       }
-}
-
-/*
- * determines how many NAND chips are connected to the controller. Note for
- * Intel CE4100 devices we don't support more than one device.
- */
-static void find_valid_banks(struct denali_nand_info *denali)
-{
-       uint32_t id[denali->max_banks];
-       int i;
-
-       denali->total_used_banks = 1;
-       for (i = 0; i < denali->max_banks; i++) {
-               index_addr(denali, MODE_11 | (i << 24) | 0, 0x90);
-               index_addr(denali, MODE_11 | (i << 24) | 1, 0);
-               index_addr_read_data(denali, MODE_11 | (i << 24) | 2, &id[i]);
-
-               dev_dbg(denali->dev,
-                       "Return 1st ID for bank[%d]: %x\n", i, id[i]);
-
-               if (i == 0) {
-                       if (!(id[i] & 0x0ff))
-                               break; /* WTF? */
-               } else {
-                       if ((id[i] & 0x0ff) == (id[0] & 0x0ff))
-                               denali->total_used_banks++;
-                       else
-                               break;
-               }
-       }
-
-       if (denali->platform == INTEL_CE4100) {
-               /*
-                * Platform limitations of the CE4100 device limit
-                * users to a single chip solution for NAND.
-                * Multichip support is not enabled.
-                */
-               if (denali->total_used_banks != 1) {
-                       dev_err(denali->dev,
-                               "Sorry, Intel CE4100 only supports a single NAND device.\n");
-                       BUG();
-               }
-       }
-       dev_dbg(denali->dev,
-               "denali->total_used_banks: %d\n", denali->total_used_banks);
+       iowrite32(addr, denali->host + DENALI_HOST_ADDR);
+       iowrite32(data, denali->host + DENALI_HOST_DATA);
 }
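denali_host_write() is the indexed-access primitive the rest of the file builds on: the MAP mode, bank and (where applicable) page select go into the address register, and the payload goes into the data register. A hedged usage sketch, issuing one raw command cycle and one address cycle on the active bank (this mirrors what denali_cmd_ctrl() further down generates):

    denali_host_write(denali, DENALI_BANK(denali) | DENALI_MAP11_CMD,
                      NAND_CMD_READID);
    denali_host_write(denali, DENALI_BANK(denali) | DENALI_MAP11_ADDR, 0x00);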
 
 /*
@@ -418,7 +83,7 @@ static void find_valid_banks(struct denali_nand_info *denali)
  */
 static void detect_max_banks(struct denali_nand_info *denali)
 {
-       uint32_t features = ioread32(denali->flash_reg + FEATURES);
+       uint32_t features = ioread32(denali->reg + FEATURES);
 
        denali->max_banks = 1 << (features & FEATURES__N_BANKS);
 
@@ -427,227 +92,120 @@ static void detect_max_banks(struct denali_nand_info *denali)
                denali->max_banks <<= 1;
 }
 
-static uint16_t denali_nand_timing_set(struct denali_nand_info *denali)
+static void denali_enable_irq(struct denali_nand_info *denali)
 {
-       uint16_t status = PASS;
-       uint32_t id_bytes[8], addr;
-       uint8_t maf_id, device_id;
        int i;
 
-       /*
-        * Use read id method to get device ID and other params.
-        * For some NAND chips, controller can't report the correct
-        * device ID by reading from DEVICE_ID register
-        */
-       addr = MODE_11 | BANK(denali->flash_bank);
-       index_addr(denali, addr | 0, 0x90);
-       index_addr(denali, addr | 1, 0);
-       for (i = 0; i < 8; i++)
-               index_addr_read_data(denali, addr | 2, &id_bytes[i]);
-       maf_id = id_bytes[0];
-       device_id = id_bytes[1];
-
-       if (ioread32(denali->flash_reg + ONFI_DEVICE_NO_OF_LUNS) &
-               ONFI_DEVICE_NO_OF_LUNS__ONFI_DEVICE) { /* ONFI 1.0 NAND */
-               if (FAIL == get_onfi_nand_para(denali))
-                       return FAIL;
-       } else if (maf_id == 0xEC) { /* Samsung NAND */
-               get_samsung_nand_para(denali, device_id);
-       } else if (maf_id == 0x98) { /* Toshiba NAND */
-               get_toshiba_nand_para(denali);
-       } else if (maf_id == 0xAD) { /* Hynix NAND */
-               get_hynix_nand_para(denali, device_id);
-       }
-
-       dev_info(denali->dev,
-                       "Dump timing register values:\n"
-                       "acc_clks: %d, re_2_we: %d, re_2_re: %d\n"
-                       "we_2_re: %d, addr_2_data: %d, rdwr_en_lo_cnt: %d\n"
-                       "rdwr_en_hi_cnt: %d, cs_setup_cnt: %d\n",
-                       ioread32(denali->flash_reg + ACC_CLKS),
-                       ioread32(denali->flash_reg + RE_2_WE),
-                       ioread32(denali->flash_reg + RE_2_RE),
-                       ioread32(denali->flash_reg + WE_2_RE),
-                       ioread32(denali->flash_reg + ADDR_2_DATA),
-                       ioread32(denali->flash_reg + RDWR_EN_LO_CNT),
-                       ioread32(denali->flash_reg + RDWR_EN_HI_CNT),
-                       ioread32(denali->flash_reg + CS_SETUP_CNT));
-
-       find_valid_banks(denali);
-
-       /*
-        * If the user specified to override the default timings
-        * with a specific ONFI mode, we apply those changes here.
-        */
-       if (onfi_timing_mode != NAND_DEFAULT_TIMINGS)
-               nand_onfi_timing_set(denali, onfi_timing_mode);
-
-       return status;
+       for (i = 0; i < DENALI_NR_BANKS; i++)
+               iowrite32(U32_MAX, denali->reg + INTR_EN(i));
+       iowrite32(GLOBAL_INT_EN_FLAG, denali->reg + GLOBAL_INT_ENABLE);
 }
 
-static void denali_set_intr_modes(struct denali_nand_info *denali,
-                                       uint16_t INT_ENABLE)
+static void denali_disable_irq(struct denali_nand_info *denali)
 {
-       if (INT_ENABLE)
-               iowrite32(1, denali->flash_reg + GLOBAL_INT_ENABLE);
-       else
-               iowrite32(0, denali->flash_reg + GLOBAL_INT_ENABLE);
-}
-
-/*
- * validation function to verify that the controlling software is making
- * a valid request
- */
-static inline bool is_flash_bank_valid(int flash_bank)
-{
-       return flash_bank >= 0 && flash_bank < 4;
-}
-
-static void denali_irq_init(struct denali_nand_info *denali)
-{
-       uint32_t int_mask;
        int i;
 
-       /* Disable global interrupts */
-       denali_set_intr_modes(denali, false);
-
-       int_mask = DENALI_IRQ_ALL;
-
-       /* Clear all status bits */
-       for (i = 0; i < denali->max_banks; ++i)
-               iowrite32(0xFFFF, denali->flash_reg + INTR_STATUS(i));
-
-       denali_irq_enable(denali, int_mask);
+       for (i = 0; i < DENALI_NR_BANKS; i++)
+               iowrite32(0, denali->reg + INTR_EN(i));
+       iowrite32(0, denali->reg + GLOBAL_INT_ENABLE);
 }
 
-static void denali_irq_cleanup(int irqnum, struct denali_nand_info *denali)
+static void denali_clear_irq(struct denali_nand_info *denali,
+                            int bank, uint32_t irq_status)
 {
-       denali_set_intr_modes(denali, false);
+       /* write one to clear bits */
+       iowrite32(irq_status, denali->reg + INTR_STATUS(bank));
 }
 
-static void denali_irq_enable(struct denali_nand_info *denali,
-                                                       uint32_t int_mask)
+static void denali_clear_irq_all(struct denali_nand_info *denali)
 {
        int i;
 
-       for (i = 0; i < denali->max_banks; ++i)
-               iowrite32(int_mask, denali->flash_reg + INTR_EN(i));
+       for (i = 0; i < DENALI_NR_BANKS; i++)
+               denali_clear_irq(denali, i, U32_MAX);
 }
 
-/*
- * This function only returns when an interrupt that this driver cares about
- * occurs. This is to reduce the overhead of servicing interrupts
- */
-static inline uint32_t denali_irq_detected(struct denali_nand_info *denali)
+static irqreturn_t denali_isr(int irq, void *dev_id)
 {
-       return read_interrupt_status(denali) & DENALI_IRQ_ALL;
-}
+       struct denali_nand_info *denali = dev_id;
+       irqreturn_t ret = IRQ_NONE;
+       uint32_t irq_status;
+       int i;
 
-/* Interrupts are cleared by writing a 1 to the appropriate status bit */
-static inline void clear_interrupt(struct denali_nand_info *denali,
-                                                       uint32_t irq_mask)
-{
-       uint32_t intr_status_reg;
+       spin_lock(&denali->irq_lock);
 
-       intr_status_reg = INTR_STATUS(denali->flash_bank);
+       for (i = 0; i < DENALI_NR_BANKS; i++) {
+               irq_status = ioread32(denali->reg + INTR_STATUS(i));
+               if (irq_status)
+                       ret = IRQ_HANDLED;
 
-       iowrite32(irq_mask, denali->flash_reg + intr_status_reg);
-}
+               denali_clear_irq(denali, i, irq_status);
 
-static void clear_interrupts(struct denali_nand_info *denali)
-{
-       uint32_t status;
+               if (i != denali->active_bank)
+                       continue;
 
-       spin_lock_irq(&denali->irq_lock);
+               denali->irq_status |= irq_status;
 
-       status = read_interrupt_status(denali);
-       clear_interrupt(denali, status);
+               if (denali->irq_status & denali->irq_mask)
+                       complete(&denali->complete);
+       }
+
+       spin_unlock(&denali->irq_lock);
 
-       denali->irq_status = 0x0;
-       spin_unlock_irq(&denali->irq_lock);
+       return ret;
 }
 
-static uint32_t read_interrupt_status(struct denali_nand_info *denali)
+static void denali_reset_irq(struct denali_nand_info *denali)
 {
-       uint32_t intr_status_reg;
-
-       intr_status_reg = INTR_STATUS(denali->flash_bank);
+       unsigned long flags;
 
-       return ioread32(denali->flash_reg + intr_status_reg);
+       spin_lock_irqsave(&denali->irq_lock, flags);
+       denali->irq_status = 0;
+       denali->irq_mask = 0;
+       spin_unlock_irqrestore(&denali->irq_lock, flags);
 }
 
-/*
- * This is the interrupt service routine. It handles all interrupts
- * sent to this device. Note that on CE4100, this is a shared interrupt.
- */
-static irqreturn_t denali_isr(int irq, void *dev_id)
+static uint32_t denali_wait_for_irq(struct denali_nand_info *denali,
+                                   uint32_t irq_mask)
 {
-       struct denali_nand_info *denali = dev_id;
+       unsigned long time_left, flags;
        uint32_t irq_status;
-       irqreturn_t result = IRQ_NONE;
 
-       spin_lock(&denali->irq_lock);
+       spin_lock_irqsave(&denali->irq_lock, flags);
 
-       /* check to see if a valid NAND chip has been selected. */
-       if (is_flash_bank_valid(denali->flash_bank)) {
-               /*
-                * check to see if controller generated the interrupt,
-                * since this is a shared interrupt
-                */
-               irq_status = denali_irq_detected(denali);
-               if (irq_status != 0) {
-                       /* handle interrupt */
-                       /* first acknowledge it */
-                       clear_interrupt(denali, irq_status);
-                       /*
-                        * store the status in the device context for someone
-                        * to read
-                        */
-                       denali->irq_status |= irq_status;
-                       /* notify anyone who cares that it happened */
-                       complete(&denali->complete);
-                       /* tell the OS that we've handled this */
-                       result = IRQ_HANDLED;
-               }
+       irq_status = denali->irq_status;
+
+       if (irq_mask & irq_status) {
+               /* return immediately if the IRQ has already happened. */
+               spin_unlock_irqrestore(&denali->irq_lock, flags);
+               return irq_status;
        }
-       spin_unlock(&denali->irq_lock);
-       return result;
-}
 
-static uint32_t wait_for_irq(struct denali_nand_info *denali, uint32_t irq_mask)
-{
-       unsigned long comp_res;
-       uint32_t intr_status;
-       unsigned long timeout = msecs_to_jiffies(1000);
+       denali->irq_mask = irq_mask;
+       reinit_completion(&denali->complete);
+       spin_unlock_irqrestore(&denali->irq_lock, flags);
 
-       do {
-               comp_res =
-                       wait_for_completion_timeout(&denali->complete, timeout);
-               spin_lock_irq(&denali->irq_lock);
-               intr_status = denali->irq_status;
-
-               if (intr_status & irq_mask) {
-                       denali->irq_status &= ~irq_mask;
-                       spin_unlock_irq(&denali->irq_lock);
-                       /* our interrupt was detected */
-                       break;
-               }
+       time_left = wait_for_completion_timeout(&denali->complete,
+                                               msecs_to_jiffies(1000));
+       if (!time_left) {
+               dev_err(denali->dev, "timeout while waiting for irq 0x%x\n",
+                       denali->irq_mask);
+               return 0;
+       }
 
-               /*
-                * these are not the interrupts you are looking for -
-                * need to wait again
-                */
-               spin_unlock_irq(&denali->irq_lock);
-       } while (comp_res != 0);
+       return denali->irq_status;
+}
+
+static uint32_t denali_check_irq(struct denali_nand_info *denali)
+{
+       unsigned long flags;
+       uint32_t irq_status;
 
-       if (comp_res == 0) {
-               /* timeout */
-               pr_err("timeout occurred, status = 0x%x, mask = 0x%x\n",
-                               intr_status, irq_mask);
+       spin_lock_irqsave(&denali->irq_lock, flags);
+       irq_status = denali->irq_status;
+       spin_unlock_irqrestore(&denali->irq_lock, flags);
 
-               intr_status = 0;
-       }
-       return intr_status;
+       return irq_status;
 }
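Together, denali_reset_irq(), denali_wait_for_irq() and denali_check_irq() form the synchronization pattern the rest of the driver follows: clear the cached status, kick off the operation, then sleep on the completion for the bits of interest. A hedged sketch of the calling convention (the mask is only an example):

    uint32_t irq_status;

    denali_reset_irq(denali);
    /* ... program the controller to start the operation ... */
    irq_status = denali_wait_for_irq(denali,
                                     INTR__PROGRAM_COMP | INTR__PROGRAM_FAIL);
    if (!(irq_status & INTR__PROGRAM_COMP))
            return -EIO;    /* timed out or the operation failed */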
 
 /*
@@ -664,153 +222,111 @@ static void setup_ecc_for_xfer(struct denali_nand_info *denali, bool ecc_en,
        transfer_spare_flag = transfer_spare ? TRANSFER_SPARE_REG__FLAG : 0;
 
        /* Enable spare area/ECC per user's request. */
-       iowrite32(ecc_en_flag, denali->flash_reg + ECC_ENABLE);
-       iowrite32(transfer_spare_flag, denali->flash_reg + TRANSFER_SPARE_REG);
+       iowrite32(ecc_en_flag, denali->reg + ECC_ENABLE);
+       iowrite32(transfer_spare_flag, denali->reg + TRANSFER_SPARE_REG);
 }
 
-/*
- * sends a pipeline command operation to the controller. See the Denali NAND
- * controller's user guide for more information (section 4.2.3.6).
- */
-static int denali_send_pipeline_cmd(struct denali_nand_info *denali,
-                                   bool ecc_en, bool transfer_spare,
-                                   int access_type, int op)
+static void denali_read_buf(struct mtd_info *mtd, uint8_t *buf, int len)
 {
-       int status = PASS;
-       uint32_t addr, cmd;
-
-       setup_ecc_for_xfer(denali, ecc_en, transfer_spare);
+       struct denali_nand_info *denali = mtd_to_denali(mtd);
+       int i;
 
-       clear_interrupts(denali);
+       iowrite32(DENALI_MAP11_DATA | DENALI_BANK(denali),
+                 denali->host + DENALI_HOST_ADDR);
 
-       addr = BANK(denali->flash_bank) | denali->page;
+       for (i = 0; i < len; i++)
+               buf[i] = ioread32(denali->host + DENALI_HOST_DATA);
+}
 
-       if (op == DENALI_WRITE && access_type != SPARE_ACCESS) {
-               cmd = MODE_01 | addr;
-               iowrite32(cmd, denali->flash_mem);
-       } else if (op == DENALI_WRITE && access_type == SPARE_ACCESS) {
-               /* read spare area */
-               cmd = MODE_10 | addr;
-               index_addr(denali, cmd, access_type);
+static void denali_write_buf(struct mtd_info *mtd, const uint8_t *buf, int len)
+{
+       struct denali_nand_info *denali = mtd_to_denali(mtd);
+       int i;
 
-               cmd = MODE_01 | addr;
-               iowrite32(cmd, denali->flash_mem);
-       } else if (op == DENALI_READ) {
-               /* setup page read request for access type */
-               cmd = MODE_10 | addr;
-               index_addr(denali, cmd, access_type);
+       iowrite32(DENALI_MAP11_DATA | DENALI_BANK(denali),
+                 denali->host + DENALI_HOST_ADDR);
 
-               cmd = MODE_01 | addr;
-               iowrite32(cmd, denali->flash_mem);
-       }
-       return status;
+       for (i = 0; i < len; i++)
+               iowrite32(buf[i], denali->host + DENALI_HOST_DATA);
 }
 
-/* helper function that simply writes a buffer to the flash */
-static int write_data_to_flash_mem(struct denali_nand_info *denali,
-                                  const uint8_t *buf, int len)
+static void denali_read_buf16(struct mtd_info *mtd, uint8_t *buf, int len)
 {
-       uint32_t *buf32;
+       struct denali_nand_info *denali = mtd_to_denali(mtd);
+       uint16_t *buf16 = (uint16_t *)buf;
        int i;
 
-       /*
-        * verify that the len is a multiple of 4.
-        * see comment in read_data_from_flash_mem()
-        */
-       BUG_ON((len % 4) != 0);
+       iowrite32(DENALI_MAP11_DATA | DENALI_BANK(denali),
+                 denali->host + DENALI_HOST_ADDR);
 
-       /* write the data to the flash memory */
-       buf32 = (uint32_t *)buf;
-       for (i = 0; i < len / 4; i++)
-               iowrite32(*buf32++, denali->flash_mem + 0x10);
-       return i * 4; /* intent is to return the number of bytes read */
+       for (i = 0; i < len / 2; i++)
+               buf16[i] = ioread32(denali->host + DENALI_HOST_DATA);
 }
 
-/* helper function that simply reads a buffer from the flash */
-static int read_data_from_flash_mem(struct denali_nand_info *denali,
-                                   uint8_t *buf, int len)
+static void denali_write_buf16(struct mtd_info *mtd, const uint8_t *buf,
+                              int len)
 {
-       uint32_t *buf32;
+       struct denali_nand_info *denali = mtd_to_denali(mtd);
+       const uint16_t *buf16 = (const uint16_t *)buf;
        int i;
 
-       /*
-        * we assume that len will be a multiple of 4, if not it would be nice
-        * to know about it ASAP rather than have random failures...
-        * This assumption is based on the fact that this function is designed
-        * to be used to read flash pages, which are typically multiples of 4.
-        */
-       BUG_ON((len % 4) != 0);
+       iowrite32(DENALI_MAP11_DATA | DENALI_BANK(denali),
+                 denali->host + DENALI_HOST_ADDR);
 
-       /* transfer the data from the flash */
-       buf32 = (uint32_t *)buf;
-       for (i = 0; i < len / 4; i++)
-               *buf32++ = ioread32(denali->flash_mem + 0x10);
-       return i * 4; /* intent is to return the number of bytes read */
+       for (i = 0; i < len / 2; i++)
+               iowrite32(buf16[i], denali->host + DENALI_HOST_DATA);
 }
 
-/* writes OOB data to the device */
-static int write_oob_data(struct mtd_info *mtd, uint8_t *buf, int page)
+static uint8_t denali_read_byte(struct mtd_info *mtd)
 {
-       struct denali_nand_info *denali = mtd_to_denali(mtd);
-       uint32_t irq_status;
-       uint32_t irq_mask = INTR__PROGRAM_COMP | INTR__PROGRAM_FAIL;
-       int status = 0;
+       uint8_t byte;
 
-       denali->page = page;
+       denali_read_buf(mtd, &byte, 1);
 
-       if (denali_send_pipeline_cmd(denali, false, false, SPARE_ACCESS,
-                                                       DENALI_WRITE) == PASS) {
-               write_data_to_flash_mem(denali, buf, mtd->oobsize);
+       return byte;
+}
 
-               /* wait for operation to complete */
-               irq_status = wait_for_irq(denali, irq_mask);
+static void denali_write_byte(struct mtd_info *mtd, uint8_t byte)
+{
+       denali_write_buf(mtd, &byte, 1);
+}
 
-               if (irq_status == 0) {
-                       dev_err(denali->dev, "OOB write failed\n");
-                       status = -EIO;
-               }
-       } else {
-               dev_err(denali->dev, "unable to send pipeline command\n");
-               status = -EIO;
-       }
-       return status;
+static uint16_t denali_read_word(struct mtd_info *mtd)
+{
+       uint16_t word;
+
+       denali_read_buf16(mtd, (uint8_t *)&word, 2);
+
+       return word;
 }
 
-/* reads OOB data from the device */
-static void read_oob_data(struct mtd_info *mtd, uint8_t *buf, int page)
+static void denali_cmd_ctrl(struct mtd_info *mtd, int dat, unsigned int ctrl)
 {
        struct denali_nand_info *denali = mtd_to_denali(mtd);
-       uint32_t irq_mask = INTR__LOAD_COMP;
-       uint32_t irq_status, addr, cmd;
+       uint32_t type;
 
-       denali->page = page;
+       if (ctrl & NAND_CLE)
+               type = DENALI_MAP11_CMD;
+       else if (ctrl & NAND_ALE)
+               type = DENALI_MAP11_ADDR;
+       else
+               return;
 
-       if (denali_send_pipeline_cmd(denali, false, true, SPARE_ACCESS,
-                                                       DENALI_READ) == PASS) {
-               read_data_from_flash_mem(denali, buf, mtd->oobsize);
+       /*
+        * Some commands are followed by chip->dev_ready or chip->waitfunc.
+        * irq_status must be cleared here to catch the R/B# interrupt later.
+        */
+       if (ctrl & NAND_CTRL_CHANGE)
+               denali_reset_irq(denali);
 
-               /*
-                * wait for command to be accepted
-                * can always use status0 bit as the
-                * mask is identical for each bank.
-                */
-               irq_status = wait_for_irq(denali, irq_mask);
+       denali_host_write(denali, DENALI_BANK(denali) | type, dat);
+}
 
-               if (irq_status == 0)
-                       dev_err(denali->dev, "page on OOB timeout %d\n",
-                                       denali->page);
+static int denali_dev_ready(struct mtd_info *mtd)
+{
+       struct denali_nand_info *denali = mtd_to_denali(mtd);
 
-               /*
-                * We set the device back to MAIN_ACCESS here as I observed
-                * instability with the controller if you do a block erase
-                * and the last transaction was a SPARE_ACCESS. Block erase
-                * is reliable (according to the MTD test infrastructure)
-                * if you are in MAIN_ACCESS.
-                */
-               addr = BANK(denali->flash_bank) | denali->page;
-               cmd = MODE_10 | addr;
-               index_addr(denali, cmd, MAIN_ACCESS);
-       }
+       return !!(denali_check_irq(denali) & INTR__INT_ACT);
 }
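Taken together, ->cmd_ctrl() pushes command/address cycles through MAP11 while ->dev_ready() merely checks for the R/B# interrupt (INTR__INT_ACT) that the preceding command left behind; the denali_reset_irq() call on NAND_CTRL_CHANGE is what makes that later check meaningful. A rough, simplified sketch of the calls the NAND core ends up making into this driver for a reset-style command (the real nand_command() adds chip-select and delay handling):

    chip->cmd_ctrl(mtd, NAND_CMD_RESET, NAND_NCE | NAND_CLE | NAND_CTRL_CHANGE);
    while (!chip->dev_ready(mtd))   /* polls INTR__INT_ACT via denali_check_irq() */
            cond_resched();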
 
 static int denali_check_erased_page(struct mtd_info *mtd,
@@ -856,11 +372,11 @@ static int denali_hw_ecc_fixup(struct mtd_info *mtd,
                               unsigned long *uncor_ecc_flags)
 {
        struct nand_chip *chip = mtd_to_nand(mtd);
-       int bank = denali->flash_bank;
+       int bank = denali->active_bank;
        uint32_t ecc_cor;
        unsigned int max_bitflips;
 
-       ecc_cor = ioread32(denali->flash_reg + ECC_COR_INFO(bank));
+       ecc_cor = ioread32(denali->reg + ECC_COR_INFO(bank));
        ecc_cor >>= ECC_COR_INFO__SHIFT(bank);
 
        if (ecc_cor & ECC_COR_INFO__UNCOR_ERR) {
@@ -886,8 +402,6 @@ static int denali_hw_ecc_fixup(struct mtd_info *mtd,
        return max_bitflips;
 }
 
-#define ECC_SECTOR_SIZE 512
-
 #define ECC_SECTOR(x)  (((x) & ECC_ERROR_ADDRESS__SECTOR_NR) >> 12)
 #define ECC_BYTE(x)    (((x) & ECC_ERROR_ADDRESS__OFFSET))
 #define ECC_CORRECTION_VALUE(x) ((x) & ERR_CORRECTION_INFO__BYTEMASK)
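ECC_ERROR_ADDRESS packs the failing sector number and the byte offset within that sector into a single register, and ERR_CORRECTION_INFO carries the correction value and the device index; the macros above only mask and shift them apart. A hedged worked example, assuming the sector number occupies the bits above bit 11 (as the >> 12 implies) and the offset the low bits:

    uint32_t err_addr = 0x3025;                     /* hypothetical readout */
    unsigned int err_sector = ECC_SECTOR(err_addr); /* -> 3 */
    unsigned int err_byte = ECC_BYTE(err_addr);     /* -> 0x25 */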
@@ -899,22 +413,23 @@ static int denali_sw_ecc_fixup(struct mtd_info *mtd,
                               struct denali_nand_info *denali,
                               unsigned long *uncor_ecc_flags, uint8_t *buf)
 {
+       unsigned int ecc_size = denali->nand.ecc.size;
        unsigned int bitflips = 0;
        unsigned int max_bitflips = 0;
        uint32_t err_addr, err_cor_info;
        unsigned int err_byte, err_sector, err_device;
        uint8_t err_cor_value;
        unsigned int prev_sector = 0;
+       uint32_t irq_status;
 
-       /* read the ECC errors. we'll ignore them for now */
-       denali_set_intr_modes(denali, false);
+       denali_reset_irq(denali);
 
        do {
-               err_addr = ioread32(denali->flash_reg + ECC_ERROR_ADDRESS);
+               err_addr = ioread32(denali->reg + ECC_ERROR_ADDRESS);
                err_sector = ECC_SECTOR(err_addr);
                err_byte = ECC_BYTE(err_addr);
 
-               err_cor_info = ioread32(denali->flash_reg + ERR_CORRECTION_INFO);
+               err_cor_info = ioread32(denali->reg + ERR_CORRECTION_INFO);
                err_cor_value = ECC_CORRECTION_VALUE(err_cor_info);
                err_device = ECC_ERR_DEVICE(err_cor_info);
 
@@ -928,9 +443,9 @@ static int denali_sw_ecc_fixup(struct mtd_info *mtd,
                         * an erased sector.
                         */
                        *uncor_ecc_flags |= BIT(err_sector);
-               } else if (err_byte < ECC_SECTOR_SIZE) {
+               } else if (err_byte < ecc_size) {
                        /*
-                        * If err_byte is larger than ECC_SECTOR_SIZE, the error
+                        * If err_byte is larger than ecc_size, the error
                         * happened in the OOB area, so we ignore it; there is
                         * no need for us to correct it.  err_device identifies
                         * which NAND device the error occurred in if there is more than
@@ -939,8 +454,8 @@ static int denali_sw_ecc_fixup(struct mtd_info *mtd,
                        int offset;
                        unsigned int flips_in_byte;
 
-                       offset = (err_sector * ECC_SECTOR_SIZE + err_byte) *
-                                               denali->devnum + err_device;
+                       offset = (err_sector * ecc_size + err_byte) *
+                                       denali->devs_per_cs + err_device;
 
                        /* correct the ECC error */
                        flips_in_byte = hweight8(buf[offset] ^ err_cor_value);
@@ -959,10 +474,9 @@ static int denali_sw_ecc_fixup(struct mtd_info *mtd,
         * ECC_TRANSACTION_DONE interrupt, so here just wait for
         * a while for this interrupt
         */
-       while (!(read_interrupt_status(denali) & INTR__ECC_TRANSACTION_DONE))
-               cpu_relax();
-       clear_interrupts(denali);
-       denali_set_intr_modes(denali, true);
+       irq_status = denali_wait_for_irq(denali, INTR__ECC_TRANSACTION_DONE);
+       if (!(irq_status & INTR__ECC_TRANSACTION_DONE))
+               return -EIO;
 
        return max_bitflips;
 }
@@ -970,17 +484,17 @@ static int denali_sw_ecc_fixup(struct mtd_info *mtd,
 /* programs the controller to either enable/disable DMA transfers */
 static void denali_enable_dma(struct denali_nand_info *denali, bool en)
 {
-       iowrite32(en ? DMA_ENABLE__FLAG : 0, denali->flash_reg + DMA_ENABLE);
-       ioread32(denali->flash_reg + DMA_ENABLE);
+       iowrite32(en ? DMA_ENABLE__FLAG : 0, denali->reg + DMA_ENABLE);
+       ioread32(denali->reg + DMA_ENABLE);
 }
 
-static void denali_setup_dma64(struct denali_nand_info *denali, int op)
+static void denali_setup_dma64(struct denali_nand_info *denali,
+                              dma_addr_t dma_addr, int page, int write)
 {
        uint32_t mode;
        const int page_count = 1;
-       uint64_t addr = denali->buf.dma_buf;
 
-       mode = MODE_10 | BANK(denali->flash_bank) | denali->page;
+       mode = DENALI_MAP10 | DENALI_BANK(denali) | page;
 
        /* DMA is a three step process */
 
@@ -988,191 +502,354 @@ static void denali_setup_dma64(struct denali_nand_info *denali, int op)
         * 1. setup transfer type, interrupt when complete,
         *    burst len = 64 bytes, the number of pages
         */
-       index_addr(denali, mode, 0x01002000 | (64 << 16) | op | page_count);
+       denali_host_write(denali, mode,
+                         0x01002000 | (64 << 16) | (write << 8) | page_count);
 
        /* 2. set memory low address */
-       index_addr(denali, mode, addr);
+       denali_host_write(denali, mode, dma_addr);
 
        /* 3. set memory high address */
-       index_addr(denali, mode, addr >> 32);
+       denali_host_write(denali, mode, (uint64_t)dma_addr >> 32);
 }
 
-static void denali_setup_dma32(struct denali_nand_info *denali, int op)
+static void denali_setup_dma32(struct denali_nand_info *denali,
+                              dma_addr_t dma_addr, int page, int write)
 {
        uint32_t mode;
        const int page_count = 1;
-       uint32_t addr = denali->buf.dma_buf;
 
-       mode = MODE_10 | BANK(denali->flash_bank);
+       mode = DENALI_MAP10 | DENALI_BANK(denali);
 
        /* DMA is a four step process */
 
        /* 1. setup transfer type and # of pages */
-       index_addr(denali, mode | denali->page, 0x2000 | op | page_count);
+       denali_host_write(denali, mode | page,
+                         0x2000 | (write << 8) | page_count);
 
        /* 2. set memory high address bits 23:8 */
-       index_addr(denali, mode | ((addr >> 16) << 8), 0x2200);
+       denali_host_write(denali, mode | ((dma_addr >> 16) << 8), 0x2200);
 
        /* 3. set memory low address bits 23:8 */
-       index_addr(denali, mode | ((addr & 0xffff) << 8), 0x2300);
+       denali_host_write(denali, mode | ((dma_addr & 0xffff) << 8), 0x2300);
 
        /* 4. interrupt when complete, burst len = 64 bytes */
-       index_addr(denali, mode | 0x14000, 0x2400);
+       denali_host_write(denali, mode | 0x14000, 0x2400);
 }
 
-static void denali_setup_dma(struct denali_nand_info *denali, int op)
+static void denali_setup_dma(struct denali_nand_info *denali,
+                            dma_addr_t dma_addr, int page, int write)
 {
        if (denali->caps & DENALI_CAP_DMA_64BIT)
-               denali_setup_dma64(denali, op);
+               denali_setup_dma64(denali, dma_addr, page, write);
        else
-               denali_setup_dma32(denali, op);
+               denali_setup_dma32(denali, dma_addr, page, write);
 }
 
-/*
- * writes a page. user specifies type, and this function handles the
- * configuration details.
- */
-static int write_page(struct mtd_info *mtd, struct nand_chip *chip,
-                       const uint8_t *buf, bool raw_xfer)
+static int denali_pio_read(struct denali_nand_info *denali, void *buf,
+                          size_t size, int page, int raw)
 {
-       struct denali_nand_info *denali = mtd_to_denali(mtd);
-       dma_addr_t addr = denali->buf.dma_buf;
-       size_t size = mtd->writesize + mtd->oobsize;
+       uint32_t addr = DENALI_BANK(denali) | page;
+       uint32_t *buf32 = (uint32_t *)buf;
+       uint32_t irq_status, ecc_err_mask;
+       int i;
+
+       if (denali->caps & DENALI_CAP_HW_ECC_FIXUP)
+               ecc_err_mask = INTR__ECC_UNCOR_ERR;
+       else
+               ecc_err_mask = INTR__ECC_ERR;
+
+       denali_reset_irq(denali);
+
+       iowrite32(DENALI_MAP01 | addr, denali->host + DENALI_HOST_ADDR);
+       for (i = 0; i < size / 4; i++)
+               *buf32++ = ioread32(denali->host + DENALI_HOST_DATA);
+
+       irq_status = denali_wait_for_irq(denali, INTR__PAGE_XFER_INC);
+       if (!(irq_status & INTR__PAGE_XFER_INC))
+               return -EIO;
+
+       if (irq_status & INTR__ERASED_PAGE)
+               memset(buf, 0xff, size);
+
+       return irq_status & ecc_err_mask ? -EBADMSG : 0;
+}
+
+static int denali_pio_write(struct denali_nand_info *denali,
+                           const void *buf, size_t size, int page, int raw)
+{
+       uint32_t addr = DENALI_BANK(denali) | page;
+       const uint32_t *buf32 = (uint32_t *)buf;
        uint32_t irq_status;
-       uint32_t irq_mask = INTR__DMA_CMD_COMP | INTR__PROGRAM_FAIL;
+       int i;
 
-       /*
-        * if it is a raw xfer, we want to disable ecc and send the spare area.
-        * !raw_xfer - enable ecc
-        * raw_xfer - transfer spare
-        */
-       setup_ecc_for_xfer(denali, !raw_xfer, raw_xfer);
+       denali_reset_irq(denali);
 
-       /* copy buffer into DMA buffer */
-       memcpy(denali->buf.buf, buf, mtd->writesize);
+       iowrite32(DENALI_MAP01 | addr, denali->host + DENALI_HOST_ADDR);
+       for (i = 0; i < size / 4; i++)
+               iowrite32(*buf32++, denali->host + DENALI_HOST_DATA);
 
-       if (raw_xfer) {
-               /* transfer the data to the spare area */
-               memcpy(denali->buf.buf + mtd->writesize,
-                       chip->oob_poi,
-                       mtd->oobsize);
+       irq_status = denali_wait_for_irq(denali,
+                               INTR__PROGRAM_COMP | INTR__PROGRAM_FAIL);
+       if (!(irq_status & INTR__PROGRAM_COMP))
+               return -EIO;
+
+       return 0;
+}
+
+static int denali_pio_xfer(struct denali_nand_info *denali, void *buf,
+                          size_t size, int page, int raw, int write)
+{
+       if (write)
+               return denali_pio_write(denali, buf, size, page, raw);
+       else
+               return denali_pio_read(denali, buf, size, page, raw);
+}
+
+static int denali_dma_xfer(struct denali_nand_info *denali, void *buf,
+                          size_t size, int page, int raw, int write)
+{
+       dma_addr_t dma_addr;
+       uint32_t irq_mask, irq_status, ecc_err_mask;
+       enum dma_data_direction dir = write ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
+       int ret = 0;
+
+       dma_addr = dma_map_single(denali->dev, buf, size, dir);
+       if (dma_mapping_error(denali->dev, dma_addr)) {
+               dev_dbg(denali->dev, "Failed to DMA-map buffer. Trying PIO.\n");
+               return denali_pio_xfer(denali, buf, size, page, raw, write);
        }
 
-       dma_sync_single_for_device(denali->dev, addr, size, DMA_TO_DEVICE);
+       if (write) {
+               /*
+                * INTR__PROGRAM_COMP is never asserted for the DMA transfer.
+                * We can use INTR__DMA_CMD_COMP instead.  This flag is asserted
+                * when the page program is completed.
+                */
+               irq_mask = INTR__DMA_CMD_COMP | INTR__PROGRAM_FAIL;
+               ecc_err_mask = 0;
+       } else if (denali->caps & DENALI_CAP_HW_ECC_FIXUP) {
+               irq_mask = INTR__DMA_CMD_COMP;
+               ecc_err_mask = INTR__ECC_UNCOR_ERR;
+       } else {
+               irq_mask = INTR__DMA_CMD_COMP;
+               ecc_err_mask = INTR__ECC_ERR;
+       }
 
-       clear_interrupts(denali);
        denali_enable_dma(denali, true);
 
-       denali_setup_dma(denali, DENALI_WRITE);
+       denali_reset_irq(denali);
+       denali_setup_dma(denali, dma_addr, page, write);
 
        /* wait for operation to complete */
-       irq_status = wait_for_irq(denali, irq_mask);
-
-       if (irq_status == 0) {
-               dev_err(denali->dev, "timeout on write_page (type = %d)\n",
-                       raw_xfer);
-               denali->status = NAND_STATUS_FAIL;
-       }
+       irq_status = denali_wait_for_irq(denali, irq_mask);
+       if (!(irq_status & INTR__DMA_CMD_COMP))
+               ret = -EIO;
+       else if (irq_status & ecc_err_mask)
+               ret = -EBADMSG;
 
        denali_enable_dma(denali, false);
-       dma_sync_single_for_cpu(denali->dev, addr, size, DMA_TO_DEVICE);
+       dma_unmap_single(denali->dev, dma_addr, size, dir);
 
-       return 0;
-}
+       if (irq_status & INTR__ERASED_PAGE)
+               memset(buf, 0xff, size);
 
-/* NAND core entry points */
+       return ret;
+}
 
-/*
- * this is the callback that the NAND core calls to write a page. Since
- * writing a page with ECC or without is similar, all the work is done
- * by write_page above.
- */
-static int denali_write_page(struct mtd_info *mtd, struct nand_chip *chip,
-                               const uint8_t *buf, int oob_required, int page)
+static int denali_data_xfer(struct denali_nand_info *denali, void *buf,
+                           size_t size, int page, int raw, int write)
 {
-       /*
-        * for regular page writes, we let HW handle all the ECC
-        * data written to the device.
-        */
-       return write_page(mtd, chip, buf, false);
+       setup_ecc_for_xfer(denali, !raw, raw);
+
+       if (denali->dma_avail)
+               return denali_dma_xfer(denali, buf, size, page, raw, write);
+       else
+               return denali_pio_xfer(denali, buf, size, page, raw, write);
 }
 
-/*
- * This is the callback that the NAND core calls to write a page without ECC.
- * raw access is similar to ECC page writes, so all the work is done in the
- * write_page() function above.
- */
-static int denali_write_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
-                                const uint8_t *buf, int oob_required,
-                                int page)
+static void denali_oob_xfer(struct mtd_info *mtd, struct nand_chip *chip,
+                           int page, int write)
 {
-       /*
-        * for raw page writes, we want to disable ECC and simply write
-        * whatever data is in the buffer.
-        */
-       return write_page(mtd, chip, buf, true);
+       struct denali_nand_info *denali = mtd_to_denali(mtd);
+       unsigned int start_cmd = write ? NAND_CMD_SEQIN : NAND_CMD_READ0;
+       unsigned int rnd_cmd = write ? NAND_CMD_RNDIN : NAND_CMD_RNDOUT;
+       int writesize = mtd->writesize;
+       int oobsize = mtd->oobsize;
+       uint8_t *bufpoi = chip->oob_poi;
+       int ecc_steps = chip->ecc.steps;
+       int ecc_size = chip->ecc.size;
+       int ecc_bytes = chip->ecc.bytes;
+       int oob_skip = denali->oob_skip_bytes;
+       size_t size = writesize + oobsize;
+       int i, pos, len;
+
+       /* BBM at the beginning of the OOB area */
+       chip->cmdfunc(mtd, start_cmd, writesize, page);
+       if (write)
+               chip->write_buf(mtd, bufpoi, oob_skip);
+       else
+               chip->read_buf(mtd, bufpoi, oob_skip);
+       bufpoi += oob_skip;
+
+       /* OOB ECC */
+       for (i = 0; i < ecc_steps; i++) {
+               pos = ecc_size + i * (ecc_size + ecc_bytes);
+               len = ecc_bytes;
+
+               if (pos >= writesize)
+                       pos += oob_skip;
+               else if (pos + len > writesize)
+                       len = writesize - pos;
+
+               chip->cmdfunc(mtd, rnd_cmd, pos, -1);
+               if (write)
+                       chip->write_buf(mtd, bufpoi, len);
+               else
+                       chip->read_buf(mtd, bufpoi, len);
+               bufpoi += len;
+               if (len < ecc_bytes) {
+                       len = ecc_bytes - len;
+                       chip->cmdfunc(mtd, rnd_cmd, writesize + oob_skip, -1);
+                       if (write)
+                               chip->write_buf(mtd, bufpoi, len);
+                       else
+                               chip->read_buf(mtd, bufpoi, len);
+                       bufpoi += len;
+               }
+       }
+
+       /* OOB free */
+       len = oobsize - (bufpoi - chip->oob_poi);
+       chip->cmdfunc(mtd, rnd_cmd, size - len, -1);
+       if (write)
+               chip->write_buf(mtd, bufpoi, len);
+       else
+               chip->read_buf(mtd, bufpoi, len);
 }
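denali_oob_xfer() above, and the raw page accessors elsewhere in this patch, all walk the same on-flash layout: payload and ECC bytes are interleaved per ECC step, the first oob_skip_bytes past the payload area are reserved for the bad block marker, and anything that would land on that marker is pushed (or split) around it. A standalone illustration of where the pieces end up, with every geometry value assumed rather than taken from the patch:

    #include <stdio.h>

    int main(void)
    {
            const int writesize = 2048, oobsize = 64;   /* assumed page geometry */
            const int ecc_size = 512, ecc_bytes = 14;   /* assumed ECC step/bytes */
            const int oob_skip = 8;                     /* assumed BBM skip bytes */
            const int ecc_steps = writesize / ecc_size;
            int i;

            for (i = 0; i < ecc_steps; i++) {
                    int data_pos = i * (ecc_size + ecc_bytes);
                    int ecc_pos = data_pos + ecc_size;

                    /*
                     * Regions entirely at or beyond 'writesize' are shifted
                     * past the BBM; regions straddling it are split by the
                     * driver, which this simplified printout glosses over.
                     */
                    printf("step %d: payload @%d, ecc @%d\n", i,
                           data_pos >= writesize ? data_pos + oob_skip : data_pos,
                           ecc_pos >= writesize ? ecc_pos + oob_skip : ecc_pos);
            }

            printf("BBM @%d (%d bytes), page + oob = %d bytes\n",
                   writesize, oob_skip, writesize + oobsize);
            return 0;
    }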
 
-static int denali_write_oob(struct mtd_info *mtd, struct nand_chip *chip,
-                           int page)
+static int denali_read_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
+                               uint8_t *buf, int oob_required, int page)
 {
-       return write_oob_data(mtd, chip->oob_poi, page);
+       struct denali_nand_info *denali = mtd_to_denali(mtd);
+       int writesize = mtd->writesize;
+       int oobsize = mtd->oobsize;
+       int ecc_steps = chip->ecc.steps;
+       int ecc_size = chip->ecc.size;
+       int ecc_bytes = chip->ecc.bytes;
+       void *dma_buf = denali->buf;
+       int oob_skip = denali->oob_skip_bytes;
+       size_t size = writesize + oobsize;
+       int ret, i, pos, len;
+
+       ret = denali_data_xfer(denali, dma_buf, size, page, 1, 0);
+       if (ret)
+               return ret;
+
+       /* Arrange the buffer for syndrome payload/ecc layout */
+       if (buf) {
+               for (i = 0; i < ecc_steps; i++) {
+                       pos = i * (ecc_size + ecc_bytes);
+                       len = ecc_size;
+
+                       if (pos >= writesize)
+                               pos += oob_skip;
+                       else if (pos + len > writesize)
+                               len = writesize - pos;
+
+                       memcpy(buf, dma_buf + pos, len);
+                       buf += len;
+                       if (len < ecc_size) {
+                               len = ecc_size - len;
+                               memcpy(buf, dma_buf + writesize + oob_skip,
+                                      len);
+                               buf += len;
+                       }
+               }
+       }
+
+       if (oob_required) {
+               uint8_t *oob = chip->oob_poi;
+
+               /* BBM at the beginning of the OOB area */
+               memcpy(oob, dma_buf + writesize, oob_skip);
+               oob += oob_skip;
+
+               /* OOB ECC */
+               for (i = 0; i < ecc_steps; i++) {
+                       pos = ecc_size + i * (ecc_size + ecc_bytes);
+                       len = ecc_bytes;
+
+                       if (pos >= writesize)
+                               pos += oob_skip;
+                       else if (pos + len > writesize)
+                               len = writesize - pos;
+
+                       memcpy(oob, dma_buf + pos, len);
+                       oob += len;
+                       if (len < ecc_bytes) {
+                               len = ecc_bytes - len;
+                               memcpy(oob, dma_buf + writesize + oob_skip,
+                                      len);
+                               oob += len;
+                       }
+               }
+
+               /* OOB free */
+               len = oobsize - (oob - chip->oob_poi);
+               memcpy(oob, dma_buf + size - len, len);
+       }
+
+       return 0;
 }
 
 static int denali_read_oob(struct mtd_info *mtd, struct nand_chip *chip,
                           int page)
 {
-       read_oob_data(mtd, chip->oob_poi, page);
+       denali_oob_xfer(mtd, chip, page, 0);
 
        return 0;
 }
 
-static int denali_read_page(struct mtd_info *mtd, struct nand_chip *chip,
-                           uint8_t *buf, int oob_required, int page)
+static int denali_write_oob(struct mtd_info *mtd, struct nand_chip *chip,
+                           int page)
 {
        struct denali_nand_info *denali = mtd_to_denali(mtd);
-       dma_addr_t addr = denali->buf.dma_buf;
-       size_t size = mtd->writesize + mtd->oobsize;
-       uint32_t irq_status;
-       uint32_t irq_mask = denali->caps & DENALI_CAP_HW_ECC_FIXUP ?
-                               INTR__DMA_CMD_COMP | INTR__ECC_UNCOR_ERR :
-                               INTR__ECC_TRANSACTION_DONE | INTR__ECC_ERR;
-       unsigned long uncor_ecc_flags = 0;
-       int stat = 0;
+       int status;
 
-       if (page != denali->page) {
-               dev_err(denali->dev,
-                       "IN %s: page %d is not equal to denali->page %d",
-                       __func__, page, denali->page);
-               BUG();
-       }
+       denali_reset_irq(denali);
 
-       setup_ecc_for_xfer(denali, true, false);
+       denali_oob_xfer(mtd, chip, page, 1);
 
-       denali_enable_dma(denali, true);
-       dma_sync_single_for_device(denali->dev, addr, size, DMA_FROM_DEVICE);
+       chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
+       status = chip->waitfunc(mtd, chip);
 
-       clear_interrupts(denali);
-       denali_setup_dma(denali, DENALI_READ);
-
-       /* wait for operation to complete */
-       irq_status = wait_for_irq(denali, irq_mask);
+       return status & NAND_STATUS_FAIL ? -EIO : 0;
+}
 
-       dma_sync_single_for_cpu(denali->dev, addr, size, DMA_FROM_DEVICE);
+static int denali_read_page(struct mtd_info *mtd, struct nand_chip *chip,
+                           uint8_t *buf, int oob_required, int page)
+{
+       struct denali_nand_info *denali = mtd_to_denali(mtd);
+       unsigned long uncor_ecc_flags = 0;
+       int stat = 0;
+       int ret;
 
-       memcpy(buf, denali->buf.buf, mtd->writesize);
+       ret = denali_data_xfer(denali, buf, mtd->writesize, page, 0, 0);
+       if (ret && ret != -EBADMSG)
+               return ret;
 
        if (denali->caps & DENALI_CAP_HW_ECC_FIXUP)
                stat = denali_hw_ecc_fixup(mtd, denali, &uncor_ecc_flags);
-       else if (irq_status & INTR__ECC_ERR)
+       else if (ret == -EBADMSG)
                stat = denali_sw_ecc_fixup(mtd, denali, &uncor_ecc_flags, buf);
-       denali_enable_dma(denali, false);
 
        if (stat < 0)
                return stat;
 
        if (uncor_ecc_flags) {
-               read_oob_data(mtd, chip->oob_poi, denali->page);
+               ret = denali_read_oob(mtd, chip, page);
+               if (ret)
+                       return ret;
 
                stat = denali_check_erased_page(mtd, chip, buf,
                                                uncor_ecc_flags, stat);
@@ -1181,137 +858,266 @@ static int denali_read_page(struct mtd_info *mtd, struct nand_chip *chip,
        return stat;
 }
 
-static int denali_read_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
-                               uint8_t *buf, int oob_required, int page)
+static int denali_write_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
+                                const uint8_t *buf, int oob_required, int page)
 {
        struct denali_nand_info *denali = mtd_to_denali(mtd);
-       dma_addr_t addr = denali->buf.dma_buf;
-       size_t size = mtd->writesize + mtd->oobsize;
-       uint32_t irq_mask = INTR__DMA_CMD_COMP;
-
-       if (page != denali->page) {
-               dev_err(denali->dev,
-                       "IN %s: page %d is not equal to denali->page %d",
-                       __func__, page, denali->page);
-               BUG();
-       }
-
-       setup_ecc_for_xfer(denali, false, true);
-       denali_enable_dma(denali, true);
-
-       dma_sync_single_for_device(denali->dev, addr, size, DMA_FROM_DEVICE);
-
-       clear_interrupts(denali);
-       denali_setup_dma(denali, DENALI_READ);
-
-       /* wait for operation to complete */
-       wait_for_irq(denali, irq_mask);
+       int writesize = mtd->writesize;
+       int oobsize = mtd->oobsize;
+       int ecc_steps = chip->ecc.steps;
+       int ecc_size = chip->ecc.size;
+       int ecc_bytes = chip->ecc.bytes;
+       void *dma_buf = denali->buf;
+       int oob_skip = denali->oob_skip_bytes;
+       size_t size = writesize + oobsize;
+       int i, pos, len;
 
-       dma_sync_single_for_cpu(denali->dev, addr, size, DMA_FROM_DEVICE);
+       /*
+        * Fill the buffer with 0xff first unless this is a full page transfer
+        * (both payload and OOB supplied); this keeps the copy logic simple.
+        */
+       if (!buf || !oob_required)
+               memset(dma_buf, 0xff, size);
+
+       /* Arrange the buffer for syndrome payload/ecc layout */
+       if (buf) {
+               for (i = 0; i < ecc_steps; i++) {
+                       pos = i * (ecc_size + ecc_bytes);
+                       len = ecc_size;
+
+                       if (pos >= writesize)
+                               pos += oob_skip;
+                       else if (pos + len > writesize)
+                               len = writesize - pos;
+
+                       memcpy(dma_buf + pos, buf, len);
+                       buf += len;
+                       if (len < ecc_size) {
+                               len = ecc_size - len;
+                               memcpy(dma_buf + writesize + oob_skip, buf,
+                                      len);
+                               buf += len;
+                       }
+               }
+       }
 
-       denali_enable_dma(denali, false);
+       if (oob_required) {
+               const uint8_t *oob = chip->oob_poi;
+
+               /* BBM at the beginning of the OOB area */
+               memcpy(dma_buf + writesize, oob, oob_skip);
+               oob += oob_skip;
+
+               /* OOB ECC */
+               for (i = 0; i < ecc_steps; i++) {
+                       pos = ecc_size + i * (ecc_size + ecc_bytes);
+                       len = ecc_bytes;
+
+                       if (pos >= writesize)
+                               pos += oob_skip;
+                       else if (pos + len > writesize)
+                               len = writesize - pos;
+
+                       memcpy(dma_buf + pos, oob, len);
+                       oob += len;
+                       if (len < ecc_bytes) {
+                               len = ecc_bytes - len;
+                               memcpy(dma_buf + writesize + oob_skip, oob,
+                                      len);
+                               oob += len;
+                       }
+               }
 
-       memcpy(buf, denali->buf.buf, mtd->writesize);
-       memcpy(chip->oob_poi, denali->buf.buf + mtd->writesize, mtd->oobsize);
+               /* OOB free */
+               len = oobsize - (oob - chip->oob_poi);
+               memcpy(dma_buf + size - len, oob, len);
+       }
 
-       return 0;
+       return denali_data_xfer(denali, dma_buf, size, page, 1, 1);
 }
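
The raw page accessor above assembles the whole page plus OOB into a single
buffer in the controller's syndrome order before handing it to
denali_data_xfer().  A sketch of the resulting layout (illustration only;
"skip" stands for the oob_skip_bytes kept at the start of the spare area for
the bad block marker):

	/*
	 * |payload0|ecc0|payload1|ecc1| ... |skip| remaining chunks |free OOB|
	 * |<----------- writesize ---------->|<--------- oobsize ---------->|
	 *
	 * A payload or ECC chunk that would cross the writesize boundary is
	 * split; the part beyond the boundary continues right after the skip
	 * bytes.
	 */

The matching denali_read_page_raw() (not visible in this excerpt) performs the
inverse unpacking into buf and chip->oob_poi.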
 
-static uint8_t denali_read_byte(struct mtd_info *mtd)
+static int denali_write_page(struct mtd_info *mtd, struct nand_chip *chip,
+                            const uint8_t *buf, int oob_required, int page)
 {
        struct denali_nand_info *denali = mtd_to_denali(mtd);
-       uint8_t result = 0xff;
-
-       if (denali->buf.head < denali->buf.tail)
-               result = denali->buf.buf[denali->buf.head++];
 
-       return result;
+       return denali_data_xfer(denali, (void *)buf, mtd->writesize,
+                               page, 0, 1);
 }
 
 static void denali_select_chip(struct mtd_info *mtd, int chip)
 {
        struct denali_nand_info *denali = mtd_to_denali(mtd);
 
-       spin_lock_irq(&denali->irq_lock);
-       denali->flash_bank = chip;
-       spin_unlock_irq(&denali->irq_lock);
+       denali->active_bank = chip;
 }
 
 static int denali_waitfunc(struct mtd_info *mtd, struct nand_chip *chip)
 {
        struct denali_nand_info *denali = mtd_to_denali(mtd);
-       int status = denali->status;
+       uint32_t irq_status;
 
-       denali->status = 0;
+       /* R/B# pin transitioned from low to high? */
+       irq_status = denali_wait_for_irq(denali, INTR__INT_ACT);
 
-       return status;
+       return irq_status & INTR__INT_ACT ? 0 : NAND_STATUS_FAIL;
 }
 
 static int denali_erase(struct mtd_info *mtd, int page)
 {
        struct denali_nand_info *denali = mtd_to_denali(mtd);
+       uint32_t irq_status;
 
-       uint32_t cmd, irq_status;
-
-       clear_interrupts(denali);
+       denali_reset_irq(denali);
 
-       /* setup page read request for access type */
-       cmd = MODE_10 | BANK(denali->flash_bank) | page;
-       index_addr(denali, cmd, 0x1);
+       denali_host_write(denali, DENALI_MAP10 | DENALI_BANK(denali) | page,
+                         DENALI_ERASE);
 
        /* wait for erase to complete or failure to occur */
-       irq_status = wait_for_irq(denali, INTR__ERASE_COMP | INTR__ERASE_FAIL);
+       irq_status = denali_wait_for_irq(denali,
+                                        INTR__ERASE_COMP | INTR__ERASE_FAIL);
 
-       return irq_status & INTR__ERASE_FAIL ? NAND_STATUS_FAIL : PASS;
+       return irq_status & INTR__ERASE_COMP ? 0 : NAND_STATUS_FAIL;
 }
 
-static void denali_cmdfunc(struct mtd_info *mtd, unsigned int cmd, int col,
-                          int page)
+#define DIV_ROUND_DOWN_ULL(ll, d) \
+       ({ unsigned long long _tmp = (ll); do_div(_tmp, d); _tmp; })
+
+static int denali_setup_data_interface(struct mtd_info *mtd, int chipnr,
+                                      const struct nand_data_interface *conf)
 {
        struct denali_nand_info *denali = mtd_to_denali(mtd);
-       uint32_t addr, id;
+       const struct nand_sdr_timings *timings;
+       unsigned long t_clk;
+       int acc_clks, re_2_we, re_2_re, we_2_re, addr_2_data;
+       int rdwr_en_lo, rdwr_en_hi, rdwr_en_lo_hi, cs_setup;
+       int addr_2_data_mask;
+       uint32_t tmp;
+
+       timings = nand_get_sdr_timings(conf);
+       if (IS_ERR(timings))
+               return PTR_ERR(timings);
+
+       /* clk_x period in picoseconds */
+       t_clk = DIV_ROUND_DOWN_ULL(1000000000000ULL, denali->clk_x_rate);
+       if (!t_clk)
+               return -EINVAL;
+
+       if (chipnr == NAND_DATA_IFACE_CHECK_ONLY)
+               return 0;
+
+       /* tREA -> ACC_CLKS */
+       acc_clks = DIV_ROUND_UP(timings->tREA_max, t_clk);
+       acc_clks = min_t(int, acc_clks, ACC_CLKS__VALUE);
+
+       tmp = ioread32(denali->reg + ACC_CLKS);
+       tmp &= ~ACC_CLKS__VALUE;
+       tmp |= acc_clks;
+       iowrite32(tmp, denali->reg + ACC_CLKS);
+
+       /* tRHW -> RE_2_WE */
+       re_2_we = DIV_ROUND_UP(timings->tRHW_min, t_clk);
+       re_2_we = min_t(int, re_2_we, RE_2_WE__VALUE);
+
+       tmp = ioread32(denali->reg + RE_2_WE);
+       tmp &= ~RE_2_WE__VALUE;
+       tmp |= re_2_we;
+       iowrite32(tmp, denali->reg + RE_2_WE);
+
+       /* tRHZ -> RE_2_RE */
+       re_2_re = DIV_ROUND_UP(timings->tRHZ_max, t_clk);
+       re_2_re = min_t(int, re_2_re, RE_2_RE__VALUE);
+
+       tmp = ioread32(denali->reg + RE_2_RE);
+       tmp &= ~RE_2_RE__VALUE;
+       tmp |= re_2_re;
+       iowrite32(tmp, denali->reg + RE_2_RE);
+
+       /* tWHR -> WE_2_RE */
+       we_2_re = DIV_ROUND_UP(timings->tWHR_min, t_clk);
+       we_2_re = min_t(int, we_2_re, TWHR2_AND_WE_2_RE__WE_2_RE);
+
+       tmp = ioread32(denali->reg + TWHR2_AND_WE_2_RE);
+       tmp &= ~TWHR2_AND_WE_2_RE__WE_2_RE;
+       tmp |= we_2_re;
+       iowrite32(tmp, denali->reg + TWHR2_AND_WE_2_RE);
+
+       /* tADL -> ADDR_2_DATA */
+
+       /* for older versions, ADDR_2_DATA is only 6 bit wide */
+       addr_2_data_mask = TCWAW_AND_ADDR_2_DATA__ADDR_2_DATA;
+       if (denali->revision < 0x0501)
+               addr_2_data_mask >>= 1;
+
+       addr_2_data = DIV_ROUND_UP(timings->tADL_min, t_clk);
+       addr_2_data = min_t(int, addr_2_data, addr_2_data_mask);
+
+       tmp = ioread32(denali->reg + TCWAW_AND_ADDR_2_DATA);
+       tmp &= ~addr_2_data_mask;
+       tmp |= addr_2_data;
+       iowrite32(tmp, denali->reg + TCWAW_AND_ADDR_2_DATA);
+
+       /* tREH, tWH -> RDWR_EN_HI_CNT */
+       rdwr_en_hi = DIV_ROUND_UP(max(timings->tREH_min, timings->tWH_min),
+                                 t_clk);
+       rdwr_en_hi = min_t(int, rdwr_en_hi, RDWR_EN_HI_CNT__VALUE);
+
+       tmp = ioread32(denali->reg + RDWR_EN_HI_CNT);
+       tmp &= ~RDWR_EN_HI_CNT__VALUE;
+       tmp |= rdwr_en_hi;
+       iowrite32(tmp, denali->reg + RDWR_EN_HI_CNT);
+
+       /* tRP, tWP -> RDWR_EN_LO_CNT */
+       rdwr_en_lo = DIV_ROUND_UP(max(timings->tRP_min, timings->tWP_min),
+                                 t_clk);
+       rdwr_en_lo_hi = DIV_ROUND_UP(max(timings->tRC_min, timings->tWC_min),
+                                    t_clk);
+       rdwr_en_lo_hi = max(rdwr_en_lo_hi, DENALI_CLK_X_MULT);
+       rdwr_en_lo = max(rdwr_en_lo, rdwr_en_lo_hi - rdwr_en_hi);
+       rdwr_en_lo = min_t(int, rdwr_en_lo, RDWR_EN_LO_CNT__VALUE);
+
+       tmp = ioread32(denali->reg + RDWR_EN_LO_CNT);
+       tmp &= ~RDWR_EN_LO_CNT__VALUE;
+       tmp |= rdwr_en_lo;
+       iowrite32(tmp, denali->reg + RDWR_EN_LO_CNT);
+
+       /* tCS, tCEA -> CS_SETUP_CNT */
+       cs_setup = max3((int)DIV_ROUND_UP(timings->tCS_min, t_clk) - rdwr_en_lo,
+                       (int)DIV_ROUND_UP(timings->tCEA_max, t_clk) - acc_clks,
+                       0);
+       cs_setup = min_t(int, cs_setup, CS_SETUP_CNT__VALUE);
+
+       tmp = ioread32(denali->reg + CS_SETUP_CNT);
+       tmp &= ~CS_SETUP_CNT__VALUE;
+       tmp |= cs_setup;
+       iowrite32(tmp, denali->reg + CS_SETUP_CNT);
+
+       return 0;
+}
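
A rough worked example of the conversion above, assuming clk_x_rate is
200 MHz (the value the PCI glue hard-codes later in this series) and ONFI
SDR timing mode 5 (tREA_max of roughly 16 ns); the numbers are illustrative
only:

	/*
	 * t_clk    = 1000000000000 / 200000000   = 5000 ps per clk_x cycle
	 * tREA_max = 16000 ps
	 * acc_clks = DIV_ROUND_UP(16000, 5000)   = 4  (fits in ACC_CLKS__VALUE)
	 */

Every field follows the same pattern: round the required time up to whole
clk_x cycles, then clamp the result to the width of the register field.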
+
+static void denali_reset_banks(struct denali_nand_info *denali)
+{
+       u32 irq_status;
        int i;
 
-       switch (cmd) {
-       case NAND_CMD_PAGEPROG:
-               break;
-       case NAND_CMD_STATUS:
-               read_status(denali);
-               break;
-       case NAND_CMD_READID:
-       case NAND_CMD_PARAM:
-               reset_buf(denali);
-               /*
-                * sometimes ManufactureId read from register is not right
-                * e.g. some of Micron MT29F32G08QAA MLC NAND chips
-                * So here we send READID cmd to NAND insteand
-                */
-               addr = MODE_11 | BANK(denali->flash_bank);
-               index_addr(denali, addr | 0, 0x90);
-               index_addr(denali, addr | 1, col);
-               for (i = 0; i < 8; i++) {
-                       index_addr_read_data(denali, addr | 2, &id);
-                       write_byte_to_buf(denali, id);
-               }
-               break;
-       case NAND_CMD_READ0:
-       case NAND_CMD_SEQIN:
-               denali->page = page;
-               break;
-       case NAND_CMD_RESET:
-               reset_bank(denali);
-               break;
-       case NAND_CMD_READOOB:
-               /* TODO: Read OOB data */
-               break;
-       default:
-               pr_err(": unsupported command received 0x%x\n", cmd);
-               break;
+       for (i = 0; i < denali->max_banks; i++) {
+               denali->active_bank = i;
+
+               denali_reset_irq(denali);
+
+               iowrite32(DEVICE_RESET__BANK(i),
+                         denali->reg + DEVICE_RESET);
+
+               irq_status = denali_wait_for_irq(denali,
+                       INTR__RST_COMP | INTR__INT_ACT | INTR__TIME_OUT);
+               if (!(irq_status & INTR__INT_ACT))
+                       break;
        }
+
+       dev_dbg(denali->dev, "%d chips connected\n", i);
+       denali->max_banks = i;
 }
-/* end NAND core entry points */
 
-/* Initialization code to bring the device up to a known good state */
 static void denali_hw_init(struct denali_nand_info *denali)
 {
        /*
@@ -1319,8 +1125,7 @@ static void denali_hw_init(struct denali_nand_info *denali)
         * override it.
         */
        if (!denali->revision)
-               denali->revision =
-                               swab16(ioread32(denali->flash_reg + REVISION));
+               denali->revision = swab16(ioread32(denali->reg + REVISION));
 
        /*
         * tell driver how many bit controller will skip before
@@ -1328,30 +1133,51 @@ static void denali_hw_init(struct denali_nand_info *denali)
         * set by firmware. So we read this value out.
         * if this value is 0, just let it be.
         */
-       denali->bbtskipbytes = ioread32(denali->flash_reg +
-                                               SPARE_AREA_SKIP_BYTES);
+       denali->oob_skip_bytes = ioread32(denali->reg + SPARE_AREA_SKIP_BYTES);
        detect_max_banks(denali);
-       denali_nand_reset(denali);
-       iowrite32(0x0F, denali->flash_reg + RB_PIN_ENABLED);
-       iowrite32(CHIP_EN_DONT_CARE__FLAG,
-                       denali->flash_reg + CHIP_ENABLE_DONT_CARE);
+       iowrite32(0x0F, denali->reg + RB_PIN_ENABLED);
+       iowrite32(CHIP_EN_DONT_CARE__FLAG, denali->reg + CHIP_ENABLE_DONT_CARE);
 
-       iowrite32(0xffff, denali->flash_reg + SPARE_AREA_MARKER);
+       iowrite32(0xffff, denali->reg + SPARE_AREA_MARKER);
 
        /* Set sensible default values for these registers at init time */
-       iowrite32(0, denali->flash_reg + TWO_ROW_ADDR_CYCLES);
-       iowrite32(1, denali->flash_reg + ECC_ENABLE);
-       denali_nand_timing_set(denali);
-       denali_irq_init(denali);
+       iowrite32(0, denali->reg + TWO_ROW_ADDR_CYCLES);
+       iowrite32(1, denali->reg + ECC_ENABLE);
 }
 
-/*
- * Althogh controller spec said SLC ECC is forceb to be 4bit,
- * but denali controller in MRST only support 15bit and 8bit ECC
- * correction
- */
-#define ECC_8BITS      14
-#define ECC_15BITS     26
+int denali_calc_ecc_bytes(int step_size, int strength)
+{
+       /* BCH code.  Denali requires ecc.bytes to be a multiple of 2 */
+       return DIV_ROUND_UP(strength * fls(step_size * 8), 16) * 2;
+}
+EXPORT_SYMBOL(denali_calc_ecc_bytes);
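
Plugging in the step size and strengths the driver used to hard-code
reproduces the ECC_8BITS/ECC_15BITS byte counts removed above, which serves
as a quick sanity check of the formula:

	/*
	 * denali_calc_ecc_bytes(512, 8):
	 *   fls(512 * 8) = fls(4096) = 13
	 *   DIV_ROUND_UP(8 * 13, 16) * 2  = 7 * 2  = 14  (old ECC_8BITS)
	 *
	 * denali_calc_ecc_bytes(512, 15):
	 *   DIV_ROUND_UP(15 * 13, 16) * 2 = 13 * 2 = 26  (old ECC_15BITS)
	 */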
+
+static int denali_ecc_setup(struct mtd_info *mtd, struct nand_chip *chip,
+                           struct denali_nand_info *denali)
+{
+       int oobavail = mtd->oobsize - denali->oob_skip_bytes;
+       int ret;
+
+       /*
+        * If .size and .strength are already set (usually by DT),
+        * check if they are supported by this controller.
+        */
+       if (chip->ecc.size && chip->ecc.strength)
+               return nand_check_ecc_caps(chip, denali->ecc_caps, oobavail);
+
+       /*
+        * We want .size and .strength closest to the chip's requirement
+        * unless NAND_ECC_MAXIMIZE is requested.
+        */
+       if (!(chip->ecc.options & NAND_ECC_MAXIMIZE)) {
+               ret = nand_match_ecc_req(chip, denali->ecc_caps, oobavail);
+               if (!ret)
+                       return 0;
+       }
+
+       /* Max ECC strength is the last thing we can do */
+       return nand_maximize_ecc(chip, denali->ecc_caps, oobavail);
+}
 
 static int denali_ooblayout_ecc(struct mtd_info *mtd, int section,
                                struct mtd_oob_region *oobregion)
@@ -1362,7 +1188,7 @@ static int denali_ooblayout_ecc(struct mtd_info *mtd, int section,
        if (section)
                return -ERANGE;
 
-       oobregion->offset = denali->bbtskipbytes;
+       oobregion->offset = denali->oob_skip_bytes;
        oobregion->length = chip->ecc.total;
 
        return 0;
@@ -1377,7 +1203,7 @@ static int denali_ooblayout_free(struct mtd_info *mtd, int section,
        if (section)
                return -ERANGE;
 
-       oobregion->offset = chip->ecc.total + denali->bbtskipbytes;
+       oobregion->offset = chip->ecc.total + denali->oob_skip_bytes;
        oobregion->length = mtd->oobsize - oobregion->offset;
 
        return 0;
@@ -1388,29 +1214,6 @@ static const struct mtd_ooblayout_ops denali_ooblayout_ops = {
        .free = denali_ooblayout_free,
 };
 
-static uint8_t bbt_pattern[] = {'B', 'b', 't', '0' };
-static uint8_t mirror_pattern[] = {'1', 't', 'b', 'B' };
-
-static struct nand_bbt_descr bbt_main_descr = {
-       .options = NAND_BBT_LASTBLOCK | NAND_BBT_CREATE | NAND_BBT_WRITE
-               | NAND_BBT_2BIT | NAND_BBT_VERSION | NAND_BBT_PERCHIP,
-       .offs = 8,
-       .len = 4,
-       .veroffs = 12,
-       .maxblocks = 4,
-       .pattern = bbt_pattern,
-};
-
-static struct nand_bbt_descr bbt_mirror_descr = {
-       .options = NAND_BBT_LASTBLOCK | NAND_BBT_CREATE | NAND_BBT_WRITE
-               | NAND_BBT_2BIT | NAND_BBT_VERSION | NAND_BBT_PERCHIP,
-       .offs = 8,
-       .len = 4,
-       .veroffs = 12,
-       .maxblocks = 4,
-       .pattern = mirror_pattern,
-};
-
 /* initialize driver data structures */
 static void denali_drv_init(struct denali_nand_info *denali)
 {
@@ -1425,12 +1228,6 @@ static void denali_drv_init(struct denali_nand_info *denali)
         * element that might access shared data (interrupt status)
         */
        spin_lock_init(&denali->irq_lock);
-
-       /* indicate that MTD has not selected a valid bank yet */
-       denali->flash_bank = CHIP_SELECT_INVALID;
-
-       /* initialize our irq_status variable to indicate no interrupts */
-       denali->irq_status = 0;
 }
 
 static int denali_multidev_fixup(struct denali_nand_info *denali)
@@ -1445,23 +1242,23 @@ static int denali_multidev_fixup(struct denali_nand_info *denali)
         * In this case, the core framework knows nothing about this fact,
         * so we should tell it the _logical_ pagesize and anything necessary.
         */
-       denali->devnum = ioread32(denali->flash_reg + DEVICES_CONNECTED);
+       denali->devs_per_cs = ioread32(denali->reg + DEVICES_CONNECTED);
 
        /*
         * On some SoCs, DEVICES_CONNECTED is not auto-detected.
         * For those, DEVICES_CONNECTED is left at 0.  Set it to 1 in that case.
         */
-       if (denali->devnum == 0) {
-               denali->devnum = 1;
-               iowrite32(1, denali->flash_reg + DEVICES_CONNECTED);
+       if (denali->devs_per_cs == 0) {
+               denali->devs_per_cs = 1;
+               iowrite32(1, denali->reg + DEVICES_CONNECTED);
        }
 
-       if (denali->devnum == 1)
+       if (denali->devs_per_cs == 1)
                return 0;
 
-       if (denali->devnum != 2) {
+       if (denali->devs_per_cs != 2) {
                dev_err(denali->dev, "unsupported number of devices %d\n",
-                       denali->devnum);
+                       denali->devs_per_cs);
                return -EINVAL;
        }
 
@@ -1479,7 +1276,7 @@ static int denali_multidev_fixup(struct denali_nand_info *denali)
        chip->ecc.size <<= 1;
        chip->ecc.bytes <<= 1;
        chip->ecc.strength <<= 1;
-       denali->bbtskipbytes <<= 1;
+       denali->oob_skip_bytes <<= 1;
 
        return 0;
 }
@@ -1490,27 +1287,12 @@ int denali_init(struct denali_nand_info *denali)
        struct mtd_info *mtd = nand_to_mtd(chip);
        int ret;
 
-       if (denali->platform == INTEL_CE4100) {
-               /*
-                * Due to a silicon limitation, we can only support
-                * ONFI timing mode 1 and below.
-                */
-               if (onfi_timing_mode < -1 || onfi_timing_mode > 1) {
-                       pr_err("Intel CE4100 only supports ONFI timing mode 1 or below\n");
-                       return -EINVAL;
-               }
-       }
-
-       /* allocate a temporary buffer for nand_scan_ident() */
-       denali->buf.buf = devm_kzalloc(denali->dev, PAGE_SIZE,
-                                       GFP_DMA | GFP_KERNEL);
-       if (!denali->buf.buf)
-               return -ENOMEM;
-
        mtd->dev.parent = denali->dev;
        denali_hw_init(denali);
        denali_drv_init(denali);
 
+       denali_clear_irq_all(denali);
+
        /* Request IRQ after all the hardware initialization is finished */
        ret = devm_request_irq(denali->dev, denali->irq, denali_isr,
                               IRQF_SHARED, DENALI_NAND_NAME, denali);
@@ -1519,8 +1301,11 @@ int denali_init(struct denali_nand_info *denali)
                return ret;
        }
 
-       /* now that our ISR is registered, we can enable interrupts */
-       denali_set_intr_modes(denali, true);
+       denali_enable_irq(denali);
+       denali_reset_banks(denali);
+
+       denali->active_bank = DENALI_INVALID_BANK;
+
        nand_set_flash_node(chip, denali->dev->of_node);
        /* Fallback to the default name if DT did not give "label" property */
        if (!mtd->name)
@@ -1528,10 +1313,17 @@ int denali_init(struct denali_nand_info *denali)
 
        /* register the driver with the NAND core subsystem */
        chip->select_chip = denali_select_chip;
-       chip->cmdfunc = denali_cmdfunc;
        chip->read_byte = denali_read_byte;
+       chip->write_byte = denali_write_byte;
+       chip->read_word = denali_read_word;
+       chip->cmd_ctrl = denali_cmd_ctrl;
+       chip->dev_ready = denali_dev_ready;
        chip->waitfunc = denali_waitfunc;
 
+       /* clk rate info is needed for setup_data_interface */
+       if (denali->clk_x_rate)
+               chip->setup_data_interface = denali_setup_data_interface;
+
        /*
         * scan for NAND devices attached to the controller
         * this is the first stage in a two step process to register
@@ -1539,33 +1331,25 @@ int denali_init(struct denali_nand_info *denali)
         */
        ret = nand_scan_ident(mtd, denali->max_banks, NULL);
        if (ret)
-               goto failed_req_irq;
-
-       /* allocate the right size buffer now */
-       devm_kfree(denali->dev, denali->buf.buf);
-       denali->buf.buf = devm_kzalloc(denali->dev,
-                            mtd->writesize + mtd->oobsize,
-                            GFP_KERNEL);
-       if (!denali->buf.buf) {
-               ret = -ENOMEM;
-               goto failed_req_irq;
-       }
+               goto disable_irq;
 
-       ret = dma_set_mask(denali->dev,
-                          DMA_BIT_MASK(denali->caps & DENALI_CAP_DMA_64BIT ?
-                                       64 : 32));
-       if (ret) {
-               dev_err(denali->dev, "No usable DMA configuration\n");
-               goto failed_req_irq;
+       if (ioread32(denali->reg + FEATURES) & FEATURES__DMA)
+               denali->dma_avail = 1;
+
+       if (denali->dma_avail) {
+               int dma_bit = denali->caps & DENALI_CAP_DMA_64BIT ? 64 : 32;
+
+               ret = dma_set_mask(denali->dev, DMA_BIT_MASK(dma_bit));
+               if (ret) {
+                       dev_info(denali->dev,
+                                "Failed to set DMA mask. Disabling DMA.\n");
+                       denali->dma_avail = 0;
+               }
        }
 
-       denali->buf.dma_buf = dma_map_single(denali->dev, denali->buf.buf,
-                            mtd->writesize + mtd->oobsize,
-                            DMA_BIDIRECTIONAL);
-       if (dma_mapping_error(denali->dev, denali->buf.dma_buf)) {
-               dev_err(denali->dev, "Failed to map DMA buffer\n");
-               ret = -EIO;
-               goto failed_req_irq;
+       if (denali->dma_avail) {
+               chip->options |= NAND_USE_BOUNCE_BUFFER;
+               chip->buf_align = 16;
        }
 
        /*
@@ -1574,46 +1358,49 @@ int denali_init(struct denali_nand_info *denali)
         * bad block management.
         */
 
-       /* Bad block management */
-       chip->bbt_td = &bbt_main_descr;
-       chip->bbt_md = &bbt_mirror_descr;
-
-       /* skip the scan for now until we have OOB read and write support */
        chip->bbt_options |= NAND_BBT_USE_FLASH;
-       chip->options |= NAND_SKIP_BBTSCAN;
+       chip->bbt_options |= NAND_BBT_NO_OOB;
+
        chip->ecc.mode = NAND_ECC_HW_SYNDROME;
 
        /* no subpage writes on denali */
        chip->options |= NAND_NO_SUBPAGE_WRITE;
 
-       /*
-        * Denali Controller only support 15bit and 8bit ECC in MRST,
-        * so just let controller do 15bit ECC for MLC and 8bit ECC for
-        * SLC if possible.
-        * */
-       if (!nand_is_slc(chip) &&
-                       (mtd->oobsize > (denali->bbtskipbytes +
-                       ECC_15BITS * (mtd->writesize /
-                       ECC_SECTOR_SIZE)))) {
-               /* if MLC OOB size is large enough, use 15bit ECC*/
-               chip->ecc.strength = 15;
-               chip->ecc.bytes = ECC_15BITS;
-               iowrite32(15, denali->flash_reg + ECC_CORRECTION);
-       } else if (mtd->oobsize < (denali->bbtskipbytes +
-                       ECC_8BITS * (mtd->writesize /
-                       ECC_SECTOR_SIZE))) {
-               pr_err("Your NAND chip OOB is not large enough to contain 8bit ECC correction codes");
-               goto failed_req_irq;
-       } else {
-               chip->ecc.strength = 8;
-               chip->ecc.bytes = ECC_8BITS;
-               iowrite32(8, denali->flash_reg + ECC_CORRECTION);
+       ret = denali_ecc_setup(mtd, chip, denali);
+       if (ret) {
+               dev_err(denali->dev, "Failed to setup ECC settings.\n");
+               goto disable_irq;
        }
 
+       dev_dbg(denali->dev,
+               "chosen ECC settings: step=%d, strength=%d, bytes=%d\n",
+               chip->ecc.size, chip->ecc.strength, chip->ecc.bytes);
+
+       iowrite32(MAKE_ECC_CORRECTION(chip->ecc.strength, 1),
+                 denali->reg + ECC_CORRECTION);
+       iowrite32(mtd->erasesize / mtd->writesize,
+                 denali->reg + PAGES_PER_BLOCK);
+       iowrite32(chip->options & NAND_BUSWIDTH_16 ? 1 : 0,
+                 denali->reg + DEVICE_WIDTH);
+       iowrite32(mtd->writesize, denali->reg + DEVICE_MAIN_AREA_SIZE);
+       iowrite32(mtd->oobsize, denali->reg + DEVICE_SPARE_AREA_SIZE);
+
+       iowrite32(chip->ecc.size, denali->reg + CFG_DATA_BLOCK_SIZE);
+       iowrite32(chip->ecc.size, denali->reg + CFG_LAST_DATA_BLOCK_SIZE);
+       /* chip->ecc.steps is set by nand_scan_tail(); not available here */
+       iowrite32(mtd->writesize / chip->ecc.size,
+                 denali->reg + CFG_NUM_DATA_BLOCKS);
+
        mtd_set_ooblayout(mtd, &denali_ooblayout_ops);
 
-       /* override the default read operations */
-       chip->ecc.size = ECC_SECTOR_SIZE;
+       if (chip->options & NAND_BUSWIDTH_16) {
+               chip->read_buf = denali_read_buf16;
+               chip->write_buf = denali_write_buf16;
+       } else {
+               chip->read_buf = denali_read_buf;
+               chip->write_buf = denali_write_buf;
+       }
+       chip->ecc.options |= NAND_ECC_CUSTOM_PAGE_ACCESS;
        chip->ecc.read_page = denali_read_page;
        chip->ecc.read_page_raw = denali_read_page_raw;
        chip->ecc.write_page = denali_write_page;
@@ -1624,21 +1411,34 @@ int denali_init(struct denali_nand_info *denali)
 
        ret = denali_multidev_fixup(denali);
        if (ret)
-               goto failed_req_irq;
+               goto disable_irq;
+
+       /*
+        * This buffer is DMA-mapped by denali_{read,write}_page_raw.  Do not
+        * use devm_kmalloc() because the memory allocated by devm_ does not
+        * guarantee DMA-safe alignment.
+        */
+       denali->buf = kmalloc(mtd->writesize + mtd->oobsize, GFP_KERNEL);
+       if (!denali->buf) {
+               ret = -ENOMEM;
+               goto disable_irq;
+       }
 
        ret = nand_scan_tail(mtd);
        if (ret)
-               goto failed_req_irq;
+               goto free_buf;
 
        ret = mtd_device_register(mtd, NULL, 0);
        if (ret) {
                dev_err(denali->dev, "Failed to register MTD: %d\n", ret);
-               goto failed_req_irq;
+               goto free_buf;
        }
        return 0;
 
-failed_req_irq:
-       denali_irq_cleanup(denali->irq, denali);
+free_buf:
+       kfree(denali->buf);
+disable_irq:
+       denali_disable_irq(denali);
 
        return ret;
 }
@@ -1648,16 +1448,9 @@ EXPORT_SYMBOL(denali_init);
 void denali_remove(struct denali_nand_info *denali)
 {
        struct mtd_info *mtd = nand_to_mtd(&denali->nand);
-       /*
-        * Pre-compute DMA buffer size to avoid any problems in case
-        * nand_release() ever changes in a way that mtd->writesize and
-        * mtd->oobsize are not reliable after this call.
-        */
-       int bufsize = mtd->writesize + mtd->oobsize;
 
        nand_release(mtd);
-       denali_irq_cleanup(denali->irq, denali);
-       dma_unmap_single(denali->dev, denali->buf.dma_buf, bufsize,
-                        DMA_BIDIRECTIONAL);
+       kfree(denali->buf);
+       denali_disable_irq(denali);
 }
 EXPORT_SYMBOL(denali_remove);
index ec004850652a7a67df8be4c984d70db6faf86954..237cc706b0fb4a9ef3ba66c48ae1eaeb21360beb 100644
 #include <linux/mtd/nand.h>
 
 #define DEVICE_RESET                           0x0
-#define     DEVICE_RESET__BANK0                                0x0001
-#define     DEVICE_RESET__BANK1                                0x0002
-#define     DEVICE_RESET__BANK2                                0x0004
-#define     DEVICE_RESET__BANK3                                0x0008
+#define     DEVICE_RESET__BANK(bank)                   BIT(bank)
 
 #define TRANSFER_SPARE_REG                     0x10
-#define     TRANSFER_SPARE_REG__FLAG                   0x0001
+#define     TRANSFER_SPARE_REG__FLAG                   BIT(0)
 
 #define LOAD_WAIT_CNT                          0x20
-#define     LOAD_WAIT_CNT__VALUE                       0xffff
+#define     LOAD_WAIT_CNT__VALUE                       GENMASK(15, 0)
 
 #define PROGRAM_WAIT_CNT                       0x30
-#define     PROGRAM_WAIT_CNT__VALUE                    0xffff
+#define     PROGRAM_WAIT_CNT__VALUE                    GENMASK(15, 0)
 
 #define ERASE_WAIT_CNT                         0x40
-#define     ERASE_WAIT_CNT__VALUE                      0xffff
+#define     ERASE_WAIT_CNT__VALUE                      GENMASK(15, 0)
 
 #define INT_MON_CYCCNT                         0x50
-#define     INT_MON_CYCCNT__VALUE                      0xffff
+#define     INT_MON_CYCCNT__VALUE                      GENMASK(15, 0)
 
 #define RB_PIN_ENABLED                         0x60
-#define     RB_PIN_ENABLED__BANK0                      0x0001
-#define     RB_PIN_ENABLED__BANK1                      0x0002
-#define     RB_PIN_ENABLED__BANK2                      0x0004
-#define     RB_PIN_ENABLED__BANK3                      0x0008
+#define     RB_PIN_ENABLED__BANK(bank)                 BIT(bank)
 
 #define MULTIPLANE_OPERATION                   0x70
-#define     MULTIPLANE_OPERATION__FLAG                 0x0001
+#define     MULTIPLANE_OPERATION__FLAG                 BIT(0)
 
 #define MULTIPLANE_READ_ENABLE                 0x80
-#define     MULTIPLANE_READ_ENABLE__FLAG               0x0001
+#define     MULTIPLANE_READ_ENABLE__FLAG               BIT(0)
 
 #define COPYBACK_DISABLE                       0x90
-#define     COPYBACK_DISABLE__FLAG                     0x0001
+#define     COPYBACK_DISABLE__FLAG                     BIT(0)
 
 #define CACHE_WRITE_ENABLE                     0xa0
-#define     CACHE_WRITE_ENABLE__FLAG                   0x0001
+#define     CACHE_WRITE_ENABLE__FLAG                   BIT(0)
 
 #define CACHE_READ_ENABLE                      0xb0
-#define     CACHE_READ_ENABLE__FLAG                    0x0001
+#define     CACHE_READ_ENABLE__FLAG                    BIT(0)
 
 #define PREFETCH_MODE                          0xc0
-#define     PREFETCH_MODE__PREFETCH_EN                 0x0001
-#define     PREFETCH_MODE__PREFETCH_BURST_LENGTH       0xfff0
+#define     PREFETCH_MODE__PREFETCH_EN                 BIT(0)
+#define     PREFETCH_MODE__PREFETCH_BURST_LENGTH       GENMASK(15, 4)
 
 #define CHIP_ENABLE_DONT_CARE                  0xd0
-#define     CHIP_EN_DONT_CARE__FLAG                    0x01
+#define     CHIP_EN_DONT_CARE__FLAG                    BIT(0)
 
 #define ECC_ENABLE                             0xe0
-#define     ECC_ENABLE__FLAG                           0x0001
+#define     ECC_ENABLE__FLAG                           BIT(0)
 
 #define GLOBAL_INT_ENABLE                      0xf0
-#define     GLOBAL_INT_EN_FLAG                         0x01
+#define     GLOBAL_INT_EN_FLAG                         BIT(0)
 
-#define WE_2_RE                                        0x100
-#define     WE_2_RE__VALUE                             0x003f
+#define TWHR2_AND_WE_2_RE                      0x100
+#define     TWHR2_AND_WE_2_RE__WE_2_RE                 GENMASK(5, 0)
+#define     TWHR2_AND_WE_2_RE__TWHR2                   GENMASK(13, 8)
 
-#define ADDR_2_DATA                            0x110
-#define     ADDR_2_DATA__VALUE                         0x003f
+#define TCWAW_AND_ADDR_2_DATA                  0x110
+/* The width of ADDR_2_DATA is 6 bit for old IP, 7 bit for new IP */
+#define     TCWAW_AND_ADDR_2_DATA__ADDR_2_DATA         GENMASK(6, 0)
+#define     TCWAW_AND_ADDR_2_DATA__TCWAW               GENMASK(13, 8)
 
 #define RE_2_WE                                        0x120
-#define     RE_2_WE__VALUE                             0x003f
+#define     RE_2_WE__VALUE                             GENMASK(5, 0)
 
 #define ACC_CLKS                               0x130
-#define     ACC_CLKS__VALUE                            0x000f
+#define     ACC_CLKS__VALUE                            GENMASK(3, 0)
 
 #define NUMBER_OF_PLANES                       0x140
-#define     NUMBER_OF_PLANES__VALUE                    0x0007
+#define     NUMBER_OF_PLANES__VALUE                    GENMASK(2, 0)
 
 #define PAGES_PER_BLOCK                                0x150
-#define     PAGES_PER_BLOCK__VALUE                     0xffff
+#define     PAGES_PER_BLOCK__VALUE                     GENMASK(15, 0)
 
 #define DEVICE_WIDTH                           0x160
-#define     DEVICE_WIDTH__VALUE                                0x0003
+#define     DEVICE_WIDTH__VALUE                                GENMASK(1, 0)
 
 #define DEVICE_MAIN_AREA_SIZE                  0x170
-#define     DEVICE_MAIN_AREA_SIZE__VALUE               0xffff
+#define     DEVICE_MAIN_AREA_SIZE__VALUE               GENMASK(15, 0)
 
 #define DEVICE_SPARE_AREA_SIZE                 0x180
-#define     DEVICE_SPARE_AREA_SIZE__VALUE              0xffff
+#define     DEVICE_SPARE_AREA_SIZE__VALUE              GENMASK(15, 0)
 
 #define TWO_ROW_ADDR_CYCLES                    0x190
-#define     TWO_ROW_ADDR_CYCLES__FLAG                  0x0001
+#define     TWO_ROW_ADDR_CYCLES__FLAG                  BIT(0)
 
 #define MULTIPLANE_ADDR_RESTRICT               0x1a0
-#define     MULTIPLANE_ADDR_RESTRICT__FLAG             0x0001
+#define     MULTIPLANE_ADDR_RESTRICT__FLAG             BIT(0)
 
 #define ECC_CORRECTION                         0x1b0
-#define     ECC_CORRECTION__VALUE                      0x001f
+#define     ECC_CORRECTION__VALUE                      GENMASK(4, 0)
+#define     ECC_CORRECTION__ERASE_THRESHOLD            GENMASK(31, 16)
+#define     MAKE_ECC_CORRECTION(val, thresh)           \
+                       (((val) & (ECC_CORRECTION__VALUE)) | \
+                       (((thresh) << 16) & (ECC_CORRECTION__ERASE_THRESHOLD)))
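
The denali_init() hunk above programs this register with
MAKE_ECC_CORRECTION(chip->ecc.strength, 1); for an 8-bit strength that
expands to:

	/*
	 * MAKE_ECC_CORRECTION(8, 1)
	 *   = (8 & GENMASK(4, 0)) | ((1 << 16) & GENMASK(31, 16))
	 *   = 0x00000008 | 0x00010000
	 *   = 0x00010008   (VALUE = 8, ERASE_THRESHOLD = 1)
	 */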
 
 #define READ_MODE                              0x1c0
-#define     READ_MODE__VALUE                           0x000f
+#define     READ_MODE__VALUE                           GENMASK(3, 0)
 
 #define WRITE_MODE                             0x1d0
-#define     WRITE_MODE__VALUE                          0x000f
+#define     WRITE_MODE__VALUE                          GENMASK(3, 0)
 
 #define COPYBACK_MODE                          0x1e0
-#define     COPYBACK_MODE__VALUE                       0x000f
+#define     COPYBACK_MODE__VALUE                       GENMASK(3, 0)
 
 #define RDWR_EN_LO_CNT                         0x1f0
-#define     RDWR_EN_LO_CNT__VALUE                      0x001f
+#define     RDWR_EN_LO_CNT__VALUE                      GENMASK(4, 0)
 
 #define RDWR_EN_HI_CNT                         0x200
-#define     RDWR_EN_HI_CNT__VALUE                      0x001f
+#define     RDWR_EN_HI_CNT__VALUE                      GENMASK(4, 0)
 
 #define MAX_RD_DELAY                           0x210
-#define     MAX_RD_DELAY__VALUE                                0x000f
+#define     MAX_RD_DELAY__VALUE                                GENMASK(3, 0)
 
 #define CS_SETUP_CNT                           0x220
-#define     CS_SETUP_CNT__VALUE                                0x001f
+#define     CS_SETUP_CNT__VALUE                                GENMASK(4, 0)
+#define     CS_SETUP_CNT__TWB                          GENMASK(17, 12)
 
 #define SPARE_AREA_SKIP_BYTES                  0x230
-#define     SPARE_AREA_SKIP_BYTES__VALUE               0x003f
+#define     SPARE_AREA_SKIP_BYTES__VALUE               GENMASK(5, 0)
 
 #define SPARE_AREA_MARKER                      0x240
-#define     SPARE_AREA_MARKER__VALUE                   0xffff
+#define     SPARE_AREA_MARKER__VALUE                   GENMASK(15, 0)
 
 #define DEVICES_CONNECTED                      0x250
-#define     DEVICES_CONNECTED__VALUE                   0x0007
+#define     DEVICES_CONNECTED__VALUE                   GENMASK(2, 0)
 
 #define DIE_MASK                               0x260
-#define     DIE_MASK__VALUE                            0x00ff
+#define     DIE_MASK__VALUE                            GENMASK(7, 0)
 
 #define FIRST_BLOCK_OF_NEXT_PLANE              0x270
-#define     FIRST_BLOCK_OF_NEXT_PLANE__VALUE           0xffff
+#define     FIRST_BLOCK_OF_NEXT_PLANE__VALUE           GENMASK(15, 0)
 
 #define WRITE_PROTECT                          0x280
-#define     WRITE_PROTECT__FLAG                                0x0001
+#define     WRITE_PROTECT__FLAG                                BIT(0)
 
 #define RE_2_RE                                        0x290
-#define     RE_2_RE__VALUE                             0x003f
+#define     RE_2_RE__VALUE                             GENMASK(5, 0)
 
 #define MANUFACTURER_ID                                0x300
-#define     MANUFACTURER_ID__VALUE                     0x00ff
+#define     MANUFACTURER_ID__VALUE                     GENMASK(7, 0)
 
 #define DEVICE_ID                              0x310
-#define     DEVICE_ID__VALUE                           0x00ff
+#define     DEVICE_ID__VALUE                           GENMASK(7, 0)
 
 #define DEVICE_PARAM_0                         0x320
-#define     DEVICE_PARAM_0__VALUE                      0x00ff
+#define     DEVICE_PARAM_0__VALUE                      GENMASK(7, 0)
 
 #define DEVICE_PARAM_1                         0x330
-#define     DEVICE_PARAM_1__VALUE                      0x00ff
+#define     DEVICE_PARAM_1__VALUE                      GENMASK(7, 0)
 
 #define DEVICE_PARAM_2                         0x340
-#define     DEVICE_PARAM_2__VALUE                      0x00ff
+#define     DEVICE_PARAM_2__VALUE                      GENMASK(7, 0)
 
 #define LOGICAL_PAGE_DATA_SIZE                 0x350
-#define     LOGICAL_PAGE_DATA_SIZE__VALUE              0xffff
+#define     LOGICAL_PAGE_DATA_SIZE__VALUE              GENMASK(15, 0)
 
 #define LOGICAL_PAGE_SPARE_SIZE                        0x360
-#define     LOGICAL_PAGE_SPARE_SIZE__VALUE             0xffff
+#define     LOGICAL_PAGE_SPARE_SIZE__VALUE             GENMASK(15, 0)
 
 #define REVISION                               0x370
-#define     REVISION__VALUE                            0xffff
+#define     REVISION__VALUE                            GENMASK(15, 0)
 
 #define ONFI_DEVICE_FEATURES                   0x380
-#define     ONFI_DEVICE_FEATURES__VALUE                        0x003f
+#define     ONFI_DEVICE_FEATURES__VALUE                        GENMASK(5, 0)
 
 #define ONFI_OPTIONAL_COMMANDS                 0x390
-#define     ONFI_OPTIONAL_COMMANDS__VALUE              0x003f
+#define     ONFI_OPTIONAL_COMMANDS__VALUE              GENMASK(5, 0)
 
 #define ONFI_TIMING_MODE                       0x3a0
-#define     ONFI_TIMING_MODE__VALUE                    0x003f
+#define     ONFI_TIMING_MODE__VALUE                    GENMASK(5, 0)
 
 #define ONFI_PGM_CACHE_TIMING_MODE             0x3b0
-#define     ONFI_PGM_CACHE_TIMING_MODE__VALUE          0x003f
+#define     ONFI_PGM_CACHE_TIMING_MODE__VALUE          GENMASK(5, 0)
 
 #define ONFI_DEVICE_NO_OF_LUNS                 0x3c0
-#define     ONFI_DEVICE_NO_OF_LUNS__NO_OF_LUNS         0x00ff
-#define     ONFI_DEVICE_NO_OF_LUNS__ONFI_DEVICE                0x0100
+#define     ONFI_DEVICE_NO_OF_LUNS__NO_OF_LUNS         GENMASK(7, 0)
+#define     ONFI_DEVICE_NO_OF_LUNS__ONFI_DEVICE                BIT(8)
 
 #define ONFI_DEVICE_NO_OF_BLOCKS_PER_LUN_L     0x3d0
-#define     ONFI_DEVICE_NO_OF_BLOCKS_PER_LUN_L__VALUE  0xffff
+#define     ONFI_DEVICE_NO_OF_BLOCKS_PER_LUN_L__VALUE  GENMASK(15, 0)
 
 #define ONFI_DEVICE_NO_OF_BLOCKS_PER_LUN_U     0x3e0
-#define     ONFI_DEVICE_NO_OF_BLOCKS_PER_LUN_U__VALUE  0xffff
-
-#define FEATURES                                       0x3f0
-#define     FEATURES__N_BANKS                          0x0003
-#define     FEATURES__ECC_MAX_ERR                      0x003c
-#define     FEATURES__DMA                              0x0040
-#define     FEATURES__CMD_DMA                          0x0080
-#define     FEATURES__PARTITION                                0x0100
-#define     FEATURES__XDMA_SIDEBAND                    0x0200
-#define     FEATURES__GPREG                            0x0400
-#define     FEATURES__INDEX_ADDR                       0x0800
+#define     ONFI_DEVICE_NO_OF_BLOCKS_PER_LUN_U__VALUE  GENMASK(15, 0)
+
+#define FEATURES                               0x3f0
+#define     FEATURES__N_BANKS                          GENMASK(1, 0)
+#define     FEATURES__ECC_MAX_ERR                      GENMASK(5, 2)
+#define     FEATURES__DMA                              BIT(6)
+#define     FEATURES__CMD_DMA                          BIT(7)
+#define     FEATURES__PARTITION                                BIT(8)
+#define     FEATURES__XDMA_SIDEBAND                    BIT(9)
+#define     FEATURES__GPREG                            BIT(10)
+#define     FEATURES__INDEX_ADDR                       BIT(11)
 
 #define TRANSFER_MODE                          0x400
-#define     TRANSFER_MODE__VALUE                       0x0003
+#define     TRANSFER_MODE__VALUE                       GENMASK(1, 0)
 
-#define INTR_STATUS(__bank)    (0x410 + ((__bank) * 0x50))
-#define INTR_EN(__bank)                (0x420 + ((__bank) * 0x50))
+#define INTR_STATUS(bank)                      (0x410 + (bank) * 0x50)
+#define INTR_EN(bank)                          (0x420 + (bank) * 0x50)
 /* bit[1:0] is used differently depending on IP version */
-#define     INTR__ECC_UNCOR_ERR                                0x0001  /* new IP */
-#define     INTR__ECC_TRANSACTION_DONE                 0x0001  /* old IP */
-#define     INTR__ECC_ERR                              0x0002  /* old IP */
-#define     INTR__DMA_CMD_COMP                         0x0004
-#define     INTR__TIME_OUT                             0x0008
-#define     INTR__PROGRAM_FAIL                         0x0010
-#define     INTR__ERASE_FAIL                           0x0020
-#define     INTR__LOAD_COMP                            0x0040
-#define     INTR__PROGRAM_COMP                         0x0080
-#define     INTR__ERASE_COMP                           0x0100
-#define     INTR__PIPE_CPYBCK_CMD_COMP                 0x0200
-#define     INTR__LOCKED_BLK                           0x0400
-#define     INTR__UNSUP_CMD                            0x0800
-#define     INTR__INT_ACT                              0x1000
-#define     INTR__RST_COMP                             0x2000
-#define     INTR__PIPE_CMD_ERR                         0x4000
-#define     INTR__PAGE_XFER_INC                                0x8000
-
-#define PAGE_CNT(__bank)       (0x430 + ((__bank) * 0x50))
-#define ERR_PAGE_ADDR(__bank)  (0x440 + ((__bank) * 0x50))
-#define ERR_BLOCK_ADDR(__bank) (0x450 + ((__bank) * 0x50))
+#define     INTR__ECC_UNCOR_ERR                                BIT(0)  /* new IP */
+#define     INTR__ECC_TRANSACTION_DONE                 BIT(0)  /* old IP */
+#define     INTR__ECC_ERR                              BIT(1)  /* old IP */
+#define     INTR__DMA_CMD_COMP                         BIT(2)
+#define     INTR__TIME_OUT                             BIT(3)
+#define     INTR__PROGRAM_FAIL                         BIT(4)
+#define     INTR__ERASE_FAIL                           BIT(5)
+#define     INTR__LOAD_COMP                            BIT(6)
+#define     INTR__PROGRAM_COMP                         BIT(7)
+#define     INTR__ERASE_COMP                           BIT(8)
+#define     INTR__PIPE_CPYBCK_CMD_COMP                 BIT(9)
+#define     INTR__LOCKED_BLK                           BIT(10)
+#define     INTR__UNSUP_CMD                            BIT(11)
+#define     INTR__INT_ACT                              BIT(12)
+#define     INTR__RST_COMP                             BIT(13)
+#define     INTR__PIPE_CMD_ERR                         BIT(14)
+#define     INTR__PAGE_XFER_INC                                BIT(15)
+#define     INTR__ERASED_PAGE                          BIT(16)
+
+#define PAGE_CNT(bank)                         (0x430 + (bank) * 0x50)
+#define ERR_PAGE_ADDR(bank)                    (0x440 + (bank) * 0x50)
+#define ERR_BLOCK_ADDR(bank)                   (0x450 + (bank) * 0x50)
 
 #define ECC_THRESHOLD                          0x600
-#define     ECC_THRESHOLD__VALUE                       0x03ff
+#define     ECC_THRESHOLD__VALUE                       GENMASK(9, 0)
 
 #define ECC_ERROR_BLOCK_ADDRESS                        0x610
-#define     ECC_ERROR_BLOCK_ADDRESS__VALUE             0xffff
+#define     ECC_ERROR_BLOCK_ADDRESS__VALUE             GENMASK(15, 0)
 
 #define ECC_ERROR_PAGE_ADDRESS                 0x620
-#define     ECC_ERROR_PAGE_ADDRESS__VALUE              0x0fff
-#define     ECC_ERROR_PAGE_ADDRESS__BANK               0xf000
+#define     ECC_ERROR_PAGE_ADDRESS__VALUE              GENMASK(11, 0)
+#define     ECC_ERROR_PAGE_ADDRESS__BANK               GENMASK(15, 12)
 
 #define ECC_ERROR_ADDRESS                      0x630
-#define     ECC_ERROR_ADDRESS__OFFSET                  0x0fff
-#define     ECC_ERROR_ADDRESS__SECTOR_NR               0xf000
+#define     ECC_ERROR_ADDRESS__OFFSET                  GENMASK(11, 0)
+#define     ECC_ERROR_ADDRESS__SECTOR_NR               GENMASK(15, 12)
 
 #define ERR_CORRECTION_INFO                    0x640
-#define     ERR_CORRECTION_INFO__BYTEMASK              0x00ff
-#define     ERR_CORRECTION_INFO__DEVICE_NR             0x0f00
-#define     ERR_CORRECTION_INFO__ERROR_TYPE            0x4000
-#define     ERR_CORRECTION_INFO__LAST_ERR_INFO         0x8000
+#define     ERR_CORRECTION_INFO__BYTEMASK              GENMASK(7, 0)
+#define     ERR_CORRECTION_INFO__DEVICE_NR             GENMASK(11, 8)
+#define     ERR_CORRECTION_INFO__ERROR_TYPE            BIT(14)
+#define     ERR_CORRECTION_INFO__LAST_ERR_INFO         BIT(15)
 
 #define ECC_COR_INFO(bank)                     (0x650 + (bank) / 2 * 0x10)
 #define     ECC_COR_INFO__SHIFT(bank)                  ((bank) % 2 * 8)
-#define     ECC_COR_INFO__MAX_ERRORS                   0x007f
-#define     ECC_COR_INFO__UNCOR_ERR                    0x0080
+#define     ECC_COR_INFO__MAX_ERRORS                   GENMASK(6, 0)
+#define     ECC_COR_INFO__UNCOR_ERR                    BIT(7)
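
Two banks share each ECC_COR_INFO register; as a quick example of the
arithmetic encoded above:

	/*
	 * ECC_COR_INFO(3)        = 0x650 + 3 / 2 * 0x10 = 0x660
	 * ECC_COR_INFO__SHIFT(3) = 3 % 2 * 8            = 8
	 *
	 * so bank 3's MAX_ERRORS/UNCOR_ERR byte sits in bits [15:8] of the
	 * register at 0x660.
	 */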
+
+#define CFG_DATA_BLOCK_SIZE                    0x6b0
+
+#define CFG_LAST_DATA_BLOCK_SIZE               0x6c0
+
+#define CFG_NUM_DATA_BLOCKS                    0x6d0
+
+#define CFG_META_DATA_SIZE                     0x6e0
 
 #define DMA_ENABLE                             0x700
-#define     DMA_ENABLE__FLAG                           0x0001
+#define     DMA_ENABLE__FLAG                           BIT(0)
 
 #define IGNORE_ECC_DONE                                0x710
-#define     IGNORE_ECC_DONE__FLAG                      0x0001
+#define     IGNORE_ECC_DONE__FLAG                      BIT(0)
 
 #define DMA_INTR                               0x720
 #define DMA_INTR_EN                            0x730
-#define     DMA_INTR__TARGET_ERROR                     0x0001
-#define     DMA_INTR__DESC_COMP_CHANNEL0               0x0002
-#define     DMA_INTR__DESC_COMP_CHANNEL1               0x0004
-#define     DMA_INTR__DESC_COMP_CHANNEL2               0x0008
-#define     DMA_INTR__DESC_COMP_CHANNEL3               0x0010
-#define     DMA_INTR__MEMCOPY_DESC_COMP                        0x0020
+#define     DMA_INTR__TARGET_ERROR                     BIT(0)
+#define     DMA_INTR__DESC_COMP_CHANNEL0               BIT(1)
+#define     DMA_INTR__DESC_COMP_CHANNEL1               BIT(2)
+#define     DMA_INTR__DESC_COMP_CHANNEL2               BIT(3)
+#define     DMA_INTR__DESC_COMP_CHANNEL3               BIT(4)
+#define     DMA_INTR__MEMCOPY_DESC_COMP                        BIT(5)
 
 #define TARGET_ERR_ADDR_LO                     0x740
-#define     TARGET_ERR_ADDR_LO__VALUE                  0xffff
+#define     TARGET_ERR_ADDR_LO__VALUE                  GENMASK(15, 0)
 
 #define TARGET_ERR_ADDR_HI                     0x750
-#define     TARGET_ERR_ADDR_HI__VALUE                  0xffff
+#define     TARGET_ERR_ADDR_HI__VALUE                  GENMASK(15, 0)
 
 #define CHNL_ACTIVE                            0x760
-#define     CHNL_ACTIVE__CHANNEL0                      0x0001
-#define     CHNL_ACTIVE__CHANNEL1                      0x0002
-#define     CHNL_ACTIVE__CHANNEL2                      0x0004
-#define     CHNL_ACTIVE__CHANNEL3                      0x0008
-
-#define FAIL 1                  /*failed flag*/
-#define PASS 0                  /*success flag*/
-
-#define CLK_X  5
-#define CLK_MULTI 4
-
-#define ONFI_BLOOM_TIME         1
-#define MODE5_WORKAROUND        0
-
-
-#define MODE_00    0x00000000
-#define MODE_01    0x04000000
-#define MODE_10    0x08000000
-#define MODE_11    0x0C000000
-
-#define ECC_SECTOR_SIZE     512
-
-struct nand_buf {
-       int head;
-       int tail;
-       uint8_t *buf;
-       dma_addr_t dma_buf;
-};
-
-#define INTEL_CE4100   1
-#define INTEL_MRST     2
-#define DT             3
+#define     CHNL_ACTIVE__CHANNEL0                      BIT(0)
+#define     CHNL_ACTIVE__CHANNEL1                      BIT(1)
+#define     CHNL_ACTIVE__CHANNEL2                      BIT(2)
+#define     CHNL_ACTIVE__CHANNEL3                      BIT(3)
 
 struct denali_nand_info {
        struct nand_chip nand;
-       int flash_bank; /* currently selected chip */
-       int status;
-       int platform;
-       struct nand_buf buf;
+       unsigned long clk_x_rate;       /* bus interface clock rate */
+       int active_bank;                /* currently selected bank */
        struct device *dev;
-       int total_used_banks;
-       int page;
-       void __iomem *flash_reg;        /* Register Interface */
-       void __iomem *flash_mem;        /* Host Data/Command Interface */
+       void __iomem *reg;              /* Register Interface */
+       void __iomem *host;             /* Host Data/Command Interface */
 
        /* elements used by ISR */
        struct completion complete;
        spinlock_t irq_lock;
+       uint32_t irq_mask;
        uint32_t irq_status;
        int irq;
 
-       int devnum;     /* represent how many nands connected */
-       int bbtskipbytes;
+       void *buf;
+       dma_addr_t dma_addr;
+       int dma_avail;
+       int devs_per_cs;                /* devices connected in parallel */
+       int oob_skip_bytes;
        int max_banks;
        unsigned int revision;
        unsigned int caps;
+       const struct nand_ecc_caps *ecc_caps;
 };
 
 #define DENALI_CAP_HW_ECC_FIXUP                        BIT(0)
 #define DENALI_CAP_DMA_64BIT                   BIT(1)
 
+int denali_calc_ecc_bytes(int step_size, int strength);
 extern int denali_init(struct denali_nand_info *denali);
 extern void denali_remove(struct denali_nand_info *denali);
 
index df9ef36cc2ce3323da883e722152bf0b5a1d2f8b..47f398edf18f495522d0ed07a4102ce128d716bf 100644
@@ -32,10 +32,31 @@ struct denali_dt {
 struct denali_dt_data {
        unsigned int revision;
        unsigned int caps;
+       const struct nand_ecc_caps *ecc_caps;
 };
 
+NAND_ECC_CAPS_SINGLE(denali_socfpga_ecc_caps, denali_calc_ecc_bytes,
+                    512, 8, 15);
 static const struct denali_dt_data denali_socfpga_data = {
        .caps = DENALI_CAP_HW_ECC_FIXUP,
+       .ecc_caps = &denali_socfpga_ecc_caps,
+};
+
+NAND_ECC_CAPS_SINGLE(denali_uniphier_v5a_ecc_caps, denali_calc_ecc_bytes,
+                    1024, 8, 16, 24);
+static const struct denali_dt_data denali_uniphier_v5a_data = {
+       .caps = DENALI_CAP_HW_ECC_FIXUP |
+               DENALI_CAP_DMA_64BIT,
+       .ecc_caps = &denali_uniphier_v5a_ecc_caps,
+};
+
+NAND_ECC_CAPS_SINGLE(denali_uniphier_v5b_ecc_caps, denali_calc_ecc_bytes,
+                    1024, 8, 16);
+static const struct denali_dt_data denali_uniphier_v5b_data = {
+       .revision = 0x0501,
+       .caps = DENALI_CAP_HW_ECC_FIXUP |
+               DENALI_CAP_DMA_64BIT,
+       .ecc_caps = &denali_uniphier_v5b_ecc_caps,
 };
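
Supporting another SoC follows the same pattern; a hypothetical entry (the
names and ECC numbers below are illustrative, not a real platform) would look
like this, plus a matching compatible string in denali_nand_dt_ids[]:

	NAND_ECC_CAPS_SINGLE(denali_example_ecc_caps, denali_calc_ecc_bytes,
			     512, 4, 8);
	static const struct denali_dt_data denali_example_data = {
		.caps = DENALI_CAP_HW_ECC_FIXUP,
		.ecc_caps = &denali_example_ecc_caps,
	};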
 
 static const struct of_device_id denali_nand_dt_ids[] = {
@@ -43,13 +64,21 @@ static const struct of_device_id denali_nand_dt_ids[] = {
                .compatible = "altr,socfpga-denali-nand",
                .data = &denali_socfpga_data,
        },
+       {
+               .compatible = "socionext,uniphier-denali-nand-v5a",
+               .data = &denali_uniphier_v5a_data,
+       },
+       {
+               .compatible = "socionext,uniphier-denali-nand-v5b",
+               .data = &denali_uniphier_v5b_data,
+       },
        { /* sentinel */ }
 };
 MODULE_DEVICE_TABLE(of, denali_nand_dt_ids);
 
 static int denali_dt_probe(struct platform_device *pdev)
 {
-       struct resource *denali_reg, *nand_data;
+       struct resource *res;
        struct denali_dt *dt;
        const struct denali_dt_data *data;
        struct denali_nand_info *denali;
@@ -64,9 +93,9 @@ static int denali_dt_probe(struct platform_device *pdev)
        if (data) {
                denali->revision = data->revision;
                denali->caps = data->caps;
+               denali->ecc_caps = data->ecc_caps;
        }
 
-       denali->platform = DT;
        denali->dev = &pdev->dev;
        denali->irq = platform_get_irq(pdev, 0);
        if (denali->irq < 0) {
@@ -74,17 +103,15 @@ static int denali_dt_probe(struct platform_device *pdev)
                return denali->irq;
        }
 
-       denali_reg = platform_get_resource_byname(pdev, IORESOURCE_MEM,
-                                                 "denali_reg");
-       denali->flash_reg = devm_ioremap_resource(&pdev->dev, denali_reg);
-       if (IS_ERR(denali->flash_reg))
-               return PTR_ERR(denali->flash_reg);
+       res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "denali_reg");
+       denali->reg = devm_ioremap_resource(&pdev->dev, res);
+       if (IS_ERR(denali->reg))
+               return PTR_ERR(denali->reg);
 
-       nand_data = platform_get_resource_byname(pdev, IORESOURCE_MEM,
-                                                "nand_data");
-       denali->flash_mem = devm_ioremap_resource(&pdev->dev, nand_data);
-       if (IS_ERR(denali->flash_mem))
-               return PTR_ERR(denali->flash_mem);
+       res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "nand_data");
+       denali->host = devm_ioremap_resource(&pdev->dev, res);
+       if (IS_ERR(denali->host))
+               return PTR_ERR(denali->host);
 
        dt->clk = devm_clk_get(&pdev->dev, NULL);
        if (IS_ERR(dt->clk)) {
@@ -93,6 +120,8 @@ static int denali_dt_probe(struct platform_device *pdev)
        }
        clk_prepare_enable(dt->clk);
 
+       denali->clk_x_rate = clk_get_rate(dt->clk);
+
        ret = denali_init(denali);
        if (ret)
                goto out_disable_clk;
index ac843238b77e72f846c63d2eb9a8299a2d3aceb5..81370c79aa48aa4fe6ef3d4d65bb7dd3c2a91db7 100644
@@ -19,6 +19,9 @@
 
 #define DENALI_NAND_NAME    "denali-nand-pci"
 
+#define INTEL_CE4100   1
+#define INTEL_MRST     2
+
 /* List of platforms this NAND controller has been integrated into */
 static const struct pci_device_id denali_pci_ids[] = {
        { PCI_VDEVICE(INTEL, 0x0701), INTEL_CE4100 },
@@ -27,6 +30,8 @@ static const struct pci_device_id denali_pci_ids[] = {
 };
 MODULE_DEVICE_TABLE(pci, denali_pci_ids);
 
+NAND_ECC_CAPS_SINGLE(denali_pci_ecc_caps, denali_calc_ecc_bytes, 512, 8, 15);
+
 static int denali_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
 {
        int ret;
@@ -45,13 +50,11 @@ static int denali_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
        }
 
        if (id->driver_data == INTEL_CE4100) {
-               denali->platform = INTEL_CE4100;
                mem_base = pci_resource_start(dev, 0);
                mem_len = pci_resource_len(dev, 1);
                csr_base = pci_resource_start(dev, 1);
                csr_len = pci_resource_len(dev, 1);
        } else {
-               denali->platform = INTEL_MRST;
                csr_base = pci_resource_start(dev, 0);
                csr_len = pci_resource_len(dev, 0);
                mem_base = pci_resource_start(dev, 1);
@@ -65,6 +68,9 @@ static int denali_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
        pci_set_master(dev);
        denali->dev = &dev->dev;
        denali->irq = dev->irq;
+       denali->ecc_caps = &denali_pci_ecc_caps;
+       denali->nand.ecc.options |= NAND_ECC_MAXIMIZE;
+       denali->clk_x_rate = 200000000;         /* 200 MHz */
 
        ret = pci_request_regions(dev, DENALI_NAND_NAME);
        if (ret) {
@@ -72,14 +78,14 @@ static int denali_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
                return ret;
        }
 
-       denali->flash_reg = ioremap_nocache(csr_base, csr_len);
-       if (!denali->flash_reg) {
+       denali->reg = ioremap_nocache(csr_base, csr_len);
+       if (!denali->reg) {
                dev_err(&dev->dev, "Spectra: Unable to remap memory region\n");
                return -ENOMEM;
        }
 
-       denali->flash_mem = ioremap_nocache(mem_base, mem_len);
-       if (!denali->flash_mem) {
+       denali->host = ioremap_nocache(mem_base, mem_len);
+       if (!denali->host) {
                dev_err(&dev->dev, "Spectra: ioremap_nocache failed!");
                ret = -ENOMEM;
                goto failed_remap_reg;
@@ -94,9 +100,9 @@ static int denali_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
        return 0;
 
 failed_remap_mem:
-       iounmap(denali->flash_mem);
+       iounmap(denali->host);
 failed_remap_reg:
-       iounmap(denali->flash_reg);
+       iounmap(denali->reg);
        return ret;
 }
 
@@ -106,8 +112,8 @@ static void denali_pci_remove(struct pci_dev *dev)
        struct denali_nand_info *denali = pci_get_drvdata(dev);
 
        denali_remove(denali);
-       iounmap(denali->flash_reg);
-       iounmap(denali->flash_mem);
+       iounmap(denali->reg);
+       iounmap(denali->host);
 }
 
 static struct pci_driver denali_pci_driver = {
index 7af2a3cd949eee9377a22a510cc15605cd1adf74..a27a84fbfb840bcb87bd039216d50ade7e2683bd 100644 (file)
@@ -1260,6 +1260,8 @@ static void __init init_mtd_structs(struct mtd_info *mtd)
        nand->read_buf = docg4_read_buf;
        nand->write_buf = docg4_write_buf16;
        nand->erase = docg4_erase_block;
+       nand->onfi_set_features = nand_onfi_get_set_features_notsupp;
+       nand->onfi_get_features = nand_onfi_get_set_features_notsupp;
        nand->ecc.read_page = docg4_read_page;
        nand->ecc.write_page = docg4_write_page;
        nand->ecc.read_page_raw = docg4_read_page_raw;
index 113f76e599372d3d09526bdb4f95a3620ea45681..b9ac16f05057c5b01785b8b74453ce44d955ff1a 100644 (file)
@@ -775,6 +775,8 @@ static int fsl_elbc_chip_init(struct fsl_elbc_mtd *priv)
        chip->select_chip = fsl_elbc_select_chip;
        chip->cmdfunc = fsl_elbc_cmdfunc;
        chip->waitfunc = fsl_elbc_wait;
+       chip->onfi_set_features = nand_onfi_get_set_features_notsupp;
+       chip->onfi_get_features = nand_onfi_get_set_features_notsupp;
 
        chip->bbt_td = &bbt_main_descr;
        chip->bbt_md = &bbt_mirror_descr;
index d1570f512f0bbad5c07c9903528c125e412c029a..59408ec2c69f21b0db018c69448eff9f0960d6a1 100644 (file)
@@ -171,34 +171,6 @@ static void set_addr(struct mtd_info *mtd, int column, int page_addr, int oob)
                ifc_nand_ctrl->index += mtd->writesize;
 }
 
-static int is_blank(struct mtd_info *mtd, unsigned int bufnum)
-{
-       struct nand_chip *chip = mtd_to_nand(mtd);
-       struct fsl_ifc_mtd *priv = nand_get_controller_data(chip);
-       u8 __iomem *addr = priv->vbase + bufnum * (mtd->writesize * 2);
-       u32 __iomem *mainarea = (u32 __iomem *)addr;
-       u8 __iomem *oob = addr + mtd->writesize;
-       struct mtd_oob_region oobregion = { };
-       int i, section = 0;
-
-       for (i = 0; i < mtd->writesize / 4; i++) {
-               if (__raw_readl(&mainarea[i]) != 0xffffffff)
-                       return 0;
-       }
-
-       mtd_ooblayout_ecc(mtd, section++, &oobregion);
-       while (oobregion.length) {
-               for (i = 0; i < oobregion.length; i++) {
-                       if (__raw_readb(&oob[oobregion.offset + i]) != 0xff)
-                               return 0;
-               }
-
-               mtd_ooblayout_ecc(mtd, section++, &oobregion);
-       }
-
-       return 1;
-}
-
 /* returns nonzero if entire page is blank */
 static int check_read_ecc(struct mtd_info *mtd, struct fsl_ifc_ctrl *ctrl,
                          u32 *eccstat, unsigned int bufnum)
@@ -274,16 +246,14 @@ static void fsl_ifc_run_command(struct mtd_info *mtd)
                        if (errors == 15) {
                                /*
                                 * Uncorrectable error.
-                                * OK only if the whole page is blank.
+                                * We'll check for blank pages later.
                                 *
                                 * We disable ECCER reporting due to...
                                 * erratum IFC-A002770 -- so report it now if we
                                 * see an uncorrectable error in ECCSTAT.
                                 */
-                               if (!is_blank(mtd, bufnum))
-                                       ctrl->nand_stat |=
-                                               IFC_NAND_EVTER_STAT_ECCER;
-                               break;
+                               ctrl->nand_stat |= IFC_NAND_EVTER_STAT_ECCER;
+                               continue;
                        }
 
                        mtd->ecc_stats.corrected += errors;
@@ -678,6 +648,39 @@ static int fsl_ifc_wait(struct mtd_info *mtd, struct nand_chip *chip)
        return nand_fsr | NAND_STATUS_WP;
 }
 
+/*
+ * The controller does not check for bitflips in erased pages,
+ * therefore software must check instead.
+ */
+static int check_erased_page(struct nand_chip *chip, u8 *buf)
+{
+       struct mtd_info *mtd = nand_to_mtd(chip);
+       u8 *ecc = chip->oob_poi;
+       const int ecc_size = chip->ecc.bytes;
+       const int pkt_size = chip->ecc.size;
+       int i, res, bitflips = 0;
+       struct mtd_oob_region oobregion = { };
+
+       mtd_ooblayout_ecc(mtd, 0, &oobregion);
+       ecc += oobregion.offset;
+
+       for (i = 0; i < chip->ecc.steps; ++i) {
+               res = nand_check_erased_ecc_chunk(buf, pkt_size, ecc, ecc_size,
+                                                 NULL, 0,
+                                                 chip->ecc.strength);
+               if (res < 0)
+                       mtd->ecc_stats.failed++;
+               else
+                       mtd->ecc_stats.corrected += res;
+
+               bitflips = max(res, bitflips);
+               buf += pkt_size;
+               ecc += ecc_size;
+       }
+
+       return bitflips;
+}
+
 static int fsl_ifc_read_page(struct mtd_info *mtd, struct nand_chip *chip,
                             uint8_t *buf, int oob_required, int page)
 {
@@ -689,8 +692,12 @@ static int fsl_ifc_read_page(struct mtd_info *mtd, struct nand_chip *chip,
        if (oob_required)
                fsl_ifc_read_buf(mtd, chip->oob_poi, mtd->oobsize);
 
-       if (ctrl->nand_stat & IFC_NAND_EVTER_STAT_ECCER)
-               dev_err(priv->dev, "NAND Flash ECC Uncorrectable Error\n");
+       if (ctrl->nand_stat & IFC_NAND_EVTER_STAT_ECCER) {
+               if (!oob_required)
+                       fsl_ifc_read_buf(mtd, chip->oob_poi, mtd->oobsize);
+
+               return check_erased_page(chip, buf);
+       }
 
        if (ctrl->nand_stat != IFC_NAND_EVTER_STAT_OPC)
                mtd->ecc_stats.failed++;
@@ -831,6 +838,8 @@ static int fsl_ifc_chip_init(struct fsl_ifc_mtd *priv)
        chip->select_chip = fsl_ifc_select_chip;
        chip->cmdfunc = fsl_ifc_cmdfunc;
        chip->waitfunc = fsl_ifc_wait;
+       chip->onfi_set_features = nand_onfi_get_set_features_notsupp;
+       chip->onfi_get_features = nand_onfi_get_set_features_notsupp;
 
        chip->bbt_td = &bbt_main_descr;
        chip->bbt_md = &bbt_mirror_descr;
@@ -904,7 +913,7 @@ static int fsl_ifc_chip_init(struct fsl_ifc_mtd *priv)
                chip->ecc.algo = NAND_ECC_HAMMING;
        }
 
-       if (ctrl->version == FSL_IFC_VERSION_1_1_0)
+       if (ctrl->version >= FSL_IFC_VERSION_1_1_0)
                fsl_ifc_sram_init(priv);
 
        return 0;
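
The ifc rework above moves the blank-page decision into software: when the controller reports an uncorrectable error, the read path fetches the OOB and runs check_erased_page(), which tolerates a limited number of bitflips in a page that should be erased. A small standalone model of that decision, conceptual only and not the nand_check_erased_ecc_chunk() implementation:

/*
 * Count the 0 bits in a chunk that should be all 0xff. Below the
 * correction threshold the chunk is treated as erased with N bitflips,
 * otherwise as a genuine uncorrectable error.
 */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

static int count_zero_bits(const uint8_t *buf, size_t len)
{
	int zeros = 0;
	size_t i, bit;

	for (i = 0; i < len; i++)
		for (bit = 0; bit < 8; bit++)
			if (!(buf[i] & (1u << bit)))
				zeros++;
	return zeros;
}

/* returns the bitflip count if "erased", or -1 for an uncorrectable error */
static int check_erased_chunk(const uint8_t *data, size_t len, int threshold)
{
	int flips = count_zero_bits(data, len);

	return flips <= threshold ? flips : -1;
}

int main(void)
{
	uint8_t chunk[512];

	memset(chunk, 0xff, sizeof(chunk));
	chunk[100] = 0xfe;	/* inject a single bitflip */

	printf("result: %d\n", check_erased_chunk(chunk, sizeof(chunk), 8));
	return 0;
}
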
index cea50d2f218c1d33c09005f3a5084372fa7a18e1..9d8b051d318709d454bf5ad832cd13243eb735e7 100644 (file)
@@ -302,25 +302,13 @@ static void fsmc_cmd_ctrl(struct mtd_info *mtd, int cmd, unsigned int ctrl)
  * This routine initializes timing parameters related to NAND memory access in
  * FSMC registers
  */
-static void fsmc_nand_setup(void __iomem *regs, uint32_t bank,
-                          uint32_t busw, struct fsmc_nand_timings *timings)
+static void fsmc_nand_setup(struct fsmc_nand_data *host,
+                           struct fsmc_nand_timings *tims)
 {
        uint32_t value = FSMC_DEVTYPE_NAND | FSMC_ENABLE | FSMC_WAITON;
        uint32_t tclr, tar, thiz, thold, twait, tset;
-       struct fsmc_nand_timings *tims;
-       struct fsmc_nand_timings default_timings = {
-               .tclr   = FSMC_TCLR_1,
-               .tar    = FSMC_TAR_1,
-               .thiz   = FSMC_THIZ_1,
-               .thold  = FSMC_THOLD_4,
-               .twait  = FSMC_TWAIT_6,
-               .tset   = FSMC_TSET_0,
-       };
-
-       if (timings)
-               tims = timings;
-       else
-               tims = &default_timings;
+       unsigned int bank = host->bank;
+       void __iomem *regs = host->regs_va;
 
        tclr = (tims->tclr & FSMC_TCLR_MASK) << FSMC_TCLR_SHIFT;
        tar = (tims->tar & FSMC_TAR_MASK) << FSMC_TAR_SHIFT;
@@ -329,7 +317,7 @@ static void fsmc_nand_setup(void __iomem *regs, uint32_t bank,
        twait = (tims->twait & FSMC_TWAIT_MASK) << FSMC_TWAIT_SHIFT;
        tset = (tims->tset & FSMC_TSET_MASK) << FSMC_TSET_SHIFT;
 
-       if (busw)
+       if (host->nand.options & NAND_BUSWIDTH_16)
                writel_relaxed(value | FSMC_DEVWID_16,
                                FSMC_NAND_REG(regs, bank, PC));
        else
@@ -344,6 +332,87 @@ static void fsmc_nand_setup(void __iomem *regs, uint32_t bank,
                        FSMC_NAND_REG(regs, bank, ATTRIB));
 }
 
+static int fsmc_calc_timings(struct fsmc_nand_data *host,
+                            const struct nand_sdr_timings *sdrt,
+                            struct fsmc_nand_timings *tims)
+{
+       unsigned long hclk = clk_get_rate(host->clk);
+       unsigned long hclkn = NSEC_PER_SEC / hclk;
+       uint32_t thiz, thold, twait, tset;
+
+       if (sdrt->tRC_min < 30000)
+               return -EOPNOTSUPP;
+
+       tims->tar = DIV_ROUND_UP(sdrt->tAR_min / 1000, hclkn) - 1;
+       if (tims->tar > FSMC_TAR_MASK)
+               tims->tar = FSMC_TAR_MASK;
+       tims->tclr = DIV_ROUND_UP(sdrt->tCLR_min / 1000, hclkn) - 1;
+       if (tims->tclr > FSMC_TCLR_MASK)
+               tims->tclr = FSMC_TCLR_MASK;
+
+       thiz = sdrt->tCS_min - sdrt->tWP_min;
+       tims->thiz = DIV_ROUND_UP(thiz / 1000, hclkn);
+
+       thold = sdrt->tDH_min;
+       if (thold < sdrt->tCH_min)
+               thold = sdrt->tCH_min;
+       if (thold < sdrt->tCLH_min)
+               thold = sdrt->tCLH_min;
+       if (thold < sdrt->tWH_min)
+               thold = sdrt->tWH_min;
+       if (thold < sdrt->tALH_min)
+               thold = sdrt->tALH_min;
+       if (thold < sdrt->tREH_min)
+               thold = sdrt->tREH_min;
+       tims->thold = DIV_ROUND_UP(thold / 1000, hclkn);
+       if (tims->thold == 0)
+               tims->thold = 1;
+       else if (tims->thold > FSMC_THOLD_MASK)
+               tims->thold = FSMC_THOLD_MASK;
+
+       twait = max(sdrt->tRP_min, sdrt->tWP_min);
+       tims->twait = DIV_ROUND_UP(twait / 1000, hclkn) - 1;
+       if (tims->twait == 0)
+               tims->twait = 1;
+       else if (tims->twait > FSMC_TWAIT_MASK)
+               tims->twait = FSMC_TWAIT_MASK;
+
+       tset = max(sdrt->tCS_min - sdrt->tWP_min,
+                  sdrt->tCEA_max - sdrt->tREA_max);
+       tims->tset = DIV_ROUND_UP(tset / 1000, hclkn) - 1;
+       if (tims->tset == 0)
+               tims->tset = 1;
+       else if (tims->tset > FSMC_TSET_MASK)
+               tims->tset = FSMC_TSET_MASK;
+
+       return 0;
+}
+
+static int fsmc_setup_data_interface(struct mtd_info *mtd, int csline,
+                                    const struct nand_data_interface *conf)
+{
+       struct nand_chip *nand = mtd_to_nand(mtd);
+       struct fsmc_nand_data *host = nand_get_controller_data(nand);
+       struct fsmc_nand_timings tims;
+       const struct nand_sdr_timings *sdrt;
+       int ret;
+
+       sdrt = nand_get_sdr_timings(conf);
+       if (IS_ERR(sdrt))
+               return PTR_ERR(sdrt);
+
+       ret = fsmc_calc_timings(host, sdrt, &tims);
+       if (ret)
+               return ret;
+
+       if (csline == NAND_DATA_IFACE_CHECK_ONLY)
+               return 0;
+
+       fsmc_nand_setup(host, &tims);
+
+       return 0;
+}
+
 /*
  * fsmc_enable_hwecc - Enables Hardware ECC through FSMC registers
  */
@@ -796,10 +865,8 @@ static int fsmc_nand_probe_config_dt(struct platform_device *pdev,
                return -ENOMEM;
        ret = of_property_read_u8_array(np, "timings", (u8 *)host->dev_timings,
                                                sizeof(*host->dev_timings));
-       if (ret) {
-               dev_info(&pdev->dev, "No timings in dts specified, using default timings!\n");
+       if (ret)
                host->dev_timings = NULL;
-       }
 
        /* Set default NAND bank to 0 */
        host->bank = 0;
@@ -933,9 +1000,10 @@ static int __init fsmc_nand_probe(struct platform_device *pdev)
                break;
        }
 
-       fsmc_nand_setup(host->regs_va, host->bank,
-                       nand->options & NAND_BUSWIDTH_16,
-                       host->dev_timings);
+       if (host->dev_timings)
+               fsmc_nand_setup(host, host->dev_timings);
+       else
+               nand->setup_data_interface = fsmc_setup_data_interface;
 
        if (AMBA_REV_BITS(host->pid) >= 8) {
                nand->ecc.read_page = fsmc_read_page_hwecc;
@@ -986,6 +1054,9 @@ static int __init fsmc_nand_probe(struct platform_device *pdev)
                                break;
                        }
 
+               case NAND_ECC_ON_DIE:
+                       break;
+
                default:
                        dev_err(&pdev->dev, "Unsupported ECC mode!\n");
                        goto err_probe;
@@ -1073,9 +1144,8 @@ static int fsmc_nand_resume(struct device *dev)
        struct fsmc_nand_data *host = dev_get_drvdata(dev);
        if (host) {
                clk_prepare_enable(host->clk);
-               fsmc_nand_setup(host->regs_va, host->bank,
-                               host->nand.options & NAND_BUSWIDTH_16,
-                               host->dev_timings);
+               if (host->dev_timings)
+                       fsmc_nand_setup(host, host->dev_timings);
        }
        return 0;
 }
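
fsmc_calc_timings() above converts SDR timings given in picoseconds into HCLK cycles for the FSMC timing fields. A worked example of that conversion; the 166 MHz HCLK is an assumed value, not taken from any particular board:

/*
 * Same cycle math as the driver: hclkn is the HCLK period in whole
 * nanoseconds, timings are converted from ps to ns, then rounded up
 * to cycles and decremented as the register field expects.
 */
#include <stdio.h>

#define NSEC_PER_SEC	1000000000UL
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

int main(void)
{
	unsigned long hclk = 166000000;			/* assumed 166 MHz HCLK */
	unsigned long hclkn = NSEC_PER_SEC / hclk;	/* -> 6 ns per cycle */
	unsigned long tWP_min = 15000;			/* ps, from the SDR mode table */
	unsigned long twait;

	/* same shape as tims->twait = DIV_ROUND_UP(twait / 1000, hclkn) - 1 */
	twait = DIV_ROUND_UP(tWP_min / 1000, hclkn) - 1;
	printf("twait field: %lu cycles\n", twait);	/* 15ns / 6ns -> 3, minus 1 -> 2 */
	return 0;
}
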
index 141bd70a49c2c5c888d290b724b2ed6a59af2216..97787246af41d5ee66ac21d85986fbb34de1c6df 100644 (file)
@@ -26,7 +26,7 @@
 #include "gpmi-regs.h"
 #include "bch-regs.h"
 
-static struct timing_threshod timing_default_threshold = {
+static struct timing_threshold timing_default_threshold = {
        .max_data_setup_cycles       = (BM_GPMI_TIMING0_DATA_SETUP >>
                                                BP_GPMI_TIMING0_DATA_SETUP),
        .internal_data_setup_in_ns   = 0,
@@ -329,7 +329,7 @@ static unsigned int ns_to_cycles(unsigned int time,
 static int gpmi_nfc_compute_hardware_timing(struct gpmi_nand_data *this,
                                        struct gpmi_nfc_hardware_timing *hw)
 {
-       struct timing_threshod *nfc = &timing_default_threshold;
+       struct timing_threshold *nfc = &timing_default_threshold;
        struct resources *r = &this->resources;
        struct nand_chip *nand = &this->nand;
        struct nand_timing target = this->timing;
@@ -932,7 +932,7 @@ static int enable_edo_mode(struct gpmi_nand_data *this, int mode)
 
        nand->select_chip(mtd, 0);
 
-       /* [1] send SET FEATURE commond to NAND */
+       /* [1] send SET FEATURE command to NAND */
        feature[0] = mode;
        ret = nand->onfi_set_features(mtd, nand,
                                ONFI_FEATURE_ADDR_TIMING_MODE, feature);
index d52139635b67c658a0608f044a67dcd019ae1ff5..50f8d4a1b9832326070045d0c294d22393001fbd 100644 (file)
@@ -82,6 +82,10 @@ static int gpmi_ooblayout_free(struct mtd_info *mtd, int section,
        return 0;
 }
 
+static const char * const gpmi_clks_for_mx2x[] = {
+       "gpmi_io",
+};
+
 static const struct mtd_ooblayout_ops gpmi_ooblayout_ops = {
        .ecc = gpmi_ooblayout_ecc,
        .free = gpmi_ooblayout_free,
@@ -91,24 +95,48 @@ static const struct gpmi_devdata gpmi_devdata_imx23 = {
        .type = IS_MX23,
        .bch_max_ecc_strength = 20,
        .max_chain_delay = 16,
+       .clks = gpmi_clks_for_mx2x,
+       .clks_count = ARRAY_SIZE(gpmi_clks_for_mx2x),
 };
 
 static const struct gpmi_devdata gpmi_devdata_imx28 = {
        .type = IS_MX28,
        .bch_max_ecc_strength = 20,
        .max_chain_delay = 16,
+       .clks = gpmi_clks_for_mx2x,
+       .clks_count = ARRAY_SIZE(gpmi_clks_for_mx2x),
+};
+
+static const char * const gpmi_clks_for_mx6[] = {
+       "gpmi_io", "gpmi_apb", "gpmi_bch", "gpmi_bch_apb", "per1_bch",
 };
 
 static const struct gpmi_devdata gpmi_devdata_imx6q = {
        .type = IS_MX6Q,
        .bch_max_ecc_strength = 40,
        .max_chain_delay = 12,
+       .clks = gpmi_clks_for_mx6,
+       .clks_count = ARRAY_SIZE(gpmi_clks_for_mx6),
 };
 
 static const struct gpmi_devdata gpmi_devdata_imx6sx = {
        .type = IS_MX6SX,
        .bch_max_ecc_strength = 62,
        .max_chain_delay = 12,
+       .clks = gpmi_clks_for_mx6,
+       .clks_count = ARRAY_SIZE(gpmi_clks_for_mx6),
+};
+
+static const char * const gpmi_clks_for_mx7d[] = {
+       "gpmi_io", "gpmi_bch_apb",
+};
+
+static const struct gpmi_devdata gpmi_devdata_imx7d = {
+       .type = IS_MX7D,
+       .bch_max_ecc_strength = 62,
+       .max_chain_delay = 12,
+       .clks = gpmi_clks_for_mx7d,
+       .clks_count = ARRAY_SIZE(gpmi_clks_for_mx7d),
 };
 
 static irqreturn_t bch_irq(int irq, void *cookie)
@@ -599,35 +627,14 @@ acquire_err:
        return -EINVAL;
 }
 
-static char *extra_clks_for_mx6q[GPMI_CLK_MAX] = {
-       "gpmi_apb", "gpmi_bch", "gpmi_bch_apb", "per1_bch",
-};
-
 static int gpmi_get_clks(struct gpmi_nand_data *this)
 {
        struct resources *r = &this->resources;
-       char **extra_clks = NULL;
        struct clk *clk;
        int err, i;
 
-       /* The main clock is stored in the first. */
-       r->clock[0] = devm_clk_get(this->dev, "gpmi_io");
-       if (IS_ERR(r->clock[0])) {
-               err = PTR_ERR(r->clock[0]);
-               goto err_clock;
-       }
-
-       /* Get extra clocks */
-       if (GPMI_IS_MX6(this))
-               extra_clks = extra_clks_for_mx6q;
-       if (!extra_clks)
-               return 0;
-
-       for (i = 1; i < GPMI_CLK_MAX; i++) {
-               if (extra_clks[i - 1] == NULL)
-                       break;
-
-               clk = devm_clk_get(this->dev, extra_clks[i - 1]);
+       for (i = 0; i < this->devdata->clks_count; i++) {
+               clk = devm_clk_get(this->dev, this->devdata->clks[i]);
                if (IS_ERR(clk)) {
                        err = PTR_ERR(clk);
                        goto err_clock;
@@ -1929,12 +1936,6 @@ static int gpmi_set_geometry(struct gpmi_nand_data *this)
        return gpmi_alloc_dma_buffer(this);
 }
 
-static void gpmi_nand_exit(struct gpmi_nand_data *this)
-{
-       nand_release(nand_to_mtd(&this->nand));
-       gpmi_free_dma_buffer(this);
-}
-
 static int gpmi_init_last(struct gpmi_nand_data *this)
 {
        struct nand_chip *chip = &this->nand;
@@ -2048,18 +2049,20 @@ static int gpmi_nand_init(struct gpmi_nand_data *this)
 
        ret = nand_boot_init(this);
        if (ret)
-               goto err_out;
+               goto err_nand_cleanup;
        ret = chip->scan_bbt(mtd);
        if (ret)
-               goto err_out;
+               goto err_nand_cleanup;
 
        ret = mtd_device_register(mtd, NULL, 0);
        if (ret)
-               goto err_out;
+               goto err_nand_cleanup;
        return 0;
 
+err_nand_cleanup:
+       nand_cleanup(chip);
 err_out:
-       gpmi_nand_exit(this);
+       gpmi_free_dma_buffer(this);
        return ret;
 }
 
@@ -2076,6 +2079,9 @@ static const struct of_device_id gpmi_nand_id_table[] = {
        }, {
                .compatible = "fsl,imx6sx-gpmi-nand",
                .data = &gpmi_devdata_imx6sx,
+       }, {
+               .compatible = "fsl,imx7d-gpmi-nand",
+               .data = &gpmi_devdata_imx7d,
        }, {}
 };
 MODULE_DEVICE_TABLE(of, gpmi_nand_id_table);
@@ -2129,7 +2135,8 @@ static int gpmi_nand_remove(struct platform_device *pdev)
 {
        struct gpmi_nand_data *this = platform_get_drvdata(pdev);
 
-       gpmi_nand_exit(this);
+       nand_release(nand_to_mtd(&this->nand));
+       gpmi_free_dma_buffer(this);
        release_resources(this);
        return 0;
 }
index 4e49a1f5fa27aec70d5d7a01a7cf7766fc65f199..9df0ad64e7e06f3a41746d020e40b72b7ce45d89 100644 (file)
@@ -123,13 +123,16 @@ enum gpmi_type {
        IS_MX23,
        IS_MX28,
        IS_MX6Q,
-       IS_MX6SX
+       IS_MX6SX,
+       IS_MX7D,
 };
 
 struct gpmi_devdata {
        enum gpmi_type type;
        int bch_max_ecc_strength;
        int max_chain_delay; /* See the async EDO mode */
+       const char * const *clks;
+       const int clks_count;
 };
 
 struct gpmi_nand_data {
@@ -231,7 +234,7 @@ struct gpmi_nfc_hardware_timing {
 };
 
 /**
- * struct timing_threshod - Timing threshold
+ * struct timing_threshold - Timing threshold
  * @max_data_setup_cycles:       The maximum number of data setup cycles that
  *                               can be expressed in the hardware.
  * @internal_data_setup_in_ns:   The time, in ns, that the NFC hardware requires
@@ -253,7 +256,7 @@ struct gpmi_nfc_hardware_timing {
  *                               progress, this is the clock frequency during
  *                               the most recent I/O transaction.
  */
-struct timing_threshod {
+struct timing_threshold {
        const unsigned int      max_chip_count;
        const unsigned int      max_data_setup_cycles;
        const unsigned int      internal_data_setup_in_ns;
@@ -305,6 +308,8 @@ void gpmi_copy_bits(u8 *dst, size_t dst_bit_off,
 #define GPMI_IS_MX28(x)                ((x)->devdata->type == IS_MX28)
 #define GPMI_IS_MX6Q(x)                ((x)->devdata->type == IS_MX6Q)
 #define GPMI_IS_MX6SX(x)       ((x)->devdata->type == IS_MX6SX)
+#define GPMI_IS_MX7D(x)                ((x)->devdata->type == IS_MX7D)
 
-#define GPMI_IS_MX6(x)         (GPMI_IS_MX6Q(x) || GPMI_IS_MX6SX(x))
+#define GPMI_IS_MX6(x)         (GPMI_IS_MX6Q(x) || GPMI_IS_MX6SX(x) || \
+                                GPMI_IS_MX7D(x))
 #endif
index e40364eeb556bd23e0341a8a089d85047282acd1..530caa80b1b6935a62654949e32aac25f0aa3904 100644 (file)
@@ -764,6 +764,8 @@ static int hisi_nfc_probe(struct platform_device *pdev)
        chip->write_buf         = hisi_nfc_write_buf;
        chip->read_buf          = hisi_nfc_read_buf;
        chip->chip_delay        = HINFC504_CHIP_DELAY;
+       chip->onfi_set_features = nand_onfi_get_set_features_notsupp;
+       chip->onfi_get_features = nand_onfi_get_set_features_notsupp;
 
        hisi_nfc_host_init(host);
 
index a39bb70175eea230cab2c3797f4750f075cfa755..8bc835f71b26683f8ef42e5f90da2f05f3c3ee61 100644 (file)
@@ -205,7 +205,7 @@ static int jz4780_nand_init_ecc(struct jz4780_nand_chip *nand, struct device *de
                return -EINVAL;
        }
 
-       mtd->ooblayout = &nand_ooblayout_lp_ops;
+       mtd_set_ooblayout(mtd, &nand_ooblayout_lp_ops);
 
        return 0;
 }
index 6d6eaed2d20c281321df7e3245ab525866eeca80..0e86fb6277c3ae7c5f111bafead23c18f676ffa3 100644 (file)
@@ -708,6 +708,8 @@ static int mpc5121_nfc_probe(struct platform_device *op)
        chip->read_buf = mpc5121_nfc_read_buf;
        chip->write_buf = mpc5121_nfc_write_buf;
        chip->select_chip = mpc5121_nfc_select_chip;
+       chip->onfi_set_features = nand_onfi_get_set_features_notsupp;
+       chip->onfi_get_features = nand_onfi_get_set_features_notsupp;
        chip->bbt_options = NAND_BBT_USE_FLASH;
        chip->ecc.mode = NAND_ECC_SOFT;
        chip->ecc.algo = NAND_ECC_HAMMING;
index dbf256217b3eb75a0486a0f2ec8774c05f793fe8..6c3a4aab0b487114181c000ff4c068d3c17329f8 100644 (file)
 
 #define ECC_IDLE_MASK          BIT(0)
 #define ECC_IRQ_EN             BIT(0)
+#define ECC_PG_IRQ_SEL         BIT(1)
 #define ECC_OP_ENABLE          (1)
 #define ECC_OP_DISABLE         (0)
 
 #define ECC_ENCCON             (0x00)
 #define ECC_ENCCNFG            (0x04)
-#define                ECC_CNFG_4BIT           (0)
-#define                ECC_CNFG_6BIT           (1)
-#define                ECC_CNFG_8BIT           (2)
-#define                ECC_CNFG_10BIT          (3)
-#define                ECC_CNFG_12BIT          (4)
-#define                ECC_CNFG_14BIT          (5)
-#define                ECC_CNFG_16BIT          (6)
-#define                ECC_CNFG_18BIT          (7)
-#define                ECC_CNFG_20BIT          (8)
-#define                ECC_CNFG_22BIT          (9)
-#define                ECC_CNFG_24BIT          (0xa)
-#define                ECC_CNFG_28BIT          (0xb)
-#define                ECC_CNFG_32BIT          (0xc)
-#define                ECC_CNFG_36BIT          (0xd)
-#define                ECC_CNFG_40BIT          (0xe)
-#define                ECC_CNFG_44BIT          (0xf)
-#define                ECC_CNFG_48BIT          (0x10)
-#define                ECC_CNFG_52BIT          (0x11)
-#define                ECC_CNFG_56BIT          (0x12)
-#define                ECC_CNFG_60BIT          (0x13)
 #define                ECC_MODE_SHIFT          (5)
 #define                ECC_MS_SHIFT            (16)
 #define ECC_ENCDIADDR          (0x08)
 #define ECC_ENCIDLE            (0x0C)
-#define ECC_ENCPAR(x)          (0x10 + (x) * sizeof(u32))
 #define ECC_ENCIRQ_EN          (0x80)
 #define ECC_ENCIRQ_STA         (0x84)
 #define ECC_DECCON             (0x100)
@@ -66,7 +46,6 @@
 #define                DEC_CNFG_CORRECT        (0x3 << 12)
 #define ECC_DECIDLE            (0x10C)
 #define ECC_DECENUM0           (0x114)
-#define                ERR_MASK                (0x3f)
 #define ECC_DECDONE            (0x124)
 #define ECC_DECIRQ_EN          (0x200)
 #define ECC_DECIRQ_STA         (0x204)
 #define ECC_IRQ_REG(op)                ((op) == ECC_ENCODE ? \
                                        ECC_ENCIRQ_EN : ECC_DECIRQ_EN)
 
+struct mtk_ecc_caps {
+       u32 err_mask;
+       const u8 *ecc_strength;
+       u8 num_ecc_strength;
+       u32 encode_parity_reg0;
+       int pg_irq_sel;
+};
+
 struct mtk_ecc {
        struct device *dev;
+       const struct mtk_ecc_caps *caps;
        void __iomem *regs;
        struct clk *clk;
 
@@ -87,7 +75,18 @@ struct mtk_ecc {
        struct mutex lock;
        u32 sectors;
 
-       u8 eccdata[112];
+       u8 *eccdata;
+};
+
+/* ecc strengths that each IP supports */
+static const u8 ecc_strength_mt2701[] = {
+       4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 28, 32, 36,
+       40, 44, 48, 52, 56, 60
+};
+
+static const u8 ecc_strength_mt2712[] = {
+       4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 28, 32, 36,
+       40, 44, 48, 52, 56, 60, 68, 72, 80
 };
 
 static inline void mtk_ecc_wait_idle(struct mtk_ecc *ecc,
@@ -136,77 +135,24 @@ static irqreturn_t mtk_ecc_irq(int irq, void *id)
        return IRQ_HANDLED;
 }
 
-static void mtk_ecc_config(struct mtk_ecc *ecc, struct mtk_ecc_config *config)
+static int mtk_ecc_config(struct mtk_ecc *ecc, struct mtk_ecc_config *config)
 {
-       u32 ecc_bit = ECC_CNFG_4BIT, dec_sz, enc_sz;
-       u32 reg;
-
-       switch (config->strength) {
-       case 4:
-               ecc_bit = ECC_CNFG_4BIT;
-               break;
-       case 6:
-               ecc_bit = ECC_CNFG_6BIT;
-               break;
-       case 8:
-               ecc_bit = ECC_CNFG_8BIT;
-               break;
-       case 10:
-               ecc_bit = ECC_CNFG_10BIT;
-               break;
-       case 12:
-               ecc_bit = ECC_CNFG_12BIT;
-               break;
-       case 14:
-               ecc_bit = ECC_CNFG_14BIT;
-               break;
-       case 16:
-               ecc_bit = ECC_CNFG_16BIT;
-               break;
-       case 18:
-               ecc_bit = ECC_CNFG_18BIT;
-               break;
-       case 20:
-               ecc_bit = ECC_CNFG_20BIT;
-               break;
-       case 22:
-               ecc_bit = ECC_CNFG_22BIT;
-               break;
-       case 24:
-               ecc_bit = ECC_CNFG_24BIT;
-               break;
-       case 28:
-               ecc_bit = ECC_CNFG_28BIT;
-               break;
-       case 32:
-               ecc_bit = ECC_CNFG_32BIT;
-               break;
-       case 36:
-               ecc_bit = ECC_CNFG_36BIT;
-               break;
-       case 40:
-               ecc_bit = ECC_CNFG_40BIT;
-               break;
-       case 44:
-               ecc_bit = ECC_CNFG_44BIT;
-               break;
-       case 48:
-               ecc_bit = ECC_CNFG_48BIT;
-               break;
-       case 52:
-               ecc_bit = ECC_CNFG_52BIT;
-               break;
-       case 56:
-               ecc_bit = ECC_CNFG_56BIT;
-               break;
-       case 60:
-               ecc_bit = ECC_CNFG_60BIT;
-               break;
-       default:
-               dev_err(ecc->dev, "invalid strength %d, default to 4 bits\n",
+       u32 ecc_bit, dec_sz, enc_sz;
+       u32 reg, i;
+
+       for (i = 0; i < ecc->caps->num_ecc_strength; i++) {
+               if (ecc->caps->ecc_strength[i] == config->strength)
+                       break;
+       }
+
+       if (i == ecc->caps->num_ecc_strength) {
+               dev_err(ecc->dev, "invalid ecc strength %d\n",
                        config->strength);
+               return -EINVAL;
        }
 
+       ecc_bit = i;
+
        if (config->op == ECC_ENCODE) {
                /* configure ECC encoder (in bits) */
                enc_sz = config->len << 3;
@@ -232,6 +178,8 @@ static void mtk_ecc_config(struct mtk_ecc *ecc, struct mtk_ecc_config *config)
                if (config->sectors)
                        ecc->sectors = 1 << (config->sectors - 1);
        }
+
+       return 0;
 }
 
 void mtk_ecc_get_stats(struct mtk_ecc *ecc, struct mtk_ecc_stats *stats,
@@ -247,8 +195,8 @@ void mtk_ecc_get_stats(struct mtk_ecc *ecc, struct mtk_ecc_stats *stats,
                offset = (i >> 2) << 2;
                err = readl(ecc->regs + ECC_DECENUM0 + offset);
                err = err >> ((i % 4) * 8);
-               err &= ERR_MASK;
-               if (err == ERR_MASK) {
+               err &= ecc->caps->err_mask;
+               if (err == ecc->caps->err_mask) {
                        /* uncorrectable errors */
                        stats->failed++;
                        continue;
@@ -313,6 +261,7 @@ EXPORT_SYMBOL(of_mtk_ecc_get);
 int mtk_ecc_enable(struct mtk_ecc *ecc, struct mtk_ecc_config *config)
 {
        enum mtk_ecc_operation op = config->op;
+       u16 reg_val;
        int ret;
 
        ret = mutex_lock_interruptible(&ecc->lock);
@@ -322,11 +271,27 @@ int mtk_ecc_enable(struct mtk_ecc *ecc, struct mtk_ecc_config *config)
        }
 
        mtk_ecc_wait_idle(ecc, op);
-       mtk_ecc_config(ecc, config);
-       writew(ECC_OP_ENABLE, ecc->regs + ECC_CTL_REG(op));
 
-       init_completion(&ecc->done);
-       writew(ECC_IRQ_EN, ecc->regs + ECC_IRQ_REG(op));
+       ret = mtk_ecc_config(ecc, config);
+       if (ret) {
+               mutex_unlock(&ecc->lock);
+               return ret;
+       }
+
+       if (config->mode != ECC_NFI_MODE || op != ECC_ENCODE) {
+               init_completion(&ecc->done);
+               reg_val = ECC_IRQ_EN;
+               /*
+                * For ECC_NFI_MODE, if ecc->caps->pg_irq_sel is 1, the chip
+                * can only generate one ecc irq during a page read / write.
+                * If it is 0, one ecc irq is generated for each ecc step.
+                */
+               if (ecc->caps->pg_irq_sel && config->mode == ECC_NFI_MODE)
+                       reg_val |= ECC_PG_IRQ_SEL;
+               writew(reg_val, ecc->regs + ECC_IRQ_REG(op));
+       }
+
+       writew(ECC_OP_ENABLE, ecc->regs + ECC_CTL_REG(op));
 
        return 0;
 }
@@ -396,7 +361,9 @@ int mtk_ecc_encode(struct mtk_ecc *ecc, struct mtk_ecc_config *config,
        len = (config->strength * ECC_PARITY_BITS + 7) >> 3;
 
        /* write the parity bytes generated by the ECC back to temp buffer */
-       __ioread32_copy(ecc->eccdata, ecc->regs + ECC_ENCPAR(0), round_up(len, 4));
+       __ioread32_copy(ecc->eccdata,
+                       ecc->regs + ecc->caps->encode_parity_reg0,
+                       round_up(len, 4));
 
        /* copy into possibly unaligned OOB region with actual length */
        memcpy(data + bytes, ecc->eccdata, len);
@@ -409,37 +376,79 @@ timeout:
 }
 EXPORT_SYMBOL(mtk_ecc_encode);
 
-void mtk_ecc_adjust_strength(u32 *p)
+void mtk_ecc_adjust_strength(struct mtk_ecc *ecc, u32 *p)
 {
-       u32 ecc[] = {4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 28, 32, 36,
-                       40, 44, 48, 52, 56, 60};
+       const u8 *ecc_strength = ecc->caps->ecc_strength;
        int i;
 
-       for (i = 0; i < ARRAY_SIZE(ecc); i++) {
-               if (*p <= ecc[i]) {
+       for (i = 0; i < ecc->caps->num_ecc_strength; i++) {
+               if (*p <= ecc_strength[i]) {
                        if (!i)
-                               *p = ecc[i];
-                       else if (*p != ecc[i])
-                               *p = ecc[i - 1];
+                               *p = ecc_strength[i];
+                       else if (*p != ecc_strength[i])
+                               *p = ecc_strength[i - 1];
                        return;
                }
        }
 
-       *p = ecc[ARRAY_SIZE(ecc) - 1];
+       *p = ecc_strength[ecc->caps->num_ecc_strength - 1];
 }
 EXPORT_SYMBOL(mtk_ecc_adjust_strength);
 
+static const struct mtk_ecc_caps mtk_ecc_caps_mt2701 = {
+       .err_mask = 0x3f,
+       .ecc_strength = ecc_strength_mt2701,
+       .num_ecc_strength = 20,
+       .encode_parity_reg0 = 0x10,
+       .pg_irq_sel = 0,
+};
+
+static const struct mtk_ecc_caps mtk_ecc_caps_mt2712 = {
+       .err_mask = 0x7f,
+       .ecc_strength = ecc_strength_mt2712,
+       .num_ecc_strength = 23,
+       .encode_parity_reg0 = 0x300,
+       .pg_irq_sel = 1,
+};
+
+static const struct of_device_id mtk_ecc_dt_match[] = {
+       {
+               .compatible = "mediatek,mt2701-ecc",
+               .data = &mtk_ecc_caps_mt2701,
+       }, {
+               .compatible = "mediatek,mt2712-ecc",
+               .data = &mtk_ecc_caps_mt2712,
+       },
+       {},
+};
+
 static int mtk_ecc_probe(struct platform_device *pdev)
 {
        struct device *dev = &pdev->dev;
        struct mtk_ecc *ecc;
        struct resource *res;
+       const struct of_device_id *of_ecc_id = NULL;
+       u32 max_eccdata_size;
        int irq, ret;
 
        ecc = devm_kzalloc(dev, sizeof(*ecc), GFP_KERNEL);
        if (!ecc)
                return -ENOMEM;
 
+       of_ecc_id = of_match_device(mtk_ecc_dt_match, &pdev->dev);
+       if (!of_ecc_id)
+               return -ENODEV;
+
+       ecc->caps = of_ecc_id->data;
+
+       max_eccdata_size = ecc->caps->num_ecc_strength - 1;
+       max_eccdata_size = ecc->caps->ecc_strength[max_eccdata_size];
+       max_eccdata_size = (max_eccdata_size * ECC_PARITY_BITS + 7) >> 3;
+       max_eccdata_size = round_up(max_eccdata_size, 4);
+       ecc->eccdata = devm_kzalloc(dev, max_eccdata_size, GFP_KERNEL);
+       if (!ecc->eccdata)
+               return -ENOMEM;
+
        res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
        ecc->regs = devm_ioremap_resource(dev, res);
        if (IS_ERR(ecc->regs)) {
@@ -500,19 +509,12 @@ static int mtk_ecc_resume(struct device *dev)
                return ret;
        }
 
-       mtk_ecc_hw_init(ecc);
-
        return 0;
 }
 
 static SIMPLE_DEV_PM_OPS(mtk_ecc_pm_ops, mtk_ecc_suspend, mtk_ecc_resume);
 #endif
 
-static const struct of_device_id mtk_ecc_dt_match[] = {
-       { .compatible = "mediatek,mt2701-ecc" },
-       {},
-};
-
 MODULE_DEVICE_TABLE(of, mtk_ecc_dt_match);
 
 static struct platform_driver mtk_ecc_driver = {
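
With the per-SoC strength tables above, the hardware strength setting becomes the table index, and mtk_ecc_adjust_strength() rounds an unsupported request down to the nearest supported value or clamps it to the maximum. A standalone sketch of that rounding, using the mt2701 table values copied from the diff:

#include <stdio.h>

static const unsigned char strengths[] = {
	4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 28, 32, 36,
	40, 44, 48, 52, 56, 60
};

static unsigned int adjust_strength(unsigned int p)
{
	unsigned int i, n = sizeof(strengths) / sizeof(strengths[0]);

	for (i = 0; i < n; i++) {
		if (p <= strengths[i])
			return (!i || p == strengths[i]) ? strengths[i]
							 : strengths[i - 1];
	}
	return strengths[n - 1];	/* clamp to the maximum */
}

int main(void)
{
	/* 7 rounds down to 6, 24 is kept, 100 clamps to 60 */
	printf("%u %u %u\n", adjust_strength(7), adjust_strength(24),
	       adjust_strength(100));
	return 0;
}
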
index cbeba5cd1c13997f6f0cb1e7b966571954e28e1a..d245c14f1b8026c366bd3e0c604107a28b00d77c 100644 (file)
@@ -42,7 +42,7 @@ void mtk_ecc_get_stats(struct mtk_ecc *, struct mtk_ecc_stats *, int);
 int mtk_ecc_wait_done(struct mtk_ecc *, enum mtk_ecc_operation);
 int mtk_ecc_enable(struct mtk_ecc *, struct mtk_ecc_config *);
 void mtk_ecc_disable(struct mtk_ecc *);
-void mtk_ecc_adjust_strength(u32 *);
+void mtk_ecc_adjust_strength(struct mtk_ecc *ecc, u32 *p);
 
 struct mtk_ecc *of_mtk_ecc_get(struct device_node *);
 void mtk_ecc_release(struct mtk_ecc *);
index 6c517c682939db436eb040e7481682c6ae309e69..f7ae9946437513a30f0c6e3220f776e29e8a0131 100644 (file)
@@ -24,6 +24,7 @@
 #include <linux/module.h>
 #include <linux/iopoll.h>
 #include <linux/of.h>
+#include <linux/of_device.h>
 #include "mtk_ecc.h"
 
 /* NAND controller register definition */
 #define NFI_PAGEFMT            (0x04)
 #define                PAGEFMT_FDM_ECC_SHIFT   (12)
 #define                PAGEFMT_FDM_SHIFT       (8)
-#define                PAGEFMT_SPARE_16        (0)
-#define                PAGEFMT_SPARE_26        (1)
-#define                PAGEFMT_SPARE_27        (2)
-#define                PAGEFMT_SPARE_28        (3)
-#define                PAGEFMT_SPARE_32        (4)
-#define                PAGEFMT_SPARE_36        (5)
-#define                PAGEFMT_SPARE_40        (6)
-#define                PAGEFMT_SPARE_44        (7)
-#define                PAGEFMT_SPARE_48        (8)
-#define                PAGEFMT_SPARE_49        (9)
-#define                PAGEFMT_SPARE_50        (0xa)
-#define                PAGEFMT_SPARE_51        (0xb)
-#define                PAGEFMT_SPARE_52        (0xc)
-#define                PAGEFMT_SPARE_62        (0xd)
-#define                PAGEFMT_SPARE_63        (0xe)
-#define                PAGEFMT_SPARE_64        (0xf)
-#define                PAGEFMT_SPARE_SHIFT     (4)
 #define                PAGEFMT_SEC_SEL_512     BIT(2)
 #define                PAGEFMT_512_2K          (0)
 #define                PAGEFMT_2K_4K           (1)
 #define MTK_RESET_TIMEOUT      (1000000)
 #define MTK_MAX_SECTOR         (16)
 #define MTK_NAND_MAX_NSELS     (2)
+#define MTK_NFC_MIN_SPARE      (16)
+#define ACCTIMING(tpoecs, tprecs, tc2r, tw2r, twh, twst, trlt) \
+       ((tpoecs) << 28 | (tprecs) << 22 | (tc2r) << 16 | \
+       (tw2r) << 12 | (twh) << 8 | (twst) << 4 | (trlt))
+
+struct mtk_nfc_caps {
+       const u8 *spare_size;
+       u8 num_spare_size;
+       u8 pageformat_spare_shift;
+       u8 nfi_clk_div;
+};
 
 struct mtk_nfc_bad_mark_ctl {
        void (*bm_swap)(struct mtd_info *, u8 *buf, int raw);
@@ -155,6 +150,7 @@ struct mtk_nfc {
        struct mtk_ecc *ecc;
 
        struct device *dev;
+       const struct mtk_nfc_caps *caps;
        void __iomem *regs;
 
        struct completion done;
@@ -163,6 +159,20 @@ struct mtk_nfc {
        u8 *buffer;
 };
 
+/*
+ * Supported spare sizes of each IP.
+ * The order must match the spare size bitfield definition of the
+ * NFI_PAGEFMT register.
+ */
+static const u8 spare_size_mt2701[] = {
+       16, 26, 27, 28, 32, 36, 40, 44, 48, 49, 50, 51, 52, 62, 63, 64
+};
+
+static const u8 spare_size_mt2712[] = {
+       16, 26, 27, 28, 32, 36, 40, 44, 48, 49, 50, 51, 52, 62, 61, 63, 64, 67,
+       74
+};
+
 static inline struct mtk_nfc_nand_chip *to_mtk_nand(struct nand_chip *nand)
 {
        return container_of(nand, struct mtk_nfc_nand_chip, nand);
@@ -308,7 +318,7 @@ static int mtk_nfc_hw_runtime_config(struct mtd_info *mtd)
        struct nand_chip *chip = mtd_to_nand(mtd);
        struct mtk_nfc_nand_chip *mtk_nand = to_mtk_nand(chip);
        struct mtk_nfc *nfc = nand_get_controller_data(chip);
-       u32 fmt, spare;
+       u32 fmt, spare, i;
 
        if (!mtd->writesize)
                return 0;
@@ -352,63 +362,21 @@ static int mtk_nfc_hw_runtime_config(struct mtd_info *mtd)
        if (chip->ecc.size == 1024)
                spare >>= 1;
 
-       switch (spare) {
-       case 16:
-               fmt |= (PAGEFMT_SPARE_16 << PAGEFMT_SPARE_SHIFT);
-               break;
-       case 26:
-               fmt |= (PAGEFMT_SPARE_26 << PAGEFMT_SPARE_SHIFT);
-               break;
-       case 27:
-               fmt |= (PAGEFMT_SPARE_27 << PAGEFMT_SPARE_SHIFT);
-               break;
-       case 28:
-               fmt |= (PAGEFMT_SPARE_28 << PAGEFMT_SPARE_SHIFT);
-               break;
-       case 32:
-               fmt |= (PAGEFMT_SPARE_32 << PAGEFMT_SPARE_SHIFT);
-               break;
-       case 36:
-               fmt |= (PAGEFMT_SPARE_36 << PAGEFMT_SPARE_SHIFT);
-               break;
-       case 40:
-               fmt |= (PAGEFMT_SPARE_40 << PAGEFMT_SPARE_SHIFT);
-               break;
-       case 44:
-               fmt |= (PAGEFMT_SPARE_44 << PAGEFMT_SPARE_SHIFT);
-               break;
-       case 48:
-               fmt |= (PAGEFMT_SPARE_48 << PAGEFMT_SPARE_SHIFT);
-               break;
-       case 49:
-               fmt |= (PAGEFMT_SPARE_49 << PAGEFMT_SPARE_SHIFT);
-               break;
-       case 50:
-               fmt |= (PAGEFMT_SPARE_50 << PAGEFMT_SPARE_SHIFT);
-               break;
-       case 51:
-               fmt |= (PAGEFMT_SPARE_51 << PAGEFMT_SPARE_SHIFT);
-               break;
-       case 52:
-               fmt |= (PAGEFMT_SPARE_52 << PAGEFMT_SPARE_SHIFT);
-               break;
-       case 62:
-               fmt |= (PAGEFMT_SPARE_62 << PAGEFMT_SPARE_SHIFT);
-               break;
-       case 63:
-               fmt |= (PAGEFMT_SPARE_63 << PAGEFMT_SPARE_SHIFT);
-               break;
-       case 64:
-               fmt |= (PAGEFMT_SPARE_64 << PAGEFMT_SPARE_SHIFT);
-               break;
-       default:
-               dev_err(nfc->dev, "invalid spare per sector %d\n", spare);
+       for (i = 0; i < nfc->caps->num_spare_size; i++) {
+               if (nfc->caps->spare_size[i] == spare)
+                       break;
+       }
+
+       if (i == nfc->caps->num_spare_size) {
+               dev_err(nfc->dev, "invalid spare size %d\n", spare);
                return -EINVAL;
        }
 
+       fmt |= i << nfc->caps->pageformat_spare_shift;
+
        fmt |= mtk_nand->fdm.reg_size << PAGEFMT_FDM_SHIFT;
        fmt |= mtk_nand->fdm.ecc_size << PAGEFMT_FDM_ECC_SHIFT;
-       nfi_writew(nfc, fmt, NFI_PAGEFMT);
+       nfi_writel(nfc, fmt, NFI_PAGEFMT);
 
        nfc->ecc_cfg.strength = chip->ecc.strength;
        nfc->ecc_cfg.len = chip->ecc.size + mtk_nand->fdm.ecc_size;
@@ -531,6 +499,74 @@ static void mtk_nfc_write_buf(struct mtd_info *mtd, const u8 *buf, int len)
                mtk_nfc_write_byte(mtd, buf[i]);
 }
 
+static int mtk_nfc_setup_data_interface(struct mtd_info *mtd, int csline,
+                                       const struct nand_data_interface *conf)
+{
+       struct mtk_nfc *nfc = nand_get_controller_data(mtd_to_nand(mtd));
+       const struct nand_sdr_timings *timings;
+       u32 rate, tpoecs, tprecs, tc2r, tw2r, twh, twst, trlt;
+
+       timings = nand_get_sdr_timings(conf);
+       if (IS_ERR(timings))
+               return -ENOTSUPP;
+
+       if (csline == NAND_DATA_IFACE_CHECK_ONLY)
+               return 0;
+
+       rate = clk_get_rate(nfc->clk.nfi_clk);
+       /* There is a frequency divider in some IPs */
+       rate /= nfc->caps->nfi_clk_div;
+
+       /* turn the clock rate into kHz */
+       rate /= 1000;
+
+       tpoecs = max(timings->tALH_min, timings->tCLH_min) / 1000;
+       tpoecs = DIV_ROUND_UP(tpoecs * rate, 1000000);
+       tpoecs &= 0xf;
+
+       tprecs = max(timings->tCLS_min, timings->tALS_min) / 1000;
+       tprecs = DIV_ROUND_UP(tprecs * rate, 1000000);
+       tprecs &= 0x3f;
+
+       /* sdr interface has no tCR which means CE# low to RE# low */
+       tc2r = 0;
+
+       tw2r = timings->tWHR_min / 1000;
+       tw2r = DIV_ROUND_UP(tw2r * rate, 1000000);
+       tw2r = DIV_ROUND_UP(tw2r - 1, 2);
+       tw2r &= 0xf;
+
+       twh = max(timings->tREH_min, timings->tWH_min) / 1000;
+       twh = DIV_ROUND_UP(twh * rate, 1000000) - 1;
+       twh &= 0xf;
+
+       twst = timings->tWP_min / 1000;
+       twst = DIV_ROUND_UP(twst * rate, 1000000) - 1;
+       twst &= 0xf;
+
+       trlt = max(timings->tREA_max, timings->tRP_min) / 1000;
+       trlt = DIV_ROUND_UP(trlt * rate, 1000000) - 1;
+       trlt &= 0xf;
+
+       /*
+        * ACCON: access timing control register
+        * -------------------------------------
+        * 31:28: tpoecs, minimum required time for CS post pulling down after
+        *        accessing the device
+        * 27:22: tprecs, minimum required time for CS pre pulling down before
+        *        accessing the device
+        * 21:16: tc2r, minimum required time from NCEB low to NREB low
+        * 15:12: tw2r, minimum required time from NWEB high to NREB low.
+        * 11:08: twh, write enable hold time
+        * 07:04: twst, write wait states
+        * 03:00: trlt, read wait states
+        */
+       trlt = ACCTIMING(tpoecs, tprecs, tc2r, tw2r, twh, twst, trlt);
+       nfi_writel(nfc, trlt, NFI_ACCCON);
+
+       return 0;
+}
+
 static int mtk_nfc_sector_encode(struct nand_chip *chip, u8 *data)
 {
        struct mtk_nfc *nfc = nand_get_controller_data(chip);
@@ -987,21 +1023,6 @@ static int mtk_nfc_read_oob_std(struct mtd_info *mtd, struct nand_chip *chip,
 
 static inline void mtk_nfc_hw_init(struct mtk_nfc *nfc)
 {
-       /*
-        * ACCON: access timing control register
-        * -------------------------------------
-        * 31:28: minimum required time for CS post pulling down after accessing
-        *      the device
-        * 27:22: minimum required time for CS pre pulling down before accessing
-        *      the device
-        * 21:16: minimum required time from NCEB low to NREB low
-        * 15:12: minimum required time from NWEB high to NREB low.
-        * 11:08: write enable hold time
-        * 07:04: write wait states
-        * 03:00: read wait states
-        */
-       nfi_writel(nfc, 0x10804211, NFI_ACCCON);
-
        /*
         * CNRNB: nand ready/busy register
         * -------------------------------
@@ -1009,7 +1030,7 @@ static inline void mtk_nfc_hw_init(struct mtk_nfc *nfc)
         * 0  : poll the status of the busy/ready signal after [7:4]*16 cycles.
         */
        nfi_writew(nfc, 0xf1, NFI_CNRNB);
-       nfi_writew(nfc, PAGEFMT_8K_16K, NFI_PAGEFMT);
+       nfi_writel(nfc, PAGEFMT_8K_16K, NFI_PAGEFMT);
 
        mtk_nfc_hw_reset(nfc);
 
@@ -1131,12 +1152,12 @@ static void mtk_nfc_set_bad_mark_ctl(struct mtk_nfc_bad_mark_ctl *bm_ctl,
        }
 }
 
-static void mtk_nfc_set_spare_per_sector(u32 *sps, struct mtd_info *mtd)
+static int mtk_nfc_set_spare_per_sector(u32 *sps, struct mtd_info *mtd)
 {
        struct nand_chip *nand = mtd_to_nand(mtd);
-       u32 spare[] = {16, 26, 27, 28, 32, 36, 40, 44,
-                       48, 49, 50, 51, 52, 62, 63, 64};
-       u32 eccsteps, i;
+       struct mtk_nfc *nfc = nand_get_controller_data(nand);
+       const u8 *spare = nfc->caps->spare_size;
+       u32 eccsteps, i, closest_spare = 0;
 
        eccsteps = mtd->writesize / nand->ecc.size;
        *sps = mtd->oobsize / eccsteps;
@@ -1144,28 +1165,31 @@ static void mtk_nfc_set_spare_per_sector(u32 *sps, struct mtd_info *mtd)
        if (nand->ecc.size == 1024)
                *sps >>= 1;
 
-       for (i = 0; i < ARRAY_SIZE(spare); i++) {
-               if (*sps <= spare[i]) {
-                       if (!i)
-                               *sps = spare[i];
-                       else if (*sps != spare[i])
-                               *sps = spare[i - 1];
-                       break;
+       if (*sps < MTK_NFC_MIN_SPARE)
+               return -EINVAL;
+
+       for (i = 0; i < nfc->caps->num_spare_size; i++) {
+               if (*sps >= spare[i] && spare[i] >= spare[closest_spare]) {
+                       closest_spare = i;
+                       if (*sps == spare[i])
+                               break;
                }
        }
 
-       if (i >= ARRAY_SIZE(spare))
-               *sps = spare[ARRAY_SIZE(spare) - 1];
+       *sps = spare[closest_spare];
 
        if (nand->ecc.size == 1024)
                *sps <<= 1;
+
+       return 0;
 }
 
 static int mtk_nfc_ecc_init(struct device *dev, struct mtd_info *mtd)
 {
        struct nand_chip *nand = mtd_to_nand(mtd);
+       struct mtk_nfc *nfc = nand_get_controller_data(nand);
        u32 spare;
-       int free;
+       int free, ret;
 
        /* support only ecc hw mode */
        if (nand->ecc.mode != NAND_ECC_HW) {
@@ -1194,7 +1218,9 @@ static int mtk_nfc_ecc_init(struct device *dev, struct mtd_info *mtd)
                        nand->ecc.size = 1024;
                }
 
-               mtk_nfc_set_spare_per_sector(&spare, mtd);
+               ret = mtk_nfc_set_spare_per_sector(&spare, mtd);
+               if (ret)
+                       return ret;
 
                /* calculate oob bytes except ecc parity data */
                free = ((nand->ecc.strength * ECC_PARITY_BITS) + 7) >> 3;
@@ -1214,7 +1240,7 @@ static int mtk_nfc_ecc_init(struct device *dev, struct mtd_info *mtd)
                }
        }
 
-       mtk_ecc_adjust_strength(&nand->ecc.strength);
+       mtk_ecc_adjust_strength(nfc->ecc, &nand->ecc.strength);
 
        dev_info(dev, "eccsize %d eccstrength %d\n",
                 nand->ecc.size, nand->ecc.strength);
@@ -1271,6 +1297,7 @@ static int mtk_nfc_nand_chip_init(struct device *dev, struct mtk_nfc *nfc,
        nand->read_byte = mtk_nfc_read_byte;
        nand->read_buf = mtk_nfc_read_buf;
        nand->cmd_ctrl = mtk_nfc_cmd_ctrl;
+       nand->setup_data_interface = mtk_nfc_setup_data_interface;
 
        /* set default mode in case dt entry is missing */
        nand->ecc.mode = NAND_ECC_HW;
@@ -1312,7 +1339,10 @@ static int mtk_nfc_nand_chip_init(struct device *dev, struct mtk_nfc *nfc,
                return -EINVAL;
        }
 
-       mtk_nfc_set_spare_per_sector(&chip->spare_per_sector, mtd);
+       ret = mtk_nfc_set_spare_per_sector(&chip->spare_per_sector, mtd);
+       if (ret)
+               return ret;
+
        mtk_nfc_set_fdm(&chip->fdm, mtd);
        mtk_nfc_set_bad_mark_ctl(&chip->bad_mark, mtd);
 
@@ -1354,12 +1384,39 @@ static int mtk_nfc_nand_chips_init(struct device *dev, struct mtk_nfc *nfc)
        return 0;
 }
 
+static const struct mtk_nfc_caps mtk_nfc_caps_mt2701 = {
+       .spare_size = spare_size_mt2701,
+       .num_spare_size = 16,
+       .pageformat_spare_shift = 4,
+       .nfi_clk_div = 1,
+};
+
+static const struct mtk_nfc_caps mtk_nfc_caps_mt2712 = {
+       .spare_size = spare_size_mt2712,
+       .num_spare_size = 19,
+       .pageformat_spare_shift = 16,
+       .nfi_clk_div = 2,
+};
+
+static const struct of_device_id mtk_nfc_id_table[] = {
+       {
+               .compatible = "mediatek,mt2701-nfc",
+               .data = &mtk_nfc_caps_mt2701,
+       }, {
+               .compatible = "mediatek,mt2712-nfc",
+               .data = &mtk_nfc_caps_mt2712,
+       },
+       {}
+};
+MODULE_DEVICE_TABLE(of, mtk_nfc_id_table);
+
 static int mtk_nfc_probe(struct platform_device *pdev)
 {
        struct device *dev = &pdev->dev;
        struct device_node *np = dev->of_node;
        struct mtk_nfc *nfc;
        struct resource *res;
+       const struct of_device_id *of_nfc_id = NULL;
        int ret, irq;
 
        nfc = devm_kzalloc(dev, sizeof(*nfc), GFP_KERNEL);
@@ -1423,6 +1480,14 @@ static int mtk_nfc_probe(struct platform_device *pdev)
                goto clk_disable;
        }
 
+       of_nfc_id = of_match_device(mtk_nfc_id_table, &pdev->dev);
+       if (!of_nfc_id) {
+               ret = -ENODEV;
+               goto clk_disable;
+       }
+
+       nfc->caps = of_nfc_id->data;
+
        platform_set_drvdata(pdev, nfc);
 
        ret = mtk_nfc_nand_chips_init(dev, nfc);
@@ -1485,8 +1550,6 @@ static int mtk_nfc_resume(struct device *dev)
        if (ret)
                return ret;
 
-       mtk_nfc_hw_init(nfc);
-
        /* reset NAND chip if VCC was powered off */
        list_for_each_entry(chip, &nfc->chips, node) {
                nand = &chip->nand;
@@ -1503,12 +1566,6 @@ static int mtk_nfc_resume(struct device *dev)
 static SIMPLE_DEV_PM_OPS(mtk_nfc_pm_ops, mtk_nfc_suspend, mtk_nfc_resume);
 #endif
 
-static const struct of_device_id mtk_nfc_id_table[] = {
-       { .compatible = "mediatek,mt2701-nfc" },
-       {}
-};
-MODULE_DEVICE_TABLE(of, mtk_nfc_id_table);
-
 static struct platform_driver mtk_nfc_driver = {
        .probe  = mtk_nfc_probe,
        .remove = mtk_nfc_remove,
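
mtk_nfc_setup_data_interface() above derives each ACCCON field from the SDR timings and the (divided) NFI clock, then packs the fields with ACCTIMING(). A worked example of one field; the macro is copied from the diff and the 100 MHz NFI clock is an assumed example rate:

#include <stdio.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))
#define ACCTIMING(tpoecs, tprecs, tc2r, tw2r, twh, twst, trlt) \
	((tpoecs) << 28 | (tprecs) << 22 | (tc2r) << 16 | \
	(tw2r) << 12 | (twh) << 8 | (twst) << 4 | (trlt))

int main(void)
{
	unsigned int rate = 100000;	/* NFI clock in kHz (assumed 100 MHz) */
	unsigned int tWP_min = 15000;	/* ps, write pulse width from the SDR table */
	unsigned int twst;

	/* ps -> ns, then ns * kHz / 1e6 cycles, minus one as in the driver */
	twst = DIV_ROUND_UP((tWP_min / 1000) * rate, 1000000) - 1;
	twst &= 0xf;

	/* pack with the other fields left at 0 for the example */
	printf("twst = %u, ACCCON = 0x%08x\n", twst,
	       ACCTIMING(0, 0, 0, 0, 0, twst, 0));
	return 0;
}
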
index 61ca020c527295950241c0982b990af5967c9078..a764d5ca7536b33fdb60a7ced19cf9980cffdd71 100644 (file)
@@ -152,9 +152,8 @@ struct mxc_nand_devtype_data {
        void (*select_chip)(struct mtd_info *mtd, int chip);
        int (*correct_data)(struct mtd_info *mtd, u_char *dat,
                        u_char *read_ecc, u_char *calc_ecc);
-       int (*setup_data_interface)(struct mtd_info *mtd,
-                                   const struct nand_data_interface *conf,
-                                   bool check_only);
+       int (*setup_data_interface)(struct mtd_info *mtd, int csline,
+                                   const struct nand_data_interface *conf);
 
        /*
         * On i.MX21 the CONFIG2:INT bit cannot be read if interrupts are masked
@@ -1015,9 +1014,8 @@ static void preset_v1(struct mtd_info *mtd)
        writew(0x4, NFC_V1_V2_WRPROT);
 }
 
-static int mxc_nand_v2_setup_data_interface(struct mtd_info *mtd,
-                                       const struct nand_data_interface *conf,
-                                       bool check_only)
+static int mxc_nand_v2_setup_data_interface(struct mtd_info *mtd, int csline,
+                                       const struct nand_data_interface *conf)
 {
        struct nand_chip *nand_chip = mtd_to_nand(mtd);
        struct mxc_nand_host *host = nand_get_controller_data(nand_chip);
@@ -1075,7 +1073,7 @@ static int mxc_nand_v2_setup_data_interface(struct mtd_info *mtd,
                return -EINVAL;
        }
 
-       if (check_only)
+       if (csline == NAND_DATA_IFACE_CHECK_ONLY)
                return 0;
 
        ret = clk_set_rate(host->clk, rate);
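
The conversions above (mxc_nand, fsmc, mtk) all follow the reworked ->setup_data_interface() prototype: the new csline argument identifies the die, and NAND_DATA_IFACE_CHECK_ONLY asks the driver to validate the timings without programming the hardware. A non-buildable skeleton of such a hook; the foo_* helpers are hypothetical:

#include <linux/err.h>
#include <linux/errno.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/nand.h>

/* hypothetical controller-specific helpers, not part of any real driver */
bool foo_timings_supported(const struct nand_sdr_timings *sdr);
void foo_program_timings(struct mtd_info *mtd, const struct nand_sdr_timings *sdr);

static int foo_setup_data_interface(struct mtd_info *mtd, int csline,
				    const struct nand_data_interface *conf)
{
	const struct nand_sdr_timings *sdr = nand_get_sdr_timings(conf);

	if (IS_ERR(sdr))
		return PTR_ERR(sdr);

	/* reject timings the controller cannot meet */
	if (!foo_timings_supported(sdr))
		return -ENOTSUPP;

	/* only validate when asked to check, do not touch the hardware */
	if (csline == NAND_DATA_IFACE_CHECK_ONLY)
		return 0;

	foo_program_timings(mtd, sdr);
	return 0;
}

The CHECK_ONLY pass is what nand_init_data_interface() in the core uses to probe which ONFI timing mode the controller can actually honor before any timings are applied.
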
index bf8486c406d3da3b8c98d9186512d5b1b88e87fa..5fa5ddc94834d0a27a8add0125ed27310601af09 100644 (file)
@@ -755,6 +755,16 @@ static void nand_command(struct mtd_info *mtd, unsigned int command,
                return;
 
                /* This applies to read commands */
+       case NAND_CMD_READ0:
+               /*
+                * READ0 is sometimes used to exit GET STATUS mode. When this
+                * is the case no address cycles are requested, and we can use
+                * this information to detect that we should not wait for the
+                * device to be ready.
+                */
+               if (column == -1 && page_addr == -1)
+                       return;
+
        default:
                /*
                 * If we don't have access to the busy pin, we apply the given
@@ -889,6 +899,15 @@ static void nand_command_lp(struct mtd_info *mtd, unsigned int command,
                return;
 
        case NAND_CMD_READ0:
+               /*
+                * READ0 is sometimes used to exit GET STATUS mode. When this
+                * is the case no address cycles are requested, and we can use
+                * this information to detect that READSTART should not be
+                * issued.
+                */
+               if (column == -1 && page_addr == -1)
+                       return;
+
                chip->cmd_ctrl(mtd, NAND_CMD_READSTART,
                               NAND_NCE | NAND_CLE | NAND_CTRL_CHANGE);
                chip->cmd_ctrl(mtd, NAND_CMD_NONE,
@@ -1044,12 +1063,13 @@ static int nand_wait(struct mtd_info *mtd, struct nand_chip *chip)
 /**
  * nand_reset_data_interface - Reset data interface and timings
  * @chip: The NAND chip
+ * @chipnr: Internal die id
  *
  * Reset the Data interface and timings to ONFI mode 0.
  *
  * Returns 0 for success or negative error code otherwise.
  */
-static int nand_reset_data_interface(struct nand_chip *chip)
+static int nand_reset_data_interface(struct nand_chip *chip, int chipnr)
 {
        struct mtd_info *mtd = nand_to_mtd(chip);
        const struct nand_data_interface *conf;
@@ -1073,7 +1093,7 @@ static int nand_reset_data_interface(struct nand_chip *chip)
         */
 
        conf = nand_get_default_data_interface();
-       ret = chip->setup_data_interface(mtd, conf, false);
+       ret = chip->setup_data_interface(mtd, chipnr, conf);
        if (ret)
                pr_err("Failed to configure data interface to SDR timing mode 0\n");
 
@@ -1083,6 +1103,7 @@ static int nand_reset_data_interface(struct nand_chip *chip)
 /**
  * nand_setup_data_interface - Setup the best data interface and timings
  * @chip: The NAND chip
+ * @chipnr: Internal die id
  *
  * Find and configure the best data interface and NAND timings supported by
  * the chip and the driver.
@@ -1092,7 +1113,7 @@ static int nand_reset_data_interface(struct nand_chip *chip)
  *
  * Returns 0 for success or negative error code otherwise.
  */
-static int nand_setup_data_interface(struct nand_chip *chip)
+static int nand_setup_data_interface(struct nand_chip *chip, int chipnr)
 {
        struct mtd_info *mtd = nand_to_mtd(chip);
        int ret;
@@ -1116,7 +1137,7 @@ static int nand_setup_data_interface(struct nand_chip *chip)
                        goto err;
        }
 
-       ret = chip->setup_data_interface(mtd, chip->data_interface, false);
+       ret = chip->setup_data_interface(mtd, chipnr, chip->data_interface);
 err:
        return ret;
 }
@@ -1167,8 +1188,10 @@ static int nand_init_data_interface(struct nand_chip *chip)
                if (ret)
                        continue;
 
-               ret = chip->setup_data_interface(mtd, chip->data_interface,
-                                                true);
+               /* Pass NAND_DATA_IFACE_CHECK_ONLY to only check the timings */
+               ret = chip->setup_data_interface(mtd,
+                                                NAND_DATA_IFACE_CHECK_ONLY,
+                                                chip->data_interface);
                if (!ret) {
                        chip->onfi_timing_mode_default = mode;
                        break;
@@ -1195,7 +1218,7 @@ int nand_reset(struct nand_chip *chip, int chipnr)
        struct mtd_info *mtd = nand_to_mtd(chip);
        int ret;
 
-       ret = nand_reset_data_interface(chip);
+       ret = nand_reset_data_interface(chip, chipnr);
        if (ret)
                return ret;
 
@@ -1208,7 +1231,7 @@ int nand_reset(struct nand_chip *chip, int chipnr)
        chip->select_chip(mtd, -1);
 
        chip->select_chip(mtd, chipnr);
-       ret = nand_setup_data_interface(chip);
+       ret = nand_setup_data_interface(chip, chipnr);
        chip->select_chip(mtd, -1);
        if (ret)
                return ret;
@@ -1424,7 +1447,10 @@ static int nand_check_erased_buf(void *buf, int len, int bitflips_threshold)
 
        for (; len >= sizeof(long);
             len -= sizeof(long), bitmap += sizeof(long)) {
-               weight = hweight_long(*((unsigned long *)bitmap));
+               unsigned long d = *((unsigned long *)bitmap);
+               if (d == ~0UL)
+                       continue;
+               weight = hweight_long(d);
                bitflips += BITS_PER_LONG - weight;
                if (unlikely(bitflips > bitflips_threshold))
                        return -EBADMSG;
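
The hunk above short-circuits fully erased words so hweight_long() only runs on words that actually contain cleared bits. An illustrative user-space rendition of the counting loop (not the kernel function, which additionally handles the unaligned head and tail of the buffer):

#include <limits.h>
#include <stddef.h>
#include <stdio.h>

static int count_bitflips(const unsigned long *buf, size_t nwords, int threshold)
{
	int bitflips = 0;
	size_t i;

	for (i = 0; i < nwords; i++) {
		unsigned long d = buf[i];

		if (d == ~0UL)
			continue;	/* fully erased word, nothing to count */

		bitflips += (int)(sizeof(d) * CHAR_BIT) - __builtin_popcountl(d);
		if (bitflips > threshold)
			return -1;	/* stands in for -EBADMSG */
	}

	return bitflips;
}

int main(void)
{
	unsigned long buf[4] = { ~0UL, ~0UL ^ 0x5UL, ~0UL, ~0UL };

	printf("bitflips: %d\n", count_bitflips(buf, 4, 8));	/* prints 2 */
	return 0;
}
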
@@ -1527,14 +1553,15 @@ EXPORT_SYMBOL(nand_check_erased_ecc_chunk);
  *
  * Not for syndrome calculating ECC controllers, which use a special oob layout.
  */
-static int nand_read_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
-                             uint8_t *buf, int oob_required, int page)
+int nand_read_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
+                      uint8_t *buf, int oob_required, int page)
 {
        chip->read_buf(mtd, buf, mtd->writesize);
        if (oob_required)
                chip->read_buf(mtd, chip->oob_poi, mtd->oobsize);
        return 0;
 }
+EXPORT_SYMBOL(nand_read_page_raw);
 
 /**
  * nand_read_page_raw_syndrome - [INTERN] read raw page data without ecc
@@ -2472,8 +2499,8 @@ static int nand_read_oob(struct mtd_info *mtd, loff_t from,
  *
  * Not for syndrome calculating ECC controllers, which use a special oob layout.
  */
-static int nand_write_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
-                              const uint8_t *buf, int oob_required, int page)
+int nand_write_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
+                       const uint8_t *buf, int oob_required, int page)
 {
        chip->write_buf(mtd, buf, mtd->writesize);
        if (oob_required)
@@ -2481,6 +2508,7 @@ static int nand_write_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
 
        return 0;
 }
+EXPORT_SYMBOL(nand_write_page_raw);
 
 /**
  * nand_write_page_raw_syndrome - [INTERN] raw page write function
@@ -2718,7 +2746,7 @@ static int nand_write_page_syndrome(struct mtd_info *mtd,
  */
 static int nand_write_page(struct mtd_info *mtd, struct nand_chip *chip,
                uint32_t offset, int data_len, const uint8_t *buf,
-               int oob_required, int page, int cached, int raw)
+               int oob_required, int page, int raw)
 {
        int status, subpage;
 
@@ -2744,30 +2772,12 @@ static int nand_write_page(struct mtd_info *mtd, struct nand_chip *chip,
        if (status < 0)
                return status;
 
-       /*
-        * Cached progamming disabled for now. Not sure if it's worth the
-        * trouble. The speed gain is not very impressive. (2.3->2.6Mib/s).
-        */
-       cached = 0;
+       if (nand_standard_page_accessors(&chip->ecc)) {
+               chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
 
-       if (!cached || !NAND_HAS_CACHEPROG(chip)) {
-
-               if (nand_standard_page_accessors(&chip->ecc))
-                       chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
                status = chip->waitfunc(mtd, chip);
-               /*
-                * See if operation failed and additional status checks are
-                * available.
-                */
-               if ((status & NAND_STATUS_FAIL) && (chip->errstat))
-                       status = chip->errstat(mtd, chip, FL_WRITING, status,
-                                              page);
-
                if (status & NAND_STATUS_FAIL)
                        return -EIO;
-       } else {
-               chip->cmdfunc(mtd, NAND_CMD_CACHEDPROG, -1, -1);
-               status = chip->waitfunc(mtd, chip);
        }
 
        return 0;
@@ -2875,7 +2885,6 @@ static int nand_do_write_ops(struct mtd_info *mtd, loff_t to,
 
        while (1) {
                int bytes = mtd->writesize;
-               int cached = writelen > bytes && page != blockmask;
                uint8_t *wbuf = buf;
                int use_bufpoi;
                int part_pagewr = (column || writelen < mtd->writesize);
@@ -2893,7 +2902,6 @@ static int nand_do_write_ops(struct mtd_info *mtd, loff_t to,
                if (use_bufpoi) {
                        pr_debug("%s: using write bounce buffer for buf@%p\n",
                                         __func__, buf);
-                       cached = 0;
                        if (part_pagewr)
                                bytes = min_t(int, bytes - column, writelen);
                        chip->pagebuf = -1;
@@ -2912,7 +2920,7 @@ static int nand_do_write_ops(struct mtd_info *mtd, loff_t to,
                }
 
                ret = nand_write_page(mtd, chip, column, bytes, wbuf,
-                                     oob_required, page, cached,
+                                     oob_required, page,
                                      (ops->mode == MTD_OPS_RAW));
                if (ret)
                        break;
@@ -3228,14 +3236,6 @@ int nand_erase_nand(struct mtd_info *mtd, struct erase_info *instr,
 
                status = chip->erase(mtd, page & chip->pagemask);
 
-               /*
-                * See if operation failed and additional status checks are
-                * available
-                */
-               if ((status & NAND_STATUS_FAIL) && (chip->errstat))
-                       status = chip->errstat(mtd, chip, FL_ERASING,
-                                              status, page);
-
                /* See if block erase succeeded */
                if (status & NAND_STATUS_FAIL) {
                        pr_debug("%s: failed erase, page 0x%08x\n",
@@ -3421,6 +3421,25 @@ static int nand_onfi_get_features(struct mtd_info *mtd, struct nand_chip *chip,
        return 0;
 }
 
+/**
+ * nand_onfi_get_set_features_notsupp - set/get features stub returning
+ *                                     -ENOTSUPP
+ * @mtd: MTD device structure
+ * @chip: nand chip info structure
+ * @addr: feature address.
+ * @subfeature_param: the subfeature parameters, a four bytes array.
+ *
+ * Should be used by NAND controller drivers that do not support the SET/GET
+ * FEATURES operations.
+ */
+int nand_onfi_get_set_features_notsupp(struct mtd_info *mtd,
+                                      struct nand_chip *chip, int addr,
+                                      u8 *subfeature_param)
+{
+       return -ENOTSUPP;
+}
+EXPORT_SYMBOL(nand_onfi_get_set_features_notsupp);
+
 /**
  * nand_suspend - [MTD Interface] Suspend the NAND flash
  * @mtd: MTD device structure
@@ -4180,6 +4199,7 @@ static const char * const nand_ecc_modes[] = {
        [NAND_ECC_HW]           = "hw",
        [NAND_ECC_HW_SYNDROME]  = "hw_syndrome",
        [NAND_ECC_HW_OOB_FIRST] = "hw_oob_first",
+       [NAND_ECC_ON_DIE]       = "on-die",
 };
 
 static int of_get_nand_ecc_mode(struct device_node *np)
@@ -4374,7 +4394,7 @@ int nand_scan_ident(struct mtd_info *mtd, int maxchips,
         * For the other dies, nand_reset() will automatically switch to the
         * best mode for us.
         */
-       ret = nand_setup_data_interface(chip);
+       ret = nand_setup_data_interface(chip, 0);
        if (ret)
                goto err_nand_init;
 
@@ -4512,6 +4532,226 @@ static int nand_set_ecc_soft_ops(struct mtd_info *mtd)
        }
 }
 
+/**
+ * nand_check_ecc_caps - check the sanity of preset ECC settings
+ * @chip: nand chip info structure
+ * @caps: ECC caps info structure
+ * @oobavail: OOB size that the ECC engine can use
+ *
+ * When ECC step size and strength are already set, check if they are supported
+ * by the controller and the calculated ECC bytes fit within the chip's OOB.
+ * On success, the calculated ECC bytes is set.
+ */
+int nand_check_ecc_caps(struct nand_chip *chip,
+                       const struct nand_ecc_caps *caps, int oobavail)
+{
+       struct mtd_info *mtd = nand_to_mtd(chip);
+       const struct nand_ecc_step_info *stepinfo;
+       int preset_step = chip->ecc.size;
+       int preset_strength = chip->ecc.strength;
+       int nsteps, ecc_bytes;
+       int i, j;
+
+       if (WARN_ON(oobavail < 0))
+               return -EINVAL;
+
+       if (!preset_step || !preset_strength)
+               return -ENODATA;
+
+       nsteps = mtd->writesize / preset_step;
+
+       for (i = 0; i < caps->nstepinfos; i++) {
+               stepinfo = &caps->stepinfos[i];
+
+               if (stepinfo->stepsize != preset_step)
+                       continue;
+
+               for (j = 0; j < stepinfo->nstrengths; j++) {
+                       if (stepinfo->strengths[j] != preset_strength)
+                               continue;
+
+                       ecc_bytes = caps->calc_ecc_bytes(preset_step,
+                                                        preset_strength);
+                       if (WARN_ON_ONCE(ecc_bytes < 0))
+                               return ecc_bytes;
+
+                       if (ecc_bytes * nsteps > oobavail) {
+                               pr_err("ECC (step, strength) = (%d, %d) does not fit in OOB",
+                                      preset_step, preset_strength);
+                               return -ENOSPC;
+                       }
+
+                       chip->ecc.bytes = ecc_bytes;
+
+                       return 0;
+               }
+       }
+
+       pr_err("ECC (step, strength) = (%d, %d) not supported on this controller",
+              preset_step, preset_strength);
+
+       return -ENOTSUPP;
+}
+EXPORT_SYMBOL_GPL(nand_check_ecc_caps);
+
+/**
+ * nand_match_ecc_req - meet the chip's requirement with least ECC bytes
+ * @chip: nand chip info structure
+ * @caps: ECC engine caps info structure
+ * @oobavail: OOB size that the ECC engine can use
+ *
+ * If a chip's ECC requirement is provided, try to meet it with the least
+ * number of ECC bytes (i.e. with the largest number of OOB-free bytes).
+ * On success, the chosen ECC settings are set.
+ */
+int nand_match_ecc_req(struct nand_chip *chip,
+                      const struct nand_ecc_caps *caps, int oobavail)
+{
+       struct mtd_info *mtd = nand_to_mtd(chip);
+       const struct nand_ecc_step_info *stepinfo;
+       int req_step = chip->ecc_step_ds;
+       int req_strength = chip->ecc_strength_ds;
+       int req_corr, step_size, strength, nsteps, ecc_bytes, ecc_bytes_total;
+       int best_step, best_strength, best_ecc_bytes;
+       int best_ecc_bytes_total = INT_MAX;
+       int i, j;
+
+       if (WARN_ON(oobavail < 0))
+               return -EINVAL;
+
+       /* No information provided by the NAND chip */
+       if (!req_step || !req_strength)
+               return -ENOTSUPP;
+
+       /* number of correctable bits the chip requires in a page */
+       req_corr = mtd->writesize / req_step * req_strength;
+
+       for (i = 0; i < caps->nstepinfos; i++) {
+               stepinfo = &caps->stepinfos[i];
+               step_size = stepinfo->stepsize;
+
+               for (j = 0; j < stepinfo->nstrengths; j++) {
+                       strength = stepinfo->strengths[j];
+
+                       /*
+                        * If both step size and strength are smaller than the
+                        * chip's requirement, it is not easy to compare the
+                        * resulting reliability.
+                        */
+                       if (step_size < req_step && strength < req_strength)
+                               continue;
+
+                       if (mtd->writesize % step_size)
+                               continue;
+
+                       nsteps = mtd->writesize / step_size;
+
+                       ecc_bytes = caps->calc_ecc_bytes(step_size, strength);
+                       if (WARN_ON_ONCE(ecc_bytes < 0))
+                               continue;
+                       ecc_bytes_total = ecc_bytes * nsteps;
+
+                       if (ecc_bytes_total > oobavail ||
+                           strength * nsteps < req_corr)
+                               continue;
+
+                       /*
+                        * We assume the best is to meet the chip's requirement
+                        * with the least number of ECC bytes.
+                        */
+                       if (ecc_bytes_total < best_ecc_bytes_total) {
+                               best_ecc_bytes_total = ecc_bytes_total;
+                               best_step = step_size;
+                               best_strength = strength;
+                               best_ecc_bytes = ecc_bytes;
+                       }
+               }
+       }
+
+       if (best_ecc_bytes_total == INT_MAX)
+               return -ENOTSUPP;
+
+       chip->ecc.size = best_step;
+       chip->ecc.strength = best_strength;
+       chip->ecc.bytes = best_ecc_bytes;
+
+       return 0;
+}
+EXPORT_SYMBOL_GPL(nand_match_ecc_req);
+
+/**
+ * nand_maximize_ecc - choose the max ECC strength available
+ * @chip: nand chip info structure
+ * @caps: ECC engine caps info structure
+ * @oobavail: OOB size that the ECC engine can use
+ *
+ * Choose the max ECC strength that is supported on the controller, and can fit
+ * within the chip's OOB.  On success, the chosen ECC settings are set.
+ */
+int nand_maximize_ecc(struct nand_chip *chip,
+                     const struct nand_ecc_caps *caps, int oobavail)
+{
+       struct mtd_info *mtd = nand_to_mtd(chip);
+       const struct nand_ecc_step_info *stepinfo;
+       int step_size, strength, nsteps, ecc_bytes, corr;
+       int best_corr = 0;
+       int best_step = 0;
+       int best_strength, best_ecc_bytes;
+       int i, j;
+
+       if (WARN_ON(oobavail < 0))
+               return -EINVAL;
+
+       for (i = 0; i < caps->nstepinfos; i++) {
+               stepinfo = &caps->stepinfos[i];
+               step_size = stepinfo->stepsize;
+
+               /* If chip->ecc.size is already set, respect it */
+               if (chip->ecc.size && step_size != chip->ecc.size)
+                       continue;
+
+               for (j = 0; j < stepinfo->nstrengths; j++) {
+                       strength = stepinfo->strengths[j];
+
+                       if (mtd->writesize % step_size)
+                               continue;
+
+                       nsteps = mtd->writesize / step_size;
+
+                       ecc_bytes = caps->calc_ecc_bytes(step_size, strength);
+                       if (WARN_ON_ONCE(ecc_bytes < 0))
+                               continue;
+
+                       if (ecc_bytes * nsteps > oobavail)
+                               continue;
+
+                       corr = strength * nsteps;
+
+                       /*
+                        * If the number of correctable bits is the same,
+                        * bigger step_size has more reliability.
+                        */
+                       if (corr > best_corr ||
+                           (corr == best_corr && step_size > best_step)) {
+                               best_corr = corr;
+                               best_step = step_size;
+                               best_strength = strength;
+                               best_ecc_bytes = ecc_bytes;
+                       }
+               }
+       }
+
+       if (!best_corr)
+               return -ENOTSUPP;
+
+       chip->ecc.size = best_step;
+       chip->ecc.strength = best_strength;
+       chip->ecc.bytes = best_ecc_bytes;
+
+       return 0;
+}
+EXPORT_SYMBOL_GPL(nand_maximize_ecc);
+
 /*
  * Check if the chip configuration meet the datasheet requirements.
 
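
The three helpers are meant to be tried in order by a controller driver: honour an explicit step/strength preset with nand_check_ecc_caps(), otherwise meet the chip's stated requirement with nand_match_ecc_req(), and fall back to nand_maximize_ecc(). A minimal sketch under assumed names (the foo_* identifiers and the 14-bits-per-correctable-bit parity size are hypothetical, not taken from this series):

#include <linux/kernel.h>
#include <linux/mtd/nand.h>

/* Hypothetical: 14 bits of parity per correctable bit, packed into bytes */
static int foo_calc_ecc_bytes(int step_size, int strength)
{
	return DIV_ROUND_UP(strength * 14, 8);
}

static const int foo_strengths[] = { 4, 8, 16 };

static const struct nand_ecc_step_info foo_stepinfo = {
	.stepsize = 512,
	.strengths = foo_strengths,
	.nstrengths = ARRAY_SIZE(foo_strengths),
};

static const struct nand_ecc_caps foo_ecc_caps = {
	.stepinfos = &foo_stepinfo,
	.nstepinfos = 1,
	.calc_ecc_bytes = foo_calc_ecc_bytes,
};

static int foo_choose_ecc(struct nand_chip *chip, int oobavail)
{
	/* Respect an explicit step/strength preset if one was provided */
	if (chip->ecc.size && chip->ecc.strength)
		return nand_check_ecc_caps(chip, &foo_ecc_caps, oobavail);

	/* Otherwise meet the chip's requirement with the fewest ECC bytes */
	if (!nand_match_ecc_req(chip, &foo_ecc_caps, oobavail))
		return 0;

	/* Last resort: strongest setting that still fits in the OOB area */
	return nand_maximize_ecc(chip, &foo_ecc_caps, oobavail);
}
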
@@ -4733,6 +4973,18 @@ int nand_scan_tail(struct mtd_info *mtd)
                }
                break;
 
+       case NAND_ECC_ON_DIE:
+               if (!ecc->read_page || !ecc->write_page) {
+                       WARN(1, "No ECC functions supplied; on-die ECC not possible\n");
+                       ret = -EINVAL;
+                       goto err_free;
+               }
+               if (!ecc->read_oob)
+                       ecc->read_oob = nand_read_oob_std;
+               if (!ecc->write_oob)
+                       ecc->write_oob = nand_write_oob_std;
+               break;
+
        case NAND_ECC_NONE:
                pr_warn("NAND_ECC_NONE selected by board driver. This is not recommended!\n");
                ecc->read_page = nand_read_page_raw;
@@ -4773,6 +5025,11 @@ int nand_scan_tail(struct mtd_info *mtd)
                goto err_free;
        }
        ecc->total = ecc->steps * ecc->bytes;
+       if (ecc->total > mtd->oobsize) {
+               WARN(1, "Total number of ECC bytes exceeded oobsize\n");
+               ret = -EINVAL;
+               goto err_free;
+       }
 
        /*
         * The number of bytes available for a client to place data into
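
For a sense of scale on the new oobsize check: with a 2048-byte page, 512-byte steps and the 8 ECC bytes per step used by the Micron on-die scheme below, ecc->total is 4 * 8 = 32 bytes and fits the usual 64-byte OOB; only a layout demanding more than 16 bytes per step would hit the new -EINVAL path. A trivial user-space check (illustrative numbers only):

#include <stdio.h>

int main(void)
{
	int writesize = 2048, oobsize = 64;	/* typical 2KB SLC page */
	int ecc_size = 512, ecc_bytes = 8;	/* e.g. Micron on-die 4/512 */
	int steps = writesize / ecc_size;
	int total = steps * ecc_bytes;

	printf("ecc->total = %d -> %s the %d-byte OOB\n", total,
	       total > oobsize ? "exceeds" : "fits in", oobsize);
	return 0;
}
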
index 8770110692519636caca8d501388a74d70642739..c30ab60f8e1bb1013a7d6fd7f88fd75b958e258a 100644 (file)
 
 #include <linux/mtd/nand.h>
 
+/*
+ * Special Micron status bit that indicates when the block has been
+ * corrected by on-die ECC and should be rewritten
+ */
+#define NAND_STATUS_WRITE_RECOMMENDED  BIT(3)
+
 struct nand_onfi_vendor_micron {
        u8 two_plane_read;
        u8 read_cache;
@@ -66,9 +72,197 @@ static int micron_nand_onfi_init(struct nand_chip *chip)
        return 0;
 }
 
+static int micron_nand_on_die_ooblayout_ecc(struct mtd_info *mtd, int section,
+                                           struct mtd_oob_region *oobregion)
+{
+       if (section >= 4)
+               return -ERANGE;
+
+       oobregion->offset = (section * 16) + 8;
+       oobregion->length = 8;
+
+       return 0;
+}
+
+static int micron_nand_on_die_ooblayout_free(struct mtd_info *mtd, int section,
+                                            struct mtd_oob_region *oobregion)
+{
+       if (section >= 4)
+               return -ERANGE;
+
+       oobregion->offset = (section * 16) + 2;
+       oobregion->length = 6;
+
+       return 0;
+}
+
+static const struct mtd_ooblayout_ops micron_nand_on_die_ooblayout_ops = {
+       .ecc = micron_nand_on_die_ooblayout_ecc,
+       .free = micron_nand_on_die_ooblayout_free,
+};
+
+static int micron_nand_on_die_ecc_setup(struct nand_chip *chip, bool enable)
+{
+       u8 feature[ONFI_SUBFEATURE_PARAM_LEN] = { 0, };
+
+       if (enable)
+               feature[0] |= ONFI_FEATURE_ON_DIE_ECC_EN;
+
+       return chip->onfi_set_features(nand_to_mtd(chip), chip,
+                                      ONFI_FEATURE_ON_DIE_ECC, feature);
+}
+
+static int
+micron_nand_read_page_on_die_ecc(struct mtd_info *mtd, struct nand_chip *chip,
+                                uint8_t *buf, int oob_required,
+                                int page)
+{
+       int status;
+       int max_bitflips = 0;
+
+       micron_nand_on_die_ecc_setup(chip, true);
+
+       chip->cmdfunc(mtd, NAND_CMD_READ0, 0x00, page);
+       chip->cmdfunc(mtd, NAND_CMD_STATUS, -1, -1);
+       status = chip->read_byte(mtd);
+       if (status & NAND_STATUS_FAIL)
+               mtd->ecc_stats.failed++;
+       /*
+        * The internal ECC doesn't tell us the number of bitflips
+        * that have been corrected, but tells us if it recommends to
+        * rewrite the block. If it's the case, then we pretend we had
+        * a number of bitflips equal to the ECC strength, which will
+        * hint the NAND core to rewrite the block.
+        */
+       else if (status & NAND_STATUS_WRITE_RECOMMENDED)
+               max_bitflips = chip->ecc.strength;
+
+       chip->cmdfunc(mtd, NAND_CMD_READ0, -1, -1);
+
+       nand_read_page_raw(mtd, chip, buf, oob_required, page);
+
+       micron_nand_on_die_ecc_setup(chip, false);
+
+       return max_bitflips;
+}
+
+static int
+micron_nand_write_page_on_die_ecc(struct mtd_info *mtd, struct nand_chip *chip,
+                                 const uint8_t *buf, int oob_required,
+                                 int page)
+{
+       int status;
+
+       micron_nand_on_die_ecc_setup(chip, true);
+
+       chip->cmdfunc(mtd, NAND_CMD_SEQIN, 0x00, page);
+       nand_write_page_raw(mtd, chip, buf, oob_required, page);
+       chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
+       status = chip->waitfunc(mtd, chip);
+
+       micron_nand_on_die_ecc_setup(chip, false);
+
+       return status & NAND_STATUS_FAIL ? -EIO : 0;
+}
+
+static int
+micron_nand_read_page_raw_on_die_ecc(struct mtd_info *mtd,
+                                    struct nand_chip *chip,
+                                    uint8_t *buf, int oob_required,
+                                    int page)
+{
+       chip->cmdfunc(mtd, NAND_CMD_READ0, 0x00, page);
+       nand_read_page_raw(mtd, chip, buf, oob_required, page);
+
+       return 0;
+}
+
+static int
+micron_nand_write_page_raw_on_die_ecc(struct mtd_info *mtd,
+                                     struct nand_chip *chip,
+                                     const uint8_t *buf, int oob_required,
+                                     int page)
+{
+       int status;
+
+       chip->cmdfunc(mtd, NAND_CMD_SEQIN, 0x00, page);
+       nand_write_page_raw(mtd, chip, buf, oob_required, page);
+       chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
+       status = chip->waitfunc(mtd, chip);
+
+       return status & NAND_STATUS_FAIL ? -EIO : 0;
+}
+
+enum {
+       /* The NAND flash doesn't support on-die ECC */
+       MICRON_ON_DIE_UNSUPPORTED,
+
+       /*
+        * The NAND flash supports on-die ECC and it can be
+        * enabled/disabled by a set features command.
+        */
+       MICRON_ON_DIE_SUPPORTED,
+
+       /*
+        * The NAND flash supports on-die ECC, and it cannot be
+        * disabled.
+        */
+       MICRON_ON_DIE_MANDATORY,
+};
+
+/*
+ * Try to detect if the NAND supports on-die ECC. To do this, we enable
+ * the feature, and read back if it has been enabled as expected. We
+ * also check if it can be disabled, because some Micron NANDs do not
+ * allow disabling the on-die ECC and we don't support such NANDs for
+ * now.
+ *
+ * This function also has the side effect of disabling on-die ECC if
+ * it had been left enabled by the firmware/bootloader.
+ */
+static int micron_supports_on_die_ecc(struct nand_chip *chip)
+{
+       u8 feature[ONFI_SUBFEATURE_PARAM_LEN] = { 0, };
+       int ret;
+
+       if (chip->onfi_version == 0)
+               return MICRON_ON_DIE_UNSUPPORTED;
+
+       if (chip->bits_per_cell != 1)
+               return MICRON_ON_DIE_UNSUPPORTED;
+
+       ret = micron_nand_on_die_ecc_setup(chip, true);
+       if (ret)
+               return MICRON_ON_DIE_UNSUPPORTED;
+
+       chip->onfi_get_features(nand_to_mtd(chip), chip,
+                               ONFI_FEATURE_ON_DIE_ECC, feature);
+       if ((feature[0] & ONFI_FEATURE_ON_DIE_ECC_EN) == 0)
+               return MICRON_ON_DIE_UNSUPPORTED;
+
+       ret = micron_nand_on_die_ecc_setup(chip, false);
+       if (ret)
+               return MICRON_ON_DIE_UNSUPPORTED;
+
+       chip->onfi_get_features(nand_to_mtd(chip), chip,
+                               ONFI_FEATURE_ON_DIE_ECC, feature);
+       if (feature[0] & ONFI_FEATURE_ON_DIE_ECC_EN)
+               return MICRON_ON_DIE_MANDATORY;
+
+       /*
+        * Some Micron NANDs have an on-die ECC of 4/512, some other
+        * 8/512. We only support the former.
+        */
+       if (chip->onfi_params.ecc_bits != 4)
+               return MICRON_ON_DIE_UNSUPPORTED;
+
+       return MICRON_ON_DIE_SUPPORTED;
+}
+
 static int micron_nand_init(struct nand_chip *chip)
 {
        struct mtd_info *mtd = nand_to_mtd(chip);
+       int ondie;
        int ret;
 
        ret = micron_nand_onfi_init(chip);
@@ -78,6 +272,34 @@ static int micron_nand_init(struct nand_chip *chip)
        if (mtd->writesize == 2048)
                chip->bbt_options |= NAND_BBT_SCAN2NDPAGE;
 
+       ondie = micron_supports_on_die_ecc(chip);
+
+       if (ondie == MICRON_ON_DIE_MANDATORY) {
+               pr_err("On-die ECC forcefully enabled, not supported\n");
+               return -EINVAL;
+       }
+
+       if (chip->ecc.mode == NAND_ECC_ON_DIE) {
+               if (ondie == MICRON_ON_DIE_UNSUPPORTED) {
+                       pr_err("On-die ECC selected but not supported\n");
+                       return -EINVAL;
+               }
+
+               chip->ecc.options = NAND_ECC_CUSTOM_PAGE_ACCESS;
+               chip->ecc.bytes = 8;
+               chip->ecc.size = 512;
+               chip->ecc.strength = 4;
+               chip->ecc.algo = NAND_ECC_BCH;
+               chip->ecc.read_page = micron_nand_read_page_on_die_ecc;
+               chip->ecc.write_page = micron_nand_write_page_on_die_ecc;
+               chip->ecc.read_page_raw =
+                       micron_nand_read_page_raw_on_die_ecc;
+               chip->ecc.write_page_raw =
+                       micron_nand_write_page_raw_on_die_ecc;
+
+               mtd_set_ooblayout(mtd, &micron_nand_on_die_ooblayout_ops);
+       }
+
        return 0;
 }
 
index f8e463a97b9ee479028dd9c282ff74436d885389..209170ed2b764511d6224ab8ddb8996059c26663 100644 (file)
@@ -166,7 +166,11 @@ static int __init orion_nand_probe(struct platform_device *pdev)
                }
        }
 
-       clk_prepare_enable(info->clk);
+       ret = clk_prepare_enable(info->clk);
+       if (ret) {
+               dev_err(&pdev->dev, "failed to prepare clock!\n");
+               return ret;
+       }
 
        ret = nand_scan(mtd, 1);
        if (ret)
index 649ba8200832d5ba3d237327fd3861d9844c2149..74dae4bbdac8f73efa4b8c7dc88d724fcd172c6c 100644 (file)
@@ -1812,6 +1812,8 @@ static int alloc_nand_resource(struct platform_device *pdev)
                chip->write_buf         = pxa3xx_nand_write_buf;
                chip->options           |= NAND_NO_SUBPAGE_WRITE;
                chip->cmdfunc           = nand_cmdfunc;
+               chip->onfi_set_features = nand_onfi_get_set_features_notsupp;
+               chip->onfi_get_features = nand_onfi_get_set_features_notsupp;
        }
 
        nand_hw_control_init(chip->controller);
index 57d483ac5765a8aa889da61042aa5c75b64404a0..88af7145a51a2cb62280f94d9f380f80c982445e 100644 (file)
@@ -2008,6 +2008,8 @@ static int qcom_nand_host_init(struct qcom_nand_controller *nandc,
        chip->read_byte         = qcom_nandc_read_byte;
        chip->read_buf          = qcom_nandc_read_buf;
        chip->write_buf         = qcom_nandc_write_buf;
+       chip->onfi_set_features = nand_onfi_get_set_features_notsupp;
+       chip->onfi_get_features = nand_onfi_get_set_features_notsupp;
 
        /*
         * the bad block marker is readable only when we read the last codeword
index f0b030d44f71ff2add4383196fba0c9e02350277..9e0c849607b9ca8345828f0128ce9d796eee1f7f 100644 (file)
@@ -812,9 +812,8 @@ static int s3c2410_nand_add_partition(struct s3c2410_nand_info *info,
        return -ENODEV;
 }
 
-static int s3c2410_nand_setup_data_interface(struct mtd_info *mtd,
-                                       const struct nand_data_interface *conf,
-                                       bool check_only)
+static int s3c2410_nand_setup_data_interface(struct mtd_info *mtd, int csline,
+                                       const struct nand_data_interface *conf)
 {
        struct s3c2410_nand_info *info = s3c2410_nand_mtd_toinfo(mtd);
        struct s3c2410_platform_nand *pdata = info->platform;
index 442ce619b3b6d5cb8a65e2f8740e30e01e750aed..891ac7b993050d7d8f5f92d51fe2c37bdf96d1c0 100644 (file)
@@ -1183,6 +1183,8 @@ static int flctl_probe(struct platform_device *pdev)
        nand->read_buf = flctl_read_buf;
        nand->select_chip = flctl_select_chip;
        nand->cmdfunc = flctl_cmdfunc;
+       nand->onfi_set_features = nand_onfi_get_set_features_notsupp;
+       nand->onfi_get_features = nand_onfi_get_set_features_notsupp;
 
        if (pdata->flcmncr_val & SEL_16BIT)
                nand->options |= NAND_BUSWIDTH_16;
index 118a26fff36856dd3f47fa523494ab05909924e4..d0b6f8f9f297ab89f355a727c333de1c5a2f7fc8 100644 (file)
@@ -1301,7 +1301,6 @@ static int sunxi_nfc_hw_ecc_read_subpage(struct mtd_info *mtd,
 
        sunxi_nfc_hw_ecc_enable(mtd);
 
-       chip->cmdfunc(mtd, NAND_CMD_READ0, 0, page);
        for (i = data_offs / ecc->size;
             i < DIV_ROUND_UP(data_offs + readlen, ecc->size); i++) {
                int data_off = i * ecc->size;
@@ -1592,9 +1591,8 @@ static int _sunxi_nand_lookup_timing(const s32 *lut, int lut_size, u32 duration,
 #define sunxi_nand_lookup_timing(l, p, c) \
                        _sunxi_nand_lookup_timing(l, ARRAY_SIZE(l), p, c)
 
-static int sunxi_nfc_setup_data_interface(struct mtd_info *mtd,
-                                       const struct nand_data_interface *conf,
-                                       bool check_only)
+static int sunxi_nfc_setup_data_interface(struct mtd_info *mtd, int csline,
+                                       const struct nand_data_interface *conf)
 {
        struct nand_chip *nand = mtd_to_nand(mtd);
        struct sunxi_nand_chip *chip = to_sunxi_nand(nand);
@@ -1707,7 +1705,7 @@ static int sunxi_nfc_setup_data_interface(struct mtd_info *mtd,
                return tRHW;
        }
 
-       if (check_only)
+       if (csline == NAND_DATA_IFACE_CHECK_ONLY)
                return 0;
 
        /*
@@ -1922,7 +1920,6 @@ static int sunxi_nand_hw_ecc_ctrl_init(struct mtd_info *mtd,
        ecc->write_subpage = sunxi_nfc_hw_ecc_write_subpage;
        ecc->read_oob_raw = nand_read_oob_std;
        ecc->write_oob_raw = nand_write_oob_std;
-       ecc->read_subpage = sunxi_nfc_hw_ecc_read_subpage;
 
        return 0;
 }
index 49b286c6c10fc85e5ee7e75f4dd10d231c86c73f..9d40b793b1c490ed6c91843543c6fe48589eb48a 100644 (file)
@@ -303,7 +303,7 @@ static int tango_write_page(struct mtd_info *mtd, struct nand_chip *chip,
                            const u8 *buf, int oob_required, int page)
 {
        struct tango_nfc *nfc = to_tango_nfc(chip->controller);
-       int err, len = mtd->writesize;
+       int err, status, len = mtd->writesize;
 
        /* Calling tango_write_oob() would send PAGEPROG twice */
        if (oob_required)
@@ -314,6 +314,10 @@ static int tango_write_page(struct mtd_info *mtd, struct nand_chip *chip,
        if (err)
                return err;
 
+       status = chip->waitfunc(mtd, chip);
+       if (status & NAND_STATUS_FAIL)
+               return -EIO;
+
        return 0;
 }
 
@@ -340,7 +344,7 @@ static void aux_write(struct nand_chip *chip, const u8 **buf, int len, int *pos)
 
        if (!*buf) {
                /* skip over "len" bytes */
-               chip->cmdfunc(mtd, NAND_CMD_SEQIN, *pos, -1);
+               chip->cmdfunc(mtd, NAND_CMD_RNDIN, *pos, -1);
        } else {
                tango_write_buf(mtd, *buf, len);
                *buf += len;
@@ -431,9 +435,16 @@ static int tango_read_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
 static int tango_write_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
                                const u8 *buf, int oob_required, int page)
 {
+       int status;
+
        chip->cmdfunc(mtd, NAND_CMD_SEQIN, 0, page);
        raw_write(chip, buf, chip->oob_poi);
        chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
+
+       status = chip->waitfunc(mtd, chip);
+       if (status & NAND_STATUS_FAIL)
+               return -EIO;
+
        return 0;
 }
 
@@ -484,9 +495,8 @@ static u32 to_ticks(int kHz, int ps)
        return DIV_ROUND_UP_ULL((u64)kHz * ps, NSEC_PER_SEC);
 }
 
-static int tango_set_timings(struct mtd_info *mtd,
-                            const struct nand_data_interface *conf,
-                            bool check_only)
+static int tango_set_timings(struct mtd_info *mtd, int csline,
+                            const struct nand_data_interface *conf)
 {
        const struct nand_sdr_timings *sdr = nand_get_sdr_timings(conf);
        struct nand_chip *chip = mtd_to_nand(mtd);
@@ -498,7 +508,7 @@ static int tango_set_timings(struct mtd_info *mtd,
        if (IS_ERR(sdr))
                return PTR_ERR(sdr);
 
-       if (check_only)
+       if (csline == NAND_DATA_IFACE_CHECK_ONLY)
                return 0;
 
        Trdy = to_ticks(kHz, sdr->tCEA_max - sdr->tREA_max);
index 3ea4bb19e12d9de9a52ba8c1479ea925d250e9eb..744ab10e896218124c20486a201f8e6c88c9ac08 100644 (file)
@@ -703,6 +703,8 @@ static int vf610_nfc_probe(struct platform_device *pdev)
        chip->read_buf = vf610_nfc_read_buf;
        chip->write_buf = vf610_nfc_write_buf;
        chip->select_chip = vf610_nfc_select_chip;
+       chip->onfi_set_features = nand_onfi_get_set_features_notsupp;
+       chip->onfi_get_features = nand_onfi_get_set_features_notsupp;
 
        chip->options |= NAND_NO_SUBPAGE_WRITE;
 
diff --git a/drivers/mtd/parsers/Kconfig b/drivers/mtd/parsers/Kconfig
new file mode 100644 (file)
index 0000000..d206b3c
--- /dev/null
@@ -0,0 +1,8 @@
+config MTD_PARSER_TRX
+       tristate "Parser for TRX format partitions"
+       depends on MTD && (BCM47XX || ARCH_BCM_5301X || COMPILE_TEST)
+       help
+         TRX is a firmware format used by Broadcom on their devices. It
+         may contain up to 3 or 4 partitions (depending on the version).
+         This driver will parse the TRX header and report at least two partitions:
+         kernel and rootfs.
diff --git a/drivers/mtd/parsers/Makefile b/drivers/mtd/parsers/Makefile
new file mode 100644 (file)
index 0000000..4d9024e
--- /dev/null
@@ -0,0 +1 @@
+obj-$(CONFIG_MTD_PARSER_TRX)           += parser_trx.o
diff --git a/drivers/mtd/parsers/parser_trx.c b/drivers/mtd/parsers/parser_trx.c
new file mode 100644 (file)
index 0000000..df360a7
--- /dev/null
@@ -0,0 +1,126 @@
+/*
+ * Parser for TRX format partitions
+ *
+ * Copyright (C) 2012 - 2017 Rafał Miłecki <rafal@milecki.pl>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/partitions.h>
+
+#define TRX_PARSER_MAX_PARTS           4
+
+/* Magics */
+#define TRX_MAGIC                      0x30524448
+#define UBI_EC_MAGIC                   0x23494255      /* UBI# */
+
+struct trx_header {
+       uint32_t magic;
+       uint32_t length;
+       uint32_t crc32;
+       uint16_t flags;
+       uint16_t version;
+       uint32_t offset[3];
+} __packed;
+
+static const char *parser_trx_data_part_name(struct mtd_info *master,
+                                            size_t offset)
+{
+       uint32_t buf;
+       size_t bytes_read;
+       int err;
+
+       err  = mtd_read(master, offset, sizeof(buf), &bytes_read,
+                       (uint8_t *)&buf);
+       if (err && !mtd_is_bitflip(err)) {
+               pr_err("mtd_read error while parsing (offset: 0x%zX): %d\n",
+                       offset, err);
+               goto out_default;
+       }
+
+       if (buf == UBI_EC_MAGIC)
+               return "ubi";
+
+out_default:
+       return "rootfs";
+}
+
+static int parser_trx_parse(struct mtd_info *mtd,
+                           const struct mtd_partition **pparts,
+                           struct mtd_part_parser_data *data)
+{
+       struct mtd_partition *parts;
+       struct mtd_partition *part;
+       struct trx_header trx;
+       size_t bytes_read;
+       uint8_t curr_part = 0, i = 0;
+       int err;
+
+       parts = kzalloc(sizeof(struct mtd_partition) * TRX_PARSER_MAX_PARTS,
+                       GFP_KERNEL);
+       if (!parts)
+               return -ENOMEM;
+
+       err = mtd_read(mtd, 0, sizeof(trx), &bytes_read, (uint8_t *)&trx);
+       if (err) {
+               pr_err("MTD reading error: %d\n", err);
+               kfree(parts);
+               return err;
+       }
+
+       if (trx.magic != TRX_MAGIC) {
+               kfree(parts);
+               return -ENOENT;
+       }
+
+       /* We have LZMA loader if there is address in offset[2] */
+       if (trx.offset[2]) {
+               part = &parts[curr_part++];
+               part->name = "loader";
+               part->offset = trx.offset[i];
+               i++;
+       }
+
+       if (trx.offset[i]) {
+               part = &parts[curr_part++];
+               part->name = "linux";
+               part->offset = trx.offset[i];
+               i++;
+       }
+
+       if (trx.offset[i]) {
+               part = &parts[curr_part++];
+               part->name = parser_trx_data_part_name(mtd, trx.offset[i]);
+               part->offset = trx.offset[i];
+               i++;
+       }
+
+       /*
+        * Assume that every partition ends at the beginning of the one that
+        * follows it.
+        */
+       for (i = 0; i < curr_part; i++) {
+               u64 next_part_offset = (i < curr_part - 1) ?
+                                      parts[i + 1].offset : mtd->size;
+
+               parts[i].size = next_part_offset - parts[i].offset;
+       }
+
+       *pparts = parts;
+       return i;
+};
+
+static struct mtd_part_parser mtd_parser_trx = {
+       .parse_fn = parser_trx_parse,
+       .name = "trx",
+};
+module_mtd_part_parser(mtd_parser_trx);
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("Parser for TRX format partitions");
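
The on-flash layout the parser expects can also be inspected from user space; a small sketch (the little-endian host and the plain image file are assumptions of this example, not of the driver):

#include <stdint.h>
#include <stdio.h>

#define TRX_MAGIC	0x30524448u	/* "HDR0" on a little-endian host */

struct trx_header {
	uint32_t magic;
	uint32_t length;
	uint32_t crc32;
	uint16_t flags;
	uint16_t version;
	uint32_t offset[3];
} __attribute__((packed));

int main(int argc, char **argv)
{
	struct trx_header trx;
	FILE *f;

	if (argc != 2 || !(f = fopen(argv[1], "rb")))
		return 1;

	if (fread(&trx, sizeof(trx), 1, f) != 1 || trx.magic != TRX_MAGIC) {
		fclose(f);
		fprintf(stderr, "not a TRX image\n");
		return 1;
	}

	printf("TRX v%u, length 0x%x, crc32 0x%08x\n",
	       (unsigned)trx.version, (unsigned)trx.length, (unsigned)trx.crc32);
	printf("offsets: 0x%x 0x%x 0x%x%s\n",
	       (unsigned)trx.offset[0], (unsigned)trx.offset[1],
	       (unsigned)trx.offset[2],
	       trx.offset[2] ? " (LZMA loader present)" : "");

	fclose(f);
	return 0;
}
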
index bfdfb1e72b38a323299ee29c8b4ffdb7e4fd21f2..293c8a4d1e49660717736d2a951d7f851be1a651 100644 (file)
@@ -108,7 +108,7 @@ config SPI_INTEL_SPI_PLATFORM
 
 config SPI_STM32_QUADSPI
        tristate "STM32 Quad SPI controller"
-       depends on ARCH_STM32
+       depends on ARCH_STM32 || COMPILE_TEST
        help
          This enables support for the STM32 Quad SPI controller.
          We only connect the NOR to this controller.
index 56051d30f0008172a099d08f8f56d667a66bc0a7..0106357421bd3cd27186937c94a044ce403fbf56 100644 (file)
@@ -19,6 +19,7 @@
 #include <linux/mtd/spi-nor.h>
 #include <linux/of.h>
 #include <linux/of_platform.h>
+#include <linux/sizes.h>
 #include <linux/sysfs.h>
 
 #define DEVICE_NAME    "aspeed-smc"
@@ -97,6 +98,7 @@ struct aspeed_smc_chip {
        struct aspeed_smc_controller *controller;
        void __iomem *ctl;                      /* control register */
        void __iomem *ahb_base;                 /* base of chip window */
+       u32 ahb_window_size;                    /* chip mapping window size */
        u32 ctl_val[smc_max];                   /* control settings */
        enum aspeed_smc_flash_type type;        /* what type of flash */
        struct spi_nor nor;
@@ -109,6 +111,7 @@ struct aspeed_smc_controller {
        const struct aspeed_smc_info *info;     /* type info of controller */
        void __iomem *regs;                     /* controller registers */
        void __iomem *ahb_base;                 /* per-chip windows resource */
+       u32 ahb_window_size;                    /* full mapping window size */
 
        struct aspeed_smc_chip *chips[0];       /* pointers to attached chips */
 };
@@ -180,8 +183,7 @@ struct aspeed_smc_controller {
 
 #define CONTROL_KEEP_MASK                                              \
        (CONTROL_AAF_MODE | CONTROL_CE_INACTIVE_MASK | CONTROL_CLK_DIV4 | \
-        CONTROL_IO_DUMMY_MASK | CONTROL_CLOCK_FREQ_SEL_MASK |          \
-        CONTROL_LSB_FIRST | CONTROL_CLOCK_MODE_3)
+        CONTROL_CLOCK_FREQ_SEL_MASK | CONTROL_LSB_FIRST | CONTROL_CLOCK_MODE_3)
 
 /*
  * The Segment Register uses a 8MB unit to encode the start address
@@ -194,6 +196,10 @@ struct aspeed_smc_controller {
 #define SEGMENT_ADDR_REG0              0x30
 #define SEGMENT_ADDR_START(_r)         ((((_r) >> 16) & 0xFF) << 23)
 #define SEGMENT_ADDR_END(_r)           ((((_r) >> 24) & 0xFF) << 23)
+#define SEGMENT_ADDR_VALUE(start, end)                                 \
+       (((((start) >> 23) & 0xFF) << 16) | ((((end) >> 23) & 0xFF) << 24))
+#define SEGMENT_ADDR_REG(controller, cs)       \
+       ((controller)->regs + SEGMENT_ADDR_REG0 + (cs) * 4)
 
 /*
  * In user mode all data bytes read or written to the chip decode address
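
A segment register stores the start and end of a chip's mapping window as 8MB-granular bytes (bits 16-23 and 24-31 respectively), which is what SEGMENT_ADDR_VALUE() encodes and SEGMENT_ADDR_START()/SEGMENT_ADDR_END() decode. A quick user-space round trip of the macros (the 0x20000000 base and the 32MB window are assumed example values, not taken from the driver):

#include <stdint.h>
#include <stdio.h>

#define SEGMENT_ADDR_START(_r)		((((_r) >> 16) & 0xFF) << 23)
#define SEGMENT_ADDR_END(_r)		((((_r) >> 24) & 0xFF) << 23)
#define SEGMENT_ADDR_VALUE(start, end) \
	(((((start) >> 23) & 0xFF) << 16) | ((((end) >> 23) & 0xFF) << 24))

int main(void)
{
	uint32_t start = 0x20000000, end = 0x22000000;	/* assumed 32MB window */
	uint32_t reg = SEGMENT_ADDR_VALUE(start, end);

	printf("reg = 0x%08x -> start 0x%08x, end 0x%08x\n",
	       (unsigned)reg, (unsigned)SEGMENT_ADDR_START(reg),
	       (unsigned)SEGMENT_ADDR_END(reg));
	return 0;
}
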
@@ -439,8 +445,7 @@ static void __iomem *aspeed_smc_chip_base(struct aspeed_smc_chip *chip,
        u32 reg;
 
        if (controller->info->nce > 1) {
-               reg = readl(controller->regs + SEGMENT_ADDR_REG0 +
-                           chip->cs * 4);
+               reg = readl(SEGMENT_ADDR_REG(controller, chip->cs));
 
                if (SEGMENT_ADDR_START(reg) >= SEGMENT_ADDR_END(reg))
                        return NULL;
@@ -451,6 +456,146 @@ static void __iomem *aspeed_smc_chip_base(struct aspeed_smc_chip *chip,
        return controller->ahb_base + offset;
 }
 
+static u32 aspeed_smc_ahb_base_phy(struct aspeed_smc_controller *controller)
+{
+       u32 seg0_val = readl(SEGMENT_ADDR_REG(controller, 0));
+
+       return SEGMENT_ADDR_START(seg0_val);
+}
+
+static u32 chip_set_segment(struct aspeed_smc_chip *chip, u32 cs, u32 start,
+                           u32 size)
+{
+       struct aspeed_smc_controller *controller = chip->controller;
+       void __iomem *seg_reg;
+       u32 seg_oldval, seg_newval, ahb_base_phy, end;
+
+       ahb_base_phy = aspeed_smc_ahb_base_phy(controller);
+
+       seg_reg = SEGMENT_ADDR_REG(controller, cs);
+       seg_oldval = readl(seg_reg);
+
+       /*
+        * If the chip size is not specified, use the default segment
+        * size, but take into account the possible overlap with the
+        * previous segment
+        */
+       if (!size)
+               size = SEGMENT_ADDR_END(seg_oldval) - start;
+
+       /*
+        * The segment cannot exceed the maximum window size of the
+        * controller.
+        */
+       if (start + size > ahb_base_phy + controller->ahb_window_size) {
+               size = ahb_base_phy + controller->ahb_window_size - start;
+               dev_warn(chip->nor.dev, "CE%d window resized to %dMB",
+                        cs, size >> 20);
+       }
+
+       end = start + size;
+       seg_newval = SEGMENT_ADDR_VALUE(start, end);
+       writel(seg_newval, seg_reg);
+
+       /*
+        * Restore default value if something goes wrong. The chip
+        * might have set some bogus value and we would lose access
+        * to the chip.
+        */
+       if (seg_newval != readl(seg_reg)) {
+               dev_err(chip->nor.dev, "CE%d window invalid", cs);
+               writel(seg_oldval, seg_reg);
+               start = SEGMENT_ADDR_START(seg_oldval);
+               end = SEGMENT_ADDR_END(seg_oldval);
+               size = end - start;
+       }
+
+       dev_info(chip->nor.dev, "CE%d window [ 0x%.8x - 0x%.8x ] %dMB",
+                cs, start, end, size >> 20);
+
+       return size;
+}
+
+/*
+ * The segment register defines the mapping window on the AHB bus and
+ * it needs to be configured depending on the chip size. The segment
+ * register of the following CE also needs to be tuned in order to
+ * provide a contiguous window across multiple chips.
+ *
+ * This is expected to be called in increasing CE order
+ */
+static u32 aspeed_smc_chip_set_segment(struct aspeed_smc_chip *chip)
+{
+       struct aspeed_smc_controller *controller = chip->controller;
+       u32 ahb_base_phy, start;
+       u32 size = chip->nor.mtd.size;
+
+       /*
+        * Each controller has a chip size limit for direct memory
+        * access
+        */
+       if (size > controller->info->maxsize)
+               size = controller->info->maxsize;
+
+       /*
+        * The AST2400 SPI controller only handles one chip and does
+        * not have segment registers. Let's use the chip size for the
+        * AHB window.
+        */
+       if (controller->info == &spi_2400_info)
+               goto out;
+
+       /*
+        * The AST2500 SPI controller has a HW bug when the CE0 chip
+        * size reaches 128MB. Enforce a size limit of 120MB to
+        * prevent the controller from using bogus settings in the
+        * segment register.
+        */
+       if (chip->cs == 0 && controller->info == &spi_2500_info &&
+           size == SZ_128M) {
+               size = 120 << 20;
+               dev_info(chip->nor.dev,
+                        "CE%d window resized to %dMB (AST2500 HW quirk)",
+                        chip->cs, size >> 20);
+       }
+
+       ahb_base_phy = aspeed_smc_ahb_base_phy(controller);
+
+       /*
+        * As a start address for the current segment, use the default
+        * start address if we are handling CE0 or use the previous
+        * segment ending address
+        */
+       if (chip->cs) {
+               u32 prev = readl(SEGMENT_ADDR_REG(controller, chip->cs - 1));
+
+               start = SEGMENT_ADDR_END(prev);
+       } else {
+               start = ahb_base_phy;
+       }
+
+       size = chip_set_segment(chip, chip->cs, start, size);
+
+       /* Update chip base address on the AHB bus */
+       chip->ahb_base = controller->ahb_base + (start - ahb_base_phy);
+
+       /*
+        * Now, make sure the next segment does not overlap with the
+        * current one we just configured, even if there is no
+        * available chip. That could break access in Command Mode.
+        */
+       if (chip->cs < controller->info->nce - 1)
+               chip_set_segment(chip, chip->cs + 1, start + size, 0);
+
+out:
+       if (size < chip->nor.mtd.size)
+               dev_warn(chip->nor.dev,
+                        "CE%d window too small for chip %dMB",
+                        chip->cs, (u32)chip->nor.mtd.size >> 20);
+
+       return size;
+}
+
 static void aspeed_smc_chip_enable_write(struct aspeed_smc_chip *chip)
 {
        struct aspeed_smc_controller *controller = chip->controller;
@@ -524,7 +669,7 @@ static int aspeed_smc_chip_setup_init(struct aspeed_smc_chip *chip,
         */
        chip->ahb_base = aspeed_smc_chip_base(chip, res);
        if (!chip->ahb_base) {
-               dev_warn(chip->nor.dev, "CE segment window closed.\n");
+               dev_warn(chip->nor.dev, "CE%d window closed", chip->cs);
                return -EINVAL;
        }
 
@@ -571,6 +716,9 @@ static int aspeed_smc_chip_setup_finish(struct aspeed_smc_chip *chip)
        if (chip->nor.addr_width == 4 && info->set_4b)
                info->set_4b(chip);
 
+       /* This is for direct AHB access when using Command Mode. */
+       chip->ahb_window_size = aspeed_smc_chip_set_segment(chip);
+
        /*
         * base mode has not been optimized yet. use it for writes.
         */
@@ -585,14 +733,12 @@ static int aspeed_smc_chip_setup_finish(struct aspeed_smc_chip *chip)
         * TODO: Adjust clocks if fast read is supported and interpret
         * SPI-NOR flags to adjust controller settings.
         */
-       switch (chip->nor.flash_read) {
-       case SPI_NOR_NORMAL:
-               cmd = CONTROL_COMMAND_MODE_NORMAL;
-               break;
-       case SPI_NOR_FAST:
-               cmd = CONTROL_COMMAND_MODE_FREAD;
-               break;
-       default:
+       if (chip->nor.read_proto == SNOR_PROTO_1_1_1) {
+               if (chip->nor.read_dummy == 0)
+                       cmd = CONTROL_COMMAND_MODE_NORMAL;
+               else
+                       cmd = CONTROL_COMMAND_MODE_FREAD;
+       } else {
                dev_err(chip->nor.dev, "unsupported SPI read mode\n");
                return -EINVAL;
        }
@@ -608,6 +754,11 @@ static int aspeed_smc_chip_setup_finish(struct aspeed_smc_chip *chip)
 static int aspeed_smc_setup_flash(struct aspeed_smc_controller *controller,
                                  struct device_node *np, struct resource *r)
 {
+       const struct spi_nor_hwcaps hwcaps = {
+               .mask = SNOR_HWCAPS_READ |
+                       SNOR_HWCAPS_READ_FAST |
+                       SNOR_HWCAPS_PP,
+       };
        const struct aspeed_smc_info *info = controller->info;
        struct device *dev = controller->dev;
        struct device_node *child;
@@ -671,11 +822,11 @@ static int aspeed_smc_setup_flash(struct aspeed_smc_controller *controller,
                        break;
 
                /*
-                * TODO: Add support for SPI_NOR_QUAD and SPI_NOR_DUAL
+                * TODO: Add support for Dual and Quad SPI protocols
                 * attach when board support is present as determined
                 * by of property.
                 */
-               ret = spi_nor_scan(nor, NULL, SPI_NOR_NORMAL);
+               ret = spi_nor_scan(nor, NULL, &hwcaps);
                if (ret)
                        break;
 
@@ -731,6 +882,8 @@ static int aspeed_smc_probe(struct platform_device *pdev)
        if (IS_ERR(controller->ahb_base))
                return PTR_ERR(controller->ahb_base);
 
+       controller->ahb_window_size = resource_size(res);
+
        ret = aspeed_smc_setup_flash(controller, np, res);
        if (ret)
                dev_err(dev, "Aspeed SMC probe failed %d\n", ret);
index 47937d9beec6b03726354d8a1175faa032503ec6..ba76fa8f2031b462a3c5159692e79b5f3ff979b8 100644 (file)
@@ -275,14 +275,48 @@ static void atmel_qspi_debug_command(struct atmel_qspi *aq,
 
 static int atmel_qspi_run_command(struct atmel_qspi *aq,
                                  const struct atmel_qspi_command *cmd,
-                                 u32 ifr_tfrtyp, u32 ifr_width)
+                                 u32 ifr_tfrtyp, enum spi_nor_protocol proto)
 {
        u32 iar, icr, ifr, sr;
        int err = 0;
 
        iar = 0;
        icr = 0;
-       ifr = ifr_tfrtyp | ifr_width;
+       ifr = ifr_tfrtyp;
+
+       /* Set the SPI protocol */
+       switch (proto) {
+       case SNOR_PROTO_1_1_1:
+               ifr |= QSPI_IFR_WIDTH_SINGLE_BIT_SPI;
+               break;
+
+       case SNOR_PROTO_1_1_2:
+               ifr |= QSPI_IFR_WIDTH_DUAL_OUTPUT;
+               break;
+
+       case SNOR_PROTO_1_1_4:
+               ifr |= QSPI_IFR_WIDTH_QUAD_OUTPUT;
+               break;
+
+       case SNOR_PROTO_1_2_2:
+               ifr |= QSPI_IFR_WIDTH_DUAL_IO;
+               break;
+
+       case SNOR_PROTO_1_4_4:
+               ifr |= QSPI_IFR_WIDTH_QUAD_IO;
+               break;
+
+       case SNOR_PROTO_2_2_2:
+               ifr |= QSPI_IFR_WIDTH_DUAL_CMD;
+               break;
+
+       case SNOR_PROTO_4_4_4:
+               ifr |= QSPI_IFR_WIDTH_QUAD_CMD;
+               break;
+
+       default:
+               return -EINVAL;
+       }
 
        /* Compute instruction parameters */
        if (cmd->enable.bits.instruction) {
@@ -434,7 +468,7 @@ static int atmel_qspi_read_reg(struct spi_nor *nor, u8 opcode,
        cmd.rx_buf = buf;
        cmd.buf_len = len;
        return atmel_qspi_run_command(aq, &cmd, QSPI_IFR_TFRTYP_TRSFR_READ,
-                                     QSPI_IFR_WIDTH_SINGLE_BIT_SPI);
+                                     nor->reg_proto);
 }
 
 static int atmel_qspi_write_reg(struct spi_nor *nor, u8 opcode,
@@ -450,7 +484,7 @@ static int atmel_qspi_write_reg(struct spi_nor *nor, u8 opcode,
        cmd.tx_buf = buf;
        cmd.buf_len = len;
        return atmel_qspi_run_command(aq, &cmd, QSPI_IFR_TFRTYP_TRSFR_WRITE,
-                                     QSPI_IFR_WIDTH_SINGLE_BIT_SPI);
+                                     nor->reg_proto);
 }
 
 static ssize_t atmel_qspi_write(struct spi_nor *nor, loff_t to, size_t len,
@@ -469,7 +503,7 @@ static ssize_t atmel_qspi_write(struct spi_nor *nor, loff_t to, size_t len,
        cmd.tx_buf = write_buf;
        cmd.buf_len = len;
        ret = atmel_qspi_run_command(aq, &cmd, QSPI_IFR_TFRTYP_TRSFR_WRITE_MEM,
-                                    QSPI_IFR_WIDTH_SINGLE_BIT_SPI);
+                                    nor->write_proto);
        return (ret < 0) ? ret : len;
 }
 
@@ -484,7 +518,7 @@ static int atmel_qspi_erase(struct spi_nor *nor, loff_t offs)
        cmd.instruction = nor->erase_opcode;
        cmd.address = (u32)offs;
        return atmel_qspi_run_command(aq, &cmd, QSPI_IFR_TFRTYP_TRSFR_WRITE,
-                                     QSPI_IFR_WIDTH_SINGLE_BIT_SPI);
+                                     nor->reg_proto);
 }
 
 static ssize_t atmel_qspi_read(struct spi_nor *nor, loff_t from, size_t len,
@@ -493,27 +527,8 @@ static ssize_t atmel_qspi_read(struct spi_nor *nor, loff_t from, size_t len,
        struct atmel_qspi *aq = nor->priv;
        struct atmel_qspi_command cmd;
        u8 num_mode_cycles, num_dummy_cycles;
-       u32 ifr_width;
        ssize_t ret;
 
-       switch (nor->flash_read) {
-       case SPI_NOR_NORMAL:
-       case SPI_NOR_FAST:
-               ifr_width = QSPI_IFR_WIDTH_SINGLE_BIT_SPI;
-               break;
-
-       case SPI_NOR_DUAL:
-               ifr_width = QSPI_IFR_WIDTH_DUAL_OUTPUT;
-               break;
-
-       case SPI_NOR_QUAD:
-               ifr_width = QSPI_IFR_WIDTH_QUAD_OUTPUT;
-               break;
-
-       default:
-               return -EINVAL;
-       }
-
        if (nor->read_dummy >= 2) {
                num_mode_cycles = 2;
                num_dummy_cycles = nor->read_dummy - 2;
@@ -536,7 +551,7 @@ static ssize_t atmel_qspi_read(struct spi_nor *nor, loff_t from, size_t len,
        cmd.rx_buf = read_buf;
        cmd.buf_len = len;
        ret = atmel_qspi_run_command(aq, &cmd, QSPI_IFR_TFRTYP_TRSFR_READ_MEM,
-                                    ifr_width);
+                                    nor->read_proto);
        return (ret < 0) ? ret : len;
 }
 
@@ -590,6 +605,20 @@ static irqreturn_t atmel_qspi_interrupt(int irq, void *dev_id)
 
 static int atmel_qspi_probe(struct platform_device *pdev)
 {
+       const struct spi_nor_hwcaps hwcaps = {
+               .mask = SNOR_HWCAPS_READ |
+                       SNOR_HWCAPS_READ_FAST |
+                       SNOR_HWCAPS_READ_1_1_2 |
+                       SNOR_HWCAPS_READ_1_2_2 |
+                       SNOR_HWCAPS_READ_2_2_2 |
+                       SNOR_HWCAPS_READ_1_1_4 |
+                       SNOR_HWCAPS_READ_1_4_4 |
+                       SNOR_HWCAPS_READ_4_4_4 |
+                       SNOR_HWCAPS_PP |
+                       SNOR_HWCAPS_PP_1_1_4 |
+                       SNOR_HWCAPS_PP_1_4_4 |
+                       SNOR_HWCAPS_PP_4_4_4,
+       };
        struct device_node *child, *np = pdev->dev.of_node;
        struct atmel_qspi *aq;
        struct resource *res;
@@ -679,7 +708,7 @@ static int atmel_qspi_probe(struct platform_device *pdev)
        if (err)
                goto disable_clk;
 
-       err = spi_nor_scan(nor, NULL, SPI_NOR_QUAD);
+       err = spi_nor_scan(nor, NULL, &hwcaps);
        if (err)
                goto disable_clk;
 
index 9f8102de1b16a1786c5fb06f7980acc872433dd5..53c7d8e0327aa4376bdbb3cb00d78babf70349a4 100644 (file)
@@ -855,15 +855,14 @@ static int cqspi_set_protocol(struct spi_nor *nor, const int read)
        f_pdata->data_width = CQSPI_INST_TYPE_SINGLE;
 
        if (read) {
-               switch (nor->flash_read) {
-               case SPI_NOR_NORMAL:
-               case SPI_NOR_FAST:
+               switch (nor->read_proto) {
+               case SNOR_PROTO_1_1_1:
                        f_pdata->data_width = CQSPI_INST_TYPE_SINGLE;
                        break;
-               case SPI_NOR_DUAL:
+               case SNOR_PROTO_1_1_2:
                        f_pdata->data_width = CQSPI_INST_TYPE_DUAL;
                        break;
-               case SPI_NOR_QUAD:
+               case SNOR_PROTO_1_1_4:
                        f_pdata->data_width = CQSPI_INST_TYPE_QUAD;
                        break;
                default:
@@ -1069,6 +1068,13 @@ static void cqspi_controller_init(struct cqspi_st *cqspi)
 
 static int cqspi_setup_flash(struct cqspi_st *cqspi, struct device_node *np)
 {
+       const struct spi_nor_hwcaps hwcaps = {
+               .mask = SNOR_HWCAPS_READ |
+                       SNOR_HWCAPS_READ_FAST |
+                       SNOR_HWCAPS_READ_1_1_2 |
+                       SNOR_HWCAPS_READ_1_1_4 |
+                       SNOR_HWCAPS_PP,
+       };
        struct platform_device *pdev = cqspi->pdev;
        struct device *dev = &pdev->dev;
        struct cqspi_flash_pdata *f_pdata;
@@ -1123,7 +1129,7 @@ static int cqspi_setup_flash(struct cqspi_st *cqspi, struct device_node *np)
                        goto err;
                }
 
-               ret = spi_nor_scan(nor, NULL, SPI_NOR_QUAD);
+               ret = spi_nor_scan(nor, NULL, &hwcaps);
                if (ret)
                        goto err;
 
@@ -1277,7 +1283,7 @@ static const struct dev_pm_ops cqspi__dev_pm_ops = {
 #define CQSPI_DEV_PM_OPS       NULL
 #endif
 
-static struct of_device_id const cqspi_dt_ids[] = {
+static const struct of_device_id cqspi_dt_ids[] = {
        {.compatible = "cdns,qspi-nor",},
        { /* end of table */ }
 };
index 1476135e0d50176312c4534de0b1386296e79405..f17d22435bfcff296236a706cf10a18b2fdd6ed5 100644 (file)
@@ -957,6 +957,10 @@ static void fsl_qspi_unprep(struct spi_nor *nor, enum spi_nor_ops ops)
 
 static int fsl_qspi_probe(struct platform_device *pdev)
 {
+       const struct spi_nor_hwcaps hwcaps = {
+               .mask = SNOR_HWCAPS_READ_1_1_4 |
+                       SNOR_HWCAPS_PP,
+       };
        struct device_node *np = pdev->dev.of_node;
        struct device *dev = &pdev->dev;
        struct fsl_qspi *q;
@@ -1065,7 +1069,7 @@ static int fsl_qspi_probe(struct platform_device *pdev)
                /* set the chip address for READID */
                fsl_qspi_set_base_addr(q, nor);
 
-               ret = spi_nor_scan(nor, NULL, SPI_NOR_QUAD);
+               ret = spi_nor_scan(nor, NULL, &hwcaps);
                if (ret)
                        goto mutex_failed;
 
index a286350627a663fc878245e18fbbb328ac7aa65f..d1106832b9d5ef49d318119aef3ef70c103bdbca 100644 (file)
@@ -120,19 +120,24 @@ static inline int wait_op_finish(struct hifmc_host *host)
                (reg & FMC_INT_OP_DONE), 0, FMC_WAIT_TIMEOUT);
 }
 
-static int get_if_type(enum read_mode flash_read)
+static int get_if_type(enum spi_nor_protocol proto)
 {
        enum hifmc_iftype if_type;
 
-       switch (flash_read) {
-       case SPI_NOR_DUAL:
+       switch (proto) {
+       case SNOR_PROTO_1_1_2:
                if_type = IF_TYPE_DUAL;
                break;
-       case SPI_NOR_QUAD:
+       case SNOR_PROTO_1_2_2:
+               if_type = IF_TYPE_DIO;
+               break;
+       case SNOR_PROTO_1_1_4:
                if_type = IF_TYPE_QUAD;
                break;
-       case SPI_NOR_NORMAL:
-       case SPI_NOR_FAST:
+       case SNOR_PROTO_1_4_4:
+               if_type = IF_TYPE_QIO;
+               break;
+       case SNOR_PROTO_1_1_1:
        default:
                if_type = IF_TYPE_STD;
                break;
@@ -253,7 +258,10 @@ static int hisi_spi_nor_dma_transfer(struct spi_nor *nor, loff_t start_off,
        writel(FMC_DMA_LEN_SET(len), host->regbase + FMC_DMA_LEN);
 
        reg = OP_CFG_FM_CS(priv->chipselect);
-       if_type = get_if_type(nor->flash_read);
+       if (op_type == FMC_OP_READ)
+               if_type = get_if_type(nor->read_proto);
+       else
+               if_type = get_if_type(nor->write_proto);
        reg |= OP_CFG_MEM_IF_TYPE(if_type);
        if (op_type == FMC_OP_READ)
                reg |= OP_CFG_DUMMY_NUM(nor->read_dummy >> 3);
@@ -321,6 +329,13 @@ static ssize_t hisi_spi_nor_write(struct spi_nor *nor, loff_t to,
 static int hisi_spi_nor_register(struct device_node *np,
                                struct hifmc_host *host)
 {
+       const struct spi_nor_hwcaps hwcaps = {
+               .mask = SNOR_HWCAPS_READ |
+                       SNOR_HWCAPS_READ_FAST |
+                       SNOR_HWCAPS_READ_1_1_2 |
+                       SNOR_HWCAPS_READ_1_1_4 |
+                       SNOR_HWCAPS_PP,
+       };
        struct device *dev = host->dev;
        struct spi_nor *nor;
        struct hifmc_priv *priv;
@@ -362,7 +377,7 @@ static int hisi_spi_nor_register(struct device_node *np,
        nor->read = hisi_spi_nor_read;
        nor->write = hisi_spi_nor_write;
        nor->erase = NULL;
-       ret = spi_nor_scan(nor, NULL, SPI_NOR_QUAD);
+       ret = spi_nor_scan(nor, NULL, &hwcaps);
        if (ret)
                return ret;
 
index 986a3d020a3a154157f025f912fd88fd1c5881de..8a596bfeddff6ce87db49f8fd2047d6bd355efd9 100644 (file)
@@ -715,6 +715,11 @@ static void intel_spi_fill_partition(struct intel_spi *ispi,
 struct intel_spi *intel_spi_probe(struct device *dev,
        struct resource *mem, const struct intel_spi_boardinfo *info)
 {
+       const struct spi_nor_hwcaps hwcaps = {
+               .mask = SNOR_HWCAPS_READ |
+                       SNOR_HWCAPS_READ_FAST |
+                       SNOR_HWCAPS_PP,
+       };
        struct mtd_partition part;
        struct intel_spi *ispi;
        int ret;
@@ -746,7 +751,7 @@ struct intel_spi *intel_spi_probe(struct device *dev,
        ispi->nor.write = intel_spi_write;
        ispi->nor.erase = intel_spi_erase;
 
-       ret = spi_nor_scan(&ispi->nor, NULL, SPI_NOR_NORMAL);
+       ret = spi_nor_scan(&ispi->nor, NULL, &hwcaps);
        if (ret) {
                dev_info(dev, "failed to locate the chip\n");
                return ERR_PTR(ret);
index b6377707ce321e077c0d0d18005b6ccca3833f0f..8a20ec4991c878eb00b7a2b0551cf50e2320a7dd 100644 (file)
@@ -123,20 +123,20 @@ static void mt8173_nor_set_read_mode(struct mt8173_nor *mt8173_nor)
 {
        struct spi_nor *nor = &mt8173_nor->nor;
 
-       switch (nor->flash_read) {
-       case SPI_NOR_FAST:
+       switch (nor->read_proto) {
+       case SNOR_PROTO_1_1_1:
                writeb(nor->read_opcode, mt8173_nor->base +
                       MTK_NOR_PRGDATA3_REG);
                writeb(MTK_NOR_FAST_READ, mt8173_nor->base +
                       MTK_NOR_CFG1_REG);
                break;
-       case SPI_NOR_DUAL:
+       case SNOR_PROTO_1_1_2:
                writeb(nor->read_opcode, mt8173_nor->base +
                       MTK_NOR_PRGDATA3_REG);
                writeb(MTK_NOR_DUAL_READ_EN, mt8173_nor->base +
                       MTK_NOR_DUAL_REG);
                break;
-       case SPI_NOR_QUAD:
+       case SNOR_PROTO_1_1_4:
                writeb(nor->read_opcode, mt8173_nor->base +
                       MTK_NOR_PRGDATA4_REG);
                writeb(MTK_NOR_QUAD_READ_EN, mt8173_nor->base +
@@ -408,6 +408,11 @@ static int mt8173_nor_write_reg(struct spi_nor *nor, u8 opcode, u8 *buf,
 static int mtk_nor_init(struct mt8173_nor *mt8173_nor,
                        struct device_node *flash_node)
 {
+       const struct spi_nor_hwcaps hwcaps = {
+               .mask = SNOR_HWCAPS_READ_FAST |
+                       SNOR_HWCAPS_READ_1_1_2 |
+                       SNOR_HWCAPS_PP,
+       };
        int ret;
        struct spi_nor *nor;
 
@@ -426,7 +431,7 @@ static int mtk_nor_init(struct mt8173_nor *mt8173_nor,
        nor->write_reg = mt8173_nor_write_reg;
        nor->mtd.name = "mtk_nor";
        /* initialized with NULL */
-       ret = spi_nor_scan(nor, NULL, SPI_NOR_DUAL);
+       ret = spi_nor_scan(nor, NULL, &hwcaps);
        if (ret)
                return ret;
 
index 73a14f40928be2536d9c0d1256c26ce0b3e54755..15374216d4d904b44084696378763c29792c84d6 100644 (file)
@@ -240,13 +240,12 @@ static int nxp_spifi_erase(struct spi_nor *nor, loff_t offs)
 
 static int nxp_spifi_setup_memory_cmd(struct nxp_spifi *spifi)
 {
-       switch (spifi->nor.flash_read) {
-       case SPI_NOR_NORMAL:
-       case SPI_NOR_FAST:
+       switch (spifi->nor.read_proto) {
+       case SNOR_PROTO_1_1_1:
                spifi->mcmd = SPIFI_CMD_FIELDFORM_ALL_SERIAL;
                break;
-       case SPI_NOR_DUAL:
-       case SPI_NOR_QUAD:
+       case SNOR_PROTO_1_1_2:
+       case SNOR_PROTO_1_1_4:
                spifi->mcmd = SPIFI_CMD_FIELDFORM_QUAD_DUAL_DATA;
                break;
        default:
@@ -274,7 +273,11 @@ static void nxp_spifi_dummy_id_read(struct spi_nor *nor)
 static int nxp_spifi_setup_flash(struct nxp_spifi *spifi,
                                 struct device_node *np)
 {
-       enum read_mode flash_read;
+       struct spi_nor_hwcaps hwcaps = {
+               .mask = SNOR_HWCAPS_READ |
+                       SNOR_HWCAPS_READ_FAST |
+                       SNOR_HWCAPS_PP,
+       };
        u32 ctrl, property;
        u16 mode = 0;
        int ret;
@@ -308,13 +311,12 @@ static int nxp_spifi_setup_flash(struct nxp_spifi *spifi,
 
        if (mode & SPI_RX_DUAL) {
                ctrl |= SPIFI_CTRL_DUAL;
-               flash_read = SPI_NOR_DUAL;
+               hwcaps.mask |= SNOR_HWCAPS_READ_1_1_2;
        } else if (mode & SPI_RX_QUAD) {
                ctrl &= ~SPIFI_CTRL_DUAL;
-               flash_read = SPI_NOR_QUAD;
+               hwcaps.mask |= SNOR_HWCAPS_READ_1_1_4;
        } else {
                ctrl |= SPIFI_CTRL_DUAL;
-               flash_read = SPI_NOR_NORMAL;
        }
 
        switch (mode & (SPI_CPHA | SPI_CPOL)) {
@@ -351,7 +353,7 @@ static int nxp_spifi_setup_flash(struct nxp_spifi *spifi,
         */
        nxp_spifi_dummy_id_read(&spifi->nor);
 
-       ret = spi_nor_scan(&spifi->nor, NULL, flash_read);
+       ret = spi_nor_scan(&spifi->nor, NULL, &hwcaps);
        if (ret) {
                dev_err(spifi->dev, "device scan failed\n");
                return ret;
index dea8c9cbadf00a75ff9e775a92c4029390c6e2b3..1413828ff1fbc1ccf963b4f1a79fe6861e2c6d08 100644 (file)
@@ -149,24 +149,6 @@ static int read_cr(struct spi_nor *nor)
        return val;
 }
 
-/*
- * Dummy Cycle calculation for different type of read.
- * It can be used to support more commands with
- * different dummy cycle requirements.
- */
-static inline int spi_nor_read_dummy_cycles(struct spi_nor *nor)
-{
-       switch (nor->flash_read) {
-       case SPI_NOR_FAST:
-       case SPI_NOR_DUAL:
-       case SPI_NOR_QUAD:
-               return 8;
-       case SPI_NOR_NORMAL:
-               return 0;
-       }
-       return 0;
-}
-
 /*
  * Write status register 1 byte
  * Returns negative if error occurred.
@@ -221,6 +203,10 @@ static inline u8 spi_nor_convert_3to4_read(u8 opcode)
                { SPINOR_OP_READ_1_2_2, SPINOR_OP_READ_1_2_2_4B },
                { SPINOR_OP_READ_1_1_4, SPINOR_OP_READ_1_1_4_4B },
                { SPINOR_OP_READ_1_4_4, SPINOR_OP_READ_1_4_4_4B },
+
+               { SPINOR_OP_READ_1_1_1_DTR,     SPINOR_OP_READ_1_1_1_DTR_4B },
+               { SPINOR_OP_READ_1_2_2_DTR,     SPINOR_OP_READ_1_2_2_DTR_4B },
+               { SPINOR_OP_READ_1_4_4_DTR,     SPINOR_OP_READ_1_4_4_DTR_4B },
        };
 
        return spi_nor_convert_opcode(opcode, spi_nor_3to4_read,
@@ -1022,10 +1008,12 @@ static const struct flash_info spi_nor_ids[] = {
        { "mx25u6435f",  INFO(0xc22537, 0, 64 * 1024, 128, SECT_4K) },
        { "mx25l12805d", INFO(0xc22018, 0, 64 * 1024, 256, 0) },
        { "mx25l12855e", INFO(0xc22618, 0, 64 * 1024, 256, 0) },
-       { "mx25l25635e", INFO(0xc22019, 0, 64 * 1024, 512, 0) },
+       { "mx25l25635e", INFO(0xc22019, 0, 64 * 1024, 512, SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) },
        { "mx25u25635f", INFO(0xc22539, 0, 64 * 1024, 512, SECT_4K | SPI_NOR_4B_OPCODES) },
        { "mx25l25655e", INFO(0xc22619, 0, 64 * 1024, 512, 0) },
-       { "mx66l51235l", INFO(0xc2201a, 0, 64 * 1024, 1024, SPI_NOR_QUAD_READ) },
+       { "mx66l51235l", INFO(0xc2201a, 0, 64 * 1024, 1024, SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) },
+       { "mx66u51235f", INFO(0xc2253a, 0, 64 * 1024, 1024, SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ | SPI_NOR_4B_OPCODES) },
+       { "mx66l1g45g",  INFO(0xc2201b, 0, 64 * 1024, 2048, SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) },
        { "mx66l1g55g",  INFO(0xc2261b, 0, 64 * 1024, 2048, SPI_NOR_QUAD_READ) },
 
        /* Micron */
@@ -1036,7 +1024,7 @@ static const struct flash_info spi_nor_ids[] = {
        { "n25q064a",    INFO(0x20bb17, 0, 64 * 1024,  128, SECT_4K | SPI_NOR_QUAD_READ) },
        { "n25q128a11",  INFO(0x20bb18, 0, 64 * 1024,  256, SECT_4K | SPI_NOR_QUAD_READ) },
        { "n25q128a13",  INFO(0x20ba18, 0, 64 * 1024,  256, SECT_4K | SPI_NOR_QUAD_READ) },
-       { "n25q256a",    INFO(0x20ba19, 0, 64 * 1024,  512, SECT_4K | SPI_NOR_QUAD_READ) },
+       { "n25q256a",    INFO(0x20ba19, 0, 64 * 1024,  512, SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) },
        { "n25q256ax1",  INFO(0x20bb19, 0, 64 * 1024,  512, SECT_4K | SPI_NOR_QUAD_READ) },
        { "n25q512a",    INFO(0x20bb20, 0, 64 * 1024, 1024, SECT_4K | USE_FSR | SPI_NOR_QUAD_READ) },
        { "n25q512ax3",  INFO(0x20ba20, 0, 64 * 1024, 1024, SECT_4K | USE_FSR | SPI_NOR_QUAD_READ) },
@@ -1076,6 +1064,7 @@ static const struct flash_info spi_nor_ids[] = {
        { "s25fl164k",  INFO(0x014017,      0,  64 * 1024, 128, SECT_4K) },
        { "s25fl204k",  INFO(0x014013,      0,  64 * 1024,   8, SECT_4K | SPI_NOR_DUAL_READ) },
        { "s25fl208k",  INFO(0x014014,      0,  64 * 1024,  16, SECT_4K | SPI_NOR_DUAL_READ) },
+       { "s25fl064l",  INFO(0x016017,      0,  64 * 1024, 128, SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ | SPI_NOR_4B_OPCODES) },
 
        /* SST -- large erase sizes are "overlays", "sectors" are 4K */
        { "sst25vf040b", INFO(0xbf258d, 0, 64 * 1024,  8, SECT_4K | SST_WRITE) },
@@ -1159,7 +1148,9 @@ static const struct flash_info spi_nor_ids[] = {
        { "w25q80", INFO(0xef5014, 0, 64 * 1024,  16, SECT_4K) },
        { "w25q80bl", INFO(0xef4014, 0, 64 * 1024,  16, SECT_4K) },
        { "w25q128", INFO(0xef4018, 0, 64 * 1024, 256, SECT_4K) },
-       { "w25q256", INFO(0xef4019, 0, 64 * 1024, 512, SECT_4K) },
+       { "w25q256", INFO(0xef4019, 0, 64 * 1024, 512, SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) },
+       { "w25m512jv", INFO(0xef7119, 0, 64 * 1024, 1024,
+                       SECT_4K | SPI_NOR_QUAD_READ | SPI_NOR_DUAL_READ) },
 
        /* Catalyst / On Semiconductor -- non-JEDEC */
        { "cat25c11", CAT25_INFO(  16, 8, 16, 1, SPI_NOR_NO_ERASE | SPI_NOR_NO_FR) },
@@ -1403,8 +1394,9 @@ static int macronix_quad_enable(struct spi_nor *nor)
 
        write_sr(nor, val | SR_QUAD_EN_MX);
 
-       if (spi_nor_wait_till_ready(nor))
-               return 1;
+       ret = spi_nor_wait_till_ready(nor);
+       if (ret)
+               return ret;
 
        ret = read_sr(nor);
        if (!(ret > 0 && (ret & SR_QUAD_EN_MX))) {
@@ -1460,30 +1452,6 @@ static int spansion_quad_enable(struct spi_nor *nor)
        return 0;
 }
 
-static int set_quad_mode(struct spi_nor *nor, const struct flash_info *info)
-{
-       int status;
-
-       switch (JEDEC_MFR(info)) {
-       case SNOR_MFR_MACRONIX:
-               status = macronix_quad_enable(nor);
-               if (status) {
-                       dev_err(nor->dev, "Macronix quad-read not enabled\n");
-                       return -EINVAL;
-               }
-               return status;
-       case SNOR_MFR_MICRON:
-               return 0;
-       default:
-               status = spansion_quad_enable(nor);
-               if (status) {
-                       dev_err(nor->dev, "Spansion quad-read not enabled\n");
-                       return -EINVAL;
-               }
-               return status;
-       }
-}
-
 static int spi_nor_check(struct spi_nor *nor)
 {
        if (!nor->dev || !nor->read || !nor->write ||
@@ -1536,8 +1504,349 @@ static int s3an_nor_scan(const struct flash_info *info, struct spi_nor *nor)
        return 0;
 }
 
-int spi_nor_scan(struct spi_nor *nor, const char *name, enum read_mode mode)
+struct spi_nor_read_command {
+       u8                      num_mode_clocks;
+       u8                      num_wait_states;
+       u8                      opcode;
+       enum spi_nor_protocol   proto;
+};
+
+struct spi_nor_pp_command {
+       u8                      opcode;
+       enum spi_nor_protocol   proto;
+};
+
+enum spi_nor_read_command_index {
+       SNOR_CMD_READ,
+       SNOR_CMD_READ_FAST,
+       SNOR_CMD_READ_1_1_1_DTR,
+
+       /* Dual SPI */
+       SNOR_CMD_READ_1_1_2,
+       SNOR_CMD_READ_1_2_2,
+       SNOR_CMD_READ_2_2_2,
+       SNOR_CMD_READ_1_2_2_DTR,
+
+       /* Quad SPI */
+       SNOR_CMD_READ_1_1_4,
+       SNOR_CMD_READ_1_4_4,
+       SNOR_CMD_READ_4_4_4,
+       SNOR_CMD_READ_1_4_4_DTR,
+
+       /* Octo SPI */
+       SNOR_CMD_READ_1_1_8,
+       SNOR_CMD_READ_1_8_8,
+       SNOR_CMD_READ_8_8_8,
+       SNOR_CMD_READ_1_8_8_DTR,
+
+       SNOR_CMD_READ_MAX
+};
+
+enum spi_nor_pp_command_index {
+       SNOR_CMD_PP,
+
+       /* Quad SPI */
+       SNOR_CMD_PP_1_1_4,
+       SNOR_CMD_PP_1_4_4,
+       SNOR_CMD_PP_4_4_4,
+
+       /* Octo SPI */
+       SNOR_CMD_PP_1_1_8,
+       SNOR_CMD_PP_1_8_8,
+       SNOR_CMD_PP_8_8_8,
+
+       SNOR_CMD_PP_MAX
+};
+
+struct spi_nor_flash_parameter {
+       u64                             size;
+       u32                             page_size;
+
+       struct spi_nor_hwcaps           hwcaps;
+       struct spi_nor_read_command     reads[SNOR_CMD_READ_MAX];
+       struct spi_nor_pp_command       page_programs[SNOR_CMD_PP_MAX];
+
+       int (*quad_enable)(struct spi_nor *nor);
+};
+
+static void
+spi_nor_set_read_settings(struct spi_nor_read_command *read,
+                         u8 num_mode_clocks,
+                         u8 num_wait_states,
+                         u8 opcode,
+                         enum spi_nor_protocol proto)
 {
+       read->num_mode_clocks = num_mode_clocks;
+       read->num_wait_states = num_wait_states;
+       read->opcode = opcode;
+       read->proto = proto;
+}
+
+static void
+spi_nor_set_pp_settings(struct spi_nor_pp_command *pp,
+                       u8 opcode,
+                       enum spi_nor_protocol proto)
+{
+       pp->opcode = opcode;
+       pp->proto = proto;
+}
+
+static int spi_nor_init_params(struct spi_nor *nor,
+                              const struct flash_info *info,
+                              struct spi_nor_flash_parameter *params)
+{
+       /* Set legacy flash parameters as default. */
+       memset(params, 0, sizeof(*params));
+
+       /* Set SPI NOR sizes. */
+       params->size = info->sector_size * info->n_sectors;
+       params->page_size = info->page_size;
+
+       /* (Fast) Read settings. */
+       params->hwcaps.mask |= SNOR_HWCAPS_READ;
+       spi_nor_set_read_settings(&params->reads[SNOR_CMD_READ],
+                                 0, 0, SPINOR_OP_READ,
+                                 SNOR_PROTO_1_1_1);
+
+       if (!(info->flags & SPI_NOR_NO_FR)) {
+               params->hwcaps.mask |= SNOR_HWCAPS_READ_FAST;
+               spi_nor_set_read_settings(&params->reads[SNOR_CMD_READ_FAST],
+                                         0, 8, SPINOR_OP_READ_FAST,
+                                         SNOR_PROTO_1_1_1);
+       }
+
+       if (info->flags & SPI_NOR_DUAL_READ) {
+               params->hwcaps.mask |= SNOR_HWCAPS_READ_1_1_2;
+               spi_nor_set_read_settings(&params->reads[SNOR_CMD_READ_1_1_2],
+                                         0, 8, SPINOR_OP_READ_1_1_2,
+                                         SNOR_PROTO_1_1_2);
+       }
+
+       if (info->flags & SPI_NOR_QUAD_READ) {
+               params->hwcaps.mask |= SNOR_HWCAPS_READ_1_1_4;
+               spi_nor_set_read_settings(&params->reads[SNOR_CMD_READ_1_1_4],
+                                         0, 8, SPINOR_OP_READ_1_1_4,
+                                         SNOR_PROTO_1_1_4);
+       }
+
+       /* Page Program settings. */
+       params->hwcaps.mask |= SNOR_HWCAPS_PP;
+       spi_nor_set_pp_settings(&params->page_programs[SNOR_CMD_PP],
+                               SPINOR_OP_PP, SNOR_PROTO_1_1_1);
+
+       /* Select the procedure to set the Quad Enable bit. */
+       if (params->hwcaps.mask & (SNOR_HWCAPS_READ_QUAD |
+                                  SNOR_HWCAPS_PP_QUAD)) {
+               switch (JEDEC_MFR(info)) {
+               case SNOR_MFR_MACRONIX:
+                       params->quad_enable = macronix_quad_enable;
+                       break;
+
+               case SNOR_MFR_MICRON:
+                       break;
+
+               default:
+                       params->quad_enable = spansion_quad_enable;
+                       break;
+               }
+       }
+
+       return 0;
+}
+
+static int spi_nor_hwcaps2cmd(u32 hwcaps, const int table[][2], size_t size)
+{
+       size_t i;
+
+       for (i = 0; i < size; i++)
+               if (table[i][0] == (int)hwcaps)
+                       return table[i][1];
+
+       return -EINVAL;
+}
+
+static int spi_nor_hwcaps_read2cmd(u32 hwcaps)
+{
+       static const int hwcaps_read2cmd[][2] = {
+               { SNOR_HWCAPS_READ,             SNOR_CMD_READ },
+               { SNOR_HWCAPS_READ_FAST,        SNOR_CMD_READ_FAST },
+               { SNOR_HWCAPS_READ_1_1_1_DTR,   SNOR_CMD_READ_1_1_1_DTR },
+               { SNOR_HWCAPS_READ_1_1_2,       SNOR_CMD_READ_1_1_2 },
+               { SNOR_HWCAPS_READ_1_2_2,       SNOR_CMD_READ_1_2_2 },
+               { SNOR_HWCAPS_READ_2_2_2,       SNOR_CMD_READ_2_2_2 },
+               { SNOR_HWCAPS_READ_1_2_2_DTR,   SNOR_CMD_READ_1_2_2_DTR },
+               { SNOR_HWCAPS_READ_1_1_4,       SNOR_CMD_READ_1_1_4 },
+               { SNOR_HWCAPS_READ_1_4_4,       SNOR_CMD_READ_1_4_4 },
+               { SNOR_HWCAPS_READ_4_4_4,       SNOR_CMD_READ_4_4_4 },
+               { SNOR_HWCAPS_READ_1_4_4_DTR,   SNOR_CMD_READ_1_4_4_DTR },
+               { SNOR_HWCAPS_READ_1_1_8,       SNOR_CMD_READ_1_1_8 },
+               { SNOR_HWCAPS_READ_1_8_8,       SNOR_CMD_READ_1_8_8 },
+               { SNOR_HWCAPS_READ_8_8_8,       SNOR_CMD_READ_8_8_8 },
+               { SNOR_HWCAPS_READ_1_8_8_DTR,   SNOR_CMD_READ_1_8_8_DTR },
+       };
+
+       return spi_nor_hwcaps2cmd(hwcaps, hwcaps_read2cmd,
+                                 ARRAY_SIZE(hwcaps_read2cmd));
+}
+
+static int spi_nor_hwcaps_pp2cmd(u32 hwcaps)
+{
+       static const int hwcaps_pp2cmd[][2] = {
+               { SNOR_HWCAPS_PP,               SNOR_CMD_PP },
+               { SNOR_HWCAPS_PP_1_1_4,         SNOR_CMD_PP_1_1_4 },
+               { SNOR_HWCAPS_PP_1_4_4,         SNOR_CMD_PP_1_4_4 },
+               { SNOR_HWCAPS_PP_4_4_4,         SNOR_CMD_PP_4_4_4 },
+               { SNOR_HWCAPS_PP_1_1_8,         SNOR_CMD_PP_1_1_8 },
+               { SNOR_HWCAPS_PP_1_8_8,         SNOR_CMD_PP_1_8_8 },
+               { SNOR_HWCAPS_PP_8_8_8,         SNOR_CMD_PP_8_8_8 },
+       };
+
+       return spi_nor_hwcaps2cmd(hwcaps, hwcaps_pp2cmd,
+                                 ARRAY_SIZE(hwcaps_pp2cmd));
+}
+
+static int spi_nor_select_read(struct spi_nor *nor,
+                              const struct spi_nor_flash_parameter *params,
+                              u32 shared_hwcaps)
+{
+       int cmd, best_match = fls(shared_hwcaps & SNOR_HWCAPS_READ_MASK) - 1;
+       const struct spi_nor_read_command *read;
+
+       if (best_match < 0)
+               return -EINVAL;
+
+       cmd = spi_nor_hwcaps_read2cmd(BIT(best_match));
+       if (cmd < 0)
+               return -EINVAL;
+
+       read = &params->reads[cmd];
+       nor->read_opcode = read->opcode;
+       nor->read_proto = read->proto;
+
+       /*
+        * In the spi-nor framework, we don't need to distinguish between
+        * mode clock cycles and wait state clock cycles.
+        * Indeed, the value of the mode clock cycles is used by a QSPI
+        * flash memory to know whether it should enter or leave its 0-4-4
+        * (Continuous Read / XIP) mode.
+        * eXecution In Place is out of the scope of the mtd sub-system.
+        * Hence we choose to merge both mode and wait state clock cycles
+        * into the so called dummy clock cycles.
+        */
+       nor->read_dummy = read->num_mode_clocks + read->num_wait_states;
+       return 0;
+}
+
+static int spi_nor_select_pp(struct spi_nor *nor,
+                            const struct spi_nor_flash_parameter *params,
+                            u32 shared_hwcaps)
+{
+       int cmd, best_match = fls(shared_hwcaps & SNOR_HWCAPS_PP_MASK) - 1;
+       const struct spi_nor_pp_command *pp;
+
+       if (best_match < 0)
+               return -EINVAL;
+
+       cmd = spi_nor_hwcaps_pp2cmd(BIT(best_match));
+       if (cmd < 0)
+               return -EINVAL;
+
+       pp = &params->page_programs[cmd];
+       nor->program_opcode = pp->opcode;
+       nor->write_proto = pp->proto;
+       return 0;
+}
+
+static int spi_nor_select_erase(struct spi_nor *nor,
+                               const struct flash_info *info)
+{
+       struct mtd_info *mtd = &nor->mtd;
+
+#ifdef CONFIG_MTD_SPI_NOR_USE_4K_SECTORS
+       /* prefer "small sector" erase if possible */
+       if (info->flags & SECT_4K) {
+               nor->erase_opcode = SPINOR_OP_BE_4K;
+               mtd->erasesize = 4096;
+       } else if (info->flags & SECT_4K_PMC) {
+               nor->erase_opcode = SPINOR_OP_BE_4K_PMC;
+               mtd->erasesize = 4096;
+       } else
+#endif
+       {
+               nor->erase_opcode = SPINOR_OP_SE;
+               mtd->erasesize = info->sector_size;
+       }
+       return 0;
+}
+
+static int spi_nor_setup(struct spi_nor *nor, const struct flash_info *info,
+                        const struct spi_nor_flash_parameter *params,
+                        const struct spi_nor_hwcaps *hwcaps)
+{
+       u32 ignored_mask, shared_mask;
+       bool enable_quad_io;
+       int err;
+
+       /*
+        * Keep only the hardware capabilities supported by both the SPI
+        * controller and the SPI flash memory.
+        */
+       shared_mask = hwcaps->mask & params->hwcaps.mask;
+
+       /* SPI n-n-n protocols are not supported yet. */
+       ignored_mask = (SNOR_HWCAPS_READ_2_2_2 |
+                       SNOR_HWCAPS_READ_4_4_4 |
+                       SNOR_HWCAPS_READ_8_8_8 |
+                       SNOR_HWCAPS_PP_4_4_4 |
+                       SNOR_HWCAPS_PP_8_8_8);
+       if (shared_mask & ignored_mask) {
+               dev_dbg(nor->dev,
+                       "SPI n-n-n protocols are not supported yet.\n");
+               shared_mask &= ~ignored_mask;
+       }
+
+       /* Select the (Fast) Read command. */
+       err = spi_nor_select_read(nor, params, shared_mask);
+       if (err) {
+               dev_err(nor->dev,
+                       "can't select read settings supported by both the SPI controller and memory.\n");
+               return err;
+       }
+
+       /* Select the Page Program command. */
+       err = spi_nor_select_pp(nor, params, shared_mask);
+       if (err) {
+               dev_err(nor->dev,
+                       "can't select write settings supported by both the SPI controller and memory.\n");
+               return err;
+       }
+
+       /* Select the Sector Erase command. */
+       err = spi_nor_select_erase(nor, info);
+       if (err) {
+               dev_err(nor->dev,
+                       "can't select erase settings supported by both the SPI controller and memory.\n");
+               return err;
+       }
+
+       /* Enable Quad I/O if needed. */
+       enable_quad_io = (spi_nor_get_protocol_width(nor->read_proto) == 4 ||
+                         spi_nor_get_protocol_width(nor->write_proto) == 4);
+       if (enable_quad_io && params->quad_enable) {
+               err = params->quad_enable(nor);
+               if (err) {
+                       dev_err(nor->dev, "quad mode not supported\n");
+                       return err;
+               }
+       }
+
+       return 0;
+}
+
+int spi_nor_scan(struct spi_nor *nor, const char *name,
+                const struct spi_nor_hwcaps *hwcaps)
+{
+       struct spi_nor_flash_parameter params;
        const struct flash_info *info = NULL;
        struct device *dev = nor->dev;
        struct mtd_info *mtd = &nor->mtd;
@@ -1549,6 +1858,11 @@ int spi_nor_scan(struct spi_nor *nor, const char *name, enum read_mode mode)
        if (ret)
                return ret;
 
+       /* Reset SPI protocol for all commands. */
+       nor->reg_proto = SNOR_PROTO_1_1_1;
+       nor->read_proto = SNOR_PROTO_1_1_1;
+       nor->write_proto = SNOR_PROTO_1_1_1;
+
        if (name)
                info = spi_nor_match_id(name);
        /* Try to auto-detect if chip name wasn't specified or not found */
@@ -1591,6 +1905,11 @@ int spi_nor_scan(struct spi_nor *nor, const char *name, enum read_mode mode)
        if (info->flags & SPI_S3AN)
                nor->flags |=  SNOR_F_READY_XSR_RDY;
 
+       /* Parse the Serial Flash Discoverable Parameters table. */
+       ret = spi_nor_init_params(nor, info, &params);
+       if (ret)
+               return ret;
+
        /*
         * Atmel, SST, Intel/Numonyx, and others serial NOR tend to power up
         * with the software protection bits set
@@ -1611,7 +1930,7 @@ int spi_nor_scan(struct spi_nor *nor, const char *name, enum read_mode mode)
        mtd->type = MTD_NORFLASH;
        mtd->writesize = 1;
        mtd->flags = MTD_CAP_NORFLASH;
-       mtd->size = info->sector_size * info->n_sectors;
+       mtd->size = params.size;
        mtd->_erase = spi_nor_erase;
        mtd->_read = spi_nor_read;
 
@@ -1642,75 +1961,38 @@ int spi_nor_scan(struct spi_nor *nor, const char *name, enum read_mode mode)
        if (info->flags & NO_CHIP_ERASE)
                nor->flags |= SNOR_F_NO_OP_CHIP_ERASE;
 
-#ifdef CONFIG_MTD_SPI_NOR_USE_4K_SECTORS
-       /* prefer "small sector" erase if possible */
-       if (info->flags & SECT_4K) {
-               nor->erase_opcode = SPINOR_OP_BE_4K;
-               mtd->erasesize = 4096;
-       } else if (info->flags & SECT_4K_PMC) {
-               nor->erase_opcode = SPINOR_OP_BE_4K_PMC;
-               mtd->erasesize = 4096;
-       } else
-#endif
-       {
-               nor->erase_opcode = SPINOR_OP_SE;
-               mtd->erasesize = info->sector_size;
-       }
-
        if (info->flags & SPI_NOR_NO_ERASE)
                mtd->flags |= MTD_NO_ERASE;
 
        mtd->dev.parent = dev;
-       nor->page_size = info->page_size;
+       nor->page_size = params.page_size;
        mtd->writebufsize = nor->page_size;
 
        if (np) {
                /* If we were instantiated by DT, use it */
                if (of_property_read_bool(np, "m25p,fast-read"))
-                       nor->flash_read = SPI_NOR_FAST;
+                       params.hwcaps.mask |= SNOR_HWCAPS_READ_FAST;
                else
-                       nor->flash_read = SPI_NOR_NORMAL;
+                       params.hwcaps.mask &= ~SNOR_HWCAPS_READ_FAST;
        } else {
                /* If we weren't instantiated by DT, default to fast-read */
-               nor->flash_read = SPI_NOR_FAST;
+               params.hwcaps.mask |= SNOR_HWCAPS_READ_FAST;
        }
 
        /* Some devices cannot do fast-read, no matter what DT tells us */
        if (info->flags & SPI_NOR_NO_FR)
-               nor->flash_read = SPI_NOR_NORMAL;
-
-       /* Quad/Dual-read mode takes precedence over fast/normal */
-       if (mode == SPI_NOR_QUAD && info->flags & SPI_NOR_QUAD_READ) {
-               ret = set_quad_mode(nor, info);
-               if (ret) {
-                       dev_err(dev, "quad mode not supported\n");
-                       return ret;
-               }
-               nor->flash_read = SPI_NOR_QUAD;
-       } else if (mode == SPI_NOR_DUAL && info->flags & SPI_NOR_DUAL_READ) {
-               nor->flash_read = SPI_NOR_DUAL;
-       }
-
-       /* Default commands */
-       switch (nor->flash_read) {
-       case SPI_NOR_QUAD:
-               nor->read_opcode = SPINOR_OP_READ_1_1_4;
-               break;
-       case SPI_NOR_DUAL:
-               nor->read_opcode = SPINOR_OP_READ_1_1_2;
-               break;
-       case SPI_NOR_FAST:
-               nor->read_opcode = SPINOR_OP_READ_FAST;
-               break;
-       case SPI_NOR_NORMAL:
-               nor->read_opcode = SPINOR_OP_READ;
-               break;
-       default:
-               dev_err(dev, "No Read opcode defined\n");
-               return -EINVAL;
-       }
+               params.hwcaps.mask &= ~SNOR_HWCAPS_READ_FAST;
 
-       nor->program_opcode = SPINOR_OP_PP;
+       /*
+        * Configure the SPI memory:
+        * - select op codes for (Fast) Read, Page Program and Sector Erase.
+        * - set the number of dummy cycles (mode cycles + wait states).
+        * - set the SPI protocols for register and memory accesses.
+        * - set the Quad Enable bit if needed (required by SPI x-y-4 protos).
+        */
+       ret = spi_nor_setup(nor, info, &params, hwcaps);
+       if (ret)
+               return ret;
 
        if (info->addr_width)
                nor->addr_width = info->addr_width;
@@ -1732,8 +2014,6 @@ int spi_nor_scan(struct spi_nor *nor, const char *name, enum read_mode mode)
                return -EINVAL;
        }
 
-       nor->read_dummy = spi_nor_read_dummy_cycles(nor);
-
        if (info->flags & SPI_S3AN) {
                ret = s3an_nor_scan(info, nor);
                if (ret)
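
Editor's note: the read and page-program selection above relies purely on the priority ordering of the capability bits; fls() on the intersection of the controller's and the flash's masks picks the highest-priority command both sides support. Below is a minimal stand-alone sketch of that idea in plain userspace C; the HWCAPS_* values are illustrative stand-ins for a few SNOR_HWCAPS_* bits, and highest_bit() stands in for the kernel's fls(x) - 1.

#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-ins for a few SNOR_HWCAPS_* read bits (same ordering). */
#define HWCAPS_READ		(1u << 0)
#define HWCAPS_READ_FAST	(1u << 1)
#define HWCAPS_READ_1_1_2	(1u << 3)
#define HWCAPS_READ_1_1_4	(1u << 7)

/* Position of the highest set bit, i.e. what fls(x) - 1 returns in-kernel. */
static int highest_bit(uint32_t x)
{
	int pos = -1;

	while (x) {
		pos++;
		x >>= 1;
	}
	return pos;
}

int main(void)
{
	uint32_t ctrl   = HWCAPS_READ | HWCAPS_READ_FAST | HWCAPS_READ_1_1_4;
	uint32_t flash  = HWCAPS_READ | HWCAPS_READ_FAST | HWCAPS_READ_1_1_2;
	uint32_t shared = ctrl & flash;

	/* Prints 1: Fast Read is the best capability both sides support. */
	printf("best shared read bit: %d\n", highest_bit(shared));
	return 0;
}
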
index ae45f81b8cd33fdea3cbedccbd0a1cc11575cd38..86c0931543c538c340421786db8bc4ef5d88a55c 100644 (file)
@@ -19,6 +19,7 @@
 #include <linux/of_device.h>
 #include <linux/platform_device.h>
 #include <linux/reset.h>
+#include <linux/sizes.h>
 
 #define QUADSPI_CR             0x00
 #define CR_EN                  BIT(0)
@@ -192,15 +193,15 @@ static void stm32_qspi_set_framemode(struct spi_nor *nor,
        cmd->framemode = CCR_IMODE_1;
 
        if (read) {
-               switch (nor->flash_read) {
-               case SPI_NOR_NORMAL:
-               case SPI_NOR_FAST:
+               switch (nor->read_proto) {
+               default:
+               case SNOR_PROTO_1_1_1:
                        dmode = CCR_DMODE_1;
                        break;
-               case SPI_NOR_DUAL:
+               case SNOR_PROTO_1_1_2:
                        dmode = CCR_DMODE_2;
                        break;
-               case SPI_NOR_QUAD:
+               case SNOR_PROTO_1_1_4:
                        dmode = CCR_DMODE_4;
                        break;
                }
@@ -375,7 +376,7 @@ static ssize_t stm32_qspi_read(struct spi_nor *nor, loff_t from, size_t len,
        struct stm32_qspi_cmd cmd;
        int err;
 
-       dev_dbg(qspi->dev, "read(%#.2x): buf:%p from:%#.8x len:%#x\n",
+       dev_dbg(qspi->dev, "read(%#.2x): buf:%p from:%#.8x len:%#zx\n",
                nor->read_opcode, buf, (u32)from, len);
 
        memset(&cmd, 0, sizeof(cmd));
@@ -402,7 +403,7 @@ static ssize_t stm32_qspi_write(struct spi_nor *nor, loff_t to, size_t len,
        struct stm32_qspi_cmd cmd;
        int err;
 
-       dev_dbg(dev, "write(%#.2x): buf:%p to:%#.8x len:%#x\n",
+       dev_dbg(dev, "write(%#.2x): buf:%p to:%#.8x len:%#zx\n",
                nor->program_opcode, buf, (u32)to, len);
 
        memset(&cmd, 0, sizeof(cmd));
@@ -480,7 +481,12 @@ static void stm32_qspi_unprep(struct spi_nor *nor, enum spi_nor_ops ops)
 static int stm32_qspi_flash_setup(struct stm32_qspi *qspi,
                                  struct device_node *np)
 {
-       u32 width, flash_read, presc, cs_num, max_rate = 0;
+       struct spi_nor_hwcaps hwcaps = {
+               .mask = SNOR_HWCAPS_READ |
+                       SNOR_HWCAPS_READ_FAST |
+                       SNOR_HWCAPS_PP,
+       };
+       u32 width, presc, cs_num, max_rate = 0;
        struct stm32_qspi_flash *flash;
        struct mtd_info *mtd;
        int ret;
@@ -499,12 +505,10 @@ static int stm32_qspi_flash_setup(struct stm32_qspi *qspi,
                width = 1;
 
        if (width == 4)
-               flash_read = SPI_NOR_QUAD;
+               hwcaps.mask |= SNOR_HWCAPS_READ_1_1_4;
        else if (width == 2)
-               flash_read = SPI_NOR_DUAL;
-       else if (width == 1)
-               flash_read = SPI_NOR_NORMAL;
-       else
+               hwcaps.mask |= SNOR_HWCAPS_READ_1_1_2;
+       else if (width != 1)
                return -EINVAL;
 
        flash = &qspi->flash[cs_num];
@@ -539,7 +543,7 @@ static int stm32_qspi_flash_setup(struct stm32_qspi *qspi,
         */
        flash->fsize = FSIZE_VAL(SZ_1K);
 
-       ret = spi_nor_scan(&flash->nor, NULL, flash_read);
+       ret = spi_nor_scan(&flash->nor, NULL, &hwcaps);
        if (ret) {
                dev_err(qspi->dev, "device scan failed\n");
                return ret;
index aecc6ce5a9e1c131a38d657325356ab590de845c..fa2519ad2435eea3c2b32a00ce507d59acc6aba1 100644 (file)
@@ -102,7 +102,7 @@ static int write_eraseblock2(int ebnum)
                if (unlikely(err || written != subpgsize * k)) {
                        pr_err("error: write failed at %#llx\n",
                               (long long)addr);
-                       if (written != subpgsize) {
+                       if (written != subpgsize * k) {
                                pr_err("  write size: %#x\n",
                                       subpgsize * k);
                                pr_err("  written: %#08zx\n",
index e389009fca42c0caa447dc8f9b37360e3565a456..a4e3ae8f0c85fb441cb528425347b9bae6562d20 100644 (file)
@@ -915,6 +915,8 @@ static int spinand_probe(struct spi_device *spi_nand)
        chip->waitfunc  = spinand_wait;
        chip->options   |= NAND_CACHEPRG;
        chip->select_chip = spinand_select_chip;
+       chip->onfi_set_features = nand_onfi_get_set_features_notsupp;
+       chip->onfi_get_features = nand_onfi_get_set_features_notsupp;
 
        mtd = nand_to_mtd(chip);
 
index de0d889e4fe14fd4e3be96619a1fda43d34d73e4..892148c448cce2c9c9e9e59b2d44941283635cbd 100644 (file)
@@ -107,6 +107,8 @@ int nand_unlock(struct mtd_info *mtd, loff_t ofs, uint64_t len);
 #define NAND_STATUS_READY      0x40
 #define NAND_STATUS_WP         0x80
 
+#define NAND_DATA_IFACE_CHECK_ONLY     -1
+
 /*
  * Constants for ECC_MODES
  */
@@ -116,6 +118,7 @@ typedef enum {
        NAND_ECC_HW,
        NAND_ECC_HW_SYNDROME,
        NAND_ECC_HW_OOB_FIRST,
+       NAND_ECC_ON_DIE,
 } nand_ecc_modes_t;
 
 enum nand_ecc_algo {
@@ -257,6 +260,8 @@ struct nand_chip;
 
 /* Vendor-specific feature address (Micron) */
 #define ONFI_FEATURE_ADDR_READ_RETRY   0x89
+#define ONFI_FEATURE_ON_DIE_ECC                0x90
+#define   ONFI_FEATURE_ON_DIE_ECC_EN   BIT(3)
 
 /* ONFI subfeature parameters length */
 #define ONFI_SUBFEATURE_PARAM_LEN      4
@@ -476,6 +481,44 @@ static inline void nand_hw_control_init(struct nand_hw_control *nfc)
        init_waitqueue_head(&nfc->wq);
 }
 
+/**
+ * struct nand_ecc_step_info - ECC step information of ECC engine
+ * @stepsize: data bytes per ECC step
+ * @strengths: array of supported strengths
+ * @nstrengths: number of supported strengths
+ */
+struct nand_ecc_step_info {
+       int stepsize;
+       const int *strengths;
+       int nstrengths;
+};
+
+/**
+ * struct nand_ecc_caps - capability of ECC engine
+ * @stepinfos: array of ECC step information
+ * @nstepinfos: number of ECC step information
+ * @calc_ecc_bytes: driver's hook to calculate ECC bytes per step
+ */
+struct nand_ecc_caps {
+       const struct nand_ecc_step_info *stepinfos;
+       int nstepinfos;
+       int (*calc_ecc_bytes)(int step_size, int strength);
+};
+
+/* a shorthand to generate struct nand_ecc_caps with only one ECC stepsize */
+#define NAND_ECC_CAPS_SINGLE(__name, __calc, __step, ...)      \
+static const int __name##_strengths[] = { __VA_ARGS__ };       \
+static const struct nand_ecc_step_info __name##_stepinfo = {   \
+       .stepsize = __step,                                     \
+       .strengths = __name##_strengths,                        \
+       .nstrengths = ARRAY_SIZE(__name##_strengths),           \
+};                                                             \
+static const struct nand_ecc_caps __name = {                   \
+       .stepinfos = &__name##_stepinfo,                        \
+       .nstepinfos = 1,                                        \
+       .calc_ecc_bytes = __calc,                               \
+}
+
 /**
  * struct nand_ecc_ctrl - Control structure for ECC
  * @mode:      ECC mode
@@ -815,7 +858,10 @@ struct nand_manufacturer_ops {
  * @read_retries:      [INTERN] the number of read retry modes supported
  * @onfi_set_features: [REPLACEABLE] set the features for ONFI nand
  * @onfi_get_features: [REPLACEABLE] get the features for ONFI nand
- * @setup_data_interface: [OPTIONAL] setup the data interface and timing
+ * @setup_data_interface: [OPTIONAL] setup the data interface and timing. If
+ *                       chipnr is set to %NAND_DATA_IFACE_CHECK_ONLY this
+ *                       means the configuration should not be applied but
+ *                       only checked.
  * @bbt:               [INTERN] bad block table pointer
  * @bbt_td:            [REPLACEABLE] bad block table descriptor for flash
  *                     lookup.
@@ -826,9 +872,6 @@ struct nand_manufacturer_ops {
  *                     structure which is shared among multiple independent
  *                     devices.
  * @priv:              [OPTIONAL] pointer to private chip data
- * @errstat:           [OPTIONAL] hardware specific function to perform
- *                     additional error status checks (determine if errors are
- *                     correctable).
  * @manufacturer:      [INTERN] Contains manufacturer information
  */
 
@@ -852,16 +895,13 @@ struct nand_chip {
        int(*waitfunc)(struct mtd_info *mtd, struct nand_chip *this);
        int (*erase)(struct mtd_info *mtd, int page);
        int (*scan_bbt)(struct mtd_info *mtd);
-       int (*errstat)(struct mtd_info *mtd, struct nand_chip *this, int state,
-                       int status, int page);
        int (*onfi_set_features)(struct mtd_info *mtd, struct nand_chip *chip,
                        int feature_addr, uint8_t *subfeature_para);
        int (*onfi_get_features)(struct mtd_info *mtd, struct nand_chip *chip,
                        int feature_addr, uint8_t *subfeature_para);
        int (*setup_read_retry)(struct mtd_info *mtd, int retry_mode);
-       int (*setup_data_interface)(struct mtd_info *mtd,
-                                   const struct nand_data_interface *conf,
-                                   bool check_only);
+       int (*setup_data_interface)(struct mtd_info *mtd, int chipnr,
+                                   const struct nand_data_interface *conf);
 
 
        int chip_delay;
@@ -1244,6 +1284,15 @@ int nand_check_erased_ecc_chunk(void *data, int datalen,
                                void *extraoob, int extraooblen,
                                int threshold);
 
+int nand_check_ecc_caps(struct nand_chip *chip,
+                       const struct nand_ecc_caps *caps, int oobavail);
+
+int nand_match_ecc_req(struct nand_chip *chip,
+                      const struct nand_ecc_caps *caps,  int oobavail);
+
+int nand_maximize_ecc(struct nand_chip *chip,
+                     const struct nand_ecc_caps *caps, int oobavail);
+
 /* Default write_oob implementation */
 int nand_write_oob_std(struct mtd_info *mtd, struct nand_chip *chip, int page);
 
@@ -1258,6 +1307,19 @@ int nand_read_oob_std(struct mtd_info *mtd, struct nand_chip *chip, int page);
 int nand_read_oob_syndrome(struct mtd_info *mtd, struct nand_chip *chip,
                           int page);
 
+/* Stub used by drivers that do not support GET/SET FEATURES operations */
+int nand_onfi_get_set_features_notsupp(struct mtd_info *mtd,
+                                      struct nand_chip *chip, int addr,
+                                      u8 *subfeature_param);
+
+/* Default read_page_raw implementation */
+int nand_read_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
+                      uint8_t *buf, int oob_required, int page);
+
+/* Default write_page_raw implementation */
+int nand_write_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
+                       const uint8_t *buf, int oob_required, int page);
+
 /* Reset and initialize a NAND device */
 int nand_reset(struct nand_chip *chip, int chipnr);
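
Editor's note: the new nand_ecc_caps plumbing lets the core pick ECC settings on the driver's behalf. A hedged sketch of how a controller driver might wire it up follows; the "foo" names, the 512-byte step size, the strengths and the bytes-per-step formula are all illustrative, not taken from any real driver.

/* Hypothetical bytes-per-step calculation for a made-up "foo" controller. */
static int foo_calc_ecc_bytes(int step_size, int strength)
{
	/* e.g. 14 parity bits per bit of correction, rounded up to bytes */
	return DIV_ROUND_UP(strength * 14, 8);
}
NAND_ECC_CAPS_SINGLE(foo_ecc_caps, foo_calc_ecc_bytes, 512, 4, 8, 16);

static int foo_ecc_init(struct nand_chip *chip, int oobavail)
{
	/* A preset (e.g. DT-provided) size/strength pair is validated... */
	if (chip->ecc.size && chip->ecc.strength)
		return nand_check_ecc_caps(chip, &foo_ecc_caps, oobavail);

	/* ...otherwise pick the closest match to the chip's requirement. */
	return nand_match_ecc_req(chip, &foo_ecc_caps, oobavail);
}
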
 
index 06df1e06b6e03bb953f950ad55e9e516ed5a9dc5..c4beb70dacbd6c3a54ef6078bec828afe0a84e28 100644 (file)
  *
  * For each partition, these fields are available:
  * name: string that will be used to label the partition's MTD device.
+ * types: some partitions can be containers using a specific format to describe
+ *     embedded subpartitions / volumes. E.g. many home routers use a "firmware"
+ *     partition that contains at least a kernel and a rootfs. In such a case an
+ *     extra parser is needed that will detect these dynamic partitions and
+ *     report them to the MTD subsystem. If set, this property stores an array
+ *     of parser names to use when looking for subpartitions.
  * size: the partition size; if defined as MTDPART_SIZ_FULL, the partition
  *     will extend to the end of the master MTD device.
  * offset: absolute starting position within the master MTD device; if
@@ -38,6 +44,7 @@
 
 struct mtd_partition {
        const char *name;               /* identifier string */
+       const char *const *types;       /* names of parsers to use if any */
        uint64_t size;                  /* partition size */
        uint64_t offset;                /* offset within the master MTD space */
        uint32_t mask_flags;            /* master MTD flags to mask out for this partition */
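
Editor's note: for illustration, a hedged sketch of a static partition table using the new field. The board layout, offsets and the "trx" parser name are illustrative, and the list is assumed to be NULL-terminated like other parser-name arrays in the MTD code.

static const char * const firmware_part_parsers[] = { "trx", NULL };

static const struct mtd_partition board_parts[] = {
	{
		.name	= "boot",
		.offset	= 0,
		.size	= 0x40000,
	},
	{
		.name	= "firmware",	/* container: kernel + rootfs inside */
		.offset	= 0x40000,
		.size	= MTDPART_SIZ_FULL,
		.types	= firmware_part_parsers,
	},
};
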
index f2a718030476f734c6f8c4a620b7ea3f1770dc28..55faa2f07ccaf29ccef6a6bf46a66872e83d7a39 100644 (file)
 #define SPINOR_OP_BE_32K_4B    0x5c    /* Erase 32KiB block */
 #define SPINOR_OP_SE_4B                0xdc    /* Sector erase (usually 64KiB) */
 
+/* Double Transfer Rate opcodes - defined in JEDEC JESD216B. */
+#define SPINOR_OP_READ_1_1_1_DTR       0x0d
+#define SPINOR_OP_READ_1_2_2_DTR       0xbd
+#define SPINOR_OP_READ_1_4_4_DTR       0xed
+
+#define SPINOR_OP_READ_1_1_1_DTR_4B    0x0e
+#define SPINOR_OP_READ_1_2_2_DTR_4B    0xbe
+#define SPINOR_OP_READ_1_4_4_DTR_4B    0xee
+
 /* Used for SST flashes only. */
 #define SPINOR_OP_BP           0x02    /* Byte program */
 #define SPINOR_OP_WRDI         0x04    /* Write disable */
 /* Configuration Register bits. */
 #define CR_QUAD_EN_SPAN                BIT(1)  /* Spansion Quad I/O */
 
-enum read_mode {
-       SPI_NOR_NORMAL = 0,
-       SPI_NOR_FAST,
-       SPI_NOR_DUAL,
-       SPI_NOR_QUAD,
+/* Supported SPI protocols */
+#define SNOR_PROTO_INST_MASK   GENMASK(23, 16)
+#define SNOR_PROTO_INST_SHIFT  16
+#define SNOR_PROTO_INST(_nbits)        \
+       ((((unsigned long)(_nbits)) << SNOR_PROTO_INST_SHIFT) & \
+        SNOR_PROTO_INST_MASK)
+
+#define SNOR_PROTO_ADDR_MASK   GENMASK(15, 8)
+#define SNOR_PROTO_ADDR_SHIFT  8
+#define SNOR_PROTO_ADDR(_nbits)        \
+       ((((unsigned long)(_nbits)) << SNOR_PROTO_ADDR_SHIFT) & \
+        SNOR_PROTO_ADDR_MASK)
+
+#define SNOR_PROTO_DATA_MASK   GENMASK(7, 0)
+#define SNOR_PROTO_DATA_SHIFT  0
+#define SNOR_PROTO_DATA(_nbits)        \
+       ((((unsigned long)(_nbits)) << SNOR_PROTO_DATA_SHIFT) & \
+        SNOR_PROTO_DATA_MASK)
+
+#define SNOR_PROTO_IS_DTR      BIT(24) /* Double Transfer Rate */
+
+#define SNOR_PROTO_STR(_inst_nbits, _addr_nbits, _data_nbits)  \
+       (SNOR_PROTO_INST(_inst_nbits) |                         \
+        SNOR_PROTO_ADDR(_addr_nbits) |                         \
+        SNOR_PROTO_DATA(_data_nbits))
+#define SNOR_PROTO_DTR(_inst_nbits, _addr_nbits, _data_nbits)  \
+       (SNOR_PROTO_IS_DTR |                                    \
+        SNOR_PROTO_STR(_inst_nbits, _addr_nbits, _data_nbits))
+
+enum spi_nor_protocol {
+       SNOR_PROTO_1_1_1 = SNOR_PROTO_STR(1, 1, 1),
+       SNOR_PROTO_1_1_2 = SNOR_PROTO_STR(1, 1, 2),
+       SNOR_PROTO_1_1_4 = SNOR_PROTO_STR(1, 1, 4),
+       SNOR_PROTO_1_1_8 = SNOR_PROTO_STR(1, 1, 8),
+       SNOR_PROTO_1_2_2 = SNOR_PROTO_STR(1, 2, 2),
+       SNOR_PROTO_1_4_4 = SNOR_PROTO_STR(1, 4, 4),
+       SNOR_PROTO_1_8_8 = SNOR_PROTO_STR(1, 8, 8),
+       SNOR_PROTO_2_2_2 = SNOR_PROTO_STR(2, 2, 2),
+       SNOR_PROTO_4_4_4 = SNOR_PROTO_STR(4, 4, 4),
+       SNOR_PROTO_8_8_8 = SNOR_PROTO_STR(8, 8, 8),
+
+       SNOR_PROTO_1_1_1_DTR = SNOR_PROTO_DTR(1, 1, 1),
+       SNOR_PROTO_1_2_2_DTR = SNOR_PROTO_DTR(1, 2, 2),
+       SNOR_PROTO_1_4_4_DTR = SNOR_PROTO_DTR(1, 4, 4),
+       SNOR_PROTO_1_8_8_DTR = SNOR_PROTO_DTR(1, 8, 8),
 };
 
+static inline bool spi_nor_protocol_is_dtr(enum spi_nor_protocol proto)
+{
+       return !!(proto & SNOR_PROTO_IS_DTR);
+}
+
+static inline u8 spi_nor_get_protocol_inst_nbits(enum spi_nor_protocol proto)
+{
+       return ((unsigned long)(proto & SNOR_PROTO_INST_MASK)) >>
+               SNOR_PROTO_INST_SHIFT;
+}
+
+static inline u8 spi_nor_get_protocol_addr_nbits(enum spi_nor_protocol proto)
+{
+       return ((unsigned long)(proto & SNOR_PROTO_ADDR_MASK)) >>
+               SNOR_PROTO_ADDR_SHIFT;
+}
+
+static inline u8 spi_nor_get_protocol_data_nbits(enum spi_nor_protocol proto)
+{
+       return ((unsigned long)(proto & SNOR_PROTO_DATA_MASK)) >>
+               SNOR_PROTO_DATA_SHIFT;
+}
+
+static inline u8 spi_nor_get_protocol_width(enum spi_nor_protocol proto)
+{
+       return spi_nor_get_protocol_data_nbits(proto);
+}
+
 #define SPI_NOR_MAX_CMD_SIZE   8
 enum spi_nor_ops {
        SPI_NOR_OPS_READ = 0,
@@ -154,9 +231,11 @@ enum spi_nor_option_flags {
  * @read_opcode:       the read opcode
  * @read_dummy:                the dummy needed by the read operation
  * @program_opcode:    the program opcode
- * @flash_read:                the mode of the read
  * @sst_write_second:  used by the SST write operation
  * @flags:             flag options for the current SPI-NOR (SNOR_F_*)
+ * @read_proto:                the SPI protocol for read operations
+ * @write_proto:       the SPI protocol for write operations
+ * @reg_proto:         the SPI protocol for read_reg/write_reg/erase operations
  * @cmd_buf:           used by the write_reg
  * @prepare:           [OPTIONAL] do some preparations for the
  *                     read/write/erase/lock/unlock operations
@@ -185,7 +264,9 @@ struct spi_nor {
        u8                      read_opcode;
        u8                      read_dummy;
        u8                      program_opcode;
-       enum read_mode          flash_read;
+       enum spi_nor_protocol   read_proto;
+       enum spi_nor_protocol   write_proto;
+       enum spi_nor_protocol   reg_proto;
        bool                    sst_write_second;
        u32                     flags;
        u8                      cmd_buf[SPI_NOR_MAX_CMD_SIZE];
@@ -219,11 +300,72 @@ static inline struct device_node *spi_nor_get_flash_node(struct spi_nor *nor)
        return mtd_get_of_node(&nor->mtd);
 }
 
+/**
+ * struct spi_nor_hwcaps - Structure for describing the hardware capabilities
+ * supported by the SPI controller (bus master).
+ * @mask:              the bitmask listing all the supported hw capabilities
+ */
+struct spi_nor_hwcaps {
+       u32     mask;
+};
+
+/*
+ * (Fast) Read capabilities.
+ * MUST be ordered by priority: the higher bit position, the higher priority.
+ * For best performance, Octo SPI protocols should be used first,
+ * then Quad SPI protocols before Dual SPI protocols, Fast Read and lastly
+ * (Slow) Read.
+ */
+#define SNOR_HWCAPS_READ_MASK          GENMASK(14, 0)
+#define SNOR_HWCAPS_READ               BIT(0)
+#define SNOR_HWCAPS_READ_FAST          BIT(1)
+#define SNOR_HWCAPS_READ_1_1_1_DTR     BIT(2)
+
+#define SNOR_HWCAPS_READ_DUAL          GENMASK(6, 3)
+#define SNOR_HWCAPS_READ_1_1_2         BIT(3)
+#define SNOR_HWCAPS_READ_1_2_2         BIT(4)
+#define SNOR_HWCAPS_READ_2_2_2         BIT(5)
+#define SNOR_HWCAPS_READ_1_2_2_DTR     BIT(6)
+
+#define SNOR_HWCAPS_READ_QUAD          GENMASK(10, 7)
+#define SNOR_HWCAPS_READ_1_1_4         BIT(7)
+#define SNOR_HWCAPS_READ_1_4_4         BIT(8)
+#define SNOR_HWCAPS_READ_4_4_4         BIT(9)
+#define SNOR_HWCAPS_READ_1_4_4_DTR     BIT(10)
+
+#define SNOR_HWCAPS_READ_OCTO          GENMASK(14, 11)
+#define SNOR_HWCAPS_READ_1_1_8         BIT(11)
+#define SNOR_HWCAPS_READ_1_8_8         BIT(12)
+#define SNOR_HWCAPS_READ_8_8_8         BIT(13)
+#define SNOR_HWCAPS_READ_1_8_8_DTR     BIT(14)
+
+/*
+ * Page Program capabilities.
+ * MUST be ordered by priority: the higher bit position, the higher priority.
+ * Like (Fast) Read capabilities, Octo/Quad SPI protocols are preferred to the
+ * legacy SPI 1-1-1 protocol.
+ * Note that Dual Page Programs are not supported because there is no existing
+ * JEDEC/SFDP standard to define them. Also at this moment no SPI flash memory
+ * implements such commands.
+ */
+#define SNOR_HWCAPS_PP_MASK    GENMASK(22, 16)
+#define SNOR_HWCAPS_PP         BIT(16)
+
+#define SNOR_HWCAPS_PP_QUAD    GENMASK(19, 17)
+#define SNOR_HWCAPS_PP_1_1_4   BIT(17)
+#define SNOR_HWCAPS_PP_1_4_4   BIT(18)
+#define SNOR_HWCAPS_PP_4_4_4   BIT(19)
+
+#define SNOR_HWCAPS_PP_OCTO    GENMASK(22, 20)
+#define SNOR_HWCAPS_PP_1_1_8   BIT(20)
+#define SNOR_HWCAPS_PP_1_8_8   BIT(21)
+#define SNOR_HWCAPS_PP_8_8_8   BIT(22)
+
 /**
  * spi_nor_scan() - scan the SPI NOR
  * @nor:       the spi_nor structure
  * @name:      the chip type name
- * @mode:      the read mode supported by the driver
+ * @hwcaps:    the hardware capabilities supported by the controller driver
  *
  * The drivers can use this function to scan the SPI NOR.
  * In the scanning, it will try to get all the necessary information to
@@ -233,6 +375,7 @@ static inline struct device_node *spi_nor_get_flash_node(struct spi_nor *nor)
  *
  * Return: 0 for success, others for failure.
  */
-int spi_nor_scan(struct spi_nor *nor, const char *name, enum read_mode mode);
+int spi_nor_scan(struct spi_nor *nor, const char *name,
+                const struct spi_nor_hwcaps *hwcaps);
 
 #endif
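
Editor's note: tying the pieces together, a hedged sketch of how a controller driver is expected to use the new interface (the foo_* naming and the exact capability set are illustrative). The driver advertises what its bus can do, spi_nor_scan() intersects that with the flash's capabilities, and the selected protocols, opcodes and dummy cycles land in the spi_nor fields, which can be decoded with the helpers above.

static int foo_nor_setup(struct spi_nor *nor)
{
	const struct spi_nor_hwcaps hwcaps = {
		.mask = SNOR_HWCAPS_READ |
			SNOR_HWCAPS_READ_FAST |
			SNOR_HWCAPS_READ_1_1_2 |
			SNOR_HWCAPS_READ_1_1_4 |
			SNOR_HWCAPS_PP,
	};
	int ret;

	ret = spi_nor_scan(nor, NULL, &hwcaps);
	if (ret)
		return ret;

	/*
	 * e.g. SNOR_PROTO_1_1_4 decodes to 1 instruction line, 1 address
	 * line and 4 data lines; the "width" helper returns the data width.
	 */
	pr_debug("read %d-%d-%d, opcode %#x, %d dummy cycles\n",
		 spi_nor_get_protocol_inst_nbits(nor->read_proto),
		 spi_nor_get_protocol_addr_nbits(nor->read_proto),
		 spi_nor_get_protocol_data_nbits(nor->read_proto),
		 nor->read_opcode, nor->read_dummy);

	return 0;
}
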