Commit b3ce1deb authored by Linus Torvalds

Merge master.kernel.org:/pub/scm/linux/kernel/git/tglx/mtd-2.6

Some manual fixups for clashing kfree() cleanups etc.
parents 5b2f7ffc c2965f11
# $Id: Kconfig,v 1.7 2004/11/22 11:33:56 ijc Exp $
# $Id: Kconfig,v 1.11 2005/11/07 11:14:19 gleixner Exp $
menu "Memory Technology Devices (MTD)"
......@@ -10,7 +10,7 @@ config MTD
will provide the generic support for MTD drivers to register
themselves with the kernel and for potential users of MTD devices
to enumerate the devices which are present and obtain a handle on
them. It will also allow you to select individual drivers for
particular hardware and users of MTD devices. If unsure, say N.
config MTD_DEBUG
......@@ -61,11 +61,11 @@ config MTD_REDBOOT_PARTS
If you need code which can detect and parse this table, and register
MTD 'partitions' corresponding to each image in the table, enable
this option.
You will still need the parsing functions to be called by the driver
for your particular device. It won't happen automatically. The
SA1100 map driver (CONFIG_MTD_SA1100) has an option for this, for
example.
config MTD_REDBOOT_DIRECTORY_BLOCK
......@@ -81,10 +81,10 @@ config MTD_REDBOOT_DIRECTORY_BLOCK
partition table. A zero or positive value gives an absolute
erase block number. A negative value specifies a number of
sectors before the end of the device.
For example "2" means block number 2, "-1" means the last
block and "-2" means the penultimate block.
config MTD_REDBOOT_PARTS_UNALLOCATED
bool " Include unallocated flash regions"
depends on MTD_REDBOOT_PARTS
......@@ -105,11 +105,11 @@ config MTD_CMDLINE_PARTS
---help---
Allow generic configuration of the MTD partition tables via the kernel
command line. Multiple flash resources are supported for hardware where
different kinds of flash memory are available.
You will still need the parsing functions to be called by the driver
for your particular device. It won't happen automatically. The
SA1100 map driver (CONFIG_MTD_SA1100) has an option for this, for
example.
The format for the command line is as follows:
......@@ -118,12 +118,12 @@ config MTD_CMDLINE_PARTS
<mtddef> := <mtd-id>:<partdef>[,<partdef>]
<partdef> := <size>[@offset][<name>][ro]
<mtd-id> := unique id used in mapping driver/device
<size> := standard linux memsize OR "-" to denote all
remaining space
<name> := (NAME)
Due to the way Linux handles the command line, no spaces are
allowed in the partition definition, including mtd id's and partition
names.
Examples:
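For instance, assuming (purely for illustration) an <mtd-id> of "sa1100", a
256KiB read-only boot partition followed by a second partition taking all
remaining space could be written as:
  mtdparts=sa1100:256k(boot)ro,-(root)
Here "256k" and "-" are <size> fields, "(boot)" and "(root)" are <name>
fields, and "ro" marks the first partition read-only.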
......@@ -240,7 +240,7 @@ config INFTL
tristate "INFTL (Inverse NAND Flash Translation Layer) support"
depends on MTD
---help---
This provides support for the Inverse NAND Flash Translation
Layer which is used on M-Systems' newer DiskOnChip devices. It
uses a kind of pseudo-file system on a flash device to emulate
a block device with 512-byte sectors, on top of which you put
......@@ -253,6 +253,16 @@ config INFTL
permitted to copy, modify and distribute the code as you wish. Just
not use it.
config RFD_FTL
tristate "Resident Flash Disk (Flash Translation Layer) support"
depends on MTD
---help---
This provides support for the flash translation layer known
as the Resident Flash Disk (RFD), as used by the Embedded BIOS
of General Software. There is a blurb at:
http://www.gensw.com/pages/prod/bios/rfd.htm
source "drivers/mtd/chips/Kconfig"
source "drivers/mtd/maps/Kconfig"
......@@ -261,5 +271,7 @@ source "drivers/mtd/devices/Kconfig"
source "drivers/mtd/nand/Kconfig"
source "drivers/mtd/onenand/Kconfig"
endmenu
#
# Makefile for the memory technology device drivers.
#
# $Id: Makefile.common,v 1.5 2004/08/10 20:51:49 dwmw2 Exp $
# $Id: Makefile.common,v 1.7 2005/07/11 10:39:27 gleixner Exp $
# Core functionality.
mtd-y := mtdcore.o
......@@ -20,8 +20,9 @@ obj-$(CONFIG_MTD_BLOCK_RO) += mtdblock_ro.o mtd_blkdevs.o
obj-$(CONFIG_FTL) += ftl.o mtd_blkdevs.o
obj-$(CONFIG_NFTL) += nftl.o mtd_blkdevs.o
obj-$(CONFIG_INFTL) += inftl.o mtd_blkdevs.o
obj-$(CONFIG_RFD_FTL) += rfd_ftl.o mtd_blkdevs.o
nftl-objs := nftlcore.o nftlmount.o
inftl-objs := inftlcore.o inftlmount.o
obj-y += chips/ maps/ devices/ nand/
obj-y += chips/ maps/ devices/ nand/ onenand/
/*======================================================================
drivers/mtd/afs.c: ARM Flash Layout/Partitioning
Copyright (C) 2000 ARM Limited
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
This is access code for flashes using ARM's flash partitioning
standards.
$Id: afs.c,v 1.13 2004/02/27 22:09:59 rmk Exp $
$Id: afs.c,v 1.15 2005/11/07 11:14:19 gleixner Exp $
======================================================================*/
......@@ -163,7 +163,7 @@ afs_read_iis(struct mtd_info *mtd, struct image_info_struct *iis, u_int ptr)
return ret;
}
static int parse_afs_partitions(struct mtd_info *mtd,
struct mtd_partition **pparts,
unsigned long origin)
{
......
# drivers/mtd/chips/Kconfig
# $Id: Kconfig,v 1.15 2005/06/06 23:04:35 tpoynor Exp $
# $Id: Kconfig,v 1.18 2005/11/07 11:14:22 gleixner Exp $
menu "RAM/ROM/Flash chip drivers"
depends on MTD!=n
......@@ -39,7 +39,7 @@ config MTD_CFI_ADV_OPTIONS
If you need to specify a specific endianness for access to flash
chips, or if you wish to reduce the size of the kernel by including
support for only specific arrangements of flash chips, say 'Y'. This
option does not directly affect the code, but will enable other
configuration options which allow you to do so.
If unsure, say 'N'.
......@@ -56,7 +56,7 @@ config MTD_CFI_NOSWAP
data bits when writing the 'magic' commands to the chips. Saying
'NO', which is the default when CONFIG_MTD_CFI_ADV_OPTIONS isn't
enabled, means that the CPU will not do any swapping; the chips
are expected to be wired to the CPU in 'host-endian' form.
Specific arrangements are possible with the BIG_ENDIAN_BYTE and
LITTLE_ENDIAN_BYTE, if the bytes are reversed.
......@@ -79,10 +79,10 @@ config MTD_CFI_GEOMETRY
bool "Specific CFI Flash geometry selection"
depends on MTD_CFI_ADV_OPTIONS
help
This option does not affect the code directly, but will enable
some other configuration options which would allow you to reduce
the size of the kernel by including support for only certain
arrangements of CFI chips. If unsure, say 'N' and all options
which are supported by the current code will be enabled.
config MTD_MAP_BANK_WIDTH_1
......@@ -197,7 +197,7 @@ config MTD_CFI_AMDSTD
help
The Common Flash Interface defines a number of different command
sets which a CFI-compliant chip may claim to implement. This code
provides support for one of those command sets, used on chips
including the AMD Am29LV320.
config MTD_CFI_AMDSTD_RETRY
......@@ -237,14 +237,14 @@ config MTD_RAM
tristate "Support for RAM chips in bus mapping"
depends on MTD
help
This option enables basic support for RAM chips accessed through
a bus mapping driver.
config MTD_ROM
tristate "Support for ROM chips in bus mapping"
depends on MTD
help
This option enables basic support for ROM chips accessed through
a bus mapping driver.
config MTD_ABSENT
......@@ -275,7 +275,7 @@ config MTD_AMDSTD
depends on MTD && MTD_OBSOLETE_CHIPS
help
This option enables support for flash chips using AMD-compatible
commands, including some which are not CFI-compatible and hence
cannot be used with the CONFIG_MTD_CFI_AMDSTD option.
It also works on AMD compatible chips that do conform to CFI.
......@@ -285,7 +285,7 @@ config MTD_SHARP
depends on MTD && MTD_OBSOLETE_CHIPS
help
This option enables support for flash chips using Sharp-compatible
commands, including some which are not CFI-compatible and hence
cannot be used with the CONFIG_MTD_CFI_INTELxxx options.
config MTD_JEDEC
......
#
# linux/drivers/chips/Makefile
#
# $Id: Makefile.common,v 1.4 2004/07/12 16:07:30 dwmw2 Exp $
# $Id: Makefile.common,v 1.5 2005/11/07 11:14:22 gleixner Exp $
# *** BIG UGLY NOTE ***
#
......@@ -11,7 +11,7 @@
# the CFI command set drivers are linked before gen_probe.o
obj-$(CONFIG_MTD) += chipreg.o
obj-$(CONFIG_MTD_AMDSTD) += amd_flash.o
obj-$(CONFIG_MTD_CFI) += cfi_probe.o
obj-$(CONFIG_MTD_CFI_UTIL) += cfi_util.o
obj-$(CONFIG_MTD_CFI_STAA) += cfi_cmdset_0020.o
......
......@@ -3,7 +3,7 @@
*
* Author: Jonas Holmberg <jonas.holmberg@axis.com>
*
* $Id: amd_flash.c,v 1.27 2005/02/04 07:43:09 jonashg Exp $
* $Id: amd_flash.c,v 1.28 2005/11/07 11:14:22 gleixner Exp $
*
* Copyright (c) 2001 Axis Communications AB
*
......@@ -93,9 +93,9 @@
#define D6_MASK 0x40
struct amd_flash_private {
int device_type;
int interleave;
int numchips;
unsigned long chipshift;
// const char *im_name;
struct flchip chips[0];
......@@ -253,7 +253,7 @@ static int amd_flash_do_unlock(struct mtd_info *mtd, loff_t ofs, size_t len,
int i;
int retval = 0;
int lock_status;
map = mtd->priv;
/* Pass the whole chip through sector by sector and check for each
......@@ -273,7 +273,7 @@ static int amd_flash_do_unlock(struct mtd_info *mtd, loff_t ofs, size_t len,
unlock_sector(map, eraseoffset, is_unlock);
lock_status = is_sector_locked(map, eraseoffset);
if (is_unlock && lock_status) {
printk("Cannot unlock sector at address %x length %x\n",
eraseoffset, merip->erasesize);
......@@ -305,7 +305,7 @@ static int amd_flash_lock(struct mtd_info *mtd, loff_t ofs, size_t len)
/*
* Reads JEDEC manufacturer ID and device ID and returns the index of the first
* matching table entry (-1 if not found or alias for already found chip).
*/
static int probe_new_chip(struct mtd_info *mtd, __u32 base,
struct flchip *chips,
struct amd_flash_private *private,
......@@ -636,7 +636,7 @@ static struct mtd_info *amd_flash_probe(struct map_info *map)
{ .offset = 0x000000, .erasesize = 0x10000, .numblocks = 31 },
{ .offset = 0x1F0000, .erasesize = 0x02000, .numblocks = 8 }
}
}
}
};
struct mtd_info *mtd;
......@@ -701,7 +701,7 @@ static struct mtd_info *amd_flash_probe(struct map_info *map)
mtd->eraseregions = kmalloc(sizeof(struct mtd_erase_region_info) *
mtd->numeraseregions, GFP_KERNEL);
if (!mtd->eraseregions) {
printk(KERN_WARNING "%s: Failed to allocate "
"memory for MTD erase region info\n", map->name);
kfree(mtd);
......@@ -739,12 +739,12 @@ static struct mtd_info *amd_flash_probe(struct map_info *map)
mtd->type = MTD_NORFLASH;
mtd->flags = MTD_CAP_NORFLASH;
mtd->name = map->name;
mtd->erase = amd_flash_erase;
mtd->read = amd_flash_read;
mtd->write = amd_flash_write;
mtd->sync = amd_flash_sync;
mtd->suspend = amd_flash_suspend;
mtd->resume = amd_flash_resume;
mtd->lock = amd_flash_lock;
mtd->unlock = amd_flash_unlock;
......@@ -789,7 +789,7 @@ retry:
map->name, chip->state);
set_current_state(TASK_UNINTERRUPTIBLE);
add_wait_queue(&chip->wq, &wait);
spin_unlock_bh(chip->mutex);
schedule();
......@@ -802,7 +802,7 @@ retry:
timeo = jiffies + HZ;
goto retry;
}
}
adr += chip->start;
......@@ -889,7 +889,7 @@ retry:
map->name, chip->state);
set_current_state(TASK_UNINTERRUPTIBLE);
add_wait_queue(&chip->wq, &wait);
spin_unlock_bh(chip->mutex);
schedule();
......@@ -901,7 +901,7 @@ retry:
timeo = jiffies + HZ;
goto retry;
}
}
chip->state = FL_WRITING;
......@@ -911,7 +911,7 @@ retry:
wide_write(map, datum, adr);
times_left = 500000;
while (times_left-- && flash_is_busy(map, adr, private->interleave)) {
if (need_resched()) {
spin_unlock_bh(chip->mutex);
schedule();
......@@ -989,7 +989,7 @@ static int amd_flash_write(struct mtd_info *mtd, loff_t to , size_t len,
if (ret) {
return ret;
}
ofs += n;
buf += n;
(*retlen) += n;
......@@ -1002,7 +1002,7 @@ static int amd_flash_write(struct mtd_info *mtd, loff_t to , size_t len,
}
}
}
/* We are now aligned, write as much as possible. */
while(len >= map->buswidth) {
__u32 datum;
......@@ -1063,7 +1063,7 @@ static int amd_flash_write(struct mtd_info *mtd, loff_t to , size_t len,
if (ret) {
return ret;
}
(*retlen) += n;
}
......@@ -1085,7 +1085,7 @@ retry:
if (chip->state != FL_READY){
set_current_state(TASK_UNINTERRUPTIBLE);
add_wait_queue(&chip->wq, &wait);
spin_unlock_bh(chip->mutex);
schedule();
......@@ -1098,7 +1098,7 @@ retry:
timeo = jiffies + HZ;
goto retry;
}
}
chip->state = FL_ERASING;
......@@ -1106,30 +1106,30 @@ retry:
ENABLE_VPP(map);
send_cmd(map, chip->start, CMD_SECTOR_ERASE_UNLOCK_DATA);
send_cmd_to_addr(map, chip->start, CMD_SECTOR_ERASE_UNLOCK_DATA_2, adr);
timeo = jiffies + (HZ * 20);
spin_unlock_bh(chip->mutex);
msleep(1000);
spin_lock_bh(chip->mutex);
while (flash_is_busy(map, adr, private->interleave)) {
if (chip->state != FL_ERASING) {
/* Someone's suspended the erase. Sleep */
set_current_state(TASK_UNINTERRUPTIBLE);
add_wait_queue(&chip->wq, &wait);
spin_unlock_bh(chip->mutex);
printk(KERN_INFO "%s: erase suspended. Sleeping\n",
map->name);
schedule();
remove_wait_queue(&chip->wq, &wait);
if (signal_pending(current)) {
return -EINTR;
}
timeo = jiffies + (HZ*2); /* FIXME */
spin_lock_bh(chip->mutex);
continue;
......@@ -1145,7 +1145,7 @@ retry:
return -EIO;
}
/* Latency issues. Drop the lock, wait a while and retry */
spin_unlock_bh(chip->mutex);
......@@ -1153,7 +1153,7 @@ retry:
schedule();
else
udelay(1);
spin_lock_bh(chip->mutex);
}
......@@ -1180,7 +1180,7 @@ retry:
return -EIO;
}
}
DISABLE_VPP(map);
chip->state = FL_READY;
wake_up(&chip->wq);
......@@ -1246,7 +1246,7 @@ static int amd_flash_erase(struct mtd_info *mtd, struct erase_info *instr)
* with the erase region at that address.
*/
while ((i < mtd->numeraseregions) &&
((instr->addr + instr->len) >= regions[i].offset)) {
i++;
}
......@@ -1293,10 +1293,10 @@ static int amd_flash_erase(struct mtd_info *mtd, struct erase_info *instr)
}
}
}
instr->state = MTD_ERASE_DONE;
mtd_erase_callback(instr);
return 0;
}
......@@ -1324,7 +1324,7 @@ static void amd_flash_sync(struct mtd_info *mtd)
case FL_JEDEC_QUERY:
chip->oldstate = chip->state;
chip->state = FL_SYNCING;
/* No need to wake_up() on this state change -
* as the whole point is that nobody can do anything
* with the chip now anyway.
*/
......@@ -1335,13 +1335,13 @@ static void amd_flash_sync(struct mtd_info *mtd)
default:
/* Not an idle state */
add_wait_queue(&chip->wq, &wait);
spin_unlock_bh(chip->mutex);
schedule();
remove_wait_queue(&chip->wq, &wait);
goto retry;
}
}
......@@ -1351,7 +1351,7 @@ static void amd_flash_sync(struct mtd_info *mtd)
chip = &private->chips[i];
spin_lock_bh(chip->mutex);
if (chip->state == FL_SYNCING) {
chip->state = chip->oldstate;
wake_up(&chip->wq);
......
......@@ -10,14 +10,14 @@
*
* 4_by_16 work by Carolyn J. Smith
*
* XIP support hooks by Vitaly Wool (based on code for Intel flash
* by Nicolas Pitre)
*
*
* Occasionally maintained by Thayne Harbaugh tharbaugh at lnxi dot com
*
* This code is GPL
*
* $Id: cfi_cmdset_0002.c,v 1.118 2005/07/04 22:34:29 gleixner Exp $
* $Id: cfi_cmdset_0002.c,v 1.122 2005/11/07 11:14:22 gleixner Exp $
*
*/
......@@ -93,7 +93,7 @@ static void cfi_tell_features(struct cfi_pri_amdstd *extp)
};
printk(" Silicon revision: %d\n", extp->SiliconRevision >> 1);
printk(" Address sensitive unlock: %s\n",
(extp->SiliconRevision & 1) ? "Not required" : "Required");
if (extp->EraseSuspend < ARRAY_SIZE(erase_suspend))
......@@ -118,9 +118,9 @@ static void cfi_tell_features(struct cfi_pri_amdstd *extp)
else
printk(" Page mode: %d word page\n", extp->PageMode << 2);
printk(" Vpp Supply Minimum Program/Erase Voltage: %d.%d V\n",
extp->VppMin >> 4, extp->VppMin & 0xf);
printk(" Vpp Supply Maximum Program/Erase Voltage: %d.%d V\n",
extp->VppMax >> 4, extp->VppMax & 0xf);
if (extp->TopBottom < ARRAY_SIZE(top_bottom))
......@@ -177,7 +177,7 @@ static void fixup_use_erase_chip(struct mtd_info *mtd, void *param)
((cfi->cfiq->EraseRegionInfo[0] & 0xffff) == 0)) {
mtd->erase = cfi_amdstd_erase_chip;
}
}
static struct cfi_fixup cfi_fixup_table[] = {
......@@ -239,7 +239,7 @@ struct mtd_info *cfi_cmdset_0002(struct map_info *map, int primary)
if (cfi->cfi_mode==CFI_MODE_CFI){
unsigned char bootloc;
/*
/*
* It's a real CFI chip, not one for which the probe
* routine faked a CFI structure. So we read the feature
* table from it.
......@@ -253,8 +253,18 @@ struct mtd_info *cfi_cmdset_0002(struct map_info *map, int primary)
return NULL;
}
if (extp->MajorVersion != '1' ||
(extp->MinorVersion < '0' || extp->MinorVersion > '4')) {
printk(KERN_ERR " Unknown Amd/Fujitsu Extended Query "
"version %c.%c.\n", extp->MajorVersion,
extp->MinorVersion);
kfree(extp);
kfree(mtd);
return NULL;
}
/* Install our own private info structure */
cfi->cmdset_priv = extp;
/* Apply cfi device specific fixups */
cfi_fixup(mtd, cfi_fixup_table);
......@@ -262,7 +272,7 @@ struct mtd_info *cfi_cmdset_0002(struct map_info *map, int primary)
#ifdef DEBUG_CFI_FEATURES
/* Tell the user about it in lots of lovely detail */
cfi_tell_features(extp);
#endif
bootloc = extp->TopBottom;
if ((bootloc != 2) && (bootloc != 3)) {
......@@ -273,11 +283,11 @@ struct mtd_info *cfi_cmdset_0002(struct map_info *map, int primary)
if (bootloc == 3 && cfi->cfiq->NumEraseRegions > 1) {
printk(KERN_WARNING "%s: Swapping erase regions for broken CFI table.\n", map->name);
for (i=0; i<cfi->cfiq->NumEraseRegions / 2; i++) {
int j = (cfi->cfiq->NumEraseRegions-1)-i;
__u32 swap;
swap = cfi->cfiq->EraseRegionInfo[i];
cfi->cfiq->EraseRegionInfo[i] = cfi->cfiq->EraseRegionInfo[j];
cfi->cfiq->EraseRegionInfo[j] = swap;
......@@ -288,11 +298,11 @@ struct mtd_info *cfi_cmdset_0002(struct map_info *map, int primary)
cfi->addr_unlock2 = 0x2aa;
/* Modify the unlock address if we are in compatibility mode */
if ( /* x16 in x8 mode */
((cfi->device_type == CFI_DEVICETYPE_X8) &&
(cfi->cfiq->InterfaceDesc == 2)) ||
/* x32 in x16 mode */
((cfi->device_type == CFI_DEVICETYPE_X16) &&
(cfi->cfiq->InterfaceDesc == 4)))
{
cfi->addr_unlock1 = 0xaaa;
cfi->addr_unlock2 = 0x555;
......@@ -310,10 +320,10 @@ struct mtd_info *cfi_cmdset_0002(struct map_info *map, int primary)
cfi->chips[i].word_write_time = 1<<cfi->cfiq->WordWriteTimeoutTyp;
cfi->chips[i].buffer_write_time = 1<<cfi->cfiq->BufWriteTimeoutTyp;
cfi->chips[i].erase_time = 1<<cfi->cfiq->BlockEraseTimeoutTyp;
}
}
map->fldrv = &cfi_amdstd_chipdrv;
return cfi_amdstd_setup(mtd);
}
......@@ -326,24 +336,24 @@ static struct mtd_info *cfi_amdstd_setup(struct mtd_info *mtd)
unsigned long offset = 0;
int i,j;
printk(KERN_NOTICE "number of %s chips: %d\n",
(cfi->cfi_mode == CFI_MODE_CFI)?"CFI":"JEDEC",cfi->numchips);
/* Select the correct geometry setup */
mtd->size = devsize * cfi->numchips;
mtd->numeraseregions = cfi->cfiq->NumEraseRegions * cfi->numchips;
mtd->eraseregions = kmalloc(sizeof(struct mtd_erase_region_info)
* mtd->numeraseregions, GFP_KERNEL);
if (!mtd->eraseregions) {
printk(KERN_WARNING "Failed to allocate memory for MTD erase region info\n");
goto setup_err;
}
for (i=0; i<cfi->cfiq->NumEraseRegions; i++) {
unsigned long ernum, ersize;
ersize = ((cfi->cfiq->EraseRegionInfo[i] >> 8) & ~0xff) * cfi->interleave;
ernum = (cfi->cfiq->EraseRegionInfo[i] & 0xffff) + 1;
if (mtd->erasesize < ersize) {
mtd->erasesize = ersize;
}
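Each 32-bit EraseRegionInfo descriptor from the CFI query packs the block
count minus one in its low 16 bits and the block size in 256-byte units in
its high 16 bits; the shift-and-mask above is equivalent to multiplying the
high half by 256 (before scaling by the interleave). A small standalone
sketch, using a made-up descriptor value purely for illustration:

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		/* Hypothetical descriptor: 64 blocks (0x3f + 1) of 64KiB (0x0100 * 256). */
		uint32_t info = 0x0100003f;
		unsigned long ersize = (info >> 8) & ~0xffUL;	/* == (info >> 16) * 256 == 65536 */
		unsigned long ernum  = (info & 0xffff) + 1;	/* == 64 */

		printf("%lu blocks of %lu bytes\n", ernum, ersize);
		return 0;
	}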
......@@ -429,7 +439,7 @@ static int __xipram chip_good(struct map_info *map, unsigned long addr, map_word
oldd = map_read(map, addr);
curd = map_read(map, addr);
return map_word_equal(map, oldd, curd) &&
map_word_equal(map, curd, expected);
}
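chip_good() leans on the AMD command-set status behaviour: while a program
or erase is still running, DQ6 toggles on every read, so two back-to-back
reads only compare equal once the operation has completed, and the extra
compare against the expected data catches a failed program. A rough polling
sketch built on that helper (illustrative only; the name and the simple
busy-wait are assumptions, and the driver's real wait loops also handle
erase suspend and rescheduling):

	static int wait_write_done(struct map_info *map, unsigned long adr,
				   map_word expected, unsigned long timeout_us)
	{
		while (timeout_us--) {
			if (chip_good(map, adr, expected))
				return 0;	/* data reads back stable and correct */
			cfi_udelay(1);		/* DQ6 still toggling: wait and re-poll */
		}
		return -EIO;
	}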
......@@ -461,7 +471,7 @@ static int get_chip(struct map_info *map, struct flchip *chip, unsigned long adr
/* Someone else might have been playing with it. */
goto retry;
}
case FL_READY:
case FL_CFI_QUERY:
case FL_JEDEC_QUERY:
......@@ -504,7 +514,7 @@ static int get_chip(struct map_info *map, struct flchip *chip, unsigned long adr
printk(KERN_ERR "MTD %s(): chip not ready after erase suspend\n", __func__);
return -EIO;
}
spin_unlock(chip->mutex);
cfi_udelay(1);
spin_lock(chip->mutex);
......@@ -607,7 +617,7 @@ static void __xipram xip_enable(struct map_info *map, struct flchip *chip,
* When a delay is required for the flash operation to complete, the
* xip_udelay() function is polling for both the given timeout and pending
* (but still masked) hardware interrupts. Whenever there is an interrupt
* pending then the flash erase operation is suspended, array mode restored
* and interrupts unmasked. Task scheduling might also happen at that
* point. The CPU eventually returns from the interrupt or the call to
* schedule() and the suspended flash operation is resumed for the remaining
......@@ -631,9 +641,9 @@ static void __xipram xip_udelay(struct map_info *map, struct flchip *chip,
((chip->state == FL_ERASING && (extp->EraseSuspend & 2))) &&
(cfi_interleave_is_1(cfi) || chip->oldstate == FL_READY)) {
/*
* Let's suspend the erase operation when supported.
* Note that we currently don't try to suspend
* interleaved chips if there is already another
* operation suspended (imagine what happens
* when one chip was already done with the current
* operation while another chip suspended it, then
......@@ -769,8 +779,8 @@ static inline int do_read_onechip(struct map_info *map, struct flchip *chip, lof
adr += chip->start;
/* Ensure cmd read/writes are aligned. */
cmd_addr = adr & ~(map_bankwidth(map)-1);
spin_lock(chip->mutex);
ret = get_chip(map, chip, cmd_addr, FL_READY);
......@@ -850,7 +860,7 @@ static inline int do_read_secsi_onechip(struct map_info *map, struct flchip *chi
#endif
set_current_state(TASK_UNINTERRUPTIBLE);
add_wait_queue(&chip->wq, &wait);
spin_unlock(chip->mutex);
schedule();
......@@ -862,7 +872,7 @@ static inline int do_read_secsi_onechip(struct map_info *map, struct flchip *chi
timeo = jiffies + HZ;
goto retry;
}
}
adr += chip->start;
......@@ -871,14 +881,14 @@ static inline int do_read_secsi_onechip(struct map_info *map, struct flchip *chi
cfi_send_gen_cmd(0xAA, cfi->addr_unlock1, chip->start, map, cfi, cfi->device_type, NULL);
cfi_send_gen_cmd(0x55, cfi->addr_unlock2, chip->start, map, cfi, cfi->device_type, NULL);
cfi_send_gen_cmd(0x88, cfi->addr_unlock1, chip->start, map, cfi, cfi->device_type, NULL);
map_copy_from(map, buf, adr, len);
cfi_send_gen_cmd(0xAA, cfi->addr_unlock1, chip->start, map, cfi, cfi->device_type, NULL);
cfi_send_gen_cmd(0x55, cfi->addr_unlock2, chip->start, map, cfi, cfi->device_type, NULL);
cfi_send_gen_cmd(0x90, cfi->addr_unlock1, chip->start, map, cfi, cfi->device_type, NULL);
cfi_send_gen_cmd(0x00, cfi->addr_unlock1, chip->start, map, cfi, cfi->device_type, NULL);
wake_up(&chip->wq);
spin_unlock(chip->mutex);
......@@ -987,7 +997,7 @@ static int __xipram do_write_oneword(struct map_info *map, struct flchip *chip,
chip->word_write_time);
/* See comment above for timeout value. */
timeo = jiffies + uWriteTimeout;
for (;;) {
if (chip->state != FL_WRITING) {
/* Someone's suspended the write. Sleep */
......@@ -1003,16 +1013,16 @@ static int __xipram do_write_oneword(struct map_info *map, struct flchip *chip,
continue;
}
if (chip_ready(map, adr))
break;
if (time_after(jiffies, timeo)) {
if (time_after(jiffies, timeo) && !chip_ready(map, adr)){
xip_enable(map, chip, adr);
printk(KERN_WARNING "MTD %s(): software timeout\n", __func__);
xip_disable(map, chip, adr);
break;
break;
}
if (chip_ready(map, adr))
break;
/* Latency issues. Drop the lock, wait a while and retry */
UDELAY(map, chip, adr, 1);
}
......@@ -1022,7 +1032,7 @@ static int __xipram do_write_oneword(struct map_info *map, struct flchip *chip,
map_write( map, CMD(0xF0), chip->start );
/* FIXME - should have reset delay before continuing */
if (++retry_cnt <= MAX_WORD_RETRIES)
if (++retry_cnt <= MAX_WORD_RETRIES)
goto retry;
ret = -EIO;
......@@ -1090,27 +1100,27 @@ static int cfi_amdstd_write_words(struct mtd_info *mtd, loff_t to, size_t len,
/* Number of bytes to copy from buffer */
n = min_t(int, len, map_bankwidth(map)-i);
tmp_buf = map_word_load_partial(map, tmp_buf, buf, i, n);
ret = do_write_oneword(map, &cfi->chips[chipnum],
bus_ofs, tmp_buf);
if (ret)
return ret;
ofs += n;
buf += n;
(*retlen) += n;
len -= n;
if (ofs >> cfi->chipshift) {
chipnum ++;
ofs = 0;
if (chipnum == cfi->numchips)
return 0;
}
}
/* We are now aligned, write as much as possible */
while(len >= map_bankwidth(map)) {
map_word datum;
......@@ -1128,7 +1138,7 @@ static int cfi_amdstd_write_words(struct mtd_info *mtd, loff_t to, size_t len,
len -= map_bankwidth(map);
if (ofs >> cfi->chipshift) {
chipnum ++;
ofs = 0;
if (chipnum == cfi->numchips)
return 0;
......@@ -1166,12 +1176,12 @@ static int cfi_amdstd_write_words(struct mtd_info *mtd, loff_t to, size_t len,
spin_unlock(cfi->chips[chipnum].mutex);
tmp_buf = map_word_load_partial(map, tmp_buf, buf, 0, len);
ret = do_write_oneword(map, &cfi->chips[chipnum],
ofs, tmp_buf);
if (ret)
return ret;
(*retlen) += len;
}
......@@ -1183,7 +1193,7 @@ static int cfi_amdstd_write_words(struct mtd_info *mtd, loff_t to, size_t len,
* FIXME: interleaved mode not tested, and probably not supported!
*/
static int __xipram do_write_buffer(struct map_info *map, struct flchip *chip,
unsigned long adr, const u_char *buf,
int len)
{
struct cfi_private *cfi = map->fldrv_priv;
......@@ -1213,7 +1223,7 @@ static int __xipram do_write_buffer(struct map_info *map, struct flchip *chip,
XIP_INVAL_CACHED_RANGE(map, adr, len);
ENABLE_VPP(map);
xip_disable(map, chip, cmd_adr);
cfi_send_gen_cmd(0xAA, cfi->addr_unlock1, chip->start, map, cfi, cfi->device_type, NULL);
cfi_send_gen_cmd(0x55, cfi->addr_unlock2, chip->start, map, cfi, cfi->device_type, NULL);
//cfi_send_gen_cmd(0xA0, cfi->addr_unlock1, chip->start, map, cfi, cfi->device_type, NULL);
......@@ -1247,8 +1257,8 @@ static int __xipram do_write_buffer(struct map_info *map, struct flchip *chip,
adr, map_bankwidth(map),
chip->word_write_time);
timeo = jiffies + uWriteTimeout;
for (;;) {
if (chip->state != FL_WRITING) {
/* Someone's suspended the write. Sleep */
......@@ -1264,13 +1274,13 @@ static int __xipram do_write_buffer(struct map_info *map, struct flchip *chip,
continue;
}
if (time_after(jiffies, timeo) && !chip_ready(map, adr))
break;
if (chip_ready(map, adr)) {
xip_enable(map, chip, adr);
goto op_done;
}
if( time_after(jiffies, timeo))
break;
/* Latency issues. Drop the lock, wait a while and retry */
UDELAY(map, chip, adr, 1);
......@@ -1342,7 +1352,7 @@ static int cfi_amdstd_write_buffers(struct mtd_info *mtd, loff_t to, size_t len,
if (size % map_bankwidth(map))
size -= size % map_bankwidth(map);
ret = do_write_buffer(map, &cfi->chips[chipnum],
ofs, buf, size);
if (ret)
return ret;
......@@ -1353,7 +1363,7 @@ static int cfi_amdstd_write_buffers(struct mtd_info *mtd, loff_t to, size_t len,
len -= size;
if (ofs >> cfi->chipshift) {
chipnum ++;
ofs = 0;
if (chipnum == cfi->numchips)
return 0;
......@@ -1570,7 +1580,7 @@ int cfi_amdstd_erase_varsize(struct mtd_info *mtd, struct erase_info *instr)
instr->state = MTD_ERASE_DONE;
mtd_erase_callback(instr);
return 0;
}
......@@ -1593,7 +1603,7 @@ static int cfi_amdstd_erase_chip(struct mtd_info *mtd, struct erase_info *instr)
instr->state = MTD_ERASE_DONE;
mtd_erase_callback(instr);
return 0;
}
......@@ -1620,7 +1630,7 @@ static void cfi_amdstd_sync (struct mtd_info *mtd)
case FL_JEDEC_QUERY:
chip->oldstate = chip->state;
chip->state = FL_SYNCING;
/* No need to wake_up() on this state change -
* as the whole point is that nobody can do anything
* with the chip now anyway.
*/
......@@ -1631,13 +1641,13 @@ static void cfi_amdstd_sync (struct mtd_info *mtd)
default:
/* Not an idle state */
add_wait_queue(&chip->wq, &wait);
spin_unlock(chip->mutex);
schedule();
remove_wait_queue(&chip->wq, &wait);
goto retry;
}
}
......@@ -1648,7 +1658,7 @@ static void cfi_amdstd_sync (struct mtd_info *mtd)
chip = &cfi->chips[i];
spin_lock(chip->mutex);
if (chip->state == FL_SYNCING) {
chip->state = chip->oldstate;
wake_up(&chip->wq);
......@@ -1678,7 +1688,7 @@ static int cfi_amdstd_suspend(struct mtd_info *mtd)
case FL_JEDEC_QUERY:
chip->oldstate = chip->state;
chip->state = FL_PM_SUSPENDED;
/* No need to wake_up() on this state change -
* as the whole point is that nobody can do anything
* with the chip now anyway.
*/
......@@ -1699,7 +1709,7 @@ static int cfi_amdstd_suspend(struct mtd_info *mtd)
chip = &cfi->chips[i];
spin_lock(chip->mutex);
if (chip->state == FL_PM_SUSPENDED) {
chip->state = chip->oldstate;
wake_up(&chip->wq);
......@@ -1707,7 +1717,7 @@ static int cfi_amdstd_suspend(struct mtd_info *mtd)
spin_unlock(chip->mutex);
}
}
return ret;
}
......@@ -1720,11 +1730,11 @@ static void cfi_amdstd_resume(struct mtd_info *mtd)
struct flchip *chip;
for (i=0; i<cfi->numchips; i++) {
chip = &cfi->chips[i];
spin_lock(chip->mutex);
if (chip->state == FL_PM_SUSPENDED) {
chip->state = FL_READY;
map_write(map, CMD(0xF0), chip->start);
......
......@@ -4,8 +4,8 @@
*
* (C) 2000 Red Hat. GPL'd
*
* $Id: cfi_cmdset_0020.c,v 1.19 2005/07/13 15:52:45 dwmw2 Exp $
*
* $Id: cfi_cmdset_0020.c,v 1.22 2005/11/07 11:14:22 gleixner Exp $
*
* 10/10/2000 Nicolas Pitre <nico@cam.org>
* - completely revamped method functions so they are aware and
* independent of the flash geometry (buswidth, interleave, etc.)
......@@ -81,17 +81,17 @@ static void cfi_tell_features(struct cfi_pri_intelext *extp)
printk(" - Page-mode read: %s\n", extp->FeatureSupport&128?"supported":"unsupported");
printk(" - Synchronous read: %s\n", extp->FeatureSupport&256?"supported":"unsupported");
for (i=9; i<32; i++) {
if (extp->FeatureSupport & (1<<i))
printk(" - Unknown Bit %X: supported\n", i);
}
printk(" Supported functions after Suspend: %2.2X\n", extp->SuspendCmdSupport);
printk(" - Program after Erase Suspend: %s\n", extp->SuspendCmdSupport&1?"supported":"unsupported");
for (i=1; i<8; i++) {
if (extp->SuspendCmdSupport & (1<<i))
printk(" - Unknown Bit %X: supported\n", i);
}
printk(" Block Status Register Mask: %4.4X\n", extp->BlkStatusRegMask);
printk(" - Lock Bit Active: %s\n", extp->BlkStatusRegMask&1?"yes":"no");
printk(" - Valid Bit Active: %s\n", extp->BlkStatusRegMask&2?"yes":"no");
......@@ -99,11 +99,11 @@ static void cfi_tell_features(struct cfi_pri_intelext *extp)
if (extp->BlkStatusRegMask & (1<<i))
printk(" - Unknown Bit %X Active: yes\n",i);
}
printk(" Vcc Logic Supply Optimum Program/Erase Voltage: %d.%d V\n",
extp->VccOptimal >> 8, extp->VccOptimal & 0xf);
if (extp->VppOptimal)
printk(" Vpp Programming Supply Optimum Program/Erase Voltage: %d.%d V\n",
extp->VppOptimal >> 8, extp->VppOptimal & 0xf);
}
#endif
......@@ -121,7 +121,7 @@ struct mtd_info *cfi_cmdset_0020(struct map_info *map, int primary)
int i;
if (cfi->cfi_mode) {
/*
* It's a real CFI chip, not one for which the probe
* routine faked a CFI structure. So we read the feature
* table from it.
......@@ -133,24 +133,33 @@ struct mtd_info *cfi_cmdset_0020(struct map_info *map, int primary)
if (!extp)
return NULL;
if (extp->MajorVersion != '1' ||
(extp->MinorVersion < '0' || extp->MinorVersion > '3')) {
printk(KERN_ERR " Unknown ST Microelectronics"
" Extended Query version %c.%c.\n",
extp->MajorVersion, extp->MinorVersion);
kfree(extp);
return NULL;
}
/* Do some byteswapping if necessary */
extp->FeatureSupport = cfi32_to_cpu(extp->FeatureSupport);
extp->BlkStatusRegMask = cfi32_to_cpu(extp->BlkStatusRegMask);
#ifdef DEBUG_CFI_FEATURES
/* Tell the user about it in lots of lovely detail */
cfi_tell_features(extp);
#endif
/* Install our own private info structure */
cfi->cmdset_priv = extp;
}
}
for (i=0; i< cfi->numchips; i++) {
cfi->chips[i].word_write_time = 128;
cfi->chips[i].buffer_write_time = 128;
cfi->chips[i].erase_time = 1024;
}
}
return cfi_staa_setup(map);
}
......@@ -178,15 +187,15 @@ static struct mtd_info *cfi_staa_setup(struct map_info *map)
mtd->size = devsize * cfi->numchips;
mtd->numeraseregions = cfi->cfiq->NumEraseRegions * cfi->numchips;
mtd->eraseregions = kmalloc(sizeof(struct mtd_erase_region_info)
* mtd->numeraseregions, GFP_KERNEL);
if (!mtd->eraseregions) {
printk(KERN_ERR "Failed to allocate memory for MTD erase region info\n");
kfree(cfi->cmdset_priv);
kfree(mtd);
return NULL;
}
for (i=0; i<cfi->cfiq->NumEraseRegions; i++) {
unsigned long ernum, ersize;
ersize = ((cfi->cfiq->EraseRegionInfo[i] >> 8) & ~0xff) * cfi->interleave;
......@@ -219,7 +228,7 @@ static struct mtd_info *cfi_staa_setup(struct map_info *map)
mtd->eraseregions[i].numblocks);
}
/* Also select the correct geometry setup too */
mtd->erase = cfi_staa_erase_varsize;
mtd->read = cfi_staa_read;
mtd->write = cfi_staa_write_buffers;
......@@ -250,8 +259,8 @@ static inline int do_read_onechip(struct map_info *map, struct flchip *chip, lof
adr += chip->start;
/* Ensure cmd read/writes are aligned. */
cmd_addr = adr & ~(map_bankwidth(map)-1);
/* Let's determine this according to the interleave only once */
status_OK = CMD(0x80);
......@@ -267,7 +276,7 @@ static inline int do_read_onechip(struct map_info *map, struct flchip *chip, lof
case FL_ERASING:
if (!(((struct cfi_pri_intelext *)cfi->cmdset_priv)->FeatureSupport & 2))
goto sleep; /* We don't support erase suspend */
map_write (map, CMD(0xb0), cmd_addr);
/* If the flash has finished erasing, then 'erase suspend'
* appears to make some (28F320) flash devices switch to
......@@ -282,7 +291,7 @@ static inline int do_read_onechip(struct map_info *map, struct flchip *chip, lof
status = map_read(map, cmd_addr);
if (map_word_andequal(map, status, status_OK, status_OK))
break;
if (time_after(jiffies, timeo)) {
/* Urgh */
map_write(map, CMD(0xd0), cmd_addr);
......@@ -294,17 +303,17 @@ static inline int do_read_onechip(struct map_info *map, struct flchip *chip, lof
"suspended: status = 0x%lx\n", status.x[0]);
return -EIO;
}
spin_unlock_bh(chip->mutex);
cfi_udelay(1);
spin_lock_bh(chip->mutex);
}
suspended = 1;
map_write(map, CMD(0xff), cmd_addr);
chip->state = FL_READY;
break;
#if 0
case FL_WRITING:
/* Not quite yet */
......@@ -325,7 +334,7 @@ static inline int do_read_onechip(struct map_info *map, struct flchip *chip, lof
chip->state = FL_READY;
break;
}
/* Urgh. Chip not yet ready to talk to us. */
if (time_after(jiffies, timeo)) {
spin_unlock_bh(chip->mutex);
......@@ -355,17 +364,17 @@ static inline int do_read_onechip(struct map_info *map, struct flchip *chip, lof
if (suspended) {
chip->state = chip->oldstate;
/* What if one interleaved chip has finished and the
other hasn't? The old code would leave the finished
one in READY mode. That's bad, and caused -EROFS
errors to be returned from do_erase_oneblock because
that's the only bit it checked for at the time.
As the state machine appears to explicitly allow
sending the 0x70 (Read Status) command to an erasing
chip and expecting it to be ignored, that's what we
do. */
map_write(map, CMD(0xd0), cmd_addr);
map_write(map, CMD(0x70), cmd_addr);
}
wake_up(&chip->wq);
......@@ -405,14 +414,14 @@ static int cfi_staa_read (struct mtd_info *mtd, loff_t from, size_t len, size_t
*retlen += thislen;
len -= thislen;
buf += thislen;
ofs = 0;
chipnum++;
}
return ret;
}
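The ofs >> cfi->chipshift / chipnum++ pattern used in these loops maps a
linear offset on the concatenated device onto a (chip, offset-within-chip)
pair: each chip covers 1 << chipshift bytes, so the high bits select the
chip and the low bits give the offset inside it. A standalone sketch with
assumed numbers, purely for illustration:

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		unsigned int chipshift = 22;			/* assume two 4MiB chips */
		uint64_t ofs = 0x480000;			/* linear offset into the device */
		unsigned int chipnum = ofs >> chipshift;	/* 1: the second chip */
		uint64_t in_chip = ofs & ((1ULL << chipshift) - 1); /* 0x80000 within it */

		printf("chip %u, offset 0x%llx\n", chipnum,
		       (unsigned long long)in_chip);
		return 0;
	}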
static inline int do_write_buffer(struct map_info *map, struct flchip *chip,
unsigned long adr, const u_char *buf, int len)
{
struct cfi_private *cfi = map->fldrv_priv;
......@@ -420,7 +429,7 @@ static inline int do_write_buffer(struct map_info *map, struct flchip *chip,
unsigned long cmd_adr, timeo;
DECLARE_WAITQUEUE(wait, current);
int wbufsize, z;
/* M58LW064A requires bus alignment for buffer writes -- saw */
if (adr & (map_bankwidth(map)-1))
return -EINVAL;
......@@ -428,10 +437,10 @@ static inline int do_write_buffer(struct map_info *map, struct flchip *chip,
wbufsize = cfi_interleave(cfi) << cfi->cfiq->MaxBufWriteSize;
adr += chip->start;
cmd_adr = adr & ~(wbufsize-1);
/* Let's determine this according to the interleave only once */
status_OK = CMD(0x80);
timeo = jiffies + HZ;
retry:
......@@ -439,7 +448,7 @@ static inline int do_write_buffer(struct map_info *map, struct flchip *chip,
printk("%s: chip->state[%d]\n", __FUNCTION__, chip->state);
#endif
spin_lock_bh(chip->mutex);
/* Check that the chip's ready to talk to us.
* Later, we can actually think about interrupting it
* if it's in FL_ERASING state.
......@@ -448,7 +457,7 @@ static inline int do_write_buffer(struct map_info *map, struct flchip *chip,
switch (chip->state) {
case FL_READY:
break;
case FL_CFI_QUERY:
case FL_JEDEC_QUERY:
map_write(map, CMD(0x70), cmd_adr);
......@@ -513,7 +522,7 @@ static inline int do_write_buffer(struct map_info *map, struct flchip *chip,
/* Write length of data to come */
map_write(map, CMD(len/map_bankwidth(map)-1), cmd_adr );
/* Write data */
for (z = 0; z < len;
z += map_bankwidth(map), buf += map_bankwidth(map)) {
......@@ -560,7 +569,7 @@ static inline int do_write_buffer(struct map_info *map, struct flchip *chip,
printk(KERN_ERR "waiting for chip to be ready timed out in bufwrite\n");
return -EIO;
}
/* Latency issues. Drop the lock, wait a while and retry */
spin_unlock_bh(chip->mutex);
cfi_udelay(1);
......@@ -572,9 +581,9 @@ static inline int do_write_buffer(struct map_info *map, struct flchip *chip,
if (!chip->buffer_write_time)
chip->buffer_write_time++;
}
if (z > 1)
chip->buffer_write_time++;
/* Done and happy. */
DISABLE_VPP(map);
chip->state = FL_STATUS;
......@@ -598,7 +607,7 @@ static inline int do_write_buffer(struct map_info *map, struct flchip *chip,
return 0;
}
static int cfi_staa_write_buffers (struct mtd_info *mtd, loff_t to,
size_t len, size_t *retlen, const u_char *buf)
{
struct map_info *map = mtd->priv;
......@@ -620,7 +629,7 @@ static int cfi_staa_write_buffers (struct mtd_info *mtd, loff_t to,
printk("%s: chipnum[%x] wbufsize[%x]\n", __FUNCTION__, chipnum, wbufsize);
printk("%s: ofs[%x] len[%x]\n", __FUNCTION__, ofs, len);
#endif
/* Write buffer is worth it only if more than one word to write... */
while (len > 0) {
/* We must not cross write block boundaries */
......@@ -629,7 +638,7 @@ static int cfi_staa_write_buffers (struct mtd_info *mtd, loff_t to,
if (size > len)
size = len;
ret = do_write_buffer(map, &cfi->chips[chipnum],
ret = do_write_buffer(map, &cfi->chips[chipnum],
ofs, buf, size);
if (ret)
return ret;
......@@ -640,13 +649,13 @@ static int cfi_staa_write_buffers (struct mtd_info *mtd, loff_t to,
len -= size;
if (ofs >> cfi->chipshift) {
chipnum ++;
ofs = 0;
if (chipnum == cfi->numchips)
return 0;
}
}
return 0;
}
......@@ -756,7 +765,7 @@ retry:
status = map_read(map, adr);
if (map_word_andequal(map, status, status_OK, status_OK))
break;
/* Urgh. Chip not yet ready to talk to us. */
if (time_after(jiffies, timeo)) {
spin_unlock_bh(chip->mutex);
......@@ -789,7 +798,7 @@ retry:
map_write(map, CMD(0x20), adr);
map_write(map, CMD(0xD0), adr);
chip->state = FL_ERASING;
spin_unlock_bh(chip->mutex);
msleep(1000);
spin_lock_bh(chip->mutex);
......@@ -814,7 +823,7 @@ retry:
status = map_read(map, adr);
if (map_word_andequal(map, status, status_OK, status_OK))
break;
/* OK Still waiting */
if (time_after(jiffies, timeo)) {
map_write(map, CMD(0x70), adr);
......@@ -824,13 +833,13 @@ retry:
spin_unlock_bh(chip->mutex);
return -EIO;
}
/* Latency issues. Drop the lock, wait a while and retry */
spin_unlock_bh(chip->mutex);
cfi_udelay(1);
spin_lock_bh(chip->mutex);
}
DISABLE_VPP(map);
ret = 0;
......@@ -855,7 +864,7 @@ retry:
/* Reset the error bits */
map_write(map, CMD(0x50), adr);
map_write(map, CMD(0x70), adr);
if ((chipstatus & 0x30) == 0x30) {
printk(KERN_NOTICE "Chip reports improper command sequence: status 0x%x\n", chipstatus);
ret = -EIO;
......@@ -904,17 +913,17 @@ int cfi_staa_erase_varsize(struct mtd_info *mtd, struct erase_info *instr)
i = 0;
/* Skip all erase regions which are ended before the start of
the requested erase. Actually, to save on the calculations,
we skip to the first erase region which starts after the
start of the requested erase, and then go back one.
*/
while (i < mtd->numeraseregions && instr->addr >= regions[i].offset)
i++;
i--;
/* OK, now i is pointing at the erase region in which this
erase request starts. Check the start of the requested
erase range is aligned with the erase size which is in
effect here.
......@@ -937,7 +946,7 @@ int cfi_staa_erase_varsize(struct mtd_info *mtd, struct erase_info *instr)
the address actually falls
*/
i--;
if ((instr->addr + instr->len) & (regions[i].erasesize-1))
return -EINVAL;
......@@ -949,7 +958,7 @@ int cfi_staa_erase_varsize(struct mtd_info *mtd, struct erase_info *instr)
while(len) {
ret = do_erase_oneblock(map, &cfi->chips[chipnum], adr);
if (ret)
return ret;
......@@ -962,15 +971,15 @@ int cfi_staa_erase_varsize(struct mtd_info *mtd, struct erase_info *instr)
if (adr >> cfi->chipshift) {
adr = 0;
chipnum++;
if (chipnum >= cfi->numchips)
break;
}
}
instr->state = MTD_ERASE_DONE;
mtd_erase_callback(instr);
return 0;
}
......@@ -996,7 +1005,7 @@ static void cfi_staa_sync (struct mtd_info *mtd)
case FL_JEDEC_QUERY:
chip->oldstate = chip->state;
chip->state = FL_SYNCING;
/* No need to wake_up() on this state change -
* as the whole point is that nobody can do anything
* with the chip now anyway.
*/
......@@ -1007,11 +1016,11 @@ static void cfi_staa_sync (struct mtd_info *mtd)
default:
/* Not an idle state */
add_wait_queue(&chip->wq, &wait);
spin_unlock_bh(chip->mutex);
schedule();
remove_wait_queue(&chip->wq, &wait);
goto retry;
}
}
......@@ -1022,7 +1031,7 @@ static void cfi_staa_sync (struct mtd_info *mtd)
chip = &cfi->chips[i];
spin_lock_bh(chip->mutex);
if (chip->state == FL_SYNCING) {
chip->state = chip->oldstate;
wake_up(&chip->wq);
......@@ -1057,9 +1066,9 @@ retry:
case FL_STATUS:
status = map_read(map, adr);
if (map_word_andequal(map, status, status_OK, status_OK))
break;
/* Urgh. Chip not yet ready to talk to us. */
if (time_after(jiffies, timeo)) {
spin_unlock_bh(chip->mutex);
......@@ -1088,7 +1097,7 @@ retry:
map_write(map, CMD(0x60), adr);
map_write(map, CMD(0x01), adr);
chip->state = FL_LOCKING;
spin_unlock_bh(chip->mutex);
msleep(1000);
spin_lock_bh(chip->mutex);
......@@ -1102,7 +1111,7 @@ retry:
status = map_read(map, adr);
if (map_word_andequal(map, status, status_OK, status_OK))
break;
/* OK Still waiting */
if (time_after(jiffies, timeo)) {
map_write(map, CMD(0x70), adr);
......@@ -1112,13 +1121,13 @@ retry:
spin_unlock_bh(chip->mutex);
return -EIO;
}
/* Latency issues. Drop the lock, wait a while and retry */
spin_unlock_bh(chip->mutex);
cfi_udelay(1);
spin_lock_bh(chip->mutex);
}
/* Done and happy. */
chip->state = FL_STATUS;
DISABLE_VPP(map);
......@@ -1162,8 +1171,8 @@ static int cfi_staa_lock(struct mtd_info *mtd, loff_t ofs, size_t len)
cfi_send_gen_cmd(0x90, 0x55, 0, map, cfi, cfi->device_type, NULL);
printk("after lock: block status register is %x\n",cfi_read_query(map, adr+(2*ofs_factor)));
cfi_send_gen_cmd(0xff, 0x55, 0, map, cfi, cfi->device_type, NULL);
#endif
if (ret)
return ret;
......@@ -1173,7 +1182,7 @@ static int cfi_staa_lock(struct mtd_info *mtd, loff_t ofs, size_t len)
if (adr >> cfi->chipshift) {
adr = 0;
chipnum++;
if (chipnum >= cfi->numchips)
break;
}
......@@ -1208,7 +1217,7 @@ retry:
status = map_read(map, adr);
if (map_word_andequal(map, status, status_OK, status_OK))
break;
/* Urgh. Chip not yet ready to talk to us. */
if (time_after(jiffies, timeo)) {
spin_unlock_bh(chip->mutex);
......@@ -1237,7 +1246,7 @@ retry:
map_write(map, CMD(0x60), adr);
map_write(map, CMD(0xD0), adr);
chip->state = FL_UNLOCKING;
spin_unlock_bh(chip->mutex);
msleep(1000);
spin_lock_bh(chip->mutex);
......@@ -1251,7 +1260,7 @@ retry:
status = map_read(map, adr);
if (map_word_andequal(map, status, status_OK, status_OK))
break;
/* OK Still waiting */
if (time_after(jiffies, timeo)) {
map_write(map, CMD(0x70), adr);
......@@ -1261,13 +1270,13 @@ retry:
spin_unlock_bh(chip->mutex);
return -EIO;
}
/* Latency issues. Drop the lock, wait a while and retry */
spin_unlock_bh(chip->mutex);
cfi_udelay(1);
spin_lock_bh(chip->mutex);
}
/* Done and happy. */
chip->state = FL_STATUS;
DISABLE_VPP(map);
......@@ -1292,7 +1301,7 @@ static int cfi_staa_unlock(struct mtd_info *mtd, loff_t ofs, size_t len)
{
unsigned long temp_adr = adr;
unsigned long temp_len = len;
cfi_send_gen_cmd(0x90, 0x55, 0, map, cfi, cfi->device_type, NULL);
while (temp_len) {
printk("before unlock %x: block status register is %x\n",temp_adr,cfi_read_query(map, temp_adr+(2*ofs_factor)));
......@@ -1310,7 +1319,7 @@ static int cfi_staa_unlock(struct mtd_info *mtd, loff_t ofs, size_t len)
printk("after unlock: block status register is %x\n",cfi_read_query(map, adr+(2*ofs_factor)));
cfi_send_gen_cmd(0xff, 0x55, 0, map, cfi, cfi->device_type, NULL);
#endif
return ret;
}
......@@ -1334,7 +1343,7 @@ static int cfi_staa_suspend(struct mtd_info *mtd)
case FL_JEDEC_QUERY:
chip->oldstate = chip->state;
chip->state = FL_PM_SUSPENDED;
/* No need to wake_up() on this state change -
* as the whole point is that nobody can do anything
* with the chip now anyway.
*/
......@@ -1353,9 +1362,9 @@ static int cfi_staa_suspend(struct mtd_info *mtd)
if (ret) {
for (i--; i >=0; i--) {
chip = &cfi->chips[i];
spin_lock_bh(chip->mutex);
if (chip->state == FL_PM_SUSPENDED) {
/* No need to force it into a known state here,
because we're returning failure, and it didn't
......@@ -1365,8 +1374,8 @@ static int cfi_staa_suspend(struct mtd_info *mtd)
}
spin_unlock_bh(chip->mutex);
}
}
}
return ret;
}
......@@ -1378,11 +1387,11 @@ static void cfi_staa_resume(struct mtd_info *mtd)
struct flchip *chip;
for (i=0; i<cfi->numchips; i++) {
chip = &cfi->chips[i];
spin_lock_bh(chip->mutex);
/* Go to known state. Chip may have been power cycled */
if (chip->state == FL_PM_SUSPENDED) {
map_write(map, CMD(0xFF), 0);
......
/*
Common Flash Interface probe code.
(C) 2000 Red Hat. GPL'd.
$Id: cfi_probe.c,v 1.83 2004/11/16 18:19:02 nico Exp $
$Id: cfi_probe.c,v 1.84 2005/11/07 11:14:23 gleixner Exp $
*/
#include <linux/config.h>
......@@ -20,7 +20,7 @@
#include <linux/mtd/cfi.h>
#include <linux/mtd/gen_probe.h>
//#define DEBUG_CFI
#ifdef DEBUG_CFI
static void print_cfi_ident(struct cfi_ident *);
......@@ -103,7 +103,7 @@ static int __xipram cfi_probe_chip(struct map_info *map, __u32 base,
unsigned long *chip_map, struct cfi_private *cfi)
{
int i;
if ((base + 0) >= map->size) {
printk(KERN_NOTICE
"Probe at base[0x00](0x%08lx) past the end of the map(0x%08lx)\n",
......@@ -128,7 +128,7 @@ static int __xipram cfi_probe_chip(struct map_info *map, __u32 base,
}
if (!cfi->numchips) {
/* This is the first time we're called. Set up the CFI
/* This is the first time we're called. Set up the CFI
stuff accordingly and return */
return cfi_chip_setup(map, cfi);
}
......@@ -138,13 +138,13 @@ static int __xipram cfi_probe_chip(struct map_info *map, __u32 base,
unsigned long start;
if(!test_bit(i, chip_map)) {
/* Skip location; no valid chip at this address */
continue;
}
start = i << cfi->chipshift;
/* This chip should be in read mode if it's one
we've already touched. */
if (qry_present(map, start, cfi)) {
/* Eep. This chip also had the QRY marker.
* Is it an alias for the new one? */
cfi_send_gen_cmd(0xF0, 0, start, map, cfi, cfi->device_type, NULL);
cfi_send_gen_cmd(0xFF, 0, start, map, cfi, cfi->device_type, NULL);
......@@ -156,13 +156,13 @@ static int __xipram cfi_probe_chip(struct map_info *map, __u32 base,
map->name, base, start);
return 0;
}
/* Yes, it's actually got QRY for data. Most
* unfortunate. Stick the new chip in read mode
* too and if it's the same, assume it's an alias. */
/* FIXME: Use other modes to do a proper check */
cfi_send_gen_cmd(0xF0, 0, base, map, cfi, cfi->device_type, NULL);
cfi_send_gen_cmd(0xFF, 0, start, map, cfi, cfi->device_type, NULL);
if (qry_present(map, base, cfi)) {
xip_allowed(base, map);
printk(KERN_DEBUG "%s: Found an alias at 0x%x for the chip at 0x%lx\n",
......@@ -171,12 +171,12 @@ static int __xipram cfi_probe_chip(struct map_info *map, __u32 base,
}
}
}
/* OK, if we got to here, then none of the previous chips appear to
be aliases for the current one. */
set_bit((base >> cfi->chipshift), chip_map); /* Update chip map */
cfi->numchips++;
/* Put it back into Read Mode */
cfi_send_gen_cmd(0xF0, 0, base, map, cfi, cfi->device_type, NULL);
cfi_send_gen_cmd(0xFF, 0, base, map, cfi, cfi->device_type, NULL);
......@@ -185,11 +185,11 @@ static int __xipram cfi_probe_chip(struct map_info *map, __u32 base,
printk(KERN_INFO "%s: Found %d x%d devices at 0x%x in %d-bit bank\n",
map->name, cfi->interleave, cfi->device_type*8, base,
map->bankwidth*8);
return 1;
}
static int __xipram cfi_chip_setup(struct map_info *map,
struct cfi_private *cfi)
{
int ofs_factor = cfi->interleave*cfi->device_type;
......@@ -209,11 +209,11 @@ static int __xipram cfi_chip_setup(struct map_info *map,
printk(KERN_WARNING "%s: kmalloc failed for CFI ident structure\n", map->name);
return 0;
}
memset(cfi->cfiq,0,sizeof(struct cfi_ident));
cfi->cfi_mode = CFI_MODE_CFI;
/* Read the CFI info structure */
xip_disable_qry(base, map, cfi);
for (i=0; i<(sizeof(struct cfi_ident) + num_erase_regions * 4); i++)
......@@ -231,7 +231,7 @@ static int __xipram cfi_chip_setup(struct map_info *map,
cfi_send_gen_cmd(0x55, 0x2aa, base, map, cfi, cfi->device_type, NULL);
cfi_send_gen_cmd(0x90, 0x555, base, map, cfi, cfi->device_type, NULL);
cfi->mfr = cfi_read_query(map, base);
cfi->id = cfi_read_query(map, base + ofs_factor);
/* Put it back into Read Mode */
cfi_send_gen_cmd(0xF0, 0, base, map, cfi, cfi->device_type, NULL);
......@@ -255,10 +255,10 @@ static int __xipram cfi_chip_setup(struct map_info *map,
for (i=0; i<cfi->cfiq->NumEraseRegions; i++) {
cfi->cfiq->EraseRegionInfo[i] = le32_to_cpu(cfi->cfiq->EraseRegionInfo[i]);
#ifdef DEBUG_CFI
printk(" Erase Region #%d: BlockSize 0x%4.4X bytes, %d blocks\n",
i, (cfi->cfiq->EraseRegionInfo[i] >> 8) & ~0xff,
(cfi->cfiq->EraseRegionInfo[i] & 0xffff) + 1);
#endif
}
......@@ -271,33 +271,33 @@ static int __xipram cfi_chip_setup(struct map_info *map,
}
#ifdef DEBUG_CFI
static char *vendorname(__u16 vendor)
{
switch (vendor) {
case P_ID_NONE:
return "None";
case P_ID_INTEL_EXT:
return "Intel/Sharp Extended";
case P_ID_AMD_STD:
return "AMD/Fujitsu Standard";
case P_ID_INTEL_STD:
return "Intel/Sharp Standard";
case P_ID_AMD_EXT:
return "AMD/Fujitsu Extended";
case P_ID_WINBOND:
return "Winbond Standard";
case P_ID_ST_ADV:
return "ST Advanced";
case P_ID_MITSUBISHI_STD:
return "Mitsubishi Standard";
case P_ID_MITSUBISHI_EXT:
return "Mitsubishi Extended";
......@@ -306,13 +306,13 @@ static char *vendorname(__u16 vendor)
case P_ID_INTEL_PERFORMANCE:
return "Intel Performance Code";
case P_ID_INTEL_DATA:
return "Intel Data";
case P_ID_RESERVED:
return "Not Allowed / Reserved for Future Use";
default:
return "Unknown";
}
......@@ -325,21 +325,21 @@ static void print_cfi_ident(struct cfi_ident *cfip)
if (cfip->qry[0] != 'Q' || cfip->qry[1] != 'R' || cfip->qry[2] != 'Y') {
printk("Invalid CFI ident structure.\n");
return;
}
#endif
printk("Primary Vendor Command Set: %4.4X (%s)\n", cfip->P_ID, vendorname(cfip->P_ID));
if (cfip->P_ADR)
printk("Primary Algorithm Table at %4.4X\n", cfip->P_ADR);
else
printk("No Primary Algorithm Table\n");
printk("Alternative Vendor Command Set: %4.4X (%s)\n", cfip->A_ID, vendorname(cfip->A_ID));
if (cfip->A_ADR)
printk("Alternate Algorithm Table at %4.4X\n", cfip->A_ADR);
else
printk("No Alternate Algorithm Table\n");
printk("Vcc Minimum: %2d.%d V\n", cfip->VccMin >> 4, cfip->VccMin & 0xf);
printk("Vcc Maximum: %2d.%d V\n", cfip->VccMax >> 4, cfip->VccMax & 0xf);
if (cfip->VppMin) {
......@@ -348,61 +348,61 @@ static void print_cfi_ident(struct cfi_ident *cfip)
}
else
printk("No Vpp line\n");
printk("Typical byte/word write timeout: %d µs\n", 1<<cfip->WordWriteTimeoutTyp);
printk("Maximum byte/word write timeout: %d µs\n", (1<<cfip->WordWriteTimeoutMax) * (1<<cfip->WordWriteTimeoutTyp));
if (cfip->BufWriteTimeoutTyp || cfip->BufWriteTimeoutMax) {
printk("Typical full buffer write timeout: %d µs\n", 1<<cfip->BufWriteTimeoutTyp);
printk("Maximum full buffer write timeout: %d µs\n", (1<<cfip->BufWriteTimeoutMax) * (1<<cfip->BufWriteTimeoutTyp));
}
else
printk("Full buffer write not supported\n");
printk("Typical block erase timeout: %d ms\n", 1<<cfip->BlockEraseTimeoutTyp);
printk("Maximum block erase timeout: %d ms\n", (1<<cfip->BlockEraseTimeoutMax) * (1<<cfip->BlockEraseTimeoutTyp));
if (cfip->ChipEraseTimeoutTyp || cfip->ChipEraseTimeoutMax) {
printk("Typical chip erase timeout: %d ms\n", 1<<cfip->ChipEraseTimeoutTyp);
printk("Typical chip erase timeout: %d ms\n", 1<<cfip->ChipEraseTimeoutTyp);
printk("Maximum chip erase timeout: %d ms\n", (1<<cfip->ChipEraseTimeoutMax) * (1<<cfip->ChipEraseTimeoutTyp));
}
else
printk("Chip erase not supported\n");
printk("Device size: 0x%X bytes (%d MiB)\n", 1 << cfip->DevSize, 1<< (cfip->DevSize - 20));
printk("Flash Device Interface description: 0x%4.4X\n", cfip->InterfaceDesc);
switch(cfip->InterfaceDesc) {
case 0:
printk(" - x8-only asynchronous interface\n");
break;
case 1:
printk(" - x16-only asynchronous interface\n");
break;
case 2:
printk(" - supports x8 and x16 via BYTE# with asynchronous interface\n");
break;
case 3:
printk(" - x32-only asynchronous interface\n");
break;
case 4:
printk(" - supports x16 and x32 via Word# with asynchronous interface\n");
break;
case 65535:
printk(" - Not Allowed / Reserved\n");
break;
default:
printk(" - Unknown\n");
break;
}
printk("Max. bytes in buffer write: 0x%x\n", 1<< cfip->MaxBufWriteSize);
printk("Number of Erase Block Regions: %d\n", cfip->NumEraseRegions);
}
#endif /* DEBUG_CFI */
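For orientation, the timeout fields decoded in print_cfi_ident() above are power-of-two encodings: the typical value is 2^Typ and the maximum is 2^Max times the typical, exactly as the printk arguments compute. For example, WordWriteTimeoutTyp = 4 and WordWriteTimeoutMax = 2 would be reported as 16 µs typical and 64 µs maximum.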
......
......@@ -7,7 +7,7 @@
*
* This code is covered by the GPL.
*
* $Id: cfi_util.c,v 1.8 2004/12/14 19:55:56 nico Exp $
* $Id: cfi_util.c,v 1.10 2005/11/07 11:14:23 gleixner Exp $
*
*/
......@@ -56,7 +56,7 @@ __xipram cfi_read_pri(struct map_info *map, __u16 adr, __u16 size, const char* n
/* Read in the Extended Query Table */
for (i=0; i<size; i++) {
((unsigned char *)extp)[i] =
((unsigned char *)extp)[i] =
cfi_read_query(map, base+((adr+i)*ofs_factor));
}
......@@ -70,15 +70,6 @@ __xipram cfi_read_pri(struct map_info *map, __u16 adr, __u16 size, const char* n
local_irq_enable();
#endif
if (extp->MajorVersion != '1' ||
(extp->MinorVersion < '0' || extp->MinorVersion > '3')) {
printk(KERN_WARNING " Unknown %s Extended Query "
"version %c.%c.\n", name, extp->MajorVersion,
extp->MinorVersion);
kfree(extp);
extp = NULL;
}
out: return extp;
}
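With the version check gone from cfi_read_pri(), rejecting an unexpected Extended Query revision becomes the caller's job. Below is a minimal sketch of such a caller-side check, mirroring the removed logic; the helper name get_checked_pri() and the cast to the Intel extension layout are illustrative assumptions, not code from this commit.

```c
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/mtd/map.h>
#include <linux/mtd/cfi.h>

/* Sketch only: reproduce the version check removed above on the caller side.
 * adr would typically come from cfi->cfiq->P_ADR. */
static struct cfi_pri_intelext *get_checked_pri(struct map_info *map, __u16 adr)
{
	struct cfi_pri_intelext *extp = (struct cfi_pri_intelext *)
		cfi_read_pri(map, adr, sizeof(*extp), "Intel/Sharp");

	if (extp && (extp->MajorVersion != '1' ||
		     extp->MinorVersion < '0' || extp->MinorVersion > '3')) {
		printk(KERN_WARNING "Unknown Extended Query version %c.%c\n",
		       extp->MajorVersion, extp->MinorVersion);
		kfree(extp);
		extp = NULL;
	}
	return extp;
}
```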
......@@ -122,17 +113,17 @@ int cfi_varsize_frob(struct mtd_info *mtd, varsize_frob_t frob,
i = 0;
/* Skip all erase regions which are ended before the start of
/* Skip all erase regions which are ended before the start of
the requested erase. Actually, to save on the calculations,
we skip to the first erase region which starts after the
start of the requested erase, and then go back one.
*/
while (i < mtd->numeraseregions && ofs >= regions[i].offset)
i++;
i--;
/* OK, now i is pointing at the erase region in which this
/* OK, now i is pointing at the erase region in which this
erase request starts. Check the start of the requested
erase range is aligned with the erase size which is in
effect here.
......@@ -155,7 +146,7 @@ int cfi_varsize_frob(struct mtd_info *mtd, varsize_frob_t frob,
the address actually falls
*/
i--;
if ((ofs + len) & (regions[i].erasesize-1))
return -EINVAL;
......@@ -168,7 +159,7 @@ int cfi_varsize_frob(struct mtd_info *mtd, varsize_frob_t frob,
int size = regions[i].erasesize;
ret = (*frob)(map, &cfi->chips[chipnum], adr, size, thunk);
if (ret)
return ret;
......@@ -182,7 +173,7 @@ int cfi_varsize_frob(struct mtd_info *mtd, varsize_frob_t frob,
if (adr >> cfi->chipshift) {
adr = 0;
chipnum++;
if (chipnum >= cfi->numchips)
break;
}
......
......@@ -41,7 +41,7 @@ static struct mtd_chip_driver *get_mtd_chip_driver (const char *name)
list_for_each(pos, &chip_drvs_list) {
this = list_entry(pos, typeof(*this), list);
if (!strcmp(this->name, name)) {
ret = this;
break;
......@@ -73,7 +73,7 @@ struct mtd_info *do_map_probe(const char *name, struct map_info *map)
ret = drv->probe(map);
/* We decrease the use count here. It may have been a
/* We decrease the use count here. It may have been a
probe-only module, which is no longer required from this
point, having given us a handle on (and increased the use
count of) the actual driver code.
......@@ -82,7 +82,7 @@ struct mtd_info *do_map_probe(const char *name, struct map_info *map)
if (ret)
return ret;
return NULL;
}
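For context, do_map_probe() is what board map drivers call to run probes like the ones in this series. A minimal sketch of the usual calling pattern follows; the map name, size, bankwidth and the omitted ioremap()/phys setup are simplified assumptions, not part of this commit.

```c
#include <linux/module.h>
#include <linux/init.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/map.h>

/* Illustrative only: try CFI first, then fall back to JEDEC probing. */
static struct map_info board_map = {
	.name      = "board-flash",
	.size      = 0x400000,
	.bankwidth = 2,
	/* .phys / .virt setup via ioremap() omitted in this sketch */
};

static struct mtd_info *board_mtd;

static int __init board_flash_init(void)
{
	board_mtd = do_map_probe("cfi_probe", &board_map);
	if (!board_mtd)
		board_mtd = do_map_probe("jedec_probe", &board_map);
	if (!board_mtd)
		return -ENXIO;

	board_mtd->owner = THIS_MODULE;
	if (add_mtd_device(board_mtd)) {
		map_destroy(board_mtd);
		return -ENXIO;
	}
	return 0;
}
module_init(board_flash_init);
```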
/*
......
......@@ -25,7 +25,7 @@ struct fwh_xxlock_thunk {
* so this code has not been tested with interleaved chips,
* and will likely fail in that context.
*/
static int fwh_xxlock_oneblock(struct map_info *map, struct flchip *chip,
static int fwh_xxlock_oneblock(struct map_info *map, struct flchip *chip,
unsigned long adr, int len, void *thunk)
{
struct cfi_private *cfi = map->fldrv_priv;
......@@ -44,7 +44,7 @@ static int fwh_xxlock_oneblock(struct map_info *map, struct flchip *chip,
* - on 64k boundaries and
* - bit 1 set high
* - block lock registers are 4MiB lower - overflow subtract (danger)
*
*
* The address manipulation is first done on the logical address
* which is 0 at the start of the chip, and then the offset of
the individual chip is added to it. Any other order a weird
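Read literally, the addressing rule in the comment above works out to something like the fragment below; this is an illustration of the comment only, not the (truncated) fwh_xxlock_oneblock() body.

```c
/* Sketch of the described lock-register addressing:
 * 64KiB-aligned, bit 1 set, 4MiB below the block itself. */
unsigned long lock_adr;

lock_adr  = (adr & ~0xffffUL) | 0x2;   /* 64KiB boundary, bit 1 high */
lock_adr -= 0x400000UL;                /* registers sit 4MiB lower   */
lock_adr += chip->start;               /* then add the chip's own offset */
```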
......@@ -93,7 +93,7 @@ static int fwh_unlock_varsize(struct mtd_info *mtd, loff_t ofs, size_t len)
ret = cfi_varsize_frob(mtd, fwh_xxlock_oneblock, ofs, len,
(void *)&FWH_XXLOCK_ONEBLOCK_UNLOCK);
return ret;
}
......
......@@ -2,7 +2,7 @@
* Routines common to all CFI-type probes.
* (C) 2001-2003 Red Hat, Inc.
* GPL'd
* $Id: gen_probe.c,v 1.22 2005/01/24 23:49:50 rmk Exp $
* $Id: gen_probe.c,v 1.24 2005/11/07 11:14:23 gleixner Exp $
*/
#include <linux/kernel.h>
......@@ -26,7 +26,7 @@ struct mtd_info *mtd_do_chip_probe(struct map_info *map, struct chip_probe *cp)
/* First probe the map to see if we have CFI stuff there. */
cfi = genprobe_ident_chips(map, cp);
if (!cfi)
return NULL;
......@@ -36,12 +36,12 @@ struct mtd_info *mtd_do_chip_probe(struct map_info *map, struct chip_probe *cp)
mtd = check_cmd_set(map, 1); /* First the primary cmdset */
if (!mtd)
mtd = check_cmd_set(map, 0); /* Then the secondary */
if (mtd)
return mtd;
printk(KERN_WARNING"gen_probe: No supported Vendor Command Set found\n");
kfree(cfi->cfiq);
kfree(cfi);
map->fldrv_priv = NULL;
......@@ -60,14 +60,14 @@ static struct cfi_private *genprobe_ident_chips(struct map_info *map, struct chi
memset(&cfi, 0, sizeof(cfi));
/* Call the probetype-specific code with all permutations of
/* Call the probetype-specific code with all permutations of
interleave and device type, etc. */
if (!genprobe_new_chip(map, cp, &cfi)) {
/* The probe didn't like it */
printk(KERN_DEBUG "%s: Found no %s device at location zero\n",
cp->name, map->name);
return NULL;
}
}
#if 0 /* Let the CFI probe routine do this sanity check. The Intel and AMD
probe routines won't ever return a broken CFI structure anyway,
......@@ -92,13 +92,13 @@ static struct cfi_private *genprobe_ident_chips(struct map_info *map, struct chi
} else {
BUG();
}
cfi.numchips = 1;
/*
* Allocate memory for bitmap of valid chips.
* Align bitmap storage size to full byte.
*/
/*
* Allocate memory for bitmap of valid chips.
* Align bitmap storage size to full byte.
*/
max_chips = map->size >> cfi.chipshift;
mapsize = (max_chips / 8) + ((max_chips % 8) ? 1 : 0);
chip_map = kmalloc(mapsize, GFP_KERNEL);
......@@ -122,7 +122,7 @@ static struct cfi_private *genprobe_ident_chips(struct map_info *map, struct chi
}
/*
* Now allocate the space for the structures we need to return to
* Now allocate the space for the structures we need to return to
* our caller, and copy the appropriate data into them.
*/
......@@ -154,7 +154,7 @@ static struct cfi_private *genprobe_ident_chips(struct map_info *map, struct chi
return retcfi;
}
static int genprobe_new_chip(struct map_info *map, struct chip_probe *cp,
struct cfi_private *cfi)
{
......@@ -189,7 +189,7 @@ extern cfi_cmdset_fn_t cfi_cmdset_0001;
extern cfi_cmdset_fn_t cfi_cmdset_0002;
extern cfi_cmdset_fn_t cfi_cmdset_0020;
static inline struct mtd_info *cfi_cmdset_unknown(struct map_info *map,
static inline struct mtd_info *cfi_cmdset_unknown(struct map_info *map,
int primary)
{
struct cfi_private *cfi = map->fldrv_priv;
......@@ -199,7 +199,7 @@ static inline struct mtd_info *cfi_cmdset_unknown(struct map_info *map,
cfi_cmdset_fn_t *probe_function;
sprintf(probename, "cfi_cmdset_%4.4X", type);
probe_function = inter_module_get_request(probename, probename);
if (probe_function) {
......@@ -221,7 +221,7 @@ static struct mtd_info *check_cmd_set(struct map_info *map, int primary)
{
struct cfi_private *cfi = map->fldrv_priv;
__u16 type = primary?cfi->cfiq->P_ID:cfi->cfiq->A_ID;
if (type == P_ID_NONE || type == P_ID_RESERVED)
return NULL;
......@@ -235,6 +235,7 @@ static struct mtd_info *check_cmd_set(struct map_info *map, int primary)
#ifdef CONFIG_MTD_CFI_INTELEXT
case 0x0001:
case 0x0003:
case 0x0200:
return cfi_cmdset_0001(map, primary);
#endif
#ifdef CONFIG_MTD_CFI_AMDSTD
......
/*
/*
Common Flash Interface probe code.
(C) 2000 Red Hat. GPL'd.
$Id: jedec_probe.c,v 1.63 2005/02/14 16:30:32 bjd Exp $
$Id: jedec_probe.c,v 1.66 2005/11/07 11:14:23 gleixner Exp $
See JEDEC (http://www.jedec.org/) standard JESD21C (section 3.5)
for the standard this probe goes back to.
......@@ -1719,7 +1719,7 @@ static int jedec_probe_chip(struct map_info *map, __u32 base,
static struct mtd_info *jedec_probe(struct map_info *map);
static inline u32 jedec_read_mfr(struct map_info *map, __u32 base,
static inline u32 jedec_read_mfr(struct map_info *map, __u32 base,
struct cfi_private *cfi)
{
map_word result;
......@@ -1730,7 +1730,7 @@ static inline u32 jedec_read_mfr(struct map_info *map, __u32 base,
return result.x[0] & mask;
}
static inline u32 jedec_read_id(struct map_info *map, __u32 base,
static inline u32 jedec_read_id(struct map_info *map, __u32 base,
struct cfi_private *cfi)
{
map_word result;
......@@ -1741,7 +1741,7 @@ static inline u32 jedec_read_id(struct map_info *map, __u32 base,
return result.x[0] & mask;
}
static inline void jedec_reset(u32 base, struct map_info *map,
static inline void jedec_reset(u32 base, struct map_info *map,
struct cfi_private *cfi)
{
/* Reset */
......@@ -1765,7 +1765,7 @@ static inline void jedec_reset(u32 base, struct map_info *map,
* so ensure we're in read mode. Send both the Intel and the AMD command
* for this. Intel uses 0xff for this, AMD uses 0xff for NOP, so
* this should be safe.
*/
*/
cfi_send_gen_cmd(0xFF, 0, base, map, cfi, cfi->device_type, NULL);
/* FIXME - should have reset delay before continuing */
}
......@@ -1807,14 +1807,14 @@ static int cfi_jedec_setup(struct cfi_private *p_cfi, int index)
printk("Found: %s\n",jedec_table[index].name);
num_erase_regions = jedec_table[index].NumEraseRegions;
p_cfi->cfiq = kmalloc(sizeof(struct cfi_ident) + num_erase_regions * 4, GFP_KERNEL);
if (!p_cfi->cfiq) {
//xx printk(KERN_WARNING "%s: kmalloc failed for CFI ident structure\n", map->name);
return 0;
}
memset(p_cfi->cfiq,0,sizeof(struct cfi_ident));
memset(p_cfi->cfiq,0,sizeof(struct cfi_ident));
p_cfi->cfiq->P_ID = jedec_table[index].CmdSet;
p_cfi->cfiq->NumEraseRegions = jedec_table[index].NumEraseRegions;
......@@ -1969,7 +1969,7 @@ static inline int jedec_match( __u32 base,
cfi_send_gen_cmd(0x90, cfi->addr_unlock1, base, map, cfi, cfi->device_type, NULL);
/* FIXME - should have a delay before continuing */
match_done:
match_done:
return rc;
}
......@@ -1998,23 +1998,23 @@ static int jedec_probe_chip(struct map_info *map, __u32 base,
"Probe at base(0x%08x) past the end of the map(0x%08lx)\n",
base, map->size -1);
return 0;
}
/* Ensure the unlock addresses we try stay inside the map */
probe_offset1 = cfi_build_cmd_addr(
cfi->addr_unlock1,
cfi_interleave(cfi),
cfi->addr_unlock1,
cfi_interleave(cfi),
cfi->device_type);
probe_offset2 = cfi_build_cmd_addr(
cfi->addr_unlock1,
cfi_interleave(cfi),
cfi->addr_unlock1,
cfi_interleave(cfi),
cfi->device_type);
if ( ((base + probe_offset1 + map_bankwidth(map)) >= map->size) ||
((base + probe_offset2 + map_bankwidth(map)) >= map->size))
{
goto retry;
}
/* Reset */
jedec_reset(base, map, cfi);
......@@ -2027,13 +2027,13 @@ static int jedec_probe_chip(struct map_info *map, __u32 base,
/* FIXME - should have a delay before continuing */
if (!cfi->numchips) {
/* This is the first time we're called. Set up the CFI
/* This is the first time we're called. Set up the CFI
stuff accordingly and return */
cfi->mfr = jedec_read_mfr(map, base, cfi);
cfi->id = jedec_read_id(map, base, cfi);
DEBUG(MTD_DEBUG_LEVEL3,
"Search for id:(%02x %02x) interleave(%d) type(%d)\n",
"Search for id:(%02x %02x) interleave(%d) type(%d)\n",
cfi->mfr, cfi->id, cfi_interleave(cfi), cfi->device_type);
for (i=0; i<sizeof(jedec_table)/sizeof(jedec_table[0]); i++) {
if ( jedec_match( base, map, cfi, &jedec_table[i] ) ) {
......@@ -2062,7 +2062,7 @@ static int jedec_probe_chip(struct map_info *map, __u32 base,
return 0;
}
}
/* Check each previous chip location to see if it's an alias */
for (i=0; i < (base >> cfi->chipshift); i++) {
unsigned long start;
......@@ -2083,7 +2083,7 @@ static int jedec_probe_chip(struct map_info *map, __u32 base,
map->name, base, start);
return 0;
}
/* Yes, it's actually got the device IDs as data. Most
* unfortunate. Stick the new chip in read mode
* too and if it's the same, assume it's an alias. */
......@@ -2097,20 +2097,20 @@ static int jedec_probe_chip(struct map_info *map, __u32 base,
}
}
}
/* OK, if we got to here, then none of the previous chips appear to
be aliases for the current one. */
set_bit((base >> cfi->chipshift), chip_map); /* Update chip map */
cfi->numchips++;
ok_out:
/* Put it back into Read Mode */
jedec_reset(base, map, cfi);
printk(KERN_INFO "%s: Found %d x%d devices at 0x%x in %d-bit bank\n",
map->name, cfi_interleave(cfi), cfi->device_type*8, base,
map->name, cfi_interleave(cfi), cfi->device_type*8, base,
map->bankwidth*8);
return 1;
}
......
/*
* Common code to handle absent "placeholder" devices
* Copyright 2001 Resilience Corporation <ebrower@resilience.com>
* $Id: map_absent.c,v 1.5 2004/11/16 18:29:00 dwmw2 Exp $
* $Id: map_absent.c,v 1.6 2005/11/07 11:14:23 gleixner Exp $
*
* This map driver is used to allocate "placeholder" MTD
* devices on systems that have socketed/removable media.
* Use of this driver as a fallback preserves the expected
* devices on systems that have socketed/removable media.
* Use of this driver as a fallback preserves the expected
* registration of MTD device nodes regardless of probe outcome.
* A usage example is as follows:
*
......@@ -80,7 +80,7 @@ static int map_absent_read(struct mtd_info *mtd, loff_t from, size_t len, size_t
static int map_absent_write(struct mtd_info *mtd, loff_t to, size_t len, size_t *retlen, const u_char *buf)
{
*retlen = 0;
return -ENODEV;
return -ENODEV;
}
static int map_absent_erase(struct mtd_info *mtd, struct erase_info *instr)
......
......@@ -4,7 +4,7 @@
* Copyright 2000,2001 David A. Schleef <ds@schleef.org>
* 2000,2001 Lineo, Inc.
*
* $Id: sharp.c,v 1.14 2004/08/09 13:19:43 dwmw2 Exp $
* $Id: sharp.c,v 1.16 2005/11/07 11:14:23 gleixner Exp $
*
* Devices supported:
* LH28F016SCT Symmetrical block flash memory, 2Mx8
......@@ -31,6 +31,7 @@
#include <linux/mtd/cfi.h>
#include <linux/delay.h>
#include <linux/init.h>
#include <linux/slab.h>
#define CMD_RESET 0xffffffff
#define CMD_READ_ID 0x90909090
......@@ -214,7 +215,7 @@ static int sharp_probe_map(struct map_info *map,struct mtd_info *mtd)
/* This function returns with the chip->mutex lock held. */
static int sharp_wait(struct map_info *map, struct flchip *chip)
{
__u16 status;
int status, i;
unsigned long timeo = jiffies + HZ;
DECLARE_WAITQUEUE(wait, current);
int adr = 0;
......@@ -227,13 +228,11 @@ retry:
map_write32(map,CMD_READ_STATUS,adr);
chip->state = FL_STATUS;
case FL_STATUS:
status = map_read32(map,adr);
//printk("status=%08x\n",status);
udelay(100);
if((status & SR_READY)!=SR_READY){
//printk(".status=%08x\n",status);
udelay(100);
for(i=0;i<100;i++){
status = map_read32(map,adr);
if((status & SR_READY)==SR_READY)
break;
udelay(1);
}
break;
default:
......@@ -460,12 +459,12 @@ static int sharp_do_wait_for_ready(struct map_info *map, struct flchip *chip,
remove_wait_queue(&chip->wq, &wait);
//spin_lock_bh(chip->mutex);
if (signal_pending(current)){
ret = -EINTR;
goto out;
}
}
ret = -ETIME;
out:
......@@ -564,7 +563,7 @@ static int sharp_suspend(struct mtd_info *mtd)
static void sharp_resume(struct mtd_info *mtd)
{
printk("sharp_resume()\n");
}
static void sharp_destroy(struct mtd_info *mtd)
......
/*
* $Id: cmdlinepart.c,v 1.18 2005/06/07 15:04:26 joern Exp $
* $Id: cmdlinepart.c,v 1.19 2005/11/07 11:14:19 gleixner Exp $
*
* Read flash partition table from command line
*
* Copyright 2002 SYSGO Real-Time Solutions GmbH
*
* The format for the command line is as follows:
*
*
* mtdparts=<mtddef>[;<mtddef]
* <mtddef> := <mtd-id>:<partdef>[,<partdef>]
* <partdef> := <size>[@offset][<name>][ro]
* <mtd-id> := unique name used in mapping driver/device (mtd->name)
* <size> := standard linux memsize OR "-" to denote all remaining space
* <name> := '(' NAME ')'
*
*
* Examples:
*
*
* 1 NOR Flash, with 1 single writable partition:
* edb7312-nor:-
*
*
* 1 NOR Flash with 2 partitions, 1 NAND with one
* edb7312-nor:256k(ARMboot)ro,-(root);edb7312-nand:-(home)
*/
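Reading the second example above: edb7312-nor gets a 256KiB read-only partition named "ARMboot" at offset 0 followed by a partition named "root" taking the remaining space, while edb7312-nand gets a single partition named "home" spanning the whole device.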
......@@ -60,17 +60,17 @@ static int cmdline_parsed = 0;
/*
* Parse one partition definition for an MTD. Since there can be many
* comma separated partition definitions, this function calls itself
* comma separated partition definitions, this function calls itself
* recursively until no more partition definitions are found. Nice side
* effect: the memory to keep the mtd_partition structs and the names
* is allocated upon the last definition being found. At that point the
* syntax has been verified ok.
*/
static struct mtd_partition * newpart(char *s,
static struct mtd_partition * newpart(char *s,
char **retptr,
int *num_parts,
int this_part,
unsigned char **extra_mem_ptr,
int this_part,
unsigned char **extra_mem_ptr,
int extra_mem_size)
{
struct mtd_partition *parts;
......@@ -102,7 +102,7 @@ static struct mtd_partition * newpart(char *s,
mask_flags = 0; /* this is going to be a regular partition */
delim = 0;
/* check for offset */
if (*s == '@')
if (*s == '@')
{
s++;
offset = memparse(s, &s);
......@@ -112,7 +112,7 @@ static struct mtd_partition * newpart(char *s,
{
delim = ')';
}
if (delim)
{
char *p;
......@@ -131,12 +131,12 @@ static struct mtd_partition * newpart(char *s,
name = NULL;
name_len = 13; /* Partition_000 */
}
/* record name length for memory allocation later */
extra_mem_size += name_len + 1;
/* test for options */
if (strncmp(s, "ro", 2) == 0)
if (strncmp(s, "ro", 2) == 0)
{
mask_flags |= MTD_WRITEABLE;
s += 2;
......@@ -151,7 +151,7 @@ static struct mtd_partition * newpart(char *s,
return NULL;
}
/* more partitions follow, parse them */
if ((parts = newpart(s + 1, &s, num_parts,
if ((parts = newpart(s + 1, &s, num_parts,
this_part + 1, &extra_mem, extra_mem_size)) == 0)
return NULL;
}
......@@ -187,7 +187,7 @@ static struct mtd_partition * newpart(char *s,
extra_mem += name_len + 1;
dbg(("partition %d: name <%s>, offset %x, size %x, mask flags %x\n",
this_part,
this_part,
parts[this_part].name,
parts[this_part].offset,
parts[this_part].size,
......@@ -204,8 +204,8 @@ static struct mtd_partition * newpart(char *s,
return parts;
}
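The recursion/allocation trick described in the comment above is easier to see in isolation: the deepest call (the last partition definition) allocates the whole array, and each level fills its own slot as the stack unwinds. A standalone user-space sketch of that pattern, not the kernel code itself:

```c
#include <stdlib.h>
#include <string.h>

struct part { const char *name; };

/* Parse a comma-separated list; the deepest call allocates the array,
 * then each level fills its own slot while unwinding. */
static struct part *parse(char *s, int idx, int *count)
{
	struct part *parts;
	char *comma = strchr(s, ',');

	if (comma) {
		*comma = '\0';
		parts = parse(comma + 1, idx + 1, count);  /* recurse first */
		if (!parts)
			return NULL;
	} else {
		*count = idx + 1;                          /* last definition: allocate */
		parts = calloc(*count, sizeof(*parts));
		if (!parts)
			return NULL;
	}
	parts[idx].name = s;                               /* fill own slot on unwind */
	return parts;
}
```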
/*
* Parse the command line.
/*
* Parse the command line.
*/
static int mtdpart_setup_real(char *s)
{
......@@ -230,7 +230,7 @@ static int mtdpart_setup_real(char *s)
dbg(("parsing <%s>\n", p+1));
/*
/*
* parse one mtd. have it reserve memory for the
* struct cmdline_mtd_partition and the mtd-id string.
*/
......@@ -239,7 +239,7 @@ static int mtdpart_setup_real(char *s)
&num_parts, /* out: number of parts */
0, /* first partition */
(unsigned char**)&this_mtd, /* out: extra mem */
mtd_id_len + 1 + sizeof(*this_mtd) +
mtd_id_len + 1 + sizeof(*this_mtd) +
sizeof(void*)-1 /*alignment*/);
if(!parts)
{
......@@ -254,21 +254,21 @@ static int mtdpart_setup_real(char *s)
}
/* align this_mtd */
this_mtd = (struct cmdline_mtd_partition *)
this_mtd = (struct cmdline_mtd_partition *)
ALIGN((unsigned long)this_mtd, sizeof(void*));
/* enter results */
/* enter results */
this_mtd->parts = parts;
this_mtd->num_parts = num_parts;
this_mtd->mtd_id = (char*)(this_mtd + 1);
strlcpy(this_mtd->mtd_id, mtd_id, mtd_id_len + 1);
/* link into chain */
this_mtd->next = partitions;
this_mtd->next = partitions;
partitions = this_mtd;
dbg(("mtdid=<%s> num_parts=<%d>\n",
dbg(("mtdid=<%s> num_parts=<%d>\n",
this_mtd->mtd_id, this_mtd->num_parts));
/* EOS - we're done */
if (*s == 0)
......@@ -292,7 +292,7 @@ static int mtdpart_setup_real(char *s)
* information. It returns partitions for the requested mtd device, or
* the first one in the chain if a NULL mtd_id is passed in.
*/
static int parse_cmdline_partitions(struct mtd_info *master,
static int parse_cmdline_partitions(struct mtd_info *master,
struct mtd_partition **pparts,
unsigned long origin)
{
......@@ -322,7 +322,7 @@ static int parse_cmdline_partitions(struct mtd_info *master,
part->parts[i].size = master->size - offset;
if (offset + part->parts[i].size > master->size)
{
printk(KERN_WARNING ERRP
printk(KERN_WARNING ERRP
"%s: partitioning exceeds flash size, truncating\n",
part->mtd_id);
part->parts[i].size = master->size - offset;
......@@ -338,8 +338,8 @@ static int parse_cmdline_partitions(struct mtd_info *master,
}
/*
* This is the handler for our kernel parameter, called from
/*
* This is the handler for our kernel parameter, called from
* main.c::checksetup(). Note that we can not yet kmalloc() anything,
* so we only save the commandline for later processing.
*
......
# drivers/mtd/maps/Kconfig
# $Id: Kconfig,v 1.15 2004/12/22 17:51:15 joern Exp $
# $Id: Kconfig,v 1.18 2005/11/07 11:14:24 gleixner Exp $
menu "Self-contained MTD device drivers"
depends on MTD!=n
......@@ -110,7 +110,7 @@ config MTDRAM_ABS_POS
If you have system RAM accessible by the CPU but not used by Linux
in normal operation, you can give the physical address at which the
available RAM starts, and the MTDRAM driver will use it instead of
allocating space from Linux's available memory. Otherwise, leave
allocating space from Linux's available memory. Otherwise, leave
this set to zero. Most people will want to leave this as zero.
config MTD_BLKMTD
......@@ -165,7 +165,7 @@ config MTD_DOC2001
select MTD_DOCPROBE
select MTD_NAND_IDS
---help---
This provides an alternative MTD device driver for the M-Systems
This provides an alternative MTD device driver for the M-Systems
DiskOnChip Millennium devices. Use this if you have problems with
the combined DiskOnChip 2000 and Millennium driver above. To get
the DiskOnChip probe code to load and use this driver instead of
......@@ -192,7 +192,7 @@ config MTD_DOC2001PLUS
If you use this device, you probably also want to enable the INFTL
'Inverse NAND Flash Translation Layer' option below, which is used
to emulate a block device by using a kind of file system on the
to emulate a block device by using a kind of file system on the
flash chips.
NOTE: This driver will soon be replaced by the new DiskOnChip driver
......