a module, say M here and read Documentation/modules.txt. If unsure,
say N.
+ACP Modem (Mwave) support
+CONFIG_MWAVE
+ The ACP modem (Mwave) for Linux is a WinModem. It is composed of a
+ kernel driver and a user level application. Together these components
+ support direct attachment to public switched telephone networks (PSTNs)
+ and are supported in selected countries worldwide.
+
+ This version of the ACP Modem driver supports the IBM Thinkpad 600E,
+ 600, and 770 models that include on-board ACP modem hardware.
+
+ The modem also supports the standard communications port interface
+ (ttySx) and is compatible with the Hayes AT Command Set.
+
+ The user level application needed to use this driver can be found at
+ the IBM Linux Technology Center (LTC) web site:
+ http://www.ibm.com/linux/ltc/
+
+ If you own one of the above IBM Thinkpads which has the Mwave chipset
+ in it, say Y.
+
+ This driver is also available as a module ( = code which can be
+ inserted in and removed from the running kernel whenever you want).
+ The module will be called mwave.o. If you want to compile it as
+ a module, say M here and read Documentation/modules.txt.
+
/dev/agpgart (AGP Support) (EXPERIMENTAL)
CONFIG_AGP
AGP (Accelerated Graphics Port) is a bus system mainly used to
The module is called machzwd.o. If you want to compile it as a module,
say M here and read Documentation/modules.txt.
+SuperH 3/4 Watchdog
+CONFIG_SH_WDT
+ This driver adds watchdog support for the integrated watchdog in the
+ SuperH 3 and 4 processors. If you have one of these processors, say Y,
+ otherwise say N.
+
+ This driver is also available as a module ( = code which can be
+ inserted in and removed from the running kernel whenever you want).
+ The module is called shwdt.o. If you want to compile it as a module,
+ say M here and read Documentation/modules.txt.
+
Toshiba Laptop support
CONFIG_TOSHIBA
This adds a driver to safely access the System Management Mode
found on many Sun machines. Note that many of the newer Ultras
actually have pc style hardware instead.
+# The following options are for Linux when running on the Hitachi
+# SuperH family of RISC microprocessors.
+
+CPU Selection
+CONFIG_CPU_SUBTYPE_SH7707
+ This is the type of your Hitachi SuperH processor. This information is
+ used for optimization and configuration purposes.
+
+ - "SH7707" for SH7707
+ - "SH7708" for SH7708, SH7708S, SH7708R
+ - "SH7709" for SH7707, SH7709, SH7709A, and SH7729.
+ - "SH7750" for SH7750, SH7750S
+ - "SH7751" for SH7751
+ - "ST40STB1" for ST40STB1
+
+Target machine selection
+CONFIG_SH_GENERIC
+ This is the machine type of your target.
+
+ - "Generic" for a generic kernel which might support all of them
+ - "SolutionEngine" for Hitachi SolutionEngine (7709A, 7750, 7750S)
+ - "SolutionEngine7751" for Hitachi SolutionEngine (7751)
+ - "STB1_Harp" for STMicroelectronics HARP
+ - "STB1_Overdrive" for STMicroelectronics Overdrive
+ - "HP620" for HP 'Jornada' 620
+ - "HP680" for HP 'Jornada' 680
+ - "HP690" for HP 'Jornada' 690
+ - "CqREEK" for CQ Publishing CqREEK SH-4
+ - "DMIDA" for DMIDA, industrial data assistant
+ - "EC3104" for Compaq Aero 8000
+ - "Dreamcast" for SEGA Dreamcast
+ - "CAT68701" for CAT 68701 Evaluation Board (SH7708)
+ - "BigSur" for Big Sur Evaluation Board
+ - "SH2000" for SH2000 Evaluation Board (SH7709A)
+ - "ADX" for A&D ADX
+ - "BareCPU" for Bare CPU board such as CqREEK SH-3
+
+ If unsure, select "BareCPU".
+
+Physical memory start address
+CONFIG_MEMORY_START
+ Computers built with Hitachi SuperH processors always
+ map the ROM starting at address zero. But the processor
+ does not specify the range that RAM takes. RAM is usually
+ mapped starting at 0c000000, but it may be elsewhere.
+
+ You should set this value to the address of the lowest
+ RAM location.
+
+ A value of 0c000000 will work for most boards.
+
+Directly Connected Compact Flash support
+CONFIG_CF_ENABLER
+ If your board has "Directly Connected" CompactFlash at area 5 or 6,
+ you may want to enable this option. You can then use the CF card as
+ the primary IDE drive (only tested with SanDisk cards).
+
+ If in doubt, say N.
+
+SuperH RTC support
+CONFIG_SH_RTC
+ Selecting this option will allow the Linux kernel to emulate the
+ PC's RTC.
+
+ If unsure, say N.
+
+SuperH DMAC support
+CONFIG_SH_DMA
+ Selecting this option will provide the same API as the PC's Direct
+ Memory Access Controller (8237A) for the SuperH DMAC.
+
+ If unsure, say N.
+
+SuperH SCI (serial) support
+CONFIG_SH_SCI
+ Selecting this option will allow the Linux kernel to transfer
+ data over SCI (Serial Communication Interface) and/or SCIF
+ which are built into the Hitachi SuperH processor.
+
+ If unsure, say N.
+
+Use LinuxSH standard BIOS
+CONFIG_SH_STANDARD_BIOS
+ Say Y here if your target has the gdb-sh-stub package from
+ www.m17n.org (or any conforming standard LinuxSH BIOS) in FLASH
+ or EPROM. The kernel will use standard BIOS calls during boot
+ for various housekeeping tasks. Note this does not work with
+ WindowsCE machines. If unsure, say N.
+
+Early printk support
+CONFIG_SH_EARLY_PRINTK
+ If you say Y here, the kernel printk routine will begin output to
+ the console much earlier in the boot process, before the serial
+ console is initialised, instead of buffering output. Standard
+ LinuxSH BIOS calls are used for the output. This helps when
+ debugging fatal problems early in the boot sequence. This is only
+ useful for kernel hackers. If unsure, say N.
+
+National Semiconductor DP83902AV 'ST-NIC' support
+CONFIG_STNIC
+ If you have a network adaptor based on the National Semiconductor
+ DP83902AV, say Y or M (for module).
+
+ If unsure, say N.
+
+CompactFlash Connection Area
+CONFIG_CF_AREA5
+ If your board has "Directly Connected" CompactFlash, you should
+ select the area to which your CF is connected.
+
+ - "Area5" if CompactFlash is connected to Area 5 (0x14000000)
+ - "Area6" if it is connected to Area 6 (0x18000000)
+
+ "Area6" will work for most boards. For ADX, select "Area5".
+
#
# m68k-specific kernel options
# Documented by Chris Lawrence <quango@themall.net> et al.
- MFM hard drive (drivers/acorn/block/mfmhd.c)
-- I2O block device (drivers/i2o/i2o_block.c)
+- I2O block device (drivers/message/i2o/i2o_block.c)
- ST-RAM device (arch/m68k/atari/stram.c)
in order to let the driver access to the camera
fnkeyinit: on some Vaios (C1VE, C1VR etc), the Fn key events don't
- get enabled unless you set this parameter to 1
+ get enabled unless you set this parameter to 1.
+ Do not use this option unless it's actually necessary;
+ some Vaio models don't deal well with it.
verbose: print unknown events from the sonypi device
lines in your /etc/modules.conf file:
alias char-major-10-250 sonypi
- options sonypi minor=250 fnkeyinit=1
+ options sonypi minor=250
This supposes the use of minor 250 for the sonypi device:
VERSION = 2
PATCHLEVEL = 4
SUBLEVEL = 13
-EXTRAVERSION =-pre2
+EXTRAVERSION =-pre3
KERNELRELEASE=$(VERSION).$(PATCHLEVEL).$(SUBLEVEL)$(EXTRAVERSION)
DRIVERS-$(CONFIG_TC) += drivers/tc/tc.a
DRIVERS-$(CONFIG_USB) += drivers/usb/usbdrv.o
DRIVERS-$(CONFIG_INPUT) += drivers/input/inputdrv.o
-DRIVERS-$(CONFIG_I2O) += drivers/i2o/i2o.o
+DRIVERS-$(CONFIG_I2O) += drivers/message/i2o/i2o.o
DRIVERS-$(CONFIG_IRDA) += drivers/net/irda/irda.o
DRIVERS-$(CONFIG_I2C) += drivers/i2c/i2c.o
DRIVERS-$(CONFIG_PHONE) += drivers/telephony/telephony.o
source drivers/ieee1394/Config.in
-source drivers/i2o/Config.in
+source drivers/message/i2o/Config.in
if [ "$CONFIG_NET" = "y" ]; then
mainmenu_option next_comment
source drivers/ieee1394/Config.in
-source drivers/i2o/Config.in
+source drivers/message/i2o/Config.in
if [ "$CONFIG_NET" = "y" ]; then
mainmenu_option next_comment
# PCMCIA character devices
#
# CONFIG_PCMCIA_SERIAL_CS is not set
+# CONFIG_MWAVE is not set
#
# Multimedia devices
break;
case X86_VENDOR_CENTAUR:
- /* Cyrix III has Intel style MTRRs, but doesn't support PAE */
- if (boot_cpu_data.x86 == 6 &&
- (boot_cpu_data.x86_model == 6 ||
- boot_cpu_data.x86_model == 7)) {
+ /* VIA Cyrix family have Intel style MTRRs, but don't support PAE */
+ if (boot_cpu_data.x86 == 6) {
size_or_mask = 0xfff00000; /* 32 bits */
size_and_mask = 0;
}
case 6:
switch (c->x86_model) {
- case 6 ... 7: /* Cyrix III or C3 */
+ case 6 ... 8: /* Cyrix III family */
rdmsr (MSR_VIA_FCR, lo, hi);
lo |= (1<<1 | 1<<7); /* Report CX8 & enable PGE */
wrmsr (MSR_VIA_FCR, lo, hi);
source drivers/mtd/Config.in
source drivers/pnp/Config.in
source drivers/block/Config.in
-source drivers/i2o/Config.in
+source drivers/message/i2o/Config.in
source drivers/md/Config.in
mainmenu_option next_comment
if [ "$CONFIG_DECSTATION" != "y" -a \
"$CONFIG_SGI_IP22" != "y" ]; then
- source drivers/i2o/Config.in
+ source drivers/message/i2o/Config.in
fi
if [ "$CONFIG_NET" = "y" ]; then
bool 'CPU6 Silicon Errata (860 Pre Rev. C)' CONFIG_8xx_CPU6
bool 'I2C/SPI Microcode Patch' CONFIG_UCODE_PATCH
-if [ "$CONFIG_IDE" = "y" ]; then
- bool 'MPC8xx direct IDE support on PCMCIA port' CONFIG_BLK_DEV_MPC8xx_IDE
-fi
endmenu
/*
- * BK Id: SCCS/s.misc.c 1.18 07/30/01 17:19:40 trini
+ * BK Id: SCCS/s.misc.c 1.20 09/24/01 18:42:54 trini
*
* arch/ppc/boot/prep/misc.c
*
RESIDUAL hold_resid_buf;
RESIDUAL *hold_residual = &hold_resid_buf;
unsigned long initrd_start = 0, initrd_end = 0;
+
+/* These values must be variables. If not, the compiler optimizer
+ * will remove some code, causing the size of the code to vary
+ * when these values are zero. This is bad because we first
+ * compile with these zero to determine the size and offsets
+ * in an image, then compile again with these set to the proper
+ * discovered value.
+ */
+unsigned int initrd_offset, initrd_size;
char *zimage_start;
int zimage_size;
size of the elf header which we strip -- Cort */
zimage_start = (char *)(load_addr - 0x10000 + ZIMAGE_OFFSET);
zimage_size = ZIMAGE_SIZE;
+ initrd_offset = INITRD_OFFSET;
+ initrd_size = INITRD_SIZE;
- if ( INITRD_OFFSET )
- initrd_start = load_addr - 0x10000 + INITRD_OFFSET;
+ if ( initrd_offset )
+ initrd_start = load_addr - 0x10000 + initrd_offset;
else
initrd_start = 0;
- initrd_end = INITRD_SIZE + initrd_start;
+ initrd_end = initrd_size + initrd_start;
/*
* Find a place to stick the zimage and initrd and
puts(" "); puthex(initrd_end); puts("\n");
avail_ram = (char *)PAGE_ALIGN(
(unsigned long)zimage_size+(unsigned long)zimage_start);
- memcpy ((void *)avail_ram, (void *)initrd_start, INITRD_SIZE );
+ memcpy ((void *)avail_ram, (void *)initrd_start, initrd_size );
initrd_start = (unsigned long)avail_ram;
- initrd_end = initrd_start + INITRD_SIZE;
+ initrd_end = initrd_start + initrd_size;
puts("relocated to: "); puthex(initrd_start);
puts(" "); puthex(initrd_end); puts("\n");
}
/*
- * BK Id: SCCS/s.prep_pci.c 1.26 09/08/01 15:47:42 paulus
+ * BK Id: SCCS/s.prep_pci.c 1.31 10/05/01 17:48:18 trini
*/
/*
* PReP pci functions.
/* Which PCI interrupt line does a given device [slot] use? */
/* Note: This really should be two dimensional based in slot/pin used */
-unsigned char *Motherboard_map;
+static unsigned char *Motherboard_map;
unsigned char *Motherboard_map_name;
/* How is the 82378 PIRQ mapping setup? */
-unsigned char *Motherboard_routes;
+static unsigned char *Motherboard_routes;
-void (*Motherboard_non0)(struct pci_dev *);
+static void (*Motherboard_non0)(struct pci_dev *);
-void Powerplus_Map_Non0(struct pci_dev *);
+static void Powerplus_Map_Non0(struct pci_dev *);
/* Used for Motorola to store system config register */
static unsigned long *ProcInfo;
0, /* Slot 21 - */
2, /* Slot 22 - */
};
+
static char ibm6015_pci_IRQ_routes[] __prepdata = {
0, /* Line 0 - unused */
13, /* Line 1 */
- 10, /* Line 2 */
+ 15, /* Line 2 */
15, /* Line 3 */
15, /* Line 4 */
};
-/* IBM Nobis and 850 */
+/* IBM Nobis and Thinkpad 850 */
static char Nobis_pci_IRQ_map[23] __prepdata ={
0, /* Slot 0 - unused */
0, /* Slot 1 - unused */
* are routed to OpenPIC inputs 5-8. These values are offset by
* 16 in the table to reflect the Linux kernel interrupt value.
*/
-struct powerplus_irq_list Powerplus_pci_IRQ_list =
+struct powerplus_irq_list Powerplus_pci_IRQ_list __prepdata =
{
{25, 26, 27, 28},
{21, 22, 23, 24}
* are routed to OpenPIC inputs 12-15. These values are offset by
* 16 in the table to reflect the Linux kernel interrupt value.
*/
-struct powerplus_irq_list Mesquite_pci_IRQ_list =
+struct powerplus_irq_list Mesquite_pci_IRQ_list __prepdata =
{
{24, 25, 26, 27},
{28, 29, 30, 31}
* This table represents the standard PCI swizzle defined in the
* PCI bus specification.
*/
-static unsigned char prep_pci_intpins[4][4] =
+static unsigned char prep_pci_intpins[4][4] __prepdata =
{
{ 1, 2, 3, 4}, /* Buses 0, 4, 8, ... */
{ 2, 3, 4, 1}, /* Buses 1, 5, 9, ... */
* other than hard-coded as well... IRQ's are individually mappable
* to either edge or level.
*/
-#define CAROLINA_IRQ_EDGE_MASK_LO 0x00 /* IRQ's 0-7 */
-#define CAROLINA_IRQ_EDGE_MASK_HI 0xA4 /* IRQ's 8-15 [10,13,15] */
/*
* 8259 edge/level control definitions
int MotMPIC;
int mot_multi;
-int __init raven_init(void)
+int __init
+raven_init(void)
{
unsigned int devid;
unsigned int pci_membase;
void (*map_non0_bus)(struct pci_dev *); /* For boards with more than bus 0 devices. */
struct powerplus_irq_list *pci_irq_list; /* List of PCI MPIC inputs */
unsigned char secondary_bridge_devfn; /* devfn of secondary bus transparent bridge */
-} mot_info[] = {
+} mot_info[] __prepdata = {
{0x300, 0x00, 0x00, "MVME 2400", Genesis2_pci_IRQ_map, Raven_pci_IRQ_routes, Powerplus_Map_Non0, &Powerplus_pci_IRQ_list, 0xFF},
{0x010, 0x00, 0x00, "Genesis", Genesis_pci_IRQ_map, Genesis_pci_IRQ_routes, Powerplus_Map_Non0, &Powerplus_pci_IRQ_list, 0x00},
{0x020, 0x00, 0x00, "Powerstack (Series E)", Comet_pci_IRQ_map, Comet_pci_IRQ_routes, NULL, NULL, 0x00},
{0x000, 0x00, 0x00, "", NULL, NULL, NULL, NULL, 0x00}
};
-void ibm_prep_init(void)
+void __init
+ibm_prep_init(void)
{
u32 addr;
#ifdef CONFIG_PREP_RESIDUAL
#ifdef CONFIG_PREP_RESIDUAL
mpic = residual_find_device(-1, NULL, SystemPeripheral,
ProgrammableInterruptController, MPIC, 0);
- if (mpic != NULL) {
+ if (mpic != NULL)
printk("mpic = %p\n", mpic);
- }
#endif
}
-void
+static void __init
ibm43p_pci_map_non0(struct pci_dev *dev)
{
unsigned char intpin;
pci_write_config_byte(dev, PCI_INTERRUPT_LINE, dev->irq);
}
-void __init prep_route_pci_interrupts(void)
+void __init
+prep_route_pci_interrupts(void)
{
unsigned char *ibc_pirq = (unsigned char *)0x80800860;
unsigned char *ibc_pcicon = (unsigned char *)0x80800840;
/* AJF adjust level/edge control according to routes */
irq_mode = 0;
for (i = 1; i <= 4; i++)
- {
irq_mode |= ( 1 << Motherboard_routes[i] );
- }
outb( irq_mode & 0xff, 0x4d0 );
outb( (irq_mode >> 8) & 0xff, 0x4d1 );
}
- } else if ( _prep_type == _PREP_IBM )
- {
- unsigned char pl_id;
- /*
- * my carolina is 0xf0
- * 6015 has 0xfc
- * -- Cort
- */
- printk("IBM ID: %08x\n", inb(0x0852));
- switch(inb(0x0852))
- {
+ } else if ( _prep_type == _PREP_IBM ) {
+ unsigned char planar_id = inb(0x0852);
+ unsigned char irq_edge_mask_lo, irq_edge_mask_hi;
+
+ printk("IBM ID: %08x\n", planar_id);
+ switch(planar_id) {
case 0xff:
- Motherboard_map_name = "IBM 850/860 Portable";
+ Motherboard_map_name = "IBM Thinkpad 850/860";
Motherboard_map = Nobis_pci_IRQ_map;
Motherboard_routes = Nobis_pci_IRQ_routes;
+ irq_edge_mask_lo = 0x00; /* irq's 0-7 all edge-triggered */
+ irq_edge_mask_hi = 0xA0; /* irq's 13, 15 level-triggered */
break;
case 0xfc:
- Motherboard_map_name = "IBM 6015";
+ Motherboard_map_name = "IBM 6015/7020 (Sandalfoot/Sandalbow)";
Motherboard_map = ibm6015_pci_IRQ_map;
Motherboard_routes = ibm6015_pci_IRQ_routes;
+ irq_edge_mask_lo = 0x00; /* irq's 0-7 all edge-triggered */
+ irq_edge_mask_hi = 0xA0; /* irq's 13, 15 level-triggered */
break;
case 0xd5:
- Motherboard_map_name = "IBM 43p/140";
+ Motherboard_map_name = "IBM 43P-140 (Tiger1)";
Motherboard_map = ibm43p_pci_IRQ_map;
Motherboard_routes = ibm43p_pci_IRQ_routes;
Motherboard_non0 = ibm43p_pci_map_non0;
+ irq_edge_mask_lo = 0x00; /* irq's 0-7 all edge-triggered */
+ irq_edge_mask_hi = 0xA0; /* irq's 13, 15 level-triggered */
break;
default:
- Motherboard_map_name = "IBM 8xx (Carolina)";
+ printk(KERN_ERR "Unknown IBM motherboard! Defaulting to Carolina.\n");
+ case 0xf0: /* PowerSeries 830/850 */
+ case 0xf1: /* PowerSeries 830/850 */
+ case 0xf2: /* PowerSeries 830/850 */
+ case 0xf4: /* 7248-43P */
+ case 0xf5: /* 7248-43P */
+ case 0xf6: /* 7248-43P */
+ case 0xf7: /* 7248-43P (missing from Carolina Tech Spec) */
+ Motherboard_map_name = "IBM PS830/PS850/7248 (Carolina)";
Motherboard_map = ibm8xx_pci_IRQ_map;
Motherboard_routes = ibm8xx_pci_IRQ_routes;
+ irq_edge_mask_lo = 0x00; /* irq's 0-7 all edge-triggered */
+ irq_edge_mask_hi = 0xA4; /* irq's 10, 13, 15 level-triggered */
break;
}
- /*printk("Changing IRQ mode\n");*/
- pl_id=inb(0x04d0);
- /*printk("Low mask is %#0x\n", pl_id);*/
- outb(pl_id|CAROLINA_IRQ_EDGE_MASK_LO, 0x04d0);
-
- pl_id=inb(0x04d1);
- /*printk("Hi mask is %#0x\n", pl_id);*/
- outb(pl_id|CAROLINA_IRQ_EDGE_MASK_HI, 0x04d1);
- pl_id=inb(0x04d1);
- /*printk("Hi mask now %#0x\n", pl_id);*/
- }
- else
- {
+ outb(inb(0x04d0)|irq_edge_mask_lo, 0x04d0); /* primary 8259 */
+ outb(inb(0x04d1)|irq_edge_mask_hi, 0x04d1); /* cascaded 8259 */
+ } else {
printk("No known machine pci routing!\n");
return;
}
/* Set up mapping from slots */
for (i = 1; i <= 4; i++)
- {
ibc_pirq[i-1] = Motherboard_routes[i];
- }
/* Enable PCI interrupts */
*ibc_pcicon |= 0x20;
}
pci_write_config_byte(dev,
PCI_INTERRUPT_LINE,
dev->irq);
- }else{
+ } else {
/* Enable LEGIRQ for PCI INT -> 8259 IRQ routing */
pci_write_config_dword(dev, 0x40, 0x10ff08a1);
}
}
}
-void
+static void __init
Powerplus_Map_Non0(struct pci_dev *dev)
{
struct pci_bus *pbus; /* Parent bus structure pointer */
* Otherwise, assume it's a PMC site and get the interrupt line
* value from the interrupt routing table.
*/
- if (mot_info[mot_entry].secondary_bridge_devfn)
- {
+ if (mot_info[mot_entry].secondary_bridge_devfn) {
pbus = dev->bus;
while (pbus->primary != 0)
pbus = pbus->parent;
- if ((pbus->self)->devfn != 0xA0)
- {
+ if ((pbus->self)->devfn != 0xA0) {
if ((pbus->self)->devfn == mot_info[mot_entry].secondary_bridge_devfn)
intline = mot_info[mot_entry].pci_irq_list->secondary[intpin];
- else
- {
+ else {
if ((char *)(mot_info[mot_entry].map) == (char *)Mesquite_pci_IRQ_map)
intline = mot_info[mot_entry].map[((pbus->self)->devfn)/8] + 16;
- else
- {
+ else {
int i;
for (i=0;i<3;i++)
intpin = (prep_pci_intpins[devnum % 4][intpin]) - 1;
if (OpenPIC_Addr) {
/* PCI interrupts are controlled by the OpenPIC */
pci_for_each_dev(dev) {
- if (dev->bus->number == 0)
- {
+ if (dev->bus->number == 0) {
dev->irq = openpic_to_irq(Motherboard_map[PCI_SLOT(dev->devfn)]);
pcibios_write_config_byte(dev->bus->number, dev->devfn, PCI_INTERRUPT_LINE, dev->irq);
- }
- else
- {
+ } else {
if (Motherboard_non0 != NULL)
Motherboard_non0(dev);
}
unsigned char d = PCI_SLOT(dev->devfn);
dev->irq = Motherboard_routes[Motherboard_map[d]];
- for ( i = 0 ; i <= 5 ; i++ )
- {
+ for ( i = 0 ; i <= 5 ; i++ ) {
/*
* Relocate PCI I/O resources if necessary so the
* standard 256MB BAT covers them.
*/
if ( (pci_resource_flags(dev, i) & IORESOURCE_IO) &&
- (dev->resource[i].start > 0x10000000) )
- {
- printk("Relocating PCI address %lx -> %lx\n",
- dev->resource[i].start,
- (dev->resource[i].start & 0x00FFFFFF)
- | 0x01000000);
- dev->resource[i].start =
- (dev->resource[i].start & 0x00FFFFFF) | 0x01000000;
+ (dev->resource[i].start > 0x10000000)) {
+ printk("Relocating PCI address %lx -> %lx\n",
+ dev->resource[i].start,
+ (dev->resource[i].start &
+ 0x00FFFFFF)| 0x01000000);
+ dev->resource[i].start =
+ (dev->resource[i].start & 0x00FFFFFF)
+ | 0x01000000;
pci_write_config_dword(dev,
- PCI_BASE_ADDRESS_0+(i*0x4),
- dev->resource[i].start );
- dev->resource[i].end =
- (dev->resource[i].end & 0x00FFFFFF) | 0x01000000;
+ PCI_BASE_ADDRESS_0 + (i*0x4),
+ dev->resource[i].start);
+ dev->resource[i].end =
+ (dev->resource[i].end & 0x00FFFFFF)
+ | 0x01000000;
}
}
#if 0
hose->first_busno = 0;
hose->last_busno = 0xff;
hose->pci_mem_offset = PREP_ISA_MEM_BASE;
- hose->io_base_virt = (void *)PREP_ISA_IO_BASE;
+ hose->io_base_phys = PREP_ISA_IO_BASE;
+ hose->io_base_virt = (void *)0x80000000; /* see prep_map_io() */
prep_init_resource(&hose->io_resource, 0, 0x0fffffff, IORESOURCE_IO);
prep_init_resource(&hose->mem_resources[0], 0xc0000000, 0xfeffffff,
IORESOURCE_MEM);
pkt = PnP_find_large_vendor_packet(
res->DevicePnPHeap+hostbridge->AllocatedOffset,
3, 0);
- if(pkt)
- {
+ if(pkt) {
#define p pkt->L4_Pack.L4_Data.L4_PPCPack
setup_indirect_pci(hose,
ld_le32((unsigned *) (p.PPCData)),
ld_le32((unsigned *) (p.PPCData+8)));
- }
- else
- {
+ } else
setup_indirect_pci(hose, 0x80000cf8, 0x80000cfc);
- }
- }
- else
+ } else
#endif /* CONFIG_PREP_RESIDUAL */
- {
hose->ops = &prep_pci_ops;
- }
}
ppc_md.pcibios_fixup = prep_pcibios_fixup;
}
-
/*
- * BK Id: SCCS/s.prep_setup.c 1.36 09/08/01 15:47:42 paulus
+ * BK Id: SCCS/s.prep_setup.c 1.38 09/15/01 09:13:52 trini
*/
/*
* linux/arch/ppc/kernel/setup.c
*/
void __init prep_map_io(void)
{
- io_block_mapping(0x80000000, 0x80000000, 0x10000000, _PAGE_IO);
- io_block_mapping(0xf0000000, 0xc0000000, 0x08000000, _PAGE_IO);
+ io_block_mapping(0x80000000, PREP_ISA_IO_BASE, 0x10000000, _PAGE_IO);
+ io_block_mapping(0xf0000000, PREP_ISA_MEM_BASE, 0x08000000, _PAGE_IO);
}
void __init
# this architecture
#
-#
-# Select the object file format to substitute into the linker script.
-#
-tool_prefix = sh-linux-gnu-
-
ifdef CONFIG_CPU_LITTLE_ENDIAN
CFLAGS += -ml
AFLAGS += -ml
LDFLAGS := -EB
endif
-# ifdef CONFIG_CROSSCOMPILE
-CROSS_COMPILE = $(tool_prefix)
-# endif
-
LD =$(CROSS_COMPILE)ld $(LDFLAGS)
OBJCOPY=$(CROSS_COMPILE)objcopy -O binary -R .note -R .comment -R .stab -R .stabstr -S
AFLAGS += -m3
endif
ifdef CONFIG_CPU_SH4
-CFLAGS += -m4-nofpu
-AFLAGS += -m4-nofpu
+CFLAGS += -m4 -mno-implicit-fp
+AFLAGS += -m4 -mno-implicit-fp
endif
#
fi
bool 'Little Endian' CONFIG_CPU_LITTLE_ENDIAN
# Platform-specific memory start and size definitions
-if [ "$CONFIG_SH_SOLUTION_ENGINE" = "y" -o "$CONFIG_SH_HP600" = "y" -o \
- "$CONFIG_SH_BIGSUR" = "y" -o "$CONFIG_SH_7751_SOLUTION_ENGINE" = "y" -o \
+if [ "$CONFIG_SH_SOLUTION_ENGINE" = "y" ]; then
+ define_hex CONFIG_MEMORY_START 0c000000
+ define_hex CONFIG_MEMORY_SIZE 02000000
+ define_bool CONFIG_MEMORY_SET y
+fi
+if [ "$CONFIG_SH_7751_SOLUTION_ENGINE" = "y" ]; then
+ define_hex CONFIG_MEMORY_START 0c000000
+ define_hex CONFIG_MEMORY_SIZE 04000000
+ define_bool CONFIG_MEMORY_SET y
+fi
+if [ "$CONFIG_SH_HP600" = "y" -o "$CONFIG_SH_BIGSUR" = "y" -o \
"$CONFIG_SH_DREAMCAST" = "y" -o "$CONFIG_SH_SH2000" = "y" ]; then
define_hex CONFIG_MEMORY_START 0c000000
define_hex CONFIG_MEMORY_SIZE 00400000
dep_tristate 'Support for user-space parallel port device drivers' CONFIG_PPDEV $CONFIG_PARPORT
fi
bool 'PS/2 mouse (aka "auxiliary device") support' CONFIG_PSMOUSE
+
+mainmenu_option next_comment
+comment 'Watchdog Cards'
+bool 'Watchdog Timer Support' CONFIG_WATCHDOG
+if [ "$CONFIG_WATCHDOG" != "n" ]; then
+ bool ' Disable watchdog shutdown on close' CONFIG_WATCHDOG_NOWAYOUT
+ dep_tristate ' SH 3/4 Watchdog' CONFIG_SH_WDT $CONFIG_SUPERH
+fi
+endmenu
+
tristate 'Enhanced Real Time Clock Support' CONFIG_RTC
if [ "$CONFIG_HOTPLUG" = "y" -a "$CONFIG_PCMCIA" != "n" ]; then
source drivers/char/pcmcia/Config.in
#
# CONFIG_MAGIC_SYSRQ is not set
CONFIG_SH_STANDARD_BIOS=y
-CONFIG_DEBUG_KERNEL_WITH_GDB_STUB=y
-CONFIG_GDB_STUB_VBR=a0000000
CONFIG_SH_EARLY_PRINTK=y
-/* $Id: io_generic.c,v 1.3 2000/05/07 23:31:58 gniibe Exp $
+/* $Id: io_generic.c,v 1.12 2000/11/14 16:45:11 sugioka Exp $
*
* linux/arch/sh/kernel/io_generic.c
*
/*
- * $Id: io_hd64461.c,v 1.1 2000/06/10 21:45:18 yaegashi Exp $
+ * $Id: io_hd64461.c,v 1.6 2000/11/16 23:28:44 yaegashi Exp $
* Copyright (C) 2000 YAEGASHI Takeshi
* Typical I/O routines for HD64461 system.
*/
-/* $Id: io_se.c,v 1.5 2000/06/08 05:50:10 gniibe Exp $
+/* $Id: io_se.c,v 1.12 2001/08/11 01:23:28 jzs Exp $
*
* linux/arch/sh/kernel/io_se.c
*
-/* $Id: process.c,v 1.34 2001/07/30 12:42:11 gniibe Exp $
+/* $Id: process.c,v 1.35 2001/10/11 09:18:17 gniibe Exp $
*
* linux/arch/sh/kernel/process.c
*
-/* $Id: ptrace.c,v 1.12 2001/07/23 00:00:56 gniibe Exp $
+/* $Id: ptrace.c,v 1.13 2001/10/01 02:21:50 gniibe Exp $
*
* linux/arch/sh/kernel/ptrace.c
*
#include <linux/ptrace.h>
#include <linux/unistd.h>
#include <linux/stddef.h>
+#include <linux/personality.h>
#include <asm/ucontext.h>
#include <asm/uaccess.h>
#include <asm/pgtable.h>
if (!addr)
addr = TASK_UNMAPPED_BASE;
- addr = COLOUR_ALIGN(addr);
+ if (flags & MAP_PRIVATE)
+ addr = PAGE_ALIGN(addr);
+ else
+ addr = COLOUR_ALIGN(addr);
for (vma = find_vma(current->mm, addr); ; vma = vma->vm_next) {
/* At this point: (!vma || addr < vma->vm_end). */
if (!vma || addr + len <= vma->vm_start)
return addr;
addr = vma->vm_end;
- addr = COLOUR_ALIGN(addr);
+ if (!(flags & MAP_PRIVATE))
+ addr = COLOUR_ALIGN(addr);
}
}
#endif
-/* $Id: memchr.S,v 1.1 1999/10/17 11:32:38 gniibe Exp $
+/* $Id: memchr.S,v 1.1 2000/04/14 16:49:01 mjd Exp $
*
* "memchr" implementation of SuperH
*
-/* $Id: memcpy.S,v 1.3 1999/09/28 11:32:48 gniibe Exp $
+/* $Id: memcpy.S,v 1.3 2001/07/27 11:50:52 gniibe Exp $
*
* "memcpy" implementation of SuperH
*
-/* $Id: memmove.S,v 1.2 1999/09/21 12:55:49 gniibe Exp $
+/* $Id: memmove.S,v 1.2 2001/07/27 11:51:09 gniibe Exp $
*
* "memmove" implementation of SuperH
*
-/* $Id: memset.S,v 1.1 1999/09/18 16:57:09 gniibe Exp $
+/* $Id: memset.S,v 1.1 2000/04/14 16:49:01 mjd Exp $
*
* "memset" implementation of SuperH
*
-/* $Id: cache-sh3.c,v 1.5 2001/08/24 15:31:41 dwmw2 Exp $
+/* $Id: cache-sh3.c,v 1.6 2001/09/10 08:59:59 dwmw2 Exp $
*
* linux/arch/sh/mm/cache-sh3.c
*
#include <asm/pgalloc.h>
#include <asm/mmu_context.h>
+
#define CCR 0xffffffec /* Address of Cache Control Register */
-#define CCR_CACHE_VAL 0x00000005 /* 8k-byte cache, P1-wb, enable */
-#define CCR_CACHE_INIT 0x0000000d /* 8k-byte cache, CF, P1-wb, enable */
-#define CCR_CACHE_ENABLE 1
+
+#define CCR_CACHE_CE 0x01 /* Cache Enable */
+#define CCR_CACHE_WT 0x02 /* Write-Through (for P0,U0,P3) (else writeback) */
+#define CCR_CACHE_CB 0x04 /* Write-Back (for P1) (else writethrough) */
+#define CCR_CACHE_CF 0x08 /* Cache Flush */
+#define CCR_CACHE_RA 0x20 /* RAM mode */
+
+#define CCR_CACHE_VAL (CCR_CACHE_CB|CCR_CACHE_CE) /* 8k-byte cache, P1-wb, enable */
+#define CCR_CACHE_INIT (CCR_CACHE_CF|CCR_CACHE_VAL) /* 8k-byte cache, CF, P1-wb, enable */
#define CACHE_OC_ADDRESS_ARRAY 0xf0000000
#define CACHE_VALID 1
jump_to_P2();
ccr = ctrl_inl(CCR);
- if (ccr & CCR_CACHE_ENABLE)
+ if (ccr & CCR_CACHE_CE)
/*
* XXX: Should check RA here.
* If RA was 1, we only need to flush the half of the caches.
-/* $Id: cache-sh4.c,v 1.15 2001/08/10 14:13:13 gniibe Exp $
+/* $Id: cache-sh4.c,v 1.16 2001/09/10 11:06:35 dwmw2 Exp $
*
* linux/arch/sh/mm/cache.c
*
#include <asm/mmu_context.h>
#define CCR 0xff00001c /* Address of Cache Control Register */
-#define CCR_CACHE_VAL 0x00000105 /* 8k+16k-byte cache,P1-wb,enable */
-#define CCR_CACHE_INIT 0x0000090d /* ICI,ICE(8k), OCI,P1-wb,OCE(16k) */
-#define CCR_CACHE_ENABLE 0x00000101
+
+#define CCR_CACHE_OCE 0x0001 /* Operand Cache Enable */
+#define CCR_CACHE_WT 0x0002 /* Write-Through (for P0,U0,P3) (else writeback)*/
+#define CCR_CACHE_CB 0x0004 /* Copy-Back (for P1) (else writethrough) */
+#define CCR_CACHE_OCI 0x0008 /* OC Invalidate */
+#define CCR_CACHE_ORA 0x0020 /* OC RAM Mode */
+#define CCR_CACHE_OIX 0x0080 /* OC Index Enable */
+#define CCR_CACHE_ICE 0x0100 /* Instruction Cache Enable */
+#define CCR_CACHE_ICI 0x0800 /* IC Invalidate */
+#define CCR_CACHE_IIX 0x8000 /* IC Index Enable */
+
+/* Default CCR setup: 8k+16k-byte cache,P1-wb,enable */
+#define CCR_CACHE_VAL (CCR_CACHE_ICE|CCR_CACHE_CB|CCR_CACHE_OCE)
+#define CCR_CACHE_INIT (CCR_CACHE_VAL|CCR_CACHE_OCI|CCR_CACHE_ICI)
+#define CCR_CACHE_ENABLE (CCR_CACHE_OCE|CCR_CACHE_ICE)
#define CACHE_IC_ADDRESS_ARRAY 0xf0000000
#define CACHE_OC_ADDRESS_ARRAY 0xf4000000
-/* $Id: fault.c,v 1.48 2001/08/09 00:27:04 gniibe Exp $
+/* $Id: fault.c,v 1.49 2001/10/06 19:46:00 lethal Exp $
*
* linux/arch/sh/mm/fault.c
* Copyright (C) 1999 Niibe Yutaka
* make sure we exit gracefully rather than endlessly redo
* the fault.
*/
+survive:
switch (handle_mm_fault(mm, vma, address, writeaccess)) {
case 1:
tsk->min_flt++;
*/
out_of_memory:
up_read(&mm->mmap_sem);
+ if (current->pid == 1) {
+ current->policy |= SCHED_YIELD;
+ schedule();
+ down_read(&mm->mmap_sem);
+ goto survive;
+ }
printk("VM: killing process %s\n", tsk->comm);
if (user_mode(regs))
do_exit(SIGKILL);
-/* $Id: init.c,v 1.18 2001/08/03 11:22:06 gniibe Exp $
+/* $Id: init.c,v 1.19 2001/10/01 02:21:50 gniibe Exp $
*
* linux/arch/sh/mm/init.c
*
subdir-$(CONFIG_SGI) += sgi
subdir-$(CONFIG_IDE) += ide
subdir-$(CONFIG_SCSI) += scsi
-subdir-$(CONFIG_I2O) += i2o
+subdir-$(CONFIG_I2O) += message/i2o
subdir-$(CONFIG_FUSION) += message/fusion
subdir-$(CONFIG_MD) += md
subdir-$(CONFIG_IEEE1394) += ieee1394
return 0;
case BLKGETSIZE:
- return put_user (mfm[minor].nr_sects, (long *)arg);
+ return put_user (mfm[minor].nr_sects, (unsigned long *)arg);
case BLKGETSIZE64:
return put_user ((u64)mfm[minor].nr_sects << 9, (u64 *)arg);
case BLKGETSIZE: /* Return device size */
return put_user(acsi_part[MINOR(inode->i_rdev)].nr_sects,
- (long *) arg);
+ (unsigned long *) arg);
case BLKGETSIZE64: /* Return device size */
return put_user((u64)acsi_part[MINOR(inode->i_rdev)].nr_sects << 9,
return -EFAULT;
break;
case BLKGETSIZE:
- return put_user(unit[drive].blocks,(long *)param);
+ return put_user(unit[drive].blocks,(unsigned long *)param);
break;
case BLKGETSIZE64:
return put_user((u64)unit[drive].blocks << 9, (u64 *)param);
dtp = UDT;
}
if (cmd == BLKGETSIZE)
- return put_user(dtp->blocks, (long *)param);
+ return put_user(dtp->blocks, (unsigned long *)param);
memset((void *)&getprm, 0, sizeof(getprm));
getprm.size = dtp->blocks;
/* add BLKGETSIZE64 too */
g = get_gendisk(dev);
if (!g)
- longval = 0;
+ ulongval = 0;
else
- longval = g->part[MINOR(dev)].nr_sects;
- return put_user(longval, (long *) arg);
+ ulongval = g->part[MINOR(dev)].nr_sects;
+ return put_user(ulongval, (unsigned long *) arg);
#endif
#if 0
case BLKRRPART: /* Re-read partition tables */
put_user(hba[ctlr]->hd[MINOR(inode->i_rdev)].start_sect, &geo->start);
return 0;
case BLKGETSIZE:
- put_user(hba[ctlr]->hd[MINOR(inode->i_rdev)].nr_sects, (long*)arg);
+ put_user(hba[ctlr]->hd[MINOR(inode->i_rdev)].nr_sects, (unsigned long *)arg);
return 0;
case BLKGETSIZE64:
put_user((u64)hba[ctlr]->hd[MINOR(inode->i_rdev)].nr_sects << 9, (u64*)arg);
case IDAGETDRVINFO:
return copy_to_user(&io->c.drv,&hba[ctlr]->drv[dsk],sizeof(drv_info_t));
case BLKGETSIZE:
- return put_user(ida[(ctlr<<CTLR_SHIFT)+MINOR(inode->i_rdev)].nr_sects, (long*)arg);
+ return put_user(ida[(ctlr<<CTLR_SHIFT)+MINOR(inode->i_rdev)].nr_sects, (unsigned long *)arg);
case BLKGETSIZE64:
return put_user((u64)(ida[(ctlr<<CTLR_SHIFT)+MINOR(inode->i_rdev)].nr_sects) << 9, (u64*)arg);
case BLKRRPART:
case BLKGETSIZE:
ECALL(get_floppy_geometry(drive, type, &g));
- return put_user(g->size, (long *) param);
+ return put_user(g->size, (unsigned long *) param);
case BLKGETSIZE64:
ECALL(get_floppy_geometry(drive, type, &g));
return 0;
do {
- rq = list_entry(head->next, struct request, table);
- list_del(&rq->table);
+ rq = list_entry(head->next, struct request, queue);
+ list_del(&rq->queue);
kmem_cache_free(request_cachep, rq);
i++;
} while (!list_empty(head));
}
memset(rq, 0, sizeof(struct request));
rq->rq_status = RQ_INACTIVE;
- list_add(&rq->table, &q->request_freelist[i & 1]);
+ list_add(&rq->queue, &q->request_freelist[i & 1]);
}
init_waitqueue_head(&q->wait_for_request);
q->head_active = 1;
}
-#define blkdev_free_rq(list) list_entry((list)->next, struct request, table);
+#define blkdev_free_rq(list) list_entry((list)->next, struct request, queue);
/*
* Get a free request. io_request_lock must be held and interrupts
* disabled on the way in.
if (!list_empty(&q->request_freelist[rw])) {
rq = blkdev_free_rq(&q->request_freelist[rw]);
- list_del(&rq->table);
+ list_del(&rq->queue);
rq->rq_status = RQ_ACTIVE;
rq->special = NULL;
rq->q = q;
/*
* Add to pending free list and batch wakeups
*/
- list_add(&req->table, &q->pending_freelist[rw]);
+ list_add(&req->queue, &q->pending_freelist[rw]);
if (++q->pending_free[rw] >= batch_requests) {
int wake_up = q->pending_free[rw];
static int max_loop = 8;
static struct loop_device *loop_dev;
-static int *loop_sizes;
+static unsigned long *loop_sizes;
static int *loop_blksizes;
static devfs_handle_t devfs_handle; /* For the directory */
#define MAX_DISK_SIZE 1024*1024*1024
-static int compute_loop_size(struct loop_device *lo, struct dentry * lo_dentry, kdev_t lodev)
+static unsigned long compute_loop_size(struct loop_device *lo, struct dentry * lo_dentry, kdev_t lodev)
{
if (S_ISREG(lo_dentry->d_inode->i_mode))
return (lo_dentry->d_inode->i_size - lo->lo_offset) >> BLOCK_SIZE_BITS;
err = -ENXIO;
break;
}
- err = put_user(loop_sizes[lo->lo_number] << 1, (long *) arg);
+ err = put_user(loop_sizes[lo->lo_number] << 1, (unsigned long *) arg);
break;
case BLKGETSIZE64:
if (lo->lo_state != Lo_bound) {
if (!loop_dev)
return -ENOMEM;
- loop_sizes = kmalloc(max_loop * sizeof(int), GFP_KERNEL);
+ loop_sizes = kmalloc(max_loop * sizeof(unsigned long), GFP_KERNEL);
if (!loop_sizes)
goto out_sizes;
spin_lock_init(&lo->lo_lock);
}
- memset(loop_sizes, 0, max_loop * sizeof(int));
+ memset(loop_sizes, 0, max_loop * sizeof(unsigned long));
memset(loop_blksizes, 0, max_loop * sizeof(int));
blk_size[MAJOR_NR] = loop_sizes;
blksize_size[MAJOR_NR] = loop_blksizes;
return 0;
#endif
case BLKGETSIZE:
- return put_user(nbd_bytesizes[dev] >> 9, (long *) arg);
+ return put_user(nbd_bytesizes[dev] >> 9, (unsigned long *) arg);
case BLKGETSIZE64:
return put_user((u64)nbd_bytesizes[dev], (u64 *) arg);
}
return 0;
case BLKGETSIZE:
if (!arg) return -EINVAL;
- err = verify_area(VERIFY_WRITE,(long *) arg,sizeof(long));
+ err = verify_area(VERIFY_WRITE,(unsigned long *) arg,sizeof(unsigned long));
if (err) return (err);
- put_user(pd_hd[dev].nr_sects,(long *) arg);
+ put_user(pd_hd[dev].nr_sects,(unsigned long *) arg);
return (0);
case BLKGETSIZE64:
return put_user((u64)pd_hd[dev].nr_sects << 9, (u64 *)arg);
case BLKGETSIZE:
if (arg) {
- if ((err = verify_area(VERIFY_WRITE, (long *) arg, sizeof(long))))
+ if ((err = verify_area(VERIFY_WRITE, (unsigned long *) arg, sizeof(unsigned long))))
return (err);
- put_user(ps2esdi[MINOR(inode->i_rdev)].nr_sects, (long *) arg);
+ put_user(ps2esdi[MINOR(inode->i_rdev)].nr_sects, (unsigned long *) arg);
return (0);
}
case BLKGETSIZE: /* Return device size */
if (!arg)
break;
- error = put_user(rd_kbsize[minor] << 1, (long *) arg);
+ error = put_user(rd_kbsize[minor] << 1, (unsigned long *) arg);
break;
case BLKGETSIZE64:
error = put_user((u64)rd_kbsize[minor]<<10, (u64*)arg);
}
case BLKGETSIZE:
if (!arg) return -EINVAL;
- return put_user(xd_struct[MINOR(inode->i_rdev)].nr_sects,(long *) arg);
+ return put_user(xd_struct[MINOR(inode->i_rdev)].nr_sects,(unsigned long *) arg);
case BLKGETSIZE64:
return put_user((u64)xd_struct[MINOR(inode->i_rdev)].nr_sects << 9, (u64 *)arg);
case HDIO_SET_DMA:
module_init(cdrom_init);
module_exit(cdrom_exit);
+MODULE_LICENSE("GPL");
if [ "$CONFIG_HOTPLUG" = "y" -a "$CONFIG_PCMCIA" != "n" ]; then
source drivers/char/pcmcia/Config.in
fi
+
+tristate 'ACP Modem (Mwave) support' CONFIG_MWAVE
+
endmenu
obj-$(CONFIG_977_WATCHDOG) += wdt977.o
obj-$(CONFIG_I810_TCO) += i810-tco.o
obj-$(CONFIG_MACHZ_WDT) += machzwd.o
+obj-$(CONFIG_SH_WDT) += shwdt.o
obj-$(CONFIG_SOFT_WATCHDOG) += softdog.o
+subdir-$(CONFIG_MWAVE) += mwave
+ifeq ($(CONFIG_MWAVE),y)
+ obj-y += mwave/mwave.o
+endif
include $(TOPDIR)/Rules.make
else {
u16 *q = p;
int cnt = count;
+ u16 a;
if (!can_do_color) {
- while (cnt--) *q++ ^= 0x0800;
+ while (cnt--) {
+ a = scr_readw(q);
+ a ^= 0x0800;
+ scr_writew(a, q);
+ q++;
+ }
} else if (hi_font_mask == 0x100) {
while (cnt--) {
- u16 a = *q;
+ a = scr_readw(q);
a = ((a) & 0x11ff) | (((a) & 0xe000) >> 4) | (((a) & 0x0e00) << 4);
- *q++ = a;
+ scr_writew(a, q);
+ q++;
}
} else {
while (cnt--) {
- u16 a = *q;
+ a = scr_readw(q);
a = ((a) & 0x88ff) | (((a) & 0x7000) >> 4) | (((a) & 0x0700) << 4);
- *q++ = a;
+ scr_writew(a, q);
+ q++;
}
}
}
}
}
-static char banner[] __initdata[] =
+static char banner[] __initdata =
KERN_INFO "SuperH SCI(F) driver initialized\n";
int __init sci_init(void)
--- /dev/null
+/*
+ * drivers/char/shwdt.c
+ *
+ * Watchdog driver for integrated watchdog in the SuperH 3/4 processors.
+ *
+ * Copyright (C) 2001 Paul Mundt <lethal@chaoticdreams.org>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/miscdevice.h>
+#include <linux/watchdog.h>
+#include <linux/reboot.h>
+#include <linux/notifier.h>
+#include <linux/smp_lock.h>
+#include <linux/ioport.h>
+
+#include <asm/io.h>
+#include <asm/uaccess.h>
+
+#if defined(CONFIG_CPU_SH4)
+ #define WTCNT 0xffc00008
+ #define WTCSR 0xffc0000c
+#elif defined(CONFIG_CPU_SH3)
+ #define WTCNT 0xffffff84
+ #define WTCSR 0xffffff86
+#else
+ #error "Can't use SH 3/4 watchdog on non-SH 3/4 processor."
+#endif
+
+#define WTCNT_HIGH 0x5a00
+#define WTCSR_HIGH 0xa500
+
+#define WTCSR_TME 0x80
+#define WTCSR_WT 0x40
+#define WTCSR_RSTS 0x20
+#define WTCSR_WOVF 0x10
+#define WTCSR_IOVF 0x08
+#define WTCSR_CKS2 0x04
+#define WTCSR_CKS1 0x02
+#define WTCSR_CKS0 0x01
+
+#define WTCSR_CKS 0x07
+#define WTCSR_CKS_1 0x00
+#define WTCSR_CKS_4 0x01
+#define WTCSR_CKS_16 0x02
+#define WTCSR_CKS_32 0x03
+#define WTCSR_CKS_64 0x04
+#define WTCSR_CKS_256 0x05
+#define WTCSR_CKS_1024 0x06
+#define WTCSR_CKS_4096 0x07
+
+static int sh_is_open = 0;
+static struct watchdog_info sh_wdt_info;
+
+/**
+ * sh_wdt_write_cnt - Write to Counter
+ *
+ * @val: Value to write
+ *
+ * Writes the given value @val to the lower byte of the timer counter.
+ * The upper byte is set manually on each write.
+ */
+static void sh_wdt_write_cnt(__u8 val)
+{
+ ctrl_outw(WTCNT_HIGH | (__u16)val, WTCNT);
+}
+
+/**
+ * sh_wdt_write_csr - Write to Control/Status Register
+ *
+ * @val: Value to write
+ *
+ * Writes the given value @val to the lower byte of the control/status
+ * register. The upper byte is set manually on each write.
+ */
+static void sh_wdt_write_csr(__u8 val)
+{
+ ctrl_outw(WTCSR_HIGH | (__u16)val, WTCSR);
+}
+
+/**
+ * sh_wdt_start - Start the Watchdog
+ *
+ * Starts the watchdog.
+ */
+static void sh_wdt_start(void)
+{
+ sh_wdt_write_csr(WTCSR_WT | WTCSR_CKS_4096);
+ sh_wdt_write_cnt(0);
+ sh_wdt_write_csr((ctrl_inb(WTCSR) | WTCSR_TME));
+}
+
+/**
+ * sh_wdt_stop - Stop the Watchdog
+ *
+ * Stops the watchdog.
+ */
+static void sh_wdt_stop(void)
+{
+ sh_wdt_write_csr((ctrl_inb(WTCSR) & ~WTCSR_TME));
+}
+
+/**
+ * sh_wdt_ping - Ping the Watchdog
+ *
+ * @data: Unused
+ *
+ * Clears overflow bit, resets timer counter.
+ */
+static void sh_wdt_ping(unsigned long data)
+{
+ sh_wdt_write_csr((ctrl_inb(WTCSR) & ~WTCSR_IOVF));
+ sh_wdt_write_cnt(0);
+}
+
+/**
+ * sh_wdt_open - Open the Device
+ *
+ * @inode: inode of device
+ * @file: file handle of device
+ *
+ * Watchdog device is opened and started.
+ */
+static int sh_wdt_open(struct inode *inode, struct file *file)
+{
+ switch (MINOR(inode->i_rdev)) {
+ case WATCHDOG_MINOR:
+ if (sh_is_open) {
+ return -EBUSY;
+ }
+
+ sh_is_open = 1;
+ sh_wdt_start();
+
+ return 0;
+ default:
+ return -ENODEV;
+ }
+
+ return 0;
+}
+
+/**
+ * sh_wdt_close - Close the Device
+ *
+ * @inode: inode of device
+ * @file: file handle of device
+ *
+ * Watchdog device is closed and stopped.
+ */
+static int sh_wdt_close(struct inode *inode, struct file *file)
+{
+ lock_kernel();
+
+ if (MINOR(inode->i_rdev) == WATCHDOG_MINOR) {
+#ifndef CONFIG_WATCHDOG_NOWAYOUT
+ sh_wdt_stop();
+#endif
+ sh_is_open = 0;
+ }
+
+ unlock_kernel();
+
+ return 0;
+}
+
+/**
+ * sh_wdt_read - Read from Device
+ *
+ * @file: file handle of device
+ * @buf: buffer to write to
+ * @count: length of buffer
+ * @ppos: offset
+ *
+ * Unsupported.
+ */
+static ssize_t sh_wdt_read(struct file *file, char *buf,
+ size_t count, loff_t *ppos)
+{
+ return -EINVAL;
+}
+
+/**
+ * sh_wdt_write - Write to Device
+ *
+ * @file: file handle of device
+ * @buf: buffer to write
+ * @count: length of buffer
+ * @ppos: offset
+ *
+ * Pings the watchdog on write.
+ */
+static ssize_t sh_wdt_write(struct file *file, const char *buf,
+ size_t count, loff_t *ppos)
+{
+ /* Can't seek (pwrite) on this device */
+ if (ppos != &file->f_pos)
+ return -ESPIPE;
+
+ if (count) {
+ sh_wdt_ping(0);
+ return 1;
+ }
+
+ return 0;
+}
+
+/**
+ * sh_wdt_ioctl - Query Device
+ *
+ * @inode: inode of device
+ * @file: file handle of device
+ * @cmd: watchdog command
+ * @arg: argument
+ *
+ * Query basic information from the device or ping it, as outlined by the
+ * watchdog API.
+ */
+static int sh_wdt_ioctl(struct inode *inode, struct file *file,
+ unsigned int cmd, unsigned long arg)
+{
+ switch (cmd) {
+ case WDIOC_GETSUPPORT:
+ if (copy_to_user((struct watchdog_info *)arg,
+ &sh_wdt_info,
+ sizeof(sh_wdt_info))) {
+ return -EFAULT;
+ }
+
+ break;
+ case WDIOC_GETSTATUS:
+ if (copy_to_user((int *)arg,
+ &sh_is_open,
+ sizeof(int))) {
+ return -EFAULT;
+ }
+
+ break;
+ case WDIOC_KEEPALIVE:
+ sh_wdt_ping(0);
+
+ break;
+ default:
+ return -ENOTTY;
+ }
+
+ return 0;
+}
+
+/**
+ * sh_wdt_notify_sys - Notifier Handler
+ *
+ * @this: notifier block
+ * @code: notifier event
+ * @unused: unused
+ *
+ * Handles specific events, such as turning off the watchdog during a
+ * shutdown event.
+ */
+static int sh_wdt_notify_sys(struct notifier_block *this,
+ unsigned long code, void *unused)
+{
+	if (code == SYS_DOWN || code == SYS_HALT) {
+ sh_wdt_stop();
+ }
+
+ return NOTIFY_DONE;
+}
+
+static struct file_operations sh_wdt_fops = {
+ owner: THIS_MODULE,
+ read: sh_wdt_read,
+ write: sh_wdt_write,
+ ioctl: sh_wdt_ioctl,
+ open: sh_wdt_open,
+ release: sh_wdt_close,
+};
+
+static struct watchdog_info sh_wdt_info = {
+ WDIOF_KEEPALIVEPING,
+ 1,
+ "SH WDT",
+};
+
+static struct notifier_block sh_wdt_notifier = {
+ sh_wdt_notify_sys,
+ NULL,
+ 0
+};
+
+static struct miscdevice sh_wdt_miscdev = {
+ WATCHDOG_MINOR,
+ "watchdog",
+ &sh_wdt_fops,
+};
+
+/**
+ * sh_wdt_init - Initialize module
+ *
+ * Registers the device and notifier handler. Actual device
+ * initialization is handled by sh_wdt_open().
+ */
+static int __init sh_wdt_init(void)
+{
+ if (misc_register(&sh_wdt_miscdev)) {
+ printk(KERN_ERR "shwdt: Can't register misc device\n");
+ return -EINVAL;
+ }
+
+ if (!request_region(WTCNT, 1, "shwdt")) {
+ printk(KERN_ERR "shwdt: Can't request WTCNT region\n");
+ misc_deregister(&sh_wdt_miscdev);
+ return -ENXIO;
+ }
+
+ if (!request_region(WTCSR, 1, "shwdt")) {
+ printk(KERN_ERR "shwdt: Can't request WTCSR region\n");
+ release_region(WTCNT, 1);
+ misc_deregister(&sh_wdt_miscdev);
+ return -ENXIO;
+ }
+
+ if (register_reboot_notifier(&sh_wdt_notifier)) {
+ printk(KERN_ERR "shwdt: Can't register reboot notifier\n");
+ release_region(WTCSR, 1);
+ release_region(WTCNT, 1);
+ misc_deregister(&sh_wdt_miscdev);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+/**
+ * sh_wdt_exit - Deinitialize module
+ *
+ * Unregisters the device and notifier handler. Actual device
+ * deinitialization is handled by sh_wdt_close().
+ */
+static void __exit sh_wdt_exit(void)
+{
+ unregister_reboot_notifier(&sh_wdt_notifier);
+ release_region(WTCSR, 1);
+ release_region(WTCNT, 1);
+ misc_deregister(&sh_wdt_miscdev);
+}
+
+EXPORT_NO_SYMBOLS;
+
+MODULE_AUTHOR("Paul Mundt <lethal@chaoticdreams.org>");
+MODULE_DESCRIPTION("SH 3/4 watchdog driver");
+MODULE_LICENSE("GPL");
+
+module_init(sh_wdt_init);
+module_exit(sh_wdt_exit);
+
-1, "sonypi", &sonypi_misc_fops
};
-static int __devinit sonypi_probe(struct pci_dev *pcidev,
- const struct pci_device_id *ent) {
+static int __devinit sonypi_probe(struct pci_dev *pcidev) {
int i, ret;
struct sonypi_ioport_list *ioport_list;
struct sonypi_irq_list *irq_list;
- if (sonypi_device.dev) {
- printk(KERN_ERR "sonypi: only one device allowed!\n"),
- ret = -EBUSY;
- goto out1;
- }
sonypi_device.dev = pcidev;
- sonypi_device.model = (int)ent->driver_data;
+ if (pcidev)
+ sonypi_device.model = SONYPI_DEVICE_MODEL_TYPE1;
+ else
+ sonypi_device.model = SONYPI_DEVICE_MODEL_TYPE2;
sonypi_initq();
init_MUTEX(&sonypi_device.lock);
- if (pci_enable_device(pcidev)) {
+ if (pcidev && pci_enable_device(pcidev)) {
printk(KERN_ERR "sonypi: pci_enable_device failed\n");
ret = -EIO;
goto out1;
printk(KERN_INFO "sonypi: Sony Programmable I/O Controller Driver v%d.%d.\n",
SONYPI_DRIVER_MAJORVERSION,
SONYPI_DRIVER_MINORVERSION);
- printk(KERN_INFO "sonypi: detected %s model (%04x:%04x), "
+ printk(KERN_INFO "sonypi: detected %s model, "
"camera = %s, compat = %s\n",
(sonypi_device.model == SONYPI_DEVICE_MODEL_TYPE1) ?
"type1" : "type2",
- sonypi_device.dev->vendor, sonypi_device.dev->device,
camera ? "on" : "off",
compat ? "on" : "off");
printk(KERN_INFO "sonypi: enabled at irq=%d, port1=0x%x, port2=0x%x\n",
return ret;
}
-static void __devexit sonypi_remove(struct pci_dev *pcidev) {
+static void __devexit sonypi_remove(void) {
sonypi_call2(0x81, 0); /* make sure we don't get any more events */
if (camera)
sonypi_camera_off();
printk(KERN_INFO "sonypi: removed.\n");
}
-static struct pci_device_id sonypi_id_tbl[] __devinitdata = {
- { PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82371AB_3,
- PCI_ANY_ID, PCI_ANY_ID, 0, 0,
- (unsigned long) SONYPI_DEVICE_MODEL_TYPE1 },
- { PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801BA_10,
- PCI_ANY_ID, PCI_ANY_ID, 0, 0,
- (unsigned long) SONYPI_DEVICE_MODEL_TYPE2 },
- { PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801CA_12,
- PCI_ANY_ID, PCI_ANY_ID, 0, 0,
- (unsigned long) SONYPI_DEVICE_MODEL_TYPE2 },
- { }
-};
-
-MODULE_DEVICE_TABLE(pci, sonypi_id_tbl);
-
-static struct pci_driver sonypi_driver = {
- name: "sonypi",
- id_table: sonypi_id_tbl,
- probe: sonypi_probe,
- remove: sonypi_remove,
-};
-
static int __init sonypi_init_module(void) {
- if (is_sony_vaio_laptop)
- return pci_module_init(&sonypi_driver);
+ struct pci_dev *pcidev = NULL;
+
+ if (is_sony_vaio_laptop) {
+ pcidev = pci_find_device(PCI_VENDOR_ID_INTEL,
+ PCI_DEVICE_ID_INTEL_82371AB_3,
+ NULL);
+ return sonypi_probe(pcidev);
+ }
else
return -ENODEV;
}
static void __exit sonypi_cleanup_module(void) {
- pci_unregister_driver(&sonypi_driver);
+ sonypi_remove();
}
#ifndef MODULE
#ifdef __KERNEL__
#define SONYPI_DRIVER_MAJORVERSION 1
-#define SONYPI_DRIVER_MINORVERSION 6
+#define SONYPI_DRIVER_MINORVERSION 7
#include <linux/types.h>
#include <linux/pci.h>
+++ /dev/null
-mainmenu_option next_comment
-comment 'I2O device support'
-
-tristate 'I2O support' CONFIG_I2O
-
-if [ "$CONFIG_PCI" = "y" ]; then
- dep_tristate ' I2O PCI support' CONFIG_I2O_PCI $CONFIG_I2O
-fi
-dep_tristate ' I2O Block OSM' CONFIG_I2O_BLOCK $CONFIG_I2O
-if [ "$CONFIG_NET" = "y" ]; then
- dep_tristate ' I2O LAN OSM' CONFIG_I2O_LAN $CONFIG_I2O
-fi
-dep_tristate ' I2O SCSI OSM' CONFIG_I2O_SCSI $CONFIG_I2O $CONFIG_SCSI
-dep_tristate ' I2O /proc support' CONFIG_I2O_PROC $CONFIG_I2O
-
-endmenu
+++ /dev/null
-#
-# Makefile for the kernel I2O OSM.
-#
-# Note : at this point, these files are compiled on all systems.
-# In the future, some of these should be built conditionally.
-#
-
-O_TARGET := i2o.o
-
-export-objs := i2o_pci.o i2o_core.o i2o_config.o i2o_block.o i2o_lan.o i2o_scsi.o i2o_proc.o
-
-obj-$(CONFIG_I2O_PCI) += i2o_pci.o
-obj-$(CONFIG_I2O) += i2o_core.o i2o_config.o
-obj-$(CONFIG_I2O_BLOCK) += i2o_block.o
-obj-$(CONFIG_I2O_LAN) += i2o_lan.o
-obj-$(CONFIG_I2O_SCSI) += i2o_scsi.o
-obj-$(CONFIG_I2O_PROC) += i2o_proc.o
-
-include $(TOPDIR)/Rules.make
-
+++ /dev/null
-
- Linux I2O Support (c) Copyright 1999 Red Hat Software
- and others.
-
- This program is free software; you can redistribute it and/or
- modify it under the terms of the GNU General Public License
- as published by the Free Software Foundation; either version
- 2 of the License, or (at your option) any later version.
-
-AUTHORS (so far)
-
-Alan Cox, Building Number Three Ltd.
- Core code, SCSI and Block OSMs
-
-Steve Ralston, LSI Logic Corp.
- Debugging SCSI and Block OSM
-
-Deepak Saxena, Intel Corp.
- Various core/block extensions
- /proc interface, bug fixes
- Ioctl interfaces for control
- Debugging LAN OSM
-
-Philip Rumpf
- Fixed assorted dumb SMP locking bugs
-
-Juha Sievanen, University of Helsinki Finland
- LAN OSM code
- /proc interface to LAN class
- Bug fixes
- Core code extensions
-
-Auvo Häkkinen, University of Helsinki Finland
- LAN OSM code
- /Proc interface to LAN class
- Bug fixes
- Core code extensions
-
-Taneli Vähäkangas, University of Helsinki Finland
- Fixes to i2o_config
-
-CREDITS
-
- This work was made possible by
-
-Red Hat Software
- Funding for the Building #3 part of the project
-
-Symbios Logic (Now LSI)
- Host adapters, hints, known to work platforms when I hit
- compatibility problems
-
-BoxHill Corporation
- Loan of initial FibreChannel disk array used for development work.
-
-European Comission
- Funding the work done by the University of Helsinki
-
-SysKonnect
- Loan of FDDI and Gigabit Ethernet cards
-
-ASUSTeK
- Loan of I2O motherboard
-
-STATUS:
-
-o The core setup works within limits.
-o The scsi layer seems to almost work.
- I'm still chasing down the hang bug.
-o The block OSM is mostly functional
-o LAN OSM works with FDDI and Ethernet cards.
-
-TO DO:
-
-General:
-o Provide hidden address space if asked
-o Long term message flow control
-o PCI IOP's without interrupts are not supported yet
-o Push FAIL handling into the core
-o DDM control interfaces for module load etc
-o Add I2O 2.0 support (Deffered to 2.5 kernel)
-
-Block:
-o Multiple major numbers
-o Read ahead and cache handling stuff. Talk to Ingo and people
-o Power management
-o Finish Media changers
-
-SCSI:
-o Find the right way to associate drives/luns/busses
-
-Lan:
-o Performance tuning
-o Test Fibre Channel code
-
-Tape:
-o Anyone seen anything implementing this ?
- (D.S: Will attempt to do so if spare cycles permit)
+++ /dev/null
-
-Linux I2O User Space Interface
-rev 0.3 - 04/20/99
-
-=============================================================================
-Originally written by Deepak Saxena(deepak@plexity.net)
-Currently maintained by Deepak Saxena(deepak@plexity.net)
-=============================================================================
-
-I. Introduction
-
-The Linux I2O subsystem provides a set of ioctl() commands that can be
-utilized by user space applications to communicate with IOPs and devices
-on individual IOPs. This document defines the specific ioctl() commands
-that are available to the user and provides examples of their uses.
-
-This document assumes the reader is familiar with or has access to the
-I2O specification as no I2O message parameters are outlined. For information
-on the specification, see http://www.i2osig.org
-
-This document and the I2O user space interface are currently maintained
-by Deepak Saxena. Please send all comments, errata, and bug fixes to
-deepak@csociety.purdue.edu
-
-II. IOP Access
-
-Access to the I2O subsystem is provided through the device file named
-/dev/i2o/ctl. This file is a character file with major number 10 and minor
-number 166. It can be created through the following command:
-
- mknod /dev/i2o/ctl c 10 166
-
-III. Determining the IOP Count
-
- SYNOPSIS
-
- ioctl(fd, I2OGETIOPS, int *count);
-
- u8 count[MAX_I2O_CONTROLLERS];
-
- DESCRIPTION
-
- This function returns the system's active IOP table. count should
- point to a buffer containing MAX_I2O_CONTROLLERS entries. Upon
- returning, each entry will contain a non-zero value if the given
- IOP unit is active, and NULL if it is inactive or non-existent.
-
- RETURN VALUE.
-
- Returns 0 if no errors occur, and -1 otherwise. If an error occurs,
- errno is set appropriately:
-
- EFAULT Invalid user space pointer was passed
-
-IV. Getting Hardware Resource Table
-
- SYNOPSIS
-
- ioctl(fd, I2OHRTGET, struct i2o_cmd_hrt *hrt);
-
- struct i2o_cmd_hrtlct
- {
- u32 iop; /* IOP unit number */
- void *resbuf; /* Buffer for result */
- u32 *reslen; /* Buffer length in bytes */
- };
-
- DESCRIPTION
-
- This function returns the Hardware Resource Table of the IOP specified
- by hrt->iop in the buffer pointed to by hrt->resbuf. The actual size of
- the data is written into *(hrt->reslen).
-
- RETURNS
-
- This function returns 0 if no errors occur. If an error occurs, -1
- is returned and errno is set appropriately:
-
- EFAULT Invalid user space pointer was passed
- ENXIO Invalid IOP number
- ENOBUFS Buffer not large enough. If this occurs, the required
- buffer length is written into *(hrt->reslen)
-
-V. Getting Logical Configuration Table
-
- SYNOPSIS
-
- ioctl(fd, I2OLCTGET, struct i2o_cmd_lct *lct);
-
- struct i2o_cmd_hrtlct
- {
- u32 iop; /* IOP unit number */
- void *resbuf; /* Buffer for result */
- u32 *reslen; /* Buffer length in bytes */
- };
-
- DESCRIPTION
-
- This function returns the Logical Configuration Table of the IOP specified
- by lct->iop in the buffer pointed to by lct->resbuf. The actual size of
- the data is written into *(lct->reslen).
-
- RETURNS
-
- This function returns 0 if no errors occur. If an error occurs, -1
- is returned and errno is set appropriately:
-
- EFAULT Invalid user space pointer was passed
- ENXIO Invalid IOP number
- ENOBUFS Buffer not large enough. If this occurs, the required
- buffer length is written into *(lct->reslen)
-
-VI. Settting Parameters
-
- SYNOPSIS
-
- ioctl(fd, I2OPARMSET, struct i2o_parm_setget *ops);
-
- struct i2o_cmd_psetget
- {
- u32 iop; /* IOP unit number */
- u32 tid; /* Target device TID */
- void *opbuf; /* Operation List buffer */
- u32 oplen; /* Operation List buffer length in bytes */
- void *resbuf; /* Result List buffer */
- u32 *reslen; /* Result List buffer length in bytes */
- };
-
- DESCRIPTION
-
- This function posts a UtilParamsSet message to the device identified
- by ops->iop and ops->tid. The operation list for the message is
- sent through the ops->opbuf buffer, and the result list is written
- into the buffer pointed to by ops->resbuf. The number of bytes
- written is placed into *(ops->reslen).
-
- RETURNS
-
- The return value is the size in bytes of the data written into
- ops->resbuf if no errors occur. If an error occurs, -1 is returned
- and errno is set appropriatly:
-
- EFAULT Invalid user space pointer was passed
- ENXIO Invalid IOP number
- ENOBUFS Buffer not large enough. If this occurs, the required
- buffer length is written into *(ops->reslen)
- ETIMEDOUT Timeout waiting for reply message
- ENOMEM Kernel memory allocation error
-
- A return value of 0 does not mean that the value was actually
- changed properly on the IOP. The user should check the result
- list to determine the specific status of the transaction.
-
-VII. Getting Parameters
-
- SYNOPSIS
-
- ioctl(fd, I2OPARMGET, struct i2o_parm_setget *ops);
-
- struct i2o_parm_setget
- {
- u32 iop; /* IOP unit number */
- u32 tid; /* Target device TID */
- void *opbuf; /* Operation List buffer */
- u32 oplen; /* Operation List buffer length in bytes */
- void *resbuf; /* Result List buffer */
- u32 *reslen; /* Result List buffer length in bytes */
- };
-
- DESCRIPTION
-
- This function posts a UtilParamsGet message to the device identified
- by ops->iop and ops->tid. The operation list for the message is
- sent through the ops->opbuf buffer, and the result list is written
- into the buffer pointed to by ops->resbuf. The actual size of data
- written is placed into *(ops->reslen).
-
- RETURNS
-
- EFAULT Invalid user space pointer was passed
- ENXIO Invalid IOP number
- ENOBUFS Buffer not large enough. If this occurs, the required
- buffer length is written into *(ops->reslen)
- ETIMEDOUT Timeout waiting for reply message
- ENOMEM Kernel memory allocation error
-
- A return value of 0 does not mean that the value was actually
- properly retreived. The user should check the result list
- to determine the specific status of the transaction.
-
-VIII. Downloading Software
-
- SYNOPSIS
-
- ioctl(fd, I2OSWDL, struct i2o_sw_xfer *sw);
-
- struct i2o_sw_xfer
- {
- u32 iop; /* IOP unit number */
- u8 flags; /* DownloadFlags field */
- u8 sw_type; /* Software type */
- u32 sw_id; /* Software ID */
- void *buf; /* Pointer to software buffer */
- u32 *swlen; /* Length of software buffer */
- u32 *maxfrag; /* Number of fragments */
- u32 *curfrag; /* Current fragment number */
- };
-
- DESCRIPTION
-
- This function downloads a software fragment pointed by sw->buf
- to the iop identified by sw->iop. The DownloadFlags, SwID, SwType
- and SwSize fields of the ExecSwDownload message are filled in with
- the values of sw->flags, sw->sw_id, sw->sw_type and *(sw->swlen).
-
- The fragments _must_ be sent in order and be 8K in size. The last
- fragment _may_ be shorter, however. The kernel will compute its
- size based on information in the sw->swlen field.
-
- Please note that SW transfers can take a long time.
-
- RETURNS
-
- This function returns 0 no errors occur. If an error occurs, -1
- is returned and errno is set appropriatly:
-
- EFAULT Invalid user space pointer was passed
- ENXIO Invalid IOP number
- ETIMEDOUT Timeout waiting for reply message
- ENOMEM Kernel memory allocation error
-
-IX. Uploading Software
-
- SYNOPSIS
-
- ioctl(fd, I2OSWUL, struct i2o_sw_xfer *sw);
-
- struct i2o_sw_xfer
- {
- u32 iop; /* IOP unit number */
- u8 flags; /* UploadFlags */
- u8 sw_type; /* Software type */
- u32 sw_id; /* Software ID */
- void *buf; /* Pointer to software buffer */
- u32 *swlen; /* Length of software buffer */
- u32 *maxfrag; /* Number of fragments */
- u32 *curfrag; /* Current fragment number */
- };
-
- DESCRIPTION
-
- This function uploads a software fragment from the IOP identified
- by sw->iop, sw->sw_type, sw->sw_id and optionally sw->swlen fields.
- The UploadFlags, SwID, SwType and SwSize fields of the ExecSwUpload
- message are filled in with the values of sw->flags, sw->sw_id,
- sw->sw_type and *(sw->swlen).
-
- The fragments _must_ be requested in order and be 8K in size. The
- user is responsible for allocating memory pointed by sw->buf. The
- last fragment _may_ be shorter.
-
- Please note that SW transfers can take a long time.
-
- RETURNS
-
- This function returns 0 if no errors occur. If an error occurs, -1
- is returned and errno is set appropriatly:
-
- EFAULT Invalid user space pointer was passed
- ENXIO Invalid IOP number
- ETIMEDOUT Timeout waiting for reply message
- ENOMEM Kernel memory allocation error
-
-X. Removing Software
-
- SYNOPSIS
-
- ioctl(fd, I2OSWDEL, struct i2o_sw_xfer *sw);
-
- struct i2o_sw_xfer
- {
- u32 iop; /* IOP unit number */
- u8 flags; /* RemoveFlags */
- u8 sw_type; /* Software type */
- u32 sw_id; /* Software ID */
- void *buf; /* Unused */
- u32 *swlen; /* Length of the software data */
- u32 *maxfrag; /* Unused */
- u32 *curfrag; /* Unused */
- };
-
- DESCRIPTION
-
- This function removes software from the IOP identified by sw->iop.
- The RemoveFlags, SwID, SwType and SwSize fields of the ExecSwRemove message
- are filled in with the values of sw->flags, sw->sw_id, sw->sw_type and
- *(sw->swlen). Give zero in *(sw->len) if the value is unknown. IOP uses
- *(sw->swlen) value to verify correct identication of the module to remove.
- The actual size of the module is written into *(sw->swlen).
-
- RETURNS
-
- This function returns 0 if no errors occur. If an error occurs, -1
- is returned and errno is set appropriatly:
-
- EFAULT Invalid user space pointer was passed
- ENXIO Invalid IOP number
- ETIMEDOUT Timeout waiting for reply message
- ENOMEM Kernel memory allocation error
-
-X. Validating Configuration
-
- SYNOPSIS
-
- ioctl(fd, I2OVALIDATE, int *iop);
- u32 iop;
-
- DESCRIPTION
-
- This function posts an ExecConfigValidate message to the controller
- identified by iop. This message indicates that the the current
- configuration is accepted. The iop changes the status of suspect drivers
- to valid and may delete old drivers from its store.
-
- RETURNS
-
- This function returns 0 if no erro occur. If an error occurs, -1 is
- returned and errno is set appropriatly:
-
- ETIMEDOUT Timeout waiting for reply message
- ENXIO Invalid IOP number
-
-XI. Configuration Dialog
-
- SYNOPSIS
-
- ioctl(fd, I2OHTML, struct i2o_html *htquery);
- struct i2o_html
- {
- u32 iop; /* IOP unit number */
- u32 tid; /* Target device ID */
- u32 page; /* HTML page */
- void *resbuf; /* Buffer for reply HTML page */
- u32 *reslen; /* Length in bytes of reply buffer */
- void *qbuf; /* Pointer to HTTP query string */
- u32 qlen; /* Length in bytes of query string buffer */
- };
-
- DESCRIPTION
-
- This function posts an UtilConfigDialog message to the device identified
- by htquery->iop and htquery->tid. The requested HTML page number is
- provided by the htquery->page field, and the resultant data is stored
- in the buffer pointed to by htquery->resbuf. If there is an HTTP query
- string that is to be sent to the device, it should be sent in the buffer
- pointed to by htquery->qbuf. If there is no query string, this field
- should be set to NULL. The actual size of the reply received is written
- into *(htquery->reslen).
-
- RETURNS
-
- This function returns 0 if no error occur. If an error occurs, -1
- is returned and errno is set appropriatly:
-
- EFAULT Invalid user space pointer was passed
- ENXIO Invalid IOP number
- ENOBUFS Buffer not large enough. If this occurs, the required
- buffer length is written into *(ops->reslen)
- ETIMEDOUT Timeout waiting for reply message
- ENOMEM Kernel memory allocation error
-
-XII. Events
-
- In the process of determining this. Current idea is to have use
- the select() interface to allow user apps to periodically poll
- the /dev/i2o/ctl device for events. When select() notifies the user
- that an event is available, the user would call read() to retrieve
- a list of all the events that are pending for the specific device.
-
-=============================================================================
-Revision History
-=============================================================================
-
-Rev 0.1 - 04/01/99
-- Initial revision
-
-Rev 0.2 - 04/06/99
-- Changed return values to match UNIX ioctl() standard. Only return values
- are 0 and -1. All errors are reported through errno.
-- Added summary of proposed possible event interfaces
-
-Rev 0.3 - 04/20/99
-- Changed all ioctls() to use pointers to user data instead of actual data
-- Updated error values to match the code
+++ /dev/null
-/*
- * I2O Random Block Storage Class OSM
- *
- * (C) Copyright 1999 Red Hat Software
- *
- * Written by Alan Cox, Building Number Three Ltd
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version
- * 2 of the License, or (at your option) any later version.
- *
- * This is a beta test release. Most of the good code was taken
- * from the nbd driver by Pavel Machek, who in turn took some of it
- * from loop.c. Isn't free software great for reusability 8)
- *
- * Fixes/additions:
- * Steve Ralston:
- * Multiple device handling error fixes,
- * Added a queue depth.
- * Alan Cox:
- * FC920 has an rmw bug. Dont or in the end marker.
- * Removed queue walk, fixed for 64bitness.
- * Deepak Saxena:
- * Independent queues per IOP
- * Support for dynamic device creation/deletion
- * Code cleanup
- * Support for larger I/Os through merge* functions
- * (taken from DAC960 driver)
- * Boji T Kannanthanam:
- * Set the I2O Block devices to be detected in increasing
- * order of TIDs during boot.
- * Search and set the I2O block device that we boot off from as
- * the first device to be claimed (as /dev/i2o/hda)
- * Properly attach/detach I2O gendisk structure from the system
- * gendisk list. The I2O block devices now appear in
- * /proc/partitions.
- *
- * To do:
- * Serial number scanning to find duplicates for FC multipathing
- */
-
-#include <linux/major.h>
-
-#include <linux/module.h>
-
-#include <linux/sched.h>
-#include <linux/fs.h>
-#include <linux/stat.h>
-#include <linux/errno.h>
-#include <linux/file.h>
-#include <linux/ioctl.h>
-#include <linux/i2o.h>
-#include <linux/blkdev.h>
-#include <linux/blkpg.h>
-#include <linux/slab.h>
-#include <linux/hdreg.h>
-
-#include <linux/notifier.h>
-#include <linux/reboot.h>
-
-#include <asm/uaccess.h>
-#include <asm/semaphore.h>
-#include <linux/completion.h>
-#include <asm/io.h>
-#include <asm/atomic.h>
-#include <linux/smp_lock.h>
-#include <linux/wait.h>
-
-#define MAJOR_NR I2O_MAJOR
-
-#include <linux/blk.h>
-
-#define MAX_I2OB 16
-
-#define MAX_I2OB_DEPTH 128
-#define MAX_I2OB_RETRIES 4
-
-//#define DRIVERDEBUG
-#ifdef DRIVERDEBUG
-#define DEBUG( s ) printk( s )
-#else
-#define DEBUG( s )
-#endif
-
-/*
- * Events that this OSM is interested in
- */
-#define I2OB_EVENT_MASK (I2O_EVT_IND_BSA_VOLUME_LOAD | \
- I2O_EVT_IND_BSA_VOLUME_UNLOAD | \
- I2O_EVT_IND_BSA_VOLUME_UNLOAD_REQ | \
- I2O_EVT_IND_BSA_CAPACITY_CHANGE | \
- I2O_EVT_IND_BSA_SCSI_SMART )
-
-
-/*
- * I2O Block Error Codes - should be in a header file really...
- */
-#define I2O_BSA_DSC_SUCCESS 0x0000
-#define I2O_BSA_DSC_MEDIA_ERROR 0x0001
-#define I2O_BSA_DSC_ACCESS_ERROR 0x0002
-#define I2O_BSA_DSC_DEVICE_FAILURE 0x0003
-#define I2O_BSA_DSC_DEVICE_NOT_READY 0x0004
-#define I2O_BSA_DSC_MEDIA_NOT_PRESENT 0x0005
-#define I2O_BSA_DSC_MEDIA_LOCKED 0x0006
-#define I2O_BSA_DSC_MEDIA_FAILURE 0x0007
-#define I2O_BSA_DSC_PROTOCOL_FAILURE 0x0008
-#define I2O_BSA_DSC_BUS_FAILURE 0x0009
-#define I2O_BSA_DSC_ACCESS_VIOLATION 0x000A
-#define I2O_BSA_DSC_WRITE_PROTECTED 0x000B
-#define I2O_BSA_DSC_DEVICE_RESET 0x000C
-#define I2O_BSA_DSC_VOLUME_CHANGED 0x000D
-#define I2O_BSA_DSC_TIMEOUT 0x000E
-
-/*
- * Some of these can be made smaller later
- */
-
-static int i2ob_blksizes[MAX_I2OB<<4];
-static int i2ob_hardsizes[MAX_I2OB<<4];
-static int i2ob_sizes[MAX_I2OB<<4];
-static int i2ob_media_change_flag[MAX_I2OB];
-static u32 i2ob_max_sectors[MAX_I2OB<<4];
-
-static int i2ob_context;
-
-/*
- * I2O Block device descriptor
- */
-struct i2ob_device
-{
- struct i2o_controller *controller;
- struct i2o_device *i2odev;
- int unit;
- int tid;
- int flags;
- int refcnt;
- struct request *head, *tail;
- request_queue_t *req_queue;
- int max_segments;
- int done_flag;
- int constipated;
- int depth;
-};
-
-/*
- * FIXME:
- * We should cache align these to avoid ping-ponging lines on SMP
- * boxes under heavy I/O load...
- */
-struct i2ob_request
-{
- struct i2ob_request *next;
- struct request *req;
- int num;
-};
-
-/*
- * Per-IOP request queue information
- *
- * We have a separate request_queue_t per IOP so that a heavily
- * loaded I2O block device on one IOP does not starve block devices
- * across all I2O controllers.
- *
- */
-struct i2ob_iop_queue
-{
- atomic_t queue_depth;
- struct i2ob_request request_queue[MAX_I2OB_DEPTH];
- struct i2ob_request *i2ob_qhead;
- request_queue_t req_queue;
-};
-static struct i2ob_iop_queue *i2ob_queues[MAX_I2O_CONTROLLERS];
-static struct i2ob_request *i2ob_backlog[MAX_I2O_CONTROLLERS];
-static struct i2ob_request *i2ob_backlog_tail[MAX_I2O_CONTROLLERS];
-
-/*
- * Each I2O disk is one of these.
- */
-
-static struct i2ob_device i2ob_dev[MAX_I2OB<<4];
-static int i2ob_dev_count = 0;
-static struct hd_struct i2ob[MAX_I2OB<<4];
-static struct gendisk i2ob_gendisk; /* Declared later */
-
-/*
- * Mutex and spin lock for event handling synchronization
- * evt_msg contains the last event.
- */
-static DECLARE_MUTEX_LOCKED(i2ob_evt_sem);
-static DECLARE_COMPLETION(i2ob_thread_dead);
-static spinlock_t i2ob_evt_lock = SPIN_LOCK_UNLOCKED;
-static u32 evt_msg[MSG_FRAME_SIZE>>2];
-
-static struct timer_list i2ob_timer;
-static int i2ob_timer_started = 0;
-
-static void i2o_block_reply(struct i2o_handler *, struct i2o_controller *,
- struct i2o_message *);
-static void i2ob_new_device(struct i2o_controller *, struct i2o_device *);
-static void i2ob_del_device(struct i2o_controller *, struct i2o_device *);
-static void i2ob_reboot_event(void);
-static int i2ob_install_device(struct i2o_controller *, struct i2o_device *, int);
-static void i2ob_end_request(struct request *);
-static void i2ob_request(request_queue_t *);
-static int i2ob_backlog_request(struct i2o_controller *, struct i2ob_device *);
-static int i2ob_init_iop(unsigned int);
-static request_queue_t* i2ob_get_queue(kdev_t);
-static int i2ob_query_device(struct i2ob_device *, int, int, void*, int);
-static int do_i2ob_revalidate(kdev_t, int);
-static int i2ob_evt(void *);
-
-static int evt_pid = 0;
-static int evt_running = 0;
-static int scan_unit = 0;
-
-/*
- * I2O OSM registration structure...keeps getting bigger and bigger :)
- */
-static struct i2o_handler i2o_block_handler =
-{
- i2o_block_reply,
- i2ob_new_device,
- i2ob_del_device,
- i2ob_reboot_event,
- "I2O Block OSM",
- 0,
- I2O_CLASS_RANDOM_BLOCK_STORAGE
-};
-
-/*
- * Get a message
- */
-
-static u32 i2ob_get(struct i2ob_device *dev)
-{
- struct i2o_controller *c=dev->controller;
- return I2O_POST_READ32(c);
-}
-
-/*
- * Turn a Linux block request into an I2O block read/write.
- */
-
-static int i2ob_send(u32 m, struct i2ob_device *dev, struct i2ob_request *ireq, u32 base, int unit)
-{
- struct i2o_controller *c = dev->controller;
- int tid = dev->tid;
- unsigned long msg;
- unsigned long mptr;
- u64 offset;
- struct request *req = ireq->req;
- struct buffer_head *bh = req->bh;
- int count = req->nr_sectors<<9;
- char *last = NULL;
- unsigned short size = 0;
-
- // printk(KERN_INFO "i2ob_send called\n");
- /* Map the message to a virtual address */
- msg = c->mem_offset + m;
-
- /*
- * Build the message based on the request.
- */
- __raw_writel(i2ob_context|(unit<<8), msg+8);
- __raw_writel(ireq->num, msg+12);
- __raw_writel(req->nr_sectors << 9, msg+20);
-
- /*
- * Mask out partitions from now on
- */
- unit &= 0xF0;
-
- /* This can be optimised later - just want to be sure it's right for
- starters */
- offset = ((u64)(req->sector+base)) << 9;
- __raw_writel( offset & 0xFFFFFFFF, msg+24);
- __raw_writel(offset>>32, msg+28);
- mptr=msg+32;
-
- if(req->cmd == READ)
- {
- __raw_writel(I2O_CMD_BLOCK_READ<<24|HOST_TID<<12|tid, msg+4);
- while(bh!=NULL)
- {
- if(bh->b_data == last) {
- size += bh->b_size;
- last += bh->b_size;
- if(bh->b_reqnext)
- __raw_writel(0x14000000|(size), mptr-8);
- else
- __raw_writel(0xD4000000|(size), mptr-8);
- }
- else
- {
- if(bh->b_reqnext)
- __raw_writel(0x10000000|(bh->b_size), mptr);
- else
- __raw_writel(0xD0000000|(bh->b_size), mptr);
- __raw_writel(virt_to_bus(bh->b_data), mptr+4);
- mptr += 8;
- size = bh->b_size;
- last = bh->b_data + size;
- }
-
- count -= bh->b_size;
- bh = bh->b_reqnext;
- }
- /*
- * Heuristic for now since the block layer doesn't give
- * us enough info. If it's a big write, assume sequential
- * readahead on the controller. If it's small then don't read
- * ahead but do use the controller cache.
- */
- if(size >= 8192)
- __raw_writel((8<<24)|(1<<16)|8, msg+16);
- else
- __raw_writel((8<<24)|(1<<16)|4, msg+16);
- }
- else if(req->cmd == WRITE)
- {
- __raw_writel(I2O_CMD_BLOCK_WRITE<<24|HOST_TID<<12|tid, msg+4);
- while(bh!=NULL)
- {
- if(bh->b_data == last) {
- size += bh->b_size;
- last += bh->b_size;
- if(bh->b_reqnext)
- __raw_writel(0x14000000|(size), mptr-8);
- else
- __raw_writel(0xD4000000|(size), mptr-8);
- }
- else
- {
- if(bh->b_reqnext)
- __raw_writel(0x14000000|(bh->b_size), mptr);
- else
- __raw_writel(0xD4000000|(bh->b_size), mptr);
- __raw_writel(virt_to_bus(bh->b_data), mptr+4);
- mptr += 8;
- size = bh->b_size;
- last = bh->b_data + size;
- }
-
- count -= bh->b_size;
- bh = bh->b_reqnext;
- }
-
- if(c->battery)
- {
-
- if(size>16384)
- __raw_writel(4, msg+16);
- else
- /*
- * Allow replies to come back once data is cached in the controller
- * This allows us to handle writes quickly thus giving more of the
- * queue to reads.
- */
- __raw_writel(16, msg+16);
- }
- else
- {
- /* Large write, don't cache */
- if(size>8192)
- __raw_writel(4, msg+16);
- else
- /* write through */
- __raw_writel(8, msg+16);
- }
- }
- __raw_writel(I2O_MESSAGE_SIZE(mptr-msg)>>2 | SGL_OFFSET_8, msg);
-
- if(count != 0)
- {
- printk(KERN_ERR "Request count botched by %d.\n", count);
- }
-
- i2o_post_message(c,m);
- atomic_inc(&i2ob_queues[c->unit]->queue_depth);
-
- return 0;
-}
-
-/*
- * Remove a request from the _locked_ request list. We update both the
- * list chain and if this is the last item the tail pointer. Caller
- * must hold the lock.
- */
-
-static inline void i2ob_unhook_request(struct i2ob_request *ireq,
- unsigned int iop)
-{
- ireq->next = i2ob_queues[iop]->i2ob_qhead;
- i2ob_queues[iop]->i2ob_qhead = ireq;
-}
-
-/*
- * Request completion handler
- */
-
-static inline void i2ob_end_request(struct request *req)
-{
- /*
- * Loop until all of the buffers that are linked
- * to this request have been marked updated and
- * unlocked.
- */
-
- while (end_that_request_first( req, !req->errors, "i2o block" ));
-
- /*
- * It is now ok to complete the request.
- */
- end_that_request_last( req );
-}
-
-/*
- * Request merging functions
- */
-static inline int i2ob_new_segment(request_queue_t *q, struct request *req,
- int __max_segments)
-{
- int max_segments = i2ob_dev[MINOR(req->rq_dev)].max_segments;
-
- if (__max_segments < max_segments)
- max_segments = __max_segments;
-
- if (req->nr_segments < max_segments) {
- req->nr_segments++;
- return 1;
- }
- return 0;
-}
-
-static int i2ob_back_merge(request_queue_t *q, struct request *req,
- struct buffer_head *bh, int __max_segments)
-{
- if (req->bhtail->b_data + req->bhtail->b_size == bh->b_data)
- return 1;
- return i2ob_new_segment(q, req, __max_segments);
-}
-
-static int i2ob_front_merge(request_queue_t *q, struct request *req,
- struct buffer_head *bh, int __max_segments)
-{
- if (bh->b_data + bh->b_size == req->bh->b_data)
- return 1;
- return i2ob_new_segment(q, req, __max_segments);
-}
-
-static int i2ob_merge_requests(request_queue_t *q,
- struct request *req,
- struct request *next,
- int __max_segments)
-{
- int max_segments = i2ob_dev[MINOR(req->rq_dev)].max_segments;
- int total_segments = req->nr_segments + next->nr_segments;
-
- if (__max_segments < max_segments)
- max_segments = __max_segments;
-
- if (req->bhtail->b_data + req->bhtail->b_size == next->bh->b_data)
- total_segments--;
-
- if (total_segments > max_segments)
- return 0;
-
- req->nr_segments = total_segments;
- return 1;
-}
-
-static int i2ob_flush(struct i2o_controller *c, struct i2ob_device *d, int unit)
-{
- unsigned long msg;
- u32 m = i2ob_get(d);
-
- if(m == 0xFFFFFFFF)
- return -1;
-
- msg = c->mem_offset + m;
-
- /*
- * Ask the controller to write the cache back. This sorts out
- * the supertrak firmware flaw and also does roughly the right
- * thing for other cases too.
- */
-
- __raw_writel(FIVE_WORD_MSG_SIZE|SGL_OFFSET_0, msg);
- __raw_writel(I2O_CMD_BLOCK_CFLUSH<<24|HOST_TID<<12|d->tid, msg+4);
- __raw_writel(i2ob_context|(unit<<8), msg+8);
- __raw_writel(0, msg+12);
- __raw_writel(60<<16, msg+16);
-
- i2o_post_message(c,m);
- return 0;
-}
-
-/*
- * OSM reply handler. This gets all the message replies
- */
-
-static void i2o_block_reply(struct i2o_handler *h, struct i2o_controller *c, struct i2o_message *msg)
-{
- unsigned long flags;
- struct i2ob_request *ireq = NULL;
- u8 st;
- u32 *m = (u32 *)msg;
- u8 unit = (m[2]>>8)&0xF0; /* low 4 bits are partition */
- struct i2ob_device *dev = &i2ob_dev[(unit&0xF0)];
-
- /*
- * FAILed message
- */
- if(m[0] & (1<<13))
- {
- /*
- * FAILed message from controller
- * We increment the error count and abort it
- *
- * In theory this will never happen. The I2O block class
- * specification states that block devices never return
- * FAILs but instead use the REQ status field...but
- * better to be on the safe side since no one really follows
- * the spec to the book :)
- */
- ireq=&i2ob_queues[c->unit]->request_queue[m[3]];
- ireq->req->errors++;
-
- spin_lock_irqsave(&io_request_lock, flags);
- i2ob_unhook_request(ireq, c->unit);
- i2ob_end_request(ireq->req);
- spin_unlock_irqrestore(&io_request_lock, flags);
-
- /* Now flush the message by making it a NOP */
- m[0]&=0x00FFFFFF;
- m[0]|=(I2O_CMD_UTIL_NOP)<<24;
- i2o_post_message(c,virt_to_bus(m));
-
- return;
- }
-
- if(msg->function == I2O_CMD_UTIL_EVT_REGISTER)
- {
- spin_lock(&i2ob_evt_lock);
- memcpy(evt_msg, msg, (m[0]>>16)<<2);
- spin_unlock(&i2ob_evt_lock);
- up(&i2ob_evt_sem);
- return;
- }
-
- if(msg->function == I2O_CMD_BLOCK_CFLUSH)
- {
- spin_lock_irqsave(&io_request_lock, flags);
- dev->constipated=0;
- DEBUG(("unconstipated\n"));
- if(i2ob_backlog_request(c, dev)==0)
- i2ob_request(dev->req_queue);
- spin_unlock_irqrestore(&io_request_lock, flags);
- return;
- }
-
- if(!dev->i2odev)
- {
- /*
- * This is HACK, but Intel Integrated RAID allows user
- * to delete a volume that is claimed, locked, and in use
- * by the OS. We have to check for a reply from a
- * non-existent device and flag it as an error or the system
- * goes kaput...
- */
- ireq=&i2ob_queues[c->unit]->request_queue[m[3]];
- ireq->req->errors++;
- printk(KERN_WARNING "I2O Block: Data transfer to deleted device!\n");
- spin_lock_irqsave(&io_request_lock, flags);
- i2ob_unhook_request(ireq, c->unit);
- i2ob_end_request(ireq->req);
- spin_unlock_irqrestore(&io_request_lock, flags);
- return;
- }
-
- /*
- * Lets see what is cooking. We stuffed the
- * request in the context.
- */
-
- ireq=&i2ob_queues[c->unit]->request_queue[m[3]];
- st=m[4]>>24;
-
- if(st!=0)
- {
- int err;
- char *bsa_errors[] =
- {
- "Success",
- "Media Error",
- "Failure communicating to device",
- "Device Failure",
- "Device is not ready",
- "Media not present",
- "Media is locked by another user",
- "Media has failed",
- "Failure communicating to device",
- "Device bus failure",
- "Device is locked by another user",
- "Device is write protected",
- "Device has reset",
- "Volume has changed, waiting for acknowledgement"
- };
-
- err = m[4]&0xFFFF;
-
- /*
- * Device not ready means one of two things. One is that
- * the device went offline (but not a media removal).
- *
- * The second is that you have a SuperTrak 100 and the
- * firmware got constipated. Unlike standard I2O card
- * setups, the SuperTrak returns an error rather than
- * blocking for the timeout in these cases.
- */
-
-
- spin_lock_irqsave(&io_request_lock, flags);
- if(err==4)
- {
- /*
- * Time to uncork stuff
- */
-
- if(!dev->constipated)
- {
- dev->constipated = 1;
- DEBUG(("constipated\n"));
- /* Now pull the chain */
- if(i2ob_flush(c, dev, unit)<0)
- {
- DEBUG(("i2ob: Unable to queue flush. Retrying I/O immediately.\n"));
- dev->constipated=0;
- }
- DEBUG(("flushing\n"));
- }
-
- /*
- * Recycle the request
- */
-
-// i2ob_unhook_request(ireq, c->unit);
-
- /*
- * Place it on the recycle queue
- */
-
- ireq->next = NULL;
- if(i2ob_backlog_tail[c->unit]!=NULL)
- i2ob_backlog_tail[c->unit]->next = ireq;
- else
- i2ob_backlog[c->unit] = ireq;
- i2ob_backlog_tail[c->unit] = ireq;
-
- atomic_dec(&i2ob_queues[c->unit]->queue_depth);
-
- /*
- * If the constipator flush failed we want to
- * poke the queue again.
- */
-
- i2ob_request(dev->req_queue);
- spin_unlock_irqrestore(&io_request_lock, flags);
-
- /*
- * and out
- */
-
- return;
- }
- spin_unlock_irqrestore(&io_request_lock, flags);
- printk(KERN_ERR "\n/dev/%s error: %s", dev->i2odev->dev_name,
- bsa_errors[m[4]&0XFFFF]);
- if(m[4]&0x00FF0000)
- printk(" - DDM attempted %d retries", (m[4]>>16)&0x00FF );
- printk(".\n");
- ireq->req->errors++;
- }
- else
- ireq->req->errors = 0;
-
- /*
- * Dequeue the request. We use irqsave locks as one day we
- * may be running polled controllers from a BH...
- */
-
- spin_lock_irqsave(&io_request_lock, flags);
- i2ob_unhook_request(ireq, c->unit);
- i2ob_end_request(ireq->req);
- atomic_dec(&i2ob_queues[c->unit]->queue_depth);
-
- /*
- * We may be able to do more I/O
- */
-
- if(i2ob_backlog_request(c, dev)==0)
- i2ob_request(dev->req_queue);
-
- spin_unlock_irqrestore(&io_request_lock, flags);
-}
-
-/*
- * Event handler. Needs to be a separate thread b/c we may have
- * to do things like scan a partition table, or query parameters
- * which cannot be done from an interrupt or from a bottom half.
- */
-static int i2ob_evt(void *dummy)
-{
- unsigned int evt;
- unsigned long flags;
- int unit;
- int i;
- //The only event that has data is the SCSI_SMART event.
- struct i2o_reply {
- u32 header[4];
- u32 evt_indicator;
- u8 ASC;
- u8 ASCQ;
- u8 data[16];
- } *evt_local;
-
- lock_kernel();
- daemonize();
- unlock_kernel();
-
- strcpy(current->comm, "i2oblock");
- evt_running = 1;
-
- while(1)
- {
- if(down_interruptible(&i2ob_evt_sem))
- {
- evt_running = 0;
- printk("exiting...");
- break;
- }
-
- /*
- * Keep another CPU/interrupt from overwriting the
- * message while we're reading it.
- *
- * We stuffed the unit in the TxContext and grab the event mask.
- * None of the BSA events we care about have EventData.
- */
- spin_lock_irqsave(&i2ob_evt_lock, flags);
- evt_local = (struct i2o_reply *)evt_msg;
- spin_unlock_irqrestore(&i2ob_evt_lock, flags);
-
- unit = evt_local->header[3];
- evt = evt_local->evt_indicator;
-
- switch(evt)
- {
- /*
- * New volume loaded on same TID, so we just re-install.
- * The TID/controller don't change as it is the same
- * I2O device. It's just new media that we have to
- * rescan.
- */
- case I2O_EVT_IND_BSA_VOLUME_LOAD:
- {
- i2ob_install_device(i2ob_dev[unit].i2odev->controller,
- i2ob_dev[unit].i2odev, unit);
- break;
- }
-
- /*
- * No media, so set all parameters to 0 and set the media
- * change flag. The I2O device is still valid, just doesn't
- * have media, so we don't want to clear the controller or
- * device pointer.
- */
- case I2O_EVT_IND_BSA_VOLUME_UNLOAD:
- {
- for(i = unit; i <= unit+15; i++)
- {
- i2ob_sizes[i] = 0;
- i2ob_hardsizes[i] = 0;
- i2ob_max_sectors[i] = 0;
- i2ob[i].nr_sects = 0;
- i2ob_gendisk.part[i].nr_sects = 0;
- }
- i2ob_media_change_flag[unit] = 1;
- break;
- }
-
- case I2O_EVT_IND_BSA_VOLUME_UNLOAD_REQ:
- printk(KERN_WARNING "%s: Attempt to eject locked media\n",
- i2ob_dev[unit].i2odev->dev_name);
- break;
-
- /*
- * The capacity has changed and we are going to be
- * updating the max_sectors and other information
- * about this disk. We try a revalidate first. If
- * the block device is in use, we don't want to
- * do that as there may be I/Os bound for the disk
- * at the moment. In that case we read the size
- * from the device and update the information ourselves
- * and the user can later force a partition table
- * update through an ioctl.
- */
- case I2O_EVT_IND_BSA_CAPACITY_CHANGE:
- {
- u64 size;
-
- if(do_i2ob_revalidate(MKDEV(MAJOR_NR, unit),0) != -EBUSY)
- continue;
-
- if(i2ob_query_device(&i2ob_dev[unit], 0x0004, 0, &size, 8) !=0 )
- i2ob_query_device(&i2ob_dev[unit], 0x0000, 4, &size, 8);
-
- spin_lock_irqsave(&io_request_lock, flags);
- i2ob_sizes[unit] = (int)(size>>10);
- i2ob_gendisk.part[unit].nr_sects = size>>9;
- i2ob[unit].nr_sects = (int)(size>>9);
- spin_unlock_irqrestore(&io_request_lock, flags);
- break;
- }
-
- /*
- * We got a SCSI SMART event, we just log the relevant
- * information and let the user decide what they want
- * to do with the information.
- */
- case I2O_EVT_IND_BSA_SCSI_SMART:
- {
- char buf[16];
- printk(KERN_INFO "I2O Block: %s received a SCSI SMART Event\n",i2ob_dev[unit].i2odev->dev_name);
- evt_local->data[15]='\0';
- sprintf(buf,"%s",&evt_local->data[0]);
- printk(KERN_INFO " Disk Serial#:%s\n",buf);
- printk(KERN_INFO " ASC 0x%02x \n",evt_local->ASC);
- printk(KERN_INFO " ASCQ 0x%02x \n",evt_local->ASCQ);
- break;
- }
-
- /*
- * Non event
- */
-
- case 0:
- break;
-
- /*
- * An event we didn't ask for. Call the card manufacturer
- * and tell them to fix their firmware :)
- */
- default:
- printk(KERN_INFO "%s: Received event %d we didn't register for\n"
- KERN_INFO " Blame the I2O card manufacturer 8)\n",
- i2ob_dev[unit].i2odev->dev_name, evt);
- break;
- }
- };
-
- complete_and_exit(&i2ob_thread_dead,0);
- return 0;
-}
-
-/*
- * The timer handler will attempt to restart requests
- * that are queued to the driver. This handler
- * currently only gets called if the controller
- * had no more room in its inbound fifo.
- */
-
-static void i2ob_timer_handler(unsigned long q)
-{
- unsigned long flags;
-
- /*
- * We cannot touch the request queue or the timer
- * flag without holding the io_request_lock.
- */
- spin_lock_irqsave(&io_request_lock,flags);
-
- /*
- * Clear the timer started flag so that
- * the timer can be queued again.
- */
- i2ob_timer_started = 0;
-
- /*
- * Restart any requests.
- */
- i2ob_request((request_queue_t*)q);
-
- /*
- * Free the lock.
- */
- spin_unlock_irqrestore(&io_request_lock,flags);
-}
-
-static int i2ob_backlog_request(struct i2o_controller *c, struct i2ob_device *dev)
-{
- u32 m;
- struct i2ob_request *ireq;
-
- while((ireq=i2ob_backlog[c->unit])!=NULL)
- {
- int unit;
-
- if(atomic_read(&i2ob_queues[c->unit]->queue_depth) > dev->depth/4)
- break;
-
- m = i2ob_get(dev);
- if(m == 0xFFFFFFFF)
- break;
-
- i2ob_backlog[c->unit] = ireq->next;
- if(i2ob_backlog[c->unit] == NULL)
- i2ob_backlog_tail[c->unit] = NULL;
-
- unit = MINOR(ireq->req->rq_dev);
- i2ob_send(m, dev, ireq, i2ob[unit].start_sect, unit);
- }
- if(i2ob_backlog[c->unit])
- return 1;
- return 0;
-}
-
-/*
- * The I2O block driver is listed as one of those that pulls the
- * front entry off the queue before processing it. This is important
- * to remember here. If we drop the io lock then CURRENT will change
- * on us. We must unlink CURRENT in this routine before we return, if
- * we use it.
- */
-
-static void i2ob_request(request_queue_t *q)
-{
- struct request *req;
- struct i2ob_request *ireq;
- int unit;
- struct i2ob_device *dev;
- u32 m;
-
-
- while (!list_empty(&q->queue_head)) {
- /*
- * On an IRQ completion, if there is an inactive
- * request on the queue head it means it isn't yet
- * ready to dispatch.
- */
- req = blkdev_entry_next_request(&q->queue_head);
-
- if(req->rq_status == RQ_INACTIVE)
- return;
-
- unit = MINOR(req->rq_dev);
- dev = &i2ob_dev[(unit&0xF0)];
-
- /*
- * Queue depths probably belong with some kind of
- * generic IOP commit control. It's certainly not
- * right that it's global!
- */
- if(atomic_read(&i2ob_queues[dev->unit]->queue_depth) >= dev->depth)
- break;
-
- /*
- * Is the channel constipated ?
- */
-
- if(i2ob_backlog[dev->unit]!=NULL)
- break;
-
- /* Get a message */
- m = i2ob_get(dev);
-
- if(m==0xFFFFFFFF)
- {
- /*
- * See if the timer has already been queued.
- */
- if (!i2ob_timer_started)
- {
- DEBUG((KERN_ERR "i2ob: starting timer\n"));
-
- /*
- * Set the timer_started flag to ensure
- * that the timer is only queued once.
- * Queuing it more than once will corrupt
- * the timer queue.
- */
- i2ob_timer_started = 1;
-
- /*
- * Set up the timer to expire in
- * 500ms.
- */
- i2ob_timer.expires = jiffies + (HZ >> 1);
- i2ob_timer.data = (unsigned int)q;
-
- /*
- * Start it.
- */
-
- add_timer(&i2ob_timer);
- return;
- }
- }
-
- /*
- * Everything ok, so pull from kernel queue onto our queue
- */
- req->errors = 0;
- blkdev_dequeue_request(req);
- req->waiting = NULL;
-
- ireq = i2ob_queues[dev->unit]->i2ob_qhead;
- i2ob_queues[dev->unit]->i2ob_qhead = ireq->next;
- ireq->req = req;
-
- i2ob_send(m, dev, ireq, i2ob[unit].start_sect, (unit&0xF0));
- }
-}
-
-
-/*
- * SCSI-CAM for ioctl geometry mapping
- * Duplicated with SCSI - this should be moved into somewhere common
- * perhaps genhd ?
- *
- * LBA -> CHS mapping table taken from:
- *
- * "Incorporating the I2O Architecture into BIOS for Intel Architecture
- * Platforms"
- *
- * This is an I2O document that is only available to I2O members,
- * not developers.
- *
- * From my understanding, this is how all the I2O cards do this
- *
- * Disk Size      | Sectors | Heads | Cylinders
- * ---------------+---------+-------+--------------------
- * 1 < X <= 528M  |   63    |  16   | X/(63 * 16 * 512)
- * 528M < X <= 1G |   63    |  32   | X/(63 * 32 * 512)
- * 1G < X <= 21G  |   63    |  64   | X/(63 * 64 * 512)
- * 21G < X <= 42G |   63    | 128   | X/(63 * 128 * 512)
- * 42G < X        |   63    | 255   | X/(63 * 255 * 512)
- *
- */
-#define BLOCK_SIZE_528M 1081344
-#define BLOCK_SIZE_1G 2097152
-#define BLOCK_SIZE_21G 4403200
-#define BLOCK_SIZE_42G 8806400
-#define BLOCK_SIZE_84G 17612800
-
-static void i2o_block_biosparam(
- unsigned long capacity,
- unsigned short *cyls,
- unsigned char *hds,
- unsigned char *secs)
-{
- unsigned long heads, sectors, cylinders;
-
- sectors = 63L; /* Maximize sectors per track */
- if(capacity <= BLOCK_SIZE_528M)
- heads = 16;
- else if(capacity <= BLOCK_SIZE_1G)
- heads = 32;
- else if(capacity <= BLOCK_SIZE_21G)
- heads = 64;
- else if(capacity <= BLOCK_SIZE_42G)
- heads = 128;
- else
- heads = 255;
-
- cylinders = capacity / (heads * sectors);
-
- *cyls = (unsigned short) cylinders; /* Stuff return values */
- *secs = (unsigned char) sectors;
- *hds = (unsigned char) heads;
-}
-
-
-/*
- * Rescan the partition tables
- */
-
-static int do_i2ob_revalidate(kdev_t dev, int maxu)
-{
- int minor=MINOR(dev);
- int i;
-
- minor&=0xF0;
-
- i2ob_dev[minor].refcnt++;
- if(i2ob_dev[minor].refcnt>maxu+1)
- {
- i2ob_dev[minor].refcnt--;
- return -EBUSY;
- }
-
- for( i = 15; i>=0 ; i--)
- {
- int m = minor+i;
- invalidate_device(MKDEV(MAJOR_NR, m), 1);
- i2ob_gendisk.part[m].start_sect = 0;
- i2ob_gendisk.part[m].nr_sects = 0;
- }
-
- /*
- * Do a physical check and then reconfigure
- */
-
- i2ob_install_device(i2ob_dev[minor].controller, i2ob_dev[minor].i2odev,
- minor);
- i2ob_dev[minor].refcnt--;
- return 0;
-}
-
-/*
- * Issue device specific ioctl calls.
- */
-
-static int i2ob_ioctl(struct inode *inode, struct file *file,
- unsigned int cmd, unsigned long arg)
-{
- struct i2ob_device *dev;
- int minor;
-
- /* Anyone capable of this syscall can do *real bad* things */
-
- if (!capable(CAP_SYS_ADMIN))
- return -EPERM;
- if (!inode)
- return -EINVAL;
- minor = MINOR(inode->i_rdev);
- if (minor >= (MAX_I2OB<<4))
- return -ENODEV;
-
- dev = &i2ob_dev[minor];
- switch (cmd) {
- case BLKGETSIZE:
- return put_user(i2ob[minor].nr_sects, (long *) arg);
- case BLKGETSIZE64:
- return put_user((u64)i2ob[minor].nr_sects << 9, (u64 *)arg);
-
- case HDIO_GETGEO:
- {
- struct hd_geometry g;
- int u=minor&0xF0;
- i2o_block_biosparam(i2ob_sizes[u]<<1,
- &g.cylinders, &g.heads, &g.sectors);
- g.start = i2ob[minor].start_sect;
- return copy_to_user((void *)arg,&g, sizeof(g))?-EFAULT:0;
- }
-
- case BLKRRPART:
- if(!capable(CAP_SYS_ADMIN))
- return -EACCES;
- return do_i2ob_revalidate(inode->i_rdev,1);
-
- case BLKFLSBUF:
- case BLKROSET:
- case BLKROGET:
- case BLKRASET:
- case BLKRAGET:
- case BLKPG:
- return blk_ioctl(inode->i_rdev, cmd, arg);
-
- default:
- return -EINVAL;
- }
-}
-
-/*
- * Close the block device down
- */
-
-static int i2ob_release(struct inode *inode, struct file *file)
-{
- struct i2ob_device *dev;
- int minor;
-
- minor = MINOR(inode->i_rdev);
- if (minor >= (MAX_I2OB<<4))
- return -ENODEV;
- dev = &i2ob_dev[(minor&0xF0)];
-
- /*
- * This is to deal with the case of an application
- * opening a device, the device disappearing while it's
- * in use, and the application then trying to release
- * it, e.g. unmounting a deleted RAID volume at reboot.
- * If we send messages, it will just cause FAILs since
- * the TID no longer exists.
- */
- if(!dev->i2odev)
- return 0;
-
- if (dev->refcnt <= 0)
- printk(KERN_ALERT "i2ob_release: refcount(%d) <= 0\n", dev->refcnt);
- dev->refcnt--;
- if(dev->refcnt==0)
- {
- /*
- * Flush the onboard cache on unmount
- */
- u32 msg[5];
- int *query_done = &dev->done_flag;
- msg[0] = FIVE_WORD_MSG_SIZE|SGL_OFFSET_0;
- msg[1] = I2O_CMD_BLOCK_CFLUSH<<24|HOST_TID<<12|dev->tid;
- msg[2] = i2ob_context|0x40000000;
- msg[3] = (u32)query_done;
- msg[4] = 60<<16;
- DEBUG("Flushing...");
- i2o_post_wait(dev->controller, msg, 20, 60);
-
- /*
- * Unlock the media
- */
- msg[0] = FIVE_WORD_MSG_SIZE|SGL_OFFSET_0;
- msg[1] = I2O_CMD_BLOCK_MUNLOCK<<24|HOST_TID<<12|dev->tid;
- msg[2] = i2ob_context|0x40000000;
- msg[3] = (u32)query_done;
- msg[4] = -1;
- DEBUG("Unlocking...");
- i2o_post_wait(dev->controller, msg, 20, 2);
- DEBUG("Unlocked.\n");
-
- /*
- * Now unclaim the device.
- */
-
- if (i2o_release_device(dev->i2odev, &i2o_block_handler))
- printk(KERN_ERR "i2ob_release: controller rejected unclaim.\n");
-
- DEBUG("Unclaim\n");
- }
- MOD_DEC_USE_COUNT;
- return 0;
-}
-
-/*
- * Open the block device.
- */
-
-static int i2ob_open(struct inode *inode, struct file *file)
-{
- int minor;
- struct i2ob_device *dev;
-
- if (!inode)
- return -EINVAL;
- minor = MINOR(inode->i_rdev);
- if (minor >= MAX_I2OB<<4)
- return -ENODEV;
- dev=&i2ob_dev[(minor&0xF0)];
-
- if(!dev->i2odev)
- return -ENODEV;
-
- if(dev->refcnt++==0)
- {
- u32 msg[6];
-
- DEBUG("Claim ");
- if(i2o_claim_device(dev->i2odev, &i2o_block_handler))
- {
- dev->refcnt--;
- printk(KERN_INFO "I2O Block: Could not open device\n");
- return -EBUSY;
- }
- DEBUG("Claimed ");
-
- /*
- * Mount the media if needed. Note that we don't use
- * the lock bit. Since we have to issue a lock if it
- * refuses a mount (quite possible) then we might as
- * well just send two messages out.
- */
- msg[0] = FIVE_WORD_MSG_SIZE|SGL_OFFSET_0;
- msg[1] = I2O_CMD_BLOCK_MMOUNT<<24|HOST_TID<<12|dev->tid;
- msg[4] = -1;
- msg[5] = 0;
- DEBUG("Mount ");
- i2o_post_wait(dev->controller, msg, 24, 2);
-
- /*
- * Lock the media
- */
- msg[0] = FIVE_WORD_MSG_SIZE|SGL_OFFSET_0;
- msg[1] = I2O_CMD_BLOCK_MLOCK<<24|HOST_TID<<12|dev->tid;
- msg[4] = -1;
- DEBUG("Lock ");
- i2o_post_wait(dev->controller, msg, 20, 2);
- DEBUG("Ready.\n");
- }
- MOD_INC_USE_COUNT;
- return 0;
-}
-
-/*
- * Issue a device query
- */
-
-static int i2ob_query_device(struct i2ob_device *dev, int table,
- int field, void *buf, int buflen)
-{
- return i2o_query_scalar(dev->controller, dev->tid,
- table, field, buf, buflen);
-}
-
-
-/*
- * Install the I2O block device we found.
- */
-
-static int i2ob_install_device(struct i2o_controller *c, struct i2o_device *d, int unit)
-{
- u64 size;
- u32 blocksize;
- u32 limit;
- u8 type;
- u32 flags, status;
- struct i2ob_device *dev=&i2ob_dev[unit];
- int i;
-
- /*
- * For logging purposes...
- */
- printk(KERN_INFO "i2ob: Installing tid %d device at unit %d\n",
- d->lct_data.tid, unit);
-
- /*
- * Ask for the current media data. If that isn't supported
- * then we ask for the device capacity data
- */
- if(i2ob_query_device(dev, 0x0004, 1, &blocksize, 4) != 0
- || i2ob_query_device(dev, 0x0004, 0, &size, 8) !=0 )
- {
- i2ob_query_device(dev, 0x0000, 3, &blocksize, 4);
- i2ob_query_device(dev, 0x0000, 4, &size, 8);
- }
-
- i2ob_query_device(dev, 0x0000, 5, &flags, 4);
- i2ob_query_device(dev, 0x0000, 6, &status, 4);
- i2ob_sizes[unit] = (int)(size>>10);
- for(i=unit; i <= unit+15 ; i++)
- i2ob_hardsizes[i] = blocksize;
- i2ob_gendisk.part[unit].nr_sects = size>>9;
- i2ob[unit].nr_sects = (int)(size>>9);
-
- /* Set limit based on inbound frame size */
- limit = (d->controller->status_block->inbound_frame_size - 8)/2;
- limit = limit<<9;
-
- /*
- * Max number of Scatter-Gather Elements
- */
-
- for(i=unit;i<=unit+15;i++)
- {
- if(d->controller->type == I2O_TYPE_PCI && d->controller->bus.pci.queue_buggy)
- {
- i2ob_max_sectors[i] = 32;
- i2ob_dev[i].max_segments = 8;
- i2ob_dev[i].depth = 4;
- }
- else if(d->controller->type == I2O_TYPE_PCI && d->controller->bus.pci.short_req)
- {
- i2ob_max_sectors[i] = 8;
- i2ob_dev[i].max_segments = 8;
- }
- else
- {
- /* MAX_SECTORS was used but 255 is a dumb number for
- striped RAID */
- i2ob_max_sectors[i]=256;
- i2ob_dev[i].max_segments = (d->controller->status_block->inbound_frame_size - 8)/2;
- }
- }
-
- printk(KERN_INFO "Max segments set to %d\n",
- i2ob_dev[unit].max_segments);
- printk(KERN_INFO "Byte limit is %d.\n", limit);
-
- i2ob_query_device(dev, 0x0000, 0, &type, 1);
-
- sprintf(d->dev_name, "%s%c", i2ob_gendisk.major_name, 'a' + (unit>>4));
-
- printk(KERN_INFO "%s: ", d->dev_name);
- switch(type)
- {
- case 0: printk("Disk Storage");break;
- case 4: printk("WORM");break;
- case 5: printk("CD-ROM");break;
- case 7: printk("Optical device");break;
- default:
- printk("Type %d", type);
- }
- if(status&(1<<10))
- printk("(RAID)");
- if(((flags & (1<<3)) && !(status & (1<<3))) ||
- ((flags & (1<<4)) && !(status & (1<<4))))
- {
- printk(KERN_INFO " Not loaded.\n");
- return 1;
- }
- printk("- %dMb, %d byte sectors",
- (int)(size>>20), blocksize);
- if(status&(1<<0))
- {
- u32 cachesize;
- i2ob_query_device(dev, 0x0003, 0, &cachesize, 4);
- cachesize>>=10;
- if(cachesize>4095)
- printk(", %dMb cache", cachesize>>10);
- else
- printk(", %dKb cache", cachesize);
-
- }
- printk(".\n");
- printk(KERN_INFO "%s: Maximum sectors/read set to %d.\n",
- d->dev_name, i2ob_max_sectors[unit]);
-
- /*
- * If this is the first I2O block device found on this IOP,
- * we need to initialize all the queue data structures
- * before any I/O can be performed. If it fails, this
- * device is useless.
- */
- if(!i2ob_queues[c->unit]) {
- if(i2ob_init_iop(c->unit))
- return 1;
- }
-
- /*
- * This will save one level of lookup/indirection in critical
- * code so that we can directly get the queue ptr from the
- * device instead of having to go through the IOP data structure.
- */
- dev->req_queue = &i2ob_queues[c->unit]->req_queue;
-
- grok_partitions(&i2ob_gendisk, unit>>4, 1<<4, (long)(size>>9));
-
- /*
- * Register for the events we're interested in and that the
- * device actually supports.
- */
- i2o_event_register(c, d->lct_data.tid, i2ob_context, unit,
- (I2OB_EVENT_MASK & d->lct_data.event_capabilities));
-
- return 0;
-}
-
-/*
- * Initialize IOP specific queue structures. This is called
- * once for each IOP that has a block device sitting behind it.
- */
-static int i2ob_init_iop(unsigned int unit)
-{
- int i;
-
- i2ob_queues[unit] = (struct i2ob_iop_queue*)
- kmalloc(sizeof(struct i2ob_iop_queue), GFP_ATOMIC);
- if(!i2ob_queues[unit])
- {
- printk(KERN_WARNING
- "Could not allocate request queue for I2O block device!\n");
- return -1;
- }
-
- for(i = 0; i< MAX_I2OB_DEPTH; i++)
- {
- i2ob_queues[unit]->request_queue[i].next =
- &i2ob_queues[unit]->request_queue[i+1];
- i2ob_queues[unit]->request_queue[i].num = i;
- }
-
- /* Queue has MAX_I2OB_DEPTH + 1 entries... */
- i2ob_queues[unit]->request_queue[i].next = NULL;
- i2ob_queues[unit]->i2ob_qhead = &i2ob_queues[unit]->request_queue[0];
- atomic_set(&i2ob_queues[unit]->queue_depth, 0);
-
- blk_init_queue(&i2ob_queues[unit]->req_queue, i2ob_request);
- blk_queue_headactive(&i2ob_queues[unit]->req_queue, 0);
- i2ob_queues[unit]->req_queue.back_merge_fn = i2ob_back_merge;
- i2ob_queues[unit]->req_queue.front_merge_fn = i2ob_front_merge;
- i2ob_queues[unit]->req_queue.merge_requests_fn = i2ob_merge_requests;
- i2ob_queues[unit]->req_queue.queuedata = &i2ob_queues[unit];
-
- return 0;
-}
-
-/*
- * Get the request queue for the given device.
- */
-static request_queue_t* i2ob_get_queue(kdev_t dev)
-{
- int unit = MINOR(dev)&0xF0;
-
- return i2ob_dev[unit].req_queue;
-}
-
-/*
- * Probe the I2O subsystem for block class devices
- */
-static void i2ob_scan(int bios)
-{
- int i;
- int warned = 0;
-
- struct i2o_device *d, *b=NULL;
- struct i2o_controller *c;
- struct i2ob_device *dev;
-
- for(i=0; i< MAX_I2O_CONTROLLERS; i++)
- {
- c=i2o_find_controller(i);
-
- if(c==NULL)
- continue;
-
- /*
- * The device list connected to the I2O Controller is doubly linked.
- * Here we traverse to the end of the list and start claiming devices
- * from that end. This ensures that within an I2O controller at least
- * the newly created volumes get claimed after the older ones, thus
- * mapping to the same major/minor (and hence device file name) after
- * every reboot.
- * The exceptions are:
- * 1. A TID was reused.
- * 2. There was more than one I2O controller.
- */
-
- if(!bios)
- {
- for (d=c->devices;d!=NULL;d=d->next)
- if(d->next == NULL)
- b = d;
- }
- else
- b = c->devices;
-
- while(b != NULL)
- {
- d=b;
- if(bios)
- b = b->next;
- else
- b = b->prev;
-
- if(d->lct_data.class_id!=I2O_CLASS_RANDOM_BLOCK_STORAGE)
- continue;
-
- if(d->lct_data.user_tid != 0xFFF)
- continue;
-
- if(bios)
- {
- if(d->lct_data.bios_info != 0x80)
- continue;
- printk(KERN_INFO "Claiming as Boot device: Controller %d, TID %d\n", c->unit, d->lct_data.tid);
- }
- else
- {
- if(d->lct_data.bios_info == 0x80)
- continue; /*Already claimed on pass 1 */
- }
-
- if(i2o_claim_device(d, &i2o_block_handler))
- {
- printk(KERN_WARNING "i2o_block: Controller %d, TID %d\n", c->unit,
- d->lct_data.tid);
- printk(KERN_WARNING "\t%sevice refused claim! Skipping installation\n", bios?"Boot d":"D");
- continue;
- }
-
- if(scan_unit<MAX_I2OB<<4)
- {
- /*
- * Get the device and fill in the
- * Tid and controller.
- */
- dev=&i2ob_dev[scan_unit];
- dev->i2odev = d;
- dev->controller = c;
- dev->unit = c->unit;
- dev->tid = d->lct_data.tid;
-
- if(i2ob_install_device(c,d,scan_unit))
- printk(KERN_WARNING "Could not install I2O block device\n");
- else
- {
- scan_unit+=16;
- i2ob_dev_count++;
-
- /* We want to know when device goes away */
- i2o_device_notify_on(d, &i2o_block_handler);
- }
- }
- else
- {
- if(!warned++)
- printk(KERN_WARNING "i2o_block: too many devices; registering only %d.\n", scan_unit>>4);
- }
- i2o_release_device(d, &i2o_block_handler);
- }
- i2o_unlock_controller(c);
- }
-}
-
-static void i2ob_probe(void)
-{
- /*
- * Some overhead/redundancy is involved here in trying to
- * claim the first boot volume encountered as /dev/i2o/hda
- * every time. All the i2o_controllers are searched and the
- * first I2O block device marked as bootable is claimed.
- * If an I2O block device was booted from, the BIOS sets
- * its bios_info field to 0x80; this is what we search for.
- * Assuming that the bootable volume is /dev/i2o/hda
- * every time will prevent any kernel panic while mounting
- * the root partition.
- */
-
- printk(KERN_INFO "i2o_block: Checking for Boot device...\n");
- i2ob_scan(1);
-
- /*
- * Now the remainder.
- */
- printk(KERN_INFO "i2o_block: Checking for I2O Block devices...\n");
- i2ob_scan(0);
-}
-
-
-/*
- * New device notification handler. Called whenever a new
- * I2O block storage device is added to the system.
- *
- * Should we spin lock around this to keep multiple devs from
- * getting updated at the same time?
- *
- */
-void i2ob_new_device(struct i2o_controller *c, struct i2o_device *d)
-{
- struct i2ob_device *dev;
- int unit = 0;
-
- printk(KERN_INFO "i2o_block: New device detected\n");
- printk(KERN_INFO " Controller %d Tid %d\n",c->unit, d->lct_data.tid);
-
- /* Check for available space */
- if(i2ob_dev_count>=MAX_I2OB<<4)
- {
- printk(KERN_ERR "i2o_block: No more devices allowed!\n");
- return;
- }
- for(unit = 0; unit < (MAX_I2OB<<4); unit += 16)
- {
- if(!i2ob_dev[unit].i2odev)
- break;
- }
-
- if(i2o_claim_device(d, &i2o_block_handler))
- {
- printk(KERN_INFO
- "i2o_block: Unable to claim device. Installation aborted\n");
- return;
- }
-
- dev = &i2ob_dev[unit];
- dev->i2odev = d;
- dev->controller = c;
- dev->tid = d->lct_data.tid;
-
- if(i2ob_install_device(c,d,unit))
- printk(KERN_ERR "i2o_block: Could not install new device\n");
- else
- {
- i2ob_dev_count++;
- i2o_device_notify_on(d, &i2o_block_handler);
- }
-
- i2o_release_device(d, &i2o_block_handler);
-
- return;
-}
-
-/*
- * Deleted device notification handler. Called when a device we
- * are talking to has been deleted by the user or some other
- * mysterious force outside the kernel.
- */
-void i2ob_del_device(struct i2o_controller *c, struct i2o_device *d)
-{
- int unit = 0;
- int i = 0;
- unsigned long flags;
-
- spin_lock_irqsave(&io_request_lock, flags);
-
- /*
- * Need to do this...we sometimes get two events from the IRTOS
- * in a row and that causes lots of problems.
- */
- i2o_device_notify_off(d, &i2o_block_handler);
-
- printk(KERN_INFO "I2O Block Device Deleted\n");
-
- for(unit = 0; unit < MAX_I2OB<<4; unit += 16)
- {
- if(i2ob_dev[unit].i2odev == d)
- {
- printk(KERN_INFO " /dev/%s: Controller %d Tid %d\n",
- d->dev_name, c->unit, d->lct_data.tid);
- break;
- }
- }
- if(unit >= MAX_I2OB<<4)
- {
- printk(KERN_ERR "i2ob_del_device called, but not in dev table!\n");
- spin_unlock_irqrestore(&io_request_lock, flags);
- return;
- }
-
- /*
- * This will force errors when i2ob_get_queue() is called
- * by the kernel.
- */
- i2ob_dev[unit].req_queue = NULL;
- for(i = unit; i <= unit+15; i++)
- {
- i2ob_dev[i].i2odev = NULL;
- i2ob_sizes[i] = 0;
- i2ob_hardsizes[i] = 0;
- i2ob_max_sectors[i] = 0;
- i2ob[i].nr_sects = 0;
- i2ob_gendisk.part[i].nr_sects = 0;
- }
- spin_unlock_irqrestore(&io_request_lock, flags);
-
- /*
- * Sync the device...this will force all outstanding I/Os
- * to attempt to complete, thus causing error messages.
- * We have to do this as the user could immediately create
- * a new volume that gets assigned the same minor number.
- * If there are still outstanding writes to the device,
- * that could cause data corruption on the new volume!
- *
- * The truth is that deleting a volume that you are currently
- * accessing will do _bad things_ to your system. This
- * handler will keep it from crashing, but most probably
- * you'll have to reboot to get the system running
- * properly. Deleting disks you are using is dumb.
- * Unmount them first and all will be good!
- *
- * It's not this driver's job to protect the system from
- * dumb user mistakes :)
- */
- if(i2ob_dev[unit].refcnt)
- fsync_dev(MKDEV(MAJOR_NR,unit));
-
- /*
- * Decrease usage count for module
- */
- while(i2ob_dev[unit].refcnt--)
- MOD_DEC_USE_COUNT;
-
- i2ob_dev[unit].refcnt = 0;
-
- i2ob_dev[unit].tid = 0;
-
- /*
- * Do we need this?
- * The media didn't really change...the device is just gone
- */
- i2ob_media_change_flag[unit] = 1;
-
- i2ob_dev_count--;
-}
-
-/*
- * Have we seen a media change ?
- */
-static int i2ob_media_change(kdev_t dev)
-{
- int i=MINOR(dev);
- i>>=4;
- if(i2ob_media_change_flag[i])
- {
- i2ob_media_change_flag[i]=0;
- return 1;
- }
- return 0;
-}
-
-static int i2ob_revalidate(kdev_t dev)
-{
- return do_i2ob_revalidate(dev, 0);
-}
-
-/*
- * Reboot notifier. This is called by i2o_core when the system
- * shuts down.
- */
-static void i2ob_reboot_event(void)
-{
- int i;
-
- for(i=0;i<MAX_I2OB;i++)
- {
- struct i2ob_device *dev=&i2ob_dev[(i<<4)];
-
- if(dev->refcnt!=0)
- {
- /*
- * Flush the onboard cache
- */
- u32 msg[5];
- int *query_done = &dev->done_flag;
- msg[0] = FIVE_WORD_MSG_SIZE|SGL_OFFSET_0;
- msg[1] = I2O_CMD_BLOCK_CFLUSH<<24|HOST_TID<<12|dev->tid;
- msg[2] = i2ob_context|0x40000000;
- msg[3] = (u32)query_done;
- msg[4] = 60<<16;
-
- DEBUG("Flushing...");
- i2o_post_wait(dev->controller, msg, 20, 60);
-
- DEBUG("Unlocking...");
- /*
- * Unlock the media
- */
- msg[0] = FIVE_WORD_MSG_SIZE|SGL_OFFSET_0;
- msg[1] = I2O_CMD_BLOCK_MUNLOCK<<24|HOST_TID<<12|dev->tid;
- msg[2] = i2ob_context|0x40000000;
- msg[3] = (u32)query_done;
- msg[4] = -1;
- i2o_post_wait(dev->controller, msg, 20, 2);
-
- DEBUG("Unlocked.\n");
- }
- }
-}
-
-static struct block_device_operations i2ob_fops =
-{
- open: i2ob_open,
- release: i2ob_release,
- ioctl: i2ob_ioctl,
- check_media_change: i2ob_media_change,
- revalidate: i2ob_revalidate,
-};
-
-static struct gendisk i2ob_gendisk =
-{
- major: MAJOR_NR,
- major_name: "i2o/hd",
- minor_shift: 4,
- max_p: 1<<4,
- part: i2ob,
- sizes: i2ob_sizes,
- nr_real: MAX_I2OB,
- fops: &i2ob_fops,
-};
-
-
-/*
- * And here should be modules and kernel interface
- * (Just smiley confuses emacs :-)
- */
-
-#ifdef MODULE
-#define i2o_block_init init_module
-#endif
-
-int i2o_block_init(void)
-{
- int i;
-
- printk(KERN_INFO "I2O Block Storage OSM v0.9\n");
- printk(KERN_INFO " (c) Copyright 1999-2001 Red Hat Software.\n");
-
- /*
- * Register the block device interfaces
- */
-
- if (register_blkdev(MAJOR_NR, "i2o_block", &i2ob_fops)) {
- printk(KERN_ERR "Unable to get major number %d for i2o_block\n",
- MAJOR_NR);
- return -EIO;
- }
-#ifdef MODULE
- printk(KERN_INFO "i2o_block: registered device at major %d\n", MAJOR_NR);
-#endif
-
- /*
- * Now fill in the boiler plate
- */
-
- blksize_size[MAJOR_NR] = i2ob_blksizes;
- hardsect_size[MAJOR_NR] = i2ob_hardsizes;
- blk_size[MAJOR_NR] = i2ob_sizes;
- max_sectors[MAJOR_NR] = i2ob_max_sectors;
- blk_dev[MAJOR_NR].queue = i2ob_get_queue;
-
- blk_init_queue(BLK_DEFAULT_QUEUE(MAJOR_NR), i2ob_request);
- blk_queue_headactive(BLK_DEFAULT_QUEUE(MAJOR_NR), 0);
-
- for (i = 0; i < MAX_I2OB << 4; i++) {
- i2ob_dev[i].refcnt = 0;
- i2ob_dev[i].flags = 0;
- i2ob_dev[i].controller = NULL;
- i2ob_dev[i].i2odev = NULL;
- i2ob_dev[i].tid = 0;
- i2ob_dev[i].head = NULL;
- i2ob_dev[i].tail = NULL;
- i2ob_dev[i].depth = MAX_I2OB_DEPTH;
- i2ob_blksizes[i] = 1024;
- i2ob_max_sectors[i] = 2;
- }
-
- /*
- * Set up the queue
- */
- for(i = 0; i < MAX_I2O_CONTROLLERS; i++)
- {
- i2ob_queues[i] = NULL;
- }
-
- /*
- * Timers
- */
-
- init_timer(&i2ob_timer);
- i2ob_timer.function = i2ob_timer_handler;
- i2ob_timer.data = 0;
-
- /*
- * Register the OSM handler as we will need this to probe for
- * drives, geometry and other goodies.
- */
-
- if(i2o_install_handler(&i2o_block_handler)<0)
- {
- unregister_blkdev(MAJOR_NR, "i2o_block");
- blk_cleanup_queue(BLK_DEFAULT_QUEUE(MAJOR_NR));
- printk(KERN_ERR "i2o_block: unable to register OSM.\n");
- return -EINVAL;
- }
- i2ob_context = i2o_block_handler.context;
-
- /*
- * Initialize event handling thread
- */
- init_MUTEX_LOCKED(&i2ob_evt_sem);
- evt_pid = kernel_thread(i2ob_evt, NULL, CLONE_SIGHAND);
- if(evt_pid < 0)
- {
- printk(KERN_ERR
- "i2o_block: Could not initialize event thread. Aborting\n");
- i2o_remove_handler(&i2o_block_handler);
- return 0;
- }
-
- /*
- * Finally see what is actually plugged in to our controllers
- */
- for (i = 0; i < MAX_I2OB; i++)
- register_disk(&i2ob_gendisk, MKDEV(MAJOR_NR,i<<4), 1<<4,
- &i2ob_fops, 0);
- i2ob_probe();
-
- /*
- * Adding i2ob_gendisk into the gendisk list.
- */
- add_gendisk(&i2ob_gendisk);
-
- return 0;
-}
-
-#ifdef MODULE
-
-EXPORT_NO_SYMBOLS;
-MODULE_AUTHOR("Red Hat Software");
-MODULE_DESCRIPTION("I2O Block Device OSM");
-
-void cleanup_module(void)
-{
- struct gendisk *gdp;
- int i;
-
- if(evt_running) {
- printk(KERN_INFO "Killing I2O block threads...");
- i = kill_proc(evt_pid, SIGTERM, 1);
- if(!i) {
- printk("waiting...");
- }
- /* Be sure it died */
- wait_for_completion(&i2ob_thread_dead);
- printk("done.\n");
- }
-
- /*
- * Unregister for updates from any devices...otherwise we still
- * get them and the core jumps to random memory :O
- */
- if(i2ob_dev_count) {
- struct i2o_device *d;
- for(i = 0; i < MAX_I2OB; i++)
- if((d=i2ob_dev[i<<4].i2odev)) {
- i2o_device_notify_off(d, &i2o_block_handler);
- i2o_event_register(d->controller, d->lct_data.tid,
- i2ob_context, i<<4, 0);
- }
- }
-
- /*
- * We may get further callbacks for ourselves. The i2o_core
- * code handles this case reasonably sanely. The problem here
- * is that we shouldn't get them...but a couple of cards feel
- * obliged to tell us stuff we don't care about.
- *
- * This isn't ideal at all, but will do for now.
- */
-
- set_current_state(TASK_UNINTERRUPTIBLE);
- schedule_timeout(HZ);
-
- /*
- * Flush the OSM
- */
-
- i2o_remove_handler(&i2o_block_handler);
-
- /*
- * Return the block device
- */
- if (unregister_blkdev(MAJOR_NR, "i2o_block") != 0)
- printk("i2o_block: cleanup_module failed\n");
-
- /*
- * free request queue
- */
- blk_cleanup_queue(BLK_DEFAULT_QUEUE(MAJOR_NR));
-
- del_gendisk(&i2ob_gendisk);
-}
-#endif
+++ /dev/null
-/*
- * I2O Configuration Interface Driver
- *
- * (C) Copyright 1999 Red Hat Software
- *
- * Written by Alan Cox, Building Number Three Ltd
- *
- * Modified 04/20/1999 by Deepak Saxena
- * - Added basic ioctl() support
- * Modified 06/07/1999 by Deepak Saxena
- * - Added software download ioctl (still testing)
- * Modified 09/10/1999 by Auvo Häkkinen
- * - Changes to i2o_cfg_reply(), ioctl_parms()
- * - Added ioct_validate()
- * Modified 09/30/1999 by Taneli Vähäkangas
- * - Fixed ioctl_swdl()
- * Modified 10/04/1999 by Taneli Vähäkangas
- * - Changed ioctl_swdl(), implemented ioctl_swul() and ioctl_swdel()
- * Modified 11/18/1999 by Deepak Saxena
- * - Added event management support
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version
- * 2 of the License, or (at your option) any later version.
- */
-
-#include <linux/module.h>
-#include <linux/kernel.h>
-#include <linux/pci.h>
-#include <linux/i2o.h>
-#include <linux/errno.h>
-#include <linux/init.h>
-#include <linux/slab.h>
-#include <linux/miscdevice.h>
-#include <linux/mm.h>
-#include <linux/spinlock.h>
-#include <linux/smp_lock.h>
-
-#include <asm/uaccess.h>
-#include <asm/io.h>
-
-static int i2o_cfg_context = -1;
-static void *page_buf;
-static spinlock_t i2o_config_lock = SPIN_LOCK_UNLOCKED;
-struct wait_queue *i2o_wait_queue;
-
-#define MODINC(x,y) ((x) = ((x) + 1) % (y))
-
-struct i2o_cfg_info
-{
- struct file* fp;
- struct fasync_struct *fasync;
- struct i2o_evt_info event_q[I2O_EVT_Q_LEN];
- u16 q_in; // Queue head index
- u16 q_out; // Queue tail index
- u16 q_len; // Queue length
- u16 q_lost; // Number of lost events
- u32 q_id; // Event queue ID...used as tx_context
- struct i2o_cfg_info *next;
-};
-static struct i2o_cfg_info *open_files = NULL;
-static int i2o_cfg_info_id = 0;
-
-static int ioctl_getiops(unsigned long);
-static int ioctl_gethrt(unsigned long);
-static int ioctl_getlct(unsigned long);
-static int ioctl_parms(unsigned long, unsigned int);
-static int ioctl_html(unsigned long);
-static int ioctl_swdl(unsigned long);
-static int ioctl_swul(unsigned long);
-static int ioctl_swdel(unsigned long);
-static int ioctl_validate(unsigned long);
-static int ioctl_evt_reg(unsigned long, struct file *);
-static int ioctl_evt_get(unsigned long, struct file *);
-static int cfg_fasync(int, struct file*, int);
-
-/*
- * This is the callback for any message we have posted. The message itself
- * will be returned to the message pool when we return from the IRQ
- *
- * This runs in irq context so be short and sweet.
- */
-static void i2o_cfg_reply(struct i2o_handler *h, struct i2o_controller *c, struct i2o_message *m)
-{
- u32 *msg = (u32 *)m;
-
- if (msg[0] & MSG_FAIL) {
- u32 *preserved_msg = (u32*)(c->mem_offset + msg[7]);
-
- printk(KERN_ERR "i2o_config: IOP failed to process the msg.\n");
-
- /* Release the preserved msg frame by resubmitting it as a NOP */
-
- preserved_msg[0] = THREE_WORD_MSG_SIZE | SGL_OFFSET_0;
- preserved_msg[1] = I2O_CMD_UTIL_NOP << 24 | HOST_TID << 12 | 0;
- preserved_msg[2] = 0;
- i2o_post_message(c, msg[7]);
- }
-
- if (msg[4] >> 24) // ReqStatus != SUCCESS
- i2o_report_status(KERN_INFO,"i2o_config", msg);
-
- if(m->function == I2O_CMD_UTIL_EVT_REGISTER)
- {
- struct i2o_cfg_info *inf;
-
- for(inf = open_files; inf; inf = inf->next)
- if(inf->q_id == msg[3])
- break;
-
- //
- // If this is the case, it means that we're getting
- // events for a file descriptor that's been close()'d
- // w/o the user unregistering for events first.
- // The code currently assumes that the user will
- // take care of unregistering for events before closing
- // a file.
- //
- // TODO:
- // Should we track event registration and deregister
- // for events when a file is close()'d so this doesn't
- // happen? That would get rid of the search through
- // the linked list since file->private_data could point
- // directly to the i2o_config_info data structure...but
- // it would mean having all sorts of tables to track
- // what each file is registered for...I think the
- // current method is simpler. - DS
- //
- if(!inf)
- return;
-
- inf->event_q[inf->q_in].id.iop = c->unit;
- inf->event_q[inf->q_in].id.tid = m->target_tid;
- inf->event_q[inf->q_in].id.evt_mask = msg[4];
-
- //
- // Data size = msg size - reply header
- //
- inf->event_q[inf->q_in].data_size = (m->size - 5) * 4;
- if(inf->event_q[inf->q_in].data_size)
- memcpy(inf->event_q[inf->q_in].evt_data,
- (unsigned char *)(msg + 5),
- inf->event_q[inf->q_in].data_size);
-
- spin_lock(&i2o_config_lock);
- MODINC(inf->q_in, I2O_EVT_Q_LEN);
- if(inf->q_len == I2O_EVT_Q_LEN)
- {
- MODINC(inf->q_out, I2O_EVT_Q_LEN);
- inf->q_lost++;
- }
- else
- {
- // Keep I2OEVTGET on another CPU from touching this
- inf->q_len++;
- }
- spin_unlock(&i2o_config_lock);
-
-
-// printk(KERN_INFO "File %p w/id %d has %d events\n",
-// inf->fp, inf->q_id, inf->q_len);
-
- kill_fasync(&inf->fasync, SIGIO, POLL_IN);
- }
-
- return;
-}
-
-/*
- * Each of these describes an i2o message handler. They are
- * multiplexed by the i2o_core code
- */
-
-struct i2o_handler cfg_handler=
-{
- i2o_cfg_reply,
- NULL,
- NULL,
- NULL,
- "Configuration",
- 0,
- 0xffffffff // All classes
-};
-
-static ssize_t cfg_write(struct file *file, const char *buf, size_t count, loff_t *ppos)
-{
- printk(KERN_INFO "i2o_config write not yet supported\n");
-
- return 0;
-}
-
-
-static ssize_t cfg_read(struct file *file, char *buf, size_t count, loff_t *ptr)
-{
- return 0;
-}
-
-/*
- * IOCTL Handler
- */
-static int cfg_ioctl(struct inode *inode, struct file *fp, unsigned int cmd,
- unsigned long arg)
-{
- int ret;
-
- switch(cmd)
- {
- case I2OGETIOPS:
- ret = ioctl_getiops(arg);
- break;
-
- case I2OHRTGET:
- ret = ioctl_gethrt(arg);
- break;
-
- case I2OLCTGET:
- ret = ioctl_getlct(arg);
- break;
-
- case I2OPARMSET:
- ret = ioctl_parms(arg, I2OPARMSET);
- break;
-
- case I2OPARMGET:
- ret = ioctl_parms(arg, I2OPARMGET);
- break;
-
- case I2OSWDL:
- ret = ioctl_swdl(arg);
- break;
-
- case I2OSWUL:
- ret = ioctl_swul(arg);
- break;
-
- case I2OSWDEL:
- ret = ioctl_swdel(arg);
- break;
-
- case I2OVALIDATE:
- ret = ioctl_validate(arg);
- break;
-
- case I2OHTML:
- ret = ioctl_html(arg);
- break;
-
- case I2OEVTREG:
- ret = ioctl_evt_reg(arg, fp);
- break;
-
- case I2OEVTGET:
- ret = ioctl_evt_get(arg, fp);
- break;
-
- default:
- ret = -EINVAL;
- }
-
- return ret;
-}
-
-int ioctl_getiops(unsigned long arg)
-{
- u8 *user_iop_table = (u8*)arg;
- struct i2o_controller *c = NULL;
- int i;
- u8 foo[MAX_I2O_CONTROLLERS];
-
- if(!access_ok(VERIFY_WRITE, user_iop_table, MAX_I2O_CONTROLLERS))
- return -EFAULT;
-
- for(i = 0; i < MAX_I2O_CONTROLLERS; i++)
- {
- c = i2o_find_controller(i);
- if(c)
- {
- foo[i] = 1;
- i2o_unlock_controller(c);
- }
- else
- {
- foo[i] = 0;
- }
- }
-
- __copy_to_user(user_iop_table, foo, MAX_I2O_CONTROLLERS);
- return 0;
-}
-
-int ioctl_gethrt(unsigned long arg)
-{
- struct i2o_controller *c;
- struct i2o_cmd_hrtlct *cmd = (struct i2o_cmd_hrtlct*)arg;
- struct i2o_cmd_hrtlct kcmd;
- i2o_hrt *hrt;
- int len;
- u32 reslen;
- int ret = 0;
-
- if(copy_from_user(&kcmd, cmd, sizeof(struct i2o_cmd_hrtlct)))
- return -EFAULT;
-
- if(get_user(reslen, kcmd.reslen) < 0)
- return -EFAULT;
-
- if(kcmd.resbuf == NULL)
- return -EFAULT;
-
- c = i2o_find_controller(kcmd.iop);
- if(!c)
- return -ENXIO;
-
- hrt = (i2o_hrt *)c->hrt;
-
- i2o_unlock_controller(c);
-
- len = 8 + ((hrt->entry_len * hrt->num_entries) << 2);
-
- /* We did a get user...so assuming mem is ok...is this bad? */
- put_user(len, kcmd.reslen);
- if(len > reslen)
- ret = -ENOBUFS;
- if(copy_to_user(kcmd.resbuf, (void*)hrt, len))
- ret = -EFAULT;
-
- return ret;
-}
-
-int ioctl_getlct(unsigned long arg)
-{
- struct i2o_controller *c;
- struct i2o_cmd_hrtlct *cmd = (struct i2o_cmd_hrtlct*)arg;
- struct i2o_cmd_hrtlct kcmd;
- i2o_lct *lct;
- int len;
- int ret = 0;
- u32 reslen;
-
- if(copy_from_user(&kcmd, cmd, sizeof(struct i2o_cmd_hrtlct)))
- return -EFAULT;
-
- if(get_user(reslen, kcmd.reslen) < 0)
- return -EFAULT;
-
- if(kcmd.resbuf == NULL)
- return -EFAULT;
-
- c = i2o_find_controller(kcmd.iop);
- if(!c)
- return -ENXIO;
-
- lct = (i2o_lct *)c->lct;
- i2o_unlock_controller(c);
-
- len = (unsigned int)lct->table_size << 2;
- put_user(len, kcmd.reslen);
- if(len > reslen)
- ret = -ENOBUFS;
- else if(copy_to_user(kcmd.resbuf, (void*)lct, len))
- ret = -EFAULT;
-
- return ret;
-}
-
-static int ioctl_parms(unsigned long arg, unsigned int type)
-{
- int ret = 0;
- struct i2o_controller *c;
- struct i2o_cmd_psetget *cmd = (struct i2o_cmd_psetget*)arg;
- struct i2o_cmd_psetget kcmd;
- u32 reslen;
- u8 *ops;
- u8 *res;
- int len;
-
- u32 i2o_cmd = (type == I2OPARMGET ?
- I2O_CMD_UTIL_PARAMS_GET :
- I2O_CMD_UTIL_PARAMS_SET);
-
- if(copy_from_user(&kcmd, cmd, sizeof(struct i2o_cmd_psetget)))
- return -EFAULT;
-
- if(get_user(reslen, kcmd.reslen))
- return -EFAULT;
-
- c = i2o_find_controller(kcmd.iop);
- if(!c)
- return -ENXIO;
-
- ops = (u8*)kmalloc(kcmd.oplen, GFP_KERNEL);
- if(!ops)
- {
- i2o_unlock_controller(c);
- return -ENOMEM;
- }
-
- if(copy_from_user(ops, kcmd.opbuf, kcmd.oplen))
- {
- i2o_unlock_controller(c);
- kfree(ops);
- return -EFAULT;
- }
-
- /*
- * It's possible to have a _very_ large table
- * and that the user asks for all of it at once...
- */
- res = (u8*)kmalloc(65536, GFP_KERNEL);
- if(!res)
- {
- i2o_unlock_controller(c);
- kfree(ops);
- return -ENOMEM;
- }
-
- len = i2o_issue_params(i2o_cmd, c, kcmd.tid,
- ops, kcmd.oplen, res, 65536);
- i2o_unlock_controller(c);
- kfree(ops);
-
- if (len < 0) {
- kfree(res);
- return -EAGAIN;
- }
-
- put_user(len, kcmd.reslen);
- if(len > reslen)
- ret = -ENOBUFS;
- else if(copy_to_user(cmd->resbuf, res, len))
- ret = -EFAULT;
-
- kfree(res);
-
- return ret;
-}
-
-int ioctl_html(unsigned long arg)
-{
- struct i2o_html *cmd = (struct i2o_html*)arg;
- struct i2o_html kcmd;
- struct i2o_controller *c;
- u8 *res = NULL;
- void *query = NULL;
- int ret = 0;
- int token;
- u32 len;
- u32 reslen;
- u32 msg[MSG_FRAME_SIZE/4];
-
- if(copy_from_user(&kcmd, cmd, sizeof(struct i2o_html)))
- {
- printk(KERN_INFO "i2o_config: can't copy html cmd\n");
- return -EFAULT;
- }
-
- if(get_user(reslen, kcmd.reslen) < 0)
- {
- printk(KERN_INFO "i2o_config: can't copy html reslen\n");
- return -EFAULT;
- }
-
- if(!kcmd.resbuf)
- {
- printk(KERN_INFO "i2o_config: NULL html buffer\n");
- return -EFAULT;
- }
-
- c = i2o_find_controller(kcmd.iop);
- if(!c)
- return -ENXIO;
-
- if(kcmd.qlen) /* Check for post data */
- {
- query = kmalloc(kcmd.qlen, GFP_KERNEL);
- if(!query)
- {
- i2o_unlock_controller(c);
- return -ENOMEM;
- }
- if(copy_from_user(query, kcmd.qbuf, kcmd.qlen))
- {
- i2o_unlock_controller(c);
- printk(KERN_INFO "i2o_config: could not get query\n");
- kfree(query);
- return -EFAULT;
- }
- }
-
- res = kmalloc(65536, GFP_KERNEL);
- if(!res)
- {
- i2o_unlock_controller(c);
- kfree(query);
- return -ENOMEM;
- }
-
- msg[1] = (I2O_CMD_UTIL_CONFIG_DIALOG << 24)|HOST_TID<<12|kcmd.tid;
- msg[2] = i2o_cfg_context;
- msg[3] = 0;
- msg[4] = kcmd.page;
- msg[5] = 0xD0000000|65536;
- msg[6] = virt_to_bus(res);
- if(!kcmd.qlen) /* Check for post data */
- msg[0] = SEVEN_WORD_MSG_SIZE|SGL_OFFSET_5;
- else
- {
- msg[0] = NINE_WORD_MSG_SIZE|SGL_OFFSET_5;
- msg[5] = 0x50000000|65536;
- msg[7] = 0xD4000000|(kcmd.qlen);
- msg[8] = virt_to_bus(query);
- }
- /*
- Wait a considerable time for the controller to
- finish its job before timing out. The controller might
- take more time to process this request if there are
- many devices connected to it.
- */
- token = i2o_post_wait_mem(c, msg, 9*4, 400, query, res);
- if(token < 0)
- {
- printk(KERN_DEBUG "token = %#10x\n", token);
- i2o_unlock_controller(c);
-
- if(token != -ETIMEDOUT)
- {
- kfree(res);
- if(kcmd.qlen) kfree(query);
- }
-
- return token;
- }
- i2o_unlock_controller(c);
-
- len = strnlen(res, 65536);
- put_user(len, kcmd.reslen);
- if(len > reslen)
- ret = -ENOMEM;
- if(copy_to_user(kcmd.resbuf, res, len))
- ret = -EFAULT;
-
- kfree(res);
- if(kcmd.qlen)
- kfree(query);
-
- return ret;
-}
-
-int ioctl_swdl(unsigned long arg)
-{
- struct i2o_sw_xfer kxfer;
- struct i2o_sw_xfer *pxfer = (struct i2o_sw_xfer *)arg;
- unsigned char maxfrag = 0, curfrag = 1;
- unsigned char *buffer;
- u32 msg[9];
- unsigned int status = 0, swlen = 0, fragsize = 8192;
- struct i2o_controller *c;
-
- if(copy_from_user(&kxfer, pxfer, sizeof(struct i2o_sw_xfer)))
- return -EFAULT;
-
- if(get_user(swlen, kxfer.swlen) < 0)
- return -EFAULT;
-
- if(get_user(maxfrag, kxfer.maxfrag) < 0)
- return -EFAULT;
-
- if(get_user(curfrag, kxfer.curfrag) < 0)
- return -EFAULT;
-
- if(curfrag==maxfrag) fragsize = swlen-(maxfrag-1)*8192;
-
- if(!kxfer.buf || !access_ok(VERIFY_READ, kxfer.buf, fragsize))
- return -EFAULT;
-
- c = i2o_find_controller(kxfer.iop);
- if(!c)
- return -ENXIO;
-
- buffer=kmalloc(fragsize, GFP_KERNEL);
- if (buffer==NULL)
- {
- i2o_unlock_controller(c);
- return -ENOMEM;
- }
- __copy_from_user(buffer, kxfer.buf, fragsize);
-
- msg[0]= NINE_WORD_MSG_SIZE | SGL_OFFSET_7;
- msg[1]= I2O_CMD_SW_DOWNLOAD<<24 | HOST_TID<<12 | ADAPTER_TID;
- msg[2]= (u32)cfg_handler.context;
- msg[3]= 0;
- msg[4]= (((u32)kxfer.flags)<<24) | (((u32)kxfer.sw_type)<<16) |
- (((u32)maxfrag)<<8) | (((u32)curfrag));
- msg[5]= swlen;
- msg[6]= kxfer.sw_id;
- msg[7]= (0xD0000000 | fragsize);
- msg[8]= virt_to_bus(buffer);
-
-// printk("i2o_config: swdl frag %d/%d (size %d)\n", curfrag, maxfrag, fragsize);
- status = i2o_post_wait_mem(c, msg, sizeof(msg), 60, buffer, NULL);
-
- i2o_unlock_controller(c);
- if(status != -ETIMEDOUT)
- kfree(buffer);
-
- if (status != I2O_POST_WAIT_OK)
- {
- // it fails if you try to send frags out of order
- // and for some as-yet-unknown reasons too
- printk(KERN_INFO "i2o_config: swdl failed, DetailedStatus = %d\n", status);
- return status;
- }
-
- return 0;
-}
-
-int ioctl_swul(unsigned long arg)
-{
- struct i2o_sw_xfer kxfer;
- struct i2o_sw_xfer *pxfer = (struct i2o_sw_xfer *)arg;
- unsigned char maxfrag = 0, curfrag = 1;
- unsigned char *buffer;
- u32 msg[9];
- unsigned int status = 0, swlen = 0, fragsize = 8192;
- struct i2o_controller *c;
-
- if(copy_from_user(&kxfer, pxfer, sizeof(struct i2o_sw_xfer)))
- return -EFAULT;
-
- if(get_user(swlen, kxfer.swlen) < 0)
- return -EFAULT;
-
- if(get_user(maxfrag, kxfer.maxfrag) < 0)
- return -EFAULT;
-
- if(get_user(curfrag, kxfer.curfrag) < 0)
- return -EFAULT;
-
- if(curfrag==maxfrag) fragsize = swlen-(maxfrag-1)*8192;
-
- if(!kxfer.buf || !access_ok(VERIFY_WRITE, kxfer.buf, fragsize))
- return -EFAULT;
-
- c = i2o_find_controller(kxfer.iop);
- if(!c)
- return -ENXIO;
-
- buffer=kmalloc(fragsize, GFP_KERNEL);
- if (buffer==NULL)
- {
- i2o_unlock_controller(c);
- return -ENOMEM;
- }
-
- msg[0]= NINE_WORD_MSG_SIZE | SGL_OFFSET_7;
- msg[1]= I2O_CMD_SW_UPLOAD<<24 | HOST_TID<<12 | ADAPTER_TID;
- msg[2]= (u32)cfg_handler.context;
- msg[3]= 0;
- msg[4]= (u32)kxfer.flags<<24|(u32)kxfer.sw_type<<16|(u32)maxfrag<<8|(u32)curfrag;
- msg[5]= swlen;
- msg[6]= kxfer.sw_id;
- msg[7]= (0xD0000000 | fragsize);
- msg[8]= virt_to_bus(buffer);
-
-// printk("i2o_config: swul frag %d/%d (size %d)\n", curfrag, maxfrag, fragsize);
- status = i2o_post_wait_mem(c, msg, sizeof(msg), 60, buffer, NULL);
- i2o_unlock_controller(c);
-
- if (status != I2O_POST_WAIT_OK)
- {
- if(status != -ETIMEDOUT)
- kfree(buffer);
- printk(KERN_INFO "i2o_config: swul failed, DetailedStatus = %d\n", status);
- return status;
- }
-
- __copy_to_user(kxfer.buf, buffer, fragsize);
- kfree(buffer);
-
- return 0;
-}
-
-int ioctl_swdel(unsigned long arg)
-{
- struct i2o_controller *c;
- struct i2o_sw_xfer kxfer, *pxfer = (struct i2o_sw_xfer *)arg;
- u32 msg[7];
- unsigned int swlen;
- int token;
-
- if (copy_from_user(&kxfer, pxfer, sizeof(struct i2o_sw_xfer)))
- return -EFAULT;
-
- if (get_user(swlen, kxfer.swlen) < 0)
- return -EFAULT;
-
- c = i2o_find_controller(kxfer.iop);
- if (!c)
- return -ENXIO;
-
- msg[0] = SEVEN_WORD_MSG_SIZE | SGL_OFFSET_0;
- msg[1] = I2O_CMD_SW_REMOVE<<24 | HOST_TID<<12 | ADAPTER_TID;
- msg[2] = (u32)i2o_cfg_context;
- msg[3] = 0;
- msg[4] = (u32)kxfer.flags<<24 | (u32)kxfer.sw_type<<16;
- msg[5] = swlen;
- msg[6] = kxfer.sw_id;
-
- token = i2o_post_wait(c, msg, sizeof(msg), 10);
- i2o_unlock_controller(c);
-
- if (token != I2O_POST_WAIT_OK)
- {
- printk(KERN_INFO "i2o_config: swdel failed, DetailedStatus = %d\n", token);
- return -ETIMEDOUT;
- }
-
- return 0;
-}
-
-int ioctl_validate(unsigned long arg)
-{
- int token;
- int iop = (int)arg;
- u32 msg[4];
- struct i2o_controller *c;
-
- c=i2o_find_controller(iop);
- if (!c)
- return -ENXIO;
-
- msg[0] = FOUR_WORD_MSG_SIZE|SGL_OFFSET_0;
- msg[1] = I2O_CMD_CONFIG_VALIDATE<<24 | HOST_TID<<12 | iop;
- msg[2] = (u32)i2o_cfg_context;
- msg[3] = 0;
-
- token = i2o_post_wait(c, msg, sizeof(msg), 10);
- i2o_unlock_controller(c);
-
- if (token != I2O_POST_WAIT_OK)
- {
- printk(KERN_INFO "Can't validate configuration, ErrorStatus = %d\n",
- token);
- return -ETIMEDOUT;
- }
-
- return 0;
-}
-
-static int ioctl_evt_reg(unsigned long arg, struct file *fp)
-{
- u32 msg[5];
- struct i2o_evt_id *pdesc = (struct i2o_evt_id *)arg;
- struct i2o_evt_id kdesc;
- struct i2o_controller *iop;
- struct i2o_device *d;
-
- if (copy_from_user(&kdesc, pdesc, sizeof(struct i2o_evt_id)))
- return -EFAULT;
-
- /* IOP exists? */
- iop = i2o_find_controller(kdesc.iop);
- if(!iop)
- return -ENXIO;
- i2o_unlock_controller(iop);
-
- /* Device exists? */
- for(d = iop->devices; d; d = d->next)
- if(d->lct_data.tid == kdesc.tid)
- break;
-
- if(!d)
- return -ENODEV;
-
- msg[0] = FOUR_WORD_MSG_SIZE|SGL_OFFSET_0;
- msg[1] = I2O_CMD_UTIL_EVT_REGISTER<<24 | HOST_TID<<12 | kdesc.tid;
- msg[2] = (u32)i2o_cfg_context;
- msg[3] = (u32)fp->private_data;
- msg[4] = kdesc.evt_mask;
-
- i2o_post_this(iop, msg, 20);
-
- return 0;
-}
-
-static int ioctl_evt_get(unsigned long arg, struct file *fp)
-{
- u32 id = (u32)fp->private_data;
- struct i2o_cfg_info *p = NULL;
- struct i2o_evt_get *uget = (struct i2o_evt_get*)arg;
- struct i2o_evt_get kget;
- unsigned long flags;
-
- for(p = open_files; p; p = p->next)
- if(p->q_id == id)
- break;
-
-	if(!p)
-		return -EBADF;
-
-	if(!p->q_len)
-		return -ENOENT;
-
- memcpy(&kget.info, &p->event_q[p->q_out], sizeof(struct i2o_evt_info));
- MODINC(p->q_out, I2O_EVT_Q_LEN);
- spin_lock_irqsave(&i2o_config_lock, flags);
- p->q_len--;
- kget.pending = p->q_len;
- kget.lost = p->q_lost;
- spin_unlock_irqrestore(&i2o_config_lock, flags);
-
- if(copy_to_user(uget, &kget, sizeof(struct i2o_evt_get)))
- return -EFAULT;
- return 0;
-}
-
-static int cfg_open(struct inode *inode, struct file *file)
-{
- struct i2o_cfg_info *tmp =
- (struct i2o_cfg_info *)kmalloc(sizeof(struct i2o_cfg_info), GFP_KERNEL);
- unsigned long flags;
-
- if(!tmp)
- return -ENOMEM;
-
- file->private_data = (void*)(i2o_cfg_info_id++);
- tmp->fp = file;
- tmp->fasync = NULL;
- tmp->q_id = (u32)file->private_data;
- tmp->q_len = 0;
- tmp->q_in = 0;
- tmp->q_out = 0;
- tmp->q_lost = 0;
- tmp->next = open_files;
-
- spin_lock_irqsave(&i2o_config_lock, flags);
- open_files = tmp;
- spin_unlock_irqrestore(&i2o_config_lock, flags);
-
- return 0;
-}
-
-static int cfg_release(struct inode *inode, struct file *file)
-{
- u32 id = (u32)file->private_data;
- struct i2o_cfg_info *p1, *p2;
- unsigned long flags;
-
- lock_kernel();
- p1 = p2 = NULL;
-
- spin_lock_irqsave(&i2o_config_lock, flags);
- for(p1 = open_files; p1; )
- {
- if(p1->q_id == id)
- {
-
- if(p1->fasync)
- cfg_fasync(-1, file, 0);
- if(p2)
- p2->next = p1->next;
- else
- open_files = p1->next;
-
- kfree(p1);
- break;
- }
- p2 = p1;
- p1 = p1->next;
- }
- spin_unlock_irqrestore(&i2o_config_lock, flags);
- unlock_kernel();
-
- return 0;
-}
-
-static int cfg_fasync(int fd, struct file *fp, int on)
-{
- u32 id = (u32)fp->private_data;
- struct i2o_cfg_info *p;
-
- for(p = open_files; p; p = p->next)
- if(p->q_id == id)
- break;
-
- if(!p)
- return -EBADF;
-
- return fasync_helper(fd, fp, on, &p->fasync);
-}
-
-static struct file_operations config_fops =
-{
- owner: THIS_MODULE,
- llseek: no_llseek,
- read: cfg_read,
- write: cfg_write,
- ioctl: cfg_ioctl,
- open: cfg_open,
- release: cfg_release,
- fasync: cfg_fasync,
-};
-
-static struct miscdevice i2o_miscdev = {
- I2O_MINOR,
- "i2octl",
- &config_fops
-};
-
-#ifdef MODULE
-int init_module(void)
-#else
-int __init i2o_config_init(void)
-#endif
-{
- printk(KERN_INFO "I2O configuration manager v 0.04.\n");
- printk(KERN_INFO " (C) Copyright 1999 Red Hat Software\n");
-
- if((page_buf = kmalloc(4096, GFP_KERNEL))==NULL)
- {
- printk(KERN_ERR "i2o_config: no memory for page buffer.\n");
- return -ENOBUFS;
- }
- if(misc_register(&i2o_miscdev)==-1)
- {
- printk(KERN_ERR "i2o_config: can't register device.\n");
- kfree(page_buf);
- return -EBUSY;
- }
- /*
- * Install our handler
- */
- if(i2o_install_handler(&cfg_handler)<0)
- {
- kfree(page_buf);
- printk(KERN_ERR "i2o_config: handler register failed.\n");
- misc_deregister(&i2o_miscdev);
- return -EBUSY;
- }
- /*
- * The low 16bits of the transaction context must match this
- * for everything we post. Otherwise someone else gets our mail
- */
- i2o_cfg_context = cfg_handler.context;
- return 0;
-}
-
-#ifdef MODULE
-
-void cleanup_module(void)
-{
- misc_deregister(&i2o_miscdev);
-
- if(page_buf)
- kfree(page_buf);
- if(i2o_cfg_context != -1)
- i2o_remove_handler(&cfg_handler);
-}
-
-EXPORT_NO_SYMBOLS;
-MODULE_AUTHOR("Red Hat Software");
-MODULE_DESCRIPTION("I2O Configuration");
-
-#endif
+++ /dev/null
-/*
- * Core I2O structure management
- *
- * (C) Copyright 1999 Red Hat Software
- *
- * Written by Alan Cox, Building Number Three Ltd
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version
- * 2 of the License, or (at your option) any later version.
- *
- * A lot of the I2O message side code in this file is taken from the
- * Red Creek RCPCI45 adapter driver by Red Creek Communications
- *
- * Fixes by:
- * Philipp Rumpf
- * Juha Sievänen <Juha.Sievanen@cs.Helsinki.FI>
- * Auvo Häkkinen <Auvo.Hakkinen@cs.Helsinki.FI>
- * Deepak Saxena <deepak@plexity.net>
- * Boji T Kannanthanam <boji.t.kannanthanam@intel.com>
- *
- */
-
-#include <linux/config.h>
-#include <linux/module.h>
-#include <linux/kernel.h>
-#include <linux/pci.h>
-
-#include <linux/i2o.h>
-
-#include <linux/errno.h>
-#include <linux/init.h>
-#include <linux/slab.h>
-#include <linux/spinlock.h>
-#include <linux/smp_lock.h>
-
-#include <linux/bitops.h>
-#include <linux/wait.h>
-#include <linux/delay.h>
-#include <linux/timer.h>
-#include <linux/tqueue.h>
-#include <linux/interrupt.h>
-#include <linux/sched.h>
-#include <asm/semaphore.h>
-#include <linux/completion.h>
-
-#include <asm/io.h>
-#include <linux/reboot.h>
-
-#include "i2o_lan.h"
-
-//#define DRIVERDEBUG
-
-#ifdef DRIVERDEBUG
-#define dprintk(s, args...) printk(s, ## args)
-#else
-#define dprintk(s, args...)
-#endif
-
-/* OSM table */
-static struct i2o_handler *i2o_handlers[MAX_I2O_MODULES];
-
-/* Controller list */
-static struct i2o_controller *i2o_controllers[MAX_I2O_CONTROLLERS];
-struct i2o_controller *i2o_controller_chain;
-int i2o_num_controllers;
-
-/* Initiator Context for Core message */
-static int core_context;
-
-/* Initialization && shutdown functions */
-static void i2o_sys_init(void);
-static void i2o_sys_shutdown(void);
-static int i2o_reset_controller(struct i2o_controller *);
-static int i2o_reboot_event(struct notifier_block *, unsigned long , void *);
-static int i2o_online_controller(struct i2o_controller *);
-static int i2o_init_outbound_q(struct i2o_controller *);
-static int i2o_post_outbound_messages(struct i2o_controller *);
-
-/* Reply handler */
-static void i2o_core_reply(struct i2o_handler *, struct i2o_controller *,
- struct i2o_message *);
-
-/* Various helper functions */
-static int i2o_lct_get(struct i2o_controller *);
-static int i2o_lct_notify(struct i2o_controller *);
-static int i2o_hrt_get(struct i2o_controller *);
-
-static int i2o_build_sys_table(void);
-static int i2o_systab_send(struct i2o_controller *c);
-
-/* I2O core event handler */
-static int i2o_core_evt(void *);
-static int evt_pid;
-static int evt_running;
-
-/* Dynamic LCT update handler */
-static int i2o_dyn_lct(void *);
-
-void i2o_report_controller_unit(struct i2o_controller *, struct i2o_device *);
-
-/*
- * I2O System Table. Contains information about
- * all the IOPs in the system. Used to inform IOPs
- * about each other's existence.
- *
- * sys_tbl_ver is the CurrentChangeIndicator that is
- * used by IOPs to track changes.
- */
-static struct i2o_sys_tbl *sys_tbl;
-static int sys_tbl_ind;
-static int sys_tbl_len;
-
-/*
- * This spin lock is used to keep a device from being
- * added and deleted concurrently across CPUs or interrupts.
- * This can occur when a user creates a device and immediately
- * deletes it before the new_dev_notify() handler is called.
- */
-static spinlock_t i2o_dev_lock = SPIN_LOCK_UNLOCKED;
-
-#ifdef MODULE
-/*
- * Function table to send to bus specific layers
- * See <include/linux/i2o.h> for explanation of this
- */
-static struct i2o_core_func_table i2o_core_functions =
-{
- i2o_install_controller,
- i2o_activate_controller,
- i2o_find_controller,
- i2o_unlock_controller,
- i2o_run_queue,
- i2o_delete_controller
-};
-
-#ifdef CONFIG_I2O_PCI_MODULE
-extern int i2o_pci_core_attach(struct i2o_core_func_table *);
-extern void i2o_pci_core_detach(void);
-#endif /* CONFIG_I2O_PCI_MODULE */
-
-#endif /* MODULE */
-
-/*
- * Structures and definitions for synchronous message posting.
- * See i2o_post_wait() for description.
- */
-struct i2o_post_wait_data
-{
- int *status; /* Pointer to status block on caller stack */
- int *complete; /* Pointer to completion flag on caller stack */
- u32 id; /* Unique identifier */
- wait_queue_head_t *wq; /* Wake up for caller (NULL for dead) */
- struct i2o_post_wait_data *next; /* Chain */
- void *mem[2]; /* Memory blocks to recover on failure path */
-};
-static struct i2o_post_wait_data *post_wait_queue;
-static u32 post_wait_id; // Unique ID for each post_wait
-static spinlock_t post_wait_lock = SPIN_LOCK_UNLOCKED;
-static void i2o_post_wait_complete(u32, int);
-
-/* OSM descriptor handler */
-static struct i2o_handler i2o_core_handler =
-{
- (void *)i2o_core_reply,
- NULL,
- NULL,
- NULL,
- "I2O core layer",
- 0,
- I2O_CLASS_EXECUTIVE
-};
-
-/*
- * Used when queueing a reply to be handled later
- */
-
-struct reply_info
-{
- struct i2o_controller *iop;
- u32 msg[MSG_FRAME_SIZE];
-};
-static struct reply_info evt_reply;
-static struct reply_info events[I2O_EVT_Q_LEN];
-static int evt_in;
-static int evt_out;
-static int evt_q_len;
-#define MODINC(x,y) ((x) = ((x) + 1) % (y))
-
-/*
- * I2O configuration spinlock. This isn't a big deal for contention
- * so we have only one.
- */
-
-static DECLARE_MUTEX(i2o_configuration_lock);
-
-/*
- * Event spinlock. Used to keep event queue sane and from
- * handling multiple events simultaneously.
- */
-static spinlock_t i2o_evt_lock = SPIN_LOCK_UNLOCKED;
-
-/*
- * Semaphore used to synchronize event handling thread with
- * interrupt handler.
- */
-
-static DECLARE_MUTEX(evt_sem);
-static DECLARE_COMPLETION(evt_dead);
-DECLARE_WAIT_QUEUE_HEAD(evt_wait);
-
-static struct notifier_block i2o_reboot_notifier =
-{
- i2o_reboot_event,
- NULL,
- 0
-};
-
-/*
- * Config options
- */
-
-static int verbose;
-MODULE_PARM(verbose, "i");
-
-/*
- * I2O Core reply handler
- */
-static void i2o_core_reply(struct i2o_handler *h, struct i2o_controller *c,
- struct i2o_message *m)
-{
- u32 *msg=(u32 *)m;
- u32 status;
- u32 context = msg[2];
-
- if (msg[0] & MSG_FAIL) // Fail bit is set
- {
- u32 *preserved_msg = (u32*)(c->mem_offset + msg[7]);
-
- i2o_report_status(KERN_INFO, "i2o_core", msg);
- i2o_dump_message(preserved_msg);
-
- /* If the failed request needs special treatment,
- * it should be done here. */
-
- /* Release the preserved msg by resubmitting it as a NOP */
-
- preserved_msg[0] = THREE_WORD_MSG_SIZE | SGL_OFFSET_0;
- preserved_msg[1] = I2O_CMD_UTIL_NOP << 24 | HOST_TID << 12 | 0;
- preserved_msg[2] = 0;
- i2o_post_message(c, msg[7]);
-
-		/* If this was a reply to a failed i2o_post_wait, returning here lets it time out */
-
- return;
- }
-
-#ifdef DRIVERDEBUG
- i2o_report_status(KERN_INFO, "i2o_core", msg);
-#endif
-
- if(msg[2]&0x80000000) // Post wait message
- {
- if (msg[4] >> 24)
- status = (msg[4] & 0xFFFF);
- else
- status = I2O_POST_WAIT_OK;
-
- i2o_post_wait_complete(context, status);
- return;
- }
-
- if(m->function == I2O_CMD_UTIL_EVT_REGISTER)
- {
- memcpy(events[evt_in].msg, msg, (msg[0]>>16)<<2);
- events[evt_in].iop = c;
-
- spin_lock(&i2o_evt_lock);
- MODINC(evt_in, I2O_EVT_Q_LEN);
- if(evt_q_len == I2O_EVT_Q_LEN)
- MODINC(evt_out, I2O_EVT_Q_LEN);
- else
- evt_q_len++;
- spin_unlock(&i2o_evt_lock);
-
- up(&evt_sem);
- wake_up_interruptible(&evt_wait);
- return;
- }
-
- if(m->function == I2O_CMD_LCT_NOTIFY)
- {
- up(&c->lct_sem);
- return;
- }
-
- /*
- * If this happens, we want to dump the message to the syslog so
- * it can be sent back to the card manufacturer by the end user
- * to aid in debugging.
- *
- */
-	printk(KERN_WARNING "%s: Unsolicited message reply sent to core! "
-		"Message dumped to syslog\n",
-		c->name);
- i2o_dump_message(msg);
-
- return;
-}
-
-/**
- * i2o_install_handler - install a message handler
- * @h: Handler structure
- *
- * Install an I2O handler - these handle the asynchronous messaging
- * from the card once it has initialised. If the table of handlers is
- * full then -ENOSPC is returned. On a success 0 is returned and the
- * context field is set by the function. The structure is part of the
- * system from this time onwards. It must not be freed until it has
- * been uninstalled
- */
-
-int i2o_install_handler(struct i2o_handler *h)
-{
- int i;
- down(&i2o_configuration_lock);
- for(i=0;i<MAX_I2O_MODULES;i++)
- {
- if(i2o_handlers[i]==NULL)
- {
- h->context = i;
- i2o_handlers[i]=h;
- up(&i2o_configuration_lock);
- return 0;
- }
- }
- up(&i2o_configuration_lock);
- return -ENOSPC;
-}
-
-/**
- * i2o_remove_handler - remove an i2o message handler
- * @h: handler
- *
- * Remove a message handler previously installed with i2o_install_handler.
- * After this function returns the handler object can be freed or re-used
- */
-
-int i2o_remove_handler(struct i2o_handler *h)
-{
- i2o_handlers[h->context]=NULL;
- return 0;
-}
-
-
-/*
- * Each I2O controller has a chain of devices on it.
- * Each device has a pointer to its LCT entry to be used
- * for fun purposes.
- */
-
-/**
- * i2o_install_device - attach a device to a controller
- * @c: controller
- * @d: device
- *
- * Add a new device to an i2o controller. This can be called from
- * non interrupt contexts only. It adds the device and marks it as
- * unclaimed. The device memory becomes part of the kernel and must
- * be uninstalled before being freed or reused. Zero is returned
- * on success.
- */
-
-int i2o_install_device(struct i2o_controller *c, struct i2o_device *d)
-{
- int i;
-
- down(&i2o_configuration_lock);
- d->controller=c;
- d->owner=NULL;
- d->next=c->devices;
- d->prev=NULL;
- if (c->devices != NULL)
- c->devices->prev=d;
- c->devices=d;
- *d->dev_name = 0;
-
- for(i = 0; i < I2O_MAX_MANAGERS; i++)
- d->managers[i] = NULL;
-
- up(&i2o_configuration_lock);
- return 0;
-}
-
-/* we need this version to call out of i2o_delete_controller */
-
-int __i2o_delete_device(struct i2o_device *d)
-{
- struct i2o_device **p;
- int i;
-
- p=&(d->controller->devices);
-
- /*
- * Hey we have a driver!
- * Check to see if the driver wants us to notify it of
- * device deletion. If it doesn't we assume that it
- * is unsafe to delete a device with an owner and
- * fail.
- */
- if(d->owner)
- {
- if(d->owner->dev_del_notify)
- {
- dprintk(KERN_INFO "Device has owner, notifying\n");
- d->owner->dev_del_notify(d->controller, d);
- if(d->owner)
- {
- printk(KERN_WARNING
- "Driver \"%s\" did not release device!\n", d->owner->name);
- return -EBUSY;
- }
- }
- else
- return -EBUSY;
- }
-
- /*
- * Tell any other users who are talking to this device
- * that it's going away. We assume that everything works.
- */
- for(i=0; i < I2O_MAX_MANAGERS; i++)
- {
- if(d->managers[i] && d->managers[i]->dev_del_notify)
- d->managers[i]->dev_del_notify(d->controller, d);
- }
-
- while(*p!=NULL)
- {
- if(*p==d)
- {
- /*
- * Destroy
- */
- *p=d->next;
- kfree(d);
- return 0;
- }
- p=&((*p)->next);
- }
- printk(KERN_ERR "i2o_delete_device: passed invalid device.\n");
- return -EINVAL;
-}
-
-/**
- * i2o_delete_device - remove an i2o device
- * @d: device to remove
- *
- * This function unhooks a device from a controller. The device
- * will not be unhooked if it has an owner who does not wish to free
- * it, or if the owner lacks a dev_del_notify function. In that case
- * -EBUSY is returned. On success 0 is returned. Other errors cause
- * negative errno values to be returned
- */
-
-int i2o_delete_device(struct i2o_device *d)
-{
- int ret;
-
- down(&i2o_configuration_lock);
-
- /*
- * Seek, locate
- */
-
- ret = __i2o_delete_device(d);
-
- up(&i2o_configuration_lock);
-
- return ret;
-}
-
-/**
- * i2o_install_controller - attach a controller
- * @c: controller
- *
- * Add a new controller to the i2o layer. This can be called from
- * non interrupt contexts only. It adds the controller and marks it as
- * unused with no devices. If the tables are full or memory allocations
- * fail then a negative errno code is returned. On success zero is
- * returned and the controller is bound to the system. The structure
- * must not be freed or reused until being uninstalled.
- */
-
-int i2o_install_controller(struct i2o_controller *c)
-{
- int i;
- down(&i2o_configuration_lock);
- for(i=0;i<MAX_I2O_CONTROLLERS;i++)
- {
- if(i2o_controllers[i]==NULL)
- {
- c->dlct = (i2o_lct*)kmalloc(8192, GFP_KERNEL);
- if(c->dlct==NULL)
- {
- up(&i2o_configuration_lock);
- return -ENOMEM;
- }
- i2o_controllers[i]=c;
- c->devices = NULL;
- c->next=i2o_controller_chain;
- i2o_controller_chain=c;
- c->unit = i;
- c->page_frame = NULL;
- c->hrt = NULL;
- c->lct = NULL;
- c->status_block = NULL;
- sprintf(c->name, "i2o/iop%d", i);
- i2o_num_controllers++;
- init_MUTEX_LOCKED(&c->lct_sem);
- up(&i2o_configuration_lock);
- return 0;
- }
- }
- printk(KERN_ERR "No free i2o controller slots.\n");
- up(&i2o_configuration_lock);
- return -EBUSY;
-}
-
-/**
- * i2o_delete_controller - delete a controller
- * @c: controller
- *
- * Remove an i2o controller from the system. If the controller or its
- * devices are busy then -EBUSY is returned. On a failure a negative
- * errno code is returned. On success zero is returned.
- */
-
-int i2o_delete_controller(struct i2o_controller *c)
-{
- struct i2o_controller **p;
- int users;
- char name[16];
- int stat;
-
- dprintk(KERN_INFO "Deleting controller %s\n", c->name);
-
- /*
- * Clear event registration as this can cause weird behavior
- */
- if(c->status_block->iop_state == ADAPTER_STATE_OPERATIONAL)
- i2o_event_register(c, core_context, 0, 0, 0);
-
- down(&i2o_configuration_lock);
- if((users=atomic_read(&c->users)))
- {
- dprintk(KERN_INFO "I2O: %d users for controller %s\n", users,
- c->name);
- up(&i2o_configuration_lock);
- return -EBUSY;
- }
- while(c->devices)
- {
- if(__i2o_delete_device(c->devices)<0)
- {
-			/* Shouldn't happen */
- c->bus_disable(c);
- up(&i2o_configuration_lock);
- return -EBUSY;
- }
- }
-
- /*
- * If this is shutdown time, the thread's already been killed
- */
- if(c->lct_running) {
- stat = kill_proc(c->lct_pid, SIGTERM, 1);
- if(!stat) {
- int count = 10 * 100;
- while(c->lct_running && --count) {
- current->state = TASK_INTERRUPTIBLE;
- schedule_timeout(1);
- }
-
- if(!count)
- printk(KERN_ERR
- "%s: LCT thread still running!\n",
- c->name);
- }
- }
-
- p=&i2o_controller_chain;
-
- while(*p)
- {
- if(*p==c)
- {
- /* Ask the IOP to switch to RESET state */
- i2o_reset_controller(c);
-
- /* Release IRQ */
- c->destructor(c);
-
- *p=c->next;
- up(&i2o_configuration_lock);
-
- if(c->page_frame)
- kfree(c->page_frame);
- if(c->hrt)
- kfree(c->hrt);
- if(c->lct)
- kfree(c->lct);
- if(c->status_block)
- kfree(c->status_block);
- if(c->dlct)
- kfree(c->dlct);
-
- i2o_controllers[c->unit]=NULL;
- memcpy(name, c->name, strlen(c->name)+1);
- kfree(c);
- dprintk(KERN_INFO "%s: Deleted from controller chain.\n", name);
-
- i2o_num_controllers--;
- return 0;
- }
- p=&((*p)->next);
- }
- up(&i2o_configuration_lock);
- printk(KERN_ERR "i2o_delete_controller: bad pointer!\n");
- return -ENOENT;
-}
-
-/**
- * i2o_unlock_controller - unlock a controller
- * @c: controller to unlock
- *
- *	Drop the lock on an i2o controller taken by i2o_find_controller.
- *	i2o controllers are not refcounted, so a deletion of an in-use
- *	controller will fail rather than take effect on the last dereference.
- */
-
-void i2o_unlock_controller(struct i2o_controller *c)
-{
- atomic_dec(&c->users);
-}
-
-/**
- * i2o_find_controller - return a locked controller
- * @n: controller number
- *
- * Returns a pointer to the controller object. The controller is locked
- * on return. NULL is returned if the controller is not found.
- */
-
-struct i2o_controller *i2o_find_controller(int n)
-{
- struct i2o_controller *c;
-
- if(n<0 || n>=MAX_I2O_CONTROLLERS)
- return NULL;
-
- down(&i2o_configuration_lock);
- c=i2o_controllers[n];
- if(c!=NULL)
- atomic_inc(&c->users);
- up(&i2o_configuration_lock);
- return c;
-}
-
-/**
- * i2o_issue_claim - claim or release a device
- * @cmd: command
- * @c: controller to claim for
- * @tid: i2o task id
- * @type: type of claim
- *
- * Issue I2O UTIL_CLAIM and UTIL_RELEASE messages. The message to be sent
- * is set by cmd. The tid is the task id of the object to claim and the
- * type is the claim type (see the i2o standard)
- *
- * Zero is returned on success.
- */
-
-static int i2o_issue_claim(u32 cmd, struct i2o_controller *c, int tid, u32 type)
-{
- u32 msg[5];
-
- msg[0] = FIVE_WORD_MSG_SIZE | SGL_OFFSET_0;
- msg[1] = cmd << 24 | HOST_TID<<12 | tid;
- msg[3] = 0;
- msg[4] = type;
-
- return i2o_post_wait(c, msg, sizeof(msg), 60);
-}
-
-/*
- * i2o_claim_device - claim a device for use by an OSM
- * @d: device to claim
- * @h: handler for this device
- *
- * Do the leg work to assign a device to a given OSM on Linux. The
- * kernel updates the internal handler data for the device and then
- * performs an I2O claim for the device, attempting to claim the
- * device as primary. If the attempt fails a negative errno code
- * is returned. On success zero is returned.
- */
-
-int i2o_claim_device(struct i2o_device *d, struct i2o_handler *h)
-{
- down(&i2o_configuration_lock);
- if (d->owner) {
-		printk(KERN_INFO "Device claim called, but dev already owned by %s!\n",
-		       d->owner->name);
- up(&i2o_configuration_lock);
- return -EBUSY;
- }
- d->owner=h;
-
-	if(i2o_issue_claim(I2O_CMD_UTIL_CLAIM, d->controller, d->lct_data.tid,
-			   I2O_CLAIM_PRIMARY))
-	{
-		d->owner = NULL;
-		up(&i2o_configuration_lock);
-		return -EBUSY;
-	}
- up(&i2o_configuration_lock);
- return 0;
-}
-
-/**
- * i2o_release_device - release a device that the OSM is using
- * @d: device to claim
- * @h: handler for this device
- *
- * Drop a claim by an OSM on a given I2O device. The handler is cleared
- * and 0 is returned on success.
- *
- * AC - some devices seem to want to refuse an unclaim until they have
- * finished internal processing. It makes sense since you don't want a
- * new device to go reconfiguring the entire system until you are done.
- * Thus we are prepared to wait briefly.
- */
-
-int i2o_release_device(struct i2o_device *d, struct i2o_handler *h)
-{
- int err = 0;
- int tries;
-
- down(&i2o_configuration_lock);
- if (d->owner != h) {
- printk(KERN_INFO "Claim release called, but not owned by %s!\n",
- h->name);
- up(&i2o_configuration_lock);
- return -ENOENT;
- }
-
- for(tries=0;tries<10;tries++)
- {
- d->owner = NULL;
-
- /*
- * If the controller takes a nonblocking approach to
-		 * releases, we have to sleep/poll a few times.
- */
-
- if((err=i2o_issue_claim(I2O_CMD_UTIL_RELEASE, d->controller, d->lct_data.tid, I2O_CLAIM_PRIMARY)) )
- {
- err = -ENXIO;
- current->state = TASK_UNINTERRUPTIBLE;
- schedule_timeout(HZ);
- }
- else
- {
- err=0;
- break;
- }
- }
- up(&i2o_configuration_lock);
- return err;
-}
-
-/**
- * i2o_device_notify_on - Enable deletion notifiers
- * @d: device for notification
- * @h: handler to install
- *
- * Called by OSMs to let the core know that they want to be
- * notified if the given device is deleted from the system.
- */
-
-int i2o_device_notify_on(struct i2o_device *d, struct i2o_handler *h)
-{
- int i;
-
- if(d->num_managers == I2O_MAX_MANAGERS)
- return -ENOSPC;
-
- for(i = 0; i < I2O_MAX_MANAGERS; i++)
- {
- if(!d->managers[i])
- {
- d->managers[i] = h;
- break;
- }
- }
-
- d->num_managers++;
-
- return 0;
-}
-
-/**
- * i2o_device_notify_off - Remove deletion notifiers
- * @d: device for notification
- * @h: handler to remove
- *
- *	Called by OSMs to let the core know that they are no longer
- *	interested in the fate of the given device.
- */
-int i2o_device_notify_off(struct i2o_device *d, struct i2o_handler *h)
-{
- int i;
-
- for(i=0; i < I2O_MAX_MANAGERS; i++)
- {
- if(d->managers[i] == h)
- {
- d->managers[i] = NULL;
- d->num_managers--;
- return 0;
- }
- }
-
- return -ENOENT;
-}
-
-/**
- * i2o_event_register - register interest in an event
- * @c: Controller to register interest with
- * @tid: I2O task id
- * @init_context: initiator context to use with this notifier
- * @tr_context: transaction context to use with this notifier
- * @evt_mask: mask of events
- *
- *	Creates and posts an event registration message to the task. No reply
- * is waited for, or expected. Errors in posting will be reported.
- */
-
-int i2o_event_register(struct i2o_controller *c, u32 tid,
- u32 init_context, u32 tr_context, u32 evt_mask)
-{
- u32 msg[5]; // Not performance critical, so we just
- // i2o_post_this it instead of building it
- // in IOP memory
-
- msg[0] = FIVE_WORD_MSG_SIZE|SGL_OFFSET_0;
- msg[1] = I2O_CMD_UTIL_EVT_REGISTER<<24 | HOST_TID<<12 | tid;
- msg[2] = init_context;
- msg[3] = tr_context;
- msg[4] = evt_mask;
-
- return i2o_post_this(c, msg, sizeof(msg));
-}
-
-/*
- * i2o_event_ack - acknowledge an event
- * @c: controller
- * @msg: pointer to the UTIL_EVENT_REGISTER reply we received
- *
- * We just take a pointer to the original UTIL_EVENT_REGISTER reply
- * message and change the function code, since that's how the spec
- * describes an EventAck message.
- */
-
-int i2o_event_ack(struct i2o_controller *c, u32 *msg)
-{
- struct i2o_message *m = (struct i2o_message *)msg;
-
- m->function = I2O_CMD_UTIL_EVT_ACK;
-
- return i2o_post_wait(c, msg, m->size * 4, 2);
-}
-
-/*
- * Core event handler. Runs as a separate thread and is woken
- * up whenever there is an Executive class event.
- */
-static int i2o_core_evt(void *reply_data)
-{
- struct reply_info *reply = (struct reply_info *) reply_data;
- u32 *msg = reply->msg;
- struct i2o_controller *c = NULL;
- unsigned long flags;
-
- lock_kernel();
- daemonize();
- unlock_kernel();
-
- strcpy(current->comm, "i2oevtd");
- evt_running = 1;
-
- while(1)
- {
- if(down_interruptible(&evt_sem))
- {
- dprintk(KERN_INFO "I2O event thread dead\n");
-			printk(KERN_INFO "i2oevtd: exiting...\n");
- evt_running = 0;
- complete_and_exit(&evt_dead, 0);
- }
-
- /*
- * Copy the data out of the queue so that we don't have to lock
- * around the whole function and just around the qlen update
- */
- spin_lock_irqsave(&i2o_evt_lock, flags);
- memcpy(reply, &events[evt_out], sizeof(struct reply_info));
- MODINC(evt_out, I2O_EVT_Q_LEN);
- evt_q_len--;
- spin_unlock_irqrestore(&i2o_evt_lock, flags);
-
- c = reply->iop;
- dprintk(KERN_INFO "I2O IRTOS EVENT: iop%d, event %#10x\n", c->unit, msg[4]);
-
- /*
- * We do not attempt to delete/quiesce/etc. the controller if
-		 * some sort of error indication occurs. We may want to do
-		 * so in the future, but for now we just let the user deal with
-		 * it. One reason for this is that what to do with an error
-		 * or when to send what error is not really agreed on, so
- * we get errors that may not be fatal but just look like they
- * are...so let the user deal with it.
- */
- switch(msg[4])
- {
- case I2O_EVT_IND_EXEC_RESOURCE_LIMITS:
- printk(KERN_ERR "%s: Out of resources\n", c->name);
- break;
-
- case I2O_EVT_IND_EXEC_POWER_FAIL:
- printk(KERN_ERR "%s: Power failure\n", c->name);
- break;
-
- case I2O_EVT_IND_EXEC_HW_FAIL:
- {
- char *fail[] =
- {
- "Unknown Error",
- "Power Lost",
- "Code Violation",
- "Parity Error",
- "Code Execution Exception",
- "Watchdog Timer Expired"
- };
-
-				if(msg[5] < 6)
- printk(KERN_ERR "%s: Hardware Failure: %s\n",
- c->name, fail[msg[5]]);
- else
- printk(KERN_ERR "%s: Unknown Hardware Failure\n", c->name);
-
- break;
- }
-
- /*
- * New device created
- * - Create a new i2o_device entry
- * - Inform all interested drivers about this device's existence
- */
- case I2O_EVT_IND_EXEC_NEW_LCT_ENTRY:
- {
- struct i2o_device *d = (struct i2o_device *)
- kmalloc(sizeof(struct i2o_device), GFP_KERNEL);
- int i;
-
- if (d == NULL) {
- printk(KERN_EMERG "i2oevtd: out of memory\n");
- break;
- }
- memcpy(&d->lct_data, &msg[5], sizeof(i2o_lct_entry));
-
- d->next = NULL;
- d->controller = c;
- d->flags = 0;
-
- i2o_report_controller_unit(c, d);
- i2o_install_device(c,d);
-
- for(i = 0; i < MAX_I2O_MODULES; i++)
- {
- if(i2o_handlers[i] &&
- i2o_handlers[i]->new_dev_notify &&
- (i2o_handlers[i]->class&d->lct_data.class_id))
- {
- spin_lock(&i2o_dev_lock);
- i2o_handlers[i]->new_dev_notify(c,d);
- spin_unlock(&i2o_dev_lock);
- }
- }
-
- break;
- }
-
- /*
- * LCT entry for a device has been modified, so update it
- * internally.
- */
- case I2O_EVT_IND_EXEC_MODIFIED_LCT:
- {
- struct i2o_device *d;
- i2o_lct_entry *new_lct = (i2o_lct_entry *)&msg[5];
-
- for(d = c->devices; d; d = d->next)
- {
- if(d->lct_data.tid == new_lct->tid)
- {
- memcpy(&d->lct_data, new_lct, sizeof(i2o_lct_entry));
- break;
- }
- }
- break;
- }
-
- case I2O_EVT_IND_CONFIGURATION_FLAG:
- printk(KERN_WARNING "%s requires user configuration\n", c->name);
- break;
-
- case I2O_EVT_IND_GENERAL_WARNING:
-			printk(KERN_WARNING "%s: Warning notification received! "
-				"Check configuration for errors!\n", c->name);
- break;
-
- case I2O_EVT_IND_EVT_MASK_MODIFIED:
- /* Well I guess that was us hey .. */
- break;
-
- default:
- printk(KERN_WARNING "%s: No handler for event (0x%08x)\n", c->name, msg[4]);
- break;
- }
- }
-
- return 0;
-}
-
-/*
- * Dynamic LCT update. This compares the LCT with the currently
- * installed devices to check for device deletions. This is needed
- * because there is no DELETED_LCT_ENTRY EventIndicator for the
- * Executive class, so we can't just have the event handler do it.
- *
- * This is a hole in the spec that will hopefully be fixed someday.
- */
-static int i2o_dyn_lct(void *foo)
-{
- struct i2o_controller *c = (struct i2o_controller *)foo;
- struct i2o_device *d = NULL;
- struct i2o_device *d1 = NULL;
- int i = 0;
- int found = 0;
- int entries;
- void *tmp;
- char name[16];
-
- lock_kernel();
- daemonize();
- unlock_kernel();
-
- sprintf(name, "iop%d_lctd", c->unit);
- strcpy(current->comm, name);
-
- c->lct_running = 1;
-
- while(1)
- {
- down_interruptible(&c->lct_sem);
- if(signal_pending(current))
- {
- dprintk(KERN_ERR "%s: LCT thread dead\n", c->name);
- c->lct_running = 0;
- return 0;
- }
-
- entries = c->dlct->table_size;
- entries -= 3;
- entries /= 9;
-
- dprintk(KERN_INFO "%s: Dynamic LCT Update\n",c->name);
- dprintk(KERN_INFO "%s: Dynamic LCT contains %d entries\n", c->name, entries);
-
- if(!entries)
- {
- printk(KERN_INFO "%s: Empty LCT???\n", c->name);
- continue;
- }
-
- /*
- * Loop through all the devices on the IOP looking for their
-		 * LCT data in the LCT. We assume that TIDs are not repeated,
-		 * as that is the only way to really tell. It's been confirmed
-		 * by the IRTOS vendor(s?) that TIDs are not reused until they
-		 * wrap around (4096), and I doubt a system will be up long enough
-		 * to create/delete that many devices.
- */
- for(d = c->devices; d; )
- {
- found = 0;
- d1 = d->next;
-
- for(i = 0; i < entries; i++)
- {
- if(d->lct_data.tid == c->dlct->lct_entry[i].tid)
- {
- found = 1;
- break;
- }
- }
- if(!found)
- {
- dprintk(KERN_INFO "i2o_core: Deleted device!\n");
- spin_lock(&i2o_dev_lock);
- i2o_delete_device(d);
- spin_unlock(&i2o_dev_lock);
- }
- d = d1;
- }
-
- /*
- * Tell LCT to renotify us next time there is a change
- */
- i2o_lct_notify(c);
-
- /*
- * Copy new LCT into public LCT
- *
- * Possible race if someone is reading LCT while we are copying
-		 * over it. If this happens, we'll fix it then, but I doubt that
- * the LCT will get updated often enough or will get read by
- * a user often enough to worry.
- */
- if(c->lct->table_size < c->dlct->table_size)
- {
- tmp = c->lct;
- c->lct = kmalloc(c->dlct->table_size<<2, GFP_KERNEL);
- if(!c->lct)
- {
- printk(KERN_ERR "%s: No memory for LCT!\n", c->name);
- c->lct = tmp;
- continue;
- }
- kfree(tmp);
- }
- memcpy(c->lct, c->dlct, c->dlct->table_size<<2);
- }
-
- return 0;
-}
-
-/**
- * i2o_run_queue - process pending events on a controller
- * @c: controller to process
- *
- * This is called by the bus specific driver layer when an interrupt
- * or poll of this card interface is desired.
- */
-
-void i2o_run_queue(struct i2o_controller *c)
-{
- struct i2o_message *m;
- u32 mv;
- u32 *msg;
-
- /*
- * Old 960 steppings had a bug in the I2O unit that caused
- * the queue to appear empty when it wasn't.
- */
- if((mv=I2O_REPLY_READ32(c))==0xFFFFFFFF)
- mv=I2O_REPLY_READ32(c);
-
- while(mv!=0xFFFFFFFF)
- {
- struct i2o_handler *i;
- m=(struct i2o_message *)bus_to_virt(mv);
- msg=(u32*)m;
-
- i=i2o_handlers[m->initiator_context&(MAX_I2O_MODULES-1)];
- if(i && i->reply)
- i->reply(i,c,m);
- else
- {
- printk(KERN_WARNING "I2O: Spurious reply to handler %d\n",
- m->initiator_context&(MAX_I2O_MODULES-1));
- }
- i2o_flush_reply(c,mv);
- mb();
-
- /* That 960 bug again... */
- if((mv=I2O_REPLY_READ32(c))==0xFFFFFFFF)
- mv=I2O_REPLY_READ32(c);
- }
-}
-
-
-/**
- * i2o_get_class_name - do i2o class name lookup
- * @class: class number
- *
- * Return a descriptive string for an i2o class
- */
-
-const char *i2o_get_class_name(int class)
-{
- int idx = 16;
- static char *i2o_class_name[] = {
- "Executive",
- "Device Driver Module",
- "Block Device",
- "Tape Device",
- "LAN Interface",
- "WAN Interface",
- "Fibre Channel Port",
- "Fibre Channel Device",
- "SCSI Device",
- "ATE Port",
- "ATE Device",
- "Floppy Controller",
- "Floppy Device",
- "Secondary Bus Port",
- "Peer Transport Agent",
- "Peer Transport",
- "Unknown"
- };
-
- switch(class&0xFFF)
- {
- case I2O_CLASS_EXECUTIVE:
- idx = 0; break;
- case I2O_CLASS_DDM:
- idx = 1; break;
- case I2O_CLASS_RANDOM_BLOCK_STORAGE:
- idx = 2; break;
- case I2O_CLASS_SEQUENTIAL_STORAGE:
- idx = 3; break;
- case I2O_CLASS_LAN:
- idx = 4; break;
- case I2O_CLASS_WAN:
- idx = 5; break;
- case I2O_CLASS_FIBRE_CHANNEL_PORT:
- idx = 6; break;
- case I2O_CLASS_FIBRE_CHANNEL_PERIPHERAL:
- idx = 7; break;
- case I2O_CLASS_SCSI_PERIPHERAL:
- idx = 8; break;
- case I2O_CLASS_ATE_PORT:
- idx = 9; break;
- case I2O_CLASS_ATE_PERIPHERAL:
- idx = 10; break;
- case I2O_CLASS_FLOPPY_CONTROLLER:
- idx = 11; break;
- case I2O_CLASS_FLOPPY_DEVICE:
- idx = 12; break;
- case I2O_CLASS_BUS_ADAPTER_PORT:
- idx = 13; break;
- case I2O_CLASS_PEER_TRANSPORT_AGENT:
- idx = 14; break;
- case I2O_CLASS_PEER_TRANSPORT:
- idx = 15; break;
- }
-
- return i2o_class_name[idx];
-}
-
-
-/**
- * i2o_wait_message - obtain an i2o message from the IOP
- * @c: controller
- * @why: explanation
- *
- * This function waits up to 5 seconds for a message slot to be
- * available. If none becomes available it logs an error including
- * @why, a string naming what the message was wanted for (e.g.
- * "get_status"). 0xFFFFFFFF is returned on failure.
- *
- * On a success the message is returned. This is the physical page
- * frame offset address from the read port. (See the i2o spec)
- */
-
-u32 i2o_wait_message(struct i2o_controller *c, char *why)
-{
- long time=jiffies;
- u32 m;
- while((m=I2O_POST_READ32(c))==0xFFFFFFFF)
- {
- if((jiffies-time)>=5*HZ)
- {
- dprintk(KERN_ERR "%s: Timeout waiting for message frame to send %s.\n",
- c->name, why);
- return 0xFFFFFFFF;
- }
- schedule();
- barrier();
- }
- return m;
-}
-
-/**
- * i2o_report_controller_unit - print information about a tid
- * @c: controller
- * @d: device
- *
- * Dump an information block associated with a given unit (TID). The
- * tables are read and a block of text is output to printk that is
- * formatted for the user.
- */
-
-void i2o_report_controller_unit(struct i2o_controller *c, struct i2o_device *d)
-{
- char buf[64];
- char str[22];
- int ret;
- int unit = d->lct_data.tid;
-
- if(verbose==0)
- return;
-
- printk(KERN_INFO "Target ID %d.\n", unit);
- if((ret=i2o_query_scalar(c, unit, 0xF100, 3, buf, 16))>=0)
- {
- buf[16]=0;
- printk(KERN_INFO " Vendor: %s\n", buf);
- }
- if((ret=i2o_query_scalar(c, unit, 0xF100, 4, buf, 16))>=0)
- {
- buf[16]=0;
- printk(KERN_INFO " Device: %s\n", buf);
- }
- if(i2o_query_scalar(c, unit, 0xF100, 5, buf, 16)>=0)
- {
- buf[16]=0;
- printk(KERN_INFO " Description: %s\n", buf);
- }
- if((ret=i2o_query_scalar(c, unit, 0xF100, 6, buf, 8))>=0)
- {
- buf[8]=0;
- printk(KERN_INFO " Rev: %s\n", buf);
- }
-
- printk(KERN_INFO " Class: ");
- sprintf(str, "%-21s", i2o_get_class_name(d->lct_data.class_id));
- printk("%s\n", str);
-
- printk(KERN_INFO " Subclass: 0x%04X\n", d->lct_data.sub_class);
- printk(KERN_INFO " Flags: ");
-
- if(d->lct_data.device_flags&(1<<0))
- printk("C"); // ConfigDialog requested
- if(d->lct_data.device_flags&(1<<1))
- printk("U"); // Multi-user capable
- if(!(d->lct_data.device_flags&(1<<4)))
- printk("P"); // Peer service enabled!
- if(!(d->lct_data.device_flags&(1<<5)))
- printk("M"); // Mgmt service enabled!
- printk("\n");
-
-}
-
-
-/*
- * Parse the hardware resource table. Right now we print it out
- * and don't do a lot with it. We should collate these and then
- * interact with the Linux resource allocation block.
- *
- * Let's prove we can read it first, eh?
- *
- * This is full of endianisms!
- */
-
-static int i2o_parse_hrt(struct i2o_controller *c)
-{
-#ifdef DRIVERDEBUG
- u32 *rows=(u32*)c->hrt;
- u8 *p=(u8 *)c->hrt;
- u8 *d;
- int count;
- int length;
- int i;
- int state;
-
- if(p[3]!=0)
- {
- printk(KERN_ERR "%s: HRT table for controller is too new a version.\n",
- c->name);
- return -1;
- }
-
- count=p[0]|(p[1]<<8);
- length = p[2];
-
- printk(KERN_INFO "%s: HRT has %d entries of %d bytes each.\n",
- c->name, count, length<<2);
-
- rows+=2;
-
- for(i=0;i<count;i++)
- {
- printk(KERN_INFO "Adapter %08X: ", rows[0]);
- p=(u8 *)(rows+1);
- d=(u8 *)(rows+2);
- state=p[1]<<8|p[0];
-
- printk("TID %04X:[", state&0xFFF);
- state>>=12;
- if(state&(1<<0))
- printk("H"); /* Hidden */
- if(state&(1<<2))
- {
- printk("P"); /* Present */
- if(state&(1<<1))
- printk("C"); /* Controlled */
- }
- if(state>9)
- printk("*"); /* Hard */
-
- printk("]:");
-
- switch(p[3]&0xFFFF)
- {
- case 0:
- /* Adapter private bus - easy */
- printk("Local bus %d: I/O at 0x%04X Mem 0x%08X",
- p[2], d[1]<<8|d[0], *(u32 *)(d+4));
- break;
- case 1:
- /* ISA bus */
- printk("ISA %d: CSN %d I/O at 0x%04X Mem 0x%08X",
- p[2], d[2], d[1]<<8|d[0], *(u32 *)(d+4));
- break;
-
- case 2: /* EISA bus */
- printk("EISA %d: Slot %d I/O at 0x%04X Mem 0x%08X",
- p[2], d[3], d[1]<<8|d[0], *(u32 *)(d+4));
- break;
-
- case 3: /* MCA bus */
- printk("MCA %d: Slot %d I/O at 0x%04X Mem 0x%08X",
- p[2], d[3], d[1]<<8|d[0], *(u32 *)(d+4));
- break;
-
- case 4: /* PCI bus */
- printk("PCI %d: Bus %d Device %d Function %d",
- p[2], d[2], d[1], d[0]);
- break;
-
- case 0x80: /* Other */
- default:
- printk("Unsupported bus type.");
- break;
- }
- printk("\n");
- rows+=length;
- }
-#endif
- return 0;
-}
-
-/*
- * The logical configuration table tells us what we can talk to
- * on the board. Most of the stuff isn't interesting to us.
- */
-
-static int i2o_parse_lct(struct i2o_controller *c)
-{
- int i;
- int max;
- int tid;
- struct i2o_device *d;
- i2o_lct *lct = c->lct;
-
- if (lct == NULL) {
- printk(KERN_ERR "%s: LCT is empty???\n", c->name);
- return -1;
- }
-
- max = lct->table_size;
- max -= 3;
- max /= 9;
-
- printk(KERN_INFO "%s: LCT has %d entries.\n", c->name, max);
-
- if(lct->iop_flags&(1<<0))
- printk(KERN_WARNING "%s: Configuration dialog desired.\n", c->name);
-
- for(i=0;i<max;i++)
- {
- d = (struct i2o_device *)kmalloc(sizeof(struct i2o_device), GFP_KERNEL);
- if(d==NULL)
- {
- printk(KERN_CRIT "i2o_core: Out of memory for I2O device data.\n");
- return -ENOMEM;
- }
-
- d->controller = c;
- d->next = NULL;
-
- memcpy(&d->lct_data, &lct->lct_entry[i], sizeof(i2o_lct_entry));
-
- d->flags = 0;
- tid = d->lct_data.tid;
-
- i2o_report_controller_unit(c, d);
-
- i2o_install_device(c, d);
- }
- return 0;
-}
-
-
-/**
- * i2o_quiesce_controller - quiesce controller
- * @c: controller
- *
- * Quiesce an IOP. Causes IOP to make external operation quiescent
- * (i2o 'READY' state). Internal operation of the IOP continues normally.
- */
-
-int i2o_quiesce_controller(struct i2o_controller *c)
-{
- u32 msg[4];
- int ret;
-
- i2o_status_get(c);
-
- /* SysQuiesce discarded if IOP not in READY or OPERATIONAL state */
-
- if ((c->status_block->iop_state != ADAPTER_STATE_READY) &&
- (c->status_block->iop_state != ADAPTER_STATE_OPERATIONAL))
- {
- return 0;
- }
-
- msg[0] = FOUR_WORD_MSG_SIZE|SGL_OFFSET_0;
- msg[1] = I2O_CMD_SYS_QUIESCE<<24|HOST_TID<<12|ADAPTER_TID;
- msg[3] = 0;
-
- /* Long timeout needed for quiesce if lots of devices */
-
- if ((ret = i2o_post_wait(c, msg, sizeof(msg), 240)))
- printk(KERN_INFO "%s: Unable to quiesce (status=%#x).\n",
- c->name, -ret);
- else
- dprintk(KERN_INFO "%s: Quiesced.\n", c->name);
-
- i2o_status_get(c); // Entered READY state
- return ret;
-}
-
-/**
- * i2o_enable_controller - move controller from ready to operational
- * @c: controller
- *
- * Enable IOP. This allows the IOP to resume external operations and
- * reverses the effect of a quiesce. In the event of an error a negative
- * errno code is returned.
- */
-
-int i2o_enable_controller(struct i2o_controller *c)
-{
- u32 msg[4];
- int ret;
-
- i2o_status_get(c);
-
- /* Enable only allowed on READY state */
- if(c->status_block->iop_state != ADAPTER_STATE_READY)
- return -EINVAL;
-
- msg[0]=FOUR_WORD_MSG_SIZE|SGL_OFFSET_0;
- msg[1]=I2O_CMD_SYS_ENABLE<<24|HOST_TID<<12|ADAPTER_TID;
-
- /* How long of a timeout do we need? */
-
- if ((ret = i2o_post_wait(c, msg, sizeof(msg), 240)))
- printk(KERN_ERR "%s: Could not enable (status=%#x).\n",
- c->name, -ret);
- else
- dprintk(KERN_INFO "%s: Enabled.\n", c->name);
-
- i2o_status_get(c); // entered OPERATIONAL state
-
- return ret;
-}
-
-/**
- * i2o_clear_controller - clear a controller
- * @c: controller
- *
- * Clear an IOP to HOLD state, ie. terminate external operations, clear all
- * input queues and prepare for a system restart. IOP's internal operation
- * continues normally and the outbound queue is alive.
- * The IOP is not expected to rebuild its LCT.
- */
-
-int i2o_clear_controller(struct i2o_controller *c)
-{
- struct i2o_controller *iop;
- u32 msg[4];
- int ret;
-
- /* Quiesce all IOPs first */
-
- for (iop = i2o_controller_chain; iop; iop = iop->next)
- i2o_quiesce_controller(iop);
-
- msg[0]=FOUR_WORD_MSG_SIZE|SGL_OFFSET_0;
- msg[1]=I2O_CMD_ADAPTER_CLEAR<<24|HOST_TID<<12|ADAPTER_TID;
- msg[3]=0;
-
- if ((ret=i2o_post_wait(c, msg, sizeof(msg), 30)))
- printk(KERN_INFO "%s: Unable to clear (status=%#x).\n",
- c->name, -ret);
- else
- dprintk(KERN_INFO "%s: Cleared.\n",c->name);
-
- i2o_status_get(c);
-
- /* Enable other IOPs */
-
- for (iop = i2o_controller_chain; iop; iop = iop->next)
- if (iop != c)
- i2o_enable_controller(iop);
-
- return ret;
-}
-
-
-/**
- * i2o_reset_controller - reset an IOP
- * @c: controller to reset
- *
- * Reset the IOP into INIT state and wait until IOP gets into RESET state.
- * Terminate all external operations, clear IOP's inbound and outbound
- * queues, terminate all DDMs, and reload the IOP's operating environment
- * and all local DDMs. The IOP rebuilds its LCT.
- */
-
-static int i2o_reset_controller(struct i2o_controller *c)
-{
- struct i2o_controller *iop;
- u32 m;
- u8 *status;
- u32 *msg;
- long time;
-
- /* Quiesce all IOPs first */
-
- for (iop = i2o_controller_chain; iop; iop = iop->next)
- {
- if(iop->type != I2O_TYPE_PCI || !iop->bus.pci.dpt)
- i2o_quiesce_controller(iop);
- }
-
- m=i2o_wait_message(c, "AdapterReset");
- if(m==0xFFFFFFFF)
- return -ETIMEDOUT;
- msg=(u32 *)(c->mem_offset+m);
-
- status=(void *)kmalloc(4, GFP_KERNEL);
- if(status==NULL) {
- printk(KERN_ERR "IOP reset failed - no free memory.\n");
- return -ENOMEM;
- }
- memset(status, 0, 4);
-
- msg[0]=EIGHT_WORD_MSG_SIZE|SGL_OFFSET_0;
- msg[1]=I2O_CMD_ADAPTER_RESET<<24|HOST_TID<<12|ADAPTER_TID;
- msg[2]=core_context;
- msg[3]=0;
- msg[4]=0;
- msg[5]=0;
- msg[6]=virt_to_bus(status);
- msg[7]=0; /* 64bit host FIXME */
-
- i2o_post_message(c,m);
-
- /* Wait for a reply */
- time=jiffies;
- while(*status==0)
- {
- if((jiffies-time)>=20*HZ)
- {
- printk(KERN_ERR "IOP reset timeout.\n");
- // Better to leak this for safety: kfree(status);
- return -ETIMEDOUT;
- }
- schedule();
- barrier();
- }
-
- if (*status==I2O_CMD_IN_PROGRESS)
- {
- /*
- * Once the reset is sent, the IOP goes into the INIT state
- * which is indeterminate. We need to wait until the IOP
- * has rebooted before we can let the system talk to
- * it. We read the inbound Free_List until a message is
-		 * available. If we can't read one in the given amount of
- * time, we assume the IOP could not reboot properly.
- */
-
- dprintk(KERN_INFO "%s: Reset in progress, waiting for reboot...\n",
- c->name);
-
- time = jiffies;
- m = I2O_POST_READ32(c);
-		while(m == 0xFFFFFFFF)
- {
- if((jiffies-time) >= 30*HZ)
- {
- printk(KERN_ERR "%s: Timeout waiting for IOP reset.\n",
- c->name);
- return -ETIMEDOUT;
- }
- schedule();
- barrier();
- m = I2O_POST_READ32(c);
- }
- i2o_flush_reply(c,m);
- }
-
- /* If IopReset was rejected or didn't perform reset, try IopClear */
-
- i2o_status_get(c);
- if (status[0] == I2O_CMD_REJECTED ||
- c->status_block->iop_state != ADAPTER_STATE_RESET)
- {
- printk(KERN_WARNING "%s: Reset rejected, trying to clear\n",c->name);
- i2o_clear_controller(c);
- }
- else
- dprintk(KERN_INFO "%s: Reset completed.\n", c->name);
-
- /* Enable other IOPs */
-
- for (iop = i2o_controller_chain; iop; iop = iop->next)
- if (iop != c)
- i2o_enable_controller(iop);
-
- kfree(status);
- return 0;
-}
-
-
-/**
- * i2o_status_get - get the status block for the IOP
- * @c: controller
- *
- * Issue a status query on the controller. This updates the
- * attached status_block. If the controller fails to reply or an
- * error occurs then a negative errno code is returned. On success
- * zero is returned and the status_block is updated.
- */
-
-int i2o_status_get(struct i2o_controller *c)
-{
- long time;
- u32 m;
- u32 *msg;
- u8 *status_block;
-
- if (c->status_block == NULL)
- {
- c->status_block = (i2o_status_block *)
- kmalloc(sizeof(i2o_status_block),GFP_KERNEL);
- if (c->status_block == NULL)
- {
- printk(KERN_CRIT "%s: Get Status Block failed; Out of memory.\n",
- c->name);
- return -ENOMEM;
- }
- }
-
- status_block = (u8*)c->status_block;
- memset(c->status_block,0,sizeof(i2o_status_block));
-
- m=i2o_wait_message(c, "StatusGet");
- if(m==0xFFFFFFFF)
- return -ETIMEDOUT;
- msg=(u32 *)(c->mem_offset+m);
-
- msg[0]=NINE_WORD_MSG_SIZE|SGL_OFFSET_0;
- msg[1]=I2O_CMD_STATUS_GET<<24|HOST_TID<<12|ADAPTER_TID;
- msg[2]=core_context;
- msg[3]=0;
- msg[4]=0;
- msg[5]=0;
- msg[6]=virt_to_bus(c->status_block);
- msg[7]=0; /* 64bit host FIXME */
- msg[8]=sizeof(i2o_status_block); /* always 88 bytes */
-
- i2o_post_message(c,m);
-
- /* Wait for a reply */
-
- time=jiffies;
- while(status_block[87]!=0xFF)
- {
- if((jiffies-time)>=5*HZ)
- {
- printk(KERN_ERR "%s: Get status timeout.\n",c->name);
- return -ETIMEDOUT;
- }
- schedule();
- barrier();
- }
-
-#ifdef DRIVERDEBUG
- printk(KERN_INFO "%s: State = ", c->name);
- switch (c->status_block->iop_state) {
- case 0x01:
- printk("INIT\n");
- break;
- case 0x02:
- printk("RESET\n");
- break;
- case 0x04:
- printk("HOLD\n");
- break;
- case 0x05:
- printk("READY\n");
- break;
- case 0x08:
- printk("OPERATIONAL\n");
- break;
- case 0x10:
- printk("FAILED\n");
- break;
- case 0x11:
- printk("FAULTED\n");
- break;
- default:
- printk("%x (unknown !!)\n",c->status_block->iop_state);
-}
-#endif
-
- return 0;
-}
-
-/*
- * Get the Hardware Resource Table for the device.
- * The HRT contains information about possible hidden devices
- * but is mostly useless to us
- */
-int i2o_hrt_get(struct i2o_controller *c)
-{
- u32 msg[6];
- int ret, size = sizeof(i2o_hrt);
-
- /* First read just the header to figure out the real size */
-
- do {
- if (c->hrt == NULL) {
- c->hrt=kmalloc(size, GFP_KERNEL);
- if (c->hrt == NULL) {
- printk(KERN_CRIT "%s: Hrt Get failed; Out of memory.\n", c->name);
- return -ENOMEM;
- }
- }
-
- msg[0]= SIX_WORD_MSG_SIZE| SGL_OFFSET_4;
- msg[1]= I2O_CMD_HRT_GET<<24 | HOST_TID<<12 | ADAPTER_TID;
- msg[3]= 0;
- msg[4]= (0xD0000000 | size); /* Simple transaction */
- msg[5]= virt_to_bus(c->hrt); /* Dump it here */
-
- ret = i2o_post_wait_mem(c, msg, sizeof(msg), 20, c->hrt, NULL);
-
- if(ret == -ETIMEDOUT)
- {
-		/* The HRT block we used is in limbo somewhere. When the IOP
-		   wakes up we will recover it */
- c->hrt = NULL;
- return ret;
- }
-
- if(ret<0)
- {
- printk(KERN_ERR "%s: Unable to get HRT (status=%#x)\n",
- c->name, -ret);
- return ret;
- }
-
- if (c->hrt->num_entries * c->hrt->entry_len << 2 > size) {
- size = c->hrt->num_entries * c->hrt->entry_len << 2;
- kfree(c->hrt);
- c->hrt = NULL;
- }
- } while (c->hrt == NULL);
-
- i2o_parse_hrt(c); // just for debugging
-
- return 0;
-}
-
-/*
- * Send the I2O System Table to the specified IOP
- *
- * The system table contains information about all the IOPs in the
- * system. It is built and then sent to each IOP so that IOPs can
- * establish connections between each other.
- *
- */
-static int i2o_systab_send(struct i2o_controller *iop)
-{
- u32 msg[12];
- int ret;
- u32 *privbuf = kmalloc(16, GFP_KERNEL);
- if(privbuf == NULL)
- return -ENOMEM;
-
- if(iop->type == I2O_TYPE_PCI)
- {
- struct resource *root;
-
- if(iop->status_block->current_mem_size < iop->status_block->desired_mem_size)
- {
- struct resource *res = &iop->mem_resource;
- res->name = iop->bus.pci.pdev->bus->name;
- res->flags = IORESOURCE_MEM;
- res->start = 0;
- res->end = 0;
- printk("%s: requires private memory resources.\n", iop->name);
- root = pci_find_parent_resource(iop->bus.pci.pdev, res);
- if(root==NULL)
- printk("Can't find parent resource!\n");
- if(root && allocate_resource(root, res,
- iop->status_block->desired_mem_size,
- iop->status_block->desired_mem_size,
- iop->status_block->desired_mem_size,
- 1<<20, /* Unspecified, so use 1Mb and play safe */
- NULL,
- NULL)>=0)
- {
- iop->mem_alloc = 1;
- iop->status_block->current_mem_size = 1 + res->end - res->start;
- iop->status_block->current_mem_base = res->start;
- printk(KERN_INFO "%s: allocated %ld bytes of PCI memory at 0x%08lX.\n",
- iop->name, 1+res->end-res->start, res->start);
- }
- }
- if(iop->status_block->current_io_size < iop->status_block->desired_io_size)
- {
- struct resource *res = &iop->io_resource;
- res->name = iop->bus.pci.pdev->bus->name;
- res->flags = IORESOURCE_IO;
- res->start = 0;
- res->end = 0;
-			printk("%s: requires private I/O resources.\n", iop->name);
- root = pci_find_parent_resource(iop->bus.pci.pdev, res);
- if(root==NULL)
- printk("Can't find parent resource!\n");
- if(root && allocate_resource(root, res,
- iop->status_block->desired_io_size,
- iop->status_block->desired_io_size,
- iop->status_block->desired_io_size,
- 1<<20, /* Unspecified, so use 1Mb and play safe */
- NULL,
- NULL)>=0)
- {
- iop->io_alloc = 1;
- iop->status_block->current_io_size = 1 + res->end - res->start;
-				iop->status_block->current_io_base = res->start;
- printk(KERN_INFO "%s: allocated %ld bytes of PCI I/O at 0x%08lX.\n",
- iop->name, 1+res->end-res->start, res->start);
- }
- }
- }
- else
- {
- privbuf[0] = iop->status_block->current_mem_base;
- privbuf[1] = iop->status_block->current_mem_size;
- privbuf[2] = iop->status_block->current_io_base;
- privbuf[3] = iop->status_block->current_io_size;
- }
-
- msg[0] = I2O_MESSAGE_SIZE(12) | SGL_OFFSET_6;
- msg[1] = I2O_CMD_SYS_TAB_SET<<24 | HOST_TID<<12 | ADAPTER_TID;
- msg[3] = 0;
- msg[4] = (0<<16) | ((iop->unit+2) << 12); /* Host 0 IOP ID (unit + 2) */
- msg[5] = 0; /* Segment 0 */
-
- /*
- * Provide three SGL-elements:
- * System table (SysTab), Private memory space declaration and
- * Private i/o space declaration
- *
- * FIXME: provide these for controllers needing them
- */
- msg[6] = 0x54000000 | sys_tbl_len;
- msg[7] = virt_to_bus(sys_tbl);
- msg[8] = 0x54000000 | 8;
- msg[9] = virt_to_bus(privbuf);
- msg[10] = 0xD4000000 | 8;
- msg[11] = virt_to_bus(privbuf+2);
-
- ret=i2o_post_wait_mem(iop, msg, sizeof(msg), 120, privbuf, NULL);
-
- if(ret==-ETIMEDOUT)
- {
- printk(KERN_ERR "%s: SysTab setup timed out.\n", iop->name);
- }
- else if(ret<0)
- {
- printk(KERN_ERR "%s: Unable to set SysTab (status=%#x).\n",
- iop->name, -ret);
- kfree(privbuf);
- }
- else
- {
- dprintk(KERN_INFO "%s: SysTab set.\n", iop->name);
- kfree(privbuf);
- }
- i2o_status_get(iop); // Entered READY state
-
- return ret;
-
- }
-
-/*
- * Initialize I2O subsystem.
- */
-static void __init i2o_sys_init(void)
-{
- struct i2o_controller *iop, *niop = NULL;
-
- printk(KERN_INFO "Activating I2O controllers...\n");
- printk(KERN_INFO "This may take a few minutes if there are many devices\n");
-
- /* In INIT state, Activate IOPs */
- for (iop = i2o_controller_chain; iop; iop = niop) {
- dprintk(KERN_INFO "Calling i2o_activate_controller for %s...\n",
- iop->name);
- niop = iop->next;
- if (i2o_activate_controller(iop) < 0)
- i2o_delete_controller(iop);
- }
-
- /* Active IOPs in HOLD state */
-
-rebuild_sys_tab:
- if (i2o_controller_chain == NULL)
- return;
-
- /*
- * If build_sys_table fails, we kill everything and bail
- * as we can't init the IOPs w/o a system table
- */
- dprintk(KERN_INFO "i2o_core: Calling i2o_build_sys_table...\n");
- if (i2o_build_sys_table() < 0) {
- i2o_sys_shutdown();
- return;
- }
-
- /* If an IOP doesn't come online, we need to rebuild the system table */
- for (iop = i2o_controller_chain; iop; iop = niop) {
- niop = iop->next;
- dprintk(KERN_INFO "Calling i2o_online_controller for %s...\n", iop->name);
- if (i2o_online_controller(iop) < 0) {
- i2o_delete_controller(iop);
- goto rebuild_sys_tab;
- }
- }
-
- /* Active IOPs now in OPERATIONAL state */
-
- /*
- * Register for status updates from all IOPs
- */
- for(iop = i2o_controller_chain; iop; iop=iop->next) {
-
- /* Create a kernel thread to deal with dynamic LCT updates */
- iop->lct_pid = kernel_thread(i2o_dyn_lct, iop, CLONE_SIGHAND);
-
- /* Update change ind on DLCT */
- iop->dlct->change_ind = iop->lct->change_ind;
-
- /* Start dynamic LCT updates */
- i2o_lct_notify(iop);
-
- /* Register for all events from IRTOS */
- i2o_event_register(iop, core_context, 0, 0, 0xFFFFFFFF);
- }
-}
-
-/**
- * i2o_sys_shutdown - shutdown I2O system
- *
- * Bring down each i2o controller and then return. Each controller
- * is taken through an orderly shutdown
- */
-
-static void i2o_sys_shutdown(void)
-{
- struct i2o_controller *iop, *niop;
-
- /* Delete all IOPs from the controller chain */
- /* that will reset all IOPs too */
-
- for (iop = i2o_controller_chain; iop; iop = niop) {
- niop = iop->next;
- i2o_delete_controller(iop);
- }
-}
-
-/**
- * i2o_activate_controller - bring controller up to HOLD
- * @iop: controller
- *
- * This function brings an I2O controller into HOLD state. The adapter
- * is reset if necessary and then the queues and resource table
- * are read. -1 is returned on a failure, 0 on success.
- *
- */
-
-int i2o_activate_controller(struct i2o_controller *iop)
-{
- /* In INIT state, Wait Inbound Q to initialize (in i2o_status_get) */
- /* In READY state, Get status */
-
- if (i2o_status_get(iop) < 0) {
- printk(KERN_INFO "Unable to obtain status of %s, "
- "attempting a reset.\n", iop->name);
- if (i2o_reset_controller(iop) < 0)
- return -1;
- }
-
- if(iop->status_block->iop_state == ADAPTER_STATE_FAULTED) {
- printk(KERN_CRIT "%s: hardware fault\n", iop->name);
- return -1;
- }
-
- if (iop->status_block->i2o_version > I2OVER15) {
-		printk(KERN_ERR "%s: Not running version 1.5 of the I2O Specification.\n",
- iop->name);
- return -1;
- }
-
- if (iop->status_block->iop_state == ADAPTER_STATE_READY ||
- iop->status_block->iop_state == ADAPTER_STATE_OPERATIONAL ||
- iop->status_block->iop_state == ADAPTER_STATE_HOLD ||
- iop->status_block->iop_state == ADAPTER_STATE_FAILED)
- {
- dprintk(KERN_INFO "%s: Already running, trying to reset...\n",
- iop->name);
- if (i2o_reset_controller(iop) < 0)
- return -1;
- }
-
- if (i2o_init_outbound_q(iop) < 0)
- return -1;
-
- if (i2o_post_outbound_messages(iop))
- return -1;
-
- /* In HOLD state */
-
- if (i2o_hrt_get(iop) < 0)
- return -1;
-
- return 0;
-}
-
-
-/**
- * i2o_init_outbound_queue - setup the outbound queue
- * @c: controller
- *
- * Clear and (re)initialize IOP's outbound queue. Returns 0 on
- * success or a negative errno code on a failure.
- */
-
-int i2o_init_outbound_q(struct i2o_controller *c)
-{
- u8 *status;
- u32 m;
- u32 *msg;
- u32 time;
-
- dprintk(KERN_INFO "%s: Initializing Outbound Queue...\n", c->name);
- m=i2o_wait_message(c, "OutboundInit");
- if(m==0xFFFFFFFF)
- return -ETIMEDOUT;
- msg=(u32 *)(c->mem_offset+m);
-
- status = kmalloc(4,GFP_KERNEL);
- if (status==NULL) {
- printk(KERN_ERR "%s: Outbound Queue initialization failed - no free memory.\n",
- c->name);
- return -ENOMEM;
- }
- memset(status, 0, 4);
-
- msg[0]= EIGHT_WORD_MSG_SIZE| TRL_OFFSET_6;
- msg[1]= I2O_CMD_OUTBOUND_INIT<<24 | HOST_TID<<12 | ADAPTER_TID;
- msg[2]= core_context;
- msg[3]= 0x0106; /* Transaction context */
- msg[4]= 4096; /* Host page frame size */
-	/* Frame size is in words. Pick 128, it's what everyone else uses
-	   and other sizes break some adapters. */
- msg[5]= MSG_FRAME_SIZE<<16|0x80; /* Outbound msg frame size and Initcode */
- msg[6]= 0xD0000004; /* Simple SG LE, EOB */
- msg[7]= virt_to_bus(status);
-
- i2o_post_message(c,m);
-
- barrier();
- time=jiffies;
- while(status[0] < I2O_CMD_REJECTED)
- {
- if((jiffies-time)>=30*HZ)
- {
- if(status[0]==0x00)
- printk(KERN_ERR "%s: Ignored queue initialize request.\n",
- c->name);
- else
- printk(KERN_ERR "%s: Outbound queue initialize timeout.\n",
- c->name);
- kfree(status);
- return -ETIMEDOUT;
- }
- schedule();
- barrier();
- }
-
- if(status[0] != I2O_CMD_COMPLETED)
- {
- printk(KERN_ERR "%s: IOP outbound initialise failed.\n", c->name);
- kfree(status);
- return -ETIMEDOUT;
- }
-
- return 0;
-}
-
-/**
- * i2o_post_outbound_messages - fill message queue
- * @c: controller
- *
- * Allocate a message frame and load the messages into the IOP. The
- * function returns zero on success or a negative errno code on
- * failure.
- */
-
-int i2o_post_outbound_messages(struct i2o_controller *c)
-{
- int i;
- u32 m;
- /* Alloc space for IOP's outbound queue message frames */
-
- c->page_frame = kmalloc(MSG_POOL_SIZE, GFP_KERNEL);
- if(c->page_frame==NULL) {
- printk(KERN_CRIT "%s: Outbound Q initialize failed; out of memory.\n",
- c->name);
- return -ENOMEM;
- }
- m=virt_to_bus(c->page_frame);
-
- /* Post frames */
-
- for(i=0; i< NMBR_MSG_FRAMES; i++) {
- I2O_REPLY_WRITE32(c,m);
- mb();
- m += MSG_FRAME_SIZE;
- }
-
- return 0;
-}
-
-/*
- * Get the IOP's Logical Configuration Table
- */
-int i2o_lct_get(struct i2o_controller *c)
-{
- u32 msg[8];
- int ret, size = c->status_block->expected_lct_size;
-
- do {
- if (c->lct == NULL) {
- c->lct = kmalloc(size, GFP_KERNEL);
- if(c->lct == NULL) {
- printk(KERN_CRIT "%s: Lct Get failed. Out of memory.\n",
- c->name);
- return -ENOMEM;
- }
- }
- memset(c->lct, 0, size);
-
- msg[0] = EIGHT_WORD_MSG_SIZE|SGL_OFFSET_6;
- msg[1] = I2O_CMD_LCT_NOTIFY<<24 | HOST_TID<<12 | ADAPTER_TID;
- /* msg[2] filled in i2o_post_wait */
- msg[3] = 0;
- msg[4] = 0xFFFFFFFF; /* All devices */
- msg[5] = 0x00000000; /* Report now */
- msg[6] = 0xD0000000|size;
- msg[7] = virt_to_bus(c->lct);
-
- ret=i2o_post_wait_mem(c, msg, sizeof(msg), 120, c->lct, NULL);
-
- if(ret == -ETIMEDOUT)
- {
- c->lct = NULL;
- return ret;
- }
-
- if(ret<0)
- {
-			printk(KERN_ERR "%s: LCT Get failed (status=%#x).\n",
- c->name, -ret);
- return ret;
- }
-
- if (c->lct->table_size << 2 > size) {
- size = c->lct->table_size << 2;
- kfree(c->lct);
- c->lct = NULL;
- }
- } while (c->lct == NULL);
-
- if ((ret=i2o_parse_lct(c)) < 0)
- return ret;
-
- return 0;
-}
-
-/*
- * Like above, but used for async notification. The main
- * difference is that we keep track of the CurrentChangeIndicator
- * so that we only get updates when it actually changes.
- *
- */
-int i2o_lct_notify(struct i2o_controller *c)
-{
- u32 msg[8];
-
- msg[0] = EIGHT_WORD_MSG_SIZE|SGL_OFFSET_6;
- msg[1] = I2O_CMD_LCT_NOTIFY<<24 | HOST_TID<<12 | ADAPTER_TID;
- msg[2] = core_context;
- msg[3] = 0xDEADBEEF;
- msg[4] = 0xFFFFFFFF; /* All devices */
- msg[5] = c->dlct->change_ind+1; /* Next change */
- msg[6] = 0xD0000000|8192;
- msg[7] = virt_to_bus(c->dlct);
-
- return i2o_post_this(c, msg, sizeof(msg));
-}
-
-/*
- * Bring a controller online into OPERATIONAL state.
- */
-
-int i2o_online_controller(struct i2o_controller *iop)
-{
- u32 v;
-
- if (i2o_systab_send(iop) < 0)
- return -1;
-
- /* In READY state */
-
- dprintk(KERN_INFO "%s: Attempting to enable...\n", iop->name);
- if (i2o_enable_controller(iop) < 0)
- return -1;
-
- /* In OPERATIONAL state */
-
- dprintk(KERN_INFO "%s: Attempting to get/parse lct...\n", iop->name);
- if (i2o_lct_get(iop) < 0)
- return -1;
-
- /* Check battery status */
-
- iop->battery = 0;
- if(i2o_query_scalar(iop, ADAPTER_TID, 0x0000, 4, &v, 4)>=0)
- {
- if(v&16)
- iop->battery = 1;
- }
-
- return 0;
-}
-
-/*
- * Build system table
- *
- * The system table contains information about all the IOPs in the
- * system (duh) and is used by the Executives on the IOPs to establish
- * peer2peer connections. We're not supporting peer2peer at the moment,
- * but this will be needed down the road for things like lan2lan forwarding.
- */
-static int i2o_build_sys_table(void)
-{
- struct i2o_controller *iop = NULL;
- struct i2o_controller *niop = NULL;
- int count = 0;
-
- sys_tbl_len = sizeof(struct i2o_sys_tbl) + // Header + IOPs
- (i2o_num_controllers) *
- sizeof(struct i2o_sys_tbl_entry);
-
- if(sys_tbl)
- kfree(sys_tbl);
-
- sys_tbl = kmalloc(sys_tbl_len, GFP_KERNEL);
- if(!sys_tbl) {
- printk(KERN_CRIT "SysTab Set failed. Out of memory.\n");
- return -ENOMEM;
- }
- memset((void*)sys_tbl, 0, sys_tbl_len);
-
- sys_tbl->num_entries = i2o_num_controllers;
- sys_tbl->version = I2OVERSION; /* TODO: Version 2.0 */
- sys_tbl->change_ind = sys_tbl_ind++;
-
- for(iop = i2o_controller_chain; iop; iop = niop)
- {
- niop = iop->next;
-
- /*
- * Get updated IOP state so we have the latest information
- *
- * We should delete the controller at this point if it
- * doesn't respond since if it's not on the system table
-		 * it is technically not part of the I2O subsystem...
- */
- if(i2o_status_get(iop)) {
-			printk(KERN_ERR "%s: Deleting because we could not get status while "
-				"attempting to build system table\n", iop->name);
- i2o_delete_controller(iop);
- sys_tbl->num_entries--;
- continue; // try the next one
- }
-
- sys_tbl->iops[count].org_id = iop->status_block->org_id;
- sys_tbl->iops[count].iop_id = iop->unit + 2;
- sys_tbl->iops[count].seg_num = 0;
- sys_tbl->iops[count].i2o_version =
- iop->status_block->i2o_version;
- sys_tbl->iops[count].iop_state =
- iop->status_block->iop_state;
- sys_tbl->iops[count].msg_type =
- iop->status_block->msg_type;
- sys_tbl->iops[count].frame_size =
- iop->status_block->inbound_frame_size;
- sys_tbl->iops[count].last_changed = sys_tbl_ind - 1; // ??
- sys_tbl->iops[count].iop_capabilities =
- iop->status_block->iop_capabilities;
- sys_tbl->iops[count].inbound_low =
- (u32)virt_to_bus(iop->post_port);
- sys_tbl->iops[count].inbound_high = 0; // TODO: 64-bit support
-
- count++;
- }
-
-#ifdef DRIVERDEBUG
-{
- u32 *table;
- table = (u32*)sys_tbl;
- for(count = 0; count < (sys_tbl_len >>2); count++)
- printk(KERN_INFO "sys_tbl[%d] = %0#10x\n", count, table[count]);
-}
-#endif
-
- return 0;
-}
-
-
-/*
- * Run time support routines
- */
-
-/*
- * Generic "post and forget" helpers. This is less efficient - we do
- * a memcpy for example that isn't strictly needed, but for most uses
- * this is simply not worth optimising
- */
-
-int i2o_post_this(struct i2o_controller *c, u32 *data, int len)
-{
- u32 m;
- u32 *msg;
- unsigned long t=jiffies;
-
- do
- {
- mb();
- m = I2O_POST_READ32(c);
- }
- while(m==0xFFFFFFFF && (jiffies-t)<HZ);
-
- if(m==0xFFFFFFFF)
- {
- printk(KERN_ERR "%s: Timeout waiting for message frame!\n",
- c->name);
- return -ETIMEDOUT;
- }
- msg = (u32 *)(c->mem_offset + m);
- memcpy_toio(msg, data, len);
- i2o_post_message(c,m);
- return 0;
-}
-
-/**
- * i2o_post_wait_mem - I2O query/reply with DMA buffers
- * @c: controller
- * @msg: message to send
- * @len: length of message
- * @timeout: time in seconds to wait
- * @mem1: attached memory buffer 1
- * @mem2: attached memory buffer 2
- *
- * This core API allows an OSM to post a message and then be told whether
- * or not the system received a successful reply.
- *
- * If the message times out then the value '-ETIMEDOUT' is returned. This
- * is a special case. In this situation the message may (should) complete
- * at an indefinite time in the future. When it completes it will use the
- * memory buffers attached to the request. If -ETIMEDOUT is returned then
- * the memory buffers must not be freed. Instead the event completion will
- * free them for you. In all other cases the buffers are your problem.
- *
- * Pass NULL for unneeded buffers.
- */
-
-int i2o_post_wait_mem(struct i2o_controller *c, u32 *msg, int len, int timeout, void *mem1, void *mem2)
-{
- DECLARE_WAIT_QUEUE_HEAD(wq_i2o_post);
- int complete = 0;
- int status;
- unsigned long flags = 0;
- struct i2o_post_wait_data *wait_data =
- kmalloc(sizeof(struct i2o_post_wait_data), GFP_KERNEL);
-
- if(!wait_data)
- return -ENOMEM;
-
- /*
- * Create a new notification object
- */
- wait_data->status = &status;
- wait_data->complete = &complete;
- wait_data->mem[0] = mem1;
- wait_data->mem[1] = mem2;
- /*
- * Queue the event with its unique id
- */
- spin_lock_irqsave(&post_wait_lock, flags);
-
- wait_data->next = post_wait_queue;
- post_wait_queue = wait_data;
- wait_data->id = (++post_wait_id) & 0x7fff;
- wait_data->wq = &wq_i2o_post;
-
- spin_unlock_irqrestore(&post_wait_lock, flags);
-
- /*
- * Fill in the message id
- */
-
- msg[2] = 0x80000000|(u32)core_context|((u32)wait_data->id<<16);
-
- /*
- * Post the message to the controller. At some point later it
- * will return. If we time out before it returns then
- * complete will be zero. From the point post_this returns
- * the wait_data may have been deleted.
- */
- if ((status = i2o_post_this(c, msg, len))==0) {
- sleep_on_timeout(&wq_i2o_post, HZ * timeout);
- }
- else
- return -EIO;
-
- if(signal_pending(current))
- status = -EINTR;
-
- spin_lock_irqsave(&post_wait_lock, flags);
- barrier(); /* Be sure we see complete as it is locked */
- if(!complete)
- {
- /*
- * Mark the entry dead. We cannot remove it. This is important.
- * When it does terminate (which it must do if the controller hasn't
- * died) then it will otherwise scribble on stuff.
- * !complete lets us safely check if the entry is still
- * allocated and thus we can write into it
- */
- wait_data->wq = NULL;
- status = -ETIMEDOUT;
- }
- else
- {
- /* Debugging check - remove me soon */
- if(status == -ETIMEDOUT)
- {
- printk("TIMEDOUT BUG!\n");
- status = -EIO;
- }
- }
- /* And the wait_data is not leaked either! */
- spin_unlock_irqrestore(&post_wait_lock, flags);
- return status;
-}
-
-/**
- * i2o_post_wait - I2O query/reply
- * @c: controller
- * @msg: message to send
- * @len: length of message
- * @timeout: time in seconds to wait
- *
- * This core API allows an OSM to post a message and then be told whether
- * or not the system received a successful reply.
- */
-
-int i2o_post_wait(struct i2o_controller *c, u32 *msg, int len, int timeout)
-{
- return i2o_post_wait_mem(c, msg, len, timeout, NULL, NULL);
-}
-
-/*
- * i2o_post_wait is completed and we want to wake up the
- * sleeping process. Called by the core's reply handler.
- */
-
-static void i2o_post_wait_complete(u32 context, int status)
-{
- struct i2o_post_wait_data **p1, *q;
- unsigned long flags;
-
- /*
- * We need to search through the post_wait
- * queue to see if the given message is still
- * outstanding. If not, it means that the IOP
- * took longer to respond to the message than we
- * had allowed and the timer has already expired.
- * Not much we can do about that except log
- * it for debug purposes, increase timeout, and recompile
- *
- * Lock needed to keep anyone from moving queue pointers
- * around while we're looking through them.
- */
-
- spin_lock_irqsave(&post_wait_lock, flags);
-
- for(p1 = &post_wait_queue; *p1!=NULL; p1 = &((*p1)->next))
- {
- q = (*p1);
- if(q->id == ((context >> 16) & 0x7fff)) {
- /*
- * Delete it
- */
-
- *p1 = q->next;
-
- /*
- * Live or dead ?
- */
-
- if(q->wq)
- {
- /* Live entry - wakeup and set status */
- *q->status = status;
- *q->complete = 1;
- wake_up(q->wq);
- }
- else
- {
- /*
- * Free resources. Caller is dead
- */
- if(q->mem[0])
- kfree(q->mem[0]);
- if(q->mem[1])
- kfree(q->mem[1]);
- printk(KERN_WARNING "i2o_post_wait event completed after timeout.\n");
- }
- kfree(q);
- spin_unlock(&post_wait_lock);
- return;
- }
- }
- spin_unlock(&post_wait_lock);
-
- printk(KERN_DEBUG "i2o_post_wait: Bogus reply!\n");
-}
-
-/* Issue UTIL_PARAMS_GET or UTIL_PARAMS_SET
- *
- * This function can be used for all UtilParamsGet/Set operations.
- * The OperationList is given in oplist-buffer,
- * and results are returned in reslist-buffer.
- * Note that the minimum sized reslist is 8 bytes and contains
- * ResultCount, ErrorInfoSize, BlockStatus and BlockSize.
- */
-int i2o_issue_params(int cmd, struct i2o_controller *iop, int tid,
- void *oplist, int oplen, void *reslist, int reslen)
-{
- u32 msg[9];
- u32 *res32 = (u32*)reslist;
- u32 *restmp = (u32*)reslist;
- int len = 0;
- int i = 0;
- int wait_status;
- u32 *opmem, *resmem;
-
- /* Get DMAable memory */
- opmem = kmalloc(oplen, GFP_KERNEL);
- if(opmem == NULL)
- return -ENOMEM;
- memcpy(opmem, oplist, oplen);
-
- resmem = kmalloc(reslen, GFP_KERNEL);
- if(resmem == NULL)
- {
- kfree(opmem);
- return -ENOMEM;
- }
-
- msg[0] = NINE_WORD_MSG_SIZE | SGL_OFFSET_5;
- msg[1] = cmd << 24 | HOST_TID << 12 | tid;
- msg[3] = 0;
- msg[4] = 0;
- msg[5] = 0x54000000 | oplen; /* OperationList */
- msg[6] = virt_to_bus(opmem);
- msg[7] = 0xD0000000 | reslen; /* ResultList */
- msg[8] = virt_to_bus(resmem);
-
- wait_status = i2o_post_wait_mem(iop, msg, sizeof(msg), 10, opmem, resmem);
-
- /*
- * This only looks like a memory leak - don't "fix" it.
- */
- if(wait_status == -ETIMEDOUT)
- return wait_status;
-
- /* Query failed */
- if(wait_status != 0)
- {
- kfree(resmem);
- kfree(opmem);
- return wait_status;
- }
-
- memcpy(reslist, resmem, reslen);
- /*
- * Calculate number of bytes of Result LIST
- * We need to loop through each Result BLOCK and grab the length
- */
- restmp = res32 + 1;
- len = 1;
- for(i = 0; i < (res32[0]&0x0000FFFF); i++)
- {
- if(restmp[0]&0x00FF0000) /* BlockStatus != SUCCESS */
- {
- printk(KERN_WARNING "%s - Error:\n ErrorInfoSize = 0x%02x, "
- "BlockStatus = 0x%02x, BlockSize = 0x%04x\n",
- (cmd == I2O_CMD_UTIL_PARAMS_SET) ? "PARAMS_SET"
- : "PARAMS_GET",
- res32[1]>>24, (res32[1]>>16)&0xFF, res32[1]&0xFFFF);
-
- /*
- * If this is the only request, then we return an error
- */
- if((res32[0]&0x0000FFFF) == 1)
- {
- return -((res32[1] >> 16) & 0xFF); /* -BlockStatus */
- }
- }
- len += restmp[0] & 0x0000FFFF; /* Length of res BLOCK */
- restmp += restmp[0] & 0x0000FFFF; /* Skip to next BLOCK */
- }
- return (len << 2); /* bytes used by result list */
-}
-
-/*
- * Query one scalar group value or a whole scalar group.
- */
-int i2o_query_scalar(struct i2o_controller *iop, int tid,
- int group, int field, void *buf, int buflen)
-{
- u16 opblk[] = { 1, 0, I2O_PARAMS_FIELD_GET, group, 1, field };
- u8 resblk[8+buflen]; /* 8 bytes for header */
- int size;
-
- if (field == -1) /* whole group */
- opblk[4] = -1;
-
- size = i2o_issue_params(I2O_CMD_UTIL_PARAMS_GET, iop, tid,
- opblk, sizeof(opblk), resblk, sizeof(resblk));
-
- memcpy(buf, resblk+8, buflen); /* cut off header */
-
- if(size>buflen)
- return buflen;
- return size;
-}
-
-/*
- * Set a scalar group value or a whole group.
- */
-int i2o_set_scalar(struct i2o_controller *iop, int tid,
- int group, int field, void *buf, int buflen)
-{
- u16 *opblk;
- u8 resblk[8+buflen]; /* 8 bytes for header */
- int size;
-
- opblk = kmalloc(buflen+64, GFP_KERNEL);
- if (opblk == NULL)
- {
- printk(KERN_ERR "i2o: no memory for operation buffer.\n");
- return -ENOMEM;
- }
-
- opblk[0] = 1; /* operation count */
- opblk[1] = 0; /* pad */
- opblk[2] = I2O_PARAMS_FIELD_SET;
- opblk[3] = group;
-
- if(field == -1) { /* whole group */
- opblk[4] = -1;
- memcpy(opblk+5, buf, buflen);
- }
- else /* single field */
- {
- opblk[4] = 1;
- opblk[5] = field;
- memcpy(opblk+6, buf, buflen);
- }
-
- size = i2o_issue_params(I2O_CMD_UTIL_PARAMS_SET, iop, tid,
- opblk, 12+buflen, resblk, sizeof(resblk));
-
- kfree(opblk);
- if(size>buflen)
- return buflen;
- return size;
-}
-
-/*
- * if oper == I2O_PARAMS_TABLE_GET, get from all rows
- * if fieldcount == -1 return all fields
- * ibuf and ibuflen are unused (use NULL, 0)
- * else return specific fields
- * ibuf contains fieldindexes
- *
- * if oper == I2O_PARAMS_LIST_GET, get from specific rows
- * if fieldcount == -1 return all fields
- * ibuf contains rowcount, keyvalues
- * else return specific fields
- * fieldcount is # of fieldindexes
- * ibuf contains fieldindexes, rowcount, keyvalues
- *
- * You could also call i2o_issue_params() directly.
- */
-int i2o_query_table(int oper, struct i2o_controller *iop, int tid, int group,
- int fieldcount, void *ibuf, int ibuflen,
- void *resblk, int reslen)
-{
- u16 *opblk;
- int size;
-
- opblk = kmalloc(10 + ibuflen, GFP_KERNEL);
- if (opblk == NULL)
- {
- printk(KERN_ERR "i2o: no memory for query buffer.\n");
- return -ENOMEM;
- }
-
- opblk[0] = 1; /* operation count */
- opblk[1] = 0; /* pad */
- opblk[2] = oper;
- opblk[3] = group;
- opblk[4] = fieldcount;
- memcpy(opblk+5, ibuf, ibuflen); /* other params */
-
- size = i2o_issue_params(I2O_CMD_UTIL_PARAMS_GET,iop, tid,
- opblk, 10+ibuflen, resblk, reslen);
-
- kfree(opblk);
- if(size>reslen)
- return reslen;
- return size;
-}
-
-/*
- * Clear table group, i.e. delete all rows.
- */
-int i2o_clear_table(struct i2o_controller *iop, int tid, int group)
-{
- u16 opblk[] = { 1, 0, I2O_PARAMS_TABLE_CLEAR, group };
- u8 resblk[32]; /* min 8 bytes for result header */
-
- return i2o_issue_params(I2O_CMD_UTIL_PARAMS_SET, iop, tid,
- opblk, sizeof(opblk), resblk, sizeof(resblk));
-}
-
-/*
- * Add a new row into a table group.
- *
- * if fieldcount==-1 then we add whole rows
- * buf contains rowcount, keyvalues
- * else just specific fields are given, rest use defaults
- * buf contains fieldindexes, rowcount, keyvalues
- */
-int i2o_row_add_table(struct i2o_controller *iop, int tid,
- int group, int fieldcount, void *buf, int buflen)
-{
- u16 *opblk;
- u8 resblk[32]; /* min 8 bytes for header */
- int size;
-
- opblk = kmalloc(buflen+64, GFP_KERNEL);
- if (opblk == NULL)
- {
- printk(KERN_ERR "i2o: no memory for operation buffer.\n");
- return -ENOMEM;
- }
-
- opblk[0] = 1; /* operation count */
- opblk[1] = 0; /* pad */
- opblk[2] = I2O_PARAMS_ROW_ADD;
- opblk[3] = group;
- opblk[4] = fieldcount;
- memcpy(opblk+5, buf, buflen);
-
- size = i2o_issue_params(I2O_CMD_UTIL_PARAMS_SET, iop, tid,
- opblk, 10+buflen, resblk, sizeof(resblk));
-
- kfree(opblk);
- if(size>buflen)
- return buflen;
- return size;
-}
-
-
-/*
- * Used for error reporting/debugging purposes.
- * The following fail statuses are common to all classes.
- * The preserved message must be handled in the reply handler.
- */
-void i2o_report_fail_status(u8 req_status, u32* msg)
-{
- static char *FAIL_STATUS[] = {
- "0x80", /* not used */
- "SERVICE_SUSPENDED", /* 0x81 */
- "SERVICE_TERMINATED", /* 0x82 */
- "CONGESTION",
- "FAILURE",
- "STATE_ERROR",
- "TIME_OUT",
- "ROUTING_FAILURE",
- "INVALID_VERSION",
- "INVALID_OFFSET",
- "INVALID_MSG_FLAGS",
- "FRAME_TOO_SMALL",
- "FRAME_TOO_LARGE",
- "INVALID_TARGET_ID",
- "INVALID_INITIATOR_ID",
- "INVALID_INITIATOR_CONTEXT", /* 0x8F */
- "UNKNOWN_FAILURE" /* 0xFF */
- };
-
- if (req_status == I2O_FSC_TRANSPORT_UNKNOWN_FAILURE)
- printk("TRANSPORT_UNKNOWN_FAILURE (%0#2x).\n", req_status);
- else
- printk("TRANSPORT_%s.\n", FAIL_STATUS[req_status & 0x0F]);
-
- /* Dump some details */
-
- printk(KERN_ERR " InitiatorId = %d, TargetId = %d\n",
- (msg[1] >> 12) & 0xFFF, msg[1] & 0xFFF);
- printk(KERN_ERR " LowestVersion = 0x%02X, HighestVersion = 0x%02X\n",
- (msg[4] >> 8) & 0xFF, msg[4] & 0xFF);
- printk(KERN_ERR " FailingHostUnit = 0x%04X, FailingIOP = 0x%03X\n",
- msg[5] >> 16, msg[5] & 0xFFF);
-
- printk(KERN_ERR " Severity: 0x%02X ", (msg[4] >> 16) & 0xFF);
- if (msg[4] & (1<<16))
- printk("(FormatError), "
- "this msg can never be delivered/processed.\n");
- if (msg[4] & (1<<17))
- printk("(PathError), "
- "this msg can no longer be delivered/processed.\n");
- if (msg[4] & (1<<18))
- printk("(PathState), "
- "the system state does not allow delivery.\n");
- if (msg[4] & (1<<19))
- printk("(Congestion), resources temporarily not available;"
- "do not retry immediately.\n");
-}
-
-/*
- * Used for error reporting/debugging purposes.
- * The following reply statuses are common to all classes.
- */
-void i2o_report_common_status(u8 req_status)
-{
- static char *REPLY_STATUS[] = {
- "SUCCESS",
- "ABORT_DIRTY",
- "ABORT_NO_DATA_TRANSFER",
- "ABORT_PARTIAL_TRANSFER",
- "ERROR_DIRTY",
- "ERROR_NO_DATA_TRANSFER",
- "ERROR_PARTIAL_TRANSFER",
- "PROCESS_ABORT_DIRTY",
- "PROCESS_ABORT_NO_DATA_TRANSFER",
- "PROCESS_ABORT_PARTIAL_TRANSFER",
- "TRANSACTION_ERROR",
- "PROGRESS_REPORT"
- };
-
- if (req_status > I2O_REPLY_STATUS_PROGRESS_REPORT)
- printk("RequestStatus = %0#2x", req_status);
- else
- printk("%s", REPLY_STATUS[req_status]);
-}
-
-/*
- * Used for error reporting/debugging purposes.
- * The following detailed statuses are valid for the executive class,
- * utility class, DDM class, and transaction error replies.
- */
-static void i2o_report_common_dsc(u16 detailed_status)
-{
- static char *COMMON_DSC[] = {
- "SUCCESS",
- "0x01", // not used
- "BAD_KEY",
- "TCL_ERROR",
- "REPLY_BUFFER_FULL",
- "NO_SUCH_PAGE",
- "INSUFFICIENT_RESOURCE_SOFT",
- "INSUFFICIENT_RESOURCE_HARD",
- "0x08", // not used
- "CHAIN_BUFFER_TOO_LARGE",
- "UNSUPPORTED_FUNCTION",
- "DEVICE_LOCKED",
- "DEVICE_RESET",
- "INAPPROPRIATE_FUNCTION",
- "INVALID_INITIATOR_ADDRESS",
- "INVALID_MESSAGE_FLAGS",
- "INVALID_OFFSET",
- "INVALID_PARAMETER",
- "INVALID_REQUEST",
- "INVALID_TARGET_ADDRESS",
- "MESSAGE_TOO_LARGE",
- "MESSAGE_TOO_SMALL",
- "MISSING_PARAMETER",
- "TIMEOUT",
- "UNKNOWN_ERROR",
- "UNKNOWN_FUNCTION",
- "UNSUPPORTED_VERSION",
- "DEVICE_BUSY",
- "DEVICE_NOT_AVAILABLE"
- };
-
- if (detailed_status > I2O_DSC_DEVICE_NOT_AVAILABLE)
- printk(" / DetailedStatus = %0#4x.\n", detailed_status);
- else
- printk(" / %s.\n", COMMON_DSC[detailed_status]);
-}
-
-/*
- * Used for error reporting/debugging purposes
- */
-static void i2o_report_lan_dsc(u16 detailed_status)
-{
- static char *LAN_DSC[] = { // Lan detailed status code strings
- "SUCCESS",
- "DEVICE_FAILURE",
- "DESTINATION_NOT_FOUND",
- "TRANSMIT_ERROR",
- "TRANSMIT_ABORTED",
- "RECEIVE_ERROR",
- "RECEIVE_ABORTED",
- "DMA_ERROR",
- "BAD_PACKET_DETECTED",
- "OUT_OF_MEMORY",
- "BUCKET_OVERRUN",
- "IOP_INTERNAL_ERROR",
- "CANCELED",
- "INVALID_TRANSACTION_CONTEXT",
- "DEST_ADDRESS_DETECTED",
- "DEST_ADDRESS_OMITTED",
- "PARTIAL_PACKET_RETURNED",
- "TEMP_SUSPENDED_STATE", // last Lan detailed status code
- "INVALID_REQUEST" // general detailed status code
- };
-
- if (detailed_status > I2O_DSC_INVALID_REQUEST)
- printk(" / %0#4x.\n", detailed_status);
- else
- printk(" / %s.\n", LAN_DSC[detailed_status]);
-}
-
-/*
- * Used for error reporting/debugging purposes
- */
-static void i2o_report_util_cmd(u8 cmd)
-{
- switch (cmd) {
- case I2O_CMD_UTIL_NOP:
- printk("UTIL_NOP, ");
- break;
- case I2O_CMD_UTIL_ABORT:
- printk("UTIL_ABORT, ");
- break;
- case I2O_CMD_UTIL_CLAIM:
- printk("UTIL_CLAIM, ");
- break;
- case I2O_CMD_UTIL_RELEASE:
- printk("UTIL_CLAIM_RELEASE, ");
- break;
- case I2O_CMD_UTIL_CONFIG_DIALOG:
- printk("UTIL_CONFIG_DIALOG, ");
- break;
- case I2O_CMD_UTIL_DEVICE_RESERVE:
- printk("UTIL_DEVICE_RESERVE, ");
- break;
- case I2O_CMD_UTIL_DEVICE_RELEASE:
- printk("UTIL_DEVICE_RELEASE, ");
- break;
- case I2O_CMD_UTIL_EVT_ACK:
- printk("UTIL_EVENT_ACKNOWLEDGE, ");
- break;
- case I2O_CMD_UTIL_EVT_REGISTER:
- printk("UTIL_EVENT_REGISTER, ");
- break;
- case I2O_CMD_UTIL_LOCK:
- printk("UTIL_LOCK, ");
- break;
- case I2O_CMD_UTIL_LOCK_RELEASE:
- printk("UTIL_LOCK_RELEASE, ");
- break;
- case I2O_CMD_UTIL_PARAMS_GET:
- printk("UTIL_PARAMS_GET, ");
- break;
- case I2O_CMD_UTIL_PARAMS_SET:
- printk("UTIL_PARAMS_SET, ");
- break;
- case I2O_CMD_UTIL_REPLY_FAULT_NOTIFY:
- printk("UTIL_REPLY_FAULT_NOTIFY, ");
- break;
- default:
- printk("Cmd = %0#2x, ",cmd);
- }
-}
-
-/*
- * Used for error reporting/debugging purposes
- */
-static void i2o_report_exec_cmd(u8 cmd)
-{
- switch (cmd) {
- case I2O_CMD_ADAPTER_ASSIGN:
- printk("EXEC_ADAPTER_ASSIGN, ");
- break;
- case I2O_CMD_ADAPTER_READ:
- printk("EXEC_ADAPTER_READ, ");
- break;
- case I2O_CMD_ADAPTER_RELEASE:
- printk("EXEC_ADAPTER_RELEASE, ");
- break;
- case I2O_CMD_BIOS_INFO_SET:
- printk("EXEC_BIOS_INFO_SET, ");
- break;
- case I2O_CMD_BOOT_DEVICE_SET:
- printk("EXEC_BOOT_DEVICE_SET, ");
- break;
- case I2O_CMD_CONFIG_VALIDATE:
- printk("EXEC_CONFIG_VALIDATE, ");
- break;
- case I2O_CMD_CONN_SETUP:
- printk("EXEC_CONN_SETUP, ");
- break;
- case I2O_CMD_DDM_DESTROY:
- printk("EXEC_DDM_DESTROY, ");
- break;
- case I2O_CMD_DDM_ENABLE:
- printk("EXEC_DDM_ENABLE, ");
- break;
- case I2O_CMD_DDM_QUIESCE:
- printk("EXEC_DDM_QUIESCE, ");
- break;
- case I2O_CMD_DDM_RESET:
- printk("EXEC_DDM_RESET, ");
- break;
- case I2O_CMD_DDM_SUSPEND:
- printk("EXEC_DDM_SUSPEND, ");
- break;
- case I2O_CMD_DEVICE_ASSIGN:
- printk("EXEC_DEVICE_ASSIGN, ");
- break;
- case I2O_CMD_DEVICE_RELEASE:
- printk("EXEC_DEVICE_RELEASE, ");
- break;
- case I2O_CMD_HRT_GET:
- printk("EXEC_HRT_GET, ");
- break;
- case I2O_CMD_ADAPTER_CLEAR:
- printk("EXEC_IOP_CLEAR, ");
- break;
- case I2O_CMD_ADAPTER_CONNECT:
- printk("EXEC_IOP_CONNECT, ");
- break;
- case I2O_CMD_ADAPTER_RESET:
- printk("EXEC_IOP_RESET, ");
- break;
- case I2O_CMD_LCT_NOTIFY:
- printk("EXEC_LCT_NOTIFY, ");
- break;
- case I2O_CMD_OUTBOUND_INIT:
- printk("EXEC_OUTBOUND_INIT, ");
- break;
- case I2O_CMD_PATH_ENABLE:
- printk("EXEC_PATH_ENABLE, ");
- break;
- case I2O_CMD_PATH_QUIESCE:
- printk("EXEC_PATH_QUIESCE, ");
- break;
- case I2O_CMD_PATH_RESET:
- printk("EXEC_PATH_RESET, ");
- break;
- case I2O_CMD_STATIC_MF_CREATE:
- printk("EXEC_STATIC_MF_CREATE, ");
- break;
- case I2O_CMD_STATIC_MF_RELEASE:
- printk("EXEC_STATIC_MF_RELEASE, ");
- break;
- case I2O_CMD_STATUS_GET:
- printk("EXEC_STATUS_GET, ");
- break;
- case I2O_CMD_SW_DOWNLOAD:
- printk("EXEC_SW_DOWNLOAD, ");
- break;
- case I2O_CMD_SW_UPLOAD:
- printk("EXEC_SW_UPLOAD, ");
- break;
- case I2O_CMD_SW_REMOVE:
- printk("EXEC_SW_REMOVE, ");
- break;
- case I2O_CMD_SYS_ENABLE:
- printk("EXEC_SYS_ENABLE, ");
- break;
- case I2O_CMD_SYS_MODIFY:
- printk("EXEC_SYS_MODIFY, ");
- break;
- case I2O_CMD_SYS_QUIESCE:
- printk("EXEC_SYS_QUIESCE, ");
- break;
- case I2O_CMD_SYS_TAB_SET:
- printk("EXEC_SYS_TAB_SET, ");
- break;
- default:
- printk("Cmd = %#02x, ",cmd);
- }
-}
-
-/*
- * Used for error reporting/debugging purposes
- */
-static void i2o_report_lan_cmd(u8 cmd)
-{
- switch (cmd) {
- case LAN_PACKET_SEND:
- printk("LAN_PACKET_SEND, ");
- break;
- case LAN_SDU_SEND:
- printk("LAN_SDU_SEND, ");
- break;
- case LAN_RECEIVE_POST:
- printk("LAN_RECEIVE_POST, ");
- break;
- case LAN_RESET:
- printk("LAN_RESET, ");
- break;
- case LAN_SUSPEND:
- printk("LAN_SUSPEND, ");
- break;
- default:
- printk("Cmd = %0#2x, ",cmd);
- }
-}
-
-/*
- * Used for error reporting/debugging purposes.
- * Report Cmd name, Request status, Detailed Status.
- */
-void i2o_report_status(const char *severity, const char *str, u32 *msg)
-{
- u8 cmd = (msg[1]>>24)&0xFF;
- u8 req_status = (msg[4]>>24)&0xFF;
- u16 detailed_status = msg[4]&0xFFFF;
- struct i2o_handler *h = i2o_handlers[msg[2] & (MAX_I2O_MODULES-1)];
-
- printk("%s%s: ", severity, str);
-
- if (cmd < 0x1F) // Utility cmd
- i2o_report_util_cmd(cmd);
-
- else if (cmd >= 0xA0 && cmd <= 0xEF) // Executive cmd
- i2o_report_exec_cmd(cmd);
-
- else if (h->class == I2O_CLASS_LAN && cmd >= 0x30 && cmd <= 0x3F)
- i2o_report_lan_cmd(cmd); // LAN cmd
- else
- printk("Cmd = %0#2x, ", cmd); // Other cmds
-
- if (msg[0] & MSG_FAIL) {
- i2o_report_fail_status(req_status, msg);
- return;
- }
-
- i2o_report_common_status(req_status);
-
- if (cmd < 0x1F || (cmd >= 0xA0 && cmd <= 0xEF))
- i2o_report_common_dsc(detailed_status);
- else if (h->class == I2O_CLASS_LAN && cmd >= 0x30 && cmd <= 0x3F)
- i2o_report_lan_dsc(detailed_status);
- else
- printk(" / DetailedStatus = %0#4x.\n", detailed_status);
-}
-
-/* Used to dump a message to syslog during debugging */
-void i2o_dump_message(u32 *msg)
-{
-#ifdef DRIVERDEBUG
- int i;
- printk(KERN_INFO "Dumping I2O message size %d @ %p\n",
- msg[0]>>16&0xffff, msg);
- for(i = 0; i < ((msg[0]>>16)&0xffff); i++)
- printk(KERN_INFO " msg[%d] = %0#10x\n", i, msg[i]);
-#endif
-}
-
-/*
- * I2O reboot/shutdown notification.
- *
- * - Call each OSM's reboot notifier (if one exists)
- * - Quiesce each IOP in the system
- *
- * Each IOP has to be quiesced before we can ensure that the system
- * can be properly shutdown as a transaction that has already been
- * acknowledged still needs to be placed in permanent store on the IOP.
- * The SysQuiesce causes the IOP to force all HDMs to complete their
- * transactions before returning, so only at that point is it safe
- * to shut the system down.
- */
-static int i2o_reboot_event(struct notifier_block *n, unsigned long code,
- void *p)
-{
- int i = 0;
- struct i2o_controller *c = NULL;
-
- if(code != SYS_RESTART && code != SYS_HALT && code != SYS_POWER_OFF)
- return NOTIFY_DONE;
-
- printk(KERN_INFO "Shutting down I2O system.\n");
- printk(KERN_INFO
- " This could take a few minutes if there are many devices attached\n");
-
- for(i = 0; i < MAX_I2O_MODULES; i++)
- {
- if(i2o_handlers[i] && i2o_handlers[i]->reboot_notify)
- i2o_handlers[i]->reboot_notify();
- }
-
- for(c = i2o_controller_chain; c; c = c->next)
- {
- if(i2o_quiesce_controller(c))
- {
- printk(KERN_WARNING "i2o: Could not quiesce %s. "
- "Verify setup on next system power up.\n", c->name);
- }
- }
-
- printk(KERN_INFO "I2O system down.\n");
- return NOTIFY_DONE;
-}
-
-
-EXPORT_SYMBOL(i2o_controller_chain);
-EXPORT_SYMBOL(i2o_num_controllers);
-EXPORT_SYMBOL(i2o_find_controller);
-EXPORT_SYMBOL(i2o_unlock_controller);
-EXPORT_SYMBOL(i2o_status_get);
-
-EXPORT_SYMBOL(i2o_install_handler);
-EXPORT_SYMBOL(i2o_remove_handler);
-
-EXPORT_SYMBOL(i2o_claim_device);
-EXPORT_SYMBOL(i2o_release_device);
-EXPORT_SYMBOL(i2o_device_notify_on);
-EXPORT_SYMBOL(i2o_device_notify_off);
-
-EXPORT_SYMBOL(i2o_post_this);
-EXPORT_SYMBOL(i2o_post_wait);
-EXPORT_SYMBOL(i2o_post_wait_mem);
-
-EXPORT_SYMBOL(i2o_query_scalar);
-EXPORT_SYMBOL(i2o_set_scalar);
-EXPORT_SYMBOL(i2o_query_table);
-EXPORT_SYMBOL(i2o_clear_table);
-EXPORT_SYMBOL(i2o_row_add_table);
-EXPORT_SYMBOL(i2o_issue_params);
-
-EXPORT_SYMBOL(i2o_event_register);
-EXPORT_SYMBOL(i2o_event_ack);
-
-EXPORT_SYMBOL(i2o_report_status);
-EXPORT_SYMBOL(i2o_dump_message);
-
-EXPORT_SYMBOL(i2o_get_class_name);
-
-#ifdef MODULE
-
-MODULE_AUTHOR("Red Hat Software");
-MODULE_DESCRIPTION("I2O Core");
-
-
-int init_module(void)
-{
- printk(KERN_INFO "I2O Core - (C) Copyright 1999 Red Hat Software\n");
- if (i2o_install_handler(&i2o_core_handler) < 0)
- {
- printk(KERN_ERR
- "i2o_core: Unable to install core handler.\nI2O stack not loaded!\n");
- return 0;
- }
-
- core_context = i2o_core_handler.context;
-
- /*
- * Attach core to I2O PCI transport (and others as they are developed)
- */
-#ifdef CONFIG_I2O_PCI_MODULE
- if(i2o_pci_core_attach(&i2o_core_functions) < 0)
- printk(KERN_INFO "i2o: No PCI I2O controllers found\n");
-#endif
-
- /*
- * Initialize event handling thread
- */
- init_MUTEX_LOCKED(&evt_sem);
- evt_pid = kernel_thread(i2o_core_evt, &evt_reply, CLONE_SIGHAND);
- if(evt_pid < 0)
- {
- printk(KERN_ERR "I2O: Could not create event handler kernel thread\n");
- i2o_remove_handler(&i2o_core_handler);
- return 0;
- }
- else
- printk(KERN_INFO "I2O: Event thread created as pid %d\n", evt_pid);
-
- if(i2o_num_controllers)
- i2o_sys_init();
-
- register_reboot_notifier(&i2o_reboot_notifier);
-
- return 0;
-}
-
-void cleanup_module(void)
-{
- int stat;
-
- unregister_reboot_notifier(&i2o_reboot_notifier);
-
- if(i2o_num_controllers)
- i2o_sys_shutdown();
-
- /*
- * If this is shutdown time, the thread has already been killed
- */
- if(evt_running) {
- printk("Terminating i2o threads...");
- stat = kill_proc(evt_pid, SIGTERM, 1);
- if(!stat) {
- printk("waiting...");
- wait_for_completion(&evt_dead);
- }
- printk("done.\n");
- }
-
-#ifdef CONFIG_I2O_PCI_MODULE
- i2o_pci_core_detach();
-#endif
-
- i2o_remove_handler(&i2o_core_handler);
-
-}
-
-#else
-
-extern int i2o_block_init(void);
-extern int i2o_config_init(void);
-extern int i2o_lan_init(void);
-extern int i2o_pci_init(void);
-extern int i2o_proc_init(void);
-extern int i2o_scsi_init(void);
-
-int __init i2o_init(void)
-{
- printk(KERN_INFO "Loading I2O Core - (c) Copyright 1999 Red Hat Software\n");
-
- if (i2o_install_handler(&i2o_core_handler) < 0)
- {
- printk(KERN_ERR
- "i2o_core: Unable to install core handler.\nI2O stack not loaded!\n");
- return 0;
- }
-
- core_context = i2o_core_handler.context;
-
- /*
- * Initialize event handling thread
- * We may not find any controllers, but still want this as
- * down the road we may have hot pluggable controllers that
- * need to be dealt with.
- */
- init_MUTEX_LOCKED(&evt_sem);
- if((evt_pid = kernel_thread(i2o_core_evt, &evt_reply, CLONE_SIGHAND)) < 0)
- {
- printk(KERN_ERR "I2O: Could not create event handler kernel thread\n");
- i2o_remove_handler(&i2o_core_handler);
- return 0;
- }
-
-#ifdef CONFIG_I2O_PCI
- i2o_pci_init();
-#endif
-
- if(i2o_num_controllers)
- i2o_sys_init();
-
- register_reboot_notifier(&i2o_reboot_notifier);
-
- i2o_config_init();
-#ifdef CONFIG_I2O_BLOCK
- i2o_block_init();
-#endif
-#ifdef CONFIG_I2O_LAN
- i2o_lan_init();
-#endif
-#ifdef CONFIG_I2O_PROC
- i2o_proc_init();
-#endif
- return 0;
-}
-
-#endif
+++ /dev/null
-/*
- * drivers/i2o/i2o_lan.c
- *
- * I2O LAN CLASS OSM May 26th 2000
- *
- * (C) Copyright 1999, 2000 University of Helsinki,
- * Department of Computer Science
- *
- * This code is still under development / test.
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version
- * 2 of the License, or (at your option) any later version.
- *
- * Authors: Auvo Häkkinen <Auvo.Hakkinen@cs.Helsinki.FI>
- * Fixes: Juha Sievänen <Juha.Sievanen@cs.Helsinki.FI>
- * Taneli Vähäkangas <Taneli.Vahakangas@cs.Helsinki.FI>
- * Deepak Saxena <deepak@plexity.net>
- *
- * Tested: in FDDI environment (using SysKonnect's DDM)
- * in Gigabit Eth environment (using SysKonnect's DDM)
- * in Fast Ethernet environment (using Intel 82558 DDM)
- *
- * TODO: tests for other LAN classes (Token Ring, Fibre Channel)
- */
-
-#include <linux/config.h>
-#include <linux/module.h>
-
-#include <linux/pci.h>
-
-#include <linux/netdevice.h>
-#include <linux/etherdevice.h>
-#include <linux/fddidevice.h>
-#include <linux/trdevice.h>
-#include <linux/fcdevice.h>
-
-#include <linux/skbuff.h>
-#include <linux/if_arp.h>
-#include <linux/slab.h>
-#include <linux/init.h>
-#include <linux/spinlock.h>
-#include <linux/tqueue.h>
-#include <asm/io.h>
-
-#include <linux/errno.h>
-
-#include <linux/i2o.h>
-#include "i2o_lan.h"
-
-//#define DRIVERDEBUG
-#ifdef DRIVERDEBUG
-#define dprintk(s, args...) printk(s, ## args)
-#else
-#define dprintk(s, args...)
-#endif
-
-/* The following module parameters are used as default values
- * for per interface values located in the net_device private area.
- * Private values are changed via /proc filesystem.
- */
-static u32 max_buckets_out = I2O_LAN_MAX_BUCKETS_OUT;
-static u32 bucket_thresh = I2O_LAN_BUCKET_THRESH;
-static u32 rx_copybreak = I2O_LAN_RX_COPYBREAK;
-static u8 tx_batch_mode = I2O_LAN_TX_BATCH_MODE;
-static u32 i2o_event_mask = I2O_LAN_EVENT_MASK;
-
-#define MAX_LAN_CARDS 16
-static struct net_device *i2o_landevs[MAX_LAN_CARDS+1];
-static int unit = -1; /* device unit number */
-
-static void i2o_lan_reply(struct i2o_handler *h, struct i2o_controller *iop, struct i2o_message *m);
-static void i2o_lan_send_post_reply(struct i2o_handler *h, struct i2o_controller *iop, struct i2o_message *m);
-static int i2o_lan_receive_post(struct net_device *dev);
-static void i2o_lan_receive_post_reply(struct i2o_handler *h, struct i2o_controller *iop, struct i2o_message *m);
-static void i2o_lan_release_buckets(struct net_device *dev, u32 *msg);
-
-static int i2o_lan_reset(struct net_device *dev);
-static void i2o_lan_handle_event(struct net_device *dev, u32 *msg);
-
-/* Structures to register handlers for the incoming replies. */
-
-static struct i2o_handler i2o_lan_send_handler = {
- i2o_lan_send_post_reply, // For send replies
- NULL,
- NULL,
- NULL,
- "I2O LAN OSM send",
- -1,
- I2O_CLASS_LAN
-};
-static int lan_send_context;
-
-static struct i2o_handler i2o_lan_receive_handler = {
- i2o_lan_receive_post_reply, // For receive replies
- NULL,
- NULL,
- NULL,
- "I2O LAN OSM receive",
- -1,
- I2O_CLASS_LAN
-};
-static int lan_receive_context;
-
-static struct i2o_handler i2o_lan_handler = {
- i2o_lan_reply, // For other replies
- NULL,
- NULL,
- NULL,
- "I2O LAN OSM",
- -1,
- I2O_CLASS_LAN
-};
-static int lan_context;
-
-DECLARE_TASK_QUEUE(i2o_post_buckets_task);
-struct tq_struct run_i2o_post_buckets_task = {
- routine: (void (*)(void *)) run_task_queue,
- data: (void *) 0
-};
-
-/* Functions to handle message failures and transaction errors:
-==============================================================*/
-
-/*
- * i2o_lan_handle_failure(): Fail bit has been set since the IOP's message
- * layer cannot deliver the request to the target, or the target cannot
- * process the request.
- */
-static void i2o_lan_handle_failure(struct net_device *dev, u32 *msg)
-{
- struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
- struct i2o_device *i2o_dev = priv->i2o_dev;
- struct i2o_controller *iop = i2o_dev->controller;
-
- u32 *preserved_msg = (u32*)(iop->mem_offset + msg[7]);
- u32 *sgl_elem = &preserved_msg[4];
- struct sk_buff *skb = NULL;
- u8 le_flag;
-
- i2o_report_status(KERN_INFO, dev->name, msg);
-
- /* If PacketSend failed, free sk_buffs reserved by upper layers */
-
- if (msg[1] >> 24 == LAN_PACKET_SEND) {
- do {
- skb = (struct sk_buff *)(sgl_elem[1]);
- dev_kfree_skb_irq(skb);
-
- atomic_dec(&priv->tx_out);
-
- le_flag = *sgl_elem >> 31;
- sgl_elem += 3;
- } while (le_flag == 0); /* Last element flag not set */
-
- if (netif_queue_stopped(dev))
- netif_wake_queue(dev);
- }
-
- /* If ReceivePost failed, free sk_buffs we have reserved */
-
- if (msg[1] >> 24 == LAN_RECEIVE_POST) {
- do {
- skb = (struct sk_buff *)(sgl_elem[1]);
- dev_kfree_skb_irq(skb);
-
- atomic_dec(&priv->buckets_out);
-
- le_flag = *sgl_elem >> 31;
- sgl_elem += 3;
- } while (le_flag == 0); /* Last element flag not set */
- }
-
- /* Release the preserved msg frame by resubmitting it as a NOP */
-
- preserved_msg[0] = THREE_WORD_MSG_SIZE | SGL_OFFSET_0;
- preserved_msg[1] = I2O_CMD_UTIL_NOP << 24 | HOST_TID << 12 | 0;
- preserved_msg[2] = 0;
- i2o_post_message(iop, msg[7]);
-}
-/*
- * i2o_lan_handle_transaction_error(): IOP or DDM has rejected the request
- * for general cause (format error, bad function code, insufficient resources,
- * etc.). We get one transaction_error for each failed transaction.
- */
-static void i2o_lan_handle_transaction_error(struct net_device *dev, u32 *msg)
-{
- struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
- struct sk_buff *skb;
-
- i2o_report_status(KERN_INFO, dev->name, msg);
-
- /* If PacketSend was rejected, free sk_buff reserved by upper layers */
-
- if (msg[1] >> 24 == LAN_PACKET_SEND) {
- skb = (struct sk_buff *)(msg[3]); // TransactionContext
- dev_kfree_skb_irq(skb);
- atomic_dec(&priv->tx_out);
-
- if (netif_queue_stopped(dev))
- netif_wake_queue(dev);
- }
-
- /* If ReceivePost was rejected, free sk_buff we have reserved */
-
- if (msg[1] >> 24 == LAN_RECEIVE_POST) {
- skb = (struct sk_buff *)(msg[3]);
- dev_kfree_skb_irq(skb);
- atomic_dec(&priv->buckets_out);
- }
-}
-
-/*
- * i2o_lan_handle_status(): Common handling for an unsuccessful request
- * (status != SUCCESS).
- */
-static int i2o_lan_handle_status(struct net_device *dev, u32 *msg)
-{
- /* Fail bit set? */
-
- if (msg[0] & MSG_FAIL) {
- i2o_lan_handle_failure(dev, msg);
- return -1;
- }
-
- /* Message rejected for general cause? */
-
- if ((msg[4]>>24) == I2O_REPLY_STATUS_TRANSACTION_ERROR) {
- i2o_lan_handle_transaction_error(dev, msg);
- return -1;
- }
-
- /* Else have to handle it in the callback function */
-
- return 0;
-}
-
-/* Callback functions called from the interrupt routine:
-=======================================================*/
-
-/*
- * i2o_lan_send_post_reply(): Callback function to handle PacketSend replies.
- */
-static void i2o_lan_send_post_reply(struct i2o_handler *h,
- struct i2o_controller *iop, struct i2o_message *m)
-{
- u32 *msg = (u32 *)m;
- u8 unit = (u8)(msg[2]>>16); // InitiatorContext
- struct net_device *dev = i2o_landevs[unit];
- struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
- u8 trl_count = msg[3] & 0x000000FF;
-
- if ((msg[4] >> 24) != I2O_REPLY_STATUS_SUCCESS) {
- if (i2o_lan_handle_status(dev, msg))
- return;
- }
-
-#ifdef DRIVERDEBUG
- i2o_report_status(KERN_INFO, dev->name, msg);
-#endif
-
- /* DDM has handled transmit request(s), free sk_buffs.
- * We get similar single transaction reply also in error cases
- * (except if msg failure or transaction error).
- */
- while (trl_count) {
- dev_kfree_skb_irq((struct sk_buff *)msg[4 + trl_count]);
- dprintk(KERN_INFO "%s: tx skb freed (trl_count=%d).\n",
- dev->name, trl_count);
- atomic_dec(&priv->tx_out);
- trl_count--;
- }
-
- /* If priv->tx_out had reached tx_max_out, the queue was stopped */
-
- if (netif_queue_stopped(dev))
- netif_wake_queue(dev);
-}
-
-/*
- * i2o_lan_receive_post_reply(): Callback function to process incoming packets.
- */
-static void i2o_lan_receive_post_reply(struct i2o_handler *h,
- struct i2o_controller *iop, struct i2o_message *m)
-{
- u32 *msg = (u32 *)m;
- u8 unit = (u8)(msg[2]>>16); // InitiatorContext
- struct net_device *dev = i2o_landevs[unit];
-
- struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
- struct i2o_bucket_descriptor *bucket = (struct i2o_bucket_descriptor *)&msg[6];
- struct i2o_packet_info *packet;
- u8 trl_count = msg[3] & 0x000000FF;
- struct sk_buff *skb, *old_skb;
- unsigned long flags = 0;
-
- if ((msg[4] >> 24) != I2O_REPLY_STATUS_SUCCESS) {
- if (i2o_lan_handle_status(dev, msg))
- return;
-
- i2o_lan_release_buckets(dev, msg);
- return;
- }
-
-#ifdef DRIVERDEBUG
- i2o_report_status(KERN_INFO, dev->name, msg);
-#endif
-
- /* Else we are receiving incoming post. */
-
- while (trl_count--) {
- skb = (struct sk_buff *)bucket->context;
- packet = (struct i2o_packet_info *)bucket->packet_info;
- atomic_dec(&priv->buckets_out);
-
- /* Sanity checks: Any weird characteristics in bucket? */
-
- if (packet->flags & 0x0f || !(packet->flags & 0x40)) {
- if (packet->flags & 0x01)
- printk(KERN_WARNING "%s: packet with errors, error code=0x%02x.\n",
- dev->name, packet->status & 0xff);
-
- /* The following shouldn't happen, unless parameters in
- * LAN_OPERATION group are changed during the run time.
- */
- if (packet->flags & 0x0c)
- printk(KERN_DEBUG "%s: multi-bucket packets not supported!\n",
- dev->name);
-
- if (!(packet->flags & 0x40))
- printk(KERN_DEBUG "%s: multiple packets in a bucket not supported!\n",
- dev->name);
-
- dev_kfree_skb_irq(skb);
-
- bucket++;
- continue;
- }
-
- /* Copy short packet to a new skb */
-
- if (packet->len < priv->rx_copybreak) {
- old_skb = skb;
- skb = (struct sk_buff *)dev_alloc_skb(packet->len+2);
- if (skb == NULL) {
- printk(KERN_ERR "%s: Can't allocate skb.\n", dev->name);
- return;
- }
- skb_reserve(skb, 2);
- memcpy(skb_put(skb, packet->len), old_skb->data, packet->len);
-
- spin_lock_irqsave(&priv->fbl_lock, flags);
- if (priv->i2o_fbl_tail < I2O_LAN_MAX_BUCKETS_OUT)
- priv->i2o_fbl[++priv->i2o_fbl_tail] = old_skb;
- else
- dev_kfree_skb_irq(old_skb);
-
- spin_unlock_irqrestore(&priv->fbl_lock, flags);
- } else
- skb_put(skb, packet->len);
-
- /* Deliver to upper layers */
-
- skb->dev = dev;
- skb->protocol = priv->type_trans(skb, dev);
- netif_rx(skb);
-
- dev->last_rx = jiffies;
-
- dprintk(KERN_INFO "%s: Incoming packet (%d bytes) delivered "
- "to upper level.\n", dev->name, packet->len);
-
- bucket++; // to next Packet Descriptor Block
- }
-
-#ifdef DRIVERDEBUG
- if (msg[5] == 0)
- printk(KERN_INFO "%s: DDM out of buckets (priv->count = %d)!\n",
- dev->name, atomic_read(&priv->buckets_out));
-#endif
-
- /* If DDM has already consumed bucket_thresh buckets, post new ones */
-
- if (atomic_read(&priv->buckets_out) <= priv->max_buckets_out - priv->bucket_thresh) {
- run_i2o_post_buckets_task.data = (void *)dev;
- queue_task(&run_i2o_post_buckets_task, &tq_immediate);
- mark_bh(IMMEDIATE_BH);
- }
-
- return;
-}
-
-/*
- * i2o_lan_reply(): Callback function to handle incoming messages
- * other than PacketSend and ReceivePost replies.
- */
-static void i2o_lan_reply(struct i2o_handler *h, struct i2o_controller *iop,
- struct i2o_message *m)
-{
- u32 *msg = (u32 *)m;
- u8 unit = (u8)(msg[2]>>16); // InitiatorContext
- struct net_device *dev = i2o_landevs[unit];
-
- if ((msg[4] >> 24) != I2O_REPLY_STATUS_SUCCESS) {
- if (i2o_lan_handle_status(dev, msg))
- return;
-
- /* In other error cases just report and continue */
-
- i2o_report_status(KERN_INFO, dev->name, msg);
- }
-
-#ifdef DRIVERDEBUG
- i2o_report_status(KERN_INFO, dev->name, msg);
-#endif
- switch (msg[1] >> 24) {
- case LAN_RESET:
- case LAN_SUSPEND:
- /* default reply without payload */
- break;
-
- case I2O_CMD_UTIL_EVT_REGISTER:
- case I2O_CMD_UTIL_EVT_ACK:
- i2o_lan_handle_event(dev, msg);
- break;
-
- case I2O_CMD_UTIL_PARAMS_SET:
- /* default reply, results in ReplyPayload (not examined) */
- switch (msg[3] >> 16) {
- case 1: dprintk(KERN_INFO "%s: Reply to set MAC filter mask.\n",
- dev->name);
- break;
- case 2: dprintk(KERN_INFO "%s: Reply to set MAC table.\n",
- dev->name);
- break;
- default: printk(KERN_WARNING "%s: Bad group 0x%04X\n",
- dev->name,msg[3] >> 16);
- }
- break;
-
- default:
- printk(KERN_ERR "%s: No handler for the reply.\n",
- dev->name);
- i2o_report_status(KERN_INFO, dev->name, msg);
- }
-}
-
-/* Functions used by the above callback functions:
-=================================================*/
-/*
- * i2o_lan_release_buckets(): Free unused buckets (sk_buffs).
- */
-static void i2o_lan_release_buckets(struct net_device *dev, u32 *msg)
-{
- struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
- u8 trl_elem_size = (u8)(msg[3]>>8 & 0x000000FF);
- u8 trl_count = (u8)(msg[3] & 0x000000FF);
- u32 *pskb = &msg[6];
-
- while (trl_count--) {
- dprintk(KERN_DEBUG "%s: Releasing unused rx skb %p (trl_count=%d).\n",
- dev->name, (struct sk_buff*)(*pskb),trl_count+1);
- dev_kfree_skb_irq((struct sk_buff *)(*pskb));
- pskb += 1 + trl_elem_size;
- atomic_dec(&priv->buckets_out);
- }
-}
-
-/*
- * i2o_lan_handle_event(): Handle events.
- */
-static void i2o_lan_handle_event(struct net_device *dev, u32 *msg)
-{
- struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
- struct i2o_device *i2o_dev = priv->i2o_dev;
- struct i2o_controller *iop = i2o_dev->controller;
- u32 max_evt_data_size = iop->status_block->inbound_frame_size - 5;
- struct i2o_reply {
- u32 header[4];
- u32 evt_indicator;
- u32 data[max_evt_data_size];
- } *evt = (struct i2o_reply *)msg;
- int evt_data_len = ((msg[0]>>16) - 5) * 4; /* real size*/
-
- printk(KERN_INFO "%s: I2O event - ", dev->name);
-
- if (msg[1]>>24 == I2O_CMD_UTIL_EVT_ACK) {
- printk("Event acknowledgement reply.\n");
- return;
- }
-
- /* Else evt->function == I2O_CMD_UTIL_EVT_REGISTER) */
-
- switch (evt->evt_indicator) {
- case I2O_EVT_IND_STATE_CHANGE: {
- struct state_data {
- u16 status;
- u8 state;
- u8 data;
- } *evt_data = (struct state_data *)(evt->data[0]);
-
- printk("State change 0x%08x.\n", evt->data[0]);
-
- /* If the DDM is in error state, recovery may be
- * possible if status = Transmit or Receive Control
- * Unit Inoperable.
- */
- if (evt_data->state==0x05 && evt_data->status==0x0003)
- i2o_lan_reset(dev);
- break;
- }
-
- case I2O_EVT_IND_FIELD_MODIFIED: {
- u16 *work16 = (u16 *)evt->data;
- printk("Group 0x%04x, field %d changed.\n", work16[0], work16[1]);
- break;
- }
-
- case I2O_EVT_IND_VENDOR_EVT: {
- int i;
- printk("Vendor event:\n");
- for (i = 0; i < evt_data_len / 4; i++)
- printk(" 0x%08x\n", evt->data[i]);
- break;
- }
-
- case I2O_EVT_IND_DEVICE_RESET:
- /* Spec 2.0 p. 6-121:
- * The _DEVICE_RESET event must also be acknowledged
- */
- printk("Device reset.\n");
- if (i2o_event_ack(iop, msg) < 0)
- printk("%s: Event Acknowledge timeout.\n", dev->name);
- break;
-
-#if 0
- case I2O_EVT_IND_EVT_MASK_MODIFIED:
- printk("Event mask modified, 0x%08x.\n", evt->data[0]);
- break;
-
- case I2O_EVT_IND_GENERAL_WARNING:
- printk("General warning 0x%04x.\n", evt->data[0]);
- break;
-
- case I2O_EVT_IND_CONFIGURATION_FLAG:
- printk("Configuration requested.\n");
- break;
-
- case I2O_EVT_IND_CAPABILITY_CHANGE:
- printk("Capability change 0x%04x.\n", evt->data[0]);
- break;
-
- case I2O_EVT_IND_DEVICE_STATE:
- printk("Device state changed 0x%08x.\n", evt->data[0]);
- break;
-#endif
- case I2O_LAN_EVT_LINK_DOWN:
- netif_carrier_off(dev);
- printk("Link to the physical device is lost.\n");
- break;
-
- case I2O_LAN_EVT_LINK_UP:
- netif_carrier_on(dev);
- printk("Link to the physical device is (re)established.\n");
- break;
-
- case I2O_LAN_EVT_MEDIA_CHANGE:
- printk("Media change.\n");
- break;
- default:
- printk("0x%08x. No handler.\n", evt->evt_indicator);
- }
-}
-
-/*
- * i2o_lan_receive_post(): Post buckets to receive packets.
- */
-static int i2o_lan_receive_post(struct net_device *dev)
-{
- struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
- struct i2o_device *i2o_dev = priv->i2o_dev;
- struct i2o_controller *iop = i2o_dev->controller;
- struct sk_buff *skb;
- u32 m, *msg;
- u32 bucket_len = (dev->mtu + dev->hard_header_len);
- u32 total = priv->max_buckets_out - atomic_read(&priv->buckets_out);
- u32 bucket_count;
- u32 *sgl_elem;
- unsigned long flags;
-
- /* Send (total/bucket_count) separate I2O requests */
-
- while (total) {
- m = I2O_POST_READ32(iop);
- if (m == 0xFFFFFFFF)
- return -ETIMEDOUT;
- msg = (u32 *)(iop->mem_offset + m);
-
- bucket_count = (total >= priv->sgl_max) ? priv->sgl_max : total;
- total -= bucket_count;
- atomic_add(bucket_count, &priv->buckets_out);
-
- dprintk(KERN_INFO "%s: Sending %d buckets (size %d) to LAN DDM.\n",
- dev->name, bucket_count, bucket_len);
-
- /* Fill in the header */
-
- __raw_writel(I2O_MESSAGE_SIZE(4 + 3 * bucket_count) | SGL_OFFSET_4, msg);
- __raw_writel(LAN_RECEIVE_POST<<24 | HOST_TID<<12 | i2o_dev->lct_data.tid, msg+1);
- __raw_writel(priv->unit << 16 | lan_receive_context, msg+2);
- __raw_writel(bucket_count, msg+3);
- sgl_elem = &msg[4];
-
- /* Fill in the payload - contains bucket_count SGL elements */
-
- while (bucket_count--) {
- spin_lock_irqsave(&priv->fbl_lock, flags);
- if (priv->i2o_fbl_tail >= 0)
- skb = priv->i2o_fbl[priv->i2o_fbl_tail--];
- else {
- skb = dev_alloc_skb(bucket_len + 2);
- if (skb == NULL) {
- spin_unlock_irqrestore(&priv->fbl_lock, flags);
- return -ENOMEM;
- }
- skb_reserve(skb, 2);
- }
- spin_unlock_irqrestore(&priv->fbl_lock, flags);
-
- __raw_writel(0x51000000 | bucket_len, sgl_elem);
- __raw_writel((u32)skb, sgl_elem+1);
- __raw_writel(virt_to_bus(skb->data), sgl_elem+2);
- sgl_elem += 3;
- }
-
- /* set LE flag and post */
- __raw_writel(__raw_readl(sgl_elem-3) | 0x80000000, (sgl_elem-3));
- i2o_post_message(iop, m);
- }
-
- return 0;
-}
-
-/* Functions called from the network stack, and functions called by them:
-========================================================================*/
-
-/*
- * i2o_lan_reset(): Reset the LAN adapter and restore it to full operation.
- */
-static int i2o_lan_reset(struct net_device *dev)
-{
- struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
- struct i2o_device *i2o_dev = priv->i2o_dev;
- struct i2o_controller *iop = i2o_dev->controller;
- u32 msg[5];
-
- dprintk(KERN_INFO "%s: LAN RESET MESSAGE.\n", dev->name);
- msg[0] = FIVE_WORD_MSG_SIZE | SGL_OFFSET_0;
- msg[1] = LAN_RESET<<24 | HOST_TID<<12 | i2o_dev->lct_data.tid;
- msg[2] = priv->unit << 16 | lan_context; // InitiatorContext
- msg[3] = 0; // TransactionContext
- msg[4] = 0; // Keep posted buckets
-
- if (i2o_post_this(iop, msg, sizeof(msg)) < 0)
- return -ETIMEDOUT;
-
- return 0;
-}
-
-/*
- * i2o_lan_suspend(): Put LAN adapter into a safe, non-active state.
- * IOP replies to any LAN class message with status error_no_data_transfer
- * / suspended.
- */
-static int i2o_lan_suspend(struct net_device *dev)
-{
- struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
- struct i2o_device *i2o_dev = priv->i2o_dev;
- struct i2o_controller *iop = i2o_dev->controller;
- u32 msg[5];
-
- dprintk(KERN_INFO "%s: LAN SUSPEND MESSAGE.\n", dev->name);
- msg[0] = FIVE_WORD_MSG_SIZE | SGL_OFFSET_0;
- msg[1] = LAN_SUSPEND<<24 | HOST_TID<<12 | i2o_dev->lct_data.tid;
- msg[2] = priv->unit << 16 | lan_context; // InitiatorContext
- msg[3] = 0; // TransactionContext
- msg[4] = 1 << 16; // return posted buckets
-
- if (i2o_post_this(iop, msg, sizeof(msg)) < 0)
- return -ETIMEDOUT;
-
- return 0;
-}
-
-/*
- * i2o_set_ddm_parameters:
- * These settings ensure proper initial values for the DDM.
- * They can be changed via the proc file system or via a configuration utility.
- */
-static void i2o_set_ddm_parameters(struct net_device *dev)
-{
- struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
- struct i2o_device *i2o_dev = priv->i2o_dev;
- struct i2o_controller *iop = i2o_dev->controller;
- u32 val;
-
- /*
- * When PacketOrphanlimit is set to the maximum packet length,
- * the packets will never be split into two separate buckets
- */
- val = dev->mtu + dev->hard_header_len;
- if (i2o_set_scalar(iop, i2o_dev->lct_data.tid, 0x0004, 2, &val, sizeof(val)) < 0)
- printk(KERN_WARNING "%s: Unable to set PacketOrphanLimit.\n",
- dev->name);
- else
- dprintk(KERN_INFO "%s: PacketOrphanLimit set to %d.\n",
- dev->name, val);
-
- /* When RxMaxPacketsBucket = 1, DDM puts only one packet into bucket */
-
- val = 1;
- if (i2o_set_scalar(iop, i2o_dev->lct_data.tid, 0x0008, 4, &val, sizeof(val)) < 0)
- printk(KERN_WARNING "%s: Unable to set RxMaxPacketsBucket.\n",
- dev->name);
- else
- dprintk(KERN_INFO "%s: RxMaxPacketsBucket set to %d.\n",
- dev->name, val);
- return;
-}
-
-/* Functions called from the network stack:
-==========================================*/
-
-/*
- * i2o_lan_open(): Open the device to send and receive packets via
- * the network stack.
- */
-static int i2o_lan_open(struct net_device *dev)
-{
- struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
- struct i2o_device *i2o_dev = priv->i2o_dev;
- struct i2o_controller *iop = i2o_dev->controller;
- u32 mc_addr_group[64];
-
- MOD_INC_USE_COUNT;
-
- if (i2o_claim_device(i2o_dev, &i2o_lan_handler)) {
- printk(KERN_WARNING "%s: Unable to claim the I2O LAN device.\n", dev->name);
- MOD_DEC_USE_COUNT;
- return -EAGAIN;
- }
- dprintk(KERN_INFO "%s: I2O LAN device (tid=%d) claimed by LAN OSM.\n",
- dev->name, i2o_dev->lct_data.tid);
-
- if (i2o_event_register(iop, i2o_dev->lct_data.tid,
- priv->unit << 16 | lan_context, 0, priv->i2o_event_mask) < 0)
- printk(KERN_WARNING "%s: Unable to set the event mask.\n", dev->name);
-
- i2o_lan_reset(dev);
-
- /* Get the max number of multicast addresses */
-
- if (i2o_query_scalar(iop, i2o_dev->lct_data.tid, 0x0001, -1,
- &mc_addr_group, sizeof(mc_addr_group)) < 0 ) {
- printk(KERN_WARNING "%s: Unable to query LAN_MAC_ADDRESS group.\n", dev->name);
- MOD_DEC_USE_COUNT;
- return -EAGAIN;
- }
- priv->max_size_mc_table = mc_addr_group[8];
-
- /* Allocate the free bucket list used to reuse receive post buckets */
-
- priv->i2o_fbl = kmalloc(priv->max_buckets_out * sizeof(struct sk_buff *),
- GFP_KERNEL);
- if (priv->i2o_fbl == NULL) {
- MOD_DEC_USE_COUNT;
- return -ENOMEM;
- }
- priv->i2o_fbl_tail = -1;
- priv->send_active = 0;
-
- i2o_set_ddm_parameters(dev);
- i2o_lan_receive_post(dev);
-
- netif_start_queue(dev);
-
- return 0;
-}
-
-/*
- * i2o_lan_close(): Close the device and stop all transfers.
- */
-static int i2o_lan_close(struct net_device *dev)
-{
- struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
- struct i2o_device *i2o_dev = priv->i2o_dev;
- struct i2o_controller *iop = i2o_dev->controller;
- int ret = 0;
-
- netif_stop_queue(dev);
- i2o_lan_suspend(dev);
-
- if (i2o_event_register(iop, i2o_dev->lct_data.tid,
- priv->unit << 16 | lan_context, 0, 0) < 0)
- printk(KERN_WARNING "%s: Unable to clear the event mask.\n",
- dev->name);
-
- while (priv->i2o_fbl_tail >= 0)
- dev_kfree_skb(priv->i2o_fbl[priv->i2o_fbl_tail--]);
-
- kfree(priv->i2o_fbl);
-
- if (i2o_release_device(i2o_dev, &i2o_lan_handler)) {
- printk(KERN_WARNING "%s: Unable to unclaim I2O LAN device "
- "(tid=%d).\n", dev->name, i2o_dev->lct_data.tid);
- ret = -EBUSY;
- }
-
- MOD_DEC_USE_COUNT;
-
- return ret;
-}
-
-/*
- * i2o_lan_tx_timeout(): Tx timeout handler.
- */
-static void i2o_lan_tx_timeout(struct net_device *dev)
-{
- if (!netif_queue_stopped(dev))
- netif_start_queue(dev);
-}
-
-/*
- * i2o_lan_batch_send(): Send packets in batch.
- * Both i2o_lan_sdu_send and i2o_lan_packet_send use this.
- */
-static void i2o_lan_batch_send(struct net_device *dev)
-{
- struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
- struct i2o_controller *iop = priv->i2o_dev->controller;
-
- spin_lock_irq(&priv->tx_lock);
- if (priv->tx_count != 0) {
- dev->trans_start = jiffies;
- i2o_post_message(iop, priv->m);
- dprintk(KERN_DEBUG "%s: %d packets sent.\n", dev->name, priv->tx_count);
- priv->tx_count = 0;
- }
- priv->send_active = 0;
- spin_unlock_irq(&priv->tx_lock);
- MOD_DEC_USE_COUNT;
-}
-
-#ifdef CONFIG_NET_FC
-/*
- * i2o_lan_sdu_send(): Send a packet, MAC header added by the DDM.
- * Must be supported by Fibre Channel; optional for Ethernet/802.3,
- * Token Ring and FDDI.
- */
-static int i2o_lan_sdu_send(struct sk_buff *skb, struct net_device *dev)
-{
- struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
- struct i2o_device *i2o_dev = priv->i2o_dev;
- struct i2o_controller *iop = i2o_dev->controller;
- int tickssofar = jiffies - dev->trans_start;
- u32 m, *msg;
- u32 *sgl_elem;
-
- spin_lock_irq(&priv->tx_lock);
-
- priv->tx_count++;
- atomic_inc(&priv->tx_out);
-
- /*
- * If tx_batch_mode = 0x00 forced to immediate mode
- * If tx_batch_mode = 0x01 forced to batch mode
- * If tx_batch_mode = 0x02 switch automatically, current mode immediate
- * If tx_batch_mode = 0x03 switch automatically, current mode batch
- * If gap between two packets is > 0 ticks, switch to immediate
- */
- if (priv->tx_batch_mode >> 1) // switch automatically
- priv->tx_batch_mode = tickssofar ? 0x02 : 0x03;
-
- if (priv->tx_count == 1) {
- m = I2O_POST_READ32(iop);
- if (m == 0xFFFFFFFF) {
- spin_unlock_irq(&priv->tx_lock);
- return 1;
- }
- msg = (u32 *)(iop->mem_offset + m);
- priv->m = m;
-
- __raw_writel(NINE_WORD_MSG_SIZE | 1<<12 | SGL_OFFSET_4, msg);
- __raw_writel(LAN_PACKET_SEND<<24 | HOST_TID<<12 | i2o_dev->lct_data.tid, msg+1);
- __raw_writel(priv->unit << 16 | lan_send_context, msg+2); // InitiatorContext
- __raw_writel(1 << 30 | 1 << 3, msg+3); // TransmitControlWord
-
- __raw_writel(0xD7000000 | skb->len, msg+4); // MAC hdr included
- __raw_writel((u32)skb, msg+5); // TransactionContext
- __raw_writel(virt_to_bus(skb->data), msg+6);
- __raw_writel((u32)skb->mac.raw, msg+7);
- __raw_writel((u32)skb->mac.raw+4, msg+8);
-
- if ((priv->tx_batch_mode & 0x01) && !priv->send_active) {
- priv->send_active = 1;
- MOD_INC_USE_COUNT;
- if (schedule_task(&priv->i2o_batch_send_task) == 0)
- MOD_DEC_USE_COUNT;
- }
- } else { /* Add new SGL element to the previous message frame */
-
- msg = (u32 *)(iop->mem_offset + priv->m);
- sgl_elem = &msg[priv->tx_count * 5 + 1];
-
- __raw_writel(I2O_MESSAGE_SIZE((__raw_readl(msg)>>16) + 5) | 1<<12 | SGL_OFFSET_4, msg);
- __raw_writel(__raw_readl(sgl_elem-5) & 0x7FFFFFFF, sgl_elem-5); /* clear LE flag */
- __raw_writel(0xD5000000 | skb->len, sgl_elem);
- __raw_writel((u32)skb, sgl_elem+1);
- __raw_writel(virt_to_bus(skb->data), sgl_elem+2);
- __raw_writel((u32)(skb->mac.raw), sgl_elem+3);
- __raw_writel((u32)(skb->mac.raw)+1, sgl_elem+4);
- }
-
- /* If tx is in immediate mode or the frame is full, send immediately */
-
- if (!(priv->tx_batch_mode & 0x01) || priv->tx_count == priv->sgl_max) {
- dev->trans_start = jiffies;
- i2o_post_message(iop, priv->m);
- dprintk(KERN_DEBUG "%s: %d packets sent.\n", dev->name, priv->tx_count);
- priv->tx_count = 0;
- }
-
- /* If the DDM's TxMaxPktOut is reached, stop the queueing layer from sending more */
-
- if (atomic_read(&priv->tx_out) >= priv->tx_max_out)
- netif_stop_queue(dev);
-
- spin_unlock_irq(&priv->tx_lock);
- return 0;
-}
-#endif /* CONFIG_NET_FC */
-
-/*
- * i2o_lan_packet_send(): Send a packet as is, including the MAC header.
- *
- * Must be supported by Ethernet/802.3, Token Ring, FDDI, optional for
- * Fibre Channel
- */
-static int i2o_lan_packet_send(struct sk_buff *skb, struct net_device *dev)
-{
- struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
- struct i2o_device *i2o_dev = priv->i2o_dev;
- struct i2o_controller *iop = i2o_dev->controller;
- int tickssofar = jiffies - dev->trans_start;
- u32 m, *msg;
- u32 *sgl_elem;
-
- spin_lock_irq(&priv->tx_lock);
-
- priv->tx_count++;
- atomic_inc(&priv->tx_out);
-
- /*
- * If tx_batch_mode = 0x00 forced to immediate mode
- * If tx_batch_mode = 0x01 forced to batch mode
- * If tx_batch_mode = 0x02 switch automatically, current mode immediate
- * If tx_batch_mode = 0x03 switch automatically, current mode batch
- * If gap between two packets is > 0 ticks, switch to immediate
- */
- if (priv->tx_batch_mode >> 1) // switch automatically
- priv->tx_batch_mode = tickssofar ? 0x02 : 0x03;
-
- if (priv->tx_count == 1) {
- m = I2O_POST_READ32(iop);
- if (m == 0xFFFFFFFF) {
- spin_unlock_irq(&priv->tx_lock);
- return 1;
- }
- msg = (u32 *)(iop->mem_offset + m);
- priv->m = m;
-
- __raw_writel(SEVEN_WORD_MSG_SIZE | 1<<12 | SGL_OFFSET_4, msg);
- __raw_writel(LAN_PACKET_SEND<<24 | HOST_TID<<12 | i2o_dev->lct_data.tid, msg+1);
- __raw_writel(priv->unit << 16 | lan_send_context, msg+2); // InitiatorContext
- __raw_writel(1 << 30 | 1 << 3, msg+3); // TransmitControlWord
- // bit 30: reply as soon as transmission attempt is complete
- // bit 3: Suppress CRC generation
- __raw_writel(0xD5000000 | skb->len, msg+4); // MAC hdr included
- __raw_writel((u32)skb, msg+5); // TransactionContext
- __raw_writel(virt_to_bus(skb->data), msg+6);
-
- if ((priv->tx_batch_mode & 0x01) && !priv->send_active) {
- priv->send_active = 1;
- MOD_INC_USE_COUNT;
- if (schedule_task(&priv->i2o_batch_send_task) == 0)
- MOD_DEC_USE_COUNT;
- }
- } else { /* Add new SGL element to the previous message frame */
-
- msg = (u32 *)(iop->mem_offset + priv->m);
- sgl_elem = &msg[priv->tx_count * 3 + 1];
-
- __raw_writel(I2O_MESSAGE_SIZE((__raw_readl(msg)>>16) + 3) | 1<<12 | SGL_OFFSET_4, msg);
- __raw_writel(__raw_readl(sgl_elem-3) & 0x7FFFFFFF, sgl_elem-3); /* clear LE flag */
- __raw_writel(0xD5000000 | skb->len, sgl_elem);
- __raw_writel((u32)skb, sgl_elem+1);
- __raw_writel(virt_to_bus(skb->data), sgl_elem+2);
- }
-
- /* If tx is in immediate mode or frame is full, send now */
-
- if (!(priv->tx_batch_mode & 0x01) || priv->tx_count == priv->sgl_max) {
- dev->trans_start = jiffies;
- i2o_post_message(iop, priv->m);
- dprintk(KERN_DEBUG "%s: %d packets sent.\n", dev->name, priv->tx_count);
- priv->tx_count = 0;
- }
-
- /* If the DDM's TxMaxPktOut is reached, stop the queueing layer from sending more */
-
- if (atomic_read(&priv->tx_out) >= priv->tx_max_out)
- netif_stop_queue(dev);
-
- spin_unlock_irq(&priv->tx_lock);
- return 0;
-}
-
-/*
- * i2o_lan_get_stats(): Fill in the statistics.
- */
-static struct net_device_stats *i2o_lan_get_stats(struct net_device *dev)
-{
- struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
- struct i2o_device *i2o_dev = priv->i2o_dev;
- struct i2o_controller *iop = i2o_dev->controller;
- u64 val64[16];
- u64 supported_group[4] = { 0, 0, 0, 0 };
-
- if (i2o_query_scalar(iop, i2o_dev->lct_data.tid, 0x0100, -1, val64,
- sizeof(val64)) < 0)
- printk(KERN_INFO "%s: Unable to query LAN_HISTORICAL_STATS.\n", dev->name);
- else {
- dprintk(KERN_DEBUG "%s: LAN_HISTORICAL_STATS queried.\n", dev->name);
- priv->stats.tx_packets = val64[0];
- priv->stats.tx_bytes = val64[1];
- priv->stats.rx_packets = val64[2];
- priv->stats.rx_bytes = val64[3];
- priv->stats.tx_errors = val64[4];
- priv->stats.rx_errors = val64[5];
- priv->stats.rx_dropped = val64[6];
- }
-
- if (i2o_query_scalar(iop, i2o_dev->lct_data.tid, 0x0180, -1,
- &supported_group, sizeof(supported_group)) < 0)
- printk(KERN_INFO "%s: Unable to query LAN_SUPPORTED_OPTIONAL_HISTORICAL_STATS.\n", dev->name);
-
- if (supported_group[2]) {
- if (i2o_query_scalar(iop, i2o_dev->lct_data.tid, 0x0183, -1,
- val64, sizeof(val64)) < 0)
- printk(KERN_INFO "%s: Unable to query LAN_OPTIONAL_RX_HISTORICAL_STATS.\n", dev->name);
- else {
- dprintk(KERN_DEBUG "%s: LAN_OPTIONAL_RX_HISTORICAL_STATS queried.\n", dev->name);
- priv->stats.multicast = val64[4];
- priv->stats.rx_length_errors = val64[10];
- priv->stats.rx_crc_errors = val64[0];
- }
- }
-
- if (i2o_dev->lct_data.sub_class == I2O_LAN_ETHERNET) {
- u64 supported_stats = 0;
- if (i2o_query_scalar(iop, i2o_dev->lct_data.tid, 0x0200, -1,
- val64, sizeof(val64)) < 0)
- printk(KERN_INFO "%s: Unable to query LAN_802_3_HISTORICAL_STATS.\n", dev->name);
- else {
- dprintk(KERN_DEBUG "%s: LAN_802_3_HISTORICAL_STATS queried.\n", dev->name);
- priv->stats.transmit_collision = val64[1] + val64[2];
- priv->stats.rx_frame_errors = val64[0];
- priv->stats.tx_carrier_errors = val64[6];
- }
-
- if (i2o_query_scalar(iop, i2o_dev->lct_data.tid, 0x0280, -1,
- &supported_stats, sizeof(supported_stats)) < 0)
- printk(KERN_INFO "%s: Unable to query LAN_SUPPORTED_802_3_HISTORICAL_STATS.\n", dev->name);
-
- if (supported_stats != 0) {
- if (i2o_query_scalar(iop, i2o_dev->lct_data.tid, 0x0281, -1,
- val64, sizeof(val64)) < 0)
- printk(KERN_INFO "%s: Unable to query LAN_OPTIONAL_802_3_HISTORICAL_STATS.\n", dev->name);
- else {
- dprintk(KERN_DEBUG "%s: LAN_OPTIONAL_802_3_HISTORICAL_STATS queried.\n", dev->name);
- if (supported_stats & 0x1)
- priv->stats.rx_over_errors = val64[0];
- if (supported_stats & 0x4)
- priv->stats.tx_heartbeat_errors = val64[2];
- }
- }
- }
-
-#ifdef CONFIG_TR
- if (i2o_dev->lct_data.sub_class == I2O_LAN_TR) {
- if (i2o_query_scalar(iop, i2o_dev->lct_data.tid, 0x0300, -1,
- val64, sizeof(val64)) < 0)
- printk(KERN_INFO "%s: Unable to query LAN_802_5_HISTORICAL_STATS.\n", dev->name);
- else {
- struct tr_statistics *stats =
- (struct tr_statistics *)&priv->stats;
- dprintk(KERN_DEBUG "%s: LAN_802_5_HISTORICAL_STATS queried.\n", dev->name);
-
- stats->line_errors = val64[0];
- stats->internal_errors = val64[7];
- stats->burst_errors = val64[4];
- stats->A_C_errors = val64[2];
- stats->abort_delimiters = val64[3];
- stats->lost_frames = val64[1];
- /* stats->recv_congest_count = ?; FIXME ??*/
- stats->frame_copied_errors = val64[5];
- stats->frequency_errors = val64[6];
- stats->token_errors = val64[9];
- }
- /* Token Ring optional stats not yet defined */
- }
-#endif
-
-#ifdef CONFIG_FDDI
- if (i2o_dev->lct_data.sub_class == I2O_LAN_FDDI) {
- if (i2o_query_scalar(iop, i2o_dev->lct_data.tid, 0x0400, -1,
- val64, sizeof(val64)) < 0)
- printk(KERN_INFO "%s: Unable to query LAN_FDDI_HISTORICAL_STATS.\n", dev->name);
- else {
- dprintk(KERN_DEBUG "%s: LAN_FDDI_HISTORICAL_STATS queried.\n", dev->name);
- priv->stats.smt_cf_state = val64[0];
- memcpy(priv->stats.mac_upstream_nbr, &val64[1], FDDI_K_ALEN);
- memcpy(priv->stats.mac_downstream_nbr, &val64[2], FDDI_K_ALEN);
- priv->stats.mac_error_cts = val64[3];
- priv->stats.mac_lost_cts = val64[4];
- priv->stats.mac_rmt_state = val64[5];
- memcpy(priv->stats.port_lct_fail_cts, &val64[6], 8);
- memcpy(priv->stats.port_lem_reject_cts, &val64[7], 8);
- memcpy(priv->stats.port_lem_cts, &val64[8], 8);
- memcpy(priv->stats.port_pcm_state, &val64[9], 8);
- }
- /* FDDI optional stats not yet defined */
- }
-#endif
-
-#ifdef CONFIG_NET_FC
- /* Fibre Channel Statistics not yet defined in 1.53 nor 2.0 */
-#endif
-
- return (struct net_device_stats *)&priv->stats;
-}
-
-/*
- * i2o_lan_set_mc_filter(): Post a request to set multicast filter.
- */
-int i2o_lan_set_mc_filter(struct net_device *dev, u32 filter_mask)
-{
- struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
- struct i2o_device *i2o_dev = priv->i2o_dev;
- struct i2o_controller *iop = i2o_dev->controller;
- u32 msg[10];
-
- msg[0] = TEN_WORD_MSG_SIZE | SGL_OFFSET_5;
- msg[1] = I2O_CMD_UTIL_PARAMS_SET << 24 | HOST_TID << 12 | i2o_dev->lct_data.tid;
- msg[2] = priv->unit << 16 | lan_context;
- msg[3] = 0x0001 << 16 | 3 ; // TransactionContext: group&field
- msg[4] = 0;
- msg[5] = 0xCC000000 | 16; // Immediate data SGL
- msg[6] = 1; // OperationCount
- msg[7] = 0x0001<<16 | I2O_PARAMS_FIELD_SET; // Group, Operation
- msg[8] = 3 << 16 | 1; // FieldIndex, FieldCount
- msg[9] = filter_mask; // Value
-
- return i2o_post_this(iop, msg, sizeof(msg));
-}
-
-/*
- * i2o_lan_set_mc_table(): Post a request to set LAN_MULTICAST_MAC_ADDRESS table.
- */
-int i2o_lan_set_mc_table(struct net_device *dev)
-{
- struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
- struct i2o_device *i2o_dev = priv->i2o_dev;
- struct i2o_controller *iop = i2o_dev->controller;
- struct dev_mc_list *mc;
- u32 msg[10 + 2 * dev->mc_count];
- u8 *work8 = (u8 *)(msg + 10);
-
- msg[0] = I2O_MESSAGE_SIZE(10 + 2 * dev->mc_count) | SGL_OFFSET_5;
- msg[1] = I2O_CMD_UTIL_PARAMS_SET << 24 | HOST_TID << 12 | i2o_dev->lct_data.tid;
- msg[2] = priv->unit << 16 | lan_context; // InitiatorContext
- msg[3] = 0x0002 << 16 | (u16)-1; // TransactionContext
- msg[4] = 0; // OperationFlags
- msg[5] = 0xCC000000 | (16 + 8 * dev->mc_count); // Immediate data SGL
- msg[6] = 2; // OperationCount
- msg[7] = 0x0002 << 16 | I2O_PARAMS_TABLE_CLEAR; // Group, Operation
- msg[8] = 0x0002 << 16 | I2O_PARAMS_ROW_ADD; // Group, Operation
- msg[9] = dev->mc_count << 16 | (u16)-1; // RowCount, FieldCount
-
- for (mc = dev->mc_list; mc ; mc = mc->next, work8 += 8) {
- memset(work8, 0, 8);
- memcpy(work8, mc->dmi_addr, mc->dmi_addrlen); // Values
- }
-
- return i2o_post_this(iop, msg, sizeof(msg));
-}
-
-/*
- * i2o_lan_set_multicast_list(): Enable a network device to receive packets
- * not sent to the protocol address.
- */
-static void i2o_lan_set_multicast_list(struct net_device *dev)
-{
- struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
- u32 filter_mask;
-
- if (dev->flags & IFF_PROMISC) {
- filter_mask = 0x00000002;
- dprintk(KERN_INFO "%s: Enabling promiscuous mode...\n", dev->name);
- } else if ((dev->flags & IFF_ALLMULTI) || dev->mc_count > priv->max_size_mc_table) {
- filter_mask = 0x00000004;
- dprintk(KERN_INFO "%s: Enabling all multicast mode...\n", dev->name);
- } else if (dev->mc_count) {
- filter_mask = 0x00000000;
- dprintk(KERN_INFO "%s: Enabling multicast mode...\n", dev->name);
- if (i2o_lan_set_mc_table(dev) < 0)
- printk(KERN_WARNING "%s: Unable to send MAC table.\n", dev->name);
- } else {
- filter_mask = 0x00000300; // Broadcast, Multicast disabled
- dprintk(KERN_INFO "%s: Enabling unicast mode...\n", dev->name);
- }
-
- /* Finally copy new FilterMask to DDM */
-
- if (i2o_lan_set_mc_filter(dev, filter_mask) < 0)
- printk(KERN_WARNING "%s: Unable to send MAC FilterMask.\n", dev->name);
-}
-
-/*
- * i2o_lan_change_mtu(): Change maximum transfer unit size.
- */
-static int i2o_lan_change_mtu(struct net_device *dev, int new_mtu)
-{
- struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
- struct i2o_device *i2o_dev = priv->i2o_dev;
- u32 max_pkt_size;
-
- if (i2o_query_scalar(i2o_dev->controller, i2o_dev->lct_data.tid,
- 0x0000, 6, &max_pkt_size, 4) < 0)
- return -EFAULT;
-
- if (new_mtu < 68 || new_mtu > 9000 || new_mtu > max_pkt_size)
- return -EINVAL;
-
- dev->mtu = new_mtu;
-
- i2o_lan_suspend(dev); // to SUSPENDED state, return buckets
-
- while (priv->i2o_fbl_tail >= 0) // free buffered buckets
- dev_kfree_skb(priv->i2o_fbl[priv->i2o_fbl_tail--]);
-
- i2o_lan_reset(dev); // to OPERATIONAL state
- i2o_set_ddm_parameters(dev); // reset some parameters
- i2o_lan_receive_post(dev); // post new buckets (new size)
-
- return 0;
-}
-
-/* Functions to initialize I2O LAN OSM:
-======================================*/
-
-/*
- * i2o_lan_register_device(): Register LAN class device to kernel.
- */
-struct net_device *i2o_lan_register_device(struct i2o_device *i2o_dev)
-{
- struct net_device *dev = NULL;
- struct i2o_lan_local *priv = NULL;
- u8 hw_addr[8];
- u32 tx_max_out = 0;
- unsigned short (*type_trans)(struct sk_buff *, struct net_device *);
- void (*unregister_dev)(struct net_device *dev);
-
- switch (i2o_dev->lct_data.sub_class) {
- case I2O_LAN_ETHERNET:
- dev = init_etherdev(NULL, sizeof(struct i2o_lan_local));
- if (dev == NULL)
- return NULL;
- type_trans = eth_type_trans;
- unregister_dev = unregister_netdev;
- break;
-
-#ifdef CONFIG_ANYLAN
- case I2O_LAN_100VG:
- printk(KERN_ERR "i2o_lan: 100base VG not yet supported.\n");
- return NULL;
- break;
-#endif
-
-#ifdef CONFIG_TR
- case I2O_LAN_TR:
- dev = init_trdev(NULL, sizeof(struct i2o_lan_local));
- if (dev==NULL)
- return NULL;
- type_trans = tr_type_trans;
- unregister_dev = unregister_trdev;
- break;
-#endif
-
-#ifdef CONFIG_FDDI
- case I2O_LAN_FDDI:
- {
- int size = sizeof(struct net_device) + sizeof(struct i2o_lan_local);
-
- dev = (struct net_device *) kmalloc(size, GFP_KERNEL);
- if (dev == NULL)
- return NULL;
- memset((char *)dev, 0, size);
- dev->priv = (void *)(dev + 1);
-
- if (dev_alloc_name(dev, "fddi%d") < 0) {
- printk(KERN_WARNING "i2o_lan: Too many FDDI devices.\n");
- kfree(dev);
- return NULL;
- }
- type_trans = fddi_type_trans;
- unregister_dev = (void *)unregister_netdevice;
-
- fddi_setup(dev);
- register_netdev(dev);
- }
- break;
-#endif
-
-#ifdef CONFIG_NET_FC
- case I2O_LAN_FIBRE_CHANNEL:
- dev = init_fcdev(NULL, sizeof(struct i2o_lan_local));
- if (dev == NULL)
- return NULL;
- type_trans = NULL;
-/* FIXME: Move fc_type_trans() from drivers/net/fc/iph5526.c to net/802/fc.c
- * and export it in include/linux/fcdevice.h
- * type_trans = fc_type_trans;
- */
- unregister_dev = (void *)unregister_fcdev;
- break;
-#endif
-
- case I2O_LAN_UNKNOWN:
- default:
- printk(KERN_ERR "i2o_lan: LAN type 0x%04x not supported.\n",
- i2o_dev->lct_data.sub_class);
- return NULL;
- }
-
- priv = (struct i2o_lan_local *)dev->priv;
- priv->i2o_dev = i2o_dev;
- priv->type_trans = type_trans;
- priv->sgl_max = (i2o_dev->controller->status_block->inbound_frame_size - 4) / 3;
- atomic_set(&priv->buckets_out, 0);
-
- /* Set default values for user configurable parameters */
- /* Private values are changed via /proc file system */
-
- priv->max_buckets_out = max_buckets_out;
- priv->bucket_thresh = bucket_thresh;
- priv->rx_copybreak = rx_copybreak;
- priv->tx_batch_mode = tx_batch_mode & 0x03;
- priv->i2o_event_mask = i2o_event_mask;
-
- priv->tx_lock = SPIN_LOCK_UNLOCKED;
- priv->fbl_lock = SPIN_LOCK_UNLOCKED;
-
- unit++;
- i2o_landevs[unit] = dev;
- priv->unit = unit;
-
- if (i2o_query_scalar(i2o_dev->controller, i2o_dev->lct_data.tid,
- 0x0001, 0, &hw_addr, sizeof(hw_addr)) < 0) {
- printk(KERN_ERR "%s: Unable to query hardware address.\n", dev->name);
- unit--;
- unregister_dev(dev);
- kfree(dev);
- return NULL;
- }
- dprintk(KERN_DEBUG "%s: hwaddr = %02X:%02X:%02X:%02X:%02X:%02X\n",
- dev->name, hw_addr[0], hw_addr[1], hw_addr[2], hw_addr[3],
- hw_addr[4], hw_addr[5]);
-
- dev->addr_len = 6;
- memcpy(dev->dev_addr, hw_addr, 6);
-
- if (i2o_query_scalar(i2o_dev->controller, i2o_dev->lct_data.tid,
- 0x0007, 2, &tx_max_out, sizeof(tx_max_out)) < 0) {
- printk(KERN_ERR "%s: Unable to query max TX queue.\n", dev->name);
- unit--;
- unregister_dev(dev);
- kfree(dev);
- return NULL;
- }
- dprintk(KERN_INFO "%s: Max TX Outstanding = %d.\n", dev->name, tx_max_out);
- priv->tx_max_out = tx_max_out;
- atomic_set(&priv->tx_out, 0);
- priv->tx_count = 0;
-
- INIT_LIST_HEAD(&priv->i2o_batch_send_task.list);
- priv->i2o_batch_send_task.sync = 0;
- priv->i2o_batch_send_task.routine = (void *)i2o_lan_batch_send;
- priv->i2o_batch_send_task.data = (void *)dev;
-
- dev->open = i2o_lan_open;
- dev->stop = i2o_lan_close;
- dev->get_stats = i2o_lan_get_stats;
- dev->set_multicast_list = i2o_lan_set_multicast_list;
- dev->tx_timeout = i2o_lan_tx_timeout;
- dev->watchdog_timeo = I2O_LAN_TX_TIMEOUT;
-
-#ifdef CONFIG_NET_FC
- if (i2o_dev->lct_data.sub_class == I2O_LAN_FIBRE_CHANNEL)
- dev->hard_start_xmit = i2o_lan_sdu_send;
- else
-#endif
- dev->hard_start_xmit = i2o_lan_packet_send;
-
- if (i2o_dev->lct_data.sub_class == I2O_LAN_ETHERNET)
- dev->change_mtu = i2o_lan_change_mtu;
-
- return dev;
-}
-
-#ifdef MODULE
-#define i2o_lan_init init_module
-#endif
-
-int __init i2o_lan_init(void)
-{
- struct net_device *dev;
- int i;
-
- printk(KERN_INFO "I2O LAN OSM (C) 1999 University of Helsinki.\n");
-
- /* Module params are used as global defaults for private values */
-
- if (max_buckets_out > I2O_LAN_MAX_BUCKETS_OUT)
- max_buckets_out = I2O_LAN_MAX_BUCKETS_OUT;
- if (bucket_thresh > max_buckets_out)
- bucket_thresh = max_buckets_out;
-
- /* Install handlers for incoming replies */
-
- if (i2o_install_handler(&i2o_lan_send_handler) < 0) {
- printk(KERN_ERR "i2o_lan: Unable to register I2O LAN OSM.\n");
- return -EINVAL;
- }
- lan_send_context = i2o_lan_send_handler.context;
-
- if (i2o_install_handler(&i2o_lan_receive_handler) < 0) {
- printk(KERN_ERR "i2o_lan: Unable to register I2O LAN OSM.\n");
- return -EINVAL;
- }
- lan_receive_context = i2o_lan_receive_handler.context;
-
- if (i2o_install_handler(&i2o_lan_handler) < 0) {
- printk(KERN_ERR "i2o_lan: Unable to register I2O LAN OSM.\n");
- return -EINVAL;
- }
- lan_context = i2o_lan_handler.context;
-
- for(i=0; i <= MAX_LAN_CARDS; i++)
- i2o_landevs[i] = NULL;
-
- for (i=0; i < MAX_I2O_CONTROLLERS; i++) {
- struct i2o_controller *iop = i2o_find_controller(i);
- struct i2o_device *i2o_dev;
-
- if (iop==NULL)
- continue;
-
- for (i2o_dev=iop->devices;i2o_dev != NULL;i2o_dev=i2o_dev->next) {
-
- if (i2o_dev->lct_data.class_id != I2O_CLASS_LAN)
- continue;
-
- /* Make sure device not already claimed by an ISM */
- if (i2o_dev->lct_data.user_tid != 0xFFF)
- continue;
-
- if (unit == MAX_LAN_CARDS) {
- i2o_unlock_controller(iop);
- printk(KERN_WARNING "i2o_lan: Too many I2O LAN devices.\n");
- return -EINVAL;
- }
-
- dev = i2o_lan_register_device(i2o_dev);
- if (dev == NULL) {
- printk(KERN_ERR "i2o_lan: Unable to register I2O LAN device 0x%04x.\n",
- i2o_dev->lct_data.sub_class);
- continue;
- }
-
- printk(KERN_INFO "%s: I2O LAN device registered, "
- "subclass = 0x%04x, unit = %d, tid = %d.\n",
- dev->name, i2o_dev->lct_data.sub_class,
- ((struct i2o_lan_local *)dev->priv)->unit,
- i2o_dev->lct_data.tid);
- }
-
- i2o_unlock_controller(iop);
- }
-
- dprintk(KERN_INFO "%d I2O LAN devices found and registered.\n", unit+1);
-
- return 0;
-}
-
-#ifdef MODULE
-
-void cleanup_module(void)
-{
- int i;
-
- for (i = 0; i <= unit; i++) {
- struct net_device *dev = i2o_landevs[i];
- struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
- struct i2o_device *i2o_dev = priv->i2o_dev;
-
- switch (i2o_dev->lct_data.sub_class) {
- case I2O_LAN_ETHERNET:
- unregister_netdev(dev);
- break;
-#ifdef CONFIG_FDDI
- case I2O_LAN_FDDI:
- unregister_netdevice(dev);
- break;
-#endif
-#ifdef CONFIG_TR
- case I2O_LAN_TR:
- unregister_trdev(dev);
- break;
-#endif
-#ifdef CONFIG_NET_FC
- case I2O_LAN_FIBRE_CHANNEL:
- unregister_fcdev(dev);
- break;
-#endif
- default:
- printk(KERN_WARNING "%s: Spurious I2O LAN subclass 0x%08x.\n",
- dev->name, i2o_dev->lct_data.sub_class);
- }
-
- dprintk(KERN_INFO "%s: I2O LAN device unregistered.\n",
- dev->name);
- kfree(dev);
- }
-
- i2o_remove_handler(&i2o_lan_handler);
- i2o_remove_handler(&i2o_lan_send_handler);
- i2o_remove_handler(&i2o_lan_receive_handler);
-}
-
-EXPORT_NO_SYMBOLS;
-
-MODULE_AUTHOR("University of Helsinki, Department of Computer Science");
-MODULE_DESCRIPTION("I2O Lan OSM");
-
-MODULE_PARM(max_buckets_out, "1-" __MODULE_STRING(I2O_LAN_MAX_BUCKETS_OUT) "i");
-MODULE_PARM_DESC(max_buckets_out, "Total number of buckets to post (1-)");
-MODULE_PARM(bucket_thresh, "1-" __MODULE_STRING(I2O_LAN_MAX_BUCKETS_OUT) "i");
-MODULE_PARM_DESC(bucket_thresh, "Bucket post threshold (1-)");
-MODULE_PARM(rx_copybreak, "1-" "i");
-MODULE_PARM_DESC(rx_copybreak, "Copy breakpoint: frames smaller than this are copied (1-)");
-MODULE_PARM(tx_batch_mode, "0-2" "i");
-MODULE_PARM_DESC(tx_batch_mode, "0=Send immediately, 1=Send in batches, 2=Switch automatically");
-
-#endif
+++ /dev/null
-/*
- * i2o_lan.h I2O LAN Class definitions
- *
- * I2O LAN CLASS OSM May 26th 2000
- *
- * (C) Copyright 1999, 2000 University of Helsinki,
- * Department of Computer Science
- *
- * This code is still under development / test.
- *
- * Author: Auvo Häkkinen <Auvo.Hakkinen@cs.Helsinki.FI>
- * Juha Sievänen <Juha.Sievanen@cs.Helsinki.FI>
- * Taneli Vähäkangas <Taneli.Vahakangas@cs.Helsinki.FI>
- */
-
-#ifndef _I2O_LAN_H
-#define _I2O_LAN_H
-
-/* Default values for tunable parameters first */
-
-#define I2O_LAN_MAX_BUCKETS_OUT 96
-#define I2O_LAN_BUCKET_THRESH 18 /* 9 buckets in one message */
-#define I2O_LAN_RX_COPYBREAK 200
-#define I2O_LAN_TX_TIMEOUT (1*HZ)
-#define I2O_LAN_TX_BATCH_MODE 2 /* 2=automatic, 1=on, 0=off */
-#define I2O_LAN_EVENT_MASK 0 /* 0=None, 0xFFC00002=All */
-
-/* LAN types */
-#define I2O_LAN_ETHERNET 0x0030
-#define I2O_LAN_100VG 0x0040
-#define I2O_LAN_TR 0x0050
-#define I2O_LAN_FDDI 0x0060
-#define I2O_LAN_FIBRE_CHANNEL 0x0070
-#define I2O_LAN_UNKNOWN 0x00000000
-
-/* Connector types */
-
-/* Ethernet */
-#define I2O_LAN_AUI (I2O_LAN_ETHERNET << 4) + 0x00000001
-#define I2O_LAN_10BASE5 (I2O_LAN_ETHERNET << 4) + 0x00000002
-#define I2O_LAN_FIORL (I2O_LAN_ETHERNET << 4) + 0x00000003
-#define I2O_LAN_10BASE2 (I2O_LAN_ETHERNET << 4) + 0x00000004
-#define I2O_LAN_10BROAD36 (I2O_LAN_ETHERNET << 4) + 0x00000005
-#define I2O_LAN_10BASE_T (I2O_LAN_ETHERNET << 4) + 0x00000006
-#define I2O_LAN_10BASE_FP (I2O_LAN_ETHERNET << 4) + 0x00000007
-#define I2O_LAN_10BASE_FB (I2O_LAN_ETHERNET << 4) + 0x00000008
-#define I2O_LAN_10BASE_FL (I2O_LAN_ETHERNET << 4) + 0x00000009
-#define I2O_LAN_100BASE_TX (I2O_LAN_ETHERNET << 4) + 0x0000000A
-#define I2O_LAN_100BASE_FX (I2O_LAN_ETHERNET << 4) + 0x0000000B
-#define I2O_LAN_100BASE_T4 (I2O_LAN_ETHERNET << 4) + 0x0000000C
-#define I2O_LAN_1000BASE_SX (I2O_LAN_ETHERNET << 4) + 0x0000000D
-#define I2O_LAN_1000BASE_LX (I2O_LAN_ETHERNET << 4) + 0x0000000E
-#define I2O_LAN_1000BASE_CX (I2O_LAN_ETHERNET << 4) + 0x0000000F
-#define I2O_LAN_1000BASE_T (I2O_LAN_ETHERNET << 4) + 0x00000010
-
-/* AnyLAN */
-#define I2O_LAN_100VG_ETHERNET (I2O_LAN_100VG << 4) + 0x00000001
-#define I2O_LAN_100VG_TR (I2O_LAN_100VG << 4) + 0x00000002
-
-/* Token Ring */
-#define I2O_LAN_4MBIT (I2O_LAN_TR << 4) + 0x00000001
-#define I2O_LAN_16MBIT (I2O_LAN_TR << 4) + 0x00000002
-
-/* FDDI */
-#define I2O_LAN_125MBAUD (I2O_LAN_FDDI << 4) + 0x00000001
-
-/* Fibre Channel */
-#define I2O_LAN_POINT_POINT (I2O_LAN_FIBRE_CHANNEL << 4) + 0x00000001
-#define I2O_LAN_ARB_LOOP (I2O_LAN_FIBRE_CHANNEL << 4) + 0x00000002
-#define I2O_LAN_PUBLIC_LOOP (I2O_LAN_FIBRE_CHANNEL << 4) + 0x00000003
-#define I2O_LAN_FABRIC (I2O_LAN_FIBRE_CHANNEL << 4) + 0x00000004
-
-#define I2O_LAN_EMULATION 0x00000F00
-#define I2O_LAN_OTHER 0x00000F01
-#define I2O_LAN_DEFAULT 0xFFFFFFFF
-
-/* LAN class functions */
-
-#define LAN_PACKET_SEND 0x3B
-#define LAN_SDU_SEND 0x3D
-#define LAN_RECEIVE_POST 0x3E
-#define LAN_RESET 0x35
-#define LAN_SUSPEND 0x37
-
-/* LAN DetailedStatusCode defines */
-#define I2O_LAN_DSC_SUCCESS 0x00
-#define I2O_LAN_DSC_DEVICE_FAILURE 0x01
-#define I2O_LAN_DSC_DESTINATION_NOT_FOUND 0x02
-#define I2O_LAN_DSC_TRANSMIT_ERROR 0x03
-#define I2O_LAN_DSC_TRANSMIT_ABORTED 0x04
-#define I2O_LAN_DSC_RECEIVE_ERROR 0x05
-#define I2O_LAN_DSC_RECEIVE_ABORTED 0x06
-#define I2O_LAN_DSC_DMA_ERROR 0x07
-#define I2O_LAN_DSC_BAD_PACKET_DETECTED 0x08
-#define I2O_LAN_DSC_OUT_OF_MEMORY 0x09
-#define I2O_LAN_DSC_BUCKET_OVERRUN 0x0A
-#define I2O_LAN_DSC_IOP_INTERNAL_ERROR 0x0B
-#define I2O_LAN_DSC_CANCELED 0x0C
-#define I2O_LAN_DSC_INVALID_TRANSACTION_CONTEXT 0x0D
-#define I2O_LAN_DSC_DEST_ADDRESS_DETECTED 0x0E
-#define I2O_LAN_DSC_DEST_ADDRESS_OMITTED 0x0F
-#define I2O_LAN_DSC_PARTIAL_PACKET_RETURNED 0x10
-#define I2O_LAN_DSC_SUSPENDED 0x11
-
-struct i2o_packet_info {
- u32 offset : 24;
- u32 flags : 8;
- u32 len : 24;
- u32 status : 8;
-};
-
-struct i2o_bucket_descriptor {
- u32 context; /* FIXME: 64bit support */
- struct i2o_packet_info packet_info[1];
-};
-
-/* Event Indicator Mask Flags for LAN OSM */
-
-#define I2O_LAN_EVT_LINK_DOWN 0x01
-#define I2O_LAN_EVT_LINK_UP 0x02
-#define I2O_LAN_EVT_MEDIA_CHANGE 0x04
-
-#include <linux/netdevice.h>
-#include <linux/fddidevice.h>
-
-struct i2o_lan_local {
- u8 unit;
- struct i2o_device *i2o_dev;
-
- struct fddi_statistics stats; /* see also struct net_device_stats */
- unsigned short (*type_trans)(struct sk_buff *, struct net_device *);
- atomic_t buckets_out; /* nbr of unused buckets on DDM */
- atomic_t tx_out; /* outstanding TXes */
- u8 tx_count; /* packets in one TX message frame */
- u16 tx_max_out; /* DDM's Tx queue len */
- u8 sgl_max; /* max SGLs in one message frame */
- u32 m; /* IOP address of the batch msg frame */
-
- struct tq_struct i2o_batch_send_task;
- int send_active;
- struct sk_buff **i2o_fbl; /* Free bucket list (to reuse skbs) */
- int i2o_fbl_tail;
- spinlock_t fbl_lock;
-
- spinlock_t tx_lock;
-
- u32 max_size_mc_table; /* max number of multicast addresses */
-
- /* LAN OSM configurable parameters are here: */
-
- u16 max_buckets_out; /* max nbr of buckets to send to DDM */
- u16 bucket_thresh; /* send more when this many used */
- u16 rx_copybreak;
-
- u8 tx_batch_mode; /* Set when using batch mode sends */
- u32 i2o_event_mask; /* To turn on interesting event flags */
-};
-
-#endif /* _I2O_LAN_H */
+++ /dev/null
-/*
- * Find I2O capable controllers on the PCI bus, and register/install
- * them with the I2O layer
- *
- * (C) Copyright 1999 Red Hat Software
- *
- * Written by Alan Cox, Building Number Three Ltd
- * Modified by Deepak Saxena <deepak@plexity.net>
- * Modified by Boji T Kannanthanam <boji.t.kannanthanam@intel.com>
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version
- * 2 of the License, or (at your option) any later version.
- *
- * TODO:
- * Support polled I2O PCI controllers.
- */
-
-#include <linux/config.h>
-#include <linux/module.h>
-#include <linux/kernel.h>
-#include <linux/pci.h>
-#include <linux/i2o.h>
-#include <linux/errno.h>
-#include <linux/init.h>
-#include <linux/slab.h>
-#include <asm/io.h>
-
-#ifdef CONFIG_MTRR
-#include <asm/mtrr.h>
-#endif // CONFIG_MTRR
-
-#ifdef MODULE
-/*
- * Core function table
- * See <include/linux/i2o.h> for an explanation
- */
-static struct i2o_core_func_table *core;
-
-/* Core attach function */
-extern int i2o_pci_core_attach(struct i2o_core_func_table *);
-extern void i2o_pci_core_detach(void);
-#endif /* MODULE */
-
-/*
- * Free bus specific resources
- */
-static void i2o_pci_dispose(struct i2o_controller *c)
-{
- I2O_IRQ_WRITE32(c,0xFFFFFFFF);
- if(c->bus.pci.irq > 0)
- free_irq(c->bus.pci.irq, c);
- iounmap(((u8 *)c->post_port)-0x40);
-
-#ifdef CONFIG_MTRR
- if(c->bus.pci.mtrr_reg0 > 0)
- mtrr_del(c->bus.pci.mtrr_reg0, 0, 0);
- if(c->bus.pci.mtrr_reg1 > 0)
- mtrr_del(c->bus.pci.mtrr_reg1, 0, 0);
-#endif
-}
-
-/*
- * No real bus specific handling yet (note that later we will
- * need to 'steal' PCI devices on i960 mainboards)
- */
-
-static int i2o_pci_bind(struct i2o_controller *c, struct i2o_device *dev)
-{
- MOD_INC_USE_COUNT;
- return 0;
-}
-
-static int i2o_pci_unbind(struct i2o_controller *c, struct i2o_device *dev)
-{
- MOD_DEC_USE_COUNT;
- return 0;
-}
-
-/*
- * Bus specific enable/disable functions
- */
-static void i2o_pci_enable(struct i2o_controller *c)
-{
- I2O_IRQ_WRITE32(c, 0);
- c->enabled = 1;
-}
-
-static void i2o_pci_disable(struct i2o_controller *c)
-{
- I2O_IRQ_WRITE32(c, 0xFFFFFFFF);
- c->enabled = 0;
-}
-
-/*
- * Bus specific interrupt handler
- */
-
-static void i2o_pci_interrupt(int irq, void *dev_id, struct pt_regs *r)
-{
- struct i2o_controller *c = dev_id;
-#ifdef MODULE
- core->run_queue(c);
-#else
- i2o_run_queue(c);
-#endif /* MODULE */
-}
-
-/*
- * Install a PCI (or in theory AGP) i2o controller
- *
- * TODO: Add support for polled controllers
- */
-int __init i2o_pci_install(struct pci_dev *dev)
-{
- struct i2o_controller *c=kmalloc(sizeof(struct i2o_controller),
- GFP_KERNEL);
- u8 *mem;
- u32 memptr = 0;
- u32 size;
-
- int i;
-
- if(c==NULL)
- {
- printk(KERN_ERR "i2o: Insufficient memory to add controller.\n");
- return -ENOMEM;
- }
- memset(c, 0, sizeof(*c));
-
- for(i=0; i<6; i++)
- {
- /* Skip I/O spaces */
- if(!(pci_resource_flags(dev, i) & IORESOURCE_IO))
- {
- memptr = pci_resource_start(dev, i);
- break;
- }
- }
-
- if(i==6)
- {
- printk(KERN_ERR "i2o: I2O controller has no memory regions defined.\n");
- kfree(c);
- return -EINVAL;
- }
-
- size = dev->resource[i].end-dev->resource[i].start+1;
- /* Map the I2O controller */
-
- printk(KERN_INFO "i2o: PCI I2O controller at 0x%08X size=%d\n", memptr, size);
- mem = ioremap(memptr, size);
- if(mem==NULL)
- {
- printk(KERN_ERR "i2o: Unable to map controller.\n");
- kfree(c);
- return -EINVAL;
- }
-
- c->bus.pci.irq = -1;
- c->bus.pci.queue_buggy = 0;
- c->bus.pci.dpt = 0;
- c->bus.pci.short_req = 0;
- c->bus.pci.pdev = dev;
-
- c->irq_mask = (volatile u32 *)(mem+0x34);
- c->post_port = (volatile u32 *)(mem+0x40);
- c->reply_port = (volatile u32 *)(mem+0x44);
-
- c->mem_phys = memptr;
- c->mem_offset = (u32)mem;
- c->destructor = i2o_pci_dispose;
-
- c->bind = i2o_pci_bind;
- c->unbind = i2o_pci_unbind;
- c->bus_enable = i2o_pci_enable;
- c->bus_disable = i2o_pci_disable;
-
- c->type = I2O_TYPE_PCI;
-
- /*
- * Cards that fall apart if you hit them with large I/O
- * loads...
- */
-
- if(dev->vendor == PCI_VENDOR_ID_NCR && dev->device == 0x0630)
- {
- c->bus.pci.short_req=1;
- printk(KERN_INFO "I2O: Symbios FC920 workarounds activated.\n");
- }
- if(dev->subsystem_vendor == PCI_VENDOR_ID_PROMISE)
- {
- c->bus.pci.queue_buggy=1;
- printk(KERN_INFO "I2O: Promise workarounds activated.\n");
- }
-
- /*
- * Cards that go bananas if you quiesce them before you reset
- * them
- */
-
- if(dev->vendor == PCI_VENDOR_ID_DPT)
- c->bus.pci.dpt=1;
-
- /*
- * Enable Write Combining MTRR for IOP's memory region
- */
-#ifdef CONFIG_MTRR
- c->bus.pci.mtrr_reg0 =
- mtrr_add(c->mem_phys, size, MTRR_TYPE_WRCOMB, 1);
-	/*
-	 * If it is an Intel i960 I/O processor, set the first 64K to
-	 * Uncacheable, since the region contains the Messaging Unit,
-	 * which should not be cached.
-	 */
- c->bus.pci.mtrr_reg1 = -1;
- if(dev->vendor == PCI_VENDOR_ID_INTEL || dev->vendor == PCI_VENDOR_ID_DPT)
- {
- printk(KERN_INFO "I2O: MTRR workaround for Intel i960 processor\n");
- c->bus.pci.mtrr_reg1 = mtrr_add(c->mem_phys, 65536, MTRR_TYPE_UNCACHABLE, 1);
- if(c->bus.pci.mtrr_reg1< 0)
- printk(KERN_INFO "i2o_pci: Error in setting MTRR_TYPE_UNCACHABLE\n");
- }
-
-#endif
-
- I2O_IRQ_WRITE32(c,0xFFFFFFFF);
-
-#ifdef MODULE
- i = core->install(c);
-#else
- i = i2o_install_controller(c);
-#endif /* MODULE */
-
- if(i<0)
- {
- printk(KERN_ERR "i2o: Unable to install controller.\n");
- kfree(c);
- iounmap(mem);
- return i;
- }
-
- c->bus.pci.irq = dev->irq;
- if(c->bus.pci.irq)
- {
- i=request_irq(dev->irq, i2o_pci_interrupt, SA_SHIRQ,
- c->name, c);
- if(i<0)
- {
- printk(KERN_ERR "%s: unable to allocate interrupt %d.\n",
- c->name, dev->irq);
- c->bus.pci.irq = -1;
-#ifdef MODULE
- core->delete(c);
-#else
- i2o_delete_controller(c);
-#endif /* MODULE */
- iounmap(mem);
- return -EBUSY;
- }
- }
-
- printk(KERN_INFO "%s: Installed at IRQ%d\n", c->name, dev->irq);
- I2O_IRQ_WRITE32(c,0x0);
- c->enabled = 1;
- return 0;
-}
-
-int __init i2o_pci_scan(void)
-{
- struct pci_dev *dev;
- int count=0;
-
- printk(KERN_INFO "i2o: Checking for PCI I2O controllers...\n");
-
- pci_for_each_dev(dev)
- {
- if((dev->class>>8)!=PCI_CLASS_INTELLIGENT_I2O)
- continue;
- if((dev->class&0xFF)>1)
- {
- printk(KERN_INFO "i2o: I2O Controller found but does not support I2O 1.5 (skipping).\n");
- continue;
- }
- if (pci_enable_device(dev))
- continue;
- printk(KERN_INFO "i2o: I2O controller on bus %d at %d.\n",
- dev->bus->number, dev->devfn);
- pci_set_master(dev);
- if(i2o_pci_install(dev)==0)
- count++;
- }
- if(count)
- printk(KERN_INFO "i2o: %d I2O controller%s found and installed.\n", count,
- count==1?"":"s");
- return count?count:-ENODEV;
-}
-
-#ifdef I2O_HOTPLUG_SUPPORT
-/*
- * Activate a newly found PCI I2O controller
- * Not used now, but will be needed in future for
- * hot plug PCI support
- */
-static void i2o_pci_activate(struct i2o_controller *c)
-{
-
- if(c->type == I2O_TYPE_PCI)
- {
- I2O_IRQ_WRITE32(c,0);
-#ifdef MODULE
- if(core->activate(c))
-#else
- if(i2o_activate_controller(c))
-#endif /* MODULE */
- {
-			printk(KERN_ERR "%s: Failed to initialize.\n", c->name);
-#ifdef MODULE
- core->unlock(c);
- core->delete(c);
-#else
- i2o_unlock_controller(c);
- i2o_delete_controller(c);
-#endif
-			return;
- }
- }
-}
-#endif // I2O_HOTPLUG_SUPPORT
-
-#ifdef MODULE
-
-int i2o_pci_core_attach(struct i2o_core_func_table *table)
-{
- MOD_INC_USE_COUNT;
-
- core = table;
-
- return i2o_pci_scan();
-}
-
-void i2o_pci_core_detach(void)
-{
- core = NULL;
-
- MOD_DEC_USE_COUNT;
-}
-
-int init_module(void)
-{
- printk(KERN_INFO "Linux I2O PCI support (c) 1999 Red Hat Software.\n");
-
- core = NULL;
-
- return 0;
-
-}
-
-void cleanup_module(void)
-{
-}
-
-EXPORT_SYMBOL(i2o_pci_core_attach);
-EXPORT_SYMBOL(i2o_pci_core_detach);
-
-MODULE_AUTHOR("Red Hat Software");
-MODULE_DESCRIPTION("I2O PCI Interface");
-
-#else
-void __init i2o_pci_init(void)
-{
- printk(KERN_INFO "Linux I2O PCI support (c) 1999 Red Hat Software.\n");
- i2o_pci_scan();
-}
-#endif
+++ /dev/null
-/*
- * procfs handler for Linux I2O subsystem
- *
- * (c) Copyright 1999 Deepak Saxena
- *
- * Originally written by Deepak Saxena(deepak@plexity.net)
- *
- * This program is free software. You can redistribute it and/or
- * modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version
- * 2 of the License, or (at your option) any later version.
- *
- * This is an initial test release. The code is based on the design
- * of the ide procfs system (drivers/block/ide-proc.c). Some code
- * taken from i2o-core module by Alan Cox.
- *
- * DISCLAIMER: This code is still under development/test and may cause
- * your system to behave unpredictably. Use at your own discretion.
- *
- * LAN entries by Juha Sievänen (Juha.Sievanen@cs.Helsinki.FI),
- * Auvo Häkkinen (Auvo.Hakkinen@cs.Helsinki.FI)
- * University of Helsinki, Department of Computer Science
- */
-
-/*
- * set tabstop=3
- */
-
-/*
- * TODO List
- *
- *	- Add support for any version 2.0 spec changes once 2.0 IRTOS
- *	  is available to test with
- * - Clean up code to use official structure definitions
- */
-
-// FIXME!
-#define FMT_U64_HEX "0x%08x%08x"
-#define U64_VAL(pu64) *((u32*)(pu64)+1), *((u32*)(pu64))
-
-#include <linux/types.h>
-#include <linux/kernel.h>
-#include <linux/pci.h>
-#include <linux/i2o.h>
-#include <linux/proc_fs.h>
-#include <linux/init.h>
-#include <linux/module.h>
-#include <linux/errno.h>
-#include <linux/spinlock.h>
-
-#include <asm/io.h>
-#include <asm/uaccess.h>
-#include <asm/byteorder.h>
-
-#include "i2o_lan.h"
-
-/*
- * Structure used to define /proc entries
- */
-typedef struct _i2o_proc_entry_t
-{
- char *name; /* entry name */
- mode_t mode; /* mode */
- read_proc_t *read_proc; /* read func */
- write_proc_t *write_proc; /* write func */
-} i2o_proc_entry;
-
-// #define DRIVERDEBUG
-
-static int i2o_proc_read_lct(char *, char **, off_t, int, int *, void *);
-static int i2o_proc_read_hrt(char *, char **, off_t, int, int *, void *);
-static int i2o_proc_read_status(char *, char **, off_t, int, int *, void *);
-
-static int i2o_proc_read_hw(char *, char **, off_t, int, int *, void *);
-static int i2o_proc_read_ddm_table(char *, char **, off_t, int, int *, void *);
-static int i2o_proc_read_driver_store(char *, char **, off_t, int, int *, void *);
-static int i2o_proc_read_drivers_stored(char *, char **, off_t, int, int *, void *);
-
-static int i2o_proc_read_groups(char *, char **, off_t, int, int *, void *);
-static int i2o_proc_read_phys_device(char *, char **, off_t, int, int *, void *);
-static int i2o_proc_read_claimed(char *, char **, off_t, int, int *, void *);
-static int i2o_proc_read_users(char *, char **, off_t, int, int *, void *);
-static int i2o_proc_read_priv_msgs(char *, char **, off_t, int, int *, void *);
-static int i2o_proc_read_authorized_users(char *, char **, off_t, int, int *, void *);
-
-static int i2o_proc_read_dev_name(char *, char **, off_t, int, int *, void *);
-static int i2o_proc_read_dev_identity(char *, char **, off_t, int, int *, void *);
-static int i2o_proc_read_ddm_identity(char *, char **, off_t, int, int *, void *);
-static int i2o_proc_read_uinfo(char *, char **, off_t, int, int *, void *);
-static int i2o_proc_read_sgl_limits(char *, char **, off_t, int, int *, void *);
-
-static int i2o_proc_read_sensors(char *, char **, off_t, int, int *, void *);
-
-static int print_serial_number(char *, int, u8 *, int);
-
-static int i2o_proc_create_entries(void *, i2o_proc_entry *,
- struct proc_dir_entry *);
-static void i2o_proc_remove_entries(i2o_proc_entry *, struct proc_dir_entry *);
-static int i2o_proc_add_controller(struct i2o_controller *,
- struct proc_dir_entry * );
-static void i2o_proc_remove_controller(struct i2o_controller *,
- struct proc_dir_entry * );
-static void i2o_proc_add_device(struct i2o_device *, struct proc_dir_entry *);
-static void i2o_proc_remove_device(struct i2o_device *);
-static int create_i2o_procfs(void);
-static int destroy_i2o_procfs(void);
-static void i2o_proc_new_dev(struct i2o_controller *, struct i2o_device *);
-static void i2o_proc_dev_del(struct i2o_controller *, struct i2o_device *);
-
-static int i2o_proc_read_lan_dev_info(char *, char **, off_t, int, int *,
- void *);
-static int i2o_proc_read_lan_mac_addr(char *, char **, off_t, int, int *,
- void *);
-static int i2o_proc_read_lan_mcast_addr(char *, char **, off_t, int, int *,
- void *);
-static int i2o_proc_read_lan_batch_control(char *, char **, off_t, int, int *,
- void *);
-static int i2o_proc_read_lan_operation(char *, char **, off_t, int, int *,
- void *);
-static int i2o_proc_read_lan_media_operation(char *, char **, off_t, int,
- int *, void *);
-static int i2o_proc_read_lan_alt_addr(char *, char **, off_t, int, int *,
- void *);
-static int i2o_proc_read_lan_tx_info(char *, char **, off_t, int, int *,
- void *);
-static int i2o_proc_read_lan_rx_info(char *, char **, off_t, int, int *,
- void *);
-static int i2o_proc_read_lan_hist_stats(char *, char **, off_t, int, int *,
- void *);
-static int i2o_proc_read_lan_eth_stats(char *, char **, off_t, int,
- int *, void *);
-static int i2o_proc_read_lan_tr_stats(char *, char **, off_t, int, int *,
- void *);
-static int i2o_proc_read_lan_fddi_stats(char *, char **, off_t, int, int *,
- void *);
-
-static struct proc_dir_entry *i2o_proc_dir_root;
-
-/*
- * I2O OSM descriptor
- */
-static struct i2o_handler i2o_proc_handler =
-{
- NULL,
- i2o_proc_new_dev,
- i2o_proc_dev_del,
- NULL,
- "I2O procfs Layer",
- 0,
- 0xffffffff // All classes
-};
-
-/*
- * IOP specific entries...write field just in case someone
- * ever wants one.
- */
-static i2o_proc_entry generic_iop_entries[] =
-{
- {"hrt", S_IFREG|S_IRUGO, i2o_proc_read_hrt, NULL},
- {"lct", S_IFREG|S_IRUGO, i2o_proc_read_lct, NULL},
- {"status", S_IFREG|S_IRUGO, i2o_proc_read_status, NULL},
- {"hw", S_IFREG|S_IRUGO, i2o_proc_read_hw, NULL},
- {"ddm_table", S_IFREG|S_IRUGO, i2o_proc_read_ddm_table, NULL},
- {"driver_store", S_IFREG|S_IRUGO, i2o_proc_read_driver_store, NULL},
- {"drivers_stored", S_IFREG|S_IRUGO, i2o_proc_read_drivers_stored, NULL},
- {NULL, 0, NULL, NULL}
-};
-
-/*
- * Device specific entries
- */
-static i2o_proc_entry generic_dev_entries[] =
-{
- {"groups", S_IFREG|S_IRUGO, i2o_proc_read_groups, NULL},
- {"phys_dev", S_IFREG|S_IRUGO, i2o_proc_read_phys_device, NULL},
- {"claimed", S_IFREG|S_IRUGO, i2o_proc_read_claimed, NULL},
- {"users", S_IFREG|S_IRUGO, i2o_proc_read_users, NULL},
- {"priv_msgs", S_IFREG|S_IRUGO, i2o_proc_read_priv_msgs, NULL},
- {"authorized_users", S_IFREG|S_IRUGO, i2o_proc_read_authorized_users, NULL},
- {"dev_identity", S_IFREG|S_IRUGO, i2o_proc_read_dev_identity, NULL},
- {"ddm_identity", S_IFREG|S_IRUGO, i2o_proc_read_ddm_identity, NULL},
- {"user_info", S_IFREG|S_IRUGO, i2o_proc_read_uinfo, NULL},
- {"sgl_limits", S_IFREG|S_IRUGO, i2o_proc_read_sgl_limits, NULL},
- {"sensors", S_IFREG|S_IRUGO, i2o_proc_read_sensors, NULL},
- {NULL, 0, NULL, NULL}
-};
-
-/*
- * Storage unit specific entries (SCSI Periph, BS) with device names
- */
-static i2o_proc_entry rbs_dev_entries[] =
-{
- {"dev_name", S_IFREG|S_IRUGO, i2o_proc_read_dev_name, NULL},
- {NULL, 0, NULL, NULL}
-};
-
-#define SCSI_TABLE_SIZE 13
-static char *scsi_devices[] =
-{
- "Direct-Access Read/Write",
- "Sequential-Access Storage",
- "Printer",
- "Processor",
- "WORM Device",
- "CD-ROM Device",
- "Scanner Device",
- "Optical Memory Device",
- "Medium Changer Device",
- "Communications Device",
- "Graphics Art Pre-Press Device",
- "Graphics Art Pre-Press Device",
- "Array Controller Device"
-};
-
-/* private */
-
-/*
- * Generic LAN specific entries
- *
- * Should groups with r/w entries have their own subdirectory?
- *
- */
-static i2o_proc_entry lan_entries[] =
-{
- {"lan_dev_info", S_IFREG|S_IRUGO, i2o_proc_read_lan_dev_info, NULL},
- {"lan_mac_addr", S_IFREG|S_IRUGO, i2o_proc_read_lan_mac_addr, NULL},
- {"lan_mcast_addr", S_IFREG|S_IRUGO|S_IWUSR,
- i2o_proc_read_lan_mcast_addr, NULL},
- {"lan_batch_ctrl", S_IFREG|S_IRUGO|S_IWUSR,
- i2o_proc_read_lan_batch_control, NULL},
- {"lan_operation", S_IFREG|S_IRUGO, i2o_proc_read_lan_operation, NULL},
- {"lan_media_operation", S_IFREG|S_IRUGO,
- i2o_proc_read_lan_media_operation, NULL},
- {"lan_alt_addr", S_IFREG|S_IRUGO, i2o_proc_read_lan_alt_addr, NULL},
- {"lan_tx_info", S_IFREG|S_IRUGO, i2o_proc_read_lan_tx_info, NULL},
- {"lan_rx_info", S_IFREG|S_IRUGO, i2o_proc_read_lan_rx_info, NULL},
-
- {"lan_hist_stats", S_IFREG|S_IRUGO, i2o_proc_read_lan_hist_stats, NULL},
- {NULL, 0, NULL, NULL}
-};
-
-/*
- * Port specific LAN entries
- *
- */
-static i2o_proc_entry lan_eth_entries[] =
-{
- {"lan_eth_stats", S_IFREG|S_IRUGO, i2o_proc_read_lan_eth_stats, NULL},
- {NULL, 0, NULL, NULL}
-};
-
-static i2o_proc_entry lan_tr_entries[] =
-{
- {"lan_tr_stats", S_IFREG|S_IRUGO, i2o_proc_read_lan_tr_stats, NULL},
- {NULL, 0, NULL, NULL}
-};
-
-static i2o_proc_entry lan_fddi_entries[] =
-{
- {"lan_fddi_stats", S_IFREG|S_IRUGO, i2o_proc_read_lan_fddi_stats, NULL},
- {NULL, 0, NULL, NULL}
-};
-
-
-static char *chtostr(u8 *chars, int n)
-{
- char tmp[256];
- tmp[0] = 0;
- return strncat(tmp, (char *)chars, n);
-}
-
-static int i2o_report_query_status(char *buf, int block_status, char *group)
-{
- switch (block_status)
- {
- case -ETIMEDOUT:
- return sprintf(buf, "Timeout reading group %s.\n",group);
- case -ENOMEM:
- return sprintf(buf, "No free memory to read the table.\n");
- case -I2O_PARAMS_STATUS_INVALID_GROUP_ID:
- return sprintf(buf, "Group %s not supported.\n", group);
- default:
- return sprintf(buf, "Error reading group %s. BlockStatus 0x%02X\n",
- group, -block_status);
- }
-}
-
-static char* bus_strings[] =
-{
- "Local Bus",
- "ISA",
- "EISA",
- "MCA",
- "PCI",
- "PCMCIA",
- "NUBUS",
- "CARDBUS"
-};
-
-static spinlock_t i2o_proc_lock = SPIN_LOCK_UNLOCKED;
-
-int i2o_proc_read_hrt(char *buf, char **start, off_t offset, int len,
- int *eof, void *data)
-{
- struct i2o_controller *c = (struct i2o_controller *)data;
- i2o_hrt *hrt = (i2o_hrt *)c->hrt;
- u32 bus;
- int count;
- int i;
-
- spin_lock(&i2o_proc_lock);
-
- len = 0;
-
- if(hrt->hrt_version)
- {
- len += sprintf(buf+len,
- "HRT table for controller is too new a version.\n");
- spin_unlock(&i2o_proc_lock);
- return len;
- }
-
- count = hrt->num_entries;
-
- if((count * hrt->entry_len + 8) > 2048) {
- printk(KERN_WARNING "i2o_proc: HRT does not fit into buffer\n");
- len += sprintf(buf+len,
- "HRT table too big to fit in buffer.\n");
- spin_unlock(&i2o_proc_lock);
- return len;
- }
-
- len += sprintf(buf+len, "HRT has %d entries of %d bytes each.\n",
- count, hrt->entry_len << 2);
-
- for(i = 0; i < count; i++)
- {
- len += sprintf(buf+len, "Entry %d:\n", i);
- len += sprintf(buf+len, " Adapter ID: %0#10x\n",
- hrt->hrt_entry[i].adapter_id);
- len += sprintf(buf+len, " Controlling tid: %0#6x\n",
- hrt->hrt_entry[i].parent_tid);
-
- if(hrt->hrt_entry[i].bus_type != 0x80)
- {
- bus = hrt->hrt_entry[i].bus_type;
- len += sprintf(buf+len, " %s Information\n", bus_strings[bus]);
-
- switch(bus)
- {
- case I2O_BUS_LOCAL:
- len += sprintf(buf+len, " IOBase: %0#6x,",
- hrt->hrt_entry[i].bus.local_bus.LbBaseIOPort);
- len += sprintf(buf+len, " MemoryBase: %0#10x\n",
- hrt->hrt_entry[i].bus.local_bus.LbBaseMemoryAddress);
- break;
-
- case I2O_BUS_ISA:
- len += sprintf(buf+len, " IOBase: %0#6x,",
- hrt->hrt_entry[i].bus.isa_bus.IsaBaseIOPort);
- len += sprintf(buf+len, " MemoryBase: %0#10x,",
- hrt->hrt_entry[i].bus.isa_bus.IsaBaseMemoryAddress);
- len += sprintf(buf+len, " CSN: %0#4x,",
- hrt->hrt_entry[i].bus.isa_bus.CSN);
- break;
-
- case I2O_BUS_EISA:
- len += sprintf(buf+len, " IOBase: %0#6x,",
- hrt->hrt_entry[i].bus.eisa_bus.EisaBaseIOPort);
- len += sprintf(buf+len, " MemoryBase: %0#10x,",
- hrt->hrt_entry[i].bus.eisa_bus.EisaBaseMemoryAddress);
- len += sprintf(buf+len, " Slot: %0#4x,",
- hrt->hrt_entry[i].bus.eisa_bus.EisaSlotNumber);
- break;
-
- case I2O_BUS_MCA:
- len += sprintf(buf+len, " IOBase: %0#6x,",
- hrt->hrt_entry[i].bus.mca_bus.McaBaseIOPort);
- len += sprintf(buf+len, " MemoryBase: %0#10x,",
- hrt->hrt_entry[i].bus.mca_bus.McaBaseMemoryAddress);
- len += sprintf(buf+len, " Slot: %0#4x,",
- hrt->hrt_entry[i].bus.mca_bus.McaSlotNumber);
- break;
-
- case I2O_BUS_PCI:
- len += sprintf(buf+len, " Bus: %0#4x",
- hrt->hrt_entry[i].bus.pci_bus.PciBusNumber);
- len += sprintf(buf+len, " Dev: %0#4x",
- hrt->hrt_entry[i].bus.pci_bus.PciDeviceNumber);
- len += sprintf(buf+len, " Func: %0#4x",
- hrt->hrt_entry[i].bus.pci_bus.PciFunctionNumber);
- len += sprintf(buf+len, " Vendor: %0#6x",
- hrt->hrt_entry[i].bus.pci_bus.PciVendorID);
- len += sprintf(buf+len, " Device: %0#6x\n",
- hrt->hrt_entry[i].bus.pci_bus.PciDeviceID);
- break;
-
- default:
- len += sprintf(buf+len, " Unsupported Bus Type\n");
- }
- }
- else
- len += sprintf(buf+len, " Unknown Bus Type\n");
- }
-
- spin_unlock(&i2o_proc_lock);
-
- return len;
-}
-
-int i2o_proc_read_lct(char *buf, char **start, off_t offset, int len,
- int *eof, void *data)
-{
- struct i2o_controller *c = (struct i2o_controller*)data;
- i2o_lct *lct = (i2o_lct *)c->lct;
- int entries;
- int i;
-
-#define BUS_TABLE_SIZE 3
- static char *bus_ports[] =
- {
- "Generic Bus",
- "SCSI Bus",
- "Fibre Channel Bus"
- };
-
- spin_lock(&i2o_proc_lock);
- len = 0;
-
- entries = (lct->table_size - 3)/9;
-
- len += sprintf(buf, "LCT contains %d %s\n", entries,
- entries == 1 ? "entry" : "entries");
- if(lct->boot_tid)
- len += sprintf(buf+len, "Boot Device @ ID %d\n", lct->boot_tid);
-
- len +=
- sprintf(buf+len, "Current Change Indicator: %#10x\n", lct->change_ind);
-
- for(i = 0; i < entries; i++)
- {
- len += sprintf(buf+len, "Entry %d\n", i);
- len += sprintf(buf+len, " Class, SubClass : %s", i2o_get_class_name(lct->lct_entry[i].class_id));
-
- /*
- * Classes which we'll print subclass info for
- */
- switch(lct->lct_entry[i].class_id & 0xFFF)
- {
- case I2O_CLASS_RANDOM_BLOCK_STORAGE:
- switch(lct->lct_entry[i].sub_class)
- {
- case 0x00:
- len += sprintf(buf+len, ", Direct-Access Read/Write");
- break;
-
- case 0x04:
- len += sprintf(buf+len, ", WORM Drive");
- break;
-
- case 0x05:
- len += sprintf(buf+len, ", CD-ROM Drive");
- break;
-
- case 0x07:
- len += sprintf(buf+len, ", Optical Memory Device");
- break;
-
- default:
- len += sprintf(buf+len, ", Unknown (0x%02x)",
- lct->lct_entry[i].sub_class);
- break;
- }
- break;
-
- case I2O_CLASS_LAN:
- switch(lct->lct_entry[i].sub_class & 0xFF)
- {
- case 0x30:
- len += sprintf(buf+len, ", Ethernet");
- break;
-
- case 0x40:
- len += sprintf(buf+len, ", 100base VG");
- break;
-
- case 0x50:
- len += sprintf(buf+len, ", IEEE 802.5/Token-Ring");
- break;
-
- case 0x60:
- len += sprintf(buf+len, ", ANSI X3T9.5 FDDI");
- break;
-
- case 0x70:
- len += sprintf(buf+len, ", Fibre Channel");
- break;
-
- default:
- len += sprintf(buf+len, ", Unknown Sub-Class (0x%02x)",
- lct->lct_entry[i].sub_class & 0xFF);
- break;
- }
- break;
-
- case I2O_CLASS_SCSI_PERIPHERAL:
- if(lct->lct_entry[i].sub_class < SCSI_TABLE_SIZE)
- len += sprintf(buf+len, ", %s",
- scsi_devices[lct->lct_entry[i].sub_class]);
- else
- len += sprintf(buf+len, ", Unknown Device Type");
- break;
-
- case I2O_CLASS_BUS_ADAPTER_PORT:
- if(lct->lct_entry[i].sub_class < BUS_TABLE_SIZE)
- len += sprintf(buf+len, ", %s",
- bus_ports[lct->lct_entry[i].sub_class]);
- else
- len += sprintf(buf+len, ", Unknown Bus Type");
- break;
- }
- len += sprintf(buf+len, "\n");
-
- len += sprintf(buf+len, " Local TID : 0x%03x\n", lct->lct_entry[i].tid);
- len += sprintf(buf+len, " User TID : 0x%03x\n", lct->lct_entry[i].user_tid);
- len += sprintf(buf+len, " Parent TID : 0x%03x\n",
- lct->lct_entry[i].parent_tid);
- len += sprintf(buf+len, " Identity Tag : 0x%x%x%x%x%x%x%x%x\n",
- lct->lct_entry[i].identity_tag[0],
- lct->lct_entry[i].identity_tag[1],
- lct->lct_entry[i].identity_tag[2],
- lct->lct_entry[i].identity_tag[3],
- lct->lct_entry[i].identity_tag[4],
- lct->lct_entry[i].identity_tag[5],
- lct->lct_entry[i].identity_tag[6],
- lct->lct_entry[i].identity_tag[7]);
- len += sprintf(buf+len, " Change Indicator : %0#10x\n",
- lct->lct_entry[i].change_ind);
- len += sprintf(buf+len, " Event Capab Mask : %0#10x\n",
- lct->lct_entry[i].device_flags);
- }
-
- spin_unlock(&i2o_proc_lock);
- return len;
-}
-
-int i2o_proc_read_status(char *buf, char **start, off_t offset, int len,
- int *eof, void *data)
-{
- struct i2o_controller *c = (struct i2o_controller*)data;
- char prodstr[25];
- int version;
-
- spin_lock(&i2o_proc_lock);
- len = 0;
-
- i2o_status_get(c); // reread the status block
-
- len += sprintf(buf+len,"Organization ID : %0#6x\n",
- c->status_block->org_id);
-
- version = c->status_block->i2o_version;
-
-/* FIXME for Spec 2.0
- if (version == 0x02) {
- len += sprintf(buf+len,"Lowest I2O version supported: ");
- switch(workspace[2]) {
- case 0x00:
- len += sprintf(buf+len,"1.0\n");
- break;
- case 0x01:
- len += sprintf(buf+len,"1.5\n");
- break;
- case 0x02:
- len += sprintf(buf+len,"2.0\n");
- break;
- }
-
- len += sprintf(buf+len, "Highest I2O version supported: ");
- switch(workspace[3]) {
- case 0x00:
- len += sprintf(buf+len,"1.0\n");
- break;
- case 0x01:
- len += sprintf(buf+len,"1.5\n");
- break;
- case 0x02:
- len += sprintf(buf+len,"2.0\n");
- break;
- }
- }
-*/
- len += sprintf(buf+len,"IOP ID : %0#5x\n",
- c->status_block->iop_id);
- len += sprintf(buf+len,"Host Unit ID : %0#6x\n",
- c->status_block->host_unit_id);
- len += sprintf(buf+len,"Segment Number : %0#5x\n",
- c->status_block->segment_number);
-
- len += sprintf(buf+len, "I2O version : ");
- switch (version) {
- case 0x00:
- len += sprintf(buf+len,"1.0\n");
- break;
- case 0x01:
- len += sprintf(buf+len,"1.5\n");
- break;
- case 0x02:
- len += sprintf(buf+len,"2.0\n");
- break;
- default:
- len += sprintf(buf+len,"Unknown version\n");
- }
-
- len += sprintf(buf+len, "IOP State : ");
- switch (c->status_block->iop_state) {
- case 0x01:
- len += sprintf(buf+len,"INIT\n");
- break;
-
- case 0x02:
- len += sprintf(buf+len,"RESET\n");
- break;
-
- case 0x04:
- len += sprintf(buf+len,"HOLD\n");
- break;
-
- case 0x05:
- len += sprintf(buf+len,"READY\n");
- break;
-
- case 0x08:
- len += sprintf(buf+len,"OPERATIONAL\n");
- break;
-
- case 0x10:
- len += sprintf(buf+len,"FAILED\n");
- break;
-
- case 0x11:
- len += sprintf(buf+len,"FAULTED\n");
- break;
-
- default:
- len += sprintf(buf+len,"Unknown\n");
- break;
- }
-
- len += sprintf(buf+len,"Messenger Type : ");
- switch (c->status_block->msg_type) {
- case 0x00:
- len += sprintf(buf+len,"Memory mapped\n");
- break;
- case 0x01:
- len += sprintf(buf+len,"Memory mapped only\n");
- break;
- case 0x02:
- len += sprintf(buf+len,"Remote only\n");
- break;
- case 0x03:
- len += sprintf(buf+len,"Memory mapped and remote\n");
- break;
- default:
- len += sprintf(buf+len,"Unknown\n");
- }
-
- len += sprintf(buf+len,"Inbound Frame Size : %d bytes\n",
- c->status_block->inbound_frame_size<<2);
- len += sprintf(buf+len,"Max Inbound Frames : %d\n",
- c->status_block->max_inbound_frames);
- len += sprintf(buf+len,"Current Inbound Frames : %d\n",
- c->status_block->cur_inbound_frames);
- len += sprintf(buf+len,"Max Outbound Frames : %d\n",
- c->status_block->max_outbound_frames);
-
- /* Spec doesn't say if NULL terminated or not... */
- memcpy(prodstr, c->status_block->product_id, 24);
- prodstr[24] = '\0';
- len += sprintf(buf+len,"Product ID : %s\n", prodstr);
- len += sprintf(buf+len,"Expected LCT Size : %d bytes\n",
- c->status_block->expected_lct_size);
-
- len += sprintf(buf+len,"IOP Capabilities\n");
- len += sprintf(buf+len," Context Field Size Support : ");
- switch (c->status_block->iop_capabilities & 0x0000003) {
- case 0:
- len += sprintf(buf+len,"Supports only 32-bit context fields\n");
- break;
- case 1:
- len += sprintf(buf+len,"Supports only 64-bit context fields\n");
- break;
- case 2:
- len += sprintf(buf+len,"Supports 32-bit and 64-bit context fields, "
- "but not concurrently\n");
- break;
- case 3:
- len += sprintf(buf+len,"Supports 32-bit and 64-bit context fields "
- "concurrently\n");
- break;
- default:
- len += sprintf(buf+len,"0x%08x\n",c->status_block->iop_capabilities);
- }
- len += sprintf(buf+len," Current Context Field Size : ");
- switch (c->status_block->iop_capabilities & 0x0000000C) {
- case 0:
- len += sprintf(buf+len,"not configured\n");
- break;
- case 4:
- len += sprintf(buf+len,"Supports only 32-bit context fields\n");
- break;
- case 8:
- len += sprintf(buf+len,"Supports only 64-bit context fields\n");
- break;
- case 12:
- len += sprintf(buf+len,"Supports both 32-bit or 64-bit context fields "
- "concurrently\n");
- break;
- default:
- len += sprintf(buf+len,"\n");
- }
- len += sprintf(buf+len," Inbound Peer Support : %s\n",
- (c->status_block->iop_capabilities & 0x00000010) ? "Supported" : "Not supported");
- len += sprintf(buf+len," Outbound Peer Support : %s\n",
- (c->status_block->iop_capabilities & 0x00000020) ? "Supported" : "Not supported");
- len += sprintf(buf+len," Peer to Peer Support : %s\n",
- (c->status_block->iop_capabilities & 0x00000040) ? "Supported" : "Not supported");
-
- len += sprintf(buf+len, "Desired private memory size : %d kB\n",
- c->status_block->desired_mem_size>>10);
- len += sprintf(buf+len, "Allocated private memory size : %d kB\n",
- c->status_block->current_mem_size>>10);
- len += sprintf(buf+len, "Private memory base address : %0#10x\n",
- c->status_block->current_mem_base);
- len += sprintf(buf+len, "Desired private I/O size : %d kB\n",
- c->status_block->desired_io_size>>10);
- len += sprintf(buf+len, "Allocated private I/O size : %d kB\n",
- c->status_block->current_io_size>>10);
- len += sprintf(buf+len, "Private I/O base address : %0#10x\n",
- c->status_block->current_io_base);
-
- spin_unlock(&i2o_proc_lock);
-
- return len;
-}
-
-int i2o_proc_read_hw(char *buf, char **start, off_t offset, int len,
- int *eof, void *data)
-{
- struct i2o_controller *c = (struct i2o_controller*)data;
- static u32 work32[5];
- static u8 *work8 = (u8*)work32;
- static u16 *work16 = (u16*)work32;
- int token;
- u32 hwcap;
-
- static char *cpu_table[] =
- {
- "Intel 80960 series",
- "AMD2900 series",
- "Motorola 68000 series",
- "ARM series",
- "MIPS series",
- "Sparc series",
- "PowerPC series",
- "Intel x86 series"
- };
-
- spin_lock(&i2o_proc_lock);
-
- len = 0;
-
- token = i2o_query_scalar(c, ADAPTER_TID, 0x0000, -1, &work32, sizeof(work32));
-
- if (token < 0) {
- len += i2o_report_query_status(buf+len, token,"0x0000 IOP Hardware");
- spin_unlock(&i2o_proc_lock);
- return len;
- }
-
- len += sprintf(buf+len, "I2O Vendor ID : %0#6x\n", work16[0]);
- len += sprintf(buf+len, "Product ID : %0#6x\n", work16[1]);
- len += sprintf(buf+len, "CPU : ");
- if(work8[16] >= 8)
- len += sprintf(buf+len, "Unknown\n");
- else
- len += sprintf(buf+len, "%s\n", cpu_table[work8[16]]);
- /* Anyone using ProcessorVersion? */
-
- len += sprintf(buf+len, "RAM : %dkB\n", work32[1]>>10);
- len += sprintf(buf+len, "Non-Volatile Mem : %dkB\n", work32[2]>>10);
-
- hwcap = work32[3];
- len += sprintf(buf+len, "Capabilities : 0x%08x\n", hwcap);
- len += sprintf(buf+len, " [%s] Self booting\n",
- (hwcap&0x00000001) ? "+" : "-");
- len += sprintf(buf+len, " [%s] Upgradable IRTOS\n",
- (hwcap&0x00000002) ? "+" : "-");
- len += sprintf(buf+len, " [%s] Supports downloading DDMs\n",
- (hwcap&0x00000004) ? "+" : "-");
- len += sprintf(buf+len, " [%s] Supports installing DDMs\n",
- (hwcap&0x00000008) ? "+" : "-");
- len += sprintf(buf+len, " [%s] Battery-backed RAM\n",
- (hwcap&0x00000010) ? "+" : "-");
-
- spin_unlock(&i2o_proc_lock);
-
- return len;
-}
-
-
-/* Executive group 0003h - Executing DDM List (table) */
-int i2o_proc_read_ddm_table(char *buf, char **start, off_t offset, int len,
- int *eof, void *data)
-{
- struct i2o_controller *c = (struct i2o_controller*)data;
- int token;
- int i;
-
- typedef struct _i2o_exec_execute_ddm_table {
- u16 ddm_tid;
- u8 module_type;
- u8 reserved;
- u16 i2o_vendor_id;
- u16 module_id;
- u8 module_name_version[28];
- u32 data_size;
- u32 code_size;
- } i2o_exec_execute_ddm_table;
-
- struct
- {
- u16 result_count;
- u16 pad;
- u16 block_size;
- u8 block_status;
- u8 error_info_size;
- u16 row_count;
- u16 more_flag;
- i2o_exec_execute_ddm_table ddm_table[MAX_I2O_MODULES];
- } result;
-
- i2o_exec_execute_ddm_table ddm_table;
-
- spin_lock(&i2o_proc_lock);
- len = 0;
-
- token = i2o_query_table(I2O_PARAMS_TABLE_GET,
- c, ADAPTER_TID,
- 0x0003, -1,
- NULL, 0,
- &result, sizeof(result));
-
- if (token < 0) {
- len += i2o_report_query_status(buf+len, token,"0x0003 Executing DDM List");
- spin_unlock(&i2o_proc_lock);
- return len;
- }
-
- len += sprintf(buf+len, "Tid Module_type Vendor Mod_id Module_name Vrs Data_size Code_size\n");
- ddm_table=result.ddm_table[0];
-
- for(i=0; i < result.row_count; ddm_table=result.ddm_table[++i])
- {
- len += sprintf(buf+len, "0x%03x ", ddm_table.ddm_tid & 0xFFF);
-
- switch(ddm_table.module_type)
- {
- case 0x01:
- len += sprintf(buf+len, "Downloaded DDM ");
- break;
- case 0x22:
- len += sprintf(buf+len, "Embedded DDM ");
- break;
- default:
- len += sprintf(buf+len, " ");
- }
-
- len += sprintf(buf+len, "%-#7x", ddm_table.i2o_vendor_id);
- len += sprintf(buf+len, "%-#8x", ddm_table.module_id);
- len += sprintf(buf+len, "%-29s", chtostr(ddm_table.module_name_version, 28));
- len += sprintf(buf+len, "%9d ", ddm_table.data_size);
- len += sprintf(buf+len, "%8d", ddm_table.code_size);
-
- len += sprintf(buf+len, "\n");
- }
-
- spin_unlock(&i2o_proc_lock);
-
- return len;
-}
-
-
-/* Executive group 0004h - Driver Store (scalar) */
-int i2o_proc_read_driver_store(char *buf, char **start, off_t offset, int len,
- int *eof, void *data)
-{
- struct i2o_controller *c = (struct i2o_controller*)data;
- u32 work32[8];
- int token;
-
- spin_lock(&i2o_proc_lock);
-
- len = 0;
-
- token = i2o_query_scalar(c, ADAPTER_TID, 0x0004, -1, &work32, sizeof(work32));
- if (token < 0) {
- len += i2o_report_query_status(buf+len, token,"0x0004 Driver Store");
- spin_unlock(&i2o_proc_lock);
- return len;
- }
-
- len += sprintf(buf+len, "Module limit : %d\n"
- "Module count : %d\n"
- "Current space : %d kB\n"
- "Free space : %d kB\n",
- work32[0], work32[1], work32[2]>>10, work32[3]>>10);
-
- spin_unlock(&i2o_proc_lock);
-
- return len;
-}
-
-
-/* Executive group 0005h - Driver Store Table (table) */
-int i2o_proc_read_drivers_stored(char *buf, char **start, off_t offset,
- int len, int *eof, void *data)
-{
- typedef struct _i2o_driver_store {
- u16 stored_ddm_index;
- u8 module_type;
- u8 reserved;
- u16 i2o_vendor_id;
- u16 module_id;
- u8 module_name_version[28];
- u8 date[8];
- u32 module_size;
- u32 mpb_size;
- u32 module_flags;
- } i2o_driver_store_table;
-
- struct i2o_controller *c = (struct i2o_controller*)data;
- int token;
- int i;
-
- typedef struct
- {
- u16 result_count;
- u16 pad;
- u16 block_size;
- u8 block_status;
- u8 error_info_size;
- u16 row_count;
- u16 more_flag;
- i2o_driver_store_table dst[MAX_I2O_MODULES];
- } i2o_driver_result_table;
-
- i2o_driver_result_table *result;
- i2o_driver_store_table *dst;
-
- spin_lock(&i2o_proc_lock);
-
- len = 0;
-
- result = kmalloc(sizeof(i2o_driver_result_table), GFP_KERNEL);
- if(result == NULL)
- return -ENOMEM;
-
- token = i2o_query_table(I2O_PARAMS_TABLE_GET,
- c, ADAPTER_TID, 0x0005, -1, NULL, 0,
- result, sizeof(*result));
-
- if (token < 0) {
- len += i2o_report_query_status(buf+len, token,"0x0005 DRIVER STORE TABLE");
- spin_unlock(&i2o_proc_lock);
- kfree(result);
- return len;
- }
-
- len += sprintf(buf+len, "# Module_type Vendor Mod_id Module_name Vrs"
- "Date Mod_size Par_size Flags\n");
- for(i=0, dst=&result->dst[0]; i < result->row_count; dst=&result->dst[++i])
- {
- len += sprintf(buf+len, "%-3d", dst->stored_ddm_index);
- switch(dst->module_type)
- {
- case 0x01:
- len += sprintf(buf+len, "Downloaded DDM ");
- break;
- case 0x22:
- len += sprintf(buf+len, "Embedded DDM ");
- break;
- default:
- len += sprintf(buf+len, " ");
- }
-
-#if 0
- if(c->i2oversion == 0x02)
- len += sprintf(buf+len, "%-d", dst->module_state);
-#endif
-
- len += sprintf(buf+len, "%-#7x", dst->i2o_vendor_id);
- len += sprintf(buf+len, "%-#8x", dst->module_id);
- len += sprintf(buf+len, "%-29s", chtostr(dst->module_name_version,28));
- len += sprintf(buf+len, "%-9s", chtostr(dst->date,8));
- len += sprintf(buf+len, "%8d ", dst->module_size);
- len += sprintf(buf+len, "%8d ", dst->mpb_size);
- len += sprintf(buf+len, "0x%04x", dst->module_flags);
-#if 0
- if(c->i2oversion == 0x02)
- len += sprintf(buf+len, "%d",
- dst->notification_level);
-#endif
- len += sprintf(buf+len, "\n");
- }
-
- spin_unlock(&i2o_proc_lock);
- kfree(result);
- return len;
-}
-
-
-/* Generic group F000h - Params Descriptor (table) */
-int i2o_proc_read_groups(char *buf, char **start, off_t offset, int len,
- int *eof, void *data)
-{
- struct i2o_device *d = (struct i2o_device*)data;
- int token;
- int i;
- u8 properties;
-
- typedef struct _i2o_group_info
- {
- u16 group_number;
- u16 field_count;
- u16 row_count;
- u8 properties;
- u8 reserved;
- } i2o_group_info;
-
- struct
- {
- u16 result_count;
- u16 pad;
- u16 block_size;
- u8 block_status;
- u8 error_info_size;
- u16 row_count;
- u16 more_flag;
- i2o_group_info group[256];
- } result;
-
- spin_lock(&i2o_proc_lock);
-
- len = 0;
-
- token = i2o_query_table(I2O_PARAMS_TABLE_GET,
- d->controller, d->lct_data.tid, 0xF000, -1, NULL, 0,
- &result, sizeof(result));
-
- if (token < 0) {
- len = i2o_report_query_status(buf+len, token, "0xF000 Params Descriptor");
- spin_unlock(&i2o_proc_lock);
- return len;
- }
-
- len += sprintf(buf+len, "# Group FieldCount RowCount Type Add Del Clear\n");
-
- for (i=0; i < result.row_count; i++)
- {
- len += sprintf(buf+len, "%-3d", i);
- len += sprintf(buf+len, "0x%04X ", result.group[i].group_number);
- len += sprintf(buf+len, "%10d ", result.group[i].field_count);
- len += sprintf(buf+len, "%8d ", result.group[i].row_count);
-
- properties = result.group[i].properties;
- if (properties & 0x1) len += sprintf(buf+len, "Table ");
- else len += sprintf(buf+len, "Scalar ");
- if (properties & 0x2) len += sprintf(buf+len, " + ");
- else len += sprintf(buf+len, " - ");
- if (properties & 0x4) len += sprintf(buf+len, " + ");
- else len += sprintf(buf+len, " - ");
- if (properties & 0x8) len += sprintf(buf+len, " + ");
- else len += sprintf(buf+len, " - ");
-
- len += sprintf(buf+len, "\n");
- }
-
- if (result.more_flag)
- len += sprintf(buf+len, "There is more...\n");
-
- spin_unlock(&i2o_proc_lock);
-
- return len;
-}
-
-
-/* Generic group F001h - Physical Device Table (table) */
-int i2o_proc_read_phys_device(char *buf, char **start, off_t offset, int len,
- int *eof, void *data)
-{
- struct i2o_device *d = (struct i2o_device*)data;
- int token;
- int i;
-
- struct
- {
- u16 result_count;
- u16 pad;
- u16 block_size;
- u8 block_status;
- u8 error_info_size;
- u16 row_count;
- u16 more_flag;
- u32 adapter_id[64];
- } result;
-
- spin_lock(&i2o_proc_lock);
- len = 0;
-
- token = i2o_query_table(I2O_PARAMS_TABLE_GET,
- d->controller, d->lct_data.tid,
- 0xF001, -1, NULL, 0,
- &result, sizeof(result));
-
- if (token < 0) {
- len += i2o_report_query_status(buf+len, token,"0xF001 Physical Device Table");
- spin_unlock(&i2o_proc_lock);
- return len;
- }
-
- if (result.row_count)
- len += sprintf(buf+len, "# AdapterId\n");
-
- for (i=0; i < result.row_count; i++)
- {
- len += sprintf(buf+len, "%-2d", i);
- len += sprintf(buf+len, "%#7x\n", result.adapter_id[i]);
- }
-
- if (result.more_flag)
- len += sprintf(buf+len, "There is more...\n");
-
- spin_unlock(&i2o_proc_lock);
- return len;
-}
-
-/* Generic group F002h - Claimed Table (table) */
-int i2o_proc_read_claimed(char *buf, char **start, off_t offset, int len,
- int *eof, void *data)
-{
- struct i2o_device *d = (struct i2o_device*)data;
- int token;
- int i;
-
- struct {
- u16 result_count;
- u16 pad;
- u16 block_size;
- u8 block_status;
- u8 error_info_size;
- u16 row_count;
- u16 more_flag;
- u16 claimed_tid[64];
- } result;
-
- spin_lock(&i2o_proc_lock);
- len = 0;
-
- token = i2o_query_table(I2O_PARAMS_TABLE_GET,
- d->controller, d->lct_data.tid,
- 0xF002, -1, NULL, 0,
- &result, sizeof(result));
-
- if (token < 0) {
- len += i2o_report_query_status(buf+len, token,"0xF002 Claimed Table");
- spin_unlock(&i2o_proc_lock);
- return len;
- }
-
- if (result.row_count)
- len += sprintf(buf+len, "# ClaimedTid\n");
-
- for (i=0; i < result.row_count; i++)
- {
- len += sprintf(buf+len, "%-2d", i);
- len += sprintf(buf+len, "%#7x\n", result.claimed_tid[i]);
- }
-
- if (result.more_flag)
- len += sprintf(buf+len, "There is more...\n");
-
- spin_unlock(&i2o_proc_lock);
- return len;
-}
-
-/* Generic group F003h - User Table (table) */
-int i2o_proc_read_users(char *buf, char **start, off_t offset, int len,
- int *eof, void *data)
-{
- struct i2o_device *d = (struct i2o_device*)data;
- int token;
- int i;
-
- typedef struct _i2o_user_table
- {
- u16 instance;
- u16 user_tid;
- u8 claim_type;
- u8 reserved1;
- u16 reserved2;
- } i2o_user_table;
-
- struct
- {
- u16 result_count;
- u16 pad;
- u16 block_size;
- u8 block_status;
- u8 error_info_size;
- u16 row_count;
- u16 more_flag;
- i2o_user_table user[64];
- } result;
-
- spin_lock(&i2o_proc_lock);
- len = 0;
-
- token = i2o_query_table(I2O_PARAMS_TABLE_GET,
- d->controller, d->lct_data.tid,
- 0xF003, -1, NULL, 0,
- &result, sizeof(result));
-
- if (token < 0) {
- len += i2o_report_query_status(buf+len, token,"0xF003 User Table");
- spin_unlock(&i2o_proc_lock);
- return len;
- }
-
- len += sprintf(buf+len, "# Instance UserTid ClaimType\n");
-
- for(i=0; i < result.row_count; i++)
- {
- len += sprintf(buf+len, "%-3d", i);
- len += sprintf(buf+len, "%#8x ", result.user[i].instance);
- len += sprintf(buf+len, "%#7x ", result.user[i].user_tid);
- len += sprintf(buf+len, "%#9x\n", result.user[i].claim_type);
- }
-
- if (result.more_flag)
- len += sprintf(buf+len, "There is more...\n");
-
- spin_unlock(&i2o_proc_lock);
- return len;
-}
-
-/* Generic group F005h - Private message extensions (table) (optional) */
-int i2o_proc_read_priv_msgs(char *buf, char **start, off_t offset, int len,
- int *eof, void *data)
-{
- struct i2o_device *d = (struct i2o_device*)data;
- int token;
- int i;
-
- typedef struct _i2o_private
- {
- u16 ext_instance;
- u16 organization_id;
- u16 x_function_code;
- } i2o_private;
-
- struct
- {
- u16 result_count;
- u16 pad;
- u16 block_size;
- u8 block_status;
- u8 error_info_size;
- u16 row_count;
- u16 more_flag;
- i2o_private extension[64];
- } result;
-
- spin_lock(&i2o_proc_lock);
-
- len = 0;
-
- token = i2o_query_table(I2O_PARAMS_TABLE_GET,
- d->controller, d->lct_data.tid,
- 0xF000, -1,
- NULL, 0,
- &result, sizeof(result));
-
- if (token < 0) {
- len += i2o_report_query_status(buf+len, token,"0xF005 Private Message Extensions (optional)");
- spin_unlock(&i2o_proc_lock);
- return len;
- }
-
- len += sprintf(buf+len, "Instance# OrgId FunctionCode\n");
-
- for(i=0; i < result.row_count; i++)
- {
- len += sprintf(buf+len, "%0#9x ", result.extension[i].ext_instance);
- len += sprintf(buf+len, "%0#6x ", result.extension[i].organization_id);
- len += sprintf(buf+len, "%0#6x", result.extension[i].x_function_code);
-
- len += sprintf(buf+len, "\n");
- }
-
- if(result.more_flag)
- len += sprintf(buf+len, "There is more...\n");
-
- spin_unlock(&i2o_proc_lock);
-
- return len;
-}
-
-
-/* Generic group F006h - Authorized User Table (table) */
-int i2o_proc_read_authorized_users(char *buf, char **start, off_t offset, int len,
- int *eof, void *data)
-{
- struct i2o_device *d = (struct i2o_device*)data;
- int token;
- int i;
-
- struct
- {
- u16 result_count;
- u16 pad;
- u16 block_size;
- u8 block_status;
- u8 error_info_size;
- u16 row_count;
- u16 more_flag;
- u32 alternate_tid[64];
- } result;
-
- spin_lock(&i2o_proc_lock);
- len = 0;
-
- token = i2o_query_table(I2O_PARAMS_TABLE_GET,
- d->controller, d->lct_data.tid,
- 0xF006, -1,
- NULL, 0,
- &result, sizeof(result));
-
- if (token < 0) {
- len += i2o_report_query_status(buf+len, token,"0xF006 Authorized User Table");
- spin_unlock(&i2o_proc_lock);
- return len;
- }
-
- if (result.row_count)
- len += sprintf(buf+len, "# AlternateTid\n");
-
- for(i=0; i < result.row_count; i++)
- {
- len += sprintf(buf+len, "%-2d", i);
- len += sprintf(buf+len, "%#7x\n", result.alternate_tid[i]);
- }
-
- if (result.more_flag)
- len += sprintf(buf+len, "There is more...\n");
-
- spin_unlock(&i2o_proc_lock);
- return len;
-}
-
-
-/* Generic group F100h - Device Identity (scalar) */
-int i2o_proc_read_dev_identity(char *buf, char **start, off_t offset, int len,
- int *eof, void *data)
-{
- struct i2o_device *d = (struct i2o_device*)data;
- static u32 work32[128]; // allow for "stuff" + up to 256 byte (max) serial number
- // == (allow) 512d bytes (max)
- static u16 *work16 = (u16*)work32;
- int token;
-
- spin_lock(&i2o_proc_lock);
-
- len = 0;
-
- token = i2o_query_scalar(d->controller, d->lct_data.tid,
- 0xF100, -1,
- &work32, sizeof(work32));
-
- if (token < 0) {
- len += i2o_report_query_status(buf+len, token ,"0xF100 Device Identity");
- spin_unlock(&i2o_proc_lock);
- return len;
- }
-
- len += sprintf(buf, "Device Class : %s\n", i2o_get_class_name(work16[0]));
- len += sprintf(buf+len, "Owner TID : %0#5x\n", work16[2]);
- len += sprintf(buf+len, "Parent TID : %0#5x\n", work16[3]);
- len += sprintf(buf+len, "Vendor info : %s\n", chtostr((u8 *)(work32+2), 16));
- len += sprintf(buf+len, "Product info : %s\n", chtostr((u8 *)(work32+6), 16));
- len += sprintf(buf+len, "Description : %s\n", chtostr((u8 *)(work32+10), 16));
- len += sprintf(buf+len, "Product rev. : %s\n", chtostr((u8 *)(work32+14), 8));
-
- len += sprintf(buf+len, "Serial number : ");
- len = print_serial_number(buf, len,
- (u8*)(work32+16),
- /* allow for SNLen plus
- * possible trailing '\0'
- */
- sizeof(work32)-(16*sizeof(u32))-2
- );
- len += sprintf(buf+len, "\n");
-
- spin_unlock(&i2o_proc_lock);
-
- return len;
-}
-
-
-int i2o_proc_read_dev_name(char *buf, char **start, off_t offset, int len,
- int *eof, void *data)
-{
- struct i2o_device *d = (struct i2o_device*)data;
-
- if ( d->dev_name[0] == '\0' )
- return 0;
-
- len = sprintf(buf, "%s\n", d->dev_name);
-
- return len;
-}
-
-
-/* Generic group F101h - DDM Identity (scalar) */
-int i2o_proc_read_ddm_identity(char *buf, char **start, off_t offset, int len,
- int *eof, void *data)
-{
- struct i2o_device *d = (struct i2o_device*)data;
- int token;
-
- struct
- {
- u16 ddm_tid;
- u8 module_name[24];
- u8 module_rev[8];
- u8 sn_format;
- u8 serial_number[12];
- u8 pad[256]; // allow up to 256 byte (max) serial number
- } result;
-
- spin_lock(&i2o_proc_lock);
-
- len = 0;
-
- token = i2o_query_scalar(d->controller, d->lct_data.tid,
- 0xF101, -1,
- &result, sizeof(result));
-
- if (token < 0) {
- len += i2o_report_query_status(buf+len, token,"0xF101 DDM Identity");
- spin_unlock(&i2o_proc_lock);
- return len;
- }
-
- len += sprintf(buf, "Registering DDM TID : 0x%03x\n", result.ddm_tid);
- len += sprintf(buf+len, "Module name : %s\n", chtostr(result.module_name, 24));
- len += sprintf(buf+len, "Module revision : %s\n", chtostr(result.module_rev, 8));
-
- len += sprintf(buf+len, "Serial number : ");
- len = print_serial_number(buf, len, result.serial_number, sizeof(result)-36);
- /* allow for SNLen plus possible trailing '\0' */
-
- len += sprintf(buf+len, "\n");
-
- spin_unlock(&i2o_proc_lock);
-
- return len;
-}
-
-/* Generic group F102h - User Information (scalar) */
-int i2o_proc_read_uinfo(char *buf, char **start, off_t offset, int len,
- int *eof, void *data)
-{
- struct i2o_device *d = (struct i2o_device*)data;
- int token;
-
- struct
- {
- u8 device_name[64];
- u8 service_name[64];
- u8 physical_location[64];
- u8 instance_number[4];
- } result;
-
- spin_lock(&i2o_proc_lock);
- len = 0;
-
- token = i2o_query_scalar(d->controller, d->lct_data.tid,
- 0xF102, -1,
- &result, sizeof(result));
-
- if (token < 0) {
- len += i2o_report_query_status(buf+len, token,"0xF102 User Information");
- spin_unlock(&i2o_proc_lock);
- return len;
- }
-
- len += sprintf(buf, "Device name : %s\n", chtostr(result.device_name, 64));
- len += sprintf(buf+len, "Service name : %s\n", chtostr(result.service_name, 64));
- len += sprintf(buf+len, "Physical name : %s\n", chtostr(result.physical_location, 64));
- len += sprintf(buf+len, "Instance number : %s\n", chtostr(result.instance_number, 4));
-
- spin_unlock(&i2o_proc_lock);
- return len;
-}
-
-/* Generic group F103h - SGL Operating Limits (scalar) */
-int i2o_proc_read_sgl_limits(char *buf, char **start, off_t offset, int len,
- int *eof, void *data)
-{
- struct i2o_device *d = (struct i2o_device*)data;
- static u32 work32[12];
- static u16 *work16 = (u16 *)work32;
- static u8 *work8 = (u8 *)work32;
- int token;
-
- spin_lock(&i2o_proc_lock);
-
- len = 0;
-
- token = i2o_query_scalar(d->controller, d->lct_data.tid,
- 0xF103, -1,
- &work32, sizeof(work32));
-
- if (token < 0) {
- len += i2o_report_query_status(buf+len, token,"0xF103 SGL Operating Limits");
- spin_unlock(&i2o_proc_lock);
- return len;
- }
-
- len += sprintf(buf, "SGL chain size : %d\n", work32[0]);
- len += sprintf(buf+len, "Max SGL chain size : %d\n", work32[1]);
- len += sprintf(buf+len, "SGL chain size target : %d\n", work32[2]);
- len += sprintf(buf+len, "SGL frag count : %d\n", work16[6]);
- len += sprintf(buf+len, "Max SGL frag count : %d\n", work16[7]);
- len += sprintf(buf+len, "SGL frag count target : %d\n", work16[8]);
-
- if (d->i2oversion == 0x02)
- {
- len += sprintf(buf+len, "SGL data alignment : %d\n", work16[8]);
- len += sprintf(buf+len, "SGL addr limit : %d\n", work8[20]);
- len += sprintf(buf+len, "SGL addr sizes supported : ");
- if (work8[21] & 0x01)
- len += sprintf(buf+len, "32 bit ");
- if (work8[21] & 0x02)
- len += sprintf(buf+len, "64 bit ");
- if (work8[21] & 0x04)
- len += sprintf(buf+len, "96 bit ");
- if (work8[21] & 0x08)
- len += sprintf(buf+len, "128 bit ");
- len += sprintf(buf+len, "\n");
- }
-
- spin_unlock(&i2o_proc_lock);
-
- return len;
-}
-
-/* Generic group F200h - Sensors (scalar) */
-int i2o_proc_read_sensors(char *buf, char **start, off_t offset, int len,
- int *eof, void *data)
-{
- struct i2o_device *d = (struct i2o_device*)data;
- int token;
-
- struct
- {
- u16 sensor_instance;
- u8 component;
- u16 component_instance;
- u8 sensor_class;
- u8 sensor_type;
- u8 scaling_exponent;
- u32 actual_reading;
- u32 minimum_reading;
- u32 low2lowcat_treshold;
- u32 lowcat2low_treshold;
- u32 lowwarn2low_treshold;
- u32 low2lowwarn_treshold;
- u32 norm2lowwarn_treshold;
- u32 lowwarn2norm_treshold;
- u32 nominal_reading;
- u32 hiwarn2norm_treshold;
- u32 norm2hiwarn_treshold;
- u32 high2hiwarn_treshold;
- u32 hiwarn2high_treshold;
- u32 hicat2high_treshold;
- u32 hi2hicat_treshold;
- u32 maximum_reading;
- u8 sensor_state;
- u16 event_enable;
- } result;
-
- spin_lock(&i2o_proc_lock);
- len = 0;
-
- token = i2o_query_scalar(d->controller, d->lct_data.tid,
- 0xF200, -1,
- &result, sizeof(result));
-
- if (token < 0) {
- len += i2o_report_query_status(buf+len, token,"0xF200 Sensors (optional)");
- spin_unlock(&i2o_proc_lock);
- return len;
- }
-
- len += sprintf(buf+len, "Sensor instance : %d\n", result.sensor_instance);
-
- len += sprintf(buf+len, "Component : %d = ", result.component);
- switch (result.component)
- {
- case 0: len += sprintf(buf+len, "Other");
- break;
- case 1: len += sprintf(buf+len, "Planar logic Board");
- break;
- case 2: len += sprintf(buf+len, "CPU");
- break;
- case 3: len += sprintf(buf+len, "Chassis");
- break;
- case 4: len += sprintf(buf+len, "Power Supply");
- break;
- case 5: len += sprintf(buf+len, "Storage");
- break;
- case 6: len += sprintf(buf+len, "External");
- break;
- }
- len += sprintf(buf+len,"\n");
-
- len += sprintf(buf+len, "Component instance : %d\n", result.component_instance);
- len += sprintf(buf+len, "Sensor class : %s\n",
- result.sensor_class ? "Analog" : "Digital");
-
- len += sprintf(buf+len, "Sensor type : %d = ",result.sensor_type);
- switch (result.sensor_type)
- {
- case 0: len += sprintf(buf+len, "Other\n");
- break;
- case 1: len += sprintf(buf+len, "Thermal\n");
- break;
- case 2: len += sprintf(buf+len, "DC voltage (DC volts)\n");
- break;
- case 3: len += sprintf(buf+len, "AC voltage (AC volts)\n");
- break;
- case 4: len += sprintf(buf+len, "DC current (DC amps)\n");
- break;
- case 5: len += sprintf(buf+len, "AC current (AC volts)\n");
- break;
- case 6: len += sprintf(buf+len, "Door open\n");
- break;
- case 7: len += sprintf(buf+len, "Fan operational\n");
- break;
- }
-
- len += sprintf(buf+len, "Scaling exponent : %d\n", result.scaling_exponent);
- len += sprintf(buf+len, "Actual reading : %d\n", result.actual_reading);
- len += sprintf(buf+len, "Minimum reading : %d\n", result.minimum_reading);
- len += sprintf(buf+len, "Low2LowCat treshold : %d\n", result.low2lowcat_treshold);
- len += sprintf(buf+len, "LowCat2Low treshold : %d\n", result.lowcat2low_treshold);
- len += sprintf(buf+len, "LowWarn2Low treshold : %d\n", result.lowwarn2low_treshold);
- len += sprintf(buf+len, "Low2LowWarn treshold : %d\n", result.low2lowwarn_treshold);
- len += sprintf(buf+len, "Norm2LowWarn treshold : %d\n", result.norm2lowwarn_treshold);
- len += sprintf(buf+len, "LowWarn2Norm treshold : %d\n", result.lowwarn2norm_treshold);
- len += sprintf(buf+len, "Nominal reading : %d\n", result.nominal_reading);
- len += sprintf(buf+len, "HiWarn2Norm treshold : %d\n", result.hiwarn2norm_treshold);
- len += sprintf(buf+len, "Norm2HiWarn treshold : %d\n", result.norm2hiwarn_treshold);
- len += sprintf(buf+len, "High2HiWarn treshold : %d\n", result.high2hiwarn_treshold);
- len += sprintf(buf+len, "HiWarn2High treshold : %d\n", result.hiwarn2high_treshold);
- len += sprintf(buf+len, "HiCat2High treshold : %d\n", result.hicat2high_treshold);
- len += sprintf(buf+len, "High2HiCat treshold : %d\n", result.hi2hicat_treshold);
- len += sprintf(buf+len, "Maximum reading : %d\n", result.maximum_reading);
-
- len += sprintf(buf+len, "Sensor state : %d = ", result.sensor_state);
- switch (result.sensor_state)
- {
- case 0: len += sprintf(buf+len, "Normal\n");
- break;
- case 1: len += sprintf(buf+len, "Abnormal\n");
- break;
- case 2: len += sprintf(buf+len, "Unknown\n");
- break;
- case 3: len += sprintf(buf+len, "Low Catastrophic (LoCat)\n");
- break;
- case 4: len += sprintf(buf+len, "Low (Low)\n");
- break;
- case 5: len += sprintf(buf+len, "Low Warning (LoWarn)\n");
- break;
- case 6: len += sprintf(buf+len, "High Warning (HiWarn)\n");
- break;
- case 7: len += sprintf(buf+len, "High (High)\n");
- break;
- case 8: len += sprintf(buf+len, "High Catastrophic (HiCat)\n");
- break;
- }
-
- len += sprintf(buf+len, "Event_enable : 0x%02X\n", result.event_enable);
- len += sprintf(buf+len, " [%s] Operational state change. \n",
- (result.event_enable & 0x01) ? "+" : "-" );
- len += sprintf(buf+len, " [%s] Low catastrophic. \n",
- (result.event_enable & 0x02) ? "+" : "-" );
- len += sprintf(buf+len, " [%s] Low reading. \n",
- (result.event_enable & 0x04) ? "+" : "-" );
- len += sprintf(buf+len, " [%s] Low warning. \n",
- (result.event_enable & 0x08) ? "+" : "-" );
- len += sprintf(buf+len, " [%s] Change back to normal from out of range state. \n",
- (result.event_enable & 0x10) ? "+" : "-" );
- len += sprintf(buf+len, " [%s] High warning. \n",
- (result.event_enable & 0x20) ? "+" : "-" );
- len += sprintf(buf+len, " [%s] High reading. \n",
- (result.event_enable & 0x40) ? "+" : "-" );
- len += sprintf(buf+len, " [%s] High catastrophic. \n",
- (result.event_enable & 0x80) ? "+" : "-" );
-
- spin_unlock(&i2o_proc_lock);
- return len;
-}
-
-
-static int print_serial_number(char *buff, int pos, u8 *serialno, int max_len)
-{
- int i;
-
- /* 19990419 -sralston
- * The I2O v1.5 (and v2.0 so far) "official specification"
- * got serial numbers WRONG!
- * Apparently, and despite what Section 3.4.4 says and
- * Figure 3-35 shows (pg 3-39 in the pdf doc),
- * the convention / consensus seems to be:
- * + First byte is SNFormat
- * + Second byte is SNLen (but only if SNFormat==7 (?))
- * + (v2.0) SCSI+BS may use IEEE Registered (64 or 128 bit) format
- */
- switch(serialno[0])
- {
- case I2O_SNFORMAT_BINARY: /* Binary */
- pos += sprintf(buff+pos, "0x");
- for(i = 0; i < serialno[1]; i++)
- {
- pos += sprintf(buff+pos, "%02X", serialno[2+i]);
- }
- break;
-
- case I2O_SNFORMAT_ASCII: /* ASCII */
- if ( serialno[1] < ' ' ) /* printable or SNLen? */
- {
- /* sanity */
- max_len = (max_len < serialno[1]) ? max_len : serialno[1];
- serialno[1+max_len] = '\0';
-
- /* just print it */
- pos += sprintf(buff+pos, "%s", &serialno[2]);
- }
- else
- {
- /* print chars for specified length */
- for(i = 0; i < serialno[1]; i++)
- {
- pos += sprintf(buff+pos, "%c", serialno[2+i]);
- }
- }
- break;
-
- case I2O_SNFORMAT_UNICODE: /* UNICODE */
- pos += sprintf(buff+pos, "UNICODE Format. Can't Display\n");
- break;
-
- case I2O_SNFORMAT_LAN48_MAC: /* LAN-48 MAC Address */
- pos += sprintf(buff+pos,
- "LAN-48 MAC address @ %02X:%02X:%02X:%02X:%02X:%02X",
- serialno[2], serialno[3],
- serialno[4], serialno[5],
- serialno[6], serialno[7]);
- break;
-
- case I2O_SNFORMAT_WAN: /* WAN MAC Address */
- /* FIXME: Figure out what a WAN access address looks like?? */
- pos += sprintf(buff+pos, "WAN Access Address");
- break;
-
-/* plus new in v2.0 */
- case I2O_SNFORMAT_LAN64_MAC: /* LAN-64 MAC Address */
- /* FIXME: Figure out what a LAN-64 address really looks like?? */
- pos += sprintf(buff+pos,
- "LAN-64 MAC address @ [?:%02X:%02X:?] %02X:%02X:%02X:%02X:%02X:%02X",
- serialno[8], serialno[9],
- serialno[2], serialno[3],
- serialno[4], serialno[5],
- serialno[6], serialno[7]);
- break;
-
-
- case I2O_SNFORMAT_DDM: /* I2O DDM */
- pos += sprintf(buff+pos,
- "DDM: Tid=%03Xh, Rsvd=%04Xh, OrgId=%04Xh",
- *(u16*)&serialno[2],
- *(u16*)&serialno[4],
- *(u16*)&serialno[6]);
- break;
-
- case I2O_SNFORMAT_IEEE_REG64: /* IEEE Registered (64-bit) */
- case I2O_SNFORMAT_IEEE_REG128: /* IEEE Registered (128-bit) */
- /* FIXME: Figure if this is even close?? */
- pos += sprintf(buff+pos,
- "IEEE NodeName(hi,lo)=(%08Xh:%08Xh), PortName(hi,lo)=(%08Xh:%08Xh)\n",
- *(u32*)&serialno[2],
- *(u32*)&serialno[6],
- *(u32*)&serialno[10],
- *(u32*)&serialno[14]);
- break;
-
-
- case I2O_SNFORMAT_UNKNOWN: /* Unknown 0 */
- case I2O_SNFORMAT_UNKNOWN2: /* Unknown 0xff */
- default:
- pos += sprintf(buff+pos, "Unknown data format (0x%02x)",
- serialno[0]);
- break;
- }
-
- return pos;
-}
-
-const char * i2o_get_connector_type(int conn)
-{
- int idx = 16;
- static char *i2o_connector_type[] = {
- "OTHER",
- "UNKNOWN",
- "AUI",
- "UTP",
- "BNC",
- "RJ45",
- "STP DB9",
- "FIBER MIC",
- "APPLE AUI",
- "MII",
- "DB9",
- "HSSDC",
- "DUPLEX SC FIBER",
- "DUPLEX ST FIBER",
- "TNC/BNC",
- "HW DEFAULT"
- };
-
- switch(conn)
- {
- case 0x00000000:
- idx = 0;
- break;
- case 0x00000001:
- idx = 1;
- break;
- case 0x00000002:
- idx = 2;
- break;
- case 0x00000003:
- idx = 3;
- break;
- case 0x00000004:
- idx = 4;
- break;
- case 0x00000005:
- idx = 5;
- break;
- case 0x00000006:
- idx = 6;
- break;
- case 0x00000007:
- idx = 7;
- break;
- case 0x00000008:
- idx = 8;
- break;
- case 0x00000009:
- idx = 9;
- break;
- case 0x0000000A:
- idx = 10;
- break;
- case 0x0000000B:
- idx = 11;
- break;
- case 0x0000000C:
- idx = 12;
- break;
- case 0x0000000D:
- idx = 13;
- break;
- case 0x0000000E:
- idx = 14;
- break;
- case 0xFFFFFFFF:
- idx = 15;
- break;
- }
-
- return i2o_connector_type[idx];
-}
-
-
-const char * i2o_get_connection_type(int conn)
-{
- int idx = 0;
- static char *i2o_connection_type[] = {
- "Unknown",
- "AUI",
- "10BASE5",
- "FIORL",
- "10BASE2",
- "10BROAD36",
- "10BASE-T",
- "10BASE-FP",
- "10BASE-FB",
- "10BASE-FL",
- "100BASE-TX",
- "100BASE-FX",
- "100BASE-T4",
- "1000BASE-SX",
- "1000BASE-LX",
- "1000BASE-CX",
- "1000BASE-T",
- "100VG-ETHERNET",
- "100VG-TOKEN RING",
- "4MBIT TOKEN RING",
- "16 Mb Token Ring",
- "125 MBAUD FDDI",
- "Point-to-point",
- "Arbitrated loop",
- "Public loop",
- "Fabric",
- "Emulation",
- "Other",
- "HW default"
- };
-
- switch(conn)
- {
- case I2O_LAN_UNKNOWN:
- idx = 0;
- break;
- case I2O_LAN_AUI:
- idx = 1;
- break;
- case I2O_LAN_10BASE5:
- idx = 2;
- break;
- case I2O_LAN_FIORL:
- idx = 3;
- break;
- case I2O_LAN_10BASE2:
- idx = 4;
- break;
- case I2O_LAN_10BROAD36:
- idx = 5;
- break;
- case I2O_LAN_10BASE_T:
- idx = 6;
- break;
- case I2O_LAN_10BASE_FP:
- idx = 7;
- break;
- case I2O_LAN_10BASE_FB:
- idx = 8;
- break;
- case I2O_LAN_10BASE_FL:
- idx = 9;
- break;
- case I2O_LAN_100BASE_TX:
- idx = 10;
- break;
- case I2O_LAN_100BASE_FX:
- idx = 11;
- break;
- case I2O_LAN_100BASE_T4:
- idx = 12;
- break;
- case I2O_LAN_1000BASE_SX:
- idx = 13;
- break;
- case I2O_LAN_1000BASE_LX:
- idx = 14;
- break;
- case I2O_LAN_1000BASE_CX:
- idx = 15;
- break;
- case I2O_LAN_1000BASE_T:
- idx = 16;
- break;
- case I2O_LAN_100VG_ETHERNET:
- idx = 17;
- break;
- case I2O_LAN_100VG_TR:
- idx = 18;
- break;
- case I2O_LAN_4MBIT:
- idx = 19;
- break;
- case I2O_LAN_16MBIT:
- idx = 20;
- break;
- case I2O_LAN_125MBAUD:
- idx = 21;
- break;
- case I2O_LAN_POINT_POINT:
- idx = 22;
- break;
- case I2O_LAN_ARB_LOOP:
- idx = 23;
- break;
- case I2O_LAN_PUBLIC_LOOP:
- idx = 24;
- break;
- case I2O_LAN_FABRIC:
- idx = 25;
- break;
- case I2O_LAN_EMULATION:
- idx = 26;
- break;
- case I2O_LAN_OTHER:
- idx = 27;
- break;
- case I2O_LAN_DEFAULT:
- idx = 28;
- break;
- }
-
- return i2o_connection_type[idx];
-}
-
-
-/* LAN group 0000h - Device info (scalar) */
-int i2o_proc_read_lan_dev_info(char *buf, char **start, off_t offset, int len,
- int *eof, void *data)
-{
- struct i2o_device *d = (struct i2o_device*)data;
- static u32 work32[56];
- static u8 *work8 = (u8*)work32;
- static u16 *work16 = (u16*)work32;
- static u64 *work64 = (u64*)work32;
- int token;
-
- spin_lock(&i2o_proc_lock);
- len = 0;
-
- token = i2o_query_scalar(d->controller, d->lct_data.tid,
- 0x0000, -1, &work32, 56*4);
- if (token < 0) {
- len += i2o_report_query_status(buf+len, token, "0x0000 LAN Device Info");
- spin_unlock(&i2o_proc_lock);
- return len;
- }
-
- len += sprintf(buf, "LAN Type : ");
- switch (work16[0])
- {
- case 0x0030:
- len += sprintf(buf+len, "Ethernet, ");
- break;
- case 0x0040:
- len += sprintf(buf+len, "100Base VG, ");
- break;
- case 0x0050:
- len += sprintf(buf+len, "Token Ring, ");
- break;
- case 0x0060:
- len += sprintf(buf+len, "FDDI, ");
- break;
- case 0x0070:
- len += sprintf(buf+len, "Fibre Channel, ");
- break;
- default:
- len += sprintf(buf+len, "Unknown type (0x%04x), ", work16[0]);
- break;
- }
-
- if (work16[1]&0x00000001)
- len += sprintf(buf+len, "emulated LAN, ");
- else
- len += sprintf(buf+len, "physical LAN port, ");
-
- if (work16[1]&0x00000002)
- len += sprintf(buf+len, "full duplex\n");
- else
- len += sprintf(buf+len, "simplex\n");
-
- len += sprintf(buf+len, "Address format : ");
- switch(work8[4]) {
- case 0x00:
- len += sprintf(buf+len, "IEEE 48bit\n");
- break;
- case 0x01:
- len += sprintf(buf+len, "FC IEEE\n");
- break;
- default:
- len += sprintf(buf+len, "Unknown (0x%02x)\n", work8[4]);
- break;
- }
-
- len += sprintf(buf+len, "State : ");
- switch(work8[5])
- {
- case 0x00:
- len += sprintf(buf+len, "Unknown\n");
- break;
- case 0x01:
- len += sprintf(buf+len, "Unclaimed\n");
- break;
- case 0x02:
- len += sprintf(buf+len, "Operational\n");
- break;
- case 0x03:
- len += sprintf(buf+len, "Suspended\n");
- break;
- case 0x04:
- len += sprintf(buf+len, "Resetting\n");
- break;
- case 0x05:
- len += sprintf(buf+len, "ERROR: ");
- if(work16[3]&0x0001)
- len += sprintf(buf+len, "TxCU inoperative ");
- if(work16[3]&0x0002)
- len += sprintf(buf+len, "RxCU inoperative ");
- if(work16[3]&0x0004)
- len += sprintf(buf+len, "Local mem alloc ");
- len += sprintf(buf+len, "\n");
- break;
- case 0x06:
- len += sprintf(buf+len, "Operational no Rx\n");
- break;
- case 0x07:
- len += sprintf(buf+len, "Suspended no Rx\n");
- break;
- default:
- len += sprintf(buf+len, "Unspecified\n");
- break;
- }
-
- len += sprintf(buf+len, "Min packet size : %d\n", work32[2]);
- len += sprintf(buf+len, "Max packet size : %d\n", work32[3]);
- len += sprintf(buf+len, "HW address : "
- "%02X:%02X:%02X:%02X:%02X:%02X:%02X:%02X\n",
- work8[16],work8[17],work8[18],work8[19],
- work8[20],work8[21],work8[22],work8[23]);
-
- len += sprintf(buf+len, "Max Tx wire speed : %d bps\n", (int)work64[3]);
- len += sprintf(buf+len, "Max Rx wire speed : %d bps\n", (int)work64[4]);
-
- len += sprintf(buf+len, "Min SDU packet size : 0x%08x\n", work32[10]);
- len += sprintf(buf+len, "Max SDU packet size : 0x%08x\n", work32[11]);
-
- spin_unlock(&i2o_proc_lock);
- return len;
-}
-
-/* LAN group 0001h - MAC address table (scalar) */
-int i2o_proc_read_lan_mac_addr(char *buf, char **start, off_t offset, int len,
- int *eof, void *data)
-{
- struct i2o_device *d = (struct i2o_device*)data;
- static u32 work32[48];
- static u8 *work8 = (u8*)work32;
- int token;
-
- spin_lock(&i2o_proc_lock);
- len = 0;
-
- token = i2o_query_scalar(d->controller, d->lct_data.tid,
- 0x0001, -1, &work32, 48*4);
- if (token < 0) {
- len += i2o_report_query_status(buf+len, token,"0x0001 LAN MAC Address");
- spin_unlock(&i2o_proc_lock);
- return len;
- }
-
- len += sprintf(buf, "Active address : "
- "%02X:%02X:%02X:%02X:%02X:%02X:%02X:%02X\n",
- work8[0],work8[1],work8[2],work8[3],
- work8[4],work8[5],work8[6],work8[7]);
- len += sprintf(buf+len, "Current address : "
- "%02X:%02X:%02X:%02X:%02X:%02X:%02X:%02X\n",
- work8[8],work8[9],work8[10],work8[11],
- work8[12],work8[13],work8[14],work8[15]);
- len += sprintf(buf+len, "Functional address mask : "
- "%02X:%02X:%02X:%02X:%02X:%02X:%02X:%02X\n",
- work8[16],work8[17],work8[18],work8[19],
- work8[20],work8[21],work8[22],work8[23]);
-
- len += sprintf(buf+len,"HW/DDM capabilities : 0x%08x\n", work32[7]);
- len += sprintf(buf+len," [%s] Unicast packets supported\n",
- (work32[7]&0x00000001)?"+":"-");
- len += sprintf(buf+len," [%s] Promiscuous mode supported\n",
- (work32[7]&0x00000002)?"+":"-");
- len += sprintf(buf+len," [%s] Promiscuous multicast mode supported\n",
- (work32[7]&0x00000004)?"+":"-");
- len += sprintf(buf+len," [%s] Broadcast reception disabling supported\n",
- (work32[7]&0x00000100)?"+":"-");
- len += sprintf(buf+len," [%s] Multicast reception disabling supported\n",
- (work32[7]&0x00000200)?"+":"-");
- len += sprintf(buf+len," [%s] Functional address disabling supported\n",
- (work32[7]&0x00000400)?"+":"-");
- len += sprintf(buf+len," [%s] MAC reporting supported\n",
- (work32[7]&0x00000800)?"+":"-");
-
- len += sprintf(buf+len,"Filter mask : 0x%08x\n", work32[6]);
- len += sprintf(buf+len," [%s] Unicast packets disable\n",
- (work32[6]&0x00000001)?"+":"-");
- len += sprintf(buf+len," [%s] Promiscuous mode enable\n",
- (work32[6]&0x00000002)?"+":"-");
- len += sprintf(buf+len," [%s] Promiscuous multicast mode enable\n",
- (work32[6]&0x00000004)?"+":"-");
- len += sprintf(buf+len," [%s] Broadcast packets disable\n",
- (work32[6]&0x00000100)?"+":"-");
- len += sprintf(buf+len," [%s] Multicast packets disable\n",
- (work32[6]&0x00000200)?"+":"-");
- len += sprintf(buf+len," [%s] Functional address disable\n",
- (work32[6]&0x00000400)?"+":"-");
-
- if (work32[7]&0x00000800) {
- len += sprintf(buf+len, " MAC reporting mode : ");
- if (work32[6]&0x00000800)
- len += sprintf(buf+len, "Pass only priority MAC packets to user\n");
- else if (work32[6]&0x00001000)
- len += sprintf(buf+len, "Pass all MAC packets to user\n");
- else if (work32[6]&0x00001800)
- len += sprintf(buf+len, "Pass all MAC packets (promiscuous) to user\n");
- else
- len += sprintf(buf+len, "Do not pass MAC packets to user\n");
- }
- len += sprintf(buf+len, "Number of multicast addresses : %d\n", work32[8]);
- len += sprintf(buf+len, "Perfect filtering for max %d multicast addresses\n",
- work32[9]);
- len += sprintf(buf+len, "Imperfect filtering for max %d multicast addresses\n",
- work32[10]);
-
- spin_unlock(&i2o_proc_lock);
-
- return len;
-}
-
-/* LAN group 0002h - Multicast MAC address table (table) */
-int i2o_proc_read_lan_mcast_addr(char *buf, char **start, off_t offset,
- int len, int *eof, void *data)
-{
- struct i2o_device *d = (struct i2o_device*)data;
- int token;
- int i;
- u8 mc_addr[8];
-
- struct
- {
- u16 result_count;
- u16 pad;
- u16 block_size;
- u8 block_status;
- u8 error_info_size;
- u16 row_count;
- u16 more_flag;
- u8 mc_addr[256][8];
- } result;
-
- spin_lock(&i2o_proc_lock);
- len = 0;
-
- token = i2o_query_table(I2O_PARAMS_TABLE_GET,
- d->controller, d->lct_data.tid, 0x0002, -1,
- NULL, 0, &result, sizeof(result));
-
- if (token < 0) {
- len += i2o_report_query_status(buf+len, token,"0x002 LAN Multicast MAC Address");
- spin_unlock(&i2o_proc_lock);
- return len;
- }
-
- for (i = 0; i < result.row_count; i++)
- {
- memcpy(mc_addr, result.mc_addr[i], 8);
-
- len += sprintf(buf+len, "MC MAC address[%d]: "
- "%02X:%02X:%02X:%02X:%02X:%02X:%02X:%02X\n",
- i, mc_addr[0], mc_addr[1], mc_addr[2],
- mc_addr[3], mc_addr[4], mc_addr[5],
- mc_addr[6], mc_addr[7]);
- }
-
- spin_unlock(&i2o_proc_lock);
- return len;
-}
-
-/* LAN group 0003h - Batch Control (scalar) */
-int i2o_proc_read_lan_batch_control(char *buf, char **start, off_t offset,
- int len, int *eof, void *data)
-{
- struct i2o_device *d = (struct i2o_device*)data;
- static u32 work32[9];
- int token;
-
- spin_lock(&i2o_proc_lock);
- len = 0;
-
- token = i2o_query_scalar(d->controller, d->lct_data.tid,
- 0x0003, -1, &work32, 9*4);
- if (token < 0) {
- len += i2o_report_query_status(buf+len, token,"0x0003 LAN Batch Control");
- spin_unlock(&i2o_proc_lock);
- return len;
- }
-
- len += sprintf(buf, "Batch mode ");
- if (work32[0]&0x00000001)
- len += sprintf(buf+len, "disabled");
- else
- len += sprintf(buf+len, "enabled");
- if (work32[0]&0x00000002)
- len += sprintf(buf+len, " (current setting)");
- if (work32[0]&0x00000004)
- len += sprintf(buf+len, ", forced");
- else
- len += sprintf(buf+len, ", toggle");
- len += sprintf(buf+len, "\n");
-
- len += sprintf(buf+len, "Max Rx batch count : %d\n", work32[5]);
- len += sprintf(buf+len, "Max Rx batch delay : %d\n", work32[6]);
- len += sprintf(buf+len, "Max Tx batch delay : %d\n", work32[7]);
- len += sprintf(buf+len, "Max Tx batch count : %d\n", work32[8]);
-
- spin_unlock(&i2o_proc_lock);
- return len;
-}
-
-/* LAN group 0004h - LAN Operation (scalar) */
-int i2o_proc_read_lan_operation(char *buf, char **start, off_t offset, int len,
- int *eof, void *data)
-{
- struct i2o_device *d = (struct i2o_device*)data;
- static u32 work32[5];
- int token;
-
- spin_lock(&i2o_proc_lock);
- len = 0;
-
- token = i2o_query_scalar(d->controller, d->lct_data.tid,
- 0x0004, -1, &work32, 20);
- if (token < 0) {
- len += i2o_report_query_status(buf+len, token,"0x0004 LAN Operation");
- spin_unlock(&i2o_proc_lock);
- return len;
- }
-
- len += sprintf(buf, "Packet prepadding (32b words) : %d\n", work32[0]);
- len += sprintf(buf+len, "Transmission error reporting : %s\n",
- (work32[1]&1)?"on":"off");
- len += sprintf(buf+len, "Bad packet handling : %s\n",
- (work32[1]&0x2)?"by host":"by DDM");
- len += sprintf(buf+len, "Packet orphan limit : %d\n", work32[2]);
-
- len += sprintf(buf+len, "Tx modes : 0x%08x\n", work32[3]);
- len += sprintf(buf+len, " [%s] HW CRC suppression\n",
- (work32[3]&0x00000004) ? "+" : "-");
- len += sprintf(buf+len, " [%s] HW IPv4 checksum\n",
- (work32[3]&0x00000100) ? "+" : "-");
- len += sprintf(buf+len, " [%s] HW TCP checksum\n",
- (work32[3]&0x00000200) ? "+" : "-");
- len += sprintf(buf+len, " [%s] HW UDP checksum\n",
- (work32[3]&0x00000400) ? "+" : "-");
- len += sprintf(buf+len, " [%s] HW RSVP checksum\n",
- (work32[3]&0x00000800) ? "+" : "-");
- len += sprintf(buf+len, " [%s] HW ICMP checksum\n",
- (work32[3]&0x00001000) ? "+" : "-");
- len += sprintf(buf+len, " [%s] Loopback suppression enable\n",
- (work32[3]&0x00002000) ? "+" : "-");
-
- len += sprintf(buf+len, "Rx modes : 0x%08x\n", work32[4]);
- len += sprintf(buf+len, " [%s] FCS in payload\n",
- (work32[4]&0x00000004) ? "+" : "-");
- len += sprintf(buf+len, " [%s] HW IPv4 checksum validation\n",
- (work32[4]&0x00000100) ? "+" : "-");
- len += sprintf(buf+len, " [%s] HW TCP checksum validation\n",
- (work32[4]&0x00000200) ? "+" : "-");
- len += sprintf(buf+len, " [%s] HW UDP checksum validation\n",
- (work32[4]&0x00000400) ? "+" : "-");
- len += sprintf(buf+len, " [%s] HW RSVP checksum validation\n",
- (work32[4]&0x00000800) ? "+" : "-");
- len += sprintf(buf+len, " [%s] HW ICMP checksum validation\n",
- (work32[4]&0x00001000) ? "+" : "-");
-
- spin_unlock(&i2o_proc_lock);
- return len;
-}
-
-/* LAN group 0005h - Media operation (scalar) */
-int i2o_proc_read_lan_media_operation(char *buf, char **start, off_t offset,
- int len, int *eof, void *data)
-{
- struct i2o_device *d = (struct i2o_device*)data;
- int token;
-
- struct
- {
- u32 connector_type;
- u32 connection_type;
- u64 current_tx_wire_speed;
- u64 current_rx_wire_speed;
- u8 duplex_mode;
- u8 link_status;
- u8 reserved;
- u8 duplex_mode_target;
- u32 connector_type_target;
- u32 connection_type_target;
- } result;
-
- spin_lock(&i2o_proc_lock);
- len = 0;
-
- token = i2o_query_scalar(d->controller, d->lct_data.tid,
- 0x0005, -1, &result, sizeof(result));
- if (token < 0) {
- len += i2o_report_query_status(buf+len, token, "0x0005 LAN Media Operation");
- spin_unlock(&i2o_proc_lock);
- return len;
- }
-
- len += sprintf(buf, "Connector type : %s\n",
- i2o_get_connector_type(result.connector_type));
- len += sprintf(buf+len, "Connection type : %s\n",
- i2o_get_connection_type(result.connection_type));
-
- len += sprintf(buf+len, "Current Tx wire speed : %d bps\n", (int)result.current_tx_wire_speed);
- len += sprintf(buf+len, "Current Rx wire speed : %d bps\n", (int)result.current_rx_wire_speed);
- len += sprintf(buf+len, "Duplex mode : %s duplex\n",
- (result.duplex_mode)?"Full":"Half");
-
- len += sprintf(buf+len, "Link status : ");
- switch (result.link_status)
- {
- case 0x00:
- len += sprintf(buf+len, "Unknown\n");
- break;
- case 0x01:
- len += sprintf(buf+len, "Normal\n");
- break;
- case 0x02:
- len += sprintf(buf+len, "Failure\n");
- break;
- case 0x03:
- len += sprintf(buf+len, "Reset\n");
- break;
- default:
- len += sprintf(buf+len, "Unspecified\n");
- }
-
- len += sprintf(buf+len, "Duplex mode target : ");
- switch (result.duplex_mode_target){
- case 0:
- len += sprintf(buf+len, "Half duplex\n");
- break;
- case 1:
- len += sprintf(buf+len, "Full duplex\n");
- break;
- default:
- len += sprintf(buf+len, "\n");
- }
-
- len += sprintf(buf+len, "Connector type target : %s\n",
- i2o_get_connector_type(result.connector_type_target));
- len += sprintf(buf+len, "Connection type target : %s\n",
- i2o_get_connection_type(result.connection_type_target));
-
- spin_unlock(&i2o_proc_lock);
- return len;
-}
-
-/* LAN group 0006h - Alternate address (table) (optional) */
-int i2o_proc_read_lan_alt_addr(char *buf, char **start, off_t offset, int len,
- int *eof, void *data)
-{
- struct i2o_device *d = (struct i2o_device*)data;
- int token;
- int i;
- u8 alt_addr[8];
- struct
- {
- u16 result_count;
- u16 pad;
- u16 block_size;
- u8 block_status;
- u8 error_info_size;
- u16 row_count;
- u16 more_flag;
- u8 alt_addr[256][8];
- } result;
-
- spin_lock(&i2o_proc_lock);
- len = 0;
-
- token = i2o_query_table(I2O_PARAMS_TABLE_GET,
- d->controller, d->lct_data.tid,
- 0x0006, -1, NULL, 0, &result, sizeof(result));
-
- if (token < 0) {
- len += i2o_report_query_status(buf+len, token, "0x0006 LAN Alternate Address (optional)");
- spin_unlock(&i2o_proc_lock);
- return len;
- }
-
- for (i=0; i < result.row_count; i++)
- {
- memcpy(alt_addr,result.alt_addr[i],8);
- len += sprintf(buf+len, "Alternate address[%d]: "
- "%02X:%02X:%02X:%02X:%02X:%02X:%02X:%02X\n",
- i, alt_addr[0], alt_addr[1], alt_addr[2],
- alt_addr[3], alt_addr[4], alt_addr[5],
- alt_addr[6], alt_addr[7]);
- }
-
- spin_unlock(&i2o_proc_lock);
- return len;
-}
-
-
-/* LAN group 0007h - Transmit info (scalar) */
-int i2o_proc_read_lan_tx_info(char *buf, char **start, off_t offset, int len,
- int *eof, void *data)
-{
- struct i2o_device *d = (struct i2o_device*)data;
- static u32 work32[8];
- int token;
-
- spin_lock(&i2o_proc_lock);
- len = 0;
-
- token = i2o_query_scalar(d->controller, d->lct_data.tid,
- 0x0007, -1, &work32, 8*4);
- if (token < 0) {
- len += i2o_report_query_status(buf+len, token,"0x0007 LAN Transmit Info");
- spin_unlock(&i2o_proc_lock);
- return len;
- }
-
- len += sprintf(buf, "Tx Max SG elements per packet : %d\n", work32[0]);
- len += sprintf(buf+len, "Tx Max SG elements per chain : %d\n", work32[1]);
- len += sprintf(buf+len, "Tx Max outstanding packets : %d\n", work32[2]);
- len += sprintf(buf+len, "Tx Max packets per request : %d\n", work32[3]);
-
- len += sprintf(buf+len, "Tx modes : 0x%08x\n", work32[4]);
- len += sprintf(buf+len, " [%s] No DA in SGL\n",
- (work32[4]&0x00000002) ? "+" : "-");
- len += sprintf(buf+len, " [%s] CRC suppression\n",
- (work32[4]&0x00000004) ? "+" : "-");
- len += sprintf(buf+len, " [%s] MAC insertion\n",
- (work32[4]&0x00000010) ? "+" : "-");
- len += sprintf(buf+len, " [%s] RIF insertion\n",
- (work32[4]&0x00000020) ? "+" : "-");
- len += sprintf(buf+len, " [%s] IPv4 checksum generation\n",
- (work32[4]&0x00000100) ? "+" : "-");
- len += sprintf(buf+len, " [%s] TCP checksum generation\n",
- (work32[4]&0x00000200) ? "+" : "-");
- len += sprintf(buf+len, " [%s] UDP checksum generation\n",
- (work32[4]&0x00000400) ? "+" : "-");
- len += sprintf(buf+len, " [%s] RSVP checksum generation\n",
- (work32[4]&0x00000800) ? "+" : "-");
- len += sprintf(buf+len, " [%s] ICMP checksum generation\n",
- (work32[4]&0x00001000) ? "+" : "-");
- len += sprintf(buf+len, " [%s] Loopback enabled\n",
- (work32[4]&0x00010000) ? "+" : "-");
- len += sprintf(buf+len, " [%s] Loopback suppression enabled\n",
- (work32[4]&0x00020000) ? "+" : "-");
-
- spin_unlock(&i2o_proc_lock);
- return len;
-}
-
-/* LAN group 0008h - Receive info (scalar) */
-int i2o_proc_read_lan_rx_info(char *buf, char **start, off_t offset, int len,
- int *eof, void *data)
-{
- struct i2o_device *d = (struct i2o_device*)data;
- static u32 work32[8];
- int token;
-
- spin_lock(&i2o_proc_lock);
- len = 0;
-
- token = i2o_query_scalar(d->controller, d->lct_data.tid,
- 0x0008, -1, &work32, 8*4);
- if (token < 0) {
- len += i2o_report_query_status(buf+len, token,"0x0008 LAN Receive Info");
- spin_unlock(&i2o_proc_lock);
- return len;
- }
-
- len += sprintf(buf ,"Rx Max size of chain element : %d\n", work32[0]);
- len += sprintf(buf+len, "Rx Max Buckets : %d\n", work32[1]);
- len += sprintf(buf+len, "Rx Max Buckets in Reply : %d\n", work32[3]);
- len += sprintf(buf+len, "Rx Max Packets in Bucket : %d\n", work32[4]);
- len += sprintf(buf+len, "Rx Max Buckets in Post : %d\n", work32[5]);
-
- len += sprintf(buf+len, "Rx Modes : 0x%08x\n", work32[2]);
- len += sprintf(buf+len, " [%s] FCS reception\n",
- (work32[2]&0x00000004) ? "+" : "-");
- len += sprintf(buf+len, " [%s] IPv4 checksum validation \n",
- (work32[2]&0x00000100) ? "+" : "-");
- len += sprintf(buf+len, " [%s] TCP checksum validation \n",
- (work32[2]&0x00000200) ? "+" : "-");
- len += sprintf(buf+len, " [%s] UDP checksum validation \n",
- (work32[2]&0x00000400) ? "+" : "-");
- len += sprintf(buf+len, " [%s] RSVP checksum validation \n",
- (work32[2]&0x00000800) ? "+" : "-");
- len += sprintf(buf+len, " [%s] ICMP checksum validation \n",
- (work32[2]&0x00001000) ? "+" : "-");
-
- spin_unlock(&i2o_proc_lock);
- return len;
-}
-
-static int i2o_report_opt_field(char *buf, char *field_name,
- int field_nbr, int supp_fields, u64 *value)
-{
- if (supp_fields & (1 << field_nbr))
- return sprintf(buf, "%-24s : " FMT_U64_HEX "\n", field_name, U64_VAL(value));
- else
- return sprintf(buf, "%-24s : Not supported\n", field_name);
-}
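The helper above gates each optional field on a per-group support bitmask: bit N of the group's supported-fields word says whether optional field N is valid. A minimal stand-alone sketch of that idiom (the names and the fixed hex width here are illustrative stand-ins, not the driver's `FMT_U64_HEX` machinery):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in for i2o_report_opt_field(): print the value only
 * when the field's bit is set in the group's support word. */
static int report_opt_field(char *buf, const char *name, int field_nbr,
                            uint32_t supp_fields, uint64_t value)
{
	if (supp_fields & (1u << field_nbr))
		return sprintf(buf, "%-24s : 0x%016llx\n", name,
		               (unsigned long long)value);
	return sprintf(buf, "%-24s : Not supported\n", name);
}
```

Called once per field, exactly as the 0x0182/0x0183/0x0184 loops below do, this keeps unsupported fields visible in the /proc output rather than silently omitted.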
-
-/* LAN group 0100h - LAN Historical statistics (scalar) */
-/* LAN group 0180h - Supported Optional Historical Statistics (scalar) */
-/* LAN group 0182h - Optional Non Media Specific Transmit Historical Statistics (scalar) */
-/* LAN group 0183h - Optional Non Media Specific Receive Historical Statistics (scalar) */
-
-int i2o_proc_read_lan_hist_stats(char *buf, char **start, off_t offset, int len,
- int *eof, void *data)
-{
- struct i2o_device *d = (struct i2o_device*)data;
- int token;
-
- struct
- {
- u64 tx_packets;
- u64 tx_bytes;
- u64 rx_packets;
- u64 rx_bytes;
- u64 tx_errors;
- u64 rx_errors;
- u64 rx_dropped;
- u64 adapter_resets;
- u64 adapter_suspends;
- } stats; // 0x0100
-
- static u64 supp_groups[4]; // 0x0180
-
- struct
- {
- u64 tx_retries;
- u64 tx_directed_bytes;
- u64 tx_directed_packets;
- u64 tx_multicast_bytes;
- u64 tx_multicast_packets;
- u64 tx_broadcast_bytes;
- u64 tx_broadcast_packets;
- u64 tx_group_addr_packets;
- u64 tx_short_packets;
- } tx_stats; // 0x0182
-
- struct
- {
- u64 rx_crc_errors;
- u64 rx_directed_bytes;
- u64 rx_directed_packets;
- u64 rx_multicast_bytes;
- u64 rx_multicast_packets;
- u64 rx_broadcast_bytes;
- u64 rx_broadcast_packets;
- u64 rx_group_addr_packets;
- u64 rx_short_packets;
- u64 rx_long_packets;
- u64 rx_runt_packets;
- } rx_stats; // 0x0183
-
- struct
- {
- u64 ipv4_generate;
- u64 ipv4_validate_success;
- u64 ipv4_validate_errors;
- u64 tcp_generate;
- u64 tcp_validate_success;
- u64 tcp_validate_errors;
- u64 udp_generate;
- u64 udp_validate_success;
- u64 udp_validate_errors;
- u64 rsvp_generate;
- u64 rsvp_validate_success;
- u64 rsvp_validate_errors;
- u64 icmp_generate;
- u64 icmp_validate_success;
- u64 icmp_validate_errors;
- } chksum_stats; // 0x0184
-
- spin_lock(&i2o_proc_lock);
- len = 0;
-
- token = i2o_query_scalar(d->controller, d->lct_data.tid,
- 0x0100, -1, &stats, sizeof(stats));
- if (token < 0) {
- len += i2o_report_query_status(buf+len, token,"0x100 LAN Statistics");
- spin_unlock(&i2o_proc_lock);
- return len;
- }
-
- len += sprintf(buf+len, "Tx packets : " FMT_U64_HEX "\n",
- U64_VAL(&stats.tx_packets));
- len += sprintf(buf+len, "Tx bytes : " FMT_U64_HEX "\n",
- U64_VAL(&stats.tx_bytes));
- len += sprintf(buf+len, "Rx packets : " FMT_U64_HEX "\n",
- U64_VAL(&stats.rx_packets));
- len += sprintf(buf+len, "Rx bytes : " FMT_U64_HEX "\n",
- U64_VAL(&stats.rx_bytes));
- len += sprintf(buf+len, "Tx errors : " FMT_U64_HEX "\n",
- U64_VAL(&stats.tx_errors));
- len += sprintf(buf+len, "Rx errors : " FMT_U64_HEX "\n",
- U64_VAL(&stats.rx_errors));
- len += sprintf(buf+len, "Rx dropped : " FMT_U64_HEX "\n",
- U64_VAL(&stats.rx_dropped));
- len += sprintf(buf+len, "Adapter resets : " FMT_U64_HEX "\n",
- U64_VAL(&stats.adapter_resets));
- len += sprintf(buf+len, "Adapter suspends : " FMT_U64_HEX "\n",
- U64_VAL(&stats.adapter_suspends));
-
-	/* Optional statistics follow */
- /* Get 0x0180 to see which optional groups/fields are supported */
-
- token = i2o_query_scalar(d->controller, d->lct_data.tid,
- 0x0180, -1, &supp_groups, sizeof(supp_groups));
-
- if (token < 0) {
- len += i2o_report_query_status(buf+len, token, "0x180 LAN Supported Optional Statistics");
- spin_unlock(&i2o_proc_lock);
- return len;
- }
-
- if (supp_groups[1]) /* 0x0182 */
- {
- token = i2o_query_scalar(d->controller, d->lct_data.tid,
- 0x0182, -1, &tx_stats, sizeof(tx_stats));
-
- if (token < 0) {
- len += i2o_report_query_status(buf+len, token,"0x182 LAN Optional Tx Historical Statistics");
- spin_unlock(&i2o_proc_lock);
- return len;
- }
-
- len += sprintf(buf+len, "==== Optional TX statistics (group 0182h)\n");
-
- len += i2o_report_opt_field(buf+len, "Tx RetryCount",
- 0, supp_groups[1], &tx_stats.tx_retries);
- len += i2o_report_opt_field(buf+len, "Tx DirectedBytes",
- 1, supp_groups[1], &tx_stats.tx_directed_bytes);
- len += i2o_report_opt_field(buf+len, "Tx DirectedPackets",
- 2, supp_groups[1], &tx_stats.tx_directed_packets);
- len += i2o_report_opt_field(buf+len, "Tx MulticastBytes",
- 3, supp_groups[1], &tx_stats.tx_multicast_bytes);
- len += i2o_report_opt_field(buf+len, "Tx MulticastPackets",
- 4, supp_groups[1], &tx_stats.tx_multicast_packets);
- len += i2o_report_opt_field(buf+len, "Tx BroadcastBytes",
- 5, supp_groups[1], &tx_stats.tx_broadcast_bytes);
- len += i2o_report_opt_field(buf+len, "Tx BroadcastPackets",
- 6, supp_groups[1], &tx_stats.tx_broadcast_packets);
- len += i2o_report_opt_field(buf+len, "Tx TotalGroupAddrPackets",
- 7, supp_groups[1], &tx_stats.tx_group_addr_packets);
- len += i2o_report_opt_field(buf+len, "Tx TotalPacketsTooShort",
- 8, supp_groups[1], &tx_stats.tx_short_packets);
- }
-
- if (supp_groups[2]) /* 0x0183 */
- {
- token = i2o_query_scalar(d->controller, d->lct_data.tid,
- 0x0183, -1, &rx_stats, sizeof(rx_stats));
- if (token < 0) {
- len += i2o_report_query_status(buf+len, token,"0x183 LAN Optional Rx Historical Stats");
- spin_unlock(&i2o_proc_lock);
- return len;
- }
-
- len += sprintf(buf+len, "==== Optional RX statistics (group 0183h)\n");
-
- len += i2o_report_opt_field(buf+len, "Rx CRCErrorCount",
- 0, supp_groups[2], &rx_stats.rx_crc_errors);
- len += i2o_report_opt_field(buf+len, "Rx DirectedBytes",
- 1, supp_groups[2], &rx_stats.rx_directed_bytes);
- len += i2o_report_opt_field(buf+len, "Rx DirectedPackets",
- 2, supp_groups[2], &rx_stats.rx_directed_packets);
- len += i2o_report_opt_field(buf+len, "Rx MulticastBytes",
- 3, supp_groups[2], &rx_stats.rx_multicast_bytes);
- len += i2o_report_opt_field(buf+len, "Rx MulticastPackets",
- 4, supp_groups[2], &rx_stats.rx_multicast_packets);
- len += i2o_report_opt_field(buf+len, "Rx BroadcastBytes",
- 5, supp_groups[2], &rx_stats.rx_broadcast_bytes);
- len += i2o_report_opt_field(buf+len, "Rx BroadcastPackets",
- 6, supp_groups[2], &rx_stats.rx_broadcast_packets);
- len += i2o_report_opt_field(buf+len, "Rx TotalGroupAddrPackets",
- 7, supp_groups[2], &rx_stats.rx_group_addr_packets);
- len += i2o_report_opt_field(buf+len, "Rx TotalPacketsTooShort",
- 8, supp_groups[2], &rx_stats.rx_short_packets);
- len += i2o_report_opt_field(buf+len, "Rx TotalPacketsTooLong",
- 9, supp_groups[2], &rx_stats.rx_long_packets);
- len += i2o_report_opt_field(buf+len, "Rx TotalPacketsRunt",
- 10, supp_groups[2], &rx_stats.rx_runt_packets);
- }
-
- if (supp_groups[3]) /* 0x0184 */
- {
- token = i2o_query_scalar(d->controller, d->lct_data.tid,
- 0x0184, -1, &chksum_stats, sizeof(chksum_stats));
-
- if (token < 0) {
- len += i2o_report_query_status(buf+len, token,"0x184 LAN Optional Chksum Historical Stats");
- spin_unlock(&i2o_proc_lock);
- return len;
- }
-
- len += sprintf(buf+len, "==== Optional CHKSUM statistics (group 0x0184)\n");
-
- len += i2o_report_opt_field(buf+len, "IPv4 Generate",
- 0, supp_groups[3], &chksum_stats.ipv4_generate);
- len += i2o_report_opt_field(buf+len, "IPv4 ValidateSuccess",
- 1, supp_groups[3], &chksum_stats.ipv4_validate_success);
- len += i2o_report_opt_field(buf+len, "IPv4 ValidateError",
- 2, supp_groups[3], &chksum_stats.ipv4_validate_errors);
- len += i2o_report_opt_field(buf+len, "TCP Generate",
- 3, supp_groups[3], &chksum_stats.tcp_generate);
- len += i2o_report_opt_field(buf+len, "TCP ValidateSuccess",
- 4, supp_groups[3], &chksum_stats.tcp_validate_success);
- len += i2o_report_opt_field(buf+len, "TCP ValidateError",
- 5, supp_groups[3], &chksum_stats.tcp_validate_errors);
- len += i2o_report_opt_field(buf+len, "UDP Generate",
- 6, supp_groups[3], &chksum_stats.udp_generate);
- len += i2o_report_opt_field(buf+len, "UDP ValidateSuccess",
- 7, supp_groups[3], &chksum_stats.udp_validate_success);
- len += i2o_report_opt_field(buf+len, "UDP ValidateError",
- 8, supp_groups[3], &chksum_stats.udp_validate_errors);
- len += i2o_report_opt_field(buf+len, "RSVP Generate",
- 9, supp_groups[3], &chksum_stats.rsvp_generate);
- len += i2o_report_opt_field(buf+len, "RSVP ValidateSuccess",
- 10, supp_groups[3], &chksum_stats.rsvp_validate_success);
- len += i2o_report_opt_field(buf+len, "RSVP ValidateError",
- 11, supp_groups[3], &chksum_stats.rsvp_validate_errors);
- len += i2o_report_opt_field(buf+len, "ICMP Generate",
- 12, supp_groups[3], &chksum_stats.icmp_generate);
- len += i2o_report_opt_field(buf+len, "ICMP ValidateSuccess",
- 13, supp_groups[3], &chksum_stats.icmp_validate_success);
- len += i2o_report_opt_field(buf+len, "ICMP ValidateError",
- 14, supp_groups[3], &chksum_stats.icmp_validate_errors);
- }
-
- spin_unlock(&i2o_proc_lock);
- return len;
-}
-
-/* LAN group 0200h - Required Ethernet Statistics (scalar) */
-/* LAN group 0280h - Optional Ethernet Statistics Supported (scalar) */
-/* LAN group 0281h - Optional Ethernet Historical Statistics (scalar) */
-int i2o_proc_read_lan_eth_stats(char *buf, char **start, off_t offset,
- int len, int *eof, void *data)
-{
- struct i2o_device *d = (struct i2o_device*)data;
- int token;
-
- struct
- {
- u64 rx_align_errors;
- u64 tx_one_collisions;
- u64 tx_multiple_collisions;
- u64 tx_deferred;
- u64 tx_late_collisions;
- u64 tx_max_collisions;
- u64 tx_carrier_lost;
- u64 tx_excessive_deferrals;
- } stats;
-
- static u64 supp_fields;
- struct
- {
- u64 rx_overrun;
- u64 tx_underrun;
- u64 tx_heartbeat_failure;
- } hist_stats;
-
- spin_lock(&i2o_proc_lock);
- len = 0;
-
- token = i2o_query_scalar(d->controller, d->lct_data.tid,
- 0x0200, -1, &stats, sizeof(stats));
-
- if (token < 0) {
- len += i2o_report_query_status(buf+len, token,"0x0200 LAN Ethernet Statistics");
- spin_unlock(&i2o_proc_lock);
- return len;
- }
-
- len += sprintf(buf+len, "Rx alignment errors : " FMT_U64_HEX "\n",
- U64_VAL(&stats.rx_align_errors));
- len += sprintf(buf+len, "Tx one collisions : " FMT_U64_HEX "\n",
- U64_VAL(&stats.tx_one_collisions));
- len += sprintf(buf+len, "Tx multicollisions : " FMT_U64_HEX "\n",
- U64_VAL(&stats.tx_multiple_collisions));
- len += sprintf(buf+len, "Tx deferred : " FMT_U64_HEX "\n",
- U64_VAL(&stats.tx_deferred));
- len += sprintf(buf+len, "Tx late collisions : " FMT_U64_HEX "\n",
- U64_VAL(&stats.tx_late_collisions));
- len += sprintf(buf+len, "Tx max collisions : " FMT_U64_HEX "\n",
- U64_VAL(&stats.tx_max_collisions));
- len += sprintf(buf+len, "Tx carrier lost : " FMT_U64_HEX "\n",
- U64_VAL(&stats.tx_carrier_lost));
- len += sprintf(buf+len, "Tx excessive deferrals : " FMT_U64_HEX "\n",
- U64_VAL(&stats.tx_excessive_deferrals));
-
-	/* Optional Ethernet statistics follow */
- /* Get 0x0280 to see which optional fields are supported */
-
- token = i2o_query_scalar(d->controller, d->lct_data.tid,
- 0x0280, -1, &supp_fields, sizeof(supp_fields));
-
- if (token < 0) {
- len += i2o_report_query_status(buf+len, token,"0x0280 LAN Supported Optional Ethernet Statistics");
- spin_unlock(&i2o_proc_lock);
- return len;
- }
-
- if (supp_fields) /* 0x0281 */
- {
-		token = i2o_query_scalar(d->controller, d->lct_data.tid,
-				 0x0281, -1, &hist_stats, sizeof(hist_stats));
-
- if (token < 0) {
- len += i2o_report_query_status(buf+len, token,"0x0281 LAN Optional Ethernet Statistics");
- spin_unlock(&i2o_proc_lock);
- return len;
- }
-
- len += sprintf(buf+len, "==== Optional ETHERNET statistics (group 0x0281)\n");
-
- len += i2o_report_opt_field(buf+len, "Rx Overrun",
- 0, supp_fields, &hist_stats.rx_overrun);
- len += i2o_report_opt_field(buf+len, "Tx Underrun",
- 1, supp_fields, &hist_stats.tx_underrun);
- len += i2o_report_opt_field(buf+len, "Tx HeartbeatFailure",
- 2, supp_fields, &hist_stats.tx_heartbeat_failure);
- }
-
- spin_unlock(&i2o_proc_lock);
- return len;
-}
-
-/* LAN group 0300h - Required Token Ring Statistics (scalar) */
-/* LAN group 0380h, 0381h - Optional Statistics not yet defined (TODO) */
-int i2o_proc_read_lan_tr_stats(char *buf, char **start, off_t offset,
- int len, int *eof, void *data)
-{
- struct i2o_device *d = (struct i2o_device*)data;
- static u64 work64[13];
- int token;
-
- static char *ring_status[] =
- {
- "",
- "",
- "",
- "",
- "",
- "Ring Recovery",
- "Single Station",
- "Counter Overflow",
- "Remove Received",
- "",
- "Auto-Removal Error 1",
- "Lobe Wire Fault",
- "Transmit Beacon",
- "Soft Error",
- "Hard Error",
- "Signal Loss"
- };
-
- spin_lock(&i2o_proc_lock);
- len = 0;
-
- token = i2o_query_scalar(d->controller, d->lct_data.tid,
- 0x0300, -1, &work64, sizeof(work64));
-
- if (token < 0) {
- len += i2o_report_query_status(buf+len, token,"0x0300 Token Ring Statistics");
- spin_unlock(&i2o_proc_lock);
- return len;
- }
-
- len += sprintf(buf, "LineErrors : " FMT_U64_HEX "\n",
- U64_VAL(&work64[0]));
- len += sprintf(buf+len, "LostFrames : " FMT_U64_HEX "\n",
- U64_VAL(&work64[1]));
- len += sprintf(buf+len, "ACError : " FMT_U64_HEX "\n",
- U64_VAL(&work64[2]));
- len += sprintf(buf+len, "TxAbortDelimiter : " FMT_U64_HEX "\n",
- U64_VAL(&work64[3]));
-	len += sprintf(buf+len, "BurstErrors : " FMT_U64_HEX "\n",
- U64_VAL(&work64[4]));
- len += sprintf(buf+len, "FrameCopiedErrors : " FMT_U64_HEX "\n",
- U64_VAL(&work64[5]));
- len += sprintf(buf+len, "FrequencyErrors : " FMT_U64_HEX "\n",
- U64_VAL(&work64[6]));
- len += sprintf(buf+len, "InternalErrors : " FMT_U64_HEX "\n",
- U64_VAL(&work64[7]));
- len += sprintf(buf+len, "LastRingStatus : %s\n", ring_status[work64[8]]);
- len += sprintf(buf+len, "TokenError : " FMT_U64_HEX "\n",
- U64_VAL(&work64[9]));
- len += sprintf(buf+len, "UpstreamNodeAddress : " FMT_U64_HEX "\n",
- U64_VAL(&work64[10]));
- len += sprintf(buf+len, "LastRingID : " FMT_U64_HEX "\n",
- U64_VAL(&work64[11]));
- len += sprintf(buf+len, "LastBeaconType : " FMT_U64_HEX "\n",
- U64_VAL(&work64[12]));
-
- spin_unlock(&i2o_proc_lock);
- return len;
-}
-
-/* LAN group 0400h - Required FDDI Statistics (scalar) */
-/* LAN group 0480h, 0481h - Optional Statistics, not yet defined (TODO) */
-int i2o_proc_read_lan_fddi_stats(char *buf, char **start, off_t offset,
- int len, int *eof, void *data)
-{
- struct i2o_device *d = (struct i2o_device*)data;
- static u64 work64[11];
- int token;
-
- static char *conf_state[] =
- {
- "Isolated",
- "Local a",
- "Local b",
- "Local ab",
- "Local s",
- "Wrap a",
- "Wrap b",
- "Wrap ab",
- "Wrap s",
- "C-Wrap a",
- "C-Wrap b",
- "C-Wrap s",
- "Through",
- };
-
- static char *ring_state[] =
- {
- "Isolated",
- "Non-op",
-		"Ring-op",
- "Detect",
- "Non-op-Dup",
- "Ring-op-Dup",
- "Directed",
- "Trace"
- };
-
- static char *link_state[] =
- {
- "Off",
- "Break",
- "Trace",
- "Connect",
- "Next",
- "Signal",
- "Join",
- "Verify",
- "Active",
- "Maintenance"
- };
-
- spin_lock(&i2o_proc_lock);
- len = 0;
-
- token = i2o_query_scalar(d->controller, d->lct_data.tid,
- 0x0400, -1, &work64, sizeof(work64));
-
- if (token < 0) {
- len += i2o_report_query_status(buf+len, token,"0x0400 FDDI Required Statistics");
- spin_unlock(&i2o_proc_lock);
- return len;
- }
-
- len += sprintf(buf+len, "ConfigurationState : %s\n", conf_state[work64[0]]);
- len += sprintf(buf+len, "UpstreamNode : " FMT_U64_HEX "\n",
- U64_VAL(&work64[1]));
- len += sprintf(buf+len, "DownStreamNode : " FMT_U64_HEX "\n",
- U64_VAL(&work64[2]));
- len += sprintf(buf+len, "FrameErrors : " FMT_U64_HEX "\n",
- U64_VAL(&work64[3]));
- len += sprintf(buf+len, "FramesLost : " FMT_U64_HEX "\n",
- U64_VAL(&work64[4]));
- len += sprintf(buf+len, "RingMgmtState : %s\n", ring_state[work64[5]]);
- len += sprintf(buf+len, "LCTFailures : " FMT_U64_HEX "\n",
- U64_VAL(&work64[6]));
- len += sprintf(buf+len, "LEMRejects : " FMT_U64_HEX "\n",
- U64_VAL(&work64[7]));
- len += sprintf(buf+len, "LEMCount : " FMT_U64_HEX "\n",
- U64_VAL(&work64[8]));
- len += sprintf(buf+len, "LConnectionState : %s\n",
- link_state[work64[9]]);
-
- spin_unlock(&i2o_proc_lock);
- return len;
-}
-
-static int i2o_proc_create_entries(void *data, i2o_proc_entry *pentry,
- struct proc_dir_entry *parent)
-{
- struct proc_dir_entry *ent;
-
- while(pentry->name != NULL)
- {
- ent = create_proc_entry(pentry->name, pentry->mode, parent);
- if(!ent) return -1;
-
- ent->data = data;
- ent->read_proc = pentry->read_proc;
- ent->write_proc = pentry->write_proc;
- ent->nlink = 1;
-
- pentry++;
- }
-
- return 0;
-}
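i2o_proc_create_entries() and i2o_proc_remove_entries() both walk a table terminated by a NULL name. A self-contained sketch of that sentinel-terminated loop (the struct and demo table are hypothetical, reduced from the driver's i2o_proc_entry):

```c
#include <assert.h>
#include <stddef.h>

struct entry {
	const char *name;
	int mode;
};

/* Walk the table until the NULL-name sentinel, as the proc helpers do. */
static int count_entries(const struct entry *e)
{
	int n = 0;
	while (e->name != NULL) {
		n++;
		e++;
	}
	return n;
}

static const struct entry demo_table[] = {
	{ "lan_dev_info",   0444 },
	{ "lan_mcast_addr", 0644 },
	{ NULL, 0 }
};
```

The sentinel lets one create/remove pair serve every entry table (generic, RBS, LAN, per-media) without passing a length around.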
-
-static void i2o_proc_remove_entries(i2o_proc_entry *pentry,
- struct proc_dir_entry *parent)
-{
- while(pentry->name != NULL)
- {
- remove_proc_entry(pentry->name, parent);
- pentry++;
- }
-}
-
-static int i2o_proc_add_controller(struct i2o_controller *pctrl,
- struct proc_dir_entry *root )
-{
- struct proc_dir_entry *dir, *dir1;
- struct i2o_device *dev;
- char buff[10];
-
- sprintf(buff, "iop%d", pctrl->unit);
-
- dir = proc_mkdir(buff, root);
- if(!dir)
- return -1;
-
- pctrl->proc_entry = dir;
-
- i2o_proc_create_entries(pctrl, generic_iop_entries, dir);
-
- for(dev = pctrl->devices; dev; dev = dev->next)
- {
- sprintf(buff, "%0#5x", dev->lct_data.tid);
-
- dir1 = proc_mkdir(buff, dir);
- dev->proc_entry = dir1;
-
- if(!dir1)
- printk(KERN_INFO "i2o_proc: Could not allocate proc dir\n");
-
- i2o_proc_add_device(dev, dir1);
- }
-
- return 0;
-}
-
-void i2o_proc_new_dev(struct i2o_controller *c, struct i2o_device *d)
-{
- char buff[10];
-
-#ifdef DRIVERDEBUG
- printk(KERN_INFO "Adding new device to /proc/i2o/iop%d\n", c->unit);
-#endif
- sprintf(buff, "%0#5x", d->lct_data.tid);
-
- d->proc_entry = proc_mkdir(buff, c->proc_entry);
-
- if(!d->proc_entry)
- {
- printk(KERN_WARNING "i2o: Could not allocate procdir!\n");
- return;
- }
-
- i2o_proc_add_device(d, d->proc_entry);
-}
-
-void i2o_proc_add_device(struct i2o_device *dev, struct proc_dir_entry *dir)
-{
- i2o_proc_create_entries(dev, generic_dev_entries, dir);
-
- /* Inform core that we want updates about this device's status */
- i2o_device_notify_on(dev, &i2o_proc_handler);
- switch(dev->lct_data.class_id)
- {
- case I2O_CLASS_SCSI_PERIPHERAL:
- case I2O_CLASS_RANDOM_BLOCK_STORAGE:
- i2o_proc_create_entries(dev, rbs_dev_entries, dir);
- break;
- case I2O_CLASS_LAN:
- i2o_proc_create_entries(dev, lan_entries, dir);
- switch(dev->lct_data.sub_class)
- {
- case I2O_LAN_ETHERNET:
- i2o_proc_create_entries(dev, lan_eth_entries, dir);
- break;
- case I2O_LAN_FDDI:
- i2o_proc_create_entries(dev, lan_fddi_entries, dir);
- break;
- case I2O_LAN_TR:
- i2o_proc_create_entries(dev, lan_tr_entries, dir);
- break;
- default:
- break;
- }
- break;
- default:
- break;
- }
-}
-
-static void i2o_proc_remove_controller(struct i2o_controller *pctrl,
- struct proc_dir_entry *parent)
-{
- char buff[10];
- struct i2o_device *dev;
-
- /* Remove unused device entries */
- for(dev=pctrl->devices; dev; dev=dev->next)
- i2o_proc_remove_device(dev);
-
- if(!atomic_read(&pctrl->proc_entry->count))
- {
- sprintf(buff, "iop%d", pctrl->unit);
-
- i2o_proc_remove_entries(generic_iop_entries, pctrl->proc_entry);
-
- remove_proc_entry(buff, parent);
- pctrl->proc_entry = NULL;
- }
-}
-
-void i2o_proc_remove_device(struct i2o_device *dev)
-{
- struct proc_dir_entry *de=dev->proc_entry;
- char dev_id[10];
-
- sprintf(dev_id, "%0#5x", dev->lct_data.tid);
-
- i2o_device_notify_off(dev, &i2o_proc_handler);
- /* Would it be safe to remove _files_ even if they are in use? */
- if((de) && (!atomic_read(&de->count)))
- {
- i2o_proc_remove_entries(generic_dev_entries, de);
- switch(dev->lct_data.class_id)
- {
- case I2O_CLASS_SCSI_PERIPHERAL:
- case I2O_CLASS_RANDOM_BLOCK_STORAGE:
- i2o_proc_remove_entries(rbs_dev_entries, de);
- break;
- case I2O_CLASS_LAN:
- {
- i2o_proc_remove_entries(lan_entries, de);
- switch(dev->lct_data.sub_class)
- {
- case I2O_LAN_ETHERNET:
- i2o_proc_remove_entries(lan_eth_entries, de);
- break;
- case I2O_LAN_FDDI:
- i2o_proc_remove_entries(lan_fddi_entries, de);
- break;
- case I2O_LAN_TR:
- i2o_proc_remove_entries(lan_tr_entries, de);
- break;
- }
- }
- remove_proc_entry(dev_id, dev->controller->proc_entry);
- }
- }
-}
-
-void i2o_proc_dev_del(struct i2o_controller *c, struct i2o_device *d)
-{
-#ifdef DRIVERDEBUG
-	printk(KERN_INFO "Deleting device %d from iop%d\n",
-		d->lct_data.tid, c->unit);
-#endif
-
- i2o_proc_remove_device(d);
-}
-
-static int create_i2o_procfs(void)
-{
- struct i2o_controller *pctrl = NULL;
- int i;
-
- i2o_proc_dir_root = proc_mkdir("i2o", 0);
- if(!i2o_proc_dir_root)
- return -1;
-
- for(i = 0; i < MAX_I2O_CONTROLLERS; i++)
- {
- pctrl = i2o_find_controller(i);
- if(pctrl)
- {
- i2o_proc_add_controller(pctrl, i2o_proc_dir_root);
- i2o_unlock_controller(pctrl);
- }
-	}
-
- return 0;
-}
-
-static int __exit destroy_i2o_procfs(void)
-{
- struct i2o_controller *pctrl = NULL;
- int i;
-
- for(i = 0; i < MAX_I2O_CONTROLLERS; i++)
- {
- pctrl = i2o_find_controller(i);
- if(pctrl)
- {
- i2o_proc_remove_controller(pctrl, i2o_proc_dir_root);
- i2o_unlock_controller(pctrl);
- }
- }
-
- if(!atomic_read(&i2o_proc_dir_root->count))
- remove_proc_entry("i2o", 0);
- else
- return -1;
-
- return 0;
-}
-
-int __init i2o_proc_init(void)
-{
- if (i2o_install_handler(&i2o_proc_handler) < 0)
- {
- printk(KERN_ERR "i2o_proc: Unable to install PROC handler.\n");
- return 0;
- }
-
- if(create_i2o_procfs())
- return -EBUSY;
-
- return 0;
-}
-
-MODULE_AUTHOR("Deepak Saxena");
-MODULE_DESCRIPTION("I2O procfs Handler");
-
-static void __exit i2o_proc_exit(void)
-{
- destroy_i2o_procfs();
- i2o_remove_handler(&i2o_proc_handler);
-}
-
-#ifdef MODULE
-module_init(i2o_proc_init);
-#endif
-module_exit(i2o_proc_exit);
-
+++ /dev/null
-/*
- * This program is free software; you can redistribute it and/or modify it
- * under the terms of the GNU General Public License as published by the
- * Free Software Foundation; either version 2, or (at your option) any
- * later version.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * General Public License for more details.
- *
- * Complications for I2O SCSI
- *
- * o Each (bus,lun) is a logical device in I2O. We keep a map
- * table. We spoof failed selection for unmapped units
- * o Request sense buffers can come back for free.
- * o Scatter gather is a bit dynamic. We have to investigate at
- * setup time.
- * o Some of our resources are dynamically shared. The i2o core
- * needs a message reservation protocol to avoid swap v net
- * deadlocking. We need to back off queue requests.
- *
- * In general the firmware wants to help. Where its help isn't useful
- * for performance we just ignore the aid. It's not worth the code, in truth.
- *
- * Fixes:
- * Steve Ralston : Scatter gather now works
- *
- * To Do
- * 64bit cleanups
- * Fix the resource management problems.
- */
-
-#include <linux/module.h>
-#include <linux/kernel.h>
-#include <linux/types.h>
-#include <linux/string.h>
-#include <linux/ioport.h>
-#include <linux/sched.h>
-#include <linux/interrupt.h>
-#include <linux/timer.h>
-#include <linux/delay.h>
-#include <linux/proc_fs.h>
-#include <asm/dma.h>
-#include <asm/system.h>
-#include <asm/io.h>
-#include <asm/atomic.h>
-#include <linux/blk.h>
-#include <linux/version.h>
-#include <linux/i2o.h>
-#include "../scsi/scsi.h"
-#include "../scsi/hosts.h"
-#include "../scsi/sd.h"
-#include "i2o_scsi.h"
-
-#define VERSION_STRING "Version 0.0.1"
-
-#define dprintk(x)
-
-#define MAXHOSTS 32
-
-struct i2o_scsi_host
-{
- struct i2o_controller *controller;
- s16 task[16][8]; /* Allow 16 devices for now */
- unsigned long tagclock[16][8]; /* Tag clock for queueing */
- s16 bus_task; /* The adapter TID */
-};
-
-static int scsi_context;
-static int lun_done;
-static int i2o_scsi_hosts;
-
-static u32 *retry[32];
-static struct i2o_controller *retry_ctrl[32];
-static struct timer_list retry_timer;
-static int retry_ct = 0;
-
-static atomic_t queue_depth;
-
-/*
- * SG Chain buffer support...
- */
-
-#define SG_MAX_FRAGS 64
-
-/*
- * FIXME: we should allocate one of these per bus we find as we
- * locate them not in a lump at boot.
- */
-
-typedef struct _chain_buf
-{
- u32 sg_flags_cnt[SG_MAX_FRAGS];
- u32 sg_buf[SG_MAX_FRAGS];
-} chain_buf;
-
-#define SG_CHAIN_BUF_SZ sizeof(chain_buf)
-
-#define SG_MAX_BUFS (i2o_num_controllers * I2O_SCSI_CAN_QUEUE)
-#define SG_CHAIN_POOL_SZ (SG_MAX_BUFS * SG_CHAIN_BUF_SZ)
-
-static int max_sg_len = 0;
-static chain_buf *sg_chain_pool = NULL;
-static int sg_chain_tag = 0;
-static int sg_max_frags = SG_MAX_FRAGS;
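The pool above holds one chain_buf per command that can be outstanding across every controller (SG_MAX_BUFS), so the kmalloc in i2o_scsi_detect() asks for SG_MAX_BUFS * SG_CHAIN_BUF_SZ bytes. A sketch of that sizing arithmetic; the queue depth and controller count below are assumed values for illustration, not the real I2O_SCSI_CAN_QUEUE or a probed controller count:

```c
#include <assert.h>
#include <stdint.h>

#define SG_MAX_FRAGS 64

typedef struct _chain_buf {
	uint32_t sg_flags_cnt[SG_MAX_FRAGS];
	uint32_t sg_buf[SG_MAX_FRAGS];
} chain_buf;

#define SG_CHAIN_BUF_SZ sizeof(chain_buf)

#define I2O_SCSI_CAN_QUEUE 32	/* assumed queue depth for this example */
#define NUM_CONTROLLERS    2	/* assumed controller count */

#define SG_MAX_BUFS      (NUM_CONTROLLERS * I2O_SCSI_CAN_QUEUE)
#define SG_CHAIN_POOL_SZ (SG_MAX_BUFS * SG_CHAIN_BUF_SZ)
```

With 64 fragments of two u32s each, one chain_buf is 512 bytes, so the pool grows linearly with both queue depth and controller count — which is why the FIXME above suggests per-bus allocation at discovery time instead of one lump at boot.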
-
-/*
- * Retry congested frames. This actually needs pushing down into
- * i2o core. We should only bother the OSM with this when we can't
- * queue and retry the frame. Or perhaps we should call the OSM
- * and its default handler should be this in the core, and this
- * call a 2nd "I give up" handler in the OSM ?
- */
-
-static void i2o_retry_run(unsigned long f)
-{
- int i;
- unsigned long flags;
-
- save_flags(flags);
- cli();
-
- for(i=0;i<retry_ct;i++)
- i2o_post_message(retry_ctrl[i], virt_to_bus(retry[i]));
- retry_ct=0;
-
- restore_flags(flags);
-}
-
-static void flush_pending(void)
-{
- int i;
- unsigned long flags;
-
- save_flags(flags);
- cli();
-
- for(i=0;i<retry_ct;i++)
- {
- retry[i][0]&=~0xFFFFFF;
- retry[i][0]|=I2O_CMD_UTIL_NOP<<24;
- i2o_post_message(retry_ctrl[i],virt_to_bus(retry[i]));
- }
- retry_ct=0;
-
- restore_flags(flags);
-}
-
-static void i2o_scsi_reply(struct i2o_handler *h, struct i2o_controller *c, struct i2o_message *msg)
-{
- Scsi_Cmnd *current_command;
- u32 *m = (u32 *)msg;
- u8 as,ds,st;
-
- if(m[0] & (1<<13))
- {
- printk("IOP fail.\n");
- printk("From %d To %d Cmd %d.\n",
- (m[1]>>12)&0xFFF,
- m[1]&0xFFF,
- m[1]>>24);
- printk("Failure Code %d.\n", m[4]>>24);
- if(m[4]&(1<<16))
- printk("Format error.\n");
- if(m[4]&(1<<17))
- printk("Path error.\n");
- if(m[4]&(1<<18))
- printk("Path State.\n");
- if(m[4]&(1<<18))
- printk("Congestion.\n");
-
- m=(u32 *)bus_to_virt(m[7]);
- printk("Failing message is %p.\n", m);
-
- if((m[4]&(1<<18)) && retry_ct < 32)
- {
- retry_ctrl[retry_ct]=c;
- retry[retry_ct]=m;
- if(!retry_ct++)
- {
- retry_timer.expires=jiffies+1;
- add_timer(&retry_timer);
- }
- }
- else
- {
- /* Create a scsi error for this */
- current_command = (Scsi_Cmnd *)m[3];
- printk("Aborted %ld\n", current_command->serial_number);
-
- spin_lock_irq(&io_request_lock);
- current_command->result = DID_ERROR << 16;
- current_command->scsi_done(current_command);
- spin_unlock_irq(&io_request_lock);
-
- /* Now flush the message by making it a NOP */
- m[0]&=0x00FFFFFF;
- m[0]|=(I2O_CMD_UTIL_NOP)<<24;
- i2o_post_message(c,virt_to_bus(m));
- }
- return;
- }
-
-
- /*
- * Low byte is device status, next is adapter status,
- * (then one byte reserved), then request status.
- */
- ds=(u8)m[4];
- as=(u8)(m[4]>>8);
- st=(u8)(m[4]>>24);
-
- dprintk(("i2o got a scsi reply %08X: ", m[0]));
- dprintk(("m[2]=%08X: ", m[2]));
- dprintk(("m[4]=%08X\n", m[4]));
-
- if(m[2]&0x80000000)
- {
- if(m[2]&0x40000000)
- {
- dprintk(("Event.\n"));
- lun_done=1;
- return;
- }
- printk(KERN_ERR "i2o_scsi: bus reset reply.\n");
- return;
- }
-
- current_command = (Scsi_Cmnd *)m[3];
-
- /*
- * Is this a control request coming back - eg an abort ?
- */
-
- if(current_command==NULL)
- {
- if(st)
- dprintk(("SCSI abort: %08X", m[4]));
- dprintk(("SCSI abort completed.\n"));
- return;
- }
-
- dprintk(("Completed %ld\n", current_command->serial_number));
-
- atomic_dec(&queue_depth);
-
- if(st == 0x06)
- {
- if(m[5] < current_command->underflow)
- {
- int i;
- printk(KERN_ERR "SCSI: underflow 0x%08X 0x%08X\n",
- m[5], current_command->underflow);
- printk("Cmd: ");
- for(i=0;i<15;i++)
- printk("%02X ", current_command->cmnd[i]);
- printk(".\n");
- }
- else st=0;
- }
-
- if(st)
- {
- /* An error has occurred */
-
- dprintk((KERN_DEBUG "SCSI error %08X", m[4]));
-
- if (as == 0x0E)
- /* SCSI Reset */
- current_command->result = DID_RESET << 16;
- else if (as == 0x0F)
- current_command->result = DID_PARITY << 16;
- else
- current_command->result = DID_ERROR << 16;
- }
- else
- /*
- * It worked maybe ?
- */
- current_command->result = DID_OK << 16 | ds;
- spin_lock(&io_request_lock);
- current_command->scsi_done(current_command);
- spin_unlock(&io_request_lock);
- return;
-}
-
-struct i2o_handler i2o_scsi_handler=
-{
- i2o_scsi_reply,
- NULL,
- NULL,
- NULL,
- "I2O SCSI OSM",
- 0,
- I2O_CLASS_SCSI_PERIPHERAL
-};
-
-static int i2o_find_lun(struct i2o_controller *c, struct i2o_device *d, int *target, int *lun)
-{
- u8 reply[8];
-
- if(i2o_query_scalar(c, d->lct_data.tid, 0, 3, reply, 4)<0)
- return -1;
-
- *target=reply[0];
-
- if(i2o_query_scalar(c, d->lct_data.tid, 0, 4, reply, 8)<0)
- return -1;
-
- *lun=reply[1];
-
- dprintk(("SCSI (%d,%d)\n", *target, *lun));
- return 0;
-}
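i2o_find_lun() issues two scalar queries and takes the target id from byte 0 of the first reply and the LUN from byte 1 of the second. A stand-alone sketch of that byte-level decoding (the reply layout is taken from the code above, not independently from the I2O spec):

```c
#include <assert.h>

/* Decode target/lun the way i2o_find_lun() does: byte 0 of the
 * target-id reply buffer, byte 1 of the lun reply buffer. */
static void decode_scsi_ids(const unsigned char tgt_reply[4],
                            const unsigned char lun_reply[8],
                            int *target, int *lun)
{
	*target = tgt_reply[0];
	*lun = lun_reply[1];
}
```

Keeping the decode in one place makes the (target, lun) indexing into the host's task[][] map below easy to audit against the 16x8 bounds.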
-
-void i2o_scsi_init(struct i2o_controller *c, struct i2o_device *d, struct Scsi_Host *shpnt)
-{
- struct i2o_device *unit;
- struct i2o_scsi_host *h =(struct i2o_scsi_host *)shpnt->hostdata;
- int lun;
- int target;
-
- h->controller=c;
- h->bus_task=d->lct_data.tid;
-
- for(target=0;target<16;target++)
- for(lun=0;lun<8;lun++)
- h->task[target][lun] = -1;
-
- for(unit=c->devices;unit!=NULL;unit=unit->next)
- {
- dprintk(("Class %03X, parent %d, want %d.\n",
- unit->lct_data.class_id, unit->lct_data.parent_tid, d->lct_data.tid));
-
- /* Only look at scsi and fc devices */
- if ( (unit->lct_data.class_id != I2O_CLASS_SCSI_PERIPHERAL)
- && (unit->lct_data.class_id != I2O_CLASS_FIBRE_CHANNEL_PERIPHERAL)
- )
- continue;
-
- /* On our bus ? */
- dprintk(("Found a disk (%d).\n", unit->lct_data.tid));
- if ((unit->lct_data.parent_tid == d->lct_data.tid)
- || (unit->lct_data.parent_tid == d->lct_data.parent_tid)
- )
- {
- u16 limit;
-			dprintk(("It's ours.\n"));
- if(i2o_find_lun(c, unit, &target, &lun)==-1)
- {
- printk(KERN_ERR "i2o_scsi: Unable to get lun for tid %d.\n", unit->lct_data.tid);
- continue;
- }
- dprintk(("Found disk %d %d.\n", target, lun));
- h->task[target][lun]=unit->lct_data.tid;
- h->tagclock[target][lun]=jiffies;
-
- /* Get the max fragments/request */
- i2o_query_scalar(c, d->lct_data.tid, 0xF103, 3, &limit, 2);
-
- /* sanity */
- if ( limit == 0 )
- {
- printk(KERN_WARNING "i2o_scsi: Ignoring unreasonable SG limit of 0 from IOP!\n");
- limit = 1;
- }
-
- shpnt->sg_tablesize = limit;
-
- dprintk(("i2o_scsi: set scatter-gather to %d.\n",
- shpnt->sg_tablesize));
- }
- }
-}
-
-int i2o_scsi_detect(Scsi_Host_Template * tpnt)
-{
- unsigned long flags;
- struct Scsi_Host *shpnt = NULL;
- int i;
- int count;
-
- printk("i2o_scsi.c: %s\n", VERSION_STRING);
-
- if(i2o_install_handler(&i2o_scsi_handler)<0)
- {
- printk(KERN_ERR "i2o_scsi: Unable to install OSM handler.\n");
- return 0;
- }
- scsi_context = i2o_scsi_handler.context;
-
- if((sg_chain_pool = kmalloc(SG_CHAIN_POOL_SZ, GFP_KERNEL)) == NULL)
- {
- printk("i2o_scsi: Unable to alloc %d byte SG chain buffer pool.\n", SG_CHAIN_POOL_SZ);
- printk("i2o_scsi: SG chaining DISABLED!\n");
- sg_max_frags = 11;
- }
- else
- {
- printk(" chain_pool: %d bytes @ %p\n", SG_CHAIN_POOL_SZ, sg_chain_pool);
- printk(" (%d byte buffers X %d can_queue X %d i2o controllers)\n",
- SG_CHAIN_BUF_SZ, I2O_SCSI_CAN_QUEUE, i2o_num_controllers);
- sg_max_frags = SG_MAX_FRAGS; // 64
- }
-
- init_timer(&retry_timer);
- retry_timer.data = 0UL;
- retry_timer.function = i2o_retry_run;
-
-// printk("SCSI OSM at %d.\n", scsi_context);
-
- for (count = 0, i = 0; i < MAX_I2O_CONTROLLERS; i++)
- {
- struct i2o_controller *c=i2o_find_controller(i);
- struct i2o_device *d;
- /*
- * This controller doesn't exist.
- */
-
- if(c==NULL)
- continue;
-
- /*
- * Fixme - we need some altered device locking. This
- * is racing with device addition in theory. Easy to fix.
- */
-
- for(d=c->devices;d!=NULL;d=d->next)
- {
- /*
- * bus_adapter, SCSI (obsolete), or FibreChannel busses only
- */
- if( (d->lct_data.class_id!=I2O_CLASS_BUS_ADAPTER_PORT) // bus_adapter
-// && (d->lct_data.class_id!=I2O_CLASS_FIBRE_CHANNEL_PORT) // FC_PORT
- )
- continue;
-
- shpnt = scsi_register(tpnt, sizeof(struct i2o_scsi_host));
- if(shpnt==NULL)
- continue;
- save_flags(flags);
- cli();
- shpnt->unique_id = (u32)d;
- shpnt->io_port = 0;
- shpnt->n_io_port = 0;
- shpnt->irq = 0;
- shpnt->this_id = /* Good question */15;
- restore_flags(flags);
- i2o_scsi_init(c, d, shpnt);
- count++;
- }
- }
- i2o_scsi_hosts = count;
-
- if(count==0)
- {
- if(sg_chain_pool!=NULL)
- {
- kfree(sg_chain_pool);
- sg_chain_pool = NULL;
- }
- flush_pending();
- del_timer(&retry_timer);
- i2o_remove_handler(&i2o_scsi_handler);
- }
-
- return count;
-}
-
-int i2o_scsi_release(struct Scsi_Host *host)
-{
- if(--i2o_scsi_hosts==0)
- {
- if(sg_chain_pool!=NULL)
- {
- kfree(sg_chain_pool);
- sg_chain_pool = NULL;
- }
- flush_pending();
- del_timer(&retry_timer);
- i2o_remove_handler(&i2o_scsi_handler);
- }
- return 0;
-}
-
-
-const char *i2o_scsi_info(struct Scsi_Host *SChost)
-{
- struct i2o_scsi_host *hostdata;
-
- hostdata = (struct i2o_scsi_host *)SChost->hostdata;
-
- return(&hostdata->controller->name[0]);
-}
-
-
-/*
- * From the wd93 driver:
- * Returns true if there will be a DATA_OUT phase with this command,
- * false otherwise.
- * (Thanks to Joerg Dorchain for the research and suggestion.)
- *
- */
-static int is_dir_out(Scsi_Cmnd *cmd)
-{
- switch (cmd->cmnd[0])
- {
- case WRITE_6: case WRITE_10: case WRITE_12:
- case WRITE_LONG: case WRITE_SAME: case WRITE_BUFFER:
- case WRITE_VERIFY: case WRITE_VERIFY_12:
- case COMPARE: case COPY: case COPY_VERIFY:
- case SEARCH_EQUAL: case SEARCH_HIGH: case SEARCH_LOW:
- case SEARCH_EQUAL_12: case SEARCH_HIGH_12: case SEARCH_LOW_12:
- case FORMAT_UNIT: case REASSIGN_BLOCKS: case RESERVE:
- case MODE_SELECT: case MODE_SELECT_10: case LOG_SELECT:
- case SEND_DIAGNOSTIC: case CHANGE_DEFINITION: case UPDATE_BLOCK:
- case SET_WINDOW: case MEDIUM_SCAN: case SEND_VOLUME_TAG:
- case 0xea:
- return 1;
- default:
- return 0;
- }
-}
-
-int i2o_scsi_queuecommand(Scsi_Cmnd * SCpnt, void (*done) (Scsi_Cmnd *))
-{
- int i;
- int tid;
- struct i2o_controller *c;
- Scsi_Cmnd *current_command;
- struct Scsi_Host *host;
- struct i2o_scsi_host *hostdata;
- u32 *msg, *mptr;
- u32 m;
- u32 *lenptr;
- int direction;
- int scsidir;
- u32 len;
- u32 reqlen;
- u32 tag;
-
- static int max_qd = 1;
-
- /*
- * Do the incoming paperwork
- */
-
- host = SCpnt->host;
- hostdata = (struct i2o_scsi_host *)host->hostdata;
- SCpnt->scsi_done = done;
-
- if(SCpnt->target > 15)
- {
- printk(KERN_ERR "i2o_scsi: Wild target %d.\n", SCpnt->target);
- return -1;
- }
-
- tid = hostdata->task[SCpnt->target][SCpnt->lun];
-
- dprintk(("qcmd: Tid = %d\n", tid));
-
- current_command = SCpnt; /* set current command */
- current_command->scsi_done = done; /* set ptr to done function */
-
- /* We don't have such a device. Pretend we did the command
- and that selection timed out */
-
- if(tid == -1)
- {
- SCpnt->result = DID_NO_CONNECT << 16;
- done(SCpnt);
- return 0;
- }
-
- dprintk(("Real scsi messages.\n"));
-
- c = hostdata->controller;
-
- /*
- * Obtain an I2O message. Right now we _have_ to obtain one
- * until the scsi layer stuff is cleaned up.
- */
-
- do
- {
- mb();
- m = I2O_POST_READ32(c);
- }
- while(m==0xFFFFFFFF);
- msg = (u32 *)(c->mem_offset + m);
-
- /*
- * Put together a scsi execscb message
- */
-
- len = SCpnt->request_bufflen;
- direction = 0x00000000; // SGL IN (osm<--iop)
-
- /*
- * The scsi layer should be handling this stuff
- */
-
- scsidir = 0x00000000; // DATA NO XFER
- if(len)
- {
- if(is_dir_out(SCpnt))
- {
- direction=0x04000000; // SGL OUT (osm-->iop)
- scsidir =0x80000000; // DATA OUT (iop-->dev)
- }
- else
- {
- scsidir =0x40000000; // DATA IN (iop<--dev)
- }
- }
-
- __raw_writel(I2O_CMD_SCSI_EXEC<<24|HOST_TID<<12|tid, &msg[1]);
- __raw_writel(scsi_context, &msg[2]); /* So the I2O layer passes to us */
- /* Sorry 64bit folks. FIXME */
- __raw_writel((u32)SCpnt, &msg[3]); /* We want the SCSI control block back */
-
- /* LSI_920_PCI_QUIRK
- *
- * Intermittant observations of msg frame word data corruption
- * observed on msg[4] after:
- * WRITE, READ-MODIFY-WRITE
- * operations. 19990606 -sralston
- *
- * (Hence we build this word via tag. Its good practice anyway
- * we don't want fetches over PCI needlessly)
- */
-
- tag=0;
-
- /*
- * Attach tags to the devices
- */
- if(SCpnt->device->tagged_supported)
- {
- /*
- * Some drives are too stupid to handle fairness issues
- * with tagged queueing. We throw in the odd ordered
- * tag to stop them starving themselves.
- */
- if((jiffies - hostdata->tagclock[SCpnt->target][SCpnt->lun]) > (5*HZ))
- {
- tag=0x01800000; /* ORDERED! */
- hostdata->tagclock[SCpnt->target][SCpnt->lun]=jiffies;
- }
- else
- {
- /* Hmmm... I always see value of 0 here,
- * of which {HEAD_OF, ORDERED, SIMPLE} are NOT! -sralston
- */
- if(SCpnt->tag == HEAD_OF_QUEUE_TAG)
- tag=0x01000000;
- else if(SCpnt->tag == ORDERED_QUEUE_TAG)
- tag=0x01800000;
- }
- }
-
- /* Direction, disconnect ok, tag, CDBLen */
- __raw_writel(scsidir|0x20000000|SCpnt->cmd_len|tag, &msg[4]);
-
- mptr=msg+5;
-
- /*
- * Write SCSI command into the message - always 16 byte block
- */
-
- memcpy_toio(mptr, SCpnt->cmnd, 16);
- mptr+=4;
- lenptr=mptr++; /* Remember me - fill in when we know */
-
- reqlen = 12; // SINGLE SGE
-
- /*
- * Now fill in the SGList and command
- *
- * FIXME: we need to set the sglist limits according to the
- * message size of the I2O controller. We might only have room
- * for 6 or so worst case
- */
-
- if(SCpnt->use_sg)
- {
- struct scatterlist *sg = (struct scatterlist *)SCpnt->request_buffer;
- int chain = 0;
-
- len = 0;
-
- if((sg_max_frags > 11) && (SCpnt->use_sg > 11))
- {
- chain = 1;
- /*
- * Need to chain!
- */
- __raw_writel(direction|0xB0000000|(SCpnt->use_sg*2*4), mptr++);
- __raw_writel(virt_to_bus(sg_chain_pool + sg_chain_tag), mptr);
- mptr = (u32*)(sg_chain_pool + sg_chain_tag);
- if (SCpnt->use_sg > max_sg_len)
- {
- max_sg_len = SCpnt->use_sg;
- printk("i2o_scsi: Chain SG! SCpnt=%p, SG_FragCnt=%d, SG_idx=%d\n",
- SCpnt, SCpnt->use_sg, sg_chain_tag);
- }
- if ( ++sg_chain_tag == SG_MAX_BUFS )
- sg_chain_tag = 0;
- for(i = 0 ; i < SCpnt->use_sg; i++)
- {
- *mptr++=direction|0x10000000|sg->length;
- len+=sg->length;
- *mptr++=virt_to_bus(sg->address);
- sg++;
- }
- mptr[-2]=direction|0xD0000000|(sg-1)->length;
- }
- else
- {
- for(i = 0 ; i < SCpnt->use_sg; i++)
- {
- __raw_writel(direction|0x10000000|sg->length, mptr++);
- len+=sg->length;
- __raw_writel(virt_to_bus(sg->address), mptr++);
- sg++;
- }
-
- /* Make this an end of list. Again evade the 920 bug and
- unwanted PCI read traffic */
-
- __raw_writel(direction|0xD0000000|(sg-1)->length, &mptr[-2]);
- }
-
- if(!chain)
- reqlen = mptr - msg;
-
- __raw_writel(len, lenptr);
-
- if(len != SCpnt->underflow)
- printk("Cmd len %08X Cmd underflow %08X\n",
- len, SCpnt->underflow);
- }
- else
- {
- dprintk(("non sg for %p, %d\n", SCpnt->request_buffer,
- SCpnt->request_bufflen));
- __raw_writel(len = SCpnt->request_bufflen, lenptr);
- if(len == 0)
- {
- reqlen = 9;
- }
- else
- {
- __raw_writel(0xD0000000|direction|SCpnt->request_bufflen, mptr++);
- __raw_writel(virt_to_bus(SCpnt->request_buffer), mptr++);
- }
- }
-
- /*
- * Stick the headers on
- */
-
- __raw_writel(reqlen<<16 | SGL_OFFSET_10, msg);
-
- /* Queue the message */
- i2o_post_message(c,m);
-
- atomic_inc(&queue_depth);
-
- if(atomic_read(&queue_depth)> max_qd)
- {
- max_qd=atomic_read(&queue_depth);
- printk("Queue depth now %d.\n", max_qd);
- }
-
- mb();
- dprintk(("Issued %ld\n", current_command->serial_number));
-
- return 0;
-}
-
-static void internal_done(Scsi_Cmnd * SCpnt)
-{
- SCpnt->SCp.Status++;
-}
-
-int i2o_scsi_command(Scsi_Cmnd * SCpnt)
-{
- i2o_scsi_queuecommand(SCpnt, internal_done);
- SCpnt->SCp.Status = 0;
- while (!SCpnt->SCp.Status)
- barrier();
- return SCpnt->result;
-}
-
-int i2o_scsi_abort(Scsi_Cmnd * SCpnt)
-{
- struct i2o_controller *c;
- struct Scsi_Host *host;
- struct i2o_scsi_host *hostdata;
- u32 *msg;
- u32 m;
- int tid;
-
- printk("i2o_scsi: Aborting command block.\n");
-
- host = SCpnt->host;
- hostdata = (struct i2o_scsi_host *)host->hostdata;
- tid = hostdata->task[SCpnt->target][SCpnt->lun];
- if(tid==-1)
- {
- printk(KERN_ERR "impossible command to abort.\n");
- return SCSI_ABORT_NOT_RUNNING;
- }
- c = hostdata->controller;
-
- /*
- * Obtain an I2O message. Right now we _have_ to obtain one
- * until the scsi layer stuff is cleaned up.
- */
-
- do
- {
- mb();
- m = I2O_POST_READ32(c);
- }
- while(m==0xFFFFFFFF);
- msg = (u32 *)(c->mem_offset + m);
-
- __raw_writel(FIVE_WORD_MSG_SIZE, &msg[0]);
- __raw_writel(I2O_CMD_SCSI_ABORT<<24|HOST_TID<<12|tid, &msg[1]);
- __raw_writel(scsi_context, &msg[2]);
- __raw_writel(0, &msg[3]); /* Not needed for an abort */
- __raw_writel((u32)SCpnt, &msg[4]);
- wmb();
- i2o_post_message(c,m);
- wmb();
- return SCSI_ABORT_PENDING;
-}
-
-int i2o_scsi_reset(Scsi_Cmnd * SCpnt, unsigned int reset_flags)
-{
- int tid;
- struct i2o_controller *c;
- struct Scsi_Host *host;
- struct i2o_scsi_host *hostdata;
- u32 m;
- u32 *msg;
-
- /*
- * Find the TID for the bus
- */
-
- printk("i2o_scsi: Attempting to reset the bus.\n");
-
- host = SCpnt->host;
- hostdata = (struct i2o_scsi_host *)host->hostdata;
- tid = hostdata->bus_task;
- c = hostdata->controller;
-
- /*
- * Now send a SCSI reset request. Any remaining commands
- * will be aborted by the IOP. We need to catch the reply
- * possibly ?
- */
-
- m = I2O_POST_READ32(c);
-
- /*
- * No free messages, try again next time - no big deal
- */
-
- if(m == 0xFFFFFFFF)
- return SCSI_RESET_PUNT;
-
- msg = (u32 *)(c->mem_offset + m);
- __raw_writel(FOUR_WORD_MSG_SIZE|SGL_OFFSET_0, &msg[0]);
- __raw_writel(I2O_CMD_SCSI_BUSRESET<<24|HOST_TID<<12|tid, &msg[1]);
- __raw_writel(scsi_context|0x80000000, &msg[2]);
- /* We use the top bit to split controller and unit transactions */
- /* Now store unit,tid so we can tie the completion back to a specific device */
- __raw_writel(c->unit << 16 | tid, &msg[3]);
- wmb();
- i2o_post_message(c,m);
- return SCSI_RESET_PENDING;
-}
-
-/*
- * This is anyones guess quite frankly.
- */
-
-int i2o_scsi_bios_param(Disk * disk, kdev_t dev, int *ip)
-{
- int size;
-
- size = disk->capacity;
- ip[0] = 64; /* heads */
- ip[1] = 32; /* sectors */
- if ((ip[2] = size >> 11) > 1024) { /* cylinders, test for big disk */
- ip[0] = 255; /* heads */
- ip[1] = 63; /* sectors */
- ip[2] = size / (255 * 63); /* cylinders */
- }
- return 0;
-}
-
-MODULE_AUTHOR("Red Hat Software");
-
-static Scsi_Host_Template driver_template = I2OSCSI;
-
-#include "../scsi/scsi_module.c"
+++ /dev/null
-#ifndef _I2O_SCSI_H
-#define _I2O_SCSI_H
-
-#if !defined(LINUX_VERSION_CODE)
-#include <linux/version.h>
-#endif
-
-#define LinuxVersionCode(v, p, s) (((v)<<16)+((p)<<8)+(s))
-
-#include <linux/types.h>
-#include <linux/kdev_t.h>
-
-#define I2O_SCSI_ID 15
-#define I2O_SCSI_CAN_QUEUE 4
-#define I2O_SCSI_CMD_PER_LUN 6
-
-extern int i2o_scsi_detect(Scsi_Host_Template *);
-extern const char *i2o_scsi_info(struct Scsi_Host *);
-extern int i2o_scsi_command(Scsi_Cmnd *);
-extern int i2o_scsi_queuecommand(Scsi_Cmnd *, void (*done)(Scsi_Cmnd *));
-extern int i2o_scsi_abort(Scsi_Cmnd *);
-extern int i2o_scsi_reset(Scsi_Cmnd *, unsigned int);
-extern int i2o_scsi_bios_param(Disk *, kdev_t, int *);
-extern void i2o_scsi_setup(char *str, int *ints);
-extern int i2o_scsi_release(struct Scsi_Host *host);
-
-#define I2OSCSI { \
- next: NULL, \
- proc_name: "i2o_scsi", \
- name: "I2O SCSI Layer", \
- detect: i2o_scsi_detect, \
- release: i2o_scsi_release, \
- info: i2o_scsi_info, \
- command: i2o_scsi_command, \
- queuecommand: i2o_scsi_queuecommand, \
- abort: i2o_scsi_abort, \
- reset: i2o_scsi_reset, \
- bios_param: i2o_scsi_bios_param, \
- can_queue: I2O_SCSI_CAN_QUEUE, \
- this_id: I2O_SCSI_ID, \
- sg_tablesize: 8, \
- cmd_per_lun: I2O_SCSI_CMD_PER_LUN, \
- unchecked_isa_dma: 0, \
- use_clustering: ENABLE_CLUSTERING \
- }
-
-#endif
case BLKGETSIZE: /* Return device size */
return put_user(hd[MINOR(inode->i_rdev)].nr_sects,
- (long *) arg);
+ (unsigned long *) arg);
case BLKGETSIZE64:
return put_user((u64)hd[MINOR(inode->i_rdev)].nr_sects << 9,
(u64 *) arg);
if (!arg) return -EINVAL;
sectors = ataraid_gendisk.part[MINOR(inode->i_rdev)].nr_sects;
if (MINOR(inode->i_rdev)&15)
- return put_user(sectors, (long *) arg);
- return put_user(raid[minor].sectors , (long *) arg);
+ return put_user(sectors, (unsigned long *) arg);
+ return put_user(raid[minor].sectors , (unsigned long *) arg);
break;
* any other way to detect this...
*/
if (sense.sense_key == NOT_READY) {
- if (sense.asc == 0x3a && (!sense.ascq||sense.ascq == 1))
+ if (sense.asc == 0x3a && sense.ascq == 1)
return CDS_NO_DISC;
else
return CDS_TRAY_OPEN;
module_init(ide_cdrom_init);
module_exit(ide_cdrom_exit);
+MODULE_LICENSE("GPL");
}
case BLKGETSIZE: /* Return device size */
- return put_user(drive->part[MINOR(inode->i_rdev)&PARTN_MASK].nr_sects, (long *) arg);
+ return put_user(drive->part[MINOR(inode->i_rdev)&PARTN_MASK].nr_sects, (unsigned long *) arg);
case BLKGETSIZE64:
return put_user((u64)drive->part[MINOR(inode->i_rdev)&PARTN_MASK].nr_sects << 9, (u64 *) arg);
if (!arg) return -EINVAL;
sectors = ataraid_gendisk.part[MINOR(inode->i_rdev)].nr_sects;
if (MINOR(inode->i_rdev)&15)
- return put_user(sectors, (long *) arg);
- return put_user(raid[minor].sectors , (long *) arg);
+ return put_user(sectors, (unsigned long *) arg);
+ return put_user(raid[minor].sectors , (unsigned long *) arg);
break;
module_init(macserial_init);
module_exit(macserial_cleanup);
+MODULE_LICENSE("GPL");
+EXPORT_NO_SYMBOLS;
#if 0
/*
module_init(nvram_init);
module_exit(nvram_cleanup);
+MODULE_LICENSE("GPL");
module_init(rtc_init);
module_exit(rtc_exit);
+MODULE_LICENSE("GPL");
/* return device size */
P_IOCTL("%s -- lvm_blk_ioctl -- BLKGETSIZE: %u\n",
lvm_name, lv_ptr->lv_size);
- if (put_user(lv_ptr->lv_size, (long *)arg))
+ if (put_user(lv_ptr->lv_size, (unsigned long *)arg))
return -EFAULT;
break;
goto abort;
}
err = md_put_user(md_hd_struct[minor].nr_sects,
- (long *) arg);
+ (unsigned long *) arg);
goto done;
case BLKGETSIZE64: /* Return device size */
--- /dev/null
+mainmenu_option next_comment
+comment 'I2O device support'
+
+tristate 'I2O support' CONFIG_I2O
+
+if [ "$CONFIG_PCI" = "y" ]; then
+ dep_tristate ' I2O PCI support' CONFIG_I2O_PCI $CONFIG_I2O
+fi
+dep_tristate ' I2O Block OSM' CONFIG_I2O_BLOCK $CONFIG_I2O
+if [ "$CONFIG_NET" = "y" ]; then
+ dep_tristate ' I2O LAN OSM' CONFIG_I2O_LAN $CONFIG_I2O
+fi
+dep_tristate ' I2O SCSI OSM' CONFIG_I2O_SCSI $CONFIG_I2O $CONFIG_SCSI
+dep_tristate ' I2O /proc support' CONFIG_I2O_PROC $CONFIG_I2O
+
+endmenu
--- /dev/null
+#
+# Makefile for the kernel I2O OSM.
+#
+# Note : at this point, these files are compiled on all systems.
+# In the future, some of these should be built conditionally.
+#
+
+O_TARGET := i2o.o
+
+export-objs := i2o_pci.o i2o_core.o i2o_config.o i2o_block.o i2o_lan.o i2o_scsi.o i2o_proc.o
+
+obj-$(CONFIG_I2O_PCI) += i2o_pci.o
+obj-$(CONFIG_I2O) += i2o_core.o i2o_config.o
+obj-$(CONFIG_I2O_BLOCK) += i2o_block.o
+obj-$(CONFIG_I2O_LAN) += i2o_lan.o
+obj-$(CONFIG_I2O_SCSI) += i2o_scsi.o
+obj-$(CONFIG_I2O_PROC) += i2o_proc.o
+
+include $(TOPDIR)/Rules.make
+
--- /dev/null
+
+ Linux I2O Support (c) Copyright 1999 Red Hat Software
+ and others.
+
+ This program is free software; you can redistribute it and/or
+ modify it under the terms of the GNU General Public License
+ as published by the Free Software Foundation; either version
+ 2 of the License, or (at your option) any later version.
+
+AUTHORS (so far)
+
+Alan Cox, Building Number Three Ltd.
+ Core code, SCSI and Block OSMs
+
+Steve Ralston, LSI Logic Corp.
+ Debugging SCSI and Block OSM
+
+Deepak Saxena, Intel Corp.
+ Various core/block extensions
+ /proc interface, bug fixes
+ Ioctl interfaces for control
+ Debugging LAN OSM
+
+Philip Rumpf
+ Fixed assorted dumb SMP locking bugs
+
+Juha Sievanen, University of Helsinki Finland
+ LAN OSM code
+ /proc interface to LAN class
+ Bug fixes
+ Core code extensions
+
+Auvo Häkkinen, University of Helsinki Finland
+ LAN OSM code
+	/proc interface to LAN class
+ Bug fixes
+ Core code extensions
+
+Taneli Vähäkangas, University of Helsinki Finland
+ Fixes to i2o_config
+
+CREDITS
+
+ This work was made possible by
+
+Red Hat Software
+ Funding for the Building #3 part of the project
+
+Symbios Logic (Now LSI)
+ Host adapters, hints, known to work platforms when I hit
+ compatibility problems
+
+BoxHill Corporation
+ Loan of initial FibreChannel disk array used for development work.
+
+European Commission
+ Funding the work done by the University of Helsinki
+
+SysKonnect
+ Loan of FDDI and Gigabit Ethernet cards
+
+ASUSTeK
+ Loan of I2O motherboard
+
+STATUS:
+
+o The core setup works within limits.
+o The scsi layer seems to almost work.
+ I'm still chasing down the hang bug.
+o The block OSM is mostly functional
+o LAN OSM works with FDDI and Ethernet cards.
+
+TO DO:
+
+General:
+o Provide hidden address space if asked
+o Long term message flow control
+o	PCI IOPs without interrupts are not supported yet
+o Push FAIL handling into the core
+o DDM control interfaces for module load etc
+o	Add I2O 2.0 support (Deferred to 2.5 kernel)
+
+Block:
+o Multiple major numbers
+o Read ahead and cache handling stuff. Talk to Ingo and people
+o Power management
+o Finish Media changers
+
+SCSI:
+o Find the right way to associate drives/luns/busses
+
+Lan:
+o Performance tuning
+o Test Fibre Channel code
+
+Tape:
+o Anyone seen anything implementing this ?
+ (D.S: Will attempt to do so if spare cycles permit)
--- /dev/null
+
+Linux I2O User Space Interface
+rev 0.3 - 04/20/99
+
+=============================================================================
+Originally written by Deepak Saxena (deepak@plexity.net)
+Currently maintained by Deepak Saxena (deepak@plexity.net)
+=============================================================================
+
+I. Introduction
+
+The Linux I2O subsystem provides a set of ioctl() commands that can be
+utilized by user space applications to communicate with IOPs and devices
+on individual IOPs. This document defines the specific ioctl() commands
+that are available to the user and provides examples of their uses.
+
+This document assumes the reader is familiar with or has access to the
+I2O specification as no I2O message parameters are outlined. For information
+on the specification, see http://www.i2osig.org
+
+This document and the I2O user space interface are currently maintained
+by Deepak Saxena. Please send all comments, errata, and bug fixes to
+deepak@csociety.purdue.edu
+
+II. IOP Access
+
+Access to the I2O subsystem is provided through the device file named
+/dev/i2o/ctl. This file is a character file with major number 10 and minor
+number 166. It can be created through the following command:
+
+ mknod /dev/i2o/ctl c 10 166
+
+III. Determining the IOP Count
+
+ SYNOPSIS
+
+ ioctl(fd, I2OGETIOPS, int *count);
+
+ u8 count[MAX_I2O_CONTROLLERS];
+
+ DESCRIPTION
+
+ This function returns the system's active IOP table. count should
+ point to a buffer containing MAX_I2O_CONTROLLERS entries. Upon
+ returning, each entry will contain a non-zero value if the given
+	IOP unit is active, and zero if it is inactive or non-existent.
+
+	RETURNS
+
+ Returns 0 if no errors occur, and -1 otherwise. If an error occurs,
+ errno is set appropriately:
+
+ EFAULT Invalid user space pointer was passed
+
+IV. Getting Hardware Resource Table
+
+ SYNOPSIS
+
+ ioctl(fd, I2OHRTGET, struct i2o_cmd_hrt *hrt);
+
+ struct i2o_cmd_hrtlct
+ {
+ u32 iop; /* IOP unit number */
+ void *resbuf; /* Buffer for result */
+ u32 *reslen; /* Buffer length in bytes */
+ };
+
+ DESCRIPTION
+
+ This function returns the Hardware Resource Table of the IOP specified
+ by hrt->iop in the buffer pointed to by hrt->resbuf. The actual size of
+ the data is written into *(hrt->reslen).
+
+ RETURNS
+
+ This function returns 0 if no errors occur. If an error occurs, -1
+ is returned and errno is set appropriately:
+
+ EFAULT Invalid user space pointer was passed
+ ENXIO Invalid IOP number
+ ENOBUFS Buffer not large enough. If this occurs, the required
+ buffer length is written into *(hrt->reslen)
+
+V. Getting Logical Configuration Table
+
+ SYNOPSIS
+
+ ioctl(fd, I2OLCTGET, struct i2o_cmd_lct *lct);
+
+ struct i2o_cmd_hrtlct
+ {
+ u32 iop; /* IOP unit number */
+ void *resbuf; /* Buffer for result */
+ u32 *reslen; /* Buffer length in bytes */
+ };
+
+ DESCRIPTION
+
+ This function returns the Logical Configuration Table of the IOP specified
+ by lct->iop in the buffer pointed to by lct->resbuf. The actual size of
+ the data is written into *(lct->reslen).
+
+ RETURNS
+
+ This function returns 0 if no errors occur. If an error occurs, -1
+ is returned and errno is set appropriately:
+
+ EFAULT Invalid user space pointer was passed
+ ENXIO Invalid IOP number
+ ENOBUFS Buffer not large enough. If this occurs, the required
+ buffer length is written into *(lct->reslen)
+
+VI. Setting Parameters
+
+ SYNOPSIS
+
+ ioctl(fd, I2OPARMSET, struct i2o_parm_setget *ops);
+
+ struct i2o_cmd_psetget
+ {
+ u32 iop; /* IOP unit number */
+ u32 tid; /* Target device TID */
+ void *opbuf; /* Operation List buffer */
+ u32 oplen; /* Operation List buffer length in bytes */
+ void *resbuf; /* Result List buffer */
+ u32 *reslen; /* Result List buffer length in bytes */
+ };
+
+ DESCRIPTION
+
+ This function posts a UtilParamsSet message to the device identified
+ by ops->iop and ops->tid. The operation list for the message is
+ sent through the ops->opbuf buffer, and the result list is written
+ into the buffer pointed to by ops->resbuf. The number of bytes
+ written is placed into *(ops->reslen).
+
+ RETURNS
+
+ The return value is the size in bytes of the data written into
+ ops->resbuf if no errors occur. If an error occurs, -1 is returned
+	and errno is set appropriately:
+
+ EFAULT Invalid user space pointer was passed
+ ENXIO Invalid IOP number
+ ENOBUFS Buffer not large enough. If this occurs, the required
+ buffer length is written into *(ops->reslen)
+ ETIMEDOUT Timeout waiting for reply message
+ ENOMEM Kernel memory allocation error
+
+ A return value of 0 does not mean that the value was actually
+ changed properly on the IOP. The user should check the result
+ list to determine the specific status of the transaction.
+
+VII. Getting Parameters
+
+ SYNOPSIS
+
+ ioctl(fd, I2OPARMGET, struct i2o_parm_setget *ops);
+
+ struct i2o_parm_setget
+ {
+ u32 iop; /* IOP unit number */
+ u32 tid; /* Target device TID */
+ void *opbuf; /* Operation List buffer */
+ u32 oplen; /* Operation List buffer length in bytes */
+ void *resbuf; /* Result List buffer */
+ u32 *reslen; /* Result List buffer length in bytes */
+ };
+
+ DESCRIPTION
+
+ This function posts a UtilParamsGet message to the device identified
+ by ops->iop and ops->tid. The operation list for the message is
+ sent through the ops->opbuf buffer, and the result list is written
+ into the buffer pointed to by ops->resbuf. The actual size of data
+ written is placed into *(ops->reslen).
+
+	RETURNS
+
+	This function returns 0 if no errors occur. If an error occurs, -1
+	is returned and errno is set appropriately:
+
+ EFAULT Invalid user space pointer was passed
+ ENXIO Invalid IOP number
+ ENOBUFS Buffer not large enough. If this occurs, the required
+ buffer length is written into *(ops->reslen)
+ ETIMEDOUT Timeout waiting for reply message
+ ENOMEM Kernel memory allocation error
+
+ A return value of 0 does not mean that the value was actually
+	properly retrieved. The user should check the result list
+ to determine the specific status of the transaction.
+
+VIII. Downloading Software
+
+ SYNOPSIS
+
+ ioctl(fd, I2OSWDL, struct i2o_sw_xfer *sw);
+
+ struct i2o_sw_xfer
+ {
+ u32 iop; /* IOP unit number */
+ u8 flags; /* DownloadFlags field */
+ u8 sw_type; /* Software type */
+ u32 sw_id; /* Software ID */
+ void *buf; /* Pointer to software buffer */
+ u32 *swlen; /* Length of software buffer */
+ u32 *maxfrag; /* Number of fragments */
+ u32 *curfrag; /* Current fragment number */
+ };
+
+ DESCRIPTION
+
+	This function downloads a software fragment pointed to by sw->buf
+ to the iop identified by sw->iop. The DownloadFlags, SwID, SwType
+ and SwSize fields of the ExecSwDownload message are filled in with
+ the values of sw->flags, sw->sw_id, sw->sw_type and *(sw->swlen).
+
+ The fragments _must_ be sent in order and be 8K in size. The last
+ fragment _may_ be shorter, however. The kernel will compute its
+ size based on information in the sw->swlen field.
+
+ Please note that SW transfers can take a long time.
+
+ RETURNS
+
+	This function returns 0 if no errors occur. If an error occurs, -1
+	is returned and errno is set appropriately:
+
+ EFAULT Invalid user space pointer was passed
+ ENXIO Invalid IOP number
+ ETIMEDOUT Timeout waiting for reply message
+ ENOMEM Kernel memory allocation error
+
+IX. Uploading Software
+
+ SYNOPSIS
+
+ ioctl(fd, I2OSWUL, struct i2o_sw_xfer *sw);
+
+ struct i2o_sw_xfer
+ {
+ u32 iop; /* IOP unit number */
+ u8 flags; /* UploadFlags */
+ u8 sw_type; /* Software type */
+ u32 sw_id; /* Software ID */
+ void *buf; /* Pointer to software buffer */
+ u32 *swlen; /* Length of software buffer */
+ u32 *maxfrag; /* Number of fragments */
+ u32 *curfrag; /* Current fragment number */
+ };
+
+ DESCRIPTION
+
+ This function uploads a software fragment from the IOP identified
+ by sw->iop, sw->sw_type, sw->sw_id and optionally sw->swlen fields.
+ The UploadFlags, SwID, SwType and SwSize fields of the ExecSwUpload
+ message are filled in with the values of sw->flags, sw->sw_id,
+ sw->sw_type and *(sw->swlen).
+
+ The fragments _must_ be requested in order and be 8K in size. The
+	user is responsible for allocating the memory pointed to by sw->buf. The
+ last fragment _may_ be shorter.
+
+ Please note that SW transfers can take a long time.
+
+ RETURNS
+
+ This function returns 0 if no errors occur. If an error occurs, -1
+	is returned and errno is set appropriately:
+
+ EFAULT Invalid user space pointer was passed
+ ENXIO Invalid IOP number
+ ETIMEDOUT Timeout waiting for reply message
+ ENOMEM Kernel memory allocation error
+
+X. Removing Software
+
+ SYNOPSIS
+
+ ioctl(fd, I2OSWDEL, struct i2o_sw_xfer *sw);
+
+ struct i2o_sw_xfer
+ {
+ u32 iop; /* IOP unit number */
+ u8 flags; /* RemoveFlags */
+ u8 sw_type; /* Software type */
+ u32 sw_id; /* Software ID */
+ void *buf; /* Unused */
+ u32 *swlen; /* Length of the software data */
+ u32 *maxfrag; /* Unused */
+ u32 *curfrag; /* Unused */
+ };
+
+ DESCRIPTION
+
+ This function removes software from the IOP identified by sw->iop.
+ The RemoveFlags, SwID, SwType and SwSize fields of the ExecSwRemove message
+ are filled in with the values of sw->flags, sw->sw_id, sw->sw_type and
+	*(sw->swlen). Give zero in *(sw->swlen) if the value is unknown. The IOP
+	uses the *(sw->swlen) value to verify correct identification of the
+	module to remove.
+ The actual size of the module is written into *(sw->swlen).
+
+ RETURNS
+
+ This function returns 0 if no errors occur. If an error occurs, -1
+	is returned and errno is set appropriately:
+
+ EFAULT Invalid user space pointer was passed
+ ENXIO Invalid IOP number
+ ETIMEDOUT Timeout waiting for reply message
+ ENOMEM Kernel memory allocation error
+
+XI. Validating Configuration
+
+ SYNOPSIS
+
+ ioctl(fd, I2OVALIDATE, int *iop);
+ u32 iop;
+
+ DESCRIPTION
+
+ This function posts an ExecConfigValidate message to the controller
+	identified by iop. This message indicates that the current
+	configuration is accepted. The IOP changes the status of suspect drivers
+ to valid and may delete old drivers from its store.
+
+ RETURNS
+
+	This function returns 0 if no errors occur. If an error occurs, -1 is
+	returned and errno is set appropriately:
+
+ ETIMEDOUT Timeout waiting for reply message
+ ENXIO Invalid IOP number
+
+XII. Configuration Dialog
+
+ SYNOPSIS
+
+ ioctl(fd, I2OHTML, struct i2o_html *htquery);
+ struct i2o_html
+ {
+ u32 iop; /* IOP unit number */
+ u32 tid; /* Target device ID */
+ u32 page; /* HTML page */
+ void *resbuf; /* Buffer for reply HTML page */
+ u32 *reslen; /* Length in bytes of reply buffer */
+ void *qbuf; /* Pointer to HTTP query string */
+ u32 qlen; /* Length in bytes of query string buffer */
+ };
+
+ DESCRIPTION
+
+ This function posts an UtilConfigDialog message to the device identified
+ by htquery->iop and htquery->tid. The requested HTML page number is
+ provided by the htquery->page field, and the resultant data is stored
+ in the buffer pointed to by htquery->resbuf. If there is an HTTP query
+ string that is to be sent to the device, it should be sent in the buffer
+ pointed to by htquery->qbuf. If there is no query string, this field
+ should be set to NULL. The actual size of the reply received is written
+ into *(htquery->reslen).
+
+ RETURNS
+
+	This function returns 0 if no errors occur. If an error occurs, -1
+	is returned and errno is set appropriately:
+
+ EFAULT Invalid user space pointer was passed
+ ENXIO Invalid IOP number
+ ENOBUFS Buffer not large enough. If this occurs, the required
+		buffer length is written into *(htquery->reslen)
+ ETIMEDOUT Timeout waiting for reply message
+ ENOMEM Kernel memory allocation error
+
+XIII. Events
+
+	This is still being determined. The current idea is to use the
+	select() interface to allow user apps to periodically poll
+ the /dev/i2o/ctl device for events. When select() notifies the user
+ that an event is available, the user would call read() to retrieve
+ a list of all the events that are pending for the specific device.
+
+=============================================================================
+Revision History
+=============================================================================
+
+Rev 0.1 - 04/01/99
+- Initial revision
+
+Rev 0.2 - 04/06/99
+- Changed return values to match UNIX ioctl() standard. Only return values
+ are 0 and -1. All errors are reported through errno.
+- Added summary of proposed possible event interfaces
+
+Rev 0.3 - 04/20/99
+- Changed all ioctls() to use pointers to user data instead of actual data
+- Updated error values to match the code
--- /dev/null
+/*
+ * I2O Random Block Storage Class OSM
+ *
+ * (C) Copyright 1999 Red Hat Software
+ *
+ * Written by Alan Cox, Building Number Three Ltd
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ * This is a beta test release. Most of the good code was taken
+ * from the nbd driver by Pavel Machek, who in turn took some of it
+ * from loop.c. Isn't free software great for reusability 8)
+ *
+ * Fixes/additions:
+ * Steve Ralston:
+ * Multiple device handling error fixes,
+ * Added a queue depth.
+ * Alan Cox:
+ *	FC920 has an rmw bug. Don't OR in the end marker.
+ * Removed queue walk, fixed for 64bitness.
+ * Deepak Saxena:
+ * Independent queues per IOP
+ * Support for dynamic device creation/deletion
+ * Code cleanup
+ * Support for larger I/Os through merge* functions
+ * (taken from DAC960 driver)
+ * Boji T Kannanthanam:
+ * Set the I2O Block devices to be detected in increasing
+ * order of TIDs during boot.
+ * Search and set the I2O block device that we boot off from as
+ * the first device to be claimed (as /dev/i2o/hda)
+ * Properly attach/detach I2O gendisk structure from the system
+ * gendisk list. The I2O block devices now appear in
+ * /proc/partitions.
+ *
+ * To do:
+ * Serial number scanning to find duplicates for FC multipathing
+ */
+
+#include <linux/major.h>
+
+#include <linux/module.h>
+
+#include <linux/sched.h>
+#include <linux/fs.h>
+#include <linux/stat.h>
+#include <linux/errno.h>
+#include <linux/file.h>
+#include <linux/ioctl.h>
+#include <linux/i2o.h>
+#include <linux/blkdev.h>
+#include <linux/blkpg.h>
+#include <linux/slab.h>
+#include <linux/hdreg.h>
+
+#include <linux/notifier.h>
+#include <linux/reboot.h>
+
+#include <asm/uaccess.h>
+#include <asm/semaphore.h>
+#include <linux/completion.h>
+#include <asm/io.h>
+#include <asm/atomic.h>
+#include <linux/smp_lock.h>
+#include <linux/wait.h>
+
+#define MAJOR_NR I2O_MAJOR
+
+#include <linux/blk.h>
+
+#define MAX_I2OB 16
+
+#define MAX_I2OB_DEPTH 128
+#define MAX_I2OB_RETRIES 4
+
+//#define DRIVERDEBUG
+#ifdef DRIVERDEBUG
+#define DEBUG( s ) printk( s )
+#else
+#define DEBUG( s )
+#endif
+
+/*
+ * Events that this OSM is interested in
+ */
+#define I2OB_EVENT_MASK (I2O_EVT_IND_BSA_VOLUME_LOAD | \
+ I2O_EVT_IND_BSA_VOLUME_UNLOAD | \
+ I2O_EVT_IND_BSA_VOLUME_UNLOAD_REQ | \
+ I2O_EVT_IND_BSA_CAPACITY_CHANGE | \
+ I2O_EVT_IND_BSA_SCSI_SMART )
+
+
+/*
+ * I2O Block Error Codes - should be in a header file really...
+ */
+#define I2O_BSA_DSC_SUCCESS 0x0000
+#define I2O_BSA_DSC_MEDIA_ERROR 0x0001
+#define I2O_BSA_DSC_ACCESS_ERROR 0x0002
+#define I2O_BSA_DSC_DEVICE_FAILURE 0x0003
+#define I2O_BSA_DSC_DEVICE_NOT_READY 0x0004
+#define I2O_BSA_DSC_MEDIA_NOT_PRESENT 0x0005
+#define I2O_BSA_DSC_MEDIA_LOCKED 0x0006
+#define I2O_BSA_DSC_MEDIA_FAILURE 0x0007
+#define I2O_BSA_DSC_PROTOCOL_FAILURE 0x0008
+#define I2O_BSA_DSC_BUS_FAILURE 0x0009
+#define I2O_BSA_DSC_ACCESS_VIOLATION 0x000A
+#define I2O_BSA_DSC_WRITE_PROTECTED 0x000B
+#define I2O_BSA_DSC_DEVICE_RESET 0x000C
+#define I2O_BSA_DSC_VOLUME_CHANGED 0x000D
+#define I2O_BSA_DSC_TIMEOUT 0x000E
+
+/*
+ * Some of these can be made smaller later
+ */
+
+static int i2ob_blksizes[MAX_I2OB<<4];
+static int i2ob_hardsizes[MAX_I2OB<<4];
+static int i2ob_sizes[MAX_I2OB<<4];
+static int i2ob_media_change_flag[MAX_I2OB];
+static u32 i2ob_max_sectors[MAX_I2OB<<4];
+
+static int i2ob_context;
+
+/*
+ * I2O Block device descriptor
+ */
+struct i2ob_device
+{
+ struct i2o_controller *controller;
+ struct i2o_device *i2odev;
+ int unit;
+ int tid;
+ int flags;
+ int refcnt;
+ struct request *head, *tail;
+ request_queue_t *req_queue;
+ int max_segments;
+ int done_flag;
+ int constipated;
+ int depth;
+};
+
+/*
+ * FIXME:
+ * We should cache align these to avoid ping-ponging lines on SMP
+ * boxes under heavy I/O load...
+ */
+struct i2ob_request
+{
+ struct i2ob_request *next;
+ struct request *req;
+ int num;
+};
+
+/*
+ * Per-IOP request queue information
+ *
+ * We have a separate request_queue_t per IOP so that a heavily
+ * loaded I2O block device on one IOP does not starve block devices
+ * across all I2O controllers.
+ *
+ */
+struct i2ob_iop_queue
+{
+ atomic_t queue_depth;
+ struct i2ob_request request_queue[MAX_I2OB_DEPTH];
+ struct i2ob_request *i2ob_qhead;
+ request_queue_t req_queue;
+};
+static struct i2ob_iop_queue *i2ob_queues[MAX_I2O_CONTROLLERS];
+static struct i2ob_request *i2ob_backlog[MAX_I2O_CONTROLLERS];
+static struct i2ob_request *i2ob_backlog_tail[MAX_I2O_CONTROLLERS];
+
+/*
+ * Each I2O disk is one of these.
+ */
+
+static struct i2ob_device i2ob_dev[MAX_I2OB<<4];
+static int i2ob_dev_count = 0;
+static struct hd_struct i2ob[MAX_I2OB<<4];
+static struct gendisk i2ob_gendisk; /* Declared later */
+
+/*
+ * Mutex and spin lock for event handling synchronization
+ * evt_msg contains the last event.
+ */
+static DECLARE_MUTEX_LOCKED(i2ob_evt_sem);
+static DECLARE_COMPLETION(i2ob_thread_dead);
+static spinlock_t i2ob_evt_lock = SPIN_LOCK_UNLOCKED;
+static u32 evt_msg[MSG_FRAME_SIZE>>2];
+
+static struct timer_list i2ob_timer;
+static int i2ob_timer_started = 0;
+
+static void i2o_block_reply(struct i2o_handler *, struct i2o_controller *,
+ struct i2o_message *);
+static void i2ob_new_device(struct i2o_controller *, struct i2o_device *);
+static void i2ob_del_device(struct i2o_controller *, struct i2o_device *);
+static void i2ob_reboot_event(void);
+static int i2ob_install_device(struct i2o_controller *, struct i2o_device *, int);
+static void i2ob_end_request(struct request *);
+static void i2ob_request(request_queue_t *);
+static int i2ob_backlog_request(struct i2o_controller *, struct i2ob_device *);
+static int i2ob_init_iop(unsigned int);
+static request_queue_t* i2ob_get_queue(kdev_t);
+static int i2ob_query_device(struct i2ob_device *, int, int, void*, int);
+static int do_i2ob_revalidate(kdev_t, int);
+static int i2ob_evt(void *);
+
+static int evt_pid = 0;
+static int evt_running = 0;
+static int scan_unit = 0;
+
+/*
+ * I2O OSM registration structure...keeps getting bigger and bigger :)
+ */
+static struct i2o_handler i2o_block_handler =
+{
+ i2o_block_reply,
+ i2ob_new_device,
+ i2ob_del_device,
+ i2ob_reboot_event,
+ "I2O Block OSM",
+ 0,
+ I2O_CLASS_RANDOM_BLOCK_STORAGE
+};
+
+/*
+ * Get a message
+ */
+
+static u32 i2ob_get(struct i2ob_device *dev)
+{
+ struct i2o_controller *c=dev->controller;
+ return I2O_POST_READ32(c);
+}
+
+/*
+ * Turn a Linux block request into an I2O block read/write.
+ */
+
+static int i2ob_send(u32 m, struct i2ob_device *dev, struct i2ob_request *ireq, u32 base, int unit)
+{
+ struct i2o_controller *c = dev->controller;
+ int tid = dev->tid;
+ unsigned long msg;
+ unsigned long mptr;
+ u64 offset;
+ struct request *req = ireq->req;
+ struct buffer_head *bh = req->bh;
+ int count = req->nr_sectors<<9;
+ char *last = NULL;
+ unsigned short size = 0;
+
+ // printk(KERN_INFO "i2ob_send called\n");
+ /* Map the message to a virtual address */
+ msg = c->mem_offset + m;
+
+ /*
+ * Build the message based on the request.
+ */
+ __raw_writel(i2ob_context|(unit<<8), msg+8);
+ __raw_writel(ireq->num, msg+12);
+ __raw_writel(req->nr_sectors << 9, msg+20);
+
+ /*
+ * Mask out partitions from now on
+ */
+ unit &= 0xF0;
+
+ /* This can be optimised later - just want to be sure it's right for
+ starters */
+ offset = ((u64)(req->sector+base)) << 9;
+ __raw_writel( offset & 0xFFFFFFFF, msg+24);
+ __raw_writel(offset>>32, msg+28);
+ mptr=msg+32;
+
+ if(req->cmd == READ)
+ {
+ __raw_writel(I2O_CMD_BLOCK_READ<<24|HOST_TID<<12|tid, msg+4);
+ while(bh!=NULL)
+ {
+ if(bh->b_data == last) {
+ size += bh->b_size;
+ last += bh->b_size;
+ if(bh->b_reqnext)
+ __raw_writel(0x14000000|(size), mptr-8);
+ else
+ __raw_writel(0xD4000000|(size), mptr-8);
+ }
+ else
+ {
+ if(bh->b_reqnext)
+ __raw_writel(0x10000000|(bh->b_size), mptr);
+ else
+ __raw_writel(0xD0000000|(bh->b_size), mptr);
+ __raw_writel(virt_to_bus(bh->b_data), mptr+4);
+ mptr += 8;
+ size = bh->b_size;
+ last = bh->b_data + size;
+ }
+
+ count -= bh->b_size;
+ bh = bh->b_reqnext;
+ }
+ /*
+ * Heuristic for now since the block layer doesn't give
+ * us enough info. If it's a big write, assume sequential
+ * readahead on the controller. If it's small then don't read
+ * ahead but do use the controller cache.
+ */
+ if(size >= 8192)
+ __raw_writel((8<<24)|(1<<16)|8, msg+16);
+ else
+ __raw_writel((8<<24)|(1<<16)|4, msg+16);
+ }
+ else if(req->cmd == WRITE)
+ {
+ __raw_writel(I2O_CMD_BLOCK_WRITE<<24|HOST_TID<<12|tid, msg+4);
+ while(bh!=NULL)
+ {
+ if(bh->b_data == last) {
+ size += bh->b_size;
+ last += bh->b_size;
+ if(bh->b_reqnext)
+ __raw_writel(0x14000000|(size), mptr-8);
+ else
+ __raw_writel(0xD4000000|(size), mptr-8);
+ }
+ else
+ {
+ if(bh->b_reqnext)
+ __raw_writel(0x14000000|(bh->b_size), mptr);
+ else
+ __raw_writel(0xD4000000|(bh->b_size), mptr);
+ __raw_writel(virt_to_bus(bh->b_data), mptr+4);
+ mptr += 8;
+ size = bh->b_size;
+ last = bh->b_data + size;
+ }
+
+ count -= bh->b_size;
+ bh = bh->b_reqnext;
+ }
+
+ if(c->battery)
+ {
+
+ if(size>16384)
+ __raw_writel(4, msg+16);
+ else
+ /*
+ * Allow replies to come back once data is cached in the controller
+ * This allows us to handle writes quickly thus giving more of the
+ * queue to reads.
+ */
+ __raw_writel(16, msg+16);
+ }
+ else
+ {
+ /* Large write, don't cache */
+ if(size>8192)
+ __raw_writel(4, msg+16);
+ else
+ /* write through */
+ __raw_writel(8, msg+16);
+ }
+ }
+ __raw_writel(I2O_MESSAGE_SIZE(mptr-msg)>>2 | SGL_OFFSET_8, msg);
+
+ if(count != 0)
+ {
+ printk(KERN_ERR "Request count botched by %d.\n", count);
+ }
+
+ i2o_post_message(c,m);
+ atomic_inc(&i2ob_queues[c->unit]->queue_depth);
+
+ return 0;
+}
+
+/*
+ * Remove a request from the _locked_ request list. We update both the
+ * list chain and if this is the last item the tail pointer. Caller
+ * must hold the lock.
+ */
+
+static inline void i2ob_unhook_request(struct i2ob_request *ireq,
+ unsigned int iop)
+{
+ ireq->next = i2ob_queues[iop]->i2ob_qhead;
+ i2ob_queues[iop]->i2ob_qhead = ireq;
+}
+
+/*
+ * Request completion handler
+ */
+
+static inline void i2ob_end_request(struct request *req)
+{
+ /*
+ * Loop until all of the buffers that are linked
+ * to this request have been marked updated and
+ * unlocked.
+ */
+
+ while (end_that_request_first( req, !req->errors, "i2o block" ));
+
+ /*
+ * It is now ok to complete the request.
+ */
+ end_that_request_last( req );
+}
+
+/*
+ * Request merging functions
+ */
+static inline int i2ob_new_segment(request_queue_t *q, struct request *req,
+ int __max_segments)
+{
+ int max_segments = i2ob_dev[MINOR(req->rq_dev)].max_segments;
+
+ if (__max_segments < max_segments)
+ max_segments = __max_segments;
+
+ if (req->nr_segments < max_segments) {
+ req->nr_segments++;
+ return 1;
+ }
+ return 0;
+}
+
+static int i2ob_back_merge(request_queue_t *q, struct request *req,
+ struct buffer_head *bh, int __max_segments)
+{
+ if (req->bhtail->b_data + req->bhtail->b_size == bh->b_data)
+ return 1;
+ return i2ob_new_segment(q, req, __max_segments);
+}
+
+static int i2ob_front_merge(request_queue_t *q, struct request *req,
+ struct buffer_head *bh, int __max_segments)
+{
+ if (bh->b_data + bh->b_size == req->bh->b_data)
+ return 1;
+ return i2ob_new_segment(q, req, __max_segments);
+}
+
+static int i2ob_merge_requests(request_queue_t *q,
+ struct request *req,
+ struct request *next,
+ int __max_segments)
+{
+ int max_segments = i2ob_dev[MINOR(req->rq_dev)].max_segments;
+ int total_segments = req->nr_segments + next->nr_segments;
+
+ if (__max_segments < max_segments)
+ max_segments = __max_segments;
+
+ if (req->bhtail->b_data + req->bhtail->b_size == next->bh->b_data)
+ total_segments--;
+
+ if (total_segments > max_segments)
+ return 0;
+
+ req->nr_segments = total_segments;
+ return 1;
+}
+
+static int i2ob_flush(struct i2o_controller *c, struct i2ob_device *d, int unit)
+{
+ unsigned long msg;
+ u32 m = i2ob_get(d);
+
+ if(m == 0xFFFFFFFF)
+ return -1;
+
+ msg = c->mem_offset + m;
+
+ /*
+ * Ask the controller to write the cache back. This sorts out
+ * the supertrak firmware flaw and also does roughly the right
+ * thing for other cases too.
+ */
+
+ __raw_writel(FIVE_WORD_MSG_SIZE|SGL_OFFSET_0, msg);
+ __raw_writel(I2O_CMD_BLOCK_CFLUSH<<24|HOST_TID<<12|d->tid, msg+4);
+ __raw_writel(i2ob_context|(unit<<8), msg+8);
+ __raw_writel(0, msg+12);
+ __raw_writel(60<<16, msg+16);
+
+ i2o_post_message(c,m);
+ return 0;
+}
+
+/*
+ * OSM reply handler. This gets all the message replies
+ */
+
+static void i2o_block_reply(struct i2o_handler *h, struct i2o_controller *c, struct i2o_message *msg)
+{
+ unsigned long flags;
+ struct i2ob_request *ireq = NULL;
+ u8 st;
+ u32 *m = (u32 *)msg;
+ u8 unit = (m[2]>>8)&0xF0; /* low 4 bits are partition */
+ struct i2ob_device *dev = &i2ob_dev[(unit&0xF0)];
+
+ /*
+ * FAILed message
+ */
+ if(m[0] & (1<<13))
+ {
+ /*
+ * FAILed message from controller
+ * We increment the error count and abort it
+ *
+ * In theory this will never happen. The I2O block class
+ * specification states that block devices never return
+ * FAILs but instead use the REQ status field...but
+ * better be on the safe side since no one really follows
+ * the spec to the book :)
+ */
+ ireq=&i2ob_queues[c->unit]->request_queue[m[3]];
+ ireq->req->errors++;
+
+ spin_lock_irqsave(&io_request_lock, flags);
+ i2ob_unhook_request(ireq, c->unit);
+ i2ob_end_request(ireq->req);
+ spin_unlock_irqrestore(&io_request_lock, flags);
+
+ /* Now flush the message by making it a NOP */
+ m[0]&=0x00FFFFFF;
+ m[0]|=(I2O_CMD_UTIL_NOP)<<24;
+ i2o_post_message(c,virt_to_bus(m));
+
+ return;
+ }
+
+ if(msg->function == I2O_CMD_UTIL_EVT_REGISTER)
+ {
+ spin_lock(&i2ob_evt_lock);
+ memcpy(evt_msg, msg, (m[0]>>16)<<2);
+ spin_unlock(&i2ob_evt_lock);
+ up(&i2ob_evt_sem);
+ return;
+ }
+
+ if(msg->function == I2O_CMD_BLOCK_CFLUSH)
+ {
+ spin_lock_irqsave(&io_request_lock, flags);
+ dev->constipated=0;
+ DEBUG(("unconstipated\n"));
+ if(i2ob_backlog_request(c, dev)==0)
+ i2ob_request(dev->req_queue);
+ spin_unlock_irqrestore(&io_request_lock, flags);
+ return;
+ }
+
+ if(!dev->i2odev)
+ {
+ /*
+ * This is a HACK, but Intel Integrated RAID allows a user
+ * to delete a volume that is claimed, locked, and in use
+ * by the OS. We have to check for a reply from a
+ * non-existent device and flag it as an error or the system
+ * goes kaput...
+ */
+ ireq=&i2ob_queues[c->unit]->request_queue[m[3]];
+ ireq->req->errors++;
+ printk(KERN_WARNING "I2O Block: Data transfer to deleted device!\n");
+ spin_lock_irqsave(&io_request_lock, flags);
+ i2ob_unhook_request(ireq, c->unit);
+ i2ob_end_request(ireq->req);
+ spin_unlock_irqrestore(&io_request_lock, flags);
+ return;
+ }
+
+ /*
+ * Let's see what is cooking. We stuffed the
+ * request in the context.
+ */
+
+ ireq=&i2ob_queues[c->unit]->request_queue[m[3]];
+ st=m[4]>>24;
+
+ if(st!=0)
+ {
+ int err;
+ char *bsa_errors[] =
+ {
+ "Success",
+ "Media Error",
+ "Failure communicating to device",
+ "Device Failure",
+ "Device is not ready",
+ "Media not present",
+ "Media is locked by another user",
+ "Media has failed",
+ "Failure communicating to device",
+ "Device bus failure",
+ "Device is locked by another user",
+ "Device is write protected",
+ "Device has reset",
+ "Volume has changed, waiting for acknowledgement"
+ };
+
+ err = m[4]&0xFFFF;
+
+ /*
+ * Device not ready means two things. One is that the
+ * device went offline (but is not removable media).
+ *
+ * The second is that you have a SuperTrak 100 and the
+ * firmware got constipated. Unlike standard I2O card
+ * setups, the SuperTrak returns an error rather than
+ * blocking for the timeout in these cases.
+ */
+
+
+ spin_lock_irqsave(&io_request_lock, flags);
+ if(err==4)
+ {
+ /*
+ * Time to uncork stuff
+ */
+
+ if(!dev->constipated)
+ {
+ dev->constipated = 1;
+ DEBUG(("constipated\n"));
+ /* Now pull the chain */
+ if(i2ob_flush(c, dev, unit)<0)
+ {
+ DEBUG(("i2ob: Unable to queue flush. Retrying I/O immediately.\n"));
+ dev->constipated=0;
+ }
+ DEBUG(("flushing\n"));
+ }
+
+ /*
+ * Recycle the request
+ */
+
+// i2ob_unhook_request(ireq, c->unit);
+
+ /*
+ * Place it on the recycle queue
+ */
+
+ ireq->next = NULL;
+ if(i2ob_backlog_tail[c->unit]!=NULL)
+ i2ob_backlog_tail[c->unit]->next = ireq;
+ else
+ i2ob_backlog[c->unit] = ireq;
+ i2ob_backlog_tail[c->unit] = ireq;
+
+ atomic_dec(&i2ob_queues[c->unit]->queue_depth);
+
+ /*
+ * If the constipator flush failed we want to
+ * poke the queue again.
+ */
+
+ i2ob_request(dev->req_queue);
+ spin_unlock_irqrestore(&io_request_lock, flags);
+
+ /*
+ * and out
+ */
+
+ return;
+ }
+ spin_unlock_irqrestore(&io_request_lock, flags);
+ printk(KERN_ERR "\n/dev/%s error: %s", dev->i2odev->dev_name,
+ bsa_errors[m[4]&0XFFFF]);
+ if(m[4]&0x00FF0000)
+ printk(" - DDM attempted %d retries", (m[4]>>16)&0x00FF );
+ printk(".\n");
+ ireq->req->errors++;
+ }
+ else
+ ireq->req->errors = 0;
+
+ /*
+ * Dequeue the request. We use irqsave locks as one day we
+ * may be running polled controllers from a BH...
+ */
+
+ spin_lock_irqsave(&io_request_lock, flags);
+ i2ob_unhook_request(ireq, c->unit);
+ i2ob_end_request(ireq->req);
+ atomic_dec(&i2ob_queues[c->unit]->queue_depth);
+
+ /*
+ * We may be able to do more I/O
+ */
+
+ if(i2ob_backlog_request(c, dev)==0)
+ i2ob_request(dev->req_queue);
+
+ spin_unlock_irqrestore(&io_request_lock, flags);
+}
+
+/*
+ * Event handler. Needs to be a separate thread because we may have
+ * to do things like scan a partition table, or query parameters
+ * which cannot be done from an interrupt or from a bottom half.
+ */
+static int i2ob_evt(void *dummy)
+{
+ unsigned int evt;
+ unsigned long flags;
+ int unit;
+ int i;
+ //The only event that has data is the SCSI_SMART event.
+ struct i2o_reply {
+ u32 header[4];
+ u32 evt_indicator;
+ u8 ASC;
+ u8 ASCQ;
+ u8 data[16];
+ } *evt_local;
+
+ lock_kernel();
+ daemonize();
+ unlock_kernel();
+
+ strcpy(current->comm, "i2oblock");
+ evt_running = 1;
+
+ while(1)
+ {
+ if(down_interruptible(&i2ob_evt_sem))
+ {
+ evt_running = 0;
+ printk(KERN_INFO "i2ob: event thread exiting\n");
+ break;
+ }
+
+ /*
+ * Keep another CPU/interrupt from overwriting the
+ * message while we're reading it
+ *
+ * We stuffed the unit in the TxContext and grab the event mask
+ * None of the BSA events we care about have EventData
+ */
+ spin_lock_irqsave(&i2ob_evt_lock, flags);
+ evt_local = (struct i2o_reply *)evt_msg;
+ spin_unlock_irqrestore(&i2ob_evt_lock, flags);
+
+ unit = evt_local->header[3];
+ evt = evt_local->evt_indicator;
+
+ switch(evt)
+ {
+ /*
+ * New volume loaded on same TID, so we just re-install.
+ * The TID/controller don't change as it is the same
+ * I2O device. It's just new media that we have to
+ * rescan.
+ */
+ case I2O_EVT_IND_BSA_VOLUME_LOAD:
+ {
+ i2ob_install_device(i2ob_dev[unit].i2odev->controller,
+ i2ob_dev[unit].i2odev, unit);
+ break;
+ }
+
+ /*
+ * No media, so set all parameters to 0 and set the media
+ * change flag. The I2O device is still valid, just doesn't
+ * have media, so we don't want to clear the controller or
+ * device pointer.
+ */
+ case I2O_EVT_IND_BSA_VOLUME_UNLOAD:
+ {
+ for(i = unit; i <= unit+15; i++)
+ {
+ i2ob_sizes[i] = 0;
+ i2ob_hardsizes[i] = 0;
+ i2ob_max_sectors[i] = 0;
+ i2ob[i].nr_sects = 0;
+ i2ob_gendisk.part[i].nr_sects = 0;
+ }
+ i2ob_media_change_flag[unit] = 1;
+ break;
+ }
+
+ case I2O_EVT_IND_BSA_VOLUME_UNLOAD_REQ:
+ printk(KERN_WARNING "%s: Attempt to eject locked media\n",
+ i2ob_dev[unit].i2odev->dev_name);
+ break;
+
+ /*
+ * The capacity has changed and we are going to be
+ * updating the max_sectors and other information
+ * about this disk. We try a revalidate first. If
+ * the block device is in use, we don't want to
+ * do that as there may be I/Os bound for the disk
+ * at the moment. In that case we read the size
+ * from the device and update the information ourselves
+ * and the user can later force a partition table
+ * update through an ioctl.
+ */
+ case I2O_EVT_IND_BSA_CAPACITY_CHANGE:
+ {
+ u64 size;
+
+ if(do_i2ob_revalidate(MKDEV(MAJOR_NR, unit),0) != -EBUSY)
+ continue;
+
+ if(i2ob_query_device(&i2ob_dev[unit], 0x0004, 0, &size, 8) !=0 )
+ i2ob_query_device(&i2ob_dev[unit], 0x0000, 4, &size, 8);
+
+ spin_lock_irqsave(&io_request_lock, flags);
+ i2ob_sizes[unit] = (int)(size>>10);
+ i2ob_gendisk.part[unit].nr_sects = size>>9;
+ i2ob[unit].nr_sects = (int)(size>>9);
+ spin_unlock_irqrestore(&io_request_lock, flags);
+ break;
+ }
+
+ /*
+ * We got a SCSI SMART event, we just log the relevant
+ * information and let the user decide what they want
+ * to do with the information.
+ */
+ case I2O_EVT_IND_BSA_SCSI_SMART:
+ {
+ char buf[16];
+ printk(KERN_INFO "I2O Block: %s received a SCSI SMART Event\n",i2ob_dev[unit].i2odev->dev_name);
+ evt_local->data[15]='\0'; /* data[] is 16 bytes; terminate in-bounds */
+ sprintf(buf,"%s",&evt_local->data[0]);
+ printk(KERN_INFO " Disk Serial#:%s\n",buf);
+ printk(KERN_INFO " ASC 0x%02x \n",evt_local->ASC);
+ printk(KERN_INFO " ASCQ 0x%02x \n",evt_local->ASCQ);
+ break;
+ }
+
+ /*
+ * Non event
+ */
+
+ case 0:
+ break;
+
+ /*
+ * An event we didn't ask for. Call the card manufacturer
+ * and tell them to fix their firmware :)
+ */
+ default:
+ printk(KERN_INFO "%s: Received event %d we didn't register for\n"
+ KERN_INFO " Blame the I2O card manufacturer 8)\n",
+ i2ob_dev[unit].i2odev->dev_name, evt);
+ break;
+ }
+ };
+
+ complete_and_exit(&i2ob_thread_dead,0);
+ return 0;
+}
+
+/*
+ * The timer handler will attempt to restart requests
+ * that are queued to the driver. This handler
+ * currently only gets called if the controller
+ * had no more room in its inbound fifo.
+ */
+
+static void i2ob_timer_handler(unsigned long q)
+{
+ unsigned long flags;
+
+ /*
+ * We cannot touch the request queue or the timer
+ * flag without holding the io_request_lock.
+ */
+ spin_lock_irqsave(&io_request_lock,flags);
+
+ /*
+ * Clear the timer started flag so that
+ * the timer can be queued again.
+ */
+ i2ob_timer_started = 0;
+
+ /*
+ * Restart any requests.
+ */
+ i2ob_request((request_queue_t*)q);
+
+ /*
+ * Free the lock.
+ */
+ spin_unlock_irqrestore(&io_request_lock,flags);
+}
+
+static int i2ob_backlog_request(struct i2o_controller *c, struct i2ob_device *dev)
+{
+ u32 m;
+ struct i2ob_request *ireq;
+
+ while((ireq=i2ob_backlog[c->unit])!=NULL)
+ {
+ int unit;
+
+ if(atomic_read(&i2ob_queues[c->unit]->queue_depth) > dev->depth/4)
+ break;
+
+ m = i2ob_get(dev);
+ if(m == 0xFFFFFFFF)
+ break;
+
+ i2ob_backlog[c->unit] = ireq->next;
+ if(i2ob_backlog[c->unit] == NULL)
+ i2ob_backlog_tail[c->unit] = NULL;
+
+ unit = MINOR(ireq->req->rq_dev);
+ i2ob_send(m, dev, ireq, i2ob[unit].start_sect, unit);
+ }
+ if(i2ob_backlog[c->unit])
+ return 1;
+ return 0;
+}
+
+/*
+ * The I2O block driver is listed as one of those that pulls the
+ * front entry off the queue before processing it. This is important
+ * to remember here. If we drop the io lock then CURRENT will change
+ * on us. We must unlink CURRENT in this routine before we return, if
+ * we use it.
+ */
+
+static void i2ob_request(request_queue_t *q)
+{
+ struct request *req;
+ struct i2ob_request *ireq;
+ int unit;
+ struct i2ob_device *dev;
+ u32 m;
+
+
+ while (!list_empty(&q->queue_head)) {
+ /*
+ * On an IRQ completion, if there is an inactive
+ * request on the queue head it means it isn't yet
+ * ready to dispatch.
+ */
+ req = blkdev_entry_next_request(&q->queue_head);
+
+ if(req->rq_status == RQ_INACTIVE)
+ return;
+
+ unit = MINOR(req->rq_dev);
+ dev = &i2ob_dev[(unit&0xF0)];
+
+ /*
+ * Queue depths probably belong with some kind of
+ * generic IOP commit control. Certainly it's not right
+ * that it's global!
+ */
+ if(atomic_read(&i2ob_queues[dev->unit]->queue_depth) >= dev->depth)
+ break;
+
+ /*
+ * Is the channel constipated ?
+ */
+
+ if(i2ob_backlog[dev->unit]!=NULL)
+ break;
+
+ /* Get a message */
+ m = i2ob_get(dev);
+
+ if(m==0xFFFFFFFF)
+ {
+ /*
+ * See if the timer has already been queued.
+ */
+ if (!i2ob_timer_started)
+ {
+ DEBUG((KERN_ERR "i2ob: starting timer\n"));
+
+ /*
+ * Set the timer_started flag to ensure
+ * that the timer is only queued once.
+ * Queuing it more than once will corrupt
+ * the timer queue.
+ */
+ i2ob_timer_started = 1;
+
+ /*
+ * Set up the timer to expire in
+ * 500ms.
+ */
+ i2ob_timer.expires = jiffies + (HZ >> 1);
+ i2ob_timer.data = (unsigned long)q;
+
+ /*
+ * Start it.
+ */
+
+ add_timer(&i2ob_timer);
+ return;
+ }
+ }
+
+ /*
+ * Everything ok, so pull from kernel queue onto our queue
+ */
+ req->errors = 0;
+ blkdev_dequeue_request(req);
+ req->waiting = NULL;
+
+ ireq = i2ob_queues[dev->unit]->i2ob_qhead;
+ i2ob_queues[dev->unit]->i2ob_qhead = ireq->next;
+ ireq->req = req;
+
+ i2ob_send(m, dev, ireq, i2ob[unit].start_sect, (unit&0xF0));
+ }
+}
+
+
+/*
+ * SCSI-CAM for ioctl geometry mapping
+ * Duplicated with SCSI - this should be moved into somewhere common
+ * perhaps genhd ?
+ *
+ * LBA -> CHS mapping table taken from:
+ *
+ * "Incorporating the I2O Architecture into BIOS for Intel Architecture
+ * Platforms"
+ *
+ * This is an I2O document that is only available to I2O members,
+ * not developers.
+ *
+ * From my understanding, this is how all the I2O cards do this
+ *
+ * Disk Size | Sectors | Heads | Cylinders
+ * ---------------+---------+-------+-------------------
+ * 1 < X <= 528M | 63 | 16 | X/(63 * 16 * 512)
+ * 528M < X <= 1G | 63 | 32 | X/(63 * 32 * 512)
+ * 1G < X <= 21G | 63 | 64 | X/(63 * 64 * 512)
+ * 21G < X <= 42G | 63 | 128 | X/(63 * 128 * 512)
+ * 42G < X | 63 | 255 | X/(63 * 255 * 512)
+ *
+ */
+#define BLOCK_SIZE_528M 1081344
+#define BLOCK_SIZE_1G 2097152
+#define BLOCK_SIZE_21G 4403200
+#define BLOCK_SIZE_42G 8806400
+#define BLOCK_SIZE_84G 17612800
+
+static void i2o_block_biosparam(
+ unsigned long capacity,
+ unsigned short *cyls,
+ unsigned char *hds,
+ unsigned char *secs)
+{
+ unsigned long heads, sectors, cylinders;
+
+ sectors = 63L; /* Maximize sectors per track */
+ if(capacity <= BLOCK_SIZE_528M)
+ heads = 16;
+ else if(capacity <= BLOCK_SIZE_1G)
+ heads = 32;
+ else if(capacity <= BLOCK_SIZE_21G)
+ heads = 64;
+ else if(capacity <= BLOCK_SIZE_42G)
+ heads = 128;
+ else
+ heads = 255;
+
+ cylinders = capacity / (heads * sectors);
+
+ *cyls = (unsigned short) cylinders; /* Stuff return values */
+ *secs = (unsigned char) sectors;
+ *hds = (unsigned char) heads;
+}
+
+
+/*
+ * Rescan the partition tables
+ */
+
+static int do_i2ob_revalidate(kdev_t dev, int maxu)
+{
+ int minor=MINOR(dev);
+ int i;
+
+ minor&=0xF0;
+
+ i2ob_dev[minor].refcnt++;
+ if(i2ob_dev[minor].refcnt>maxu+1)
+ {
+ i2ob_dev[minor].refcnt--;
+ return -EBUSY;
+ }
+
+ for( i = 15; i>=0 ; i--)
+ {
+ int m = minor+i;
+ invalidate_device(MKDEV(MAJOR_NR, m), 1);
+ i2ob_gendisk.part[m].start_sect = 0;
+ i2ob_gendisk.part[m].nr_sects = 0;
+ }
+
+ /*
+ * Do a physical check and then reconfigure
+ */
+
+ i2ob_install_device(i2ob_dev[minor].controller, i2ob_dev[minor].i2odev,
+ minor);
+ i2ob_dev[minor].refcnt--;
+ return 0;
+}
+
+/*
+ * Issue device specific ioctl calls.
+ */
+
+static int i2ob_ioctl(struct inode *inode, struct file *file,
+ unsigned int cmd, unsigned long arg)
+{
+ struct i2ob_device *dev;
+ int minor;
+
+ /* Anyone capable of this syscall can do *real bad* things */
+
+ if (!capable(CAP_SYS_ADMIN))
+ return -EPERM;
+ if (!inode)
+ return -EINVAL;
+ minor = MINOR(inode->i_rdev);
+ if (minor >= (MAX_I2OB<<4))
+ return -ENODEV;
+
+ dev = &i2ob_dev[minor];
+ switch (cmd) {
+ case BLKGETSIZE:
+ return put_user(i2ob[minor].nr_sects, (unsigned long *) arg);
+ case BLKGETSIZE64:
+ return put_user((u64)i2ob[minor].nr_sects << 9, (u64 *)arg);
+
+ case HDIO_GETGEO:
+ {
+ struct hd_geometry g;
+ int u=minor&0xF0;
+ i2o_block_biosparam(i2ob_sizes[u]<<1,
+ &g.cylinders, &g.heads, &g.sectors);
+ g.start = i2ob[minor].start_sect;
+ return copy_to_user((void *)arg,&g, sizeof(g))?-EFAULT:0;
+ }
+
+ case BLKRRPART:
+ if(!capable(CAP_SYS_ADMIN))
+ return -EACCES;
+ return do_i2ob_revalidate(inode->i_rdev,1);
+
+ case BLKFLSBUF:
+ case BLKROSET:
+ case BLKROGET:
+ case BLKRASET:
+ case BLKRAGET:
+ case BLKPG:
+ return blk_ioctl(inode->i_rdev, cmd, arg);
+
+ default:
+ return -EINVAL;
+ }
+}
+
+/*
+ * Close the block device down
+ */
+
+static int i2ob_release(struct inode *inode, struct file *file)
+{
+ struct i2ob_device *dev;
+ int minor;
+
+ minor = MINOR(inode->i_rdev);
+ if (minor >= (MAX_I2OB<<4))
+ return -ENODEV;
+ dev = &i2ob_dev[(minor&0xF0)];
+
+ /*
+ * This is to deal with the case of an application
+ * opening a device and then the device disappearing while
+ * it's in use, and then the application trying to release
+ * it. ex: Unmounting a deleted RAID volume at reboot.
+ * If we send messages, it will just cause FAILs since
+ * the TID no longer exists.
+ */
+ if(!dev->i2odev)
+ return 0;
+
+ if (dev->refcnt <= 0)
+ printk(KERN_ALERT "i2ob_release: refcount(%d) <= 0\n", dev->refcnt);
+ dev->refcnt--;
+ if(dev->refcnt==0)
+ {
+ /*
+ * Flush the onboard cache on unmount
+ */
+ u32 msg[5];
+ int *query_done = &dev->done_flag;
+ msg[0] = FIVE_WORD_MSG_SIZE|SGL_OFFSET_0;
+ msg[1] = I2O_CMD_BLOCK_CFLUSH<<24|HOST_TID<<12|dev->tid;
+ msg[2] = i2ob_context|0x40000000;
+ msg[3] = (u32)query_done;
+ msg[4] = 60<<16;
+ DEBUG("Flushing...");
+ i2o_post_wait(dev->controller, msg, 20, 60);
+
+ /*
+ * Unlock the media
+ */
+ msg[0] = FIVE_WORD_MSG_SIZE|SGL_OFFSET_0;
+ msg[1] = I2O_CMD_BLOCK_MUNLOCK<<24|HOST_TID<<12|dev->tid;
+ msg[2] = i2ob_context|0x40000000;
+ msg[3] = (u32)query_done;
+ msg[4] = -1;
+ DEBUG("Unlocking...");
+ i2o_post_wait(dev->controller, msg, 20, 2);
+ DEBUG("Unlocked.\n");
+
+ /*
+ * Now unclaim the device.
+ */
+
+ if (i2o_release_device(dev->i2odev, &i2o_block_handler))
+ printk(KERN_ERR "i2ob_release: controller rejected unclaim.\n");
+
+ DEBUG("Unclaim\n");
+ }
+ MOD_DEC_USE_COUNT;
+ return 0;
+}
+
+/*
+ * Open the block device.
+ */
+
+static int i2ob_open(struct inode *inode, struct file *file)
+{
+ int minor;
+ struct i2ob_device *dev;
+
+ if (!inode)
+ return -EINVAL;
+ minor = MINOR(inode->i_rdev);
+ if (minor >= MAX_I2OB<<4)
+ return -ENODEV;
+ dev=&i2ob_dev[(minor&0xF0)];
+
+ if(!dev->i2odev)
+ return -ENODEV;
+
+ if(dev->refcnt++==0)
+ {
+ u32 msg[6];
+
+ DEBUG("Claim ");
+ if(i2o_claim_device(dev->i2odev, &i2o_block_handler))
+ {
+ dev->refcnt--;
+ printk(KERN_INFO "I2O Block: Could not open device\n");
+ return -EBUSY;
+ }
+ DEBUG("Claimed ");
+
+ /*
+ * Mount the media if needed. Note that we don't use
+ * the lock bit. Since we have to issue a lock if it
+ * refuses a mount (quite possible) then we might as
+ * well just send two messages out.
+ */
+ msg[0] = FIVE_WORD_MSG_SIZE|SGL_OFFSET_0;
+ msg[1] = I2O_CMD_BLOCK_MMOUNT<<24|HOST_TID<<12|dev->tid;
+ msg[4] = -1;
+ msg[5] = 0;
+ DEBUG("Mount ");
+ i2o_post_wait(dev->controller, msg, 24, 2);
+
+ /*
+ * Lock the media
+ */
+ msg[0] = FIVE_WORD_MSG_SIZE|SGL_OFFSET_0;
+ msg[1] = I2O_CMD_BLOCK_MLOCK<<24|HOST_TID<<12|dev->tid;
+ msg[4] = -1;
+ DEBUG("Lock ");
+ i2o_post_wait(dev->controller, msg, 20, 2);
+ DEBUG("Ready.\n");
+ }
+ MOD_INC_USE_COUNT;
+ return 0;
+}
+
+/*
+ * Issue a device query
+ */
+
+static int i2ob_query_device(struct i2ob_device *dev, int table,
+ int field, void *buf, int buflen)
+{
+ return i2o_query_scalar(dev->controller, dev->tid,
+ table, field, buf, buflen);
+}
+
+
+/*
+ * Install the I2O block device we found.
+ */
+
+static int i2ob_install_device(struct i2o_controller *c, struct i2o_device *d, int unit)
+{
+ u64 size;
+ u32 blocksize;
+ u32 limit;
+ u8 type;
+ u32 flags, status;
+ struct i2ob_device *dev=&i2ob_dev[unit];
+ int i;
+
+ /*
+ * For logging purposes...
+ */
+ printk(KERN_INFO "i2ob: Installing tid %d device at unit %d\n",
+ d->lct_data.tid, unit);
+
+ /*
+ * Ask for the current media data. If that isn't supported
+ * then we ask for the device capacity data
+ */
+ if(i2ob_query_device(dev, 0x0004, 1, &blocksize, 4) != 0
+ || i2ob_query_device(dev, 0x0004, 0, &size, 8) !=0 )
+ {
+ i2ob_query_device(dev, 0x0000, 3, &blocksize, 4);
+ i2ob_query_device(dev, 0x0000, 4, &size, 8);
+ }
+
+ i2ob_query_device(dev, 0x0000, 5, &flags, 4);
+ i2ob_query_device(dev, 0x0000, 6, &status, 4);
+ i2ob_sizes[unit] = (int)(size>>10);
+ for(i=unit; i <= unit+15 ; i++)
+ i2ob_hardsizes[i] = blocksize;
+ i2ob_gendisk.part[unit].nr_sects = size>>9;
+ i2ob[unit].nr_sects = (int)(size>>9);
+
+ /* Set limit based on inbound frame size */
+ limit = (d->controller->status_block->inbound_frame_size - 8)/2;
+ limit = limit<<9;
+
+ /*
+ * Max number of Scatter-Gather Elements
+ */
+
+ for(i=unit;i<=unit+15;i++)
+ {
+ if(d->controller->type == I2O_TYPE_PCI && d->controller->bus.pci.queue_buggy)
+ {
+ i2ob_max_sectors[i] = 32;
+ i2ob_dev[i].max_segments = 8;
+ i2ob_dev[i].depth = 4;
+ }
+ else if(d->controller->type == I2O_TYPE_PCI && d->controller->bus.pci.short_req)
+ {
+ i2ob_max_sectors[i] = 8;
+ i2ob_dev[i].max_segments = 8;
+ }
+ else
+ {
+ /* MAX_SECTORS was used but 255 is a dumb number for
+ striped RAID */
+ i2ob_max_sectors[i]=256;
+ i2ob_dev[i].max_segments = (d->controller->status_block->inbound_frame_size - 8)/2;
+ }
+ }
+
+ printk(KERN_INFO "Max segments set to %d\n",
+ i2ob_dev[unit].max_segments);
+ printk(KERN_INFO "Byte limit is %d.\n", limit);
+
+ i2ob_query_device(dev, 0x0000, 0, &type, 1);
+
+ sprintf(d->dev_name, "%s%c", i2ob_gendisk.major_name, 'a' + (unit>>4));
+
+ printk(KERN_INFO "%s: ", d->dev_name);
+ switch(type)
+ {
+ case 0: printk("Disk Storage");break;
+ case 4: printk("WORM");break;
+ case 5: printk("CD-ROM");break;
+ case 7: printk("Optical device");break;
+ default:
+ printk("Type %d", type);
+ }
+ if(status&(1<<10))
+ printk("(RAID)");
+ if(((flags & (1<<3)) && !(status & (1<<3))) ||
+ ((flags & (1<<4)) && !(status & (1<<4))))
+ {
+ printk(KERN_INFO " Not loaded.\n");
+ return 1;
+ }
+ printk("- %dMb, %d byte sectors",
+ (int)(size>>20), blocksize);
+ if(status&(1<<0))
+ {
+ u32 cachesize;
+ i2ob_query_device(dev, 0x0003, 0, &cachesize, 4);
+ cachesize>>=10;
+ if(cachesize>4095)
+ printk(", %dMb cache", cachesize>>10);
+ else
+ printk(", %dKb cache", cachesize);
+
+ }
+ printk(".\n");
+ printk(KERN_INFO "%s: Maximum sectors/read set to %d.\n",
+ d->dev_name, i2ob_max_sectors[unit]);
+
+ /*
+ * If this is the first I2O block device found on this IOP,
+ * we need to initialize all the queue data structures
+ * before any I/O can be performed. If it fails, this
+ * device is useless.
+ */
+ if(!i2ob_queues[c->unit]) {
+ if(i2ob_init_iop(c->unit))
+ return 1;
+ }
+
+	/*
+	 * This will save one level of lookup/indirection in critical
+	 * code so that we can directly get the queue ptr from the
+	 * device instead of having to go through the IOP data structure.
+	 */
+ dev->req_queue = &i2ob_queues[c->unit]->req_queue;
+
+ grok_partitions(&i2ob_gendisk, unit>>4, 1<<4, (long)(size>>9));
+
+ /*
+ * Register for the events we're interested in and that the
+ * device actually supports.
+ */
+ i2o_event_register(c, d->lct_data.tid, i2ob_context, unit,
+ (I2OB_EVENT_MASK & d->lct_data.event_capabilities));
+
+ return 0;
+}
+
+/*
+ * Initialize IOP specific queue structures. This is called
+ * once for each IOP that has a block device sitting behind it.
+ */
+static int i2ob_init_iop(unsigned int unit)
+{
+ int i;
+
+ i2ob_queues[unit] = (struct i2ob_iop_queue*)
+ kmalloc(sizeof(struct i2ob_iop_queue), GFP_ATOMIC);
+ if(!i2ob_queues[unit])
+ {
+ printk(KERN_WARNING
+ "Could not allocate request queue for I2O block device!\n");
+ return -1;
+ }
+
+ for(i = 0; i< MAX_I2OB_DEPTH; i++)
+ {
+ i2ob_queues[unit]->request_queue[i].next =
+ &i2ob_queues[unit]->request_queue[i+1];
+ i2ob_queues[unit]->request_queue[i].num = i;
+ }
+
+	/* Queue has MAX_I2OB_DEPTH + 1 entries... */
+ i2ob_queues[unit]->request_queue[i].next = NULL;
+ i2ob_queues[unit]->i2ob_qhead = &i2ob_queues[unit]->request_queue[0];
+ atomic_set(&i2ob_queues[unit]->queue_depth, 0);
+
+ blk_init_queue(&i2ob_queues[unit]->req_queue, i2ob_request);
+ blk_queue_headactive(&i2ob_queues[unit]->req_queue, 0);
+ i2ob_queues[unit]->req_queue.back_merge_fn = i2ob_back_merge;
+ i2ob_queues[unit]->req_queue.front_merge_fn = i2ob_front_merge;
+ i2ob_queues[unit]->req_queue.merge_requests_fn = i2ob_merge_requests;
+ i2ob_queues[unit]->req_queue.queuedata = &i2ob_queues[unit];
+
+ return 0;
+}
+
+/*
+ * Get the request queue for the given device.
+ */
+static request_queue_t* i2ob_get_queue(kdev_t dev)
+{
+ int unit = MINOR(dev)&0xF0;
+
+ return i2ob_dev[unit].req_queue;
+}
+
+/*
+ * Probe the I2O subsystem for block class devices
+ */
+static void i2ob_scan(int bios)
+{
+ int i;
+ int warned = 0;
+
+ struct i2o_device *d, *b=NULL;
+ struct i2o_controller *c;
+ struct i2ob_device *dev;
+
+ for(i=0; i< MAX_I2O_CONTROLLERS; i++)
+ {
+ c=i2o_find_controller(i);
+
+ if(c==NULL)
+ continue;
+
+		/*
+		 * The device list connected to the I2O controller is doubly linked.
+		 * Here we traverse to the end of the list and start claiming devices
+		 * from that end. This assures that within an I2O controller at least
+		 * the newly created volumes get claimed after the older ones, thus
+		 * mapping to the same major/minor (and hence device file name) after
+		 * every reboot.
+		 * The exceptions being:
+		 * 1. If there was a TID reuse.
+		 * 2. There was more than one I2O controller.
+		 */
+
+ if(!bios)
+ {
+ for (d=c->devices;d!=NULL;d=d->next)
+ if(d->next == NULL)
+ b = d;
+ }
+ else
+ b = c->devices;
+
+ while(b != NULL)
+ {
+ d=b;
+ if(bios)
+ b = b->next;
+ else
+ b = b->prev;
+
+ if(d->lct_data.class_id!=I2O_CLASS_RANDOM_BLOCK_STORAGE)
+ continue;
+
+ if(d->lct_data.user_tid != 0xFFF)
+ continue;
+
+ if(bios)
+ {
+ if(d->lct_data.bios_info != 0x80)
+ continue;
+ printk(KERN_INFO "Claiming as Boot device: Controller %d, TID %d\n", c->unit, d->lct_data.tid);
+ }
+ else
+ {
+ if(d->lct_data.bios_info == 0x80)
+ continue; /*Already claimed on pass 1 */
+ }
+
+ if(i2o_claim_device(d, &i2o_block_handler))
+ {
+ printk(KERN_WARNING "i2o_block: Controller %d, TID %d\n", c->unit,
+ d->lct_data.tid);
+ printk(KERN_WARNING "\t%sevice refused claim! Skipping installation\n", bios?"Boot d":"D");
+ continue;
+ }
+
+ if(scan_unit<MAX_I2OB<<4)
+ {
+ /*
+ * Get the device and fill in the
+ * Tid and controller.
+ */
+ dev=&i2ob_dev[scan_unit];
+ dev->i2odev = d;
+ dev->controller = c;
+ dev->unit = c->unit;
+ dev->tid = d->lct_data.tid;
+
+ if(i2ob_install_device(c,d,scan_unit))
+ printk(KERN_WARNING "Could not install I2O block device\n");
+ else
+ {
+ scan_unit+=16;
+ i2ob_dev_count++;
+
+ /* We want to know when device goes away */
+ i2o_device_notify_on(d, &i2o_block_handler);
+ }
+ }
+ else
+ {
+ if(!warned++)
+					printk(KERN_WARNING "i2o_block: too many devices, registering only %d.\n", scan_unit>>4);
+ }
+ i2o_release_device(d, &i2o_block_handler);
+ }
+ i2o_unlock_controller(c);
+ }
+}
+
+static void i2ob_probe(void)
+{
+	/*
+	 * Some overhead/redundancy is involved here, while trying to
+	 * claim the first boot volume encountered as /dev/i2o/hda
+	 * every time. All the i2o_controllers are searched and the
+	 * first I2O block device marked as bootable is claimed.
+	 * If an I2O block device was booted off, the BIOS sets
+	 * its bios_info field to 0x80; this is what we search for.
+	 * Assuming that the bootable volume is /dev/i2o/hda
+	 * every time will prevent any kernel panic while mounting
+	 * the root partition.
+	 */
+
+ printk(KERN_INFO "i2o_block: Checking for Boot device...\n");
+ i2ob_scan(1);
+
+ /*
+ * Now the remainder.
+ */
+ printk(KERN_INFO "i2o_block: Checking for I2O Block devices...\n");
+ i2ob_scan(0);
+}
+
+
+/*
+ * New device notification handler. Called whenever a new
+ * I2O block storage device is added to the system.
+ *
+ * Should we spin lock around this to keep multiple devs from
+ * getting updated at the same time?
+ *
+ */
+void i2ob_new_device(struct i2o_controller *c, struct i2o_device *d)
+{
+ struct i2ob_device *dev;
+ int unit = 0;
+
+ printk(KERN_INFO "i2o_block: New device detected\n");
+ printk(KERN_INFO " Controller %d Tid %d\n",c->unit, d->lct_data.tid);
+
+ /* Check for available space */
+ if(i2ob_dev_count>=MAX_I2OB<<4)
+ {
+ printk(KERN_ERR "i2o_block: No more devices allowed!\n");
+ return;
+ }
+ for(unit = 0; unit < (MAX_I2OB<<4); unit += 16)
+ {
+ if(!i2ob_dev[unit].i2odev)
+ break;
+ }
+
+ if(i2o_claim_device(d, &i2o_block_handler))
+ {
+ printk(KERN_INFO
+ "i2o_block: Unable to claim device. Installation aborted\n");
+ return;
+ }
+
+ dev = &i2ob_dev[unit];
+ dev->i2odev = d;
+ dev->controller = c;
+ dev->tid = d->lct_data.tid;
+
+ if(i2ob_install_device(c,d,unit))
+ printk(KERN_ERR "i2o_block: Could not install new device\n");
+ else
+ {
+ i2ob_dev_count++;
+ i2o_device_notify_on(d, &i2o_block_handler);
+ }
+
+ i2o_release_device(d, &i2o_block_handler);
+
+ return;
+}
+
+/*
+ * Deleted device notification handler. Called when a device we
+ * are talking to has been deleted by the user or some other
+ * mysterious force outside the kernel.
+ */
+void i2ob_del_device(struct i2o_controller *c, struct i2o_device *d)
+{
+ int unit = 0;
+ int i = 0;
+ unsigned long flags;
+
+ spin_lock_irqsave(&io_request_lock, flags);
+
+ /*
+	 * Need to do this...we sometimes get two events from the IRTOS
+ * in a row and that causes lots of problems.
+ */
+ i2o_device_notify_off(d, &i2o_block_handler);
+
+ printk(KERN_INFO "I2O Block Device Deleted\n");
+
+ for(unit = 0; unit < MAX_I2OB<<4; unit += 16)
+ {
+ if(i2ob_dev[unit].i2odev == d)
+ {
+ printk(KERN_INFO " /dev/%s: Controller %d Tid %d\n",
+ d->dev_name, c->unit, d->lct_data.tid);
+ break;
+ }
+ }
+ if(unit >= MAX_I2OB<<4)
+ {
+ printk(KERN_ERR "i2ob_del_device called, but not in dev table!\n");
+ spin_unlock_irqrestore(&io_request_lock, flags);
+ return;
+ }
+
+ /*
+ * This will force errors when i2ob_get_queue() is called
+	 * by the kernel.
+ */
+ i2ob_dev[unit].req_queue = NULL;
+ for(i = unit; i <= unit+15; i++)
+ {
+ i2ob_dev[i].i2odev = NULL;
+ i2ob_sizes[i] = 0;
+ i2ob_hardsizes[i] = 0;
+ i2ob_max_sectors[i] = 0;
+ i2ob[i].nr_sects = 0;
+ i2ob_gendisk.part[i].nr_sects = 0;
+ }
+ spin_unlock_irqrestore(&io_request_lock, flags);
+
+ /*
+ * Sync the device...this will force all outstanding I/Os
+ * to attempt to complete, thus causing error messages.
+	 * We have to do this as the user could immediately create
+ * a new volume that gets assigned the same minor number.
+ * If there are still outstanding writes to the device,
+ * that could cause data corruption on the new volume!
+ *
+ * The truth is that deleting a volume that you are currently
+ * accessing will do _bad things_ to your system. This
+	 * handler will keep it from crashing, but most probably
+	 * you'll have to do a 'reboot' to get the system running
+	 * properly. Deleting disks you are using is dumb.
+	 * Unmount them first and all will be good!
+ *
+ * It's not this driver's job to protect the system from
+ * dumb user mistakes :)
+ */
+ if(i2ob_dev[unit].refcnt)
+ fsync_dev(MKDEV(MAJOR_NR,unit));
+
+ /*
+ * Decrease usage count for module
+ */
+ while(i2ob_dev[unit].refcnt--)
+ MOD_DEC_USE_COUNT;
+
+ i2ob_dev[unit].refcnt = 0;
+
+	i2ob_dev[unit].tid = 0;
+
+ /*
+ * Do we need this?
+ * The media didn't really change...the device is just gone
+ */
+ i2ob_media_change_flag[unit] = 1;
+
+ i2ob_dev_count--;
+}
+
+/*
+ * Have we seen a media change?
+ */
+static int i2ob_media_change(kdev_t dev)
+{
+ int i=MINOR(dev);
+ i>>=4;
+ if(i2ob_media_change_flag[i])
+ {
+ i2ob_media_change_flag[i]=0;
+ return 1;
+ }
+ return 0;
+}
+
+static int i2ob_revalidate(kdev_t dev)
+{
+ return do_i2ob_revalidate(dev, 0);
+}
+
+/*
+ * Reboot notifier. This is called by i2o_core when the system
+ * shuts down.
+ */
+static void i2ob_reboot_event(void)
+{
+ int i;
+
+ for(i=0;i<MAX_I2OB;i++)
+ {
+ struct i2ob_device *dev=&i2ob_dev[(i<<4)];
+
+ if(dev->refcnt!=0)
+ {
+ /*
+ * Flush the onboard cache
+ */
+ u32 msg[5];
+ int *query_done = &dev->done_flag;
+ msg[0] = FIVE_WORD_MSG_SIZE|SGL_OFFSET_0;
+ msg[1] = I2O_CMD_BLOCK_CFLUSH<<24|HOST_TID<<12|dev->tid;
+ msg[2] = i2ob_context|0x40000000;
+ msg[3] = (u32)query_done;
+ msg[4] = 60<<16;
+
+ DEBUG("Flushing...");
+ i2o_post_wait(dev->controller, msg, 20, 60);
+
+ DEBUG("Unlocking...");
+ /*
+ * Unlock the media
+ */
+ msg[0] = FIVE_WORD_MSG_SIZE|SGL_OFFSET_0;
+ msg[1] = I2O_CMD_BLOCK_MUNLOCK<<24|HOST_TID<<12|dev->tid;
+ msg[2] = i2ob_context|0x40000000;
+ msg[3] = (u32)query_done;
+ msg[4] = -1;
+ i2o_post_wait(dev->controller, msg, 20, 2);
+
+ DEBUG("Unlocked.\n");
+ }
+ }
+}
+
+static struct block_device_operations i2ob_fops =
+{
+ open: i2ob_open,
+ release: i2ob_release,
+ ioctl: i2ob_ioctl,
+ check_media_change: i2ob_media_change,
+ revalidate: i2ob_revalidate,
+};
+
+static struct gendisk i2ob_gendisk =
+{
+ major: MAJOR_NR,
+ major_name: "i2o/hd",
+ minor_shift: 4,
+ max_p: 1<<4,
+ part: i2ob,
+ sizes: i2ob_sizes,
+ nr_real: MAX_I2OB,
+ fops: &i2ob_fops,
+};
+
+
+/*
+ * And here should be modules and kernel interface
+ * (Just smiley confuses emacs :-)
+ */
+
+#ifdef MODULE
+#define i2o_block_init init_module
+#endif
+
+int i2o_block_init(void)
+{
+ int i;
+
+ printk(KERN_INFO "I2O Block Storage OSM v0.9\n");
+ printk(KERN_INFO " (c) Copyright 1999-2001 Red Hat Software.\n");
+
+ /*
+ * Register the block device interfaces
+ */
+
+ if (register_blkdev(MAJOR_NR, "i2o_block", &i2ob_fops)) {
+ printk(KERN_ERR "Unable to get major number %d for i2o_block\n",
+ MAJOR_NR);
+ return -EIO;
+ }
+#ifdef MODULE
+ printk(KERN_INFO "i2o_block: registered device at major %d\n", MAJOR_NR);
+#endif
+
+ /*
+ * Now fill in the boiler plate
+ */
+
+ blksize_size[MAJOR_NR] = i2ob_blksizes;
+ hardsect_size[MAJOR_NR] = i2ob_hardsizes;
+ blk_size[MAJOR_NR] = i2ob_sizes;
+ max_sectors[MAJOR_NR] = i2ob_max_sectors;
+ blk_dev[MAJOR_NR].queue = i2ob_get_queue;
+
+ blk_init_queue(BLK_DEFAULT_QUEUE(MAJOR_NR), i2ob_request);
+ blk_queue_headactive(BLK_DEFAULT_QUEUE(MAJOR_NR), 0);
+
+ for (i = 0; i < MAX_I2OB << 4; i++) {
+ i2ob_dev[i].refcnt = 0;
+ i2ob_dev[i].flags = 0;
+ i2ob_dev[i].controller = NULL;
+ i2ob_dev[i].i2odev = NULL;
+ i2ob_dev[i].tid = 0;
+ i2ob_dev[i].head = NULL;
+ i2ob_dev[i].tail = NULL;
+ i2ob_dev[i].depth = MAX_I2OB_DEPTH;
+ i2ob_blksizes[i] = 1024;
+ i2ob_max_sectors[i] = 2;
+ }
+
+ /*
+ * Set up the queue
+ */
+ for(i = 0; i < MAX_I2O_CONTROLLERS; i++)
+ {
+ i2ob_queues[i] = NULL;
+ }
+
+ /*
+ * Timers
+ */
+
+ init_timer(&i2ob_timer);
+ i2ob_timer.function = i2ob_timer_handler;
+ i2ob_timer.data = 0;
+
+ /*
+ * Register the OSM handler as we will need this to probe for
+ * drives, geometry and other goodies.
+ */
+
+ if(i2o_install_handler(&i2o_block_handler)<0)
+ {
+ unregister_blkdev(MAJOR_NR, "i2o_block");
+ blk_cleanup_queue(BLK_DEFAULT_QUEUE(MAJOR_NR));
+ printk(KERN_ERR "i2o_block: unable to register OSM.\n");
+ return -EINVAL;
+ }
+ i2ob_context = i2o_block_handler.context;
+
+ /*
+ * Initialize event handling thread
+ */
+ init_MUTEX_LOCKED(&i2ob_evt_sem);
+ evt_pid = kernel_thread(i2ob_evt, NULL, CLONE_SIGHAND);
+ if(evt_pid < 0)
+ {
+ printk(KERN_ERR
+ "i2o_block: Could not initialize event thread. Aborting\n");
+ i2o_remove_handler(&i2o_block_handler);
+ return 0;
+ }
+
+ /*
+ * Finally see what is actually plugged in to our controllers
+ */
+ for (i = 0; i < MAX_I2OB; i++)
+ register_disk(&i2ob_gendisk, MKDEV(MAJOR_NR,i<<4), 1<<4,
+ &i2ob_fops, 0);
+ i2ob_probe();
+
+ /*
+ * Adding i2ob_gendisk into the gendisk list.
+ */
+ add_gendisk(&i2ob_gendisk);
+
+ return 0;
+}
+
+#ifdef MODULE
+
+EXPORT_NO_SYMBOLS;
+MODULE_AUTHOR("Red Hat Software");
+MODULE_DESCRIPTION("I2O Block Device OSM");
+
+void cleanup_module(void)
+{
+ struct gendisk *gdp;
+ int i;
+
+ if(evt_running) {
+ printk(KERN_INFO "Killing I2O block threads...");
+ i = kill_proc(evt_pid, SIGTERM, 1);
+ if(!i) {
+ printk("waiting...");
+ }
+ /* Be sure it died */
+ wait_for_completion(&i2ob_thread_dead);
+ printk("done.\n");
+ }
+
+ /*
+	 * Unregister for updates from any devices...otherwise we still
+ * get them and the core jumps to random memory :O
+ */
+ if(i2ob_dev_count) {
+ struct i2o_device *d;
+ for(i = 0; i < MAX_I2OB; i++)
+ if((d=i2ob_dev[i<<4].i2odev)) {
+ i2o_device_notify_off(d, &i2o_block_handler);
+ i2o_event_register(d->controller, d->lct_data.tid,
+ i2ob_context, i<<4, 0);
+ }
+ }
+
+	/*
+	 * We may get further callbacks for ourselves. The i2o_core
+	 * code handles this case reasonably sanely. The problem here
+	 * is we shouldn't get them... but a couple of cards feel
+	 * obliged to tell us stuff we don't care about.
+	 *
+	 * This isn't ideal at all but will do for now.
+	 */
+
+ set_current_state(TASK_UNINTERRUPTIBLE);
+ schedule_timeout(HZ);
+
+ /*
+ * Flush the OSM
+ */
+
+ i2o_remove_handler(&i2o_block_handler);
+
+ /*
+ * Return the block device
+ */
+ if (unregister_blkdev(MAJOR_NR, "i2o_block") != 0)
+ printk("i2o_block: cleanup_module failed\n");
+
+ /*
+ * free request queue
+ */
+ blk_cleanup_queue(BLK_DEFAULT_QUEUE(MAJOR_NR));
+
+ del_gendisk(&i2ob_gendisk);
+}
+#endif
--- /dev/null
+/*
+ * I2O Configuration Interface Driver
+ *
+ * (C) Copyright 1999 Red Hat Software
+ *
+ * Written by Alan Cox, Building Number Three Ltd
+ *
+ * Modified 04/20/1999 by Deepak Saxena
+ * - Added basic ioctl() support
+ * Modified 06/07/1999 by Deepak Saxena
+ * - Added software download ioctl (still testing)
+ * Modified 09/10/1999 by Auvo Häkkinen
+ * - Changes to i2o_cfg_reply(), ioctl_parms()
+ * - Added ioct_validate()
+ * Modified 09/30/1999 by Taneli Vähäkangas
+ * - Fixed ioctl_swdl()
+ * Modified 10/04/1999 by Taneli Vähäkangas
+ * - Changed ioctl_swdl(), implemented ioctl_swul() and ioctl_swdel()
+ * Modified 11/18/1999 by Deepak Saxena
+ * - Added event management support
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/i2o.h>
+#include <linux/errno.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/miscdevice.h>
+#include <linux/mm.h>
+#include <linux/spinlock.h>
+#include <linux/smp_lock.h>
+
+#include <asm/uaccess.h>
+#include <asm/io.h>
+
+static int i2o_cfg_context = -1;
+static void *page_buf;
+static spinlock_t i2o_config_lock = SPIN_LOCK_UNLOCKED;
+struct wait_queue *i2o_wait_queue;
+
+/* Increment x modulo y; the old form (x = x++ % y) invoked undefined behavior */
+#define MODINC(x,y) ((x) = ((x) + 1) % (y))
+
+struct i2o_cfg_info
+{
+ struct file* fp;
+ struct fasync_struct *fasync;
+ struct i2o_evt_info event_q[I2O_EVT_Q_LEN];
+ u16 q_in; // Queue head index
+ u16 q_out; // Queue tail index
+ u16 q_len; // Queue length
+ u16 q_lost; // Number of lost events
+ u32 q_id; // Event queue ID...used as tx_context
+ struct i2o_cfg_info *next;
+};
+static struct i2o_cfg_info *open_files = NULL;
+static int i2o_cfg_info_id = 0;
+
+static int ioctl_getiops(unsigned long);
+static int ioctl_gethrt(unsigned long);
+static int ioctl_getlct(unsigned long);
+static int ioctl_parms(unsigned long, unsigned int);
+static int ioctl_html(unsigned long);
+static int ioctl_swdl(unsigned long);
+static int ioctl_swul(unsigned long);
+static int ioctl_swdel(unsigned long);
+static int ioctl_validate(unsigned long);
+static int ioctl_evt_reg(unsigned long, struct file *);
+static int ioctl_evt_get(unsigned long, struct file *);
+static int cfg_fasync(int, struct file*, int);
+
+/*
+ * This is the callback for any message we have posted. The message itself
+ * will be returned to the message pool when we return from the IRQ
+ *
+ * This runs in irq context so be short and sweet.
+ */
+static void i2o_cfg_reply(struct i2o_handler *h, struct i2o_controller *c, struct i2o_message *m)
+{
+ u32 *msg = (u32 *)m;
+
+ if (msg[0] & MSG_FAIL) {
+ u32 *preserved_msg = (u32*)(c->mem_offset + msg[7]);
+
+ printk(KERN_ERR "i2o_config: IOP failed to process the msg.\n");
+
+ /* Release the preserved msg frame by resubmitting it as a NOP */
+
+ preserved_msg[0] = THREE_WORD_MSG_SIZE | SGL_OFFSET_0;
+ preserved_msg[1] = I2O_CMD_UTIL_NOP << 24 | HOST_TID << 12 | 0;
+ preserved_msg[2] = 0;
+ i2o_post_message(c, msg[7]);
+ }
+
+ if (msg[4] >> 24) // ReqStatus != SUCCESS
+ i2o_report_status(KERN_INFO,"i2o_config", msg);
+
+ if(m->function == I2O_CMD_UTIL_EVT_REGISTER)
+ {
+ struct i2o_cfg_info *inf;
+
+ for(inf = open_files; inf; inf = inf->next)
+ if(inf->q_id == msg[3])
+ break;
+
+ //
+ // If this is the case, it means that we're getting
+ // events for a file descriptor that's been close()'d
+ // w/o the user unregistering for events first.
+ // The code currently assumes that the user will
+ // take care of unregistering for events before closing
+ // a file.
+ //
+ // TODO:
+		// Should we track event registration and deregister
+ // for events when a file is close()'d so this doesn't
+ // happen? That would get rid of the search through
+ // the linked list since file->private_data could point
+ // directly to the i2o_config_info data structure...but
+ // it would mean having all sorts of tables to track
+ // what each file is registered for...I think the
+ // current method is simpler. - DS
+ //
+ if(!inf)
+ return;
+
+ inf->event_q[inf->q_in].id.iop = c->unit;
+ inf->event_q[inf->q_in].id.tid = m->target_tid;
+ inf->event_q[inf->q_in].id.evt_mask = msg[4];
+
+ //
+ // Data size = msg size - reply header
+ //
+ inf->event_q[inf->q_in].data_size = (m->size - 5) * 4;
+ if(inf->event_q[inf->q_in].data_size)
+ memcpy(inf->event_q[inf->q_in].evt_data,
+ (unsigned char *)(msg + 5),
+ inf->event_q[inf->q_in].data_size);
+
+ spin_lock(&i2o_config_lock);
+ MODINC(inf->q_in, I2O_EVT_Q_LEN);
+ if(inf->q_len == I2O_EVT_Q_LEN)
+ {
+ MODINC(inf->q_out, I2O_EVT_Q_LEN);
+ inf->q_lost++;
+ }
+ else
+ {
+ // Keep I2OEVTGET on another CPU from touching this
+ inf->q_len++;
+ }
+ spin_unlock(&i2o_config_lock);
+
+
+// printk(KERN_INFO "File %p w/id %d has %d events\n",
+// inf->fp, inf->q_id, inf->q_len);
+
+ kill_fasync(&inf->fasync, SIGIO, POLL_IN);
+ }
+
+ return;
+}
+
+/*
+ * Each of these describes an i2o message handler. They are
+ * multiplexed by the i2o_core code
+ */
+
+struct i2o_handler cfg_handler=
+{
+ i2o_cfg_reply,
+ NULL,
+ NULL,
+ NULL,
+ "Configuration",
+ 0,
+ 0xffffffff // All classes
+};
+
+static ssize_t cfg_write(struct file *file, const char *buf, size_t count, loff_t *ppos)
+{
+ printk(KERN_INFO "i2o_config write not yet supported\n");
+
+ return 0;
+}
+
+
+static ssize_t cfg_read(struct file *file, char *buf, size_t count, loff_t *ptr)
+{
+ return 0;
+}
+
+/*
+ * IOCTL Handler
+ */
+static int cfg_ioctl(struct inode *inode, struct file *fp, unsigned int cmd,
+ unsigned long arg)
+{
+ int ret;
+
+ switch(cmd)
+ {
+ case I2OGETIOPS:
+ ret = ioctl_getiops(arg);
+ break;
+
+ case I2OHRTGET:
+ ret = ioctl_gethrt(arg);
+ break;
+
+ case I2OLCTGET:
+ ret = ioctl_getlct(arg);
+ break;
+
+ case I2OPARMSET:
+ ret = ioctl_parms(arg, I2OPARMSET);
+ break;
+
+ case I2OPARMGET:
+ ret = ioctl_parms(arg, I2OPARMGET);
+ break;
+
+ case I2OSWDL:
+ ret = ioctl_swdl(arg);
+ break;
+
+ case I2OSWUL:
+ ret = ioctl_swul(arg);
+ break;
+
+ case I2OSWDEL:
+ ret = ioctl_swdel(arg);
+ break;
+
+ case I2OVALIDATE:
+ ret = ioctl_validate(arg);
+ break;
+
+ case I2OHTML:
+ ret = ioctl_html(arg);
+ break;
+
+ case I2OEVTREG:
+ ret = ioctl_evt_reg(arg, fp);
+ break;
+
+ case I2OEVTGET:
+ ret = ioctl_evt_get(arg, fp);
+ break;
+
+ default:
+ ret = -EINVAL;
+ }
+
+ return ret;
+}
+
+int ioctl_getiops(unsigned long arg)
+{
+ u8 *user_iop_table = (u8*)arg;
+ struct i2o_controller *c = NULL;
+ int i;
+ u8 foo[MAX_I2O_CONTROLLERS];
+
+ if(!access_ok(VERIFY_WRITE, user_iop_table, MAX_I2O_CONTROLLERS))
+ return -EFAULT;
+
+ for(i = 0; i < MAX_I2O_CONTROLLERS; i++)
+ {
+ c = i2o_find_controller(i);
+ if(c)
+ {
+ foo[i] = 1;
+ i2o_unlock_controller(c);
+ }
+ else
+ {
+ foo[i] = 0;
+ }
+ }
+
+ __copy_to_user(user_iop_table, foo, MAX_I2O_CONTROLLERS);
+ return 0;
+}
+
+int ioctl_gethrt(unsigned long arg)
+{
+ struct i2o_controller *c;
+ struct i2o_cmd_hrtlct *cmd = (struct i2o_cmd_hrtlct*)arg;
+ struct i2o_cmd_hrtlct kcmd;
+ i2o_hrt *hrt;
+ int len;
+ u32 reslen;
+ int ret = 0;
+
+ if(copy_from_user(&kcmd, cmd, sizeof(struct i2o_cmd_hrtlct)))
+ return -EFAULT;
+
+ if(get_user(reslen, kcmd.reslen) < 0)
+ return -EFAULT;
+
+ if(kcmd.resbuf == NULL)
+ return -EFAULT;
+
+ c = i2o_find_controller(kcmd.iop);
+ if(!c)
+ return -ENXIO;
+
+ hrt = (i2o_hrt *)c->hrt;
+
+ i2o_unlock_controller(c);
+
+ len = 8 + ((hrt->entry_len * hrt->num_entries) << 2);
+
+ /* We did a get user...so assuming mem is ok...is this bad? */
+ put_user(len, kcmd.reslen);
+ if(len > reslen)
+ ret = -ENOBUFS;
+ if(copy_to_user(kcmd.resbuf, (void*)hrt, len))
+ ret = -EFAULT;
+
+ return ret;
+}
+
+int ioctl_getlct(unsigned long arg)
+{
+ struct i2o_controller *c;
+ struct i2o_cmd_hrtlct *cmd = (struct i2o_cmd_hrtlct*)arg;
+ struct i2o_cmd_hrtlct kcmd;
+ i2o_lct *lct;
+ int len;
+ int ret = 0;
+ u32 reslen;
+
+ if(copy_from_user(&kcmd, cmd, sizeof(struct i2o_cmd_hrtlct)))
+ return -EFAULT;
+
+ if(get_user(reslen, kcmd.reslen) < 0)
+ return -EFAULT;
+
+ if(kcmd.resbuf == NULL)
+ return -EFAULT;
+
+ c = i2o_find_controller(kcmd.iop);
+ if(!c)
+ return -ENXIO;
+
+ lct = (i2o_lct *)c->lct;
+ i2o_unlock_controller(c);
+
+ len = (unsigned int)lct->table_size << 2;
+ put_user(len, kcmd.reslen);
+ if(len > reslen)
+ ret = -ENOBUFS;
+ else if(copy_to_user(kcmd.resbuf, (void*)lct, len))
+ ret = -EFAULT;
+
+ return ret;
+}
+
+static int ioctl_parms(unsigned long arg, unsigned int type)
+{
+ int ret = 0;
+ struct i2o_controller *c;
+ struct i2o_cmd_psetget *cmd = (struct i2o_cmd_psetget*)arg;
+ struct i2o_cmd_psetget kcmd;
+ u32 reslen;
+ u8 *ops;
+ u8 *res;
+ int len;
+
+ u32 i2o_cmd = (type == I2OPARMGET ?
+ I2O_CMD_UTIL_PARAMS_GET :
+ I2O_CMD_UTIL_PARAMS_SET);
+
+ if(copy_from_user(&kcmd, cmd, sizeof(struct i2o_cmd_psetget)))
+ return -EFAULT;
+
+ if(get_user(reslen, kcmd.reslen))
+ return -EFAULT;
+
+ c = i2o_find_controller(kcmd.iop);
+ if(!c)
+ return -ENXIO;
+
+ ops = (u8*)kmalloc(kcmd.oplen, GFP_KERNEL);
+ if(!ops)
+ {
+ i2o_unlock_controller(c);
+ return -ENOMEM;
+ }
+
+ if(copy_from_user(ops, kcmd.opbuf, kcmd.oplen))
+ {
+ i2o_unlock_controller(c);
+ kfree(ops);
+ return -EFAULT;
+ }
+
+ /*
+ * It's possible to have a _very_ large table
+ * and that the user asks for all of it at once...
+ */
+ res = (u8*)kmalloc(65536, GFP_KERNEL);
+ if(!res)
+ {
+ i2o_unlock_controller(c);
+ kfree(ops);
+ return -ENOMEM;
+ }
+
+ len = i2o_issue_params(i2o_cmd, c, kcmd.tid,
+ ops, kcmd.oplen, res, 65536);
+ i2o_unlock_controller(c);
+ kfree(ops);
+
+ if (len < 0) {
+ kfree(res);
+ return -EAGAIN;
+ }
+
+ put_user(len, kcmd.reslen);
+ if(len > reslen)
+ ret = -ENOBUFS;
+ else if(copy_to_user(cmd->resbuf, res, len))
+ ret = -EFAULT;
+
+ kfree(res);
+
+ return ret;
+}
+
+int ioctl_html(unsigned long arg)
+{
+ struct i2o_html *cmd = (struct i2o_html*)arg;
+ struct i2o_html kcmd;
+ struct i2o_controller *c;
+ u8 *res = NULL;
+ void *query = NULL;
+ int ret = 0;
+ int token;
+ u32 len;
+ u32 reslen;
+ u32 msg[MSG_FRAME_SIZE/4];
+
+ if(copy_from_user(&kcmd, cmd, sizeof(struct i2o_html)))
+ {
+ printk(KERN_INFO "i2o_config: can't copy html cmd\n");
+ return -EFAULT;
+ }
+
+ if(get_user(reslen, kcmd.reslen) < 0)
+ {
+ printk(KERN_INFO "i2o_config: can't copy html reslen\n");
+ return -EFAULT;
+ }
+
+ if(!kcmd.resbuf)
+ {
+ printk(KERN_INFO "i2o_config: NULL html buffer\n");
+ return -EFAULT;
+ }
+
+ c = i2o_find_controller(kcmd.iop);
+ if(!c)
+ return -ENXIO;
+
+ if(kcmd.qlen) /* Check for post data */
+ {
+ query = kmalloc(kcmd.qlen, GFP_KERNEL);
+ if(!query)
+ {
+ i2o_unlock_controller(c);
+ return -ENOMEM;
+ }
+ if(copy_from_user(query, kcmd.qbuf, kcmd.qlen))
+ {
+ i2o_unlock_controller(c);
+ printk(KERN_INFO "i2o_config: could not get query\n");
+ kfree(query);
+ return -EFAULT;
+ }
+ }
+
+ res = kmalloc(65536, GFP_KERNEL);
+ if(!res)
+ {
+ i2o_unlock_controller(c);
+ kfree(query);
+ return -ENOMEM;
+ }
+
+ msg[1] = (I2O_CMD_UTIL_CONFIG_DIALOG << 24)|HOST_TID<<12|kcmd.tid;
+ msg[2] = i2o_cfg_context;
+ msg[3] = 0;
+ msg[4] = kcmd.page;
+ msg[5] = 0xD0000000|65536;
+ msg[6] = virt_to_bus(res);
+ if(!kcmd.qlen) /* Check for post data */
+ msg[0] = SEVEN_WORD_MSG_SIZE|SGL_OFFSET_5;
+ else
+ {
+ msg[0] = NINE_WORD_MSG_SIZE|SGL_OFFSET_5;
+ msg[5] = 0x50000000|65536;
+ msg[7] = 0xD4000000|(kcmd.qlen);
+ msg[8] = virt_to_bus(query);
+ }
+ /*
+ Wait for a considerable time till the Controller
+ does its job before timing out. The controller might
+ take more time to process this request if there are
+ many devices connected to it.
+ */
+ token = i2o_post_wait_mem(c, msg, 9*4, 400, query, res);
+ if(token < 0)
+ {
+ printk(KERN_DEBUG "token = %#10x\n", token);
+ i2o_unlock_controller(c);
+
+ if(token != -ETIMEDOUT)
+ {
+ kfree(res);
+ if(kcmd.qlen) kfree(query);
+ }
+
+ return token;
+ }
+ i2o_unlock_controller(c);
+
+ len = strnlen(res, 65536);
+ put_user(len, kcmd.reslen);
+ if(len > reslen)
+ ret = -ENOMEM;
+ if(copy_to_user(kcmd.resbuf, res, len))
+ ret = -EFAULT;
+
+ kfree(res);
+ if(kcmd.qlen)
+ kfree(query);
+
+ return ret;
+}
+
+int ioctl_swdl(unsigned long arg)
+{
+ struct i2o_sw_xfer kxfer;
+ struct i2o_sw_xfer *pxfer = (struct i2o_sw_xfer *)arg;
+ unsigned char maxfrag = 0, curfrag = 1;
+ unsigned char *buffer;
+ u32 msg[9];
+ unsigned int status = 0, swlen = 0, fragsize = 8192;
+ struct i2o_controller *c;
+
+ if(copy_from_user(&kxfer, pxfer, sizeof(struct i2o_sw_xfer)))
+ return -EFAULT;
+
+ if(get_user(swlen, kxfer.swlen) < 0)
+ return -EFAULT;
+
+ if(get_user(maxfrag, kxfer.maxfrag) < 0)
+ return -EFAULT;
+
+ if(get_user(curfrag, kxfer.curfrag) < 0)
+ return -EFAULT;
+
+ if(curfrag==maxfrag) fragsize = swlen-(maxfrag-1)*8192;
+
+ if(!kxfer.buf || !access_ok(VERIFY_READ, kxfer.buf, fragsize))
+ return -EFAULT;
+
+ c = i2o_find_controller(kxfer.iop);
+ if(!c)
+ return -ENXIO;
+
+ buffer=kmalloc(fragsize, GFP_KERNEL);
+ if (buffer==NULL)
+ {
+ i2o_unlock_controller(c);
+ return -ENOMEM;
+ }
+ __copy_from_user(buffer, kxfer.buf, fragsize);
+
+ msg[0]= NINE_WORD_MSG_SIZE | SGL_OFFSET_7;
+ msg[1]= I2O_CMD_SW_DOWNLOAD<<24 | HOST_TID<<12 | ADAPTER_TID;
+ msg[2]= (u32)cfg_handler.context;
+ msg[3]= 0;
+ msg[4]= (((u32)kxfer.flags)<<24) | (((u32)kxfer.sw_type)<<16) |
+ (((u32)maxfrag)<<8) | (((u32)curfrag));
+ msg[5]= swlen;
+ msg[6]= kxfer.sw_id;
+ msg[7]= (0xD0000000 | fragsize);
+ msg[8]= virt_to_bus(buffer);
+
+// printk("i2o_config: swdl frag %d/%d (size %d)\n", curfrag, maxfrag, fragsize);
+ status = i2o_post_wait_mem(c, msg, sizeof(msg), 60, buffer, NULL);
+
+ i2o_unlock_controller(c);
+ if(status != -ETIMEDOUT)
+ kfree(buffer);
+
+ if (status != I2O_POST_WAIT_OK)
+ {
+		// It fails if you try to send frags out of order
+		// and for some as yet unknown reasons too
+ printk(KERN_INFO "i2o_config: swdl failed, DetailedStatus = %d\n", status);
+ return status;
+ }
+
+ return 0;
+}
+
+int ioctl_swul(unsigned long arg)
+{
+ struct i2o_sw_xfer kxfer;
+ struct i2o_sw_xfer *pxfer = (struct i2o_sw_xfer *)arg;
+ unsigned char maxfrag = 0, curfrag = 1;
+ unsigned char *buffer;
+ u32 msg[9];
+ unsigned int status = 0, swlen = 0, fragsize = 8192;
+ struct i2o_controller *c;
+
+ if(copy_from_user(&kxfer, pxfer, sizeof(struct i2o_sw_xfer)))
+ return -EFAULT;
+
+ if(get_user(swlen, kxfer.swlen) < 0)
+ return -EFAULT;
+
+ if(get_user(maxfrag, kxfer.maxfrag) < 0)
+ return -EFAULT;
+
+ if(get_user(curfrag, kxfer.curfrag) < 0)
+ return -EFAULT;
+
+ if(curfrag==maxfrag) fragsize = swlen-(maxfrag-1)*8192;
+
+ if(!kxfer.buf || !access_ok(VERIFY_WRITE, kxfer.buf, fragsize))
+ return -EFAULT;
+
+ c = i2o_find_controller(kxfer.iop);
+ if(!c)
+ return -ENXIO;
+
+ buffer=kmalloc(fragsize, GFP_KERNEL);
+ if (buffer==NULL)
+ {
+ i2o_unlock_controller(c);
+ return -ENOMEM;
+ }
+
+ msg[0]= NINE_WORD_MSG_SIZE | SGL_OFFSET_7;
+ msg[1]= I2O_CMD_SW_UPLOAD<<24 | HOST_TID<<12 | ADAPTER_TID;
+ msg[2]= (u32)cfg_handler.context;
+ msg[3]= 0;
+ msg[4]= (u32)kxfer.flags<<24|(u32)kxfer.sw_type<<16|(u32)maxfrag<<8|(u32)curfrag;
+ msg[5]= swlen;
+ msg[6]= kxfer.sw_id;
+ msg[7]= (0xD0000000 | fragsize);
+ msg[8]= virt_to_bus(buffer);
+
+// printk("i2o_config: swul frag %d/%d (size %d)\n", curfrag, maxfrag, fragsize);
+ status = i2o_post_wait_mem(c, msg, sizeof(msg), 60, buffer, NULL);
+ i2o_unlock_controller(c);
+
+ if (status != I2O_POST_WAIT_OK)
+ {
+ if(status != -ETIMEDOUT)
+ kfree(buffer);
+ printk(KERN_INFO "i2o_config: swul failed, DetailedStatus = %d\n", status);
+ return status;
+ }
+
+ __copy_to_user(kxfer.buf, buffer, fragsize);
+ kfree(buffer);
+
+ return 0;
+}
+
+int ioctl_swdel(unsigned long arg)
+{
+ struct i2o_controller *c;
+ struct i2o_sw_xfer kxfer, *pxfer = (struct i2o_sw_xfer *)arg;
+ u32 msg[7];
+ unsigned int swlen;
+ int token;
+
+ if (copy_from_user(&kxfer, pxfer, sizeof(struct i2o_sw_xfer)))
+ return -EFAULT;
+
+ if (get_user(swlen, kxfer.swlen) < 0)
+ return -EFAULT;
+
+ c = i2o_find_controller(kxfer.iop);
+ if (!c)
+ return -ENXIO;
+
+ msg[0] = SEVEN_WORD_MSG_SIZE | SGL_OFFSET_0;
+ msg[1] = I2O_CMD_SW_REMOVE<<24 | HOST_TID<<12 | ADAPTER_TID;
+ msg[2] = (u32)i2o_cfg_context;
+ msg[3] = 0;
+ msg[4] = (u32)kxfer.flags<<24 | (u32)kxfer.sw_type<<16;
+ msg[5] = swlen;
+ msg[6] = kxfer.sw_id;
+
+ token = i2o_post_wait(c, msg, sizeof(msg), 10);
+ i2o_unlock_controller(c);
+
+ if (token != I2O_POST_WAIT_OK)
+ {
+ printk(KERN_INFO "i2o_config: swdel failed, DetailedStatus = %d\n", token);
+ return -ETIMEDOUT;
+ }
+
+ return 0;
+}
+
+int ioctl_validate(unsigned long arg)
+{
+ int token;
+ int iop = (int)arg;
+ u32 msg[4];
+ struct i2o_controller *c;
+
+ c=i2o_find_controller(iop);
+ if (!c)
+ return -ENXIO;
+
+ msg[0] = FOUR_WORD_MSG_SIZE|SGL_OFFSET_0;
+ msg[1] = I2O_CMD_CONFIG_VALIDATE<<24 | HOST_TID<<12 | iop;
+ msg[2] = (u32)i2o_cfg_context;
+ msg[3] = 0;
+
+ token = i2o_post_wait(c, msg, sizeof(msg), 10);
+ i2o_unlock_controller(c);
+
+ if (token != I2O_POST_WAIT_OK)
+ {
+ printk(KERN_INFO "Can't validate configuration, ErrorStatus = %d\n",
+ token);
+ return -ETIMEDOUT;
+ }
+
+ return 0;
+}
+
+static int ioctl_evt_reg(unsigned long arg, struct file *fp)
+{
+ u32 msg[5];
+ struct i2o_evt_id *pdesc = (struct i2o_evt_id *)arg;
+ struct i2o_evt_id kdesc;
+ struct i2o_controller *iop;
+ struct i2o_device *d;
+
+ if (copy_from_user(&kdesc, pdesc, sizeof(struct i2o_evt_id)))
+ return -EFAULT;
+
+ /* IOP exists? */
+ iop = i2o_find_controller(kdesc.iop);
+ if(!iop)
+ return -ENXIO;
+ i2o_unlock_controller(iop);
+
+ /* Device exists? */
+ for(d = iop->devices; d; d = d->next)
+ if(d->lct_data.tid == kdesc.tid)
+ break;
+
+ if(!d)
+ return -ENODEV;
+
+ msg[0] = FOUR_WORD_MSG_SIZE|SGL_OFFSET_0;
+ msg[1] = I2O_CMD_UTIL_EVT_REGISTER<<24 | HOST_TID<<12 | kdesc.tid;
+ msg[2] = (u32)i2o_cfg_context;
+ msg[3] = (u32)fp->private_data;
+ msg[4] = kdesc.evt_mask;
+
+ i2o_post_this(iop, msg, 20);
+
+ return 0;
+}
+
+static int ioctl_evt_get(unsigned long arg, struct file *fp)
+{
+ u32 id = (u32)fp->private_data;
+ struct i2o_cfg_info *p = NULL;
+ struct i2o_evt_get *uget = (struct i2o_evt_get*)arg;
+ struct i2o_evt_get kget;
+ unsigned long flags;
+
+ for(p = open_files; p; p = p->next)
+ if(p->q_id == id)
+ break;
+
+ if(!p)
+ return -EBADF;
+
+ if(!p->q_len)
+ return -ENOENT;
+
+ memcpy(&kget.info, &p->event_q[p->q_out], sizeof(struct i2o_evt_info));
+ MODINC(p->q_out, I2O_EVT_Q_LEN);
+ spin_lock_irqsave(&i2o_config_lock, flags);
+ p->q_len--;
+ kget.pending = p->q_len;
+ kget.lost = p->q_lost;
+ spin_unlock_irqrestore(&i2o_config_lock, flags);
+
+ if(copy_to_user(uget, &kget, sizeof(struct i2o_evt_get)))
+ return -EFAULT;
+ return 0;
+}
+
+static int cfg_open(struct inode *inode, struct file *file)
+{
+ struct i2o_cfg_info *tmp =
+ (struct i2o_cfg_info *)kmalloc(sizeof(struct i2o_cfg_info), GFP_KERNEL);
+ unsigned long flags;
+
+ if(!tmp)
+ return -ENOMEM;
+
+ file->private_data = (void*)(i2o_cfg_info_id++);
+ tmp->fp = file;
+ tmp->fasync = NULL;
+ tmp->q_id = (u32)file->private_data;
+ tmp->q_len = 0;
+ tmp->q_in = 0;
+ tmp->q_out = 0;
+ tmp->q_lost = 0;
+ tmp->next = open_files;
+
+ spin_lock_irqsave(&i2o_config_lock, flags);
+ open_files = tmp;
+ spin_unlock_irqrestore(&i2o_config_lock, flags);
+
+ return 0;
+}
+
+static int cfg_release(struct inode *inode, struct file *file)
+{
+ u32 id = (u32)file->private_data;
+ struct i2o_cfg_info *p1, *p2;
+ unsigned long flags;
+
+ lock_kernel();
+ p1 = p2 = NULL;
+
+ spin_lock_irqsave(&i2o_config_lock, flags);
+ for(p1 = open_files; p1; )
+ {
+ if(p1->q_id == id)
+ {
+
+ if(p1->fasync)
+ cfg_fasync(-1, file, 0);
+ if(p2)
+ p2->next = p1->next;
+ else
+ open_files = p1->next;
+
+ kfree(p1);
+ break;
+ }
+ p2 = p1;
+ p1 = p1->next;
+ }
+ spin_unlock_irqrestore(&i2o_config_lock, flags);
+ unlock_kernel();
+
+ return 0;
+}
+
+static int cfg_fasync(int fd, struct file *fp, int on)
+{
+ u32 id = (u32)fp->private_data;
+ struct i2o_cfg_info *p;
+
+ for(p = open_files; p; p = p->next)
+ if(p->q_id == id)
+ break;
+
+ if(!p)
+ return -EBADF;
+
+ return fasync_helper(fd, fp, on, &p->fasync);
+}
+
+static struct file_operations config_fops =
+{
+ owner: THIS_MODULE,
+ llseek: no_llseek,
+ read: cfg_read,
+ write: cfg_write,
+ ioctl: cfg_ioctl,
+ open: cfg_open,
+ release: cfg_release,
+ fasync: cfg_fasync,
+};
+
+static struct miscdevice i2o_miscdev = {
+ I2O_MINOR,
+ "i2octl",
+ &config_fops
+};
+
+#ifdef MODULE
+int init_module(void)
+#else
+int __init i2o_config_init(void)
+#endif
+{
+ printk(KERN_INFO "I2O configuration manager v 0.04.\n");
+ printk(KERN_INFO " (C) Copyright 1999 Red Hat Software\n");
+
+ if((page_buf = kmalloc(4096, GFP_KERNEL))==NULL)
+ {
+ printk(KERN_ERR "i2o_config: no memory for page buffer.\n");
+ return -ENOBUFS;
+ }
+ if(misc_register(&i2o_miscdev) < 0)
+ {
+ printk(KERN_ERR "i2o_config: can't register device.\n");
+ kfree(page_buf);
+ return -EBUSY;
+ }
+ /*
+ * Install our handler
+ */
+ if(i2o_install_handler(&cfg_handler)<0)
+ {
+ kfree(page_buf);
+ printk(KERN_ERR "i2o_config: handler register failed.\n");
+ misc_deregister(&i2o_miscdev);
+ return -EBUSY;
+ }
+ /*
+ * The low 16bits of the transaction context must match this
+ * for everything we post. Otherwise someone else gets our mail
+ */
+ i2o_cfg_context = cfg_handler.context;
+ return 0;
+}
+
+#ifdef MODULE
+
+void cleanup_module(void)
+{
+ misc_deregister(&i2o_miscdev);
+
+ if(page_buf)
+ kfree(page_buf);
+ if(i2o_cfg_context != -1)
+ i2o_remove_handler(&cfg_handler);
+}
+
+EXPORT_NO_SYMBOLS;
+MODULE_AUTHOR("Red Hat Software");
+MODULE_DESCRIPTION("I2O Configuration");
+
+#endif
--- /dev/null
+/*
+ * Core I2O structure management
+ *
+ * (C) Copyright 1999 Red Hat Software
+ *
+ * Written by Alan Cox, Building Number Three Ltd
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ * A lot of the I2O message side code from this is taken from the
+ * Red Creek RCPCI45 adapter driver by Red Creek Communications
+ *
+ * Fixes by:
+ * Philipp Rumpf
+ * Juha Sievänen <Juha.Sievanen@cs.Helsinki.FI>
+ * Auvo Häkkinen <Auvo.Hakkinen@cs.Helsinki.FI>
+ * Deepak Saxena <deepak@plexity.net>
+ * Boji T Kannanthanam <boji.t.kannanthanam@intel.com>
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/pci.h>
+
+#include <linux/i2o.h>
+
+#include <linux/errno.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/smp_lock.h>
+
+#include <linux/bitops.h>
+#include <linux/wait.h>
+#include <linux/delay.h>
+#include <linux/timer.h>
+#include <linux/tqueue.h>
+#include <linux/interrupt.h>
+#include <linux/sched.h>
+#include <asm/semaphore.h>
+#include <linux/completion.h>
+
+#include <asm/io.h>
+#include <linux/reboot.h>
+
+#include "i2o_lan.h"
+
+//#define DRIVERDEBUG
+
+#ifdef DRIVERDEBUG
+#define dprintk(s, args...) printk(s, ## args)
+#else
+#define dprintk(s, args...)
+#endif
+
+/* OSM table */
+static struct i2o_handler *i2o_handlers[MAX_I2O_MODULES];
+
+/* Controller list */
+static struct i2o_controller *i2o_controllers[MAX_I2O_CONTROLLERS];
+struct i2o_controller *i2o_controller_chain;
+int i2o_num_controllers;
+
+/* Initiator Context for Core message */
+static int core_context;
+
+/* Initialization and shutdown functions */
+static void i2o_sys_init(void);
+static void i2o_sys_shutdown(void);
+static int i2o_reset_controller(struct i2o_controller *);
+static int i2o_reboot_event(struct notifier_block *, unsigned long , void *);
+static int i2o_online_controller(struct i2o_controller *);
+static int i2o_init_outbound_q(struct i2o_controller *);
+static int i2o_post_outbound_messages(struct i2o_controller *);
+
+/* Reply handler */
+static void i2o_core_reply(struct i2o_handler *, struct i2o_controller *,
+ struct i2o_message *);
+
+/* Various helper functions */
+static int i2o_lct_get(struct i2o_controller *);
+static int i2o_lct_notify(struct i2o_controller *);
+static int i2o_hrt_get(struct i2o_controller *);
+
+static int i2o_build_sys_table(void);
+static int i2o_systab_send(struct i2o_controller *c);
+
+/* I2O core event handler */
+static int i2o_core_evt(void *);
+static int evt_pid;
+static int evt_running;
+
+/* Dynamic LCT update handler */
+static int i2o_dyn_lct(void *);
+
+void i2o_report_controller_unit(struct i2o_controller *, struct i2o_device *);
+
+/*
+ * I2O System Table. Contains information about
+ * all the IOPs in the system. Used to inform IOPs
+ * about each other's existence.
+ *
+ * sys_tbl_ver is the CurrentChangeIndicator that is
+ * used by IOPs to track changes.
+ */
+static struct i2o_sys_tbl *sys_tbl;
+static int sys_tbl_ind;
+static int sys_tbl_len;
+
+/*
+ * This spin lock is used to keep a device from being
+ * added and deleted concurrently across CPUs or interrupts.
+ * This can occur when a user creates a device and immediately
+ * deletes it before the new_dev_notify() handler is called.
+ */
+static spinlock_t i2o_dev_lock = SPIN_LOCK_UNLOCKED;
+
+#ifdef MODULE
+/*
+ * Function table to send to bus specific layers
+ * See <include/linux/i2o.h> for explanation of this
+ */
+static struct i2o_core_func_table i2o_core_functions =
+{
+ i2o_install_controller,
+ i2o_activate_controller,
+ i2o_find_controller,
+ i2o_unlock_controller,
+ i2o_run_queue,
+ i2o_delete_controller
+};
+
+#ifdef CONFIG_I2O_PCI_MODULE
+extern int i2o_pci_core_attach(struct i2o_core_func_table *);
+extern void i2o_pci_core_detach(void);
+#endif /* CONFIG_I2O_PCI_MODULE */
+
+#endif /* MODULE */
+
+/*
+ * Structures and definitions for synchronous message posting.
+ * See i2o_post_wait() for description.
+ */
+struct i2o_post_wait_data
+{
+ int *status; /* Pointer to status block on caller stack */
+ int *complete; /* Pointer to completion flag on caller stack */
+ u32 id; /* Unique identifier */
+ wait_queue_head_t *wq; /* Wake up for caller (NULL for dead) */
+ struct i2o_post_wait_data *next; /* Chain */
+ void *mem[2]; /* Memory blocks to recover on failure path */
+};
+static struct i2o_post_wait_data *post_wait_queue;
+static u32 post_wait_id; // Unique ID for each post_wait
+static spinlock_t post_wait_lock = SPIN_LOCK_UNLOCKED;
+static void i2o_post_wait_complete(u32, int);
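The post_wait machinery above pairs each outstanding synchronous message with a unique id; the reply handler later calls `i2o_post_wait_complete()` with that id to wake the sleeping caller. A minimal user-space model of the id-matching (no real sleeping or locking; the `pw_*` names are illustrative, not the driver's):

```c
#include <assert.h>
#include <stddef.h>

/* One node per caller blocked in the modeled post_wait(). */
struct pw_data {
	unsigned int id;	/* unique identifier copied into the message */
	int status;		/* filled in by the reply path */
	int complete;		/* set once the reply arrives */
	struct pw_data *next;
};

static struct pw_data *pw_queue;
static unsigned int pw_id;

/* Enqueue a waiter and hand back the id to embed in the message. */
static unsigned int pw_enqueue(struct pw_data *p)
{
	p->id = ++pw_id;
	p->complete = 0;
	p->next = pw_queue;
	pw_queue = p;
	return p->id;
}

/* Reply path: find the waiter with a matching id and complete it. */
static int pw_complete(unsigned int id, int status)
{
	struct pw_data *p;
	for (p = pw_queue; p; p = p->next) {
		if (p->id == id) {
			p->status = status;
			p->complete = 1;
			return 0;
		}
	}
	return -1;	/* stale or unknown id */
}
```

The real driver additionally stashes the `mem[2]` buffers in the node so that a reply arriving after a timeout can still free them.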
+
+/* OSM descriptor handler */
+static struct i2o_handler i2o_core_handler =
+{
+ (void *)i2o_core_reply,
+ NULL,
+ NULL,
+ NULL,
+ "I2O core layer",
+ 0,
+ I2O_CLASS_EXECUTIVE
+};
+
+/*
+ * Used when queueing a reply to be handled later
+ */
+
+struct reply_info
+{
+ struct i2o_controller *iop;
+ u32 msg[MSG_FRAME_SIZE];
+};
+static struct reply_info evt_reply;
+static struct reply_info events[I2O_EVT_Q_LEN];
+static int evt_in;
+static int evt_out;
+static int evt_q_len;
+#define MODINC(x,y) ((x) = ((x) + 1) % (y))
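The MODINC macro implements the wrap-around index arithmetic for the fixed-size event queues. A user-space sketch of the overwrite-oldest behaviour used in `i2o_core_reply()` (4-slot queue standing in for `I2O_EVT_Q_LEN`; names are illustrative):

```c
#include <assert.h>

/* Local re-creation of the driver's ring-index macro. */
#define MODINC(x, y) ((x) = ((x) + 1) % (y))
#define Q_LEN 4		/* stand-in for I2O_EVT_Q_LEN */

/* Simulate n insertions into a Q_LEN-slot ring; once the ring is full
 * the oldest entry is dropped by advancing 'out' as well, exactly as
 * the reply handler does. Returns the final queue length and writes
 * back the in/out indices. */
static int simulate(int n, int *in, int *out)
{
	int q_len = 0, i;

	*in = *out = 0;
	for (i = 0; i < n; i++) {
		MODINC(*in, Q_LEN);
		if (q_len == Q_LEN)
			MODINC(*out, Q_LEN);	/* overwrite oldest */
		else
			q_len++;
	}
	return q_len;
}
```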
+
+/*
+ * I2O configuration semaphore. Contention isn't a big deal here,
+ * so we have only one.
+ */
+
+static DECLARE_MUTEX(i2o_configuration_lock);
+
+/*
+ * Event spinlock. Keeps the event queue sane and prevents
+ * multiple events from being handled simultaneously.
+ */
+static spinlock_t i2o_evt_lock = SPIN_LOCK_UNLOCKED;
+
+/*
+ * Semaphore used to synchronize event handling thread with
+ * interrupt handler.
+ */
+
+static DECLARE_MUTEX(evt_sem);
+static DECLARE_COMPLETION(evt_dead);
+DECLARE_WAIT_QUEUE_HEAD(evt_wait);
+
+static struct notifier_block i2o_reboot_notifier =
+{
+ i2o_reboot_event,
+ NULL,
+ 0
+};
+
+/*
+ * Config options
+ */
+
+static int verbose;
+MODULE_PARM(verbose, "i");
+
+/*
+ * I2O Core reply handler
+ */
+static void i2o_core_reply(struct i2o_handler *h, struct i2o_controller *c,
+ struct i2o_message *m)
+{
+ u32 *msg=(u32 *)m;
+ u32 status;
+ u32 context = msg[2];
+
+ if (msg[0] & MSG_FAIL) // Fail bit is set
+ {
+ u32 *preserved_msg = (u32*)(c->mem_offset + msg[7]);
+
+ i2o_report_status(KERN_INFO, "i2o_core", msg);
+ i2o_dump_message(preserved_msg);
+
+ /* If the failed request needs special treatment,
+ * it should be done here. */
+
+ /* Release the preserved msg by resubmitting it as a NOP */
+
+ preserved_msg[0] = THREE_WORD_MSG_SIZE | SGL_OFFSET_0;
+ preserved_msg[1] = I2O_CMD_UTIL_NOP << 24 | HOST_TID << 12 | 0;
+ preserved_msg[2] = 0;
+ i2o_post_message(c, msg[7]);
+
+ /* If this was a reply to an i2o_post_wait(), returning here lets the caller time out */
+
+ return;
+ }
+
+#ifdef DRIVERDEBUG
+ i2o_report_status(KERN_INFO, "i2o_core", msg);
+#endif
+
+ if(msg[2]&0x80000000) // Post wait message
+ {
+ if (msg[4] >> 24)
+ status = (msg[4] & 0xFFFF);
+ else
+ status = I2O_POST_WAIT_OK;
+
+ i2o_post_wait_complete(context, status);
+ return;
+ }
+
+ if(m->function == I2O_CMD_UTIL_EVT_REGISTER)
+ {
+ memcpy(events[evt_in].msg, msg, (msg[0]>>16)<<2);
+ events[evt_in].iop = c;
+
+ spin_lock(&i2o_evt_lock);
+ MODINC(evt_in, I2O_EVT_Q_LEN);
+ if(evt_q_len == I2O_EVT_Q_LEN)
+ MODINC(evt_out, I2O_EVT_Q_LEN);
+ else
+ evt_q_len++;
+ spin_unlock(&i2o_evt_lock);
+
+ up(&evt_sem);
+ wake_up_interruptible(&evt_wait);
+ return;
+ }
+
+ if(m->function == I2O_CMD_LCT_NOTIFY)
+ {
+ up(&c->lct_sem);
+ return;
+ }
+
+ /*
+ * If this happens, we want to dump the message to the syslog so
+ * it can be sent back to the card manufacturer by the end user
+ * to aid in debugging.
+ *
+ */
+ printk(KERN_WARNING "%s: Unsolicited message reply sent to core! "
+ "Message dumped to syslog\n",
+ c->name);
+ i2o_dump_message(msg);
+
+ return;
+}
+
+/**
+ * i2o_install_handler - install a message handler
+ * @h: Handler structure
+ *
+ * Install an I2O handler - these handle the asynchronous messaging
+ * from the card once it has initialised. If the table of handlers is
+ * full then -ENOSPC is returned. On success 0 is returned and the
+ * context field is set by the function. The structure is part of the
+ * system from this time onwards. It must not be freed until it has
+ * been uninstalled.
+ */
+
+int i2o_install_handler(struct i2o_handler *h)
+{
+ int i;
+ down(&i2o_configuration_lock);
+ for(i=0;i<MAX_I2O_MODULES;i++)
+ {
+ if(i2o_handlers[i]==NULL)
+ {
+ h->context = i;
+ i2o_handlers[i]=h;
+ up(&i2o_configuration_lock);
+ return 0;
+ }
+ }
+ up(&i2o_configuration_lock);
+ return -ENOSPC;
+}
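i2o_install_handler() hands out the table slot index as the handler's initiator context, and i2o_run_queue() later recovers the handler by masking the low bits of the reply's initiator context. A stand-alone sketch of that scheme (works because the module limit is a power of two; the `install`/`lookup` names are illustrative):

```c
#include <assert.h>
#include <stddef.h>

#define MAX_MODULES 64	/* stand-in for MAX_I2O_MODULES, a power of two */

struct handler { int context; };
static struct handler *table[MAX_MODULES];

/* First free slot becomes the handler's context, as in the driver. */
static int install(struct handler *h)
{
	int i;
	for (i = 0; i < MAX_MODULES; i++) {
		if (table[i] == NULL) {
			h->context = i;
			table[i] = h;
			return 0;
		}
	}
	return -1;	/* -ENOSPC in the driver */
}

/* Reply dispatch: the low bits of the initiator context select the slot,
 * so any high bits the IOP echoes back are ignored. */
static struct handler *lookup(unsigned int initiator_context)
{
	return table[initiator_context & (MAX_MODULES - 1)];
}
```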
+
+/**
+ * i2o_remove_handler - remove an i2o message handler
+ * @h: handler
+ *
+ * Remove a message handler previously installed with i2o_install_handler.
+ * After this function returns the handler object can be freed or re-used
+ */
+
+int i2o_remove_handler(struct i2o_handler *h)
+{
+ i2o_handlers[h->context]=NULL;
+ return 0;
+}
+
+
+/*
+ * Each I2O controller has a chain of devices on it.
+ * Each device has a pointer to its LCT entry to be used
+ * for fun purposes.
+ */
+
+/**
+ * i2o_install_device - attach a device to a controller
+ * @c: controller
+ * @d: device
+ *
+ * Add a new device to an i2o controller. This can be called from
+ * non interrupt contexts only. It adds the device and marks it as
+ * unclaimed. The device memory becomes part of the kernel and must
+ * be uninstalled before being freed or reused. Zero is returned
+ * on success.
+ */
+
+int i2o_install_device(struct i2o_controller *c, struct i2o_device *d)
+{
+ int i;
+
+ down(&i2o_configuration_lock);
+ d->controller=c;
+ d->owner=NULL;
+ d->next=c->devices;
+ d->prev=NULL;
+ if (c->devices != NULL)
+ c->devices->prev=d;
+ c->devices=d;
+ *d->dev_name = 0;
+
+ for(i = 0; i < I2O_MAX_MANAGERS; i++)
+ d->managers[i] = NULL;
+
+ up(&i2o_configuration_lock);
+ return 0;
+}
+
+/* we need this version to call out of i2o_delete_controller */
+
+int __i2o_delete_device(struct i2o_device *d)
+{
+ struct i2o_device **p;
+ int i;
+
+ p=&(d->controller->devices);
+
+ /*
+ * Hey we have a driver!
+ * Check to see if the driver wants us to notify it of
+ * device deletion. If it doesn't we assume that it
+ * is unsafe to delete a device with an owner and
+ * fail.
+ */
+ if(d->owner)
+ {
+ if(d->owner->dev_del_notify)
+ {
+ dprintk(KERN_INFO "Device has owner, notifying\n");
+ d->owner->dev_del_notify(d->controller, d);
+ if(d->owner)
+ {
+ printk(KERN_WARNING
+ "Driver \"%s\" did not release device!\n", d->owner->name);
+ return -EBUSY;
+ }
+ }
+ else
+ return -EBUSY;
+ }
+
+ /*
+ * Tell any other users who are talking to this device
+ * that it's going away. We assume that everything works.
+ */
+ for(i=0; i < I2O_MAX_MANAGERS; i++)
+ {
+ if(d->managers[i] && d->managers[i]->dev_del_notify)
+ d->managers[i]->dev_del_notify(d->controller, d);
+ }
+
+ while(*p!=NULL)
+ {
+ if(*p==d)
+ {
+ /*
+ * Destroy
+ */
+ *p=d->next;
+ kfree(d);
+ return 0;
+ }
+ p=&((*p)->next);
+ }
+ printk(KERN_ERR "i2o_delete_device: passed invalid device.\n");
+ return -EINVAL;
+}
+
+/**
+ * i2o_delete_device - remove an i2o device
+ * @d: device to remove
+ *
+ * This function unhooks a device from a controller. The device
+ * will not be unhooked if it has an owner who does not wish to free
+ * it, or if the owner lacks a dev_del_notify function. In that case
+ * -EBUSY is returned. On success 0 is returned. Other errors cause
+ * negative errno values to be returned
+ */
+
+int i2o_delete_device(struct i2o_device *d)
+{
+ int ret;
+
+ down(&i2o_configuration_lock);
+
+ /*
+ * Seek, locate
+ */
+
+ ret = __i2o_delete_device(d);
+
+ up(&i2o_configuration_lock);
+
+ return ret;
+}
+
+/**
+ * i2o_install_controller - attach a controller
+ * @c: controller
+ *
+ * Add a new controller to the i2o layer. This can be called from
+ * non interrupt contexts only. It adds the controller and marks it as
+ * unused with no devices. If the tables are full or memory allocations
+ * fail then a negative errno code is returned. On success zero is
+ * returned and the controller is bound to the system. The structure
+ * must not be freed or reused until being uninstalled.
+ */
+
+int i2o_install_controller(struct i2o_controller *c)
+{
+ int i;
+ down(&i2o_configuration_lock);
+ for(i=0;i<MAX_I2O_CONTROLLERS;i++)
+ {
+ if(i2o_controllers[i]==NULL)
+ {
+ c->dlct = (i2o_lct*)kmalloc(8192, GFP_KERNEL);
+ if(c->dlct==NULL)
+ {
+ up(&i2o_configuration_lock);
+ return -ENOMEM;
+ }
+ i2o_controllers[i]=c;
+ c->devices = NULL;
+ c->next=i2o_controller_chain;
+ i2o_controller_chain=c;
+ c->unit = i;
+ c->page_frame = NULL;
+ c->hrt = NULL;
+ c->lct = NULL;
+ c->status_block = NULL;
+ sprintf(c->name, "i2o/iop%d", i);
+ i2o_num_controllers++;
+ init_MUTEX_LOCKED(&c->lct_sem);
+ up(&i2o_configuration_lock);
+ return 0;
+ }
+ }
+ printk(KERN_ERR "No free i2o controller slots.\n");
+ up(&i2o_configuration_lock);
+ return -EBUSY;
+}
+
+/**
+ * i2o_delete_controller - delete a controller
+ * @c: controller
+ *
+ * Remove an i2o controller from the system. If the controller or its
+ * devices are busy then -EBUSY is returned. On a failure a negative
+ * errno code is returned. On success zero is returned.
+ */
+
+int i2o_delete_controller(struct i2o_controller *c)
+{
+ struct i2o_controller **p;
+ int users;
+ char name[16];
+ int stat;
+
+ dprintk(KERN_INFO "Deleting controller %s\n", c->name);
+
+ /*
+ * Clear event registration as this can cause weird behavior
+ */
+ if(c->status_block->iop_state == ADAPTER_STATE_OPERATIONAL)
+ i2o_event_register(c, core_context, 0, 0, 0);
+
+ down(&i2o_configuration_lock);
+ if((users=atomic_read(&c->users)))
+ {
+ dprintk(KERN_INFO "I2O: %d users for controller %s\n", users,
+ c->name);
+ up(&i2o_configuration_lock);
+ return -EBUSY;
+ }
+ while(c->devices)
+ {
+ if(__i2o_delete_device(c->devices)<0)
+ {
+ /* Shouldn't happen */
+ c->bus_disable(c);
+ up(&i2o_configuration_lock);
+ return -EBUSY;
+ }
+ }
+
+ /*
+ * If this is shutdown time, the thread's already been killed
+ */
+ if(c->lct_running) {
+ stat = kill_proc(c->lct_pid, SIGTERM, 1);
+ if(!stat) {
+ int count = 10 * 100;
+ while(c->lct_running && --count) {
+ current->state = TASK_INTERRUPTIBLE;
+ schedule_timeout(1);
+ }
+
+ if(!count)
+ printk(KERN_ERR
+ "%s: LCT thread still running!\n",
+ c->name);
+ }
+ }
+
+ p=&i2o_controller_chain;
+
+ while(*p)
+ {
+ if(*p==c)
+ {
+ /* Ask the IOP to switch to RESET state */
+ i2o_reset_controller(c);
+
+ /* Release IRQ */
+ c->destructor(c);
+
+ *p=c->next;
+ up(&i2o_configuration_lock);
+
+ if(c->page_frame)
+ kfree(c->page_frame);
+ if(c->hrt)
+ kfree(c->hrt);
+ if(c->lct)
+ kfree(c->lct);
+ if(c->status_block)
+ kfree(c->status_block);
+ if(c->dlct)
+ kfree(c->dlct);
+
+ i2o_controllers[c->unit]=NULL;
+ memcpy(name, c->name, strlen(c->name)+1);
+ kfree(c);
+ dprintk(KERN_INFO "%s: Deleted from controller chain.\n", name);
+
+ i2o_num_controllers--;
+ return 0;
+ }
+ p=&((*p)->next);
+ }
+ up(&i2o_configuration_lock);
+ printk(KERN_ERR "i2o_delete_controller: bad pointer!\n");
+ return -ENOENT;
+}
+
+/**
+ * i2o_unlock_controller - unlock a controller
+ * @c: controller to unlock
+ *
+ * Release a lock on an i2o controller taken with i2o_find_controller.
+ * This allows it to be deleted again. i2o controllers are not fully
+ * refcounted, so deletion of an in-use controller will fail rather
+ * than take effect on the last dereference.
+ */
+
+void i2o_unlock_controller(struct i2o_controller *c)
+{
+ atomic_dec(&c->users);
+}
+
+/**
+ * i2o_find_controller - return a locked controller
+ * @n: controller number
+ *
+ * Returns a pointer to the controller object. The controller is locked
+ * on return. NULL is returned if the controller is not found.
+ */
+
+struct i2o_controller *i2o_find_controller(int n)
+{
+ struct i2o_controller *c;
+
+ if(n<0 || n>=MAX_I2O_CONTROLLERS)
+ return NULL;
+
+ down(&i2o_configuration_lock);
+ c=i2o_controllers[n];
+ if(c!=NULL)
+ atomic_inc(&c->users);
+ up(&i2o_configuration_lock);
+ return c;
+}
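The find/unlock pair above implements a plain use count rather than full refcounting: deletion is refused while the count is nonzero, instead of being deferred to the last dereference. A user-space model of the protocol (a plain `int` standing in for `atomic_t`; the `ctrl_*` names are illustrative):

```c
#include <assert.h>

struct ctrl { int users; };

/* i2o_find_controller(): take a reference before handing the pointer out. */
static struct ctrl *ctrl_find(struct ctrl *c)
{
	c->users++;
	return c;
}

/* i2o_unlock_controller(): drop the reference. */
static void ctrl_unlock(struct ctrl *c)
{
	c->users--;
}

/* i2o_delete_controller(): refuse while anyone still holds a reference. */
static int ctrl_delete(struct ctrl *c)
{
	return c->users ? -1 /* -EBUSY */ : 0;
}
```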
+
+/**
+ * i2o_issue_claim - claim or release a device
+ * @cmd: command
+ * @c: controller to claim for
+ * @tid: i2o task id
+ * @type: type of claim
+ *
+ * Issue I2O UTIL_CLAIM and UTIL_RELEASE messages. The message to be sent
+ * is set by cmd. The tid is the task id of the object to claim and the
+ * type is the claim type (see the i2o standard)
+ *
+ * Zero is returned on success.
+ */
+
+static int i2o_issue_claim(u32 cmd, struct i2o_controller *c, int tid, u32 type)
+{
+ u32 msg[5];
+
+ msg[0] = FIVE_WORD_MSG_SIZE | SGL_OFFSET_0;
+ msg[1] = cmd << 24 | HOST_TID<<12 | tid;
+ msg[3] = 0;
+ msg[4] = type;
+
+ return i2o_post_wait(c, msg, sizeof(msg), 60);
+}
+
+/**
+ * i2o_claim_device - claim a device for use by an OSM
+ * @d: device to claim
+ * @h: handler for this device
+ *
+ * Do the leg work to assign a device to a given OSM on Linux. The
+ * kernel updates the internal handler data for the device and then
+ * performs an I2O claim for the device, attempting to claim the
+ * device as primary. If the attempt fails a negative errno code
+ * is returned. On success zero is returned.
+ */
+
+int i2o_claim_device(struct i2o_device *d, struct i2o_handler *h)
+{
+ down(&i2o_configuration_lock);
+ if (d->owner) {
+ printk(KERN_INFO "Device claim called, but dev already owned by %s!\n",
+ d->owner->name);
+ up(&i2o_configuration_lock);
+ return -EBUSY;
+ }
+ d->owner=h;
+
+ if(i2o_issue_claim(I2O_CMD_UTIL_CLAIM, d->controller, d->lct_data.tid,
+ I2O_CLAIM_PRIMARY))
+ {
+ d->owner = NULL;
+ up(&i2o_configuration_lock);
+ return -EBUSY;
+ }
+ up(&i2o_configuration_lock);
+ return 0;
+}
+
+/**
+ * i2o_release_device - release a device that the OSM is using
+ * @d: device to claim
+ * @h: handler for this device
+ *
+ * Drop a claim by an OSM on a given I2O device. The handler is cleared
+ * and 0 is returned on success.
+ *
+ * AC - some devices seem to want to refuse an unclaim until they have
+ * finished internal processing. It makes sense since you don't want a
+ * new device to go reconfiguring the entire system until you are done.
+ * Thus we are prepared to wait briefly.
+ */
+
+int i2o_release_device(struct i2o_device *d, struct i2o_handler *h)
+{
+ int err = 0;
+ int tries;
+
+ down(&i2o_configuration_lock);
+ if (d->owner != h) {
+ printk(KERN_INFO "Claim release called, but not owned by %s!\n",
+ h->name);
+ up(&i2o_configuration_lock);
+ return -ENOENT;
+ }
+
+ for(tries=0;tries<10;tries++)
+ {
+ d->owner = NULL;
+
+ /*
+ * If the controller takes a nonblocking approach to
+ * releases we have to sleep/poll for a few times.
+ */
+
+ if((err=i2o_issue_claim(I2O_CMD_UTIL_RELEASE, d->controller, d->lct_data.tid, I2O_CLAIM_PRIMARY)) )
+ {
+ err = -ENXIO;
+ current->state = TASK_UNINTERRUPTIBLE;
+ schedule_timeout(HZ);
+ }
+ else
+ {
+ err=0;
+ break;
+ }
+ }
+ up(&i2o_configuration_lock);
+ return err;
+}
+
+/**
+ * i2o_device_notify_on - Enable deletion notifiers
+ * @d: device for notification
+ * @h: handler to install
+ *
+ * Called by OSMs to let the core know that they want to be
+ * notified if the given device is deleted from the system.
+ */
+
+int i2o_device_notify_on(struct i2o_device *d, struct i2o_handler *h)
+{
+ int i;
+
+ if(d->num_managers == I2O_MAX_MANAGERS)
+ return -ENOSPC;
+
+ for(i = 0; i < I2O_MAX_MANAGERS; i++)
+ {
+ if(!d->managers[i])
+ {
+ d->managers[i] = h;
+ break;
+ }
+ }
+
+ d->num_managers++;
+
+ return 0;
+}
+
+/**
+ * i2o_device_notify_off - Remove deletion notifiers
+ * @d: device for notification
+ * @h: handler to remove
+ *
+ * Called by OSMs to let the core know that they no longer
+ * are interested in the fate of the given device.
+ */
+int i2o_device_notify_off(struct i2o_device *d, struct i2o_handler *h)
+{
+ int i;
+
+ for(i=0; i < I2O_MAX_MANAGERS; i++)
+ {
+ if(d->managers[i] == h)
+ {
+ d->managers[i] = NULL;
+ d->num_managers--;
+ return 0;
+ }
+ }
+
+ return -ENOENT;
+}
+
+/**
+ * i2o_event_register - register interest in an event
+ * @c: Controller to register interest with
+ * @tid: I2O task id
+ * @init_context: initiator context to use with this notifier
+ * @tr_context: transaction context to use with this notifier
+ * @evt_mask: mask of events
+ *
+ * Creates and posts an event registration message to the task. No reply
+ * is waited for, or expected. Errors in posting will be reported.
+ */
+
+int i2o_event_register(struct i2o_controller *c, u32 tid,
+ u32 init_context, u32 tr_context, u32 evt_mask)
+{
+ u32 msg[5]; // Not performance critical, so we just
+ // i2o_post_this it instead of building it
+ // in IOP memory
+
+ msg[0] = FIVE_WORD_MSG_SIZE|SGL_OFFSET_0;
+ msg[1] = I2O_CMD_UTIL_EVT_REGISTER<<24 | HOST_TID<<12 | tid;
+ msg[2] = init_context;
+ msg[3] = tr_context;
+ msg[4] = evt_mask;
+
+ return i2o_post_this(c, msg, sizeof(msg));
+}
+
+/**
+ * i2o_event_ack - acknowledge an event
+ * @c: controller
+ * @msg: pointer to the UTIL_EVENT_REGISTER reply we received
+ *
+ * We just take a pointer to the original UTIL_EVENT_REGISTER reply
+ * message and change the function code, since that is what the spec
+ * says an EventAck message looks like.
+ */
+
+int i2o_event_ack(struct i2o_controller *c, u32 *msg)
+{
+ struct i2o_message *m = (struct i2o_message *)msg;
+
+ m->function = I2O_CMD_UTIL_EVT_ACK;
+
+ return i2o_post_wait(c, msg, m->size * 4, 2);
+}
+
+/*
+ * Core event handler. Runs as a separate thread and is woken
+ * up whenever there is an Executive class event.
+ */
+static int i2o_core_evt(void *reply_data)
+{
+ struct reply_info *reply = (struct reply_info *) reply_data;
+ u32 *msg = reply->msg;
+ struct i2o_controller *c = NULL;
+ unsigned long flags;
+
+ lock_kernel();
+ daemonize();
+ unlock_kernel();
+
+ strcpy(current->comm, "i2oevtd");
+ evt_running = 1;
+
+ while(1)
+ {
+ if(down_interruptible(&evt_sem))
+ {
+ dprintk(KERN_INFO "I2O event thread dead\n");
+ printk(KERN_INFO "i2oevtd: exiting\n");
+ evt_running = 0;
+ complete_and_exit(&evt_dead, 0);
+ }
+
+ /*
+ * Copy the data out of the queue so that we don't have to lock
+ * around the whole function and just around the qlen update
+ */
+ spin_lock_irqsave(&i2o_evt_lock, flags);
+ memcpy(reply, &events[evt_out], sizeof(struct reply_info));
+ MODINC(evt_out, I2O_EVT_Q_LEN);
+ evt_q_len--;
+ spin_unlock_irqrestore(&i2o_evt_lock, flags);
+
+ c = reply->iop;
+ dprintk(KERN_INFO "I2O IRTOS EVENT: iop%d, event %#10x\n", c->unit, msg[4]);
+
+ /*
+ * We do not attempt to delete/quiesce/etc. the controller if
+ * some sort of error indication occurs. We may want to do
+ * so in the future, but for now we just let the user deal with
+ * it. One reason for this is that what to do with an error
+ * or when to send what error is not really agreed on, so
+ * we get errors that may not be fatal but just look like they
+ * are...so let the user deal with it.
+ */
+ switch(msg[4])
+ {
+ case I2O_EVT_IND_EXEC_RESOURCE_LIMITS:
+ printk(KERN_ERR "%s: Out of resources\n", c->name);
+ break;
+
+ case I2O_EVT_IND_EXEC_POWER_FAIL:
+ printk(KERN_ERR "%s: Power failure\n", c->name);
+ break;
+
+ case I2O_EVT_IND_EXEC_HW_FAIL:
+ {
+ char *fail[] =
+ {
+ "Unknown Error",
+ "Power Lost",
+ "Code Violation",
+ "Parity Error",
+ "Code Execution Exception",
+ "Watchdog Timer Expired"
+ };
+
+ if(msg[5] < 6)
+ printk(KERN_ERR "%s: Hardware Failure: %s\n",
+ c->name, fail[msg[5]]);
+ else
+ printk(KERN_ERR "%s: Unknown Hardware Failure\n", c->name);
+
+ break;
+ }
+
+ /*
+ * New device created
+ * - Create a new i2o_device entry
+ * - Inform all interested drivers about this device's existence
+ */
+ case I2O_EVT_IND_EXEC_NEW_LCT_ENTRY:
+ {
+ struct i2o_device *d = (struct i2o_device *)
+ kmalloc(sizeof(struct i2o_device), GFP_KERNEL);
+ int i;
+
+ if (d == NULL) {
+ printk(KERN_EMERG "i2oevtd: out of memory\n");
+ break;
+ }
+ memcpy(&d->lct_data, &msg[5], sizeof(i2o_lct_entry));
+
+ d->next = NULL;
+ d->controller = c;
+ d->flags = 0;
+
+ i2o_report_controller_unit(c, d);
+ i2o_install_device(c,d);
+
+ for(i = 0; i < MAX_I2O_MODULES; i++)
+ {
+ if(i2o_handlers[i] &&
+ i2o_handlers[i]->new_dev_notify &&
+ (i2o_handlers[i]->class&d->lct_data.class_id))
+ {
+ spin_lock(&i2o_dev_lock);
+ i2o_handlers[i]->new_dev_notify(c,d);
+ spin_unlock(&i2o_dev_lock);
+ }
+ }
+
+ break;
+ }
+
+ /*
+ * LCT entry for a device has been modified, so update it
+ * internally.
+ */
+ case I2O_EVT_IND_EXEC_MODIFIED_LCT:
+ {
+ struct i2o_device *d;
+ i2o_lct_entry *new_lct = (i2o_lct_entry *)&msg[5];
+
+ for(d = c->devices; d; d = d->next)
+ {
+ if(d->lct_data.tid == new_lct->tid)
+ {
+ memcpy(&d->lct_data, new_lct, sizeof(i2o_lct_entry));
+ break;
+ }
+ }
+ break;
+ }
+
+ case I2O_EVT_IND_CONFIGURATION_FLAG:
+ printk(KERN_WARNING "%s requires user configuration\n", c->name);
+ break;
+
+ case I2O_EVT_IND_GENERAL_WARNING:
+ printk(KERN_WARNING "%s: Warning notification received! "
+ "Check configuration for errors!\n", c->name);
+ break;
+
+ case I2O_EVT_IND_EVT_MASK_MODIFIED:
+ /* Well I guess that was us hey .. */
+ break;
+
+ default:
+ printk(KERN_WARNING "%s: No handler for event (0x%08x)\n", c->name, msg[4]);
+ break;
+ }
+ }
+
+ return 0;
+}
+
+/*
+ * Dynamic LCT update. This compares the LCT with the currently
+ * installed devices to check for device deletions. This is needed
+ * because there is no DELETED_LCT_ENTRY EventIndicator for the
+ * Executive class, so we can't just have the event handler do it.
+ *
+ * This is a hole in the spec that will hopefully be fixed someday.
+ */
+static int i2o_dyn_lct(void *foo)
+{
+ struct i2o_controller *c = (struct i2o_controller *)foo;
+ struct i2o_device *d = NULL;
+ struct i2o_device *d1 = NULL;
+ int i = 0;
+ int found = 0;
+ int entries;
+ void *tmp;
+ char name[16];
+
+ lock_kernel();
+ daemonize();
+ unlock_kernel();
+
+ sprintf(name, "iop%d_lctd", c->unit);
+ strcpy(current->comm, name);
+
+ c->lct_running = 1;
+
+ while(1)
+ {
+ down_interruptible(&c->lct_sem);
+ if(signal_pending(current))
+ {
+ dprintk(KERN_ERR "%s: LCT thread dead\n", c->name);
+ c->lct_running = 0;
+ return 0;
+ }
+
+ entries = c->dlct->table_size;
+ entries -= 3;
+ entries /= 9;
+
+ dprintk(KERN_INFO "%s: Dynamic LCT Update\n",c->name);
+ dprintk(KERN_INFO "%s: Dynamic LCT contains %d entries\n", c->name, entries);
+
+ if(!entries)
+ {
+ printk(KERN_INFO "%s: Empty LCT???\n", c->name);
+ continue;
+ }
+
+ /*
+ * Loop through all the devices on the IOP looking for their
+ * LCT data in the LCT. We assume that TIDs are not repeated,
+ * as that is the only way to really tell. It's been confirmed
+ * by the IRTOS vendor(s?) that TIDs are not reused until they
+ * wrap around (4096), and I doubt a system will stay up long enough
+ * to create/delete that many devices.
+ */
+ for(d = c->devices; d; )
+ {
+ found = 0;
+ d1 = d->next;
+
+ for(i = 0; i < entries; i++)
+ {
+ if(d->lct_data.tid == c->dlct->lct_entry[i].tid)
+ {
+ found = 1;
+ break;
+ }
+ }
+ if(!found)
+ {
+ dprintk(KERN_INFO "i2o_core: Deleted device!\n");
+ spin_lock(&i2o_dev_lock);
+ i2o_delete_device(d);
+ spin_unlock(&i2o_dev_lock);
+ }
+ d = d1;
+ }
+
+ /*
+ * Tell LCT to renotify us next time there is a change
+ */
+ i2o_lct_notify(c);
+
+ /*
+ * Copy new LCT into public LCT
+ *
+ * Possible race if someone is reading LCT while we are copying
+ * over it. If this happens, we'll fix it then, but I doubt that
+ * the LCT will get updated often enough or will get read by
+ * a user often enough to worry.
+ */
+ if(c->lct->table_size < c->dlct->table_size)
+ {
+ tmp = c->lct;
+ c->lct = kmalloc(c->dlct->table_size<<2, GFP_KERNEL);
+ if(!c->lct)
+ {
+ printk(KERN_ERR "%s: No memory for LCT!\n", c->name);
+ c->lct = tmp;
+ continue;
+ }
+ kfree(tmp);
+ }
+ memcpy(c->lct, c->dlct, c->dlct->table_size<<2);
+ }
+
+ return 0;
+}
+
+/**
+ * i2o_run_queue - process pending events on a controller
+ * @c: controller to process
+ *
+ * This is called by the bus specific driver layer when an interrupt
+ * or poll of this card interface is desired.
+ */
+
+void i2o_run_queue(struct i2o_controller *c)
+{
+ struct i2o_message *m;
+ u32 mv;
+ u32 *msg;
+
+ /*
+ * Old 960 steppings had a bug in the I2O unit that caused
+ * the queue to appear empty when it wasn't.
+ */
+ if((mv=I2O_REPLY_READ32(c))==0xFFFFFFFF)
+ mv=I2O_REPLY_READ32(c);
+
+ while(mv!=0xFFFFFFFF)
+ {
+ struct i2o_handler *i;
+ m=(struct i2o_message *)bus_to_virt(mv);
+ msg=(u32*)m;
+
+ i=i2o_handlers[m->initiator_context&(MAX_I2O_MODULES-1)];
+ if(i && i->reply)
+ i->reply(i,c,m);
+ else
+ {
+ printk(KERN_WARNING "I2O: Spurious reply to handler %d\n",
+ m->initiator_context&(MAX_I2O_MODULES-1));
+ }
+ i2o_flush_reply(c,mv);
+ mb();
+
+ /* That 960 bug again... */
+ if((mv=I2O_REPLY_READ32(c))==0xFFFFFFFF)
+ mv=I2O_REPLY_READ32(c);
+ }
+}
+
+
+/**
+ * i2o_get_class_name - do i2o class name lookup
+ * @class: class number
+ *
+ * Return a descriptive string for an i2o class
+ */
+
+const char *i2o_get_class_name(int class)
+{
+ int idx = 16;
+ static char *i2o_class_name[] = {
+ "Executive",
+ "Device Driver Module",
+ "Block Device",
+ "Tape Device",
+ "LAN Interface",
+ "WAN Interface",
+ "Fibre Channel Port",
+ "Fibre Channel Device",
+ "SCSI Device",
+ "ATE Port",
+ "ATE Device",
+ "Floppy Controller",
+ "Floppy Device",
+ "Secondary Bus Port",
+ "Peer Transport Agent",
+ "Peer Transport",
+ "Unknown"
+ };
+
+ switch(class&0xFFF)
+ {
+ case I2O_CLASS_EXECUTIVE:
+ idx = 0; break;
+ case I2O_CLASS_DDM:
+ idx = 1; break;
+ case I2O_CLASS_RANDOM_BLOCK_STORAGE:
+ idx = 2; break;
+ case I2O_CLASS_SEQUENTIAL_STORAGE:
+ idx = 3; break;
+ case I2O_CLASS_LAN:
+ idx = 4; break;
+ case I2O_CLASS_WAN:
+ idx = 5; break;
+ case I2O_CLASS_FIBRE_CHANNEL_PORT:
+ idx = 6; break;
+ case I2O_CLASS_FIBRE_CHANNEL_PERIPHERAL:
+ idx = 7; break;
+ case I2O_CLASS_SCSI_PERIPHERAL:
+ idx = 8; break;
+ case I2O_CLASS_ATE_PORT:
+ idx = 9; break;
+ case I2O_CLASS_ATE_PERIPHERAL:
+ idx = 10; break;
+ case I2O_CLASS_FLOPPY_CONTROLLER:
+ idx = 11; break;
+ case I2O_CLASS_FLOPPY_DEVICE:
+ idx = 12; break;
+ case I2O_CLASS_BUS_ADAPTER_PORT:
+ idx = 13; break;
+ case I2O_CLASS_PEER_TRANSPORT_AGENT:
+ idx = 14; break;
+ case I2O_CLASS_PEER_TRANSPORT:
+ idx = 15; break;
+ }
+
+ return i2o_class_name[idx];
+}
+
+
+/**
+ * i2o_wait_message - obtain an i2o message from the IOP
+ * @c: controller
+ * @why: explanation
+ *
+ * This function waits up to 5 seconds for a message slot to become
+ * available. If no slot becomes available it prints an error message
+ * that includes the explanation of what the message was to be used
+ * for (eg "get_status"). 0xFFFFFFFF is returned on a failure.
+ *
+ * On a success the message is returned. This is the physical page
+ * frame offset address from the read port. (See the i2o spec)
+ */
+
+u32 i2o_wait_message(struct i2o_controller *c, char *why)
+{
+ long time=jiffies;
+ u32 m;
+ while((m=I2O_POST_READ32(c))==0xFFFFFFFF)
+ {
+ if((jiffies-time)>=5*HZ)
+ {
+ dprintk(KERN_ERR "%s: Timeout waiting for message frame to send %s.\n",
+ c->name, why);
+ return 0xFFFFFFFF;
+ }
+ schedule();
+ barrier();
+ }
+ return m;
+}
+
+/**
+ * i2o_report_controller_unit - print information about a tid
+ * @c: controller
+ * @d: device
+ *
+ * Dump an information block associated with a given unit (TID). The
+ * tables are read and a block of text is output to printk that is
+ * formatted for the user.
+ */
+
+void i2o_report_controller_unit(struct i2o_controller *c, struct i2o_device *d)
+{
+ char buf[64];
+ char str[22];
+ int ret;
+ int unit = d->lct_data.tid;
+
+ if(verbose==0)
+ return;
+
+ printk(KERN_INFO "Target ID %d.\n", unit);
+ if((ret=i2o_query_scalar(c, unit, 0xF100, 3, buf, 16))>=0)
+ {
+ buf[16]=0;
+ printk(KERN_INFO " Vendor: %s\n", buf);
+ }
+ if((ret=i2o_query_scalar(c, unit, 0xF100, 4, buf, 16))>=0)
+ {
+ buf[16]=0;
+ printk(KERN_INFO " Device: %s\n", buf);
+ }
+ if(i2o_query_scalar(c, unit, 0xF100, 5, buf, 16)>=0)
+ {
+ buf[16]=0;
+ printk(KERN_INFO " Description: %s\n", buf);
+ }
+ if((ret=i2o_query_scalar(c, unit, 0xF100, 6, buf, 8))>=0)
+ {
+ buf[8]=0;
+ printk(KERN_INFO " Rev: %s\n", buf);
+ }
+
+ printk(KERN_INFO " Class: ");
+ sprintf(str, "%-21s", i2o_get_class_name(d->lct_data.class_id));
+ printk("%s\n", str);
+
+ printk(KERN_INFO " Subclass: 0x%04X\n", d->lct_data.sub_class);
+ printk(KERN_INFO " Flags: ");
+
+ if(d->lct_data.device_flags&(1<<0))
+ printk("C"); // ConfigDialog requested
+ if(d->lct_data.device_flags&(1<<1))
+ printk("U"); // Multi-user capable
+ if(!(d->lct_data.device_flags&(1<<4)))
+ printk("P"); // Peer service enabled!
+ if(!(d->lct_data.device_flags&(1<<5)))
+ printk("M"); // Mgmt service enabled!
+ printk("\n");
+
+}
+
+
+/*
+ * Parse the hardware resource table. Right now we print it out
+ * and don't do a lot with it. We should collate these and then
+ * interact with the Linux resource allocation block.
+ *
+ * Let's prove we can read it first, eh?
+ *
+ * This is full of endianisms!
+ */
+
+static int i2o_parse_hrt(struct i2o_controller *c)
+{
+#ifdef DRIVERDEBUG
+ u32 *rows=(u32*)c->hrt;
+ u8 *p=(u8 *)c->hrt;
+ u8 *d;
+ int count;
+ int length;
+ int i;
+ int state;
+
+ if(p[3]!=0)
+ {
+ printk(KERN_ERR "%s: HRT table for controller is too new a version.\n",
+ c->name);
+ return -1;
+ }
+
+ count=p[0]|(p[1]<<8);
+ length = p[2];
+
+ printk(KERN_INFO "%s: HRT has %d entries of %d bytes each.\n",
+ c->name, count, length<<2);
+
+ rows+=2;
+
+ for(i=0;i<count;i++)
+ {
+ printk(KERN_INFO "Adapter %08X: ", rows[0]);
+ p=(u8 *)(rows+1);
+ d=(u8 *)(rows+2);
+ state=p[1]<<8|p[0];
+
+ printk("TID %04X:[", state&0xFFF);
+ state>>=12;
+ if(state&(1<<0))
+ printk("H"); /* Hidden */
+ if(state&(1<<2))
+ {
+ printk("P"); /* Present */
+ if(state&(1<<1))
+ printk("C"); /* Controlled */
+ }
+ if(state>9)
+ printk("*"); /* Hard */
+
+ printk("]:");
+
+ switch(p[3]&0xFFFF)
+ {
+ case 0:
+ /* Adapter private bus - easy */
+ printk("Local bus %d: I/O at 0x%04X Mem 0x%08X",
+ p[2], d[1]<<8|d[0], *(u32 *)(d+4));
+ break;
+ case 1:
+ /* ISA bus */
+ printk("ISA %d: CSN %d I/O at 0x%04X Mem 0x%08X",
+ p[2], d[2], d[1]<<8|d[0], *(u32 *)(d+4));
+ break;
+
+ case 2: /* EISA bus */
+ printk("EISA %d: Slot %d I/O at 0x%04X Mem 0x%08X",
+ p[2], d[3], d[1]<<8|d[0], *(u32 *)(d+4));
+ break;
+
+ case 3: /* MCA bus */
+ printk("MCA %d: Slot %d I/O at 0x%04X Mem 0x%08X",
+ p[2], d[3], d[1]<<8|d[0], *(u32 *)(d+4));
+ break;
+
+ case 4: /* PCI bus */
+ printk("PCI %d: Bus %d Device %d Function %d",
+ p[2], d[2], d[1], d[0]);
+ break;
+
+ case 0x80: /* Other */
+ default:
+ printk("Unsupported bus type.");
+ break;
+ }
+ printk("\n");
+ rows+=length;
+ }
+#endif
+ return 0;
+}
+
+/*
+ * The logical configuration table tells us what we can talk to
+ * on the board. Most of the stuff isn't interesting to us.
+ */
+
+static int i2o_parse_lct(struct i2o_controller *c)
+{
+ int i;
+ int max;
+ int tid;
+ struct i2o_device *d;
+ i2o_lct *lct = c->lct;
+
+ if (lct == NULL) {
+ printk(KERN_ERR "%s: LCT is empty???\n", c->name);
+ return -1;
+ }
+
+ max = lct->table_size;
+ max -= 3;
+ max /= 9;
+
+ printk(KERN_INFO "%s: LCT has %d entries.\n", c->name, max);
+
+ if(lct->iop_flags&(1<<0))
+ printk(KERN_WARNING "%s: Configuration dialog desired.\n", c->name);
+
+ for(i=0;i<max;i++)
+ {
+ d = (struct i2o_device *)kmalloc(sizeof(struct i2o_device), GFP_KERNEL);
+ if(d==NULL)
+ {
+ printk(KERN_CRIT "i2o_core: Out of memory for I2O device data.\n");
+ return -ENOMEM;
+ }
+
+ d->controller = c;
+ d->next = NULL;
+
+ memcpy(&d->lct_data, &lct->lct_entry[i], sizeof(i2o_lct_entry));
+
+ d->flags = 0;
+ tid = d->lct_data.tid;
+
+ i2o_report_controller_unit(c, d);
+
+ i2o_install_device(c, d);
+ }
+ return 0;
+}
+
+
+/**
+ * i2o_quiesce_controller - quiesce controller
+ * @c: controller
+ *
+ * Quiesce an IOP. Causes IOP to make external operation quiescent
+ * (i2o 'READY' state). Internal operation of the IOP continues normally.
+ */
+
+int i2o_quiesce_controller(struct i2o_controller *c)
+{
+ u32 msg[4];
+ int ret;
+
+ i2o_status_get(c);
+
+ /* SysQuiesce discarded if IOP not in READY or OPERATIONAL state */
+
+ if ((c->status_block->iop_state != ADAPTER_STATE_READY) &&
+ (c->status_block->iop_state != ADAPTER_STATE_OPERATIONAL))
+ {
+ return 0;
+ }
+
+ msg[0] = FOUR_WORD_MSG_SIZE|SGL_OFFSET_0;
+ msg[1] = I2O_CMD_SYS_QUIESCE<<24|HOST_TID<<12|ADAPTER_TID;
+ msg[3] = 0;
+
+ /* Long timeout needed for quiesce if lots of devices */
+
+ if ((ret = i2o_post_wait(c, msg, sizeof(msg), 240)))
+ printk(KERN_INFO "%s: Unable to quiesce (status=%#x).\n",
+ c->name, -ret);
+ else
+ dprintk(KERN_INFO "%s: Quiesced.\n", c->name);
+
+ i2o_status_get(c); // Entered READY state
+ return ret;
+}
+
+/**
+ * i2o_enable_controller - move controller from ready to operational
+ * @c: controller
+ *
+ * Enable IOP. This allows the IOP to resume external operations and
+ * reverses the effect of a quiesce. In the event of an error a negative
+ * errno code is returned.
+ */
+
+int i2o_enable_controller(struct i2o_controller *c)
+{
+ u32 msg[4];
+ int ret;
+
+ i2o_status_get(c);
+
+ /* Enable only allowed on READY state */
+ if(c->status_block->iop_state != ADAPTER_STATE_READY)
+ return -EINVAL;
+
+ msg[0]=FOUR_WORD_MSG_SIZE|SGL_OFFSET_0;
+ msg[1]=I2O_CMD_SYS_ENABLE<<24|HOST_TID<<12|ADAPTER_TID;
+
+ /* How long of a timeout do we need? */
+
+ if ((ret = i2o_post_wait(c, msg, sizeof(msg), 240)))
+ printk(KERN_ERR "%s: Could not enable (status=%#x).\n",
+ c->name, -ret);
+ else
+ dprintk(KERN_INFO "%s: Enabled.\n", c->name);
+
+ i2o_status_get(c); // entered OPERATIONAL state
+
+ return ret;
+}
+
+/**
+ * i2o_clear_controller - clear a controller
+ * @c: controller
+ *
+ * Clear an IOP to HOLD state, i.e. terminate external operations, clear all
+ * input queues and prepare for a system restart. IOP's internal operation
+ * continues normally and the outbound queue is alive.
+ * The IOP is not expected to rebuild its LCT.
+ */
+
+int i2o_clear_controller(struct i2o_controller *c)
+{
+ struct i2o_controller *iop;
+ u32 msg[4];
+ int ret;
+
+ /* Quiesce all IOPs first */
+
+ for (iop = i2o_controller_chain; iop; iop = iop->next)
+ i2o_quiesce_controller(iop);
+
+ msg[0]=FOUR_WORD_MSG_SIZE|SGL_OFFSET_0;
+ msg[1]=I2O_CMD_ADAPTER_CLEAR<<24|HOST_TID<<12|ADAPTER_TID;
+ msg[3]=0;
+
+ if ((ret=i2o_post_wait(c, msg, sizeof(msg), 30)))
+ printk(KERN_INFO "%s: Unable to clear (status=%#x).\n",
+ c->name, -ret);
+ else
+ dprintk(KERN_INFO "%s: Cleared.\n",c->name);
+
+ i2o_status_get(c);
+
+ /* Enable other IOPs */
+
+ for (iop = i2o_controller_chain; iop; iop = iop->next)
+ if (iop != c)
+ i2o_enable_controller(iop);
+
+ return ret;
+}
+
+
+/**
+ * i2o_reset_controller - reset an IOP
+ * @c: controller to reset
+ *
+ * Reset the IOP into INIT state and wait until IOP gets into RESET state.
+ * Terminate all external operations, clear IOP's inbound and outbound
+ * queues, terminate all DDMs, and reload the IOP's operating environment
+ * and all local DDMs. The IOP rebuilds its LCT.
+ */
+
+static int i2o_reset_controller(struct i2o_controller *c)
+{
+ struct i2o_controller *iop;
+ u32 m;
+ u8 *status;
+ u32 *msg;
+ long time;
+
+ /* Quiesce all IOPs first */
+
+ for (iop = i2o_controller_chain; iop; iop = iop->next)
+ {
+ if(iop->type != I2O_TYPE_PCI || !iop->bus.pci.dpt)
+ i2o_quiesce_controller(iop);
+ }
+
+ m=i2o_wait_message(c, "AdapterReset");
+ if(m==0xFFFFFFFF)
+ return -ETIMEDOUT;
+ msg=(u32 *)(c->mem_offset+m);
+
+ status=(void *)kmalloc(4, GFP_KERNEL);
+ if(status==NULL) {
+ printk(KERN_ERR "IOP reset failed - no free memory.\n");
+ return -ENOMEM;
+ }
+ memset(status, 0, 4);
+
+ msg[0]=EIGHT_WORD_MSG_SIZE|SGL_OFFSET_0;
+ msg[1]=I2O_CMD_ADAPTER_RESET<<24|HOST_TID<<12|ADAPTER_TID;
+ msg[2]=core_context;
+ msg[3]=0;
+ msg[4]=0;
+ msg[5]=0;
+ msg[6]=virt_to_bus(status);
+ msg[7]=0; /* 64bit host FIXME */
+
+ i2o_post_message(c,m);
+
+ /* Wait for a reply */
+ time=jiffies;
+ while(*status==0)
+ {
+ if((jiffies-time)>=20*HZ)
+ {
+ printk(KERN_ERR "IOP reset timeout.\n");
+ // Better to leak this for safety: kfree(status);
+ return -ETIMEDOUT;
+ }
+ schedule();
+ barrier();
+ }
+
+ if (*status==I2O_CMD_IN_PROGRESS)
+ {
+ /*
+ * Once the reset is sent, the IOP goes into the INIT state
+ * which is indeterminate. We need to wait until the IOP
+ * has rebooted before we can let the system talk to
+ * it. We read the inbound Free_List until a message is
+ * available. If we can't read one in the given amount of
+ * time, we assume the IOP could not reboot properly.
+ */
+
+ dprintk(KERN_INFO "%s: Reset in progress, waiting for reboot...\n",
+ c->name);
+
+ time = jiffies;
+ m = I2O_POST_READ32(c);
+ while(m == 0xFFFFFFFF)
+ {
+ if((jiffies-time) >= 30*HZ)
+ {
+ printk(KERN_ERR "%s: Timeout waiting for IOP reset.\n",
+ c->name);
+ return -ETIMEDOUT;
+ }
+ schedule();
+ barrier();
+ m = I2O_POST_READ32(c);
+ }
+ i2o_flush_reply(c,m);
+ }
+
+ /* If IopReset was rejected or didn't perform reset, try IopClear */
+
+ i2o_status_get(c);
+ if (status[0] == I2O_CMD_REJECTED ||
+ c->status_block->iop_state != ADAPTER_STATE_RESET)
+ {
+ printk(KERN_WARNING "%s: Reset rejected, trying to clear\n",c->name);
+ i2o_clear_controller(c);
+ }
+ else
+ dprintk(KERN_INFO "%s: Reset completed.\n", c->name);
+
+ /* Enable other IOPs */
+
+ for (iop = i2o_controller_chain; iop; iop = iop->next)
+ if (iop != c)
+ i2o_enable_controller(iop);
+
+ kfree(status);
+ return 0;
+}
+
+
+/**
+ * i2o_status_get - get the status block for the IOP
+ * @c: controller
+ *
+ * Issue a status query on the controller. This updates the
+ * attached status_block. If the controller fails to reply or an
+ * error occurs then a negative errno code is returned. On success
+ * zero is returned and the status_block is updated.
+ */
+
+int i2o_status_get(struct i2o_controller *c)
+{
+ long time;
+ u32 m;
+ u32 *msg;
+ u8 *status_block;
+
+ if (c->status_block == NULL)
+ {
+ c->status_block = (i2o_status_block *)
+ kmalloc(sizeof(i2o_status_block),GFP_KERNEL);
+ if (c->status_block == NULL)
+ {
+ printk(KERN_CRIT "%s: Get Status Block failed; Out of memory.\n",
+ c->name);
+ return -ENOMEM;
+ }
+ }
+
+ status_block = (u8*)c->status_block;
+ memset(c->status_block,0,sizeof(i2o_status_block));
+
+ m=i2o_wait_message(c, "StatusGet");
+ if(m==0xFFFFFFFF)
+ return -ETIMEDOUT;
+ msg=(u32 *)(c->mem_offset+m);
+
+ msg[0]=NINE_WORD_MSG_SIZE|SGL_OFFSET_0;
+ msg[1]=I2O_CMD_STATUS_GET<<24|HOST_TID<<12|ADAPTER_TID;
+ msg[2]=core_context;
+ msg[3]=0;
+ msg[4]=0;
+ msg[5]=0;
+ msg[6]=virt_to_bus(c->status_block);
+ msg[7]=0; /* 64bit host FIXME */
+ msg[8]=sizeof(i2o_status_block); /* always 88 bytes */
+
+ i2o_post_message(c,m);
+
+ /* Wait for a reply */
+
+ time=jiffies;
+ while(status_block[87]!=0xFF)
+ {
+ if((jiffies-time)>=5*HZ)
+ {
+ printk(KERN_ERR "%s: Get status timeout.\n",c->name);
+ return -ETIMEDOUT;
+ }
+ schedule();
+ barrier();
+ }
+
+#ifdef DRIVERDEBUG
+ printk(KERN_INFO "%s: State = ", c->name);
+ switch (c->status_block->iop_state) {
+ case 0x01:
+ printk("INIT\n");
+ break;
+ case 0x02:
+ printk("RESET\n");
+ break;
+ case 0x04:
+ printk("HOLD\n");
+ break;
+ case 0x05:
+ printk("READY\n");
+ break;
+ case 0x08:
+ printk("OPERATIONAL\n");
+ break;
+ case 0x10:
+ printk("FAILED\n");
+ break;
+ case 0x11:
+ printk("FAULTED\n");
+ break;
+ default:
+ printk("%x (unknown !!)\n",c->status_block->iop_state);
+ }
+#endif
+
+ return 0;
+}
+
+/*
+ * Get the Hardware Resource Table for the device.
+ * The HRT contains information about possible hidden devices
+ * but is mostly useless to us
+ */
+int i2o_hrt_get(struct i2o_controller *c)
+{
+ u32 msg[6];
+ int ret, size = sizeof(i2o_hrt);
+
+ /* First read just the header to figure out the real size */
+
+ do {
+ if (c->hrt == NULL) {
+ c->hrt=kmalloc(size, GFP_KERNEL);
+ if (c->hrt == NULL) {
+ printk(KERN_CRIT "%s: Hrt Get failed; Out of memory.\n", c->name);
+ return -ENOMEM;
+ }
+ }
+
+ msg[0]= SIX_WORD_MSG_SIZE| SGL_OFFSET_4;
+ msg[1]= I2O_CMD_HRT_GET<<24 | HOST_TID<<12 | ADAPTER_TID;
+ msg[3]= 0;
+ msg[4]= (0xD0000000 | size); /* Simple transaction */
+ msg[5]= virt_to_bus(c->hrt); /* Dump it here */
+
+ ret = i2o_post_wait_mem(c, msg, sizeof(msg), 20, c->hrt, NULL);
+
+ if(ret == -ETIMEDOUT)
+ {
+ /* The HRT block we used is in limbo somewhere. When the iop wakes up
+ we will recover it */
+ c->hrt = NULL;
+ return ret;
+ }
+
+ if(ret<0)
+ {
+ printk(KERN_ERR "%s: Unable to get HRT (status=%#x)\n",
+ c->name, -ret);
+ return ret;
+ }
+
+ if (c->hrt->num_entries * c->hrt->entry_len << 2 > size) {
+ size = c->hrt->num_entries * c->hrt->entry_len << 2;
+ kfree(c->hrt);
+ c->hrt = NULL;
+ }
+ } while (c->hrt == NULL);
+
+ i2o_parse_hrt(c); // just for debugging
+
+ return 0;
+}
+
+/*
+ * Send the I2O System Table to the specified IOP
+ *
+ * The system table contains information about all the IOPs in the
+ * system. It is built and then sent to each IOP so that IOPs can
+ * establish connections between each other.
+ *
+ */
+static int i2o_systab_send(struct i2o_controller *iop)
+{
+ u32 msg[12];
+ int ret;
+ u32 *privbuf = kmalloc(16, GFP_KERNEL);
+ if(privbuf == NULL)
+ return -ENOMEM;
+
+ if(iop->type == I2O_TYPE_PCI)
+ {
+ struct resource *root;
+
+ if(iop->status_block->current_mem_size < iop->status_block->desired_mem_size)
+ {
+ struct resource *res = &iop->mem_resource;
+ res->name = iop->bus.pci.pdev->bus->name;
+ res->flags = IORESOURCE_MEM;
+ res->start = 0;
+ res->end = 0;
+ printk("%s: requires private memory resources.\n", iop->name);
+ root = pci_find_parent_resource(iop->bus.pci.pdev, res);
+ if(root==NULL)
+ printk("Can't find parent resource!\n");
+ if(root && allocate_resource(root, res,
+ iop->status_block->desired_mem_size,
+ iop->status_block->desired_mem_size,
+ iop->status_block->desired_mem_size,
+ 1<<20, /* Unspecified, so use 1Mb and play safe */
+ NULL,
+ NULL)>=0)
+ {
+ iop->mem_alloc = 1;
+ iop->status_block->current_mem_size = 1 + res->end - res->start;
+ iop->status_block->current_mem_base = res->start;
+ printk(KERN_INFO "%s: allocated %ld bytes of PCI memory at 0x%08lX.\n",
+ iop->name, 1+res->end-res->start, res->start);
+ }
+ }
+ if(iop->status_block->current_io_size < iop->status_block->desired_io_size)
+ {
+ struct resource *res = &iop->io_resource;
+ res->name = iop->bus.pci.pdev->bus->name;
+ res->flags = IORESOURCE_IO;
+ res->start = 0;
+ res->end = 0;
+ printk("%s: requires private I/O resources.\n", iop->name);
+ root = pci_find_parent_resource(iop->bus.pci.pdev, res);
+ if(root==NULL)
+ printk("Can't find parent resource!\n");
+ if(root && allocate_resource(root, res,
+ iop->status_block->desired_io_size,
+ iop->status_block->desired_io_size,
+ iop->status_block->desired_io_size,
+ 1<<20, /* Unspecified, so use 1Mb and play safe */
+ NULL,
+ NULL)>=0)
+ {
+ iop->io_alloc = 1;
+ iop->status_block->current_io_size = 1 + res->end - res->start;
+ iop->status_block->current_io_base = res->start;
+ printk(KERN_INFO "%s: allocated %ld bytes of PCI I/O at 0x%08lX.\n",
+ iop->name, 1+res->end-res->start, res->start);
+ }
+ }
+ }
+ else
+ {
+ privbuf[0] = iop->status_block->current_mem_base;
+ privbuf[1] = iop->status_block->current_mem_size;
+ privbuf[2] = iop->status_block->current_io_base;
+ privbuf[3] = iop->status_block->current_io_size;
+ }
+
+ msg[0] = I2O_MESSAGE_SIZE(12) | SGL_OFFSET_6;
+ msg[1] = I2O_CMD_SYS_TAB_SET<<24 | HOST_TID<<12 | ADAPTER_TID;
+ msg[3] = 0;
+ msg[4] = (0<<16) | ((iop->unit+2) << 12); /* Host 0 IOP ID (unit + 2) */
+ msg[5] = 0; /* Segment 0 */
+
+ /*
+ * Provide three SGL-elements:
+ * System table (SysTab), Private memory space declaration and
+ * Private i/o space declaration
+ *
+ * FIXME: provide these for controllers needing them
+ */
+ msg[6] = 0x54000000 | sys_tbl_len;
+ msg[7] = virt_to_bus(sys_tbl);
+ msg[8] = 0x54000000 | 8;
+ msg[9] = virt_to_bus(privbuf);
+ msg[10] = 0xD4000000 | 8;
+ msg[11] = virt_to_bus(privbuf+2);
+
+ ret=i2o_post_wait_mem(iop, msg, sizeof(msg), 120, privbuf, NULL);
+
+ if(ret==-ETIMEDOUT)
+ {
+ printk(KERN_ERR "%s: SysTab setup timed out.\n", iop->name);
+ }
+ else if(ret<0)
+ {
+ printk(KERN_ERR "%s: Unable to set SysTab (status=%#x).\n",
+ iop->name, -ret);
+ kfree(privbuf);
+ }
+ else
+ {
+ dprintk(KERN_INFO "%s: SysTab set.\n", iop->name);
+ kfree(privbuf);
+ }
+ i2o_status_get(iop); // Entered READY state
+
+ return ret;
+
+}
+
+/*
+ * Initialize I2O subsystem.
+ */
+static void __init i2o_sys_init(void)
+{
+ struct i2o_controller *iop, *niop = NULL;
+
+ printk(KERN_INFO "Activating I2O controllers...\n");
+ printk(KERN_INFO "This may take a few minutes if there are many devices\n");
+
+ /* In INIT state, Activate IOPs */
+ for (iop = i2o_controller_chain; iop; iop = niop) {
+ dprintk(KERN_INFO "Calling i2o_activate_controller for %s...\n",
+ iop->name);
+ niop = iop->next;
+ if (i2o_activate_controller(iop) < 0)
+ i2o_delete_controller(iop);
+ }
+
+ /* Active IOPs in HOLD state */
+
+rebuild_sys_tab:
+ if (i2o_controller_chain == NULL)
+ return;
+
+ /*
+ * If build_sys_table fails, we kill everything and bail
+ * as we can't init the IOPs w/o a system table
+ */
+ dprintk(KERN_INFO "i2o_core: Calling i2o_build_sys_table...\n");
+ if (i2o_build_sys_table() < 0) {
+ i2o_sys_shutdown();
+ return;
+ }
+
+ /* If an IOP doesn't come online, we need to rebuild the system table */
+ for (iop = i2o_controller_chain; iop; iop = niop) {
+ niop = iop->next;
+ dprintk(KERN_INFO "Calling i2o_online_controller for %s...\n", iop->name);
+ if (i2o_online_controller(iop) < 0) {
+ i2o_delete_controller(iop);
+ goto rebuild_sys_tab;
+ }
+ }
+
+ /* Active IOPs now in OPERATIONAL state */
+
+ /*
+ * Register for status updates from all IOPs
+ */
+ for(iop = i2o_controller_chain; iop; iop=iop->next) {
+
+ /* Create a kernel thread to deal with dynamic LCT updates */
+ iop->lct_pid = kernel_thread(i2o_dyn_lct, iop, CLONE_SIGHAND);
+
+ /* Update change ind on DLCT */
+ iop->dlct->change_ind = iop->lct->change_ind;
+
+ /* Start dynamic LCT updates */
+ i2o_lct_notify(iop);
+
+ /* Register for all events from IRTOS */
+ i2o_event_register(iop, core_context, 0, 0, 0xFFFFFFFF);
+ }
+}
+
+/**
+ * i2o_sys_shutdown - shutdown I2O system
+ *
+ * Bring down each i2o controller and then return. Each controller
+ * is taken through an orderly shutdown
+ */
+
+static void i2o_sys_shutdown(void)
+{
+ struct i2o_controller *iop, *niop;
+
+ /* Delete all IOPs from the controller chain */
+ /* that will reset all IOPs too */
+
+ for (iop = i2o_controller_chain; iop; iop = niop) {
+ niop = iop->next;
+ i2o_delete_controller(iop);
+ }
+}
+
+/**
+ * i2o_activate_controller - bring controller up to HOLD
+ * @iop: controller
+ *
+ * This function brings an I2O controller into HOLD state. The adapter
+ * is reset if necessary and then the queues and resource table
+ * are read. -1 is returned on a failure, 0 on success.
+ *
+ */
+
+int i2o_activate_controller(struct i2o_controller *iop)
+{
+ /* In INIT state, Wait Inbound Q to initialize (in i2o_status_get) */
+ /* In READY state, Get status */
+
+ if (i2o_status_get(iop) < 0) {
+ printk(KERN_INFO "Unable to obtain status of %s, "
+ "attempting a reset.\n", iop->name);
+ if (i2o_reset_controller(iop) < 0)
+ return -1;
+ }
+
+ if(iop->status_block->iop_state == ADAPTER_STATE_FAULTED) {
+ printk(KERN_CRIT "%s: hardware fault\n", iop->name);
+ return -1;
+ }
+
+ if (iop->status_block->i2o_version > I2OVER15) {
+ printk(KERN_ERR "%s: Not running version 1.5 of the I2O Specification.\n",
+ iop->name);
+ return -1;
+ }
+
+ if (iop->status_block->iop_state == ADAPTER_STATE_READY ||
+ iop->status_block->iop_state == ADAPTER_STATE_OPERATIONAL ||
+ iop->status_block->iop_state == ADAPTER_STATE_HOLD ||
+ iop->status_block->iop_state == ADAPTER_STATE_FAILED)
+ {
+ dprintk(KERN_INFO "%s: Already running, trying to reset...\n",
+ iop->name);
+ if (i2o_reset_controller(iop) < 0)
+ return -1;
+ }
+
+ if (i2o_init_outbound_q(iop) < 0)
+ return -1;
+
+ if (i2o_post_outbound_messages(iop))
+ return -1;
+
+ /* In HOLD state */
+
+ if (i2o_hrt_get(iop) < 0)
+ return -1;
+
+ return 0;
+}
+
+
+/**
+ * i2o_init_outbound_q - setup the outbound queue
+ * @c: controller
+ *
+ * Clear and (re)initialize IOP's outbound queue. Returns 0 on
+ * success or a negative errno code on a failure.
+ */
+
+int i2o_init_outbound_q(struct i2o_controller *c)
+{
+ u8 *status;
+ u32 m;
+ u32 *msg;
+ u32 time;
+
+ dprintk(KERN_INFO "%s: Initializing Outbound Queue...\n", c->name);
+ m=i2o_wait_message(c, "OutboundInit");
+ if(m==0xFFFFFFFF)
+ return -ETIMEDOUT;
+ msg=(u32 *)(c->mem_offset+m);
+
+ status = kmalloc(4,GFP_KERNEL);
+ if (status==NULL) {
+ printk(KERN_ERR "%s: Outbound Queue initialization failed - no free memory.\n",
+ c->name);
+ return -ENOMEM;
+ }
+ memset(status, 0, 4);
+
+ msg[0]= EIGHT_WORD_MSG_SIZE| TRL_OFFSET_6;
+ msg[1]= I2O_CMD_OUTBOUND_INIT<<24 | HOST_TID<<12 | ADAPTER_TID;
+ msg[2]= core_context;
+ msg[3]= 0x0106; /* Transaction context */
+ msg[4]= 4096; /* Host page frame size */
+ /* Frame size is in words. Pick 128, it's what everyone else uses and
+ other sizes break some adapters. */
+ msg[5]= MSG_FRAME_SIZE<<16|0x80; /* Outbound msg frame size and Initcode */
+ msg[6]= 0xD0000004; /* Simple SG LE, EOB */
+ msg[7]= virt_to_bus(status);
+
+ i2o_post_message(c,m);
+
+ barrier();
+ time=jiffies;
+ while(status[0] < I2O_CMD_REJECTED)
+ {
+ if((jiffies-time)>=30*HZ)
+ {
+ if(status[0]==0x00)
+ printk(KERN_ERR "%s: Ignored queue initialize request.\n",
+ c->name);
+ else
+ printk(KERN_ERR "%s: Outbound queue initialize timeout.\n",
+ c->name);
+ kfree(status);
+ return -ETIMEDOUT;
+ }
+ schedule();
+ barrier();
+ }
+
+ if(status[0] != I2O_CMD_COMPLETED)
+ {
+ printk(KERN_ERR "%s: IOP outbound initialise failed.\n", c->name);
+ kfree(status);
+ return -ETIMEDOUT;
+ }
+
+ return 0;
+}
+
+/**
+ * i2o_post_outbound_messages - fill message queue
+ * @c: controller
+ *
+ * Allocate a message frame and load the messages into the IOP. The
+ * function returns zero on success or a negative errno code on
+ * failure.
+ */
+
+int i2o_post_outbound_messages(struct i2o_controller *c)
+{
+ int i;
+ u32 m;
+ /* Alloc space for IOP's outbound queue message frames */
+
+ c->page_frame = kmalloc(MSG_POOL_SIZE, GFP_KERNEL);
+ if(c->page_frame==NULL) {
+ printk(KERN_CRIT "%s: Outbound Q initialize failed; out of memory.\n",
+ c->name);
+ return -ENOMEM;
+ }
+ m=virt_to_bus(c->page_frame);
+
+ /* Post frames */
+
+ for(i=0; i< NMBR_MSG_FRAMES; i++) {
+ I2O_REPLY_WRITE32(c,m);
+ mb();
+ m += MSG_FRAME_SIZE;
+ }
+
+ return 0;
+}
+
+/*
+ * Get the IOP's Logical Configuration Table
+ */
+int i2o_lct_get(struct i2o_controller *c)
+{
+ u32 msg[8];
+ int ret, size = c->status_block->expected_lct_size;
+
+ do {
+ if (c->lct == NULL) {
+ c->lct = kmalloc(size, GFP_KERNEL);
+ if(c->lct == NULL) {
+ printk(KERN_CRIT "%s: Lct Get failed. Out of memory.\n",
+ c->name);
+ return -ENOMEM;
+ }
+ }
+ memset(c->lct, 0, size);
+
+ msg[0] = EIGHT_WORD_MSG_SIZE|SGL_OFFSET_6;
+ msg[1] = I2O_CMD_LCT_NOTIFY<<24 | HOST_TID<<12 | ADAPTER_TID;
+ /* msg[2] filled in i2o_post_wait */
+ msg[3] = 0;
+ msg[4] = 0xFFFFFFFF; /* All devices */
+ msg[5] = 0x00000000; /* Report now */
+ msg[6] = 0xD0000000|size;
+ msg[7] = virt_to_bus(c->lct);
+
+ ret=i2o_post_wait_mem(c, msg, sizeof(msg), 120, c->lct, NULL);
+
+ if(ret == -ETIMEDOUT)
+ {
+ c->lct = NULL;
+ return ret;
+ }
+
+ if(ret<0)
+ {
+ printk(KERN_ERR "%s: LCT Get failed (status=%#x).\n",
+ c->name, -ret);
+ return ret;
+ }
+
+ if (c->lct->table_size << 2 > size) {
+ size = c->lct->table_size << 2;
+ kfree(c->lct);
+ c->lct = NULL;
+ }
+ } while (c->lct == NULL);
+
+ if ((ret=i2o_parse_lct(c)) < 0)
+ return ret;
+
+ return 0;
+}
+
+/*
+ * Like the above, but used for async notification. The main
+ * difference is that we keep track of the CurrentChangeIndicator
+ * so that we only get updates when it actually changes.
+ *
+ */
+int i2o_lct_notify(struct i2o_controller *c)
+{
+ u32 msg[8];
+
+ msg[0] = EIGHT_WORD_MSG_SIZE|SGL_OFFSET_6;
+ msg[1] = I2O_CMD_LCT_NOTIFY<<24 | HOST_TID<<12 | ADAPTER_TID;
+ msg[2] = core_context;
+ msg[3] = 0xDEADBEEF;
+ msg[4] = 0xFFFFFFFF; /* All devices */
+ msg[5] = c->dlct->change_ind+1; /* Next change */
+ msg[6] = 0xD0000000|8192;
+ msg[7] = virt_to_bus(c->dlct);
+
+ return i2o_post_this(c, msg, sizeof(msg));
+}
+
+/*
+ * Bring a controller online into OPERATIONAL state.
+ */
+
+int i2o_online_controller(struct i2o_controller *iop)
+{
+ u32 v;
+
+ if (i2o_systab_send(iop) < 0)
+ return -1;
+
+ /* In READY state */
+
+ dprintk(KERN_INFO "%s: Attempting to enable...\n", iop->name);
+ if (i2o_enable_controller(iop) < 0)
+ return -1;
+
+ /* In OPERATIONAL state */
+
+ dprintk(KERN_INFO "%s: Attempting to get/parse lct...\n", iop->name);
+ if (i2o_lct_get(iop) < 0)
+ return -1;
+
+ /* Check battery status */
+
+ iop->battery = 0;
+ if(i2o_query_scalar(iop, ADAPTER_TID, 0x0000, 4, &v, 4)>=0)
+ {
+ if(v&16)
+ iop->battery = 1;
+ }
+
+ return 0;
+}
+
+/*
+ * Build system table
+ *
+ * The system table contains information about all the IOPs in the
+ * system (duh) and is used by the Executives on the IOPs to establish
+ * peer2peer connections. We're not supporting peer2peer at the moment,
+ * but this will be needed down the road for things like lan2lan forwarding.
+ */
+static int i2o_build_sys_table(void)
+{
+ struct i2o_controller *iop = NULL;
+ struct i2o_controller *niop = NULL;
+ int count = 0;
+
+ sys_tbl_len = sizeof(struct i2o_sys_tbl) + // Header + IOPs
+ (i2o_num_controllers) *
+ sizeof(struct i2o_sys_tbl_entry);
+
+ if(sys_tbl)
+ kfree(sys_tbl);
+
+ sys_tbl = kmalloc(sys_tbl_len, GFP_KERNEL);
+ if(!sys_tbl) {
+ printk(KERN_CRIT "SysTab Set failed. Out of memory.\n");
+ return -ENOMEM;
+ }
+ memset((void*)sys_tbl, 0, sys_tbl_len);
+
+ sys_tbl->num_entries = i2o_num_controllers;
+ sys_tbl->version = I2OVERSION; /* TODO: Version 2.0 */
+ sys_tbl->change_ind = sys_tbl_ind++;
+
+ for(iop = i2o_controller_chain; iop; iop = niop)
+ {
+ niop = iop->next;
+
+ /*
+ * Get updated IOP state so we have the latest information
+ *
+ * We should delete the controller at this point if it
+ * doesn't respond since if it's not on the system table
+ * it is technically not part of the I2O subsystem...
+ */
+ if(i2o_status_get(iop)) {
+ printk(KERN_ERR "%s: Deleting b/c could not get status while "
+ "attempting to build system table\n", iop->name);
+ i2o_delete_controller(iop);
+ sys_tbl->num_entries--;
+ continue; // try the next one
+ }
+
+ sys_tbl->iops[count].org_id = iop->status_block->org_id;
+ sys_tbl->iops[count].iop_id = iop->unit + 2;
+ sys_tbl->iops[count].seg_num = 0;
+ sys_tbl->iops[count].i2o_version =
+ iop->status_block->i2o_version;
+ sys_tbl->iops[count].iop_state =
+ iop->status_block->iop_state;
+ sys_tbl->iops[count].msg_type =
+ iop->status_block->msg_type;
+ sys_tbl->iops[count].frame_size =
+ iop->status_block->inbound_frame_size;
+ sys_tbl->iops[count].last_changed = sys_tbl_ind - 1; // ??
+ sys_tbl->iops[count].iop_capabilities =
+ iop->status_block->iop_capabilities;
+ sys_tbl->iops[count].inbound_low =
+ (u32)virt_to_bus(iop->post_port);
+ sys_tbl->iops[count].inbound_high = 0; // TODO: 64-bit support
+
+ count++;
+ }
+
+#ifdef DRIVERDEBUG
+{
+ u32 *table;
+ table = (u32*)sys_tbl;
+ for(count = 0; count < (sys_tbl_len >>2); count++)
+ printk(KERN_INFO "sys_tbl[%d] = %0#10x\n", count, table[count]);
+}
+#endif
+
+ return 0;
+}
+
+
+/*
+ * Run time support routines
+ */
+
+/*
+ * Generic "post and forget" helpers. This is less efficient - we do
+ * a memcpy, for example, that isn't strictly needed, but for most uses
+ * this is simply not worth optimising
+ */
+
+int i2o_post_this(struct i2o_controller *c, u32 *data, int len)
+{
+ u32 m;
+ u32 *msg;
+ unsigned long t=jiffies;
+
+ do
+ {
+ mb();
+ m = I2O_POST_READ32(c);
+ }
+ while(m==0xFFFFFFFF && (jiffies-t)<HZ);
+
+ if(m==0xFFFFFFFF)
+ {
+ printk(KERN_ERR "%s: Timeout waiting for message frame!\n",
+ c->name);
+ return -ETIMEDOUT;
+ }
+ msg = (u32 *)(c->mem_offset + m);
+ memcpy_toio(msg, data, len);
+ i2o_post_message(c,m);
+ return 0;
+}
+
+/**
+ * i2o_post_wait_mem - I2O query/reply with DMA buffers
+ * @c: controller
+ * @msg: message to send
+ * @len: length of message
+ * @timeout: time in seconds to wait
+ * @mem1: attached memory buffer 1
+ * @mem2: attached memory buffer 2
+ *
+ * This core API allows an OSM to post a message and then be told whether
+ * or not the system received a successful reply.
+ *
+ * If the message times out then the value '-ETIMEDOUT' is returned. This
+ * is a special case. In this situation the message may (should) complete
+ * at an indefinite time in the future. When it completes it will use the
+ * memory buffers attached to the request. If -ETIMEDOUT is returned then
+ * the memory buffers must not be freed. Instead the event completion will
+ * free them for you. In all other cases the buffers are your problem.
+ *
+ * Pass NULL for unneeded buffers.
+ */
+
+int i2o_post_wait_mem(struct i2o_controller *c, u32 *msg, int len, int timeout, void *mem1, void *mem2)
+{
+ DECLARE_WAIT_QUEUE_HEAD(wq_i2o_post);
+ int complete = 0;
+ int status;
+ unsigned long flags = 0;
+ struct i2o_post_wait_data *wait_data =
+ kmalloc(sizeof(struct i2o_post_wait_data), GFP_KERNEL);
+
+ if(!wait_data)
+ return -ENOMEM;
+
+ /*
+ * Create a new notification object
+ */
+ wait_data->status = &status;
+ wait_data->complete = &complete;
+ wait_data->mem[0] = mem1;
+ wait_data->mem[1] = mem2;
+ /*
+ * Queue the event with its unique id
+ */
+ spin_lock_irqsave(&post_wait_lock, flags);
+
+ wait_data->next = post_wait_queue;
+ post_wait_queue = wait_data;
+ wait_data->id = (++post_wait_id) & 0x7fff;
+ wait_data->wq = &wq_i2o_post;
+
+ spin_unlock_irqrestore(&post_wait_lock, flags);
+
+ /*
+ * Fill in the message id
+ */
+
+ msg[2] = 0x80000000|(u32)core_context|((u32)wait_data->id<<16);
+
+ /*
+ * Post the message to the controller. At some point later it
+ * will return. If we time out before it returns then
+ * complete will be zero. From the point post_this returns
+ * the wait_data may have been deleted.
+ */
+ if ((status = i2o_post_this(c, msg, len))==0) {
+ sleep_on_timeout(&wq_i2o_post, HZ * timeout);
+ }
+ else
+ return -EIO;
+
+ if(signal_pending(current))
+ status = -EINTR;
+
+ spin_lock_irqsave(&post_wait_lock, flags);
+ barrier(); /* Be sure we see complete as it is locked */
+ if(!complete)
+ {
+ /*
+ * Mark the entry dead. We cannot remove it. This is important.
+		 * When it does terminate (which it must do if the controller hasn't
+ * died..) then it will otherwise scribble on stuff.
+ * !complete lets us safely check if the entry is still
+ * allocated and thus we can write into it
+ */
+ wait_data->wq = NULL;
+ status = -ETIMEDOUT;
+ }
+ else
+ {
+ /* Debugging check - remove me soon */
+ if(status == -ETIMEDOUT)
+ {
+ printk("TIMEDOUT BUG!\n");
+ status = -EIO;
+ }
+ }
+ /* And the wait_data is not leaked either! */
+ spin_unlock_irqrestore(&post_wait_lock, flags);
+ return status;
+}
+
+/**
+ * i2o_post_wait - I2O query/reply
+ * @c: controller
+ * @msg: message to send
+ * @len: length of message
+ * @timeout: time in seconds to wait
+ *
+ * This core API allows an OSM to post a message and then be told whether
+ * or not the system received a successful reply.
+ */
+
+int i2o_post_wait(struct i2o_controller *c, u32 *msg, int len, int timeout)
+{
+ return i2o_post_wait_mem(c, msg, len, timeout, NULL, NULL);
+}
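+
+/*
+ * Example (sketch, not part of the driver): a minimal OSM query/reply
+ * round trip using i2o_post_wait(). The target tid here is illustrative.
+ *
+ *	u32 msg[3];
+ *
+ *	msg[0] = THREE_WORD_MSG_SIZE | SGL_OFFSET_0;
+ *	msg[1] = I2O_CMD_UTIL_NOP << 24 | HOST_TID << 12 | tid;
+ *	// msg[2] is filled in by i2o_post_wait_mem() with the wait id
+ *
+ *	if (i2o_post_wait(c, msg, sizeof(msg), 10))
+ *		printk(KERN_ERR "i2o: NOP to tid %d failed\n", tid);
+ */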
+
+/*
+ * i2o_post_wait is completed and we want to wake up the
+ * sleeping process. Called by core's reply handler.
+ */
+
+static void i2o_post_wait_complete(u32 context, int status)
+{
+ struct i2o_post_wait_data **p1, *q;
+ unsigned long flags;
+
+ /*
+ * We need to search through the post_wait
+ * queue to see if the given message is still
+ * outstanding. If not, it means that the IOP
+ * took longer to respond to the message than we
+ * had allowed and timer has already expired.
+ * Not much we can do about that except log
+ * it for debug purposes, increase timeout, and recompile
+ *
+ * Lock needed to keep anyone from moving queue pointers
+ * around while we're looking through them.
+ */
+
+ spin_lock_irqsave(&post_wait_lock, flags);
+
+ for(p1 = &post_wait_queue; *p1!=NULL; p1 = &((*p1)->next))
+ {
+ q = (*p1);
+ if(q->id == ((context >> 16) & 0x7fff)) {
+ /*
+ * Delete it
+ */
+
+ *p1 = q->next;
+
+ /*
+ * Live or dead ?
+ */
+
+ if(q->wq)
+ {
+ /* Live entry - wakeup and set status */
+ *q->status = status;
+ *q->complete = 1;
+ wake_up(q->wq);
+ }
+ else
+ {
+ /*
+ * Free resources. Caller is dead
+ */
+ if(q->mem[0])
+ kfree(q->mem[0]);
+ if(q->mem[1])
+ kfree(q->mem[1]);
+ printk(KERN_WARNING "i2o_post_wait event completed after timeout.\n");
+ }
+ kfree(q);
+			spin_unlock_irqrestore(&post_wait_lock, flags);
+ return;
+ }
+ }
+	spin_unlock_irqrestore(&post_wait_lock, flags);
+
+ printk(KERN_DEBUG "i2o_post_wait: Bogus reply!\n");
+}
+
+/* Issue UTIL_PARAMS_GET or UTIL_PARAMS_SET
+ *
+ * This function can be used for all UtilParamsGet/Set operations.
+ * The OperationList is given in oplist-buffer,
+ * and results are returned in reslist-buffer.
+ * Note that the minimum sized reslist is 8 bytes and contains
+ * ResultCount, ErrorInfoSize, BlockStatus and BlockSize.
+ */
+int i2o_issue_params(int cmd, struct i2o_controller *iop, int tid,
+ void *oplist, int oplen, void *reslist, int reslen)
+{
+ u32 msg[9];
+ u32 *res32 = (u32*)reslist;
+ u32 *restmp = (u32*)reslist;
+ int len = 0;
+ int i = 0;
+ int wait_status;
+ u32 *opmem, *resmem;
+
+ /* Get DMAable memory */
+ opmem = kmalloc(oplen, GFP_KERNEL);
+ if(opmem == NULL)
+ return -ENOMEM;
+ memcpy(opmem, oplist, oplen);
+
+ resmem = kmalloc(reslen, GFP_KERNEL);
+ if(resmem == NULL)
+ {
+ kfree(opmem);
+ return -ENOMEM;
+ }
+
+ msg[0] = NINE_WORD_MSG_SIZE | SGL_OFFSET_5;
+ msg[1] = cmd << 24 | HOST_TID << 12 | tid;
+ msg[3] = 0;
+ msg[4] = 0;
+ msg[5] = 0x54000000 | oplen; /* OperationList */
+ msg[6] = virt_to_bus(opmem);
+ msg[7] = 0xD0000000 | reslen; /* ResultList */
+ msg[8] = virt_to_bus(resmem);
+
+ wait_status = i2o_post_wait_mem(iop, msg, sizeof(msg), 10, opmem, resmem);
+
+ /*
+ * This only looks like a memory leak - don't "fix" it.
+ */
+ if(wait_status == -ETIMEDOUT)
+ return wait_status;
+
+ /* Query failed */
+ if(wait_status != 0)
+ {
+ kfree(resmem);
+ kfree(opmem);
+ return wait_status;
+ }
+
+	memcpy(reslist, resmem, reslen);
+	kfree(resmem);
+	kfree(opmem);
+ /*
+ * Calculate number of bytes of Result LIST
+ * We need to loop through each Result BLOCK and grab the length
+ */
+ restmp = res32 + 1;
+ len = 1;
+	for(i = 0; i < (res32[0]&0x0000FFFF); i++)
+ {
+ if(restmp[0]&0x00FF0000) /* BlockStatus != SUCCESS */
+ {
+ printk(KERN_WARNING "%s - Error:\n ErrorInfoSize = 0x%02x, "
+ "BlockStatus = 0x%02x, BlockSize = 0x%04x\n",
+ (cmd == I2O_CMD_UTIL_PARAMS_SET) ? "PARAMS_SET"
+ : "PARAMS_GET",
+ res32[1]>>24, (res32[1]>>16)&0xFF, res32[1]&0xFFFF);
+
+ /*
+			 * If this is the only request, then we return an error
+ */
+ if((res32[0]&0x0000FFFF) == 1)
+ {
+ return -((res32[1] >> 16) & 0xFF); /* -BlockStatus */
+ }
+ }
+ len += restmp[0] & 0x0000FFFF; /* Length of res BLOCK */
+ restmp += restmp[0] & 0x0000FFFF; /* Skip to next BLOCK */
+ }
+ return (len << 2); /* bytes used by result list */
+}
+
+/*
+ * Query one scalar group value or a whole scalar group.
+ */
+int i2o_query_scalar(struct i2o_controller *iop, int tid,
+ int group, int field, void *buf, int buflen)
+{
+ u16 opblk[] = { 1, 0, I2O_PARAMS_FIELD_GET, group, 1, field };
+ u8 resblk[8+buflen]; /* 8 bytes for header */
+ int size;
+
+ if (field == -1) /* whole group */
+ opblk[4] = -1;
+
+	size = i2o_issue_params(I2O_CMD_UTIL_PARAMS_GET, iop, tid,
+		opblk, sizeof(opblk), resblk, sizeof(resblk));
+
+	if (size < 0)
+		return size;
+
+	memcpy(buf, resblk+8, buflen);	/* cut off header */
+
+	if(size>buflen)
+		return buflen;
+	return size;
+}
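+
+/*
+ * Example (sketch): reading a single scalar field. The group and field
+ * numbers below are illustrative; real values come from the target
+ * device's parameter group definitions.
+ *
+ *	u32 value;
+ *
+ *	if (i2o_query_scalar(iop, tid, 0x0000, 4,
+ *			&value, sizeof(value)) < 0)
+ *		printk(KERN_ERR "i2o: scalar query failed\n");
+ */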
+
+/*
+ * Set a scalar group value or a whole group.
+ */
+int i2o_set_scalar(struct i2o_controller *iop, int tid,
+ int group, int field, void *buf, int buflen)
+{
+ u16 *opblk;
+ u8 resblk[8+buflen]; /* 8 bytes for header */
+ int size;
+
+ opblk = kmalloc(buflen+64, GFP_KERNEL);
+ if (opblk == NULL)
+ {
+ printk(KERN_ERR "i2o: no memory for operation buffer.\n");
+ return -ENOMEM;
+ }
+
+ opblk[0] = 1; /* operation count */
+ opblk[1] = 0; /* pad */
+ opblk[2] = I2O_PARAMS_FIELD_SET;
+ opblk[3] = group;
+
+ if(field == -1) { /* whole group */
+ opblk[4] = -1;
+ memcpy(opblk+5, buf, buflen);
+ }
+ else /* single field */
+ {
+ opblk[4] = 1;
+ opblk[5] = field;
+ memcpy(opblk+6, buf, buflen);
+ }
+
+ size = i2o_issue_params(I2O_CMD_UTIL_PARAMS_SET, iop, tid,
+ opblk, 12+buflen, resblk, sizeof(resblk));
+
+ kfree(opblk);
+ if(size>buflen)
+ return buflen;
+ return size;
+}
+
+/*
+ * if oper == I2O_PARAMS_TABLE_GET, get from all rows
+ * if fieldcount == -1 return all fields
+ * ibuf and ibuflen are unused (use NULL, 0)
+ * else return specific fields
+ * ibuf contains fieldindexes
+ *
+ * if oper == I2O_PARAMS_LIST_GET, get from specific rows
+ * if fieldcount == -1 return all fields
+ * ibuf contains rowcount, keyvalues
+ * else return specific fields
+ * fieldcount is # of fieldindexes
+ * ibuf contains fieldindexes, rowcount, keyvalues
+ *
+ * You could also use directly function i2o_issue_params().
+ */
+int i2o_query_table(int oper, struct i2o_controller *iop, int tid, int group,
+ int fieldcount, void *ibuf, int ibuflen,
+ void *resblk, int reslen)
+{
+ u16 *opblk;
+ int size;
+
+ opblk = kmalloc(10 + ibuflen, GFP_KERNEL);
+ if (opblk == NULL)
+ {
+ printk(KERN_ERR "i2o: no memory for query buffer.\n");
+ return -ENOMEM;
+ }
+
+ opblk[0] = 1; /* operation count */
+ opblk[1] = 0; /* pad */
+ opblk[2] = oper;
+ opblk[3] = group;
+ opblk[4] = fieldcount;
+ memcpy(opblk+5, ibuf, ibuflen); /* other params */
+
+ size = i2o_issue_params(I2O_CMD_UTIL_PARAMS_GET,iop, tid,
+ opblk, 10+ibuflen, resblk, reslen);
+
+ kfree(opblk);
+ if(size>reslen)
+ return reslen;
+ return size;
+}
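+
+/*
+ * Example (sketch): fetching all fields of all rows of a table group.
+ * The group number and result buffer size are illustrative.
+ *
+ *	u8 resblk[1024];
+ *
+ *	int len = i2o_query_table(I2O_PARAMS_TABLE_GET, iop, tid,
+ *			0x0100, -1, NULL, 0, resblk, sizeof(resblk));
+ */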
+
+/*
+ * Clear table group, i.e. delete all rows.
+ */
+int i2o_clear_table(struct i2o_controller *iop, int tid, int group)
+{
+ u16 opblk[] = { 1, 0, I2O_PARAMS_TABLE_CLEAR, group };
+ u8 resblk[32]; /* min 8 bytes for result header */
+
+ return i2o_issue_params(I2O_CMD_UTIL_PARAMS_SET, iop, tid,
+ opblk, sizeof(opblk), resblk, sizeof(resblk));
+}
+
+/*
+ * Add a new row into a table group.
+ *
+ * if fieldcount==-1 then we add whole rows
+ * buf contains rowcount, keyvalues
+ * else just specific fields are given, rest use defaults
+ * buf contains fieldindexes, rowcount, keyvalues
+ */
+int i2o_row_add_table(struct i2o_controller *iop, int tid,
+ int group, int fieldcount, void *buf, int buflen)
+{
+ u16 *opblk;
+ u8 resblk[32]; /* min 8 bytes for header */
+ int size;
+
+ opblk = kmalloc(buflen+64, GFP_KERNEL);
+ if (opblk == NULL)
+ {
+ printk(KERN_ERR "i2o: no memory for operation buffer.\n");
+ return -ENOMEM;
+ }
+
+ opblk[0] = 1; /* operation count */
+ opblk[1] = 0; /* pad */
+ opblk[2] = I2O_PARAMS_ROW_ADD;
+ opblk[3] = group;
+ opblk[4] = fieldcount;
+ memcpy(opblk+5, buf, buflen);
+
+ size = i2o_issue_params(I2O_CMD_UTIL_PARAMS_SET, iop, tid,
+ opblk, 10+buflen, resblk, sizeof(resblk));
+
+ kfree(opblk);
+ if(size>buflen)
+ return buflen;
+ return size;
+}
+
+
+/*
+ * Used for error reporting/debugging purposes.
+ * Following fail status are common to all classes.
+ * The preserved message must be handled in the reply handler.
+ */
+void i2o_report_fail_status(u8 req_status, u32* msg)
+{
+ static char *FAIL_STATUS[] = {
+ "0x80", /* not used */
+ "SERVICE_SUSPENDED", /* 0x81 */
+ "SERVICE_TERMINATED", /* 0x82 */
+ "CONGESTION",
+ "FAILURE",
+ "STATE_ERROR",
+ "TIME_OUT",
+ "ROUTING_FAILURE",
+ "INVALID_VERSION",
+ "INVALID_OFFSET",
+ "INVALID_MSG_FLAGS",
+ "FRAME_TOO_SMALL",
+ "FRAME_TOO_LARGE",
+ "INVALID_TARGET_ID",
+ "INVALID_INITIATOR_ID",
+		"INVALID_INITIATOR_CONTEXT",	/* 0x8F */
+ "UNKNOWN_FAILURE" /* 0xFF */
+ };
+
+ if (req_status == I2O_FSC_TRANSPORT_UNKNOWN_FAILURE)
+		printk("TRANSPORT_UNKNOWN_FAILURE (%0#2x).\n", req_status);
+ else
+ printk("TRANSPORT_%s.\n", FAIL_STATUS[req_status & 0x0F]);
+
+ /* Dump some details */
+
+ printk(KERN_ERR " InitiatorId = %d, TargetId = %d\n",
+ (msg[1] >> 12) & 0xFFF, msg[1] & 0xFFF);
+ printk(KERN_ERR " LowestVersion = 0x%02X, HighestVersion = 0x%02X\n",
+ (msg[4] >> 8) & 0xFF, msg[4] & 0xFF);
+ printk(KERN_ERR " FailingHostUnit = 0x%04X, FailingIOP = 0x%03X\n",
+ msg[5] >> 16, msg[5] & 0xFFF);
+
+ printk(KERN_ERR " Severity: 0x%02X ", (msg[4] >> 16) & 0xFF);
+ if (msg[4] & (1<<16))
+ printk("(FormatError), "
+ "this msg can never be delivered/processed.\n");
+ if (msg[4] & (1<<17))
+ printk("(PathError), "
+ "this msg can no longer be delivered/processed.\n");
+ if (msg[4] & (1<<18))
+ printk("(PathState), "
+ "the system state does not allow delivery.\n");
+ if (msg[4] & (1<<19))
+		printk("(Congestion), resources temporarily not available; "
+ "do not retry immediately.\n");
+}
+
+/*
+ * Used for error reporting/debugging purposes.
+ * Following reply status are common to all classes.
+ */
+void i2o_report_common_status(u8 req_status)
+{
+ static char *REPLY_STATUS[] = {
+ "SUCCESS",
+ "ABORT_DIRTY",
+ "ABORT_NO_DATA_TRANSFER",
+ "ABORT_PARTIAL_TRANSFER",
+ "ERROR_DIRTY",
+ "ERROR_NO_DATA_TRANSFER",
+ "ERROR_PARTIAL_TRANSFER",
+ "PROCESS_ABORT_DIRTY",
+ "PROCESS_ABORT_NO_DATA_TRANSFER",
+ "PROCESS_ABORT_PARTIAL_TRANSFER",
+ "TRANSACTION_ERROR",
+ "PROGRESS_REPORT"
+ };
+
+ if (req_status > I2O_REPLY_STATUS_PROGRESS_REPORT)
+ printk("RequestStatus = %0#2x", req_status);
+ else
+ printk("%s", REPLY_STATUS[req_status]);
+}
+
+/*
+ * Used for error reporting/debugging purposes.
+ * Following detailed status are valid for executive class,
+ * utility class, DDM class and for transaction error replies.
+ */
+static void i2o_report_common_dsc(u16 detailed_status)
+{
+ static char *COMMON_DSC[] = {
+ "SUCCESS",
+ "0x01", // not used
+ "BAD_KEY",
+ "TCL_ERROR",
+ "REPLY_BUFFER_FULL",
+ "NO_SUCH_PAGE",
+ "INSUFFICIENT_RESOURCE_SOFT",
+ "INSUFFICIENT_RESOURCE_HARD",
+ "0x08", // not used
+ "CHAIN_BUFFER_TOO_LARGE",
+ "UNSUPPORTED_FUNCTION",
+ "DEVICE_LOCKED",
+ "DEVICE_RESET",
+ "INAPPROPRIATE_FUNCTION",
+ "INVALID_INITIATOR_ADDRESS",
+ "INVALID_MESSAGE_FLAGS",
+ "INVALID_OFFSET",
+ "INVALID_PARAMETER",
+ "INVALID_REQUEST",
+ "INVALID_TARGET_ADDRESS",
+ "MESSAGE_TOO_LARGE",
+ "MESSAGE_TOO_SMALL",
+ "MISSING_PARAMETER",
+ "TIMEOUT",
+ "UNKNOWN_ERROR",
+ "UNKNOWN_FUNCTION",
+ "UNSUPPORTED_VERSION",
+ "DEVICE_BUSY",
+ "DEVICE_NOT_AVAILABLE"
+ };
+
+ if (detailed_status > I2O_DSC_DEVICE_NOT_AVAILABLE)
+ printk(" / DetailedStatus = %0#4x.\n", detailed_status);
+ else
+ printk(" / %s.\n", COMMON_DSC[detailed_status]);
+}
+
+/*
+ * Used for error reporting/debugging purposes
+ */
+static void i2o_report_lan_dsc(u16 detailed_status)
+{
+ static char *LAN_DSC[] = { // Lan detailed status code strings
+ "SUCCESS",
+ "DEVICE_FAILURE",
+ "DESTINATION_NOT_FOUND",
+ "TRANSMIT_ERROR",
+ "TRANSMIT_ABORTED",
+ "RECEIVE_ERROR",
+ "RECEIVE_ABORTED",
+ "DMA_ERROR",
+ "BAD_PACKET_DETECTED",
+ "OUT_OF_MEMORY",
+ "BUCKET_OVERRUN",
+ "IOP_INTERNAL_ERROR",
+ "CANCELED",
+ "INVALID_TRANSACTION_CONTEXT",
+ "DEST_ADDRESS_DETECTED",
+ "DEST_ADDRESS_OMITTED",
+ "PARTIAL_PACKET_RETURNED",
+ "TEMP_SUSPENDED_STATE", // last Lan detailed status code
+ "INVALID_REQUEST" // general detailed status code
+ };
+
+ if (detailed_status > I2O_DSC_INVALID_REQUEST)
+ printk(" / %0#4x.\n", detailed_status);
+ else
+ printk(" / %s.\n", LAN_DSC[detailed_status]);
+}
+
+/*
+ * Used for error reporting/debugging purposes
+ */
+static void i2o_report_util_cmd(u8 cmd)
+{
+ switch (cmd) {
+ case I2O_CMD_UTIL_NOP:
+ printk("UTIL_NOP, ");
+ break;
+ case I2O_CMD_UTIL_ABORT:
+ printk("UTIL_ABORT, ");
+ break;
+ case I2O_CMD_UTIL_CLAIM:
+ printk("UTIL_CLAIM, ");
+ break;
+ case I2O_CMD_UTIL_RELEASE:
+ printk("UTIL_CLAIM_RELEASE, ");
+ break;
+ case I2O_CMD_UTIL_CONFIG_DIALOG:
+ printk("UTIL_CONFIG_DIALOG, ");
+ break;
+ case I2O_CMD_UTIL_DEVICE_RESERVE:
+ printk("UTIL_DEVICE_RESERVE, ");
+ break;
+ case I2O_CMD_UTIL_DEVICE_RELEASE:
+ printk("UTIL_DEVICE_RELEASE, ");
+ break;
+ case I2O_CMD_UTIL_EVT_ACK:
+ printk("UTIL_EVENT_ACKNOWLEDGE, ");
+ break;
+ case I2O_CMD_UTIL_EVT_REGISTER:
+ printk("UTIL_EVENT_REGISTER, ");
+ break;
+ case I2O_CMD_UTIL_LOCK:
+ printk("UTIL_LOCK, ");
+ break;
+ case I2O_CMD_UTIL_LOCK_RELEASE:
+ printk("UTIL_LOCK_RELEASE, ");
+ break;
+ case I2O_CMD_UTIL_PARAMS_GET:
+ printk("UTIL_PARAMS_GET, ");
+ break;
+ case I2O_CMD_UTIL_PARAMS_SET:
+ printk("UTIL_PARAMS_SET, ");
+ break;
+ case I2O_CMD_UTIL_REPLY_FAULT_NOTIFY:
+ printk("UTIL_REPLY_FAULT_NOTIFY, ");
+ break;
+ default:
+ printk("Cmd = %0#2x, ",cmd);
+ }
+}
+
+/*
+ * Used for error reporting/debugging purposes
+ */
+static void i2o_report_exec_cmd(u8 cmd)
+{
+ switch (cmd) {
+ case I2O_CMD_ADAPTER_ASSIGN:
+ printk("EXEC_ADAPTER_ASSIGN, ");
+ break;
+ case I2O_CMD_ADAPTER_READ:
+ printk("EXEC_ADAPTER_READ, ");
+ break;
+ case I2O_CMD_ADAPTER_RELEASE:
+ printk("EXEC_ADAPTER_RELEASE, ");
+ break;
+ case I2O_CMD_BIOS_INFO_SET:
+ printk("EXEC_BIOS_INFO_SET, ");
+ break;
+ case I2O_CMD_BOOT_DEVICE_SET:
+ printk("EXEC_BOOT_DEVICE_SET, ");
+ break;
+ case I2O_CMD_CONFIG_VALIDATE:
+ printk("EXEC_CONFIG_VALIDATE, ");
+ break;
+ case I2O_CMD_CONN_SETUP:
+ printk("EXEC_CONN_SETUP, ");
+ break;
+ case I2O_CMD_DDM_DESTROY:
+ printk("EXEC_DDM_DESTROY, ");
+ break;
+ case I2O_CMD_DDM_ENABLE:
+ printk("EXEC_DDM_ENABLE, ");
+ break;
+ case I2O_CMD_DDM_QUIESCE:
+ printk("EXEC_DDM_QUIESCE, ");
+ break;
+ case I2O_CMD_DDM_RESET:
+ printk("EXEC_DDM_RESET, ");
+ break;
+ case I2O_CMD_DDM_SUSPEND:
+ printk("EXEC_DDM_SUSPEND, ");
+ break;
+ case I2O_CMD_DEVICE_ASSIGN:
+ printk("EXEC_DEVICE_ASSIGN, ");
+ break;
+ case I2O_CMD_DEVICE_RELEASE:
+ printk("EXEC_DEVICE_RELEASE, ");
+ break;
+ case I2O_CMD_HRT_GET:
+ printk("EXEC_HRT_GET, ");
+ break;
+ case I2O_CMD_ADAPTER_CLEAR:
+ printk("EXEC_IOP_CLEAR, ");
+ break;
+ case I2O_CMD_ADAPTER_CONNECT:
+ printk("EXEC_IOP_CONNECT, ");
+ break;
+ case I2O_CMD_ADAPTER_RESET:
+ printk("EXEC_IOP_RESET, ");
+ break;
+ case I2O_CMD_LCT_NOTIFY:
+ printk("EXEC_LCT_NOTIFY, ");
+ break;
+ case I2O_CMD_OUTBOUND_INIT:
+ printk("EXEC_OUTBOUND_INIT, ");
+ break;
+ case I2O_CMD_PATH_ENABLE:
+ printk("EXEC_PATH_ENABLE, ");
+ break;
+ case I2O_CMD_PATH_QUIESCE:
+ printk("EXEC_PATH_QUIESCE, ");
+ break;
+ case I2O_CMD_PATH_RESET:
+ printk("EXEC_PATH_RESET, ");
+ break;
+ case I2O_CMD_STATIC_MF_CREATE:
+ printk("EXEC_STATIC_MF_CREATE, ");
+ break;
+ case I2O_CMD_STATIC_MF_RELEASE:
+ printk("EXEC_STATIC_MF_RELEASE, ");
+ break;
+ case I2O_CMD_STATUS_GET:
+ printk("EXEC_STATUS_GET, ");
+ break;
+ case I2O_CMD_SW_DOWNLOAD:
+ printk("EXEC_SW_DOWNLOAD, ");
+ break;
+ case I2O_CMD_SW_UPLOAD:
+ printk("EXEC_SW_UPLOAD, ");
+ break;
+ case I2O_CMD_SW_REMOVE:
+ printk("EXEC_SW_REMOVE, ");
+ break;
+ case I2O_CMD_SYS_ENABLE:
+ printk("EXEC_SYS_ENABLE, ");
+ break;
+ case I2O_CMD_SYS_MODIFY:
+ printk("EXEC_SYS_MODIFY, ");
+ break;
+ case I2O_CMD_SYS_QUIESCE:
+ printk("EXEC_SYS_QUIESCE, ");
+ break;
+ case I2O_CMD_SYS_TAB_SET:
+ printk("EXEC_SYS_TAB_SET, ");
+ break;
+ default:
+		printk("Cmd = %0#2x, ",cmd);
+ }
+}
+
+/*
+ * Used for error reporting/debugging purposes
+ */
+static void i2o_report_lan_cmd(u8 cmd)
+{
+ switch (cmd) {
+ case LAN_PACKET_SEND:
+ printk("LAN_PACKET_SEND, ");
+ break;
+ case LAN_SDU_SEND:
+ printk("LAN_SDU_SEND, ");
+ break;
+ case LAN_RECEIVE_POST:
+ printk("LAN_RECEIVE_POST, ");
+ break;
+ case LAN_RESET:
+ printk("LAN_RESET, ");
+ break;
+ case LAN_SUSPEND:
+ printk("LAN_SUSPEND, ");
+ break;
+ default:
+ printk("Cmd = %0#2x, ",cmd);
+ }
+}
+
+/*
+ * Used for error reporting/debugging purposes.
+ * Report Cmd name, Request status, Detailed Status.
+ */
+void i2o_report_status(const char *severity, const char *str, u32 *msg)
+{
+ u8 cmd = (msg[1]>>24)&0xFF;
+ u8 req_status = (msg[4]>>24)&0xFF;
+ u16 detailed_status = msg[4]&0xFFFF;
+ struct i2o_handler *h = i2o_handlers[msg[2] & (MAX_I2O_MODULES-1)];
+
+ printk("%s%s: ", severity, str);
+
+ if (cmd < 0x1F) // Utility cmd
+ i2o_report_util_cmd(cmd);
+
+ else if (cmd >= 0xA0 && cmd <= 0xEF) // Executive cmd
+ i2o_report_exec_cmd(cmd);
+
+ else if (h->class == I2O_CLASS_LAN && cmd >= 0x30 && cmd <= 0x3F)
+ i2o_report_lan_cmd(cmd); // LAN cmd
+ else
+ printk("Cmd = %0#2x, ", cmd); // Other cmds
+
+ if (msg[0] & MSG_FAIL) {
+ i2o_report_fail_status(req_status, msg);
+ return;
+ }
+
+ i2o_report_common_status(req_status);
+
+ if (cmd < 0x1F || (cmd >= 0xA0 && cmd <= 0xEF))
+ i2o_report_common_dsc(detailed_status);
+ else if (h->class == I2O_CLASS_LAN && cmd >= 0x30 && cmd <= 0x3F)
+ i2o_report_lan_dsc(detailed_status);
+ else
+ printk(" / DetailedStatus = %0#4x.\n", detailed_status);
+}
+
+/* Used to dump a message to syslog during debugging */
+void i2o_dump_message(u32 *msg)
+{
+#ifdef DRIVERDEBUG
+ int i;
+ printk(KERN_INFO "Dumping I2O message size %d @ %p\n",
+ msg[0]>>16&0xffff, msg);
+ for(i = 0; i < ((msg[0]>>16)&0xffff); i++)
+ printk(KERN_INFO " msg[%d] = %0#10x\n", i, msg[i]);
+#endif
+}
+
+/*
+ * I2O reboot/shutdown notification.
+ *
+ * - Call each OSM's reboot notifier (if one exists)
+ * - Quiesce each IOP in the system
+ *
+ * Each IOP has to be quiesced before we can ensure that the system
+ * can be properly shutdown as a transaction that has already been
+ * acknowledged still needs to be placed in permanent store on the IOP.
+ * The SysQuiesce causes the IOP to force all HDMs to complete their
+ * transactions before returning, so only at that point is it safe
+ * to continue with the shutdown.
+ */
+static int i2o_reboot_event(struct notifier_block *n, unsigned long code, void *p)
+{
+ int i = 0;
+ struct i2o_controller *c = NULL;
+
+ if(code != SYS_RESTART && code != SYS_HALT && code != SYS_POWER_OFF)
+ return NOTIFY_DONE;
+
+ printk(KERN_INFO "Shutting down I2O system.\n");
+ printk(KERN_INFO
+ " This could take a few minutes if there are many devices attached\n");
+
+ for(i = 0; i < MAX_I2O_MODULES; i++)
+ {
+ if(i2o_handlers[i] && i2o_handlers[i]->reboot_notify)
+ i2o_handlers[i]->reboot_notify();
+ }
+
+ for(c = i2o_controller_chain; c; c = c->next)
+ {
+ if(i2o_quiesce_controller(c))
+ {
+			printk(KERN_WARNING "i2o: Could not quiesce %s. "
+				"Verify setup on next system power up.\n", c->name);
+ }
+ }
+
+ printk(KERN_INFO "I2O system down.\n");
+ return NOTIFY_DONE;
+}
+
+
+EXPORT_SYMBOL(i2o_controller_chain);
+EXPORT_SYMBOL(i2o_num_controllers);
+EXPORT_SYMBOL(i2o_find_controller);
+EXPORT_SYMBOL(i2o_unlock_controller);
+EXPORT_SYMBOL(i2o_status_get);
+
+EXPORT_SYMBOL(i2o_install_handler);
+EXPORT_SYMBOL(i2o_remove_handler);
+
+EXPORT_SYMBOL(i2o_claim_device);
+EXPORT_SYMBOL(i2o_release_device);
+EXPORT_SYMBOL(i2o_device_notify_on);
+EXPORT_SYMBOL(i2o_device_notify_off);
+
+EXPORT_SYMBOL(i2o_post_this);
+EXPORT_SYMBOL(i2o_post_wait);
+EXPORT_SYMBOL(i2o_post_wait_mem);
+
+EXPORT_SYMBOL(i2o_query_scalar);
+EXPORT_SYMBOL(i2o_set_scalar);
+EXPORT_SYMBOL(i2o_query_table);
+EXPORT_SYMBOL(i2o_clear_table);
+EXPORT_SYMBOL(i2o_row_add_table);
+EXPORT_SYMBOL(i2o_issue_params);
+
+EXPORT_SYMBOL(i2o_event_register);
+EXPORT_SYMBOL(i2o_event_ack);
+
+EXPORT_SYMBOL(i2o_report_status);
+EXPORT_SYMBOL(i2o_dump_message);
+
+EXPORT_SYMBOL(i2o_get_class_name);
+
+#ifdef MODULE
+
+MODULE_AUTHOR("Red Hat Software");
+MODULE_DESCRIPTION("I2O Core");
+
+
+int init_module(void)
+{
+ printk(KERN_INFO "I2O Core - (C) Copyright 1999 Red Hat Software\n");
+ if (i2o_install_handler(&i2o_core_handler) < 0)
+ {
+ printk(KERN_ERR
+			"i2o_core: Unable to install core handler.\nI2O stack not loaded!\n");
+ return 0;
+ }
+
+ core_context = i2o_core_handler.context;
+
+ /*
+ * Attach core to I2O PCI transport (and others as they are developed)
+ */
+#ifdef CONFIG_I2O_PCI_MODULE
+ if(i2o_pci_core_attach(&i2o_core_functions) < 0)
+ printk(KERN_INFO "i2o: No PCI I2O controllers found\n");
+#endif
+
+ /*
+ * Initialize event handling thread
+ */
+ init_MUTEX_LOCKED(&evt_sem);
+ evt_pid = kernel_thread(i2o_core_evt, &evt_reply, CLONE_SIGHAND);
+ if(evt_pid < 0)
+ {
+ printk(KERN_ERR "I2O: Could not create event handler kernel thread\n");
+ i2o_remove_handler(&i2o_core_handler);
+ return 0;
+ }
+ else
+ printk(KERN_INFO "I2O: Event thread created as pid %d\n", evt_pid);
+
+ if(i2o_num_controllers)
+ i2o_sys_init();
+
+ register_reboot_notifier(&i2o_reboot_notifier);
+
+ return 0;
+}
+
+void cleanup_module(void)
+{
+ int stat;
+
+ unregister_reboot_notifier(&i2o_reboot_notifier);
+
+ if(i2o_num_controllers)
+ i2o_sys_shutdown();
+
+ /*
+ * If this is shutdown time, the thread has already been killed
+ */
+ if(evt_running) {
+ printk("Terminating i2o threads...");
+ stat = kill_proc(evt_pid, SIGTERM, 1);
+ if(!stat) {
+ printk("waiting...");
+ wait_for_completion(&evt_dead);
+ }
+ printk("done.\n");
+ }
+
+#ifdef CONFIG_I2O_PCI_MODULE
+ i2o_pci_core_detach();
+#endif
+
+ i2o_remove_handler(&i2o_core_handler);
+}
+
+#else
+
+extern int i2o_block_init(void);
+extern int i2o_config_init(void);
+extern int i2o_lan_init(void);
+extern int i2o_pci_init(void);
+extern int i2o_proc_init(void);
+extern int i2o_scsi_init(void);
+
+int __init i2o_init(void)
+{
+ printk(KERN_INFO "Loading I2O Core - (c) Copyright 1999 Red Hat Software\n");
+
+ if (i2o_install_handler(&i2o_core_handler) < 0)
+ {
+ printk(KERN_ERR
+			"i2o_core: Unable to install core handler.\nI2O stack not loaded!\n");
+ return 0;
+ }
+
+ core_context = i2o_core_handler.context;
+
+ /*
+ * Initialize event handling thread
+ * We may not find any controllers, but still want this as
+ * down the road we may have hot pluggable controllers that
+ * need to be dealt with.
+ */
+ init_MUTEX_LOCKED(&evt_sem);
+ if((evt_pid = kernel_thread(i2o_core_evt, &evt_reply, CLONE_SIGHAND)) < 0)
+ {
+ printk(KERN_ERR "I2O: Could not create event handler kernel thread\n");
+ i2o_remove_handler(&i2o_core_handler);
+ return 0;
+ }
+
+
+#ifdef CONFIG_I2O_PCI
+ i2o_pci_init();
+#endif
+
+ if(i2o_num_controllers)
+ i2o_sys_init();
+
+ register_reboot_notifier(&i2o_reboot_notifier);
+
+ i2o_config_init();
+#ifdef CONFIG_I2O_BLOCK
+ i2o_block_init();
+#endif
+#ifdef CONFIG_I2O_LAN
+ i2o_lan_init();
+#endif
+#ifdef CONFIG_I2O_PROC
+ i2o_proc_init();
+#endif
+ return 0;
+}
+
+#endif
--- /dev/null
+/*
+ * drivers/message/i2o/i2o_lan.c
+ *
+ * I2O LAN CLASS OSM May 26th 2000
+ *
+ * (C) Copyright 1999, 2000 University of Helsinki,
+ * Department of Computer Science
+ *
+ * This code is still under development / test.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ * Authors: Auvo Häkkinen <Auvo.Hakkinen@cs.Helsinki.FI>
+ * Fixes: Juha Sievänen <Juha.Sievanen@cs.Helsinki.FI>
+ * Taneli Vähäkangas <Taneli.Vahakangas@cs.Helsinki.FI>
+ * Deepak Saxena <deepak@plexity.net>
+ *
+ * Tested: in FDDI environment (using SysKonnect's DDM)
+ * in Gigabit Eth environment (using SysKonnect's DDM)
+ * in Fast Ethernet environment (using Intel 82558 DDM)
+ *
+ * TODO: tests for other LAN classes (Token Ring, Fibre Channel)
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+
+#include <linux/pci.h>
+
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/fddidevice.h>
+#include <linux/trdevice.h>
+#include <linux/fcdevice.h>
+
+#include <linux/skbuff.h>
+#include <linux/if_arp.h>
+#include <linux/slab.h>
+#include <linux/init.h>
+#include <linux/spinlock.h>
+#include <linux/tqueue.h>
+#include <asm/io.h>
+
+#include <linux/errno.h>
+
+#include <linux/i2o.h>
+#include "i2o_lan.h"
+
+//#define DRIVERDEBUG
+#ifdef DRIVERDEBUG
+#define dprintk(s, args...) printk(s, ## args)
+#else
+#define dprintk(s, args...)
+#endif
+
+/* The following module parameters are used as default values
+ * for per interface values located in the net_device private area.
+ * Private values are changed via /proc filesystem.
+ */
+static u32 max_buckets_out = I2O_LAN_MAX_BUCKETS_OUT;
+static u32 bucket_thresh = I2O_LAN_BUCKET_THRESH;
+static u32 rx_copybreak = I2O_LAN_RX_COPYBREAK;
+static u8 tx_batch_mode = I2O_LAN_TX_BATCH_MODE;
+static u32 i2o_event_mask = I2O_LAN_EVENT_MASK;
+
+#define MAX_LAN_CARDS 16
+static struct net_device *i2o_landevs[MAX_LAN_CARDS+1];
+static int unit = -1; /* device unit number */
+
+static void i2o_lan_reply(struct i2o_handler *h, struct i2o_controller *iop, struct i2o_message *m);
+static void i2o_lan_send_post_reply(struct i2o_handler *h, struct i2o_controller *iop, struct i2o_message *m);
+static int i2o_lan_receive_post(struct net_device *dev);
+static void i2o_lan_receive_post_reply(struct i2o_handler *h, struct i2o_controller *iop, struct i2o_message *m);
+static void i2o_lan_release_buckets(struct net_device *dev, u32 *msg);
+
+static int i2o_lan_reset(struct net_device *dev);
+static void i2o_lan_handle_event(struct net_device *dev, u32 *msg);
+
+/* Structures to register handlers for the incoming replies. */
+
+static struct i2o_handler i2o_lan_send_handler = {
+ i2o_lan_send_post_reply, // For send replies
+ NULL,
+ NULL,
+ NULL,
+ "I2O LAN OSM send",
+ -1,
+ I2O_CLASS_LAN
+};
+static int lan_send_context;
+
+static struct i2o_handler i2o_lan_receive_handler = {
+ i2o_lan_receive_post_reply, // For receive replies
+ NULL,
+ NULL,
+ NULL,
+ "I2O LAN OSM receive",
+ -1,
+ I2O_CLASS_LAN
+};
+static int lan_receive_context;
+
+static struct i2o_handler i2o_lan_handler = {
+ i2o_lan_reply, // For other replies
+ NULL,
+ NULL,
+ NULL,
+ "I2O LAN OSM",
+ -1,
+ I2O_CLASS_LAN
+};
+static int lan_context;
+
+DECLARE_TASK_QUEUE(i2o_post_buckets_task);
+struct tq_struct run_i2o_post_buckets_task = {
+ routine: (void (*)(void *)) run_task_queue,
+ data: (void *) 0
+};
+
+/* Functions to handle message failures and transaction errors:
+==============================================================*/
+
+/*
+ * i2o_lan_handle_failure(): Fail bit has been set since IOP's message
+ * layer cannot deliver the request to the target, or the target cannot
+ * process the request.
+ */
+static void i2o_lan_handle_failure(struct net_device *dev, u32 *msg)
+{
+ struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
+ struct i2o_device *i2o_dev = priv->i2o_dev;
+ struct i2o_controller *iop = i2o_dev->controller;
+
+ u32 *preserved_msg = (u32*)(iop->mem_offset + msg[7]);
+ u32 *sgl_elem = &preserved_msg[4];
+ struct sk_buff *skb = NULL;
+ u8 le_flag;
+
+ i2o_report_status(KERN_INFO, dev->name, msg);
+
+ /* If PacketSend failed, free sk_buffs reserved by upper layers */
+
+ if (msg[1] >> 24 == LAN_PACKET_SEND) {
+ do {
+ skb = (struct sk_buff *)(sgl_elem[1]);
+ dev_kfree_skb_irq(skb);
+
+ atomic_dec(&priv->tx_out);
+
+ le_flag = *sgl_elem >> 31;
+ sgl_elem +=3;
+ } while (le_flag == 0); /* Last element flag not set */
+
+ if (netif_queue_stopped(dev))
+ netif_wake_queue(dev);
+ }
+
+ /* If ReceivePost failed, free sk_buffs we have reserved */
+
+ if (msg[1] >> 24 == LAN_RECEIVE_POST) {
+ do {
+ skb = (struct sk_buff *)(sgl_elem[1]);
+ dev_kfree_skb_irq(skb);
+
+ atomic_dec(&priv->buckets_out);
+
+ le_flag = *sgl_elem >> 31;
+ sgl_elem +=3;
+ } while (le_flag == 0); /* Last element flag not set */
+ }
+
+ /* Release the preserved msg frame by resubmitting it as a NOP */
+
+ preserved_msg[0] = THREE_WORD_MSG_SIZE | SGL_OFFSET_0;
+ preserved_msg[1] = I2O_CMD_UTIL_NOP << 24 | HOST_TID << 12 | 0;
+ preserved_msg[2] = 0;
+ i2o_post_message(iop, msg[7]);
+}
+/*
+ * i2o_lan_handle_transaction_error(): IOP or DDM has rejected the request
+ * for general cause (format error, bad function code, insufficient resources,
+ * etc.). We get one transaction_error for each failed transaction.
+ */
+static void i2o_lan_handle_transaction_error(struct net_device *dev, u32 *msg)
+{
+ struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
+ struct sk_buff *skb;
+
+ i2o_report_status(KERN_INFO, dev->name, msg);
+
+ /* If PacketSend was rejected, free sk_buff reserved by upper layers */
+
+ if (msg[1] >> 24 == LAN_PACKET_SEND) {
+ skb = (struct sk_buff *)(msg[3]); // TransactionContext
+ dev_kfree_skb_irq(skb);
+ atomic_dec(&priv->tx_out);
+
+ if (netif_queue_stopped(dev))
+ netif_wake_queue(dev);
+ }
+
+ /* If ReceivePost was rejected, free sk_buff we have reserved */
+
+ if (msg[1] >> 24 == LAN_RECEIVE_POST) {
+ skb = (struct sk_buff *)(msg[3]);
+ dev_kfree_skb_irq(skb);
+ atomic_dec(&priv->buckets_out);
+ }
+}
+
+/*
+ * i2o_lan_handle_status(): Common handling for an unsuccessful request
+ * (status != SUCCESS).
+ */
+static int i2o_lan_handle_status(struct net_device *dev, u32 *msg)
+{
+ /* Fail bit set? */
+
+ if (msg[0] & MSG_FAIL) {
+ i2o_lan_handle_failure(dev, msg);
+ return -1;
+ }
+
+ /* Message rejected for general cause? */
+
+ if ((msg[4]>>24) == I2O_REPLY_STATUS_TRANSACTION_ERROR) {
+ i2o_lan_handle_transaction_error(dev, msg);
+ return -1;
+ }
+
+ /* Else have to handle it in the callback function */
+
+ return 0;
+}
+
+/* Callback functions called from the interrupt routine:
+=======================================================*/
+
+/*
+ * i2o_lan_send_post_reply(): Callback function to handle PacketSend replies.
+ */
+static void i2o_lan_send_post_reply(struct i2o_handler *h,
+ struct i2o_controller *iop, struct i2o_message *m)
+{
+ u32 *msg = (u32 *)m;
+ u8 unit = (u8)(msg[2]>>16); // InitiatorContext
+ struct net_device *dev = i2o_landevs[unit];
+ struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
+ u8 trl_count = msg[3] & 0x000000FF;
+
+ if ((msg[4] >> 24) != I2O_REPLY_STATUS_SUCCESS) {
+ if (i2o_lan_handle_status(dev, msg))
+ return;
+ }
+
+#ifdef DRIVERDEBUG
+ i2o_report_status(KERN_INFO, dev->name, msg);
+#endif
+
+ /* DDM has handled transmit request(s); free the sk_buffs.
+ * A similar single-transaction reply also arrives in error cases
+ * (except for message failures and transaction errors).
+ */
+ while (trl_count) {
+ dev_kfree_skb_irq((struct sk_buff *)msg[4 + trl_count]);
+ dprintk(KERN_INFO "%s: tx skb freed (trl_count=%d).\n",
+ dev->name, trl_count);
+ atomic_dec(&priv->tx_out);
+ trl_count--;
+ }
+
+ /* If priv->tx_out had reached tx_max_out, the queue was stopped */
+
+ if (netif_queue_stopped(dev))
+ netif_wake_queue(dev);
+}
+
+/*
+ * i2o_lan_receive_post_reply(): Callback function to process incoming packets.
+ */
+static void i2o_lan_receive_post_reply(struct i2o_handler *h,
+ struct i2o_controller *iop, struct i2o_message *m)
+{
+ u32 *msg = (u32 *)m;
+ u8 unit = (u8)(msg[2]>>16); // InitiatorContext
+ struct net_device *dev = i2o_landevs[unit];
+
+ struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
+ struct i2o_bucket_descriptor *bucket = (struct i2o_bucket_descriptor *)&msg[6];
+ struct i2o_packet_info *packet;
+ u8 trl_count = msg[3] & 0x000000FF;
+ struct sk_buff *skb, *old_skb;
+ unsigned long flags = 0;
+
+ if ((msg[4] >> 24) != I2O_REPLY_STATUS_SUCCESS) {
+ if (i2o_lan_handle_status(dev, msg))
+ return;
+
+ i2o_lan_release_buckets(dev, msg);
+ return;
+ }
+
+#ifdef DRIVERDEBUG
+ i2o_report_status(KERN_INFO, dev->name, msg);
+#endif
+
+ /* Else we are processing an incoming ReceivePost reply. */
+
+ while (trl_count--) {
+ skb = (struct sk_buff *)bucket->context;
+ packet = (struct i2o_packet_info *)bucket->packet_info;
+ atomic_dec(&priv->buckets_out);
+
+ /* Sanity checks: Any weird characteristics in bucket? */
+
+ if ((packet->flags & 0x0f) || !(packet->flags & 0x40)) {
+ if (packet->flags & 0x01)
+ printk(KERN_WARNING "%s: packet with errors, error code=0x%02x.\n",
+ dev->name, packet->status & 0xff);
+
+ /* The following shouldn't happen, unless parameters in
+ * LAN_OPERATION group are changed during the run time.
+ */
+ if (packet->flags & 0x0c)
+ printk(KERN_DEBUG "%s: multi-bucket packets not supported!\n",
+ dev->name);
+
+ if (!(packet->flags & 0x40))
+ printk(KERN_DEBUG "%s: multiple packets in a bucket not supported!\n",
+ dev->name);
+
+ dev_kfree_skb_irq(skb);
+
+ bucket++;
+ continue;
+ }
+
+ /* Copy short packet to a new skb */
+
+ if (packet->len < priv->rx_copybreak) {
+ old_skb = skb;
+ skb = (struct sk_buff *)dev_alloc_skb(packet->len+2);
+ if (skb == NULL) {
+ printk(KERN_ERR "%s: Can't allocate skb.\n", dev->name);
+ return;
+ }
+ skb_reserve(skb, 2);
+ memcpy(skb_put(skb, packet->len), old_skb->data, packet->len);
+
+ spin_lock_irqsave(&priv->fbl_lock, flags);
+ if (priv->i2o_fbl_tail < I2O_LAN_MAX_BUCKETS_OUT)
+ priv->i2o_fbl[++priv->i2o_fbl_tail] = old_skb;
+ else
+ dev_kfree_skb_irq(old_skb);
+
+ spin_unlock_irqrestore(&priv->fbl_lock, flags);
+ } else
+ skb_put(skb, packet->len);
+
+ /* Deliver to upper layers */
+
+ skb->dev = dev;
+ skb->protocol = priv->type_trans(skb, dev);
+ netif_rx(skb);
+
+ dev->last_rx = jiffies;
+
+ dprintk(KERN_INFO "%s: Incoming packet (%d bytes) delivered "
+ "to upper level.\n", dev->name, packet->len);
+
+ bucket++; // to next Packet Descriptor Block
+ }
+
+#ifdef DRIVERDEBUG
+ if (msg[5] == 0)
+ printk(KERN_INFO "%s: DDM out of buckets (priv->count = %d)!\n",
+ dev->name, atomic_read(&priv->buckets_out));
+#endif
+
+ /* If DDM has already consumed bucket_thresh buckets, post new ones */
+
+ if (atomic_read(&priv->buckets_out) <= priv->max_buckets_out - priv->bucket_thresh) {
+ run_i2o_post_buckets_task.data = (void *)dev;
+ queue_task(&run_i2o_post_buckets_task, &tq_immediate);
+ mark_bh(IMMEDIATE_BH);
+ }
+
+ return;
+}
+
+/*
+ * i2o_lan_reply(): Callback function to handle incoming messages other
+ * than PacketSend and ReceivePost.
+ */
+static void i2o_lan_reply(struct i2o_handler *h, struct i2o_controller *iop,
+ struct i2o_message *m)
+{
+ u32 *msg = (u32 *)m;
+ u8 unit = (u8)(msg[2]>>16); // InitiatorContext
+ struct net_device *dev = i2o_landevs[unit];
+
+ if ((msg[4] >> 24) != I2O_REPLY_STATUS_SUCCESS) {
+ if (i2o_lan_handle_status(dev, msg))
+ return;
+
+ /* In other error cases just report and continue */
+
+ i2o_report_status(KERN_INFO, dev->name, msg);
+ }
+
+#ifdef DRIVERDEBUG
+ i2o_report_status(KERN_INFO, dev->name, msg);
+#endif
+ switch (msg[1] >> 24) {
+ case LAN_RESET:
+ case LAN_SUSPEND:
+ /* default reply without payload */
+ break;
+
+ case I2O_CMD_UTIL_EVT_REGISTER:
+ case I2O_CMD_UTIL_EVT_ACK:
+ i2o_lan_handle_event(dev, msg);
+ break;
+
+ case I2O_CMD_UTIL_PARAMS_SET:
+ /* default reply, results in ReplyPayload (not examined) */
+ switch (msg[3] >> 16) {
+ case 1: dprintk(KERN_INFO "%s: Reply to set MAC filter mask.\n",
+ dev->name);
+ break;
+ case 2: dprintk(KERN_INFO "%s: Reply to set MAC table.\n",
+ dev->name);
+ break;
+ default: printk(KERN_WARNING "%s: Bad group 0x%04X\n",
+ dev->name, msg[3] >> 16);
+ }
+ break;
+
+ default:
+ printk(KERN_ERR "%s: No handler for the reply.\n",
+ dev->name);
+ i2o_report_status(KERN_INFO, dev->name, msg);
+ }
+}
+
+/* Functions used by the above callback functions:
+=================================================*/
+
+/*
+ * i2o_lan_release_buckets(): Free unused buckets (sk_buffs).
+ */
+static void i2o_lan_release_buckets(struct net_device *dev, u32 *msg)
+{
+ struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
+ u8 trl_elem_size = (u8)(msg[3]>>8 & 0x000000FF);
+ u8 trl_count = (u8)(msg[3] & 0x000000FF);
+ u32 *pskb = &msg[6];
+
+ while (trl_count--) {
+ dprintk(KERN_DEBUG "%s: Releasing unused rx skb %p (trl_count=%d).\n",
+ dev->name, (struct sk_buff *)(*pskb), trl_count + 1);
+ dev_kfree_skb_irq((struct sk_buff *)(*pskb));
+ pskb += 1 + trl_elem_size;
+ atomic_dec(&priv->buckets_out);
+ }
+}
+
+/*
+ * i2o_lan_handle_event(): Handle events.
+ */
+static void i2o_lan_handle_event(struct net_device *dev, u32 *msg)
+{
+ struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
+ struct i2o_device *i2o_dev = priv->i2o_dev;
+ struct i2o_controller *iop = i2o_dev->controller;
+ u32 max_evt_data_size = iop->status_block->inbound_frame_size - 5;
+ struct i2o_reply {
+ u32 header[4];
+ u32 evt_indicator;
+ u32 data[max_evt_data_size];
+ } *evt = (struct i2o_reply *)msg;
+ int evt_data_len = ((msg[0]>>16) - 5) * 4; /* real size*/
+
+ printk(KERN_INFO "%s: I2O event - ", dev->name);
+
+ if (msg[1]>>24 == I2O_CMD_UTIL_EVT_ACK) {
+ printk("Event acknowledgement reply.\n");
+ return;
+ }
+
+ /* Else evt->function == I2O_CMD_UTIL_EVT_REGISTER) */
+
+ switch (evt->evt_indicator) {
+ case I2O_EVT_IND_STATE_CHANGE: {
+ struct state_data {
+ u16 status;
+ u8 state;
+ u8 data;
+ } *evt_data = (struct state_data *)&evt->data[0];
+
+ printk("State change 0x%08x.\n", evt->data[0]);
+
+ /* If the DDM is in error state, recovery may be
+ * possible if status = Transmit or Receive Control
+ * Unit Inoperable.
+ */
+ if (evt_data->state==0x05 && evt_data->status==0x0003)
+ i2o_lan_reset(dev);
+ break;
+ }
+
+ case I2O_EVT_IND_FIELD_MODIFIED: {
+ u16 *work16 = (u16 *)evt->data;
+ printk("Group 0x%04x, field %d changed.\n", work16[0], work16[1]);
+ break;
+ }
+
+ case I2O_EVT_IND_VENDOR_EVT: {
+ int i;
+ printk("Vendor event:\n");
+ for (i = 0; i < evt_data_len / 4; i++)
+ printk(" 0x%08x\n", evt->data[i]);
+ break;
+ }
+
+ case I2O_EVT_IND_DEVICE_RESET:
+ /* Spec 2.0 p. 6-121:
+ * a DEVICE_RESET event must also be acknowledged.
+ */
+ printk("Device reset.\n");
+ if (i2o_event_ack(iop, msg) < 0)
+ printk("%s: Event Acknowledge timeout.\n", dev->name);
+ break;
+
+#if 0
+ case I2O_EVT_IND_EVT_MASK_MODIFIED:
+ printk("Event mask modified, 0x%08x.\n", evt->data[0]);
+ break;
+
+ case I2O_EVT_IND_GENERAL_WARNING:
+ printk("General warning 0x%04x.\n", evt->data[0]);
+ break;
+
+ case I2O_EVT_IND_CONFIGURATION_FLAG:
+ printk("Configuration requested.\n");
+ break;
+
+ case I2O_EVT_IND_CAPABILITY_CHANGE:
+ printk("Capability change 0x%04x.\n", evt->data[0]);
+ break;
+
+ case I2O_EVT_IND_DEVICE_STATE:
+ printk("Device state changed 0x%08x.\n", evt->data[0]);
+ break;
+#endif
+ case I2O_LAN_EVT_LINK_DOWN:
+ netif_carrier_off(dev);
+ printk("Link to the physical device is lost.\n");
+ break;
+
+ case I2O_LAN_EVT_LINK_UP:
+ netif_carrier_on(dev);
+ printk("Link to the physical device is (re)established.\n");
+ break;
+
+ case I2O_LAN_EVT_MEDIA_CHANGE:
+ printk("Media change.\n");
+ break;
+ default:
+ printk("0x%08x. No handler.\n", evt->evt_indicator);
+ }
+}
+
+/*
+ * i2o_lan_receive_post(): Post buckets to receive packets.
+ */
+static int i2o_lan_receive_post(struct net_device *dev)
+{
+ struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
+ struct i2o_device *i2o_dev = priv->i2o_dev;
+ struct i2o_controller *iop = i2o_dev->controller;
+ struct sk_buff *skb;
+ u32 m, *msg;
+ u32 bucket_len = (dev->mtu + dev->hard_header_len);
+ u32 total = priv->max_buckets_out - atomic_read(&priv->buckets_out);
+ u32 bucket_count;
+ u32 *sgl_elem;
+ unsigned long flags;
+
+ /* Send the buckets in batches of at most priv->sgl_max per I2O request */
+
+ while (total) {
+ m = I2O_POST_READ32(iop);
+ if (m == 0xFFFFFFFF)
+ return -ETIMEDOUT;
+ msg = (u32 *)(iop->mem_offset + m);
+
+ bucket_count = (total >= priv->sgl_max) ? priv->sgl_max : total;
+ total -= bucket_count;
+ atomic_add(bucket_count, &priv->buckets_out);
+
+ dprintk(KERN_INFO "%s: Sending %d buckets (size %d) to LAN DDM.\n",
+ dev->name, bucket_count, bucket_len);
+
+ /* Fill in the header */
+
+ __raw_writel(I2O_MESSAGE_SIZE(4 + 3 * bucket_count) | SGL_OFFSET_4, msg);
+ __raw_writel(LAN_RECEIVE_POST<<24 | HOST_TID<<12 | i2o_dev->lct_data.tid, msg+1);
+ __raw_writel(priv->unit << 16 | lan_receive_context, msg+2);
+ __raw_writel(bucket_count, msg+3);
+ sgl_elem = &msg[4];
+
+ /* Fill in the payload - contains bucket_count SGL elements */
+
+ while (bucket_count--) {
+ spin_lock_irqsave(&priv->fbl_lock, flags);
+ if (priv->i2o_fbl_tail >= 0)
+ skb = priv->i2o_fbl[priv->i2o_fbl_tail--];
+ else {
+ skb = dev_alloc_skb(bucket_len + 2);
+ if (skb == NULL) {
+ spin_unlock_irqrestore(&priv->fbl_lock, flags);
+ return -ENOMEM;
+ }
+ skb_reserve(skb, 2);
+ }
+ spin_unlock_irqrestore(&priv->fbl_lock, flags);
+
+ __raw_writel(0x51000000 | bucket_len, sgl_elem);
+ __raw_writel((u32)skb, sgl_elem+1);
+ __raw_writel(virt_to_bus(skb->data), sgl_elem+2);
+ sgl_elem += 3;
+ }
+
+ /* set LE flag and post */
+ __raw_writel(__raw_readl(sgl_elem-3) | 0x80000000, (sgl_elem-3));
+ i2o_post_message(iop, m);
+ }
+
+ return 0;
+}
+
+/* Functions called from the network stack, and functions called by them:
+========================================================================*/
+
+/*
+ * i2o_lan_reset(): Reset the LAN adapter into the OPERATIONAL state.
+ */
+static int i2o_lan_reset(struct net_device *dev)
+{
+ struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
+ struct i2o_device *i2o_dev = priv->i2o_dev;
+ struct i2o_controller *iop = i2o_dev->controller;
+ u32 msg[5];
+
+ dprintk(KERN_INFO "%s: LAN RESET MESSAGE.\n", dev->name);
+ msg[0] = FIVE_WORD_MSG_SIZE | SGL_OFFSET_0;
+ msg[1] = LAN_RESET<<24 | HOST_TID<<12 | i2o_dev->lct_data.tid;
+ msg[2] = priv->unit << 16 | lan_context; // InitiatorContext
+ msg[3] = 0; // TransactionContext
+ msg[4] = 0; // Keep posted buckets
+
+ if (i2o_post_this(iop, msg, sizeof(msg)) < 0)
+ return -ETIMEDOUT;
+
+ return 0;
+}
+
+/*
+ * i2o_lan_suspend(): Put the LAN adapter into a safe, non-active state.
+ * The IOP then replies to any LAN class message with status
+ * error_no_data_transfer / suspended.
+ */
+static int i2o_lan_suspend(struct net_device *dev)
+{
+ struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
+ struct i2o_device *i2o_dev = priv->i2o_dev;
+ struct i2o_controller *iop = i2o_dev->controller;
+ u32 msg[5];
+
+ dprintk(KERN_INFO "%s: LAN SUSPEND MESSAGE.\n", dev->name);
+ msg[0] = FIVE_WORD_MSG_SIZE | SGL_OFFSET_0;
+ msg[1] = LAN_SUSPEND<<24 | HOST_TID<<12 | i2o_dev->lct_data.tid;
+ msg[2] = priv->unit << 16 | lan_context; // InitiatorContext
+ msg[3] = 0; // TransactionContext
+ msg[4] = 1 << 16; // return posted buckets
+
+ if (i2o_post_this(iop, msg, sizeof(msg)) < 0)
+ return -ETIMEDOUT;
+
+ return 0;
+}
+
+/*
+ * i2o_set_ddm_parameters(): Ensure proper initial parameter values for the
+ * DDM. They can be changed later via the proc file system or via a
+ * configuration utility.
+ */
+static void i2o_set_ddm_parameters(struct net_device *dev)
+{
+ struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
+ struct i2o_device *i2o_dev = priv->i2o_dev;
+ struct i2o_controller *iop = i2o_dev->controller;
+ u32 val;
+
+ /*
+ * When PacketOrphanlimit is set to the maximum packet length,
+ * the packets will never be split into two separate buckets
+ */
+ val = dev->mtu + dev->hard_header_len;
+ if (i2o_set_scalar(iop, i2o_dev->lct_data.tid, 0x0004, 2, &val, sizeof(val)) < 0)
+ printk(KERN_WARNING "%s: Unable to set PacketOrphanLimit.\n",
+ dev->name);
+ else
+ dprintk(KERN_INFO "%s: PacketOrphanLimit set to %d.\n",
+ dev->name, val);
+
+ /* When RxMaxPacketsBucket = 1, DDM puts only one packet into bucket */
+
+ val = 1;
+ if (i2o_set_scalar(iop, i2o_dev->lct_data.tid, 0x0008, 4, &val, sizeof(val)) < 0)
+ printk(KERN_WARNING "%s: Unable to set RxMaxPacketsBucket.\n",
+ dev->name);
+ else
+ dprintk(KERN_INFO "%s: RxMaxPacketsBucket set to %d.\n",
+ dev->name, val);
+ return;
+}
+
+/* Functions called from the network stack:
+==========================================*/
+
+/*
+ * i2o_lan_open(): Open the device to send and receive packets via
+ * the network stack.
+ */
+static int i2o_lan_open(struct net_device *dev)
+{
+ struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
+ struct i2o_device *i2o_dev = priv->i2o_dev;
+ struct i2o_controller *iop = i2o_dev->controller;
+ u32 mc_addr_group[64];
+
+ MOD_INC_USE_COUNT;
+
+ if (i2o_claim_device(i2o_dev, &i2o_lan_handler)) {
+ printk(KERN_WARNING "%s: Unable to claim the I2O LAN device.\n", dev->name);
+ MOD_DEC_USE_COUNT;
+ return -EAGAIN;
+ }
+ dprintk(KERN_INFO "%s: I2O LAN device (tid=%d) claimed by LAN OSM.\n",
+ dev->name, i2o_dev->lct_data.tid);
+
+ if (i2o_event_register(iop, i2o_dev->lct_data.tid,
+ priv->unit << 16 | lan_context, 0, priv->i2o_event_mask) < 0)
+ printk(KERN_WARNING "%s: Unable to set the event mask.\n", dev->name);
+
+ i2o_lan_reset(dev);
+
+ /* Get the max number of multicast addresses */
+
+ if (i2o_query_scalar(iop, i2o_dev->lct_data.tid, 0x0001, -1,
+ &mc_addr_group, sizeof(mc_addr_group)) < 0 ) {
+ printk(KERN_WARNING "%s: Unable to query LAN_MAC_ADDRESS group.\n", dev->name);
+ MOD_DEC_USE_COUNT;
+ return -EAGAIN;
+ }
+ priv->max_size_mc_table = mc_addr_group[8];
+
+ /* Allocate space for the free bucket list, to reuse ReceivePost buckets */
+
+ priv->i2o_fbl = kmalloc(priv->max_buckets_out * sizeof(struct sk_buff *),
+ GFP_KERNEL);
+ if (priv->i2o_fbl == NULL) {
+ MOD_DEC_USE_COUNT;
+ return -ENOMEM;
+ }
+ priv->i2o_fbl_tail = -1;
+ priv->send_active = 0;
+
+ i2o_set_ddm_parameters(dev);
+ i2o_lan_receive_post(dev);
+
+ netif_start_queue(dev);
+
+ return 0;
+}
+
+/*
+ * i2o_lan_close(): Stop the transfer and close the device.
+ */
+static int i2o_lan_close(struct net_device *dev)
+{
+ struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
+ struct i2o_device *i2o_dev = priv->i2o_dev;
+ struct i2o_controller *iop = i2o_dev->controller;
+ int ret = 0;
+
+ netif_stop_queue(dev);
+ i2o_lan_suspend(dev);
+
+ if (i2o_event_register(iop, i2o_dev->lct_data.tid,
+ priv->unit << 16 | lan_context, 0, 0) < 0)
+ printk(KERN_WARNING "%s: Unable to clear the event mask.\n",
+ dev->name);
+
+ while (priv->i2o_fbl_tail >= 0)
+ dev_kfree_skb(priv->i2o_fbl[priv->i2o_fbl_tail--]);
+
+ kfree(priv->i2o_fbl);
+
+ if (i2o_release_device(i2o_dev, &i2o_lan_handler)) {
+ printk(KERN_WARNING "%s: Unable to unclaim I2O LAN device "
+ "(tid=%d).\n", dev->name, i2o_dev->lct_data.tid);
+ ret = -EBUSY;
+ }
+
+ MOD_DEC_USE_COUNT;
+
+ return ret;
+}
+
+/*
+ * i2o_lan_tx_timeout(): Tx timeout handler.
+ */
+static void i2o_lan_tx_timeout(struct net_device *dev)
+{
+ if (!netif_queue_stopped(dev))
+ netif_start_queue(dev);
+}
+
+/*
+ * i2o_lan_batch_send(): Send packets in batch.
+ * Both i2o_lan_sdu_send and i2o_lan_packet_send use this.
+ */
+static void i2o_lan_batch_send(struct net_device *dev)
+{
+ struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
+ struct i2o_controller *iop = priv->i2o_dev->controller;
+
+ spin_lock_irq(&priv->tx_lock);
+ if (priv->tx_count != 0) {
+ dev->trans_start = jiffies;
+ i2o_post_message(iop, priv->m);
+ dprintk(KERN_DEBUG "%s: %d packets sent.\n", dev->name, priv->tx_count);
+ priv->tx_count = 0;
+ }
+ priv->send_active = 0;
+ spin_unlock_irq(&priv->tx_lock);
+ MOD_DEC_USE_COUNT;
+}
+
+#ifdef CONFIG_NET_FC
+/*
+ * i2o_lan_sdu_send(): Send a packet, MAC header added by the DDM.
+ * Must be supported by Fibre Channel, optional for Ethernet/802.3,
+ * Token Ring, FDDI
+ */
+static int i2o_lan_sdu_send(struct sk_buff *skb, struct net_device *dev)
+{
+ struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
+ struct i2o_device *i2o_dev = priv->i2o_dev;
+ struct i2o_controller *iop = i2o_dev->controller;
+ int tickssofar = jiffies - dev->trans_start;
+ u32 m, *msg;
+ u32 *sgl_elem;
+
+ spin_lock_irq(&priv->tx_lock);
+
+ priv->tx_count++;
+ atomic_inc(&priv->tx_out);
+
+ /*
+ * If tx_batch_mode = 0x00 forced to immediate mode
+ * If tx_batch_mode = 0x01 forced to batch mode
+ * If tx_batch_mode = 0x10 switch automatically, current mode immediate
+ * If tx_batch_mode = 0x11 switch automatically, current mode batch
+ * If gap between two packets is > 0 ticks, switch to immediate
+ */
+ if (priv->tx_batch_mode >> 1) // switch automatically
+ priv->tx_batch_mode = tickssofar ? 0x02 : 0x03;
+
+ if (priv->tx_count == 1) {
+ m = I2O_POST_READ32(iop);
+ if (m == 0xFFFFFFFF) {
+ spin_unlock_irq(&priv->tx_lock);
+ return 1;
+ }
+ msg = (u32 *)(iop->mem_offset + m);
+ priv->m = m;
+
+ __raw_writel(NINE_WORD_MSG_SIZE | 1<<12 | SGL_OFFSET_4, msg);
+ __raw_writel(LAN_PACKET_SEND<<24 | HOST_TID<<12 | i2o_dev->lct_data.tid, msg+1);
+ __raw_writel(priv->unit << 16 | lan_send_context, msg+2); // InitiatorContext
+ __raw_writel(1 << 30 | 1 << 3, msg+3); // TransmitControlWord
+
+ __raw_writel(0xD7000000 | skb->len, msg+4); // MAC hdr included
+ __raw_writel((u32)skb, msg+5); // TransactionContext
+ __raw_writel(virt_to_bus(skb->data), msg+6);
+ __raw_writel((u32)skb->mac.raw, msg+7);
+ __raw_writel((u32)skb->mac.raw+4, msg+8);
+
+ if ((priv->tx_batch_mode & 0x01) && !priv->send_active) {
+ priv->send_active = 1;
+ MOD_INC_USE_COUNT;
+ if (schedule_task(&priv->i2o_batch_send_task) == 0)
+ MOD_DEC_USE_COUNT;
+ }
+ } else { /* Add new SGL element to the previous message frame */
+
+ msg = (u32 *)(iop->mem_offset + priv->m);
+ sgl_elem = &msg[priv->tx_count * 5 + 1];
+
+ __raw_writel(I2O_MESSAGE_SIZE((__raw_readl(msg)>>16) + 5) | 1<<12 | SGL_OFFSET_4, msg);
+ __raw_writel(__raw_readl(sgl_elem-5) & 0x7FFFFFFF, sgl_elem-5); /* clear LE flag */
+ __raw_writel(0xD5000000 | skb->len, sgl_elem);
+ __raw_writel((u32)skb, sgl_elem+1);
+ __raw_writel(virt_to_bus(skb->data), sgl_elem+2);
+ __raw_writel((u32)(skb->mac.raw), sgl_elem+3);
+ __raw_writel((u32)(skb->mac.raw)+1, sgl_elem+4);
+ }
+
+ /* If tx is not in batch mode or the frame is full, send immediately */
+
+ if (!(priv->tx_batch_mode & 0x01) || priv->tx_count == priv->sgl_max) {
+ dev->trans_start = jiffies;
+ i2o_post_message(iop, priv->m);
+ dprintk(KERN_DEBUG "%s: %d packets sent.\n", dev->name, priv->tx_count);
+ priv->tx_count = 0;
+ }
+
+ /* If the DDM's TxMaxPktOut limit is reached, stop the queueing layer */
+
+ if (atomic_read(&priv->tx_out) >= priv->tx_max_out)
+ netif_stop_queue(dev);
+
+ spin_unlock_irq(&priv->tx_lock);
+ return 0;
+}
+#endif /* CONFIG_NET_FC */
+
+/*
+ * i2o_lan_packet_send(): Send a packet as is, including the MAC header.
+ *
+ * Must be supported by Ethernet/802.3, Token Ring, FDDI, optional for
+ * Fibre Channel
+ */
+static int i2o_lan_packet_send(struct sk_buff *skb, struct net_device *dev)
+{
+ struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
+ struct i2o_device *i2o_dev = priv->i2o_dev;
+ struct i2o_controller *iop = i2o_dev->controller;
+ int tickssofar = jiffies - dev->trans_start;
+ u32 m, *msg;
+ u32 *sgl_elem;
+
+ spin_lock_irq(&priv->tx_lock);
+
+ priv->tx_count++;
+ atomic_inc(&priv->tx_out);
+
+ /*
+ * If tx_batch_mode = 0x00 forced to immediate mode
+ * If tx_batch_mode = 0x01 forced to batch mode
+ * If tx_batch_mode = 0x10 switch automatically, current mode immediate
+ * If tx_batch_mode = 0x11 switch automatically, current mode batch
+ * If gap between two packets is > 0 ticks, switch to immediate
+ */
+ if (priv->tx_batch_mode >> 1) // switch automatically
+ priv->tx_batch_mode = tickssofar ? 0x02 : 0x03;
+
+ if (priv->tx_count == 1) {
+ m = I2O_POST_READ32(iop);
+ if (m == 0xFFFFFFFF) {
+ spin_unlock_irq(&priv->tx_lock);
+ return 1;
+ }
+ msg = (u32 *)(iop->mem_offset + m);
+ priv->m = m;
+
+ __raw_writel(SEVEN_WORD_MSG_SIZE | 1<<12 | SGL_OFFSET_4, msg);
+ __raw_writel(LAN_PACKET_SEND<<24 | HOST_TID<<12 | i2o_dev->lct_data.tid, msg+1);
+ __raw_writel(priv->unit << 16 | lan_send_context, msg+2); // InitiatorContext
+ __raw_writel(1 << 30 | 1 << 3, msg+3); // TransmitControlWord
+ // bit 30: reply as soon as transmission attempt is complete
+ // bit 3: Suppress CRC generation
+ __raw_writel(0xD5000000 | skb->len, msg+4); // MAC hdr included
+ __raw_writel((u32)skb, msg+5); // TransactionContext
+ __raw_writel(virt_to_bus(skb->data), msg+6);
+
+ if ((priv->tx_batch_mode & 0x01) && !priv->send_active) {
+ priv->send_active = 1;
+ MOD_INC_USE_COUNT;
+ if (schedule_task(&priv->i2o_batch_send_task) == 0)
+ MOD_DEC_USE_COUNT;
+ }
+ } else { /* Add new SGL element to the previous message frame */
+
+ msg = (u32 *)(iop->mem_offset + priv->m);
+ sgl_elem = &msg[priv->tx_count * 3 + 1];
+
+ __raw_writel(I2O_MESSAGE_SIZE((__raw_readl(msg)>>16) + 3) | 1<<12 | SGL_OFFSET_4, msg);
+ __raw_writel(__raw_readl(sgl_elem-3) & 0x7FFFFFFF, sgl_elem-3); /* clear LE flag */
+ __raw_writel(0xD5000000 | skb->len, sgl_elem);
+ __raw_writel((u32)skb, sgl_elem+1);
+ __raw_writel(virt_to_bus(skb->data), sgl_elem+2);
+ }
+
+ /* If tx is in immediate mode or frame is full, send now */
+
+ if (!(priv->tx_batch_mode & 0x01) || priv->tx_count == priv->sgl_max) {
+ dev->trans_start = jiffies;
+ i2o_post_message(iop, priv->m);
+ dprintk(KERN_DEBUG "%s: %d packets sent.\n", dev->name, priv->tx_count);
+ priv->tx_count = 0;
+ }
+
+ /* If the DDM's TxMaxPktOut limit is reached, stop the queueing layer */
+
+ if (atomic_read(&priv->tx_out) >= priv->tx_max_out)
+ netif_stop_queue(dev);
+
+ spin_unlock_irq(&priv->tx_lock);
+ return 0;
+}
+
+/*
+ * i2o_lan_get_stats(): Fill in the statistics.
+ */
+static struct net_device_stats *i2o_lan_get_stats(struct net_device *dev)
+{
+ struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
+ struct i2o_device *i2o_dev = priv->i2o_dev;
+ struct i2o_controller *iop = i2o_dev->controller;
+ u64 val64[16];
+ u64 supported_group[4] = { 0, 0, 0, 0 };
+
+ if (i2o_query_scalar(iop, i2o_dev->lct_data.tid, 0x0100, -1, val64,
+ sizeof(val64)) < 0)
+ printk(KERN_INFO "%s: Unable to query LAN_HISTORICAL_STATS.\n", dev->name);
+ else {
+ dprintk(KERN_DEBUG "%s: LAN_HISTORICAL_STATS queried.\n", dev->name);
+ priv->stats.tx_packets = val64[0];
+ priv->stats.tx_bytes = val64[1];
+ priv->stats.rx_packets = val64[2];
+ priv->stats.rx_bytes = val64[3];
+ priv->stats.tx_errors = val64[4];
+ priv->stats.rx_errors = val64[5];
+ priv->stats.rx_dropped = val64[6];
+ }
+
+ if (i2o_query_scalar(iop, i2o_dev->lct_data.tid, 0x0180, -1,
+ &supported_group, sizeof(supported_group)) < 0)
+ printk(KERN_INFO "%s: Unable to query LAN_SUPPORTED_OPTIONAL_HISTORICAL_STATS.\n", dev->name);
+
+ if (supported_group[2]) {
+ if (i2o_query_scalar(iop, i2o_dev->lct_data.tid, 0x0183, -1,
+ val64, sizeof(val64)) < 0)
+ printk(KERN_INFO "%s: Unable to query LAN_OPTIONAL_RX_HISTORICAL_STATS.\n", dev->name);
+ else {
+ dprintk(KERN_DEBUG "%s: LAN_OPTIONAL_RX_HISTORICAL_STATS queried.\n", dev->name);
+ priv->stats.multicast = val64[4];
+ priv->stats.rx_length_errors = val64[10];
+ priv->stats.rx_crc_errors = val64[0];
+ }
+ }
+
+ if (i2o_dev->lct_data.sub_class == I2O_LAN_ETHERNET) {
+ u64 supported_stats = 0;
+ if (i2o_query_scalar(iop, i2o_dev->lct_data.tid, 0x0200, -1,
+ val64, sizeof(val64)) < 0)
+ printk(KERN_INFO "%s: Unable to query LAN_802_3_HISTORICAL_STATS.\n", dev->name);
+ else {
+ dprintk(KERN_DEBUG "%s: LAN_802_3_HISTORICAL_STATS queried.\n", dev->name);
+ priv->stats.transmit_collision = val64[1] + val64[2];
+ priv->stats.rx_frame_errors = val64[0];
+ priv->stats.tx_carrier_errors = val64[6];
+ }
+
+ if (i2o_query_scalar(iop, i2o_dev->lct_data.tid, 0x0280, -1,
+ &supported_stats, sizeof(supported_stats)) < 0)
+ printk(KERN_INFO "%s: Unable to query LAN_SUPPORTED_802_3_HISTORICAL_STATS.\n", dev->name);
+
+ if (supported_stats != 0) {
+ if (i2o_query_scalar(iop, i2o_dev->lct_data.tid, 0x0281, -1,
+ val64, sizeof(val64)) < 0)
+ printk(KERN_INFO "%s: Unable to query LAN_OPTIONAL_802_3_HISTORICAL_STATS.\n", dev->name);
+ else {
+ dprintk(KERN_DEBUG "%s: LAN_OPTIONAL_802_3_HISTORICAL_STATS queried.\n", dev->name);
+ if (supported_stats & 0x1)
+ priv->stats.rx_over_errors = val64[0];
+ if (supported_stats & 0x4)
+ priv->stats.tx_heartbeat_errors = val64[2];
+ }
+ }
+ }
+
+#ifdef CONFIG_TR
+ if (i2o_dev->lct_data.sub_class == I2O_LAN_TR) {
+ if (i2o_query_scalar(iop, i2o_dev->lct_data.tid, 0x0300, -1,
+ val64, sizeof(val64)) < 0)
+ printk(KERN_INFO "%s: Unable to query LAN_802_5_HISTORICAL_STATS.\n", dev->name);
+ else {
+ struct tr_statistics *stats =
+ (struct tr_statistics *)&priv->stats;
+ dprintk(KERN_DEBUG "%s: LAN_802_5_HISTORICAL_STATS queried.\n", dev->name);
+
+ stats->line_errors = val64[0];
+ stats->internal_errors = val64[7];
+ stats->burst_errors = val64[4];
+ stats->A_C_errors = val64[2];
+ stats->abort_delimiters = val64[3];
+ stats->lost_frames = val64[1];
+ /* stats->recv_congest_count = ?; FIXME ??*/
+ stats->frame_copied_errors = val64[5];
+ stats->frequency_errors = val64[6];
+ stats->token_errors = val64[9];
+ }
+ /* Token Ring optional stats not yet defined */
+ }
+#endif
+
+#ifdef CONFIG_FDDI
+ if (i2o_dev->lct_data.sub_class == I2O_LAN_FDDI) {
+ if (i2o_query_scalar(iop, i2o_dev->lct_data.tid, 0x0400, -1,
+ val64, sizeof(val64)) < 0)
+ printk(KERN_INFO "%s: Unable to query LAN_FDDI_HISTORICAL_STATS.\n", dev->name);
+ else {
+ dprintk(KERN_DEBUG "%s: LAN_FDDI_HISTORICAL_STATS queried.\n", dev->name);
+ priv->stats.smt_cf_state = val64[0];
+ memcpy(priv->stats.mac_upstream_nbr, &val64[1], FDDI_K_ALEN);
+ memcpy(priv->stats.mac_downstream_nbr, &val64[2], FDDI_K_ALEN);
+ priv->stats.mac_error_cts = val64[3];
+ priv->stats.mac_lost_cts = val64[4];
+ priv->stats.mac_rmt_state = val64[5];
+ memcpy(priv->stats.port_lct_fail_cts, &val64[6], 8);
+ memcpy(priv->stats.port_lem_reject_cts, &val64[7], 8);
+ memcpy(priv->stats.port_lem_cts, &val64[8], 8);
+ memcpy(priv->stats.port_pcm_state, &val64[9], 8);
+ }
+ /* FDDI optional stats not yet defined */
+ }
+#endif
+
+#ifdef CONFIG_NET_FC
+ /* Fibre Channel Statistics not yet defined in 1.53 nor 2.0 */
+#endif
+
+ return (struct net_device_stats *)&priv->stats;
+}
+
+/*
+ * i2o_lan_set_mc_filter(): Post a request to set multicast filter.
+ */
+int i2o_lan_set_mc_filter(struct net_device *dev, u32 filter_mask)
+{
+ struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
+ struct i2o_device *i2o_dev = priv->i2o_dev;
+ struct i2o_controller *iop = i2o_dev->controller;
+ u32 msg[10];
+
+ msg[0] = TEN_WORD_MSG_SIZE | SGL_OFFSET_5;
+ msg[1] = I2O_CMD_UTIL_PARAMS_SET << 24 | HOST_TID << 12 | i2o_dev->lct_data.tid;
+ msg[2] = priv->unit << 16 | lan_context;
+ msg[3] = 0x0001 << 16 | 3 ; // TransactionContext: group&field
+ msg[4] = 0;
+ msg[5] = 0xCC000000 | 16; // Immediate data SGL
+ msg[6] = 1; // OperationCount
+ msg[7] = 0x0001<<16 | I2O_PARAMS_FIELD_SET; // Group, Operation
+ msg[8] = 3 << 16 | 1; // FieldIndex, FieldCount
+ msg[9] = filter_mask; // Value
+
+ return i2o_post_this(iop, msg, sizeof(msg));
+}
+
+/*
+ * i2o_lan_set_mc_table(): Post a request to set LAN_MULTICAST_MAC_ADDRESS table.
+ */
+int i2o_lan_set_mc_table(struct net_device *dev)
+{
+ struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
+ struct i2o_device *i2o_dev = priv->i2o_dev;
+ struct i2o_controller *iop = i2o_dev->controller;
+ struct dev_mc_list *mc;
+ u32 msg[10 + 2 * dev->mc_count];
+ u8 *work8 = (u8 *)(msg + 10);
+
+ msg[0] = I2O_MESSAGE_SIZE(10 + 2 * dev->mc_count) | SGL_OFFSET_5;
+ msg[1] = I2O_CMD_UTIL_PARAMS_SET << 24 | HOST_TID << 12 | i2o_dev->lct_data.tid;
+ msg[2] = priv->unit << 16 | lan_context; // InitiatorContext
+ msg[3] = 0x0002 << 16 | (u16)-1; // TransactionContext
+ msg[4] = 0; // OperationFlags
+ msg[5] = 0xCC000000 | (16 + 8 * dev->mc_count); // Immediate data SGL
+ msg[6] = 2; // OperationCount
+ msg[7] = 0x0002 << 16 | I2O_PARAMS_TABLE_CLEAR; // Group, Operation
+ msg[8] = 0x0002 << 16 | I2O_PARAMS_ROW_ADD; // Group, Operation
+ msg[9] = dev->mc_count << 16 | (u16)-1; // RowCount, FieldCount
+
+ for (mc = dev->mc_list; mc ; mc = mc->next, work8 += 8) {
+ memset(work8, 0, 8);
+ memcpy(work8, mc->dmi_addr, mc->dmi_addrlen); // Values
+ }
+
+ return i2o_post_this(iop, msg, sizeof(msg));
+}
+
+/*
+ * i2o_lan_set_multicast_list(): Enable a network device to receive packets
+ * not sent to its protocol address.
+ */
+static void i2o_lan_set_multicast_list(struct net_device *dev)
+{
+ struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
+ u32 filter_mask;
+
+ if (dev->flags & IFF_PROMISC) {
+ filter_mask = 0x00000002;
+ dprintk(KERN_INFO "%s: Enabling promiscuous mode...\n", dev->name);
+ } else if ((dev->flags & IFF_ALLMULTI) || dev->mc_count > priv->max_size_mc_table) {
+ filter_mask = 0x00000004;
+ dprintk(KERN_INFO "%s: Enabling all multicast mode...\n", dev->name);
+ } else if (dev->mc_count) {
+ filter_mask = 0x00000000;
+ dprintk(KERN_INFO "%s: Enabling multicast mode...\n", dev->name);
+ if (i2o_lan_set_mc_table(dev) < 0)
+ printk(KERN_WARNING "%s: Unable to send MAC table.\n", dev->name);
+ } else {
+ filter_mask = 0x00000300; // Broadcast, Multicast disabled
+ dprintk(KERN_INFO "%s: Enabling unicast mode...\n", dev->name);
+ }
+
+ /* Finally copy new FilterMask to DDM */
+
+ if (i2o_lan_set_mc_filter(dev, filter_mask) < 0)
+ printk(KERN_WARNING "%s: Unable to send MAC FilterMask.\n", dev->name);
+}
+
+/*
+ * i2o_lan_change_mtu(): Change maximum transfer unit size.
+ */
+static int i2o_lan_change_mtu(struct net_device *dev, int new_mtu)
+{
+ struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
+ struct i2o_device *i2o_dev = priv->i2o_dev;
+ u32 max_pkt_size;
+
+ if (i2o_query_scalar(i2o_dev->controller, i2o_dev->lct_data.tid,
+ 0x0000, 6, &max_pkt_size, 4) < 0)
+ return -EFAULT;
+
+ if (new_mtu < 68 || new_mtu > 9000 || new_mtu > max_pkt_size)
+ return -EINVAL;
+
+ dev->mtu = new_mtu;
+
+ i2o_lan_suspend(dev); // to SUSPENDED state, return buckets
+
+ while (priv->i2o_fbl_tail >= 0) // free buffered buckets
+ dev_kfree_skb(priv->i2o_fbl[priv->i2o_fbl_tail--]);
+
+ i2o_lan_reset(dev); // to OPERATIONAL state
+ i2o_set_ddm_parameters(dev); // reset some parameters
+ i2o_lan_receive_post(dev); // post new buckets (new size)
+
+ return 0;
+}
+
+/* Functions to initialize I2O LAN OSM:
+======================================*/
+
+/*
+ * i2o_lan_register_device(): Register LAN class device to kernel.
+ */
+struct net_device *i2o_lan_register_device(struct i2o_device *i2o_dev)
+{
+ struct net_device *dev = NULL;
+ struct i2o_lan_local *priv = NULL;
+ u8 hw_addr[8];
+ u32 tx_max_out = 0;
+ unsigned short (*type_trans)(struct sk_buff *, struct net_device *);
+ void (*unregister_dev)(struct net_device *dev);
+
+ switch (i2o_dev->lct_data.sub_class) {
+ case I2O_LAN_ETHERNET:
+ dev = init_etherdev(NULL, sizeof(struct i2o_lan_local));
+ if (dev == NULL)
+ return NULL;
+ type_trans = eth_type_trans;
+ unregister_dev = unregister_netdev;
+ break;
+
+#ifdef CONFIG_ANYLAN
+ case I2O_LAN_100VG:
+ printk(KERN_ERR "i2o_lan: 100base VG not yet supported.\n");
+ return NULL;
+#endif
+
+#ifdef CONFIG_TR
+ case I2O_LAN_TR:
+ dev = init_trdev(NULL, sizeof(struct i2o_lan_local));
+ if (dev==NULL)
+ return NULL;
+ type_trans = tr_type_trans;
+ unregister_dev = unregister_trdev;
+ break;
+#endif
+
+#ifdef CONFIG_FDDI
+ case I2O_LAN_FDDI:
+ {
+ int size = sizeof(struct net_device) + sizeof(struct i2o_lan_local);
+
+ dev = (struct net_device *) kmalloc(size, GFP_KERNEL);
+ if (dev == NULL)
+ return NULL;
+ memset((char *)dev, 0, size);
+ dev->priv = (void *)(dev + 1);
+
+ if (dev_alloc_name(dev, "fddi%d") < 0) {
+ printk(KERN_WARNING "i2o_lan: Too many FDDI devices.\n");
+ kfree(dev);
+ return NULL;
+ }
+ type_trans = fddi_type_trans;
+ unregister_dev = (void *)unregister_netdevice;
+
+ fddi_setup(dev);
+ register_netdev(dev);
+ }
+ break;
+#endif
+
+#ifdef CONFIG_NET_FC
+ case I2O_LAN_FIBRE_CHANNEL:
+ dev = init_fcdev(NULL, sizeof(struct i2o_lan_local));
+ if (dev == NULL)
+ return NULL;
+ type_trans = NULL;
+/* FIXME: Move fc_type_trans() from drivers/net/fc/iph5526.c to net/802/fc.c
+ * and export it in include/linux/fcdevice.h
+ * type_trans = fc_type_trans;
+ */
+ unregister_dev = (void *)unregister_fcdev;
+ break;
+#endif
+
+ case I2O_LAN_UNKNOWN:
+ default:
+ printk(KERN_ERR "i2o_lan: LAN type 0x%04x not supported.\n",
+ i2o_dev->lct_data.sub_class);
+ return NULL;
+ }
+
+ priv = (struct i2o_lan_local *)dev->priv;
+ priv->i2o_dev = i2o_dev;
+ priv->type_trans = type_trans;
+ priv->sgl_max = (i2o_dev->controller->status_block->inbound_frame_size - 4) / 3;
+ atomic_set(&priv->buckets_out, 0);
+
+ /* Set default values for user configurable parameters */
+ /* Private values are changed via /proc file system */
+
+ priv->max_buckets_out = max_buckets_out;
+ priv->bucket_thresh = bucket_thresh;
+ priv->rx_copybreak = rx_copybreak;
+ priv->tx_batch_mode = tx_batch_mode & 0x03;
+ priv->i2o_event_mask = i2o_event_mask;
+
+ priv->tx_lock = SPIN_LOCK_UNLOCKED;
+ priv->fbl_lock = SPIN_LOCK_UNLOCKED;
+
+ unit++;
+ i2o_landevs[unit] = dev;
+ priv->unit = unit;
+
+ if (i2o_query_scalar(i2o_dev->controller, i2o_dev->lct_data.tid,
+ 0x0001, 0, &hw_addr, sizeof(hw_addr)) < 0) {
+ printk(KERN_ERR "%s: Unable to query hardware address.\n", dev->name);
+ unit--;
+ unregister_dev(dev);
+ kfree(dev);
+ return NULL;
+ }
+ dprintk(KERN_DEBUG "%s: hwaddr = %02X:%02X:%02X:%02X:%02X:%02X\n",
+ dev->name, hw_addr[0], hw_addr[1], hw_addr[2], hw_addr[3],
+ hw_addr[4], hw_addr[5]);
+
+ dev->addr_len = 6;
+ memcpy(dev->dev_addr, hw_addr, 6);
+
+ if (i2o_query_scalar(i2o_dev->controller, i2o_dev->lct_data.tid,
+ 0x0007, 2, &tx_max_out, sizeof(tx_max_out)) < 0) {
+ printk(KERN_ERR "%s: Unable to query max TX queue.\n", dev->name);
+ unit--;
+ unregister_dev(dev);
+ kfree(dev);
+ return NULL;
+ }
+ dprintk(KERN_INFO "%s: Max TX Outstanding = %d.\n", dev->name, tx_max_out);
+ priv->tx_max_out = tx_max_out;
+ atomic_set(&priv->tx_out, 0);
+ priv->tx_count = 0;
+
+ INIT_LIST_HEAD(&priv->i2o_batch_send_task.list);
+ priv->i2o_batch_send_task.sync = 0;
+ priv->i2o_batch_send_task.routine = (void *)i2o_lan_batch_send;
+ priv->i2o_batch_send_task.data = (void *)dev;
+
+ dev->open = i2o_lan_open;
+ dev->stop = i2o_lan_close;
+ dev->get_stats = i2o_lan_get_stats;
+ dev->set_multicast_list = i2o_lan_set_multicast_list;
+ dev->tx_timeout = i2o_lan_tx_timeout;
+ dev->watchdog_timeo = I2O_LAN_TX_TIMEOUT;
+
+#ifdef CONFIG_NET_FC
+ if (i2o_dev->lct_data.sub_class == I2O_LAN_FIBRE_CHANNEL)
+ dev->hard_start_xmit = i2o_lan_sdu_send;
+ else
+#endif
+ dev->hard_start_xmit = i2o_lan_packet_send;
+
+ if (i2o_dev->lct_data.sub_class == I2O_LAN_ETHERNET)
+ dev->change_mtu = i2o_lan_change_mtu;
+
+ return dev;
+}
+
+#ifdef MODULE
+#define i2o_lan_init init_module
+#endif
+
+int __init i2o_lan_init(void)
+{
+ struct net_device *dev;
+ int i;
+
+ printk(KERN_INFO "I2O LAN OSM (C) 1999 University of Helsinki.\n");
+
+ /* Module params are used as global defaults for private values */
+
+ if (max_buckets_out > I2O_LAN_MAX_BUCKETS_OUT)
+ max_buckets_out = I2O_LAN_MAX_BUCKETS_OUT;
+ if (bucket_thresh > max_buckets_out)
+ bucket_thresh = max_buckets_out;
+
+ /* Install handlers for incoming replies */
+
+ if (i2o_install_handler(&i2o_lan_send_handler) < 0) {
+ printk(KERN_ERR "i2o_lan: Unable to register I2O LAN OSM.\n");
+ return -EINVAL;
+ }
+ lan_send_context = i2o_lan_send_handler.context;
+
+ if (i2o_install_handler(&i2o_lan_receive_handler) < 0) {
+ printk(KERN_ERR "i2o_lan: Unable to register I2O LAN OSM.\n");
+ return -EINVAL;
+ }
+ lan_receive_context = i2o_lan_receive_handler.context;
+
+ if (i2o_install_handler(&i2o_lan_handler) < 0) {
+ printk(KERN_ERR "i2o_lan: Unable to register I2O LAN OSM.\n");
+ return -EINVAL;
+ }
+ lan_context = i2o_lan_handler.context;
+
+ for(i=0; i <= MAX_LAN_CARDS; i++)
+ i2o_landevs[i] = NULL;
+
+ for (i=0; i < MAX_I2O_CONTROLLERS; i++) {
+ struct i2o_controller *iop = i2o_find_controller(i);
+ struct i2o_device *i2o_dev;
+
+ if (iop==NULL)
+ continue;
+
+ for (i2o_dev=iop->devices;i2o_dev != NULL;i2o_dev=i2o_dev->next) {
+
+ if (i2o_dev->lct_data.class_id != I2O_CLASS_LAN)
+ continue;
+
+ /* Make sure device not already claimed by an ISM */
+ if (i2o_dev->lct_data.user_tid != 0xFFF)
+ continue;
+
+ if (unit == MAX_LAN_CARDS) {
+ i2o_unlock_controller(iop);
+ printk(KERN_WARNING "i2o_lan: Too many I2O LAN devices.\n");
+ return -EINVAL;
+ }
+
+ dev = i2o_lan_register_device(i2o_dev);
+ if (dev == NULL) {
+ printk(KERN_ERR "i2o_lan: Unable to register I2O LAN device 0x%04x.\n",
+ i2o_dev->lct_data.sub_class);
+ continue;
+ }
+
+ printk(KERN_INFO "%s: I2O LAN device registered, "
+ "subclass = 0x%04x, unit = %d, tid = %d.\n",
+ dev->name, i2o_dev->lct_data.sub_class,
+ ((struct i2o_lan_local *)dev->priv)->unit,
+ i2o_dev->lct_data.tid);
+ }
+
+ i2o_unlock_controller(iop);
+ }
+
+ dprintk(KERN_INFO "%d I2O LAN devices found and registered.\n", unit+1);
+
+ return 0;
+}
+
+#ifdef MODULE
+
+void cleanup_module(void)
+{
+ int i;
+
+ for (i = 0; i <= unit; i++) {
+ struct net_device *dev = i2o_landevs[i];
+ struct i2o_lan_local *priv = (struct i2o_lan_local *)dev->priv;
+ struct i2o_device *i2o_dev = priv->i2o_dev;
+
+ switch (i2o_dev->lct_data.sub_class) {
+ case I2O_LAN_ETHERNET:
+ unregister_netdev(dev);
+ break;
+#ifdef CONFIG_FDDI
+ case I2O_LAN_FDDI:
+ unregister_netdevice(dev);
+ break;
+#endif
+#ifdef CONFIG_TR
+ case I2O_LAN_TR:
+ unregister_trdev(dev);
+ break;
+#endif
+#ifdef CONFIG_NET_FC
+ case I2O_LAN_FIBRE_CHANNEL:
+ unregister_fcdev(dev);
+ break;
+#endif
+ default:
+ printk(KERN_WARNING "%s: Spurious I2O LAN subclass 0x%08x.\n",
+ dev->name, i2o_dev->lct_data.sub_class);
+ }
+
+ dprintk(KERN_INFO "%s: I2O LAN device unregistered.\n",
+ dev->name);
+ kfree(dev);
+ }
+
+ i2o_remove_handler(&i2o_lan_handler);
+ i2o_remove_handler(&i2o_lan_send_handler);
+ i2o_remove_handler(&i2o_lan_receive_handler);
+}
+
+EXPORT_NO_SYMBOLS;
+
+MODULE_AUTHOR("University of Helsinki, Department of Computer Science");
+MODULE_DESCRIPTION("I2O LAN OSM");
+
+MODULE_PARM(max_buckets_out, "1-" __MODULE_STRING(I2O_LAN_MAX_BUCKETS_OUT) "i");
+MODULE_PARM_DESC(max_buckets_out, "Total number of buckets to post (1-)");
+MODULE_PARM(bucket_thresh, "1-" __MODULE_STRING(I2O_LAN_MAX_BUCKETS_OUT) "i");
+MODULE_PARM_DESC(bucket_thresh, "Bucket post threshold (1-)");
+MODULE_PARM(rx_copybreak, "1-" "i");
+MODULE_PARM_DESC(rx_copybreak, "Copy breakpoint: received frames shorter than this are copied (1-)");
+MODULE_PARM(tx_batch_mode, "0-2" "i");
+MODULE_PARM_DESC(tx_batch_mode, "0=Send immediately, 1=Send in batches, 2=Switch automatically");
+
+#endif
--- /dev/null
+/*
+ * i2o_lan.h I2O LAN Class definitions
+ *
+ * I2O LAN CLASS OSM May 26th 2000
+ *
+ * (C) Copyright 1999, 2000 University of Helsinki,
+ * Department of Computer Science
+ *
+ * This code is still under development / test.
+ *
+ * Author: Auvo Häkkinen <Auvo.Hakkinen@cs.Helsinki.FI>
+ * Juha Sievänen <Juha.Sievanen@cs.Helsinki.FI>
+ * Taneli Vähäkangas <Taneli.Vahakangas@cs.Helsinki.FI>
+ */
+
+#ifndef _I2O_LAN_H
+#define _I2O_LAN_H
+
+/* Default values for tunable parameters first */
+
+#define I2O_LAN_MAX_BUCKETS_OUT 96
+#define I2O_LAN_BUCKET_THRESH 18 /* 9 buckets in one message */
+#define I2O_LAN_RX_COPYBREAK 200
+#define I2O_LAN_TX_TIMEOUT (1*HZ)
+#define I2O_LAN_TX_BATCH_MODE 2 /* 2=automatic, 1=on, 0=off */
+#define I2O_LAN_EVENT_MASK 0 /* 0=None, 0xFFC00002=All */
+
+/* LAN types */
+#define I2O_LAN_ETHERNET 0x0030
+#define I2O_LAN_100VG 0x0040
+#define I2O_LAN_TR 0x0050
+#define I2O_LAN_FDDI 0x0060
+#define I2O_LAN_FIBRE_CHANNEL 0x0070
+#define I2O_LAN_UNKNOWN 0x00000000
+
+/* Connector types */
+
+/* Ethernet */
+#define I2O_LAN_AUI ((I2O_LAN_ETHERNET << 4) + 0x00000001)
+#define I2O_LAN_10BASE5 ((I2O_LAN_ETHERNET << 4) + 0x00000002)
+#define I2O_LAN_FIORL ((I2O_LAN_ETHERNET << 4) + 0x00000003)
+#define I2O_LAN_10BASE2 ((I2O_LAN_ETHERNET << 4) + 0x00000004)
+#define I2O_LAN_10BROAD36 ((I2O_LAN_ETHERNET << 4) + 0x00000005)
+#define I2O_LAN_10BASE_T ((I2O_LAN_ETHERNET << 4) + 0x00000006)
+#define I2O_LAN_10BASE_FP ((I2O_LAN_ETHERNET << 4) + 0x00000007)
+#define I2O_LAN_10BASE_FB ((I2O_LAN_ETHERNET << 4) + 0x00000008)
+#define I2O_LAN_10BASE_FL ((I2O_LAN_ETHERNET << 4) + 0x00000009)
+#define I2O_LAN_100BASE_TX ((I2O_LAN_ETHERNET << 4) + 0x0000000A)
+#define I2O_LAN_100BASE_FX ((I2O_LAN_ETHERNET << 4) + 0x0000000B)
+#define I2O_LAN_100BASE_T4 ((I2O_LAN_ETHERNET << 4) + 0x0000000C)
+#define I2O_LAN_1000BASE_SX ((I2O_LAN_ETHERNET << 4) + 0x0000000D)
+#define I2O_LAN_1000BASE_LX ((I2O_LAN_ETHERNET << 4) + 0x0000000E)
+#define I2O_LAN_1000BASE_CX ((I2O_LAN_ETHERNET << 4) + 0x0000000F)
+#define I2O_LAN_1000BASE_T ((I2O_LAN_ETHERNET << 4) + 0x00000010)
+
+/* AnyLAN */
+#define I2O_LAN_100VG_ETHERNET ((I2O_LAN_100VG << 4) + 0x00000001)
+#define I2O_LAN_100VG_TR ((I2O_LAN_100VG << 4) + 0x00000002)
+
+/* Token Ring */
+#define I2O_LAN_4MBIT ((I2O_LAN_TR << 4) + 0x00000001)
+#define I2O_LAN_16MBIT ((I2O_LAN_TR << 4) + 0x00000002)
+
+/* FDDI */
+#define I2O_LAN_125MBAUD ((I2O_LAN_FDDI << 4) + 0x00000001)
+
+/* Fibre Channel */
+#define I2O_LAN_POINT_POINT ((I2O_LAN_FIBRE_CHANNEL << 4) + 0x00000001)
+#define I2O_LAN_ARB_LOOP ((I2O_LAN_FIBRE_CHANNEL << 4) + 0x00000002)
+#define I2O_LAN_PUBLIC_LOOP ((I2O_LAN_FIBRE_CHANNEL << 4) + 0x00000003)
+#define I2O_LAN_FABRIC ((I2O_LAN_FIBRE_CHANNEL << 4) + 0x00000004)
+
+#define I2O_LAN_EMULATION 0x00000F00
+#define I2O_LAN_OTHER 0x00000F01
+#define I2O_LAN_DEFAULT 0xFFFFFFFF
+
+/* LAN class functions */
+
+#define LAN_PACKET_SEND 0x3B
+#define LAN_SDU_SEND 0x3D
+#define LAN_RECEIVE_POST 0x3E
+#define LAN_RESET 0x35
+#define LAN_SUSPEND 0x37
+
+/* LAN DetailedStatusCode defines */
+#define I2O_LAN_DSC_SUCCESS 0x00
+#define I2O_LAN_DSC_DEVICE_FAILURE 0x01
+#define I2O_LAN_DSC_DESTINATION_NOT_FOUND 0x02
+#define I2O_LAN_DSC_TRANSMIT_ERROR 0x03
+#define I2O_LAN_DSC_TRANSMIT_ABORTED 0x04
+#define I2O_LAN_DSC_RECEIVE_ERROR 0x05
+#define I2O_LAN_DSC_RECEIVE_ABORTED 0x06
+#define I2O_LAN_DSC_DMA_ERROR 0x07
+#define I2O_LAN_DSC_BAD_PACKET_DETECTED 0x08
+#define I2O_LAN_DSC_OUT_OF_MEMORY 0x09
+#define I2O_LAN_DSC_BUCKET_OVERRUN 0x0A
+#define I2O_LAN_DSC_IOP_INTERNAL_ERROR 0x0B
+#define I2O_LAN_DSC_CANCELED 0x0C
+#define I2O_LAN_DSC_INVALID_TRANSACTION_CONTEXT 0x0D
+#define I2O_LAN_DSC_DEST_ADDRESS_DETECTED 0x0E
+#define I2O_LAN_DSC_DEST_ADDRESS_OMITTED 0x0F
+#define I2O_LAN_DSC_PARTIAL_PACKET_RETURNED 0x10
+#define I2O_LAN_DSC_SUSPENDED 0x11
+
+struct i2o_packet_info {
+ u32 offset : 24;
+ u32 flags : 8;
+ u32 len : 24;
+ u32 status : 8;
+};
+
+struct i2o_bucket_descriptor {
+ u32 context; /* FIXME: 64bit support */
+ struct i2o_packet_info packet_info[1];
+};
+
+/* Event Indicator Mask Flags for LAN OSM */
+
+#define I2O_LAN_EVT_LINK_DOWN 0x01
+#define I2O_LAN_EVT_LINK_UP 0x02
+#define I2O_LAN_EVT_MEDIA_CHANGE 0x04
+
+#include <linux/netdevice.h>
+#include <linux/fddidevice.h>
+
+struct i2o_lan_local {
+ u8 unit;
+ struct i2o_device *i2o_dev;
+
+ struct fddi_statistics stats; /* see also struct net_device_stats */
+ unsigned short (*type_trans)(struct sk_buff *, struct net_device *);
+ atomic_t buckets_out; /* nbr of unused buckets on DDM */
+ atomic_t tx_out; /* outstanding TXes */
+ u8 tx_count; /* packets in one TX message frame */
+ u16 tx_max_out; /* DDM's Tx queue len */
+ u8 sgl_max; /* max SGLs in one message frame */
+ u32 m; /* IOP address of the batch msg frame */
+
+ struct tq_struct i2o_batch_send_task;
+ int send_active;
+ struct sk_buff **i2o_fbl; /* Free bucket list (to reuse skbs) */
+ int i2o_fbl_tail;
+ spinlock_t fbl_lock;
+
+ spinlock_t tx_lock;
+
+ u32 max_size_mc_table; /* max number of multicast addresses */
+
+ /* LAN OSM configurable parameters are here: */
+
+ u16 max_buckets_out; /* max nbr of buckets to send to DDM */
+ u16 bucket_thresh; /* send more when this many used */
+ u16 rx_copybreak;
+
+ u8 tx_batch_mode; /* Set when using batch mode sends */
+ u32 i2o_event_mask; /* To turn on interesting event flags */
+};
+
+#endif /* _I2O_LAN_H */
--- /dev/null
+/*
+ * Find I2O capable controllers on the PCI bus, and register/install
+ * them with the I2O layer
+ *
+ * (C) Copyright 1999 Red Hat Software
+ *
+ * Written by Alan Cox, Building Number Three Ltd
+ * Modified by Deepak Saxena <deepak@plexity.net>
+ * Modified by Boji T Kannanthanam <boji.t.kannanthanam@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ * TODO:
+ * Support polled I2O PCI controllers.
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/i2o.h>
+#include <linux/errno.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <asm/io.h>
+
+#ifdef CONFIG_MTRR
+#include <asm/mtrr.h>
+#endif // CONFIG_MTRR
+
+#ifdef MODULE
+/*
+ * Core function table
+ * See <include/linux/i2o.h> for an explanation
+ */
+static struct i2o_core_func_table *core;
+
+/* Core attach function */
+extern int i2o_pci_core_attach(struct i2o_core_func_table *);
+extern void i2o_pci_core_detach(void);
+#endif /* MODULE */
+
+/*
+ * Free bus specific resources
+ */
+static void i2o_pci_dispose(struct i2o_controller *c)
+{
+ I2O_IRQ_WRITE32(c,0xFFFFFFFF);
+ if(c->bus.pci.irq > 0)
+ free_irq(c->bus.pci.irq, c);
+ iounmap(((u8 *)c->post_port)-0x40);
+
+#ifdef CONFIG_MTRR
+ if(c->bus.pci.mtrr_reg0 > 0)
+ mtrr_del(c->bus.pci.mtrr_reg0, 0, 0);
+ if(c->bus.pci.mtrr_reg1 > 0)
+ mtrr_del(c->bus.pci.mtrr_reg1, 0, 0);
+#endif
+}
+
+/*
+ * No real bus specific handling yet (note that later we will
+ * need to 'steal' PCI devices on i960 mainboards)
+ */
+
+static int i2o_pci_bind(struct i2o_controller *c, struct i2o_device *dev)
+{
+ MOD_INC_USE_COUNT;
+ return 0;
+}
+
+static int i2o_pci_unbind(struct i2o_controller *c, struct i2o_device *dev)
+{
+ MOD_DEC_USE_COUNT;
+ return 0;
+}
+
+/*
+ * Bus specific enable/disable functions
+ */
+static void i2o_pci_enable(struct i2o_controller *c)
+{
+ I2O_IRQ_WRITE32(c, 0);
+ c->enabled = 1;
+}
+
+static void i2o_pci_disable(struct i2o_controller *c)
+{
+ I2O_IRQ_WRITE32(c, 0xFFFFFFFF);
+ c->enabled = 0;
+}
+
+/*
+ * Bus specific interrupt handler
+ */
+
+static void i2o_pci_interrupt(int irq, void *dev_id, struct pt_regs *r)
+{
+ struct i2o_controller *c = dev_id;
+#ifdef MODULE
+ core->run_queue(c);
+#else
+ i2o_run_queue(c);
+#endif /* MODULE */
+}
+
+/*
+ * Install a PCI (or in theory AGP) i2o controller
+ *
+ * TODO: Add support for polled controllers
+ */
+int __init i2o_pci_install(struct pci_dev *dev)
+{
+ struct i2o_controller *c=kmalloc(sizeof(struct i2o_controller),
+ GFP_KERNEL);
+ u8 *mem;
+ u32 memptr = 0;
+ u32 size;
+
+ int i;
+
+ if(c==NULL)
+ {
+ printk(KERN_ERR "i2o: Insufficient memory to add controller.\n");
+ return -ENOMEM;
+ }
+ memset(c, 0, sizeof(*c));
+
+ for(i=0; i<6; i++)
+ {
+ /* Skip I/O spaces */
+ if(!(pci_resource_flags(dev, i) & IORESOURCE_IO))
+ {
+ memptr = pci_resource_start(dev, i);
+ break;
+ }
+ }
+
+ if(i==6)
+ {
+ printk(KERN_ERR "i2o: I2O controller has no memory regions defined.\n");
+ kfree(c);
+ return -EINVAL;
+ }
+
+ size = dev->resource[i].end-dev->resource[i].start+1;
+ /* Map the I2O controller */
+
+ printk(KERN_INFO "i2o: PCI I2O controller at 0x%08X size=%d\n", memptr, size);
+ mem = ioremap(memptr, size);
+ if(mem==NULL)
+ {
+ printk(KERN_ERR "i2o: Unable to map controller.\n");
+ kfree(c);
+ return -EINVAL;
+ }
+
+ c->bus.pci.irq = -1;
+ c->bus.pci.queue_buggy = 0;
+ c->bus.pci.dpt = 0;
+ c->bus.pci.short_req = 0;
+ c->bus.pci.pdev = dev;
+
+ c->irq_mask = (volatile u32 *)(mem+0x34);
+ c->post_port = (volatile u32 *)(mem+0x40);
+ c->reply_port = (volatile u32 *)(mem+0x44);
+
+ c->mem_phys = memptr;
+ c->mem_offset = (u32)mem;
+ c->destructor = i2o_pci_dispose;
+
+ c->bind = i2o_pci_bind;
+ c->unbind = i2o_pci_unbind;
+ c->bus_enable = i2o_pci_enable;
+ c->bus_disable = i2o_pci_disable;
+
+ c->type = I2O_TYPE_PCI;
+
+ /*
+ * Cards that fall apart if you hit them with large I/O
+ * loads...
+ */
+
+ if(dev->vendor == PCI_VENDOR_ID_NCR && dev->device == 0x0630)
+ {
+ c->bus.pci.short_req=1;
+ printk(KERN_INFO "I2O: Symbios FC920 workarounds activated.\n");
+ }
+ if(dev->subsystem_vendor == PCI_VENDOR_ID_PROMISE)
+ {
+ c->bus.pci.queue_buggy=1;
+ printk(KERN_INFO "I2O: Promise workarounds activated.\n");
+ }
+
+ /*
+ * Cards that go bananas if you quiesce them before you reset
+ * them
+ */
+
+ if(dev->vendor == PCI_VENDOR_ID_DPT)
+ c->bus.pci.dpt=1;
+
+ /*
+ * Enable Write Combining MTRR for IOP's memory region
+ */
+#ifdef CONFIG_MTRR
+ c->bus.pci.mtrr_reg0 =
+ mtrr_add(c->mem_phys, size, MTRR_TYPE_WRCOMB, 1);
+/*
+* If it is an INTEL i960 I/O processor then set the first 64K to Uncacheable
+* since the region contains the Messaging unit which shouldn't be cached.
+*/
+ c->bus.pci.mtrr_reg1 = -1;
+ if(dev->vendor == PCI_VENDOR_ID_INTEL || dev->vendor == PCI_VENDOR_ID_DPT)
+ {
+ printk(KERN_INFO "I2O: MTRR workaround for Intel i960 processor\n");
+ c->bus.pci.mtrr_reg1 = mtrr_add(c->mem_phys, 65536, MTRR_TYPE_UNCACHABLE, 1);
+ if(c->bus.pci.mtrr_reg1< 0)
+ printk(KERN_INFO "i2o_pci: Error in setting MTRR_TYPE_UNCACHABLE\n");
+ }
+
+#endif
+
+ I2O_IRQ_WRITE32(c,0xFFFFFFFF);
+
+#ifdef MODULE
+ i = core->install(c);
+#else
+ i = i2o_install_controller(c);
+#endif /* MODULE */
+
+ if(i<0)
+ {
+ printk(KERN_ERR "i2o: Unable to install controller.\n");
+ kfree(c);
+ iounmap(mem);
+ return i;
+ }
+
+ c->bus.pci.irq = dev->irq;
+ if(c->bus.pci.irq)
+ {
+ i=request_irq(dev->irq, i2o_pci_interrupt, SA_SHIRQ,
+ c->name, c);
+ if(i<0)
+ {
+ printk(KERN_ERR "%s: unable to allocate interrupt %d.\n",
+ c->name, dev->irq);
+ c->bus.pci.irq = -1;
+#ifdef MODULE
+ core->delete(c);
+#else
+ i2o_delete_controller(c);
+#endif /* MODULE */
+ iounmap(mem);
+ return -EBUSY;
+ }
+ }
+
+ printk(KERN_INFO "%s: Installed at IRQ%d\n", c->name, dev->irq);
+ I2O_IRQ_WRITE32(c,0x0);
+ c->enabled = 1;
+ return 0;
+}
+
+int __init i2o_pci_scan(void)
+{
+ struct pci_dev *dev;
+ int count=0;
+
+ printk(KERN_INFO "i2o: Checking for PCI I2O controllers...\n");
+
+ pci_for_each_dev(dev)
+ {
+ if((dev->class>>8)!=PCI_CLASS_INTELLIGENT_I2O)
+ continue;
+ if((dev->class&0xFF)>1)
+ {
+ printk(KERN_INFO "i2o: I2O Controller found but does not support I2O 1.5 (skipping).\n");
+ continue;
+ }
+ if (pci_enable_device(dev))
+ continue;
+ printk(KERN_INFO "i2o: I2O controller on bus %d at %d.\n",
+ dev->bus->number, dev->devfn);
+ pci_set_master(dev);
+ if(i2o_pci_install(dev)==0)
+ count++;
+ }
+ if(count)
+ printk(KERN_INFO "i2o: %d I2O controller%s found and installed.\n", count,
+ count==1?"":"s");
+ return count?count:-ENODEV;
+}
+
+#ifdef I2O_HOTPLUG_SUPPORT
+/*
+ * Activate a newly found PCI I2O controller
+ * Not used now, but will be needed in future for
+ * hot plug PCI support
+ */
+static void i2o_pci_activate(struct i2o_controller *c)
+{
+ if(c->type == I2O_TYPE_PCI)
+ {
+ I2O_IRQ_WRITE32(c,0);
+#ifdef MODULE
+ if(core->activate(c))
+#else
+ if(i2o_activate_controller(c))
+#endif /* MODULE */
+ {
+ printk(KERN_ERR "%s: Failed to initialize.\n", c->name);
+#ifdef MODULE
+ core->unlock(c);
+ core->delete(c);
+#else
+ i2o_unlock_controller(c);
+ i2o_delete_controller(c);
+#endif
+ return;
+ }
+ }
+}
+#endif // I2O_HOTPLUG_SUPPORT
+
+#ifdef MODULE
+
+int i2o_pci_core_attach(struct i2o_core_func_table *table)
+{
+ MOD_INC_USE_COUNT;
+
+ core = table;
+
+ return i2o_pci_scan();
+}
+
+void i2o_pci_core_detach(void)
+{
+ core = NULL;
+
+ MOD_DEC_USE_COUNT;
+}
+
+int init_module(void)
+{
+ printk(KERN_INFO "Linux I2O PCI support (c) 1999 Red Hat Software.\n");
+
+ core = NULL;
+
+ return 0;
+
+}
+
+void cleanup_module(void)
+{
+}
+
+EXPORT_SYMBOL(i2o_pci_core_attach);
+EXPORT_SYMBOL(i2o_pci_core_detach);
+
+MODULE_AUTHOR("Red Hat Software");
+MODULE_DESCRIPTION("I2O PCI Interface");
+
+#else
+void __init i2o_pci_init(void)
+{
+ printk(KERN_INFO "Linux I2O PCI support (c) 1999 Red Hat Software.\n");
+ i2o_pci_scan();
+}
+#endif
--- /dev/null
+/*
+ * procfs handler for Linux I2O subsystem
+ *
+ * (c) Copyright 1999 Deepak Saxena
+ *
+ * Originally written by Deepak Saxena(deepak@plexity.net)
+ *
+ * This program is free software. You can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ * This is an initial test release. The code is based on the design
+ * of the ide procfs system (drivers/block/ide-proc.c). Some code
+ * taken from i2o-core module by Alan Cox.
+ *
+ * DISCLAIMER: This code is still under development/test and may cause
+ * your system to behave unpredictably. Use at your own discretion.
+ *
+ * LAN entries by Juha Sievänen (Juha.Sievanen@cs.Helsinki.FI),
+ * Auvo Häkkinen (Auvo.Hakkinen@cs.Helsinki.FI)
+ * University of Helsinki, Department of Computer Science
+ */
+
+/*
+ * set tabstop=3
+ */
+
+/*
+ * TODO List
+ *
+ * - Add support for any version 2.0 spec changes once 2.0 IRTOS
+ *   is available to test with
+ * - Clean up code to use official structure definitions
+ */
+
+// FIXME!
+#define FMT_U64_HEX "0x%08x%08x"
+#define U64_VAL(pu64) *((u32*)(pu64)+1), *((u32*)(pu64))
+
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/i2o.h>
+#include <linux/proc_fs.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/errno.h>
+#include <linux/spinlock.h>
+
+#include <asm/io.h>
+#include <asm/uaccess.h>
+#include <asm/byteorder.h>
+
+#include "i2o_lan.h"
+
+/*
+ * Structure used to define /proc entries
+ */
+typedef struct _i2o_proc_entry_t
+{
+ char *name; /* entry name */
+ mode_t mode; /* mode */
+ read_proc_t *read_proc; /* read func */
+ write_proc_t *write_proc; /* write func */
+} i2o_proc_entry;
+
+// #define DRIVERDEBUG
+
+static int i2o_proc_read_lct(char *, char **, off_t, int, int *, void *);
+static int i2o_proc_read_hrt(char *, char **, off_t, int, int *, void *);
+static int i2o_proc_read_status(char *, char **, off_t, int, int *, void *);
+
+static int i2o_proc_read_hw(char *, char **, off_t, int, int *, void *);
+static int i2o_proc_read_ddm_table(char *, char **, off_t, int, int *, void *);
+static int i2o_proc_read_driver_store(char *, char **, off_t, int, int *, void *);
+static int i2o_proc_read_drivers_stored(char *, char **, off_t, int, int *, void *);
+
+static int i2o_proc_read_groups(char *, char **, off_t, int, int *, void *);
+static int i2o_proc_read_phys_device(char *, char **, off_t, int, int *, void *);
+static int i2o_proc_read_claimed(char *, char **, off_t, int, int *, void *);
+static int i2o_proc_read_users(char *, char **, off_t, int, int *, void *);
+static int i2o_proc_read_priv_msgs(char *, char **, off_t, int, int *, void *);
+static int i2o_proc_read_authorized_users(char *, char **, off_t, int, int *, void *);
+
+static int i2o_proc_read_dev_name(char *, char **, off_t, int, int *, void *);
+static int i2o_proc_read_dev_identity(char *, char **, off_t, int, int *, void *);
+static int i2o_proc_read_ddm_identity(char *, char **, off_t, int, int *, void *);
+static int i2o_proc_read_uinfo(char *, char **, off_t, int, int *, void *);
+static int i2o_proc_read_sgl_limits(char *, char **, off_t, int, int *, void *);
+
+static int i2o_proc_read_sensors(char *, char **, off_t, int, int *, void *);
+
+static int print_serial_number(char *, int, u8 *, int);
+
+static int i2o_proc_create_entries(void *, i2o_proc_entry *,
+ struct proc_dir_entry *);
+static void i2o_proc_remove_entries(i2o_proc_entry *, struct proc_dir_entry *);
+static int i2o_proc_add_controller(struct i2o_controller *,
+ struct proc_dir_entry * );
+static void i2o_proc_remove_controller(struct i2o_controller *,
+ struct proc_dir_entry * );
+static void i2o_proc_add_device(struct i2o_device *, struct proc_dir_entry *);
+static void i2o_proc_remove_device(struct i2o_device *);
+static int create_i2o_procfs(void);
+static int destroy_i2o_procfs(void);
+static void i2o_proc_new_dev(struct i2o_controller *, struct i2o_device *);
+static void i2o_proc_dev_del(struct i2o_controller *, struct i2o_device *);
+
+static int i2o_proc_read_lan_dev_info(char *, char **, off_t, int, int *,
+ void *);
+static int i2o_proc_read_lan_mac_addr(char *, char **, off_t, int, int *,
+ void *);
+static int i2o_proc_read_lan_mcast_addr(char *, char **, off_t, int, int *,
+ void *);
+static int i2o_proc_read_lan_batch_control(char *, char **, off_t, int, int *,
+ void *);
+static int i2o_proc_read_lan_operation(char *, char **, off_t, int, int *,
+ void *);
+static int i2o_proc_read_lan_media_operation(char *, char **, off_t, int,
+ int *, void *);
+static int i2o_proc_read_lan_alt_addr(char *, char **, off_t, int, int *,
+ void *);
+static int i2o_proc_read_lan_tx_info(char *, char **, off_t, int, int *,
+ void *);
+static int i2o_proc_read_lan_rx_info(char *, char **, off_t, int, int *,
+ void *);
+static int i2o_proc_read_lan_hist_stats(char *, char **, off_t, int, int *,
+ void *);
+static int i2o_proc_read_lan_eth_stats(char *, char **, off_t, int,
+ int *, void *);
+static int i2o_proc_read_lan_tr_stats(char *, char **, off_t, int, int *,
+ void *);
+static int i2o_proc_read_lan_fddi_stats(char *, char **, off_t, int, int *,
+ void *);
+
+static struct proc_dir_entry *i2o_proc_dir_root;
+
+/*
+ * I2O OSM descriptor
+ */
+static struct i2o_handler i2o_proc_handler =
+{
+ NULL,
+ i2o_proc_new_dev,
+ i2o_proc_dev_del,
+ NULL,
+ "I2O procfs Layer",
+ 0,
+ 0xffffffff // All classes
+};
+
+/*
+ * IOP specific entries...write field just in case someone
+ * ever wants one.
+ */
+static i2o_proc_entry generic_iop_entries[] =
+{
+ {"hrt", S_IFREG|S_IRUGO, i2o_proc_read_hrt, NULL},
+ {"lct", S_IFREG|S_IRUGO, i2o_proc_read_lct, NULL},
+ {"status", S_IFREG|S_IRUGO, i2o_proc_read_status, NULL},
+ {"hw", S_IFREG|S_IRUGO, i2o_proc_read_hw, NULL},
+ {"ddm_table", S_IFREG|S_IRUGO, i2o_proc_read_ddm_table, NULL},
+ {"driver_store", S_IFREG|S_IRUGO, i2o_proc_read_driver_store, NULL},
+ {"drivers_stored", S_IFREG|S_IRUGO, i2o_proc_read_drivers_stored, NULL},
+ {NULL, 0, NULL, NULL}
+};
+
+/*
+ * Device specific entries
+ */
+static i2o_proc_entry generic_dev_entries[] =
+{
+ {"groups", S_IFREG|S_IRUGO, i2o_proc_read_groups, NULL},
+ {"phys_dev", S_IFREG|S_IRUGO, i2o_proc_read_phys_device, NULL},
+ {"claimed", S_IFREG|S_IRUGO, i2o_proc_read_claimed, NULL},
+ {"users", S_IFREG|S_IRUGO, i2o_proc_read_users, NULL},
+ {"priv_msgs", S_IFREG|S_IRUGO, i2o_proc_read_priv_msgs, NULL},
+ {"authorized_users", S_IFREG|S_IRUGO, i2o_proc_read_authorized_users, NULL},
+ {"dev_identity", S_IFREG|S_IRUGO, i2o_proc_read_dev_identity, NULL},
+ {"ddm_identity", S_IFREG|S_IRUGO, i2o_proc_read_ddm_identity, NULL},
+ {"user_info", S_IFREG|S_IRUGO, i2o_proc_read_uinfo, NULL},
+ {"sgl_limits", S_IFREG|S_IRUGO, i2o_proc_read_sgl_limits, NULL},
+ {"sensors", S_IFREG|S_IRUGO, i2o_proc_read_sensors, NULL},
+ {NULL, 0, NULL, NULL}
+};
+
+/*
+ * Storage unit specific entries (SCSI Periph, BS) with device names
+ */
+static i2o_proc_entry rbs_dev_entries[] =
+{
+ {"dev_name", S_IFREG|S_IRUGO, i2o_proc_read_dev_name, NULL},
+ {NULL, 0, NULL, NULL}
+};
+
+#define SCSI_TABLE_SIZE 13
+static char *scsi_devices[] =
+{
+ "Direct-Access Read/Write",
+ "Sequential-Access Storage",
+ "Printer",
+ "Processor",
+ "WORM Device",
+ "CD-ROM Device",
+ "Scanner Device",
+ "Optical Memory Device",
+ "Medium Changer Device",
+ "Communications Device",
+ "Graphics Art Pre-Press Device",
+ "Graphics Art Pre-Press Device",
+ "Array Controller Device"
+};
+
+/* private */
+
+/*
+ * Generic LAN specific entries
+ *
+ * Should groups with r/w entries have their own subdirectory?
+ *
+ */
+static i2o_proc_entry lan_entries[] =
+{
+ {"lan_dev_info", S_IFREG|S_IRUGO, i2o_proc_read_lan_dev_info, NULL},
+ {"lan_mac_addr", S_IFREG|S_IRUGO, i2o_proc_read_lan_mac_addr, NULL},
+ {"lan_mcast_addr", S_IFREG|S_IRUGO|S_IWUSR,
+ i2o_proc_read_lan_mcast_addr, NULL},
+ {"lan_batch_ctrl", S_IFREG|S_IRUGO|S_IWUSR,
+ i2o_proc_read_lan_batch_control, NULL},
+ {"lan_operation", S_IFREG|S_IRUGO, i2o_proc_read_lan_operation, NULL},
+ {"lan_media_operation", S_IFREG|S_IRUGO,
+ i2o_proc_read_lan_media_operation, NULL},
+ {"lan_alt_addr", S_IFREG|S_IRUGO, i2o_proc_read_lan_alt_addr, NULL},
+ {"lan_tx_info", S_IFREG|S_IRUGO, i2o_proc_read_lan_tx_info, NULL},
+ {"lan_rx_info", S_IFREG|S_IRUGO, i2o_proc_read_lan_rx_info, NULL},
+
+ {"lan_hist_stats", S_IFREG|S_IRUGO, i2o_proc_read_lan_hist_stats, NULL},
+ {NULL, 0, NULL, NULL}
+};
+
+/*
+ * Port specific LAN entries
+ *
+ */
+static i2o_proc_entry lan_eth_entries[] =
+{
+ {"lan_eth_stats", S_IFREG|S_IRUGO, i2o_proc_read_lan_eth_stats, NULL},
+ {NULL, 0, NULL, NULL}
+};
+
+static i2o_proc_entry lan_tr_entries[] =
+{
+ {"lan_tr_stats", S_IFREG|S_IRUGO, i2o_proc_read_lan_tr_stats, NULL},
+ {NULL, 0, NULL, NULL}
+};
+
+static i2o_proc_entry lan_fddi_entries[] =
+{
+ {"lan_fddi_stats", S_IFREG|S_IRUGO, i2o_proc_read_lan_fddi_stats, NULL},
+ {NULL, 0, NULL, NULL}
+};
+
+
+static char *chtostr(u8 *chars, int n)
+{
+ /*
+ * Use a static buffer: returning stack storage here would leave
+ * callers with a dangling pointer. The result is only valid until
+ * the next call, and all callers consume it immediately.
+ */
+ static char tmp[256];
+ tmp[0] = 0;
+ return strncat(tmp, (char *)chars, n);
+}
+
+static int i2o_report_query_status(char *buf, int block_status, char *group)
+{
+ switch (block_status)
+ {
+ case -ETIMEDOUT:
+ return sprintf(buf, "Timeout reading group %s.\n",group);
+ case -ENOMEM:
+ return sprintf(buf, "No free memory to read the table.\n");
+ case -I2O_PARAMS_STATUS_INVALID_GROUP_ID:
+ return sprintf(buf, "Group %s not supported.\n", group);
+ default:
+ return sprintf(buf, "Error reading group %s. BlockStatus 0x%02X\n",
+ group, -block_status);
+ }
+}
+
+static char* bus_strings[] =
+{
+ "Local Bus",
+ "ISA",
+ "EISA",
+ "MCA",
+ "PCI",
+ "PCMCIA",
+ "NUBUS",
+ "CARDBUS"
+};
+
+static spinlock_t i2o_proc_lock = SPIN_LOCK_UNLOCKED;
+
+int i2o_proc_read_hrt(char *buf, char **start, off_t offset, int len,
+ int *eof, void *data)
+{
+ struct i2o_controller *c = (struct i2o_controller *)data;
+ i2o_hrt *hrt = (i2o_hrt *)c->hrt;
+ u32 bus;
+ int count;
+ int i;
+
+ spin_lock(&i2o_proc_lock);
+
+ len = 0;
+
+ if(hrt->hrt_version)
+ {
+ len += sprintf(buf+len,
+ "HRT table version is newer than this driver supports.\n");
+ spin_unlock(&i2o_proc_lock);
+ return len;
+ }
+
+ count = hrt->num_entries;
+
+ if((count * hrt->entry_len + 8) > 2048) {
+ printk(KERN_WARNING "i2o_proc: HRT does not fit into buffer\n");
+ len += sprintf(buf+len,
+ "HRT table too big to fit in buffer.\n");
+ spin_unlock(&i2o_proc_lock);
+ return len;
+ }
+
+ len += sprintf(buf+len, "HRT has %d entries of %d bytes each.\n",
+ count, hrt->entry_len << 2);
+
+ for(i = 0; i < count; i++)
+ {
+ len += sprintf(buf+len, "Entry %d:\n", i);
+ len += sprintf(buf+len, " Adapter ID: %0#10x\n",
+ hrt->hrt_entry[i].adapter_id);
+ len += sprintf(buf+len, " Controlling tid: %0#6x\n",
+ hrt->hrt_entry[i].parent_tid);
+
+ if(hrt->hrt_entry[i].bus_type != 0x80)
+ {
+ bus = hrt->hrt_entry[i].bus_type;
+ /* bus_strings has only 8 entries; guard the table lookup */
+ if(bus < sizeof(bus_strings)/sizeof(bus_strings[0]))
+ len += sprintf(buf+len, " %s Information\n", bus_strings[bus]);
+
+ switch(bus)
+ {
+ case I2O_BUS_LOCAL:
+ len += sprintf(buf+len, " IOBase: %0#6x,",
+ hrt->hrt_entry[i].bus.local_bus.LbBaseIOPort);
+ len += sprintf(buf+len, " MemoryBase: %0#10x\n",
+ hrt->hrt_entry[i].bus.local_bus.LbBaseMemoryAddress);
+ break;
+
+ case I2O_BUS_ISA:
+ len += sprintf(buf+len, " IOBase: %0#6x,",
+ hrt->hrt_entry[i].bus.isa_bus.IsaBaseIOPort);
+ len += sprintf(buf+len, " MemoryBase: %0#10x,",
+ hrt->hrt_entry[i].bus.isa_bus.IsaBaseMemoryAddress);
+ len += sprintf(buf+len, " CSN: %0#4x,",
+ hrt->hrt_entry[i].bus.isa_bus.CSN);
+ break;
+
+ case I2O_BUS_EISA:
+ len += sprintf(buf+len, " IOBase: %0#6x,",
+ hrt->hrt_entry[i].bus.eisa_bus.EisaBaseIOPort);
+ len += sprintf(buf+len, " MemoryBase: %0#10x,",
+ hrt->hrt_entry[i].bus.eisa_bus.EisaBaseMemoryAddress);
+ len += sprintf(buf+len, " Slot: %0#4x,",
+ hrt->hrt_entry[i].bus.eisa_bus.EisaSlotNumber);
+ break;
+
+ case I2O_BUS_MCA:
+ len += sprintf(buf+len, " IOBase: %0#6x,",
+ hrt->hrt_entry[i].bus.mca_bus.McaBaseIOPort);
+ len += sprintf(buf+len, " MemoryBase: %0#10x,",
+ hrt->hrt_entry[i].bus.mca_bus.McaBaseMemoryAddress);
+ len += sprintf(buf+len, " Slot: %0#4x,",
+ hrt->hrt_entry[i].bus.mca_bus.McaSlotNumber);
+ break;
+
+ case I2O_BUS_PCI:
+ len += sprintf(buf+len, " Bus: %0#4x",
+ hrt->hrt_entry[i].bus.pci_bus.PciBusNumber);
+ len += sprintf(buf+len, " Dev: %0#4x",
+ hrt->hrt_entry[i].bus.pci_bus.PciDeviceNumber);
+ len += sprintf(buf+len, " Func: %0#4x",
+ hrt->hrt_entry[i].bus.pci_bus.PciFunctionNumber);
+ len += sprintf(buf+len, " Vendor: %0#6x",
+ hrt->hrt_entry[i].bus.pci_bus.PciVendorID);
+ len += sprintf(buf+len, " Device: %0#6x\n",
+ hrt->hrt_entry[i].bus.pci_bus.PciDeviceID);
+ break;
+
+ default:
+ len += sprintf(buf+len, " Unsupported Bus Type\n");
+ }
+ }
+ else
+ len += sprintf(buf+len, " Unknown Bus Type\n");
+ }
+
+ spin_unlock(&i2o_proc_lock);
+
+ return len;
+}
+
+int i2o_proc_read_lct(char *buf, char **start, off_t offset, int len,
+ int *eof, void *data)
+{
+ struct i2o_controller *c = (struct i2o_controller*)data;
+ i2o_lct *lct = (i2o_lct *)c->lct;
+ int entries;
+ int i;
+
+#define BUS_TABLE_SIZE 3
+ static char *bus_ports[] =
+ {
+ "Generic Bus",
+ "SCSI Bus",
+ "Fibre Channel Bus"
+ };
+
+ spin_lock(&i2o_proc_lock);
+ len = 0;
+
+ entries = (lct->table_size - 3)/9;
+
+ len += sprintf(buf, "LCT contains %d %s\n", entries,
+ entries == 1 ? "entry" : "entries");
+ if(lct->boot_tid)
+ len += sprintf(buf+len, "Boot Device @ ID %d\n", lct->boot_tid);
+
+ len +=
+ sprintf(buf+len, "Current Change Indicator: %#10x\n", lct->change_ind);
+
+ for(i = 0; i < entries; i++)
+ {
+ len += sprintf(buf+len, "Entry %d\n", i);
+ len += sprintf(buf+len, " Class, SubClass : %s", i2o_get_class_name(lct->lct_entry[i].class_id));
+
+ /*
+ * Classes which we'll print subclass info for
+ */
+ switch(lct->lct_entry[i].class_id & 0xFFF)
+ {
+ case I2O_CLASS_RANDOM_BLOCK_STORAGE:
+ switch(lct->lct_entry[i].sub_class)
+ {
+ case 0x00:
+ len += sprintf(buf+len, ", Direct-Access Read/Write");
+ break;
+
+ case 0x04:
+ len += sprintf(buf+len, ", WORM Drive");
+ break;
+
+ case 0x05:
+ len += sprintf(buf+len, ", CD-ROM Drive");
+ break;
+
+ case 0x07:
+ len += sprintf(buf+len, ", Optical Memory Device");
+ break;
+
+ default:
+ len += sprintf(buf+len, ", Unknown (0x%02x)",
+ lct->lct_entry[i].sub_class);
+ break;
+ }
+ break;
+
+ case I2O_CLASS_LAN:
+ switch(lct->lct_entry[i].sub_class & 0xFF)
+ {
+ case 0x30:
+ len += sprintf(buf+len, ", Ethernet");
+ break;
+
+ case 0x40:
+ len += sprintf(buf+len, ", 100base VG");
+ break;
+
+ case 0x50:
+ len += sprintf(buf+len, ", IEEE 802.5/Token-Ring");
+ break;
+
+ case 0x60:
+ len += sprintf(buf+len, ", ANSI X3T9.5 FDDI");
+ break;
+
+ case 0x70:
+ len += sprintf(buf+len, ", Fibre Channel");
+ break;
+
+ default:
+ len += sprintf(buf+len, ", Unknown Sub-Class (0x%02x)",
+ lct->lct_entry[i].sub_class & 0xFF);
+ break;
+ }
+ break;
+
+ case I2O_CLASS_SCSI_PERIPHERAL:
+ if(lct->lct_entry[i].sub_class < SCSI_TABLE_SIZE)
+ len += sprintf(buf+len, ", %s",
+ scsi_devices[lct->lct_entry[i].sub_class]);
+ else
+ len += sprintf(buf+len, ", Unknown Device Type");
+ break;
+
+ case I2O_CLASS_BUS_ADAPTER_PORT:
+ if(lct->lct_entry[i].sub_class < BUS_TABLE_SIZE)
+ len += sprintf(buf+len, ", %s",
+ bus_ports[lct->lct_entry[i].sub_class]);
+ else
+ len += sprintf(buf+len, ", Unknown Bus Type");
+ break;
+ }
+ len += sprintf(buf+len, "\n");
+
+ len += sprintf(buf+len, " Local TID : 0x%03x\n", lct->lct_entry[i].tid);
+ len += sprintf(buf+len, " User TID : 0x%03x\n", lct->lct_entry[i].user_tid);
+ len += sprintf(buf+len, " Parent TID : 0x%03x\n",
+ lct->lct_entry[i].parent_tid);
+ len += sprintf(buf+len, " Identity Tag : 0x%x%x%x%x%x%x%x%x\n",
+ lct->lct_entry[i].identity_tag[0],
+ lct->lct_entry[i].identity_tag[1],
+ lct->lct_entry[i].identity_tag[2],
+ lct->lct_entry[i].identity_tag[3],
+ lct->lct_entry[i].identity_tag[4],
+ lct->lct_entry[i].identity_tag[5],
+ lct->lct_entry[i].identity_tag[6],
+ lct->lct_entry[i].identity_tag[7]);
+ len += sprintf(buf+len, " Change Indicator : %0#10x\n",
+ lct->lct_entry[i].change_ind);
+ len += sprintf(buf+len, " Event Capab Mask : %0#10x\n",
+ lct->lct_entry[i].device_flags);
+ }
+
+ spin_unlock(&i2o_proc_lock);
+ return len;
+}
+
+int i2o_proc_read_status(char *buf, char **start, off_t offset, int len,
+ int *eof, void *data)
+{
+ struct i2o_controller *c = (struct i2o_controller*)data;
+ char prodstr[25];
+ int version;
+
+ spin_lock(&i2o_proc_lock);
+ len = 0;
+
+ i2o_status_get(c); /* reread the status block */
+
+ len += sprintf(buf+len,"Organization ID : %0#6x\n",
+ c->status_block->org_id);
+
+ version = c->status_block->i2o_version;
+
+/* FIXME for Spec 2.0
+ if (version == 0x02) {
+ len += sprintf(buf+len,"Lowest I2O version supported: ");
+ switch(workspace[2]) {
+ case 0x00:
+ len += sprintf(buf+len,"1.0\n");
+ break;
+ case 0x01:
+ len += sprintf(buf+len,"1.5\n");
+ break;
+ case 0x02:
+ len += sprintf(buf+len,"2.0\n");
+ break;
+ }
+
+ len += sprintf(buf+len, "Highest I2O version supported: ");
+ switch(workspace[3]) {
+ case 0x00:
+ len += sprintf(buf+len,"1.0\n");
+ break;
+ case 0x01:
+ len += sprintf(buf+len,"1.5\n");
+ break;
+ case 0x02:
+ len += sprintf(buf+len,"2.0\n");
+ break;
+ }
+ }
+*/
+ len += sprintf(buf+len,"IOP ID : %0#5x\n",
+ c->status_block->iop_id);
+ len += sprintf(buf+len,"Host Unit ID : %0#6x\n",
+ c->status_block->host_unit_id);
+ len += sprintf(buf+len,"Segment Number : %0#5x\n",
+ c->status_block->segment_number);
+
+ len += sprintf(buf+len, "I2O version : ");
+ switch (version) {
+ case 0x00:
+ len += sprintf(buf+len,"1.0\n");
+ break;
+ case 0x01:
+ len += sprintf(buf+len,"1.5\n");
+ break;
+ case 0x02:
+ len += sprintf(buf+len,"2.0\n");
+ break;
+ default:
+ len += sprintf(buf+len,"Unknown version\n");
+ }
+
+ len += sprintf(buf+len, "IOP State : ");
+ switch (c->status_block->iop_state) {
+ case 0x01:
+ len += sprintf(buf+len,"INIT\n");
+ break;
+
+ case 0x02:
+ len += sprintf(buf+len,"RESET\n");
+ break;
+
+ case 0x04:
+ len += sprintf(buf+len,"HOLD\n");
+ break;
+
+ case 0x05:
+ len += sprintf(buf+len,"READY\n");
+ break;
+
+ case 0x08:
+ len += sprintf(buf+len,"OPERATIONAL\n");
+ break;
+
+ case 0x10:
+ len += sprintf(buf+len,"FAILED\n");
+ break;
+
+ case 0x11:
+ len += sprintf(buf+len,"FAULTED\n");
+ break;
+
+ default:
+ len += sprintf(buf+len,"Unknown\n");
+ break;
+ }
+
+ len += sprintf(buf+len,"Messenger Type : ");
+ switch (c->status_block->msg_type) {
+ case 0x00:
+ len += sprintf(buf+len,"Memory mapped\n");
+ break;
+ case 0x01:
+ len += sprintf(buf+len,"Memory mapped only\n");
+ break;
+ case 0x02:
+ len += sprintf(buf+len,"Remote only\n");
+ break;
+ case 0x03:
+ len += sprintf(buf+len,"Memory mapped and remote\n");
+ break;
+ default:
+ len += sprintf(buf+len,"Unknown\n");
+ }
+
+ len += sprintf(buf+len,"Inbound Frame Size : %d bytes\n",
+ c->status_block->inbound_frame_size<<2);
+ len += sprintf(buf+len,"Max Inbound Frames : %d\n",
+ c->status_block->max_inbound_frames);
+ len += sprintf(buf+len,"Current Inbound Frames : %d\n",
+ c->status_block->cur_inbound_frames);
+ len += sprintf(buf+len,"Max Outbound Frames : %d\n",
+ c->status_block->max_outbound_frames);
+
+ /* Spec doesn't say if NULL terminated or not... */
+ memcpy(prodstr, c->status_block->product_id, 24);
+ prodstr[24] = '\0';
+ len += sprintf(buf+len,"Product ID : %s\n", prodstr);
+ len += sprintf(buf+len,"Expected LCT Size : %d bytes\n",
+ c->status_block->expected_lct_size);
+
+ len += sprintf(buf+len,"IOP Capabilities\n");
+ len += sprintf(buf+len," Context Field Size Support : ");
+ switch (c->status_block->iop_capabilities & 0x0000003) {
+ case 0:
+ len += sprintf(buf+len,"Supports only 32-bit context fields\n");
+ break;
+ case 1:
+ len += sprintf(buf+len,"Supports only 64-bit context fields\n");
+ break;
+ case 2:
+ len += sprintf(buf+len,"Supports 32-bit and 64-bit context fields, "
+ "but not concurrently\n");
+ break;
+ case 3:
+ len += sprintf(buf+len,"Supports 32-bit and 64-bit context fields "
+ "concurrently\n");
+ break;
+ default:
+ len += sprintf(buf+len,"0x%08x\n",c->status_block->iop_capabilities);
+ }
+ len += sprintf(buf+len," Current Context Field Size : ");
+ switch (c->status_block->iop_capabilities & 0x0000000C) {
+ case 0:
+ len += sprintf(buf+len,"not configured\n");
+ break;
+ case 4:
+ len += sprintf(buf+len,"Supports only 32-bit context fields\n");
+ break;
+ case 8:
+ len += sprintf(buf+len,"Supports only 64-bit context fields\n");
+ break;
+ case 12:
+ len += sprintf(buf+len,"Supports both 32-bit and 64-bit context fields "
+ "concurrently\n");
+ break;
+ default:
+ len += sprintf(buf+len,"\n");
+ }
+ len += sprintf(buf+len," Inbound Peer Support : %s\n",
+ (c->status_block->iop_capabilities & 0x00000010) ? "Supported" : "Not supported");
+ len += sprintf(buf+len," Outbound Peer Support : %s\n",
+ (c->status_block->iop_capabilities & 0x00000020) ? "Supported" : "Not supported");
+ len += sprintf(buf+len," Peer to Peer Support : %s\n",
+ (c->status_block->iop_capabilities & 0x00000040) ? "Supported" : "Not supported");
+
+ len += sprintf(buf+len, "Desired private memory size : %d kB\n",
+ c->status_block->desired_mem_size>>10);
+ len += sprintf(buf+len, "Allocated private memory size : %d kB\n",
+ c->status_block->current_mem_size>>10);
+ len += sprintf(buf+len, "Private memory base address : %0#10x\n",
+ c->status_block->current_mem_base);
+ len += sprintf(buf+len, "Desired private I/O size : %d kB\n",
+ c->status_block->desired_io_size>>10);
+ len += sprintf(buf+len, "Allocated private I/O size : %d kB\n",
+ c->status_block->current_io_size>>10);
+ len += sprintf(buf+len, "Private I/O base address : %0#10x\n",
+ c->status_block->current_io_base);
+
+ spin_unlock(&i2o_proc_lock);
+
+ return len;
+}
+
+int i2o_proc_read_hw(char *buf, char **start, off_t offset, int len,
+ int *eof, void *data)
+{
+ struct i2o_controller *c = (struct i2o_controller*)data;
+ static u32 work32[5];
+ static u8 *work8 = (u8*)work32;
+ static u16 *work16 = (u16*)work32;
+ int token;
+ u32 hwcap;
+
+ static char *cpu_table[] =
+ {
+ "Intel 80960 series",
+ "AMD2900 series",
+ "Motorola 68000 series",
+ "ARM series",
+ "MIPS series",
+ "Sparc series",
+ "PowerPC series",
+ "Intel x86 series"
+ };
+
+ spin_lock(&i2o_proc_lock);
+
+ len = 0;
+
+ token = i2o_query_scalar(c, ADAPTER_TID, 0x0000, -1, &work32, sizeof(work32));
+
+ if (token < 0) {
+ len += i2o_report_query_status(buf+len, token,"0x0000 IOP Hardware");
+ spin_unlock(&i2o_proc_lock);
+ return len;
+ }
+
+ len += sprintf(buf+len, "I2O Vendor ID : %0#6x\n", work16[0]);
+ len += sprintf(buf+len, "Product ID : %0#6x\n", work16[1]);
+ len += sprintf(buf+len, "CPU : ");
+ if(work8[16] >= 8) /* cpu_table has 8 entries, indices 0-7 */
+ len += sprintf(buf+len, "Unknown\n");
+ else
+ len += sprintf(buf+len, "%s\n", cpu_table[work8[16]]);
+ /* Anyone using ProcessorVersion? */
+
+ len += sprintf(buf+len, "RAM : %dkB\n", work32[1]>>10);
+ len += sprintf(buf+len, "Non-Volatile Mem : %dkB\n", work32[2]>>10);
+
+ hwcap = work32[3];
+ len += sprintf(buf+len, "Capabilities : 0x%08x\n", hwcap);
+ len += sprintf(buf+len, " [%s] Self booting\n",
+ (hwcap&0x00000001) ? "+" : "-");
+ len += sprintf(buf+len, " [%s] Upgradable IRTOS\n",
+ (hwcap&0x00000002) ? "+" : "-");
+ len += sprintf(buf+len, " [%s] Supports downloading DDMs\n",
+ (hwcap&0x00000004) ? "+" : "-");
+ len += sprintf(buf+len, " [%s] Supports installing DDMs\n",
+ (hwcap&0x00000008) ? "+" : "-");
+ len += sprintf(buf+len, " [%s] Battery-backed RAM\n",
+ (hwcap&0x00000010) ? "+" : "-");
+
+ spin_unlock(&i2o_proc_lock);
+
+ return len;
+}
+
+
+/* Executive group 0003h - Executing DDM List (table) */
+int i2o_proc_read_ddm_table(char *buf, char **start, off_t offset, int len,
+ int *eof, void *data)
+{
+ struct i2o_controller *c = (struct i2o_controller*)data;
+ int token;
+ int i;
+
+ typedef struct _i2o_exec_execute_ddm_table {
+ u16 ddm_tid;
+ u8 module_type;
+ u8 reserved;
+ u16 i2o_vendor_id;
+ u16 module_id;
+ u8 module_name_version[28];
+ u32 data_size;
+ u32 code_size;
+ } i2o_exec_execute_ddm_table;
+
+ struct
+ {
+ u16 result_count;
+ u16 pad;
+ u16 block_size;
+ u8 block_status;
+ u8 error_info_size;
+ u16 row_count;
+ u16 more_flag;
+ i2o_exec_execute_ddm_table ddm_table[MAX_I2O_MODULES];
+ } result;
+
+ i2o_exec_execute_ddm_table ddm_table;
+
+ spin_lock(&i2o_proc_lock);
+ len = 0;
+
+ token = i2o_query_table(I2O_PARAMS_TABLE_GET,
+ c, ADAPTER_TID,
+ 0x0003, -1,
+ NULL, 0,
+ &result, sizeof(result));
+
+ if (token < 0) {
+ len += i2o_report_query_status(buf+len, token,"0x0003 Executing DDM List");
+ spin_unlock(&i2o_proc_lock);
+ return len;
+ }
+
+ len += sprintf(buf+len, "Tid Module_type Vendor Mod_id Module_name Vrs Data_size Code_size\n");
+ ddm_table=result.ddm_table[0];
+
+ for(i=0; i < result.row_count; ddm_table=result.ddm_table[++i])
+ {
+ len += sprintf(buf+len, "0x%03x ", ddm_table.ddm_tid & 0xFFF);
+
+ switch(ddm_table.module_type)
+ {
+ case 0x01:
+ len += sprintf(buf+len, "Downloaded DDM ");
+ break;
+ case 0x22:
+ len += sprintf(buf+len, "Embedded DDM ");
+ break;
+ default:
+ len += sprintf(buf+len, " ");
+ }
+
+ len += sprintf(buf+len, "%-#7x", ddm_table.i2o_vendor_id);
+ len += sprintf(buf+len, "%-#8x", ddm_table.module_id);
+ len += sprintf(buf+len, "%-29s", chtostr(ddm_table.module_name_version, 28));
+ len += sprintf(buf+len, "%9d ", ddm_table.data_size);
+ len += sprintf(buf+len, "%8d", ddm_table.code_size);
+
+ len += sprintf(buf+len, "\n");
+ }
+
+ spin_unlock(&i2o_proc_lock);
+
+ return len;
+}
+
+
+/* Executive group 0004h - Driver Store (scalar) */
+int i2o_proc_read_driver_store(char *buf, char **start, off_t offset, int len,
+ int *eof, void *data)
+{
+ struct i2o_controller *c = (struct i2o_controller*)data;
+ u32 work32[8];
+ int token;
+
+ spin_lock(&i2o_proc_lock);
+
+ len = 0;
+
+ token = i2o_query_scalar(c, ADAPTER_TID, 0x0004, -1, &work32, sizeof(work32));
+ if (token < 0) {
+ len += i2o_report_query_status(buf+len, token,"0x0004 Driver Store");
+ spin_unlock(&i2o_proc_lock);
+ return len;
+ }
+
+ len += sprintf(buf+len, "Module limit : %d\n"
+ "Module count : %d\n"
+ "Current space : %d kB\n"
+ "Free space : %d kB\n",
+ work32[0], work32[1], work32[2]>>10, work32[3]>>10);
+
+ spin_unlock(&i2o_proc_lock);
+
+ return len;
+}
+
+
+/* Executive group 0005h - Driver Store Table (table) */
+int i2o_proc_read_drivers_stored(char *buf, char **start, off_t offset,
+ int len, int *eof, void *data)
+{
+ typedef struct _i2o_driver_store {
+ u16 stored_ddm_index;
+ u8 module_type;
+ u8 reserved;
+ u16 i2o_vendor_id;
+ u16 module_id;
+ u8 module_name_version[28];
+ u8 date[8];
+ u32 module_size;
+ u32 mpb_size;
+ u32 module_flags;
+ } i2o_driver_store_table;
+
+ struct i2o_controller *c = (struct i2o_controller*)data;
+ int token;
+ int i;
+
+ typedef struct
+ {
+ u16 result_count;
+ u16 pad;
+ u16 block_size;
+ u8 block_status;
+ u8 error_info_size;
+ u16 row_count;
+ u16 more_flag;
+ i2o_driver_store_table dst[MAX_I2O_MODULES];
+ } i2o_driver_result_table;
+
+ i2o_driver_result_table *result;
+ i2o_driver_store_table *dst;
+
+ /*
+ * Allocate before taking the lock: GFP_KERNEL may sleep, and the
+ * error path must not return with i2o_proc_lock still held.
+ */
+ result = kmalloc(sizeof(i2o_driver_result_table), GFP_KERNEL);
+ if(result == NULL)
+ return -ENOMEM;
+
+ spin_lock(&i2o_proc_lock);
+
+ len = 0;
+
+ token = i2o_query_table(I2O_PARAMS_TABLE_GET,
+ c, ADAPTER_TID, 0x0005, -1, NULL, 0,
+ result, sizeof(*result));
+
+ if (token < 0) {
+ len += i2o_report_query_status(buf+len, token,"0x0005 DRIVER STORE TABLE");
+ spin_unlock(&i2o_proc_lock);
+ kfree(result);
+ return len;
+ }
+
+ len += sprintf(buf+len, "# Module_type Vendor Mod_id Module_name Vrs "
+ "Date Mod_size Par_size Flags\n");
+ for(i=0, dst=&result->dst[0]; i < result->row_count; dst=&result->dst[++i])
+ {
+ len += sprintf(buf+len, "%-3d", dst->stored_ddm_index);
+ switch(dst->module_type)
+ {
+ case 0x01:
+ len += sprintf(buf+len, "Downloaded DDM ");
+ break;
+ case 0x22:
+ len += sprintf(buf+len, "Embedded DDM ");
+ break;
+ default:
+ len += sprintf(buf+len, " ");
+ }
+
+#if 0
+ if(c->i2oversion == 0x02)
+ len += sprintf(buf+len, "%-d", dst->module_state);
+#endif
+
+ len += sprintf(buf+len, "%-#7x", dst->i2o_vendor_id);
+ len += sprintf(buf+len, "%-#8x", dst->module_id);
+ len += sprintf(buf+len, "%-29s", chtostr(dst->module_name_version,28));
+ len += sprintf(buf+len, "%-9s", chtostr(dst->date,8));
+ len += sprintf(buf+len, "%8d ", dst->module_size);
+ len += sprintf(buf+len, "%8d ", dst->mpb_size);
+ len += sprintf(buf+len, "0x%04x", dst->module_flags);
+#if 0
+ if(c->i2oversion == 0x02)
+ len += sprintf(buf+len, "%d",
+ dst->notification_level);
+#endif
+ len += sprintf(buf+len, "\n");
+ }
+
+ spin_unlock(&i2o_proc_lock);
+ kfree(result);
+ return len;
+}
+
+
+/* Generic group F000h - Params Descriptor (table) */
+int i2o_proc_read_groups(char *buf, char **start, off_t offset, int len,
+ int *eof, void *data)
+{
+ struct i2o_device *d = (struct i2o_device*)data;
+ int token;
+ int i;
+ u8 properties;
+
+ typedef struct _i2o_group_info
+ {
+ u16 group_number;
+ u16 field_count;
+ u16 row_count;
+ u8 properties;
+ u8 reserved;
+ } i2o_group_info;
+
+ struct
+ {
+ u16 result_count;
+ u16 pad;
+ u16 block_size;
+ u8 block_status;
+ u8 error_info_size;
+ u16 row_count;
+ u16 more_flag;
+ i2o_group_info group[256];
+ } result;
+
+ spin_lock(&i2o_proc_lock);
+
+ len = 0;
+
+ token = i2o_query_table(I2O_PARAMS_TABLE_GET,
+ d->controller, d->lct_data.tid, 0xF000, -1, NULL, 0,
+ &result, sizeof(result));
+
+ if (token < 0) {
+ len = i2o_report_query_status(buf+len, token, "0xF000 Params Descriptor");
+ spin_unlock(&i2o_proc_lock);
+ return len;
+ }
+
+ len += sprintf(buf+len, "# Group FieldCount RowCount Type Add Del Clear\n");
+
+ for (i=0; i < result.row_count; i++)
+ {
+ len += sprintf(buf+len, "%-3d", i);
+ len += sprintf(buf+len, "0x%04X ", result.group[i].group_number);
+ len += sprintf(buf+len, "%10d ", result.group[i].field_count);
+ len += sprintf(buf+len, "%8d ", result.group[i].row_count);
+
+ properties = result.group[i].properties;
+ if (properties & 0x1) len += sprintf(buf+len, "Table ");
+ else len += sprintf(buf+len, "Scalar ");
+ if (properties & 0x2) len += sprintf(buf+len, " + ");
+ else len += sprintf(buf+len, " - ");
+ if (properties & 0x4) len += sprintf(buf+len, " + ");
+ else len += sprintf(buf+len, " - ");
+ if (properties & 0x8) len += sprintf(buf+len, " + ");
+ else len += sprintf(buf+len, " - ");
+
+ len += sprintf(buf+len, "\n");
+ }
+
+ if (result.more_flag)
+ len += sprintf(buf+len, "There is more...\n");
+
+ spin_unlock(&i2o_proc_lock);
+
+ return len;
+}
+
+
+/* Generic group F001h - Physical Device Table (table) */
+int i2o_proc_read_phys_device(char *buf, char **start, off_t offset, int len,
+ int *eof, void *data)
+{
+ struct i2o_device *d = (struct i2o_device*)data;
+ int token;
+ int i;
+
+ struct
+ {
+ u16 result_count;
+ u16 pad;
+ u16 block_size;
+ u8 block_status;
+ u8 error_info_size;
+ u16 row_count;
+ u16 more_flag;
+ u32 adapter_id[64];
+ } result;
+
+ spin_lock(&i2o_proc_lock);
+ len = 0;
+
+ token = i2o_query_table(I2O_PARAMS_TABLE_GET,
+ d->controller, d->lct_data.tid,
+ 0xF001, -1, NULL, 0,
+ &result, sizeof(result));
+
+ if (token < 0) {
+ len += i2o_report_query_status(buf+len, token,"0xF001 Physical Device Table");
+ spin_unlock(&i2o_proc_lock);
+ return len;
+ }
+
+ if (result.row_count)
+ len += sprintf(buf+len, "# AdapterId\n");
+
+ for (i=0; i < result.row_count; i++)
+ {
+ len += sprintf(buf+len, "%-2d", i);
+ len += sprintf(buf+len, "%#7x\n", result.adapter_id[i]);
+ }
+
+ if (result.more_flag)
+ len += sprintf(buf+len, "There is more...\n");
+
+ spin_unlock(&i2o_proc_lock);
+ return len;
+}
+
+/* Generic group F002h - Claimed Table (table) */
+int i2o_proc_read_claimed(char *buf, char **start, off_t offset, int len,
+ int *eof, void *data)
+{
+ struct i2o_device *d = (struct i2o_device*)data;
+ int token;
+ int i;
+
+ struct {
+ u16 result_count;
+ u16 pad;
+ u16 block_size;
+ u8 block_status;
+ u8 error_info_size;
+ u16 row_count;
+ u16 more_flag;
+ u16 claimed_tid[64];
+ } result;
+
+ spin_lock(&i2o_proc_lock);
+ len = 0;
+
+ token = i2o_query_table(I2O_PARAMS_TABLE_GET,
+ d->controller, d->lct_data.tid,
+ 0xF002, -1, NULL, 0,
+ &result, sizeof(result));
+
+ if (token < 0) {
+ len += i2o_report_query_status(buf+len, token,"0xF002 Claimed Table");
+ spin_unlock(&i2o_proc_lock);
+ return len;
+ }
+
+ if (result.row_count)
+ len += sprintf(buf+len, "# ClaimedTid\n");
+
+ for (i=0; i < result.row_count; i++)
+ {
+ len += sprintf(buf+len, "%-2d", i);
+ len += sprintf(buf+len, "%#7x\n", result.claimed_tid[i]);
+ }
+
+ if (result.more_flag)
+ len += sprintf(buf+len, "There is more...\n");
+
+ spin_unlock(&i2o_proc_lock);
+ return len;
+}
+
+/* Generic group F003h - User Table (table) */
+int i2o_proc_read_users(char *buf, char **start, off_t offset, int len,
+ int *eof, void *data)
+{
+ struct i2o_device *d = (struct i2o_device*)data;
+ int token;
+ int i;
+
+ typedef struct _i2o_user_table
+ {
+ u16 instance;
+ u16 user_tid;
+ u8 claim_type;
+ u8 reserved1;
+ u16 reserved2;
+ } i2o_user_table;
+
+ struct
+ {
+ u16 result_count;
+ u16 pad;
+ u16 block_size;
+ u8 block_status;
+ u8 error_info_size;
+ u16 row_count;
+ u16 more_flag;
+ i2o_user_table user[64];
+ } result;
+
+ spin_lock(&i2o_proc_lock);
+ len = 0;
+
+ token = i2o_query_table(I2O_PARAMS_TABLE_GET,
+ d->controller, d->lct_data.tid,
+ 0xF003, -1, NULL, 0,
+ &result, sizeof(result));
+
+ if (token < 0) {
+ len += i2o_report_query_status(buf+len, token,"0xF003 User Table");
+ spin_unlock(&i2o_proc_lock);
+ return len;
+ }
+
+ len += sprintf(buf+len, "# Instance UserTid ClaimType\n");
+
+ for(i=0; i < result.row_count; i++)
+ {
+ len += sprintf(buf+len, "%-3d", i);
+ len += sprintf(buf+len, "%#8x ", result.user[i].instance);
+ len += sprintf(buf+len, "%#7x ", result.user[i].user_tid);
+ len += sprintf(buf+len, "%#9x\n", result.user[i].claim_type);
+ }
+
+ if (result.more_flag)
+ len += sprintf(buf+len, "There is more...\n");
+
+ spin_unlock(&i2o_proc_lock);
+ return len;
+}
+
+/* Generic group F005h - Private message extensions (table) (optional) */
+int i2o_proc_read_priv_msgs(char *buf, char **start, off_t offset, int len,
+ int *eof, void *data)
+{
+ struct i2o_device *d = (struct i2o_device*)data;
+ int token;
+ int i;
+
+ typedef struct _i2o_private
+ {
+ u16 ext_instance;
+ u16 organization_id;
+ u16 x_function_code;
+ } i2o_private;
+
+ struct
+ {
+ u16 result_count;
+ u16 pad;
+ u16 block_size;
+ u8 block_status;
+ u8 error_info_size;
+ u16 row_count;
+ u16 more_flag;
+ i2o_private extension[64];
+ } result;
+
+ spin_lock(&i2o_proc_lock);
+
+ len = 0;
+
+ token = i2o_query_table(I2O_PARAMS_TABLE_GET,
+ d->controller, d->lct_data.tid,
+ 0xF005, -1,
+ NULL, 0,
+ &result, sizeof(result));
+
+ if (token < 0) {
+ len += i2o_report_query_status(buf+len, token,"0xF005 Private Message Extensions (optional)");
+ spin_unlock(&i2o_proc_lock);
+ return len;
+ }
+
+ len += sprintf(buf+len, "Instance# OrgId FunctionCode\n");
+
+ for(i=0; i < result.row_count; i++)
+ {
+ len += sprintf(buf+len, "%0#9x ", result.extension[i].ext_instance);
+ len += sprintf(buf+len, "%0#6x ", result.extension[i].organization_id);
+ len += sprintf(buf+len, "%0#6x", result.extension[i].x_function_code);
+
+ len += sprintf(buf+len, "\n");
+ }
+
+ if(result.more_flag)
+ len += sprintf(buf+len, "There is more...\n");
+
+ spin_unlock(&i2o_proc_lock);
+
+ return len;
+}
+
+
+/* Generic group F006h - Authorized User Table (table) */
+int i2o_proc_read_authorized_users(char *buf, char **start, off_t offset, int len,
+ int *eof, void *data)
+{
+ struct i2o_device *d = (struct i2o_device*)data;
+ int token;
+ int i;
+
+ struct
+ {
+ u16 result_count;
+ u16 pad;
+ u16 block_size;
+ u8 block_status;
+ u8 error_info_size;
+ u16 row_count;
+ u16 more_flag;
+ u32 alternate_tid[64];
+ } result;
+
+ spin_lock(&i2o_proc_lock);
+ len = 0;
+
+ token = i2o_query_table(I2O_PARAMS_TABLE_GET,
+ d->controller, d->lct_data.tid,
+ 0xF006, -1,
+ NULL, 0,
+ &result, sizeof(result));
+
+ if (token < 0) {
+ len += i2o_report_query_status(buf+len, token,"0xF006 Authorized User Table");
+ spin_unlock(&i2o_proc_lock);
+ return len;
+ }
+
+ if (result.row_count)
+ len += sprintf(buf+len, "# AlternateTid\n");
+
+ for(i=0; i < result.row_count; i++)
+ {
+ len += sprintf(buf+len, "%-2d", i);
+ len += sprintf(buf+len, "%#7x\n", result.alternate_tid[i]);
+ }
+
+ if (result.more_flag)
+ len += sprintf(buf+len, "There is more...\n");
+
+ spin_unlock(&i2o_proc_lock);
+ return len;
+}
+
+
+/* Generic group F100h - Device Identity (scalar) */
+int i2o_proc_read_dev_identity(char *buf, char **start, off_t offset, int len,
+ int *eof, void *data)
+{
+ struct i2o_device *d = (struct i2o_device*)data;
+ /* Allow for "stuff" plus up to a 256-byte (max) serial number
+ * == 512 bytes (max) in all. */
+ static u32 work32[128];
+ static u16 *work16 = (u16*)work32;
+ int token;
+
+ spin_lock(&i2o_proc_lock);
+
+ len = 0;
+
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
+ 0xF100, -1,
+ &work32, sizeof(work32));
+
+ if (token < 0) {
+ len += i2o_report_query_status(buf+len, token ,"0xF100 Device Identity");
+ spin_unlock(&i2o_proc_lock);
+ return len;
+ }
+
+ len += sprintf(buf, "Device Class : %s\n", i2o_get_class_name(work16[0]));
+ len += sprintf(buf+len, "Owner TID : %0#5x\n", work16[2]);
+ len += sprintf(buf+len, "Parent TID : %0#5x\n", work16[3]);
+ len += sprintf(buf+len, "Vendor info : %s\n", chtostr((u8 *)(work32+2), 16));
+ len += sprintf(buf+len, "Product info : %s\n", chtostr((u8 *)(work32+6), 16));
+ len += sprintf(buf+len, "Description : %s\n", chtostr((u8 *)(work32+10), 16));
+ len += sprintf(buf+len, "Product rev. : %s\n", chtostr((u8 *)(work32+14), 8));
+
+ len += sprintf(buf+len, "Serial number : ");
+ len = print_serial_number(buf, len,
+ (u8*)(work32+16),
+ /* allow for SNLen plus
+ * possible trailing '\0'
+ */
+ sizeof(work32)-(16*sizeof(u32))-2
+ );
+ len += sprintf(buf+len, "\n");
+
+ spin_unlock(&i2o_proc_lock);
+
+ return len;
+}
+
+
+int i2o_proc_read_dev_name(char *buf, char **start, off_t offset, int len,
+ int *eof, void *data)
+{
+ struct i2o_device *d = (struct i2o_device*)data;
+
+ if ( d->dev_name[0] == '\0' )
+ return 0;
+
+ len = sprintf(buf, "%s\n", d->dev_name);
+
+ return len;
+}
+
+
+/* Generic group F101h - DDM Identity (scalar) */
+int i2o_proc_read_ddm_identity(char *buf, char **start, off_t offset, int len,
+ int *eof, void *data)
+{
+ struct i2o_device *d = (struct i2o_device*)data;
+ int token;
+
+ struct
+ {
+ u16 ddm_tid;
+ u8 module_name[24];
+ u8 module_rev[8];
+ u8 sn_format;
+ u8 serial_number[12];
+ u8 pad[256]; /* allow up to a 256-byte (max) serial number */
+ } result;
+
+ spin_lock(&i2o_proc_lock);
+
+ len = 0;
+
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
+ 0xF101, -1,
+ &result, sizeof(result));
+
+ if (token < 0) {
+ len += i2o_report_query_status(buf+len, token,"0xF101 DDM Identity");
+ spin_unlock(&i2o_proc_lock);
+ return len;
+ }
+
+ len += sprintf(buf, "Registering DDM TID : 0x%03x\n", result.ddm_tid);
+ len += sprintf(buf+len, "Module name : %s\n", chtostr(result.module_name, 24));
+ len += sprintf(buf+len, "Module revision : %s\n", chtostr(result.module_rev, 8));
+
+ len += sprintf(buf+len, "Serial number : ");
+ len = print_serial_number(buf, len, result.serial_number, sizeof(result)-36);
+ /* allow for SNLen plus possible trailing '\0' */
+
+ len += sprintf(buf+len, "\n");
+
+ spin_unlock(&i2o_proc_lock);
+
+ return len;
+}
+
+/* Generic group F102h - User Information (scalar) */
+int i2o_proc_read_uinfo(char *buf, char **start, off_t offset, int len,
+ int *eof, void *data)
+{
+ struct i2o_device *d = (struct i2o_device*)data;
+ int token;
+
+ struct
+ {
+ u8 device_name[64];
+ u8 service_name[64];
+ u8 physical_location[64];
+ u8 instance_number[4];
+ } result;
+
+ spin_lock(&i2o_proc_lock);
+ len = 0;
+
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
+ 0xF102, -1,
+ &result, sizeof(result));
+
+ if (token < 0) {
+ len += i2o_report_query_status(buf+len, token,"0xF102 User Information");
+ spin_unlock(&i2o_proc_lock);
+ return len;
+ }
+
+ len += sprintf(buf, "Device name : %s\n", chtostr(result.device_name, 64));
+ len += sprintf(buf+len, "Service name : %s\n", chtostr(result.service_name, 64));
+ len += sprintf(buf+len, "Physical name : %s\n", chtostr(result.physical_location, 64));
+ len += sprintf(buf+len, "Instance number : %s\n", chtostr(result.instance_number, 4));
+
+ spin_unlock(&i2o_proc_lock);
+ return len;
+}
+
+/* Generic group F103h - SGL Operating Limits (scalar) */
+int i2o_proc_read_sgl_limits(char *buf, char **start, off_t offset, int len,
+ int *eof, void *data)
+{
+ struct i2o_device *d = (struct i2o_device*)data;
+ static u32 work32[12];
+ static u16 *work16 = (u16 *)work32;
+ static u8 *work8 = (u8 *)work32;
+ int token;
+
+ spin_lock(&i2o_proc_lock);
+
+ len = 0;
+
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
+ 0xF103, -1,
+ &work32, sizeof(work32));
+
+ if (token < 0) {
+ len += i2o_report_query_status(buf+len, token,"0xF103 SGL Operating Limits");
+ spin_unlock(&i2o_proc_lock);
+ return len;
+ }
+
+ len += sprintf(buf, "SGL chain size : %d\n", work32[0]);
+ len += sprintf(buf+len, "Max SGL chain size : %d\n", work32[1]);
+ len += sprintf(buf+len, "SGL chain size target : %d\n", work32[2]);
+ len += sprintf(buf+len, "SGL frag count : %d\n", work16[6]);
+ len += sprintf(buf+len, "Max SGL frag count : %d\n", work16[7]);
+ len += sprintf(buf+len, "SGL frag count target : %d\n", work16[8]);
+
+ if (d->i2oversion == 0x02)
+ {
+ len += sprintf(buf+len, "SGL data alignment : %d\n", work16[8]);
+ len += sprintf(buf+len, "SGL addr limit : %d\n", work8[20]);
+ len += sprintf(buf+len, "SGL addr sizes supported : ");
+ if (work8[21] & 0x01)
+ len += sprintf(buf+len, "32 bit ");
+ if (work8[21] & 0x02)
+ len += sprintf(buf+len, "64 bit ");
+ if (work8[21] & 0x04)
+ len += sprintf(buf+len, "96 bit ");
+ if (work8[21] & 0x08)
+ len += sprintf(buf+len, "128 bit ");
+ len += sprintf(buf+len, "\n");
+ }
+
+ spin_unlock(&i2o_proc_lock);
+
+ return len;
+}
+
+/* Generic group F200h - Sensors (scalar) */
+int i2o_proc_read_sensors(char *buf, char **start, off_t offset, int len,
+ int *eof, void *data)
+{
+ struct i2o_device *d = (struct i2o_device*)data;
+ int token;
+
+ struct
+ {
+ u16 sensor_instance;
+ u8 component;
+ u16 component_instance;
+ u8 sensor_class;
+ u8 sensor_type;
+ u8 scaling_exponent;
+ u32 actual_reading;
+ u32 minimum_reading;
+ u32 low2lowcat_treshold;
+ u32 lowcat2low_treshold;
+ u32 lowwarn2low_treshold;
+ u32 low2lowwarn_treshold;
+ u32 norm2lowwarn_treshold;
+ u32 lowwarn2norm_treshold;
+ u32 nominal_reading;
+ u32 hiwarn2norm_treshold;
+ u32 norm2hiwarn_treshold;
+ u32 high2hiwarn_treshold;
+ u32 hiwarn2high_treshold;
+ u32 hicat2high_treshold;
+ u32 hi2hicat_treshold;
+ u32 maximum_reading;
+ u8 sensor_state;
+ u16 event_enable;
+ } result;
+
+ spin_lock(&i2o_proc_lock);
+ len = 0;
+
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
+ 0xF200, -1,
+ &result, sizeof(result));
+
+ if (token < 0) {
+ len += i2o_report_query_status(buf+len, token,"0xF200 Sensors (optional)");
+ spin_unlock(&i2o_proc_lock);
+ return len;
+ }
+
+ len += sprintf(buf+len, "Sensor instance : %d\n", result.sensor_instance);
+
+ len += sprintf(buf+len, "Component : %d = ", result.component);
+ switch (result.component)
+ {
+ case 0: len += sprintf(buf+len, "Other");
+ break;
+ case 1: len += sprintf(buf+len, "Planar logic Board");
+ break;
+ case 2: len += sprintf(buf+len, "CPU");
+ break;
+ case 3: len += sprintf(buf+len, "Chassis");
+ break;
+ case 4: len += sprintf(buf+len, "Power Supply");
+ break;
+ case 5: len += sprintf(buf+len, "Storage");
+ break;
+ case 6: len += sprintf(buf+len, "External");
+ break;
+ default: len += sprintf(buf+len, "Unknown");
+ break;
+ }
+ len += sprintf(buf+len,"\n");
+
+ len += sprintf(buf+len, "Component instance : %d\n", result.component_instance);
+ len += sprintf(buf+len, "Sensor class : %s\n",
+ result.sensor_class ? "Analog" : "Digital");
+
+ len += sprintf(buf+len, "Sensor type : %d = ",result.sensor_type);
+ switch (result.sensor_type)
+ {
+ case 0: len += sprintf(buf+len, "Other\n");
+ break;
+ case 1: len += sprintf(buf+len, "Thermal\n");
+ break;
+ case 2: len += sprintf(buf+len, "DC voltage (DC volts)\n");
+ break;
+ case 3: len += sprintf(buf+len, "AC voltage (AC volts)\n");
+ break;
+ case 4: len += sprintf(buf+len, "DC current (DC amps)\n");
+ break;
+ case 5: len += sprintf(buf+len, "AC current (AC amps)\n");
+ break;
+ case 6: len += sprintf(buf+len, "Door open\n");
+ break;
+ case 7: len += sprintf(buf+len, "Fan operational\n");
+ break;
+ default: len += sprintf(buf+len, "Unknown\n");
+ break;
+ }
+
+ len += sprintf(buf+len, "Scaling exponent : %d\n", result.scaling_exponent);
+ len += sprintf(buf+len, "Actual reading : %d\n", result.actual_reading);
+ len += sprintf(buf+len, "Minimum reading : %d\n", result.minimum_reading);
+ len += sprintf(buf+len, "Low2LowCat threshold : %d\n", result.low2lowcat_treshold);
+ len += sprintf(buf+len, "LowCat2Low threshold : %d\n", result.lowcat2low_treshold);
+ len += sprintf(buf+len, "LowWarn2Low threshold : %d\n", result.lowwarn2low_treshold);
+ len += sprintf(buf+len, "Low2LowWarn threshold : %d\n", result.low2lowwarn_treshold);
+ len += sprintf(buf+len, "Norm2LowWarn threshold : %d\n", result.norm2lowwarn_treshold);
+ len += sprintf(buf+len, "LowWarn2Norm threshold : %d\n", result.lowwarn2norm_treshold);
+ len += sprintf(buf+len, "Nominal reading : %d\n", result.nominal_reading);
+ len += sprintf(buf+len, "HiWarn2Norm threshold : %d\n", result.hiwarn2norm_treshold);
+ len += sprintf(buf+len, "Norm2HiWarn threshold : %d\n", result.norm2hiwarn_treshold);
+ len += sprintf(buf+len, "High2HiWarn threshold : %d\n", result.high2hiwarn_treshold);
+ len += sprintf(buf+len, "HiWarn2High threshold : %d\n", result.hiwarn2high_treshold);
+ len += sprintf(buf+len, "HiCat2High threshold : %d\n", result.hicat2high_treshold);
+ len += sprintf(buf+len, "High2HiCat threshold : %d\n", result.hi2hicat_treshold);
+ len += sprintf(buf+len, "Maximum reading : %d\n", result.maximum_reading);
+
+ len += sprintf(buf+len, "Sensor state : %d = ", result.sensor_state);
+ switch (result.sensor_state)
+ {
+ case 0: len += sprintf(buf+len, "Normal\n");
+ break;
+ case 1: len += sprintf(buf+len, "Abnormal\n");
+ break;
+ case 2: len += sprintf(buf+len, "Unknown\n");
+ break;
+ case 3: len += sprintf(buf+len, "Low Catastrophic (LoCat)\n");
+ break;
+ case 4: len += sprintf(buf+len, "Low (Low)\n");
+ break;
+ case 5: len += sprintf(buf+len, "Low Warning (LoWarn)\n");
+ break;
+ case 6: len += sprintf(buf+len, "High Warning (HiWarn)\n");
+ break;
+ case 7: len += sprintf(buf+len, "High (High)\n");
+ break;
+ case 8: len += sprintf(buf+len, "High Catastrophic (HiCat)\n");
+ break;
+ default: len += sprintf(buf+len, "Unknown\n");
+ break;
+ }
+
+ len += sprintf(buf+len, "Event_enable : 0x%02X\n", result.event_enable);
+ len += sprintf(buf+len, " [%s] Operational state change\n",
+ (result.event_enable & 0x01) ? "+" : "-" );
+ len += sprintf(buf+len, " [%s] Low catastrophic\n",
+ (result.event_enable & 0x02) ? "+" : "-" );
+ len += sprintf(buf+len, " [%s] Low reading\n",
+ (result.event_enable & 0x04) ? "+" : "-" );
+ len += sprintf(buf+len, " [%s] Low warning\n",
+ (result.event_enable & 0x08) ? "+" : "-" );
+ len += sprintf(buf+len, " [%s] Change back to normal from out of range state\n",
+ (result.event_enable & 0x10) ? "+" : "-" );
+ len += sprintf(buf+len, " [%s] High warning\n",
+ (result.event_enable & 0x20) ? "+" : "-" );
+ len += sprintf(buf+len, " [%s] High reading\n",
+ (result.event_enable & 0x40) ? "+" : "-" );
+ len += sprintf(buf+len, " [%s] High catastrophic\n",
+ (result.event_enable & 0x80) ? "+" : "-" );
+
+ spin_unlock(&i2o_proc_lock);
+ return len;
+}
+
+
+static int print_serial_number(char *buff, int pos, u8 *serialno, int max_len)
+{
+ int i;
+
+ /* 19990419 -sralston
+ * The I2O v1.5 (and v2.0 so far) "official specification"
+ * got serial numbers WRONG!
+ * Apparently, and despite what Section 3.4.4 says and
+ * Figure 3-35 shows (pg 3-39 in the pdf doc),
+ * the convention / consensus seems to be:
+ * + First byte is SNFormat
+ * + Second byte is SNLen (but only if SNFormat==7 (?))
+ * + (v2.0) SCSI+BS may use IEEE Registered (64 or 128 bit) format
+ */
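+ /*
+ * Illustrative layout under that convention (hypothetical bytes, not
+ * taken from the spec): a binary-format buffer with SNLen == 3 and
+ * data bytes DE AD BE is rendered by the code below as "0xDEADBE".
+ */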
+ switch(serialno[0])
+ {
+ case I2O_SNFORMAT_BINARY: /* Binary */
+ pos += sprintf(buff+pos, "0x");
+ for(i = 0; i < serialno[1]; i++)
+ {
+ pos += sprintf(buff+pos, "%02X", serialno[2+i]);
+ }
+ break;
+
+ case I2O_SNFORMAT_ASCII: /* ASCII */
+ if ( serialno[1] < ' ' ) /* non-printable: second byte is SNLen */
+ {
+ /* sanity */
+ max_len = (max_len < serialno[1]) ? max_len : serialno[1];
+ serialno[1+max_len] = '\0';
+
+ /* just print it */
+ pos += sprintf(buff+pos, "%s", &serialno[2]);
+ }
+ else
+ {
+ /* print chars for specified length */
+ for(i = 0; i < serialno[1]; i++)
+ {
+ pos += sprintf(buff+pos, "%c", serialno[2+i]);
+ }
+ }
+ break;
+
+ case I2O_SNFORMAT_UNICODE: /* UNICODE */
+ pos += sprintf(buff+pos, "UNICODE Format. Can't Display\n");
+ break;
+
+ case I2O_SNFORMAT_LAN48_MAC: /* LAN-48 MAC Address */
+ pos += sprintf(buff+pos,
+ "LAN-48 MAC address @ %02X:%02X:%02X:%02X:%02X:%02X",
+ serialno[2], serialno[3],
+ serialno[4], serialno[5],
+ serialno[6], serialno[7]);
+ break;
+
+ case I2O_SNFORMAT_WAN: /* WAN MAC Address */
+ /* FIXME: Figure out what a WAN access address looks like?? */
+ pos += sprintf(buff+pos, "WAN Access Address");
+ break;
+
+/* plus new in v2.0 */
+ case I2O_SNFORMAT_LAN64_MAC: /* LAN-64 MAC Address */
+ /* FIXME: Figure out what a LAN-64 address really looks like?? */
+ pos += sprintf(buff+pos,
+ "LAN-64 MAC address @ [?:%02X:%02X:?] %02X:%02X:%02X:%02X:%02X:%02X",
+ serialno[8], serialno[9],
+ serialno[2], serialno[3],
+ serialno[4], serialno[5],
+ serialno[6], serialno[7]);
+ break;
+
+
+ case I2O_SNFORMAT_DDM: /* I2O DDM */
+ pos += sprintf(buff+pos,
+ "DDM: Tid=%03Xh, Rsvd=%04Xh, OrgId=%04Xh",
+ *(u16*)&serialno[2],
+ *(u16*)&serialno[4],
+ *(u16*)&serialno[6]);
+ break;
+
+ case I2O_SNFORMAT_IEEE_REG64: /* IEEE Registered (64-bit) */
+ case I2O_SNFORMAT_IEEE_REG128: /* IEEE Registered (128-bit) */
+ /* FIXME: Figure if this is even close?? */
+ pos += sprintf(buff+pos,
+ "IEEE NodeName(hi,lo)=(%08Xh:%08Xh), PortName(hi,lo)=(%08Xh:%08Xh)\n",
+ *(u32*)&serialno[2],
+ *(u32*)&serialno[6],
+ *(u32*)&serialno[10],
+ *(u32*)&serialno[14]);
+ break;
+
+
+ case I2O_SNFORMAT_UNKNOWN: /* Unknown 0 */
+ case I2O_SNFORMAT_UNKNOWN2: /* Unknown 0xff */
+ default:
+ pos += sprintf(buff+pos, "Unknown data format (0x%02x)",
+ serialno[0]);
+ break;
+ }
+
+ return pos;
+}
+
+const char * i2o_get_connector_type(int conn)
+{
+ int idx = 1; /* default to "UNKNOWN"; the table below has only 16 entries */
+ static char *i2o_connector_type[] = {
+ "OTHER",
+ "UNKNOWN",
+ "AUI",
+ "UTP",
+ "BNC",
+ "RJ45",
+ "STP DB9",
+ "FIBER MIC",
+ "APPLE AUI",
+ "MII",
+ "DB9",
+ "HSSDC",
+ "DUPLEX SC FIBER",
+ "DUPLEX ST FIBER",
+ "TNC/BNC",
+ "HW DEFAULT"
+ };
+
+ /* values 0x00-0x0E map directly to table indices; 0xFFFFFFFF is HW DEFAULT */
+ if (conn >= 0x00000000 && conn <= 0x0000000E)
+ idx = conn;
+ else if (conn == 0xFFFFFFFF)
+ idx = 15;
+ else
+ idx = 1; /* UNKNOWN */
+
+ return i2o_connector_type[idx];
+}
+
+
+const char * i2o_get_connection_type(int conn)
+{
+ int idx = 0;
+ static char *i2o_connection_type[] = {
+ "Unknown",
+ "AUI",
+ "10BASE5",
+ "FIORL",
+ "10BASE2",
+ "10BROAD36",
+ "10BASE-T",
+ "10BASE-FP",
+ "10BASE-FB",
+ "10BASE-FL",
+ "100BASE-TX",
+ "100BASE-FX",
+ "100BASE-T4",
+ "1000BASE-SX",
+ "1000BASE-LX",
+ "1000BASE-CX",
+ "1000BASE-T",
+ "100VG-ETHERNET",
+ "100VG-TOKEN RING",
+ "4MBIT TOKEN RING",
+ "16 Mb Token Ring",
+ "125 MBAUD FDDI",
+ "Point-to-point",
+ "Arbitrated loop",
+ "Public loop",
+ "Fabric",
+ "Emulation",
+ "Other",
+ "HW default"
+ };
+
+ switch(conn)
+ {
+ case I2O_LAN_UNKNOWN:
+ idx = 0;
+ break;
+ case I2O_LAN_AUI:
+ idx = 1;
+ break;
+ case I2O_LAN_10BASE5:
+ idx = 2;
+ break;
+ case I2O_LAN_FIORL:
+ idx = 3;
+ break;
+ case I2O_LAN_10BASE2:
+ idx = 4;
+ break;
+ case I2O_LAN_10BROAD36:
+ idx = 5;
+ break;
+ case I2O_LAN_10BASE_T:
+ idx = 6;
+ break;
+ case I2O_LAN_10BASE_FP:
+ idx = 7;
+ break;
+ case I2O_LAN_10BASE_FB:
+ idx = 8;
+ break;
+ case I2O_LAN_10BASE_FL:
+ idx = 9;
+ break;
+ case I2O_LAN_100BASE_TX:
+ idx = 10;
+ break;
+ case I2O_LAN_100BASE_FX:
+ idx = 11;
+ break;
+ case I2O_LAN_100BASE_T4:
+ idx = 12;
+ break;
+ case I2O_LAN_1000BASE_SX:
+ idx = 13;
+ break;
+ case I2O_LAN_1000BASE_LX:
+ idx = 14;
+ break;
+ case I2O_LAN_1000BASE_CX:
+ idx = 15;
+ break;
+ case I2O_LAN_1000BASE_T:
+ idx = 16;
+ break;
+ case I2O_LAN_100VG_ETHERNET:
+ idx = 17;
+ break;
+ case I2O_LAN_100VG_TR:
+ idx = 18;
+ break;
+ case I2O_LAN_4MBIT:
+ idx = 19;
+ break;
+ case I2O_LAN_16MBIT:
+ idx = 20;
+ break;
+ case I2O_LAN_125MBAUD:
+ idx = 21;
+ break;
+ case I2O_LAN_POINT_POINT:
+ idx = 22;
+ break;
+ case I2O_LAN_ARB_LOOP:
+ idx = 23;
+ break;
+ case I2O_LAN_PUBLIC_LOOP:
+ idx = 24;
+ break;
+ case I2O_LAN_FABRIC:
+ idx = 25;
+ break;
+ case I2O_LAN_EMULATION:
+ idx = 26;
+ break;
+ case I2O_LAN_OTHER:
+ idx = 27;
+ break;
+ case I2O_LAN_DEFAULT:
+ idx = 28;
+ break;
+ }
+
+ return i2o_connection_type[idx];
+}
+
+
+/* LAN group 0000h - Device info (scalar) */
+int i2o_proc_read_lan_dev_info(char *buf, char **start, off_t offset, int len,
+ int *eof, void *data)
+{
+ struct i2o_device *d = (struct i2o_device*)data;
+ static u32 work32[56];
+ static u8 *work8 = (u8*)work32;
+ static u16 *work16 = (u16*)work32;
+ static u64 *work64 = (u64*)work32;
+ int token;
+
+ spin_lock(&i2o_proc_lock);
+ len = 0;
+
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
+ 0x0000, -1, &work32, 56*4);
+ if (token < 0) {
+ len += i2o_report_query_status(buf+len, token, "0x0000 LAN Device Info");
+ spin_unlock(&i2o_proc_lock);
+ return len;
+ }
+
+ len += sprintf(buf, "LAN Type : ");
+ switch (work16[0])
+ {
+ case 0x0030:
+ len += sprintf(buf+len, "Ethernet, ");
+ break;
+ case 0x0040:
+ len += sprintf(buf+len, "100Base VG, ");
+ break;
+ case 0x0050:
+ len += sprintf(buf+len, "Token Ring, ");
+ break;
+ case 0x0060:
+ len += sprintf(buf+len, "FDDI, ");
+ break;
+ case 0x0070:
+ len += sprintf(buf+len, "Fibre Channel, ");
+ break;
+ default:
+ len += sprintf(buf+len, "Unknown type (0x%04x), ", work16[0]);
+ break;
+ }
+
+ if (work16[1]&0x00000001)
+ len += sprintf(buf+len, "emulated LAN, ");
+ else
+ len += sprintf(buf+len, "physical LAN port, ");
+
+ if (work16[1]&0x00000002)
+ len += sprintf(buf+len, "full duplex\n");
+ else
+ len += sprintf(buf+len, "simplex\n");
+
+ len += sprintf(buf+len, "Address format : ");
+ switch(work8[4]) {
+ case 0x00:
+ len += sprintf(buf+len, "IEEE 48bit\n");
+ break;
+ case 0x01:
+ len += sprintf(buf+len, "FC IEEE\n");
+ break;
+ default:
+ len += sprintf(buf+len, "Unknown (0x%02x)\n", work8[4]);
+ break;
+ }
+
+ len += sprintf(buf+len, "State : ");
+ switch(work8[5])
+ {
+ case 0x00:
+ len += sprintf(buf+len, "Unknown\n");
+ break;
+ case 0x01:
+ len += sprintf(buf+len, "Unclaimed\n");
+ break;
+ case 0x02:
+ len += sprintf(buf+len, "Operational\n");
+ break;
+ case 0x03:
+ len += sprintf(buf+len, "Suspended\n");
+ break;
+ case 0x04:
+ len += sprintf(buf+len, "Resetting\n");
+ break;
+ case 0x05:
+ len += sprintf(buf+len, "ERROR: ");
+ if(work16[3]&0x0001)
+ len += sprintf(buf+len, "TxCU inoperative ");
+ if(work16[3]&0x0002)
+ len += sprintf(buf+len, "RxCU inoperative ");
+ if(work16[3]&0x0004)
+ len += sprintf(buf+len, "Local mem alloc ");
+ len += sprintf(buf+len, "\n");
+ break;
+ case 0x06:
+ len += sprintf(buf+len, "Operational no Rx\n");
+ break;
+ case 0x07:
+ len += sprintf(buf+len, "Suspended no Rx\n");
+ break;
+ default:
+ len += sprintf(buf+len, "Unspecified\n");
+ break;
+ }
+
+ len += sprintf(buf+len, "Min packet size : %d\n", work32[2]);
+ len += sprintf(buf+len, "Max packet size : %d\n", work32[3]);
+ len += sprintf(buf+len, "HW address : "
+ "%02X:%02X:%02X:%02X:%02X:%02X:%02X:%02X\n",
+ work8[16],work8[17],work8[18],work8[19],
+ work8[20],work8[21],work8[22],work8[23]);
+
+ len += sprintf(buf+len, "Max Tx wire speed : %d bps\n", (int)work64[3]);
+ len += sprintf(buf+len, "Max Rx wire speed : %d bps\n", (int)work64[4]);
+
+ len += sprintf(buf+len, "Min SDU packet size : 0x%08x\n", work32[10]);
+ len += sprintf(buf+len, "Max SDU packet size : 0x%08x\n", work32[11]);
+
+ spin_unlock(&i2o_proc_lock);
+ return len;
+}
+
+/* LAN group 0001h - MAC address table (scalar) */
+int i2o_proc_read_lan_mac_addr(char *buf, char **start, off_t offset, int len,
+ int *eof, void *data)
+{
+ struct i2o_device *d = (struct i2o_device*)data;
+ static u32 work32[48];
+ static u8 *work8 = (u8*)work32;
+ int token;
+
+ spin_lock(&i2o_proc_lock);
+ len = 0;
+
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
+ 0x0001, -1, &work32, 48*4);
+ if (token < 0) {
+ len += i2o_report_query_status(buf+len, token,"0x0001 LAN MAC Address");
+ spin_unlock(&i2o_proc_lock);
+ return len;
+ }
+
+ len += sprintf(buf, "Active address : "
+ "%02X:%02X:%02X:%02X:%02X:%02X:%02X:%02X\n",
+ work8[0],work8[1],work8[2],work8[3],
+ work8[4],work8[5],work8[6],work8[7]);
+ len += sprintf(buf+len, "Current address : "
+ "%02X:%02X:%02X:%02X:%02X:%02X:%02X:%02X\n",
+ work8[8],work8[9],work8[10],work8[11],
+ work8[12],work8[13],work8[14],work8[15]);
+ len += sprintf(buf+len, "Functional address mask : "
+ "%02X:%02X:%02X:%02X:%02X:%02X:%02X:%02X\n",
+ work8[16],work8[17],work8[18],work8[19],
+ work8[20],work8[21],work8[22],work8[23]);
+
+ len += sprintf(buf+len,"HW/DDM capabilities : 0x%08x\n", work32[7]);
+ len += sprintf(buf+len," [%s] Unicast packets supported\n",
+ (work32[7]&0x00000001)?"+":"-");
+ len += sprintf(buf+len," [%s] Promiscuous mode supported\n",
+ (work32[7]&0x00000002)?"+":"-");
+ len += sprintf(buf+len," [%s] Promiscuous multicast mode supported\n",
+ (work32[7]&0x00000004)?"+":"-");
+ len += sprintf(buf+len," [%s] Broadcast reception disabling supported\n",
+ (work32[7]&0x00000100)?"+":"-");
+ len += sprintf(buf+len," [%s] Multicast reception disabling supported\n",
+ (work32[7]&0x00000200)?"+":"-");
+ len += sprintf(buf+len," [%s] Functional address disabling supported\n",
+ (work32[7]&0x00000400)?"+":"-");
+ len += sprintf(buf+len," [%s] MAC reporting supported\n",
+ (work32[7]&0x00000800)?"+":"-");
+
+ len += sprintf(buf+len,"Filter mask : 0x%08x\n", work32[6]);
+ len += sprintf(buf+len," [%s] Unicast packets disable\n",
+ (work32[6]&0x00000001)?"+":"-");
+ len += sprintf(buf+len," [%s] Promiscuous mode enable\n",
+ (work32[6]&0x00000002)?"+":"-");
+ len += sprintf(buf+len," [%s] Promiscuous multicast mode enable\n",
+ (work32[6]&0x00000004)?"+":"-");
+ len += sprintf(buf+len," [%s] Broadcast packets disable\n",
+ (work32[6]&0x00000100)?"+":"-");
+ len += sprintf(buf+len," [%s] Multicast packets disable\n",
+ (work32[6]&0x00000200)?"+":"-");
+ len += sprintf(buf+len," [%s] Functional address disable\n",
+ (work32[6]&0x00000400)?"+":"-");
+
+ if (work32[7]&0x00000800) {
+ len += sprintf(buf+len, " MAC reporting mode : ");
+ /* bits 11-12 of the filter mask form a two-bit mode; test the combined value first */
+ if ((work32[6]&0x00001800) == 0x00001800)
+ len += sprintf(buf+len, "Pass all MAC packets (promiscuous) to user\n");
+ else if (work32[6]&0x00000800)
+ len += sprintf(buf+len, "Pass only priority MAC packets to user\n");
+ else if (work32[6]&0x00001000)
+ len += sprintf(buf+len, "Pass all MAC packets to user\n");
+ else
+ len += sprintf(buf+len, "Do not pass MAC packets to user\n");
+ }
+ len += sprintf(buf+len, "Number of multicast addresses : %d\n", work32[8]);
+ len += sprintf(buf+len, "Perfect filtering for max %d multicast addresses\n",
+ work32[9]);
+ len += sprintf(buf+len, "Imperfect filtering for max %d multicast addresses\n",
+ work32[10]);
+
+ spin_unlock(&i2o_proc_lock);
+
+ return len;
+}
+
+/* LAN group 0002h - Multicast MAC address table (table) */
+int i2o_proc_read_lan_mcast_addr(char *buf, char **start, off_t offset,
+ int len, int *eof, void *data)
+{
+ struct i2o_device *d = (struct i2o_device*)data;
+ int token;
+ int i;
+ u8 mc_addr[8];
+
+ struct
+ {
+ u16 result_count;
+ u16 pad;
+ u16 block_size;
+ u8 block_status;
+ u8 error_info_size;
+ u16 row_count;
+ u16 more_flag;
+ u8 mc_addr[256][8];
+ } result;
+
+ spin_lock(&i2o_proc_lock);
+ len = 0;
+
+ token = i2o_query_table(I2O_PARAMS_TABLE_GET,
+ d->controller, d->lct_data.tid, 0x0002, -1,
+ NULL, 0, &result, sizeof(result));
+
+ if (token < 0) {
+ len += i2o_report_query_status(buf+len, token, "0x0002 LAN Multicast MAC Address");
+ spin_unlock(&i2o_proc_lock);
+ return len;
+ }
+
+ for (i = 0; i < result.row_count; i++)
+ {
+ memcpy(mc_addr, result.mc_addr[i], 8);
+
+ len += sprintf(buf+len, "MC MAC address[%d]: "
+ "%02X:%02X:%02X:%02X:%02X:%02X:%02X:%02X\n",
+ i, mc_addr[0], mc_addr[1], mc_addr[2],
+ mc_addr[3], mc_addr[4], mc_addr[5],
+ mc_addr[6], mc_addr[7]);
+ }
+
+ spin_unlock(&i2o_proc_lock);
+ return len;
+}
+
+/* LAN group 0003h - Batch Control (scalar) */
+int i2o_proc_read_lan_batch_control(char *buf, char **start, off_t offset,
+ int len, int *eof, void *data)
+{
+ struct i2o_device *d = (struct i2o_device*)data;
+ static u32 work32[9];
+ int token;
+
+ spin_lock(&i2o_proc_lock);
+ len = 0;
+
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
+ 0x0003, -1, &work32, 9*4);
+ if (token < 0) {
+ len += i2o_report_query_status(buf+len, token,"0x0003 LAN Batch Control");
+ spin_unlock(&i2o_proc_lock);
+ return len;
+ }
+
+ len += sprintf(buf, "Batch mode ");
+ if (work32[0]&0x00000001)
+ len += sprintf(buf+len, "disabled");
+ else
+ len += sprintf(buf+len, "enabled");
+ if (work32[0]&0x00000002)
+ len += sprintf(buf+len, " (current setting)");
+ if (work32[0]&0x00000004)
+ len += sprintf(buf+len, ", forced");
+ else
+ len += sprintf(buf+len, ", toggle");
+ len += sprintf(buf+len, "\n");
+
+ len += sprintf(buf+len, "Max Rx batch count : %d\n", work32[5]);
+ len += sprintf(buf+len, "Max Rx batch delay : %d\n", work32[6]);
+ len += sprintf(buf+len, "Max Tx batch delay : %d\n", work32[7]);
+ len += sprintf(buf+len, "Max Tx batch count : %d\n", work32[8]);
+
+ spin_unlock(&i2o_proc_lock);
+ return len;
+}
+
+/* LAN group 0004h - LAN Operation (scalar) */
+int i2o_proc_read_lan_operation(char *buf, char **start, off_t offset, int len,
+ int *eof, void *data)
+{
+ struct i2o_device *d = (struct i2o_device*)data;
+ static u32 work32[5];
+ int token;
+
+ spin_lock(&i2o_proc_lock);
+ len = 0;
+
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
+ 0x0004, -1, &work32, 20);
+ if (token < 0) {
+ len += i2o_report_query_status(buf+len, token,"0x0004 LAN Operation");
+ spin_unlock(&i2o_proc_lock);
+ return len;
+ }
+
+ len += sprintf(buf, "Packet prepadding (32b words) : %d\n", work32[0]);
+ len += sprintf(buf+len, "Transmission error reporting : %s\n",
+ (work32[1]&1)?"on":"off");
+ len += sprintf(buf+len, "Bad packet handling : %s\n",
+ (work32[1]&0x2)?"by host":"by DDM");
+ len += sprintf(buf+len, "Packet orphan limit : %d\n", work32[2]);
+
+ len += sprintf(buf+len, "Tx modes : 0x%08x\n", work32[3]);
+ len += sprintf(buf+len, " [%s] HW CRC suppression\n",
+ (work32[3]&0x00000004) ? "+" : "-");
+ len += sprintf(buf+len, " [%s] HW IPv4 checksum\n",
+ (work32[3]&0x00000100) ? "+" : "-");
+ len += sprintf(buf+len, " [%s] HW TCP checksum\n",
+ (work32[3]&0x00000200) ? "+" : "-");
+ len += sprintf(buf+len, " [%s] HW UDP checksum\n",
+ (work32[3]&0x00000400) ? "+" : "-");
+ len += sprintf(buf+len, " [%s] HW RSVP checksum\n",
+ (work32[3]&0x00000800) ? "+" : "-");
+ len += sprintf(buf+len, " [%s] HW ICMP checksum\n",
+ (work32[3]&0x00001000) ? "+" : "-");
+ len += sprintf(buf+len, " [%s] Loopback suppression enable\n",
+ (work32[3]&0x00002000) ? "+" : "-");
+
+ len += sprintf(buf+len, "Rx modes : 0x%08x\n", work32[4]);
+ len += sprintf(buf+len, " [%s] FCS in payload\n",
+ (work32[4]&0x00000004) ? "+" : "-");
+ len += sprintf(buf+len, " [%s] HW IPv4 checksum validation\n",
+ (work32[4]&0x00000100) ? "+" : "-");
+ len += sprintf(buf+len, " [%s] HW TCP checksum validation\n",
+ (work32[4]&0x00000200) ? "+" : "-");
+ len += sprintf(buf+len, " [%s] HW UDP checksum validation\n",
+ (work32[4]&0x00000400) ? "+" : "-");
+ len += sprintf(buf+len, " [%s] HW RSVP checksum validation\n",
+ (work32[4]&0x00000800) ? "+" : "-");
+ len += sprintf(buf+len, " [%s] HW ICMP checksum validation\n",
+ (work32[4]&0x00001000) ? "+" : "-");
+
+ spin_unlock(&i2o_proc_lock);
+ return len;
+}
+
+/* LAN group 0005h - Media operation (scalar) */
+int i2o_proc_read_lan_media_operation(char *buf, char **start, off_t offset,
+ int len, int *eof, void *data)
+{
+ struct i2o_device *d = (struct i2o_device*)data;
+ int token;
+
+ struct
+ {
+ u32 connector_type;
+ u32 connection_type;
+ u64 current_tx_wire_speed;
+ u64 current_rx_wire_speed;
+ u8 duplex_mode;
+ u8 link_status;
+ u8 reserved;
+ u8 duplex_mode_target;
+ u32 connector_type_target;
+ u32 connection_type_target;
+ } result;
+
+ spin_lock(&i2o_proc_lock);
+ len = 0;
+
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
+ 0x0005, -1, &result, sizeof(result));
+ if (token < 0) {
+ len += i2o_report_query_status(buf+len, token, "0x0005 LAN Media Operation");
+ spin_unlock(&i2o_proc_lock);
+ return len;
+ }
+
+ len += sprintf(buf, "Connector type : %s\n",
+ i2o_get_connector_type(result.connector_type));
+ len += sprintf(buf+len, "Connection type : %s\n",
+ i2o_get_connection_type(result.connection_type));
+
+ len += sprintf(buf+len, "Current Tx wire speed : %d bps\n", (int)result.current_tx_wire_speed);
+ len += sprintf(buf+len, "Current Rx wire speed : %d bps\n", (int)result.current_rx_wire_speed);
+ len += sprintf(buf+len, "Duplex mode : %s duplex\n",
+ (result.duplex_mode)?"Full":"Half");
+
+ len += sprintf(buf+len, "Link status : ");
+ switch (result.link_status)
+ {
+ case 0x00:
+ len += sprintf(buf+len, "Unknown\n");
+ break;
+ case 0x01:
+ len += sprintf(buf+len, "Normal\n");
+ break;
+ case 0x02:
+ len += sprintf(buf+len, "Failure\n");
+ break;
+ case 0x03:
+ len += sprintf(buf+len, "Reset\n");
+ break;
+ default:
+ len += sprintf(buf+len, "Unspecified\n");
+ }
+
+ len += sprintf(buf+len, "Duplex mode target : ");
+ switch (result.duplex_mode_target){
+ case 0:
+ len += sprintf(buf+len, "Half duplex\n");
+ break;
+ case 1:
+ len += sprintf(buf+len, "Full duplex\n");
+ break;
+ default:
+ len += sprintf(buf+len, "\n");
+ }
+
+ len += sprintf(buf+len, "Connector type target : %s\n",
+ i2o_get_connector_type(result.connector_type_target));
+ len += sprintf(buf+len, "Connection type target : %s\n",
+ i2o_get_connection_type(result.connection_type_target));
+
+ spin_unlock(&i2o_proc_lock);
+ return len;
+}
+
+/* LAN group 0006h - Alternate address (table) (optional) */
+int i2o_proc_read_lan_alt_addr(char *buf, char **start, off_t offset, int len,
+ int *eof, void *data)
+{
+ struct i2o_device *d = (struct i2o_device*)data;
+ int token;
+ int i;
+ u8 alt_addr[8];
+ struct
+ {
+ u16 result_count;
+ u16 pad;
+ u16 block_size;
+ u8 block_status;
+ u8 error_info_size;
+ u16 row_count;
+ u16 more_flag;
+ u8 alt_addr[256][8];
+ } result;
+
+ spin_lock(&i2o_proc_lock);
+ len = 0;
+
+ token = i2o_query_table(I2O_PARAMS_TABLE_GET,
+ d->controller, d->lct_data.tid,
+ 0x0006, -1, NULL, 0, &result, sizeof(result));
+
+ if (token < 0) {
+ len += i2o_report_query_status(buf+len, token, "0x0006 LAN Alternate Address (optional)");
+ spin_unlock(&i2o_proc_lock);
+ return len;
+ }
+
+ for (i=0; i < result.row_count; i++)
+ {
+ memcpy(alt_addr,result.alt_addr[i],8);
+ len += sprintf(buf+len, "Alternate address[%d]: "
+ "%02X:%02X:%02X:%02X:%02X:%02X:%02X:%02X\n",
+ i, alt_addr[0], alt_addr[1], alt_addr[2],
+ alt_addr[3], alt_addr[4], alt_addr[5],
+ alt_addr[6], alt_addr[7]);
+ }
+
+ spin_unlock(&i2o_proc_lock);
+ return len;
+}
+
+
+/* LAN group 0007h - Transmit info (scalar) */
+int i2o_proc_read_lan_tx_info(char *buf, char **start, off_t offset, int len,
+ int *eof, void *data)
+{
+ struct i2o_device *d = (struct i2o_device*)data;
+ static u32 work32[8];
+ int token;
+
+ spin_lock(&i2o_proc_lock);
+ len = 0;
+
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
+ 0x0007, -1, &work32, 8*4);
+ if (token < 0) {
+ len += i2o_report_query_status(buf+len, token,"0x0007 LAN Transmit Info");
+ spin_unlock(&i2o_proc_lock);
+ return len;
+ }
+
+ len += sprintf(buf, "Tx Max SG elements per packet : %d\n", work32[0]);
+ len += sprintf(buf+len, "Tx Max SG elements per chain : %d\n", work32[1]);
+ len += sprintf(buf+len, "Tx Max outstanding packets : %d\n", work32[2]);
+ len += sprintf(buf+len, "Tx Max packets per request : %d\n", work32[3]);
+
+ len += sprintf(buf+len, "Tx modes : 0x%08x\n", work32[4]);
+ len += sprintf(buf+len, " [%s] No DA in SGL\n",
+ (work32[4]&0x00000002) ? "+" : "-");
+ len += sprintf(buf+len, " [%s] CRC suppression\n",
+ (work32[4]&0x00000004) ? "+" : "-");
+ len += sprintf(buf+len, " [%s] MAC insertion\n",
+ (work32[4]&0x00000010) ? "+" : "-");
+ len += sprintf(buf+len, " [%s] RIF insertion\n",
+ (work32[4]&0x00000020) ? "+" : "-");
+ len += sprintf(buf+len, " [%s] IPv4 checksum generation\n",
+ (work32[4]&0x00000100) ? "+" : "-");
+ len += sprintf(buf+len, " [%s] TCP checksum generation\n",
+ (work32[4]&0x00000200) ? "+" : "-");
+ len += sprintf(buf+len, " [%s] UDP checksum generation\n",
+ (work32[4]&0x00000400) ? "+" : "-");
+ len += sprintf(buf+len, " [%s] RSVP checksum generation\n",
+ (work32[4]&0x00000800) ? "+" : "-");
+ len += sprintf(buf+len, " [%s] ICMP checksum generation\n",
+ (work32[4]&0x00001000) ? "+" : "-");
+ len += sprintf(buf+len, " [%s] Loopback enabled\n",
+ (work32[4]&0x00010000) ? "+" : "-");
+ len += sprintf(buf+len, " [%s] Loopback suppression enabled\n",
+ (work32[4]&0x00020000) ? "+" : "-");
+
+ spin_unlock(&i2o_proc_lock);
+ return len;
+}
+
+/* LAN group 0008h - Receive info (scalar) */
+int i2o_proc_read_lan_rx_info(char *buf, char **start, off_t offset, int len,
+ int *eof, void *data)
+{
+ struct i2o_device *d = (struct i2o_device*)data;
+ static u32 work32[8];
+ int token;
+
+ spin_lock(&i2o_proc_lock);
+ len = 0;
+
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
+ 0x0008, -1, &work32, 8*4);
+ if (token < 0) {
+ len += i2o_report_query_status(buf+len, token,"0x0008 LAN Receive Info");
+ spin_unlock(&i2o_proc_lock);
+ return len;
+ }
+
+ len += sprintf(buf ,"Rx Max size of chain element : %d\n", work32[0]);
+ len += sprintf(buf+len, "Rx Max Buckets : %d\n", work32[1]);
+ len += sprintf(buf+len, "Rx Max Buckets in Reply : %d\n", work32[3]);
+ len += sprintf(buf+len, "Rx Max Packets in Bucket : %d\n", work32[4]);
+ len += sprintf(buf+len, "Rx Max Buckets in Post : %d\n", work32[5]);
+
+ len += sprintf(buf+len, "Rx Modes : 0x%08x\n", work32[2]);
+ len += sprintf(buf+len, " [%s] FCS reception\n",
+ (work32[2]&0x00000004) ? "+" : "-");
+ len += sprintf(buf+len, " [%s] IPv4 checksum validation\n",
+ (work32[2]&0x00000100) ? "+" : "-");
+ len += sprintf(buf+len, " [%s] TCP checksum validation\n",
+ (work32[2]&0x00000200) ? "+" : "-");
+ len += sprintf(buf+len, " [%s] UDP checksum validation\n",
+ (work32[2]&0x00000400) ? "+" : "-");
+ len += sprintf(buf+len, " [%s] RSVP checksum validation\n",
+ (work32[2]&0x00000800) ? "+" : "-");
+ len += sprintf(buf+len, " [%s] ICMP checksum validation\n",
+ (work32[2]&0x00001000) ? "+" : "-");
+
+ spin_unlock(&i2o_proc_lock);
+ return len;
+}
+
+static int i2o_report_opt_field(char *buf, char *field_name,
+ int field_nbr, int supp_fields, u64 *value)
+{
+ if (supp_fields & (1 << field_nbr))
+ return sprintf(buf, "%-24s : " FMT_U64_HEX "\n", field_name, U64_VAL(value));
+ else
+ return sprintf(buf, "%-24s : Not supported\n", field_name);
+}
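+/*
+ * Example (hypothetical values): with supp_fields == 0x5, fields 0 and 2
+ * are printed as 64-bit hex values, while field 1 prints
+ * "<field_name> : Not supported".
+ */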
+
+/* LAN group 0100h - LAN Historical statistics (scalar) */
+/* LAN group 0180h - Supported Optional Historical Statistics (scalar) */
+/* LAN group 0182h - Optional Non Media Specific Transmit Historical Statistics (scalar) */
+/* LAN group 0183h - Optional Non Media Specific Receive Historical Statistics (scalar) */
+
+int i2o_proc_read_lan_hist_stats(char *buf, char **start, off_t offset, int len,
+ int *eof, void *data)
+{
+ struct i2o_device *d = (struct i2o_device*)data;
+ int token;
+
+ struct
+ {
+ u64 tx_packets;
+ u64 tx_bytes;
+ u64 rx_packets;
+ u64 rx_bytes;
+ u64 tx_errors;
+ u64 rx_errors;
+ u64 rx_dropped;
+ u64 adapter_resets;
+ u64 adapter_suspends;
+ } stats; // 0x0100
+
+ static u64 supp_groups[4]; // 0x0180
+
+ struct
+ {
+ u64 tx_retries;
+ u64 tx_directed_bytes;
+ u64 tx_directed_packets;
+ u64 tx_multicast_bytes;
+ u64 tx_multicast_packets;
+ u64 tx_broadcast_bytes;
+ u64 tx_broadcast_packets;
+ u64 tx_group_addr_packets;
+ u64 tx_short_packets;
+ } tx_stats; // 0x0182
+
+ struct
+ {
+ u64 rx_crc_errors;
+ u64 rx_directed_bytes;
+ u64 rx_directed_packets;
+ u64 rx_multicast_bytes;
+ u64 rx_multicast_packets;
+ u64 rx_broadcast_bytes;
+ u64 rx_broadcast_packets;
+ u64 rx_group_addr_packets;
+ u64 rx_short_packets;
+ u64 rx_long_packets;
+ u64 rx_runt_packets;
+ } rx_stats; // 0x0183
+
+ struct
+ {
+ u64 ipv4_generate;
+ u64 ipv4_validate_success;
+ u64 ipv4_validate_errors;
+ u64 tcp_generate;
+ u64 tcp_validate_success;
+ u64 tcp_validate_errors;
+ u64 udp_generate;
+ u64 udp_validate_success;
+ u64 udp_validate_errors;
+ u64 rsvp_generate;
+ u64 rsvp_validate_success;
+ u64 rsvp_validate_errors;
+ u64 icmp_generate;
+ u64 icmp_validate_success;
+ u64 icmp_validate_errors;
+ } chksum_stats; // 0x0184
+
+ spin_lock(&i2o_proc_lock);
+ len = 0;
+
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
+ 0x0100, -1, &stats, sizeof(stats));
+ if (token < 0) {
+ len += i2o_report_query_status(buf+len, token,"0x100 LAN Statistics");
+ spin_unlock(&i2o_proc_lock);
+ return len;
+ }
+
+ len += sprintf(buf+len, "Tx packets : " FMT_U64_HEX "\n",
+ U64_VAL(&stats.tx_packets));
+ len += sprintf(buf+len, "Tx bytes : " FMT_U64_HEX "\n",
+ U64_VAL(&stats.tx_bytes));
+ len += sprintf(buf+len, "Rx packets : " FMT_U64_HEX "\n",
+ U64_VAL(&stats.rx_packets));
+ len += sprintf(buf+len, "Rx bytes : " FMT_U64_HEX "\n",
+ U64_VAL(&stats.rx_bytes));
+ len += sprintf(buf+len, "Tx errors : " FMT_U64_HEX "\n",
+ U64_VAL(&stats.tx_errors));
+ len += sprintf(buf+len, "Rx errors : " FMT_U64_HEX "\n",
+ U64_VAL(&stats.rx_errors));
+ len += sprintf(buf+len, "Rx dropped : " FMT_U64_HEX "\n",
+ U64_VAL(&stats.rx_dropped));
+ len += sprintf(buf+len, "Adapter resets : " FMT_U64_HEX "\n",
+ U64_VAL(&stats.adapter_resets));
+ len += sprintf(buf+len, "Adapter suspends : " FMT_U64_HEX "\n",
+ U64_VAL(&stats.adapter_suspends));
+
+ /* Optional statistics follow */
+ /* Get 0x0180 to see which optional groups/fields are supported */
+
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
+ 0x0180, -1, &supp_groups, sizeof(supp_groups));
+
+ if (token < 0) {
+ len += i2o_report_query_status(buf+len, token, "0x180 LAN Supported Optional Statistics");
+ spin_unlock(&i2o_proc_lock);
+ return len;
+ }
+
+ if (supp_groups[1]) /* 0x0182 */
+ {
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
+ 0x0182, -1, &tx_stats, sizeof(tx_stats));
+
+ if (token < 0) {
+ len += i2o_report_query_status(buf+len, token,"0x182 LAN Optional Tx Historical Statistics");
+ spin_unlock(&i2o_proc_lock);
+ return len;
+ }
+
+ len += sprintf(buf+len, "==== Optional TX statistics (group 0182h)\n");
+
+ len += i2o_report_opt_field(buf+len, "Tx RetryCount",
+ 0, supp_groups[1], &tx_stats.tx_retries);
+ len += i2o_report_opt_field(buf+len, "Tx DirectedBytes",
+ 1, supp_groups[1], &tx_stats.tx_directed_bytes);
+ len += i2o_report_opt_field(buf+len, "Tx DirectedPackets",
+ 2, supp_groups[1], &tx_stats.tx_directed_packets);
+ len += i2o_report_opt_field(buf+len, "Tx MulticastBytes",
+ 3, supp_groups[1], &tx_stats.tx_multicast_bytes);
+ len += i2o_report_opt_field(buf+len, "Tx MulticastPackets",
+ 4, supp_groups[1], &tx_stats.tx_multicast_packets);
+ len += i2o_report_opt_field(buf+len, "Tx BroadcastBytes",
+ 5, supp_groups[1], &tx_stats.tx_broadcast_bytes);
+ len += i2o_report_opt_field(buf+len, "Tx BroadcastPackets",
+ 6, supp_groups[1], &tx_stats.tx_broadcast_packets);
+ len += i2o_report_opt_field(buf+len, "Tx TotalGroupAddrPackets",
+ 7, supp_groups[1], &tx_stats.tx_group_addr_packets);
+ len += i2o_report_opt_field(buf+len, "Tx TotalPacketsTooShort",
+ 8, supp_groups[1], &tx_stats.tx_short_packets);
+ }
+
+ if (supp_groups[2]) /* 0x0183 */
+ {
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
+ 0x0183, -1, &rx_stats, sizeof(rx_stats));
+ if (token < 0) {
+ len += i2o_report_query_status(buf+len, token,"0x183 LAN Optional Rx Historical Stats");
+ spin_unlock(&i2o_proc_lock);
+ return len;
+ }
+
+ len += sprintf(buf+len, "==== Optional RX statistics (group 0183h)\n");
+
+ len += i2o_report_opt_field(buf+len, "Rx CRCErrorCount",
+ 0, supp_groups[2], &rx_stats.rx_crc_errors);
+ len += i2o_report_opt_field(buf+len, "Rx DirectedBytes",
+ 1, supp_groups[2], &rx_stats.rx_directed_bytes);
+ len += i2o_report_opt_field(buf+len, "Rx DirectedPackets",
+ 2, supp_groups[2], &rx_stats.rx_directed_packets);
+ len += i2o_report_opt_field(buf+len, "Rx MulticastBytes",
+ 3, supp_groups[2], &rx_stats.rx_multicast_bytes);
+ len += i2o_report_opt_field(buf+len, "Rx MulticastPackets",
+ 4, supp_groups[2], &rx_stats.rx_multicast_packets);
+ len += i2o_report_opt_field(buf+len, "Rx BroadcastBytes",
+ 5, supp_groups[2], &rx_stats.rx_broadcast_bytes);
+ len += i2o_report_opt_field(buf+len, "Rx BroadcastPackets",
+ 6, supp_groups[2], &rx_stats.rx_broadcast_packets);
+ len += i2o_report_opt_field(buf+len, "Rx TotalGroupAddrPackets",
+ 7, supp_groups[2], &rx_stats.rx_group_addr_packets);
+ len += i2o_report_opt_field(buf+len, "Rx TotalPacketsTooShort",
+ 8, supp_groups[2], &rx_stats.rx_short_packets);
+ len += i2o_report_opt_field(buf+len, "Rx TotalPacketsTooLong",
+ 9, supp_groups[2], &rx_stats.rx_long_packets);
+ len += i2o_report_opt_field(buf+len, "Rx TotalPacketsRunt",
+ 10, supp_groups[2], &rx_stats.rx_runt_packets);
+ }
+
+ if (supp_groups[3]) /* 0x0184 */
+ {
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
+ 0x0184, -1, &chksum_stats, sizeof(chksum_stats));
+
+ if (token < 0) {
+ len += i2o_report_query_status(buf+len, token,"0x184 LAN Optional Chksum Historical Stats");
+ spin_unlock(&i2o_proc_lock);
+ return len;
+ }
+
+ len += sprintf(buf+len, "==== Optional CHKSUM statistics (group 0184h)\n");
+
+ len += i2o_report_opt_field(buf+len, "IPv4 Generate",
+ 0, supp_groups[3], &chksum_stats.ipv4_generate);
+ len += i2o_report_opt_field(buf+len, "IPv4 ValidateSuccess",
+ 1, supp_groups[3], &chksum_stats.ipv4_validate_success);
+ len += i2o_report_opt_field(buf+len, "IPv4 ValidateError",
+ 2, supp_groups[3], &chksum_stats.ipv4_validate_errors);
+ len += i2o_report_opt_field(buf+len, "TCP Generate",
+ 3, supp_groups[3], &chksum_stats.tcp_generate);
+ len += i2o_report_opt_field(buf+len, "TCP ValidateSuccess",
+ 4, supp_groups[3], &chksum_stats.tcp_validate_success);
+ len += i2o_report_opt_field(buf+len, "TCP ValidateError",
+ 5, supp_groups[3], &chksum_stats.tcp_validate_errors);
+ len += i2o_report_opt_field(buf+len, "UDP Generate",
+ 6, supp_groups[3], &chksum_stats.udp_generate);
+ len += i2o_report_opt_field(buf+len, "UDP ValidateSuccess",
+ 7, supp_groups[3], &chksum_stats.udp_validate_success);
+ len += i2o_report_opt_field(buf+len, "UDP ValidateError",
+ 8, supp_groups[3], &chksum_stats.udp_validate_errors);
+ len += i2o_report_opt_field(buf+len, "RSVP Generate",
+ 9, supp_groups[3], &chksum_stats.rsvp_generate);
+ len += i2o_report_opt_field(buf+len, "RSVP ValidateSuccess",
+ 10, supp_groups[3], &chksum_stats.rsvp_validate_success);
+ len += i2o_report_opt_field(buf+len, "RSVP ValidateError",
+ 11, supp_groups[3], &chksum_stats.rsvp_validate_errors);
+ len += i2o_report_opt_field(buf+len, "ICMP Generate",
+ 12, supp_groups[3], &chksum_stats.icmp_generate);
+ len += i2o_report_opt_field(buf+len, "ICMP ValidateSuccess",
+ 13, supp_groups[3], &chksum_stats.icmp_validate_success);
+ len += i2o_report_opt_field(buf+len, "ICMP ValidateError",
+ 14, supp_groups[3], &chksum_stats.icmp_validate_errors);
+ }
+
+ spin_unlock(&i2o_proc_lock);
+ return len;
+}
+
+/* LAN group 0200h - Required Ethernet Statistics (scalar) */
+/* LAN group 0280h - Optional Ethernet Statistics Supported (scalar) */
+/* LAN group 0281h - Optional Ethernet Historical Statistics (scalar) */
+int i2o_proc_read_lan_eth_stats(char *buf, char **start, off_t offset,
+ int len, int *eof, void *data)
+{
+ struct i2o_device *d = (struct i2o_device*)data;
+ int token;
+
+ struct
+ {
+ u64 rx_align_errors;
+ u64 tx_one_collisions;
+ u64 tx_multiple_collisions;
+ u64 tx_deferred;
+ u64 tx_late_collisions;
+ u64 tx_max_collisions;
+ u64 tx_carrier_lost;
+ u64 tx_excessive_deferrals;
+ } stats;
+
+ static u64 supp_fields;
+ struct
+ {
+ u64 rx_overrun;
+ u64 tx_underrun;
+ u64 tx_heartbeat_failure;
+ } hist_stats;
+
+ spin_lock(&i2o_proc_lock);
+ len = 0;
+
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
+ 0x0200, -1, &stats, sizeof(stats));
+
+ if (token < 0) {
+ len += i2o_report_query_status(buf+len, token,"0x0200 LAN Ethernet Statistics");
+ spin_unlock(&i2o_proc_lock);
+ return len;
+ }
+
+ len += sprintf(buf+len, "Rx alignment errors : " FMT_U64_HEX "\n",
+ U64_VAL(&stats.rx_align_errors));
+ len += sprintf(buf+len, "Tx one collisions : " FMT_U64_HEX "\n",
+ U64_VAL(&stats.tx_one_collisions));
+ len += sprintf(buf+len, "Tx multicollisions : " FMT_U64_HEX "\n",
+ U64_VAL(&stats.tx_multiple_collisions));
+ len += sprintf(buf+len, "Tx deferred : " FMT_U64_HEX "\n",
+ U64_VAL(&stats.tx_deferred));
+ len += sprintf(buf+len, "Tx late collisions : " FMT_U64_HEX "\n",
+ U64_VAL(&stats.tx_late_collisions));
+ len += sprintf(buf+len, "Tx max collisions : " FMT_U64_HEX "\n",
+ U64_VAL(&stats.tx_max_collisions));
+ len += sprintf(buf+len, "Tx carrier lost : " FMT_U64_HEX "\n",
+ U64_VAL(&stats.tx_carrier_lost));
+ len += sprintf(buf+len, "Tx excessive deferrals : " FMT_U64_HEX "\n",
+ U64_VAL(&stats.tx_excessive_deferrals));
+
+ /* Optional Ethernet statistics follow */
+ /* Get 0x0280 to see which optional fields are supported */
+
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
+ 0x0280, -1, &supp_fields, sizeof(supp_fields));
+
+ if (token < 0) {
+ len += i2o_report_query_status(buf+len, token,"0x0280 LAN Supported Optional Ethernet Statistics");
+ spin_unlock(&i2o_proc_lock);
+ return len;
+ }
+
+ if (supp_fields) /* 0x0281 */
+ {
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
+ 0x0281, -1, &hist_stats, sizeof(hist_stats));
+
+ if (token < 0) {
+ len += i2o_report_query_status(buf+len, token,"0x0281 LAN Optional Ethernet Statistics");
+ spin_unlock(&i2o_proc_lock);
+ return len;
+ }
+
+ len += sprintf(buf+len, "==== Optional ETHERNET statistics (group 0281h)\n");
+
+ len += i2o_report_opt_field(buf+len, "Rx Overrun",
+ 0, supp_fields, &hist_stats.rx_overrun);
+ len += i2o_report_opt_field(buf+len, "Tx Underrun",
+ 1, supp_fields, &hist_stats.tx_underrun);
+ len += i2o_report_opt_field(buf+len, "Tx HeartbeatFailure",
+ 2, supp_fields, &hist_stats.tx_heartbeat_failure);
+ }
+
+ spin_unlock(&i2o_proc_lock);
+ return len;
+}
+
+/* LAN group 0300h - Required Token Ring Statistics (scalar) */
+/* LAN group 0380h, 0381h - Optional Statistics not yet defined (TODO) */
+int i2o_proc_read_lan_tr_stats(char *buf, char **start, off_t offset,
+ int len, int *eof, void *data)
+{
+ struct i2o_device *d = (struct i2o_device*)data;
+ static u64 work64[13];
+ int token;
+
+ static char *ring_status[] =
+ {
+ "",
+ "",
+ "",
+ "",
+ "",
+ "Ring Recovery",
+ "Single Station",
+ "Counter Overflow",
+ "Remove Received",
+ "",
+ "Auto-Removal Error 1",
+ "Lobe Wire Fault",
+ "Transmit Beacon",
+ "Soft Error",
+ "Hard Error",
+ "Signal Loss"
+ };
+
+ spin_lock(&i2o_proc_lock);
+ len = 0;
+
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
+ 0x0300, -1, &work64, sizeof(work64));
+
+ if (token < 0) {
+ len += i2o_report_query_status(buf+len, token,"0x0300 Token Ring Statistics");
+ spin_unlock(&i2o_proc_lock);
+ return len;
+ }
+
+ len += sprintf(buf+len, "LineErrors : " FMT_U64_HEX "\n",
+ U64_VAL(&work64[0]));
+ len += sprintf(buf+len, "LostFrames : " FMT_U64_HEX "\n",
+ U64_VAL(&work64[1]));
+ len += sprintf(buf+len, "ACError : " FMT_U64_HEX "\n",
+ U64_VAL(&work64[2]));
+ len += sprintf(buf+len, "TxAbortDelimiter : " FMT_U64_HEX "\n",
+ U64_VAL(&work64[3]));
+ len += sprintf(buf+len, "BurstErrors : " FMT_U64_HEX "\n",
+ U64_VAL(&work64[4]));
+ len += sprintf(buf+len, "FrameCopiedErrors : " FMT_U64_HEX "\n",
+ U64_VAL(&work64[5]));
+ len += sprintf(buf+len, "FrequencyErrors : " FMT_U64_HEX "\n",
+ U64_VAL(&work64[6]));
+ len += sprintf(buf+len, "InternalErrors : " FMT_U64_HEX "\n",
+ U64_VAL(&work64[7]));
+ len += sprintf(buf+len, "LastRingStatus : %s\n", ring_status[work64[8]]);
+ len += sprintf(buf+len, "TokenError : " FMT_U64_HEX "\n",
+ U64_VAL(&work64[9]));
+ len += sprintf(buf+len, "UpstreamNodeAddress : " FMT_U64_HEX "\n",
+ U64_VAL(&work64[10]));
+ len += sprintf(buf+len, "LastRingID : " FMT_U64_HEX "\n",
+ U64_VAL(&work64[11]));
+ len += sprintf(buf+len, "LastBeaconType : " FMT_U64_HEX "\n",
+ U64_VAL(&work64[12]));
+
+ spin_unlock(&i2o_proc_lock);
+ return len;
+}
+
+/* LAN group 0400h - Required FDDI Statistics (scalar) */
+/* LAN group 0480h, 0481h - Optional Statistics, not yet defined (TODO) */
+int i2o_proc_read_lan_fddi_stats(char *buf, char **start, off_t offset,
+ int len, int *eof, void *data)
+{
+ struct i2o_device *d = (struct i2o_device*)data;
+ static u64 work64[11];
+ int token;
+
+ static char *conf_state[] =
+ {
+ "Isolated",
+ "Local a",
+ "Local b",
+ "Local ab",
+ "Local s",
+ "Wrap a",
+ "Wrap b",
+ "Wrap ab",
+ "Wrap s",
+ "C-Wrap a",
+ "C-Wrap b",
+ "C-Wrap s",
+ "Through",
+ };
+
+ static char *ring_state[] =
+ {
+ "Isolated",
+ "Non-op",
+ "Ring-op",
+ "Detect",
+ "Non-op-Dup",
+ "Ring-op-Dup",
+ "Directed",
+ "Trace"
+ };
+
+ static char *link_state[] =
+ {
+ "Off",
+ "Break",
+ "Trace",
+ "Connect",
+ "Next",
+ "Signal",
+ "Join",
+ "Verify",
+ "Active",
+ "Maintenance"
+ };
+
+ spin_lock(&i2o_proc_lock);
+ len = 0;
+
+ token = i2o_query_scalar(d->controller, d->lct_data.tid,
+ 0x0400, -1, &work64, sizeof(work64));
+
+ if (token < 0) {
+ len += i2o_report_query_status(buf+len, token,"0x0400 FDDI Required Statistics");
+ spin_unlock(&i2o_proc_lock);
+ return len;
+ }
+
+ len += sprintf(buf+len, "ConfigurationState : %s\n", conf_state[work64[0]]);
+ len += sprintf(buf+len, "UpstreamNode : " FMT_U64_HEX "\n",
+ U64_VAL(&work64[1]));
+ len += sprintf(buf+len, "DownStreamNode : " FMT_U64_HEX "\n",
+ U64_VAL(&work64[2]));
+ len += sprintf(buf+len, "FrameErrors : " FMT_U64_HEX "\n",
+ U64_VAL(&work64[3]));
+ len += sprintf(buf+len, "FramesLost : " FMT_U64_HEX "\n",
+ U64_VAL(&work64[4]));
+ len += sprintf(buf+len, "RingMgmtState : %s\n", ring_state[work64[5]]);
+ len += sprintf(buf+len, "LCTFailures : " FMT_U64_HEX "\n",
+ U64_VAL(&work64[6]));
+ len += sprintf(buf+len, "LEMRejects : " FMT_U64_HEX "\n",
+ U64_VAL(&work64[7]));
+ len += sprintf(buf+len, "LEMCount : " FMT_U64_HEX "\n",
+ U64_VAL(&work64[8]));
+ len += sprintf(buf+len, "LConnectionState : %s\n",
+ link_state[work64[9]]);
+
+ spin_unlock(&i2o_proc_lock);
+ return len;
+}
+
+static int i2o_proc_create_entries(void *data, i2o_proc_entry *pentry,
+ struct proc_dir_entry *parent)
+{
+ struct proc_dir_entry *ent;
+
+ while(pentry->name != NULL)
+ {
+ ent = create_proc_entry(pentry->name, pentry->mode, parent);
+ if(!ent) return -1;
+
+ ent->data = data;
+ ent->read_proc = pentry->read_proc;
+ ent->write_proc = pentry->write_proc;
+ ent->nlink = 1;
+
+ pentry++;
+ }
+
+ return 0;
+}
+
+static void i2o_proc_remove_entries(i2o_proc_entry *pentry,
+ struct proc_dir_entry *parent)
+{
+ while(pentry->name != NULL)
+ {
+ remove_proc_entry(pentry->name, parent);
+ pentry++;
+ }
+}
+
+static int i2o_proc_add_controller(struct i2o_controller *pctrl,
+ struct proc_dir_entry *root )
+{
+ struct proc_dir_entry *dir, *dir1;
+ struct i2o_device *dev;
+ char buff[10];
+
+ sprintf(buff, "iop%d", pctrl->unit);
+
+ dir = proc_mkdir(buff, root);
+ if(!dir)
+ return -1;
+
+ pctrl->proc_entry = dir;
+
+ i2o_proc_create_entries(pctrl, generic_iop_entries, dir);
+
+ for(dev = pctrl->devices; dev; dev = dev->next)
+ {
+ sprintf(buff, "%0#5x", dev->lct_data.tid);
+
+ dir1 = proc_mkdir(buff, dir);
+ if(!dir1)
+ {
+ printk(KERN_INFO "i2o_proc: Could not allocate proc dir\n");
+ continue;
+ }
+ dev->proc_entry = dir1;
+
+ i2o_proc_add_device(dev, dir1);
+ }
+
+ return 0;
+}
+
+void i2o_proc_new_dev(struct i2o_controller *c, struct i2o_device *d)
+{
+ char buff[10];
+
+#ifdef DRIVERDEBUG
+ printk(KERN_INFO "Adding new device to /proc/i2o/iop%d\n", c->unit);
+#endif
+ sprintf(buff, "%0#5x", d->lct_data.tid);
+
+ d->proc_entry = proc_mkdir(buff, c->proc_entry);
+
+ if(!d->proc_entry)
+ {
+ printk(KERN_WARNING "i2o: Could not allocate procdir!\n");
+ return;
+ }
+
+ i2o_proc_add_device(d, d->proc_entry);
+}
+
+void i2o_proc_add_device(struct i2o_device *dev, struct proc_dir_entry *dir)
+{
+ i2o_proc_create_entries(dev, generic_dev_entries, dir);
+
+ /* Inform core that we want updates about this device's status */
+ i2o_device_notify_on(dev, &i2o_proc_handler);
+ switch(dev->lct_data.class_id)
+ {
+ case I2O_CLASS_SCSI_PERIPHERAL:
+ case I2O_CLASS_RANDOM_BLOCK_STORAGE:
+ i2o_proc_create_entries(dev, rbs_dev_entries, dir);
+ break;
+ case I2O_CLASS_LAN:
+ i2o_proc_create_entries(dev, lan_entries, dir);
+ switch(dev->lct_data.sub_class)
+ {
+ case I2O_LAN_ETHERNET:
+ i2o_proc_create_entries(dev, lan_eth_entries, dir);
+ break;
+ case I2O_LAN_FDDI:
+ i2o_proc_create_entries(dev, lan_fddi_entries, dir);
+ break;
+ case I2O_LAN_TR:
+ i2o_proc_create_entries(dev, lan_tr_entries, dir);
+ break;
+ default:
+ break;
+ }
+ break;
+ default:
+ break;
+ }
+}
+
+static void i2o_proc_remove_controller(struct i2o_controller *pctrl,
+ struct proc_dir_entry *parent)
+{
+ char buff[10];
+ struct i2o_device *dev;
+
+ /* Remove unused device entries */
+ for(dev=pctrl->devices; dev; dev=dev->next)
+ i2o_proc_remove_device(dev);
+
+ if(!atomic_read(&pctrl->proc_entry->count))
+ {
+ sprintf(buff, "iop%d", pctrl->unit);
+
+ i2o_proc_remove_entries(generic_iop_entries, pctrl->proc_entry);
+
+ remove_proc_entry(buff, parent);
+ pctrl->proc_entry = NULL;
+ }
+}
+
+void i2o_proc_remove_device(struct i2o_device *dev)
+{
+ struct proc_dir_entry *de=dev->proc_entry;
+ char dev_id[10];
+
+ sprintf(dev_id, "%0#5x", dev->lct_data.tid);
+
+ i2o_device_notify_off(dev, &i2o_proc_handler);
+ /* Would it be safe to remove _files_ even if they are in use? */
+ if((de) && (!atomic_read(&de->count)))
+ {
+ i2o_proc_remove_entries(generic_dev_entries, de);
+ switch(dev->lct_data.class_id)
+ {
+ case I2O_CLASS_SCSI_PERIPHERAL:
+ case I2O_CLASS_RANDOM_BLOCK_STORAGE:
+ i2o_proc_remove_entries(rbs_dev_entries, de);
+ break;
+ case I2O_CLASS_LAN:
+ {
+ i2o_proc_remove_entries(lan_entries, de);
+ switch(dev->lct_data.sub_class)
+ {
+ case I2O_LAN_ETHERNET:
+ i2o_proc_remove_entries(lan_eth_entries, de);
+ break;
+ case I2O_LAN_FDDI:
+ i2o_proc_remove_entries(lan_fddi_entries, de);
+ break;
+ case I2O_LAN_TR:
+ i2o_proc_remove_entries(lan_tr_entries, de);
+ break;
+ }
+ }
+ remove_proc_entry(dev_id, dev->controller->proc_entry);
+ }
+ }
+}
+
+void i2o_proc_dev_del(struct i2o_controller *c, struct i2o_device *d)
+{
+#ifdef DRIVERDEBUG
+ printk(KERN_INFO "Deleting device %d from iop%d\n",
+ d->lct_data.tid, c->unit);
+#endif
+
+ i2o_proc_remove_device(d);
+}
+
+static int create_i2o_procfs(void)
+{
+ struct i2o_controller *pctrl = NULL;
+ int i;
+
+ i2o_proc_dir_root = proc_mkdir("i2o", 0);
+ if(!i2o_proc_dir_root)
+ return -1;
+
+ for(i = 0; i < MAX_I2O_CONTROLLERS; i++)
+ {
+ pctrl = i2o_find_controller(i);
+ if(pctrl)
+ {
+ i2o_proc_add_controller(pctrl, i2o_proc_dir_root);
+ i2o_unlock_controller(pctrl);
+ }
+ }
+
+ return 0;
+}
+
+static int __exit destroy_i2o_procfs(void)
+{
+ struct i2o_controller *pctrl = NULL;
+ int i;
+
+ for(i = 0; i < MAX_I2O_CONTROLLERS; i++)
+ {
+ pctrl = i2o_find_controller(i);
+ if(pctrl)
+ {
+ i2o_proc_remove_controller(pctrl, i2o_proc_dir_root);
+ i2o_unlock_controller(pctrl);
+ }
+ }
+
+ if(!atomic_read(&i2o_proc_dir_root->count))
+ remove_proc_entry("i2o", 0);
+ else
+ return -1;
+
+ return 0;
+}
+
+int __init i2o_proc_init(void)
+{
+ if (i2o_install_handler(&i2o_proc_handler) < 0)
+ {
+ printk(KERN_ERR "i2o_proc: Unable to install PROC handler.\n");
+ return 0;
+ }
+
+ if(create_i2o_procfs())
+ return -EBUSY;
+
+ return 0;
+}
+
+MODULE_AUTHOR("Deepak Saxena");
+MODULE_DESCRIPTION("I2O procfs Handler");
+
+static void __exit i2o_proc_exit(void)
+{
+ destroy_i2o_procfs();
+ i2o_remove_handler(&i2o_proc_handler);
+}
+
+#ifdef MODULE
+module_init(i2o_proc_init);
+#endif
+module_exit(i2o_proc_exit);
+
--- /dev/null
+/*
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2, or (at your option) any
+ * later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * Complications for I2O scsi
+ *
+ * o Each (bus,lun) is a logical device in I2O. We keep a map
+ * table. We spoof failed selection for unmapped units
+ * o Request sense buffers can come back for free.
+ * o Scatter gather is a bit dynamic. We have to investigate at
+ * setup time.
+ * o Some of our resources are dynamically shared. The i2o core
+ * needs a message reservation protocol to avoid swap v net
+ * deadlocking. We need to back off queue requests.
+ *
+ * In general the firmware wants to help. Where its help isn't useful
+ * for performance we just ignore it. It's not worth the code, in truth.
+ *
+ * Fixes:
+ * Steve Ralston : Scatter gather now works
+ *
+ * To Do
+ * 64bit cleanups
+ * Fix the resource management problems.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/string.h>
+#include <linux/ioport.h>
+#include <linux/sched.h>
+#include <linux/interrupt.h>
+#include <linux/timer.h>
+#include <linux/delay.h>
+#include <linux/proc_fs.h>
+#include <asm/dma.h>
+#include <asm/system.h>
+#include <asm/io.h>
+#include <asm/atomic.h>
+#include <linux/blk.h>
+#include <linux/version.h>
+#include <linux/i2o.h>
+#include "../scsi/scsi.h"
+#include "../scsi/hosts.h"
+#include "../scsi/sd.h"
+#include "i2o_scsi.h"
+
+#define VERSION_STRING "Version 0.0.1"
+
+#define dprintk(x)
+
+#define MAXHOSTS 32
+
+struct i2o_scsi_host
+{
+ struct i2o_controller *controller;
+ s16 task[16][8]; /* Allow 16 devices for now */
+ unsigned long tagclock[16][8]; /* Tag clock for queueing */
+ s16 bus_task; /* The adapter TID */
+};
+
+static int scsi_context;
+static int lun_done;
+static int i2o_scsi_hosts;
+
+static u32 *retry[32];
+static struct i2o_controller *retry_ctrl[32];
+static struct timer_list retry_timer;
+static int retry_ct = 0;
+
+static atomic_t queue_depth;
+
+/*
+ * SG Chain buffer support...
+ */
+
+#define SG_MAX_FRAGS 64
+
+/*
+ * FIXME: we should allocate one of these per bus we find as we
+ * locate them not in a lump at boot.
+ */
+
+typedef struct _chain_buf
+{
+ u32 sg_flags_cnt[SG_MAX_FRAGS];
+ u32 sg_buf[SG_MAX_FRAGS];
+} chain_buf;
+
+#define SG_CHAIN_BUF_SZ sizeof(chain_buf)
+
+#define SG_MAX_BUFS (i2o_num_controllers * I2O_SCSI_CAN_QUEUE)
+#define SG_CHAIN_POOL_SZ (SG_MAX_BUFS * SG_CHAIN_BUF_SZ)
+
+static int max_sg_len = 0;
+static chain_buf *sg_chain_pool = NULL;
+static int sg_chain_tag = 0;
+static int sg_max_frags = SG_MAX_FRAGS;
+
+/*
+ * Retry congested frames. This actually needs pushing down into
+ * i2o core. We should only bother the OSM with this when we can't
+ * queue and retry the frame. Or perhaps we should call the OSM
+ * and its default handler should be this in the core, and this
+ * call a 2nd "I give up" handler in the OSM ?
+ */
+
+static void i2o_retry_run(unsigned long f)
+{
+ int i;
+ unsigned long flags;
+
+ save_flags(flags);
+ cli();
+
+ for(i=0;i<retry_ct;i++)
+ i2o_post_message(retry_ctrl[i], virt_to_bus(retry[i]));
+ retry_ct=0;
+
+ restore_flags(flags);
+}
+
+static void flush_pending(void)
+{
+ int i;
+ unsigned long flags;
+
+ save_flags(flags);
+ cli();
+
+ for(i=0;i<retry_ct;i++)
+ {
+ retry[i][0]&=~0xFFFFFF;
+ retry[i][0]|=I2O_CMD_UTIL_NOP<<24;
+ i2o_post_message(retry_ctrl[i],virt_to_bus(retry[i]));
+ }
+ retry_ct=0;
+
+ restore_flags(flags);
+}
+
+static void i2o_scsi_reply(struct i2o_handler *h, struct i2o_controller *c, struct i2o_message *msg)
+{
+ Scsi_Cmnd *current_command;
+ u32 *m = (u32 *)msg;
+ u8 as,ds,st;
+
+ if(m[0] & (1<<13))
+ {
+ printk("IOP fail.\n");
+ printk("From %d To %d Cmd %d.\n",
+ (m[1]>>12)&0xFFF,
+ m[1]&0xFFF,
+ m[1]>>24);
+ printk("Failure Code %d.\n", m[4]>>24);
+ if(m[4]&(1<<16))
+ printk("Format error.\n");
+ if(m[4]&(1<<17))
+ printk("Path error.\n");
+ if(m[4]&(1<<18))
+ printk("Path State.\n");
+ if(m[4]&(1<<18))
+ printk("Congestion.\n");
+
+ m=(u32 *)bus_to_virt(m[7]);
+ printk("Failing message is %p.\n", m);
+
+ if((m[4]&(1<<18)) && retry_ct < 32)
+ {
+ retry_ctrl[retry_ct]=c;
+ retry[retry_ct]=m;
+ if(!retry_ct++)
+ {
+ retry_timer.expires=jiffies+1;
+ add_timer(&retry_timer);
+ }
+ }
+ else
+ {
+ /* Create a scsi error for this */
+ current_command = (Scsi_Cmnd *)m[3];
+ printk("Aborted %ld\n", current_command->serial_number);
+
+ spin_lock_irq(&io_request_lock);
+ current_command->result = DID_ERROR << 16;
+ current_command->scsi_done(current_command);
+ spin_unlock_irq(&io_request_lock);
+
+ /* Now flush the message by making it a NOP */
+ m[0]&=0x00FFFFFF;
+ m[0]|=(I2O_CMD_UTIL_NOP)<<24;
+ i2o_post_message(c,virt_to_bus(m));
+ }
+ return;
+ }
+
+
+ /*
+ * Low byte is device status, next is adapter status,
+ * (then one byte reserved), then request status.
+ */
+ ds=(u8)m[4];
+ as=(u8)(m[4]>>8);
+ st=(u8)(m[4]>>24);
+
+ dprintk(("i2o got a scsi reply %08X: ", m[0]));
+ dprintk(("m[2]=%08X: ", m[2]));
+ dprintk(("m[4]=%08X\n", m[4]));
+
+ if(m[2]&0x80000000)
+ {
+ if(m[2]&0x40000000)
+ {
+ dprintk(("Event.\n"));
+ lun_done=1;
+ return;
+ }
+ printk(KERN_ERR "i2o_scsi: bus reset reply.\n");
+ return;
+ }
+
+ current_command = (Scsi_Cmnd *)m[3];
+
+ /*
+ * Is this a control request coming back - eg an abort ?
+ */
+
+ if(current_command==NULL)
+ {
+ if(st)
+ dprintk(("SCSI abort: %08X", m[4]));
+ dprintk(("SCSI abort completed.\n"));
+ return;
+ }
+
+ dprintk(("Completed %ld\n", current_command->serial_number));
+
+ atomic_dec(&queue_depth);
+
+ if(st == 0x06)
+ {
+ if(m[5] < current_command->underflow)
+ {
+ int i;
+ printk(KERN_ERR "SCSI: underflow 0x%08X 0x%08X\n",
+ m[5], current_command->underflow);
+ printk("Cmd: ");
+ for(i=0;i<15;i++)
+ printk("%02X ", current_command->cmnd[i]);
+ printk(".\n");
+ }
+ else st=0;
+ }
+
+ if(st)
+ {
+ /* An error has occurred */
+
+ dprintk((KERN_DEBUG "SCSI error %08X", m[4]));
+
+ if (as == 0x0E)
+ /* SCSI Reset */
+ current_command->result = DID_RESET << 16;
+ else if (as == 0x0F)
+ current_command->result = DID_PARITY << 16;
+ else
+ current_command->result = DID_ERROR << 16;
+ }
+ else
+ /*
+ * It worked, maybe?
+ */
+ current_command->result = DID_OK << 16 | ds;
+ spin_lock(&io_request_lock);
+ current_command->scsi_done(current_command);
+ spin_unlock(&io_request_lock);
+ return;
+}
+
+struct i2o_handler i2o_scsi_handler=
+{
+ i2o_scsi_reply,
+ NULL,
+ NULL,
+ NULL,
+ "I2O SCSI OSM",
+ 0,
+ I2O_CLASS_SCSI_PERIPHERAL
+};
+
+static int i2o_find_lun(struct i2o_controller *c, struct i2o_device *d, int *target, int *lun)
+{
+ u8 reply[8];
+
+ if(i2o_query_scalar(c, d->lct_data.tid, 0, 3, reply, 4)<0)
+ return -1;
+
+ *target=reply[0];
+
+ if(i2o_query_scalar(c, d->lct_data.tid, 0, 4, reply, 8)<0)
+ return -1;
+
+ *lun=reply[1];
+
+ dprintk(("SCSI (%d,%d)\n", *target, *lun));
+ return 0;
+}
+
+void i2o_scsi_init(struct i2o_controller *c, struct i2o_device *d, struct Scsi_Host *shpnt)
+{
+ struct i2o_device *unit;
+ struct i2o_scsi_host *h =(struct i2o_scsi_host *)shpnt->hostdata;
+ int lun;
+ int target;
+
+ h->controller=c;
+ h->bus_task=d->lct_data.tid;
+
+ for(target=0;target<16;target++)
+ for(lun=0;lun<8;lun++)
+ h->task[target][lun] = -1;
+
+ for(unit=c->devices;unit!=NULL;unit=unit->next)
+ {
+ dprintk(("Class %03X, parent %d, want %d.\n",
+ unit->lct_data.class_id, unit->lct_data.parent_tid, d->lct_data.tid));
+
+ /* Only look at scsi and fc devices */
+ if ( (unit->lct_data.class_id != I2O_CLASS_SCSI_PERIPHERAL)
+ && (unit->lct_data.class_id != I2O_CLASS_FIBRE_CHANNEL_PERIPHERAL)
+ )
+ continue;
+
+ /* On our bus ? */
+ dprintk(("Found a disk (%d).\n", unit->lct_data.tid));
+ if ((unit->lct_data.parent_tid == d->lct_data.tid)
+ || (unit->lct_data.parent_tid == d->lct_data.parent_tid)
+ )
+ {
+ u16 limit;
+ dprintk(("It's ours.\n"));
+ if(i2o_find_lun(c, unit, &target, &lun)==-1)
+ {
+ printk(KERN_ERR "i2o_scsi: Unable to get lun for tid %d.\n", unit->lct_data.tid);
+ continue;
+ }
+ dprintk(("Found disk %d %d.\n", target, lun));
+ h->task[target][lun]=unit->lct_data.tid;
+ h->tagclock[target][lun]=jiffies;
+
+ /* Get the max fragments/request */
+ i2o_query_scalar(c, d->lct_data.tid, 0xF103, 3, &limit, 2);
+
+ /* sanity */
+ if ( limit == 0 )
+ {
+ printk(KERN_WARNING "i2o_scsi: Ignoring unreasonable SG limit of 0 from IOP!\n");
+ limit = 1;
+ }
+
+ shpnt->sg_tablesize = limit;
+
+ dprintk(("i2o_scsi: set scatter-gather to %d.\n",
+ shpnt->sg_tablesize));
+ }
+ }
+}
+
+int i2o_scsi_detect(Scsi_Host_Template * tpnt)
+{
+ unsigned long flags;
+ struct Scsi_Host *shpnt = NULL;
+ int i;
+ int count;
+
+ printk("i2o_scsi.c: %s\n", VERSION_STRING);
+
+ if(i2o_install_handler(&i2o_scsi_handler)<0)
+ {
+ printk(KERN_ERR "i2o_scsi: Unable to install OSM handler.\n");
+ return 0;
+ }
+ scsi_context = i2o_scsi_handler.context;
+
+ if((sg_chain_pool = kmalloc(SG_CHAIN_POOL_SZ, GFP_KERNEL)) == NULL)
+ {
+ printk("i2o_scsi: Unable to alloc %d byte SG chain buffer pool.\n", SG_CHAIN_POOL_SZ);
+ printk("i2o_scsi: SG chaining DISABLED!\n");
+ sg_max_frags = 11;
+ }
+ else
+ {
+ printk(" chain_pool: %d bytes @ %p\n", SG_CHAIN_POOL_SZ, sg_chain_pool);
+ printk(" (%d byte buffers X %d can_queue X %d i2o controllers)\n",
+ SG_CHAIN_BUF_SZ, I2O_SCSI_CAN_QUEUE, i2o_num_controllers);
+ sg_max_frags = SG_MAX_FRAGS; // 64
+ }
+
+ init_timer(&retry_timer);
+ retry_timer.data = 0UL;
+ retry_timer.function = i2o_retry_run;
+
+// printk("SCSI OSM at %d.\n", scsi_context);
+
+ for (count = 0, i = 0; i < MAX_I2O_CONTROLLERS; i++)
+ {
+ struct i2o_controller *c=i2o_find_controller(i);
+ struct i2o_device *d;
+ /*
+ * This controller doesn't exist.
+ */
+
+ if(c==NULL)
+ continue;
+
+ /*
+ * Fixme - we need some altered device locking. This
+ * is racing with device addition in theory. Easy to fix.
+ */
+
+ for(d=c->devices;d!=NULL;d=d->next)
+ {
+ /*
+ * bus_adapter, SCSI (obsolete), or FibreChannel busses only
+ */
+ if( (d->lct_data.class_id!=I2O_CLASS_BUS_ADAPTER_PORT) // bus_adapter
+// && (d->lct_data.class_id!=I2O_CLASS_FIBRE_CHANNEL_PORT) // FC_PORT
+ )
+ continue;
+
+ shpnt = scsi_register(tpnt, sizeof(struct i2o_scsi_host));
+ if(shpnt==NULL)
+ continue;
+ save_flags(flags);
+ cli();
+ shpnt->unique_id = (u32)d;
+ shpnt->io_port = 0;
+ shpnt->n_io_port = 0;
+ shpnt->irq = 0;
+ shpnt->this_id = /* Good question */15;
+ restore_flags(flags);
+ i2o_scsi_init(c, d, shpnt);
+ count++;
+ }
+ }
+ i2o_scsi_hosts = count;
+
+ if(count==0)
+ {
+ if(sg_chain_pool!=NULL)
+ {
+ kfree(sg_chain_pool);
+ sg_chain_pool = NULL;
+ }
+ flush_pending();
+ del_timer(&retry_timer);
+ i2o_remove_handler(&i2o_scsi_handler);
+ }
+
+ return count;
+}
+
+int i2o_scsi_release(struct Scsi_Host *host)
+{
+ if(--i2o_scsi_hosts==0)
+ {
+ if(sg_chain_pool!=NULL)
+ {
+ kfree(sg_chain_pool);
+ sg_chain_pool = NULL;
+ }
+ flush_pending();
+ del_timer(&retry_timer);
+ i2o_remove_handler(&i2o_scsi_handler);
+ }
+ return 0;
+}
+
+
+const char *i2o_scsi_info(struct Scsi_Host *SChost)
+{
+ struct i2o_scsi_host *hostdata;
+
+ hostdata = (struct i2o_scsi_host *)SChost->hostdata;
+
+ return(&hostdata->controller->name[0]);
+}
+
+
+/*
+ * From the wd93 driver:
+ * Returns true if there will be a DATA_OUT phase with this command,
+ * false otherwise.
+ * (Thanks to Joerg Dorchain for the research and suggestion.)
+ *
+ */
+static int is_dir_out(Scsi_Cmnd *cmd)
+{
+ switch (cmd->cmnd[0])
+ {
+ case WRITE_6: case WRITE_10: case WRITE_12:
+ case WRITE_LONG: case WRITE_SAME: case WRITE_BUFFER:
+ case WRITE_VERIFY: case WRITE_VERIFY_12:
+ case COMPARE: case COPY: case COPY_VERIFY:
+ case SEARCH_EQUAL: case SEARCH_HIGH: case SEARCH_LOW:
+ case SEARCH_EQUAL_12: case SEARCH_HIGH_12: case SEARCH_LOW_12:
+ case FORMAT_UNIT: case REASSIGN_BLOCKS: case RESERVE:
+ case MODE_SELECT: case MODE_SELECT_10: case LOG_SELECT:
+ case SEND_DIAGNOSTIC: case CHANGE_DEFINITION: case UPDATE_BLOCK:
+ case SET_WINDOW: case MEDIUM_SCAN: case SEND_VOLUME_TAG:
+ case 0xea:
+ return 1;
+ default:
+ return 0;
+ }
+}
+
+int i2o_scsi_queuecommand(Scsi_Cmnd * SCpnt, void (*done) (Scsi_Cmnd *))
+{
+ int i;
+ int tid;
+ struct i2o_controller *c;
+ Scsi_Cmnd *current_command;
+ struct Scsi_Host *host;
+ struct i2o_scsi_host *hostdata;
+ u32 *msg, *mptr;
+ u32 m;
+ u32 *lenptr;
+ int direction;
+ int scsidir;
+ u32 len;
+ u32 reqlen;
+ u32 tag;
+
+ static int max_qd = 1;
+
+ /*
+ * Do the incoming paperwork
+ */
+
+ host = SCpnt->host;
+ hostdata = (struct i2o_scsi_host *)host->hostdata;
+ SCpnt->scsi_done = done;
+
+ if(SCpnt->target > 15)
+ {
+ printk(KERN_ERR "i2o_scsi: Wild target %d.\n", SCpnt->target);
+ return -1;
+ }
+
+ tid = hostdata->task[SCpnt->target][SCpnt->lun];
+
+ dprintk(("qcmd: Tid = %d\n", tid));
+
+ current_command = SCpnt; /* set current command */
+ current_command->scsi_done = done; /* set ptr to done function */
+
+ /* We don't have such a device. Pretend we did the command
+ and that selection timed out */
+
+ if(tid == -1)
+ {
+ SCpnt->result = DID_NO_CONNECT << 16;
+ done(SCpnt);
+ return 0;
+ }
+
+ dprintk(("Real scsi messages.\n"));
+
+ c = hostdata->controller;
+
+ /*
+ * Obtain an I2O message. Right now we _have_ to obtain one
+ * until the scsi layer stuff is cleaned up.
+ */
+
+ do
+ {
+ mb();
+ m = I2O_POST_READ32(c);
+ }
+ while(m==0xFFFFFFFF);
+ msg = (u32 *)(c->mem_offset + m);
+
+ /*
+ * Put together a scsi execscb message
+ */
+
+ len = SCpnt->request_bufflen;
+ direction = 0x00000000; // SGL IN (osm<--iop)
+
+ /*
+ * The scsi layer should be handling this stuff
+ */
+
+ scsidir = 0x00000000; // DATA NO XFER
+ if(len)
+ {
+ if(is_dir_out(SCpnt))
+ {
+ direction=0x04000000; // SGL OUT (osm-->iop)
+ scsidir =0x80000000; // DATA OUT (iop-->dev)
+ }
+ else
+ {
+ scsidir =0x40000000; // DATA IN (iop<--dev)
+ }
+ }
+
+ __raw_writel(I2O_CMD_SCSI_EXEC<<24|HOST_TID<<12|tid, &msg[1]);
+ __raw_writel(scsi_context, &msg[2]); /* So the I2O layer passes to us */
+ /* Sorry 64bit folks. FIXME */
+ __raw_writel((u32)SCpnt, &msg[3]); /* We want the SCSI control block back */
+
+ /* LSI_920_PCI_QUIRK
+ *
+ * Intermittent msg frame word data corruption observed on
+ * msg[4] after WRITE and READ-MODIFY-WRITE operations.
+ * 19990606 -sralston
+ *
+ * (Hence we build this word via tag. It's good practice anyway;
+ * we don't want needless fetches over PCI.)
+ */
+
+ tag=0;
+
+ /*
+ * Attach tags to the devices
+ */
+ if(SCpnt->device->tagged_supported)
+ {
+ /*
+ * Some drives are too stupid to handle fairness issues
+ * with tagged queueing. We throw in the odd ordered
+ * tag to stop them starving themselves.
+ */
+ if((jiffies - hostdata->tagclock[SCpnt->target][SCpnt->lun]) > (5*HZ))
+ {
+ tag=0x01800000; /* ORDERED! */
+ hostdata->tagclock[SCpnt->target][SCpnt->lun]=jiffies;
+ }
+ else
+ {
+ /* Hmmm... I always see a value of 0 here, which is
+ * none of {HEAD_OF, ORDERED, SIMPLE}. -sralston
+ */
+ if(SCpnt->tag == HEAD_OF_QUEUE_TAG)
+ tag=0x01000000;
+ else if(SCpnt->tag == ORDERED_QUEUE_TAG)
+ tag=0x01800000;
+ }
+ }
+
+ /* Direction, disconnect ok, tag, CDBLen */
+ __raw_writel(scsidir|0x20000000|SCpnt->cmd_len|tag, &msg[4]);
+
+ mptr=msg+5;
+
+ /*
+ * Write SCSI command into the message - always 16 byte block
+ */
+
+ memcpy_toio(mptr, SCpnt->cmnd, 16);
+ mptr+=4;
+ lenptr=mptr++; /* Remember me - fill in when we know */
+
+ reqlen = 12; // SINGLE SGE
+
+ /*
+ * Now fill in the SGList and command
+ *
+ * FIXME: we need to set the sglist limits according to the
+ * message size of the I2O controller. We might only have room
+ * for 6 or so worst case
+ */
+
+ if(SCpnt->use_sg)
+ {
+ struct scatterlist *sg = (struct scatterlist *)SCpnt->request_buffer;
+ int chain = 0;
+
+ len = 0;
+
+ if((sg_max_frags > 11) && (SCpnt->use_sg > 11))
+ {
+ chain = 1;
+ /*
+ * Need to chain!
+ */
+ __raw_writel(direction|0xB0000000|(SCpnt->use_sg*2*4), mptr++);
+ __raw_writel(virt_to_bus(sg_chain_pool + sg_chain_tag), mptr);
+ mptr = (u32*)(sg_chain_pool + sg_chain_tag);
+ if (SCpnt->use_sg > max_sg_len)
+ {
+ max_sg_len = SCpnt->use_sg;
+ printk("i2o_scsi: Chain SG! SCpnt=%p, SG_FragCnt=%d, SG_idx=%d\n",
+ SCpnt, SCpnt->use_sg, sg_chain_tag);
+ }
+ if ( ++sg_chain_tag == SG_MAX_BUFS )
+ sg_chain_tag = 0;
+ for(i = 0 ; i < SCpnt->use_sg; i++)
+ {
+ *mptr++=direction|0x10000000|sg->length;
+ len+=sg->length;
+ *mptr++=virt_to_bus(sg->address);
+ sg++;
+ }
+ mptr[-2]=direction|0xD0000000|(sg-1)->length;
+ }
+ else
+ {
+ for(i = 0 ; i < SCpnt->use_sg; i++)
+ {
+ __raw_writel(direction|0x10000000|sg->length, mptr++);
+ len+=sg->length;
+ __raw_writel(virt_to_bus(sg->address), mptr++);
+ sg++;
+ }
+
+ /* Mark the end of the list. Again, avoid the 920 bug and
+ unwanted PCI read traffic */
+
+ __raw_writel(direction|0xD0000000|(sg-1)->length, &mptr[-2]);
+ }
+
+ if(!chain)
+ reqlen = mptr - msg;
+
+ __raw_writel(len, lenptr);
+
+ if(len != SCpnt->underflow)
+ printk("Cmd len %08X Cmd underflow %08X\n",
+ len, SCpnt->underflow);
+ }
+ else
+ {
+ dprintk(("non sg for %p, %d\n", SCpnt->request_buffer,
+ SCpnt->request_bufflen));
+ __raw_writel(len = SCpnt->request_bufflen, lenptr);
+ if(len == 0)
+ {
+ reqlen = 9;
+ }
+ else
+ {
+ __raw_writel(0xD0000000|direction|SCpnt->request_bufflen, mptr++);
+ __raw_writel(virt_to_bus(SCpnt->request_buffer), mptr++);
+ }
+ }
+
+ /*
+ * Stick the headers on
+ */
+
+ __raw_writel(reqlen<<16 | SGL_OFFSET_10, msg);
+
+ /* Queue the message */
+ i2o_post_message(c,m);
+
+ atomic_inc(&queue_depth);
+
+ if(atomic_read(&queue_depth)> max_qd)
+ {
+ max_qd=atomic_read(&queue_depth);
+ printk("Queue depth now %d.\n", max_qd);
+ }
+
+ mb();
+ dprintk(("Issued %ld\n", current_command->serial_number));
+
+ return 0;
+}
+
+static void internal_done(Scsi_Cmnd * SCpnt)
+{
+ SCpnt->SCp.Status++;
+}
+
+int i2o_scsi_command(Scsi_Cmnd * SCpnt)
+{
+ i2o_scsi_queuecommand(SCpnt, internal_done);
+ SCpnt->SCp.Status = 0;
+ while (!SCpnt->SCp.Status)
+ barrier();
+ return SCpnt->result;
+}
+
+int i2o_scsi_abort(Scsi_Cmnd * SCpnt)
+{
+ struct i2o_controller *c;
+ struct Scsi_Host *host;
+ struct i2o_scsi_host *hostdata;
+ u32 *msg;
+ u32 m;
+ int tid;
+
+ printk("i2o_scsi: Aborting command block.\n");
+
+ host = SCpnt->host;
+ hostdata = (struct i2o_scsi_host *)host->hostdata;
+ tid = hostdata->task[SCpnt->target][SCpnt->lun];
+ if(tid==-1)
+ {
+ printk(KERN_ERR "i2o_scsi: impossible command to abort.\n");
+ return SCSI_ABORT_NOT_RUNNING;
+ }
+ c = hostdata->controller;
+
+ /*
+ * Obtain an I2O message. Right now we _have_ to obtain one
+ * until the scsi layer stuff is cleaned up.
+ */
+
+ do
+ {
+ mb();
+ m = I2O_POST_READ32(c);
+ }
+ while(m==0xFFFFFFFF);
+ msg = (u32 *)(c->mem_offset + m);
+
+ __raw_writel(FIVE_WORD_MSG_SIZE, &msg[0]);
+ __raw_writel(I2O_CMD_SCSI_ABORT<<24|HOST_TID<<12|tid, &msg[1]);
+ __raw_writel(scsi_context, &msg[2]);
+ __raw_writel(0, &msg[3]); /* Not needed for an abort */
+ __raw_writel((u32)SCpnt, &msg[4]);
+ wmb();
+ i2o_post_message(c,m);
+ wmb();
+ return SCSI_ABORT_PENDING;
+}
+
+int i2o_scsi_reset(Scsi_Cmnd * SCpnt, unsigned int reset_flags)
+{
+ int tid;
+ struct i2o_controller *c;
+ struct Scsi_Host *host;
+ struct i2o_scsi_host *hostdata;
+ u32 m;
+ u32 *msg;
+
+ /*
+ * Find the TID for the bus
+ */
+
+ printk("i2o_scsi: Attempting to reset the bus.\n");
+
+ host = SCpnt->host;
+ hostdata = (struct i2o_scsi_host *)host->hostdata;
+ tid = hostdata->bus_task;
+ c = hostdata->controller;
+
+ /*
+ * Now send a SCSI reset request. Any remaining commands
+ * will be aborted by the IOP. Do we possibly need to
+ * catch the reply?
+ */
+
+ m = I2O_POST_READ32(c);
+
+ /*
+ * No free messages, try again next time - no big deal
+ */
+
+ if(m == 0xFFFFFFFF)
+ return SCSI_RESET_PUNT;
+
+ msg = (u32 *)(c->mem_offset + m);
+ __raw_writel(FOUR_WORD_MSG_SIZE|SGL_OFFSET_0, &msg[0]);
+ __raw_writel(I2O_CMD_SCSI_BUSRESET<<24|HOST_TID<<12|tid, &msg[1]);
+ __raw_writel(scsi_context|0x80000000, &msg[2]);
+ /* We use the top bit to split controller and unit transactions */
+ /* Now store unit,tid so we can tie the completion back to a specific device */
+ __raw_writel(c->unit << 16 | tid, &msg[3]);
+ wmb();
+ i2o_post_message(c,m);
+ return SCSI_RESET_PENDING;
+}
+
+/*
+ * This is anyone's guess, quite frankly.
+ */
+
+int i2o_scsi_bios_param(Disk * disk, kdev_t dev, int *ip)
+{
+ int size;
+
+ size = disk->capacity;
+ ip[0] = 64; /* heads */
+ ip[1] = 32; /* sectors */
+ if ((ip[2] = size >> 11) > 1024) { /* cylinders, test for big disk */
+ ip[0] = 255; /* heads */
+ ip[1] = 63; /* sectors */
+ ip[2] = size / (255 * 63); /* cylinders */
+ }
+ return 0;
+}
+
+MODULE_AUTHOR("Red Hat Software");
+
+static Scsi_Host_Template driver_template = I2OSCSI;
+
+#include "../scsi/scsi_module.c"
--- /dev/null
+#ifndef _I2O_SCSI_H
+#define _I2O_SCSI_H
+
+#if !defined(LINUX_VERSION_CODE)
+#include <linux/version.h>
+#endif
+
+#define LinuxVersionCode(v, p, s) (((v)<<16)+((p)<<8)+(s))
+
+#include <linux/types.h>
+#include <linux/kdev_t.h>
+
+#define I2O_SCSI_ID 15
+#define I2O_SCSI_CAN_QUEUE 4
+#define I2O_SCSI_CMD_PER_LUN 6
+
+extern int i2o_scsi_detect(Scsi_Host_Template *);
+extern const char *i2o_scsi_info(struct Scsi_Host *);
+extern int i2o_scsi_command(Scsi_Cmnd *);
+extern int i2o_scsi_queuecommand(Scsi_Cmnd *, void (*done)(Scsi_Cmnd *));
+extern int i2o_scsi_abort(Scsi_Cmnd *);
+extern int i2o_scsi_reset(Scsi_Cmnd *, unsigned int);
+extern int i2o_scsi_bios_param(Disk *, kdev_t, int *);
+extern void i2o_scsi_setup(char *str, int *ints);
+extern int i2o_scsi_release(struct Scsi_Host *host);
+
+#define I2OSCSI { \
+ next: NULL, \
+ proc_name: "i2o_scsi", \
+ name: "I2O SCSI Layer", \
+ detect: i2o_scsi_detect, \
+ release: i2o_scsi_release, \
+ info: i2o_scsi_info, \
+ command: i2o_scsi_command, \
+ queuecommand: i2o_scsi_queuecommand, \
+ abort: i2o_scsi_abort, \
+ reset: i2o_scsi_reset, \
+ bios_param: i2o_scsi_bios_param, \
+ can_queue: I2O_SCSI_CAN_QUEUE, \
+ this_id: I2O_SCSI_ID, \
+ sg_tablesize: 8, \
+ cmd_per_lun: I2O_SCSI_CMD_PER_LUN, \
+ unchecked_isa_dma: 0, \
+ use_clustering: ENABLE_CLUSTERING \
+ }
+
+#endif
put_user(ftl_hd[minor].start_sect, (u_long *)&geo->start);
break;
case BLKGETSIZE:
- ret = put_user(ftl_hd[minor].nr_sects, (long *)arg);
+ ret = put_user(ftl_hd[minor].nr_sects, (unsigned long *)arg);
break;
case BLKGETSIZE64:
ret = put_user((u64)ftl_hd[minor].nr_sects << 9, (u64 *)arg);
switch (cmd) {
case BLKGETSIZE: /* Return device size */
- return put_user((mtdblk->mtd->size >> 9), (long *) arg);
+ return put_user((mtdblk->mtd->size >> 9), (unsigned long *) arg);
case BLKGETSIZE64:
return put_user((u64)mtdblk->mtd->size, (u64 *)arg);
switch (cmd) {
case BLKGETSIZE: /* Return device size */
- return put_user((mtd->size >> 9), (long *) arg);
+ return put_user((mtd->size >> 9), (unsigned long *) arg);
case BLKGETSIZE64:
return put_user((u64)mtd->size, (u64 *)arg);
}
case BLKGETSIZE: /* Return device size */
return put_user(part_table[MINOR(inode->i_rdev)].nr_sects,
- (long *) arg);
+ (unsigned long *) arg);
case BLKGETSIZE64:
return put_user((u64)part_table[MINOR(inode->i_rdev)].nr_sects << 9,
(u64 *)arg);
RX_RING_SIZE * sizeof(struct rx_desc) +
TX_RING_SIZE * sizeof(struct tx_desc),
np->rx_ring, np->rx_ring_dma);
+ np->tx_ring = NULL;
if (np->tx_bufs)
pci_free_consistent(np->pdev, PKT_BUF_SZ * TX_RING_SIZE,
iounmap((char *)(dev->base_addr));
#endif
- pci_free_consistent(pdev,
- RX_RING_SIZE * sizeof(struct rx_desc) +
- TX_RING_SIZE * sizeof(struct tx_desc),
- np->rx_ring, np->rx_ring_dma);
-
kfree(dev);
pci_set_drvdata(pdev, NULL);
break;
}
case BLKGETSIZE:{ /* Return device size */
- long blocks = major_info->gendisk.sizes
- [MINOR (inp->i_rdev)] << 1;
- rc = put_user(blocks, (long *) data);
+ unsigned long blocks = major_info->gendisk.sizes
+ [MINOR (inp->i_rdev)] << 1;
+ rc = put_user(blocks, (unsigned long *) data);
break;
}
case BLKGETSIZE64:{
int xpram_devs, xpram_rahead;
int xpram_blksize, xpram_hardsect;
int xpram_mem_avail = 0;
-int xpram_sizes[XPRAM_MAX_DEVS];
+unsigned long xpram_sizes[XPRAM_MAX_DEVS];
MODULE_PARM(devs,"i");
/* Return the device size, expressed in sectors */
return put_user( 1024* xpram_sizes[MINOR(inode->i_rdev)]
/ XPRAM_SOFTSECT,
- (long *) arg);
+ (unsigned long *) arg);
case BLKGETSIZE64:
return put_user( (u64)(1024* xpram_sizes[MINOR(inode->i_rdev)]
switch (cmd) {
case BLKGETSIZE:
- return put_user(jsfd_bytesizes[dev] >> 9, (long *) arg);
+ return put_user(jsfd_bytesizes[dev] >> 9, (unsigned long *) arg);
case BLKGETSIZE64:
return put_user(jsfd_bytesizes[dev], (u64 *) arg);
return 0;
}
case BLKGETSIZE: /* Return device size */
- return put_user(sd[SD_PARTITION(inode->i_rdev)].nr_sects, (long *) arg);
+ return put_user(sd[SD_PARTITION(inode->i_rdev)].nr_sects, (unsigned long *) arg);
case BLKGETSIZE64:
return put_user((u64)sd[SD_PARTITION(inode->i_rdev)].nr_sects << 9, (u64 *)arg);
switch (cmd) {
case BLKGETSIZE:
- return put_user(scsi_CDs[target].capacity, (long *) arg);
+ return put_user(scsi_CDs[target].capacity, (unsigned long *) arg);
case BLKGETSIZE64:
return put_user((u64)scsi_CDs[target].capacity << 9, (u64 *)arg);
case BLKROSET:
if [ "$CONFIG_FB_ATY" != "n" ]; then
bool ' Mach64 GX support (EXPERIMENTAL)' CONFIG_FB_ATY_GX
bool ' Mach64 CT/VT/GT/LT (incl. 3D RAGE) support' CONFIG_FB_ATY_CT
- if [ "$CONFIG_FB_ATY_CT" = "y" ]; then
- bool ' Sony Vaio C1VE 1024x480 LCD support' CONFIG_FB_ATY_CT_VAIO_LCD
- fi
fi
tristate ' ATI Radeon display support (EXPERIMENTAL)' CONFIG_FB_RADEON
tristate ' ATI Rage128 display support (EXPERIMENTAL)' CONFIG_FB_ATY128
if [ "$CONFIG_NINO" = "y" ]; then
bool ' TMPTX3912/PR31700 frame buffer support' CONFIG_FB_TX3912
fi
- if [ "$CONFIG_DECSTATION" = "y" ]; then
- if [ "$CONFIG_TC" = "y" ]; then
- bool ' PMAG-BA TURBOchannel framebuffer support' CONFIG_FB_PMAG_BA
- bool ' PMAGB-B TURBOchannel framebuffer spport' CONFIG_FB_PMAGB_B
- bool ' Maxine (Personal DECstation) onboard framebuffer spport' CONFIG_FB_MAXINE
- fi
- fi
if [ "$CONFIG_EXPERIMENTAL" = "y" ]; then
tristate ' Virtual Frame Buffer support (ONLY FOR TESTING!)' CONFIG_FB_VIRTUAL
fi
unsigned long flags;
int i, x, y;
u8 *fd1, *fd2, *fd3, *fd4;
+ u16 c;
spin_lock_irqsave(&fb->lock, flags);
do {
i = sbus_readl(&fbc->s);
} while (i & 0x10000000);
- sbus_writel(attr_fgcol(p, scr_readw(s)), &fbc->fg);
- sbus_writel(attr_bgcol(p, scr_readw(s)), &fbc->bg);
+ c = scr_readw(s);
+ sbus_writel(attr_fgcol(p, c), &fbc->fg);
+ sbus_writel(attr_bgcol(p, c), &fbc->bg);
sbus_writel(0x140000, &fbc->mode);
sbus_writel(0xe880fc30, &fbc->alu);
sbus_writel(~0, &fbc->pixelm);
unsigned long flags;
int i, xy;
u8 *fd1, *fd2, *fd3, *fd4;
+ u16 c;
u64 fgbg;
spin_lock_irqsave(&fb->lock, flags);
- fgbg = (((u64)(((u32 *)p->dispsw_data)[attr_fgcol(p,scr_readw(s))])) << 32) |
- ((u32 *)p->dispsw_data)[attr_bgcol(p,scr_readw(s))];
+ c = scr_readw(s);
+ fgbg = (((u64)(((u32 *)p->dispsw_data)[attr_fgcol(p, c)])) << 32) |
+ ((u32 *)p->dispsw_data)[attr_bgcol(p, c)];
if (fgbg != *(u64 *)&fb->s.ffb.fg_cache) {
FFBFifo(fb, 2);
upa_writeq(fgbg, &fbc->fg);
underl = attr_underline(p,conp);
while (count--) {
- c = *s++;
+ c = scr_readw(s++);
dest = dest0++;
cdat = p->fontdata+c*p->fontheight;
for (rows = p->fontheight; rows--; dest += p->next_line) {
int fg0, bg0, fg, bg;
dest0 = p->screen_base+yy*fontheight(p)*p->next_line+xx;
- fg0 = attr_fgcol(p, scr_readw(s));
- bg0 = attr_bgcol(p, scr_readw(s));
+ c1 = scr_readw(s);
+ fg0 = attr_fgcol(p, c1);
+ bg0 = attr_bgcol(p, c1);
while (count--)
if (xx&3 || count < 3) { /* Slow version */
u32 eorx, fgx, bgx;
dest0 = p->screen_base + yy * fontheight(p) * bytes + xx * fontwidth(p) * 2;
- fgx = ((u16 *)p->dispsw_data)[attr_fgcol(p, scr_readw(s))];
- bgx = ((u16 *)p->dispsw_data)[attr_bgcol(p, scr_readw(s))];
+ c = scr_readw(s);
+ fgx = ((u16 *)p->dispsw_data)[attr_fgcol(p, c)];
+ bgx = ((u16 *)p->dispsw_data)[attr_bgcol(p, c)];
fgx |= (fgx << 16);
bgx |= (bgx << 16);
eorx = fgx ^ bgx;
u32 eorx, fgx, bgx;
dest0 = p->screen_base + yy * fontheight(p) * bytes + xx * 2;
- fgx=3/*attr_fgcol(p,scr_readw(s))*/;
- bgx=attr_bgcol(p,scr_readw(s));
+ c = scr_readw(s);
+ fgx = 3/*attr_fgcol(p, c)*/;
+ bgx = attr_bgcol(p, c);
fgx |= (fgx << 2);
fgx |= (fgx << 4);
bgx |= (bgx << 2);
u32 eorx, fgx, bgx, d1, d2, d3, d4;
dest0 = p->screen_base + yy * fontheight(p) * bytes + xx * fontwidth(p) * 3;
- fgx = ((u32 *)p->dispsw_data)[attr_fgcol(p, scr_readw(s))];
- bgx = ((u32 *)p->dispsw_data)[attr_bgcol(p, scr_readw(s))];
+ c = scr_readw(s);
+ fgx = ((u32 *)p->dispsw_data)[attr_fgcol(p, c)];
+ bgx = ((u32 *)p->dispsw_data)[attr_bgcol(p, c)];
eorx = fgx ^ bgx;
while (count--) {
c = scr_readw(s++) & p->charmask;
u32 eorx, fgx, bgx, *pt;
dest0 = p->screen_base + yy * fontheight(p) * bytes + xx * fontwidth(p) * 4;
- fgx = ((u32 *)p->dispsw_data)[attr_fgcol(p, scr_readw(s))];
- bgx = ((u32 *)p->dispsw_data)[attr_bgcol(p, scr_readw(s))];
+ c = scr_readw(s);
+ fgx = ((u32 *)p->dispsw_data)[attr_fgcol(p, c)];
+ bgx = ((u32 *)p->dispsw_data)[attr_bgcol(p, c)];
eorx = fgx ^ bgx;
while (count--) {
c = scr_readw(s++) & p->charmask;
u32 eorx, fgx, bgx;
dest0 = p->screen_base + yy * fontheight(p) * bytes + xx * 4;
- fgx=attr_fgcol(p,scr_readw(s));
- bgx=attr_bgcol(p,scr_readw(s));
+ c = scr_readw(s);
+ fgx = attr_fgcol(p, c);
+ bgx = attr_bgcol(p, c);
fgx |= (fgx << 4);
fgx |= (fgx << 8);
fgx |= (fgx << 16);
u32 eorx, fgx, bgx;
dest0 = p->screen_base + yy * fontheight(p) * bytes + xx * fontwidth(p);
- fgx=attr_fgcol(p,scr_readw(s));
- bgx=attr_bgcol(p,scr_readw(s));
+ c = scr_readw(s);
+ fgx = attr_fgcol(p, c);
+ bgx = attr_bgcol(p, c);
fgx |= (fgx << 8);
fgx |= (fgx << 16);
bgx |= (bgx << 8);
u8 d;
u16 c;
- bold = attr_bold(p,scr_readw(s));
- revs = attr_reverse(p,scr_readw(s));
- underl = attr_underline(p,scr_readw(s));
+ c = scr_readw(s);
+ bold = attr_bold(p, c);
+ revs = attr_reverse(p, c);
+ underl = attr_underline(p, c);
y0 = yy*fontheight(p);
while (count--) {
int fg0, bg0, fg, bg;
dest0 = p->screen_base+yy*fontheight(p)*p->next_line+xx;
- fg0 = attr_fgcol(p,scr_readw(s));
- bg0 = attr_bgcol(p,scr_readw(s));
+ c1 = scr_readw(s);
+ fg0 = attr_fgcol(p, c1);
+ bg0 = attr_bgcol(p, c1);
while (count--)
if (xx&3 || count < 3) { /* Slow version */
else
dest0 = (p->screen_base + yy * bytes * fontheight(p) +
(xx>>1)*4 + (xx & 1));
- fgx = expand2w(COLOR_2P(attr_fgcol(p,scr_readw(s))));
- bgx = expand2w(COLOR_2P(attr_bgcol(p,scr_readw(s))));
+ c = scr_readw(s);
+ fgx = expand2w(COLOR_2P(attr_fgcol(p, c)));
+ bgx = expand2w(COLOR_2P(attr_bgcol(p, c)));
eorx = fgx ^ bgx;
while (count--) {
else
dest0 = (p->screen_base + yy * bytes * fontheight(p) +
(xx>>1)*8 + (xx & 1));
- fgx = expand4l(attr_fgcol(p,scr_readw(s)));
- bgx = expand4l(attr_bgcol(p,scr_readw(s)));
+ c = scr_readw(s);
+ fgx = expand4l(attr_fgcol(p, c));
+ bgx = expand4l(attr_bgcol(p, c));
eorx = fgx ^ bgx;
while (count--) {
dest0 = (p->screen_base + yy * bytes * fontheight(p) +
(xx>>1)*16 + (xx & 1));
- expand8dl(attr_fgcol(p,scr_readw(s)), &fgx1, &fgx2);
- expand8dl(attr_bgcol(p,scr_readw(s)), &bgx1, &bgx2);
+ c = scr_readw(s);
+ expand8dl(attr_fgcol(p, c), &fgx1, &fgx2);
+ expand8dl(attr_bgcol(p, c), &bgx1, &bgx2);
eorx1 = fgx1 ^ bgx1; eorx2 = fgx2 ^ bgx2;
while (count--) {
u16 c;
dest0 = p->screen_base+yy*fontheight(p)*p->next_line+xx;
- bold = attr_bold(p,scr_readw(s));
- revs = attr_reverse(p,scr_readw(s));
- underl = attr_underline(p,scr_readw(s));
+ c = scr_readw(s);
+ bold = attr_bold(p, c);
+ revs = attr_reverse(p, c);
+ underl = attr_underline(p, c);
while (count--) {
c = scr_readw(s++) & p->charmask;
return;
}
- bold = attr_bold(p,scr_readw(s));
- revs = attr_reverse(p,scr_readw(s));
- underl = attr_underline(p,scr_readw(s));
+ c = scr_readw(s);
+ bold = attr_bold(p, c);
+ revs = attr_reverse(p, c);
+ underl = attr_underline(p, c);
while (count--) {
c = scr_readw(s++) & p->charmask;
void fbcon_ega_planes_putcs(struct vc_data *conp, struct display *p, const unsigned short *s,
int count, int yy, int xx)
{
- int fg = attr_fgcol(p,scr_readw(s));
- int bg = attr_bgcol(p,scr_readw(s));
+ u16 c = scr_readw(s);
+ int fg = attr_fgcol(p, c);
+ int bg = attr_bgcol(p, c);
char *where;
int n;
void fbcon_vga_planes_putcs(struct vc_data *conp, struct display *p, const unsigned short *s,
int count, int yy, int xx)
{
- int fg = attr_fgcol(p,*s);
- int bg = attr_bgcol(p,*s);
+ u16 c = scr_readw(s);
+ int fg = attr_fgcol(p, c);
+ int bg = attr_bgcol(p, c);
char *where;
int n;
wmb();
for (n = 0; n < count; n++) {
int y;
- int c = *s++ & p->charmask;
+ int c = scr_readw(s++) & p->charmask;
u8 *cdat = p->fontdata + (c & p->charmask) * fontheight(p);
for (y = 0; y < fontheight(p); y++, cdat++) {
scr_memsetw(save, conp->vc_video_erase_char, logo_lines * nr_cols * 2);
r = q - step;
for (cnt = 0; cnt < logo_lines; cnt++, r += i)
- scr_memcpyw_from(save + cnt * nr_cols, r, 2 * i);
+ scr_memcpyw(save + cnt * nr_cols, r, 2 * i);
r = q;
}
}
}
scr_memsetw((unsigned short *)conp->vc_origin,
conp->vc_video_erase_char,
- conp->vc_size_row * logo_lines);
+ conp->vc_size_row * logo_lines);
}
/*
static void fbcon_invert_region(struct vc_data *conp, u16 *p, int cnt)
{
while (cnt--) {
+ u16 a = scr_readw(p);
if (!conp->vc_can_do_color)
- *p++ ^= 0x0800;
- else if (conp->vc_hi_font_mask == 0x100) {
- u16 a = *p;
+ a ^= 0x0800;
+ else if (conp->vc_hi_font_mask == 0x100)
a = ((a) & 0x11ff) | (((a) & 0xe000) >> 4) | (((a) & 0x0e00) << 4);
- *p++ = a;
- } else {
- u16 a = *p;
+ else
a = ((a) & 0x88ff) | (((a) & 0x7000) >> 4) | (((a) & 0x0700) << 4);
- *p++ = a;
- }
+ scr_writew(a, p++);
if (p == (u16 *)softback_end)
p = (u16 *)softback_buf;
if (p == (u16 *)softback_in)
unsigned long flags;
int i, x, y;
u8 *fd1, *fd2, *fd3, *fd4;
+ u16 c;
u32 *u;
spin_lock_irqsave(&fb->lock, flags);
do {
i = sbus_readl(&us->csr);
} while (i & 0x20000000);
- sbus_writel(attr_fgcol(p,scr_readw(s)) << 24, &ss->fg);
- sbus_writel(attr_bgcol(p,scr_readw(s)) << 24, &ss->bg);
+ c = scr_readw(s);
+ sbus_writel(attr_fgcol(p, c) << 24, &ss->fg);
+ sbus_writel(attr_bgcol(p, c) << 24, &ss->bg);
sbus_writel(0xFFFFFFFF<<(32-fontwidth(p)), &us->fontmsk);
if (fontwidthlog(p))
x = (xx << fontwidthlog(p));
#ifdef FBCON_HAS_CFB8
static void matrox_cfb8_putcs(struct vc_data* conp, struct display* p, const unsigned short* s, int count, int yy, int xx) {
+ u_int16_t c;
u_int32_t fgx, bgx;
MINFO_FROM_DISP(p);
DBG_HEAVY("matroxfb_cfb8_putcs");
- fgx = attr_fgcol(p, scr_readw(s));
- bgx = attr_bgcol(p, scr_readw(s));
+ c = scr_readw(s);
+ fgx = attr_fgcol(p, c);
+ bgx = attr_bgcol(p, c);
fgx |= (fgx << 8);
fgx |= (fgx << 16);
bgx |= (bgx << 8);
#ifdef FBCON_HAS_CFB16
static void matrox_cfb16_putcs(struct vc_data* conp, struct display* p, const unsigned short* s, int count, int yy, int xx) {
+ u_int16_t c;
u_int32_t fgx, bgx;
MINFO_FROM_DISP(p);
DBG_HEAVY("matroxfb_cfb16_putcs");
- fgx = ((u_int16_t*)p->dispsw_data)[attr_fgcol(p, scr_readw(s))];
- bgx = ((u_int16_t*)p->dispsw_data)[attr_bgcol(p, scr_readw(s))];
+ c = scr_readw(s);
+ fgx = ((u_int16_t*)p->dispsw_data)[attr_fgcol(p, c)];
+ bgx = ((u_int16_t*)p->dispsw_data)[attr_bgcol(p, c)];
fgx |= (fgx << 16);
bgx |= (bgx << 16);
ACCESS_FBINFO(curr.putcs)(fgx, bgx, p, s, count, yy, xx);
#if defined(FBCON_HAS_CFB32) || defined(FBCON_HAS_CFB24)
static void matrox_cfb32_putcs(struct vc_data* conp, struct display* p, const unsigned short* s, int count, int yy, int xx) {
+ u_int16_t c;
u_int32_t fgx, bgx;
MINFO_FROM_DISP(p);
DBG_HEAVY("matroxfb_cfb32_putcs");
- fgx = ((u_int32_t*)p->dispsw_data)[attr_fgcol(p, scr_readw(s))];
- bgx = ((u_int32_t*)p->dispsw_data)[attr_bgcol(p, scr_readw(s))];
+ c = scr_readw(s);
+ fgx = ((u_int32_t*)p->dispsw_data)[attr_fgcol(p, c)];
+ bgx = ((u_int32_t*)p->dispsw_data)[attr_bgcol(p, c)];
ACCESS_FBINFO(curr.putcs)(fgx, bgx, p, s, count, yy, xx);
}
#endif
unsigned int offs;
unsigned int attr;
unsigned int step;
+ u_int16_t c;
CRITFLAGS
MINFO_FROM_DISP(p);
step = ACCESS_FBINFO(devflags.textstep);
offs = yy * p->next_line + xx * step;
- attr = attr_fgcol(p, scr_readw(s)) | (attr_bgcol(p, scr_readw(s)) << 4);
+ c = scr_readw(s);
+ attr = attr_fgcol(p, c) | (attr_bgcol(p, c) << 4);
CRITBEGIN
int charattr;
unsigned char *p;
- charattr = (*s >> 8) & 0xff;
+ charattr = (scr_readw(s) >> 8) & 0xff;
xpos <<= 3;
ypos <<= 4;
NPORT_DMODE0_L32);
for (i = 0; i < count; i++, xpos += 8) {
- p = &font_data[vc->vc_num][(s[i] & 0xff) << 4];
+ p = &font_data[vc->vc_num][(scr_readw(s++) & 0xff) << 4];
newport_wait();
{
unsigned short *s = (unsigned short *)
(conp->vc_origin + py * conp->vc_size_row + (px << 1));
+ u16 cs;
+ cs = scr_readw(s);
if (px == pw) {
unsigned short *t = s - 1;
-
- if (inverted(*s) && inverted(*t))
- return sprintf(b, "\b\033[7m%c\b\033[@%c\033[m",
- *s, *t);
- else if (inverted(*s))
- return sprintf(b, "\b\033[7m%c\033[m\b\033[@%c",
- *s, *t);
- else if (inverted(*t))
- return sprintf(b, "\b%c\b\033[@\033[7m%c\033[m",
- *s, *t);
+ u16 ct = scr_readw(t);
+
+ if (inverted(cs) && inverted(ct))
+ return sprintf(b, "\b\033[7m%c\b\033[@%c\033[m", cs,
+ ct);
+ else if (inverted(cs))
+ return sprintf(b, "\b\033[7m%c\033[m\b\033[@%c", cs,
+ ct);
+ else if (inverted(ct))
+ return sprintf(b, "\b%c\b\033[@\033[7m%c\033[m", cs,
+ ct);
else
- return sprintf(b, "\b%c\b\033[@%c", *s, *t);
+ return sprintf(b, "\b%c\b\033[@%c", cs, ct);
}
- if (inverted(*s))
- return sprintf(b, "\033[7m%c\033[m\b", *s);
+ if (inverted(cs))
+ return sprintf(b, "\033[7m%c\033[m\b", cs);
else
- return sprintf(b, "%c\b", *s);
+ return sprintf(b, "%c\b", cs);
}
static int
unsigned short *s = (unsigned short *)
(conp->vc_origin + py * conp->vc_size_row + (px << 1));
char *p = b;
+ u16 cs;
b += sprintf(b, "\033[%d;%dH", py + 1, px + 1);
+ cs = scr_readw(s);
if (px == pw) {
unsigned short *t = s - 1;
-
- if (inverted(*s) && inverted(*t))
- b += sprintf(b, "\b%c\b\033[@\033[7m%c\033[m", *s, *t);
- else if (inverted(*s))
- b += sprintf(b, "\b%c\b\033[@%c", *s, *t);
- else if (inverted(*t))
- b += sprintf(b, "\b\033[7m%c\b\033[@%c\033[m", *s, *t);
+ u16 ct = scr_readw(t);
+
+ if (inverted(cs) && inverted(ct))
+ b += sprintf(b, "\b%c\b\033[@\033[7m%c\033[m", cs, ct);
+ else if (inverted(cs))
+ b += sprintf(b, "\b%c\b\033[@%c", cs, ct);
+ else if (inverted(ct))
+ b += sprintf(b, "\b\033[7m%c\b\033[@%c\033[m", cs, ct);
else
- b += sprintf(b, "\b\033[7m%c\033[m\b\033[@%c", *s, *t);
+ b += sprintf(b, "\b\033[7m%c\033[m\b\033[@%c", cs, ct);
return b - p;
}
- if (inverted(*s))
- b += sprintf(b, "%c\b", *s);
+ if (inverted(cs))
+ b += sprintf(b, "%c\b", cs);
else
- b += sprintf(b, "\033[7m%c\033[m\b", *s);
+ b += sprintf(b, "\033[7m%c\033[m\b", cs);
return b - p;
}
unsigned char *b = *bp;
while (cnt--) {
- if (attr != inverted(*s)) {
- attr = inverted(*s);
+ u16 c = scr_readw(s);
+ if (attr != inverted(c)) {
+ attr = inverted(c);
if (attr) {
strcpy (b, "\033[7m");
b += 4;
b += 3;
}
}
- *b++ = *s++;
+ *b++ = c;
+ s++;
if (b - buf >= 224) {
promcon_puts(buf, b - buf);
b = buf;
if (x + count >= pw + 1) {
if (count == 1) {
x -= 1;
- save = *(unsigned short *)(conp->vc_origin
+ save = scr_readw((unsigned short *)(conp->vc_origin
+ y * conp->vc_size_row
- + (x << 1));
+ + (x << 1)));
if (px != x || py != y) {
b += sprintf(b, "\033[%d;%dH", y + 1, x + 1);
{
/* 640x480 @ 60hz (VGA) */
- "vga_640x480", 60, 640, 480, 38, 33, 0, 18, 146, 26,
+ "vga_640x480", 60, 640, 480, VGA_CLK, 38, 33, 0, 18, 146, 26,
0, FB_VMODE_YWRAP
},
pvr2_encode_fix(&fix, &par);
display->screen_base = (char *)fix.smem_start;
+ display->scrollmode = SCROLL_YREDRAW;
display->visual = fix.visual;
display->type = fix.type;
display->type_aux = fix.type_aux;
#ifdef FBCON_HAS_CFB16
case 16: /* RGB 565 */
fbcon_cmap.cfb16[regno] = (red & 0xf800) |
- ((green & 0xf800) >> 6) |
+ ((green & 0xfc00) >> 5) |
((blue & 0xf800) >> 11);
break;
#endif
printk("fb%d: Mode %dx%d-%d pitch = %ld cable: %s video output: %s\n",
GET_FB_IDX(fb_info.node), var.xres, var.yres, var.bits_per_pixel,
get_line_length(var.xres, var.bits_per_pixel),
- (char *)pvr2_get_param(cables, NULL, cable_type, 6),
- (char *)pvr2_get_param(outputs, NULL, video_output, 6));
+ (char *)pvr2_get_param(cables, NULL, cable_type, 3),
+ (char *)pvr2_get_param(outputs, NULL, video_output, 3));
return 0;
}
}
if (*cable_arg)
- cable_type = pvr2_get_param(cables, cable_arg, 0, 6);
+ cable_type = pvr2_get_param(cables, cable_arg, 0, 3);
if (*output_arg)
- video_output = pvr2_get_param(outputs, output_arg, 0, 6);
+ video_output = pvr2_get_param(outputs, output_arg, 0, 3);
return 0;
}
xx *= fontwidth(p);
yy *= fontheight(p);
+ c = scr_readw(s);
+ fgx = attr_fgcol(p, c);
+ bgx = attr_bgcol(p, c);
while (count--) {
c = scr_readw(s++);
- fgx = attr_fgcol(p,c);
- bgx = attr_bgcol(p,c);
fbcon_riva_writechr(conp, p, c, fgx, bgx, yy, xx);
xx += fontwidth(p);
}
xx *= fontwidth(p);
yy *= fontheight(p);
+ c = scr_readw(s);
+ fgx = ((u16 *)p->dispsw_data)[attr_fgcol(p, c)];
+ bgx = ((u16 *)p->dispsw_data)[attr_bgcol(p, c)];
+ if (p->var.green.length == 6)
+ convert_bgcolor_16(&bgx);
while (count--) {
c = scr_readw(s++);
- fgx = ((u16 *)p->dispsw_data)[attr_fgcol(p,c)];
- bgx = ((u16 *)p->dispsw_data)[attr_bgcol(p,c)];
- if (p->var.green.length == 6)
- convert_bgcolor_16(&bgx);
fbcon_riva_writechr(conp, p, c, fgx, bgx, yy, xx);
xx += fontwidth(p);
}
xx *= fontwidth(p);
yy *= fontheight(p);
+ c = scr_readw(s);
+ fgx = ((u32 *)p->dispsw_data)[attr_fgcol(p, c)];
+ bgx = ((u32 *)p->dispsw_data)[attr_bgcol(p, c)];
while (count--) {
c = scr_readw(s++);
- fgx = ((u32 *)p->dispsw_data)[attr_fgcol(p,c)];
- bgx = ((u32 *)p->dispsw_data)[attr_bgcol(p,c)];
fbcon_riva_writechr(conp, p, c, fgx, bgx, yy, xx);
xx += fontwidth(p);
}
int count, int ypos, int xpos)
{
while(count--) {
- sti_putc(&default_sti, *s++, ypos, xpos++);
+ sti_putc(&default_sti, scr_readw(s++), ypos, xpos++);
}
}
return 0;
}
-static u16 *sticon_screen_pos(struct vc_data *conp, int offset)
-{
- return NULL;
-}
-
-static unsigned long sticon_getxy(struct vc_data *conp, unsigned long pos, int *px, int *py)
-{
- return 0;
-}
-
static u8 sticon_build_attr(struct vc_data *conp, u8 color, u8 intens, u8 blink, u8 underline, u8 reverse)
{
u8 attr = ((color & 0x70) >> 1) | ((color & 7));
con_set_palette: sticon_set_palette,
con_scrolldelta: sticon_scrolldelta,
con_set_origin: sticon_set_origin,
- con_save_screen: NULL,
con_build_attr: sticon_build_attr,
- con_invert_region: NULL,
- con_screen_pos: sticon_screen_pos,
- con_getxy: sticon_getxy,
};
#include <asm/pgalloc.h> /* need cache flush routines */
int count, int ypos, int xpos)
{
while(count--) {
- sti_putc(&default_sti, *s++, ypos, xpos++);
+ sti_putc(&default_sti, scr_readw(s++), ypos, xpos++);
}
}
return 0;
}
-static u16 *sticon_screen_pos(struct vc_data *conp, int offset)
-{
- return NULL;
-}
-
-static unsigned long sticon_getxy(struct vc_data *conp, unsigned long pos, int *px, int *py)
-{
- return 0;
-}
-
static u8 sticon_build_attr(struct vc_data *conp, u8 color, u8 intens, u8 blink, u8 underline, u8 reverse)
{
u8 attr = ((color & 0x70) >> 1) | ((color & 7));
con_set_palette: sticon_set_palette,
con_scrolldelta: sticon_scrolldelta,
con_set_origin: sticon_set_origin,
- con_save_screen: NULL,
con_build_attr: sticon_build_attr,
- con_invert_region: NULL,
- con_screen_pos: sticon_screen_pos,
- con_getxy: sticon_getxy,
};
static int __init sti_init(void)
struct display* p,
const unsigned short *s,int count,int yy,int xx)
{
- u32 fgx,bgx;
- fgx=attr_fgcol(p, *s);
- bgx=attr_bgcol(p, *s);
+ u16 c = scr_readw(s);
+ u32 fgx = attr_fgcol(p, c);
+ u32 bgx = attr_bgcol(p, c);
do_putcs( fgx,bgx,p,s,count,yy,xx );
}
static void tdfx_cfb16_putcs(struct vc_data* conp,
struct display* p,
const unsigned short *s,int count,int yy,int xx)
{
- u32 fgx,bgx;
- fgx=((u16*)p->dispsw_data)[attr_fgcol(p,*s)];
- bgx=((u16*)p->dispsw_data)[attr_bgcol(p,*s)];
+ u16 c = scr_readw(s);
+ u32 fgx = ((u16*)p->dispsw_data)[attr_fgcol(p, c)];
+ u32 bgx = ((u16*)p->dispsw_data)[attr_bgcol(p, c)];
do_putcs( fgx,bgx,p,s,count,yy,xx );
}
static void tdfx_cfb32_putcs(struct vc_data* conp,
struct display* p,
const unsigned short *s,int count,int yy,int xx)
{
- u32 fgx,bgx;
- fgx=((u32*)p->dispsw_data)[attr_fgcol(p,*s)];
- bgx=((u32*)p->dispsw_data)[attr_bgcol(p,*s)];
+ u16 c = scr_readw(s);
+ u32 fgx = ((u32*)p->dispsw_data)[attr_fgcol(p, c)];
+ u32 bgx = ((u32*)p->dispsw_data)[attr_bgcol(p, c)];
do_putcs( fgx,bgx,p,s,count,yy,xx );
}
vga_video_num_columns = c->vc_cols;
vga_video_num_lines = c->vc_rows;
if (!vga_is_gfx)
- scr_memcpyw_to((u16 *) c->vc_origin, (u16 *) c->vc_screenbuf, c->vc_screenbuf_size);
+ scr_memcpyw((u16 *) c->vc_origin, (u16 *) c->vc_screenbuf, c->vc_screenbuf_size);
return 0; /* Redrawing not needed */
}
c->vc_y = ORIG_Y;
}
if (!vga_is_gfx)
- scr_memcpyw_from((u16 *) c->vc_screenbuf, (u16 *) c->vc_origin, c->vc_screenbuf_size);
+ scr_memcpyw((u16 *) c->vc_screenbuf, (u16 *) c->vc_origin, c->vc_screenbuf_size);
}
static int vgacon_scroll(struct vc_data *c, int t, int b, int dir, int lines)
static __inline__ void __hash_unlink(struct buffer_head *bh)
{
- if (bh->b_pprev) {
- if (bh->b_next)
- bh->b_next->b_pprev = bh->b_pprev;
- *(bh->b_pprev) = bh->b_next;
+ struct buffer_head **pprev = bh->b_pprev;
+ if (pprev) {
+ struct buffer_head *next = bh->b_next;
+ if (next)
+ next->b_pprev = pprev;
+ *pprev = next;
bh->b_pprev = NULL;
}
}
static void __remove_from_lru_list(struct buffer_head * bh, int blist)
{
- if (bh->b_prev_free || bh->b_next_free) {
- bh->b_prev_free->b_next_free = bh->b_next_free;
- bh->b_next_free->b_prev_free = bh->b_prev_free;
- if (lru_list[blist] == bh)
- lru_list[blist] = bh->b_next_free;
- if (lru_list[blist] == bh)
- lru_list[blist] = NULL;
- bh->b_next_free = bh->b_prev_free = NULL;
+ struct buffer_head *next = bh->b_next_free;
+ if (next) {
+ struct buffer_head *prev = bh->b_prev_free;
+ prev->b_next_free = next;
+ next->b_prev_free = prev;
+ if (lru_list[blist] == bh) {
+ if (next == bh)
+ next = NULL;
+ lru_list[blist] = next;
+ }
+ bh->b_next_free = NULL;
+ bh->b_prev_free = NULL;
nr_buffers_type[blist]--;
size_buffers_type[blist] -= bh->b_size;
}
** highest allocated oid, it is far from perfect, and files will tend
** to be grouped towards the start of the border
*/
- border = (INODE_PKEY(p_s_inode)->k_dir_id) % (SB_BLOCK_COUNT(th->t_super) - bstart - 1) ;
+ border = le32_to_cpu(INODE_PKEY(p_s_inode)->k_dir_id) % (SB_BLOCK_COUNT(th->t_super) - bstart - 1) ;
} else {
/* why would we want to delcare a local variable to this if statement
** name border????? -chris
** unsigned long border = 0;
*/
if (!reiserfs_hashed_relocation(th->t_super)) {
- hash_in = (INODE_PKEY(p_s_inode))->k_dir_id;
+ hash_in = le32_to_cpu((INODE_PKEY(p_s_inode))->k_dir_id);
/* I wonder if the CPU cost of the
hash will obscure the layout
effect? Of course, whether that
inode->i_ctime = sd_v1_ctime(sd);
inode->i_blocks = sd_v1_blocks(sd);
- inode->i_generation = INODE_PKEY (inode)->k_dir_id;
+ inode->i_generation = le32_to_cpu (INODE_PKEY (inode)->k_dir_id);
blocks = (inode->i_size + 511) >> 9;
blocks = _ROUND_UP (blocks, inode->i_blksize >> 9);
if (inode->i_blocks > blocks) {
inode->i_blocks = sd_v2_blocks(sd);
rdev = sd_v2_rdev(sd);
if( S_ISCHR( inode -> i_mode ) || S_ISBLK( inode -> i_mode ) )
- inode->i_generation = INODE_PKEY (inode)->k_dir_id;
+ inode->i_generation = le32_to_cpu (INODE_PKEY (inode)->k_dir_id);
else
inode->i_generation = sd_v2_generation(sd);
** note that the private part of inode isn't filled in yet, we have
** to use the directory.
*/
- inode->i_generation = INODE_PKEY (dir)->k_objectid;
+ inode->i_generation = le32_to_cpu (INODE_PKEY (dir)->k_objectid);
else
#if defined( USE_INODE_GENERATION_COUNTER )
inode->i_generation =
return 0;
}
-static char * reiserfs_version (char * buf)
-{
- __u16 * pversion;
-
- pversion = (__u16 *)(buf) + 36;
- if (*pversion == 0)
- return "0";
- if (*pversion == 2)
- return "2";
- return "Unknown";
-}
-
-
/* return 1 if this is not super block */
static int print_super_block (struct buffer_head * bh)
{
} else {
kdev_t dev = get_unnamed_dev();
if (!dev) {
+ spin_unlock(&sb_lock);
put_super(s);
return ERR_PTR(-EMFILE);
}
return vol_desc_start;
}
-unsigned int
+unsigned long
udf_get_last_block(struct super_block *sb)
{
struct block_device *bdev = sb->s_bdev;
#define VT_BUF_HAVE_RW
#define VT_BUF_HAVE_MEMSETW
#define VT_BUF_HAVE_MEMCPYW
-#define VT_BUF_HAVE_MEMCPYF
extern inline void scr_writew(u16 val, volatile u16 *addr)
{
/* Do not trust that the usage will be correct; analyze the arguments. */
extern void scr_memcpyw(u16 *d, const u16 *s, unsigned int count);
-#define scr_memcpyw_from scr_memcpyw
-#define scr_memcpyw_to scr_memcpyw
/* ??? These are currently only used for downloading character sets. As
such, they don't need memory barriers. Is this all they are intended
unsigned long bi_vco; /* VCO Out from PLL, in MHz */
#endif
unsigned long bi_baudrate; /* Console Baudrate */
-#if defined(CONFIG_PPC405)
+#if defined(CONFIG_405GP)
unsigned char bi_s_version[4]; /* Version of this structure */
unsigned char bi_r_version[32]; /* Version of the ROM (IBM) */
unsigned int bi_procfreq; /* CPU (Internal) Freq, in Hz */
-/* $Id: ioctl.h,v 1.1 1999/09/18 17:29:53 gniibe Exp $
+/* $Id: ioctl.h,v 1.1 2000/04/14 16:48:21 mjd Exp $
*
* linux/ioctl.h for Linux by H.H. Bergman.
*/
-/* $Id: namei.h,v 1.1 1999/09/18 17:30:11 gniibe Exp $
+/* $Id: namei.h,v 1.3 2000/07/04 06:24:49 gniibe Exp $
* linux/include/asm-sh/namei.h
*
* Included from linux/fs/namei.c
-/* $Id: uaccess.h,v 1.12 2001/07/27 06:09:47 gniibe Exp $
+/* $Id: uaccess.h,v 1.13 2001/10/01 02:22:01 gniibe Exp $
*
* User space memory access functions
*
case 1: __put_user_asm("b"); break; \
case 2: __put_user_asm("w"); break; \
case 4: __put_user_asm("l"); break; \
+case 8: __put_user_u64(__pu_val,__pu_addr,__pu_err); break; \
default: __put_user_unknown(); break; \
} __pu_err; })
case 1: __put_user_asm("b"); break; \
case 2: __put_user_asm("w"); break; \
case 4: __put_user_asm("l"); break; \
+case 8: __put_user_u64(__pu_val,__pu_addr,__pu_err); break; \
default: __put_user_unknown(); break; \
} } __pu_err; })
:"r" (__pu_val), "m" (__m(__pu_addr)), "i" (-EFAULT) \
:"memory"); })
+#if defined(__LITTLE_ENDIAN__)
+#define __put_user_u64(val,addr,retval) \
+({ \
+__asm__ __volatile__( \
+ "1:\n\t" \
+ "mov.l %R1,%2\n\t" \
+ "mov.l %S1,%T2\n\t" \
+ "mov #0,%0\n" \
+ "2:\n" \
+ ".section .fixup,\"ax\"\n" \
+ "3:\n\t" \
+ "nop\n\t" \
+ "mov.l 4f,%0\n\t" \
+ "jmp @%0\n\t" \
+ " mov %3,%0\n" \
+ "4: .long 2b\n\t" \
+ ".previous\n" \
+ ".section __ex_table,\"a\"\n\t" \
+ ".long 1b, 3b\n\t" \
+ ".previous" \
+ : "=r" (retval) \
+ : "r" (val), "m" (__m(addr)), "i" (-EFAULT) \
+ : "memory"); })
+#else
+#define __put_user_u64(val,addr,retval) \
+({ \
+__asm__ __volatile__( \
+ "1:\n\t" \
+ "mov.l %S1,%2\n\t" \
+ "mov.l %R1,%T2\n\t" \
+ "mov #0,%0\n" \
+ "2:\n" \
+ ".section .fixup,\"ax\"\n" \
+ "3:\n\t" \
+ "nop\n\t" \
+ "mov.l 4f,%0\n\t" \
+ "jmp @%0\n\t" \
+ " mov %3,%0\n" \
+ "4: .long 2b\n\t" \
+ ".previous\n" \
+ ".section __ex_table,\"a\"\n\t" \
+ ".long 1b, 3b\n\t" \
+ ".previous" \
+ : "=r" (retval) \
+ : "r" (val), "m" (__m(addr)), "i" (-EFAULT) \
+ : "memory"); })
+#endif
+
extern void __put_user_unknown(void);
\f
/* Generic arbitrary sized copy. */
struct request {
struct list_head queue;
int elevator_sequence;
- struct list_head table;
volatile int rq_status; /* should split this into a few status bits */
#define RQ_INACTIVE (-1)
# include <linux/devfs_fs_kernel.h>
struct hd_struct {
- long start_sect;
- long nr_sects;
+ unsigned long start_sect;
+ unsigned long nr_sects;
devfs_handle_t de; /* primary (master) devfs entry */
int number; /* stupid old code wastes space */
};
#define scr_memmovew(d, s, c) memmove(d, s, c)
#define VT_BUF_HAVE_MEMCPYW
#define VT_BUF_HAVE_MEMMOVEW
-#define scr_memcpyw_from(d, s, c) memcpy(d, s, c)
-#define scr_memcpyw_to(d, s, c) memcpy(d, s, c)
-#define VT_BUF_HAVE_MEMCPYF
#endif
#ifndef VT_BUF_HAVE_MEMSETW
}
#endif
-#ifndef VT_BUF_HAVE_MEMCPYF
-static inline void scr_memcpyw_from(u16 *d, const u16 *s, unsigned int count)
-{
- count /= 2;
- while (count--)
- *d++ = scr_readw(s++);
-}
-
-static inline void scr_memcpyw_to(u16 *d, const u16 *s, unsigned int count)
-{
- count /= 2;
- while (count--)
- scr_writew(*s++, d++);
-}
-#endif
-
#endif
struct address_space *mapping = file->f_dentry->d_inode->i_mapping;
struct inode *inode = mapping->host;
struct page *page, **hash, *old_page;
- unsigned long size, pgoff;
+ unsigned long size, pgoff, endoff;
pgoff = ((address - area->vm_start) >> PAGE_CACHE_SHIFT) + area->vm_pgoff;
+ endoff = ((area->vm_end - area->vm_start) >> PAGE_CACHE_SHIFT) + area->vm_pgoff;
retry_all:
/*
if ((pgoff >= size) && (area->vm_mm == current->mm))
return NULL;
+ /* The "size" of the file, as far as mmap is concerned, isn't bigger than the mapping */
+ if (size > endoff)
+ size = endoff;
+
/*
* Do we have something in the page cache already?
*/
unsigned long pgoff)
{
unsigned char present = 0;
- struct address_space * as = &vma->vm_file->f_dentry->d_inode->i_data;
+ struct address_space * as = vma->vm_file->f_dentry->d_inode->i_mapping;
struct page * page, ** hash = page_hash(as, pgoff);
spin_lock(&pagecache_lock);
spin_unlock(&mm->page_table_lock);
page = lookup_swap_cache(entry);
if (!page) {
- lock_kernel();
swapin_readahead(entry);
page = read_swap_cache_async(entry);
- unlock_kernel();
if (!page) {
spin_lock(&mm->page_table_lock);
/*
spin_unlock (&info->lock);
return 0;
found:
+ delete_from_swap_cache(page);
add_to_page_cache(page, info->inode->i_mapping, offset + idx);
SetPageDirty(page);
SetPageUptodate(page);
- UnlockPage(page);
info->swapped--;
spin_unlock(&info->lock);
return 1;
if (!page) {
swp_entry_t swap = *entry;
spin_unlock (&info->lock);
- lock_kernel();
swapin_readahead(*entry);
page = read_swap_cache_async(*entry);
- unlock_kernel();
if (!page) {
if (entry->val != swap.val)
goto repeat;
*/
void __init swap_setup(void)
{
- /* Use a smaller cluster for memory <16MB or <32MB */
- if (num_physpages < ((16 * 1024 * 1024) >> PAGE_SHIFT))
+ unsigned long megs = num_physpages >> (20 - PAGE_SHIFT);
+
+ /* Use a smaller cluster for small-memory machines */
+ if (megs < 16)
page_cluster = 2;
- else if (num_physpages < ((32 * 1024 * 1024) >> PAGE_SHIFT))
+ else if (megs < 32)
page_cluster = 3;
- else
+ else if (megs < 64)
page_cluster = 4;
+ else if (megs < 128)
+ page_cluster = 5;
+ else
+ page_cluster = 6;
}
* share this swap entry, so be cautious and let do_wp_page work out
* what to do if a write is requested later.
*/
-/* BKL, mmlist_lock and vma->vm_mm->page_table_lock are held */
+/* mmlist_lock and vma->vm_mm->page_table_lock are held */
static inline void unuse_pte(struct vm_area_struct * vma, unsigned long address,
pte_t *dir, swp_entry_t entry, struct page* page)
{
++vma->vm_mm->rss;
}
-/* BKL, mmlist_lock and vma->vm_mm->page_table_lock are held */
+/* mmlist_lock and vma->vm_mm->page_table_lock are held */
static inline void unuse_pmd(struct vm_area_struct * vma, pmd_t *dir,
unsigned long address, unsigned long size, unsigned long offset,
swp_entry_t entry, struct page* page)
} while (address && (address < end));
}
-/* BKL, mmlist_lock and vma->vm_mm->page_table_lock are held */
+/* mmlist_lock and vma->vm_mm->page_table_lock are held */
static inline void unuse_pgd(struct vm_area_struct * vma, pgd_t *dir,
unsigned long address, unsigned long size,
swp_entry_t entry, struct page* page)
} while (address && (address < end));
}
-/* BKL, mmlist_lock and vma->vm_mm->page_table_lock are held */
+/* mmlist_lock and vma->vm_mm->page_table_lock are held */
static void unuse_vma(struct vm_area_struct * vma, pgd_t *pgdir,
swp_entry_t entry, struct page* page)
{
/*
* Don't hold on to start_mm if it looks like exiting.
- * Can mmput ever block? if so, then we cannot risk
- * it between deleting the page from the swap cache,
- * and completing the search through mms (and cannot
- * use it to avoid the long hold on mmlist_lock there).
*/
if (atomic_read(&start_mm->mm_users) == 1) {
mmput(start_mm);
}
/*
- * Wait for and lock page. Remove it from swap cache
- * so try_to_swap_out won't bump swap count. Mark dirty
- * so try_to_swap_out will preserve it without us having
- * to mark any present ptes as dirty: so we can skip
- * searching processes once swap count has all gone.
+ * Wait for and lock page. When do_swap_page races with
+ * try_to_unuse, do_swap_page can handle the fault much
+ * faster than try_to_unuse can locate the entry. This
+ * apparently redundant "wait_on_page" lets try_to_unuse
+ * defer to do_swap_page in such a case - in some tests,
+ * do_swap_page and try_to_unuse repeatedly compete.
*/
+ wait_on_page(page);
lock_page(page);
- if (PageSwapCache(page))
- delete_from_swap_cache(page);
- SetPageDirty(page);
- UnlockPage(page);
- flush_page_to_ram(page);
/*
* Remove all references to entry, without blocking.
* to search, but use it as a reminder to search shmem.
*/
swcount = *swap_map;
- if (swcount) {
+ if (swcount > 1) {
+ flush_page_to_ram(page);
if (start_mm == &init_mm)
shmem_unuse(entry, page);
else
unuse_process(start_mm, entry, page);
}
- if (*swap_map) {
+ if (*swap_map > 1) {
int set_start_mm = (*swap_map >= swcount);
struct list_head *p = &start_mm->mmlist;
struct mm_struct *new_start_mm = start_mm;
struct mm_struct *mm;
spin_lock(&mmlist_lock);
- while (*swap_map && (p = p->next) != &start_mm->mmlist) {
+ while (*swap_map > 1 &&
+ (p = p->next) != &start_mm->mmlist) {
mm = list_entry(p, struct mm_struct, mmlist);
swcount = *swap_map;
if (mm == &init_mm) {
mmput(start_mm);
start_mm = new_start_mm;
}
- page_cache_release(page);
/*
* How could swap count reach 0x7fff when the maximum
swap_list_lock();
swap_device_lock(si);
nr_swap_pages++;
- *swap_map = 0;
+ *swap_map = 1;
swap_device_unlock(si);
swap_list_unlock();
reset_overflow = 1;
}
+ /*
+ * If a reference remains (rare), we would like to leave
+ * the page in the swap cache; but try_to_swap_out could
+ * then re-duplicate the entry once we drop page lock,
+ * so we might loop indefinitely; also, that page could
+ * not be swapped out to other storage meanwhile. So:
+ * delete from cache even if there's another reference,
+ * after ensuring that the data has been saved to disk -
+ * since if the reference remains (rarer), it will be
+ * read from disk into another page. Splitting into two
+ * pages would be incorrect if swap supported "shared
+ * private" pages, but they are handled by tmpfs files.
+ * Note shmem_unuse already deleted it from swap cache.
+ */
+ swcount = *swap_map;
+ if ((swcount > 0) != PageSwapCache(page))
+ BUG();
+ if ((swcount > 1) && PageDirty(page)) {
+ rw_swap_page(WRITE, page);
+ lock_page(page);
+ }
+ if (PageSwapCache(page))
+ delete_from_swap_cache(page);
+
+ /*
+ * So we could skip searching mms once swap count went
+ * to 1, we did not mark any present ptes as dirty: must
+ * mark page dirty so try_to_swap_out will preserve it.
+ */
+ SetPageDirty(page);
+ UnlockPage(page);
+ page_cache_release(page);
+
/*
* Make sure that we aren't completely killing
* interactive performance. Interruptible check on
*/
if (current->need_resched)
schedule();
- else {
- unlock_kernel();
- lock_kernel();
- }
}
mmput(start_mm);
total_swap_pages -= p->pages;
p->flags = SWP_USED;
swap_list_unlock();
+ unlock_kernel();
err = try_to_unuse(type);
+ lock_kernel();
if (err) {
/* re-insert swap space back into swap_list */
swap_list_lock();