E: philip@raptor.com
D: Kernel / timekeeping stuff
+N: Jan-Benedict Glaw
+E: jbglaw@lug-owl.de
+D: SRM environment driver (for Alpha systems)
+P: 1024D/8399E1BB 250D 3BCF 7127 0D8C A444 A961 1DBD 5E75 8399 E1BB
+
N: Richard E. Gooch
E: rgooch@atnf.csiro.au
D: parent process death signal to children
Support for these adaptors is so far still incomplete and buggy.
You have been warned.
-Hermes PCMCIA card support
-CONFIG_PCMCIA_HERMES
- Enable support for PCMCIA 802.11b cards using the Hermes or Intersil
- HFA384x (Prism 2) chipset. To use your PC-cards, you will need
- supporting software from David Hinds' pcmcia-cs package (see the
- file Documentation/Changes for location). You also want to check out
- the PCMCIA-HOWTO, available from
- http://www.linuxdoc.org/docs.html#howto .
-
Hermes support (Orinoco/WavelanIEEE/PrismII/Symbol 802.11b cards)
CONFIG_PCMCIA_HERMES
A driver for "Hermes" chipset based PCMCIA wireless adaptors, such
module, say M here and read Documentation/modules.txt as well as
Documentation/networking/net-modules.txt.
+RealTek RTL-8139C+ 10/100 PCI Fast Ethernet Adapter support
+CONFIG_8139CP
+ This is a driver for the Fast Ethernet PCI network cards based on
+ the RTL8139C+ chips. If you have one of those, say Y and read
+ the Ethernet-HOWTO, available from
+ http://www.linuxdoc.org/docs.html#howto .
+
+ If you want to compile this driver as a module ( = code which can be
+ inserted in and removed from the running kernel whenever you want),
+ say M here and read Documentation/modules.txt. This is recommended.
+ The module will be called 8139cp.o.
+
RealTek RTL-8139 PCI Fast Ethernet Adapter support
CONFIG_8139TOO
This is a driver for the Fast Ethernet PCI network cards based on
of debug messages to the system log. Select this if you are having a
problem with USB support and want to see more of what is going on.
-UHCI (intel PIIX4, VIA, ...) support?
+USB fetch large config
+CONFIG_USB_LARGE_CONFIG
+ This option changes the initial request for a config descriptor so
+ that some poorly designed devices will still work. Some APC UPSes
+ need it. Basically, the usb subsystem sends a request for a short
+ (8 byte) config, just to find out how large the real config is.
+ Incorrectly implemented devices may choke on this small config
+  request.  This option makes the initial request for a quite large
+ config (1009 bytes), and things just work.
+
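+  As a rough illustration of the difference (a sketch only, not the
+  actual usbcore code; usb_control_msg() and the descriptor constants
+  are assumed to come from <linux/usb.h>), the only thing that changes
+  is the length of the very first GET_DESCRIPTOR control request:
+
+	/* Sketch: first request for the configuration descriptor.
+	 * len is 8 in the normal case, roughly 1009 bytes when this
+	 * option is enabled. */
+	static int first_config_request(struct usb_device *dev,
+					void *buf, int len)
+	{
+		return usb_control_msg(dev, usb_rcvctrlpipe(dev, 0),
+				       USB_REQ_GET_DESCRIPTOR, USB_DIR_IN,
+				       (USB_DT_CONFIG << 8) + 0, 0,
+				       buf, len, HZ * 3);
+	}
+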
+ If you have an APC UPS, say Y; otherwise say N.
+
+USB long timeout
+CONFIG_USB_LONG_TIMEOUT
+  This option makes the standard timeout a bit longer.  Basically,
+ some devices are just slow to respond, so this makes usb more
+ patient. There should be no harm in selecting this, but it is
+ needed for some MGE Ellipse UPSes.
+
+ If you have an MGE Ellipse UPS, or you see timeouts in HID
+ transactions, say Y; otherwise say N.
+
+UHCI (intel PIIX4, VIA, ...) support
CONFIG_USB_UHCI
The Universal Host Controller Interface is a standard by Intel for
accessing the USB hardware in the PC (which is also called the USB
The module will be called hid.o. If you want to compile it as a
module, say M here and read Documentation/modules.txt.
+/dev/usb/hiddev raw HID device support
+CONFIG_USB_HIDDEV
+ Say Y here if you want to support HID devices (from the USB
+ specification standpoint) that aren't strictly user interface
+  devices, like monitor controls and Uninterruptible Power Supplies.
+ It is also used for "consumer keys" on multimedia keyboards and
+ USB speakers.
+
+  This driver exposes these devices through a separate event
+  interface on /dev/usb/hiddevX (char 180:96 to 180:111).
+ This driver requires CONFIG_USB_HID.
+
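+  A minimal user-space sketch of reading the raw event stream (not
+  from the kernel source; it assumes the hiddev_event layout from
+  <linux/hiddev.h> and a node created with, for example,
+  `mknod /dev/usb/hiddev0 c 180 96`):
+
+	#include <stdio.h>
+	#include <fcntl.h>
+	#include <unistd.h>
+	#include <linux/hiddev.h>
+
+	int main(void)
+	{
+		struct hiddev_event ev;
+		int fd = open("/dev/usb/hiddev0", O_RDONLY);
+
+		if (fd < 0) {
+			perror("/dev/usb/hiddev0");
+			return 1;
+		}
+		/* Each read returns one usage change: the HID usage
+		 * code (ev.hid) and its new value (ev.value). */
+		while (read(fd, &ev, sizeof(ev)) == sizeof(ev))
+			printf("usage 0x%08x value %d\n", ev.hid, ev.value);
+		close(fd);
+		return 0;
+	}
+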
+ If unsure, say N.
+
USB HIDBP Keyboard (basic) support
CONFIG_USB_KBD
Say Y here if you don't want to use the generic HID driver for your
and was developed with their support. You must also include
firmware to support your particular device(s).
- See http://www.linuxcare.com.au/hugh/keyspan.html for
- more information.
+ See http://misc.nu/hugh/keyspan.html for more information.
This code is also available as a module ( = code which can be
inserted in and removed from the running kernel whenever you want).
USB Keyspan USA-28X Firmware
CONFIG_USB_SERIAL_KEYSPAN_USA28X
Say Y here to include firmware for the USA-28X converter.
+  Be sure you have a USA-28X; there are also 28XA and 28XB
+  models, and the label underneath has the actual part number.
+
+USB Keyspan USA-28XA Firmware
+CONFIG_USB_SERIAL_KEYSPAN_USA28XA
+ Say Y here to include firmware for the USA-28XA converter.
+  Be sure you have a USA-28XA; there are also 28X and 28XB
+  models, and the label underneath has the actual part number.
+
+USB Keyspan USA-28XB Firmware
+CONFIG_USB_SERIAL_KEYSPAN_USA28XB
+ Say Y here to include firmware for the USA-28XB converter.
+  Be sure you have a USA-28XB; there are also 28X and 28XA
+  models, and the label underneath has the actual part number.
USB Keyspan USA-19 Firmware
CONFIG_USB_SERIAL_KEYSPAN_USA19
The module will be called CDCEther.o. If you want to compile it as
a module, say M here and read <file:Documentation/modules.txt>.
+NetChip 1080-based USB Host-to-Host Link
+CONFIG_USB_NET1080
+  The NetChip 1080 provides a USB 1.1 host-to-host link.  NetChip has
+  a web site with technical information at http://www.netchip.com/ .
USB Kodak DC-2xx Camera support
CONFIG_USB_DC2XX
and work. SANE 1.0.4 or newer is needed to make use of your scanner.
This driver can be compiled as a module.
+HP 53xx and Minolta Dual Scanner support
+CONFIG_USB_HPUSBSCSI
+ Say Y here if you want support for the HP 53xx series of scanners
+ and the Minolta Scan Dual. This driver is experimental.
+ The scanner will be accessible as a SCSI device.
+
USB Bluetooth support
CONFIG_USB_BLUETOOTH
Say Y here if you want to connect a USB Bluetooth device to your
It is recommended to be used on a NetWinder, but it is not a
necessity.
+Debug high memory support
+CONFIG_DEBUG_HIGHMEM
+  This option enables additional error checking for high memory systems.
+ Disable for production systems.
+
Verbose kernel error messages
CONFIG_DEBUG_ERRORS
This option controls verbose debugging information which can be
update_mmu_cache(), a check is made of this flag bit, and if
set the flush is done and the flag bit is cleared.
+ IMPORTANT NOTE: It is often important, if you defer the flush,
+			  that the actual flush occurs on the same CPU
+			  that performed the stores which made the page
+			  dirty.  Again, see sparc64 for examples of how
+			  to deal with this.
+
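+	  A condensed sketch of this deferred-flush pattern, using the
+	  sparc64 helper names from arch/sparc64/mm/init.c later in
+	  this patch (simplified; the real code also flushes
+	  immediately for pages it cannot defer):
+
+		void flush_dcache_page(struct page *page)
+		{
+			/* Defer: mark the page dirty and record which
+			 * cpu did the stores, instead of flushing now. */
+			set_dcache_dirty(page);
+		}
+
+		void update_mmu_cache(struct vm_area_struct *vma,
+				      unsigned long address, pte_t pte)
+		{
+			struct page *page = pte_page(pte);
+
+			if (test_bit(PG_dcache_dirty, &page->flags)) {
+				/* Deferred flush, done on the cpu that
+				 * dirtied the page (cross-call if we
+				 * are running elsewhere). */
+				if (dcache_dirty_cpu(page) == smp_processor_id())
+					flush_dcache_page_impl(page);
+				else
+					smp_flush_dcache_page_impl(page);
+				clear_dcache_dirty(page);
+			}
+			__update_mmu_cache(vma, address, pte);
+		}
+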
void flush_icache_range(unsigned long start, unsigned long end)
When the kernel stores into addresses that it will execute
out of (eg when loading modules), this function is called.
Change History
--------------
+Version 0.9.20 - October 18, 2001
+
+* Print out notice when 8139C+ chip is detected
+* Add id for D-Link DFE690TXD pcmcia cardbus card (Gert Dewit)
+
+
Version 0.9.19 - October 9, 2001
* Eliminate buffer copy for unaligned Tx's (manfred)
-Copyright (C) 1999, 2000 David E. Nelson
+Copyright (C) 1999, 2000 David E. Nelson <dnelson@jump.net>
April 26, 2000
CHANGES
+- Amended for linux-2.4.12
+- Updated devfs support
- Amended for linux-2.3.99-pre6-3
- Appended hp_scan.c to end of this README
- Removed most references to HP
(Compaq and others) hardware port should work. At the time of this
writing, there are two UHCI drivers and one OHCI.
-A Linux development kernel (2.3.x) with USB support enabled or a
-backported version to linux-2.2.x. See http://www.linux-usb.org for
-more information on accomplishing this.
-
-A Linux kernel with USB Scanner support enabled.
+A Linux kernel with USB support enabled or a backported version to
+linux-2.2.x. See http://www.linux-usb.org for more information on
+accomplishing this.
'lspci' which is only needed to determine the type of USB hardware
available/installed in your machine.
YMMV.
Beginning with version 0.4 of the driver, up to 16 scanners can be
-connected/used simultaneously. If you intend to use more than
-one scanner at a time:
+connected/used simultaneously.  For devfs support, see the next section.
+If you intend to use more than one scanner at a time without devfs support:
Add a device for the USB scanner:
`mknod /dev/usbscanner0 c 180 48`
`mknod /dev/usbscanner1 c 180 49`
.
.
- `mknod /dev/usb/scanner15 180 63`
+	`mknod /dev/usbscanner15 c 180 63`
If you foresee using only one scanner it is best to:
modprobe usb-uhci
modprobe scanner
+DEVFS
+
+Later versions of the Linux kernel (2.4.8'ish) include a dynamic
+device filesystem called 'devfs'.  With devfs, there is no need to
+create the device files as explained above; instead, they are
+dynamically created for you. For USB Scanner, the device is created
+in /dev/usb/scannerX where X can range from 0 to 15 depending on the
+number of scanners connected to the system.
+
+To see if you have devfs, issue the command `cat /proc/filesystems`.
+If devfs is listed, you should be ready to go.  You should also have a
+process running called 'devfsd'.  To make sure, issue the
+command `ps aux | grep '[d]evfsd'`.
+
+If you would like to keep /dev/usbscanner0 in order to maintain
+compatibility with applications, then add the following to
+/etc/devfsd.conf:
+
+REGISTER ^usb/scanner0$ CFUNCTION GLOBAL symlink usb/scanner0 usbscanner0
+UNREGISTER ^usb/scanner0$ CFUNCTION GLOBAL unlink usbscanner0
+
+Then reset the scanner (reseat the USB connector or power cycle).  This
+will create the necessary symlink from /dev/usbscanner0 to /dev/usb/scanner0.
+
+CONCLUSION
+
That's it. SANE should now be able to access the device.
There is a small test program (hp_scan.c -- appended below) that can
MESSAGES
-On occasions the message 'usb_control/bulk_msg: timeout' or something
-similar will appear in '/var/adm/messages' or on the console or both,
-depending on how your system is configured. This is a side effect
-that scanners are sometimes very slow at warming up and/or
-initializing. In most cases, however, only several of these messages
-should appear and is generally considered to be normal. If you see
-a message of the type 'excessive NAK's received' then this should
-be considered abnormal and generally indicates that the USB system is
-unable to communicate with the scanner for some particular reason.
+usb_control/bulk_msg: timeout -- On occasion this message will appear
+in '/var/adm/messages', on the console, or both, depending on how
+your system is configured.  This is a side effect of scanners
+sometimes being very slow to warm up and/or initialize.  In most
+cases, however, only a few of these messages should appear, and this
+is generally considered normal.
+
+excessive NAK's received -- This message should be considered abnormal
+and generally indicates that the USB system is unable to communicate
+with the scanner for some particular reason.
+
+probe_scanner: Undetected endpoint -- The USB Scanner driver is fairly
+general when it comes to communicating with scanners.  Unfortunately,
+some vendors have designed their scanners in ways that this driver
+doesn't account for.
+
+probe_scanner: Endpoint determination failed -- This means that the
+driver is unable to detect a supported configuration with which to
+communicate with the scanner.  See also 'probe_scanner: Undetected
+endpoint'.
+
+funky result -- Most of the time the data flow between the computer
+and the scanner goes smoothly.  However, for whatever reason, whether
+it be solar flares or stray neutrons, sometimes the communications
+don't work as expected.  The driver tries to handle most types of
+errors but not all.  When this message is seen, something weird
+happened.  Please contact the maintainer listed at the top of this
+file.
SUPPORTED SCANNERS
L: samba@samba.org
S: Maintained
+SNA NETWORK LAYER
+P: Jay Schulist
+M: jschlst@samba.org
+L: linux-sna@turbolinux.com
+W: http://www.linux-sna.org
+S: Supported
+
SOFTWARE RAID (Multiple Disks) SUPPORT
P: Ingo Molnar
M: mingo@redhat.com
L: linux-net@vger.kernel.org
S: Supported
-SNA NETWORK LAYER
-P: Jay Schulist
-M: jschlst@samba.org
-L: linux-sna@turbolinux.com
-W: http://www.linux-sna.org
-S: Supported
+SRM (Alpha) environment access
+P: Jan-Benedict Glaw
+M: jbglaw@lug-owl.de
+L: linux-kernel@vger.kernel.org
+S: Maintained
STALLION TECHNOLOGIES MULTIPORT SERIAL BOARDS
M: support@stallion.oz.au
VERSION = 2
PATCHLEVEL = 4
SUBLEVEL = 13
-EXTRAVERSION =-pre5
+EXTRAVERSION =-pre6
KERNELRELEASE=$(VERSION).$(PATCHLEVEL).$(SUBLEVEL)$(EXTRAVERSION)
ARCH := $(shell uname -m | sed -e s/i.86/i386/ -e s/sun4u/sparc64/ -e s/arm.*/arm/ -e s/sa110/arm/)
+KERNELPATH=kernel-$(shell echo $(KERNELRELEASE) | sed -e "s/-//")
CONFIG_SHELL := $(shell if [ -x "$$BASH" ]; then echo $$BASH; \
else if [ -x /bin/bash ]; then echo /bin/bash; \
*(vip)CIA_IOC_HAE_IO = 0;
/* For PYXIS, we always use BWX bus and i/o accesses. To that end,
- make sure they're enabled on the controller. */
+ make sure they're enabled on the controller. At the same time,
+ enable the monster window. */
if (is_pyxis) {
temp = *(vip)CIA_IOC_CIA_CNFG;
- temp |= CIA_CNFG_IOA_BWEN;
+ temp |= CIA_CNFG_IOA_BWEN | CIA_CNFG_PCI_MWEN;
*(vip)CIA_IOC_CIA_CNFG = temp;
}
port->wsba[2].csr = 0x80000000 | 1;
port->wsm[2].csr = (0x80000000 - 1) & 0xfff00000;
- port->tba[2].csr = 0x80000000;
+ port->tba[2].csr = 0;
port->wsba[3].csr = 0;
pchip->wsba[2].csr = 0x80000000 | 1;
pchip->wsm[2].csr = (0x80000000 - 1) & 0xfff00000;
- pchip->tba[2].csr = 0x80000000;
+ pchip->tba[2].csr = 0;
pchip->wsba[3].csr = 0;
static void __init
quirk_cypress(struct pci_dev *dev)
{
-/*
- * Notorious Cy82C693 chip. One of its numerous bugs: although
- * Cypress IDE controller doesn't support native mode, it has
- * programmable addresses of IDE command/control registers.
- * This violates PCI specifications, confuses IDE subsystem
- * and causes resource conflict between primary HD_CMD register
- * and floppy controller. Ugh.
- * Fix that.
- */
+ /* The Notorious Cy82C693 chip. */
+
+ /* The Cypress IDE controller doesn't support native mode, but it
+ has programmable addresses of IDE command/control registers.
+ This violates PCI specifications, confuses the IDE subsystem and
+ causes resource conflicts between the primary HD_CMD register and
+ the floppy controller. Ugh. Fix that. */
if (dev->class >> 8 == PCI_CLASS_STORAGE_IDE) {
dev->resource[0].flags = 0;
dev->resource[1].flags = 0;
- return;
}
-/*
- * Another "feature": Cypress bridge responds on the PCI bus
- * in the address range 0xffff0000-0xffffffff (conventional
- * x86 BIOS ROM). No way to turn this off, so if we use
- * large SG window, we must avoid these addresses.
- */
- if (dev->class >> 8 == PCI_CLASS_BRIDGE_ISA) {
- struct pci_controller *hose = dev->sysdata;
- long overlap;
-
- if (hose->sg_pci) {
- overlap = hose->sg_pci->dma_base + hose->sg_pci->size;
- overlap -= 0xffff0000;
- if (overlap > 0)
- hose->sg_pci->size -= overlap;
+
+ /* The Cypress bridge responds on the PCI bus in the address range
+ 0xffff0000-0xffffffff (conventional x86 BIOS ROM). There is no
+ way to turn this off, so if we use a large direct-map window, or
+ a large SG window, we must avoid this region. */
+ else if (dev->class >> 8 == PCI_CLASS_BRIDGE_ISA) {
+ if (__direct_map_base + __direct_map_size >= 0xffff0000)
+ __direct_map_size = 0xffff0000 - __direct_map_base;
+ else {
+ struct pci_controller *hose = dev->sysdata;
+ struct pci_iommu_arena *pci = hose->sg_pci;
+ if (pci && pci->dma_base + pci->size >= 0xffff0000)
+ pci->size = 0xffff0000 - pci->dma_base;
}
}
}
* srm_env.c - Access to SRC environment variables through
* the linux procfs
*
- * (C)2001, Jan-Benedict Glaw <jbgaw@lug-owl.de>
+ * (C)2001, Jan-Benedict Glaw <jbglaw@lug-owl.de>
*
* This driver is at all a modified version of Erik Mouw's
* ./linux/Documentation/DocBook/procfs_example.c, so: thanky
- * you, erik! He can be reached via email at
+ * you, Erik! He can be reached via email at
* <J.A.K.Mouw@its.tudelft.nl>. It is based on an idea
* provided by DEC^WCompaq's "Jumpstart" CD. They included
* a patch like this as well. Thanks for idea!
#include <asm/uaccess.h>
#define DIRNAME "srm_environment" /* Subdir in /proc/ */
-#define VERSION "0.0.1" /* Module version */
+#define VERSION "0.0.2" /* Module version */
#define NAME "srm_env" /* Module name */
#define DEBUG
MODULE_AUTHOR("Jan-Benedict Glaw <jbglaw@lug-owl.de>");
MODULE_DESCRIPTION("Accessing Alpha SRM environment through procfs interface");
+MODULE_LICENSE("GPL");
EXPORT_NO_SYMBOLS;
typedef struct _srm_env {
bool 'Kernel debugging' CONFIG_DEBUG_KERNEL
if [ "$CONFIG_DEBUG_KERNEL" != "n" ]; then
+ bool ' Debug high memory support' CONFIG_DEBUG_HIGHMEM
bool ' Debug memory allocations' CONFIG_DEBUG_SLAB
bool ' Memory mapped I/O debugging' CONFIG_DEBUG_IOVIRT
bool ' Magic SysRq key' CONFIG_MAGIC_SYSRQ
# CONFIG_NE2K_PCI is not set
# CONFIG_NE3210 is not set
# CONFIG_ES3210 is not set
+# CONFIG_8139CP is not set
# CONFIG_8139TOO is not set
# CONFIG_8139TOO_PIO is not set
# CONFIG_8139TOO_TUNE_TWISTER is not set
* Jan 2000, Version 1.12
* Feb 2000, Version 1.13
* Nov 2000, Version 1.14
+ * Oct 2001, Version 1.15
*
* History:
* 0.6b: first version in official kernel, Linux 1.3.46
* Work around byte swap bug in one of the Vaio's BIOS's
* (Marc Boucher <marc@mbsi.ca>).
* Exposed the disable flag to dmi so that we can handle known
- * broken APM (Alan Cox <alan@redhat.com>).
+ * broken APM (Alan Cox <alan@redhat.com>).
+ * 1.14ac: If the BIOS says "I slowed the CPU down" then don't spin
+ * calling it - instead idle. (Alan Cox <alan@redhat.com>)
+ *		If an APM idle fails, log it and idle sensibly
+ * 1.15: Don't queue events to clients who open the device O_WRONLY.
+ * Don't expect replies from clients who open the device O_RDONLY.
+ * (Idea from Thomas Hood <jdthood at yahoo.co.uk>)
+ *	     Minor waitqueue cleanups. (John Fremlin <chief@bandits.org>)
*
* APM 1.1 Reference:
*
* Various options can be changed at boot time as follows:
* (We allow underscores for compatibility with the modules code)
* apm=on/off enable/disable APM
+ * [no-]allow[-_]ints allow interrupts during BIOS calls
+ * [no-]broken[-_]psr BIOS has a broken GetPowerStatus call
+ * [no-]realmode[-_]power[-_]off switch to real mode before
+ * powering off
* [no-]debug log some debugging messages
* [no-]power[-_]off power off on shutdown
+ * bounce[-_]interval=<n> number of ticks to ignore suspend
+ * bounces
*/
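/*
 * Illustrative example only (not from the original source): the options
 * above are combined as one comma-separated kernel argument, which is
 * how the boot-option parsing later in this file splits them, e.g.
 *
 *	apm=on,allow-ints,bounce-interval=8
 */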
/* KNOWN PROBLEM MACHINES:
int magic;
struct apm_user * next;
int suser: 1;
+ int writer: 1;
+ int reader: 1;
int suspend_wait: 1;
int suspend_result;
int suspends_pending;
static DECLARE_WAIT_QUEUE_HEAD(apm_suspend_waitqueue);
static struct apm_user * user_list;
-static char driver_version[] = "1.14"; /* no spaces */
+static char driver_version[] = "1.15"; /* no spaces */
+/*
+ * APM event names taken from the APM 1.2 specification. These are
+ * the message codes that the BIOS uses to tell us about events
+ */
static char * apm_event_name[] = {
"system standby",
"system suspend",
char * msg;
} lookup_t;
+/*
+ * The BIOS returns a set of standard error codes in AX when the
+ * carry flag is set.
+ */
+
static const lookup_t error_table[] = {
/* N/A { APM_SUCCESS, "Operation succeeded" }, */
{ APM_DISABLED, "Power management disabled" },
/*
* These are the actual BIOS calls. Depending on APM_ZERO_SEGS and
- * CONFIG_APM_ALLOW_INTS, we are being really paranoid here! Not only
+ * apm_info.allow_ints, we are being really paranoid here! Not only
* are interrupts disabled, but all the segment registers (except SS)
* are saved and zeroed this means that if the BIOS tries to reference
* any data without explicitly loading the segment registers, the kernel
# define APM_DO_RESTORE_SEGS
#endif
+/**
+ * apm_bios_call - Make an APM BIOS 32bit call
+ * @func: APM function to execute
+ * @ebx_in: EBX register for call entry
+ * @ecx_in: ECX register for call entry
+ * @eax: EAX register return
+ * @ebx: EBX register return
+ * @ecx: ECX register return
+ * @edx: EDX register return
+ * @esi: ESI register return
+ *
+ * Make an APM call using the 32bit protected mode interface. The
+ * caller is responsible for knowing if APM BIOS is configured and
+ * enabled. This call can disable interrupts for a long period of
+ * time on some laptops. The return value is in AH and the carry
+ * flag is loaded into AL. If there is an error, then the error
+ * code is returned in AH (bits 8-15 of eax) and this function
+ * returns non-zero.
+ */
+
static u8 apm_bios_call(u32 func, u32 ebx_in, u32 ecx_in,
u32 *eax, u32 *ebx, u32 *ecx, u32 *edx, u32 *esi)
{
return *eax & 0xff;
}
-/*
- * This version only returns one value (usually an error code)
+/**
+ * apm_bios_call_simple - make a simple APM BIOS 32bit call
+ * @func: APM function to invoke
+ * @ebx_in: EBX register value for BIOS call
+ * @ecx_in: ECX register value for BIOS call
+ * @eax: EAX register on return from the BIOS call
+ *
+ *	Make a BIOS call that returns only one value, or just status.
+ * If there is an error, then the error code is returned in AH
+ * (bits 8-15 of eax) and this function returns non-zero. This is
+ * used for simpler BIOS operations. This call may hold interrupts
+ * off for a long time on some laptops.
*/
static u8 apm_bios_call_simple(u32 func, u32 ebx_in, u32 ecx_in, u32 *eax)
return error;
}
+/**
+ * apm_driver_version - APM driver version
+ * @val: loaded with the APM version on return
+ *
+ * Retrieve the APM version supported by the BIOS. This is only
+ * supported for APM 1.1 or higher. An error indicates APM 1.0 is
+ * probably present.
+ *
+ * On entry val should point to a value indicating the APM driver
+ * version with the high byte being the major and the low byte the
+ *	minor number, both in BCD.
+ *
+ * On return it will hold the BIOS revision supported in the
+ * same format.
+ */
+
static int __init apm_driver_version(u_short *val)
{
u32 eax;
return APM_SUCCESS;
}
+/**
+ * apm_get_event - get an APM event from the BIOS
+ * @event: pointer to the event
+ * @info: point to the event information
+ *
+ *	The APM BIOS provides polled information for event
+ * reporting. The BIOS expects to be polled at least every second
+ * when events are pending. When a message is found the caller should
+ * poll until no more messages are present. However, this causes
+ * problems on some laptops where a suspend event notification is
+ * not cleared until it is acknowledged.
+ *
+ * Additional information is returned in the info pointer, providing
+ *	that APM 1.2 is in use. If no messages are pending the value 0x80
+ * is returned (No power management events pending).
+ */
+
static int apm_get_event(apm_event_t *event, apm_eventinfo_t *info)
{
u32 eax;
return APM_SUCCESS;
}
+/**
+ * set_power_state - set the power management state
+ * @what: which items to transition
+ * @state: state to transition to
+ *
+ * Request an APM change of state for one or more system devices. The
+ * processor state must be transitioned last of all. what holds the
+ * class of device in the upper byte and the device number (0xFF for
+ * all) for the object to be transitioned.
+ *
+ * The state holds the state to transition to, which may in fact
+ * be an acceptance of a BIOS requested state change.
+ */
+
static int set_power_state(u_short what, u_short state)
{
u32 eax;
return APM_SUCCESS;
}
+/**
+ * apm_set_power_state - set system wide power state
+ * @state: which state to enter
+ *
+ * Transition the entire system into a new APM power state.
+ */
+
static int apm_set_power_state(u_short state)
{
return set_power_state(APM_DEVICE_ALL, state);
}
#ifdef CONFIG_APM_CPU_IDLE
+
+/**
+ * apm_do_idle - perform power saving
+ *
+ * This function notifies the BIOS that the processor is (in the view
+ * of the OS) idle. It returns -1 in the event that the BIOS refuses
+ * to handle the idle request. On success the function returns 1
+ * if the BIOS did clock slowing or 0 otherwise.
+ */
+
static int apm_do_idle(void)
{
- u32 dummy;
+ u32 eax;
+ int slowed;
- if (apm_bios_call_simple(APM_FUNC_IDLE, 0, 0, &dummy))
- return 0;
+ if (apm_bios_call_simple(APM_FUNC_IDLE, 0, 0, &eax)) {
+ static unsigned long t;
+ if (time_after(jiffies, t + 10 * HZ)) {
+ printk(KERN_DEBUG "apm_do_idle failed (%d)\n",
+ (eax >> 8) & 0xff);
+ t = jiffies;
+ }
+ return -1;
+ }
+ slowed = (apm_info.bios.flags & APM_IDLE_SLOWS_CLOCK) != 0;
#ifdef ALWAYS_CALL_BUSY
clock_slowed = 1;
#else
- clock_slowed = (apm_info.bios.flags & APM_IDLE_SLOWS_CLOCK) != 0;
+ clock_slowed = slowed;
#endif
- return 1;
+ return slowed;
}
+/**
+ * apm_do_busy - inform the BIOS the CPU is busy
+ *
+ * Request that the BIOS brings the CPU back to full performance.
+ */
+
static void apm_do_busy(void)
{
u32 dummy;
/* This should wake up kapmd and ask it to slow the CPU */
#define powermanagement_idle() do { } while (0)
-/*
- * This is the idle thing.
+/**
+ * apm_cpu_idle - cpu idling for APM capable Linux
+ *
+ * This is the idling function the kernel executes when APM is available. It
+ * tries to save processor time directly by using hlt instructions. A
+ * separate apm thread tries to do the BIOS power management.
+ *
+ * N.B. This is currently not used for kernels 2.4.x.
*/
+
static void apm_cpu_idle(void)
{
unsigned int start_idle;
}
#endif
+/**
+ * apm_power_off - ask the BIOS to power off
+ *
+ * Handle the power off sequence. This is the one piece of code we
+ * will execute even on SMP machines. In order to deal with BIOS
+ * bugs we support real mode APM BIOS power off calls. We also make
+ * the SMP call on CPU0 as some systems will only honour this call
+ * on their first cpu.
+ */
+
static void apm_power_off(void)
{
unsigned char po_bios_call[] = {
(void) apm_set_power_state(APM_STATE_OFF);
}
-/*
- * Magic sysrq key and handler for the power off function
+/**
+ * handle_poweroff - sysrq callback for power down
+ * @key: key pressed (unused)
+ * @pt_regs: register state (unused)
+ * @kbd: keyboard state (unused)
+ * @tty: tty involved (unused)
+ *
+ *	When the user hits Sys-Rq o to power down the machine, this is the
+ * callback we use.
*/
void handle_poweroff (int key, struct pt_regs *pt_regs,
- struct kbd_struct *kbd, struct tty_struct *tty) {
- apm_power_off();
+ struct kbd_struct *kbd, struct tty_struct *tty) {
+ apm_power_off();
}
-struct sysrq_key_op sysrq_poweroff_op = {
+
+struct sysrq_key_op sysrq_poweroff_op = {
handler: handle_poweroff,
help_msg: "Off",
action_msg: "Power Off\n"
#ifdef CONFIG_APM_DO_ENABLE
+
+/**
+ * apm_enable_power_management - enable BIOS APM power management
+ * @enable: enable yes/no
+ *
+ * Enable or disable the APM BIOS power services.
+ */
+
static int apm_enable_power_management(int enable)
{
u32 eax;
}
#endif
+/**
+ * apm_get_power_status - get current power state
+ * @status: returned status
+ * @bat: battery info
+ * @life: estimated life
+ *
+ * Obtain the current power status from the APM BIOS. We return a
+ * status which gives the rough battery status, and current power
+ *	source. The bat value returned gives an estimate as a percentage
+ *	of life and a status value for the battery. The estimated life,
+ *	if reported, is a lifetime in seconds/minutes at current power
+ *	consumption.
+ */
+
static int apm_get_power_status(u_short *status, u_short *bat, u_short *life)
{
u32 eax;
}
#endif
+/**
+ * apm_engage_power_management - enable PM on a device
+ * @device: identity of device
+ * @enable: on/off
+ *
+ *	Activate or deactivate power management on either a specific device
+ * or the entire system (%APM_DEVICE_ALL).
+ */
+
static int apm_engage_power_management(u_short device, int enable)
{
u32 eax;
return APM_SUCCESS;
}
+/**
+ * apm_error - display an APM error
+ * @str: information string
+ * @err: APM BIOS return code
+ *
+ * Write a meaningful log entry to the kernel log in the event of
+ * an APM error.
+ */
+
static void apm_error(char *str, int err)
{
int i;
}
#if defined(CONFIG_APM_DISPLAY_BLANK) && defined(CONFIG_VT)
+
+/**
+ * apm_console_blank - blank the display
+ * @blank: on/off
+ *
+ * Attempt to blank the console, firstly by blanking just video device
+ * zero, and if that fails (some BIOSes dont support it) then it blanks
+ *	zero, and if that fails (some BIOSes don't support it) then it blanks
+ * monitor powerdown for us.
+ */
+
static int apm_console_blank(int blank)
{
int error;
if (user_list == NULL)
return;
for (as = user_list; as != NULL; as = as->next) {
- if (as == sender)
+ if ((as == sender) || (!as->reader))
continue;
as->event_head = (as->event_head + 1) % APM_MAX_EVENTS;
if (as->event_head == as->event_tail) {
as->event_tail = (as->event_tail + 1) % APM_MAX_EVENTS;
}
as->events[as->event_head] = event;
- if (!as->suser)
+ if ((!as->suser) || (!as->writer))
continue;
switch (event) {
case APM_SYS_SUSPEND:
/* map all suspends to ACPI D3 */
if (pm_send_all(PM_SUSPEND, (void *)3)) {
if (event == APM_CRITICAL_SUSPEND) {
- printk(KERN_CRIT "apm: Critical suspend was vetoed, expect armageddon\n" );
+ printk(KERN_CRIT
+ "apm: Critical suspend was vetoed, "
+ "expect armageddon\n" );
return 0;
}
if (apm_info.connection_version > 0x100)
int err;
if ((standbys_pending > 0) || (suspends_pending > 0)) {
- if ((apm_info.connection_version > 0x100) && (pending_count-- <= 0)) {
+ if ((apm_info.connection_version > 0x100) &&
+ (pending_count-- <= 0)) {
pending_count = 4;
if (debug)
printk(KERN_DEBUG "apm: setting state busy\n");
set_current_state(TASK_INTERRUPTIBLE);
for (;;) {
/* Nothing to do, just sleep for the timeout */
- timeout = 2*timeout;
+ timeout = 2 * timeout;
if (timeout > APM_CHECK_TIMEOUT)
timeout = APM_CHECK_TIMEOUT;
schedule_timeout(timeout);
#ifdef CONFIG_APM_CPU_IDLE
if (!system_idle())
continue;
- if (apm_do_idle()) {
+
+ /*
+ * If we can idle...
+ */
+ if (apm_do_idle() != -1) {
unsigned long start = jiffies;
while ((!exit_kapmd) && system_idle()) {
- apm_do_idle();
+ if (apm_do_idle()) {
+ set_current_state(TASK_INTERRUPTIBLE);
+ /* APM needs us to snooze .. either
+ the BIOS call failed (-1) or it
+ slowed the clock (1). We sleep
+ until it talks to us again */
+ schedule_timeout(1);
+ }
if ((jiffies - start) > APM_CHECK_TIMEOUT) {
apm_event_handler();
start = jiffies;
struct apm_user * as;
int i;
apm_event_t event;
- DECLARE_WAITQUEUE(wait, current);
as = fp->private_data;
if (check_apm_user(as, "read"))
return -EIO;
if (count < sizeof(apm_event_t))
return -EINVAL;
- if (queue_empty(as)) {
- if (fp->f_flags & O_NONBLOCK)
- return -EAGAIN;
- add_wait_queue(&apm_waitqueue, &wait);
-repeat:
- set_current_state(TASK_INTERRUPTIBLE);
- if (queue_empty(as) && !signal_pending(current)) {
- schedule();
- goto repeat;
- }
- set_current_state(TASK_RUNNING);
- remove_wait_queue(&apm_waitqueue, &wait);
- }
+ if ((queue_empty(as)) && (fp->f_flags & O_NONBLOCK))
+ return -EAGAIN;
+ wait_event_interruptible(apm_waitqueue, !queue_empty(as));
i = count;
while ((i >= sizeof(event)) && !queue_empty(as)) {
event = get_queued_event(as);
u_int cmd, u_long arg)
{
struct apm_user * as;
- DECLARE_WAITQUEUE(wait, current);
as = filp->private_data;
if (check_apm_user(as, "ioctl"))
return -EIO;
} else {
as->suspend_wait = 1;
- add_wait_queue(&apm_suspend_waitqueue, &wait);
- while (1) {
- set_current_state(TASK_INTERRUPTIBLE);
- if ((as->suspend_wait == 0)
- || signal_pending(current))
- break;
- schedule();
- }
- set_current_state(TASK_RUNNING);
- remove_wait_queue(&apm_suspend_waitqueue, &wait);
+ wait_event_interruptible(apm_suspend_waitqueue,
+ as->suspend_wait == 0);
return as->suspend_result;
}
break;
* privileged operation -- cevans
*/
as->suser = capable(CAP_SYS_ADMIN);
+ as->writer = (filp->f_mode & FMODE_WRITE) == FMODE_WRITE;
+ as->reader = (filp->f_mode & FMODE_READ) == FMODE_READ;
as->next = user_list;
user_list = as;
filp->private_data = as;
/* Install our power off handler.. */
if (power_off)
pm_power_off = apm_power_off;
- register_sysrq_key('o',&sysrq_poweroff_op);
+ register_sysrq_key('o', &sysrq_poweroff_op);
if (smp_num_cpus == 1) {
#if defined(CONFIG_APM_DISPLAY_BLANK) && defined(CONFIG_VT)
apm_disabled = 1;
if (strncmp(str, "on", 2) == 0)
apm_disabled = 0;
- if ((strncmp(str, "allow-ints", 10) == 0) ||
- (strncmp(str, "allow_ints", 10) == 0))
- apm_info.allow_ints = 1;
- if ((strncmp(str, "broken-psr", 10) == 0) ||
- (strncmp(str, "broken_psr", 10) == 0))
- apm_info.get_power_status_broken = 1;
- if ((strncmp(str, "realmode-power-off", 18) == 0) ||
- (strncmp(str, "realmode_power_off", 18) == 0))
- apm_info.realmode_power_off = 1;
+ if ((strncmp(str, "bounce-interval=", 16) == 0) ||
+ (strncmp(str, "bounce_interval=", 16) == 0))
+ bounce_interval = simple_strtol(str + 16, NULL, 0);
invert = (strncmp(str, "no-", 3) == 0);
if (invert)
str += 3;
if ((strncmp(str, "power-off", 9) == 0) ||
(strncmp(str, "power_off", 9) == 0))
power_off = !invert;
- if ((strncmp(str, "bounce-interval=", 16) == 0) ||
- (strncmp(str, "bounce_interval=", 16) == 0))
- bounce_interval = simple_strtol(str + 16, NULL, 0);
+ if ((strncmp(str, "allow-ints", 10) == 0) ||
+ (strncmp(str, "allow_ints", 10) == 0))
+ apm_info.allow_ints = !invert;
+ if ((strncmp(str, "broken-psr", 10) == 0) ||
+ (strncmp(str, "broken_psr", 10) == 0))
+ apm_info.get_power_status_broken = !invert;
+ if ((strncmp(str, "realmode-power-off", 18) == 0) ||
+ (strncmp(str, "realmode_power_off", 18) == 0))
+ apm_info.realmode_power_off = !invert;
str = strchr(str, ',');
if (str != NULL)
str += strspn(str, ", \t");
MODULE_AUTHOR("Stephen Rothwell");
MODULE_DESCRIPTION("Advanced Power Management");
+MODULE_LICENSE("GPL");
MODULE_PARM(debug, "i");
MODULE_PARM_DESC(debug, "Enable debug mode");
MODULE_PARM(power_off, "i");
MODULE_PARM_DESC(power_off, "Enable power off");
MODULE_PARM(bounce_interval, "i");
-MODULE_PARM_DESC(bounce_interval, "Set the number of ticks to ignore suspend bounces");
+MODULE_PARM_DESC(bounce_interval,
+ "Set the number of ticks to ignore suspend bounces");
MODULE_PARM(allow_ints, "i");
MODULE_PARM_DESC(allow_ints, "Allow interrupts during BIOS calls");
MODULE_PARM(broken_psr, "i");
MODULE_PARM_DESC(broken_psr, "BIOS has a broken GetPowerStatus call");
+MODULE_PARM(realmode_power_off, "i");
+MODULE_PARM_DESC(realmode_power_off,
+ "Switch to real mode before powering off");
EXPORT_NO_SYMBOLS;
-/* $Id: systbls.S,v 1.100 2001/10/09 10:54:38 davem Exp $
+/* $Id: systbls.S,v 1.101 2001/10/18 08:27:05 davem Exp $
* systbls.S: System call entry point tables for OS compatibility.
* The native Linux system call table lives here also.
*
/*190*/ .long sys_init_module, sys_personality, sys_nis_syscall, sys_nis_syscall, sys_nis_syscall
/*195*/ .long sys_nis_syscall, sys_nis_syscall, sys_getppid, sparc_sigaction, sys_sgetmask
/*200*/ .long sys_ssetmask, sys_sigsuspend, sys_newlstat, sys_uselib, old_readdir
-/*205*/ .long sys_nis_syscall, sys_socketcall, sys_syslog, sys_nis_syscall, sys_nis_syscall
+/*205*/ .long sys_readahead, sys_socketcall, sys_syslog, sys_nis_syscall, sys_nis_syscall
/*210*/ .long sys_nis_syscall, sys_nis_syscall, sys_waitpid, sys_swapoff, sys_sysinfo
/*215*/ .long sys_ipc, sys_sigreturn, sys_clone, sys_nis_syscall, sys_adjtimex
/*220*/ .long sys_sigprocmask, sys_create_module, sys_delete_module, sys_get_kernel_syms, sys_getpgid
-# $Id: Makefile,v 1.48 2001/10/15 09:24:51 davem Exp $
+# $Id: Makefile,v 1.49 2001/10/17 18:26:58 davem Exp $
# sparc64/Makefile
#
# Makefile for the architecture dependent flags and dependencies on the
endif
endif
+# Uncomment this to keep track of how often flush_dcache_page
+# actually flushes the caches, output via /proc/cpuinfo
+#
+# DEBUG_DCACHE_FLUSH = 1
+ifdef DEBUG_DCACHE_FLUSH
+ CFLAGS += -DDCFLUSH_DEBUG
+ AFLAGS += -DDCFLUSH_DEBUG
+endif
+
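+# For example (an illustrative check, not part of the build), the
+# resulting counters can be read from /proc/cpuinfo on such a kernel:
+#
+#	grep -i dcpageflush /proc/cpuinfo
+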
LINKFLAGS = -T arch/sparc64/vmlinux.lds
HEAD := arch/sparc64/kernel/head.o arch/sparc64/kernel/init_task.o
CONFIG_FB_ATY=y
# CONFIG_FB_ATY_GX is not set
CONFIG_FB_ATY_CT=y
-# CONFIG_FB_ATY_CT_VAIO_LCD is not set
# CONFIG_FB_RADEON is not set
# CONFIG_FB_ATY128 is not set
# CONFIG_FB_SIS is not set
# CONFIG_TLAN is not set
CONFIG_VIA_RHINE=m
CONFIG_WINBOND_840=m
-# CONFIG_LAN_SAA9730 is not set
# CONFIG_NET_POCKET is not set
#
#
CONFIG_USB_DEVICEFS=y
# CONFIG_USB_BANDWIDTH is not set
+# CONFIG_USB_LONG_TIMEOUT is not set
+# CONFIG_USB_LARGE_CONFIG is not set
#
# USB Controllers
#
CONFIG_USB_UHCI=y
+# CONFIG_USB_UHCI_ALT is not set
CONFIG_USB_OHCI=y
#
CONFIG_USB_BLUETOOTH=m
CONFIG_USB_STORAGE=m
# CONFIG_USB_STORAGE_DEBUG is not set
+# CONFIG_USB_STORAGE_DATAFAB is not set
CONFIG_USB_STORAGE_FREECOM=y
CONFIG_USB_STORAGE_ISD200=y
CONFIG_USB_STORAGE_DPCM=y
CONFIG_USB_STORAGE_HP8200e=y
CONFIG_USB_STORAGE_SDDR09=y
+# CONFIG_USB_STORAGE_JUMPSHOT is not set
CONFIG_USB_ACM=m
CONFIG_USB_PRINTER=m
# USB Human Interface Devices (HID)
#
CONFIG_USB_HID=y
+# CONFIG_USB_HIDDEV is not set
CONFIG_USB_WACOM=m
#
# USB Network adaptors
#
CONFIG_USB_PEGASUS=m
+CONFIG_USB_KAWETH=m
CONFIG_USB_CATC=m
CONFIG_USB_CDCETHER=m
-CONFIG_USB_KAWETH=m
CONFIG_USB_USBNET=m
#
CONFIG_USB_SERIAL_OMNINET=m
#
-# USB misc drivers
+# USB Miscellaneous drivers
#
CONFIG_USB_RIO500=m
-/* $Id: entry.S,v 1.135 2001/10/13 23:04:09 kanoj Exp $
+/* $Id: entry.S,v 1.137 2001/10/18 09:06:36 davem Exp $
* arch/sparc64/kernel/entry.S: Sparc64 trap low-level entry points.
*
* Copyright (C) 1995,1997 David S. Miller (davem@caip.rutgers.edu)
-/* $Id: ioctl32.c,v 1.125 2001/09/18 22:29:05 davem Exp $
+/* $Id: ioctl32.c,v 1.126 2001/10/18 11:41:02 davem Exp $
* ioctl32.c: Conversion between 32bit and 64bit native ioctls.
*
* Copyright (C) 1997-2000 Jakub Jelinek (jakub@redhat.com)
return err;
}
+typedef struct sg_io_hdr32 {
+ s32 interface_id; /* [i] 'S' for SCSI generic (required) */
+ s32 dxfer_direction; /* [i] data transfer direction */
+ u8 cmd_len; /* [i] SCSI command length ( <= 16 bytes) */
+ u8 mx_sb_len; /* [i] max length to write to sbp */
+ u16 iovec_count; /* [i] 0 implies no scatter gather */
+ u32 dxfer_len; /* [i] byte count of data transfer */
+ u32 dxferp; /* [i], [*io] points to data transfer memory
+ or scatter gather list */
+ u32 cmdp; /* [i], [*i] points to command to perform */
+ u32 sbp; /* [i], [*o] points to sense_buffer memory */
+ u32 timeout; /* [i] MAX_UINT->no timeout (unit: millisec) */
+ u32 flags; /* [i] 0 -> default, see SG_FLAG... */
+ s32 pack_id; /* [i->o] unused internally (normally) */
+ u32 usr_ptr; /* [i->o] unused internally */
+ u8 status; /* [o] scsi status */
+ u8 masked_status; /* [o] shifted, masked scsi status */
+ u8 msg_status; /* [o] messaging level data (optional) */
+ u8 sb_len_wr; /* [o] byte count actually written to sbp */
+ u16 host_status; /* [o] errors from host adapter */
+ u16 driver_status; /* [o] errors from software driver */
+ s32 resid; /* [o] dxfer_len - actual_transferred */
+ u32 duration; /* [o] time taken by cmd (unit: millisec) */
+ u32 info; /* [o] auxiliary information */
+} sg_io_hdr32_t; /* 64 bytes long (on sparc32) */
+
+typedef struct sg_iovec32 {
+ u32 iov_base;
+ u32 iov_len;
+} sg_iovec32_t;
+
+static int alloc_sg_iovec(sg_io_hdr_t *sgp, u32 uptr32)
+{
+ sg_iovec32_t *uiov = (sg_iovec32_t *) A(uptr32);
+ sg_iovec_t *kiov;
+ int i;
+
+ sgp->dxferp = kmalloc(sgp->iovec_count *
+ sizeof(sg_iovec_t), GFP_KERNEL);
+ if (!sgp->dxferp)
+ return -ENOMEM;
+ memset(sgp->dxferp, 0,
+ sgp->iovec_count * sizeof(sg_iovec_t));
+
+ kiov = (sg_iovec_t *) sgp->dxferp;
+ for (i = 0; i < sgp->iovec_count; i++) {
+ u32 iov_base32;
+ if (__get_user(iov_base32, &uiov->iov_base) ||
+ __get_user(kiov->iov_len, &uiov->iov_len))
+ return -EFAULT;
+
+ kiov->iov_base = kmalloc(kiov->iov_len, GFP_KERNEL);
+ if (!kiov->iov_base)
+ return -ENOMEM;
+ if (copy_from_user(kiov->iov_base,
+ (void *) A(iov_base32),
+ kiov->iov_len))
+ return -EFAULT;
+
+ uiov++;
+ kiov++;
+ }
+
+ return 0;
+}
+
+static int copy_back_sg_iovec(sg_io_hdr_t *sgp, u32 uptr32)
+{
+ sg_iovec32_t *uiov = (sg_iovec32_t *) A(uptr32);
+ sg_iovec_t *kiov = (sg_iovec_t *) sgp->dxferp;
+ int i;
+
+ for (i = 0; i < sgp->iovec_count; i++) {
+ u32 iov_base32;
+
+ if (__get_user(iov_base32, &uiov->iov_base))
+ return -EFAULT;
+
+ if (copy_to_user((void *) A(iov_base32),
+ kiov->iov_base,
+ kiov->iov_len))
+ return -EFAULT;
+
+ uiov++;
+ kiov++;
+ }
+
+ return 0;
+}
+
+static void free_sg_iovec(sg_io_hdr_t *sgp)
+{
+ sg_iovec_t *kiov = (sg_iovec_t *) sgp->dxferp;
+ int i;
+
+ for (i = 0; i < sgp->iovec_count; i++) {
+ if (kiov->iov_base) {
+ kfree(kiov->iov_base);
+ kiov->iov_base = NULL;
+ }
+ kiov++;
+ }
+ kfree(sgp->dxferp);
+ sgp->dxferp = NULL;
+}
+
+static int sg_ioctl_trans(unsigned int fd, unsigned int cmd, unsigned long arg)
+{
+ sg_io_hdr32_t *sg_io32;
+ sg_io_hdr_t sg_io64;
+ u32 dxferp32, cmdp32, sbp32;
+ mm_segment_t old_fs;
+ int err = 0;
+
+ sg_io32 = (sg_io_hdr32_t *)arg;
+ err = __get_user(sg_io64.interface_id, &sg_io32->interface_id);
+ err |= __get_user(sg_io64.dxfer_direction, &sg_io32->dxfer_direction);
+ err |= __get_user(sg_io64.cmd_len, &sg_io32->cmd_len);
+ err |= __get_user(sg_io64.mx_sb_len, &sg_io32->mx_sb_len);
+ err |= __get_user(sg_io64.iovec_count, &sg_io32->iovec_count);
+ err |= __get_user(sg_io64.dxfer_len, &sg_io32->dxfer_len);
+ err |= __get_user(sg_io64.timeout, &sg_io32->timeout);
+ err |= __get_user(sg_io64.flags, &sg_io32->flags);
+ err |= __get_user(sg_io64.pack_id, &sg_io32->pack_id);
+
+ sg_io64.dxferp = NULL;
+ sg_io64.cmdp = NULL;
+ sg_io64.sbp = NULL;
+
+ err |= __get_user(cmdp32, &sg_io32->cmdp);
+ sg_io64.cmdp = kmalloc(sg_io64.cmd_len, GFP_KERNEL);
+ if (!sg_io64.cmdp) {
+ err = -ENOMEM;
+ goto out;
+ }
+ if (copy_from_user(sg_io64.cmdp,
+ (void *) A(cmdp32),
+ sg_io64.cmd_len)) {
+ err = -EFAULT;
+ goto out;
+ }
+
+ err |= __get_user(sbp32, &sg_io32->sbp);
+ sg_io64.sbp = kmalloc(64, GFP_KERNEL);
+ if (!sg_io64.sbp) {
+ err = -ENOMEM;
+ goto out;
+ }
+ memset(sg_io64.sbp, 0, 64);
+
+ err |= __get_user(dxferp32, &sg_io32->dxferp);
+ if (sg_io64.iovec_count) {
+ int ret;
+
+ if ((ret = alloc_sg_iovec(&sg_io64, dxferp32))) {
+ err = ret;
+ goto out;
+ }
+ } else {
+ sg_io64.dxferp = kmalloc(sg_io64.dxfer_len, GFP_KERNEL);
+ if (!sg_io64.dxferp) {
+ err = -ENOMEM;
+ goto out;
+ }
+ if (copy_from_user(sg_io64.dxferp,
+ (void *) A(dxferp32),
+ sg_io64.dxfer_len)) {
+ err = -EFAULT;
+ goto out;
+ }
+ }
+
+ /* Unused internally, do not even bother to copy it over. */
+ sg_io64.usr_ptr = NULL;
+
+ if (err)
+ return -EFAULT;
+
+ old_fs = get_fs();
+ set_fs (KERNEL_DS);
+ err = sys_ioctl (fd, cmd, (unsigned long) &sg_io64);
+ set_fs (old_fs);
+
+ if (err < 0)
+ goto out;
+
+ err = __put_user(sg_io64.pack_id, &sg_io32->pack_id);
+ err |= __put_user(sg_io64.status, &sg_io32->status);
+ err |= __put_user(sg_io64.masked_status, &sg_io32->masked_status);
+ err |= __put_user(sg_io64.msg_status, &sg_io32->msg_status);
+ err |= __put_user(sg_io64.sb_len_wr, &sg_io32->sb_len_wr);
+ err |= __put_user(sg_io64.host_status, &sg_io32->host_status);
+ err |= __put_user(sg_io64.driver_status, &sg_io32->driver_status);
+ err |= __put_user(sg_io64.resid, &sg_io32->resid);
+ err |= __put_user(sg_io64.duration, &sg_io32->duration);
+ err |= __put_user(sg_io64.info, &sg_io32->info);
+ err |= copy_to_user((void *)A(sbp32), sg_io64.sbp, 64);
+ if (sg_io64.dxferp) {
+ if (sg_io64.iovec_count)
+ err |= copy_back_sg_iovec(&sg_io64, dxferp32);
+ else
+ err |= copy_to_user((void *)A(dxferp32),
+ sg_io64.dxferp,
+ sg_io64.dxfer_len);
+ }
+ if (err)
+ err = -EFAULT;
+
+out:
+ if (sg_io64.cmdp)
+ kfree(sg_io64.cmdp);
+ if (sg_io64.sbp)
+ kfree(sg_io64.sbp);
+ if (sg_io64.dxferp) {
+ if (sg_io64.iovec_count) {
+ free_sg_iovec(&sg_io64);
+ } else {
+ kfree(sg_io64.dxferp);
+ }
+ }
+ return err;
+}
+
struct ppp_option_data32 {
__kernel_caddr_t32 ptr;
__u32 length;
COMPATIBLE_IOCTL(SG_GET_VERSION_NUM)
COMPATIBLE_IOCTL(SG_NEXT_CMD_LEN)
COMPATIBLE_IOCTL(SG_SCSI_RESET)
-COMPATIBLE_IOCTL(SG_IO)
COMPATIBLE_IOCTL(SG_GET_REQUEST_TABLE)
COMPATIBLE_IOCTL(SG_SET_KEEP_ORPHAN)
COMPATIBLE_IOCTL(SG_GET_KEEP_ORPHAN)
HANDLE_IOCTL(FDPOLLDRVSTAT32, fd_ioctl_trans)
HANDLE_IOCTL(FDGETFDCSTAT32, fd_ioctl_trans)
HANDLE_IOCTL(FDWERRORGET32, fd_ioctl_trans)
+HANDLE_IOCTL(SG_IO, sg_ioctl_trans)
HANDLE_IOCTL(PPPIOCGIDLE32, ppp_ioctl_trans)
HANDLE_IOCTL(PPPIOCSCOMPRESS32, ppp_ioctl_trans)
HANDLE_IOCTL(MTIOCGET32, mt_ioctl_trans)
-/* $Id: process.c,v 1.120 2001/10/02 02:22:26 davem Exp $
+/* $Id: process.c,v 1.122 2001/10/18 09:06:36 davem Exp $
* arch/sparc64/kernel/process.c
*
* Copyright (C) 1995, 1996 David S. Miller (davem@caip.rutgers.edu)
-/* $Id: setup.c,v 1.68 2001/10/13 00:14:34 kanoj Exp $
+/* $Id: setup.c,v 1.69 2001/10/18 09:40:00 davem Exp $
* linux/arch/sparc64/kernel/setup.c
*
* Copyright (C) 1995,1996 David S. Miller (davem@caip.rutgers.edu)
struct console *cons, *saved_console = NULL;
unsigned long flags;
char *cmd;
+ extern spinlock_t prom_entry_lock;
if (!args)
return -1;
*/
irq_exit(smp_processor_id(), 0);
save_and_cli(flags);
+ spin_unlock(&prom_entry_lock);
cons = console_drivers;
while (cons) {
unregister_console(cons);
saved_console = cons->next;
register_console(cons);
}
+ spin_lock(&prom_entry_lock);
restore_flags(flags);
/*
* Restore in-interrupt status for a resume from obp.
extern unsigned long xcall_flush_cache_all;
extern unsigned long xcall_report_regs;
extern unsigned long xcall_receive_signal;
+extern unsigned long xcall_flush_dcache_page_cheetah;
+extern unsigned long xcall_flush_dcache_page_spitfire;
+
+static spinlock_t dcache_xcall_lock = SPIN_LOCK_UNLOCKED;
+static struct page *dcache_page;
+#ifdef DCFLUSH_DEBUG
+extern atomic_t dcpage_flushes;
+extern atomic_t dcpage_flushes_xcall;
+#endif
+
+static __inline__ void __smp_flush_dcache_page_client(struct page *page)
+{
+#if (L1DCACHE_SIZE > PAGE_SIZE)
+ __flush_dcache_page(page->virtual,
+ ((tlb_type == spitfire) &&
+ page->mapping != NULL));
+#else
+ if (page->mapping != NULL &&
+ tlb_type == spitfire)
+ __flush_icache_page(__pa(page->virtual));
+#endif
+}
+
+void smp_flush_dcache_page_client(void)
+{
+ __smp_flush_dcache_page_client(dcache_page);
+ spin_unlock(&dcache_xcall_lock);
+}
+
+void smp_flush_dcache_page_impl(struct page *page)
+{
+ if (smp_processors_ready) {
+ int cpu = dcache_dirty_cpu(page);
+ unsigned long mask = 1UL << cpu;
+
+#ifdef DCFLUSH_DEBUG
+ atomic_inc(&dcpage_flushes);
+#endif
+ if (cpu == smp_processor_id()) {
+ __smp_flush_dcache_page_client(page);
+ } else if ((cpu_present_map & mask) != 0) {
+ u64 data0;
+
+ if (tlb_type == spitfire) {
+ spin_lock(&dcache_xcall_lock);
+ dcache_page = page;
+ data0 = ((u64)&xcall_flush_dcache_page_spitfire);
+ spitfire_xcall_deliver(data0, 0, 0, mask);
+ /* Target cpu drops dcache_xcall_lock. */
+ } else {
+ /* Look mom, no locks... */
+ data0 = ((u64)&xcall_flush_dcache_page_cheetah);
+ cheetah_xcall_deliver(data0,
+ (u64) page->virtual,
+ 0, mask);
+ }
+#ifdef DCFLUSH_DEBUG
+ atomic_inc(&dcpage_flushes_xcall);
+#endif
+ }
+ }
+}
void smp_receive_signal(int cpu)
{
-/* $Id: sparc64_ksyms.c,v 1.112 2001/09/25 23:30:23 davem Exp $
+/* $Id: sparc64_ksyms.c,v 1.113 2001/10/17 18:26:58 davem Exp $
* arch/sparc64/kernel/sparc64_ksyms.c: Sparc64 specific ksyms support.
*
* Copyright (C) 1996 David S. Miller (davem@caip.rutgers.edu)
EXPORT_SYMBOL(tlb_type);
EXPORT_SYMBOL(get_fb_unmapped_area);
EXPORT_SYMBOL(flush_icache_range);
-EXPORT_SYMBOL(__flush_dcache_page);
+EXPORT_SYMBOL(flush_dcache_page);
EXPORT_SYMBOL(mostek_lock);
EXPORT_SYMBOL(mstk48t02_regs);
-/* $Id: sys_sparc32.c,v 1.179 2001/09/25 00:48:09 davem Exp $
+/* $Id: sys_sparc32.c,v 1.182 2001/10/18 09:06:36 davem Exp $
* sys_sparc32.c: Conversion between 32bit and 64bit native syscalls.
*
* Copyright (C) 1997,1998 Jakub Jelinek (jj@sunsite.mff.cuni.cz)
return sys_pwrite(fd, ubuf, count, ((loff_t)AA(poshi) << 32) | AA(poslo));
}
+extern asmlinkage ssize_t sys_readahead(int fd, loff_t offset, size_t count);
+
+asmlinkage ssize_t32 sys32_readahead(int fd, u32 offhi, u32 offlo, s32 count)
+{
+ return sys_readahead(fd, ((loff_t)AA(offhi) << 32) | AA(offlo), count);
+}
extern asmlinkage ssize_t sys_sendfile(int out_fd, int in_fd, off_t *offset, size_t count);
-/* $Id: systbls.S,v 1.78 2001/10/09 10:54:38 davem Exp $
+/* $Id: systbls.S,v 1.79 2001/10/18 08:27:05 davem Exp $
* systbls.S: System call entry point tables for OS compatibility.
* The native Linux system call table lives here also.
*
/*190*/ .word sys32_init_module, sparc64_personality, sys_nis_syscall, sys_nis_syscall, sys_nis_syscall
.word sys_nis_syscall, sys_nis_syscall, sys_getppid, sys32_sigaction, sys_sgetmask
/*200*/ .word sys_ssetmask, sys_sigsuspend, sys32_newlstat, sys_uselib, old32_readdir
- .word sys_nis_syscall, sys32_socketcall, sys_syslog, sys_nis_syscall, sys_nis_syscall
+ .word sys32_readahead, sys32_socketcall, sys_syslog, sys_nis_syscall, sys_nis_syscall
/*210*/ .word sys_nis_syscall, sys_nis_syscall, sys_waitpid, sys_swapoff, sys32_sysinfo
.word sys32_ipc, sys32_sigreturn, sys_clone, sys_nis_syscall, sys32_adjtimex
/*220*/ .word sys32_sigprocmask, sys32_create_module, sys32_delete_module, sys32_get_kernel_syms, sys_getpgid
/*190*/ .word sys_init_module, sparc64_personality, sys_nis_syscall, sys_nis_syscall, sys_nis_syscall
.word sys_nis_syscall, sys_nis_syscall, sys_getppid, sys_nis_syscall, sys_sgetmask
/*200*/ .word sys_ssetmask, sys_nis_syscall, sys_newlstat, sys_uselib, sys_nis_syscall
- .word sys_nis_syscall, sys_socketcall, sys_syslog, sys_nis_syscall, sys_nis_syscall
+ .word sys_readahead, sys_socketcall, sys_syslog, sys_nis_syscall, sys_nis_syscall
/*210*/ .word sys_nis_syscall, sys_nis_syscall, sys_waitpid, sys_swapoff, sys_sysinfo
.word sys_ipc, sys_nis_syscall, sys_clone, sys_nis_syscall, sys_adjtimex
/*220*/ .word sys_nis_syscall, sys_create_module, sys_delete_module, sys_get_kernel_syms, sys_getpgid
-/* $Id: init.c,v 1.193 2001/09/25 22:47:35 davem Exp $
+/* $Id: init.c,v 1.194 2001/10/17 18:26:58 davem Exp $
* arch/sparc64/mm/init.c
*
* Copyright (C) 1996-1999 David S. Miller (davem@caip.rutgers.edu)
extern void __update_mmu_cache(struct vm_area_struct *, unsigned long, pte_t);
+#ifdef DCFLUSH_DEBUG
+atomic_t dcpage_flushes = ATOMIC_INIT(0);
+#ifdef CONFIG_SMP
+atomic_t dcpage_flushes_xcall = ATOMIC_INIT(0);
+#endif
+#endif
+
+__inline__ void flush_dcache_page_impl(struct page *page)
+{
+#ifdef DCFLUSH_DEBUG
+ atomic_inc(&dcpage_flushes);
+#endif
+
+#if (L1DCACHE_SIZE > PAGE_SIZE)
+ __flush_dcache_page(page->virtual,
+ ((tlb_type == spitfire) &&
+ page->mapping != NULL));
+#else
+ if (page->mapping != NULL &&
+ tlb_type == spitfire)
+ __flush_icache_page(__pa(page->virtual));
+#endif
+}
+
void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t pte)
{
struct page *page = pte_page(pte);
if (VALID_PAGE(page) && page->mapping &&
test_bit(PG_dcache_dirty, &page->flags)) {
-#if (L1DCACHE_SIZE > PAGE_SIZE) /* is there D$ aliasing problem */
- __flush_dcache_page(page->virtual, (tlb_type == spitfire));
-#else
- if (tlb_type == spitfire) /* fix local I$ coherency */
- __flush_icache_page(__get_phys((unsigned long)(page->virtual)));
-#endif
- clear_bit(PG_dcache_dirty, &page->flags);
+ /* This is just to optimize away some function calls
+ * in the SMP case.
+ */
+ if (dcache_dirty_cpu(page) == smp_processor_id())
+ flush_dcache_page_impl(page);
+ else
+ smp_flush_dcache_page_impl(page);
+
+ clear_dcache_dirty(page);
}
__update_mmu_cache(vma, address, pte);
}
+void flush_dcache_page(struct page *page)
+{
+ int dirty = test_bit(PG_dcache_dirty, &page->flags);
+ int dirty_cpu = dcache_dirty_cpu(page);
+
+ if (page->mapping &&
+ page->mapping->i_mmap == NULL &&
+ page->mapping->i_mmap_shared == NULL) {
+ if (dirty) {
+ if (dirty_cpu == smp_processor_id())
+ return;
+ smp_flush_dcache_page_impl(page);
+ }
+ set_dcache_dirty(page);
+ } else {
+ /* We could delay the flush for the !page->mapping
+ * case too. But that case is for exec env/arg
+		 * pages and those are 99% certain to get
+		 * faulted into the tlb (and thus flushed) anyway.
+ */
+ flush_dcache_page_impl(page);
+ }
+}
+
void flush_icache_range(unsigned long start, unsigned long end)
{
/* Cheetah has coherent I-cache. */
int mmu_info(char *buf)
{
+ int len;
+
if (tlb_type == cheetah)
- return sprintf(buf, "MMU Type\t: Cheetah\n");
+ len = sprintf(buf, "MMU Type\t: Cheetah\n");
else if (tlb_type == spitfire)
- return sprintf(buf, "MMU Type\t: Spitfire\n");
+ len = sprintf(buf, "MMU Type\t: Spitfire\n");
else
- return sprintf(buf, "MMU Type\t: ???\n");
+ len = sprintf(buf, "MMU Type\t: ???\n");
+
+#ifdef DCFLUSH_DEBUG
+ len += sprintf(buf + len, "DCPageFlushes\t: %d\n",
+ atomic_read(&dcpage_flushes));
+#ifdef CONFIG_SMP
+ len += sprintf(buf + len, "DCPageFlushesXC\t: %d\n",
+ atomic_read(&dcpage_flushes_xcall));
+#endif /* CONFIG_SMP */
+#endif /* DCFLUSH_DEBUG */
+
+ return len;
}
struct linux_prom_translation {
-/* $Id: ultra.S,v 1.61 2001/09/25 18:04:51 kanoj Exp $
+/* $Id: ultra.S,v 1.63 2001/10/17 19:30:21 davem Exp $
* ultra.S: Don't expand these all over the place...
*
* Copyright (C) 1997, 2000 David S. Miller (davem@redhat.com)
b,pt %xcc, rtrap
clr %l6
+ .align 32
+ .globl xcall_flush_dcache_page_cheetah
+xcall_flush_dcache_page_cheetah:
+ sethi %hi(PAGE_SIZE), %g3
+1: subcc %g3, (1 << 5), %g3
+ stxa %g0, [%g1 + %g3] ASI_DCACHE_INVALIDATE
+ membar #Sync
+ bne,pt %icc, 1b
+ nop
+ retry
+ nop
+
+ .globl xcall_flush_dcache_page_spitfire
+xcall_flush_dcache_page_spitfire:
+ rdpr %pstate, %g2
+ wrpr %g2, PSTATE_IG | PSTATE_AG, %pstate
+ rdpr %pil, %g2
+ wrpr %g0, 15, %pil
+ sethi %hi(109f), %g7
+ b,pt %xcc, etrap_irq
+109: or %g7, %lo(109b), %g7
+ call smp_flush_dcache_page_client
+ nop
+ b,pt %xcc, rtrap
+ clr %l6
+
.globl xcall_capture
xcall_capture:
rdpr %pstate, %g2
-/* $Id: p1275.c,v 1.21 2001/04/24 01:09:12 davem Exp $
+/* $Id: p1275.c,v 1.22 2001/10/18 09:40:00 davem Exp $
* p1275.c: Sun IEEE 1275 PROM low level interface routines
*
* Copyright (C) 1996,1997 Jakub Jelinek (jj@sunsite.mff.cuni.cz)
" " : : "r" (&p1275buf), "i" (PSTATE_PRIV));
}
-/* We need some SMP protection here. But be careful as
- * prom callback code can call into here too, this is why
- * the counter is needed. -DaveM
+/*
+ * This provides SMP safety on the p1275buf. prom_callback() drops this lock
+ * to allow recursive acquisition.
*/
-static int prom_entry_depth = 0;
spinlock_t prom_entry_lock = SPIN_LOCK_UNLOCKED;
-static __inline__ unsigned long prom_get_lock(void)
-{
- unsigned long flags;
-
- __save_and_cli(flags);
- if (prom_entry_depth == 0) {
- spin_lock(&prom_entry_lock);
-
-#if 1 /* DEBUGGING */
- if (prom_entry_depth != 0)
- panic("prom_get_lock");
-#endif
- }
- prom_entry_depth++;
-
- return flags;
-}
-
-static __inline__ void prom_release_lock(unsigned long flags)
-{
- if (--prom_entry_depth == 0)
- spin_unlock(&prom_entry_lock);
-
- __restore_flags(flags);
-}
-
long p1275_cmd (char *service, long fmt, ...)
{
char *p, *q;
spitfire_set_primary_context (0);
}
- flags = prom_get_lock();
+ spin_lock_irqsave(&prom_entry_lock, flags);
p1275buf.prom_args[0] = (unsigned long)p; /* service */
strcpy (p, service);
va_end(list);
x = p1275buf.prom_args [nargs + 3];
- prom_release_lock(flags);
+ spin_unlock_irqrestore(&prom_entry_lock, flags);
if (ctx)
spitfire_set_primary_context (ctx);
mod-subdirs := dio mtd sbus video macintosh usb input telephony sgi ide \
- i2o message/fusion scsi md ieee1394 pnp isdn atm \
+ message/i2o message/fusion scsi md ieee1394 pnp isdn atm \
fc4 net/hamradio i2c acpi bluetooth
subdir-y := parport char block net sound misc media cdrom
atomic_inc( &dev->counts[_DRM_STAT_UNLOCKS] );
+#if __HAVE_KERNEL_CTX_SWITCH
+ /* We no longer really hold it, but if we are the next
+ * agent to request it then we should just be able to
+ * take it immediately and not eat the ioctl.
+ */
+ dev->lock.pid = 0;
+ {
+ __volatile__ unsigned int *plock = &dev->lock.hw_lock->lock;
+ unsigned int old, new, prev, ctx;
+
+ ctx = lock.context;
+ do {
+ old = *plock;
+ new = ctx;
+ prev = cmpxchg(plock, old, new);
+ } while (prev != old);
+ }
+ wake_up_interruptible(&dev->lock.lock_queue);
+#else
DRM(lock_transfer)( dev, &dev->lock.hw_lock->lock,
DRM_KERNEL_CONTEXT );
#if __HAVE_DMA_SCHEDULE
DRM_ERROR( "\n" );
}
}
+#endif /* !__HAVE_KERNEL_CTX_SWITCH */
unblock_all_signals();
return 0;
vma->vm_flags |= VM_IO; /* not in core dump */
}
offset = DRIVER_GET_REG_OFS();
+#ifdef __sparc__
+ if (io_remap_page_range(vma->vm_start,
+ VM_OFFSET(vma) + offset,
+ vma->vm_end - vma->vm_start,
+ vma->vm_page_prot, 0))
+#else
if (remap_page_range(vma->vm_start,
VM_OFFSET(vma) + offset,
vma->vm_end - vma->vm_start,
vma->vm_page_prot))
+#endif
return -EAGAIN;
DRM_DEBUG(" Type = %d; start = 0x%lx, end = 0x%lx,"
" offset = 0x%lx\n",
-/* $Id: ffb_drv.c,v 1.15 2001/08/09 17:47:51 davem Exp $
+/* $Id: ffb_drv.c,v 1.16 2001/10/18 16:00:24 davem Exp $
* ffb_drv.c: Creator/Creator3D direct rendering driver.
*
* Copyright (C) 2000 David S. Miller (davem@redhat.com)
#define DRIVER_PRESETUP() do { \
int _ret; \
_ret = ffb_presetup(dev); \
- if(_ret != 0) return _ret; \
+ if (_ret != 0) return _ret; \
} while(0)
/* Free private structure */
#define DRIVER_PRETAKEDOWN() do { \
- if(dev->dev_private) kfree(dev->dev_private); \
+ if (dev->dev_private) kfree(dev->dev_private); \
} while(0)
#define DRIVER_POSTCLEANUP() do { \
- if(ffb_position != NULL) kfree(ffb_position); \
+ if (ffb_position != NULL) kfree(ffb_position); \
} while(0)
/* We have to free up the rogue hw context state holding error or
int idx; \
\
idx = context - 1; \
- if (fpriv && fpriv->hw_state[idx] != NULL) { \
+ if (fpriv && \
+ context != DRM_KERNEL_CONTEXT && \
+ fpriv->hw_state[idx] != NULL) { \
kfree(fpriv->hw_state[idx]); \
fpriv->hw_state[idx] = NULL; \
} \
* sequence:
*
* echo "Initializing random number generator..."
- * random_seed=/var/run/random-seed
+ * random_seed=/var/run/random-seed
* # Carry a random seed from start-up to start-up
- * # Load and then save 512 bytes, which is the size of the entropy pool
- * if [ -f $random_seed ]; then
+ * # Load and then save the whole entropy pool
+ * if [ -f $random_seed ]; then
* cat $random_seed >/dev/urandom
- * fi
- * dd if=/dev/urandom of=$random_seed count=1
- * chmod 600 $random_seed
+ * else
+ * touch $random_seed
+ * fi
+ * chmod 600 $random_seed
+ * poolfile=/proc/sys/kernel/random/poolsize
+ * [ -r $poolfile ] && bytes=`cat $poolfile` || bytes=512
+ * dd if=/dev/urandom of=$random_seed count=1 bs=$bytes
*
* and the following lines in an appropriate script which is run as
* the system is shutdown:
- *
+ *
* # Carry a random seed from shut-down to start-up
- * # Save 512 bytes, which is the size of the entropy pool
+ * # Save the whole entropy pool
* echo "Saving random seed..."
- * random_seed=/var/run/random-seed
- * dd if=/dev/urandom of=$random_seed count=1
- * chmod 600 $random_seed
- *
+ * random_seed=/var/run/random-seed
+ * touch $random_seed
+ * chmod 600 $random_seed
+ * poolfile=/proc/sys/kernel/random/poolsize
+ * [ -r $poolfile ] && bytes=`cat $poolfile` || bytes=512
+ * dd if=/dev/urandom of=$random_seed count=1 bs=$bytes
+ *
* For example, on most modern systems using the System V init
* scripts, such code fragments would be found in
* /etc/rc.d/init.d/random. On older Linux systems, the correct script
static int random_write_wakeup_thresh = 128;
/*
- * A pool of size POOLWORDS is stirred with a primitive polynomial
- * of degree POOLWORDS over GF(2). The taps for various sizes are
+ * A pool of size .poolwords is stirred with a primitive polynomial
+ * of degree .poolwords over GF(2). The taps for various sizes are
* defined below. They are chosen to be evenly spaced (minimum RMS
* distance from evenly spaced; the numbers in the comments are a
* scaled squared error sum) except for the last tap, which is 1 to
int tap1, tap2, tap3, tap4, tap5;
} poolinfo_table[] = {
/* x^2048 + x^1638 + x^1231 + x^819 + x^411 + x + 1 -- 115 */
- { 2048, 1638, 1231, 819, 411, 1 },
+ { 2048, 1638, 1231, 819, 411, 1 },
/* x^1024 + x^817 + x^615 + x^412 + x^204 + x + 1 -- 290 */
- { 1024, 817, 615, 412, 204, 1 },
-
+ { 1024, 817, 615, 412, 204, 1 },
#if 0 /* Alternate polynomial */
/* x^1024 + x^819 + x^616 + x^410 + x^207 + x^2 + 1 -- 115 */
{ 1024, 819, 616, 410, 207, 2 },
#endif
-
+
/* x^512 + x^411 + x^308 + x^208 + x^104 + x + 1 -- 225 */
{ 512, 411, 308, 208, 104, 1 },
-
#if 0 /* Alternates */
/* x^512 + x^409 + x^307 + x^206 + x^102 + x^2 + 1 -- 95 */
{ 512, 409, 307, 206, 102, 2 },
/* x^256 + x^205 + x^155 + x^101 + x^52 + x + 1 -- 125 */
{ 256, 205, 155, 101, 52, 1 },
-
+
/* x^128 + x^103 + x^76 + x^51 +x^25 + x + 1 -- 105 */
{ 128, 103, 76, 51, 25, 1 },
-
#if 0 /* Alternate polynomial */
/* x^128 + x^103 + x^78 + x^51 + x^27 + x^2 + 1 -- 70 */
{ 128, 103, 78, 51, 27, 2 },
/* x^32 + x^26 + x^20 + x^14 + x^7 + x + 1 -- 15 */
{ 32, 26, 20, 14, 7, 1 },
- { 0, 0, 0, 0, 0, 0 },
-};
-
+ { 0, 0, 0, 0, 0, 0 },
+};
+
+#define POOLBITS poolwords*32
+#define POOLBYTES poolwords*4
+
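POOLBITS and POOLBYTES above deliberately omit any leading identifier so that the member access is spelled at the use site; r->poolinfo.POOLBITS expands to r->poolinfo.poolwords*32. A minimal stand-alone sketch of that expansion (the struct and function names here are hypothetical, not from the patch):

/* Hypothetical illustration: POOLBITS is an object-like macro, so
 * "pi->POOLBITS" textually expands to "pi->poolwords*32". */
struct poolinfo_demo { int poolwords; };

#define DEMO_POOLBITS	poolwords*32

static int demo_pool_bits(struct poolinfo_demo *pi)
{
	return pi->DEMO_POOLBITS;	/* compiles as pi->poolwords*32 */
}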
/*
* For the purposes of better mixing, we use the CRC-32 polynomial as
* well to make a twisted Generalized Feedback Shift Reigster
}
#endif
+#if 0
+#define DEBUG_ENT(fmt, arg...) printk(KERN_DEBUG "random: " fmt, ## arg)
+#else
+#define DEBUG_ENT(fmt, arg...) do {} while (0)
+#endif
+
/**********************************************************************
*
* OS independent entropy store. Here are the functions which handle
/*
* Initialize the entropy store. The input argument is the size of
* the random pool.
- *
+ *
* Returns an negative error if there is a problem.
*/
static int create_entropy_store(int size, struct entropy_store **ret_bucket)
memset (r, 0, sizeof(struct entropy_store));
r->poolinfo = *p;
- r->pool = kmalloc(poolwords*4, GFP_KERNEL);
+ r->pool = kmalloc(POOLBYTES, GFP_KERNEL);
if (!r->pool) {
kfree(r);
return -ENOMEM;
}
- memset(r->pool, 0, poolwords*4);
+ memset(r->pool, 0, POOLBYTES);
*ret_bucket = r;
return 0;
}
r->entropy_count = 0;
r->input_rotate = 0;
r->extract_count = 0;
- memset(r->pool, 0, r->poolinfo.poolwords*4);
+ memset(r->pool, 0, r->poolinfo.POOLBYTES);
}
static void free_entropy_store(struct entropy_store *r)
* the entropy is concentrated in the low-order bits.
*/
static void add_entropy_words(struct entropy_store *r, const __u32 *in,
- int num)
+ int nwords)
{
static __u32 const twist_table[8] = {
0, 0x3b6e20c8, 0x76dc4190, 0x4db26158,
0xedb88320, 0xd6d6a3e8, 0x9b64c2b0, 0xa00ae278 };
unsigned i;
int new_rotate;
+ int wordmask = r->poolinfo.poolwords - 1;
__u32 w;
- while (num--) {
+ while (nwords--) {
w = rotate_left(r->input_rotate, *in);
- i = r->add_ptr = (r->add_ptr - 1) & (r->poolinfo.poolwords-1);
+ i = r->add_ptr = (r->add_ptr - 1) & wordmask;
/*
* Normally, we add 7 bits of rotation to the pool.
* At the beginning of the pool, add an extra 7 bits
r->input_rotate = new_rotate & 31;
/* XOR in the various taps */
- w ^= r->pool[(i+r->poolinfo.tap1)&(r->poolinfo.poolwords-1)];
- w ^= r->pool[(i+r->poolinfo.tap2)&(r->poolinfo.poolwords-1)];
- w ^= r->pool[(i+r->poolinfo.tap3)&(r->poolinfo.poolwords-1)];
- w ^= r->pool[(i+r->poolinfo.tap4)&(r->poolinfo.poolwords-1)];
- w ^= r->pool[(i+r->poolinfo.tap5)&(r->poolinfo.poolwords-1)];
+ w ^= r->pool[(i + r->poolinfo.tap1) & wordmask];
+ w ^= r->pool[(i + r->poolinfo.tap2) & wordmask];
+ w ^= r->pool[(i + r->poolinfo.tap3) & wordmask];
+ w ^= r->pool[(i + r->poolinfo.tap4) & wordmask];
+ w ^= r->pool[(i + r->poolinfo.tap5) & wordmask];
w ^= r->pool[i];
r->pool[i] = (w >> 3) ^ twist_table[w & 7];
}
/*
* Credit (or debit) the entropy store with n bits of entropy
*/
-static void credit_entropy_store(struct entropy_store *r, int num)
+static void credit_entropy_store(struct entropy_store *r, int nbits)
{
- int max_entropy = r->poolinfo.poolwords*32;
-
- if (r->entropy_count + num < 0)
+ if (r->entropy_count + nbits < 0) {
+ DEBUG_ENT("negative entropy/overflow (%d+%d)\n",
+ r->entropy_count, nbits);
r->entropy_count = 0;
- else if (r->entropy_count + num > max_entropy)
- r->entropy_count = max_entropy;
- else
- r->entropy_count = r->entropy_count + num;
+ } else if (r->entropy_count + nbits > r->poolinfo.POOLBITS) {
+ r->entropy_count = r->poolinfo.POOLBITS;
+ } else {
+ r->entropy_count += nbits;
+ if (nbits)
+ DEBUG_ENT("%s added %d bits, now %d\n",
+ r == sec_random_state ? "secondary" :
+ r == random_state ? "primary" : "unknown",
+ nbits, r->entropy_count);
+ }
}
/**********************************************************************
return 0;
}
+/*
+ * Changes to the entropy data are put into a queue rather than being added to
+ * the entropy counts directly. This is presumably to avoid doing heavy
+ * hashing calculations during an interrupt in add_timer_randomness().
+ * Instead, the entropy is only added to the pool once per timer tick.
+ */
void batch_entropy_store(u32 a, u32 b, int num)
{
int new;
queue_task(&batch_tqueue, &tq_timer);
batch_head = new;
} else {
-#if 0
- printk(KERN_NOTICE "random: batch entropy buffer full\n");
-#endif
+ DEBUG_ENT("batch entropy buffer full\n");
}
}
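batch_entropy_store() is also exported for drivers (see the EXPORT_SYMBOL addition further down), so an in-kernel caller would, roughly, hand over two 32-bit samples plus an entropy credit and let the timer-queue run of batch_entropy_process() do the mixing. A hypothetical caller sketch, not taken from the patch:

/* Hypothetical caller: queue two 32-bit samples and credit them with
 * 8 bits of entropy; the expensive pool mixing is deferred to the
 * tq_timer run of batch_entropy_process(). */
static void demo_feed_entropy(u32 cycles, u32 event)
{
	batch_entropy_store(cycles, event, 8);
}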
+/*
+ * Flush out the accumulated entropy operations, adding entropy to the passed
+ * store (normally random_state). If that store has enough entropy, alternate
+ * between randomizing the data of the primary and secondary stores.
+ */
static void batch_entropy_process(void *private_)
{
- int num = 0;
- int max_entropy;
struct entropy_store *r = (struct entropy_store *) private_, *p;
-
+ int max_entropy = r->poolinfo.POOLBITS;
+
if (!batch_max)
return;
- max_entropy = r->poolinfo.poolwords*32;
+ p = r;
while (batch_head != batch_tail) {
+ if (r->entropy_count >= max_entropy) {
+ r = (r == sec_random_state) ? random_state :
+ sec_random_state;
+ max_entropy = r->poolinfo.POOLBITS;
+ }
add_entropy_words(r, batch_entropy_pool + 2*batch_tail, 2);
- p = r;
- if (r->entropy_count > max_entropy && (num & 1))
- r = sec_random_state;
credit_entropy_store(r, batch_entropy_credit[batch_tail]);
batch_tail = (batch_tail+1) & (batch_max-1);
- num++;
}
- if (r->entropy_count >= random_read_wakeup_thresh)
+ if (p->entropy_count >= random_read_wakeup_thresh)
wake_up_interruptible(&random_read_wait);
}
/*
* This utility inline function is responsible for transfering entropy
- * from the primary pool to the secondary extraction pool. We pull
- * randomness under two conditions; one is if there isn't enough entropy
- * in the secondary pool. The other is after we have extract 1024 bytes,
+ * from the primary pool to the secondary extraction pool. We pull
+ * randomness under two conditions; one is if there isn't enough entropy
+ * in the secondary pool. The other is after we have extracted 1024 bytes,
* at which point we do a "catastrophic reseeding".
*/
static inline void xfer_secondary_pool(struct entropy_store *r,
{
__u32 tmp[TMP_BUF_SIZE];
- if (r->entropy_count < nbytes*8) {
- extract_entropy(random_state, tmp, sizeof(tmp), 0);
- add_entropy_words(r, tmp, TMP_BUF_SIZE);
- credit_entropy_store(r, TMP_BUF_SIZE*8);
+ if (r->entropy_count < nbytes * 8 &&
+ r->entropy_count < r->poolinfo.POOLBITS) {
+ int nwords = min(r->poolinfo.poolwords - r->entropy_count/32,
+ sizeof(tmp) / 4);
+
+ DEBUG_ENT("xfer %d from primary to %s (have %d, need %d)\n",
+ nwords * 32,
+ r == sec_random_state ? "secondary" : "unknown",
+ r->entropy_count, nbytes * 8);
+
+ extract_entropy(random_state, tmp, nwords, 0);
+ add_entropy_words(r, tmp, nwords);
+ credit_entropy_store(r, nwords * 32);
}
if (r->extract_count > 1024) {
+ DEBUG_ENT("reseeding %s with %d from primary\n",
+ r == sec_random_state ? "secondary" : "unknown",
+ sizeof(tmp) * 8);
extract_entropy(random_state, tmp, sizeof(tmp), 0);
- add_entropy_words(r, tmp, TMP_BUF_SIZE);
+ add_entropy_words(r, tmp, sizeof(tmp) / 4);
r->extract_count = 0;
}
}
* bits of entropy are left in the pool, but it does not restrict the
* number of bytes that are actually obtained. If the EXTRACT_ENTROPY_USER
* flag is given, then the buf pointer is assumed to be in user space.
- * If the EXTRACT_ENTROPY_SECONDARY flag is given, then this function will
*
- * Note: extract_entropy() assumes that POOLWORDS is a multiple of 16 words.
+ * If the EXTRACT_ENTROPY_SECONDARY flag is given, then we are actually
+ * extracting entropy from the secondary pool, and can refill from the
+ * primary pool if needed.
+ *
+ * Note: extract_entropy() assumes that .poolwords is a multiple of 16 words.
*/
static ssize_t extract_entropy(struct entropy_store *r, void * buf,
size_t nbytes, int flags)
__u32 x;
add_timer_randomness(&extract_timer_state, nbytes);
-
+
/* Redundant, but just in case... */
- if (r->entropy_count > r->poolinfo.poolwords)
- r->entropy_count = r->poolinfo.poolwords;
+ if (r->entropy_count > r->poolinfo.POOLBITS)
+ r->entropy_count = r->poolinfo.POOLBITS;
if (flags & EXTRACT_ENTROPY_SECONDARY)
xfer_secondary_pool(r, nbytes);
+ DEBUG_ENT("%s has %d bits, want %d bits\n",
+ r == sec_random_state ? "secondary" :
+ r == random_state ? "primary" : "unknown",
+ r->entropy_count, nbytes * 8);
+
if (r->entropy_count / 8 >= nbytes)
r->entropy_count -= nbytes*8;
else
c -= bytes;
p += bytes;
- /* Convert bytes to words */
- bytes = (bytes + 3) / sizeof(__u32);
- add_entropy_words(random_state, buf, bytes);
+ add_entropy_words(random_state, buf, (bytes + 3) / 4);
}
if (p == buffer) {
return (ssize_t)ret;
return -EINVAL;
if (size > random_state->poolinfo.poolwords)
size = random_state->poolinfo.poolwords;
- if (copy_to_user(p, random_state->pool, size*sizeof(__u32)))
+ if (copy_to_user(p, random_state->pool, size * 4))
return -EFAULT;
return 0;
case RNDADDENTROPY:
{
int ret;
- sysctl_poolsize = random_state->poolinfo.poolwords * 4;
+ sysctl_poolsize = random_state->poolinfo.POOLBYTES;
ret = proc_dointvec(table, write, filp, buffer, lenp);
if (ret || !write ||
- (sysctl_poolsize == random_state->poolinfo.poolwords * 4))
+ (sysctl_poolsize == random_state->poolinfo.POOLBYTES))
return ret;
return change_poolsize(sysctl_poolsize);
{
int len;
- sysctl_poolsize = random_state->poolinfo.poolwords * 4;
+ sysctl_poolsize = random_state->poolinfo.POOLBYTES;
/*
* We only handle the write case, since the read case gets
return -EFAULT;
}
- if (sysctl_poolsize != random_state->poolinfo.poolwords * 4)
+ if (sysctl_poolsize != random_state->poolinfo.POOLBYTES)
return change_poolsize(sysctl_poolsize);
return 0;
{
min_read_thresh = 8;
min_write_thresh = 0;
- max_read_thresh = max_write_thresh =
- random_state->poolinfo.poolwords * 32;
+ max_read_thresh = max_write_thresh = random_state->poolinfo.POOLBITS;
random_table[1].data = &random_state->entropy_count;
}
#endif /* CONFIG_SYSCTL */
EXPORT_SYMBOL(add_interrupt_randomness);
EXPORT_SYMBOL(add_blkdev_randomness);
EXPORT_SYMBOL(batch_entropy_store);
+EXPORT_SYMBOL(generate_random_uuid);
if(iop->status_block->current_mem_size < iop->status_block->desired_mem_size)
{
struct resource *res = &iop->mem_resource;
- res->name = iop->bus.pci.pdev->bus->name;
+ res->name = iop->pdev->bus->name;
res->flags = IORESOURCE_MEM;
res->start = 0;
res->end = 0;
printk("%s: requires private memory resources.\n", iop->name);
- root = pci_find_parent_resource(iop->bus.pci.pdev, res);
+ root = pci_find_parent_resource(iop->pdev, res);
if(root==NULL)
printk("Can't find parent resource!\n");
if(root && allocate_resource(root, res,
if(iop->status_block->current_io_size < iop->status_block->desired_io_size)
{
struct resource *res = &iop->io_resource;
- res->name = iop->bus.pci.pdev->bus->name;
+ res->name = iop->pdev->bus->name;
res->flags = IORESOURCE_IO;
res->start = 0;
res->end = 0;
printk("%s: requires private memory resources.\n", iop->name);
- root = pci_find_parent_resource(iop->bus.pci.pdev, res);
+ root = pci_find_parent_resource(iop->pdev, res);
if(root==NULL)
printk("Can't find parent resource!\n");
if(root && allocate_resource(root, res,
c->bus.pci.queue_buggy = 0;
c->bus.pci.dpt = 0;
c->bus.pci.short_req = 0;
- c->bus.pci.pdev = dev;
+ c->pdev = dev;
c->irq_mask = (volatile u32 *)(mem+0x34);
c->post_port = (volatile u32 *)(mem+0x40);
#include <linux/blk.h>
#include <linux/version.h>
#include <linux/i2o.h>
-#include "../scsi/scsi.h"
-#include "../scsi/hosts.h"
-#include "../scsi/sd.h"
+#include "../../scsi/scsi.h"
+#include "../../scsi/hosts.h"
+#include "../../scsi/sd.h"
#include "i2o_scsi.h"
#define VERSION_STRING "Version 0.0.1"
static Scsi_Host_Template driver_template = I2OSCSI;
-#include "../scsi/scsi_module.c"
+#include "../../scsi/scsi_module.c"
--- /dev/null
+/* 8139cp.c: A Linux PCI Ethernet driver for the RealTek 8139C+ chips. */
+/*
+ Copyright 2001 Jeff Garzik <jgarzik@mandrakesoft.com>
+
+ Copyright (C) 2000, 2001 David S. Miller (davem@redhat.com) [sungem.c]
+ Copyright 2001 Manfred Spraul [natsemi.c]
+ Copyright 1999-2001 by Donald Becker. [natsemi.c]
+ Written 1997-2001 by Donald Becker. [8139too.c]
+ Copyright 1998-2001 by Jes Sorensen, <jes@trained-monkey.org>. [acenic.c]
+
+ This software may be used and distributed according to the terms of
+ the GNU General Public License (GPL), incorporated herein by reference.
+ Drivers based on or derived from this code fall under the GPL and must
+ retain the authorship, copyright and license notice. This file is not
+ a complete program and may only be used when the entire operating
+ system is licensed under the GPL.
+
+ See the file COPYING in this distribution for more information.
+
+ TODO:
+ * dev->tx_timeout
+ * Constants (module parms?) for Rx work limit
+ * support 64-bit PCI DMA
+ * ETHTOOL_[GS]SET, ETHTOOL_GREGS, ETHTOOL_[GS]WOL,
+ ETHTOOL_[GS]MSGLVL, ETHTOOL_NWAY_RST
+ * Complete reset on PciErr
+ * LinkChg and LenChg interrupts
+ * Consider Rx interrupt mitigation using TimerIntr
+ * Implement 8139C+ statistics dump
+ * Support forcing media type with a module parameter,
+ like dl2k.c/sundance.c
+ * Rx checksumming
+ * Tx checksumming
+ * Jumbo frames / dev->change_mtu
+ * Tx abort stops Tx DMA?
+ * Investigate IntrStatus bit 10 purpose and use
+ * Investigate using skb->priority with h/w VLAN priority
+ * Investigate using High Priority Tx Queue with skb->priority
+ * Adjust Rx FIFO threshold and Max Rx DMA burst on Rx FIFO error
+ * Adjust Tx FIFO threshold and Max Tx DMA burst on Tx FIFO error
+
+ */
+
+#define DRV_NAME "8139cp"
+#define DRV_VERSION "0.0.5"
+#define DRV_RELDATE "Oct 19, 2001"
+
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/init.h>
+#include <linux/pci.h>
+#include <linux/delay.h>
+#include <linux/ethtool.h>
+#include <asm/io.h>
+#include <asm/uaccess.h>
+
+/* These identify the driver base version and may not be removed. */
+static char version[] __devinitdata =
+KERN_INFO DRV_NAME " 10/100 PCI Ethernet driver v" DRV_VERSION " (" DRV_RELDATE ")\n";
+
+MODULE_AUTHOR("Jeff Garzik <jgarzik@mandrakesoft.com>");
+MODULE_DESCRIPTION("RealTek RTL-8139C+ series 10/100 PCI Ethernet driver");
+MODULE_LICENSE("GPL");
+
+static int debug = -1;
+MODULE_PARM (debug, "i");
+MODULE_PARM_DESC (debug, "8139cp bitmapped message enable number");
+
+/* Maximum number of multicast addresses to filter (vs. Rx-all-multicast).
+ The RTL chips use a 64 element hash table based on the Ethernet CRC. */
+static int multicast_filter_limit = 32;
+MODULE_PARM (multicast_filter_limit, "i");
+MODULE_PARM_DESC (multicast_filter_limit, "8139cp maximum number of filtered multicast addresses");
+
+/* Set the copy breakpoint for the copy-only-tiny-buffer Rx structure. */
+#if defined(__alpha__) || defined(__arm__) || defined(__hppa__) \
+ || defined(__sparc__) || defined(__ia64__) \
+ || defined(__sh__) || defined(__mips__)
+static int rx_copybreak = 1518;
+#else
+static int rx_copybreak = 100;
+#endif
+MODULE_PARM (rx_copybreak, "i");
+MODULE_PARM_DESC (rx_copybreak, "8139cp Breakpoint at which Rx packets are copied");
+
+#define PFX DRV_NAME ": "
+
+#define CP_DEF_MSG_ENABLE (NETIF_MSG_DRV | \
+ NETIF_MSG_PROBE | \
+ NETIF_MSG_LINK)
+#define CP_REGS_SIZE (0xff + 1)
+#define CP_RX_RING_SIZE 64
+#define CP_TX_RING_SIZE 64
+#define CP_RING_BYTES \
+ ((sizeof(struct cp_desc) * CP_RX_RING_SIZE) + \
+ (sizeof(struct cp_desc) * CP_TX_RING_SIZE))
+#define NEXT_TX(N) (((N) + 1) & (CP_TX_RING_SIZE - 1))
+#define NEXT_RX(N) (((N) + 1) & (CP_RX_RING_SIZE - 1))
+#define TX_BUFFS_AVAIL(CP) \
+ (((CP)->tx_tail <= (CP)->tx_head) ? \
+ (CP)->tx_tail + (CP_TX_RING_SIZE - 1) - (CP)->tx_head : \
+ (CP)->tx_tail - (CP)->tx_head - 1)
+#define CP_CHIP_VERSION 0x76
+
+#define PKT_BUF_SZ 1536 /* Size of each temporary Rx buffer.*/
+#define RX_OFFSET 2
+
+/* The following settings are log_2(bytes)-4: 0 == 16 bytes .. 6==1024, 7==end of packet. */
+#define RX_FIFO_THRESH 5 /* Rx buffer level before first PCI xfer. */
+#define RX_DMA_BURST 4 /* Maximum PCI burst, '4' is 256 */
+#define TX_DMA_BURST 6 /* Maximum PCI burst, '6' is 1024 */
+#define TX_EARLY_THRESH 256 /* Early Tx threshold, in bytes */
+
+/* Time in jiffies before concluding the transmitter is hung. */
+#define TX_TIMEOUT (6*HZ)
+
+
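The ring macros above rely on CP_RX_RING_SIZE and CP_TX_RING_SIZE being powers of two: NEXT_TX()/NEXT_RX() wrap by masking, and TX_BUFFS_AVAIL() always leaves one slot unused so that head == tail means an empty ring. A small stand-alone sketch of the same arithmetic, using a hypothetical 8-entry ring (illustrative only, not driver code):

/* Hypothetical check of the ring index arithmetic used by the macros
 * above, with an 8-entry ring instead of 64. */
#include <stdio.h>

#define RING_SIZE	8
#define NEXT(n)		(((n) + 1) & (RING_SIZE - 1))
#define BUFFS_AVAIL(head, tail)				\
	(((tail) <= (head)) ?				\
	 (tail) + (RING_SIZE - 1) - (head) :		\
	 (tail) - (head) - 1)

int main(void)
{
	unsigned head = 0, tail = 0;

	printf("empty ring: %u free\n", BUFFS_AVAIL(head, tail));	/* 7 */
	head = NEXT(head);						/* queue one */
	printf("one queued: %u free\n", BUFFS_AVAIL(head, tail));	/* 6 */
	tail = NEXT(tail);						/* reclaim it */
	printf("reclaimed:  %u free\n", BUFFS_AVAIL(head, tail));	/* 7 */
	return 0;
}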
+enum {
+ /* NIC register offsets */
+ MAC0 = 0x00, /* Ethernet hardware address. */
+ MAR0 = 0x08, /* Multicast filter. */
+ TxRingAddr = 0x20, /* 64-bit start addr of Tx ring */
+ HiTxRingAddr = 0x28, /* 64-bit start addr of high priority Tx ring */
+ Cmd = 0x37, /* Command register */
+ IntrMask = 0x3C, /* Interrupt mask */
+ IntrStatus = 0x3E, /* Interrupt status */
+ TxConfig = 0x40, /* Tx configuration */
+ ChipVersion = 0x43, /* 8-bit chip version, inside TxConfig */
+ RxConfig = 0x44, /* Rx configuration */
+ Cfg9346 = 0x50, /* EEPROM select/control; Cfg reg [un]lock */
+ Config1 = 0x52, /* Config1 */
+ Config3 = 0x59, /* Config3 */
+ Config4 = 0x5A, /* Config4 */
+ MultiIntr = 0x5C, /* Multiple interrupt select */
+ Config5 = 0xD8, /* Config5 */
+ TxPoll = 0xD9, /* Tell chip to check Tx descriptors for work */
+ CpCmd = 0xE0, /* C+ Command register (C+ mode only) */
+ RxRingAddr = 0xE4, /* 64-bit start addr of Rx ring */
+ TxThresh = 0xEC, /* Early Tx threshold */
+ OldRxBufAddr = 0x30, /* DMA address of Rx ring buffer (C mode) */
+ OldTSD0 = 0x10, /* DMA address of first Tx desc (C mode) */
+
+ /* Tx and Rx status descriptors */
+ DescOwn = (1 << 31), /* Descriptor is owned by NIC */
+ RingEnd = (1 << 30), /* End of descriptor ring */
+ FirstFrag = (1 << 29), /* First segment of a packet */
+ LastFrag = (1 << 28), /* Final segment of a packet */
+ TxError = (1 << 23), /* Tx error summary */
+ RxError = (1 << 20), /* Rx error summary */
+ IPCS = (1 << 18), /* Calculate IP checksum */
+ UDPCS = (1 << 17), /* Calculate UDP/IP checksum */
+ TCPCS = (1 << 16), /* Calculate TCP/IP checksum */
+ IPFail = (1 << 15), /* IP checksum failed */
+ UDPFail = (1 << 14), /* UDP/IP checksum failed */
+ TCPFail = (1 << 13), /* TCP/IP checksum failed */
+ NormalTxPoll = (1 << 6), /* One or more normal Tx packets to send */
+ PID1 = (1 << 17), /* 2 protocol id bits: 0==non-IP, */
+ PID0 = (1 << 16), /* 1==UDP/IP, 2==TCP/IP, 3==IP */
+ TxFIFOUnder = (1 << 25), /* Tx FIFO underrun */
+ TxOWC = (1 << 22), /* Tx Out-of-window collision */
+ TxLinkFail = (1 << 21), /* Link failed during Tx of packet */
+ TxMaxCol = (1 << 20), /* Tx aborted due to excessive collisions */
+ TxColCntShift = 16, /* Shift, to get 4-bit Tx collision cnt */
+ TxColCntMask = 0x01 | 0x02 | 0x04 | 0x08, /* 4-bit collision count */
+ RxErrFrame = (1 << 27), /* Rx frame alignment error */
+ RxMcast = (1 << 26), /* Rx multicast packet rcv'd */
+ RxErrCRC = (1 << 18), /* Rx CRC error */
+ RxErrRunt = (1 << 19), /* Rx error, packet < 64 bytes */
+ RxErrLong = (1 << 21), /* Rx error, packet > 4096 bytes */
+ RxErrFIFO = (1 << 22), /* Rx error, FIFO overflowed, pkt bad */
+
+ /* RxConfig register */
+ RxCfgFIFOShift = 13, /* Shift, to get Rx FIFO thresh value */
+ RxCfgDMAShift = 8, /* Shift, to get Rx Max DMA value */
+ AcceptErr = 0x20, /* Accept packets with CRC errors */
+ AcceptRunt = 0x10, /* Accept runt (<64 bytes) packets */
+ AcceptBroadcast = 0x08, /* Accept broadcast packets */
+ AcceptMulticast = 0x04, /* Accept multicast packets */
+ AcceptMyPhys = 0x02, /* Accept pkts with our MAC as dest */
+ AcceptAllPhys = 0x01, /* Accept all pkts w/ physical dest */
+
+ /* IntrMask / IntrStatus registers */
+ PciErr = (1 << 15), /* System error on the PCI bus */
+ TimerIntr = (1 << 14), /* Asserted when TCTR reaches TimerInt value */
+ LenChg = (1 << 13), /* Cable length change */
+ SWInt = (1 << 8), /* Software-requested interrupt */
+ TxEmpty = (1 << 7), /* No Tx descriptors available */
+ RxFIFOOvr = (1 << 6), /* Rx FIFO Overflow */
+ LinkChg = (1 << 5), /* Packet underrun, or link change */
+ RxEmpty = (1 << 4), /* No Rx descriptors available */
+ TxErr = (1 << 3), /* Tx error */
+ TxOK = (1 << 2), /* Tx packet sent */
+ RxErr = (1 << 1), /* Rx error */
+ RxOK = (1 << 0), /* Rx packet received */
+ IntrResvd = (1 << 10), /* reserved, according to RealTek engineers,
+ but hardware likes to raise it */
+
+ IntrAll = PciErr | TimerIntr | LenChg | SWInt | TxEmpty |
+ RxFIFOOvr | LinkChg | RxEmpty | TxErr | TxOK |
+ RxErr | RxOK | IntrResvd,
+
+ /* C mode command register */
+ CmdReset = (1 << 4), /* Enable to reset; self-clearing */
+ RxOn = (1 << 3), /* Rx mode enable */
+ TxOn = (1 << 2), /* Tx mode enable */
+
+ /* C+ mode command register */
+ RxChkSum = (1 << 5), /* Rx checksum offload enable */
+ PCIMulRW = (1 << 3), /* Enable PCI read/write multiple */
+ CpRxOn = (1 << 1), /* Rx mode enable */
+ CpTxOn = (1 << 0), /* Tx mode enable */
+
+ /* Cfg9346 EEPROM control register */
+ Cfg9346_Lock = 0x00, /* Lock ConfigX/MII register access */
+ Cfg9346_Unlock = 0xC0, /* Unlock ConfigX/MII register access */
+
+ /* TxConfig register */
+ IFG = (1 << 25) | (1 << 24), /* standard IEEE interframe gap */
+ TxDMAShift = 8, /* DMA burst value (0-7) is shift this many bits */
+
+ /* Early Tx Threshold register */
+ TxThreshMask = 0x3f, /* Mask bits 5-0 */
+ TxThreshMax = 2048, /* Max early Tx threshold */
+
+ /* Config1 register */
+ DriverLoaded = (1 << 5), /* Software marker, driver is loaded */
+ PMEnable = (1 << 0), /* Enable various PM features of chip */
+
+ /* Config3 register */
+ PARMEnable = (1 << 6), /* Enable auto-loading of PHY parms */
+
+ /* Config5 register */
+ PMEStatus = (1 << 0), /* PME status can be reset by PCI RST# */
+};
+
+static const unsigned int cp_intr_mask =
+ PciErr | LinkChg |
+ RxOK | RxErr | RxEmpty | RxFIFOOvr |
+ TxOK | TxErr | TxEmpty;
+
+static const unsigned int cp_rx_config =
+ (RX_FIFO_THRESH << RxCfgFIFOShift) |
+ (RX_DMA_BURST << RxCfgDMAShift);
+
+struct cp_desc {
+ u32 opts1;
+ u32 opts2;
+ u32 addr_lo;
+ u32 addr_hi;
+};
+
+struct ring_info {
+ struct sk_buff *skb;
+ dma_addr_t mapping;
+ unsigned frag;
+};
+
+struct cp_extra_stats {
+ unsigned long rx_frags;
+};
+
+struct cp_private {
+ unsigned tx_head;
+ unsigned tx_tail;
+ unsigned rx_tail;
+
+ void *regs;
+ struct net_device *dev;
+ spinlock_t lock;
+
+ struct cp_desc *rx_ring;
+ struct cp_desc *tx_ring;
+ struct ring_info tx_skb[CP_TX_RING_SIZE];
+ struct ring_info rx_skb[CP_RX_RING_SIZE];
+ unsigned rx_buf_sz;
+ dma_addr_t ring_dma;
+
+ u32 msg_enable;
+
+ struct net_device_stats net_stats;
+ struct cp_extra_stats cp_stats;
+
+ struct pci_dev *pdev;
+ u32 rx_config;
+
+ struct sk_buff *frag_skb;
+ unsigned dropping_frag : 1;
+};
+
+#define cpr8(reg) readb(cp->regs + (reg))
+#define cpr16(reg) readw(cp->regs + (reg))
+#define cpr32(reg) readl(cp->regs + (reg))
+#define cpw8(reg,val) writeb((val), cp->regs + (reg))
+#define cpw16(reg,val) writew((val), cp->regs + (reg))
+#define cpw32(reg,val) writel((val), cp->regs + (reg))
+#define cpw8_f(reg,val) do { \
+ writeb((val), cp->regs + (reg)); \
+ readb(cp->regs + (reg)); \
+ } while (0)
+#define cpw16_f(reg,val) do { \
+ writew((val), cp->regs + (reg)); \
+ readw(cp->regs + (reg)); \
+ } while (0)
+#define cpw32_f(reg,val) do { \
+ writel((val), cp->regs + (reg)); \
+ readl(cp->regs + (reg)); \
+ } while (0)
+
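The _f accessors above read the register back after writing it: PCI writes may be posted, and a read from the same device forces them to complete before the CPU carries on. A minimal sketch of the same idiom as a helper function (hypothetical, not part of the driver):

/* Hypothetical helper mirroring the cpw16_f() pattern: write a device
 * register, then read it back so the posted PCI write is flushed
 * before we touch the hardware again. */
static inline void mmio_write16_flush(void *ioaddr, int reg, u16 val)
{
	writew(val, ioaddr + reg);
	(void) readw(ioaddr + reg);	/* flush posted write */
}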
+
+static void __cp_set_rx_mode (struct net_device *dev);
+static void cp_tx (struct cp_private *cp);
+static void cp_clean_rings (struct cp_private *cp);
+
+
+static struct pci_device_id cp_pci_tbl[] __devinitdata = {
+ { PCI_VENDOR_ID_REALTEK, PCI_DEVICE_ID_REALTEK_8139,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 },
+ { },
+};
+MODULE_DEVICE_TABLE(pci, cp_pci_tbl);
+
+static inline void cp_rx_skb (struct cp_private *cp, struct sk_buff *skb)
+{
+ skb->protocol = eth_type_trans (skb, cp->dev);
+
+ cp->net_stats.rx_packets++;
+ cp->net_stats.rx_bytes += skb->len;
+ cp->dev->last_rx = jiffies;
+ netif_rx (skb);
+}
+
+static inline void cp_rx_err_acct (struct cp_private *cp, unsigned rx_tail,
+ u32 status, u32 len)
+{
+ if (netif_msg_rx_err (cp))
+ printk (KERN_DEBUG
+ "%s: rx err, slot %d status 0x%x len %d\n",
+ cp->dev->name, rx_tail, status, len);
+ cp->net_stats.rx_errors++;
+ if (status & RxErrFrame)
+ cp->net_stats.rx_frame_errors++;
+ if (status & RxErrCRC)
+ cp->net_stats.rx_crc_errors++;
+ if (status & RxErrRunt)
+ cp->net_stats.rx_length_errors++;
+ if (status & RxErrLong)
+ cp->net_stats.rx_length_errors++;
+ if (status & RxErrFIFO)
+ cp->net_stats.rx_fifo_errors++;
+}
+
+static void cp_rx_frag (struct cp_private *cp, unsigned rx_tail,
+ struct sk_buff *skb, u32 status, u32 len)
+{
+ struct sk_buff *copy_skb, *frag_skb = cp->frag_skb;
+ unsigned orig_len = frag_skb ? frag_skb->len : 0;
+ unsigned target_len = orig_len + len;
+ unsigned first_frag = status & FirstFrag;
+ unsigned last_frag = status & LastFrag;
+
+ if (netif_msg_rx_status (cp))
+ printk (KERN_DEBUG "%s: rx %s%sfrag, slot %d status 0x%x len %d\n",
+ cp->dev->name,
+ cp->dropping_frag ? "dropping " : "",
+ first_frag ? "first " :
+ last_frag ? "last " : "",
+ rx_tail, status, len);
+
+ cp->cp_stats.rx_frags++;
+
+ if (!frag_skb && !first_frag)
+ cp->dropping_frag = 1;
+ if (cp->dropping_frag)
+ goto drop_frag;
+
+ copy_skb = dev_alloc_skb (target_len + RX_OFFSET);
+ if (!copy_skb) {
+ printk(KERN_WARNING "%s: rx slot %d alloc failed\n",
+ cp->dev->name, rx_tail);
+
+ cp->dropping_frag = 1;
+drop_frag:
+ if (frag_skb) {
+ dev_kfree_skb_irq(frag_skb);
+ cp->frag_skb = NULL;
+ }
+ if (last_frag) {
+ cp->net_stats.rx_dropped++;
+ cp->dropping_frag = 0;
+ }
+ return;
+ }
+
+ copy_skb->dev = cp->dev;
+ skb_reserve(copy_skb, RX_OFFSET);
+ skb_put(copy_skb, target_len);
+ if (frag_skb) {
+ memcpy(copy_skb->data, frag_skb->data, orig_len);
+ dev_kfree_skb_irq(frag_skb);
+ }
+ pci_dma_sync_single(cp->pdev, cp->rx_skb[rx_tail].mapping,
+ len, PCI_DMA_FROMDEVICE);
+ memcpy(copy_skb->data + orig_len, skb->data, len);
+
+ copy_skb->ip_summed = CHECKSUM_NONE;
+
+ if (last_frag) {
+ if (status & (RxError | RxErrFIFO)) {
+ cp_rx_err_acct(cp, rx_tail, status, len);
+ dev_kfree_skb_irq(copy_skb);
+ } else
+ cp_rx_skb(cp, copy_skb);
+ cp->frag_skb = NULL;
+ } else {
+ cp->frag_skb = copy_skb;
+ }
+}
+
+static void cp_rx (struct cp_private *cp)
+{
+ unsigned rx_tail = cp->rx_tail;
+ unsigned rx_work = 100;
+
+ while (rx_work--) {
+ u32 status, len;
+ dma_addr_t mapping;
+ struct sk_buff *skb, *copy_skb;
+ unsigned copying_skb, buflen;
+
+ skb = cp->rx_skb[rx_tail].skb;
+ if (!skb)
+ BUG();
+ rmb();
+ status = le32_to_cpu(cp->rx_ring[rx_tail].opts1);
+ if (status & DescOwn)
+ break;
+
+ len = (status & 0x1fff) - 4;
+ mapping = cp->rx_skb[rx_tail].mapping;
+
+ if ((status & (FirstFrag | LastFrag)) != (FirstFrag | LastFrag)) {
+ cp_rx_frag(cp, rx_tail, skb, status, len);
+ goto rx_next;
+ }
+
+ if (status & (RxError | RxErrFIFO)) {
+ cp_rx_err_acct(cp, rx_tail, status, len);
+ goto rx_next;
+ }
+
+ copying_skb = (len <= rx_copybreak);
+
+ if (netif_msg_rx_status(cp))
+ printk(KERN_DEBUG "%s: rx slot %d status 0x%x len %d copying? %d\n",
+ cp->dev->name, rx_tail, status, len,
+ copying_skb);
+
+ buflen = copying_skb ? len : cp->rx_buf_sz;
+ copy_skb = dev_alloc_skb (buflen + RX_OFFSET);
+ if (!copy_skb) {
+ cp->net_stats.rx_dropped++;
+ goto rx_next;
+ }
+
+ skb_reserve(copy_skb, RX_OFFSET);
+ copy_skb->dev = cp->dev;
+
+ if (!copying_skb) {
+ pci_unmap_single(cp->pdev, mapping,
+ buflen, PCI_DMA_FROMDEVICE);
+ skb->ip_summed = CHECKSUM_NONE;
+ skb_trim(skb, len);
+
+ mapping =
+ cp->rx_skb[rx_tail].mapping =
+ pci_map_single(cp->pdev, copy_skb->data,
+ buflen, PCI_DMA_FROMDEVICE);
+ cp->rx_skb[rx_tail].skb = copy_skb;
+ skb_put(copy_skb, buflen);
+ } else {
+ skb_put(copy_skb, len);
+ pci_dma_sync_single(cp->pdev, mapping, len, PCI_DMA_FROMDEVICE);
+ memcpy(copy_skb->data, skb->data, len);
+
+ /* We'll reuse the original ring buffer. */
+ skb = copy_skb;
+ }
+
+ cp_rx_skb(cp, skb);
+
+rx_next:
+ if (rx_tail == (CP_RX_RING_SIZE - 1))
+ cp->rx_ring[rx_tail].opts1 =
+ cpu_to_le32(DescOwn | RingEnd | cp->rx_buf_sz);
+ else
+ cp->rx_ring[rx_tail].opts1 =
+ cpu_to_le32(DescOwn | cp->rx_buf_sz);
+ cp->rx_ring[rx_tail].opts2 = 0;
+ cp->rx_ring[rx_tail].addr_lo = cpu_to_le32(mapping);
+ rx_tail = NEXT_RX(rx_tail);
+ }
+
+ if (!rx_work)
+ printk(KERN_WARNING "%s: rx work limit reached\n", cp->dev->name);
+
+ cp->rx_tail = rx_tail;
+}
+
+static void cp_interrupt (int irq, void *dev_instance, struct pt_regs *regs)
+{
+ struct net_device *dev = dev_instance;
+ struct cp_private *cp = dev->priv;
+ u16 status;
+
+ status = cpr16(IntrStatus);
+ if (!status || (status == 0xFFFF))
+ return;
+
+ if (netif_msg_intr(cp))
+ printk(KERN_DEBUG "%s: intr, status %04x cmd %02x cpcmd %04x\n",
+ dev->name, status, cpr8(Cmd), cpr16(CpCmd));
+
+ spin_lock(&cp->lock);
+
+ if (status & (RxOK | RxErr | RxEmpty | RxFIFOOvr))
+ cp_rx(cp);
+ if (status & (TxOK | TxErr | TxEmpty | SWInt))
+ cp_tx(cp);
+
+ cpw16_f(IntrStatus, status);
+
+ if (status & PciErr) {
+ u16 pci_status;
+
+ pci_read_config_word(cp->pdev, PCI_STATUS, &pci_status);
+ pci_write_config_word(cp->pdev, PCI_STATUS, pci_status);
+ printk(KERN_ERR "%s: PCI bus error, status=%04x, PCI status=%04x\n",
+ dev->name, status, pci_status);
+ }
+
+ spin_unlock(&cp->lock);
+}
+
+static void cp_tx (struct cp_private *cp)
+{
+ unsigned tx_head = cp->tx_head;
+ unsigned tx_tail = cp->tx_tail;
+
+ while (tx_tail != tx_head) {
+ struct sk_buff *skb;
+ u32 status;
+
+ rmb();
+ status = le32_to_cpu(cp->tx_ring[tx_tail].opts1);
+ if (status & DescOwn)
+ break;
+
+ skb = cp->tx_skb[tx_tail].skb;
+ if (!skb)
+ BUG();
+
+ pci_unmap_single(cp->pdev, cp->tx_skb[tx_tail].mapping,
+ skb->len, PCI_DMA_TODEVICE);
+
+ if (status & LastFrag) {
+ if (status & (TxError | TxFIFOUnder)) {
+ if (netif_msg_tx_err(cp))
+ printk(KERN_DEBUG "%s: tx err, status 0x%x\n",
+ cp->dev->name, status);
+ cp->net_stats.tx_errors++;
+ if (status & TxOWC)
+ cp->net_stats.tx_window_errors++;
+ if (status & TxMaxCol)
+ cp->net_stats.tx_aborted_errors++;
+ if (status & TxLinkFail)
+ cp->net_stats.tx_carrier_errors++;
+ if (status & TxFIFOUnder)
+ cp->net_stats.tx_fifo_errors++;
+ } else {
+ cp->net_stats.collisions +=
+ ((status >> TxColCntShift) & TxColCntMask);
+ cp->net_stats.tx_packets++;
+ cp->net_stats.tx_bytes += skb->len;
+ if (netif_msg_tx_done(cp))
+ printk(KERN_DEBUG "%s: tx done, slot %d\n", cp->dev->name, tx_tail);
+ }
+ dev_kfree_skb_irq(skb);
+ }
+
+ cp->tx_skb[tx_tail].skb = NULL;
+
+ tx_tail = NEXT_TX(tx_tail);
+ }
+
+ cp->tx_tail = tx_tail;
+
+ if (netif_queue_stopped(cp->dev) && (TX_BUFFS_AVAIL(cp) > 1))
+ netif_wake_queue(cp->dev);
+}
+
+static int cp_start_xmit (struct sk_buff *skb, struct net_device *dev)
+{
+ struct cp_private *cp = dev->priv;
+ unsigned entry;
+ u32 eor;
+
+ spin_lock_irq(&cp->lock);
+
+ if (TX_BUFFS_AVAIL(cp) <= (skb_shinfo(skb)->nr_frags + 1)) {
+ netif_stop_queue(dev);
+ spin_unlock_irq(&cp->lock);
+ return 1;
+ }
+
+ entry = cp->tx_head;
+ eor = (entry == (CP_TX_RING_SIZE - 1)) ? RingEnd : 0;
+ if (skb_shinfo(skb)->nr_frags == 0) {
+ struct cp_desc *txd = &cp->tx_ring[entry];
+ u32 mapping, len;
+
+ len = skb->len;
+ mapping = pci_map_single(cp->pdev, skb->data, len, PCI_DMA_TODEVICE);
+ eor = (entry == (CP_TX_RING_SIZE - 1)) ? RingEnd : 0;
+#ifdef CP_TX_CHECKSUM
+ txd->opts1 = cpu_to_le32(eor | len | DescOwn | FirstFrag |
+ LastFrag | IPCS | UDPCS | TCPCS);
+#else
+ txd->opts1 = cpu_to_le32(eor | len | DescOwn | FirstFrag |
+ LastFrag);
+#endif
+ txd->opts2 = 0;
+ txd->addr_lo = cpu_to_le32(mapping);
+
+ cp->tx_skb[entry].skb = skb;
+ cp->tx_skb[entry].mapping = mapping;
+ cp->tx_skb[entry].frag = 0;
+ wmb();
+ entry = NEXT_TX(entry);
+ } else {
+ struct cp_desc *txd;
+ u32 first_len, first_mapping;
+ int frag, first_entry = entry;
+
+ /* We must give this initial chunk to the device last.
+ * Otherwise we could race with the device.
+ */
+ first_len = skb->len - skb->data_len;
+ first_mapping = pci_map_single(cp->pdev, skb->data,
+ first_len, PCI_DMA_TODEVICE);
+ cp->tx_skb[entry].skb = skb;
+ cp->tx_skb[entry].mapping = first_mapping;
+ cp->tx_skb[entry].frag = 1;
+ entry = NEXT_TX(entry);
+
+ for (frag = 0; frag < skb_shinfo(skb)->nr_frags; frag++) {
+ skb_frag_t *this_frag = &skb_shinfo(skb)->frags[frag];
+ u32 len, mapping;
+ u32 ctrl;
+
+ len = this_frag->size;
+ mapping = pci_map_single(cp->pdev,
+ ((void *) page_address(this_frag->page) +
+ this_frag->page_offset),
+ len, PCI_DMA_TODEVICE);
+ eor = (entry == (CP_TX_RING_SIZE - 1)) ? RingEnd : 0;
+#ifdef CP_TX_CHECKSUM
+ ctrl = eor | len | DescOwn | IPCS | UDPCS | TCPCS;
+#else
+ ctrl = eor | len | DescOwn;
+#endif
+ if (frag == skb_shinfo(skb)->nr_frags - 1)
+ ctrl |= LastFrag;
+
+ txd = &cp->tx_ring[entry];
+ txd->opts1 = cpu_to_le32(ctrl);
+ txd->opts2 = 0;
+ txd->addr_lo = cpu_to_le32(mapping);
+
+ cp->tx_skb[entry].skb = skb;
+ cp->tx_skb[entry].mapping = mapping;
+ cp->tx_skb[entry].frag = frag + 2;
+ wmb();
+ entry = NEXT_TX(entry);
+ }
+ txd = &cp->tx_ring[first_entry];
+#ifdef CP_TX_CHECKSUM
+ txd->opts1 = cpu_to_le32(first_len | FirstFrag | DescOwn | IPCS | UDPCS | TCPCS);
+#else
+ txd->opts1 = cpu_to_le32(first_len | FirstFrag | DescOwn);
+#endif
+ txd->opts2 = 0;
+ txd->addr_lo = cpu_to_le32(first_mapping);
+ wmb();
+ }
+ cp->tx_head = entry;
+ if (netif_msg_tx_queued(cp))
+ printk(KERN_DEBUG "%s: tx queued, slot %d, skblen %d\n",
+ dev->name, entry, skb->len);
+ if (TX_BUFFS_AVAIL(cp) < 0)
+ BUG();
+ if (TX_BUFFS_AVAIL(cp) == 0)
+ netif_stop_queue(dev);
+
+ spin_unlock_irq(&cp->lock);
+
+ cpw8(TxPoll, NormalTxPoll);
+ dev->trans_start = jiffies;
+
+ return 0;
+}
+
+/* Set or clear the multicast filter for this adaptor.
+ This routine is not state sensitive and need not be SMP locked. */
+
+static unsigned const ethernet_polynomial = 0x04c11db7U;
+static inline u32 ether_crc (int length, unsigned char *data)
+{
+ int crc = -1;
+
+ while (--length >= 0) {
+ unsigned char current_octet = *data++;
+ int bit;
+ for (bit = 0; bit < 8; bit++, current_octet >>= 1)
+ crc = (crc << 1) ^ ((crc < 0) ^ (current_octet & 1) ?
+ ethernet_polynomial : 0);
+ }
+
+ return crc;
+}
+
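__cp_set_rx_mode() below folds each multicast address through ether_crc() and uses the top six bits of the CRC to pick one of the 64 hash-filter bits held in the MAR registers. A small sketch of that mapping (hypothetical helper, byte-order handling omitted):

/* Hypothetical illustration of the multicast hash used below: the
 * upper 6 CRC bits select a bit in the 64-bit MAR filter, which is
 * kept as two 32-bit words. */
static void demo_hash_one_addr(u32 mc_filter[2], unsigned char *addr)
{
	int bit_nr = ether_crc(6 /* ETH_ALEN */, addr) >> 26;

	mc_filter[bit_nr >> 5] |= 1 << (bit_nr & 31);
}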
+static void __cp_set_rx_mode (struct net_device *dev)
+{
+ struct cp_private *cp = dev->priv;
+ u32 mc_filter[2]; /* Multicast hash filter */
+ int i, rx_mode;
+ u32 tmp;
+
+ /* Note: do not reorder, GCC is clever about common statements. */
+ if (dev->flags & IFF_PROMISC) {
+ /* Unconditionally log net taps. */
+ printk (KERN_NOTICE "%s: Promiscuous mode enabled.\n",
+ dev->name);
+ rx_mode =
+ AcceptBroadcast | AcceptMulticast | AcceptMyPhys |
+ AcceptAllPhys;
+ mc_filter[1] = mc_filter[0] = 0xffffffff;
+ } else if ((dev->mc_count > multicast_filter_limit)
+ || (dev->flags & IFF_ALLMULTI)) {
+ /* Too many to filter perfectly -- accept all multicasts. */
+ rx_mode = AcceptBroadcast | AcceptMulticast | AcceptMyPhys;
+ mc_filter[1] = mc_filter[0] = 0xffffffff;
+ } else {
+ struct dev_mc_list *mclist;
+ rx_mode = AcceptBroadcast | AcceptMyPhys;
+ mc_filter[1] = mc_filter[0] = 0;
+ for (i = 0, mclist = dev->mc_list; mclist && i < dev->mc_count;
+ i++, mclist = mclist->next) {
+ int bit_nr = ether_crc(ETH_ALEN, mclist->dmi_addr) >> 26;
+
+ mc_filter[bit_nr >> 5] |= cpu_to_le32(1 << (bit_nr & 31));
+ rx_mode |= AcceptMulticast;
+ }
+ }
+
+ /* We can safely update without stopping the chip. */
+ tmp = cp_rx_config | rx_mode;
+ if (cp->rx_config != tmp) {
+ cpw32_f (RxConfig, tmp);
+ cp->rx_config = tmp;
+ }
+ cpw32_f (MAR0 + 0, mc_filter[0]);
+ cpw32_f (MAR0 + 4, mc_filter[1]);
+}
+
+static void cp_set_rx_mode (struct net_device *dev)
+{
+ unsigned long flags;
+ struct cp_private *cp = dev->priv;
+
+ spin_lock_irqsave (&cp->lock, flags);
+ __cp_set_rx_mode(dev);
+ spin_unlock_irqrestore (&cp->lock, flags);
+}
+
+static void __cp_get_stats(struct cp_private *cp)
+{
+ /* XXX implement */
+}
+
+static struct net_device_stats *cp_get_stats(struct net_device *dev)
+{
+ struct cp_private *cp = dev->priv;
+
+ /* The chip only needs to report frames it silently dropped. */
+ spin_lock_irq(&cp->lock);
+ if (netif_running(dev) && netif_device_present(dev))
+ __cp_get_stats(cp);
+ spin_unlock_irq(&cp->lock);
+
+ return &cp->net_stats;
+}
+
+static void cp_stop_hw (struct cp_private *cp)
+{
+ cpw16(IntrMask, 0);
+ cpr16(IntrMask);
+ cpw8(Cmd, 0);
+ cpw16(CpCmd, 0);
+ cpr16(CpCmd);
+ cpw16(IntrStatus, ~(cpr16(IntrStatus)));
+ synchronize_irq();
+ udelay(10);
+
+ cp->rx_tail = 0;
+ cp->tx_head = cp->tx_tail = 0;
+}
+
+static void cp_reset_hw (struct cp_private *cp)
+{
+ unsigned work = 1000;
+
+ cpw8(Cmd, CmdReset);
+
+ while (work--) {
+ if (!(cpr8(Cmd) & CmdReset))
+ return;
+
+ set_current_state(TASK_UNINTERRUPTIBLE);
+ schedule_timeout(10);
+ }
+
+ printk(KERN_ERR "%s: hardware reset timeout\n", cp->dev->name);
+}
+
+static void cp_init_hw (struct cp_private *cp)
+{
+ struct net_device *dev = cp->dev;
+
+ cp_reset_hw(cp);
+
+ cpw8_f (Cfg9346, Cfg9346_Unlock);
+
+ /* Restore our idea of the MAC address. */
+ cpw32_f (MAC0 + 0, cpu_to_le32 (*(u32 *) (dev->dev_addr + 0)));
+ cpw32_f (MAC0 + 4, cpu_to_le32 (*(u32 *) (dev->dev_addr + 4)));
+
+ cpw8(Cmd, RxOn | TxOn);
+ cpw16(CpCmd, PCIMulRW | CpRxOn | CpTxOn);
+ cpw8(TxThresh, 0x06); /* XXX convert magic num to a constant */
+
+ __cp_set_rx_mode(dev);
+ cpw32_f (TxConfig, IFG | (TX_DMA_BURST << TxDMAShift));
+
+ cpw8(Config1, cpr8(Config1) | DriverLoaded | PMEnable);
+ cpw8(Config3, PARMEnable); /* disables magic packet and WOL */
+ cpw8(Config5, cpr8(Config5) & PMEStatus); /* disables more WOL stuff */
+
+ cpw32_f(HiTxRingAddr, 0);
+ cpw32_f(HiTxRingAddr + 4, 0);
+ cpw32_f(OldRxBufAddr, 0);
+ cpw32_f(OldTSD0, 0);
+ cpw32_f(OldTSD0 + 4, 0);
+ cpw32_f(OldTSD0 + 8, 0);
+ cpw32_f(OldTSD0 + 12, 0);
+
+ cpw32_f(RxRingAddr, cp->ring_dma);
+ cpw32_f(RxRingAddr + 4, 0);
+ cpw32_f(TxRingAddr, cp->ring_dma + (sizeof(struct cp_desc) * CP_RX_RING_SIZE));
+ cpw32_f(TxRingAddr + 4, 0);
+
+ cpw16(MultiIntr, 0);
+
+ cpw16(IntrMask, cp_intr_mask);
+
+ cpw8_f (Cfg9346, Cfg9346_Lock);
+}
+
+static int cp_refill_rx (struct cp_private *cp)
+{
+ unsigned i;
+
+ for (i = 0; i < CP_RX_RING_SIZE; i++) {
+ struct sk_buff *skb;
+
+ skb = dev_alloc_skb(cp->rx_buf_sz + RX_OFFSET);
+ if (!skb)
+ goto err_out;
+
+ skb->dev = cp->dev;
+ skb_reserve(skb, RX_OFFSET);
+ skb_put(skb, cp->rx_buf_sz);
+
+ cp->rx_skb[i].mapping = pci_map_single(cp->pdev,
+ skb->data, cp->rx_buf_sz, PCI_DMA_FROMDEVICE);
+ cp->rx_skb[i].skb = skb;
+ cp->rx_skb[i].frag = 0;
+
+ if (i == (CP_RX_RING_SIZE - 1))
+ cp->rx_ring[i].opts1 =
+ cpu_to_le32(DescOwn | RingEnd | cp->rx_buf_sz);
+ else
+ cp->rx_ring[i].opts1 =
+ cpu_to_le32(DescOwn | cp->rx_buf_sz);
+ cp->rx_ring[i].opts2 = 0;
+ cp->rx_ring[i].addr_lo = cpu_to_le32(cp->rx_skb[i].mapping);
+ cp->rx_ring[i].addr_hi = 0;
+ }
+
+ return 0;
+
+err_out:
+ cp_clean_rings(cp);
+ return -ENOMEM;
+}
+
+static int cp_init_rings (struct cp_private *cp)
+{
+ memset(cp->tx_ring, 0, sizeof(struct cp_desc) * CP_TX_RING_SIZE);
+ cp->tx_ring[CP_TX_RING_SIZE - 1].opts1 = cpu_to_le32(RingEnd);
+
+ cp->rx_tail = 0;
+ cp->tx_head = cp->tx_tail = 0;
+
+ return cp_refill_rx (cp);
+}
+
+static int cp_alloc_rings (struct cp_private *cp)
+{
+ cp->rx_ring = pci_alloc_consistent(cp->pdev, CP_RING_BYTES, &cp->ring_dma);
+ if (!cp->rx_ring)
+ return -ENOMEM;
+ cp->tx_ring = &cp->rx_ring[CP_RX_RING_SIZE];
+ return cp_init_rings(cp);
+}
+
+static void cp_clean_rings (struct cp_private *cp)
+{
+ unsigned i;
+
+ memset(cp->rx_ring, 0, sizeof(struct cp_desc) * CP_RX_RING_SIZE);
+ memset(cp->tx_ring, 0, sizeof(struct cp_desc) * CP_TX_RING_SIZE);
+
+ for (i = 0; i < CP_RX_RING_SIZE; i++) {
+ if (cp->rx_skb[i].skb) {
+ pci_unmap_single(cp->pdev, cp->rx_skb[i].mapping,
+ cp->rx_buf_sz, PCI_DMA_FROMDEVICE);
+ dev_kfree_skb(cp->rx_skb[i].skb);
+ }
+ }
+
+ for (i = 0; i < CP_TX_RING_SIZE; i++) {
+ if (cp->tx_skb[i].skb) {
+ struct sk_buff *skb = cp->tx_skb[i].skb;
+ pci_unmap_single(cp->pdev, cp->tx_skb[i].mapping,
+ skb->len, PCI_DMA_TODEVICE);
+ dev_kfree_skb(skb);
+ cp->net_stats.tx_dropped++;
+ }
+ }
+
+ memset(&cp->rx_skb, 0, sizeof(struct ring_info) * CP_RX_RING_SIZE);
+ memset(&cp->tx_skb, 0, sizeof(struct ring_info) * CP_TX_RING_SIZE);
+}
+
+static void cp_free_rings (struct cp_private *cp)
+{
+ cp_clean_rings(cp);
+ pci_free_consistent(cp->pdev, CP_RING_BYTES, cp->rx_ring, cp->ring_dma);
+ cp->rx_ring = NULL;
+ cp->tx_ring = NULL;
+}
+
+static int cp_open (struct net_device *dev)
+{
+ struct cp_private *cp = dev->priv;
+ int rc;
+
+ if (netif_msg_ifup(cp))
+ printk(KERN_DEBUG "%s: enabling interface\n", dev->name);
+
+ cp->rx_buf_sz = (dev->mtu <= 1500 ? PKT_BUF_SZ : dev->mtu + 32);
+
+ rc = cp_alloc_rings(cp);
+ if (rc)
+ return rc;
+
+ cp_init_hw(cp);
+
+ rc = request_irq(dev->irq, cp_interrupt, SA_SHIRQ, dev->name, dev);
+ if (rc)
+ goto err_out_hw;
+
+ netif_start_queue(dev);
+
+ return 0;
+
+err_out_hw:
+ cp_stop_hw(cp);
+ cp_free_rings(cp);
+ return rc;
+}
+
+static int cp_close (struct net_device *dev)
+{
+ struct cp_private *cp = dev->priv;
+
+ if (netif_msg_ifdown(cp))
+ printk(KERN_DEBUG "%s: disabling interface\n", dev->name);
+
+ netif_stop_queue(dev);
+ cp_stop_hw(cp);
+ free_irq(dev->irq, dev);
+ cp_free_rings(cp);
+ return 0;
+}
+
+static int cp_ethtool_ioctl (struct cp_private *cp, void *useraddr)
+{
+ u32 ethcmd;
+
+ /* dev_ioctl() in ../../net/core/dev.c has already checked
+ capable(CAP_NET_ADMIN), so don't bother with that here. */
+
+ if (copy_from_user (&ethcmd, useraddr, sizeof (ethcmd)))
+ return -EFAULT;
+
+ switch (ethcmd) {
+
+ case ETHTOOL_GDRVINFO:
+ {
+ struct ethtool_drvinfo info = { ETHTOOL_GDRVINFO };
+ strcpy (info.driver, DRV_NAME);
+ strcpy (info.version, DRV_VERSION);
+ strcpy (info.bus_info, cp->pdev->slot_name);
+ if (copy_to_user (useraddr, &info, sizeof (info)))
+ return -EFAULT;
+ return 0;
+ }
+
+ default:
+ break;
+ }
+
+ return -EOPNOTSUPP;
+}
+
+
+static int cp_ioctl (struct net_device *dev, struct ifreq *rq, int cmd)
+{
+ struct cp_private *cp = dev->priv;
+ int rc = 0;
+
+ switch (cmd) {
+ case SIOCETHTOOL:
+ return cp_ethtool_ioctl(cp, (void *) rq->ifr_data);
+
+ default:
+ rc = -EOPNOTSUPP;
+ break;
+ }
+
+ return rc;
+}
+
+
+
+/* Serial EEPROM section. */
+
+/* EEPROM_Ctrl bits. */
+#define EE_SHIFT_CLK 0x04 /* EEPROM shift clock. */
+#define EE_CS 0x08 /* EEPROM chip select. */
+#define EE_DATA_WRITE 0x02 /* EEPROM chip data in. */
+#define EE_WRITE_0 0x00
+#define EE_WRITE_1 0x02
+#define EE_DATA_READ 0x01 /* EEPROM chip data out. */
+#define EE_ENB (0x80 | EE_CS)
+
+/* Delay between EEPROM clock transitions.
+ No extra delay is needed with 33MHz PCI, but 66MHz may change this.
+ */
+
+#define eeprom_delay() readl(ee_addr)
+
+/* The EEPROM commands include the always-set leading bit. */
+#define EE_WRITE_CMD (5)
+#define EE_READ_CMD (6)
+#define EE_ERASE_CMD (7)
+
+static int __devinit read_eeprom (void *ioaddr, int location, int addr_len)
+{
+ int i;
+ unsigned retval = 0;
+ void *ee_addr = ioaddr + Cfg9346;
+ int read_cmd = location | (EE_READ_CMD << addr_len);
+
+ writeb (EE_ENB & ~EE_CS, ee_addr);
+ writeb (EE_ENB, ee_addr);
+ eeprom_delay ();
+
+ /* Shift the read command bits out. */
+ for (i = 4 + addr_len; i >= 0; i--) {
+ int dataval = (read_cmd & (1 << i)) ? EE_DATA_WRITE : 0;
+ writeb (EE_ENB | dataval, ee_addr);
+ eeprom_delay ();
+ writeb (EE_ENB | dataval | EE_SHIFT_CLK, ee_addr);
+ eeprom_delay ();
+ }
+ writeb (EE_ENB, ee_addr);
+ eeprom_delay ();
+
+ for (i = 16; i > 0; i--) {
+ writeb (EE_ENB | EE_SHIFT_CLK, ee_addr);
+ eeprom_delay ();
+ retval =
+ (retval << 1) | ((readb (ee_addr) & EE_DATA_READ) ? 1 :
+ 0);
+ writeb (EE_ENB, ee_addr);
+ eeprom_delay ();
+ }
+
+ /* Terminate the EEPROM access. */
+ writeb (~EE_CS, ee_addr);
+ eeprom_delay ();
+
+ return retval;
+}
+
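For reference, the command framing that read_eeprom() above shifts out can be checked in isolation; assuming the 6-bit addressing of 93C46-style parts (addr_len == 6), reading word 7 clocks out the 11-bit value 0x187, MSB first. A purely illustrative user-space sketch:

/* Hypothetical stand-alone check of the EEPROM read-command framing:
 * reading word 7 with a 6-bit address gives read_cmd = 0x187, clocked
 * out MSB-first over 4 + addr_len + 1 = 11 clock cycles. */
#include <stdio.h>

#define EE_READ_CMD	6

int main(void)
{
	int addr_len = 6, location = 7;
	int read_cmd = location | (EE_READ_CMD << addr_len);
	int i;

	printf("read_cmd = 0x%x, bits out:", read_cmd);
	for (i = 4 + addr_len; i >= 0; i--)
		printf(" %d", (read_cmd >> i) & 1);
	printf("\n");
	return 0;
}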
+static int __devinit cp_init_one (struct pci_dev *pdev,
+ const struct pci_device_id *ent)
+{
+ struct net_device *dev;
+ struct cp_private *cp;
+ int rc;
+ void *regs;
+ long pciaddr;
+ unsigned addr_len, i;
+ u8 pci_rev, cache_size;
+ u16 pci_command;
+
+#ifndef MODULE
+ static int version_printed;
+ if (version_printed++ == 0)
+ printk("%s", version);
+#endif
+
+ pci_read_config_byte(pdev, PCI_REVISION_ID, &pci_rev);
+
+ if (pdev->vendor == PCI_VENDOR_ID_REALTEK &&
+ pdev->device == PCI_DEVICE_ID_REALTEK_8139 && pci_rev < 0x20) {
+ printk(KERN_ERR PFX "pci dev %s (id %04x:%04x rev %02x) is not an 8139C+ compatible chip\n",
+ pdev->slot_name, pdev->vendor, pdev->device, pci_rev);
+ printk(KERN_ERR PFX "Try the \"8139too\" driver instead.\n");
+ return -ENODEV;
+ }
+
+ dev = alloc_etherdev(sizeof(struct cp_private));
+ if (!dev)
+ return -ENOMEM;
+ SET_MODULE_OWNER(dev);
+ cp = dev->priv;
+ cp->pdev = pdev;
+ cp->dev = dev;
+ cp->msg_enable = (debug < 0 ? CP_DEF_MSG_ENABLE : debug);
+ spin_lock_init (&cp->lock);
+
+ rc = pci_enable_device(pdev);
+ if (rc)
+ goto err_out_free;
+
+ rc = pci_request_regions(pdev, DRV_NAME);
+ if (rc)
+ goto err_out_disable;
+
+ if (pdev->irq < 2) {
+ rc = -EIO;
+ printk(KERN_ERR PFX "invalid irq (%d) for pci dev %s\n",
+ pdev->irq, pdev->slot_name);
+ goto err_out_res;
+ }
+ pciaddr = pci_resource_start(pdev, 1);
+ if (!pciaddr) {
+ rc = -EIO;
+ printk(KERN_ERR PFX "no MMIO resource for pci dev %s\n",
+ pdev->slot_name);
+ goto err_out_res;
+ }
+ if (pci_resource_len(pdev, 1) < CP_REGS_SIZE) {
+ rc = -EIO;
+ printk(KERN_ERR PFX "MMIO resource (%lx) too small on pci dev %s\n",
+ pci_resource_len(pdev, 1), pdev->slot_name);
+ goto err_out_res;
+ }
+
+ regs = ioremap_nocache(pciaddr, CP_REGS_SIZE);
+ if (!regs) {
+ rc = -EIO;
+ printk(KERN_ERR PFX "Cannot map PCI MMIO (%lx@%lx) on pci dev %s\n",
+ pci_resource_len(pdev, 1), pciaddr, pdev->slot_name);
+ goto err_out_res;
+ }
+ dev->base_addr = (unsigned long) regs;
+ cp->regs = regs;
+
+ cp_stop_hw(cp);
+
+ /* read MAC address from EEPROM */
+ addr_len = read_eeprom (regs, 0, 8) == 0x8129 ? 8 : 6;
+ for (i = 0; i < 3; i++)
+ ((u16 *) (dev->dev_addr))[i] =
+ le16_to_cpu (read_eeprom (regs, i + 7, addr_len));
+
+ dev->open = cp_open;
+ dev->stop = cp_close;
+ dev->set_multicast_list = cp_set_rx_mode;
+ dev->hard_start_xmit = cp_start_xmit;
+ dev->get_stats = cp_get_stats;
+ dev->do_ioctl = cp_ioctl;
+#if 0
+ dev->tx_timeout = cp_tx_timeout;
+ dev->watchdog_timeo = TX_TIMEOUT;
+#endif
+#ifdef CP_TX_CHECKSUM
+ dev->features |= NETIF_F_SG | NETIF_F_IP_CSUM;
+#endif
+
+ dev->irq = pdev->irq;
+
+ rc = register_netdev(dev);
+ if (rc)
+ goto err_out_iomap;
+
+ printk (KERN_INFO "%s: %s at 0x%lx, "
+ "%02x:%02x:%02x:%02x:%02x:%02x, "
+ "IRQ %d\n",
+ dev->name,
+ "RTL-8139C+",
+ dev->base_addr,
+ dev->dev_addr[0], dev->dev_addr[1],
+ dev->dev_addr[2], dev->dev_addr[3],
+ dev->dev_addr[4], dev->dev_addr[5],
+ dev->irq);
+
+ pci_set_drvdata(pdev, dev);
+
+ /*
+ * Looks like we need to deal with this on all architectures; even this
+ * %$#%$# N440BX Intel based thing doesn't get it right. I.e. with two
+ * NICs in the machine, one will have the cache line size set at boot
+ * time, the other will not.
+ */
+ pci_read_config_byte(pdev, PCI_CACHE_LINE_SIZE, &cache_size);
+ cache_size <<= 2;
+ if (cache_size != SMP_CACHE_BYTES) {
+ printk(KERN_INFO "%s: PCI cache line size set incorrectly "
+ "(%i bytes) by BIOS/FW, ", dev->name, cache_size);
+ if (cache_size > SMP_CACHE_BYTES)
+ printk("expecting %i\n", SMP_CACHE_BYTES);
+ else {
+ printk("correcting to %i\n", SMP_CACHE_BYTES);
+ pci_write_config_byte(pdev, PCI_CACHE_LINE_SIZE,
+ SMP_CACHE_BYTES >> 2);
+ }
+ }
+
+ /* enable busmastering and memory-write-invalidate */
+ pci_read_config_word(pdev, PCI_COMMAND, &pci_command);
+ if (!(pci_command & PCI_COMMAND_INVALIDATE)) {
+ pci_command |= PCI_COMMAND_INVALIDATE;
+ pci_write_config_word(pdev, PCI_COMMAND, pci_command);
+ }
+ pci_set_master(pdev);
+
+ return 0;
+
+err_out_iomap:
+ iounmap(regs);
+err_out_res:
+ pci_release_regions(pdev);
+err_out_disable:
+ pci_disable_device(pdev);
+err_out_free:
+ kfree(dev);
+ return rc;
+}
+
+static void __devexit cp_remove_one (struct pci_dev *pdev)
+{
+ struct net_device *dev = pci_get_drvdata(pdev);
+ struct cp_private *cp = dev->priv;
+
+ if (!dev)
+ BUG();
+ unregister_netdev(dev);
+ iounmap(cp->regs);
+ pci_release_regions(pdev);
+ pci_disable_device(pdev);
+ pci_set_drvdata(pdev, NULL);
+ kfree(dev);
+}
+
+static struct pci_driver cp_driver = {
+ name: DRV_NAME,
+ id_table: cp_pci_tbl,
+ probe: cp_init_one,
+ remove: cp_remove_one,
+};
+
+static int __init cp_init (void)
+{
+#ifdef MODULE
+ printk("%s", version);
+#endif
+ return pci_module_init (&cp_driver);
+}
+
+static void __exit cp_exit (void)
+{
+ pci_unregister_driver (&cp_driver);
+}
+
+module_init(cp_init);
+module_exit(cp_exit);
features of the 8139 chips
Jean-Jacques Michel - bug fix
-
+
Tobias Ringström - Rx interrupt status checking suggestion
Andrew Morton - Clear blocked signals, avoid
*/
#define DRV_NAME "8139too"
-#define DRV_VERSION "0.9.19"
+#define DRV_VERSION "0.9.20"
#include <linux/config.h>
DELTA8139,
ADDTRON8139,
DFE538TX,
+ DFE690TXD,
RTL8129,
} board_t;
{ "Delta Electronics 8139 10/100BaseTX", RTL8139_CAPS },
{ "Addtron Technolgy 8139 10/100BaseTX", RTL8139_CAPS },
{ "D-Link DFE-538TX (RealTek RTL8139)", RTL8139_CAPS },
+ { "D-Link DFE-690TXD (RealTek RTL8139)", RTL8139_CAPS },
{ "RealTek RTL8129", RTL8129_CAPS },
};
{0x1500, 0x1360, PCI_ANY_ID, PCI_ANY_ID, 0, 0, DELTA8139 },
{0x4033, 0x1360, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ADDTRON8139 },
{0x1186, 0x1300, PCI_ANY_ID, PCI_ANY_ID, 0, 0, DFE538TX },
+ {0x1186, 0x1340, PCI_ANY_ID, PCI_ANY_ID, 0, 0, DFE690TXD },
#ifdef CONFIG_8139TOO_8129
{0x10ec, 0x8129, PCI_ANY_ID, PCI_ANY_ID, 0, 0, RTL8129 },
int i, addr_len, option;
void *ioaddr;
static int board_idx = -1;
+ u8 pci_rev;
DPRINTK ("ENTER\n");
}
#endif
+ pci_read_config_byte(pdev, PCI_REVISION_ID, &pci_rev);
+
+ if (pdev->vendor == PCI_VENDOR_ID_REALTEK &&
+ pdev->device == PCI_DEVICE_ID_REALTEK_8139 && pci_rev >= 0x20) {
+ printk(KERN_INFO PFX "pci dev %s (id %04x:%04x rev %02x) is an enhanced 8139C+ chip\n",
+ pdev->slot_name, pdev->vendor, pdev->device, pci_rev);
+ printk(KERN_INFO PFX "Use the \"8139cp\" driver for improved performance and stability.\n");
+ }
+
i = rtl8139_init_board (pdev, &dev);
if (i < 0) {
DPRINTK ("EXIT, returning %d\n", i);
struct rtl8139_private *tp = dev->priv;
DPRINTK("ENTER\n");
-
+
if (tp->phys[0] >= 0) {
u16 mii_reg5 = mdio_read(dev, tp->phys[0], 5);
if (mii_reg5 == 0xffff)
RTL_W32 (TxConfig, (TX_DMA_BURST << TxDMAShift));
tp->cur_rx = 0;
-
+
rtl_check_media (dev);
if (tp->chipset >= CH_8139B) {
{
u8 tmp8;
int tmp_work;
-
+
DPRINTK ("%s: Ethernet frame had errors, status %8.8x.\n",
dev->name, rx_status);
if (rx_status & RxTooLong) {
struct sk_buff *skb;
rmb();
-
+
/* read size+status of next frame from DMA ring buffer */
rx_status = le32_to_cpu (*(u32 *) (rx_ring + ring_offset));
rx_size = rx_status >> 16;
}
/* TODO: ETHTOOL_SSET */
-
+
case ETHTOOL_GDRVINFO:
{
struct ethtool_drvinfo info = { ETHTOOL_GDRVINFO };
dep_tristate ' PCI NE2000 and clones support (see help)' CONFIG_NE2K_PCI $CONFIG_PCI
dep_tristate ' Novell/Eagle/Microdyne NE3210 EISA support (EXPERIMENTAL)' CONFIG_NE3210 $CONFIG_EISA $CONFIG_EXPERIMENTAL
dep_tristate ' Racal-Interlan EISA ES3210 support (EXPERIMENTAL)' CONFIG_ES3210 $CONFIG_EISA $CONFIG_EXPERIMENTAL
+ dep_tristate ' RealTek RTL-8139 C+ PCI Fast Ethernet Adapter support (EXPERIMENTAL)' CONFIG_8139CP $CONFIG_PCI $CONFIG_EXPERIMENTAL
dep_tristate ' RealTek RTL-8139 PCI Fast Ethernet Adapter support' CONFIG_8139TOO $CONFIG_PCI
dep_mbool ' Use PIO instead of MMIO' CONFIG_8139TOO_PIO $CONFIG_8139TOO
dep_mbool ' Support for automatic channel equalization (EXPERIMENTAL)' CONFIG_8139TOO_TUNE_TWISTER $CONFIG_8139TOO $CONFIG_EXPERIMENTAL
obj-$(CONFIG_3C515) += 3c515.o
obj-$(CONFIG_EEXPRESS) += eexpress.o
obj-$(CONFIG_EEXPRESS_PRO) += eepro.o
+obj-$(CONFIG_8139CP) += 8139cp.o
obj-$(CONFIG_8139TOO) += 8139too.o
obj-$(CONFIG_WAVELAN) += wavelan.o
obj-$(CONFIG_ARLAN) += arlan.o arlan-proc.o
module_init(myri_sbus_probe);
module_exit(myri_sbus_cleanup);
+MODULE_LICENSE("GPL");
static struct pci_device_id pcnet32_pci_tbl[] __devinitdata = {
{ PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_LANCE_HOME, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 },
{ PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_LANCE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 },
+/* this id is never reached as the match above occurs first.
+ * However it clearly has significance, so let's not remove it
+ * until we know what that significance is. -jgarzik
+ */
+#if 0
{ PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_LANCE, 0x1014, 0x2000, 0, 0, 0 },
+#endif
{ 0, }
};
module_init(bigmac_probe);
module_exit(bigmac_cleanup);
+MODULE_LICENSE("GPL");
/* The user-configurable values.
These may be modified when a driver module is loaded.*/
-
static int debug = 1; /* 1 normal messages, 0 quiet .. 7 verbose. */
/* Maximum events (Rx packets, etc.) to handle at each interrupt. */
-static int max_interrupt_work = 20;
+static int max_interrupt_work = 30;
static int mtu;
/* Maximum number of multicast addresses to filter (vs. rx-all-multicast).
Typical is a 64 element hash table based on the Ethernet CRC. */
need a copy-align. */
static int rx_copybreak;
-/* Used to pass the media type, etc.
- Both 'options[]' and 'full_duplex[]' should exist for driver
- interoperability.
- The media type is usually passed in 'options[]'.
+/* media[] specifies the media type the NIC operates at.
+ autosense Autosensing active media.
+ 10mbps_hd 10Mbps half duplex.
+ 10mbps_fd 10Mbps full duplex.
+ 100mbps_hd 100Mbps half duplex.
+ 100mbps_fd 100Mbps full duplex.
+ 0 Autosensing active media.
+ 1 10Mbps half duplex.
+ 2 10Mbps full duplex.
+ 3 100Mbps half duplex.
+ 4 100Mbps full duplex.
*/
-#define MAX_UNITS 8 /* More are supported, limit only on options */
-static int options[MAX_UNITS] = {-1, -1, -1, -1, -1, -1, -1, -1};
-static int full_duplex[MAX_UNITS] = {-1, -1, -1, -1, -1, -1, -1, -1};
-
+#define MAX_UNITS 8
+static char *media[MAX_UNITS];
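+/* Illustrative usage only (not part of this patch): loading the module
+ * with e.g. "modprobe sundance media=100mbps_fd,autosense" would force
+ * the first board to 100Mbps full duplex and leave the second (and any
+ * board without a media[] entry) on autonegotiation.
+ */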
/* Operational parameters that are set at compile time. */
/* Keep the ring sizes a power of two for compile efficiency.
/* Operational parameters that usually are not changed. */
/* Time in jiffies before concluding the transmitter is hung. */
-#define TX_TIMEOUT (2*HZ)
+#define TX_TIMEOUT (4*HZ)
#define PKT_BUF_SZ 1536 /* Size of each temporary Rx buffer.*/
#include <asm/processor.h> /* Processor type for cache alignment. */
#include <asm/bitops.h>
#include <asm/io.h>
-
+#include <linux/delay.h>
#include <linux/spinlock.h>
/* These identify the driver base version and may not be removed. */
MODULE_PARM(mtu, "i");
MODULE_PARM(debug, "i");
MODULE_PARM(rx_copybreak, "i");
-MODULE_PARM(options, "1-" __MODULE_STRING(MAX_UNITS) "i");
-MODULE_PARM(full_duplex, "1-" __MODULE_STRING(MAX_UNITS) "i");
+MODULE_PARM(media, "1-" __MODULE_STRING(MAX_UNITS) "s");
MODULE_PARM_DESC(max_interrupt_work, "Sundance Alta maximum events handled per interrupt");
MODULE_PARM_DESC(mtu, "Sundance Alta MTU (all boards)");
MODULE_PARM_DESC(debug, "Sundance Alta debug level (0-5)");
MODULE_PARM_DESC(rx_copybreak, "Sundance Alta copy breakpoint for copy-only-tiny-frames");
-MODULE_PARM_DESC(options, "Sundance Alta: Bits 0-3: media type, bit 17: full duplex");
-MODULE_PARM_DESC(full_duplex, "Sundance Alta full duplex setting(s) (1)");
-
/*
Theory of Operation
#endif
static struct pci_device_id sundance_pci_tbl[] __devinitdata = {
- { 0x1186, 0x1002, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 },
- { 0x13F0, 0x0201, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 1 },
- { 0, }
+ {0x1186, 0x1002, 0x1186, 0x1002, 0, 0, 0},
+ {0x1186, 0x1002, 0x1186, 0x1003, 0, 0, 1},
+ {0x1186, 0x1002, 0x1186, 0x1012, 0, 0, 2},
+ {0x1186, 0x1002, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 3},
+ {0x13F0, 0x0201, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 4},
+ {0,}
};
MODULE_DEVICE_TABLE(pci, sundance_pci_tbl);
int drv_flags; /* Driver use, intended as capability flags. */
};
static struct pci_id_info pci_id_tbl[] = {
- {"OEM Sundance Technology ST201", {0x10021186, 0xffffffff, },
+ {"D-Link DFE-550TX FAST Ethernet Adapter", {0x10021186, 0xffffffff,},
PCI_IOTYPE, 128, CanHaveMII},
- {"Sundance Technology Alta", {0x020113F0, 0xffffffff, },
+ {"D-Link DFE-550FX 100Mbps Fiber-optics Adapter",
+ {0x10031186, 0xffffffff,},
PCI_IOTYPE, 128, CanHaveMII},
- {0,}, /* 0 terminated list. */
+ {"D-Link DFE-580TX 4 port Server Adapter", {0x10121186, 0xffffffff,},
+ PCI_IOTYPE, 128, CanHaveMII},
+ {"D-Link DL10050-based FAST Ethernet Adapter",
+ {0x10021186, 0xffffffff,},
+ PCI_IOTYPE, 128, CanHaveMII},
+ {"Sundance Technology Alta", {0x020113F0, 0xffffffff,},
+ PCI_IOTYPE, 128, CanHaveMII},
+ {0,}, /* 0 terminated list. */
};
/* This driver was written to use PCI memory space, however x86-oriented
unsigned int tx_full:1; /* The Tx queue is full. */
/* These values are keep track of the transceiver/media in use. */
unsigned int full_duplex:1; /* Full-duplex operation requested. */
- unsigned int duplex_lock:1;
unsigned int medialock:1; /* Do not sense media. */
unsigned int default_port:4; /* Last dev->if_port value. */
+ unsigned int an_enable:1;
+ unsigned int speed;
/* Multicast and receive mode. */
spinlock_t mcastlock; /* SMP lock multicast updates. */
u16 mcast_filter[4];
static int card_idx;
int chip_idx = ent->driver_data;
int irq;
- int i, option = card_idx < MAX_UNITS ? options[card_idx] : 0;
+ int i;
long ioaddr;
+ u16 mii_reg0;
void *ring_space;
dma_addr_t ring_dma;
np->rx_ring = (struct netdev_desc *)ring_space;
np->rx_ring_dma = ring_dma;
- if (dev->mem_start)
- option = dev->mem_start;
-
- /* The lower four bits are the media type. */
- if (option > 0) {
- if (option & 0x200)
- np->full_duplex = 1;
- np->default_port = option & 15;
- if (np->default_port)
- np->medialock = 1;
- }
- if (card_idx < MAX_UNITS && full_duplex[card_idx] > 0)
- np->full_duplex = 1;
-
- if (np->full_duplex)
- np->duplex_lock = 1;
-
/* The chip-specific entries in the device structure. */
dev->open = &netdev_open;
dev->hard_start_xmit = &start_tx;
printk(KERN_INFO "%s: No MII transceiver found!, ASIC status %x\n",
dev->name, readl(ioaddr + ASICCtrl));
}
+ /* Parse override configuration */
+ np->an_enable = 1;
+ if (card_idx < MAX_UNITS) {
+ if (media[card_idx] != NULL) {
+ np->an_enable = 0;
+ if (strcmp (media[card_idx], "100mbps_fd") == 0 ||
+ strcmp (media[card_idx], "4") == 0) {
+ np->speed = 100;
+ np->full_duplex = 1;
+ } else if (strcmp (media[card_idx], "100mbps_hd") == 0
+ || strcmp (media[card_idx], "3") == 0) {
+ np->speed = 100;
+ np->full_duplex = 0;
+ } else if (strcmp (media[card_idx], "10mbps_fd") == 0 ||
+ strcmp (media[card_idx], "2") == 0) {
+ np->speed = 10;
+ np->full_duplex = 1;
+ } else if (strcmp (media[card_idx], "10mbps_hd") == 0 ||
+ strcmp (media[card_idx], "1") == 0) {
+ np->speed = 10;
+ np->full_duplex = 0;
+ } else {
+ np->an_enable = 1;
+ }
+ }
+ }
+
+ /* Fibre PHY? */
+ if (readl (ioaddr + ASICCtrl) & 0x80) {
+ /* Default 100Mbps Full */
+ if (np->an_enable) {
+ np->speed = 100;
+ np->full_duplex = 1;
+ np->an_enable = 0;
+ }
+ }
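+	/* MII register 0 (BMCR) bit values used below: 0x8000 resets the
+	 * PHY, 0x1200 enables and restarts autonegotiation, and 0x2000 /
+	 * 0x0100 select 100Mbps / full duplex when the link is forced.
+	 */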
+ /* Reset PHY */
+ mdio_write (dev, np->phys[0], 0, 0x8000);
+ mdelay (300);
+ mdio_write (dev, np->phys[0], 0, 0x1200);
+ /* Force media type */
+ if (!np->an_enable) {
+ mii_reg0 = 0;
+ mii_reg0 |= (np->speed == 100) ? 0x2000 : 0;
+ mii_reg0 |= (np->full_duplex) ? 0x0100 : 0;
+ mdio_write (dev, np->phys[0], 0, mii_reg0);
+ printk (KERN_INFO "Override speed=%d, %s duplex\n",
+ np->speed, np->full_duplex ? "Full" : "Half");
+
+ }
/* Perhaps move the reset here? */
/* Reset the chip to erase previous misconfiguration. */
if (dev->if_port == 0)
dev->if_port = np->default_port;
- np->full_duplex = np->duplex_lock;
np->mcastlock = (spinlock_t) SPIN_LOCK_UNLOCKED;
set_rx_mode(dev);
int mii_reg5 = mdio_read(dev, np->phys[0], 5);
int negotiated = mii_reg5 & np->advertising;
int duplex;
-
- if (np->duplex_lock || mii_reg5 == 0xffff)
+
+ /* Force media */
+ if (!np->an_enable || mii_reg5 == 0xffff) {
+ if (np->full_duplex)
+ writew (readw (ioaddr + MACCtrl0) | EnbFullDuplex,
+ ioaddr + MACCtrl0);
return;
+ }
+ /* Autonegotiation */
duplex = (negotiated & 0x0100) || (negotiated & 0x01C0) == 0x0040;
if (np->full_duplex != duplex) {
np->full_duplex = duplex;
/* Abnormal error summary/uncommon events handlers. */
if (intr_status & (IntrDrvRqst | IntrPCIErr | LinkChange | StatsMax))
netdev_error(dev, intr_status);
-
if (--boguscnt < 0) {
get_stats(dev);
- printk(KERN_WARNING "%s: Too much work at interrupt, "
+ if (debug > 1)
+ printk(KERN_WARNING "%s: Too much work at interrupt, "
"status=0x%4.4x / 0x%4.4x.\n",
dev->name, intr_status, readw(ioaddr + IntrClear));
/* Re-enable us in 3.2msec. */
{
long ioaddr = dev->base_addr;
struct netdev_private *np = dev->priv;
+ u16 mii_reg0, mii_reg4, mii_reg5;
+ int speed;
if (intr_status & IntrDrvRqst) {
/* Stop the down counter and turn interrupts back on. */
- printk("%s: Turning interrupts back on.\n", dev->name);
+ if (debug > 1)
+ printk("%s: Turning interrupts back on.\n", dev->name);
writew(0, ioaddr + IntrEnable);
writew(0, ioaddr + DownCounter);
writew(IntrRxDone | IntrRxDMADone | IntrPCIErr | IntrDrvRqst |
IntrTxDone | StatsMax | LinkChange, ioaddr + IntrEnable);
+ /* Ack buggy InRequest */
+ writew (IntrDrvRqst, ioaddr + IntrStatus);
}
if (intr_status & LinkChange) {
- printk(KERN_ERR "%s: Link changed: Autonegotiation advertising"
- " %4.4x partner %4.4x.\n", dev->name,
- mdio_read(dev, np->phys[0], 4),
- mdio_read(dev, np->phys[0], 5));
- check_duplex(dev);
+ if (np->an_enable) {
+ mii_reg4 = mdio_read (dev, np->phys[0], 4);
+			mii_reg5 = mdio_read (dev, np->phys[0], 5);
+ mii_reg4 &= mii_reg5;
+ printk (KERN_INFO "%s: Link changed: ", dev->name);
+ if (mii_reg4 & 0x0100)
+ printk ("100Mbps, full duplex\n");
+ else if (mii_reg4 & 0x0080)
+ printk ("100Mbps, half duplex\n");
+ else if (mii_reg4 & 0x0040)
+ printk ("10Mbps, full duplex\n");
+ else if (mii_reg4 & 0x0020)
+ printk ("10Mbps, half duplex\n");
+ else
+ printk ("\n");
+
+ } else {
+ mii_reg0 = mdio_read (dev, np->phys[0], 0);
+ speed = (mii_reg0 & 0x2000) ? 100 : 10;
+			printk (KERN_INFO "%s: Link changed: %dMbps, ",
+ dev->name, speed);
+ printk ("%s duplex.\n", (mii_reg0 & 0x0100) ?
+ "full" : "half");
+ }
+ check_duplex (dev);
}
if (intr_status & StatsMax) {
get_stats(dev);
-/* $Id: sungem.c,v 1.22 2001/10/09 02:24:33 davem Exp $
+/* $Id: sungem.c,v 1.30 2001/10/17 06:55:10 davem Exp $
* sungem.c: Sun GEM ethernet driver.
*
* Copyright (C) 2000, 2001 David S. Miller (davem@redhat.com)
#include <asm/pbm.h>
#endif
-#ifdef __powerpc__
+#ifdef CONFIG_ALL_PPC
#include <asm/pci-bridge.h>
#include <asm/prom.h>
+#include <asm/machdep.h>
+#include <asm/pmac_feature.h>
#endif
#include "sungem.h"
static char version[] __devinitdata =
- "sungem.c:v0.75 21/Mar/01 David S. Miller (davem@redhat.com)\n";
+ "sungem.c:v0.95 16/Oct/01 David S. Miller (davem@redhat.com)\n";
MODULE_AUTHOR("David S. Miller (davem@redhat.com)");
MODULE_DESCRIPTION("Sun GEM Gbit ethernet driver");
/* These models only differ from the original GEM in
* that their tx/rx fifos are of a different size and
* they only support 10/100 speeds. -DaveM
+ *
+ * Apple's GMAC does support gigabit on machines with
+ * the BCM5400 or 5401 PHYs. -BenH
*/
{ PCI_VENDOR_ID_SUN, PCI_DEVICE_ID_SUN_RIO_GEM,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
MODULE_DEVICE_TABLE(pci, gem_pci_tbl);
-static u16 phy_read(struct gem *gp, int reg)
+static u16 __phy_read(struct gem *gp, int reg, int phy_addr)
{
u32 cmd;
int limit = 10000;
cmd = (1 << 30);
cmd |= (2 << 28);
- cmd |= (gp->mii_phy_addr << 23) & MIF_FRAME_PHYAD;
+ cmd |= (phy_addr << 23) & MIF_FRAME_PHYAD;
cmd |= (reg << 18) & MIF_FRAME_REGAD;
cmd |= (MIF_FRAME_TAMSB);
writel(cmd, gp->regs + MIF_FRAME);
return cmd & MIF_FRAME_DATA;
}
-static void phy_write(struct gem *gp, int reg, u16 val)
+static inline u16 phy_read(struct gem *gp, int reg)
+{
+ return __phy_read(gp, reg, gp->mii_phy_addr);
+}
+
+static void __phy_write(struct gem *gp, int reg, u16 val, int phy_addr)
{
u32 cmd;
int limit = 10000;
cmd = (1 << 30);
cmd |= (1 << 28);
- cmd |= (gp->mii_phy_addr << 23) & MIF_FRAME_PHYAD;
+ cmd |= (phy_addr << 23) & MIF_FRAME_PHYAD;
cmd |= (reg << 18) & MIF_FRAME_REGAD;
cmd |= (MIF_FRAME_TAMSB);
cmd |= (val & MIF_FRAME_DATA);
}
}
+static inline void phy_write(struct gem *gp, int reg, u16 val)
+{
+ __phy_write(gp, reg, val, gp->mii_phy_addr);
+}
+
static void gem_handle_mif_event(struct gem *gp, u32 reg_val, u32 changed_bits)
{
}
int limit;
u32 val;
+ /* Make sure we won't get any more interrupts */
+ writel(0xffffffff, gp->regs + GREG_IMASK);
+
+ /* Reset the chip */
writel(GREG_SWRST_TXRST | GREG_SWRST_RXRST, regs + GREG_SWRST);
limit = STOP_TRIES;
printk(KERN_ERR "gem: SW reset is ghetto.\n");
}
+static void gem_start_dma(struct gem *gp)
+{
+ unsigned long val;
+
+ /* We are ready to rock, turn everything on. */
+ val = readl(gp->regs + TXDMA_CFG);
+ writel(val | TXDMA_CFG_ENABLE, gp->regs + TXDMA_CFG);
+ val = readl(gp->regs + RXDMA_CFG);
+ writel(val | RXDMA_CFG_ENABLE, gp->regs + RXDMA_CFG);
+ val = readl(gp->regs + MAC_TXCFG);
+ writel(val | MAC_TXCFG_ENAB, gp->regs + MAC_TXCFG);
+ val = readl(gp->regs + MAC_RXCFG);
+ writel(val | MAC_RXCFG_ENAB, gp->regs + MAC_RXCFG);
+
+ writel(GREG_STAT_TXDONE, gp->regs + GREG_IMASK);
+
+ writel(RX_RING_SIZE - 4, gp->regs + RXDMA_KICK);
+
+}
+
+/* Link modes of the BCM5400 PHY */
+static int phy_BCM5400_link_table[8][3] = {
+ { 0, 0, 0 }, /* No link */
+ { 0, 0, 0 }, /* 10BT Half Duplex */
+ { 1, 0, 0 }, /* 10BT Full Duplex */
+ { 0, 1, 0 }, /* 100BT Half Duplex */
+ { 0, 1, 0 }, /* 100BT Half Duplex */
+ { 1, 1, 0 }, /* 100BT Full Duplex*/
+ { 1, 0, 1 }, /* 1000BT */
+ { 1, 0, 1 }, /* 1000BT */
+};
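+/* Each row is { full_duplex, 100Mbps, gigabit }, indexed by the link-mode
+ * field of the BCM5400 auxiliary status register; gem_set_link_modes()
+ * uses it to derive the negotiated speed and duplex.
+ */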
+
/* A link-up condition has occurred, initialize and enable the
* rest of the chip.
*/
static void gem_set_link_modes(struct gem *gp)
{
u32 val;
- int full_duplex, speed;
+ int full_duplex, speed, pause;
full_duplex = 0;
speed = 10;
+ pause = 0;
+
if (gp->phy_type == phy_mii_mdio0 ||
gp->phy_type == phy_mii_mdio1) {
if (gp->lstate == aneg_wait) {
- val = phy_read(gp, PHY_LPA);
- if (val & (PHY_LPA_10FULL | PHY_LPA_100FULL))
- full_duplex = 1;
- if (val & (PHY_LPA_100FULL | PHY_LPA_100HALF))
- speed = 100;
+ if (gp->phy_mod == phymod_bcm5400 ||
+ gp->phy_mod == phymod_bcm5401 ||
+ gp->phy_mod == phymod_bcm5411) {
+ int link_mode;
+ val = phy_read(gp, PHY_BCM5400_AUXSTATUS);
+ link_mode = (val & PHY_BCM5400_AUXSTATUS_LINKMODE_MASK) >>
+ PHY_BCM5400_AUXSTATUS_LINKMODE_SHIFT;
+ full_duplex = phy_BCM5400_link_table[link_mode][0];
+ speed = phy_BCM5400_link_table[link_mode][2] ? 1000
+ : (phy_BCM5400_link_table[link_mode][1] ? 100 : 10);
+ val = phy_read(gp, PHY_LPA);
+ if (val & PHY_LPA_PAUSE)
+ pause = 1;
+ } else {
+ val = phy_read(gp, PHY_LPA);
+ if (val & (PHY_LPA_10FULL | PHY_LPA_100FULL))
+ full_duplex = 1;
+ if (val & (PHY_LPA_100FULL | PHY_LPA_100HALF))
+ speed = 100;
+ }
} else {
val = phy_read(gp, PHY_CTRL);
if (val & PHY_CTRL_FDPLX)
if (gp->phy_type == phy_serialink ||
gp->phy_type == phy_serdes) {
- u32 pcs_lpa = readl(gp->regs + PCS_MIILP);
+ u32 pcs_lpa = readl(gp->regs + PCS_MIILP);
- val = readl(gp->regs + MAC_MCCFG);
if (pcs_lpa & (PCS_MIIADV_SP | PCS_MIIADV_AP))
- val |= (MAC_MCCFG_SPE | MAC_MCCFG_RPE);
- else
- val &= ~(MAC_MCCFG_SPE | MAC_MCCFG_RPE);
- writel(val, gp->regs + MAC_MCCFG);
+ pause = 1;
+ }
- if (!full_duplex)
- writel(512, gp->regs + MAC_STIME);
- else
- writel(64, gp->regs + MAC_STIME);
- } else {
- /* Set slot-time of 64. */
+ if (!full_duplex)
+ writel(512, gp->regs + MAC_STIME);
+ else
writel(64, gp->regs + MAC_STIME);
- }
+ val = readl(gp->regs + MAC_MCCFG);
+ if (pause)
+ val |= (MAC_MCCFG_SPE | MAC_MCCFG_RPE);
+ else
+ val &= ~(MAC_MCCFG_SPE | MAC_MCCFG_RPE);
+ writel(val, gp->regs + MAC_MCCFG);
- /* We are ready to rock, turn everything on. */
- val = readl(gp->regs + TXDMA_CFG);
- writel(val | TXDMA_CFG_ENABLE, gp->regs + TXDMA_CFG);
- val = readl(gp->regs + RXDMA_CFG);
- writel(val | RXDMA_CFG_ENABLE, gp->regs + RXDMA_CFG);
- val = readl(gp->regs + MAC_TXCFG);
- writel(val | MAC_TXCFG_ENAB, gp->regs + MAC_TXCFG);
- val = readl(gp->regs + MAC_RXCFG);
- writel(val | MAC_RXCFG_ENAB, gp->regs + MAC_RXCFG);
+ gem_start_dma(gp);
}
static int gem_mdio_link_not_up(struct gem *gp)
}
}
+static int
+gem_reset_one_mii_phy(struct gem *gp, int phy_addr)
+{
+ u16 val;
+ int limit = 10000;
+
+ val = __phy_read(gp, PHY_CTRL, phy_addr);
+ val &= ~PHY_CTRL_ISO;
+ val |= PHY_CTRL_RST;
+ __phy_write(gp, PHY_CTRL, val, phy_addr);
+
+ udelay(100);
+
+ while (limit--) {
+ val = __phy_read(gp, PHY_CTRL, phy_addr);
+ if ((val & PHY_CTRL_RST) == 0)
+ break;
+ udelay(10);
+ }
+ if ((val & PHY_CTRL_ISO) && limit > 0)
+ __phy_write(gp, PHY_CTRL, val & ~PHY_CTRL_ISO, phy_addr);
+
+ return (limit <= 0);
+}
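+/* Used both for the main PHY (see gem_init_phy() below) and, on the
+ * BCM5400/5401, for the cascaded 10/100 PHY at MII address 0x1f.
+ */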
+
+static void
+gem_init_bcm5400_phy(struct gem *gp)
+{
+ u16 data;
+
+ /* Configure for gigabit full duplex */
+ data = phy_read(gp, PHY_BCM5400_AUXCONTROL);
+ data |= PHY_BCM5400_AUXCONTROL_PWR10BASET;
+ phy_write(gp, PHY_BCM5400_AUXCONTROL, data);
+
+ data = phy_read(gp, PHY_BCM5400_GB_CONTROL);
+ data |= PHY_BCM5400_GB_CONTROL_FULLDUPLEXCAP;
+ phy_write(gp, PHY_BCM5400_GB_CONTROL, data);
+
+ mdelay(10);
+
+ /* Reset and configure cascaded 10/100 PHY */
+ gem_reset_one_mii_phy(gp, 0x1f);
+
+ data = __phy_read(gp, PHY_BCM5201_MULTIPHY, 0x1f);
+ data |= PHY_BCM5201_MULTIPHY_SERIALMODE;
+ __phy_write(gp, PHY_BCM5201_MULTIPHY, data, 0x1f);
+
+ data = phy_read(gp, PHY_BCM5400_AUXCONTROL);
+ data &= ~PHY_BCM5400_AUXCONTROL_PWR10BASET;
+ phy_write(gp, PHY_BCM5400_AUXCONTROL, data);
+}
+
+static void
+gem_init_bcm5401_phy(struct gem *gp)
+{
+ u16 data;
+ int rev;
+
+ rev = phy_read(gp, PHY_ID1) & 0x000f;
+ if (rev == 0 || rev == 3) {
+		/* Some revisions of the 5401 appear to need this
+		 * initialisation sequence to disable what OF calls
+		 * "tap power management".
+		 *
+		 * WARNING! OF and Darwin don't agree on the
+		 * register addresses. OF seems to interpret the
+		 * register numbers below as decimal.
+ */
+ phy_write(gp, 0x18, 0x0c20);
+ phy_write(gp, 0x17, 0x0012);
+ phy_write(gp, 0x15, 0x1804);
+ phy_write(gp, 0x17, 0x0013);
+ phy_write(gp, 0x15, 0x1204);
+ phy_write(gp, 0x17, 0x8006);
+ phy_write(gp, 0x15, 0x0132);
+ phy_write(gp, 0x17, 0x8006);
+ phy_write(gp, 0x15, 0x0232);
+ phy_write(gp, 0x17, 0x201f);
+ phy_write(gp, 0x15, 0x0a20);
+ }
+
+ /* Configure for gigabit full duplex */
+ data = phy_read(gp, PHY_BCM5400_GB_CONTROL);
+ data |= PHY_BCM5400_GB_CONTROL_FULLDUPLEXCAP;
+ phy_write(gp, PHY_BCM5400_GB_CONTROL, data);
+
+ mdelay(1);
+
+ /* Reset and configure cascaded 10/100 PHY */
+ gem_reset_one_mii_phy(gp, 0x1f);
+
+ data = __phy_read(gp, PHY_BCM5201_MULTIPHY, 0x1f);
+ data |= PHY_BCM5201_MULTIPHY_SERIALMODE;
+ __phy_write(gp, PHY_BCM5201_MULTIPHY, data, 0x1f);
+}
+
+static void
+gem_init_bcm5411_phy(struct gem *gp)
+{
+ u16 data;
+
+	/* Here's some more Apple black magic to set up
+	 * some voltage-related settings.
+ */
+ phy_write(gp, 0x1c, 0x8c23);
+ phy_write(gp, 0x1c, 0x8ca3);
+ phy_write(gp, 0x1c, 0x8c23);
+
+	/* Here, Apple seems to want to reset it; do
+	 * it as well.
+ */
+ phy_write(gp, PHY_CTRL, PHY_CTRL_RST);
+
+ /* Start autoneg */
+ phy_write(gp, PHY_CTRL,
+ (PHY_CTRL_ANENAB | PHY_CTRL_FDPLX |
+ PHY_CTRL_ANRES | PHY_CTRL_SPD2));
+
+ data = phy_read(gp, PHY_BCM5400_GB_CONTROL);
+ data |= PHY_BCM5400_GB_CONTROL_FULLDUPLEXCAP;
+ phy_write(gp, PHY_BCM5400_GB_CONTROL, data);
+}
+
static void gem_init_phy(struct gem *gp)
{
+#ifdef CONFIG_ALL_PPC
+ if (gp->pdev->vendor == PCI_VENDOR_ID_APPLE) {
+ int i;
+
+ pmac_call_feature(PMAC_FTR_GMAC_PHY_RESET, gp->of_node, 0, 0);
+ for (i = 0; i < 32; i++) {
+ gp->mii_phy_addr = i;
+ if (phy_read(gp, PHY_CTRL) != 0xffff)
+ break;
+ }
+ if (i == 32) {
+			printk(KERN_WARNING "%s: GMAC PHY not responding!\n",
+ gp->dev->name);
+ return;
+ }
+ }
+#endif /* CONFIG_ALL_PPC */
+
if (gp->pdev->vendor == PCI_VENDOR_ID_SUN &&
gp->pdev->device == PCI_DEVICE_ID_SUN_GEM) {
u32 val;
if (gp->phy_type == phy_mii_mdio0 ||
gp->phy_type == phy_mii_mdio1) {
- u16 val = phy_read(gp, PHY_CTRL);
- int limit = 10000;
-
+ u32 phy_id;
+ u16 val;
+
		/* Take PHY out of isolate mode and reset it. */
- val &= ~PHY_CTRL_ISO;
- val |= PHY_CTRL_RST;
- phy_write(gp, PHY_CTRL, val);
-
- while (limit--) {
- val = phy_read(gp, PHY_CTRL);
- if ((val & PHY_CTRL_RST) == 0)
+ gem_reset_one_mii_phy(gp, gp->mii_phy_addr);
+
+ phy_id = (phy_read(gp, PHY_ID0) << 16 | phy_read(gp, PHY_ID1))
+ & 0xfffffff0;
+ printk(KERN_INFO "%s: MII PHY ID: %x ", gp->dev->name, phy_id);
+ switch(phy_id) {
+ case 0x406210:
+ gp->phy_mod = phymod_bcm5201;
+ printk("BCM 5201\n");
break;
- udelay(10);
- }
+ case 0x4061e0:
+ printk("BCM 5221\n");
+ gp->phy_mod = phymod_bcm5221;
+ break;
+ case 0x206040:
+ printk("BCM 5400\n");
+ gp->phy_mod = phymod_bcm5400;
+ gem_init_bcm5400_phy(gp);
+ break;
+ case 0x206050:
+ printk("BCM 5401\n");
+ gp->phy_mod = phymod_bcm5401;
+ gem_init_bcm5401_phy(gp);
+ break;
+ case 0x206070:
+ printk("BCM 5411\n");
+ gp->phy_mod = phymod_bcm5411;
+ gem_init_bcm5411_phy(gp);
+ break;
+ default:
+ printk("Generic\n");
+ gp->phy_mod = phymod_generic;
+ };
/* Init advertisement and enable autonegotiation. */
+ val = phy_read(gp, PHY_CTRL);
+ val &= ~PHY_CTRL_ANENAB;
+ phy_write(gp, PHY_CTRL, val);
+ udelay(10);
+
phy_write(gp, PHY_ADV,
+ phy_read(gp, PHY_ADV) |
(PHY_ADV_10HALF | PHY_ADV_10FULL |
PHY_ADV_100HALF | PHY_ADV_100FULL));
- val |= (PHY_CTRL_ANRES | PHY_CTRL_ANENAB);
+ val = phy_read(gp, PHY_CTRL);
+ val |= PHY_CTRL_ANENAB;
+ phy_write(gp, PHY_CTRL, val);
+ val |= PHY_CTRL_ANRES;
phy_write(gp, PHY_CTRL, val);
} else {
u32 val;
writel(0, gp->regs + MAC_MCMASK);
}
+static void
+gem_init_pause_thresholds(struct gem* gp)
+{
+ /* Calculate pause thresholds. Setting the OFF threshold to the
+ * full RX fifo size effectively disables PAUSE generation which
+ * is what we do for 10/100 only GEMs which have FIFOs too small
+ * to make real gains from PAUSE.
+ */
+ if (gp->rx_fifo_sz <= (2 * 1024)) {
+ gp->rx_pause_off = gp->rx_pause_on = gp->rx_fifo_sz;
+ } else {
+ int off = (gp->rx_fifo_sz - (5 * 1024));
+ int on = off - 1024;
+
+ gp->rx_pause_off = off;
+ gp->rx_pause_on = on;
+ }
+
+ {
+ u32 cfg = readl(gp->regs + GREG_BIFCFG);
+
+ /* XXX Why do I do this? -DaveM XXX */
+ cfg |= GREG_BIFCFG_B64DIS;
+ writel(cfg, gp->regs + GREG_BIFCFG);
+
+ cfg = GREG_CFG_IBURST;
+ cfg |= ((31 << 1) & GREG_CFG_TXDMALIM);
+ cfg |= ((31 << 6) & GREG_CFG_RXDMALIM);
+ writel(cfg, gp->regs + GREG_CFG);
+ }
+}
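+/* Worked example implied by the FIFO sizes checked in gem_check_invariants():
+ * a full GEM with a 20KB RX FIFO gets rx_pause_off = 20480 - 5120 = 15360
+ * and rx_pause_on = 15360 - 1024 = 14336, while the 2KB RIO GEM pins both
+ * thresholds at 2048, which disables PAUSE generation entirely.
+ */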
+
+static int gem_check_invariants(struct gem *gp)
+{
+ struct pci_dev *pdev = gp->pdev;
+ u32 mif_cfg;
+
+	/* On Apple's sungem, we can't rely on registers as the chip
+	 * has been powered down by the firmware. We do the PHY lookup
+ * when the interface is opened and we configure the driver
+ * with known values.
+ */
+ if (pdev->vendor == PCI_VENDOR_ID_APPLE) {
+ gp->phy_type = phy_mii_mdio0;
+ mif_cfg = readl(gp->regs + MIF_CFG);
+ mif_cfg &= ~MIF_CFG_PSELECT;
+ writel(mif_cfg, gp->regs + MIF_CFG);
+ writel(PCS_DMODE_MGM, gp->regs + PCS_DMODE);
+ writel(MAC_XIFCFG_OE, gp->regs + MAC_XIFCFG);
+ gp->tx_fifo_sz = readl(gp->regs + TXDMA_FSZ) * 64;
+ gp->rx_fifo_sz = readl(gp->regs + RXDMA_FSZ) * 64;
+ gem_init_pause_thresholds(gp);
+ return 0;
+ }
+
+ mif_cfg = readl(gp->regs + MIF_CFG);
+
+ if (pdev->vendor == PCI_VENDOR_ID_SUN &&
+ pdev->device == PCI_DEVICE_ID_SUN_RIO_GEM) {
+ /* One of the MII PHYs _must_ be present
+ * as this chip has no gigabit PHY.
+ */
+ if ((mif_cfg & (MIF_CFG_MDI0 | MIF_CFG_MDI1)) == 0) {
+ printk(KERN_ERR PFX "RIO GEM lacks MII phy, mif_cfg[%08x]\n",
+ mif_cfg);
+ return -1;
+ }
+ }
+
+ /* Determine initial PHY interface type guess. MDIO1 is the
+ * external PHY and thus takes precedence over MDIO0.
+ */
+
+ if (mif_cfg & MIF_CFG_MDI1) {
+ gp->phy_type = phy_mii_mdio1;
+ mif_cfg |= MIF_CFG_PSELECT;
+ writel(mif_cfg, gp->regs + MIF_CFG);
+ } else if (mif_cfg & MIF_CFG_MDI0) {
+ gp->phy_type = phy_mii_mdio0;
+ mif_cfg &= ~MIF_CFG_PSELECT;
+ writel(mif_cfg, gp->regs + MIF_CFG);
+ } else {
+ gp->phy_type = phy_serialink;
+ }
+ if (gp->phy_type == phy_mii_mdio1 ||
+ gp->phy_type == phy_mii_mdio0) {
+ int i;
+
+ for (i = 0; i < 32; i++) {
+ gp->mii_phy_addr = i;
+ if (phy_read(gp, PHY_CTRL) != 0xffff)
+ break;
+ }
+ if (i == 32) {
+ if (pdev->device != PCI_DEVICE_ID_SUN_GEM) {
+ printk(KERN_ERR PFX "RIO MII phy will not respond.\n");
+ return -1;
+ }
+ gp->phy_type = phy_serdes;
+ }
+ }
+
+ /* Fetch the FIFO configurations now too. */
+ gp->tx_fifo_sz = readl(gp->regs + TXDMA_FSZ) * 64;
+ gp->rx_fifo_sz = readl(gp->regs + RXDMA_FSZ) * 64;
+
+ if (pdev->vendor == PCI_VENDOR_ID_SUN) {
+ if (pdev->device == PCI_DEVICE_ID_SUN_GEM) {
+ if (gp->tx_fifo_sz != (9 * 1024) ||
+ gp->rx_fifo_sz != (20 * 1024)) {
+ printk(KERN_ERR PFX "GEM has bogus fifo sizes tx(%d) rx(%d)\n",
+ gp->tx_fifo_sz, gp->rx_fifo_sz);
+ return -1;
+ }
+ } else {
+ if (gp->tx_fifo_sz != (2 * 1024) ||
+ gp->rx_fifo_sz != (2 * 1024)) {
+ printk(KERN_ERR PFX "RIO GEM has bogus fifo sizes tx(%d) rx(%d)\n",
+ gp->tx_fifo_sz, gp->rx_fifo_sz);
+ return -1;
+ }
+ }
+ }
+
+ gem_init_pause_thresholds(gp);
+
+ return 0;
+}
+
static void gem_init_hw(struct gem *gp)
{
+ /* On Apple's gmac, I initialize the PHY only after
+ * setting up the chip. It appears the gigabit PHYs
+	 * don't quite like being talked to on the GII when
+	 * the chip is not running; I suspect it might not
+ * be clocked at that point. --BenH
+ */
+ if (gp->pdev->vendor == PCI_VENDOR_ID_APPLE) {
+ gem_check_invariants(gp);
+ gp->hw_running = 1;
+ }
gem_init_phy(gp);
gem_init_dma(gp);
gem_init_mac(gp);
- writel(GREG_STAT_TXDONE, gp->regs + GREG_IMASK);
-
gp->timer_ticks = 0;
gp->lstate = aneg_wait;
gp->link_timer.expires = jiffies + ((12 * HZ) / 10);
add_timer(&gp->link_timer);
}
+#ifdef CONFIG_ALL_PPC
+/* Enable the chip's clock and make sure its config space is
+ * set up properly. There appears to be no need to restore the
+ * base addresses.
+ */
+static void
+gem_apple_powerup(struct gem* gp)
+{
+ u16 cmd;
+
+ pmac_call_feature(PMAC_FTR_GMAC_ENABLE, gp->of_node, 0, 1);
+
+ udelay(100);
+
+ pci_read_config_word(gp->pdev, PCI_COMMAND, &cmd);
+ cmd |= PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER | PCI_COMMAND_INVALIDATE;
+ pci_write_config_word(gp->pdev, PCI_COMMAND, cmd);
+ pci_write_config_byte(gp->pdev, PCI_LATENCY_TIMER, 6);
+ pci_write_config_byte(gp->pdev, PCI_CACHE_LINE_SIZE, 8);
+}
+
+/* Turn off the chip's clock */
+static void
+gem_apple_powerdown(struct gem* gp)
+{
+ pmac_call_feature(PMAC_FTR_GMAC_ENABLE, gp->of_node, 0, 0);
+}
+#endif /* CONFIG_ALL_PPC */
+
static int gem_open(struct net_device *dev)
{
struct gem *gp = dev->priv;
del_timer(&gp->link_timer);
+#ifdef CONFIG_ALL_PPC
+ /* First, we need to bring up the chip */
+ if (gp->pdev->vendor == PCI_VENDOR_ID_APPLE)
+ gem_apple_powerup(gp);
+#endif /* CONFIG_ALL_PPC */
+
+ /* Reset the chip */
+ gem_stop(gp, regs);
+
+ /* We can now request the interrupt as we know it's masked
+ * on the controller
+ */
if (request_irq(gp->pdev->irq, gem_interrupt,
- SA_SHIRQ, dev->name, (void *)dev))
+ SA_SHIRQ, dev->name, (void *)dev)) {
+#ifdef CONFIG_ALL_PPC
+ if (gp->pdev->vendor == PCI_VENDOR_ID_APPLE)
+ gem_apple_powerdown(gp);
+#endif /* CONFIG_ALL_PPC */
return -EAGAIN;
+ }
- gem_stop(gp, regs);
+ /* Allocate & setup ring buffers */
gem_init_rings(gp, 0);
+
+ /* Init & setup chip hardware */
gem_init_hw(gp);
return 0;
del_timer(&gp->link_timer);
gem_stop(gp, gp->regs);
gem_clean_rings(gp);
+ gp->hw_running = 0;
+#ifdef CONFIG_ALL_PPC
+ if (gp->pdev->vendor == PCI_VENDOR_ID_APPLE)
+ gem_apple_powerdown(gp);
+#endif /* CONFIG_ALL_PPC */
free_irq(gp->pdev->irq, (void *)dev);
return 0;
}
struct gem *gp = dev->priv;
struct net_device_stats *stats = &gp->net_stats;
- stats->rx_crc_errors += readl(gp->regs + MAC_FCSERR);
- writel(0, gp->regs + MAC_FCSERR);
+ if (gp->hw_running) {
+ stats->rx_crc_errors += readl(gp->regs + MAC_FCSERR);
+ writel(0, gp->regs + MAC_FCSERR);
- stats->rx_frame_errors += readl(gp->regs + MAC_AERR);
- writel(0, gp->regs + MAC_AERR);
+ stats->rx_frame_errors += readl(gp->regs + MAC_AERR);
+ writel(0, gp->regs + MAC_AERR);
- stats->rx_length_errors += readl(gp->regs + MAC_LERR);
- writel(0, gp->regs + MAC_LERR);
-
- stats->tx_aborted_errors += readl(gp->regs + MAC_ECOLL);
- stats->collisions +=
- (readl(gp->regs + MAC_ECOLL) +
- readl(gp->regs + MAC_LCOLL));
- writel(0, gp->regs + MAC_ECOLL);
- writel(0, gp->regs + MAC_LCOLL);
+ stats->rx_length_errors += readl(gp->regs + MAC_LERR);
+ writel(0, gp->regs + MAC_LERR);
+ stats->tx_aborted_errors += readl(gp->regs + MAC_ECOLL);
+ stats->collisions +=
+ (readl(gp->regs + MAC_ECOLL) +
+ readl(gp->regs + MAC_LCOLL));
+ writel(0, gp->regs + MAC_ECOLL);
+ writel(0, gp->regs + MAC_LCOLL);
+ }
return &gp->net_stats;
}
{
struct gem *gp = dev->priv;
+ if (!gp->hw_running)
+ return;
+
netif_stop_queue(dev);
if ((gp->dev->flags & IFF_ALLMULTI) ||
return -EINVAL;
}
-static int __devinit gem_check_invariants(struct gem *gp)
-{
- struct pci_dev *pdev = gp->pdev;
- u32 mif_cfg = readl(gp->regs + MIF_CFG);
-
- if (pdev->vendor == PCI_VENDOR_ID_SUN &&
- pdev->device == PCI_DEVICE_ID_SUN_RIO_GEM) {
- /* One of the MII PHYs _must_ be present
- * as this chip has no gigabit PHY.
- */
- if ((mif_cfg & (MIF_CFG_MDI0 | MIF_CFG_MDI1)) == 0) {
- printk(KERN_ERR PFX "RIO GEM lacks MII phy, mif_cfg[%08x]\n",
- mif_cfg);
- return -1;
- }
- }
-
- /* Determine initial PHY interface type guess. MDIO1 is the
- * external PHY and thus takes precedence over MDIO0.
- */
- if (mif_cfg & MIF_CFG_MDI1) {
- gp->phy_type = phy_mii_mdio1;
- mif_cfg |= MIF_CFG_PSELECT;
- writel(mif_cfg, gp->regs + MIF_CFG);
- } else if (mif_cfg & MIF_CFG_MDI0) {
- gp->phy_type = phy_mii_mdio0;
- mif_cfg &= ~MIF_CFG_PSELECT;
- writel(mif_cfg, gp->regs + MIF_CFG);
- } else {
- gp->phy_type = phy_serialink;
- }
- if (gp->phy_type == phy_mii_mdio1 ||
- gp->phy_type == phy_mii_mdio0) {
- int i;
-
- for (i = 0; i < 32; i++) {
- gp->mii_phy_addr = i;
- if (phy_read(gp, PHY_CTRL) != 0xffff)
- break;
- }
- if (i == 32) {
- if (pdev->device != PCI_DEVICE_ID_SUN_GEM) {
- printk(KERN_ERR PFX "RIO MII phy will not respond.\n");
- return -1;
- }
- gp->phy_type = phy_serdes;
- }
- }
-
- /* Fetch the FIFO configurations now too. */
- gp->tx_fifo_sz = readl(gp->regs + TXDMA_FSZ) * 64;
- gp->rx_fifo_sz = readl(gp->regs + RXDMA_FSZ) * 64;
-
- if (pdev->vendor == PCI_VENDOR_ID_SUN) {
- if (pdev->device == PCI_DEVICE_ID_SUN_GEM) {
- if (gp->tx_fifo_sz != (9 * 1024) ||
- gp->rx_fifo_sz != (20 * 1024)) {
- printk(KERN_ERR PFX "GEM has bogus fifo sizes tx(%d) rx(%d)\n",
- gp->tx_fifo_sz, gp->rx_fifo_sz);
- return -1;
- }
- } else {
- if (gp->tx_fifo_sz != (2 * 1024) ||
- gp->rx_fifo_sz != (2 * 1024)) {
- printk(KERN_ERR PFX "RIO GEM has bogus fifo sizes tx(%d) rx(%d)\n",
- gp->tx_fifo_sz, gp->rx_fifo_sz);
- return -1;
- }
- }
- }
-
- /* Calculate pause thresholds. Setting the OFF threshold to the
- * full RX fifo size effectively disables PAUSE generation which
- * is what we do for 10/100 only GEMs which have FIFOs too small
- * to make real gains from PAUSE.
- */
- if (gp->rx_fifo_sz <= (2 * 1024)) {
- gp->rx_pause_off = gp->rx_pause_on = gp->rx_fifo_sz;
- } else {
- int off = (gp->rx_fifo_sz - (5 * 1024));
- int on = off - 1024;
-
- gp->rx_pause_off = off;
- gp->rx_pause_on = on;
- }
-
- {
- u32 cfg;
-
- /* XXX Why do I do this? -DaveM XXX */
- cfg = readl(gp->regs + GREG_BIFCFG);
- cfg |= GREG_BIFCFG_B64DIS;
- writel(cfg, gp->regs + GREG_BIFCFG);
-
- cfg = GREG_CFG_IBURST;
- cfg |= ((31 << 1) & GREG_CFG_TXDMALIM);
- cfg |= ((31 << 6) & GREG_CFG_RXDMALIM);
- writel(cfg, gp->regs + GREG_CFG);
- }
-
- return 0;
-}
-
static int __devinit gem_get_device_address(struct gem *gp)
{
-#if defined(__sparc__) || defined(__powerpc__)
+#if defined(__sparc__) || defined(CONFIG_ALL_PPC)
struct net_device *dev = gp->dev;
- struct pci_dev *pdev = gp->pdev;
#endif
#ifdef __sparc__
+ struct pci_dev *pdev = gp->pdev;
struct pcidev_cookie *pcp = pdev->sysdata;
int node = -1;
if (node == -1)
memcpy(dev->dev_addr, idprom->id_ethaddr, 6);
#endif
-#ifdef __powerpc__
- struct device_node *gem_node;
+#ifdef CONFIG_ALL_PPC
unsigned char *addr;
- gem_node = pci_device_to_OF_node(pdev);
- addr = get_property(gem_node, "local-mac-address", NULL);
+ addr = get_property(gp->of_node, "local-mac-address", NULL);
if (addr == NULL) {
printk("\n");
printk(KERN_ERR "%s: can't get mac-address\n", dev->name);
if (gem_version_printed++ == 0)
printk(KERN_INFO "%s", version);
+ /* Apple gmac note: during probe, the chip is powered up by
+ * the arch code to allow the code below to work (and to let
+	 * the chip be probed via the config space). It won't stay powered
+	 * up until the interface is brought up, however, so we can't rely
+ * on register configuration done at this point.
+ */
err = pci_enable_device(pdev);
if (err) {
printk(KERN_ERR PFX "Cannot enable MMIO operation, "
goto err_out_free_mmio_res;
}
- if (gem_check_invariants(gp))
- goto err_out_iounmap;
+	/* On Apple hardware, we might not be able to access the chip at this point */
+ if (pdev->vendor != PCI_VENDOR_ID_APPLE) {
+ gem_stop(gp, gp->regs);
+ if (gem_check_invariants(gp))
+ goto err_out_iounmap;
+ gp->hw_running = 1;
+ }
	/* It is guaranteed that the returned buffer will be at least
* PAGE_SIZE aligned.
printk(KERN_INFO "%s: Sun GEM (PCI) 10/100/1000BaseT Ethernet ",
dev->name);
+#ifdef CONFIG_ALL_PPC
+ gp->of_node = pci_device_to_OF_node(pdev);
+#endif
if (gem_get_device_address(gp))
goto err_out_iounmap;
iounmap((void *) gp->regs);
release_mem_region(pci_resource_start(pdev, 0),
pci_resource_len(pdev, 0));
+#ifdef CONFIG_ALL_PPC
+ pmac_call_feature(PMAC_FTR_GMAC_ENABLE, gp->of_node, 0, 0);
+#endif
kfree(dev);
pci_set_drvdata(pdev, NULL);
-/* $Id: sungem.h,v 1.7 2001/04/04 14:49:40 davem Exp $
+/* $Id: sungem.h,v 1.8 2001/10/17 05:55:39 davem Exp $
* sungem.h: Definitions for Sun GEM ethernet driver.
*
* Copyright (C) 2000 David S. Miller (davem@redhat.com)
/* MII phy registers */
#define PHY_CTRL 0x00
#define PHY_STAT 0x01
+#define PHY_ID0 0x02
+#define PHY_ID1 0x03
#define PHY_ADV 0x04
#define PHY_LPA 0x05
+#define PHY_CTRL_SPD2 0x0040 /* Gigabit enable? (bcm5411) */
#define PHY_CTRL_FDPLX 0x0100 /* Full duplex */
#define PHY_CTRL_ISO	0x0400	/* Isolate MII from PHY */
#define PHY_CTRL_ANRES 0x0200 /* Auto-negotiation restart */
#define PHY_LPA_10FULL 0x0040
#define PHY_LPA_100HALF 0x0080
#define PHY_LPA_100FULL 0x0100
+#define PHY_LPA_PAUSE 0x0400
#define PHY_LPA_FAULT 0x2000
+/* More PHY registers (specific to Broadcom models) */
+
+/* MII BCM5201 MULTIPHY interrupt register */
+#define PHY_BCM5201_INTERRUPT 0x1A
+#define PHY_BCM5201_INTERRUPT_INTENABLE 0x4000
+
+#define PHY_BCM5201_AUXMODE2 0x1B
+#define PHY_BCM5201_AUXMODE2_LOWPOWER 0x0008
+
+#define PHY_BCM5201_MULTIPHY 0x1E
+
+/* MII BCM5201 MULTIPHY register bits */
+#define PHY_BCM5201_MULTIPHY_SERIALMODE 0x0002
+#define PHY_BCM5201_MULTIPHY_SUPERISOLATE 0x0008
+
+/* MII BCM5400 1000-BASET Control register */
+#define PHY_BCM5400_GB_CONTROL 0x09
+#define PHY_BCM5400_GB_CONTROL_FULLDUPLEXCAP 0x0200
+
+/* MII BCM5400 AUXCONTROL register */
+#define PHY_BCM5400_AUXCONTROL 0x18
+#define PHY_BCM5400_AUXCONTROL_PWR10BASET 0x0004
+
+/* MII BCM5400 AUXSTATUS register */
+#define PHY_BCM5400_AUXSTATUS 0x19
+#define PHY_BCM5400_AUXSTATUS_LINKMODE_MASK 0x0700
+#define PHY_BCM5400_AUXSTATUS_LINKMODE_SHIFT 8
+
/* When it can, GEM internally caches 4 aligned TX descriptors
* at a time, so that it can use full cacheline DMA reads.
*
phy_serdes,
};
+enum gem_phy_model {
+ phymod_generic,
+ phymod_bcm5201,
+ phymod_bcm5221,
+ phymod_bcm5400,
+ phymod_bcm5401,
+ phymod_bcm5411,
+};
+
enum link_state {
aneg_wait,
force_wait,
+ aneg_up,
};
struct gem {
int rx_new, rx_old;
int tx_new, tx_old;
+ /* Set when chip is actually in operational state
+ * (ie. not power managed)
+ */
+ int hw_running;
+
struct gem_init_block *init_block;
struct sk_buff *rx_skbs[RX_RING_SIZE];
struct net_device_stats net_stats;
enum gem_phy_type phy_type;
+ enum gem_phy_model phy_mod;
int tx_fifo_sz;
int rx_fifo_sz;
int rx_pause_off;
dma_addr_t gblock_dvma;
struct pci_dev *pdev;
struct net_device *dev;
+#ifdef CONFIG_ALL_PPC
+ struct device_node *of_node;
+#endif
};
#define ALIGNED_RX_SKB_ADDR(addr) \
module_init(sparc_lance_probe);
module_exit(sparc_lance_cleanup);
+MODULE_LICENSE("GPL");
-/* $Id: sunqe.c,v 1.51 2001/04/19 22:32:42 davem Exp $
+/* $Id: sunqe.c,v 1.52 2001/10/18 08:18:08 davem Exp $
* sunqe.c: Sparc QuadEthernet 10baseT SBUS card driver.
* Once again I am out to prove that every ethernet
* controller out there can be most efficiently programmed
Aironet. Major code contributions were received from Javier Achirica
and Jean Tourrilhes <jt@hpl.hp.com>. Code was also integrated from
the Cisco Aironet driver for Linux.
-
+
======================================================================*/
#include <linux/config.h>
{ 0x14b9, 0x4800, PCI_ANY_ID, PCI_ANY_ID, },
{ 0x14b9, 0x0340, PCI_ANY_ID, PCI_ANY_ID, },
{ 0x14b9, 0x0350, PCI_ANY_ID, PCI_ANY_ID, },
- { 0, }
+ { 0, }
};
MODULE_DEVICE_TABLE(pci, card_ids);
static int auto_wep /* = 0 */; /* If set, it tries to figure out the wep mode */
static int aux_bap /* = 0 */; /* Checks to see if the aux ports are needed to read
- the bap, needed on some older cards and buses. */
+ the bap, needed on some older cards and buses. */
static int adhoc;
static int proc_uid /* = 0 */;
/* The RIDs */
#define RID_CAPABILITIES 0xFF00
+#define RID_APINFO 0xFF01
+#define RID_RADIOINFO 0xFF02
+#define RID_UNKNOWN3 0xFF03
#define RID_RSSI 0xFF04
#define RID_CONFIG 0xFF10
#define RID_SSID 0xFF11
#define RID_WEP_TEMP 0xFF15
#define RID_WEP_PERM 0xFF16
#define RID_MODULATION 0xFF17
+#define RID_OPTIONS 0xFF18
#define RID_ACTUALCONFIG 0xFF20 /*readonly*/
+#define RID_FACTORYCONFIG 0xFF21
+#define RID_UNKNOWN22 0xFF22
#define RID_LEAPUSERNAME 0xFF23
#define RID_LEAPPASSWORD 0xFF24
#define RID_STATUS 0xFF50
+#define RID_UNKNOWN52 0xFF52
+#define RID_UNKNOWN54 0xFF54
+#define RID_UNKNOWN55 0xFF55
+#define RID_UNKNOWN56 0xFF56
+#define RID_STATS16 0xFF60
+#define RID_STATS16DELTA 0xFF61
+#define RID_STATS16DELTACLEAR 0xFF62
#define RID_STATS 0xFF68
#define RID_STATSDELTA 0xFF69
#define RID_STATSDELTACLEAR 0xFF6A
+#define RID_UNKNOWN70 0xFF70
+#define RID_UNKNOWN71 0xFF71
#define RID_BSSLISTFIRST 0xFF72
#define RID_BSSLISTNEXT 0xFF73
u16 spacer;
u32 vals[100];
} StatsRid;
-
+
typedef struct {
u16 len;
#define RADIO_FH 1 /* Frequency hopping radio type */
#define RADIO_DS 2 /* Direct sequence radio type */
#define RADIO_TMA 4 /* Proprietary radio used in old cards (2500) */
- u16 radioType;
+ u16 radioType;
u8 bssid[6]; /* Mac address of the BSS */
u8 zero;
u8 ssidLen;
#ifdef CISCO_EXT
#define AIROMAGIC 0xa55a
-#define AIROIOCTL SIOCDEVPRIVATE
+/* Warning: SIOCDEVPRIVATE may disappear during 2.5.X - Jean II */
+#ifdef SIOCIWFIRSTPRIV
+#ifdef SIOCDEVPRIVATE
+#define AIROOLDIOCTL SIOCDEVPRIVATE
+#define AIROOLDIDIFC AIROOLDIOCTL + 1
+#endif /* SIOCDEVPRIVATE */
+#else /* SIOCIWFIRSTPRIV */
+#define SIOCIWFIRSTPRIV SIOCDEVPRIVATE
+#endif /* SIOCIWFIRSTPRIV */
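+/* Private ioctls now live in the wireless-extensions private range when the
+ * kernel provides one; the old SIOCDEVPRIVATE numbers are kept above as
+ * AIROOLDIOCTL/AIROOLDIDIFC for backwards compatibility.
+ */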
+#define AIROIOCTL SIOCIWFIRSTPRIV
#define AIROIDIFC AIROIOCTL + 1
/* Ioctl constants to be used in airo_ioctl.command */
#define AIROGCAP 0 // Capability rid
-#define AIROGCFG 1 // USED A LOT
-#define AIROGSLIST 2 // System ID list
+#define AIROGCFG 1 // USED A LOT
+#define AIROGSLIST 2 // System ID list
#define AIROGVLIST 3 // List of specified AP's
#define AIROGDRVNAM 4 // NOTUSED
#define AIROGEHTENC 5 // NOTUSED
static u16 setup_card(struct airo_info*, u8 *mac, ConfigRid *);
static void enable_interrupts(struct airo_info*);
static void disable_interrupts(struct airo_info*);
+static u16 lock_issuecommand(struct airo_info*, Cmd *pCmd, Resp *pRsp);
static u16 issuecommand(struct airo_info*, Cmd *pCmd, Resp *pRsp);
static int bap_setup(struct airo_info*, u16 rid, u16 offset, int whichbap);
-static int aux_bap_read(struct airo_info*, u16 *pu16Dst, int bytelen,
+static int aux_bap_read(struct airo_info*, u16 *pu16Dst, int bytelen,
int whichbap);
-static int fast_bap_read(struct airo_info*, u16 *pu16Dst, int bytelen,
+static int fast_bap_read(struct airo_info*, u16 *pu16Dst, int bytelen,
int whichbap);
static int bap_write(struct airo_info*, const u16 *pu16Src, int bytelen,
int whichbap);
static int PC4500_readrid(struct airo_info*, u16 rid, void *pBuf, int len);
static int PC4500_writerid(struct airo_info*, u16 rid, const void
*pBuf, int len);
-static int do_writerid( struct airo_info*, u16 rid, const void *rid_data,
+static int do_writerid( struct airo_info*, u16 rid, const void *rid_data,
int len );
static u16 transmit_allocate(struct airo_info*, int lenPayload);
static int transmit_802_3_packet(struct airo_info*, u16 TxFid, char
int fids[MAX_FIDS];
int registered;
ConfigRid config;
- u16 authtype; // Used with auto_wep
+ u16 authtype; // Used with auto_wep
char keyindex; // Used with auto wep
char defindex; // Used with auto wep
struct timer_list timer;
struct proc_dir_entry *proc_entry;
struct airo_info *next;
- spinlock_t bap0_lock;
- spinlock_t bap1_lock;
spinlock_t aux_lock;
- spinlock_t cmd_lock;
+ spinlock_t main_lock;
int flags;
#define FLAG_PROMISC IFF_PROMISC
#define FLAG_RADIO_OFF 0x02
- int (*bap_read)(struct airo_info*, u16 *pu16Dst, int bytelen,
+ int (*bap_read)(struct airo_info*, u16 *pu16Dst, int bytelen,
int whichbap);
int (*header_parse)(struct sk_buff*, unsigned char *);
unsigned short *flash;
#endif /* WIRELESS_EXT */
};
-static inline int bap_read(struct airo_info *ai, u16 *pu16Dst, int bytelen,
+static inline int bap_read(struct airo_info *ai, u16 *pu16Dst, int bytelen,
int whichbap) {
return ai->bap_read(ai, pu16Dst, bytelen, whichbap);
}
if (first == 1) {
memset(&cmd, 0, sizeof(cmd));
cmd.cmd=CMD_LISTBSS;
- issuecommand(ai, &cmd, &rsp);
+ lock_issuecommand(ai, &cmd, &rsp);
/* Let the command take effect */
set_current_state (TASK_INTERRUPTIBLE);
schedule_timeout (3*HZ);
}
- rc = PC4500_readrid(ai,
+ rc = PC4500_readrid(ai,
first ? RID_BSSLISTFIRST : RID_BSSLISTNEXT,
list, sizeof(*list));
}
static int readWepKeyRid(struct airo_info*ai, WepKeyRid *wkr, int temp) {
- int rc = PC4500_readrid(ai, temp ? RID_WEP_TEMP : RID_WEP_PERM,
+ int rc = PC4500_readrid(ai, temp ? RID_WEP_TEMP : RID_WEP_PERM,
wkr, sizeof(*wkr));
-
+
wkr->len = le16_to_cpu(wkr->len);
wkr->kindex = le16_to_cpu(wkr->kindex);
wkr->klen = le16_to_cpu(wkr->klen);
wkr.kindex = cpu_to_le16(wkr.kindex);
wkr.klen = cpu_to_le16(wkr.klen);
rc = do_writerid(ai, RID_WEP_TEMP, &wkr, sizeof(wkr));
- if (rc!=SUCCESS) printk(KERN_ERR "airo: WEP_TEMP set %x\n", rc);
+ if (rc!=SUCCESS) printk(KERN_ERR "airo: WEP_TEMP set %x\n", rc);
if (perm) {
rc = do_writerid(ai, RID_WEP_PERM, &wkr, sizeof(wkr));
if (rc!=SUCCESS) {
static int readConfigRid(struct airo_info*ai, ConfigRid *cfgr) {
int rc = PC4500_readrid(ai, RID_ACTUALCONFIG, cfgr, sizeof(*cfgr));
u16 *s;
-
+
for(s = &cfgr->len; s <= &cfgr->rtsThres; s++) *s = le16_to_cpu(*s);
for(s = &cfgr->shortRetryLimit; s <= &cfgr->radioType; s++)
for(s = &cfgr->txPower; s <= &cfgr->radioSpecific; s++)
*s = le16_to_cpu(*s);
-
+
for(s = &cfgr->arlThreshold; s <= &cfgr->autoWake; s++)
*s = le16_to_cpu(*s);
-
+
return rc;
}
static int writeConfigRid(struct airo_info*ai, ConfigRid *pcfgr) {
u16 *s;
ConfigRid cfgr = *pcfgr;
-
+
for(s = &cfgr.len; s <= &cfgr.rtsThres; s++) *s = cpu_to_le16(*s);
for(s = &cfgr.shortRetryLimit; s <= &cfgr.radioType; s++)
for(s = &cfgr.txPower; s <= &cfgr.radioSpecific; s++)
*s = cpu_to_le16(*s);
-
+
for(s = &cfgr.arlThreshold; s <= &cfgr.autoWake; s++)
*s = cpu_to_le16(*s);
-
+
return do_writerid( ai, RID_CONFIG, &cfgr, sizeof(cfgr));
}
static int readStatusRid(struct airo_info*ai, StatusRid *statr) {
static int readCapabilityRid(struct airo_info*ai, CapabilityRid *capr) {
int rc = PC4500_readrid(ai, RID_CAPABILITIES, capr, sizeof(*capr));
u16 *s;
-
+
capr->len = le16_to_cpu(capr->len);
capr->prodNum = le16_to_cpu(capr->prodNum);
capr->radioType = le16_to_cpu(capr->radioType);
static int airo_start_xmit(struct sk_buff *skb, struct net_device *dev) {
s16 len;
- s16 retval = 0;
u16 status;
u32 flags;
- s8 *buffer;
int i,j;
struct airo_info *priv = (struct airo_info*)dev->priv;
u32 *fids = priv->fids;
-
+
if ( skb == NULL ) {
printk( KERN_ERR "airo: skb == NULL!!!\n" );
return 0;
}
-
+
/* Find a vacant FID */
- spin_lock_irqsave(&priv->bap1_lock, flags);
+ spin_lock_irqsave(&priv->main_lock, flags);
for( j = 0, i = -1; j < MAX_FIDS; j++ ) {
if ( !( fids[j] & 0xffff0000 ) ) {
if ( i == -1 ) i = j;
}
if ( j == MAX_FIDS ) netif_stop_queue(dev);
if ( i == -1 ) {
- retval = -EBUSY;
+ priv->stats.tx_fifo_errors++;
goto tx_done;
}
-
+
len = ETH_ZLEN < skb->len ? skb->len : ETH_ZLEN; /* check min length*/
- buffer = skb->data;
- status = transmit_802_3_packet( priv,
- fids[i],
- skb->data, len );
-
+ status = transmit_802_3_packet( priv, fids[i], skb->data, len );
+
if ( status == SUCCESS ) {
/* Mark fid as used & save length for later */
- fids[i] |= (len << 16);
+ fids[i] |= (len << 16);
dev->trans_start = jiffies;
} else {
- priv->stats.tx_errors++;
+ priv->stats.tx_window_errors++;
}
tx_done:
- spin_unlock_irqrestore(&priv->bap1_lock, flags);
+ spin_unlock_irqrestore(&priv->main_lock, flags);
dev_kfree_skb(skb);
return 0;
}
-static struct net_device_stats *airo_get_stats(struct net_device *dev) {
- return &(((struct airo_info*)dev->priv)->stats);
+struct net_device_stats *airo_get_stats(struct net_device *dev)
+{
+ struct airo_info *local = (struct airo_info*) dev->priv;
+ StatsRid stats_rid;
+ u32 *vals = stats_rid.vals;
+
+ /* Get stats out of the card */
+ readStatsRid(local, &stats_rid, RID_STATS);
+
+ local->stats.rx_packets = vals[43] + vals[44] + vals[45];
+ local->stats.tx_packets = vals[39] + vals[40] + vals[41];
+ local->stats.rx_bytes = vals[92];
+ local->stats.tx_bytes = vals[91];
+ local->stats.rx_errors = vals[0] + vals[2] + vals[3] + vals[4];
+ local->stats.tx_errors = vals[42] + local->stats.tx_fifo_errors;
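+	/* tx_fifo_errors is accumulated locally in airo_start_xmit() when no
+	 * transmit FID is free, so fold it into the firmware's error count. */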
+ local->stats.multicast = vals[43];
+ local->stats.collisions = vals[89];
+
+ /* detailed rx_errors: */
+ local->stats.rx_length_errors = vals[3];
+ local->stats.rx_crc_errors = vals[4];
+ local->stats.rx_frame_errors = vals[2];
+ local->stats.rx_fifo_errors = vals[0];
+
+ return (&local->stats);
}
static int enable_MAC( struct airo_info *ai, Resp *rsp );
struct airo_info *ai = (struct airo_info*)dev->priv;
Cmd cmd;
Resp rsp;
-
+
/* For some reason this command takes a lot of time (~20 ms) and it's
* run in an interrupt handler, so we'd better be sure we needed it
* before executing it.
memset(&cmd, 0, sizeof(cmd));
cmd.cmd=CMD_SETMODE;
cmd.parm0=(dev->flags&IFF_PROMISC) ? PROMISC : NOPROMISC;
- issuecommand(ai, &cmd, &rsp);
+ lock_issuecommand(ai, &cmd, &rsp);
ai->flags^=IFF_PROMISC;
}
}
-static int airo_close(struct net_device *dev) {
+static int airo_close(struct net_device *dev) {
struct airo_info *ai = (struct airo_info*)dev->priv;
netif_stop_queue(dev);
static void del_airo_dev( struct net_device *dev );
-void stop_airo_card( struct net_device *dev, int freeres )
+void stop_airo_card( struct net_device *dev, int freeres )
{
struct airo_info *ai = (struct airo_info*)dev->priv;
if (ai->flash)
kfree( dev );
}
-static int add_airo_dev( struct net_device *dev );
+static int add_airo_dev( struct net_device *dev );
struct net_device *init_airo_card( unsigned short irq, int port, int is_pcmcia )
{
struct net_device *dev;
struct airo_info *ai;
int i, rc;
-
+
/* Create the network device object. */
dev = alloc_etherdev(sizeof(*ai));
if (!dev) {
printk(KERN_ERR "airo: Couldn't alloc_etherdev\n");
return NULL;
}
+ if (dev_alloc_name(dev, dev->name) < 0) {
+ printk(KERN_ERR "airo: Couldn't get name!\n");
+ goto err_out_free;
+ }
+
ai = dev->priv;
- ai->registered = 1;
+ ai->registered = 0;
ai->dev = dev;
- ai->bap0_lock = SPIN_LOCK_UNLOCKED;
- ai->bap1_lock = SPIN_LOCK_UNLOCKED;
ai->aux_lock = SPIN_LOCK_UNLOCKED;
- ai->cmd_lock = SPIN_LOCK_UNLOCKED;
+ ai->main_lock = SPIN_LOCK_UNLOCKED;
ai->header_parse = dev->hard_header_parse;
rc = add_airo_dev( dev );
if (rc)
goto err_out_free;
-
+
/* The Airo-specific entries in the device structure. */
dev->hard_start_xmit = &airo_start_xmit;
dev->get_stats = &airo_get_stats;
dev->stop = &airo_close;
dev->irq = irq;
dev->base_addr = port;
-
- rc = register_netdev(dev);
- if (rc)
- goto err_out_unlink;
-
- rc = request_irq( dev->irq, airo_interrupt,
- SA_SHIRQ | SA_INTERRUPT, dev->name, dev );
+
+ rc = request_irq( dev->irq, airo_interrupt, SA_SHIRQ, dev->name, dev );
if (rc) {
printk(KERN_ERR "airo: register interrupt %d failed, rc %d\n", irq, rc );
- goto err_out_unregister;
+ goto err_out_unlink;
}
if (!is_pcmcia) {
if (!request_region( dev->base_addr, 64, dev->name )) {
goto err_out_irq;
}
}
-
+
if ( setup_card( ai, dev->dev_addr, &ai->config) != SUCCESS ) {
printk( KERN_ERR "airo: MAC could not be enabled\n" );
rc = -EIO;
goto err_out_res;
}
+ rc = register_netdev(dev);
+ if (rc)
+ goto err_out_res;
+
+ ai->registered = 1;
printk( KERN_INFO "airo: MAC enabled %s %x:%x:%x:%x:%x:%x\n",
dev->name,
dev->dev_addr[0], dev->dev_addr[1], dev->dev_addr[2],
release_region( dev->base_addr, 64 );
err_out_irq:
free_irq(dev->irq, dev);
-err_out_unregister:
- unregister_netdev(dev);
err_out_unlink:
del_airo_dev(dev);
err_out_free:
udelay (10);
if (++delay % 20)
OUT4500(ai, EVACK, EV_CLEARCOMMANDBUSY);
- }
+ }
return delay < 10000;
}
u16 status;
u16 fid;
struct airo_info *apriv = (struct airo_info *)dev->priv;
- u16 savedInterrupts;
-
+ u16 savedInterrupts = 0;
+
if (!netif_device_present(dev))
return;
-
- status = IN4500( apriv, EVSTAT );
- if ( !status || status == 0xffff ) return;
-
- if ( status & EV_AWAKE ) {
- OUT4500( apriv, EVACK, EV_AWAKE );
- OUT4500( apriv, EVACK, EV_AWAKE );
- }
-
- savedInterrupts = IN4500( apriv, EVINTEN );
- OUT4500( apriv, EVINTEN, 0 );
-
- if ( status & EV_LINK ) {
- /* The link status has changed, if you want to put a
- monitor hook in, do it here. (Remember that
- interrupts are still disabled!)
- */
- u16 newStatus = IN4500(apriv, LINKSTAT);
- /* Here is what newStatus means: */
+
+ for (;;) {
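+		/* Keep servicing events until EVSTAT reads back empty (0) or
+		 * the card is gone (0xffff); saved interrupts are restored
+		 * once the loop exits. */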
+ status = IN4500( apriv, EVSTAT );
+ if ( !status || status == 0xffff ) break;
+
+ if ( status & EV_AWAKE ) {
+ OUT4500( apriv, EVACK, EV_AWAKE );
+ OUT4500( apriv, EVACK, EV_AWAKE );
+ }
+
+ if (!savedInterrupts) {
+ savedInterrupts = IN4500( apriv, EVINTEN );
+ OUT4500( apriv, EVINTEN, 0 );
+ }
+
+ if ( status & EV_LINK ) {
+ /* The link status has changed, if you want to put a
+ monitor hook in, do it here. (Remember that
+ interrupts are still disabled!)
+ */
+ u16 newStatus = IN4500(apriv, LINKSTAT);
+ OUT4500( apriv, EVACK, EV_LINK);
+ /* Here is what newStatus means: */
#define NOBEACON 0x8000 /* Loss of sync - missed beacons */
#define MAXRETRIES 0x8001 /* Loss of sync - max retries */
#define MAXARL 0x8002 /* Loss of sync - average retry level exceeded*/
leaving BSS */
#define RC_NOAUTH 9 /* Station requesting (Re)Association is not
Authenticated with the responding station */
- if (newStatus != ASSOCIATED) {
- if (auto_wep && !timer_pending(&apriv->timer)) {
- apriv->timer.expires = RUN_AT(HZ*3);
- add_timer(&apriv->timer);
- }
- }
- }
-
- /* Check to see if there is something to receive */
- if ( status & EV_RX ) {
- struct sk_buff *skb = NULL;
- long flags;
- u16 fc, len, hdrlen = 0;
- struct {
- u16 status, len;
- u8 rssi[2];
- } hdr;
-
- fid = IN4500( apriv, RXFID );
-
- /* Get the packet length */
- spin_lock_irqsave(&apriv->bap0_lock, flags);
- if (dev->type == ARPHRD_IEEE80211) {
- bap_setup (apriv, fid, 4, BAP0);
- bap_read (apriv, (u16*)&hdr, sizeof(hdr), BAP0);
- /* Bad CRC. Ignore packet */
- if (le16_to_cpu(hdr.status) == 2) {
- apriv->stats.rx_crc_errors++;
- apriv->stats.rx_errors++;
- hdr.len = 0;
+ if (newStatus != ASSOCIATED) {
+ if (auto_wep && !timer_pending(&apriv->timer)) {
+ apriv->timer.expires = RUN_AT(HZ*3);
+ add_timer(&apriv->timer);
+ }
}
- } else {
- bap_setup (apriv, fid, 6, BAP0);
- bap_read (apriv, (u16*)&hdr.len, 4, BAP0);
- }
- len = le16_to_cpu(hdr.len);
-
- if (len > 2312) {
- apriv->stats.rx_length_errors++;
- apriv->stats.rx_errors++;
- printk( KERN_ERR
- "airo: Bad size %d\n", len );
- len = 0;
}
- if (len) {
+
+ /* Check to see if there is something to receive */
+ if ( status & EV_RX ) {
+ struct sk_buff *skb = NULL;
+ u16 fc, len, hdrlen = 0;
+ struct {
+ u16 status, len;
+ u8 rssi[2];
+ } hdr;
+
+ fid = IN4500( apriv, RXFID );
+
+ /* Get the packet length */
if (dev->type == ARPHRD_IEEE80211) {
- bap_setup (apriv, fid, 0x14, BAP0);
- bap_read (apriv, (u16*)&fc, sizeof(fc), BAP0);
- if ((le16_to_cpu(fc) & 0x300) == 0x300)
- hdrlen = 30;
- else
- hdrlen = 24;
- } else
- hdrlen = 12;
+ bap_setup (apriv, fid, 4, BAP0);
+ bap_read (apriv, (u16*)&hdr, sizeof(hdr), BAP0);
+ /* Bad CRC. Ignore packet */
+ if (le16_to_cpu(hdr.status) & 2)
+ hdr.len = 0;
+ } else {
+ bap_setup (apriv, fid, 6, BAP0);
+ bap_read (apriv, (u16*)&hdr.len, 4, BAP0);
+ }
+ len = le16_to_cpu(hdr.len);
- skb = dev_alloc_skb( len + hdrlen + 2 );
- if ( !skb ) {
- apriv->stats.rx_dropped++;
+ if (len > 2312) {
+ printk( KERN_ERR "airo: Bad size %d\n", len );
len = 0;
}
- }
- if (len) {
- u16 *buffer;
- buffer = (u16*)skb_put (skb, len + hdrlen);
- if (dev->type == ARPHRD_IEEE80211) {
- u16 gap, tmpbuf[4];
- buffer[0] = fc;
- bap_read (apriv, buffer + 1, hdrlen - 2, BAP0);
- if (hdrlen == 24)
- bap_read (apriv, tmpbuf, 6, BAP0);
-
- bap_read (apriv, &gap, sizeof(gap), BAP0);
- gap = le16_to_cpu(gap);
- if (gap && gap <= 8)
- bap_read (apriv, tmpbuf, gap, BAP0);
-
- bap_read (apriv, buffer + hdrlen/2, len, BAP0);
- } else {
- bap_setup (apriv, fid, 0x38, BAP0);
- bap_read (apriv, buffer,len + hdrlen,BAP0);
+ if (len) {
+ if (dev->type == ARPHRD_IEEE80211) {
+ bap_setup (apriv, fid, 0x14, BAP0);
+ bap_read (apriv, (u16*)&fc, sizeof(fc), BAP0);
+ if ((le16_to_cpu(fc) & 0x300) == 0x300)
+ hdrlen = 30;
+ else
+ hdrlen = 24;
+ } else
+ hdrlen = 12;
+
+ skb = dev_alloc_skb( len + hdrlen + 2 );
+ if ( !skb ) {
+ apriv->stats.rx_dropped++;
+ len = 0;
+ }
}
+ if (len) {
+ u16 *buffer;
+ buffer = (u16*)skb_put (skb, len + hdrlen);
+ if (dev->type == ARPHRD_IEEE80211) {
+ u16 gap, tmpbuf[4];
+ buffer[0] = fc;
+ bap_read (apriv, buffer + 1, hdrlen - 2, BAP0);
+ if (hdrlen == 24)
+ bap_read (apriv, tmpbuf, 6, BAP0);
+
+ bap_read (apriv, &gap, sizeof(gap), BAP0);
+ gap = le16_to_cpu(gap);
+ if (gap && gap <= 8)
+ bap_read (apriv, tmpbuf, gap, BAP0);
+
+ bap_read (apriv, buffer + hdrlen/2, len, BAP0);
+ } else {
+ bap_setup (apriv, fid, 0x38, BAP0);
+ bap_read (apriv, buffer,len + hdrlen,BAP0);
+ }
+ OUT4500( apriv, EVACK, EV_RX);
#ifdef WIRELESS_SPY
- if (apriv->spy_number > 0) {
- int i;
- char *sa;
-
- sa = (char*)buffer + ((dev->type == ARPHRD_IEEE80211) ? 10 : 6);
-
- for (i=0; i<apriv->spy_number; i++)
- if (!memcmp(sa,apriv->spy_address[i],6))
- {
- apriv->spy_stat[i].qual = hdr.rssi[0];
- if (apriv->rssi)
- apriv->spy_stat[i].level = 0x100 - apriv->rssi[hdr.rssi[1]].rssidBm;
- else
- apriv->spy_stat[i].level = (hdr.rssi[1] + 321) / 2;
- apriv->spy_stat[i].noise = 0;
- apriv->spy_stat[i].updated = 3;
- break;
- }
- }
+ if (apriv->spy_number > 0) {
+ int i;
+ char *sa;
+
+ sa = (char*)buffer + ((dev->type == ARPHRD_IEEE80211) ? 10 : 6);
+
+ for (i=0; i<apriv->spy_number; i++)
+ if (!memcmp(sa,apriv->spy_address[i],6))
+ {
+ apriv->spy_stat[i].qual = hdr.rssi[0];
+ if (apriv->rssi)
+ apriv->spy_stat[i].level = 0x100 - apriv->rssi[hdr.rssi[1]].rssidBm;
+ else
+ apriv->spy_stat[i].level = (hdr.rssi[1] + 321) / 2;
+ apriv->spy_stat[i].noise = 0;
+ apriv->spy_stat[i].updated = 3;
+ break;
+ }
+ }
#endif /* WIRELESS_SPY */
- apriv->stats.rx_packets++;
- apriv->stats.rx_bytes += len + hdrlen;
- dev->last_rx = jiffies;
- skb->dev = dev;
- skb->ip_summed = CHECKSUM_NONE;
- if (dev->type == ARPHRD_IEEE80211) {
- skb->mac.raw = skb->data;
- skb_pull (skb, hdrlen);
- skb->pkt_type = PACKET_OTHERHOST;
- skb->protocol = htons(ETH_P_802_2);
- } else
- skb->protocol = eth_type_trans( skb, dev );
+ dev->last_rx = jiffies;
+ skb->dev = dev;
+ skb->ip_summed = CHECKSUM_NONE;
+ if (dev->type == ARPHRD_IEEE80211) {
+ skb->mac.raw = skb->data;
+ skb_pull (skb, hdrlen);
+ skb->pkt_type = PACKET_OTHERHOST;
+ skb->protocol = htons(ETH_P_802_2);
+ } else
+ skb->protocol = eth_type_trans(skb,dev);
- netif_rx( skb );
+ netif_rx( skb );
+ } else
+ OUT4500( apriv, EVACK, EV_RX);
}
- spin_unlock_irqrestore(&apriv->bap0_lock, flags);
- }
- /* Check to see if a packet has been transmitted */
- if ( status & ( EV_TX|EV_TXEXC ) ) {
- int i;
- int len = 0;
- int full = 1;
- int index = -1;
-
- fid = IN4500(apriv, TXCOMPLFID);
-
- for( i = 0; i < MAX_FIDS; i++ ) {
- if (!(apriv->fids[i] & 0xffff0000)) full = 0;
- if ( ( apriv->fids[i] & 0xffff ) == fid ) {
- len = apriv->fids[i] >> 16;
- index = i;
- /* Set up to be used again */
- apriv->fids[i] &= 0xffff;
+ /* Check to see if a packet has been transmitted */
+ if ( status & ( EV_TX|EV_TXEXC ) ) {
+ int i;
+ int len = 0;
+ int index = -1;
+
+ fid = IN4500(apriv, TXCOMPLFID);
+
+ for( i = 0; i < MAX_FIDS; i++ ) {
+ if ( ( apriv->fids[i] & 0xffff ) == fid ) {
+ len = apriv->fids[i] >> 16;
+ index = i;
+ /* Set up to be used again */
+ apriv->fids[i] &= 0xffff;
+ }
}
- }
- if (full) netif_wake_queue(dev);
- if (index==-1) {
- printk( KERN_ERR
- "airo: Unallocated FID was used to xmit\n" );
- }
- if ( status & EV_TX ) {
- apriv->stats.tx_packets++;
- if(index!=-1)
- apriv->stats.tx_bytes += len;
- } else {
- if (bap_setup(apriv, fid, 0x0004, BAP1) == SUCCESS) {
+ if (index != -1) netif_wake_queue(dev);
+ if ((status & EV_TXEXC) &&
+ (bap_setup(apriv, fid, 4, BAP1) == SUCCESS)) {
+
u16 status;
bap_read(apriv, &status, 2, BAP1);
if (le16_to_cpu(status) & 2)
if (le16_to_cpu(status) & 0x10)
apriv->stats.tx_carrier_errors++;
}
- apriv->stats.tx_errors++;
+ OUT4500( apriv, EVACK, status & (EV_TX | EV_TXEXC));
+ if (index==-1) {
+ printk( KERN_ERR "airo: Unallocated FID was used to xmit\n" );
+ }
}
+ if ( status & ~STATUS_INTS )
+ OUT4500( apriv, EVACK, status & ~STATUS_INTS);
+
+ if ( status & ~STATUS_INTS & ~IGNORE_INTS )
+ printk( KERN_WARNING "airo: Got weird status %x\n",
+ status & ~STATUS_INTS & ~IGNORE_INTS );
}
- if ( status & ~STATUS_INTS & ~IGNORE_INTS )
- printk( KERN_WARNING
- "airo: Got weird status %x\n",
- status & ~STATUS_INTS & ~IGNORE_INTS );
- OUT4500( apriv, EVACK, status & STATUS_INTS );
- OUT4500( apriv, EVINTEN, savedInterrupts );
-
+
+ if (savedInterrupts)
+ OUT4500( apriv, EVINTEN, savedInterrupts );
+
/* done.. */
- return;
+ return;
}
/*
static u16 IN4500( struct airo_info *ai, u16 reg ) {
unsigned short rc;
-
+
if ( !do8bitIO )
rc = inw( ai->dev->base_addr + reg );
else {
if (ai->flags&FLAG_RADIO_OFF) return SUCCESS;
memset(&cmd, 0, sizeof(cmd));
cmd.cmd = MAC_ENABLE;
- return issuecommand(ai, &cmd, rsp);
+ return lock_issuecommand(ai, &cmd, rsp);
}
static void disable_MAC( struct airo_info *ai ) {
memset(&cmd, 0, sizeof(cmd));
cmd.cmd = MAC_DISABLE; // disable in case already enabled
- issuecommand(ai, &cmd, &rsp);
+ lock_issuecommand(ai, &cmd, &rsp);
}
static void enable_interrupts( struct airo_info *ai ) {
OUT4500( ai, EVINTEN, 0 );
}
-static u16 setup_card(struct airo_info *ai, u8 *mac,
+static u16 setup_card(struct airo_info *ai, u8 *mac,
ConfigRid *config)
{
- Cmd cmd;
+ Cmd cmd;
Resp rsp;
ConfigRid cfg;
int status;
/* The NOP is the first step in getting the card going */
cmd.cmd = NOP;
cmd.parm0 = cmd.parm1 = cmd.parm2 = 0;
- if ( issuecommand( ai, &cmd, &rsp ) != SUCCESS ) {
+ if ( lock_issuecommand( ai, &cmd, &rsp ) != SUCCESS ) {
return ERROR;
}
memset(&cmd, 0, sizeof(cmd));
cmd.cmd = MAC_DISABLE; // disable in case already enabled
- if ( issuecommand( ai, &cmd, &rsp ) != SUCCESS ) {
+ if ( lock_issuecommand( ai, &cmd, &rsp ) != SUCCESS ) {
return ERROR;
}
-
+
// Let's figure out if we need to use the AUX port
cmd.cmd = CMD_ENABLEAUX;
- if (issuecommand(ai, &cmd, &rsp) != SUCCESS) {
+ if (lock_issuecommand(ai, &cmd, &rsp) != SUCCESS) {
printk(KERN_ERR "airo: Error checking for AUX port\n");
return ERROR;
}
cfg = *config;
} else {
tdsRssiRid rssi_rid;
-
+
// general configuration (read/modify/write)
status = readConfigRid(ai, &cfg);
if ( status != SUCCESS ) return ERROR;
if ((status == SUCCESS) && (cap_rid.softCap & 8))
cfg.rmode |= RXMODE_NORMALIZED_RSSI;
else
- printk(KERN_WARNING "airo: unknown received signal level\n");
+ printk(KERN_WARNING "airo: unknown received signal level scale\n");
}
cfg.opmode = adhoc ? MODE_STA_IBSS : MODE_STA_ESS;
-
+
/* Save off the MAC */
for( i = 0; i < 6; i++ ) {
mac[i] = cfg.macAddr[i];
}
- /* Check to see if there are any insmod configured
+ /* Check to see if there are any insmod configured
rates to add */
if ( rates ) {
int i = 0;
if ( rates[0] ) memset(cfg.rates,0,sizeof(cfg.rates));
for( i = 0; i < 8 && rates[i]; i++ ) {
cfg.rates[i] = rates[i];
- }
+ }
}
if ( basic_rate > 0 ) {
int i;
cfg.authType = ai->authtype;
*config = cfg;
}
-
+
/* Setup the SSIDs if present */
if ( ssids[0] ) {
int i = 0;
for( i = 0; i < 3 && ssids[i]; i++ ) {
mySsid.ssids[i].len = strlen(ssids[i]);
- if ( mySsid.ssids[i].len > 32 )
+ if ( mySsid.ssids[i].len > 32 )
mySsid.ssids[i].len = 32;
memcpy(mySsid.ssids[i].ssid, ssids[i],
mySsid.ssids[i].len);
mySsid.ssids[i].len = mySsid.ssids[i].len;
}
}
-
+
status = writeConfigRid(ai, &cfg);
if ( status != SUCCESS ) return ERROR;
-
+
/* Set up the SSID list */
status = writeSsidRid(ai, &mySsid);
if ( status != SUCCESS ) return ERROR;
-
+
/* Grab the initial wep key, we gotta save it for auto_wep */
rc = readWepKeyRid(ai, &wkr, 1);
if (rc == SUCCESS) do {
}
rc = readWepKeyRid(ai, &wkr, 0);
} while(lastindex != wkr.kindex);
-
+
if (auto_wep && !timer_pending(&ai->timer)) {
ai->timer.expires = RUN_AT(HZ*3);
add_timer(&ai->timer);
return SUCCESS;
}
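+/* Take ai->main_lock around issuecommand(); the same lock also serializes
+ * the driver's BAP and RID accesses. */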
+static u16 lock_issuecommand(struct airo_info *ai, Cmd *pCmd, Resp *pRsp) {
+ int rc;
+ long flags;
+
+ spin_lock_irqsave(&ai->main_lock, flags);
+ rc = issuecommand(ai, pCmd, pRsp);
+ spin_unlock_irqrestore(&ai->main_lock, flags);
+ return rc;
+}
+
static u16 issuecommand(struct airo_info *ai, Cmd *pCmd, Resp *pRsp) {
	// I'm really paranoid about letting it run forever!
- int max_tries = 600000;
- int rc = SUCCESS;
- long flags;
+ int max_tries = 600000;
- spin_lock_irqsave(&ai->cmd_lock, flags);
OUT4500(ai, PARAM0, pCmd->parm0);
OUT4500(ai, PARAM1, pCmd->parm1);
OUT4500(ai, PARAM2, pCmd->parm2);
OUT4500(ai, COMMAND, pCmd->cmd);
while ( max_tries-- &&
(IN4500(ai, EVSTAT) & EV_CMD) == 0) {
- if ( IN4500(ai, COMMAND) == pCmd->cmd) {
+ if ( IN4500(ai, COMMAND) == pCmd->cmd) {
// PC4500 didn't notice command, try again
OUT4500(ai, COMMAND, pCmd->cmd);
}
- if (!(max_tries & 255) && !in_interrupt()) {
- set_current_state(TASK_RUNNING);
- schedule();
- }
}
if ( max_tries == -1 ) {
- printk( KERN_ERR
+		printk( KERN_ERR
			"airo: Max tries exceeded when issuing command\n" );
- rc = ERROR;
- goto done;
+ return ERROR;
}
// command completed
pRsp->status = IN4500(ai, STATUS);
pRsp->rsp0 = IN4500(ai, RESP0);
pRsp->rsp1 = IN4500(ai, RESP1);
pRsp->rsp2 = IN4500(ai, RESP2);
-
+
// clear stuck command busy if necessary
if (IN4500(ai, COMMAND) & COMMAND_BUSY) {
OUT4500(ai, EVACK, EV_CLEARCOMMANDBUSY);
}
// acknowledge processing the status/response
OUT4500(ai, EVACK, EV_CMD);
- done:
- spin_unlock_irqrestore(&ai->cmd_lock, flags);
- return rc;
+ return SUCCESS;
}
/* Sets up the bap to start exchanging data.  whichbap should
{
int timeout = 50;
int max_tries = 3;
-
+
OUT4500(ai, SELECT0+whichbap, rid);
OUT4500(ai, OFFSET0+whichbap, offset);
while (1) {
if (status & BAP_BUSY) {
			/* This isn't really a timeout, but it's kinda
close */
- if (timeout--) {
+ if (timeout--) {
continue;
}
} else if ( status & BAP_ERR ) {
/* invalid rid or offset */
- printk( KERN_ERR "airo: BAP error %x %d\n",
+ printk( KERN_ERR "airo: BAP error %x %d\n",
status, whichbap );
return ERROR;
} else if (status & BAP_DONE) { // success
return SUCCESS;
}
if ( !(max_tries--) ) {
- printk( KERN_ERR
+ printk( KERN_ERR
"airo: BAP setup error too many retries\n" );
return ERROR;
}
/* requires call to bap_setup() first */
static int aux_bap_read(struct airo_info *ai, u16 *pu16Dst,
- int bytelen, int whichbap)
+ int bytelen, int whichbap)
{
u16 len;
u16 page;
for (i=0; i<words;) {
int count;
count = (len>>1) < (words-i) ? (len>>1) : (words-i);
- if ( !do8bitIO )
- insw( ai->dev->base_addr+DATA0+whichbap,
+ if ( !do8bitIO )
+ insw( ai->dev->base_addr+DATA0+whichbap,
pu16Dst+i,count );
else
- insb( ai->dev->base_addr+DATA0+whichbap,
+ insb( ai->dev->base_addr+DATA0+whichbap,
pu16Dst+i, count << 1 );
i += count;
if (i<words) {
/* requires call to bap_setup() first */
-static int fast_bap_read(struct airo_info *ai, u16 *pu16Dst,
+static int fast_bap_read(struct airo_info *ai, u16 *pu16Dst,
int bytelen, int whichbap)
{
bytelen = (bytelen + 1) & (~1); // round up to even value
- if ( !do8bitIO )
+ if ( !do8bitIO )
insw( ai->dev->base_addr+DATA0+whichbap, pu16Dst, bytelen>>1 );
else
insb( ai->dev->base_addr+DATA0+whichbap, pu16Dst, bytelen );
}
/* requires call to bap_setup() first */
-static int bap_write(struct airo_info *ai, const u16 *pu16Src,
+static int bap_write(struct airo_info *ai, const u16 *pu16Src,
int bytelen, int whichbap)
{
bytelen = (bytelen + 1) & (~1); // round up to even value
- if ( !do8bitIO )
- outsw( ai->dev->base_addr+DATA0+whichbap,
+ if ( !do8bitIO )
+ outsw( ai->dev->base_addr+DATA0+whichbap,
pu16Src, bytelen>>1 );
else
outsb( ai->dev->base_addr+DATA0+whichbap, pu16Src, bytelen );
long flags;
int rc = SUCCESS;
- spin_lock_irqsave(&ai->bap1_lock, flags);
+ spin_lock_irqsave(&ai->main_lock, flags);
if ( (status = PC4500_accessrid(ai, rid, CMD_ACCESS)) != SUCCESS) {
rc = status;
goto done;
// read the rid length field
bap_read(ai, pBuf, 2, BAP1);
// length for remaining part of rid
- len = min_t(unsigned int, len, le16_to_cpu(*(u16*)pBuf)) - 2;
-
+ len = min(len, (int)le16_to_cpu(*(u16*)pBuf)) - 2;
+
if ( len <= 2 ) {
- printk( KERN_ERR
+ printk( KERN_ERR
"airo: Rid %x has a length of %d which is too short\n",
(int)rid,
(int)len );
goto done;
}
// read remainder of the rid
- if (bap_setup(ai, rid, 2, BAP1) != SUCCESS) {
- rc = ERROR;
- goto done;
- }
rc = bap_read(ai, ((u16*)pBuf)+1, len, BAP1);
done:
- spin_unlock_irqrestore(&ai->bap1_lock, flags);
+ spin_unlock_irqrestore(&ai->main_lock, flags);
return rc;
}
/* Note that we are using BAP1 which is also used by transmit, so
 * make sure this isn't called when a transmit is happening */
-static int PC4500_writerid(struct airo_info *ai, u16 rid,
+static int PC4500_writerid(struct airo_info *ai, u16 rid,
const void *pBuf, int len)
{
u16 status;
long flags;
int rc = SUCCESS;
- spin_lock_irqsave(&ai->bap1_lock, flags);
+ spin_lock_irqsave(&ai->main_lock, flags);
// --- first access so that we can write the rid data
if ( (status = PC4500_accessrid(ai, rid, CMD_ACCESS)) != 0) {
rc = status;
// ---now commit the rid data
rc = PC4500_accessrid(ai, rid, 0x100|CMD_ACCESS);
done:
- spin_unlock_irqrestore(&ai->bap1_lock, flags);
+ spin_unlock_irqrestore(&ai->main_lock, flags);
return rc;
}
cmd.cmd = CMD_ALLOCATETX;
cmd.parm0 = lenPayload;
- if (issuecommand(ai, &cmd, &rsp) != SUCCESS) return 0;
+ if (lock_issuecommand(ai, &cmd, &rsp) != SUCCESS) return 0;
if ( (rsp.status & 0xFF00) != 0) return 0;
/* wait for the allocate event/indication
* It makes me kind of nervous that this can just sit here and spin,
// get the allocated fid and acknowledge
txFid = IN4500(ai, TXALLOCFID);
OUT4500(ai, EVACK, EV_ALLOC);
-
+
/* The CARD is pretty cool since it converts the ethernet packet
* into 802.11. Also note that we don't release the FID since we
* will be using the same one over and over again. */
* releasing the fid. */
txControl = cpu_to_le16(TXCTL_TXOK | TXCTL_TXEX | TXCTL_802_3
| TXCTL_ETHERNET | TXCTL_NORELEASE);
- spin_lock_irqsave(&ai->bap1_lock, flags);
+ spin_lock_irqsave(&ai->main_lock, flags);
if (bap_setup(ai, txFid, 0x0008, BAP1) != SUCCESS) {
- spin_unlock_irqrestore(&ai->bap1_lock, flags);
+ spin_unlock_irqrestore(&ai->main_lock, flags);
return ERROR;
}
bap_write(ai, &txControl, sizeof(txControl), BAP1);
- spin_unlock_irqrestore(&ai->bap1_lock, flags);
+ spin_unlock_irqrestore(&ai->main_lock, flags);
return txFid;
}
/* In general BAP1 is dedicated to transmitting packets.  However,
since we need a BAP when accessing RIDs, we also use BAP1 for that.
Make sure the BAP1 spinlock is held when this is called. */
-static int transmit_802_3_packet(struct airo_info *ai, u16 txFid,
+static int transmit_802_3_packet(struct airo_info *ai, u16 txFid,
char *pPacket, int len)
{
u16 payloadLen;
Cmd cmd;
Resp rsp;
-
+
if (len < 12) {
printk( KERN_WARNING "Short packet %d\n", len );
return ERROR;
}
-
+
// packet is destination[6], source[6], payload[len-12]
// write the payload length and dst/src/payload
if (bap_setup(ai, txFid, 0x0036, BAP1) != SUCCESS) return ERROR;
entry->gid = proc_gid;
entry->data = dev;
SETPROC_OPS(entry, proc_statsdelta_ops);
-
+
/* Setup the Stats */
entry = create_proc_entry("Stats",
S_IFREG | (S_IRUGO&proc_perm),
entry->gid = proc_gid;
entry->data = dev;
SETPROC_OPS(entry, proc_stats_ops);
-
+
/* Setup the Status */
entry = create_proc_entry("Status",
S_IFREG | (S_IRUGO&proc_perm),
entry->gid = proc_gid;
entry->data = dev;
SETPROC_OPS(entry, proc_status_ops);
-
+
/* Setup the Config */
entry = create_proc_entry("Config",
S_IFREG | proc_perm,
int i;
int pos;
struct proc_data *priv = (struct proc_data*)file->private_data;
-
+
if( !priv->rbuffer ) return -EINVAL;
-
+
pos = *offset;
for( i = 0; i+pos < priv->readlen && i < len; i++ ) {
if (put_user( priv->rbuffer[i+pos], buffer+i ))
static ssize_t proc_write( struct file *file,
const char *buffer,
size_t len,
- loff_t *offset )
+ loff_t *offset )
{
int i;
int pos;
struct proc_data *priv = (struct proc_data*)file->private_data;
-
+
if ( !priv->wbuffer ) {
return -EINVAL;
}
-
+
pos = *offset;
-
+
for( i = 0; i + pos < priv->maxwritelen &&
i < len; i++ ) {
if (get_user( priv->wbuffer[i+pos], buffer + i ))
CapabilityRid cap_rid;
StatusRid status_rid;
int i;
-
+
MOD_INC_USE_COUNT;
-
+
dp = inode->u.generic_ip;
-
+
if ((file->private_data = kmalloc(sizeof(struct proc_data ), GFP_KERNEL)) == NULL)
return -ENOMEM;
memset(file->private_data, 0, sizeof(struct proc_data));
kfree (file->private_data);
return -ENOMEM;
}
-
+
readStatusRid(apriv, &status_rid);
readCapabilityRid(apriv, &cap_rid);
-
+
i = sprintf(data->rbuffer, "Status: %s%s%s%s%s%s%s%s%s\n",
status_rid.mode & 1 ? "CFG ": "",
status_rid.mode & 2 ? "ACT ": "",
}
static int proc_stats_rid_open(struct inode*, struct file*, u16);
-static int proc_statsdelta_open( struct inode *inode,
+static int proc_statsdelta_open( struct inode *inode,
struct file *file ) {
if (file->f_mode&FMODE_WRITE) {
return proc_stats_rid_open(inode, file, RID_STATSDELTACLEAR);
return proc_stats_rid_open(inode, file, RID_STATS);
}
-static int proc_stats_rid_open( struct inode *inode,
+static int proc_stats_rid_open( struct inode *inode,
struct file *file,
u16 rid ) {
struct proc_data *data;
int i, j;
int *vals = stats.vals;
MOD_INC_USE_COUNT;
-
-
+
+
dp = inode->u.generic_ip;
-
+
if ((file->private_data = kmalloc(sizeof(struct proc_data ), GFP_KERNEL)) == NULL)
return -ENOMEM;
memset(file->private_data, 0, sizeof(struct proc_data));
kfree (file->private_data);
return -ENOMEM;
}
-
+
readStatsRid(apriv, &stats, rid);
-
+
j = 0;
- for(i=0; (int)statsLabels[i]!=-1 &&
+ for(i=0; (int)statsLabels[i]!=-1 &&
i*4<stats.len; i++){
if (!statsLabels[i]) continue;
if (j+strlen(statsLabels[i])+16>4096) {
Resp rsp;
char *line;
int need_reset = 0;
-
+
if ( !data->writelen ) return;
dp = (struct proc_dir_entry *) inode->u.generic_ip;
-
+
disable_MAC(ai);
readConfigRid(ai, &config);
need_reset = 1;
}
}
-
+
/*** Radio status */
else if (!strncmp(line,"Radio: ", 7)) {
line += 7;
/*** NodeName processing */
else if ( !strncmp( line, "NodeName: ", 10 ) ) {
int j;
-
+
line += 10;
memset( config.nodeName, 0, 16 );
/* Do the name, assume a space between the mode and node name */
for( j = 0; j < 16 && line[j] != '\n'; j++ ) {
config.nodeName[j] = line[j];
}
- }
-
+ }
+
/*** PowerMode processing */
else if ( !strncmp( line, "PowerMode: ", 11 ) ) {
line += 11;
config.powerSaveMode = POWERSAVE_PSP;
} else {
config.powerSaveMode = POWERSAVE_CAM;
- }
+ }
} else if ( !strncmp( line, "DataRates: ", 11 ) ) {
- int v, i = 0, k = 0; /* i is index into line,
+ int v, i = 0, k = 0; /* i is index into line,
k is index to rates */
-
+
line += 11;
while((v = get_dec_u16(line, &i, 3))!=-1) {
config.rates[k++] = (u8)v;
int v, i = 0;
line += 9;
v = get_dec_u16(line, &i, i+3);
- if ( v != -1 )
+ if ( v != -1 )
config.channelSet = (u16)v;
} else if ( !strncmp( line, "XmitPower: ", 11 ) ) {
int v, i = 0;
}
} else if ( !strncmp( line, "LongRetryLimit: ", 16 ) ) {
int v, i = 0;
-
+
line += 16;
v = get_dec_u16(line, &i, 3);
v = (v<0) ? 0 : ((v>255) ? 255 : v);
config.longRetryLimit = (u16)v;
} else if ( !strncmp( line, "ShortRetryLimit: ", 17 ) ) {
int v, i = 0;
-
+
line += 17;
v = get_dec_u16(line, &i, 3);
v = (v<0) ? 0 : ((v>255) ? 255 : v);
config.shortRetryLimit = (u16)v;
} else if ( !strncmp( line, "RTSThreshold: ", 14 ) ) {
int v, i = 0;
-
+
line += 14;
v = get_dec_u16(line, &i, 4);
v = (v<0) ? 0 : ((v>2312) ? 2312 : v);
config.rtsThres = (u16)v;
} else if ( !strncmp( line, "TXMSDULifetime: ", 16 ) ) {
int v, i = 0;
-
+
line += 16;
v = get_dec_u16(line, &i, 5);
v = (v<0) ? 0 : v;
config.txLifetime = (u16)v;
} else if ( !strncmp( line, "RXMSDULifetime: ", 16 ) ) {
int v, i = 0;
-
+
line += 16;
v = get_dec_u16(line, &i, 5);
v = (v<0) ? 0 : v;
config.rxLifetime = (u16)v;
} else if ( !strncmp( line, "TXDiversity: ", 13 ) ) {
- config.txDiversity =
+ config.txDiversity =
(line[13]=='l') ? 1 :
((line[13]=='r')? 2: 3);
} else if ( !strncmp( line, "RXDiversity: ", 13 ) ) {
- config.rxDiversity =
+ config.rxDiversity =
(line[13]=='l') ? 1 :
((line[13]=='r')? 2: 3);
} else if ( !strncmp( line, "FragThreshold: ", 15 ) ) {
int v, i = 0;
-
+
line += 15;
v = get_dec_u16(line, &i, 4);
v = (v<256) ? 256 : ((v>2312) ? 2312 : v);
struct airo_info *ai = (struct airo_info*)dev->priv;
ConfigRid config;
int i;
-
+
MOD_INC_USE_COUNT;
-
+
dp = (struct proc_dir_entry *) inode->u.generic_ip;
-
+
if ((file->private_data = kmalloc(sizeof(struct proc_data ), GFP_KERNEL)) == NULL)
return -ENOMEM;
memset(file->private_data, 0, sizeof(struct proc_data));
memset( data->wbuffer, 0, 2048 );
data->maxwritelen = 2048;
data->on_close = proc_config_on_close;
-
+
readConfigRid(ai, &config);
-
- i = sprintf( data->rbuffer,
+
+ i = sprintf( data->rbuffer,
"Mode: %s\n"
"Radio: %s\n"
"NodeName: %-16s\n"
"DataRates: %d %d %d %d %d %d %d %d\n"
"Channel: %d\n"
"XmitPower: %d\n",
- config.opmode == 0 ? "adhoc" :
+ config.opmode == 0 ? "adhoc" :
config.opmode == 1 ? "ESS" :
- config.opmode == 2 ? "AP" :
+ config.opmode == 2 ? "AP" :
config.opmode == 3 ? "AP RPTR" : "Error",
ai->flags&FLAG_RADIO_OFF ? "off" : "on",
config.nodeName,
SsidRid SSID_rid;
int i;
int offset = 0;
-
+
if ( !data->writelen ) return;
-
+
memset( &SSID_rid, 0, sizeof( SSID_rid ) );
-
+
for( i = 0; i < 3; i++ ) {
int j;
for( j = 0; j+offset < data->writelen && j < 32 &&
if ( j == 0 ) break;
SSID_rid.ssids[i].len = j;
offset += j;
- while( data->wbuffer[offset] != '\n' &&
+ while( data->wbuffer[offset] != '\n' &&
offset < data->writelen ) offset++;
offset++;
}
struct airo_info *ai = (struct airo_info*)dev->priv;
APListRid APList_rid;
int i;
-
+
if ( !data->writelen ) return;
-
+
memset( &APList_rid, 0, sizeof(APList_rid) );
APList_rid.len = sizeof(APList_rid);
-
+
for( i = 0; i < 4 && data->writelen >= (i+1)*6*3; i++ ) {
int j;
for( j = 0; j < 6*3 && data->wbuffer[j+i*6*3]; j++ ) {
int len ) {
int rc;
Resp rsp;
-
+
disable_MAC(ai);
rc = PC4500_writerid(ai, rid, rid_data, len);
enable_MAC(ai, &rsp);
memcpy( wkr.mac, macaddr, 6 );
printk(KERN_INFO "Setting key %d\n", index);
}
-
+
writeWepKeyRid(ai, &wkr, perm);
return 0;
}
int j = 0;
memset(key, 0, sizeof(key));
-
+
dp = (struct proc_dir_entry *) inode->u.generic_ip;
data = (struct proc_data *)file->private_data;
if ( !data->writelen ) return;
-
+
if (data->wbuffer[0] >= '0' && data->wbuffer[0] <= '3' &&
(data->wbuffer[1] == ' ' || data->wbuffer[1] == '\n')) {
index = data->wbuffer[0] - '0';
u16 lastindex;
int j=0;
int rc;
-
+
MOD_INC_USE_COUNT;
-
+
dp = (struct proc_dir_entry *) inode->u.generic_ip;
-
+
if ((file->private_data = kmalloc(sizeof(struct proc_data ), GFP_KERNEL)) == NULL)
return -ENOMEM;
memset(file->private_data, 0, sizeof(struct proc_data));
}
memset( data->wbuffer, 0, 80 );
data->on_close = proc_wepkey_on_close;
-
+
ptr = data->rbuffer;
strcpy(ptr, "No wep keys\n");
rc = readWepKeyRid(ai, &wkr, 1);
SsidRid SSID_rid;
MOD_INC_USE_COUNT;
-
+
dp = (struct proc_dir_entry *) inode->u.generic_ip;
-
+
if ((file->private_data = kmalloc(sizeof(struct proc_data ), GFP_KERNEL)) == NULL)
return -ENOMEM;
memset(file->private_data, 0, sizeof(struct proc_data));
}
memset( data->wbuffer, 0, 33*3 );
data->on_close = proc_SSID_on_close;
-
+
readSsidRid(ai, &SSID_rid);
ptr = data->rbuffer;
for( i = 0; i < 3; i++ ) {
int j;
if ( !SSID_rid.ssids[i].len ) break;
- for( j = 0; j < 32 &&
- j < SSID_rid.ssids[i].len &&
+ for( j = 0; j < 32 &&
+ j < SSID_rid.ssids[i].len &&
SSID_rid.ssids[i].ssid[j]; j++ ) {
- *ptr++ = SSID_rid.ssids[i].ssid[j];
+ *ptr++ = SSID_rid.ssids[i].ssid[j];
}
*ptr++ = '\n';
}
APListRid APList_rid;
MOD_INC_USE_COUNT;
-
+
dp = (struct proc_dir_entry *) inode->u.generic_ip;
-
+
if ((file->private_data = kmalloc(sizeof(struct proc_data ), GFP_KERNEL)) == NULL)
return -ENOMEM;
memset(file->private_data, 0, sizeof(struct proc_data));
}
memset( data->wbuffer, 0, data->maxwritelen );
data->on_close = proc_APList_on_close;
-
+
readAPListRid(ai, &APList_rid);
ptr = data->rbuffer;
for( i = 0; i < 4; i++ ) {
int rc;
/* If doLoseSync is not 1, we won't do a Lose Sync */
int doLoseSync = -1;
-
+
MOD_INC_USE_COUNT;
-
+
dp = (struct proc_dir_entry *) inode->u.generic_ip;
-
+
if ((file->private_data = kmalloc(sizeof(struct proc_data ), GFP_KERNEL)) == NULL)
return -ENOMEM;
memset(file->private_data, 0, sizeof(struct proc_data));
data->maxwritelen = 0;
data->wbuffer = 0;
data->on_close = 0;
-
+
if (file->f_mode & FMODE_WRITE) {
if (!(file->f_mode & FMODE_READ)) {
Cmd cmd;
Resp rsp;
-
+
memset(&cmd, 0, sizeof(cmd));
cmd.cmd=CMD_LISTBSS;
- issuecommand(ai, &cmd, &rsp);
+ lock_issuecommand(ai, &cmd, &rsp);
data->readlen = 0;
return 0;
}
return 0;
}
-static int proc_close( struct inode *inode, struct file *file )
+static int proc_close( struct inode *inode, struct file *file )
{
struct proc_data *data = (struct proc_data *)file->private_data;
if ( data->on_close != NULL ) data->on_close( inode, file );
struct net_device *dev = (struct net_device*)data;
struct airo_info *apriv = (struct airo_info *)dev->priv;
u16 linkstat = IN4500(apriv, LINKSTAT);
-
+
if (linkstat != 0x400 ) {
/* We don't have a link so try changing the authtype */
ConfigRid config = apriv->config;
if ( auto_wep ) {
struct airo_info *apriv=dev->priv;
struct timer_list *timer = &apriv->timer;
-
+
timer->function = timer_func;
timer->data = (u_long)dev;
init_timer(timer);
apriv->authtype = AUTH_SHAREDKEY;
}
-
+
node->dev = dev;
node->next = airo_devices;
airo_devices = node;
while( *p && ( (*p)->dev != dev ) )
p = &(*p)->next;
if ( *p && (*p)->dev == dev )
- *p = (*p)->next;
+ *p = (*p)->next;
}
#ifdef CONFIG_PCI
-static int __devinit airo_pci_probe(struct pci_dev *pdev,
+static int __devinit airo_pci_probe(struct pci_dev *pdev,
const struct pci_device_id *pent)
{
- pdev->driver_data = init_airo_card(pdev->irq,
+ pdev->driver_data = init_airo_card(pdev->irq,
pdev->resource[2].start, 0);
if (!pdev->driver_data) {
return -ENODEV;
static int __init airo_init_module( void )
{
int i, rc = 0, have_isa_dev = 0;
-
+
airo_entry = create_proc_entry("aironet",
S_IFDIR | airo_perm,
proc_root_driver);
airo_entry->uid = proc_uid;
airo_entry->gid = proc_gid;
-
+
for( i = 0; i < 4 && io[i] && irq[i]; i++ ) {
- printk( KERN_INFO
+ printk( KERN_INFO
"airo: Trying to configure ISA adapter at irq=%d io=0x%x\n",
irq[i], io[i] );
if (init_airo_card( irq[i], io[i], 0 ))
have_isa_dev = 1;
}
-
+
#ifdef CONFIG_PCI
printk( KERN_INFO "airo: Probing for PCI adapters\n" );
rc = pci_module_init(&airo_driver);
StatusRid status_rid; /* Card status info */
#ifdef CISCO_EXT
- if (cmd != SIOCGIWPRIV && cmd != AIROIOCTL && cmd != AIROIDIFC)
+ if (cmd != SIOCGIWPRIV && cmd != AIROIOCTL && cmd != AIROIDIFC
+#ifdef AIROOLDIOCTL
+ && cmd != AIROOLDIOCTL && cmd != AIROOLDIDIFC
+#endif
+ )
#endif /* CISCO_EXT */
{
/* If the command read some stuff, we better get it out of
/* Hum... Should put the right values there */
range.max_qual.qual = 10;
- range.max_qual.level = 0;
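+		/* Signal levels are 8-bit values biased by 0x100, so
+		 * 0x100 - 120 == 136 is read back as -120 dBm. */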
+ range.max_qual.level = 0x100 - 120; /* -120 dBm */
range.max_qual.noise = 0;
range.sensitivity = 65535;
range.txpower_capa = IW_TXPOW_MWATT;
#endif /* WIRELESS_EXT > 9 */
#if WIRELESS_EXT > 10
- range.we_version_source = 11;
+ range.we_version_source = 12;
range.we_version_compiled = WIRELESS_EXT;
range.retry_capa = IW_RETRY_LIMIT | IW_RETRY_LIFETIME;
range.retry_flags = IW_RETRY_LIMIT;
range.min_r_time = 1024;
range.max_r_time = 65535 * 1024;
#endif /* WIRELESS_EXT > 10 */
+#if WIRELESS_EXT > 11
+ /* Experimental measurements - boundary 11/5.5 Mb/s */
+ /* Note : with or without the (local->rssi), results
+ * are somewhat different. - Jean II */
+ range.avg_qual.qual = 6;
+ if (local->rssi)
+ range.avg_qual.level = 186; /* -70 dBm */
+ else
+ range.avg_qual.level = 176; /* -80 dBm */
+ range.avg_qual.noise = 0;
+#endif /* WIRELESS_EXT > 11 */
if (copy_to_user(wrq->u.data.pointer, &range, sizeof(struct iw_range)))
rc = -EFAULT;
}
break;
- case SIOCSIWPOWER:
+ case SIOCSIWPOWER:
if (wrq->u.power.disabled) {
if ((config.rmode & 0xFF) >= RXMODE_RFMON) {
rc = -EINVAL;
if (BSSList.index == 0xffff) break;
}
if (!i) {
- for (i = 0;
+ for (i = 0;
i < min(IW_MAX_AP, 4) &&
(status_rid.bssid[i][0]
& status_rid.bssid[i][1]
| status_rid.bssid[i][4]
| status_rid.bssid[i][5]);
i++) {
- memcpy(s[i].sa_data,
+ memcpy(s[i].sa_data,
status_rid.bssid[i], 6);
s[i].sa_family = ARPHRD_ETHER;
}
rc = -EFAULT;
}
wrq->u.data.length = i;
- if (copy_to_user(wrq->u.data.pointer, &s,
+ if (copy_to_user(wrq->u.data.pointer, &s,
sizeof(struct sockaddr)*i))
rc = -EFAULT;
}
#ifdef CISCO_EXT
case AIROIDIFC:
+#ifdef AIROOLDIDIFC
+ case AIROOLDIDIFC:
+#endif
{
int val = AIROMAGIC;
aironet_ioctl com;
rc = -EFAULT;
}
break;
-
+
case AIROIOCTL:
- /* Get the command struct and hand it off for evaluation by
+#ifdef AIROOLDIOCTL
+ case AIROOLDIOCTL:
+#endif
+ /* Get the command struct and hand it off for evaluation by
* the proper subfunction
*/
{
* TODO :
	 * o Check if it works in Ad-Hoc mode (otherwise, use SPY, as in wvlan_cs)
* o Find the noise level
- * o Convert values to dBm
- * o Fill out discard.misc with something interesting
*
* Jean
*/
struct airo_info *local = (struct airo_info*) dev->priv;
StatusRid status_rid;
StatsRid stats_rid;
- int *vals = stats_rid.vals;
+ u32 *vals = stats_rid.vals;
/* Get stats out of the card */
readStatusRid(local, &status_rid);
* specific problems */
local->wstats.discard.nwid = vals[56] + vals[57] + vals[58];/* SSID Mismatch */
local->wstats.discard.code = vals[6];/* RxWepErr */
- local->wstats.discard.misc = vals[1] + vals[2] + vals[3] + vals[4] + vals[30] + vals[32];
+#if WIRELESS_EXT > 11
+ local->wstats.discard.fragment = vals[30];
+ local->wstats.discard.retries = vals[10];
+ local->wstats.discard.misc = vals[1] + vals[32];
+ local->wstats.miss.beacon = vals[34];
+#else /* WIRELESS_EXT > 11 */
+ local->wstats.discard.misc = vals[1] + vals[30] + vals[32];
+#endif /* WIRELESS_EXT > 11 */
return (&local->wstats);
}
#endif /* WIRELESS_EXT */
#ifdef CISCO_EXT
/*
- * This just translates from driver IOCTL codes to the command codes to
- * feed to the radio's host interface. Things can be added/deleted
- * as needed. This represents the READ side of control I/O to
+ * This just translates from driver IOCTL codes to the command codes to
+ * feed to the radio's host interface. Things can be added/deleted
+ * as needed. This represents the READ side of control I/O to
* the card
*/
static int readrids(struct net_device *dev, aironet_ioctl *comp) {
unsigned short ridcode;
- unsigned char iobuf[2048];
+ unsigned char iobuf[2048];
switch(comp->command)
{
case AIROGSTATSD32: ridcode = RID_STATSDELTA; break;
case AIROGSTATSC32: ridcode = RID_STATS; break;
default:
- return -EINVAL;
+ return -EINVAL;
break;
}
PC4500_readrid((struct airo_info *)dev->priv,ridcode,iobuf,sizeof(iobuf));
/* get the count of bytes in the rid docs say 1st 2 bytes is it.
- * then return it to the user
+ * then return it to the user
* 9/22/2000 Honor user given length
*/
if (copy_to_user(comp->data, iobuf,
- min_t(unsigned int, comp->len, sizeof(iobuf))))
+ min((int)comp->len, (int)sizeof(iobuf))))
return -EFAULT;
return 0;
}
int ridcode;
Resp rsp;
static int (* writer)(struct airo_info *, u16 rid, const void *, int);
- unsigned char iobuf[2048];
+ unsigned char iobuf[2048];
/* Only super-user can write RIDs */
if (!capable(CAP_NET_ADMIN))
case AIROPWEPKEY: ridcode = RID_WEP_TEMP; writer = PC4500_writerid;
break;
- /* this is not really a rid but a command given to the card
+ /* this is not really a rid but a command given to the card
* same with MAC off
*/
case AIROPMACON:
return -EIO;
return 0;
- /*
+ /*
* Evidently this code in the airo driver does not get a symbol
	 * as disable_MAC.  It's probably so short that the compiler does not generate one.
*/
PC4500_readrid(dev->priv,ridcode,iobuf,sizeof(iobuf));
if (copy_to_user(comp->data, iobuf,
- min_t(unsigned int, comp->len, sizeof(iobuf))))
+ min((int)comp->len, (int)sizeof(iobuf))))
return -EFAULT;
return 0;
*****************************************************************************
*/
-/*
+/*
* Flash command switch table
*/
#define FLASH_COMMAND 0x7e7e
-/*
+/*
* STEP 1)
- * Disable MAC and do soft reset on
- * card.
+ * Disable MAC and do soft reset on
+ * card.
*/
int cmdreset(struct airo_info *ai) {
printk(KERN_INFO "Waitbusy hang before RESET\n");
return -EBUSY;
}
-
+
OUT4500(ai,COMMAND,CMD_SOFTRESET);
set_current_state (TASK_UNINTERRUPTIBLE);
}
/* STEP 2)
- * Put the card in legendary flash
+ * Put the card in legendary flash
* mode
*/
OUT4500(ai, COMMAND,0x10);
set_current_state (TASK_UNINTERRUPTIBLE);
schedule_timeout (HZ/2); /* 500ms delay */
-
+
if(!waitbusy(ai)) {
printk(KERN_INFO "Waitbusy hang after setflash mode\n");
return -EIO;
return 0;
}
-/* Put character to SWS0 wait for dwelltime
- * x 50us for echo .
+/* Put a character to SWS0 and wait dwelltime
+ * x 50us for the echo.
*/
int flashpchar(struct airo_info *ai,int byte,int dwelltime) {
if(dwelltime == 0 )
dwelltime = 200;
-
+
waittime=dwelltime;
/* Wait for busy bit d15 to go false indicating buffer empty */
do {
rchar = IN4500(ai,SWS1);
-
+
if(dwelltime && !(0x8000 & rchar)){
dwelltime -= 10;
mdelay(10);
return -EIO;
}
-/*
- * Transfer 32k of firmware data from user buffer to our buffer and
- * send to the card
+/*
+ * Transfer 32k of firmware data from user buffer to our buffer and
+ * send to the card
*/
int flashputbuf(struct airo_info *ai){
for(nwords=0;nwords != FLASHSIZE / 2;nwords++){
OUT4500(ai,AUXDATA,ai->flash[nwords] & 0xffff);
}
-
+
OUT4500(ai,SWS0,0x8000);
return 0;
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
- POSSIBILITY OF SUCH DAMAGE.
+ POSSIBILITY OF SUCH DAMAGE.
*/
module_init(airo_init_module);
* Fix three endian-ness bugs
* Support dual function SYM53C885E ethernet chip
+ LK1.1.5 (val@nmt.edu):
+ * Fix forced full-duplex bug I introduced
+
*/
#define DRV_NAME "yellowfin"
-#define DRV_VERSION "1.05+LK1.1.3"
+#define DRV_VERSION "1.05+LK1.1.5"
#define DRV_RELDATE "May 10, 2001"
#define PFX DRV_NAME ": "
};
enum capability_flags {
HasMII=1, FullTxStatus=2, IsGigabit=4, HasMulticastBug=8, FullRxStatus=16,
- HasMACAddrBug=32, /* Only on early revs. */
+ HasMACAddrBug=32, DontUseEeprom=64, /* Only on early revs. */
};
/* The PCI I/O space extent. */
#define YELLOWFIN_SIZE 0x100
PCI_IOTYPE, YELLOWFIN_SIZE,
FullTxStatus | IsGigabit | HasMulticastBug | HasMACAddrBug},
{"Symbios SYM83C885", { 0x07011000, 0xffffffff},
- PCI_IOTYPE, YELLOWFIN_SIZE, HasMII | IsGigabit | FullTxStatus },
+ PCI_IOTYPE, YELLOWFIN_SIZE, HasMII | DontUseEeprom },
{0,},
};
#endif
irq = pdev->irq;
- if (drv_flags & IsGigabit)
+ if (drv_flags & DontUseEeprom)
for (i = 0; i < 6; i++)
dev->dev_addr[i] = inb(ioaddr + StnAddr + i);
else {
0012 53c895a
0020 53c1010 Ultra3 SCSI Adapter
0021 53c1010 66MHz Ultra3 SCSI Adapter
+ 0030 53c1030
+ 0040 53c1035
008f 53c875J
1092 8000 FirePort 40 SCSI Controller
1092 8760 FirePort 40 Dual SCSI Host Adapter
+ 0621 FC909
+ 0622 FC929
+ 0623 FC929 LAN
+ 0624 FC919
+ 0625 FC919 LAN
0701 83C885
0702 Yellowfin G-NIC gigabit ethernet
1318 0000 PEI100X
110a Siemens Nixdorf AG
0002 Pirahna 2-port
0005 Tulip controller, power management, switch extender
+ 2102 DSCC4 WAN adapter
4942 FPGA I-Bus Tracer for MBD
6120 SZB6120
110b Chromatic Research Inc.
1148 5841 FDDI SK-5841 (SK-NET FDDI-FP64)
1148 5843 FDDI SK-5843 (SK-NET FDDI-LP64)
1148 5844 FDDI SK-5844 (SK-NET FDDI-LP64 DAS)
- 4200 Token ring adaptor
+ 4200 Token Ring adapter
4300 Gigabit Ethernet
1148 9821 SK-9821 (1000Base-T single link)
1148 9822 SK-9822 (1000Base-T dual link)
14e4 0007 NetXtreme BCM5701 1000BaseSX
14e4 0008 NetXtreme BCM5701 1000BaseTX
14e4 8008 NetXtreme BCM5701 1000BaseTX
+ 5820 BCM5820 Crypto Accelerator
14e5 Pixelfusion Ltd
14e6 SHINING Technology Inc
14e7 3CX
0e11 b0c6 Embedded NC3120 with Wake on LAN
0e11 b0c7 Embedded NC3121
0e11 b0d7 NC3121 with Wake on LAN
- 0e11 b0dd NC3131
+ 0e11 b0dd NC3131 (82558B)
0e11 b0de NC3132
0e11 b0e1 NC3133
0e11 b144 NC3123 (82559)
1a21 82840 840 (Carmel) Chipset Host Bridge (Hub A)
1a23 82840 840 (Carmel) Chipset AGP Bridge
1a24 82840 840 (Carmel) Chipset PCI Bridge (Hub B)
+ 1a30 82845 845 (Brookdale) Chipset Host Bridge
+ 1a31 82845 845 (Brookdale) Chipset AGP Bridge
2410 82801AA ISA Bridge (LPC)
2411 82801AA IDE
2412 82801AA USB
* determined for each queue request anew.
*/
-#if BITS_PER_LONG > 32
#define DATASEGS_PER_COMMAND 2
#define DATASEGS_PER_CONT 5
-#else
-#define DATASEGS_PER_COMMAND 3
-#define DATASEGS_PER_CONT 7
-#endif
#define QLOGICFC_REQ_QUEUE_LEN 127 /* must be power of two - 1 */
#define QLOGICFC_MAX_SG(ql) (DATASEGS_PER_COMMAND + (((ql) > 0) ? DATASEGS_PER_CONT*((ql) - 1) : 0))
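+/* For illustration: QLOGICFC_MAX_SG(QLOGICFC_REQ_QUEUE_LEN) == 2 + 5 * 126 == 632 segments. */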
return -EFAULT;
/* enumerate busses */
- read_lock_irq (&usb_bus_list_lock);
+ down (&usb_bus_list_lock);
for (buslist = usb_bus_list.next; buslist != &usb_bus_list; buslist = buslist->next) {
/* print devices for this bus */
bus = list_entry(buslist, struct usb_bus, bus_list);
return ret;
total_written += ret;
}
- read_unlock_irq (&usb_bus_list_lock);
+ up (&usb_bus_list_lock);
return total_written;
}
if (list->head == list->tail) {
add_wait_queue(&list->hiddev->wait, &wait);
- current->state = TASK_INTERRUPTIBLE;
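+		/* set_current_state() includes the memory barrier that a plain
+		 * assignment to current->state lacks on SMP. */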
+ set_current_state(TASK_INTERRUPTIBLE);
while (list->head == list->tail) {
schedule();
}
- current->state = TASK_RUNNING;
+ set_current_state(TASK_RUNNING);
remove_wait_queue(&list->hiddev->wait, &wait);
}
struct list_head *list;
struct usb_bus *bus;
- read_lock_irq (&usb_bus_list_lock);
+ down (&usb_bus_list_lock);
for (list = usb_bus_list.next; list != &usb_bus_list; list = list->next) {
bus = list_entry(list, struct usb_bus, bus_list);
if (bus->busnum == busnr) {
- read_unlock_irq (&usb_bus_list_lock);
+ up (&usb_bus_list_lock);
return bus;
}
}
- read_unlock_irq (&usb_bus_list_lock);
+ up (&usb_bus_list_lock);
return NULL;
}
if (i < 2+NRSPECIAL)
return 0;
i -= 2+NRSPECIAL;
- read_lock_irq (&usb_bus_list_lock);
+ down (&usb_bus_list_lock);
for (list = usb_bus_list.next; list != &usb_bus_list; list = list->next) {
if (i > 0) {
i--;
break;
filp->f_pos++;
}
- read_unlock_irq (&usb_bus_list_lock);
+ up (&usb_bus_list_lock);
return 0;
}
}
list_add_tail(&inode->u.usbdev_i.slist, &s->u.usbdevfs_sb.ilist);
list_add_tail(&inode->u.usbdev_i.dlist, &special[i].inodes);
}
- read_lock_irq (&usb_bus_list_lock);
+ down (&usb_bus_list_lock);
for (blist = usb_bus_list.next; blist != &usb_bus_list; blist = blist->next) {
bus = list_entry(blist, struct usb_bus, bus_list);
new_bus_inode(bus, s);
recurse_new_dev_inode(bus->root_hub, s);
}
- read_unlock_irq (&usb_bus_list_lock);
+ up (&usb_bus_list_lock);
return s;
out_no_root:
init_waitqueue_head(&awd.wqh);
awd.done = 0;
- current->state = TASK_INTERRUPTIBLE;
+ set_current_state(TASK_INTERRUPTIBLE);
add_wait_queue(&awd.wqh, &wait);
urb->context = &awd;
status = usb_submit_urb(urb);
if (status) {
// something went wrong
usb_free_urb(urb);
- current->state = TASK_RUNNING;
+ set_current_state(TASK_RUNNING);
remove_wait_queue(&awd.wqh, &wait);
return status;
}
while (timeout && !awd.done)
timeout = schedule_timeout(timeout);
- current->state = TASK_RUNNING;
+ set_current_state(TASK_RUNNING);
remove_wait_queue(&awd.wqh, &wait);
if (!timeout) {
/* -*- linux-c -*- */
/*
- * Driver for USB Scanners (linux-2.4.0)
+ * Driver for USB Scanners (linux-2.4.12)
*
- * Copyright (C) 1999, 2000 David E. Nelson
+ * Copyright (C) 1999, 2000, 2001 David E. Nelson
*
* Portions may be copyright Brad Keryan and Michael Gee.
*
* read_timeout. Thanks to Mark W. Webb <markwebb@adelphia.net> for
* reporting this bug.
* - Added Epson Perfection 1640SU and 1640SU Photo. Thanks to
- * Jean-Luc <f5ibh@db0bm.ampr.org>.
+ * Jean-Luc <f5ibh@db0bm.ampr.org> and Manuel
+ * Pelayo <Manuel.Pelayo@sesips.org>. Reported to work fine by Manuel.
*
- * 0.4.6 08/16/2001 Yves Duret <yduret@mandrakesoft.com>
- * - added devfs support (from printer.c)
- *
- * TODO
+ * 0.4.6 9/27/2001
+ * - Added IOCTL's to report back scanner USB ID's. Thanks to
+ * Karl Heinz <khk@lynx.phpwebhosting.com>
+ * - Added Umax Astra 2100U ID's. Thanks to Ron
+ * Wellsted <ron@wellsted.org.uk>.
+ * and Manuel Pelayo <Manuel.Pelayo@sesips.org>.
+ * - Added HP 3400 ID's. Thanks to Harald Hannelius <harald@iki.fi>
+ * and Bertrik Sikken <bertrik@zonnet.nl>. Reported to work at
+ *   http://home.zonnet.nl/bertrik/hp3300c/hp3300c.htm.
+ * - Added Minolta Dimage Scan Dual II ID's. Thanks to Jose Paulo
+ * Moitinho de Almeida <moitinho@civil.ist.utl.pt>
+ * - Confirmed addition for SnapScan E20. Thanks to Steffen Hübner
+ * <hueb_s@gmx.de>.
+ * - Added Lifetec LT9385 ID's. Thanks to Van Bruwaene Kris
+ * <krvbr@yahoo.co.uk>
+ * - Added Agfa SnapScan e26 ID's. Reported to work with SANE
+ * 1.0.5. Thanks to Falk Sauer <falk@mgnkatze.franken.de>.
+ * - Added HP 4300 ID's. Thanks to Stefan Schlosser
+ * <castla@grmmbl.org>.
+ * - Added Relisis Episode ID's. Thanks to Manfred
+ * Morgner <odb-devel@gmx.net>.
+ * - Added many Acer ID's. Thanks to Oliver
+ * Schwartz <Oliver.Schwartz@gmx.de>.
+ * - Added Snapscan e40 ID's. Thanks to Oliver
+ * Schwartz <Oliver.Schwartz@gmx.de>.
+ * - Thanks to Oliver Neukum <Oliver.Neukum@lrz.uni-muenchen.de>
+ * for helping with races.
+ * - Added Epson Perfection 1650 ID's. Thanks to Karl Heinz
+ * Kremer <khk@khk.net>.
+ * - Added Epson Perfection 2450 ID's (aka GT-9700 for the Japanese
+ * market). Thanks to Karl Heinz Kremer <khk@khk.net>.
+ * - Added Mustek 600 USB ID's. Thanks to Marcus
+ * Alanen <maalanen@ra.abo.fi>.
+ * - Added Acer ScanPrisa 1240UT ID's. Thanks to Morgan
+ * Collins <sirmorcant@morcant.org>.
+ * - Incorporated devfs patches!! Thanks to Tom Rini
+ * <trini@kernel.crashing.org>, Pavel Roskin <proski@gnu.org>,
+ * Greg KH <greg@kroah.com>, Yves Duret <yduret@mandrakesoft.com>,
+ * Flavio Stanchina <flavio.stanchina@tin.it>.
+ * - Removed Minolta ScanImage II. This scanner uses USB SCSI. Thanks
+ * to Oliver Neukum <Oliver.Neukum@lrz.uni-muenchen.de> for pointing
+ * this out.
+ * - Added additional SMP locking. Thanks to David Brownell and
+ * Oliver Neukum for their help.
+ * - Added version reporting - reports for both module load and modinfo
+ * - Started path to hopefully straighten/clean out ioctl()'s.
+ * - Users are now notified to consult the Documentation/usb/scanner.txt
+ * for common error messages rather than the maintainer.
*
+ * TODO
* - Performance
* - Select/poll methods
* - More testing
*/
#include "scanner.h"
-
static void
irq_scanner(struct urb *urb)
{
* all I want to do with it -- or somebody else for that matter.
*/
- struct scn_usb_data *scn = urb->context;
- unsigned char *data = &scn->button;
+ struct scn_usb_data *scn;
+ unsigned char *data;
+ scn = urb->context;
+ down(&(scn->sem));
+ data = &scn->button;
data += 0; /* Keep gcc from complaining about unused var */
if (urb->status) {
+ up(&(scn->sem));
return;
}
dbg("irq_scanner(%d): data:%x", scn->scn_minor, *data);
+ up(&(scn->sem));
return;
}
-
static int
open_scanner(struct inode * inode, struct file * file)
{
int err=0;
- lock_kernel();
+ MOD_INC_USE_COUNT;
+
+ down(&scn_mutex);
scn_minor = USB_SCN_MINOR(inode);
dbg("open_scanner: scn_minor:%d", scn_minor);
if (!p_scn_table[scn_minor]) {
+ up(&scn_mutex);
err("open_scanner(%d): Unable to access minor data", scn_minor);
- err = -ENODEV;
- goto out_error;
+ return -ENODEV;
}
scn = p_scn_table[scn_minor];
dev = scn->scn_dev;
+ down(&(scn->sem)); /* Now protect the scn_usb_data structure */
+
+ up(&scn_mutex); /* Now handled by the above */
+
if (!dev) {
err("open_scanner(%d): Scanner device not present", scn_minor);
err = -ENODEV;
scn->isopen = 1;
- file->private_data = scn; /* Used by the read and write metheds */
+ file->private_data = scn; /* Used by the read and write methods */
- MOD_INC_USE_COUNT;
out_error:
- unlock_kernel();
+ up(&(scn->sem)); /* Wake up any possible contending processes */
+
+ if (err)
+ MOD_DEC_USE_COUNT;
return err;
}
return -ENODEV;
}
- scn = p_scn_table[scn_minor];
+ down(&scn_mutex);
+ scn = p_scn_table[scn_minor];
+ down(&(scn->sem));
scn->isopen = 0;
file->private_data = NULL;
+ up(&scn_mutex);
+ up(&(scn->sem));
+
MOD_DEC_USE_COUNT;
return 0;
scn = file->private_data;
+ down(&(scn->sem));
+
scn_minor = scn->scn_minor;
obuf = scn->obuf;
file->f_dentry->d_inode->i_atime = CURRENT_TIME;
- down(&(scn->gen_lock));
-
while (count > 0) {
if (signal_pending(current)) {
- ret = -EINTR;
+ ret = -ERESTARTSYS;
break;
}
ret = result;
break;
} else if (result < 0) { /* We should not get any I/O errors */
- warn("write_scanner(%d): funky result: %d. Please notify the maintainer.", scn_minor, result);
+			warn("write_scanner(%d): funky result: %d. Consult Documentation/usb/scanner.txt.", scn_minor, result);
ret = -EIO;
break;
}
break;
}
}
- up(&(scn->gen_lock));
+ up(&(scn->sem));
mdelay(5); /* This seems to help with SANE queries */
return ret ? ret : bytes_written;
}
scn = file->private_data;
+ down(&(scn->sem));
+
scn_minor = scn->scn_minor;
ibuf = scn->ibuf;
atime of
the device
node */
- down(&(scn->gen_lock));
-
while (count > 0) {
if (signal_pending(current)) {
- ret = -EINTR;
+ ret = -ERESTARTSYS;
break;
}
ret = result;
break;
} else if ((result < 0) && (result != USB_ST_DATAUNDERRUN)) {
- warn("read_scanner(%d): funky result:%d. Please notify the maintainer.", scn_minor, (int)result);
+ warn("read_scanner(%d): funky result:%d. Consult Documentation/usb/scanner.txt.", scn_minor, (int)result);
ret = -EIO;
break;
}
break;
}
}
- up(&(scn->gen_lock));
-
+ up(&(scn->sem));
return ret ? ret : bytes_read;
}
-#ifdef SCN_IOCTL
static int
ioctl_scanner(struct inode *inode, struct file *file,
unsigned int cmd, unsigned long arg)
{
struct usb_device *dev;
- int result;
-
kdev_t scn_minor;
scn_minor = USB_SCN_MINOR(inode);
switch (cmd)
{
- case IOCTL_SCANNER_VENDOR :
+ case SCANNER_IOCTL_VENDOR :
return (put_user(dev->descriptor.idVendor, (unsigned int *) arg));
- case IOCTL_SCANNER_PRODUCT :
+ case SCANNER_IOCTL_PRODUCT :
return (put_user(dev->descriptor.idProduct, (unsigned int *) arg));
+#ifdef PV8630
case PV8630_IOCTL_INREQUEST :
{
+ int result;
+
struct {
__u8 data;
__u8 request;
}
case PV8630_IOCTL_OUTREQUEST :
{
+ int result;
+
struct {
__u8 request;
__u16 value;
return result;
}
+#endif /* PV8630 */
+ case SCANNER_IOCTL_CTRLMSG:
+ {
+ struct ctrlmsg_ioctl {
+ devrequest req;
+ void *data;
+ } cmsg;
+ int pipe, nb, ret;
+ unsigned char buf[64];
+
+ if (copy_from_user(&cmsg, (void *)arg, sizeof(cmsg)))
+ return -EFAULT;
+
+ nb = le16_to_cpup(&cmsg.req.length);
+
+ if (nb > sizeof(buf))
+ return -EINVAL;
+
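+		/* Bit 7 of the requesttype (bmRequestType) gives the direction:
+		 * 0 = host-to-device (send the user's data), 1 = device-to-host
+		 * (read into buf and copy back to the user below). */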
+ if ((cmsg.req.requesttype & 0x80) == 0) {
+ pipe = usb_sndctrlpipe(dev, 0);
+ if (nb > 0 && copy_from_user(buf, cmsg.data, nb))
+ return -EFAULT;
+ } else {
+ pipe = usb_rcvctrlpipe(dev, 0);
+ }
+
+ ret = usb_control_msg(dev, pipe, cmsg.req.request,
+ cmsg.req.requesttype,
+ le16_to_cpup(&cmsg.req.value),
+ le16_to_cpup(&cmsg.req.index),
+ buf, nb, HZ);
+
+ if (ret < 0) {
+ err("ioctl_scanner(%d): control_msg returned %d\n", scn_minor, ret);
+ return -EIO;
+ }
+
+ if (nb > 0 && (cmsg.req.requesttype & 0x80) && copy_to_user(cmsg.data, buf, nb))
+ return -EFAULT;
+
+ return 0;
+ }
default:
return -ENOTTY;
}
return 0;
}
-#endif /* SCN_IOCTL */
static struct
file_operations usb_scanner_fops = {
read: read_scanner,
write: write_scanner,
-#ifdef SCN_IOCTL
ioctl: ioctl_scanner,
-#endif /* SCN_IOCTL */
open: open_scanner,
release: close_scanner,
};
char valid_device = 0;
char have_bulk_in, have_bulk_out, have_intr;
+ char name[10];
if (vendor != -1 && product != -1) {
info("probe_scanner: User specified USB scanner -- Vendor:Product - %x:%x", vendor, product);
dbg("probe_scanner: intr_ep:%d", have_intr);
continue;
}
- info("probe_scanner: Undetected endpoint. Notify the maintainer.");
+ info("probe_scanner: Undetected endpoint -- consult Documentation/usb/scanner.txt.");
return NULL; /* Shouldn't ever get here unless we have something weird */
}
}
break;
default:
- info("probe_scanner: Endpoint determination failed. Notify the maintainer.");
+ info("probe_scanner: Endpoint determination failed -- consult Documentation/usb/scanner.txt");
return NULL;
}
* with it. The problem with this is that we are counting on the fact
* that the user will sequentially add device nodes for the scanner
* devices. */
+
+ down(&scn_mutex);
for (scn_minor = 0; scn_minor < SCN_MAX_MNR; scn_minor++) {
if (!p_scn_table[scn_minor])
return NULL;
}
memset (scn, 0, sizeof(struct scn_usb_data));
- dbg ("probe_scanner(%d): Address of scn:%p", scn_minor, scn);
+ init_MUTEX(&(scn->sem)); /* Initializes to unlocked */
+
+ dbg ("probe_scanner(%d): Address of scn:%p", scn_minor, scn);
/* Ok, if we detected an interrupt EP, setup a handler for it */
if (have_intr) {
if (usb_submit_urb(&scn->scn_irq)) {
err("probe_scanner(%d): Unable to allocate INT URB.", scn_minor);
kfree(scn);
+ up(&scn_mutex);
return NULL;
}
}
if (!(scn->obuf = (char *)kmalloc(OBUF_SIZE, GFP_KERNEL))) {
err("probe_scanner(%d): Not enough memory for the output buffer.", scn_minor);
kfree(scn);
+ up(&scn_mutex);
return NULL;
}
dbg("probe_scanner(%d): obuf address:%p", scn_minor, scn->obuf);
err("probe_scanner(%d): Not enough memory for the input buffer.", scn_minor);
kfree(scn->obuf);
kfree(scn);
+ up(&scn_mutex);
return NULL;
}
dbg("probe_scanner(%d): ibuf address:%p", scn_minor, scn->ibuf);
scn->scn_minor = scn_minor;
scn->isopen = 0;
- init_MUTEX(&(scn->gen_lock));
-
- /* if we have devfs, create with perms=660 */
- scn->devfs = devfs_register(usb_devfs_handle, "scanner",
- DEVFS_FL_DEFAULT, USB_MAJOR,
- SCN_BASE_MNR + scn_minor,
- S_IFCHR | S_IRUSR | S_IWUSR | S_IRGRP |
- S_IWGRP, &usb_scanner_fops, NULL);
+ sprintf(name, "scanner%d", scn->scn_minor);
+
+ scn->devfs = devfs_register(usb_devfs_handle, name,
+ DEVFS_FL_DEFAULT, USB_MAJOR,
+ SCN_BASE_MNR + scn->scn_minor,
+ S_IFCHR | S_IRUSR | S_IWUSR | S_IRGRP |
+ S_IWGRP | S_IROTH | S_IWOTH, &usb_scanner_fops, NULL);
+ if (scn->devfs == NULL)
+ dbg("scanner%d: device node registration failed", scn_minor);
+ up(&scn_mutex);
return p_scn_table[scn_minor] = scn;
}
{
struct scn_usb_data *scn = (struct scn_usb_data *) ptr;
+ down (&scn_mutex);
+ down (&(scn->sem));
+
if(scn->intr_ep) {
dbg("disconnect_scanner(%d): Unlinking IRQ URB", scn->scn_minor);
usb_unlink_urb(&scn->scn_irq);
usb_driver_release_interface(&scanner_driver,
&scn->scn_dev->actconfig->interface[scn->ifnum]);
- devfs_unregister (scn->devfs);
-
kfree(scn->ibuf);
kfree(scn->obuf);
dbg("disconnect_scanner: De-allocating minor:%d", scn->scn_minor);
+ devfs_unregister(scn->devfs);
p_scn_table[scn->scn_minor] = NULL;
+ up (&(scn->sem));
kfree (scn);
+ up (&scn_mutex);
}
-
static struct
usb_driver scanner_driver = {
name: "usbscanner",
if (usb_register(&scanner_driver) < 0)
return -1;
- info("USB Scanner support registered.");
+ info(DRIVER_VERSION ":" DRIVER_DESC);
return 0;
}
/*
- * Driver for USB Scanners (linux-2.4.0)
+ * Driver for USB Scanners (linux-2.4.12)
*
- * Copyright (C) 1999, 2000 David E. Nelson
+ * Copyright (C) 1999, 2000, 2001 David E. Nelson
*
* David E. Nelson (dnelson@jump.net)
*
- * 08/16/2001 added devfs support Yves Duret <yduret@mandrakesoft.com>
- *
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation; either version 2 of the
// #define DEBUG
+/* Enable this to support the older ioctl interfaces for scanners that
+ * use a PV8630 Scanner-On-Chip.  The preferred method is the
+ * SCANNER_IOCTL_CTRLMSG ioctl.
+ */
+// #define PV8630
+
+#define DRIVER_VERSION "0.4.6"
+#define DRIVER_DESC "USB Scanner Driver"
+
#include <linux/usb.h>
static __s32 vendor=-1, product=-1, read_timeout=0;
MODULE_AUTHOR("David E. Nelson, dnelson@jump.net, http://www.jump.net/~dnelson");
-MODULE_DESCRIPTION("USB Scanner Driver");
+MODULE_DESCRIPTION(DRIVER_DESC" "DRIVER_VERSION);
MODULE_LICENSE("GPL");
MODULE_PARM(vendor, "i");
MODULE_PARM_DESC(read_timeout, "User specified read timeout in seconds");
-/* Enable to activate the ioctl interface. This is mainly meant for */
-/* development purposes until an ioctl number is officially registered */
-#define SCN_IOCTL
-
/* WARNING: These DATA_DUMP's can produce a lot of data. Caveat Emptor. */
// #define RD_DATA_DUMP /* Enable to dump data - limited to 24 bytes */
// #define WR_DATA_DUMP /* DEBUG does not have to be defined. */
{ USB_DEVICE(0x04a5, 0x2040) }, /* Prisa AcerScan 620U (!) */
{ USB_DEVICE(0x04a5, 0x20c0) }, /* Prisa AcerScan 1240UT */
{ USB_DEVICE(0x04a5, 0x2022) }, /* Vuego Scan Brisa 340U */
+ { USB_DEVICE(0x04a5, 0x1a20) }, /* Unknown - Oliver Schwartz */
+ { USB_DEVICE(0x04a5, 0x1a2a) }, /* Unknown - Oliver Schwartz */
+ { USB_DEVICE(0x04a5, 0x207e) }, /* Prisa 640BU */
+ { USB_DEVICE(0x04a5, 0x20be) }, /* Unknown - Oliver Schwartz */
+ { USB_DEVICE(0x04a5, 0x20c0) }, /* Unknown - Oliver Schwartz */
+ { USB_DEVICE(0x04a5, 0x20de) }, /* S2W 3300U */
+ { USB_DEVICE(0x04a5, 0x20b0) }, /* Unknown - Oliver Schwartz */
+ { USB_DEVICE(0x04a5, 0x20fe) }, /* Unknown - Oliver Schwartz */
/* Agfa */
{ USB_DEVICE(0x06bd, 0x0001) }, /* SnapScan 1212U */
{ USB_DEVICE(0x06bd, 0x0002) }, /* SnapScan 1236U */
{ USB_DEVICE(0x06bd, 0x2061) }, /* Another SnapScan 1212U (?)*/
{ USB_DEVICE(0x06bd, 0x0100) }, /* SnapScan Touch */
+ { USB_DEVICE(0x06bd, 0x2091) }, /* SnapScan e20 */
+ { USB_DEVICE(0x06bd, 0x2097) }, /* SnapScan e26 */
+ { USB_DEVICE(0x06bd, 0x208d) }, /* Snapscan e40 */
/* Colorado -- See Primax/Colorado below */
/* Epson -- See Seiko/Epson below */
/* Genius */
{ USB_DEVICE(0x0458, 0x2001) }, /* ColorPage-Vivid Pro */
/* Hewlett Packard */
{ USB_DEVICE(0x03f0, 0x0205) }, /* 3300C */
+ { USB_DEVICE(0x03f0, 0x0405) }, /* 3400C */
{ USB_DEVICE(0x03f0, 0x0101) }, /* 4100C */
{ USB_DEVICE(0x03f0, 0x0105) }, /* 4200C */
+ { USB_DEVICE(0x03f0, 0x0305) }, /* 4300C */
{ USB_DEVICE(0x03f0, 0x0102) }, /* PhotoSmart S20 */
{ USB_DEVICE(0x03f0, 0x0401) }, /* 5200C */
// { USB_DEVICE(0x03f0, 0x0701) }, /* 5300C - NOT SUPPORTED - see http://www.neatech.nl/oss/HP5300C/ */
{ USB_DEVICE(0x03f0, 0x0201) }, /* 6200C */
{ USB_DEVICE(0x03f0, 0x0601) }, /* 6300C */
+ { USB_DEVICE(0x03f0, 0x605) }, /* 2200C */
/* iVina */
- { USB_DEVICE(0x0638, 0x0268) }, /* 1200U */
+ { USB_DEVICE(0x0638, 0x0268) }, /* 1200U */
+ /* Lifetec */
+ { USB_DEVICE(0x05d8, 0x4002) }, /* Lifetec LT9385 */
/* Microtek -- No longer supported - Enable SCSI and USB Microtek in kernel config */
// { USB_DEVICE(0x05da, 0x0099) }, /* ScanMaker X6 - X6U */
// { USB_DEVICE(0x05da, 0x0094) }, /* Phantom 336CX - C3 */
// { USB_DEVICE(0x05da, 0x00a3) }, /* ScanMaker V6USL */
// { USB_DEVICE(0x05da, 0x80a3) }, /* ScanMaker V6USL #2 */
// { USB_DEVICE(0x05da, 0x80ac) }, /* ScanMaker V6UL - SpicyU */
+ /* Minolta */
+ // { USB_DEVICE(0x0638,0x026a) }, /* Minolta Dimage Scan Dual II */
/* Mustek */
{ USB_DEVICE(0x055f, 0x0001) }, /* 1200 CU */
{ USB_DEVICE(0x0400, 0x1000) }, /* BearPaw 1200 */
{ USB_DEVICE(0x055f, 0x0002) }, /* 600 CU */
+ { USB_DEVICE(0x055f, 0x0873) }, /* 600 USB */
{ USB_DEVICE(0x055f, 0x0003) }, /* 1200 USB */
{ USB_DEVICE(0x055f, 0x0006) }, /* 1200 UB */
{ USB_DEVICE(0x0400, 0x1001) }, /* BearPaw 2400 */
{ USB_DEVICE(0x0461, 0x0303) }, /* G2E-300 #2 */
{ USB_DEVICE(0x0461, 0x0383) }, /* G2E-600 */
{ USB_DEVICE(0x0461, 0x0340) }, /* Colorado USB 9600 */
- { USB_DEVICE(0x0461, 0x0360) }, /* Colorado USB 19200 */
+ // { USB_DEVICE(0x0461, 0x0360) }, /* Colorado USB 19200 - undetected endpoint */
{ USB_DEVICE(0x0461, 0x0341) }, /* Colorado 600u */
{ USB_DEVICE(0x0461, 0x0361) }, /* Colorado 1200u */
+ /* Relisis */
+ // { USB_DEVICE(0x0475, 0x0103) }, /* Episode - undetected endpoint */
/* Seiko/Epson Corp. */
{ USB_DEVICE(0x04b8, 0x0101) }, /* Perfection 636U and 636Photo */
{ USB_DEVICE(0x04b8, 0x0103) }, /* Perfection 610 */
{ USB_DEVICE(0x04b8, 0x010b) }, /* Perfection 1240U */
{ USB_DEVICE(0x04b8, 0x010c) }, /* Perfection 640U */
{ USB_DEVICE(0x04b8, 0x010e) }, /* Expression 1680 */
+ { USB_DEVICE(0x04b8, 0x0110) }, /* Perfection 1650 */
+ { USB_DEVICE(0x04b8, 0x0112) }, /* Perfection 2450 - GT-9700 for the Japanese mkt */
/* Umax */
{ USB_DEVICE(0x1606, 0x0010) }, /* Astra 1220U */
{ USB_DEVICE(0x1606, 0x0030) }, /* Astra 2000U */
+ { USB_DEVICE(0x1606, 0x0130) }, /* Astra 2100U */
{ USB_DEVICE(0x1606, 0x0230) }, /* Astra 2200U */
/* Visioneer */
{ USB_DEVICE(0x04a7, 0x0221) }, /* OneTouch 5300 USB */
/* FIXME: These are NOT registered ioctls()'s */
+#ifdef PV8630
#define PV8630_IOCTL_INREQUEST 69
#define PV8630_IOCTL_OUTREQUEST 70
+#endif /* PV8630 */
+
+
+/* read vendor and product IDs from the scanner */
+#define SCANNER_IOCTL_VENDOR _IOR('U', 0x20, int)
+#define SCANNER_IOCTL_PRODUCT _IOR('U', 0x21, int)
+/* send/recv a control message to the scanner */
+#define SCANNER_IOCTL_CTRLMSG _IOWR('U', 0x22, devrequest )
-/* read vendor and product IDs */
-#define IOCTL_SCANNER_VENDOR _IOR('u', 0xa0, int)
-#define IOCTL_SCANNER_PRODUCT _IOR('u', 0xa1, int)
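For illustration only (not part of the patch): a minimal user-space sketch of the two new ID ioctls. It assumes the scanner's devfs node is /dev/usb/scanner0 and mirrors the _IOR definitions above, since scanner.h is not exported to user space.

	#include <stdio.h>
	#include <fcntl.h>
	#include <unistd.h>
	#include <sys/ioctl.h>

	/* Mirrors of the scanner.h definitions above. */
	#define SCANNER_IOCTL_VENDOR  _IOR('U', 0x20, int)
	#define SCANNER_IOCTL_PRODUCT _IOR('U', 0x21, int)

	int main(void)
	{
		int fd, vendor, product;

		/* Device node path is an assumption (devfs names the nodes scanner0..15). */
		fd = open("/dev/usb/scanner0", O_RDWR);
		if (fd < 0) {
			perror("open");
			return 1;
		}
		if (ioctl(fd, SCANNER_IOCTL_VENDOR, &vendor) < 0 ||
		    ioctl(fd, SCANNER_IOCTL_PRODUCT, &product) < 0) {
			perror("ioctl");
			close(fd);
			return 1;
		}
		printf("scanner USB ID %04x:%04x\n", vendor, product);
		close(fd);
		return 0;
	}

SCANNER_IOCTL_CTRLMSG is used the same way, passing the devrequest-plus-data structure handled by its case in ioctl_scanner() above.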
#define SCN_MAX_MNR 16 /* We're allocated 16 minors */
#define SCN_BASE_MNR 48 /* USB Scanners start at minor 48 */
+static DECLARE_MUTEX (scn_mutex); /* Initializes to unlocked */
+
struct scn_usb_data {
struct usb_device *scn_dev;
devfs_handle_t devfs; /* devfs device */
char *obuf, *ibuf; /* transfer buffers */
char bulk_in_ep, bulk_out_ep, intr_ep; /* Endpoint assignments */
wait_queue_head_t rd_wait_q; /* read timeouts */
- struct semaphore gen_lock; /* lock to prevent concurrent reads or writes */
+ struct semaphore sem; /* lock to prevent concurrent reads or writes */
unsigned int rd_nak_timeout; /* Seconds to wait before read() timeout. */
};
+extern devfs_handle_t usb_devfs_handle;
+
static struct scn_usb_data *p_scn_table[SCN_MAX_MNR] = { NULL, /* ... */};
static struct usb_driver scanner_driver;
-
-extern devfs_handle_t usb_devfs_handle; /* /dev/usb dir. */
while (port->write_urb->status == -EINPROGRESS) {
dbg(__FUNCTION__ " write in progress - retrying");
if (signal_pending(current)) {
- current->state = TASK_RUNNING;
+ set_current_state(TASK_RUNNING);
remove_wait_queue(&port->write_wait, &wait);
rc = -ERESTARTSYS;
goto err;
/* Control and Isochronous ignore the toggle, so this */
/* is safe for all types */
if (!(td->status & TD_CTRL_ACTIVE) &&
- uhci_actual_length(td->status) < uhci_expected_length(td->info) ||
- tmp == head) {
+ (uhci_actual_length(td->status) < uhci_expected_length(td->info) ||
+ tmp == head)) {
usb_settoggle(urb->dev, uhci_endpoint(td->info),
uhci_packetout(td->info),
uhci_toggle(td->info) ^ 1);
set_current_state(TASK_UNINTERRUPTIBLE);
while (timeout && (urb->status == USB_ST_URB_PENDING))
timeout = schedule_timeout (timeout);
- current->state = TASK_RUNNING;
+ set_current_state(TASK_RUNNING);
remove_wait_queue (&unlink_wakeup, &wait);
if (urb->status == USB_ST_URB_PENDING) {
err ("unlink URB timeout");
set_current_state(TASK_UNINTERRUPTIBLE);
while (timeout && dev->ed_cnt)
timeout = schedule_timeout (timeout);
- current->state = TASK_RUNNING;
+ set_current_state(TASK_RUNNING);
remove_wait_queue (&freedev_wakeup, &wait);
if (dev->ed_cnt) {
err ("free device %d timeout", usb_dev->devnum);
*/
LIST_HEAD(usb_driver_list);
LIST_HEAD(usb_bus_list);
-rwlock_t usb_bus_list_lock = RW_LOCK_UNLOCKED;
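+/* Protects usb_bus_list.  Now a semaphore (initialized in usb_init()) rather
+ * than an rwlock, so it may be held across operations that sleep. */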
+struct semaphore usb_bus_list_lock;
devfs_handle_t usb_devfs_handle; /* /dev/usb dir. */
{
struct list_head *tmp;
- read_lock_irq (&usb_bus_list_lock);
+ down (&usb_bus_list_lock);
tmp = usb_bus_list.next;
while (tmp != &usb_bus_list) {
struct usb_bus *bus = list_entry(tmp,struct usb_bus, bus_list);
tmp = tmp->next;
usb_check_support(bus->root_hub);
}
- read_unlock_irq (&usb_bus_list_lock);
+ up (&usb_bus_list_lock);
}
/*
*/
list_del(&driver->driver_list);
- read_lock_irq (&usb_bus_list_lock);
+ down (&usb_bus_list_lock);
tmp = usb_bus_list.next;
while (tmp != &usb_bus_list) {
struct usb_bus *bus = list_entry(tmp,struct usb_bus,bus_list);
tmp = tmp->next;
usb_drivers_purge(driver, bus->root_hub);
}
- read_unlock_irq (&usb_bus_list_lock);
+ up (&usb_bus_list_lock);
}
struct usb_interface *usb_ifnum_to_if(struct usb_device *dev, unsigned ifnum)
{
int busnum;
- write_lock_irq (&usb_bus_list_lock);
+ down (&usb_bus_list_lock);
busnum = find_next_zero_bit(busmap.busmap, USB_MAXBUS, 1);
if (busnum < USB_MAXBUS) {
set_bit(busnum, busmap.busmap);
/* Add it to the list of buses */
list_add(&bus->bus_list, &usb_bus_list);
- write_unlock_irq (&usb_bus_list_lock);
+ up (&usb_bus_list_lock);
usbdevfs_add_bus(bus);
* controller code, as well as having it call this when cleaning
* itself up
*/
- write_lock_irq (&usb_bus_list_lock);
+ down (&usb_bus_list_lock);
list_del(&bus->bus_list);
- write_unlock_irq (&usb_bus_list_lock);
+ up (&usb_bus_list_lock);
usbdevfs_remove_bus(bus);
*/
static int __init usb_init(void)
{
+ init_MUTEX(&usb_bus_list_lock);
usb_major_init();
usbdevfs_init();
usb_hub_init();
while (skb_queue_len (&dev->rxq)
&& skb_queue_len (&dev->txq)
&& skb_queue_len (&dev->done)) {
- current->state = TASK_UNINTERRUPTIBLE;
+ set_current_state(TASK_UNINTERRUPTIBLE);
schedule_timeout (UNLINK_TIMEOUT_JIFFIES);
dbg ("waited for %d urb completions", temp);
}
if (time_after_eq (jiffies, expire))
/* The FIFO is stuck. */
return -EBUSY;
- current->state = TASK_INTERRUPTIBLE;
+ set_current_state(TASK_INTERRUPTIBLE);
schedule_timeout((HZ + 99) / 100);
if (signal_pending (current))
break;
acornfb_init_fbinfo();
- for (opt = strtok(options, ","); opt; opt = strtok(NULL, ",")) {
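+	/* strsep() replaces strtok() here: strtok() keeps hidden static state
+	 * (it is not reentrant), while strsep() advances the caller's own pointer. */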
+ while (opt = strsep(&options, ",")) {
if (!*opt)
continue;
if (!options || !*options)
return 0;
- for (this_opt = strtok(options, ","); this_opt; this_opt = strtok(NULL, ",")) {
+ while (this_opt = strsep(&options, ",")) {
if (!strcmp(this_opt, "inverse")) {
amifb_inverse = 1;
fb_invert_cmaps();
if (!options || !*options)
return 0;
- for (this_opt = strtok(options, ","); this_opt;
- this_opt = strtok(NULL, ",")) {
+	while ((this_opt = strsep(&options, ",")) != NULL) {
if (!strncmp(this_opt, "font:", 5)) {
char *p;
int i;
if (!options || !*options)
return 0;
- for (this_opt = strtok(options, ","); this_opt;
- this_opt = strtok(NULL, ",")) {
+	while ((this_opt = strsep(&options, ",")) != NULL) {
if (!strncmp(this_opt, "font:", 5)) {
char *p;
int i;
if (!options || !*options)
return;
- for (this_opt = strtok(options, ","); this_opt;
- this_opt = strtok(NULL, ",")) {
+	while ((this_opt = strsep(&options, ",")) != NULL) {
if (!strncmp(this_opt, "font:", 5)) {
char *p;
int i;
return 0;
}
- for (this_opt = strtok(options, ","); this_opt;
- this_opt = strtok(NULL, ",")) {
+	while ((this_opt = strsep(&options, ",")) != NULL) {
if (!strcmp(this_opt, "inverse")) {
Cyberfb_inverse = 1;
fb_invert_cmaps();
if (!options || !*options)
return 0;
- for (this_opt = strtok(options, ","); this_opt;
- this_opt = strtok(NULL, ",")) {
+	while ((this_opt = strsep(&options, ",")) != NULL) {
if (!strncmp(this_opt, "pal", 3))
fm2fb_mode = FM2FB_MODE_PAL;
else if (!strncmp(this_opt, "ntsc", 4))
if (!options || !*options)
return 0;
- for (this_opt = strtok(options, ","); this_opt;
- this_opt = strtok(NULL, ",")) {
+	while ((this_opt = strsep(&options, ",")) != NULL) {
if (!strncmp(this_opt, "font:", 5))
strcpy(fb_info.fontname, this_opt+5);
}
if (!options || !*options)
return 0;
- for (this_opt = strtok(options, ","); this_opt;
- this_opt = strtok(NULL, ",")) {
+	while ((this_opt = strsep(&options, ",")) != NULL) {
if (!strncmp(this_opt, "font:", 5)) {
char *p;
int i;
if (!options || !*options)
return 0;
- for (this_opt = strtok(options, ","); this_opt;
- this_opt = strtok(NULL, ",")) {
+	while ((this_opt = strsep(&options, ",")) != NULL) {
if (!strncmp(this_opt, "font:", 5)) {
char *p;
int i;
if (!options || !*options)
return;
- for(this_opt=strtok(options,","); this_opt; this_opt=strtok(NULL,",")) {
+	while ((this_opt = strsep(&options, ",")) != NULL) {
if (!*this_opt) continue;
if (! strcmp(this_opt, "inverse"))
if (!options || !*options)
return 0;
- for(this_opt=strtok(options,","); this_opt; this_opt=strtok(NULL,",")) {
+	while ((this_opt = strsep(&options, ",")) != NULL) {
if (!*this_opt) continue;
dprintk("matroxfb_setup: option %s\n", this_opt);
if (!options || !*options)
return 0;
- for (this_opt = strtok(options, ","); this_opt;
- this_opt = strtok(NULL, ",")) {
+	while ((this_opt = strsep(&options, ",")) != NULL) {
if (!strncmp(this_opt, "font:", 5)) {
char *p;
int i;
if (!options || !*options)
return 0;
- for (this_opt = strtok (options, ","); this_opt;
- this_opt = strtok (NULL, ",")) {
+	while ((this_opt = strsep (&options, ",")) != NULL) {
if (!strncmp (this_opt, "font:", 5)) {
char *p;
int i;
if (!options || !*options)
return 0;
- for (this_opt = strtok(options, ","); this_opt;
- this_opt = strtok(NULL, ",")){
+	while ((this_opt = strsep(&options, ",")) != NULL) {
if (!strcmp(this_opt, "inverse")) {
z3fb_inverse = 1;
fb_invert_cmaps();
if (!options || !*options)
return 0;
- for (this_opt = strtok(options, ","); this_opt;
- this_opt = strtok(NULL, ",")) {
+	while ((this_opt = strsep(&options, ",")) != NULL) {
if (!strncmp(this_opt, "font:", 5)) {
char *p;
int i;
if (!options || !*options)
return 0;
- for (this_opt = strtok(options, ","); this_opt;
- this_opt = strtok(NULL, ",")) {
+	while ((this_opt = strsep(&options, ",")) != NULL) {
if (!strncmp(this_opt, "bpp:", 4))
current_par.max_bpp =
if (!options || !*options)
return 0;
- for (this_opt = strtok(options, ","); this_opt;
- this_opt = strtok(NULL, ",")) {
+	while ((this_opt = strsep(&options, ",")) != NULL) {
if (!strncmp(this_opt, "font:", 5))
strcpy(fb_info.fontname, this_opt+5);
}
if (!options || !*options)
return 0;
- for (this_opt = strtok(options, ","); this_opt;
- this_opt = strtok(NULL, ",")) {
+	while ((this_opt = strsep(&options, ",")) != NULL) {
if (!*this_opt)
continue;
if (!options || !*options)
return 0;
- for(this_opt = strtok(options, ","); this_opt;
- this_opt = strtok(NULL, ",")) {
+	while ((this_opt = strsep(&options, ",")) != NULL) {
if (!*this_opt) continue;
f_ddprintk("option %s\n", this_opt);
if(!options || !*options)
return;
- for(this_opt = strtok(options, ",");
- this_opt;
- this_opt = strtok(NULL, ",")) {
+	while ((this_opt = strsep(&options, ",")) != NULL) {
if(!strcmp(this_opt, "inverse")) {
inverse = 1;
fb_invert_cmaps();
int i;
if (options && *options) {
- for(this_opt=strtok(options,","); this_opt; this_opt=strtok(NULL,",")) {
+	while ((this_opt = strsep(&options, ",")) != NULL) {
if (!*this_opt) { continue; }
if (!strncmp(this_opt, "font:", 5)) {
if (!options || !*options)
return 0;
- for (this_opt = strtok(options, ","); this_opt;
- this_opt = strtok(NULL, ",")) {
+	while ((this_opt = strsep(&options, ",")) != NULL) {
if (!strncmp(this_opt, "font:", 5)) {
char *p;
int i;
if (!options || !*options)
return 0;
- for(this_opt=strtok(options,","); this_opt; this_opt=strtok(NULL,",")) {
+	while ((this_opt = strsep(&options, ",")) != NULL) {
if (!*this_opt) continue;
if (! strcmp(this_opt, "inverse"))
if (!options || !*options)
return 0;
- for (this_opt = strtok(options, ","); this_opt;
- this_opt = strtok(NULL, ",")) {
+	while ((this_opt = strsep(&options, ",")) != NULL) {
if (!strncmp(this_opt, "font:", 5))
strcpy(fb_info.fontname, this_opt+5);
}
if (!options || !*options)
return 0;
- for(this_opt=strtok(options,","); this_opt; this_opt=strtok(NULL,",")) {
+	while ((this_opt = strsep(&options, ",")) != NULL) {
if (!*this_opt) continue;
if (!strncmp(this_opt, "font:", 5))
if (!options || !*options)
return 0;
- for (this_opt = strtok(options, ","); this_opt; this_opt = strtok(NULL, ","))
+	while ((this_opt = strsep(&options, ",")) != NULL) {
if (!strcmp(this_opt, "inverse")) {
Cyberfb_inverse = 1;
fb_invert_cmaps();
*/
if (interp_elf_ex->e_phentsize != sizeof(struct elf_phdr))
goto out;
+ if (interp_elf_ex->e_phnum > 65536U / sizeof(struct elf_phdr))
+ goto out;
/* Now read in all of the header information */
printk("(brk) %lx\n" , (long) current->mm->brk);
#endif
- if ( current->personality == PER_SVR4 )
- {
+ if (current->personality & MMAP_PAGE_ZERO) {
/* Why this, you ask??? Well SVr4 maps page 0 as read-only,
and some applications "depend" upon this behavior.
Since we do not have the power to recompile these, we
* 1997-06-26 hpa: pass the real filename rather than argv[0]
* 1997-06-30 minor cleanup
* 1997-08-09 removed extension stripping, locking cleanup
+ * 2001-02-28 AV: rewritten into something that resembles C. Original didn't.
*/
-#include <linux/config.h>
#include <linux/module.h>
+#include <linux/init.h>
-#include <linux/kernel.h>
-#include <linux/errno.h>
-#include <linux/fs.h>
-#include <linux/slab.h>
#include <linux/binfmts.h>
-#include <linux/init.h>
-#include <linux/proc_fs.h>
-#include <linux/string.h>
+#include <linux/slab.h>
#include <linux/ctype.h>
#include <linux/file.h>
-#include <linux/spinlock.h>
+#include <linux/pagemap.h>
+
#include <asm/uaccess.h>
-/*
- * We should make this work with a "stub-only" /proc,
- * which would just not be able to be configured.
- * Right now the /proc-fs support is too black and white,
- * though, so just remind people that this should be
- * fixed..
- */
-#ifndef CONFIG_PROC_FS
-#error You really need /proc support for binfmt_misc. Please reconfigure!
-#endif
+enum {
+	VERBOSE_STATUS = 1 /* make it zero to save 400 bytes of kernel memory */
+};
+
+static LIST_HEAD(entries);
+static int enabled = 1;
-#define VERBOSE_STATUS /* undef this to save 400 bytes kernel memory */
+enum {Enabled, Magic};
-struct binfmt_entry {
- struct binfmt_entry *next;
- long id;
+typedef struct {
+ struct list_head list;
int flags; /* type, status, etc. */
int offset; /* offset of magic */
int size; /* size of magic/mask */
char *magic; /* magic or filename extension */
char *mask; /* mask, NULL for exact match */
char *interpreter; /* filename of interpreter */
- char *proc_name;
- struct proc_dir_entry *proc_dir;
-};
-
-#define ENTRY_ENABLED 1 /* the old binfmt_entry.enabled */
-#define ENTRY_MAGIC 8 /* not filename detection */
-
-static int load_misc_binary(struct linux_binprm *bprm, struct pt_regs *regs);
-static void entry_proc_cleanup(struct binfmt_entry *e);
-static int entry_proc_setup(struct binfmt_entry *e);
-
-static struct linux_binfmt misc_format = {
- NULL, THIS_MODULE, load_misc_binary, NULL, NULL, 0
-};
-
-static struct proc_dir_entry *bm_dir;
-
-static struct binfmt_entry *entries;
-static int free_id = 1;
-static int enabled = 1;
+ char *name;
+ struct dentry *dentry;
+} Node;
static rwlock_t entries_lock __attribute__((unused)) = RW_LOCK_UNLOCKED;
-
-/*
- * Unregister one entry
- */
-static void clear_entry(int id)
-{
- struct binfmt_entry **ep, *e;
-
- write_lock(&entries_lock);
- ep = &entries;
- while (*ep && ((*ep)->id != id))
- ep = &((*ep)->next);
- if ((e = *ep))
- *ep = e->next;
- write_unlock(&entries_lock);
-
- if (e) {
- entry_proc_cleanup(e);
- kfree(e);
- }
-}
-
-/*
- * Clear all registered binary formats
- */
-static void clear_entries(void)
-{
- struct binfmt_entry *e, *n;
-
- write_lock(&entries_lock);
- n = entries;
- entries = NULL;
- write_unlock(&entries_lock);
-
- while ((e = n)) {
- n = e->next;
- entry_proc_cleanup(e);
- kfree(e);
- }
-}
-
-/*
- * Find entry through id and lock it
- */
-static struct binfmt_entry *get_entry(int id)
-{
- struct binfmt_entry *e;
-
- read_lock(&entries_lock);
- e = entries;
- while (e && (e->id != id))
- e = e->next;
- if (!e)
- read_unlock(&entries_lock);
- return e;
-}
-
-/*
- * unlock entry
- */
-static inline void put_entry(struct binfmt_entry *e)
-{
- if (e)
- read_unlock(&entries_lock);
-}
-
-
/*
* Check if we support the binfmt
- * if we do, return the binfmt_entry, else NULL
+ * if we do, return the node, else NULL
* locking is done in load_misc_binary
*/
-static struct binfmt_entry *check_file(struct linux_binprm *bprm)
+static Node *check_file(struct linux_binprm *bprm)
{
- struct binfmt_entry *e;
char *p = strrchr(bprm->filename, '.');
- int j;
-
- e = entries;
- while (e) {
- if (e->flags & ENTRY_ENABLED) {
- if (!(e->flags & ENTRY_MAGIC)) {
- if (p && !strcmp(e->magic, p + 1))
- return e;
- } else {
- j = 0;
- while ((j < e->size) &&
- !((bprm->buf[e->offset + j] ^ e->magic[j])
- & (e->mask ? e->mask[j] : 0xff)))
- j++;
- if (j == e->size)
- return e;
- }
+ struct list_head *l;
+
+ for (l = entries.next; l != &entries; l = l->next) {
+ Node *e = list_entry(l, Node, list);
+ char *s;
+ int j;
+
+ if (!test_bit(Enabled, &e->flags))
+ continue;
+
+ if (!test_bit(Magic, &e->flags)) {
+ if (p && !strcmp(e->magic, p + 1))
+ return e;
+ continue;
}
- e = e->next;
- };
+
+ s = bprm->buf + e->offset;
+ if (e->mask) {
+ for (j = 0; j < e->size; j++)
+ if ((*s++ ^ e->magic[j]) & e->mask[j])
+ break;
+ } else {
+ for (j = 0; j < e->size; j++)
+ if ((*s++ ^ e->magic[j]))
+ break;
+ }
+ if (j == e->size)
+ return e;
+ }
return NULL;
}
*/
static int load_misc_binary(struct linux_binprm *bprm, struct pt_regs *regs)
{
- struct binfmt_entry *fmt;
+ Node *fmt;
struct file * file;
char iname[BINPRM_BUF_SIZE];
char *iname_addr = iname;
return retval;
}
-
-
-/*
- * /proc handling routines
- */
+/* Command parsers */
/*
 * scanarg() scans one argument up to the delimiter del, skipping over
 * \xHH escapes, and returns a pointer just past the delimiter (or NULL
 * if an escape is malformed); unquote() then decodes the \xHH escapes
 * in place and returns the resulting length.
*/
-static char *copyarg(char **dp, const char **sp, int *count,
- char del, int special, int *err)
+static char *scanarg(char *s, char del)
{
- char c = 0, *res = *dp;
-
- while (!*err && ((c = *((*sp)++)), (*count)--) && (c != del)) {
- switch (c) {
- case '\\':
- if (special && (**sp == 'x')) {
- if (!isxdigit(c = toupper(*(++*sp))))
- *err = -EINVAL;
- **dp = (c - (isdigit(c) ? '0' : 'A' - 10)) * 16;
- if (!isxdigit(c = toupper(*(++*sp))))
- *err = -EINVAL;
- *((*dp)++) += c - (isdigit(c) ? '0' : 'A' - 10);
- ++*sp;
- *count -= 3;
- break;
- }
- default:
- *((*dp)++) = c;
+ char c;
+
+ while ((c = *s++) != del) {
+ if (c == '\\' && *s == 'x') {
+ s++;
+ if (!isxdigit(*s++))
+ return NULL;
+ if (!isxdigit(*s++))
+ return NULL;
}
}
- if (*err || (c != del) || (res == *dp))
- res = NULL;
- else if (!special)
- *((*dp)++) = '\0';
- return res;
+ return s;
+}
+
+static int unquote(char *from)
+{
+ char c = 0, *s = from, *p = from;
+
+ while ((c = *s++) != '\0') {
+ if (c == '\\' && *s == 'x') {
+ s++;
+ c = toupper(*s++);
+ *p = (c - (isdigit(c) ? '0' : 'A' - 10)) << 4;
+ c = toupper(*s++);
+ *p++ |= c - (isdigit(c) ? '0' : 'A' - 10);
+ continue;
+ }
+ *p++ = c;
+ }
+ return p - from;
}
/*
* ':name:type:offset:magic:mask:interpreter:'
* where the ':' is the IFS, that can be chosen with the first char
*/
-static int proc_write_register(struct file *file, const char *buffer,
- unsigned long count, void *data)
+static Node *create_entry(const char *buffer, size_t count)
{
- const char *sp;
- char del, *dp;
- struct binfmt_entry *e;
- int memsize, cnt = count - 1, err;
+ Node *e;
+ int memsize, err;
+ char *buf, *p;
+ char del;
/* some sanity checks */
err = -EINVAL;
if ((count < 11) || (count > 256))
- goto _err;
+ goto out;
err = -ENOMEM;
- memsize = sizeof(struct binfmt_entry) + count;
- if (!(e = (struct binfmt_entry *) kmalloc(memsize, GFP_USER)))
- goto _err;
-
- err = 0;
- sp = buffer + 1;
- del = buffer[0];
- dp = (char *)e + sizeof(struct binfmt_entry);
-
- e->proc_name = copyarg(&dp, &sp, &cnt, del, 0, &err);
-
- /* we can use bit 3 of type for ext/magic
- flag due to the nice encoding of E and M */
- if ((*sp & ~('E' | 'M')) || (sp[1] != del))
- err = -EINVAL;
- else
- e->flags = (*sp++ & (ENTRY_MAGIC | ENTRY_ENABLED));
- cnt -= 2; sp++;
-
- e->offset = 0;
- while (cnt-- && isdigit(*sp))
- e->offset = e->offset * 10 + *sp++ - '0';
- if (*sp++ != del)
- err = -EINVAL;
-
- e->magic = copyarg(&dp, &sp, &cnt, del, (e->flags & ENTRY_MAGIC), &err);
- e->size = dp - e->magic;
- e->mask = copyarg(&dp, &sp, &cnt, del, 1, &err);
- if (e->mask && ((dp - e->mask) != e->size))
- err = -EINVAL;
- e->interpreter = copyarg(&dp, &sp, &cnt, del, 0, &err);
- e->id = free_id++;
-
- /* more sanity checks */
- if (err || !(!cnt || (!(--cnt) && (*sp == '\n'))) ||
- (e->size < 1) || ((e->size + e->offset) > (BINPRM_BUF_SIZE - 1)) ||
- !(e->proc_name) || !(e->interpreter) || entry_proc_setup(e))
- goto free_err;
+ memsize = sizeof(Node) + count + 8;
+ e = (Node *) kmalloc(memsize, GFP_USER);
+ if (!e)
+ goto out;
- write_lock(&entries_lock);
- e->next = entries;
- entries = e;
- write_unlock(&entries_lock);
+ p = buf = (char *)e + sizeof(Node);
+
+ memset(e, 0, sizeof(Node));
+ if (copy_from_user(buf, buffer, count))
+ goto Efault;
+
+	del = *p++;	/* delimiter */
+
+ memset(buf+count, del, 8);
+
+ e->name = p;
+ p = strchr(p, del);
+ if (!p)
+ goto Einval;
+ *p++ = '\0';
+ if (!e->name[0] ||
+ !strcmp(e->name, ".") ||
+ !strcmp(e->name, "..") ||
+ strchr(e->name, '/'))
+ goto Einval;
+ switch (*p++) {
+ case 'E': e->flags = 1<<Enabled; break;
+ case 'M': e->flags = (1<<Enabled) | (1<<Magic); break;
+ default: goto Einval;
+ }
+ if (*p++ != del)
+ goto Einval;
+ if (test_bit(Magic, &e->flags)) {
+ char *s = strchr(p, del);
+ if (!s)
+ goto Einval;
+ *s++ = '\0';
+ e->offset = simple_strtoul(p, &p, 10);
+ if (*p++)
+ goto Einval;
+ e->magic = p;
+ p = scanarg(p, del);
+ if (!p)
+ goto Einval;
+ p[-1] = '\0';
+ if (!e->magic[0])
+ goto Einval;
+ e->mask = p;
+ p = scanarg(p, del);
+ if (!p)
+ goto Einval;
+ p[-1] = '\0';
+ if (!e->mask[0])
+ e->mask = NULL;
+ e->size = unquote(e->magic);
+ if (e->mask && unquote(e->mask) != e->size)
+ goto Einval;
+ if (e->size + e->offset > BINPRM_BUF_SIZE)
+ goto Einval;
+ } else {
+ p = strchr(p, del);
+ if (!p)
+ goto Einval;
+ *p++ = '\0';
+ e->magic = p;
+ p = strchr(p, del);
+ if (!p)
+ goto Einval;
+ *p++ = '\0';
+ if (!e->magic[0] || strchr(e->magic, '/'))
+ goto Einval;
+ p = strchr(p, del);
+ if (!p)
+ goto Einval;
+ *p++ = '\0';
+ }
+ e->interpreter = p;
+ p = strchr(p, del);
+ if (!p)
+ goto Einval;
+ *p++ = '\0';
+ if (!e->interpreter[0])
+ goto Einval;
+
+ if (*p == '\n')
+ p++;
+ if (p != buf + count)
+ goto Einval;
+ return e;
- err = count;
-_err:
- return err;
-free_err:
+out:
+ return ERR_PTR(err);
+
+Efault:
kfree(e);
- err = -EINVAL;
- goto _err;
+ return ERR_PTR(-EFAULT);
+Einval:
+ kfree(e);
+ return ERR_PTR(-EINVAL);
}
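/*
 * Illustrative userspace sketch (not from this patch) of a registration
 * string in the ':name:type:offset:magic:mask:interpreter:' form that
 * create_entry() parses.  The DOSWin/wine values and the mount point
 * /proc/sys/fs/binfmt_misc are examples only; magic and mask bytes may
 * also be given as \xHH escapes, which unquote() decodes.
 */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	/* name=DOSWin, type=M (magic match), offset empty (defaults to 0),
	   magic="MZ", mask empty (exact match), interpreter=/usr/bin/wine */
	static const char entry[] = ":DOSWin:M::MZ::/usr/bin/wine:";
	int fd = open("/proc/sys/fs/binfmt_misc/register", O_WRONLY);

	if (fd < 0)
		return 1;
	if (write(fd, entry, strlen(entry)) != (ssize_t) strlen(entry))
		perror("register");
	close(fd);
	return 0;
}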
/*
- * Get status of entry/binfmt_misc
- * FIXME? should an entry be marked disabled if binfmt_misc is disabled though
- * entry is enabled?
+ * Set status of entry/binfmt_misc:
+ * '1' enables, '0' disables and '-1' clears entry/binfmt_misc
*/
-static int proc_read_status(char *page, char **start, off_t off,
- int count, int *eof, void *data)
+static int parse_command(const char *buffer, size_t count)
+{
+ char s[4];
+
+ if (!count)
+ return 0;
+ if (count > 3)
+ return -EINVAL;
+ if (copy_from_user(s, buffer, count))
+ return -EFAULT;
+ if (s[count-1] == '\n')
+ count--;
+ if (count == 1 && s[0] == '0')
+ return 1;
+ if (count == 1 && s[0] == '1')
+ return 2;
+ if (count == 2 && s[0] == '-' && s[1] == '1')
+ return 3;
+ return -EINVAL;
+}
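/*
 * Illustrative sketch (not from this patch) of the one-character
 * commands parse_command() accepts on the status and per-entry files;
 * the path below assumes the usual /proc/sys/fs/binfmt_misc mount.
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/proc/sys/fs/binfmt_misc/status", O_WRONLY);

	if (fd < 0)
		return 1;
	if (write(fd, "0\n", 2) < 0 ||	/* parse_command() -> 1: disable */
	    write(fd, "1\n", 2) < 0 ||	/* parse_command() -> 2: enable */
	    write(fd, "-1\n", 3) < 0)	/* parse_command() -> 3: clear all */
		perror("status");
	close(fd);
	return 0;
}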
+
+/* generic stuff */
+
+static void entry_status(Node *e, char *page)
{
- struct binfmt_entry *e;
char *dp;
- int elen, i, err;
+ char *status = "disabled";
-#ifndef VERBOSE_STATUS
- if (data) {
- if (!(e = get_entry((int) data))) {
- err = -ENOENT;
- goto _err;
- }
- i = e->flags & ENTRY_ENABLED;
- put_entry(e);
+ if (test_bit(Enabled, &e->flags))
+ status = "enabled";
+
+ if (!VERBOSE_STATUS) {
+ sprintf(page, "%s\n", status);
+ return;
+ }
+
+ sprintf(page, "%s\ninterpreter %s\n", status, e->interpreter);
+ dp = page + strlen(page);
+ if (!test_bit(Magic, &e->flags)) {
+ sprintf(dp, "extension .%s\n", e->magic);
} else {
- i = enabled;
- }
- sprintf(page, "%s\n", (i ? "enabled" : "disabled"));
-#else
- if (!data)
- sprintf(page, "%s\n", (enabled ? "enabled" : "disabled"));
- else {
- if (!(e = get_entry((long) data))) {
- err = -ENOENT;
- goto _err;
- }
- sprintf(page, "%s\ninterpreter %s\n",
- (e->flags & ENTRY_ENABLED ? "enabled" : "disabled"),
- e->interpreter);
+ int i;
+
+ sprintf(dp, "offset %i\nmagic ", e->offset);
dp = page + strlen(page);
- if (!(e->flags & ENTRY_MAGIC)) {
- sprintf(dp, "extension .%s\n", e->magic);
- dp = page + strlen(page);
- } else {
- sprintf(dp, "offset %i\nmagic ", e->offset);
- dp = page + strlen(page);
+ for (i = 0; i < e->size; i++) {
+ sprintf(dp, "%02x", 0xff & (int) (e->magic[i]));
+ dp += 2;
+ }
+ if (e->mask) {
+ sprintf(dp, "\nmask ");
+ dp += 6;
for (i = 0; i < e->size; i++) {
- sprintf(dp, "%02x", 0xff & (int) (e->magic[i]));
+ sprintf(dp, "%02x", 0xff & (int) (e->mask[i]));
dp += 2;
}
- if (e->mask) {
- sprintf(dp, "\nmask ");
- dp += 6;
- for (i = 0; i < e->size; i++) {
- sprintf(dp, "%02x", 0xff & (int) (e->mask[i]));
- dp += 2;
- }
- }
- *dp++ = '\n';
- *dp = '\0';
}
- put_entry(e);
+ *dp++ = '\n';
+ *dp = '\0';
}
-#endif
+}
- elen = strlen(page) - off;
- if (elen < 0)
- elen = 0;
- *eof = (elen <= count) ? 1 : 0;
- *start = page + off;
- err = elen;
+static struct inode *bm_get_inode(struct super_block *sb, int mode)
+{
+ struct inode * inode = new_inode(sb);
+
+ if (inode) {
+ inode->i_mode = mode;
+ inode->i_uid = 0;
+ inode->i_gid = 0;
+ inode->i_blksize = PAGE_CACHE_SIZE;
+ inode->i_blocks = 0;
+ inode->i_atime = inode->i_mtime = inode->i_ctime = CURRENT_TIME;
+ }
+ return inode;
+}
-_err:
- return err;
+static void bm_clear_inode(struct inode *inode)
+{
+ Node *e = inode->u.generic_ip;
+
+ if (e) {
+ write_lock(&entries_lock);
+ list_del(&e->list);
+ write_unlock(&entries_lock);
+ kfree(e);
+ }
}
-/*
- * Set status of entry/binfmt_misc:
- * '1' enables, '0' disables and '-1' clears entry/binfmt_misc
- */
-static int proc_write_status(struct file *file, const char *buffer,
- unsigned long count, void *data)
+static void kill_node(Node *e)
{
- struct binfmt_entry *e;
- int res = count;
+ struct dentry *dentry;
- if (buffer[count-1] == '\n')
- count--;
- if ((count == 1) && !(buffer[0] & ~('0' | '1'))) {
- if (data) {
- if ((e = get_entry((long) data)))
- e->flags = (e->flags & ~ENTRY_ENABLED)
- | (int)(buffer[0] - '0');
- put_entry(e);
+ write_lock(&entries_lock);
+ dentry = e->dentry;
+ if (dentry) {
+ list_del(&e->list);
+ INIT_LIST_HEAD(&e->list);
+ e->dentry = NULL;
+ }
+ write_unlock(&entries_lock);
+
+ if (dentry) {
+ dentry->d_inode->i_nlink--;
+ d_drop(dentry);
+ dput(dentry);
+ }
+}
+
+/* /<entry> */
+
+static ssize_t
+bm_entry_read(struct file * file, char * buf, size_t nbytes, loff_t *ppos)
+{
+ Node *e = file->f_dentry->d_inode->u.generic_ip;
+ loff_t pos = *ppos;
+ ssize_t res;
+ char *page;
+ int len;
+
+ if (!(page = (char*) __get_free_page(GFP_KERNEL)))
+ return -ENOMEM;
+
+ entry_status(e, page);
+ len = strlen(page);
+
+ res = -EINVAL;
+ if (pos < 0)
+ goto out;
+ res = 0;
+ if (pos >= len)
+ goto out;
+ if (len < pos + nbytes)
+ nbytes = len - pos;
+ res = -EFAULT;
+ if (copy_to_user(buf, page + pos, nbytes))
+ goto out;
+ *ppos = pos + nbytes;
+ res = nbytes;
+out:
+ free_page((unsigned long) page);
+ return res;
+}
+
+static ssize_t bm_entry_write(struct file *file, const char *buffer,
+ size_t count, loff_t *ppos)
+{
+ struct dentry *root;
+ Node *e = file->f_dentry->d_inode->u.generic_ip;
+ int res = parse_command(buffer, count);
+
+ switch (res) {
+ case 1: clear_bit(Enabled, &e->flags);
+ break;
+ case 2: set_bit(Enabled, &e->flags);
+ break;
+ case 3: root = dget(file->f_vfsmnt->mnt_sb->s_root);
+ down(&root->d_inode->i_sem);
+ down(&root->d_inode->i_zombie);
+
+ kill_node(e);
+
+ up(&root->d_inode->i_zombie);
+ up(&root->d_inode->i_sem);
+ dput(root);
+ break;
+ default: return res;
+ }
+ return count;
+}
+
+static struct file_operations bm_entry_operations = {
+ read: bm_entry_read,
+ write: bm_entry_write,
+};
+
+/* /register */
+
+static ssize_t bm_register_write(struct file *file, const char *buffer,
+ size_t count, loff_t *ppos)
+{
+ Node *e;
+ struct dentry *root, *dentry;
+ struct super_block *sb = file->f_vfsmnt->mnt_sb;
+ int err = 0;
+
+ e = create_entry(buffer, count);
+
+ if (IS_ERR(e))
+ return PTR_ERR(e);
+
+ root = dget(sb->s_root);
+ down(&root->d_inode->i_sem);
+ dentry = lookup_one_len(e->name, root, strlen(e->name));
+ err = PTR_ERR(dentry);
+ if (!IS_ERR(dentry)) {
+ down(&root->d_inode->i_zombie);
+ if (dentry->d_inode) {
+ err = -EEXIST;
} else {
- enabled = buffer[0] - '0';
+ struct inode * inode = bm_get_inode(sb, S_IFREG | 0644);
+ err = -ENOMEM;
+
+ if (inode) {
+ write_lock(&entries_lock);
+
+ e->dentry = dget(dentry);
+ inode->u.generic_ip = e;
+ inode->i_fop = &bm_entry_operations;
+ d_instantiate(dentry, inode);
+
+ list_add(&e->list, &entries);
+ write_unlock(&entries_lock);
+
+ err = 0;
+ }
}
- } else if ((count == 2) && (buffer[0] == '-') && (buffer[1] == '1')) {
- if (data)
- clear_entry((long) data);
- else
- clear_entries();
- } else {
- res = -EINVAL;
+ up(&root->d_inode->i_zombie);
+ dput(dentry);
}
- return res;
+ up(&root->d_inode->i_sem);
+ dput(root);
+
+ if (err) {
+ kfree(e);
+ return -EINVAL;
+ }
+ return count;
}
-/*
- * Remove the /proc-dir entries of one binfmt
- */
-static void entry_proc_cleanup(struct binfmt_entry *e)
+static struct file_operations bm_register_operations = {
+ write: bm_register_write,
+};
+
+/* /status */
+
+static ssize_t
+bm_status_read(struct file * file, char * buf, size_t nbytes, loff_t *ppos)
{
- remove_proc_entry(e->proc_name, bm_dir);
+ char *s = enabled ? "enabled" : "disabled";
+ int len = strlen(s);
+ loff_t pos = *ppos;
+
+ if (pos < 0)
+ return -EINVAL;
+ if (pos >= len)
+ return 0;
+ if (len < pos + nbytes)
+ nbytes = len - pos;
+ if (copy_to_user(buf, s + pos, nbytes))
+ return -EFAULT;
+ *ppos = pos + nbytes;
+ return nbytes;
}
-/*
- * Create the /proc-dir entry for binfmt
- */
-static int entry_proc_setup(struct binfmt_entry *e)
+static ssize_t bm_status_write(struct file * file, const char * buffer,
+ size_t count, loff_t *ppos)
{
- if (!(e->proc_dir = create_proc_entry(e->proc_name,
- S_IFREG | S_IRUGO | S_IWUSR, bm_dir)))
- {
- printk(KERN_WARNING "Unable to create /proc entry.\n");
- return -ENOENT;
+ int res = parse_command(buffer, count);
+ struct dentry *root;
+
+ switch (res) {
+ case 1: enabled = 0; break;
+ case 2: enabled = 1; break;
+ case 3: root = dget(file->f_vfsmnt->mnt_sb->s_root);
+ down(&root->d_inode->i_sem);
+ down(&root->d_inode->i_zombie);
+
+ while (!list_empty(&entries))
+ kill_node(list_entry(entries.next, Node, list));
+
+ up(&root->d_inode->i_zombie);
+ up(&root->d_inode->i_sem);
+		dput(root);
+		break;
+ default: return res;
}
- e->proc_dir->data = (void *) (e->id);
- e->proc_dir->read_proc = proc_read_status;
- e->proc_dir->write_proc = proc_write_status;
+ return count;
+}
+
+static struct file_operations bm_status_operations = {
+ read: bm_status_read,
+ write: bm_status_write,
+};
+
+/* / */
+
+static struct dentry * bm_lookup(struct inode *dir, struct dentry *dentry)
+{
+ d_add(dentry, NULL);
+ return NULL;
+}
+
+static struct file_operations bm_dir_operations = {
+ read: generic_read_dir,
+ readdir: dcache_readdir,
+};
+
+static struct inode_operations bm_dir_inode_operations = {
+ lookup: bm_lookup,
+};
+
+/* Superblock handling */
+
+static int bm_statfs(struct super_block *sb, struct statfs *buf)
+{
+ buf->f_type = sb->s_magic;
+ buf->f_bsize = PAGE_CACHE_SIZE;
+ buf->f_namelen = 255;
return 0;
}
-static int __init init_misc_binfmt(void)
+static struct super_operations s_ops = {
+ statfs: bm_statfs,
+ put_inode: force_delete,
+ clear_inode: bm_clear_inode,
+};
+
+static struct super_block *bm_read_super(struct super_block * sb, void * data, int silent)
{
- int error = -ENOENT;
- struct proc_dir_entry *status = NULL, *reg;
+ struct qstr names[2] = {{name:"status"}, {name:"register"}};
+ struct inode * inode;
+ struct dentry * dentry[3];
+ int i;
+
+ for (i=0; i<sizeof(names)/sizeof(names[0]); i++) {
+ names[i].len = strlen(names[i].name);
+ names[i].hash = full_name_hash(names[i].name, names[i].len);
+ }
- bm_dir = proc_mkdir("sys/fs/binfmt_misc", NULL); /* WTF??? */
- if (!bm_dir)
- goto out;
- bm_dir->owner = THIS_MODULE;
+ sb->s_blocksize = PAGE_CACHE_SIZE;
+ sb->s_blocksize_bits = PAGE_CACHE_SHIFT;
+ sb->s_magic = 0x42494e4d;
+ sb->s_op = &s_ops;
+
+ inode = bm_get_inode(sb, S_IFDIR | 0755);
+ if (!inode)
+ return NULL;
+ inode->i_op = &bm_dir_inode_operations;
+ inode->i_fop = &bm_dir_operations;
+ dentry[0] = d_alloc_root(inode);
+ if (!dentry[0]) {
+ iput(inode);
+ return NULL;
+ }
+ dentry[1] = d_alloc(dentry[0], &names[0]);
+ if (!dentry[1])
+ goto out1;
+ dentry[2] = d_alloc(dentry[0], &names[1]);
+ if (!dentry[2])
+ goto out2;
+ inode = bm_get_inode(sb, S_IFREG | 0644);
+ if (!inode)
+ goto out3;
+ inode->i_fop = &bm_status_operations;
+ d_add(dentry[1], inode);
+ inode = bm_get_inode(sb, S_IFREG | 0400);
+ if (!inode)
+ goto out3;
+ inode->i_fop = &bm_register_operations;
+ d_add(dentry[2], inode);
+
+ sb->s_root = dentry[0];
+ return sb;
+
+out3:
+ dput(dentry[2]);
+out2:
+ dput(dentry[1]);
+out1:
+ dput(dentry[0]);
+ return NULL;
+}
- status = create_proc_entry("status", S_IFREG | S_IRUGO | S_IWUSR,
- bm_dir);
- if (!status)
- goto cleanup_bm;
- status->read_proc = proc_read_status;
- status->write_proc = proc_write_status;
+static struct linux_binfmt misc_format = {
+ NULL, THIS_MODULE, load_misc_binary, NULL, NULL, 0
+};
- reg = create_proc_entry("register", S_IFREG | S_IWUSR, bm_dir);
- if (!reg)
- goto cleanup_status;
- reg->write_proc = proc_write_register;
+static DECLARE_FSTYPE(bm_fs_type, "binfmt_misc", bm_read_super, FS_SINGLE|FS_LITTER);
- error = register_binfmt(&misc_format);
-out:
- return error;
+static struct vfsmount *bm_mnt;
-cleanup_status:
- remove_proc_entry("status", bm_dir);
-cleanup_bm:
- remove_proc_entry("sys/fs/binfmt_misc", NULL);
- goto out;
+static int __init init_misc_binfmt(void)
+{
+ int err = register_filesystem(&bm_fs_type);
+ if (!err) {
+ bm_mnt = kern_mount(&bm_fs_type);
+ err = PTR_ERR(bm_mnt);
+ if (IS_ERR(bm_mnt))
+ unregister_filesystem(&bm_fs_type);
+ else {
+ err = register_binfmt(&misc_format);
+ if (err) {
+ unregister_filesystem(&bm_fs_type);
+ kern_umount(bm_mnt);
+ }
+ }
+ }
+ return err;
}
static void __exit exit_misc_binfmt(void)
{
unregister_binfmt(&misc_format);
- remove_proc_entry("register", bm_dir);
- remove_proc_entry("status", bm_dir);
- clear_entries();
- remove_proc_entry("sys/fs/binfmt_misc", NULL);
+ unregister_filesystem(&bm_fs_type);
+ kern_umount(bm_mnt);
}
EXPORT_NO_SYMBOLS;
module_init(init_misc_binfmt);
module_exit(exit_misc_binfmt);
+MODULE_LICENSE("GPL");
mark_buffer_clean(old_bh);
wait_on_buffer(old_bh);
clear_bit(BH_Req, &old_bh->b_state);
- /* Here we could run brelse or bforget. We use
- bforget because it will try to put the buffer
- in the freelist. */
- __bforget(old_bh);
+ __brelse(old_bh);
}
}
MODULE_AUTHOR("Olaf Kirch <okir@monad.swb.de>");
MODULE_DESCRIPTION("NFS file locking service version " LOCKD_VERSION ".");
+MODULE_LICENSE("GPL");
MODULE_PARM(nlm_grace_period, "10-240l");
MODULE_PARM(nlm_timeout, "3-20l");
MODULE_PARM(nlm_udpport, "0-65535l");
/* New-style module support since 2.1.18 */
EXPORT_NO_SYMBOLS;
MODULE_AUTHOR("Olaf Kirch <okir@monad.swb.de>");
+MODULE_LICENSE("GPL");
struct nfsd_linkage nfsd_linkage_s = {
do_nfsservctl: handle_sys_nfsservctl,
int nfserr, type, mode;
dev_t rdev = NODEV;
- dprintk("nfsd: CREATE %s %*.s\n",
+ dprintk("nfsd: CREATE %s %.*s\n",
SVCFH_fmt(dirfhp), argp->len, argp->name);
/* First verify the parent file handle */
#endif
#ifdef CONFIG_SYSCTL
proc_sys_root = proc_mkdir("sys", 0);
+#endif
+#if defined(CONFIG_BINFMT_MISC) || defined(CONFIG_BINFMT_MISC_MODULE)
+ proc_mkdir("sys/fs", 0);
+ proc_mkdir("sys/fs/binfmt_misc", 0);
#endif
proc_root_fs = proc_mkdir("fs", 0);
proc_root_driver = proc_mkdir("driver", 0);
return do_kern_mount((char *)type->name, 0, (char *)type->name, NULL);
}
+static char * __initdata root_mount_data;
+static int __init root_data_setup(char *str)
+{
+ root_mount_data = str;
+ return 1;
+}
+
static char * __initdata root_fs_names;
static int __init fs_names_setup(char *str)
{
root_fs_names = str;
- return 0;
+ return 1;
}
+__setup("rootflags=", root_data_setup);
__setup("rootfstype=", fs_names_setup);
static void __init get_fs_names(char *page)
struct file_system_type * fs_type = get_fs_type(p);
if (!fs_type)
continue;
- sb = read_super(ROOT_DEV,bdev,fs_type,root_mountflags,NULL);
+ sb = read_super(ROOT_DEV, bdev, fs_type,
+ root_mountflags, root_mount_data);
if (sb)
goto mount_it;
put_filesystem(fs_type);
* on code by Martin von Loewis <martin@mira.isdn.cs.tu-berlin.de>.
*/
+#include <linux/sched.h>
+#include <linux/locks.h>
#include <linux/fs.h>
#include <linux/ufs_fs.h>
#define UFSD(x)
#endif
+/*
+ * NOTE! unlike strncmp, ufs_match returns 1 for success, 0 for failure.
+ *
+ * len <= UFS_MAXNAMLEN and de != NULL are guaranteed by caller.
+ */
+static inline int ufs_match (int len, const char * const name,
+ struct ufs_dir_entry * de, unsigned flags, unsigned swab)
+{
+ if (len != ufs_get_de_namlen(de))
+ return 0;
+ if (!de->d_ino)
+ return 0;
+ return !memcmp(name, de->d_name, len);
+}
+
/*
* This is blatantly stolen from ext2fs
*/
return 0;
}
+/*
+ * define how far ahead to read directories while searching them.
+ */
+#define NAMEI_RA_CHUNKS 2
+#define NAMEI_RA_BLOCKS 4
+#define NAMEI_RA_SIZE (NAMEI_RA_CHUNKS * NAMEI_RA_BLOCKS)
+#define NAMEI_RA_INDEX(c,b) (((c) * NAMEI_RA_BLOCKS) + (b))
+
+/*
+ * ufs_find_entry()
+ *
+ * finds an entry in the specified directory with the wanted name. It
+ * returns the directory entry that was found, and the cache buffer in
+ * which it lives (as a parameter - res_bh). It does NOT read the inode
+ * of the entry - you'll have to do that yourself if you want to.
+ */
+struct ufs_dir_entry * ufs_find_entry (struct dentry *dentry,
+ struct buffer_head ** res_bh)
+{
+ struct super_block * sb;
+ struct buffer_head * bh_use[NAMEI_RA_SIZE];
+ struct buffer_head * bh_read[NAMEI_RA_SIZE];
+ unsigned long offset;
+ int block, toread, i, err;
+ unsigned flags, swab;
+ struct inode *dir = dentry->d_parent->d_inode;
+ const char *name = dentry->d_name.name;
+ int namelen = dentry->d_name.len;
+
+ UFSD(("ENTER, dir_ino %lu, name %s, namlen %u\n", dir->i_ino, name, namelen))
+
+ *res_bh = NULL;
+
+ sb = dir->i_sb;
+ flags = sb->u.ufs_sb.s_flags;
+ swab = sb->u.ufs_sb.s_swab;
+
+ if (namelen > UFS_MAXNAMLEN)
+ return NULL;
+
+ memset (bh_use, 0, sizeof (bh_use));
+ toread = 0;
+ for (block = 0; block < NAMEI_RA_SIZE; ++block) {
+ struct buffer_head * bh;
+
+ if ((block << sb->s_blocksize_bits) >= dir->i_size)
+ break;
+ bh = ufs_getfrag (dir, block, 0, &err);
+ bh_use[block] = bh;
+ if (bh && !buffer_uptodate(bh))
+ bh_read[toread++] = bh;
+ }
+
+ for (block = 0, offset = 0; offset < dir->i_size; block++) {
+ struct buffer_head * bh;
+ struct ufs_dir_entry * de;
+ char * dlimit;
+
+ if ((block % NAMEI_RA_BLOCKS) == 0 && toread) {
+ ll_rw_block (READ, toread, bh_read);
+ toread = 0;
+ }
+ bh = bh_use[block % NAMEI_RA_SIZE];
+ if (!bh) {
+ ufs_error (sb, "ufs_find_entry",
+ "directory #%lu contains a hole at offset %lu",
+ dir->i_ino, offset);
+ offset += sb->s_blocksize;
+ continue;
+ }
+ wait_on_buffer (bh);
+ if (!buffer_uptodate(bh)) {
+ /*
+ * read error: all bets are off
+ */
+ break;
+ }
+
+ de = (struct ufs_dir_entry *) bh->b_data;
+ dlimit = bh->b_data + sb->s_blocksize;
+ while ((char *) de < dlimit && offset < dir->i_size) {
+ /* this code is executed quadratically often */
+ /* do minimal checking by hand */
+ int de_len;
+
+ if ((char *) de + namelen <= dlimit &&
+ ufs_match (namelen, name, de, flags, swab)) {
+ /* found a match -
+ just to be sure, do a full check */
+ if (!ufs_check_dir_entry("ufs_find_entry",
+ dir, de, bh, offset))
+ goto failed;
+ for (i = 0; i < NAMEI_RA_SIZE; ++i) {
+ if (bh_use[i] != bh)
+ brelse (bh_use[i]);
+ }
+ *res_bh = bh;
+ return de;
+ }
+ /* prevent looping on a bad block */
+ de_len = SWAB16(de->d_reclen);
+ if (de_len <= 0)
+ goto failed;
+ offset += de_len;
+ de = (struct ufs_dir_entry *) ((char *) de + de_len);
+ }
+
+ brelse (bh);
+ if (((block + NAMEI_RA_SIZE) << sb->s_blocksize_bits ) >=
+ dir->i_size)
+ bh = NULL;
+ else
+ bh = ufs_getfrag (dir, block + NAMEI_RA_SIZE, 0, &err);
+ bh_use[block % NAMEI_RA_SIZE] = bh;
+ if (bh && !buffer_uptodate(bh))
+ bh_read[toread++] = bh;
+ }
+
+failed:
+ for (i = 0; i < NAMEI_RA_SIZE; ++i) brelse (bh_use[i]);
+ UFSD(("EXIT\n"))
+ return NULL;
+}
+
int ufs_check_dir_entry (const char * function, struct inode * dir,
struct ufs_dir_entry * de, struct buffer_head * bh,
unsigned long offset)
return (error_msg == NULL ? 1 : 0);
}
+struct ufs_dir_entry *ufs_dotdot(struct inode *dir, struct buffer_head **p)
+{
+ int err;
+ struct buffer_head *bh = ufs_bread (dir, 0, 0, &err);
+ struct ufs_dir_entry *res = NULL;
+
+ if (bh) {
+ unsigned swab = dir->i_sb->u.ufs_sb.s_swab;
+
+ res = (struct ufs_dir_entry *) bh->b_data;
+ res = (struct ufs_dir_entry *)((char *)res +
+ SWAB16(res->d_reclen));
+ }
+ *p = bh;
+ return res;
+}
+ino_t ufs_inode_by_name(struct inode * dir, struct dentry *dentry)
+{
+ unsigned swab = dir->i_sb->u.ufs_sb.s_swab;
+ ino_t res = 0;
+ struct ufs_dir_entry * de;
+ struct buffer_head *bh;
+
+ de = ufs_find_entry (dentry, &bh);
+ if (de) {
+ res = SWAB32(de->d_ino);
+ brelse(bh);
+ }
+ return res;
+}
+
+void ufs_set_link(struct inode *dir, struct ufs_dir_entry *de,
+ struct buffer_head *bh, struct inode *inode)
+{
+ unsigned swab = dir->i_sb->u.ufs_sb.s_swab;
+ dir->i_version = ++event;
+ de->d_ino = SWAB32(inode->i_ino);
+ mark_buffer_dirty(bh);
+ if (IS_SYNC(dir)) {
+ ll_rw_block (WRITE, 1, &bh);
+ wait_on_buffer(bh);
+ }
+ brelse (bh);
+}
+
+/*
+ * ufs_add_link()
+ *
+ * adds a file entry to the specified directory, using the same
+ * search semantics as ufs_find_entry(). It returns 0 on success or a
+ * negative error code on failure.
+ */
+int ufs_add_link(struct dentry *dentry, struct inode *inode)
+{
+ struct super_block * sb;
+ struct ufs_sb_private_info * uspi;
+ unsigned long offset;
+ unsigned fragoff;
+ unsigned short rec_len;
+ struct buffer_head * bh;
+ struct ufs_dir_entry * de, * de1;
+ unsigned flags, swab;
+ struct inode *dir = dentry->d_parent->d_inode;
+ const char *name = dentry->d_name.name;
+ int namelen = dentry->d_name.len;
+ int err;
+
+ UFSD(("ENTER, name %s, namelen %u\n", name, namelen))
+
+ sb = dir->i_sb;
+ flags = sb->u.ufs_sb.s_flags;
+ swab = sb->u.ufs_sb.s_swab;
+ uspi = sb->u.ufs_sb.s_uspi;
+
+ if (!namelen)
+ return -EINVAL;
+ bh = ufs_bread (dir, 0, 0, &err);
+ if (!bh)
+ return err;
+ rec_len = UFS_DIR_REC_LEN(namelen);
+ offset = 0;
+ de = (struct ufs_dir_entry *) bh->b_data;
+ while (1) {
+ if ((char *)de >= UFS_SECTOR_SIZE + bh->b_data) {
+ fragoff = offset & ~uspi->s_fmask;
+ if (fragoff != 0 && fragoff != UFS_SECTOR_SIZE)
+ ufs_error (sb, "ufs_add_entry", "internal error"
+ " fragoff %u", fragoff);
+ if (!fragoff) {
+ brelse (bh);
+ bh = ufs_bread (dir, offset >> sb->s_blocksize_bits, 1, &err);
+ if (!bh)
+ return err;
+ }
+ if (dir->i_size <= offset) {
+ if (dir->i_size == 0) {
+ brelse(bh);
+ return -ENOENT;
+ }
+ de = (struct ufs_dir_entry *) (bh->b_data + fragoff);
+ de->d_ino = SWAB32(0);
+ de->d_reclen = SWAB16(UFS_SECTOR_SIZE);
+ ufs_set_de_namlen(de,0);
+ dir->i_size = offset + UFS_SECTOR_SIZE;
+ mark_inode_dirty(dir);
+ } else {
+ de = (struct ufs_dir_entry *) bh->b_data;
+ }
+ }
+ if (!ufs_check_dir_entry ("ufs_add_entry", dir, de, bh, offset)) {
+ brelse (bh);
+ return -ENOENT;
+ }
+ if (ufs_match (namelen, name, de, flags, swab)) {
+ brelse (bh);
+ return -EEXIST;
+ }
+ if (SWAB32(de->d_ino) == 0 && SWAB16(de->d_reclen) >= rec_len)
+ break;
+
+ if (SWAB16(de->d_reclen) >= UFS_DIR_REC_LEN(ufs_get_de_namlen(de)) + rec_len)
+ break;
+ offset += SWAB16(de->d_reclen);
+ de = (struct ufs_dir_entry *) ((char *) de + SWAB16(de->d_reclen));
+ }
+
+ if (SWAB32(de->d_ino)) {
+ de1 = (struct ufs_dir_entry *) ((char *) de +
+ UFS_DIR_REC_LEN(ufs_get_de_namlen(de)));
+ de1->d_reclen = SWAB16(SWAB16(de->d_reclen) -
+ UFS_DIR_REC_LEN(ufs_get_de_namlen(de)));
+ de->d_reclen = SWAB16(UFS_DIR_REC_LEN(ufs_get_de_namlen(de)));
+ de = de1;
+ }
+ de->d_ino = SWAB32(0);
+ ufs_set_de_namlen(de, namelen);
+ memcpy (de->d_name, name, namelen + 1);
+ de->d_ino = SWAB32(inode->i_ino);
+ ufs_set_de_type (de, inode->i_mode);
+ mark_buffer_dirty(bh);
+ if (IS_SYNC(dir)) {
+ ll_rw_block (WRITE, 1, &bh);
+ wait_on_buffer (bh);
+ }
+ brelse (bh);
+ dir->i_mtime = dir->i_ctime = CURRENT_TIME;
+ dir->i_version = ++event;
+ mark_inode_dirty(dir);
+
+ UFSD(("EXIT\n"))
+ return 0;
+}
+
+/*
+ * ufs_delete_entry deletes a directory entry by merging it with the
+ * previous entry.
+ */
+int ufs_delete_entry (struct inode * inode, struct ufs_dir_entry * dir,
+ struct buffer_head * bh )
+
+{
+ struct super_block * sb;
+ struct ufs_dir_entry * de, * pde;
+ unsigned i;
+ unsigned flags, swab;
+
+ UFSD(("ENTER\n"))
+
+ sb = inode->i_sb;
+ flags = sb->u.ufs_sb.s_flags;
+ swab = sb->u.ufs_sb.s_swab;
+ i = 0;
+ pde = NULL;
+ de = (struct ufs_dir_entry *) bh->b_data;
+
+ UFSD(("ino %u, reclen %u, namlen %u, name %s\n", SWAB32(de->d_ino),
+ SWAB16(de->d_reclen), ufs_get_de_namlen(de), de->d_name))
+
+ while (i < bh->b_size) {
+ if (!ufs_check_dir_entry ("ufs_delete_entry", inode, de, bh, i)) {
+ brelse(bh);
+ return -EIO;
+ }
+ if (de == dir) {
+ if (pde)
+ pde->d_reclen =
+ SWAB16(SWAB16(pde->d_reclen) +
+ SWAB16(dir->d_reclen));
+ dir->d_ino = SWAB32(0);
+ inode->i_version = ++event;
+ inode->i_ctime = inode->i_mtime = CURRENT_TIME;
+ mark_inode_dirty(inode);
+ mark_buffer_dirty(bh);
+ if (IS_SYNC(inode)) {
+ ll_rw_block(WRITE, 1, &bh);
+ wait_on_buffer(bh);
+ }
+ brelse(bh);
+ UFSD(("EXIT\n"))
+ return 0;
+ }
+ i += SWAB16(de->d_reclen);
+ if (i == UFS_SECTOR_SIZE) pde = NULL;
+ else pde = de;
+ de = (struct ufs_dir_entry *)
+ ((char *) de + SWAB16(de->d_reclen));
+ if (i == UFS_SECTOR_SIZE && SWAB16(de->d_reclen) == 0)
+ break;
+ }
+ UFSD(("EXIT\n"))
+ brelse(bh);
+ return -ENOENT;
+}
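/*
 * Illustrative host-side sketch (not kernel code) of the record-length
 * merge ufs_delete_entry() performs: the victim's d_reclen is folded
 * into the previous entry, so later scans simply step over the freed
 * space.  Byte order and the real on-disk layout are ignored here.
 */
#include <stdio.h>

struct toy_de {
	unsigned int	ino;
	unsigned short	reclen;
};

int main(void)
{
	struct toy_de prev = { 100, 12 };	/* plays the role of pde */
	struct toy_de gone = { 200, 16 };	/* the entry being deleted */

	prev.reclen += gone.reclen;	/* pde->d_reclen += dir->d_reclen */
	gone.ino = 0;			/* dir->d_ino = 0 */
	printf("previous entry now spans %u bytes; deleted ino = %u\n",
	       prev.reclen, gone.ino);
	return 0;
}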
+
+int ufs_make_empty(struct inode * inode, struct inode *dir)
+{
+ struct super_block * sb = dir->i_sb;
+ unsigned flags = sb->u.ufs_sb.s_flags;
+ unsigned swab = sb->u.ufs_sb.s_swab;
+ struct buffer_head * dir_block;
+ struct ufs_dir_entry * de;
+ int err;
+
+ dir_block = ufs_bread (inode, 0, 1, &err);
+ if (!dir_block)
+ return err;
+
+ inode->i_blocks = sb->s_blocksize / UFS_SECTOR_SIZE;
+ de = (struct ufs_dir_entry *) dir_block->b_data;
+ de->d_ino = SWAB32(inode->i_ino);
+ ufs_set_de_type (de, inode->i_mode);
+ ufs_set_de_namlen(de,1);
+ de->d_reclen = SWAB16(UFS_DIR_REC_LEN(1));
+ strcpy (de->d_name, ".");
+ de = (struct ufs_dir_entry *) ((char *) de + SWAB16(de->d_reclen));
+ de->d_ino = SWAB32(dir->i_ino);
+ ufs_set_de_type (de, dir->i_mode);
+ de->d_reclen = SWAB16(UFS_SECTOR_SIZE - UFS_DIR_REC_LEN(1));
+ ufs_set_de_namlen(de,2);
+ strcpy (de->d_name, "..");
+ mark_buffer_dirty(dir_block);
+ brelse (dir_block);
+ mark_inode_dirty(inode);
+ return 0;
+}
+
+/*
+ * routine to check that the specified directory is empty (for rmdir)
+ */
+int ufs_empty_dir (struct inode * inode)
+{
+ struct super_block * sb;
+ unsigned long offset;
+ struct buffer_head * bh;
+ struct ufs_dir_entry * de, * de1;
+ int err;
+ unsigned swab;
+
+ sb = inode->i_sb;
+ swab = sb->u.ufs_sb.s_swab;
+
+ if (inode->i_size < UFS_DIR_REC_LEN(1) + UFS_DIR_REC_LEN(2) ||
+ !(bh = ufs_bread (inode, 0, 0, &err))) {
+ ufs_warning (inode->i_sb, "empty_dir",
+ "bad directory (dir #%lu) - no data block",
+ inode->i_ino);
+ return 1;
+ }
+ de = (struct ufs_dir_entry *) bh->b_data;
+ de1 = (struct ufs_dir_entry *) ((char *) de + SWAB16(de->d_reclen));
+ if (SWAB32(de->d_ino) != inode->i_ino || !SWAB32(de1->d_ino) ||
+ strcmp (".", de->d_name) || strcmp ("..", de1->d_name)) {
+ ufs_warning (inode->i_sb, "empty_dir",
+ "bad directory (dir #%lu) - no `.' or `..'",
+ inode->i_ino);
+ return 1;
+ }
+ offset = SWAB16(de->d_reclen) + SWAB16(de1->d_reclen);
+ de = (struct ufs_dir_entry *) ((char *) de1 + SWAB16(de1->d_reclen));
+ while (offset < inode->i_size ) {
+ if (!bh || (void *) de >= (void *) (bh->b_data + sb->s_blocksize)) {
+ brelse (bh);
+ bh = ufs_bread (inode, offset >> sb->s_blocksize_bits, 1, &err);
+ if (!bh) {
+ ufs_error (sb, "empty_dir",
+ "directory #%lu contains a hole at offset %lu",
+ inode->i_ino, offset);
+ offset += sb->s_blocksize;
+ continue;
+ }
+ de = (struct ufs_dir_entry *) bh->b_data;
+ }
+ if (!ufs_check_dir_entry ("empty_dir", inode, de, bh, offset)) {
+ brelse (bh);
+ return 1;
+ }
+ if (SWAB32(de->d_ino)) {
+ brelse (bh);
+ return 0;
+ }
+ offset += SWAB16(de->d_reclen);
+ de = (struct ufs_dir_entry *) ((char *) de + SWAB16(de->d_reclen));
+ }
+ brelse (bh);
+ return 1;
+}
+
struct file_operations ufs_dir_operations = {
read: generic_read_dir,
readdir: ufs_readdir,
* For other inodes, search forward from the parent directory's block
* group to find a free inode.
*/
-struct inode * ufs_new_inode (const struct inode * dir, int mode, int * err )
+struct inode * ufs_new_inode (const struct inode * dir, int mode)
{
struct super_block * sb;
struct ufs_sb_private_info * uspi;
UFSD(("ENTER\n"))
/* Cannot create files in a deleted directory */
- if (!dir || !dir->i_nlink) {
- *err = -EPERM;
- return NULL;
- }
+ if (!dir || !dir->i_nlink)
+ return ERR_PTR(-EPERM);
sb = dir->i_sb;
inode = new_inode(sb);
- if (!inode) {
- *err = -ENOMEM;
- return NULL;
- }
+ if (!inode)
+ return ERR_PTR(-ENOMEM);
swab = sb->u.ufs_sb.s_swab;
uspi = sb->u.ufs_sb.s_uspi;
usb1 = ubh_get_usb_first(USPI_UBH);
lock_super (sb);
- *err = -ENOSPC;
-
/*
* Try to place the inode in its parent directory
*/
if (dir->i_mode & S_ISGID) {
inode->i_gid = dir->i_gid;
if (S_ISDIR(mode))
- mode |= S_ISGID;
+ inode->i_mode |= S_ISGID;
} else
inode->i_gid = current->fsgid;
inode->i_flags |= S_NOQUOTA;
inode->i_nlink = 0;
iput(inode);
- *err = -EDQUOT;
- return NULL;
+ return ERR_PTR(-EDQUOT);
}
UFSD(("allocating inode %lu\n", inode->i_ino))
- *err = 0;
UFSD(("EXIT\n"))
return inode;
make_bad_inode(inode);
iput (inode);
UFSD(("EXIT (FAILED)\n"))
- return NULL;
+ return ERR_PTR(-ENOSPC);
}
#define UFSD(x)
#endif
-#ifdef UFS_INODE_DEBUG_MORE
-static void ufs_print_inode(struct inode * inode)
-{
- unsigned swab = inode->i_sb->u.ufs_sb.s_swab;
- printk("ino %lu mode 0%6.6o nlink %d uid %d gid %d"
- " size %lu blocks %lu\n",
- inode->i_ino, inode->i_mode, inode->i_nlink,
- inode->i_uid, inode->i_gid,
- inode->i_size, inode->i_blocks);
- printk(" db <%u %u %u %u %u %u %u %u %u %u %u %u>\n",
- SWAB32(inode->u.ufs_i.i_u1.i_data[0]),
- SWAB32(inode->u.ufs_i.i_u1.i_data[1]),
- SWAB32(inode->u.ufs_i.i_u1.i_data[2]),
- SWAB32(inode->u.ufs_i.i_u1.i_data[3]),
- SWAB32(inode->u.ufs_i.i_u1.i_data[4]),
- SWAB32(inode->u.ufs_i.i_u1.i_data[5]),
- SWAB32(inode->u.ufs_i.i_u1.i_data[6]),
- SWAB32(inode->u.ufs_i.i_u1.i_data[7]),
- SWAB32(inode->u.ufs_i.i_u1.i_data[8]),
- SWAB32(inode->u.ufs_i.i_u1.i_data[9]),
- SWAB32(inode->u.ufs_i.i_u1.i_data[10]),
- SWAB32(inode->u.ufs_i.i_u1.i_data[11]));
- printk(" gen %u ib <%u %u %u>\n",
- inode->u.ufs_i.i_gen,
- SWAB32(inode->u.ufs_i.i_u1.i_data[UFS_IND_BLOCK]),
- SWAB32(inode->u.ufs_i.i_u1.i_data[UFS_DIND_BLOCK]),
- SWAB32(inode->u.ufs_i.i_u1.i_data[UFS_TIND_BLOCK]));
-}
-#endif
-
-#define ufs_inode_bmap(inode, nr) \
- (SWAB32((inode)->u.ufs_i.i_u1.i_data[(nr) >> uspi->s_fpbshift]) + ((nr) & uspi->s_fpbmask))
-
-static inline unsigned int ufs_block_bmap (struct buffer_head * bh, unsigned nr,
+static inline unsigned int ufs_block_bmap1(struct buffer_head * bh, unsigned nr,
struct ufs_sb_private_info * uspi, unsigned swab)
{
unsigned int tmp;
-
- UFSD(("ENTER, nr %u\n", nr))
if (!bh)
return 0;
- tmp = SWAB32(((u32 *) bh->b_data)[nr >> uspi->s_fpbshift]) + (nr & uspi->s_fpbmask);
+ tmp = SWAB32(((u32 *) bh->b_data)[nr]);
brelse (bh);
- UFSD(("EXIT, result %u\n", tmp))
return tmp;
}
+static int ufs_block_to_path(struct inode *inode, long i_block, int offsets[4])
+{
+ struct ufs_sb_private_info *uspi = inode->i_sb->u.ufs_sb.s_uspi;
+ int ptrs = uspi->s_apb;
+ int ptrs_bits = uspi->s_apbshift;
+ const long direct_blocks = UFS_NDADDR,
+ indirect_blocks = ptrs,
+ double_blocks = (1 << (ptrs_bits * 2));
+ int n = 0;
+
+ if (i_block < 0) {
+ ufs_warning(inode->i_sb, "ufs_block_to_path", "block < 0");
+ } else if (i_block < direct_blocks) {
+ offsets[n++] = i_block;
+ } else if ((i_block -= direct_blocks) < indirect_blocks) {
+ offsets[n++] = UFS_IND_BLOCK;
+ offsets[n++] = i_block;
+ } else if ((i_block -= indirect_blocks) < double_blocks) {
+ offsets[n++] = UFS_DIND_BLOCK;
+ offsets[n++] = i_block >> ptrs_bits;
+ offsets[n++] = i_block & (ptrs - 1);
+ } else if (((i_block -= double_blocks) >> (ptrs_bits * 2)) < ptrs) {
+ offsets[n++] = UFS_TIND_BLOCK;
+ offsets[n++] = i_block >> (ptrs_bits * 2);
+ offsets[n++] = (i_block >> ptrs_bits) & (ptrs - 1);
+ offsets[n++] = i_block & (ptrs - 1);
+ } else {
+ ufs_warning(inode->i_sb, "ufs_block_to_path", "block > big");
+ }
+ return n;
+}
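/*
 * Illustrative standalone sketch (not from this patch) of the same
 * direct/indirect/double/triple mapping done by ufs_block_to_path().
 * The constants are assumptions for the example: 12 direct blocks (as
 * UFS_NDADDR) and 2048 address pointers per indirect block.
 */
#include <stdio.h>

#define NDADDR		12		/* direct blocks, like UFS_NDADDR */
#define PTRS_BITS	11		/* assumed: 2048 pointers per block */
#define PTRS		(1L << PTRS_BITS)

static int toy_block_to_path(long b, int offsets[4])
{
	int n = 0;

	if (b < 0) {
		/* caller error, depth stays 0 */
	} else if (b < NDADDR) {
		offsets[n++] = b;
	} else if ((b -= NDADDR) < PTRS) {
		offsets[n++] = NDADDR;		/* like UFS_IND_BLOCK */
		offsets[n++] = b;
	} else if ((b -= PTRS) < (1L << (PTRS_BITS * 2))) {
		offsets[n++] = NDADDR + 1;	/* like UFS_DIND_BLOCK */
		offsets[n++] = b >> PTRS_BITS;
		offsets[n++] = b & (PTRS - 1);
	} else if (((b -= 1L << (PTRS_BITS * 2)) >> (PTRS_BITS * 2)) < PTRS) {
		offsets[n++] = NDADDR + 2;	/* like UFS_TIND_BLOCK */
		offsets[n++] = b >> (PTRS_BITS * 2);
		offsets[n++] = (b >> PTRS_BITS) & (PTRS - 1);
		offsets[n++] = b & (PTRS - 1);
	}
	return n;
}

int main(void)
{
	long blocks[3] = { 5, 100, 3000 };
	int off[4], n, i, j;

	for (i = 0; i < 3; i++) {
		n = toy_block_to_path(blocks[i], off);
		printf("block %ld -> depth %d:", blocks[i], n);
		for (j = 0; j < n; j++)
			printf(" %d", off[j]);
		printf("\n");
	}
	return 0;	/* prints {5}, {12, 88} and {13, 0, 940} */
}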
+
int ufs_frag_map(struct inode *inode, int frag)
{
- struct super_block *sb;
- struct ufs_sb_private_info *uspi;
- unsigned int swab;
- int i, ret;
+ struct super_block *sb = inode->i_sb;
+ struct ufs_sb_private_info *uspi = sb->u.ufs_sb.s_uspi;
+ unsigned int swab = sb->u.ufs_sb.s_swab;
+ int mask = uspi->s_apbmask>>uspi->s_fpbshift;
+ int shift = uspi->s_apbshift-uspi->s_fpbshift;
+ int offsets[4], *p;
+ int depth = ufs_block_to_path(inode, frag >> uspi->s_fpbshift, offsets);
+ int ret = 0;
+ u32 block;
+
+ if (depth == 0)
+ return 0;
- ret = 0;
- lock_kernel();
-
- sb = inode->i_sb;
- uspi = sb->u.ufs_sb.s_uspi;
- swab = sb->u.ufs_sb.s_swab;
- if (frag < 0) {
- ufs_warning(sb, "ufs_frag_map", "frag < 0");
- goto out;
- }
- if (frag >=
- ((UFS_NDADDR + uspi->s_apb + uspi->s_2apb + uspi->s_3apb)
- << uspi->s_fpbshift)) {
- ufs_warning(sb, "ufs_frag_map", "frag > big");
- goto out;
- }
+ p = offsets;
- if (frag < UFS_NDIR_FRAGMENT) {
- ret = uspi->s_sbbase + ufs_inode_bmap(inode, frag);
+ lock_kernel();
+ block = inode->u.ufs_i.i_u1.i_data[*p++];
+ if (!block)
goto out;
- }
+ while (--depth) {
+ struct buffer_head *bh;
+ int n = *p++;
- frag -= UFS_NDIR_FRAGMENT;
- if (frag < (1 << (uspi->s_apbshift + uspi->s_fpbshift))) {
- i = ufs_inode_bmap(inode,
- UFS_IND_FRAGMENT + (frag >> uspi->s_apbshift));
- if (!i)
+ bh = bread(sb->s_dev, uspi->s_sbbase+SWAB32(block)+(n>>shift),
+ sb->s_blocksize);
+ if (!bh)
goto out;
- ret = (uspi->s_sbbase +
- ufs_block_bmap(bread(sb->s_dev, uspi->s_sbbase + i,
- sb->s_blocksize),
- frag & uspi->s_apbmask, uspi, swab));
- goto out;
- }
- frag -= 1 << (uspi->s_apbshift + uspi->s_fpbshift);
- if (frag < (1 << (uspi->s_2apbshift + uspi->s_fpbshift))) {
- i = ufs_inode_bmap (inode,
- UFS_DIND_FRAGMENT + (frag >> uspi->s_2apbshift));
- if (!i)
+ block = ((u32*) bh->b_data)[n & mask];
+ brelse (bh);
+ if (!block)
goto out;
- i = ufs_block_bmap(bread(sb->s_dev, uspi->s_sbbase + i,
- sb->s_blocksize),
- (frag >> uspi->s_apbshift) & uspi->s_apbmask,
- uspi, swab);
- if (!i)
- goto out;
- ret = (uspi->s_sbbase +
- ufs_block_bmap(bread(sb->s_dev, uspi->s_sbbase + i,
- sb->s_blocksize),
- (frag & uspi->s_apbmask), uspi, swab));
- goto out;
}
- frag -= 1 << (uspi->s_2apbshift + uspi->s_fpbshift);
- i = ufs_inode_bmap(inode,
- UFS_TIND_FRAGMENT + (frag >> uspi->s_3apbshift));
- if (!i)
- goto out;
- i = ufs_block_bmap(bread(sb->s_dev, uspi->s_sbbase + i, sb->s_blocksize),
- (frag >> uspi->s_2apbshift) & uspi->s_apbmask,
- uspi, swab);
- if (!i)
- goto out;
- i = ufs_block_bmap(bread(sb->s_dev, uspi->s_sbbase + i, sb->s_blocksize),
- (frag >> uspi->s_apbshift) & uspi->s_apbmask,
- uspi, swab);
- if (!i)
- goto out;
- ret = (uspi->s_sbbase +
- ufs_block_bmap(bread(sb->s_dev, uspi->s_sbbase + i, sb->s_blocksize),
- (frag & uspi->s_apbmask), uspi, swab));
+ ret = uspi->s_sbbase + SWAB32(block) + (frag & uspi->s_fpbmask);
out:
unlock_kernel();
return ret;
brelse (bh);
-#ifdef UFS_INODE_DEBUG_MORE
- ufs_print_inode (inode);
-#endif
UFSD(("EXIT\n"))
}
* David S. Miller (davem@caip.rutgers.edu), 1995
*/
-#include <asm/uaccess.h>
-
-#include <linux/errno.h>
+#include <linux/sched.h>
#include <linux/fs.h>
#include <linux/ufs_fs.h>
-#include <linux/fcntl.h>
-#include <linux/sched.h>
-#include <linux/stat.h>
-#include <linux/string.h>
-#include <linux/locks.h>
-#include <linux/quotaops.h>
-
-#include "swab.h"
-#include "util.h"
#undef UFS_NAMEI_DEBUG
#define UFSD(x)
#endif
-/*
- * define how far ahead to read directories while searching them.
- */
-#define NAMEI_RA_CHUNKS 2
-#define NAMEI_RA_BLOCKS 4
-#define NAMEI_RA_SIZE (NAMEI_RA_CHUNKS * NAMEI_RA_BLOCKS)
-#define NAMEI_RA_INDEX(c,b) (((c) * NAMEI_RA_BLOCKS) + (b))
-
-/*
- * NOTE! unlike strncmp, ufs_match returns 1 for success, 0 for failure.
- *
- * len <= UFS_MAXNAMLEN and de != NULL are guaranteed by caller.
- */
-static inline int ufs_match (int len, const char * const name,
- struct ufs_dir_entry * de, unsigned flags, unsigned swab)
+static inline void ufs_inc_count(struct inode *inode)
{
- if (len != ufs_get_de_namlen(de))
- return 0;
- if (!de->d_ino)
- return 0;
- return !memcmp(name, de->d_name, len);
+ inode->i_nlink++;
+ mark_inode_dirty(inode);
}
-/*
- * ufs_find_entry()
- *
- * finds an entry in the specified directory with the wanted name. It
- * returns the cache buffer in which the entry was found, and the entry
- * itself (as a parameter - res_dir). It does NOT read the inode of the
- * entry - you'll have to do that yourself if you want to.
- */
-static struct buffer_head * ufs_find_entry (struct inode * dir,
- const char * const name, int namelen, struct ufs_dir_entry ** res_dir)
+static inline void ufs_dec_count(struct inode *inode)
{
- struct super_block * sb;
- struct buffer_head * bh_use[NAMEI_RA_SIZE];
- struct buffer_head * bh_read[NAMEI_RA_SIZE];
- unsigned long offset;
- int block, toread, i, err;
- unsigned flags, swab;
-
- UFSD(("ENTER, dir_ino %lu, name %s, namlen %u\n", dir->i_ino, name, namelen))
-
- *res_dir = NULL;
-
- sb = dir->i_sb;
- flags = sb->u.ufs_sb.s_flags;
- swab = sb->u.ufs_sb.s_swab;
-
- if (namelen > UFS_MAXNAMLEN)
- return NULL;
-
- memset (bh_use, 0, sizeof (bh_use));
- toread = 0;
- for (block = 0; block < NAMEI_RA_SIZE; ++block) {
- struct buffer_head * bh;
-
- if ((block << sb->s_blocksize_bits) >= dir->i_size)
- break;
- bh = ufs_getfrag (dir, block, 0, &err);
- bh_use[block] = bh;
- if (bh && !buffer_uptodate(bh))
- bh_read[toread++] = bh;
- }
-
- for (block = 0, offset = 0; offset < dir->i_size; block++) {
- struct buffer_head * bh;
- struct ufs_dir_entry * de;
- char * dlimit;
-
- if ((block % NAMEI_RA_BLOCKS) == 0 && toread) {
- ll_rw_block (READ, toread, bh_read);
- toread = 0;
- }
- bh = bh_use[block % NAMEI_RA_SIZE];
- if (!bh) {
- ufs_error (sb, "ufs_find_entry",
- "directory #%lu contains a hole at offset %lu",
- dir->i_ino, offset);
- offset += sb->s_blocksize;
- continue;
- }
- wait_on_buffer (bh);
- if (!buffer_uptodate(bh)) {
- /*
- * read error: all bets are off
- */
- break;
- }
-
- de = (struct ufs_dir_entry *) bh->b_data;
- dlimit = bh->b_data + sb->s_blocksize;
- while ((char *) de < dlimit && offset < dir->i_size) {
- /* this code is executed quadratically often */
- /* do minimal checking by hand */
- int de_len;
-
- if ((char *) de + namelen <= dlimit &&
- ufs_match (namelen, name, de, flags, swab)) {
- /* found a match -
- just to be sure, do a full check */
- if (!ufs_check_dir_entry("ufs_find_entry",
- dir, de, bh, offset))
- goto failed;
- for (i = 0; i < NAMEI_RA_SIZE; ++i) {
- if (bh_use[i] != bh)
- brelse (bh_use[i]);
- }
- *res_dir = de;
- return bh;
- }
- /* prevent looping on a bad block */
- de_len = SWAB16(de->d_reclen);
- if (de_len <= 0)
- goto failed;
- offset += de_len;
- de = (struct ufs_dir_entry *) ((char *) de + de_len);
- }
+ inode->i_nlink--;
+ mark_inode_dirty(inode);
+}
- brelse (bh);
- if (((block + NAMEI_RA_SIZE) << sb->s_blocksize_bits ) >=
- dir->i_size)
- bh = NULL;
- else
- bh = ufs_getfrag (dir, block + NAMEI_RA_SIZE, 0, &err);
- bh_use[block % NAMEI_RA_SIZE] = bh;
- if (bh && !buffer_uptodate(bh))
- bh_read[toread++] = bh;
+static inline int ufs_add_nondir(struct dentry *dentry, struct inode *inode)
+{
+ int err = ufs_add_link(dentry, inode);
+ if (!err) {
+ d_instantiate(dentry, inode);
+ return 0;
}
-
-failed:
- for (i = 0; i < NAMEI_RA_SIZE; ++i) brelse (bh_use[i]);
- UFSD(("EXIT\n"))
- return NULL;
+ ufs_dec_count(inode);
+ iput(inode);
+ return err;
}
static struct dentry *ufs_lookup(struct inode * dir, struct dentry *dentry)
{
- struct super_block * sb;
- struct inode * inode;
- struct ufs_dir_entry * de;
- struct buffer_head * bh;
- unsigned swab;
-
- UFSD(("ENTER\n"))
-
- sb = dir->i_sb;
- swab = sb->u.ufs_sb.s_swab;
+ struct inode * inode = NULL;
+ ino_t ino;
if (dentry->d_name.len > UFS_MAXNAMLEN)
return ERR_PTR(-ENAMETOOLONG);
- bh = ufs_find_entry (dir, dentry->d_name.name, dentry->d_name.len, &de);
- inode = NULL;
- if (bh) {
- unsigned long ino = SWAB32(de->d_ino);
- brelse (bh);
- inode = iget(sb, ino);
+ ino = ufs_inode_by_name(dir, dentry);
+ if (ino) {
+ inode = iget(dir->i_sb, ino);
if (!inode)
return ERR_PTR(-EACCES);
}
d_add(dentry, inode);
- UFSD(("EXIT\n"))
- return NULL;
-}
-
-/*
- * ufs_add_entry()
- *
- * adds a file entry to the specified directory, using the same
- * semantics as ufs_find_entry(). It returns NULL if it failed.
- *
- * NOTE!! The inode part of 'de' is left at 0 - which means you
- * may not sleep between calling this and putting something into
- * the entry, as someone else might have used it while you slept.
- */
-static struct buffer_head * ufs_add_entry (struct inode * dir,
- const char * name, int namelen, struct ufs_dir_entry ** res_dir,
- int *err )
-{
- struct super_block * sb;
- struct ufs_sb_private_info * uspi;
- unsigned long offset;
- unsigned fragoff;
- unsigned short rec_len;
- struct buffer_head * bh;
- struct ufs_dir_entry * de, * de1;
- unsigned flags, swab;
-
- UFSD(("ENTER, name %s, namelen %u\n", name, namelen))
-
- *err = -EINVAL;
- *res_dir = NULL;
-
- sb = dir->i_sb;
- flags = sb->u.ufs_sb.s_flags;
- swab = sb->u.ufs_sb.s_swab;
- uspi = sb->u.ufs_sb.s_uspi;
-
- if (!namelen)
- return NULL;
- bh = ufs_bread (dir, 0, 0, err);
- if (!bh)
- return NULL;
- rec_len = UFS_DIR_REC_LEN(namelen);
- offset = 0;
- de = (struct ufs_dir_entry *) bh->b_data;
- *err = -ENOSPC;
- while (1) {
- if ((char *)de >= UFS_SECTOR_SIZE + bh->b_data) {
- fragoff = offset & ~uspi->s_fmask;
- if (fragoff != 0 && fragoff != UFS_SECTOR_SIZE)
- ufs_error (sb, "ufs_add_entry", "internal error"
- " fragoff %u", fragoff);
- if (!fragoff) {
- brelse (bh);
- bh = NULL;
- bh = ufs_bread (dir, offset >> sb->s_blocksize_bits, 1, err);
- }
- if (!bh)
- return NULL;
- if (dir->i_size <= offset) {
- if (dir->i_size == 0) {
- *err = -ENOENT;
- return NULL;
- }
- de = (struct ufs_dir_entry *) (bh->b_data + fragoff);
- de->d_ino = SWAB32(0);
- de->d_reclen = SWAB16(UFS_SECTOR_SIZE);
- ufs_set_de_namlen(de,0);
- dir->i_size = offset + UFS_SECTOR_SIZE;
- mark_inode_dirty(dir);
- } else {
- de = (struct ufs_dir_entry *) bh->b_data;
- }
- }
- if (!ufs_check_dir_entry ("ufs_add_entry", dir, de, bh, offset)) {
- *err = -ENOENT;
- brelse (bh);
- return NULL;
- }
- if (ufs_match (namelen, name, de, flags, swab)) {
- *err = -EEXIST;
- brelse (bh);
- return NULL;
- }
- if ((SWAB32(de->d_ino) == 0 && SWAB16(de->d_reclen) >= rec_len) ||
- (SWAB16(de->d_reclen) >= UFS_DIR_REC_LEN(ufs_get_de_namlen(de)) + rec_len)) {
- offset += SWAB16(de->d_reclen);
- if (SWAB32(de->d_ino)) {
- de1 = (struct ufs_dir_entry *) ((char *) de +
- UFS_DIR_REC_LEN(ufs_get_de_namlen(de)));
- de1->d_reclen = SWAB16(SWAB16(de->d_reclen) -
- UFS_DIR_REC_LEN(ufs_get_de_namlen(de)));
- de->d_reclen = SWAB16(UFS_DIR_REC_LEN(ufs_get_de_namlen(de)));
- de = de1;
- }
- de->d_ino = SWAB32(0);
- ufs_set_de_namlen(de, namelen);
- memcpy (de->d_name, name, namelen + 1);
- /*
- * XXX shouldn't update any times until successful
- * completion of syscall, but too many callers depend
- * on this.
- *
- * XXX similarly, too many callers depend on
- * ufs_new_inode() setting the times, but error
- * recovery deletes the inode, so the worst that can
- * happen is that the times are slightly out of date
- * and/or different from the directory change time.
- */
- dir->i_mtime = dir->i_ctime = CURRENT_TIME;
- mark_inode_dirty(dir);
- dir->i_version = ++event;
- mark_buffer_dirty(bh);
- *res_dir = de;
- *err = 0;
-
- UFSD(("EXIT\n"))
- return bh;
- }
- offset += SWAB16(de->d_reclen);
- de = (struct ufs_dir_entry *) ((char *) de + SWAB16(de->d_reclen));
- }
- brelse (bh);
- UFSD(("EXIT (FAILED)\n"))
return NULL;
}
-/*
- * ufs_delete_entry deletes a directory entry by merging it with the
- * previous entry.
- */
-static int ufs_delete_entry (struct inode * inode, struct ufs_dir_entry * dir,
- struct buffer_head * bh )
-
-{
- struct super_block * sb;
- struct ufs_dir_entry * de, * pde;
- unsigned i;
- unsigned flags, swab;
-
- UFSD(("ENTER\n"))
-
- sb = inode->i_sb;
- flags = sb->u.ufs_sb.s_flags;
- swab = sb->u.ufs_sb.s_swab;
- i = 0;
- pde = NULL;
- de = (struct ufs_dir_entry *) bh->b_data;
-
- UFSD(("ino %u, reclen %u, namlen %u, name %s\n", SWAB32(de->d_ino),
- SWAB16(de->d_reclen), ufs_get_de_namlen(de), de->d_name))
-
- while (i < bh->b_size) {
- if (!ufs_check_dir_entry ("ufs_delete_entry", inode, de, bh, i))
- return -EIO;
- if (de == dir) {
- if (pde)
- pde->d_reclen =
- SWAB16(SWAB16(pde->d_reclen) +
- SWAB16(dir->d_reclen));
- dir->d_ino = SWAB32(0);
- UFSD(("EXIT\n"))
- return 0;
- }
- i += SWAB16(de->d_reclen);
- if (i == UFS_SECTOR_SIZE) pde = NULL;
- else pde = de;
- de = (struct ufs_dir_entry *)
- ((char *) de + SWAB16(de->d_reclen));
- if (i == UFS_SECTOR_SIZE && SWAB16(de->d_reclen) == 0)
- break;
- }
- UFSD(("EXIT\n"))
- return -ENOENT;
-}
-
/*
* By the time this is called, we already have created
* the directory cache entry for the new file, but it
*/
static int ufs_create (struct inode * dir, struct dentry * dentry, int mode)
{
- struct super_block * sb;
- struct inode * inode;
- struct buffer_head * bh;
- struct ufs_dir_entry * de;
- int err = -EIO;
- unsigned flags, swab;
-
- sb = dir->i_sb;
- swab = sb->u.ufs_sb.s_swab;
- flags = sb->u.ufs_sb.s_flags;
- /*
- * N.B. Several error exits in ufs_new_inode don't set err.
- */
- UFSD(("ENTER\n"))
-
- inode = ufs_new_inode (dir, mode, &err);
- if (!inode)
- return err;
- inode->i_op = &ufs_file_inode_operations;
- inode->i_fop = &ufs_file_operations;
- inode->i_mapping->a_ops = &ufs_aops;
- inode->i_mode = mode;
- mark_inode_dirty(inode);
- bh = ufs_add_entry (dir, dentry->d_name.name, dentry->d_name.len, &de, &err);
- if (!bh) {
- inode->i_nlink--;
+ struct inode * inode = ufs_new_inode(dir, mode);
+ int err = PTR_ERR(inode);
+ if (!IS_ERR(inode)) {
+ inode->i_op = &ufs_file_inode_operations;
+ inode->i_fop = &ufs_file_operations;
+ inode->i_mapping->a_ops = &ufs_aops;
mark_inode_dirty(inode);
- iput (inode);
- return err;
- }
- de->d_ino = SWAB32(inode->i_ino);
- ufs_set_de_type (de, inode->i_mode);
- dir->i_version = ++event;
- mark_buffer_dirty(bh);
- if (IS_SYNC(dir)) {
- ll_rw_block (WRITE, 1, &bh);
- wait_on_buffer (bh);
+ err = ufs_add_nondir(dentry, inode);
}
- brelse (bh);
- d_instantiate(dentry, inode);
-
- UFSD(("EXIT\n"))
-
- return 0;
-}
-
-static int ufs_mknod (struct inode * dir, struct dentry *dentry, int mode, int rdev)
-{
- struct super_block * sb;
- struct inode * inode;
- struct buffer_head * bh;
- struct ufs_dir_entry * de;
- int err = -EIO;
- unsigned flags, swab;
-
- sb = dir->i_sb;
- flags = sb->u.ufs_sb.s_flags;
- swab = sb->u.ufs_sb.s_swab;
-
- inode = ufs_new_inode (dir, mode, &err);
- if (!inode)
- goto out;
-
- inode->i_uid = current->fsuid;
- init_special_inode(inode, mode, rdev);
- mark_inode_dirty(inode);
- bh = ufs_add_entry (dir, dentry->d_name.name, dentry->d_name.len, &de, &err);
- if (!bh)
- goto out_no_entry;
- de->d_ino = SWAB32(inode->i_ino);
- ufs_set_de_type (de, inode->i_mode);
- dir->i_version = ++event;
- mark_buffer_dirty(bh);
- if (IS_SYNC(dir)) {
- ll_rw_block (WRITE, 1, &bh);
- wait_on_buffer (bh);
- }
- d_instantiate(dentry, inode);
- brelse(bh);
- err = 0;
-out:
return err;
-
-out_no_entry:
- inode->i_nlink--;
- mark_inode_dirty(inode);
- iput(inode);
- goto out;
}
-static int ufs_mkdir(struct inode * dir, struct dentry * dentry, int mode)
+static int ufs_mknod (struct inode * dir, struct dentry *dentry, int mode, int rdev)
{
- struct super_block * sb;
- struct inode * inode;
- struct buffer_head * bh, * dir_block;
- struct ufs_dir_entry * de;
- int err;
- unsigned flags, swab;
-
- sb = dir->i_sb;
- flags = sb->u.ufs_sb.s_flags;
- swab = sb->u.ufs_sb.s_swab;
-
- err = -EMLINK;
- if (dir->i_nlink >= UFS_LINK_MAX)
- goto out;
- err = -EIO;
- inode = ufs_new_inode (dir, S_IFDIR, &err);
- if (!inode)
- goto out;
-
- inode->i_op = &ufs_dir_inode_operations;
- inode->i_fop = &ufs_dir_operations;
- inode->i_size = UFS_SECTOR_SIZE;
- dir_block = ufs_bread (inode, 0, 1, &err);
- if (!dir_block) {
- inode->i_nlink--; /* is this nlink == 0? */
+ struct inode * inode = ufs_new_inode(dir, mode);
+ int err = PTR_ERR(inode);
+ if (!IS_ERR(inode)) {
+ init_special_inode(inode, mode, rdev);
mark_inode_dirty(inode);
- iput (inode);
- return err;
- }
- inode->i_blocks = sb->s_blocksize / UFS_SECTOR_SIZE;
- de = (struct ufs_dir_entry *) dir_block->b_data;
- de->d_ino = SWAB32(inode->i_ino);
- ufs_set_de_type (de, inode->i_mode);
- ufs_set_de_namlen(de,1);
- de->d_reclen = SWAB16(UFS_DIR_REC_LEN(1));
- strcpy (de->d_name, ".");
- de = (struct ufs_dir_entry *) ((char *) de + SWAB16(de->d_reclen));
- de->d_ino = SWAB32(dir->i_ino);
- ufs_set_de_type (de, dir->i_mode);
- de->d_reclen = SWAB16(UFS_SECTOR_SIZE - UFS_DIR_REC_LEN(1));
- ufs_set_de_namlen(de,2);
- strcpy (de->d_name, "..");
- inode->i_nlink = 2;
- mark_buffer_dirty(dir_block);
- brelse (dir_block);
- inode->i_mode = S_IFDIR | mode;
- if (dir->i_mode & S_ISGID)
- inode->i_mode |= S_ISGID;
- mark_inode_dirty(inode);
- bh = ufs_add_entry (dir, dentry->d_name.name, dentry->d_name.len, &de, &err);
- if (!bh)
- goto out_no_entry;
- de->d_ino = SWAB32(inode->i_ino);
- ufs_set_de_type (de, inode->i_mode);
- dir->i_version = ++event;
- mark_buffer_dirty(bh);
- if (IS_SYNC(dir)) {
- ll_rw_block (WRITE, 1, &bh);
- wait_on_buffer (bh);
+ err = ufs_add_nondir(dentry, inode);
}
- dir->i_nlink++;
- mark_inode_dirty(dir);
- d_instantiate(dentry, inode);
- brelse (bh);
- err = 0;
-out:
return err;
-
-out_no_entry:
- inode->i_nlink = 0;
- mark_inode_dirty(inode);
- iput (inode);
- goto out;
}
-/*
- * routine to check that the specified directory is empty (for rmdir)
- */
-static int ufs_empty_dir (struct inode * inode)
-{
- struct super_block * sb;
- unsigned long offset;
- struct buffer_head * bh;
- struct ufs_dir_entry * de, * de1;
- int err;
- unsigned swab;
-
- sb = inode->i_sb;
- swab = sb->u.ufs_sb.s_swab;
-
- if (inode->i_size < UFS_DIR_REC_LEN(1) + UFS_DIR_REC_LEN(2) ||
- !(bh = ufs_bread (inode, 0, 0, &err))) {
- ufs_warning (inode->i_sb, "empty_dir",
- "bad directory (dir #%lu) - no data block",
- inode->i_ino);
- return 1;
- }
- de = (struct ufs_dir_entry *) bh->b_data;
- de1 = (struct ufs_dir_entry *) ((char *) de + SWAB16(de->d_reclen));
- if (SWAB32(de->d_ino) != inode->i_ino || !SWAB32(de1->d_ino) ||
- strcmp (".", de->d_name) || strcmp ("..", de1->d_name)) {
- ufs_warning (inode->i_sb, "empty_dir",
- "bad directory (dir #%lu) - no `.' or `..'",
- inode->i_ino);
- return 1;
- }
- offset = SWAB16(de->d_reclen) + SWAB16(de1->d_reclen);
- de = (struct ufs_dir_entry *) ((char *) de1 + SWAB16(de1->d_reclen));
- while (offset < inode->i_size ) {
- if (!bh || (void *) de >= (void *) (bh->b_data + sb->s_blocksize)) {
- brelse (bh);
- bh = ufs_bread (inode, offset >> sb->s_blocksize_bits, 1, &err);
- if (!bh) {
- ufs_error (sb, "empty_dir",
- "directory #%lu contains a hole at offset %lu",
- inode->i_ino, offset);
- offset += sb->s_blocksize;
- continue;
- }
- de = (struct ufs_dir_entry *) bh->b_data;
- }
- if (!ufs_check_dir_entry ("empty_dir", inode, de, bh, offset)) {
- brelse (bh);
- return 1;
- }
- if (SWAB32(de->d_ino)) {
- brelse (bh);
- return 0;
- }
- offset += SWAB16(de->d_reclen);
- de = (struct ufs_dir_entry *) ((char *) de + SWAB16(de->d_reclen));
- }
- brelse (bh);
- return 1;
-}
-
-static int ufs_rmdir (struct inode * dir, struct dentry *dentry)
-{
- struct super_block *sb;
- int retval;
- struct inode * inode;
- struct buffer_head * bh;
- struct ufs_dir_entry * de;
- unsigned swab;
-
- sb = dir->i_sb;
- swab = sb->u.ufs_sb.s_swab;
-
- UFSD(("ENTER\n"))
-
- retval = -ENOENT;
- bh = ufs_find_entry (dir, dentry->d_name.name, dentry->d_name.len, &de);
- if (!bh)
- goto end_rmdir;
-
- inode = dentry->d_inode;
- DQUOT_INIT(inode);
-
- retval = -EIO;
- if (SWAB32(de->d_ino) != inode->i_ino)
- goto end_rmdir;
-
- retval = -ENOTEMPTY;
- if (!ufs_empty_dir (inode))
- goto end_rmdir;
-
- retval = ufs_delete_entry (dir, de, bh);
- dir->i_version = ++event;
- if (retval)
- goto end_rmdir;
- mark_buffer_dirty(bh);
- if (IS_SYNC(dir)) {
- ll_rw_block (WRITE, 1, &bh);
- wait_on_buffer (bh);
- }
- if (inode->i_nlink != 2)
- ufs_warning (inode->i_sb, "ufs_rmdir",
- "empty directory has nlink!=2 (%d)",
- inode->i_nlink);
- inode->i_version = ++event;
- inode->i_nlink = 0;
- inode->i_size = 0;
- mark_inode_dirty(inode);
- dir->i_nlink--;
- inode->i_ctime = dir->i_ctime = dir->i_mtime = CURRENT_TIME;
- mark_inode_dirty(dir);
-
-end_rmdir:
- brelse (bh);
- UFSD(("EXIT\n"))
-
- return retval;
-}
-
-static int ufs_unlink(struct inode * dir, struct dentry *dentry)
-{
- struct super_block * sb;
- int retval;
- struct inode * inode;
- struct buffer_head * bh;
- struct ufs_dir_entry * de;
- unsigned flags, swab;
-
- sb = dir->i_sb;
- flags = sb->u.ufs_sb.s_flags;
- swab = sb->u.ufs_sb.s_swab;
-
- retval = -ENOENT;
- bh = ufs_find_entry (dir, dentry->d_name.name, dentry->d_name.len, &de);
- UFSD(("de: ino %u, reclen %u, namelen %u, name %s\n", SWAB32(de->d_ino),
- SWAB16(de->d_reclen), ufs_get_de_namlen(de), de->d_name))
- if (!bh)
- goto end_unlink;
-
- inode = dentry->d_inode;
- DQUOT_INIT(inode);
-
- retval = -EIO;
- if (SWAB32(de->d_ino) != inode->i_ino)
- goto end_unlink;
-
- if (!inode->i_nlink) {
- ufs_warning (inode->i_sb, "ufs_unlink",
- "Deleting nonexistent file (%lu), %d",
- inode->i_ino, inode->i_nlink);
- inode->i_nlink = 1;
- }
- retval = ufs_delete_entry (dir, de, bh);
- if (retval)
- goto end_unlink;
- dir->i_version = ++event;
- mark_buffer_dirty(bh);
- if (IS_SYNC(dir)) {
- ll_rw_block (WRITE, 1, &bh);
- wait_on_buffer (bh);
- }
- dir->i_ctime = dir->i_mtime = CURRENT_TIME;
- mark_inode_dirty(dir);
- inode->i_nlink--;
- mark_inode_dirty(inode);
- inode->i_ctime = dir->i_ctime;
- retval = 0;
-
-end_unlink:
- brelse (bh);
- return retval;
-}
-
-
-/*
- * Create symbolic link. We use only slow symlinks at this time.
- */
static int ufs_symlink (struct inode * dir, struct dentry * dentry,
const char * symname)
{
struct super_block * sb = dir->i_sb;
- struct ufs_dir_entry * de;
+ int err = -ENAMETOOLONG;
+ unsigned l = strlen(symname)+1;
struct inode * inode;
- struct buffer_head * bh = NULL;
- unsigned l;
- int err;
- unsigned swab = sb->u.ufs_sb.s_swab;
-
- UFSD(("ENTER\n"))
-
- err = -ENAMETOOLONG;
- l = strlen(symname)+1;
if (l > sb->s_blocksize)
goto out;
- err = -EIO;
-
- if (!(inode = ufs_new_inode (dir, S_IFLNK, &err))) {
- return err;
- }
- inode->i_mode = S_IFLNK | S_IRWXUGO;
+ inode = ufs_new_inode(dir, S_IFLNK | S_IRWXUGO);
+ err = PTR_ERR(inode);
+ if (IS_ERR(inode))
+ goto out;
if (l > sb->u.ufs_sb.s_uspi->s_maxsymlinklen) {
/* slow symlink */
inode->i_mapping->a_ops = &ufs_aops;
err = block_symlink(inode, symname, l);
if (err)
- goto out_no_entry;
+ goto out_fail;
} else {
/* fast symlink */
inode->i_op = &ufs_fast_symlink_inode_operations;
}
mark_inode_dirty(inode);
- bh = ufs_add_entry (dir, dentry->d_name.name, dentry->d_name.len, &de, &err);
- if (!bh)
- goto out_no_entry;
- de->d_ino = SWAB32(inode->i_ino);
- dir->i_version = ++event;
- mark_buffer_dirty(bh);
- if (IS_SYNC(dir)) {
- ll_rw_block (WRITE, 1, &bh);
- wait_on_buffer (bh);
- }
- brelse (bh);
- d_instantiate(dentry, inode);
- err = 0;
+ err = ufs_add_nondir(dentry, inode);
out:
return err;
-out_no_entry:
- inode->i_nlink--;
- mark_inode_dirty(inode);
- iput (inode);
+out_fail:
+ ufs_dec_count(inode);
+ iput(inode);
goto out;
}
struct dentry *dentry)
{
struct inode *inode = old_dentry->d_inode;
- struct super_block * sb = inode->i_sb;
- struct ufs_dir_entry * de;
- struct buffer_head * bh;
- int err;
- unsigned swab = sb->u.ufs_sb.s_swab;
-
+
if (S_ISDIR(inode->i_mode))
return -EPERM;
if (inode->i_nlink >= UFS_LINK_MAX)
return -EMLINK;
- bh = ufs_add_entry (dir, dentry->d_name.name, dentry->d_name.len, &de, &err);
- if (!bh)
- return err;
-
- de->d_ino = SWAB32(inode->i_ino);
- dir->i_version = ++event;
- mark_buffer_dirty(bh);
- if (IS_SYNC(dir)) {
- ll_rw_block (WRITE, 1, &bh);
- wait_on_buffer (bh);
- }
- brelse (bh);
- inode->i_nlink++;
inode->i_ctime = CURRENT_TIME;
- mark_inode_dirty(inode);
+ ufs_inc_count(inode);
atomic_inc(&inode->i_count);
+
+ return ufs_add_nondir(dentry, inode);
+}
+
+static int ufs_mkdir(struct inode * dir, struct dentry * dentry, int mode)
+{
+ struct inode * inode;
+ int err = -EMLINK;
+
+ if (dir->i_nlink >= UFS_LINK_MAX)
+ goto out;
+
+ ufs_inc_count(dir);
+
+ inode = ufs_new_inode(dir, S_IFDIR|mode);
+ err = PTR_ERR(inode);
+ if (IS_ERR(inode))
+ goto out_dir;
+
+ inode->i_op = &ufs_dir_inode_operations;
+ inode->i_fop = &ufs_dir_operations;
+
+ ufs_inc_count(inode);
+
+ err = ufs_make_empty(inode, dir);
+ if (err)
+ goto out_fail;
+
+ err = ufs_add_link(dentry, inode);
+ if (err)
+ goto out_fail;
+
d_instantiate(dentry, inode);
- return 0;
+out:
+ return err;
+
+out_fail:
+ ufs_dec_count(inode);
+ ufs_dec_count(inode);
+ iput (inode);
+out_dir:
+ ufs_dec_count(dir);
+ goto out;
}
+static int ufs_unlink(struct inode * dir, struct dentry *dentry)
+{
+ struct inode * inode = dentry->d_inode;
+ struct buffer_head * bh;
+ struct ufs_dir_entry * de;
+ int err = -ENOENT;
+
+ de = ufs_find_entry (dentry, &bh);
+ if (!de)
+ goto out;
+
+ err = ufs_delete_entry (dir, de, bh);
+ if (err)
+ goto out;
+
+ inode->i_ctime = dir->i_ctime;
+ ufs_dec_count(inode);
+ err = 0;
+out:
+ return err;
+}
+
+static int ufs_rmdir (struct inode * dir, struct dentry *dentry)
+{
+ struct inode * inode = dentry->d_inode;
+	int err = -ENOTEMPTY;
+
+ if (ufs_empty_dir (inode)) {
+ err = ufs_unlink(dir, dentry);
+ if (!err) {
+ inode->i_size = 0;
+ ufs_dec_count(inode);
+ ufs_dec_count(dir);
+ }
+ }
+ return err;
+}
-#define PARENT_INO(buffer) \
- ((struct ufs_dir_entry *) ((char *) buffer + \
- SWAB16(((struct ufs_dir_entry *) buffer)->d_reclen)))->d_ino
-/*
- * Anybody can rename anything with this: the permission checks are left to the
- * higher-level routines.
- */
static int ufs_rename (struct inode * old_dir, struct dentry * old_dentry,
struct inode * new_dir, struct dentry * new_dentry )
{
- struct super_block * sb;
- struct inode * old_inode, * new_inode;
- struct buffer_head * old_bh, * new_bh, * dir_bh;
- struct ufs_dir_entry * old_de, * new_de;
- int retval;
- unsigned flags, swab;
-
- sb = old_dir->i_sb;
- flags = sb->u.ufs_sb.s_flags;
- swab = sb->u.ufs_sb.s_swab;
+ struct inode *old_inode = old_dentry->d_inode;
+ struct inode *new_inode = new_dentry->d_inode;
+ struct buffer_head *dir_bh = NULL;
+ struct ufs_dir_entry *dir_de = NULL;
+ struct buffer_head *old_bh;
+ struct ufs_dir_entry *old_de;
+ int err = -ENOENT;
+
+ old_de = ufs_find_entry (old_dentry, &old_bh);
+ if (!old_de)
+ goto out;
- UFSD(("ENTER\n"))
-
- old_inode = new_inode = NULL;
- old_bh = new_bh = dir_bh = NULL;
- new_de = NULL;
-
- old_bh = ufs_find_entry (old_dir, old_dentry->d_name.name, old_dentry->d_name.len, &old_de);
- /*
- * Check for inode number is _not_ due to possible IO errors.
- * We might rmdir the source, keep it as pwd of some process
- * and merrily kill the link to whatever was created under the
- * same name. Goodbye sticky bit ;-<
- */
- retval = -ENOENT;
- old_inode = old_dentry->d_inode;
- if (!old_bh || SWAB32(old_de->d_ino) != old_inode->i_ino)
- goto end_rename;
-
- new_inode = new_dentry->d_inode;
- new_bh = ufs_find_entry (new_dir, new_dentry->d_name.name, new_dentry->d_name.len, &new_de);
- if (new_bh) {
- if (!new_inode) {
- brelse (new_bh);
- new_bh = NULL;
- } else {
- DQUOT_INIT(new_inode);
- }
- }
if (S_ISDIR(old_inode->i_mode)) {
- if (new_inode) {
- retval = -ENOTEMPTY;
- if (!ufs_empty_dir (new_inode))
- goto end_rename;
- }
-
- retval = -EIO;
- dir_bh = ufs_bread (old_inode, 0, 0, &retval);
- if (!dir_bh)
- goto end_rename;
- if (SWAB32(PARENT_INO(dir_bh->b_data)) != old_dir->i_ino)
- goto end_rename;
- retval = -EMLINK;
- if (!new_inode && new_dir->i_nlink >= UFS_LINK_MAX)
- goto end_rename;
+ err = -EIO;
+ dir_de = ufs_dotdot(old_inode, &dir_bh);
+ if (!dir_de)
+ goto out_old;
}
- if (!new_bh)
- new_bh = ufs_add_entry (new_dir, new_dentry->d_name.name, new_dentry->d_name.len, &new_de,
- &retval);
- if (!new_bh)
- goto end_rename;
- new_dir->i_version = ++event;
-
- /*
- * ok, that's it
- */
- new_de->d_ino = SWAB32(old_inode->i_ino);
- ufs_delete_entry (old_dir, old_de, old_bh);
-
- old_dir->i_version = ++event;
if (new_inode) {
- new_inode->i_nlink--;
+ struct buffer_head *new_bh;
+ struct ufs_dir_entry *new_de;
+
+ err = -ENOTEMPTY;
+ if (dir_de && !ufs_empty_dir (new_inode))
+ goto out_dir;
+ err = -ENOENT;
+ new_de = ufs_find_entry (new_dentry, &new_bh);
+ if (!new_de)
+ goto out_dir;
+ ufs_inc_count(old_inode);
+ ufs_set_link(new_dir, new_de, new_bh, old_inode);
new_inode->i_ctime = CURRENT_TIME;
- mark_inode_dirty(new_inode);
- }
- old_dir->i_ctime = old_dir->i_mtime = CURRENT_TIME;
- mark_inode_dirty(old_dir);
- if (dir_bh) {
- PARENT_INO(dir_bh->b_data) = SWAB32(new_dir->i_ino);
- mark_buffer_dirty(dir_bh);
- old_dir->i_nlink--;
- mark_inode_dirty(old_dir);
- if (new_inode) {
+ if (dir_de)
new_inode->i_nlink--;
- mark_inode_dirty(new_inode);
- } else {
- new_dir->i_nlink++;
- mark_inode_dirty(new_dir);
+ ufs_dec_count(new_inode);
+ } else {
+ if (dir_de) {
+ err = -EMLINK;
+ if (new_dir->i_nlink >= UFS_LINK_MAX)
+ goto out_dir;
}
+ ufs_inc_count(old_inode);
+ err = ufs_add_link(new_dentry, old_inode);
+ if (err) {
+ ufs_dec_count(old_inode);
+ goto out_dir;
+ }
+ if (dir_de)
+ ufs_inc_count(new_dir);
}
- mark_buffer_dirty(old_bh);
- if (IS_SYNC(old_dir)) {
- ll_rw_block (WRITE, 1, &old_bh);
- wait_on_buffer (old_bh);
- }
-
- mark_buffer_dirty(new_bh);
- if (IS_SYNC(new_dir)) {
- ll_rw_block (WRITE, 1, &new_bh);
- wait_on_buffer (new_bh);
+
+ ufs_delete_entry (old_dir, old_de, old_bh);
+
+ ufs_dec_count(old_inode);
+
+ if (dir_de) {
+ ufs_set_link(old_inode, dir_de, dir_bh, new_dir);
+ ufs_dec_count(old_dir);
}
+ return 0;
- retval = 0;
-end_rename:
- brelse (dir_bh);
+out_dir:
+ if (dir_de)
+ brelse(dir_bh);
+out_old:
brelse (old_bh);
- brelse (new_bh);
-
- UFSD(("EXIT\n"))
-
- return retval;
+out:
+ return err;
}
struct inode_operations ufs_dir_inode_operations = {
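The rewritten namei paths above lean on the kernel's ERR_PTR()/IS_ERR()/PTR_ERR() convention: ufs_new_inode() now returns either a valid inode or an error encoded in the pointer, instead of filling an int * argument. The fragment below is an illustrative sketch only, not part of the patch; example_create is a made-up name that simply mirrors what ufs_create() does, and the ERR_PTR helpers are assumed available via <linux/fs.h> in this kernel generation.

/* Sketch: consuming the ERR_PTR-style return that ufs_new_inode() now uses. */
static int example_create(struct inode *dir, struct dentry *dentry, int mode)
{
	struct inode *inode = ufs_new_inode(dir, mode);

	if (IS_ERR(inode))
		return PTR_ERR(inode);		/* -ENOSPC, -EIO, ... */

	inode->i_op = &ufs_file_inode_operations;
	inode->i_fop = &ufs_file_operations;
	mark_inode_dirty(inode);
	return ufs_add_nondir(dentry, inode);	/* adds the link or cleans up */
}

Either the directory entry is added and the dentry instantiated, or ufs_add_nondir() drops the link count and releases the half-built inode, so every caller shares one error path.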
/*
- * BK Id: SCCS/s.pci.h 1.12 05/21/01 01:31:30 cort
+ * BK Id: SCCS/s.pci.h 1.16 10/15/01 22:51:33 paulus
*/
#ifndef __PPC_PCI_H
#define __PPC_PCI_H
extern unsigned long pci_phys_to_bus(unsigned long pa, int busnr);
extern unsigned long pci_bus_to_phys(unsigned int ba, int busnr);
-/* Dynamic DMA Mapping stuff
+/* Dynamic DMA Mapping stuff, stolen from i386
* ++ajoshi
*/
struct pci_dev;
+/* The PCI address space does equal the physical memory
+ * address space. The networking and block device layers use
+ * this boolean for bounce buffer decisions.
+ */
+#define PCI_DMA_BUS_IS_PHYS (1)
+
+/* Allocate and map kernel buffer using consistent mode DMA for a device.
+ * hwdev should be valid struct pci_dev pointer for PCI devices,
+ * NULL for PCI-like buses (ISA, EISA).
+ * Returns non-NULL cpu-view pointer to the buffer if successful and
+ * sets *dma_addrp to the pci side dma address as well, else *dma_addrp
+ * is undefined.
+ */
extern void *pci_alloc_consistent(struct pci_dev *hwdev, size_t size,
dma_addr_t *dma_handle);
+
+/* Free and unmap a consistent DMA buffer.
+ * cpu_addr is what was returned from pci_alloc_consistent,
+ * size must be the same as what was passed into pci_alloc_consistent,
+ * and likewise dma_addr must be the same as what *dma_handle was set to.
+ *
+ * References to the memory and mappings associated with cpu_addr/dma_addr
+ * past this call are illegal.
+ */
extern void pci_free_consistent(struct pci_dev *hwdev, size_t size,
void *vaddr, dma_addr_t dma_handle);
-extern inline dma_addr_t pci_map_single(struct pci_dev *hwdev, void *ptr,
+
+/* Map a single buffer of the indicated size for DMA in streaming mode.
+ * The 32-bit bus address to use is returned.
+ *
+ * Once the device is given the dma address, the device owns this memory
+ * until either pci_unmap_single or pci_dma_sync_single is performed.
+ */
+static inline dma_addr_t pci_map_single(struct pci_dev *hwdev, void *ptr,
size_t size, int direction)
{
if (direction == PCI_DMA_NONE)
BUG();
return virt_to_bus(ptr);
}
-extern inline void pci_unmap_single(struct pci_dev *hwdev, dma_addr_t dma_addr,
+
+static inline void pci_unmap_single(struct pci_dev *hwdev, dma_addr_t dma_addr,
size_t size, int direction)
{
if (direction == PCI_DMA_NONE)
BUG();
/* nothing to do */
}
-extern inline int pci_map_sg(struct pci_dev *hwdev, struct scatterlist *sg,
+
+
+/*
+ * pci_{map,unmap}_page maps a kernel page to a dma_addr_t. Identical
+ * to pci_map_single, but takes a struct page instead of a virtual address
+ */
+static inline dma_addr_t pci_map_page(struct pci_dev *hwdev, struct page *page,
+ unsigned long offset, size_t size, int direction)
+{
+ if (direction == PCI_DMA_NONE)
+ BUG();
+ return (page - mem_map) * PAGE_SIZE + PCI_DRAM_OFFSET + offset;
+}
+
+static inline void pci_unmap_page(struct pci_dev *hwdev, dma_addr_t dma_address,
+ size_t size, int direction)
+{
+ if (direction == PCI_DMA_NONE)
+ BUG();
+ /* Nothing to do */
+}
+
+/* Map a set of buffers described by scatterlist in streaming
+ * mode for DMA. This is the scatter-gather version of the
+ * above pci_map_single interface. Here the scatter gather list
+ * elements are each tagged with the appropriate dma address
+ * and length. They are obtained via sg_dma_{address,length}(SG).
+ *
+ * NOTE: An implementation may be able to use a smaller number of
+ * DMA address/length pairs than there are SG table elements.
+ * (for example via virtual mapping capabilities)
+ * The routine returns the number of addr/length pairs actually
+ * used, at most nents.
+ *
+ * Device ownership issues as mentioned above for pci_map_single are
+ * the same here.
+ */
+static inline int pci_map_sg(struct pci_dev *hwdev, struct scatterlist *sg,
int nents, int direction)
{
+ int i;
+
if (direction == PCI_DMA_NONE)
BUG();
+
+ /*
+ * temporary 2.4 hack
+ */
+ for (i = 0; i < nents; i++) {
+ if (sg[i].address && sg[i].page)
+ BUG();
+ else if (!sg[i].address && !sg[i].page)
+ BUG();
+
+ if (sg[i].address)
+ sg[i].dma_address = virt_to_bus(sg[i].address);
+ else
+ sg[i].dma_address = page_to_bus(sg[i].page) + sg[i].offset;
+ }
+
return nents;
}
-extern inline void pci_unmap_sg(struct pci_dev *hwdev, struct scatterlist *sg,
+
+/* Unmap a set of streaming mode DMA translations.
+ * Again, cpu read rules concerning calls here are the same as for
+ * pci_unmap_single() above.
+ */
+static inline void pci_unmap_sg(struct pci_dev *hwdev, struct scatterlist *sg,
int nents, int direction)
{
if (direction == PCI_DMA_NONE)
BUG();
/* nothing to do */
}
-extern inline void pci_dma_sync_single(struct pci_dev *hwdev,
+
+/* Make physical memory consistent for a single
+ * streaming mode DMA translation after a transfer.
+ *
+ * If you perform a pci_map_single() but wish to interrogate the
+ * buffer using the cpu, yet do not wish to teardown the PCI dma
+ * mapping, you must call this function before doing so. At the
+ * next point you give the PCI dma address back to the card, the
+ * device again owns the buffer.
+ */
+static inline void pci_dma_sync_single(struct pci_dev *hwdev,
dma_addr_t dma_handle,
size_t size, int direction)
{
/* nothing to do */
}
-extern inline void pci_dma_sync_sg(struct pci_dev *hwdev,
+/* Make physical memory consistent for a set of streaming
+ * mode DMA translations after a transfer.
+ *
+ * The same as pci_dma_sync_single but for a scatter-gather list,
+ * same rules and usage.
+ */
+static inline void pci_dma_sync_sg(struct pci_dev *hwdev,
struct scatterlist *sg,
int nelems, int direction)
{
* only drive the low 24-bits during PCI bus mastering, then
* you would pass 0x00ffffff as the mask to this function.
*/
-extern inline int pci_dma_supported(struct pci_dev *hwdev, u64 mask)
+static inline int pci_dma_supported(struct pci_dev *hwdev, u64 mask)
{
return 1;
}
+/*
+ * At present there are very few 32-bit PPC machines that can have
+ * memory above the 4GB point, and we don't support that.
+ */
+#define pci_dac_dma_supported(pci_dev, mask) (0)
+
+static __inline__ dma64_addr_t
+pci_dac_page_to_dma(struct pci_dev *pdev, struct page *page, unsigned long offset, int direction)
+{
+ return (dma64_addr_t) page_to_bus(page) + offset;
+}
+
+static __inline__ struct page *
+pci_dac_dma_to_page(struct pci_dev *pdev, dma64_addr_t dma_addr)
+{
+ return mem_map + (unsigned long)(dma_addr >> PAGE_SHIFT);
+}
+
+static __inline__ unsigned long
+pci_dac_dma_to_offset(struct pci_dev *pdev, dma64_addr_t dma_addr)
+{
+ return (dma_addr & ~PAGE_MASK);
+}
+
+static __inline__ void
+pci_dac_dma_sync_single(struct pci_dev *pdev, dma64_addr_t dma_addr, size_t len, int direction)
+{
+ /* Nothing to do. */
+}
+
+/* These macros should be used after a pci_map_sg call has been done
+ * to get bus addresses of each of the SG entries and their lengths.
+ * You should only work with the number of sg entries pci_map_sg
+ * returns.
+ */
+#define sg_dma_address(sg) ((sg)->dma_address)
+#define sg_dma_len(sg) ((sg)->length)
+
/* Return the index of the PCI controller for device PDEV. */
extern int pci_controller_num(struct pci_dev *pdev);
/* Tell drivers/pci/proc.c that we have pci_mmap_page_range() */
#define HAVE_PCI_MMAP 1
-#define sg_dma_address(sg) (virt_to_bus((sg)->address))
-#define sg_dma_len(sg) ((sg)->length)
-
#endif /* __KERNEL__ */
#endif /* __PPC_PCI_H */
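The comments added above describe the consistent and streaming PCI DMA interfaces. The following is a hedged usage sketch for a hypothetical driver (not part of the patch; example_rx, pdev, buf and len are made-up names), showing the map / sync / unmap sequence those comments document.

/* Sketch: receive path using the streaming DMA interface documented above. */
static int example_rx(struct pci_dev *pdev, void *buf, size_t len)
{
	dma_addr_t bus = pci_map_single(pdev, buf, len, PCI_DMA_FROMDEVICE);

	/* ... program 'bus' into the device and wait for the transfer ... */

	/* make the data visible to the CPU without dropping the mapping */
	pci_dma_sync_single(pdev, bus, len, PCI_DMA_FROMDEVICE);
	/* ... inspect buf ... */

	pci_unmap_single(pdev, bus, len, PCI_DMA_FROMDEVICE);
	return 0;
}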
/*
- * BK Id: SCCS/s.scatterlist.h 1.5 05/17/01 18:14:25 cort
+ * BK Id: SCCS/s.scatterlist.h 1.9 10/15/01 22:51:33 paulus
*/
#ifdef __KERNEL__
#ifndef _PPC_SCATTERLIST_H
#include <asm/dma.h>
struct scatterlist {
- char * address; /* Location data is to be transferred to */
- unsigned int length;
+ char *address; /* Location data is to be transferred to,
+ * or NULL for highmem page */
+ struct page * page; /* Location for highmem page, if any */
+ unsigned int offset; /* for highmem, page offset */
+
+ dma_addr_t dma_address; /* phys/bus dma address */
+ unsigned int length; /* length */
};
+/*
+ * These macros should be used after a pci_map_sg call has been done
+ * to get bus addresses of each of the SG entries and their lengths.
+ * You should only work with the number of sg entries pci_map_sg
+ * returns, or alternatively stop on the first sg_dma_len(sg) which
+ * is 0.
+ */
+#define sg_dma_address(sg) ((sg)->dma_address)
+#define sg_dma_len(sg) ((sg)->length)
#endif /* !(_PPC_SCATTERLIST_H) */
#endif /* __KERNEL__ */
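With the new page/offset/dma_address fields, a scatterlist element can describe either a lowmem virtual address or a highmem page, and pci_map_sg() above fills dma_address from whichever is set. A small sketch of building and walking such a list (illustrative only, not part of the patch; example_sg and the 512-byte lengths are arbitrary):

#include <linux/pci.h>
#include <linux/string.h>
#include <linux/kernel.h>

/* Sketch: one lowmem buffer plus one highmem page in a scatterlist. */
static void example_sg(struct pci_dev *pdev, void *vbuf, struct page *pg)
{
	struct scatterlist sg[2];
	int i, n;

	memset(sg, 0, sizeof(sg));
	sg[0].address = vbuf;		/* lowmem: virtual address */
	sg[0].length  = 512;
	sg[1].page    = pg;		/* highmem: page + offset */
	sg[1].offset  = 0;
	sg[1].length  = 512;

	n = pci_map_sg(pdev, sg, 2, PCI_DMA_TODEVICE);
	for (i = 0; i < n; i++)
		printk("entry %d: bus %08lx len %u\n", i,
		       (unsigned long) sg_dma_address(&sg[i]),
		       sg_dma_len(&sg[i]));

	pci_unmap_sg(pdev, sg, 2, PCI_DMA_TODEVICE);
}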
/*
- * BK Id: SCCS/s.types.h 1.8 07/07/01 13:37:26 paulus
+ * BK Id: SCCS/s.types.h 1.10 10/15/01 22:51:33 paulus
*/
#ifndef _PPC_TYPES_H
#define _PPC_TYPES_H
/* DMA addresses are 32-bits wide */
typedef u32 dma_addr_t;
+typedef u64 dma64_addr_t;
#endif /* __KERNEL__ */
-/* $Id: pgalloc.h,v 1.13 2001/07/17 16:17:33 anton Exp $ */
+/* $Id: pgalloc.h,v 1.15 2001/10/18 09:06:37 davem Exp $ */
#ifndef _SPARC_PGALLOC_H
#define _SPARC_PGALLOC_H
-/* $Id: unistd.h,v 1.71 2001/10/09 10:54:39 davem Exp $ */
+/* $Id: unistd.h,v 1.72 2001/10/18 08:27:05 davem Exp $ */
#ifndef _SPARC_UNISTD_H
#define _SPARC_UNISTD_H
#define __NR_oldlstat 202 /* Linux Specific */
#define __NR_uselib 203 /* Linux Specific */
#define __NR_readdir 204 /* Linux Specific */
-/* #define __NR_ioperm 205 Linux Specific - i386 specific, unused */
+#define __NR_readahead 205 /* Linux Specific */
#define __NR_socketcall 206 /* Linux Specific */
#define __NR_syslog 207 /* Linux Specific */
/* #define __NR_olduname 208 Linux Specific */
-/* $Id: pgalloc.h,v 1.23 2001/09/25 20:21:48 kanoj Exp $ */
+/* $Id: pgalloc.h,v 1.26 2001/10/18 09:06:37 davem Exp $ */
#ifndef _SPARC64_PGALLOC_H
#define _SPARC64_PGALLOC_H
extern void __flush_dcache_page(void *addr, int flush_icache);
extern void __flush_icache_page(unsigned long);
-#if (L1DCACHE_SIZE > PAGE_SIZE) /* is there D$ aliasing problem */
-#define flush_dcache_page(page) \
-do { if ((page)->mapping && \
- !((page)->mapping->i_mmap) && \
- !((page)->mapping->i_mmap_shared)) \
- set_bit(PG_dcache_dirty, &(page)->flags); \
- else \
- __flush_dcache_page((page)->virtual, \
- ((tlb_type == spitfire) && \
- (page)->mapping != NULL)); \
-} while(0)
-#else /* L1DCACHE_SIZE > PAGE_SIZE */
-#define flush_dcache_page(page) \
-do { if ((page)->mapping && \
- !((page)->mapping->i_mmap) && \
- !((page)->mapping->i_mmap_shared)) \
- set_bit(PG_dcache_dirty, &(page)->flags); \
- else \
- if ((tlb_type == spitfire) && \
- (page)->mapping != NULL) \
- __flush_icache_page(__get_phys((unsigned long)((page)->virtual))); \
-} while(0)
-#endif /* L1DCACHE_SIZE > PAGE_SIZE */
+extern void flush_dcache_page_impl(struct page *page);
+#ifdef CONFIG_SMP
+extern void smp_flush_dcache_page_impl(struct page *page);
+#else
+#define smp_flush_dcache_page_impl flush_dcache_page_impl
+#endif
+
+extern void flush_dcache_page(struct page *page);
extern void __flush_dcache_range(unsigned long start, unsigned long end);
-/* $Id: pgtable.h,v 1.146 2001/09/11 02:20:23 kanoj Exp $
+/* $Id: pgtable.h,v 1.147 2001/10/17 18:26:58 davem Exp $
* pgtable.h: SpitFire page table operations.
*
* Copyright 1996,1997 David S. Miller (davem@caip.rutgers.edu)
#define PG_dcache_dirty PG_arch_1
+#define dcache_dirty_cpu(page) \
+ (((page)->flags >> 24) & (NR_CPUS - 1UL))
+
+#define set_dcache_dirty(PAGE) \
+do { unsigned long mask = smp_processor_id(); \
+ unsigned long non_cpu_bits = (1UL << 24UL) - 1UL; \
+ mask = (mask << 24) | (1UL << PG_dcache_dirty); \
+ __asm__ __volatile__("1:\n\t" \
+ "ldx [%2], %%g7\n\t" \
+ "and %%g7, %1, %%g5\n\t" \
+ "or %%g5, %0, %%g5\n\t" \
+ "casx [%2], %%g7, %%g5\n\t" \
+ "cmp %%g7, %%g5\n\t" \
+ "bne,pn %%xcc, 1b\n\t" \
+ " nop" \
+ : /* no outputs */ \
+ : "r" (mask), "r" (non_cpu_bits), "r" (&(PAGE)->flags) \
+ : "g5", "g7"); \
+} while (0)
+
+#define clear_dcache_dirty(PAGE) \
+ clear_bit(PG_dcache_dirty, &(PAGE)->flags)
+
/* Certain architectures need to do special things when pte's
* within a page table are directly modified. Thus, the following
* hook is made available.
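The set_dcache_dirty() macro above tags page->flags with the current CPU number and the PG_dcache_dirty bit through a casx compare-and-swap retry loop. Purely as a reading aid (not part of the patch), the same retry pattern expressed in portable C, assuming a __sync_val_compare_and_swap-style builtin that compilers of this era did not actually provide:

/* Sketch only: the logic of the casx loop in set_dcache_dirty(). */
static void set_dcache_dirty_sketch(unsigned long *flags, unsigned long cpu)
{
	unsigned long non_cpu_bits = (1UL << 24) - 1UL;
	unsigned long mask = (cpu << 24) | (1UL << PG_dcache_dirty);
	unsigned long old, new;

	do {
		old = *flags;
		/* keep the low flag bits, overwrite the CPU tag, set dirty */
		new = (old & non_cpu_bits) | mask;
	} while (__sync_val_compare_and_swap(flags, old, new) != old);
}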
-/* $Id: unistd.h,v 1.48 2001/10/09 10:54:39 davem Exp $ */
+/* $Id: unistd.h,v 1.49 2001/10/18 08:27:05 davem Exp $ */
#ifndef _SPARC64_UNISTD_H
#define _SPARC64_UNISTD_H
#define __NR_oldlstat 202 /* Linux Specific */
#define __NR_uselib 203 /* Linux Specific */
#define __NR_readdir 204 /* Linux Specific */
-/* #define __NR_ioperm 205 Linux Specific - i386 specific, unused */
+#define __NR_readahead 205 /* Linux Specific */
#define __NR_socketcall 206 /* Linux Specific */
#define __NR_syslog 207 /* Linux Specific */
/* #define __NR_olduname 208 Linux Specific */
#include <asm/semaphore.h> /* Needed for MUTEX init macros */
#include <linux/config.h>
#include <linux/notifier.h>
+#include <linux/ioport.h> /* Needed for struct resource */
#include <asm/atomic.h>
/*
/* dir.c */
extern struct inode_operations ufs_dir_inode_operations;
extern int ufs_check_dir_entry (const char *, struct inode *, struct ufs_dir_entry *, struct buffer_head *, unsigned long);
+extern int ufs_add_link (struct dentry *, struct inode *);
+extern ino_t ufs_inode_by_name(struct inode *, struct dentry *);
+extern int ufs_make_empty(struct inode *, struct inode *);
+extern struct ufs_dir_entry * ufs_find_entry (struct dentry *, struct buffer_head **);
+extern int ufs_delete_entry (struct inode *, struct ufs_dir_entry *, struct buffer_head *);
+extern int ufs_empty_dir (struct inode *);
+extern struct ufs_dir_entry * ufs_dotdot (struct inode *, struct buffer_head **);
+extern void ufs_set_link(struct inode *, struct ufs_dir_entry *, struct buffer_head *, struct inode *);
/* file.c */
extern struct inode_operations ufs_file_inode_operations;
/* ialloc.c */
extern void ufs_free_inode (struct inode *inode);
-extern struct inode * ufs_new_inode (const struct inode *, int, int *);
+extern struct inode * ufs_new_inode (const struct inode *, int);
/* inode.c */
extern int ufs_frag_map (struct inode *, int);
extern struct list_head usb_driver_list;
extern struct list_head usb_bus_list;
-extern rwlock_t usb_bus_list_lock;
+extern struct semaphore usb_bus_list_lock;
/*
* USB device fs stuff
*
* Implementation of the Transmission Control Protocol(TCP).
*
- * Version: $Id: tcp_ipv4.c,v 1.232 2001/10/15 12:34:50 davem Exp $
+ * Version: $Id: tcp_ipv4.c,v 1.234 2001/10/18 09:49:08 davem Exp $
*
* IPv4 specific functions
*
local_bh_enable();
}
+static inline void tcp_bind_hash(struct sock *sk, struct tcp_bind_bucket *tb, unsigned short snum)
+{
+ sk->num = snum;
+ if ((sk->bind_next = tb->owners) != NULL)
+ tb->owners->bind_pprev = &sk->bind_next;
+ tb->owners = sk;
+ sk->bind_pprev = &tb->owners;
+ sk->prev = (struct sock *) tb;
+}
+
+static inline int tcp_bind_conflict(struct sock *sk, struct tcp_bind_bucket *tb)
+{
+ struct sock *sk2 = tb->owners;
+ int sk_reuse = sk->reuse;
+
+ for( ; sk2 != NULL; sk2 = sk2->bind_next) {
+ if (sk != sk2 &&
+ sk->bound_dev_if == sk2->bound_dev_if) {
+ if (!sk_reuse ||
+ !sk2->reuse ||
+ sk2->state == TCP_LISTEN) {
+ if (!sk2->rcv_saddr ||
+ !sk->rcv_saddr ||
+ (sk2->rcv_saddr == sk->rcv_saddr))
+ break;
+ }
+ }
+ }
+ return sk2 != NULL;
+}
+
/* Obtain a reference to a local port for the given sock,
* if snum is zero it means select any available local port.
*/
if (tb->fastreuse != 0 && sk->reuse != 0 && sk->state != TCP_LISTEN) {
goto success;
} else {
- struct sock *sk2 = tb->owners;
- int sk_reuse = sk->reuse;
-
- for( ; sk2 != NULL; sk2 = sk2->bind_next) {
- if (sk != sk2 &&
- sk->bound_dev_if == sk2->bound_dev_if) {
- if (!sk_reuse ||
- !sk2->reuse ||
- sk2->state == TCP_LISTEN) {
- if (!sk2->rcv_saddr ||
- !sk->rcv_saddr ||
- (sk2->rcv_saddr == sk->rcv_saddr))
- break;
- }
- }
- }
- /* If we found a conflict, fail. */
- ret = 1;
- if (sk2 != NULL)
- goto fail_unlock;
+ ret = 1;
+ if (tcp_bind_conflict(sk, tb))
+ goto fail_unlock;
}
}
ret = 1;
((sk->reuse == 0) || (sk->state == TCP_LISTEN)))
tb->fastreuse = 0;
success:
- sk->num = snum;
- if (sk->prev == NULL) {
- if ((sk->bind_next = tb->owners) != NULL)
- tb->owners->bind_pprev = &sk->bind_next;
- tb->owners = sk;
- sk->bind_pprev = &tb->owners;
- sk->prev = (struct sock *) tb;
- } else {
- BUG_TRAP(sk->prev == (struct sock *) tb);
- }
- ret = 0;
+ if (sk->prev == NULL)
+ tcp_bind_hash(sk, tb, snum);
+ BUG_TRAP(sk->prev == (struct sock *) tb);
+ ret = 0;
fail_unlock:
spin_unlock(&head->lock);
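The tcp_bind_conflict() helper factored out above encodes the SO_REUSEADDR sharing rule: two sockets may bind the same local port only if both set SO_REUSEADDR, the existing owner is not in TCP_LISTEN, or their specific local addresses differ. A userspace sketch of the effect (illustrative only, not part of the patch; bind_reuse is a made-up helper) appears below; calling it twice for the same port succeeds while neither socket has started listening.

#include <sys/socket.h>
#include <netinet/in.h>
#include <string.h>

/* Sketch: bind a TCP socket with SO_REUSEADDR; returns the fd or -1. */
static int bind_reuse(unsigned short port)
{
	struct sockaddr_in a;
	int one = 1;
	int fd = socket(AF_INET, SOCK_STREAM, 0);

	if (fd < 0)
		return -1;
	setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
	memset(&a, 0, sizeof(a));
	a.sin_family = AF_INET;
	a.sin_port = htons(port);
	a.sin_addr.s_addr = INADDR_ANY;
	if (bind(fd, (struct sockaddr *) &a, sizeof(a)) < 0)
		return -1;
	return fd;	/* a second bind_reuse(port) also succeeds pre-listen */
}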