Erich Focht [Thu, 31 Oct 2002 13:49:00 +0000 (05:49 -0800)]
[PATCH] ia64: 2.5.44 NUMA fixups
Dear David,
please find attached two patches for the latest 2.5.44-ia64. They fix
some problems and simplify things a bit.
remove_nodeid-2.5.44.patch:
This comes from Kimi. In 2.5.44 we suddenly had two definitions for
numa_node_id(): one was IA64-specific (local_cpu_data->nodeid), while
the other is now platform independent:
__cpu_to_node(smp_processor_id()). After some discussion we decided
to remove the nodeid from local_cpu_data and keep the definition used
by all other platforms. By using cpu_to_node_map[] we are also
faster when doing multiple lookups, as all node ids come in a single
cache line (which is not bounced around, as its contents are only
read).
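Roughly, the definition we kept boils down to the following (the exact
header layout and array type here are assumptions, not a quote of the
tree):

    /* Sketch of the generic form: cpu_to_node_map[] is filled in once at
     * boot from the firmware/ACPI topology data and is read-only afterwards,
     * so all node ids sit in a shared, never-dirtied cache line. */
    extern volatile char cpu_to_node_map[NR_CPUS];

    #define __cpu_to_node(cpu)  ((int)cpu_to_node_map[cpu])
    #define numa_node_id()      __cpu_to_node(smp_processor_id())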
ia64_topology_fixup-2.5.44.patch:
Here I'm following the latest fixup for i386 from Matthew Dobson. The
__node_to_cpu_mask() macro now accesses an array which is initialized
after the ACPI CPU discovery. It also simplifies
__node_to_first_cpu(). A compiler warning has been fixed, too.
Neil Brown [Wed, 30 Oct 2002 08:24:57 +0000 (00:24 -0800)]
[PATCH] kNFSd: Convert nfsd to use a list of pages instead of one big buffer
This means:
1/ We don't need an order-4 allocation for each nfsd that starts
2/ We don't need an order-4 allocation in skb_linearize when
we receive a 32K write request
3/ It will be easier to incorporate the zero-copy read changes
The pages are handed around using an xdr_buf (instead of svc_buf)
much like the NFS client so future crypto code can use the same
data structure for both client and server.
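As a rough illustration, an xdr_buf-style layout looks something like
this (field names are indicative only, not a quote of
include/linux/sunrpc/xdr.h):

    /* Illustrative sketch of a page-list request/reply buffer. */
    struct xdr_buf_sketch {
        struct iovec  head[1];    /* RPC header and small fixed fields   */
        struct iovec  tail[1];    /* padding and trailing fields         */
        struct page **pages;      /* the bulk 'data' bit: file data,     */
        unsigned int  page_base;  /*   readdir entries, symlink contents */
        unsigned int  page_len;
        unsigned int  buflen;     /* total space available               */
        unsigned int  len;        /* bytes actually used                 */
    };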
The code assumes that most requests and replies fit in a single page.
The exceptions are assumed to have some largish 'data' bit, and the
rest must fit in a single page.
The 'data' bits are file data, readdir data, and symlinks.
There must be only one 'data' bit per request.
This is all fine for nfs/nlm.
This isn't complete:
1/ NFSv4 hasn't been converted yet (it won't compile)
2/ NFSv3 allows symlinks up to 4096 bytes, but the code will only support
up to about 3800 at the moment
3/ readdir responses are limited to about 3800.
but I thought this patch was big enough, and the rest can come
later.
This patch introduces vfs_readv and vfs_writev as parallels to
vfs_read and vfs_write. This means there is a fair bit of
duplication in read_write.c that should probably be tidied up...
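For reference, the new entry points are meant to mirror vfs_read and
vfs_write but take an iovec; something along these lines (the exact
prototypes here are an assumption):

    /* Assumed shape of the new helpers in fs/read_write.c. */
    ssize_t vfs_readv(struct file *file, const struct iovec *vec,
                      unsigned long vlen, loff_t *pos);
    ssize_t vfs_writev(struct file *file, const struct iovec *vec,
                       unsigned long vlen, loff_t *pos);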
Neil Brown [Wed, 30 Oct 2002 08:24:44 +0000 (00:24 -0800)]
[PATCH] kNFSd: nfsd_readdir changes.
nfsd_readdir - the common readdir code for all versions of nfsd -
contains a number of version-specific things with appropriate checks,
and also does some xdr-encoding which rightly belongs elsewhere.
This patch simplifies nfsd_readdir to do just the core stuff, moving
the version specifics into version-specific files and the xdr encoding
into the xdr encoding files.
Neil Brown [Wed, 30 Oct 2002 08:24:12 +0000 (00:24 -0800)]
[PATCH] kNFSd: Fix nfs shutdown problem.
The 'unexport everything' that happens when the
last nfsd thread dies was shutting down too much -
things that should only be shut down on module unload.
Neil Brown [Wed, 30 Oct 2002 08:04:30 +0000 (00:04 -0800)]
[PATCH] md: factor out MD superblock handling code
Define an interface for interpreting and updating superblocks
so we can more easily define new formats.
With this patch, (almost) all superblock layout information is
located in a small set of routines dedicated to superblock
handling. This will allow us to provide a similar set for
a different format; a sketch of such an interface follows the list below.
The two exceptions are:
1/ autostart_array where the devices listed in the superblock
are searched for.
2/ raid5 'knows' the maximum number of devices for
compute_parity.
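Purely as an illustration of the kind of per-format interface meant
above (the struct and member names here are hypothetical, not quoted
from drivers/md/md.c):

    /* Hypothetical per-format superblock operations table. */
    struct md_sb_ops_sketch {
        char *name;                 /* e.g. "0.90.0" for the current format */
        int  (*load_super)(mdk_rdev_t *rdev, mdk_rdev_t *refdev);
        int  (*validate_super)(mddev_t *mddev, mdk_rdev_t *rdev);
        void (*sync_super)(mddev_t *mddev, mdk_rdev_t *rdev);
    };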
Andi Kleen [Wed, 30 Oct 2002 07:54:35 +0000 (23:54 -0800)]
[PATCH] x86-64 updates for 2.5.44
A few updates for x86-64 in 2.5.44. Some of the bugs fixed were serious.
- Don't count ACPI mappings in end_pfn. This shrinks mem_map a lot
on many setups.
- Fix mem= option. Remove custom mapping support.
- Revert per_cpu implementation to the generic version. The optimized one
that used %gs directly triggered too many toolkit problems and was a
constant source of bugs.
- Make sure pgd_offset_k works correctly for vmalloc mappings. This makes
modules work again properly.
- Export pci dma symbols
- Export other symbols to make more modules work
- Don't drop physical address bits >32bit on iommu free.
- Add more prototypes to fix warnings
- Resync pci subsystem with i386
- Fix pci dma kernel option parsing.
- Do PCI peer bus scanning after ACPI in case it missed some busses
(that's a workaround - 2.5 ACPI seems to have some problems here that
I need to investigate more closely)
- Remove the .eh_frame section at link time. This saves several hundred KB in
the bzImage
- Fix MTRR initialization. It works properly now on SMP again.
- Fix kernel option parsing, it was broken by section name changes in
init.h
- A few other cleanups and fixes.
- Fix nonatomic warning in ioport.c
Andrew Morton [Wed, 30 Oct 2002 07:36:03 +0000 (23:36 -0800)]
[PATCH] hot-n-cold pages: use cold pages for readahead
It is usually the case that pagecache reads use busmastering hardware
to transfer the data into pagecache. This invalidates the CPU cache of
the pagecache pages.
So use cache-cold pages for pagecache reads, to avoid wasting
cache-hot pages.
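In code this is just a "cold" hint at allocation time, roughly (the
helper and flag names below are illustrative of the series, not quoted
from it):

    /* Ask for a cache-cold page when allocating for readahead; a
     * __GFP_COLD-style hint keeps the hot pages for real consumers. */
    struct page *page = page_cache_alloc_cold(mapping);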
Andrew Morton [Wed, 30 Oct 2002 07:35:44 +0000 (23:35 -0800)]
[PATCH] hot-n-cold pages: bulk page freeing
Patch from Martin Bligh.
Implements __free_pages_bulk(). Release multiple pages of a given
order into the buddy all within a single acquisition of the zone lock.
This also removes current->local_pages, the per-task list of pages
which only ever contained one page; its purpose was to prevent other
tasks from stealing pages which this task had just freed up.
Given that we're freeing into the per-cpu caches, that those are
multipage caches, and the cpu-stickiness of the scheduler, I think
current->local_pages is no longer needed.
Andrew Morton [Wed, 30 Oct 2002 07:35:32 +0000 (23:35 -0800)]
[PATCH] hot-n-cold pages: bulk page allocator
This is the hot-n-cold-pages series. It introduces a per-cpu lockless
LIFO pool in front of the page allocator. For three reasons:
1: To reduce lock contention on the buddy lock: we allocate and free
pages in, typically, 16-page chunks.
2: To return cache-warm pages to page allocation requests.
3: As infrastructure for a page reservation API which can be used to
ensure that the GFP_ATOMIC radix-tree node and pte_chain allocations
cannot fail. That code is not complete, and does not absolutely
require hot-n-cold pages. It'll work OK though.
We add two queues per CPU. The "hot" queue contains pages which the
freeing code thought were likely to be cache-hot. By default, new
allocations are satisfied from this queue.
The "cold" queue contains pages which the freeing code expected to be
cache-cold. The cold queue is mainly for lock amortisation, although
it is possible to explicitly allocate cold pages. The readahead code
does that.
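The shape of the per-cpu structure is roughly the following (a sketch;
field names and limits are indicative of the patch, not quoted from it):

    /* Sketch: one hot and one cold LIFO queue per CPU, per zone. */
    struct per_cpu_pages_sketch {
        int count;               /* pages currently on the list        */
        int low;                 /* refill from the buddy below this   */
        int high;                /* flush back to the buddy above this */
        int batch;               /* pages moved per refill/flush (~16) */
        struct list_head list;   /* the LIFO of free pages             */
    };

    struct per_cpu_pageset_sketch {
        struct per_cpu_pages_sketch pcp[2];   /* [0] = hot, [1] = cold */
    };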
I have been hot and cold on these patches for quite some time - the
benefit is not great.
- 4% speedup in Randy Hron's benching of the autoconf regression
tests on a 4-way. Most of this came from savings in pte_alloc and
pmd_alloc: the pagetable clearing code liked the warmer pages (some
architectures still have the pgt_cache, and can perhaps do away with
them).
- 1% to 2% speedup in kernel compiles on my 4-way and Martin's 32-way.
- 60% speedup in a little test program which writes 80 kbytes to a
file and ftruncates it to zero again. Ran four instances of that on a
4-way and it loved the cache warmth.
- 2.5% speedup in Specweb testing on 8-way
- The thing which won me over: an 11% increase in throughput of the
SDET benchmark on an 8-way PIII:
with hot & cold:
RESULT for 8 users is 17971 +12.1%
RESULT for 16 users is 17026 +12.0%
RESULT for 32 users is 17009 +10.4%
RESULT for 64 users is 16911 +10.3%
without:
RESULT for 8 users is 16038
RESULT for 16 users is 15200
RESULT for 32 users is 15406
RESULT for 64 users is 15331
SDET is a very old SPEC test which simulates a development
environment with a large number of users. Lots of users running a
mix of shell commands, basically.
These patches were written by Martin Bligh and myself.
This one implements rmqueue_bulk() - a function for removing multiple
pages of a given order from the buddy lists.
This is for lock amortisation: take the highly-contended zone->lock
with less frequency, do more work once it has been acquired.
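In outline, that looks something like this (a sketch, not the patch
text; __rmqueue stands in for the existing single-page buddy removal):

    /* Sketch: amortise zone->lock by pulling several pages per acquisition. */
    static int rmqueue_bulk_sketch(struct zone *zone, unsigned int order,
                                   int count, struct list_head *list)
    {
        unsigned long flags;
        int i, allocated = 0;
        struct page *page;

        spin_lock_irqsave(&zone->lock, flags);
        for (i = 0; i < count; i++) {
            page = __rmqueue(zone, order);
            if (page == NULL)
                break;
            allocated++;
            list_add_tail(&page->list, list);
        }
        spin_unlock_irqrestore(&zone->lock, flags);
        return allocated;
    }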
Andrew Morton [Wed, 30 Oct 2002 07:32:18 +0000 (23:32 -0800)]
[PATCH] percpu: convert global page accounting
Convert global page state accounting to use per-cpu storage
(I think this code remains a little buggy, btw. Note how I do
per_cpu(page_states, cpu).member += (delta);
This gets done at interrupt time and hence is assuming that
the "+=" operation on a ulong is atomic wrt interrupts on
all architectures. How do we feel about that assumption?)
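Schematically, the pattern in question is (simplified, not the exact
macros in the patch):

    /* Simplified sketch of the per-cpu page accounting pattern. */
    struct page_state_sketch {
        unsigned long nr_dirty;
        unsigned long nr_writeback;
        /* ... one counter per statistic ... */
    };

    static DEFINE_PER_CPU(struct page_state_sketch, page_states_sketch);

    /* Writers, possibly in interrupt context, bump their own CPU's copy: */
    #define mod_page_state_sketch(member, delta) \
        do { per_cpu(page_states_sketch, smp_processor_id()).member += (delta); } while (0)

    /* Readers sum the counter over all online CPUs to get the global value. */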
Andrew Morton [Wed, 30 Oct 2002 07:31:37 +0000 (23:31 -0800)]
[PATCH] percpu: convert timers
Patch from Dipankar Sarma <dipankar@in.ibm.com>
This patch changes the per-CPU data in timer management (tvec_bases)
to use the per_cpu data area and makes it safe for cpu_possible allocation
by using CPU notifiers. End result - saving space.
Andrew Morton [Wed, 30 Oct 2002 07:25:03 +0000 (23:25 -0800)]
[PATCH] slab: reap timers
- add a reap timer that returns stale objects from the cpu arrays
- use list_for_each instead of while loops
- /proc/slabinfo layout change, for a new field about reaping.
Implementation:
slab contains 2 caches that hold objects which might be usable to the
system:
- the cpu arrays contain objects that other cpus could use
- the slabs_free list contains freeable slabs, i.e. pages that someone
else might want.
The patch now keeps track of accesses to the cpu arrays and to the free
list. If there was no recent activity in one of the caches, part of
the cache is flushed.
Unlike kernels before 2.5.39, only a small part (~20%) is flushed each time.
The older kernels would bounce heavily between refill and drain under
memory pressure:
- kmem_cache_alloc: notices that there are no objects in the cpu
cache, loads 120 objects from the slab lists, returns 1.
[assuming batchcount=120]
- kmem_cache_reap is called due to memory pressure, finds 119
objects in the cpu array and returns them to the slab lists.
- repeat.
In addition, the length of the free list is limited based on the free
list accesses: a fixed "1" limit hurts the large object caches.
That's the last part for now, next is: [not yet written]
- cleanup: BUG_ON instead of if() BUG
- OOM handling for enable_cpucaches
- remove the unconditional might_sleep() from
cache_alloc_debugcheck_before, and make that DEBUG dependent.
- initial NUMA support, just to collect some stats:
What percentage of the objects are freed on the wrong
node? 0.1% or 20%?
Andrew Morton [Wed, 30 Oct 2002 07:24:43 +0000 (23:24 -0800)]
[PATCH] slab: cleanups and speedups
- enable the cpu array for all caches
- remove the optimized implementations for quick list access - with
cpu arrays in all caches, the list access is now rare.
- make the cpu arrays mandatory, this removes 50% of the conditional
branches from the hot path of kmem_cache_alloc [1]
- poisoning for objects with constructors
Patch got a bit longer...
I forgot to mention this: head arrays mean that some pages can be
blocked due to objects in the head arrays, and not returned to
page_alloc.c. The current kernel never flushes the head arrays, which
might worsen the behaviour of low-memory systems. The hunk that
flushes the arrays regularly comes next.
Detailed changelog: [to be read side by side with the patch]
* docu update
* "growing" is not really needed: races between grow and shrink are
handled by retrying. [additionally, the current kernel never
shrinks]
* move the batchcount into the cpu array:
the old code contained a race during cpu cache tuning:
update batchcount [in cachep] before or after the IPI?
And NUMA will need it anyway.
* bootstrap support: the cpu arrays are really mandatory, nothing
works without them. Thus a statically allocated cpu array is needed
for starting the allocators.
* move the full, partial & free lists into a separate structure, as a
preparation for NUMA
* structure reorganization: now the cpu arrays are the most important
part, not the lists.
* dead code elimination: remove "failures", nowhere read.
* dead code elimination: remove "OPTIMIZE": not implemented. The
idea is to skip the virt_to_page lookup for caches with on-slab slab
structures, and use (ptr&PAGE_MASK) instead. The details are in
Bonwick's paper. Not fully implemented.
* remove GROWN: the kernel never shrinks a cache, thus GROWN is
meaningless.
* bootstrap: starting the slab allocator is now a 3 stage process:
- nothing works, use the statically allocated cpu arrays.
- the smallest kmalloc allocator works, use it to allocate
cpu arrays.
- all kmalloc allocators work, use the default cpu array size
* register a cpu notifier callback, and allocate the needed head
arrays if a new cpu arrives
* always enable head arrays, even for DEBUG builds. Poisoning and
red-zoning now happen before an object is added to the arrays.
Insert enable_all_cpucaches into cpucache_init, there is no need for a
separate function.
* modifications to the debug checks due to the earlier calls of the
dtor for caches with poisoning enabled
* poison+ctor is now supported
* squeezing 3 objects into a cacheline is hopeless, the FIXME is not
solvable and can be removed.
* move do_ccupdate_local nearer to do_tune_cpucache. Should have
been part of -04-drain.
* additional object checks. Red-zoning is tricky: it's implemented
by increasing the object size by 2*BYTES_PER_WORD. Thus
BYTES_PER_WORD must be added to objp before calling the destructor or
constructor, or before returning the object from alloc. The poison
functions add BYTES_PER_WORD internally.
* create a flagcheck function, right now the tests are duplicated in
cache_grow [always] and alloc_debugcheck_before [DEBUG only]
* modify slab list updates: all allocs are now bulk allocs that try
to get multiple objects at once, update the list pointers only at the
end of a bulk alloc, not once per alloc.
* might_sleep was moved into kmem_flagcheck.
* major hotpath change:
- cc always exists, no fallback
- cache_alloc_refill is called with disabled interrupts,
and does everything to recover from an empty cpu array.
Far shorter & simpler __cache_alloc [inlined in both
kmalloc and kmem_cache_alloc]
* __free_block, free_block, cache_flusharray: main implementation of
returning objects to the lists. no big changes, diff lost track.
* new debug check: too early kmalloc or kmem_cache_alloc
* slightly reduce the sizes of the cpu arrays: keep the size < a
power of 2, including batchcount, avail and now limit, for optimal
kmalloc memory efficiency.
That's it. I even found 2 bugs while reading: dtors and ctors for
verify were called with the wrong parameters when RED_ZONE was enabled,
and some checks still assumed that POISON and ctors are incompatible.
Andrew Morton [Wed, 30 Oct 2002 07:24:33 +0000 (23:24 -0800)]
[PATCH] slab: remove spaces from /proc identifiers
From Manfred Spraul
remove the space from the names of the DMA caches: the spaces make it
impossible to tune the caches through /proc/slabinfo, and make parsing
/proc/slabinfo difficult.
Andrew Morton [Wed, 30 Oct 2002 07:24:12 +0000 (23:24 -0800)]
[PATCH] slab: reduce internal fragmentation
From Manfred Spraul
If an object is freed from a slab, then move the slab to the tail of
the partial list - this should increase the probability that the other
objects from the same page are freed, too, and that a page can be
returned to gfp later.
In other words: if we just freed an object from this page, then make
this page the *last* page eligible for new allocations, under the
assumption that the other objects in that same page are about to be
freed up as well.
The cpu arrays are now always in front of the lists, i.e. cache hit
rates should not matter.
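The change in the free path amounts to something like this (a sketch;
the list and field names are indicative, not quoted from mm/slab.c):

    /* Sketch: after freeing an object from slabp, requeue the slab.  A fully
     * free slab goes to the free list; a partially used slab goes to the
     * *tail* of the partial list, so it is the last candidate for new
     * allocations and its remaining objects get a chance to be freed too. */
    list_del(&slabp->list);
    if (slabp->inuse == 0)
        list_add(&slabp->list, &lists->slabs_free);
    else
        list_add_tail(&slabp->list, &lists->slabs_partial);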
Andrew Morton [Wed, 30 Oct 2002 07:23:34 +0000 (23:23 -0800)]
[PATCH] slab: extended cpu notifiers
Patch from Dipankar Sarma <dipankar@in.ibm.com>
This is Manfred's patch which provides a CPU_UP_PREPARE cpu notifier to
allow initialization of per_cpu data just before the cpu becomes fully
functional.
It also provides a facility for the CPU_UP_PREPARE handler to return
NOTIFY_BAD to signify that the CPU is not permitted to come up. If
that happens, a CPU_UP_CANCELED message is passed to all the handlers.
The patch also fixes a bogus NOTIFY_BAD return from the softirq setup
code.
Patch has been acked by Rusty.
We need this mechanism in slab for starting per-cpu timers and for
allocating the per-cpu slab head arrays *before* the CPU has come up
and started using slab.
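A handler using the new phase looks roughly like this (an illustrative
sketch, not the slab code itself; the alloc/free helpers are
hypothetical):

    /* Illustrative CPU_UP_PREPARE handler: set up per-cpu state before the
     * CPU starts running, and veto the bring-up if that setup fails. */
    static int my_cpu_callback(struct notifier_block *nb,
                               unsigned long action, void *hcpu)
    {
        long cpu = (long)hcpu;

        switch (action) {
        case CPU_UP_PREPARE:
            if (alloc_my_percpu_data(cpu))    /* hypothetical helper */
                return NOTIFY_BAD;            /* CPU may not come up */
            break;
        case CPU_UP_CANCELED:
            free_my_percpu_data(cpu);         /* undo the PREPARE work */
            break;
        }
        return NOTIFY_OK;
    }

    static struct notifier_block my_cpu_nb = {
        .notifier_call = my_cpu_callback,
    };
    /* registered early with register_cpu_notifier(&my_cpu_nb) */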
Linus Torvalds [Wed, 30 Oct 2002 06:54:51 +0000 (22:54 -0800)]
Merge penguin.transmeta.com:/home/penguin/torvalds/repositories/kernel/epoll-0.15
into penguin.transmeta.com:/home/penguin/torvalds/repositories/kernel/linux
James Morris [Wed, 30 Oct 2002 06:45:00 +0000 (22:45 -0800)]
[CRYPTO]: Cleanups based upon suggestions by Jeff Garzik.
- Changed unsigned to unsigned int in algos.
- Consistent use of u32 for flags throughout api.
- Use of unsigned int rather than int for counting things which must
be positive, also replaced size_ts to keep code simpler and lessen
bloat on some archs.
- Got rid of some unneeded returns.
- Const correctness.
Patrick Mochel [Wed, 30 Oct 2002 04:41:43 +0000 (20:41 -0800)]
kobjects: add array of default attributes to subsystems, and create on registration.
struct subsystem may now contain a pointer to a NULL-terminated array of
default attributes to be exported when an object is registered with the subsystem.
kobject registration will check the return values of the directory creation and
the creation of each file, and handle them appropriately.
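Usage is intended to look something like this (the attribute names and
the .default_attrs member name here are illustrative):

    /* Illustrative: a NULL-terminated array of default attributes. */
    static struct attribute foo_attr = { .name = "foo", .mode = 0444 };
    static struct attribute bar_attr = { .name = "bar", .mode = 0644 };

    static struct attribute *example_default_attrs[] = {
        &foo_attr,
        &bar_attr,
        NULL,          /* terminator */
    };

    static struct subsystem example_subsys = {
        /* ... */
        .default_attrs = example_default_attrs,
    };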
Patrick Mochel [Wed, 30 Oct 2002 04:27:36 +0000 (20:27 -0800)]
sysfs: kill struct sysfs_dir.
Previously, sysfs read() and write() calls looked for sysfs_ops in the struct
sysfs_dir in the kobject. Since an object belongs to a subsystem and is a member
of a group of like devices, the sysfs_ops have been moved to struct subsystem
and are referenced from there.
The only remaining member of struct sysfs_dir is the dentry of the object's
directory. That is moved out of the dir struct and directly into struct kobject.
That saves us 4 bytes/object.
All of the sysfs functions that referenced the struct have been changed to just
reference the dentry.
Linus Torvalds [Wed, 30 Oct 2002 04:24:40 +0000 (20:24 -0800)]
Merge penguin.transmeta.com:/home/penguin/torvalds/repositories/kernel/kconfig
into penguin.transmeta.com:/home/penguin/torvalds/repositories/kernel/linux
Patrick Mochel [Wed, 30 Oct 2002 03:47:41 +0000 (19:47 -0800)]
Introduce struct subsystem.
A struct subsystem is basically a collection of objects of a certain type,
and some callbacks to operate on objects of that type.
subsystems contain embedded kobjects themselves, and have a similar set of
library routines to those of kobjects, which are mostly just wrappers for the
corresponding kobject routines.
kobjects are inserted in depth-first order into their subsystem's list of
objects. Orphan kobjects are also given foster parents that point to their
subsystem. This provides a bit more rigidity in the hierarchy, and disallows
any orphan kobjects.
When an object is unregistered, it is removed from its subsystem's list. When
an object's refcount hits 0, the subsystem's ->release() callback is called.
Documentation describing the objects and the interfaces has also been added.
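In outline, the new object looks something like this (a sketch of the
idea; consult the added documentation for the real layout):

    /* Sketch only -- see the added Documentation for the real definition. */
    struct subsystem_sketch {
        struct kobject      kobj;           /* subsystems are kobjects too        */
        struct list_head    list;           /* every kobject registered with us   */
        struct rw_semaphore rwsem;          /* protects the list                  */
        struct subsystem   *parent;
        void (*release)(struct kobject *);  /* called when a member's count hits 0 */
        struct sysfs_ops   *sysfs_ops;      /* used by sysfs read()/write()       */
        struct attribute  **default_attrs;  /* NULL-terminated default attributes */
    };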
David Brownell [Tue, 29 Oct 2002 16:12:42 +0000 (08:12 -0800)]
[PATCH] ohci td error cleanup
This is a version of a patch I sent out last Friday to help
address some of the "bad entry" errors that some folk
were seeing, seemingly only with control requests. The fix
is just to not try being clever: remove one TD at a time and
patch the ED as if that TD had completed normally, then do
the next ... don't try to patch just once in this fault case.
(And it nukes some debug info I accidentally submitted.)
David Brownell [Tue, 29 Oct 2002 15:43:32 +0000 (07:43 -0800)]
[PATCH] USB: clean up usb structures some more
This patch splits up the usb structures to have two structs,
"usb_XXX_descriptor" with just the descriptor, and "usb_host_XXX" (or
something similar) to wrap it and add the "extra" pointers plus the
array of related descriptors that the host parsed during enumeration.
(2 or 3 words extra in each "usb_host_XXX".) This further matches the
"on the wire" data and enables the gadget drivers to share the same
header file.
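Schematically, the split looks like this (a simplified sketch with an
abbreviated field list; see the USB headers for the exact layout):

    /* Wire format only: exactly the bytes the device sends. */
    struct usb_endpoint_descriptor_sketch {
        __u8  bLength;
        __u8  bDescriptorType;
        __u8  bEndpointAddress;
        __u8  bmAttributes;
        __u16 wMaxPacketSize;
        __u8  bInterval;
    } __attribute__ ((packed));

    /* Host-side wrapper: the descriptor plus 2-3 words of parsed state. */
    struct usb_host_endpoint_sketch {
        struct usb_endpoint_descriptor_sketch desc;
        unsigned char *extra;       /* class/vendor descriptors seen here */
        int extralen;
    };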
Covers all the linux/drivers/usb/* and linux/sound/usb/* stuff, but
not a handful of other drivers (bluetooth, iforce, hisax, irda) that
are out of the usb tree and will likely be affected.
We used to lock (i.e. increment the module use count of) all drivers just
in case, but it makes more sense to only lock the one we're actually using,
in particular since the old scheme was rather broken when insmod'ing
a new driver later.
Again, use a per-ttyI timer handler to feed arrived data into the
ttyI. Really, there shouldn't be any need for a timer at all, but
rather working flow control; that'll take a bit to fix.
ISDN: New timer handling for "+++" escape sequence
Instead of having one common timer and walking the list of
all ISDN channels, each of which might possibly be associated with a
ttyI and, even less likely, be waiting for the silence period
after "+++", just use a per-ttyI timer, which only gets activated
when necessary.
The common way in the kernel is to pass around the struct (e.g.
struct net_device) and leave the user the possibility of adding
private data using ::priv, so do it the same way when accessing
an ISDN channel.
ISDN: stat_callback() and recv_callback() -> event_callback()
Merge the two different types of callbacks into just one; there's no
good reason for the receive callback to be different, in particular since
we pass things through the same state machine later anyway.
For some reason, isdnloop didn't support the transparent encoding,
which is necessary for testing V.110. Testing also found a typo
causing an oops in isdn_common.c. Fixed.