It would appear that the timespec normalization code has an off-by-one error, found in three places. Thanks to Ben for spotting it.
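A minimal sketch of what the corrected normalization loop looks like (hypothetical helper name; the assumption here is that the bug was an '>' where '>=' was needed, so a value of exactly NSEC_PER_SEC nanoseconds slipped through unnormalized):

	/* sketch only, not the exact kernel hunk */
	static void normalize_timespec(struct timespec *ts, time_t sec, long nsec)
	{
		while (nsec >= NSEC_PER_SEC) {	/* '>=', not '>' */
			nsec -= NSEC_PER_SEC;
			++sec;
		}
		while (nsec < 0) {
			nsec += NSEC_PER_SEC;
			--sec;
		}
		ts->tv_sec = sec;
		ts->tv_nsec = nsec;
	}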
Signed-off-by: George Anzinger <george@mvista.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
allnoconfig:
In file included from fs/super.c:28:
include/linux/acct.h:173: warning: `TICK_NSEC' is not defined
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
put_ioctx's refcount debugging was doing an atomic_read after dropping its
reference when it wasn't the last ref, leaving a tiny race for another freeing
thread to sneak into. This shifts the debugging before the ops, uses BUG_ON,
and reformats the defines a little. Sadly, moving to inlines increased the
code size but this change decreases the code size by a whole 9 bytes :)
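Roughly what the fixed debug wrappers look like (a sketch under the assumption that the wrappers are the usual get_ioctx/put_ioctx defines; the point is that BUG_ON inspects the refcount before the atomic op, never after the reference may already be gone):

	#define get_ioctx(kioctx) do {					\
		BUG_ON(atomic_read(&(kioctx)->users) <= 0);		\
		atomic_inc(&(kioctx)->users);				\
	} while (0)

	#define put_ioctx(kioctx) do {					\
		BUG_ON(atomic_read(&(kioctx)->users) <= 0);		\
		if (unlikely(atomic_dec_and_test(&(kioctx)->users)))	\
			__put_ioctx(kioctx);				\
	} while (0)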
Signed-off-by: Zach Brown <zach.brown@oracle.com>
Cc: Benjamin LaHaise <bcrl@kvack.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Sync iocbs have a life cycle that doesn't need a kioctx. Their retrying, if
any, is done in the context of their owner, who has allocated them on the
stack.
The sole user of a sync iocb's ctx reference was aio_complete() checking for
an elevated iocb ref count that could never happen. No path which grabs an
iocb ref has access to sync iocbs.
If we were to implement sync iocb cancelation it would be done by the owner of
the iocb using its on-stack reference.
Removing this chunk from aio_complete allows us to remove the entire kioctx
instance from mm_struct, reducing its size by a third. On an i386 test box
the slab object size went from 768 to 504 bytes, and the number of objects
per page went from 5 to 8.
Signed-off-by: Zach Brown <zach.brown@oracle.com>
Acked-by: Benjamin LaHaise <bcrl@kvack.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Introduce an atomic_inc_not_zero operation. Make this a special case of
atomic_add_unless because lockless pagecache actually wants
atomic_inc_not_negativeone due to its offset refcount.
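A sketch of the relationship (not necessarily the exact generic wording): atomic_add_unless(v, a, u) adds 'a' unless the counter equals 'u' and reports whether it did, so atomic_inc_not_zero() is simply the (1, 0) case and an atomic_inc_not_negativeone() would be the (1, -1) case.

	/* add 'a' to 'v' unless 'v' currently equals 'u'; return true if it did */
	#define atomic_add_unless(v, a, u)					\
	({									\
		int c, old;							\
		c = atomic_read(v);						\
		while (c != (u) && (old = atomic_cmpxchg((v), c, c + (a))) != c) \
			c = old;						\
		c != (u);							\
	})

	#define atomic_inc_not_zero(v)	atomic_add_unless((v), 1, 0)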
Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: "Paul E. McKenney" <paulmck@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
- Make cmpxchg generally available on the i386 platform.
- Provide emulation of cmpxchg suitable for uniprocessor if built and run on
a 386 (a sketch of such an emulation appears below).
From: Christoph Lameter <clameter@sgi.com>
- Cut down patch and small style changes.
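A minimal sketch of such a uniprocessor fallback (hypothetical function name; a real 386 has no cmpxchg instruction, but with a single CPU, disabling interrupts around the read-modify-write is sufficient):

	static inline unsigned long cmpxchg_386_emul(volatile unsigned long *ptr,
						     unsigned long old,
						     unsigned long new)
	{
		unsigned long flags, prev;

		/* UP only: nothing can interleave while interrupts are off */
		local_irq_save(flags);
		prev = *ptr;
		if (prev == old)
			*ptr = new;
		local_irq_restore(flags);
		return prev;
	}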
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Cc: "Paul E. McKenney" <paulmck@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Currently per_cpu_ptr() doesn't really do anything with 'cpu' in the UP
case. This is problematic in the cases where this is the only place the
variable is referenced:
CC kernel/workqueue.o
kernel/workqueue.c: In function `current_is_keventd':
kernel/workqueue.c:460: warning: unused variable `cpu'
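A sketch of the kind of fix (assuming the asm-generic UP definition): evaluate 'cpu' so the caller's variable counts as used, then ignore it.

	/* UP: only one copy of the per-cpu data exists, but still evaluate
	 * 'cpu' so a caller whose only use of the variable is this macro
	 * doesn't warn */
	#define per_cpu_ptr(ptr, cpu)	({ (void)(cpu); (ptr); })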
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Remove task_work structure, use the standard thread flags functions and use
shifts in entry.S to test the thread flags. Add a few local labels to entry.S
to allow gas to generate short jumps.
Finally, it changes a number of inline functions in thread_info.h to macros to
delay the current_thread_info() usage, which on m68k requires a structure
(task_struct) that is not yet defined at that point.
Signed-off-by: Roman Zippel <zippel@linux-m68k.org>
Cc: Al Viro <viro@parcelfarce.linux.theplanet.co.uk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
a) added embedded thread_info [m68k processor.h]
b) added missing symbols in asm-offsets.c
c) task_thread_info() and friends in asm-m68k/thread_info.h
d) made m68k thread_info.h included by m68k processor.h, not the other way
round.
Signed-off-by: Al Viro <viro@parcelfarce.linux.theplanet.co.uk>
Signed-off-by: Roman Zippel <zippel@linux-m68k.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
a) in smp_lock.h #include of sched.h and spinlock.h moved under #ifdef
CONFIG_LOCK_KERNEL.
b) interrupt.h now explicitly pulls sched.h (not via smp_lock.h from
hardirq.h as it used to)
c) in three more places we need changes to compensate for (a): one place
in arch/sparc needs string.h now, hardirq.h needs a forward declaration of
task_struct, and preempt.h needs a direct include of thread_info.h.
d) thread_info-related helpers in sched.h and thread_info.h put under
ifndef __HAVE_THREAD_FUNCTIONS. Obviously safe.
Signed-off-by: Al Viro <viro@parcelfarce.linux.theplanet.co.uk>
Signed-off-by: Roman Zippel <zippel@linux-m68k.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This encapsulates the rest of the arch-dependent operations involving
thread_info access.
Two new helpers: setup_thread_stack() and end_of_stack(). In the normal case
the former copies the parent's thread_info to the new thread_info, and the
latter returns a pointer immediately past the end of thread_info.
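For the normal (separately allocated thread_info) case, the two helpers are roughly this (a sketch, not necessarily the exact hunk):

	static inline void setup_thread_stack(struct task_struct *p,
					      struct task_struct *org)
	{
		/* copy the parent's thread_info, then point it at the child */
		*task_thread_info(p) = *task_thread_info(org);
		task_thread_info(p)->task = p;
	}

	static inline unsigned long *end_of_stack(struct task_struct *p)
	{
		/* first word past the thread_info at the base of the stack */
		return (unsigned long *)(task_thread_info(p) + 1);
	}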
Signed-off-by: Al Viro <viro@parcelfarce.linux.theplanet.co.uk>
Signed-off-by: Roman Zippel <zippel@linux-m68k.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
New helper: task_thread_info(task). On platforms that have thread_info
allocated separately (i.e. the default case) it simply returns
task->thread_info. m68k wants (and for good reasons) to embed its thread_info
into task_struct, so it will (in a later patch) have a task_thread_info() of
its own. For now we just add a macro for the generic case and convert existing
instances of its body in the core kernel to use the new macro. Obviously safe:
all normal architectures get the same preprocessor output they used to get.
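In the generic case the macro is just a field access (sketch):

	/* default: thread_info is allocated separately and the task points at it */
	#define task_thread_info(task)	((task)->thread_info)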
Signed-off-by: Al Viro <viro@parcelfarce.linux.theplanet.co.uk>
Signed-off-by: Roman Zippel <zippel@linux-m68k.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Enablement patch for the new PowerBooks (late 2005 edition).
This enables the ATA controller, Gigabit ethernet and basic AGP setup.
Bluetooth works out of the box after running hid2hci.
Still remaining is to get the touchpad to work; the simple change of just
adding the new USB IDs isn't enough.
Signed-off-by: Olof Johansson <olof@lixom.net>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Remove last remnant of the defunct early reclaim page logic, the no longer
used __GFP_NORECLAIM flag bit.
Signed-off-by: Paul Jackson <pj@sgi.com>
Acked-by: Martin Hicks <mort@bork.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Clean up of __alloc_pages: restoration of previous behaviour, plus further
cleanups by introducing 'alloc_flags' and removing the last of
should_reclaim_zone.
Signed-off-by: Rohit Seth <rohit.seth@intel.com>
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
The address-based work estimate for unmapping (for lockbreak) is, and always
was, horribly inefficient for sparse mappings. The problem is most simply
explained with an example:
If we find a pgd is clear, we still have to call into unmap_page_range
PGDIR_SIZE / ZAP_BLOCK_SIZE times, each time checking the clear pgd, in
order to progress the working address to the next pgd.
The fundamental way to solve the problem is to keep track of the end
address we've processed and pass it back to the higher layers.
From: Nick Piggin <npiggin@suse.de>
Modification to get away completely from the address-based work estimate and
instead use an abstract count, with a very small cost for empty entries as
opposed to present pages.
On 2.6.14-git2, ppc64, and CONFIG_PREEMPT=y, mapping and unmapping 1TB of
virtual address space takes 1.69s; with the following patch applied, this
operation can be done 1000 times in less than 0.01s.
From: Andrew Morton <akpm@osdl.org>
With CONFIG_HUGETLB_PAGE=n:
mm/memory.c: In function `unmap_vmas':
mm/memory.c:779: warning: division by zero
Due to
zap_work -= (end - start) /
(HPAGE_SIZE / PAGE_SIZE);
So make the dummy HPAGE_SIZE non-zero.
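One plausible shape for that fix (a sketch; the exact stub location and values are assumptions):

	#ifndef CONFIG_HUGETLB_PAGE
	/* keep the compiler happy: non-zero, so HPAGE_SIZE / PAGE_SIZE != 0 */
	#define HPAGE_SIZE	PAGE_SIZE
	#define HPAGE_MASK	PAGE_MASK
	#endif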
Signed-off-by: Robin Holt <holt@sgi.com>
Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Changed jobs and the Freescale address is no longer valid.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Since few people need the support anymore, this moves the legacy
pm_xxx functions to CONFIG_PM_LEGACY, and include/linux/pm_legacy.h.
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
The file_lock spinlock sits close to mostly-read fields of 'struct
files_struct'.
In SMP (and NUMA) environments, each time a thread wants to open or close a
file, it has to acquire the spinlock, thus invalidating the cache line
containing this spinlock on other CPUs. Other threads doing
read()/write()/... calls that use RCU to access the file table then incur
further memory (possibly cross-node NUMA) transactions to fetch this cache
line again.
Move the spinlock to another cache line, so that concurrent threads can
share the cache line containing 'count' and 'fdt' fields.
It's worth up to 9% on a microbenchmark using a 4-thread 2-package x86
machine. See
http://marc.theaimsgroup.com/?l=linux-kernel&m=112680448713342&w=2
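A sketch of the resulting layout idea (field list simplified; ____cacheline_aligned_in_smp is the usual way to force the split):

	struct files_struct {
		atomic_t count;		/* read-mostly */
		struct fdtable *fdt;	/* read-mostly, RCU-dereferenced on fd lookup */
		struct fdtable fdtab;
		/* write-hot lock pushed onto its own cache line so lookups
		 * don't bounce the line holding count/fdt */
		spinlock_t file_lock ____cacheline_aligned_in_smp;
		int next_fd;
		/* ... bitmaps and the default fd array ... */
	};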
Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
On Fri, Nov 11, 2005 at 12:58:40PM -0800, David S. Miller wrote:
>
> This change:
>
> diff-tree 8ca2bdc7a9 (from feee207e44d3643d19e648a
> Author: Christoph Hellwig <hch@lst.de>
> Date: Wed Nov 9 12:07:18 2005 -0800
>
> [SPARC] sbus rtc: implement ->compat_ioctl
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Signed-off-by: David S. Miller <davem@davemloft.net>
>
> results in the console now getting spewed on sparc64 systems
> with messages like:
>
> [ 11.968298] ioctl32(hwclock:464): Unknown cmd fd(3) cmd(401c7014){00} arg(efc
> What's happening is hwclock tries first the SBUS rtc device ioctls
> then the normal rtc driver ones.
>
> So things actually worked better when we had the SBUS rtc compat ioctl
> directly handled via the generic compat ioctl code.
>
> There are _so_ many rtc drivers in the kernel implementing the
> generic rtc ioctls that I don't think putting a ->compat_ioctl
> into all of them to fix this problem is feasible. Unless we
> write a single rtc_compat_ioctl(), export it to modules, and hook
> it into all of those somehow.
>
> But even that doesn't appear to have any pretty implementation.
>
> Any better ideas?
We had similar problems with other ioctls where userspace did things
like that. What we did there was to put the compat handler in generic
code. The patch below does that, adding a big comment about what's
going on and removing the COMPAT_IOCTL entries for these on powerpc,
which not only were never useful but would now be duplicated as well.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch adds request_queue->nr_sorted, which keeps the number of
requests in the iosched, and implements elv_drain_elevator, which
performs forced dispatching. elv_drain_elevator checks whether the
iosched actually dispatches all the requests it has and prints an error
message if it doesn't. As buggy forced dispatching can result in
wrong barrier operations, I think this extra check is worthwhile.
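In outline, the drain helper looks something like this (a sketch of the idea rather than the exact code):

	static void elv_drain_elevator(request_queue_t *q)
	{
		/* force the io scheduler to hand back everything it holds */
		while (q->elevator->ops->elevator_dispatch_fn(q, 1))
			;

		/* it claimed to be done, but nr_sorted says otherwise:
		 * its forced-dispatch path is broken */
		if (q->nr_sorted)
			printk(KERN_ERR "%s: forced dispatching is broken (nr_sorted=%u)\n",
			       q->elevator->elevator_type->elevator_name,
			       q->nr_sorted);
	}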
Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: Jens Axboe <axboe@suse.de>
This also introduces a sysctl option to configure the receive buffer
accounting policy to be either at the socket or the association level.
The default is that all the associations on the same socket share the
receive buffer.
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Sridhar Samudrala <sri@us.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
On ia64, it is possible to get a NaT Consumption Fault and a kernel panic
when initializing sctp side-effect command arguments. The union
sctp_arg_t contains different-sized elements, and when loading a smaller
sized element (32 or 16 bits), it is possible for a speculative load to
fail and result in a NaT bit being set, which causes a kernel crash. The
easy way to get around it is to load the largest member of the union.
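An illustration of the idea with hypothetical member and helper names (not the actual sctp_arg_t layout): always move the union through its widest member, so no narrower, possibly speculative load ever touches uninitialized bytes.

	typedef union {
		void *ptr;	/* widest member */
		__u32 u32;
		__u16 u16;
	} sctp_arg_t;

	static inline void sctp_arg_assign(sctp_arg_t *dst, sctp_arg_t src)
	{
		/* one full-width load/store via the widest member instead of a
		 * narrow load of whichever member happens to be live */
		dst->ptr = src.ptr;
	}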
Signed-off-by: Vladislav Yasevich <vladislav.yasevich@hp.com>
Signed-off-by: Sridhar Samudrala <sri@us.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The socket level timeout values are maintained in sctp_sock and
association level timeouts are in sctp_association. So there is
no need for ep->timeouts.
Signed-off-by: Vladislav Yasevich <vladislav.yasevich@hp.com>
Signed-off-by: Sridhar Samudrala <sri@us.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch introduces 4-level page tables to ia64. I have run
some benchmarks and found nothing interesting. Performance has
consistently fallen within the noise range.
It also introduces a config option (with the default set to 3 levels).
The config option prevents having 4-level page tables with a 64k base
page size.
Signed-off-by: Robin Holt <holt@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
This will let me chop the code size of several drivers right down. In
many cases the actual private data is very useful and constant for a
given host controller, so being able to just pass it at probe time would
be very useful indeed (e.g. with the via driver we could pass the UDMA
clocking and reduce the code size, or with the AMD one the UDMA
multiplier and the offset).
Signed-off-by: Alan Cox <alan@redhat.com>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
This patch moves the vdsos to arch/powerpc, adds support for the 32-bit
vdso to the 32-bit kernel, renames systemcfg (finally!), and adds
some new (still untested) routines to both vdsos: clock_gettime() with
support for CLOCK_REALTIME and CLOCK_MONOTONIC, clock_getres() (same
clocks) and get_tbfreq() for glibc to retrieve the timebase frequency.
Tom, Steve: the implementation of get_tbfreq() I've done for 32 bits
returns a long long (r3, r4), not a long. This is so that if we ever
add support for >4GHz timebases on ppc32, the userland interface won't
have to change.
I have tested gettimeofday() using some glibc patches in both ppc32 and
ppc64 kernels using a 32-bit userland (I haven't had a chance to test a
64-bit userland yet, but the implementation didn't change and was
tested earlier). I haven't tested the new functions yet.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Since the udbg code in ppc64 has no ppc32 equivalent, move it straight
over into arch/powerpc (and include/asm-powerpc for udbg.h). In time,
we probably want to meld the various bits and pieces of 32-bit early
debugging code into udbg, but for now only include it on
CONFIG_PPC64=y builds. The only change during the move is to
standardise the protecting #ifdef/#define in udbg.h, and move its
banner comment above the initial #ifdef (which seems to be normal
practice).
Built and booted on POWER5 LPAR (ARCH=powerpc and ARCH=ppc64). Built
for 32bit multiplatform (ARCH=powerpc).
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The definitions in sparsemem.h aren't sufficient. We currently sell
machines with 2TB of RAM, and in order to give us room for a few years'
growth let's set it to 16TB.
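The change amounts to bumping the sparsemem address-space constants (a sketch; the exact names and the section size are assumptions, but 2^44 bytes is 16TB):

	#define SECTION_SIZE_BITS	24	/* 16MB sections */
	#define MAX_PHYSADDR_BITS	44	/* 2^44 = 16TB */
	#define MAX_PHYSMEM_BITS	44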
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Convert to sparsemem and remove all the discontigmem code in the
process. This has a few advantages:
- The old numa_memory_lookup_table can go away
- All the arch specific discontigmem magic can go away
We also remove the triple pass of memory properties and instead create a
list of per node extents that we iterate through. A final cleanup would
be to change our lmb code to store extents per node, then we can reuse
that information in the numa code.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Remove ppc64 specific version of nr_cpus_node and use the generic one
provided.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Remove an unused numa define and move a discontigmem specific define
inside the relevant ifdef.
I will submit a separate patch to remove them from other architectures,
but the ppc64 patches to follow depend on this.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The bit position in the status register corresponding to the
PCI DMA interrupt was incorrect. Additionally, we did not
have a define for the PCI DMA interrupt.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
On Alpha:
include/linux/libata.h: In function `ata_pad_alloc':
include/linux/libata.h:785: warning: implicit declaration of function `dma_alloc_coherent'
include/linux/libata.h:786: warning: assignment makes pointer from integer without a cast
include/linux/libata.h: In function `ata_pad_free':
include/linux/libata.h:792: warning: implicit declaration of function `dma_free_coherent'
(I have a decouple-some-header-files cleanup in -mm, so it's causing some
fallout of this nature)
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
Use "hints" to speed up the SACK processing. Various forms
of this have been used by TCP developers (Web100, STCP, BIC)
to avoid the 2x linear search of outstanding segments.
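A sketch of the kind of state this adds (hypothetical struct and field names): cache where the previous SACK walk stopped so the next SACK can resume there instead of walking the whole retransmit queue again from the head.

	struct tcp_sack_hints {
		struct sk_buff *skb_hint;	/* last skb examined on the fast path */
		u32 pkts_hint;			/* packets already accounted up to skb_hint */
		u32 sacked_upto;		/* highest sequence the previous SACK covered */
	};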
Signed-off-by: Stephen Hemminger <shemminger@osdl.org>
Signed-off-by: David S. Miller <davem@davemloft.net>