Commit Graph

66 Commits

Author SHA1 Message Date
Michael Ellerman
b05fac783a powerpc: Remove orphaned asm implementation of abs()
This has been unused since ~2004, remove it.

Reported-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-04-11 20:30:41 +10:00
Christophe Leroy
737b01fca3 powerpc32: Remove one insn in mulhdu
Remove one instruction in mulhdu

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <oss@buserror.net>
2016-03-11 17:20:12 -06:00
Christophe Leroy
716fa91d19 powerpc32: small optimisation in flush_icache_range()
Inlining the _dcache_range() functions has shown that the compiler
does the same thing slightly better, with one instruction less

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <oss@buserror.net>
2016-03-11 17:20:12 -06:00
Christophe Leroy
affe587bac powerpc32: move xxxxx_dcache_range() functions inline
flush/clean/invalidate _dcache_range() functions are all very
similar and are quite short. They are mainly used in __dma_sync();
perf_event places them in the top 3 consuming functions during
heavy ethernet activity.

They are good candidates for inlining, as __dma_sync() does
almost nothing but call them
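
As a rough illustration only (not the patch itself; the loop over
L1_CACHE_BYTES-sized lines is just the usual pattern), an inline
flush_dcache_range() on PPC32 could look roughly like this:

static inline void flush_dcache_range(unsigned long start, unsigned long stop)
{
	unsigned long addr;

	/* write back and invalidate every cache line touching [start, stop) */
	for (addr = start & ~(L1_CACHE_BYTES - 1); addr < stop; addr += L1_CACHE_BYTES)
		asm volatile("dcbf 0, %0" : : "r" (addr) : "memory");
	asm volatile("sync" : : : "memory");	/* order the flushes */
}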

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <oss@buserror.net>
2016-03-11 17:20:12 -06:00
Christophe Leroy
5736f96d12 powerpc32: Remove clear_pages() and define clear_page() inline
clear_pages() is never used except by clear_page(), and PPC32 is the
only architecture (still) having this function. Neither PPC64 nor
any other architecture has it.

This patch removes clear_pages() and moves the clear_page() function
inline (as on PPC64), as it is only a few instructions
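
A hedged sketch of what such an inline clear_page() could look like
(not the exact patch; PAGE_SIZE and L1_CACHE_BYTES are the usual
definitions, and a cacheable mapping is assumed so dcbz is safe):

static inline void clear_page(void *addr)
{
	unsigned int i;

	/* dcbz zeroes a whole data-cache line, so one loop clears the page */
	for (i = 0; i < PAGE_SIZE / L1_CACHE_BYTES; i++, addr += L1_CACHE_BYTES)
		asm volatile("dcbz 0, %0" : : "r" (addr) : "memory");
}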

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <oss@buserror.net>
2016-03-11 17:20:11 -06:00
Christophe Leroy
766d45cbee powerpc/8xx: rewrite flush_instruction_cache() in C
On PPC8xx, flushing the instruction cache is performed by writing
to the SPRN_IC_CST register. This register suffers from the CPU6
errata. This patch rewrites the function in C so that the CPU6 errata
workaround is handled transparently
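
A minimal sketch of the idea (not necessarily the exact patch;
SPRN_IC_CST and IDC_INVALL are the usual 8xx definitions, and doing
this from C is what lets the mtspr() errata handling apply):

void flush_instruction_cache(void)
{
	isync();
	mtspr(SPRN_IC_CST, IDC_INVALL);	/* invalidate the whole icache */
	isync();
}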

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <oss@buserror.net>
2016-03-11 17:20:11 -06:00
Alistair Popple
4450022b49 powerpc/476fpe: Add support for kexec
PPC476FPE has a different PVR from previous PPC476 processors. The
kexec code checks the PVR in order to correctly set up the MMU. When
the initial support for 476FPE processors was added, the corresponding
change in the kexec code was missed. This patch simply adds the check
and solves the following bug on kexec:

kexec: Starting new kernel
Bye!
Unable to handle kernel paging request for instruction fetch
Faulting instruction address: 0xee9a50f8
cpu 0x0: Vector: 400 (Instruction Access) at [ee9d7d20]
    pc: ee9a50f8
    lr: ee9a50e4
    sp: ee9d7dd0
    msr: 21020
    current = 0xee40f000
    pid   = 960, comm = kexec
enter ? for help
[link register   ] ee9a50e4
[ee9d7dd0] c0013748 default_machine_kexec+0x58/0x70 (unreliable)
[ee9d7df0] c0012f04 machine_kexec+0x34/0x40
[ee9d7e00] c00aa1ec kernel_kexec+0x9c/0xb0
[ee9d7e20] c005d704 SyS_reboot+0x1f4/0x220
[ee9d7f40] c000db68 ret_from_syscall+0x0/0x3c

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2015-12-17 22:41:00 +11:00
Peter Zijlstra
de9e432cb5 atomic: Collapse all atomic_{set,clear}_mask definitions
Move the now generic definitions of atomic_{set,clear}_mask() into
linux/atomic.h to avoid endless and pointless repetition.

Also, provide an atomic_andnot() wrapper for those few archs that can
implement that.
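
One plausible shape of such a wrapper, as a sketch only (assuming the
generic atomic_and() from the same series; not lifted from the patch):

static inline void atomic_andnot(int i, atomic_t *v)
{
	/* archs without a native and-not primitive can fall back to and */
	atomic_and(~i, v);
}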

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-07-27 14:06:24 +02:00
Kevin Hao
1a18a66446 powerpc: Set the correct ksp_limit on ppc32 when switching to irq stack
Guenter Roeck has got the following call trace on a p2020 board:
  Kernel stack overflow in process eb3e5a00, r1=eb79df90
  CPU: 0 PID: 2838 Comm: ssh Not tainted 3.13.0-rc8-juniper-00146-g19eca00 #4
  task: eb3e5a00 ti: c0616000 task.ti: ef440000
  NIP: c003a420 LR: c003a410 CTR: c0017518
  REGS: eb79dee0 TRAP: 0901   Not tainted (3.13.0-rc8-juniper-00146-g19eca00)
  MSR: 00029000 <CE,EE,ME>  CR: 24008444  XER: 00000000
  GPR00: c003a410 eb79df90 eb3e5a00 00000000 eb05d900 00000001 65d87646 00000000
  GPR08: 00000000 020b8000 00000000 00000000 44008442
  NIP [c003a420] __do_softirq+0x94/0x1ec
  LR [c003a410] __do_softirq+0x84/0x1ec
  Call Trace:
  [eb79df90] [c003a410] __do_softirq+0x84/0x1ec (unreliable)
  [eb79dfe0] [c003a970] irq_exit+0xbc/0xc8
  [eb79dff0] [c000cc1c] call_do_irq+0x24/0x3c
  [ef441f20] [c00046a8] do_IRQ+0x8c/0xf8
  [ef441f40] [c000e7f4] ret_from_except+0x0/0x18
  --- Exception: 501 at 0xfcda524
      LR = 0x10024900
  Instruction dump:
  7c781b78 3b40000a 3a73b040 543c0024 3a800000 3b3913a0 7ef5bb78 48201bf9
  5463103a 7d3b182e 7e89b92e 7c008146 <3ba00000> 7e7e9b78 48000014 57fff87f
  Kernel panic - not syncing: kernel stack overflow
  CPU: 0 PID: 2838 Comm: ssh Not tainted 3.13.0-rc8-juniper-00146-g19eca00 #4
  Call Trace:

The reason is that we have used the wrong register to calculate the
ksp_limit in commit cbc9565ee8 (powerpc: Remove ksp_limit on ppc64).
Just fix it.

As suggested by Benjamin Herrenschmidt, also add the C prototype of the
function in a comment, in order to avoid this kind of error in the
future.

Cc: stable@vger.kernel.org # 3.12
Reported-by: Guenter Roeck <linux@roeck-us.net>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Kevin Hao <haokexin@gmail.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-02-17 11:19:34 +11:00
Kevin Hao
0ce636700c powerpc: purge all the prefetched instructions for the coherent icache flush
As Benjamin Herrenschmidt has indicated, we still need a dummy icbi to
purge all the prefetched instructions from the ifetch buffers for the
snooping icache. We also need a sync before the icbi, to order the icbi
after the actual stores to memory that might have modified the
instructions.
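
A hedged sketch of the ordering being described (the actual change is
to the asm flush routine; the helper name here is hypothetical):

static inline void purge_prefetched_insns(const void *addr)
{
	asm volatile("sync" : : : "memory");		/* order the stores that modified code */
	asm volatile("icbi 0, %0" : : "r" (addr));	/* dummy icbi drops prefetched insns */
	asm volatile("sync; isync" : : : "memory");	/* wait, then resync the pipeline */
}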

Signed-off-by: Kevin Hao <haokexin@gmail.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-12-02 14:13:47 +11:00
Bharat Bhushan
41b93b238a powerpc: Added __cmpdi2 for signed 64-bit comparison
This was missing on powerpc and I am getting a compilation error:
drivers/vfio/pci/vfio_pci_rdwr.c:193: undefined reference to `__cmpdi2'
drivers/vfio/pci/vfio_pci_rdwr.c:193: undefined reference to `__cmpdi2'
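
For reference, a hedged C sketch of the libgcc-style semantics the
helper has to provide (the patch adds its own implementation): return
0, 1 or 2 for a < b, a == b and a > b respectively.

int __cmpdi2(long long a, long long b)
{
	if (a < b)
		return 0;
	if (a > b)
		return 2;
	return 1;
}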

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-10-11 16:49:27 +11:00
Benjamin Herrenschmidt
cbc9565ee8 powerpc: Remove ksp_limit on ppc64
We've been keeping that field in thread_struct for a while, it contains
the "limit" of the current stack pointer and is meant to be used for
detecting stack overflows.

It has a few problems however:

 - First, it was never actually *used* on 64-bit. Set and updated but
not actually exploited

 - When switching stack to/from the irq and softirq stacks, its update
is racy unless we hard-disable interrupts, which is costly. This
is fine on 32-bit as we don't soft-disable there, but not on 64-bit.

Thus, rather than fixing the second problem in order to implement the
first in some hypothetical future, let's remove the code completely from
64-bit. In order to avoid a clutter of ifdefs, we remove the updates
from C code completely during interrupt stack switching, and instead
maintain the limit from the asm helper that is used to do the stack
switching in the first place.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-09-25 14:15:51 +10:00
Benjamin Herrenschmidt
0366a1c70b powerpc/irq: Run softirqs off the top of the irq stack
Nowadays, irq_exit() calls __do_softirq() pretty much directly
instead of calling do_softirq(), which switches to the dedicated
softirq stack.

This has led to observed stack overflows on powerpc, since we call
irq_enter() and irq_exit() outside of the scope that switches to
the irq stack.

This fixes it by moving the stack switching up a level, making
irq_enter() and irq_exit() run off the irq stack.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-09-25 14:15:36 +10:00
Kevin Hao
3b04c30007 powerpc: Remove the symbol __flush_icache_range
And now the function flush_icache_range() is just a wrapper which only
invokes the function __flush_icache_range() directly. So we have no
reason to keep the separate symbol anymore.

Signed-off-by: Kevin Hao <haokexin@gmail.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-08-14 14:56:44 +10:00
Kevin Hao
abb29c3bb1 powerpc: Move the testing of CPU_FTR_COHERENT_ICACHE into __flush_icache_range
In the function flush_icache_range(), we use cpu_has_feature() to test
the CPU_FTR_COHERENT_ICACHE feature bit. But this seems suboptimal for
two reasons:
 a) For ppc32, the function __flush_icache_range() already does this
    check with the macro END_FTR_SECTION_IFSET.
 b) Compared with cpu_has_feature(), using the macro
    END_FTR_SECTION_IFSET does not introduce any runtime overhead.

[And while at it, add the missing required isync] -- BenH

Signed-off-by: Kevin Hao <haokexin@gmail.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-08-14 14:56:06 +10:00
David Woodhouse
ca9d7aea59 powerpc: Provide __bswapdi2
Some versions of GCC apparently expect this to be provided by libgcc.

Updates from Mikey to fix the 32-bit version and add "r" to the registers.
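
A hedged C equivalent of what the helper has to do (the patch provides
asm implementations): byte-swap a 64-bit value.

unsigned long long __bswapdi2(unsigned long long x)
{
	return ((x & 0x00000000000000ffULL) << 56) |
	       ((x & 0x000000000000ff00ULL) << 40) |
	       ((x & 0x0000000000ff0000ULL) << 24) |
	       ((x & 0x00000000ff000000ULL) <<  8) |
	       ((x & 0x000000ff00000000ULL) >>  8) |
	       ((x & 0x0000ff0000000000ULL) >> 24) |
	       ((x & 0x00ff000000000000ULL) >> 40) |
	       ((x & 0xff00000000000000ULL) >> 56);
}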

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-05-14 16:00:17 +10:00
Al Viro
58254e1002 powerpc: split ret_from_fork
... and get rid of in-kernel syscalls in kernel_thread()

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2012-09-30 23:31:19 -04:00
Stuart Yoder
9778b696a0 powerpc: Use CURRENT_THREAD_INFO instead of open coded assembly
Signed-off-by: Stuart Yoder <stuart.yoder@freescale.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-07-11 14:18:22 +10:00
Suzuki Poulose
6834302003 powerpc/47x: Kernel support for KEXEC
This patch adds support for creating 1:1 mapping for the PPC_47x during
a KEXEC. The implementation is similar to that of the PPC440x which is
described here :

	http://patchwork.ozlabs.org/patch/104323/

PPC_47x MMU :

The 47x uses a unified TLB of 1024 entries with 4-way associative
mapping (4 x 256 entries). The index to be used is calculated by the
MMU by hashing the PID, EPN and TS. The software can choose to specify
the way by setting bit 0 (enable way select) and the way in bits 1-2 of
TLB Word 0.

Implementation:

The patch erases all the UTLB entries, which includes the TLB entry
covering the mapping for our code. The shadow TLB caches the mapping
for the running code, which lets us continue execution until we do an
isync/rfi. We then create a temporary mapping for the current code in
the other address space (TS) and switch to it.

Then we create a 1:1 mapping(EPN=RPN) for 0-2GiB in the original
address space and switch to the new mapping.

TODO: Add SMP support.

Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
Signed-off-by: Josh Boyer <jwboyer@gmail.com>
2012-05-03 08:40:23 -04:00
Suzuki Poulose
f13bfcc696 powerpc/44x: Fix/Initialize PID to kernel PID before the TLB search
Initialize the PID register with kernel pid (0) before we start
setting the TLB mapping for KEXEC. Also set the MMUCR[TID] to kernel
PID.

This was spotted while testing kexec on ISS for 47x. ISS doesn't
return a successful tlbsx for a kernel address with the PID set to a
user PID, even though the hardware/QEMU/Simics work fine.

This patch is harmless and initializes the PID to 0 (kernel PID), which
is usually the case during a normal kernel boot. This would fix kexec
on ISS for 440. I have tested this patch on a Sequoia board.

Signed-off-by: Suzuki K Poulose <suzuki@in.ibm.com>
Cc: Josh Boyer <jwboyer@gmail.com>
Signed-off-by: Josh Boyer <jwboyer@gmail.com>
2012-05-03 08:37:36 -04:00
Suzuki Poulose
bbc24a25e2 powerpc/4xx: Fix typos in kexec config dependencies
Kexec is not supported on 47x. 47x is a variant of 44x with slightly
different MMU and SMP support. There was a typo in the config dependency
for kexec; this patch fixes it.

Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Cc:	Kumar Gala <galak@kernel.crashing.org>
Cc:	Josh Boyer <jwboyer@gmail.com>
Cc:	linux ppc dev <linuxppc-dev@lists.ozlabs.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2011-11-16 14:47:54 +11:00
Suzuki Poulose
674bfa4855 powerpc/44x: Kexec support for PPC440X chipsets
This patch adds kexec support for PPC440 based chipsets.  This work is based
on the KEXEC patches for FSL BookE.

The FSL BookE patch and the code flow can be found at the link below:

	http://patchwork.ozlabs.org/patch/49359/

Steps:

1) Invalidate all the TLB entries except the one this code is run from
2) Create a tmp mapping for our code in the other address space and jump to it
3) Invalidate the entry we used
4) Create a 1:1 mapping for 0-2GiB in blocks of 256M
5) Jump to the new 1:1 mapping and invalidate the tmp mapping

I have tested these patches on Ebony and Sequoia boards, and on Virtex
under QEMU.

You need kexec-tools commit e8b7939b1e or newer for ppc440x support, 
available at:

 git://git.kernel.org/pub/scm/utils/kernel/kexec/kexec-tools.git

Signed-off-by: 	Suzuki Poulose <suzuki@in.ibm.com>
Cc:	Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Josh Boyer <jwboyer@gmail.com>
2011-08-11 13:50:37 -04:00
Josh Boyer
6de06f313a powerpc: Fix 32-bit SMP build
Commit 69e3cea8d5 ("powerpc/smp: Make start_secondary_resume
available to all CPU variants") introduced start_secondary_resume to
misc_32.S; however, it uses a 64-bit instruction which is not valid on
32-bit platforms.  Use 'stw' instead.

Reported-by: Richard Cochran <richardcochran@gmail.com>
Tested-by: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-05-20 16:23:19 -07:00
Benjamin Herrenschmidt
69e3cea8d5 powerpc/smp: Make start_secondary_resume available to all CPU variants
This should fix SMP & Hotplug builds on FSL BookE and 476

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2011-05-19 13:07:12 +10:00
Stephen Rothwell
46f5221049 powerpc: Remove second definition of STACK_FRAME_OVERHEAD
Since STACK_FRAME_OVERHEAD is defined in asm/ptrace.h and that header
is assembler-safe, we can just include it instead of going via
asm-offsets.h.

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2010-11-29 15:48:23 +11:00
Matthew McClintock
4562c986f0 powerpc/kexec: Adds correct calling convention for kexec purgatory
Call kexec purgatory code correctly. We were getting lucky before.
If you examine the powerpc 32-bit kexec "purgatory" code you will
see it expects the following:

From kexec-tools: purgatory/arch/ppc/v2wrap_32.S
-> calling convention:
->   r3 = physical number of this cpu (all cpus)
->   r4 = address of this chunk (master only)

As such, we need to set r3 to the current core. r4 happens to be
unused by purgatory at the moment, but we go ahead and set it
here as well.

Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2010-08-31 11:35:12 +10:00
Christoph Hellwig
f1ba9a5b2a powerpc: Unconditionally enable irq stacks
Irq stacks provide essential protection from stack overflows through
external interrupts, at the cost of two additional stacks per CPU.

Enable them unconditionally to simplify the kernel build and prevent
people from accidentally disabling them.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2010-06-15 15:02:37 +10:00
Sebastian Andrzej Siewior
b3df895aeb powerpc/kexec: Add support for FSL-BookE
This adds support for kexec on FSL BookE, where the MMU cannot simply be
switched off. The code borrows the initial MMU-setup code to create the
identity mapping. The only difference from the original boot code is the
size of the mapping(s) and the executable address.
The kexec code maps the first 2 GiB of memory in 256 MiB steps. This
should also work on e500v1 boxes.
SMP support is still not available.

(Kumar: Added minor change to build to ifdef CONFIG_PPC_STD_MMU_64 some
code that was PPC64 specific)

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2010-05-24 21:25:32 -05:00
Dave Kleikamp
e7f75ad01d powerpc/47x: Base ppc476 support
This patch adds the base support for the 476 processor.  The code was
primarily written by Ben Herrenschmidt and Torez Smith, but I've been
maintaining it for a while.

The goal is to have a single binary that will run on 44x and 47x, but
we still have some details to work out.  The biggest is that the L1 cache
line size differs on the two platforms, but it's currently a compile-time
option.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Torez Smith  <lnxtorez@linux.vnet.ibm.com>
Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
2010-05-05 09:11:10 -04:00
Joakim Tjernlund
15d914d72a powerpc/8xx: Start using dcbX instructions in various copy routines
Now that the 8xx can fix up dcbX instructions, start using them
where possible, like every other PowerPC arch does.

Signed-off-by: Joakim Tjernlund <Joakim.Tjernlund@transmode.se>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-12-09 17:10:37 +11:00
Josh Boyer
14d757520a powerpc: Fix __flush_icache_range on 44x
The ptrace POKETEXT interface allows a process to modify the text pages of
a child process being ptraced, usually to insert breakpoints via trap
instructions.  The kernel eventually calls copy_to_user_page, which in turn
calls __flush_icache_range to invalidate the icache lines for the child
process.

However, this function does not work on 44x due to the icache being virtually
indexed.  This was noticed by a breakpoint being triggered after it had been
cleared by ltrace on a 440EPx board.  The convenient solution is to do a
flash invalidate of the icache in the __flush_icache_range function.

Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-08-27 13:12:52 +10:00
Ilya Yanok
ca9153a3a2 powerpc/44x: Support 16K/64K base page sizes on 44x
This adds support for 16k and 64k page sizes on PowerPC 44x processors.

The PGDIR table is much smaller than a page when using 16k or 64k
pages (512 and 32 bytes respectively) so we allocate the PGDIR with
kzalloc() instead of __get_free_pages().
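
A hedged sketch of that allocation change (PGD_TABLE_SIZE and the
function name are assumptions, not lifted from the patch):

static pgd_t *pgd_alloc_sketch(struct mm_struct *mm)
{
	/* the PGD is only 512 or 32 bytes here, so use the slab allocator
	 * rather than __get_free_pages() */
	return kzalloc(PGD_TABLE_SIZE, GFP_KERNEL);
}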

One PTE table covers a rather large memory area when using 16k or 64k
pages (32MB or 512MB respectively), so we can easily put FIXMAP and
PKMAP in the area covered by one PTE table.

Signed-off-by: Yuri Tikhonov <yur@emcraft.com>
Signed-off-by: Vladimir Panfilov <pvr@emcraft.com>
Signed-off-by: Ilya Yanok <yanok@emcraft.com>
Acked-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-12-29 09:53:25 +11:00
Benjamin Herrenschmidt
2a4aca1144 powerpc/mm: Split low level tlb invalidate for nohash processors
Currently, the various forms of low level TLB invalidations are all
implemented in misc_32.S for 32-bit processors, in a fairly scary
mess of #ifdef's and with interesting duplication such as a whole
bunch of code for FSL _tlbie and _tlbia which are no longer used.

This moves things around such that _tlbie is now defined in
hash_low_32.S and is only used by the 32-bit hash code, and all
nohash CPUs use the various _tlbil_* forms that are now moved to
a new file, tlb_nohash_low.S.

I moved all the definitions for that stuff out of
include/asm/tlbflush.h and into mm/mmu_decl.h, as they are really
internal mm stuff.

The code should have no functional changes.  I kept some variants
inline for trivial forms on things like 40x and 8xx.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-12-21 14:21:16 +11:00
Benjamin Herrenschmidt
f048aace29 powerpc/mm: Add SMP support to no-hash TLB handling
This commit moves the whole no-hash TLB handling out of line into a
new tlb_nohash.c file, and implements some basic SMP support using
IPIs and/or broadcast tlbivax instructions.

Note that I'm using local invalidations for D->I cache coherency.

At worst, if another processor is trying to execute the same code and
has the old entry in its TLB, it will just take a fault and re-do
the TLB flush locally (it won't re-do the cache flush in any case).

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-12-21 14:21:16 +11:00
Dave Liu
28707af01b powerpc/fsl-booke: Fix the miss interrupt restore
Commit e5e774d883 ("powerpc/fsl-booke: Fix problem with _tlbil_va being
interrupted") introduced an issue that causes a problem like this:

Kernel BUG at c00b19fc [verbose debug info unavailable]
Oops: Exception in kernel mode, sig: 5 [#1]
MPC8572 DS
Modules linked in:
NIP: c00b19fc LR: c00b1c34 CTR: c0064e88
REGS: ef02b7b0 TRAP: 0700   Not tainted  (2.6.28-rc8-00057-g1bda712)
MSR: 00021000 <ME>  CR: 44048028  XER: 20000000
TASK = ef02c000[1] 'init' THREAD: ef02a000
GPR00: 00000001 ef02b860 ef02c000 eec201a0 c0dec2c0 00000000 000078a1 00000400
GPR08: c00b4e40 000078a1 c048ec00 a1780000 44048028 ecd26917 00000001 ef02b948
GPR16: ffffffea 0000020c 00000000 00000000 00000003 0000000a 00000000 000078a1
GPR24: eec201a0 00000000 ed849000 00000400 ef02b95c 00000001 ef02b978 ef02b984
NIP [c00b19fc] __find_get_block+0x24/0x238
LR [c00b1c34] __getblk+0x24/0x2a0
Call Trace:
[ef02b860] [c017b768] generic_make_request+0x290/0x328 (unreliable)
[ef02b8b0] [c00b1c34] __getblk+0x24/0x2a0
[ef02b910] [c00b4ae4] __bread+0x14/0xf8
[ef02b920] [c00fc228] ext2_get_branch+0xf0/0x138
[ef02b940] [c00fcc88] ext2_get_block+0xb8/0x828
[ef02ba00] [c00bbdc8] do_mpage_readpage+0x188/0x808
[ef02bac0] [c00bc5b4] mpage_readpages+0xec/0x144
[ef02bb50] [c00fba38] ext2_readpages+0x24/0x34
[ef02bb60] [c006ade0] __do_page_cache_readahead+0x150/0x230
[ef02bbb0] [c0064bdc] filemap_fault+0x31c/0x3e0
[ef02bbf0] [c00728b8] __do_fault+0x60/0x5b0
[ef02bc50] [c0011e0c] do_page_fault+0x2d8/0x4c4
[ef02bd10] [c000ed90] handle_page_fault+0xc/0x80
[ef02bdd0] [c00c7adc] set_brk+0x74/0x9c
[ef02bdf0] [c00c9274] load_elf_binary+0x70c/0x1180
[ef02be70] [c00945f0] search_binary_handler+0xa8/0x274
[ef02bea0] [c0095818] do_execve+0x19c/0x1d4
[ef02bed0] [c000766c] sys_execve+0x58/0x84
[ef02bef0] [c000e950] ret_from_syscall+0x0/0x3c
[ef02bfb0] [c009c6fc] sys_dup+0x24/0x6c
[ef02bfc0] [c0001e04] init_post+0xb0/0xf0
[ef02bfd0] [c046c1ac] kernel_init+0xcc/0xf4
[ef02bff0] [c000e6d0] kernel_thread+0x4c/0x68
Instruction dump:
4bffffa4 813f000c 4bffffac 9421ffb0 7c0802a6 7d800026 90010054 bf210034
91810030 7c0000a6 68008000 54008ffe <0f000000> 3d20c04e 3b29ffb8 38000008

The issue was that the beqlr returns early, before we have re-enabled
interrupts.

Signed-off-by: Dave Liu <daveliu@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-12-17 10:06:13 -06:00
Kumar Gala
e5e774d883 powerpc/fsl-booke: Fix problem with _tlbil_va being interrupted
An example calling sequence which we did see:

copy_user_highpage -> kmap_atomic -> flush_tlb_page -> _tlbil_va

We got interrupted after setting up the MAS registers but before the
tlbwe. The interrupt handler that caused the interrupt also did a
kmap_atomic (IDE code), and thus on returning from the interrupt the
MAS registers no longer contained the proper values.

Since we don't save/restore the MAS registers for normal interrupts, we
need to disable interrupts in _tlbil_va to ensure atomicity.
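
A hedged C sketch of the shape of the fix (the real change is in the
_tlbil_va asm; the function name here is hypothetical):

static void tlbil_va_sketch(unsigned long address, unsigned long pid)
{
	unsigned long msr = mfmsr();

	/* hard-disable so nothing can clobber MAS between setup and tlbwe */
	mtmsr(msr & ~MSR_EE);
	/* ... set MAS6 from pid, tlbsx on address, clear MAS1 valid, tlbwe ... */
	mtmsr(msr);	/* restore the previous MSR */
}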

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-12-13 17:02:47 -06:00
Kumar Gala
b41d6fee37 powerpc/fsl-booke: Fix synchronization bug w/local tlb invalidates
The implementation of _tlbil_pid() on Freescale Book-E cores needs
an msync & isync after we flash-invalidate the TLBs.  The missing
synchronization was causing the following oops reported by Sebastian
Andrzej Siewior:

  VFS: Mounted root (nfs filesystem) readonly.
  Freeing unused kernel memory: 148k init
  BUG: sleeping function called from invalid context at /home/bigeasy/git/linux-2.6-powerpc/mm/mmap.c:234
  in_atomic():1, irqs_disabled():0
  Call Trace:
  [df189df0] [c0007160] show_stack+0x48/0x148 (unreliable)
  [df189e30] [c0029480] __might_sleep+0xf0/0x100
  [df189e40] [c0070ac0] remove_vma+0x28/0x98
  [df189e50] [c0070c1c] exit_mmap+0xec/0x128
  [df189e80] [c002d2f4] mmput+0x54/0xec
  [df189ea0] [c0030b6c] exit_mm+0x10c/0x120
  [df189ed0] [c003288c] do_exit+0x1ac/0x6e8
  [df189f20] [c0032e48] do_group_exit+0x80/0xac
  [df189f40] [c000e9dc] ret_from_syscall+0x0/0x3c
  BUG: scheduling while atomic: udevd/956/0x10000002
  Modules linked in:
  Call Trace:
  [df189df0] [c0007160] show_stack+0x48/0x148 (unreliable)
  [df189e30] [c002ac88] __schedule_bug+0x58/0x6c
  [df189e40] [c023e6cc] schedule+0xa8/0x4a8
  [df189e90] [c002ad6c] __cond_resched+0x38/0x64
  [df189ea0] [c023ebc8] _cond_resched+0x3c/0x58
  [df189eb0] [c0030e70] put_files_struct+0x90/0xec
  [df189ed0] [c00328a8] do_exit+0x1c8/0x6e8
  [df189f20] [c0032e48] do_group_exit+0x80/0xac
  [df189f40] [c000e9dc] ret_from_syscall+0x0/0x3c

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-11-08 12:38:55 -06:00
Josh Poimboeuf
41c2e949cb powerpc: Fix error path in kernel_thread function
The powerpc 32-bit and 64-bit kernel_thread functions don't properly
propagate errors being returned by the clone syscall.  (In the case of
error, the syscall exit code returns a positive errno in r3 and sets
the CR0[SO] bit.)

This patch fixes that by negating r3 if CR0[SO] is set after the syscall.

Signed-off-by: Josh Poimboeuf <jpoimboe@us.ibm.com>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-10-10 15:55:18 +11:00
Kumar Gala
0ba3418b8b powerpc: Introduce local (non-broadcast) forms of tlb invalidates
Introduced a new set of low level tlb invalidate functions that do not
broadcast invalidates on the bus:

_tlbil_all - invalidate all
_tlbil_pid - invalidate based on process id (or mm context)
_tlbil_va  - invalidate based on virtual address (ea + pid)

On non-SMP configs _tlbil_all should be functionally equivalent to _tlbia and
_tlbil_va should be functionally equivalent to _tlbie.

The intent of this change is to handle SMP based invalidates via IPIs instead
of broadcasts as the mechanism scales better for larger number of cores.

On e500 (FSL BookE MMU) based cores, move to using MMUCSR for
invalidate-all and tlbsx/tlbwe for invalidating a virtual address.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-09-24 16:29:40 -05:00
Paul Collins
d9178f4c14 powerpc/kexec: Fix up KEXEC_CONTROL_CODE_SIZE missed during conversion
Commit 163f6876f5 missed one, resulting in
the following compile error:

  AS      arch/powerpc/kernel/misc_32.o
arch/powerpc/kernel/misc_32.S: Assembler messages:
arch/powerpc/kernel/misc_32.S:902: Error: unsupported relocation against KEXEC_CONTROL_CODE_SIZE
make[2]: *** [arch/powerpc/kernel/misc_32.o] Error 1
make[1]: *** [arch/powerpc/kernel] Error 2
make: *** [vmlinux] Error 2

I grepped arch/ and found no further instances.

Signed-off-by: Paul Collins <paul@ondioline.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-08-18 14:22:35 +10:00
Kumar Gala
b76e59d1fb powerpc/kprobes: Some minor fixes
* Mark __flush_icache_range as a function that can't be probed, since
  it's used by the kprobe code.

* Fix an issue with single stepping and async exceptions.  We need to
  ensure that we don't get an async exception (external, decrementer,
  etc.) while we are attempting to single-step the probe point.

  Added a check to ensure we only handle a single step if it's really
  intended for the instruction in question.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-06-26 03:35:33 -05:00
Kumar Gala
85218827cc [POWERPC] Add IRQSTACKS support on ppc32
This makes it possible to use separate stacks for hard and soft IRQs
on 32-bit powerpc as well as on 64-bit.  The code for 32-bit is just
the 32-bit analog of the 64-bit code.

* Added allocation and initialization of the irq stacks.  We limit the
  stacks to be in lowmem for ppc32.
* Implemented ppc32 versions of call_do_softirq() and call_handle_irq()
  to switch the stack pointers
* Reworked how we do stack overflow detection.  We now keep around the
  limit of the stack in the thread_struct and compare against the limit
  to see if we've overflowed.  We can now use this on ppc64 if desired.

[ paulus@samba.org: Fixed bug on 6xx where we need to reload r9 with the
  thread_info pointer. ]
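
A rough sketch of the overflow-check idea only (the actual check is
done at a lower level when the stack pointer is updated; the helper
name is hypothetical):

static inline void check_stack_limit_sketch(unsigned long new_sp)
{
	/* ksp_limit records where the current kernel stack ends */
	if (unlikely(new_sp < current->thread.ksp_limit))
		panic("kernel stack overflow");
}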

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-04-29 15:57:34 +10:00
Kumar Gala
f608600e74 [POWERPC] Clean up access to thread_info in assembly
Use (31-THREAD_SHIFT) to get to thread_info from the stack pointer.  This
makes the code a bit easier to read and more robust if we ever change
THREAD_SHIFT.
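
A hedged C sketch of the idea behind the asm macro (the helper name is
hypothetical): thread_info sits at the base of the THREAD_SIZE-aligned
kernel stack, so clearing the low THREAD_SHIFT bits of the stack pointer
finds it, which in asm is roughly "rlwinm rX, r1, 0, 0, 31-THREAD_SHIFT".

static inline struct thread_info *thread_info_from_sp(unsigned long sp)
{
	return (struct thread_info *)(sp & ~((1UL << THREAD_SHIFT) - 1));
}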

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-04-24 20:58:02 +10:00
Paul Mackerras
95ff54f517 [POWERPC] Add __ucmpdi2 for 64-bit comparisons in 32-bit kernels
Some drivers (such as V4L2) have code that causes gcc to generate
calls to __ucmpdi2 when compiling for 32-bit powerpc, which results
in either a link-time error or a module that can't be loaded, as
we don't currently have a __ucmpdi2.  This adds one so these drivers
can be used.
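
For reference, a hedged C sketch of the semantics __ucmpdi2 must provide
(the kernel adds its own implementation): an unsigned 64-bit compare
returning 0, 1 or 2 for a < b, a == b and a > b.

int __ucmpdi2(unsigned long long a, unsigned long long b)
{
	if (a < b)
		return 0;
	if (a > b)
		return 2;
	return 1;
}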

Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-03-13 09:39:55 +11:00
Kumar Gala
a6f7174596 [POWERPC] 85xx: Only invalidate TLB0 and TLB1
All current 85xx/e500 implementations only have two TLB
arrays.  We are wasting cycles by invalidating TLB2 and TLB3.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-01-28 13:23:42 -06:00
Benjamin Herrenschmidt
9dae8afdf2 [POWERPC] 4xx: Add early udbg support for 40x processors
This adds some basic real-mode-based early udbg support for 40x,
in order to debug things more easily.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
2007-12-23 13:13:03 -06:00
Stephen Rothwell
94b146ceee [POWERPC] kernel_execve is identical in 32 and 64 bit
so consolidate it into misc.S.

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-12-11 13:34:39 +11:00
Benjamin Herrenschmidt
b98ac05d5e [POWERPC] 4xx: Deal with 44x virtually tagged icache
The 44x family has an interesting "feature" which is a virtually
tagged instruction cache (yuck !). So far, we haven't dealt with
it properly, which means we've been mostly lucky or people didn't
report the problems, unless people have been running custom patches
in their distro...

This is an attempt at fixing it properly. I chose to do it by
setting a global flag whenever we change a PTE that was previously
marked executable, and flush the entire instruction cache upon
return to user space when that happens.

This is a bit heavy-handed, but it's hard to do more fine-grained
flushes as the icbi instruction, on those processors, for some very
strange reason (since the cache is virtually mapped) still requires
a valid TLB entry for reading in the target address space, which
isn't something I want to deal with.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
2007-11-01 07:15:30 -05:00
Benjamin Herrenschmidt
e701d269aa [POWERPC] 4xx: Fix 4xx flush_tlb_page()
On 4xx CPUs, the current implementation of flush_tlb_page() uses
a low level _tlbie() assembly function that only works for the
current PID. Thus, invalidations caused by, for example, a COW
fault triggered by get_user_pages() from a different context will
not work properly, causing, among other things, gdb breakpoints
to fail.

This patch adds a "pid" argument to _tlbie() on 4xx processors,
and uses it to flush entries in the right context. FSL BookE
also gets the argument but it seems they don't need it (their
tlbivax form ignores the PID when invalidating according to the
document I have).

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
2007-11-01 07:15:09 -05:00
David Gibson
aa1cf632bd [POWERPC] Fix small race in 44x tlbie function
The 440 family of processors doesn't have a tlbie instruction.  So, we
implement TLB invalidates by explicitly searching the TLB with tlbsx.,
then clobbering the relevant entry, if any.  Unfortunately the PID for
the search needs to be stored in the MMUCR register, which is also
used by the TLB miss handler.  Interrupts were enabled in _tlbie(), so
an interrupt between loading the MMUCR and the tlbsx could cause
incorrect search results, and thus a failure to invalidate TLB entries
which needed to be invalidated.

This fixes the problem in both arch/ppc and arch/powerpc by inhibiting
interrupts (even critical and debug interrupts) across the relevant
instructions.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Acked-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-08-15 15:12:50 +10:00