kernel/kpti-fix.patch

From 52994c256df36fda9a715697431cba9daecb6b11 Mon Sep 17 00:00:00 2001
From: Thomas Gleixner <tglx@linutronix.de>
Date: Wed, 3 Jan 2018 15:57:59 +0100
Subject: x86/pti: Make sure the user/kernel PTEs match

Meelis reported that his K8 Athlon64 emits MCE warnings when PTI is
enabled:

[Hardware Error]: Error Addr: 0x0000ffff81e000e0
[Hardware Error]: MC1 Error: L1 TLB multimatch.
[Hardware Error]: cache level: L1, tx: INSN

The address is in the entry area, which is mapped into kernel _AND_ user
space. That's special because we switch CR3 while we are executing
there.

User mapping:
0xffffffff81e00000-0xffffffff82000000     2M  ro  PSE GLB x  pmd

Kernel mapping:
0xffffffff81000000-0xffffffff82000000    16M  ro  PSE     x  pmd

So the K8 is complaining that the TLB entries differ. They differ in the
GLB bit.

Drop the GLB bit when installing the user shared mapping.

Fixes: 6dc72c3cbca0 ("x86/mm/pti: Share entry text PMD")
Reported-by: Meelis Roos <mroos@linux.ee>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Meelis Roos <mroos@linux.ee>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/alpine.DEB.2.20.1801031407180.1957@nanos
---
arch/x86/mm/pti.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
index bce8aea..2da28ba 100644
--- a/arch/x86/mm/pti.c
+++ b/arch/x86/mm/pti.c
@@ -367,7 +367,8 @@ static void __init pti_setup_espfix64(void)
static void __init pti_clone_entry_text(void)
{
pti_clone_pmds((unsigned long) __entry_text_start,
- (unsigned long) __irqentry_text_end, _PAGE_RW);
+ (unsigned long) __irqentry_text_end,
+ _PAGE_RW | _PAGE_GLOBAL);
}
/*
--
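
For readers who want to poke at the flag arithmetic outside the kernel, here
is a minimal userspace C sketch of what the one-line change above does:
pti_clone_pmds() takes a mask of page-table bits to clear while copying the
kernel PMD into the user page tables, and adding _PAGE_GLOBAL to that mask
drops the GLB bit from the user copy so the two TLB entries no longer differ.
The clone_pmd() helper and the sample PMD value are hypothetical stand-ins,
not kernel code; only the bit positions follow the x86 PTE layout.

/*
 * Illustrative userspace sketch, not kernel code: clone a PMD value for
 * the user page tables while clearing a requested set of flag bits.
 */
#include <stdio.h>
#include <stdint.h>

#define _PAGE_RW      (1ULL << 1)   /* writable         */
#define _PAGE_PSE     (1ULL << 7)   /* 2M (large) page  */
#define _PAGE_GLOBAL  (1ULL << 8)   /* global TLB entry */

/* Copy a PMD value for the user page table, clearing the requested bits. */
static uint64_t clone_pmd(uint64_t kernel_pmd, uint64_t clear)
{
	return kernel_pmd & ~clear;
}

int main(void)
{
	uint64_t kernel_pmd = 0x81e00000ULL | _PAGE_PSE | _PAGE_GLOBAL;

	/* Old clear mask: GLB survives in the user copy. */
	uint64_t user_old = clone_pmd(kernel_pmd, _PAGE_RW);
	/* New clear mask: GLB is dropped, so the mappings match. */
	uint64_t user_new = clone_pmd(kernel_pmd, _PAGE_RW | _PAGE_GLOBAL);

	printf("user copy (old mask): %#llx  GLB=%d\n",
	       (unsigned long long)user_old, !!(user_old & _PAGE_GLOBAL));
	printf("user copy (new mask): %#llx  GLB=%d\n",
	       (unsigned long long)user_new, !!(user_new & _PAGE_GLOBAL));
	return 0;
}

Compiling and running this prints the user-side PMD with and without the GLB
bit, mirroring the mismatch the K8 complained about.
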
From fea692ec9308084475c0c93bf74bcb2a35f3d417 Mon Sep 17 00:00:00 2001
From: Thomas Gleixner <tglx@linutronix.de>
Date: Wed, 3 Jan 2018 19:52:04 +0100
Subject: [PATCH] CONFIG_PAGE_TABLE_ISOLATION=y on x86_64 causes gcc to
 segfault when building x86_32 binaries

On Wed, 3 Jan 2018, Thomas Gleixner wrote:
> On Wed, 3 Jan 2018, Lars Wendler wrote:
> > On Wed, 3 Jan 2018 13:05:38 +0100 (CET),
> > Thomas Gleixner <tglx@linutronix.de> wrote:
> > > Also can you please try Linus v4.15-rc6 with PTI enabled so we can see
> > > whether that's a backport issue or a general one?
> >
> > Same problem with 4.15-rc6. So I suppose that means it's a general
> > issue.
>
> Just a shot in the dark, as I just decoded another issue on an AMD CPU. Can
> you please try the patch below?

Ok. Found the real issue. This is a problem on AMD boxen.

Fix below.

Can Xen folks please have a look at that as well?

Thanks,

	tglx

8<-------------------
---
arch/x86/entry/entry_64_compat.S | 13 ++++++-------
1 file changed, 6 insertions(+), 7 deletions(-)
diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
index 40f17009ec20..4c4b9545b848 100644
--- a/arch/x86/entry/entry_64_compat.S
+++ b/arch/x86/entry/entry_64_compat.S
@@ -190,8 +190,13 @@ ENTRY(entry_SYSCALL_compat)
/* Interrupts are off on entry. */
swapgs
- /* Stash user ESP and switch to the kernel stack. */
+ /* Stash user ESP */
movl %esp, %r8d
+
+ /* Use %rsp as scratch reg. User ESP is stashed in r8 */
+ SWITCH_TO_KERNEL_CR3 scratch_reg=%rsp
+
+ /* Switch to the kernel stack */
movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp
/* Construct struct pt_regs on stack */
@@ -219,12 +224,6 @@ GLOBAL(entry_SYSCALL_compat_after_hwframe)
pushq $0 /* pt_regs->r14 = 0 */
pushq $0 /* pt_regs->r15 = 0 */
- /*
- * We just saved %rdi so it is safe to clobber. It is not
- * preserved during the C calls inside TRACE_IRQS_OFF anyway.
- */
- SWITCH_TO_KERNEL_CR3 scratch_reg=%rdi
-
/*
* User mode is traced as though IRQs are on, and SYSENTER
* turned them off.
--
2.14.3
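
A rough C model of the reordering in the second patch may help readers who
do not read entry assembly daily: the user stack pointer is parked in r8
first, which frees %rsp to serve as the scratch register for
SWITCH_TO_KERNEL_CR3, and only then is the per-CPU kernel stack pointer
loaded. The stub functions below are hypothetical stand-ins for the assembly
macros; they merely print the steps instead of touching CR3 or the stack.

/*
 * Userspace sketch of the new ordering only; the real code is x86-64
 * assembly in arch/x86/entry/entry_64_compat.S.
 */
#include <stdio.h>
#include <stdint.h>

static void stash_user_esp(uint32_t esp)
{
	printf("movl %%esp, %%r8d          (user ESP %#x parked in r8)\n", esp);
}

static void switch_to_kernel_cr3(void)
{
	printf("SWITCH_TO_KERNEL_CR3 scratch_reg=%%rsp\n");
}

static void load_kernel_stack(void)
{
	printf("movq PER_CPU_VAR(cpu_current_top_of_stack), %%rsp\n");
}

int main(void)
{
	/*
	 * New order from the patch: stash ESP, switch CR3 while %rsp is
	 * still free to use as scratch, then load the kernel stack pointer.
	 * Before the patch the CR3 switch happened later, after the pt_regs
	 * frame had already been built on the kernel stack.
	 */
	stash_user_esp(0xbffff000u);
	switch_to_kernel_cr3();
	load_kernel_stack();
	return 0;
}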