From patchwork Mon Sep 30 05:59:25 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Masahiro Yamada <yamada.masahiro@socionext.com>
X-Patchwork-Id: 1132459
From: Masahiro Yamada <yamada.masahiro@socionext.com>
To: linux-arm-kernel@lists.infradead.org,
Russell King <rmk+kernel@armlinux.org.uk>
Cc: Linus Torvalds <torvalds@linux-foundation.org>,
Olof Johansson <olof@lixom.net>, Arnd Bergmann <arnd@arndb.de>,
Nick Desaulniers <ndesaulniers@google.com>,
Nicolas Saenz Julienne <nsaenzjulienne@suse.de>,
Masahiro Yamada <yamada.masahiro@socionext.com>,
Julien Thierry <julien.thierry.kdev@gmail.com>,
Russell King <linux@armlinux.org.uk>,
Stefan Agner <stefan@agner.ch>,
Thomas Gleixner <tglx@linutronix.de>,
Vincent Whitchurch <vincent.whitchurch@axis.com>,
linux-kernel@vger.kernel.org
Subject: [PATCH] ARM: fix __get_user_check() in case uaccess_* calls are not inlined
Date: Mon, 30 Sep 2019 14:59:25 +0900
Message-Id: <20190930055925.25842-1-yamada.masahiro@socionext.com>
X-Mailer: git-send-email 2.17.1

KernelCI reports that bcm2835_defconfig is no longer booting since
commit ac7c3e4ff401 ("compiler: enable CONFIG_OPTIMIZE_INLINING
forcibly"):
https://lkml.org/lkml/2019/9/26/825

I also received a regression report from Nicolas Saenz Julienne:
https://lkml.org/lkml/2019/9/27/263

This problem has cropped up on arch/arm/configs/bcm2835_defconfig
because it enables CONFIG_CC_OPTIMIZE_FOR_SIZE. The compiler tends
not to inline functions when optimizing for size (-Os). I was able to
reproduce it with other boards and defconfig files by manually enabling
CONFIG_CC_OPTIMIZE_FOR_SIZE.
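
To reproduce on another board, it is enough to flip that one option on
top of an otherwise unmodified defconfig, e.g. (one possible sketch; any
way of setting the option works):

  CONFIG_CC_OPTIMIZE_FOR_SIZE=y

appended to .config, followed by "make olddefconfig" before building.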

__get_user_check() specifically uses the r0, r1, r2 registers.
So, uaccess_save_and_enable() and uaccess_restore() must be inlined
to avoid those registers being overwritten in the callees.
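
To illustrate the failure mode outside of the real macro, here is a
reduced sketch with made-up names (not part of this patch, and only
meaningful when built with an ARM compiler): r0-r3 are caller-saved
under the AAPCS, and GCC is not guaranteed to preserve local register
variables across a real function call, so a non-inlined call sitting
between the register set-up and the asm that consumes those registers
can corrupt them:

  /* get_flags() stands in for a non-inlined uaccess_save_and_enable() */
  static __attribute__((noinline)) unsigned int get_flags(void)
  {
          return 0;
  }

  int sketch(int *p, unsigned long limit)
  {
          register int *ptr asm("r0") = p;                /* like __p */
          register unsigned long lim asm("r1") = limit;   /* like __l */
          register int err asm("r0");                     /* like __e */
          unsigned int flags = get_flags(); /* the call may clobber r0-r3 */

          /* stands in for the __get_user_x() asm that consumes r0/r1 */
          asm volatile("" : "=r" (err) : "0" (ptr), "r" (lim));

          (void)flags;
          return err;
  }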

Prior to commit 9012d011660e ("compiler: allow all arches to enable
CONFIG_OPTIMIZE_INLINING"), the 'inline' marker was always enough to
get functions inlined, except on x86. Since that commit, all
architectures can enable CONFIG_OPTIMIZE_INLINING, so __always_inline
is now the only guaranteed way to force inlining.
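
As a generic illustration (not a change proposed by this patch), the
difference between the two annotations boils down to:

  /* outside the kernel tree, __always_inline expands to roughly this */
  #define __always_inline inline __attribute__((__always_inline__))

  static inline unsigned int hint_only(void)
  {
          return 0; /* with CONFIG_OPTIMIZE_INLINING, may become a real call */
  }

  static __always_inline unsigned int forced(void)
  {
          return 0; /* inlined regardless of -Os and inlining heuristics */
  }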

I want to keep as much of the compiler's freedom as possible regarding
the inlining decision, so instead of adding __always_inline to those
helpers, I changed the order of the calls:
call uaccess_save_and_enable() before __p ("r0") is assigned, and
uaccess_restore() after the value of __e ("r0") has been copied out
into __err.

Fixes: 9012d011660e ("compiler: allow all arches to enable CONFIG_OPTIMIZE_INLINING")
Reported-by: "kernelci.org bot" <bot@kernelci.org>
Reported-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Tested-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Tested-by: Fabrizio Castro <fabrizio.castro@bp.renesas.com>
Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
---
 arch/arm/include/asm/uaccess.h | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
index 303248e5b990..559f252d7e3c 100644
--- a/arch/arm/include/asm/uaccess.h
+++ b/arch/arm/include/asm/uaccess.h
@@ -191,11 +191,12 @@ extern int __get_user_64t_4(void *);
 #define __get_user_check(x, p) \
         ({ \
                 unsigned long __limit = current_thread_info()->addr_limit - 1; \
+                unsigned int __ua_flags = uaccess_save_and_enable(); \
                 register typeof(*(p)) __user *__p asm("r0") = (p); \
                 register __inttype(x) __r2 asm("r2"); \
                 register unsigned long __l asm("r1") = __limit; \
                 register int __e asm("r0"); \
-                unsigned int __ua_flags = uaccess_save_and_enable(); \
+                unsigned int __err; \
                 switch (sizeof(*(__p))) { \
                 case 1: \
                         if (sizeof((x)) >= 8) \
@@ -223,9 +224,10 @@ extern int __get_user_64t_4(void *);
                         break; \
                 default: __e = __get_user_bad(); break; \
                 } \
-                uaccess_restore(__ua_flags); \
+                __err = __e; \
                 x = (typeof(*(p))) __r2; \
-                __e; \
+                uaccess_restore(__ua_flags); \
+                __err; \
         })
 
 #define get_user(x, p) \