kernel/x86-Use-correct-byte-sized-register-constraint-in-__add.patch

From: H. Peter Anvin <hpa@zytor.com>
Date: Fri, 6 Apr 2012 16:30:57 +0000 (-0700)
Subject: x86: Use correct byte-sized register constraint in __add()
X-Git-Url: http://git.kernel.org/?p=linux%2Fkernel%2Fgit%2Ftip%2Ftip.git;a=commitdiff_plain;h=8c91c5325e107ec17e40a59a47c6517387d64eb7

x86: Use correct byte-sized register constraint in __add()

Similar to:

 2ca052a x86: Use correct byte-sized register constraint in __xchg_op()

... the __add() macro also needs to use a "q" constraint in the
byte-sized case, lest we try to generate an illegal register.

Link: http://lkml.kernel.org/r/4F7A3315.501@goop.org
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Leigh Scott <leigh123linux@googlemail.com>
Cc: Thomas Reitmayr <treitmayr@devbase.at>
Cc: <stable@vger.kernel.org> v3.3
---
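For reference, a minimal standalone sketch of the fixed byte-sized case
(the add_byte helper below is hypothetical, not part of this patch).
With "r", the compiler may pick any general register, including %esi or
%edi, whose byte subregisters (%sil/%dil) exist only in 64-bit mode; the
%b1 operand modifier would then emit a register name the assembler
rejects. "q" restricts the choice to %eax/%ebx/%ecx/%edx, whose low
bytes (%al/%bl/%cl/%dl) are addressable in all modes.

static inline void add_byte(unsigned char *p, unsigned char v)
{
	/* Same constraint pattern as the fixed __add() byte case:
	 * "q" = byte-addressable general register, "i" = immediate.
	 */
	asm volatile("addb %b1, %0"
		     : "+m" (*p)
		     : "qi" (v)
		     : "memory", "cc");
}
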
diff --git a/arch/x86/include/asm/cmpxchg.h b/arch/x86/include/asm/cmpxchg.h
index bc18d0e..99480e5 100644
--- a/arch/x86/include/asm/cmpxchg.h
+++ b/arch/x86/include/asm/cmpxchg.h
@@ -173,7 +173,7 @@ extern void __add_wrong_size(void)
 	switch (sizeof(*(ptr))) {					\
 	case __X86_CASE_B:						\
 		asm volatile (lock "addb %b1, %0\n"			\
-			      : "+m" (*(ptr)) : "ri" (inc)		\
+			      : "+m" (*(ptr)) : "qi" (inc)		\
 			      : "memory", "cc");			\
 		break;							\
 	case __X86_CASE_W:						\