Commit Graph

2526 Commits

Author SHA1 Message Date
Vitaly Chikunov
fe18957e8e crypto: streebog - add Streebog hash function
Add the GOST/IETF Streebog hash function (GOST R 34.11-2012, RFC 6986)
as a generic hash transformation.

Cc: linux-integrity@vger.kernel.org
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-11-16 14:09:40 +08:00
Gilad Ben-Yossef
ecd6d5c9cb crypto: cts - document NIST standard status
cts(cbc(aes)) as used in the kernel has been added to the NIST
standard as CBC-CS3. Document it as such.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Suggested-by: Stephan Mueller <smueller@chronox.de>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-11-16 14:09:39 +08:00
Vitaly Chikunov
2eb4942b66 crypto: ecc - check for invalid values in the key verification test
The currently used scalar multiplication algorithm (Matthieu Rivain,
2011) produces incorrect results for scalar values 1 and n-1 (and, for
the regularized version, n-2), which were previously not checked.
Verify that they are not used as private keys.

Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-11-16 14:09:39 +08:00
Gilad Ben-Yossef
196ad6043e crypto: testmgr - mark cts(cbc(aes)) as FIPS allowed
As per the SP800-38A addendum from October 2010 [1], cts(cbc(aes)) is
allowed as a FIPS mode algorithm. Mark it as such.

[1] https://csrc.nist.gov/publications/detail/sp/800-38a/addendum/final

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Reviewed-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-11-09 17:41:39 +08:00
Eric Biggers
37db69e0b4 crypto: user - clean up report structure copying
There have been a pretty ridiculous number of issues with initializing
the report structures that are copied to userspace by NETLINK_CRYPTO.
Commit 4473710df1 ("crypto: user - Prepare for CRYPTO_MAX_ALG_NAME
expansion") replaced some strncpy()s with strlcpy()s, thereby
introducing information leaks.  Later two other people tried to replace
other strncpy()s with strlcpy() too, which would have introduced even
more information leaks:

    - https://lore.kernel.org/patchwork/patch/954991/
    - https://patchwork.kernel.org/patch/10434351/

Commit cac5818c25 ("crypto: user - Implement a generic crypto
statistics") also uses the buggy strlcpy() approach and therefore leaks
uninitialized memory to userspace.  A fix was proposed, but it was
originally incomplete.

Seeing as how apparently no one can get this right with the current
approach, change all the reporting functions to:

- Start by memsetting the report structure to 0.  This guarantees it's
  always initialized, regardless of what happens later.
- Initialize all strings using strscpy().  This is safe after the
  memset, ensures null termination of long strings, avoids unnecessary
  work, and avoids the -Wstringop-truncation warnings from gcc.
- Use sizeof(var) instead of sizeof(type).  This is more robust against
  copy+paste errors.

For simplicity, also reuse the -EMSGSIZE return value from nla_put().
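
A minimal sketch of the resulting pattern, modeled on the cipher case
(the struct and attribute names are from the NETLINK_CRYPTO uapi):

	static int crypto_report_cipher(struct sk_buff *skb, struct crypto_alg *alg)
	{
		struct crypto_report_cipher rcipher;

		memset(&rcipher, 0, sizeof(rcipher));	/* no uninitialized bytes can reach userspace */

		strscpy(rcipher.type, "cipher", sizeof(rcipher.type));
		rcipher.blocksize = alg->cra_blocksize;
		rcipher.min_keysize = alg->cra_cipher.cia_min_keysize;
		rcipher.max_keysize = alg->cra_cipher.cia_max_keysize;

		return nla_put(skb, CRYPTOCFGA_REPORT_CIPHER,
			       sizeof(rcipher), &rcipher);
	}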

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-11-09 17:41:39 +08:00
Eric Biggers
ed848b652c crypto: user - remove redundant reporting functions
The acomp, akcipher, and kpp algorithm types already have .report
methods defined, so there's no need to duplicate this functionality in
crypto_user itself; the duplicate functions are actually never executed.
Remove the unused code.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-11-09 17:41:39 +08:00
Colin Ian King
b1e3874c75 pcrypt: use format specifier in kobject_add
Passing the string 'name' as the format specifier is potentially
hazardous because 'name' could (although it is very unlikely to) have a
format specifier embedded in it, causing issues when the non-existent
arguments are parsed.  Follow best practice by using the "%s" format
string for the string 'name'.
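
The change boils down to something like this (a sketch of the call in
crypto/pcrypt.c; surrounding code elided):

	/* before: 'name' is interpreted as a format string */
	ret = kobject_add(&pinst->kobj, kobj, name);

	/* after: 'name' is passed as data for a literal "%s" format */
	ret = kobject_add(&pinst->kobj, kobj, "%s", name);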

Cleans up clang warning:
crypto/pcrypt.c:397:40: warning: format string is not a string literal
(potentially insecure) [-Wformat-security]

Fixes: a3fb1e330d ("pcrypt: Added sysfs interface to pcrypt")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-11-09 17:41:39 +08:00
Dmitry Eremin-Solenikov
7da6667077 crypto: testmgr - add AES-CFB tests
Add AES128/192/256-CFB testvectors from NIST SP800-38A.

Signed-off-by: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-11-09 17:41:38 +08:00
Dmitry Eremin-Solenikov
fa4600734b crypto: cfb - fix decryption
crypto_cfb_decrypt_segment() incorrectly XOR'ed the generated keystream
with the IV rather than with the data stream, resulting in incorrect
decryption.
Test vectors will be added in the next patch.
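
Conceptually, one CFB decryption step should look like this (a sketch;
the variable names are illustrative):

	crypto_cipher_encrypt_one(tfm, keystream, iv);	/* keystream = E_k(IV) */
	crypto_xor_cpy(dst, src, keystream, bsize);	/* P = C xor keystream, not IV xor keystream */
	memcpy(iv, src, bsize);				/* the ciphertext block becomes the next IV */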

Signed-off-by: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-11-09 17:40:59 +08:00
Eric Biggers
913a3aa07d crypto: arm/aes - add some hardening against cache-timing attacks
Make the ARM scalar AES implementation closer to constant-time by
disabling interrupts and prefetching the tables into L1 cache.  This is
feasible because, thanks to ARM's "free" rotations, the main tables are
only 1024 bytes instead of the usual 4096 used by most AES
implementations.

On ARM Cortex-A7, the speed loss is only about 5%.  The resulting code
is still over twice as fast as aes_ti.c.  Responsiveness is potentially
a concern, but interrupts are only disabled for a single AES block.

Note that even after these changes, the implementation still isn't
necessarily guaranteed to be constant-time; see
https://cr.yp.to/antiforgery/cachetiming-20050414.pdf for a discussion
of the many difficulties involved in writing truly constant-time AES
software.  But it's valuable to make such attacks more difficult.

Much of this patch is based on patches suggested by Ard Biesheuvel.

Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-11-09 17:36:48 +08:00
Eric Biggers
0a6a40c2a8 crypto: aes_ti - disable interrupts while accessing S-box
In the "aes-fixed-time" AES implementation, disable interrupts while
accessing the S-box, in order to make cache-timing attacks more
difficult.  Previously it was possible for the CPU to be interrupted
while the S-box was loaded into L1 cache, potentially evicting the
cachelines and causing later table lookups to be time-variant.
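
The shape of the change is roughly the following (a sketch; the helper
name is hypothetical):

	unsigned long flags;

	local_irq_save(flags);			/* S-box cannot be evicted mid-block */
	aesti_crypt_block(ctx, dst, src);	/* time-variant table lookups happen here */
	local_irq_restore(flags);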

In tests I did on x86 and ARM, this doesn't affect performance
significantly.  Responsiveness is potentially a concern, but interrupts
are only disabled for a single AES block.

Note that even after this change, the implementation still isn't
necessarily guaranteed to be constant-time; see
https://cr.yp.to/antiforgery/cachetiming-20050414.pdf for a discussion
of the many difficulties involved in writing truly constant-time AES
software.  But it's valuable to make such attacks more difficult.

Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-11-09 17:36:48 +08:00
Corentin Labbe
9f4debe384 crypto: user - Zeroize whole structure given to user space
To prevent uninitialized data from being given to user space (and thus
leaking potentially useful data), the crypto_stat structure must be
correctly initialized.

Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Fixes: cac5818c25 ("crypto: user - Implement a generic crypto statistics")
Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
[EB: also fix it in crypto_reportstat_one()]
[EB: use sizeof(var) rather than sizeof(type)]
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-11-09 17:35:43 +08:00
Eric Biggers
f43f39958b crypto: user - fix leaking uninitialized memory to userspace
All bytes of the NETLINK_CRYPTO report structures must be initialized,
since they are copied to userspace.  The change from strncpy() to
strlcpy() broke this.  As a minimal fix, change it back.

Fixes: 4473710df1 ("crypto: user - Prepare for CRYPTO_MAX_ALG_NAME expansion")
Cc: <stable@vger.kernel.org> # v4.12+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-11-09 17:35:43 +08:00
Ard Biesheuvel
508a1c4df0 crypto: simd - correctly take reqsize of wrapped skcipher into account
The simd wrapper's skcipher request context structure consists
of a single subrequest whose size is taken from the subordinate
skcipher. However, in simd_skcipher_init(), the reqsize that is
retrieved is not from the subordinate skcipher but from the
cryptd request structure, whose size is completely unrelated to
the actual wrapped skcipher.
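
Conceptually, the fix looks like this (a sketch; 'wrapped_tfm' stands
for the subordinate skcipher):

	reqsize = sizeof(struct skcipher_request) +
		  crypto_skcipher_reqsize(wrapped_tfm);	/* not the cryptd request size */
	crypto_skcipher_set_reqsize(tfm, reqsize);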

Reported-by: Qian Cai <cai@gmx.us>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Qian Cai <cai@gmx.us>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-11-09 17:35:43 +08:00
Denis Kenzior
64ae16dfee KEYS: asym_tpm: Add support for the sign operation [ver #2]
The sign operation can operate in a non-hashed mode by running the RSA
sign operation directly on the input.  This assumes that the input is
less than key_size_in_bytes - 11.  Since the TPM performs its own PKCS1
padding, it isn't possible to support 'raw' mode, only 'pkcs1'.

Alternatively, a hashed version is also possible.  In this variant the
input is hashed (by userspace) via the selected hash function first.
Then this implementation takes care of converting the hash to ASN.1
format and the sign operation is performed on the result.  This is
similar to the implementation inside crypto/rsa-pkcs1pad.c.

ASN1 templates were copied from crypto/rsa-pkcs1pad.c.  There seems to
be no easy way to expose that functionality, but likely the templates
should be shared somehow.
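
For reference, the well-known PKCS#1 DigestInfo prefix that such a
template encodes for SHA-256 is the byte sequence below (the 32-byte
digest is appended after it):

	static const u8 digest_info_sha256[] = {
		0x30, 0x31, 0x30, 0x0d, 0x06, 0x09,			/* SEQUENCE, SEQUENCE, OID */
		0x60, 0x86, 0x48, 0x01, 0x65, 0x03, 0x04, 0x02, 0x01,	/* id-sha256 */
		0x05, 0x00, 0x04, 0x20					/* NULL, OCTET STRING (32) */
	};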

The sign operation is implemented via TPM_Sign operation on the TPM.
It is assumed that the provided TPM-wrapped key uses the
TPM_SS_RSASSAPKCS1v15_DER signature scheme.  This allows the TPM_Sign
operation to work on data up to key_len_in_bytes - 11 bytes long.

In theory, we could also use TPM_Unbind instead of TPM_Sign, but we would
have to manually pkcs1 pad the digest first.

Signed-off-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: James Morris <james.morris@microsoft.com>
2018-10-26 09:30:47 +01:00
Denis Kenzior
e73d170f6c KEYS: asym_tpm: Implement tpm_sign [ver #2]
Signed-off-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: James Morris <james.morris@microsoft.com>
2018-10-26 09:30:47 +01:00
Denis Kenzior
e08e689123 KEYS: asym_tpm: Implement signature verification [ver #2]
This patch implements the verify_signature operation.  The public key
portion extracted from the TPM key blob is used.  The operation is
performed entirely in software using the crypto API.

Signed-off-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: James Morris <james.morris@microsoft.com>
2018-10-26 09:30:47 +01:00
Denis Kenzior
a335974ae0 KEYS: asym_tpm: Implement the decrypt operation [ver #2]
This patch implements the pkey_decrypt operation using the private key
blob.  The blob is first loaded into the TPM via tpm_loadkey2.  Once the
handle is obtained, the tpm_unbind operation is used to decrypt the data
on the TPM and the result is returned.  The key loaded by tpm_loadkey2
is then evicted via the tpm_flushspecific operation.

This patch assumes that the SRK authorization is the well-known value of
20 bytes of zeros, and that the same holds for the key authorization of
the provided key.
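
The flow might be sketched as follows (helper names follow the
description above, but the signatures are illustrative only):

	ret = tpm_loadkey2(tb, SRK_HANDLE, blob, blob_len, srk_auth, &handle);
	if (ret < 0)
		return ret;

	ret = tpm_unbind(tb, handle, key_auth, in, in_len, out);	/* decrypt on the TPM */
	tpm_flushspecific(tb, handle);					/* evict the loaded key */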

Signed-off-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: James Morris <james.morris@microsoft.com>
2018-10-26 09:30:47 +01:00
Denis Kenzior
f884fe5a15 KEYS: asym_tpm: Implement tpm_unbind [ver #2]
Signed-off-by: Denis Kenzior <denkenz@gmail.com>
Reviewed-by: James Morris <james.morris@microsoft.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: James Morris <james.morris@microsoft.com>
2018-10-26 09:30:47 +01:00
Denis Kenzior
0c36264aa1 KEYS: asym_tpm: Add loadkey2 and flushspecific [ver #2]
This commit adds TPM_LoadKey2 and TPM_FlushSpecific operations.

Signed-off-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: James Morris <james.morris@microsoft.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: James Morris <james.morris@microsoft.com>
2018-10-26 09:30:47 +01:00
Denis Kenzior
e1ea9f8602 KEYS: trusted: Expose common functionality [ver #2]
This patch exposes some common functionality needed to send TPM commands.
Several functions from keys/trusted.c are exposed for use by the new tpm
key subtype and a module dependency is introduced.

In the future, common functionality between the trusted key type and the
asym_tpm subtype should be factored out into a common utility library.

Signed-off-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: James Morris <james.morris@microsoft.com>
2018-10-26 09:30:47 +01:00
Denis Kenzior
ad4b1eb5fb KEYS: asym_tpm: Implement encryption operation [ver #2]
This patch implements the pkey_encrypt operation.  The public key
portion extracted from the TPM key blob is used.  The operation is
performed entirely in software using the crypto API.

Signed-off-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: James Morris <james.morris@microsoft.com>
2018-10-26 09:30:47 +01:00
Denis Kenzior
dff5a61a59 KEYS: asym_tpm: Implement pkey_query [ver #2]
This commit implements the pkey_query operation.  This is accomplished
by utilizing the public key portion to obtain max encryption size
information for the operations that utilize the public key (encrypt,
verify).  The private key size extracted from the TPM_Key data structure
is used to fill the information where the private key is used (decrypt,
sign).

The kernel uses a DER/BER format for public keys and does not support
setting the key via the raw binary form.  To get around this, a simple
DER/BER formatter is implemented which stores the DER/BER formatted key
and exponent in a temporary buffer for use by the crypto API.

The only exponent supported currently is 65537.  This holds true for
other Linux TPM tools such as 'create_tpm_key' and
trousers-openssl_tpm_engine.

Signed-off-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: James Morris <james.morris@microsoft.com>
2018-10-26 09:30:46 +01:00
Denis Kenzior
d5e72745ca KEYS: Add parser for TPM-based keys [ver #2]
For TPM based keys, the only standard seems to be described here:
http://david.woodhou.se/draft-woodhouse-cert-best-practice.html#rfc.section.4.4

Quote from the relevant section:
"Rather, a common form of storage for "wrapped" keys is to encode the
binary TCPA_KEY structure in a single ASN.1 OCTET-STRING, and store the
result in PEM format with the tag "-----BEGIN TSS KEY BLOB-----". "

This patch implements the above behavior.  It is assumed that the PEM
encoding is stripped out by userspace and only the raw DER/BER format is
provided.  This is similar to how PKCS7, PKCS8 and X.509 keys are
handled.
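
Hypothetically, stripping the PEM armor in userspace and loading the raw
blob could then look like this (key description 'tpmkey' is arbitrary):

	grep -v -- '-----' tpm_key.pem | base64 -d | \
	  keyctl padd asymmetric tpmkey @s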

Signed-off-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: James Morris <james.morris@microsoft.com>
2018-10-26 09:30:46 +01:00
Denis Kenzior
f8c54e1ac4 KEYS: asym_tpm: extract key size & public key [ver #2]
The parsed BER/DER blob obtained from user space contains a TPM_Key
structure.  This structure has some information about the key as well as
the public key portion.

This patch extracts this information for future use.

Signed-off-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: James Morris <james.morris@microsoft.com>
2018-10-26 09:30:46 +01:00
Denis Kenzior
903be6bb84 KEYS: asym_tpm: add skeleton for asym_tpm [ver #2]
This patch adds the basic skeleton for the asym_tpm asymmetric key
subtype.

Signed-off-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: James Morris <james.morris@microsoft.com>
2018-10-26 09:30:46 +01:00
Denis Kenzior
b3a8c8a5eb crypto: rsa-pkcs1pad: Allow hash to be optional [ver #2]
The original pkcs1pad implementation allowed padding/unpadding of raw
RSA output.  However, that ability was removed by commit c0d20d22e0
("crypto: rsa-pkcs1pad - Require hash to be present").

This patch restores the ability, as it is needed by the asymmetric key
implementation.

Signed-off-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: James Morris <james.morris@microsoft.com>
2018-10-26 09:30:46 +01:00
David Howells
3c58b2362b KEYS: Implement PKCS#8 RSA Private Key parser [ver #2]
Implement PKCS#8 RSA Private Key format [RFC 5208] parser for the
asymmetric key type.  For the moment, this will only support unencrypted
DER blobs.  PEM and decryption can be added later.

PKCS#8 keys can be loaded like this:

	openssl pkcs8 -in private_key.pem -topk8 -nocrypt -outform DER | \
	  keyctl padd asymmetric foo @s

Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Denis Kenzior <denkenz@gmail.com>
Tested-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: James Morris <james.morris@microsoft.com>
2018-10-26 09:30:46 +01:00
David Howells
c08fed7371 KEYS: Implement encrypt, decrypt and sign for software asymmetric key [ver #2]
Implement the encrypt, decrypt and sign operations for the software
asymmetric key subtype.  This mostly involves offloading the call to the
crypto layer.

Note that the decrypt and sign operations require a private key to be
supplied.  Encrypt (and also verify) will work with either a public or a
private key.  A public key can be supplied with an X.509 certificate and a
private key can be supplied using a PKCS#8 blob:

	# j=`openssl pkcs8 -in ~/pkcs7/firmwarekey2.priv -topk8 -nocrypt -outform DER | keyctl padd asymmetric foo @s`
	# keyctl pkey_query $j - enc=pkcs1
	key_size=4096
	max_data_size=512
	max_sig_size=512
	max_enc_size=512
	max_dec_size=512
	encrypt=y
	decrypt=y
	sign=y
	verify=y
	# keyctl pkey_encrypt $j 0 data enc=pkcs1 >/tmp/enc
	# keyctl pkey_decrypt $j 0 /tmp/enc enc=pkcs1 >/tmp/dec
	# cmp data /tmp/dec
	# keyctl pkey_sign $j 0 data enc=pkcs1 hash=sha1 >/tmp/sig
	# keyctl pkey_verify $j 0 data /tmp/sig enc=pkcs1 hash=sha1
	#

Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Denis Kenzior <denkenz@gmail.com>
Tested-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: James Morris <james.morris@microsoft.com>
2018-10-26 09:30:46 +01:00
David Howells
f7c4e06e06 KEYS: Allow the public_key struct to hold a private key [ver #2]
Put a flag in the public_key struct to indicate if the structure is holding
a private key.  The private key must be held ASN.1 encoded in the format
specified in RFC 3447 A.1.2.  This is the form required by crypto/rsa.c.

The software encryption subtype's verification and query functions then
need to select the appropriate crypto function to set the key.

Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Denis Kenzior <denkenz@gmail.com>
Tested-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: James Morris <james.morris@microsoft.com>
2018-10-26 09:30:46 +01:00
David Howells
82f94f2447 KEYS: Provide software public key query function [ver #2]
Provide a query function for the software public key implementation.  This
permits information about such a key to be obtained using
query_asymmetric_key() or KEYCTL_PKEY_QUERY.

Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Denis Kenzior <denkenz@gmail.com>
Tested-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: James Morris <james.morris@microsoft.com>
2018-10-26 09:30:46 +01:00
David Howells
0398849077 KEYS: Make the X.509 and PKCS7 parsers supply the sig encoding type [ver #2]
Make the X.509 and PKCS7 parsers fill in the signature encoding type field
recently added to the public_key_signature struct.

Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Denis Kenzior <denkenz@gmail.com>
Tested-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: James Morris <james.morris@microsoft.com>
2018-10-26 09:30:46 +01:00
David Howells
5a30771832 KEYS: Provide missing asymmetric key subops for new key type ops [ver #2]
Provide the missing asymmetric key subops for the new key type ops.
These include query, encrypt, decrypt and create signature.  Verify signature
already exists.  Also provided are accessor functions for this:

	int query_asymmetric_key(const struct key *key,
				 struct kernel_pkey_query *info);

	int encrypt_blob(struct kernel_pkey_params *params,
			 const void *data, void *enc);
	int decrypt_blob(struct kernel_pkey_params *params,
			 const void *enc, void *data);
	int create_signature(struct kernel_pkey_params *params,
			     const void *data, void *enc);

The public_key_signature struct gains an encoding field to carry the
encoding for verify_signature().

Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Denis Kenzior <denkenz@gmail.com>
Tested-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: James Morris <james.morris@microsoft.com>
2018-10-26 09:30:46 +01:00
Linus Torvalds
62606c224d Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto updates from Herbert Xu:
 "API:
   - Remove VLA usage
   - Add cryptostat user-space interface
   - Add notifier for new crypto algorithms

  Algorithms:
   - Add OFB mode
   - Remove speck

  Drivers:
   - Remove x86/sha*-mb as they are buggy
   - Remove pcbc(aes) from x86/aesni
   - Improve performance of arm/ghash-ce by up to 85%
   - Implement CTS-CBC in arm64/aes-blk, faster by up to 50%
   - Remove PMULL based arm64/crc32 driver
   - Use PMULL in arm64/crct10dif
   - Add aes-ctr support in s5p-sss
   - Add caam/qi2 driver

  Others:
   - Pick better transform if one becomes available in crc-t10dif"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (124 commits)
  crypto: chelsio - Update ntx queue received from cxgb4
  crypto: ccree - avoid implicit enum conversion
  crypto: caam - add SPDX license identifier to all files
  crypto: caam/qi - simplify CGR allocation, freeing
  crypto: mxs-dcp - make symbols 'sha1_null_hash' and 'sha256_null_hash' static
  crypto: arm64/aes-blk - ensure XTS mask is always loaded
  crypto: testmgr - fix sizeof() on COMP_BUF_SIZE
  crypto: chtls - remove set but not used variable 'csk'
  crypto: axis - fix platform_no_drv_owner.cocci warnings
  crypto: x86/aes-ni - fix build error following fpu template removal
  crypto: arm64/aes - fix handling sub-block CTS-CBC inputs
  crypto: caam/qi2 - avoid double export
  crypto: mxs-dcp - Fix AES issues
  crypto: mxs-dcp - Fix SHA null hashes and output length
  crypto: mxs-dcp - Implement sha import/export
  crypto: aegis/generic - fix for big endian systems
  crypto: morus/generic - fix for big endian systems
  crypto: lrw - fix rebase error after out of bounds fix
  crypto: cavium/nitrox - use pci_alloc_irq_vectors() while enabling MSI-X.
  crypto: cavium/nitrox - NITROX command queue changes.
  ...
2018-10-25 16:43:35 -07:00
Karsten Graul
89ab066d42 Revert "net: simplify sock_poll_wait"
This reverts commit dd979b4df8.

This broke tcp_poll for SMC fallback: An AF_SMC socket establishes an
internal TCP socket for the initial handshake with the remote peer.
Whenever the SMC connection cannot be established, this TCP socket is
used as a fallback. All socket operations on the SMC socket are then
forwarded to the TCP socket. In case of poll, the file->private_data
pointer references the SMC socket because the TCP socket has no file
assigned. This causes tcp_poll to wait on the wrong socket.

Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-23 10:57:06 -07:00
Michael Schupikov
22a8118d32 crypto: testmgr - fix sizeof() on COMP_BUF_SIZE
After allocation, output and decomp_output both point to memory chunks of
size COMP_BUF_SIZE. Then only the first few bytes are zeroed out,
because sizeof(COMP_BUF_SIZE) is passed as the length parameter to
memset() and yields the size of the integer constant, not the size of
the allocated memory.

Instead, the whole allocated memory is meant to be zeroed out. Use
COMP_BUF_SIZE as parameter to memset() directly in order to accomplish
this.
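
In diff form, the fix is (likewise for decomp_output):

	- memset(output, 0, sizeof(COMP_BUF_SIZE));	/* zeroes only sizeof(int) bytes */
	+ memset(output, 0, COMP_BUF_SIZE);		/* zeroes the whole buffer */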

Fixes: 336073840a ("crypto: testmgr - Allow different compression results")

Signed-off-by: Michael Schupikov <michael@schupikov.de>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-10-12 14:20:45 +08:00
Ard Biesheuvel
4a34e3c2f2 crypto: aegis/generic - fix for big endian systems
Use the correct __le32 annotation and accessors to perform the
single round of AES encryption performed inside the AEGIS transform.
Otherwise, tcrypt reports:

  alg: aead: Test 1 failed on encryption for aegis128-generic
  00000000: 6c 25 25 4a 3c 10 1d 27 2b c1 d4 84 9a ef 7f 6e
  alg: aead: Test 1 failed on encryption for aegis128l-generic
  00000000: cd c6 e3 b8 a0 70 9d 8e c2 4f 6f fe 71 42 df 28
  alg: aead: Test 1 failed on encryption for aegis256-generic
  00000000: aa ed 07 b1 96 1d e9 e6 f2 ed b5 8e 1c 5f dc 1c

Fixes: f606a88e58 ("crypto: aegis - Add generic AEGIS AEAD implementations")
Cc: <stable@vger.kernel.org> # v4.18+
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-10-08 13:44:53 +08:00
Ard Biesheuvel
5a8dedfa32 crypto: morus/generic - fix for big endian systems
Omit the endian swabbing when folding the lengths of the assoc and
crypt input buffers into the state to finalize the tag. This is not
necessary given that the memory representation of the state is in
machine native endianness already.

This fixes an error reported by tcrypt running on a big endian system:

  alg: aead: Test 2 failed on encryption for morus640-generic
  00000000: a8 30 ef fb e6 26 eb 23 b0 87 dd 98 57 f3 e1 4b
  00000010: 21
  alg: aead: Test 2 failed on encryption for morus1280-generic
  00000000: 88 19 1b fb 1c 29 49 0e ee 82 2f cb 97 a6 a5 ee
  00000010: 5f

Fixes: 396be41f16 ("crypto: morus - Add generic MORUS AEAD implementations")
Cc: <stable@vger.kernel.org> # v4.18+
Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-10-08 13:44:53 +08:00
Ard Biesheuvel
fd27b571c9 crypto: lrw - fix rebase error after out of bounds fix
Due to an unfortunate interaction between commit fbe1a850b3
("crypto: lrw - Fix out-of bounds access on counter overflow") and
commit c778f96bf3 ("crypto: lrw - Optimize tweak computation"),
we ended up with a version of next_index() that always returns 127.

Fixes: c778f96bf3 ("crypto: lrw - Optimize tweak computation")
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-10-05 10:22:48 +08:00
Ard Biesheuvel
944585a64f crypto: x86/aes-ni - remove special handling of AES in PCBC mode
For historical reasons, the AES-NI based implementation of the PCBC
chaining mode uses a special FPU chaining mode wrapper template to
amortize the FPU start/stop overhead over multiple blocks.

When this FPU wrapper was introduced, it supported widely used
chaining modes such as XTS and CTR (as well as LRW), but currently,
PCBC is the only remaining user.

Since there are no known users of pcbc(aes) in the kernel, let's remove
this special driver, and rely on the generic pcbc driver to encapsulate
the AES-NI core cipher.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-10-05 10:16:56 +08:00
Gilad Ben-Yossef
dfb89ab3f0 crypto: tcrypt - add OFB functional tests
We already have OFB test vectors and tcrypt OFB speed tests.
Add OFB functional tests to tcrypt as well.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-28 12:46:26 +08:00
Gilad Ben-Yossef
e497c51896 crypto: ofb - add output feedback mode
Add a generic version of output feedback mode. We already have support
for several hardware-based transformations of this mode and the needed
test vectors, but we somehow missed adding a generic software one. Fix
this now.
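
For reference, one OFB step is simply the following (a sketch; names
are illustrative):

	crypto_cipher_encrypt_one(tfm, iv, iv);	/* iv = E_k(iv): the next keystream block */
	crypto_xor_cpy(dst, src, iv, bsize);	/* C = P xor keystream; decryption is identical */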

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-28 12:46:26 +08:00
Gilad Ben-Yossef
95ba597367 crypto: testmgr - update sm4 test vectors
Add additional test vectors from "The SM4 Blockcipher Algorithm And Its
Modes Of Operations" draft-ribose-cfrg-sm4-10 and register cipher speed
tests for sm4.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-28 12:46:26 +08:00
Horia Geantă
4d407b04d4 crypto: tcrypt - remove remnants of pcomp-based zlib
Commit 110492183c ("crypto: compress - remove unused pcomp interface")
removed pcomp interface but missed cleaning up tcrypt.

Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-28 12:46:26 +08:00
Corentin Labbe
cac5818c25 crypto: user - Implement a generic crypto statistics
This patch implements a generic way to get statistics about all crypto
usage.

Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-28 12:46:25 +08:00
Kees Cook
36b3875a97 crypto: cryptd - Remove VLA usage of skcipher
In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-28 12:46:08 +08:00
Kees Cook
8d60539842 crypto: null - Remove VLA usage of skcipher
In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-28 12:46:08 +08:00
Kees Cook
b350bee5ea crypto: skcipher - Introduce crypto_sync_skcipher
In preparation for removal of VLAs due to skcipher requests on the stack
via SKCIPHER_REQUEST_ON_STACK() usage, this introduces the infrastructure
for the "sync skcipher" tfm, which is for handling the on-stack cases of
skcipher, which are always non-ASYNC and have a known limited request
size.

The crypto API additions (a usage sketch follows the list):

	struct crypto_sync_skcipher (wrapper for struct crypto_skcipher)
	crypto_alloc_sync_skcipher()
	crypto_free_sync_skcipher()
	crypto_sync_skcipher_setkey()
	crypto_sync_skcipher_get_flags()
	crypto_sync_skcipher_set_flags()
	crypto_sync_skcipher_clear_flags()
	crypto_sync_skcipher_blocksize()
	crypto_sync_skcipher_ivsize()
	crypto_sync_skcipher_reqtfm()
	skcipher_request_set_sync_tfm()
	SYNC_SKCIPHER_REQUEST_ON_STACK() (with tfm type check)
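
A hypothetical usage sketch of the new API ('key', 'src', 'dst', 'len'
and 'iv' are assumed to be set up by the caller):

	struct crypto_sync_skcipher *tfm;
	int err;

	tfm = crypto_alloc_sync_skcipher("cbc(aes)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_sync_skcipher_setkey(tfm, key, keylen);
	if (!err) {
		SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm);	/* fixed size, no VLA */

		skcipher_request_set_sync_tfm(req, tfm);
		skcipher_request_set_callback(req, 0, NULL, NULL);
		skcipher_request_set_crypt(req, src, dst, len, iv);
		err = crypto_skcipher_encrypt(req);
		skcipher_request_zero(req);
	}
	crypto_free_sync_skcipher(tfm);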

Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-28 12:46:06 +08:00
Dan Aloni
3944f139d5 crypto: fix a memory leak in rsa-pkcs1pad's encryption mode
The encryption mode of pkcs1pad never uses out_sg and out_buf, so
there's no need to allocate the buffer, which presently is not even
being freed.

CC: Herbert Xu <herbert@gondor.apana.org.au>
CC: linux-crypto@vger.kernel.org
CC: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Dan Aloni <dan@kernelim.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-28 12:46:06 +08:00
Ondrej Mosnacek
ac3c8f36c3 crypto: lrw - Do not use auxiliary buffer
This patch simplifies the LRW template to recompute the LRW tweaks from
scratch in the second pass and thus also removes the need to allocate a
dynamic buffer using kmalloc().

As discussed at [1], the use of kmalloc causes deadlocks with dm-crypt.

PERFORMANCE MEASUREMENTS (x86_64)
Performed using: https://gitlab.com/omos/linux-crypto-bench
Crypto driver used: lrw(ecb-aes-aesni)

The results show that the new code has about the same performance as the
old code. For 512-byte message it seems to be even slightly faster, but
that might be just noise.

Before:
       ALGORITHM KEY (b)        DATA (B)   TIME ENC (ns)   TIME DEC (ns)
        lrw(aes)     256              64             200             203
        lrw(aes)     320              64             202             204
        lrw(aes)     384              64             204             205
        lrw(aes)     256             512             415             415
        lrw(aes)     320             512             432             440
        lrw(aes)     384             512             449             451
        lrw(aes)     256            4096            1838            1995
        lrw(aes)     320            4096            2123            1980
        lrw(aes)     384            4096            2100            2119
        lrw(aes)     256           16384            7183            6954
        lrw(aes)     320           16384            7844            7631
        lrw(aes)     384           16384            8256            8126
        lrw(aes)     256           32768           14772           14484
        lrw(aes)     320           32768           15281           15431
        lrw(aes)     384           32768           16469           16293

After:
       ALGORITHM KEY (b)        DATA (B)   TIME ENC (ns)   TIME DEC (ns)
        lrw(aes)     256              64             197             196
        lrw(aes)     320              64             200             197
        lrw(aes)     384              64             203             199
        lrw(aes)     256             512             385             380
        lrw(aes)     320             512             401             395
        lrw(aes)     384             512             415             415
        lrw(aes)     256            4096            1869            1846
        lrw(aes)     320            4096            2080            1981
        lrw(aes)     384            4096            2160            2109
        lrw(aes)     256           16384            7077            7127
        lrw(aes)     320           16384            7807            7766
        lrw(aes)     384           16384            8108            8357
        lrw(aes)     256           32768           14111           14454
        lrw(aes)     320           32768           15268           15082
        lrw(aes)     384           32768           16581           16250

[1] https://lkml.org/lkml/2018/8/23/1315

Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-21 13:24:52 +08:00
Ondrej Mosnacek
c778f96bf3 crypto: lrw - Optimize tweak computation
This patch rewrites the tweak computation to a slightly simpler method
that performs fewer bswaps. Based on performance measurements, the new
code seems to provide slightly better performance than the old one.

PERFORMANCE MEASUREMENTS (x86_64)
Performed using: https://gitlab.com/omos/linux-crypto-bench
Crypto driver used: lrw(ecb-aes-aesni)

Before:
       ALGORITHM KEY (b)        DATA (B)   TIME ENC (ns)   TIME DEC (ns)
        lrw(aes)     256              64             204             286
        lrw(aes)     320              64             227             203
        lrw(aes)     384              64             208             204
        lrw(aes)     256             512             441             439
        lrw(aes)     320             512             456             455
        lrw(aes)     384             512             469             483
        lrw(aes)     256            4096            2136            2190
        lrw(aes)     320            4096            2161            2213
        lrw(aes)     384            4096            2295            2369
        lrw(aes)     256           16384            7692            7868
        lrw(aes)     320           16384            8230            8691
        lrw(aes)     384           16384            8971            8813
        lrw(aes)     256           32768           15336           15560
        lrw(aes)     320           32768           16410           16346
        lrw(aes)     384           32768           18023           17465

After:
       ALGORITHM KEY (b)        DATA (B)   TIME ENC (ns)   TIME DEC (ns)
        lrw(aes)     256              64             200             203
        lrw(aes)     320              64             202             204
        lrw(aes)     384              64             204             205
        lrw(aes)     256             512             415             415
        lrw(aes)     320             512             432             440
        lrw(aes)     384             512             449             451
        lrw(aes)     256            4096            1838            1995
        lrw(aes)     320            4096            2123            1980
        lrw(aes)     384            4096            2100            2119
        lrw(aes)     256           16384            7183            6954
        lrw(aes)     320           16384            7844            7631
        lrw(aes)     384           16384            8256            8126
        lrw(aes)     256           32768           14772           14484
        lrw(aes)     320           32768           15281           15431
        lrw(aes)     384           32768           16469           16293

Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-21 13:24:52 +08:00
Ondrej Mosnacek
dc6d6d5a58 crypto: testmgr - Add test for LRW counter wrap-around
This patch adds a test vector for lrw(aes) that triggers wrap-around of
the counter, which is a tricky corner case.

Suggested-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-21 13:24:52 +08:00
Ondrej Mosnacek
fbe1a850b3 crypto: lrw - Fix out-of bounds access on counter overflow
When the LRW block counter overflows, the current implementation returns
128 as the index to the precomputed multiplication table, which has 128
entries. This patch fixes it to return the correct value (127).

Fixes: 64470f1b85 ("[CRYPTO] lrw: Liskov Rivest Wagner, a tweakable narrow block cipher mode")
Cc: <stable@vger.kernel.org> # 2.6.20+
Reported-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-21 13:24:51 +08:00
Horia Geantă
331351f89c crypto: tcrypt - fix ghash-generic speed test
ghash is a keyed hash algorithm, thus setkey needs to be called.
Otherwise the following error occurs:
$ modprobe tcrypt mode=318 sec=1
testing speed of async ghash-generic (ghash-generic)
tcrypt: test  0 (   16 byte blocks,   16 bytes per update,   1 updates):
tcrypt: hashing failed ret=-126
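
The fix amounts to setting a key before the speed test, along the lines
of this sketch ('tfm' is the ahash transform under test):

	u8 key[16] = {};	/* any key will do for a throughput measurement */

	ret = crypto_ahash_setkey(tfm, key, sizeof(key));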

Cc: <stable@vger.kernel.org> # 4.6+
Fixes: 0660511c0b ("crypto: tcrypt - Use ahash")
Tested-by: Franck Lenormand <franck.lenormand@nxp.com>
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-21 13:24:51 +08:00
Eric Biggers
a5e9f55709 crypto: chacha20 - Fix chacha20_block() keystream alignment (again)
In commit 9f480faec5 ("crypto: chacha20 - Fix keystream alignment for
chacha20_block()"), I had missed that chacha20_block() can be called
directly on the buffer passed to get_random_bytes(), which can have any
alignment.  So, while my commit didn't break anything, it didn't fully
solve the alignment problems.

Revert my solution and just update chacha20_block() to use
put_unaligned_le32(), so the output buffer need not be aligned.
This is simpler, and on many CPUs it's the same speed.

But, I kept the 'tmp' buffers in extract_crng_user() and
_get_random_bytes() 4-byte aligned, since that alignment is actually
needed for _crng_backtrack_protect() too.
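
The output step thus becomes, in sketch form ('x' is the 16-word
working state, 'stream' the caller's possibly unaligned u8 buffer):

	for (i = 0; i < 16; i++)
		put_unaligned_le32(x[i], stream + i * sizeof(u32));	/* no alignment assumed */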

Reported-by: Stephan Müller <smueller@chronox.de>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-21 13:24:50 +08:00
Ondrej Mosnacek
78105c7e76 crypto: xts - Drop use of auxiliary buffer
Since commit acb9b159c7 ("crypto: gf128mul - define gf128mul_x_* in
gf128mul.h"), the gf128mul_x_*() functions are very fast and therefore
caching the computed XTS tweaks has only negligible advantage over
computing them twice.

In fact, since the current caching implementation limits the size of
the calls to the child ecb(...) algorithm to PAGE_SIZE (usually 4096 B),
it is often actually slower than the simple recomputing implementation.

This patch simplifies the XTS template to recompute the XTS tweaks from
scratch in the second pass and thus also removes the need to allocate a
dynamic buffer using kmalloc().

As discussed at [1], the use of kmalloc causes deadlocks with dm-crypt.

PERFORMANCE RESULTS
I measured time to encrypt/decrypt a memory buffer of varying sizes with
xts(ecb-aes-aesni) using a tool I wrote ([2]) and the results suggest
that after this patch the performance is either better or comparable for
both small and large buffers. Note that there is a lot of noise in the
measurements, but the overall difference is easy to see.

Old code:
       ALGORITHM KEY (b)        DATA (B)   TIME ENC (ns)   TIME DEC (ns)
        xts(aes)     256              64             331             328
        xts(aes)     384              64             332             333
        xts(aes)     512              64             338             348
        xts(aes)     256             512             889             920
        xts(aes)     384             512            1019             993
        xts(aes)     512             512            1032             990
        xts(aes)     256            4096            2152            2292
        xts(aes)     384            4096            2453            2597
        xts(aes)     512            4096            3041            2641
        xts(aes)     256           16384            9443            8027
        xts(aes)     384           16384            8536            8925
        xts(aes)     512           16384            9232            9417
        xts(aes)     256           32768           16383           14897
        xts(aes)     384           32768           17527           16102
        xts(aes)     512           32768           18483           17322

New code:
       ALGORITHM KEY (b)        DATA (B)   TIME ENC (ns)   TIME DEC (ns)
        xts(aes)     256              64             328             324
        xts(aes)     384              64             324             319
        xts(aes)     512              64             320             322
        xts(aes)     256             512             476             473
        xts(aes)     384             512             509             492
        xts(aes)     512             512             531             514
        xts(aes)     256            4096            2132            1829
        xts(aes)     384            4096            2357            2055
        xts(aes)     512            4096            2178            2027
        xts(aes)     256           16384            6920            6983
        xts(aes)     384           16384            8597            7505
        xts(aes)     512           16384            7841            8164
        xts(aes)     256           32768           13468           12307
        xts(aes)     384           32768           14808           13402
        xts(aes)     512           32768           15753           14636

[1] https://lkml.org/lkml/2018/8/23/1315
[2] https://gitlab.com/omos/linux-crypto-bench

Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-21 13:24:50 +08:00
Martin K. Petersen
dd8b083f9a crypto: api - Introduce notifier for new crypto algorithms
Introduce a facility that can be used to receive a notification
callback when a new algorithm becomes available. This can be used by
existing crypto registrations to trigger a switch from a software-only
algorithm to a hardware-accelerated version.

A new CRYPTO_MSG_ALG_LOADED state is introduced to the existing crypto
notification chain, and the register/unregister functions are exported
so they can be called by subsystems outside of crypto.
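
A hypothetical consumer might look like this (the interpretation of
'data' is an assumption here):

	static int alg_loaded(struct notifier_block *nb, unsigned long event,
			      void *data)
	{
		if (event != CRYPTO_MSG_ALG_LOADED)
			return NOTIFY_DONE;
		/* 'data' identifies the newly registered algorithm; the caller
		 * could now switch to a hardware-accelerated version. */
		return NOTIFY_OK;
	}

	static struct notifier_block alg_nb = { .notifier_call = alg_loaded };

	crypto_register_notifier(&alg_nb);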

Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Suggested-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-04 11:37:04 +08:00
Ard Biesheuvel
ab8085c130 crypto: x86 - remove SHA multibuffer routines and mcryptd
As it turns out, the AVX2 multibuffer SHA routines are currently
broken [0], in a way that would have likely been noticed if this
code were in wide use. Since the code is too complicated to be
maintained by anyone except the original authors, and since the
performance benefits for real-world use cases are debatable to
begin with, it is better to drop it entirely for the moment.

[0] https://marc.info/?l=linux-crypto-vger&m=153476243825350&w=2

Suggested-by: Eric Biggers <ebiggers@google.com>
Cc: Megha Dey <megha.dey@linux.intel.com>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-04 11:37:04 +08:00
Kees Cook
f3569fd613 crypto: shash - Remove VLA usage in unaligned hashing
In the quest to remove all stack VLA usage from the kernel[1], this uses
the newly defined max alignment to perform unaligned hashing to avoid
VLAs, and drops the helper function while adding sanity checks on the
resulting buffer sizes. Additionally, the __aligned_largest macro is
removed since this helper was the only user.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-04 11:37:03 +08:00
Kees Cook
a9f7f88a12 crypto: api - Introduce generic max blocksize and alignmask
In the quest to remove all stack VLA usage from the kernel[1], this
exposes a new general upper bound on crypto blocksize and alignmask
(higher than for the existing cipher limits) for VLA removal,
and introduces new checks.

At present, the highest cra_alignmask in the kernel is 63. The highest
cra_blocksize is 144 (SHA3_224_BLOCK_SIZE, 18 8-byte words). For the
new blocksize limit, I went with 160 (20 8-byte words).

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-04 11:35:04 +08:00
Kees Cook
b68a7ec1e9 crypto: hash - Remove VLA usage
In the quest to remove all stack VLA usage from the kernel[1], this
removes the VLAs in SHASH_DESC_ON_STACK (via crypto_shash_descsize())
by using the maximum allowable size (which is now more clearly captured
in a macro), along with a few other cases. Similar limits are turned into
macros as well.

A review of existing sizes shows that SHA512_DIGEST_SIZE (64) is the
largest digest size and that sizeof(struct sha3_state) (360) is the
largest descriptor size. The corresponding maximums are reduced.
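
A usage sketch with the fixed-size stack descriptor ('tfm' is assumed
to be an already-allocated shash transform):

	SHASH_DESC_ON_STACK(desc, tfm);	/* fixed maximum size, no VLA */
	int err;

	desc->tfm = tfm;
	desc->flags = 0;		/* request flags; field present in this era's API */
	err = crypto_shash_digest(desc, data, len, out);
	shash_desc_zero(desc);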

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-04 11:35:03 +08:00
Ard Biesheuvel
ebf533adc8 crypto: ccm - Remove VLA usage
In the quest to remove all stack VLA usage from the kernel[1], this drops
AHASH_REQUEST_ON_STACK by preallocating the ahash request area combined
with the skcipher area (which are not used at the same time).

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-04 11:35:03 +08:00
Kees Cook
3bdd23f886 crypto: xcbc - Remove VLA usage
In the quest to remove all stack VLA usage from the kernel[1], this uses
the maximum blocksize and adds a sanity check. For xcbc, the blocksize
must always be 16, so use that, since it's already being enforced during
instantiation.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-04 11:35:03 +08:00
Jason A. Donenfeld
578bdaabd0 crypto: speck - remove Speck
These are unused, undesired, and have never actually been used by
anybody. The original authors of this code have changed their mind about
its inclusion. While originally proposed for disk encryption on low-end
devices, the idea was discarded [1] in favor of something else before
that could really get going. Therefore, this patch removes Speck.

[1] https://marc.info/?l=linux-crypto-vger&m=153359499015659

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Acked-by: Eric Biggers <ebiggers@google.com>
Cc: stable@vger.kernel.org
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-04 11:35:03 +08:00
Linus Torvalds
13bf2cf9e2 DMAengine updates for v4.19-rc1
This round brings a couple of framework changes, a new driver and the
 usual driver updates:
  - New managed helper for dmaengine framework registration
  - Split dmaengine pause capability to pause and resume and allow drivers to
    report that individually
  - Update dma_request_chan_by_mask() to handle deferred probing
  - Move imx-sdma to use virt-dma
  - New driver for Actions Semi Owl family S900 controller
  - Minor updates to intel, renesas, mv_xor, pl330 etc

Merge tag 'dmaengine-4.19-rc1' of git://git.infradead.org/users/vkoul/slave-dma

Pull DMAengine updates from Vinod Koul:
 "This round brings couple of framework changes, a new driver and usual
  driver updates:

   - new managed helper for dmaengine framework registration

   - split dmaengine pause capability to pause and resume and allow
     drivers to report that individually

   - update dma_request_chan_by_mask() to handle deferred probing

   - move imx-sdma to use virt-dma

   - new driver for Actions Semi Owl family S900 controller

   - minor updates to intel, renesas, mv_xor, pl330 etc"

* tag 'dmaengine-4.19-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (46 commits)
  dmaengine: Add Actions Semi Owl family S900 DMA driver
  dt-bindings: dmaengine: Add binding for Actions Semi Owl SoCs
  dmaengine: sh: rcar-dmac: Should not stop the DMAC by rcar_dmac_sync_tcr()
  dmaengine: mic_x100_dma: use the new helper to simplify the code
  dmaengine: add a new helper dmaenginem_async_device_register
  dmaengine: imx-sdma: add memcpy interface
  dmaengine: imx-sdma: add SDMA_BD_MAX_CNT to replace '0xffff'
  dmaengine: dma_request_chan_by_mask() to handle deferred probing
  dmaengine: pl330: fix irq race with terminate_all
  dmaengine: Revert "dmaengine: mv_xor_v2: enable COMPILE_TEST"
  dmaengine: mv_xor_v2: use {lower,upper}_32_bits to configure HW descriptor address
  dmaengine: mv_xor_v2: enable COMPILE_TEST
  dmaengine: mv_xor_v2: move unmap to before callback
  dmaengine: mv_xor_v2: convert callback to helper function
  dmaengine: mv_xor_v2: kill the tasklets upon exit
  dmaengine: mv_xor_v2: explicitly freeup irq
  dmaengine: sh: rcar-dmac: Add dma_pause operation
  dmaengine: sh: rcar-dmac: add a new function to clear CHCR.DE with barrier
  dmaengine: idma64: Support dmaengine_terminate_sync()
  dmaengine: hsu: Support dmaengine_terminate_sync()
  ...
2018-08-18 15:55:59 -07:00
Yannik Sembritzki
817aef2600 Replace magic for trusting the secondary keyring with #define
Replace the use of a magic number that indicates that verify_*_signature()
should use the secondary keyring with a symbol.

Signed-off-by: Yannik Sembritzki <yannik@sembritzki.me>
Signed-off-by: David Howells <dhowells@redhat.com>
Cc: keyrings@vger.kernel.org
Cc: linux-security-module@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-08-16 09:57:20 -07:00
Linus Torvalds
f91e654474 Merge branch 'next-integrity' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security
Pull integrity updates from James Morris:
 "This adds support for EVM signatures based on larger digests, contains
  a new audit record AUDIT_INTEGRITY_POLICY_RULE to differentiate the
  IMA policy rules from the IMA-audit messages, addresses two deadlocks
  due to either loading or searching for crypto algorithms, and cleans
  up the audit messages"

* 'next-integrity' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security:
  EVM: fix return value check in evm_write_xattrs()
  integrity: prevent deadlock during digsig verification.
  evm: Allow non-SHA1 digital signatures
  evm: Don't deadlock if a crypto algorithm is unavailable
  integrity: silence warning when CONFIG_SECURITYFS is not enabled
  ima: Differentiate auditing policy rules from "audit" actions
  ima: Do not audit if CONFIG_INTEGRITY_AUDIT is not set
  ima: Use audit_log_format() rather than audit_log_string()
  ima: Call audit_log_string() rather than logging it untrusted
2018-08-15 22:54:12 -07:00
Linus Torvalds
dafa5f6577 Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto updates from Herbert Xu:
 "API:
   - Fix dcache flushing crash in skcipher.
   - Add hash finup self-tests.
   - Reschedule during speed tests.

  Algorithms:
   - Remove insecure vmac and replace it with vmac64.
   - Add public key verification for DH/ECDH.

  Drivers:
   - Decrease priority of sha-mb on x86.
   - Improve NEON latency/throughput on ARM64.
   - Add md5/sha384/sha512/des/3des to inside-secure.
   - Support eip197d in inside-secure.
   - Only register algorithms supported by the host in virtio.
   - Add cts and remove incompatible cts1 from ccree.
   - Add hisilicon SEC security accelerator driver.
   - Replace msm hwrng driver with qcom pseudo rng driver.

  Misc:
   - Centralize CRC polynomials"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (121 commits)
  crypto: arm64/ghash-ce - implement 4-way aggregation
  crypto: arm64/ghash-ce - replace NEON yield check with block limit
  crypto: hisilicon - sec_send_request() can be static
  lib/mpi: remove redundant variable esign
  crypto: arm64/aes-ce-gcm - don't reload key schedule if avoidable
  crypto: arm64/aes-ce-gcm - implement 2-way aggregation
  crypto: arm64/aes-ce-gcm - operate on two input blocks at a time
  crypto: dh - make crypto_dh_encode_key() more robust
  crypto: dh - fix calculating encoded key size
  crypto: ccp - Check for NULL PSP pointer at module unload
  crypto: arm/chacha20 - always use vrev for 16-bit rotates
  crypto: ccree - allow bigger than sector XTS op
  crypto: ccree - zero all of request ctx before use
  crypto: ccree - remove cipher ivgen left overs
  crypto: ccree - drop useless type flag during reg
  crypto: ablkcipher - fix crash flushing dcache in error path
  crypto: blkcipher - fix crash flushing dcache in error path
  crypto: skcipher - fix crash flushing dcache in error path
  crypto: skcipher - remove unnecessary setting of walk->nbytes
  crypto: scatterwalk - remove scatterwalk_samebuf()
  ...
2018-08-15 16:01:47 -07:00
Eric Biggers
d6e43798b3 crypto: dh - make crypto_dh_encode_key() more robust
Make it return -EINVAL when the supplied buffer length does not match
crypto_dh_key_len(), rather than overflowing the buffer.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-08-03 18:06:06 +08:00
Eric Biggers
35f7d5225f crypto: dh - fix calculating encoded key size
DH_KPP_SECRET_MIN_SIZE was not increased to include 'q_size', causing
an out-of-bounds write of 4 bytes in crypto_dh_encode_key() and an
out-of-bounds read of 4 bytes in crypto_dh_decode_key().  Fix it, and
fix the lengths of the test vectors to match.

Reported-by: syzbot+6d38d558c25b53b8f4ed@syzkaller.appspotmail.com
Fixes: e3fe0ae129 ("crypto: dh - add public key verification test")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-08-03 18:06:06 +08:00
Eric Biggers
318abdfbe7 crypto: ablkcipher - fix crash flushing dcache in error path
Like the skcipher_walk and blkcipher_walk cases:

scatterwalk_done() is only meant to be called after a nonzero number of
bytes have been processed, since scatterwalk_pagedone() will flush the
dcache of the *previous* page.  But in the error case of
ablkcipher_walk_done(), e.g. if the input wasn't an integer number of
blocks, scatterwalk_done() was actually called after advancing 0 bytes.
This caused a crash ("BUG: unable to handle kernel paging request")
during '!PageSlab(page)' on architectures like arm and arm64 that define
ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, provided that the input was
page-aligned as in that case walk->offset == 0.

Fix it by reorganizing ablkcipher_walk_done() to skip the
scatterwalk_advance() and scatterwalk_done() if an error has occurred.
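
A simplified sketch of the reordering (shape only, not the literal
patch):

    /* only advance and flush the dcache when bytes were processed */
    if (err >= 0) {
        scatterwalk_advance(&walk->out, n);
        scatterwalk_done(&walk->out, 1, total);
    }
    /* on error, fall through to unwinding without a zero-length flush */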

Reported-by: Liu Chao <liuchao741@huawei.com>
Fixes: bf06099db1 ("crypto: skcipher - Add ablkcipher_walk interfaces")
Cc: <stable@vger.kernel.org> # v2.6.35+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-08-03 18:06:04 +08:00
Eric Biggers
0868def3e4 crypto: blkcipher - fix crash flushing dcache in error path
Like the skcipher_walk case:

scatterwalk_done() is only meant to be called after a nonzero number of
bytes have been processed, since scatterwalk_pagedone() will flush the
dcache of the *previous* page.  But in the error case of
blkcipher_walk_done(), e.g. if the input wasn't an integer number of
blocks, scatterwalk_done() was actually called after advancing 0 bytes.
This caused a crash ("BUG: unable to handle kernel paging request")
during '!PageSlab(page)' on architectures like arm and arm64 that define
ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, provided that the input was
page-aligned as in that case walk->offset == 0.

Fix it by reorganizing blkcipher_walk_done() to skip the
scatterwalk_advance() and scatterwalk_done() if an error has occurred.

This bug was found by syzkaller fuzzing.

Reproducer, assuming ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE:

	#include <linux/if_alg.h>
	#include <sys/socket.h>
	#include <unistd.h>

	int main()
	{
		struct sockaddr_alg addr = {
			.salg_type = "skcipher",
			.salg_name = "ecb(aes-generic)",
		};
		char buffer[4096] __attribute__((aligned(4096))) = { 0 };
		int fd;

		fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
		bind(fd, (void *)&addr, sizeof(addr));
		setsockopt(fd, SOL_ALG, ALG_SET_KEY, buffer, 16);
		fd = accept(fd, NULL, NULL);
		write(fd, buffer, 15);
		read(fd, buffer, 15);
	}

Reported-by: Liu Chao <liuchao741@huawei.com>
Fixes: 5cde0af2a9 ("[CRYPTO] cipher: Added block cipher type")
Cc: <stable@vger.kernel.org> # v2.6.19+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-08-03 18:06:04 +08:00
Eric Biggers
8088d3dd4d crypto: skcipher - fix crash flushing dcache in error path
scatterwalk_done() is only meant to be called after a nonzero number of
bytes have been processed, since scatterwalk_pagedone() will flush the
dcache of the *previous* page.  But in the error case of
skcipher_walk_done(), e.g. if the input wasn't an integer number of
blocks, scatterwalk_done() was actually called after advancing 0 bytes.
This caused a crash ("BUG: unable to handle kernel paging request")
during '!PageSlab(page)' on architectures like arm and arm64 that define
ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, provided that the input was
page-aligned as in that case walk->offset == 0.

Fix it by reorganizing skcipher_walk_done() to skip the
scatterwalk_advance() and scatterwalk_done() if an error has occurred.

This bug was found by syzkaller fuzzing.

Reproducer, assuming ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE:

	#include <linux/if_alg.h>
	#include <sys/socket.h>
	#include <unistd.h>

	int main()
	{
		struct sockaddr_alg addr = {
			.salg_type = "skcipher",
			.salg_name = "cbc(aes-generic)",
		};
		char buffer[4096] __attribute__((aligned(4096))) = { 0 };
		int fd;

		fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
		bind(fd, (void *)&addr, sizeof(addr));
		setsockopt(fd, SOL_ALG, ALG_SET_KEY, buffer, 16);
		fd = accept(fd, NULL, NULL);
		write(fd, buffer, 15);
		read(fd, buffer, 15);
	}

Reported-by: Liu Chao <liuchao741@huawei.com>
Fixes: b286d8b1a6 ("crypto: skcipher - Add skcipher walk interface")
Cc: <stable@vger.kernel.org> # v4.10+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-08-03 18:06:04 +08:00
Eric Biggers
2a57c0be22 crypto: skcipher - remove unnecessary setting of walk->nbytes
Setting 'walk->nbytes = walk->total' in skcipher_walk_first() doesn't
make sense because actually walk->nbytes needs to be set to the length
of the first step in the walk, which may be less than walk->total.  This
is done by skcipher_walk_next() which is called immediately afterwards.
Also walk->nbytes was already set to 0 in skcipher_walk_skcipher(),
which is a better default value in case it is not set later.

Therefore, remove the unnecessary assignment to walk->nbytes.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-08-03 18:06:04 +08:00
Eric Biggers
8c30fbe63e crypto: scatterwalk - remove 'chain' argument from scatterwalk_crypto_chain()
All callers pass chain=0 to scatterwalk_crypto_chain().

Remove this unneeded parameter.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-08-03 18:06:03 +08:00
Eric Biggers
0567fc9e90 crypto: skcipher - fix aligning block size in skcipher_copy_iv()
The ALIGN() macro needs to be passed the alignment, not the alignmask
(which is the alignment minus 1).
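
A small userspace demonstration of the difference (not kernel code):

    #include <stdio.h>

    #define ALIGN(x, a)  (((x) + (a) - 1) & ~((a) - 1))

    int main(void)
    {
        unsigned int alignmask = 7;  /* i.e. alignment 8 */

        printf("%u\n", ALIGN(13, alignmask + 1));  /* correct: 16 */
        printf("%u\n", ALIGN(13, alignmask));      /* buggy:   17 */
        return 0;
    }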

Fixes: b286d8b1a6 ("crypto: skcipher - Add skcipher walk interface")
Cc: <stable@vger.kernel.org> # v4.10+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-08-03 18:06:03 +08:00
Horia Geantă
2af632996b crypto: tcrypt - reschedule during speed tests
Avoid RCU stalls in the case of a non-preemptible kernel and lengthy
speed tests by rescheduling when advancing from one block size
to another.
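
A sketch of the idea (loop and helper names hypothetical; cond_resched()
is the real API):

    for (i = 0; block_sizes[i] != 0; i++) {
        test_one_block_size(tfm, block_sizes[i]);
        cond_resched();    /* give RCU a quiescent state between sizes */
    }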

Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-08-03 18:06:02 +08:00
Stephan Müller
43490e8046 crypto: drbg - in-place cipher operation for CTR
The cipher implementations of the kernel crypto API favor in-place
cipher operations. Thus, switch the CTR cipher operation in the DRBG to
perform in-place operations. This is implemented by using the output
buffer as input buffer and zeroizing it before the cipher operation to
implement a CTR encryption of a NULL buffer.
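
A sketch with the real skcipher API (request setup abbreviated):

    /* generate the CTR keystream as an in-place encryption of zeros */
    memset(outbuf, 0, outlen);
    sg_init_one(&sg, outbuf, outlen);
    skcipher_request_set_crypt(req, &sg, &sg, outlen, iv);  /* src == dst */
    ret = crypto_skcipher_encrypt(req);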

The speed improvement is quite visible with the following comparison
using the LRNG implementation.

Without the patch set:

      16 bytes|           12.267661 MB/s|    61338304 bytes |  5000000213 ns
      32 bytes|           23.603770 MB/s|   118018848 bytes |  5000000073 ns
      64 bytes|           46.732262 MB/s|   233661312 bytes |  5000000241 ns
     128 bytes|           90.038042 MB/s|   450190208 bytes |  5000000244 ns
     256 bytes|          160.399616 MB/s|   801998080 bytes |  5000000393 ns
     512 bytes|          259.878400 MB/s|  1299392000 bytes |  5000001675 ns
    1024 bytes|          386.050662 MB/s|  1930253312 bytes |  5000001661 ns
    2048 bytes|          493.641728 MB/s|  2468208640 bytes |  5000001598 ns
    4096 bytes|          581.835981 MB/s|  2909179904 bytes |  5000003426 ns

With the patch set:

      16 bytes |         17.051142 MB/s |     85255712 bytes |  5000000854 ns
      32 bytes |         32.695898 MB/s |    163479488 bytes |  5000000544 ns
      64 bytes |         64.490739 MB/s |    322453696 bytes |  5000000954 ns
     128 bytes |        123.285043 MB/s |    616425216 bytes |  5000000201 ns
     256 bytes |        233.434573 MB/s |   1167172864 bytes |  5000000573 ns
     512 bytes |        384.405197 MB/s |   1922025984 bytes |  5000000671 ns
    1024 bytes |        566.313370 MB/s |   2831566848 bytes |  5000001080 ns
    2048 bytes |        744.518042 MB/s |   3722590208 bytes |  5000000926 ns
    4096 bytes |        867.501670 MB/s |   4337508352 bytes |  5000002181 ns

Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-08-03 18:05:48 +08:00
Herbert Xu
c5f5aeef9b Merge git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux
Merge mainline to pick up c7513c2a27 ("crypto/arm64: aes-ce-gcm -
add missing kernel_neon_begin/end pair").
2018-08-03 17:55:12 +08:00
Christoph Hellwig
dd979b4df8 net: simplify sock_poll_wait
The wait_address argument is always directly derived from the filp
argument, so remove it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-30 09:10:25 -07:00
Gustavo A. R. Silva
a478908993 crypto: rmd320 - use swap macro in rmd320_transform
Make use of the swap macro and remove unnecessary variable *tmp*.
This makes the code easier to read and maintain.
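
Concretely, the transformation has this shape (illustrative variables):

    /* before */
    tmp = x;  x = y;  y = tmp;

    /* after, using the kernel's swap() macro */
    swap(x, y);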

This code was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-07-27 19:28:36 +08:00
Gustavo A. R. Silva
d75f482eaf crypto: rmd256 - use swap macro in rmd256_transform
Make use of the swap macro and remove unnecessary variable *tmp*.
This makes the code easier to read and maintain.

This code was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-07-27 19:28:36 +08:00
Stephan Mueller
aef66587f1 crypto: ecdh - fix typo of P-192 b value
Fix the b value to be compliant with FIPS 186-4 D.1.2.1. This fix is
required to make sure the SP800-56A public key test passes for P-192.

Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-07-20 13:51:22 +08:00
Stephan Mueller
c98fae5e29 crypto: dh - update test for public key verification
Adding a zero byte-length for the DH parameter Q value disables the
public key verification test for the given test vector.

Reported-by: Eric Biggers <ebiggers3@gmail.com>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-07-20 13:51:21 +08:00
Stephan Mueller
cf862cbc83 crypto: drbg - eliminate constant reinitialization of SGL
The CTR DRBG requires two SGLs pointing to input/output buffers for the
CTR AES operation. The SGLs used always have only one entry. Thus, the
SGL can be initialized at allocation time, avoiding re-initialization
of the SGLs during each call.

The performance is increased by about 1 to 3 percent depending on the
size of the requested buffer size.

Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-07-20 13:51:21 +08:00
Gustavo A. R. Silva
3fd8093b41 crypto: dh - fix memory leak
In case memory resources for *base* were allocated, release them
before return.

Addresses-Coverity-ID: 1471702 ("Resource leak")
Fixes: e3fe0ae129 ("crypto: dh - add public key verification test")
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Reviewed-by: Stephan Müller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-07-20 13:51:21 +08:00
Linus Torvalds
b4394c3435 Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto fix from Herbert Xu:
 "This fixes an allocation error-path bug in af_alg discovered by
  syzkaller"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
  crypto: af_alg - Initialize sg_num_bytes in error code path
2018-07-19 07:32:44 -07:00
Matthew Garrett
e2861fa716 evm: Don't deadlock if a crypto algorithm is unavailable
When EVM attempts to appraise a file signed with a crypto algorithm the
kernel doesn't have support for, it will cause the kernel to trigger a
module load. If the EVM policy includes appraisal of kernel modules this
will in turn call back into EVM - since EVM is holding a lock until the
crypto initialisation is complete, this triggers a deadlock. Add a
CRYPTO_NOLOAD flag and skip module loading if it's set, and add that flag
in the EVM case in order to fail gracefully with an error message
instead of deadlocking.

Signed-off-by: Matthew Garrett <mjg59@google.com>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
2018-07-18 07:27:22 -04:00
Stephan Mueller
2546da9921 crypto: af_alg - Initialize sg_num_bytes in error code path
The RX SGL being processed is already registered with the RX SGL
tracking list to support proper cleanup. The cleanup code path uses the
sg_num_bytes variable, which must therefore always be initialized, even
in the error code path.

Signed-off-by: Stephan Mueller <smueller@chronox.de>
Reported-by: syzbot+9c251bdd09f83b92ba95@syzkaller.appspotmail.com
#syz test: https://github.com/google/kmsan.git master
CC: <stable@vger.kernel.org> #4.14
Fixes: e870456d8e ("crypto: algif_skcipher - overhaul memory management")
Fixes: d887c52d6a ("crypto: algif_aead - overhaul memory management")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-07-13 18:24:23 +08:00
Gilad Ben-Yossef
7671509593 crypto: testmgr - add hash finup tests
The testmgr hash tests were testing init, digest, update and final
methods but not the finup method. Add a test for this one too.

While doing this, make sure we only run the partial tests once with
the digest tests and skip them with the final and finup tests since
they are the same.
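
In ahash terms, the new test exercises this call pattern (sketch):

    ahash_request_set_crypt(req, sg, digest, nbytes);
    err = crypto_ahash_init(req);
    if (!err)
        err = crypto_ahash_finup(req);  /* update() + final() in one call */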

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-07-09 00:33:35 +08:00
Eric Biggers
3f4a537a26 crypto: aead - remove useless setting of type flags
Some aead algorithms set .cra_flags = CRYPTO_ALG_TYPE_AEAD.  But this is
redundant with the C structure type ('struct aead_alg'), and
crypto_register_aead() already sets the type flag automatically,
clearing any type flag that was already there.  Apparently the useless
assignment has just been copy+pasted around.
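
The line being removed is literally this (redundant with what
crypto_register_aead() sets anyway):

    .cra_flags = CRYPTO_ALG_TYPE_AEAD,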

So, remove the useless assignment from all the aead algorithms.

This patch shouldn't change any actual behavior.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-07-09 00:30:26 +08:00
Eric Biggers
e50944e219 crypto: shash - remove useless setting of type flags
Many shash algorithms set .cra_flags = CRYPTO_ALG_TYPE_SHASH.  But this
is redundant with the C structure type ('struct shash_alg'), and
crypto_register_shash() already sets the type flag automatically,
clearing any type flag that was already there.  Apparently the useless
assignment has just been copy+pasted around.

So, remove the useless assignment from all the shash algorithms.

This patch shouldn't change any actual behavior.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-07-09 00:30:24 +08:00
Eric Biggers
e47890163a crypto: sha512_generic - add cra_priority
sha512-generic and sha384-generic had a cra_priority of 0, so it wasn't
possible to have a lower priority SHA-512 or SHA-384 implementation, as
is desired for sha512_mb which is only useful under certain workloads
and is otherwise extremely slow.  Change them to priority 100, which is
the priority used for many of the other generic algorithms.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-07-09 00:30:21 +08:00
Eric Biggers
b73b7ac0a7 crypto: sha256_generic - add cra_priority
sha256-generic and sha224-generic had a cra_priority of 0, so it wasn't
possible to have a lower priority SHA-256 or SHA-224 implementation, as
is desired for sha256_mb which is only useful under certain workloads
and is otherwise extremely slow.  Change them to priority 100, which is
the priority used for many of the other generic algorithms.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-07-09 00:30:20 +08:00
Eric Biggers
90ef3e4835 crypto: sha1_generic - add cra_priority
sha1-generic had a cra_priority of 0, so it wasn't possible to have a
lower priority SHA-1 implementation, as is desired for sha1_mb which is
only useful under certain workloads and is otherwise extremely slow.
Change it to priority 100, which is the priority used for many of the
other generic algorithms.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-07-09 00:30:20 +08:00
Stephan Mueller
e3fe0ae129 crypto: dh - add public key verification test
According to SP800-56A section 5.6.2.1, the public key to be processed
for the DH operation shall be checked for appropriateness. The check
shall cover the full verification test in case the domain parameter Q
is provided as defined in SP800-56A section 5.6.2.3.1. If Q is not
provided, the partial check according to SP800-56A section 5.6.2.3.2 is
performed.

The full verification test requires the presence of the domain parameter
Q. Thus, the patch adds support for handling Q. It is permissible not
to provide the Q value as part of the domain parameters, so the
interface stays backwards-compatible with callers that supply only P
and G. However, if Q is provided, it is imported.

Without the test, the NIST ACVP testing fails. After adding this check,
the NIST ACVP testing passes. Testing without providing the Q domain
parameter has been performed to verify the interface has not changed.
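
The full check boils down to the following (sketch using the kernel's
MPI helpers; variable names hypothetical):

    /* SP800-56A full validation: 2 <= y <= p - 2 and y^q mod p == 1 */
    if (mpi_cmp_ui(y, 1) <= 0 || mpi_cmp(y, p_minus_one) >= 0)
        return -EINVAL;
    if (mpi_powm(val, y, q, p) || mpi_cmp_ui(val, 1) != 0)
        return -EINVAL;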

Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-07-09 00:26:22 +08:00
Stafford Horne
cefd769fd0 crypto: skcipher - Fix -Wstringop-truncation warnings
As of GCC 9.0.0 the build is reporting warnings like:

    crypto/ablkcipher.c: In function ‘crypto_ablkcipher_report’:
    crypto/ablkcipher.c:374:2: warning: ‘strncpy’ specified bound 64 equals destination size [-Wstringop-truncation]
      strncpy(rblkcipher.geniv, alg->cra_ablkcipher.geniv ?: "<default>",
      ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
       sizeof(rblkcipher.geniv));
       ~~~~~~~~~~~~~~~~~~~~~~~~~

This means strncpy() might create a non-null-terminated string.  Fix
this by explicitly performing '\0' termination.
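
One way to do that, along the lines of the warning's call site:

    strncpy(rblkcipher.geniv, alg->cra_ablkcipher.geniv ?: "<default>",
            sizeof(rblkcipher.geniv) - 1);
    rblkcipher.geniv[sizeof(rblkcipher.geniv) - 1] = '\0';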

Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Eric Biggers <ebiggers3@gmail.com>
Cc: Nick Desaulniers <nick.desaulniers@gmail.com>
Signed-off-by: Stafford Horne <shorne@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-07-09 00:26:20 +08:00
Stephan Mueller
ea169a30a6 crypto: ecdh - add public key verification test
According to SP800-56A section 5.6.2.1, the public key to be processed
for the ECDH operation shall be checked for appropriateness. When the
public key is considered to be an ephemeral key, the partial validation
test as defined in SP800-56A section 5.6.2.3.4 can be applied.

The partial verification test requires the presence of the field
elements a and b. For the implemented NIST curves, b is defined in
FIPS 186-4 appendix D.1.2. The element a is implicitly given by the
Weierstrass equation in D.1.2, where a = p - 3.
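
Concretely, a candidate point (x, y) passes the curve-equation part of
the partial check iff

    y^2 == x^3 - 3*x + b  (mod p)

i.e. the Weierstrass equation with a = p - 3 substituted.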

Without the test, the NIST ACVP testing fails. After adding this check,
the NIST ACVP testing passes.

Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-07-09 00:26:19 +08:00
Denis Efremov
e4e4730698 crypto: skcipher - remove the exporting of skcipher_walk_next
The function skcipher_walk_next is declared static yet marked
EXPORT_SYMBOL_GPL. It is confusing for an internal function to be
exported: such a function is visible in its own .c file and to all
other modules, yet other .c files of the same module can't use it,
even though other modules can. Since this is an internal function and
not a crucial part of the API, the patch simply removes the
EXPORT_SYMBOL_GPL marking of skcipher_walk_next.

Found by Linux Driver Verification project (linuxtesting.org).

Signed-off-by: Denis Efremov <efremov@linux.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-07-01 21:00:47 +08:00
Eric Biggers
0917b87312 crypto: vmac - remove insecure version with hardcoded nonce
Remove the original version of the VMAC template that had the nonce
hardcoded to 0 and produced a digest with the wrong endianness.  I'm
unsure whether this had users or not (there are no explicit in-kernel
references to it), but given that the hardcoded nonce made it wildly
insecure unless a unique key was used for each message, let's try
removing it and see if anyone complains.

Leave the new "vmac64" template that requires the nonce to be explicitly
specified as the first 16 bytes of data and uses the correct endianness
for the digest.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-07-01 21:00:44 +08:00
Eric Biggers
ed331adab3 crypto: vmac - add nonced version with big endian digest
Currently the VMAC template uses a "nonce" hardcoded to 0, which makes
it insecure unless a unique key is set for every message.  Also, the
endianness of the final digest is wrong: the implementation uses little
endian, but the VMAC specification has it as big endian, as do other
VMAC implementations such as the one in Crypto++.

Add a new VMAC template where the nonce is passed as the first 16 bytes
of data (similar to what is done for Poly1305's nonce), and the digest
is big endian.  Call it "vmac64", since the old name of simply "vmac"
didn't clarify whether the implementation is of VMAC-64 or of VMAC-128
(which produce 64-bit and 128-bit digests respectively); so we fix the
naming ambiguity too.
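
Illustrative userspace use of the new template, in the style of the
reproducers elsewhere in this log (message bytes hypothetical):

    #include <linux/if_alg.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_alg addr = {
            .salg_type = "hash",
            .salg_name = "vmac64(aes)",
        };
        char key[16] = { 0 }, msg[32] = { 0 }, digest[8];
        int tfmfd, fd;

        tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
        bind(tfmfd, (void *)&addr, sizeof(addr));
        setsockopt(tfmfd, SOL_ALG, ALG_SET_KEY, key, sizeof(key));
        fd = accept(tfmfd, NULL, NULL);
        /* the first 16 bytes written are the nonce, the rest is data */
        write(fd, msg, sizeof(msg));
        read(fd, digest, sizeof(digest));  /* 64-bit big-endian digest */
        return 0;
    }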

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-07-01 21:00:43 +08:00
Eric Biggers
bb29648102 crypto: vmac - separate tfm and request context
syzbot reported a crash in vmac_final() when multiple threads
concurrently use the same "vmac(aes)" transform through AF_ALG.  The bug
is pretty fundamental: the VMAC template doesn't separate per-request
state from per-tfm (per-key) state like the other hash algorithms do,
but rather stores it all in the tfm context.  That's wrong.

Also, vmac_final() incorrectly zeroes most of the state including the
derived keys and cached pseudorandom pad.  Therefore, only the first
VMAC invocation with a given key calculates the correct digest.

Fix these bugs by splitting the per-tfm state from the per-request state
and using the proper init/update/final sequencing for requests.

Reproducer for the crash:

    #include <linux/if_alg.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main()
    {
            int fd;
            struct sockaddr_alg addr = {
                    .salg_type = "hash",
                    .salg_name = "vmac(aes)",
            };
            char buf[256] = { 0 };

            fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
            bind(fd, (void *)&addr, sizeof(addr));
            setsockopt(fd, SOL_ALG, ALG_SET_KEY, buf, 16);
            fork();
            fd = accept(fd, NULL, NULL);
            for (;;)
                    write(fd, buf, 256);
    }

The immediate cause of the crash is that vmac_ctx_t.partial_size exceeds
VMAC_NHBYTES, causing vmac_final() to memset() a negative length.

Reported-by: syzbot+264bca3a6e8d645550d3@syzkaller.appspotmail.com
Fixes: f1939f7c56 ("crypto: vmac - New hash algorithm for intel_txt support")
Cc: <stable@vger.kernel.org> # v2.6.32+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-07-01 21:00:42 +08:00
Eric Biggers
73bf20ef3d crypto: vmac - require a block cipher with 128-bit block size
The VMAC template assumes the block cipher has a 128-bit block size, but
it failed to check for that.  Thus it was possible to instantiate it
using a 64-bit block size cipher, e.g. "vmac(cast5)", causing
uninitialized memory to be used.

Add the needed check when instantiating the template.
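
A sketch of the added check (shape simplified; the template can only be
instantiated over a cipher with a 16-byte block):

    err = -EINVAL;
    if (alg->cra_blocksize != 16)    /* VMAC needs a 128-bit block size */
        goto out_put_alg;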

Fixes: f1939f7c56 ("crypto: vmac - New hash algorithm for intel_txt support")
Cc: <stable@vger.kernel.org> # v2.6.32+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-07-01 21:00:41 +08:00
Linus Torvalds
a11e1d432b Revert changes to convert to ->poll_mask() and aio IOCB_CMD_POLL
The poll() changes were not well thought out, and completely
unexplained.  They also caused a huge performance regression, because
"->poll()" was no longer a trivial file operation that just called down
to the underlying file operations, but instead did at least two indirect
calls.

Indirect calls are sadly slow now with the Spectre mitigation, but the
performance problem could at least be largely mitigated by changing the
"->get_poll_head()" operation to just have a per-file-descriptor pointer
to the poll head instead.  That gets rid of one of the new indirections.

But that doesn't fix the new complexity that is completely unwarranted
for the regular case.  The (undocumented) reason for the poll() changes
was some alleged AIO poll race fixing, but we don't make the common case
slower and more complex for some uncommon special case, so this all
really needs way more explanations and most likely a fundamental
redesign.

[ This revert is a revert of about 30 different commits, not reverted
  individually because that would just be unnecessarily messy  - Linus ]

Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-06-28 10:40:47 -07:00
Linus Torvalds
813835028e Merge branch 'fixes-v4.18-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security
Pull security subsystem fixes from James Morris:

 - Smack: fix a regression caused by 1bbc55131e

 - X.509: fix a (usually un-seen) bug in RSA signature parsing

* 'fixes-v4.18-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security:
  X.509: unpack RSA signatureValue field from BIT STRING
  Smack: Mark inode instant in smack_task_to_inode
2018-06-26 08:44:15 -07:00
Maciej S. Szmigiero
b65c32ec5a X.509: unpack RSA signatureValue field from BIT STRING
The signatureValue field of a X.509 certificate is encoded as a BIT STRING.
For RSA signatures this BIT STRING is of so-called primitive subtype, which
contains a u8 prefix indicating a count of unused bits in the encoding.

We have to strip this prefix from signature data, just as we already do for
key data in x509_extract_key_data() function.

This wasn't noticed earlier because this prefix byte is zero for RSA
key sizes divisible by 8. Since BIT STRING is a big-endian encoding,
adding zero prefixes has no bearing on its value.

The signature length, however, was incorrect, which is a problem for
RSA implementations that need it to be exactly correct (like AMD CCP).
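
A sketch of the parser change (field names illustrative):

    /* BIT STRING payload starts with a count of unused trailing bits */
    if (vlen < 1 || *(const u8 *)value != 0)
        return -EBADMSG;    /* RSA signatures use whole octets only */
    value++;
    vlen--;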

Signed-off-by: Maciej S. Szmigiero <mail@maciej.szmigiero.name>
Fixes: c26fd69fa0 ("X.509: Add a crypto key parser for binary (DER) X.509 certificates")
Cc: stable@vger.kernel.org
Signed-off-by: James Morris <james.morris@microsoft.com>
2018-06-25 12:17:08 -07:00
Linus Torvalds
2dd3f7c904 Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto fixes from Herbert Xu:

 - Fix use after free in chtls

 - Fix RBP breakage in sha3

 - Fix use after free in hwrng_unregister

 - Fix overread in morus640

 - Move sleep out of kernel_neon in arm64/aes-blk

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
  hwrng: core - Always drop the RNG in hwrng_unregister()
  crypto: morus640 - Fix out-of-bounds access
  crypto: don't optimize keccakf()
  crypto: arm64/aes-blk - fix and move skcipher_walk_done out of kernel_neon_begin, _end
  crypto: chtls - use after free in chtls_pt_recvmsg()
2018-06-24 06:31:54 +08:00
Colin Ian King
b25c1199ac crypto: aegis - fix indentation of a statement
Trivial fix to correct the indentation of a single statement.

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-06-22 23:03:06 +08:00
Antoine Tenart
26f7120b86 crypto: sha512_generic - add a sha384 0-length pre-computed hash
This patch adds the sha384 pre-computed 0-length hash so that device
drivers can use it when a hardware engine does not support computing a
hash from a 0-length input.
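
For reference, the digest of the empty message is well known; the added
constant takes roughly this form (symbol name assumed to follow the
existing sha*_zero_message_hash convention):

    const u8 sha384_zero_message_hash[SHA384_DIGEST_SIZE] = {
        0x38, 0xb0, 0x60, 0xa7, 0x51, 0xac, 0x96, 0x38,
        0x4c, 0xd9, 0x32, 0x7e, 0xb1, 0xb1, 0xe3, 0x6a,
        0x21, 0xfd, 0xb7, 0x11, 0x14, 0xbe, 0x07, 0x43,
        0x4c, 0x0c, 0xc7, 0xbf, 0x63, 0xf6, 0xe1, 0xda,
        0x27, 0x4e, 0xde, 0xbf, 0xe7, 0x6f, 0x65, 0xfb,
        0xd5, 0x1a, 0xd2, 0xf1, 0x48, 0x98, 0xb9, 0x5b
    };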

Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-06-22 23:03:05 +08:00
Antoine Tenart
30c217ef64 crypto: sha512_generic - add a sha512 0-length pre-computed hash
This patch adds the sha512 pre-computed 0-length hash so that device
drivers can use it when a hardware engine does not support computing a
hash from a 0-length input.

Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-06-22 23:03:02 +08:00
Kyle Spiers
89a7e2f752 async_pq: Remove VLA usage
In the quest to remove VLAs from the kernel[1], this adjusts the
allocation of coefs and blocks to use the existing maximum values
(with one new define, MAX_DISKS for coefs, and a reuse of the
existing NDISKS for blocks).

[1] https://lkml.org/lkml/2018/3/7/621

Signed-off-by: Kyle Spiers <ksspiers@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2018-06-18 20:17:38 +05:30
Mauro Carvalho Chehab
5fb94e9ca3 docs: Fix some broken references
As we move stuff around, some doc references are broken. Fix some of
them via this script:
	./scripts/documentation-file-ref-check --fix

Manually checked if the produced result is valid, removing a few
false-positives.

Acked-by: Takashi Iwai <tiwai@suse.de>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Acked-by: Stephen Boyd <sboyd@kernel.org>
Acked-by: Charles Keepax <ckeepax@opensource.wolfsonmicro.com>
Acked-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Reviewed-by: Coly Li <colyli@suse.de>
Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
Acked-by: Jonathan Corbet <corbet@lwn.net>
2018-06-15 18:10:01 -03:00
Ondrej Mosnáček
a81ae80957 crypto: morus640 - Fix out-of-bounds access
We must load the block from the temporary variable here, not directly
from the input.

Also add the forgotten zeroing-out of the uninitialized part of the
temporary block (as is done correctly in morus1280.c).

Fixes: 396be41f16 ("crypto: morus - Add generic MORUS AEAD implementations")
Reported-by: syzbot+1fafa9c4cf42df33f716@syzkaller.appspotmail.com
Reported-by: syzbot+d82643ba80bf6937cd44@syzkaller.appspotmail.com
Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-06-15 23:06:48 +08:00
Dmitry Vyukov
f044a84e04 crypto: don't optimize keccakf()
keccakf() is the only function in kernel that uses __optimize() macro.
__optimize() breaks the frame pointer unwinder as optimized code uses
RBP, and amusingly this always led to degraded performance as gcc does
not inline across different optimization levels, so keccakf() wasn't
inlined into its callers and keccakf_round() wasn't inlined into
keccakf().

Drop __optimize() to resolve both problems.

Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
Fixes: 83dee2ce1a ("crypto: sha3-generic - rewrite KECCAK transform to help the compiler optimize")
Reported-by: syzbot+37035ccfa9a0a017ffcf@syzkaller.appspotmail.com
Reported-by: syzbot+e073e4740cfbb3ae200b@syzkaller.appspotmail.com
Cc: linux-crypto@vger.kernel.org
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-06-15 23:06:48 +08:00
Kees Cook
76e43e37a4 treewide: Use array_size() in sock_kmalloc()
The sock_kmalloc() function has no 2-factor argument form, so
multiplication factors need to be wrapped in array_size(). This patch
replaces cases of:

        sock_kmalloc(handle, a * b, gfp)

with:
        sock_kmalloc(handle, array_size(a, b), gfp)

as well as handling cases of:

        sock_kmalloc(handle, a * b * c, gfp)

with:

        sock_kmalloc(handle, array3_size(a, b, c), gfp)

This does, however, attempt to ignore constant size factors like:

        sock_kmalloc(handle, 4 * 1024, gfp)

though any constants defined via macros get caught up in the conversion.

Any factors with a sizeof() of "unsigned char", "char", and "u8" were
dropped, since they're redundant.

The Coccinelle script used for this was:

// Fix redundant parens around sizeof().
@@
expression HANDLE;
type TYPE;
expression THING, E;
@@

(
  sock_kmalloc(HANDLE,
-	(sizeof(TYPE)) * E
+	sizeof(TYPE) * E
  , ...)
|
  sock_kmalloc(HANDLE,
-	(sizeof(THING)) * E
+	sizeof(THING) * E
  , ...)
)

// Drop single-byte sizes and redundant parens.
@@
expression HANDLE;
expression COUNT;
typedef u8;
typedef __u8;
@@

(
  sock_kmalloc(HANDLE,
-	sizeof(u8) * (COUNT)
+	COUNT
  , ...)
|
  sock_kmalloc(HANDLE,
-	sizeof(__u8) * (COUNT)
+	COUNT
  , ...)
|
  sock_kmalloc(HANDLE,
-	sizeof(char) * (COUNT)
+	COUNT
  , ...)
|
  sock_kmalloc(HANDLE,
-	sizeof(unsigned char) * (COUNT)
+	COUNT
  , ...)
|
  sock_kmalloc(HANDLE,
-	sizeof(u8) * COUNT
+	COUNT
  , ...)
|
  sock_kmalloc(HANDLE,
-	sizeof(__u8) * COUNT
+	COUNT
  , ...)
|
  sock_kmalloc(HANDLE,
-	sizeof(char) * COUNT
+	COUNT
  , ...)
|
  sock_kmalloc(HANDLE,
-	sizeof(unsigned char) * COUNT
+	COUNT
  , ...)
)

// 2-factor product with sizeof(type/expression) and identifier or constant.
@@
expression HANDLE;
type TYPE;
expression THING;
identifier COUNT_ID;
constant COUNT_CONST;
@@

(
  sock_kmalloc(HANDLE,
-	sizeof(TYPE) * (COUNT_ID)
+	array_size(COUNT_ID, sizeof(TYPE))
  , ...)
|
  sock_kmalloc(HANDLE,
-	sizeof(TYPE) * COUNT_ID
+	array_size(COUNT_ID, sizeof(TYPE))
  , ...)
|
  sock_kmalloc(HANDLE,
-	sizeof(TYPE) * (COUNT_CONST)
+	array_size(COUNT_CONST, sizeof(TYPE))
  , ...)
|
  sock_kmalloc(HANDLE,
-	sizeof(TYPE) * COUNT_CONST
+	array_size(COUNT_CONST, sizeof(TYPE))
  , ...)
|
  sock_kmalloc(HANDLE,
-	sizeof(THING) * (COUNT_ID)
+	array_size(COUNT_ID, sizeof(THING))
  , ...)
|
  sock_kmalloc(HANDLE,
-	sizeof(THING) * COUNT_ID
+	array_size(COUNT_ID, sizeof(THING))
  , ...)
|
  sock_kmalloc(HANDLE,
-	sizeof(THING) * (COUNT_CONST)
+	array_size(COUNT_CONST, sizeof(THING))
  , ...)
|
  sock_kmalloc(HANDLE,
-	sizeof(THING) * COUNT_CONST
+	array_size(COUNT_CONST, sizeof(THING))
  , ...)
)

// 2-factor product, only identifiers.
@@
expression HANDLE;
identifier SIZE, COUNT;
@@

  sock_kmalloc(HANDLE,
-	SIZE * COUNT
+	array_size(COUNT, SIZE)
  , ...)

// 3-factor product with 1 sizeof(type) or sizeof(expression), with
// redundant parens removed.
@@
expression HANDLE;
expression THING;
identifier STRIDE, COUNT;
type TYPE;
@@

(
  sock_kmalloc(HANDLE,
-	sizeof(TYPE) * (COUNT) * (STRIDE)
+	array3_size(COUNT, STRIDE, sizeof(TYPE))
  , ...)
|
  sock_kmalloc(HANDLE,
-	sizeof(TYPE) * (COUNT) * STRIDE
+	array3_size(COUNT, STRIDE, sizeof(TYPE))
  , ...)
|
  sock_kmalloc(HANDLE,
-	sizeof(TYPE) * COUNT * (STRIDE)
+	array3_size(COUNT, STRIDE, sizeof(TYPE))
  , ...)
|
  sock_kmalloc(HANDLE,
-	sizeof(TYPE) * COUNT * STRIDE
+	array3_size(COUNT, STRIDE, sizeof(TYPE))
  , ...)
|
  sock_kmalloc(HANDLE,
-	sizeof(THING) * (COUNT) * (STRIDE)
+	array3_size(COUNT, STRIDE, sizeof(THING))
  , ...)
|
  sock_kmalloc(HANDLE,
-	sizeof(THING) * (COUNT) * STRIDE
+	array3_size(COUNT, STRIDE, sizeof(THING))
  , ...)
|
  sock_kmalloc(HANDLE,
-	sizeof(THING) * COUNT * (STRIDE)
+	array3_size(COUNT, STRIDE, sizeof(THING))
  , ...)
|
  sock_kmalloc(HANDLE,
-	sizeof(THING) * COUNT * STRIDE
+	array3_size(COUNT, STRIDE, sizeof(THING))
  , ...)
)

// 3-factor product with 2 sizeof(variable), with redundant parens removed.
@@
expression HANDLE;
expression THING1, THING2;
identifier COUNT;
type TYPE1, TYPE2;
@@

(
  sock_kmalloc(HANDLE,
-	sizeof(TYPE1) * sizeof(TYPE2) * COUNT
+	array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2))
  , ...)
|
  sock_kmalloc(HANDLE,
-	sizeof(TYPE1) * sizeof(THING2) * (COUNT)
+	array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2))
  , ...)
|
  sock_kmalloc(HANDLE,
-	sizeof(THING1) * sizeof(THING2) * COUNT
+	array3_size(COUNT, sizeof(THING1), sizeof(THING2))
  , ...)
|
  sock_kmalloc(HANDLE,
-	sizeof(THING1) * sizeof(THING2) * (COUNT)
+	array3_size(COUNT, sizeof(THING1), sizeof(THING2))
  , ...)
|
  sock_kmalloc(HANDLE,
-	sizeof(TYPE1) * sizeof(THING2) * COUNT
+	array3_size(COUNT, sizeof(TYPE1), sizeof(THING2))
  , ...)
|
  sock_kmalloc(HANDLE,
-	sizeof(TYPE1) * sizeof(THING2) * (COUNT)
+	array3_size(COUNT, sizeof(TYPE1), sizeof(THING2))
  , ...)
)

// 3-factor product, only identifiers, with redundant parens removed.
@@
expression HANDLE;
identifier STRIDE, SIZE, COUNT;
@@

(
  sock_kmalloc(HANDLE,
-	(COUNT) * STRIDE * SIZE
+	array3_size(COUNT, STRIDE, SIZE)
  , ...)
|
  sock_kmalloc(HANDLE,
-	COUNT * (STRIDE) * SIZE
+	array3_size(COUNT, STRIDE, SIZE)
  , ...)
|
  sock_kmalloc(HANDLE,
-	COUNT * STRIDE * (SIZE)
+	array3_size(COUNT, STRIDE, SIZE)
  , ...)
|
  sock_kmalloc(HANDLE,
-	(COUNT) * (STRIDE) * SIZE
+	array3_size(COUNT, STRIDE, SIZE)
  , ...)
|
  sock_kmalloc(HANDLE,
-	COUNT * (STRIDE) * (SIZE)
+	array3_size(COUNT, STRIDE, SIZE)
  , ...)
|
  sock_kmalloc(HANDLE,
-	(COUNT) * STRIDE * (SIZE)
+	array3_size(COUNT, STRIDE, SIZE)
  , ...)
|
  sock_kmalloc(HANDLE,
-	(COUNT) * (STRIDE) * (SIZE)
+	array3_size(COUNT, STRIDE, SIZE)
  , ...)
|
  sock_kmalloc(HANDLE,
-	COUNT * STRIDE * SIZE
+	array3_size(COUNT, STRIDE, SIZE)
  , ...)
)

// Any remaining multi-factor products, first at least 3-factor products
// when they're not all constants...
@@
expression HANDLE;
expression E1, E2, E3;
constant C1, C2, C3;
@@

(
  sock_kmalloc(HANDLE, C1 * C2 * C3, ...)
|
  sock_kmalloc(HANDLE,
-	E1 * E2 * E3
+	array3_size(E1, E2, E3)
  , ...)
)

// And then all remaining 2 factors products when they're not all constants.
@@
expression HANDLE;
expression E1, E2;
constant C1, C2;
@@

(
  sock_kmalloc(HANDLE, C1 * C2, ...)
|
  sock_kmalloc(HANDLE,
-	E1 * E2
+	array_size(E1, E2)
  , ...)
)

Signed-off-by: Kees Cook <keescook@chromium.org>
2018-06-12 16:19:22 -07:00
Kees Cook
6da2ec5605 treewide: kmalloc() -> kmalloc_array()
The kmalloc() function has a 2-factor argument form, kmalloc_array(). This
patch replaces cases of:

        kmalloc(a * b, gfp)

with:
        kmalloc_array(a, b, gfp)

as well as handling cases of:

        kmalloc(a * b * c, gfp)

with:

        kmalloc(array3_size(a, b, c), gfp)

as it's slightly less ugly than:

        kmalloc_array(array_size(a, b), c, gfp)

This does, however, attempt to ignore constant size factors like:

        kmalloc(4 * 1024, gfp)

though any constants defined via macros get caught up in the conversion.

Any factors with a sizeof() of "unsigned char", "char", and "u8" were
dropped, since they're redundant.

The tools/ directory was manually excluded, since it has its own
implementation of kmalloc().

The Coccinelle script used for this was:

// Fix redundant parens around sizeof().
@@
type TYPE;
expression THING, E;
@@

(
  kmalloc(
-	(sizeof(TYPE)) * E
+	sizeof(TYPE) * E
  , ...)
|
  kmalloc(
-	(sizeof(THING)) * E
+	sizeof(THING) * E
  , ...)
)

// Drop single-byte sizes and redundant parens.
@@
expression COUNT;
typedef u8;
typedef __u8;
@@

(
  kmalloc(
-	sizeof(u8) * (COUNT)
+	COUNT
  , ...)
|
  kmalloc(
-	sizeof(__u8) * (COUNT)
+	COUNT
  , ...)
|
  kmalloc(
-	sizeof(char) * (COUNT)
+	COUNT
  , ...)
|
  kmalloc(
-	sizeof(unsigned char) * (COUNT)
+	COUNT
  , ...)
|
  kmalloc(
-	sizeof(u8) * COUNT
+	COUNT
  , ...)
|
  kmalloc(
-	sizeof(__u8) * COUNT
+	COUNT
  , ...)
|
  kmalloc(
-	sizeof(char) * COUNT
+	COUNT
  , ...)
|
  kmalloc(
-	sizeof(unsigned char) * COUNT
+	COUNT
  , ...)
)

// 2-factor product with sizeof(type/expression) and identifier or constant.
@@
type TYPE;
expression THING;
identifier COUNT_ID;
constant COUNT_CONST;
@@

(
- kmalloc
+ kmalloc_array
  (
-	sizeof(TYPE) * (COUNT_ID)
+	COUNT_ID, sizeof(TYPE)
  , ...)
|
- kmalloc
+ kmalloc_array
  (
-	sizeof(TYPE) * COUNT_ID
+	COUNT_ID, sizeof(TYPE)
  , ...)
|
- kmalloc
+ kmalloc_array
  (
-	sizeof(TYPE) * (COUNT_CONST)
+	COUNT_CONST, sizeof(TYPE)
  , ...)
|
- kmalloc
+ kmalloc_array
  (
-	sizeof(TYPE) * COUNT_CONST
+	COUNT_CONST, sizeof(TYPE)
  , ...)
|
- kmalloc
+ kmalloc_array
  (
-	sizeof(THING) * (COUNT_ID)
+	COUNT_ID, sizeof(THING)
  , ...)
|
- kmalloc
+ kmalloc_array
  (
-	sizeof(THING) * COUNT_ID
+	COUNT_ID, sizeof(THING)
  , ...)
|
- kmalloc
+ kmalloc_array
  (
-	sizeof(THING) * (COUNT_CONST)
+	COUNT_CONST, sizeof(THING)
  , ...)
|
- kmalloc
+ kmalloc_array
  (
-	sizeof(THING) * COUNT_CONST
+	COUNT_CONST, sizeof(THING)
  , ...)
)

// 2-factor product, only identifiers.
@@
identifier SIZE, COUNT;
@@

- kmalloc
+ kmalloc_array
  (
-	SIZE * COUNT
+	COUNT, SIZE
  , ...)

// 3-factor product with 1 sizeof(type) or sizeof(expression), with
// redundant parens removed.
@@
expression THING;
identifier STRIDE, COUNT;
type TYPE;
@@

(
  kmalloc(
-	sizeof(TYPE) * (COUNT) * (STRIDE)
+	array3_size(COUNT, STRIDE, sizeof(TYPE))
  , ...)
|
  kmalloc(
-	sizeof(TYPE) * (COUNT) * STRIDE
+	array3_size(COUNT, STRIDE, sizeof(TYPE))
  , ...)
|
  kmalloc(
-	sizeof(TYPE) * COUNT * (STRIDE)
+	array3_size(COUNT, STRIDE, sizeof(TYPE))
  , ...)
|
  kmalloc(
-	sizeof(TYPE) * COUNT * STRIDE
+	array3_size(COUNT, STRIDE, sizeof(TYPE))
  , ...)
|
  kmalloc(
-	sizeof(THING) * (COUNT) * (STRIDE)
+	array3_size(COUNT, STRIDE, sizeof(THING))
  , ...)
|
  kmalloc(
-	sizeof(THING) * (COUNT) * STRIDE
+	array3_size(COUNT, STRIDE, sizeof(THING))
  , ...)
|
  kmalloc(
-	sizeof(THING) * COUNT * (STRIDE)
+	array3_size(COUNT, STRIDE, sizeof(THING))
  , ...)
|
  kmalloc(
-	sizeof(THING) * COUNT * STRIDE
+	array3_size(COUNT, STRIDE, sizeof(THING))
  , ...)
)

// 3-factor product with 2 sizeof(variable), with redundant parens removed.
@@
expression THING1, THING2;
identifier COUNT;
type TYPE1, TYPE2;
@@

(
  kmalloc(
-	sizeof(TYPE1) * sizeof(TYPE2) * COUNT
+	array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2))
  , ...)
|
  kmalloc(
-	sizeof(TYPE1) * sizeof(THING2) * (COUNT)
+	array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2))
  , ...)
|
  kmalloc(
-	sizeof(THING1) * sizeof(THING2) * COUNT
+	array3_size(COUNT, sizeof(THING1), sizeof(THING2))
  , ...)
|
  kmalloc(
-	sizeof(THING1) * sizeof(THING2) * (COUNT)
+	array3_size(COUNT, sizeof(THING1), sizeof(THING2))
  , ...)
|
  kmalloc(
-	sizeof(TYPE1) * sizeof(THING2) * COUNT
+	array3_size(COUNT, sizeof(TYPE1), sizeof(THING2))
  , ...)
|
  kmalloc(
-	sizeof(TYPE1) * sizeof(THING2) * (COUNT)
+	array3_size(COUNT, sizeof(TYPE1), sizeof(THING2))
  , ...)
)

// 3-factor product, only identifiers, with redundant parens removed.
@@
identifier STRIDE, SIZE, COUNT;
@@

(
  kmalloc(
-	(COUNT) * STRIDE * SIZE
+	array3_size(COUNT, STRIDE, SIZE)
  , ...)
|
  kmalloc(
-	COUNT * (STRIDE) * SIZE
+	array3_size(COUNT, STRIDE, SIZE)
  , ...)
|
  kmalloc(
-	COUNT * STRIDE * (SIZE)
+	array3_size(COUNT, STRIDE, SIZE)
  , ...)
|
  kmalloc(
-	(COUNT) * (STRIDE) * SIZE
+	array3_size(COUNT, STRIDE, SIZE)
  , ...)
|
  kmalloc(
-	COUNT * (STRIDE) * (SIZE)
+	array3_size(COUNT, STRIDE, SIZE)
  , ...)
|
  kmalloc(
-	(COUNT) * STRIDE * (SIZE)
+	array3_size(COUNT, STRIDE, SIZE)
  , ...)
|
  kmalloc(
-	(COUNT) * (STRIDE) * (SIZE)
+	array3_size(COUNT, STRIDE, SIZE)
  , ...)
|
  kmalloc(
-	COUNT * STRIDE * SIZE
+	array3_size(COUNT, STRIDE, SIZE)
  , ...)
)

// Any remaining multi-factor products, first at least 3-factor products,
// when they're not all constants...
@@
expression E1, E2, E3;
constant C1, C2, C3;
@@

(
  kmalloc(C1 * C2 * C3, ...)
|
  kmalloc(
-	(E1) * E2 * E3
+	array3_size(E1, E2, E3)
  , ...)
|
  kmalloc(
-	(E1) * (E2) * E3
+	array3_size(E1, E2, E3)
  , ...)
|
  kmalloc(
-	(E1) * (E2) * (E3)
+	array3_size(E1, E2, E3)
  , ...)
|
  kmalloc(
-	E1 * E2 * E3
+	array3_size(E1, E2, E3)
  , ...)
)

// And then all remaining 2 factors products when they're not all constants,
// keeping sizeof() as the second factor argument.
@@
expression THING, E1, E2;
type TYPE;
constant C1, C2, C3;
@@

(
  kmalloc(sizeof(THING) * C2, ...)
|
  kmalloc(sizeof(TYPE) * C2, ...)
|
  kmalloc(C1 * C2 * C3, ...)
|
  kmalloc(C1 * C2, ...)
|
- kmalloc
+ kmalloc_array
  (
-	sizeof(TYPE) * (E2)
+	E2, sizeof(TYPE)
  , ...)
|
- kmalloc
+ kmalloc_array
  (
-	sizeof(TYPE) * E2
+	E2, sizeof(TYPE)
  , ...)
|
- kmalloc
+ kmalloc_array
  (
-	sizeof(THING) * (E2)
+	E2, sizeof(THING)
  , ...)
|
- kmalloc
+ kmalloc_array
  (
-	sizeof(THING) * E2
+	E2, sizeof(THING)
  , ...)
|
- kmalloc
+ kmalloc_array
  (
-	(E1) * E2
+	E1, E2
  , ...)
|
- kmalloc
+ kmalloc_array
  (
-	(E1) * (E2)
+	E1, E2
  , ...)
|
- kmalloc
+ kmalloc_array
  (
-	E1 * E2
+	E1, E2
  , ...)
)

Signed-off-by: Kees Cook <keescook@chromium.org>
2018-06-12 16:19:22 -07:00
Linus Torvalds
2857676045 - Introduce arithmetic overflow test helper functions (Rasmus)
- Use overflow helpers in 2-factor allocators (Kees, Rasmus)
 - Introduce overflow test module (Rasmus, Kees)
 - Introduce saturating size helper functions (Matthew, Kees)
 - Treewide use of struct_size() for allocators (Kees)
 -----BEGIN PGP SIGNATURE-----
 Comment: Kees Cook <kees@outflux.net>
 
 iQJKBAABCgA0FiEEpcP2jyKd1g9yPm4TiXL039xtwCYFAlsYJ1gWHGtlZXNjb29r
 QGNocm9taXVtLm9yZwAKCRCJcvTf3G3AJlCTEACwdEeriAd2VwxknnsstojGD/3g
 8TTFA19vSu4Gxa6WiDkjGoSmIlfhXTlZo1Nlmencv16ytSvIVDNLUIB3uDxUIv1J
 2+dyHML9JpXYHHR7zLXXnGFJL0wazqjbsD3NYQgXqmun7EVVYnOsAlBZ7h/Lwiej
 jzEJd8DaHT3TA586uD3uggiFvQU0yVyvkDCDONIytmQx+BdtGdg9TYCzkBJaXuDZ
 YIthyKDvxIw5nh/UaG3L+SKo73tUr371uAWgAfqoaGQQCWe+mxnWL4HkCKsjFzZL
 u9ouxxF/n6pij3E8n6rb0i2fCzlsTDdDF+aqV1rQ4I4hVXCFPpHUZgjDPvBWbj7A
 m6AfRHVNnOgI8HGKqBGOfViV+2kCHlYeQh3pPW33dWzy/4d/uq9NIHKxE63LH+S4
 bY3oO2ela8oxRyvEgXLjqmRYGW1LB/ZU7FS6Rkx2gRzo4k8Rv+8K/KzUHfFVRX61
 jEbiPLzko0xL9D53kcEn0c+BhofK5jgeSWxItdmfuKjLTW4jWhLRlU+bcUXb6kSS
 S3G6aF+L+foSUwoq63AS8QxCuabuhreJSB+BmcGUyjthCbK/0WjXYC6W/IJiRfBa
 3ZTxBC/2vP3uq/AGRNh5YZoxHL8mSxDfn62F+2cqlJTTKR/O+KyDb1cusyvk3H04
 KCDVLYPxwQQqK1Mqig==
 =/3L8
 -----END PGP SIGNATURE-----

Merge tag 'overflow-v4.18-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux

Pull overflow updates from Kees Cook:
 "This adds the new overflow checking helpers and adds them to the
  2-factor argument allocators. And this adds the saturating size
  helpers and does a treewide replacement for the struct_size() usage.
  Additionally this adds the overflow testing modules to make sure
  everything works.

  I'm still working on the treewide replacements for allocators with
  "simple" multiplied arguments:

     *alloc(a * b, ...) -> *alloc_array(a, b, ...)

  and

     *zalloc(a * b, ...) -> *calloc(a, b, ...)

  as well as the more complex cases, but that's separable from this
  portion of the series. I expect to have the rest sent before -rc1
  closes; there are a lot of messy cases to clean up.

  Summary:

   - Introduce arithmetic overflow test helper functions (Rasmus)

   - Use overflow helpers in 2-factor allocators (Kees, Rasmus)

   - Introduce overflow test module (Rasmus, Kees)

   - Introduce saturating size helper functions (Matthew, Kees)

   - Treewide use of struct_size() for allocators (Kees)"
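
In use, the 2-factor helpers look like this (check_mul_overflow() is the
new overflow.h API; the surrounding code is hypothetical):

    size_t bytes;

    if (check_mul_overflow(n, size, &bytes))    /* true if n * size wraps */
        return NULL;
    return kmalloc(bytes, GFP_KERNEL);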

* tag 'overflow-v4.18-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
  treewide: Use struct_size() for devm_kmalloc() and friends
  treewide: Use struct_size() for vmalloc()-family
  treewide: Use struct_size() for kmalloc()-family
  device: Use overflow helpers for devm_kmalloc()
  mm: Use overflow helpers in kvmalloc()
  mm: Use overflow helpers in kmalloc_array*()
  test_overflow: Add memory allocation overflow tests
  overflow.h: Add allocation size calculation helpers
  test_overflow: Report test failures
  test_overflow: macrofy some more, do more tests for free
  lib: add runtime test of check_*_overflow functions
  compiler.h: enable builtin overflow checkers and add fallback code
2018-06-06 17:27:14 -07:00
Kees Cook
0ed2dd03b9 treewide: Use struct_size() for devm_kmalloc() and friends
Replaces open-coded struct size calculations with struct_size() for
devm_*, f2fs_*, and sock_* allocations. Automatically generated (and
manually adjusted) from the following Coccinelle script:

// Direct reference to struct field.
@@
identifier alloc =~ "devm_kmalloc|devm_kzalloc|sock_kmalloc|f2fs_kmalloc|f2fs_kzalloc";
expression HANDLE;
expression GFP;
identifier VAR, ELEMENT;
expression COUNT;
@@

- alloc(HANDLE, sizeof(*VAR) + COUNT * sizeof(*VAR->ELEMENT), GFP)
+ alloc(HANDLE, struct_size(VAR, ELEMENT, COUNT), GFP)

// mr = kzalloc(sizeof(*mr) + m * sizeof(mr->map[0]), GFP_KERNEL);
@@
identifier alloc =~ "devm_kmalloc|devm_kzalloc|sock_kmalloc|f2fs_kmalloc|f2fs_kzalloc";
expression HANDLE;
expression GFP;
identifier VAR, ELEMENT;
expression COUNT;
@@

- alloc(HANDLE, sizeof(*VAR) + COUNT * sizeof(VAR->ELEMENT[0]), GFP)
+ alloc(HANDLE, struct_size(VAR, ELEMENT, COUNT), GFP)

// Same pattern, but can't trivially locate the trailing element name,
// or variable name.
@@
identifier alloc =~ "devm_kmalloc|devm_kzalloc|sock_kmalloc|f2fs_kmalloc|f2fs_kzalloc";
expression HANDLE;
expression GFP;
expression SOMETHING, COUNT, ELEMENT;
@@

- alloc(HANDLE, sizeof(SOMETHING) + COUNT * sizeof(ELEMENT), GFP)
+ alloc(HANDLE, CHECKME_struct_size(&SOMETHING, ELEMENT, COUNT), GFP)

Signed-off-by: Kees Cook <keescook@chromium.org>
2018-06-06 11:15:43 -07:00
Linus Torvalds
3e1a29b3bf Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto updates from Herbert Xu:
 "API:

   - Decryption test vectors are now automatically generated from
     encryption test vectors.

  Algorithms:

   - Fix unaligned access issues in crc32/crc32c.

   - Add zstd compression algorithm.

   - Add AEGIS.

   - Add MORUS.

  Drivers:

   - Add accelerated AEGIS/MORUS on x86.

   - Add accelerated SM4 on arm64.

   - Removed x86 assembly salsa implementation as it is slower than C.

   - Add authenc(hmac(sha*), cbc(aes)) support in inside-secure.

   - Add ctr(aes) support in crypto4xx.

   - Add hardware key support in ccree.

   - Add support for new Centaur CPU in via-rng"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (112 commits)
  crypto: chtls - free beyond end rspq_skb_cache
  crypto: chtls - kbuild warnings
  crypto: chtls - dereference null variable
  crypto: chtls - wait for memory sendmsg, sendpage
  crypto: chtls - key len correction
  crypto: salsa20 - Revert "crypto: salsa20 - export generic helpers"
  crypto: x86/salsa20 - remove x86 salsa20 implementations
  crypto: ccp - Add GET_ID SEV command
  crypto: ccp - Add DOWNLOAD_FIRMWARE SEV command
  crypto: qat - Add MODULE_FIRMWARE for all qat drivers
  crypto: ccree - silence debug prints
  crypto: ccree - better clock handling
  crypto: ccree - correct host regs offset
  crypto: chelsio - Remove separate buffer used for DMA map B0 block in CCM
  crypt: chelsio - Send IV as Immediate for cipher algo
  crypto: chelsio - Return -ENOSPC for transient busy indication.
  crypto: caam/qi - fix warning in init_cgr()
  crypto: caam - fix rfc4543 descriptors
  crypto: caam - fix MC firmware detection
  crypto: clarify licensing of OpenSSL asm code
  ...
2018-06-05 15:51:21 -07:00
Linus Torvalds
408afb8d78 Merge branch 'work.aio-1' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
Pull aio updates from Al Viro:
 "Majority of AIO stuff this cycle. aio-fsync and aio-poll, mostly.

  The only thing I'm holding back for a day or so is Adam's aio ioprio -
  his last-minute fixup is trivial (missing stub in !CONFIG_BLOCK case),
  but let it sit in -next for decency sake..."

* 'work.aio-1' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (46 commits)
  aio: sanitize the limit checking in io_submit(2)
  aio: fold do_io_submit() into callers
  aio: shift copyin of iocb into io_submit_one()
  aio_read_events_ring(): make a bit more readable
  aio: all callers of aio_{read,write,fsync,poll} treat 0 and -EIOCBQUEUED the same way
  aio: take list removal to (some) callers of aio_complete()
  aio: add missing break for the IOCB_CMD_FDSYNC case
  random: convert to ->poll_mask
  timerfd: convert to ->poll_mask
  eventfd: switch to ->poll_mask
  pipe: convert to ->poll_mask
  crypto: af_alg: convert to ->poll_mask
  net/rxrpc: convert to ->poll_mask
  net/iucv: convert to ->poll_mask
  net/phonet: convert to ->poll_mask
  net/nfc: convert to ->poll_mask
  net/caif: convert to ->poll_mask
  net/bluetooth: convert to ->poll_mask
  net/sctp: convert to ->poll_mask
  net/tipc: convert to ->poll_mask
  ...
2018-06-04 13:57:43 -07:00
Eric Biggers
015a03704d crypto: salsa20 - Revert "crypto: salsa20 - export generic helpers"
This reverts commit eb772f37ae, as now the
x86 Salsa20 implementation has been removed and the generic helpers are
no longer needed outside of salsa20_generic.c.

We could keep this just in case someone else wants to add a new
optimized Salsa20 implementation.  But given that we have ChaCha20 now
too, I think it's unlikely.  And this commit can always be reverted.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-31 00:13:57 +08:00
Eric Biggers
b7b73cd5d7 crypto: x86/salsa20 - remove x86 salsa20 implementations
The x86 assembly implementations of Salsa20 use the frame base pointer
register (%ebp or %rbp), which breaks frame pointer convention and
breaks stack traces when unwinding from an interrupt in the crypto code.
Recent (v4.10+) kernels will warn about this, e.g.

WARNING: kernel stack regs at 00000000a8291e69 in syzkaller047086:4677 has bad 'bp' value 000000001077994c
[...]

But after looking into it, I believe there's very little reason to still
retain the x86 Salsa20 code.  First, these are *not* vectorized
(SSE2/SSSE3/AVX2) implementations, which would be needed to get anywhere
close to the best Salsa20 performance on any remotely modern x86
processor; they're just regular x86 assembly.  Second, it's still
unclear whether anyone is actually using the kernel's Salsa20 at all,
especially given that now ChaCha20 is supported too, and with much more
efficient SSSE3 and AVX2 implementations.  Finally, in benchmarks I did
on both Intel and AMD processors with both gcc 8.1.0 and gcc 4.9.4, the
x86_64 salsa20-asm is actually slightly *slower* than salsa20-generic
(~3% slower on Skylake, ~10% slower on Zen), while the i686 salsa20-asm
is only slightly faster than salsa20-generic (~15% faster on Skylake,
~20% faster on Zen).  The gcc version made little difference.

So, the x86_64 salsa20-asm is pretty clearly useless.  That leaves just
the i686 salsa20-asm, which based on my tests provides a 15-20% speed
boost.  But that's without updating the code to not use %ebp.  And given
the maintenance cost, the small speed difference vs. salsa20-generic,
the fact that few people still use i686 kernels, the doubt that anyone
is even using the kernel's Salsa20 at all, and the fact that a SSE2
implementation would almost certainly be much faster on any remotely
modern x86 processor, yet no one has cared enough to add one, I don't
think it's worthwhile to keep.

Thus, just remove both the x86_64 and i686 salsa20-asm implementations.

Reported-by: syzbot+ffa3a158337bbc01ff09@syzkaller.appspotmail.com
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-31 00:13:57 +08:00
Ondrej Mosnacek
2808f17319 crypto: morus - Mark MORUS SIMD glue as x86-specific
Commit 56e8e57fc3 ("crypto: morus - Add common SIMD glue code for
MORUS") accidentally considered the glue code to be usable by different
architectures, but it turns out to be usable only on x86.

This patch moves it under arch/x86/crypto and adds 'depends on X86' to
the Kconfig options and also removes the prompt to hide these internal
options from the user.

Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Ondrej Mosnacek <omosnacek@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-31 00:13:41 +08:00
Eric Biggers
92a4c9fef3 crypto: testmgr - eliminate redundant decryption test vectors
Currently testmgr has separate encryption and decryption test vectors
for symmetric ciphers.  That's massively redundant, since with few
exceptions (mostly mistakes, apparently), all decryption tests are
identical to the encryption tests, just with the input/result flipped.

Therefore, eliminate the redundancy by removing the decryption test
vectors and updating testmgr to test both encryption and decryption
using what used to be the encryption test vectors.  Naming is adjusted
accordingly: each cipher_testvec now has a 'ptext' (plaintext), 'ctext'
(ciphertext), and 'len' instead of an 'input', 'result', 'ilen', and
'rlen'.  Note that it was always the case that 'ilen == rlen'.

AES keywrap ("kw(aes)") is special because its IV is generated by the
encryption.  Previously this was handled by specifying 'iv_out' for
encryption and 'iv' for decryption.  To make it work cleanly with only
one set of test vectors, put the IV in 'iv', remove 'iv_out', and add a
boolean that indicates that the IV is generated by the encryption.
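
As a sketch, the consolidated vector layout looks roughly like this
(field names are illustrative; testmgr.h has the authoritative
definition):

    #include <linux/types.h>

    struct cipher_testvec {
    	const char *key;
    	const char *iv;		/* input IV; for kw(aes), the IV that
    				 * encryption is expected to produce */
    	const char *ptext;	/* plaintext */
    	const char *ctext;	/* ciphertext */
    	unsigned short klen;
    	unsigned short len;	/* single length: ptext and ctext match */
    	bool generates_iv;	/* IV is an output of encryption (kw(aes)) */
    };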

In total, this removes over 10000 lines from testmgr.h, with no
reduction in test coverage since prior patches already copied the few
unique decryption test vectors into the encryption test vectors.

This covers all algorithms that used 'struct cipher_testvec', e.g. any
block cipher in the ECB, CBC, CTR, XTS, LRW, CTS-CBC, PCBC, OFB, or
keywrap modes, and Salsa20 and ChaCha20.  No change is made to AEAD
tests, though we probably can eliminate a similar redundancy there too.

The testmgr.h portion of this patch was automatically generated using
the following awk script, with some slight manual fixups on top (updated
'struct cipher_testvec' definition, updated a few comments, and fixed up
the AES keywrap test vectors):

    # State machine: track whether we are inside an encryption vector
    # array, a decryption vector array, or neither.
    BEGIN { OTHER = 0; ENCVEC = 1; DECVEC = 2; DECVEC_TAIL = 3; mode = OTHER }

    # Start of an encryption array: drop the "_enc" name suffix and keep it.
    /^static const struct cipher_testvec.*_enc_/ { sub("_enc", ""); mode = ENCVEC }
    # Start of a decryption array: swallow it entirely.
    /^static const struct cipher_testvec.*_dec_/ { mode = DECVEC }
    # Inside an encryption array: rename the fields and drop the redundant .ilen.
    mode == ENCVEC && !/\.ilen[[:space:]]*=/ {
    	sub(/\.input[[:space:]]*=$/,    ".ptext =")
    	sub(/\.input[[:space:]]*=/,     ".ptext\t=")
    	sub(/\.result[[:space:]]*=$/,   ".ctext =")
    	sub(/\.result[[:space:]]*=/,    ".ctext\t=")
    	sub(/\.rlen[[:space:]]*=/,      ".len\t=")
    	print
    }
    # Resume normal printing once a dropped decryption array has ended.
    mode == DECVEC_TAIL && /[^[:space:]]/ { mode = OTHER }
    mode == OTHER                         { print }
    mode == ENCVEC && /^};/               { mode = OTHER }
    mode == DECVEC && /^};/               { mode = DECVEC_TAIL }

Note that git's default diff algorithm gets confused by the testmgr.h
portion of this patch, and reports too many lines added and removed.
It's better viewed with 'git diff --minimal' (or 'git show --minimal'),
which reports "2 files changed, 919 insertions(+), 11723 deletions(-)".

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-31 00:13:39 +08:00
Eric Biggers
4074a77d48 crypto: testmgr - add extra kw(aes) encryption test vector
One "kw(aes)" decryption test vector doesn't exactly match an encryption
test vector with input and result swapped.  In preparation for removing
the decryption test vectors, add this test vector to the encryption test
vectors, so we don't lose any test coverage.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-31 00:13:38 +08:00
Eric Biggers
a0e20b9b54 crypto: testmgr - add extra ecb(tnepres) encryption test vectors
None of the four "ecb(tnepres)" decryption test vectors exactly match an
encryption test vector with input and result swapped.  In preparation
for removing the decryption test vectors, add these to the encryption
test vectors, so we don't lose any test coverage.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-31 00:13:37 +08:00
Eric Biggers
17880f1139 crypto: testmgr - make a cbc(des) encryption test vector chunked
One "cbc(des)" decryption test vector doesn't exactly match an
encryption test vector with input and result swapped.  It's *almost* the
same as one, but the decryption version is "chunked" while the
encryption version is "unchunked".  In preparation for removing the
decryption test vectors, make the encryption one both chunked and
unchunked, so we don't lose any test coverage.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-31 00:13:37 +08:00
Eric Biggers
097012e8f2 crypto: testmgr - add extra ecb(des) encryption test vectors
Two "ecb(des)" decryption test vectors don't exactly match any of the
encryption test vectors with input and result swapped.  In preparation
for removing the decryption test vectors, add these to the encryption
test vectors, so we don't lose any test coverage.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-31 00:13:36 +08:00
Eric Biggers
9f50fd5bb6 crypto: testmgr - add more unkeyed crc32 and crc32c test vectors
crc32c has an unkeyed test vector but crc32 did not.  Add the crc32c one
(which uses an empty input) to crc32 too, and also add a new one to both
that uses a nonempty input.  These test vectors verify that crc32 and
crc32c implementations use the correct default initial state.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-27 00:12:10 +08:00
Eric Biggers
9b3abc0162 crypto: testmgr - fix testing OPTIONAL_KEY hash algorithms
Since testmgr uses a single tfm for all tests of each hash algorithm,
once a key is set the tfm won't be unkeyed anymore.  But with crc32 and
crc32c, the key is really the "default initial state" and is optional;
those algorithms should have both keyed and unkeyed test vectors, to
verify that implementations use the correct default key.

Simply listing the unkeyed test vectors first isn't guaranteed to work
yet because testmgr makes multiple passes through the test vectors.
crc32c does have an unkeyed test vector listed first currently, but it
only works by chance because the last crc32c test vector happens to use
a key that is the same as the default key.

Therefore, teach testmgr to split hash test vectors into unkeyed and
keyed sections, and do all the unkeyed ones before the keyed ones.
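
A hedged sketch of the resulting two-pass logic (the helper is
hypothetical; only the ordering idea matters):

    static int run_hash_vectors(struct crypto_shash *tfm,
    			    const struct hash_testvec *vecs, int tcount)
    {
    	int i, err;

    	/* Pass 1: unkeyed vectors, while the tfm still holds its
    	 * default initial state (no setkey() has happened yet). */
    	for (i = 0; i < tcount; i++) {
    		if (vecs[i].ksize)
    			continue;
    		err = test_one_hash_vec(tfm, &vecs[i]); /* hypothetical */
    		if (err)
    			return err;
    	}

    	/* Pass 2: keyed vectors; setkey() happens inside the helper. */
    	for (i = 0; i < tcount; i++) {
    		if (!vecs[i].ksize)
    			continue;
    		err = test_one_hash_vec(tfm, &vecs[i]);
    		if (err)
    			return err;
    	}
    	return 0;
    }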

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-27 00:12:10 +08:00
Eric Biggers
a179a2bf05 crypto: testmgr - remove bfin_crc "hmac(crc32)" test vectors
The Blackfin CRC driver was removed by commit 9678a8dc53 ("crypto:
bfin_crc - remove blackfin CRC driver"), but the corresponding
"hmac(crc32)" test vectors were left behind.  I see no point in keeping
them, since nothing else appears to implement or use "hmac(crc32)", which
isn't an algorithm that makes sense anyway: HMAC is meant to be used with
a cryptographically secure hash function, and CRCs are not.

Thus, remove the unneeded test vectors.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-27 00:12:10 +08:00
Eric Biggers
6943546c2d crypto: crc32-generic - remove __crc32_le()
The __crc32_le() wrapper function is pointless.  Just call crc32_le()
directly instead.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-27 00:12:09 +08:00
Eric Biggers
7bcfb13630 crypto: crc32c-generic - remove cra_alignmask
crc32c-generic sets an alignmask, but actually its ->update() works with
any alignment; only its ->setkey() and outputting the final digest
assume an alignment.  To prevent the buffer from having to be aligned by
the crypto API for just these cases, switch these cases over to the
unaligned access macros and remove the cra_alignmask.  Note that this
also makes crc32c-generic more consistent with crc32-generic.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-27 00:12:08 +08:00
Eric Biggers
fffe7d9279 crypto: crc32-generic - use unaligned access macros when needed
crc32-generic doesn't have a cra_alignmask set, which is desired as its
->update() works with any alignment.  However, it incorrectly assumes
4-byte alignment in ->setkey() and when outputting the final digest.

Fix this by using the unaligned access macros in those cases.
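
Roughly, the fixed paths look like this (a sketch assuming the common
unaligned-access helpers, not the verbatim patch):

    #include <asm/unaligned.h>

    /* ->setkey(): mctx->key = get_unaligned_le32(key); */

    /* ->final(): store the digest without assuming 4-byte alignment. */
    static int crc32_final_sketch(u32 crc, u8 *out)
    {
    	put_unaligned_le32(crc, out);	/* safe for any alignment of 'out' */
    	return 0;
    }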

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-27 00:12:08 +08:00
Christoph Hellwig
b28fc82267 crypto: af_alg: convert to ->poll_mask
Signed-off-by: Christoph Hellwig <hch@lst.de>
2018-05-26 09:16:44 +02:00
Christoph Hellwig
984652dd8b net: remove sock_no_poll
Now that sock_poll handles a NULL ->poll or ->poll_mask there is no need
for a stub.

Signed-off-by: Christoph Hellwig <hch@lst.de>
2018-05-26 09:16:44 +02:00
Ondrej Mosnacek
6ecc9d9ff9 crypto: x86 - Add optimized MORUS implementations
This patch adds optimized implementations of MORUS-640 and MORUS-1280,
utilizing the SSE2 and AVX2 x86 extensions.

For MORUS-1280 (which operates on 256-bit blocks) we provide both AVX2
and SSE2 implementations. Although SSE2 MORUS-1280 is slower than AVX2
MORUS-1280, it is comparable in speed to the SSE2 MORUS-640.

Signed-off-by: Ondrej Mosnacek <omosnacek@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-19 00:15:35 +08:00
Ondrej Mosnacek
56e8e57fc3 crypto: morus - Add common SIMD glue code for MORUS
This patch adds a common glue code for optimized implementations of
MORUS AEAD algorithms.

Signed-off-by: Ondrej Mosnacek <omosnacek@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-19 00:15:18 +08:00
Ondrej Mosnacek
4feb4c597a crypto: testmgr - Add test vectors for MORUS
This patch adds test vectors for MORUS-640 and MORUS-1280. The test
vectors were generated using the reference implementation from
SUPERCOP (see code comments for more details).

Signed-off-by: Ondrej Mosnacek <omosnacek@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-19 00:15:01 +08:00
Ondrej Mosnacek
396be41f16 crypto: morus - Add generic MORUS AEAD implementations
This patch adds the generic implementation of the MORUS family of AEAD
algorithms (MORUS-640 and MORUS-1280). The original authors of MORUS
are Hongjun Wu and Tao Huang.

At the time of writing, MORUS is one of the finalists in CAESAR, an
open competition intended to select a portfolio of alternatives to
the problematic AES-GCM:

https://competitions.cr.yp.to/caesar-submissions.html
https://competitions.cr.yp.to/round3/morusv2.pdf

Signed-off-by: Ondrej Mosnacek <omosnacek@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-19 00:15:00 +08:00
Ondrej Mosnacek
1d373d4e8e crypto: x86 - Add optimized AEGIS implementations
This patch adds optimized implementations of AEGIS-128, AEGIS-128L,
and AEGIS-256, utilizing the AES-NI and SSE2 x86 extensions.

Signed-off-by: Ondrej Mosnacek <omosnacek@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-19 00:14:00 +08:00
Ondrej Mosnacek
b87dc20346 crypto: testmgr - Add test vectors for AEGIS
This patch adds test vectors for the AEGIS family of AEAD algorithms
(AEGIS-128, AEGIS-128L, and AEGIS-256). The test vectors were
generated using the reference implementation from SUPERCOP (see code
comments for more details).

Signed-off-by: Ondrej Mosnacek <omosnacek@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-19 00:13:59 +08:00
Ondrej Mosnacek
f606a88e58 crypto: aegis - Add generic AEGIS AEAD implementations
This patch adds the generic implementation of the AEGIS family of AEAD
algorithms (AEGIS-128, AEGIS-128L, and AEGIS-256). The original
authors of AEGIS are Hongjun Wu and Bart Preneel.

At the time of writing, AEGIS is one of the finalists in CAESAR, an
open competition intended to select a portfolio of alternatives to
the problematic AES-GCM:

https://competitions.cr.yp.to/caesar-submissions.html
https://competitions.cr.yp.to/round3/aegisv11.pdf

Signed-off-by: Ondrej Mosnacek <omosnacek@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-19 00:13:58 +08:00
Gilad Ben-Yossef
15f47ce575 crypto: testmgr - reorder paes test lexicographically
Due to a snafu, the "paes" testmgr tests were not ordered
lexicographically, which led to boot-time warnings.
Reorder the tests as needed.

Fixes: a794d8d ("crypto: ccree - enable support for hardware keys")
Reported-by: Abdul Haleem <abdhalee@linux.vnet.ibm.com>
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Tested-by: Abdul Haleem <abdhalee@linux.vnet.ibm.com>
Tested-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-19 00:13:57 +08:00
Christoph Hellwig
fddda2b7b5 proc: introduce proc_create_seq{,_data}
Variants of proc_create{,_data} that directly take a struct seq_operations
argument and drastically reduce the boilerplate code in the callers.

All trivial callers converted over.
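
For example (all "foo" names hypothetical), a caller that previously
needed a file_operations wrapper around seq_open() collapses to:

    #include <linux/proc_fs.h>
    #include <linux/seq_file.h>

    static const struct seq_operations foo_seq_ops = {
    	.start	= foo_seq_start,	/* the usual iterator callbacks */
    	.next	= foo_seq_next,
    	.stop	= foo_seq_stop,
    	.show	= foo_seq_show,
    };

    static int __init foo_proc_init(void)
    {
    	/* One call replaces the fops + seq_open() boilerplate. */
    	if (!proc_create_seq("foo", 0444, NULL, &foo_seq_ops))
    		return -ENOMEM;
    	return 0;
    }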

Signed-off-by: Christoph Hellwig <hch@lst.de>
2018-05-16 07:23:35 +02:00
Kees Cook
4e234eed58 crypto: tcrypt - Remove VLA usage
In the quest to remove all stack VLA usage from the kernel[1], this
allocates the return-code buffers before starting the jiffies timers,
rather than using stack space for the array.  It additionally cleans up
some exit paths and makes sure that the num_mb module_param() is read
only once per execution, to avoid possible races if the value changes.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
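
A hedged sketch of the shape of the change (the request helper is
hypothetical):

    #include <linux/slab.h>

    static int time_multibuffer(unsigned int num_mb)
    {
    	unsigned int i;
    	int err = 0;
    	int *rc;

    	/* Allocate up front: no stack VLA, and the allocation cost
    	 * stays outside the timed window. */
    	rc = kcalloc(num_mb, sizeof(*rc), GFP_KERNEL);
    	if (!rc)
    		return -ENOMEM;

    	/* ... start the jiffies timer here ... */
    	for (i = 0; i < num_mb; i++)
    		rc[i] = do_one_request(i);	/* hypothetical helper */
    	/* ... stop the timer here ... */

    	for (i = 0; i < num_mb; i++)
    		err = err ?: rc[i];

    	kfree(rc);
    	return err;
    }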

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-05 14:52:53 +08:00
Ard Biesheuvel
8da02bf1a2 crypto: sm4 - export encrypt/decrypt routines to other drivers
In preparation for adding support for the SIMD based arm64 implementation
of SM4, which requires a fallback to non-SIMD code when invoked in
certain contexts, expose the generic SM4 encrypt and decrypt routines
to other drivers.
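
The intended use is roughly the following (a sketch: the NEON-side
names are hypothetical, and the generic routine name is quoted from
memory):

    #include <asm/neon.h>
    #include <asm/simd.h>
    #include <crypto/sm4.h>

    static void sm4_do_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
    {
    	if (!may_use_simd()) {
    		/* e.g. in hard IRQ context: fall back to the newly
    		 * exported generic scalar routine. */
    		crypto_sm4_encrypt(tfm, dst, src);
    		return;
    	}
    	kernel_neon_begin();
    	sm4_neon_encrypt(dst, src);	/* hypothetical SIMD routine */
    	kernel_neon_end();
    }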

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-05 14:52:51 +08:00
Gilad Ben-Yossef
a794d8d876 crypto: ccree - enable support for hardware keys
Enable CryptoCell support for hardware keys.

Hardware keys are regular AES keys loaded into CryptoCell internal memory
via firmware, often from secure boot ROM or hardware fuses at boot time.

As such, they can be used for enc/dec purposes like any other key but
cannot (read: it is extremely hard to) be extracted, since they are not
available anywhere in RAM during runtime.

The mechanism has some similarities to s390 secure keys although the keys
are not wrapped or sealed, but simply loaded offline. The interface was
therefore modeled on the s390 secure keys support.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-05 14:52:43 +08:00
Fabio Estevam
b2b4f84d9c crypto: rsa - Remove unneeded error assignment
There is no need to assign an error value to 'ret' prior to calling
mpi_read_raw_from_sgl(), because in the case of error 'ret' will be
assigned the error code inside the if block.

In the non-failure case, 'ret' will be overwritten immediately
afterwards, so remove the unneeded assignment.
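
The pattern in question, reconstructed for illustration (not the exact
code):

    #include <linux/mpi.h>
    #include <linux/scatterlist.h>

    static int parse_key_sketch(struct scatterlist *sgl, unsigned int vlen)
    {
    	MPI key;
    	int ret;

    	ret = -ENOMEM;			/* redundant: never read */
    	key = mpi_read_raw_from_sgl(sgl, vlen);
    	if (!key) {
    		ret = -ENOMEM;		/* the error path sets it anyway */
    		goto out;
    	}
    	ret = 0;			/* the success path overwrites it too */
    	mpi_free(key);
    out:
    	return ret;
    }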

Signed-off-by: Fabio Estevam <fabio.estevam@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-04-21 00:58:37 +08:00
Mahipal Challa
336073840a crypto: testmgr - Allow different compression results
The following error is triggered by the ThunderX ZIP driver
if the test manager is enabled:

[  199.069437] ThunderX-ZIP 0000:03:00.0: Found ZIP device 0 177d:a01a on Node 0
[  199.073573] alg: comp: Compression test 1 failed for deflate-generic: output len = 37

The reason for this error is the verification of the compression
results.  Verifying the compressed output only works if all algorithm
parameters are identical, in this case to those of the software
implementation.

Different compression engines like the ThunderX ZIP coprocessor
might yield different compression results by tuning the
algorithm parameters. In our case the compressed result is
shorter than the test vector.

We should not forbid different compression results but only
check that compression -> decompression yields the same
result. This is done already in the acomp test. Do something
similar for test_comp().
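
Sketched, the relaxed check looks something like this (buffer handling
simplified; sizes and names are illustrative):

    #include <linux/crypto.h>
    #include <linux/string.h>

    static int check_comp_roundtrip(struct crypto_comp *tfm,
    				const u8 *in, unsigned int inlen,
    				u8 *work, u8 *out, unsigned int bufsize)
    {
    	unsigned int clen = bufsize, dlen = bufsize;
    	int ret;

    	/* Compress, then decompress; do not memcmp() against a fixed
    	 * compressed vector, since engines may tune parameters. */
    	ret = crypto_comp_compress(tfm, in, inlen, work, &clen);
    	if (!ret)
    		ret = crypto_comp_decompress(tfm, work, clen, out, &dlen);
    	if (!ret && (dlen != inlen || memcmp(out, in, dlen)))
    		ret = -EINVAL;	/* round trip must reproduce the input */
    	return ret;
    }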

Signed-off-by: Mahipal Challa <mchalla@cavium.com>
Signed-off-by: Balakrishna Bhamidipati <bbhamidipati@cavium.com>
[jglauber@cavium.com: removed unrelated printk changes, rewrote commit msg,
 fixed whitespace and unneeded initialization]
Signed-off-by: Jan Glauber <jglauber@cavium.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-04-21 00:58:37 +08:00