Commit Graph

36 Commits

Herbert Xu
3a01d0ee2b crypto: skcipher - Remove top-level givcipher interface
This patch removes the old crypto_grab_skcipher helper and replaces
it with crypto_grab_skcipher2.

As this is the final entry point into givcipher this patch also
removes all traces of the top-level givcipher interface, including
all implicit IV generators such as chainiv.

The bottom-level givcipher interface remains until the drivers
using it are converted.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-18 17:35:46 +08:00
Jason A. Donenfeld
70d906bc17 crypto: skcipher - Copy iv from desc even for 0-len walks
Some ciphers actually support encrypting zero length plaintexts. For
example, many AEAD modes support this. The resulting ciphertext for
those winds up being only the authentication tag, which is a result of
the key, the iv, the additional data, and the fact that the plaintext
had zero length. The blkcipher constructors won't copy the IV to the
right place, however, when using a zero length input, resulting in
some significant problems when ciphers call their initialization
routines, only to find that the ->iv parameter is uninitialized. One
such example of this would be using chacha20poly1305 with a zero length
input, which then calls chacha20, which calls the key setup routine,
which eventually OOPSes due to the uninitialized ->iv member.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-12-09 20:16:22 +08:00
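
A hedged sketch of the ordering the fix requires (not the literal patch): the
walk setup has to publish the IV pointer before bailing out on a zero-length
request, so that walk.iv is valid even when there is nothing to process.

    /* sketch only: inside a blkcipher walk-first style helper */
    walk->iv = desc->info;        /* make the IV visible to the cipher... */
    if (unlikely(!walk->total))   /* ...even when the request has no data */
        return 0;
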
Herbert Xu
d1a2fd500c crypto: blkcipher - Include crypto/aead.h
All users of AEAD should include crypto/aead.h instead of
include/linux/crypto.h.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-05-13 10:31:34 +08:00
Ard Biesheuvel
4f7f1d7cff crypto: allow blkcipher walks over AEAD data
This adds the function blkcipher_aead_walk_virt_block, which allows the caller
to use the blkcipher walk API to handle the input and output scatterlists.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-03-10 20:17:11 +08:00
Ard Biesheuvel
822be00fe6 crypto: remove direct blkcipher_walk dependency on transform
In order to allow other uses of the blkcipher walk API than the blkcipher
algos themselves, this patch copies some of the transform data members to the
walk struct so the transform is only accessed at walk init time.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-03-10 20:17:10 +08:00
Mathias Krause
9a5467bf7b crypto: user - fix info leaks in report API
Three errors resulting in kernel memory disclosure:

1/ The structures used for the netlink based crypto algorithm report API
are located on the stack. As snprintf() does not fill the remainder of
the buffer with null bytes, those stack bytes will be disclosed to users
of the API. Switch to strncpy() to fix this.

2/ crypto_report_one() does not initialize all fields of struct
crypto_user_alg. Fix this to plug the heap info leak.

3/ For the module name we should copy only as many bytes as
module_name() returns -- not as many as the destination buffer could
hold. The current code does not, and therefore copies random data
from beyond the end of the module name, since the module name is always
shorter than CRYPTO_MAX_ALG_NAME.

Also switch to use strncpy() to copy the algorithm's name and
driver_name. They are strings, after all.

Signed-off-by: Mathias Krause <minipli@googlemail.com>
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2013-02-19 20:27:03 +08:00
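
A self-contained user-space demonstration of the first leak class, assuming
nothing about the kernel code itself: snprintf() writes only the string plus
its terminating NUL and leaves the rest of an uninitialized fixed-size buffer
untouched, while strncpy() pads the remainder with NUL bytes.

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char a[16], b[16];

        memset(a, 0xAA, sizeof(a));          /* stand-in for leftover stack bytes */
        memset(b, 0xAA, sizeof(b));

        snprintf(a, sizeof(a), "%s", "cbc"); /* a[4..15] still hold 0xAA */
        strncpy(b, "cbc", sizeof(b));        /* the remainder of b is now 0x00 */

        for (size_t i = 0; i < sizeof(a); i++)
            printf("%02x ", (unsigned char)a[i]);
        printf("\n");
        for (size_t i = 0; i < sizeof(b); i++)
            printf("%02x ", (unsigned char)b[i]);
        printf("\n");
        return 0;
    }
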
Julia Lawall
3e8afe35c3 crypto: use ERR_CAST
Replace PTR_ERR followed by ERR_PTR by ERR_CAST, to be more concise.

The semantic patch that makes this change is as follows:
(http://coccinelle.lip6.fr/)

// <smpl>
@@
expression err,x;
@@
-       err = PTR_ERR(x);
        if (IS_ERR(x))
-                return ERR_PTR(err);
+                return ERR_CAST(x);
// </smpl>

Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2013-02-04 21:16:53 +08:00
David S. Miller
6662df33f8 crypto: Stop using NLA_PUT*().
These macros contain a hidden goto, and are thus extremely error
prone and make code hard to audit.

Signed-off-by: David S. Miller <davem@davemloft.net>
2012-04-02 04:33:42 -04:00
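
For context, the removed macro behaved roughly as sketched below (the real
definition lived in include/net/netlink.h); the hidden jump to an
nla_put_failure label is what made call sites hard to audit, and the
replacement spells the check out explicitly.

    /* roughly what the removed NLA_PUT() did */
    #define NLA_PUT(skb, attrtype, attrlen, data)               \
        do {                                                    \
            if (nla_put(skb, attrtype, attrlen, data))          \
                goto nla_put_failure;  /* hidden control flow */ \
        } while (0)

    /* the explicit form used after this change */
    if (nla_put(skb, CRYPTOCFGA_REPORT_BLKCIPHER,
                sizeof(struct crypto_report_blkcipher), &rblkcipher))
        goto nla_put_failure;
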
Cong Wang
f0dfc0b0b7 crypto: remove the second argument of k[un]map_atomic()
Signed-off-by: Cong Wang <amwang@redhat.com>
2012-03-20 21:48:16 +08:00
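
The mechanical change this implies at each call site, sketched for a typical
mapping (the KM_* slot constants went away together with the argument):

    /* before */
    src = kmap_atomic(page, KM_SOFTIRQ0);
    /* ... use src ... */
    kunmap_atomic(src, KM_SOFTIRQ0);

    /* after */
    src = kmap_atomic(page);
    /* ... use src ... */
    kunmap_atomic(src);
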
Herbert Xu
3acc84739d crypto: algapi - Fix build problem with NET disabled
The report functions use NLA_PUT so we need to ensure that NET
is enabled.

Reported-by: Luis Henriques <henrix@camandro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2011-11-11 06:57:06 +08:00
Steffen Klassert
50496a1fab crypto: Add userspace report for blkcipher type algorithms
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2011-10-21 14:24:05 +02:00
Peter Zijlstra
61ecdb801e mm: strictly nested kmap_atomic()
Ensure kmap_atomic() usage is strictly nested

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Reviewed-by: Rik van Riel <riel@redhat.com>
Acked-by: Chris Metcalf <cmetcalf@tilera.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Miller <davem@davemloft.net>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2010-10-26 16:52:08 -07:00
Herbert Xu
b170a137f4 crypto: skcipher - Avoid infinite loop when cipher fails selftest
When an skcipher constructed through crypto_givcipher_default fails
its selftest, we'll loop forever trying to construct new skcipher
objects but failing because it already exists.

The crux of the issue is that once a givcipher fails the selftest,
we'll ignore it on the next run through crypto_skcipher_lookup and
attempt to construct a new givcipher.

We should instead return an error to the caller if we find a
givcipher that has failed the test.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-02-18 21:20:06 +08:00
Herbert Xu
bac1b5c469 crypto: blkcipher - Fix WARN_ON handling in walk_done
When we get left-over bits from a slow walk, it means that the
underlying cipher has gone troppo.  However, as we're handling
that case we should ensure that the caller terminates the walk.

This patch does this by setting walk->nbytes to zero.

Reported-by: Roel Kluin <roel.kluin@gmail.com>
Reported-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-01-27 17:11:13 +11:00
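
Zeroing walk->nbytes is sufficient because callers of the walk API loop on
that field; a hedged sketch of a typical mode implementation shows why the
walk then terminates:

    struct blkcipher_walk walk;
    int err;

    blkcipher_walk_init(&walk, dst, src, nbytes);
    err = blkcipher_walk_virt(desc, &walk);

    while (walk.nbytes) {
        /* process walk.src.virt.addr -> walk.dst.virt.addr here */
        err = blkcipher_walk_done(desc, &walk, 0);
    }
    return err;
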
Herbert Xu
5be5e667a9 crypto: skcipher - Move IV generators into their own modules
This patch moves the default IV generators into their own modules
in order to break a dependency loop between cryptomgr, rng, and
blkcipher.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-08-29 15:50:00 +10:00
Herbert Xu
76fc60a2e3 [CRYPTO] skcipher: Move chainiv/seqiv into crypto_blkcipher module
For compatibility with dm-crypt initramfs setups it is useful to merge
chainiv/seqiv into the crypto_blkcipher module.  Since they're required
by most algorithms anyway this is an acceptable trade-off.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-02-23 11:12:06 +08:00
Herbert Xu
b9c55aa475 [CRYPTO] skcipher: Create default givcipher instances
This patch makes crypto_alloc_ablkcipher/crypto_grab_skcipher always
return algorithms that are capable of generating their own IVs through
givencrypt and givdecrypt.  Each algorithm may specify its default IV
generator through the geniv field.

For algorithms that do not set the geniv field, the blkcipher layer will
pick a default.  Currently it's chainiv for synchronous algorithms and
eseqiv for asynchronous algorithms.  Note that if these wrappers do not
work on an algorithm then that algorithm must specify its own geniv or
it can't be used at all.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:46 +11:00
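
A hedged sketch of the selection rule described above; the helper name
crypto_default_geniv matches the in-tree code of that era, but treat the
exact form as an assumption:

    /* pick a default IV generator when the algorithm does not set .geniv */
    static const char *crypto_default_geniv(const struct crypto_alg *alg)
    {
        return alg->cra_flags & CRYPTO_ALG_ASYNC ? "eseqiv" : "chainiv";
    }
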
Herbert Xu
ecfc43292f [CRYPTO] skcipher: Add skcipher_geniv_alloc/skcipher_geniv_free
This patch creates the infrastructure to help the construction of givcipher
templates that wrap around existing blkcipher/ablkcipher algorithms by adding
an IV generator to them.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:44 +11:00
Herbert Xu
23508e11ab [CRYPTO] skcipher: Added geniv field
This patch introduces the geniv field which indicates the default IV
generator for each algorithm.  It should point to a string that is not
freed as long as the algorithm is registered.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:43 +11:00
Herbert Xu
42c271c6c5 [CRYPTO] scatterwalk: Move scatterwalk.h to linux/crypto
The scatterwalk infrastructure is used by algorithms so it needs to
move out of crypto for future users that may live in drivers/crypto
or asm/*/crypto.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:32 +11:00
Herbert Xu
332f8840f7 [CRYPTO] ablkcipher: Add distinct ABLKCIPHER type
Up until now, ablkcipher algorithms have been identified as
type BLKCIPHER with the ASYNC bit set.  This is suboptimal because
ablkcipher refers to two things.  On the one hand it refers to the
top-level ablkcipher interface with requests.  On the other hand it
refers to an algorithm type underneath.

As it is you cannot request a synchronous block cipher algorithm
with the ablkcipher interface on top.  This is a problem because
we want to be able to eventually phase out the blkcipher top-level
interface.

This patch fixes this by making ABLKCIPHER its own type, just as
we have distinct types for HASH and DIGEST.  The type is associated
with the algorithm implementation only.

Which top-level interface is used for synchronous block ciphers is
then determined by the mask that's used.  If it's a specific mask
then the old blkcipher interface is given, otherwise we go with the
new ablkcipher interface.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:15 +11:00
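
In practice the selection happens through the type/mask pair passed at
allocation time; a hedged example of how a caller of that era would insist
on the synchronous blkcipher front end:

    struct crypto_blkcipher *tfm;

    /* mask out ASYNC so only a synchronous implementation can match */
    tfm = crypto_alloc_blkcipher("cbc(aes)", 0, CRYPTO_ALG_ASYNC);
    if (IS_ERR(tfm))
        return PTR_ERR(tfm);
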
Herbert Xu
7607bd8ff0 [CRYPTO] blkcipher: Added blkcipher_walk_virt_block
This patch adds the helper blkcipher_walk_virt_block which is similar to
blkcipher_walk_virt but uses a supplied block size instead of the block
size of the block cipher.  This is useful for CTR where the block size is
1 but we still want to walk by the block size of the underlying cipher.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:48 -07:00
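
A hedged usage sketch for a CTR-style mode: the algorithm's own block size is
1, but the walk hands out chunks sized by the underlying cipher's block size
(child stands for the underlying cipher handle; tail handling follows ctr.c):

    struct blkcipher_walk walk;
    unsigned int bsize = crypto_cipher_blocksize(child);  /* e.g. 16 for AES */
    int err;

    blkcipher_walk_init(&walk, dst, src, nbytes);
    err = blkcipher_walk_virt_block(desc, &walk, bsize);

    while (walk.nbytes >= bsize) {
        /* keystream generation and XOR in multiples of bsize go here */
        err = blkcipher_walk_done(desc, &walk, walk.nbytes % bsize);
    }
    if (walk.nbytes) {
        /* handle the final partial block, then finish the walk */
        err = blkcipher_walk_done(desc, &walk, 0);
    }
    return err;
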
Herbert Xu
2614de1b9a [CRYPTO] blkcipher: Increase kmalloc amount to aligned block size
Now that the block size is no longer guaranteed to be a multiple of the alignment, we need to
increase the kmalloc amount in blkcipher_next_slow to use the aligned block
size.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:48 -07:00
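
A self-contained illustration of the arithmetic, assuming the kernel's usual
mask convention (alignmask = alignment - 1): the slow path must allocate for
the block size rounded up to the alignment, not the raw block size.

    #include <stdio.h>

    /* round len up to a multiple of (alignmask + 1), like the kernel's ALIGN() */
    static unsigned int aligned_len(unsigned int len, unsigned int alignmask)
    {
        return (len + alignmask) & ~alignmask;
    }

    int main(void)
    {
        /* stream-cipher case: block size 1, 16-byte alignment requirement */
        printf("%u\n", aligned_len(1, 15));   /* prints 16 */
        /* block size already a multiple of the alignment: unchanged */
        printf("%u\n", aligned_len(16, 15));  /* prints 16 */
        return 0;
    }
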
Herbert Xu
70613783fc [CRYPTO] blkcipher: Remove alignment restriction on block size
Previously we assumed for convenience that the block size is a multiple of
the algorithm's required alignment.  With the pending addition of CTR this
will no longer be the case as the block size will be 1 due to it being a
stream cipher.  However, the alignment requirement will be that of the
underlying implementation which will most likely be greater than 1.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:46 -07:00
Ingo Oeser
5aaff0c8f7 [CRYPTO] blkcipher: Use max() in blkcipher_get_spot() to state the intention
Use max in blkcipher_get_spot() instead of open coding it.

Signed-off-by: Ingo Oeser <ioe-lkml@rameria.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:44 -07:00
Herbert Xu
791b4d5f73 [CRYPTO] api: Add missing headers for setkey_unaligned
This patch ensures that kernel.h and slab.h are included for
the setkey_unaligned function.  It also breaks a couple of
long lines.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:40 -07:00
Herbert Xu
32528d0fbd [CRYPTO] blkcipher: Fix inverted test in blkcipher_get_spot
The previous patch had the conditional inverted.  This patch fixes it
so that we return the original position if it does not straddle a page.

Thanks to Bob Gilligan for spotting this.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-09-10 15:51:11 +08:00
Herbert Xu
e4630f9fd8 [CRYPTO] blkcipher: Fix handling of kmalloc page straddling
The function blkcipher_get_spot tries to return a buffer of
the specified length that does not straddle a page.  It has
an off-by-one bug so it may advance a page unnecessarily.

What's worse, one of its callers doesn't provide a buffer
that's sufficiently long for this operation.

This patch fixes both problems.  Thanks to Bob Gilligan for
diagnosing this problem and providing a fix.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-09-09 08:45:21 +01:00
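
A user-space demonstration of the off-by-one, assuming 4 KiB pages: a buffer
that ends exactly on a page boundary does not straddle the page, so the check
must look at the last byte (start + len - 1), not at start + len.

    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SIZE 4096UL
    #define PAGE_MASK (~(PAGE_SIZE - 1))

    /* does [start, start + len) cross a page boundary? */
    static int straddles_buggy(uintptr_t start, unsigned long len)
    {
        return ((start + len) & PAGE_MASK) != (start & PAGE_MASK);  /* off by one */
    }

    static int straddles_fixed(uintptr_t start, unsigned long len)
    {
        return ((start + len - 1) & PAGE_MASK) != (start & PAGE_MASK);
    }

    int main(void)
    {
        uintptr_t start = 0x1000 + 4080;   /* 16 bytes left in this page */

        printf("buggy: %d, fixed: %d\n",
               straddles_buggy(start, 16),   /* 1: would advance a page needlessly */
               straddles_fixed(start, 16));  /* 0: the buffer fits exactly */
        return 0;
    }
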
Sebastian Siewior
0681717678 [CRYPTO] api: fix writing into unallocated memory in setkey_aligned
setkey_unaligned(), committed in ca7c39385c,
overwrites unallocated memory in the following memset() because
I used the wrong buffer length.

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-08-06 15:33:56 +08:00
Sebastian Siewior
ca7c39385c [CRYPTO] api: Handle unaligned keys in setkey
setkey() in {cipher,blkcipher,ablkcipher,hash}.c does not respect the
requested alignment by the algorithm. This patch fixes it. The extra
memory is allocated by kmalloc() with GFP_ATOMIC flag.

Signed-off-by: Sebastian Siewior <linux-crypto@ml.breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-07-11 20:58:54 +08:00
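
Roughly the shape of the helper these two commits add and then fix, as a
hedged sketch (the real code lives next to each setkey implementation; the
call into the algorithm's own setkey is elided):

    static int setkey_unaligned(struct crypto_tfm *tfm, const u8 *key,
                                unsigned int keylen)
    {
        unsigned long alignmask = crypto_tfm_alg_alignmask(tfm);
        unsigned int absize = keylen + alignmask;
        u8 *buffer, *alignbuffer;
        int ret;

        buffer = kmalloc(absize, GFP_ATOMIC);
        if (!buffer)
            return -ENOMEM;

        alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
        memcpy(alignbuffer, key, keylen);
        ret = 0;  /* the algorithm's real setkey runs here on alignbuffer */
        memset(alignbuffer, 0, keylen);  /* wipe keylen bytes only; using absize
                                            would write past the allocation */
        kfree(buffer);
        return ret;
    }
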
Herbert Xu
32e3983fe5 [CRYPTO] api: Add async block cipher interface
This patch adds the frontend interface for asynchronous block ciphers.
In addition to the usual block cipher parameters, there is a callback
function pointer and a data pointer.  The callback will be invoked only
if the encrypt/decrypt handlers return -EINPROGRESS.  In other words,
if the return value is zero, the completion handler (or the equivalent
code) needs to be invoked by the caller.

The request structure is allocated and freed by the caller.  Its size
is determined by calling crypto_ablkcipher_reqsize().  The helpers
ablkcipher_request_alloc/ablkcipher_request_free can be used to manage
the memory for a request.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-05-02 14:38:30 +10:00
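
A hedged usage sketch of this interface; the transform name, completion
wiring, scatterlists, and everything prefixed my_ are placeholders, while the
calls are the ones named above plus the request set_callback/set_crypt
helpers of the same API:

    static void my_complete(struct crypto_async_request *base, int err)
    {
        complete(base->data);  /* placeholder: wake up the waiter below */
    }

    static int my_encrypt_one(struct scatterlist *src_sg,
                              struct scatterlist *dst_sg,
                              unsigned int nbytes, u8 *iv)
    {
        struct crypto_ablkcipher *tfm;
        struct ablkcipher_request *req;
        DECLARE_COMPLETION_ONSTACK(done);
        int err;

        tfm = crypto_alloc_ablkcipher("cbc(aes)", 0, 0);
        if (IS_ERR(tfm))
            return PTR_ERR(tfm);

        /* request size comes from crypto_ablkcipher_reqsize() internally */
        req = ablkcipher_request_alloc(tfm, GFP_KERNEL);
        if (!req) {
            crypto_free_ablkcipher(tfm);
            return -ENOMEM;
        }
        ablkcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
                                        my_complete, &done);
        ablkcipher_request_set_crypt(req, src_sg, dst_sg, nbytes, iv);

        err = crypto_ablkcipher_encrypt(req);
        if (err == -EINPROGRESS || err == -EBUSY) {
            wait_for_completion(&done);  /* my_complete() runs on completion */
            err = 0;
        }
        /* a plain 0 return means synchronous completion; no callback fires */

        ablkcipher_request_free(req);
        crypto_free_ablkcipher(tfm);
        return err;
    }
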
Herbert Xu
03f5d8cedb [CRYPTO] api: Proc functions should be marked as unused
The proc functions were incorrectly marked as used rather than unused.
They may be unused if proc is disabled.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-05-02 14:38:29 +10:00
Herbert Xu
27d2a33007 [CRYPTO] api: Allow multiple frontends per backend
This patch adds support for multiple frontend types for each backend
algorithm by passing the type and mask through to the backend type
init function.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-02-07 09:21:01 +11:00
Herbert Xu
fb469840b8 [CRYPTO] all: Check for usage in hard IRQ context
Using blkcipher/hash crypto operations in hard IRQ context can lead
to random memory corruption due to the reuse of kmap_atomic slots.
Since crypto operations were never meant to be used in hard IRQ
contexts, this patch checks for such usage and returns an error
before kmap_atomic is performed.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-02-07 09:20:58 +11:00
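
The check amounts to a one-line guard at the point where the walk starts;
roughly (treat the exact error code as an assumption):

    if (WARN_ON_ONCE(in_irq()))
        return -EDEADLK;
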
Al Viro
ee36c2bf8e [PATCH] uml problems with linux/io.h
Remove useless includes of linux/io.h; don't even try to build iomap_copy
on uml (it doesn't have readb() et al., so...)

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Acked-by: Jeff Dike <jdike@addtoit.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-12-13 09:05:52 -08:00
Herbert Xu
5cde0af2a9 [CRYPTO] cipher: Added block cipher type
This patch adds the new type of block ciphers.  Unlike current cipher
algorithms which operate on a single block at a time, block ciphers
operate on an arbitrarily long linear area of data.  As it is block-based,
it will skip any data remaining at the end which cannot form a block.

The block cipher has one major difference when compared to the existing
block cipher implementation.  The sg walking is now performed by the
algorithm rather than the cipher mid-layer.  This is needed for drivers
that directly support sg lists.  It also improves performance for all
algorithms as it reduces the total number of indirect calls by one.

In future the existing cipher algorithm will be converted to only have
a single-block interface.  This will be done after all existing users
have switched over to the new block cipher type.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:41:52 +10:00
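
For orientation, a hedged sketch of registering an algorithm of this new
type; everything prefixed my_ is a placeholder, while the crypto_alg layout
and crypto_blkcipher_type are the pieces this commit introduces:

    static struct crypto_alg my_cbc_alg = {
        .cra_name        = "cbc(myalg)",
        .cra_driver_name = "cbc-myalg-generic",
        .cra_priority    = 100,
        .cra_flags       = CRYPTO_ALG_TYPE_BLKCIPHER,
        .cra_blocksize   = 16,
        .cra_ctxsize     = sizeof(struct my_ctx),
        .cra_type        = &crypto_blkcipher_type,
        .cra_module      = THIS_MODULE,
        .cra_u           = {
            .blkcipher = {
                .min_keysize = 16,
                .max_keysize = 32,
                .ivsize      = 16,
                .setkey      = my_setkey,
                .encrypt     = my_encrypt,   /* walks the sg lists itself */
                .decrypt     = my_decrypt,
            },
        },
    };

    static int __init my_mod_init(void)
    {
        return crypto_register_alg(&my_cbc_alg);
    }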