Add a ghash algorithm test before providing it to users.
Signed-off-by: Youquan, Song <youquan.song@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
CLMUL-NI accelerated GHASH should be turned off on non-x86_64 machines.
Reported-by: Dave Young <hidave.darkstar@gmail.com>
Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto/testmgr.c: In function ‘test_cprng’:
crypto/testmgr.c:1204: warning: ‘err’ may be used uninitialized in this function
Signed-off-by: Felipe Contreras <felipe.contreras@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
size_t nbytes cannot be less than 0 and the test was redundant.
Signed-off-by: Roel Kluin <roel.kluin@gmail.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove special handling of old-style digest algorithms from the procfs
show handler.
Signed-off-by: Benjamin Gilbert <bgilbert@cs.cmu.edu>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
6941c3a0 disabled compilation of the legacy digest code but didn't
actually remove it. Rectify this. Also, remove the crypto_hash_type
extern declaration from algapi.h now that the struct is gone.
Signed-off-by: Benjamin Gilbert <bgilbert@cs.cmu.edu>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Patch to add the fips(ansi_cprng) algorithm, which is ansi_cprng plus a continuous test.
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Acked-by: Jarod Wilson <jarod@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
PCLMULQDQ is used to accelerate the most time-consuming part of GHASH,
carry-less multiplication. More information about PCLMULQDQ can be
found at:
http://software.intel.com/en-us/articles/carry-less-multiplication-and-its-usage-for-computing-the-gcm-mode/
Because PCLMULQDQ changes the XMM state, its usage must be enclosed
by kernel_fpu_begin/end, which can only be used in process context,
so the acceleration is implemented as a crypto_ahash. That is,
requests in soft IRQ context will be deferred to the cryptd kernel
thread.
Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
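As a rough sketch of the dispatch pattern described above (not the actual
ghash-clmulni glue code): the FPU-touching path runs only when the FPU is
usable in the current context, otherwise the request is handed to the
cryptd-backed ahash. The ghash_async_ctx layout, ghash_clmul_update() and
the wrapper name are assumptions for illustration; kernel_fpu_begin/end,
irq_fpu_usable() and the cryptd/ahash calls are the real kernel interfaces.

	#include <crypto/cryptd.h>
	#include <crypto/internal/hash.h>
	#include <linux/string.h>
	#include <asm/fpu/api.h>	/* kernel_fpu_begin/end, irq_fpu_usable (asm/i387.h on older kernels) */

	/* Assumed context layout: the async wrapper keeps a cryptd-backed tfm. */
	struct ghash_async_ctx {
		struct cryptd_ahash *cryptd_tfm;
	};

	static int ghash_clmul_update(struct ahash_request *req);	/* hypothetical PCLMULQDQ core */

	static int ghash_async_update(struct ahash_request *req)
	{
		struct ahash_request *cryptd_req = ahash_request_ctx(req);
		struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
		struct ghash_async_ctx *ctx = crypto_ahash_ctx(tfm);
		int err;

		if (!irq_fpu_usable()) {
			/* Soft IRQ or other atomic context: defer the request
			 * to the cryptd kernel thread. */
			memcpy(cryptd_req, req, sizeof(*req));
			ahash_request_set_tfm(cryptd_req, &ctx->cryptd_tfm->base);
			return crypto_ahash_update(cryptd_req);
		}

		/* Process context: safe to clobber XMM state directly. */
		kernel_fpu_begin();
		err = ghash_clmul_update(req);
		kernel_fpu_end();
		return err;
	}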
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (102 commits)
crypto: sha-s390 - Fix warnings in import function
crypto: vmac - New hash algorithm for intel_txt support
crypto: api - Do not displace newly registered algorithms
crypto: ansi_cprng - Fix module initialization
crypto: xcbc - Fix alignment calculation of xcbc_tfm_ctx
crypto: fips - Depend on ansi_cprng
crypto: blkcipher - Do not use eseqiv on stream ciphers
crypto: ctr - Use chainiv on raw counter mode
Revert crypto: fips - Select CPRNG
crypto: rng - Fix typo
crypto: talitos - add support for 36 bit addressing
crypto: talitos - align locks on cache lines
crypto: talitos - simplify hmac data size calculation
crypto: mv_cesa - Add support for Orion5X crypto engine
crypto: cryptd - Add support to access underlying shash
crypto: gcm - Use GHASH digest algorithm
crypto: ghash - Add GHASH digest algorithm for GCM
crypto: authenc - Convert to ahash
crypto: api - Fix aligned ctx helper
crypto: hmac - Prehash ipad/opad
...
This patch adds VMAC (a fast MAC) support to the crypto framework.
Signed-off-by: Shane Wang <shane.wang@intel.com>
Signed-off-by: Joseph Cihula <joseph.cihula@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
We have a mechanism where newly registered algorithms of a higher
priority can displace existing instances that use a different
implementation of the same algorithm with a lower priority.
Unfortunately the same mechanism can cause a newly registered
algorithm to displace itself if it depends on an existing version
of the same algorithm.
This patch fixes this by keeping all algorithms that the newly
registered algorithm depends on, thus protecting them from being
removed.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As struct skcipher_givcrypt_request includes struct crypto_async_request
at a non-zero offset, testing for NULL after converting the pointer
returned by crypto_dequeue_request does not work. This can result
in IPsec crashes when the queue is depleted.
This patch fixes it by doing the pointer conversion only when the
return value is non-NULL. In particular, we create a new function
__crypto_dequeue_request that does the pointer conversion.
Reported-by: Brad Bosch <bradbosch@comcast.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
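To illustrate the pitfall in isolation (a stand-alone sketch, not the
kernel's __crypto_dequeue_request): converting a NULL pointer to an
enclosing structure that embeds the member at a non-zero offset yields a
non-NULL pointer, so the NULL test has to be done on the raw pointer
before the conversion.

	#include <stddef.h>
	#include <stdio.h>

	struct base_req  { int flags; };
	struct outer_req { int seq; struct base_req creq; };	/* creq sits at a non-zero offset */

	#define container_of(ptr, type, member) \
		((type *)((char *)(ptr) - offsetof(type, member)))

	/* Stand-in for crypto_dequeue_request(): returns NULL when the queue is empty. */
	static struct base_req *dequeue(void)
	{
		return NULL;
	}

	/* Broken: convert first, test second -- the test can never fire. */
	static struct outer_req *dequeue_outer_broken(void)
	{
		struct outer_req *r = container_of(dequeue(), struct outer_req, creq);

		if (!r)			/* never true: r already points "before" NULL */
			return NULL;
		return r;
	}

	/* Fixed: test the raw pointer, convert only when it is non-NULL. */
	static struct outer_req *dequeue_outer_fixed(void)
	{
		struct base_req *req = dequeue();

		return req ? container_of(req, struct outer_req, creq) : NULL;
	}

	int main(void)
	{
		printf("broken: %p\n", (void *)dequeue_outer_broken());
		printf("fixed:  %p\n", (void *)dequeue_outer_fixed());
		return 0;
	}

In the patch, __crypto_dequeue_request plays the role of the fixed helper,
performing the conversion only for entries that actually exist.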
Return the value we got from crypto_register_alg() instead of
returning 0 in any case.
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The alignment calculation of xcbc_tfm_ctx uses alg->cra_alignmask
and not alg->cra_alignmask + 1 as it should. This led to frequent
crashes during the selftest of xcbc(aes-asm) on x86_64
machines. This patch fixes this. Also, we now use the alignmask
of xcbc, and not the alignmask of the underlying algorithm, for the
alignment calculation in xcbc_create.
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
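A small stand-alone illustration of the off-by-one (not the xcbc code
itself): an alignmask of 15 means 16-byte alignment, and rounding helpers
expect the alignment, i.e. mask + 1, not the mask.

	#include <stdint.h>
	#include <stdio.h>

	#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((uintptr_t)(a) - 1))

	int main(void)
	{
		uintptr_t ctx = 0x1001;		/* some unaligned context address */
		unsigned int alignmask = 15;	/* cra_alignmask-style value: 16-byte alignment */

		/* Wrong: passes the mask where an alignment is expected;
		 * the result (0x1001) is not actually 16-byte aligned. */
		printf("broken: %#lx\n", (unsigned long)ALIGN_UP(ctx, alignmask));

		/* Right: the alignment is mask + 1; the result is 0x1010. */
		printf("fixed:  %#lx\n", (unsigned long)ALIGN_UP(ctx, alignmask + 1));
		return 0;
	}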
What about something like this? It defaults the CPRNG to m and makes FIPS
dependent on the CPRNG. That way you get a module built by default, but you
can change it to y manually during config and still satisfy the dependency,
and if you select N it disables FIPS as well. I rather like that better than
making FIPS a tristate. I just tested it out here and it seems to work well.
Let me know what you think.
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Recently we switched to using eseqiv on SMP machines in preference
to chainiv. However, eseqiv does not support stream ciphers, so
they should still default to chainiv.
This patch applies the same check as done by eseqiv to weed out
the stream ciphers. In particular, all algorithms where the IV
size is not equal to the block size will now default to chainiv.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
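The check amounts to something like the sketch below (illustrative, not
the actual crypto/blkcipher.c code; cra_blocksize and cra_ablkcipher.ivsize
are the crypto_alg fields as they stood at the time).

	#include <linux/crypto.h>

	/* Pick a default IV generator: stream ciphers (IV size != block size)
	 * cannot use eseqiv, so they keep chainiv. */
	static const char *default_geniv(const struct crypto_alg *alg)
	{
		if (alg->cra_ablkcipher.ivsize != alg->cra_blocksize)
			return "chainiv";
		return "eseqiv";
	}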
Raw counter mode only works with chainiv, which is no longer
the default IV generator on SMP machines. This broke raw counter
mode as it can no longer instantiate as a givcipher.
This patch fixes it by always picking chainiv on raw counter
mode. This is based on the diagnosis and a patch by Huang
Ying.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This reverts commit 215ccd6f55.
It causes CPRNG and everything selected by it to be built-in
whenever FIPS is enabled. The problem is that it is selecting
a tristate from a bool, which is usually not what is intended.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Correct a typo in crypto/rng.c
Signed-off-by: Christian Kujau <lists@nerdbynature.de>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
cryptd_alloc_ahash() will allocate a cryptd-ed ahash for the specified
algorithm name. The newly allocated one is guaranteed to be a cryptd-ed
ahash, so the underlying shash can be obtained via cryptd_ahash_child().
Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
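A sketch of how a caller might use the new helpers (the wrapper function,
the "ghash" name and the error handling are illustrative;
cryptd_alloc_ahash() and cryptd_ahash_child() are the interfaces this
commit describes):

	#include <crypto/cryptd.h>
	#include <crypto/hash.h>
	#include <linux/err.h>

	static int example_get_cryptd_ghash(struct cryptd_ahash **out)
	{
		struct cryptd_ahash *cryptd_tfm;
		struct crypto_shash *child;

		cryptd_tfm = cryptd_alloc_ahash("ghash", 0, 0);
		if (IS_ERR(cryptd_tfm))
			return PTR_ERR(cryptd_tfm);

		/* Guaranteed to be a cryptd-ed ahash, so the underlying shash
		 * can be driven directly in process context. */
		child = cryptd_ahash_child(cryptd_tfm);
		(void)child;

		*out = cryptd_tfm;
		return 0;
	}

When the wrapper is no longer needed it would presumably be released with
the matching cryptd free helper from the same interface.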
Remove the dedicated GHASH implementation in GCM, and use the GHASH
digest algorithm instead. This will make GCM use a hardware-accelerated
GHASH implementation automatically if one is available.
The ahash interface is used instead of shash, because some
hardware-accelerated GHASH implementations need an asynchronous
interface.
Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
GHASH is implemented as a shash algorithm. The actual implementation
is copied from gcm.c. This makes it possible to add architecture- or
hardware-accelerated GHASH implementations.
Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts authenc to the new ahash interface.
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx:
dmaengine: at_hdmac: add DMA slave transfers
dmaengine: at_hdmac: new driver for the Atmel AHB DMA Controller
dmaengine: dmatest: correct thread_count while using multiple thread per channel
dmaengine: dmatest: add a maximum number of test iterations
drivers/dma: Remove unnecessary semicolons
drivers/dma/fsldma.c: Remove unnecessary semicolons
dmaengine: move HIGHMEM64G restriction to ASYNC_TX_DMA
fsldma: do not clear bandwidth control bits on the 83xx controller
fsldma: enable external start for the 83xx controller
fsldma: use PCI Read Multiple command
This patch uses crypto_shash_export/crypto_shash_import to prehash
ipad/opad to speed up hmac. This is partly based on a similar patch
by Steffen Klassert.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
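Conceptually the optimisation looks like the sketch below (not the actual
crypto/hmac.c; the fixed buffer sizes, the short-key-only handling and the
function names are simplifying assumptions, while the crypto_shash_* calls
are the real shash interface): the padded key blocks are hashed once at
setkey time, the partial states are exported, and each message imports
those states instead of re-hashing the pads.

	#include <crypto/hash.h>
	#include <linux/errno.h>
	#include <linux/string.h>

	/* Setkey: hash key^ipad and key^opad once, saving the partial states.
	 * The state buffers must be at least crypto_shash_statesize() bytes. */
	static int hmac_example_setkey(struct shash_desc *desc,
				       const u8 *key, unsigned int keylen,
				       void *ipad_state, void *opad_state)
	{
		unsigned int bs = crypto_shash_blocksize(desc->tfm);
		u8 pad[128];		/* assumes block size <= 128 bytes */
		unsigned int i;
		int err;

		if (bs > sizeof(pad) || keylen > bs)
			return -EINVAL;	/* real hmac first digests over-long keys */

		memset(pad, 0, bs);
		memcpy(pad, key, keylen);

		for (i = 0; i < bs; i++)
			pad[i] ^= 0x36;				/* ipad */
		err = crypto_shash_init(desc) ?:
		      crypto_shash_update(desc, pad, bs) ?:
		      crypto_shash_export(desc, ipad_state);
		if (err)
			return err;

		for (i = 0; i < bs; i++)
			pad[i] ^= 0x36 ^ 0x5c;			/* switch to opad */
		return crypto_shash_init(desc) ?:
		       crypto_shash_update(desc, pad, bs) ?:
		       crypto_shash_export(desc, opad_state);
	}

	/* Per message: import the saved states instead of hashing the pads again. */
	static int hmac_example_digest(struct shash_desc *desc,
				       const void *ipad_state, const void *opad_state,
				       const u8 *msg, unsigned int len, u8 *out)
	{
		unsigned int ds = crypto_shash_digestsize(desc->tfm);
		u8 inner[64];		/* assumes digest size <= 64 bytes */

		if (ds > sizeof(inner))
			return -EINVAL;

		return crypto_shash_import(desc, ipad_state) ?:
		       crypto_shash_update(desc, msg, len) ?:
		       crypto_shash_final(desc, inner) ?:
		       crypto_shash_import(desc, opad_state) ?:
		       crypto_shash_update(desc, inner, ds) ?:
		       crypto_shash_final(desc, out);
	}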
It's undefined behaviour in C to write outside the bounds of an array.
The key expansion routine takes a shortcut of creating 8 words at a
time, but this creates 4 additional words which don't fit in the array.
As everyone is hopefully now aware, GCC is at liberty to make any
assumptions and optimisations it likes in situations where it can
detect that UB has occurred, up to and including nasal demons, and
as the indices being accessed in the array are trivially calculable,
it's rash to invite gcc to take any liberties at all.
Signed-off-by: Phil Carmody <ext-phil.2.carmody@nokia.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto_init_shash_ops_async() tests for setkey and not for import
before exporting the algorithm's import function to ahash.
This patch fixes this.
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
ahash_op_unaligned() and ahash_def_finup() allocate memory atomically,
regardless of whether the request can sleep or not. This patch changes
this to use GFP_KERNEL if the request can sleep.
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
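The choice boils down to the request's MAY_SLEEP flag; a minimal sketch
(the helper name is illustrative, the flag and gfp constants are the real
kernel ones):

	#include <crypto/hash.h>
	#include <linux/gfp.h>

	static gfp_t ahash_request_gfp(struct ahash_request *req)
	{
		/* Sleepable requests may use GFP_KERNEL; everything else
		 * must fall back to atomic allocation. */
		return (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
		       GFP_KERNEL : GFP_ATOMIC;
	}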
This patch provides a default export/import function for all
shash algorithms. It simply copies the descriptor context as
is done by sha1_generic.
This in essence means that all existing shash algorithms now
support export/import. This is something that will be depended
upon in implementations such as hmac. Therefore all new shash
and ahash implementations must support export/import.
For those that cannot obtain a partial result, padlock-sha's
fallback model should be used so that a partial result is always
available.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
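In effect the defaults are just a copy of the descriptor context, along
the lines of this sketch (the function names are illustrative;
shash_desc_ctx() and crypto_shash_descsize() are the real helpers):

	#include <crypto/hash.h>
	#include <linux/string.h>

	static int example_shash_export(struct shash_desc *desc, void *out)
	{
		memcpy(out, shash_desc_ctx(desc), crypto_shash_descsize(desc->tfm));
		return 0;
	}

	static int example_shash_import(struct shash_desc *desc, const void *in)
	{
		memcpy(shash_desc_ctx(desc), in, crypto_shash_descsize(desc->tfm));
		return 0;
	}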
This patch replaces the 32-bit counters in sha512_generic with
64-bit counters. It also switches the bit count to the simpler
byte count.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch renames struct sha512_ctx and exports it as struct
sha512_state so that other sha512 implementations can use it
as the reference structure for exporting their state.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Although xcbc was converted to shash, it didn't obey the new
requirement that all hash state must be stored in the descriptor
rather than the transform.
This patch fixes this issue and also optimises away the rekeying
by precomputing K2 and K3 within setkey.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
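A sketch of the setkey-time precomputation (illustrative, not the actual
crypto/xcbc.c; per RFC 3566 the subkeys are the block cipher applied to
constant blocks of 0x01, 0x02 and 0x03, with K1 then keying the CBC-MAC
core):

	#include <linux/crypto.h>
	#include <linux/errno.h>
	#include <linux/string.h>

	static int example_xcbc_setkey(struct crypto_cipher *cipher,
				       const u8 *inkey, unsigned int keylen,
				       u8 *k1, u8 *k2, u8 *k3, unsigned int bs)
	{
		u8 block[16];
		int err;

		if (bs > sizeof(block))
			return -EINVAL;

		err = crypto_cipher_setkey(cipher, inkey, keylen);
		if (err)
			return err;

		/* K2 and K3 only pad the final block, so deriving them once
		 * here removes the per-message rekeying. */
		memset(block, 0x02, bs);
		crypto_cipher_encrypt_one(cipher, k2, block);
		memset(block, 0x03, bs);
		crypto_cipher_encrypt_one(cipher, k3, block);

		/* K1 becomes the key actually used for the CBC-MAC core. */
		memset(block, 0x01, bs);
		crypto_cipher_encrypt_one(cipher, k1, block);
		return crypto_cipher_setkey(cipher, k1, bs);
	}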
This patch adds the finup/export/import functions to the cryptd
ahash implementation. We simply invoke the underlying shash
operations.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When shash_ahash_finup encounters a null request, we end up not
calling the underlying final function. This patch fixes that.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When the alignment check was made unconditional for ahash, we
may end up crashing on shash algorithms because we're always
calling alg->setkey instead of tfm->setkey.
This patch fixes it.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
If cryptd_alloc_instance() fails, the return value is uninitialized.
This patch fixes this by setting the return value.
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch exports the finup operation where available and adds
a default finup operation for ahash. The operations final, finup
and digest will now also deal with unaligned result pointers by
copying the result. Finally, export/import operations will now be
exported too.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
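For the default finup the idea is simply update followed by final,
roughly as below (a synchronous simplification with an illustrative name;
the real ahash default has to chain the two steps through the
asynchronous completion callback):

	#include <crypto/hash.h>

	static int example_ahash_finup(struct ahash_request *req)
	{
		int err;

		err = crypto_ahash_update(req);	/* absorb the remaining data */
		if (err)
			return err;
		return crypto_ahash_final(req);	/* then produce the digest */
	}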
We currently use GFP_ATOMIC in the unaligned setkey function
to allocate the temporary aligned buffer. Since setkey must
be called in a sleepable context, we can use GFP_KERNEL instead.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When we encounter an unaligned pointer we are supposed to copy
it to a temporary aligned location. However the temporary buffer
isn't aligned properly. This patch fixes that.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Some unaligned buffers on the stack weren't zapped properly, which
may cause secret data to be leaked. This patch fixes them by doing
a zero memset.
It is also possible for us to place random kernel stack contents
in the digest buffer if a digest operation fails. This is fixed
by only copying if the operation succeeded.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that all ahash implementations have been converted to the new
ahash type, we can remove old_ahash_alg and its associated support.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch changes cryptd to use the new style ahash type. In
particular, the instance is enlarged to encapsulate the new
ahash_alg structure.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch changes cryptd to use the template->create function
instead of alloc in anticipation for the switch to new style
ahash algorithms.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a mask parameter to complement the existing type
parameter. This is useful when instantiating algorithms that
require a mask other than the default, e.g., ahash algorithms.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts crypto_ahash to the new style. The old ahash
algorithm type is retained until the existing ahash implementations
are also converted. All ahash users will automatically get the
new crypto_ahash type.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>