File: linux/arch/arm/crypto/Kconfig

Commit: 68546e5632 ("lib/crypto: curve25519: Consolidate into single module")
Author: Eric Biggers <ebiggers@kernel.org>

Reorganize the Curve25519 library code:

- Build a single libcurve25519 module, instead of up to three modules:
  libcurve25519, libcurve25519-generic, and an arch-specific module.

- Move the arch-specific Curve25519 code from arch/$(SRCARCH)/crypto/ to
  lib/crypto/$(SRCARCH)/.  Centralize the build rules into
  lib/crypto/Makefile and lib/crypto/Kconfig.

- Include the arch-specific code directly in lib/crypto/curve25519.c via
  a header, rather than using a separate .c file.

- Eliminate the entanglement with CRYPTO.  CRYPTO_LIB_CURVE25519 no
  longer selects CRYPTO, and the arch-specific Curve25519 code no longer
  depends on CRYPTO.

This brings Curve25519 in line with the latest conventions for
lib/crypto/, used by other algorithms.  The exception is that I kept the
generic code in separate translation units for now.  (Some of the
function names collide between the x86 and generic Curve25519 code.  And
the Curve25519 functions are very long anyway, so inlining doesn't
matter as much for Curve25519 as it does for some other algorithms.)

Link: https://lore.kernel.org/r/20250906213523.84915-11-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
Date: 2025-09-06 16:32:43 -07:00
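To illustrate the shape of the change described above, here is a minimal sketch of what the consolidated option in lib/crypto/Kconfig could look like after the move; the CRYPTO_LIB_CURVE25519_ARCH symbol, its defaults, and the help text are illustrative assumptions, not copied from the patch:

# Sketch only -- the names and defaults below are assumptions for illustration.
config CRYPTO_LIB_CURVE25519
	tristate
	help
	  The Curve25519 library functions, built as a single
	  libcurve25519 module that no longer selects CRYPTO.

config CRYPTO_LIB_CURVE25519_ARCH
	bool
	depends on CRYPTO_LIB_CURVE25519
	default y if ARM && KERNEL_MODE_NEON
	default y if X86_64

Under a layout like this, lib/crypto/Makefile builds the lib/crypto/$(SRCARCH)/ code into the same libcurve25519 object, and lib/crypto/curve25519.c pulls the arch implementation in through a header rather than a separate .c file, as the commit message describes.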


# SPDX-License-Identifier: GPL-2.0
menu "Accelerated Cryptographic Algorithms for CPU (arm)"
config CRYPTO_GHASH_ARM_CE
tristate "Hash functions: GHASH (PMULL/NEON/ARMv8 Crypto Extensions)"
depends on KERNEL_MODE_NEON
select CRYPTO_AEAD
select CRYPTO_HASH
select CRYPTO_CRYPTD
select CRYPTO_LIB_AES
select CRYPTO_LIB_GF128MUL
help
GCM GHASH function (NIST SP800-38D)
Architecture: arm using
- PMULL (Polynomial Multiply Long) instructions
- NEON (Advanced SIMD) extensions
- ARMv8 Crypto Extensions
Use an implementation of GHASH (used by the GCM AEAD chaining mode)
that uses the 64x64 to 128 bit polynomial multiplication (vmull.p64)
that is part of the ARMv8 Crypto Extensions, or a slower variant that
uses the vmull.p8 instruction that is part of the basic NEON ISA.
config CRYPTO_NHPOLY1305_NEON
tristate "Hash functions: NHPoly1305 (NEON)"
depends on KERNEL_MODE_NEON
select CRYPTO_NHPOLY1305
help
NHPoly1305 hash function (Adiantum)
Architecture: arm using:
- NEON (Advanced SIMD) extensions
config CRYPTO_BLAKE2B_NEON
tristate "Hash functions: BLAKE2b (NEON)"
depends on KERNEL_MODE_NEON
select CRYPTO_BLAKE2B
help
BLAKE2b cryptographic hash function (RFC 7693)
Architecture: arm using
- NEON (Advanced SIMD) extensions
BLAKE2b digest algorithm optimized with ARM NEON instructions.
On ARM processors that have NEON support but not the ARMv8
Crypto Extensions, typically this BLAKE2b implementation is
much faster than the SHA-2 family and slightly faster than
SHA-1.
config CRYPTO_AES_ARM
tristate "Ciphers: AES"
select CRYPTO_ALGAPI
select CRYPTO_AES
help
Block ciphers: AES cipher algorithms (FIPS-197)
Architecture: arm
On ARM processors without the Crypto Extensions, this is the
fastest AES implementation for single blocks. For multiple
blocks, the NEON bit-sliced implementation is usually faster.
This implementation may be vulnerable to cache timing attacks,
since it uses lookup tables. However, as countermeasures it
disables IRQs and preloads the tables; it is hoped this makes
such attacks very difficult.
config CRYPTO_AES_ARM_BS
tristate "Ciphers: AES, modes: ECB/CBC/CTR/XTS (bit-sliced NEON)"
depends on KERNEL_MODE_NEON
select CRYPTO_AES_ARM
select CRYPTO_SKCIPHER
select CRYPTO_LIB_AES
help
Length-preserving ciphers: AES cipher algorithms (FIPS-197)
with block cipher modes:
- ECB (Electronic Codebook) mode (NIST SP800-38A)
- CBC (Cipher Block Chaining) mode (NIST SP800-38A)
- CTR (Counter) mode (NIST SP800-38A)
- XTS (XOR Encrypt XOR with ciphertext stealing) mode (NIST SP800-38E
and IEEE 1619)
Bit sliced AES gives around 45% speedup on Cortex-A15 for CTR mode
and for XTS mode encryption, CBC and XTS mode decryption speedup is
around 25%. (CBC encryption speed is not affected by this driver.)
The bit sliced AES code does not use lookup tables, so it is believed
to be invulnerable to cache timing attacks. However, since the bit
sliced AES code cannot process single blocks efficiently, in certain
cases table-based code with some countermeasures against cache timing
attacks will still be used as a fallback method; specifically CBC
encryption (not CBC decryption), the encryption of XTS tweaks, XTS
ciphertext stealing when the message isn't a multiple of 16 bytes, and
CTR when invoked in a context in which NEON instructions are unusable.
config CRYPTO_AES_ARM_CE
tristate "Ciphers: AES, modes: ECB/CBC/CTS/CTR/XTS (ARMv8 Crypto Extensions)"
depends on KERNEL_MODE_NEON
select CRYPTO_SKCIPHER
select CRYPTO_LIB_AES
help
Length-preserving ciphers: AES cipher algorithms (FIPS-197)
with block cipher modes:
- ECB (Electronic Codebook) mode (NIST SP800-38A)
- CBC (Cipher Block Chaining) mode (NIST SP800-38A)
- CTR (Counter) mode (NIST SP800-38A)
- CTS (Cipher Text Stealing) mode (NIST SP800-38A)
- XTS (XOR Encrypt XOR with ciphertext stealing) mode (NIST SP800-38E
and IEEE 1619)
Architecture: arm using:
- ARMv8 Crypto Extensions
endmenu
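
As a usage illustration (separate from the Kconfig file above), the following is a configuration fragment one might merge with scripts/kconfig/merge_config.sh on an ARM platform where CONFIG_KERNEL_MODE_NEON is available, enabling the accelerated algorithms declared above; the module versus built-in choices are only an example, not a recommendation:

# Example fragment; assumes an ARM kernel with CONFIG_KERNEL_MODE_NEON=y.
CONFIG_CRYPTO_GHASH_ARM_CE=m
CONFIG_CRYPTO_NHPOLY1305_NEON=m
CONFIG_CRYPTO_BLAKE2B_NEON=m
CONFIG_CRYPTO_AES_ARM=y
CONFIG_CRYPTO_AES_ARM_CE=m
CONFIG_CRYPTO_AES_ARM_BS=m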