linux/arch/arm64/include/asm/simd.h
Ard Biesheuvel c4b502d60a arm64/simd: Avoid pointless clearing of FP/SIMD buffer
The buffer provided to kernel_neon_begin() is only used if the task is
scheduled out while the FP/SIMD is in use by the kernel, or when such a
section is interrupted by a softirq that also uses the FP/SIMD.

In other words, this happens rarely, and even if it happened often, there
would still be no reason to clear the buffer beforehand. Yet that clearing
currently happens unconditionally, due to the use of a compound literal
expression.
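
For reference, the sketch below is a reconstruction from the description
above rather than a quote of the previous macro: a compound literal such as

  scoped_guard(ksimd, &(struct user_fpsimd_state){ })

creates an automatic object with an initializer, so the language rules
require the entire struct to be zero-filled every time the scope is
entered, whether or not the buffer is ever consumed.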

So define that buffer variable explicitly, and mark it as
__uninitialized so that it will not get cleared, even when
-ftrivial-auto-var-init is in effect.

This requires some preprocessor gymnastics, because the variable must
remain defined throughout the entire guarded scope, and the expression

  ({ struct user_fpsimd_state __uninitialized st; &st; })

is problematic in that regard, even though the compilers seem to
permit it. So instead, repeat the 'for ()' trick that is also used in
the implementation of the guarded scope helpers.
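
To make the mechanism concrete, a guarded section written as

  scoped_ksimd() {
          /* code using FP/SIMD */
  }

conceptually expands along these lines (simplified: the real label comes
from __UNIQUE_ID(), and scoped_guard() wraps the block in its own 'for ()'
that invokes kernel_neon_begin()/kernel_neon_end() on the guard's pointer):

  for (struct user_fpsimd_state __uninitialized __st;
       true; ({ goto __label; }))
          if (0) {
  __label:        break;
          } else
                  scoped_guard(ksimd, &__st) {
                          /* code using FP/SIMD */
                  }

The 'for ()' statement declares __st so that it stays in scope for the
whole guarded block, the goto/break pair makes the loop run exactly once,
and __uninitialized keeps the buffer from being cleared on entry.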

Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Fixes: 4fa617cc68 ("arm64/fpsimd: Allocate kernel mode FP/SIMD buffers on the stack")
Link: https://lore.kernel.org/r/20251209054848.998878-2-ardb@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
2025-12-14 10:18:22 -08:00

/* SPDX-License-Identifier: GPL-2.0-only */
/*
 * Copyright (C) 2017 Linaro Ltd. <ard.biesheuvel@linaro.org>
 */

#ifndef __ASM_SIMD_H
#define __ASM_SIMD_H

#include <linux/cleanup.h>
#include <linux/compiler.h>
#include <linux/irqflags.h>
#include <linux/percpu.h>
#include <linux/preempt.h>
#include <linux/types.h>

#include <asm/neon.h>

#ifdef CONFIG_KERNEL_MODE_NEON

/*
 * may_use_simd - whether it is allowable at this time to issue SIMD
 *                instructions or access the SIMD register file
 *
 * Callers must not assume that the result remains true beyond the next
 * preempt_enable() or return from softirq context.
 */
static __must_check inline bool may_use_simd(void)
{
        /*
         * We must make sure that the SVE has been initialized properly
         * before using the SIMD in kernel.
         */
        return !WARN_ON(!system_capabilities_finalized()) &&
               system_supports_fpsimd() &&
               !in_hardirq() && !in_nmi();
}

#else /* ! CONFIG_KERNEL_MODE_NEON */

static __must_check inline bool may_use_simd(void) {
        return false;
}

#endif /* ! CONFIG_KERNEL_MODE_NEON */

DEFINE_LOCK_GUARD_1(ksimd,
                    struct user_fpsimd_state,
                    kernel_neon_begin(_T->lock),
                    kernel_neon_end(_T->lock))

#define __scoped_ksimd(_label)                                          \
        for (struct user_fpsimd_state __uninitialized __st;             \
             true; ({ goto _label; }))                                  \
                if (0) {                                                \
_label:                 break;                                          \
                } else scoped_guard(ksimd, &__st)

#define scoped_ksimd()  __scoped_ksimd(__UNIQUE_ID(label))

#endif
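
For illustration, a caller might use this header roughly as follows; this
is hypothetical code, not part of the patch, and do_neon_version() /
do_scalar_version() are made-up names:

        if (may_use_simd()) {
                scoped_ksimd() {
                        /*
                         * kernel_neon_begin()/kernel_neon_end() wrap this
                         * block via the ksimd guard, and the uninitialized
                         * stack buffer declared by scoped_ksimd() is what
                         * gets passed to kernel_neon_begin().
                         */
                        do_neon_version(dst, src, len);
                }
        } else {
                do_scalar_version(dst, src, len);
        }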