Merge branches 'for-next/misc', 'for-next/kselftest', 'for-next/efi-preempt', 'for-next/assembler-macro', 'for-next/typos', 'for-next/sme-ptrace-disable', 'for-next/local-tlbi-page-reused', 'for-next/mpam', 'for-next/acpi' and 'for-next/documentation', remote-tracking branch 'arm64/for-next/perf' into for-next/core

* arm64/for-next/perf:
  perf: arm_spe: Add support for filtering on data source
  perf: Add perf_event_attr::config4
  perf/imx_ddr: Add support for PMU in DB (system interconnects)
  perf/imx_ddr: Get and enable optional clks
  perf/imx_ddr: Move ida_alloc() from ddr_perf_init() to ddr_perf_probe()
  dt-bindings: perf: fsl-imx-ddr: Add compatible string for i.MX8QM, i.MX8QXP and i.MX8DXL
  arch_topology: Provide a stub topology_core_has_smt() for !CONFIG_GENERIC_ARCH_TOPOLOGY
  perf/arm-ni: Fix and optimise register offset calculation
  perf: arm_pmuv3: Add new Cortex and C1 CPU PMUs
  perf: arm_cspmu: fix error handling in arm_cspmu_impl_unregister()
  perf/arm-ni: Add NoC S3 support
  perf/arm_cspmu: nvidia: Add pmevfiltr2 support
  perf/arm_cspmu: nvidia: Add revision id matching
  perf/arm_cspmu: Add pmpidr support
  perf/arm_cspmu: Add callback to reset filter config
  perf: arm_pmuv3: Don't use PMCCNTR_EL0 on SMT cores

* for-next/misc:
  : Miscellaneous patches
  arm64: atomics: lse: Remove unused parameters from ATOMIC_FETCH_OP_AND macros
  arm64: remove duplicate ARCH_HAS_MEM_ENCRYPT
  arm64: mm: use untagged address to calculate page index
  arm64: mm: make linear mapping permission update more robust for partial range
  arm64/mm: Elide TLB flush in certain pte protection transitions
  arm64/mm: Rename try_pgd_pgtable_alloc_init_mm
  arm64/mm: Allow __create_pgd_mapping() to propagate pgtable_alloc() errors
  arm64: add unlikely hint to MTE async fault check in el0_svc_common
  arm64: acpi: add newline to deferred APEI warning
  arm64: entry: Clean out some indirection
  arm64/mm: Ensure PGD_SIZE is aligned to 64 bytes when PA_BITS = 52
  arm64/mm: Drop cpu_set_[default|idmap]_tcr_t0sz()
  arm64: remove unused ARCH_PFN_OFFSET
  arm64: use SOFTIRQ_ON_OWN_STACK for enabling softirq stack
  arm64: Remove assertion on CONFIG_VMAP_STACK

* for-next/kselftest:
  : arm64 kselftest patches
  kselftest/arm64: Align zt-test register dumps

* for-next/efi-preempt:
  : arm64: Make EFI calls preemptible
  arm64/efi: Call EFI runtime services without disabling preemption
  arm64/efi: Move uaccess en/disable out of efi_set_pgd()
  arm64/efi: Drop efi_rt_lock spinlock from EFI arch wrapper
  arm64/fpsimd: Permit kernel mode NEON with IRQs off
  arm64/fpsimd: Don't warn when EFI execution context is preemptible
  efi/runtime-wrappers: Keep track of the efi_runtime_lock owner
  efi: Add missing static initializer for efi_mm::cpus_allowed_lock

* for-next/assembler-macro:
  : arm64: Replace __ASSEMBLY__ with __ASSEMBLER__ in headers
  arm64: Replace __ASSEMBLY__ with __ASSEMBLER__ in non-uapi headers
  arm64: Replace __ASSEMBLY__ with __ASSEMBLER__ in uapi headers

* for-next/typos:
  : Random typo/spelling fixes
  arm64: Fix double word in comments
  arm64: Fix typos and spelling errors in comments

* for-next/sme-ptrace-disable:
  : Support disabling streaming mode via ptrace on SME only systems
  kselftest/arm64: Cover disabling streaming mode without SVE in fp-ptrace
  kselftest/arm64: Test NT_ARM_SVE FPSIMD format writes on non-SVE systems
  arm64/sme: Support disabling streaming mode via ptrace on SME only systems

* for-next/local-tlbi-page-reused:
  : arm64, mm: avoid TLBI broadcast if page reused in write fault
  arm64, tlbflush: don't TLBI broadcast if page reused in write fault
  mm: add spurious fault fixing support for huge pmd

* for-next/mpam: (34 commits)
  : Basic Arm MPAM driver (more to follow)
  MAINTAINERS: new entry for MPAM Driver
  arm_mpam: Add kunit tests for props_mismatch()
  arm_mpam: Add kunit test for bitmap reset
  arm_mpam: Add helper to reset saved mbwu state
  arm_mpam: Use long MBWU counters if supported
  arm_mpam: Probe for long/lwd mbwu counters
  arm_mpam: Consider overflow in bandwidth counter state
  arm_mpam: Track bandwidth counter state for power management
  arm_mpam: Add mpam_msmon_read() to read monitor value
  arm_mpam: Add helpers to allocate monitors
  arm_mpam: Probe and reset the rest of the features
  arm_mpam: Allow configuration to be applied and restored during cpu online
  arm_mpam: Use a static key to indicate when mpam is enabled
  arm_mpam: Register and enable IRQs
  arm_mpam: Extend reset logic to allow devices to be reset any time
  arm_mpam: Add a helper to touch an MSC from any CPU
  arm_mpam: Reset MSC controls from cpuhp callbacks
  arm_mpam: Merge supported features during mpam_enable() into mpam_class
  arm_mpam: Probe the hardware features resctrl supports
  arm_mpam: Add helpers for managing the locking around the mon_sel registers
  ...

* for-next/acpi:
  : arm64 acpi updates
  ACPI: GTDT: Get rid of acpi_arch_timer_mem_init()

* for-next/documentation:
  : arm64 Documentation updates
  Documentation/arm64: Fix the typo of register names
131 changed files with 5298 additions and 442 deletions


@@ -391,13 +391,13 @@ Before jumping into the kernel, the following conditions must be met:
- SMCR_EL2.LEN must be initialised to the same value for all CPUs the
kernel will execute on.
- HWFGRTR_EL2.nTPIDR2_EL0 (bit 55) must be initialised to 0b01.
- HFGRTR_EL2.nTPIDR2_EL0 (bit 55) must be initialised to 0b01.
- HWFGWTR_EL2.nTPIDR2_EL0 (bit 55) must be initialised to 0b01.
- HFGWTR_EL2.nTPIDR2_EL0 (bit 55) must be initialised to 0b01.
- HWFGRTR_EL2.nSMPRI_EL1 (bit 54) must be initialised to 0b01.
- HFGRTR_EL2.nSMPRI_EL1 (bit 54) must be initialised to 0b01.
- HWFGWTR_EL2.nSMPRI_EL1 (bit 54) must be initialised to 0b01.
- HFGWTR_EL2.nSMPRI_EL1 (bit 54) must be initialised to 0b01.
For CPUs with the Scalable Matrix Extension FA64 feature (FEAT_SME_FA64):


@@ -402,6 +402,11 @@ The regset data starts with struct user_sve_header, containing:
streaming mode and any SETREGSET of NT_ARM_SSVE will enter streaming mode
if the target was not in streaming mode.
* On systems that do not support SVE it is permitted to use SETREGSET to
write SVE_PT_REGS_FPSIMD formatted data via NT_ARM_SVE, in this case the
vector length should be specified as 0. This allows streaming mode to be
disabled on systems with SME but not SVE.
* If any register data is provided along with SVE_PT_VL_ONEXEC then the
registers data will be interpreted with the current vector length, not
the vector length configured for use on exec.
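
A minimal userspace sketch of the write described above, assuming an
SME-only target stopped under ptrace. The helper name is hypothetical;
the structures and constants come from the uapi <asm/ptrace.h> and
<linux/elf.h> headers, and error handling is elided:

    #include <string.h>
    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <sys/uio.h>
    #include <linux/elf.h>
    #include <asm/ptrace.h>

    /* Take the tracee out of streaming mode on a system without SVE. */
    static long exit_streaming_mode(pid_t pid)
    {
            struct {
                    struct user_sve_header header;
                    struct user_fpsimd_state fpsimd; /* at SVE_PT_FPSIMD_OFFSET */
            } buf;
            struct iovec iov = { .iov_base = &buf, .iov_len = sizeof(buf) };

            memset(&buf, 0, sizeof(buf));
            buf.header.size = sizeof(buf);
            buf.header.vl = 0;                      /* no SVE: VL must be 0 */
            buf.header.flags = SVE_PT_REGS_FPSIMD;  /* FPSIMD-format payload */

            return ptrace(PTRACE_SETREGSET, pid, NT_ARM_SVE, &iov);
    }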


@@ -17448,6 +17448,16 @@ S: Maintained
F: Documentation/devicetree/bindings/leds/backlight/mps,mp3309c.yaml
F: drivers/video/backlight/mp3309c.c
MPAM DRIVER
M: James Morse <james.morse@arm.com>
M: Ben Horgan <ben.horgan@arm.com>
R: Reinette Chatre <reinette.chatre@intel.com>
R: Fenghua Yu <fenghuay@nvidia.com>
S: Maintained
F: drivers/resctrl/mpam_*
F: drivers/resctrl/test_mpam_*
F: include/linux/arm_mpam.h
MPS MP2869 DRIVER
M: Wensheng Wang <wenswang@yeah.net>
L: linux-hwmon@vger.kernel.org


@@ -47,7 +47,6 @@ config ARM64
select ARCH_HAS_SETUP_DMA_OPS
select ARCH_HAS_SET_DIRECT_MAP
select ARCH_HAS_SET_MEMORY
select ARCH_HAS_MEM_ENCRYPT
select ARCH_HAS_FORCE_DMA_UNENCRYPTED
select ARCH_STACKWALK
select ARCH_HAS_STRICT_KERNEL_RWX
@@ -2023,6 +2022,31 @@ config ARM64_TLB_RANGE
ARMv8.4-TLBI provides TLBI invalidation instruction that apply to a
range of input addresses.
config ARM64_MPAM
bool "Enable support for MPAM"
select ARM64_MPAM_DRIVER if EXPERT # does nothing yet
select ACPI_MPAM if ACPI
help
Memory System Resource Partitioning and Monitoring (MPAM) is an
optional extension to the Arm architecture that allows each
transaction issued to the memory system to be labelled with a
Partition identifier (PARTID) and Performance Monitoring Group
identifier (PMG).
Memory system components, such as the caches, can be configured with
policies to control how much of various physical resources (such as
memory bandwidth or cache memory) the transactions labelled with each
PARTID can consume. Depending on the capabilities of the hardware,
the PARTID and PMG can also be used as filtering criteria to measure
the memory system resource consumption of different parts of a
workload.
Use of this extension requires CPU support, support in the
Memory System Components (MSC), and a description from firmware
of where the MSCs are in the address space.
MPAM is exposed to user-space via the resctrl pseudo filesystem.
endmenu # "ARMv8.4 architectural features"
menu "ARMv8.5 architectural features"


@@ -19,7 +19,7 @@
#error "cpucaps have overflown ARM64_CB_BIT"
#endif
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/stringify.h>
@@ -207,7 +207,7 @@ alternative_endif
#define _ALTERNATIVE_CFG(insn1, insn2, cap, cfg, ...) \
alternative_insn insn1, insn2, cap, IS_ENABLED(cfg)
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
/*
* Usage: asm(ALTERNATIVE(oldinstr, newinstr, cpucap));
@@ -219,7 +219,7 @@ alternative_endif
#define ALTERNATIVE(oldinstr, newinstr, ...) \
_ALTERNATIVE_CFG(oldinstr, newinstr, __VA_ARGS__, 1)
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/types.h>
@@ -263,6 +263,6 @@ alternative_has_cap_unlikely(const unsigned long cpucap)
return true;
}
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif /* __ASM_ALTERNATIVE_MACROS_H */


@@ -4,7 +4,7 @@
#include <asm/alternative-macros.h>
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/init.h>
#include <linux/types.h>
@@ -34,5 +34,5 @@ static inline void apply_alternatives_module(void *start, size_t length) { }
void alt_cb_patch_nops(struct alt_instr *alt, __le32 *origptr,
__le32 *updptr, int nr_inst);
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif /* __ASM_ALTERNATIVE_H */


@@ -9,7 +9,7 @@
#include <asm/sysreg.h>
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/irqchip/arm-gic-common.h>
#include <linux/stringify.h>
@@ -188,5 +188,5 @@ static inline bool gic_has_relaxed_pmr_sync(void)
return cpus_have_cap(ARM64_HAS_GIC_PRIO_RELAXED_SYNC);
}
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif /* __ASM_ARCH_GICV3_H */


@@ -27,7 +27,7 @@
/* Data fields for EX_TYPE_UACCESS_CPY */
#define EX_DATA_UACCESS_WRITE BIT(0)
#ifdef __ASSEMBLY__
#ifdef __ASSEMBLER__
#define __ASM_EXTABLE_RAW(insn, fixup, type, data) \
.pushsection __ex_table, "a"; \
@@ -77,7 +77,7 @@
__ASM_EXTABLE_RAW(\insn, \fixup, EX_TYPE_UACCESS_CPY, \uaccess_is_write)
.endm
#else /* __ASSEMBLY__ */
#else /* __ASSEMBLER__ */
#include <linux/stringify.h>
@@ -132,6 +132,6 @@
EX_DATA_REG(ADDR, addr) \
")")
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif /* __ASM_ASM_EXTABLE_H */


@@ -5,7 +5,7 @@
* Copyright (C) 1996-2000 Russell King
* Copyright (C) 2012 ARM Ltd.
*/
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#error "Only include this from assembly code"
#endif
@@ -371,7 +371,7 @@ alternative_endif
* [start, end) with dcache line size explicitly provided.
*
* op: operation passed to dc instruction
* domain: domain used in dsb instruciton
* domain: domain used in dsb instruction
* start: starting virtual address of the region
* end: end virtual address of the region
* linesz: dcache line size
@@ -412,7 +412,7 @@ alternative_endif
* [start, end)
*
* op: operation passed to dc instruction
* domain: domain used in dsb instruciton
* domain: domain used in dsb instruction
* start: starting virtual address of the region
* end: end virtual address of the region
* fixup: optional label to branch to on user fault


@@ -103,17 +103,17 @@ static __always_inline void __lse_atomic_and(int i, atomic_t *v)
return __lse_atomic_andnot(~i, v);
}
#define ATOMIC_FETCH_OP_AND(name, mb, cl...) \
#define ATOMIC_FETCH_OP_AND(name) \
static __always_inline int \
__lse_atomic_fetch_and##name(int i, atomic_t *v) \
{ \
return __lse_atomic_fetch_andnot##name(~i, v); \
}
ATOMIC_FETCH_OP_AND(_relaxed, )
ATOMIC_FETCH_OP_AND(_acquire, a, "memory")
ATOMIC_FETCH_OP_AND(_release, l, "memory")
ATOMIC_FETCH_OP_AND( , al, "memory")
ATOMIC_FETCH_OP_AND(_relaxed)
ATOMIC_FETCH_OP_AND(_acquire)
ATOMIC_FETCH_OP_AND(_release)
ATOMIC_FETCH_OP_AND( )
#undef ATOMIC_FETCH_OP_AND
@@ -210,17 +210,17 @@ static __always_inline void __lse_atomic64_and(s64 i, atomic64_t *v)
return __lse_atomic64_andnot(~i, v);
}
#define ATOMIC64_FETCH_OP_AND(name, mb, cl...) \
#define ATOMIC64_FETCH_OP_AND(name) \
static __always_inline long \
__lse_atomic64_fetch_and##name(s64 i, atomic64_t *v) \
{ \
return __lse_atomic64_fetch_andnot##name(~i, v); \
}
ATOMIC64_FETCH_OP_AND(_relaxed, )
ATOMIC64_FETCH_OP_AND(_acquire, a, "memory")
ATOMIC64_FETCH_OP_AND(_release, l, "memory")
ATOMIC64_FETCH_OP_AND( , al, "memory")
ATOMIC64_FETCH_OP_AND(_relaxed)
ATOMIC64_FETCH_OP_AND(_acquire)
ATOMIC64_FETCH_OP_AND(_release)
ATOMIC64_FETCH_OP_AND( )
#undef ATOMIC64_FETCH_OP_AND
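
The dropped mb/cl arguments were never referenced: each generated wrapper
simply complements its operand and defers to the corresponding fetch_andnot
implementation, which already carries the memory-ordering semantics. For
illustration, ATOMIC_FETCH_OP_AND(_acquire) now expands to:

    static __always_inline int
    __lse_atomic_fetch_and_acquire(int i, atomic_t *v)
    {
            return __lse_atomic_fetch_andnot_acquire(~i, v);
    }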


@@ -7,7 +7,7 @@
#ifndef __ASM_BARRIER_H
#define __ASM_BARRIER_H
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/kasan-checks.h>
@@ -221,6 +221,6 @@ do { \
#include <asm-generic/barrier.h>
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif /* __ASM_BARRIER_H */


@@ -35,7 +35,7 @@
#define ARCH_DMA_MINALIGN (128)
#define ARCH_KMALLOC_MINALIGN (8)
#if !defined(__ASSEMBLY__) && !defined(BUILD_VDSO)
#if !defined(__ASSEMBLER__) && !defined(BUILD_VDSO)
#include <linux/bitops.h>
#include <linux/kasan-enabled.h>
@@ -135,6 +135,6 @@ static inline u32 __attribute_const__ read_cpuid_effective_cachetype(void)
return ctr;
}
#endif /* !defined(__ASSEMBLY__) && !defined(BUILD_VDSO) */
#endif /* !defined(__ASSEMBLER__) && !defined(BUILD_VDSO) */
#endif


@@ -5,7 +5,7 @@
#include <asm/cpucap-defs.h>
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/types.h>
/*
* Check whether a cpucap is possible at compiletime.
@@ -77,6 +77,6 @@ cpucap_is_possible(const unsigned int cap)
return true;
}
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif /* __ASM_CPUCAPS_H */


@@ -19,7 +19,7 @@
#define ARM64_SW_FEATURE_OVERRIDE_HVHE 4
#define ARM64_SW_FEATURE_OVERRIDE_RODATA_OFF 8
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/bug.h>
#include <linux/jump_label.h>
@@ -199,7 +199,7 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
* registers (e.g, SCTLR, TCR etc.) or patching the kernel via
* alternatives. The kernel patching is batched and performed at later
* point. The actions are always initiated only after the capability
* is finalised. This is usally denoted by "enabling" the capability.
* is finalised. This is usually denoted by "enabling" the capability.
* The actions are initiated as follows :
* a) Action is triggered on all online CPUs, after the capability is
* finalised, invoked within the stop_machine() context from
@@ -251,7 +251,7 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
#define ARM64_CPUCAP_SCOPE_LOCAL_CPU ((u16)BIT(0))
#define ARM64_CPUCAP_SCOPE_SYSTEM ((u16)BIT(1))
/*
* The capabilitiy is detected on the Boot CPU and is used by kernel
* The capability is detected on the Boot CPU and is used by kernel
* during early boot. i.e, the capability should be "detected" and
* "enabled" as early as possibly on all booting CPUs.
*/
@@ -1078,6 +1078,6 @@ static inline bool cpu_has_lpa2(void)
#endif
}
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif


@@ -249,7 +249,7 @@
#define MIDR_FUJITSU_ERRATUM_010001_MASK (~MIDR_CPU_VAR_REV(1, 0))
#define TCR_CLEAR_FUJITSU_ERRATUM_010001 (TCR_NFD1 | TCR_NFD0)
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <asm/sysreg.h>
@@ -328,6 +328,6 @@ static inline u32 __attribute_const__ read_cpuid_cachetype(void)
{
return read_cpuid(CTR_EL0);
}
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif


@@ -4,7 +4,7 @@
#include <linux/compiler.h>
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
struct task_struct;
@@ -23,7 +23,7 @@ static __always_inline struct task_struct *get_current(void)
#define current get_current()
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif /* __ASM_CURRENT_H */


@@ -48,7 +48,7 @@
#define AARCH32_BREAK_THUMB2_LO 0xf7f0
#define AARCH32_BREAK_THUMB2_HI 0xa000
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
struct task_struct;
#define DBG_ARCH_ID_RESERVED 0 /* In case of ptrace ABI updates. */
@@ -88,5 +88,5 @@ static inline bool try_step_suspended_breakpoints(struct pt_regs *regs)
bool try_handle_aarch32_break(struct pt_regs *regs);
#endif /* __ASSEMBLY */
#endif /* __ASSEMBLER__ */
#endif /* __ASM_DEBUG_MONITORS_H */


@@ -126,21 +126,14 @@ static inline void efi_set_pgd(struct mm_struct *mm)
if (mm != current->active_mm) {
/*
* Update the current thread's saved ttbr0 since it is
* restored as part of a return from exception. Enable
* access to the valid TTBR0_EL1 and invoke the errata
* workaround directly since there is no return from
* exception when invoking the EFI run-time services.
* restored as part of a return from exception.
*/
update_saved_ttbr0(current, mm);
uaccess_ttbr0_enable();
post_ttbr_update_workaround();
} else {
/*
* Defer the switch to the current thread's TTBR0_EL1
* until uaccess_enable(). Restore the current
* thread's saved ttbr0 corresponding to its active_mm
* Restore the current thread's saved ttbr0
* corresponding to its active_mm
*/
uaccess_ttbr0_disable();
update_saved_ttbr0(current, current->active_mm);
}
}


@@ -7,7 +7,7 @@
#ifndef __ARM_KVM_INIT_H__
#define __ARM_KVM_INIT_H__
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#error Assembly-only header
#endif
@@ -28,7 +28,7 @@
* Fruity CPUs seem to have HCR_EL2.E2H set to RAO/WI, but
* don't advertise it (they predate this relaxation).
*
* Initalize HCR_EL2.E2H so that later code can rely upon HCR_EL2.E2H
* Initialize HCR_EL2.E2H so that later code can rely upon HCR_EL2.E2H
* indicating whether the CPU is running in E2H mode.
*/
mrs_s x1, SYS_ID_AA64MMFR4_EL1


@@ -133,7 +133,7 @@
#define ELF_ET_DYN_BASE (2 * DEFAULT_MAP_WINDOW_64 / 3)
#endif /* CONFIG_ARM64_FORCE_52BIT */
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <uapi/linux/elf.h>
#include <linux/bug.h>
@@ -293,6 +293,6 @@ static inline int arch_check_elf(void *ehdr, bool has_interp,
return 0;
}
#endif /* !__ASSEMBLY__ */
#endif /* !__ASSEMBLER__ */
#endif


@@ -431,7 +431,7 @@
#define ESR_ELx_IT_GCSPOPCX 6
#define ESR_ELx_IT_GCSPOPX 7
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <asm/types.h>
static inline unsigned long esr_brk_comment(unsigned long esr)
@@ -534,6 +534,6 @@ static inline bool esr_iss_is_eretab(unsigned long esr)
}
const char *esr_get_class_string(unsigned long esr);
#endif /* __ASSEMBLY */
#endif /* __ASSEMBLER__ */
#endif /* __ASM_ESR_H */


@@ -15,7 +15,7 @@
#ifndef _ASM_ARM64_FIXMAP_H
#define _ASM_ARM64_FIXMAP_H
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/kernel.h>
#include <linux/math.h>
#include <linux/sizes.h>
@@ -117,5 +117,5 @@ extern void __set_fixmap(enum fixed_addresses idx, phys_addr_t phys, pgprot_t pr
#include <asm-generic/fixmap.h>
#endif /* !__ASSEMBLY__ */
#endif /* !__ASSEMBLER__ */
#endif /* _ASM_ARM64_FIXMAP_H */


@@ -12,7 +12,7 @@
#include <asm/sigcontext.h>
#include <asm/sysreg.h>
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/bitmap.h>
#include <linux/build_bug.h>


@@ -37,7 +37,7 @@
*/
#define ARCH_FTRACE_SHIFT_STACK_TRACER 1
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/compat.h>
extern void _mcount(unsigned long);
@@ -217,9 +217,9 @@ static inline bool arch_syscall_match_sym_name(const char *sym,
*/
return !strcmp(sym + 8, name);
}
#endif /* ifndef __ASSEMBLY__ */
#endif /* ifndef __ASSEMBLER__ */
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
void prepare_ftrace_return(unsigned long self_addr, unsigned long *parent,


@@ -2,7 +2,7 @@
#ifndef __ASM_GPR_NUM_H
#define __ASM_GPR_NUM_H
#ifdef __ASSEMBLY__
#ifdef __ASSEMBLER__
.irp num,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30
.equ .L__gpr_num_x\num, \num
@@ -11,7 +11,7 @@
.equ .L__gpr_num_xzr, 31
.equ .L__gpr_num_wzr, 31
#else /* __ASSEMBLY__ */
#else /* __ASSEMBLER__ */
#define __DEFINE_ASM_GPR_NUMS \
" .irp num,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30\n" \
@@ -21,6 +21,6 @@
" .equ .L__gpr_num_xzr, 31\n" \
" .equ .L__gpr_num_wzr, 31\n"
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif /* __ASM_GPR_NUM_H */


@@ -46,7 +46,7 @@
#define COMPAT_HWCAP2_SB (1 << 5)
#define COMPAT_HWCAP2_SSBS (1 << 6)
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/log2.h>
/*


@@ -20,7 +20,7 @@
#define ARM64_IMAGE_FLAG_PAGE_SIZE_64K 3
#define ARM64_IMAGE_FLAG_PHYS_BASE 1
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#define arm64_image_flag_field(flags, field) \
(((flags) >> field##_SHIFT) & field##_MASK)
@@ -54,6 +54,6 @@ struct arm64_image_header {
__le32 res5;
};
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif /* __ASM_IMAGE_H */


@@ -12,7 +12,7 @@
#include <asm/insn-def.h>
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
enum aarch64_insn_hint_cr_op {
AARCH64_INSN_HINT_NOP = 0x0 << 5,
@@ -730,6 +730,6 @@ u32 aarch32_insn_mcr_extract_crm(u32 insn);
typedef bool (pstate_check_t)(unsigned long);
extern pstate_check_t * const aarch32_opcode_cond_checks[16];
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif /* __ASM_INSN_H */


@@ -8,7 +8,7 @@
#ifndef __ASM_JUMP_LABEL_H
#define __ASM_JUMP_LABEL_H
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/types.h>
#include <asm/insn.h>
@@ -58,5 +58,5 @@ static __always_inline bool arch_static_branch_jump(struct static_key * const ke
return true;
}
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif /* __ASM_JUMP_LABEL_H */


@@ -2,7 +2,7 @@
#ifndef __ASM_KASAN_H
#define __ASM_KASAN_H
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/linkage.h>
#include <asm/memory.h>


@@ -25,7 +25,7 @@
#define KEXEC_ARCH KEXEC_ARCH_AARCH64
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
/**
* crash_setup_regs() - save registers for the panic kernel
@@ -130,6 +130,6 @@ extern int load_other_segments(struct kimage *image,
char *cmdline);
#endif
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif


@@ -14,7 +14,7 @@
#include <linux/ptrace.h>
#include <asm/debug-monitors.h>
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
static inline void arch_kgdb_breakpoint(void)
{
@@ -36,7 +36,7 @@ static inline int kgdb_single_step_handler(struct pt_regs *regs,
}
#endif
#endif /* !__ASSEMBLY__ */
#endif /* !__ASSEMBLER__ */
/*
* gdb remote procotol (well most versions of it) expects the following


@@ -46,7 +46,7 @@
#define __KVM_HOST_SMCCC_FUNC___kvm_hyp_init 0
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/mm.h>
@@ -303,7 +303,7 @@ void kvm_compute_final_ctr_el0(struct alt_instr *alt,
void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr, u64 elr_virt,
u64 elr_phys, u64 par, uintptr_t vcpu, u64 far, u64 hpfar);
#else /* __ASSEMBLY__ */
#else /* __ASSEMBLER__ */
.macro get_host_ctxt reg, tmp
adr_this_cpu \reg, kvm_host_data, \tmp


@@ -49,7 +49,7 @@
* mappings, and none of this applies in that case.
*/
#ifdef __ASSEMBLY__
#ifdef __ASSEMBLER__
#include <asm/alternative.h>
@@ -396,5 +396,5 @@ void kvm_s2_ptdump_create_debugfs(struct kvm *kvm);
static inline void kvm_s2_ptdump_create_debugfs(struct kvm *kvm) {}
#endif /* CONFIG_PTDUMP_STAGE2_DEBUGFS */
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif /* __ARM64_KVM_MMU_H__ */


@@ -5,7 +5,7 @@
#ifndef __ASM_KVM_MTE_H
#define __ASM_KVM_MTE_H
#ifdef __ASSEMBLY__
#ifdef __ASSEMBLER__
#include <asm/sysreg.h>
@@ -62,5 +62,5 @@ alternative_else_nop_endif
.endm
#endif /* CONFIG_ARM64_MTE */
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif /* __ASM_KVM_MTE_H */


@@ -8,7 +8,7 @@
#ifndef __ASM_KVM_PTRAUTH_H
#define __ASM_KVM_PTRAUTH_H
#ifdef __ASSEMBLY__
#ifdef __ASSEMBLER__
#include <asm/sysreg.h>
@@ -100,7 +100,7 @@ alternative_else_nop_endif
.endm
#endif /* CONFIG_ARM64_PTR_AUTH */
#else /* !__ASSEMBLY */
#else /* !__ASSEMBLER__ */
#define __ptrauth_save_key(ctxt, key) \
do { \
@@ -120,5 +120,5 @@ alternative_else_nop_endif
__ptrauth_save_key(ctxt, APGA); \
} while(0)
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif /* __ASM_KVM_PTRAUTH_H */


@@ -1,7 +1,7 @@
#ifndef __ASM_LINKAGE_H
#define __ASM_LINKAGE_H
#ifdef __ASSEMBLY__
#ifdef __ASSEMBLER__
#include <asm/assembler.h>
#endif


@@ -207,7 +207,7 @@
*/
#define TRAMP_SWAPPER_OFFSET (2 * PAGE_SIZE)
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/bitops.h>
#include <linux/compiler.h>
@@ -392,7 +392,6 @@ static inline unsigned long virt_to_pfn(const void *kaddr)
* virt_to_page(x) convert a _valid_ virtual address to struct page *
* virt_addr_valid(x) indicates whether a virtual address is valid
*/
#define ARCH_PFN_OFFSET ((unsigned long)PHYS_PFN_OFFSET)
#if defined(CONFIG_DEBUG_VIRTUAL)
#define page_to_virt(x) ({ \
@@ -422,7 +421,7 @@ static inline unsigned long virt_to_pfn(const void *kaddr)
})
void dump_mem_limit(void);
#endif /* !ASSEMBLY */
#endif /* !__ASSEMBLER__ */
/*
* Given that the GIC architecture permits ITS implementations that can only be


@@ -12,7 +12,7 @@
#define USER_ASID_FLAG (UL(1) << USER_ASID_BIT)
#define TTBR_ASID_MASK (UL(0xffff) << 48)
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/refcount.h>
#include <asm/cpufeature.h>
@@ -112,5 +112,5 @@ void kpti_install_ng_mappings(void);
static inline void kpti_install_ng_mappings(void) {}
#endif
#endif /* !__ASSEMBLY__ */
#endif /* !__ASSEMBLER__ */
#endif


@@ -8,7 +8,7 @@
#ifndef __ASM_MMU_CONTEXT_H
#define __ASM_MMU_CONTEXT_H
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/compiler.h>
#include <linux/sched.h>
@@ -61,11 +61,6 @@ static inline void cpu_switch_mm(pgd_t *pgd, struct mm_struct *mm)
cpu_do_switch_mm(virt_to_phys(pgd),mm);
}
/*
* TCR.T0SZ value to use when the ID map is active.
*/
#define idmap_t0sz TCR_T0SZ(IDMAP_VA_BITS)
/*
* Ensure TCR.T0SZ is set to the provided value.
*/
@@ -82,9 +77,6 @@ static inline void __cpu_set_tcr_t0sz(unsigned long t0sz)
isb();
}
#define cpu_set_default_tcr_t0sz() __cpu_set_tcr_t0sz(TCR_T0SZ(vabits_actual))
#define cpu_set_idmap_tcr_t0sz() __cpu_set_tcr_t0sz(idmap_t0sz)
/*
* Remove the idmap from TTBR0_EL1 and install the pgd of the active mm.
*
@@ -103,7 +95,7 @@ static inline void cpu_uninstall_idmap(void)
cpu_set_reserved_ttbr0();
local_flush_tlb_all();
cpu_set_default_tcr_t0sz();
__cpu_set_tcr_t0sz(TCR_T0SZ(vabits_actual));
if (mm != &init_mm && !system_uses_ttbr0_pan())
cpu_switch_mm(mm->pgd, mm);
@@ -113,7 +105,7 @@ static inline void cpu_install_idmap(void)
{
cpu_set_reserved_ttbr0();
local_flush_tlb_all();
cpu_set_idmap_tcr_t0sz();
__cpu_set_tcr_t0sz(TCR_T0SZ(IDMAP_VA_BITS));
cpu_switch_mm(lm_alias(idmap_pg_dir), &init_mm);
}
@@ -330,6 +322,6 @@ static inline void deactivate_mm(struct task_struct *tsk,
#include <asm-generic/mmu_context.h>
#endif /* !__ASSEMBLY__ */
#endif /* !__ASSEMBLER__ */
#endif /* !__ASM_MMU_CONTEXT_H */


@@ -9,7 +9,7 @@
#include <asm/cputype.h>
#include <asm/mte-def.h>
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/types.h>
@@ -259,6 +259,6 @@ static inline int mte_enable_kernel_store_only(void)
#endif /* CONFIG_ARM64_MTE */
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif /* __ASM_MTE_KASAN_H */


@@ -8,7 +8,7 @@
#include <asm/compiler.h>
#include <asm/mte-def.h>
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/bitfield.h>
#include <linux/kasan-enabled.h>
@@ -282,5 +282,5 @@ static inline void mte_check_tfsr_exit(void)
}
#endif /* CONFIG_KASAN_HW_TAGS */
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif /* __ASM_MTE_H */


@@ -10,7 +10,7 @@
#include <asm/page-def.h>
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/personality.h> /* for READ_IMPLIES_EXEC */
#include <linux/types.h> /* for gfp_t */
@@ -45,7 +45,7 @@ int pfn_is_map_memory(unsigned long pfn);
#include <asm/memory.h>
#endif /* !__ASSEMBLY__ */
#endif /* !__ASSEMBLER__ */
#define VM_DATA_DEFAULT_FLAGS (VM_DATA_FLAGS_TSK_EXEC | VM_MTE_ALLOWED)


@@ -62,7 +62,7 @@
#define _PAGE_READONLY_EXEC (_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN)
#define _PAGE_EXECONLY (_PAGE_DEFAULT | PTE_RDONLY | PTE_NG | PTE_PXN)
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <asm/cpufeature.h>
#include <asm/pgtable-types.h>
@@ -127,7 +127,7 @@ static inline bool __pure lpa2_is_enabled(void)
#define PAGE_READONLY_EXEC __pgprot(_PAGE_READONLY_EXEC)
#define PAGE_EXECONLY __pgprot(_PAGE_EXECONLY)
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#define pte_pi_index(pte) ( \
((pte & BIT(PTE_PI_IDX_3)) >> (PTE_PI_IDX_3 - 3)) | \


@@ -30,7 +30,7 @@
#define vmemmap ((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <asm/cmpxchg.h>
#include <asm/fixmap.h>
@@ -130,12 +130,16 @@ static inline void arch_leave_lazy_mmu_mode(void)
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
/*
* Outside of a few very special situations (e.g. hibernation), we always
* use broadcast TLB invalidation instructions, therefore a spurious page
* fault on one CPU which has been handled concurrently by another CPU
* does not need to perform additional invalidation.
* We use local TLB invalidation instruction when reusing page in
* write protection fault handler to avoid TLBI broadcast in the hot
* path. This will cause spurious page faults if stale read-only TLB
* entries exist.
*/
#define flush_tlb_fix_spurious_fault(vma, address, ptep) do { } while (0)
#define flush_tlb_fix_spurious_fault(vma, address, ptep) \
local_flush_tlb_page_nonotify(vma, address)
#define flush_tlb_fix_spurious_fault_pmd(vma, address, pmdp) \
local_flush_tlb_page_nonotify(vma, address)
/*
* ZERO_PAGE is a global shared page that is always zero: used
@@ -432,7 +436,7 @@ bool pgattr_change_is_safe(pteval_t old, pteval_t new);
* 1 0 | 1 0 1
* 1 1 | 0 1 x
*
* When hardware DBM is not present, the sofware PTE_DIRTY bit is updated via
* When hardware DBM is not present, the software PTE_DIRTY bit is updated via
* the page fault mechanism. Checking the dirty status of a pte becomes:
*
* PTE_DIRTY || (PTE_WRITE && !PTE_RDONLY)
@@ -598,7 +602,7 @@ static inline int pte_protnone(pte_t pte)
/*
* pte_present_invalid() tells us that the pte is invalid from HW
* perspective but present from SW perspective, so the fields are to be
* interpretted as per the HW layout. The second 2 checks are the unique
* interpreted as per the HW layout. The second 2 checks are the unique
* encoding that we use for PROT_NONE. It is insufficient to only use
* the first check because we share the same encoding scheme with pmds
* which support pmd_mkinvalid(), so can be present-invalid without
@@ -1948,6 +1952,6 @@ static inline void clear_young_dirty_ptes(struct vm_area_struct *vma,
#endif /* CONFIG_ARM64_CONTPTE */
#endif /* !__ASSEMBLY__ */
#endif /* !__ASSEMBLER__ */
#endif /* __ASM_PGTABLE_H */
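
The stale read-only entries mentioned in the rewritten comment are repaired
by whichever CPU faults on them. A simplified sketch of the generic fault
path that invokes the now non-empty hooks (modelled on handle_pte_fault()
in mm/memory.c; not code from this series):

    if (!ptep_set_access_flags(vma, addr, ptep, entry, write)) {
            /*
             * Nothing to update, so the fault was spurious: a stale
             * read-only TLB entry on this CPU. Drop it with a local,
             * non-broadcast invalidation.
             */
            if (write)
                    flush_tlb_fix_spurious_fault(vma, addr, ptep);
    }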


@@ -9,7 +9,7 @@
#ifndef __ASM_PROCFNS_H
#define __ASM_PROCFNS_H
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <asm/page.h>
@@ -21,5 +21,5 @@ extern u64 cpu_do_resume(phys_addr_t ptr, u64 idmap_ttbr);
#include <asm/memory.h>
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif /* __ASM_PROCFNS_H */


@@ -25,7 +25,7 @@
#define MTE_CTRL_STORE_ONLY (1UL << 19)
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/build_bug.h>
#include <linux/cache.h>
@@ -437,5 +437,5 @@ int set_tsc_mode(unsigned int val);
#define GET_TSC_CTL(adr) get_tsc_mode((adr))
#define SET_TSC_CTL(val) set_tsc_mode((val))
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif /* __ASM_PROCESSOR_H */


@@ -94,7 +94,7 @@
*/
#define NO_SYSCALL (-1)
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/bug.h>
#include <linux/types.h>
@@ -361,5 +361,5 @@ static inline void procedure_link_pointer_set(struct pt_regs *regs,
extern unsigned long profile_pc(struct pt_regs *regs);
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif


@@ -122,7 +122,7 @@
*/
#define SMC_RSI_ATTESTATION_TOKEN_CONTINUE SMC_RSI_FID(0x195)
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
struct realm_config {
union {
@@ -142,7 +142,7 @@ struct realm_config {
*/
} __aligned(0x1000);
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
/*
* Read configuration for the current Realm.


@@ -5,7 +5,7 @@
#ifndef __ASM_RWONCE_H
#define __ASM_RWONCE_H
#if defined(CONFIG_LTO) && !defined(__ASSEMBLY__)
#if defined(CONFIG_LTO) && !defined(__ASSEMBLER__)
#include <linux/compiler_types.h>
#include <asm/alternative-macros.h>
@@ -62,7 +62,7 @@
})
#endif /* !BUILD_VDSO */
#endif /* CONFIG_LTO && !__ASSEMBLY__ */
#endif /* CONFIG_LTO && !__ASSEMBLER__ */
#include <asm-generic/rwonce.h>


@@ -2,7 +2,7 @@
#ifndef _ASM_SCS_H
#define _ASM_SCS_H
#ifdef __ASSEMBLY__
#ifdef __ASSEMBLER__
#include <asm/asm-offsets.h>
#include <asm/sysreg.h>
@@ -55,6 +55,6 @@ enum {
int __pi_scs_patch(const u8 eh_frame[], int size);
#endif /* __ASSEMBLY __ */
#endif /* __ASSEMBLER__ */
#endif /* _ASM_SCS_H */


@@ -9,7 +9,7 @@
#define SDEI_STACK_SIZE IRQ_STACK_SIZE
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/linkage.h>
#include <linux/preempt.h>
@@ -49,5 +49,5 @@ unsigned long do_sdei_event(struct pt_regs *regs,
unsigned long sdei_arch_get_entry_point(int conduit);
#define sdei_arch_get_entry_point(x) sdei_arch_get_entry_point(x)
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif /* __ASM_SDEI_H */


@@ -29,7 +29,7 @@ static __must_check inline bool may_use_simd(void)
*/
return !WARN_ON(!system_capabilities_finalized()) &&
system_supports_fpsimd() &&
!in_hardirq() && !irqs_disabled() && !in_nmi();
!in_hardirq() && !in_nmi();
}
#else /* ! CONFIG_KERNEL_MODE_NEON */
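
Dropping the irqs_disabled() check is what permits kernel-mode NEON with
IRQs off (see the fpsimd changes below). The caller contract is otherwise
unchanged; as a sketch, the usual pattern remains:

    if (may_use_simd()) {
            kernel_neon_begin();
            /* ... FPSIMD/NEON-accelerated work ... */
            kernel_neon_end();
    } else {
            /* ... scalar fallback ... */
    }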


@@ -23,7 +23,7 @@
#define CPU_STUCK_REASON_52_BIT_VA (UL(1) << CPU_STUCK_REASON_SHIFT)
#define CPU_STUCK_REASON_NO_GRAN (UL(2) << CPU_STUCK_REASON_SHIFT)
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/threads.h>
#include <linux/cpumask.h>
@@ -155,6 +155,6 @@ bool cpus_are_stuck_in_kernel(void);
extern void crash_smp_send_stop(void);
extern bool smp_crash_stop_failed(void);
#endif /* ifndef __ASSEMBLY__ */
#endif /* ifndef __ASSEMBLER__ */
#endif /* ifndef __ASM_SMP_H */


@@ -12,7 +12,7 @@
#define BP_HARDEN_EL2_SLOTS 4
#define __BP_HARDEN_HYP_VECS_SZ ((BP_HARDEN_EL2_SLOTS - 1) * SZ_2K)
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/smp.h>
#include <asm/percpu.h>
@@ -118,5 +118,5 @@ void spectre_bhb_patch_wa3(struct alt_instr *alt,
void spectre_bhb_patch_clearbhb(struct alt_instr *alt,
__le32 *origptr, __le32 *updptr, int nr_inst);
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif /* __ASM_SPECTRE_H */


@@ -25,7 +25,7 @@
#define FRAME_META_TYPE_FINAL 1
#define FRAME_META_TYPE_PT_REGS 2
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
/*
* A standard AAPCS64 frame record.
*/
@@ -43,6 +43,6 @@ struct frame_record_meta {
struct frame_record record;
u64 type;
};
#endif /* __ASSEMBLY */
#endif /* __ASSEMBLER__ */
#endif /* __ASM_STACKTRACE_FRAME_H */


@@ -23,7 +23,7 @@ struct cpu_suspend_ctx {
* __cpu_suspend_enter()'s caller, and populated by __cpu_suspend_enter().
* This data must survive until cpu_resume() is called.
*
* This struct desribes the size and the layout of the saved cpu state.
* This struct describes the size and the layout of the saved cpu state.
* The layout of the callee_saved_regs is defined by the implementation
* of __cpu_suspend_enter(), and cpu_resume(). This struct must be passed
* in by the caller as __cpu_suspend_enter()'s stack-frame is gone once it


@@ -52,7 +52,7 @@
#ifndef CONFIG_BROKEN_GAS_INST
#ifdef __ASSEMBLY__
#ifdef __ASSEMBLER__
// The space separator is omitted so that __emit_inst(x) can be parsed as
// either an assembler directive or an assembler macro argument.
#define __emit_inst(x) .inst(x)
@@ -71,11 +71,11 @@
(((x) >> 24) & 0x000000ff))
#endif /* CONFIG_CPU_BIG_ENDIAN */
#ifdef __ASSEMBLY__
#ifdef __ASSEMBLER__
#define __emit_inst(x) .long __INSTR_BSWAP(x)
#else /* __ASSEMBLY__ */
#else /* __ASSEMBLER__ */
#define __emit_inst(x) ".long " __stringify(__INSTR_BSWAP(x)) "\n\t"
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif /* CONFIG_BROKEN_GAS_INST */
@@ -1131,7 +1131,7 @@
#define ARM64_FEATURE_FIELD_BITS 4
#ifdef __ASSEMBLY__
#ifdef __ASSEMBLER__
.macro mrs_s, rt, sreg
__emit_inst(0xd5200000|(\sreg)|(.L__gpr_num_\rt))


@@ -7,7 +7,7 @@
#ifndef __ASM_SYSTEM_MISC_H
#define __ASM_SYSTEM_MISC_H
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/compiler.h>
#include <linux/linkage.h>
@@ -28,6 +28,6 @@ void arm64_notify_die(const char *str, struct pt_regs *regs,
struct mm_struct;
extern void __show_regs(struct pt_regs *);
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif /* __ASM_SYSTEM_MISC_H */


@@ -10,7 +10,7 @@
#include <linux/compiler.h>
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
struct task_struct;


@@ -8,7 +8,7 @@
#ifndef __ASM_TLBFLUSH_H
#define __ASM_TLBFLUSH_H
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/bitfield.h>
#include <linux/mm_types.h>
@@ -249,6 +249,19 @@ static inline unsigned long get_trans_granule(void)
* cannot be easily determined, the value TLBI_TTL_UNKNOWN will
* perform a non-hinted invalidation.
*
* local_flush_tlb_page(vma, addr)
* Local variant of flush_tlb_page(). Stale TLB entries may
* remain in remote CPUs.
*
* local_flush_tlb_page_nonotify(vma, addr)
* Same as local_flush_tlb_page() except MMU notifier will not be
* called.
*
* local_flush_tlb_contpte(vma, addr)
* Invalidate the virtual-address range
* '[addr, addr+CONT_PTE_SIZE)' mapped with contpte on local CPU
* for the user address space corresponding to 'vma->mm'. Stale
* TLB entries may remain in remote CPUs.
*
* Finally, take a look at asm/tlb.h to see how tlb_flush() is implemented
* on top of these routines, since that is our interface to the mmu_gather
@@ -282,6 +295,33 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL);
}
static inline void __local_flush_tlb_page_nonotify_nosync(struct mm_struct *mm,
unsigned long uaddr)
{
unsigned long addr;
dsb(nshst);
addr = __TLBI_VADDR(uaddr, ASID(mm));
__tlbi(vale1, addr);
__tlbi_user(vale1, addr);
}
static inline void local_flush_tlb_page_nonotify(struct vm_area_struct *vma,
unsigned long uaddr)
{
__local_flush_tlb_page_nonotify_nosync(vma->vm_mm, uaddr);
dsb(nsh);
}
static inline void local_flush_tlb_page(struct vm_area_struct *vma,
unsigned long uaddr)
{
__local_flush_tlb_page_nonotify_nosync(vma->vm_mm, uaddr);
mmu_notifier_arch_invalidate_secondary_tlbs(vma->vm_mm, uaddr & PAGE_MASK,
(uaddr & PAGE_MASK) + PAGE_SIZE);
dsb(nsh);
}
static inline void __flush_tlb_page_nosync(struct mm_struct *mm,
unsigned long uaddr)
{
@@ -472,6 +512,22 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
dsb(ish);
}
static inline void local_flush_tlb_contpte(struct vm_area_struct *vma,
unsigned long addr)
{
unsigned long asid;
addr = round_down(addr, CONT_PTE_SIZE);
dsb(nshst);
asid = ASID(vma->vm_mm);
__flush_tlb_range_op(vale1, addr, CONT_PTES, PAGE_SIZE, asid,
3, true, lpa2_is_enabled());
mmu_notifier_arch_invalidate_secondary_tlbs(vma->vm_mm, addr,
addr + CONT_PTE_SIZE);
dsb(nsh);
}
static inline void flush_tlb_range(struct vm_area_struct *vma,
unsigned long start, unsigned long end)
{
@@ -524,6 +580,33 @@ static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *b
{
__flush_tlb_range_nosync(mm, start, end, PAGE_SIZE, true, 3);
}
static inline bool __pte_flags_need_flush(ptdesc_t oldval, ptdesc_t newval)
{
ptdesc_t diff = oldval ^ newval;
/* invalid to valid transition requires no flush */
if (!(oldval & PTE_VALID))
return false;
/* Transition in the SW bits requires no flush */
diff &= ~PTE_SWBITS_MASK;
return diff;
}
static inline bool pte_needs_flush(pte_t oldpte, pte_t newpte)
{
return __pte_flags_need_flush(pte_val(oldpte), pte_val(newpte));
}
#define pte_needs_flush pte_needs_flush
static inline bool huge_pmd_needs_flush(pmd_t oldpmd, pmd_t newpmd)
{
return __pte_flags_need_flush(pmd_val(oldpmd), pmd_val(newpmd));
}
#define huge_pmd_needs_flush huge_pmd_needs_flush
#endif
#endif
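
A sketch of the kind of caller the two *_needs_flush() helpers enable,
modelled on the generic mprotect path (the surrounding variables are
illustrative, not from this hunk):

    pte_t oldpte = ptep_get(ptep);
    pte_t newpte = pte_wrprotect(oldpte);   /* e.g. a permission change */

    set_pte_at(mm, addr, ptep, newpte);
    if (pte_needs_flush(oldpte, newpte))
            flush_tlb_page(vma, addr);      /* elided when only SW bits change */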


@@ -7,7 +7,7 @@
#define __VDSO_PAGES 4
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <generated/vdso-offsets.h>
@@ -19,6 +19,6 @@
extern char vdso_start[], vdso_end[];
extern char vdso32_start[], vdso32_end[];
#endif /* !__ASSEMBLY__ */
#endif /* !__ASSEMBLER__ */
#endif /* __ASM_VDSO_H */


@@ -5,7 +5,7 @@
#ifndef __COMPAT_BARRIER_H
#define __COMPAT_BARRIER_H
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
/*
* Warning: This code is meant to be used from the compat vDSO only.
*/
@@ -31,6 +31,6 @@
#define smp_rmb() aarch32_smp_rmb()
#define smp_wmb() aarch32_smp_wmb()
#endif /* !__ASSEMBLY__ */
#endif /* !__ASSEMBLER__ */
#endif /* __COMPAT_BARRIER_H */


@@ -5,7 +5,7 @@
#ifndef __ASM_VDSO_COMPAT_GETTIMEOFDAY_H
#define __ASM_VDSO_COMPAT_GETTIMEOFDAY_H
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <asm/barrier.h>
#include <asm/unistd_compat_32.h>
@@ -161,6 +161,6 @@ static inline bool vdso_clocksource_ok(const struct vdso_clock *vc)
}
#define vdso_clocksource_ok vdso_clocksource_ok
#endif /* !__ASSEMBLY__ */
#endif /* !__ASSEMBLER__ */
#endif /* __ASM_VDSO_COMPAT_GETTIMEOFDAY_H */


@@ -3,7 +3,7 @@
#ifndef __ASM_VDSO_GETRANDOM_H
#define __ASM_VDSO_GETRANDOM_H
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <asm/unistd.h>
#include <asm/vdso/vsyscall.h>
@@ -33,6 +33,6 @@ static __always_inline ssize_t getrandom_syscall(void *_buffer, size_t _len, uns
return ret;
}
#endif /* !__ASSEMBLY__ */
#endif /* !__ASSEMBLER__ */
#endif /* __ASM_VDSO_GETRANDOM_H */


@@ -7,7 +7,7 @@
#ifdef __aarch64__
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <asm/alternative.h>
#include <asm/arch_timer.h>
@@ -96,7 +96,7 @@ static __always_inline const struct vdso_time_data *__arch_get_vdso_u_time_data(
#define __arch_get_vdso_u_time_data __arch_get_vdso_u_time_data
#endif /* IS_ENABLED(CONFIG_CC_IS_GCC) && IS_ENABLED(CONFIG_PAGE_SIZE_64KB) */
#endif /* !__ASSEMBLY__ */
#endif /* !__ASSEMBLER__ */
#else /* !__aarch64__ */


@@ -5,13 +5,13 @@
#ifndef __ASM_VDSO_PROCESSOR_H
#define __ASM_VDSO_PROCESSOR_H
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
static inline void cpu_relax(void)
{
asm volatile("yield" ::: "memory");
}
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif /* __ASM_VDSO_PROCESSOR_H */


@@ -2,7 +2,7 @@
#ifndef __ASM_VDSO_VSYSCALL_H
#define __ASM_VDSO_VSYSCALL_H
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <vdso/datapage.h>
@@ -22,6 +22,6 @@ void __arch_update_vdso_clock(struct vdso_clock *vc)
/* The asm-generic header needs to be included after the definitions above */
#include <asm-generic/vdso/vsyscall.h>
#endif /* !__ASSEMBLY__ */
#endif /* !__ASSEMBLER__ */
#endif /* __ASM_VDSO_VSYSCALL_H */


@@ -56,7 +56,7 @@
*/
#define BOOT_CPU_FLAG_E2H BIT_ULL(32)
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <asm/ptrace.h>
#include <asm/sections.h>
@@ -161,6 +161,6 @@ static inline bool is_hyp_nvhe(void)
return is_hyp_mode_available() && !is_kernel_in_hyp_mode();
}
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif /* ! __ASM__VIRT_H */


@@ -3,9 +3,7 @@
#ifndef __ASM_VMAP_STACK_H
#define __ASM_VMAP_STACK_H
#include <linux/bug.h>
#include <linux/gfp.h>
#include <linux/kconfig.h>
#include <linux/vmalloc.h>
#include <linux/pgtable.h>
#include <asm/memory.h>
@@ -19,8 +17,6 @@ static inline unsigned long *arch_alloc_vmap_stack(size_t stack_size, int node)
{
void *p;
BUILD_BUG_ON(!IS_ENABLED(CONFIG_VMAP_STACK));
p = __vmalloc_node(stack_size, THREAD_ALIGN, THREADINFO_GFP, node,
__builtin_return_address(0));
return kasan_reset_tag(p);


@@ -31,7 +31,7 @@
#define KVM_SPSR_FIQ 4
#define KVM_NR_SPSR 5
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/psci.h>
#include <linux/types.h>
#include <asm/ptrace.h>


@@ -80,7 +80,7 @@
#define PTRACE_PEEKMTETAGS 33
#define PTRACE_POKEMTETAGS 34
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
/*
* User structures for general purpose, floating point and debug registers.
@@ -332,6 +332,6 @@ struct user_gcs {
__u64 gcspr_el0;
};
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif /* _UAPI__ASM_PTRACE_H */


@@ -17,7 +17,7 @@
#ifndef _UAPI__ASM_SIGCONTEXT_H
#define _UAPI__ASM_SIGCONTEXT_H
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/types.h>
@@ -192,7 +192,7 @@ struct gcs_context {
__u64 reserved;
};
#endif /* !__ASSEMBLY__ */
#endif /* !__ASSEMBLER__ */
#include <asm/sve_context.h>


@@ -133,7 +133,7 @@ static int __init acpi_fadt_sanity_check(void)
/*
* FADT is required on arm64; retrieve it to check its presence
* and carry out revision and ACPI HW reduced compliancy tests
* and carry out revision and ACPI HW reduced compliance tests
*/
status = acpi_get_table(ACPI_SIG_FADT, 0, &table);
if (ACPI_FAILURE(status)) {
@@ -439,7 +439,7 @@ int apei_claim_sea(struct pt_regs *regs)
irq_work_run();
__irq_exit();
} else {
pr_warn_ratelimited("APEI work queued but not completed");
pr_warn_ratelimited("APEI work queued but not completed\n");
err = -EINPROGRESS;
}
}


@@ -1002,7 +1002,7 @@ static void __init sort_ftr_regs(void)
/*
* Initialise the CPU feature register from Boot CPU values.
* Also initiliases the strict_mask for the register.
* Also initialises the strict_mask for the register.
* Any bits that are not covered by an arm64_ftr_bits entry are considered
* RES0 for the system-wide value, and must strictly match.
*/


@@ -10,6 +10,7 @@
#include <linux/efi.h>
#include <linux/init.h>
#include <linux/kmemleak.h>
#include <linux/kthread.h>
#include <linux/screen_info.h>
#include <linux/vmalloc.h>
@@ -165,20 +166,53 @@ asmlinkage efi_status_t efi_handle_corrupted_x18(efi_status_t s, const char *f)
return s;
}
static DEFINE_RAW_SPINLOCK(efi_rt_lock);
void arch_efi_call_virt_setup(void)
{
efi_virtmap_load();
raw_spin_lock(&efi_rt_lock);
efi_runtime_assert_lock_held();
if (preemptible() && (current->flags & PF_KTHREAD)) {
/*
* Disable migration to ensure that a preempted EFI runtime
* service call will be resumed on the same CPU. This avoids
* potential issues with EFI runtime calls that are preempted
* while polling for an asynchronous completion of a secure
* firmware call, which may not permit the CPU to change.
*/
migrate_disable();
kthread_use_mm(&efi_mm);
} else {
efi_virtmap_load();
}
/*
* Enable access to the valid TTBR0_EL1 and invoke the errata
* workaround directly since there is no return from exception when
* invoking the EFI run-time services.
*/
uaccess_ttbr0_enable();
post_ttbr_update_workaround();
__efi_fpsimd_begin();
}
void arch_efi_call_virt_teardown(void)
{
__efi_fpsimd_end();
raw_spin_unlock(&efi_rt_lock);
efi_virtmap_unload();
/*
* Defer the switch to the current thread's TTBR0_EL1 until
* uaccess_enable(). Do so before efi_virtmap_unload() updates the
* saved TTBR0 value, so the userland page tables are not activated
* inadvertently over the back of an exception.
*/
uaccess_ttbr0_disable();
if (preemptible() && (current->flags & PF_KTHREAD)) {
kthread_unuse_mm(&efi_mm);
migrate_enable();
} else {
efi_virtmap_unload();
}
}
asmlinkage u64 *efi_rt_stack_top __ro_after_init;
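
For context, the two hooks above bracket every runtime service invocation
made by the generic wrappers, roughly as follows (simplified from
efi_call_virt_pointer() in include/linux/efi.h; p is the runtime services
table):

    arch_efi_call_virt_setup();
    status = arch_efi_call_virt(p, get_time, tm, tc);
    arch_efi_call_virt_teardown();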


@@ -34,20 +34,12 @@
* Handle IRQ/context state management when entering from kernel mode.
* Before this function is called it is not safe to call regular kernel code,
* instrumentable code, or any code which may trigger an exception.
*
* This is intended to match the logic in irqentry_enter(), handling the kernel
* mode transitions only.
*/
static __always_inline irqentry_state_t __enter_from_kernel_mode(struct pt_regs *regs)
{
return irqentry_enter(regs);
}
static noinstr irqentry_state_t enter_from_kernel_mode(struct pt_regs *regs)
{
irqentry_state_t state;
state = __enter_from_kernel_mode(regs);
state = irqentry_enter(regs);
mte_check_tfsr_entry();
mte_disable_tco_entry(current);
@@ -58,21 +50,12 @@ static noinstr irqentry_state_t enter_from_kernel_mode(struct pt_regs *regs)
* Handle IRQ/context state management when exiting to kernel mode.
* After this function returns it is not safe to call regular kernel code,
* instrumentable code, or any code which may trigger an exception.
*
* This is intended to match the logic in irqentry_exit(), handling the kernel
* mode transitions only, and with preemption handled elsewhere.
*/
static __always_inline void __exit_to_kernel_mode(struct pt_regs *regs,
irqentry_state_t state)
{
irqentry_exit(regs, state);
}
static void noinstr exit_to_kernel_mode(struct pt_regs *regs,
irqentry_state_t state)
{
mte_check_tfsr_exit();
__exit_to_kernel_mode(regs, state);
irqentry_exit(regs, state);
}
/*
@@ -80,17 +63,12 @@ static void noinstr exit_to_kernel_mode(struct pt_regs *regs,
* Before this function is called it is not safe to call regular kernel code,
* instrumentable code, or any code which may trigger an exception.
*/
static __always_inline void __enter_from_user_mode(struct pt_regs *regs)
static __always_inline void arm64_enter_from_user_mode(struct pt_regs *regs)
{
enter_from_user_mode(regs);
mte_disable_tco_entry(current);
}
static __always_inline void arm64_enter_from_user_mode(struct pt_regs *regs)
{
__enter_from_user_mode(regs);
}
/*
* Handle IRQ/context state management when exiting to user mode.
* After this function returns it is not safe to call regular kernel code,


@@ -94,7 +94,7 @@ SYM_CODE_START(ftrace_caller)
stp x29, x30, [sp, #FREGS_SIZE]
add x29, sp, #FREGS_SIZE
/* Prepare arguments for the the tracer func */
/* Prepare arguments for the tracer func */
sub x0, x30, #AARCH64_INSN_SIZE // ip (callsite's BL insn)
mov x1, x9 // parent_ip (callsite's LR)
mov x3, sp // regs


@@ -225,10 +225,21 @@ static void fpsimd_bind_task_to_cpu(void);
*/
static void get_cpu_fpsimd_context(void)
{
if (!IS_ENABLED(CONFIG_PREEMPT_RT))
local_bh_disable();
else
if (!IS_ENABLED(CONFIG_PREEMPT_RT)) {
/*
* The softirq subsystem lacks a true unmask/mask API, and
* re-enabling softirq processing using local_bh_enable() will
* not only unmask softirqs, it will also result in immediate
* delivery of any pending softirqs.
* This is undesirable when running with IRQs disabled, but in
* that case, there is no need to mask softirqs in the first
* place, so only bother doing so when IRQs are enabled.
*/
if (!irqs_disabled())
local_bh_disable();
} else {
preempt_disable();
}
}
/*
@@ -240,10 +251,12 @@ static void get_cpu_fpsimd_context(void)
*/
static void put_cpu_fpsimd_context(void)
{
if (!IS_ENABLED(CONFIG_PREEMPT_RT))
local_bh_enable();
else
if (!IS_ENABLED(CONFIG_PREEMPT_RT)) {
if (!irqs_disabled())
local_bh_enable();
} else {
preempt_enable();
}
}
unsigned int task_get_vl(const struct task_struct *task, enum vec_type type)
@@ -1934,11 +1947,11 @@ void __efi_fpsimd_begin(void)
if (!system_supports_fpsimd())
return;
WARN_ON(preemptible());
if (may_use_simd()) {
kernel_neon_begin();
} else {
WARN_ON(preemptible());
/*
* If !efi_sve_state, SVE can't be in use yet and doesn't need
* preserving:


@@ -492,7 +492,7 @@ int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec,
return ret;
/*
* When using mcount, callsites in modules may have been initalized to
* When using mcount, callsites in modules may have been initialized to
* call an arbitrary module PLT (which redirects to the _mcount stub)
* rather than the ftrace PLT we'll use at runtime (which redirects to
* the ftrace trampoline). We can ignore the old PLT when initializing


@@ -62,7 +62,7 @@ static void __init init_irq_stacks(void)
}
}
#ifndef CONFIG_PREEMPT_RT
#ifdef CONFIG_SOFTIRQ_ON_OWN_STACK
static void ____do_softirq(struct pt_regs *regs)
{
__do_softirq();


@@ -251,7 +251,7 @@ void crash_post_resume(void)
* marked as Reserved as memory was allocated via memblock_reserve().
*
* In hibernation, the pages which are Reserved and yet "nosave" are excluded
* from the hibernation iamge. crash_is_nosave() does thich check for crash
* from the hibernation image. crash_is_nosave() does thich check for crash
* dump kernel and will reduce the total size of hibernation image.
*/


@@ -131,7 +131,7 @@ void arch_uprobe_abort_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
struct uprobe_task *utask = current->utask;
/*
* Task has received a fatal signal, so reset back to probbed
* Task has received a fatal signal, so reset back to probed
* address.
*/
instruction_pointer_set(regs, utask->vaddr);


@@ -912,13 +912,39 @@ static int sve_set_common(struct task_struct *target,
return -EINVAL;
/*
* Apart from SVE_PT_REGS_MASK, all SVE_PT_* flags are consumed by
* vec_set_vector_length(), which will also validate them for us:
* On systems without SVE we accept FPSIMD format writes with
* a VL of 0 to allow exiting streaming mode, otherwise a VL
* is required.
*/
ret = vec_set_vector_length(target, type, header.vl,
((unsigned long)header.flags & ~SVE_PT_REGS_MASK) << 16);
if (ret)
return ret;
if (header.vl) {
/*
* If the system does not support SVE we can't
* configure a SVE VL.
*/
if (!system_supports_sve() && type == ARM64_VEC_SVE)
return -EINVAL;
/*
* Apart from SVE_PT_REGS_MASK, all SVE_PT_* flags are
* consumed by vec_set_vector_length(), which will
* also validate them for us:
*/
ret = vec_set_vector_length(target, type, header.vl,
((unsigned long)header.flags & ~SVE_PT_REGS_MASK) << 16);
if (ret)
return ret;
} else {
/* If the system supports SVE we require a VL. */
if (system_supports_sve())
return -EINVAL;
/*
* Only FPSIMD formatted data with no flags set is
* supported.
*/
if (header.flags != SVE_PT_REGS_FPSIMD)
return -EINVAL;
}
/* Allocate SME storage if necessary, preserving any existing ZA/ZT state */
if (type == ARM64_VEC_SME) {
@@ -1016,7 +1042,7 @@ static int sve_set(struct task_struct *target,
unsigned int pos, unsigned int count,
const void *kbuf, const void __user *ubuf)
{
if (!system_supports_sve())
if (!system_supports_sve() && !system_supports_sme())
return -EINVAL;
return sve_set_common(target, regset, pos, count, kbuf, ubuf,


@@ -63,8 +63,6 @@ static void free_sdei_stacks(void)
{
int cpu;
BUILD_BUG_ON(!IS_ENABLED(CONFIG_VMAP_STACK));
for_each_possible_cpu(cpu) {
_free_sdei_stack(&sdei_stack_normal_ptr, cpu);
_free_sdei_stack(&sdei_stack_critical_ptr, cpu);
@@ -88,8 +86,6 @@ static int init_sdei_stacks(void)
int cpu;
int err = 0;
BUILD_BUG_ON(!IS_ENABLED(CONFIG_VMAP_STACK));
for_each_possible_cpu(cpu) {
err = _init_sdei_stack(&sdei_stack_normal_ptr, cpu);
if (err)
@@ -202,7 +198,7 @@ unsigned long sdei_arch_get_entry_point(int conduit)
/*
* do_sdei_event() returns one of:
* SDEI_EV_HANDLED - success, return to the interrupted context.
* SDEI_EV_FAILED - failure, return this error code to firmare.
* SDEI_EV_FAILED - failure, return this error code to firmware.
* virtual-address - success, return to this address.
*/
unsigned long __kprobes do_sdei_event(struct pt_regs *regs,


@@ -350,7 +350,7 @@ void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu)
/*
* Now that the dying CPU is beyond the point of no return w.r.t.
* in-kernel synchronisation, try to get the firwmare to help us to
* in-kernel synchronisation, try to get the firmware to help us to
* verify that it has really left the kernel before we consider
* clobbering anything it might still be using.
*/
@@ -523,7 +523,7 @@ int arch_register_cpu(int cpu)
/*
* Availability of the acpi handle is sufficient to establish
* that _STA has aleady been checked. No need to recheck here.
* that _STA has already been checked. No need to recheck here.
*/
c->hotpluggable = arch_cpu_is_hotpluggable(cpu);


@@ -96,7 +96,7 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
* (Similarly for HVC and SMC elsewhere.)
*/
if (flags & _TIF_MTE_ASYNC_FAULT) {
if (unlikely(flags & _TIF_MTE_ASYNC_FAULT)) {
/*
* Process the asynchronous tag check fault before the actual
* syscall. do_notify_resume() will send a signal to userspace


@@ -922,7 +922,7 @@ void __noreturn panic_bad_stack(struct pt_regs *regs, unsigned long esr, unsigne
__show_regs(regs);
/*
* We use nmi_panic to limit the potential for recusive overflows, and
* We use nmi_panic to limit the potential for recursive overflows, and
* to get a better stack trace.
*/
nmi_panic(NULL, "kernel stack overflow");


@@ -815,7 +815,7 @@ static void timer_set_traps(struct kvm_vcpu *vcpu, struct timer_map *map)
tpt = tpc = true;
/*
-	 * For the poor sods that could not correctly substract one value
+	 * For the poor sods that could not correctly subtract one value
* from another, trap the full virtual timer and counter.
*/
if (has_broken_cntvoff() && timer_get_offset(map->direct_vtimer))


@@ -2441,7 +2441,7 @@ static void kvm_hyp_init_symbols(void)
kvm_nvhe_sym(__icache_flags) = __icache_flags;
kvm_nvhe_sym(kvm_arm_vmid_bits) = kvm_arm_vmid_bits;
-	/* Propagate the FGT state to the the nVHE side */
+	/* Propagate the FGT state to the nVHE side */
kvm_nvhe_sym(hfgrtr_masks) = hfgrtr_masks;
kvm_nvhe_sym(hfgwtr_masks) = hfgwtr_masks;
kvm_nvhe_sym(hfgitr_masks) = hfgitr_masks;


@@ -115,7 +115,7 @@ static void ffa_set_retval(struct kvm_cpu_context *ctxt,
*
* FFA-1.3 introduces 64-bit variants of the CPU cycle management
* interfaces. Moreover, FF-A 1.3 clarifies that SMC32 direct requests
 * complete with SMC32 direct reponses which *should* allow us use the
 * complete with SMC32 direct responses which *should* allow us use the
* function ID sent by the caller to determine whether to return x8-x17.
*
* Note that we also cannot rely on function IDs in the response.


@@ -1755,7 +1755,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
/*
* Check if this is non-struct page memory PFN, and cannot support
-	 * CMOs. It could potentially be unsafe to access as cachable.
+	 * CMOs. It could potentially be unsafe to access as cacheable.
*/
if (vm_flags & (VM_PFNMAP | VM_MIXEDMAP) && !pfn_is_map_memory(pfn)) {
if (is_vma_cacheable) {


@@ -85,7 +85,7 @@ int kvm_vcpu_init_nested(struct kvm_vcpu *vcpu)
/*
* Let's treat memory allocation failures as benign: If we fail to
* allocate anything, return an error and keep the allocated array
-	 * alive. Userspace may try to recover by intializing the vcpu
+	 * alive. Userspace may try to recover by initializing the vcpu
* again, and there is no reason to affect the whole VM for this.
*/
num_mmus = atomic_read(&kvm->online_vcpus) * S2_MMU_PER_VCPU;


@@ -622,8 +622,7 @@ int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
__ptep_set_access_flags(vma, addr, ptep, entry, 0);
if (dirty)
-			__flush_tlb_range(vma, start_addr, addr,
-					  PAGE_SIZE, true, 3);
+			local_flush_tlb_contpte(vma, start_addr);
} else {
__contpte_try_unfold(vma->vm_mm, addr, ptep, orig_pte);
__ptep_set_access_flags(vma, addr, ptep, entry, dirty);


@@ -233,9 +233,13 @@ int __ptep_set_access_flags(struct vm_area_struct *vma,
pteval = cmpxchg_relaxed(&pte_val(*ptep), old_pteval, pteval);
} while (pteval != old_pteval);
-	/* Invalidate a stale read-only entry */
+	/*
+	 * Invalidate the local stale read-only entry. Remote stale entries
+	 * may still cause page faults and be invalidated via
+	 * flush_tlb_fix_spurious_fault().
+	 */
	if (dirty)
-		flush_tlb_page(vma, address);
+		local_flush_tlb_page(vma, address);
return 1;
}
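
Both this hunk and the contpte one above trade a broadcast TLB
invalidation for a CPU-local one. The difference is only the
shareability domain of the TLBI; a simplified sketch (ASID encoding
omitted, so these are not the exact arch/arm64/include/asm/tlbflush.h
helpers) looks like:

#define dsb(opt)	asm volatile("dsb " #opt : : : "memory")

/* Broadcast: invalidate the page on every CPU in the Inner Shareable
 * domain and wait for the DVM operation to complete everywhere. */
static inline void flush_tlb_page_sketch(unsigned long addr)
{
	dsb(ishst);				/* publish the PTE update */
	asm volatile("tlbi vale1is, %0" : : "r" (addr >> 12));
	dsb(ish);
}

/* Local: invalidate only this CPU's TLB entry for the page. */
static inline void local_flush_tlb_page_sketch(unsigned long addr)
{
	dsb(nshst);
	asm volatile("tlbi vale1, %0" : : "r" (addr >> 12));
	dsb(nsh);
}

The point of the series is that a stale read-only entry created by a
write fault on a reused page only needs fixing on the faulting CPU; any
other CPU still holding the stale entry takes its own fault and repairs
it via flush_tlb_fix_spurious_fault(), as the new comment explains.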


@@ -49,6 +49,8 @@
#define NO_CONT_MAPPINGS BIT(1)
#define NO_EXEC_MAPPINGS BIT(2) /* assumes FEAT_HPDS is not used */
+#define INVALID_PHYS_ADDR (-1ULL)
DEFINE_STATIC_KEY_FALSE(arm64_ptdump_lock_key);
u64 kimage_voffset __ro_after_init;
@@ -194,11 +196,11 @@ static void init_pte(pte_t *ptep, unsigned long addr, unsigned long end,
} while (ptep++, addr += PAGE_SIZE, addr != end);
}
-static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
-				unsigned long end, phys_addr_t phys,
-				pgprot_t prot,
-				phys_addr_t (*pgtable_alloc)(enum pgtable_type),
-				int flags)
+static int alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
+			       unsigned long end, phys_addr_t phys,
+			       pgprot_t prot,
+			       phys_addr_t (*pgtable_alloc)(enum pgtable_type),
+			       int flags)
{
unsigned long next;
pmd_t pmd = READ_ONCE(*pmdp);
@@ -213,6 +215,8 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
pmdval |= PMD_TABLE_PXN;
BUG_ON(!pgtable_alloc);
pte_phys = pgtable_alloc(TABLE_PTE);
+		if (pte_phys == INVALID_PHYS_ADDR)
+			return -ENOMEM;
ptep = pte_set_fixmap(pte_phys);
init_clear_pgtable(ptep);
ptep += pte_index(addr);
@@ -244,11 +248,13 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
* walker.
*/
pte_clear_fixmap();
+	return 0;
}
-static void init_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end,
-		     phys_addr_t phys, pgprot_t prot,
-		     phys_addr_t (*pgtable_alloc)(enum pgtable_type), int flags)
+static int init_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end,
+		    phys_addr_t phys, pgprot_t prot,
+		    phys_addr_t (*pgtable_alloc)(enum pgtable_type), int flags)
{
unsigned long next;
@@ -269,22 +275,29 @@ static void init_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end,
BUG_ON(!pgattr_change_is_safe(pmd_val(old_pmd),
READ_ONCE(pmd_val(*pmdp))));
} else {
-			alloc_init_cont_pte(pmdp, addr, next, phys, prot,
-					    pgtable_alloc, flags);
+			int ret;
+			ret = alloc_init_cont_pte(pmdp, addr, next, phys, prot,
+						  pgtable_alloc, flags);
+			if (ret)
+				return ret;
BUG_ON(pmd_val(old_pmd) != 0 &&
pmd_val(old_pmd) != READ_ONCE(pmd_val(*pmdp)));
}
phys += next - addr;
} while (pmdp++, addr = next, addr != end);
+	return 0;
}
-static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
-				unsigned long end, phys_addr_t phys,
-				pgprot_t prot,
-				phys_addr_t (*pgtable_alloc)(enum pgtable_type),
-				int flags)
+static int alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
+			       unsigned long end, phys_addr_t phys,
+			       pgprot_t prot,
+			       phys_addr_t (*pgtable_alloc)(enum pgtable_type),
+			       int flags)
{
+	int ret;
unsigned long next;
pud_t pud = READ_ONCE(*pudp);
pmd_t *pmdp;
@@ -301,6 +314,8 @@ static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
pudval |= PUD_TABLE_PXN;
BUG_ON(!pgtable_alloc);
pmd_phys = pgtable_alloc(TABLE_PMD);
+		if (pmd_phys == INVALID_PHYS_ADDR)
+			return -ENOMEM;
pmdp = pmd_set_fixmap(pmd_phys);
init_clear_pgtable(pmdp);
pmdp += pmd_index(addr);
@@ -320,20 +335,26 @@ static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
(flags & NO_CONT_MAPPINGS) == 0)
__prot = __pgprot(pgprot_val(prot) | PTE_CONT);
-		init_pmd(pmdp, addr, next, phys, __prot, pgtable_alloc, flags);
+		ret = init_pmd(pmdp, addr, next, phys, __prot, pgtable_alloc, flags);
+		if (ret)
+			goto out;
pmdp += pmd_index(next) - pmd_index(addr);
phys += next - addr;
} while (addr = next, addr != end);
+out:
pmd_clear_fixmap();
+	return ret;
}
-static void alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
-			   phys_addr_t phys, pgprot_t prot,
-			   phys_addr_t (*pgtable_alloc)(enum pgtable_type),
-			   int flags)
+static int alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
+			  phys_addr_t phys, pgprot_t prot,
+			  phys_addr_t (*pgtable_alloc)(enum pgtable_type),
+			  int flags)
{
+	int ret = 0;
unsigned long next;
p4d_t p4d = READ_ONCE(*p4dp);
pud_t *pudp;
@@ -346,6 +367,8 @@ static void alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
p4dval |= P4D_TABLE_PXN;
BUG_ON(!pgtable_alloc);
pud_phys = pgtable_alloc(TABLE_PUD);
+		if (pud_phys == INVALID_PHYS_ADDR)
+			return -ENOMEM;
pudp = pud_set_fixmap(pud_phys);
init_clear_pgtable(pudp);
pudp += pud_index(addr);
@@ -375,8 +398,10 @@ static void alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
BUG_ON(!pgattr_change_is_safe(pud_val(old_pud),
READ_ONCE(pud_val(*pudp))));
} else {
-			alloc_init_cont_pmd(pudp, addr, next, phys, prot,
-					    pgtable_alloc, flags);
+			ret = alloc_init_cont_pmd(pudp, addr, next, phys, prot,
+						  pgtable_alloc, flags);
+			if (ret)
+				goto out;
BUG_ON(pud_val(old_pud) != 0 &&
pud_val(old_pud) != READ_ONCE(pud_val(*pudp)));
@@ -384,14 +409,18 @@ static void alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
phys += next - addr;
} while (pudp++, addr = next, addr != end);
+out:
pud_clear_fixmap();
+	return ret;
}
-static void alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
-			   phys_addr_t phys, pgprot_t prot,
-			   phys_addr_t (*pgtable_alloc)(enum pgtable_type),
-			   int flags)
+static int alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
+			  phys_addr_t phys, pgprot_t prot,
+			  phys_addr_t (*pgtable_alloc)(enum pgtable_type),
+			  int flags)
{
+	int ret;
unsigned long next;
pgd_t pgd = READ_ONCE(*pgdp);
p4d_t *p4dp;
@@ -404,6 +433,8 @@ static void alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
pgdval |= PGD_TABLE_PXN;
BUG_ON(!pgtable_alloc);
p4d_phys = pgtable_alloc(TABLE_P4D);
+		if (p4d_phys == INVALID_PHYS_ADDR)
+			return -ENOMEM;
p4dp = p4d_set_fixmap(p4d_phys);
init_clear_pgtable(p4dp);
p4dp += p4d_index(addr);
@@ -418,8 +449,10 @@ static void alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
next = p4d_addr_end(addr, end);
-		alloc_init_pud(p4dp, addr, next, phys, prot,
-			       pgtable_alloc, flags);
+		ret = alloc_init_pud(p4dp, addr, next, phys, prot,
+				     pgtable_alloc, flags);
+		if (ret)
+			goto out;
BUG_ON(p4d_val(old_p4d) != 0 &&
p4d_val(old_p4d) != READ_ONCE(p4d_val(*p4dp)));
@@ -427,15 +460,19 @@ static void alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
phys += next - addr;
} while (p4dp++, addr = next, addr != end);
+out:
p4d_clear_fixmap();
+	return ret;
}
-static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
-					unsigned long virt, phys_addr_t size,
-					pgprot_t prot,
-					phys_addr_t (*pgtable_alloc)(enum pgtable_type),
-					int flags)
+static int __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
+				       unsigned long virt, phys_addr_t size,
+				       pgprot_t prot,
+				       phys_addr_t (*pgtable_alloc)(enum pgtable_type),
+				       int flags)
{
+	int ret;
unsigned long addr, end, next;
pgd_t *pgdp = pgd_offset_pgd(pgdir, virt);
@@ -444,7 +481,7 @@ static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
* within a page, we cannot map the region as the caller expects.
*/
if (WARN_ON((phys ^ virt) & ~PAGE_MASK))
-		return;
+		return -EINVAL;
phys &= PAGE_MASK;
addr = virt & PAGE_MASK;
@@ -452,25 +489,45 @@ static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
do {
next = pgd_addr_end(addr, end);
-		alloc_init_p4d(pgdp, addr, next, phys, prot, pgtable_alloc,
-			       flags);
+		ret = alloc_init_p4d(pgdp, addr, next, phys, prot, pgtable_alloc,
+				     flags);
+		if (ret)
+			return ret;
phys += next - addr;
} while (pgdp++, addr = next, addr != end);
+	return 0;
}
-static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
-				 unsigned long virt, phys_addr_t size,
-				 pgprot_t prot,
-				 phys_addr_t (*pgtable_alloc)(enum pgtable_type),
-				 int flags)
+static int __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
+				unsigned long virt, phys_addr_t size,
+				pgprot_t prot,
+				phys_addr_t (*pgtable_alloc)(enum pgtable_type),
+				int flags)
{
+	int ret;
mutex_lock(&fixmap_lock);
-	__create_pgd_mapping_locked(pgdir, phys, virt, size, prot,
-				    pgtable_alloc, flags);
+	ret = __create_pgd_mapping_locked(pgdir, phys, virt, size, prot,
+					  pgtable_alloc, flags);
mutex_unlock(&fixmap_lock);
+	return ret;
}
-#define INVALID_PHYS_ADDR (-1ULL)
+static void early_create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
+				     unsigned long virt, phys_addr_t size,
+				     pgprot_t prot,
+				     phys_addr_t (*pgtable_alloc)(enum pgtable_type),
+				     int flags)
+{
+	int ret;
+	ret = __create_pgd_mapping(pgdir, phys, virt, size, prot, pgtable_alloc,
+				   flags);
+	if (ret)
+		panic("Failed to create page tables\n");
+}
static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm, gfp_t gfp,
enum pgtable_type pgtable_type)
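
The shape of the refactoring above: table allocations report failure
through the INVALID_PHYS_ADDR sentinel, every walker level now returns
int, and the boot-time callers that genuinely cannot recover keep their
panic in one place, the new early_create_pgd_mapping() wrapper. A
freestanding userspace analogy of the idiom (all names here are
illustrative, with calloc standing in for the page allocator):

#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

#define INVALID_PHYS_ADDR ((uint64_t)-1)

/* Allocation helper: sentinel instead of BUG_ON() on failure. */
static uint64_t pgtable_alloc(void)
{
	void *p = calloc(1, 4096);

	return p ? (uint64_t)(uintptr_t)p : INVALID_PHYS_ADDR;
}

/* One walker level: translate the sentinel into -ENOMEM and let the
 * caller decide whether that is fatal. */
static int map_one_level(void)
{
	uint64_t table = pgtable_alloc();

	if (table == INVALID_PHYS_ADDR)
		return -ENOMEM;
	/* ... install the table and descend to the next level ... */
	return 0;
}

/* Early-boot wrapper: failure here has nothing to unwind into. */
static void early_map_or_die(void)
{
	if (map_one_level())
		abort();
}

int main(void)
{
	early_map_or_die();
	return 0;
}

Memory hotplug is the caller that actually benefits: arch_add_memory()
(below) can now unwind with __remove_pgd_mapping() and return the error
instead of bringing the machine down.
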
@@ -503,7 +560,7 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm, gfp_t gfp,
}
static phys_addr_t
-try_pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type, gfp_t gfp)
+pgd_pgtable_alloc_init_mm_gfp(enum pgtable_type pgtable_type, gfp_t gfp)
{
return __pgd_pgtable_alloc(&init_mm, gfp, pgtable_type);
}
@@ -511,21 +568,13 @@ try_pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type, gfp_t gfp)
static phys_addr_t __maybe_unused
pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
{
-	phys_addr_t pa;
-	pa = __pgd_pgtable_alloc(&init_mm, GFP_PGTABLE_KERNEL, pgtable_type);
-	BUG_ON(pa == INVALID_PHYS_ADDR);
-	return pa;
+	return pgd_pgtable_alloc_init_mm_gfp(pgtable_type, GFP_PGTABLE_KERNEL);
}
static phys_addr_t
pgd_pgtable_alloc_special_mm(enum pgtable_type pgtable_type)
{
-	phys_addr_t pa;
-	pa = __pgd_pgtable_alloc(NULL, GFP_PGTABLE_KERNEL, pgtable_type);
-	BUG_ON(pa == INVALID_PHYS_ADDR);
-	return pa;
+	return __pgd_pgtable_alloc(NULL, GFP_PGTABLE_KERNEL, pgtable_type);
}
static void split_contpte(pte_t *ptep)
@@ -546,7 +595,7 @@ static int split_pmd(pmd_t *pmdp, pmd_t pmd, gfp_t gfp, bool to_cont)
pte_t *ptep;
int i;
-	pte_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PTE, gfp);
+	pte_phys = pgd_pgtable_alloc_init_mm_gfp(TABLE_PTE, gfp);
if (pte_phys == INVALID_PHYS_ADDR)
return -ENOMEM;
ptep = (pte_t *)phys_to_virt(pte_phys);
@@ -591,7 +640,7 @@ static int split_pud(pud_t *pudp, pud_t pud, gfp_t gfp, bool to_cont)
pmd_t *pmdp;
int i;
-	pmd_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PMD, gfp);
+	pmd_phys = pgd_pgtable_alloc_init_mm_gfp(TABLE_PMD, gfp);
if (pmd_phys == INVALID_PHYS_ADDR)
return -ENOMEM;
pmdp = (pmd_t *)phys_to_virt(pmd_phys);
@@ -903,8 +952,8 @@ void __init create_mapping_noalloc(phys_addr_t phys, unsigned long virt,
&phys, virt);
return;
}
-	__create_pgd_mapping(init_mm.pgd, phys, virt, size, prot, NULL,
-			     NO_CONT_MAPPINGS);
+	early_create_pgd_mapping(init_mm.pgd, phys, virt, size, prot, NULL,
+				 NO_CONT_MAPPINGS);
}
void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
@@ -918,8 +967,8 @@ void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
if (page_mappings_only)
flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
-	__create_pgd_mapping(mm->pgd, phys, virt, size, prot,
-			     pgd_pgtable_alloc_special_mm, flags);
+	early_create_pgd_mapping(mm->pgd, phys, virt, size, prot,
+				 pgd_pgtable_alloc_special_mm, flags);
}
static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
@@ -931,8 +980,8 @@ static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
return;
}
-	__create_pgd_mapping(init_mm.pgd, phys, virt, size, prot, NULL,
-			     NO_CONT_MAPPINGS);
+	early_create_pgd_mapping(init_mm.pgd, phys, virt, size, prot, NULL,
+				 NO_CONT_MAPPINGS);
/* flush the TLBs after updating live kernel mappings */
flush_tlb_kernel_range(virt, virt + size);
@@ -941,8 +990,8 @@ static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
static void __init __map_memblock(pgd_t *pgdp, phys_addr_t start,
phys_addr_t end, pgprot_t prot, int flags)
{
-	__create_pgd_mapping(pgdp, start, __phys_to_virt(start), end - start,
-			     prot, early_pgtable_alloc, flags);
+	early_create_pgd_mapping(pgdp, start, __phys_to_virt(start), end - start,
+				 prot, early_pgtable_alloc, flags);
}
void __init mark_linear_text_alias_ro(void)
@@ -1158,6 +1207,8 @@ static int __init __kpti_install_ng_mappings(void *__unused)
remap_fn = (void *)__pa_symbol(idmap_kpti_install_ng_mappings);
if (!cpu) {
+		int ret;
alloc = __get_free_pages(GFP_ATOMIC | __GFP_ZERO, order);
kpti_ng_temp_pgd = (pgd_t *)(alloc + (levels - 1) * PAGE_SIZE);
kpti_ng_temp_alloc = kpti_ng_temp_pgd_pa = __pa(kpti_ng_temp_pgd);
@@ -1178,9 +1229,11 @@ static int __init __kpti_install_ng_mappings(void *__unused)
// covers the PTE[] page itself, the remaining entries are free
// to be used as a ad-hoc fixmap.
//
-		__create_pgd_mapping_locked(kpti_ng_temp_pgd, __pa(alloc),
-					    KPTI_NG_TEMP_VA, PAGE_SIZE, PAGE_KERNEL,
-					    kpti_ng_pgd_alloc, 0);
+		ret = __create_pgd_mapping_locked(kpti_ng_temp_pgd, __pa(alloc),
+						  KPTI_NG_TEMP_VA, PAGE_SIZE, PAGE_KERNEL,
+						  kpti_ng_pgd_alloc, 0);
+		if (ret)
+			panic("Failed to create page tables\n");
}
cpu_install_idmap();
@@ -1233,9 +1286,9 @@ static int __init map_entry_trampoline(void)
/* Map only the text into the trampoline page table */
memset(tramp_pg_dir, 0, PGD_SIZE);
-	__create_pgd_mapping(tramp_pg_dir, pa_start, TRAMP_VALIAS,
-			     entry_tramp_text_size(), prot,
-			     pgd_pgtable_alloc_init_mm, NO_BLOCK_MAPPINGS);
+	early_create_pgd_mapping(tramp_pg_dir, pa_start, TRAMP_VALIAS,
+				 entry_tramp_text_size(), prot,
+				 pgd_pgtable_alloc_init_mm, NO_BLOCK_MAPPINGS);
/* Map both the text and data into the kernel page table */
for (i = 0; i < DIV_ROUND_UP(entry_tramp_text_size(), PAGE_SIZE); i++)
@@ -1877,23 +1930,28 @@ int arch_add_memory(int nid, u64 start, u64 size,
if (force_pte_mapping())
flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
-	__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
-			     size, params->pgprot, pgd_pgtable_alloc_init_mm,
-			     flags);
+	ret = __create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
+				   size, params->pgprot, pgd_pgtable_alloc_init_mm,
+				   flags);
+	if (ret)
+		goto err;
memblock_clear_nomap(start, size);
ret = __add_pages(nid, start >> PAGE_SHIFT, size >> PAGE_SHIFT,
params);
if (ret)
-		__remove_pgd_mapping(swapper_pg_dir,
-				     __phys_to_virt(start), size);
-	else {
-		/* Address of hotplugged memory can be smaller */
-		max_pfn = max(max_pfn, PFN_UP(start + size));
-		max_low_pfn = max_pfn;
-	}
+		goto err;
+	/* Address of hotplugged memory can be smaller */
+	max_pfn = max(max_pfn, PFN_UP(start + size));
+	max_low_pfn = max_pfn;
+	return 0;
+err:
+	__remove_pgd_mapping(swapper_pg_dir,
+			     __phys_to_virt(start), size);
+	return ret;
}
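
The rewritten hotplug path follows the single-error-label unwind idiom:
map first, then add pages, and on any failure undo the one step already
committed. A compilable sketch with hypothetical stand-in helpers:

#include <errno.h>
#include <stdio.h>

static int create_mapping(void) { return 0; }
static int add_pages(void) { return -ENOMEM; }	/* simulate a failure */
static void remove_mapping(void) { }
static void update_max_pfn(void) { }		/* success-only bookkeeping */

static int add_memory_sketch(void)
{
	int ret;

	ret = create_mapping();
	if (ret)
		return ret;	/* nothing committed yet, nothing to undo */

	ret = add_pages();
	if (ret)
		goto err;

	update_max_pfn();
	return 0;

err:
	remove_mapping();	/* undo the only committed step */
	return ret;
}

int main(void)
{
	printf("add_memory_sketch() = %d\n", add_memory_sketch());
	return 0;
}

Compared with the old version, success-only work (the max_pfn update)
no longer hides in an else branch, and a mapping-creation failure now
reuses the same unwind as an __add_pages() failure.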


@@ -148,7 +148,6 @@ static int change_memory_common(unsigned long addr, int numpages,
unsigned long size = PAGE_SIZE * numpages;
unsigned long end = start + size;
struct vm_struct *area;
-	int i;
if (!PAGE_ALIGNED(addr)) {
start &= PAGE_MASK;
@@ -184,8 +183,10 @@ static int change_memory_common(unsigned long addr, int numpages,
*/
if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
pgprot_val(clear_mask) == PTE_RDONLY)) {
-		for (i = 0; i < area->nr_pages; i++) {
-			__change_memory_common((u64)page_address(area->pages[i]),
+		unsigned long idx = (start - (unsigned long)kasan_reset_tag(area->addr))
+				    >> PAGE_SHIFT;
+		for (; numpages; idx++, numpages--) {
+			__change_memory_common((u64)page_address(area->pages[idx]),
PAGE_SIZE, set_mask, clear_mask);
}
}
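
A worked example of the corrected index arithmetic (hypothetical
numbers, PAGE_SHIFT of 12): the old loop visited every page of the
vmalloc area, while the fix starts at the page offset of start within
the area and visits exactly numpages pages.

#include <stdio.h>

#define PAGE_SHIFT 12

int main(void)
{
	unsigned long area_addr = 0x10000000UL;			/* area->addr */
	unsigned long start = area_addr + (2UL << PAGE_SHIFT);	/* 2 pages in */
	int numpages = 3;
	unsigned long idx = (start - area_addr) >> PAGE_SHIFT;

	for (; numpages; idx++, numpages--)
		printf("would change area->pages[%lu]\n", idx);	/* 2, 3, 4 */
	return 0;
}

In the real code, kasan_reset_tag() strips any MTE tag from area->addr
before the subtraction so both addresses are compared in the same
(untagged) form.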


@@ -56,7 +56,7 @@ void __init pgtable_cache_init(void)
* With 52-bit physical addresses, the architecture requires the
* top-level table to be aligned to at least 64 bytes.
*/
-	BUILD_BUG_ON(PGD_SIZE < 64);
+	BUILD_BUG_ON(!IS_ALIGNED(PGD_SIZE, 64));
#endif
/*
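
Why the predicate changed: the top-level table comes from a kmem_cache
created with its alignment equal to PGD_SIZE (when PGD_SIZE is smaller
than a page), so 64-byte alignment of the object follows only when
PGD_SIZE is a multiple of 64, not merely at least 64. A self-check
using a simplified, kernel-style IS_ALIGNED() test:

#include <assert.h>

#define IS_ALIGNED(x, a) (((x) & ((a) - 1)) == 0)

int main(void)
{
	/* A hypothetical PGD_SIZE of 96 satisfies the old ">= 64" check,
	 * but a cache aligned to 96 places objects at 96, 192, and so
	 * on, not all of which are 64-byte aligned. */
	assert(!(96 < 64) && !IS_ALIGNED(96, 64));
	/* any power-of-two PGD_SIZE >= 64 satisfies both checks */
	assert(!(512 < 64) && IS_ALIGNED(512, 64));
	return 0;
}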


@@ -3053,7 +3053,7 @@ bool bpf_jit_supports_exceptions(void)
/* We unwind through both kernel frames starting from within bpf_throw
* call and BPF frames. Therefore we require FP unwinder to be enabled
* to walk kernel frames and reach BPF frames in the stack trace.
 * ARM64 kernel is aways compiled with CONFIG_FRAME_POINTER=y
 * ARM64 kernel is always compiled with CONFIG_FRAME_POINTER=y
*/
return true;
}


@@ -251,4 +251,6 @@ source "drivers/hte/Kconfig"
source "drivers/cdx/Kconfig"
source "drivers/resctrl/Kconfig"
endmenu
