Merge branch 'main' into zstd-next

Nick Terrell
2022-10-24 12:09:07 -07:00
265 changed files with 4227 additions and 4518 deletions

View File

@@ -9,7 +9,6 @@ the Linux ACPI support.
:maxdepth: 1
initrd_table_override
dsdt-override
ssdt-overlays
cppc_sysfs
fan_performance_states

View File

@@ -144,6 +144,42 @@ managing and controlling ublk devices with help of several control commands:
For retrieving device info via ``ublksrv_ctrl_dev_info``. It is the server's
responsibility to save IO target specific info in userspace.
- ``UBLK_CMD_START_USER_RECOVERY``
This command is valid if the ``UBLK_F_USER_RECOVERY`` feature is enabled. The
command is accepted after the old process has exited, the ublk device is
quiesced and ``/dev/ublkc*`` is released. The user should send this command
before starting a new process which re-opens ``/dev/ublkc*``. When this
command returns, the ublk device is ready for the new process.
- ``UBLK_CMD_END_USER_RECOVERY``
This command is valid if the ``UBLK_F_USER_RECOVERY`` feature is enabled. The
command is accepted after the ublk device is quiesced, a new process has
opened ``/dev/ublkc*``, and all ublk queues are ready. When this command
returns, the ublk device is unquiesced and new I/O requests are passed to the
new process.
- user recovery feature description
Two new features are added for user recovery: ``UBLK_F_USER_RECOVERY`` and
``UBLK_F_USER_RECOVERY_REISSUE``.
With ``UBLK_F_USER_RECOVERY`` set, after one ubq_daemon (the ublk server's
I/O handler) dies, ublk does not delete ``/dev/ublkb*`` during the whole
recovery stage and the ublk device ID is kept. It is the ublk server's
responsibility to recover the device context by its own knowledge.
Requests which have not been issued to userspace are requeued. Requests
which have been issued to userspace are aborted.
With ``UBLK_F_USER_RECOVERY_REISSUE`` set, after one ubq_daemon (the ublk
server's I/O handler) dies, requests which have been issued to userspace
are, contrary to ``UBLK_F_USER_RECOVERY``, requeued and will be re-issued
to the new process after it handles ``UBLK_CMD_END_USER_RECOVERY``.
``UBLK_F_USER_RECOVERY_REISSUE`` is designed for backends that tolerate
double writes, since the driver may issue the same I/O request twice. It
might be useful for a read-only FS or a VM backend.
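As a rough sketch, the recovery handshake is ordered as follows from the
ublk server's side (``send_ctrl_cmd()`` is a placeholder for submitting a
control command on ``/dev/ublk-control`` via io_uring, not a real API)::

    /* the old ubq_daemon has died; the device is quiesced, /dev/ublkc* released */
    send_ctrl_cmd(UBLK_CMD_START_USER_RECOVERY);  /* returns once ready for a new process */

    /* start the new process; it re-opens /dev/ublkc* and readies all queues */
    send_ctrl_cmd(UBLK_CMD_END_USER_RECOVERY);    /* device unquiesced, I/O resumes */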
Data plane
----------

View File

@@ -1,9 +0,0 @@
Dongwoon Anatech DW9714 camera voice coil lens driver
DW9714 is a 10-bit DAC with current sink capability. It is intended
for driving voice coil lenses in camera modules.
Mandatory properties:
- compatible: "dongwoon,dw9714"
- reg: I²C slave address

View File

@@ -0,0 +1,47 @@
# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
%YAML 1.2
---
$id: http://devicetree.org/schemas/media/i2c/dongwoon,dw9714.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Dongwoon Anatech DW9714 camera voice coil lens driver
maintainers:
  - Krzysztof Kozlowski <krzk@kernel.org>

description:
  DW9714 is a 10-bit DAC with current sink capability. It is intended for
  driving voice coil lenses in camera modules.
properties:
  compatible:
    const: dongwoon,dw9714

  reg:
    maxItems: 1

  powerdown-gpios:
    description:
      XSD pin for shutdown (active low)

  vcc-supply:
    description: VDD power supply

required:
  - compatible
  - reg

additionalProperties: false

examples:
  - |
    i2c {
        #address-cells = <1>;
        #size-cells = <0>;

        camera-lens@c {
            compatible = "dongwoon,dw9714";
            reg = <0x0c>;

            vcc-supply = <&reg_csi_1v8>;
        };
    };

View File

@@ -214,18 +214,29 @@ Link properties can be modified at runtime by calling
Pipelines and media streams
^^^^^^^^^^^^^^^^^^^^^^^^^^^
A media stream is a stream of pixels or metadata originating from one or more
source devices (such as sensors) and flowing through media entity pads
towards the final sinks. The stream can be modified on the route by the
devices (e.g. scaling or pixel format conversions), or it can be split into
multiple branches, or multiple branches can be merged.
A media pipeline is a set of media streams which are interdependent. This
interdependency can be caused by the hardware (e.g. configuration of a second
stream cannot be changed if the first stream has been enabled) or by the driver
due to the software design. Most commonly a media pipeline consists of a single
stream which does not branch.
When starting streaming, drivers must notify all entities in the pipeline to
prevent link states from being modified during streaming by calling
:c:func:`media_pipeline_start()`.
The function will mark all entities connected to the given entity through
enabled links, either directly or indirectly, as streaming.
The function will mark all the pads which are part of the pipeline as streaming.
The struct media_pipeline instance pointed to by
the pipe argument will be stored in every entity in the pipeline.
the pipe argument will be stored in every pad in the pipeline.
Drivers should embed the struct media_pipeline
in higher-level pipeline structures and can then access the
pipeline through the struct media_entity
pipeline through the struct media_pad
pipe field.
Calls to :c:func:`media_pipeline_start()` can be nested.
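A minimal sketch of the embedding pattern described above, assuming the
pad-based :c:func:`media_pipeline_start()` that this text documents (the
``my_*`` names are illustrative, not a real driver API)::

    struct my_pipeline {
        struct media_pipeline mp;   /* stored in every pad of the pipeline */
        /* driver-private pipeline state... */
    };

    static int my_start_streaming(struct my_pipeline *p, struct media_pad *pad)
    {
        /* marks all pads reachable through enabled links as streaming */
        return media_pipeline_start(pad, &p->mp);
    }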

View File

@@ -19,6 +19,8 @@ Supported devices:
Corsair HX1200i
Corsair HX1500i
Corsair RM550i
Corsair RM650i

View File

@@ -239,6 +239,7 @@ ignore define CEC_OP_FEAT_DEV_HAS_DECK_CONTROL
ignore define CEC_OP_FEAT_DEV_HAS_SET_AUDIO_RATE
ignore define CEC_OP_FEAT_DEV_SINK_HAS_ARC_TX
ignore define CEC_OP_FEAT_DEV_SOURCE_HAS_ARC_RX
ignore define CEC_OP_FEAT_DEV_HAS_SET_AUDIO_VOLUME_LEVEL
ignore define CEC_MSG_GIVE_FEATURES
@@ -487,6 +488,7 @@ ignore define CEC_OP_SYS_AUD_STATUS_ON
ignore define CEC_MSG_SYSTEM_AUDIO_MODE_REQUEST
ignore define CEC_MSG_SYSTEM_AUDIO_MODE_STATUS
ignore define CEC_MSG_SET_AUDIO_VOLUME_LEVEL
ignore define CEC_OP_AUD_FMT_ID_CEA861
ignore define CEC_OP_AUD_FMT_ID_CEA861_CXT

View File

@@ -136,9 +136,9 @@ V4L2 functions
operates like the :c:func:`read()` function.
.. c:function:: void v4l2_mmap(void *start, size_t length, int prot, int flags, int fd, int64_t offset);
.. c:function:: void *v4l2_mmap(void *start, size_t length, int prot, int flags, int fd, int64_t offset);
operates like the :c:func:`munmap()` function.
operates like the :c:func:`mmap()` function.
.. c:function:: int v4l2_munmap(void *_start, size_t length);

View File

@@ -6284,7 +6284,7 @@ M: Sakari Ailus <sakari.ailus@linux.intel.com>
L: linux-media@vger.kernel.org
S: Maintained
T: git git://linuxtv.org/media_tree.git
F: Documentation/devicetree/bindings/media/i2c/dongwoon,dw9714.txt
F: Documentation/devicetree/bindings/media/i2c/dongwoon,dw9714.yaml
F: drivers/media/i2c/dw9714.c
DONGWOON DW9768 LENS VOICE COIL DRIVER
@@ -14713,6 +14713,12 @@ F: drivers/nvme/target/auth.c
F: drivers/nvme/target/fabrics-cmd-auth.c
F: include/linux/nvme-auth.h
NVM EXPRESS HARDWARE MONITORING SUPPORT
M: Guenter Roeck <linux@roeck-us.net>
L: linux-nvme@lists.infradead.org
S: Supported
F: drivers/nvme/host/hwmon.c
NVM EXPRESS FC TRANSPORT DRIVERS
M: James Smart <james.smart@broadcom.com>
L: linux-nvme@lists.infradead.org
@@ -15843,7 +15849,7 @@ F: Documentation/devicetree/bindings/pci/snps,dw-pcie-ep.yaml
F: drivers/pci/controller/dwc/*designware*
PCI DRIVER FOR TI DRA7XX/J721E
M: Kishon Vijay Abraham I <kishon@ti.com>
M: Vignesh Raghavendra <vigneshr@ti.com>
L: linux-omap@vger.kernel.org
L: linux-pci@vger.kernel.org
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
@@ -15860,10 +15866,10 @@ F: Documentation/devicetree/bindings/pci/v3-v360epc-pci.txt
F: drivers/pci/controller/pci-v3-semi.c
PCI ENDPOINT SUBSYSTEM
M: Kishon Vijay Abraham I <kishon@ti.com>
M: Lorenzo Pieralisi <lpieralisi@kernel.org>
R: Krzysztof Wilczyński <kw@linux.com>
R: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
R: Kishon Vijay Abraham I <kishon@kernel.org>
L: linux-pci@vger.kernel.org
S: Supported
Q: https://patchwork.kernel.org/project/linux-pci/list/
@@ -18135,7 +18141,6 @@ L: linux-media@vger.kernel.org
S: Maintained
T: git git://linuxtv.org/media_tree.git
F: drivers/staging/media/deprecated/saa7146/
F: include/media/drv-intf/saa7146*
SAFESETID SECURITY MODULE
M: Micah Morton <mortonm@chromium.org>
@@ -22765,7 +22770,7 @@ S: Maintained
W: http://mjpeg.sourceforge.net/driver-zoran/
Q: https://patchwork.linuxtv.org/project/linux-media/list/
F: Documentation/driver-api/media/drivers/zoran.rst
F: drivers/staging/media/zoran/
F: drivers/media/pci/zoran/
ZRAM COMPRESSED RAM BLOCK DEVICE DRIVER
M: Minchan Kim <minchan@kernel.org>

View File

@@ -2,7 +2,7 @@
VERSION = 6
PATCHLEVEL = 1
SUBLEVEL = 0
EXTRAVERSION = -rc1
EXTRAVERSION = -rc2
NAME = Hurr durr I'ma ninja sloth
# *DOCUMENTATION*

View File

@@ -13,6 +13,18 @@
#define KVM_PGTABLE_MAX_LEVELS 4U
/*
* The largest supported block sizes for KVM (no 52-bit PA support):
* - 4K (level 1): 1GB
* - 16K (level 2): 32MB
* - 64K (level 2): 512MB
*/
#ifdef CONFIG_ARM64_4K_PAGES
#define KVM_PGTABLE_MIN_BLOCK_LEVEL 1U
#else
#define KVM_PGTABLE_MIN_BLOCK_LEVEL 2U
#endif
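/*
 * A worked check of the sizes above, using
 * ARM64_HW_PGTABLE_LEVEL_SHIFT(n) = (PAGE_SHIFT - 3) * (4 - n) + 3:
 *   4K  (PAGE_SHIFT = 12), level 1:  9 * 3 + 3 = 30  ->  2^30 = 1GB
 *   16K (PAGE_SHIFT = 14), level 2: 11 * 2 + 3 = 25  ->  2^25 = 32MB
 *   64K (PAGE_SHIFT = 16), level 2: 13 * 2 + 3 = 29  ->  2^29 = 512MB
 */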
static inline u64 kvm_get_parange(u64 mmfr0)
{
u64 parange = cpuid_feature_extract_unsigned_field(mmfr0,
@@ -58,11 +70,7 @@ static inline u64 kvm_granule_size(u32 level)
static inline bool kvm_level_supports_block_mapping(u32 level)
{
/*
* Reject invalid block mappings and don't bother with 4TB mappings for
* 52-bit PAs.
*/
return !(level == 0 || (PAGE_SIZE != SZ_4K && level == 1));
return level >= KVM_PGTABLE_MIN_BLOCK_LEVEL;
}
/**

View File

@@ -10,13 +10,6 @@
#include <linux/pgtable.h>
/*
* PGDIR_SHIFT determines the size a top-level page table entry can map
* and depends on the number of levels in the page table. Compute the
* PGDIR_SHIFT for a given number of levels.
*/
#define pt_levels_pgdir_shift(lvls) ARM64_HW_PGTABLE_LEVEL_SHIFT(4 - (lvls))
/*
* The hardware supports concatenation of up to 16 tables at stage2 entry
* level and we use the feature whenever possible, which means we resolve 4
@@ -30,11 +23,6 @@
#define stage2_pgtable_levels(ipa) ARM64_HW_PGTABLE_LEVELS((ipa) - 4)
#define kvm_stage2_levels(kvm) VTCR_EL2_LVLS(kvm->arch.vtcr)
/* stage2_pgdir_shift() is the size mapped by top-level stage2 entry for the VM */
#define stage2_pgdir_shift(kvm) pt_levels_pgdir_shift(kvm_stage2_levels(kvm))
#define stage2_pgdir_size(kvm) (1ULL << stage2_pgdir_shift(kvm))
#define stage2_pgdir_mask(kvm) ~(stage2_pgdir_size(kvm) - 1)
/*
* kvm_mmu_cache_min_pages() is the number of pages required to install
* a stage-2 translation. We pre-allocate the entry level page table at
@@ -42,12 +30,4 @@
*/
#define kvm_mmu_cache_min_pages(kvm) (kvm_stage2_levels(kvm) - 1)
static inline phys_addr_t
stage2_pgd_addr_end(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
{
phys_addr_t boundary = (addr + stage2_pgdir_size(kvm)) & stage2_pgdir_mask(kvm);
return (boundary - 1 < end - 1) ? boundary : end;
}
#endif /* __ARM64_S2_PGTABLE_H_ */

View File

@@ -7,6 +7,7 @@
*/
#include <linux/linkage.h>
#include <linux/cfi_types.h>
#include <asm/asm-offsets.h>
#include <asm/assembler.h>
#include <asm/ftrace.h>
@@ -294,10 +295,14 @@ SYM_FUNC_END(ftrace_graph_caller)
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
#endif /* CONFIG_DYNAMIC_FTRACE_WITH_REGS */
SYM_FUNC_START(ftrace_stub)
SYM_TYPED_FUNC_START(ftrace_stub)
ret
SYM_FUNC_END(ftrace_stub)
SYM_TYPED_FUNC_START(ftrace_stub_graph)
ret
SYM_FUNC_END(ftrace_stub_graph)
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
/*
* void return_to_handler(void)

View File

@@ -5,9 +5,6 @@
incdir := $(srctree)/$(src)/include
subdir-asflags-y := -I$(incdir)
subdir-ccflags-y := -I$(incdir) \
-fno-stack-protector \
-DDISABLE_BRANCH_PROFILING \
$(DISABLE_STACKLEAK_PLUGIN)
subdir-ccflags-y := -I$(incdir)
obj-$(CONFIG_KVM) += vhe/ nvhe/ pgtable.o

View File

@@ -10,6 +10,9 @@ asflags-y := -D__KVM_NVHE_HYPERVISOR__ -D__DISABLE_EXPORTS
# will explode instantly (Words of Marc Zyngier). So introduce a generic flag
# __DISABLE_TRACE_MMIO__ to disable MMIO tracing for nVHE KVM.
ccflags-y := -D__KVM_NVHE_HYPERVISOR__ -D__DISABLE_EXPORTS -D__DISABLE_TRACE_MMIO__
ccflags-y += -fno-stack-protector \
-DDISABLE_BRANCH_PROFILING \
$(DISABLE_STACKLEAK_PLUGIN)
hostprogs := gen-hyprel
HOST_EXTRACFLAGS += -I$(objtree)/include
@@ -89,6 +92,10 @@ quiet_cmd_hypcopy = HYPCOPY $@
# Remove ftrace, Shadow Call Stack, and CFI CFLAGS.
# This is equivalent to the 'notrace', '__noscs', and '__nocfi' annotations.
KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_FTRACE) $(CC_FLAGS_SCS) $(CC_FLAGS_CFI), $(KBUILD_CFLAGS))
# Starting from 13.0.0 llvm emits SHT_REL section '.llvm.call-graph-profile'
# when profile optimization is applied. gen-hyprel does not support SHT_REL and
# causes a build failure. Remove profile optimization flags.
KBUILD_CFLAGS := $(filter-out -fprofile-sample-use=% -fprofile-use=%, $(KBUILD_CFLAGS))
# KVM nVHE code is run at a different exception code with a different map, so
# compiler instrumentation that inserts callbacks or checks into the code may

View File

@@ -31,6 +31,13 @@ static phys_addr_t hyp_idmap_vector;
static unsigned long io_map_base;
static phys_addr_t stage2_range_addr_end(phys_addr_t addr, phys_addr_t end)
{
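	/* step in chunks of the largest supported block size, e.g. 1GB with 4K pages (KVM_PGTABLE_MIN_BLOCK_LEVEL == 1) */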
phys_addr_t size = kvm_granule_size(KVM_PGTABLE_MIN_BLOCK_LEVEL);
phys_addr_t boundary = ALIGN_DOWN(addr + size, size);
return (boundary - 1 < end - 1) ? boundary : end;
}
/*
* Release kvm_mmu_lock periodically if the memory region is large. Otherwise,
@@ -52,7 +59,7 @@ static int stage2_apply_range(struct kvm *kvm, phys_addr_t addr,
if (!pgt)
return -EINVAL;
next = stage2_pgd_addr_end(kvm, addr, end);
next = stage2_range_addr_end(addr, end);
ret = fn(pgt, addr, next - addr);
if (ret)
break;

View File

@@ -2149,7 +2149,7 @@ static int scan_its_table(struct vgic_its *its, gpa_t base, int size, u32 esz,
memset(entry, 0, esz);
while (len > 0) {
while (true) {
int next_offset;
size_t byte_offset;
@@ -2162,6 +2162,9 @@ static int scan_its_table(struct vgic_its *its, gpa_t base, int size, u32 esz,
return next_offset;
byte_offset = next_offset * esz;
if (byte_offset >= len)
break;
id += next_offset;
gpa += byte_offset;
len -= byte_offset;

View File

@@ -42,16 +42,8 @@ void flush_icache_mm(struct mm_struct *mm, bool local);
#endif /* CONFIG_SMP */
/*
* The T-Head CMO errata internally probe the CBOM block size, but otherwise
* don't depend on Zicbom.
*/
extern unsigned int riscv_cbom_block_size;
#ifdef CONFIG_RISCV_ISA_ZICBOM
void riscv_init_cbom_blocksize(void);
#else
static inline void riscv_init_cbom_blocksize(void) { }
#endif
#ifdef CONFIG_RISCV_DMA_NONCOHERENT
void riscv_noncoherent_supported(void);

View File

@@ -45,6 +45,7 @@ int kvm_riscv_vcpu_timer_deinit(struct kvm_vcpu *vcpu);
int kvm_riscv_vcpu_timer_reset(struct kvm_vcpu *vcpu);
void kvm_riscv_vcpu_timer_restore(struct kvm_vcpu *vcpu);
void kvm_riscv_guest_timer_init(struct kvm *kvm);
void kvm_riscv_vcpu_timer_sync(struct kvm_vcpu *vcpu);
void kvm_riscv_vcpu_timer_save(struct kvm_vcpu *vcpu);
bool kvm_riscv_vcpu_timer_pending(struct kvm_vcpu *vcpu);

View File

@@ -708,6 +708,9 @@ void kvm_riscv_vcpu_sync_interrupts(struct kvm_vcpu *vcpu)
clear_bit(IRQ_VS_SOFT, &v->irqs_pending);
}
}
/* Sync-up timer CSRs */
kvm_riscv_vcpu_timer_sync(vcpu);
}
int kvm_riscv_vcpu_set_interrupt(struct kvm_vcpu *vcpu, unsigned int irq)

View File

@@ -320,6 +320,21 @@ void kvm_riscv_vcpu_timer_restore(struct kvm_vcpu *vcpu)
kvm_riscv_vcpu_timer_unblocking(vcpu);
}
void kvm_riscv_vcpu_timer_sync(struct kvm_vcpu *vcpu)
{
struct kvm_vcpu_timer *t = &vcpu->arch.timer;
if (!t->sstc_enabled)
return;
#if defined(CONFIG_32BIT)
t->next_cycles = csr_read(CSR_VSTIMECMP);
t->next_cycles |= (u64)csr_read(CSR_VSTIMECMPH) << 32;
#else
t->next_cycles = csr_read(CSR_VSTIMECMP);
#endif
}
void kvm_riscv_vcpu_timer_save(struct kvm_vcpu *vcpu)
{
struct kvm_vcpu_timer *t = &vcpu->arch.timer;
@@ -327,13 +342,11 @@ void kvm_riscv_vcpu_timer_save(struct kvm_vcpu *vcpu)
if (!t->sstc_enabled)
return;
t = &vcpu->arch.timer;
#if defined(CONFIG_32BIT)
t->next_cycles = csr_read(CSR_VSTIMECMP);
t->next_cycles |= (u64)csr_read(CSR_VSTIMECMPH) << 32;
#else
t->next_cycles = csr_read(CSR_VSTIMECMP);
#endif
/*
* The vstimecmp CSRs are saved by kvm_riscv_vcpu_timer_sync()
* upon every VM exit so no need to save here.
*/
/* timer should be enabled for the remaining operations */
if (unlikely(!t->init_done))
return;

View File

@@ -3,6 +3,7 @@
* Copyright (C) 2017 SiFive
*/
#include <linux/of.h>
#include <asm/cacheflush.h>
#ifdef CONFIG_SMP
@@ -86,3 +87,40 @@ void flush_icache_pte(pte_t pte)
flush_icache_all();
}
#endif /* CONFIG_MMU */
unsigned int riscv_cbom_block_size;
EXPORT_SYMBOL_GPL(riscv_cbom_block_size);
void riscv_init_cbom_blocksize(void)
{
struct device_node *node;
unsigned long cbom_hartid;
u32 val, probed_block_size;
int ret;
probed_block_size = 0;
for_each_of_cpu_node(node) {
unsigned long hartid;
ret = riscv_of_processor_hartid(node, &hartid);
if (ret)
continue;
/* set block-size for cbom extension if available */
ret = of_property_read_u32(node, "riscv,cbom-block-size", &val);
if (ret)
continue;
if (!probed_block_size) {
probed_block_size = val;
cbom_hartid = hartid;
} else {
if (probed_block_size != val)
pr_warn("cbom-block-size mismatched between harts %lu and %lu\n",
cbom_hartid, hartid);
}
}
if (probed_block_size)
riscv_cbom_block_size = probed_block_size;
}

View File

@@ -8,13 +8,8 @@
#include <linux/dma-direct.h>
#include <linux/dma-map-ops.h>
#include <linux/mm.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <asm/cacheflush.h>
unsigned int riscv_cbom_block_size;
EXPORT_SYMBOL_GPL(riscv_cbom_block_size);
static bool noncoherent_supported;
void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
@@ -77,42 +72,6 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
dev->dma_coherent = coherent;
}
#ifdef CONFIG_RISCV_ISA_ZICBOM
void riscv_init_cbom_blocksize(void)
{
struct device_node *node;
unsigned long cbom_hartid;
u32 val, probed_block_size;
int ret;
probed_block_size = 0;
for_each_of_cpu_node(node) {
unsigned long hartid;
ret = riscv_of_processor_hartid(node, &hartid);
if (ret)
continue;
/* set block-size for cbom extension if available */
ret = of_property_read_u32(node, "riscv,cbom-block-size", &val);
if (ret)
continue;
if (!probed_block_size) {
probed_block_size = val;
cbom_hartid = hartid;
} else {
if (probed_block_size != val)
pr_warn("cbom-block-size mismatched between harts %lu and %lu\n",
cbom_hartid, hartid);
}
}
if (probed_block_size)
riscv_cbom_block_size = probed_block_size;
}
#endif
void riscv_noncoherent_supported(void)
{
WARN(!riscv_cbom_block_size,

View File

@@ -1973,7 +1973,6 @@ config EFI
config EFI_STUB
bool "EFI stub support"
depends on EFI
depends on $(cc-option,-mabi=ms) || X86_32
select RELOCATABLE
help
This kernel feature allows a bzImage to be loaded directly

View File

@@ -1596,7 +1596,7 @@ void __init intel_pmu_arch_lbr_init(void)
return;
clear_arch_lbr:
clear_cpu_cap(&boot_cpu_data, X86_FEATURE_ARCH_LBR);
setup_clear_cpu_cap(X86_FEATURE_ARCH_LBR);
}
/**

View File

@@ -25,8 +25,10 @@ arch_rmrr_sanity_check(struct acpi_dmar_reserved_memory *rmrr)
{
u64 start = rmrr->base_address;
u64 end = rmrr->end_address + 1;
int entry_type;
if (e820__mapped_all(start, end, E820_TYPE_RESERVED))
entry_type = e820__get_entry_type(start, end);
if (entry_type == E820_TYPE_RESERVED || entry_type == E820_TYPE_NVS)
return 0;
pr_err(FW_BUG "No firmware reserved region can cover this RMRR [%#018Lx-%#018Lx], contact BIOS vendor for fixes\n",

View File

@@ -440,7 +440,13 @@ apply_microcode_early_amd(u32 cpuid_1_eax, void *ucode, size_t size, bool save_p
return ret;
native_rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
if (rev >= mc->hdr.patch_id)
/*
* Allow application of the same revision to pick up SMT-specific
* changes even if the revision of the other SMT thread is already
* up-to-date.
*/
if (rev > mc->hdr.patch_id)
return ret;
if (!__apply_microcode_amd(mc)) {
@@ -528,8 +534,12 @@ void load_ucode_amd_ap(unsigned int cpuid_1_eax)
native_rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
/* Check whether we have saved a new patch already: */
if (*new_rev && rev < mc->hdr.patch_id) {
/*
* Check whether a new patch has been saved already. Also, allow application of
* the same revision in order to pick up SMT-thread-specific configuration even
* if the sibling SMT thread already has an up-to-date revision.
*/
if (*new_rev && rev <= mc->hdr.patch_id) {
if (!__apply_microcode_amd(mc)) {
*new_rev = mc->hdr.patch_id;
return;

View File

@@ -66,9 +66,6 @@ struct rdt_hw_resource rdt_resources_all[] = {
.rid = RDT_RESOURCE_L3,
.name = "L3",
.cache_level = 3,
.cache = {
.min_cbm_bits = 1,
},
.domains = domain_init(RDT_RESOURCE_L3),
.parse_ctrlval = parse_cbm,
.format_str = "%d=%0*x",
@@ -83,9 +80,6 @@ struct rdt_hw_resource rdt_resources_all[] = {
.rid = RDT_RESOURCE_L2,
.name = "L2",
.cache_level = 2,
.cache = {
.min_cbm_bits = 1,
},
.domains = domain_init(RDT_RESOURCE_L2),
.parse_ctrlval = parse_cbm,
.format_str = "%d=%0*x",
@@ -836,6 +830,7 @@ static __init void rdt_init_res_defs_intel(void)
r->cache.arch_has_sparse_bitmaps = false;
r->cache.arch_has_empty_bitmaps = false;
r->cache.arch_has_per_cpu_cfg = false;
r->cache.min_cbm_bits = 1;
} else if (r->rid == RDT_RESOURCE_MBA) {
hw_res->msr_base = MSR_IA32_MBA_THRTL_BASE;
hw_res->msr_update = mba_wrmsr_intel;
@@ -856,6 +851,7 @@ static __init void rdt_init_res_defs_amd(void)
r->cache.arch_has_sparse_bitmaps = true;
r->cache.arch_has_empty_bitmaps = true;
r->cache.arch_has_per_cpu_cfg = true;
r->cache.min_cbm_bits = 0;
} else if (r->rid == RDT_RESOURCE_MBA) {
hw_res->msr_base = MSR_IA32_MBA_BW_BASE;
hw_res->msr_update = mba_wrmsr_amd;

View File

@@ -96,6 +96,7 @@ int detect_extended_topology(struct cpuinfo_x86 *c)
unsigned int ht_mask_width, core_plus_mask_width, die_plus_mask_width;
unsigned int core_select_mask, core_level_siblings;
unsigned int die_select_mask, die_level_siblings;
unsigned int pkg_mask_width;
bool die_level_present = false;
int leaf;
@@ -111,10 +112,10 @@ int detect_extended_topology(struct cpuinfo_x86 *c)
core_level_siblings = smp_num_siblings = LEVEL_MAX_SIBLINGS(ebx);
core_plus_mask_width = ht_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
die_level_siblings = LEVEL_MAX_SIBLINGS(ebx);
die_plus_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
pkg_mask_width = die_plus_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
sub_index = 1;
do {
while (true) {
cpuid_count(leaf, sub_index, &eax, &ebx, &ecx, &edx);
/*
@@ -132,10 +133,15 @@ int detect_extended_topology(struct cpuinfo_x86 *c)
die_plus_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
}
sub_index++;
} while (LEAFB_SUBTYPE(ecx) != INVALID_TYPE);
if (LEAFB_SUBTYPE(ecx) != INVALID_TYPE)
pkg_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
else
break;
core_select_mask = (~(-1 << core_plus_mask_width)) >> ht_mask_width;
sub_index++;
}
core_select_mask = (~(-1 << pkg_mask_width)) >> ht_mask_width;
die_select_mask = (~(-1 << die_plus_mask_width)) >>
core_plus_mask_width;
@@ -148,7 +154,7 @@ int detect_extended_topology(struct cpuinfo_x86 *c)
}
c->phys_proc_id = apic->phys_pkg_id(c->initial_apicid,
die_plus_mask_width);
pkg_mask_width);
/*
* Reinit the apicid, now that we have extended initial_apicid.
*/

View File

@@ -210,13 +210,6 @@ static void __init fpu__init_system_xstate_size_legacy(void)
fpstate_reset(&current->thread.fpu);
}
static void __init fpu__init_init_fpstate(void)
{
/* Bring init_fpstate size and features up to date */
init_fpstate.size = fpu_kernel_cfg.max_size;
init_fpstate.xfeatures = fpu_kernel_cfg.max_features;
}
/*
* Called on the boot CPU once per system bootup, to set up the initial
* FPU state that is later cloned into all processes:
@@ -236,5 +229,4 @@ void __init fpu__init_system(struct cpuinfo_x86 *c)
fpu__init_system_xstate_size_legacy();
fpu__init_system_xstate(fpu_kernel_cfg.max_size);
fpu__init_task_struct_size();
fpu__init_init_fpstate();
}

View File

@@ -360,7 +360,7 @@ static void __init setup_init_fpu_buf(void)
print_xstate_features();
xstate_init_xcomp_bv(&init_fpstate.regs.xsave, fpu_kernel_cfg.max_features);
xstate_init_xcomp_bv(&init_fpstate.regs.xsave, init_fpstate.xfeatures);
/*
* Init all the features state with header.xfeatures being 0x0
@@ -678,20 +678,6 @@ static unsigned int __init get_xsave_size_user(void)
return ebx;
}
/*
* Will the runtime-enumerated 'xstate_size' fit in the init
* task's statically-allocated buffer?
*/
static bool __init is_supported_xstate_size(unsigned int test_xstate_size)
{
if (test_xstate_size <= sizeof(init_fpstate.regs))
return true;
pr_warn("x86/fpu: xstate buffer too small (%zu < %d), disabling xsave\n",
sizeof(init_fpstate.regs), test_xstate_size);
return false;
}
static int __init init_xstate_size(void)
{
/* Recompute the context size for enabled features: */
@@ -717,10 +703,6 @@ static int __init init_xstate_size(void)
kernel_default_size =
xstate_calculate_size(fpu_kernel_cfg.default_features, compacted);
/* Ensure we have the space to store all default enabled features. */
if (!is_supported_xstate_size(kernel_default_size))
return -EINVAL;
if (!paranoid_xstate_size_valid(kernel_size))
return -EINVAL;
@@ -875,6 +857,19 @@ void __init fpu__init_system_xstate(unsigned int legacy_size)
update_regset_xstate_info(fpu_user_cfg.max_size,
fpu_user_cfg.max_features);
/*
* init_fpstate excludes dynamic states as they are large but init
* state is zero.
*/
init_fpstate.size = fpu_kernel_cfg.default_size;
init_fpstate.xfeatures = fpu_kernel_cfg.default_features;
if (init_fpstate.size > sizeof(init_fpstate.regs)) {
pr_warn("x86/fpu: init_fpstate buffer too small (%zu < %d), disabling XSAVE\n",
sizeof(init_fpstate.regs), init_fpstate.size);
goto out_disable;
}
setup_init_fpu_buf();
/*
@@ -1130,6 +1125,15 @@ void __copy_xstate_to_uabi_buf(struct membuf to, struct fpstate *fpstate,
*/
mask = fpstate->user_xfeatures;
/*
* Dynamic features are not present in init_fpstate. When they are
* in an all zeros init state, remove those from 'mask' to zero
* those features in the user buffer instead of retrieving them
* from init_fpstate.
*/
if (fpu_state_size_dynamic())
mask &= (header.xfeatures | xinit->header.xcomp_bv);
for_each_extended_xfeature(i, mask) {
/*
* If there was a feature or alignment gap, zero the space

View File

@@ -4,6 +4,7 @@
*/
#include <linux/linkage.h>
#include <linux/cfi_types.h>
#include <asm/ptrace.h>
#include <asm/ftrace.h>
#include <asm/export.h>
@@ -129,6 +130,14 @@
.endm
SYM_TYPED_FUNC_START(ftrace_stub)
RET
SYM_FUNC_END(ftrace_stub)
SYM_TYPED_FUNC_START(ftrace_stub_graph)
RET
SYM_FUNC_END(ftrace_stub_graph)
#ifdef CONFIG_DYNAMIC_FTRACE
SYM_FUNC_START(__fentry__)
@@ -172,21 +181,10 @@ SYM_INNER_LABEL(ftrace_call, SYM_L_GLOBAL)
*/
SYM_INNER_LABEL(ftrace_caller_end, SYM_L_GLOBAL)
ANNOTATE_NOENDBR
jmp ftrace_epilogue
RET
SYM_FUNC_END(ftrace_caller);
STACK_FRAME_NON_STANDARD_FP(ftrace_caller)
SYM_FUNC_START(ftrace_epilogue)
/*
* This is weak to keep gas from relaxing the jumps.
*/
SYM_INNER_LABEL_ALIGN(ftrace_stub, SYM_L_WEAK)
UNWIND_HINT_FUNC
ENDBR
RET
SYM_FUNC_END(ftrace_epilogue)
SYM_FUNC_START(ftrace_regs_caller)
/* Save the current flags before any operations that can change them */
pushfq
@@ -262,14 +260,11 @@ SYM_INNER_LABEL(ftrace_regs_caller_jmp, SYM_L_GLOBAL)
popfq
/*
* As this jmp to ftrace_epilogue can be a short jump
* it must not be copied into the trampoline.
* The trampoline will add the code to jump
* to the return.
* The trampoline will add the return.
*/
SYM_INNER_LABEL(ftrace_regs_caller_end, SYM_L_GLOBAL)
ANNOTATE_NOENDBR
jmp ftrace_epilogue
RET
/* Swap the flags with orig_rax */
1: movq MCOUNT_REG_SIZE(%rsp), %rdi
@@ -280,7 +275,7 @@ SYM_INNER_LABEL(ftrace_regs_caller_end, SYM_L_GLOBAL)
/* Restore flags */
popfq
UNWIND_HINT_FUNC
jmp ftrace_epilogue
RET
SYM_FUNC_END(ftrace_regs_caller)
STACK_FRAME_NON_STANDARD_FP(ftrace_regs_caller)
@@ -291,9 +286,6 @@ STACK_FRAME_NON_STANDARD_FP(ftrace_regs_caller)
SYM_FUNC_START(__fentry__)
cmpq $ftrace_stub, ftrace_trace_function
jnz trace
SYM_INNER_LABEL(ftrace_stub, SYM_L_GLOBAL)
ENDBR
RET
trace:

View File

@@ -713,7 +713,7 @@ void __unwind_start(struct unwind_state *state, struct task_struct *task,
/* Otherwise, skip ahead to the user-specified starting frame: */
while (!unwind_done(state) &&
(!on_stack(&state->stack_info, first_frame, sizeof(long)) ||
state->sp < (unsigned long)first_frame))
state->sp <= (unsigned long)first_frame))
unwind_next_frame(state);
return;

View File

@@ -6442,26 +6442,22 @@ static int kvm_add_msr_filter(struct kvm_x86_msr_filter *msr_filter,
return 0;
}
static int kvm_vm_ioctl_set_msr_filter(struct kvm *kvm, void __user *argp)
static int kvm_vm_ioctl_set_msr_filter(struct kvm *kvm,
struct kvm_msr_filter *filter)
{
struct kvm_msr_filter __user *user_msr_filter = argp;
struct kvm_x86_msr_filter *new_filter, *old_filter;
struct kvm_msr_filter filter;
bool default_allow;
bool empty = true;
int r = 0;
u32 i;
if (copy_from_user(&filter, user_msr_filter, sizeof(filter)))
return -EFAULT;
if (filter.flags & ~KVM_MSR_FILTER_DEFAULT_DENY)
if (filter->flags & ~KVM_MSR_FILTER_DEFAULT_DENY)
return -EINVAL;
for (i = 0; i < ARRAY_SIZE(filter.ranges); i++)
empty &= !filter.ranges[i].nmsrs;
for (i = 0; i < ARRAY_SIZE(filter->ranges); i++)
empty &= !filter->ranges[i].nmsrs;
default_allow = !(filter.flags & KVM_MSR_FILTER_DEFAULT_DENY);
default_allow = !(filter->flags & KVM_MSR_FILTER_DEFAULT_DENY);
if (empty && !default_allow)
return -EINVAL;
@@ -6469,8 +6465,8 @@ static int kvm_vm_ioctl_set_msr_filter(struct kvm *kvm, void __user *argp)
if (!new_filter)
return -ENOMEM;
for (i = 0; i < ARRAY_SIZE(filter.ranges); i++) {
r = kvm_add_msr_filter(new_filter, &filter.ranges[i]);
for (i = 0; i < ARRAY_SIZE(filter->ranges); i++) {
r = kvm_add_msr_filter(new_filter, &filter->ranges[i]);
if (r) {
kvm_free_msr_filter(new_filter);
return r;
@@ -6493,6 +6489,62 @@ static int kvm_vm_ioctl_set_msr_filter(struct kvm *kvm, void __user *argp)
return 0;
}
#ifdef CONFIG_KVM_COMPAT
/* for KVM_X86_SET_MSR_FILTER */
struct kvm_msr_filter_range_compat {
__u32 flags;
__u32 nmsrs;
__u32 base;
__u32 bitmap;
};
struct kvm_msr_filter_compat {
__u32 flags;
struct kvm_msr_filter_range_compat ranges[KVM_MSR_FILTER_MAX_RANGES];
};
#define KVM_X86_SET_MSR_FILTER_COMPAT _IOW(KVMIO, 0xc6, struct kvm_msr_filter_compat)
long kvm_arch_vm_compat_ioctl(struct file *filp, unsigned int ioctl,
unsigned long arg)
{
void __user *argp = (void __user *)arg;
struct kvm *kvm = filp->private_data;
long r = -ENOTTY;
switch (ioctl) {
case KVM_X86_SET_MSR_FILTER_COMPAT: {
struct kvm_msr_filter __user *user_msr_filter = argp;
struct kvm_msr_filter_compat filter_compat;
struct kvm_msr_filter filter;
int i;
if (copy_from_user(&filter_compat, user_msr_filter,
sizeof(filter_compat)))
return -EFAULT;
filter.flags = filter_compat.flags;
for (i = 0; i < ARRAY_SIZE(filter.ranges); i++) {
struct kvm_msr_filter_range_compat *cr;
cr = &filter_compat.ranges[i];
filter.ranges[i] = (struct kvm_msr_filter_range) {
.flags = cr->flags,
.nmsrs = cr->nmsrs,
.base = cr->base,
.bitmap = (__u8 *)(ulong)cr->bitmap,
};
}
r = kvm_vm_ioctl_set_msr_filter(kvm, &filter);
break;
}
}
return r;
}
#endif
#ifdef CONFIG_HAVE_KVM_PM_NOTIFIER
static int kvm_arch_suspend_notifier(struct kvm *kvm)
{
@@ -6915,9 +6967,16 @@ long kvm_arch_vm_ioctl(struct file *filp,
case KVM_SET_PMU_EVENT_FILTER:
r = kvm_vm_ioctl_set_pmu_event_filter(kvm, argp);
break;
case KVM_X86_SET_MSR_FILTER:
r = kvm_vm_ioctl_set_msr_filter(kvm, argp);
case KVM_X86_SET_MSR_FILTER: {
struct kvm_msr_filter __user *user_msr_filter = argp;
struct kvm_msr_filter filter;
if (copy_from_user(&filter, user_msr_filter, sizeof(filter)))
return -EFAULT;
r = kvm_vm_ioctl_set_msr_filter(kvm, &filter);
break;
}
default:
r = -ENOTTY;
}

View File

@@ -369,12 +369,8 @@ struct bfq_queue {
unsigned long split_time; /* time of last split */
unsigned long first_IO_time; /* time of first I/O for this queue */
unsigned long creation_time; /* when this queue is created */
/* max service rate measured so far */
u32 max_service_rate;
/*
* Pointer to the waker queue for this queue, i.e., to the
* queue Q such that this queue happens to get new I/O right

View File

@@ -741,7 +741,7 @@ void bio_put(struct bio *bio)
return;
}
if (bio->bi_opf & REQ_ALLOC_CACHE) {
if ((bio->bi_opf & REQ_ALLOC_CACHE) && !WARN_ON_ONCE(in_interrupt())) {
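		/* the per-CPU bio cache is not IRQ safe; only use it from process context */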
struct bio_alloc_cache *cache;
bio_uninit(bio);

View File

@@ -3112,8 +3112,11 @@ static void blk_mq_clear_rq_mapping(struct blk_mq_tags *drv_tags,
struct page *page;
unsigned long flags;
/* There is no need to clear a driver tags own mapping */
if (drv_tags == tags)
/*
* There is no need to clear mapping if driver tags is not initialized
* or the mapping belongs to the driver tags.
*/
if (!drv_tags || drv_tags == tags)
return;
list_for_each_entry(page, &tags->page_list, lru) {

View File

@@ -12,6 +12,7 @@
#include <linux/ratelimit.h>
#include <linux/edac.h>
#include <linux/ras.h>
#include <acpi/ghes.h>
#include <asm/cpu.h>
#include <asm/mce.h>
@@ -138,8 +139,8 @@ static int extlog_print(struct notifier_block *nb, unsigned long val,
int cpu = mce->extcpu;
struct acpi_hest_generic_status *estatus, *tmp;
struct acpi_hest_generic_data *gdata;
const guid_t *fru_id = &guid_null;
char *fru_text = "";
const guid_t *fru_id;
char *fru_text;
guid_t *sec_type;
static u32 err_seq;
@@ -160,17 +161,23 @@ static int extlog_print(struct notifier_block *nb, unsigned long val,
/* log event via trace */
err_seq++;
gdata = (struct acpi_hest_generic_data *)(tmp + 1);
if (gdata->validation_bits & CPER_SEC_VALID_FRU_ID)
fru_id = (guid_t *)gdata->fru_id;
if (gdata->validation_bits & CPER_SEC_VALID_FRU_TEXT)
fru_text = gdata->fru_text;
sec_type = (guid_t *)gdata->section_type;
if (guid_equal(sec_type, &CPER_SEC_PLATFORM_MEM)) {
struct cper_sec_mem_err *mem = (void *)(gdata + 1);
if (gdata->error_data_length >= sizeof(*mem))
trace_extlog_mem_event(mem, err_seq, fru_id, fru_text,
(u8)gdata->error_severity);
apei_estatus_for_each_section(tmp, gdata) {
if (gdata->validation_bits & CPER_SEC_VALID_FRU_ID)
fru_id = (guid_t *)gdata->fru_id;
else
fru_id = &guid_null;
if (gdata->validation_bits & CPER_SEC_VALID_FRU_TEXT)
fru_text = gdata->fru_text;
else
fru_text = "";
sec_type = (guid_t *)gdata->section_type;
if (guid_equal(sec_type, &CPER_SEC_PLATFORM_MEM)) {
struct cper_sec_mem_err *mem = (void *)(gdata + 1);
if (gdata->error_data_length >= sizeof(*mem))
trace_extlog_mem_event(mem, err_seq, fru_id, fru_text,
(u8)gdata->error_severity);
}
}
out:

View File

@@ -163,7 +163,7 @@ static void ghes_unmap(void __iomem *vaddr, enum fixed_addresses fixmap_idx)
clear_fixmap(fixmap_idx);
}
int ghes_estatus_pool_init(int num_ghes)
int ghes_estatus_pool_init(unsigned int num_ghes)
{
unsigned long addr, len;
int rc;

View File

@@ -1142,7 +1142,8 @@ static void iort_iommu_msi_get_resv_regions(struct device *dev,
struct iommu_resv_region *region;
region = iommu_alloc_resv_region(base + SZ_64K, SZ_64K,
prot, IOMMU_RESV_MSI);
prot, IOMMU_RESV_MSI,
GFP_KERNEL);
if (region)
list_add_tail(&region->list, head);
}

View File

@@ -323,6 +323,7 @@ struct pci_dev *acpi_get_pci_dev(acpi_handle handle)
list_for_each_entry(pn, &adev->physical_node_list, node) {
if (dev_is_pci(pn->dev)) {
get_device(pn->dev);
pci_dev = to_pci_dev(pn->dev);
break;
}

View File

@@ -428,17 +428,31 @@ static const struct dmi_system_id asus_laptop[] = {
{ }
};
static const struct dmi_system_id lenovo_82ra[] = {
{
.ident = "LENOVO IdeaPad Flex 5 16ALC7",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
DMI_MATCH(DMI_PRODUCT_NAME, "82RA"),
},
},
{ }
};
struct irq_override_cmp {
const struct dmi_system_id *system;
unsigned char irq;
unsigned char triggering;
unsigned char polarity;
unsigned char shareable;
bool override;
};
static const struct irq_override_cmp skip_override_table[] = {
{ medion_laptop, 1, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0 },
{ asus_laptop, 1, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0 },
static const struct irq_override_cmp override_table[] = {
{ medion_laptop, 1, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0, false },
{ asus_laptop, 1, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0, false },
{ lenovo_82ra, 6, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0, true },
{ lenovo_82ra, 10, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0, true },
};
static bool acpi_dev_irq_override(u32 gsi, u8 triggering, u8 polarity,
@@ -446,6 +460,17 @@ static bool acpi_dev_irq_override(u32 gsi, u8 triggering, u8 polarity,
{
int i;
for (i = 0; i < ARRAY_SIZE(override_table); i++) {
const struct irq_override_cmp *entry = &override_table[i];
if (dmi_check_system(entry->system) &&
entry->irq == gsi &&
entry->triggering == triggering &&
entry->polarity == polarity &&
entry->shareable == shareable)
return entry->override;
}
#ifdef CONFIG_X86
/*
* IRQ override isn't needed on modern AMD Zen systems and
@@ -456,17 +481,6 @@ static bool acpi_dev_irq_override(u32 gsi, u8 triggering, u8 polarity,
return false;
#endif
for (i = 0; i < ARRAY_SIZE(skip_override_table); i++) {
const struct irq_override_cmp *entry = &skip_override_table[i];
if (dmi_check_system(entry->system) &&
entry->irq == gsi &&
entry->triggering == triggering &&
entry->polarity == polarity &&
entry->shareable == shareable)
return false;
}
return true;
}
@@ -498,8 +512,11 @@ static void acpi_dev_get_irqresource(struct resource *res, u32 gsi,
u8 pol = p ? ACPI_ACTIVE_LOW : ACPI_ACTIVE_HIGH;
if (triggering != trig || polarity != pol) {
pr_warn("ACPI: IRQ %d override to %s, %s\n", gsi,
t ? "level" : "edge", p ? "low" : "high");
pr_warn("ACPI: IRQ %d override to %s%s, %s%s\n", gsi,
t ? "level" : "edge",
trig == triggering ? "" : "(!)",
p ? "low" : "high",
pol == polarity ? "" : "(!)");
triggering = trig;
polarity = pol;
}

View File

@@ -1509,9 +1509,12 @@ int acpi_dma_get_range(struct device *dev, const struct bus_dma_region **map)
goto out;
}
*map = r;
list_for_each_entry(rentry, &list, node) {
if (rentry->res->start >= rentry->res->end) {
kfree(r);
kfree(*map);
*map = NULL;
ret = -EINVAL;
dev_dbg(dma_dev, "Invalid DMA regions configuration\n");
goto out;
@@ -1523,8 +1526,6 @@ int acpi_dma_get_range(struct device *dev, const struct bus_dma_region **map)
r->offset = rentry->offset;
r++;
}
*map = r;
}
out:
acpi_dev_free_resource_list(&list);

View File

@@ -30,11 +30,6 @@ static struct drbd_request *drbd_req_new(struct drbd_device *device, struct bio
return NULL;
memset(req, 0, sizeof(*req));
req->private_bio = bio_alloc_clone(device->ldev->backing_bdev, bio_src,
GFP_NOIO, &drbd_io_bio_set);
req->private_bio->bi_private = req;
req->private_bio->bi_end_io = drbd_request_endio;
req->rq_state = (bio_data_dir(bio_src) == WRITE ? RQ_WRITE : 0)
| (bio_op(bio_src) == REQ_OP_WRITE_ZEROES ? RQ_ZEROES : 0)
| (bio_op(bio_src) == REQ_OP_DISCARD ? RQ_UNMAP : 0);
@@ -1219,9 +1214,12 @@ drbd_request_prepare(struct drbd_device *device, struct bio *bio)
/* Update disk stats */
req->start_jif = bio_start_io_acct(req->master_bio);
if (!get_ldev(device)) {
bio_put(req->private_bio);
req->private_bio = NULL;
if (get_ldev(device)) {
req->private_bio = bio_alloc_clone(device->ldev->backing_bdev,
bio, GFP_NOIO,
&drbd_io_bio_set);
req->private_bio->bi_private = req;
req->private_bio->bi_end_io = drbd_request_endio;
}
/* process discards always from our submitter thread */

View File

@@ -124,7 +124,7 @@ struct ublk_queue {
bool force_abort;
unsigned short nr_io_ready; /* how many ios setup */
struct ublk_device *dev;
struct ublk_io ios[0];
struct ublk_io ios[];
};
#define UBLK_DAEMON_MONITOR_PERIOD (5 * HZ)

View File

@@ -222,10 +222,8 @@ static int dt_cpufreq_early_init(struct device *dev, int cpu)
if (reg_name[0]) {
priv->opp_token = dev_pm_opp_set_regulators(cpu_dev, reg_name);
if (priv->opp_token < 0) {
ret = priv->opp_token;
if (ret != -EPROBE_DEFER)
dev_err(cpu_dev, "failed to set regulators: %d\n",
ret);
ret = dev_err_probe(cpu_dev, priv->opp_token,
"failed to set regulators\n");
goto free_cpumask;
}
}
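For context: dev_err_probe(dev, err, fmt, ...) returns err and logs at error
severity only when err != -EPROBE_DEFER; probe deferral is logged at debug
level and recorded as the deferral reason, which matches the open-coded
check this hunk removes.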

View File

@@ -396,9 +396,7 @@ static int imx6q_cpufreq_probe(struct platform_device *pdev)
ret = imx6q_opp_check_speed_grading(cpu_dev);
}
if (ret) {
if (ret != -EPROBE_DEFER)
dev_err(cpu_dev, "failed to read ocotp: %d\n",
ret);
dev_err_probe(cpu_dev, ret, "failed to read ocotp\n");
goto out_free_opp;
}

View File

@@ -64,7 +64,7 @@ static struct platform_device *cpufreq_dt_pdev, *cpufreq_pdev;
static void get_krait_bin_format_a(struct device *cpu_dev,
int *speed, int *pvs, int *pvs_ver,
struct nvmem_cell *pvs_nvmem, u8 *buf)
u8 *buf)
{
u32 pte_efuse;
@@ -95,7 +95,7 @@ static void get_krait_bin_format_a(struct device *cpu_dev,
static void get_krait_bin_format_b(struct device *cpu_dev,
int *speed, int *pvs, int *pvs_ver,
struct nvmem_cell *pvs_nvmem, u8 *buf)
u8 *buf)
{
u32 pte_efuse, redundant_sel;
@@ -213,6 +213,7 @@ static int qcom_cpufreq_krait_name_version(struct device *cpu_dev,
int speed = 0, pvs = 0, pvs_ver = 0;
u8 *speedbin;
size_t len;
int ret = 0;
speedbin = nvmem_cell_read(speedbin_nvmem, &len);
@@ -222,15 +223,16 @@ static int qcom_cpufreq_krait_name_version(struct device *cpu_dev,
switch (len) {
case 4:
get_krait_bin_format_a(cpu_dev, &speed, &pvs, &pvs_ver,
speedbin_nvmem, speedbin);
speedbin);
break;
case 8:
get_krait_bin_format_b(cpu_dev, &speed, &pvs, &pvs_ver,
speedbin_nvmem, speedbin);
speedbin);
break;
default:
dev_err(cpu_dev, "Unable to read nvmem data. Defaulting to 0!\n");
return -ENODEV;
ret = -ENODEV;
goto len_error;
}
snprintf(*pvs_name, sizeof("speedXX-pvsXX-vXX"), "speed%d-pvs%d-v%d",
@@ -238,8 +240,9 @@ static int qcom_cpufreq_krait_name_version(struct device *cpu_dev,
drv->versions = (1 << speed);
len_error:
kfree(speedbin);
return 0;
return ret;
}
static const struct qcom_cpufreq_match_data match_data_kryo = {
@@ -262,7 +265,8 @@ static int qcom_cpufreq_probe(struct platform_device *pdev)
struct nvmem_cell *speedbin_nvmem;
struct device_node *np;
struct device *cpu_dev;
char *pvs_name = "speedXX-pvsXX-vXX";
char pvs_name_buffer[] = "speedXX-pvsXX-vXX";
char *pvs_name = pvs_name_buffer;
unsigned cpu;
const struct of_device_id *match;
int ret;
@@ -295,11 +299,8 @@ static int qcom_cpufreq_probe(struct platform_device *pdev)
if (drv->data->get_version) {
speedbin_nvmem = of_nvmem_cell_get(np, NULL);
if (IS_ERR(speedbin_nvmem)) {
if (PTR_ERR(speedbin_nvmem) != -EPROBE_DEFER)
dev_err(cpu_dev,
"Could not get nvmem cell: %ld\n",
PTR_ERR(speedbin_nvmem));
ret = PTR_ERR(speedbin_nvmem);
ret = dev_err_probe(cpu_dev, PTR_ERR(speedbin_nvmem),
"Could not get nvmem cell\n");
goto free_drv;
}

View File

@@ -56,12 +56,9 @@ static int sun50i_cpufreq_get_efuse(u32 *versions)
speedbin_nvmem = of_nvmem_cell_get(np, NULL);
of_node_put(np);
if (IS_ERR(speedbin_nvmem)) {
if (PTR_ERR(speedbin_nvmem) != -EPROBE_DEFER)
pr_err("Could not get nvmem cell: %ld\n",
PTR_ERR(speedbin_nvmem));
return PTR_ERR(speedbin_nvmem);
}
if (IS_ERR(speedbin_nvmem))
return dev_err_probe(cpu_dev, PTR_ERR(speedbin_nvmem),
"Could not get nvmem cell\n");
speedbin = nvmem_cell_read(speedbin_nvmem, &len);
nvmem_cell_put(speedbin_nvmem);

View File

@@ -589,6 +589,7 @@ static const struct of_device_id tegra194_cpufreq_of_match[] = {
{ .compatible = "nvidia,tegra239-ccplex-cluster", .data = &tegra239_cpufreq_soc },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, tegra194_cpufreq_of_match);
static struct platform_driver tegra194_ccplex_driver = {
.driver = {

View File

@@ -124,28 +124,6 @@ config EFI_ZBOOT
is supported by the encapsulated image. (The compression algorithm
used is described in the zboot image header)
config EFI_ZBOOT_SIGNED
def_bool y
depends on EFI_ZBOOT_SIGNING_CERT != ""
depends on EFI_ZBOOT_SIGNING_KEY != ""
config EFI_ZBOOT_SIGNING
bool "Sign the EFI decompressor for UEFI secure boot"
depends on EFI_ZBOOT
help
Use the 'sbsign' command line tool (which must exist on the host
path) to sign both the EFI decompressor PE/COFF image, as well as the
encapsulated PE/COFF image, which is subsequently compressed and
wrapped by the former image.
config EFI_ZBOOT_SIGNING_CERT
string "Certificate to use for signing the compressed EFI boot image"
depends on EFI_ZBOOT_SIGNING
config EFI_ZBOOT_SIGNING_KEY
string "Private key to use for signing the compressed EFI boot image"
depends on EFI_ZBOOT_SIGNING
config EFI_ARMSTUB_DTB_LOADER
bool "Enable the DTB loader"
depends on EFI_GENERIC_STUB && !RISCV && !LOONGARCH

View File

@@ -63,7 +63,7 @@ static bool __init efi_virtmap_init(void)
if (!(md->attribute & EFI_MEMORY_RUNTIME))
continue;
if (md->virt_addr == 0)
if (md->virt_addr == U64_MAX)
return false;
ret = efi_create_mapping(&efi_mm, md);

View File

@@ -271,6 +271,8 @@ static __init int efivar_ssdt_load(void)
acpi_status ret = acpi_load_table(data, NULL);
if (ret)
pr_err("failed to load table: %u\n", ret);
else
continue;
} else {
pr_err("failed to get var data: 0x%lx\n", status);
}

View File

@@ -20,22 +20,11 @@ zboot-size-len-y := 4
zboot-method-$(CONFIG_KERNEL_GZIP) := gzip
zboot-size-len-$(CONFIG_KERNEL_GZIP) := 0
quiet_cmd_sbsign = SBSIGN $@
cmd_sbsign = sbsign --out $@ $< \
--key $(CONFIG_EFI_ZBOOT_SIGNING_KEY) \
--cert $(CONFIG_EFI_ZBOOT_SIGNING_CERT)
$(obj)/$(EFI_ZBOOT_PAYLOAD).signed: $(obj)/$(EFI_ZBOOT_PAYLOAD) FORCE
$(call if_changed,sbsign)
ZBOOT_PAYLOAD-y := $(EFI_ZBOOT_PAYLOAD)
ZBOOT_PAYLOAD-$(CONFIG_EFI_ZBOOT_SIGNED) := $(EFI_ZBOOT_PAYLOAD).signed
$(obj)/vmlinuz: $(obj)/$(ZBOOT_PAYLOAD-y) FORCE
$(obj)/vmlinuz: $(obj)/$(EFI_ZBOOT_PAYLOAD) FORCE
$(call if_changed,$(zboot-method-y))
OBJCOPYFLAGS_vmlinuz.o := -I binary -O $(EFI_ZBOOT_BFD_TARGET) \
--rename-section .data=.gzdata,load,alloc,readonly,contents
--rename-section .data=.gzdata,load,alloc,readonly,contents
$(obj)/vmlinuz.o: $(obj)/vmlinuz FORCE
$(call if_changed,objcopy)
@@ -53,18 +42,8 @@ LDFLAGS_vmlinuz.efi.elf := -T $(srctree)/drivers/firmware/efi/libstub/zboot.lds
$(obj)/vmlinuz.efi.elf: $(obj)/vmlinuz.o $(ZBOOT_DEPS) FORCE
$(call if_changed,ld)
ZBOOT_EFI-y := vmlinuz.efi
ZBOOT_EFI-$(CONFIG_EFI_ZBOOT_SIGNED) := vmlinuz.efi.unsigned
OBJCOPYFLAGS_$(ZBOOT_EFI-y) := -O binary
$(obj)/$(ZBOOT_EFI-y): $(obj)/vmlinuz.efi.elf FORCE
OBJCOPYFLAGS_vmlinuz.efi := -O binary
$(obj)/vmlinuz.efi: $(obj)/vmlinuz.efi.elf FORCE
$(call if_changed,objcopy)
targets += zboot-header.o vmlinuz vmlinuz.o vmlinuz.efi.elf vmlinuz.efi
ifneq ($(CONFIG_EFI_ZBOOT_SIGNED),)
$(obj)/vmlinuz.efi: $(obj)/vmlinuz.efi.unsigned FORCE
$(call if_changed,sbsign)
endif
targets += $(EFI_ZBOOT_PAYLOAD).signed vmlinuz.efi.unsigned

View File

@@ -313,16 +313,16 @@ efi_status_t allocate_new_fdt_and_exit_boot(void *handle,
/*
* Set the virtual address field of all
* EFI_MEMORY_RUNTIME entries to 0. This will signal
* the incoming kernel that no virtual translation has
* been installed.
* EFI_MEMORY_RUNTIME entries to U64_MAX. This will
* signal the incoming kernel that no virtual
* translation has been installed.
*/
for (l = 0; l < priv.boot_memmap->map_size;
l += priv.boot_memmap->desc_size) {
p = (void *)priv.boot_memmap->map + l;
if (p->attribute & EFI_MEMORY_RUNTIME)
p->virt_addr = 0;
p->virt_addr = U64_MAX;
}
}
return EFI_SUCCESS;

View File

@@ -765,9 +765,9 @@ static efi_status_t exit_boot(struct boot_params *boot_params, void *handle)
* relocated by efi_relocate_kernel.
* On failure, we exit to the firmware via efi_exit instead of returning.
*/
unsigned long efi_main(efi_handle_t handle,
efi_system_table_t *sys_table_arg,
struct boot_params *boot_params)
asmlinkage unsigned long efi_main(efi_handle_t handle,
efi_system_table_t *sys_table_arg,
struct boot_params *boot_params)
{
unsigned long bzimage_addr = (unsigned long)startup_32;
unsigned long buffer_start, buffer_end;

View File

@@ -38,7 +38,8 @@ SECTIONS
}
}
PROVIDE(__efistub__gzdata_size = ABSOLUTE(. - __efistub__gzdata_start));
PROVIDE(__efistub__gzdata_size =
ABSOLUTE(__efistub__gzdata_end - __efistub__gzdata_start));
PROVIDE(__data_rawsize = ABSOLUTE(_edata - _etext));
PROVIDE(__data_size = ABSOLUTE(_end - _etext));

View File

@@ -41,7 +41,7 @@ static bool __init efi_virtmap_init(void)
if (!(md->attribute & EFI_MEMORY_RUNTIME))
continue;
if (md->virt_addr == 0)
if (md->virt_addr == U64_MAX)
return false;
ret = efi_create_mapping(&efi_mm, md);

View File

@@ -7,6 +7,7 @@
*/
#include <linux/types.h>
#include <linux/sizes.h>
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/module.h>
@@ -20,19 +21,19 @@ static struct efivars *__efivars;
static DEFINE_SEMAPHORE(efivars_lock);
efi_status_t check_var_size(u32 attributes, unsigned long size)
static efi_status_t check_var_size(u32 attributes, unsigned long size)
{
const struct efivar_operations *fops;
fops = __efivars->ops;
if (!fops->query_variable_store)
return EFI_UNSUPPORTED;
return (size <= SZ_64K) ? EFI_SUCCESS : EFI_OUT_OF_RESOURCES;
return fops->query_variable_store(attributes, size, false);
}
EXPORT_SYMBOL_NS_GPL(check_var_size, EFIVAR);
static
efi_status_t check_var_size_nonblocking(u32 attributes, unsigned long size)
{
const struct efivar_operations *fops;
@@ -40,11 +41,10 @@ efi_status_t check_var_size_nonblocking(u32 attributes, unsigned long size)
fops = __efivars->ops;
if (!fops->query_variable_store)
return EFI_UNSUPPORTED;
return (size <= SZ_64K) ? EFI_SUCCESS : EFI_OUT_OF_RESOURCES;
return fops->query_variable_store(attributes, size, true);
}
EXPORT_SYMBOL_NS_GPL(check_var_size_nonblocking, EFIVAR);
/**
* efivars_kobject - get the kobject for the registered efivars

View File

@@ -867,6 +867,7 @@
#define USB_DEVICE_ID_MADCATZ_BEATPAD 0x4540
#define USB_DEVICE_ID_MADCATZ_RAT5 0x1705
#define USB_DEVICE_ID_MADCATZ_RAT9 0x1709
#define USB_DEVICE_ID_MADCATZ_MMO7 0x1713
#define USB_VENDOR_ID_MCC 0x09db
#define USB_DEVICE_ID_MCC_PMD1024LS 0x0076
@@ -1142,6 +1143,7 @@
#define USB_DEVICE_ID_SONY_PS4_CONTROLLER_2 0x09cc
#define USB_DEVICE_ID_SONY_PS4_CONTROLLER_DONGLE 0x0ba0
#define USB_DEVICE_ID_SONY_PS5_CONTROLLER 0x0ce6
#define USB_DEVICE_ID_SONY_PS5_CONTROLLER_2 0x0df2
#define USB_DEVICE_ID_SONY_MOTION_CONTROLLER 0x03d5
#define USB_DEVICE_ID_SONY_NAVIGATION_CONTROLLER 0x042f
#define USB_DEVICE_ID_SONY_BUZZ_CONTROLLER 0x0002

View File

@@ -985,7 +985,7 @@ static int lenovo_led_brightness_set(struct led_classdev *led_cdev,
struct device *dev = led_cdev->dev->parent;
struct hid_device *hdev = to_hid_device(dev);
struct lenovo_drvdata *data_pointer = hid_get_drvdata(hdev);
u8 tp10ubkbd_led[] = { TP10UBKBD_MUTE_LED, TP10UBKBD_MICMUTE_LED };
static const u8 tp10ubkbd_led[] = { TP10UBKBD_MUTE_LED, TP10UBKBD_MICMUTE_LED };
int led_nr = 0;
int ret = 0;

View File

@@ -480,7 +480,7 @@ static int magicmouse_raw_event(struct hid_device *hdev,
magicmouse_raw_event(hdev, report, data + 2, data[1]);
magicmouse_raw_event(hdev, report, data + 2 + data[1],
size - 2 - data[1]);
break;
return 0;
default:
return 0;
}

View File

@@ -46,6 +46,7 @@ struct ps_device {
uint32_t fw_version;
int (*parse_report)(struct ps_device *dev, struct hid_report *report, u8 *data, int size);
void (*remove)(struct ps_device *dev);
};
/* Calibration data for playstation motion sensors. */
@@ -107,6 +108,9 @@ struct ps_led_info {
#define DS_STATUS_CHARGING GENMASK(7, 4)
#define DS_STATUS_CHARGING_SHIFT 4
/* Feature version from DualSense Firmware Info report. */
#define DS_FEATURE_VERSION(major, minor) ((major & 0xff) << 8 | (minor & 0xff))
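/* e.g. DS_FEATURE_VERSION(2, 21) == (2 << 8) | 21 == 0x0215, so versions compare numerically */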
/*
* Status of a DualSense touch point contact.
* Contact IDs, with highest bit set are 'inactive'
@@ -125,6 +129,7 @@ struct ps_led_info {
#define DS_OUTPUT_VALID_FLAG1_RELEASE_LEDS BIT(3)
#define DS_OUTPUT_VALID_FLAG1_PLAYER_INDICATOR_CONTROL_ENABLE BIT(4)
#define DS_OUTPUT_VALID_FLAG2_LIGHTBAR_SETUP_CONTROL_ENABLE BIT(1)
#define DS_OUTPUT_VALID_FLAG2_COMPATIBLE_VIBRATION2 BIT(2)
#define DS_OUTPUT_POWER_SAVE_CONTROL_MIC_MUTE BIT(4)
#define DS_OUTPUT_LIGHTBAR_SETUP_LIGHT_OUT BIT(1)
@@ -142,6 +147,9 @@ struct dualsense {
struct input_dev *sensors;
struct input_dev *touchpad;
/* Update version is used as a feature/capability version. */
uint16_t update_version;
/* Calibration data for accelerometer and gyroscope. */
struct ps_calibration_data accel_calib_data[3];
struct ps_calibration_data gyro_calib_data[3];
@@ -152,6 +160,7 @@ struct dualsense {
uint32_t sensor_timestamp_us;
/* Compatible rumble state */
bool use_vibration_v2;
bool update_rumble;
uint8_t motor_left;
uint8_t motor_right;
@@ -174,6 +183,7 @@ struct dualsense {
struct led_classdev player_leds[5];
struct work_struct output_worker;
bool output_worker_initialized;
void *output_report_dmabuf;
uint8_t output_seq; /* Sequence number for output report. */
};
@@ -299,6 +309,7 @@ static const struct {int x; int y; } ps_gamepad_hat_mapping[] = {
{0, 0},
};
static inline void dualsense_schedule_work(struct dualsense *ds);
static void dualsense_set_lightbar(struct dualsense *ds, uint8_t red, uint8_t green, uint8_t blue);
/*
@@ -789,6 +800,7 @@ static int dualsense_get_calibration_data(struct dualsense *ds)
return ret;
}
static int dualsense_get_firmware_info(struct dualsense *ds)
{
uint8_t *buf;
@@ -808,6 +820,15 @@ static int dualsense_get_firmware_info(struct dualsense *ds)
ds->base.hw_version = get_unaligned_le32(&buf[24]);
ds->base.fw_version = get_unaligned_le32(&buf[28]);
/* Update version is some kind of feature version. It is distinct from
* the firmware version as there can be many different variations of a
* controller over time with the same physical shell, but with different
* PCBs and other internal changes. The update version (internal name) is
* used as a means to detect what features are available and change behavior.
* Note: the version is different between DualSense and DualSense Edge.
*/
ds->update_version = get_unaligned_le16(&buf[44]);
err_free:
kfree(buf);
return ret;
@@ -878,7 +899,7 @@ static int dualsense_player_led_set_brightness(struct led_classdev *led, enum le
ds->update_player_leds = true;
spin_unlock_irqrestore(&ds->base.lock, flags);
schedule_work(&ds->output_worker);
dualsense_schedule_work(ds);
return 0;
}
@@ -922,6 +943,16 @@ static void dualsense_init_output_report(struct dualsense *ds, struct dualsense_
}
}
static inline void dualsense_schedule_work(struct dualsense *ds)
{
unsigned long flags;
spin_lock_irqsave(&ds->base.lock, flags);
if (ds->output_worker_initialized)
schedule_work(&ds->output_worker);
spin_unlock_irqrestore(&ds->base.lock, flags);
}
/*
* Helper function to send DualSense output reports. Applies a CRC at the end of a report
* for Bluetooth reports.
@@ -960,7 +991,10 @@ static void dualsense_output_worker(struct work_struct *work)
if (ds->update_rumble) {
/* Select classic rumble style haptics and enable it. */
common->valid_flag0 |= DS_OUTPUT_VALID_FLAG0_HAPTICS_SELECT;
common->valid_flag0 |= DS_OUTPUT_VALID_FLAG0_COMPATIBLE_VIBRATION;
if (ds->use_vibration_v2)
common->valid_flag2 |= DS_OUTPUT_VALID_FLAG2_COMPATIBLE_VIBRATION2;
else
common->valid_flag0 |= DS_OUTPUT_VALID_FLAG0_COMPATIBLE_VIBRATION;
common->motor_left = ds->motor_left;
common->motor_right = ds->motor_right;
ds->update_rumble = false;
@@ -1082,7 +1116,7 @@ static int dualsense_parse_report(struct ps_device *ps_dev, struct hid_report *r
spin_unlock_irqrestore(&ps_dev->lock, flags);
/* Schedule updating of microphone state at hardware level. */
schedule_work(&ds->output_worker);
dualsense_schedule_work(ds);
}
ds->last_btn_mic_state = btn_mic_state;
@@ -1197,10 +1231,22 @@ static int dualsense_play_effect(struct input_dev *dev, void *data, struct ff_ef
ds->motor_right = effect->u.rumble.weak_magnitude / 256;
spin_unlock_irqrestore(&ds->base.lock, flags);
schedule_work(&ds->output_worker);
dualsense_schedule_work(ds);
return 0;
}
static void dualsense_remove(struct ps_device *ps_dev)
{
struct dualsense *ds = container_of(ps_dev, struct dualsense, base);
unsigned long flags;
spin_lock_irqsave(&ds->base.lock, flags);
ds->output_worker_initialized = false;
spin_unlock_irqrestore(&ds->base.lock, flags);
cancel_work_sync(&ds->output_worker);
}
static int dualsense_reset_leds(struct dualsense *ds)
{
struct dualsense_output_report report;
@@ -1237,7 +1283,7 @@ static void dualsense_set_lightbar(struct dualsense *ds, uint8_t red, uint8_t gr
ds->lightbar_blue = blue;
spin_unlock_irqrestore(&ds->base.lock, flags);
schedule_work(&ds->output_worker);
dualsense_schedule_work(ds);
}
static void dualsense_set_player_leds(struct dualsense *ds)
@@ -1260,7 +1306,7 @@ static void dualsense_set_player_leds(struct dualsense *ds)
ds->update_player_leds = true;
ds->player_leds_state = player_ids[player_id];
schedule_work(&ds->output_worker);
dualsense_schedule_work(ds);
}
static struct ps_device *dualsense_create(struct hid_device *hdev)
@@ -1299,7 +1345,9 @@ static struct ps_device *dualsense_create(struct hid_device *hdev)
ps_dev->battery_capacity = 100; /* initial value until parse_report. */
ps_dev->battery_status = POWER_SUPPLY_STATUS_UNKNOWN;
ps_dev->parse_report = dualsense_parse_report;
ps_dev->remove = dualsense_remove;
INIT_WORK(&ds->output_worker, dualsense_output_worker);
ds->output_worker_initialized = true;
hid_set_drvdata(hdev, ds);
max_output_report_size = sizeof(struct dualsense_output_report_bt);
@@ -1320,6 +1368,21 @@ static struct ps_device *dualsense_create(struct hid_device *hdev)
return ERR_PTR(ret);
}
/* Original DualSense firmware simulated classic controller rumble through
* its new haptics hardware. It felt different from classic rumble users
* were used to. Since then new firmwares were introduced to change behavior
* and make this new 'v2' behavior default on PlayStation and other platforms.
* The original DualSense requires a new enough firmware as bundled with PS5
* software released in 2021. DualSense edge supports it out of the box.
* Both devices also support the old mode, but it is not really used.
*/
if (hdev->product == USB_DEVICE_ID_SONY_PS5_CONTROLLER) {
/* Feature version 2.21 introduced new vibration method. */
ds->use_vibration_v2 = ds->update_version >= DS_FEATURE_VERSION(2, 21);
} else if (hdev->product == USB_DEVICE_ID_SONY_PS5_CONTROLLER_2) {
ds->use_vibration_v2 = true;
}
ret = ps_devices_list_add(ps_dev);
if (ret)
return ERR_PTR(ret);
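
The version gate in the hunk above compares packed 16-bit values. Assuming DS_FEATURE_VERSION(major, minor) expands to ((major) << 8 | (minor)), an assumption consistent with update_version being read as a little-endian 16-bit field at the top of this diff, the comparison is plain integer arithmetic:

/* Assumed macro body, shown for illustration only: */
#define DS_FEATURE_VERSION(major, minor)	((major) << 8 | (minor))

/*
 * DS_FEATURE_VERSION(2, 21) == 0x0215. A controller reporting firmware
 * 2.33 packs to 0x0221, and 0x0221 >= 0x0215 selects the v2 vibration
 * flag; firmware 2.20 packs to 0x0214 and keeps the legacy flag.
 */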
@@ -1436,7 +1499,8 @@ static int ps_probe(struct hid_device *hdev, const struct hid_device_id *id)
goto err_stop;
}
if (hdev->product == USB_DEVICE_ID_SONY_PS5_CONTROLLER) {
if (hdev->product == USB_DEVICE_ID_SONY_PS5_CONTROLLER ||
hdev->product == USB_DEVICE_ID_SONY_PS5_CONTROLLER_2) {
dev = dualsense_create(hdev);
if (IS_ERR(dev)) {
hid_err(hdev, "Failed to create dualsense.\n");
@@ -1461,6 +1525,9 @@ static void ps_remove(struct hid_device *hdev)
ps_devices_list_remove(dev);
ps_device_release_player_id(dev);
if (dev->remove)
dev->remove(dev);
hid_hw_close(hdev);
hid_hw_stop(hdev);
}
@@ -1468,6 +1535,8 @@ static void ps_remove(struct hid_device *hdev)
static const struct hid_device_id ps_devices[] = {
{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_SONY, USB_DEVICE_ID_SONY_PS5_CONTROLLER) },
{ HID_USB_DEVICE(USB_VENDOR_ID_SONY, USB_DEVICE_ID_SONY_PS5_CONTROLLER) },
{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_SONY, USB_DEVICE_ID_SONY_PS5_CONTROLLER_2) },
{ HID_USB_DEVICE(USB_VENDOR_ID_SONY, USB_DEVICE_ID_SONY_PS5_CONTROLLER_2) },
{ }
};
MODULE_DEVICE_TABLE(hid, ps_devices);

View File

@@ -620,6 +620,7 @@ static const struct hid_device_id hid_have_special_driver[] = {
{ HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_MMO7) },
{ HID_USB_DEVICE(USB_VENDOR_ID_MADCATZ, USB_DEVICE_ID_MADCATZ_RAT5) },
{ HID_USB_DEVICE(USB_VENDOR_ID_MADCATZ, USB_DEVICE_ID_MADCATZ_RAT9) },
{ HID_USB_DEVICE(USB_VENDOR_ID_MADCATZ, USB_DEVICE_ID_MADCATZ_MMO7) },
#endif
#if IS_ENABLED(CONFIG_HID_SAMSUNG)
{ HID_USB_DEVICE(USB_VENDOR_ID_SAMSUNG, USB_DEVICE_ID_SAMSUNG_IR_REMOTE) },

View File

@@ -187,6 +187,8 @@ static const struct hid_device_id saitek_devices[] = {
.driver_data = SAITEK_RELEASE_MODE_RAT7 },
{ HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_MMO7),
.driver_data = SAITEK_RELEASE_MODE_MMO7 },
{ HID_USB_DEVICE(USB_VENDOR_ID_MADCATZ, USB_DEVICE_ID_MADCATZ_MMO7),
.driver_data = SAITEK_RELEASE_MODE_MMO7 },
{ }
};

View File

@@ -46,9 +46,6 @@ MODULE_PARM_DESC(tjmax, "TjMax value in degrees Celsius");
#define TOTAL_ATTRS (MAX_CORE_ATTRS + 1)
#define MAX_CORE_DATA (NUM_REAL_CORES + BASE_SYSFS_ATTR_NO)
#define TO_CORE_ID(cpu) (cpu_data(cpu).cpu_core_id)
#define TO_ATTR_NO(cpu) (TO_CORE_ID(cpu) + BASE_SYSFS_ATTR_NO)
#ifdef CONFIG_SMP
#define for_each_sibling(i, cpu) \
for_each_cpu(i, topology_sibling_cpumask(cpu))
@@ -91,6 +88,8 @@ struct temp_data {
struct platform_data {
struct device *hwmon_dev;
u16 pkg_id;
u16 cpu_map[NUM_REAL_CORES];
struct ida ida;
struct cpumask cpumask;
struct temp_data *core_data[MAX_CORE_DATA];
struct device_attribute name_attr;
@@ -441,7 +440,7 @@ static struct temp_data *init_temp_data(unsigned int cpu, int pkg_flag)
MSR_IA32_THERM_STATUS;
tdata->is_pkg_data = pkg_flag;
tdata->cpu = cpu;
tdata->cpu_core_id = TO_CORE_ID(cpu);
tdata->cpu_core_id = topology_core_id(cpu);
tdata->attr_size = MAX_CORE_ATTRS;
mutex_init(&tdata->update_lock);
return tdata;
@@ -454,7 +453,7 @@ static int create_core_data(struct platform_device *pdev, unsigned int cpu,
struct platform_data *pdata = platform_get_drvdata(pdev);
struct cpuinfo_x86 *c = &cpu_data(cpu);
u32 eax, edx;
int err, attr_no;
int err, index, attr_no;
/*
* Find attr number for sysfs:
@@ -462,14 +461,26 @@ static int create_core_data(struct platform_device *pdev, unsigned int cpu,
* The attr number is always core id + 2
* The Pkgtemp will always show up as temp1_*, if available
*/
attr_no = pkg_flag ? PKG_SYSFS_ATTR_NO : TO_ATTR_NO(cpu);
if (pkg_flag) {
attr_no = PKG_SYSFS_ATTR_NO;
} else {
index = ida_alloc(&pdata->ida, GFP_KERNEL);
if (index < 0)
return index;
pdata->cpu_map[index] = topology_core_id(cpu);
attr_no = index + BASE_SYSFS_ATTR_NO;
}
if (attr_no > MAX_CORE_DATA - 1)
return -ERANGE;
if (attr_no > MAX_CORE_DATA - 1) {
err = -ERANGE;
goto ida_free;
}
tdata = init_temp_data(cpu, pkg_flag);
if (!tdata)
return -ENOMEM;
if (!tdata) {
err = -ENOMEM;
goto ida_free;
}
/* Test if we can access the status register */
err = rdmsr_safe_on_cpu(cpu, tdata->status_reg, &eax, &edx);
@@ -505,6 +516,9 @@ static int create_core_data(struct platform_device *pdev, unsigned int cpu,
exit_free:
pdata->core_data[attr_no] = NULL;
kfree(tdata);
ida_free:
if (!pkg_flag)
ida_free(&pdata->ida, index);
return err;
}
@@ -524,6 +538,9 @@ static void coretemp_remove_core(struct platform_data *pdata, int indx)
kfree(pdata->core_data[indx]);
pdata->core_data[indx] = NULL;
if (indx >= BASE_SYSFS_ATTR_NO)
ida_free(&pdata->ida, indx - BASE_SYSFS_ATTR_NO);
}
static int coretemp_probe(struct platform_device *pdev)
@@ -537,6 +554,7 @@ static int coretemp_probe(struct platform_device *pdev)
return -ENOMEM;
pdata->pkg_id = pdev->id;
ida_init(&pdata->ida);
platform_set_drvdata(pdev, pdata);
pdata->hwmon_dev = devm_hwmon_device_register_with_groups(dev, DRVNAME,
@@ -553,6 +571,7 @@ static int coretemp_remove(struct platform_device *pdev)
if (pdata->core_data[i])
coretemp_remove_core(pdata, i);
ida_destroy(&pdata->ida);
return 0;
}
@@ -647,7 +666,7 @@ static int coretemp_cpu_offline(unsigned int cpu)
struct platform_device *pdev = coretemp_get_pdev(cpu);
struct platform_data *pd;
struct temp_data *tdata;
int indx, target;
int i, indx = -1, target;
/*
* Don't execute this on suspend as the device remove locks
@@ -660,12 +679,19 @@ static int coretemp_cpu_offline(unsigned int cpu)
if (!pdev)
return 0;
/* The core id is too big, just return */
indx = TO_ATTR_NO(cpu);
if (indx > MAX_CORE_DATA - 1)
pd = platform_get_drvdata(pdev);
for (i = 0; i < NUM_REAL_CORES; i++) {
if (pd->cpu_map[i] == topology_core_id(cpu)) {
indx = i + BASE_SYSFS_ATTR_NO;
break;
}
}
/* Too many cores and this core is not populated, just return */
if (indx < 0)
return 0;
pd = platform_get_drvdata(pdev);
tdata = pd->core_data[indx];
cpumask_clear_cpu(cpu, &pd->cpumask);
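
These coretemp hunks replace the fixed core-id-to-attribute formula (TO_ATTR_NO) with an IDA, so CPUs whose core IDs are sparse or larger than NUM_REAL_CORES no longer run past MAX_CORE_DATA, and cpu_map[] keeps the reverse mapping that coretemp_cpu_offline() now searches instead of recomputing. A condensed sketch of the allocation pattern, with hypothetical names and table size:

#include <linux/errno.h>
#include <linux/idr.h>
#include <linux/types.h>

#define NUM_SLOTS 16

static DEFINE_IDA(slot_ida);
static u16 slot_to_core[NUM_SLOTS];	/* mirrors pdata->cpu_map */

static int claim_slot(u16 core_id)
{
	int index = ida_alloc(&slot_ida, GFP_KERNEL);	/* smallest free id */

	if (index < 0)
		return index;
	if (index >= NUM_SLOTS) {	/* table full, like the -ERANGE case */
		ida_free(&slot_ida, index);
		return -ERANGE;
	}
	slot_to_core[index] = core_id;	/* reverse map for hotplug removal */
	return index;
}

static void release_slot(int index)
{
	ida_free(&slot_ida, index);	/* id becomes reusable immediately */
}

ida_alloc() always returns the smallest free id, so slots vacated by offlined cores are recycled before the table grows.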

View File

@@ -820,7 +820,8 @@ static const struct hid_device_id corsairpsu_idtable[] = {
{ HID_USB_DEVICE(0x1b1c, 0x1c0b) }, /* Corsair RM750i */
{ HID_USB_DEVICE(0x1b1c, 0x1c0c) }, /* Corsair RM850i */
{ HID_USB_DEVICE(0x1b1c, 0x1c0d) }, /* Corsair RM1000i */
{ HID_USB_DEVICE(0x1b1c, 0x1c1e) }, /* Corsaur HX1000i revision 2 */
{ HID_USB_DEVICE(0x1b1c, 0x1c1e) }, /* Corsair HX1000i revision 2 */
{ HID_USB_DEVICE(0x1b1c, 0x1c1f) }, /* Corsair HX1500i */
{ },
};
MODULE_DEVICE_TABLE(hid, corsairpsu_idtable);

View File

@@ -257,7 +257,10 @@ static int pwm_fan_update_enable(struct pwm_fan_ctx *ctx, long val)
if (val == 0) {
/* Disable pwm-fan unconditionally */
ret = __set_pwm(ctx, 0);
if (ctx->enabled)
ret = __set_pwm(ctx, 0);
else
ret = pwm_fan_switch_power(ctx, false);
if (ret)
ctx->enable_mode = old_val;
pwm_fan_update_state(ctx, 0);

View File

@@ -764,6 +764,7 @@ config I2C_LPC2K
config I2C_MLXBF
tristate "Mellanox BlueField I2C controller"
depends on MELLANOX_PLATFORM && ARM64
depends on ACPI
select I2C_SLAVE
help
Enabling this option will add I2C SMBus support for Mellanox BlueField

View File

@@ -2247,7 +2247,6 @@ static struct i2c_adapter_quirks mlxbf_i2c_quirks = {
.max_write_len = MLXBF_I2C_MASTER_DATA_W_LENGTH,
};
#ifdef CONFIG_ACPI
static const struct acpi_device_id mlxbf_i2c_acpi_ids[] = {
{ "MLNXBF03", (kernel_ulong_t)&mlxbf_i2c_chip[MLXBF_I2C_CHIP_TYPE_1] },
{ "MLNXBF23", (kernel_ulong_t)&mlxbf_i2c_chip[MLXBF_I2C_CHIP_TYPE_2] },
@@ -2282,12 +2281,6 @@ static int mlxbf_i2c_acpi_probe(struct device *dev, struct mlxbf_i2c_priv *priv)
return 0;
}
#else
static int mlxbf_i2c_acpi_probe(struct device *dev, struct mlxbf_i2c_priv *priv)
{
return -ENOENT;
}
#endif /* CONFIG_ACPI */
static int mlxbf_i2c_probe(struct platform_device *pdev)
{
@@ -2490,9 +2483,7 @@ static struct platform_driver mlxbf_i2c_driver = {
.remove = mlxbf_i2c_remove,
.driver = {
.name = "i2c-mlxbf",
#ifdef CONFIG_ACPI
.acpi_match_table = ACPI_PTR(mlxbf_i2c_acpi_ids),
#endif /* CONFIG_ACPI */
},
};

View File

@@ -40,7 +40,7 @@
#define MLXCPLD_LPCI2C_STATUS_REG 0x9
#define MLXCPLD_LPCI2C_DATA_REG 0xa
/* LPC I2C masks and parametres */
/* LPC I2C masks and parameters */
#define MLXCPLD_LPCI2C_RST_SEL_MASK 0x1
#define MLXCPLD_LPCI2C_TRANS_END 0x1
#define MLXCPLD_LPCI2C_STATUS_NACK 0x10

View File

@@ -639,6 +639,11 @@ static int cci_probe(struct platform_device *pdev)
if (ret < 0)
goto error;
pm_runtime_set_autosuspend_delay(dev, MSEC_PER_SEC);
pm_runtime_use_autosuspend(dev);
pm_runtime_set_active(dev);
pm_runtime_enable(dev);
for (i = 0; i < cci->data->num_masters; i++) {
if (!cci->master[i].cci)
continue;
@@ -650,14 +655,12 @@ static int cci_probe(struct platform_device *pdev)
}
}
pm_runtime_set_autosuspend_delay(dev, MSEC_PER_SEC);
pm_runtime_use_autosuspend(dev);
pm_runtime_set_active(dev);
pm_runtime_enable(dev);
return 0;
error_i2c:
pm_runtime_disable(dev);
pm_runtime_dont_use_autosuspend(dev);
for (--i ; i >= 0; i--) {
if (cci->master[i].cci) {
i2c_del_adapter(&cci->master[i].adap);

View File

@@ -97,7 +97,7 @@ MODULE_PARM_DESC(high_clock,
module_param(force, bool, 0);
MODULE_PARM_DESC(force, "Forcibly enable the SIS630. DANGEROUS!");
/* SMBus base adress */
/* SMBus base address */
static unsigned short smbus_base;
/* supported chips */

View File

@@ -920,6 +920,7 @@ static struct platform_driver xiic_i2c_driver = {
module_platform_driver(xiic_i2c_driver);
MODULE_ALIAS("platform:" DRIVER_NAME);
MODULE_AUTHOR("info@mocean-labs.com");
MODULE_DESCRIPTION("Xilinx I2C bus driver");
MODULE_LICENSE("GPL v2");

View File

@@ -2330,7 +2330,8 @@ static void amd_iommu_get_resv_regions(struct device *dev,
type = IOMMU_RESV_RESERVED;
region = iommu_alloc_resv_region(entry->address_start,
length, prot, type);
length, prot, type,
GFP_KERNEL);
if (!region) {
dev_err(dev, "Out of memory allocating dm-regions\n");
return;
@@ -2340,14 +2341,14 @@ static void amd_iommu_get_resv_regions(struct device *dev,
region = iommu_alloc_resv_region(MSI_RANGE_START,
MSI_RANGE_END - MSI_RANGE_START + 1,
0, IOMMU_RESV_MSI);
0, IOMMU_RESV_MSI, GFP_KERNEL);
if (!region)
return;
list_add_tail(&region->list, head);
region = iommu_alloc_resv_region(HT_RANGE_START,
HT_RANGE_END - HT_RANGE_START + 1,
0, IOMMU_RESV_RESERVED);
0, IOMMU_RESV_RESERVED, GFP_KERNEL);
if (!region)
return;
list_add_tail(&region->list, head);

View File

@@ -758,7 +758,7 @@ static void apple_dart_get_resv_regions(struct device *dev,
region = iommu_alloc_resv_region(DOORBELL_ADDR,
PAGE_SIZE, prot,
IOMMU_RESV_MSI);
IOMMU_RESV_MSI, GFP_KERNEL);
if (!region)
return;

View File

@@ -2757,7 +2757,7 @@ static void arm_smmu_get_resv_regions(struct device *dev,
int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
prot, IOMMU_RESV_SW_MSI);
prot, IOMMU_RESV_SW_MSI, GFP_KERNEL);
if (!region)
return;

View File

@@ -1534,7 +1534,7 @@ static void arm_smmu_get_resv_regions(struct device *dev,
int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
prot, IOMMU_RESV_SW_MSI);
prot, IOMMU_RESV_SW_MSI, GFP_KERNEL);
if (!region)
return;

View File

@@ -2410,6 +2410,7 @@ static int __init si_domain_init(int hw)
if (md_domain_init(si_domain, DEFAULT_DOMAIN_ADDRESS_WIDTH)) {
domain_exit(si_domain);
si_domain = NULL;
return -EFAULT;
}
@@ -3052,6 +3053,10 @@ static int __init init_dmars(void)
disable_dmar_iommu(iommu);
free_dmar_iommu(iommu);
}
if (si_domain) {
domain_exit(si_domain);
si_domain = NULL;
}
return ret;
}
@@ -4534,7 +4539,7 @@ static void intel_iommu_get_resv_regions(struct device *device,
struct device *i_dev;
int i;
down_read(&dmar_global_lock);
rcu_read_lock();
for_each_rmrr_units(rmrr) {
for_each_active_dev_scope(rmrr->devices, rmrr->devices_cnt,
i, i_dev) {
@@ -4552,14 +4557,15 @@ static void intel_iommu_get_resv_regions(struct device *device,
IOMMU_RESV_DIRECT_RELAXABLE : IOMMU_RESV_DIRECT;
resv = iommu_alloc_resv_region(rmrr->base_address,
length, prot, type);
length, prot, type,
GFP_ATOMIC);
if (!resv)
break;
list_add_tail(&resv->list, head);
}
}
up_read(&dmar_global_lock);
rcu_read_unlock();
#ifdef CONFIG_INTEL_IOMMU_FLOPPY_WA
if (dev_is_pci(device)) {
@@ -4567,7 +4573,8 @@ static void intel_iommu_get_resv_regions(struct device *device,
if ((pdev->class >> 8) == PCI_CLASS_BRIDGE_ISA) {
reg = iommu_alloc_resv_region(0, 1UL << 24, prot,
IOMMU_RESV_DIRECT_RELAXABLE);
IOMMU_RESV_DIRECT_RELAXABLE,
GFP_KERNEL);
if (reg)
list_add_tail(&reg->list, head);
}
@@ -4576,7 +4583,7 @@ static void intel_iommu_get_resv_regions(struct device *device,
reg = iommu_alloc_resv_region(IOAPIC_RANGE_START,
IOAPIC_RANGE_END - IOAPIC_RANGE_START + 1,
0, IOMMU_RESV_MSI);
0, IOMMU_RESV_MSI, GFP_KERNEL);
if (!reg)
return;
list_add_tail(&reg->list, head);

View File

@@ -504,7 +504,7 @@ static int iommu_insert_resv_region(struct iommu_resv_region *new,
LIST_HEAD(stack);
nr = iommu_alloc_resv_region(new->start, new->length,
new->prot, new->type);
new->prot, new->type, GFP_KERNEL);
if (!nr)
return -ENOMEM;
@@ -2579,11 +2579,12 @@ EXPORT_SYMBOL(iommu_put_resv_regions);
struct iommu_resv_region *iommu_alloc_resv_region(phys_addr_t start,
size_t length, int prot,
enum iommu_resv_type type)
enum iommu_resv_type type,
gfp_t gfp)
{
struct iommu_resv_region *region;
region = kzalloc(sizeof(*region), GFP_KERNEL);
region = kzalloc(sizeof(*region), gfp);
if (!region)
return NULL;
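
With the gfp argument plumbed through, every caller chooses an allocation context that matches its locking rules: most get_resv_regions paths run in process context and pass GFP_KERNEL, while the Intel RMRR loop above now walks under rcu_read_lock() and therefore passes GFP_ATOMIC. A hedged sketch of a callback on the new signature; the mydrv name and the IOVA window defines are placeholders in the style of the SMMU drivers above, not real kernel symbols:

#include <linux/iommu.h>

#define MYDRV_MSI_IOVA_BASE	0x8000000
#define MYDRV_MSI_IOVA_LENGTH	0x100000

static void mydrv_get_resv_regions(struct device *dev, struct list_head *head)
{
	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
	struct iommu_resv_region *region;

	/* Process context, no spinlock or RCU read-side lock held. */
	region = iommu_alloc_resv_region(MYDRV_MSI_IOVA_BASE,
					 MYDRV_MSI_IOVA_LENGTH,
					 prot, IOMMU_RESV_SW_MSI,
					 GFP_KERNEL);
	if (!region)
		return;
	list_add_tail(&region->list, head);
}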

View File

@@ -917,7 +917,8 @@ static void mtk_iommu_get_resv_regions(struct device *dev,
continue;
region = iommu_alloc_resv_region(resv->iova_base, resv->size,
prot, IOMMU_RESV_RESERVED);
prot, IOMMU_RESV_RESERVED,
GFP_KERNEL);
if (!region)
return;

View File

@@ -490,11 +490,13 @@ static int viommu_add_resv_mem(struct viommu_endpoint *vdev,
fallthrough;
case VIRTIO_IOMMU_RESV_MEM_T_RESERVED:
region = iommu_alloc_resv_region(start, size, 0,
IOMMU_RESV_RESERVED);
IOMMU_RESV_RESERVED,
GFP_KERNEL);
break;
case VIRTIO_IOMMU_RESV_MEM_T_MSI:
region = iommu_alloc_resv_region(start, size, prot,
IOMMU_RESV_MSI);
IOMMU_RESV_MSI,
GFP_KERNEL);
break;
}
if (!region)
@@ -909,7 +911,8 @@ static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
*/
if (!msi) {
msi = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
prot, IOMMU_RESV_SW_MSI);
prot, IOMMU_RESV_SW_MSI,
GFP_KERNEL);
if (!msi)
return;

View File

@@ -24,7 +24,7 @@ if MEDIA_SUPPORT
config MEDIA_SUPPORT_FILTER
bool "Filter media drivers"
default y if !EMBEDDED && !EXPERT
default y if !EXPERT
help
Configuring the media subsystem can be complex, as there are
hundreds of drivers and other config options.

View File

@@ -1027,6 +1027,7 @@ static const u8 cec_msg_size[256] = {
[CEC_MSG_REPORT_SHORT_AUDIO_DESCRIPTOR] = 2 | DIRECTED,
[CEC_MSG_REQUEST_SHORT_AUDIO_DESCRIPTOR] = 2 | DIRECTED,
[CEC_MSG_SET_SYSTEM_AUDIO_MODE] = 3 | BOTH,
[CEC_MSG_SET_AUDIO_VOLUME_LEVEL] = 3 | DIRECTED,
[CEC_MSG_SYSTEM_AUDIO_MODE_REQUEST] = 2 | DIRECTED,
[CEC_MSG_SYSTEM_AUDIO_MODE_STATUS] = 3 | DIRECTED,
[CEC_MSG_SET_AUDIO_RATE] = 3 | DIRECTED,

View File

@@ -44,6 +44,8 @@ static void handle_cec_message(struct cros_ec_cec *cros_ec_cec)
uint8_t *cec_message = cros_ec->event_data.data.cec_message;
unsigned int len = cros_ec->event_size;
if (len > CEC_MAX_MSG_SIZE)
len = CEC_MAX_MSG_SIZE;
cros_ec_cec->rx_msg.len = len;
memcpy(cros_ec_cec->rx_msg.msg, cec_message, len);
@@ -221,6 +223,8 @@ static const struct cec_dmi_match cec_dmi_match_table[] = {
{ "Google", "Moli", "0000:00:02.0", "Port B" },
/* Google Kinox */
{ "Google", "Kinox", "0000:00:02.0", "Port B" },
/* Google Kuldax */
{ "Google", "Kuldax", "0000:00:02.0", "Port B" },
};
static struct device *cros_ec_cec_find_hdmi_dev(struct device *dev,

View File

@@ -115,6 +115,8 @@ static irqreturn_t s5p_cec_irq_handler(int irq, void *priv)
dev_dbg(cec->dev, "Buffer overrun (worker did not process previous message)\n");
cec->rx = STATE_BUSY;
cec->msg.len = status >> 24;
if (cec->msg.len > CEC_MAX_MSG_SIZE)
cec->msg.len = CEC_MAX_MSG_SIZE;
cec->msg.rx_status = CEC_RX_STATUS_OK;
s5p_cec_get_rx_buf(cec, cec->msg.len,
cec->msg.msg);

View File

@@ -6660,7 +6660,7 @@ static int drxk_read_snr(struct dvb_frontend *fe, u16 *snr)
static int drxk_read_ucblocks(struct dvb_frontend *fe, u32 *ucblocks)
{
struct drxk_state *state = fe->demodulator_priv;
u16 err;
u16 err = 0;
dprintk(1, "\n");

View File

@@ -406,7 +406,6 @@ static int ar0521_set_fmt(struct v4l2_subdev *sd,
struct v4l2_subdev_format *format)
{
struct ar0521_dev *sensor = to_ar0521_dev(sd);
int ret = 0;
ar0521_adj_fmt(&format->format);
@@ -423,7 +422,7 @@ static int ar0521_set_fmt(struct v4l2_subdev *sd,
}
mutex_unlock(&sensor->lock);
return ret;
return 0;
}
static int ar0521_s_ctrl(struct v4l2_ctrl *ctrl)
@@ -756,10 +755,12 @@ static int ar0521_power_on(struct device *dev)
gpiod_set_value(sensor->reset_gpio, 0);
usleep_range(4500, 5000); /* min 45000 clocks */
for (cnt = 0; cnt < ARRAY_SIZE(initial_regs); cnt++)
if (ar0521_write_regs(sensor, initial_regs[cnt].data,
initial_regs[cnt].count))
for (cnt = 0; cnt < ARRAY_SIZE(initial_regs); cnt++) {
ret = ar0521_write_regs(sensor, initial_regs[cnt].data,
initial_regs[cnt].count);
if (ret)
goto off;
}
ret = ar0521_write_reg(sensor, AR0521_REG_SERIAL_FORMAT,
AR0521_REG_SERIAL_FORMAT_MIPI |

View File

@@ -238,6 +238,43 @@ static int get_key_knc1(struct IR_i2c *ir, enum rc_proto *protocol,
return 1;
}
static int get_key_geniatech(struct IR_i2c *ir, enum rc_proto *protocol,
u32 *scancode, u8 *toggle)
{
int i, rc;
unsigned char b;
/* poll IR chip */
for (i = 0; i < 4; i++) {
rc = i2c_master_recv(ir->c, &b, 1);
if (rc == 1)
break;
msleep(20);
}
if (rc != 1) {
dev_dbg(&ir->rc->dev, "read error\n");
if (rc < 0)
return rc;
return -EIO;
}
/* don't repeat the key */
if (ir->old == b)
return 0;
ir->old = b;
/* decode to RC5 */
b &= 0x7f;
b = (b - 1) / 2;
dev_dbg(&ir->rc->dev, "key %02x\n", b);
*protocol = RC_PROTO_RC5;
*scancode = b;
*toggle = ir->old >> 7;
return 1;
}
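
Walking the decode through a concrete byte makes the arithmetic visible (the 0xb5 sample is hypothetical, not from the commit):

/* i2c_master_recv() returns b = 0xb5:			*/
/*	ir->old = 0xb5, *toggle = 0xb5 >> 7 = 1		*/
/*	b &= 0x7f	-> 0x35 (decimal 53)		*/
/*	b = (53 - 1) / 2 -> 26 = 0x1a			*/
/* so the handler reports RC5 scancode 0x1a, toggle 1.	*/

Probing seeds ir->old with 0xfc, presumably a value the chip never reports, so the first genuine byte always differs from the stored value and survives the key-repeat check.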
static int get_key_avermedia_cardbus(struct IR_i2c *ir, enum rc_proto *protocol,
u32 *scancode, u8 *toggle)
{
@@ -766,6 +803,13 @@ static int ir_probe(struct i2c_client *client, const struct i2c_device_id *id)
rc_proto = RC_PROTO_BIT_OTHER;
ir_codes = RC_MAP_EMPTY;
break;
case 0x33:
name = "Geniatech";
ir->get_key = get_key_geniatech;
rc_proto = RC_PROTO_BIT_RC5;
ir_codes = RC_MAP_TOTAL_MEDIA_IN_HAND_02;
ir->old = 0xfc;
break;
case 0x6b:
name = "FusionHDTV";
ir->get_key = get_key_fusionhdtv;
@@ -825,6 +869,9 @@ static int ir_probe(struct i2c_client *client, const struct i2c_device_id *id)
case IR_KBD_GET_KEY_KNC1:
ir->get_key = get_key_knc1;
break;
case IR_KBD_GET_KEY_GENIATECH:
ir->get_key = get_key_geniatech;
break;
case IR_KBD_GET_KEY_FUSIONHDTV:
ir->get_key = get_key_fusionhdtv;
break;

View File

@@ -8,7 +8,7 @@
#include <linux/bitfield.h>
#include <linux/delay.h>
#include <linux/gpio.h>
#include <linux/gpio/consumer.h>
#include <linux/i2c.h>
#include <linux/module.h>
#include <linux/of_graph.h>

View File

@@ -633,7 +633,7 @@ static int mt9v111_hw_config(struct mt9v111_dev *mt9v111)
/*
* Set pixel integration time to the whole frame time.
* This value controls the the shutter delay when running with AE
* This value controls the shutter delay when running with AE
* disabled. If longer than frame time, it affects the output
* frame rate.
*/

View File

@@ -15,6 +15,7 @@
#include <linux/init.h>
#include <linux/module.h>
#include <linux/of_device.h>
#include <linux/pm_runtime.h>
#include <linux/regulator/consumer.h>
#include <linux/slab.h>
#include <linux/types.h>
@@ -447,8 +448,6 @@ struct ov5640_dev {
/* lock to protect all members below */
struct mutex lock;
int power_count;
struct v4l2_mbus_framefmt fmt;
bool pending_fmt_change;
@@ -2696,39 +2695,24 @@ static int ov5640_set_power(struct ov5640_dev *sensor, bool on)
return ret;
}
/* --------------- Subdev Operations --------------- */
static int ov5640_s_power(struct v4l2_subdev *sd, int on)
static int ov5640_sensor_suspend(struct device *dev)
{
struct ov5640_dev *sensor = to_ov5640_dev(sd);
int ret = 0;
struct v4l2_subdev *sd = dev_get_drvdata(dev);
struct ov5640_dev *ov5640 = to_ov5640_dev(sd);
mutex_lock(&sensor->lock);
/*
* If the power count is modified from 0 to != 0 or from != 0 to 0,
* update the power state.
*/
if (sensor->power_count == !on) {
ret = ov5640_set_power(sensor, !!on);
if (ret)
goto out;
}
/* Update the power count. */
sensor->power_count += on ? 1 : -1;
WARN_ON(sensor->power_count < 0);
out:
mutex_unlock(&sensor->lock);
if (on && !ret && sensor->power_count == 1) {
/* restore controls */
ret = v4l2_ctrl_handler_setup(&sensor->ctrls.handler);
}
return ret;
return ov5640_set_power(ov5640, false);
}
static int ov5640_sensor_resume(struct device *dev)
{
struct v4l2_subdev *sd = dev_get_drvdata(dev);
struct ov5640_dev *ov5640 = to_ov5640_dev(sd);
return ov5640_set_power(ov5640, true);
}
/* --------------- Subdev Operations --------------- */
static int ov5640_try_frame_interval(struct ov5640_dev *sensor,
struct v4l2_fract *fi,
u32 width, u32 height)
@@ -3314,6 +3298,9 @@ static int ov5640_g_volatile_ctrl(struct v4l2_ctrl *ctrl)
/* v4l2_ctrl_lock() locks our own mutex */
if (!pm_runtime_get_if_in_use(&sensor->i2c_client->dev))
return 0;
switch (ctrl->id) {
case V4L2_CID_AUTOGAIN:
val = ov5640_get_gain(sensor);
@@ -3329,6 +3316,8 @@ static int ov5640_g_volatile_ctrl(struct v4l2_ctrl *ctrl)
break;
}
pm_runtime_put_autosuspend(&sensor->i2c_client->dev);
return 0;
}
@@ -3358,9 +3347,9 @@ static int ov5640_s_ctrl(struct v4l2_ctrl *ctrl)
/*
* If the device is not powered up by the host driver do
* not apply any controls to H/W at this time. Instead
* the controls will be restored right after power-up.
* the controls will be restored at start streaming time.
*/
if (sensor->power_count == 0)
if (!pm_runtime_get_if_in_use(&sensor->i2c_client->dev))
return 0;
switch (ctrl->id) {
@@ -3402,6 +3391,8 @@ static int ov5640_s_ctrl(struct v4l2_ctrl *ctrl)
break;
}
pm_runtime_put_autosuspend(&sensor->i2c_client->dev);
return ret;
}
@@ -3677,6 +3668,18 @@ static int ov5640_s_stream(struct v4l2_subdev *sd, int enable)
struct ov5640_dev *sensor = to_ov5640_dev(sd);
int ret = 0;
if (enable) {
ret = pm_runtime_resume_and_get(&sensor->i2c_client->dev);
if (ret < 0)
return ret;
ret = v4l2_ctrl_handler_setup(&sensor->ctrls.handler);
if (ret) {
pm_runtime_put(&sensor->i2c_client->dev);
return ret;
}
}
mutex_lock(&sensor->lock);
if (sensor->streaming == !enable) {
@@ -3701,8 +3704,13 @@ static int ov5640_s_stream(struct v4l2_subdev *sd, int enable)
if (!ret)
sensor->streaming = enable;
}
out:
mutex_unlock(&sensor->lock);
if (!enable || ret)
pm_runtime_put_autosuspend(&sensor->i2c_client->dev);
return ret;
}
@@ -3724,7 +3732,6 @@ static int ov5640_init_cfg(struct v4l2_subdev *sd,
}
static const struct v4l2_subdev_core_ops ov5640_core_ops = {
.s_power = ov5640_s_power,
.log_status = v4l2_ctrl_subdev_log_status,
.subscribe_event = v4l2_ctrl_subdev_subscribe_event,
.unsubscribe_event = v4l2_event_subdev_unsubscribe,
@@ -3770,26 +3777,20 @@ static int ov5640_check_chip_id(struct ov5640_dev *sensor)
int ret = 0;
u16 chip_id;
ret = ov5640_set_power_on(sensor);
if (ret)
return ret;
ret = ov5640_read_reg16(sensor, OV5640_REG_CHIP_ID, &chip_id);
if (ret) {
dev_err(&client->dev, "%s: failed to read chip identifier\n",
__func__);
goto power_off;
return ret;
}
if (chip_id != 0x5640) {
dev_err(&client->dev, "%s: wrong chip identifier, expected 0x5640, got 0x%x\n",
__func__, chip_id);
ret = -ENXIO;
return -ENXIO;
}
power_off:
ov5640_set_power_off(sensor);
return ret;
return 0;
}
static int ov5640_probe(struct i2c_client *client)
@@ -3880,26 +3881,43 @@ static int ov5640_probe(struct i2c_client *client)
ret = ov5640_get_regulators(sensor);
if (ret)
return ret;
goto entity_cleanup;
mutex_init(&sensor->lock);
ret = ov5640_check_chip_id(sensor);
if (ret)
goto entity_cleanup;
ret = ov5640_init_controls(sensor);
if (ret)
goto entity_cleanup;
ret = ov5640_sensor_resume(dev);
if (ret) {
dev_err(dev, "failed to power on\n");
goto entity_cleanup;
}
pm_runtime_set_active(dev);
pm_runtime_get_noresume(dev);
pm_runtime_enable(dev);
ret = ov5640_check_chip_id(sensor);
if (ret)
goto err_pm_runtime;
ret = v4l2_async_register_subdev_sensor(&sensor->sd);
if (ret)
goto free_ctrls;
goto err_pm_runtime;
pm_runtime_set_autosuspend_delay(dev, 1000);
pm_runtime_use_autosuspend(dev);
pm_runtime_put_autosuspend(dev);
return 0;
free_ctrls:
err_pm_runtime:
pm_runtime_put_noidle(dev);
pm_runtime_disable(dev);
v4l2_ctrl_handler_free(&sensor->ctrls.handler);
ov5640_sensor_suspend(dev);
entity_cleanup:
media_entity_cleanup(&sensor->sd.entity);
mutex_destroy(&sensor->lock);
@@ -3910,6 +3928,12 @@ static void ov5640_remove(struct i2c_client *client)
{
struct v4l2_subdev *sd = i2c_get_clientdata(client);
struct ov5640_dev *sensor = to_ov5640_dev(sd);
struct device *dev = &client->dev;
pm_runtime_disable(dev);
if (!pm_runtime_status_suspended(dev))
ov5640_sensor_suspend(dev);
pm_runtime_set_suspended(dev);
v4l2_async_unregister_subdev(&sensor->sd);
media_entity_cleanup(&sensor->sd.entity);
@@ -3917,6 +3941,10 @@ static void ov5640_remove(struct i2c_client *client)
mutex_destroy(&sensor->lock);
}
static const struct dev_pm_ops ov5640_pm_ops = {
SET_RUNTIME_PM_OPS(ov5640_sensor_suspend, ov5640_sensor_resume, NULL)
};
static const struct i2c_device_id ov5640_id[] = {
{"ov5640", 0},
{},
@@ -3933,6 +3961,7 @@ static struct i2c_driver ov5640_i2c_driver = {
.driver = {
.name = "ov5640",
.of_match_table = ov5640_dt_ids,
.pm = &ov5640_pm_ops,
},
.id_table = ov5640_id,
.probe_new = ov5640_probe,
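
The ov5640 changes are a textbook s_power-to-runtime-PM conversion: the manual power_count bookkeeping moves into the PM core, control state is restored when streaming starts rather than at power-up, and probe drops its initial reference with pm_runtime_put_autosuspend() so an idle sensor powers itself down. Reduced to a hedged template (mysensor is a hypothetical driver, not ov5640 code):

#include <linux/i2c.h>
#include <linux/pm_runtime.h>
#include <media/v4l2-subdev.h>

static int mysensor_s_stream(struct v4l2_subdev *sd, int enable)
{
	struct i2c_client *client = v4l2_get_subdevdata(sd);
	int ret;

	if (enable) {
		ret = pm_runtime_resume_and_get(&client->dev); /* power up */
		if (ret < 0)
			return ret;
		/* ... restore controls, program the mode, start streaming;
		 * on failure, pm_runtime_put(&client->dev) and bail ... */
		return 0;
	}

	/* ... stop streaming ... */
	pm_runtime_mark_last_busy(&client->dev);
	pm_runtime_put_autosuspend(&client->dev); /* suspend after delay */
	return 0;
}

Because the runtime suspend/resume callbacks map directly onto the sensor's power helpers, the device also powers down automatically between probe and the first stream once the autosuspend delay expires.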

View File

@@ -3034,11 +3034,13 @@ static int ov8865_probe(struct i2c_client *client)
&rate);
if (!ret && sensor->extclk) {
ret = clk_set_rate(sensor->extclk, rate);
if (ret)
return dev_err_probe(dev, ret,
"failed to set clock rate\n");
if (ret) {
dev_err_probe(dev, ret, "failed to set clock rate\n");
goto error_endpoint;
}
} else if (ret && !sensor->extclk) {
return dev_err_probe(dev, ret, "invalid clock config\n");
dev_err_probe(dev, ret, "invalid clock config\n");
goto error_endpoint;
}
sensor->extclk_rate = rate ? rate : clk_get_rate(sensor->extclk);

View File

@@ -581,7 +581,7 @@ static void __media_device_unregister_entity(struct media_entity *entity)
struct media_device *mdev = entity->graph_obj.mdev;
struct media_link *link, *tmp;
struct media_interface *intf;
unsigned int i;
struct media_pad *iter;
ida_free(&mdev->entity_internal_idx, entity->internal_idx);
@@ -597,8 +597,8 @@ static void __media_device_unregister_entity(struct media_entity *entity)
__media_entity_remove_links(entity);
/* Remove all pads that belong to this entity */
for (i = 0; i < entity->num_pads; i++)
media_gobj_destroy(&entity->pads[i].graph_obj);
media_entity_for_each_pad(entity, iter)
media_gobj_destroy(&iter->graph_obj);
/* Remove the entity */
media_gobj_destroy(&entity->graph_obj);
@@ -610,7 +610,7 @@ int __must_check media_device_register_entity(struct media_device *mdev,
struct media_entity *entity)
{
struct media_entity_notify *notify, *next;
unsigned int i;
struct media_pad *iter;
int ret;
if (entity->function == MEDIA_ENT_F_V4L2_SUBDEV_UNKNOWN ||
@@ -639,9 +639,8 @@ int __must_check media_device_register_entity(struct media_device *mdev,
media_gobj_create(mdev, MEDIA_GRAPH_ENTITY, &entity->graph_obj);
/* Initialize objects at the pads */
for (i = 0; i < entity->num_pads; i++)
media_gobj_create(mdev, MEDIA_GRAPH_PAD,
&entity->pads[i].graph_obj);
media_entity_for_each_pad(entity, iter)
media_gobj_create(mdev, MEDIA_GRAPH_PAD, &iter->graph_obj);
/* invoke entity_notify callbacks */
list_for_each_entry_safe(notify, next, &mdev->entity_notify, list)

View File

@@ -59,10 +59,12 @@ static inline const char *link_type_name(struct media_link *link)
}
}
__must_check int __media_entity_enum_init(struct media_entity_enum *ent_enum,
int idx_max)
__must_check int media_entity_enum_init(struct media_entity_enum *ent_enum,
struct media_device *mdev)
{
idx_max = ALIGN(idx_max, BITS_PER_LONG);
int idx_max;
idx_max = ALIGN(mdev->entity_internal_idx_max + 1, BITS_PER_LONG);
ent_enum->bmap = bitmap_zalloc(idx_max, GFP_KERNEL);
if (!ent_enum->bmap)
return -ENOMEM;
@@ -71,7 +73,7 @@ __must_check int __media_entity_enum_init(struct media_entity_enum *ent_enum,
return 0;
}
EXPORT_SYMBOL_GPL(__media_entity_enum_init);
EXPORT_SYMBOL_GPL(media_entity_enum_init);
void media_entity_enum_cleanup(struct media_entity_enum *ent_enum)
{
@@ -193,7 +195,8 @@ int media_entity_pads_init(struct media_entity *entity, u16 num_pads,
struct media_pad *pads)
{
struct media_device *mdev = entity->graph_obj.mdev;
unsigned int i;
struct media_pad *iter;
unsigned int i = 0;
if (num_pads >= MEDIA_ENTITY_MAX_PADS)
return -E2BIG;
@@ -204,12 +207,12 @@ int media_entity_pads_init(struct media_entity *entity, u16 num_pads,
if (mdev)
mutex_lock(&mdev->graph_mutex);
for (i = 0; i < num_pads; i++) {
pads[i].entity = entity;
pads[i].index = i;
media_entity_for_each_pad(entity, iter) {
iter->entity = entity;
iter->index = i++;
if (mdev)
media_gobj_create(mdev, MEDIA_GRAPH_PAD,
&entity->pads[i].graph_obj);
&iter->graph_obj);
}
if (mdev)
@@ -223,6 +226,33 @@ EXPORT_SYMBOL_GPL(media_entity_pads_init);
* Graph traversal
*/
/*
* This function checks the interdependency inside the entity between @pad0
* and @pad1. If two pads are interdependent they are part of the same pipeline
* and enabling one of the pads means that the other pad will become "locked"
* and doesn't allow configuration changes.
*
* This function uses the &media_entity_operations.has_pad_interdep() operation
* to check the dependency inside the entity between @pad0 and @pad1. If the
* has_pad_interdep operation is not implemented, all pads of the entity are
* considered to be interdependent.
*/
static bool media_entity_has_pad_interdep(struct media_entity *entity,
unsigned int pad0, unsigned int pad1)
{
if (pad0 >= entity->num_pads || pad1 >= entity->num_pads)
return false;
if (entity->pads[pad0].flags & entity->pads[pad1].flags &
(MEDIA_PAD_FL_SINK | MEDIA_PAD_FL_SOURCE))
return false;
if (!entity->ops || !entity->ops->has_pad_interdep)
return true;
return entity->ops->has_pad_interdep(entity, pad0, pad1);
}
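
Entities that route streams between specific pads can supply the has_pad_interdep operation referenced above. A hedged sketch for a hypothetical two-by-two crossbar, with pads 0 and 1 as sinks, 2 and 3 as sources, where input N feeds only output N:

#include <media/media-entity.h>

static bool myxbar_has_pad_interdep(struct media_entity *entity,
				    unsigned int pad0, unsigned int pad1)
{
	/*
	 * Pads 0 and 2 form one route, 1 and 3 the other, so only pads
	 * of matching parity are interdependent; starting a stream on
	 * one route leaves the other free for reconfiguration.
	 */
	return (pad0 % 2) == (pad1 % 2);
}

static const struct media_entity_operations myxbar_entity_ops = {
	.has_pad_interdep = myxbar_has_pad_interdep,
};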
static struct media_entity *
media_entity_other(struct media_entity *entity, struct media_link *link)
{
@@ -367,139 +397,435 @@ struct media_entity *media_graph_walk_next(struct media_graph *graph)
}
EXPORT_SYMBOL_GPL(media_graph_walk_next);
int media_entity_get_fwnode_pad(struct media_entity *entity,
struct fwnode_handle *fwnode,
unsigned long direction_flags)
{
struct fwnode_endpoint endpoint;
unsigned int i;
int ret;
if (!entity->ops || !entity->ops->get_fwnode_pad) {
for (i = 0; i < entity->num_pads; i++) {
if (entity->pads[i].flags & direction_flags)
return i;
}
return -ENXIO;
}
ret = fwnode_graph_parse_endpoint(fwnode, &endpoint);
if (ret)
return ret;
ret = entity->ops->get_fwnode_pad(entity, &endpoint);
if (ret < 0)
return ret;
if (ret >= entity->num_pads)
return -ENXIO;
if (!(entity->pads[ret].flags & direction_flags))
return -ENXIO;
return ret;
}
EXPORT_SYMBOL_GPL(media_entity_get_fwnode_pad);
/* -----------------------------------------------------------------------------
* Pipeline management
*/
__must_check int __media_pipeline_start(struct media_entity *entity,
struct media_pipeline *pipe)
/*
* The pipeline traversal stack stores pads that are reached during graph
* traversal, with a list of links to be visited to continue the traversal.
* When a new pad is reached, an entry is pushed on the top of the stack and
* points to the incoming pad and the first link of the entity.
*
* To find further pads in the pipeline, the traversal algorithm follows
* internal pad dependencies in the entity, and then links in the graph. It
* does so by iterating over all links of the entity, and following enabled
* links that originate from a pad that is internally connected to the incoming
* pad, as reported by the media_entity_has_pad_interdep() function.
*/
/**
* struct media_pipeline_walk_entry - Entry in the pipeline traversal stack
*
* @pad: The media pad being visited
* @links: Links left to be visited
*/
struct media_pipeline_walk_entry {
struct media_pad *pad;
struct list_head *links;
};
/**
* struct media_pipeline_walk - State used by the media pipeline traversal
* algorithm
*
* @mdev: The media device
* @stack: Depth-first search stack
* @stack.size: Number of allocated entries in @stack.entries
* @stack.top: Index of the top stack entry (-1 if the stack is empty)
* @stack.entries: Stack entries
*/
struct media_pipeline_walk {
struct media_device *mdev;
struct {
unsigned int size;
int top;
struct media_pipeline_walk_entry *entries;
} stack;
};
#define MEDIA_PIPELINE_STACK_GROW_STEP 16
static struct media_pipeline_walk_entry *
media_pipeline_walk_top(struct media_pipeline_walk *walk)
{
struct media_device *mdev = entity->graph_obj.mdev;
struct media_graph *graph = &pipe->graph;
struct media_entity *entity_err = entity;
struct media_link *link;
return &walk->stack.entries[walk->stack.top];
}
static bool media_pipeline_walk_empty(struct media_pipeline_walk *walk)
{
return walk->stack.top == -1;
}
/* Increase the stack size by MEDIA_PIPELINE_STACK_GROW_STEP elements. */
static int media_pipeline_walk_resize(struct media_pipeline_walk *walk)
{
struct media_pipeline_walk_entry *entries;
unsigned int new_size;
/* Safety check, to avoid stack overflows in case of bugs. */
if (walk->stack.size >= 256)
return -E2BIG;
new_size = walk->stack.size + MEDIA_PIPELINE_STACK_GROW_STEP;
entries = krealloc(walk->stack.entries,
new_size * sizeof(*walk->stack.entries),
GFP_KERNEL);
if (!entries)
return -ENOMEM;
walk->stack.entries = entries;
walk->stack.size = new_size;
return 0;
}
/* Push a new entry on the stack. */
static int media_pipeline_walk_push(struct media_pipeline_walk *walk,
struct media_pad *pad)
{
struct media_pipeline_walk_entry *entry;
int ret;
if (pipe->streaming_count) {
pipe->streaming_count++;
if (walk->stack.top + 1 >= walk->stack.size) {
ret = media_pipeline_walk_resize(walk);
if (ret)
return ret;
}
walk->stack.top++;
entry = media_pipeline_walk_top(walk);
entry->pad = pad;
entry->links = pad->entity->links.next;
dev_dbg(walk->mdev->dev,
"media pipeline: pushed entry %u: '%s':%u\n",
walk->stack.top, pad->entity->name, pad->index);
return 0;
}
/*
* Move the top entry link cursor to the next link. If all links of the entry
* have been visited, pop the entry itself.
*/
static void media_pipeline_walk_pop(struct media_pipeline_walk *walk)
{
struct media_pipeline_walk_entry *entry;
if (WARN_ON(walk->stack.top < 0))
return;
entry = media_pipeline_walk_top(walk);
if (entry->links->next == &entry->pad->entity->links) {
dev_dbg(walk->mdev->dev,
"media pipeline: entry %u has no more links, popping\n",
walk->stack.top);
walk->stack.top--;
return;
}
entry->links = entry->links->next;
dev_dbg(walk->mdev->dev,
"media pipeline: moved entry %u to next link\n",
walk->stack.top);
}
/* Free all memory allocated while walking the pipeline. */
static void media_pipeline_walk_destroy(struct media_pipeline_walk *walk)
{
kfree(walk->stack.entries);
}
/* Add a pad to the pipeline and push it to the stack. */
static int media_pipeline_add_pad(struct media_pipeline *pipe,
struct media_pipeline_walk *walk,
struct media_pad *pad)
{
struct media_pipeline_pad *ppad;
list_for_each_entry(ppad, &pipe->pads, list) {
if (ppad->pad == pad) {
dev_dbg(pad->graph_obj.mdev->dev,
"media pipeline: already contains pad '%s':%u\n",
pad->entity->name, pad->index);
return 0;
}
}
ppad = kzalloc(sizeof(*ppad), GFP_KERNEL);
if (!ppad)
return -ENOMEM;
ppad->pipe = pipe;
ppad->pad = pad;
list_add_tail(&ppad->list, &pipe->pads);
dev_dbg(pad->graph_obj.mdev->dev,
"media pipeline: added pad '%s':%u\n",
pad->entity->name, pad->index);
return media_pipeline_walk_push(walk, pad);
}
/* Explore the next link of the entity at the top of the stack. */
static int media_pipeline_explore_next_link(struct media_pipeline *pipe,
struct media_pipeline_walk *walk)
{
struct media_pipeline_walk_entry *entry = media_pipeline_walk_top(walk);
struct media_pad *pad;
struct media_link *link;
struct media_pad *local;
struct media_pad *remote;
int ret;
pad = entry->pad;
link = list_entry(entry->links, typeof(*link), list);
media_pipeline_walk_pop(walk);
dev_dbg(walk->mdev->dev,
"media pipeline: exploring link '%s':%u -> '%s':%u\n",
link->source->entity->name, link->source->index,
link->sink->entity->name, link->sink->index);
/* Skip links that are not enabled. */
if (!(link->flags & MEDIA_LNK_FL_ENABLED)) {
dev_dbg(walk->mdev->dev,
"media pipeline: skipping link (disabled)\n");
return 0;
}
ret = media_graph_walk_init(&pipe->graph, mdev);
/* Get the local pad and remote pad. */
if (link->source->entity == pad->entity) {
local = link->source;
remote = link->sink;
} else {
local = link->sink;
remote = link->source;
}
/*
* Skip links that originate from a different pad than the incoming pad
* that is not connected internally in the entity to the incoming pad.
*/
if (pad != local &&
!media_entity_has_pad_interdep(pad->entity, pad->index, local->index)) {
dev_dbg(walk->mdev->dev,
"media pipeline: skipping link (no route)\n");
return 0;
}
/*
* Add the local and remote pads of the link to the pipeline and push
* them to the stack, if they're not already present.
*/
ret = media_pipeline_add_pad(pipe, walk, local);
if (ret)
return ret;
media_graph_walk_start(&pipe->graph, entity);
ret = media_pipeline_add_pad(pipe, walk, remote);
if (ret)
return ret;
while ((entity = media_graph_walk_next(graph))) {
DECLARE_BITMAP(active, MEDIA_ENTITY_MAX_PADS);
DECLARE_BITMAP(has_no_links, MEDIA_ENTITY_MAX_PADS);
return 0;
}
if (entity->pipe && entity->pipe != pipe) {
pr_err("Pipe active for %s. Can't start for %s\n",
entity->name,
entity_err->name);
static void media_pipeline_cleanup(struct media_pipeline *pipe)
{
while (!list_empty(&pipe->pads)) {
struct media_pipeline_pad *ppad;
ppad = list_first_entry(&pipe->pads, typeof(*ppad), list);
list_del(&ppad->list);
kfree(ppad);
}
}
static int media_pipeline_populate(struct media_pipeline *pipe,
struct media_pad *pad)
{
struct media_pipeline_walk walk = { };
struct media_pipeline_pad *ppad;
int ret;
/*
* Populate the media pipeline by walking the media graph, starting
* from @pad.
*/
INIT_LIST_HEAD(&pipe->pads);
pipe->mdev = pad->graph_obj.mdev;
walk.mdev = pipe->mdev;
walk.stack.top = -1;
ret = media_pipeline_add_pad(pipe, &walk, pad);
if (ret)
goto done;
/*
* Use a depth-first search algorithm: as long as the stack is not
* empty, explore the next link of the top entry. The
* media_pipeline_explore_next_link() function will either move to the
* next link, pop the entry if fully visited, or add new entries on
* top.
*/
while (!media_pipeline_walk_empty(&walk)) {
ret = media_pipeline_explore_next_link(pipe, &walk);
if (ret)
goto done;
}
dev_dbg(pad->graph_obj.mdev->dev,
"media pipeline populated, found pads:\n");
list_for_each_entry(ppad, &pipe->pads, list)
dev_dbg(pad->graph_obj.mdev->dev, "- '%s':%u\n",
ppad->pad->entity->name, ppad->pad->index);
WARN_ON(walk.stack.top != -1);
ret = 0;
done:
media_pipeline_walk_destroy(&walk);
if (ret)
media_pipeline_cleanup(pipe);
return ret;
}
__must_check int __media_pipeline_start(struct media_pad *pad,
struct media_pipeline *pipe)
{
struct media_device *mdev = pad->entity->graph_obj.mdev;
struct media_pipeline_pad *err_ppad;
struct media_pipeline_pad *ppad;
int ret;
lockdep_assert_held(&mdev->graph_mutex);
/*
* If the entity is already part of a pipeline, that pipeline must
* be the same as the pipe given to media_pipeline_start().
*/
if (WARN_ON(pad->pipe && pad->pipe != pipe))
return -EINVAL;
/*
* If the pipeline has already been started, it is guaranteed to be
* valid, so just increase the start count.
*/
if (pipe->start_count) {
pipe->start_count++;
return 0;
}
/*
* Populate the pipeline. This populates the media_pipeline pads list
* with media_pipeline_pad instances for each pad found during graph
* walk.
*/
ret = media_pipeline_populate(pipe, pad);
if (ret)
return ret;
/*
* Now that all the pads in the pipeline have been gathered, perform
* the validation steps.
*/
list_for_each_entry(ppad, &pipe->pads, list) {
struct media_pad *pad = ppad->pad;
struct media_entity *entity = pad->entity;
bool has_enabled_link = false;
bool has_link = false;
struct media_link *link;
dev_dbg(mdev->dev, "Validating pad '%s':%u\n", pad->entity->name,
pad->index);
/*
* 1. Ensure that the pad doesn't already belong to a different
* pipeline.
*/
if (pad->pipe) {
dev_dbg(mdev->dev, "Failed to start pipeline: pad '%s':%u busy\n",
pad->entity->name, pad->index);
ret = -EBUSY;
goto error;
}
/* Already streaming --- no need to check. */
if (entity->pipe)
continue;
entity->pipe = pipe;
if (!entity->ops || !entity->ops->link_validate)
continue;
bitmap_zero(active, entity->num_pads);
bitmap_fill(has_no_links, entity->num_pads);
/*
* 2. Validate all active links whose sink is the current pad.
* Validation of the source pads is performed in the context of
* the connected sink pad to avoid duplicating checks.
*/
for_each_media_entity_data_link(entity, link) {
struct media_pad *pad = link->sink->entity == entity
? link->sink : link->source;
/* Skip links unrelated to the current pad. */
if (link->sink != pad && link->source != pad)
continue;
/* Mark that a pad is connected by a link. */
bitmap_clear(has_no_links, pad->index, 1);
/* Record if the pad has links and enabled links. */
if (link->flags & MEDIA_LNK_FL_ENABLED)
has_enabled_link = true;
has_link = true;
/*
* Pads that either do not need to connect or
* are connected through an enabled link are
* fine.
* Validate the link if it's enabled and has the
* current pad as its sink.
*/
if (!(pad->flags & MEDIA_PAD_FL_MUST_CONNECT) ||
link->flags & MEDIA_LNK_FL_ENABLED)
bitmap_set(active, pad->index, 1);
if (!(link->flags & MEDIA_LNK_FL_ENABLED))
continue;
/*
* Link validation will only take place for
* sink ends of the link that are enabled.
*/
if (link->sink != pad ||
!(link->flags & MEDIA_LNK_FL_ENABLED))
if (link->sink != pad)
continue;
if (!entity->ops || !entity->ops->link_validate)
continue;
ret = entity->ops->link_validate(link);
if (ret < 0 && ret != -ENOIOCTLCMD) {
dev_dbg(entity->graph_obj.mdev->dev,
"link validation failed for '%s':%u -> '%s':%u, error %d\n",
if (ret) {
dev_dbg(mdev->dev,
"Link '%s':%u -> '%s':%u failed validation: %d\n",
link->source->entity->name,
link->source->index,
entity->name, link->sink->index, ret);
link->sink->entity->name,
link->sink->index, ret);
goto error;
}
dev_dbg(mdev->dev,
"Link '%s':%u -> '%s':%u is valid\n",
link->source->entity->name,
link->source->index,
link->sink->entity->name,
link->sink->index);
}
/* Either no links or validated links are fine. */
bitmap_or(active, active, has_no_links, entity->num_pads);
if (!bitmap_full(active, entity->num_pads)) {
/*
* 3. If the pad has the MEDIA_PAD_FL_MUST_CONNECT flag set,
* ensure that it has either no link or an enabled link.
*/
if ((pad->flags & MEDIA_PAD_FL_MUST_CONNECT) && has_link &&
!has_enabled_link) {
dev_dbg(mdev->dev,
"Pad '%s':%u must be connected by an enabled link\n",
pad->entity->name, pad->index);
ret = -ENOLINK;
dev_dbg(entity->graph_obj.mdev->dev,
"'%s':%u must be connected by an enabled link\n",
entity->name,
(unsigned)find_first_zero_bit(
active, entity->num_pads));
goto error;
}
/* Validation passed, store the pipe pointer in the pad. */
pad->pipe = pipe;
}
pipe->streaming_count++;
pipe->start_count++;
return 0;
@@ -508,42 +834,37 @@ __must_check int __media_pipeline_start(struct media_entity *entity,
* Link validation on graph failed. We revert what we did and
* return the error.
*/
media_graph_walk_start(graph, entity_err);
while ((entity_err = media_graph_walk_next(graph))) {
entity_err->pipe = NULL;
/*
* We haven't started entities further than this so we quit
* here.
*/
if (entity_err == entity)
list_for_each_entry(err_ppad, &pipe->pads, list) {
if (err_ppad == ppad)
break;
err_ppad->pad->pipe = NULL;
}
media_graph_walk_cleanup(graph);
media_pipeline_cleanup(pipe);
return ret;
}
EXPORT_SYMBOL_GPL(__media_pipeline_start);
__must_check int media_pipeline_start(struct media_entity *entity,
__must_check int media_pipeline_start(struct media_pad *pad,
struct media_pipeline *pipe)
{
struct media_device *mdev = entity->graph_obj.mdev;
struct media_device *mdev = pad->entity->graph_obj.mdev;
int ret;
mutex_lock(&mdev->graph_mutex);
ret = __media_pipeline_start(entity, pipe);
ret = __media_pipeline_start(pad, pipe);
mutex_unlock(&mdev->graph_mutex);
return ret;
}
EXPORT_SYMBOL_GPL(media_pipeline_start);
void __media_pipeline_stop(struct media_entity *entity)
void __media_pipeline_stop(struct media_pad *pad)
{
struct media_graph *graph = &entity->pipe->graph;
struct media_pipeline *pipe = entity->pipe;
struct media_pipeline *pipe = pad->pipe;
struct media_pipeline_pad *ppad;
/*
* If the following check fails, the driver has performed an
@@ -552,29 +873,65 @@ void __media_pipeline_stop(struct media_entity *entity)
if (WARN_ON(!pipe))
return;
if (--pipe->streaming_count)
if (--pipe->start_count)
return;
media_graph_walk_start(graph, entity);
list_for_each_entry(ppad, &pipe->pads, list)
ppad->pad->pipe = NULL;
while ((entity = media_graph_walk_next(graph)))
entity->pipe = NULL;
media_graph_walk_cleanup(graph);
media_pipeline_cleanup(pipe);
if (pipe->allocated)
kfree(pipe);
}
EXPORT_SYMBOL_GPL(__media_pipeline_stop);
void media_pipeline_stop(struct media_entity *entity)
void media_pipeline_stop(struct media_pad *pad)
{
struct media_device *mdev = entity->graph_obj.mdev;
struct media_device *mdev = pad->entity->graph_obj.mdev;
mutex_lock(&mdev->graph_mutex);
__media_pipeline_stop(entity);
__media_pipeline_stop(pad);
mutex_unlock(&mdev->graph_mutex);
}
EXPORT_SYMBOL_GPL(media_pipeline_stop);
__must_check int media_pipeline_alloc_start(struct media_pad *pad)
{
struct media_device *mdev = pad->entity->graph_obj.mdev;
struct media_pipeline *new_pipe = NULL;
struct media_pipeline *pipe;
int ret;
mutex_lock(&mdev->graph_mutex);
/*
* Is the entity already part of a pipeline? If not, we need to allocate
* a pipe.
*/
pipe = media_pad_pipeline(pad);
if (!pipe) {
new_pipe = kzalloc(sizeof(*new_pipe), GFP_KERNEL);
if (!new_pipe) {
ret = -ENOMEM;
goto out;
}
pipe = new_pipe;
pipe->allocated = true;
}
ret = __media_pipeline_start(pad, pipe);
if (ret)
kfree(new_pipe);
out:
mutex_unlock(&mdev->graph_mutex);
return ret;
}
EXPORT_SYMBOL_GPL(media_pipeline_alloc_start);
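
For drivers with one straightforward pipeline this helper removes the need to embed a struct media_pipeline at all: the pipe is allocated on the first start and, because pipe->allocated is set, __media_pipeline_stop() frees it when the last user stops. A hedged usage sketch inside a vb2 start_streaming callback; mycap_dev and mycap_hw_start() are hypothetical:

#include <media/media-entity.h>
#include <media/v4l2-dev.h>
#include <media/videobuf2-core.h>

struct mycap_dev {
	struct video_device vdev;	/* pads assumed already initialized */
	/* ... */
};

static int mycap_hw_start(struct mycap_dev *cap);	/* hypothetical */

static int mycap_start_streaming(struct vb2_queue *vq, unsigned int count)
{
	struct mycap_dev *cap = vb2_get_drv_priv(vq);
	struct media_pad *pad = &cap->vdev.entity.pads[0];
	int ret;

	ret = media_pipeline_alloc_start(pad);	/* allocates on first use */
	if (ret)
		return ret;

	ret = mycap_hw_start(cap);		/* start DMA, etc. */
	if (ret)
		media_pipeline_stop(pad);	/* drops and frees the pipe */
	return ret;
}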
/* -----------------------------------------------------------------------------
* Links management
*/
@@ -829,7 +1186,7 @@ int __media_entity_setup_link(struct media_link *link, u32 flags)
{
const u32 mask = MEDIA_LNK_FL_ENABLED;
struct media_device *mdev;
struct media_entity *source, *sink;
struct media_pad *source, *sink;
int ret = -EBUSY;
if (link == NULL)
@@ -845,12 +1202,11 @@ int __media_entity_setup_link(struct media_link *link, u32 flags)
if (link->flags == flags)
return 0;
source = link->source->entity;
sink = link->sink->entity;
source = link->source;
sink = link->sink;
if (!(link->flags & MEDIA_LNK_FL_DYNAMIC) &&
(media_entity_is_streaming(source) ||
media_entity_is_streaming(sink)))
(media_pad_is_streaming(source) || media_pad_is_streaming(sink)))
return -EBUSY;
mdev = source->graph_obj.mdev;
@@ -991,6 +1347,60 @@ struct media_pad *media_pad_remote_pad_unique(const struct media_pad *pad)
}
EXPORT_SYMBOL_GPL(media_pad_remote_pad_unique);
int media_entity_get_fwnode_pad(struct media_entity *entity,
struct fwnode_handle *fwnode,
unsigned long direction_flags)
{
struct fwnode_endpoint endpoint;
unsigned int i;
int ret;
if (!entity->ops || !entity->ops->get_fwnode_pad) {
for (i = 0; i < entity->num_pads; i++) {
if (entity->pads[i].flags & direction_flags)
return i;
}
return -ENXIO;
}
ret = fwnode_graph_parse_endpoint(fwnode, &endpoint);
if (ret)
return ret;
ret = entity->ops->get_fwnode_pad(entity, &endpoint);
if (ret < 0)
return ret;
if (ret >= entity->num_pads)
return -ENXIO;
if (!(entity->pads[ret].flags & direction_flags))
return -ENXIO;
return ret;
}
EXPORT_SYMBOL_GPL(media_entity_get_fwnode_pad);
struct media_pipeline *media_entity_pipeline(struct media_entity *entity)
{
struct media_pad *pad;
media_entity_for_each_pad(entity, pad) {
if (pad->pipe)
return pad->pipe;
}
return NULL;
}
EXPORT_SYMBOL_GPL(media_entity_pipeline);
struct media_pipeline *media_pad_pipeline(struct media_pad *pad)
{
return pad->pipe;
}
EXPORT_SYMBOL_GPL(media_pad_pipeline);
static void media_interface_init(struct media_device *mdev,
struct media_interface *intf,
u32 gobj_type,

View File

@@ -339,7 +339,7 @@ void cx18_av_std_setup(struct cx18 *cx)
/*
* For a 13.5 Mpps clock and 15,625 Hz line rate, a line is
* is 864 pixels = 720 active + 144 blanking. ITU-R BT.601
* 864 pixels = 720 active + 144 blanking. ITU-R BT.601
* specifies 12 luma clock periods or ~ 0.9 * 13.5 Mpps after
* the end of active video to start a horizontal line, so that
* leaves 132 pixels of hblank to ignore.
@@ -399,7 +399,7 @@ void cx18_av_std_setup(struct cx18 *cx)
/*
* For a 13.5 Mpps clock and 15,734.26 Hz line rate, a line is
* is 858 pixels = 720 active + 138 blanking. The Hsync leading
* 858 pixels = 720 active + 138 blanking. The Hsync leading
* edge should happen 1.2 us * 13.5 Mpps ~= 16 pixels after the
* end of active video, leaving 122 pixels of hblank to ignore
* before active video starts.

View File

@@ -586,7 +586,7 @@ void cx88_i2c_init_ir(struct cx88_core *core)
{
struct i2c_board_info info;
static const unsigned short default_addr_list[] = {
0x18, 0x6b, 0x71,
0x18, 0x33, 0x6b, 0x71,
I2C_CLIENT_END
};
static const unsigned short pvr2000_addr_list[] = {

View File

@@ -1388,6 +1388,7 @@ static int cx8800_initdev(struct pci_dev *pci_dev,
}
fallthrough;
case CX88_BOARD_DVICO_FUSIONHDTV_5_PCI_NANO:
case CX88_BOARD_NOTONLYTV_LV3H:
request_module("ir-kbd-i2c");
}

View File

@@ -989,7 +989,7 @@ static int cio2_vb2_start_streaming(struct vb2_queue *vq, unsigned int count)
return r;
}
r = media_pipeline_start(&q->vdev.entity, &q->pipe);
r = video_device_pipeline_start(&q->vdev, &q->pipe);
if (r)
goto fail_pipeline;
@@ -1009,7 +1009,7 @@ static int cio2_vb2_start_streaming(struct vb2_queue *vq, unsigned int count)
fail_csi2_subdev:
cio2_hw_exit(cio2, q);
fail_hw:
media_pipeline_stop(&q->vdev.entity);
video_device_pipeline_stop(&q->vdev);
fail_pipeline:
dev_dbg(dev, "failed to start streaming (%d)\n", r);
cio2_vb2_return_all_buffers(q, VB2_BUF_STATE_QUEUED);
@@ -1030,7 +1030,7 @@ static void cio2_vb2_stop_streaming(struct vb2_queue *vq)
cio2_hw_exit(cio2, q);
synchronize_irq(cio2->pci_dev->irq);
cio2_vb2_return_all_buffers(q, VB2_BUF_STATE_ERROR);
media_pipeline_stop(&q->vdev.entity);
video_device_pipeline_stop(&q->vdev);
pm_runtime_put(dev);
cio2->streaming = false;
}

View File

@@ -603,6 +603,10 @@ static int vpu_v4l2_release(struct vpu_inst *inst)
inst->workqueue = NULL;
}
if (inst->fh.m2m_ctx) {
v4l2_m2m_ctx_release(inst->fh.m2m_ctx);
inst->fh.m2m_ctx = NULL;
}
v4l2_ctrl_handler_free(&inst->ctrl_handler);
mutex_destroy(&inst->lock);
v4l2_fh_del(&inst->fh);
@@ -685,13 +689,6 @@ int vpu_v4l2_close(struct file *file)
vpu_trace(vpu->dev, "tgid = %d, pid = %d, inst = %p\n", inst->tgid, inst->pid, inst);
vpu_inst_lock(inst);
if (inst->fh.m2m_ctx) {
v4l2_m2m_ctx_release(inst->fh.m2m_ctx);
inst->fh.m2m_ctx = NULL;
}
vpu_inst_unlock(inst);
call_void_vop(inst, release);
vpu_inst_unregister(inst);
vpu_inst_put(inst);
