ASoC: adau1372: Fix error handling in adau1372_set_power()

Jihed Chaibi <jihed.chaibi.dev@gmail.com> says:

adau1372_set_power() had two related error handling issues in its enable
path: clk_prepare_enable() was called but its return value was discarded,
and adau1372_enable_pll() was a void function that silently swallowed lock
failures, leaving mclk enabled and adau1372->enabled set to true despite
the device being in a broken state.

Patch 1 fixes the unchecked clk_prepare_enable() by making
adau1372_set_power() return int and propagating the error.

Patch 2 converts adau1372_enable_pll() to return int and adds a full
unwind path in adau1372_set_power() for PLL lock failures, reverting
the regcache, GPIO power-down, and clock state changes.
Merged by: Mark Brown
Date: 2026-03-26 10:33:38 +00:00
333 changed files with 3597 additions and 1522 deletions

View File

@@ -327,6 +327,7 @@ Henrik Rydberg <rydberg@bitmath.org>
Herbert Xu <herbert@gondor.apana.org.au>
Huacai Chen <chenhuacai@kernel.org> <chenhc@lemote.com>
Huacai Chen <chenhuacai@kernel.org> <chenhuacai@loongson.cn>
Ignat Korchagin <ignat@linux.win> <ignat@cloudflare.com>
Ike Panhc <ikepanhc@gmail.com> <ike.pan@canonical.com>
J. Bruce Fields <bfields@fieldses.org> <bfields@redhat.com>
J. Bruce Fields <bfields@fieldses.org> <bfields@citi.umich.edu>

View File

@@ -336,6 +336,8 @@ command line arguments:
- ``--list_tests_attr``: If set, lists all tests that will be run and all of their
attributes.
- ``--list_suites``: If set, lists all suites that will be run.
Command-line completion
==============================

View File

@@ -19,9 +19,6 @@ description:
Flash sub nodes describe the memory range and optional per-flash
properties.
allOf:
- $ref: mtd.yaml#
properties:
compatible:
const: st,spear600-smi
@@ -42,14 +39,29 @@ properties:
$ref: /schemas/types.yaml#/definitions/uint32
description: Functional clock rate of the SMI controller in Hz.
st,smi-fast-mode:
type: boolean
description: Indicates that the attached flash supports fast read mode.
patternProperties:
"^flash@.*$":
$ref: /schemas/mtd/mtd.yaml#
properties:
reg:
maxItems: 1
st,smi-fast-mode:
type: boolean
description: Indicates that the attached flash supports fast read mode.
unevaluatedProperties: false
required:
- reg
required:
- compatible
- reg
- clock-rate
- "#address-cells"
- "#size-cells"
unevaluatedProperties: false
@@ -64,7 +76,7 @@ examples:
interrupts = <12>;
clock-rate = <50000000>; /* 50 MHz */
flash@f8000000 {
flash@fc000000 {
reg = <0xfc000000 0x1000>;
st,smi-fast-mode;
};

View File

@@ -168,7 +168,7 @@ properties:
offset from voltage set to regulator.
regulator-uv-protection-microvolt:
description: Set over under voltage protection limit. This is a limit where
description: Set under voltage protection limit. This is a limit where
hardware performs emergency shutdown. Zero can be passed to disable
protection and value '1' indicates that protection should be enabled but
limit setting can be omitted. Limit is given as microvolt offset from
@@ -182,7 +182,7 @@ properties:
is given as microvolt offset from voltage set to regulator.
regulator-uv-warn-microvolt:
description: Set over under voltage warning limit. This is a limit where
description: Set under voltage warning limit. This is a limit where
hardware is assumed still to be functional but approaching limit where
it gets damaged. Recovery actions should be initiated. Zero can be passed
to disable detection and value '1' indicates that detection should

View File

@@ -99,3 +99,51 @@ of the driver is decremented. All symlinks between the two are removed.
When a driver is removed, the list of devices that it supports is
iterated over, and the driver's remove callback is called for each
one. The device is removed from that list and the symlinks removed.
Driver Override
~~~~~~~~~~~~~~~
Userspace may override the standard matching by writing a driver name to
a device's ``driver_override`` sysfs attribute. When set, only a driver
whose name matches the override will be considered during binding. This
bypasses all bus-specific matching (OF, ACPI, ID tables, etc.).
The override may be cleared by writing an empty string, which returns
the device to standard matching rules. Writing to ``driver_override``
does not automatically unbind the device from its current driver or
make any attempt to load the specified driver.
Buses opt into this mechanism by setting the ``driver_override`` flag in
their ``struct bus_type``::

  const struct bus_type example_bus_type = {
          ...
          .driver_override = true,
  };

When the flag is set, the driver core automatically creates the
``driver_override`` sysfs attribute for every device on that bus.
The bus's ``match()`` callback should check the override before performing
its own matching, using ``device_match_driver_override()``::

  static int example_match(struct device *dev, const struct device_driver *drv)
  {
          int ret;

          ret = device_match_driver_override(dev, drv);
          if (ret >= 0)
                  return ret;

          /* Fall through to bus-specific matching... */
  }

``device_match_driver_override()`` returns > 0 if the override matches
the given driver, 0 if the override is set but does not match, or < 0 if
no override is set at all.
Additional helpers are available:
- ``device_set_driver_override()`` - set or clear the override from kernel code.
- ``device_has_driver_override()`` - check whether an override is set.

View File

@@ -247,8 +247,8 @@ operations:
flags: [admin-perm]
do:
pre: net-shaper-nl-pre-doit
post: net-shaper-nl-post-doit
pre: net-shaper-nl-pre-doit-write
post: net-shaper-nl-post-doit-write
request:
attributes:
- ifindex
@@ -278,8 +278,8 @@ operations:
flags: [admin-perm]
do:
pre: net-shaper-nl-pre-doit
post: net-shaper-nl-post-doit
pre: net-shaper-nl-pre-doit-write
post: net-shaper-nl-post-doit-write
request:
attributes: *ns-binding
@@ -309,8 +309,8 @@ operations:
flags: [admin-perm]
do:
pre: net-shaper-nl-pre-doit
post: net-shaper-nl-post-doit
pre: net-shaper-nl-pre-doit-write
post: net-shaper-nl-post-doit-write
request:
attributes:
- ifindex

View File

@@ -4022,7 +4022,7 @@ F: drivers/hwmon/asus_wmi_sensors.c
ASYMMETRIC KEYS
M: David Howells <dhowells@redhat.com>
M: Lukas Wunner <lukas@wunner.de>
M: Ignat Korchagin <ignat@cloudflare.com>
M: Ignat Korchagin <ignat@linux.win>
L: keyrings@vger.kernel.org
L: linux-crypto@vger.kernel.org
S: Maintained
@@ -4035,7 +4035,7 @@ F: include/linux/verification.h
ASYMMETRIC KEYS - ECDSA
M: Lukas Wunner <lukas@wunner.de>
M: Ignat Korchagin <ignat@cloudflare.com>
M: Ignat Korchagin <ignat@linux.win>
R: Stefan Berger <stefanb@linux.ibm.com>
L: linux-crypto@vger.kernel.org
S: Maintained
@@ -4045,14 +4045,14 @@ F: include/crypto/ecc*
ASYMMETRIC KEYS - GOST
M: Lukas Wunner <lukas@wunner.de>
M: Ignat Korchagin <ignat@cloudflare.com>
M: Ignat Korchagin <ignat@linux.win>
L: linux-crypto@vger.kernel.org
S: Odd fixes
F: crypto/ecrdsa*
ASYMMETRIC KEYS - RSA
M: Lukas Wunner <lukas@wunner.de>
M: Ignat Korchagin <ignat@cloudflare.com>
M: Ignat Korchagin <ignat@linux.win>
L: linux-crypto@vger.kernel.org
S: Maintained
F: crypto/rsa*
@@ -7998,7 +7998,9 @@ F: Documentation/devicetree/bindings/display/himax,hx8357.yaml
F: drivers/gpu/drm/tiny/hx8357d.c
DRM DRIVER FOR HYPERV SYNTHETIC VIDEO DEVICE
M: Deepak Rawat <drawat.floss@gmail.com>
M: Dexuan Cui <decui@microsoft.com>
M: Long Li <longli@microsoft.com>
M: Saurabh Sengar <ssengar@linux.microsoft.com>
L: linux-hyperv@vger.kernel.org
L: dri-devel@lists.freedesktop.org
S: Maintained
@@ -24900,9 +24902,9 @@ F: drivers/clk/spear/
F: drivers/pinctrl/spear/
SPI NOR SUBSYSTEM
M: Tudor Ambarus <tudor.ambarus@linaro.org>
M: Pratyush Yadav <pratyush@kernel.org>
M: Michael Walle <mwalle@kernel.org>
R: Takahiro Kuwano <takahiro.kuwano@infineon.com>
L: linux-mtd@lists.infradead.org
S: Maintained
W: http://www.linux-mtd.infradead.org/

View File

@@ -2,7 +2,7 @@
VERSION = 7
PATCHLEVEL = 0
SUBLEVEL = 0
EXTRAVERSION = -rc4
EXTRAVERSION = -rc5
NAME = Baby Opossum Posse
# *DOCUMENTATION*

View File

@@ -279,7 +279,6 @@ CONFIG_TI_CPSW_SWITCHDEV=y
CONFIG_TI_CPTS=y
CONFIG_TI_KEYSTONE_NETCP=y
CONFIG_TI_KEYSTONE_NETCP_ETHSS=y
CONFIG_TI_PRUSS=m
CONFIG_TI_PRUETH=m
CONFIG_XILINX_EMACLITE=y
CONFIG_SFP=m

View File

@@ -698,7 +698,7 @@ scif0: serial@c0700000 {
compatible = "renesas,scif-r8a78000",
"renesas,rcar-gen5-scif", "renesas,scif";
reg = <0 0xc0700000 0 0x40>;
interrupts = <GIC_SPI 4074 IRQ_TYPE_LEVEL_HIGH>;
interrupts = <GIC_ESPI 10 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&dummy_clk_sgasyncd16>, <&dummy_clk_sgasyncd16>, <&scif_clk>;
clock-names = "fck", "brg_int", "scif_clk";
status = "disabled";
@@ -708,7 +708,7 @@ scif1: serial@c0704000 {
compatible = "renesas,scif-r8a78000",
"renesas,rcar-gen5-scif", "renesas,scif";
reg = <0 0xc0704000 0 0x40>;
interrupts = <GIC_SPI 4075 IRQ_TYPE_LEVEL_HIGH>;
interrupts = <GIC_ESPI 11 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&dummy_clk_sgasyncd16>, <&dummy_clk_sgasyncd16>, <&scif_clk>;
clock-names = "fck", "brg_int", "scif_clk";
status = "disabled";
@@ -718,7 +718,7 @@ scif3: serial@c0708000 {
compatible = "renesas,scif-r8a78000",
"renesas,rcar-gen5-scif", "renesas,scif";
reg = <0 0xc0708000 0 0x40>;
interrupts = <GIC_SPI 4076 IRQ_TYPE_LEVEL_HIGH>;
interrupts = <GIC_ESPI 12 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&dummy_clk_sgasyncd16>, <&dummy_clk_sgasyncd16>, <&scif_clk>;
clock-names = "fck", "brg_int", "scif_clk";
status = "disabled";
@@ -728,7 +728,7 @@ scif4: serial@c070c000 {
compatible = "renesas,scif-r8a78000",
"renesas,rcar-gen5-scif", "renesas,scif";
reg = <0 0xc070c000 0 0x40>;
interrupts = <GIC_SPI 4077 IRQ_TYPE_LEVEL_HIGH>;
interrupts = <GIC_ESPI 13 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&dummy_clk_sgasyncd16>, <&dummy_clk_sgasyncd16>, <&scif_clk>;
clock-names = "fck", "brg_int", "scif_clk";
status = "disabled";
@@ -738,7 +738,7 @@ hscif0: serial@c0710000 {
compatible = "renesas,hscif-r8a78000",
"renesas,rcar-gen5-hscif", "renesas,hscif";
reg = <0 0xc0710000 0 0x60>;
interrupts = <GIC_SPI 4078 IRQ_TYPE_LEVEL_HIGH>;
interrupts = <GIC_ESPI 14 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&dummy_clk_sgasyncd4>, <&dummy_clk_sgasyncd4>, <&scif_clk>;
clock-names = "fck", "brg_int", "scif_clk";
status = "disabled";
@@ -748,7 +748,7 @@ hscif1: serial@c0714000 {
compatible = "renesas,hscif-r8a78000",
"renesas,rcar-gen5-hscif", "renesas,hscif";
reg = <0 0xc0714000 0 0x60>;
interrupts = <GIC_SPI 4079 IRQ_TYPE_LEVEL_HIGH>;
interrupts = <GIC_ESPI 15 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&dummy_clk_sgasyncd4>, <&dummy_clk_sgasyncd4>, <&scif_clk>;
clock-names = "fck", "brg_int", "scif_clk";
status = "disabled";
@@ -758,7 +758,7 @@ hscif2: serial@c0718000 {
compatible = "renesas,hscif-r8a78000",
"renesas,rcar-gen5-hscif", "renesas,hscif";
reg = <0 0xc0718000 0 0x60>;
interrupts = <GIC_SPI 4080 IRQ_TYPE_LEVEL_HIGH>;
interrupts = <GIC_ESPI 16 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&dummy_clk_sgasyncd4>, <&dummy_clk_sgasyncd4>, <&scif_clk>;
clock-names = "fck", "brg_int", "scif_clk";
status = "disabled";
@@ -768,7 +768,7 @@ hscif3: serial@c071c000 {
compatible = "renesas,hscif-r8a78000",
"renesas,rcar-gen5-hscif", "renesas,hscif";
reg = <0 0xc071c000 0 0x60>;
interrupts = <GIC_SPI 4081 IRQ_TYPE_LEVEL_HIGH>;
interrupts = <GIC_ESPI 17 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&dummy_clk_sgasyncd4>, <&dummy_clk_sgasyncd4>, <&scif_clk>;
clock-names = "fck", "brg_int", "scif_clk";
status = "disabled";

View File

@@ -581,16 +581,6 @@ ostm7: timer@12c03000 {
status = "disabled";
};
wdt0: watchdog@11c00400 {
compatible = "renesas,r9a09g057-wdt";
reg = <0 0x11c00400 0 0x400>;
clocks = <&cpg CPG_MOD 0x4b>, <&cpg CPG_MOD 0x4c>;
clock-names = "pclk", "oscclk";
resets = <&cpg 0x75>;
power-domains = <&cpg>;
status = "disabled";
};
wdt1: watchdog@14400000 {
compatible = "renesas,r9a09g057-wdt";
reg = <0 0x14400000 0 0x400>;
@@ -601,26 +591,6 @@ wdt1: watchdog@14400000 {
status = "disabled";
};
wdt2: watchdog@13000000 {
compatible = "renesas,r9a09g057-wdt";
reg = <0 0x13000000 0 0x400>;
clocks = <&cpg CPG_MOD 0x4f>, <&cpg CPG_MOD 0x50>;
clock-names = "pclk", "oscclk";
resets = <&cpg 0x77>;
power-domains = <&cpg>;
status = "disabled";
};
wdt3: watchdog@13000400 {
compatible = "renesas,r9a09g057-wdt";
reg = <0 0x13000400 0 0x400>;
clocks = <&cpg CPG_MOD 0x51>, <&cpg CPG_MOD 0x52>;
clock-names = "pclk", "oscclk";
resets = <&cpg 0x78>;
power-domains = <&cpg>;
status = "disabled";
};
rtc: rtc@11c00800 {
compatible = "renesas,r9a09g057-rtca3", "renesas,rz-rtca3";
reg = <0 0x11c00800 0 0x400>;

View File

@@ -974,8 +974,8 @@ mii_conv3: mii-conv@3 {
cpg: clock-controller@80280000 {
compatible = "renesas,r9a09g077-cpg-mssr";
reg = <0 0x80280000 0 0x1000>,
<0 0x81280000 0 0x9000>;
reg = <0 0x80280000 0 0x10000>,
<0 0x81280000 0 0x10000>;
clocks = <&extal_clk>;
clock-names = "extal";
#clock-cells = <2>;

View File

@@ -977,8 +977,8 @@ mii_conv3: mii-conv@3 {
cpg: clock-controller@80280000 {
compatible = "renesas,r9a09g087-cpg-mssr";
reg = <0 0x80280000 0 0x1000>,
<0 0x81280000 0 0x9000>;
reg = <0 0x80280000 0 0x10000>,
<0 0x81280000 0 0x10000>;
clocks = <&extal_clk>;
clock-names = "extal";
#clock-cells = <2>;

View File

@@ -162,7 +162,7 @@ versa3: clock-generator@68 {
<100000000>;
renesas,settings = [
80 00 11 19 4c 42 dc 2f 06 7d 20 1a 5f 1e f2 27
00 40 00 00 00 00 00 00 06 0c 19 02 3f f0 90 86
00 40 00 00 00 00 00 00 06 0c 19 02 3b f0 90 86
a0 80 30 30 9c
];
};

View File

@@ -53,6 +53,7 @@ vqmmc_sdhi0: regulator-vqmmc-sdhi0 {
regulator-max-microvolt = <3300000>;
gpios-states = <0>;
states = <3300000 0>, <1800000 1>;
regulator-ramp-delay = <60>;
};
#endif

View File

@@ -25,6 +25,7 @@ vqmmc_sdhi0: regulator-vqmmc-sdhi0 {
regulator-max-microvolt = <3300000>;
gpios-states = <0>;
states = <3300000 0>, <1800000 1>;
regulator-ramp-delay = <60>;
};
};

View File

@@ -76,19 +76,24 @@ static int aesbs_setkey(struct crypto_skcipher *tfm, const u8 *in_key,
unsigned int key_len)
{
struct aesbs_ctx *ctx = crypto_skcipher_ctx(tfm);
struct crypto_aes_ctx rk;
struct crypto_aes_ctx *rk;
int err;
err = aes_expandkey(&rk, in_key, key_len);
rk = kmalloc(sizeof(*rk), GFP_KERNEL);
if (!rk)
return -ENOMEM;
err = aes_expandkey(rk, in_key, key_len);
if (err)
return err;
goto out;
ctx->rounds = 6 + key_len / 4;
scoped_ksimd()
aesbs_convert_key(ctx->rk, rk.key_enc, ctx->rounds);
return 0;
aesbs_convert_key(ctx->rk, rk->key_enc, ctx->rounds);
out:
kfree_sensitive(rk);
return err;
}
static int __ecb_crypt(struct skcipher_request *req,
@@ -133,22 +138,26 @@ static int aesbs_cbc_ctr_setkey(struct crypto_skcipher *tfm, const u8 *in_key,
unsigned int key_len)
{
struct aesbs_cbc_ctr_ctx *ctx = crypto_skcipher_ctx(tfm);
struct crypto_aes_ctx rk;
struct crypto_aes_ctx *rk;
int err;
err = aes_expandkey(&rk, in_key, key_len);
rk = kmalloc(sizeof(*rk), GFP_KERNEL);
if (!rk)
return -ENOMEM;
err = aes_expandkey(rk, in_key, key_len);
if (err)
return err;
goto out;
ctx->key.rounds = 6 + key_len / 4;
memcpy(ctx->enc, rk.key_enc, sizeof(ctx->enc));
memcpy(ctx->enc, rk->key_enc, sizeof(ctx->enc));
scoped_ksimd()
aesbs_convert_key(ctx->key.rk, rk.key_enc, ctx->key.rounds);
memzero_explicit(&rk, sizeof(rk));
return 0;
aesbs_convert_key(ctx->key.rk, rk->key_enc, ctx->key.rounds);
out:
kfree_sensitive(rk);
return err;
}
static int cbc_encrypt(struct skcipher_request *req)

View File

@@ -192,6 +192,14 @@ static int scs_handle_fde_frame(const struct eh_frame *frame,
size -= 2;
break;
case DW_CFA_advance_loc4:
loc += *opcode++ * code_alignment_factor;
loc += (*opcode++ << 8) * code_alignment_factor;
loc += (*opcode++ << 16) * code_alignment_factor;
loc += (*opcode++ << 24) * code_alignment_factor;
size -= 4;
break;
case DW_CFA_def_cfa:
case DW_CFA_offset_extended:
size = skip_xleb128(&opcode, size);

View File

@@ -12,6 +12,7 @@
#include <asm/io.h>
#include <asm/mem_encrypt.h>
#include <asm/pgtable.h>
#include <asm/rsi.h>
static struct realm_config config;
@@ -146,7 +147,7 @@ void __init arm64_rsi_init(void)
return;
if (WARN_ON(rsi_get_realm_config(&config)))
return;
prot_ns_shared = BIT(config.ipa_bits - 1);
prot_ns_shared = __phys_to_pte_val(BIT(config.ipa_bits - 1));
if (arm64_ioremap_prot_hook_register(realm_ioremap_hook))
return;

View File

@@ -304,6 +304,9 @@ config AS_HAS_LBT_EXTENSION
config AS_HAS_LVZ_EXTENSION
def_bool $(as-instr,hvcl 0)
config AS_HAS_SCQ_EXTENSION
def_bool $(as-instr,sc.q \$t0$(comma)\$t1$(comma)\$t2)
config CC_HAS_ANNOTATE_TABLEJUMP
def_bool $(cc-option,-mannotate-tablejump)

View File

@@ -238,6 +238,8 @@ __cmpxchg(volatile void *ptr, unsigned long old, unsigned long new, unsigned int
arch_cmpxchg((ptr), (o), (n)); \
})
#ifdef CONFIG_AS_HAS_SCQ_EXTENSION
union __u128_halves {
u128 full;
struct {
@@ -290,6 +292,9 @@ union __u128_halves {
BUILD_BUG_ON(sizeof(*(ptr)) != 16); \
__arch_cmpxchg128(ptr, o, n, ""); \
})
#endif /* CONFIG_AS_HAS_SCQ_EXTENSION */
#else
#include <asm-generic/cmpxchg-local.h>
#define arch_cmpxchg64_local(ptr, o, n) __generic_cmpxchg64_local((ptr), (o), (n))

View File

@@ -253,8 +253,13 @@ do { \
\
__get_kernel_common(*((type *)(dst)), sizeof(type), \
(__force type *)(src)); \
if (unlikely(__gu_err)) \
if (unlikely(__gu_err)) { \
pr_info("%s: memory access failed, ecode 0x%x\n", \
__func__, read_csr_excode()); \
pr_info("%s: the caller is %pS\n", \
__func__, __builtin_return_address(0)); \
goto err_label; \
} \
} while (0)
#define __put_kernel_nofault(dst, src, type, err_label) \
@@ -264,8 +269,13 @@ do { \
\
__pu_val = *(__force type *)(src); \
__put_kernel_common(((type *)(dst)), sizeof(type)); \
if (unlikely(__pu_err)) \
if (unlikely(__pu_err)) { \
pr_info("%s: memory access failed, ecode 0x%x\n", \
__func__, read_csr_excode()); \
pr_info("%s: the caller is %pS\n", \
__func__, __builtin_return_address(0)); \
goto err_label; \
} \
} while (0)
extern unsigned long __copy_user(void *to, const void *from, __kernel_size_t n);

View File

@@ -246,32 +246,51 @@ static int text_copy_cb(void *data)
if (smp_processor_id() == copy->cpu) {
ret = copy_to_kernel_nofault(copy->dst, copy->src, copy->len);
if (ret)
if (ret) {
pr_err("%s: operation failed\n", __func__);
return ret;
}
}
flush_icache_range((unsigned long)copy->dst, (unsigned long)copy->dst + copy->len);
return ret;
return 0;
}
int larch_insn_text_copy(void *dst, void *src, size_t len)
{
int ret = 0;
int err = 0;
size_t start, end;
struct insn_copy copy = {
.dst = dst,
.src = src,
.len = len,
.cpu = smp_processor_id(),
.cpu = raw_smp_processor_id(),
};
/*
* Ensure copy.cpu won't be hot removed before stop_machine.
* If it is removed nobody will really update the text.
*/
lockdep_assert_cpus_held();
start = round_down((size_t)dst, PAGE_SIZE);
end = round_up((size_t)dst + len, PAGE_SIZE);
set_memory_rw(start, (end - start) / PAGE_SIZE);
ret = stop_machine(text_copy_cb, &copy, cpu_online_mask);
set_memory_rox(start, (end - start) / PAGE_SIZE);
err = set_memory_rw(start, (end - start) / PAGE_SIZE);
if (err) {
pr_info("%s: set_memory_rw() failed\n", __func__);
return err;
}
ret = stop_machine_cpuslocked(text_copy_cb, &copy, cpu_online_mask);
err = set_memory_rox(start, (end - start) / PAGE_SIZE);
if (err) {
pr_info("%s: set_memory_rox() failed\n", __func__);
return err;
}
return ret;
}

View File

@@ -49,8 +49,8 @@ static void kvm_vm_init_features(struct kvm *kvm)
kvm->arch.kvm_features |= BIT(KVM_LOONGARCH_VM_FEAT_PMU);
/* Enable all PV features by default */
kvm->arch.pv_features = BIT(KVM_FEATURE_IPI);
kvm->arch.kvm_features = BIT(KVM_LOONGARCH_VM_FEAT_PV_IPI);
kvm->arch.pv_features |= BIT(KVM_FEATURE_IPI);
kvm->arch.kvm_features |= BIT(KVM_LOONGARCH_VM_FEAT_PV_IPI);
if (kvm_pvtime_supported()) {
kvm->arch.pv_features |= BIT(KVM_FEATURE_PREEMPT);
kvm->arch.pv_features |= BIT(KVM_FEATURE_STEAL_TIME);

View File

@@ -1379,9 +1379,11 @@ void *bpf_arch_text_copy(void *dst, void *src, size_t len)
{
int ret;
cpus_read_lock();
mutex_lock(&text_mutex);
ret = larch_insn_text_copy(dst, src, len);
mutex_unlock(&text_mutex);
cpus_read_unlock();
return ret ? ERR_PTR(-EINVAL) : dst;
}
@@ -1429,10 +1431,12 @@ int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type old_t,
if (ret)
return ret;
cpus_read_lock();
mutex_lock(&text_mutex);
if (memcmp(ip, new_insns, LOONGARCH_LONG_JUMP_NBYTES))
ret = larch_insn_text_copy(ip, new_insns, LOONGARCH_LONG_JUMP_NBYTES);
mutex_unlock(&text_mutex);
cpus_read_unlock();
return ret;
}
@@ -1450,10 +1454,12 @@ int bpf_arch_text_invalidate(void *dst, size_t len)
for (i = 0; i < (len / sizeof(u32)); i++)
inst[i] = INSN_BREAK;
cpus_read_lock();
mutex_lock(&text_mutex);
if (larch_insn_text_copy(dst, inst, len))
ret = -EINVAL;
mutex_unlock(&text_mutex);
cpus_read_unlock();
kvfree(inst);
@@ -1568,6 +1574,11 @@ void arch_free_bpf_trampoline(void *image, unsigned int size)
bpf_prog_pack_free(image, size);
}
int arch_protect_bpf_trampoline(void *image, unsigned int size)
{
return 0;
}
/*
* Sign-extend the register if necessary
*/

View File

@@ -953,7 +953,7 @@ SYSCALL_DEFINE3(cacheflush, unsigned long, addr, unsigned long, bytes,
#else
"1: cmpb,<<,n %0,%2,1b\n"
#endif
" fic,m %3(%4,%0)\n"
" fdc,m %3(%4,%0)\n"
"2: sync\n"
ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 2b, "%1")
: "+r" (start), "+r" (error)
@@ -968,7 +968,7 @@ SYSCALL_DEFINE3(cacheflush, unsigned long, addr, unsigned long, bytes,
#else
"1: cmpb,<<,n %0,%2,1b\n"
#endif
" fdc,m %3(%4,%0)\n"
" fic,m %3(%4,%0)\n"
"2: sync\n"
ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 2b, "%1")
: "+r" (start), "+r" (error)

View File

@@ -428,6 +428,7 @@ can0: can@2010c000 {
clocks = <&clkcfg CLK_CAN0>, <&clkcfg CLK_MSSPLL3>;
interrupt-parent = <&plic>;
interrupts = <56>;
resets = <&mss_top_sysreg CLK_CAN0>;
status = "disabled";
};
@@ -437,6 +438,7 @@ can1: can@2010d000 {
clocks = <&clkcfg CLK_CAN1>, <&clkcfg CLK_MSSPLL3>;
interrupt-parent = <&plic>;
interrupts = <57>;
resets = <&mss_top_sysreg CLK_CAN1>;
status = "disabled";
};

View File

@@ -26,10 +26,6 @@ static int platform_match(struct device *dev, struct device_driver *drv)
struct platform_device *pdev = to_platform_device(dev);
struct platform_driver *pdrv = to_platform_driver(drv);
/* When driver_override is set, only bind to the matching driver */
if (pdev->driver_override)
return !strcmp(pdev->driver_override, drv->name);
/* Then try to match against the id table */
if (pdrv->id_table)
return platform_match_id(pdrv->id_table, pdev) != NULL;

View File

@@ -13,7 +13,7 @@
#include <linux/types.h>
#include <vdso/gettime.h>
#include "../../../../lib/vdso/gettimeofday.c"
#include "lib/vdso/gettimeofday.c"
int __vdso_gettimeofday(struct __kernel_old_timeval *tv, struct timezone *tz)
{

View File

@@ -1372,14 +1372,17 @@ static void x86_pmu_enable(struct pmu *pmu)
else if (i < n_running)
continue;
if (hwc->state & PERF_HES_ARCH)
cpuc->events[hwc->idx] = event;
if (hwc->state & PERF_HES_ARCH) {
static_call(x86_pmu_set_period)(event);
continue;
}
/*
* if cpuc->enabled = 0, then no wrmsr as
* per x86_pmu_enable_event()
*/
cpuc->events[hwc->idx] = event;
x86_pmu_start(event, PERF_EF_RELOAD);
}
cpuc->n_added = 0;

View File

@@ -4628,6 +4628,19 @@ static inline void intel_pmu_set_acr_caused_constr(struct perf_event *event,
event->hw.dyn_constraint &= hybrid(event->pmu, acr_cause_mask64);
}
static inline int intel_set_branch_counter_constr(struct perf_event *event,
int *num)
{
if (branch_sample_call_stack(event))
return -EINVAL;
if (branch_sample_counters(event)) {
(*num)++;
event->hw.dyn_constraint &= x86_pmu.lbr_counters;
}
return 0;
}
static int intel_pmu_hw_config(struct perf_event *event)
{
int ret = x86_pmu_hw_config(event);
@@ -4698,21 +4711,19 @@ static int intel_pmu_hw_config(struct perf_event *event)
* group, which requires the extra space to store the counters.
*/
leader = event->group_leader;
if (branch_sample_call_stack(leader))
if (intel_set_branch_counter_constr(leader, &num))
return -EINVAL;
if (branch_sample_counters(leader)) {
num++;
leader->hw.dyn_constraint &= x86_pmu.lbr_counters;
}
leader->hw.flags |= PERF_X86_EVENT_BRANCH_COUNTERS;
for_each_sibling_event(sibling, leader) {
if (branch_sample_call_stack(sibling))
if (intel_set_branch_counter_constr(sibling, &num))
return -EINVAL;
}
/* event isn't installed as a sibling yet. */
if (event != leader) {
if (intel_set_branch_counter_constr(event, &num))
return -EINVAL;
if (branch_sample_counters(sibling)) {
num++;
sibling->hw.dyn_constraint &= x86_pmu.lbr_counters;
}
}
if (num > fls(x86_pmu.lbr_counters))

View File

@@ -345,12 +345,12 @@ static u64 parse_omr_data_source(u8 dse)
if (omr.omr_remote)
val |= REM;
val |= omr.omr_hitm ? P(SNOOP, HITM) : P(SNOOP, HIT);
if (omr.omr_source == 0x2) {
u8 snoop = omr.omr_snoop | omr.omr_promoted;
u8 snoop = omr.omr_snoop | (omr.omr_promoted << 1);
if (snoop == 0x0)
if (omr.omr_hitm)
val |= P(SNOOP, HITM);
else if (snoop == 0x0)
val |= P(SNOOP, NA);
else if (snoop == 0x1)
val |= P(SNOOP, MISS);
@@ -359,7 +359,10 @@ static u64 parse_omr_data_source(u8 dse)
else if (snoop == 0x3)
val |= P(SNOOP, NONE);
} else if (omr.omr_source > 0x2 && omr.omr_source < 0x7) {
val |= omr.omr_hitm ? P(SNOOP, HITM) : P(SNOOP, HIT);
val |= omr.omr_snoop ? P(SNOOPX, FWD) : 0;
} else {
val |= P(SNOOP, NONE);
}
return val;

View File

@@ -107,14 +107,12 @@ static void __noreturn hv_panic_timeout_reboot(void)
cpu_relax();
}
/* This cannot be inlined as it needs stack */
static noinline __noclone void hv_crash_restore_tss(void)
static void hv_crash_restore_tss(void)
{
load_TR_desc();
}
/* This cannot be inlined as it needs stack */
static noinline void hv_crash_clear_kernpt(void)
static void hv_crash_clear_kernpt(void)
{
pgd_t *pgd;
p4d_t *p4d;
@@ -125,50 +123,9 @@ static noinline void hv_crash_clear_kernpt(void)
native_p4d_clear(p4d);
}
/*
* This is the C entry point from the asm glue code after the disable hypercall.
* We enter here in IA32-e long mode, ie, full 64bit mode running on kernel
* page tables with our below 4G page identity mapped, but using a temporary
* GDT. ds/fs/gs/es are null. ss is not usable. bp is null. stack is not
* available. We restore kernel GDT, and rest of the context, and continue
* to kexec.
*/
static asmlinkage void __noreturn hv_crash_c_entry(void)
static void __noreturn hv_crash_handle(void)
{
struct hv_crash_ctxt *ctxt = &hv_crash_ctxt;
/* first thing, restore kernel gdt */
native_load_gdt(&ctxt->gdtr);
asm volatile("movw %%ax, %%ss" : : "a"(ctxt->ss));
asm volatile("movq %0, %%rsp" : : "m"(ctxt->rsp));
asm volatile("movw %%ax, %%ds" : : "a"(ctxt->ds));
asm volatile("movw %%ax, %%es" : : "a"(ctxt->es));
asm volatile("movw %%ax, %%fs" : : "a"(ctxt->fs));
asm volatile("movw %%ax, %%gs" : : "a"(ctxt->gs));
native_wrmsrq(MSR_IA32_CR_PAT, ctxt->pat);
asm volatile("movq %0, %%cr0" : : "r"(ctxt->cr0));
asm volatile("movq %0, %%cr8" : : "r"(ctxt->cr8));
asm volatile("movq %0, %%cr4" : : "r"(ctxt->cr4));
asm volatile("movq %0, %%cr2" : : "r"(ctxt->cr4));
native_load_idt(&ctxt->idtr);
native_wrmsrq(MSR_GS_BASE, ctxt->gsbase);
native_wrmsrq(MSR_EFER, ctxt->efer);
/* restore the original kernel CS now via far return */
asm volatile("movzwq %0, %%rax\n\t"
"pushq %%rax\n\t"
"pushq $1f\n\t"
"lretq\n\t"
"1:nop\n\t" : : "m"(ctxt->cs) : "rax");
/* We are in asmlinkage without stack frame, hence make C function
* calls which will buy stack frames.
*/
hv_crash_restore_tss();
hv_crash_clear_kernpt();
@@ -177,7 +134,54 @@ static asmlinkage void __noreturn hv_crash_c_entry(void)
hv_panic_timeout_reboot();
}
/* Tell gcc we are using lretq long jump in the above function intentionally */
/*
* __naked functions do not permit function calls, not even to __always_inline
* functions that only contain asm() blocks themselves. So use a macro instead.
*/
#define hv_wrmsr(msr, val) \
asm volatile("wrmsr" :: "c"(msr), "a"((u32)val), "d"((u32)(val >> 32)) : "memory")
/*
* This is the C entry point from the asm glue code after the disable hypercall.
* We enter here in IA32-e long mode, ie, full 64bit mode running on kernel
* page tables with our below 4G page identity mapped, but using a temporary
* GDT. ds/fs/gs/es are null. ss is not usable. bp is null. stack is not
* available. We restore kernel GDT, and rest of the context, and continue
* to kexec.
*/
static void __naked hv_crash_c_entry(void)
{
/* first thing, restore kernel gdt */
asm volatile("lgdt %0" : : "m" (hv_crash_ctxt.gdtr));
asm volatile("movw %0, %%ss\n\t"
"movq %1, %%rsp"
:: "m"(hv_crash_ctxt.ss), "m"(hv_crash_ctxt.rsp));
asm volatile("movw %0, %%ds" : : "m"(hv_crash_ctxt.ds));
asm volatile("movw %0, %%es" : : "m"(hv_crash_ctxt.es));
asm volatile("movw %0, %%fs" : : "m"(hv_crash_ctxt.fs));
asm volatile("movw %0, %%gs" : : "m"(hv_crash_ctxt.gs));
hv_wrmsr(MSR_IA32_CR_PAT, hv_crash_ctxt.pat);
asm volatile("movq %0, %%cr0" : : "r"(hv_crash_ctxt.cr0));
asm volatile("movq %0, %%cr8" : : "r"(hv_crash_ctxt.cr8));
asm volatile("movq %0, %%cr4" : : "r"(hv_crash_ctxt.cr4));
asm volatile("movq %0, %%cr2" : : "r"(hv_crash_ctxt.cr2));
asm volatile("lidt %0" : : "m" (hv_crash_ctxt.idtr));
hv_wrmsr(MSR_GS_BASE, hv_crash_ctxt.gsbase);
hv_wrmsr(MSR_EFER, hv_crash_ctxt.efer);
/* restore the original kernel CS now via far return */
asm volatile("pushq %q0\n\t"
"pushq %q1\n\t"
"lretq"
:: "r"(hv_crash_ctxt.cs), "r"(hv_crash_handle));
}
/* Tell objtool we are using lretq long jump in the above function intentionally */
STACK_FRAME_NON_STANDARD(hv_crash_c_entry);
static void hv_mark_tss_not_busy(void)
@@ -195,20 +199,20 @@ static void hv_hvcrash_ctxt_save(void)
{
struct hv_crash_ctxt *ctxt = &hv_crash_ctxt;
asm volatile("movq %%rsp,%0" : "=m"(ctxt->rsp));
ctxt->rsp = current_stack_pointer;
ctxt->cr0 = native_read_cr0();
ctxt->cr4 = native_read_cr4();
asm volatile("movq %%cr2, %0" : "=a"(ctxt->cr2));
asm volatile("movq %%cr8, %0" : "=a"(ctxt->cr8));
asm volatile("movq %%cr2, %0" : "=r"(ctxt->cr2));
asm volatile("movq %%cr8, %0" : "=r"(ctxt->cr8));
asm volatile("movl %%cs, %%eax" : "=a"(ctxt->cs));
asm volatile("movl %%ss, %%eax" : "=a"(ctxt->ss));
asm volatile("movl %%ds, %%eax" : "=a"(ctxt->ds));
asm volatile("movl %%es, %%eax" : "=a"(ctxt->es));
asm volatile("movl %%fs, %%eax" : "=a"(ctxt->fs));
asm volatile("movl %%gs, %%eax" : "=a"(ctxt->gs));
asm volatile("movw %%cs, %0" : "=m"(ctxt->cs));
asm volatile("movw %%ss, %0" : "=m"(ctxt->ss));
asm volatile("movw %%ds, %0" : "=m"(ctxt->ds));
asm volatile("movw %%es, %0" : "=m"(ctxt->es));
asm volatile("movw %%fs, %0" : "=m"(ctxt->fs));
asm volatile("movw %%gs, %0" : "=m"(ctxt->gs));
native_store_gdt(&ctxt->gdtr);
store_idt(&ctxt->idtr);

View File

@@ -1708,8 +1708,22 @@ static void __init uv_system_init_hub(void)
struct uv_hub_info_s *new_hub;
/* Allocate & fill new per hub info list */
new_hub = (bid == 0) ? &uv_hub_info_node0
: kzalloc_node(bytes, GFP_KERNEL, uv_blade_to_node(bid));
if (bid == 0) {
new_hub = &uv_hub_info_node0;
} else {
int nid;
/*
* Deconfigured sockets are mapped to SOCK_EMPTY. Use
* NUMA_NO_NODE to allocate on a valid node.
*/
nid = uv_blade_to_node(bid);
if (nid == SOCK_EMPTY)
nid = NUMA_NO_NODE;
new_hub = kzalloc_node(bytes, GFP_KERNEL, nid);
}
if (WARN_ON_ONCE(!new_hub)) {
/* do not kfree() bid 0, which is statically allocated */
while (--bid > 0)

View File

@@ -875,13 +875,18 @@ void amd_clear_bank(struct mce *m)
{
amd_reset_thr_limit(m->bank);
/* Clear MCA_DESTAT for all deferred errors even those logged in MCA_STATUS. */
if (m->status & MCI_STATUS_DEFERRED)
mce_wrmsrq(MSR_AMD64_SMCA_MCx_DESTAT(m->bank), 0);
if (mce_flags.smca) {
/*
* Clear MCA_DESTAT for all deferred errors even those
* logged in MCA_STATUS.
*/
if (m->status & MCI_STATUS_DEFERRED)
mce_wrmsrq(MSR_AMD64_SMCA_MCx_DESTAT(m->bank), 0);
/* Don't clear MCA_STATUS if MCA_DESTAT was used exclusively. */
if (m->kflags & MCE_CHECK_DFR_REGS)
return;
/* Don't clear MCA_STATUS if MCA_DESTAT was used exclusively. */
if (m->kflags & MCE_CHECK_DFR_REGS)
return;
}
mce_wrmsrq(mca_msr_reg(m->bank, MCA_STATUS), 0);
}

View File

@@ -496,8 +496,9 @@ static void hv_reserve_irq_vectors(void)
test_and_set_bit(HYPERV_DBG_FASTFAIL_VECTOR, system_vectors))
BUG();
pr_info("Hyper-V: reserve vectors: %d %d %d\n", HYPERV_DBG_ASSERT_VECTOR,
HYPERV_DBG_SERVICE_VECTOR, HYPERV_DBG_FASTFAIL_VECTOR);
pr_info("Hyper-V: reserve vectors: 0x%x 0x%x 0x%x\n",
HYPERV_DBG_ASSERT_VECTOR, HYPERV_DBG_SERVICE_VECTOR,
HYPERV_DBG_FASTFAIL_VECTOR);
}
static void __init ms_hyperv_init_platform(void)


@@ -113,6 +113,10 @@ static int acpi_processor_errata_piix4(struct pci_dev *dev)
PCI_ANY_ID, PCI_ANY_ID, NULL);
if (ide_dev) {
errata.piix4.bmisx = pci_resource_start(ide_dev, 4);
if (errata.piix4.bmisx)
dev_dbg(&ide_dev->dev,
"Bus master activity detection (BM-IDE) erratum enabled\n");
pci_dev_put(ide_dev);
}
@@ -131,20 +135,17 @@ static int acpi_processor_errata_piix4(struct pci_dev *dev)
if (isa_dev) {
pci_read_config_byte(isa_dev, 0x76, &value1);
pci_read_config_byte(isa_dev, 0x77, &value2);
if ((value1 & 0x80) || (value2 & 0x80))
if ((value1 & 0x80) || (value2 & 0x80)) {
errata.piix4.fdma = 1;
dev_dbg(&isa_dev->dev,
"Type-F DMA livelock erratum (C3 disabled)\n");
}
pci_dev_put(isa_dev);
}
break;
}
if (ide_dev)
dev_dbg(&ide_dev->dev, "Bus master activity detection (BM-IDE) erratum enabled\n");
if (isa_dev)
dev_dbg(&isa_dev->dev, "Type-F DMA livelock erratum (C3 disabled)\n");
return 0;
}


@@ -451,7 +451,7 @@ const union acpi_predefined_info acpi_gbl_predefined_methods[] = {
{{"_DSM",
METHOD_4ARGS(ACPI_TYPE_BUFFER, ACPI_TYPE_INTEGER, ACPI_TYPE_INTEGER,
ACPI_TYPE_ANY | ACPI_TYPE_PACKAGE) |
ACPI_TYPE_PACKAGE | ACPI_TYPE_ANY) |
ARG_COUNT_IS_MINIMUM,
METHOD_RETURNS(ACPI_RTYPE_ALL)}}, /* Must return a value, but it can be of any type */


@@ -818,9 +818,6 @@ const struct acpi_device *acpi_companion_match(const struct device *dev)
if (list_empty(&adev->pnp.ids))
return NULL;
if (adev->pnp.type.backlight)
return adev;
return acpi_primary_dev_companion(adev, dev);
}


@@ -4188,6 +4188,9 @@ static const struct ata_dev_quirks_entry __ata_dev_quirks[] = {
{ "ST3320[68]13AS", "SD1[5-9]", ATA_QUIRK_NONCQ |
ATA_QUIRK_FIRMWARE_WARN },
/* ADATA devices with LPM issues. */
{ "ADATA SU680", NULL, ATA_QUIRK_NOLPM },
/* Seagate disks with LPM issues */
{ "ST1000DM010-2EP102", NULL, ATA_QUIRK_NOLPM },
{ "ST2000DM008-2FR102", NULL, ATA_QUIRK_NOLPM },


@@ -3600,7 +3600,7 @@ static unsigned int ata_scsiop_maint_in(struct ata_device *dev,
if (cdb[2] != 1 && cdb[2] != 3) {
ata_dev_warn(dev, "invalid command format %d\n", cdb[2]);
ata_scsi_set_invalid_field(dev, cmd, 1, 0xff);
ata_scsi_set_invalid_field(dev, cmd, 2, 0xff);
return 0;
}


@@ -504,6 +504,36 @@ int bus_for_each_drv(const struct bus_type *bus, struct device_driver *start,
}
EXPORT_SYMBOL_GPL(bus_for_each_drv);
static ssize_t driver_override_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
int ret;
ret = __device_set_driver_override(dev, buf, count);
if (ret)
return ret;
return count;
}
static ssize_t driver_override_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
guard(spinlock)(&dev->driver_override.lock);
return sysfs_emit(buf, "%s\n", dev->driver_override.name);
}
static DEVICE_ATTR_RW(driver_override);
static struct attribute *driver_override_dev_attrs[] = {
&dev_attr_driver_override.attr,
NULL,
};
static const struct attribute_group driver_override_dev_group = {
.attrs = driver_override_dev_attrs,
};
/**
* bus_add_device - add device to bus
* @dev: device being added
@@ -537,9 +567,15 @@ int bus_add_device(struct device *dev)
if (error)
goto out_put;
if (dev->bus->driver_override) {
error = device_add_group(dev, &driver_override_dev_group);
if (error)
goto out_groups;
}
error = sysfs_create_link(&sp->devices_kset->kobj, &dev->kobj, dev_name(dev));
if (error)
goto out_groups;
goto out_override;
error = sysfs_create_link(&dev->kobj, &sp->subsys.kobj, "subsystem");
if (error)
@@ -550,6 +586,9 @@ int bus_add_device(struct device *dev)
out_subsys:
sysfs_remove_link(&sp->devices_kset->kobj, dev_name(dev));
out_override:
if (dev->bus->driver_override)
device_remove_group(dev, &driver_override_dev_group);
out_groups:
device_remove_groups(dev, sp->bus->dev_groups);
out_put:
@@ -607,6 +646,8 @@ void bus_remove_device(struct device *dev)
sysfs_remove_link(&dev->kobj, "subsystem");
sysfs_remove_link(&sp->devices_kset->kobj, dev_name(dev));
if (dev->bus->driver_override)
device_remove_group(dev, &driver_override_dev_group);
device_remove_groups(dev, dev->bus->dev_groups);
if (klist_node_attached(&dev->p->knode_bus))
klist_del(&dev->p->knode_bus);


@@ -2556,6 +2556,7 @@ static void device_release(struct kobject *kobj)
devres_release_all(dev);
kfree(dev->dma_range_map);
kfree(dev->driver_override.name);
if (dev->release)
dev->release(dev);
@@ -3159,6 +3160,7 @@ void device_initialize(struct device *dev)
kobject_init(&dev->kobj, &device_ktype);
INIT_LIST_HEAD(&dev->dma_pools);
mutex_init(&dev->mutex);
spin_lock_init(&dev->driver_override.lock);
lockdep_set_novalidate_class(&dev->mutex);
spin_lock_init(&dev->devres_lock);
INIT_LIST_HEAD(&dev->devres_head);


@@ -381,6 +381,66 @@ static void __exit deferred_probe_exit(void)
}
__exitcall(deferred_probe_exit);
int __device_set_driver_override(struct device *dev, const char *s, size_t len)
{
const char *new, *old;
char *cp;
if (!s)
return -EINVAL;
/*
* The stored value will be used in sysfs show callback (sysfs_emit()),
* which has a length limit of PAGE_SIZE and adds a trailing newline.
* Thus we can store one character less to avoid truncation during sysfs
* show.
*/
if (len >= (PAGE_SIZE - 1))
return -EINVAL;
/*
* Compute the real length of the string in case userspace sends us a
* bunch of \0 characters like python likes to do.
*/
len = strlen(s);
if (!len) {
/* Empty string passed - clear override */
spin_lock(&dev->driver_override.lock);
old = dev->driver_override.name;
dev->driver_override.name = NULL;
spin_unlock(&dev->driver_override.lock);
kfree(old);
return 0;
}
cp = strnchr(s, len, '\n');
if (cp)
len = cp - s;
new = kstrndup(s, len, GFP_KERNEL);
if (!new)
return -ENOMEM;
spin_lock(&dev->driver_override.lock);
old = dev->driver_override.name;
if (cp != s) {
dev->driver_override.name = new;
spin_unlock(&dev->driver_override.lock);
} else {
/* "\n" passed - clear override */
dev->driver_override.name = NULL;
spin_unlock(&dev->driver_override.lock);
kfree(new);
}
kfree(old);
return 0;
}
EXPORT_SYMBOL_GPL(__device_set_driver_override);
/**
* device_is_bound() - Check if device is bound to a driver
* @dev: device to check


@@ -603,7 +603,6 @@ static void platform_device_release(struct device *dev)
kfree(pa->pdev.dev.platform_data);
kfree(pa->pdev.mfd_cell);
kfree(pa->pdev.resource);
kfree(pa->pdev.driver_override);
kfree(pa);
}
@@ -1306,38 +1305,9 @@ static ssize_t numa_node_show(struct device *dev,
}
static DEVICE_ATTR_RO(numa_node);
static ssize_t driver_override_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct platform_device *pdev = to_platform_device(dev);
ssize_t len;
device_lock(dev);
len = sysfs_emit(buf, "%s\n", pdev->driver_override);
device_unlock(dev);
return len;
}
static ssize_t driver_override_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
struct platform_device *pdev = to_platform_device(dev);
int ret;
ret = driver_set_override(dev, &pdev->driver_override, buf, count);
if (ret)
return ret;
return count;
}
static DEVICE_ATTR_RW(driver_override);
static struct attribute *platform_dev_attrs[] = {
&dev_attr_modalias.attr,
&dev_attr_numa_node.attr,
&dev_attr_driver_override.attr,
NULL,
};
@@ -1377,10 +1347,12 @@ static int platform_match(struct device *dev, const struct device_driver *drv)
{
struct platform_device *pdev = to_platform_device(dev);
struct platform_driver *pdrv = to_platform_driver(drv);
int ret;
/* When driver_override is set, only bind to the matching driver */
if (pdev->driver_override)
return !strcmp(pdev->driver_override, drv->name);
ret = device_match_driver_override(dev, drv);
if (ret >= 0)
return ret;
/* Attempt an OF style match first */
if (of_driver_match_device(dev, drv))
@@ -1516,6 +1488,7 @@ static const struct dev_pm_ops platform_dev_pm_ops = {
const struct bus_type platform_bus_type = {
.name = "platform",
.dev_groups = platform_dev_groups,
.driver_override = true,
.match = platform_match,
.uevent = platform_uevent,
.probe = platform_probe,


@@ -1895,6 +1895,7 @@ void pm_runtime_reinit(struct device *dev)
void pm_runtime_remove(struct device *dev)
{
__pm_runtime_disable(dev, false);
flush_work(&dev->power.work);
pm_runtime_reinit(dev);
}


@@ -787,6 +787,8 @@ int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate,
*/
if (soc_type == QCA_WCN3988)
rom_ver = ((soc_ver & 0x00000f00) >> 0x05) | (soc_ver & 0x0000000f);
else if (soc_type == QCA_WCN3998)
rom_ver = ((soc_ver & 0x0000f000) >> 0x07) | (soc_ver & 0x0000000f);
else
rom_ver = ((soc_ver & 0x00000f00) >> 0x04) | (soc_ver & 0x0000000f);


@@ -36,7 +36,7 @@ static int simple_pm_bus_probe(struct platform_device *pdev)
* that's not listed in simple_pm_bus_of_match. We don't want to do any
* of the simple-pm-bus tasks for these devices, so return early.
*/
if (pdev->driver_override)
if (device_has_driver_override(&pdev->dev))
return 0;
match = of_match_device(dev->driver->of_match_table, dev);
@@ -78,7 +78,7 @@ static void simple_pm_bus_remove(struct platform_device *pdev)
{
const void *data = of_device_get_match_data(&pdev->dev);
if (pdev->driver_override || data)
if (device_has_driver_override(&pdev->dev) || data)
return;
dev_dbg(&pdev->dev, "%s\n", __func__);


@@ -178,11 +178,11 @@ static const struct of_device_id ax45mp_cache_ids[] = {
static int __init ax45mp_cache_init(void)
{
struct device_node *np;
struct resource res;
int ret;
np = of_find_matching_node(NULL, ax45mp_cache_ids);
struct device_node *np __free(device_node) =
of_find_matching_node(NULL, ax45mp_cache_ids);
if (!of_device_is_available(np))
return -ENODEV;


@@ -102,11 +102,11 @@ static const struct of_device_id starlink_cache_ids[] = {
static int __init starlink_cache_init(void)
{
struct device_node *np;
u32 block_size;
int ret;
np = of_find_matching_node(NULL, starlink_cache_ids);
struct device_node *np __free(device_node) =
of_find_matching_node(NULL, starlink_cache_ids);
if (!of_device_is_available(np))
return -ENODEV;


@@ -706,8 +706,7 @@ struct clk_hw *imx_clk_scu_alloc_dev(const char *name,
if (ret)
goto put_device;
ret = driver_set_override(&pdev->dev, &pdev->driver_override,
"imx-scu-clk", strlen("imx-scu-clk"));
ret = device_set_driver_override(&pdev->dev, "imx-scu-clk");
if (ret)
goto put_device;


@@ -2408,10 +2408,8 @@ static int sev_ioctl_do_snp_platform_status(struct sev_issue_cmd *argp)
* in Firmware state on failure. Use snp_reclaim_pages() to
* transition either case back to Hypervisor-owned state.
*/
if (snp_reclaim_pages(__pa(data), 1, true)) {
snp_leak_pages(__page_to_pfn(status_page), 1);
if (snp_reclaim_pages(__pa(data), 1, true))
return -EFAULT;
}
}
if (ret)


@@ -332,6 +332,13 @@ static int __init padlock_init(void)
if (!x86_match_cpu(padlock_sha_ids) || !boot_cpu_has(X86_FEATURE_PHE_EN))
return -ENODEV;
/*
* Skip family 0x07 and newer used by Zhaoxin processors,
* as the driver's self-tests fail on these CPUs.
*/
if (c->x86 >= 0x07)
return -ENODEV;
/* Register the newly added algorithm module if on *
* VIA Nano processor, or else just do as before */
if (c->x86_model < 0x0f) {


@@ -257,9 +257,10 @@ static void fwnet_header_cache_update(struct hh_cache *hh,
memcpy((u8 *)hh->hh_data + HH_DATA_OFF(FWNET_HLEN), haddr, net->addr_len);
}
static int fwnet_header_parse(const struct sk_buff *skb, unsigned char *haddr)
static int fwnet_header_parse(const struct sk_buff *skb, const struct net_device *dev,
unsigned char *haddr)
{
memcpy(haddr, skb->dev->dev_addr, FWNET_ALEN);
memcpy(haddr, dev->dev_addr, FWNET_ALEN);
return FWNET_ALEN;
}


@@ -205,12 +205,12 @@ static int ffa_rxtx_map(phys_addr_t tx_buf, phys_addr_t rx_buf, u32 pg_cnt)
return 0;
}
static int ffa_rxtx_unmap(u16 vm_id)
static int ffa_rxtx_unmap(void)
{
ffa_value_t ret;
invoke_ffa_fn((ffa_value_t){
.a0 = FFA_RXTX_UNMAP, .a1 = PACK_TARGET_INFO(vm_id, 0),
.a0 = FFA_RXTX_UNMAP,
}, &ret);
if (ret.a0 == FFA_ERROR)
@@ -2097,7 +2097,7 @@ static int __init ffa_init(void)
pr_err("failed to setup partitions\n");
ffa_notifications_cleanup();
ffa_rxtx_unmap(drv_info->vm_id);
ffa_rxtx_unmap();
free_pages:
if (drv_info->tx_buffer)
free_pages_exact(drv_info->tx_buffer, rxtx_bufsz);
@@ -2112,7 +2112,7 @@ static void __exit ffa_exit(void)
{
ffa_notifications_cleanup();
ffa_partitions_cleanup();
ffa_rxtx_unmap(drv_info->vm_id);
ffa_rxtx_unmap();
free_pages_exact(drv_info->tx_buffer, drv_info->rxtx_bufsz);
free_pages_exact(drv_info->rx_buffer, drv_info->rxtx_bufsz);
kfree(drv_info);


@@ -1066,7 +1066,7 @@ static int scmi_register_event_handler(struct scmi_notify_instance *ni,
* since at creation time we usually want to have all setup and ready before
* events really start flowing.
*
* Return: A properly refcounted handler on Success, NULL on Failure
* Return: A properly refcounted handler on Success, ERR_PTR on Failure
*/
static inline struct scmi_event_handler *
__scmi_event_handler_get_ops(struct scmi_notify_instance *ni,
@@ -1113,7 +1113,7 @@ __scmi_event_handler_get_ops(struct scmi_notify_instance *ni,
}
mutex_unlock(&ni->pending_mtx);
return hndl;
return hndl ?: ERR_PTR(-ENODEV);
}
static struct scmi_event_handler *


@@ -189,13 +189,13 @@ struct scmi_protocol_handle {
/**
* struct scmi_iterator_state - Iterator current state descriptor
* @desc_index: Starting index for the current mulit-part request.
* @desc_index: Starting index for the current multi-part request.
* @num_returned: Number of returned items in the last multi-part reply.
* @num_remaining: Number of remaining items in the multi-part message.
* @max_resources: Maximum acceptable number of items, configured by the caller
* depending on the underlying resources that it is querying.
* @loop_idx: The iterator loop index in the current multi-part reply.
* @rx_len: Size in bytes of the currenly processed message; it can be used by
* @rx_len: Size in bytes of the currently processed message; it can be used by
* the user of the iterator to verify a reply size.
* @priv: Optional pointer to some additional state-related private data setup
* by the caller during the iterations.


@@ -18,6 +18,7 @@
#include <linux/bitmap.h>
#include <linux/bitfield.h>
#include <linux/cleanup.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/export.h>
@@ -940,13 +941,13 @@ static int scpi_probe(struct platform_device *pdev)
int idx = scpi_drvinfo->num_chans;
struct scpi_chan *pchan = scpi_drvinfo->channels + idx;
struct mbox_client *cl = &pchan->cl;
struct device_node *shmem = of_parse_phandle(np, "shmem", idx);
struct device_node *shmem __free(device_node) =
of_parse_phandle(np, "shmem", idx);
if (!of_match_node(shmem_of_match, shmem))
return -ENXIO;
ret = of_address_to_resource(shmem, 0, &res);
of_node_put(shmem);
if (ret) {
dev_err(dev, "failed to get SCPI payload mem resource\n");
return ret;


@@ -36,6 +36,7 @@
#define AMDGPU_BO_LIST_MAX_PRIORITY 32u
#define AMDGPU_BO_LIST_NUM_BUCKETS (AMDGPU_BO_LIST_MAX_PRIORITY + 1)
#define AMDGPU_BO_LIST_MAX_ENTRIES (128 * 1024)
static void amdgpu_bo_list_free_rcu(struct rcu_head *rcu)
{
@@ -188,6 +189,9 @@ int amdgpu_bo_create_list_entry_array(struct drm_amdgpu_bo_list_in *in,
const uint32_t bo_number = in->bo_number;
struct drm_amdgpu_bo_list_entry *info;
if (bo_number > AMDGPU_BO_LIST_MAX_ENTRIES)
return -EINVAL;
/* copy the handle array from userspace to a kernel buffer */
if (likely(info_size == bo_info_size)) {
info = vmemdup_array_user(uptr, bo_number, info_size);


@@ -1069,7 +1069,10 @@ amdgpu_vm_tlb_flush(struct amdgpu_vm_update_params *params,
}
/* Prepare a TLB flush fence to be attached to PTs */
if (!params->unlocked) {
/* The check for need_tlb_fence should be dropped once we
* sort out the issues with KIQ/MES TLB invalidation timeouts.
*/
if (!params->unlocked && vm->need_tlb_fence) {
amdgpu_vm_tlb_fence_create(params->adev, vm, fence);
/* Makes sure no PD/PT is freed before the flush */
@@ -2602,6 +2605,7 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm,
ttm_lru_bulk_move_init(&vm->lru_bulk_move);
vm->is_compute_context = false;
vm->need_tlb_fence = amdgpu_userq_enabled(&adev->ddev);
vm->use_cpu_for_update = !!(adev->vm_manager.vm_update_mode &
AMDGPU_VM_USE_CPU_FOR_GFX);
@@ -2739,6 +2743,7 @@ int amdgpu_vm_make_compute(struct amdgpu_device *adev, struct amdgpu_vm *vm)
dma_fence_put(vm->last_update);
vm->last_update = dma_fence_get_stub();
vm->is_compute_context = true;
vm->need_tlb_fence = true;
unreserve_bo:
amdgpu_bo_unreserve(vm->root.bo);


@@ -441,6 +441,8 @@ struct amdgpu_vm {
struct ttm_lru_bulk_move lru_bulk_move;
/* Flag to indicate if VM is used for compute */
bool is_compute_context;
/* Flag to indicate if VM needs a TLB fence (KFD or KGD) */
bool need_tlb_fence;
/* Memory partition number, -1 means any partition */
int8_t mem_id;


@@ -662,28 +662,35 @@ static int gmc_v9_0_process_interrupt(struct amdgpu_device *adev,
} else {
switch (amdgpu_ip_version(adev, MMHUB_HWIP, 0)) {
case IP_VERSION(9, 0, 0):
mmhub_cid = mmhub_client_ids_vega10[cid][rw];
mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_vega10) ?
mmhub_client_ids_vega10[cid][rw] : NULL;
break;
case IP_VERSION(9, 3, 0):
mmhub_cid = mmhub_client_ids_vega12[cid][rw];
mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_vega12) ?
mmhub_client_ids_vega12[cid][rw] : NULL;
break;
case IP_VERSION(9, 4, 0):
mmhub_cid = mmhub_client_ids_vega20[cid][rw];
mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_vega20) ?
mmhub_client_ids_vega20[cid][rw] : NULL;
break;
case IP_VERSION(9, 4, 1):
mmhub_cid = mmhub_client_ids_arcturus[cid][rw];
mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_arcturus) ?
mmhub_client_ids_arcturus[cid][rw] : NULL;
break;
case IP_VERSION(9, 1, 0):
case IP_VERSION(9, 2, 0):
mmhub_cid = mmhub_client_ids_raven[cid][rw];
mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_raven) ?
mmhub_client_ids_raven[cid][rw] : NULL;
break;
case IP_VERSION(1, 5, 0):
case IP_VERSION(2, 4, 0):
mmhub_cid = mmhub_client_ids_renoir[cid][rw];
mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_renoir) ?
mmhub_client_ids_renoir[cid][rw] : NULL;
break;
case IP_VERSION(1, 8, 0):
case IP_VERSION(9, 4, 2):
mmhub_cid = mmhub_client_ids_aldebaran[cid][rw];
mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_aldebaran) ?
mmhub_client_ids_aldebaran[cid][rw] : NULL;
break;
default:
mmhub_cid = NULL;


@@ -129,7 +129,7 @@ static int isp_genpd_add_device(struct device *dev, void *data)
if (!pdev)
return -EINVAL;
if (!dev->type->name) {
if (!dev->type || !dev->type->name) {
drm_dbg(&adev->ddev, "Invalid device type to add\n");
goto exit;
}
@@ -165,7 +165,7 @@ static int isp_genpd_remove_device(struct device *dev, void *data)
if (!pdev)
return -EINVAL;
if (!dev->type->name) {
if (!dev->type || !dev->type->name) {
drm_dbg(&adev->ddev, "Invalid device type to remove\n");
goto exit;
}


@@ -154,14 +154,17 @@ mmhub_v2_0_print_l2_protection_fault_status(struct amdgpu_device *adev,
switch (amdgpu_ip_version(adev, MMHUB_HWIP, 0)) {
case IP_VERSION(2, 0, 0):
case IP_VERSION(2, 0, 2):
mmhub_cid = mmhub_client_ids_navi1x[cid][rw];
mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_navi1x) ?
mmhub_client_ids_navi1x[cid][rw] : NULL;
break;
case IP_VERSION(2, 1, 0):
case IP_VERSION(2, 1, 1):
mmhub_cid = mmhub_client_ids_sienna_cichlid[cid][rw];
mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_sienna_cichlid) ?
mmhub_client_ids_sienna_cichlid[cid][rw] : NULL;
break;
case IP_VERSION(2, 1, 2):
mmhub_cid = mmhub_client_ids_beige_goby[cid][rw];
mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_beige_goby) ?
mmhub_client_ids_beige_goby[cid][rw] : NULL;
break;
default:
mmhub_cid = NULL;


@@ -94,7 +94,8 @@ mmhub_v2_3_print_l2_protection_fault_status(struct amdgpu_device *adev,
case IP_VERSION(2, 3, 0):
case IP_VERSION(2, 4, 0):
case IP_VERSION(2, 4, 1):
mmhub_cid = mmhub_client_ids_vangogh[cid][rw];
mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_vangogh) ?
mmhub_client_ids_vangogh[cid][rw] : NULL;
break;
default:
mmhub_cid = NULL;


@@ -110,7 +110,8 @@ mmhub_v3_0_print_l2_protection_fault_status(struct amdgpu_device *adev,
switch (amdgpu_ip_version(adev, MMHUB_HWIP, 0)) {
case IP_VERSION(3, 0, 0):
case IP_VERSION(3, 0, 1):
mmhub_cid = mmhub_client_ids_v3_0_0[cid][rw];
mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_v3_0_0) ?
mmhub_client_ids_v3_0_0[cid][rw] : NULL;
break;
default:
mmhub_cid = NULL;


@@ -117,7 +117,8 @@ mmhub_v3_0_1_print_l2_protection_fault_status(struct amdgpu_device *adev,
switch (amdgpu_ip_version(adev, MMHUB_HWIP, 0)) {
case IP_VERSION(3, 0, 1):
mmhub_cid = mmhub_client_ids_v3_0_1[cid][rw];
mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_v3_0_1) ?
mmhub_client_ids_v3_0_1[cid][rw] : NULL;
break;
default:
mmhub_cid = NULL;


@@ -108,7 +108,8 @@ mmhub_v3_0_2_print_l2_protection_fault_status(struct amdgpu_device *adev,
"MMVM_L2_PROTECTION_FAULT_STATUS:0x%08X\n",
status);
mmhub_cid = mmhub_client_ids_v3_0_2[cid][rw];
mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_v3_0_2) ?
mmhub_client_ids_v3_0_2[cid][rw] : NULL;
dev_err(adev->dev, "\t Faulty UTCL2 client ID: %s (0x%x)\n",
mmhub_cid ? mmhub_cid : "unknown", cid);
dev_err(adev->dev, "\t MORE_FAULTS: 0x%lx\n",


@@ -102,7 +102,8 @@ mmhub_v4_1_0_print_l2_protection_fault_status(struct amdgpu_device *adev,
status);
switch (amdgpu_ip_version(adev, MMHUB_HWIP, 0)) {
case IP_VERSION(4, 1, 0):
mmhub_cid = mmhub_client_ids_v4_1_0[cid][rw];
mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_v4_1_0) ?
mmhub_client_ids_v4_1_0[cid][rw] : NULL;
break;
default:
mmhub_cid = NULL;


@@ -688,7 +688,8 @@ mmhub_v4_2_0_print_l2_protection_fault_status(struct amdgpu_device *adev,
status);
switch (amdgpu_ip_version(adev, MMHUB_HWIP, 0)) {
case IP_VERSION(4, 2, 0):
mmhub_cid = mmhub_client_ids_v4_2_0[cid][rw];
mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_v4_2_0) ?
mmhub_client_ids_v4_2_0[cid][rw] : NULL;
break;
default:
mmhub_cid = NULL;


@@ -2554,7 +2554,7 @@ static int dm_dmub_sw_init(struct amdgpu_device *adev)
fw_meta_info_params.fw_inst_const = adev->dm.dmub_fw->data +
le32_to_cpu(hdr->header.ucode_array_offset_bytes) +
PSP_HEADER_BYTES_256;
fw_meta_info_params.fw_bss_data = region_params.bss_data_size ? adev->dm.dmub_fw->data +
fw_meta_info_params.fw_bss_data = fw_meta_info_params.bss_data_size ? adev->dm.dmub_fw->data +
le32_to_cpu(hdr->header.ucode_array_offset_bytes) +
le32_to_cpu(hdr->inst_const_bytes) : NULL;
fw_meta_info_params.custom_psp_footer_size = 0;
@@ -13119,7 +13119,7 @@ static void parse_edid_displayid_vrr(struct drm_connector *connector,
u16 min_vfreq;
u16 max_vfreq;
if (edid == NULL || edid->extensions == 0)
if (!edid || !edid->extensions)
return;
/* Find DisplayID extension */
@@ -13129,7 +13129,7 @@ static void parse_edid_displayid_vrr(struct drm_connector *connector,
break;
}
if (edid_ext == NULL)
if (i == edid->extensions)
return;
while (j < EDID_LENGTH) {


@@ -37,19 +37,19 @@ const u64 amdgpu_dm_supported_degam_tfs =
BIT(DRM_COLOROP_1D_CURVE_SRGB_EOTF) |
BIT(DRM_COLOROP_1D_CURVE_PQ_125_EOTF) |
BIT(DRM_COLOROP_1D_CURVE_BT2020_INV_OETF) |
BIT(DRM_COLOROP_1D_CURVE_GAMMA22_INV);
BIT(DRM_COLOROP_1D_CURVE_GAMMA22);
const u64 amdgpu_dm_supported_shaper_tfs =
BIT(DRM_COLOROP_1D_CURVE_SRGB_INV_EOTF) |
BIT(DRM_COLOROP_1D_CURVE_PQ_125_INV_EOTF) |
BIT(DRM_COLOROP_1D_CURVE_BT2020_OETF) |
BIT(DRM_COLOROP_1D_CURVE_GAMMA22);
BIT(DRM_COLOROP_1D_CURVE_GAMMA22_INV);
const u64 amdgpu_dm_supported_blnd_tfs =
BIT(DRM_COLOROP_1D_CURVE_SRGB_EOTF) |
BIT(DRM_COLOROP_1D_CURVE_PQ_125_EOTF) |
BIT(DRM_COLOROP_1D_CURVE_BT2020_INV_OETF) |
BIT(DRM_COLOROP_1D_CURVE_GAMMA22_INV);
BIT(DRM_COLOROP_1D_CURVE_GAMMA22);
#define MAX_COLOR_PIPELINE_OPS 10


@@ -255,6 +255,10 @@ struct clk_mgr *dc_clk_mgr_create(struct dc_context *ctx, struct pp_smu_funcs *p
BREAK_TO_DEBUGGER();
return NULL;
}
if (ctx->dce_version == DCN_VERSION_2_01) {
dcn201_clk_mgr_construct(ctx, clk_mgr, pp_smu, dccg);
return &clk_mgr->base;
}
if (ASICREV_IS_SIENNA_CICHLID_P(asic_id.hw_internal_rev)) {
dcn3_clk_mgr_construct(ctx, clk_mgr, pp_smu, dccg);
return &clk_mgr->base;
@@ -267,10 +271,6 @@ struct clk_mgr *dc_clk_mgr_create(struct dc_context *ctx, struct pp_smu_funcs *p
dcn3_clk_mgr_construct(ctx, clk_mgr, pp_smu, dccg);
return &clk_mgr->base;
}
if (ctx->dce_version == DCN_VERSION_2_01) {
dcn201_clk_mgr_construct(ctx, clk_mgr, pp_smu, dccg);
return &clk_mgr->base;
}
dcn20_clk_mgr_construct(ctx, clk_mgr, pp_smu, dccg);
return &clk_mgr->base;
}


@@ -1785,7 +1785,10 @@ static bool dml1_validate(struct dc *dc, struct dc_state *context, enum dc_valid
dc->res_pool->funcs->calculate_wm_and_dlg(dc, context, pipes, pipe_cnt, vlevel);
DC_FP_START();
dcn32_override_min_req_memclk(dc, context);
DC_FP_END();
dcn32_override_min_req_dcfclk(dc, context);
BW_VAL_TRACE_END_WATERMARKS();


@@ -3454,9 +3454,11 @@ static void si_apply_state_adjust_rules(struct amdgpu_device *adev,
if (adev->asic_type == CHIP_HAINAN) {
if ((adev->pdev->revision == 0x81) ||
(adev->pdev->revision == 0xC3) ||
(adev->pdev->device == 0x6660) ||
(adev->pdev->device == 0x6664) ||
(adev->pdev->device == 0x6665) ||
(adev->pdev->device == 0x6667)) {
(adev->pdev->device == 0x6667) ||
(adev->pdev->device == 0x666F)) {
max_sclk = 75000;
}
if ((adev->pdev->revision == 0xC3) ||


@@ -848,7 +848,7 @@ static int dw_hdmi_qp_config_audio_infoframe(struct dw_hdmi_qp *hdmi,
regmap_bulk_write(hdmi->regm, PKT_AUDI_CONTENTS0, &header_bytes, 1);
regmap_bulk_write(hdmi->regm, PKT_AUDI_CONTENTS1, &buffer[3], 1);
regmap_bulk_write(hdmi->regm, PKT_AUDI_CONTENTS2, &buffer[4], 1);
regmap_bulk_write(hdmi->regm, PKT_AUDI_CONTENTS2, &buffer[7], 1);
/* Enable ACR, AUDI, AMD */
dw_hdmi_qp_mod(hdmi,


@@ -233,6 +233,7 @@ static void drm_events_release(struct drm_file *file_priv)
void drm_file_free(struct drm_file *file)
{
struct drm_device *dev;
int idx;
if (!file)
return;
@@ -249,9 +250,11 @@ void drm_file_free(struct drm_file *file)
drm_events_release(file);
if (drm_core_check_feature(dev, DRIVER_MODESET)) {
if (drm_core_check_feature(dev, DRIVER_MODESET) &&
drm_dev_enter(dev, &idx)) {
drm_fb_release(file);
drm_property_destroy_user_blobs(dev, file);
drm_dev_exit(idx);
}
if (drm_core_check_feature(dev, DRIVER_SYNCOBJ))


@@ -577,10 +577,13 @@ void drm_mode_config_cleanup(struct drm_device *dev)
*/
WARN_ON(!list_empty(&dev->mode_config.fb_list));
list_for_each_entry_safe(fb, fbt, &dev->mode_config.fb_list, head) {
struct drm_printer p = drm_dbg_printer(dev, DRM_UT_KMS, "[leaked fb]");
if (list_empty(&fb->filp_head) || drm_framebuffer_read_refcount(fb) > 1) {
struct drm_printer p = drm_dbg_printer(dev, DRM_UT_KMS, "[leaked fb]");
drm_printf(&p, "framebuffer[%u]:\n", fb->base.id);
drm_framebuffer_print_info(&p, 1, fb);
drm_printf(&p, "framebuffer[%u]:\n", fb->base.id);
drm_framebuffer_print_info(&p, 1, fb);
}
list_del_init(&fb->filp_head);
drm_framebuffer_free(&fb->base.refcount);
}


@@ -65,18 +65,14 @@ static void drm_pagemap_cache_fini(void *arg)
drm_dbg(cache->shrinker->drm, "Destroying dpagemap cache.\n");
spin_lock(&cache->lock);
dpagemap = cache->dpagemap;
if (!dpagemap) {
spin_unlock(&cache->lock);
goto out;
}
cache->dpagemap = NULL;
if (dpagemap && !drm_pagemap_shrinker_cancel(dpagemap))
dpagemap = NULL;
spin_unlock(&cache->lock);
if (drm_pagemap_shrinker_cancel(dpagemap)) {
cache->dpagemap = NULL;
spin_unlock(&cache->lock);
if (dpagemap)
drm_pagemap_destroy(dpagemap, false);
}
out:
mutex_destroy(&cache->lookup_mutex);
kfree(cache);
}


@@ -806,7 +806,7 @@ void gen9_set_dc_state(struct intel_display *display, u32 state)
power_domains->dc_state, val & mask);
enable_dc6 = state & DC_STATE_EN_UPTO_DC6;
dc6_was_enabled = val & DC_STATE_EN_UPTO_DC6;
dc6_was_enabled = power_domains->dc_state & DC_STATE_EN_UPTO_DC6;
if (!dc6_was_enabled && enable_dc6)
intel_dmc_update_dc6_allowed_count(display, true);


@@ -1186,6 +1186,7 @@ struct intel_crtc_state {
u32 dc3co_exitline;
u16 su_y_granularity;
u8 active_non_psr_pipes;
u8 entry_setup_frames;
const char *no_psr_reason;
/*


@@ -1599,8 +1599,7 @@ static bool intel_dmc_get_dc6_allowed_count(struct intel_display *display, u32 *
return false;
mutex_lock(&power_domains->lock);
dc6_enabled = intel_de_read(display, DC_STATE_EN) &
DC_STATE_EN_UPTO_DC6;
dc6_enabled = power_domains->dc_state & DC_STATE_EN_UPTO_DC6;
if (dc6_enabled)
intel_dmc_update_dc6_allowed_count(display, false);


@@ -1717,7 +1717,7 @@ static bool _psr_compute_config(struct intel_dp *intel_dp,
entry_setup_frames = intel_psr_entry_setup_frames(intel_dp, conn_state, adjusted_mode);
if (entry_setup_frames >= 0) {
intel_dp->psr.entry_setup_frames = entry_setup_frames;
crtc_state->entry_setup_frames = entry_setup_frames;
} else {
crtc_state->no_psr_reason = "PSR setup timing not met";
drm_dbg_kms(display->drm,
@@ -1815,7 +1815,7 @@ static bool intel_psr_needs_wa_18037818876(struct intel_dp *intel_dp,
{
struct intel_display *display = to_intel_display(intel_dp);
return (DISPLAY_VER(display) == 20 && intel_dp->psr.entry_setup_frames > 0 &&
return (DISPLAY_VER(display) == 20 && crtc_state->entry_setup_frames > 0 &&
!crtc_state->has_sel_update);
}
@@ -2189,6 +2189,7 @@ static void intel_psr_enable_locked(struct intel_dp *intel_dp,
intel_dp->psr.pkg_c_latency_used = crtc_state->pkg_c_latency_used;
intel_dp->psr.io_wake_lines = crtc_state->alpm_state.io_wake_lines;
intel_dp->psr.fast_wake_lines = crtc_state->alpm_state.fast_wake_lines;
intel_dp->psr.entry_setup_frames = crtc_state->entry_setup_frames;
if (!psr_interrupt_error_check(intel_dp))
return;
@@ -3109,6 +3110,8 @@ void intel_psr_pre_plane_update(struct intel_atomic_state *state,
* - Display WA #1136: skl, bxt
*/
if (intel_crtc_needs_modeset(new_crtc_state) ||
new_crtc_state->update_m_n ||
new_crtc_state->update_lrr ||
!new_crtc_state->has_psr ||
!new_crtc_state->active_planes ||
new_crtc_state->has_sel_update != psr->sel_update_enabled ||


@@ -1967,7 +1967,8 @@ void intel_engines_reset_default_submission(struct intel_gt *gt)
if (engine->sanitize)
engine->sanitize(engine);
engine->set_default_submission(engine);
if (engine->set_default_submission)
engine->set_default_submission(engine);
}
}


@@ -225,29 +225,12 @@ static irqreturn_t pvr_device_irq_thread_handler(int irq, void *data)
}
if (pvr_dev->has_safety_events) {
int err;
/*
* Ensure the GPU is powered on since some safety events (such
* as ECC faults) can happen outside of job submissions, which
* are otherwise the only time a power reference is held.
*/
err = pvr_power_get(pvr_dev);
if (err) {
drm_err_ratelimited(drm_dev,
"%s: could not take power reference (%d)\n",
__func__, err);
return ret;
}
while (pvr_device_safety_irq_pending(pvr_dev)) {
pvr_device_safety_irq_clear(pvr_dev);
pvr_device_handle_safety_events(pvr_dev);
ret = IRQ_HANDLED;
}
pvr_power_put(pvr_dev);
}
return ret;


@@ -90,11 +90,11 @@ pvr_power_request_pwr_off(struct pvr_device *pvr_dev)
}
static int
pvr_power_fw_disable(struct pvr_device *pvr_dev, bool hard_reset)
pvr_power_fw_disable(struct pvr_device *pvr_dev, bool hard_reset, bool rpm_suspend)
{
if (!hard_reset) {
int err;
int err;
if (!hard_reset) {
cancel_delayed_work_sync(&pvr_dev->watchdog.work);
err = pvr_power_request_idle(pvr_dev);
@@ -106,29 +106,47 @@ pvr_power_fw_disable(struct pvr_device *pvr_dev, bool hard_reset)
return err;
}
return pvr_fw_stop(pvr_dev);
if (rpm_suspend) {
/* This also waits for late processing of GPU or firmware IRQs in other cores */
disable_irq(pvr_dev->irq);
}
err = pvr_fw_stop(pvr_dev);
if (err && rpm_suspend)
enable_irq(pvr_dev->irq);
return err;
}
static int
pvr_power_fw_enable(struct pvr_device *pvr_dev)
pvr_power_fw_enable(struct pvr_device *pvr_dev, bool rpm_resume)
{
int err;
if (rpm_resume)
enable_irq(pvr_dev->irq);
err = pvr_fw_start(pvr_dev);
if (err)
return err;
goto out;
err = pvr_wait_for_fw_boot(pvr_dev);
if (err) {
drm_err(from_pvr_device(pvr_dev), "Firmware failed to boot\n");
pvr_fw_stop(pvr_dev);
return err;
goto out;
}
queue_delayed_work(pvr_dev->sched_wq, &pvr_dev->watchdog.work,
msecs_to_jiffies(WATCHDOG_TIME_MS));
return 0;
out:
if (rpm_resume)
disable_irq(pvr_dev->irq);
return err;
}
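The reworked `pvr_power_fw_enable()` above adds a `goto out` unwind so that an IRQ enabled for runtime resume is disabled again when firmware start or boot fails. A self-contained sketch of that unwind shape, under the assumption of a toy failure injector (`fw_should_fail`) rather than real firmware:

```c
#include <assert.h>

/* Hypothetical state standing in for enable_irq()/disable_irq(). */
static int irq_enabled;

static int fw_start(int should_fail)
{
	return should_fail ? -5 /* -EIO */ : 0;
}

/*
 * Mirror of the diff's control flow: enable the IRQ only for a runtime
 * resume, and undo that enable on any error path before returning.
 */
static int power_fw_enable(int rpm_resume, int fw_should_fail)
{
	int err;

	if (rpm_resume)
		irq_enabled = 1;

	err = fw_start(fw_should_fail);
	if (err)
		goto out;

	return 0;

out:
	if (rpm_resume)
		irq_enabled = 0; /* unwind so IRQ state stays balanced */
	return err;
}
```

This keeps the enable/disable pairing balanced on every return, which is the property the patch restores.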
bool
@@ -361,7 +379,7 @@ pvr_power_device_suspend(struct device *dev)
return -EIO;
if (pvr_dev->fw_dev.booted) {
err = pvr_power_fw_disable(pvr_dev, false);
err = pvr_power_fw_disable(pvr_dev, false, true);
if (err)
goto err_drm_dev_exit;
}
@@ -391,7 +409,7 @@ pvr_power_device_resume(struct device *dev)
goto err_drm_dev_exit;
if (pvr_dev->fw_dev.booted) {
err = pvr_power_fw_enable(pvr_dev);
err = pvr_power_fw_enable(pvr_dev, true);
if (err)
goto err_power_off;
}
@@ -510,7 +528,16 @@ pvr_power_reset(struct pvr_device *pvr_dev, bool hard_reset)
}
/* Disable IRQs for the duration of the reset. */
disable_irq(pvr_dev->irq);
if (hard_reset) {
disable_irq(pvr_dev->irq);
} else {
/*
* Soft reset is triggered as a response to a FW command to the Host and is
* processed from the threaded IRQ handler. This code cannot (nor needs to)
* wait for any IRQ processing to complete.
*/
disable_irq_nosync(pvr_dev->irq);
}
do {
if (hard_reset) {
@@ -518,7 +545,7 @@ pvr_power_reset(struct pvr_device *pvr_dev, bool hard_reset)
queues_disabled = true;
}
err = pvr_power_fw_disable(pvr_dev, hard_reset);
err = pvr_power_fw_disable(pvr_dev, hard_reset, false);
if (!err) {
if (hard_reset) {
pvr_dev->fw_dev.booted = false;
@@ -541,7 +568,7 @@ pvr_power_reset(struct pvr_device *pvr_dev, bool hard_reset)
pvr_fw_irq_clear(pvr_dev);
err = pvr_power_fw_enable(pvr_dev);
err = pvr_power_fw_enable(pvr_dev, false);
}
if (err && hard_reset)

View File

@@ -2915,9 +2915,11 @@ static void si_apply_state_adjust_rules(struct radeon_device *rdev,
if (rdev->family == CHIP_HAINAN) {
if ((rdev->pdev->revision == 0x81) ||
(rdev->pdev->revision == 0xC3) ||
(rdev->pdev->device == 0x6660) ||
(rdev->pdev->device == 0x6664) ||
(rdev->pdev->device == 0x6665) ||
(rdev->pdev->device == 0x6667)) {
(rdev->pdev->device == 0x6667) ||
(rdev->pdev->device == 0x666F)) {
max_sclk = 75000;
}
if ((rdev->pdev->revision == 0xC3) ||
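The Hainan hunk above is a quirk table: specific revisions and PCI device IDs get their maximum engine clock clamped, with `0x666F` as the newly added entry. A small sketch of the lookup, with the ID list copied from the diff (the helper name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative clamp matching the diff's condition: return the clamped
 * max sclk (in 10 kHz units, as in si_dpm) for affected parts, or 0 for
 * "no clamp".
 */
static uint32_t hainan_max_sclk(uint8_t revision, uint16_t device)
{
	uint32_t max_sclk = 0;

	if (revision == 0x81 || revision == 0xC3 ||
	    device == 0x6660 || device == 0x6664 ||
	    device == 0x6665 || device == 0x6667 ||
	    device == 0x666F)
		max_sclk = 75000;

	return max_sclk;
}
```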

View File

@@ -96,12 +96,17 @@ struct vmwgfx_hash_item {
struct vmw_res_func;
struct vmw_bo;
struct vmw_bo;
struct vmw_resource_dirty;
/**
* struct vmw-resource - base class for hardware resources
* struct vmw_resource - base class for hardware resources
*
* @kref: For refcounting.
* @dev_priv: Pointer to the device private for this resource. Immutable.
* @id: Device id. Protected by @dev_priv::resource_lock.
* @used_prio: Priority for this resource.
* @guest_memory_size: Guest memory buffer size. Immutable.
* @res_dirty: Resource contains data not yet in the guest memory buffer.
* Protected by resource reserved.
@@ -117,18 +122,16 @@ struct vmw_res_func;
* pin-count greater than zero. It is not on the resource LRU lists and its
* guest memory buffer is pinned. Hence it can't be evicted.
* @func: Method vtable for this resource. Immutable.
* @mob_node; Node for the MOB guest memory rbtree. Protected by
* @mob_node: Node for the MOB guest memory rbtree. Protected by
* @guest_memory_bo reserved.
* @lru_head: List head for the LRU list. Protected by @dev_priv::resource_lock.
* @binding_head: List head for the context binding list. Protected by
* the @dev_priv::binding_mutex
* @dirty: resource's dirty tracker
* @res_free: The resource destructor.
* @hw_destroy: Callback to destroy the resource on the device, as part of
* resource destruction.
*/
struct vmw_bo;
struct vmw_bo;
struct vmw_resource_dirty;
struct vmw_resource {
struct kref kref;
struct vmw_private *dev_priv;
@@ -196,8 +199,8 @@ struct vmw_surface_offset;
* @quality_level: Quality level.
* @autogen_filter: Filter for automatically generated mipmaps.
* @array_size: Number of array elements for a 1D/2D texture. For cubemap
texture number of faces * array_size. This should be 0 for pre
SM4 device.
* texture number of faces * array_size. This should be 0 for pre
* SM4 device.
* @buffer_byte_stride: Buffer byte stride.
* @num_sizes: Size of @sizes. For GB surface this should always be 1.
* @base_size: Surface dimension.
@@ -265,18 +268,24 @@ struct vmw_fifo_state {
struct vmw_res_cache_entry {
uint32_t handle;
struct vmw_resource *res;
/* private: */
void *private;
/* public: */
unsigned short valid_handle;
unsigned short valid;
};
/**
* enum vmw_dma_map_mode - indicate how to perform TTM page dma mappings.
* @vmw_dma_alloc_coherent: Use TTM coherent pages
* @vmw_dma_map_populate: Unmap from DMA just after unpopulate
* @vmw_dma_map_bind: Unmap from DMA just before unbind
*/
enum vmw_dma_map_mode {
vmw_dma_alloc_coherent, /* Use TTM coherent pages */
vmw_dma_map_populate, /* Unmap from DMA just after unpopulate */
vmw_dma_map_bind, /* Unmap from DMA just before unbind */
vmw_dma_alloc_coherent,
vmw_dma_map_populate,
vmw_dma_map_bind,
/* private: */
vmw_dma_map_max
};
@@ -284,8 +293,11 @@ enum vmw_dma_map_mode {
* struct vmw_sg_table - Scatter/gather table for binding, with additional
* device-specific information.
*
* @mode: which page mapping mode to use
* @pages: Array of page pointers to the pages.
* @addrs: DMA addresses to the pages if coherent pages are used.
* @sgt: Pointer to a struct sg_table with binding information
* @num_regions: Number of regions with device-address contiguous pages
* @num_pages: Number of @pages
*/
struct vmw_sg_table {
enum vmw_dma_map_mode mode;
@@ -353,6 +365,7 @@ struct vmw_ctx_validation_info;
* than from user-space
* @fp: If @kernel is false, points to the file of the client. Otherwise
* NULL
* @filp: DRM state for this file
* @cmd_bounce: Command bounce buffer used for command validation before
* copying to fifo space
* @cmd_bounce_size: Current command bounce buffer size
@@ -729,7 +742,7 @@ extern void vmw_svga_disable(struct vmw_private *dev_priv);
bool vmwgfx_supported(struct vmw_private *vmw);
/**
/*
* GMR utilities - vmwgfx_gmr.c
*/
@@ -739,7 +752,7 @@ extern int vmw_gmr_bind(struct vmw_private *dev_priv,
int gmr_id);
extern void vmw_gmr_unbind(struct vmw_private *dev_priv, int gmr_id);
/**
/*
* User handles
*/
struct vmw_user_object {
@@ -759,7 +772,7 @@ void *vmw_user_object_map_size(struct vmw_user_object *uo, size_t size);
void vmw_user_object_unmap(struct vmw_user_object *uo);
bool vmw_user_object_is_mapped(struct vmw_user_object *uo);
/**
/*
* Resource utilities - vmwgfx_resource.c
*/
struct vmw_user_resource_conv;
@@ -819,7 +832,7 @@ static inline bool vmw_resource_mob_attached(const struct vmw_resource *res)
return !RB_EMPTY_NODE(&res->mob_node);
}
/**
/*
* GEM related functionality - vmwgfx_gem.c
*/
struct vmw_bo_params;
@@ -833,7 +846,7 @@ extern int vmw_gem_object_create_ioctl(struct drm_device *dev, void *data,
struct drm_file *filp);
extern void vmw_debugfs_gem_init(struct vmw_private *vdev);
/**
/*
* Misc Ioctl functionality - vmwgfx_ioctl.c
*/
@@ -846,7 +859,7 @@ extern int vmw_present_ioctl(struct drm_device *dev, void *data,
extern int vmw_present_readback_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
/**
/*
* Fifo utilities - vmwgfx_fifo.c
*/
@@ -880,9 +893,11 @@ extern int vmw_cmd_flush(struct vmw_private *dev_priv,
/**
* vmw_fifo_caps - Returns the capabilities of the FIFO command
* vmw_fifo_caps - Get the capabilities of the FIFO command
* queue or 0 if fifo memory isn't present.
* @dev_priv: The device private context
*
* Returns: capabilities of the FIFO command or %0 if fifo memory not present
*/
static inline uint32_t vmw_fifo_caps(const struct vmw_private *dev_priv)
{
@@ -893,9 +908,11 @@ static inline uint32_t vmw_fifo_caps(const struct vmw_private *dev_priv)
/**
* vmw_is_cursor_bypass3_enabled - Returns TRUE iff Cursor Bypass 3
* is enabled in the FIFO.
* vmw_is_cursor_bypass3_enabled - check Cursor Bypass 3 enabled setting
* in the FIFO.
* @dev_priv: The device private context
*
* Returns: %true iff Cursor Bypass 3 is enabled in the FIFO
*/
static inline bool
vmw_is_cursor_bypass3_enabled(const struct vmw_private *dev_priv)
@@ -903,7 +920,7 @@ vmw_is_cursor_bypass3_enabled(const struct vmw_private *dev_priv)
return (vmw_fifo_caps(dev_priv) & SVGA_FIFO_CAP_CURSOR_BYPASS_3) != 0;
}
/**
/*
* TTM buffer object driver - vmwgfx_ttm_buffer.c
*/
@@ -927,7 +944,7 @@ extern void vmw_piter_start(struct vmw_piter *viter,
*
* @viter: Pointer to the iterator to advance.
*
* Returns false if past the list of pages, true otherwise.
* Returns: false if past the list of pages, true otherwise.
*/
static inline bool vmw_piter_next(struct vmw_piter *viter)
{
@@ -939,7 +956,7 @@ static inline bool vmw_piter_next(struct vmw_piter *viter)
*
* @viter: Pointer to the iterator
*
* Returns the DMA address of the page pointed to by @viter.
* Returns: the DMA address of the page pointed to by @viter.
*/
static inline dma_addr_t vmw_piter_dma_addr(struct vmw_piter *viter)
{
@@ -951,14 +968,14 @@ static inline dma_addr_t vmw_piter_dma_addr(struct vmw_piter *viter)
*
* @viter: Pointer to the iterator
*
* Returns the DMA address of the page pointed to by @viter.
* Returns: the DMA address of the page pointed to by @viter.
*/
static inline struct page *vmw_piter_page(struct vmw_piter *viter)
{
return viter->pages[viter->i];
}
/**
/*
* Command submission - vmwgfx_execbuf.c
*/
@@ -993,7 +1010,7 @@ extern int vmw_execbuf_copy_fence_user(struct vmw_private *dev_priv,
int32_t out_fence_fd);
bool vmw_cmd_describe(const void *buf, u32 *size, char const **cmd);
/**
/*
* IRQs and wating - vmwgfx_irq.c
*/
@@ -1016,7 +1033,7 @@ bool vmw_generic_waiter_add(struct vmw_private *dev_priv, u32 flag,
bool vmw_generic_waiter_remove(struct vmw_private *dev_priv,
u32 flag, int *waiter_count);
/**
/*
* Kernel modesetting - vmwgfx_kms.c
*/
@@ -1048,7 +1065,7 @@ extern int vmw_resource_pin(struct vmw_resource *res, bool interruptible);
extern void vmw_resource_unpin(struct vmw_resource *res);
extern enum vmw_res_type vmw_res_type(const struct vmw_resource *res);
/**
/*
* Overlay control - vmwgfx_overlay.c
*/
@@ -1063,20 +1080,20 @@ int vmw_overlay_unref(struct vmw_private *dev_priv, uint32_t stream_id);
int vmw_overlay_num_overlays(struct vmw_private *dev_priv);
int vmw_overlay_num_free_overlays(struct vmw_private *dev_priv);
/**
/*
* GMR Id manager
*/
int vmw_gmrid_man_init(struct vmw_private *dev_priv, int type);
void vmw_gmrid_man_fini(struct vmw_private *dev_priv, int type);
/**
/*
* System memory manager
*/
int vmw_sys_man_init(struct vmw_private *dev_priv);
void vmw_sys_man_fini(struct vmw_private *dev_priv);
/**
/*
* Prime - vmwgfx_prime.c
*/
@@ -1292,7 +1309,7 @@ extern void vmw_cmdbuf_irqthread(struct vmw_cmdbuf_man *man);
* @line: The current line of the blit.
* @line_offset: Offset of the current line segment.
* @cpp: Bytes per pixel (granularity information).
* @memcpy: Which memcpy function to use.
* @do_cpy: Which memcpy function to use.
*/
struct vmw_diff_cpy {
struct drm_rect rect;
@@ -1380,13 +1397,14 @@ vm_fault_t vmw_bo_vm_mkwrite(struct vm_fault *vmf);
/**
* VMW_DEBUG_KMS - Debug output for kernel mode-setting
* @fmt: format string for the args
*
* This macro is for debugging vmwgfx mode-setting code.
*/
#define VMW_DEBUG_KMS(fmt, ...) \
DRM_DEBUG_DRIVER(fmt, ##__VA_ARGS__)
/**
/*
* Inline helper functions
*/
@@ -1417,11 +1435,13 @@ static inline void vmw_fifo_resource_dec(struct vmw_private *dev_priv)
/**
* vmw_fifo_mem_read - Perform a MMIO read from the fifo memory
*
* @vmw: The device private structure
* @fifo_reg: The fifo register to read from
*
* This function is intended to be equivalent to ioread32() on
* memremap'd memory, but without byteswapping.
*
* Returns: the value read
*/
static inline u32 vmw_fifo_mem_read(struct vmw_private *vmw, uint32 fifo_reg)
{
@@ -1431,8 +1451,9 @@ static inline u32 vmw_fifo_mem_read(struct vmw_private *vmw, uint32 fifo_reg)
/**
* vmw_fifo_mem_write - Perform a MMIO write to volatile memory
*
* @addr: The fifo register to write to
* @vmw: The device private structure
* @fifo_reg: The fifo register to write to
* @value: The value to write
*
* This function is intended to be equivalent to iowrite32 on
* memremap'd memory, but without byteswapping.

View File

@@ -771,7 +771,8 @@ static struct drm_framebuffer *vmw_kms_fb_create(struct drm_device *dev,
ret = vmw_bo_dirty_add(bo);
if (!ret && surface && surface->res.func->dirty_alloc) {
surface->res.coherent = true;
ret = surface->res.func->dirty_alloc(&surface->res);
if (surface->res.dirty == NULL)
ret = surface->res.func->dirty_alloc(&surface->res);
}
ttm_bo_unreserve(&bo->tbo);
}

View File

@@ -313,6 +313,8 @@ static void dev_fini_ggtt(void *arg)
{
struct xe_ggtt *ggtt = arg;
scoped_guard(mutex, &ggtt->lock)
ggtt->flags &= ~XE_GGTT_FLAGS_ONLINE;
drain_workqueue(ggtt->wq);
}
@@ -377,6 +379,7 @@ int xe_ggtt_init_early(struct xe_ggtt *ggtt)
if (err)
return err;
ggtt->flags |= XE_GGTT_FLAGS_ONLINE;
err = devm_add_action_or_reset(xe->drm.dev, dev_fini_ggtt, ggtt);
if (err)
return err;
@@ -410,13 +413,10 @@ static void xe_ggtt_initial_clear(struct xe_ggtt *ggtt)
static void ggtt_node_remove(struct xe_ggtt_node *node)
{
struct xe_ggtt *ggtt = node->ggtt;
struct xe_device *xe = tile_to_xe(ggtt->tile);
bool bound;
int idx;
bound = drm_dev_enter(&xe->drm, &idx);
mutex_lock(&ggtt->lock);
bound = ggtt->flags & XE_GGTT_FLAGS_ONLINE;
if (bound)
xe_ggtt_clear(ggtt, node->base.start, node->base.size);
drm_mm_remove_node(&node->base);
@@ -429,8 +429,6 @@ static void ggtt_node_remove(struct xe_ggtt_node *node)
if (node->invalidate_on_remove)
xe_ggtt_invalidate(ggtt);
drm_dev_exit(idx);
free_node:
xe_ggtt_node_fini(node);
}
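The GGTT change above replaces a `drm_dev_enter()` check with an `XE_GGTT_FLAGS_ONLINE` flag: teardown clears the flag under `ggtt->lock`, so node removal taken under the same lock skips hardware access once the device is going away. A sketch of that flag-under-lock pattern (the lock itself is elided here; in the driver both sides hold `ggtt->lock`):

```c
#include <assert.h>

#define FLAGS_64K    (1u << 0)
#define FLAGS_ONLINE (1u << 1)

/* Hypothetical miniature of struct xe_ggtt for illustration only. */
struct fake_ggtt {
	unsigned int flags;
	int hw_clears; /* counts xe_ggtt_clear() analogues */
};

/* Teardown: mark the GGTT offline (done under ggtt->lock in the driver). */
static void ggtt_fini(struct fake_ggtt *g)
{
	g->flags &= ~FLAGS_ONLINE;
}

/* Node removal: only touch hardware while the GGTT is still online. */
static void node_remove(struct fake_ggtt *g)
{
	if (g->flags & FLAGS_ONLINE)
		g->hw_clears++;
}
```

Because both paths serialize on the same lock, removal can never observe a half-torn-down GGTT.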

View File

@@ -28,11 +28,14 @@ struct xe_ggtt {
/** @size: Total usable size of this GGTT */
u64 size;
#define XE_GGTT_FLAGS_64K BIT(0)
#define XE_GGTT_FLAGS_64K BIT(0)
#define XE_GGTT_FLAGS_ONLINE BIT(1)
/**
* @flags: Flags for this GGTT
* Acceptable flags:
* - %XE_GGTT_FLAGS_64K - if PTE size is 64K. Otherwise, regular is 4K.
* - %XE_GGTT_FLAGS_ONLINE - is GGTT online, protected by ggtt->lock
* after init
*/
unsigned int flags;
/** @scratch: Internal object allocation used as a scratch page */

View File

@@ -12,6 +12,7 @@
#include "xe_gt_printk.h"
#include "xe_gt_sysfs.h"
#include "xe_mmio.h"
#include "xe_pm.h"
#include "xe_sriov.h"
static void __xe_gt_apply_ccs_mode(struct xe_gt *gt, u32 num_engines)
@@ -150,6 +151,7 @@ ccs_mode_store(struct device *kdev, struct device_attribute *attr,
xe_gt_info(gt, "Setting compute mode to %d\n", num_engines);
gt->ccs_mode = num_engines;
xe_gt_record_user_engines(gt);
guard(xe_pm_runtime)(xe);
xe_gt_reset(gt);
}

View File

@@ -1124,14 +1124,14 @@ static int guc_wait_ucode(struct xe_guc *guc)
struct xe_guc_pc *guc_pc = &gt->uc.guc.pc;
u32 before_freq, act_freq, cur_freq;
u32 status = 0, tries = 0;
int load_result, ret;
ktime_t before;
u64 delta_ms;
int ret;
before_freq = xe_guc_pc_get_act_freq(guc_pc);
before = ktime_get();
ret = poll_timeout_us(ret = guc_load_done(gt, &status, &tries), ret,
ret = poll_timeout_us(load_result = guc_load_done(gt, &status, &tries), load_result,
10 * USEC_PER_MSEC,
GUC_LOAD_TIMEOUT_SEC * USEC_PER_SEC, false);
@@ -1139,7 +1139,7 @@ static int guc_wait_ucode(struct xe_guc *guc)
act_freq = xe_guc_pc_get_act_freq(guc_pc);
cur_freq = xe_guc_pc_get_cur_freq_fw(guc_pc);
if (ret) {
if (ret || load_result <= 0) {
xe_gt_err(gt, "load failed: status = 0x%08X, time = %lldms, freq = %dMHz (req %dMHz)\n",
status, delta_ms, xe_guc_pc_get_act_freq(guc_pc),
xe_guc_pc_get_cur_freq_fw(guc_pc));
@@ -1347,15 +1347,37 @@ int xe_guc_enable_communication(struct xe_guc *guc)
return 0;
}
int xe_guc_suspend(struct xe_guc *guc)
/**
* xe_guc_softreset() - Soft reset GuC
* @guc: The GuC object
*
* Send soft reset command to GuC through mmio send.
*
* Return: 0 if success, otherwise error code
*/
int xe_guc_softreset(struct xe_guc *guc)
{
struct xe_gt *gt = guc_to_gt(guc);
u32 action[] = {
XE_GUC_ACTION_CLIENT_SOFT_RESET,
};
int ret;
if (!xe_uc_fw_is_running(&guc->fw))
return 0;
ret = xe_guc_mmio_send(guc, action, ARRAY_SIZE(action));
if (ret)
return ret;
return 0;
}
int xe_guc_suspend(struct xe_guc *guc)
{
struct xe_gt *gt = guc_to_gt(guc);
int ret;
ret = xe_guc_softreset(guc);
if (ret) {
xe_gt_err(gt, "GuC suspend failed: %pe\n", ERR_PTR(ret));
return ret;

View File

@@ -44,6 +44,7 @@ int xe_guc_opt_in_features_enable(struct xe_guc *guc);
void xe_guc_runtime_suspend(struct xe_guc *guc);
void xe_guc_runtime_resume(struct xe_guc *guc);
int xe_guc_suspend(struct xe_guc *guc);
int xe_guc_softreset(struct xe_guc *guc);
void xe_guc_notify(struct xe_guc *guc);
int xe_guc_auth_huc(struct xe_guc *guc, u32 rsa_addr);
int xe_guc_mmio_send(struct xe_guc *guc, const u32 *request, u32 len);

View File

@@ -345,6 +345,7 @@ static void guc_action_disable_ct(void *arg)
{
struct xe_guc_ct *ct = arg;
xe_guc_ct_stop(ct);
guc_ct_change_state(ct, XE_GUC_CT_STATE_DISABLED);
}

View File

@@ -48,6 +48,8 @@
#define XE_GUC_EXEC_QUEUE_CGP_CONTEXT_ERROR_LEN 6
static int guc_submit_reset_prepare(struct xe_guc *guc);
static struct xe_guc *
exec_queue_to_guc(struct xe_exec_queue *q)
{
@@ -239,7 +241,7 @@ static bool exec_queue_killed_or_banned_or_wedged(struct xe_exec_queue *q)
EXEC_QUEUE_STATE_BANNED));
}
static void guc_submit_fini(struct drm_device *drm, void *arg)
static void guc_submit_sw_fini(struct drm_device *drm, void *arg)
{
struct xe_guc *guc = arg;
struct xe_device *xe = guc_to_xe(guc);
@@ -257,6 +259,19 @@ static void guc_submit_fini(struct drm_device *drm, void *arg)
xa_destroy(&guc->submission_state.exec_queue_lookup);
}
static void guc_submit_fini(void *arg)
{
struct xe_guc *guc = arg;
/* Forcefully kill any remaining exec queues */
xe_guc_ct_stop(&guc->ct);
guc_submit_reset_prepare(guc);
xe_guc_softreset(guc);
xe_guc_submit_stop(guc);
xe_uc_fw_sanitize(&guc->fw);
xe_guc_submit_pause_abort(guc);
}
static void guc_submit_wedged_fini(void *arg)
{
struct xe_guc *guc = arg;
@@ -326,7 +341,11 @@ int xe_guc_submit_init(struct xe_guc *guc, unsigned int num_ids)
guc->submission_state.initialized = true;
return drmm_add_action_or_reset(&xe->drm, guc_submit_fini, guc);
err = drmm_add_action_or_reset(&xe->drm, guc_submit_sw_fini, guc);
if (err)
return err;
return devm_add_action_or_reset(xe->drm.dev, guc_submit_fini, guc);
}
/*
@@ -1252,6 +1271,7 @@ static void disable_scheduling_deregister(struct xe_guc *guc,
*/
void xe_guc_submit_wedge(struct xe_guc *guc)
{
struct xe_device *xe = guc_to_xe(guc);
struct xe_gt *gt = guc_to_gt(guc);
struct xe_exec_queue *q;
unsigned long index;
@@ -1266,20 +1286,28 @@ void xe_guc_submit_wedge(struct xe_guc *guc)
if (!guc->submission_state.initialized)
return;
err = devm_add_action_or_reset(guc_to_xe(guc)->drm.dev,
guc_submit_wedged_fini, guc);
if (err) {
xe_gt_err(gt, "Failed to register clean-up in wedged.mode=%s; "
"Although device is wedged.\n",
xe_wedged_mode_to_string(XE_WEDGED_MODE_UPON_ANY_HANG_NO_RESET));
return;
}
if (xe->wedged.mode == 2) {
err = devm_add_action_or_reset(guc_to_xe(guc)->drm.dev,
guc_submit_wedged_fini, guc);
if (err) {
xe_gt_err(gt, "Failed to register clean-up on wedged.mode=2; "
"Although device is wedged.\n");
return;
}
mutex_lock(&guc->submission_state.lock);
xa_for_each(&guc->submission_state.exec_queue_lookup, index, q)
if (xe_exec_queue_get_unless_zero(q))
set_exec_queue_wedged(q);
mutex_unlock(&guc->submission_state.lock);
mutex_lock(&guc->submission_state.lock);
xa_for_each(&guc->submission_state.exec_queue_lookup, index, q)
if (xe_exec_queue_get_unless_zero(q))
set_exec_queue_wedged(q);
mutex_unlock(&guc->submission_state.lock);
} else {
/* Forcefully kill any remaining exec queues, signal fences */
guc_submit_reset_prepare(guc);
xe_guc_submit_stop(guc);
xe_guc_softreset(guc);
xe_uc_fw_sanitize(&guc->fw);
xe_guc_submit_pause_abort(guc);
}
}
static bool guc_submit_hint_wedged(struct xe_guc *guc)
@@ -2230,6 +2258,7 @@ static const struct xe_exec_queue_ops guc_exec_queue_ops = {
static void guc_exec_queue_stop(struct xe_guc *guc, struct xe_exec_queue *q)
{
struct xe_gpu_scheduler *sched = &q->guc->sched;
bool do_destroy = false;
/* Stop scheduling + flush any DRM scheduler operations */
xe_sched_submission_stop(sched);
@@ -2237,7 +2266,7 @@ static void guc_exec_queue_stop(struct xe_guc *guc, struct xe_exec_queue *q)
/* Clean up lost G2H + reset engine state */
if (exec_queue_registered(q)) {
if (exec_queue_destroyed(q))
__guc_exec_queue_destroy(guc, q);
do_destroy = true;
}
if (q->guc->suspend_pending) {
set_exec_queue_suspended(q);
@@ -2273,18 +2302,15 @@ static void guc_exec_queue_stop(struct xe_guc *guc, struct xe_exec_queue *q)
xe_guc_exec_queue_trigger_cleanup(q);
}
}
if (do_destroy)
__guc_exec_queue_destroy(guc, q);
}
int xe_guc_submit_reset_prepare(struct xe_guc *guc)
static int guc_submit_reset_prepare(struct xe_guc *guc)
{
int ret;
if (xe_gt_WARN_ON(guc_to_gt(guc), vf_recovery(guc)))
return 0;
if (!guc->submission_state.initialized)
return 0;
/*
* Using an atomic here rather than submission_state.lock as this
* function can be called while holding the CT lock (engine reset
@@ -2299,6 +2325,17 @@ int xe_guc_submit_reset_prepare(struct xe_guc *guc)
return ret;
}
int xe_guc_submit_reset_prepare(struct xe_guc *guc)
{
if (xe_gt_WARN_ON(guc_to_gt(guc), vf_recovery(guc)))
return 0;
if (!guc->submission_state.initialized)
return 0;
return guc_submit_reset_prepare(guc);
}
void xe_guc_submit_reset_wait(struct xe_guc *guc)
{
wait_event(guc->ct.wq, xe_device_wedged(guc_to_xe(guc)) ||
@@ -2695,8 +2732,7 @@ void xe_guc_submit_pause_abort(struct xe_guc *guc)
continue;
xe_sched_submission_start(sched);
if (exec_queue_killed_or_banned_or_wedged(q))
xe_guc_exec_queue_trigger_cleanup(q);
guc_exec_queue_kill(q);
}
mutex_unlock(&guc->submission_state.lock);
}

View File

@@ -2413,14 +2413,14 @@ static int get_ctx_timestamp(struct xe_lrc *lrc, u32 engine_id, u64 *reg_ctx_ts)
* @lrc: Pointer to the lrc.
*
* Return latest ctx timestamp. With support for active contexts, the
* calculation may bb slightly racy, so follow a read-again logic to ensure that
* calculation may be slightly racy, so follow a read-again logic to ensure that
* the context is still active before returning the right timestamp.
*
* Returns: New ctx timestamp value
*/
u64 xe_lrc_timestamp(struct xe_lrc *lrc)
{
u64 lrc_ts, reg_ts, new_ts;
u64 lrc_ts, reg_ts, new_ts = lrc->ctx_timestamp;
u32 engine_id;
lrc_ts = xe_lrc_ctx_timestamp(lrc);
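The kerneldoc fixed above describes a read-again scheme: the register timestamp is only trusted if the context is confirmed still active across the read, otherwise a previously saved value is kept. A simplified sketch of that decision, with the sampling structs as hypothetical stand-ins driven by a test fixture:

```c
#include <assert.h>
#include <stdint.h>

/* One observation of the running context: which engine, what timestamp. */
struct sample {
	uint32_t engine_id;
	uint64_t reg_ts;
};

/*
 * Illustrative read-again check: accept the register timestamp only if
 * the same context was observed on the engine before and after the read;
 * otherwise fall back to the saved context timestamp.
 */
static uint64_t latest_ts(uint64_t saved_ts,
			  const struct sample *first,
			  const struct sample *again)
{
	if (first->engine_id == again->engine_id)
		return again->reg_ts;
	return saved_ts; /* racy context switch observed: keep saved value */
}
```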

View File

@@ -543,8 +543,7 @@ static ssize_t xe_oa_read(struct file *file, char __user *buf,
size_t offset = 0;
int ret;
/* Can't read from disabled streams */
if (!stream->enabled || !stream->sample)
if (!stream->sample)
return -EINVAL;
if (!(file->f_flags & O_NONBLOCK)) {
@@ -1460,6 +1459,10 @@ static void xe_oa_stream_disable(struct xe_oa_stream *stream)
if (stream->sample)
hrtimer_cancel(&stream->poll_check_timer);
/* Update stream->oa_buffer.tail to allow any final reports to be read */
if (xe_oa_buffer_check_unlocked(stream))
wake_up(&stream->poll_wq);
}
static int xe_oa_enable_preempt_timeslice(struct xe_oa_stream *stream)

View File

@@ -1655,14 +1655,35 @@ static int xe_pt_stage_unbind_entry(struct xe_ptw *parent, pgoff_t offset,
XE_WARN_ON(!level);
/* Check for leaf node */
if (xe_walk->prl && xe_page_reclaim_list_valid(xe_walk->prl) &&
(!xe_child->base.children || !xe_child->base.children[first])) {
xe_child->level <= MAX_HUGEPTE_LEVEL) {
struct iosys_map *leaf_map = &xe_child->bo->vmap;
pgoff_t count = xe_pt_num_entries(addr, next, xe_child->level, walk);
for (pgoff_t i = 0; i < count; i++) {
u64 pte = xe_map_rd(xe, leaf_map, (first + i) * sizeof(u64), u64);
u64 pte;
int ret;
/*
* If not a leaf pt, skip unless non-leaf pt is interleaved between
* leaf ptes which causes the page walk to skip over the child leaves
*/
if (xe_child->base.children && xe_child->base.children[first + i]) {
u64 pt_size = 1ULL << walk->shifts[xe_child->level];
bool edge_pt = (i == 0 && !IS_ALIGNED(addr, pt_size)) ||
(i == count - 1 && !IS_ALIGNED(next, pt_size));
if (!edge_pt) {
xe_page_reclaim_list_abort(xe_walk->tile->primary_gt,
xe_walk->prl,
"PT is skipped by walk at level=%u offset=%lu",
xe_child->level, first + i);
break;
}
continue;
}
pte = xe_map_rd(xe, leaf_map, (first + i) * sizeof(u64), u64);
/*
* In rare scenarios, pte may not be written yet due to racy conditions.
* In such cases, invalidate the PRL and fallback to full PPC invalidation.
@@ -1674,9 +1695,8 @@ static int xe_pt_stage_unbind_entry(struct xe_ptw *parent, pgoff_t offset,
}
/* Ensure it is a defined page */
xe_tile_assert(xe_walk->tile,
xe_child->level == 0 ||
(pte & (XE_PTE_PS64 | XE_PDE_PS_2M | XE_PDPE_PS_1G)));
xe_tile_assert(xe_walk->tile, xe_child->level == 0 ||
(pte & (XE_PDE_PS_2M | XE_PDPE_PS_1G)));
/* An entry should be added for 64KB but contigious 4K have XE_PTE_PS64 */
if (pte & XE_PTE_PS64)
@@ -1701,11 +1721,11 @@ static int xe_pt_stage_unbind_entry(struct xe_ptw *parent, pgoff_t offset,
killed = xe_pt_check_kill(addr, next, level - 1, xe_child, action, walk);
/*
* Verify PRL is active and if entry is not a leaf pte (base.children conditions),
* there is a potential need to invalidate the PRL if any PTE (num_live) are dropped.
* Verify if any PTE are potentially dropped at non-leaf levels, either from being
* killed or the page walk covers the region.
*/
if (xe_walk->prl && level > 1 && xe_child->num_live &&
xe_child->base.children && xe_child->base.children[first]) {
if (xe_walk->prl && xe_page_reclaim_list_valid(xe_walk->prl) &&
xe_child->level > MAX_HUGEPTE_LEVEL && xe_child->num_live) {
bool covered = xe_pt_covers(addr, next, xe_child->level, &xe_walk->base);
/*

View File

@@ -444,6 +444,8 @@ hid_bpf_hw_request(struct hid_bpf_ctx *ctx, __u8 *buf, size_t buf__sz,
(u64)(long)ctx,
true); /* prevent infinite recursions */
if (ret > size)
ret = size;
if (ret > 0)
memcpy(buf, dma_data, ret);
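The two added lines above clamp the reported length to the caller's buffer before copying, so a hardware request that returns more bytes than `buf__sz` cannot overflow the destination. A standalone sketch of that clamp (function and parameter names are illustrative, not the HID-BPF API):

```c
#include <assert.h>
#include <string.h>

/*
 * Illustrative bounds clamp: 'ret' is the byte count reported by the
 * device, which may exceed the caller's buffer. Clamp before memcpy(),
 * and pass negative error codes through untouched.
 */
static int copy_report(char *buf, size_t buf_sz,
		       const char *dma_data, int ret)
{
	if (ret > (int)buf_sz)
		ret = (int)buf_sz; /* never copy past the caller's buffer */
	if (ret > 0)
		memcpy(buf, dma_data, (size_t)ret);
	return ret;
}
```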

Some files were not shown because too many files have changed in this diff.