From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from lists.gentoo.org (pigeon.gentoo.org [208.92.234.80]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by finch.gentoo.org (Postfix) with ESMTPS id EBA59139360 for ; Thu, 12 Aug 2021 11:54:52 +0000 (UTC) Received: from pigeon.gentoo.org (localhost [127.0.0.1]) by pigeon.gentoo.org (Postfix) with SMTP id 217EEE0A81; Thu, 12 Aug 2021 11:54:52 +0000 (UTC) Received: from smtp.gentoo.org (smtp.gentoo.org [140.211.166.183]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by pigeon.gentoo.org (Postfix) with ESMTPS id C7917E0A81 for ; Thu, 12 Aug 2021 11:54:51 +0000 (UTC) Received: from oystercatcher.gentoo.org (oystercatcher.gentoo.org [148.251.78.52]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.gentoo.org (Postfix) with ESMTPS id 77CDA342AB2 for ; Thu, 12 Aug 2021 11:54:50 +0000 (UTC) Received: from localhost.localdomain (localhost [IPv6:::1]) by oystercatcher.gentoo.org (Postfix) with ESMTP id 1D77D7B2 for ; Thu, 12 Aug 2021 11:54:49 +0000 (UTC) From: "Mike Pagano" To: gentoo-commits@lists.gentoo.org Content-Transfer-Encoding: 8bit Content-type: text/plain; charset=UTF-8 Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano" Message-ID: <1628769276.4f6147ba258215799948704b0f8626c7b5611a11.mpagano@gentoo> Subject: [gentoo-commits] proj/linux-patches:5.13 commit in: / X-VCS-Repository: proj/linux-patches X-VCS-Files: 0000_README 1009_linux-5.13.10.patch X-VCS-Directories: / X-VCS-Committer: mpagano X-VCS-Committer-Name: Mike Pagano X-VCS-Revision: 4f6147ba258215799948704b0f8626c7b5611a11 X-VCS-Branch: 5.13 Date: Thu, 12 Aug 2021 11:54:49 +0000 (UTC) Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-Id: Gentoo Linux mail X-BeenThere: gentoo-commits@lists.gentoo.org X-Auto-Response-Suppress: DR, RN, NRN, 
OOF, AutoReply X-Archives-Salt: 9f6de412-79b6-45bc-876e-74144231aa28 X-Archives-Hash: e6adb67057f6280c13dda2d737b41b6f commit: 4f6147ba258215799948704b0f8626c7b5611a11 Author: Mike Pagano gentoo org> AuthorDate: Thu Aug 12 11:54:36 2021 +0000 Commit: Mike Pagano gentoo org> CommitDate: Thu Aug 12 11:54:36 2021 +0000 URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=4f6147ba Linux patch 5.13.10 Signed-off-by: Mike Pagano gentoo.org> 0000_README | 4 + 1009_linux-5.13.10.patch | 6040 ++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 6044 insertions(+) diff --git a/0000_README b/0000_README index fd05e40..deff891 100644 --- a/0000_README +++ b/0000_README @@ -79,6 +79,10 @@ Patch: 1008_linux-5.13.9.patch From: http://www.kernel.org Desc: Linux 5.13.9 +Patch: 1009_linux-5.13.10.patch +From: http://www.kernel.org +Desc: Linux 5.13.10 + Patch: 1500_XATTR_USER_PREFIX.patch From: https://bugs.gentoo.org/show_bug.cgi?id=470644 Desc: Support for namespace user.pax.* on tmpfs. diff --git a/1009_linux-5.13.10.patch b/1009_linux-5.13.10.patch new file mode 100644 index 0000000..612fced --- /dev/null +++ b/1009_linux-5.13.10.patch @@ -0,0 +1,6040 @@ +diff --git a/Makefile b/Makefile +index 9d810e13a83f4..4e9f877f513f9 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,7 +1,7 @@ + # SPDX-License-Identifier: GPL-2.0 + VERSION = 5 + PATCHLEVEL = 13 +-SUBLEVEL = 9 ++SUBLEVEL = 10 + EXTRAVERSION = + NAME = Opossums on Parade + +@@ -1366,6 +1366,15 @@ scripts_unifdef: scripts_basic + $(Q)$(MAKE) $(build)=scripts scripts/unifdef + + # --------------------------------------------------------------------------- ++# Install ++ ++# Many distributions have the custom install script, /sbin/installkernel. ++# If DKMS is installed, 'make install' will eventually recurse back ++# to this Makefile to build and install external modules. ++# Cancel sub_make_done so that options such as M=, V=, etc. are parsed. 
++ ++install: sub_make_done := ++ + # Kernel selftest + + PHONY += kselftest +diff --git a/arch/alpha/kernel/smp.c b/arch/alpha/kernel/smp.c +index 4b2575f936d46..cb64e4797d2a8 100644 +--- a/arch/alpha/kernel/smp.c ++++ b/arch/alpha/kernel/smp.c +@@ -582,7 +582,7 @@ void + smp_send_stop(void) + { + cpumask_t to_whom; +- cpumask_copy(&to_whom, cpu_possible_mask); ++ cpumask_copy(&to_whom, cpu_online_mask); + cpumask_clear_cpu(smp_processor_id(), &to_whom); + #ifdef DEBUG_IPI_MSG + if (hard_smp_processor_id() != boot_cpu_id) +diff --git a/arch/arm/boot/dts/am437x-l4.dtsi b/arch/arm/boot/dts/am437x-l4.dtsi +index a6f19ae7d3e6b..f73ecec1995a3 100644 +--- a/arch/arm/boot/dts/am437x-l4.dtsi ++++ b/arch/arm/boot/dts/am437x-l4.dtsi +@@ -1595,7 +1595,7 @@ + compatible = "ti,am4372-d_can", "ti,am3352-d_can"; + reg = <0x0 0x2000>; + clocks = <&dcan1_fck>; +- clock-name = "fck"; ++ clock-names = "fck"; + syscon-raminit = <&scm_conf 0x644 1>; + interrupts = ; + status = "disabled"; +diff --git a/arch/arm/boot/dts/imx53-m53menlo.dts b/arch/arm/boot/dts/imx53-m53menlo.dts +index f98691ae4415b..d3082b9774e40 100644 +--- a/arch/arm/boot/dts/imx53-m53menlo.dts ++++ b/arch/arm/boot/dts/imx53-m53menlo.dts +@@ -388,13 +388,13 @@ + + pinctrl_power_button: powerbutgrp { + fsl,pins = < +- MX53_PAD_SD2_DATA2__GPIO1_13 0x1e4 ++ MX53_PAD_SD2_DATA0__GPIO1_15 0x1e4 + >; + }; + + pinctrl_power_out: poweroutgrp { + fsl,pins = < +- MX53_PAD_SD2_DATA0__GPIO1_15 0x1e4 ++ MX53_PAD_SD2_DATA2__GPIO1_13 0x1e4 + >; + }; + +diff --git a/arch/arm/boot/dts/imx6qdl-sr-som.dtsi b/arch/arm/boot/dts/imx6qdl-sr-som.dtsi +index 0ad8ccde0cf87..f86efd0ccc404 100644 +--- a/arch/arm/boot/dts/imx6qdl-sr-som.dtsi ++++ b/arch/arm/boot/dts/imx6qdl-sr-som.dtsi +@@ -54,7 +54,13 @@ + pinctrl-names = "default"; + pinctrl-0 = <&pinctrl_microsom_enet_ar8035>; + phy-mode = "rgmii-id"; +- phy-reset-duration = <2>; ++ ++ /* ++ * The PHY seems to require a long-enough reset duration to avoid ++ * some rare issues where the PHY 
gets stuck in an inconsistent and ++ * non-functional state at boot-up. 10ms proved to be fine . ++ */ ++ phy-reset-duration = <10>; + phy-reset-gpios = <&gpio4 15 GPIO_ACTIVE_LOW>; + status = "okay"; + +diff --git a/arch/arm/boot/dts/imx6ull-colibri-wifi.dtsi b/arch/arm/boot/dts/imx6ull-colibri-wifi.dtsi +index a0545431b3dc3..9f1e38282bee7 100644 +--- a/arch/arm/boot/dts/imx6ull-colibri-wifi.dtsi ++++ b/arch/arm/boot/dts/imx6ull-colibri-wifi.dtsi +@@ -43,6 +43,7 @@ + assigned-clock-rates = <0>, <198000000>; + cap-power-off-card; + keep-power-in-suspend; ++ max-frequency = <25000000>; + mmc-pwrseq = <&wifi_pwrseq>; + no-1-8-v; + non-removable; +diff --git a/arch/arm/boot/dts/omap5-board-common.dtsi b/arch/arm/boot/dts/omap5-board-common.dtsi +index d8f13626cfd1b..3a8f102314758 100644 +--- a/arch/arm/boot/dts/omap5-board-common.dtsi ++++ b/arch/arm/boot/dts/omap5-board-common.dtsi +@@ -30,14 +30,6 @@ + regulator-max-microvolt = <5000000>; + }; + +- vdds_1v8_main: fixedregulator-vdds_1v8_main { +- compatible = "regulator-fixed"; +- regulator-name = "vdds_1v8_main"; +- vin-supply = <&smps7_reg>; +- regulator-min-microvolt = <1800000>; +- regulator-max-microvolt = <1800000>; +- }; +- + vmmcsd_fixed: fixedregulator-mmcsd { + compatible = "regulator-fixed"; + regulator-name = "vmmcsd_fixed"; +@@ -487,6 +479,7 @@ + regulator-boot-on; + }; + ++ vdds_1v8_main: + smps7_reg: smps7 { + /* VDDS_1v8_OMAP over VDDS_1v8_MAIN */ + regulator-name = "smps7"; +diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi +index c5ea08fec535f..6cf1c8b4c6e28 100644 +--- a/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi ++++ b/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi +@@ -37,7 +37,7 @@ + poll-interval = <20>; + + /* +- * The EXTi IRQ line 3 is shared with touchscreen and ethernet, ++ * The EXTi IRQ line 3 is shared with ethernet, + * so mark this as polled GPIO key. 
+ */ + button-0 { +@@ -46,6 +46,16 @@ + gpios = <&gpiof 3 GPIO_ACTIVE_LOW>; + }; + ++ /* ++ * The EXTi IRQ line 6 is shared with touchscreen, ++ * so mark this as polled GPIO key. ++ */ ++ button-1 { ++ label = "TA2-GPIO-B"; ++ linux,code = ; ++ gpios = <&gpiod 6 GPIO_ACTIVE_LOW>; ++ }; ++ + /* + * The EXTi IRQ line 0 is shared with PMIC, + * so mark this as polled GPIO key. +@@ -60,13 +70,6 @@ + gpio-keys { + compatible = "gpio-keys"; + +- button-1 { +- label = "TA2-GPIO-B"; +- linux,code = ; +- gpios = <&gpiod 6 GPIO_ACTIVE_LOW>; +- wakeup-source; +- }; +- + button-3 { + label = "TA4-GPIO-D"; + linux,code = ; +@@ -82,6 +85,7 @@ + label = "green:led5"; + gpios = <&gpioc 6 GPIO_ACTIVE_HIGH>; + default-state = "off"; ++ status = "disabled"; + }; + + led-1 { +@@ -185,8 +189,8 @@ + touchscreen@38 { + compatible = "edt,edt-ft5406"; + reg = <0x38>; +- interrupt-parent = <&gpiog>; +- interrupts = <2 IRQ_TYPE_EDGE_FALLING>; /* GPIO E */ ++ interrupt-parent = <&gpioc>; ++ interrupts = <6 IRQ_TYPE_EDGE_FALLING>; /* GPIO E */ + }; + }; + +diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi +index 2af0a67526747..8c41f819f7769 100644 +--- a/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi ++++ b/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi +@@ -12,6 +12,8 @@ + aliases { + ethernet0 = ðernet0; + ethernet1 = &ksz8851; ++ rtc0 = &hwrtc; ++ rtc1 = &rtc; + }; + + memory@c0000000 { +@@ -138,6 +140,7 @@ + reset-gpios = <&gpioh 3 GPIO_ACTIVE_LOW>; + reset-assert-us = <500>; + reset-deassert-us = <500>; ++ smsc,disable-energy-detect; + interrupt-parent = <&gpioi>; + interrupts = <11 IRQ_TYPE_LEVEL_LOW>; + }; +@@ -248,7 +251,7 @@ + /delete-property/dmas; + /delete-property/dma-names; + +- rtc@32 { ++ hwrtc: rtc@32 { + compatible = "microcrystal,rv8803"; + reg = <0x32>; + }; +diff --git a/arch/arm/mach-imx/mmdc.c b/arch/arm/mach-imx/mmdc.c +index 0dfd0ae7a63dd..af12668d0bf51 100644 +--- a/arch/arm/mach-imx/mmdc.c ++++ 
b/arch/arm/mach-imx/mmdc.c +@@ -103,6 +103,7 @@ struct mmdc_pmu { + struct perf_event *mmdc_events[MMDC_NUM_COUNTERS]; + struct hlist_node node; + struct fsl_mmdc_devtype_data *devtype_data; ++ struct clk *mmdc_ipg_clk; + }; + + /* +@@ -462,11 +463,14 @@ static int imx_mmdc_remove(struct platform_device *pdev) + + cpuhp_state_remove_instance_nocalls(cpuhp_mmdc_state, &pmu_mmdc->node); + perf_pmu_unregister(&pmu_mmdc->pmu); ++ iounmap(pmu_mmdc->mmdc_base); ++ clk_disable_unprepare(pmu_mmdc->mmdc_ipg_clk); + kfree(pmu_mmdc); + return 0; + } + +-static int imx_mmdc_perf_init(struct platform_device *pdev, void __iomem *mmdc_base) ++static int imx_mmdc_perf_init(struct platform_device *pdev, void __iomem *mmdc_base, ++ struct clk *mmdc_ipg_clk) + { + struct mmdc_pmu *pmu_mmdc; + char *name; +@@ -494,6 +498,7 @@ static int imx_mmdc_perf_init(struct platform_device *pdev, void __iomem *mmdc_b + } + + mmdc_num = mmdc_pmu_init(pmu_mmdc, mmdc_base, &pdev->dev); ++ pmu_mmdc->mmdc_ipg_clk = mmdc_ipg_clk; + if (mmdc_num == 0) + name = "mmdc"; + else +@@ -529,7 +534,7 @@ pmu_free: + + #else + #define imx_mmdc_remove NULL +-#define imx_mmdc_perf_init(pdev, mmdc_base) 0 ++#define imx_mmdc_perf_init(pdev, mmdc_base, mmdc_ipg_clk) 0 + #endif + + static int imx_mmdc_probe(struct platform_device *pdev) +@@ -567,7 +572,13 @@ static int imx_mmdc_probe(struct platform_device *pdev) + val &= ~(1 << BP_MMDC_MAPSR_PSD); + writel_relaxed(val, reg); + +- return imx_mmdc_perf_init(pdev, mmdc_base); ++ err = imx_mmdc_perf_init(pdev, mmdc_base, mmdc_ipg_clk); ++ if (err) { ++ iounmap(mmdc_base); ++ clk_disable_unprepare(mmdc_ipg_clk); ++ } ++ ++ return err; + } + + int imx_mmdc_get_ddr_type(void) +diff --git a/arch/arm/mach-omap2/omap_hwmod.c b/arch/arm/mach-omap2/omap_hwmod.c +index 65934b2924fb5..12b26e04686fa 100644 +--- a/arch/arm/mach-omap2/omap_hwmod.c ++++ b/arch/arm/mach-omap2/omap_hwmod.c +@@ -3776,6 +3776,7 @@ struct powerdomain *omap_hwmod_get_pwrdm(struct omap_hwmod *oh) + struct 
omap_hwmod_ocp_if *oi; + struct clockdomain *clkdm; + struct clk_hw_omap *clk; ++ struct clk_hw *hw; + + if (!oh) + return NULL; +@@ -3792,7 +3793,14 @@ struct powerdomain *omap_hwmod_get_pwrdm(struct omap_hwmod *oh) + c = oi->_clk; + } + +- clk = to_clk_hw_omap(__clk_get_hw(c)); ++ hw = __clk_get_hw(c); ++ if (!hw) ++ return NULL; ++ ++ clk = to_clk_hw_omap(hw); ++ if (!clk) ++ return NULL; ++ + clkdm = clk->clkdm; + if (!clkdm) + return NULL; +diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1028a-kontron-sl28-var2.dts b/arch/arm64/boot/dts/freescale/fsl-ls1028a-kontron-sl28-var2.dts +index dd764b720fb0a..f6a79c8080d14 100644 +--- a/arch/arm64/boot/dts/freescale/fsl-ls1028a-kontron-sl28-var2.dts ++++ b/arch/arm64/boot/dts/freescale/fsl-ls1028a-kontron-sl28-var2.dts +@@ -54,6 +54,7 @@ + + &mscc_felix_port0 { + label = "swp0"; ++ managed = "in-band-status"; + phy-handle = <&phy0>; + phy-mode = "sgmii"; + status = "okay"; +@@ -61,6 +62,7 @@ + + &mscc_felix_port1 { + label = "swp1"; ++ managed = "in-band-status"; + phy-handle = <&phy1>; + phy-mode = "sgmii"; + status = "okay"; +diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi b/arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi +index a30249ebffa8c..a94cbd6dcce66 100644 +--- a/arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi ++++ b/arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi +@@ -66,7 +66,7 @@ + }; + }; + +- sysclk: clock-sysclk { ++ sysclk: sysclk { + compatible = "fixed-clock"; + #clock-cells = <0>; + clock-frequency = <100000000>; +diff --git a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts +index ce2bcddf396f8..a05b1ab2dd12c 100644 +--- a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts ++++ b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts +@@ -19,6 +19,8 @@ + aliases { + spi0 = &spi0; + ethernet1 = ð1; ++ mmc0 = &sdhci0; ++ mmc1 = &sdhci1; + }; + + chosen { +@@ -119,6 +121,7 @@ + pinctrl-names = "default"; + pinctrl-0 = 
<&i2c1_pins>; + clock-frequency = <100000>; ++ /delete-property/ mrvl,i2c-fast-mode; + status = "okay"; + + rtc@6f { +diff --git a/arch/arm64/include/asm/ptrace.h b/arch/arm64/include/asm/ptrace.h +index e58bca832dfff..41b332c054ab8 100644 +--- a/arch/arm64/include/asm/ptrace.h ++++ b/arch/arm64/include/asm/ptrace.h +@@ -320,7 +320,17 @@ static inline unsigned long kernel_stack_pointer(struct pt_regs *regs) + + static inline unsigned long regs_return_value(struct pt_regs *regs) + { +- return regs->regs[0]; ++ unsigned long val = regs->regs[0]; ++ ++ /* ++ * Audit currently uses regs_return_value() instead of ++ * syscall_get_return_value(). Apply the same sign-extension here until ++ * audit is updated to use syscall_get_return_value(). ++ */ ++ if (compat_user_mode(regs)) ++ val = sign_extend64(val, 31); ++ ++ return val; + } + + static inline void regs_set_return_value(struct pt_regs *regs, unsigned long rc) +diff --git a/arch/arm64/include/asm/syscall.h b/arch/arm64/include/asm/syscall.h +index cfc0672013f67..03e20895453a7 100644 +--- a/arch/arm64/include/asm/syscall.h ++++ b/arch/arm64/include/asm/syscall.h +@@ -29,22 +29,23 @@ static inline void syscall_rollback(struct task_struct *task, + regs->regs[0] = regs->orig_x0; + } + +- +-static inline long syscall_get_error(struct task_struct *task, +- struct pt_regs *regs) ++static inline long syscall_get_return_value(struct task_struct *task, ++ struct pt_regs *regs) + { +- unsigned long error = regs->regs[0]; ++ unsigned long val = regs->regs[0]; + + if (is_compat_thread(task_thread_info(task))) +- error = sign_extend64(error, 31); ++ val = sign_extend64(val, 31); + +- return IS_ERR_VALUE(error) ? 
error : 0; ++ return val; + } + +-static inline long syscall_get_return_value(struct task_struct *task, +- struct pt_regs *regs) ++static inline long syscall_get_error(struct task_struct *task, ++ struct pt_regs *regs) + { +- return regs->regs[0]; ++ unsigned long error = syscall_get_return_value(task, regs); ++ ++ return IS_ERR_VALUE(error) ? error : 0; + } + + static inline void syscall_set_return_value(struct task_struct *task, +diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c +index eb2f73939b7bb..af3b64ca482d0 100644 +--- a/arch/arm64/kernel/ptrace.c ++++ b/arch/arm64/kernel/ptrace.c +@@ -1862,7 +1862,7 @@ void syscall_trace_exit(struct pt_regs *regs) + audit_syscall_exit(regs); + + if (flags & _TIF_SYSCALL_TRACEPOINT) +- trace_sys_exit(regs, regs_return_value(regs)); ++ trace_sys_exit(regs, syscall_get_return_value(current, regs)); + + if (flags & (_TIF_SYSCALL_TRACE | _TIF_SINGLESTEP)) + tracehook_report_syscall(regs, PTRACE_SYSCALL_EXIT); +diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c +index 6237486ff6bb7..22899c86711aa 100644 +--- a/arch/arm64/kernel/signal.c ++++ b/arch/arm64/kernel/signal.c +@@ -29,6 +29,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -890,7 +891,7 @@ static void do_signal(struct pt_regs *regs) + retval == -ERESTART_RESTARTBLOCK || + (retval == -ERESTARTSYS && + !(ksig.ka.sa.sa_flags & SA_RESTART)))) { +- regs->regs[0] = -EINTR; ++ syscall_set_return_value(current, regs, -EINTR, 0); + regs->pc = continue_addr; + } + +diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c +index de07147a79260..7ae41b35c923e 100644 +--- a/arch/arm64/kernel/stacktrace.c ++++ b/arch/arm64/kernel/stacktrace.c +@@ -220,7 +220,7 @@ void show_stack(struct task_struct *tsk, unsigned long *sp, const char *loglvl) + + #ifdef CONFIG_STACKTRACE + +-noinline void arch_stack_walk(stack_trace_consume_fn consume_entry, ++noinline notrace void 
arch_stack_walk(stack_trace_consume_fn consume_entry, + void *cookie, struct task_struct *task, + struct pt_regs *regs) + { +diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c +index 263d6c1a525f3..50a0f1a38e849 100644 +--- a/arch/arm64/kernel/syscall.c ++++ b/arch/arm64/kernel/syscall.c +@@ -54,10 +54,7 @@ static void invoke_syscall(struct pt_regs *regs, unsigned int scno, + ret = do_ni_syscall(regs, scno); + } + +- if (is_compat_task()) +- ret = lower_32_bits(ret); +- +- regs->regs[0] = ret; ++ syscall_set_return_value(current, regs, 0, ret); + + /* + * Ultimately, this value will get limited by KSTACK_OFFSET_MAX(), +@@ -115,7 +112,7 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr, + * syscall. do_notify_resume() will send a signal to userspace + * before the syscall is restarted. + */ +- regs->regs[0] = -ERESTARTNOINTR; ++ syscall_set_return_value(current, regs, -ERESTARTNOINTR, 0); + return; + } + +@@ -136,7 +133,7 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr, + * anyway. 
+ */ + if (scno == NO_SYSCALL) +- regs->regs[0] = -ENOSYS; ++ syscall_set_return_value(current, regs, -ENOSYS, 0); + scno = syscall_trace_enter(regs); + if (scno == NO_SYSCALL) + goto trace_exit; +diff --git a/arch/mips/Makefile b/arch/mips/Makefile +index 258234c35a096..674f68d16a73f 100644 +--- a/arch/mips/Makefile ++++ b/arch/mips/Makefile +@@ -321,7 +321,7 @@ KBUILD_LDFLAGS += -m $(ld-emul) + + ifdef CONFIG_MIPS + CHECKFLAGS += $(shell $(CC) $(KBUILD_CFLAGS) -dM -E -x c /dev/null | \ +- egrep -vw '__GNUC_(|MINOR_|PATCHLEVEL_)_' | \ ++ egrep -vw '__GNUC_(MINOR_|PATCHLEVEL_)?_' | \ + sed -e "s/^\#define /-D'/" -e "s/ /'='/" -e "s/$$/'/" -e 's/\$$/&&/g') + endif + +diff --git a/arch/mips/include/asm/pgalloc.h b/arch/mips/include/asm/pgalloc.h +index d0cf997b4ba84..139b4050259fa 100644 +--- a/arch/mips/include/asm/pgalloc.h ++++ b/arch/mips/include/asm/pgalloc.h +@@ -59,15 +59,20 @@ do { \ + + static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address) + { +- pmd_t *pmd = NULL; ++ pmd_t *pmd; + struct page *pg; + +- pg = alloc_pages(GFP_KERNEL | __GFP_ACCOUNT, PMD_ORDER); +- if (pg) { +- pgtable_pmd_page_ctor(pg); +- pmd = (pmd_t *)page_address(pg); +- pmd_init((unsigned long)pmd, (unsigned long)invalid_pte_table); ++ pg = alloc_pages(GFP_KERNEL_ACCOUNT, PMD_ORDER); ++ if (!pg) ++ return NULL; ++ ++ if (!pgtable_pmd_page_ctor(pg)) { ++ __free_pages(pg, PMD_ORDER); ++ return NULL; + } ++ ++ pmd = (pmd_t *)page_address(pg); ++ pmd_init((unsigned long)pmd, (unsigned long)invalid_pte_table); + return pmd; + } + +diff --git a/arch/mips/mti-malta/malta-platform.c b/arch/mips/mti-malta/malta-platform.c +index 11e9527c6e441..62ffac500eb52 100644 +--- a/arch/mips/mti-malta/malta-platform.c ++++ b/arch/mips/mti-malta/malta-platform.c +@@ -47,7 +47,8 @@ static struct plat_serial8250_port uart8250_data[] = { + .mapbase = 0x1f000900, /* The CBUS UART */ + .irq = MIPS_CPU_IRQ_BASE + MIPSCPU_INT_MB2, + .uartclk = 3686400, /* Twice the usual clk! 
*/ +- .iotype = UPIO_MEM32, ++ .iotype = IS_ENABLED(CONFIG_CPU_BIG_ENDIAN) ? ++ UPIO_MEM32BE : UPIO_MEM32, + .flags = CBUS_UART_FLAGS, + .regshift = 3, + }, +diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig +index 18ec0f9bb8d5c..3c3647ac33cb8 100644 +--- a/arch/riscv/Kconfig ++++ b/arch/riscv/Kconfig +@@ -489,6 +489,7 @@ config CC_HAVE_STACKPROTECTOR_TLS + + config STACKPROTECTOR_PER_TASK + def_bool y ++ depends on !GCC_PLUGIN_RANDSTRUCT + depends on STACKPROTECTOR && CC_HAVE_STACKPROTECTOR_TLS + + config PHYS_RAM_BASE_FIXED +diff --git a/arch/riscv/boot/dts/sifive/hifive-unmatched-a00.dts b/arch/riscv/boot/dts/sifive/hifive-unmatched-a00.dts +index b1c3c596578f1..2e4ea84f27e77 100644 +--- a/arch/riscv/boot/dts/sifive/hifive-unmatched-a00.dts ++++ b/arch/riscv/boot/dts/sifive/hifive-unmatched-a00.dts +@@ -24,7 +24,7 @@ + + memory@80000000 { + device_type = "memory"; +- reg = <0x0 0x80000000 0x2 0x00000000>; ++ reg = <0x0 0x80000000 0x4 0x00000000>; + }; + + soc { +diff --git a/arch/riscv/kernel/stacktrace.c b/arch/riscv/kernel/stacktrace.c +index bde85fc53357f..7bc8af75933a7 100644 +--- a/arch/riscv/kernel/stacktrace.c ++++ b/arch/riscv/kernel/stacktrace.c +@@ -27,7 +27,7 @@ void notrace walk_stackframe(struct task_struct *task, struct pt_regs *regs, + fp = frame_pointer(regs); + sp = user_stack_pointer(regs); + pc = instruction_pointer(regs); +- } else if (task == current) { ++ } else if (task == NULL || task == current) { + fp = (unsigned long)__builtin_frame_address(1); + sp = (unsigned long)__builtin_frame_address(0); + pc = (unsigned long)__builtin_return_address(0); +diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h +index 2bf1c7ea2758d..2938c902ffbe4 100644 +--- a/arch/x86/events/perf_event.h ++++ b/arch/x86/events/perf_event.h +@@ -1115,9 +1115,10 @@ void x86_pmu_stop(struct perf_event *event, int flags); + + static inline void x86_pmu_disable_event(struct perf_event *event) + { ++ u64 disable_mask = 
__this_cpu_read(cpu_hw_events.perf_ctr_virt_mask); + struct hw_perf_event *hwc = &event->hw; + +- wrmsrl(hwc->config_base, hwc->config); ++ wrmsrl(hwc->config_base, hwc->config & ~disable_mask); + + if (is_counter_pair(hwc)) + wrmsrl(x86_pmu_config_addr(hwc->idx + 1), 0); +diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c +index 716266ab177f4..ed469ddf20741 100644 +--- a/arch/x86/kvm/mmu/mmu.c ++++ b/arch/x86/kvm/mmu/mmu.c +@@ -1546,7 +1546,7 @@ static int is_empty_shadow_page(u64 *spt) + * aggregate version in order to make the slab shrinker + * faster + */ +-static inline void kvm_mod_used_mmu_pages(struct kvm *kvm, unsigned long nr) ++static inline void kvm_mod_used_mmu_pages(struct kvm *kvm, long nr) + { + kvm->arch.n_used_mmu_pages += nr; + percpu_counter_add(&kvm_total_used_mmu_pages, nr); +diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c +index 02d60d7f903da..7498c1384b938 100644 +--- a/arch/x86/kvm/svm/sev.c ++++ b/arch/x86/kvm/svm/sev.c +@@ -188,7 +188,7 @@ static void sev_asid_free(struct kvm_sev_info *sev) + + for_each_possible_cpu(cpu) { + sd = per_cpu(svm_data, cpu); +- sd->sev_vmcbs[pos] = NULL; ++ sd->sev_vmcbs[sev->asid] = NULL; + } + + mutex_unlock(&sev_bitmap_lock); +diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c +index d6a9f05187849..1e11198f89934 100644 +--- a/arch/x86/kvm/x86.c ++++ b/arch/x86/kvm/x86.c +@@ -4252,8 +4252,17 @@ static int kvm_cpu_accept_dm_intr(struct kvm_vcpu *vcpu) + + static int kvm_vcpu_ready_for_interrupt_injection(struct kvm_vcpu *vcpu) + { +- return kvm_arch_interrupt_allowed(vcpu) && +- kvm_cpu_accept_dm_intr(vcpu); ++ /* ++ * Do not cause an interrupt window exit if an exception ++ * is pending or an event needs reinjection; userspace ++ * might want to inject the interrupt manually using KVM_SET_REGS ++ * or KVM_SET_SREGS. For that to work, we must be at an ++ * instruction boundary and with no events half-injected. 
++ */ ++ return (kvm_arch_interrupt_allowed(vcpu) && ++ kvm_cpu_accept_dm_intr(vcpu) && ++ !kvm_event_needs_reinjection(vcpu) && ++ !vcpu->arch.exception.pending); + } + + static int kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu, +diff --git a/arch/x86/tools/relocs.c b/arch/x86/tools/relocs.c +index 04c5a44b96827..9ba700dc47de4 100644 +--- a/arch/x86/tools/relocs.c ++++ b/arch/x86/tools/relocs.c +@@ -57,12 +57,12 @@ static const char * const sym_regex_kernel[S_NSYMTYPES] = { + [S_REL] = + "^(__init_(begin|end)|" + "__x86_cpu_dev_(start|end)|" +- "(__parainstructions|__alt_instructions)(|_end)|" +- "(__iommu_table|__apicdrivers|__smp_locks)(|_end)|" ++ "(__parainstructions|__alt_instructions)(_end)?|" ++ "(__iommu_table|__apicdrivers|__smp_locks)(_end)?|" + "__(start|end)_pci_.*|" + "__(start|end)_builtin_fw|" +- "__(start|stop)___ksymtab(|_gpl)|" +- "__(start|stop)___kcrctab(|_gpl)|" ++ "__(start|stop)___ksymtab(_gpl)?|" ++ "__(start|stop)___kcrctab(_gpl)?|" + "__(start|stop)___param|" + "__(start|stop)___modver|" + "__(start|stop)___bug_table|" +diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c +index 81be0096411da..d8b0d8bd132bc 100644 +--- a/block/blk-iolatency.c ++++ b/block/blk-iolatency.c +@@ -833,7 +833,11 @@ static ssize_t iolatency_set_limit(struct kernfs_open_file *of, char *buf, + + enable = iolatency_set_min_lat_nsec(blkg, lat_val); + if (enable) { +- WARN_ON_ONCE(!blk_get_queue(blkg->q)); ++ if (!blk_get_queue(blkg->q)) { ++ ret = -ENODEV; ++ goto out; ++ } ++ + blkg_get(blkg); + } + +diff --git a/drivers/acpi/acpica/nsrepair2.c b/drivers/acpi/acpica/nsrepair2.c +index 38e10ab976e67..14b71b41e8453 100644 +--- a/drivers/acpi/acpica/nsrepair2.c ++++ b/drivers/acpi/acpica/nsrepair2.c +@@ -379,13 +379,6 @@ acpi_ns_repair_CID(struct acpi_evaluate_info *info, + + (*element_ptr)->common.reference_count = + original_ref_count; +- +- /* +- * The original_element holds a reference from the package object +- * that represents _HID. 
Since a new element was created by _HID, +- * remove the reference from the _CID package. +- */ +- acpi_ut_remove_reference(original_element); + } + + element_ptr++; +diff --git a/drivers/ata/libata-sff.c b/drivers/ata/libata-sff.c +index ae7189d1a5682..b71ea4a680b01 100644 +--- a/drivers/ata/libata-sff.c ++++ b/drivers/ata/libata-sff.c +@@ -637,6 +637,20 @@ unsigned int ata_sff_data_xfer32(struct ata_queued_cmd *qc, unsigned char *buf, + } + EXPORT_SYMBOL_GPL(ata_sff_data_xfer32); + ++static void ata_pio_xfer(struct ata_queued_cmd *qc, struct page *page, ++ unsigned int offset, size_t xfer_size) ++{ ++ bool do_write = (qc->tf.flags & ATA_TFLAG_WRITE); ++ unsigned char *buf; ++ ++ buf = kmap_atomic(page); ++ qc->ap->ops->sff_data_xfer(qc, buf + offset, xfer_size, do_write); ++ kunmap_atomic(buf); ++ ++ if (!do_write && !PageSlab(page)) ++ flush_dcache_page(page); ++} ++ + /** + * ata_pio_sector - Transfer a sector of data. + * @qc: Command on going +@@ -648,11 +662,9 @@ EXPORT_SYMBOL_GPL(ata_sff_data_xfer32); + */ + static void ata_pio_sector(struct ata_queued_cmd *qc) + { +- int do_write = (qc->tf.flags & ATA_TFLAG_WRITE); + struct ata_port *ap = qc->ap; + struct page *page; + unsigned int offset; +- unsigned char *buf; + + if (!qc->cursg) { + qc->curbytes = qc->nbytes; +@@ -670,13 +682,20 @@ static void ata_pio_sector(struct ata_queued_cmd *qc) + + DPRINTK("data %s\n", qc->tf.flags & ATA_TFLAG_WRITE ? "write" : "read"); + +- /* do the actual data transfer */ +- buf = kmap_atomic(page); +- ap->ops->sff_data_xfer(qc, buf + offset, qc->sect_size, do_write); +- kunmap_atomic(buf); ++ /* ++ * Split the transfer when it splits a page boundary. Note that the ++ * split still has to be dword aligned like all ATA data transfers. 
++ */ ++ WARN_ON_ONCE(offset % 4); ++ if (offset + qc->sect_size > PAGE_SIZE) { ++ unsigned int split_len = PAGE_SIZE - offset; + +- if (!do_write && !PageSlab(page)) +- flush_dcache_page(page); ++ ata_pio_xfer(qc, page, offset, split_len); ++ ata_pio_xfer(qc, nth_page(page, 1), 0, ++ qc->sect_size - split_len); ++ } else { ++ ata_pio_xfer(qc, page, offset, qc->sect_size); ++ } + + qc->curbytes += qc->sect_size; + qc->cursg_ofs += qc->sect_size; +diff --git a/drivers/base/dd.c b/drivers/base/dd.c +index ecd7cf848daff..592b3955abe22 100644 +--- a/drivers/base/dd.c ++++ b/drivers/base/dd.c +@@ -634,8 +634,6 @@ dev_groups_failed: + else if (drv->remove) + drv->remove(dev); + probe_failed: +- kfree(dev->dma_range_map); +- dev->dma_range_map = NULL; + if (dev->bus) + blocking_notifier_call_chain(&dev->bus->p->bus_notifier, + BUS_NOTIFY_DRIVER_NOT_BOUND, dev); +@@ -643,6 +641,8 @@ pinctrl_bind_failed: + device_links_no_driver(dev); + devres_release_all(dev); + arch_teardown_dma_ops(dev); ++ kfree(dev->dma_range_map); ++ dev->dma_range_map = NULL; + driver_sysfs_remove(dev); + dev->driver = NULL; + dev_set_drvdata(dev, NULL); +diff --git a/drivers/base/firmware_loader/fallback.c b/drivers/base/firmware_loader/fallback.c +index 91899d185e311..d7d63c1aa993f 100644 +--- a/drivers/base/firmware_loader/fallback.c ++++ b/drivers/base/firmware_loader/fallback.c +@@ -89,12 +89,11 @@ static void __fw_load_abort(struct fw_priv *fw_priv) + { + /* + * There is a small window in which user can write to 'loading' +- * between loading done and disappearance of 'loading' ++ * between loading done/aborted and disappearance of 'loading' + */ +- if (fw_sysfs_done(fw_priv)) ++ if (fw_state_is_aborted(fw_priv) || fw_sysfs_done(fw_priv)) + return; + +- list_del_init(&fw_priv->pending_list); + fw_state_aborted(fw_priv); + } + +@@ -280,7 +279,6 @@ static ssize_t firmware_loading_store(struct device *dev, + * Same logic as fw_load_abort, only the DONE bit + * is ignored and we set ABORT only on 
failure. + */ +- list_del_init(&fw_priv->pending_list); + if (rc) { + fw_state_aborted(fw_priv); + written = rc; +@@ -513,6 +511,11 @@ static int fw_load_sysfs_fallback(struct fw_sysfs *fw_sysfs, long timeout) + } + + mutex_lock(&fw_lock); ++ if (fw_state_is_aborted(fw_priv)) { ++ mutex_unlock(&fw_lock); ++ retval = -EINTR; ++ goto out; ++ } + list_add(&fw_priv->pending_list, &pending_fw_head); + mutex_unlock(&fw_lock); + +@@ -535,11 +538,10 @@ static int fw_load_sysfs_fallback(struct fw_sysfs *fw_sysfs, long timeout) + if (fw_state_is_aborted(fw_priv)) { + if (retval == -ERESTARTSYS) + retval = -EINTR; +- else +- retval = -EAGAIN; + } else if (fw_priv->is_paged_buf && !fw_priv->data) + retval = -ENOMEM; + ++out: + device_del(f_dev); + err_put_dev: + put_device(f_dev); +diff --git a/drivers/base/firmware_loader/firmware.h b/drivers/base/firmware_loader/firmware.h +index 63bd29fdcb9c5..a3014e9e2c852 100644 +--- a/drivers/base/firmware_loader/firmware.h ++++ b/drivers/base/firmware_loader/firmware.h +@@ -117,8 +117,16 @@ static inline void __fw_state_set(struct fw_priv *fw_priv, + + WRITE_ONCE(fw_st->status, status); + +- if (status == FW_STATUS_DONE || status == FW_STATUS_ABORTED) ++ if (status == FW_STATUS_DONE || status == FW_STATUS_ABORTED) { ++#ifdef CONFIG_FW_LOADER_USER_HELPER ++ /* ++ * Doing this here ensures that the fw_priv is deleted from ++ * the pending list in all abort/done paths. 
++ */ ++ list_del_init(&fw_priv->pending_list); ++#endif + complete_all(&fw_st->completion); ++ } + } + + static inline void fw_state_aborted(struct fw_priv *fw_priv) +diff --git a/drivers/base/firmware_loader/main.c b/drivers/base/firmware_loader/main.c +index 4fdb8219cd083..68c549d712304 100644 +--- a/drivers/base/firmware_loader/main.c ++++ b/drivers/base/firmware_loader/main.c +@@ -783,8 +783,10 @@ static void fw_abort_batch_reqs(struct firmware *fw) + return; + + fw_priv = fw->priv; ++ mutex_lock(&fw_lock); + if (!fw_state_is_aborted(fw_priv)) + fw_state_aborted(fw_priv); ++ mutex_unlock(&fw_lock); + } + + /* called from request_firmware() and request_firmware_work_func() */ +diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c +index 38cb116ed433f..0ef98e3ba3410 100644 +--- a/drivers/bus/ti-sysc.c ++++ b/drivers/bus/ti-sysc.c +@@ -100,6 +100,7 @@ static const char * const clock_names[SYSC_MAX_CLOCKS] = { + * @cookie: data used by legacy platform callbacks + * @name: name if available + * @revision: interconnect target module revision ++ * @reserved: target module is reserved and already in use + * @enabled: sysc runtime enabled status + * @needs_resume: runtime resume needed on resume from suspend + * @child_needs_resume: runtime resume needed for child on resume from suspend +@@ -130,6 +131,7 @@ struct sysc { + struct ti_sysc_cookie cookie; + const char *name; + u32 revision; ++ unsigned int reserved:1; + unsigned int enabled:1; + unsigned int needs_resume:1; + unsigned int child_needs_resume:1; +@@ -2951,6 +2953,8 @@ static int sysc_init_soc(struct sysc *ddata) + case SOC_3430 ... 
SOC_3630: + sysc_add_disabled(0x48304000); /* timer12 */ + break; ++ case SOC_AM3: ++ sysc_add_disabled(0x48310000); /* rng */ + default: + break; + } +@@ -3093,8 +3097,8 @@ static int sysc_probe(struct platform_device *pdev) + return error; + + error = sysc_check_active_timer(ddata); +- if (error) +- return error; ++ if (error == -EBUSY) ++ ddata->reserved = true; + + error = sysc_get_clocks(ddata); + if (error) +@@ -3130,11 +3134,15 @@ static int sysc_probe(struct platform_device *pdev) + sysc_show_registers(ddata); + + ddata->dev->type = &sysc_device_type; +- error = of_platform_populate(ddata->dev->of_node, sysc_match_table, +- pdata ? pdata->auxdata : NULL, +- ddata->dev); +- if (error) +- goto err; ++ ++ if (!ddata->reserved) { ++ error = of_platform_populate(ddata->dev->of_node, ++ sysc_match_table, ++ pdata ? pdata->auxdata : NULL, ++ ddata->dev); ++ if (error) ++ goto err; ++ } + + INIT_DELAYED_WORK(&ddata->idle_work, ti_sysc_idle); + +diff --git a/drivers/char/tpm/tpm_ftpm_tee.c b/drivers/char/tpm/tpm_ftpm_tee.c +index 2ccdf8ac69948..6e3235565a4d8 100644 +--- a/drivers/char/tpm/tpm_ftpm_tee.c ++++ b/drivers/char/tpm/tpm_ftpm_tee.c +@@ -254,11 +254,11 @@ static int ftpm_tee_probe(struct device *dev) + pvt_data->session = sess_arg.session; + + /* Allocate dynamic shared memory with fTPM TA */ +- pvt_data->shm = tee_shm_alloc(pvt_data->ctx, +- MAX_COMMAND_SIZE + MAX_RESPONSE_SIZE, +- TEE_SHM_MAPPED | TEE_SHM_DMA_BUF); ++ pvt_data->shm = tee_shm_alloc_kernel_buf(pvt_data->ctx, ++ MAX_COMMAND_SIZE + ++ MAX_RESPONSE_SIZE); + if (IS_ERR(pvt_data->shm)) { +- dev_err(dev, "%s: tee_shm_alloc failed\n", __func__); ++ dev_err(dev, "%s: tee_shm_alloc_kernel_buf failed\n", __func__); + rc = -ENOMEM; + goto out_shm_alloc; + } +diff --git a/drivers/clk/clk-devres.c b/drivers/clk/clk-devres.c +index be160764911bf..f9d5b73343417 100644 +--- a/drivers/clk/clk-devres.c ++++ b/drivers/clk/clk-devres.c +@@ -92,13 +92,20 @@ int __must_check devm_clk_bulk_get_optional(struct 
device *dev, int num_clks, + } + EXPORT_SYMBOL_GPL(devm_clk_bulk_get_optional); + ++static void devm_clk_bulk_release_all(struct device *dev, void *res) ++{ ++ struct clk_bulk_devres *devres = res; ++ ++ clk_bulk_put_all(devres->num_clks, devres->clks); ++} ++ + int __must_check devm_clk_bulk_get_all(struct device *dev, + struct clk_bulk_data **clks) + { + struct clk_bulk_devres *devres; + int ret; + +- devres = devres_alloc(devm_clk_bulk_release, ++ devres = devres_alloc(devm_clk_bulk_release_all, + sizeof(*devres), GFP_KERNEL); + if (!devres) + return -ENOMEM; +diff --git a/drivers/clk/clk-stm32f4.c b/drivers/clk/clk-stm32f4.c +index 18117ce5ff85f..5c75e3d906c20 100644 +--- a/drivers/clk/clk-stm32f4.c ++++ b/drivers/clk/clk-stm32f4.c +@@ -526,7 +526,7 @@ struct stm32f4_pll { + + struct stm32f4_pll_post_div_data { + int idx; +- u8 pll_num; ++ int pll_idx; + const char *name; + const char *parent; + u8 flag; +@@ -557,13 +557,13 @@ static const struct clk_div_table post_divr_table[] = { + + #define MAX_POST_DIV 3 + static const struct stm32f4_pll_post_div_data post_div_data[MAX_POST_DIV] = { +- { CLK_I2SQ_PDIV, PLL_I2S, "plli2s-q-div", "plli2s-q", ++ { CLK_I2SQ_PDIV, PLL_VCO_I2S, "plli2s-q-div", "plli2s-q", + CLK_SET_RATE_PARENT, STM32F4_RCC_DCKCFGR, 0, 5, 0, NULL}, + +- { CLK_SAIQ_PDIV, PLL_SAI, "pllsai-q-div", "pllsai-q", ++ { CLK_SAIQ_PDIV, PLL_VCO_SAI, "pllsai-q-div", "pllsai-q", + CLK_SET_RATE_PARENT, STM32F4_RCC_DCKCFGR, 8, 5, 0, NULL }, + +- { NO_IDX, PLL_SAI, "pllsai-r-div", "pllsai-r", CLK_SET_RATE_PARENT, ++ { NO_IDX, PLL_VCO_SAI, "pllsai-r-div", "pllsai-r", CLK_SET_RATE_PARENT, + STM32F4_RCC_DCKCFGR, 16, 2, 0, post_divr_table }, + }; + +@@ -1774,7 +1774,7 @@ static void __init stm32f4_rcc_init(struct device_node *np) + post_div->width, + post_div->flag_div, + post_div->div_table, +- clks[post_div->pll_num], ++ clks[post_div->pll_idx], + &stm32f4_clk_lock); + + if (post_div->idx != NO_IDX) +diff --git a/drivers/clk/tegra/clk-sdmmc-mux.c 
b/drivers/clk/tegra/clk-sdmmc-mux.c +index 316912d3b1a4f..4f2c3309eea4d 100644 +--- a/drivers/clk/tegra/clk-sdmmc-mux.c ++++ b/drivers/clk/tegra/clk-sdmmc-mux.c +@@ -194,6 +194,15 @@ static void clk_sdmmc_mux_disable(struct clk_hw *hw) + gate_ops->disable(gate_hw); + } + ++static void clk_sdmmc_mux_disable_unused(struct clk_hw *hw) ++{ ++ struct tegra_sdmmc_mux *sdmmc_mux = to_clk_sdmmc_mux(hw); ++ const struct clk_ops *gate_ops = sdmmc_mux->gate_ops; ++ struct clk_hw *gate_hw = &sdmmc_mux->gate.hw; ++ ++ gate_ops->disable_unused(gate_hw); ++} ++ + static void clk_sdmmc_mux_restore_context(struct clk_hw *hw) + { + struct clk_hw *parent = clk_hw_get_parent(hw); +@@ -218,6 +227,7 @@ static const struct clk_ops tegra_clk_sdmmc_mux_ops = { + .is_enabled = clk_sdmmc_mux_is_enabled, + .enable = clk_sdmmc_mux_enable, + .disable = clk_sdmmc_mux_disable, ++ .disable_unused = clk_sdmmc_mux_disable_unused, + .restore_context = clk_sdmmc_mux_restore_context, + }; + +diff --git a/drivers/dma/idxd/idxd.h b/drivers/dma/idxd/idxd.h +index 26482c7d4c3a3..fc708be7ad9a2 100644 +--- a/drivers/dma/idxd/idxd.h ++++ b/drivers/dma/idxd/idxd.h +@@ -294,6 +294,14 @@ struct idxd_desc { + struct idxd_wq *wq; + }; + ++/* ++ * This is software defined error for the completion status. We overload the error code ++ * that will never appear in completion status and only SWERR register. 
++ */ ++enum idxd_completion_status { ++ IDXD_COMP_DESC_ABORT = 0xff, ++}; ++ + #define confdev_to_idxd(dev) container_of(dev, struct idxd_device, conf_dev) + #define confdev_to_wq(dev) container_of(dev, struct idxd_wq, conf_dev) + +@@ -482,4 +490,10 @@ static inline void perfmon_init(void) {} + static inline void perfmon_exit(void) {} + #endif + ++static inline void complete_desc(struct idxd_desc *desc, enum idxd_complete_type reason) ++{ ++ idxd_dma_complete_txd(desc, reason); ++ idxd_free_desc(desc->wq, desc); ++} ++ + #endif +diff --git a/drivers/dma/idxd/init.c b/drivers/dma/idxd/init.c +index 442d55c11a5f4..32cca6a0e66ac 100644 +--- a/drivers/dma/idxd/init.c ++++ b/drivers/dma/idxd/init.c +@@ -102,6 +102,8 @@ static int idxd_setup_interrupts(struct idxd_device *idxd) + spin_lock_init(&idxd->irq_entries[i].list_lock); + } + ++ idxd_msix_perm_setup(idxd); ++ + irq_entry = &idxd->irq_entries[0]; + rc = request_threaded_irq(irq_entry->vector, NULL, idxd_misc_thread, + 0, "idxd-misc", irq_entry); +@@ -148,7 +150,6 @@ static int idxd_setup_interrupts(struct idxd_device *idxd) + } + + idxd_unmask_error_interrupts(idxd); +- idxd_msix_perm_setup(idxd); + return 0; + + err_wq_irqs: +@@ -162,6 +163,7 @@ static int idxd_setup_interrupts(struct idxd_device *idxd) + err_misc_irq: + /* Disable error interrupt generation */ + idxd_mask_error_interrupts(idxd); ++ idxd_msix_perm_clear(idxd); + err_irq_entries: + pci_free_irq_vectors(pdev); + dev_err(dev, "No usable interrupts\n"); +@@ -757,32 +759,40 @@ static void idxd_shutdown(struct pci_dev *pdev) + for (i = 0; i < msixcnt; i++) { + irq_entry = &idxd->irq_entries[i]; + synchronize_irq(irq_entry->vector); +- free_irq(irq_entry->vector, irq_entry); + if (i == 0) + continue; + idxd_flush_pending_llist(irq_entry); + idxd_flush_work_list(irq_entry); + } +- +- idxd_msix_perm_clear(idxd); +- idxd_release_int_handles(idxd); +- pci_free_irq_vectors(pdev); +- pci_iounmap(pdev, idxd->reg_base); +- pci_disable_device(pdev); +- 
destroy_workqueue(idxd->wq); ++ flush_workqueue(idxd->wq); + } + + static void idxd_remove(struct pci_dev *pdev) + { + struct idxd_device *idxd = pci_get_drvdata(pdev); ++ struct idxd_irq_entry *irq_entry; ++ int msixcnt = pci_msix_vec_count(pdev); ++ int i; + + dev_dbg(&pdev->dev, "%s called\n", __func__); + idxd_shutdown(pdev); + if (device_pasid_enabled(idxd)) + idxd_disable_system_pasid(idxd); + idxd_unregister_devices(idxd); +- perfmon_pmu_remove(idxd); ++ ++ for (i = 0; i < msixcnt; i++) { ++ irq_entry = &idxd->irq_entries[i]; ++ free_irq(irq_entry->vector, irq_entry); ++ } ++ idxd_msix_perm_clear(idxd); ++ idxd_release_int_handles(idxd); ++ pci_free_irq_vectors(pdev); ++ pci_iounmap(pdev, idxd->reg_base); + iommu_dev_disable_feature(&pdev->dev, IOMMU_DEV_FEAT_SVA); ++ pci_disable_device(pdev); ++ destroy_workqueue(idxd->wq); ++ perfmon_pmu_remove(idxd); ++ device_unregister(&idxd->conf_dev); + } + + static struct pci_driver idxd_pci_driver = { +diff --git a/drivers/dma/idxd/irq.c b/drivers/dma/idxd/irq.c +index ae68e1e5487a0..4e3a7198c0caf 100644 +--- a/drivers/dma/idxd/irq.c ++++ b/drivers/dma/idxd/irq.c +@@ -245,12 +245,6 @@ static inline bool match_fault(struct idxd_desc *desc, u64 fault_addr) + return false; + } + +-static inline void complete_desc(struct idxd_desc *desc, enum idxd_complete_type reason) +-{ +- idxd_dma_complete_txd(desc, reason); +- idxd_free_desc(desc->wq, desc); +-} +- + static int irq_process_pending_llist(struct idxd_irq_entry *irq_entry, + enum irq_work_type wtype, + int *processed, u64 data) +@@ -272,8 +266,16 @@ static int irq_process_pending_llist(struct idxd_irq_entry *irq_entry, + reason = IDXD_COMPLETE_DEV_FAIL; + + llist_for_each_entry_safe(desc, t, head, llnode) { +- if (desc->completion->status) { +- if ((desc->completion->status & DSA_COMP_STATUS_MASK) != DSA_COMP_SUCCESS) ++ u8 status = desc->completion->status & DSA_COMP_STATUS_MASK; ++ ++ if (status) { ++ if (unlikely(status == IDXD_COMP_DESC_ABORT)) { ++ 
complete_desc(desc, IDXD_COMPLETE_ABORT); ++ (*processed)++; ++ continue; ++ } ++ ++ if (unlikely(status != DSA_COMP_SUCCESS)) + match_fault(desc, data); + complete_desc(desc, reason); + (*processed)++; +@@ -329,7 +331,14 @@ static int irq_process_work_list(struct idxd_irq_entry *irq_entry, + spin_unlock_irqrestore(&irq_entry->list_lock, flags); + + list_for_each_entry(desc, &flist, list) { +- if ((desc->completion->status & DSA_COMP_STATUS_MASK) != DSA_COMP_SUCCESS) ++ u8 status = desc->completion->status & DSA_COMP_STATUS_MASK; ++ ++ if (unlikely(status == IDXD_COMP_DESC_ABORT)) { ++ complete_desc(desc, IDXD_COMPLETE_ABORT); ++ continue; ++ } ++ ++ if (unlikely(status != DSA_COMP_SUCCESS)) + match_fault(desc, data); + complete_desc(desc, reason); + } +diff --git a/drivers/dma/idxd/submit.c b/drivers/dma/idxd/submit.c +index 19afb62abaffd..36c9c1a89b7e7 100644 +--- a/drivers/dma/idxd/submit.c ++++ b/drivers/dma/idxd/submit.c +@@ -25,11 +25,10 @@ static struct idxd_desc *__get_desc(struct idxd_wq *wq, int idx, int cpu) + * Descriptor completion vectors are 1...N for MSIX. We will round + * robin through the N vectors. + */ +- wq->vec_ptr = (wq->vec_ptr % idxd->num_wq_irqs) + 1; ++ wq->vec_ptr = desc->vector = (wq->vec_ptr % idxd->num_wq_irqs) + 1; + if (!idxd->int_handles) { + desc->hw->int_handle = wq->vec_ptr; + } else { +- desc->vector = wq->vec_ptr; + /* + * int_handles are only for descriptor completion. However for device + * MSIX enumeration, vec 0 is used for misc interrupts. 
Therefore even +@@ -88,9 +87,64 @@ void idxd_free_desc(struct idxd_wq *wq, struct idxd_desc *desc) + sbitmap_queue_clear(&wq->sbq, desc->id, cpu); + } + ++static struct idxd_desc *list_abort_desc(struct idxd_wq *wq, struct idxd_irq_entry *ie, ++ struct idxd_desc *desc) ++{ ++ struct idxd_desc *d, *n; ++ ++ lockdep_assert_held(&ie->list_lock); ++ list_for_each_entry_safe(d, n, &ie->work_list, list) { ++ if (d == desc) { ++ list_del(&d->list); ++ return d; ++ } ++ } ++ ++ /* ++ * At this point, the desc needs to be aborted is held by the completion ++ * handler where it has taken it off the pending list but has not added to the ++ * work list. It will be cleaned up by the interrupt handler when it sees the ++ * IDXD_COMP_DESC_ABORT for completion status. ++ */ ++ return NULL; ++} ++ ++static void llist_abort_desc(struct idxd_wq *wq, struct idxd_irq_entry *ie, ++ struct idxd_desc *desc) ++{ ++ struct idxd_desc *d, *t, *found = NULL; ++ struct llist_node *head; ++ unsigned long flags; ++ ++ desc->completion->status = IDXD_COMP_DESC_ABORT; ++ /* ++ * Grab the list lock so it will block the irq thread handler. This allows the ++ * abort code to locate the descriptor need to be aborted. ++ */ ++ spin_lock_irqsave(&ie->list_lock, flags); ++ head = llist_del_all(&ie->pending_llist); ++ if (head) { ++ llist_for_each_entry_safe(d, t, head, llnode) { ++ if (d == desc) { ++ found = desc; ++ continue; ++ } ++ list_add_tail(&desc->list, &ie->work_list); ++ } ++ } ++ ++ if (!found) ++ found = list_abort_desc(wq, ie, desc); ++ spin_unlock_irqrestore(&ie->list_lock, flags); ++ ++ if (found) ++ complete_desc(found, IDXD_COMPLETE_ABORT); ++} ++ + int idxd_submit_desc(struct idxd_wq *wq, struct idxd_desc *desc) + { + struct idxd_device *idxd = wq->idxd; ++ struct idxd_irq_entry *ie = NULL; + void __iomem *portal; + int rc; + +@@ -108,6 +162,16 @@ int idxd_submit_desc(struct idxd_wq *wq, struct idxd_desc *desc) + * even on UP because the recipient is a device. 
+ */ + wmb(); ++ ++ /* ++ * Pending the descriptor to the lockless list for the irq_entry ++ * that we designated the descriptor to. ++ */ ++ if (desc->hw->flags & IDXD_OP_FLAG_RCI) { ++ ie = &idxd->irq_entries[desc->vector]; ++ llist_add(&desc->llnode, &ie->pending_llist); ++ } ++ + if (wq_dedicated(wq)) { + iosubmit_cmds512(portal, desc->hw, 1); + } else { +@@ -118,29 +182,13 @@ int idxd_submit_desc(struct idxd_wq *wq, struct idxd_desc *desc) + * device is not accepting descriptor at all. + */ + rc = enqcmds(portal, desc->hw); +- if (rc < 0) ++ if (rc < 0) { ++ if (ie) ++ llist_abort_desc(wq, ie, desc); + return rc; ++ } + } + + percpu_ref_put(&wq->wq_active); +- +- /* +- * Pending the descriptor to the lockless list for the irq_entry +- * that we designated the descriptor to. +- */ +- if (desc->hw->flags & IDXD_OP_FLAG_RCI) { +- int vec; +- +- /* +- * If the driver is on host kernel, it would be the value +- * assigned to interrupt handle, which is index for MSIX +- * vector. If it's guest then can't use the int_handle since +- * that is the index to IMS for the entire device. The guest +- * device local index will be used. +- */ +- vec = !idxd->int_handles ? 
desc->hw->int_handle : desc->vector; +- llist_add(&desc->llnode, &idxd->irq_entries[vec].pending_llist); +- } +- + return 0; + } +diff --git a/drivers/dma/idxd/sysfs.c b/drivers/dma/idxd/sysfs.c +index 0460d58e3941f..bb4df63906a72 100644 +--- a/drivers/dma/idxd/sysfs.c ++++ b/drivers/dma/idxd/sysfs.c +@@ -1744,8 +1744,6 @@ void idxd_unregister_devices(struct idxd_device *idxd) + + device_unregister(&group->conf_dev); + } +- +- device_unregister(&idxd->conf_dev); + } + + int idxd_register_bus_type(void) +diff --git a/drivers/dma/imx-dma.c b/drivers/dma/imx-dma.c +index 7f116bbcfad2a..2ddc31e64db03 100644 +--- a/drivers/dma/imx-dma.c ++++ b/drivers/dma/imx-dma.c +@@ -812,6 +812,8 @@ static struct dma_async_tx_descriptor *imxdma_prep_slave_sg( + dma_length += sg_dma_len(sg); + } + ++ imxdma_config_write(chan, &imxdmac->config, direction); ++ + switch (imxdmac->word_size) { + case DMA_SLAVE_BUSWIDTH_4_BYTES: + if (sg_dma_len(sgl) & 3 || sgl->dma_address & 3) +diff --git a/drivers/dma/stm32-dma.c b/drivers/dma/stm32-dma.c +index f54ecb123a521..7dd1d3d0bf063 100644 +--- a/drivers/dma/stm32-dma.c ++++ b/drivers/dma/stm32-dma.c +@@ -1200,7 +1200,7 @@ static int stm32_dma_alloc_chan_resources(struct dma_chan *c) + + chan->config_init = false; + +- ret = pm_runtime_get_sync(dmadev->ddev.dev); ++ ret = pm_runtime_resume_and_get(dmadev->ddev.dev); + if (ret < 0) + return ret; + +@@ -1470,7 +1470,7 @@ static int stm32_dma_suspend(struct device *dev) + struct stm32_dma_device *dmadev = dev_get_drvdata(dev); + int id, ret, scr; + +- ret = pm_runtime_get_sync(dev); ++ ret = pm_runtime_resume_and_get(dev); + if (ret < 0) + return ret; + +diff --git a/drivers/dma/stm32-dmamux.c b/drivers/dma/stm32-dmamux.c +index ef0d0555103d9..a42164389ebc2 100644 +--- a/drivers/dma/stm32-dmamux.c ++++ b/drivers/dma/stm32-dmamux.c +@@ -137,7 +137,7 @@ static void *stm32_dmamux_route_allocate(struct of_phandle_args *dma_spec, + + /* Set dma request */ + spin_lock_irqsave(&dmamux->lock, flags); +- 
ret = pm_runtime_get_sync(&pdev->dev); ++ ret = pm_runtime_resume_and_get(&pdev->dev); + if (ret < 0) { + spin_unlock_irqrestore(&dmamux->lock, flags); + goto error; +@@ -336,7 +336,7 @@ static int stm32_dmamux_suspend(struct device *dev) + struct stm32_dmamux_data *stm32_dmamux = platform_get_drvdata(pdev); + int i, ret; + +- ret = pm_runtime_get_sync(dev); ++ ret = pm_runtime_resume_and_get(dev); + if (ret < 0) + return ret; + +@@ -361,7 +361,7 @@ static int stm32_dmamux_resume(struct device *dev) + if (ret < 0) + return ret; + +- ret = pm_runtime_get_sync(dev); ++ ret = pm_runtime_resume_and_get(dev); + if (ret < 0) + return ret; + +diff --git a/drivers/dma/uniphier-xdmac.c b/drivers/dma/uniphier-xdmac.c +index 16b19654873df..d6b8a202474f4 100644 +--- a/drivers/dma/uniphier-xdmac.c ++++ b/drivers/dma/uniphier-xdmac.c +@@ -209,8 +209,8 @@ static int uniphier_xdmac_chan_stop(struct uniphier_xdmac_chan *xc) + writel(0, xc->reg_ch_base + XDMAC_TSS); + + /* wait until transfer is stopped */ +- return readl_poll_timeout(xc->reg_ch_base + XDMAC_STAT, val, +- !(val & XDMAC_STAT_TENF), 100, 1000); ++ return readl_poll_timeout_atomic(xc->reg_ch_base + XDMAC_STAT, val, ++ !(val & XDMAC_STAT_TENF), 100, 1000); + } + + /* xc->vc.lock must be held by caller */ +diff --git a/drivers/fpga/dfl-fme-perf.c b/drivers/fpga/dfl-fme-perf.c +index 4299145ef347e..587c82be12f7a 100644 +--- a/drivers/fpga/dfl-fme-perf.c ++++ b/drivers/fpga/dfl-fme-perf.c +@@ -953,6 +953,8 @@ static int fme_perf_offline_cpu(unsigned int cpu, struct hlist_node *node) + return 0; + + priv->cpu = target; ++ perf_pmu_migrate_context(&priv->pmu, cpu, target); ++ + return 0; + } + +diff --git a/drivers/gpio/gpio-mpc8xxx.c b/drivers/gpio/gpio-mpc8xxx.c +index 4b9157a69fca0..50b321a1ab1b6 100644 +--- a/drivers/gpio/gpio-mpc8xxx.c ++++ b/drivers/gpio/gpio-mpc8xxx.c +@@ -405,7 +405,7 @@ static int mpc8xxx_probe(struct platform_device *pdev) + + ret = devm_request_irq(&pdev->dev, mpc8xxx_gc->irqn, + 
mpc8xxx_gpio_irq_cascade, +- IRQF_SHARED, "gpio-cascade", ++ IRQF_NO_THREAD | IRQF_SHARED, "gpio-cascade", + mpc8xxx_gc); + if (ret) { + dev_err(&pdev->dev, +diff --git a/drivers/gpio/gpio-tqmx86.c b/drivers/gpio/gpio-tqmx86.c +index 5022e0ad0faee..0f5d17f343f1e 100644 +--- a/drivers/gpio/gpio-tqmx86.c ++++ b/drivers/gpio/gpio-tqmx86.c +@@ -238,8 +238,8 @@ static int tqmx86_gpio_probe(struct platform_device *pdev) + struct resource *res; + int ret, irq; + +- irq = platform_get_irq(pdev, 0); +- if (irq < 0) ++ irq = platform_get_irq_optional(pdev, 0); ++ if (irq < 0 && irq != -ENXIO) + return irq; + + res = platform_get_resource(pdev, IORESOURCE_IO, 0); +@@ -278,7 +278,7 @@ static int tqmx86_gpio_probe(struct platform_device *pdev) + + pm_runtime_enable(&pdev->dev); + +- if (irq) { ++ if (irq > 0) { + struct irq_chip *irq_chip = &gpio->irq_chip; + u8 irq_status; + +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c +index 355a6923849d3..b53eab384adb7 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c +@@ -904,7 +904,7 @@ void amdgpu_acpi_fini(struct amdgpu_device *adev) + */ + bool amdgpu_acpi_is_s0ix_supported(struct amdgpu_device *adev) + { +-#if defined(CONFIG_AMD_PMC) || defined(CONFIG_AMD_PMC_MODULE) ++#if IS_ENABLED(CONFIG_AMD_PMC) && IS_ENABLED(CONFIG_PM_SLEEP) + if (acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0) { + if (adev->flags & AMD_IS_APU) + return pm_suspend_target_state == PM_SUSPEND_TO_IDLE; +diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +index eeaae1cf2bc2b..0894cd505361a 100644 +--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c ++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +@@ -1493,6 +1493,7 @@ static int dm_dmub_sw_init(struct amdgpu_device *adev) + } + + hdr = (const struct dmcub_firmware_header_v1_0 *)adev->dm.dmub_fw->data; ++ adev->dm.dmcub_fw_version = 
le32_to_cpu(hdr->header.ucode_version); + + if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) { + adev->firmware.ucode[AMDGPU_UCODE_ID_DMCUB].ucode_id = +@@ -1506,7 +1507,6 @@ static int dm_dmub_sw_init(struct amdgpu_device *adev) + adev->dm.dmcub_fw_version); + } + +- adev->dm.dmcub_fw_version = le32_to_cpu(hdr->header.ucode_version); + + adev->dm.dmub_srv = kzalloc(sizeof(*adev->dm.dmub_srv), GFP_KERNEL); + dmub_srv = adev->dm.dmub_srv; +@@ -2367,9 +2367,9 @@ static void update_connector_ext_caps(struct amdgpu_dm_connector *aconnector) + max_cll = conn_base->hdr_sink_metadata.hdmi_type1.max_cll; + min_cll = conn_base->hdr_sink_metadata.hdmi_type1.min_cll; + +- if (caps->ext_caps->bits.oled == 1 || ++ if (caps->ext_caps->bits.oled == 1 /*|| + caps->ext_caps->bits.sdr_aux_backlight_control == 1 || +- caps->ext_caps->bits.hdr_aux_backlight_control == 1) ++ caps->ext_caps->bits.hdr_aux_backlight_control == 1*/) + caps->aux_support = true; + + if (amdgpu_backlight == 0) +diff --git a/drivers/gpu/drm/i915/i915_globals.c b/drivers/gpu/drm/i915/i915_globals.c +index 3aa2136842935..57d2943884ab6 100644 +--- a/drivers/gpu/drm/i915/i915_globals.c ++++ b/drivers/gpu/drm/i915/i915_globals.c +@@ -139,7 +139,7 @@ void i915_globals_unpark(void) + atomic_inc(&active); + } + +-static void __exit __i915_globals_flush(void) ++static void __i915_globals_flush(void) + { + atomic_inc(&active); /* skip shrinking */ + +@@ -149,7 +149,7 @@ static void __exit __i915_globals_flush(void) + atomic_dec(&active); + } + +-void __exit i915_globals_exit(void) ++void i915_globals_exit(void) + { + GEM_BUG_ON(atomic_read(&active)); + +diff --git a/drivers/gpu/drm/i915/i915_pci.c b/drivers/gpu/drm/i915/i915_pci.c +index 480553746794f..a6261a8103f42 100644 +--- a/drivers/gpu/drm/i915/i915_pci.c ++++ b/drivers/gpu/drm/i915/i915_pci.c +@@ -1168,6 +1168,7 @@ static int __init i915_init(void) + err = pci_register_driver(&i915_pci_driver); + if (err) { + i915_pmu_exit(); ++ i915_globals_exit(); + return 
err; + } + +diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h +index cbf7a60afe542..97fc7a51c1006 100644 +--- a/drivers/gpu/drm/i915/i915_reg.h ++++ b/drivers/gpu/drm/i915/i915_reg.h +@@ -416,7 +416,7 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg) + #define GEN11_VECS_SFC_USAGE(engine) _MMIO((engine)->mmio_base + 0x2014) + #define GEN11_VECS_SFC_USAGE_BIT (1 << 0) + +-#define GEN12_SFC_DONE(n) _MMIO(0x1cc00 + (n) * 0x100) ++#define GEN12_SFC_DONE(n) _MMIO(0x1cc000 + (n) * 0x1000) + #define GEN12_SFC_DONE_MAX 4 + + #define RING_PP_DIR_BASE(base) _MMIO((base) + 0x228) +diff --git a/drivers/gpu/drm/kmb/kmb_drv.c b/drivers/gpu/drm/kmb/kmb_drv.c +index 96ea1a2c11dd6..c0b1c6f992496 100644 +--- a/drivers/gpu/drm/kmb/kmb_drv.c ++++ b/drivers/gpu/drm/kmb/kmb_drv.c +@@ -203,6 +203,7 @@ static irqreturn_t handle_lcd_irq(struct drm_device *dev) + unsigned long status, val, val1; + int plane_id, dma0_state, dma1_state; + struct kmb_drm_private *kmb = to_kmb(dev); ++ u32 ctrl = 0; + + status = kmb_read_lcd(kmb, LCD_INT_STATUS); + +@@ -227,6 +228,19 @@ static irqreturn_t handle_lcd_irq(struct drm_device *dev) + kmb_clr_bitmask_lcd(kmb, LCD_CONTROL, + kmb->plane_status[plane_id].ctrl); + ++ ctrl = kmb_read_lcd(kmb, LCD_CONTROL); ++ if (!(ctrl & (LCD_CTRL_VL1_ENABLE | ++ LCD_CTRL_VL2_ENABLE | ++ LCD_CTRL_GL1_ENABLE | ++ LCD_CTRL_GL2_ENABLE))) { ++ /* If no LCD layers are using DMA, ++ * then disable DMA pipelined AXI read ++ * transactions. 
++ */ ++ kmb_clr_bitmask_lcd(kmb, LCD_CONTROL, ++ LCD_CTRL_PIPELINE_DMA); ++ } ++ + kmb->plane_status[plane_id].disable = false; + } + } +diff --git a/drivers/gpu/drm/kmb/kmb_plane.c b/drivers/gpu/drm/kmb/kmb_plane.c +index d5b6195856d12..ecee6782612d8 100644 +--- a/drivers/gpu/drm/kmb/kmb_plane.c ++++ b/drivers/gpu/drm/kmb/kmb_plane.c +@@ -427,8 +427,14 @@ static void kmb_plane_atomic_update(struct drm_plane *plane, + + kmb_set_bitmask_lcd(kmb, LCD_CONTROL, ctrl); + +- /* FIXME no doc on how to set output format,these values are +- * taken from the Myriadx tests ++ /* Enable pipeline AXI read transactions for the DMA ++ * after setting graphics layers. This must be done ++ * in a separate write cycle. ++ */ ++ kmb_set_bitmask_lcd(kmb, LCD_CONTROL, LCD_CTRL_PIPELINE_DMA); ++ ++ /* FIXME no doc on how to set output format, these values are taken ++ * from the Myriadx tests + */ + out_format |= LCD_OUTF_FORMAT_RGB888; + +@@ -526,6 +532,11 @@ struct kmb_plane *kmb_plane_init(struct drm_device *drm) + plane->id = i; + } + ++ /* Disable pipeline AXI read transactions for the DMA ++ * prior to setting graphics layers ++ */ ++ kmb_clr_bitmask_lcd(kmb, LCD_CONTROL, LCD_CTRL_PIPELINE_DMA); ++ + return primary; + cleanup: + drmm_kfree(drm, plane); +diff --git a/drivers/hid/hid-ft260.c b/drivers/hid/hid-ft260.c +index f43a8406cb9a9..e73776ae6976f 100644 +--- a/drivers/hid/hid-ft260.c ++++ b/drivers/hid/hid-ft260.c +@@ -742,7 +742,7 @@ static int ft260_is_interface_enabled(struct hid_device *hdev) + int ret; + + ret = ft260_get_system_config(hdev, &cfg); +- if (ret) ++ if (ret < 0) + return ret; + + ft260_dbg("interface: 0x%02x\n", interface); +@@ -754,23 +754,16 @@ static int ft260_is_interface_enabled(struct hid_device *hdev) + switch (cfg.chip_mode) { + case FT260_MODE_ALL: + case FT260_MODE_BOTH: +- if (interface == 1) { ++ if (interface == 1) + hid_info(hdev, "uart interface is not supported\n"); +- return 0; +- } +- ret = 1; ++ else ++ ret = 1; + break; + case 
FT260_MODE_UART: +- if (interface == 0) { +- hid_info(hdev, "uart is unsupported on interface 0\n"); +- ret = 0; +- } ++ hid_info(hdev, "uart interface is not supported\n"); + break; + case FT260_MODE_I2C: +- if (interface == 1) { +- hid_info(hdev, "i2c is unsupported on interface 1\n"); +- ret = 0; +- } ++ ret = 1; + break; + } + return ret; +@@ -1004,11 +997,9 @@ err_hid_stop: + + static void ft260_remove(struct hid_device *hdev) + { +- int ret; + struct ft260_device *dev = hid_get_drvdata(hdev); + +- ret = ft260_is_interface_enabled(hdev); +- if (ret <= 0) ++ if (!dev) + return; + + sysfs_remove_group(&hdev->dev.kobj, &ft260_attr_group); +diff --git a/drivers/infiniband/hw/hns/hns_roce_cmd.c b/drivers/infiniband/hw/hns/hns_roce_cmd.c +index 8f68cc3ff193f..84f3f2b5f0976 100644 +--- a/drivers/infiniband/hw/hns/hns_roce_cmd.c ++++ b/drivers/infiniband/hw/hns/hns_roce_cmd.c +@@ -213,8 +213,10 @@ int hns_roce_cmd_use_events(struct hns_roce_dev *hr_dev) + + hr_cmd->context = + kcalloc(hr_cmd->max_cmds, sizeof(*hr_cmd->context), GFP_KERNEL); +- if (!hr_cmd->context) ++ if (!hr_cmd->context) { ++ hr_dev->cmd_mod = 0; + return -ENOMEM; ++ } + + for (i = 0; i < hr_cmd->max_cmds; ++i) { + hr_cmd->context[i].token = i; +@@ -228,7 +230,6 @@ int hns_roce_cmd_use_events(struct hns_roce_dev *hr_dev) + spin_lock_init(&hr_cmd->context_lock); + + hr_cmd->use_events = 1; +- down(&hr_cmd->poll_sem); + + return 0; + } +@@ -239,8 +240,6 @@ void hns_roce_cmd_use_polling(struct hns_roce_dev *hr_dev) + + kfree(hr_cmd->context); + hr_cmd->use_events = 0; +- +- up(&hr_cmd->poll_sem); + } + + struct hns_roce_cmd_mailbox * +diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c +index 6c6e82b11d8bc..33b84f219d0d0 100644 +--- a/drivers/infiniband/hw/hns/hns_roce_main.c ++++ b/drivers/infiniband/hw/hns/hns_roce_main.c +@@ -897,11 +897,9 @@ int hns_roce_init(struct hns_roce_dev *hr_dev) + + if (hr_dev->cmd_mod) { + ret = 
hns_roce_cmd_use_events(hr_dev); +- if (ret) { ++ if (ret) + dev_warn(dev, + "Cmd event mode failed, set back to poll!\n"); +- hns_roce_cmd_use_polling(hr_dev); +- } + } + + ret = hns_roce_init_hem(hr_dev); +diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c +index 425423dfac724..fd113ddf6e862 100644 +--- a/drivers/infiniband/hw/mlx5/mr.c ++++ b/drivers/infiniband/hw/mlx5/mr.c +@@ -530,8 +530,8 @@ static void __cache_work_func(struct mlx5_cache_ent *ent) + */ + spin_unlock_irq(&ent->lock); + need_delay = need_resched() || someone_adding(cache) || +- time_after(jiffies, +- READ_ONCE(cache->last_add) + 300 * HZ); ++ !time_after(jiffies, ++ READ_ONCE(cache->last_add) + 300 * HZ); + spin_lock_irq(&ent->lock); + if (ent->disabled) + goto out; +diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c +index 8a1e70e008764..7887941730dbb 100644 +--- a/drivers/interconnect/core.c ++++ b/drivers/interconnect/core.c +@@ -403,7 +403,7 @@ struct icc_path *devm_of_icc_get(struct device *dev, const char *name) + { + struct icc_path **ptr, *path; + +- ptr = devres_alloc(devm_icc_release, sizeof(**ptr), GFP_KERNEL); ++ ptr = devres_alloc(devm_icc_release, sizeof(*ptr), GFP_KERNEL); + if (!ptr) + return ERR_PTR(-ENOMEM); + +@@ -973,9 +973,14 @@ void icc_node_add(struct icc_node *node, struct icc_provider *provider) + } + node->avg_bw = node->init_avg; + node->peak_bw = node->init_peak; ++ ++ if (provider->pre_aggregate) ++ provider->pre_aggregate(node); ++ + if (provider->aggregate) + provider->aggregate(node, 0, node->init_avg, node->init_peak, + &node->avg_bw, &node->peak_bw); ++ + provider->set(node, node); + node->avg_bw = 0; + node->peak_bw = 0; +@@ -1106,6 +1111,8 @@ void icc_sync_state(struct device *dev) + dev_dbg(p->dev, "interconnect provider is in synced state\n"); + list_for_each_entry(n, &p->nodes, node_list) { + if (n->init_avg || n->init_peak) { ++ n->init_avg = 0; ++ n->init_peak = 0; + aggregate_requests(n); + p->set(n, n); 
+ } +diff --git a/drivers/interconnect/qcom/icc-rpmh.c b/drivers/interconnect/qcom/icc-rpmh.c +index bf01d09dba6c4..f6fae64861ce8 100644 +--- a/drivers/interconnect/qcom/icc-rpmh.c ++++ b/drivers/interconnect/qcom/icc-rpmh.c +@@ -57,6 +57,11 @@ int qcom_icc_aggregate(struct icc_node *node, u32 tag, u32 avg_bw, + qn->sum_avg[i] += avg_bw; + qn->max_peak[i] = max_t(u32, qn->max_peak[i], peak_bw); + } ++ ++ if (node->init_avg || node->init_peak) { ++ qn->sum_avg[i] = max_t(u64, qn->sum_avg[i], node->init_avg); ++ qn->max_peak[i] = max_t(u64, qn->max_peak[i], node->init_peak); ++ } + } + + *agg_avg += avg_bw; +@@ -79,7 +84,6 @@ EXPORT_SYMBOL_GPL(qcom_icc_aggregate); + int qcom_icc_set(struct icc_node *src, struct icc_node *dst) + { + struct qcom_icc_provider *qp; +- struct qcom_icc_node *qn; + struct icc_node *node; + + if (!src) +@@ -88,12 +92,6 @@ int qcom_icc_set(struct icc_node *src, struct icc_node *dst) + node = src; + + qp = to_qcom_provider(node->provider); +- qn = node->data; +- +- qn->sum_avg[QCOM_ICC_BUCKET_AMC] = max_t(u64, qn->sum_avg[QCOM_ICC_BUCKET_AMC], +- node->avg_bw); +- qn->max_peak[QCOM_ICC_BUCKET_AMC] = max_t(u64, qn->max_peak[QCOM_ICC_BUCKET_AMC], +- node->peak_bw); + + qcom_icc_bcm_voter_commit(qp->voter); + +diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c +index ced076ba560e1..753822ca96131 100644 +--- a/drivers/md/raid1.c ++++ b/drivers/md/raid1.c +@@ -472,8 +472,6 @@ static void raid1_end_write_request(struct bio *bio) + /* + * When the device is faulty, it is not necessary to + * handle write error. +- * For failfast, this is the only remaining device, +- * We need to retry the write without FailFast. 
+ */ + if (!test_bit(Faulty, &rdev->flags)) + set_bit(R1BIO_WriteError, &r1_bio->state); +diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c +index 13f5e6b2a73d6..40e845fb97170 100644 +--- a/drivers/md/raid10.c ++++ b/drivers/md/raid10.c +@@ -469,12 +469,12 @@ static void raid10_end_write_request(struct bio *bio) + /* + * When the device is faulty, it is not necessary to + * handle write error. +- * For failfast, this is the only remaining device, +- * We need to retry the write without FailFast. + */ + if (!test_bit(Faulty, &rdev->flags)) + set_bit(R10BIO_WriteError, &r10_bio->state); + else { ++ /* Fail the request */ ++ set_bit(R10BIO_Degraded, &r10_bio->state); + r10_bio->devs[slot].bio = NULL; + to_put = bio; + dec_rdev = 1; +diff --git a/drivers/media/common/videobuf2/videobuf2-core.c b/drivers/media/common/videobuf2/videobuf2-core.c +index 02281d13505f4..508ac295eb06e 100644 +--- a/drivers/media/common/videobuf2/videobuf2-core.c ++++ b/drivers/media/common/videobuf2/videobuf2-core.c +@@ -1573,6 +1573,7 @@ int vb2_core_qbuf(struct vb2_queue *q, unsigned int index, void *pb, + struct media_request *req) + { + struct vb2_buffer *vb; ++ enum vb2_buffer_state orig_state; + int ret; + + if (q->error) { +@@ -1673,6 +1674,7 @@ int vb2_core_qbuf(struct vb2_queue *q, unsigned int index, void *pb, + * Add to the queued buffers list, a buffer will stay on it until + * dequeued in dqbuf. + */ ++ orig_state = vb->state; + list_add_tail(&vb->queued_entry, &q->queued_list); + q->queued_count++; + q->waiting_for_buffers = false; +@@ -1703,8 +1705,17 @@ int vb2_core_qbuf(struct vb2_queue *q, unsigned int index, void *pb, + if (q->streaming && !q->start_streaming_called && + q->queued_count >= q->min_buffers_needed) { + ret = vb2_start_streaming(q); +- if (ret) ++ if (ret) { ++ /* ++ * Since vb2_core_qbuf will return with an error, ++ * we should return it to state DEQUEUED since ++ * the error indicates that the buffer wasn't queued. 
++ */ ++ list_del(&vb->queued_entry); ++ q->queued_count--; ++ vb->state = orig_state; + return ret; ++ } + } + + dprintk(q, 2, "qbuf of buffer %d succeeded\n", vb->index); +diff --git a/drivers/media/usb/dvb-usb-v2/rtl28xxu.c b/drivers/media/usb/dvb-usb-v2/rtl28xxu.c +index 97ed17a141bbf..a6124472cb06f 100644 +--- a/drivers/media/usb/dvb-usb-v2/rtl28xxu.c ++++ b/drivers/media/usb/dvb-usb-v2/rtl28xxu.c +@@ -37,7 +37,16 @@ static int rtl28xxu_ctrl_msg(struct dvb_usb_device *d, struct rtl28xxu_req *req) + } else { + /* read */ + requesttype = (USB_TYPE_VENDOR | USB_DIR_IN); +- pipe = usb_rcvctrlpipe(d->udev, 0); ++ ++ /* ++ * Zero-length transfers must use usb_sndctrlpipe() and ++ * rtl28xxu_identify_state() uses a zero-length i2c read ++ * command to determine the chip type. ++ */ ++ if (req->size) ++ pipe = usb_rcvctrlpipe(d->udev, 0); ++ else ++ pipe = usb_sndctrlpipe(d->udev, 0); + } + + ret = usb_control_msg(d->udev, pipe, 0, requesttype, req->value, +diff --git a/drivers/net/dsa/qca/ar9331.c b/drivers/net/dsa/qca/ar9331.c +index ca2ad77b71f1c..6686192e1883e 100644 +--- a/drivers/net/dsa/qca/ar9331.c ++++ b/drivers/net/dsa/qca/ar9331.c +@@ -837,16 +837,24 @@ static int ar9331_mdio_write(void *ctx, u32 reg, u32 val) + return 0; + } + +- ret = __ar9331_mdio_write(sbus, AR9331_SW_MDIO_PHY_MODE_REG, reg, val); ++ /* In case of this switch we work with 32bit registers on top of 16bit ++ * bus. Some registers (for example access to forwarding database) have ++ * trigger bit on the first 16bit half of request, the result and ++ * configuration of request in the second half. ++ * To make it work properly, we should do the second part of transfer ++ * before the first one is done. 
++ */ ++ ret = __ar9331_mdio_write(sbus, AR9331_SW_MDIO_PHY_MODE_REG, reg + 2, ++ val >> 16); + if (ret < 0) + goto error; + +- ret = __ar9331_mdio_write(sbus, AR9331_SW_MDIO_PHY_MODE_REG, reg + 2, +- val >> 16); ++ ret = __ar9331_mdio_write(sbus, AR9331_SW_MDIO_PHY_MODE_REG, reg, val); + if (ret < 0) + goto error; + + return 0; ++ + error: + dev_err_ratelimited(&sbus->dev, "Bus error. Failed to write register.\n"); + return ret; +diff --git a/drivers/net/dsa/sja1105/sja1105_main.c b/drivers/net/dsa/sja1105/sja1105_main.c +index 5b7947832b877..4b05a2424623c 100644 +--- a/drivers/net/dsa/sja1105/sja1105_main.c ++++ b/drivers/net/dsa/sja1105/sja1105_main.c +@@ -1308,10 +1308,11 @@ static int sja1105et_is_fdb_entry_in_bin(struct sja1105_private *priv, int bin, + int sja1105et_fdb_add(struct dsa_switch *ds, int port, + const unsigned char *addr, u16 vid) + { +- struct sja1105_l2_lookup_entry l2_lookup = {0}; ++ struct sja1105_l2_lookup_entry l2_lookup = {0}, tmp; + struct sja1105_private *priv = ds->priv; + struct device *dev = ds->dev; + int last_unused = -1; ++ int start, end, i; + int bin, way, rc; + + bin = sja1105et_fdb_hash(priv, addr, vid); +@@ -1323,7 +1324,7 @@ int sja1105et_fdb_add(struct dsa_switch *ds, int port, + * mask? If yes, we need to do nothing. If not, we need + * to rewrite the entry by adding this port to it. 
+ */ +- if (l2_lookup.destports & BIT(port)) ++ if ((l2_lookup.destports & BIT(port)) && l2_lookup.lockeds) + return 0; + l2_lookup.destports |= BIT(port); + } else { +@@ -1354,6 +1355,7 @@ int sja1105et_fdb_add(struct dsa_switch *ds, int port, + index, NULL, false); + } + } ++ l2_lookup.lockeds = true; + l2_lookup.index = sja1105et_fdb_index(bin, way); + + rc = sja1105_dynamic_config_write(priv, BLK_IDX_L2_LOOKUP, +@@ -1362,6 +1364,29 @@ int sja1105et_fdb_add(struct dsa_switch *ds, int port, + if (rc < 0) + return rc; + ++ /* Invalidate a dynamically learned entry if that exists */ ++ start = sja1105et_fdb_index(bin, 0); ++ end = sja1105et_fdb_index(bin, way); ++ ++ for (i = start; i < end; i++) { ++ rc = sja1105_dynamic_config_read(priv, BLK_IDX_L2_LOOKUP, ++ i, &tmp); ++ if (rc == -ENOENT) ++ continue; ++ if (rc) ++ return rc; ++ ++ if (tmp.macaddr != ether_addr_to_u64(addr) || tmp.vlanid != vid) ++ continue; ++ ++ rc = sja1105_dynamic_config_write(priv, BLK_IDX_L2_LOOKUP, ++ i, NULL, false); ++ if (rc) ++ return rc; ++ ++ break; ++ } ++ + return sja1105_static_fdb_change(priv, port, &l2_lookup, true); + } + +@@ -1403,32 +1428,30 @@ int sja1105et_fdb_del(struct dsa_switch *ds, int port, + int sja1105pqrs_fdb_add(struct dsa_switch *ds, int port, + const unsigned char *addr, u16 vid) + { +- struct sja1105_l2_lookup_entry l2_lookup = {0}; ++ struct sja1105_l2_lookup_entry l2_lookup = {0}, tmp; + struct sja1105_private *priv = ds->priv; + int rc, i; + + /* Search for an existing entry in the FDB table */ + l2_lookup.macaddr = ether_addr_to_u64(addr); + l2_lookup.vlanid = vid; +- l2_lookup.iotag = SJA1105_S_TAG; + l2_lookup.mask_macaddr = GENMASK_ULL(ETH_ALEN * 8 - 1, 0); +- if (priv->vlan_state != SJA1105_VLAN_UNAWARE) { +- l2_lookup.mask_vlanid = VLAN_VID_MASK; +- l2_lookup.mask_iotag = BIT(0); +- } else { +- l2_lookup.mask_vlanid = 0; +- l2_lookup.mask_iotag = 0; +- } ++ l2_lookup.mask_vlanid = VLAN_VID_MASK; + l2_lookup.destports = BIT(port); + ++ tmp = 
l2_lookup; ++ + rc = sja1105_dynamic_config_read(priv, BLK_IDX_L2_LOOKUP, +- SJA1105_SEARCH, &l2_lookup); +- if (rc == 0) { +- /* Found and this port is already in the entry's ++ SJA1105_SEARCH, &tmp); ++ if (rc == 0 && tmp.index != SJA1105_MAX_L2_LOOKUP_COUNT - 1) { ++ /* Found a static entry and this port is already in the entry's + * port mask => job done + */ +- if (l2_lookup.destports & BIT(port)) ++ if ((tmp.destports & BIT(port)) && tmp.lockeds) + return 0; ++ ++ l2_lookup = tmp; ++ + /* l2_lookup.index is populated by the switch in case it + * found something. + */ +@@ -1450,16 +1473,46 @@ int sja1105pqrs_fdb_add(struct dsa_switch *ds, int port, + dev_err(ds->dev, "FDB is full, cannot add entry.\n"); + return -EINVAL; + } +- l2_lookup.lockeds = true; + l2_lookup.index = i; + + skip_finding_an_index: ++ l2_lookup.lockeds = true; ++ + rc = sja1105_dynamic_config_write(priv, BLK_IDX_L2_LOOKUP, + l2_lookup.index, &l2_lookup, + true); + if (rc < 0) + return rc; + ++ /* The switch learns dynamic entries and looks up the FDB left to ++ * right. It is possible that our addition was concurrent with the ++ * dynamic learning of the same address, so now that the static entry ++ * has been installed, we are certain that address learning for this ++ * particular address has been turned off, so the dynamic entry either ++ * is in the FDB at an index smaller than the static one, or isn't (it ++ * can also be at a larger index, but in that case it is inactive ++ * because the static FDB entry will match first, and the dynamic one ++ * will eventually age out). Search for a dynamically learned address ++ * prior to our static one and invalidate it. 
++ */ ++ tmp = l2_lookup; ++ ++ rc = sja1105_dynamic_config_read(priv, BLK_IDX_L2_LOOKUP, ++ SJA1105_SEARCH, &tmp); ++ if (rc < 0) { ++ dev_err(ds->dev, ++ "port %d failed to read back entry for %pM vid %d: %pe\n", ++ port, addr, vid, ERR_PTR(rc)); ++ return rc; ++ } ++ ++ if (tmp.index < l2_lookup.index) { ++ rc = sja1105_dynamic_config_write(priv, BLK_IDX_L2_LOOKUP, ++ tmp.index, NULL, false); ++ if (rc < 0) ++ return rc; ++ } ++ + return sja1105_static_fdb_change(priv, port, &l2_lookup, true); + } + +@@ -1473,15 +1526,8 @@ int sja1105pqrs_fdb_del(struct dsa_switch *ds, int port, + + l2_lookup.macaddr = ether_addr_to_u64(addr); + l2_lookup.vlanid = vid; +- l2_lookup.iotag = SJA1105_S_TAG; + l2_lookup.mask_macaddr = GENMASK_ULL(ETH_ALEN * 8 - 1, 0); +- if (priv->vlan_state != SJA1105_VLAN_UNAWARE) { +- l2_lookup.mask_vlanid = VLAN_VID_MASK; +- l2_lookup.mask_iotag = BIT(0); +- } else { +- l2_lookup.mask_vlanid = 0; +- l2_lookup.mask_iotag = 0; +- } ++ l2_lookup.mask_vlanid = VLAN_VID_MASK; + l2_lookup.destports = BIT(port); + + rc = sja1105_dynamic_config_read(priv, BLK_IDX_L2_LOOKUP, +diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c +index 1a6ec1a12d531..b5d954cb409ae 100644 +--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c ++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c +@@ -2669,7 +2669,8 @@ int bnx2x_nic_load(struct bnx2x *bp, int load_mode) + } + + /* Allocated memory for FW statistics */ +- if (bnx2x_alloc_fw_stats_mem(bp)) ++ rc = bnx2x_alloc_fw_stats_mem(bp); ++ if (rc) + LOAD_ERROR_EXIT(bp, load_error0); + + /* request pf to initialize status blocks */ +diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c +index 8aea707a65a77..7e4c4980ced79 100644 +--- a/drivers/net/ethernet/freescale/fec_main.c ++++ b/drivers/net/ethernet/freescale/fec_main.c +@@ -3843,13 +3843,13 @@ fec_drv_remove(struct platform_device *pdev) + if 
(of_phy_is_fixed_link(np)) + of_phy_deregister_fixed_link(np); + of_node_put(fep->phy_node); +- free_netdev(ndev); + + clk_disable_unprepare(fep->clk_ahb); + clk_disable_unprepare(fep->clk_ipg); + pm_runtime_put_noidle(&pdev->dev); + pm_runtime_disable(&pdev->dev); + ++ free_netdev(ndev); + return 0; + } + +diff --git a/drivers/net/ethernet/natsemi/natsemi.c b/drivers/net/ethernet/natsemi/natsemi.c +index b81e1487945c8..14a17ad730f03 100644 +--- a/drivers/net/ethernet/natsemi/natsemi.c ++++ b/drivers/net/ethernet/natsemi/natsemi.c +@@ -819,7 +819,7 @@ static int natsemi_probe1(struct pci_dev *pdev, const struct pci_device_id *ent) + printk(version); + #endif + +- i = pci_enable_device(pdev); ++ i = pcim_enable_device(pdev); + if (i) return i; + + /* natsemi has a non-standard PM control register +@@ -852,7 +852,7 @@ static int natsemi_probe1(struct pci_dev *pdev, const struct pci_device_id *ent) + ioaddr = ioremap(iostart, iosize); + if (!ioaddr) { + i = -ENOMEM; +- goto err_ioremap; ++ goto err_pci_request_regions; + } + + /* Work around the dropped serial bit. 
*/ +@@ -974,9 +974,6 @@ static int natsemi_probe1(struct pci_dev *pdev, const struct pci_device_id *ent) + err_register_netdev: + iounmap(ioaddr); + +- err_ioremap: +- pci_release_regions(pdev); +- + err_pci_request_regions: + free_netdev(dev); + return i; +@@ -3241,7 +3238,6 @@ static void natsemi_remove1(struct pci_dev *pdev) + + NATSEMI_REMOVE_FILE(pdev, dspcfg_workaround); + unregister_netdev (dev); +- pci_release_regions (pdev); + iounmap(ioaddr); + free_netdev (dev); + } +diff --git a/drivers/net/ethernet/neterion/vxge/vxge-main.c b/drivers/net/ethernet/neterion/vxge/vxge-main.c +index 87892bd992b18..56556373548c4 100644 +--- a/drivers/net/ethernet/neterion/vxge/vxge-main.c ++++ b/drivers/net/ethernet/neterion/vxge/vxge-main.c +@@ -3527,13 +3527,13 @@ static void vxge_device_unregister(struct __vxge_hw_device *hldev) + + kfree(vdev->vpaths); + +- /* we are safe to free it now */ +- free_netdev(dev); +- + vxge_debug_init(vdev->level_trace, "%s: ethernet device unregistered", + buf); + vxge_debug_entryexit(vdev->level_trace, "%s: %s:%d Exiting...", buf, + __func__, __LINE__); ++ ++ /* we are safe to free it now */ ++ free_netdev(dev); + } + + /* +diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c +index 1b482446536dc..8803faadd3020 100644 +--- a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c ++++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c +@@ -286,6 +286,8 @@ nfp_net_get_link_ksettings(struct net_device *netdev, + + /* Init to unknowns */ + ethtool_link_ksettings_add_link_mode(cmd, supported, FIBRE); ++ ethtool_link_ksettings_add_link_mode(cmd, supported, Pause); ++ ethtool_link_ksettings_add_link_mode(cmd, advertising, Pause); + cmd->base.port = PORT_OTHER; + cmd->base.speed = SPEED_UNKNOWN; + cmd->base.duplex = DUPLEX_UNKNOWN; +diff --git a/drivers/net/ethernet/qlogic/qede/qede_filter.c b/drivers/net/ethernet/qlogic/qede/qede_filter.c +index c59b72c902932..a2e4dfb5cb44e 
100644 +--- a/drivers/net/ethernet/qlogic/qede/qede_filter.c ++++ b/drivers/net/ethernet/qlogic/qede/qede_filter.c +@@ -831,7 +831,7 @@ int qede_configure_vlan_filters(struct qede_dev *edev) + int qede_vlan_rx_kill_vid(struct net_device *dev, __be16 proto, u16 vid) + { + struct qede_dev *edev = netdev_priv(dev); +- struct qede_vlan *vlan = NULL; ++ struct qede_vlan *vlan; + int rc = 0; + + DP_VERBOSE(edev, NETIF_MSG_IFDOWN, "Removing vlan 0x%04x\n", vid); +@@ -842,7 +842,7 @@ int qede_vlan_rx_kill_vid(struct net_device *dev, __be16 proto, u16 vid) + if (vlan->vid == vid) + break; + +- if (!vlan || (vlan->vid != vid)) { ++ if (list_entry_is_head(vlan, &edev->vlan_list, list)) { + DP_VERBOSE(edev, (NETIF_MSG_IFUP | NETIF_MSG_IFDOWN), + "Vlan isn't configured\n"); + goto out; +diff --git a/drivers/net/ethernet/qlogic/qla3xxx.c b/drivers/net/ethernet/qlogic/qla3xxx.c +index 2376b2729633f..c00ad57575eab 100644 +--- a/drivers/net/ethernet/qlogic/qla3xxx.c ++++ b/drivers/net/ethernet/qlogic/qla3xxx.c +@@ -154,7 +154,7 @@ static int ql_wait_for_drvr_lock(struct ql3_adapter *qdev) + "driver lock acquired\n"); + return 1; + } +- ssleep(1); ++ mdelay(1000); + } while (++i < 10); + + netdev_err(qdev->ndev, "Timed out waiting for driver lock...\n"); +@@ -3274,7 +3274,7 @@ static int ql_adapter_reset(struct ql3_adapter *qdev) + if ((value & ISP_CONTROL_SR) == 0) + break; + +- ssleep(1); ++ mdelay(1000); + } while ((--max_wait_time)); + + /* +@@ -3310,7 +3310,7 @@ static int ql_adapter_reset(struct ql3_adapter *qdev) + ispControlStatus); + if ((value & ISP_CONTROL_FSR) == 0) + break; +- ssleep(1); ++ mdelay(1000); + } while ((--max_wait_time)); + } + if (max_wait_time == 0) +diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c +index 718539cdd2f2e..67a08cbba859d 100644 +--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c ++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c +@@ -2060,8 +2060,12 @@ static void 
am65_cpsw_port_offload_fwd_mark_update(struct am65_cpsw_common *comm + + for (i = 1; i <= common->port_num; i++) { + struct am65_cpsw_port *port = am65_common_get_port(common, i); +- struct am65_cpsw_ndev_priv *priv = am65_ndev_to_priv(port->ndev); ++ struct am65_cpsw_ndev_priv *priv; + ++ if (!port->ndev) ++ continue; ++ ++ priv = am65_ndev_to_priv(port->ndev); + priv->offload_fwd_mark = set_val; + } + } +diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c +index a14a00328fa37..7afd9edaf2490 100644 +--- a/drivers/net/phy/micrel.c ++++ b/drivers/net/phy/micrel.c +@@ -382,11 +382,11 @@ static int ksz8041_config_aneg(struct phy_device *phydev) + } + + static int ksz8051_ksz8795_match_phy_device(struct phy_device *phydev, +- const u32 ksz_phy_id) ++ const bool ksz_8051) + { + int ret; + +- if ((phydev->phy_id & MICREL_PHY_ID_MASK) != ksz_phy_id) ++ if ((phydev->phy_id & MICREL_PHY_ID_MASK) != PHY_ID_KSZ8051) + return 0; + + ret = phy_read(phydev, MII_BMSR); +@@ -399,7 +399,7 @@ static int ksz8051_ksz8795_match_phy_device(struct phy_device *phydev, + * the switch does not. 
+ */ + ret &= BMSR_ERCAP; +- if (ksz_phy_id == PHY_ID_KSZ8051) ++ if (ksz_8051) + return ret; + else + return !ret; +@@ -407,7 +407,7 @@ static int ksz8051_ksz8795_match_phy_device(struct phy_device *phydev, + + static int ksz8051_match_phy_device(struct phy_device *phydev) + { +- return ksz8051_ksz8795_match_phy_device(phydev, PHY_ID_KSZ8051); ++ return ksz8051_ksz8795_match_phy_device(phydev, true); + } + + static int ksz8081_config_init(struct phy_device *phydev) +@@ -435,7 +435,7 @@ static int ksz8061_config_init(struct phy_device *phydev) + + static int ksz8795_match_phy_device(struct phy_device *phydev) + { +- return ksz8051_ksz8795_match_phy_device(phydev, PHY_ID_KSZ87XX); ++ return ksz8051_ksz8795_match_phy_device(phydev, false); + } + + static int ksz9021_load_values_from_of(struct phy_device *phydev, +diff --git a/drivers/net/usb/pegasus.c b/drivers/net/usb/pegasus.c +index 9a907182569cf..bc2dbf86496b5 100644 +--- a/drivers/net/usb/pegasus.c ++++ b/drivers/net/usb/pegasus.c +@@ -735,12 +735,16 @@ static inline void disable_net_traffic(pegasus_t *pegasus) + set_registers(pegasus, EthCtrl0, sizeof(tmp), &tmp); + } + +-static inline void get_interrupt_interval(pegasus_t *pegasus) ++static inline int get_interrupt_interval(pegasus_t *pegasus) + { + u16 data; + u8 interval; ++ int ret; ++ ++ ret = read_eprom_word(pegasus, 4, &data); ++ if (ret < 0) ++ return ret; + +- read_eprom_word(pegasus, 4, &data); + interval = data >> 8; + if (pegasus->usb->speed != USB_SPEED_HIGH) { + if (interval < 0x80) { +@@ -755,6 +759,8 @@ static inline void get_interrupt_interval(pegasus_t *pegasus) + } + } + pegasus->intr_interval = interval; ++ ++ return 0; + } + + static void set_carrier(struct net_device *net) +@@ -1149,7 +1155,9 @@ static int pegasus_probe(struct usb_interface *intf, + | NETIF_MSG_PROBE | NETIF_MSG_LINK); + + pegasus->features = usb_dev_id[dev_index].private; +- get_interrupt_interval(pegasus); ++ res = get_interrupt_interval(pegasus); ++ if (res) ++ goto 
out2; + if (reset_mac(pegasus)) { + dev_err(&intf->dev, "can't reset MAC\n"); + res = -EIO; +diff --git a/drivers/net/wireless/virt_wifi.c b/drivers/net/wireless/virt_wifi.c +index 1df959532c7d3..514f2c1124b61 100644 +--- a/drivers/net/wireless/virt_wifi.c ++++ b/drivers/net/wireless/virt_wifi.c +@@ -136,6 +136,29 @@ static struct ieee80211_supported_band band_5ghz = { + /* Assigned at module init. Guaranteed locally-administered and unicast. */ + static u8 fake_router_bssid[ETH_ALEN] __ro_after_init = {}; + ++static void virt_wifi_inform_bss(struct wiphy *wiphy) ++{ ++ u64 tsf = div_u64(ktime_get_boottime_ns(), 1000); ++ struct cfg80211_bss *informed_bss; ++ static const struct { ++ u8 tag; ++ u8 len; ++ u8 ssid[8]; ++ } __packed ssid = { ++ .tag = WLAN_EID_SSID, ++ .len = 8, ++ .ssid = "VirtWifi", ++ }; ++ ++ informed_bss = cfg80211_inform_bss(wiphy, &channel_5ghz, ++ CFG80211_BSS_FTYPE_PRESP, ++ fake_router_bssid, tsf, ++ WLAN_CAPABILITY_ESS, 0, ++ (void *)&ssid, sizeof(ssid), ++ DBM_TO_MBM(-50), GFP_KERNEL); ++ cfg80211_put_bss(wiphy, informed_bss); ++} ++ + /* Called with the rtnl lock held. */ + static int virt_wifi_scan(struct wiphy *wiphy, + struct cfg80211_scan_request *request) +@@ -156,28 +179,13 @@ static int virt_wifi_scan(struct wiphy *wiphy, + /* Acquires and releases the rdev BSS lock. 
*/ + static void virt_wifi_scan_result(struct work_struct *work) + { +- struct { +- u8 tag; +- u8 len; +- u8 ssid[8]; +- } __packed ssid = { +- .tag = WLAN_EID_SSID, .len = 8, .ssid = "VirtWifi", +- }; +- struct cfg80211_bss *informed_bss; + struct virt_wifi_wiphy_priv *priv = + container_of(work, struct virt_wifi_wiphy_priv, + scan_result.work); + struct wiphy *wiphy = priv_to_wiphy(priv); + struct cfg80211_scan_info scan_info = { .aborted = false }; +- u64 tsf = div_u64(ktime_get_boottime_ns(), 1000); + +- informed_bss = cfg80211_inform_bss(wiphy, &channel_5ghz, +- CFG80211_BSS_FTYPE_PRESP, +- fake_router_bssid, tsf, +- WLAN_CAPABILITY_ESS, 0, +- (void *)&ssid, sizeof(ssid), +- DBM_TO_MBM(-50), GFP_KERNEL); +- cfg80211_put_bss(wiphy, informed_bss); ++ virt_wifi_inform_bss(wiphy); + + /* Schedules work which acquires and releases the rtnl lock. */ + cfg80211_scan_done(priv->scan_request, &scan_info); +@@ -225,10 +233,12 @@ static int virt_wifi_connect(struct wiphy *wiphy, struct net_device *netdev, + if (!could_schedule) + return -EBUSY; + +- if (sme->bssid) ++ if (sme->bssid) { + ether_addr_copy(priv->connect_requested_bss, sme->bssid); +- else ++ } else { ++ virt_wifi_inform_bss(wiphy); + eth_zero_addr(priv->connect_requested_bss); ++ } + + wiphy_debug(wiphy, "connect\n"); + +@@ -241,11 +251,13 @@ static void virt_wifi_connect_complete(struct work_struct *work) + struct virt_wifi_netdev_priv *priv = + container_of(work, struct virt_wifi_netdev_priv, connect.work); + u8 *requested_bss = priv->connect_requested_bss; +- bool has_addr = !is_zero_ether_addr(requested_bss); + bool right_addr = ether_addr_equal(requested_bss, fake_router_bssid); + u16 status = WLAN_STATUS_SUCCESS; + +- if (!priv->is_up || (has_addr && !right_addr)) ++ if (is_zero_ether_addr(requested_bss)) ++ requested_bss = NULL; ++ ++ if (!priv->is_up || (requested_bss && !right_addr)) + status = WLAN_STATUS_UNSPECIFIED_FAILURE; + else + priv->is_connected = true; +diff --git 
a/drivers/pcmcia/i82092.c b/drivers/pcmcia/i82092.c +index 85887d885b5f3..192c9049d654f 100644 +--- a/drivers/pcmcia/i82092.c ++++ b/drivers/pcmcia/i82092.c +@@ -112,6 +112,7 @@ static int i82092aa_pci_probe(struct pci_dev *dev, + for (i = 0; i < socket_count; i++) { + sockets[i].card_state = 1; /* 1 = present but empty */ + sockets[i].io_base = pci_resource_start(dev, 0); ++ sockets[i].dev = dev; + sockets[i].socket.features |= SS_CAP_PCCARD; + sockets[i].socket.map_size = 0x1000; + sockets[i].socket.irq_mask = 0; +diff --git a/drivers/platform/x86/gigabyte-wmi.c b/drivers/platform/x86/gigabyte-wmi.c +index 5529d7b0abea3..fbb224a82e34c 100644 +--- a/drivers/platform/x86/gigabyte-wmi.c ++++ b/drivers/platform/x86/gigabyte-wmi.c +@@ -141,6 +141,7 @@ static u8 gigabyte_wmi_detect_sensor_usability(struct wmi_device *wdev) + + static const struct dmi_system_id gigabyte_wmi_known_working_platforms[] = { + DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550 AORUS ELITE"), ++ DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550 AORUS ELITE V2"), + DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550 GAMING X V2"), + DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550M AORUS PRO-P"), + DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550M DS3H"), +diff --git a/drivers/s390/block/dasd_eckd.c b/drivers/s390/block/dasd_eckd.c +index a6ac505cbdd7d..701c1d0094ec6 100644 +--- a/drivers/s390/block/dasd_eckd.c ++++ b/drivers/s390/block/dasd_eckd.c +@@ -1004,15 +1004,23 @@ static unsigned char dasd_eckd_path_access(void *conf_data, int conf_len) + static void dasd_eckd_store_conf_data(struct dasd_device *device, + struct dasd_conf_data *conf_data, int chp) + { ++ struct dasd_eckd_private *private = device->private; + struct channel_path_desc_fmt0 *chp_desc; + struct subchannel_id sch_id; ++ void *cdp; + +- ccw_device_get_schid(device->cdev, &sch_id); + /* + * path handling and read_conf allocate data + * free it before replacing the pointer ++ * also replace the old private->conf_data pointer ++ * with the new one if this points to 
the same data + */ +- kfree(device->path[chp].conf_data); ++ cdp = device->path[chp].conf_data; ++ if (private->conf_data == cdp) { ++ private->conf_data = (void *)conf_data; ++ dasd_eckd_identify_conf_parts(private); ++ } ++ ccw_device_get_schid(device->cdev, &sch_id); + device->path[chp].conf_data = conf_data; + device->path[chp].cssid = sch_id.cssid; + device->path[chp].ssid = sch_id.ssid; +@@ -1020,6 +1028,7 @@ static void dasd_eckd_store_conf_data(struct dasd_device *device, + if (chp_desc) + device->path[chp].chpid = chp_desc->chpid; + kfree(chp_desc); ++ kfree(cdp); + } + + static void dasd_eckd_clear_conf_data(struct dasd_device *device) +diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c +index 6540d48eb0e8e..23fd361343b48 100644 +--- a/drivers/scsi/ibmvscsi/ibmvfc.c ++++ b/drivers/scsi/ibmvscsi/ibmvfc.c +@@ -804,6 +804,13 @@ static int ibmvfc_init_event_pool(struct ibmvfc_host *vhost, + for (i = 0; i < size; ++i) { + struct ibmvfc_event *evt = &pool->events[i]; + ++ /* ++ * evt->active states ++ * 1 = in flight ++ * 0 = being completed ++ * -1 = free/freed ++ */ ++ atomic_set(&evt->active, -1); + atomic_set(&evt->free, 1); + evt->crq.valid = 0x80; + evt->crq.ioba = cpu_to_be64(pool->iu_token + (sizeof(*evt->xfer_iu) * i)); +@@ -1014,6 +1021,7 @@ static void ibmvfc_free_event(struct ibmvfc_event *evt) + + BUG_ON(!ibmvfc_valid_event(pool, evt)); + BUG_ON(atomic_inc_return(&evt->free) != 1); ++ BUG_ON(atomic_dec_and_test(&evt->active)); + + spin_lock_irqsave(&evt->queue->l_lock, flags); + list_add_tail(&evt->queue_list, &evt->queue->free); +@@ -1069,6 +1077,12 @@ static void ibmvfc_complete_purge(struct list_head *purge_list) + **/ + static void ibmvfc_fail_request(struct ibmvfc_event *evt, int error_code) + { ++ /* ++ * Anything we are failing should still be active. Otherwise, it ++ * implies we already got a response for the command and are doing ++ * something bad like double completing it. 
++ */ ++ BUG_ON(!atomic_dec_and_test(&evt->active)); + if (evt->cmnd) { + evt->cmnd->result = (error_code << 16); + evt->done = ibmvfc_scsi_eh_done; +@@ -1720,6 +1734,7 @@ static int ibmvfc_send_event(struct ibmvfc_event *evt, + + evt->done(evt); + } else { ++ atomic_set(&evt->active, 1); + spin_unlock_irqrestore(&evt->queue->l_lock, flags); + ibmvfc_trc_start(evt); + } +@@ -3248,7 +3263,7 @@ static void ibmvfc_handle_crq(struct ibmvfc_crq *crq, struct ibmvfc_host *vhost, + return; + } + +- if (unlikely(atomic_read(&evt->free))) { ++ if (unlikely(atomic_dec_if_positive(&evt->active))) { + dev_err(vhost->dev, "Received duplicate correlation_token 0x%08llx!\n", + crq->ioba); + return; +@@ -3775,7 +3790,7 @@ static void ibmvfc_handle_scrq(struct ibmvfc_crq *crq, struct ibmvfc_host *vhost + return; + } + +- if (unlikely(atomic_read(&evt->free))) { ++ if (unlikely(atomic_dec_if_positive(&evt->active))) { + dev_err(vhost->dev, "Received duplicate correlation_token 0x%08llx!\n", + crq->ioba); + return; +diff --git a/drivers/scsi/ibmvscsi/ibmvfc.h b/drivers/scsi/ibmvscsi/ibmvfc.h +index 19dcec3ae9ba7..994846ec64c6b 100644 +--- a/drivers/scsi/ibmvscsi/ibmvfc.h ++++ b/drivers/scsi/ibmvscsi/ibmvfc.h +@@ -744,6 +744,7 @@ struct ibmvfc_event { + struct ibmvfc_target *tgt; + struct scsi_cmnd *cmnd; + atomic_t free; ++ atomic_t active; + union ibmvfc_iu *xfer_iu; + void (*done)(struct ibmvfc_event *evt); + void (*_done)(struct ibmvfc_event *evt); +diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c +index 1a94c7b1de2df..261d3663cbb70 100644 +--- a/drivers/scsi/sr.c ++++ b/drivers/scsi/sr.c +@@ -221,7 +221,7 @@ static unsigned int sr_get_events(struct scsi_device *sdev) + else if (med->media_event_code == 2) + return DISK_EVENT_MEDIA_CHANGE; + else if (med->media_event_code == 3) +- return DISK_EVENT_EJECT_REQUEST; ++ return DISK_EVENT_MEDIA_CHANGE; + return 0; + } + +diff --git a/drivers/soc/imx/soc-imx8m.c b/drivers/soc/imx/soc-imx8m.c +index 071e14496e4ba..cc57a384d74d2 100644 
+--- a/drivers/soc/imx/soc-imx8m.c ++++ b/drivers/soc/imx/soc-imx8m.c +@@ -5,8 +5,6 @@ + + #include + #include +-#include +-#include + #include + #include + #include +@@ -31,7 +29,7 @@ + + struct imx8_soc_data { + char *name; +- u32 (*soc_revision)(struct device *dev); ++ u32 (*soc_revision)(void); + }; + + static u64 soc_uid; +@@ -52,7 +50,7 @@ static u32 imx8mq_soc_revision_from_atf(void) + static inline u32 imx8mq_soc_revision_from_atf(void) { return 0; }; + #endif + +-static u32 __init imx8mq_soc_revision(struct device *dev) ++static u32 __init imx8mq_soc_revision(void) + { + struct device_node *np; + void __iomem *ocotp_base; +@@ -77,20 +75,9 @@ static u32 __init imx8mq_soc_revision(struct device *dev) + rev = REV_B1; + } + +- if (dev) { +- int ret; +- +- ret = nvmem_cell_read_u64(dev, "soc_unique_id", &soc_uid); +- if (ret) { +- iounmap(ocotp_base); +- of_node_put(np); +- return ret; +- } +- } else { +- soc_uid = readl_relaxed(ocotp_base + OCOTP_UID_HIGH); +- soc_uid <<= 32; +- soc_uid |= readl_relaxed(ocotp_base + OCOTP_UID_LOW); +- } ++ soc_uid = readl_relaxed(ocotp_base + OCOTP_UID_HIGH); ++ soc_uid <<= 32; ++ soc_uid |= readl_relaxed(ocotp_base + OCOTP_UID_LOW); + + iounmap(ocotp_base); + of_node_put(np); +@@ -120,7 +107,7 @@ static void __init imx8mm_soc_uid(void) + of_node_put(np); + } + +-static u32 __init imx8mm_soc_revision(struct device *dev) ++static u32 __init imx8mm_soc_revision(void) + { + struct device_node *np; + void __iomem *anatop_base; +@@ -138,15 +125,7 @@ static u32 __init imx8mm_soc_revision(struct device *dev) + iounmap(anatop_base); + of_node_put(np); + +- if (dev) { +- int ret; +- +- ret = nvmem_cell_read_u64(dev, "soc_unique_id", &soc_uid); +- if (ret) +- return ret; +- } else { +- imx8mm_soc_uid(); +- } ++ imx8mm_soc_uid(); + + return rev; + } +@@ -171,7 +150,7 @@ static const struct imx8_soc_data imx8mp_soc_data = { + .soc_revision = imx8mm_soc_revision, + }; + +-static __maybe_unused const struct of_device_id imx8_machine_match[] 
= { ++static __maybe_unused const struct of_device_id imx8_soc_match[] = { + { .compatible = "fsl,imx8mq", .data = &imx8mq_soc_data, }, + { .compatible = "fsl,imx8mm", .data = &imx8mm_soc_data, }, + { .compatible = "fsl,imx8mn", .data = &imx8mn_soc_data, }, +@@ -179,20 +158,12 @@ static __maybe_unused const struct of_device_id imx8_machine_match[] = { + { } + }; + +-static __maybe_unused const struct of_device_id imx8_soc_match[] = { +- { .compatible = "fsl,imx8mq-soc", .data = &imx8mq_soc_data, }, +- { .compatible = "fsl,imx8mm-soc", .data = &imx8mm_soc_data, }, +- { .compatible = "fsl,imx8mn-soc", .data = &imx8mn_soc_data, }, +- { .compatible = "fsl,imx8mp-soc", .data = &imx8mp_soc_data, }, +- { } +-}; +- + #define imx8_revision(soc_rev) \ + soc_rev ? \ + kasprintf(GFP_KERNEL, "%d.%d", (soc_rev >> 4) & 0xf, soc_rev & 0xf) : \ + "unknown" + +-static int imx8_soc_info(struct platform_device *pdev) ++static int __init imx8_soc_init(void) + { + struct soc_device_attribute *soc_dev_attr; + struct soc_device *soc_dev; +@@ -211,10 +182,7 @@ static int imx8_soc_info(struct platform_device *pdev) + if (ret) + goto free_soc; + +- if (pdev) +- id = of_match_node(imx8_soc_match, pdev->dev.of_node); +- else +- id = of_match_node(imx8_machine_match, of_root); ++ id = of_match_node(imx8_soc_match, of_root); + if (!id) { + ret = -ENODEV; + goto free_soc; +@@ -223,16 +191,8 @@ static int imx8_soc_info(struct platform_device *pdev) + data = id->data; + if (data) { + soc_dev_attr->soc_id = data->name; +- if (data->soc_revision) { +- if (pdev) { +- soc_rev = data->soc_revision(&pdev->dev); +- ret = soc_rev; +- if (ret < 0) +- goto free_soc; +- } else { +- soc_rev = data->soc_revision(NULL); +- } +- } ++ if (data->soc_revision) ++ soc_rev = data->soc_revision(); + } + + soc_dev_attr->revision = imx8_revision(soc_rev); +@@ -270,24 +230,4 @@ free_soc: + kfree(soc_dev_attr); + return ret; + } +- +-/* Retain device_initcall is for backward compatibility with DTS. 
*/ +-static int __init imx8_soc_init(void) +-{ +- if (of_find_matching_node_and_match(NULL, imx8_soc_match, NULL)) +- return 0; +- +- return imx8_soc_info(NULL); +-} + device_initcall(imx8_soc_init); +- +-static struct platform_driver imx8_soc_info_driver = { +- .probe = imx8_soc_info, +- .driver = { +- .name = "imx8_soc_info", +- .of_match_table = imx8_soc_match, +- }, +-}; +- +-module_platform_driver(imx8_soc_info_driver); +-MODULE_LICENSE("GPL v2"); +diff --git a/drivers/soc/ixp4xx/ixp4xx-npe.c b/drivers/soc/ixp4xx/ixp4xx-npe.c +index ec90b44fa0cd3..6065aaab67403 100644 +--- a/drivers/soc/ixp4xx/ixp4xx-npe.c ++++ b/drivers/soc/ixp4xx/ixp4xx-npe.c +@@ -690,8 +690,8 @@ static int ixp4xx_npe_probe(struct platform_device *pdev) + + if (!(ixp4xx_read_feature_bits() & + (IXP4XX_FEATURE_RESET_NPEA << i))) { +- dev_info(dev, "NPE%d at 0x%08x-0x%08x not available\n", +- i, res->start, res->end); ++ dev_info(dev, "NPE%d at %pR not available\n", ++ i, res); + continue; /* NPE already disabled or not present */ + } + npe->regs = devm_ioremap_resource(dev, res); +@@ -699,13 +699,12 @@ static int ixp4xx_npe_probe(struct platform_device *pdev) + return PTR_ERR(npe->regs); + + if (npe_reset(npe)) { +- dev_info(dev, "NPE%d at 0x%08x-0x%08x does not reset\n", +- i, res->start, res->end); ++ dev_info(dev, "NPE%d at %pR does not reset\n", ++ i, res); + continue; + } + npe->valid = 1; +- dev_info(dev, "NPE%d at 0x%08x-0x%08x registered\n", +- i, res->start, res->end); ++ dev_info(dev, "NPE%d at %pR registered\n", i, res); + found++; + } + +diff --git a/drivers/soc/ixp4xx/ixp4xx-qmgr.c b/drivers/soc/ixp4xx/ixp4xx-qmgr.c +index 8c968382cea76..065a800717bd5 100644 +--- a/drivers/soc/ixp4xx/ixp4xx-qmgr.c ++++ b/drivers/soc/ixp4xx/ixp4xx-qmgr.c +@@ -145,12 +145,12 @@ static irqreturn_t qmgr_irq1_a0(int irq, void *pdev) + /* ACK - it may clear any bits so don't rely on it */ + __raw_writel(0xFFFFFFFF, &qmgr_regs->irqstat[0]); + +- en_bitmap = qmgr_regs->irqen[0]; ++ en_bitmap = 
__raw_readl(&qmgr_regs->irqen[0]); + while (en_bitmap) { + i = __fls(en_bitmap); /* number of the last "low" queue */ + en_bitmap &= ~BIT(i); +- src = qmgr_regs->irqsrc[i >> 3]; +- stat = qmgr_regs->stat1[i >> 3]; ++ src = __raw_readl(&qmgr_regs->irqsrc[i >> 3]); ++ stat = __raw_readl(&qmgr_regs->stat1[i >> 3]); + if (src & 4) /* the IRQ condition is inverted */ + stat = ~stat; + if (stat & BIT(src & 3)) { +@@ -170,7 +170,8 @@ static irqreturn_t qmgr_irq2_a0(int irq, void *pdev) + /* ACK - it may clear any bits so don't rely on it */ + __raw_writel(0xFFFFFFFF, &qmgr_regs->irqstat[1]); + +- req_bitmap = qmgr_regs->irqen[1] & qmgr_regs->statne_h; ++ req_bitmap = __raw_readl(&qmgr_regs->irqen[1]) & ++ __raw_readl(&qmgr_regs->statne_h); + while (req_bitmap) { + i = __fls(req_bitmap); /* number of the last "high" queue */ + req_bitmap &= ~BIT(i); +diff --git a/drivers/spi/spi-imx.c b/drivers/spi/spi-imx.c +index 39dc02e366f4b..2872993550bd5 100644 +--- a/drivers/spi/spi-imx.c ++++ b/drivers/spi/spi-imx.c +@@ -505,8 +505,10 @@ static int mx51_ecspi_prepare_message(struct spi_imx_data *spi_imx, + struct spi_message *msg) + { + struct spi_device *spi = msg->spi; ++ struct spi_transfer *xfer; + u32 ctrl = MX51_ECSPI_CTRL_ENABLE; +- u32 testreg; ++ u32 min_speed_hz = ~0U; ++ u32 testreg, delay; + u32 cfg = readl(spi_imx->base + MX51_ECSPI_CONFIG); + + /* set Master or Slave mode */ +@@ -567,6 +569,35 @@ static int mx51_ecspi_prepare_message(struct spi_imx_data *spi_imx, + + writel(cfg, spi_imx->base + MX51_ECSPI_CONFIG); + ++ /* ++ * Wait until the changes in the configuration register CONFIGREG ++ * propagate into the hardware. It takes exactly one tick of the ++ * SCLK clock, but we will wait two SCLK clock just to be sure. The ++ * effect of the delay it takes for the hardware to apply changes ++ * is noticable if the SCLK clock run very slow. 
In such a case, if ++ * the polarity of SCLK should be inverted, the GPIO ChipSelect might ++ * be asserted before the SCLK polarity changes, which would disrupt ++ * the SPI communication as the device on the other end would consider ++ * the change of SCLK polarity as a clock tick already. ++ * ++ * Because spi_imx->spi_bus_clk is only set in bitbang prepare_message ++ * callback, iterate over all the transfers in spi_message, find the ++ * one with lowest bus frequency, and use that bus frequency for the ++ * delay calculation. In case all transfers have speed_hz == 0, then ++ * min_speed_hz is ~0 and the resulting delay is zero. ++ */ ++ list_for_each_entry(xfer, &msg->transfers, transfer_list) { ++ if (!xfer->speed_hz) ++ continue; ++ min_speed_hz = min(xfer->speed_hz, min_speed_hz); ++ } ++ ++ delay = (2 * 1000000) / min_speed_hz; ++ if (likely(delay < 10)) /* SCLK is faster than 100 kHz */ ++ udelay(delay); ++ else /* SCLK is _very_ slow */ ++ usleep_range(delay, delay + 10); ++ + return 0; + } + +@@ -574,7 +605,7 @@ static int mx51_ecspi_prepare_transfer(struct spi_imx_data *spi_imx, + struct spi_device *spi) + { + u32 ctrl = readl(spi_imx->base + MX51_ECSPI_CTRL); +- u32 clk, delay; ++ u32 clk; + + /* Clear BL field and set the right value */ + ctrl &= ~MX51_ECSPI_CTRL_BL_MASK; +@@ -596,23 +627,6 @@ static int mx51_ecspi_prepare_transfer(struct spi_imx_data *spi_imx, + + writel(ctrl, spi_imx->base + MX51_ECSPI_CTRL); + +- /* +- * Wait until the changes in the configuration register CONFIGREG +- * propagate into the hardware. It takes exactly one tick of the +- * SCLK clock, but we will wait two SCLK clock just to be sure. The +- * effect of the delay it takes for the hardware to apply changes +- * is noticable if the SCLK clock run very slow. 
In such a case, if +- * the polarity of SCLK should be inverted, the GPIO ChipSelect might +- * be asserted before the SCLK polarity changes, which would disrupt +- * the SPI communication as the device on the other end would consider +- * the change of SCLK polarity as a clock tick already. +- */ +- delay = (2 * 1000000) / clk; +- if (likely(delay < 10)) /* SCLK is faster than 100 kHz */ +- udelay(delay); +- else /* SCLK is _very_ slow */ +- usleep_range(delay, delay + 10); +- + return 0; + } + +diff --git a/drivers/spi/spi-meson-spicc.c b/drivers/spi/spi-meson-spicc.c +index b2c4621db34d7..c208efeadd184 100644 +--- a/drivers/spi/spi-meson-spicc.c ++++ b/drivers/spi/spi-meson-spicc.c +@@ -785,6 +785,8 @@ static int meson_spicc_remove(struct platform_device *pdev) + clk_disable_unprepare(spicc->core); + clk_disable_unprepare(spicc->pclk); + ++ spi_master_put(spicc->master); ++ + return 0; + } + +diff --git a/drivers/staging/rtl8712/hal_init.c b/drivers/staging/rtl8712/hal_init.c +index 22974277afa08..4eff3fdecdb8a 100644 +--- a/drivers/staging/rtl8712/hal_init.c ++++ b/drivers/staging/rtl8712/hal_init.c +@@ -29,21 +29,31 @@ + #define FWBUFF_ALIGN_SZ 512 + #define MAX_DUMP_FWSZ (48 * 1024) + ++static void rtl871x_load_fw_fail(struct _adapter *adapter) ++{ ++ struct usb_device *udev = adapter->dvobjpriv.pusbdev; ++ struct device *dev = &udev->dev; ++ struct device *parent = dev->parent; ++ ++ complete(&adapter->rtl8712_fw_ready); ++ ++ dev_err(&udev->dev, "r8712u: Firmware request failed\n"); ++ ++ if (parent) ++ device_lock(parent); ++ ++ device_release_driver(dev); ++ ++ if (parent) ++ device_unlock(parent); ++} ++ + static void rtl871x_load_fw_cb(const struct firmware *firmware, void *context) + { + struct _adapter *adapter = context; + + if (!firmware) { +- struct usb_device *udev = adapter->dvobjpriv.pusbdev; +- struct usb_interface *usb_intf = adapter->pusb_intf; +- +- dev_err(&udev->dev, "r8712u: Firmware request failed\n"); +- usb_put_dev(udev); +- 
usb_set_intfdata(usb_intf, NULL); +- r8712_free_drv_sw(adapter); +- adapter->dvobj_deinit(adapter); +- complete(&adapter->rtl8712_fw_ready); +- free_netdev(adapter->pnetdev); ++ rtl871x_load_fw_fail(adapter); + return; + } + adapter->fw = firmware; +diff --git a/drivers/staging/rtl8712/rtl8712_led.c b/drivers/staging/rtl8712/rtl8712_led.c +index 5901026949f25..d5fc9026b036e 100644 +--- a/drivers/staging/rtl8712/rtl8712_led.c ++++ b/drivers/staging/rtl8712/rtl8712_led.c +@@ -1820,3 +1820,11 @@ void LedControl871x(struct _adapter *padapter, enum LED_CTL_MODE LedAction) + break; + } + } ++ ++void r8712_flush_led_works(struct _adapter *padapter) ++{ ++ struct led_priv *pledpriv = &padapter->ledpriv; ++ ++ flush_work(&pledpriv->SwLed0.BlinkWorkItem); ++ flush_work(&pledpriv->SwLed1.BlinkWorkItem); ++} +diff --git a/drivers/staging/rtl8712/rtl871x_led.h b/drivers/staging/rtl8712/rtl871x_led.h +index ee19c873cf010..2f0768132ad8f 100644 +--- a/drivers/staging/rtl8712/rtl871x_led.h ++++ b/drivers/staging/rtl8712/rtl871x_led.h +@@ -112,6 +112,7 @@ struct led_priv { + void r8712_InitSwLeds(struct _adapter *padapter); + void r8712_DeInitSwLeds(struct _adapter *padapter); + void LedControl871x(struct _adapter *padapter, enum LED_CTL_MODE LedAction); ++void r8712_flush_led_works(struct _adapter *padapter); + + #endif + +diff --git a/drivers/staging/rtl8712/rtl871x_pwrctrl.c b/drivers/staging/rtl8712/rtl871x_pwrctrl.c +index 23cff43437e21..cd6d9ff0bebca 100644 +--- a/drivers/staging/rtl8712/rtl871x_pwrctrl.c ++++ b/drivers/staging/rtl8712/rtl871x_pwrctrl.c +@@ -224,3 +224,11 @@ void r8712_unregister_cmd_alive(struct _adapter *padapter) + } + mutex_unlock(&pwrctrl->mutex_lock); + } ++ ++void r8712_flush_rwctrl_works(struct _adapter *padapter) ++{ ++ struct pwrctrl_priv *pwrctrl = &padapter->pwrctrlpriv; ++ ++ flush_work(&pwrctrl->SetPSModeWorkItem); ++ flush_work(&pwrctrl->rpwm_workitem); ++} +diff --git a/drivers/staging/rtl8712/rtl871x_pwrctrl.h 
b/drivers/staging/rtl8712/rtl871x_pwrctrl.h +index bf6623cfaf27b..b35b9c7920ebb 100644 +--- a/drivers/staging/rtl8712/rtl871x_pwrctrl.h ++++ b/drivers/staging/rtl8712/rtl871x_pwrctrl.h +@@ -108,5 +108,6 @@ void r8712_cpwm_int_hdl(struct _adapter *padapter, + void r8712_set_ps_mode(struct _adapter *padapter, uint ps_mode, + uint smart_ps); + void r8712_set_rpwm(struct _adapter *padapter, u8 val8); ++void r8712_flush_rwctrl_works(struct _adapter *padapter); + + #endif /* __RTL871X_PWRCTRL_H_ */ +diff --git a/drivers/staging/rtl8712/usb_intf.c b/drivers/staging/rtl8712/usb_intf.c +index b760bc3559373..17d28af0d0867 100644 +--- a/drivers/staging/rtl8712/usb_intf.c ++++ b/drivers/staging/rtl8712/usb_intf.c +@@ -594,35 +594,30 @@ static void r871xu_dev_remove(struct usb_interface *pusb_intf) + { + struct net_device *pnetdev = usb_get_intfdata(pusb_intf); + struct usb_device *udev = interface_to_usbdev(pusb_intf); ++ struct _adapter *padapter = netdev_priv(pnetdev); ++ ++ /* never exit with a firmware callback pending */ ++ wait_for_completion(&padapter->rtl8712_fw_ready); ++ usb_set_intfdata(pusb_intf, NULL); ++ release_firmware(padapter->fw); ++ if (drvpriv.drv_registered) ++ padapter->surprise_removed = true; ++ if (pnetdev->reg_state != NETREG_UNINITIALIZED) ++ unregister_netdev(pnetdev); /* will call netdev_close() */ ++ r8712_flush_rwctrl_works(padapter); ++ r8712_flush_led_works(padapter); ++ udelay(1); ++ /* Stop driver mlme relation timer */ ++ r8712_stop_drv_timers(padapter); ++ r871x_dev_unload(padapter); ++ r8712_free_drv_sw(padapter); ++ free_netdev(pnetdev); ++ ++ /* decrease the reference count of the usb device structure ++ * when disconnect ++ */ ++ usb_put_dev(udev); + +- if (pnetdev) { +- struct _adapter *padapter = netdev_priv(pnetdev); +- +- /* never exit with a firmware callback pending */ +- wait_for_completion(&padapter->rtl8712_fw_ready); +- pnetdev = usb_get_intfdata(pusb_intf); +- usb_set_intfdata(pusb_intf, NULL); +- if (!pnetdev) +- goto 
firmware_load_fail; +- release_firmware(padapter->fw); +- if (drvpriv.drv_registered) +- padapter->surprise_removed = true; +- if (pnetdev->reg_state != NETREG_UNINITIALIZED) +- unregister_netdev(pnetdev); /* will call netdev_close() */ +- flush_scheduled_work(); +- udelay(1); +- /* Stop driver mlme relation timer */ +- r8712_stop_drv_timers(padapter); +- r871x_dev_unload(padapter); +- r8712_free_drv_sw(padapter); +- free_netdev(pnetdev); +- +- /* decrease the reference count of the usb device structure +- * when disconnect +- */ +- usb_put_dev(udev); +- } +-firmware_load_fail: + /* If we didn't unplug usb dongle and remove/insert module, driver + * fails on sitesurvey for the first time when device is up. + * Reset usb port for sitesurvey fail issue. +diff --git a/drivers/staging/rtl8723bs/hal/sdio_ops.c b/drivers/staging/rtl8723bs/hal/sdio_ops.c +index a31694525bc1e..eaf7d689b0c55 100644 +--- a/drivers/staging/rtl8723bs/hal/sdio_ops.c ++++ b/drivers/staging/rtl8723bs/hal/sdio_ops.c +@@ -921,6 +921,8 @@ void sd_int_dpc(struct adapter *adapter) + } else { + rtw_c2h_wk_cmd(adapter, (u8 *)c2h_evt); + } ++ } else { ++ kfree(c2h_evt); + } + } else { + /* Error handling for malloc fail */ +diff --git a/drivers/tee/optee/call.c b/drivers/tee/optee/call.c +index 6e6eb836e9b62..945f03da02237 100644 +--- a/drivers/tee/optee/call.c ++++ b/drivers/tee/optee/call.c +@@ -184,7 +184,7 @@ static struct tee_shm *get_msg_arg(struct tee_context *ctx, size_t num_params, + struct optee_msg_arg *ma; + + shm = tee_shm_alloc(ctx, OPTEE_MSG_GET_ARG_SIZE(num_params), +- TEE_SHM_MAPPED); ++ TEE_SHM_MAPPED | TEE_SHM_PRIV); + if (IS_ERR(shm)) + return shm; + +@@ -416,11 +416,13 @@ void optee_enable_shm_cache(struct optee *optee) + } + + /** +- * optee_disable_shm_cache() - Disables caching of some shared memory allocation +- * in OP-TEE ++ * __optee_disable_shm_cache() - Disables caching of some shared memory ++ * allocation in OP-TEE + * @optee: main service struct ++ * @is_mapped: true if 
the cached shared memory addresses were mapped by this ++ * kernel, are safe to dereference, and should be freed + */ +-void optee_disable_shm_cache(struct optee *optee) ++static void __optee_disable_shm_cache(struct optee *optee, bool is_mapped) + { + struct optee_call_waiter w; + +@@ -439,6 +441,13 @@ void optee_disable_shm_cache(struct optee *optee) + if (res.result.status == OPTEE_SMC_RETURN_OK) { + struct tee_shm *shm; + ++ /* ++ * Shared memory references that were not mapped by ++ * this kernel must be ignored to prevent a crash. ++ */ ++ if (!is_mapped) ++ continue; ++ + shm = reg_pair_to_ptr(res.result.shm_upper32, + res.result.shm_lower32); + tee_shm_free(shm); +@@ -449,6 +458,27 @@ void optee_disable_shm_cache(struct optee *optee) + optee_cq_wait_final(&optee->call_queue, &w); + } + ++/** ++ * optee_disable_shm_cache() - Disables caching of mapped shared memory ++ * allocations in OP-TEE ++ * @optee: main service struct ++ */ ++void optee_disable_shm_cache(struct optee *optee) ++{ ++ return __optee_disable_shm_cache(optee, true); ++} ++ ++/** ++ * optee_disable_unmapped_shm_cache() - Disables caching of shared memory ++ * allocations in OP-TEE which are not ++ * currently mapped ++ * @optee: main service struct ++ */ ++void optee_disable_unmapped_shm_cache(struct optee *optee) ++{ ++ return __optee_disable_shm_cache(optee, false); ++} ++ + #define PAGELIST_ENTRIES_PER_PAGE \ + ((OPTEE_MSG_NONCONTIG_PAGE_SIZE / sizeof(u64)) - 1) + +diff --git a/drivers/tee/optee/core.c b/drivers/tee/optee/core.c +index ddb8f9ecf3078..5ce13b099d7dc 100644 +--- a/drivers/tee/optee/core.c ++++ b/drivers/tee/optee/core.c +@@ -6,6 +6,7 @@ + #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + + #include ++#include + #include + #include + #include +@@ -277,7 +278,8 @@ static void optee_release(struct tee_context *ctx) + if (!ctxdata) + return; + +- shm = tee_shm_alloc(ctx, sizeof(struct optee_msg_arg), TEE_SHM_MAPPED); ++ shm = tee_shm_alloc(ctx, sizeof(struct optee_msg_arg), ++ 
TEE_SHM_MAPPED | TEE_SHM_PRIV); + if (!IS_ERR(shm)) { + arg = tee_shm_get_va(shm, 0); + /* +@@ -572,6 +574,13 @@ static optee_invoke_fn *get_invoke_func(struct device *dev) + return ERR_PTR(-EINVAL); + } + ++/* optee_remove - Device Removal Routine ++ * @pdev: platform device information struct ++ * ++ * optee_remove is called by platform subsystem to alert the driver ++ * that it should release the device ++ */ ++ + static int optee_remove(struct platform_device *pdev) + { + struct optee *optee = platform_get_drvdata(pdev); +@@ -602,6 +611,18 @@ static int optee_remove(struct platform_device *pdev) + return 0; + } + ++/* optee_shutdown - Device Removal Routine ++ * @pdev: platform device information struct ++ * ++ * platform_shutdown is called by the platform subsystem to alert ++ * the driver that a shutdown, reboot, or kexec is happening and ++ * device must be disabled. ++ */ ++static void optee_shutdown(struct platform_device *pdev) ++{ ++ optee_disable_shm_cache(platform_get_drvdata(pdev)); ++} ++ + static int optee_probe(struct platform_device *pdev) + { + optee_invoke_fn *invoke_fn; +@@ -612,6 +633,16 @@ static int optee_probe(struct platform_device *pdev) + u32 sec_caps; + int rc; + ++ /* ++ * The kernel may have crashed at the same time that all available ++ * secure world threads were suspended and we cannot reschedule the ++ * suspended threads without access to the crashed kernel's wait_queue. ++ * Therefore, we cannot reliably initialize the OP-TEE driver in the ++ * kdump kernel. ++ */ ++ if (is_kdump_kernel()) ++ return -ENODEV; ++ + invoke_fn = get_invoke_func(&pdev->dev); + if (IS_ERR(invoke_fn)) + return PTR_ERR(invoke_fn); +@@ -686,6 +717,15 @@ static int optee_probe(struct platform_device *pdev) + optee->memremaped_shm = memremaped_shm; + optee->pool = pool; + ++ /* ++ * Ensure that there are no pre-existing shm objects before enabling ++ * the shm cache so that there's no chance of receiving an invalid ++ * address during shutdown. 
This could occur, for example, if we're ++ * kexec booting from an older kernel that did not properly cleanup the ++ * shm cache. ++ */ ++ optee_disable_unmapped_shm_cache(optee); ++ + optee_enable_shm_cache(optee); + + if (optee->sec_caps & OPTEE_SMC_SEC_CAP_DYNAMIC_SHM) +@@ -728,6 +768,7 @@ MODULE_DEVICE_TABLE(of, optee_dt_match); + static struct platform_driver optee_driver = { + .probe = optee_probe, + .remove = optee_remove, ++ .shutdown = optee_shutdown, + .driver = { + .name = "optee", + .of_match_table = optee_dt_match, +diff --git a/drivers/tee/optee/optee_private.h b/drivers/tee/optee/optee_private.h +index e25b216a14ef8..dbdd367be1568 100644 +--- a/drivers/tee/optee/optee_private.h ++++ b/drivers/tee/optee/optee_private.h +@@ -159,6 +159,7 @@ int optee_cancel_req(struct tee_context *ctx, u32 cancel_id, u32 session); + + void optee_enable_shm_cache(struct optee *optee); + void optee_disable_shm_cache(struct optee *optee); ++void optee_disable_unmapped_shm_cache(struct optee *optee); + + int optee_shm_register(struct tee_context *ctx, struct tee_shm *shm, + struct page **pages, size_t num_pages, +diff --git a/drivers/tee/optee/rpc.c b/drivers/tee/optee/rpc.c +index 1849180b0278b..efbaff7ad7e59 100644 +--- a/drivers/tee/optee/rpc.c ++++ b/drivers/tee/optee/rpc.c +@@ -314,7 +314,7 @@ static void handle_rpc_func_cmd_shm_alloc(struct tee_context *ctx, + shm = cmd_alloc_suppl(ctx, sz); + break; + case OPTEE_RPC_SHM_TYPE_KERNEL: +- shm = tee_shm_alloc(ctx, sz, TEE_SHM_MAPPED); ++ shm = tee_shm_alloc(ctx, sz, TEE_SHM_MAPPED | TEE_SHM_PRIV); + break; + default: + arg->ret = TEEC_ERROR_BAD_PARAMETERS; +@@ -502,7 +502,8 @@ void optee_handle_rpc(struct tee_context *ctx, struct optee_rpc_param *param, + + switch (OPTEE_SMC_RETURN_GET_RPC_FUNC(param->a0)) { + case OPTEE_SMC_RPC_FUNC_ALLOC: +- shm = tee_shm_alloc(ctx, param->a1, TEE_SHM_MAPPED); ++ shm = tee_shm_alloc(ctx, param->a1, ++ TEE_SHM_MAPPED | TEE_SHM_PRIV); + if (!IS_ERR(shm) && !tee_shm_get_pa(shm, 0, &pa)) 
{ + reg_pair_from_64(¶m->a1, ¶m->a2, pa); + reg_pair_from_64(¶m->a4, ¶m->a5, +diff --git a/drivers/tee/optee/shm_pool.c b/drivers/tee/optee/shm_pool.c +index d767eebf30bdd..c41a9a501a6e9 100644 +--- a/drivers/tee/optee/shm_pool.c ++++ b/drivers/tee/optee/shm_pool.c +@@ -27,13 +27,19 @@ static int pool_op_alloc(struct tee_shm_pool_mgr *poolm, + shm->paddr = page_to_phys(page); + shm->size = PAGE_SIZE << order; + +- if (shm->flags & TEE_SHM_DMA_BUF) { ++ /* ++ * Shared memory private to the OP-TEE driver doesn't need ++ * to be registered with OP-TEE. ++ */ ++ if (!(shm->flags & TEE_SHM_PRIV)) { + unsigned int nr_pages = 1 << order, i; + struct page **pages; + + pages = kcalloc(nr_pages, sizeof(pages), GFP_KERNEL); +- if (!pages) +- return -ENOMEM; ++ if (!pages) { ++ rc = -ENOMEM; ++ goto err; ++ } + + for (i = 0; i < nr_pages; i++) { + pages[i] = page; +@@ -44,15 +50,21 @@ static int pool_op_alloc(struct tee_shm_pool_mgr *poolm, + rc = optee_shm_register(shm->ctx, shm, pages, nr_pages, + (unsigned long)shm->kaddr); + kfree(pages); ++ if (rc) ++ goto err; + } + ++ return 0; ++ ++err: ++ __free_pages(page, order); + return rc; + } + + static void pool_op_free(struct tee_shm_pool_mgr *poolm, + struct tee_shm *shm) + { +- if (shm->flags & TEE_SHM_DMA_BUF) ++ if (!(shm->flags & TEE_SHM_PRIV)) + optee_shm_unregister(shm->ctx, shm); + + free_pages((unsigned long)shm->kaddr, get_order(shm->size)); +diff --git a/drivers/tee/tee_shm.c b/drivers/tee/tee_shm.c +index 00472f5ce22e4..8a9384a64f3e2 100644 +--- a/drivers/tee/tee_shm.c ++++ b/drivers/tee/tee_shm.c +@@ -117,7 +117,7 @@ struct tee_shm *tee_shm_alloc(struct tee_context *ctx, size_t size, u32 flags) + return ERR_PTR(-EINVAL); + } + +- if ((flags & ~(TEE_SHM_MAPPED | TEE_SHM_DMA_BUF))) { ++ if ((flags & ~(TEE_SHM_MAPPED | TEE_SHM_DMA_BUF | TEE_SHM_PRIV))) { + dev_err(teedev->dev.parent, "invalid shm flags 0x%x", flags); + return ERR_PTR(-EINVAL); + } +@@ -193,6 +193,24 @@ err_dev_put: + } + 
EXPORT_SYMBOL_GPL(tee_shm_alloc); + ++/** ++ * tee_shm_alloc_kernel_buf() - Allocate shared memory for kernel buffer ++ * @ctx: Context that allocates the shared memory ++ * @size: Requested size of shared memory ++ * ++ * The returned memory registered in secure world and is suitable to be ++ * passed as a memory buffer in parameter argument to ++ * tee_client_invoke_func(). The memory allocated is later freed with a ++ * call to tee_shm_free(). ++ * ++ * @returns a pointer to 'struct tee_shm' ++ */ ++struct tee_shm *tee_shm_alloc_kernel_buf(struct tee_context *ctx, size_t size) ++{ ++ return tee_shm_alloc(ctx, size, TEE_SHM_MAPPED); ++} ++EXPORT_SYMBOL_GPL(tee_shm_alloc_kernel_buf); ++ + struct tee_shm *tee_shm_register(struct tee_context *ctx, unsigned long addr, + size_t length, u32 flags) + { +diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c +index e73cd296db7ef..a82032c081e83 100644 +--- a/drivers/thunderbolt/switch.c ++++ b/drivers/thunderbolt/switch.c +@@ -1740,18 +1740,6 @@ static struct attribute *switch_attrs[] = { + NULL, + }; + +-static bool has_port(const struct tb_switch *sw, enum tb_port_type type) +-{ +- const struct tb_port *port; +- +- tb_switch_for_each_port(sw, port) { +- if (!port->disabled && port->config.type == type) +- return true; +- } +- +- return false; +-} +- + static umode_t switch_attr_is_visible(struct kobject *kobj, + struct attribute *attr, int n) + { +@@ -1760,8 +1748,7 @@ static umode_t switch_attr_is_visible(struct kobject *kobj, + + if (attr == &dev_attr_authorized.attr) { + if (sw->tb->security_level == TB_SECURITY_NOPCIE || +- sw->tb->security_level == TB_SECURITY_DPONLY || +- !has_port(sw, TB_TYPE_PCIE_UP)) ++ sw->tb->security_level == TB_SECURITY_DPONLY) + return 0; + } else if (attr == &dev_attr_device.attr) { + if (!sw->device) +diff --git a/drivers/tty/serial/8250/8250_aspeed_vuart.c b/drivers/tty/serial/8250/8250_aspeed_vuart.c +index d035d08cb9871..60dfd1aa4ad22 100644 +--- 
a/drivers/tty/serial/8250/8250_aspeed_vuart.c ++++ b/drivers/tty/serial/8250/8250_aspeed_vuart.c +@@ -320,6 +320,7 @@ static int aspeed_vuart_handle_irq(struct uart_port *port) + { + struct uart_8250_port *up = up_to_u8250p(port); + unsigned int iir, lsr; ++ unsigned long flags; + int space, count; + + iir = serial_port_in(port, UART_IIR); +@@ -327,7 +328,7 @@ static int aspeed_vuart_handle_irq(struct uart_port *port) + if (iir & UART_IIR_NO_INT) + return 0; + +- spin_lock(&port->lock); ++ spin_lock_irqsave(&port->lock, flags); + + lsr = serial_port_in(port, UART_LSR); + +@@ -363,7 +364,7 @@ static int aspeed_vuart_handle_irq(struct uart_port *port) + if (lsr & UART_LSR_THRE) + serial8250_tx_chars(up); + +- uart_unlock_and_check_sysrq(port); ++ uart_unlock_and_check_sysrq_irqrestore(port, flags); + + return 1; + } +diff --git a/drivers/tty/serial/8250/8250_fsl.c b/drivers/tty/serial/8250/8250_fsl.c +index 4e75d2e4f87cb..fc65a2293ce9e 100644 +--- a/drivers/tty/serial/8250/8250_fsl.c ++++ b/drivers/tty/serial/8250/8250_fsl.c +@@ -30,10 +30,11 @@ struct fsl8250_data { + int fsl8250_handle_irq(struct uart_port *port) + { + unsigned char lsr, orig_lsr; ++ unsigned long flags; + unsigned int iir; + struct uart_8250_port *up = up_to_u8250p(port); + +- spin_lock(&up->port.lock); ++ spin_lock_irqsave(&up->port.lock, flags); + + iir = port->serial_in(port, UART_IIR); + if (iir & UART_IIR_NO_INT) { +@@ -82,7 +83,7 @@ int fsl8250_handle_irq(struct uart_port *port) + + up->lsr_saved_flags = orig_lsr; + +- uart_unlock_and_check_sysrq(&up->port); ++ uart_unlock_and_check_sysrq_irqrestore(&up->port, flags); + + return 1; + } +diff --git a/drivers/tty/serial/8250/8250_mtk.c b/drivers/tty/serial/8250/8250_mtk.c +index f7d3023f860f0..fb65dc601b237 100644 +--- a/drivers/tty/serial/8250/8250_mtk.c ++++ b/drivers/tty/serial/8250/8250_mtk.c +@@ -93,10 +93,13 @@ static void mtk8250_dma_rx_complete(void *param) + struct dma_tx_state state; + int copied, total, cnt; + unsigned char *ptr; ++ 
unsigned long flags; + + if (data->rx_status == DMA_RX_SHUTDOWN) + return; + ++ spin_lock_irqsave(&up->port.lock, flags); ++ + dmaengine_tx_status(dma->rxchan, dma->rx_cookie, &state); + total = dma->rx_size - state.residue; + cnt = total; +@@ -120,6 +123,8 @@ static void mtk8250_dma_rx_complete(void *param) + tty_flip_buffer_push(tty_port); + + mtk8250_rx_dma(up); ++ ++ spin_unlock_irqrestore(&up->port.lock, flags); + } + + static void mtk8250_rx_dma(struct uart_8250_port *up) +diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c +index 780cc99732b62..1934940b96170 100644 +--- a/drivers/tty/serial/8250/8250_pci.c ++++ b/drivers/tty/serial/8250/8250_pci.c +@@ -3804,6 +3804,12 @@ static const struct pci_device_id blacklist[] = { + { PCI_VDEVICE(INTEL, 0x0f0c), }, + { PCI_VDEVICE(INTEL, 0x228a), }, + { PCI_VDEVICE(INTEL, 0x228c), }, ++ { PCI_VDEVICE(INTEL, 0x4b96), }, ++ { PCI_VDEVICE(INTEL, 0x4b97), }, ++ { PCI_VDEVICE(INTEL, 0x4b98), }, ++ { PCI_VDEVICE(INTEL, 0x4b99), }, ++ { PCI_VDEVICE(INTEL, 0x4b9a), }, ++ { PCI_VDEVICE(INTEL, 0x4b9b), }, + { PCI_VDEVICE(INTEL, 0x9ce3), }, + { PCI_VDEVICE(INTEL, 0x9ce4), }, + +@@ -3964,6 +3970,7 @@ pciserial_init_ports(struct pci_dev *dev, const struct pciserial_board *board) + if (pci_match_id(pci_use_msi, dev)) { + dev_dbg(&dev->dev, "Using MSI(-X) interrupts\n"); + pci_set_master(dev); ++ uart.port.flags &= ~UPF_SHARE_IRQ; + rc = pci_alloc_irq_vectors(dev, 1, 1, PCI_IRQ_ALL_TYPES); + } else { + dev_dbg(&dev->dev, "Using legacy interrupts\n"); +diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c +index ff3f13693def7..9422284bb3f33 100644 +--- a/drivers/tty/serial/8250/8250_port.c ++++ b/drivers/tty/serial/8250/8250_port.c +@@ -311,7 +311,11 @@ static const struct serial8250_config uart_config[] = { + /* Uart divisor latch read */ + static int default_serial_dl_read(struct uart_8250_port *up) + { +- return serial_in(up, UART_DLL) | serial_in(up, UART_DLM) << 8; 
++ /* Assign these in pieces to truncate any bits above 7. */ ++ unsigned char dll = serial_in(up, UART_DLL); ++ unsigned char dlm = serial_in(up, UART_DLM); ++ ++ return dll | dlm << 8; + } + + /* Uart divisor latch write */ +@@ -1297,9 +1301,11 @@ static void autoconfig(struct uart_8250_port *up) + serial_out(up, UART_LCR, 0); + + serial_out(up, UART_FCR, UART_FCR_ENABLE_FIFO); +- scratch = serial_in(up, UART_IIR) >> 6; + +- switch (scratch) { ++ /* Assign this as it is to truncate any bits above 7. */ ++ scratch = serial_in(up, UART_IIR); ++ ++ switch (scratch >> 6) { + case 0: + autoconfig_8250(up); + break; +@@ -1893,11 +1899,12 @@ int serial8250_handle_irq(struct uart_port *port, unsigned int iir) + unsigned char status; + struct uart_8250_port *up = up_to_u8250p(port); + bool skip_rx = false; ++ unsigned long flags; + + if (iir & UART_IIR_NO_INT) + return 0; + +- spin_lock(&port->lock); ++ spin_lock_irqsave(&port->lock, flags); + + status = serial_port_in(port, UART_LSR); + +@@ -1923,7 +1930,7 @@ int serial8250_handle_irq(struct uart_port *port, unsigned int iir) + (up->ier & UART_IER_THRI)) + serial8250_tx_chars(up); + +- uart_unlock_and_check_sysrq(port); ++ uart_unlock_and_check_sysrq_irqrestore(port, flags); + + return 1; + } +diff --git a/drivers/tty/serial/serial-tegra.c b/drivers/tty/serial/serial-tegra.c +index 222032792d6c2..eba5b9ecba348 100644 +--- a/drivers/tty/serial/serial-tegra.c ++++ b/drivers/tty/serial/serial-tegra.c +@@ -1045,9 +1045,11 @@ static int tegra_uart_hw_init(struct tegra_uart_port *tup) + + if (tup->cdata->fifo_mode_enable_status) { + ret = tegra_uart_wait_fifo_mode_enabled(tup); +- dev_err(tup->uport.dev, "FIFO mode not enabled\n"); +- if (ret < 0) ++ if (ret < 0) { ++ dev_err(tup->uport.dev, ++ "Failed to enable FIFO mode: %d\n", ret); + return ret; ++ } + } else { + /* + * For all tegra devices (up to t210), there is a hardware +diff --git a/drivers/usb/cdns3/cdns3-ep0.c b/drivers/usb/cdns3/cdns3-ep0.c +index 
9a17802275d51..ec5bfd8944c36 100644 +--- a/drivers/usb/cdns3/cdns3-ep0.c ++++ b/drivers/usb/cdns3/cdns3-ep0.c +@@ -731,6 +731,7 @@ static int cdns3_gadget_ep0_queue(struct usb_ep *ep, + request->actual = 0; + priv_dev->status_completion_no_call = true; + priv_dev->pending_status_request = request; ++ usb_gadget_set_state(&priv_dev->gadget, USB_STATE_CONFIGURED); + spin_unlock_irqrestore(&priv_dev->lock, flags); + + /* +diff --git a/drivers/usb/cdns3/cdnsp-gadget.c b/drivers/usb/cdns3/cdnsp-gadget.c +index c083985e387b2..cf03ba79a3553 100644 +--- a/drivers/usb/cdns3/cdnsp-gadget.c ++++ b/drivers/usb/cdns3/cdnsp-gadget.c +@@ -1881,7 +1881,7 @@ static int __cdnsp_gadget_init(struct cdns *cdns) + pdev->gadget.name = "cdnsp-gadget"; + pdev->gadget.speed = USB_SPEED_UNKNOWN; + pdev->gadget.sg_supported = 1; +- pdev->gadget.max_speed = USB_SPEED_SUPER_PLUS; ++ pdev->gadget.max_speed = max_speed; + pdev->gadget.lpm_capable = 1; + + pdev->setup_buf = kzalloc(CDNSP_EP0_SETUP_SIZE, GFP_KERNEL); +diff --git a/drivers/usb/cdns3/cdnsp-gadget.h b/drivers/usb/cdns3/cdnsp-gadget.h +index 783ca8ffde007..f740fa6089d85 100644 +--- a/drivers/usb/cdns3/cdnsp-gadget.h ++++ b/drivers/usb/cdns3/cdnsp-gadget.h +@@ -383,8 +383,8 @@ struct cdnsp_intr_reg { + #define IMAN_IE BIT(1) + #define IMAN_IP BIT(0) + /* bits 2:31 need to be preserved */ +-#define IMAN_IE_SET(p) (((p) & IMAN_IE) | 0x2) +-#define IMAN_IE_CLEAR(p) (((p) & IMAN_IE) & ~(0x2)) ++#define IMAN_IE_SET(p) ((p) | IMAN_IE) ++#define IMAN_IE_CLEAR(p) ((p) & ~IMAN_IE) + + /* IMOD - Interrupter Moderation Register - irq_control bitmasks. 
*/ + /* +diff --git a/drivers/usb/cdns3/cdnsp-ring.c b/drivers/usb/cdns3/cdnsp-ring.c +index 68972746e3636..1b1438457fb04 100644 +--- a/drivers/usb/cdns3/cdnsp-ring.c ++++ b/drivers/usb/cdns3/cdnsp-ring.c +@@ -1932,15 +1932,13 @@ int cdnsp_queue_bulk_tx(struct cdnsp_device *pdev, struct cdnsp_request *preq) + } + + if (enqd_len + trb_buff_len >= full_len) { +- if (need_zero_pkt && zero_len_trb) { +- zero_len_trb = true; +- } else { +- field &= ~TRB_CHAIN; +- field |= TRB_IOC; +- more_trbs_coming = false; +- need_zero_pkt = false; +- preq->td.last_trb = ring->enqueue; +- } ++ if (need_zero_pkt) ++ zero_len_trb = !zero_len_trb; ++ ++ field &= ~TRB_CHAIN; ++ field |= TRB_IOC; ++ more_trbs_coming = false; ++ preq->td.last_trb = ring->enqueue; + } + + /* Only set interrupt on short packet for OUT endpoints. */ +@@ -1955,7 +1953,7 @@ int cdnsp_queue_bulk_tx(struct cdnsp_device *pdev, struct cdnsp_request *preq) + length_field = TRB_LEN(trb_buff_len) | TRB_TD_SIZE(remainder) | + TRB_INTR_TARGET(0); + +- cdnsp_queue_trb(pdev, ring, more_trbs_coming | need_zero_pkt, ++ cdnsp_queue_trb(pdev, ring, more_trbs_coming | zero_len_trb, + lower_32_bits(send_addr), + upper_32_bits(send_addr), + length_field, +diff --git a/drivers/usb/class/usbtmc.c b/drivers/usb/class/usbtmc.c +index 74d5a9c5238af..73f419adce610 100644 +--- a/drivers/usb/class/usbtmc.c ++++ b/drivers/usb/class/usbtmc.c +@@ -2324,17 +2324,10 @@ static void usbtmc_interrupt(struct urb *urb) + dev_err(dev, "overflow with length %d, actual length is %d\n", + data->iin_wMaxPacketSize, urb->actual_length); + fallthrough; +- case -ECONNRESET: +- case -ENOENT: +- case -ESHUTDOWN: +- case -EILSEQ: +- case -ETIME: +- case -EPIPE: ++ default: + /* urb terminated, clean up */ + dev_dbg(dev, "urb terminated, status: %d\n", status); + return; +- default: +- dev_err(dev, "unknown status received: %d\n", status); + } + exit: + rv = usb_submit_urb(urb, GFP_ATOMIC); +diff --git a/drivers/usb/common/usb-otg-fsm.c 
b/drivers/usb/common/usb-otg-fsm.c +index 3740cf95560e9..0697fde51d00f 100644 +--- a/drivers/usb/common/usb-otg-fsm.c ++++ b/drivers/usb/common/usb-otg-fsm.c +@@ -193,7 +193,11 @@ static void otg_start_hnp_polling(struct otg_fsm *fsm) + if (!fsm->host_req_flag) + return; + +- INIT_DELAYED_WORK(&fsm->hnp_polling_work, otg_hnp_polling_work); ++ if (!fsm->hnp_work_inited) { ++ INIT_DELAYED_WORK(&fsm->hnp_polling_work, otg_hnp_polling_work); ++ fsm->hnp_work_inited = true; ++ } ++ + schedule_delayed_work(&fsm->hnp_polling_work, + msecs_to_jiffies(T_HOST_REQ_POLL)); + } +diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c +index f14c2aa837598..b9ff414a0e9d2 100644 +--- a/drivers/usb/dwc3/gadget.c ++++ b/drivers/usb/dwc3/gadget.c +@@ -1741,9 +1741,13 @@ static void dwc3_gadget_ep_cleanup_cancelled_requests(struct dwc3_ep *dep) + { + struct dwc3_request *req; + struct dwc3_request *tmp; ++ struct list_head local; + struct dwc3 *dwc = dep->dwc; + +- list_for_each_entry_safe(req, tmp, &dep->cancelled_list, list) { ++restart: ++ list_replace_init(&dep->cancelled_list, &local); ++ ++ list_for_each_entry_safe(req, tmp, &local, list) { + dwc3_gadget_ep_skip_trbs(dep, req); + switch (req->status) { + case DWC3_REQUEST_STATUS_DISCONNECTED: +@@ -1761,6 +1765,9 @@ static void dwc3_gadget_ep_cleanup_cancelled_requests(struct dwc3_ep *dep) + break; + } + } ++ ++ if (!list_empty(&dep->cancelled_list)) ++ goto restart; + } + + static int dwc3_gadget_ep_dequeue(struct usb_ep *ep, +@@ -2249,6 +2256,17 @@ static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on) + } + } + ++ /* ++ * Avoid issuing a runtime resume if the device is already in the ++ * suspended state during gadget disconnect. DWC3 gadget was already ++ * halted/stopped during runtime suspend. ++ */ ++ if (!is_on) { ++ pm_runtime_barrier(dwc->dev); ++ if (pm_runtime_suspended(dwc->dev)) ++ return 0; ++ } ++ + /* + * Check the return value for successful resume, or error. 
For a + * successful resume, the DWC3 runtime PM resume routine will handle +@@ -2945,8 +2963,12 @@ static void dwc3_gadget_ep_cleanup_completed_requests(struct dwc3_ep *dep, + { + struct dwc3_request *req; + struct dwc3_request *tmp; ++ struct list_head local; + +- list_for_each_entry_safe(req, tmp, &dep->started_list, list) { ++restart: ++ list_replace_init(&dep->started_list, &local); ++ ++ list_for_each_entry_safe(req, tmp, &local, list) { + int ret; + + ret = dwc3_gadget_ep_cleanup_completed_request(dep, event, +@@ -2954,6 +2976,9 @@ static void dwc3_gadget_ep_cleanup_completed_requests(struct dwc3_ep *dep, + if (ret) + break; + } ++ ++ if (!list_empty(&dep->started_list)) ++ goto restart; + } + + static bool dwc3_gadget_ep_should_continue(struct dwc3_ep *dep) +diff --git a/drivers/usb/gadget/function/f_hid.c b/drivers/usb/gadget/function/f_hid.c +index a82b3de1a54be..6742271cd6e6a 100644 +--- a/drivers/usb/gadget/function/f_hid.c ++++ b/drivers/usb/gadget/function/f_hid.c +@@ -41,6 +41,7 @@ struct f_hidg { + unsigned char bInterfaceSubClass; + unsigned char bInterfaceProtocol; + unsigned char protocol; ++ unsigned char idle; + unsigned short report_desc_length; + char *report_desc; + unsigned short report_length; +@@ -338,6 +339,11 @@ static ssize_t f_hidg_write(struct file *file, const char __user *buffer, + + spin_lock_irqsave(&hidg->write_spinlock, flags); + ++ if (!hidg->req) { ++ spin_unlock_irqrestore(&hidg->write_spinlock, flags); ++ return -ESHUTDOWN; ++ } ++ + #define WRITE_COND (!hidg->write_pending) + try_again: + /* write queue */ +@@ -358,8 +364,14 @@ try_again: + count = min_t(unsigned, count, hidg->report_length); + + spin_unlock_irqrestore(&hidg->write_spinlock, flags); +- status = copy_from_user(req->buf, buffer, count); + ++ if (!req) { ++ ERROR(hidg->func.config->cdev, "hidg->req is NULL\n"); ++ status = -ESHUTDOWN; ++ goto release_write_pending; ++ } ++ ++ status = copy_from_user(req->buf, buffer, count); + if (status != 0) { + 
ERROR(hidg->func.config->cdev, + "copy_from_user error\n"); +@@ -387,14 +399,17 @@ try_again: + + spin_unlock_irqrestore(&hidg->write_spinlock, flags); + ++ if (!hidg->in_ep->enabled) { ++ ERROR(hidg->func.config->cdev, "in_ep is disabled\n"); ++ status = -ESHUTDOWN; ++ goto release_write_pending; ++ } ++ + status = usb_ep_queue(hidg->in_ep, req, GFP_ATOMIC); +- if (status < 0) { +- ERROR(hidg->func.config->cdev, +- "usb_ep_queue error on int endpoint %zd\n", status); ++ if (status < 0) + goto release_write_pending; +- } else { ++ else + status = count; +- } + + return status; + release_write_pending: +@@ -523,6 +538,14 @@ static int hidg_setup(struct usb_function *f, + goto respond; + break; + ++ case ((USB_DIR_IN | USB_TYPE_CLASS | USB_RECIP_INTERFACE) << 8 ++ | HID_REQ_GET_IDLE): ++ VDBG(cdev, "get_idle\n"); ++ length = min_t(unsigned int, length, 1); ++ ((u8 *) req->buf)[0] = hidg->idle; ++ goto respond; ++ break; ++ + case ((USB_DIR_OUT | USB_TYPE_CLASS | USB_RECIP_INTERFACE) << 8 + | HID_REQ_SET_REPORT): + VDBG(cdev, "set_report | wLength=%d\n", ctrl->wLength); +@@ -546,6 +569,14 @@ static int hidg_setup(struct usb_function *f, + goto stall; + break; + ++ case ((USB_DIR_OUT | USB_TYPE_CLASS | USB_RECIP_INTERFACE) << 8 ++ | HID_REQ_SET_IDLE): ++ VDBG(cdev, "set_idle\n"); ++ length = 0; ++ hidg->idle = value >> 8; ++ goto respond; ++ break; ++ + case ((USB_DIR_IN | USB_TYPE_STANDARD | USB_RECIP_INTERFACE) << 8 + | USB_REQ_GET_DESCRIPTOR): + switch (value >> 8) { +@@ -773,6 +804,7 @@ static int hidg_bind(struct usb_configuration *c, struct usb_function *f) + hidg_interface_desc.bInterfaceSubClass = hidg->bInterfaceSubClass; + hidg_interface_desc.bInterfaceProtocol = hidg->bInterfaceProtocol; + hidg->protocol = HID_REPORT_PROTOCOL; ++ hidg->idle = 1; + hidg_ss_in_ep_desc.wMaxPacketSize = cpu_to_le16(hidg->report_length); + hidg_ss_in_comp_desc.wBytesPerInterval = + cpu_to_le16(hidg->report_length); +diff --git a/drivers/usb/gadget/udc/max3420_udc.c 
b/drivers/usb/gadget/udc/max3420_udc.c +index 35179543c3272..91c9e9057cff3 100644 +--- a/drivers/usb/gadget/udc/max3420_udc.c ++++ b/drivers/usb/gadget/udc/max3420_udc.c +@@ -1260,12 +1260,14 @@ static int max3420_probe(struct spi_device *spi) + err = devm_request_irq(&spi->dev, irq, max3420_irq_handler, 0, + "max3420", udc); + if (err < 0) +- return err; ++ goto del_gadget; + + udc->thread_task = kthread_create(max3420_thread, udc, + "max3420-thread"); +- if (IS_ERR(udc->thread_task)) +- return PTR_ERR(udc->thread_task); ++ if (IS_ERR(udc->thread_task)) { ++ err = PTR_ERR(udc->thread_task); ++ goto del_gadget; ++ } + + irq = of_irq_get_byname(spi->dev.of_node, "vbus"); + if (irq <= 0) { /* no vbus irq implies self-powered design */ +@@ -1285,10 +1287,14 @@ static int max3420_probe(struct spi_device *spi) + err = devm_request_irq(&spi->dev, irq, + max3420_vbus_handler, 0, "vbus", udc); + if (err < 0) +- return err; ++ goto del_gadget; + } + + return 0; ++ ++del_gadget: ++ usb_del_gadget_udc(&udc->gadget); ++ return err; + } + + static int max3420_remove(struct spi_device *spi) +diff --git a/drivers/usb/host/ohci-at91.c b/drivers/usb/host/ohci-at91.c +index 9bbd7ddd0003e..a24aea3d2759e 100644 +--- a/drivers/usb/host/ohci-at91.c ++++ b/drivers/usb/host/ohci-at91.c +@@ -611,8 +611,6 @@ ohci_hcd_at91_drv_suspend(struct device *dev) + if (ohci_at91->wakeup) + enable_irq_wake(hcd->irq); + +- ohci_at91_port_suspend(ohci_at91->sfr_regmap, 1); +- + ret = ohci_suspend(hcd, ohci_at91->wakeup); + if (ret) { + if (ohci_at91->wakeup) +@@ -632,7 +630,10 @@ ohci_hcd_at91_drv_suspend(struct device *dev) + /* flush the writes */ + (void) ohci_readl (ohci, &ohci->regs->control); + msleep(1); ++ ohci_at91_port_suspend(ohci_at91->sfr_regmap, 1); + at91_stop_clock(ohci_at91); ++ } else { ++ ohci_at91_port_suspend(ohci_at91->sfr_regmap, 1); + } + + return ret; +@@ -644,6 +645,8 @@ ohci_hcd_at91_drv_resume(struct device *dev) + struct usb_hcd *hcd = dev_get_drvdata(dev); + struct 
ohci_at91_priv *ohci_at91 = hcd_to_ohci_at91_priv(hcd); + ++ ohci_at91_port_suspend(ohci_at91->sfr_regmap, 0); ++ + if (ohci_at91->wakeup) + disable_irq_wake(hcd->irq); + else +@@ -651,8 +654,6 @@ ohci_hcd_at91_drv_resume(struct device *dev) + + ohci_resume(hcd, false); + +- ohci_at91_port_suspend(ohci_at91->sfr_regmap, 0); +- + return 0; + } + +diff --git a/drivers/usb/serial/ch341.c b/drivers/usb/serial/ch341.c +index 2db917eab7995..8a521b5ea769e 100644 +--- a/drivers/usb/serial/ch341.c ++++ b/drivers/usb/serial/ch341.c +@@ -851,6 +851,7 @@ static struct usb_serial_driver ch341_device = { + .owner = THIS_MODULE, + .name = "ch341-uart", + }, ++ .bulk_in_size = 512, + .id_table = id_table, + .num_ports = 1, + .open = ch341_open, +diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c +index 4a1f3a95d0177..33bbb3470ca3b 100644 +--- a/drivers/usb/serial/ftdi_sio.c ++++ b/drivers/usb/serial/ftdi_sio.c +@@ -219,6 +219,7 @@ static const struct usb_device_id id_table_combined[] = { + { USB_DEVICE(FTDI_VID, FTDI_MTXORB_6_PID) }, + { USB_DEVICE(FTDI_VID, FTDI_R2000KU_TRUE_RNG) }, + { USB_DEVICE(FTDI_VID, FTDI_VARDAAN_PID) }, ++ { USB_DEVICE(FTDI_VID, FTDI_AUTO_M3_OP_COM_V2_PID) }, + { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_0100_PID) }, + { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_0101_PID) }, + { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_0102_PID) }, +diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h +index add602bebd820..755858ca20bac 100644 +--- a/drivers/usb/serial/ftdi_sio_ids.h ++++ b/drivers/usb/serial/ftdi_sio_ids.h +@@ -159,6 +159,9 @@ + /* Vardaan Enterprises Serial Interface VEUSB422R3 */ + #define FTDI_VARDAAN_PID 0xF070 + ++/* Auto-M3 Ltd. - OP-COM USB V2 - OBD interface Adapter */ ++#define FTDI_AUTO_M3_OP_COM_V2_PID 0x4f50 ++ + /* + * Xsens Technologies BV products (http://www.xsens.com). 
+ */ +diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c +index 0fbe253dc570b..039450069ca45 100644 +--- a/drivers/usb/serial/option.c ++++ b/drivers/usb/serial/option.c +@@ -1203,6 +1203,8 @@ static const struct usb_device_id option_ids[] = { + .driver_info = NCTRL(2) | RSVD(3) }, + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1055, 0xff), /* Telit FN980 (PCIe) */ + .driver_info = NCTRL(0) | RSVD(1) }, ++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1056, 0xff), /* Telit FD980 */ ++ .driver_info = NCTRL(2) | RSVD(3) }, + { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910), + .driver_info = NCTRL(0) | RSVD(1) | RSVD(3) }, + { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM), +diff --git a/drivers/usb/serial/pl2303.c b/drivers/usb/serial/pl2303.c +index 940050c314822..2ce9cbf49e974 100644 +--- a/drivers/usb/serial/pl2303.c ++++ b/drivers/usb/serial/pl2303.c +@@ -418,24 +418,34 @@ static int pl2303_detect_type(struct usb_serial *serial) + bcdDevice = le16_to_cpu(desc->bcdDevice); + bcdUSB = le16_to_cpu(desc->bcdUSB); + +- switch (bcdDevice) { +- case 0x100: +- /* +- * Assume it's an HXN-type if the device doesn't support the old read +- * request value. +- */ +- if (bcdUSB == 0x200 && !pl2303_supports_hx_status(serial)) +- return TYPE_HXN; ++ switch (bcdUSB) { ++ case 0x110: ++ switch (bcdDevice) { ++ case 0x300: ++ return TYPE_HX; ++ case 0x400: ++ return TYPE_HXD; ++ default: ++ return TYPE_HX; ++ } + break; +- case 0x300: +- if (bcdUSB == 0x200) ++ case 0x200: ++ switch (bcdDevice) { ++ case 0x100: ++ case 0x305: ++ /* ++ * Assume it's an HXN-type if the device doesn't ++ * support the old read request value. 
++ */ ++ if (!pl2303_supports_hx_status(serial)) ++ return TYPE_HXN; ++ break; ++ case 0x300: + return TYPE_TA; +- +- return TYPE_HX; +- case 0x400: +- return TYPE_HXD; +- case 0x500: +- return TYPE_TB; ++ case 0x500: ++ return TYPE_TB; ++ } ++ break; + } + + dev_err(&serial->interface->dev, +diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c +index 1b7f18d35df45..426e37a1e78c5 100644 +--- a/drivers/usb/typec/tcpm/tcpm.c ++++ b/drivers/usb/typec/tcpm/tcpm.c +@@ -5355,7 +5355,7 @@ EXPORT_SYMBOL_GPL(tcpm_pd_hard_reset); + void tcpm_sink_frs(struct tcpm_port *port) + { + spin_lock(&port->pd_event_lock); +- port->pd_events = TCPM_FRS_EVENT; ++ port->pd_events |= TCPM_FRS_EVENT; + spin_unlock(&port->pd_event_lock); + kthread_queue_work(port->wq, &port->event_work); + } +@@ -5364,7 +5364,7 @@ EXPORT_SYMBOL_GPL(tcpm_sink_frs); + void tcpm_sourcing_vbus(struct tcpm_port *port) + { + spin_lock(&port->pd_event_lock); +- port->pd_events = TCPM_SOURCING_VBUS; ++ port->pd_events |= TCPM_SOURCING_VBUS; + spin_unlock(&port->pd_event_lock); + kthread_queue_work(port->wq, &port->event_work); + } +diff --git a/drivers/virt/acrn/vm.c b/drivers/virt/acrn/vm.c +index 0d002a355a936..fbc9f1042000c 100644 +--- a/drivers/virt/acrn/vm.c ++++ b/drivers/virt/acrn/vm.c +@@ -64,6 +64,14 @@ int acrn_vm_destroy(struct acrn_vm *vm) + test_and_set_bit(ACRN_VM_FLAG_DESTROYED, &vm->flags)) + return 0; + ++ ret = hcall_destroy_vm(vm->vmid); ++ if (ret < 0) { ++ dev_err(acrn_dev.this_device, ++ "Failed to destroy VM %u\n", vm->vmid); ++ clear_bit(ACRN_VM_FLAG_DESTROYED, &vm->flags); ++ return ret; ++ } ++ + /* Remove from global VM list */ + write_lock_bh(&acrn_vm_list_lock); + list_del_init(&vm->list); +@@ -78,14 +86,6 @@ int acrn_vm_destroy(struct acrn_vm *vm) + vm->monitor_page = NULL; + } + +- ret = hcall_destroy_vm(vm->vmid); +- if (ret < 0) { +- dev_err(acrn_dev.this_device, +- "Failed to destroy VM %u\n", vm->vmid); +- clear_bit(ACRN_VM_FLAG_DESTROYED, &vm->flags); +- 
return ret; +- } +- + acrn_vm_all_ram_unmap(vm); + + dev_dbg(acrn_dev.this_device, "VM %u destroyed.\n", vm->vmid); +diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c +index 398c941e38974..f77156187a0ae 100644 +--- a/fs/cifs/smb2ops.c ++++ b/fs/cifs/smb2ops.c +@@ -3613,7 +3613,8 @@ static int smb3_simple_fallocate_write_range(unsigned int xid, + char *buf) + { + struct cifs_io_parms io_parms = {0}; +- int rc, nbytes; ++ int nbytes; ++ int rc = 0; + struct kvec iov[2]; + + io_parms.netfid = cfile->fid.netfid; +diff --git a/fs/ext4/mmp.c b/fs/ext4/mmp.c +index bc364c119af6a..cebea4270817e 100644 +--- a/fs/ext4/mmp.c ++++ b/fs/ext4/mmp.c +@@ -138,7 +138,7 @@ static int kmmpd(void *data) + unsigned mmp_check_interval; + unsigned long last_update_time; + unsigned long diff; +- int retval; ++ int retval = 0; + + mmp_block = le64_to_cpu(es->s_mmp_block); + mmp = (struct mmp_struct *)(bh->b_data); +diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c +index a4af26d4459a3..18332550b4464 100644 +--- a/fs/ext4/namei.c ++++ b/fs/ext4/namei.c +@@ -2517,7 +2517,7 @@ again: + goto journal_error; + err = ext4_handle_dirty_dx_node(handle, dir, + frame->bh); +- if (err) ++ if (restart || err) + goto journal_error; + } else { + struct dx_root *dxroot; +diff --git a/fs/io-wq.c b/fs/io-wq.c +index 9efecdf025b9c..77026d42cb799 100644 +--- a/fs/io-wq.c ++++ b/fs/io-wq.c +@@ -131,6 +131,7 @@ struct io_cb_cancel_data { + }; + + static void create_io_worker(struct io_wq *wq, struct io_wqe *wqe, int index); ++static void io_wqe_dec_running(struct io_worker *worker); + + static bool io_worker_get(struct io_worker *worker) + { +@@ -169,26 +170,21 @@ static void io_worker_exit(struct io_worker *worker) + { + struct io_wqe *wqe = worker->wqe; + struct io_wqe_acct *acct = io_wqe_get_acct(worker); +- unsigned flags; + + if (refcount_dec_and_test(&worker->ref)) + complete(&worker->ref_done); + wait_for_completion(&worker->ref_done); + +- preempt_disable(); +- current->flags &= ~PF_IO_WORKER; +- flags = 
worker->flags; +- worker->flags = 0; +- if (flags & IO_WORKER_F_RUNNING) +- atomic_dec(&acct->nr_running); +- worker->flags = 0; +- preempt_enable(); +- + raw_spin_lock_irq(&wqe->lock); +- if (flags & IO_WORKER_F_FREE) ++ if (worker->flags & IO_WORKER_F_FREE) + hlist_nulls_del_rcu(&worker->nulls_node); + list_del_rcu(&worker->all_list); + acct->nr_workers--; ++ preempt_disable(); ++ io_wqe_dec_running(worker); ++ worker->flags = 0; ++ current->flags &= ~PF_IO_WORKER; ++ preempt_enable(); + raw_spin_unlock_irq(&wqe->lock); + + kfree_rcu(worker, rcu); +@@ -215,15 +211,19 @@ static bool io_wqe_activate_free_worker(struct io_wqe *wqe) + struct hlist_nulls_node *n; + struct io_worker *worker; + +- n = rcu_dereference(hlist_nulls_first_rcu(&wqe->free_list)); +- if (is_a_nulls(n)) +- return false; +- +- worker = hlist_nulls_entry(n, struct io_worker, nulls_node); +- if (io_worker_get(worker)) { +- wake_up_process(worker->task); ++ /* ++ * Iterate free_list and see if we can find an idle worker to ++ * activate. If a given worker is on the free_list but in the process ++ * of exiting, keep trying. 
++ */ ++ hlist_nulls_for_each_entry_rcu(worker, n, &wqe->free_list, nulls_node) { ++ if (!io_worker_get(worker)) ++ continue; ++ if (wake_up_process(worker->task)) { ++ io_worker_release(worker); ++ return true; ++ } + io_worker_release(worker); +- return true; + } + + return false; +@@ -248,10 +248,19 @@ static void io_wqe_wake_worker(struct io_wqe *wqe, struct io_wqe_acct *acct) + ret = io_wqe_activate_free_worker(wqe); + rcu_read_unlock(); + +- if (!ret && acct->nr_workers < acct->max_workers) { +- atomic_inc(&acct->nr_running); +- atomic_inc(&wqe->wq->worker_refs); +- create_io_worker(wqe->wq, wqe, acct->index); ++ if (!ret) { ++ bool do_create = false; ++ ++ raw_spin_lock_irq(&wqe->lock); ++ if (acct->nr_workers < acct->max_workers) { ++ atomic_inc(&acct->nr_running); ++ atomic_inc(&wqe->wq->worker_refs); ++ acct->nr_workers++; ++ do_create = true; ++ } ++ raw_spin_unlock_irq(&wqe->lock); ++ if (do_create) ++ create_io_worker(wqe->wq, wqe, acct->index); + } + } + +@@ -272,9 +281,17 @@ static void create_worker_cb(struct callback_head *cb) + { + struct create_worker_data *cwd; + struct io_wq *wq; ++ struct io_wqe *wqe; ++ struct io_wqe_acct *acct; + + cwd = container_of(cb, struct create_worker_data, work); +- wq = cwd->wqe->wq; ++ wqe = cwd->wqe; ++ wq = wqe->wq; ++ acct = &wqe->acct[cwd->index]; ++ raw_spin_lock_irq(&wqe->lock); ++ if (acct->nr_workers < acct->max_workers) ++ acct->nr_workers++; ++ raw_spin_unlock_irq(&wqe->lock); + create_io_worker(wq, cwd->wqe, cwd->index); + kfree(cwd); + } +@@ -640,6 +657,9 @@ static void create_io_worker(struct io_wq *wq, struct io_wqe *wqe, int index) + kfree(worker); + fail: + atomic_dec(&acct->nr_running); ++ raw_spin_lock_irq(&wqe->lock); ++ acct->nr_workers--; ++ raw_spin_unlock_irq(&wqe->lock); + io_worker_ref_put(wq); + return; + } +@@ -655,9 +675,8 @@ fail: + worker->flags |= IO_WORKER_F_FREE; + if (index == IO_WQ_ACCT_BOUND) + worker->flags |= IO_WORKER_F_BOUND; +- if (!acct->nr_workers && (worker->flags & 
IO_WORKER_F_BOUND)) ++ if ((acct->nr_workers == 1) && (worker->flags & IO_WORKER_F_BOUND)) + worker->flags |= IO_WORKER_F_FIXED; +- acct->nr_workers++; + raw_spin_unlock_irq(&wqe->lock); + wake_up_new_task(tsk); + } +diff --git a/fs/pipe.c b/fs/pipe.c +index 9ef4231cce61c..8e6ef62aeb1c6 100644 +--- a/fs/pipe.c ++++ b/fs/pipe.c +@@ -31,6 +31,21 @@ + + #include "internal.h" + ++/* ++ * New pipe buffers will be restricted to this size while the user is exceeding ++ * their pipe buffer quota. The general pipe use case needs at least two ++ * buffers: one for data yet to be read, and one for new data. If this is less ++ * than two, then a write to a non-empty pipe may block even if the pipe is not ++ * full. This can occur with GNU make jobserver or similar uses of pipes as ++ * semaphores: multiple processes may be waiting to write tokens back to the ++ * pipe before reading tokens: https://lore.kernel.org/lkml/1628086770.5rn8p04n6j.none@localhost/. ++ * ++ * Users can reduce their pipe buffers with F_SETPIPE_SZ below this at their ++ * own risk, namely: pipe writes to non-full pipes may block until the pipe is ++ * emptied. ++ */ ++#define PIPE_MIN_DEF_BUFFERS 2 ++ + /* + * The max size that a non-root user is allowed to grow the pipe. 
Can + * be set by root in /proc/sys/fs/pipe-max-size +@@ -781,8 +796,8 @@ struct pipe_inode_info *alloc_pipe_info(void) + user_bufs = account_pipe_buffers(user, 0, pipe_bufs); + + if (too_many_pipe_buffers_soft(user_bufs) && pipe_is_unprivileged_user()) { +- user_bufs = account_pipe_buffers(user, pipe_bufs, 1); +- pipe_bufs = 1; ++ user_bufs = account_pipe_buffers(user, pipe_bufs, PIPE_MIN_DEF_BUFFERS); ++ pipe_bufs = PIPE_MIN_DEF_BUFFERS; + } + + if (too_many_pipe_buffers_hard(user_bufs) && pipe_is_unprivileged_user()) +diff --git a/fs/reiserfs/stree.c b/fs/reiserfs/stree.c +index 476a7ff494822..ef42729216d1f 100644 +--- a/fs/reiserfs/stree.c ++++ b/fs/reiserfs/stree.c +@@ -387,6 +387,24 @@ void pathrelse(struct treepath *search_path) + search_path->path_length = ILLEGAL_PATH_ELEMENT_OFFSET; + } + ++static int has_valid_deh_location(struct buffer_head *bh, struct item_head *ih) ++{ ++ struct reiserfs_de_head *deh; ++ int i; ++ ++ deh = B_I_DEH(bh, ih); ++ for (i = 0; i < ih_entry_count(ih); i++) { ++ if (deh_location(&deh[i]) > ih_item_len(ih)) { ++ reiserfs_warning(NULL, "reiserfs-5094", ++ "directory entry location seems wrong %h", ++ &deh[i]); ++ return 0; ++ } ++ } ++ ++ return 1; ++} ++ + static int is_leaf(char *buf, int blocksize, struct buffer_head *bh) + { + struct block_head *blkh; +@@ -454,11 +472,14 @@ static int is_leaf(char *buf, int blocksize, struct buffer_head *bh) + "(second one): %h", ih); + return 0; + } +- if (is_direntry_le_ih(ih) && (ih_item_len(ih) < (ih_entry_count(ih) * IH_SIZE))) { +- reiserfs_warning(NULL, "reiserfs-5093", +- "item entry count seems wrong %h", +- ih); +- return 0; ++ if (is_direntry_le_ih(ih)) { ++ if (ih_item_len(ih) < (ih_entry_count(ih) * IH_SIZE)) { ++ reiserfs_warning(NULL, "reiserfs-5093", ++ "item entry count seems wrong %h", ++ ih); ++ return 0; ++ } ++ return has_valid_deh_location(bh, ih); + } + prev_location = ih_location(ih); + } +diff --git a/fs/reiserfs/super.c b/fs/reiserfs/super.c +index 
3ffafc73acf02..58481f8d63d5b 100644 +--- a/fs/reiserfs/super.c ++++ b/fs/reiserfs/super.c +@@ -2082,6 +2082,14 @@ static int reiserfs_fill_super(struct super_block *s, void *data, int silent) + unlock_new_inode(root_inode); + } + ++ if (!S_ISDIR(root_inode->i_mode) || !inode_get_bytes(root_inode) || ++ !root_inode->i_size) { ++ SWARN(silent, s, "", "corrupt root inode, run fsck"); ++ iput(root_inode); ++ errval = -EUCLEAN; ++ goto error; ++ } ++ + s->s_root = d_make_root(root_inode); + if (!s->s_root) + goto error; +diff --git a/include/linux/serial_core.h b/include/linux/serial_core.h +index d7ed00f1594ef..a11067cebdabc 100644 +--- a/include/linux/serial_core.h ++++ b/include/linux/serial_core.h +@@ -517,6 +517,25 @@ static inline void uart_unlock_and_check_sysrq(struct uart_port *port) + if (sysrq_ch) + handle_sysrq(sysrq_ch); + } ++ ++static inline void uart_unlock_and_check_sysrq_irqrestore(struct uart_port *port, ++ unsigned long flags) ++{ ++ int sysrq_ch; ++ ++ if (!port->has_sysrq) { ++ spin_unlock_irqrestore(&port->lock, flags); ++ return; ++ } ++ ++ sysrq_ch = port->sysrq_ch; ++ port->sysrq_ch = 0; ++ ++ spin_unlock_irqrestore(&port->lock, flags); ++ ++ if (sysrq_ch) ++ handle_sysrq(sysrq_ch); ++} + #else /* CONFIG_MAGIC_SYSRQ_SERIAL */ + static inline int uart_handle_sysrq_char(struct uart_port *port, unsigned int ch) + { +@@ -530,6 +549,11 @@ static inline void uart_unlock_and_check_sysrq(struct uart_port *port) + { + spin_unlock(&port->lock); + } ++static inline void uart_unlock_and_check_sysrq_irqrestore(struct uart_port *port, ++ unsigned long flags) ++{ ++ spin_unlock_irqrestore(&port->lock, flags); ++} + #endif /* CONFIG_MAGIC_SYSRQ_SERIAL */ + + /* +diff --git a/include/linux/tee_drv.h b/include/linux/tee_drv.h +index 54269e47ac9a3..3ebfea0781f10 100644 +--- a/include/linux/tee_drv.h ++++ b/include/linux/tee_drv.h +@@ -27,6 +27,7 @@ + #define TEE_SHM_USER_MAPPED BIT(4) /* Memory mapped in user space */ + #define TEE_SHM_POOL BIT(5) /* Memory 
allocated from pool */ + #define TEE_SHM_KERNEL_MAPPED BIT(6) /* Memory mapped in kernel space */ ++#define TEE_SHM_PRIV BIT(7) /* Memory private to TEE driver */ + + struct device; + struct tee_device; +@@ -332,6 +333,7 @@ void *tee_get_drvdata(struct tee_device *teedev); + * @returns a pointer to 'struct tee_shm' + */ + struct tee_shm *tee_shm_alloc(struct tee_context *ctx, size_t size, u32 flags); ++struct tee_shm *tee_shm_alloc_kernel_buf(struct tee_context *ctx, size_t size); + + /** + * tee_shm_register() - Register shared memory buffer +diff --git a/include/linux/usb/otg-fsm.h b/include/linux/usb/otg-fsm.h +index e78eb577d0fa1..8ef7d148c1493 100644 +--- a/include/linux/usb/otg-fsm.h ++++ b/include/linux/usb/otg-fsm.h +@@ -196,6 +196,7 @@ struct otg_fsm { + struct mutex lock; + u8 *host_req_flag; + struct delayed_work hnp_polling_work; ++ bool hnp_work_inited; + bool state_changed; + }; + +diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h +index 89c8406dddb4a..34a92d5ed12b5 100644 +--- a/include/net/bluetooth/hci_core.h ++++ b/include/net/bluetooth/hci_core.h +@@ -1229,6 +1229,7 @@ struct hci_dev *hci_alloc_dev(void); + void hci_free_dev(struct hci_dev *hdev); + int hci_register_dev(struct hci_dev *hdev); + void hci_unregister_dev(struct hci_dev *hdev); ++void hci_cleanup_dev(struct hci_dev *hdev); + int hci_suspend_dev(struct hci_dev *hdev); + int hci_resume_dev(struct hci_dev *hdev); + int hci_reset_dev(struct hci_dev *hdev); +diff --git a/include/net/ip6_route.h b/include/net/ip6_route.h +index 625a38ccb5d94..0bf09a9bca4e0 100644 +--- a/include/net/ip6_route.h ++++ b/include/net/ip6_route.h +@@ -265,7 +265,7 @@ int ip6_fragment(struct net *net, struct sock *sk, struct sk_buff *skb, + + static inline unsigned int ip6_skb_dst_mtu(struct sk_buff *skb) + { +- int mtu; ++ unsigned int mtu; + + struct ipv6_pinfo *np = skb->sk && !dev_recursion_level() ? 
+ inet6_sk(skb->sk) : NULL; +diff --git a/include/net/netns/xfrm.h b/include/net/netns/xfrm.h +index e816b6a3ef2b0..9b376b87bd543 100644 +--- a/include/net/netns/xfrm.h ++++ b/include/net/netns/xfrm.h +@@ -74,6 +74,7 @@ struct netns_xfrm { + #endif + spinlock_t xfrm_state_lock; + seqcount_spinlock_t xfrm_state_hash_generation; ++ seqcount_spinlock_t xfrm_policy_hash_generation; + + spinlock_t xfrm_policy_lock; + struct mutex xfrm_cfg_mutex; +diff --git a/kernel/events/core.c b/kernel/events/core.c +index 9ebac2a794679..49a5678750fbf 100644 +--- a/kernel/events/core.c ++++ b/kernel/events/core.c +@@ -12159,10 +12159,33 @@ SYSCALL_DEFINE5(perf_event_open, + } + + if (task) { ++ unsigned int ptrace_mode = PTRACE_MODE_READ_REALCREDS; ++ bool is_capable; ++ + err = down_read_interruptible(&task->signal->exec_update_lock); + if (err) + goto err_file; + ++ is_capable = perfmon_capable(); ++ if (attr.sigtrap) { ++ /* ++ * perf_event_attr::sigtrap sends signals to the other ++ * task. Require the current task to also have ++ * CAP_KILL. ++ */ ++ rcu_read_lock(); ++ is_capable &= ns_capable(__task_cred(task)->user_ns, CAP_KILL); ++ rcu_read_unlock(); ++ ++ /* ++ * If the required capabilities aren't available, checks ++ * for ptrace permissions: upgrade to ATTACH, since ++ * sending signals can effectively change the target ++ * task. ++ */ ++ ptrace_mode = PTRACE_MODE_ATTACH_REALCREDS; ++ } ++ + /* + * Preserve ptrace permission check for backwards compatibility. + * +@@ -12172,7 +12195,7 @@ SYSCALL_DEFINE5(perf_event_open, + * perf_event_exit_task() that could imply). 
+ */ + err = -EACCES; +- if (!perfmon_capable() && !ptrace_may_access(task, PTRACE_MODE_READ_REALCREDS)) ++ if (!is_capable && !ptrace_may_access(task, ptrace_mode)) + goto err_cred; + } + +diff --git a/kernel/sched/core.c b/kernel/sched/core.c +index e5858999b54de..15b4d2fb6be38 100644 +--- a/kernel/sched/core.c ++++ b/kernel/sched/core.c +@@ -1632,12 +1632,18 @@ void deactivate_task(struct rq *rq, struct task_struct *p, int flags) + dequeue_task(rq, p, flags); + } + +-/* +- * __normal_prio - return the priority that is based on the static prio +- */ +-static inline int __normal_prio(struct task_struct *p) ++static inline int __normal_prio(int policy, int rt_prio, int nice) + { +- return p->static_prio; ++ int prio; ++ ++ if (dl_policy(policy)) ++ prio = MAX_DL_PRIO - 1; ++ else if (rt_policy(policy)) ++ prio = MAX_RT_PRIO - 1 - rt_prio; ++ else ++ prio = NICE_TO_PRIO(nice); ++ ++ return prio; + } + + /* +@@ -1649,15 +1655,7 @@ static inline int __normal_prio(struct task_struct *p) + */ + static inline int normal_prio(struct task_struct *p) + { +- int prio; +- +- if (task_has_dl_policy(p)) +- prio = MAX_DL_PRIO-1; +- else if (task_has_rt_policy(p)) +- prio = MAX_RT_PRIO-1 - p->rt_priority; +- else +- prio = __normal_prio(p); +- return prio; ++ return __normal_prio(p->policy, p->rt_priority, PRIO_TO_NICE(p->static_prio)); + } + + /* +@@ -3759,7 +3757,7 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p) + } else if (PRIO_TO_NICE(p->static_prio) < 0) + p->static_prio = NICE_TO_PRIO(0); + +- p->prio = p->normal_prio = __normal_prio(p); ++ p->prio = p->normal_prio = p->static_prio; + set_load_weight(p, false); + + /* +@@ -5552,6 +5550,18 @@ int default_wake_function(wait_queue_entry_t *curr, unsigned mode, int wake_flag + } + EXPORT_SYMBOL(default_wake_function); + ++static void __setscheduler_prio(struct task_struct *p, int prio) ++{ ++ if (dl_prio(prio)) ++ p->sched_class = &dl_sched_class; ++ else if (rt_prio(prio)) ++ p->sched_class = 
&rt_sched_class; ++ else ++ p->sched_class = &fair_sched_class; ++ ++ p->prio = prio; ++} ++ + #ifdef CONFIG_RT_MUTEXES + + static inline int __rt_effective_prio(struct task_struct *pi_task, int prio) +@@ -5667,22 +5677,19 @@ void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task) + } else { + p->dl.pi_se = &p->dl; + } +- p->sched_class = &dl_sched_class; + } else if (rt_prio(prio)) { + if (dl_prio(oldprio)) + p->dl.pi_se = &p->dl; + if (oldprio < prio) + queue_flag |= ENQUEUE_HEAD; +- p->sched_class = &rt_sched_class; + } else { + if (dl_prio(oldprio)) + p->dl.pi_se = &p->dl; + if (rt_prio(oldprio)) + p->rt.timeout = 0; +- p->sched_class = &fair_sched_class; + } + +- p->prio = prio; ++ __setscheduler_prio(p, prio); + + if (queued) + enqueue_task(rq, p, queue_flag); +@@ -6035,35 +6042,6 @@ static void __setscheduler_params(struct task_struct *p, + set_load_weight(p, true); + } + +-/* Actually do priority change: must hold pi & rq lock. */ +-static void __setscheduler(struct rq *rq, struct task_struct *p, +- const struct sched_attr *attr, bool keep_boost) +-{ +- /* +- * If params can't change scheduling class changes aren't allowed +- * either. +- */ +- if (attr->sched_flags & SCHED_FLAG_KEEP_PARAMS) +- return; +- +- __setscheduler_params(p, attr); +- +- /* +- * Keep a potential priority boosting if called from +- * sched_setscheduler(). +- */ +- p->prio = normal_prio(p); +- if (keep_boost) +- p->prio = rt_effective_prio(p, p->prio); +- +- if (dl_prio(p->prio)) +- p->sched_class = &dl_sched_class; +- else if (rt_prio(p->prio)) +- p->sched_class = &rt_sched_class; +- else +- p->sched_class = &fair_sched_class; +-} +- + /* + * Check the target process has a UID that matches the current process's: + */ +@@ -6084,10 +6062,8 @@ static int __sched_setscheduler(struct task_struct *p, + const struct sched_attr *attr, + bool user, bool pi) + { +- int newprio = dl_policy(attr->sched_policy) ? 
MAX_DL_PRIO - 1 : +- MAX_RT_PRIO - 1 - attr->sched_priority; +- int retval, oldprio, oldpolicy = -1, queued, running; +- int new_effective_prio, policy = attr->sched_policy; ++ int oldpolicy = -1, policy = attr->sched_policy; ++ int retval, oldprio, newprio, queued, running; + const struct sched_class *prev_class; + struct callback_head *head; + struct rq_flags rf; +@@ -6285,6 +6261,7 @@ change: + p->sched_reset_on_fork = reset_on_fork; + oldprio = p->prio; + ++ newprio = __normal_prio(policy, attr->sched_priority, attr->sched_nice); + if (pi) { + /* + * Take priority boosted tasks into account. If the new +@@ -6293,8 +6270,8 @@ change: + * the runqueue. This will be done when the task deboost + * itself. + */ +- new_effective_prio = rt_effective_prio(p, newprio); +- if (new_effective_prio == oldprio) ++ newprio = rt_effective_prio(p, newprio); ++ if (newprio == oldprio) + queue_flags &= ~DEQUEUE_MOVE; + } + +@@ -6307,7 +6284,10 @@ change: + + prev_class = p->sched_class; + +- __setscheduler(rq, p, attr, pi); ++ if (!(attr->sched_flags & SCHED_FLAG_KEEP_PARAMS)) { ++ __setscheduler_params(p, attr); ++ __setscheduler_prio(p, newprio); ++ } + __setscheduler_uclamp(p, attr); + + if (queued) { +diff --git a/kernel/time/timer.c b/kernel/time/timer.c +index 99b97ccefdbdf..2870a7a51638c 100644 +--- a/kernel/time/timer.c ++++ b/kernel/time/timer.c +@@ -1279,8 +1279,10 @@ static inline void timer_base_unlock_expiry(struct timer_base *base) + static void timer_sync_wait_running(struct timer_base *base) + { + if (atomic_read(&base->timer_waiters)) { ++ raw_spin_unlock_irq(&base->lock); + spin_unlock(&base->expiry_lock); + spin_lock(&base->expiry_lock); ++ raw_spin_lock_irq(&base->lock); + } + } + +@@ -1471,14 +1473,14 @@ static void expire_timers(struct timer_base *base, struct hlist_head *head) + if (timer->flags & TIMER_IRQSAFE) { + raw_spin_unlock(&base->lock); + call_timer_fn(timer, fn, baseclk); +- base->running_timer = NULL; + raw_spin_lock(&base->lock); ++ 
base->running_timer = NULL; + } else { + raw_spin_unlock_irq(&base->lock); + call_timer_fn(timer, fn, baseclk); ++ raw_spin_lock_irq(&base->lock); + base->running_timer = NULL; + timer_sync_wait_running(base); +- raw_spin_lock_irq(&base->lock); + } + } + } +diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c +index 20196380fc545..018067e379f2b 100644 +--- a/kernel/trace/trace.c ++++ b/kernel/trace/trace.c +@@ -9006,8 +9006,10 @@ static int trace_array_create_dir(struct trace_array *tr) + return -EINVAL; + + ret = event_trace_add_tracer(tr->dir, tr); +- if (ret) ++ if (ret) { + tracefs_remove(tr->dir); ++ return ret; ++ } + + init_tracer_tracefs(tr, tr->dir); + __update_tracer_options(tr); +diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c +index 023777a4a04d7..c59793ffd59ce 100644 +--- a/kernel/trace/trace_events_hist.c ++++ b/kernel/trace/trace_events_hist.c +@@ -65,7 +65,8 @@ + C(INVALID_SORT_MODIFIER,"Invalid sort modifier"), \ + C(EMPTY_SORT_FIELD, "Empty sort field"), \ + C(TOO_MANY_SORT_FIELDS, "Too many sort fields (Max = 2)"), \ +- C(INVALID_SORT_FIELD, "Sort field must be a key or a val"), ++ C(INVALID_SORT_FIELD, "Sort field must be a key or a val"), \ ++ C(INVALID_STR_OPERAND, "String type can not be an operand in expression"), + + #undef C + #define C(a, b) HIST_ERR_##a +@@ -2156,6 +2157,13 @@ static struct hist_field *parse_unary(struct hist_trigger_data *hist_data, + ret = PTR_ERR(operand1); + goto free; + } ++ if (operand1->flags & HIST_FIELD_FL_STRING) { ++ /* String type can not be the operand of unary operator. 
*/ ++ hist_err(file->tr, HIST_ERR_INVALID_STR_OPERAND, errpos(str)); ++ destroy_hist_field(operand1, 0); ++ ret = -EINVAL; ++ goto free; ++ } + + expr->flags |= operand1->flags & + (HIST_FIELD_FL_TIMESTAMP | HIST_FIELD_FL_TIMESTAMP_USECS); +@@ -2257,6 +2265,11 @@ static struct hist_field *parse_expr(struct hist_trigger_data *hist_data, + operand1 = NULL; + goto free; + } ++ if (operand1->flags & HIST_FIELD_FL_STRING) { ++ hist_err(file->tr, HIST_ERR_INVALID_STR_OPERAND, errpos(operand1_str)); ++ ret = -EINVAL; ++ goto free; ++ } + + /* rest of string could be another expression e.g. b+c in a+b+c */ + operand_flags = 0; +@@ -2266,6 +2279,11 @@ static struct hist_field *parse_expr(struct hist_trigger_data *hist_data, + operand2 = NULL; + goto free; + } ++ if (operand2->flags & HIST_FIELD_FL_STRING) { ++ hist_err(file->tr, HIST_ERR_INVALID_STR_OPERAND, errpos(str)); ++ ret = -EINVAL; ++ goto free; ++ } + + ret = check_expr_operands(file->tr, operand1, operand2); + if (ret) +@@ -2287,6 +2305,10 @@ static struct hist_field *parse_expr(struct hist_trigger_data *hist_data, + + expr->operands[0] = operand1; + expr->operands[1] = operand2; ++ ++ /* The operand sizes should be the same, so just pick one */ ++ expr->size = operand1->size; ++ + expr->operator = field_op; + expr->name = expr_str(expr, 0); + expr->type = kstrdup(operand1->type, GFP_KERNEL); +diff --git a/kernel/tracepoint.c b/kernel/tracepoint.c +index fc32821f8240b..efd14c79fab41 100644 +--- a/kernel/tracepoint.c ++++ b/kernel/tracepoint.c +@@ -15,12 +15,57 @@ + #include + #include + ++enum tp_func_state { ++ TP_FUNC_0, ++ TP_FUNC_1, ++ TP_FUNC_2, ++ TP_FUNC_N, ++}; ++ + extern tracepoint_ptr_t __start___tracepoints_ptrs[]; + extern tracepoint_ptr_t __stop___tracepoints_ptrs[]; + + DEFINE_SRCU(tracepoint_srcu); + EXPORT_SYMBOL_GPL(tracepoint_srcu); + ++enum tp_transition_sync { ++ TP_TRANSITION_SYNC_1_0_1, ++ TP_TRANSITION_SYNC_N_2_1, ++ ++ _NR_TP_TRANSITION_SYNC, ++}; ++ ++struct tp_transition_snapshot { ++ 
unsigned long rcu; ++ unsigned long srcu; ++ bool ongoing; ++}; ++ ++/* Protected by tracepoints_mutex */ ++static struct tp_transition_snapshot tp_transition_snapshot[_NR_TP_TRANSITION_SYNC]; ++ ++static void tp_rcu_get_state(enum tp_transition_sync sync) ++{ ++ struct tp_transition_snapshot *snapshot = &tp_transition_snapshot[sync]; ++ ++ /* Keep the latest get_state snapshot. */ ++ snapshot->rcu = get_state_synchronize_rcu(); ++ snapshot->srcu = start_poll_synchronize_srcu(&tracepoint_srcu); ++ snapshot->ongoing = true; ++} ++ ++static void tp_rcu_cond_sync(enum tp_transition_sync sync) ++{ ++ struct tp_transition_snapshot *snapshot = &tp_transition_snapshot[sync]; ++ ++ if (!snapshot->ongoing) ++ return; ++ cond_synchronize_rcu(snapshot->rcu); ++ if (!poll_state_synchronize_srcu(&tracepoint_srcu, snapshot->srcu)) ++ synchronize_srcu(&tracepoint_srcu); ++ snapshot->ongoing = false; ++} ++ + /* Set to 1 to enable tracepoint debug output */ + static const int tracepoint_debug; + +@@ -246,26 +291,29 @@ static void *func_remove(struct tracepoint_func **funcs, + return old; + } + +-static void tracepoint_update_call(struct tracepoint *tp, struct tracepoint_func *tp_funcs, bool sync) ++/* ++ * Count the number of functions (enum tp_func_state) in a tp_funcs array. 
++ */ ++static enum tp_func_state nr_func_state(const struct tracepoint_func *tp_funcs) ++{ ++ if (!tp_funcs) ++ return TP_FUNC_0; ++ if (!tp_funcs[1].func) ++ return TP_FUNC_1; ++ if (!tp_funcs[2].func) ++ return TP_FUNC_2; ++ return TP_FUNC_N; /* 3 or more */ ++} ++ ++static void tracepoint_update_call(struct tracepoint *tp, struct tracepoint_func *tp_funcs) + { + void *func = tp->iterator; + + /* Synthetic events do not have static call sites */ + if (!tp->static_call_key) + return; +- +- if (!tp_funcs[1].func) { ++ if (nr_func_state(tp_funcs) == TP_FUNC_1) + func = tp_funcs[0].func; +- /* +- * If going from the iterator back to a single caller, +- * we need to synchronize with __DO_TRACE to make sure +- * that the data passed to the callback is the one that +- * belongs to that callback. +- */ +- if (sync) +- tracepoint_synchronize_unregister(); +- } +- + __static_call_update(tp->static_call_key, tp->static_call_tramp, func); + } + +@@ -299,9 +347,41 @@ static int tracepoint_add_func(struct tracepoint *tp, + * a pointer to it. This array is referenced by __DO_TRACE from + * include/linux/tracepoint.h using rcu_dereference_sched(). + */ +- tracepoint_update_call(tp, tp_funcs, false); +- rcu_assign_pointer(tp->funcs, tp_funcs); +- static_key_enable(&tp->key); ++ switch (nr_func_state(tp_funcs)) { ++ case TP_FUNC_1: /* 0->1 */ ++ /* ++ * Make sure new static func never uses old data after a ++ * 1->0->1 transition sequence. ++ */ ++ tp_rcu_cond_sync(TP_TRANSITION_SYNC_1_0_1); ++ /* Set static call to first function */ ++ tracepoint_update_call(tp, tp_funcs); ++ /* Both iterator and static call handle NULL tp->funcs */ ++ rcu_assign_pointer(tp->funcs, tp_funcs); ++ static_key_enable(&tp->key); ++ break; ++ case TP_FUNC_2: /* 1->2 */ ++ /* Set iterator static call */ ++ tracepoint_update_call(tp, tp_funcs); ++ /* ++ * Iterator callback installed before updating tp->funcs. ++ * Requires ordering between RCU assign/dereference and ++ * static call update/call. 
++ */ ++ fallthrough; ++ case TP_FUNC_N: /* N->N+1 (N>1) */ ++ rcu_assign_pointer(tp->funcs, tp_funcs); ++ /* ++ * Make sure static func never uses incorrect data after a ++ * N->...->2->1 (N>1) transition sequence. ++ */ ++ if (tp_funcs[0].data != old[0].data) ++ tp_rcu_get_state(TP_TRANSITION_SYNC_N_2_1); ++ break; ++ default: ++ WARN_ON_ONCE(1); ++ break; ++ } + + release_probes(old); + return 0; +@@ -328,17 +408,52 @@ static int tracepoint_remove_func(struct tracepoint *tp, + /* Failed allocating new tp_funcs, replaced func with stub */ + return 0; + +- if (!tp_funcs) { ++ switch (nr_func_state(tp_funcs)) { ++ case TP_FUNC_0: /* 1->0 */ + /* Removed last function */ + if (tp->unregfunc && static_key_enabled(&tp->key)) + tp->unregfunc(); + + static_key_disable(&tp->key); ++ /* Set iterator static call */ ++ tracepoint_update_call(tp, tp_funcs); ++ /* Both iterator and static call handle NULL tp->funcs */ ++ rcu_assign_pointer(tp->funcs, NULL); ++ /* ++ * Make sure new static func never uses old data after a ++ * 1->0->1 transition sequence. ++ */ ++ tp_rcu_get_state(TP_TRANSITION_SYNC_1_0_1); ++ break; ++ case TP_FUNC_1: /* 2->1 */ + rcu_assign_pointer(tp->funcs, tp_funcs); +- } else { ++ /* ++ * Make sure static func never uses incorrect data after a ++ * N->...->2->1 (N>2) transition sequence. If the first ++ * element's data has changed, then force the synchronization ++ * to prevent current readers that have loaded the old data ++ * from calling the new function. 
++ */ ++ if (tp_funcs[0].data != old[0].data) ++ tp_rcu_get_state(TP_TRANSITION_SYNC_N_2_1); ++ tp_rcu_cond_sync(TP_TRANSITION_SYNC_N_2_1); ++ /* Set static call to first function */ ++ tracepoint_update_call(tp, tp_funcs); ++ break; ++ case TP_FUNC_2: /* N->N-1 (N>2) */ ++ fallthrough; ++ case TP_FUNC_N: + rcu_assign_pointer(tp->funcs, tp_funcs); +- tracepoint_update_call(tp, tp_funcs, +- tp_funcs[0].func != old[0].func); ++ /* ++ * Make sure static func never uses incorrect data after a ++ * N->...->2->1 (N>2) transition sequence. ++ */ ++ if (tp_funcs[0].data != old[0].data) ++ tp_rcu_get_state(TP_TRANSITION_SYNC_N_2_1); ++ break; ++ default: ++ WARN_ON_ONCE(1); ++ break; + } + release_probes(old); + return 0; +diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c +index 7d71d104fdfda..ee59d1c7f1f6c 100644 +--- a/net/bluetooth/hci_core.c ++++ b/net/bluetooth/hci_core.c +@@ -3976,14 +3976,10 @@ EXPORT_SYMBOL(hci_register_dev); + /* Unregister HCI device */ + void hci_unregister_dev(struct hci_dev *hdev) + { +- int id; +- + BT_DBG("%p name %s bus %d", hdev, hdev->name, hdev->bus); + + hci_dev_set_flag(hdev, HCI_UNREGISTER); + +- id = hdev->id; +- + write_lock(&hci_dev_list_lock); + list_del(&hdev->list); + write_unlock(&hci_dev_list_lock); +@@ -4018,7 +4014,14 @@ void hci_unregister_dev(struct hci_dev *hdev) + } + + device_del(&hdev->dev); ++ /* Actual cleanup is deferred until hci_cleanup_dev(). 
*/ ++ hci_dev_put(hdev); ++} ++EXPORT_SYMBOL(hci_unregister_dev); + ++/* Cleanup HCI device */ ++void hci_cleanup_dev(struct hci_dev *hdev) ++{ + debugfs_remove_recursive(hdev->debugfs); + kfree_const(hdev->hw_info); + kfree_const(hdev->fw_info); +@@ -4043,11 +4046,8 @@ void hci_unregister_dev(struct hci_dev *hdev) + hci_blocked_keys_clear(hdev); + hci_dev_unlock(hdev); + +- hci_dev_put(hdev); +- +- ida_simple_remove(&hci_index_ida, id); ++ ida_simple_remove(&hci_index_ida, hdev->id); + } +-EXPORT_SYMBOL(hci_unregister_dev); + + /* Suspend HCI device */ + int hci_suspend_dev(struct hci_dev *hdev) +diff --git a/net/bluetooth/hci_sock.c b/net/bluetooth/hci_sock.c +index eed0dd066e12c..53f85d7c5f9e5 100644 +--- a/net/bluetooth/hci_sock.c ++++ b/net/bluetooth/hci_sock.c +@@ -59,6 +59,17 @@ struct hci_pinfo { + char comm[TASK_COMM_LEN]; + }; + ++static struct hci_dev *hci_hdev_from_sock(struct sock *sk) ++{ ++ struct hci_dev *hdev = hci_pi(sk)->hdev; ++ ++ if (!hdev) ++ return ERR_PTR(-EBADFD); ++ if (hci_dev_test_flag(hdev, HCI_UNREGISTER)) ++ return ERR_PTR(-EPIPE); ++ return hdev; ++} ++ + void hci_sock_set_flag(struct sock *sk, int nr) + { + set_bit(nr, &hci_pi(sk)->flags); +@@ -759,19 +770,13 @@ void hci_sock_dev_event(struct hci_dev *hdev, int event) + if (event == HCI_DEV_UNREG) { + struct sock *sk; + +- /* Detach sockets from device */ ++ /* Wake up sockets using this dead device */ + read_lock(&hci_sk_list.lock); + sk_for_each(sk, &hci_sk_list.head) { +- lock_sock(sk); + if (hci_pi(sk)->hdev == hdev) { +- hci_pi(sk)->hdev = NULL; + sk->sk_err = EPIPE; +- sk->sk_state = BT_OPEN; + sk->sk_state_change(sk); +- +- hci_dev_put(hdev); + } +- release_sock(sk); + } + read_unlock(&hci_sk_list.lock); + } +@@ -930,10 +935,10 @@ static int hci_sock_blacklist_del(struct hci_dev *hdev, void __user *arg) + static int hci_sock_bound_ioctl(struct sock *sk, unsigned int cmd, + unsigned long arg) + { +- struct hci_dev *hdev = hci_pi(sk)->hdev; ++ struct hci_dev *hdev = 
hci_hdev_from_sock(sk); + +- if (!hdev) +- return -EBADFD; ++ if (IS_ERR(hdev)) ++ return PTR_ERR(hdev); + + if (hci_dev_test_flag(hdev, HCI_USER_CHANNEL)) + return -EBUSY; +@@ -1103,6 +1108,18 @@ static int hci_sock_bind(struct socket *sock, struct sockaddr *addr, + + lock_sock(sk); + ++ /* Allow detaching from dead device and attaching to alive device, if ++ * the caller wants to re-bind (instead of close) this socket in ++ * response to hci_sock_dev_event(HCI_DEV_UNREG) notification. ++ */ ++ hdev = hci_pi(sk)->hdev; ++ if (hdev && hci_dev_test_flag(hdev, HCI_UNREGISTER)) { ++ hci_pi(sk)->hdev = NULL; ++ sk->sk_state = BT_OPEN; ++ hci_dev_put(hdev); ++ } ++ hdev = NULL; ++ + if (sk->sk_state == BT_BOUND) { + err = -EALREADY; + goto done; +@@ -1379,9 +1396,9 @@ static int hci_sock_getname(struct socket *sock, struct sockaddr *addr, + + lock_sock(sk); + +- hdev = hci_pi(sk)->hdev; +- if (!hdev) { +- err = -EBADFD; ++ hdev = hci_hdev_from_sock(sk); ++ if (IS_ERR(hdev)) { ++ err = PTR_ERR(hdev); + goto done; + } + +@@ -1743,9 +1760,9 @@ static int hci_sock_sendmsg(struct socket *sock, struct msghdr *msg, + goto done; + } + +- hdev = hci_pi(sk)->hdev; +- if (!hdev) { +- err = -EBADFD; ++ hdev = hci_hdev_from_sock(sk); ++ if (IS_ERR(hdev)) { ++ err = PTR_ERR(hdev); + goto done; + } + +diff --git a/net/bluetooth/hci_sysfs.c b/net/bluetooth/hci_sysfs.c +index 9874844a95a98..b69d88b88d2e4 100644 +--- a/net/bluetooth/hci_sysfs.c ++++ b/net/bluetooth/hci_sysfs.c +@@ -83,6 +83,9 @@ void hci_conn_del_sysfs(struct hci_conn *conn) + static void bt_host_release(struct device *dev) + { + struct hci_dev *hdev = to_hci_dev(dev); ++ ++ if (hci_dev_test_flag(hdev, HCI_UNREGISTER)) ++ hci_cleanup_dev(hdev); + kfree(hdev); + module_put(THIS_MODULE); + } +diff --git a/net/bridge/br.c b/net/bridge/br.c +index ef743f94254d7..bbab9984f24e5 100644 +--- a/net/bridge/br.c ++++ b/net/bridge/br.c +@@ -166,7 +166,8 @@ static int br_switchdev_event(struct notifier_block *unused, + case 
SWITCHDEV_FDB_ADD_TO_BRIDGE: + fdb_info = ptr; + err = br_fdb_external_learn_add(br, p, fdb_info->addr, +- fdb_info->vid, false); ++ fdb_info->vid, ++ fdb_info->is_local, false); + if (err) { + err = notifier_from_errno(err); + break; +diff --git a/net/bridge/br_fdb.c b/net/bridge/br_fdb.c +index 698b79747d32e..87ce52bba6498 100644 +--- a/net/bridge/br_fdb.c ++++ b/net/bridge/br_fdb.c +@@ -1001,7 +1001,8 @@ static int fdb_add_entry(struct net_bridge *br, struct net_bridge_port *source, + + static int __br_fdb_add(struct ndmsg *ndm, struct net_bridge *br, + struct net_bridge_port *p, const unsigned char *addr, +- u16 nlh_flags, u16 vid, struct nlattr *nfea_tb[]) ++ u16 nlh_flags, u16 vid, struct nlattr *nfea_tb[], ++ struct netlink_ext_ack *extack) + { + int err = 0; + +@@ -1020,7 +1021,15 @@ static int __br_fdb_add(struct ndmsg *ndm, struct net_bridge *br, + rcu_read_unlock(); + local_bh_enable(); + } else if (ndm->ndm_flags & NTF_EXT_LEARNED) { +- err = br_fdb_external_learn_add(br, p, addr, vid, true); ++ if (!p && !(ndm->ndm_state & NUD_PERMANENT)) { ++ NL_SET_ERR_MSG_MOD(extack, ++ "FDB entry towards bridge must be permanent"); ++ return -EINVAL; ++ } ++ ++ err = br_fdb_external_learn_add(br, p, addr, vid, ++ ndm->ndm_state & NUD_PERMANENT, ++ true); + } else { + spin_lock_bh(&br->hash_lock); + err = fdb_add_entry(br, p, addr, ndm, nlh_flags, vid, nfea_tb); +@@ -1092,9 +1101,11 @@ int br_fdb_add(struct ndmsg *ndm, struct nlattr *tb[], + } + + /* VID was specified, so use it. 
*/ +- err = __br_fdb_add(ndm, br, p, addr, nlh_flags, vid, nfea_tb); ++ err = __br_fdb_add(ndm, br, p, addr, nlh_flags, vid, nfea_tb, ++ extack); + } else { +- err = __br_fdb_add(ndm, br, p, addr, nlh_flags, 0, nfea_tb); ++ err = __br_fdb_add(ndm, br, p, addr, nlh_flags, 0, nfea_tb, ++ extack); + if (err || !vg || !vg->num_vlans) + goto out; + +@@ -1106,7 +1117,7 @@ int br_fdb_add(struct ndmsg *ndm, struct nlattr *tb[], + if (!br_vlan_should_use(v)) + continue; + err = __br_fdb_add(ndm, br, p, addr, nlh_flags, v->vid, +- nfea_tb); ++ nfea_tb, extack); + if (err) + goto out; + } +@@ -1246,7 +1257,7 @@ void br_fdb_unsync_static(struct net_bridge *br, struct net_bridge_port *p) + } + + int br_fdb_external_learn_add(struct net_bridge *br, struct net_bridge_port *p, +- const unsigned char *addr, u16 vid, ++ const unsigned char *addr, u16 vid, bool is_local, + bool swdev_notify) + { + struct net_bridge_fdb_entry *fdb; +@@ -1263,6 +1274,10 @@ int br_fdb_external_learn_add(struct net_bridge *br, struct net_bridge_port *p, + + if (swdev_notify) + flags |= BIT(BR_FDB_ADDED_BY_USER); ++ ++ if (is_local) ++ flags |= BIT(BR_FDB_LOCAL); ++ + fdb = fdb_create(br, p, addr, vid, flags); + if (!fdb) { + err = -ENOMEM; +@@ -1289,6 +1304,9 @@ int br_fdb_external_learn_add(struct net_bridge *br, struct net_bridge_port *p, + if (swdev_notify) + set_bit(BR_FDB_ADDED_BY_USER, &fdb->flags); + ++ if (is_local) ++ set_bit(BR_FDB_LOCAL, &fdb->flags); ++ + if (modified) + fdb_notify(br, fdb, RTM_NEWNEIGH, swdev_notify); + } +diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h +index e013d33f1c7ca..4e3d26e0a2d11 100644 +--- a/net/bridge/br_private.h ++++ b/net/bridge/br_private.h +@@ -707,7 +707,7 @@ int br_fdb_get(struct sk_buff *skb, struct nlattr *tb[], struct net_device *dev, + int br_fdb_sync_static(struct net_bridge *br, struct net_bridge_port *p); + void br_fdb_unsync_static(struct net_bridge *br, struct net_bridge_port *p); + int br_fdb_external_learn_add(struct net_bridge 
*br, struct net_bridge_port *p, +- const unsigned char *addr, u16 vid, ++ const unsigned char *addr, u16 vid, bool is_local, + bool swdev_notify); + int br_fdb_external_learn_del(struct net_bridge *br, struct net_bridge_port *p, + const unsigned char *addr, u16 vid, +diff --git a/net/ipv4/tcp_offload.c b/net/ipv4/tcp_offload.c +index e09147ac9a990..fc61cd3fea652 100644 +--- a/net/ipv4/tcp_offload.c ++++ b/net/ipv4/tcp_offload.c +@@ -298,6 +298,9 @@ int tcp_gro_complete(struct sk_buff *skb) + if (th->cwr) + skb_shinfo(skb)->gso_type |= SKB_GSO_TCP_ECN; + ++ if (skb->encapsulation) ++ skb->inner_transport_header = skb->transport_header; ++ + return 0; + } + EXPORT_SYMBOL(tcp_gro_complete); +diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c +index 9dde1e5fb449b..1380a6b6f4ff4 100644 +--- a/net/ipv4/udp_offload.c ++++ b/net/ipv4/udp_offload.c +@@ -624,6 +624,10 @@ static int udp_gro_complete_segment(struct sk_buff *skb) + + skb_shinfo(skb)->gso_segs = NAPI_GRO_CB(skb)->count; + skb_shinfo(skb)->gso_type |= SKB_GSO_UDP_L4; ++ ++ if (skb->encapsulation) ++ skb->inner_transport_header = skb->transport_header; ++ + return 0; + } + +diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c +index fc8b56bcabf39..1ee96a5c5ee0a 100644 +--- a/net/sched/sch_generic.c ++++ b/net/sched/sch_generic.c +@@ -886,7 +886,7 @@ struct Qdisc *qdisc_alloc(struct netdev_queue *dev_queue, + + /* seqlock has the same scope of busylock, for NOLOCK qdisc */ + spin_lock_init(&sch->seqlock); +- lockdep_set_class(&sch->busylock, ++ lockdep_set_class(&sch->seqlock, + dev->qdisc_tx_busylock ?: &qdisc_tx_busylock); + + seqcount_init(&sch->running); +diff --git a/net/sctp/auth.c b/net/sctp/auth.c +index fe74c5f956303..db6b7373d16c3 100644 +--- a/net/sctp/auth.c ++++ b/net/sctp/auth.c +@@ -857,14 +857,18 @@ int sctp_auth_set_key(struct sctp_endpoint *ep, + memcpy(key->data, &auth_key->sca_key[0], auth_key->sca_keylength); + cur_key->key = key; + +- if (replace) { +- 
list_del_init(&shkey->key_list); +- sctp_auth_shkey_release(shkey); +- if (asoc && asoc->active_key_id == auth_key->sca_keynumber) +- sctp_auth_asoc_init_active_key(asoc, GFP_KERNEL); ++ if (!replace) { ++ list_add(&cur_key->key_list, sh_keys); ++ return 0; + } ++ ++ list_del_init(&shkey->key_list); ++ sctp_auth_shkey_release(shkey); + list_add(&cur_key->key_list, sh_keys); + ++ if (asoc && asoc->active_key_id == auth_key->sca_keynumber) ++ sctp_auth_asoc_init_active_key(asoc, GFP_KERNEL); ++ + return 0; + } + +diff --git a/net/xfrm/xfrm_compat.c b/net/xfrm/xfrm_compat.c +index a20aec9d73933..2bf2693901631 100644 +--- a/net/xfrm/xfrm_compat.c ++++ b/net/xfrm/xfrm_compat.c +@@ -298,8 +298,16 @@ static int xfrm_xlate64(struct sk_buff *dst, const struct nlmsghdr *nlh_src) + len = nlmsg_attrlen(nlh_src, xfrm_msg_min[type]); + + nla_for_each_attr(nla, attrs, len, remaining) { +- int err = xfrm_xlate64_attr(dst, nla); ++ int err; + ++ switch (type) { ++ case XFRM_MSG_NEWSPDINFO: ++ err = xfrm_nla_cpy(dst, nla, nla_len(nla)); ++ break; ++ default: ++ err = xfrm_xlate64_attr(dst, nla); ++ break; ++ } + if (err) + return err; + } +@@ -341,7 +349,8 @@ static int xfrm_alloc_compat(struct sk_buff *skb, const struct nlmsghdr *nlh_src + + /* Calculates len of translated 64-bit message. */ + static size_t xfrm_user_rcv_calculate_len64(const struct nlmsghdr *src, +- struct nlattr *attrs[XFRMA_MAX+1]) ++ struct nlattr *attrs[XFRMA_MAX + 1], ++ int maxtype) + { + size_t len = nlmsg_len(src); + +@@ -358,10 +367,20 @@ static size_t xfrm_user_rcv_calculate_len64(const struct nlmsghdr *src, + case XFRM_MSG_POLEXPIRE: + len += 8; + break; ++ case XFRM_MSG_NEWSPDINFO: ++ /* attributes are xfrm_spdattr_type_t, not xfrm_attr_type_t */ ++ return len; + default: + break; + } + ++ /* Unexpected for anything but XFRM_MSG_NEWSPDINFO; please ++ * correct both 64=>32-bit and 32=>64-bit translators to copy ++ * new attributes.
++ */ ++ if (WARN_ON_ONCE(maxtype)) ++ return len; ++ + if (attrs[XFRMA_SA]) + len += 4; + if (attrs[XFRMA_POLICY]) +@@ -440,7 +459,8 @@ static int xfrm_xlate32_attr(void *dst, const struct nlattr *nla, + + static int xfrm_xlate32(struct nlmsghdr *dst, const struct nlmsghdr *src, + struct nlattr *attrs[XFRMA_MAX+1], +- size_t size, u8 type, struct netlink_ext_ack *extack) ++ size_t size, u8 type, int maxtype, ++ struct netlink_ext_ack *extack) + { + size_t pos; + int i; +@@ -520,6 +540,25 @@ static int xfrm_xlate32(struct nlmsghdr *dst, const struct nlmsghdr *src, + } + pos = dst->nlmsg_len; + ++ if (maxtype) { ++ /* attributes are xfrm_spdattr_type_t, not xfrm_attr_type_t */ ++ WARN_ON_ONCE(src->nlmsg_type != XFRM_MSG_NEWSPDINFO); ++ ++ for (i = 1; i <= maxtype; i++) { ++ int err; ++ ++ if (!attrs[i]) ++ continue; ++ ++ /* just copy - no need for translation */ ++ err = xfrm_attr_cpy32(dst, &pos, attrs[i], size, ++ nla_len(attrs[i]), nla_len(attrs[i])); ++ if (err) ++ return err; ++ } ++ return 0; ++ } ++ + for (i = 1; i < XFRMA_MAX + 1; i++) { + int err; + +@@ -564,7 +603,7 @@ static struct nlmsghdr *xfrm_user_rcv_msg_compat(const struct nlmsghdr *h32, + if (err < 0) + return ERR_PTR(err); + +- len = xfrm_user_rcv_calculate_len64(h32, attrs); ++ len = xfrm_user_rcv_calculate_len64(h32, attrs, maxtype); + /* The message doesn't need translation */ + if (len == nlmsg_len(h32)) + return NULL; +@@ -574,7 +613,7 @@ static struct nlmsghdr *xfrm_user_rcv_msg_compat(const struct nlmsghdr *h32, + if (!h64) + return ERR_PTR(-ENOMEM); + +- err = xfrm_xlate32(h64, h32, attrs, len, type, extack); ++ err = xfrm_xlate32(h64, h32, attrs, len, type, maxtype, extack); + if (err < 0) { + kvfree(h64); + return ERR_PTR(err); +diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c +index ce500f847b991..46a6d15b66d6f 100644 +--- a/net/xfrm/xfrm_policy.c ++++ b/net/xfrm/xfrm_policy.c +@@ -155,7 +155,6 @@ static struct xfrm_policy_afinfo const __rcu *xfrm_policy_afinfo[AF_INET6 +
1] + __read_mostly; + + static struct kmem_cache *xfrm_dst_cache __ro_after_init; +-static __read_mostly seqcount_mutex_t xfrm_policy_hash_generation; + + static struct rhashtable xfrm_policy_inexact_table; + static const struct rhashtable_params xfrm_pol_inexact_params; +@@ -585,7 +584,7 @@ static void xfrm_bydst_resize(struct net *net, int dir) + return; + + spin_lock_bh(&net->xfrm.xfrm_policy_lock); +- write_seqcount_begin(&xfrm_policy_hash_generation); ++ write_seqcount_begin(&net->xfrm.xfrm_policy_hash_generation); + + odst = rcu_dereference_protected(net->xfrm.policy_bydst[dir].table, + lockdep_is_held(&net->xfrm.xfrm_policy_lock)); +@@ -596,7 +595,7 @@ static void xfrm_bydst_resize(struct net *net, int dir) + rcu_assign_pointer(net->xfrm.policy_bydst[dir].table, ndst); + net->xfrm.policy_bydst[dir].hmask = nhashmask; + +- write_seqcount_end(&xfrm_policy_hash_generation); ++ write_seqcount_end(&net->xfrm.xfrm_policy_hash_generation); + spin_unlock_bh(&net->xfrm.xfrm_policy_lock); + + synchronize_rcu(); +@@ -1245,7 +1244,7 @@ static void xfrm_hash_rebuild(struct work_struct *work) + } while (read_seqretry(&net->xfrm.policy_hthresh.lock, seq)); + + spin_lock_bh(&net->xfrm.xfrm_policy_lock); +- write_seqcount_begin(&xfrm_policy_hash_generation); ++ write_seqcount_begin(&net->xfrm.xfrm_policy_hash_generation); + + /* make sure that we can insert the indirect policies again before + * we start with destructive action. 
+@@ -1354,7 +1353,7 @@ static void xfrm_hash_rebuild(struct work_struct *work) + + out_unlock: + __xfrm_policy_inexact_flush(net); +- write_seqcount_end(&xfrm_policy_hash_generation); ++ write_seqcount_end(&net->xfrm.xfrm_policy_hash_generation); + spin_unlock_bh(&net->xfrm.xfrm_policy_lock); + + mutex_unlock(&hash_resize_mutex); +@@ -2095,9 +2094,9 @@ static struct xfrm_policy *xfrm_policy_lookup_bytype(struct net *net, u8 type, + rcu_read_lock(); + retry: + do { +- sequence = read_seqcount_begin(&xfrm_policy_hash_generation); ++ sequence = read_seqcount_begin(&net->xfrm.xfrm_policy_hash_generation); + chain = policy_hash_direct(net, daddr, saddr, family, dir); +- } while (read_seqcount_retry(&xfrm_policy_hash_generation, sequence)); ++ } while (read_seqcount_retry(&net->xfrm.xfrm_policy_hash_generation, sequence)); + + ret = NULL; + hlist_for_each_entry_rcu(pol, chain, bydst) { +@@ -2128,7 +2127,7 @@ static struct xfrm_policy *xfrm_policy_lookup_bytype(struct net *net, u8 type, + } + + skip_inexact: +- if (read_seqcount_retry(&xfrm_policy_hash_generation, sequence)) ++ if (read_seqcount_retry(&net->xfrm.xfrm_policy_hash_generation, sequence)) + goto retry; + + if (ret && !xfrm_pol_hold_rcu(ret)) +@@ -4084,6 +4083,7 @@ static int __net_init xfrm_net_init(struct net *net) + /* Initialize the per-net locks here */ + spin_lock_init(&net->xfrm.xfrm_state_lock); + spin_lock_init(&net->xfrm.xfrm_policy_lock); ++ seqcount_spinlock_init(&net->xfrm.xfrm_policy_hash_generation, &net->xfrm.xfrm_policy_lock); + mutex_init(&net->xfrm.xfrm_cfg_mutex); + + rv = xfrm_statistics_init(net); +@@ -4128,7 +4128,6 @@ void __init xfrm_init(void) + { + register_pernet_subsys(&xfrm_net_ops); + xfrm_dev_init(); +- seqcount_mutex_init(&xfrm_policy_hash_generation, &hash_resize_mutex); + xfrm_input_init(); + + #ifdef CONFIG_XFRM_ESPINTCP +diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c +index b47d613409b70..7aff641c717d7 100644 +--- a/net/xfrm/xfrm_user.c ++++ b/net/xfrm/xfrm_user.c 
+@@ -2811,6 +2811,16 @@ static int xfrm_user_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, + + err = link->doit(skb, nlh, attrs); + ++ /* We need to free skb allocated in xfrm_alloc_compat() before ++ * returning from this function, because consume_skb() won't take ++ * care of frag_list since netlink destructor sets ++ * sbk->head to NULL. (see netlink_skb_destructor()) ++ */ ++ if (skb_has_frag_list(skb)) { ++ kfree_skb(skb_shinfo(skb)->frag_list); ++ skb_shinfo(skb)->frag_list = NULL; ++ } ++ + err: + kvfree(nlh64); + return err; +diff --git a/scripts/tracing/draw_functrace.py b/scripts/tracing/draw_functrace.py +index 74f8aadfd4cbc..7011fbe003ff2 100755 +--- a/scripts/tracing/draw_functrace.py ++++ b/scripts/tracing/draw_functrace.py +@@ -17,7 +17,7 @@ Usage: + $ cat /sys/kernel/debug/tracing/trace_pipe > ~/raw_trace_func + Wait some times but not too much, the script is a bit slow. + Break the pipe (Ctrl + Z) +- $ scripts/draw_functrace.py < raw_trace_func > draw_functrace ++ $ scripts/tracing/draw_functrace.py < ~/raw_trace_func > draw_functrace + Then you have your drawn trace in draw_functrace + """ + +@@ -103,10 +103,10 @@ def parseLine(line): + line = line.strip() + if line.startswith("#"): + raise CommentLineException +- m = re.match("[^]]+?\\] +([0-9.]+): (\\w+) <-(\\w+)", line) ++ m = re.match("[^]]+?\\] +([a-z.]+) +([0-9.]+): (\\w+) <-(\\w+)", line) + if m is None: + raise BrokenLineException +- return (m.group(1), m.group(2), m.group(3)) ++ return (m.group(2), m.group(3), m.group(4)) + + + def main(): +diff --git a/security/selinux/ss/policydb.c b/security/selinux/ss/policydb.c +index 9fccf417006b0..6a04de21343f8 100644 +--- a/security/selinux/ss/policydb.c ++++ b/security/selinux/ss/policydb.c +@@ -874,7 +874,7 @@ int policydb_load_isids(struct policydb *p, struct sidtab *s) + rc = sidtab_init(s); + if (rc) { + pr_err("SELinux: out of memory on SID table init\n"); +- goto out; ++ return rc; + } + + head = p->ocontexts[OCON_ISID]; +@@ -885,7 
+885,7 @@ int policydb_load_isids(struct policydb *p, struct sidtab *s) + if (sid == SECSID_NULL) { + pr_err("SELinux: SID 0 was assigned a context.\n"); + sidtab_destroy(s); +- goto out; ++ return -EINVAL; + } + + /* Ignore initial SIDs unused by this kernel. */ +@@ -897,12 +897,10 @@ int policydb_load_isids(struct policydb *p, struct sidtab *s) + pr_err("SELinux: unable to load initial SID %s.\n", + name); + sidtab_destroy(s); +- goto out; ++ return rc; + } + } +- rc = 0; +-out: +- return rc; ++ return 0; + } + + int policydb_class_isvalid(struct policydb *p, unsigned int class) +diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c +index a9fd486089a7a..2b3c164d21f17 100644 +--- a/sound/core/pcm_native.c ++++ b/sound/core/pcm_native.c +@@ -246,7 +246,7 @@ static bool hw_support_mmap(struct snd_pcm_substream *substream) + if (!(substream->runtime->hw.info & SNDRV_PCM_INFO_MMAP)) + return false; + +- if (substream->ops->mmap) ++ if (substream->ops->mmap || substream->ops->page) + return true; + + switch (substream->dma_buffer.dev.type) { +diff --git a/sound/core/seq/seq_ports.c b/sound/core/seq/seq_ports.c +index b9c2ce2b8d5a3..84d78630463e4 100644 +--- a/sound/core/seq/seq_ports.c ++++ b/sound/core/seq/seq_ports.c +@@ -514,10 +514,11 @@ static int check_and_subscribe_port(struct snd_seq_client *client, + return err; + } + +-static void delete_and_unsubscribe_port(struct snd_seq_client *client, +- struct snd_seq_client_port *port, +- struct snd_seq_subscribers *subs, +- bool is_src, bool ack) ++/* called with grp->list_mutex held */ ++static void __delete_and_unsubscribe_port(struct snd_seq_client *client, ++ struct snd_seq_client_port *port, ++ struct snd_seq_subscribers *subs, ++ bool is_src, bool ack) + { + struct snd_seq_port_subs_info *grp; + struct list_head *list; +@@ -525,7 +526,6 @@ static void delete_and_unsubscribe_port(struct snd_seq_client *client, + + grp = is_src ? &port->c_src : &port->c_dest; + list = is_src ? 
&subs->src_list : &subs->dest_list; +- down_write(&grp->list_mutex); + write_lock_irq(&grp->list_lock); + empty = list_empty(list); + if (!empty) +@@ -535,6 +535,18 @@ static void delete_and_unsubscribe_port(struct snd_seq_client *client, + + if (!empty) + unsubscribe_port(client, port, grp, &subs->info, ack); ++} ++ ++static void delete_and_unsubscribe_port(struct snd_seq_client *client, ++ struct snd_seq_client_port *port, ++ struct snd_seq_subscribers *subs, ++ bool is_src, bool ack) ++{ ++ struct snd_seq_port_subs_info *grp; ++ ++ grp = is_src ? &port->c_src : &port->c_dest; ++ down_write(&grp->list_mutex); ++ __delete_and_unsubscribe_port(client, port, subs, is_src, ack); + up_write(&grp->list_mutex); + } + +@@ -590,27 +602,30 @@ int snd_seq_port_disconnect(struct snd_seq_client *connector, + struct snd_seq_client_port *dest_port, + struct snd_seq_port_subscribe *info) + { +- struct snd_seq_port_subs_info *src = &src_port->c_src; ++ struct snd_seq_port_subs_info *dest = &dest_port->c_dest; + struct snd_seq_subscribers *subs; + int err = -ENOENT; + +- down_write(&src->list_mutex); ++ /* always start from deleting the dest port for avoiding concurrent ++ * deletions ++ */ ++ down_write(&dest->list_mutex); + /* look for the connection */ +- list_for_each_entry(subs, &src->list_head, src_list) { ++ list_for_each_entry(subs, &dest->list_head, dest_list) { + if (match_subs_info(info, &subs->info)) { +- atomic_dec(&subs->ref_count); /* mark as not ready */ ++ __delete_and_unsubscribe_port(dest_client, dest_port, ++ subs, false, ++ connector->number != dest_client->number); + err = 0; + break; + } + } +- up_write(&src->list_mutex); ++ up_write(&dest->list_mutex); + if (err < 0) + return err; + + delete_and_unsubscribe_port(src_client, src_port, subs, true, + connector->number != src_client->number); +- delete_and_unsubscribe_port(dest_client, dest_port, subs, false, +- connector->number != dest_client->number); + kfree(subs); + return 0; + } +diff --git 
a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c +index 19a3ae79c0012..c92d9b9cf9441 100644 +--- a/sound/pci/hda/patch_realtek.c ++++ b/sound/pci/hda/patch_realtek.c +@@ -8214,9 +8214,11 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { + SND_PCI_QUIRK(0x1025, 0x1290, "Acer Veriton Z4860G", ALC286_FIXUP_ACER_AIO_HEADSET_MIC), + SND_PCI_QUIRK(0x1025, 0x1291, "Acer Veriton Z4660G", ALC286_FIXUP_ACER_AIO_HEADSET_MIC), + SND_PCI_QUIRK(0x1025, 0x129c, "Acer SWIFT SF314-55", ALC256_FIXUP_ACER_HEADSET_MIC), ++ SND_PCI_QUIRK(0x1025, 0x1300, "Acer SWIFT SF314-56", ALC256_FIXUP_ACER_MIC_NO_PRESENCE), + SND_PCI_QUIRK(0x1025, 0x1308, "Acer Aspire Z24-890", ALC286_FIXUP_ACER_AIO_HEADSET_MIC), + SND_PCI_QUIRK(0x1025, 0x132a, "Acer TravelMate B114-21", ALC233_FIXUP_ACER_HEADSET_MIC), + SND_PCI_QUIRK(0x1025, 0x1330, "Acer TravelMate X514-51T", ALC255_FIXUP_ACER_HEADSET_MIC), ++ SND_PCI_QUIRK(0x1025, 0x142b, "Acer Swift SF314-42", ALC255_FIXUP_ACER_MIC_NO_PRESENCE), + SND_PCI_QUIRK(0x1025, 0x1430, "Acer TravelMate B311R-31", ALC256_FIXUP_ACER_MIC_NO_PRESENCE), + SND_PCI_QUIRK(0x1025, 0x1466, "Acer Aspire A515-56", ALC255_FIXUP_ACER_HEADPHONE_AND_MIC), + SND_PCI_QUIRK(0x1028, 0x0470, "Dell M101z", ALC269_FIXUP_DELL_M101Z), +diff --git a/sound/usb/card.c b/sound/usb/card.c +index 2f6a62416c057..a1f8c3a026f57 100644 +--- a/sound/usb/card.c ++++ b/sound/usb/card.c +@@ -907,7 +907,7 @@ static void usb_audio_disconnect(struct usb_interface *intf) + } + } + +- if (chip->quirk_type & QUIRK_SETUP_DISABLE_AUTOSUSPEND) ++ if (chip->quirk_type == QUIRK_SETUP_DISABLE_AUTOSUSPEND) + usb_enable_autosuspend(interface_to_usbdev(intf)); + + chip->num_interfaces--; +diff --git a/sound/usb/clock.c b/sound/usb/clock.c +index 17bbde73d4d15..14772209194bc 100644 +--- a/sound/usb/clock.c ++++ b/sound/usb/clock.c +@@ -325,6 +325,12 @@ static int __uac_clock_find_source(struct snd_usb_audio *chip, + selector->baCSourceID[ret - 1], + visited, validate); + if (ret > 0) { ++ /* ++ * 
For Samsung USBC Headset (AKG), setting clock selector again ++ * will result in incorrect default clock setting problems ++ */ ++ if (chip->usb_id == USB_ID(0x04e8, 0xa051)) ++ return ret; + err = uac_clock_selector_set_val(chip, entity_id, cur); + if (err < 0) + return err; +diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c +index f4cdaf1ba44ac..9b713b4a5ec4c 100644 +--- a/sound/usb/mixer.c ++++ b/sound/usb/mixer.c +@@ -1816,6 +1816,15 @@ static void get_connector_control_name(struct usb_mixer_interface *mixer, + strlcat(name, " - Output Jack", name_size); + } + ++/* get connector value to "wake up" the USB audio */ ++static int connector_mixer_resume(struct usb_mixer_elem_list *list) ++{ ++ struct usb_mixer_elem_info *cval = mixer_elem_list_to_info(list); ++ ++ get_connector_value(cval, NULL, NULL); ++ return 0; ++} ++ + /* Build a mixer control for a UAC connector control (jack-detect) */ + static void build_connector_control(struct usb_mixer_interface *mixer, + const struct usbmix_name_map *imap, +@@ -1833,6 +1842,10 @@ static void build_connector_control(struct usb_mixer_interface *mixer, + if (!cval) + return; + snd_usb_mixer_elem_init_std(&cval->head, mixer, term->id); ++ ++ /* set up a specific resume callback */ ++ cval->head.resume = connector_mixer_resume; ++ + /* + * UAC2: The first byte from reading the UAC2_TE_CONNECTOR control returns the + * number of channels connected. 
+@@ -3642,23 +3655,15 @@ static int restore_mixer_value(struct usb_mixer_elem_list *list) + return 0; + } + +-static int default_mixer_resume(struct usb_mixer_elem_list *list) +-{ +- struct usb_mixer_elem_info *cval = mixer_elem_list_to_info(list); +- +- /* get connector value to "wake up" the USB audio */ +- if (cval->val_type == USB_MIXER_BOOLEAN && cval->channels == 1) +- get_connector_value(cval, NULL, NULL); +- +- return 0; +-} +- + static int default_mixer_reset_resume(struct usb_mixer_elem_list *list) + { +- int err = default_mixer_resume(list); ++ int err; + +- if (err < 0) +- return err; ++ if (list->resume) { ++ err = list->resume(list); ++ if (err < 0) ++ return err; ++ } + return restore_mixer_value(list); + } + +@@ -3697,7 +3702,7 @@ void snd_usb_mixer_elem_init_std(struct usb_mixer_elem_list *list, + list->id = unitid; + list->dump = snd_usb_mixer_dump_cval; + #ifdef CONFIG_PM +- list->resume = default_mixer_resume; ++ list->resume = NULL; + list->reset_resume = default_mixer_reset_resume; + #endif + } +diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c +index e7accd87e0632..326d1b0ea5e69 100644 +--- a/sound/usb/quirks.c ++++ b/sound/usb/quirks.c +@@ -1899,6 +1899,7 @@ static const struct registration_quirk registration_quirks[] = { + REG_QUIRK_ENTRY(0x0951, 0x16ea, 2), /* Kingston HyperX Cloud Flight S */ + REG_QUIRK_ENTRY(0x0ecb, 0x1f46, 2), /* JBL Quantum 600 */ + REG_QUIRK_ENTRY(0x0ecb, 0x2039, 2), /* JBL Quantum 400 */ ++ REG_QUIRK_ENTRY(0x0ecb, 0x203c, 2), /* JBL Quantum 600 */ + REG_QUIRK_ENTRY(0x0ecb, 0x203e, 2), /* JBL Quantum 800 */ + { 0 } /* terminator */ + }; +diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c +index 0119466677b7d..1dcc66060a19a 100644 +--- a/virt/kvm/kvm_main.c ++++ b/virt/kvm/kvm_main.c +@@ -845,6 +845,8 @@ static void kvm_destroy_vm_debugfs(struct kvm *kvm) + + static int kvm_create_vm_debugfs(struct kvm *kvm, int fd) + { ++ static DEFINE_MUTEX(kvm_debugfs_lock); ++ struct dentry *dent; + char 
dir_name[ITOA_MAX_LEN * 2]; + struct kvm_stat_data *stat_data; + struct kvm_stats_debugfs_item *p; +@@ -853,8 +855,20 @@ static int kvm_create_vm_debugfs(struct kvm *kvm, int fd) + return 0; + + snprintf(dir_name, sizeof(dir_name), "%d-%d", task_pid_nr(current), fd); +- kvm->debugfs_dentry = debugfs_create_dir(dir_name, kvm_debugfs_dir); ++ mutex_lock(&kvm_debugfs_lock); ++ dent = debugfs_lookup(dir_name, kvm_debugfs_dir); ++ if (dent) { ++ pr_warn_ratelimited("KVM: debugfs: duplicate directory %s\n", dir_name); ++ dput(dent); ++ mutex_unlock(&kvm_debugfs_lock); ++ return 0; ++ } ++ dent = debugfs_create_dir(dir_name, kvm_debugfs_dir); ++ mutex_unlock(&kvm_debugfs_lock); ++ if (IS_ERR(dent)) ++ return 0; + ++ kvm->debugfs_dentry = dent; + kvm->debugfs_stat_data = kcalloc(kvm_debugfs_num_entries, + sizeof(*kvm->debugfs_stat_data), + GFP_KERNEL_ACCOUNT); +@@ -4993,7 +5007,7 @@ static void kvm_uevent_notify_change(unsigned int type, struct kvm *kvm) + } + add_uevent_var(env, "PID=%d", kvm->userspace_pid); + +- if (!IS_ERR_OR_NULL(kvm->debugfs_dentry)) { ++ if (kvm->debugfs_dentry) { + char *tmp, *p = kmalloc(PATH_MAX, GFP_KERNEL_ACCOUNT); + + if (p) {