From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from lists.gentoo.org (pigeon.gentoo.org [208.92.234.80]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits)) (No client certificate requested) by finch.gentoo.org (Postfix) with ESMTPS id 29678158008 for ; Wed, 14 Jun 2023 10:17:01 +0000 (UTC)
Received: from pigeon.gentoo.org (localhost [127.0.0.1]) by pigeon.gentoo.org (Postfix) with SMTP id 5F6ABE088C; Wed, 14 Jun 2023 10:17:00 +0000 (UTC)
Received: from smtp.gentoo.org (woodpecker.gentoo.org [140.211.166.183]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by pigeon.gentoo.org (Postfix) with ESMTPS id E4821E088C for ; Wed, 14 Jun 2023 10:16:59 +0000 (UTC)
Received: from oystercatcher.gentoo.org (oystercatcher.gentoo.org [148.251.78.52]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by smtp.gentoo.org (Postfix) with ESMTPS id 4200A33FE74 for ; Wed, 14 Jun 2023 10:16:58 +0000 (UTC)
Received: from localhost.localdomain (localhost [IPv6:::1]) by oystercatcher.gentoo.org (Postfix) with ESMTP id C8748A66 for ; Wed, 14 Jun 2023 10:16:56 +0000 (UTC)
From: "Mike Pagano" 
To: gentoo-commits@lists.gentoo.org
Content-Transfer-Encoding: 8bit
Content-type: text/plain; charset=UTF-8
Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano" 
Message-ID: <1686737802.e53984ce4fd0952e980a7418817445b3725c8096.mpagano@gentoo>
Subject: [gentoo-commits] proj/linux-patches:6.3 commit in: /
X-VCS-Repository: proj/linux-patches
X-VCS-Files: 0000_README 1007_linux-6.3.8.patch
X-VCS-Directories: /
X-VCS-Committer: mpagano
X-VCS-Committer-Name: Mike Pagano
X-VCS-Revision: e53984ce4fd0952e980a7418817445b3725c8096
X-VCS-Branch: 6.3
Date: Wed, 14 Jun 2023 10:16:56 +0000 (UTC)
Precedence: bulk
List-Post: 
List-Help: 
List-Unsubscribe: 
List-Subscribe: 
List-Id: Gentoo Linux mail 
X-BeenThere: gentoo-commits@lists.gentoo.org
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply
X-Archives-Salt: aaf4dbbf-e6b3-4ac3-8ac1-56e60937ccad
X-Archives-Hash: 3aec3aef55bdebf98d49b6a625884523

commit:     e53984ce4fd0952e980a7418817445b3725c8096
Author:     Mike Pagano  gentoo org>
AuthorDate: Wed Jun 14 10:16:42 2023 +0000
Commit:     Mike Pagano  gentoo org>
CommitDate: Wed Jun 14 10:16:42 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e53984ce

Linux patch 6.3.8

Signed-off-by: Mike Pagano  gentoo.org>

 0000_README            |    4 +
 1007_linux-6.3.8.patch | 7325 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 7329 insertions(+)

diff --git a/0000_README b/0000_README
index ac375662..5d0c85ce 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch:  1006_linux-6.3.7.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.3.7
 
+Patch:  1007_linux-6.3.8.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.3.8
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1007_linux-6.3.8.patch b/1007_linux-6.3.8.patch
new file mode 100644
index 00000000..85b81339
--- /dev/null
+++ b/1007_linux-6.3.8.patch
@@ -0,0 +1,7325 @@
+diff --git a/Documentation/mm/page_table_check.rst b/Documentation/mm/page_table_check.rst
+index cfd8f4117cf3e..c12838ce6b8de 100644
+--- a/Documentation/mm/page_table_check.rst
++++ b/Documentation/mm/page_table_check.rst
+@@ -52,3 +52,22 @@ Build kernel with:
+ 
+ Optionally, build kernel with PAGE_TABLE_CHECK_ENFORCED in order to have page
+ table support without extra kernel parameter.
++
++Implementation notes
++====================
++
++We specifically decided not to use VMA information in order to avoid relying on
++MM states (except for limited "struct page" info). The page table check is a
++separate from Linux-MM state machine that verifies that the user accessible
++pages are not falsely shared.
++
++PAGE_TABLE_CHECK depends on EXCLUSIVE_SYSTEM_RAM. The reason is that without
++EXCLUSIVE_SYSTEM_RAM, users are allowed to map arbitrary physical memory
++regions into the userspace via /dev/mem. At the same time, pages may change
++their properties (e.g., from anonymous pages to named pages) while they are
++still being mapped in the userspace, leading to "corruption" detected by the
++page table check.
++
++Even with EXCLUSIVE_SYSTEM_RAM, I/O pages may be still allowed to be mapped via
++/dev/mem. However, these pages are always considered as named pages, so they
++won't break the logic used in the page table check.
+diff --git a/Documentation/networking/ip-sysctl.rst b/Documentation/networking/ip-sysctl.rst
+index 58a78a3166978..97ae2b5a6101c 100644
+--- a/Documentation/networking/ip-sysctl.rst
++++ b/Documentation/networking/ip-sysctl.rst
+@@ -1352,8 +1352,8 @@ ping_group_range - 2 INTEGERS
+ 	Restrict ICMP_PROTO datagram sockets to users in the group range.
+ 	The default is "1 0", meaning, that nobody (not even root) may
+ 	create ping sockets. Setting it to "100 100" would grant permissions
+-	to the single group. "0 4294967295" would enable it for the world, "100
+-	4294967295" would enable it for the users, but not daemons.
++	to the single group. "0 4294967294" would enable it for the world, "100
++	4294967294" would enable it for the users, but not daemons.
+ 
+ tcp_early_demux - BOOLEAN
+ 	Enable early demux for established TCP sockets.
+diff --git a/Makefile b/Makefile
+index 71c958fd52854..b4267d7a57b35 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 3
+-SUBLEVEL = 7
++SUBLEVEL = 8
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma ninja sloth
+ 
+diff --git a/arch/arm/boot/dts/at91-sama7g5ek.dts b/arch/arm/boot/dts/at91-sama7g5ek.dts
+index aa5cc0e98bbab..217e9b96c61e5 100644
+--- a/arch/arm/boot/dts/at91-sama7g5ek.dts
++++ b/arch/arm/boot/dts/at91-sama7g5ek.dts
+@@ -792,7 +792,7 @@
+ };
+ 
+ &shdwc {
+-	atmel,shdwc-debouncer = <976>;
++	debounce-delay-us = <976>;
+ 	status = "okay";
+ 
+ 	input@0 {
+diff --git a/arch/arm/mach-at91/pm.c b/arch/arm/mach-at91/pm.c
+index 60dc56d8acfb9..437dd0352fd44 100644
+--- a/arch/arm/mach-at91/pm.c
++++ b/arch/arm/mach-at91/pm.c
+@@ -334,16 +334,14 @@ static bool at91_pm_eth_quirk_is_valid(struct at91_pm_quirk_eth *eth)
+ 		pdev = of_find_device_by_node(eth->np);
+ 		if (!pdev)
+ 			return false;
++		/* put_device(eth->dev) is called at the end of suspend. */
+ 		eth->dev = &pdev->dev;
+ 	}
+ 
+ 	/* No quirks if device isn't a wakeup source. */
+-	if (!device_may_wakeup(eth->dev)) {
+-		put_device(eth->dev);
++	if (!device_may_wakeup(eth->dev))
+ 		return false;
+-	}
+ 
+-	/* put_device(eth->dev) is called at the end of suspend. */
+ 	return true;
+ }
+ 
+@@ -439,14 +437,14 @@ clk_unconfigure:
+ 			pr_err("AT91: PM: failed to enable %s clocks\n",
+ 			       j == AT91_PM_G_ETH ? "geth" : "eth");
+ 		}
+-	} else {
+-		/*
+-		 * Release the reference to eth->dev taken in
+-		 * at91_pm_eth_quirk_is_valid().
+-		 */
+-		put_device(eth->dev);
+-		eth->dev = NULL;
+ 	}
++
++	/*
++	 * Release the reference to eth->dev taken in
++	 * at91_pm_eth_quirk_is_valid().
++ */ ++ put_device(eth->dev); ++ eth->dev = NULL; + } + + return ret; +diff --git a/arch/arm64/boot/dts/freescale/imx8-ss-dma.dtsi b/arch/arm64/boot/dts/freescale/imx8-ss-dma.dtsi +index a943a1e2797f4..21345ae14eb25 100644 +--- a/arch/arm64/boot/dts/freescale/imx8-ss-dma.dtsi ++++ b/arch/arm64/boot/dts/freescale/imx8-ss-dma.dtsi +@@ -90,6 +90,8 @@ dma_subsys: bus@5a000000 { + clocks = <&uart0_lpcg IMX_LPCG_CLK_4>, + <&uart0_lpcg IMX_LPCG_CLK_0>; + clock-names = "ipg", "baud"; ++ assigned-clocks = <&clk IMX_SC_R_UART_0 IMX_SC_PM_CLK_PER>; ++ assigned-clock-rates = <80000000>; + power-domains = <&pd IMX_SC_R_UART_0>; + status = "disabled"; + }; +@@ -100,6 +102,8 @@ dma_subsys: bus@5a000000 { + clocks = <&uart1_lpcg IMX_LPCG_CLK_4>, + <&uart1_lpcg IMX_LPCG_CLK_0>; + clock-names = "ipg", "baud"; ++ assigned-clocks = <&clk IMX_SC_R_UART_1 IMX_SC_PM_CLK_PER>; ++ assigned-clock-rates = <80000000>; + power-domains = <&pd IMX_SC_R_UART_1>; + status = "disabled"; + }; +@@ -110,6 +114,8 @@ dma_subsys: bus@5a000000 { + clocks = <&uart2_lpcg IMX_LPCG_CLK_4>, + <&uart2_lpcg IMX_LPCG_CLK_0>; + clock-names = "ipg", "baud"; ++ assigned-clocks = <&clk IMX_SC_R_UART_2 IMX_SC_PM_CLK_PER>; ++ assigned-clock-rates = <80000000>; + power-domains = <&pd IMX_SC_R_UART_2>; + status = "disabled"; + }; +@@ -120,6 +126,8 @@ dma_subsys: bus@5a000000 { + clocks = <&uart3_lpcg IMX_LPCG_CLK_4>, + <&uart3_lpcg IMX_LPCG_CLK_0>; + clock-names = "ipg", "baud"; ++ assigned-clocks = <&clk IMX_SC_R_UART_3 IMX_SC_PM_CLK_PER>; ++ assigned-clock-rates = <80000000>; + power-domains = <&pd IMX_SC_R_UART_3>; + status = "disabled"; + }; +diff --git a/arch/arm64/boot/dts/freescale/imx8mn-beacon-baseboard.dtsi b/arch/arm64/boot/dts/freescale/imx8mn-beacon-baseboard.dtsi +index 9e82069c941fa..5a1f7c30afe57 100644 +--- a/arch/arm64/boot/dts/freescale/imx8mn-beacon-baseboard.dtsi ++++ b/arch/arm64/boot/dts/freescale/imx8mn-beacon-baseboard.dtsi +@@ -81,7 +81,7 @@ + &ecspi2 { + pinctrl-names = "default"; + pinctrl-0 
= <&pinctrl_espi2>; +- cs-gpios = <&gpio5 9 GPIO_ACTIVE_LOW>; ++ cs-gpios = <&gpio5 13 GPIO_ACTIVE_LOW>; + status = "okay"; + + eeprom@0 { +@@ -202,7 +202,7 @@ + MX8MN_IOMUXC_ECSPI2_SCLK_ECSPI2_SCLK 0x82 + MX8MN_IOMUXC_ECSPI2_MOSI_ECSPI2_MOSI 0x82 + MX8MN_IOMUXC_ECSPI2_MISO_ECSPI2_MISO 0x82 +- MX8MN_IOMUXC_ECSPI1_SS0_GPIO5_IO9 0x41 ++ MX8MN_IOMUXC_ECSPI2_SS0_GPIO5_IO13 0x41 + >; + }; + +diff --git a/arch/arm64/boot/dts/freescale/imx8qm-mek.dts b/arch/arm64/boot/dts/freescale/imx8qm-mek.dts +index ce9d3f0b98fc0..607cd6b4e9721 100644 +--- a/arch/arm64/boot/dts/freescale/imx8qm-mek.dts ++++ b/arch/arm64/boot/dts/freescale/imx8qm-mek.dts +@@ -82,8 +82,8 @@ + pinctrl-0 = <&pinctrl_usdhc2>; + bus-width = <4>; + vmmc-supply = <®_usdhc2_vmmc>; +- cd-gpios = <&lsio_gpio4 22 GPIO_ACTIVE_LOW>; +- wp-gpios = <&lsio_gpio4 21 GPIO_ACTIVE_HIGH>; ++ cd-gpios = <&lsio_gpio5 22 GPIO_ACTIVE_LOW>; ++ wp-gpios = <&lsio_gpio5 21 GPIO_ACTIVE_HIGH>; + status = "okay"; + }; + +diff --git a/arch/arm64/boot/dts/qcom/sc7180-lite.dtsi b/arch/arm64/boot/dts/qcom/sc7180-lite.dtsi +index d8ed1d7b4ec76..4b306a59d9bec 100644 +--- a/arch/arm64/boot/dts/qcom/sc7180-lite.dtsi ++++ b/arch/arm64/boot/dts/qcom/sc7180-lite.dtsi +@@ -16,3 +16,11 @@ + &cpu6_opp12 { + opp-peak-kBps = <8532000 23347200>; + }; ++ ++&cpu6_opp13 { ++ opp-peak-kBps = <8532000 23347200>; ++}; ++ ++&cpu6_opp14 { ++ opp-peak-kBps = <8532000 23347200>; ++}; +diff --git a/arch/arm64/boot/dts/qcom/sc8280xp.dtsi b/arch/arm64/boot/dts/qcom/sc8280xp.dtsi +index 03b679b75201d..f081ca449699a 100644 +--- a/arch/arm64/boot/dts/qcom/sc8280xp.dtsi ++++ b/arch/arm64/boot/dts/qcom/sc8280xp.dtsi +@@ -3957,6 +3957,7 @@ + qcom,tcs-config = , , + , ; + label = "apps_rsc"; ++ power-domains = <&CLUSTER_PD>; + + apps_bcm_voter: bcm-voter { + compatible = "qcom,bcm-voter"; +diff --git a/arch/arm64/boot/dts/qcom/sm6375-sony-xperia-murray-pdx225.dts b/arch/arm64/boot/dts/qcom/sm6375-sony-xperia-murray-pdx225.dts +index b691c3834b6b6..71970dd3fc1ad 100644 
+--- a/arch/arm64/boot/dts/qcom/sm6375-sony-xperia-murray-pdx225.dts ++++ b/arch/arm64/boot/dts/qcom/sm6375-sony-xperia-murray-pdx225.dts +@@ -151,12 +151,12 @@ + }; + + &remoteproc_adsp { +- firmware-name = "qcom/Sony/murray/adsp.mbn"; ++ firmware-name = "qcom/sm6375/Sony/murray/adsp.mbn"; + status = "okay"; + }; + + &remoteproc_cdsp { +- firmware-name = "qcom/Sony/murray/cdsp.mbn"; ++ firmware-name = "qcom/sm6375/Sony/murray/cdsp.mbn"; + status = "okay"; + }; + +diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig +index eb7f29a412f87..b462ed7d41fe1 100644 +--- a/arch/riscv/Kconfig ++++ b/arch/riscv/Kconfig +@@ -25,6 +25,7 @@ config RISCV + select ARCH_HAS_GIGANTIC_PAGE + select ARCH_HAS_KCOV + select ARCH_HAS_MMIOWB ++ select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE + select ARCH_HAS_PMEM_API + select ARCH_HAS_PTE_SPECIAL + select ARCH_HAS_SET_DIRECT_MAP if MMU +diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h +index f641837ccf31d..05eda3281ba90 100644 +--- a/arch/riscv/include/asm/pgtable.h ++++ b/arch/riscv/include/asm/pgtable.h +@@ -165,8 +165,7 @@ extern struct pt_alloc_ops pt_ops __initdata; + _PAGE_EXEC | _PAGE_WRITE) + + #define PAGE_COPY PAGE_READ +-#define PAGE_COPY_EXEC PAGE_EXEC +-#define PAGE_COPY_READ_EXEC PAGE_READ_EXEC ++#define PAGE_COPY_EXEC PAGE_READ_EXEC + #define PAGE_SHARED PAGE_WRITE + #define PAGE_SHARED_EXEC PAGE_WRITE_EXEC + +diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c +index dc1793bf01796..309d685d70267 100644 +--- a/arch/riscv/mm/init.c ++++ b/arch/riscv/mm/init.c +@@ -286,7 +286,7 @@ static const pgprot_t protection_map[16] = { + [VM_EXEC] = PAGE_EXEC, + [VM_EXEC | VM_READ] = PAGE_READ_EXEC, + [VM_EXEC | VM_WRITE] = PAGE_COPY_EXEC, +- [VM_EXEC | VM_WRITE | VM_READ] = PAGE_COPY_READ_EXEC, ++ [VM_EXEC | VM_WRITE | VM_READ] = PAGE_COPY_EXEC, + [VM_SHARED] = PAGE_NONE, + [VM_SHARED | VM_READ] = PAGE_READ, + [VM_SHARED | VM_WRITE] = PAGE_SHARED, +diff --git a/block/blk-mq.c b/block/blk-mq.c +index 
ae08c4936743d..f2e2ffd135baf 100644 +--- a/block/blk-mq.c ++++ b/block/blk-mq.c +@@ -717,6 +717,10 @@ static void __blk_mq_free_request(struct request *rq) + blk_crypto_free_request(rq); + blk_pm_mark_last_busy(rq); + rq->mq_hctx = NULL; ++ ++ if (rq->rq_flags & RQF_MQ_INFLIGHT) ++ __blk_mq_dec_active_requests(hctx); ++ + if (rq->tag != BLK_MQ_NO_TAG) + blk_mq_put_tag(hctx->tags, ctx, rq->tag); + if (sched_tag != BLK_MQ_NO_TAG) +@@ -728,15 +732,11 @@ static void __blk_mq_free_request(struct request *rq) + void blk_mq_free_request(struct request *rq) + { + struct request_queue *q = rq->q; +- struct blk_mq_hw_ctx *hctx = rq->mq_hctx; + + if ((rq->rq_flags & RQF_ELVPRIV) && + q->elevator->type->ops.finish_request) + q->elevator->type->ops.finish_request(rq); + +- if (rq->rq_flags & RQF_MQ_INFLIGHT) +- __blk_mq_dec_active_requests(hctx); +- + if (unlikely(laptop_mode && !blk_rq_is_passthrough(rq))) + laptop_io_completion(q->disk->bdi); + +diff --git a/drivers/accel/ivpu/Kconfig b/drivers/accel/ivpu/Kconfig +index 9bdf168bf1d0e..1a4c4ed9d1136 100644 +--- a/drivers/accel/ivpu/Kconfig ++++ b/drivers/accel/ivpu/Kconfig +@@ -7,6 +7,7 @@ config DRM_ACCEL_IVPU + depends on PCI && PCI_MSI + select FW_LOADER + select SHMEM ++ select GENERIC_ALLOCATOR + help + Choose this option if you have a system that has an 14th generation Intel CPU + or newer. 
VPU stands for Versatile Processing Unit and it's a CPU-integrated +diff --git a/drivers/accel/ivpu/ivpu_hw_mtl.c b/drivers/accel/ivpu/ivpu_hw_mtl.c +index 382ec127be8ea..fef35422c6f0d 100644 +--- a/drivers/accel/ivpu/ivpu_hw_mtl.c ++++ b/drivers/accel/ivpu/ivpu_hw_mtl.c +@@ -197,6 +197,11 @@ static void ivpu_pll_init_frequency_ratios(struct ivpu_device *vdev) + hw->pll.pn_ratio = clamp_t(u8, fuse_pn_ratio, hw->pll.min_ratio, hw->pll.max_ratio); + } + ++static int ivpu_hw_mtl_wait_for_vpuip_bar(struct ivpu_device *vdev) ++{ ++ return REGV_POLL_FLD(MTL_VPU_HOST_SS_CPR_RST_CLR, AON, 0, 100); ++} ++ + static int ivpu_pll_drive(struct ivpu_device *vdev, bool enable) + { + struct ivpu_hw_info *hw = vdev->hw; +@@ -239,6 +244,12 @@ static int ivpu_pll_drive(struct ivpu_device *vdev, bool enable) + ivpu_err(vdev, "Timed out waiting for PLL ready status\n"); + return ret; + } ++ ++ ret = ivpu_hw_mtl_wait_for_vpuip_bar(vdev); ++ if (ret) { ++ ivpu_err(vdev, "Timed out waiting for VPUIP bar\n"); ++ return ret; ++ } + } + + return 0; +@@ -256,7 +267,7 @@ static int ivpu_pll_disable(struct ivpu_device *vdev) + + static void ivpu_boot_host_ss_rst_clr_assert(struct ivpu_device *vdev) + { +- u32 val = REGV_RD32(MTL_VPU_HOST_SS_CPR_RST_CLR); ++ u32 val = 0; + + val = REG_SET_FLD(MTL_VPU_HOST_SS_CPR_RST_CLR, TOP_NOC, val); + val = REG_SET_FLD(MTL_VPU_HOST_SS_CPR_RST_CLR, DSS_MAS, val); +@@ -754,9 +765,8 @@ static int ivpu_hw_mtl_power_down(struct ivpu_device *vdev) + { + int ret = 0; + +- if (ivpu_hw_mtl_reset(vdev)) { ++ if (!ivpu_hw_mtl_is_idle(vdev) && ivpu_hw_mtl_reset(vdev)) { + ivpu_err(vdev, "Failed to reset the VPU\n"); +- ret = -EIO; + } + + if (ivpu_pll_disable(vdev)) { +@@ -764,8 +774,10 @@ static int ivpu_hw_mtl_power_down(struct ivpu_device *vdev) + ret = -EIO; + } + +- if (ivpu_hw_mtl_d0i3_enable(vdev)) +- ivpu_warn(vdev, "Failed to enable D0I3\n"); ++ if (ivpu_hw_mtl_d0i3_enable(vdev)) { ++ ivpu_err(vdev, "Failed to enter D0I3\n"); ++ ret = -EIO; ++ } + + return ret; 
+ } +diff --git a/drivers/accel/ivpu/ivpu_hw_mtl_reg.h b/drivers/accel/ivpu/ivpu_hw_mtl_reg.h +index d83ccfd9a871b..593b8ff074170 100644 +--- a/drivers/accel/ivpu/ivpu_hw_mtl_reg.h ++++ b/drivers/accel/ivpu/ivpu_hw_mtl_reg.h +@@ -91,6 +91,7 @@ + #define MTL_VPU_HOST_SS_CPR_RST_SET_MSS_MAS_MASK BIT_MASK(11) + + #define MTL_VPU_HOST_SS_CPR_RST_CLR 0x00000098u ++#define MTL_VPU_HOST_SS_CPR_RST_CLR_AON_MASK BIT_MASK(0) + #define MTL_VPU_HOST_SS_CPR_RST_CLR_TOP_NOC_MASK BIT_MASK(1) + #define MTL_VPU_HOST_SS_CPR_RST_CLR_DSS_MAS_MASK BIT_MASK(10) + #define MTL_VPU_HOST_SS_CPR_RST_CLR_MSS_MAS_MASK BIT_MASK(11) +diff --git a/drivers/accel/ivpu/ivpu_ipc.c b/drivers/accel/ivpu/ivpu_ipc.c +index 3adcfa80fc0e5..fa0af59e39ab6 100644 +--- a/drivers/accel/ivpu/ivpu_ipc.c ++++ b/drivers/accel/ivpu/ivpu_ipc.c +@@ -183,9 +183,7 @@ ivpu_ipc_send(struct ivpu_device *vdev, struct ivpu_ipc_consumer *cons, struct v + struct ivpu_ipc_info *ipc = vdev->ipc; + int ret; + +- ret = mutex_lock_interruptible(&ipc->lock); +- if (ret) +- return ret; ++ mutex_lock(&ipc->lock); + + if (!ipc->on) { + ret = -EAGAIN; +diff --git a/drivers/accel/ivpu/ivpu_job.c b/drivers/accel/ivpu/ivpu_job.c +index 3c6f1e16cf2ff..d45be0615b476 100644 +--- a/drivers/accel/ivpu/ivpu_job.c ++++ b/drivers/accel/ivpu/ivpu_job.c +@@ -431,6 +431,7 @@ ivpu_job_prepare_bos_for_submit(struct drm_file *file, struct ivpu_job *job, u32 + struct ivpu_file_priv *file_priv = file->driver_priv; + struct ivpu_device *vdev = file_priv->vdev; + struct ww_acquire_ctx acquire_ctx; ++ enum dma_resv_usage usage; + struct ivpu_bo *bo; + int ret; + u32 i; +@@ -461,22 +462,28 @@ ivpu_job_prepare_bos_for_submit(struct drm_file *file, struct ivpu_job *job, u32 + + job->cmd_buf_vpu_addr = bo->vpu_addr + commands_offset; + +- ret = drm_gem_lock_reservations((struct drm_gem_object **)job->bos, 1, &acquire_ctx); ++ ret = drm_gem_lock_reservations((struct drm_gem_object **)job->bos, buf_count, ++ &acquire_ctx); + if (ret) { + ivpu_warn(vdev, "Failed to 
lock reservations: %d\n", ret); + return ret; + } + +- ret = dma_resv_reserve_fences(bo->base.resv, 1); +- if (ret) { +- ivpu_warn(vdev, "Failed to reserve fences: %d\n", ret); +- goto unlock_reservations; ++ for (i = 0; i < buf_count; i++) { ++ ret = dma_resv_reserve_fences(job->bos[i]->base.resv, 1); ++ if (ret) { ++ ivpu_warn(vdev, "Failed to reserve fences: %d\n", ret); ++ goto unlock_reservations; ++ } + } + +- dma_resv_add_fence(bo->base.resv, job->done_fence, DMA_RESV_USAGE_WRITE); ++ for (i = 0; i < buf_count; i++) { ++ usage = (i == CMD_BUF_IDX) ? DMA_RESV_USAGE_WRITE : DMA_RESV_USAGE_BOOKKEEP; ++ dma_resv_add_fence(job->bos[i]->base.resv, job->done_fence, usage); ++ } + + unlock_reservations: +- drm_gem_unlock_reservations((struct drm_gem_object **)job->bos, 1, &acquire_ctx); ++ drm_gem_unlock_reservations((struct drm_gem_object **)job->bos, buf_count, &acquire_ctx); + + wmb(); /* Flush write combining buffers */ + +diff --git a/drivers/accel/ivpu/ivpu_mmu.c b/drivers/accel/ivpu/ivpu_mmu.c +index 694e978aba663..b8b259b3aa635 100644 +--- a/drivers/accel/ivpu/ivpu_mmu.c ++++ b/drivers/accel/ivpu/ivpu_mmu.c +@@ -587,16 +587,11 @@ static int ivpu_mmu_strtab_init(struct ivpu_device *vdev) + int ivpu_mmu_invalidate_tlb(struct ivpu_device *vdev, u16 ssid) + { + struct ivpu_mmu_info *mmu = vdev->mmu; +- int ret; +- +- ret = mutex_lock_interruptible(&mmu->lock); +- if (ret) +- return ret; ++ int ret = 0; + +- if (!mmu->on) { +- ret = 0; ++ mutex_lock(&mmu->lock); ++ if (!mmu->on) + goto unlock; +- } + + ret = ivpu_mmu_cmdq_write_tlbi_nh_asid(vdev, ssid); + if (ret) +@@ -614,7 +609,7 @@ static int ivpu_mmu_cd_add(struct ivpu_device *vdev, u32 ssid, u64 cd_dma) + struct ivpu_mmu_cdtab *cdtab = &mmu->cdtab; + u64 *entry; + u64 cd[4]; +- int ret; ++ int ret = 0; + + if (ssid > IVPU_MMU_CDTAB_ENT_COUNT) + return -EINVAL; +@@ -655,14 +650,9 @@ static int ivpu_mmu_cd_add(struct ivpu_device *vdev, u32 ssid, u64 cd_dma) + ivpu_dbg(vdev, MMU, "CDTAB %s entry (SSID=%u, 
dma=%pad): 0x%llx, 0x%llx, 0x%llx, 0x%llx\n", + cd_dma ? "write" : "clear", ssid, &cd_dma, cd[0], cd[1], cd[2], cd[3]); + +- ret = mutex_lock_interruptible(&mmu->lock); +- if (ret) +- return ret; +- +- if (!mmu->on) { +- ret = 0; ++ mutex_lock(&mmu->lock); ++ if (!mmu->on) + goto unlock; +- } + + ret = ivpu_mmu_cmdq_write_cfgi_all(vdev); + if (ret) +diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c +index 5cb008b9700a0..f218a2b114b9b 100644 +--- a/drivers/block/rbd.c ++++ b/drivers/block/rbd.c +@@ -1334,14 +1334,30 @@ static bool rbd_obj_is_tail(struct rbd_obj_request *obj_req) + /* + * Must be called after rbd_obj_calc_img_extents(). + */ +-static bool rbd_obj_copyup_enabled(struct rbd_obj_request *obj_req) ++static void rbd_obj_set_copyup_enabled(struct rbd_obj_request *obj_req) + { +- if (!obj_req->num_img_extents || +- (rbd_obj_is_entire(obj_req) && +- !obj_req->img_request->snapc->num_snaps)) +- return false; ++ rbd_assert(obj_req->img_request->snapc); + +- return true; ++ if (obj_req->img_request->op_type == OBJ_OP_DISCARD) { ++ dout("%s %p objno %llu discard\n", __func__, obj_req, ++ obj_req->ex.oe_objno); ++ return; ++ } ++ ++ if (!obj_req->num_img_extents) { ++ dout("%s %p objno %llu not overlapping\n", __func__, obj_req, ++ obj_req->ex.oe_objno); ++ return; ++ } ++ ++ if (rbd_obj_is_entire(obj_req) && ++ !obj_req->img_request->snapc->num_snaps) { ++ dout("%s %p objno %llu entire\n", __func__, obj_req, ++ obj_req->ex.oe_objno); ++ return; ++ } ++ ++ obj_req->flags |= RBD_OBJ_FLAG_COPYUP_ENABLED; + } + + static u64 rbd_obj_img_extents_bytes(struct rbd_obj_request *obj_req) +@@ -1442,6 +1458,7 @@ __rbd_obj_add_osd_request(struct rbd_obj_request *obj_req, + static struct ceph_osd_request * + rbd_obj_add_osd_request(struct rbd_obj_request *obj_req, int num_ops) + { ++ rbd_assert(obj_req->img_request->snapc); + return __rbd_obj_add_osd_request(obj_req, obj_req->img_request->snapc, + num_ops); + } +@@ -1578,15 +1595,18 @@ static void 
rbd_img_request_init(struct rbd_img_request *img_request, + mutex_init(&img_request->state_mutex); + } + ++/* ++ * Only snap_id is captured here, for reads. For writes, snapshot ++ * context is captured in rbd_img_object_requests() after exclusive ++ * lock is ensured to be held. ++ */ + static void rbd_img_capture_header(struct rbd_img_request *img_req) + { + struct rbd_device *rbd_dev = img_req->rbd_dev; + + lockdep_assert_held(&rbd_dev->header_rwsem); + +- if (rbd_img_is_write(img_req)) +- img_req->snapc = ceph_get_snap_context(rbd_dev->header.snapc); +- else ++ if (!rbd_img_is_write(img_req)) + img_req->snap_id = rbd_dev->spec->snap_id; + + if (rbd_dev_parent_get(rbd_dev)) +@@ -2233,9 +2253,6 @@ static int rbd_obj_init_write(struct rbd_obj_request *obj_req) + if (ret) + return ret; + +- if (rbd_obj_copyup_enabled(obj_req)) +- obj_req->flags |= RBD_OBJ_FLAG_COPYUP_ENABLED; +- + obj_req->write_state = RBD_OBJ_WRITE_START; + return 0; + } +@@ -2341,8 +2358,6 @@ static int rbd_obj_init_zeroout(struct rbd_obj_request *obj_req) + if (ret) + return ret; + +- if (rbd_obj_copyup_enabled(obj_req)) +- obj_req->flags |= RBD_OBJ_FLAG_COPYUP_ENABLED; + if (!obj_req->num_img_extents) { + obj_req->flags |= RBD_OBJ_FLAG_NOOP_FOR_NONEXISTENT; + if (rbd_obj_is_entire(obj_req)) +@@ -3286,6 +3301,7 @@ again: + case RBD_OBJ_WRITE_START: + rbd_assert(!*result); + ++ rbd_obj_set_copyup_enabled(obj_req); + if (rbd_obj_write_is_noop(obj_req)) + return true; + +@@ -3472,9 +3488,19 @@ static int rbd_img_exclusive_lock(struct rbd_img_request *img_req) + + static void rbd_img_object_requests(struct rbd_img_request *img_req) + { ++ struct rbd_device *rbd_dev = img_req->rbd_dev; + struct rbd_obj_request *obj_req; + + rbd_assert(!img_req->pending.result && !img_req->pending.num_pending); ++ rbd_assert(!need_exclusive_lock(img_req) || ++ __rbd_is_lock_owner(rbd_dev)); ++ ++ if (rbd_img_is_write(img_req)) { ++ rbd_assert(!img_req->snapc); ++ down_read(&rbd_dev->header_rwsem); ++ img_req->snapc = 
ceph_get_snap_context(rbd_dev->header.snapc); ++ up_read(&rbd_dev->header_rwsem); ++ } + + for_each_obj_request(img_req, obj_req) { + int result = 0; +@@ -3492,7 +3518,6 @@ static void rbd_img_object_requests(struct rbd_img_request *img_req) + + static bool rbd_img_advance(struct rbd_img_request *img_req, int *result) + { +- struct rbd_device *rbd_dev = img_req->rbd_dev; + int ret; + + again: +@@ -3513,9 +3538,6 @@ again: + if (*result) + return true; + +- rbd_assert(!need_exclusive_lock(img_req) || +- __rbd_is_lock_owner(rbd_dev)); +- + rbd_img_object_requests(img_req); + if (!img_req->pending.num_pending) { + *result = img_req->pending.result; +@@ -3977,6 +3999,10 @@ static int rbd_post_acquire_action(struct rbd_device *rbd_dev) + { + int ret; + ++ ret = rbd_dev_refresh(rbd_dev); ++ if (ret) ++ return ret; ++ + if (rbd_dev->header.features & RBD_FEATURE_OBJECT_MAP) { + ret = rbd_object_map_open(rbd_dev); + if (ret) +diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c +index 3df8c3606e933..bd3b65a13e741 100644 +--- a/drivers/bluetooth/hci_qca.c ++++ b/drivers/bluetooth/hci_qca.c +@@ -78,7 +78,8 @@ enum qca_flags { + QCA_HW_ERROR_EVENT, + QCA_SSR_TRIGGERED, + QCA_BT_OFF, +- QCA_ROM_FW ++ QCA_ROM_FW, ++ QCA_DEBUGFS_CREATED, + }; + + enum qca_capabilities { +@@ -635,6 +636,9 @@ static void qca_debugfs_init(struct hci_dev *hdev) + if (!hdev->debugfs) + return; + ++ if (test_and_set_bit(QCA_DEBUGFS_CREATED, &qca->flags)) ++ return; ++ + ibs_dir = debugfs_create_dir("ibs", hdev->debugfs); + + /* read only */ +diff --git a/drivers/firmware/arm_ffa/driver.c b/drivers/firmware/arm_ffa/driver.c +index e234091386671..2109cd178ff70 100644 +--- a/drivers/firmware/arm_ffa/driver.c ++++ b/drivers/firmware/arm_ffa/driver.c +@@ -424,6 +424,7 @@ ffa_setup_and_transmit(u32 func_id, void *buffer, u32 max_fragsize, + ep_mem_access->flag = 0; + ep_mem_access->reserved = 0; + } ++ mem_region->handle = 0; + mem_region->reserved_0 = 0; + mem_region->reserved_1 = 0; + 
mem_region->ep_count = args->nattrs; +diff --git a/drivers/gpio/gpio-sim.c b/drivers/gpio/gpio-sim.c +index e5dfd636c63c1..09aa0b64859b4 100644 +--- a/drivers/gpio/gpio-sim.c ++++ b/drivers/gpio/gpio-sim.c +@@ -721,8 +721,10 @@ static char **gpio_sim_make_line_names(struct gpio_sim_bank *bank, + if (!line_names) + return ERR_PTR(-ENOMEM); + +- list_for_each_entry(line, &bank->line_list, siblings) +- line_names[line->offset] = line->name; ++ list_for_each_entry(line, &bank->line_list, siblings) { ++ if (line->name && (line->offset <= max_offset)) ++ line_names[line->offset] = line->name; ++ } + + return line_names; + } +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c +index aeeec211861c4..e1b01554e3231 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c +@@ -1092,16 +1092,20 @@ bool amdgpu_acpi_is_s0ix_active(struct amdgpu_device *adev) + * S0ix even though the system is suspending to idle, so return false + * in that case. 
+ */ +- if (!(acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0)) ++ if (!(acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0)) { + dev_warn_once(adev->dev, + "Power consumption will be higher as BIOS has not been configured for suspend-to-idle.\n" + "To use suspend-to-idle change the sleep mode in BIOS setup.\n"); ++ return false; ++ } + + #if !IS_ENABLED(CONFIG_AMD_PMC) + dev_warn_once(adev->dev, + "Power consumption will be higher as the kernel has not been compiled with CONFIG_AMD_PMC.\n"); +-#endif /* CONFIG_AMD_PMC */ ++ return false; ++#else + return true; ++#endif /* CONFIG_AMD_PMC */ + } + + #endif /* CONFIG_SUSPEND */ +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c +index 6c7d672412b21..5e9a0c1bb3079 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c +@@ -79,9 +79,10 @@ static void amdgpu_bo_user_destroy(struct ttm_buffer_object *tbo) + static void amdgpu_bo_vm_destroy(struct ttm_buffer_object *tbo) + { + struct amdgpu_device *adev = amdgpu_ttm_adev(tbo->bdev); +- struct amdgpu_bo *bo = ttm_to_amdgpu_bo(tbo); ++ struct amdgpu_bo *shadow_bo = ttm_to_amdgpu_bo(tbo), *bo; + struct amdgpu_bo_vm *vmbo; + ++ bo = shadow_bo->parent; + vmbo = to_amdgpu_bo_vm(bo); + /* in case amdgpu_device_recover_vram got NULL of bo->parent */ + if (!list_empty(&vmbo->shadow_list)) { +@@ -694,11 +695,6 @@ int amdgpu_bo_create_vm(struct amdgpu_device *adev, + return r; + + *vmbo_ptr = to_amdgpu_bo_vm(bo_ptr); +- INIT_LIST_HEAD(&(*vmbo_ptr)->shadow_list); +- /* Set destroy callback to amdgpu_bo_vm_destroy after vmbo->shadow_list +- * is initialized. 
+- */ +- bo_ptr->tbo.destroy = &amdgpu_bo_vm_destroy; + return r; + } + +@@ -715,6 +711,8 @@ void amdgpu_bo_add_to_shadow_list(struct amdgpu_bo_vm *vmbo) + + mutex_lock(&adev->shadow_list_lock); + list_add_tail(&vmbo->shadow_list, &adev->shadow_list); ++ vmbo->shadow->parent = amdgpu_bo_ref(&vmbo->bo); ++ vmbo->shadow->tbo.destroy = &amdgpu_bo_vm_destroy; + mutex_unlock(&adev->shadow_list_lock); + } + +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c +index 01e42bdd8e4e8..4642cff0e1a4f 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c +@@ -564,7 +564,6 @@ int amdgpu_vm_pt_create(struct amdgpu_device *adev, struct amdgpu_vm *vm, + return r; + } + +- (*vmbo)->shadow->parent = amdgpu_bo_ref(bo); + amdgpu_bo_add_to_shadow_list(*vmbo); + + return 0; +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c +index 43d6a9d6a5384..afacfb9b5bf6c 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c +@@ -800,7 +800,7 @@ static void amdgpu_vram_mgr_debug(struct ttm_resource_manager *man, + { + struct amdgpu_vram_mgr *mgr = to_vram_mgr(man); + struct drm_buddy *mm = &mgr->mm; +- struct drm_buddy_block *block; ++ struct amdgpu_vram_reservation *rsv; + + drm_printf(printer, " vis usage:%llu\n", + amdgpu_vram_mgr_vis_usage(mgr)); +@@ -812,8 +812,9 @@ static void amdgpu_vram_mgr_debug(struct ttm_resource_manager *man, + drm_buddy_print(mm, printer); + + drm_printf(printer, "reserved:\n"); +- list_for_each_entry(block, &mgr->reserved_pages, link) +- drm_buddy_block_print(mm, block, printer); ++ list_for_each_entry(rsv, &mgr->reserved_pages, blocks) ++ drm_printf(printer, "%#018llx-%#018llx: %llu\n", ++ rsv->start, rsv->start + rsv->size, rsv->size); + mutex_unlock(&mgr->lock); + } + +diff --git a/drivers/gpu/drm/amd/amdgpu/vi.c b/drivers/gpu/drm/amd/amdgpu/vi.c +index 
ceab8783575ca..4924d853f9e45 100644 +--- a/drivers/gpu/drm/amd/amdgpu/vi.c ++++ b/drivers/gpu/drm/amd/amdgpu/vi.c +@@ -542,8 +542,15 @@ static u32 vi_get_xclk(struct amdgpu_device *adev) + u32 reference_clock = adev->clock.spll.reference_freq; + u32 tmp; + +- if (adev->flags & AMD_IS_APU) +- return reference_clock; ++ if (adev->flags & AMD_IS_APU) { ++ switch (adev->asic_type) { ++ case CHIP_STONEY: ++ /* vbios says 48Mhz, but the actual freq is 100Mhz */ ++ return 10000; ++ default: ++ return reference_clock; ++ } ++ } + + tmp = RREG32_SMC(ixCG_CLKPIN_CNTL_2); + if (REG_GET_FIELD(tmp, CG_CLKPIN_CNTL_2, MUX_TCLK_TO_XCLK)) +diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c +index f07cba121d010..eab53d6317c9f 100644 +--- a/drivers/gpu/drm/amd/display/dc/core/dc.c ++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c +@@ -1962,6 +1962,9 @@ static enum dc_status dc_commit_state_no_check(struct dc *dc, struct dc_state *c + return result; + } + ++static bool commit_minimal_transition_state(struct dc *dc, ++ struct dc_state *transition_base_context); ++ + /** + * dc_commit_streams - Commit current stream state + * +@@ -1983,6 +1986,8 @@ enum dc_status dc_commit_streams(struct dc *dc, + struct dc_state *context; + enum dc_status res = DC_OK; + struct dc_validation_set set[MAX_STREAMS] = {0}; ++ struct pipe_ctx *pipe; ++ bool handle_exit_odm2to1 = false; + + if (dc->ctx->dce_environment == DCE_ENV_VIRTUAL_HW) + return res; +@@ -2007,6 +2012,22 @@ enum dc_status dc_commit_streams(struct dc *dc, + } + } + ++ /* Check for case where we are going from odm 2:1 to max ++ * pipe scenario. 
For these cases, we will call ++ * commit_minimal_transition_state() to exit out of odm 2:1 ++ * first before processing new streams ++ */ ++ if (stream_count == dc->res_pool->pipe_count) { ++ for (i = 0; i < dc->res_pool->pipe_count; i++) { ++ pipe = &dc->current_state->res_ctx.pipe_ctx[i]; ++ if (pipe->next_odm_pipe) ++ handle_exit_odm2to1 = true; ++ } ++ } ++ ++ if (handle_exit_odm2to1) ++ res = commit_minimal_transition_state(dc, dc->current_state); ++ + context = dc_create_state(dc); + if (!context) + goto context_alloc_fail; +@@ -3915,6 +3936,7 @@ static bool commit_minimal_transition_state(struct dc *dc, + unsigned int i, j; + unsigned int pipe_in_use = 0; + bool subvp_in_use = false; ++ bool odm_in_use = false; + + if (!transition_context) + return false; +@@ -3943,6 +3965,18 @@ static bool commit_minimal_transition_state(struct dc *dc, + } + } + ++ /* If ODM is enabled and we are adding or removing planes from any ODM ++ * pipe, we must use the minimal transition. ++ */ ++ for (i = 0; i < dc->res_pool->pipe_count; i++) { ++ struct pipe_ctx *pipe = &dc->current_state->res_ctx.pipe_ctx[i]; ++ ++ if (pipe->stream && pipe->next_odm_pipe) { ++ odm_in_use = true; ++ break; ++ } ++ } ++ + /* When the OS add a new surface if we have been used all of pipes with odm combine + * and mpc split feature, it need use commit_minimal_transition_state to transition safely. + * After OS exit MPO, it will back to use odm and mpc split with all of pipes, we need +@@ -3951,7 +3985,7 @@ static bool commit_minimal_transition_state(struct dc *dc, + * Reduce the scenarios to use dc_commit_state_no_check in the stage of flip. Especially + * enter/exit MPO when DCN still have enough resources. 
+ */ +- if (pipe_in_use != dc->res_pool->pipe_count && !subvp_in_use) { ++ if (pipe_in_use != dc->res_pool->pipe_count && !subvp_in_use && !odm_in_use) { + dc_release_state(transition_context); + return true; + } +diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c +index 0ae6dcc403a4b..986de684b078e 100644 +--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c ++++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c +@@ -1444,6 +1444,26 @@ static int acquire_first_split_pipe( + split_pipe->plane_res.mpcc_inst = pool->dpps[i]->inst; + split_pipe->pipe_idx = i; + ++ split_pipe->stream = stream; ++ return i; ++ } else if (split_pipe->prev_odm_pipe && ++ split_pipe->prev_odm_pipe->plane_state == split_pipe->plane_state) { ++ split_pipe->prev_odm_pipe->next_odm_pipe = split_pipe->next_odm_pipe; ++ if (split_pipe->next_odm_pipe) ++ split_pipe->next_odm_pipe->prev_odm_pipe = split_pipe->prev_odm_pipe; ++ ++ if (split_pipe->prev_odm_pipe->plane_state) ++ resource_build_scaling_params(split_pipe->prev_odm_pipe); ++ ++ memset(split_pipe, 0, sizeof(*split_pipe)); ++ split_pipe->stream_res.tg = pool->timing_generators[i]; ++ split_pipe->plane_res.hubp = pool->hubps[i]; ++ split_pipe->plane_res.ipp = pool->ipps[i]; ++ split_pipe->plane_res.dpp = pool->dpps[i]; ++ split_pipe->stream_res.opp = pool->opps[i]; ++ split_pipe->plane_res.mpcc_inst = pool->dpps[i]->inst; ++ split_pipe->pipe_idx = i; ++ + split_pipe->stream = stream; + return i; + } +diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c +index e47828e3b6d5d..7c06a339ab93c 100644 +--- a/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c ++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c +@@ -138,7 +138,7 @@ struct _vcs_dpi_soc_bounding_box_st dcn3_2_soc = { + .urgent_out_of_order_return_per_channel_pixel_only_bytes = 4096, + 
.urgent_out_of_order_return_per_channel_pixel_and_vm_bytes = 4096, + .urgent_out_of_order_return_per_channel_vm_only_bytes = 4096, +- .pct_ideal_sdp_bw_after_urgent = 100.0, ++ .pct_ideal_sdp_bw_after_urgent = 90.0, + .pct_ideal_fabric_bw_after_urgent = 67.0, + .pct_ideal_dram_sdp_bw_after_urgent_pixel_only = 20.0, + .pct_ideal_dram_sdp_bw_after_urgent_pixel_and_vm = 60.0, // N/A, for now keep as is until DML implemented +diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c +index 75f18681e984c..85d53597eb07a 100644 +--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c ++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c +@@ -2067,33 +2067,94 @@ static int sienna_cichlid_display_disable_memory_clock_switch(struct smu_context + return ret; + } + ++static void sienna_cichlid_get_override_pcie_settings(struct smu_context *smu, ++ uint32_t *gen_speed_override, ++ uint32_t *lane_width_override) ++{ ++ struct amdgpu_device *adev = smu->adev; ++ ++ *gen_speed_override = 0xff; ++ *lane_width_override = 0xff; ++ ++ switch (adev->pdev->device) { ++ case 0x73A0: ++ case 0x73A1: ++ case 0x73A2: ++ case 0x73A3: ++ case 0x73AB: ++ case 0x73AE: ++ /* Bit 7:0: PCIE lane width, 1 to 7 corresponds is x1 to x32 */ ++ *lane_width_override = 6; ++ break; ++ case 0x73E0: ++ case 0x73E1: ++ case 0x73E3: ++ *lane_width_override = 4; ++ break; ++ case 0x7420: ++ case 0x7421: ++ case 0x7422: ++ case 0x7423: ++ case 0x7424: ++ *lane_width_override = 3; ++ break; ++ default: ++ break; ++ } ++} ++ ++#define MAX(a, b) ((a) > (b) ? 
(a) : (b)) ++ + static int sienna_cichlid_update_pcie_parameters(struct smu_context *smu, + uint32_t pcie_gen_cap, + uint32_t pcie_width_cap) + { + struct smu_11_0_dpm_context *dpm_context = smu->smu_dpm.dpm_context; +- +- uint32_t smu_pcie_arg; ++ struct smu_11_0_pcie_table *pcie_table = &dpm_context->dpm_tables.pcie_table; ++ uint32_t gen_speed_override, lane_width_override; + uint8_t *table_member1, *table_member2; ++ uint32_t min_gen_speed, max_gen_speed; ++ uint32_t min_lane_width, max_lane_width; ++ uint32_t smu_pcie_arg; + int ret, i; + + GET_PPTABLE_MEMBER(PcieGenSpeed, &table_member1); + GET_PPTABLE_MEMBER(PcieLaneCount, &table_member2); + +- /* lclk dpm table setup */ +- for (i = 0; i < MAX_PCIE_CONF; i++) { +- dpm_context->dpm_tables.pcie_table.pcie_gen[i] = table_member1[i]; +- dpm_context->dpm_tables.pcie_table.pcie_lane[i] = table_member2[i]; ++ sienna_cichlid_get_override_pcie_settings(smu, ++ &gen_speed_override, ++ &lane_width_override); ++ ++ /* PCIE gen speed override */ ++ if (gen_speed_override != 0xff) { ++ min_gen_speed = MIN(pcie_gen_cap, gen_speed_override); ++ max_gen_speed = MIN(pcie_gen_cap, gen_speed_override); ++ } else { ++ min_gen_speed = MAX(0, table_member1[0]); ++ max_gen_speed = MIN(pcie_gen_cap, table_member1[1]); ++ min_gen_speed = min_gen_speed > max_gen_speed ? ++ max_gen_speed : min_gen_speed; + } ++ pcie_table->pcie_gen[0] = min_gen_speed; ++ pcie_table->pcie_gen[1] = max_gen_speed; ++ ++ /* PCIE lane width override */ ++ if (lane_width_override != 0xff) { ++ min_lane_width = MIN(pcie_width_cap, lane_width_override); ++ max_lane_width = MIN(pcie_width_cap, lane_width_override); ++ } else { ++ min_lane_width = MAX(1, table_member2[0]); ++ max_lane_width = MIN(pcie_width_cap, table_member2[1]); ++ min_lane_width = min_lane_width > max_lane_width ? 
++ max_lane_width : min_lane_width; ++ } ++ pcie_table->pcie_lane[0] = min_lane_width; ++ pcie_table->pcie_lane[1] = max_lane_width; + + for (i = 0; i < NUM_LINK_LEVELS; i++) { +- smu_pcie_arg = (i << 16) | +- ((table_member1[i] <= pcie_gen_cap) ? +- (table_member1[i] << 8) : +- (pcie_gen_cap << 8)) | +- ((table_member2[i] <= pcie_width_cap) ? +- table_member2[i] : +- pcie_width_cap); ++ smu_pcie_arg = (i << 16 | ++ pcie_table->pcie_gen[i] << 8 | ++ pcie_table->pcie_lane[i]); + + ret = smu_cmn_send_smc_msg_with_param(smu, + SMU_MSG_OverridePcieParameters, +@@ -2101,11 +2162,6 @@ static int sienna_cichlid_update_pcie_parameters(struct smu_context *smu, + NULL); + if (ret) + return ret; +- +- if (table_member1[i] > pcie_gen_cap) +- dpm_context->dpm_tables.pcie_table.pcie_gen[i] = pcie_gen_cap; +- if (table_member2[i] > pcie_width_cap) +- dpm_context->dpm_tables.pcie_table.pcie_lane[i] = pcie_width_cap; + } + + return 0; +diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c +index a52ed0580fd7e..f827f95755525 100644 +--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c ++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c +@@ -566,11 +566,11 @@ int smu_v13_0_init_power(struct smu_context *smu) + if (smu_power->power_context || smu_power->power_context_size != 0) + return -EINVAL; + +- smu_power->power_context = kzalloc(sizeof(struct smu_13_0_dpm_context), ++ smu_power->power_context = kzalloc(sizeof(struct smu_13_0_power_context), + GFP_KERNEL); + if (!smu_power->power_context) + return -ENOMEM; +- smu_power->power_context_size = sizeof(struct smu_13_0_dpm_context); ++ smu_power->power_context_size = sizeof(struct smu_13_0_power_context); + + return 0; + } +diff --git a/drivers/gpu/drm/i915/display/intel_dp_aux.c b/drivers/gpu/drm/i915/display/intel_dp_aux.c +index 30c98810e28bb..36d6ece8b4616 100644 +--- a/drivers/gpu/drm/i915/display/intel_dp_aux.c ++++ b/drivers/gpu/drm/i915/display/intel_dp_aux.c +@@ -117,6 
+117,32 @@ static u32 skl_get_aux_clock_divider(struct intel_dp *intel_dp, int index) + return index ? 0 : 1; + } + ++static int intel_dp_aux_sync_len(void) ++{ ++ int precharge = 16; /* 10-16 */ ++ int preamble = 16; ++ ++ return precharge + preamble; ++} ++ ++static int intel_dp_aux_fw_sync_len(void) ++{ ++ int precharge = 10; /* 10-16 */ ++ int preamble = 8; ++ ++ return precharge + preamble; ++} ++ ++static int g4x_dp_aux_precharge_len(void) ++{ ++ int precharge_min = 10; ++ int preamble = 16; ++ ++ /* HW wants the length of the extra precharge in 2us units */ ++ return (intel_dp_aux_sync_len() - ++ precharge_min - preamble) / 2; ++} ++ + static u32 g4x_get_aux_send_ctl(struct intel_dp *intel_dp, + int send_bytes, + u32 aux_clock_divider) +@@ -139,7 +165,7 @@ static u32 g4x_get_aux_send_ctl(struct intel_dp *intel_dp, + timeout | + DP_AUX_CH_CTL_RECEIVE_ERROR | + (send_bytes << DP_AUX_CH_CTL_MESSAGE_SIZE_SHIFT) | +- (3 << DP_AUX_CH_CTL_PRECHARGE_2US_SHIFT) | ++ (g4x_dp_aux_precharge_len() << DP_AUX_CH_CTL_PRECHARGE_2US_SHIFT) | + (aux_clock_divider << DP_AUX_CH_CTL_BIT_CLOCK_2X_SHIFT); + } + +@@ -163,8 +189,8 @@ static u32 skl_get_aux_send_ctl(struct intel_dp *intel_dp, + DP_AUX_CH_CTL_TIME_OUT_MAX | + DP_AUX_CH_CTL_RECEIVE_ERROR | + (send_bytes << DP_AUX_CH_CTL_MESSAGE_SIZE_SHIFT) | +- DP_AUX_CH_CTL_FW_SYNC_PULSE_SKL(24) | +- DP_AUX_CH_CTL_SYNC_PULSE_SKL(32); ++ DP_AUX_CH_CTL_FW_SYNC_PULSE_SKL(intel_dp_aux_fw_sync_len()) | ++ DP_AUX_CH_CTL_SYNC_PULSE_SKL(intel_dp_aux_sync_len()); + + if (intel_tc_port_in_tbt_alt_mode(dig_port)) + ret |= DP_AUX_CH_CTL_TBT_IO; +diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c +index a81fa6a20f5aa..7b516b1a4915b 100644 +--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c ++++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c +@@ -346,8 +346,10 @@ static int live_parallel_switch(void *arg) + continue; + + ce = 
intel_context_create(data[m].ce[0]->engine); +- if (IS_ERR(ce)) ++ if (IS_ERR(ce)) { ++ err = PTR_ERR(ce); + goto out; ++ } + + err = intel_context_pin(ce); + if (err) { +@@ -367,8 +369,10 @@ static int live_parallel_switch(void *arg) + + worker = kthread_create_worker(0, "igt/parallel:%s", + data[n].ce[0]->engine->name); +- if (IS_ERR(worker)) ++ if (IS_ERR(worker)) { ++ err = PTR_ERR(worker); + goto out; ++ } + + data[n].worker = worker; + } +@@ -397,8 +401,10 @@ static int live_parallel_switch(void *arg) + } + } + +- if (igt_live_test_end(&t)) +- err = -EIO; ++ if (igt_live_test_end(&t)) { ++ err = err ?: -EIO; ++ break; ++ } + } + + out: +diff --git a/drivers/gpu/drm/i915/gt/selftest_execlists.c b/drivers/gpu/drm/i915/gt/selftest_execlists.c +index 736b89a8ecf54..4202df5b8c122 100644 +--- a/drivers/gpu/drm/i915/gt/selftest_execlists.c ++++ b/drivers/gpu/drm/i915/gt/selftest_execlists.c +@@ -1530,8 +1530,8 @@ static int live_busywait_preempt(void *arg) + struct drm_i915_gem_object *obj; + struct i915_vma *vma; + enum intel_engine_id id; +- int err = -ENOMEM; + u32 *map; ++ int err; + + /* + * Verify that even without HAS_LOGICAL_RING_PREEMPTION, we can +@@ -1539,13 +1539,17 @@ static int live_busywait_preempt(void *arg) + */ + + ctx_hi = kernel_context(gt->i915, NULL); +- if (!ctx_hi) +- return -ENOMEM; ++ if (IS_ERR(ctx_hi)) ++ return PTR_ERR(ctx_hi); ++ + ctx_hi->sched.priority = I915_CONTEXT_MAX_USER_PRIORITY; + + ctx_lo = kernel_context(gt->i915, NULL); +- if (!ctx_lo) ++ if (IS_ERR(ctx_lo)) { ++ err = PTR_ERR(ctx_lo); + goto err_ctx_hi; ++ } ++ + ctx_lo->sched.priority = I915_CONTEXT_MIN_USER_PRIORITY; + + obj = i915_gem_object_create_internal(gt->i915, PAGE_SIZE); +diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c +index ff003403fbbc7..ffd91a5ee2990 100644 +--- a/drivers/gpu/drm/lima/lima_sched.c ++++ b/drivers/gpu/drm/lima/lima_sched.c +@@ -165,7 +165,7 @@ int lima_sched_context_init(struct lima_sched_pipe *pipe, + void 
lima_sched_context_fini(struct lima_sched_pipe *pipe, + struct lima_sched_context *context) + { +- drm_sched_entity_fini(&context->base); ++ drm_sched_entity_destroy(&context->base); + } + + struct dma_fence *lima_sched_context_queue_task(struct lima_sched_task *task) +diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c +index 7f5bc73b20402..611311b65b168 100644 +--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c ++++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c +@@ -1514,8 +1514,6 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node) + if (!pdev) + return -ENODEV; + +- mutex_init(&gmu->lock); +- + gmu->dev = &pdev->dev; + + of_dma_configure(gmu->dev, node, true); +diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +index 6faea5049f765..2942d2548ce69 100644 +--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c ++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +@@ -1998,6 +1998,8 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev) + adreno_gpu = &a6xx_gpu->base; + gpu = &adreno_gpu->base; + ++ mutex_init(&a6xx_gpu->gmu.lock); ++ + adreno_gpu->registers = NULL; + + /* +diff --git a/drivers/i2c/busses/i2c-mv64xxx.c b/drivers/i2c/busses/i2c-mv64xxx.c +index 047dfef7a6577..878c076ebdc6b 100644 +--- a/drivers/i2c/busses/i2c-mv64xxx.c ++++ b/drivers/i2c/busses/i2c-mv64xxx.c +@@ -520,6 +520,17 @@ mv64xxx_i2c_intr(int irq, void *dev_id) + + while (readl(drv_data->reg_base + drv_data->reg_offsets.control) & + MV64XXX_I2C_REG_CONTROL_IFLG) { ++ /* ++ * It seems that sometime the controller updates the status ++ * register only after it asserts IFLG in control register. ++ * This may result in weird bugs when in atomic mode. A delay ++ * of 100 ns before reading the status register solves this ++ * issue. This bug does not seem to appear when using ++ * interrupts. 
++ */ ++ if (drv_data->atomic) ++ ndelay(100); ++ + status = readl(drv_data->reg_base + drv_data->reg_offsets.status); + mv64xxx_i2c_fsm(drv_data, status); + mv64xxx_i2c_do_action(drv_data); +diff --git a/drivers/i2c/busses/i2c-sprd.c b/drivers/i2c/busses/i2c-sprd.c +index 4fe15cd78907e..ffc54fbf814dd 100644 +--- a/drivers/i2c/busses/i2c-sprd.c ++++ b/drivers/i2c/busses/i2c-sprd.c +@@ -576,12 +576,14 @@ static int sprd_i2c_remove(struct platform_device *pdev) + struct sprd_i2c *i2c_dev = platform_get_drvdata(pdev); + int ret; + +- ret = pm_runtime_resume_and_get(i2c_dev->dev); ++ ret = pm_runtime_get_sync(i2c_dev->dev); + if (ret < 0) +- return ret; ++ dev_err(&pdev->dev, "Failed to resume device (%pe)\n", ERR_PTR(ret)); + + i2c_del_adapter(&i2c_dev->adap); +- clk_disable_unprepare(i2c_dev->clk); ++ ++ if (ret >= 0) ++ clk_disable_unprepare(i2c_dev->clk); + + pm_runtime_put_noidle(i2c_dev->dev); + pm_runtime_disable(i2c_dev->dev); +diff --git a/drivers/input/input.c b/drivers/input/input.c +index 37e876d45eb9c..641eb86f276e6 100644 +--- a/drivers/input/input.c ++++ b/drivers/input/input.c +@@ -703,7 +703,7 @@ void input_close_device(struct input_handle *handle) + + __input_release_device(handle); + +- if (!dev->inhibited && !--dev->users) { ++ if (!--dev->users && !dev->inhibited) { + if (dev->poller) + input_dev_poller_stop(dev->poller); + if (dev->close) +diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c +index f617b2c60819c..8f65f41a1fd75 100644 +--- a/drivers/input/joystick/xpad.c ++++ b/drivers/input/joystick/xpad.c +@@ -282,7 +282,6 @@ static const struct xpad_device { + { 0x1430, 0xf801, "RedOctane Controller", 0, XTYPE_XBOX360 }, + { 0x146b, 0x0601, "BigBen Interactive XBOX 360 Controller", 0, XTYPE_XBOX360 }, + { 0x146b, 0x0604, "Bigben Interactive DAIJA Arcade Stick", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 }, +- { 0x1532, 0x0037, "Razer Sabertooth", 0, XTYPE_XBOX360 }, + { 0x1532, 0x0a00, "Razer Atrox Arcade Stick", 
MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOXONE }, + { 0x1532, 0x0a03, "Razer Wildcat", 0, XTYPE_XBOXONE }, + { 0x15e4, 0x3f00, "Power A Mini Pro Elite", 0, XTYPE_XBOX360 }, +diff --git a/drivers/input/mouse/elantech.c b/drivers/input/mouse/elantech.c +index ece97f8c6a3e3..2118b2075f437 100644 +--- a/drivers/input/mouse/elantech.c ++++ b/drivers/input/mouse/elantech.c +@@ -674,10 +674,11 @@ static void process_packet_head_v4(struct psmouse *psmouse) + struct input_dev *dev = psmouse->dev; + struct elantech_data *etd = psmouse->private; + unsigned char *packet = psmouse->packet; +- int id = ((packet[3] & 0xe0) >> 5) - 1; ++ int id; + int pres, traces; + +- if (id < 0) ++ id = ((packet[3] & 0xe0) >> 5) - 1; ++ if (id < 0 || id >= ETP_MAX_FINGERS) + return; + + etd->mt[id].x = ((packet[1] & 0x0f) << 8) | packet[2]; +@@ -707,7 +708,7 @@ static void process_packet_motion_v4(struct psmouse *psmouse) + int id, sid; + + id = ((packet[0] & 0xe0) >> 5) - 1; +- if (id < 0) ++ if (id < 0 || id >= ETP_MAX_FINGERS) + return; + + sid = ((packet[3] & 0xe0) >> 5) - 1; +@@ -728,7 +729,7 @@ static void process_packet_motion_v4(struct psmouse *psmouse) + input_report_abs(dev, ABS_MT_POSITION_X, etd->mt[id].x); + input_report_abs(dev, ABS_MT_POSITION_Y, etd->mt[id].y); + +- if (sid >= 0) { ++ if (sid >= 0 && sid < ETP_MAX_FINGERS) { + etd->mt[sid].x += delta_x2 * weight; + etd->mt[sid].y -= delta_y2 * weight; + input_mt_slot(dev, sid); +diff --git a/drivers/input/touchscreen/cyttsp5.c b/drivers/input/touchscreen/cyttsp5.c +index 30102cb80fac8..3c9d07218f48d 100644 +--- a/drivers/input/touchscreen/cyttsp5.c ++++ b/drivers/input/touchscreen/cyttsp5.c +@@ -560,7 +560,7 @@ static int cyttsp5_hid_output_get_sysinfo(struct cyttsp5 *ts) + static int cyttsp5_hid_output_bl_launch_app(struct cyttsp5 *ts) + { + int rc; +- u8 cmd[HID_OUTPUT_BL_LAUNCH_APP]; ++ u8 cmd[HID_OUTPUT_BL_LAUNCH_APP_SIZE]; + u16 crc; + + put_unaligned_le16(HID_OUTPUT_BL_LAUNCH_APP_SIZE, cmd); +diff --git 
a/drivers/misc/eeprom/Kconfig b/drivers/misc/eeprom/Kconfig +index f0a7531f354c1..2d240bfa819f8 100644 +--- a/drivers/misc/eeprom/Kconfig ++++ b/drivers/misc/eeprom/Kconfig +@@ -6,6 +6,7 @@ config EEPROM_AT24 + depends on I2C && SYSFS + select NVMEM + select NVMEM_SYSFS ++ select REGMAP + select REGMAP_I2C + help + Enable this driver to get read/write support to most I2C EEPROMs +diff --git a/drivers/net/dsa/lan9303-core.c b/drivers/net/dsa/lan9303-core.c +index cbe8318753471..c0215a8770f49 100644 +--- a/drivers/net/dsa/lan9303-core.c ++++ b/drivers/net/dsa/lan9303-core.c +@@ -1188,8 +1188,6 @@ static int lan9303_port_fdb_add(struct dsa_switch *ds, int port, + struct lan9303 *chip = ds->priv; + + dev_dbg(chip->dev, "%s(%d, %pM, %d)\n", __func__, port, addr, vid); +- if (vid) +- return -EOPNOTSUPP; + + return lan9303_alr_add_port(chip, addr, port, false); + } +@@ -1201,8 +1199,6 @@ static int lan9303_port_fdb_del(struct dsa_switch *ds, int port, + struct lan9303 *chip = ds->priv; + + dev_dbg(chip->dev, "%s(%d, %pM, %d)\n", __func__, port, addr, vid); +- if (vid) +- return -EOPNOTSUPP; + lan9303_alr_del_port(chip, addr, port); + + return 0; +diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c +index 651b79ce5d80c..9784e86d4d96a 100644 +--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c ++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c +@@ -2392,6 +2392,9 @@ static int bnxt_async_event_process(struct bnxt *bp, + struct bnxt_ptp_cfg *ptp = bp->ptp_cfg; + u64 ns; + ++ if (!ptp) ++ goto async_event_process_exit; ++ + spin_lock_bh(&ptp->ptp_lock); + bnxt_ptp_update_current_time(bp); + ns = (((u64)BNXT_EVENT_PHC_RTC_UPDATE(data1) << +@@ -4789,6 +4792,9 @@ int bnxt_hwrm_func_drv_rgtr(struct bnxt *bp, unsigned long *bmap, int bmap_size, + if (event_id == ASYNC_EVENT_CMPL_EVENT_ID_ERROR_RECOVERY && + !(bp->fw_cap & BNXT_FW_CAP_ERROR_RECOVERY)) + continue; ++ if (event_id == ASYNC_EVENT_CMPL_EVENT_ID_PHC_UPDATE && ++ !bp->ptp_cfg) ++ 
continue; + __set_bit(bnxt_async_events_arr[i], async_events_bmap); + } + if (bmap && bmap_size) { +@@ -5376,6 +5382,7 @@ static void bnxt_hwrm_update_rss_hash_cfg(struct bnxt *bp) + if (hwrm_req_init(bp, req, HWRM_VNIC_RSS_QCFG)) + return; + ++ req->vnic_id = cpu_to_le16(vnic->fw_vnic_id); + /* all contexts configured to same hash_type, zero always exists */ + req->rss_ctx_idx = cpu_to_le16(vnic->fw_rss_cos_lb_ctx[0]); + resp = hwrm_req_hold(bp, req); +@@ -8838,6 +8845,9 @@ static int bnxt_init_chip(struct bnxt *bp, bool irq_re_init) + goto err_out; + } + ++ if (BNXT_VF(bp)) ++ bnxt_hwrm_func_qcfg(bp); ++ + rc = bnxt_setup_vnic(bp, 0); + if (rc) + goto err_out; +@@ -11624,6 +11634,7 @@ static void bnxt_tx_timeout(struct net_device *dev, unsigned int txqueue) + static void bnxt_fw_health_check(struct bnxt *bp) + { + struct bnxt_fw_health *fw_health = bp->fw_health; ++ struct pci_dev *pdev = bp->pdev; + u32 val; + + if (!fw_health->enabled || test_bit(BNXT_STATE_IN_FW_RESET, &bp->state)) +@@ -11637,7 +11648,7 @@ static void bnxt_fw_health_check(struct bnxt *bp) + } + + val = bnxt_fw_health_readl(bp, BNXT_FW_HEARTBEAT_REG); +- if (val == fw_health->last_fw_heartbeat) { ++ if (val == fw_health->last_fw_heartbeat && pci_device_is_present(pdev)) { + fw_health->arrests++; + goto fw_reset; + } +@@ -11645,7 +11656,7 @@ static void bnxt_fw_health_check(struct bnxt *bp) + fw_health->last_fw_heartbeat = val; + + val = bnxt_fw_health_readl(bp, BNXT_FW_RESET_CNT_REG); +- if (val != fw_health->last_fw_reset_cnt) { ++ if (val != fw_health->last_fw_reset_cnt && pci_device_is_present(pdev)) { + fw_health->discoveries++; + goto fw_reset; + } +@@ -13051,26 +13062,37 @@ static void bnxt_cfg_ntp_filters(struct bnxt *bp) + + #endif /* CONFIG_RFS_ACCEL */ + +-static int bnxt_udp_tunnel_sync(struct net_device *netdev, unsigned int table) ++static int bnxt_udp_tunnel_set_port(struct net_device *netdev, unsigned int table, ++ unsigned int entry, struct udp_tunnel_info *ti) + { + struct bnxt 
*bp = netdev_priv(netdev); +- struct udp_tunnel_info ti; + unsigned int cmd; + +- udp_tunnel_nic_get_port(netdev, table, 0, &ti); +- if (ti.type == UDP_TUNNEL_TYPE_VXLAN) ++ if (ti->type == UDP_TUNNEL_TYPE_VXLAN) + cmd = TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN; + else + cmd = TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_GENEVE; + +- if (ti.port) +- return bnxt_hwrm_tunnel_dst_port_alloc(bp, ti.port, cmd); ++ return bnxt_hwrm_tunnel_dst_port_alloc(bp, ti->port, cmd); ++} ++ ++static int bnxt_udp_tunnel_unset_port(struct net_device *netdev, unsigned int table, ++ unsigned int entry, struct udp_tunnel_info *ti) ++{ ++ struct bnxt *bp = netdev_priv(netdev); ++ unsigned int cmd; ++ ++ if (ti->type == UDP_TUNNEL_TYPE_VXLAN) ++ cmd = TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN; ++ else ++ cmd = TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_GENEVE; + + return bnxt_hwrm_tunnel_dst_port_free(bp, cmd); + } + + static const struct udp_tunnel_nic_info bnxt_udp_tunnels = { +- .sync_table = bnxt_udp_tunnel_sync, ++ .set_port = bnxt_udp_tunnel_set_port, ++ .unset_port = bnxt_udp_tunnel_unset_port, + .flags = UDP_TUNNEL_NIC_INFO_MAY_SLEEP | + UDP_TUNNEL_NIC_INFO_OPEN_ONLY, + .tables = { +diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c +index 2dd8ee4a6f75b..8fd5071d8b099 100644 +--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c ++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c +@@ -3831,7 +3831,7 @@ static int bnxt_reset(struct net_device *dev, u32 *flags) + } + } + +- if (req & BNXT_FW_RESET_AP) { ++ if (!BNXT_CHIP_P4_PLUS(bp) && (req & BNXT_FW_RESET_AP)) { + /* This feature is not supported in older firmware versions */ + if (bp->hwrm_spec_code >= 0x10803) { + if (!bnxt_firmware_reset_ap(dev)) { +diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c +index a3a3978a4d1c2..af7b4466f9520 100644 +--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c ++++ 
b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c +@@ -946,6 +946,7 @@ int bnxt_ptp_init(struct bnxt *bp, bool phc_cfg) + bnxt_ptp_timecounter_init(bp, true); + bnxt_ptp_adjfine_rtc(bp, 0); + } ++ bnxt_hwrm_func_drv_rgtr(bp, NULL, 0, true); + + ptp->ptp_info = bnxt_ptp_caps; + if ((bp->fw_cap & BNXT_FW_CAP_PTP_PPS)) { +diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c +index eca0c92c0c84d..2b5761ad2f92f 100644 +--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c ++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c +@@ -1272,7 +1272,8 @@ static void bcmgenet_get_ethtool_stats(struct net_device *dev, + } + } + +-static void bcmgenet_eee_enable_set(struct net_device *dev, bool enable) ++void bcmgenet_eee_enable_set(struct net_device *dev, bool enable, ++ bool tx_lpi_enabled) + { + struct bcmgenet_priv *priv = netdev_priv(dev); + u32 off = priv->hw_params->tbuf_offset + TBUF_ENERGY_CTRL; +@@ -1292,7 +1293,7 @@ static void bcmgenet_eee_enable_set(struct net_device *dev, bool enable) + + /* Enable EEE and switch to a 27Mhz clock automatically */ + reg = bcmgenet_readl(priv->base + off); +- if (enable) ++ if (tx_lpi_enabled) + reg |= TBUF_EEE_EN | TBUF_PM_EN; + else + reg &= ~(TBUF_EEE_EN | TBUF_PM_EN); +@@ -1313,6 +1314,7 @@ static void bcmgenet_eee_enable_set(struct net_device *dev, bool enable) + + priv->eee.eee_enabled = enable; + priv->eee.eee_active = enable; ++ priv->eee.tx_lpi_enabled = tx_lpi_enabled; + } + + static int bcmgenet_get_eee(struct net_device *dev, struct ethtool_eee *e) +@@ -1328,6 +1330,7 @@ static int bcmgenet_get_eee(struct net_device *dev, struct ethtool_eee *e) + + e->eee_enabled = p->eee_enabled; + e->eee_active = p->eee_active; ++ e->tx_lpi_enabled = p->tx_lpi_enabled; + e->tx_lpi_timer = bcmgenet_umac_readl(priv, UMAC_EEE_LPI_TIMER); + + return phy_ethtool_get_eee(dev->phydev, e); +@@ -1337,7 +1340,6 @@ static int bcmgenet_set_eee(struct net_device *dev, struct ethtool_eee *e) + { + 
struct bcmgenet_priv *priv = netdev_priv(dev); + struct ethtool_eee *p = &priv->eee; +- int ret = 0; + + if (GENET_IS_V1(priv)) + return -EOPNOTSUPP; +@@ -1348,16 +1350,11 @@ static int bcmgenet_set_eee(struct net_device *dev, struct ethtool_eee *e) + p->eee_enabled = e->eee_enabled; + + if (!p->eee_enabled) { +- bcmgenet_eee_enable_set(dev, false); ++ bcmgenet_eee_enable_set(dev, false, false); + } else { +- ret = phy_init_eee(dev->phydev, false); +- if (ret) { +- netif_err(priv, hw, dev, "EEE initialization failed\n"); +- return ret; +- } +- ++ p->eee_active = phy_init_eee(dev->phydev, false) >= 0; + bcmgenet_umac_writel(priv, e->tx_lpi_timer, UMAC_EEE_LPI_TIMER); +- bcmgenet_eee_enable_set(dev, true); ++ bcmgenet_eee_enable_set(dev, p->eee_active, e->tx_lpi_enabled); + } + + return phy_ethtool_set_eee(dev->phydev, e); +@@ -4279,9 +4276,6 @@ static int bcmgenet_resume(struct device *d) + if (!device_may_wakeup(d)) + phy_resume(dev->phydev); + +- if (priv->eee.eee_enabled) +- bcmgenet_eee_enable_set(dev, true); +- + bcmgenet_netif_start(dev); + + netif_device_attach(dev); +diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.h b/drivers/net/ethernet/broadcom/genet/bcmgenet.h +index 946f6e283c4e6..1985c0ec4da2a 100644 +--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.h ++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.h +@@ -703,4 +703,7 @@ int bcmgenet_wol_power_down_cfg(struct bcmgenet_priv *priv, + void bcmgenet_wol_power_up_cfg(struct bcmgenet_priv *priv, + enum bcmgenet_power_mode mode); + ++void bcmgenet_eee_enable_set(struct net_device *dev, bool enable, ++ bool tx_lpi_enabled); ++ + #endif /* __BCMGENET_H__ */ +diff --git a/drivers/net/ethernet/broadcom/genet/bcmmii.c b/drivers/net/ethernet/broadcom/genet/bcmmii.c +index be042905ada2a..c15ed0acdb777 100644 +--- a/drivers/net/ethernet/broadcom/genet/bcmmii.c ++++ b/drivers/net/ethernet/broadcom/genet/bcmmii.c +@@ -87,6 +87,11 @@ static void bcmgenet_mac_config(struct net_device *dev) + reg |= 
CMD_TX_EN | CMD_RX_EN; + } + bcmgenet_umac_writel(priv, reg, UMAC_CMD); ++ ++ priv->eee.eee_active = phy_init_eee(phydev, 0) >= 0; ++ bcmgenet_eee_enable_set(dev, ++ priv->eee.eee_enabled && priv->eee.eee_active, ++ priv->eee.tx_lpi_enabled); + } + + /* setup netdev link state when PHY link status change and +diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c +index 2fc712b24d126..24024745ecef6 100644 +--- a/drivers/net/ethernet/freescale/enetc/enetc.c ++++ b/drivers/net/ethernet/freescale/enetc/enetc.c +@@ -1222,7 +1222,13 @@ static int enetc_clean_rx_ring(struct enetc_bdr *rx_ring, + if (!skb) + break; + +- rx_byte_cnt += skb->len; ++ /* When set, the outer VLAN header is extracted and reported ++ * in the receive buffer descriptor. So rx_byte_cnt should ++ * add the length of the extracted VLAN header. ++ */ ++ if (bd_status & ENETC_RXBD_FLAG_VLAN) ++ rx_byte_cnt += VLAN_HLEN; ++ rx_byte_cnt += skb->len + ETH_HLEN; + rx_frm_cnt++; + + napi_gro_receive(napi, skb); +@@ -1558,6 +1564,14 @@ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring, + enetc_build_xdp_buff(rx_ring, bd_status, &rxbd, &i, + &cleaned_cnt, &xdp_buff); + ++ /* When set, the outer VLAN header is extracted and reported ++ * in the receive buffer descriptor. So rx_byte_cnt should ++ * add the length of the extracted VLAN header. 
++ */ ++ if (bd_status & ENETC_RXBD_FLAG_VLAN) ++ rx_byte_cnt += VLAN_HLEN; ++ rx_byte_cnt += xdp_get_buff_len(&xdp_buff); ++ + xdp_act = bpf_prog_run_xdp(prog, &xdp_buff); + + switch (xdp_act) { +diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c +index c2fda4fa4188c..b534d7726d3e8 100644 +--- a/drivers/net/ethernet/intel/ice/ice_common.c ++++ b/drivers/net/ethernet/intel/ice/ice_common.c +@@ -5169,7 +5169,7 @@ ice_aq_read_i2c(struct ice_hw *hw, struct ice_aqc_link_topo_addr topo_addr, + */ + int + ice_aq_write_i2c(struct ice_hw *hw, struct ice_aqc_link_topo_addr topo_addr, +- u16 bus_addr, __le16 addr, u8 params, u8 *data, ++ u16 bus_addr, __le16 addr, u8 params, const u8 *data, + struct ice_sq_cd *cd) + { + struct ice_aq_desc desc = { 0 }; +diff --git a/drivers/net/ethernet/intel/ice/ice_common.h b/drivers/net/ethernet/intel/ice/ice_common.h +index 8ba5f935a092b..81961a7d65985 100644 +--- a/drivers/net/ethernet/intel/ice/ice_common.h ++++ b/drivers/net/ethernet/intel/ice/ice_common.h +@@ -229,7 +229,7 @@ ice_aq_read_i2c(struct ice_hw *hw, struct ice_aqc_link_topo_addr topo_addr, + struct ice_sq_cd *cd); + int + ice_aq_write_i2c(struct ice_hw *hw, struct ice_aqc_link_topo_addr topo_addr, +- u16 bus_addr, __le16 addr, u8 params, u8 *data, ++ u16 bus_addr, __le16 addr, u8 params, const u8 *data, + struct ice_sq_cd *cd); + bool ice_fw_supports_report_dflt_cfg(struct ice_hw *hw); + #endif /* _ICE_COMMON_H_ */ +diff --git a/drivers/net/ethernet/intel/ice/ice_gnss.c b/drivers/net/ethernet/intel/ice/ice_gnss.c +index 8dec748bb53a4..12086aafb42fb 100644 +--- a/drivers/net/ethernet/intel/ice/ice_gnss.c ++++ b/drivers/net/ethernet/intel/ice/ice_gnss.c +@@ -16,8 +16,8 @@ + * * number of bytes written - success + * * negative - error code + */ +-static unsigned int +-ice_gnss_do_write(struct ice_pf *pf, unsigned char *buf, unsigned int size) ++static int ++ice_gnss_do_write(struct ice_pf *pf, const unsigned char *buf, 
unsigned int size) + { + struct ice_aqc_link_topo_addr link_topo; + struct ice_hw *hw = &pf->hw; +@@ -72,39 +72,7 @@ err_out: + dev_err(ice_pf_to_dev(pf), "GNSS failed to write, offset=%u, size=%u, err=%d\n", + offset, size, err); + +- return offset; +-} +- +-/** +- * ice_gnss_write_pending - Write all pending data to internal GNSS +- * @work: GNSS write work structure +- */ +-static void ice_gnss_write_pending(struct kthread_work *work) +-{ +- struct gnss_serial *gnss = container_of(work, struct gnss_serial, +- write_work); +- struct ice_pf *pf = gnss->back; +- +- if (!pf) +- return; +- +- if (!test_bit(ICE_FLAG_GNSS, pf->flags)) +- return; +- +- if (!list_empty(&gnss->queue)) { +- struct gnss_write_buf *write_buf = NULL; +- unsigned int bytes; +- +- write_buf = list_first_entry(&gnss->queue, +- struct gnss_write_buf, queue); +- +- bytes = ice_gnss_do_write(pf, write_buf->buf, write_buf->size); +- dev_dbg(ice_pf_to_dev(pf), "%u bytes written to GNSS\n", bytes); +- +- list_del(&write_buf->queue); +- kfree(write_buf->buf); +- kfree(write_buf); +- } ++ return err; + } + + /** +@@ -224,8 +192,6 @@ static struct gnss_serial *ice_gnss_struct_init(struct ice_pf *pf) + pf->gnss_serial = gnss; + + kthread_init_delayed_work(&gnss->read_work, ice_gnss_read); +- INIT_LIST_HEAD(&gnss->queue); +- kthread_init_work(&gnss->write_work, ice_gnss_write_pending); + kworker = kthread_create_worker(0, "ice-gnss-%s", dev_name(dev)); + if (IS_ERR(kworker)) { + kfree(gnss); +@@ -285,7 +251,6 @@ static void ice_gnss_close(struct gnss_device *gdev) + if (!gnss) + return; + +- kthread_cancel_work_sync(&gnss->write_work); + kthread_cancel_delayed_work_sync(&gnss->read_work); + } + +@@ -304,10 +269,7 @@ ice_gnss_write(struct gnss_device *gdev, const unsigned char *buf, + size_t count) + { + struct ice_pf *pf = gnss_get_drvdata(gdev); +- struct gnss_write_buf *write_buf; + struct gnss_serial *gnss; +- unsigned char *cmd_buf; +- int err = count; + + /* We cannot write a single byte using our I2C 
implementation. */ + if (count <= 1 || count > ICE_GNSS_TTY_WRITE_BUF) +@@ -323,24 +285,7 @@ ice_gnss_write(struct gnss_device *gdev, const unsigned char *buf, + if (!gnss) + return -ENODEV; + +- cmd_buf = kcalloc(count, sizeof(*buf), GFP_KERNEL); +- if (!cmd_buf) +- return -ENOMEM; +- +- memcpy(cmd_buf, buf, count); +- write_buf = kzalloc(sizeof(*write_buf), GFP_KERNEL); +- if (!write_buf) { +- kfree(cmd_buf); +- return -ENOMEM; +- } +- +- write_buf->buf = cmd_buf; +- write_buf->size = count; +- INIT_LIST_HEAD(&write_buf->queue); +- list_add_tail(&write_buf->queue, &gnss->queue); +- kthread_queue_work(gnss->kworker, &gnss->write_work); +- +- return err; ++ return ice_gnss_do_write(pf, buf, count); + } + + static const struct gnss_operations ice_gnss_ops = { +@@ -436,7 +381,6 @@ void ice_gnss_exit(struct ice_pf *pf) + if (pf->gnss_serial) { + struct gnss_serial *gnss = pf->gnss_serial; + +- kthread_cancel_work_sync(&gnss->write_work); + kthread_cancel_delayed_work_sync(&gnss->read_work); + kthread_destroy_worker(gnss->kworker); + gnss->kworker = NULL; +diff --git a/drivers/net/ethernet/intel/ice/ice_gnss.h b/drivers/net/ethernet/intel/ice/ice_gnss.h +index 4d49e5b0b4b81..d95ca3928b2ea 100644 +--- a/drivers/net/ethernet/intel/ice/ice_gnss.h ++++ b/drivers/net/ethernet/intel/ice/ice_gnss.h +@@ -23,26 +23,16 @@ + #define ICE_MAX_UBX_READ_TRIES 255 + #define ICE_MAX_UBX_ACK_READ_TRIES 4095 + +-struct gnss_write_buf { +- struct list_head queue; +- unsigned int size; +- unsigned char *buf; +-}; +- + /** + * struct gnss_serial - data used to initialize GNSS TTY port + * @back: back pointer to PF + * @kworker: kwork thread for handling periodic work + * @read_work: read_work function for handling GNSS reads +- * @write_work: write_work function for handling GNSS writes +- * @queue: write buffers queue + */ + struct gnss_serial { + struct ice_pf *back; + struct kthread_worker *kworker; + struct kthread_delayed_work read_work; +- struct kthread_work write_work; +- struct 
list_head queue; + }; + + #if IS_ENABLED(CONFIG_GNSS) +diff --git a/drivers/net/ethernet/qlogic/qed/qed_l2.c b/drivers/net/ethernet/qlogic/qed/qed_l2.c +index 2edd6bf64a3cc..7776d3bdd459a 100644 +--- a/drivers/net/ethernet/qlogic/qed/qed_l2.c ++++ b/drivers/net/ethernet/qlogic/qed/qed_l2.c +@@ -1903,7 +1903,7 @@ void qed_get_vport_stats(struct qed_dev *cdev, struct qed_eth_stats *stats) + { + u32 i; + +- if (!cdev) { ++ if (!cdev || cdev->recov_in_prog) { + memset(stats, 0, sizeof(*stats)); + return; + } +diff --git a/drivers/net/ethernet/qlogic/qede/qede.h b/drivers/net/ethernet/qlogic/qede/qede.h +index f90dcfe9ee688..8a63f99d499c4 100644 +--- a/drivers/net/ethernet/qlogic/qede/qede.h ++++ b/drivers/net/ethernet/qlogic/qede/qede.h +@@ -271,6 +271,10 @@ struct qede_dev { + #define QEDE_ERR_WARN 3 + + struct qede_dump_info dump_info; ++ struct delayed_work periodic_task; ++ unsigned long stats_coal_ticks; ++ u32 stats_coal_usecs; ++ spinlock_t stats_lock; /* lock for vport stats access */ + }; + + enum QEDE_STATE { +diff --git a/drivers/net/ethernet/qlogic/qede/qede_ethtool.c b/drivers/net/ethernet/qlogic/qede/qede_ethtool.c +index 8034d812d5a00..d0a3395b2bc1f 100644 +--- a/drivers/net/ethernet/qlogic/qede/qede_ethtool.c ++++ b/drivers/net/ethernet/qlogic/qede/qede_ethtool.c +@@ -430,6 +430,8 @@ static void qede_get_ethtool_stats(struct net_device *dev, + } + } + ++ spin_lock(&edev->stats_lock); ++ + for (i = 0; i < QEDE_NUM_STATS; i++) { + if (qede_is_irrelevant_stat(edev, i)) + continue; +@@ -439,6 +441,8 @@ static void qede_get_ethtool_stats(struct net_device *dev, + buf++; + } + ++ spin_unlock(&edev->stats_lock); ++ + __qede_unlock(edev); + } + +@@ -830,6 +834,7 @@ out: + + coal->rx_coalesce_usecs = rx_coal; + coal->tx_coalesce_usecs = tx_coal; ++ coal->stats_block_coalesce_usecs = edev->stats_coal_usecs; + + return rc; + } +@@ -843,6 +848,19 @@ int qede_set_coalesce(struct net_device *dev, struct ethtool_coalesce *coal, + int i, rc = 0; + u16 rxc, txc; + ++ if 
(edev->stats_coal_usecs != coal->stats_block_coalesce_usecs) { ++ edev->stats_coal_usecs = coal->stats_block_coalesce_usecs; ++ if (edev->stats_coal_usecs) { ++ edev->stats_coal_ticks = usecs_to_jiffies(edev->stats_coal_usecs); ++ schedule_delayed_work(&edev->periodic_task, 0); ++ ++ DP_INFO(edev, "Configured stats coal ticks=%lu jiffies\n", ++ edev->stats_coal_ticks); ++ } else { ++ cancel_delayed_work_sync(&edev->periodic_task); ++ } ++ } ++ + if (!netif_running(dev)) { + DP_INFO(edev, "Interface is down\n"); + return -EINVAL; +@@ -2253,7 +2271,8 @@ out: + } + + static const struct ethtool_ops qede_ethtool_ops = { +- .supported_coalesce_params = ETHTOOL_COALESCE_USECS, ++ .supported_coalesce_params = ETHTOOL_COALESCE_USECS | ++ ETHTOOL_COALESCE_STATS_BLOCK_USECS, + .get_link_ksettings = qede_get_link_ksettings, + .set_link_ksettings = qede_set_link_ksettings, + .get_drvinfo = qede_get_drvinfo, +@@ -2304,7 +2323,8 @@ static const struct ethtool_ops qede_ethtool_ops = { + }; + + static const struct ethtool_ops qede_vf_ethtool_ops = { +- .supported_coalesce_params = ETHTOOL_COALESCE_USECS, ++ .supported_coalesce_params = ETHTOOL_COALESCE_USECS | ++ ETHTOOL_COALESCE_STATS_BLOCK_USECS, + .get_link_ksettings = qede_get_link_ksettings, + .get_drvinfo = qede_get_drvinfo, + .get_msglevel = qede_get_msglevel, +diff --git a/drivers/net/ethernet/qlogic/qede/qede_main.c b/drivers/net/ethernet/qlogic/qede/qede_main.c +index 261f982ca40da..36a75e84a084a 100644 +--- a/drivers/net/ethernet/qlogic/qede/qede_main.c ++++ b/drivers/net/ethernet/qlogic/qede/qede_main.c +@@ -308,6 +308,8 @@ void qede_fill_by_demand_stats(struct qede_dev *edev) + + edev->ops->get_vport_stats(edev->cdev, &stats); + ++ spin_lock(&edev->stats_lock); ++ + p_common->no_buff_discards = stats.common.no_buff_discards; + p_common->packet_too_big_discard = stats.common.packet_too_big_discard; + p_common->ttl0_discard = stats.common.ttl0_discard; +@@ -405,6 +407,8 @@ void qede_fill_by_demand_stats(struct qede_dev 
*edev) + p_ah->tx_1519_to_max_byte_packets = + stats.ah.tx_1519_to_max_byte_packets; + } ++ ++ spin_unlock(&edev->stats_lock); + } + + static void qede_get_stats64(struct net_device *dev, +@@ -413,9 +417,10 @@ static void qede_get_stats64(struct net_device *dev, + struct qede_dev *edev = netdev_priv(dev); + struct qede_stats_common *p_common; + +- qede_fill_by_demand_stats(edev); + p_common = &edev->stats.common; + ++ spin_lock(&edev->stats_lock); ++ + stats->rx_packets = p_common->rx_ucast_pkts + p_common->rx_mcast_pkts + + p_common->rx_bcast_pkts; + stats->tx_packets = p_common->tx_ucast_pkts + p_common->tx_mcast_pkts + +@@ -435,6 +440,8 @@ static void qede_get_stats64(struct net_device *dev, + stats->collisions = edev->stats.bb.tx_total_collisions; + stats->rx_crc_errors = p_common->rx_crc_errors; + stats->rx_frame_errors = p_common->rx_align_errors; ++ ++ spin_unlock(&edev->stats_lock); + } + + #ifdef CONFIG_QED_SRIOV +@@ -1064,6 +1071,23 @@ static void qede_unlock(struct qede_dev *edev) + rtnl_unlock(); + } + ++static void qede_periodic_task(struct work_struct *work) ++{ ++ struct qede_dev *edev = container_of(work, struct qede_dev, ++ periodic_task.work); ++ ++ qede_fill_by_demand_stats(edev); ++ schedule_delayed_work(&edev->periodic_task, edev->stats_coal_ticks); ++} ++ ++static void qede_init_periodic_task(struct qede_dev *edev) ++{ ++ INIT_DELAYED_WORK(&edev->periodic_task, qede_periodic_task); ++ spin_lock_init(&edev->stats_lock); ++ edev->stats_coal_usecs = USEC_PER_SEC; ++ edev->stats_coal_ticks = usecs_to_jiffies(USEC_PER_SEC); ++} ++ + static void qede_sp_task(struct work_struct *work) + { + struct qede_dev *edev = container_of(work, struct qede_dev, +@@ -1083,6 +1107,7 @@ static void qede_sp_task(struct work_struct *work) + */ + + if (test_and_clear_bit(QEDE_SP_RECOVERY, &edev->sp_flags)) { ++ cancel_delayed_work_sync(&edev->periodic_task); + #ifdef CONFIG_QED_SRIOV + /* SRIOV must be disabled outside the lock to avoid a deadlock. 
+ * The recovery of the active VFs is currently not supported. +@@ -1273,6 +1298,7 @@ static int __qede_probe(struct pci_dev *pdev, u32 dp_module, u8 dp_level, + */ + INIT_DELAYED_WORK(&edev->sp_task, qede_sp_task); + mutex_init(&edev->qede_lock); ++ qede_init_periodic_task(edev); + + rc = register_netdev(edev->ndev); + if (rc) { +@@ -1297,6 +1323,11 @@ static int __qede_probe(struct pci_dev *pdev, u32 dp_module, u8 dp_level, + edev->rx_copybreak = QEDE_RX_HDR_SIZE; + + qede_log_probe(edev); ++ ++ /* retain user config (for example - after recovery) */ ++ if (edev->stats_coal_usecs) ++ schedule_delayed_work(&edev->periodic_task, 0); ++ + return 0; + + err4: +@@ -1365,6 +1396,7 @@ static void __qede_remove(struct pci_dev *pdev, enum qede_remove_mode mode) + unregister_netdev(ndev); + + cancel_delayed_work_sync(&edev->sp_task); ++ cancel_delayed_work_sync(&edev->periodic_task); + + edev->ops->common->set_power_state(cdev, PCI_D0); + +diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c +index 13ac7f1c7ae8c..c40fc1e304f2d 100644 +--- a/drivers/net/virtio_net.c ++++ b/drivers/net/virtio_net.c +@@ -204,6 +204,8 @@ struct control_buf { + __virtio16 vid; + __virtio64 offloads; + struct virtio_net_ctrl_rss rss; ++ struct virtio_net_ctrl_coal_tx coal_tx; ++ struct virtio_net_ctrl_coal_rx coal_rx; + }; + + struct virtnet_info { +@@ -2933,12 +2935,10 @@ static int virtnet_send_notf_coal_cmds(struct virtnet_info *vi, + struct ethtool_coalesce *ec) + { + struct scatterlist sgs_tx, sgs_rx; +- struct virtio_net_ctrl_coal_tx coal_tx; +- struct virtio_net_ctrl_coal_rx coal_rx; + +- coal_tx.tx_usecs = cpu_to_le32(ec->tx_coalesce_usecs); +- coal_tx.tx_max_packets = cpu_to_le32(ec->tx_max_coalesced_frames); +- sg_init_one(&sgs_tx, &coal_tx, sizeof(coal_tx)); ++ vi->ctrl->coal_tx.tx_usecs = cpu_to_le32(ec->tx_coalesce_usecs); ++ vi->ctrl->coal_tx.tx_max_packets = cpu_to_le32(ec->tx_max_coalesced_frames); ++ sg_init_one(&sgs_tx, &vi->ctrl->coal_tx, 
sizeof(vi->ctrl->coal_tx)); + + if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_NOTF_COAL, + VIRTIO_NET_CTRL_NOTF_COAL_TX_SET, +@@ -2949,9 +2949,9 @@ static int virtnet_send_notf_coal_cmds(struct virtnet_info *vi, + vi->tx_usecs = ec->tx_coalesce_usecs; + vi->tx_max_packets = ec->tx_max_coalesced_frames; + +- coal_rx.rx_usecs = cpu_to_le32(ec->rx_coalesce_usecs); +- coal_rx.rx_max_packets = cpu_to_le32(ec->rx_max_coalesced_frames); +- sg_init_one(&sgs_rx, &coal_rx, sizeof(coal_rx)); ++ vi->ctrl->coal_rx.rx_usecs = cpu_to_le32(ec->rx_coalesce_usecs); ++ vi->ctrl->coal_rx.rx_max_packets = cpu_to_le32(ec->rx_max_coalesced_frames); ++ sg_init_one(&sgs_rx, &vi->ctrl->coal_rx, sizeof(vi->ctrl->coal_rx)); + + if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_NOTF_COAL, + VIRTIO_NET_CTRL_NOTF_COAL_RX_SET, +diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c +index d75fec8d0afd4..cbba850094187 100644 +--- a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c ++++ b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c +@@ -2729,17 +2729,13 @@ static bool iwl_mvm_wait_d3_notif(struct iwl_notif_wait_data *notif_wait, + if (wowlan_info_ver < 2) { + struct iwl_wowlan_info_notif_v1 *notif_v1 = (void *)pkt->data; + +- notif = kmemdup(notif_v1, +- offsetofend(struct iwl_wowlan_info_notif, +- received_beacons), +- GFP_ATOMIC); +- ++ notif = kmemdup(notif_v1, sizeof(*notif), GFP_ATOMIC); + if (!notif) + return false; + + notif->tid_tear_down = notif_v1->tid_tear_down; + notif->station_id = notif_v1->station_id; +- ++ memset_after(notif, 0, station_id); + } else { + notif = (void *)pkt->data; + } +diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c +index eafa0f204c1f8..12f7bcec53ae1 100644 +--- a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c ++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c +@@ -919,7 +919,10 @@ void mt7615_mac_sta_poll(struct mt7615_dev *dev) + + msta = 
list_first_entry(&sta_poll_list, struct mt7615_sta, + poll_list); ++ ++ spin_lock_bh(&dev->sta_poll_lock); + list_del_init(&msta->poll_list); ++ spin_unlock_bh(&dev->sta_poll_lock); + + addr = mt7615_mac_wtbl_addr(dev, msta->wcid.idx) + 19 * 4; + +diff --git a/drivers/net/wireless/realtek/rtw88/mac80211.c b/drivers/net/wireless/realtek/rtw88/mac80211.c +index e29ca5dcf105b..0bf6882d18f14 100644 +--- a/drivers/net/wireless/realtek/rtw88/mac80211.c ++++ b/drivers/net/wireless/realtek/rtw88/mac80211.c +@@ -88,15 +88,6 @@ static int rtw_ops_config(struct ieee80211_hw *hw, u32 changed) + } + } + +- if (changed & IEEE80211_CONF_CHANGE_PS) { +- if (hw->conf.flags & IEEE80211_CONF_PS) { +- rtwdev->ps_enabled = true; +- } else { +- rtwdev->ps_enabled = false; +- rtw_leave_lps(rtwdev); +- } +- } +- + if (changed & IEEE80211_CONF_CHANGE_CHANNEL) + rtw_set_channel(rtwdev); + +@@ -206,6 +197,7 @@ static int rtw_ops_add_interface(struct ieee80211_hw *hw, + rtwvif->bcn_ctrl = bcn_ctrl; + config |= PORT_SET_BCN_CTRL; + rtw_vif_port_config(rtwdev, rtwvif, config); ++ rtw_recalc_lps(rtwdev, vif); + + mutex_unlock(&rtwdev->mutex); + +@@ -236,6 +228,7 @@ static void rtw_ops_remove_interface(struct ieee80211_hw *hw, + rtwvif->bcn_ctrl = 0; + config |= PORT_SET_BCN_CTRL; + rtw_vif_port_config(rtwdev, rtwvif, config); ++ rtw_recalc_lps(rtwdev, NULL); + + mutex_unlock(&rtwdev->mutex); + } +@@ -428,6 +421,9 @@ static void rtw_ops_bss_info_changed(struct ieee80211_hw *hw, + if (changed & BSS_CHANGED_ERP_SLOT) + rtw_conf_tx(rtwdev, rtwvif); + ++ if (changed & BSS_CHANGED_PS) ++ rtw_recalc_lps(rtwdev, NULL); ++ + rtw_vif_port_config(rtwdev, rtwvif, config); + + mutex_unlock(&rtwdev->mutex); +diff --git a/drivers/net/wireless/realtek/rtw88/main.c b/drivers/net/wireless/realtek/rtw88/main.c +index 76f7aadef77c5..fc51acc9a5998 100644 +--- a/drivers/net/wireless/realtek/rtw88/main.c ++++ b/drivers/net/wireless/realtek/rtw88/main.c +@@ -250,8 +250,8 @@ static void rtw_watch_dog_work(struct 
work_struct *work) + * more than two stations associated to the AP, then we can not enter + * lps, because fw does not handle the overlapped beacon interval + * +- * mac80211 should iterate vifs and determine if driver can enter +- * ps by passing IEEE80211_CONF_PS to us, all we need to do is to ++ * rtw_recalc_lps() iterate vifs and determine if driver can enter ++ * ps by vif->type and vif->cfg.ps, all we need to do here is to + * get that vif and check if device is having traffic more than the + * threshold. + */ +diff --git a/drivers/net/wireless/realtek/rtw88/ps.c b/drivers/net/wireless/realtek/rtw88/ps.c +index 996365575f44f..53933fb38a330 100644 +--- a/drivers/net/wireless/realtek/rtw88/ps.c ++++ b/drivers/net/wireless/realtek/rtw88/ps.c +@@ -299,3 +299,46 @@ void rtw_leave_lps_deep(struct rtw_dev *rtwdev) + + __rtw_leave_lps_deep(rtwdev); + } ++ ++struct rtw_vif_recalc_lps_iter_data { ++ struct rtw_dev *rtwdev; ++ struct ieee80211_vif *found_vif; ++ int count; ++}; ++ ++static void __rtw_vif_recalc_lps(struct rtw_vif_recalc_lps_iter_data *data, ++ struct ieee80211_vif *vif) ++{ ++ if (data->count < 0) ++ return; ++ ++ if (vif->type != NL80211_IFTYPE_STATION) { ++ data->count = -1; ++ return; ++ } ++ ++ data->count++; ++ data->found_vif = vif; ++} ++ ++static void rtw_vif_recalc_lps_iter(void *data, u8 *mac, ++ struct ieee80211_vif *vif) ++{ ++ __rtw_vif_recalc_lps(data, vif); ++} ++ ++void rtw_recalc_lps(struct rtw_dev *rtwdev, struct ieee80211_vif *new_vif) ++{ ++ struct rtw_vif_recalc_lps_iter_data data = { .rtwdev = rtwdev }; ++ ++ if (new_vif) ++ __rtw_vif_recalc_lps(&data, new_vif); ++ rtw_iterate_vifs(rtwdev, rtw_vif_recalc_lps_iter, &data); ++ ++ if (data.count == 1 && data.found_vif->cfg.ps) { ++ rtwdev->ps_enabled = true; ++ } else { ++ rtwdev->ps_enabled = false; ++ rtw_leave_lps(rtwdev); ++ } ++} +diff --git a/drivers/net/wireless/realtek/rtw88/ps.h b/drivers/net/wireless/realtek/rtw88/ps.h +index c194386f6db53..5ae83d2526cfd 100644 +--- 
a/drivers/net/wireless/realtek/rtw88/ps.h ++++ b/drivers/net/wireless/realtek/rtw88/ps.h +@@ -23,4 +23,6 @@ void rtw_enter_lps(struct rtw_dev *rtwdev, u8 port_id); + void rtw_leave_lps(struct rtw_dev *rtwdev); + void rtw_leave_lps_deep(struct rtw_dev *rtwdev); + enum rtw_lps_deep_mode rtw_get_lps_deep_mode(struct rtw_dev *rtwdev); ++void rtw_recalc_lps(struct rtw_dev *rtwdev, struct ieee80211_vif *new_vif); ++ + #endif +diff --git a/drivers/net/wireless/realtek/rtw89/mac80211.c b/drivers/net/wireless/realtek/rtw89/mac80211.c +index d43281f7335b1..3dc988cac4aeb 100644 +--- a/drivers/net/wireless/realtek/rtw89/mac80211.c ++++ b/drivers/net/wireless/realtek/rtw89/mac80211.c +@@ -79,15 +79,6 @@ static int rtw89_ops_config(struct ieee80211_hw *hw, u32 changed) + !(hw->conf.flags & IEEE80211_CONF_IDLE)) + rtw89_leave_ips(rtwdev); + +- if (changed & IEEE80211_CONF_CHANGE_PS) { +- if (hw->conf.flags & IEEE80211_CONF_PS) { +- rtwdev->lps_enabled = true; +- } else { +- rtw89_leave_lps(rtwdev); +- rtwdev->lps_enabled = false; +- } +- } +- + if (changed & IEEE80211_CONF_CHANGE_CHANNEL) { + rtw89_config_entity_chandef(rtwdev, RTW89_SUB_ENTITY_0, + &hw->conf.chandef); +@@ -147,6 +138,8 @@ static int rtw89_ops_add_interface(struct ieee80211_hw *hw, + rtw89_core_txq_init(rtwdev, vif->txq); + + rtw89_btc_ntfy_role_info(rtwdev, rtwvif, NULL, BTC_ROLE_START); ++ ++ rtw89_recalc_lps(rtwdev); + out: + mutex_unlock(&rtwdev->mutex); + +@@ -170,6 +163,8 @@ static void rtw89_ops_remove_interface(struct ieee80211_hw *hw, + rtw89_mac_remove_vif(rtwdev, rtwvif); + rtw89_core_release_bit_map(rtwdev->hw_port, rtwvif->port); + list_del_init(&rtwvif->list); ++ rtw89_recalc_lps(rtwdev); ++ + mutex_unlock(&rtwdev->mutex); + } + +@@ -425,6 +420,9 @@ static void rtw89_ops_bss_info_changed(struct ieee80211_hw *hw, + if (changed & BSS_CHANGED_P2P_PS) + rtw89_process_p2p_ps(rtwdev, vif); + ++ if (changed & BSS_CHANGED_PS) ++ rtw89_recalc_lps(rtwdev); ++ + mutex_unlock(&rtwdev->mutex); + } + +diff --git 
a/drivers/net/wireless/realtek/rtw89/ps.c b/drivers/net/wireless/realtek/rtw89/ps.c +index 40498812205ea..c1f1083d3f634 100644 +--- a/drivers/net/wireless/realtek/rtw89/ps.c ++++ b/drivers/net/wireless/realtek/rtw89/ps.c +@@ -244,3 +244,29 @@ void rtw89_process_p2p_ps(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif) + rtw89_p2p_disable_all_noa(rtwdev, vif); + rtw89_p2p_update_noa(rtwdev, vif); + } ++ ++void rtw89_recalc_lps(struct rtw89_dev *rtwdev) ++{ ++ struct ieee80211_vif *vif, *found_vif = NULL; ++ struct rtw89_vif *rtwvif; ++ int count = 0; ++ ++ rtw89_for_each_rtwvif(rtwdev, rtwvif) { ++ vif = rtwvif_to_vif(rtwvif); ++ ++ if (vif->type != NL80211_IFTYPE_STATION) { ++ count = 0; ++ break; ++ } ++ ++ count++; ++ found_vif = vif; ++ } ++ ++ if (count == 1 && found_vif->cfg.ps) { ++ rtwdev->lps_enabled = true; ++ } else { ++ rtw89_leave_lps(rtwdev); ++ rtwdev->lps_enabled = false; ++ } ++} +diff --git a/drivers/net/wireless/realtek/rtw89/ps.h b/drivers/net/wireless/realtek/rtw89/ps.h +index 6ac1f7ea53394..374e9a358683f 100644 +--- a/drivers/net/wireless/realtek/rtw89/ps.h ++++ b/drivers/net/wireless/realtek/rtw89/ps.h +@@ -14,5 +14,6 @@ void rtw89_enter_ips(struct rtw89_dev *rtwdev); + void rtw89_leave_ips(struct rtw89_dev *rtwdev); + void rtw89_set_coex_ctrl_lps(struct rtw89_dev *rtwdev, bool btc_ctrl); + void rtw89_process_p2p_ps(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif); ++void rtw89_recalc_lps(struct rtw89_dev *rtwdev); + + #endif +diff --git a/drivers/pinctrl/meson/pinctrl-meson-axg.c b/drivers/pinctrl/meson/pinctrl-meson-axg.c +index 7bfecdfba1779..d249a035c2b9b 100644 +--- a/drivers/pinctrl/meson/pinctrl-meson-axg.c ++++ b/drivers/pinctrl/meson/pinctrl-meson-axg.c +@@ -400,6 +400,7 @@ static struct meson_pmx_group meson_axg_periphs_groups[] = { + GPIO_GROUP(GPIOA_15), + GPIO_GROUP(GPIOA_16), + GPIO_GROUP(GPIOA_17), ++ GPIO_GROUP(GPIOA_18), + GPIO_GROUP(GPIOA_19), + GPIO_GROUP(GPIOA_20), + +diff --git 
a/drivers/platform/surface/aggregator/controller.c b/drivers/platform/surface/aggregator/controller.c +index 535581c0471c5..7fc602e01487d 100644 +--- a/drivers/platform/surface/aggregator/controller.c ++++ b/drivers/platform/surface/aggregator/controller.c +@@ -825,7 +825,7 @@ static int ssam_cplt_init(struct ssam_cplt *cplt, struct device *dev) + + cplt->dev = dev; + +- cplt->wq = create_workqueue(SSAM_CPLT_WQ_NAME); ++ cplt->wq = alloc_workqueue(SSAM_CPLT_WQ_NAME, WQ_UNBOUND | WQ_MEM_RECLAIM, 0); + if (!cplt->wq) + return -ENOMEM; + +diff --git a/drivers/platform/surface/surface_aggregator_tabletsw.c b/drivers/platform/surface/surface_aggregator_tabletsw.c +index 9fed800c7cc09..a18e9fc7896b3 100644 +--- a/drivers/platform/surface/surface_aggregator_tabletsw.c ++++ b/drivers/platform/surface/surface_aggregator_tabletsw.c +@@ -201,6 +201,7 @@ enum ssam_kip_cover_state { + SSAM_KIP_COVER_STATE_LAPTOP = 0x03, + SSAM_KIP_COVER_STATE_FOLDED_CANVAS = 0x04, + SSAM_KIP_COVER_STATE_FOLDED_BACK = 0x05, ++ SSAM_KIP_COVER_STATE_BOOK = 0x06, + }; + + static const char *ssam_kip_cover_state_name(struct ssam_tablet_sw *sw, u32 state) +@@ -221,6 +222,9 @@ static const char *ssam_kip_cover_state_name(struct ssam_tablet_sw *sw, u32 stat + case SSAM_KIP_COVER_STATE_FOLDED_BACK: + return "folded-back"; + ++ case SSAM_KIP_COVER_STATE_BOOK: ++ return "book"; ++ + default: + dev_warn(&sw->sdev->dev, "unknown KIP cover state: %u\n", state); + return ""; +@@ -233,6 +237,7 @@ static bool ssam_kip_cover_state_is_tablet_mode(struct ssam_tablet_sw *sw, u32 s + case SSAM_KIP_COVER_STATE_DISCONNECTED: + case SSAM_KIP_COVER_STATE_FOLDED_CANVAS: + case SSAM_KIP_COVER_STATE_FOLDED_BACK: ++ case SSAM_KIP_COVER_STATE_BOOK: + return true; + + case SSAM_KIP_COVER_STATE_CLOSED: +diff --git a/drivers/s390/block/dasd_ioctl.c b/drivers/s390/block/dasd_ioctl.c +index 9327dcdd6e5e4..8fca725b3daec 100644 +--- a/drivers/s390/block/dasd_ioctl.c ++++ b/drivers/s390/block/dasd_ioctl.c +@@ -552,10 +552,10 @@ 
static int __dasd_ioctl_information(struct dasd_block *block, + + memcpy(dasd_info->type, base->discipline->name, 4); + +- spin_lock_irqsave(&block->queue_lock, flags); ++ spin_lock_irqsave(get_ccwdev_lock(base->cdev), flags); + list_for_each(l, &base->ccw_queue) + dasd_info->chanq_len++; +- spin_unlock_irqrestore(&block->queue_lock, flags); ++ spin_unlock_irqrestore(get_ccwdev_lock(base->cdev), flags); + return 0; + } + +diff --git a/drivers/soc/qcom/icc-bwmon.c b/drivers/soc/qcom/icc-bwmon.c +index d07be3700db60..7829549993c92 100644 +--- a/drivers/soc/qcom/icc-bwmon.c ++++ b/drivers/soc/qcom/icc-bwmon.c +@@ -603,12 +603,12 @@ static int bwmon_probe(struct platform_device *pdev) + bwmon->max_bw_kbps = UINT_MAX; + opp = dev_pm_opp_find_bw_floor(dev, &bwmon->max_bw_kbps, 0); + if (IS_ERR(opp)) +- return dev_err_probe(dev, ret, "failed to find max peak bandwidth\n"); ++ return dev_err_probe(dev, PTR_ERR(opp), "failed to find max peak bandwidth\n"); + + bwmon->min_bw_kbps = 0; + opp = dev_pm_opp_find_bw_ceil(dev, &bwmon->min_bw_kbps, 0); + if (IS_ERR(opp)) +- return dev_err_probe(dev, ret, "failed to find min peak bandwidth\n"); ++ return dev_err_probe(dev, PTR_ERR(opp), "failed to find min peak bandwidth\n"); + + bwmon->dev = dev; + +diff --git a/drivers/soc/qcom/ramp_controller.c b/drivers/soc/qcom/ramp_controller.c +index dc74d2a19de2b..5e3ba0be09035 100644 +--- a/drivers/soc/qcom/ramp_controller.c ++++ b/drivers/soc/qcom/ramp_controller.c +@@ -296,7 +296,7 @@ static int qcom_ramp_controller_probe(struct platform_device *pdev) + return -ENOMEM; + + qrc->desc = device_get_match_data(&pdev->dev); +- if (!qrc) ++ if (!qrc->desc) + return -EINVAL; + + qrc->regmap = devm_regmap_init_mmio(&pdev->dev, base, &qrc_regmap_config); +diff --git a/drivers/soc/qcom/rmtfs_mem.c b/drivers/soc/qcom/rmtfs_mem.c +index 0d31377f178d5..d4bda086c141a 100644 +--- a/drivers/soc/qcom/rmtfs_mem.c ++++ b/drivers/soc/qcom/rmtfs_mem.c +@@ -234,6 +234,7 @@ static int 
qcom_rmtfs_mem_probe(struct platform_device *pdev) + num_vmids = 0; + } else if (num_vmids < 0) { + dev_err(&pdev->dev, "failed to count qcom,vmid elements: %d\n", num_vmids); ++ ret = num_vmids; + goto remove_cdev; + } else if (num_vmids > NUM_MAX_VMIDS) { + dev_warn(&pdev->dev, +diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c +index f93544f6d7961..0dd4363ebac8f 100644 +--- a/drivers/soc/qcom/rpmh-rsc.c ++++ b/drivers/soc/qcom/rpmh-rsc.c +@@ -1073,7 +1073,7 @@ static int rpmh_rsc_probe(struct platform_device *pdev) + drv->ver.minor = rsc_id & (MINOR_VER_MASK << MINOR_VER_SHIFT); + drv->ver.minor >>= MINOR_VER_SHIFT; + +- if (drv->ver.major == 3 && drv->ver.minor >= 0) ++ if (drv->ver.major == 3) + drv->regs = rpmh_rsc_reg_offset_ver_3_0; + else + drv->regs = rpmh_rsc_reg_offset_ver_2_7; +diff --git a/drivers/soundwire/stream.c b/drivers/soundwire/stream.c +index 8c6da1739e3d1..3c909853aaf89 100644 +--- a/drivers/soundwire/stream.c ++++ b/drivers/soundwire/stream.c +@@ -2019,8 +2019,10 @@ int sdw_stream_add_slave(struct sdw_slave *slave, + + skip_alloc_master_rt: + s_rt = sdw_slave_rt_find(slave, stream); +- if (s_rt) ++ if (s_rt) { ++ alloc_slave_rt = false; + goto skip_alloc_slave_rt; ++ } + + s_rt = sdw_slave_rt_alloc(slave, m_rt); + if (!s_rt) { +diff --git a/drivers/spi/spi-mt65xx.c b/drivers/spi/spi-mt65xx.c +index 9eab6c20dbc56..6e95efb50acbc 100644 +--- a/drivers/spi/spi-mt65xx.c ++++ b/drivers/spi/spi-mt65xx.c +@@ -1275,6 +1275,9 @@ static int mtk_spi_remove(struct platform_device *pdev) + struct mtk_spi *mdata = spi_master_get_devdata(master); + int ret; + ++ if (mdata->use_spimem && !completion_done(&mdata->spimem_done)) ++ complete(&mdata->spimem_done); ++ + ret = pm_runtime_resume_and_get(&pdev->dev); + if (ret < 0) + return ret; +diff --git a/drivers/spi/spi-qup.c b/drivers/spi/spi-qup.c +index 205e54f157b4a..fb6b7738b4f55 100644 +--- a/drivers/spi/spi-qup.c ++++ b/drivers/spi/spi-qup.c +@@ -1029,23 +1029,8 @@ static int 
spi_qup_probe(struct platform_device *pdev) + return -ENXIO; + } + +- ret = clk_prepare_enable(cclk); +- if (ret) { +- dev_err(dev, "cannot enable core clock\n"); +- return ret; +- } +- +- ret = clk_prepare_enable(iclk); +- if (ret) { +- clk_disable_unprepare(cclk); +- dev_err(dev, "cannot enable iface clock\n"); +- return ret; +- } +- + master = spi_alloc_master(dev, sizeof(struct spi_qup)); + if (!master) { +- clk_disable_unprepare(cclk); +- clk_disable_unprepare(iclk); + dev_err(dev, "cannot allocate master\n"); + return -ENOMEM; + } +@@ -1093,6 +1078,19 @@ static int spi_qup_probe(struct platform_device *pdev) + spin_lock_init(&controller->lock); + init_completion(&controller->done); + ++ ret = clk_prepare_enable(cclk); ++ if (ret) { ++ dev_err(dev, "cannot enable core clock\n"); ++ goto error_dma; ++ } ++ ++ ret = clk_prepare_enable(iclk); ++ if (ret) { ++ clk_disable_unprepare(cclk); ++ dev_err(dev, "cannot enable iface clock\n"); ++ goto error_dma; ++ } ++ + iomode = readl_relaxed(base + QUP_IO_M_MODES); + + size = QUP_IO_M_OUTPUT_BLOCK_SIZE(iomode); +@@ -1122,7 +1120,7 @@ static int spi_qup_probe(struct platform_device *pdev) + ret = spi_qup_set_state(controller, QUP_STATE_RESET); + if (ret) { + dev_err(dev, "cannot set RESET state\n"); +- goto error_dma; ++ goto error_clk; + } + + writel_relaxed(0, base + QUP_OPERATIONAL); +@@ -1146,7 +1144,7 @@ static int spi_qup_probe(struct platform_device *pdev) + ret = devm_request_irq(dev, irq, spi_qup_qup_irq, + IRQF_TRIGGER_HIGH, pdev->name, controller); + if (ret) +- goto error_dma; ++ goto error_clk; + + pm_runtime_set_autosuspend_delay(dev, MSEC_PER_SEC); + pm_runtime_use_autosuspend(dev); +@@ -1161,11 +1159,12 @@ static int spi_qup_probe(struct platform_device *pdev) + + disable_pm: + pm_runtime_disable(&pdev->dev); ++error_clk: ++ clk_disable_unprepare(cclk); ++ clk_disable_unprepare(iclk); + error_dma: + spi_qup_release_dma(master); + error: +- clk_disable_unprepare(cclk); +- clk_disable_unprepare(iclk); + 
spi_master_put(master); + return ret; + } +diff --git a/drivers/staging/rtl8192e/rtl8192e/rtl_core.c b/drivers/staging/rtl8192e/rtl8192e/rtl_core.c +index 92552ce30cd58..72d76dc7df781 100644 +--- a/drivers/staging/rtl8192e/rtl8192e/rtl_core.c ++++ b/drivers/staging/rtl8192e/rtl8192e/rtl_core.c +@@ -48,9 +48,9 @@ static const struct rtl819x_ops rtl819xp_ops = { + }; + + static struct pci_device_id rtl8192_pci_id_tbl[] = { +- {PCI_DEVICE(0x10ec, 0x8192)}, +- {PCI_DEVICE(0x07aa, 0x0044)}, +- {PCI_DEVICE(0x07aa, 0x0047)}, ++ {RTL_PCI_DEVICE(0x10ec, 0x8192, rtl819xp_ops)}, ++ {RTL_PCI_DEVICE(0x07aa, 0x0044, rtl819xp_ops)}, ++ {RTL_PCI_DEVICE(0x07aa, 0x0047, rtl819xp_ops)}, + {} + }; + +diff --git a/drivers/staging/rtl8192e/rtl8192e/rtl_core.h b/drivers/staging/rtl8192e/rtl8192e/rtl_core.h +index bbc1c4bac3588..fd96eef90c7fa 100644 +--- a/drivers/staging/rtl8192e/rtl8192e/rtl_core.h ++++ b/drivers/staging/rtl8192e/rtl8192e/rtl_core.h +@@ -55,6 +55,11 @@ + #define IS_HARDWARE_TYPE_8192SE(_priv) \ + (((struct r8192_priv *)rtllib_priv(dev))->card_8192 == NIC_8192SE) + ++#define RTL_PCI_DEVICE(vend, dev, cfg) \ ++ .vendor = (vend), .device = (dev), \ ++ .subvendor = PCI_ANY_ID, .subdevice = PCI_ANY_ID, \ ++ .driver_data = (kernel_ulong_t)&(cfg) ++ + #define TOTAL_CAM_ENTRY 32 + #define CAM_CONTENT_COUNT 8 + +diff --git a/drivers/tee/amdtee/amdtee_if.h b/drivers/tee/amdtee/amdtee_if.h +index ff48c3e473750..e2014e21530ac 100644 +--- a/drivers/tee/amdtee/amdtee_if.h ++++ b/drivers/tee/amdtee/amdtee_if.h +@@ -118,16 +118,18 @@ struct tee_cmd_unmap_shared_mem { + + /** + * struct tee_cmd_load_ta - load Trusted Application (TA) binary into TEE +- * @low_addr: [in] bits [31:0] of the physical address of the TA binary +- * @hi_addr: [in] bits [63:32] of the physical address of the TA binary +- * @size: [in] size of TA binary in bytes +- * @ta_handle: [out] return handle of the loaded TA ++ * @low_addr: [in] bits [31:0] of the physical address of the TA binary ++ * @hi_addr: [in] 
bits [63:32] of the physical address of the TA binary ++ * @size: [in] size of TA binary in bytes ++ * @ta_handle: [out] return handle of the loaded TA ++ * @return_origin: [out] origin of return code after TEE processing + */ + struct tee_cmd_load_ta { + u32 low_addr; + u32 hi_addr; + u32 size; + u32 ta_handle; ++ u32 return_origin; + }; + + /** +diff --git a/drivers/tee/amdtee/call.c b/drivers/tee/amdtee/call.c +index cec6e70f0ac92..8a02c5fe33a6b 100644 +--- a/drivers/tee/amdtee/call.c ++++ b/drivers/tee/amdtee/call.c +@@ -423,19 +423,23 @@ int handle_load_ta(void *data, u32 size, struct tee_ioctl_open_session_arg *arg) + if (ret) { + arg->ret_origin = TEEC_ORIGIN_COMMS; + arg->ret = TEEC_ERROR_COMMUNICATION; +- } else if (arg->ret == TEEC_SUCCESS) { +- ret = get_ta_refcount(load_cmd.ta_handle); +- if (!ret) { +- arg->ret_origin = TEEC_ORIGIN_COMMS; +- arg->ret = TEEC_ERROR_OUT_OF_MEMORY; +- +- /* Unload the TA on error */ +- unload_cmd.ta_handle = load_cmd.ta_handle; +- psp_tee_process_cmd(TEE_CMD_ID_UNLOAD_TA, +- (void *)&unload_cmd, +- sizeof(unload_cmd), &ret); +- } else { +- set_session_id(load_cmd.ta_handle, 0, &arg->session); ++ } else { ++ arg->ret_origin = load_cmd.return_origin; ++ ++ if (arg->ret == TEEC_SUCCESS) { ++ ret = get_ta_refcount(load_cmd.ta_handle); ++ if (!ret) { ++ arg->ret_origin = TEEC_ORIGIN_COMMS; ++ arg->ret = TEEC_ERROR_OUT_OF_MEMORY; ++ ++ /* Unload the TA on error */ ++ unload_cmd.ta_handle = load_cmd.ta_handle; ++ psp_tee_process_cmd(TEE_CMD_ID_UNLOAD_TA, ++ (void *)&unload_cmd, ++ sizeof(unload_cmd), &ret); ++ } else { ++ set_session_id(load_cmd.ta_handle, 0, &arg->session); ++ } + } + } + mutex_unlock(&ta_refcount_mutex); +diff --git a/drivers/usb/core/buffer.c b/drivers/usb/core/buffer.c +index fbb087b728dc9..268ccbec88f95 100644 +--- a/drivers/usb/core/buffer.c ++++ b/drivers/usb/core/buffer.c +@@ -172,3 +172,44 @@ void hcd_buffer_free( + } + dma_free_coherent(hcd->self.sysdev, size, addr, dma); + } ++ ++void 
*hcd_buffer_alloc_pages(struct usb_hcd *hcd, ++ size_t size, gfp_t mem_flags, dma_addr_t *dma) ++{ ++ if (size == 0) ++ return NULL; ++ ++ if (hcd->localmem_pool) ++ return gen_pool_dma_alloc_align(hcd->localmem_pool, ++ size, dma, PAGE_SIZE); ++ ++ /* some USB hosts just use PIO */ ++ if (!hcd_uses_dma(hcd)) { ++ *dma = DMA_MAPPING_ERROR; ++ return (void *)__get_free_pages(mem_flags, ++ get_order(size)); ++ } ++ ++ return dma_alloc_coherent(hcd->self.sysdev, ++ size, dma, mem_flags); ++} ++ ++void hcd_buffer_free_pages(struct usb_hcd *hcd, ++ size_t size, void *addr, dma_addr_t dma) ++{ ++ if (!addr) ++ return; ++ ++ if (hcd->localmem_pool) { ++ gen_pool_free(hcd->localmem_pool, ++ (unsigned long)addr, size); ++ return; ++ } ++ ++ if (!hcd_uses_dma(hcd)) { ++ free_pages((unsigned long)addr, get_order(size)); ++ return; ++ } ++ ++ dma_free_coherent(hcd->self.sysdev, size, addr, dma); ++} +diff --git a/drivers/usb/core/devio.c b/drivers/usb/core/devio.c +index e501a03d6c700..fcf68818e9992 100644 +--- a/drivers/usb/core/devio.c ++++ b/drivers/usb/core/devio.c +@@ -186,6 +186,7 @@ static int connected(struct usb_dev_state *ps) + static void dec_usb_memory_use_count(struct usb_memory *usbm, int *count) + { + struct usb_dev_state *ps = usbm->ps; ++ struct usb_hcd *hcd = bus_to_hcd(ps->dev->bus); + unsigned long flags; + + spin_lock_irqsave(&ps->lock, flags); +@@ -194,8 +195,8 @@ static void dec_usb_memory_use_count(struct usb_memory *usbm, int *count) + list_del(&usbm->memlist); + spin_unlock_irqrestore(&ps->lock, flags); + +- usb_free_coherent(ps->dev, usbm->size, usbm->mem, +- usbm->dma_handle); ++ hcd_buffer_free_pages(hcd, usbm->size, ++ usbm->mem, usbm->dma_handle); + usbfs_decrease_memory_usage( + usbm->size + sizeof(struct usb_memory)); + kfree(usbm); +@@ -234,7 +235,7 @@ static int usbdev_mmap(struct file *file, struct vm_area_struct *vma) + size_t size = vma->vm_end - vma->vm_start; + void *mem; + unsigned long flags; +- dma_addr_t dma_handle; ++ dma_addr_t 
dma_handle = DMA_MAPPING_ERROR; + int ret; + + ret = usbfs_increase_memory_usage(size + sizeof(struct usb_memory)); +@@ -247,8 +248,8 @@ static int usbdev_mmap(struct file *file, struct vm_area_struct *vma) + goto error_decrease_mem; + } + +- mem = usb_alloc_coherent(ps->dev, size, GFP_USER | __GFP_NOWARN, +- &dma_handle); ++ mem = hcd_buffer_alloc_pages(hcd, ++ size, GFP_USER | __GFP_NOWARN, &dma_handle); + if (!mem) { + ret = -ENOMEM; + goto error_free_usbm; +@@ -264,7 +265,14 @@ static int usbdev_mmap(struct file *file, struct vm_area_struct *vma) + usbm->vma_use_count = 1; + INIT_LIST_HEAD(&usbm->memlist); + +- if (hcd->localmem_pool || !hcd_uses_dma(hcd)) { ++ /* ++ * In DMA-unavailable cases, hcd_buffer_alloc_pages allocates ++ * normal pages and assigns DMA_MAPPING_ERROR to dma_handle. Check ++ * whether we are in such cases, and then use remap_pfn_range (or ++ * dma_mmap_coherent) to map normal (or DMA) pages into the user ++ * space, respectively. ++ */ ++ if (dma_handle == DMA_MAPPING_ERROR) { + if (remap_pfn_range(vma, vma->vm_start, + virt_to_phys(usbm->mem) >> PAGE_SHIFT, + size, vma->vm_page_prot) < 0) { +diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c +index 97a16f7eb8941..0b228fbb2a68b 100644 +--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c ++++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c +@@ -3323,10 +3323,10 @@ static void mlx5_vdpa_dev_del(struct vdpa_mgmt_dev *v_mdev, struct vdpa_device * + mlx5_vdpa_remove_debugfs(ndev->debugfs); + ndev->debugfs = NULL; + unregister_link_notifier(ndev); ++ _vdpa_unregister_device(dev); + wq = mvdev->wq; + mvdev->wq = NULL; + destroy_workqueue(wq); +- _vdpa_unregister_device(dev); + mgtdev->ndev = NULL; + } + +diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c +index 0c3b48616a9f3..695b20b17e010 100644 +--- a/drivers/vdpa/vdpa_user/vduse_dev.c ++++ b/drivers/vdpa/vdpa_user/vduse_dev.c +@@ -1443,6 +1443,9 @@ static bool vduse_validate_config(struct 
vduse_dev_config *config) + if (config->vq_num > 0xffff) + return false; + ++ if (!config->name[0]) ++ return false; ++ + if (!device_is_allowed(config->device_id)) + return false; + +diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c +index 74c7d1f978b75..779fc44677162 100644 +--- a/drivers/vhost/vdpa.c ++++ b/drivers/vhost/vdpa.c +@@ -572,7 +572,14 @@ static long vhost_vdpa_vring_ioctl(struct vhost_vdpa *v, unsigned int cmd, + if (r) + return r; + +- vq->last_avail_idx = vq_state.split.avail_index; ++ if (vhost_has_feature(vq, VIRTIO_F_RING_PACKED)) { ++ vq->last_avail_idx = vq_state.packed.last_avail_idx | ++ (vq_state.packed.last_avail_counter << 15); ++ vq->last_used_idx = vq_state.packed.last_used_idx | ++ (vq_state.packed.last_used_counter << 15); ++ } else { ++ vq->last_avail_idx = vq_state.split.avail_index; ++ } + break; + } + +@@ -590,9 +597,15 @@ static long vhost_vdpa_vring_ioctl(struct vhost_vdpa *v, unsigned int cmd, + break; + + case VHOST_SET_VRING_BASE: +- vq_state.split.avail_index = vq->last_avail_idx; +- if (ops->set_vq_state(vdpa, idx, &vq_state)) +- r = -EINVAL; ++ if (vhost_has_feature(vq, VIRTIO_F_RING_PACKED)) { ++ vq_state.packed.last_avail_idx = vq->last_avail_idx & 0x7fff; ++ vq_state.packed.last_avail_counter = !!(vq->last_avail_idx & 0x8000); ++ vq_state.packed.last_used_idx = vq->last_used_idx & 0x7fff; ++ vq_state.packed.last_used_counter = !!(vq->last_used_idx & 0x8000); ++ } else { ++ vq_state.split.avail_index = vq->last_avail_idx; ++ } ++ r = ops->set_vq_state(vdpa, idx, &vq_state); + break; + + case VHOST_SET_VRING_CALL: +diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c +index f11bdbe4c2c5f..f64efda48f21c 100644 +--- a/drivers/vhost/vhost.c ++++ b/drivers/vhost/vhost.c +@@ -1633,17 +1633,25 @@ long vhost_vring_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *arg + r = -EFAULT; + break; + } +- if (s.num > 0xffff) { +- r = -EINVAL; +- break; ++ if (vhost_has_feature(vq, VIRTIO_F_RING_PACKED)) { ++ 
vq->last_avail_idx = s.num & 0xffff; ++ vq->last_used_idx = (s.num >> 16) & 0xffff; ++ } else { ++ if (s.num > 0xffff) { ++ r = -EINVAL; ++ break; ++ } ++ vq->last_avail_idx = s.num; + } +- vq->last_avail_idx = s.num; + /* Forget the cached index value. */ + vq->avail_idx = vq->last_avail_idx; + break; + case VHOST_GET_VRING_BASE: + s.index = idx; +- s.num = vq->last_avail_idx; ++ if (vhost_has_feature(vq, VIRTIO_F_RING_PACKED)) ++ s.num = (u32)vq->last_avail_idx | ((u32)vq->last_used_idx << 16); ++ else ++ s.num = vq->last_avail_idx; + if (copy_to_user(argp, &s, sizeof s)) + r = -EFAULT; + break; +diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h +index 1647b750169c7..6f73f29d59791 100644 +--- a/drivers/vhost/vhost.h ++++ b/drivers/vhost/vhost.h +@@ -85,13 +85,17 @@ struct vhost_virtqueue { + /* The routine to call when the Guest pings us, or timeout. */ + vhost_work_fn_t handle_kick; + +- /* Last available index we saw. */ ++ /* Last available index we saw. ++ * Values are limited to 0x7fff, and the high bit is used as ++ * a wrap counter when using VIRTIO_F_RING_PACKED. */ + u16 last_avail_idx; + + /* Caches available index value from user. */ + u16 avail_idx; + +- /* Last index we used. */ ++ /* Last index we used. ++ * Values are limited to 0x7fff, and the high bit is used as ++ * a wrap counter when using VIRTIO_F_RING_PACKED. 
*/ + u16 last_used_idx; + + /* Used flags */ +diff --git a/fs/afs/dir.c b/fs/afs/dir.c +index a97499fd747b6..93e8b06ef76a6 100644 +--- a/fs/afs/dir.c ++++ b/fs/afs/dir.c +@@ -1358,6 +1358,7 @@ static int afs_mkdir(struct mnt_idmap *idmap, struct inode *dir, + op->dentry = dentry; + op->create.mode = S_IFDIR | mode; + op->create.reason = afs_edit_dir_for_mkdir; ++ op->mtime = current_time(dir); + op->ops = &afs_mkdir_operation; + return afs_do_sync_operation(op); + } +@@ -1661,6 +1662,7 @@ static int afs_create(struct mnt_idmap *idmap, struct inode *dir, + op->dentry = dentry; + op->create.mode = S_IFREG | mode; + op->create.reason = afs_edit_dir_for_create; ++ op->mtime = current_time(dir); + op->ops = &afs_create_operation; + return afs_do_sync_operation(op); + +@@ -1796,6 +1798,7 @@ static int afs_symlink(struct mnt_idmap *idmap, struct inode *dir, + op->ops = &afs_symlink_operation; + op->create.reason = afs_edit_dir_for_symlink; + op->create.symlink = content; ++ op->mtime = current_time(dir); + return afs_do_sync_operation(op); + + error: +diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c +index 789be30d6ee22..2321e5ddb664d 100644 +--- a/fs/ceph/caps.c ++++ b/fs/ceph/caps.c +@@ -1627,6 +1627,7 @@ void ceph_flush_snaps(struct ceph_inode_info *ci, + struct inode *inode = &ci->netfs.inode; + struct ceph_mds_client *mdsc = ceph_inode_to_client(inode)->mdsc; + struct ceph_mds_session *session = NULL; ++ bool need_put = false; + int mds; + + dout("ceph_flush_snaps %p\n", inode); +@@ -1671,8 +1672,13 @@ out: + ceph_put_mds_session(session); + /* we flushed them all; remove this inode from the queue */ + spin_lock(&mdsc->snap_flush_lock); ++ if (!list_empty(&ci->i_snap_flush_item)) ++ need_put = true; + list_del_init(&ci->i_snap_flush_item); + spin_unlock(&mdsc->snap_flush_lock); ++ ++ if (need_put) ++ iput(inode); + } + + /* +diff --git a/fs/ceph/snap.c b/fs/ceph/snap.c +index 0b236ebd989fc..2e73ba62bd7aa 100644 +--- a/fs/ceph/snap.c ++++ b/fs/ceph/snap.c +@@ -693,8 
+693,10 @@ int __ceph_finish_cap_snap(struct ceph_inode_info *ci, + capsnap->size); + + spin_lock(&mdsc->snap_flush_lock); +- if (list_empty(&ci->i_snap_flush_item)) ++ if (list_empty(&ci->i_snap_flush_item)) { ++ ihold(inode); + list_add_tail(&ci->i_snap_flush_item, &mdsc->snap_flush_list); ++ } + spin_unlock(&mdsc->snap_flush_lock); + return 1; /* caller may want to ceph_flush_snaps */ + } +diff --git a/fs/ext4/super.c b/fs/ext4/super.c +index 1f222c396932e..9c987bdbab334 100644 +--- a/fs/ext4/super.c ++++ b/fs/ext4/super.c +@@ -6354,7 +6354,6 @@ static int __ext4_remount(struct fs_context *fc, struct super_block *sb) + struct ext4_mount_options old_opts; + ext4_group_t g; + int err = 0; +- int enable_rw = 0; + #ifdef CONFIG_QUOTA + int enable_quota = 0; + int i, j; +@@ -6541,7 +6540,7 @@ static int __ext4_remount(struct fs_context *fc, struct super_block *sb) + if (err) + goto restore_opts; + +- enable_rw = 1; ++ sb->s_flags &= ~SB_RDONLY; + if (ext4_has_feature_mmp(sb)) { + err = ext4_multi_mount_protect(sb, + le64_to_cpu(es->s_mmp_block)); +@@ -6588,9 +6587,6 @@ static int __ext4_remount(struct fs_context *fc, struct super_block *sb) + if (!test_opt(sb, BLOCK_VALIDITY) && sbi->s_system_blks) + ext4_release_system_zone(sb); + +- if (enable_rw) +- sb->s_flags &= ~SB_RDONLY; +- + /* + * Reinitialize lazy itable initialization thread based on + * current settings +diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c +index 6fa38707163b1..7421ca1d968b0 100644 +--- a/fs/ext4/xattr.c ++++ b/fs/ext4/xattr.c +@@ -2057,8 +2057,9 @@ inserted: + else { + u32 ref; + ++#ifdef EXT4_XATTR_DEBUG + WARN_ON_ONCE(dquot_initialize_needed(inode)); +- ++#endif + /* The old block is released after updating + the inode. 
*/ + error = dquot_alloc_block(inode, +@@ -2121,8 +2122,9 @@ inserted: + /* We need to allocate a new block */ + ext4_fsblk_t goal, block; + ++#ifdef EXT4_XATTR_DEBUG + WARN_ON_ONCE(dquot_initialize_needed(inode)); +- ++#endif + goal = ext4_group_first_block_no(sb, + EXT4_I(inode)->i_block_group); + block = ext4_new_meta_blocks(handle, inode, goal, 0, +diff --git a/fs/ksmbd/connection.c b/fs/ksmbd/connection.c +index 4882a812ea867..e11d4a1e63d73 100644 +--- a/fs/ksmbd/connection.c ++++ b/fs/ksmbd/connection.c +@@ -294,6 +294,9 @@ bool ksmbd_conn_alive(struct ksmbd_conn *conn) + return true; + } + ++#define SMB1_MIN_SUPPORTED_HEADER_SIZE (sizeof(struct smb_hdr)) ++#define SMB2_MIN_SUPPORTED_HEADER_SIZE (sizeof(struct smb2_hdr) + 4) ++ + /** + * ksmbd_conn_handler_loop() - session thread to listen on new smb requests + * @p: connection instance +@@ -350,6 +353,9 @@ int ksmbd_conn_handler_loop(void *p) + if (pdu_size > MAX_STREAM_PROT_LEN) + break; + ++ if (pdu_size < SMB1_MIN_SUPPORTED_HEADER_SIZE) ++ break; ++ + /* 4 for rfc1002 length field */ + /* 1 for implied bcc[0] */ + size = pdu_size + 4 + 1; +@@ -377,6 +383,12 @@ int ksmbd_conn_handler_loop(void *p) + continue; + } + ++ if (((struct smb2_hdr *)smb2_get_msg(conn->request_buf))->ProtocolId == ++ SMB2_PROTO_NUMBER) { ++ if (pdu_size < SMB2_MIN_SUPPORTED_HEADER_SIZE) ++ break; ++ } ++ + if (!default_conn_ops.process_fn) { + pr_err("No connection request callback\n"); + break; +diff --git a/fs/ksmbd/oplock.c b/fs/ksmbd/oplock.c +index db181bdad73a2..844b303baf293 100644 +--- a/fs/ksmbd/oplock.c ++++ b/fs/ksmbd/oplock.c +@@ -1415,56 +1415,38 @@ void create_lease_buf(u8 *rbuf, struct lease *lease) + */ + struct lease_ctx_info *parse_lease_state(void *open_req) + { +- char *data_offset; + struct create_context *cc; +- unsigned int next = 0; +- char *name; +- bool found = false; + struct smb2_create_req *req = (struct smb2_create_req *)open_req; +- struct lease_ctx_info *lreq = kzalloc(sizeof(struct lease_ctx_info), 
+- GFP_KERNEL); ++ struct lease_ctx_info *lreq; ++ ++ cc = smb2_find_context_vals(req, SMB2_CREATE_REQUEST_LEASE, 4); ++ if (IS_ERR_OR_NULL(cc)) ++ return NULL; ++ ++ lreq = kzalloc(sizeof(struct lease_ctx_info), GFP_KERNEL); + if (!lreq) + return NULL; + +- data_offset = (char *)req + le32_to_cpu(req->CreateContextsOffset); +- cc = (struct create_context *)data_offset; +- do { +- cc = (struct create_context *)((char *)cc + next); +- name = le16_to_cpu(cc->NameOffset) + (char *)cc; +- if (le16_to_cpu(cc->NameLength) != 4 || +- strncmp(name, SMB2_CREATE_REQUEST_LEASE, 4)) { +- next = le32_to_cpu(cc->Next); +- continue; +- } +- found = true; +- break; +- } while (next != 0); ++ if (sizeof(struct lease_context_v2) == le32_to_cpu(cc->DataLength)) { ++ struct create_lease_v2 *lc = (struct create_lease_v2 *)cc; + +- if (found) { +- if (sizeof(struct lease_context_v2) == le32_to_cpu(cc->DataLength)) { +- struct create_lease_v2 *lc = (struct create_lease_v2 *)cc; +- +- memcpy(lreq->lease_key, lc->lcontext.LeaseKey, SMB2_LEASE_KEY_SIZE); +- lreq->req_state = lc->lcontext.LeaseState; +- lreq->flags = lc->lcontext.LeaseFlags; +- lreq->duration = lc->lcontext.LeaseDuration; +- memcpy(lreq->parent_lease_key, lc->lcontext.ParentLeaseKey, +- SMB2_LEASE_KEY_SIZE); +- lreq->version = 2; +- } else { +- struct create_lease *lc = (struct create_lease *)cc; ++ memcpy(lreq->lease_key, lc->lcontext.LeaseKey, SMB2_LEASE_KEY_SIZE); ++ lreq->req_state = lc->lcontext.LeaseState; ++ lreq->flags = lc->lcontext.LeaseFlags; ++ lreq->duration = lc->lcontext.LeaseDuration; ++ memcpy(lreq->parent_lease_key, lc->lcontext.ParentLeaseKey, ++ SMB2_LEASE_KEY_SIZE); ++ lreq->version = 2; ++ } else { ++ struct create_lease *lc = (struct create_lease *)cc; + +- memcpy(lreq->lease_key, lc->lcontext.LeaseKey, SMB2_LEASE_KEY_SIZE); +- lreq->req_state = lc->lcontext.LeaseState; +- lreq->flags = lc->lcontext.LeaseFlags; +- lreq->duration = lc->lcontext.LeaseDuration; +- lreq->version = 1; +- } +- return lreq; 
++ memcpy(lreq->lease_key, lc->lcontext.LeaseKey, SMB2_LEASE_KEY_SIZE); ++ lreq->req_state = lc->lcontext.LeaseState; ++ lreq->flags = lc->lcontext.LeaseFlags; ++ lreq->duration = lc->lcontext.LeaseDuration; ++ lreq->version = 1; + } +- +- kfree(lreq); +- return NULL; ++ return lreq; + } + + /** +diff --git a/fs/ksmbd/smb2pdu.c b/fs/ksmbd/smb2pdu.c +index 46f815098a753..85783112e56a0 100644 +--- a/fs/ksmbd/smb2pdu.c ++++ b/fs/ksmbd/smb2pdu.c +@@ -991,13 +991,13 @@ static void decode_sign_cap_ctxt(struct ksmbd_conn *conn, + + static __le32 deassemble_neg_contexts(struct ksmbd_conn *conn, + struct smb2_negotiate_req *req, +- int len_of_smb) ++ unsigned int len_of_smb) + { + /* +4 is to account for the RFC1001 len field */ + struct smb2_neg_context *pctx = (struct smb2_neg_context *)req; + int i = 0, len_of_ctxts; +- int offset = le32_to_cpu(req->NegotiateContextOffset); +- int neg_ctxt_cnt = le16_to_cpu(req->NegotiateContextCount); ++ unsigned int offset = le32_to_cpu(req->NegotiateContextOffset); ++ unsigned int neg_ctxt_cnt = le16_to_cpu(req->NegotiateContextCount); + __le32 status = STATUS_INVALID_PARAMETER; + + ksmbd_debug(SMB, "decoding %d negotiate contexts\n", neg_ctxt_cnt); +@@ -1011,7 +1011,7 @@ static __le32 deassemble_neg_contexts(struct ksmbd_conn *conn, + while (i++ < neg_ctxt_cnt) { + int clen, ctxt_len; + +- if (len_of_ctxts < sizeof(struct smb2_neg_context)) ++ if (len_of_ctxts < (int)sizeof(struct smb2_neg_context)) + break; + + pctx = (struct smb2_neg_context *)((char *)pctx + offset); +@@ -1066,9 +1066,8 @@ static __le32 deassemble_neg_contexts(struct ksmbd_conn *conn, + } + + /* offsets must be 8 byte aligned */ +- clen = (clen + 7) & ~0x7; +- offset = clen + sizeof(struct smb2_neg_context); +- len_of_ctxts -= clen + sizeof(struct smb2_neg_context); ++ offset = (ctxt_len + 7) & ~0x7; ++ len_of_ctxts -= offset; + } + return status; + } +diff --git a/fs/ksmbd/smbacl.c b/fs/ksmbd/smbacl.c +index 6d6cfb6957a99..0a5862a61c773 100644 +--- 
a/fs/ksmbd/smbacl.c ++++ b/fs/ksmbd/smbacl.c +@@ -1290,7 +1290,7 @@ int smb_check_perm_dacl(struct ksmbd_conn *conn, const struct path *path, + + if (IS_ENABLED(CONFIG_FS_POSIX_ACL)) { + posix_acls = get_inode_acl(d_inode(path->dentry), ACL_TYPE_ACCESS); +- if (posix_acls && !found) { ++ if (!IS_ERR_OR_NULL(posix_acls) && !found) { + unsigned int id = -1; + + pa_entry = posix_acls->a_entries; +@@ -1314,7 +1314,7 @@ int smb_check_perm_dacl(struct ksmbd_conn *conn, const struct path *path, + } + } + } +- if (posix_acls) ++ if (!IS_ERR_OR_NULL(posix_acls)) + posix_acl_release(posix_acls); + } + +diff --git a/fs/ksmbd/vfs.c b/fs/ksmbd/vfs.c +index 5ea9229dad2c0..f6a8b26593090 100644 +--- a/fs/ksmbd/vfs.c ++++ b/fs/ksmbd/vfs.c +@@ -1377,7 +1377,7 @@ static struct xattr_smb_acl *ksmbd_vfs_make_xattr_posix_acl(struct mnt_idmap *id + return NULL; + + posix_acls = get_inode_acl(inode, acl_type); +- if (!posix_acls) ++ if (IS_ERR_OR_NULL(posix_acls)) + return NULL; + + smb_acl = kzalloc(sizeof(struct xattr_smb_acl) + +@@ -1886,7 +1886,7 @@ int ksmbd_vfs_inherit_posix_acl(struct mnt_idmap *idmap, + return -EOPNOTSUPP; + + acls = get_inode_acl(parent_inode, ACL_TYPE_DEFAULT); +- if (!acls) ++ if (IS_ERR_OR_NULL(acls)) + return -ENOENT; + pace = acls->a_entries; + +diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h +index 7db9f960221d3..7ed63f5bbe056 100644 +--- a/include/linux/netdevice.h ++++ b/include/linux/netdevice.h +@@ -612,7 +612,7 @@ struct netdev_queue { + netdevice_tracker dev_tracker; + + struct Qdisc __rcu *qdisc; +- struct Qdisc *qdisc_sleeping; ++ struct Qdisc __rcu *qdisc_sleeping; + #ifdef CONFIG_SYSFS + struct kobject kobj; + #endif +@@ -760,8 +760,11 @@ static inline void rps_record_sock_flow(struct rps_sock_flow_table *table, + /* We only give a hint, preemption can change CPU under us */ + val |= raw_smp_processor_id(); + +- if (table->ents[index] != val) +- table->ents[index] = val; ++ /* The following WRITE_ONCE() is paired with the 
READ_ONCE() ++ * here, and another one in get_rps_cpu(). ++ */ ++ if (READ_ONCE(table->ents[index]) != val) ++ WRITE_ONCE(table->ents[index], val); + } + } + +diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h +index a7e3a3405520a..5b9415cb979b1 100644 +--- a/include/linux/page-flags.h ++++ b/include/linux/page-flags.h +@@ -630,6 +630,12 @@ PAGEFLAG_FALSE(VmemmapSelfHosted, vmemmap_self_hosted) + * Please note that, confusingly, "page_mapping" refers to the inode + * address_space which maps the page from disk; whereas "page_mapped" + * refers to user virtual address space into which the page is mapped. ++ * ++ * For slab pages, since slab reuses the bits in struct page to store its ++ * internal states, the page->mapping does not exist as such, nor do these ++ * flags below. So in order to avoid testing non-existent bits, please ++ * make sure that PageSlab(page) actually evaluates to false before calling ++ * the following functions (e.g., PageAnon). See mm/slab.h. + */ + #define PAGE_MAPPING_ANON 0x1 + #define PAGE_MAPPING_MOVABLE 0x2 +diff --git a/include/linux/usb/hcd.h b/include/linux/usb/hcd.h +index b51c07111729b..ec37cc9743909 100644 +--- a/include/linux/usb/hcd.h ++++ b/include/linux/usb/hcd.h +@@ -503,6 +503,11 @@ void *hcd_buffer_alloc(struct usb_bus *bus, size_t size, + void hcd_buffer_free(struct usb_bus *bus, size_t size, + void *addr, dma_addr_t dma); + ++void *hcd_buffer_alloc_pages(struct usb_hcd *hcd, ++ size_t size, gfp_t mem_flags, dma_addr_t *dma); ++void hcd_buffer_free_pages(struct usb_hcd *hcd, ++ size_t size, void *addr, dma_addr_t dma); ++ + /* generic bus glue, needed for host controllers that don't use PCI */ + extern irqreturn_t usb_hcd_irq(int irq, void *__hcd); + +diff --git a/include/net/bluetooth/bluetooth.h b/include/net/bluetooth/bluetooth.h +index bcc5a4cd2c17b..1b4230cd42a37 100644 +--- a/include/net/bluetooth/bluetooth.h ++++ b/include/net/bluetooth/bluetooth.h +@@ -1,6 +1,7 @@ + /* + BlueZ - Bluetooth 
protocol stack for Linux + Copyright (C) 2000-2001 Qualcomm Incorporated ++ Copyright 2023 NXP + + Written 2000,2001 by Maxim Krasnyansky + +@@ -171,23 +172,39 @@ struct bt_iso_io_qos { + __u8 rtn; + }; + +-struct bt_iso_qos { +- union { +- __u8 cig; +- __u8 big; +- }; +- union { +- __u8 cis; +- __u8 bis; +- }; +- union { +- __u8 sca; +- __u8 sync_interval; +- }; ++struct bt_iso_ucast_qos { ++ __u8 cig; ++ __u8 cis; ++ __u8 sca; ++ __u8 packing; ++ __u8 framing; ++ struct bt_iso_io_qos in; ++ struct bt_iso_io_qos out; ++}; ++ ++struct bt_iso_bcast_qos { ++ __u8 big; ++ __u8 bis; ++ __u8 sync_interval; + __u8 packing; + __u8 framing; + struct bt_iso_io_qos in; + struct bt_iso_io_qos out; ++ __u8 encryption; ++ __u8 bcode[16]; ++ __u8 options; ++ __u16 skip; ++ __u16 sync_timeout; ++ __u8 sync_cte_type; ++ __u8 mse; ++ __u16 timeout; ++}; ++ ++struct bt_iso_qos { ++ union { ++ struct bt_iso_ucast_qos ucast; ++ struct bt_iso_bcast_qos bcast; ++ }; + }; + + #define BT_ISO_PHY_1M 0x01 +diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h +index 07df96c47ef4f..872dcb91a540e 100644 +--- a/include/net/bluetooth/hci.h ++++ b/include/net/bluetooth/hci.h +@@ -350,6 +350,7 @@ enum { + enum { + HCI_SETUP, + HCI_CONFIG, ++ HCI_DEBUGFS_CREATED, + HCI_AUTO_OFF, + HCI_RFKILLED, + HCI_MGMT, +diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h +index d5311ceb21c62..65b76bf63109b 100644 +--- a/include/net/bluetooth/hci_core.h ++++ b/include/net/bluetooth/hci_core.h +@@ -1,6 +1,7 @@ + /* + BlueZ - Bluetooth protocol stack for Linux + Copyright (c) 2000-2001, 2010, Code Aurora Forum. All rights reserved. 
++ Copyright 2023 NXP + + Written 2000,2001 by Maxim Krasnyansky + +@@ -513,6 +514,7 @@ struct hci_dev { + struct work_struct cmd_sync_work; + struct list_head cmd_sync_work_list; + struct mutex cmd_sync_work_lock; ++ struct mutex unregister_lock; + struct work_struct cmd_sync_cancel_work; + struct work_struct reenable_adv_work; + +@@ -764,7 +766,10 @@ struct hci_conn { + void *iso_data; + struct amp_mgr *amp_mgr; + +- struct hci_conn *link; ++ struct list_head link_list; ++ struct hci_conn *parent; ++ struct hci_link *link; ++ + struct bt_codec codec; + + void (*connect_cfm_cb) (struct hci_conn *conn, u8 status); +@@ -774,6 +779,11 @@ struct hci_conn { + void (*cleanup)(struct hci_conn *conn); + }; + ++struct hci_link { ++ struct list_head list; ++ struct hci_conn *conn; ++}; ++ + struct hci_chan { + struct list_head list; + __u16 handle; +@@ -1091,7 +1101,7 @@ static inline struct hci_conn *hci_conn_hash_lookup_bis(struct hci_dev *hdev, + if (bacmp(&c->dst, ba) || c->type != ISO_LINK) + continue; + +- if (c->iso_qos.big == big && c->iso_qos.bis == bis) { ++ if (c->iso_qos.bcast.big == big && c->iso_qos.bcast.bis == bis) { + rcu_read_unlock(); + return c; + } +@@ -1166,7 +1176,9 @@ static inline struct hci_conn *hci_conn_hash_lookup_le(struct hci_dev *hdev, + + static inline struct hci_conn *hci_conn_hash_lookup_cis(struct hci_dev *hdev, + bdaddr_t *ba, +- __u8 ba_type) ++ __u8 ba_type, ++ __u8 cig, ++ __u8 id) + { + struct hci_conn_hash *h = &hdev->conn_hash; + struct hci_conn *c; +@@ -1177,7 +1189,16 @@ static inline struct hci_conn *hci_conn_hash_lookup_cis(struct hci_dev *hdev, + if (c->type != ISO_LINK) + continue; + +- if (ba_type == c->dst_type && !bacmp(&c->dst, ba)) { ++ /* Match CIG ID if set */ ++ if (cig != BT_ISO_QOS_CIG_UNSET && cig != c->iso_qos.ucast.cig) ++ continue; ++ ++ /* Match CIS ID if set */ ++ if (id != BT_ISO_QOS_CIS_UNSET && id != c->iso_qos.ucast.cis) ++ continue; ++ ++ /* Match destination address if set */ ++ if (!ba || (ba_type == 
c->dst_type && !bacmp(&c->dst, ba))) { + rcu_read_unlock(); + return c; + } +@@ -1200,7 +1221,7 @@ static inline struct hci_conn *hci_conn_hash_lookup_cig(struct hci_dev *hdev, + if (c->type != ISO_LINK) + continue; + +- if (handle == c->iso_qos.cig) { ++ if (handle == c->iso_qos.ucast.cig) { + rcu_read_unlock(); + return c; + } +@@ -1223,7 +1244,7 @@ static inline struct hci_conn *hci_conn_hash_lookup_big(struct hci_dev *hdev, + if (bacmp(&c->dst, BDADDR_ANY) || c->type != ISO_LINK) + continue; + +- if (handle == c->iso_qos.big) { ++ if (handle == c->iso_qos.bcast.big) { + rcu_read_unlock(); + return c; + } +@@ -1303,7 +1324,7 @@ int hci_le_create_cis(struct hci_conn *conn); + + struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst, + u8 role); +-int hci_conn_del(struct hci_conn *conn); ++void hci_conn_del(struct hci_conn *conn); + void hci_conn_hash_flush(struct hci_dev *hdev); + void hci_conn_check_pending(struct hci_dev *hdev); + +@@ -1332,7 +1353,7 @@ struct hci_conn *hci_connect_bis(struct hci_dev *hdev, bdaddr_t *dst, + __u8 dst_type, struct bt_iso_qos *qos, + __u8 data_len, __u8 *data); + int hci_pa_create_sync(struct hci_dev *hdev, bdaddr_t *dst, __u8 dst_type, +- __u8 sid); ++ __u8 sid, struct bt_iso_qos *qos); + int hci_le_big_create_sync(struct hci_dev *hdev, struct bt_iso_qos *qos, + __u16 sync_handle, __u8 num_bis, __u8 bis[]); + int hci_conn_check_link_mode(struct hci_conn *conn); +@@ -1377,12 +1398,14 @@ static inline void hci_conn_put(struct hci_conn *conn) + put_device(&conn->dev); + } + +-static inline void hci_conn_hold(struct hci_conn *conn) ++static inline struct hci_conn *hci_conn_hold(struct hci_conn *conn) + { + BT_DBG("hcon %p orig refcnt %d", conn, atomic_read(&conn->refcnt)); + + atomic_inc(&conn->refcnt); + cancel_delayed_work(&conn->disc_work); ++ ++ return conn; + } + + static inline void hci_conn_drop(struct hci_conn *conn) +diff --git a/include/net/neighbour.h b/include/net/neighbour.h +index 
2f2a6023fb0e5..94a1599824d8f 100644 +--- a/include/net/neighbour.h ++++ b/include/net/neighbour.h +@@ -180,7 +180,7 @@ struct pneigh_entry { + netdevice_tracker dev_tracker; + u32 flags; + u8 protocol; +- u8 key[]; ++ u32 key[]; + }; + + /* +diff --git a/include/net/netns/ipv6.h b/include/net/netns/ipv6.h +index b4af4837d80b4..f6e6a3ab91489 100644 +--- a/include/net/netns/ipv6.h ++++ b/include/net/netns/ipv6.h +@@ -53,7 +53,7 @@ struct netns_sysctl_ipv6 { + int seg6_flowlabel; + u32 ioam6_id; + u64 ioam6_id_wide; +- bool skip_notify_on_dev_down; ++ int skip_notify_on_dev_down; + u8 fib_notify_on_flag_change; + }; + +diff --git a/include/net/ping.h b/include/net/ping.h +index 9233ad3de0ade..bc7779262e603 100644 +--- a/include/net/ping.h ++++ b/include/net/ping.h +@@ -16,11 +16,7 @@ + #define PING_HTABLE_SIZE 64 + #define PING_HTABLE_MASK (PING_HTABLE_SIZE-1) + +-/* +- * gid_t is either uint or ushort. We want to pass it to +- * proc_dointvec_minmax(), so it must not be larger than MAX_INT +- */ +-#define GID_T_MAX (((gid_t)~0U) >> 1) ++#define GID_T_MAX (((gid_t)~0U) - 1) + + /* Compatibility glue so we can support IPv6 when it's compiled as a module */ + struct pingv6_ops { +diff --git a/include/net/pkt_sched.h b/include/net/pkt_sched.h +index fc688c7e95951..4df802f84eeba 100644 +--- a/include/net/pkt_sched.h ++++ b/include/net/pkt_sched.h +@@ -128,6 +128,8 @@ static inline void qdisc_run(struct Qdisc *q) + } + } + ++extern const struct nla_policy rtm_tca_policy[TCA_MAX + 1]; ++ + /* Calculate maximal size of packet seen by hard_start_xmit + routine of this device. 
+ */ +diff --git a/include/net/rpl.h b/include/net/rpl.h +index 308ef0a05caef..30fe780d1e7c8 100644 +--- a/include/net/rpl.h ++++ b/include/net/rpl.h +@@ -23,9 +23,6 @@ static inline int rpl_init(void) + static inline void rpl_exit(void) {} + #endif + +-/* Worst decompression memory usage ipv6 address (16) + pad 7 */ +-#define IPV6_RPL_SRH_WORST_SWAP_SIZE (sizeof(struct in6_addr) + 7) +- + size_t ipv6_rpl_srh_size(unsigned char n, unsigned char cmpri, + unsigned char cmpre); + +diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h +index fab5ba3e61b7c..27271f2b37cb3 100644 +--- a/include/net/sch_generic.h ++++ b/include/net/sch_generic.h +@@ -545,7 +545,7 @@ static inline struct Qdisc *qdisc_root_bh(const struct Qdisc *qdisc) + + static inline struct Qdisc *qdisc_root_sleeping(const struct Qdisc *qdisc) + { +- return qdisc->dev_queue->qdisc_sleeping; ++ return rcu_dereference_rtnl(qdisc->dev_queue->qdisc_sleeping); + } + + static inline spinlock_t *qdisc_root_sleeping_lock(const struct Qdisc *qdisc) +@@ -754,7 +754,9 @@ static inline bool qdisc_tx_changing(const struct net_device *dev) + + for (i = 0; i < dev->num_tx_queues; i++) { + struct netdev_queue *txq = netdev_get_tx_queue(dev, i); +- if (rcu_access_pointer(txq->qdisc) != txq->qdisc_sleeping) ++ ++ if (rcu_access_pointer(txq->qdisc) != ++ rcu_access_pointer(txq->qdisc_sleeping)) + return true; + } + return false; +diff --git a/include/net/sock.h b/include/net/sock.h +index 45e46a1c4afc6..f0654c44acf5f 100644 +--- a/include/net/sock.h ++++ b/include/net/sock.h +@@ -1152,8 +1152,12 @@ static inline void sock_rps_record_flow(const struct sock *sk) + * OR an additional socket flag + * [1] : sk_state and sk_prot are in the same cache line. + */ +- if (sk->sk_state == TCP_ESTABLISHED) +- sock_rps_record_flow_hash(sk->sk_rxhash); ++ if (sk->sk_state == TCP_ESTABLISHED) { ++ /* This READ_ONCE() is paired with the WRITE_ONCE() ++ * from sock_rps_save_rxhash() and sock_rps_reset_rxhash(). 
++ */ ++ sock_rps_record_flow_hash(READ_ONCE(sk->sk_rxhash)); ++ } + } + #endif + } +@@ -1162,15 +1166,19 @@ static inline void sock_rps_save_rxhash(struct sock *sk, + const struct sk_buff *skb) + { + #ifdef CONFIG_RPS +- if (unlikely(sk->sk_rxhash != skb->hash)) +- sk->sk_rxhash = skb->hash; ++ /* The following WRITE_ONCE() is paired with the READ_ONCE() ++ * here, and another one in sock_rps_record_flow(). ++ */ ++ if (unlikely(READ_ONCE(sk->sk_rxhash) != skb->hash)) ++ WRITE_ONCE(sk->sk_rxhash, skb->hash); + #endif + } + + static inline void sock_rps_reset_rxhash(struct sock *sk) + { + #ifdef CONFIG_RPS +- sk->sk_rxhash = 0; ++ /* Paired with READ_ONCE() in sock_rps_record_flow() */ ++ WRITE_ONCE(sk->sk_rxhash, 0); + #endif + } + +diff --git a/kernel/bpf/map_in_map.c b/kernel/bpf/map_in_map.c +index 38136ec4e095a..fbc3e944dc747 100644 +--- a/kernel/bpf/map_in_map.c ++++ b/kernel/bpf/map_in_map.c +@@ -81,9 +81,13 @@ struct bpf_map *bpf_map_meta_alloc(int inner_map_ufd) + /* Misc members not needed in bpf_map_meta_equal() check. 
*/ + inner_map_meta->ops = inner_map->ops; + if (inner_map->ops == &array_map_ops) { ++ struct bpf_array *inner_array_meta = ++ container_of(inner_map_meta, struct bpf_array, map); ++ struct bpf_array *inner_array = container_of(inner_map, struct bpf_array, map); ++ ++ inner_array_meta->index_mask = inner_array->index_mask; ++ inner_array_meta->elem_size = inner_array->elem_size; + inner_map_meta->bypass_spec_v1 = inner_map->bypass_spec_v1; +- container_of(inner_map_meta, struct bpf_array, map)->index_mask = +- container_of(inner_map, struct bpf_array, map)->index_mask; + } + + fdput(f); +diff --git a/kernel/fork.c b/kernel/fork.c +index ea332319dffea..1ec1e9ea4bf83 100644 +--- a/kernel/fork.c ++++ b/kernel/fork.c +@@ -559,6 +559,7 @@ void free_task(struct task_struct *tsk) + arch_release_task_struct(tsk); + if (tsk->flags & PF_KTHREAD) + free_kthread_struct(tsk); ++ bpf_task_storage_free(tsk); + free_task_struct(tsk); + } + EXPORT_SYMBOL(free_task); +@@ -845,7 +846,6 @@ void __put_task_struct(struct task_struct *tsk) + cgroup_free(tsk); + task_numa_free(tsk, true); + security_task_free(tsk); +- bpf_task_storage_free(tsk); + exit_creds(tsk); + delayacct_tsk_free(tsk); + put_signal_struct(tsk->signal); +diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c +index e8da032bb6fc8..165441044bc55 100644 +--- a/kernel/trace/bpf_trace.c ++++ b/kernel/trace/bpf_trace.c +@@ -900,13 +900,23 @@ static const struct bpf_func_proto bpf_send_signal_thread_proto = { + + BPF_CALL_3(bpf_d_path, struct path *, path, char *, buf, u32, sz) + { ++ struct path copy; + long len; + char *p; + + if (!sz) + return 0; + +- p = d_path(path, buf, sz); ++ /* ++ * The path pointer is verified as trusted and safe to use, ++ * but let's double check it's valid anyway to workaround ++ * potentially broken verifier. 
++ */
++ len = copy_from_kernel_nofault(&copy, path, sizeof(*path));
++ if (len < 0)
++ return len;
++
++ p = d_path(&copy, buf, sz);
+ if (IS_ERR(p)) {
+ len = PTR_ERR(p);
+ } else {
+diff --git a/lib/cpu_rmap.c b/lib/cpu_rmap.c
+index e77f12bb3c774..1833ad73de6fc 100644
+--- a/lib/cpu_rmap.c
++++ b/lib/cpu_rmap.c
+@@ -268,8 +268,8 @@ static void irq_cpu_rmap_release(struct kref *ref)
+ struct irq_glue *glue =
+ container_of(ref, struct irq_glue, notify.kref);
+
+- cpu_rmap_put(glue->rmap);
+ glue->rmap->obj[glue->index] = NULL;
++ cpu_rmap_put(glue->rmap);
+ kfree(glue);
+ }
+
+diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
+index c3547a373c9ca..c7d27c3f2e967 100644
+--- a/mm/Kconfig.debug
++++ b/mm/Kconfig.debug
+@@ -98,6 +98,7 @@ config PAGE_OWNER
+ config PAGE_TABLE_CHECK
+ bool "Check for invalid mappings in user page tables"
+ depends on ARCH_SUPPORTS_PAGE_TABLE_CHECK
++ depends on EXCLUSIVE_SYSTEM_RAM
+ select PAGE_EXTENSION
+ help
+ Check that anonymous page is not being mapped twice with read write
+diff --git a/mm/page_table_check.c b/mm/page_table_check.c
+index 25d8610c00429..f2baf97d5f389 100644
+--- a/mm/page_table_check.c
++++ b/mm/page_table_check.c
+@@ -71,6 +71,8 @@ static void page_table_check_clear(struct mm_struct *mm, unsigned long addr,
+
+ page = pfn_to_page(pfn);
+ page_ext = page_ext_get(page);
++
++ BUG_ON(PageSlab(page));
+ anon = PageAnon(page);
+
+ for (i = 0; i < pgcnt; i++) {
+@@ -107,6 +109,8 @@ static void page_table_check_set(struct mm_struct *mm, unsigned long addr,
+
+ page = pfn_to_page(pfn);
+ page_ext = page_ext_get(page);
++
++ BUG_ON(PageSlab(page));
+ anon = PageAnon(page);
+
+ for (i = 0; i < pgcnt; i++) {
+@@ -133,6 +137,8 @@ void __page_table_check_zero(struct page *page, unsigned int order)
+ struct page_ext *page_ext;
+ unsigned long i;
+
++ BUG_ON(PageSlab(page));
++
+ page_ext = page_ext_get(page);
+ BUG_ON(!page_ext);
+ for (i = 0; i < (1ul << order); i++) {
+diff --git a/net/batman-adv/distributed-arp-table.c 
b/net/batman-adv/distributed-arp-table.c +index 6968e55eb9714..28a939d560906 100644 +--- a/net/batman-adv/distributed-arp-table.c ++++ b/net/batman-adv/distributed-arp-table.c +@@ -101,7 +101,6 @@ static void batadv_dat_purge(struct work_struct *work); + */ + static void batadv_dat_start_timer(struct batadv_priv *bat_priv) + { +- INIT_DELAYED_WORK(&bat_priv->dat.work, batadv_dat_purge); + queue_delayed_work(batadv_event_workqueue, &bat_priv->dat.work, + msecs_to_jiffies(10000)); + } +@@ -819,6 +818,7 @@ int batadv_dat_init(struct batadv_priv *bat_priv) + if (!bat_priv->dat.hash) + return -ENOMEM; + ++ INIT_DELAYED_WORK(&bat_priv->dat.work, batadv_dat_purge); + batadv_dat_start_timer(bat_priv); + + batadv_tvlv_handler_register(bat_priv, batadv_dat_tvlv_ogm_handler_v1, +diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c +index 8455ba141ee61..62518d06d2aff 100644 +--- a/net/bluetooth/hci_conn.c ++++ b/net/bluetooth/hci_conn.c +@@ -1,6 +1,7 @@ + /* + BlueZ - Bluetooth protocol stack for Linux + Copyright (c) 2000-2001, 2010, Code Aurora Forum. All rights reserved. 
++ Copyright 2023 NXP + + Written 2000,2001 by Maxim Krasnyansky + +@@ -329,8 +330,11 @@ static void hci_add_sco(struct hci_conn *conn, __u16 handle) + static bool find_next_esco_param(struct hci_conn *conn, + const struct sco_param *esco_param, int size) + { ++ if (!conn->parent) ++ return false; ++ + for (; conn->attempt <= size; conn->attempt++) { +- if (lmp_esco_2m_capable(conn->link) || ++ if (lmp_esco_2m_capable(conn->parent) || + (esco_param[conn->attempt - 1].pkt_type & ESCO_2EV3)) + break; + BT_DBG("hcon %p skipped attempt %d, eSCO 2M not supported", +@@ -460,7 +464,7 @@ static int hci_enhanced_setup_sync(struct hci_dev *hdev, void *data) + break; + + case BT_CODEC_CVSD: +- if (lmp_esco_capable(conn->link)) { ++ if (conn->parent && lmp_esco_capable(conn->parent)) { + if (!find_next_esco_param(conn, esco_param_cvsd, + ARRAY_SIZE(esco_param_cvsd))) + return -EINVAL; +@@ -530,7 +534,7 @@ static bool hci_setup_sync_conn(struct hci_conn *conn, __u16 handle) + param = &esco_param_msbc[conn->attempt - 1]; + break; + case SCO_AIRMODE_CVSD: +- if (lmp_esco_capable(conn->link)) { ++ if (conn->parent && lmp_esco_capable(conn->parent)) { + if (!find_next_esco_param(conn, esco_param_cvsd, + ARRAY_SIZE(esco_param_cvsd))) + return false; +@@ -636,21 +640,22 @@ void hci_le_start_enc(struct hci_conn *conn, __le16 ediv, __le64 rand, + /* Device _must_ be locked */ + void hci_sco_setup(struct hci_conn *conn, __u8 status) + { +- struct hci_conn *sco = conn->link; ++ struct hci_link *link; + +- if (!sco) ++ link = list_first_entry_or_null(&conn->link_list, struct hci_link, list); ++ if (!link || !link->conn) + return; + + BT_DBG("hcon %p", conn); + + if (!status) { + if (lmp_esco_capable(conn->hdev)) +- hci_setup_sync(sco, conn->handle); ++ hci_setup_sync(link->conn, conn->handle); + else +- hci_add_sco(sco, conn->handle); ++ hci_add_sco(link->conn, conn->handle); + } else { +- hci_connect_cfm(sco, status); +- hci_conn_del(sco); ++ hci_connect_cfm(link->conn, status); ++ 
hci_conn_del(link->conn); + } + } + +@@ -795,8 +800,8 @@ static void bis_list(struct hci_conn *conn, void *data) + if (bacmp(&conn->dst, BDADDR_ANY)) + return; + +- if (d->big != conn->iso_qos.big || d->bis == BT_ISO_QOS_BIS_UNSET || +- d->bis != conn->iso_qos.bis) ++ if (d->big != conn->iso_qos.bcast.big || d->bis == BT_ISO_QOS_BIS_UNSET || ++ d->bis != conn->iso_qos.bcast.bis) + return; + + d->count++; +@@ -916,10 +921,10 @@ static void bis_cleanup(struct hci_conn *conn) + if (!test_and_clear_bit(HCI_CONN_PER_ADV, &conn->flags)) + return; + +- hci_le_terminate_big(hdev, conn->iso_qos.big, +- conn->iso_qos.bis); ++ hci_le_terminate_big(hdev, conn->iso_qos.bcast.big, ++ conn->iso_qos.bcast.bis); + } else { +- hci_le_big_terminate(hdev, conn->iso_qos.big, ++ hci_le_big_terminate(hdev, conn->iso_qos.bcast.big, + conn->sync_handle); + } + } +@@ -942,8 +947,8 @@ static void find_cis(struct hci_conn *conn, void *data) + { + struct iso_list_data *d = data; + +- /* Ignore broadcast */ +- if (!bacmp(&conn->dst, BDADDR_ANY)) ++ /* Ignore broadcast or if CIG don't match */ ++ if (!bacmp(&conn->dst, BDADDR_ANY) || d->cig != conn->iso_qos.ucast.cig) + return; + + d->count++; +@@ -958,17 +963,22 @@ static void cis_cleanup(struct hci_conn *conn) + struct hci_dev *hdev = conn->hdev; + struct iso_list_data d; + ++ if (conn->iso_qos.ucast.cig == BT_ISO_QOS_CIG_UNSET) ++ return; ++ + memset(&d, 0, sizeof(d)); +- d.cig = conn->iso_qos.cig; ++ d.cig = conn->iso_qos.ucast.cig; + + /* Check if ISO connection is a CIS and remove CIG if there are + * no other connections using it. 
+ */ ++ hci_conn_hash_list_state(hdev, find_cis, ISO_LINK, BT_BOUND, &d); ++ hci_conn_hash_list_state(hdev, find_cis, ISO_LINK, BT_CONNECT, &d); + hci_conn_hash_list_state(hdev, find_cis, ISO_LINK, BT_CONNECTED, &d); + if (d.count) + return; + +- hci_le_remove_cig(hdev, conn->iso_qos.cig); ++ hci_le_remove_cig(hdev, conn->iso_qos.ucast.cig); + } + + struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst, +@@ -1041,6 +1051,7 @@ struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst, + skb_queue_head_init(&conn->data_q); + + INIT_LIST_HEAD(&conn->chan_list); ++ INIT_LIST_HEAD(&conn->link_list); + + INIT_DELAYED_WORK(&conn->disc_work, hci_conn_timeout); + INIT_DELAYED_WORK(&conn->auto_accept_work, hci_conn_auto_accept); +@@ -1068,18 +1079,53 @@ struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst, + return conn; + } + +-static bool hci_conn_unlink(struct hci_conn *conn) ++static void hci_conn_unlink(struct hci_conn *conn) + { ++ struct hci_dev *hdev = conn->hdev; ++ ++ bt_dev_dbg(hdev, "hcon %p", conn); ++ ++ if (!conn->parent) { ++ struct hci_link *link, *t; ++ ++ list_for_each_entry_safe(link, t, &conn->link_list, list) { ++ struct hci_conn *child = link->conn; ++ ++ hci_conn_unlink(child); ++ ++ /* If hdev is down it means ++ * hci_dev_close_sync/hci_conn_hash_flush is in progress ++ * and links don't need to be cleanup as all connections ++ * would be cleanup. ++ */ ++ if (!test_bit(HCI_UP, &hdev->flags)) ++ continue; ++ ++ /* Due to race, SCO connection might be not established ++ * yet at this point. Delete it now, otherwise it is ++ * possible for it to be stuck and can't be deleted. 
++ */ ++ if (child->handle == HCI_CONN_HANDLE_UNSET) ++ hci_conn_del(child); ++ } ++ ++ return; ++ } ++ + if (!conn->link) +- return false; ++ return; + +- conn->link->link = NULL; +- conn->link = NULL; ++ list_del_rcu(&conn->link->list); ++ synchronize_rcu(); + +- return true; ++ hci_conn_put(conn->parent); ++ conn->parent = NULL; ++ ++ kfree(conn->link); ++ conn->link = NULL; + } + +-int hci_conn_del(struct hci_conn *conn) ++void hci_conn_del(struct hci_conn *conn) + { + struct hci_dev *hdev = conn->hdev; + +@@ -1090,18 +1136,7 @@ int hci_conn_del(struct hci_conn *conn) + cancel_delayed_work_sync(&conn->idle_work); + + if (conn->type == ACL_LINK) { +- struct hci_conn *link = conn->link; +- +- if (link) { +- hci_conn_unlink(conn); +- /* Due to race, SCO connection might be not established +- * yet at this point. Delete it now, otherwise it is +- * possible for it to be stuck and can't be deleted. +- */ +- if (link->handle == HCI_CONN_HANDLE_UNSET) +- hci_conn_del(link); +- } +- ++ hci_conn_unlink(conn); + /* Unacked frames */ + hdev->acl_cnt += conn->sent; + } else if (conn->type == LE_LINK) { +@@ -1112,7 +1147,7 @@ int hci_conn_del(struct hci_conn *conn) + else + hdev->acl_cnt += conn->sent; + } else { +- struct hci_conn *acl = conn->link; ++ struct hci_conn *acl = conn->parent; + + if (acl) { + hci_conn_unlink(conn); +@@ -1141,8 +1176,6 @@ int hci_conn_del(struct hci_conn *conn) + * rest of hci_conn_del. 
+ */ + hci_conn_cleanup(conn); +- +- return 0; + } + + struct hci_dev *hci_get_route(bdaddr_t *dst, bdaddr_t *src, uint8_t src_type) +@@ -1411,7 +1444,7 @@ static int qos_set_big(struct hci_dev *hdev, struct bt_iso_qos *qos) + struct iso_list_data data; + + /* Allocate a BIG if not set */ +- if (qos->big == BT_ISO_QOS_BIG_UNSET) { ++ if (qos->bcast.big == BT_ISO_QOS_BIG_UNSET) { + for (data.big = 0x00; data.big < 0xef; data.big++) { + data.count = 0; + data.bis = 0xff; +@@ -1426,7 +1459,7 @@ static int qos_set_big(struct hci_dev *hdev, struct bt_iso_qos *qos) + return -EADDRNOTAVAIL; + + /* Update BIG */ +- qos->big = data.big; ++ qos->bcast.big = data.big; + } + + return 0; +@@ -1437,7 +1470,7 @@ static int qos_set_bis(struct hci_dev *hdev, struct bt_iso_qos *qos) + struct iso_list_data data; + + /* Allocate BIS if not set */ +- if (qos->bis == BT_ISO_QOS_BIS_UNSET) { ++ if (qos->bcast.bis == BT_ISO_QOS_BIS_UNSET) { + /* Find an unused adv set to advertise BIS, skip instance 0x00 + * since it is reserved as general purpose set. 
+ */ +@@ -1455,7 +1488,7 @@ static int qos_set_bis(struct hci_dev *hdev, struct bt_iso_qos *qos) + return -EADDRNOTAVAIL; + + /* Update BIS */ +- qos->bis = data.bis; ++ qos->bcast.bis = data.bis; + } + + return 0; +@@ -1484,8 +1517,8 @@ static struct hci_conn *hci_add_bis(struct hci_dev *hdev, bdaddr_t *dst, + if (err) + return ERR_PTR(err); + +- data.big = qos->big; +- data.bis = qos->bis; ++ data.big = qos->bcast.big; ++ data.bis = qos->bcast.bis; + data.count = 0; + + /* Check if there is already a matching BIG/BIS */ +@@ -1493,7 +1526,7 @@ static struct hci_conn *hci_add_bis(struct hci_dev *hdev, bdaddr_t *dst, + if (data.count) + return ERR_PTR(-EADDRINUSE); + +- conn = hci_conn_hash_lookup_bis(hdev, dst, qos->big, qos->bis); ++ conn = hci_conn_hash_lookup_bis(hdev, dst, qos->bcast.big, qos->bcast.bis); + if (conn) + return ERR_PTR(-EADDRINUSE); + +@@ -1599,11 +1632,40 @@ struct hci_conn *hci_connect_acl(struct hci_dev *hdev, bdaddr_t *dst, + return acl; + } + ++static struct hci_link *hci_conn_link(struct hci_conn *parent, ++ struct hci_conn *conn) ++{ ++ struct hci_dev *hdev = parent->hdev; ++ struct hci_link *link; ++ ++ bt_dev_dbg(hdev, "parent %p hcon %p", parent, conn); ++ ++ if (conn->link) ++ return conn->link; ++ ++ if (conn->parent) ++ return NULL; ++ ++ link = kzalloc(sizeof(*link), GFP_KERNEL); ++ if (!link) ++ return NULL; ++ ++ link->conn = hci_conn_hold(conn); ++ conn->link = link; ++ conn->parent = hci_conn_get(parent); ++ ++ /* Use list_add_tail_rcu append to the list */ ++ list_add_tail_rcu(&link->list, &parent->link_list); ++ ++ return link; ++} ++ + struct hci_conn *hci_connect_sco(struct hci_dev *hdev, int type, bdaddr_t *dst, + __u16 setting, struct bt_codec *codec) + { + struct hci_conn *acl; + struct hci_conn *sco; ++ struct hci_link *link; + + acl = hci_connect_acl(hdev, dst, BT_SECURITY_LOW, HCI_AT_NO_BONDING, + CONN_REASON_SCO_CONNECT); +@@ -1619,10 +1681,12 @@ struct hci_conn *hci_connect_sco(struct hci_dev *hdev, int type, 
bdaddr_t *dst, + } + } + +- acl->link = sco; +- sco->link = acl; +- +- hci_conn_hold(sco); ++ link = hci_conn_link(acl, sco); ++ if (!link) { ++ hci_conn_drop(acl); ++ hci_conn_drop(sco); ++ return NULL; ++ } + + sco->setting = setting; + sco->codec = *codec; +@@ -1648,13 +1712,13 @@ static void cis_add(struct iso_list_data *d, struct bt_iso_qos *qos) + { + struct hci_cis_params *cis = &d->pdu.cis[d->pdu.cp.num_cis]; + +- cis->cis_id = qos->cis; +- cis->c_sdu = cpu_to_le16(qos->out.sdu); +- cis->p_sdu = cpu_to_le16(qos->in.sdu); +- cis->c_phy = qos->out.phy ? qos->out.phy : qos->in.phy; +- cis->p_phy = qos->in.phy ? qos->in.phy : qos->out.phy; +- cis->c_rtn = qos->out.rtn; +- cis->p_rtn = qos->in.rtn; ++ cis->cis_id = qos->ucast.cis; ++ cis->c_sdu = cpu_to_le16(qos->ucast.out.sdu); ++ cis->p_sdu = cpu_to_le16(qos->ucast.in.sdu); ++ cis->c_phy = qos->ucast.out.phy ? qos->ucast.out.phy : qos->ucast.in.phy; ++ cis->p_phy = qos->ucast.in.phy ? qos->ucast.in.phy : qos->ucast.out.phy; ++ cis->c_rtn = qos->ucast.out.rtn; ++ cis->p_rtn = qos->ucast.in.rtn; + + d->pdu.cp.num_cis++; + } +@@ -1667,8 +1731,8 @@ static void cis_list(struct hci_conn *conn, void *data) + if (!bacmp(&conn->dst, BDADDR_ANY)) + return; + +- if (d->cig != conn->iso_qos.cig || d->cis == BT_ISO_QOS_CIS_UNSET || +- d->cis != conn->iso_qos.cis) ++ if (d->cig != conn->iso_qos.ucast.cig || d->cis == BT_ISO_QOS_CIS_UNSET || ++ d->cis != conn->iso_qos.ucast.cis) + return; + + d->count++; +@@ -1687,17 +1751,18 @@ static int hci_le_create_big(struct hci_conn *conn, struct bt_iso_qos *qos) + + memset(&cp, 0, sizeof(cp)); + +- cp.handle = qos->big; +- cp.adv_handle = qos->bis; ++ cp.handle = qos->bcast.big; ++ cp.adv_handle = qos->bcast.bis; + cp.num_bis = 0x01; +- hci_cpu_to_le24(qos->out.interval, cp.bis.sdu_interval); +- cp.bis.sdu = cpu_to_le16(qos->out.sdu); +- cp.bis.latency = cpu_to_le16(qos->out.latency); +- cp.bis.rtn = qos->out.rtn; +- cp.bis.phy = qos->out.phy; +- cp.bis.packing = qos->packing; +- 
cp.bis.framing = qos->framing; +- cp.bis.encryption = 0x00; ++ hci_cpu_to_le24(qos->bcast.out.interval, cp.bis.sdu_interval); ++ cp.bis.sdu = cpu_to_le16(qos->bcast.out.sdu); ++ cp.bis.latency = cpu_to_le16(qos->bcast.out.latency); ++ cp.bis.rtn = qos->bcast.out.rtn; ++ cp.bis.phy = qos->bcast.out.phy; ++ cp.bis.packing = qos->bcast.packing; ++ cp.bis.framing = qos->bcast.framing; ++ cp.bis.encryption = qos->bcast.encryption; ++ memcpy(cp.bis.bcode, qos->bcast.bcode, sizeof(cp.bis.bcode)); + memset(&cp.bis.bcode, 0, sizeof(cp.bis.bcode)); + + return hci_send_cmd(hdev, HCI_OP_LE_CREATE_BIG, sizeof(cp), &cp); +@@ -1710,43 +1775,42 @@ static bool hci_le_set_cig_params(struct hci_conn *conn, struct bt_iso_qos *qos) + + memset(&data, 0, sizeof(data)); + +- /* Allocate a CIG if not set */ +- if (qos->cig == BT_ISO_QOS_CIG_UNSET) { +- for (data.cig = 0x00; data.cig < 0xff; data.cig++) { ++ /* Allocate first still reconfigurable CIG if not set */ ++ if (qos->ucast.cig == BT_ISO_QOS_CIG_UNSET) { ++ for (data.cig = 0x00; data.cig < 0xf0; data.cig++) { + data.count = 0; +- data.cis = 0xff; + +- hci_conn_hash_list_state(hdev, cis_list, ISO_LINK, +- BT_BOUND, &data); ++ hci_conn_hash_list_state(hdev, find_cis, ISO_LINK, ++ BT_CONNECT, &data); + if (data.count) + continue; + +- hci_conn_hash_list_state(hdev, cis_list, ISO_LINK, ++ hci_conn_hash_list_state(hdev, find_cis, ISO_LINK, + BT_CONNECTED, &data); + if (!data.count) + break; + } + +- if (data.cig == 0xff) ++ if (data.cig == 0xf0) + return false; + + /* Update CIG */ +- qos->cig = data.cig; ++ qos->ucast.cig = data.cig; + } + +- data.pdu.cp.cig_id = qos->cig; +- hci_cpu_to_le24(qos->out.interval, data.pdu.cp.c_interval); +- hci_cpu_to_le24(qos->in.interval, data.pdu.cp.p_interval); +- data.pdu.cp.sca = qos->sca; +- data.pdu.cp.packing = qos->packing; +- data.pdu.cp.framing = qos->framing; +- data.pdu.cp.c_latency = cpu_to_le16(qos->out.latency); +- data.pdu.cp.p_latency = cpu_to_le16(qos->in.latency); ++ data.pdu.cp.cig_id 
= qos->ucast.cig; ++ hci_cpu_to_le24(qos->ucast.out.interval, data.pdu.cp.c_interval); ++ hci_cpu_to_le24(qos->ucast.in.interval, data.pdu.cp.p_interval); ++ data.pdu.cp.sca = qos->ucast.sca; ++ data.pdu.cp.packing = qos->ucast.packing; ++ data.pdu.cp.framing = qos->ucast.framing; ++ data.pdu.cp.c_latency = cpu_to_le16(qos->ucast.out.latency); ++ data.pdu.cp.p_latency = cpu_to_le16(qos->ucast.in.latency); + +- if (qos->cis != BT_ISO_QOS_CIS_UNSET) { ++ if (qos->ucast.cis != BT_ISO_QOS_CIS_UNSET) { + data.count = 0; +- data.cig = qos->cig; +- data.cis = qos->cis; ++ data.cig = qos->ucast.cig; ++ data.cis = qos->ucast.cis; + + hci_conn_hash_list_state(hdev, cis_list, ISO_LINK, BT_BOUND, + &data); +@@ -1757,7 +1821,7 @@ static bool hci_le_set_cig_params(struct hci_conn *conn, struct bt_iso_qos *qos) + } + + /* Reprogram all CIS(s) with the same CIG */ +- for (data.cig = qos->cig, data.cis = 0x00; data.cis < 0x11; ++ for (data.cig = qos->ucast.cig, data.cis = 0x00; data.cis < 0x11; + data.cis++) { + data.count = 0; + +@@ -1767,14 +1831,14 @@ static bool hci_le_set_cig_params(struct hci_conn *conn, struct bt_iso_qos *qos) + continue; + + /* Allocate a CIS if not set */ +- if (qos->cis == BT_ISO_QOS_CIS_UNSET) { ++ if (qos->ucast.cis == BT_ISO_QOS_CIS_UNSET) { + /* Update CIS */ +- qos->cis = data.cis; ++ qos->ucast.cis = data.cis; + cis_add(&data, qos); + } + } + +- if (qos->cis == BT_ISO_QOS_CIS_UNSET || !data.pdu.cp.num_cis) ++ if (qos->ucast.cis == BT_ISO_QOS_CIS_UNSET || !data.pdu.cp.num_cis) + return false; + + if (hci_send_cmd(hdev, HCI_OP_LE_SET_CIG_PARAMS, +@@ -1791,7 +1855,8 @@ struct hci_conn *hci_bind_cis(struct hci_dev *hdev, bdaddr_t *dst, + { + struct hci_conn *cis; + +- cis = hci_conn_hash_lookup_cis(hdev, dst, dst_type); ++ cis = hci_conn_hash_lookup_cis(hdev, dst, dst_type, qos->ucast.cig, ++ qos->ucast.cis); + if (!cis) { + cis = hci_conn_add(hdev, ISO_LINK, dst, HCI_ROLE_MASTER); + if (!cis) +@@ -1809,32 +1874,32 @@ struct hci_conn 
*hci_bind_cis(struct hci_dev *hdev, bdaddr_t *dst, + return cis; + + /* Update LINK PHYs according to QoS preference */ +- cis->le_tx_phy = qos->out.phy; +- cis->le_rx_phy = qos->in.phy; ++ cis->le_tx_phy = qos->ucast.out.phy; ++ cis->le_rx_phy = qos->ucast.in.phy; + + /* If output interval is not set use the input interval as it cannot be + * 0x000000. + */ +- if (!qos->out.interval) +- qos->out.interval = qos->in.interval; ++ if (!qos->ucast.out.interval) ++ qos->ucast.out.interval = qos->ucast.in.interval; + + /* If input interval is not set use the output interval as it cannot be + * 0x000000. + */ +- if (!qos->in.interval) +- qos->in.interval = qos->out.interval; ++ if (!qos->ucast.in.interval) ++ qos->ucast.in.interval = qos->ucast.out.interval; + + /* If output latency is not set use the input latency as it cannot be + * 0x0000. + */ +- if (!qos->out.latency) +- qos->out.latency = qos->in.latency; ++ if (!qos->ucast.out.latency) ++ qos->ucast.out.latency = qos->ucast.in.latency; + + /* If input latency is not set use the output latency as it cannot be + * 0x0000. 
+ */ +- if (!qos->in.latency) +- qos->in.latency = qos->out.latency; ++ if (!qos->ucast.in.latency) ++ qos->ucast.in.latency = qos->ucast.out.latency; + + if (!hci_le_set_cig_params(cis, qos)) { + hci_conn_drop(cis); +@@ -1854,7 +1919,7 @@ bool hci_iso_setup_path(struct hci_conn *conn) + + memset(&cmd, 0, sizeof(cmd)); + +- if (conn->iso_qos.out.sdu) { ++ if (conn->iso_qos.ucast.out.sdu) { + cmd.handle = cpu_to_le16(conn->handle); + cmd.direction = 0x00; /* Input (Host to Controller) */ + cmd.path = 0x00; /* HCI path if enabled */ +@@ -1865,7 +1930,7 @@ bool hci_iso_setup_path(struct hci_conn *conn) + return false; + } + +- if (conn->iso_qos.in.sdu) { ++ if (conn->iso_qos.ucast.in.sdu) { + cmd.handle = cpu_to_le16(conn->handle); + cmd.direction = 0x01; /* Output (Controller to Host) */ + cmd.path = 0x00; /* HCI path if enabled */ +@@ -1889,10 +1954,10 @@ static int hci_create_cis_sync(struct hci_dev *hdev, void *data) + u8 cig; + + memset(&cmd, 0, sizeof(cmd)); +- cmd.cis[0].acl_handle = cpu_to_le16(conn->link->handle); ++ cmd.cis[0].acl_handle = cpu_to_le16(conn->parent->handle); + cmd.cis[0].cis_handle = cpu_to_le16(conn->handle); + cmd.cp.num_cis++; +- cig = conn->iso_qos.cig; ++ cig = conn->iso_qos.ucast.cig; + + hci_dev_lock(hdev); + +@@ -1902,11 +1967,12 @@ static int hci_create_cis_sync(struct hci_dev *hdev, void *data) + struct hci_cis *cis = &cmd.cis[cmd.cp.num_cis]; + + if (conn == data || conn->type != ISO_LINK || +- conn->state == BT_CONNECTED || conn->iso_qos.cig != cig) ++ conn->state == BT_CONNECTED || ++ conn->iso_qos.ucast.cig != cig) + continue; + + /* Check if all CIS(s) belonging to a CIG are ready */ +- if (!conn->link || conn->link->state != BT_CONNECTED || ++ if (!conn->parent || conn->parent->state != BT_CONNECTED || + conn->state != BT_CONNECT) { + cmd.cp.num_cis = 0; + break; +@@ -1923,7 +1989,7 @@ static int hci_create_cis_sync(struct hci_dev *hdev, void *data) + * command have been generated, the Controller shall return the + * error 
code Command Disallowed (0x0C). + */ +- cis->acl_handle = cpu_to_le16(conn->link->handle); ++ cis->acl_handle = cpu_to_le16(conn->parent->handle); + cis->cis_handle = cpu_to_le16(conn->handle); + cmd.cp.num_cis++; + } +@@ -1942,15 +2008,33 @@ static int hci_create_cis_sync(struct hci_dev *hdev, void *data) + int hci_le_create_cis(struct hci_conn *conn) + { + struct hci_conn *cis; ++ struct hci_link *link, *t; + struct hci_dev *hdev = conn->hdev; + int err; + ++ bt_dev_dbg(hdev, "hcon %p", conn); ++ + switch (conn->type) { + case LE_LINK: +- if (!conn->link || conn->state != BT_CONNECTED) ++ if (conn->state != BT_CONNECTED || list_empty(&conn->link_list)) + return -EINVAL; +- cis = conn->link; +- break; ++ ++ cis = NULL; ++ ++ /* hci_conn_link uses list_add_tail_rcu so the list is in ++ * the same order as the connections are requested. ++ */ ++ list_for_each_entry_safe(link, t, &conn->link_list, list) { ++ if (link->conn->state == BT_BOUND) { ++ err = hci_le_create_cis(link->conn); ++ if (err) ++ return err; ++ ++ cis = link->conn; ++ } ++ } ++ ++ return cis ? 
0 : -EINVAL; + case ISO_LINK: + cis = conn; + break; +@@ -2002,8 +2086,8 @@ static void hci_bind_bis(struct hci_conn *conn, + struct bt_iso_qos *qos) + { + /* Update LINK PHYs according to QoS preference */ +- conn->le_tx_phy = qos->out.phy; +- conn->le_tx_phy = qos->out.phy; ++ conn->le_tx_phy = qos->bcast.out.phy; ++ conn->le_tx_phy = qos->bcast.out.phy; + conn->iso_qos = *qos; + conn->state = BT_BOUND; + } +@@ -2016,16 +2100,16 @@ static int create_big_sync(struct hci_dev *hdev, void *data) + u32 flags = 0; + int err; + +- if (qos->out.phy == 0x02) ++ if (qos->bcast.out.phy == 0x02) + flags |= MGMT_ADV_FLAG_SEC_2M; + + /* Align intervals */ +- interval = qos->out.interval / 1250; ++ interval = qos->bcast.out.interval / 1250; + +- if (qos->bis) +- sync_interval = qos->sync_interval * 1600; ++ if (qos->bcast.bis) ++ sync_interval = qos->bcast.sync_interval * 1600; + +- err = hci_start_per_adv_sync(hdev, qos->bis, conn->le_per_adv_data_len, ++ err = hci_start_per_adv_sync(hdev, qos->bcast.bis, conn->le_per_adv_data_len, + conn->le_per_adv_data, flags, interval, + interval, sync_interval); + if (err) +@@ -2062,7 +2146,7 @@ static int create_pa_sync(struct hci_dev *hdev, void *data) + } + + int hci_pa_create_sync(struct hci_dev *hdev, bdaddr_t *dst, __u8 dst_type, +- __u8 sid) ++ __u8 sid, struct bt_iso_qos *qos) + { + struct hci_cp_le_pa_create_sync *cp; + +@@ -2075,9 +2159,13 @@ int hci_pa_create_sync(struct hci_dev *hdev, bdaddr_t *dst, __u8 dst_type, + return -ENOMEM; + } + ++ cp->options = qos->bcast.options; + cp->sid = sid; + cp->addr_type = dst_type; + bacpy(&cp->addr, dst); ++ cp->skip = cpu_to_le16(qos->bcast.skip); ++ cp->sync_timeout = cpu_to_le16(qos->bcast.sync_timeout); ++ cp->sync_cte_type = qos->bcast.sync_cte_type; + + /* Queue start pa_create_sync and scan */ + return hci_cmd_sync_queue(hdev, create_pa_sync, cp, create_pa_complete); +@@ -2100,8 +2188,12 @@ int hci_le_big_create_sync(struct hci_dev *hdev, struct bt_iso_qos *qos, + return err; + + 
memset(&pdu, 0, sizeof(pdu)); +- pdu.cp.handle = qos->big; ++ pdu.cp.handle = qos->bcast.big; + pdu.cp.sync_handle = cpu_to_le16(sync_handle); ++ pdu.cp.encryption = qos->bcast.encryption; ++ memcpy(pdu.cp.bcode, qos->bcast.bcode, sizeof(pdu.cp.bcode)); ++ pdu.cp.mse = qos->bcast.mse; ++ pdu.cp.timeout = cpu_to_le16(qos->bcast.timeout); + pdu.cp.num_bis = num_bis; + memcpy(pdu.bis, bis, num_bis); + +@@ -2151,7 +2243,7 @@ struct hci_conn *hci_connect_bis(struct hci_dev *hdev, bdaddr_t *dst, + return ERR_PTR(err); + } + +- hci_iso_qos_setup(hdev, conn, &qos->out, ++ hci_iso_qos_setup(hdev, conn, &qos->bcast.out, + conn->le_tx_phy ? conn->le_tx_phy : + hdev->le_tx_def_phys); + +@@ -2163,6 +2255,7 @@ struct hci_conn *hci_connect_cis(struct hci_dev *hdev, bdaddr_t *dst, + { + struct hci_conn *le; + struct hci_conn *cis; ++ struct hci_link *link; + + if (hci_dev_test_flag(hdev, HCI_ADVERTISING)) + le = hci_connect_le(hdev, dst, dst_type, false, +@@ -2177,9 +2270,9 @@ struct hci_conn *hci_connect_cis(struct hci_dev *hdev, bdaddr_t *dst, + if (IS_ERR(le)) + return le; + +- hci_iso_qos_setup(hdev, le, &qos->out, ++ hci_iso_qos_setup(hdev, le, &qos->ucast.out, + le->le_tx_phy ? le->le_tx_phy : hdev->le_tx_def_phys); +- hci_iso_qos_setup(hdev, le, &qos->in, ++ hci_iso_qos_setup(hdev, le, &qos->ucast.in, + le->le_rx_phy ? le->le_rx_phy : hdev->le_rx_def_phys); + + cis = hci_bind_cis(hdev, dst, dst_type, qos); +@@ -2188,16 +2281,18 @@ struct hci_conn *hci_connect_cis(struct hci_dev *hdev, bdaddr_t *dst, + return cis; + } + +- le->link = cis; +- cis->link = le; +- +- hci_conn_hold(cis); ++ link = hci_conn_link(le, cis); ++ if (!link) { ++ hci_conn_drop(le); ++ hci_conn_drop(cis); ++ return NULL; ++ } + + /* If LE is already connected and CIS handle is already set proceed to + * Create CIS immediately. 
+ */ + if (le->state == BT_CONNECTED && cis->handle != HCI_CONN_HANDLE_UNSET) +- hci_le_create_cis(le); ++ hci_le_create_cis(cis); + + return cis; + } +@@ -2437,22 +2532,27 @@ timer: + /* Drop all connection on the device */ + void hci_conn_hash_flush(struct hci_dev *hdev) + { +- struct hci_conn_hash *h = &hdev->conn_hash; +- struct hci_conn *c, *n; ++ struct list_head *head = &hdev->conn_hash.list; ++ struct hci_conn *conn; + + BT_DBG("hdev %s", hdev->name); + +- list_for_each_entry_safe(c, n, &h->list, list) { +- c->state = BT_CLOSED; +- +- hci_disconn_cfm(c, HCI_ERROR_LOCAL_HOST_TERM); ++ /* We should not traverse the list here, because hci_conn_del ++ * can remove extra links, which may cause the list traversal ++ * to hit items that have already been released. ++ */ ++ while ((conn = list_first_entry_or_null(head, ++ struct hci_conn, ++ list)) != NULL) { ++ conn->state = BT_CLOSED; ++ hci_disconn_cfm(conn, HCI_ERROR_LOCAL_HOST_TERM); + + /* Unlink before deleting otherwise it is possible that + * hci_conn_del removes the link which may cause the list to + * contain items already freed. 
+ */ +- hci_conn_unlink(c); +- hci_conn_del(c); ++ hci_conn_unlink(conn); ++ hci_conn_del(conn); + } + } + +diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c +index 334e308451f53..ca42129f8f91a 100644 +--- a/net/bluetooth/hci_core.c ++++ b/net/bluetooth/hci_core.c +@@ -1416,10 +1416,10 @@ int hci_remove_link_key(struct hci_dev *hdev, bdaddr_t *bdaddr) + + int hci_remove_ltk(struct hci_dev *hdev, bdaddr_t *bdaddr, u8 bdaddr_type) + { +- struct smp_ltk *k; ++ struct smp_ltk *k, *tmp; + int removed = 0; + +- list_for_each_entry_rcu(k, &hdev->long_term_keys, list) { ++ list_for_each_entry_safe(k, tmp, &hdev->long_term_keys, list) { + if (bacmp(bdaddr, &k->bdaddr) || k->bdaddr_type != bdaddr_type) + continue; + +@@ -1435,9 +1435,9 @@ int hci_remove_ltk(struct hci_dev *hdev, bdaddr_t *bdaddr, u8 bdaddr_type) + + void hci_remove_irk(struct hci_dev *hdev, bdaddr_t *bdaddr, u8 addr_type) + { +- struct smp_irk *k; ++ struct smp_irk *k, *tmp; + +- list_for_each_entry_rcu(k, &hdev->identity_resolving_keys, list) { ++ list_for_each_entry_safe(k, tmp, &hdev->identity_resolving_keys, list) { + if (bacmp(bdaddr, &k->bdaddr) || k->addr_type != addr_type) + continue; + +@@ -2685,7 +2685,9 @@ void hci_unregister_dev(struct hci_dev *hdev) + { + BT_DBG("%p name %s bus %d", hdev, hdev->name, hdev->bus); + ++ mutex_lock(&hdev->unregister_lock); + hci_dev_set_flag(hdev, HCI_UNREGISTER); ++ mutex_unlock(&hdev->unregister_lock); + + write_lock(&hci_dev_list_lock); + list_del(&hdev->list); +diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c +index 51f13518dba9b..09ba6d8987ee1 100644 +--- a/net/bluetooth/hci_event.c ++++ b/net/bluetooth/hci_event.c +@@ -1,6 +1,7 @@ + /* + BlueZ - Bluetooth protocol stack for Linux + Copyright (c) 2000-2001, 2010, Code Aurora Forum. All rights reserved. 
++ Copyright 2023 NXP + + Written 2000,2001 by Maxim Krasnyansky + +@@ -2344,7 +2345,8 @@ static void hci_cs_create_conn(struct hci_dev *hdev, __u8 status) + static void hci_cs_add_sco(struct hci_dev *hdev, __u8 status) + { + struct hci_cp_add_sco *cp; +- struct hci_conn *acl, *sco; ++ struct hci_conn *acl; ++ struct hci_link *link; + __u16 handle; + + bt_dev_dbg(hdev, "status 0x%2.2x", status); +@@ -2364,12 +2366,13 @@ static void hci_cs_add_sco(struct hci_dev *hdev, __u8 status) + + acl = hci_conn_hash_lookup_handle(hdev, handle); + if (acl) { +- sco = acl->link; +- if (sco) { +- sco->state = BT_CLOSED; ++ link = list_first_entry_or_null(&acl->link_list, ++ struct hci_link, list); ++ if (link && link->conn) { ++ link->conn->state = BT_CLOSED; + +- hci_connect_cfm(sco, status); +- hci_conn_del(sco); ++ hci_connect_cfm(link->conn, status); ++ hci_conn_del(link->conn); + } + } + +@@ -2636,74 +2639,61 @@ static void hci_cs_read_remote_ext_features(struct hci_dev *hdev, __u8 status) + hci_dev_unlock(hdev); + } + +-static void hci_cs_setup_sync_conn(struct hci_dev *hdev, __u8 status) ++static void hci_setup_sync_conn_status(struct hci_dev *hdev, __u16 handle, ++ __u8 status) + { +- struct hci_cp_setup_sync_conn *cp; +- struct hci_conn *acl, *sco; +- __u16 handle; +- +- bt_dev_dbg(hdev, "status 0x%2.2x", status); +- +- if (!status) +- return; ++ struct hci_conn *acl; ++ struct hci_link *link; + +- cp = hci_sent_cmd_data(hdev, HCI_OP_SETUP_SYNC_CONN); +- if (!cp) +- return; +- +- handle = __le16_to_cpu(cp->handle); +- +- bt_dev_dbg(hdev, "handle 0x%4.4x", handle); ++ bt_dev_dbg(hdev, "handle 0x%4.4x status 0x%2.2x", handle, status); + + hci_dev_lock(hdev); + + acl = hci_conn_hash_lookup_handle(hdev, handle); + if (acl) { +- sco = acl->link; +- if (sco) { +- sco->state = BT_CLOSED; ++ link = list_first_entry_or_null(&acl->link_list, ++ struct hci_link, list); ++ if (link && link->conn) { ++ link->conn->state = BT_CLOSED; + +- hci_connect_cfm(sco, status); +- 
hci_conn_del(sco); ++ hci_connect_cfm(link->conn, status); ++ hci_conn_del(link->conn); + } + } + + hci_dev_unlock(hdev); + } + +-static void hci_cs_enhanced_setup_sync_conn(struct hci_dev *hdev, __u8 status) ++static void hci_cs_setup_sync_conn(struct hci_dev *hdev, __u8 status) + { +- struct hci_cp_enhanced_setup_sync_conn *cp; +- struct hci_conn *acl, *sco; +- __u16 handle; ++ struct hci_cp_setup_sync_conn *cp; + + bt_dev_dbg(hdev, "status 0x%2.2x", status); + + if (!status) + return; + +- cp = hci_sent_cmd_data(hdev, HCI_OP_ENHANCED_SETUP_SYNC_CONN); ++ cp = hci_sent_cmd_data(hdev, HCI_OP_SETUP_SYNC_CONN); + if (!cp) + return; + +- handle = __le16_to_cpu(cp->handle); ++ hci_setup_sync_conn_status(hdev, __le16_to_cpu(cp->handle), status); ++} + +- bt_dev_dbg(hdev, "handle 0x%4.4x", handle); ++static void hci_cs_enhanced_setup_sync_conn(struct hci_dev *hdev, __u8 status) ++{ ++ struct hci_cp_enhanced_setup_sync_conn *cp; + +- hci_dev_lock(hdev); ++ bt_dev_dbg(hdev, "status 0x%2.2x", status); + +- acl = hci_conn_hash_lookup_handle(hdev, handle); +- if (acl) { +- sco = acl->link; +- if (sco) { +- sco->state = BT_CLOSED; ++ if (!status) ++ return; + +- hci_connect_cfm(sco, status); +- hci_conn_del(sco); +- } +- } ++ cp = hci_sent_cmd_data(hdev, HCI_OP_ENHANCED_SETUP_SYNC_CONN); ++ if (!cp) ++ return; + +- hci_dev_unlock(hdev); ++ hci_setup_sync_conn_status(hdev, __le16_to_cpu(cp->handle), status); + } + + static void hci_cs_sniff_mode(struct hci_dev *hdev, __u8 status) +@@ -3814,47 +3804,56 @@ static u8 hci_cc_le_set_cig_params(struct hci_dev *hdev, void *data, + struct sk_buff *skb) + { + struct hci_rp_le_set_cig_params *rp = data; ++ struct hci_cp_le_set_cig_params *cp; + struct hci_conn *conn; +- int i = 0; ++ u8 status = rp->status; ++ int i; + + bt_dev_dbg(hdev, "status 0x%2.2x", rp->status); + ++ cp = hci_sent_cmd_data(hdev, HCI_OP_LE_SET_CIG_PARAMS); ++ if (!cp || rp->num_handles != cp->num_cis || rp->cig_id != cp->cig_id) { ++ bt_dev_err(hdev, "unexpected 
Set CIG Parameters response data"); ++ status = HCI_ERROR_UNSPECIFIED; ++ } ++ + hci_dev_lock(hdev); + +- if (rp->status) { ++ if (status) { + while ((conn = hci_conn_hash_lookup_cig(hdev, rp->cig_id))) { + conn->state = BT_CLOSED; +- hci_connect_cfm(conn, rp->status); ++ hci_connect_cfm(conn, status); + hci_conn_del(conn); + } + goto unlock; + } + +- rcu_read_lock(); ++ /* BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 4, Part E page 2553 ++ * ++ * If the Status return parameter is zero, then the Controller shall ++ * set the Connection_Handle arrayed return parameter to the connection ++ * handle(s) corresponding to the CIS configurations specified in ++ * the CIS_IDs command parameter, in the same order. ++ */ ++ for (i = 0; i < rp->num_handles; ++i) { ++ conn = hci_conn_hash_lookup_cis(hdev, NULL, 0, rp->cig_id, ++ cp->cis[i].cis_id); ++ if (!conn || !bacmp(&conn->dst, BDADDR_ANY)) ++ continue; + +- list_for_each_entry_rcu(conn, &hdev->conn_hash.list, list) { +- if (conn->type != ISO_LINK || conn->iso_qos.cig != rp->cig_id || +- conn->state == BT_CONNECTED) ++ if (conn->state != BT_BOUND && conn->state != BT_CONNECT) + continue; + +- conn->handle = __le16_to_cpu(rp->handle[i++]); ++ conn->handle = __le16_to_cpu(rp->handle[i]); + +- bt_dev_dbg(hdev, "%p handle 0x%4.4x link %p", conn, +- conn->handle, conn->link); ++ bt_dev_dbg(hdev, "%p handle 0x%4.4x parent %p", conn, ++ conn->handle, conn->parent); + + /* Create CIS if LE is already connected */ +- if (conn->link && conn->link->state == BT_CONNECTED) { +- rcu_read_unlock(); +- hci_le_create_cis(conn->link); +- rcu_read_lock(); +- } +- +- if (i == rp->num_handles) +- break; ++ if (conn->parent && conn->parent->state == BT_CONNECTED) ++ hci_le_create_cis(conn); + } + +- rcu_read_unlock(); +- + unlock: + hci_dev_unlock(hdev); + +@@ -3890,7 +3889,7 @@ static u8 hci_cc_le_setup_iso_path(struct hci_dev *hdev, void *data, + /* Input (Host to Controller) */ + case 0x00: + /* Only confirm connection if output only */ 
+- if (conn->iso_qos.out.sdu && !conn->iso_qos.in.sdu) ++ if (conn->iso_qos.ucast.out.sdu && !conn->iso_qos.ucast.in.sdu) + hci_connect_cfm(conn, rp->status); + break; + /* Output (Controller to Host) */ +@@ -5030,7 +5029,7 @@ static void hci_sync_conn_complete_evt(struct hci_dev *hdev, void *data, + if (conn->out) { + conn->pkt_type = (hdev->esco_type & SCO_ESCO_MASK) | + (hdev->esco_type & EDR_ESCO_MASK); +- if (hci_setup_sync(conn, conn->link->handle)) ++ if (hci_setup_sync(conn, conn->parent->handle)) + goto unlock; + } + fallthrough; +@@ -6818,15 +6817,15 @@ static void hci_le_cis_estabilished_evt(struct hci_dev *hdev, void *data, + memset(&interval, 0, sizeof(interval)); + + memcpy(&interval, ev->c_latency, sizeof(ev->c_latency)); +- conn->iso_qos.in.interval = le32_to_cpu(interval); ++ conn->iso_qos.ucast.in.interval = le32_to_cpu(interval); + memcpy(&interval, ev->p_latency, sizeof(ev->p_latency)); +- conn->iso_qos.out.interval = le32_to_cpu(interval); +- conn->iso_qos.in.latency = le16_to_cpu(ev->interval); +- conn->iso_qos.out.latency = le16_to_cpu(ev->interval); +- conn->iso_qos.in.sdu = le16_to_cpu(ev->c_mtu); +- conn->iso_qos.out.sdu = le16_to_cpu(ev->p_mtu); +- conn->iso_qos.in.phy = ev->c_phy; +- conn->iso_qos.out.phy = ev->p_phy; ++ conn->iso_qos.ucast.out.interval = le32_to_cpu(interval); ++ conn->iso_qos.ucast.in.latency = le16_to_cpu(ev->interval); ++ conn->iso_qos.ucast.out.latency = le16_to_cpu(ev->interval); ++ conn->iso_qos.ucast.in.sdu = le16_to_cpu(ev->c_mtu); ++ conn->iso_qos.ucast.out.sdu = le16_to_cpu(ev->p_mtu); ++ conn->iso_qos.ucast.in.phy = ev->c_phy; ++ conn->iso_qos.ucast.out.phy = ev->p_phy; + } + + if (!ev->status) { +@@ -6900,8 +6899,8 @@ static void hci_le_cis_req_evt(struct hci_dev *hdev, void *data, + cis->handle = cis_handle; + } + +- cis->iso_qos.cig = ev->cig_id; +- cis->iso_qos.cis = ev->cis_id; ++ cis->iso_qos.ucast.cig = ev->cig_id; ++ cis->iso_qos.ucast.cis = ev->cis_id; + + if (!(flags & HCI_PROTO_DEFER)) { + 
hci_le_accept_cis(hdev, ev->cis_handle); +@@ -6988,13 +6987,13 @@ static void hci_le_big_sync_established_evt(struct hci_dev *hdev, void *data, + bis->handle = handle; + } + +- bis->iso_qos.big = ev->handle; ++ bis->iso_qos.bcast.big = ev->handle; + memset(&interval, 0, sizeof(interval)); + memcpy(&interval, ev->latency, sizeof(ev->latency)); +- bis->iso_qos.in.interval = le32_to_cpu(interval); ++ bis->iso_qos.bcast.in.interval = le32_to_cpu(interval); + /* Convert ISO Interval (1.25 ms slots) to latency (ms) */ +- bis->iso_qos.in.latency = le16_to_cpu(ev->interval) * 125 / 100; +- bis->iso_qos.in.sdu = le16_to_cpu(ev->max_pdu); ++ bis->iso_qos.bcast.in.latency = le16_to_cpu(ev->interval) * 125 / 100; ++ bis->iso_qos.bcast.in.sdu = le16_to_cpu(ev->max_pdu); + + hci_iso_setup_path(bis); + } +diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c +index b65ee3a32e5d7..7f410f441e82c 100644 +--- a/net/bluetooth/hci_sync.c ++++ b/net/bluetooth/hci_sync.c +@@ -629,6 +629,7 @@ void hci_cmd_sync_init(struct hci_dev *hdev) + INIT_WORK(&hdev->cmd_sync_work, hci_cmd_sync_work); + INIT_LIST_HEAD(&hdev->cmd_sync_work_list); + mutex_init(&hdev->cmd_sync_work_lock); ++ mutex_init(&hdev->unregister_lock); + + INIT_WORK(&hdev->cmd_sync_cancel_work, hci_cmd_sync_cancel_work); + INIT_WORK(&hdev->reenable_adv_work, reenable_adv); +@@ -688,14 +689,19 @@ int hci_cmd_sync_queue(struct hci_dev *hdev, hci_cmd_sync_work_func_t func, + void *data, hci_cmd_sync_work_destroy_t destroy) + { + struct hci_cmd_sync_work_entry *entry; ++ int err = 0; + +- if (hci_dev_test_flag(hdev, HCI_UNREGISTER)) +- return -ENODEV; ++ mutex_lock(&hdev->unregister_lock); ++ if (hci_dev_test_flag(hdev, HCI_UNREGISTER)) { ++ err = -ENODEV; ++ goto unlock; ++ } + + entry = kmalloc(sizeof(*entry), GFP_KERNEL); +- if (!entry) +- return -ENOMEM; +- ++ if (!entry) { ++ err = -ENOMEM; ++ goto unlock; ++ } + entry->func = func; + entry->data = data; + entry->destroy = destroy; +@@ -706,7 +712,9 @@ int 
hci_cmd_sync_queue(struct hci_dev *hdev, hci_cmd_sync_work_func_t func, + + queue_work(hdev->req_workqueue, &hdev->cmd_sync_work); + +- return 0; ++unlock: ++ mutex_unlock(&hdev->unregister_lock); ++ return err; + } + EXPORT_SYMBOL(hci_cmd_sync_queue); + +@@ -4502,6 +4510,9 @@ static int hci_init_sync(struct hci_dev *hdev) + !hci_dev_test_flag(hdev, HCI_CONFIG)) + return 0; + ++ if (hci_dev_test_and_set_flag(hdev, HCI_DEBUGFS_CREATED)) ++ return 0; ++ + hci_debugfs_create_common(hdev); + + if (lmp_bredr_capable(hdev)) +diff --git a/net/bluetooth/iso.c b/net/bluetooth/iso.c +index 8d136a7301630..34d55a85d8f6f 100644 +--- a/net/bluetooth/iso.c ++++ b/net/bluetooth/iso.c +@@ -3,6 +3,7 @@ + * BlueZ - Bluetooth protocol stack for Linux + * + * Copyright (C) 2022 Intel Corporation ++ * Copyright 2023 NXP + */ + + #include +@@ -59,11 +60,17 @@ struct iso_pinfo { + __u16 sync_handle; + __u32 flags; + struct bt_iso_qos qos; ++ bool qos_user_set; + __u8 base_len; + __u8 base[BASE_MAX_LENGTH]; + struct iso_conn *conn; + }; + ++static struct bt_iso_qos default_qos; ++ ++static bool check_ucast_qos(struct bt_iso_qos *qos); ++static bool check_bcast_qos(struct bt_iso_qos *qos); ++ + /* ---- ISO timers ---- */ + #define ISO_CONN_TIMEOUT (HZ * 40) + #define ISO_DISCONN_TIMEOUT (HZ * 2) +@@ -264,8 +271,15 @@ static int iso_connect_bis(struct sock *sk) + goto unlock; + } + ++ /* Fail if user set invalid QoS */ ++ if (iso_pi(sk)->qos_user_set && !check_bcast_qos(&iso_pi(sk)->qos)) { ++ iso_pi(sk)->qos = default_qos; ++ err = -EINVAL; ++ goto unlock; ++ } ++ + /* Fail if out PHYs are marked as disabled */ +- if (!iso_pi(sk)->qos.out.phy) { ++ if (!iso_pi(sk)->qos.bcast.out.phy) { + err = -EINVAL; + goto unlock; + } +@@ -336,8 +350,15 @@ static int iso_connect_cis(struct sock *sk) + goto unlock; + } + ++ /* Fail if user set invalid QoS */ ++ if (iso_pi(sk)->qos_user_set && !check_ucast_qos(&iso_pi(sk)->qos)) { ++ iso_pi(sk)->qos = default_qos; ++ err = -EINVAL; ++ goto unlock; ++ } ++ 
+ /* Fail if either PHYs are marked as disabled */ +- if (!iso_pi(sk)->qos.in.phy && !iso_pi(sk)->qos.out.phy) { ++ if (!iso_pi(sk)->qos.ucast.in.phy && !iso_pi(sk)->qos.ucast.out.phy) { + err = -EINVAL; + goto unlock; + } +@@ -417,7 +438,7 @@ static int iso_send_frame(struct sock *sk, struct sk_buff *skb) + + BT_DBG("sk %p len %d", sk, skb->len); + +- if (skb->len > qos->out.sdu) ++ if (skb->len > qos->ucast.out.sdu) + return -EMSGSIZE; + + len = skb->len; +@@ -680,13 +701,23 @@ static struct proto iso_proto = { + } + + static struct bt_iso_qos default_qos = { +- .cig = BT_ISO_QOS_CIG_UNSET, +- .cis = BT_ISO_QOS_CIS_UNSET, +- .sca = 0x00, +- .packing = 0x00, +- .framing = 0x00, +- .in = DEFAULT_IO_QOS, +- .out = DEFAULT_IO_QOS, ++ .bcast = { ++ .big = BT_ISO_QOS_BIG_UNSET, ++ .bis = BT_ISO_QOS_BIS_UNSET, ++ .sync_interval = 0x00, ++ .packing = 0x00, ++ .framing = 0x00, ++ .in = DEFAULT_IO_QOS, ++ .out = DEFAULT_IO_QOS, ++ .encryption = 0x00, ++ .bcode = {0x00}, ++ .options = 0x00, ++ .skip = 0x0000, ++ .sync_timeout = 0x4000, ++ .sync_cte_type = 0x00, ++ .mse = 0x00, ++ .timeout = 0x4000, ++ }, + }; + + static struct sock *iso_sock_alloc(struct net *net, struct socket *sock, +@@ -893,9 +924,15 @@ static int iso_listen_bis(struct sock *sk) + if (!hdev) + return -EHOSTUNREACH; + ++ /* Fail if user set invalid QoS */ ++ if (iso_pi(sk)->qos_user_set && !check_bcast_qos(&iso_pi(sk)->qos)) { ++ iso_pi(sk)->qos = default_qos; ++ return -EINVAL; ++ } ++ + err = hci_pa_create_sync(hdev, &iso_pi(sk)->dst, + le_addr_type(iso_pi(sk)->dst_type), +- iso_pi(sk)->bc_sid); ++ iso_pi(sk)->bc_sid, &iso_pi(sk)->qos); + + hci_dev_put(hdev); + +@@ -1154,21 +1191,62 @@ static bool check_io_qos(struct bt_iso_io_qos *qos) + return true; + } + +-static bool check_qos(struct bt_iso_qos *qos) ++static bool check_ucast_qos(struct bt_iso_qos *qos) + { +- if (qos->sca > 0x07) ++ if (qos->ucast.sca > 0x07) + return false; + +- if (qos->packing > 0x01) ++ if (qos->ucast.packing > 0x01) + return 
false; + +- if (qos->framing > 0x01) ++ if (qos->ucast.framing > 0x01) + return false; + +- if (!check_io_qos(&qos->in)) ++ if (!check_io_qos(&qos->ucast.in)) + return false; + +- if (!check_io_qos(&qos->out)) ++ if (!check_io_qos(&qos->ucast.out)) ++ return false; ++ ++ return true; ++} ++ ++static bool check_bcast_qos(struct bt_iso_qos *qos) ++{ ++ if (qos->bcast.sync_interval > 0x07) ++ return false; ++ ++ if (qos->bcast.packing > 0x01) ++ return false; ++ ++ if (qos->bcast.framing > 0x01) ++ return false; ++ ++ if (!check_io_qos(&qos->bcast.in)) ++ return false; ++ ++ if (!check_io_qos(&qos->bcast.out)) ++ return false; ++ ++ if (qos->bcast.encryption > 0x01) ++ return false; ++ ++ if (qos->bcast.options > 0x07) ++ return false; ++ ++ if (qos->bcast.skip > 0x01f3) ++ return false; ++ ++ if (qos->bcast.sync_timeout < 0x000a || qos->bcast.sync_timeout > 0x4000) ++ return false; ++ ++ if (qos->bcast.sync_cte_type > 0x1f) ++ return false; ++ ++ if (qos->bcast.mse > 0x1f) ++ return false; ++ ++ if (qos->bcast.timeout < 0x000a || qos->bcast.timeout > 0x4000) + return false; + + return true; +@@ -1179,7 +1257,7 @@ static int iso_sock_setsockopt(struct socket *sock, int level, int optname, + { + struct sock *sk = sock->sk; + int len, err = 0; +- struct bt_iso_qos qos; ++ struct bt_iso_qos qos = default_qos; + u32 opt; + + BT_DBG("sk %p", sk); +@@ -1212,24 +1290,19 @@ static int iso_sock_setsockopt(struct socket *sock, int level, int optname, + } + + len = min_t(unsigned int, sizeof(qos), optlen); +- if (len != sizeof(qos)) { +- err = -EINVAL; +- break; +- } +- +- memset(&qos, 0, sizeof(qos)); + + if (copy_from_sockptr(&qos, optval, len)) { + err = -EFAULT; + break; + } + +- if (!check_qos(&qos)) { ++ if (len == sizeof(qos.ucast) && !check_ucast_qos(&qos)) { + err = -EINVAL; + break; + } + + iso_pi(sk)->qos = qos; ++ iso_pi(sk)->qos_user_set = true; + + break; + +@@ -1419,7 +1492,7 @@ static bool iso_match_big(struct sock *sk, void *data) + { + struct 
hci_evt_le_big_sync_estabilished *ev = data; + +- return ev->handle == iso_pi(sk)->qos.big; ++ return ev->handle == iso_pi(sk)->qos.bcast.big; + } + + static void iso_conn_ready(struct iso_conn *conn) +@@ -1584,8 +1657,12 @@ static void iso_connect_cfm(struct hci_conn *hcon, __u8 status) + + /* Check if LE link has failed */ + if (status) { +- if (hcon->link) +- iso_conn_del(hcon->link, bt_to_errno(status)); ++ struct hci_link *link, *t; ++ ++ list_for_each_entry_safe(link, t, &hcon->link_list, ++ list) ++ iso_conn_del(link->conn, bt_to_errno(status)); ++ + return; + } + +diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c +index 24d075282996c..5678218a19607 100644 +--- a/net/bluetooth/l2cap_core.c ++++ b/net/bluetooth/l2cap_core.c +@@ -4307,6 +4307,10 @@ static int l2cap_connect_create_rsp(struct l2cap_conn *conn, + result = __le16_to_cpu(rsp->result); + status = __le16_to_cpu(rsp->status); + ++ if (result == L2CAP_CR_SUCCESS && (dcid < L2CAP_CID_DYN_START || ++ dcid > L2CAP_CID_DYN_END)) ++ return -EPROTO; ++ + BT_DBG("dcid 0x%4.4x scid 0x%4.4x result 0x%2.2x status 0x%2.2x", + dcid, scid, result, status); + +@@ -4338,6 +4342,11 @@ static int l2cap_connect_create_rsp(struct l2cap_conn *conn, + + switch (result) { + case L2CAP_CR_SUCCESS: ++ if (__l2cap_get_chan_by_dcid(conn, dcid)) { ++ err = -EBADSLT; ++ break; ++ } ++ + l2cap_state_change(chan, BT_CONFIG); + chan->ident = 0; + chan->dcid = dcid; +@@ -4664,7 +4673,9 @@ static inline int l2cap_disconnect_req(struct l2cap_conn *conn, + + chan->ops->set_shutdown(chan); + ++ l2cap_chan_unlock(chan); + mutex_lock(&conn->chan_lock); ++ l2cap_chan_lock(chan); + l2cap_chan_del(chan, ECONNRESET); + mutex_unlock(&conn->chan_lock); + +@@ -4703,7 +4714,9 @@ static inline int l2cap_disconnect_rsp(struct l2cap_conn *conn, + return 0; + } + ++ l2cap_chan_unlock(chan); + mutex_lock(&conn->chan_lock); ++ l2cap_chan_lock(chan); + l2cap_chan_del(chan, 0); + mutex_unlock(&conn->chan_lock); + +diff --git 
a/net/can/j1939/main.c b/net/can/j1939/main.c +index 821d4ff303b35..ecff1c947d683 100644 +--- a/net/can/j1939/main.c ++++ b/net/can/j1939/main.c +@@ -126,7 +126,7 @@ static void j1939_can_recv(struct sk_buff *iskb, void *data) + #define J1939_CAN_ID CAN_EFF_FLAG + #define J1939_CAN_MASK (CAN_EFF_FLAG | CAN_RTR_FLAG) + +-static DEFINE_SPINLOCK(j1939_netdev_lock); ++static DEFINE_MUTEX(j1939_netdev_lock); + + static struct j1939_priv *j1939_priv_create(struct net_device *ndev) + { +@@ -220,7 +220,7 @@ static void __j1939_rx_release(struct kref *kref) + j1939_can_rx_unregister(priv); + j1939_ecu_unmap_all(priv); + j1939_priv_set(priv->ndev, NULL); +- spin_unlock(&j1939_netdev_lock); ++ mutex_unlock(&j1939_netdev_lock); + } + + /* get pointer to priv without increasing ref counter */ +@@ -248,9 +248,9 @@ static struct j1939_priv *j1939_priv_get_by_ndev(struct net_device *ndev) + { + struct j1939_priv *priv; + +- spin_lock(&j1939_netdev_lock); ++ mutex_lock(&j1939_netdev_lock); + priv = j1939_priv_get_by_ndev_locked(ndev); +- spin_unlock(&j1939_netdev_lock); ++ mutex_unlock(&j1939_netdev_lock); + + return priv; + } +@@ -260,14 +260,14 @@ struct j1939_priv *j1939_netdev_start(struct net_device *ndev) + struct j1939_priv *priv, *priv_new; + int ret; + +- spin_lock(&j1939_netdev_lock); ++ mutex_lock(&j1939_netdev_lock); + priv = j1939_priv_get_by_ndev_locked(ndev); + if (priv) { + kref_get(&priv->rx_kref); +- spin_unlock(&j1939_netdev_lock); ++ mutex_unlock(&j1939_netdev_lock); + return priv; + } +- spin_unlock(&j1939_netdev_lock); ++ mutex_unlock(&j1939_netdev_lock); + + priv = j1939_priv_create(ndev); + if (!priv) +@@ -277,29 +277,31 @@ struct j1939_priv *j1939_netdev_start(struct net_device *ndev) + spin_lock_init(&priv->j1939_socks_lock); + INIT_LIST_HEAD(&priv->j1939_socks); + +- spin_lock(&j1939_netdev_lock); ++ mutex_lock(&j1939_netdev_lock); + priv_new = j1939_priv_get_by_ndev_locked(ndev); + if (priv_new) { + /* Someone was faster than us, use their priv and roll 
+ * back our's. + */ + kref_get(&priv_new->rx_kref); +- spin_unlock(&j1939_netdev_lock); ++ mutex_unlock(&j1939_netdev_lock); + dev_put(ndev); + kfree(priv); + return priv_new; + } + j1939_priv_set(ndev, priv); +- spin_unlock(&j1939_netdev_lock); + + ret = j1939_can_rx_register(priv); + if (ret < 0) + goto out_priv_put; + ++ mutex_unlock(&j1939_netdev_lock); + return priv; + + out_priv_put: + j1939_priv_set(ndev, NULL); ++ mutex_unlock(&j1939_netdev_lock); ++ + dev_put(ndev); + kfree(priv); + +@@ -308,7 +310,7 @@ struct j1939_priv *j1939_netdev_start(struct net_device *ndev) + + void j1939_netdev_stop(struct j1939_priv *priv) + { +- kref_put_lock(&priv->rx_kref, __j1939_rx_release, &j1939_netdev_lock); ++ kref_put_mutex(&priv->rx_kref, __j1939_rx_release, &j1939_netdev_lock); + j1939_priv_put(priv); + } + +diff --git a/net/can/j1939/socket.c b/net/can/j1939/socket.c +index 1790469b25808..35970c25496ab 100644 +--- a/net/can/j1939/socket.c ++++ b/net/can/j1939/socket.c +@@ -1088,6 +1088,11 @@ void j1939_sk_errqueue(struct j1939_session *session, + + void j1939_sk_send_loop_abort(struct sock *sk, int err) + { ++ struct j1939_sock *jsk = j1939_sk(sk); ++ ++ if (jsk->state & J1939_SOCK_ERRQUEUE) ++ return; ++ + sk->sk_err = err; + + sk_error_report(sk); +diff --git a/net/core/dev.c b/net/core/dev.c +index b3d8e74fcaf06..bcb654fd519bd 100644 +--- a/net/core/dev.c ++++ b/net/core/dev.c +@@ -4471,8 +4471,10 @@ static int get_rps_cpu(struct net_device *dev, struct sk_buff *skb, + u32 next_cpu; + u32 ident; + +- /* First check into global flow table if there is a match */ +- ident = sock_flow_table->ents[hash & sock_flow_table->mask]; ++ /* First check into global flow table if there is a match. ++ * This READ_ONCE() pairs with WRITE_ONCE() from rps_record_sock_flow(). 
++ */ ++ ident = READ_ONCE(sock_flow_table->ents[hash & sock_flow_table->mask]); + if ((ident ^ hash) & ~rps_cpu_mask) + goto try_rps; + +@@ -10505,7 +10507,7 @@ struct netdev_queue *dev_ingress_queue_create(struct net_device *dev) + return NULL; + netdev_init_one_queue(dev, queue, NULL); + RCU_INIT_POINTER(queue->qdisc, &noop_qdisc); +- queue->qdisc_sleeping = &noop_qdisc; ++ RCU_INIT_POINTER(queue->qdisc_sleeping, &noop_qdisc); + rcu_assign_pointer(dev->ingress_queue, queue); + #endif + return queue; +diff --git a/net/core/skmsg.c b/net/core/skmsg.c +index a9060e1f0e437..a29508e1ff356 100644 +--- a/net/core/skmsg.c ++++ b/net/core/skmsg.c +@@ -1210,7 +1210,8 @@ static void sk_psock_verdict_data_ready(struct sock *sk) + + rcu_read_lock(); + psock = sk_psock(sk); +- psock->saved_data_ready(sk); ++ if (psock) ++ psock->saved_data_ready(sk); + rcu_read_unlock(); + } + } +diff --git a/net/ipv4/sysctl_net_ipv4.c b/net/ipv4/sysctl_net_ipv4.c +index 40fe70fc2015d..88dfe51e68f3c 100644 +--- a/net/ipv4/sysctl_net_ipv4.c ++++ b/net/ipv4/sysctl_net_ipv4.c +@@ -34,8 +34,8 @@ static int ip_ttl_min = 1; + static int ip_ttl_max = 255; + static int tcp_syn_retries_min = 1; + static int tcp_syn_retries_max = MAX_TCP_SYNCNT; +-static int ip_ping_group_range_min[] = { 0, 0 }; +-static int ip_ping_group_range_max[] = { GID_T_MAX, GID_T_MAX }; ++static unsigned long ip_ping_group_range_min[] = { 0, 0 }; ++static unsigned long ip_ping_group_range_max[] = { GID_T_MAX, GID_T_MAX }; + static u32 u32_max_div_HZ = UINT_MAX / HZ; + static int one_day_secs = 24 * 3600; + static u32 fib_multipath_hash_fields_all_mask __maybe_unused = +@@ -165,7 +165,7 @@ static int ipv4_ping_group_range(struct ctl_table *table, int write, + { + struct user_namespace *user_ns = current_user_ns(); + int ret; +- gid_t urange[2]; ++ unsigned long urange[2]; + kgid_t low, high; + struct ctl_table tmp = { + .data = &urange, +@@ -178,7 +178,7 @@ static int ipv4_ping_group_range(struct ctl_table *table, int write, + 
inet_get_ping_group_range_table(table, &low, &high); + urange[0] = from_kgid_munged(user_ns, low); + urange[1] = from_kgid_munged(user_ns, high); +- ret = proc_dointvec_minmax(&tmp, write, buffer, lenp, ppos); ++ ret = proc_doulongvec_minmax(&tmp, write, buffer, lenp, ppos); + + if (write && ret == 0) { + low = make_kgid(user_ns, urange[0]); +diff --git a/net/ipv4/tcp_offload.c b/net/ipv4/tcp_offload.c +index 45dda78893870..4851211aa60d6 100644 +--- a/net/ipv4/tcp_offload.c ++++ b/net/ipv4/tcp_offload.c +@@ -60,12 +60,12 @@ struct sk_buff *tcp_gso_segment(struct sk_buff *skb, + struct tcphdr *th; + unsigned int thlen; + unsigned int seq; +- __be32 delta; + unsigned int oldlen; + unsigned int mss; + struct sk_buff *gso_skb = skb; + __sum16 newcheck; + bool ooo_okay, copy_destructor; ++ __wsum delta; + + th = tcp_hdr(skb); + thlen = th->doff * 4; +@@ -75,7 +75,7 @@ struct sk_buff *tcp_gso_segment(struct sk_buff *skb, + if (!pskb_may_pull(skb, thlen)) + goto out; + +- oldlen = (u16)~skb->len; ++ oldlen = ~skb->len; + __skb_pull(skb, thlen); + + mss = skb_shinfo(skb)->gso_size; +@@ -110,7 +110,7 @@ struct sk_buff *tcp_gso_segment(struct sk_buff *skb, + if (skb_is_gso(segs)) + mss *= skb_shinfo(segs)->gso_segs; + +- delta = htonl(oldlen + (thlen + mss)); ++ delta = (__force __wsum)htonl(oldlen + thlen + mss); + + skb = segs; + th = tcp_hdr(skb); +@@ -119,8 +119,7 @@ struct sk_buff *tcp_gso_segment(struct sk_buff *skb, + if (unlikely(skb_shinfo(gso_skb)->tx_flags & SKBTX_SW_TSTAMP)) + tcp_gso_tstamp(segs, skb_shinfo(gso_skb)->tskey, seq, mss); + +- newcheck = ~csum_fold((__force __wsum)((__force u32)th->check + +- (__force u32)delta)); ++ newcheck = ~csum_fold(csum_add(csum_unfold(th->check), delta)); + + while (skb->next) { + th->fin = th->psh = 0; +@@ -165,11 +164,11 @@ struct sk_buff *tcp_gso_segment(struct sk_buff *skb, + WARN_ON_ONCE(refcount_sub_and_test(-delta, &skb->sk->sk_wmem_alloc)); + } + +- delta = htonl(oldlen + (skb_tail_pointer(skb) - +- 
skb_transport_header(skb)) + +- skb->data_len); +- th->check = ~csum_fold((__force __wsum)((__force u32)th->check + +- (__force u32)delta)); ++ delta = (__force __wsum)htonl(oldlen + ++ (skb_tail_pointer(skb) - ++ skb_transport_header(skb)) + ++ skb->data_len); ++ th->check = ~csum_fold(csum_add(csum_unfold(th->check), delta)); + if (skb->ip_summed == CHECKSUM_PARTIAL) + gso_reset_checksum(skb, ~th->check); + else +diff --git a/net/ipv6/exthdrs.c b/net/ipv6/exthdrs.c +index a8d961d3a477f..5fa0e37305d9d 100644 +--- a/net/ipv6/exthdrs.c ++++ b/net/ipv6/exthdrs.c +@@ -569,24 +569,6 @@ looped_back: + return -1; + } + +- if (skb_cloned(skb)) { +- if (pskb_expand_head(skb, IPV6_RPL_SRH_WORST_SWAP_SIZE, 0, +- GFP_ATOMIC)) { +- __IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)), +- IPSTATS_MIB_OUTDISCARDS); +- kfree_skb(skb); +- return -1; +- } +- } else { +- err = skb_cow_head(skb, IPV6_RPL_SRH_WORST_SWAP_SIZE); +- if (unlikely(err)) { +- kfree_skb(skb); +- return -1; +- } +- } +- +- hdr = (struct ipv6_rpl_sr_hdr *)skb_transport_header(skb); +- + if (!pskb_may_pull(skb, ipv6_rpl_srh_size(n, hdr->cmpri, + hdr->cmpre))) { + kfree_skb(skb); +@@ -630,6 +612,17 @@ looped_back: + skb_pull(skb, ((hdr->hdrlen + 1) << 3)); + skb_postpull_rcsum(skb, oldhdr, + sizeof(struct ipv6hdr) + ((hdr->hdrlen + 1) << 3)); ++ if (unlikely(!hdr->segments_left)) { ++ if (pskb_expand_head(skb, sizeof(struct ipv6hdr) + ((chdr->hdrlen + 1) << 3), 0, ++ GFP_ATOMIC)) { ++ __IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)), IPSTATS_MIB_OUTDISCARDS); ++ kfree_skb(skb); ++ kfree(buf); ++ return -1; ++ } ++ ++ oldhdr = ipv6_hdr(skb); ++ } + skb_push(skb, ((chdr->hdrlen + 1) << 3) + sizeof(struct ipv6hdr)); + skb_reset_network_header(skb); + skb_mac_header_rebuild(skb); +diff --git a/net/mac80211/he.c b/net/mac80211/he.c +index 729f261520c77..0322abae08250 100644 +--- a/net/mac80211/he.c ++++ b/net/mac80211/he.c +@@ -3,7 +3,7 @@ + * HE handling + * + * Copyright(c) 2017 Intel Deutschland GmbH +- * Copyright(c) 
2019 - 2022 Intel Corporation ++ * Copyright(c) 2019 - 2023 Intel Corporation + */ + + #include "ieee80211_i.h" +@@ -114,6 +114,7 @@ ieee80211_he_cap_ie_to_sta_he_cap(struct ieee80211_sub_if_data *sdata, + struct link_sta_info *link_sta) + { + struct ieee80211_sta_he_cap *he_cap = &link_sta->pub->he_cap; ++ const struct ieee80211_sta_he_cap *own_he_cap_ptr; + struct ieee80211_sta_he_cap own_he_cap; + struct ieee80211_he_cap_elem *he_cap_ie_elem = (void *)he_cap_ie; + u8 he_ppe_size; +@@ -123,12 +124,16 @@ ieee80211_he_cap_ie_to_sta_he_cap(struct ieee80211_sub_if_data *sdata, + + memset(he_cap, 0, sizeof(*he_cap)); + +- if (!he_cap_ie || +- !ieee80211_get_he_iftype_cap(sband, +- ieee80211_vif_type_p2p(&sdata->vif))) ++ if (!he_cap_ie) + return; + +- own_he_cap = sband->iftype_data->he_cap; ++ own_he_cap_ptr = ++ ieee80211_get_he_iftype_cap(sband, ++ ieee80211_vif_type_p2p(&sdata->vif)); ++ if (!own_he_cap_ptr) ++ return; ++ ++ own_he_cap = *own_he_cap_ptr; + + /* Make sure size is OK */ + mcs_nss_size = ieee80211_he_mcs_nss_size(he_cap_ie_elem); +diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c +index 60792dfabc9d6..7a970b6dda640 100644 +--- a/net/mac80211/mlme.c ++++ b/net/mac80211/mlme.c +@@ -1217,6 +1217,7 @@ static void ieee80211_add_non_inheritance_elem(struct sk_buff *skb, + const u16 *inner) + { + unsigned int skb_len = skb->len; ++ bool at_extension = false; + bool added = false; + int i, j; + u8 *len, *list_len = NULL; +@@ -1228,7 +1229,6 @@ static void ieee80211_add_non_inheritance_elem(struct sk_buff *skb, + for (i = 0; i < PRESENT_ELEMS_MAX && outer[i]; i++) { + u16 elem = outer[i]; + bool have_inner = false; +- bool at_extension = false; + + /* should at least be sorted in the sense of normal -> ext */ + WARN_ON(at_extension && elem < PRESENT_ELEM_EXT_OFFS); +@@ -1257,8 +1257,14 @@ static void ieee80211_add_non_inheritance_elem(struct sk_buff *skb, + } + *list_len += 1; + skb_put_u8(skb, (u8)elem); ++ added = true; + } + ++ /* if we added a list 
but no extension list, make a zero-len one */ ++ if (added && (!at_extension || !list_len)) ++ skb_put_u8(skb, 0); ++ ++ /* if nothing added remove extension element completely */ + if (!added) + skb_trim(skb, skb_len); + else +diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c +index af57616d2f1d9..0e66ece35f8e2 100644 +--- a/net/mac80211/rx.c ++++ b/net/mac80211/rx.c +@@ -4884,7 +4884,9 @@ static bool ieee80211_prepare_and_rx_handle(struct ieee80211_rx_data *rx, + } + + if (unlikely(rx->sta && rx->sta->sta.mlo) && +- is_unicast_ether_addr(hdr->addr1)) { ++ is_unicast_ether_addr(hdr->addr1) && ++ !ieee80211_is_probe_resp(hdr->frame_control) && ++ !ieee80211_is_beacon(hdr->frame_control)) { + /* translate to MLD addresses */ + if (ether_addr_equal(link->conf->addr, hdr->addr1)) + ether_addr_copy(hdr->addr1, rx->sdata->vif.addr); +diff --git a/net/mptcp/pm.c b/net/mptcp/pm.c +index 70f0ced3ca86e..10c288a0cb0c2 100644 +--- a/net/mptcp/pm.c ++++ b/net/mptcp/pm.c +@@ -87,8 +87,15 @@ bool mptcp_pm_allow_new_subflow(struct mptcp_sock *msk) + unsigned int subflows_max; + int ret = 0; + +- if (mptcp_pm_is_userspace(msk)) +- return mptcp_userspace_pm_active(msk); ++ if (mptcp_pm_is_userspace(msk)) { ++ if (mptcp_userspace_pm_active(msk)) { ++ spin_lock_bh(&pm->lock); ++ pm->subflows++; ++ spin_unlock_bh(&pm->lock); ++ return true; ++ } ++ return false; ++ } + + subflows_max = mptcp_pm_get_subflows_max(msk); + +@@ -181,8 +188,16 @@ void mptcp_pm_subflow_check_next(struct mptcp_sock *msk, const struct sock *ssk, + struct mptcp_pm_data *pm = &msk->pm; + bool update_subflows; + +- update_subflows = (subflow->request_join || subflow->mp_join) && +- mptcp_pm_is_kernel(msk); ++ update_subflows = subflow->request_join || subflow->mp_join; ++ if (mptcp_pm_is_userspace(msk)) { ++ if (update_subflows) { ++ spin_lock_bh(&pm->lock); ++ pm->subflows--; ++ spin_unlock_bh(&pm->lock); ++ } ++ return; ++ } ++ + if (!READ_ONCE(pm->work_pending) && !update_subflows) + return; + +diff --git 
a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c +index 5c8dea49626c3..5396907cc5963 100644 +--- a/net/mptcp/pm_netlink.c ++++ b/net/mptcp/pm_netlink.c +@@ -1558,6 +1558,24 @@ static int mptcp_nl_cmd_del_addr(struct sk_buff *skb, struct genl_info *info) + return ret; + } + ++void mptcp_pm_remove_addrs(struct mptcp_sock *msk, struct list_head *rm_list) ++{ ++ struct mptcp_rm_list alist = { .nr = 0 }; ++ struct mptcp_pm_addr_entry *entry; ++ ++ list_for_each_entry(entry, rm_list, list) { ++ remove_anno_list_by_saddr(msk, &entry->addr); ++ if (alist.nr < MPTCP_RM_IDS_MAX) ++ alist.ids[alist.nr++] = entry->addr.id; ++ } ++ ++ if (alist.nr) { ++ spin_lock_bh(&msk->pm.lock); ++ mptcp_pm_remove_addr(msk, &alist); ++ spin_unlock_bh(&msk->pm.lock); ++ } ++} ++ + void mptcp_pm_remove_addrs_and_subflows(struct mptcp_sock *msk, + struct list_head *rm_list) + { +diff --git a/net/mptcp/pm_userspace.c b/net/mptcp/pm_userspace.c +index a02d3cbf2a1b6..c3e1bb0ac3d5d 100644 +--- a/net/mptcp/pm_userspace.c ++++ b/net/mptcp/pm_userspace.c +@@ -69,6 +69,7 @@ int mptcp_userspace_pm_append_new_local_addr(struct mptcp_sock *msk, + MPTCP_PM_MAX_ADDR_ID + 1, + 1); + list_add_tail_rcu(&e->list, &msk->pm.userspace_pm_local_addr_list); ++ msk->pm.local_addr_used++; + ret = e->addr.id; + } else if (match) { + ret = entry->addr.id; +@@ -79,6 +80,31 @@ append_err: + return ret; + } + ++/* If the subflow is closed from the other peer (not via a ++ * subflow destroy command then), we want to keep the entry ++ * not to assign the same ID to another address and to be ++ * able to send RM_ADDR after the removal of the subflow. 
++ */ ++static int mptcp_userspace_pm_delete_local_addr(struct mptcp_sock *msk, ++ struct mptcp_pm_addr_entry *addr) ++{ ++ struct mptcp_pm_addr_entry *entry, *tmp; ++ ++ list_for_each_entry_safe(entry, tmp, &msk->pm.userspace_pm_local_addr_list, list) { ++ if (mptcp_addresses_equal(&entry->addr, &addr->addr, false)) { ++ /* TODO: a refcount is needed because the entry can ++ * be used multiple times (e.g. fullmesh mode). ++ */ ++ list_del_rcu(&entry->list); ++ kfree(entry); ++ msk->pm.local_addr_used--; ++ return 0; ++ } ++ } ++ ++ return -EINVAL; ++} ++ + int mptcp_userspace_pm_get_flags_and_ifindex_by_id(struct mptcp_sock *msk, + unsigned int id, + u8 *flags, int *ifindex) +@@ -171,6 +197,7 @@ int mptcp_nl_cmd_announce(struct sk_buff *skb, struct genl_info *info) + spin_lock_bh(&msk->pm.lock); + + if (mptcp_pm_alloc_anno_list(msk, &addr_val)) { ++ msk->pm.add_addr_signaled++; + mptcp_pm_announce_addr(msk, &addr_val.addr, false); + mptcp_pm_nl_addr_send_ack(msk); + } +@@ -232,7 +259,7 @@ int mptcp_nl_cmd_remove(struct sk_buff *skb, struct genl_info *info) + + list_move(&match->list, &free_list); + +- mptcp_pm_remove_addrs_and_subflows(msk, &free_list); ++ mptcp_pm_remove_addrs(msk, &free_list); + + release_sock((struct sock *)msk); + +@@ -251,6 +278,7 @@ int mptcp_nl_cmd_sf_create(struct sk_buff *skb, struct genl_info *info) + struct nlattr *raddr = info->attrs[MPTCP_PM_ATTR_ADDR_REMOTE]; + struct nlattr *token = info->attrs[MPTCP_PM_ATTR_TOKEN]; + struct nlattr *laddr = info->attrs[MPTCP_PM_ATTR_ADDR]; ++ struct mptcp_pm_addr_entry local = { 0 }; + struct mptcp_addr_info addr_r; + struct mptcp_addr_info addr_l; + struct mptcp_sock *msk; +@@ -302,12 +330,26 @@ int mptcp_nl_cmd_sf_create(struct sk_buff *skb, struct genl_info *info) + goto create_err; + } + ++ local.addr = addr_l; ++ err = mptcp_userspace_pm_append_new_local_addr(msk, &local); ++ if (err < 0) { ++ GENL_SET_ERR_MSG(info, "did not match address and id"); ++ goto create_err; ++ } ++ + lock_sock(sk); + 
+ err = __mptcp_subflow_connect(sk, &addr_l, &addr_r); + + release_sock(sk); + ++ spin_lock_bh(&msk->pm.lock); ++ if (err) ++ mptcp_userspace_pm_delete_local_addr(msk, &local); ++ else ++ msk->pm.subflows++; ++ spin_unlock_bh(&msk->pm.lock); ++ + create_err: + sock_put((struct sock *)msk); + return err; +@@ -420,7 +462,11 @@ int mptcp_nl_cmd_sf_destroy(struct sk_buff *skb, struct genl_info *info) + ssk = mptcp_nl_find_ssk(msk, &addr_l, &addr_r); + if (ssk) { + struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk); ++ struct mptcp_pm_addr_entry entry = { .addr = addr_l }; + ++ spin_lock_bh(&msk->pm.lock); ++ mptcp_userspace_pm_delete_local_addr(msk, &entry); ++ spin_unlock_bh(&msk->pm.lock); + mptcp_subflow_shutdown(sk, ssk, RCV_SHUTDOWN | SEND_SHUTDOWN); + mptcp_close_ssk(sk, ssk, subflow); + MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_RMSUBFLOW); +diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h +index 9c4860cf18a97..554e676fa619c 100644 +--- a/net/mptcp/protocol.h ++++ b/net/mptcp/protocol.h +@@ -835,6 +835,7 @@ int mptcp_pm_announce_addr(struct mptcp_sock *msk, + bool echo); + int mptcp_pm_remove_addr(struct mptcp_sock *msk, const struct mptcp_rm_list *rm_list); + int mptcp_pm_remove_subflow(struct mptcp_sock *msk, const struct mptcp_rm_list *rm_list); ++void mptcp_pm_remove_addrs(struct mptcp_sock *msk, struct list_head *rm_list); + void mptcp_pm_remove_addrs_and_subflows(struct mptcp_sock *msk, + struct list_head *rm_list); + +diff --git a/net/netfilter/ipset/ip_set_core.c b/net/netfilter/ipset/ip_set_core.c +index 46ebee9400dab..9a6b64779e644 100644 +--- a/net/netfilter/ipset/ip_set_core.c ++++ b/net/netfilter/ipset/ip_set_core.c +@@ -1694,6 +1694,14 @@ call_ad(struct net *net, struct sock *ctnl, struct sk_buff *skb, + bool eexist = flags & IPSET_FLAG_EXIST, retried = false; + + do { ++ if (retried) { ++ __ip_set_get(set); ++ nfnl_unlock(NFNL_SUBSYS_IPSET); ++ cond_resched(); ++ nfnl_lock(NFNL_SUBSYS_IPSET); ++ __ip_set_put(set); ++ } ++ + 
ip_set_lock(set); + ret = set->variant->uadt(set, tb, adt, &lineno, flags, retried); + ip_set_unlock(set); +diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c +index 7ba6ab9b54b56..06582f0a5393c 100644 +--- a/net/netfilter/nf_conntrack_core.c ++++ b/net/netfilter/nf_conntrack_core.c +@@ -2260,6 +2260,9 @@ static int nf_confirm_cthelper(struct sk_buff *skb, struct nf_conn *ct, + return 0; + + helper = rcu_dereference(help->helper); ++ if (!helper) ++ return 0; ++ + if (!(helper->flags & NF_CT_HELPER_F_USERSPACE)) + return 0; + +diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c +index ef80504c3ccd2..368aeabd8f8f1 100644 +--- a/net/netfilter/nf_tables_api.c ++++ b/net/netfilter/nf_tables_api.c +@@ -1593,6 +1593,8 @@ static int nft_dump_basechain_hook(struct sk_buff *skb, int family, + + if (nft_base_chain_netdev(family, ops->hooknum)) { + nest_devs = nla_nest_start_noflag(skb, NFTA_HOOK_DEVS); ++ if (!nest_devs) ++ goto nla_put_failure; + + if (!hook_list) + hook_list = &basechain->hook_list; +@@ -8919,7 +8921,7 @@ static int nf_tables_commit_chain_prepare(struct net *net, struct nft_chain *cha + continue; + } + +- if (WARN_ON_ONCE(data + expr->ops->size > data_boundary)) ++ if (WARN_ON_ONCE(data + size + expr->ops->size > data_boundary)) + return -ENOMEM; + + memcpy(data + size, expr, expr->ops->size); +diff --git a/net/netfilter/nft_bitwise.c b/net/netfilter/nft_bitwise.c +index 84eae7cabc67a..2527a01486efc 100644 +--- a/net/netfilter/nft_bitwise.c ++++ b/net/netfilter/nft_bitwise.c +@@ -323,7 +323,7 @@ static bool nft_bitwise_reduce(struct nft_regs_track *track, + dreg = priv->dreg; + regcount = DIV_ROUND_UP(priv->len, NFT_REG32_SIZE); + for (i = 0; i < regcount; i++, dreg++) +- track->regs[priv->dreg].bitwise = expr; ++ track->regs[dreg].bitwise = expr; + + return false; + } +diff --git a/net/openvswitch/datapath.c b/net/openvswitch/datapath.c +index fcee6012293b1..58f530f60172a 100644 +--- 
a/net/openvswitch/datapath.c ++++ b/net/openvswitch/datapath.c +@@ -236,9 +236,6 @@ void ovs_dp_detach_port(struct vport *p) + /* First drop references to device. */ + hlist_del_rcu(&p->dp_hash_node); + +- /* Free percpu memory */ +- free_percpu(p->upcall_stats); +- + /* Then destroy it. */ + ovs_vport_del(p); + } +@@ -1858,12 +1855,6 @@ static int ovs_dp_cmd_new(struct sk_buff *skb, struct genl_info *info) + goto err_destroy_portids; + } + +- vport->upcall_stats = netdev_alloc_pcpu_stats(struct vport_upcall_stats_percpu); +- if (!vport->upcall_stats) { +- err = -ENOMEM; +- goto err_destroy_vport; +- } +- + err = ovs_dp_cmd_fill_info(dp, reply, info->snd_portid, + info->snd_seq, 0, OVS_DP_CMD_NEW); + BUG_ON(err < 0); +@@ -1876,8 +1867,6 @@ static int ovs_dp_cmd_new(struct sk_buff *skb, struct genl_info *info) + ovs_notify(&dp_datapath_genl_family, reply, info); + return 0; + +-err_destroy_vport: +- ovs_dp_detach_port(vport); + err_destroy_portids: + kfree(rcu_dereference_raw(dp->upcall_portids)); + err_unlock_and_destroy_meters: +@@ -2322,12 +2311,6 @@ restart: + goto exit_unlock_free; + } + +- vport->upcall_stats = netdev_alloc_pcpu_stats(struct vport_upcall_stats_percpu); +- if (!vport->upcall_stats) { +- err = -ENOMEM; +- goto exit_unlock_free_vport; +- } +- + err = ovs_vport_cmd_fill_info(vport, reply, genl_info_net(info), + info->snd_portid, info->snd_seq, 0, + OVS_VPORT_CMD_NEW, GFP_KERNEL); +@@ -2345,8 +2328,6 @@ restart: + ovs_notify(&dp_vport_genl_family, reply, info); + return 0; + +-exit_unlock_free_vport: +- ovs_dp_detach_port(vport); + exit_unlock_free: + ovs_unlock(); + kfree_skb(reply); +diff --git a/net/openvswitch/vport.c b/net/openvswitch/vport.c +index 7e0f5c45b5124..972ae01a70f76 100644 +--- a/net/openvswitch/vport.c ++++ b/net/openvswitch/vport.c +@@ -124,6 +124,7 @@ struct vport *ovs_vport_alloc(int priv_size, const struct vport_ops *ops, + { + struct vport *vport; + size_t alloc_size; ++ int err; + + alloc_size = sizeof(struct vport); + if 
(priv_size) { +@@ -135,17 +136,29 @@ struct vport *ovs_vport_alloc(int priv_size, const struct vport_ops *ops, + if (!vport) + return ERR_PTR(-ENOMEM); + ++ vport->upcall_stats = netdev_alloc_pcpu_stats(struct vport_upcall_stats_percpu); ++ if (!vport->upcall_stats) { ++ err = -ENOMEM; ++ goto err_kfree_vport; ++ } ++ + vport->dp = parms->dp; + vport->port_no = parms->port_no; + vport->ops = ops; + INIT_HLIST_NODE(&vport->dp_hash_node); + + if (ovs_vport_set_upcall_portids(vport, parms->upcall_portids)) { +- kfree(vport); +- return ERR_PTR(-EINVAL); ++ err = -EINVAL; ++ goto err_free_percpu; + } + + return vport; ++ ++err_free_percpu: ++ free_percpu(vport->upcall_stats); ++err_kfree_vport: ++ kfree(vport); ++ return ERR_PTR(err); + } + EXPORT_SYMBOL_GPL(ovs_vport_alloc); + +@@ -165,6 +178,7 @@ void ovs_vport_free(struct vport *vport) + * it is safe to use raw dereference. + */ + kfree(rcu_dereference_raw(vport->upcall_portids)); ++ free_percpu(vport->upcall_stats); + kfree(vport); + } + EXPORT_SYMBOL_GPL(ovs_vport_free); +diff --git a/net/sched/act_police.c b/net/sched/act_police.c +index 227cba58ce9f3..2e9dce03d1ecc 100644 +--- a/net/sched/act_police.c ++++ b/net/sched/act_police.c +@@ -357,23 +357,23 @@ static int tcf_police_dump(struct sk_buff *skb, struct tc_action *a, + opt.burst = PSCHED_NS2TICKS(p->tcfp_burst); + if (p->rate_present) { + psched_ratecfg_getrate(&opt.rate, &p->rate); +- if ((police->params->rate.rate_bytes_ps >= (1ULL << 32)) && ++ if ((p->rate.rate_bytes_ps >= (1ULL << 32)) && + nla_put_u64_64bit(skb, TCA_POLICE_RATE64, +- police->params->rate.rate_bytes_ps, ++ p->rate.rate_bytes_ps, + TCA_POLICE_PAD)) + goto nla_put_failure; + } + if (p->peak_present) { + psched_ratecfg_getrate(&opt.peakrate, &p->peak); +- if ((police->params->peak.rate_bytes_ps >= (1ULL << 32)) && ++ if ((p->peak.rate_bytes_ps >= (1ULL << 32)) && + nla_put_u64_64bit(skb, TCA_POLICE_PEAKRATE64, +- police->params->peak.rate_bytes_ps, ++ p->peak.rate_bytes_ps, + 
TCA_POLICE_PAD)) + goto nla_put_failure; + } + if (p->pps_present) { + if (nla_put_u64_64bit(skb, TCA_POLICE_PKTRATE64, +- police->params->ppsrate.rate_pkts_ps, ++ p->ppsrate.rate_pkts_ps, + TCA_POLICE_PAD)) + goto nla_put_failure; + if (nla_put_u64_64bit(skb, TCA_POLICE_PKTBURST64, +diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c +index 2621550bfddc1..c877a6343fd47 100644 +--- a/net/sched/cls_api.c ++++ b/net/sched/cls_api.c +@@ -43,8 +43,6 @@ + #include + #include + +-extern const struct nla_policy rtm_tca_policy[TCA_MAX + 1]; +- + /* The list of all installed classifier types */ + static LIST_HEAD(tcf_proto_base); + +@@ -2952,6 +2950,7 @@ static int tc_chain_tmplt_add(struct tcf_chain *chain, struct net *net, + return PTR_ERR(ops); + if (!ops->tmplt_create || !ops->tmplt_destroy || !ops->tmplt_dump) { + NL_SET_ERR_MSG(extack, "Chain templates are not supported with specified classifier"); ++ module_put(ops->owner); + return -EOPNOTSUPP; + } + +diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c +index 7045b67b5533e..b2a63d697a4aa 100644 +--- a/net/sched/sch_api.c ++++ b/net/sched/sch_api.c +@@ -309,7 +309,7 @@ struct Qdisc *qdisc_lookup(struct net_device *dev, u32 handle) + + if (dev_ingress_queue(dev)) + q = qdisc_match_from_root( +- dev_ingress_queue(dev)->qdisc_sleeping, ++ rtnl_dereference(dev_ingress_queue(dev)->qdisc_sleeping), + handle); + out: + return q; +@@ -328,7 +328,8 @@ struct Qdisc *qdisc_lookup_rcu(struct net_device *dev, u32 handle) + + nq = dev_ingress_queue_rcu(dev); + if (nq) +- q = qdisc_match_from_root(nq->qdisc_sleeping, handle); ++ q = qdisc_match_from_root(rcu_dereference(nq->qdisc_sleeping), ++ handle); + out: + return q; + } +@@ -634,8 +635,13 @@ EXPORT_SYMBOL(qdisc_watchdog_init); + void qdisc_watchdog_schedule_range_ns(struct qdisc_watchdog *wd, u64 expires, + u64 delta_ns) + { +- if (test_bit(__QDISC_STATE_DEACTIVATED, +- &qdisc_root_sleeping(wd->qdisc)->state)) ++ bool deactivated; ++ ++ rcu_read_lock(); ++ deactivated = 
test_bit(__QDISC_STATE_DEACTIVATED, ++ &qdisc_root_sleeping(wd->qdisc)->state); ++ rcu_read_unlock(); ++ if (deactivated) + return; + + if (hrtimer_is_queued(&wd->timer)) { +@@ -1476,7 +1482,7 @@ static int tc_get_qdisc(struct sk_buff *skb, struct nlmsghdr *n, + } + q = qdisc_leaf(p, clid); + } else if (dev_ingress_queue(dev)) { +- q = dev_ingress_queue(dev)->qdisc_sleeping; ++ q = rtnl_dereference(dev_ingress_queue(dev)->qdisc_sleeping); + } + } else { + q = rtnl_dereference(dev->qdisc); +@@ -1562,7 +1568,7 @@ replay: + } + q = qdisc_leaf(p, clid); + } else if (dev_ingress_queue_create(dev)) { +- q = dev_ingress_queue(dev)->qdisc_sleeping; ++ q = rtnl_dereference(dev_ingress_queue(dev)->qdisc_sleeping); + } + } else { + q = rtnl_dereference(dev->qdisc); +@@ -1803,8 +1809,8 @@ static int tc_dump_qdisc(struct sk_buff *skb, struct netlink_callback *cb) + + dev_queue = dev_ingress_queue(dev); + if (dev_queue && +- tc_dump_qdisc_root(dev_queue->qdisc_sleeping, skb, cb, +- &q_idx, s_q_idx, false, ++ tc_dump_qdisc_root(rtnl_dereference(dev_queue->qdisc_sleeping), ++ skb, cb, &q_idx, s_q_idx, false, + tca[TCA_DUMP_INVISIBLE]) < 0) + goto done; + +@@ -2247,8 +2253,8 @@ static int tc_dump_tclass(struct sk_buff *skb, struct netlink_callback *cb) + + dev_queue = dev_ingress_queue(dev); + if (dev_queue && +- tc_dump_tclass_root(dev_queue->qdisc_sleeping, skb, tcm, cb, +- &t, s_t, false) < 0) ++ tc_dump_tclass_root(rtnl_dereference(dev_queue->qdisc_sleeping), ++ skb, tcm, cb, &t, s_t, false) < 0) + goto done; + + done: +diff --git a/net/sched/sch_fq_pie.c b/net/sched/sch_fq_pie.c +index 6980796d435d9..591d87d5e5c0f 100644 +--- a/net/sched/sch_fq_pie.c ++++ b/net/sched/sch_fq_pie.c +@@ -201,6 +201,11 @@ out: + return NET_XMIT_CN; + } + ++static struct netlink_range_validation fq_pie_q_range = { ++ .min = 1, ++ .max = 1 << 20, ++}; ++ + static const struct nla_policy fq_pie_policy[TCA_FQ_PIE_MAX + 1] = { + [TCA_FQ_PIE_LIMIT] = {.type = NLA_U32}, + [TCA_FQ_PIE_FLOWS] = {.type = 
NLA_U32}, +@@ -208,7 +213,8 @@ static const struct nla_policy fq_pie_policy[TCA_FQ_PIE_MAX + 1] = { + [TCA_FQ_PIE_TUPDATE] = {.type = NLA_U32}, + [TCA_FQ_PIE_ALPHA] = {.type = NLA_U32}, + [TCA_FQ_PIE_BETA] = {.type = NLA_U32}, +- [TCA_FQ_PIE_QUANTUM] = {.type = NLA_U32}, ++ [TCA_FQ_PIE_QUANTUM] = ++ NLA_POLICY_FULL_RANGE(NLA_U32, &fq_pie_q_range), + [TCA_FQ_PIE_MEMORY_LIMIT] = {.type = NLA_U32}, + [TCA_FQ_PIE_ECN_PROB] = {.type = NLA_U32}, + [TCA_FQ_PIE_ECN] = {.type = NLA_U32}, +@@ -373,6 +379,7 @@ static void fq_pie_timer(struct timer_list *t) + spinlock_t *root_lock; /* to lock qdisc for probability calculations */ + u32 idx; + ++ rcu_read_lock(); + root_lock = qdisc_lock(qdisc_root_sleeping(sch)); + spin_lock(root_lock); + +@@ -385,6 +392,7 @@ static void fq_pie_timer(struct timer_list *t) + mod_timer(&q->adapt_timer, jiffies + q->p_params.tupdate); + + spin_unlock(root_lock); ++ rcu_read_unlock(); + } + + static int fq_pie_init(struct Qdisc *sch, struct nlattr *opt, +diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c +index a9aadc4e68581..ee43e8ac039ed 100644 +--- a/net/sched/sch_generic.c ++++ b/net/sched/sch_generic.c +@@ -648,7 +648,7 @@ struct Qdisc_ops noop_qdisc_ops __read_mostly = { + + static struct netdev_queue noop_netdev_queue = { + RCU_POINTER_INITIALIZER(qdisc, &noop_qdisc), +- .qdisc_sleeping = &noop_qdisc, ++ RCU_POINTER_INITIALIZER(qdisc_sleeping, &noop_qdisc), + }; + + struct Qdisc noop_qdisc = { +@@ -1103,7 +1103,7 @@ EXPORT_SYMBOL(qdisc_put_unlocked); + struct Qdisc *dev_graft_qdisc(struct netdev_queue *dev_queue, + struct Qdisc *qdisc) + { +- struct Qdisc *oqdisc = dev_queue->qdisc_sleeping; ++ struct Qdisc *oqdisc = rtnl_dereference(dev_queue->qdisc_sleeping); + spinlock_t *root_lock; + + root_lock = qdisc_lock(oqdisc); +@@ -1112,7 +1112,7 @@ struct Qdisc *dev_graft_qdisc(struct netdev_queue *dev_queue, + /* ... 
and graft new one */ + if (qdisc == NULL) + qdisc = &noop_qdisc; +- dev_queue->qdisc_sleeping = qdisc; ++ rcu_assign_pointer(dev_queue->qdisc_sleeping, qdisc); + rcu_assign_pointer(dev_queue->qdisc, &noop_qdisc); + + spin_unlock_bh(root_lock); +@@ -1125,12 +1125,12 @@ static void shutdown_scheduler_queue(struct net_device *dev, + struct netdev_queue *dev_queue, + void *_qdisc_default) + { +- struct Qdisc *qdisc = dev_queue->qdisc_sleeping; ++ struct Qdisc *qdisc = rtnl_dereference(dev_queue->qdisc_sleeping); + struct Qdisc *qdisc_default = _qdisc_default; + + if (qdisc) { + rcu_assign_pointer(dev_queue->qdisc, qdisc_default); +- dev_queue->qdisc_sleeping = qdisc_default; ++ rcu_assign_pointer(dev_queue->qdisc_sleeping, qdisc_default); + + qdisc_put(qdisc); + } +@@ -1154,7 +1154,7 @@ static void attach_one_default_qdisc(struct net_device *dev, + + if (!netif_is_multiqueue(dev)) + qdisc->flags |= TCQ_F_ONETXQUEUE | TCQ_F_NOPARENT; +- dev_queue->qdisc_sleeping = qdisc; ++ rcu_assign_pointer(dev_queue->qdisc_sleeping, qdisc); + } + + static void attach_default_qdiscs(struct net_device *dev) +@@ -1167,7 +1167,7 @@ static void attach_default_qdiscs(struct net_device *dev) + if (!netif_is_multiqueue(dev) || + dev->priv_flags & IFF_NO_QUEUE) { + netdev_for_each_tx_queue(dev, attach_one_default_qdisc, NULL); +- qdisc = txq->qdisc_sleeping; ++ qdisc = rtnl_dereference(txq->qdisc_sleeping); + rcu_assign_pointer(dev->qdisc, qdisc); + qdisc_refcount_inc(qdisc); + } else { +@@ -1186,7 +1186,7 @@ static void attach_default_qdiscs(struct net_device *dev) + netdev_for_each_tx_queue(dev, shutdown_scheduler_queue, &noop_qdisc); + dev->priv_flags |= IFF_NO_QUEUE; + netdev_for_each_tx_queue(dev, attach_one_default_qdisc, NULL); +- qdisc = txq->qdisc_sleeping; ++ qdisc = rtnl_dereference(txq->qdisc_sleeping); + rcu_assign_pointer(dev->qdisc, qdisc); + qdisc_refcount_inc(qdisc); + dev->priv_flags ^= IFF_NO_QUEUE; +@@ -1202,7 +1202,7 @@ static void transition_one_qdisc(struct net_device 
*dev, + struct netdev_queue *dev_queue, + void *_need_watchdog) + { +- struct Qdisc *new_qdisc = dev_queue->qdisc_sleeping; ++ struct Qdisc *new_qdisc = rtnl_dereference(dev_queue->qdisc_sleeping); + int *need_watchdog_p = _need_watchdog; + + if (!(new_qdisc->flags & TCQ_F_BUILTIN)) +@@ -1272,7 +1272,7 @@ static void dev_reset_queue(struct net_device *dev, + struct Qdisc *qdisc; + bool nolock; + +- qdisc = dev_queue->qdisc_sleeping; ++ qdisc = rtnl_dereference(dev_queue->qdisc_sleeping); + if (!qdisc) + return; + +@@ -1303,7 +1303,7 @@ static bool some_qdisc_is_busy(struct net_device *dev) + int val; + + dev_queue = netdev_get_tx_queue(dev, i); +- q = dev_queue->qdisc_sleeping; ++ q = rtnl_dereference(dev_queue->qdisc_sleeping); + + root_lock = qdisc_lock(q); + spin_lock_bh(root_lock); +@@ -1379,7 +1379,7 @@ EXPORT_SYMBOL(dev_deactivate); + static int qdisc_change_tx_queue_len(struct net_device *dev, + struct netdev_queue *dev_queue) + { +- struct Qdisc *qdisc = dev_queue->qdisc_sleeping; ++ struct Qdisc *qdisc = rtnl_dereference(dev_queue->qdisc_sleeping); + const struct Qdisc_ops *ops = qdisc->ops; + + if (ops->change_tx_queue_len) +@@ -1404,7 +1404,7 @@ void mq_change_real_num_tx(struct Qdisc *sch, unsigned int new_real_tx) + unsigned int i; + + for (i = new_real_tx; i < dev->real_num_tx_queues; i++) { +- qdisc = netdev_get_tx_queue(dev, i)->qdisc_sleeping; ++ qdisc = rtnl_dereference(netdev_get_tx_queue(dev, i)->qdisc_sleeping); + /* Only update the default qdiscs we created, + * qdiscs with handles are always hashed. 
+ */ +@@ -1412,7 +1412,7 @@ void mq_change_real_num_tx(struct Qdisc *sch, unsigned int new_real_tx) + qdisc_hash_del(qdisc); + } + for (i = dev->real_num_tx_queues; i < new_real_tx; i++) { +- qdisc = netdev_get_tx_queue(dev, i)->qdisc_sleeping; ++ qdisc = rtnl_dereference(netdev_get_tx_queue(dev, i)->qdisc_sleeping); + if (qdisc != &noop_qdisc && !qdisc->handle) + qdisc_hash_add(qdisc, false); + } +@@ -1449,7 +1449,7 @@ static void dev_init_scheduler_queue(struct net_device *dev, + struct Qdisc *qdisc = _qdisc; + + rcu_assign_pointer(dev_queue->qdisc, qdisc); +- dev_queue->qdisc_sleeping = qdisc; ++ rcu_assign_pointer(dev_queue->qdisc_sleeping, qdisc); + } + + void dev_init_scheduler(struct net_device *dev) +diff --git a/net/sched/sch_mq.c b/net/sched/sch_mq.c +index d0bc660d7401f..c860119a8f091 100644 +--- a/net/sched/sch_mq.c ++++ b/net/sched/sch_mq.c +@@ -141,7 +141,7 @@ static int mq_dump(struct Qdisc *sch, struct sk_buff *skb) + * qdisc totals are added at end. + */ + for (ntx = 0; ntx < dev->num_tx_queues; ntx++) { +- qdisc = netdev_get_tx_queue(dev, ntx)->qdisc_sleeping; ++ qdisc = rtnl_dereference(netdev_get_tx_queue(dev, ntx)->qdisc_sleeping); + spin_lock_bh(qdisc_lock(qdisc)); + + gnet_stats_add_basic(&sch->bstats, qdisc->cpu_bstats, +@@ -202,7 +202,7 @@ static struct Qdisc *mq_leaf(struct Qdisc *sch, unsigned long cl) + { + struct netdev_queue *dev_queue = mq_queue_get(sch, cl); + +- return dev_queue->qdisc_sleeping; ++ return rtnl_dereference(dev_queue->qdisc_sleeping); + } + + static unsigned long mq_find(struct Qdisc *sch, u32 classid) +@@ -221,7 +221,7 @@ static int mq_dump_class(struct Qdisc *sch, unsigned long cl, + + tcm->tcm_parent = TC_H_ROOT; + tcm->tcm_handle |= TC_H_MIN(cl); +- tcm->tcm_info = dev_queue->qdisc_sleeping->handle; ++ tcm->tcm_info = rtnl_dereference(dev_queue->qdisc_sleeping)->handle; + return 0; + } + +@@ -230,7 +230,7 @@ static int mq_dump_class_stats(struct Qdisc *sch, unsigned long cl, + { + struct netdev_queue *dev_queue = 
mq_queue_get(sch, cl); + +- sch = dev_queue->qdisc_sleeping; ++ sch = rtnl_dereference(dev_queue->qdisc_sleeping); + if (gnet_stats_copy_basic(d, sch->cpu_bstats, &sch->bstats, true) < 0 || + qdisc_qstats_copy(d, sch) < 0) + return -1; +diff --git a/net/sched/sch_mqprio.c b/net/sched/sch_mqprio.c +index fc6225f15fcdb..dd29c9470c784 100644 +--- a/net/sched/sch_mqprio.c ++++ b/net/sched/sch_mqprio.c +@@ -421,7 +421,7 @@ static int mqprio_dump(struct Qdisc *sch, struct sk_buff *skb) + * qdisc totals are added at end. + */ + for (ntx = 0; ntx < dev->num_tx_queues; ntx++) { +- qdisc = netdev_get_tx_queue(dev, ntx)->qdisc_sleeping; ++ qdisc = rtnl_dereference(netdev_get_tx_queue(dev, ntx)->qdisc_sleeping); + spin_lock_bh(qdisc_lock(qdisc)); + + gnet_stats_add_basic(&sch->bstats, qdisc->cpu_bstats, +@@ -465,7 +465,7 @@ static struct Qdisc *mqprio_leaf(struct Qdisc *sch, unsigned long cl) + if (!dev_queue) + return NULL; + +- return dev_queue->qdisc_sleeping; ++ return rtnl_dereference(dev_queue->qdisc_sleeping); + } + + static unsigned long mqprio_find(struct Qdisc *sch, u32 classid) +@@ -498,7 +498,7 @@ static int mqprio_dump_class(struct Qdisc *sch, unsigned long cl, + tcm->tcm_parent = (tc < 0) ? 
0 : + TC_H_MAKE(TC_H_MAJ(sch->handle), + TC_H_MIN(tc + TC_H_MIN_PRIORITY)); +- tcm->tcm_info = dev_queue->qdisc_sleeping->handle; ++ tcm->tcm_info = rtnl_dereference(dev_queue->qdisc_sleeping)->handle; + } else { + tcm->tcm_parent = TC_H_ROOT; + tcm->tcm_info = 0; +@@ -554,7 +554,7 @@ static int mqprio_dump_class_stats(struct Qdisc *sch, unsigned long cl, + } else { + struct netdev_queue *dev_queue = mqprio_queue_get(sch, cl); + +- sch = dev_queue->qdisc_sleeping; ++ sch = rtnl_dereference(dev_queue->qdisc_sleeping); + if (gnet_stats_copy_basic(d, sch->cpu_bstats, + &sch->bstats, true) < 0 || + qdisc_qstats_copy(d, sch) < 0) +diff --git a/net/sched/sch_pie.c b/net/sched/sch_pie.c +index 265c238047a42..b60b31ef71cc5 100644 +--- a/net/sched/sch_pie.c ++++ b/net/sched/sch_pie.c +@@ -421,8 +421,10 @@ static void pie_timer(struct timer_list *t) + { + struct pie_sched_data *q = from_timer(q, t, adapt_timer); + struct Qdisc *sch = q->sch; +- spinlock_t *root_lock = qdisc_lock(qdisc_root_sleeping(sch)); ++ spinlock_t *root_lock; + ++ rcu_read_lock(); ++ root_lock = qdisc_lock(qdisc_root_sleeping(sch)); + spin_lock(root_lock); + pie_calculate_probability(&q->params, &q->vars, sch->qstats.backlog); + +@@ -430,6 +432,7 @@ static void pie_timer(struct timer_list *t) + if (q->params.tupdate) + mod_timer(&q->adapt_timer, jiffies + q->params.tupdate); + spin_unlock(root_lock); ++ rcu_read_unlock(); + } + + static int pie_init(struct Qdisc *sch, struct nlattr *opt, +diff --git a/net/sched/sch_red.c b/net/sched/sch_red.c +index 98129324e1573..16277b6a0238d 100644 +--- a/net/sched/sch_red.c ++++ b/net/sched/sch_red.c +@@ -321,12 +321,15 @@ static inline void red_adaptative_timer(struct timer_list *t) + { + struct red_sched_data *q = from_timer(q, t, adapt_timer); + struct Qdisc *sch = q->sch; +- spinlock_t *root_lock = qdisc_lock(qdisc_root_sleeping(sch)); ++ spinlock_t *root_lock; + ++ rcu_read_lock(); ++ root_lock = qdisc_lock(qdisc_root_sleeping(sch)); + spin_lock(root_lock); + 
red_adaptative_algo(&q->parms, &q->vars); + mod_timer(&q->adapt_timer, jiffies + HZ/2); + spin_unlock(root_lock); ++ rcu_read_unlock(); + } + + static int red_init(struct Qdisc *sch, struct nlattr *opt, +diff --git a/net/sched/sch_sfq.c b/net/sched/sch_sfq.c +index abd436307d6a8..66dcb18638fea 100644 +--- a/net/sched/sch_sfq.c ++++ b/net/sched/sch_sfq.c +@@ -606,10 +606,12 @@ static void sfq_perturbation(struct timer_list *t) + { + struct sfq_sched_data *q = from_timer(q, t, perturb_timer); + struct Qdisc *sch = q->sch; +- spinlock_t *root_lock = qdisc_lock(qdisc_root_sleeping(sch)); ++ spinlock_t *root_lock; + siphash_key_t nkey; + + get_random_bytes(&nkey, sizeof(nkey)); ++ rcu_read_lock(); ++ root_lock = qdisc_lock(qdisc_root_sleeping(sch)); + spin_lock(root_lock); + q->perturbation = nkey; + if (!q->filter_list && q->tail) +@@ -618,6 +620,7 @@ static void sfq_perturbation(struct timer_list *t) + + if (q->perturb_period) + mod_timer(&q->perturb_timer, jiffies + q->perturb_period); ++ rcu_read_unlock(); + } + + static int sfq_change(struct Qdisc *sch, struct nlattr *opt) +diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c +index cbad430191721..a6cf56a969421 100644 +--- a/net/sched/sch_taprio.c ++++ b/net/sched/sch_taprio.c +@@ -2319,7 +2319,7 @@ static struct Qdisc *taprio_leaf(struct Qdisc *sch, unsigned long cl) + if (!dev_queue) + return NULL; + +- return dev_queue->qdisc_sleeping; ++ return rtnl_dereference(dev_queue->qdisc_sleeping); + } + + static unsigned long taprio_find(struct Qdisc *sch, u32 classid) +@@ -2338,7 +2338,7 @@ static int taprio_dump_class(struct Qdisc *sch, unsigned long cl, + + tcm->tcm_parent = TC_H_ROOT; + tcm->tcm_handle |= TC_H_MIN(cl); +- tcm->tcm_info = dev_queue->qdisc_sleeping->handle; ++ tcm->tcm_info = rtnl_dereference(dev_queue->qdisc_sleeping)->handle; + + return 0; + } +@@ -2350,7 +2350,7 @@ static int taprio_dump_class_stats(struct Qdisc *sch, unsigned long cl, + { + struct netdev_queue *dev_queue = 
taprio_queue_get(sch, cl); + +- sch = dev_queue->qdisc_sleeping; ++ sch = rtnl_dereference(dev_queue->qdisc_sleeping); + if (gnet_stats_copy_basic(d, NULL, &sch->bstats, true) < 0 || + qdisc_qstats_copy(d, sch) < 0) + return -1; +diff --git a/net/sched/sch_teql.c b/net/sched/sch_teql.c +index 16f9238aa51d1..7721239c185fb 100644 +--- a/net/sched/sch_teql.c ++++ b/net/sched/sch_teql.c +@@ -297,7 +297,7 @@ restart: + struct net_device *slave = qdisc_dev(q); + struct netdev_queue *slave_txq = netdev_get_tx_queue(slave, 0); + +- if (slave_txq->qdisc_sleeping != q) ++ if (rcu_access_pointer(slave_txq->qdisc_sleeping) != q) + continue; + if (netif_xmit_stopped(netdev_get_tx_queue(slave, subq)) || + !netif_running(slave)) { +diff --git a/net/smc/smc_llc.c b/net/smc/smc_llc.c +index 7a8d9163d186e..90f0b60b196ab 100644 +--- a/net/smc/smc_llc.c ++++ b/net/smc/smc_llc.c +@@ -851,6 +851,8 @@ static int smc_llc_add_link_cont(struct smc_link *link, + addc_llc->num_rkeys = *num_rkeys_todo; + n = *num_rkeys_todo; + for (i = 0; i < min_t(u8, n, SMC_LLC_RKEYS_PER_CONT_MSG); i++) { ++ while (*buf_pos && !(*buf_pos)->used) ++ *buf_pos = smc_llc_get_next_rmb(lgr, buf_lst, *buf_pos); + if (!*buf_pos) { + addc_llc->num_rkeys = addc_llc->num_rkeys - + *num_rkeys_todo; +@@ -867,8 +869,6 @@ static int smc_llc_add_link_cont(struct smc_link *link, + + (*num_rkeys_todo)--; + *buf_pos = smc_llc_get_next_rmb(lgr, buf_lst, *buf_pos); +- while (*buf_pos && !(*buf_pos)->used) +- *buf_pos = smc_llc_get_next_rmb(lgr, buf_lst, *buf_pos); + } + addc_llc->hd.common.llc_type = SMC_LLC_ADD_LINK_CONT; + addc_llc->hd.length = sizeof(struct smc_llc_msg_add_link_cont); +diff --git a/net/wireless/core.c b/net/wireless/core.c +index 5b0c4d5b80cf5..b3ec9eaec36b3 100644 +--- a/net/wireless/core.c ++++ b/net/wireless/core.c +@@ -368,12 +368,12 @@ static void cfg80211_sched_scan_stop_wk(struct work_struct *work) + rdev = container_of(work, struct cfg80211_registered_device, + sched_scan_stop_wk); + +- rtnl_lock(); 
++ wiphy_lock(&rdev->wiphy); + list_for_each_entry_safe(req, tmp, &rdev->sched_scan_req_list, list) { + if (req->nl_owner_dead) + cfg80211_stop_sched_scan_req(rdev, req, false); + } +- rtnl_unlock(); ++ wiphy_unlock(&rdev->wiphy); + } + + static void cfg80211_propagate_radar_detect_wk(struct work_struct *work) +diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c +index 4f63059efd813..1922fccb96ace 100644 +--- a/net/wireless/nl80211.c ++++ b/net/wireless/nl80211.c +@@ -10642,6 +10642,8 @@ static int nl80211_authenticate(struct sk_buff *skb, struct genl_info *info) + if (!info->attrs[NL80211_ATTR_MLD_ADDR]) + return -EINVAL; + req.ap_mld_addr = nla_data(info->attrs[NL80211_ATTR_MLD_ADDR]); ++ if (!is_valid_ether_addr(req.ap_mld_addr)) ++ return -EINVAL; + } + + req.bss = cfg80211_get_bss(&rdev->wiphy, chan, bssid, ssid, ssid_len, +diff --git a/sound/isa/gus/gus_pcm.c b/sound/isa/gus/gus_pcm.c +index 230f65a0e4b07..388db5fb65bd0 100644 +--- a/sound/isa/gus/gus_pcm.c ++++ b/sound/isa/gus/gus_pcm.c +@@ -892,10 +892,10 @@ int snd_gf1_pcm_new(struct snd_gus_card *gus, int pcm_dev, int control_index) + kctl = snd_ctl_new1(&snd_gf1_pcm_volume_control1, gus); + else + kctl = snd_ctl_new1(&snd_gf1_pcm_volume_control, gus); ++ kctl->id.index = control_index; + err = snd_ctl_add(card, kctl); + if (err < 0) + return err; +- kctl->id.index = control_index; + + return 0; + } +diff --git a/sound/pci/cmipci.c b/sound/pci/cmipci.c +index 727db6d433916..6d25c12d9ef00 100644 +--- a/sound/pci/cmipci.c ++++ b/sound/pci/cmipci.c +@@ -2688,20 +2688,20 @@ static int snd_cmipci_mixer_new(struct cmipci *cm, int pcm_spdif_device) + } + if (cm->can_ac3_hw) { + kctl = snd_ctl_new1(&snd_cmipci_spdif_default, cm); ++ kctl->id.device = pcm_spdif_device; + err = snd_ctl_add(card, kctl); + if (err < 0) + return err; +- kctl->id.device = pcm_spdif_device; + kctl = snd_ctl_new1(&snd_cmipci_spdif_mask, cm); ++ kctl->id.device = pcm_spdif_device; + err = snd_ctl_add(card, kctl); + if (err < 0) + 
return err; +- kctl->id.device = pcm_spdif_device; + kctl = snd_ctl_new1(&snd_cmipci_spdif_stream, cm); ++ kctl->id.device = pcm_spdif_device; + err = snd_ctl_add(card, kctl); + if (err < 0) + return err; +- kctl->id.device = pcm_spdif_device; + } + if (cm->chip_version <= 37) { + sw = snd_cmipci_old_mixer_switches; +diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c +index 9f79c0ac2bda7..bd19f92aeeec8 100644 +--- a/sound/pci/hda/hda_codec.c ++++ b/sound/pci/hda/hda_codec.c +@@ -2458,10 +2458,14 @@ int snd_hda_create_dig_out_ctls(struct hda_codec *codec, + type == HDA_PCM_TYPE_HDMI) { + /* suppose a single SPDIF device */ + for (dig_mix = dig_mixes; dig_mix->name; dig_mix++) { ++ struct snd_ctl_elem_id id; ++ + kctl = find_mixer_ctl(codec, dig_mix->name, 0, 0); + if (!kctl) + break; +- kctl->id.index = spdif_index; ++ id = kctl->id; ++ id.index = spdif_index; ++ snd_ctl_rename_id(codec->card, &kctl->id, &id); + } + bus->primary_dig_out_type = HDA_PCM_TYPE_HDMI; + } +diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c +index 7b5f194513c7b..48a0e87136f1c 100644 +--- a/sound/pci/hda/patch_realtek.c ++++ b/sound/pci/hda/patch_realtek.c +@@ -9547,6 +9547,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { + SND_PCI_QUIRK(0x1043, 0x1a8f, "ASUS UX582ZS", ALC245_FIXUP_CS35L41_SPI_2), + SND_PCI_QUIRK(0x1043, 0x1b11, "ASUS UX431DA", ALC294_FIXUP_ASUS_COEF_1B), + SND_PCI_QUIRK(0x1043, 0x1b13, "Asus U41SV", ALC269_FIXUP_INV_DMIC), ++ SND_PCI_QUIRK(0x1043, 0x1b93, "ASUS G614JVR/JIR", ALC245_FIXUP_CS35L41_SPI_2), + SND_PCI_QUIRK(0x1043, 0x1bbd, "ASUS Z550MA", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE), + SND_PCI_QUIRK(0x1043, 0x1c23, "Asus X55U", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), + SND_PCI_QUIRK(0x1043, 0x1c62, "ASUS GU603", ALC289_FIXUP_ASUS_GA401), +@@ -9565,6 +9566,11 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { + SND_PCI_QUIRK(0x1043, 0x1f12, "ASUS UM5302", ALC287_FIXUP_CS35L41_I2C_2), + SND_PCI_QUIRK(0x1043, 0x1f92, 
"ASUS ROG Flow X16", ALC289_FIXUP_ASUS_GA401), + SND_PCI_QUIRK(0x1043, 0x3030, "ASUS ZN270IE", ALC256_FIXUP_ASUS_AIO_GPIO2), ++ SND_PCI_QUIRK(0x1043, 0x3a20, "ASUS G614JZR", ALC245_FIXUP_CS35L41_SPI_2), ++ SND_PCI_QUIRK(0x1043, 0x3a30, "ASUS G814JVR/JIR", ALC245_FIXUP_CS35L41_SPI_2), ++ SND_PCI_QUIRK(0x1043, 0x3a40, "ASUS G814JZR", ALC245_FIXUP_CS35L41_SPI_2), ++ SND_PCI_QUIRK(0x1043, 0x3a50, "ASUS G834JYR/JZR", ALC245_FIXUP_CS35L41_SPI_2), ++ SND_PCI_QUIRK(0x1043, 0x3a60, "ASUS G634JYR/JZR", ALC245_FIXUP_CS35L41_SPI_2), + SND_PCI_QUIRK(0x1043, 0x831a, "ASUS P901", ALC269_FIXUP_STEREO_DMIC), + SND_PCI_QUIRK(0x1043, 0x834a, "ASUS S101", ALC269_FIXUP_STEREO_DMIC), + SND_PCI_QUIRK(0x1043, 0x8398, "ASUS P1005", ALC269_FIXUP_STEREO_DMIC), +@@ -9636,6 +9642,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { + SND_PCI_QUIRK(0x1558, 0x5101, "Clevo S510WU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), + SND_PCI_QUIRK(0x1558, 0x5157, "Clevo W517GU1", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), + SND_PCI_QUIRK(0x1558, 0x51a1, "Clevo NS50MU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), ++ SND_PCI_QUIRK(0x1558, 0x51b1, "Clevo NS50AU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), + SND_PCI_QUIRK(0x1558, 0x5630, "Clevo NP50RNJS", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), + SND_PCI_QUIRK(0x1558, 0x70a1, "Clevo NB70T[HJK]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), + SND_PCI_QUIRK(0x1558, 0x70b3, "Clevo NK70SB", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), +@@ -11694,6 +11701,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = { + SND_PCI_QUIRK(0x103c, 0x8719, "HP", ALC897_FIXUP_HP_HSMIC_VERB), + SND_PCI_QUIRK(0x103c, 0x872b, "HP", ALC897_FIXUP_HP_HSMIC_VERB), + SND_PCI_QUIRK(0x103c, 0x873e, "HP", ALC671_FIXUP_HP_HEADSET_MIC2), ++ SND_PCI_QUIRK(0x103c, 0x8768, "HP Slim Desktop S01", ALC671_FIXUP_HP_HEADSET_MIC2), + SND_PCI_QUIRK(0x103c, 0x877e, "HP 288 Pro G6", ALC671_FIXUP_HP_HEADSET_MIC2), + SND_PCI_QUIRK(0x103c, 0x885f, "HP 288 Pro G8", ALC671_FIXUP_HP_HEADSET_MIC2), + 
+ 	SND_PCI_QUIRK(0x1043, 0x1080, "Asus UX501VW", ALC668_FIXUP_HEADSET_MODE),
+@@ -11715,6 +11723,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x14cd, 0x5003, "USI", ALC662_FIXUP_USI_HEADSET_MODE),
+ 	SND_PCI_QUIRK(0x17aa, 0x1036, "Lenovo P520", ALC662_FIXUP_LENOVO_MULTI_CODECS),
+ 	SND_PCI_QUIRK(0x17aa, 0x1057, "Lenovo P360", ALC897_FIXUP_HEADSET_MIC_PIN),
++	SND_PCI_QUIRK(0x17aa, 0x1064, "Lenovo P3 Tower", ALC897_FIXUP_HEADSET_MIC_PIN),
+ 	SND_PCI_QUIRK(0x17aa, 0x32ca, "Lenovo ThinkCentre M80", ALC897_FIXUP_HEADSET_MIC_PIN),
+ 	SND_PCI_QUIRK(0x17aa, 0x32cb, "Lenovo ThinkCentre M70", ALC897_FIXUP_HEADSET_MIC_PIN),
+ 	SND_PCI_QUIRK(0x17aa, 0x32cf, "Lenovo ThinkCentre M950", ALC897_FIXUP_HEADSET_MIC_PIN),
+diff --git a/sound/pci/ice1712/aureon.c b/sound/pci/ice1712/aureon.c
+index 24b9782340001..027849329c1b0 100644
+--- a/sound/pci/ice1712/aureon.c
++++ b/sound/pci/ice1712/aureon.c
+@@ -1899,11 +1899,12 @@ static int aureon_add_controls(struct snd_ice1712 *ice)
+ 	else {
+ 		for (i = 0; i < ARRAY_SIZE(cs8415_controls); i++) {
+ 			struct snd_kcontrol *kctl;
+-			err = snd_ctl_add(ice->card, (kctl = snd_ctl_new1(&cs8415_controls[i], ice)));
+-			if (err < 0)
+-				return err;
++			kctl = snd_ctl_new1(&cs8415_controls[i], ice);
+ 			if (i > 1)
+ 				kctl->id.device = ice->pcm->device;
++			err = snd_ctl_add(ice->card, kctl);
++			if (err < 0)
++				return err;
+ 		}
+ 	}
+ }
+diff --git a/sound/pci/ice1712/ice1712.c b/sound/pci/ice1712/ice1712.c
+index a5241a287851c..3b0c3e70987b9 100644
+--- a/sound/pci/ice1712/ice1712.c
++++ b/sound/pci/ice1712/ice1712.c
+@@ -2371,22 +2371,26 @@ int snd_ice1712_spdif_build_controls(struct snd_ice1712 *ice)
+ 
+ 	if (snd_BUG_ON(!ice->pcm_pro))
+ 		return -EIO;
+-	err = snd_ctl_add(ice->card, kctl = snd_ctl_new1(&snd_ice1712_spdif_default, ice));
++	kctl = snd_ctl_new1(&snd_ice1712_spdif_default, ice);
++	kctl->id.device = ice->pcm_pro->device;
++	err = snd_ctl_add(ice->card, kctl);
+ 	if (err < 0)
+ 		return err;
++	kctl = snd_ctl_new1(&snd_ice1712_spdif_maskc, ice);
+ 	kctl->id.device = ice->pcm_pro->device;
+-	err = snd_ctl_add(ice->card, kctl = snd_ctl_new1(&snd_ice1712_spdif_maskc, ice));
++	err = snd_ctl_add(ice->card, kctl);
+ 	if (err < 0)
+ 		return err;
++	kctl = snd_ctl_new1(&snd_ice1712_spdif_maskp, ice);
+ 	kctl->id.device = ice->pcm_pro->device;
+-	err = snd_ctl_add(ice->card, kctl = snd_ctl_new1(&snd_ice1712_spdif_maskp, ice));
++	err = snd_ctl_add(ice->card, kctl);
+ 	if (err < 0)
+ 		return err;
++	kctl = snd_ctl_new1(&snd_ice1712_spdif_stream, ice);
+ 	kctl->id.device = ice->pcm_pro->device;
+-	err = snd_ctl_add(ice->card, kctl = snd_ctl_new1(&snd_ice1712_spdif_stream, ice));
++	err = snd_ctl_add(ice->card, kctl);
+ 	if (err < 0)
+ 		return err;
+-	kctl->id.device = ice->pcm_pro->device;
+ 	ice->spdif.stream_ctl = kctl;
+ 	return 0;
+ }
+diff --git a/sound/pci/ice1712/ice1724.c b/sound/pci/ice1712/ice1724.c
+index 6fab2ad85bbec..1dc776acd637c 100644
+--- a/sound/pci/ice1712/ice1724.c
++++ b/sound/pci/ice1712/ice1724.c
+@@ -2392,23 +2392,27 @@ static int snd_vt1724_spdif_build_controls(struct snd_ice1712 *ice)
+ 	if (err < 0)
+ 		return err;
+ 
+-	err = snd_ctl_add(ice->card, kctl = snd_ctl_new1(&snd_vt1724_spdif_default, ice));
++	kctl = snd_ctl_new1(&snd_vt1724_spdif_default, ice);
++	kctl->id.device = ice->pcm->device;
++	err = snd_ctl_add(ice->card, kctl);
+ 	if (err < 0)
+ 		return err;
++	kctl = snd_ctl_new1(&snd_vt1724_spdif_maskc, ice);
+ 	kctl->id.device = ice->pcm->device;
+-	err = snd_ctl_add(ice->card, kctl = snd_ctl_new1(&snd_vt1724_spdif_maskc, ice));
++	err = snd_ctl_add(ice->card, kctl);
+ 	if (err < 0)
+ 		return err;
++	kctl = snd_ctl_new1(&snd_vt1724_spdif_maskp, ice);
+ 	kctl->id.device = ice->pcm->device;
+-	err = snd_ctl_add(ice->card, kctl = snd_ctl_new1(&snd_vt1724_spdif_maskp, ice));
++	err = snd_ctl_add(ice->card, kctl);
+ 	if (err < 0)
+ 		return err;
+-	kctl->id.device = ice->pcm->device;
+ #if 0 /* use default only */
+-	err = snd_ctl_add(ice->card, kctl = snd_ctl_new1(&snd_vt1724_spdif_stream, ice));
++	kctl = snd_ctl_new1(&snd_vt1724_spdif_stream, ice);
++	kctl->id.device = ice->pcm->device;
++	err = snd_ctl_add(ice->card, kctl);
+ 	if (err < 0)
+ 		return err;
+-	kctl->id.device = ice->pcm->device;
+ 	ice->spdif.stream_ctl = kctl;
+ #endif
+ 	return 0;
+diff --git a/sound/pci/ymfpci/ymfpci_main.c b/sound/pci/ymfpci/ymfpci_main.c
+index b492c32ce0704..f629b3956a69d 100644
+--- a/sound/pci/ymfpci/ymfpci_main.c
++++ b/sound/pci/ymfpci/ymfpci_main.c
+@@ -1827,20 +1827,20 @@ int snd_ymfpci_mixer(struct snd_ymfpci *chip, int rear_switch)
+ 	if (snd_BUG_ON(!chip->pcm_spdif))
+ 		return -ENXIO;
+ 	kctl = snd_ctl_new1(&snd_ymfpci_spdif_default, chip);
++	kctl->id.device = chip->pcm_spdif->device;
+ 	err = snd_ctl_add(chip->card, kctl);
+ 	if (err < 0)
+ 		return err;
+-	kctl->id.device = chip->pcm_spdif->device;
+ 	kctl = snd_ctl_new1(&snd_ymfpci_spdif_mask, chip);
++	kctl->id.device = chip->pcm_spdif->device;
+ 	err = snd_ctl_add(chip->card, kctl);
+ 	if (err < 0)
+ 		return err;
+-	kctl->id.device = chip->pcm_spdif->device;
+ 	kctl = snd_ctl_new1(&snd_ymfpci_spdif_stream, chip);
++	kctl->id.device = chip->pcm_spdif->device;
+ 	err = snd_ctl_add(chip->card, kctl);
+ 	if (err < 0)
+ 		return err;
+-	kctl->id.device = chip->pcm_spdif->device;
+ 	chip->spdif_pcm_ctl = kctl;
+ 
+ 	/* direct recording source */
+diff --git a/sound/soc/amd/ps/pci-ps.c b/sound/soc/amd/ps/pci-ps.c
+index afddb9a77ba49..b1337b96ea8d6 100644
+--- a/sound/soc/amd/ps/pci-ps.c
++++ b/sound/soc/amd/ps/pci-ps.c
+@@ -211,8 +211,7 @@ static int create_acp63_platform_devs(struct pci_dev *pci, struct acp63_dev_data
+ 	case ACP63_PDM_DEV_MASK:
+ 		adata->pdm_dev_index = 0;
+ 		acp63_fill_platform_dev_info(&pdevinfo[0], parent, NULL, "acp_ps_pdm_dma",
+-					     0, adata->res, 1, &adata->acp_lock,
+-					     sizeof(adata->acp_lock));
++					     0, adata->res, 1, NULL, 0);
+ 		acp63_fill_platform_dev_info(&pdevinfo[1], parent, NULL, "dmic-codec",
+ 					     0, NULL, 0, NULL, 0);
+ 		acp63_fill_platform_dev_info(&pdevinfo[2], parent, NULL, "acp_ps_mach",
+diff --git a/sound/soc/amd/ps/ps-pdm-dma.c b/sound/soc/amd/ps/ps-pdm-dma.c
+index 454dab062e4f5..527594aa9c113 100644
+--- a/sound/soc/amd/ps/ps-pdm-dma.c
++++ b/sound/soc/amd/ps/ps-pdm-dma.c
+@@ -361,12 +361,12 @@ static int acp63_pdm_audio_probe(struct platform_device *pdev)
+ {
+ 	struct resource *res;
+ 	struct pdm_dev_data *adata;
++	struct acp63_dev_data *acp_data;
++	struct device *parent;
+ 	int status;
+ 
+-	if (!pdev->dev.platform_data) {
+-		dev_err(&pdev->dev, "platform_data not retrieved\n");
+-		return -ENODEV;
+-	}
++	parent = pdev->dev.parent;
++	acp_data = dev_get_drvdata(parent);
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	if (!res) {
+ 		dev_err(&pdev->dev, "IORESOURCE_MEM FAILED\n");
+@@ -382,7 +382,7 @@ static int acp63_pdm_audio_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	adata->capture_stream = NULL;
+-	adata->acp_lock = pdev->dev.platform_data;
++	adata->acp_lock = &acp_data->acp_lock;
+ 	dev_set_drvdata(&pdev->dev, adata);
+ 	status = devm_snd_soc_register_component(&pdev->dev,
+ 						 &acp63_pdm_component,
+diff --git a/sound/soc/codecs/wsa881x.c b/sound/soc/codecs/wsa881x.c
+index f709231b1277a..97f6873a0a8c7 100644
+--- a/sound/soc/codecs/wsa881x.c
++++ b/sound/soc/codecs/wsa881x.c
+@@ -645,7 +645,6 @@ static struct regmap_config wsa881x_regmap_config = {
+ 	.readable_reg = wsa881x_readable_register,
+ 	.reg_format_endian = REGMAP_ENDIAN_NATIVE,
+ 	.val_format_endian = REGMAP_ENDIAN_NATIVE,
+-	.can_multi_write = true,
+ };
+ 
+ enum {
+diff --git a/sound/soc/codecs/wsa883x.c b/sound/soc/codecs/wsa883x.c
+index c609cb63dae6d..e80b531435696 100644
+--- a/sound/soc/codecs/wsa883x.c
++++ b/sound/soc/codecs/wsa883x.c
+@@ -946,7 +946,6 @@ static struct regmap_config wsa883x_regmap_config = {
+ 	.writeable_reg = wsa883x_writeable_register,
+ 	.reg_format_endian = REGMAP_ENDIAN_NATIVE,
+ 	.val_format_endian = REGMAP_ENDIAN_NATIVE,
+-	.can_multi_write = true,
+ 	.use_single_read = true,
+ };
+ 
+diff --git a/sound/soc/generic/simple-card-utils.c b/sound/soc/generic/simple-card-utils.c
+index 56552a616f21f..1f24344846ae9 100644
+--- a/sound/soc/generic/simple-card-utils.c
++++ b/sound/soc/generic/simple-card-utils.c
+@@ -314,7 +314,7 @@ int asoc_simple_startup(struct snd_pcm_substream *substream)
+ 		}
+ 		ret = snd_pcm_hw_constraint_minmax(substream->runtime, SNDRV_PCM_HW_PARAM_RATE,
+ 						   fixed_rate, fixed_rate);
+-		if (ret)
++		if (ret < 0)
+ 			goto codec_err;
+ 	}
+ 
+diff --git a/sound/soc/mediatek/mt8188/mt8188-afe-clk.c b/sound/soc/mediatek/mt8188/mt8188-afe-clk.c
+index 743d6a162cb9a..0fb97517f82c6 100644
+--- a/sound/soc/mediatek/mt8188/mt8188-afe-clk.c
++++ b/sound/soc/mediatek/mt8188/mt8188-afe-clk.c
+@@ -418,13 +418,6 @@ int mt8188_afe_init_clock(struct mtk_base_afe *afe)
+ 	return 0;
+ }
+ 
+-void mt8188_afe_deinit_clock(void *priv)
+-{
+-	struct mtk_base_afe *afe = priv;
+-
+-	mt8188_audsys_clk_unregister(afe);
+-}
+-
+ int mt8188_afe_enable_clk(struct mtk_base_afe *afe, struct clk *clk)
+ {
+ 	int ret;
+diff --git a/sound/soc/mediatek/mt8188/mt8188-afe-clk.h b/sound/soc/mediatek/mt8188/mt8188-afe-clk.h
+index 084fdfb1d877a..a4203a87a1e35 100644
+--- a/sound/soc/mediatek/mt8188/mt8188-afe-clk.h
++++ b/sound/soc/mediatek/mt8188/mt8188-afe-clk.h
+@@ -100,7 +100,6 @@ int mt8188_afe_get_mclk_source_clk_id(int sel);
+ int mt8188_afe_get_mclk_source_rate(struct mtk_base_afe *afe, int apll);
+ int mt8188_afe_get_default_mclk_source_by_rate(int rate);
+ int mt8188_afe_init_clock(struct mtk_base_afe *afe);
+-void mt8188_afe_deinit_clock(void *priv);
+ int mt8188_afe_enable_clk(struct mtk_base_afe *afe, struct clk *clk);
+ void mt8188_afe_disable_clk(struct mtk_base_afe *afe, struct clk *clk);
+ int mt8188_afe_set_clk_rate(struct mtk_base_afe *afe, struct clk *clk,
+diff --git a/sound/soc/mediatek/mt8188/mt8188-afe-pcm.c b/sound/soc/mediatek/mt8188/mt8188-afe-pcm.c
+index e8e84de865422..45ab6e2829b7a 100644
+--- a/sound/soc/mediatek/mt8188/mt8188-afe-pcm.c
++++ b/sound/soc/mediatek/mt8188/mt8188-afe-pcm.c
+@@ -3185,10 +3185,6 @@ static int mt8188_afe_pcm_dev_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		return dev_err_probe(dev, ret, "init clock error");
+ 
+-	ret = devm_add_action_or_reset(dev, mt8188_afe_deinit_clock, (void *)afe);
+-	if (ret)
+-		return ret;
+-
+ 	spin_lock_init(&afe_priv->afe_ctrl_lock);
+ 
+ 	mutex_init(&afe->irq_alloc_lock);
+diff --git a/sound/soc/mediatek/mt8188/mt8188-audsys-clk.c b/sound/soc/mediatek/mt8188/mt8188-audsys-clk.c
+index be1c53bf47298..c796ad8b62eea 100644
+--- a/sound/soc/mediatek/mt8188/mt8188-audsys-clk.c
++++ b/sound/soc/mediatek/mt8188/mt8188-audsys-clk.c
+@@ -138,6 +138,29 @@ static const struct afe_gate aud_clks[CLK_AUD_NR_CLK] = {
+ 	GATE_AUD6(CLK_AUD_GASRC11, "aud_gasrc11", "top_asm_h", 11),
+ };
+ 
++static void mt8188_audsys_clk_unregister(void *data)
++{
++	struct mtk_base_afe *afe = data;
++	struct mt8188_afe_private *afe_priv = afe->platform_priv;
++	struct clk *clk;
++	struct clk_lookup *cl;
++	int i;
++
++	if (!afe_priv)
++		return;
++
++	for (i = 0; i < CLK_AUD_NR_CLK; i++) {
++		cl = afe_priv->lookup[i];
++		if (!cl)
++			continue;
++
++		clk = cl->clk;
++		clk_unregister_gate(clk);
++
++		clkdev_drop(cl);
++	}
++}
++
+ int mt8188_audsys_clk_register(struct mtk_base_afe *afe)
+ {
+ 	struct mt8188_afe_private *afe_priv = afe->platform_priv;
+@@ -179,27 +202,5 @@ int mt8188_audsys_clk_register(struct mtk_base_afe *afe)
+ 		afe_priv->lookup[i] = cl;
+ 	}
+ 
+-	return 0;
+-}
+-
+-void mt8188_audsys_clk_unregister(struct mtk_base_afe *afe)
+-{
+-	struct mt8188_afe_private *afe_priv = afe->platform_priv;
+-	struct clk *clk;
+-	struct clk_lookup *cl;
+-	int i;
+-
+-	if (!afe_priv)
+-		return;
+-
+-	for (i = 0; i < CLK_AUD_NR_CLK; i++) {
+-		cl = afe_priv->lookup[i];
+-		if (!cl)
+-			continue;
+-
+-		clk = cl->clk;
+-		clk_unregister_gate(clk);
+-
+-		clkdev_drop(cl);
+-	}
++	return devm_add_action_or_reset(afe->dev, mt8188_audsys_clk_unregister, afe);
+ }
+diff --git a/sound/soc/mediatek/mt8188/mt8188-audsys-clk.h b/sound/soc/mediatek/mt8188/mt8188-audsys-clk.h
+index 6c5f463ad7e4d..45b0948c4a06e 100644
+--- a/sound/soc/mediatek/mt8188/mt8188-audsys-clk.h
++++ b/sound/soc/mediatek/mt8188/mt8188-audsys-clk.h
+@@ -10,6 +10,5 @@
+ #define _MT8188_AUDSYS_CLK_H_
+ 
+ int mt8188_audsys_clk_register(struct mtk_base_afe *afe);
+-void mt8188_audsys_clk_unregister(struct mtk_base_afe *afe);
+ 
+ #endif
+diff --git a/sound/soc/mediatek/mt8195/mt8195-afe-clk.c b/sound/soc/mediatek/mt8195/mt8195-afe-clk.c
+index 9ca2cb8c8a9c2..f35318ae07392 100644
+--- a/sound/soc/mediatek/mt8195/mt8195-afe-clk.c
++++ b/sound/soc/mediatek/mt8195/mt8195-afe-clk.c
+@@ -410,11 +410,6 @@ int mt8195_afe_init_clock(struct mtk_base_afe *afe)
+ 	return 0;
+ }
+ 
+-void mt8195_afe_deinit_clock(struct mtk_base_afe *afe)
+-{
+-	mt8195_audsys_clk_unregister(afe);
+-}
+-
+ int mt8195_afe_enable_clk(struct mtk_base_afe *afe, struct clk *clk)
+ {
+ 	int ret;
+diff --git a/sound/soc/mediatek/mt8195/mt8195-afe-clk.h b/sound/soc/mediatek/mt8195/mt8195-afe-clk.h
+index 40663e31becd1..a08c0ee6c8602 100644
+--- a/sound/soc/mediatek/mt8195/mt8195-afe-clk.h
++++ b/sound/soc/mediatek/mt8195/mt8195-afe-clk.h
+@@ -101,7 +101,6 @@ int mt8195_afe_get_mclk_source_clk_id(int sel);
+ int mt8195_afe_get_mclk_source_rate(struct mtk_base_afe *afe, int apll);
+ int mt8195_afe_get_default_mclk_source_by_rate(int rate);
+ int mt8195_afe_init_clock(struct mtk_base_afe *afe);
+-void mt8195_afe_deinit_clock(struct mtk_base_afe *afe);
+ int mt8195_afe_enable_clk(struct mtk_base_afe *afe, struct clk *clk);
+ void mt8195_afe_disable_clk(struct mtk_base_afe *afe, struct clk *clk);
+ int mt8195_afe_prepare_clk(struct mtk_base_afe *afe, struct clk *clk);
+diff --git a/sound/soc/mediatek/mt8195/mt8195-afe-pcm.c b/sound/soc/mediatek/mt8195/mt8195-afe-pcm.c
+index 72b2c6d629b93..03dabc056b916 100644
+--- a/sound/soc/mediatek/mt8195/mt8195-afe-pcm.c
++++ b/sound/soc/mediatek/mt8195/mt8195-afe-pcm.c
+@@ -3253,18 +3253,13 @@ err_pm_put:
+ 	return ret;
+ }
+ 
+-static int mt8195_afe_pcm_dev_remove(struct platform_device *pdev)
++static void mt8195_afe_pcm_dev_remove(struct platform_device *pdev)
+ {
+-	struct mtk_base_afe *afe = platform_get_drvdata(pdev);
+-
+ 	snd_soc_unregister_component(&pdev->dev);
+ 
+ 	pm_runtime_disable(&pdev->dev);
+ 	if (!pm_runtime_status_suspended(&pdev->dev))
+ 		mt8195_afe_runtime_suspend(&pdev->dev);
+-
+-	mt8195_afe_deinit_clock(afe);
+-	return 0;
+ }
+ 
+ static const struct of_device_id mt8195_afe_pcm_dt_match[] = {
+@@ -3285,7 +3280,7 @@ static struct platform_driver mt8195_afe_pcm_driver = {
+ 		.pm = &mt8195_afe_pm_ops,
+ 	},
+ 	.probe = mt8195_afe_pcm_dev_probe,
+-	.remove = mt8195_afe_pcm_dev_remove,
++	.remove_new = mt8195_afe_pcm_dev_remove,
+ };
+ 
+ module_platform_driver(mt8195_afe_pcm_driver);
+diff --git a/sound/soc/mediatek/mt8195/mt8195-audsys-clk.c b/sound/soc/mediatek/mt8195/mt8195-audsys-clk.c
+index e0670e0dbd5b0..38594bc3f2f77 100644
+--- a/sound/soc/mediatek/mt8195/mt8195-audsys-clk.c
++++ b/sound/soc/mediatek/mt8195/mt8195-audsys-clk.c
+@@ -148,6 +148,29 @@ static const struct afe_gate aud_clks[CLK_AUD_NR_CLK] = {
+ 	GATE_AUD6(CLK_AUD_GASRC19, "aud_gasrc19", "top_asm_h", 19),
+ };
+ 
++static void mt8195_audsys_clk_unregister(void *data)
++{
++	struct mtk_base_afe *afe = data;
++	struct mt8195_afe_private *afe_priv = afe->platform_priv;
++	struct clk *clk;
++	struct clk_lookup *cl;
++	int i;
++
++	if (!afe_priv)
++		return;
++
++	for (i = 0; i < CLK_AUD_NR_CLK; i++) {
++		cl = afe_priv->lookup[i];
++		if (!cl)
++			continue;
++
++		clk = cl->clk;
++		clk_unregister_gate(clk);
++
++		clkdev_drop(cl);
++	}
++}
++
+ int mt8195_audsys_clk_register(struct mtk_base_afe *afe)
+ {
+ 	struct mt8195_afe_private *afe_priv = afe->platform_priv;
+@@ -188,27 +211,5 @@ int mt8195_audsys_clk_register(struct mtk_base_afe *afe)
+ 		afe_priv->lookup[i] = cl;
+ 	}
+ 
+-	return 0;
+-}
+-
+-void mt8195_audsys_clk_unregister(struct mtk_base_afe *afe)
+-{
+-	struct mt8195_afe_private *afe_priv = afe->platform_priv;
+-	struct clk *clk;
+-	struct clk_lookup *cl;
+-	int i;
+-
+-	if (!afe_priv)
+-		return;
+-
+-	for (i = 0; i < CLK_AUD_NR_CLK; i++) {
+-		cl = afe_priv->lookup[i];
+-		if (!cl)
+-			continue;
+-
+-		clk = cl->clk;
+-		clk_unregister_gate(clk);
+-
+-		clkdev_drop(cl);
+-	}
++	return devm_add_action_or_reset(afe->dev, mt8195_audsys_clk_unregister, afe);
+ }
+diff --git a/sound/soc/mediatek/mt8195/mt8195-audsys-clk.h b/sound/soc/mediatek/mt8195/mt8195-audsys-clk.h
+index 239d31016ba76..69db2dd1c9e02 100644
+--- a/sound/soc/mediatek/mt8195/mt8195-audsys-clk.h
++++ b/sound/soc/mediatek/mt8195/mt8195-audsys-clk.h
+@@ -10,6 +10,5 @@
+ #define _MT8195_AUDSYS_CLK_H_
+ 
+ int mt8195_audsys_clk_register(struct mtk_base_afe *afe);
+-void mt8195_audsys_clk_unregister(struct mtk_base_afe *afe);
+ 
+ #endif
+diff --git a/tools/testing/selftests/bpf/prog_tests/sockopt_sk.c b/tools/testing/selftests/bpf/prog_tests/sockopt_sk.c
+index 60d952719d275..05d0e07da3942 100644
+--- a/tools/testing/selftests/bpf/prog_tests/sockopt_sk.c
++++ b/tools/testing/selftests/bpf/prog_tests/sockopt_sk.c
+@@ -3,6 +3,7 @@
+ #include "cgroup_helpers.h"
+ 
+ #include
++#include
+ #include "sockopt_sk.skel.h"
+ 
+ #ifndef SOL_TCP
+@@ -183,6 +184,33 @@ static int getsetsockopt(void)
+ 		goto err;
+ 	}
+ 
++	/* optval=NULL case is handled correctly */
++
++	close(fd);
++	fd = socket(AF_NETLINK, SOCK_RAW, 0);
++	if (fd < 0) {
++		log_err("Failed to create AF_NETLINK socket");
++		return -1;
++	}
++
++	buf.u32 = 1;
++	optlen = sizeof(__u32);
++	err = setsockopt(fd, SOL_NETLINK, NETLINK_ADD_MEMBERSHIP, &buf, optlen);
++	if (err) {
++		log_err("Unexpected getsockopt(NETLINK_ADD_MEMBERSHIP) err=%d errno=%d",
++			err, errno);
++		goto err;
++	}
++
++	optlen = 0;
++	err = getsockopt(fd, SOL_NETLINK, NETLINK_LIST_MEMBERSHIPS, NULL, &optlen);
++	if (err) {
++		log_err("Unexpected getsockopt(NETLINK_LIST_MEMBERSHIPS) err=%d errno=%d",
++			err, errno);
++		goto err;
++	}
++	ASSERT_EQ(optlen, 8, "Unexpected NETLINK_LIST_MEMBERSHIPS value");
++
+ 	free(big_buf);
+ 	close(fd);
+ 	return 0;
+diff --git a/tools/testing/selftests/bpf/progs/sockopt_sk.c b/tools/testing/selftests/bpf/progs/sockopt_sk.c
+index c8d810010a946..fe1df4cd206eb 100644
+--- a/tools/testing/selftests/bpf/progs/sockopt_sk.c
++++ b/tools/testing/selftests/bpf/progs/sockopt_sk.c
+@@ -32,6 +32,12 @@ int _getsockopt(struct bpf_sockopt *ctx)
+ 	__u8 *optval_end = ctx->optval_end;
+ 	__u8 *optval = ctx->optval;
+ 	struct sockopt_sk *storage;
++	struct bpf_sock *sk;
++
++	/* Bypass AF_NETLINK. */
++	sk = ctx->sk;
++	if (sk && sk->family == AF_NETLINK)
++		return 1;
+ 
+ 	/* Make sure bpf_get_netns_cookie is callable.
+ 	 */
+@@ -131,6 +137,12 @@ int _setsockopt(struct bpf_sockopt *ctx)
+ 	__u8 *optval_end = ctx->optval_end;
+ 	__u8 *optval = ctx->optval;
+ 	struct sockopt_sk *storage;
++	struct bpf_sock *sk;
++
++	/* Bypass AF_NETLINK. */
++	sk = ctx->sk;
++	if (sk && sk->family == AF_NETLINK)
++		return 1;
+ 
+ 	/* Make sure bpf_get_netns_cookie is callable.
+ 	 */
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_join.sh b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+index 7c20811ab64bb..4152370298bf0 100755
+--- a/tools/testing/selftests/net/mptcp/mptcp_join.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+@@ -856,7 +856,15 @@ do_transfer()
+ 			     sed -n 's/.*\(token:\)\([[:digit:]]*\).*$/\2/p;q')
+ 			ip netns exec ${listener_ns} ./pm_nl_ctl ann $addr token $tk id $id
+ 			sleep 1
++			sp=$(grep "type:10" "$evts_ns1" |
++			     sed -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q')
++			da=$(grep "type:10" "$evts_ns1" |
++			     sed -n 's/.*\(daddr6:\)\([0-9a-f:.]*\).*$/\2/p;q')
++			dp=$(grep "type:10" "$evts_ns1" |
++			     sed -n 's/.*\(dport:\)\([[:digit:]]*\).*$/\2/p;q')
+ 			ip netns exec ${listener_ns} ./pm_nl_ctl rem token $tk id $id
++			ip netns exec ${listener_ns} ./pm_nl_ctl dsf lip "::ffff:$addr" \
++						lport $sp rip $da rport $dp token $tk
+ 		fi
+ 
+ 		counter=$((counter + 1))
+@@ -922,6 +930,7 @@ do_transfer()
+ 			sleep 1
+ 			sp=$(grep "type:10" "$evts_ns2" |
+ 			     sed -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q')
++			ip netns exec ${connector_ns} ./pm_nl_ctl rem token $tk id $id
+ 			ip netns exec ${connector_ns} ./pm_nl_ctl dsf lip $addr lport $sp \
+ 						rip $da rport $dp token $tk
+ 		fi
+@@ -3096,7 +3105,7 @@ userspace_tests()
+ 		pm_nl_set_limits $ns1 0 1
+ 		run_tests $ns1 $ns2 10.0.1.1 0 0 userspace_1 slow
+ 		chk_join_nr 1 1 1
+-		chk_rm_nr 0 1
++		chk_rm_nr 1 1
+ 		kill_events_pids
+ 	fi
+ }