From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from lists.gentoo.org (pigeon.gentoo.org [208.92.234.80]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by finch.gentoo.org (Postfix) with ESMTPS id 34FA11382C5 for ; Wed, 19 May 2021 12:25:53 +0000 (UTC)
Received: from pigeon.gentoo.org (localhost [127.0.0.1]) by pigeon.gentoo.org (Postfix) with SMTP id 002B5E07D7; Wed, 19 May 2021 12:25:51 +0000 (UTC)
Received: from smtp.gentoo.org (smtp.gentoo.org [140.211.166.183]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by pigeon.gentoo.org (Postfix) with ESMTPS id 9E385E07D7 for ; Wed, 19 May 2021 12:25:50 +0000 (UTC)
Received: from oystercatcher.gentoo.org (oystercatcher.gentoo.org [148.251.78.52]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.gentoo.org (Postfix) with ESMTPS id A5175335CCD for ; Wed, 19 May 2021 12:25:48 +0000 (UTC)
Received: from localhost.localdomain (localhost [IPv6:::1]) by oystercatcher.gentoo.org (Postfix) with ESMTP id 455BE63D for ; Wed, 19 May 2021 12:25:47 +0000 (UTC)
From: "Mike Pagano" <mpagano@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Content-Transfer-Encoding: 8bit
Content-type: text/plain; charset=UTF-8
Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano" <mpagano@gentoo.org>
Message-ID: <1621427137.d3cadfda9c7e1f2c2045f8709a31a571ad9ae675.mpagano@gentoo>
Subject: [gentoo-commits] proj/linux-patches:5.12 commit in: /
X-VCS-Repository: proj/linux-patches
X-VCS-Files: 0000_README 1004_linux-5.12.5.patch
X-VCS-Directories: /
X-VCS-Committer: mpagano
X-VCS-Committer-Name: Mike Pagano
X-VCS-Revision: d3cadfda9c7e1f2c2045f8709a31a571ad9ae675
X-VCS-Branch: 5.12
Date: Wed, 19 May 2021 12:25:47 +0000 (UTC)
Precedence: bulk
List-Post: 
List-Help: 
List-Unsubscribe: 
List-Subscribe: 
List-Id: Gentoo Linux mail
X-BeenThere: gentoo-commits@lists.gentoo.org
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply
X-Archives-Salt: 9e869fca-3632-4ffc-abca-de21d141da53
X-Archives-Hash: 86f20b06ae261eca26ef1008648d4bf3

commit:     d3cadfda9c7e1f2c2045f8709a31a571ad9ae675
Author:     Mike Pagano <mpagano@gentoo.org>
AuthorDate: Wed May 19 12:25:37 2021 +0000
Commit:     Mike Pagano <mpagano@gentoo.org>
CommitDate: Wed May 19 12:25:37 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=d3cadfda

Linux patch 5.12.5

Signed-off-by: Mike Pagano <mpagano@gentoo.org>

 0000_README             |     4 +
 1004_linux-5.12.5.patch | 15905 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 15909 insertions(+)

diff --git a/0000_README b/0000_README
index 9e34155..055c634 100644
--- a/0000_README
+++ b/0000_README
@@ -59,6 +59,10 @@ Patch: 1003_linux-5.12.4.patch
 From: http://www.kernel.org
 Desc: Linux 5.12.4
 
+Patch: 1004_linux-5.12.5.patch
+From: http://www.kernel.org
+Desc: Linux 5.12.5
+
 Patch: 1500_XATTR_USER_PREFIX.patch
 From: https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1004_linux-5.12.5.patch b/1004_linux-5.12.5.patch new file mode 100644 index 0000000..bf752c4 --- /dev/null +++ b/1004_linux-5.12.5.patch @@ -0,0 +1,15905 @@ +diff --git a/.gitignore b/.gitignore +index 3af66272d6f10..127012c1f7170 100644 +--- a/.gitignore ++++ b/.gitignore +@@ -57,6 +57,7 @@ modules.order + /tags + /TAGS + /linux ++/modules-only.symvers + /vmlinux + /vmlinux.32 + /vmlinux.symvers +diff --git a/Documentation/devicetree/bindings/media/renesas,vin.yaml b/Documentation/devicetree/bindings/media/renesas,vin.yaml +index fe7c4cbfe4ba9..dd1a5ce5896ce 100644 +--- a/Documentation/devicetree/bindings/media/renesas,vin.yaml ++++ b/Documentation/devicetree/bindings/media/renesas,vin.yaml +@@ -193,23 +193,35 @@ required: + - interrupts + - clocks + - power-domains +- - resets +- +-if: +- properties: +- compatible: +- contains: +- enum: +- - renesas,vin-r8a7778 +- - renesas,vin-r8a7779 +- - renesas,rcar-gen2-vin +-then: +- required: +- - port +-else: +- required: +- - renesas,id +- - ports ++ ++allOf: ++ - if: ++ not: ++ properties: ++ compatible: ++ contains: ++ enum: ++ - renesas,vin-r8a7778 ++ - renesas,vin-r8a7779 ++ then: ++ required: ++ - resets ++ ++ - if: ++ properties: ++ compatible: ++ contains: ++ enum: ++ - renesas,vin-r8a7778 ++ - renesas,vin-r8a7779 ++ - renesas,rcar-gen2-vin ++ then: ++ required: ++ - port ++ else: ++ required: ++ - renesas,id ++ - ports + + additionalProperties: false + +diff --git a/Documentation/devicetree/bindings/pci/rcar-pci-host.yaml b/Documentation/devicetree/bindings/pci/rcar-pci-host.yaml +index 4a2bcc0158e2d..8fdfbc763d704 100644 +--- a/Documentation/devicetree/bindings/pci/rcar-pci-host.yaml ++++ b/Documentation/devicetree/bindings/pci/rcar-pci-host.yaml +@@ -17,6 +17,7 @@ allOf: + properties: + compatible: + oneOf: ++ - const: renesas,pcie-r8a7779 # R-Car H1 + - items: + - enum: + - renesas,pcie-r8a7742 # RZ/G1H +@@ -74,7 +75,16 @@ required: + - clocks + - clock-names + - power-domains +- - resets ++ ++if: ++ not: ++ properties: ++ compatible: ++ contains: ++ const: renesas,pcie-r8a7779 ++then: ++ required: ++ - resets + + unevaluatedProperties: false + +diff --git a/Documentation/devicetree/bindings/phy/qcom,qmp-phy.yaml b/Documentation/devicetree/bindings/phy/qcom,qmp-phy.yaml +index 626447fee0925..7808ec8bc7128 100644 +--- a/Documentation/devicetree/bindings/phy/qcom,qmp-phy.yaml ++++ b/Documentation/devicetree/bindings/phy/qcom,qmp-phy.yaml +@@ -25,11 +25,13 @@ properties: + - qcom,msm8998-qmp-pcie-phy + - qcom,msm8998-qmp-ufs-phy + - qcom,msm8998-qmp-usb3-phy ++ - qcom,sc7180-qmp-usb3-phy + - qcom,sc8180x-qmp-ufs-phy + - qcom,sc8180x-qmp-usb3-phy + - qcom,sdm845-qhp-pcie-phy + - qcom,sdm845-qmp-pcie-phy + - qcom,sdm845-qmp-ufs-phy ++ - qcom,sdm845-qmp-usb3-phy + - qcom,sdm845-qmp-usb3-uni-phy + - qcom,sm8150-qmp-ufs-phy + - qcom,sm8150-qmp-usb3-phy +diff --git a/Documentation/devicetree/bindings/phy/qcom,qmp-usb3-dp-phy.yaml b/Documentation/devicetree/bindings/phy/qcom,qmp-usb3-dp-phy.yaml +index 33974ad10afe4..62c0179d17657 100644 +--- a/Documentation/devicetree/bindings/phy/qcom,qmp-usb3-dp-phy.yaml ++++ b/Documentation/devicetree/bindings/phy/qcom,qmp-usb3-dp-phy.yaml +@@ -14,9 +14,7 @@ properties: + compatible: + enum: + - qcom,sc7180-qmp-usb3-dp-phy +- - qcom,sc7180-qmp-usb3-phy + - qcom,sdm845-qmp-usb3-dp-phy +- - qcom,sdm845-qmp-usb3-phy + reg: + items: + - description: Address and length of PHY's USB serdes block. 
+diff --git a/Documentation/devicetree/bindings/serial/8250.yaml b/Documentation/devicetree/bindings/serial/8250.yaml +index f54cae9ff7b28..d3f87f2bfdc25 100644 +--- a/Documentation/devicetree/bindings/serial/8250.yaml ++++ b/Documentation/devicetree/bindings/serial/8250.yaml +@@ -93,11 +93,6 @@ properties: + - mediatek,mt7622-btif + - mediatek,mt7623-btif + - const: mediatek,mtk-btif +- - items: +- - enum: +- - mediatek,mt7622-btif +- - mediatek,mt7623-btif +- - const: mediatek,mtk-btif + - items: + - const: mrvl,mmp-uart + - const: intel,xscale-uart +diff --git a/Documentation/devicetree/bindings/thermal/rcar-gen3-thermal.yaml b/Documentation/devicetree/bindings/thermal/rcar-gen3-thermal.yaml +index b33a76eeac4e4..f963204e0b162 100644 +--- a/Documentation/devicetree/bindings/thermal/rcar-gen3-thermal.yaml ++++ b/Documentation/devicetree/bindings/thermal/rcar-gen3-thermal.yaml +@@ -28,14 +28,7 @@ properties: + - renesas,r8a77980-thermal # R-Car V3H + - renesas,r8a779a0-thermal # R-Car V3U + +- reg: +- minItems: 2 +- maxItems: 4 +- items: +- - description: TSC1 registers +- - description: TSC2 registers +- - description: TSC3 registers +- - description: TSC4 registers ++ reg: true + + interrupts: + items: +@@ -71,8 +64,25 @@ if: + enum: + - renesas,r8a779a0-thermal + then: ++ properties: ++ reg: ++ minItems: 2 ++ maxItems: 3 ++ items: ++ - description: TSC1 registers ++ - description: TSC2 registers ++ - description: TSC3 registers + required: + - interrupts ++else: ++ properties: ++ reg: ++ items: ++ - description: TSC0 registers ++ - description: TSC1 registers ++ - description: TSC2 registers ++ - description: TSC3 registers ++ - description: TSC4 registers + + additionalProperties: false + +@@ -111,3 +121,20 @@ examples: + }; + }; + }; ++ - | ++ #include ++ #include ++ #include ++ ++ tsc_r8a779a0: thermal@e6190000 { ++ compatible = "renesas,r8a779a0-thermal"; ++ reg = <0xe6190000 0x200>, ++ <0xe6198000 0x200>, ++ <0xe61a0000 0x200>, ++ <0xe61a8000 0x200>, ++ <0xe61b0000 0x200>; ++ clocks = <&cpg CPG_MOD 919>; ++ power-domains = <&sysc R8A779A0_PD_ALWAYS_ON>; ++ resets = <&cpg 919>; ++ #thermal-sensor-cells = <1>; ++ }; +diff --git a/Documentation/dontdiff b/Documentation/dontdiff +index e361fc95ca293..82e3eee7363b0 100644 +--- a/Documentation/dontdiff ++++ b/Documentation/dontdiff +@@ -178,6 +178,7 @@ mktables + mktree + mkutf8data + modpost ++modules-only.symvers + modules.builtin + modules.builtin.modinfo + modules.nsdeps +diff --git a/Makefile b/Makefile +index 0b1852621615c..f60212a0412c8 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,7 +1,7 @@ + # SPDX-License-Identifier: GPL-2.0 + VERSION = 5 + PATCHLEVEL = 12 +-SUBLEVEL = 4 ++SUBLEVEL = 5 + EXTRAVERSION = + NAME = Frozen Wasteland + +@@ -1513,7 +1513,7 @@ endif # CONFIG_MODULES + # make distclean Remove editor backup files, patch leftover files and the like + + # Directories & files removed with 'make clean' +-CLEAN_FILES += include/ksym vmlinux.symvers \ ++CLEAN_FILES += include/ksym vmlinux.symvers modules-only.symvers \ + modules.builtin modules.builtin.modinfo modules.nsdeps \ + compile_commands.json .thinlto-cache + +diff --git a/arch/arc/include/asm/page.h b/arch/arc/include/asm/page.h +index ad9b7fe4dba36..4a9d33372fe2b 100644 +--- a/arch/arc/include/asm/page.h ++++ b/arch/arc/include/asm/page.h +@@ -7,6 +7,18 @@ + + #include + ++#ifdef CONFIG_ARC_HAS_PAE40 ++ ++#define MAX_POSSIBLE_PHYSMEM_BITS 40 ++#define PAGE_MASK_PHYS (0xff00000000ull | PAGE_MASK) ++ ++#else /* CONFIG_ARC_HAS_PAE40 */ ++ ++#define 
MAX_POSSIBLE_PHYSMEM_BITS 32 ++#define PAGE_MASK_PHYS PAGE_MASK ++ ++#endif /* CONFIG_ARC_HAS_PAE40 */ ++ + #ifndef __ASSEMBLY__ + + #define clear_page(paddr) memset((paddr), 0, PAGE_SIZE) +diff --git a/arch/arc/include/asm/pgtable.h b/arch/arc/include/asm/pgtable.h +index 163641726a2b9..5878846f00cfe 100644 +--- a/arch/arc/include/asm/pgtable.h ++++ b/arch/arc/include/asm/pgtable.h +@@ -107,8 +107,8 @@ + #define ___DEF (_PAGE_PRESENT | _PAGE_CACHEABLE) + + /* Set of bits not changed in pte_modify */ +-#define _PAGE_CHG_MASK (PAGE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY | _PAGE_SPECIAL) +- ++#define _PAGE_CHG_MASK (PAGE_MASK_PHYS | _PAGE_ACCESSED | _PAGE_DIRTY | \ ++ _PAGE_SPECIAL) + /* More Abbrevaited helpers */ + #define PAGE_U_NONE __pgprot(___DEF) + #define PAGE_U_R __pgprot(___DEF | _PAGE_READ) +@@ -132,13 +132,7 @@ + #define PTE_BITS_IN_PD0 (_PAGE_GLOBAL | _PAGE_PRESENT | _PAGE_HW_SZ) + #define PTE_BITS_RWX (_PAGE_EXECUTE | _PAGE_WRITE | _PAGE_READ) + +-#ifdef CONFIG_ARC_HAS_PAE40 +-#define PTE_BITS_NON_RWX_IN_PD1 (0xff00000000 | PAGE_MASK | _PAGE_CACHEABLE) +-#define MAX_POSSIBLE_PHYSMEM_BITS 40 +-#else +-#define PTE_BITS_NON_RWX_IN_PD1 (PAGE_MASK | _PAGE_CACHEABLE) +-#define MAX_POSSIBLE_PHYSMEM_BITS 32 +-#endif ++#define PTE_BITS_NON_RWX_IN_PD1 (PAGE_MASK_PHYS | _PAGE_CACHEABLE) + + /************************************************************************** + * Mapping of vm_flags (Generic VM) to PTE flags (arch specific) +diff --git a/arch/arc/include/uapi/asm/page.h b/arch/arc/include/uapi/asm/page.h +index 2a97e2718a219..2a4ad619abfba 100644 +--- a/arch/arc/include/uapi/asm/page.h ++++ b/arch/arc/include/uapi/asm/page.h +@@ -33,5 +33,4 @@ + + #define PAGE_MASK (~(PAGE_SIZE-1)) + +- + #endif /* _UAPI__ASM_ARC_PAGE_H */ +diff --git a/arch/arc/kernel/entry.S b/arch/arc/kernel/entry.S +index 1743506081da6..2cb8dfe866b66 100644 +--- a/arch/arc/kernel/entry.S ++++ b/arch/arc/kernel/entry.S +@@ -177,7 +177,7 @@ tracesys: + + ; Do the Sys Call as we normally would. + ; Validate the Sys Call number +- cmp r8, NR_syscalls ++ cmp r8, NR_syscalls - 1 + mov.hi r0, -ENOSYS + bhi tracesys_exit + +@@ -255,7 +255,7 @@ ENTRY(EV_Trap) + ;============ Normal syscall case + + ; syscall num shd not exceed the total system calls avail +- cmp r8, NR_syscalls ++ cmp r8, NR_syscalls - 1 + mov.hi r0, -ENOSYS + bhi .Lret_from_system_call + +diff --git a/arch/arc/mm/init.c b/arch/arc/mm/init.c +index ce07e697916c8..1bcc6985b9a0e 100644 +--- a/arch/arc/mm/init.c ++++ b/arch/arc/mm/init.c +@@ -157,7 +157,16 @@ void __init setup_arch_memory(void) + min_high_pfn = PFN_DOWN(high_mem_start); + max_high_pfn = PFN_DOWN(high_mem_start + high_mem_sz); + +- max_zone_pfn[ZONE_HIGHMEM] = min_low_pfn; ++ /* ++ * max_high_pfn should be ok here for both HIGHMEM and HIGHMEM+PAE. ++ * For HIGHMEM without PAE max_high_pfn should be less than ++ * min_low_pfn to guarantee that these two regions don't overlap. ++ * For PAE case highmem is greater than lowmem, so it is natural ++ * to use max_high_pfn. ++ * ++ * In both cases, holes should be handled by pfn_valid(). 
++ */ ++ max_zone_pfn[ZONE_HIGHMEM] = max_high_pfn; + + high_memory = (void *)(min_high_pfn << PAGE_SHIFT); + +diff --git a/arch/arc/mm/ioremap.c b/arch/arc/mm/ioremap.c +index fac4adc902044..95c649fbc95af 100644 +--- a/arch/arc/mm/ioremap.c ++++ b/arch/arc/mm/ioremap.c +@@ -53,9 +53,10 @@ EXPORT_SYMBOL(ioremap); + void __iomem *ioremap_prot(phys_addr_t paddr, unsigned long size, + unsigned long flags) + { ++ unsigned int off; + unsigned long vaddr; + struct vm_struct *area; +- phys_addr_t off, end; ++ phys_addr_t end; + pgprot_t prot = __pgprot(flags); + + /* Don't allow wraparound, zero size */ +@@ -72,7 +73,7 @@ void __iomem *ioremap_prot(phys_addr_t paddr, unsigned long size, + + /* Mappings have to be page-aligned */ + off = paddr & ~PAGE_MASK; +- paddr &= PAGE_MASK; ++ paddr &= PAGE_MASK_PHYS; + size = PAGE_ALIGN(end + 1) - paddr; + + /* +diff --git a/arch/arc/mm/tlb.c b/arch/arc/mm/tlb.c +index 9bb3c24f36770..9c7c682472896 100644 +--- a/arch/arc/mm/tlb.c ++++ b/arch/arc/mm/tlb.c +@@ -576,7 +576,7 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long vaddr_unaligned, + pte_t *ptep) + { + unsigned long vaddr = vaddr_unaligned & PAGE_MASK; +- phys_addr_t paddr = pte_val(*ptep) & PAGE_MASK; ++ phys_addr_t paddr = pte_val(*ptep) & PAGE_MASK_PHYS; + struct page *page = pfn_to_page(pte_pfn(*ptep)); + + create_tlb(vma, vaddr, ptep); +diff --git a/arch/arm/boot/dts/dra7-l4.dtsi b/arch/arm/boot/dts/dra7-l4.dtsi +index 3bf90d9e33353..a294a02f2d232 100644 +--- a/arch/arm/boot/dts/dra7-l4.dtsi ++++ b/arch/arm/boot/dts/dra7-l4.dtsi +@@ -1168,7 +1168,7 @@ + }; + }; + +- target-module@34000 { /* 0x48034000, ap 7 46.0 */ ++ timer3_target: target-module@34000 { /* 0x48034000, ap 7 46.0 */ + compatible = "ti,sysc-omap4-timer", "ti,sysc"; + reg = <0x34000 0x4>, + <0x34010 0x4>; +@@ -1195,7 +1195,7 @@ + }; + }; + +- target-module@36000 { /* 0x48036000, ap 9 4e.0 */ ++ timer4_target: target-module@36000 { /* 0x48036000, ap 9 4e.0 */ + compatible = "ti,sysc-omap4-timer", "ti,sysc"; + reg = <0x36000 0x4>, + <0x36010 0x4>; +diff --git a/arch/arm/boot/dts/dra7.dtsi b/arch/arm/boot/dts/dra7.dtsi +index ce1194744f840..53d68786a61f2 100644 +--- a/arch/arm/boot/dts/dra7.dtsi ++++ b/arch/arm/boot/dts/dra7.dtsi +@@ -46,6 +46,7 @@ + + timer { + compatible = "arm,armv7-timer"; ++ status = "disabled"; /* See ARM architected timer wrap erratum i940 */ + interrupts = , + , + , +@@ -1241,3 +1242,22 @@ + assigned-clock-parents = <&sys_32k_ck>; + }; + }; ++ ++/* Local timers, see ARM architected timer wrap erratum i940 */ ++&timer3_target { ++ ti,no-reset-on-init; ++ ti,no-idle; ++ timer@0 { ++ assigned-clocks = <&l4per_clkctrl DRA7_L4PER_TIMER3_CLKCTRL 24>; ++ assigned-clock-parents = <&timer_sys_clk_div>; ++ }; ++}; ++ ++&timer4_target { ++ ti,no-reset-on-init; ++ ti,no-idle; ++ timer@0 { ++ assigned-clocks = <&l4per_clkctrl DRA7_L4PER_TIMER4_CLKCTRL 24>; ++ assigned-clock-parents = <&timer_sys_clk_div>; ++ }; ++}; +diff --git a/arch/arm/kernel/hw_breakpoint.c b/arch/arm/kernel/hw_breakpoint.c +index 08660ae9dcbce..b1423fb130ea4 100644 +--- a/arch/arm/kernel/hw_breakpoint.c ++++ b/arch/arm/kernel/hw_breakpoint.c +@@ -886,7 +886,7 @@ static void breakpoint_handler(unsigned long unknown, struct pt_regs *regs) + info->trigger = addr; + pr_debug("breakpoint fired: address = 0x%x\n", addr); + perf_bp_event(bp, regs); +- if (!bp->overflow_handler) ++ if (is_default_overflow_handler(bp)) + enable_single_step(bp, addr); + goto unlock; + } +diff --git a/arch/arm64/boot/dts/renesas/r8a779a0-falcon-cpu.dtsi 
b/arch/arm64/boot/dts/renesas/r8a779a0-falcon-cpu.dtsi +index fa284a7260d68..e202e8aa69413 100644 +--- a/arch/arm64/boot/dts/renesas/r8a779a0-falcon-cpu.dtsi ++++ b/arch/arm64/boot/dts/renesas/r8a779a0-falcon-cpu.dtsi +@@ -12,6 +12,14 @@ + model = "Renesas Falcon CPU board"; + compatible = "renesas,falcon-cpu", "renesas,r8a779a0"; + ++ aliases { ++ serial0 = &scif0; ++ }; ++ ++ chosen { ++ stdout-path = "serial0:115200n8"; ++ }; ++ + memory@48000000 { + device_type = "memory"; + /* first 128MB is reserved for secure area. */ +diff --git a/arch/arm64/boot/dts/renesas/r8a779a0-falcon.dts b/arch/arm64/boot/dts/renesas/r8a779a0-falcon.dts +index 5617b81dd7dc3..273857ae38f3d 100644 +--- a/arch/arm64/boot/dts/renesas/r8a779a0-falcon.dts ++++ b/arch/arm64/boot/dts/renesas/r8a779a0-falcon.dts +@@ -14,11 +14,6 @@ + + aliases { + ethernet0 = &avb0; +- serial0 = &scif0; +- }; +- +- chosen { +- stdout-path = "serial0:115200n8"; + }; + }; + +diff --git a/arch/arm64/include/asm/daifflags.h b/arch/arm64/include/asm/daifflags.h +index 1c26d7baa67f8..cfdde3a568059 100644 +--- a/arch/arm64/include/asm/daifflags.h ++++ b/arch/arm64/include/asm/daifflags.h +@@ -131,6 +131,9 @@ static inline void local_daif_inherit(struct pt_regs *regs) + if (interrupts_enabled(regs)) + trace_hardirqs_on(); + ++ if (system_uses_irq_prio_masking()) ++ gic_write_pmr(regs->pmr_save); ++ + /* + * We can't use local_daif_restore(regs->pstate) here as + * system_has_prio_mask_debugging() won't restore the I bit if it can +diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c +index 9d35884504737..117412bae9155 100644 +--- a/arch/arm64/kernel/entry-common.c ++++ b/arch/arm64/kernel/entry-common.c +@@ -226,14 +226,6 @@ static void noinstr el1_dbg(struct pt_regs *regs, unsigned long esr) + { + unsigned long far = read_sysreg(far_el1); + +- /* +- * The CPU masked interrupts, and we are leaving them masked during +- * do_debug_exception(). Update PMR as if we had called +- * local_daif_mask(). 
+- */ +- if (system_uses_irq_prio_masking()) +- gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET); +- + arm64_enter_el1_dbg(regs); + if (!cortex_a76_erratum_1463225_debug_handler(regs)) + do_debug_exception(far, esr, regs); +@@ -398,9 +390,6 @@ static void noinstr el0_dbg(struct pt_regs *regs, unsigned long esr) + /* Only watchpoints write FAR_EL1, otherwise its UNKNOWN */ + unsigned long far = read_sysreg(far_el1); + +- if (system_uses_irq_prio_masking()) +- gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET); +- + enter_from_user_mode(); + do_debug_exception(far, esr, regs); + local_daif_restore(DAIF_PROCCTX_NOIRQ); +@@ -408,9 +397,6 @@ static void noinstr el0_dbg(struct pt_regs *regs, unsigned long esr) + + static void noinstr el0_svc(struct pt_regs *regs) + { +- if (system_uses_irq_prio_masking()) +- gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET); +- + enter_from_user_mode(); + cortex_a76_erratum_1463225_svc_handler(); + do_el0_svc(regs); +@@ -486,9 +472,6 @@ static void noinstr el0_cp15(struct pt_regs *regs, unsigned long esr) + + static void noinstr el0_svc_compat(struct pt_regs *regs) + { +- if (system_uses_irq_prio_masking()) +- gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET); +- + enter_from_user_mode(); + cortex_a76_erratum_1463225_svc_handler(); + do_el0_svc_compat(regs); +diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S +index 6acfc5e6b5e05..e03fba3ae2a0d 100644 +--- a/arch/arm64/kernel/entry.S ++++ b/arch/arm64/kernel/entry.S +@@ -263,16 +263,16 @@ alternative_else_nop_endif + stp lr, x21, [sp, #S_LR] + + /* +- * For exceptions from EL0, terminate the callchain here. ++ * For exceptions from EL0, create a terminal frame record. + * For exceptions from EL1, create a synthetic frame record so the + * interrupted code shows up in the backtrace. + */ + .if \el == 0 +- mov x29, xzr ++ stp xzr, xzr, [sp, #S_STACKFRAME] + .else + stp x29, x22, [sp, #S_STACKFRAME] +- add x29, sp, #S_STACKFRAME + .endif ++ add x29, sp, #S_STACKFRAME + + #ifdef CONFIG_ARM64_SW_TTBR0_PAN + alternative_if_not ARM64_HAS_PAN +@@ -292,6 +292,8 @@ alternative_else_nop_endif + alternative_if ARM64_HAS_IRQ_PRIO_MASKING + mrs_s x20, SYS_ICC_PMR_EL1 + str x20, [sp, #S_PMR_SAVE] ++ mov x20, #GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET ++ msr_s SYS_ICC_PMR_EL1, x20 + alternative_else_nop_endif + + /* Re-enable tag checking (TCO set on exception entry) */ +@@ -493,8 +495,8 @@ tsk .req x28 // current thread_info + /* + * Interrupt handling. + */ +- .macro irq_handler +- ldr_l x1, handle_arch_irq ++ .macro irq_handler, handler:req ++ ldr_l x1, \handler + mov x0, sp + irq_stack_entry + blr x1 +@@ -524,13 +526,41 @@ alternative_endif + #endif + .endm + +- .macro gic_prio_irq_setup, pmr:req, tmp:req +-#ifdef CONFIG_ARM64_PSEUDO_NMI +- alternative_if ARM64_HAS_IRQ_PRIO_MASKING +- orr \tmp, \pmr, #GIC_PRIO_PSR_I_SET +- msr_s SYS_ICC_PMR_EL1, \tmp +- alternative_else_nop_endif ++ .macro el1_interrupt_handler, handler:req ++ enable_da_f ++ ++ mov x0, sp ++ bl enter_el1_irq_or_nmi ++ ++ irq_handler \handler ++ ++#ifdef CONFIG_PREEMPTION ++ ldr x24, [tsk, #TSK_TI_PREEMPT] // get preempt count ++alternative_if ARM64_HAS_IRQ_PRIO_MASKING ++ /* ++ * DA_F were cleared at start of handling. 
If anything is set in DAIF, ++ * we come back from an NMI, so skip preemption ++ */ ++ mrs x0, daif ++ orr x24, x24, x0 ++alternative_else_nop_endif ++ cbnz x24, 1f // preempt count != 0 || NMI return path ++ bl arm64_preempt_schedule_irq // irq en/disable is done inside ++1: + #endif ++ ++ mov x0, sp ++ bl exit_el1_irq_or_nmi ++ .endm ++ ++ .macro el0_interrupt_handler, handler:req ++ user_exit_irqoff ++ enable_da_f ++ ++ tbz x22, #55, 1f ++ bl do_el0_irq_bp_hardening ++1: ++ irq_handler \handler + .endm + + .text +@@ -662,32 +692,7 @@ SYM_CODE_END(el1_sync) + .align 6 + SYM_CODE_START_LOCAL_NOALIGN(el1_irq) + kernel_entry 1 +- gic_prio_irq_setup pmr=x20, tmp=x1 +- enable_da_f +- +- mov x0, sp +- bl enter_el1_irq_or_nmi +- +- irq_handler +- +-#ifdef CONFIG_PREEMPTION +- ldr x24, [tsk, #TSK_TI_PREEMPT] // get preempt count +-alternative_if ARM64_HAS_IRQ_PRIO_MASKING +- /* +- * DA_F were cleared at start of handling. If anything is set in DAIF, +- * we come back from an NMI, so skip preemption +- */ +- mrs x0, daif +- orr x24, x24, x0 +-alternative_else_nop_endif +- cbnz x24, 1f // preempt count != 0 || NMI return path +- bl arm64_preempt_schedule_irq // irq en/disable is done inside +-1: +-#endif +- +- mov x0, sp +- bl exit_el1_irq_or_nmi +- ++ el1_interrupt_handler handle_arch_irq + kernel_exit 1 + SYM_CODE_END(el1_irq) + +@@ -727,22 +732,13 @@ SYM_CODE_END(el0_error_compat) + SYM_CODE_START_LOCAL_NOALIGN(el0_irq) + kernel_entry 0 + el0_irq_naked: +- gic_prio_irq_setup pmr=x20, tmp=x0 +- user_exit_irqoff +- enable_da_f +- +- tbz x22, #55, 1f +- bl do_el0_irq_bp_hardening +-1: +- irq_handler +- ++ el0_interrupt_handler handle_arch_irq + b ret_to_user + SYM_CODE_END(el0_irq) + + SYM_CODE_START_LOCAL(el1_error) + kernel_entry 1 + mrs x1, esr_el1 +- gic_prio_kentry_setup tmp=x2 + enable_dbg + mov x0, sp + bl do_serror +@@ -753,7 +749,6 @@ SYM_CODE_START_LOCAL(el0_error) + kernel_entry 0 + el0_error_naked: + mrs x25, esr_el1 +- gic_prio_kentry_setup tmp=x2 + user_exit_irqoff + enable_dbg + mov x0, sp +diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c +index d55bdfb7789c1..7032a5f9e624c 100644 +--- a/arch/arm64/kernel/stacktrace.c ++++ b/arch/arm64/kernel/stacktrace.c +@@ -44,10 +44,6 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame) + unsigned long fp = frame->fp; + struct stack_info info; + +- /* Terminal record; nothing to unwind */ +- if (!fp) +- return -ENOENT; +- + if (fp & 0xf) + return -EINVAL; + +@@ -108,6 +104,12 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame) + + frame->pc = ptrauth_strip_insn_pac(frame->pc); + ++ /* ++ * This is a terminal record, so we have finished unwinding. 
++ */ ++ if (!frame->fp && !frame->pc) ++ return -ENOENT; ++ + return 0; + } + NOKPROBE_SYMBOL(unwind_frame); +diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c +index ac485163a4a76..6d44c028d1c9e 100644 +--- a/arch/arm64/mm/flush.c ++++ b/arch/arm64/mm/flush.c +@@ -55,8 +55,10 @@ void __sync_icache_dcache(pte_t pte) + { + struct page *page = pte_page(pte); + +- if (!test_and_set_bit(PG_dcache_clean, &page->flags)) ++ if (!test_bit(PG_dcache_clean, &page->flags)) { + sync_icache_aliases(page_address(page), page_size(page)); ++ set_bit(PG_dcache_clean, &page->flags); ++ } + } + EXPORT_SYMBOL_GPL(__sync_icache_dcache); + +diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S +index c967bfd30d2b5..b183216a591cc 100644 +--- a/arch/arm64/mm/proc.S ++++ b/arch/arm64/mm/proc.S +@@ -444,6 +444,18 @@ SYM_FUNC_START(__cpu_setup) + mov x10, #(SYS_GCR_EL1_RRND | SYS_GCR_EL1_EXCL_MASK) + msr_s SYS_GCR_EL1, x10 + ++ /* ++ * If GCR_EL1.RRND=1 is implemented the same way as RRND=0, then ++ * RGSR_EL1.SEED must be non-zero for IRG to produce ++ * pseudorandom numbers. As RGSR_EL1 is UNKNOWN out of reset, we ++ * must initialize it. ++ */ ++ mrs x10, CNTVCT_EL0 ++ ands x10, x10, #SYS_RGSR_EL1_SEED_MASK ++ csinc x10, x10, xzr, ne ++ lsl x10, x10, #SYS_RGSR_EL1_SEED_SHIFT ++ msr_s SYS_RGSR_EL1, x10 ++ + /* clear any pending tag check faults in TFSR*_EL1 */ + msr_s SYS_TFSR_EL1, xzr + msr_s SYS_TFSRE0_EL1, xzr +diff --git a/arch/ia64/include/asm/module.h b/arch/ia64/include/asm/module.h +index 5a29652e6defc..7271b9c5fc760 100644 +--- a/arch/ia64/include/asm/module.h ++++ b/arch/ia64/include/asm/module.h +@@ -14,16 +14,20 @@ + struct elf64_shdr; /* forward declration */ + + struct mod_arch_specific { ++ /* Used only at module load time. */ + struct elf64_shdr *core_plt; /* core PLT section */ + struct elf64_shdr *init_plt; /* init PLT section */ + struct elf64_shdr *got; /* global offset table */ + struct elf64_shdr *opd; /* official procedure descriptors */ + struct elf64_shdr *unwind; /* unwind-table section */ + unsigned long gp; /* global-pointer for module */ ++ unsigned int next_got_entry; /* index of next available got entry */ + ++ /* Used at module run and cleanup time. */ + void *core_unw_table; /* core unwind-table cookie returned by unwinder */ + void *init_unw_table; /* init unwind-table cookie returned by unwinder */ +- unsigned int next_got_entry; /* index of next available got entry */ ++ void *opd_addr; /* symbolize uses .opd to get to actual function */ ++ unsigned long opd_size; + }; + + #define ARCH_SHF_SMALL SHF_IA_64_SHORT +diff --git a/arch/ia64/kernel/module.c b/arch/ia64/kernel/module.c +index 00a496cb346f6..2cba53c1da82e 100644 +--- a/arch/ia64/kernel/module.c ++++ b/arch/ia64/kernel/module.c +@@ -905,9 +905,31 @@ register_unwind_table (struct module *mod) + int + module_finalize (const Elf_Ehdr *hdr, const Elf_Shdr *sechdrs, struct module *mod) + { ++ struct mod_arch_specific *mas = &mod->arch; ++ + DEBUGP("%s: init: entry=%p\n", __func__, mod->init); +- if (mod->arch.unwind) ++ if (mas->unwind) + register_unwind_table(mod); ++ ++ /* ++ * ".opd" was already relocated to the final destination. Store ++ * it's address for use in symbolizer. ++ */ ++ mas->opd_addr = (void *)mas->opd->sh_addr; ++ mas->opd_size = mas->opd->sh_size; ++ ++ /* ++ * Module relocation was already done at this point. Section ++ * headers are about to be deleted. Wipe out load-time context. 
++ */ ++ mas->core_plt = NULL; ++ mas->init_plt = NULL; ++ mas->got = NULL; ++ mas->opd = NULL; ++ mas->unwind = NULL; ++ mas->gp = 0; ++ mas->next_got_entry = 0; ++ + return 0; + } + +@@ -926,10 +948,9 @@ module_arch_cleanup (struct module *mod) + + void *dereference_module_function_descriptor(struct module *mod, void *ptr) + { +- Elf64_Shdr *opd = mod->arch.opd; ++ struct mod_arch_specific *mas = &mod->arch; + +- if (ptr < (void *)opd->sh_addr || +- ptr >= (void *)(opd->sh_addr + opd->sh_size)) ++ if (ptr < mas->opd_addr || ptr >= mas->opd_addr + mas->opd_size) + return ptr; + + return dereference_function_descriptor(ptr); +diff --git a/arch/mips/include/asm/div64.h b/arch/mips/include/asm/div64.h +index dc5ea57364408..ceece76fc971a 100644 +--- a/arch/mips/include/asm/div64.h ++++ b/arch/mips/include/asm/div64.h +@@ -1,5 +1,5 @@ + /* +- * Copyright (C) 2000, 2004 Maciej W. Rozycki ++ * Copyright (C) 2000, 2004, 2021 Maciej W. Rozycki + * Copyright (C) 2003, 07 Ralf Baechle (ralf@linux-mips.org) + * + * This file is subject to the terms and conditions of the GNU General Public +@@ -9,25 +9,18 @@ + #ifndef __ASM_DIV64_H + #define __ASM_DIV64_H + +-#include +- +-#if BITS_PER_LONG == 64 ++#include + +-#include ++#if BITS_PER_LONG == 32 + + /* + * No traps on overflows for any of these... + */ + +-#define __div64_32(n, base) \ +-({ \ ++#define do_div64_32(res, high, low, base) ({ \ + unsigned long __cf, __tmp, __tmp2, __i; \ + unsigned long __quot32, __mod32; \ +- unsigned long __high, __low; \ +- unsigned long long __n; \ + \ +- __high = *__n >> 32; \ +- __low = __n; \ + __asm__( \ + " .set push \n" \ + " .set noat \n" \ +@@ -51,18 +44,48 @@ + " subu %0, %0, %z6 \n" \ + " addiu %2, %2, 1 \n" \ + "3: \n" \ +- " bnez %4, 0b\n\t" \ +- " srl %5, %1, 0x1f\n\t" \ ++ " bnez %4, 0b \n" \ ++ " srl %5, %1, 0x1f \n" \ + " .set pop" \ + : "=&r" (__mod32), "=&r" (__tmp), \ + "=&r" (__quot32), "=&r" (__cf), \ + "=&r" (__i), "=&r" (__tmp2) \ +- : "Jr" (base), "0" (__high), "1" (__low)); \ ++ : "Jr" (base), "0" (high), "1" (low)); \ + \ +- (__n) = __quot32; \ ++ (res) = __quot32; \ + __mod32; \ + }) + +-#endif /* BITS_PER_LONG == 64 */ ++#define __div64_32(n, base) ({ \ ++ unsigned long __upper, __low, __high, __radix; \ ++ unsigned long long __quot; \ ++ unsigned long long __div; \ ++ unsigned long __mod; \ ++ \ ++ __div = (*n); \ ++ __radix = (base); \ ++ \ ++ __high = __div >> 32; \ ++ __low = __div; \ ++ \ ++ if (__high < __radix) { \ ++ __upper = __high; \ ++ __high = 0; \ ++ } else { \ ++ __upper = __high % __radix; \ ++ __high /= __radix; \ ++ } \ ++ \ ++ __mod = do_div64_32(__low, __upper, __low, __radix); \ ++ \ ++ __quot = __high; \ ++ __quot = __quot << 32 | __low; \ ++ (*n) = __quot; \ ++ __mod; \ ++}) ++ ++#endif /* BITS_PER_LONG == 32 */ ++ ++#include + + #endif /* __ASM_DIV64_H */ +diff --git a/arch/mips/kernel/cpu-probe.c b/arch/mips/kernel/cpu-probe.c +index b71892064f273..0ef240adefb59 100644 +--- a/arch/mips/kernel/cpu-probe.c ++++ b/arch/mips/kernel/cpu-probe.c +@@ -1752,7 +1752,6 @@ static inline void cpu_probe_loongson(struct cpuinfo_mips *c, unsigned int cpu) + set_isa(c, MIPS_CPU_ISA_M64R2); + break; + } +- c->writecombine = _CACHE_UNCACHED_ACCELERATED; + c->ases |= (MIPS_ASE_LOONGSON_MMI | MIPS_ASE_LOONGSON_EXT | + MIPS_ASE_LOONGSON_EXT2); + break; +@@ -1782,7 +1781,6 @@ static inline void cpu_probe_loongson(struct cpuinfo_mips *c, unsigned int cpu) + * register, we correct it here. 
+ */ + c->options |= MIPS_CPU_FTLB | MIPS_CPU_TLBINV | MIPS_CPU_LDPTE; +- c->writecombine = _CACHE_UNCACHED_ACCELERATED; + c->ases |= (MIPS_ASE_LOONGSON_MMI | MIPS_ASE_LOONGSON_CAM | + MIPS_ASE_LOONGSON_EXT | MIPS_ASE_LOONGSON_EXT2); + c->ases &= ~MIPS_ASE_VZ; /* VZ of Loongson-3A2000/3000 is incomplete */ +@@ -1793,7 +1791,6 @@ static inline void cpu_probe_loongson(struct cpuinfo_mips *c, unsigned int cpu) + set_elf_platform(cpu, "loongson3a"); + set_isa(c, MIPS_CPU_ISA_M64R2); + decode_cpucfg(c); +- c->writecombine = _CACHE_UNCACHED_ACCELERATED; + break; + default: + panic("Unknown Loongson Processor ID!"); +diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h +index e8d09a841373b..31ed5356590a2 100644 +--- a/arch/powerpc/include/asm/interrupt.h ++++ b/arch/powerpc/include/asm/interrupt.h +@@ -138,6 +138,13 @@ static inline void interrupt_nmi_enter_prepare(struct pt_regs *regs, struct inte + local_paca->irq_soft_mask = IRQS_ALL_DISABLED; + local_paca->irq_happened |= PACA_IRQ_HARD_DIS; + ++ if (IS_ENABLED(CONFIG_PPC_BOOK3S_64) && !(regs->msr & MSR_PR) && ++ regs->nip < (unsigned long)__end_interrupts) { ++ // Kernel code running below __end_interrupts is ++ // implicitly soft-masked. ++ regs->softe = IRQS_ALL_DISABLED; ++ } ++ + /* Don't do any per-CPU operations until interrupt state is fixed */ + #endif + /* Allow DEC and PMI to be traced when they are soft-NMI */ +diff --git a/arch/powerpc/kernel/head_32.h b/arch/powerpc/kernel/head_32.h +index 5d4706c145727..cf8ca08295bff 100644 +--- a/arch/powerpc/kernel/head_32.h ++++ b/arch/powerpc/kernel/head_32.h +@@ -261,11 +261,7 @@ label: + lis r1, emergency_ctx@ha + #endif + lwz r1, emergency_ctx@l(r1) +- cmpwi cr1, r1, 0 +- bne cr1, 1f +- lis r1, init_thread_union@ha +- addi r1, r1, init_thread_union@l +-1: addi r1, r1, THREAD_SIZE - INT_FRAME_SIZE ++ addi r1, r1, THREAD_SIZE - INT_FRAME_SIZE + EXCEPTION_PROLOG_2 + SAVE_NVGPRS(r11) + addi r3, r1, STACK_FRAME_OVERHEAD +diff --git a/arch/powerpc/kernel/iommu.c b/arch/powerpc/kernel/iommu.c +index c00214a4355c8..4023f91defa68 100644 +--- a/arch/powerpc/kernel/iommu.c ++++ b/arch/powerpc/kernel/iommu.c +@@ -1096,7 +1096,7 @@ int iommu_take_ownership(struct iommu_table *tbl) + + spin_lock_irqsave(&tbl->large_pool.lock, flags); + for (i = 0; i < tbl->nr_pools; i++) +- spin_lock(&tbl->pools[i].lock); ++ spin_lock_nest_lock(&tbl->pools[i].lock, &tbl->large_pool.lock); + + iommu_table_release_pages(tbl); + +@@ -1124,7 +1124,7 @@ void iommu_release_ownership(struct iommu_table *tbl) + + spin_lock_irqsave(&tbl->large_pool.lock, flags); + for (i = 0; i < tbl->nr_pools; i++) +- spin_lock(&tbl->pools[i].lock); ++ spin_lock_nest_lock(&tbl->pools[i].lock, &tbl->large_pool.lock); + + memset(tbl->it_map, 0, sz); + +diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c +index 8ba49a6bf5159..d7c1f92152af6 100644 +--- a/arch/powerpc/kernel/setup_32.c ++++ b/arch/powerpc/kernel/setup_32.c +@@ -164,7 +164,7 @@ void __init irqstack_early_init(void) + } + + #ifdef CONFIG_VMAP_STACK +-void *emergency_ctx[NR_CPUS] __ro_after_init; ++void *emergency_ctx[NR_CPUS] __ro_after_init = {[0] = &init_stack}; + + void __init emergency_stack_init(void) + { +diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c +index 5c7ce1d506312..c2473e20f5f5b 100644 +--- a/arch/powerpc/kernel/smp.c ++++ b/arch/powerpc/kernel/smp.c +@@ -1546,6 +1546,9 @@ void start_secondary(void *unused) + + vdso_getcpu_init(); + #endif ++ set_numa_node(numa_cpu_lookup_table[cpu]); ++ 
set_numa_mem(local_memory_node(numa_cpu_lookup_table[cpu])); ++ + /* Update topology CPU masks */ + add_cpu_to_masks(cpu); + +@@ -1564,9 +1567,6 @@ void start_secondary(void *unused) + shared_caches = true; + } + +- set_numa_node(numa_cpu_lookup_table[cpu]); +- set_numa_mem(local_memory_node(numa_cpu_lookup_table[cpu])); +- + smp_wmb(); + notify_cpu_starting(cpu); + set_cpu_online(cpu, true); +diff --git a/arch/powerpc/lib/feature-fixups.c b/arch/powerpc/lib/feature-fixups.c +index 1fd31b4b0e139..0aefa6a4a259b 100644 +--- a/arch/powerpc/lib/feature-fixups.c ++++ b/arch/powerpc/lib/feature-fixups.c +@@ -14,6 +14,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -227,11 +228,25 @@ static void do_stf_exit_barrier_fixups(enum stf_barrier_type types) + : "unknown"); + } + ++static int __do_stf_barrier_fixups(void *data) ++{ ++ enum stf_barrier_type *types = data; ++ ++ do_stf_entry_barrier_fixups(*types); ++ do_stf_exit_barrier_fixups(*types); ++ ++ return 0; ++} + + void do_stf_barrier_fixups(enum stf_barrier_type types) + { +- do_stf_entry_barrier_fixups(types); +- do_stf_exit_barrier_fixups(types); ++ /* ++ * The call to the fallback entry flush, and the fallback/sync-ori exit ++ * flush can not be safely patched in/out while other CPUs are executing ++ * them. So call __do_stf_barrier_fixups() on one CPU while all other CPUs ++ * spin in the stop machine core with interrupts hard disabled. ++ */ ++ stop_machine(__do_stf_barrier_fixups, &types, NULL); + } + + void do_uaccess_flush_fixups(enum l1d_flush_type types) +@@ -284,8 +299,9 @@ void do_uaccess_flush_fixups(enum l1d_flush_type types) + : "unknown"); + } + +-void do_entry_flush_fixups(enum l1d_flush_type types) ++static int __do_entry_flush_fixups(void *data) + { ++ enum l1d_flush_type types = *(enum l1d_flush_type *)data; + unsigned int instrs[3], *dest; + long *start, *end; + int i; +@@ -354,6 +370,19 @@ void do_entry_flush_fixups(enum l1d_flush_type types) + : "ori type" : + (types & L1D_FLUSH_MTTRIG) ? "mttrig type" + : "unknown"); ++ ++ return 0; ++} ++ ++void do_entry_flush_fixups(enum l1d_flush_type types) ++{ ++ /* ++ * The call to the fallback flush can not be safely patched in/out while ++ * other CPUs are executing it. So call __do_entry_flush_fixups() on one ++ * CPU while all other CPUs spin in the stop machine core with interrupts ++ * hard disabled. ++ */ ++ stop_machine(__do_entry_flush_fixups, &types, NULL); + } + + void do_rfi_flush_fixups(enum l1d_flush_type types) +diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c +index 7719995323c3f..12de1906e97bc 100644 +--- a/arch/powerpc/mm/book3s64/hash_utils.c ++++ b/arch/powerpc/mm/book3s64/hash_utils.c +@@ -338,7 +338,7 @@ repeat: + int htab_remove_mapping(unsigned long vstart, unsigned long vend, + int psize, int ssize) + { +- unsigned long vaddr; ++ unsigned long vaddr, time_limit; + unsigned int step, shift; + int rc; + int ret = 0; +@@ -351,8 +351,19 @@ int htab_remove_mapping(unsigned long vstart, unsigned long vend, + + /* Unmap the full range specificied */ + vaddr = ALIGN_DOWN(vstart, step); ++ time_limit = jiffies + HZ; ++ + for (;vaddr < vend; vaddr += step) { + rc = mmu_hash_ops.hpte_removebolted(vaddr, psize, ssize); ++ ++ /* ++ * For large number of mappings introduce a cond_resched() ++ * to prevent softlockup warnings. 
++ */ ++ if (time_after(jiffies, time_limit)) { ++ cond_resched(); ++ time_limit = jiffies + HZ; ++ } + if (rc == -ENOENT) { + ret = -ENOENT; + continue; +diff --git a/arch/powerpc/platforms/powernv/memtrace.c b/arch/powerpc/platforms/powernv/memtrace.c +index 019669eb21d27..4ab7c3ef5826c 100644 +--- a/arch/powerpc/platforms/powernv/memtrace.c ++++ b/arch/powerpc/platforms/powernv/memtrace.c +@@ -88,8 +88,8 @@ static void memtrace_clear_range(unsigned long start_pfn, + * Before we go ahead and use this range as cache inhibited range + * flush the cache. + */ +- flush_dcache_range_chunked(PFN_PHYS(start_pfn), +- PFN_PHYS(start_pfn + nr_pages), ++ flush_dcache_range_chunked((unsigned long)pfn_to_kaddr(start_pfn), ++ (unsigned long)pfn_to_kaddr(start_pfn + nr_pages), + FLUSH_CHUNK_SIZE); + } + +diff --git a/arch/powerpc/platforms/pseries/hotplug-cpu.c b/arch/powerpc/platforms/pseries/hotplug-cpu.c +index 12cbffd3c2e32..325f3b220f360 100644 +--- a/arch/powerpc/platforms/pseries/hotplug-cpu.c ++++ b/arch/powerpc/platforms/pseries/hotplug-cpu.c +@@ -47,9 +47,6 @@ static void rtas_stop_self(void) + + BUG_ON(rtas_stop_self_token == RTAS_UNKNOWN_SERVICE); + +- printk("cpu %u (hwid %u) Ready to die...\n", +- smp_processor_id(), hard_smp_processor_id()); +- + rtas_call_unlocked(&args, rtas_stop_self_token, 0, 1, NULL); + + panic("Alas, I survived.\n"); +diff --git a/arch/powerpc/sysdev/xive/common.c b/arch/powerpc/sysdev/xive/common.c +index 5cacb632eb37a..31b657c377353 100644 +--- a/arch/powerpc/sysdev/xive/common.c ++++ b/arch/powerpc/sysdev/xive/common.c +@@ -1341,17 +1341,14 @@ static int xive_prepare_cpu(unsigned int cpu) + + xc = per_cpu(xive_cpu, cpu); + if (!xc) { +- struct device_node *np; +- + xc = kzalloc_node(sizeof(struct xive_cpu), + GFP_KERNEL, cpu_to_node(cpu)); + if (!xc) + return -ENOMEM; +- np = of_get_cpu_node(cpu, NULL); +- if (np) +- xc->chip_id = of_get_ibm_chip_id(np); +- of_node_put(np); + xc->hw_ipi = XIVE_BAD_IRQ; ++ xc->chip_id = XIVE_INVALID_CHIP_ID; ++ if (xive_ops->prepare_cpu) ++ xive_ops->prepare_cpu(cpu, xc); + + per_cpu(xive_cpu, cpu) = xc; + } +diff --git a/arch/powerpc/sysdev/xive/native.c b/arch/powerpc/sysdev/xive/native.c +index 05a800a3104ed..57e3f15404354 100644 +--- a/arch/powerpc/sysdev/xive/native.c ++++ b/arch/powerpc/sysdev/xive/native.c +@@ -380,6 +380,11 @@ static void xive_native_update_pending(struct xive_cpu *xc) + } + } + ++static void xive_native_prepare_cpu(unsigned int cpu, struct xive_cpu *xc) ++{ ++ xc->chip_id = cpu_to_chip_id(cpu); ++} ++ + static void xive_native_setup_cpu(unsigned int cpu, struct xive_cpu *xc) + { + s64 rc; +@@ -462,6 +467,7 @@ static const struct xive_ops xive_native_ops = { + .match = xive_native_match, + .shutdown = xive_native_shutdown, + .update_pending = xive_native_update_pending, ++ .prepare_cpu = xive_native_prepare_cpu, + .setup_cpu = xive_native_setup_cpu, + .teardown_cpu = xive_native_teardown_cpu, + .sync_source = xive_native_sync_source, +diff --git a/arch/powerpc/sysdev/xive/xive-internal.h b/arch/powerpc/sysdev/xive/xive-internal.h +index 9cf57c722faa3..6478be19b4d36 100644 +--- a/arch/powerpc/sysdev/xive/xive-internal.h ++++ b/arch/powerpc/sysdev/xive/xive-internal.h +@@ -46,6 +46,7 @@ struct xive_ops { + u32 *sw_irq); + int (*setup_queue)(unsigned int cpu, struct xive_cpu *xc, u8 prio); + void (*cleanup_queue)(unsigned int cpu, struct xive_cpu *xc, u8 prio); ++ void (*prepare_cpu)(unsigned int cpu, struct xive_cpu *xc); + void (*setup_cpu)(unsigned int cpu, struct xive_cpu *xc); + void 
(*teardown_cpu)(unsigned int cpu, struct xive_cpu *xc); + bool (*match)(struct device_node *np); +diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig +index 4515a10c5d222..d9522fc35ca5a 100644 +--- a/arch/riscv/Kconfig ++++ b/arch/riscv/Kconfig +@@ -227,7 +227,7 @@ config ARCH_RV64I + bool "RV64I" + select 64BIT + select ARCH_SUPPORTS_INT128 if CC_HAS_INT128 && GCC_VERSION >= 50000 +- select HAVE_DYNAMIC_FTRACE if MMU ++ select HAVE_DYNAMIC_FTRACE if MMU && $(cc-option,-fpatchable-function-entry=8) + select HAVE_DYNAMIC_FTRACE_WITH_REGS if HAVE_DYNAMIC_FTRACE + select HAVE_FTRACE_MCOUNT_RECORD + select HAVE_FUNCTION_GRAPH_TRACER +diff --git a/arch/riscv/kernel/smp.c b/arch/riscv/kernel/smp.c +index ea028d9e0d242..d44567490d911 100644 +--- a/arch/riscv/kernel/smp.c ++++ b/arch/riscv/kernel/smp.c +@@ -54,7 +54,7 @@ int riscv_hartid_to_cpuid(int hartid) + return i; + + pr_err("Couldn't find cpu id for hartid [%d]\n", hartid); +- return i; ++ return -ENOENT; + } + + void riscv_cpuid_to_hartid_mask(const struct cpumask *in, struct cpumask *out) +diff --git a/arch/sh/kernel/traps.c b/arch/sh/kernel/traps.c +index f5beecdac6938..e76b221570999 100644 +--- a/arch/sh/kernel/traps.c ++++ b/arch/sh/kernel/traps.c +@@ -180,7 +180,6 @@ static inline void arch_ftrace_nmi_exit(void) { } + + BUILD_TRAP_HANDLER(nmi) + { +- unsigned int cpu = smp_processor_id(); + TRAP_HANDLER_DECL; + + arch_ftrace_nmi_enter(); +diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h +index 5eb3bdf36a419..06b0789d61b91 100644 +--- a/arch/x86/include/asm/idtentry.h ++++ b/arch/x86/include/asm/idtentry.h +@@ -588,6 +588,21 @@ DECLARE_IDTENTRY_RAW(X86_TRAP_MC, xenpv_exc_machine_check); + #endif + + /* NMI */ ++ ++#if defined(CONFIG_X86_64) && IS_ENABLED(CONFIG_KVM_INTEL) ++/* ++ * Special NOIST entry point for VMX which invokes this on the kernel ++ * stack. asm_exc_nmi() requires an IST to work correctly vs. the NMI ++ * 'executing' marker. ++ * ++ * On 32bit this just uses the regular NMI entry point because 32-bit does ++ * not have ISTs. 
++ */ ++DECLARE_IDTENTRY(X86_TRAP_NMI, exc_nmi_noist); ++#else ++#define asm_exc_nmi_noist asm_exc_nmi ++#endif ++ + DECLARE_IDTENTRY_NMI(X86_TRAP_NMI, exc_nmi); + #ifdef CONFIG_XEN_PV + DECLARE_IDTENTRY_RAW(X86_TRAP_NMI, xenpv_exc_nmi); +diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h +index 3768819693e5c..eec2dcca2f390 100644 +--- a/arch/x86/include/asm/kvm_host.h ++++ b/arch/x86/include/asm/kvm_host.h +@@ -1753,6 +1753,7 @@ int kvm_pv_send_ipi(struct kvm *kvm, unsigned long ipi_bitmap_low, + unsigned long icr, int op_64_bit); + + void kvm_define_user_return_msr(unsigned index, u32 msr); ++int kvm_probe_user_return_msr(u32 msr); + int kvm_set_user_return_msr(unsigned index, u64 val, u64 mask); + + u64 kvm_scale_tsc(struct kvm_vcpu *vcpu, u64 tsc); +diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h +index f1b9ed5efaa90..908bcaea13617 100644 +--- a/arch/x86/include/asm/processor.h ++++ b/arch/x86/include/asm/processor.h +@@ -804,8 +804,10 @@ DECLARE_PER_CPU(u64, msr_misc_features_shadow); + + #ifdef CONFIG_CPU_SUP_AMD + extern u32 amd_get_nodes_per_socket(void); ++extern u32 amd_get_highest_perf(void); + #else + static inline u32 amd_get_nodes_per_socket(void) { return 0; } ++static inline u32 amd_get_highest_perf(void) { return 0; } + #endif + + static inline uint32_t hypervisor_cpuid_base(const char *sig, uint32_t leaves) +diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c +index 347a956f71ca0..eedb2b320946f 100644 +--- a/arch/x86/kernel/cpu/amd.c ++++ b/arch/x86/kernel/cpu/amd.c +@@ -1170,3 +1170,19 @@ void set_dr_addr_mask(unsigned long mask, int dr) + break; + } + } ++ ++u32 amd_get_highest_perf(void) ++{ ++ struct cpuinfo_x86 *c = &boot_cpu_data; ++ ++ if (c->x86 == 0x17 && ((c->x86_model >= 0x30 && c->x86_model < 0x40) || ++ (c->x86_model >= 0x70 && c->x86_model < 0x80))) ++ return 166; ++ ++ if (c->x86 == 0x19 && ((c->x86_model >= 0x20 && c->x86_model < 0x30) || ++ (c->x86_model >= 0x40 && c->x86_model < 0x70))) ++ return 166; ++ ++ return 255; ++} ++EXPORT_SYMBOL_GPL(amd_get_highest_perf); +diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c +index bf250a339655f..2ef961cf4cfc5 100644 +--- a/arch/x86/kernel/nmi.c ++++ b/arch/x86/kernel/nmi.c +@@ -524,6 +524,16 @@ nmi_restart: + mds_user_clear_cpu_buffers(); + } + ++#if defined(CONFIG_X86_64) && IS_ENABLED(CONFIG_KVM_INTEL) ++DEFINE_IDTENTRY_RAW(exc_nmi_noist) ++{ ++ exc_nmi(regs); ++} ++#endif ++#if IS_MODULE(CONFIG_KVM_INTEL) ++EXPORT_SYMBOL_GPL(asm_exc_nmi_noist); ++#endif ++ + void stop_nmi(void) + { + ignore_nmis++; +diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c +index 6b08d1eb173fd..363b36bbd791a 100644 +--- a/arch/x86/kernel/smpboot.c ++++ b/arch/x86/kernel/smpboot.c +@@ -2046,7 +2046,7 @@ static bool amd_set_max_freq_ratio(void) + return false; + } + +- highest_perf = perf_caps.highest_perf; ++ highest_perf = amd_get_highest_perf(); + nominal_perf = perf_caps.nominal_perf; + + if (!highest_perf || !nominal_perf) { +diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c +index 6bd2f8b830e49..62f795352c024 100644 +--- a/arch/x86/kvm/cpuid.c ++++ b/arch/x86/kvm/cpuid.c +@@ -589,7 +589,8 @@ static int __do_cpuid_func_emulated(struct kvm_cpuid_array *array, u32 func) + case 7: + entry->flags |= KVM_CPUID_FLAG_SIGNIFCANT_INDEX; + entry->eax = 0; +- entry->ecx = F(RDPID); ++ if (kvm_cpu_cap_has(X86_FEATURE_RDTSCP)) ++ entry->ecx = F(RDPID); + ++array->nent; + default: + break; +diff --git a/arch/x86/kvm/emulate.c 
b/arch/x86/kvm/emulate.c +index abd9a4db11a88..8fc71e70857d0 100644 +--- a/arch/x86/kvm/emulate.c ++++ b/arch/x86/kvm/emulate.c +@@ -4502,7 +4502,7 @@ static const struct opcode group8[] = { + * from the register case of group9. + */ + static const struct gprefix pfx_0f_c7_7 = { +- N, N, N, II(DstMem | ModRM | Op3264 | EmulateOnUD, em_rdpid, rdtscp), ++ N, N, N, II(DstMem | ModRM | Op3264 | EmulateOnUD, em_rdpid, rdpid), + }; + + +diff --git a/arch/x86/kvm/kvm_emulate.h b/arch/x86/kvm/kvm_emulate.h +index 0d359115429ad..f016838faedd6 100644 +--- a/arch/x86/kvm/kvm_emulate.h ++++ b/arch/x86/kvm/kvm_emulate.h +@@ -468,6 +468,7 @@ enum x86_intercept { + x86_intercept_clgi, + x86_intercept_skinit, + x86_intercept_rdtscp, ++ x86_intercept_rdpid, + x86_intercept_icebp, + x86_intercept_wbinvd, + x86_intercept_monitor, +diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c +index 49a839d0567a5..fa023f3feb25d 100644 +--- a/arch/x86/kvm/lapic.c ++++ b/arch/x86/kvm/lapic.c +@@ -1913,8 +1913,8 @@ void kvm_lapic_expired_hv_timer(struct kvm_vcpu *vcpu) + if (!apic->lapic_timer.hv_timer_in_use) + goto out; + WARN_ON(rcuwait_active(&vcpu->wait)); +- cancel_hv_timer(apic); + apic_timer_expired(apic, false); ++ cancel_hv_timer(apic); + + if (apic_lvtt_period(apic) && apic->lapic_timer.period) { + advance_periodic_target_expiration(apic); +diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c +index 019130011d0fc..dbc6214d69def 100644 +--- a/arch/x86/kvm/svm/sev.c ++++ b/arch/x86/kvm/svm/sev.c +@@ -1668,7 +1668,7 @@ vmgexit_err: + return -EINVAL; + } + +-static void pre_sev_es_run(struct vcpu_svm *svm) ++void sev_es_unmap_ghcb(struct vcpu_svm *svm) + { + if (!svm->ghcb) + return; +@@ -1704,9 +1704,6 @@ void pre_sev_run(struct vcpu_svm *svm, int cpu) + struct svm_cpu_data *sd = per_cpu(svm_data, cpu); + int asid = sev_get_asid(svm->vcpu.kvm); + +- /* Perform any SEV-ES pre-run actions */ +- pre_sev_es_run(svm); +- + /* Assign the asid allocated with this SEV guest */ + svm->asid = asid; + +@@ -2106,5 +2103,8 @@ void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector) + * the guest will set the CS and RIP. Set SW_EXIT_INFO_2 to a + * non-zero value. 
+ */ ++ if (!svm->ghcb) ++ return; ++ + ghcb_set_sw_exit_info_2(svm->ghcb, 1); + } +diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c +index 309725151313d..48ee3deab64b1 100644 +--- a/arch/x86/kvm/svm/svm.c ++++ b/arch/x86/kvm/svm/svm.c +@@ -1416,6 +1416,9 @@ static void svm_prepare_guest_switch(struct kvm_vcpu *vcpu) + struct svm_cpu_data *sd = per_cpu(svm_data, vcpu->cpu); + unsigned int i; + ++ if (sev_es_guest(vcpu->kvm)) ++ sev_es_unmap_ghcb(svm); ++ + if (svm->guest_state_loaded) + return; + +@@ -2738,7 +2741,8 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) + if (!boot_cpu_has(X86_FEATURE_RDTSCP)) + return 1; + if (!msr_info->host_initiated && +- !guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP)) ++ !guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP) && ++ !guest_cpuid_has(vcpu, X86_FEATURE_RDPID)) + return 1; + msr_info->data = svm->tsc_aux; + break; +@@ -2811,7 +2815,7 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) + static int svm_complete_emulated_msr(struct kvm_vcpu *vcpu, int err) + { + struct vcpu_svm *svm = to_svm(vcpu); +- if (!sev_es_guest(svm->vcpu.kvm) || !err) ++ if (!err || !sev_es_guest(vcpu->kvm) || WARN_ON_ONCE(!svm->ghcb)) + return kvm_complete_insn_gp(&svm->vcpu, err); + + ghcb_set_sw_exit_info_1(svm->ghcb, 1); +@@ -2949,7 +2953,8 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr) + return 1; + + if (!msr->host_initiated && +- !guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP)) ++ !guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP) && ++ !guest_cpuid_has(vcpu, X86_FEATURE_RDPID)) + return 1; + + /* +diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h +index 39e071fdab0ca..98da0b91f273b 100644 +--- a/arch/x86/kvm/svm/svm.h ++++ b/arch/x86/kvm/svm/svm.h +@@ -571,6 +571,7 @@ void sev_es_init_vmcb(struct vcpu_svm *svm); + void sev_es_create_vcpu(struct vcpu_svm *svm); + void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector); + void sev_es_prepare_guest_switch(struct vcpu_svm *svm, unsigned int cpu); ++void sev_es_unmap_ghcb(struct vcpu_svm *svm); + + /* vmenter.S */ + +diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c +index 1727057c53130..4ba2a43e188b5 100644 +--- a/arch/x86/kvm/vmx/nested.c ++++ b/arch/x86/kvm/vmx/nested.c +@@ -3100,15 +3100,8 @@ static bool nested_get_evmcs_page(struct kvm_vcpu *vcpu) + nested_vmx_handle_enlightened_vmptrld(vcpu, false); + + if (evmptrld_status == EVMPTRLD_VMFAIL || +- evmptrld_status == EVMPTRLD_ERROR) { +- pr_debug_ratelimited("%s: enlightened vmptrld failed\n", +- __func__); +- vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR; +- vcpu->run->internal.suberror = +- KVM_INTERNAL_ERROR_EMULATION; +- vcpu->run->internal.ndata = 0; ++ evmptrld_status == EVMPTRLD_ERROR) + return false; +- } + } + + return true; +@@ -3196,8 +3189,16 @@ static bool nested_get_vmcs12_pages(struct kvm_vcpu *vcpu) + + static bool vmx_get_nested_state_pages(struct kvm_vcpu *vcpu) + { +- if (!nested_get_evmcs_page(vcpu)) ++ if (!nested_get_evmcs_page(vcpu)) { ++ pr_debug_ratelimited("%s: enlightened vmptrld failed\n", ++ __func__); ++ vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR; ++ vcpu->run->internal.suberror = ++ KVM_INTERNAL_ERROR_EMULATION; ++ vcpu->run->internal.ndata = 0; ++ + return false; ++ } + + if (is_guest_mode(vcpu) && !nested_get_vmcs12_pages(vcpu)) + return false; +@@ -4424,7 +4425,15 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason, + /* trying to cancel vmlaunch/vmresume is a bug */ + 
WARN_ON_ONCE(vmx->nested.nested_run_pending); + +- kvm_clear_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu); ++ if (kvm_check_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu)) { ++ /* ++ * KVM_REQ_GET_NESTED_STATE_PAGES is also used to map ++ * Enlightened VMCS after migration and we still need to ++ * do that when something is forcing L2->L1 exit prior to ++ * the first L2 run. ++ */ ++ (void)nested_get_evmcs_page(vcpu); ++ } + + /* Service the TLB flush request for L2 before switching to L1. */ + if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu)) +diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c +index f705e0d9f1618..f68ed9a1abcc9 100644 +--- a/arch/x86/kvm/vmx/vmx.c ++++ b/arch/x86/kvm/vmx/vmx.c +@@ -36,6 +36,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -1733,7 +1734,8 @@ static void setup_msrs(struct vcpu_vmx *vmx) + if (update_transition_efer(vmx)) + vmx_setup_uret_msr(vmx, MSR_EFER); + +- if (guest_cpuid_has(&vmx->vcpu, X86_FEATURE_RDTSCP)) ++ if (guest_cpuid_has(&vmx->vcpu, X86_FEATURE_RDTSCP) || ++ guest_cpuid_has(&vmx->vcpu, X86_FEATURE_RDPID)) + vmx_setup_uret_msr(vmx, MSR_TSC_AUX); + + vmx_setup_uret_msr(vmx, MSR_IA32_TSX_CTRL); +@@ -1932,7 +1934,8 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) + break; + case MSR_TSC_AUX: + if (!msr_info->host_initiated && +- !guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP)) ++ !guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP) && ++ !guest_cpuid_has(vcpu, X86_FEATURE_RDPID)) + return 1; + goto find_uret_msr; + case MSR_IA32_DEBUGCTLMSR: +@@ -2229,7 +2232,8 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) + break; + case MSR_TSC_AUX: + if (!msr_info->host_initiated && +- !guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP)) ++ !guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP) && ++ !guest_cpuid_has(vcpu, X86_FEATURE_RDPID)) + return 1; + /* Check reserved bit, higher 32 bits should be zero */ + if ((data >> 32) != 0) +@@ -4301,7 +4305,23 @@ static void vmx_compute_secondary_exec_control(struct vcpu_vmx *vmx) + xsaves_enabled, false); + } + +- vmx_adjust_sec_exec_feature(vmx, &exec_control, rdtscp, RDTSCP); ++ /* ++ * RDPID is also gated by ENABLE_RDTSCP, turn on the control if either ++ * feature is exposed to the guest. This creates a virtualization hole ++ * if both are supported in hardware but only one is exposed to the ++ * guest, but letting the guest execute RDTSCP or RDPID when either one ++ * is advertised is preferable to emulating the advertised instruction ++ * in KVM on #UD, and obviously better than incorrectly injecting #UD. 
++ */ ++ if (cpu_has_vmx_rdtscp()) { ++ bool rdpid_or_rdtscp_enabled = ++ guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP) || ++ guest_cpuid_has(vcpu, X86_FEATURE_RDPID); ++ ++ vmx_adjust_secondary_exec_control(vmx, &exec_control, ++ SECONDARY_EXEC_ENABLE_RDTSCP, ++ rdpid_or_rdtscp_enabled, false); ++ } + vmx_adjust_sec_exec_feature(vmx, &exec_control, invpcid, INVPCID); + + vmx_adjust_sec_exec_exiting(vmx, &exec_control, rdrand, RDRAND); +@@ -6394,18 +6414,17 @@ static void vmx_apicv_post_state_restore(struct kvm_vcpu *vcpu) + + void vmx_do_interrupt_nmi_irqoff(unsigned long entry); + +-static void handle_interrupt_nmi_irqoff(struct kvm_vcpu *vcpu, u32 intr_info) ++static void handle_interrupt_nmi_irqoff(struct kvm_vcpu *vcpu, ++ unsigned long entry) + { +- unsigned int vector = intr_info & INTR_INFO_VECTOR_MASK; +- gate_desc *desc = (gate_desc *)host_idt_base + vector; +- + kvm_before_interrupt(vcpu); +- vmx_do_interrupt_nmi_irqoff(gate_offset(desc)); ++ vmx_do_interrupt_nmi_irqoff(entry); + kvm_after_interrupt(vcpu); + } + + static void handle_exception_nmi_irqoff(struct vcpu_vmx *vmx) + { ++ const unsigned long nmi_entry = (unsigned long)asm_exc_nmi_noist; + u32 intr_info = vmx_get_intr_info(&vmx->vcpu); + + /* if exit due to PF check for async PF */ +@@ -6416,18 +6435,20 @@ static void handle_exception_nmi_irqoff(struct vcpu_vmx *vmx) + kvm_machine_check(); + /* We need to handle NMIs before interrupts are enabled */ + else if (is_nmi(intr_info)) +- handle_interrupt_nmi_irqoff(&vmx->vcpu, intr_info); ++ handle_interrupt_nmi_irqoff(&vmx->vcpu, nmi_entry); + } + + static void handle_external_interrupt_irqoff(struct kvm_vcpu *vcpu) + { + u32 intr_info = vmx_get_intr_info(vcpu); ++ unsigned int vector = intr_info & INTR_INFO_VECTOR_MASK; ++ gate_desc *desc = (gate_desc *)host_idt_base + vector; + + if (WARN_ONCE(!is_external_intr(intr_info), + "KVM: unexpected VM-Exit interrupt info: 0x%x", intr_info)) + return; + +- handle_interrupt_nmi_irqoff(vcpu, intr_info); ++ handle_interrupt_nmi_irqoff(vcpu, gate_offset(desc)); + } + + static void vmx_handle_exit_irqoff(struct kvm_vcpu *vcpu) +@@ -6893,12 +6914,9 @@ static int vmx_create_vcpu(struct kvm_vcpu *vcpu) + + for (i = 0; i < ARRAY_SIZE(vmx_uret_msrs_list); ++i) { + u32 index = vmx_uret_msrs_list[i]; +- u32 data_low, data_high; + int j = vmx->nr_uret_msrs; + +- if (rdmsr_safe(index, &data_low, &data_high) < 0) +- continue; +- if (wrmsr_safe(index, data_low, data_high) < 0) ++ if (kvm_probe_user_return_msr(index)) + continue; + + vmx->guest_uret_msrs[j].slot = i; +@@ -7331,9 +7349,11 @@ static __init void vmx_set_cpu_caps(void) + if (!cpu_has_vmx_xsaves()) + kvm_cpu_cap_clear(X86_FEATURE_XSAVES); + +- /* CPUID 0x80000001 */ +- if (!cpu_has_vmx_rdtscp()) ++ /* CPUID 0x80000001 and 0x7 (RDPID) */ ++ if (!cpu_has_vmx_rdtscp()) { + kvm_cpu_cap_clear(X86_FEATURE_RDTSCP); ++ kvm_cpu_cap_clear(X86_FEATURE_RDPID); ++ } + + if (cpu_has_vmx_waitpkg()) + kvm_cpu_cap_check_and_set(X86_FEATURE_WAITPKG); +@@ -7389,8 +7409,9 @@ static int vmx_check_intercept(struct kvm_vcpu *vcpu, + /* + * RDPID causes #UD if disabled through secondary execution controls. + * Because it is marked as EmulateOnUD, we need to intercept it here. ++ * Note, RDPID is hidden behind ENABLE_RDTSCP. 
+ */ +- case x86_intercept_rdtscp: ++ case x86_intercept_rdpid: + if (!nested_cpu_has2(vmcs12, SECONDARY_EXEC_ENABLE_RDTSCP)) { + exception->vector = UD_VECTOR; + exception->error_code_valid = false; +diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c +index 3406ff421c1a3..87311d39f9145 100644 +--- a/arch/x86/kvm/x86.c ++++ b/arch/x86/kvm/x86.c +@@ -335,6 +335,22 @@ static void kvm_on_user_return(struct user_return_notifier *urn) + } + } + ++int kvm_probe_user_return_msr(u32 msr) ++{ ++ u64 val; ++ int ret; ++ ++ preempt_disable(); ++ ret = rdmsrl_safe(msr, &val); ++ if (ret) ++ goto out; ++ ret = wrmsrl_safe(msr, val); ++out: ++ preempt_enable(); ++ return ret; ++} ++EXPORT_SYMBOL_GPL(kvm_probe_user_return_msr); ++ + void kvm_define_user_return_msr(unsigned slot, u32 msr) + { + BUG_ON(slot >= KVM_MAX_NR_USER_RETURN_MSRS); +@@ -5864,7 +5880,8 @@ static void kvm_init_msr_list(void) + continue; + break; + case MSR_TSC_AUX: +- if (!kvm_cpu_cap_has(X86_FEATURE_RDTSCP)) ++ if (!kvm_cpu_cap_has(X86_FEATURE_RDTSCP) && ++ !kvm_cpu_cap_has(X86_FEATURE_RDPID)) + continue; + break; + case MSR_IA32_UMWAIT_CONTROL: +@@ -7964,6 +7981,18 @@ static void pvclock_gtod_update_fn(struct work_struct *work) + + static DECLARE_WORK(pvclock_gtod_work, pvclock_gtod_update_fn); + ++/* ++ * Indirection to move queue_work() out of the tk_core.seq write held ++ * region to prevent possible deadlocks against time accessors which ++ * are invoked with work related locks held. ++ */ ++static void pvclock_irq_work_fn(struct irq_work *w) ++{ ++ queue_work(system_long_wq, &pvclock_gtod_work); ++} ++ ++static DEFINE_IRQ_WORK(pvclock_irq_work, pvclock_irq_work_fn); ++ + /* + * Notification about pvclock gtod data update. + */ +@@ -7975,13 +8004,14 @@ static int pvclock_gtod_notify(struct notifier_block *nb, unsigned long unused, + + update_pvclock_gtod(tk); + +- /* disable master clock if host does not trust, or does not +- * use, TSC based clocksource. ++ /* ++ * Disable master clock if host does not trust, or does not use, ++ * TSC based clocksource. Delegate queue_work() to irq_work as ++ * this is invoked with tk_core.seq write held. 
+ */ + if (!gtod_is_based_on_tsc(gtod->clock.vclock_mode) && + atomic_read(&kvm_guest_has_master_clock) != 0) +- queue_work(system_long_wq, &pvclock_gtod_work); +- ++ irq_work_queue(&pvclock_irq_work); + return 0; + } + +@@ -8096,6 +8126,8 @@ void kvm_arch_exit(void) + cpuhp_remove_state_nocalls(CPUHP_AP_X86_KVM_CLK_ONLINE); + #ifdef CONFIG_X86_64 + pvclock_gtod_unregister_notifier(&pvclock_gtod_notifier); ++ irq_work_sync(&pvclock_irq_work); ++ cancel_work_sync(&pvclock_gtod_work); + #endif + kvm_x86_ops.hardware_enable = NULL; + kvm_mmu_module_exit(); +diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c +index 20ba5db0f61cd..bc319931d2b36 100644 +--- a/block/bfq-iosched.c ++++ b/block/bfq-iosched.c +@@ -2263,10 +2263,9 @@ static void bfq_remove_request(struct request_queue *q, + + } + +-static bool bfq_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio, ++static bool bfq_bio_merge(struct request_queue *q, struct bio *bio, + unsigned int nr_segs) + { +- struct request_queue *q = hctx->queue; + struct bfq_data *bfqd = q->elevator->elevator_data; + struct request *free = NULL; + /* +diff --git a/block/blk-iocost.c b/block/blk-iocost.c +index 98d656bdb42b7..4fbc875f7cb29 100644 +--- a/block/blk-iocost.c ++++ b/block/blk-iocost.c +@@ -1073,7 +1073,17 @@ static void __propagate_weights(struct ioc_gq *iocg, u32 active, u32 inuse, + + lockdep_assert_held(&ioc->lock); + +- inuse = clamp_t(u32, inuse, 1, active); ++ /* ++ * For an active leaf node, its inuse shouldn't be zero or exceed ++ * @active. An active internal node's inuse is solely determined by the ++ * inuse to active ratio of its children regardless of @inuse. ++ */ ++ if (list_empty(&iocg->active_list) && iocg->child_active_sum) { ++ inuse = DIV64_U64_ROUND_UP(active * iocg->child_inuse_sum, ++ iocg->child_active_sum); ++ } else { ++ inuse = clamp_t(u32, inuse, 1, active); ++ } + + iocg->last_inuse = iocg->inuse; + if (save) +@@ -1090,7 +1100,7 @@ static void __propagate_weights(struct ioc_gq *iocg, u32 active, u32 inuse, + /* update the level sums */ + parent->child_active_sum += (s32)(active - child->active); + parent->child_inuse_sum += (s32)(inuse - child->inuse); +- /* apply the udpates */ ++ /* apply the updates */ + child->active = active; + child->inuse = inuse; + +diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c +index e1e997af89a0d..fdeb9773b55cf 100644 +--- a/block/blk-mq-sched.c ++++ b/block/blk-mq-sched.c +@@ -348,14 +348,16 @@ bool __blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio, + unsigned int nr_segs) + { + struct elevator_queue *e = q->elevator; +- struct blk_mq_ctx *ctx = blk_mq_get_ctx(q); +- struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, bio->bi_opf, ctx); ++ struct blk_mq_ctx *ctx; ++ struct blk_mq_hw_ctx *hctx; + bool ret = false; + enum hctx_type type; + + if (e && e->type->ops.bio_merge) +- return e->type->ops.bio_merge(hctx, bio, nr_segs); ++ return e->type->ops.bio_merge(q, bio, nr_segs); + ++ ctx = blk_mq_get_ctx(q); ++ hctx = blk_mq_map_queue(q, bio->bi_opf, ctx); + type = hctx->type; + if (!(hctx->flags & BLK_MQ_F_SHOULD_MERGE) || + list_empty_careful(&ctx->rq_lists[type])) +diff --git a/block/blk-mq.c b/block/blk-mq.c +index d4d7c1caa4396..0e120547ccb72 100644 +--- a/block/blk-mq.c ++++ b/block/blk-mq.c +@@ -2216,8 +2216,9 @@ blk_qc_t blk_mq_submit_bio(struct bio *bio) + /* Bypass scheduler for flush requests */ + blk_insert_flush(rq); + blk_mq_run_hw_queue(data.hctx, true); +- } else if (plug && (q->nr_hw_queues == 1 || q->mq_ops->commit_rqs || +- !blk_queue_nonrot(q))) 
{ ++ } else if (plug && (q->nr_hw_queues == 1 || ++ blk_mq_is_sbitmap_shared(rq->mq_hctx->flags) || ++ q->mq_ops->commit_rqs || !blk_queue_nonrot(q))) { + /* + * Use plugging if we have a ->commit_rqs() hook as well, as + * we know the driver uses bd->last in a smart fashion. +@@ -3269,10 +3270,12 @@ EXPORT_SYMBOL(blk_mq_init_allocated_queue); + /* tags can _not_ be used after returning from blk_mq_exit_queue */ + void blk_mq_exit_queue(struct request_queue *q) + { +- struct blk_mq_tag_set *set = q->tag_set; ++ struct blk_mq_tag_set *set = q->tag_set; + +- blk_mq_del_queue_tag_set(q); ++ /* Checks hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED. */ + blk_mq_exit_hw_queues(q, set, set->nr_hw_queues); ++ /* May clear BLK_MQ_F_TAG_QUEUE_SHARED in hctx->flags. */ ++ blk_mq_del_queue_tag_set(q); + } + + static int __blk_mq_alloc_rq_maps(struct blk_mq_tag_set *set) +diff --git a/block/kyber-iosched.c b/block/kyber-iosched.c +index 33d34d69cade2..79b69d7046d66 100644 +--- a/block/kyber-iosched.c ++++ b/block/kyber-iosched.c +@@ -560,11 +560,12 @@ static void kyber_limit_depth(unsigned int op, struct blk_mq_alloc_data *data) + } + } + +-static bool kyber_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio, ++static bool kyber_bio_merge(struct request_queue *q, struct bio *bio, + unsigned int nr_segs) + { ++ struct blk_mq_ctx *ctx = blk_mq_get_ctx(q); ++ struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, bio->bi_opf, ctx); + struct kyber_hctx_data *khd = hctx->sched_data; +- struct blk_mq_ctx *ctx = blk_mq_get_ctx(hctx->queue); + struct kyber_ctx_queue *kcq = &khd->kcqs[ctx->index_hw[hctx->type]]; + unsigned int sched_domain = kyber_sched_domain(bio->bi_opf); + struct list_head *rq_list = &kcq->rq_list[sched_domain]; +diff --git a/block/mq-deadline.c b/block/mq-deadline.c +index f3631a2874667..3aabcd2a7893c 100644 +--- a/block/mq-deadline.c ++++ b/block/mq-deadline.c +@@ -461,10 +461,9 @@ static int dd_request_merge(struct request_queue *q, struct request **rq, + return ELEVATOR_NO_MERGE; + } + +-static bool dd_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio, ++static bool dd_bio_merge(struct request_queue *q, struct bio *bio, + unsigned int nr_segs) + { +- struct request_queue *q = hctx->queue; + struct deadline_data *dd = q->elevator->elevator_data; + struct request *free = NULL; + bool ret; +diff --git a/drivers/acpi/device_pm.c b/drivers/acpi/device_pm.c +index 096153761ebc3..58876248b1921 100644 +--- a/drivers/acpi/device_pm.c ++++ b/drivers/acpi/device_pm.c +@@ -1310,6 +1310,7 @@ int acpi_dev_pm_attach(struct device *dev, bool power_on) + {"PNP0C0B", }, /* Generic ACPI fan */ + {"INT3404", }, /* Fan */ + {"INTC1044", }, /* Fan for Tiger Lake generation */ ++ {"INTC1048", }, /* Fan for Alder Lake generation */ + {} + }; + struct acpi_device *adev = ACPI_COMPANION(dev); +diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c +index 6efe7edd7b1ea..345777bf7af9c 100644 +--- a/drivers/acpi/scan.c ++++ b/drivers/acpi/scan.c +@@ -701,6 +701,7 @@ int acpi_device_add(struct acpi_device *device, + + result = acpi_device_set_name(device, acpi_device_bus_id); + if (result) { ++ kfree_const(acpi_device_bus_id->bus_id); + kfree(acpi_device_bus_id); + goto err_unlock; + } +diff --git a/drivers/ata/ahci_brcm.c b/drivers/ata/ahci_brcm.c +index 5b32df5d33adc..6e9c5ade4c2ea 100644 +--- a/drivers/ata/ahci_brcm.c ++++ b/drivers/ata/ahci_brcm.c +@@ -86,7 +86,8 @@ struct brcm_ahci_priv { + u32 port_mask; + u32 quirks; + enum brcm_ahci_version version; +- struct reset_control *rcdev; ++ struct reset_control 
*rcdev_rescal; ++ struct reset_control *rcdev_ahci; + }; + + static inline u32 brcm_sata_readreg(void __iomem *addr) +@@ -352,8 +353,8 @@ static int brcm_ahci_suspend(struct device *dev) + else + ret = 0; + +- if (priv->version != BRCM_SATA_BCM7216) +- reset_control_assert(priv->rcdev); ++ reset_control_assert(priv->rcdev_ahci); ++ reset_control_rearm(priv->rcdev_rescal); + + return ret; + } +@@ -365,10 +366,10 @@ static int __maybe_unused brcm_ahci_resume(struct device *dev) + struct brcm_ahci_priv *priv = hpriv->plat_data; + int ret = 0; + +- if (priv->version == BRCM_SATA_BCM7216) +- ret = reset_control_reset(priv->rcdev); +- else +- ret = reset_control_deassert(priv->rcdev); ++ ret = reset_control_deassert(priv->rcdev_ahci); ++ if (ret) ++ return ret; ++ ret = reset_control_reset(priv->rcdev_rescal); + if (ret) + return ret; + +@@ -434,7 +435,6 @@ static int brcm_ahci_probe(struct platform_device *pdev) + { + const struct of_device_id *of_id; + struct device *dev = &pdev->dev; +- const char *reset_name = NULL; + struct brcm_ahci_priv *priv; + struct ahci_host_priv *hpriv; + struct resource *res; +@@ -456,15 +456,15 @@ static int brcm_ahci_probe(struct platform_device *pdev) + if (IS_ERR(priv->top_ctrl)) + return PTR_ERR(priv->top_ctrl); + +- /* Reset is optional depending on platform and named differently */ +- if (priv->version == BRCM_SATA_BCM7216) +- reset_name = "rescal"; +- else +- reset_name = "ahci"; +- +- priv->rcdev = devm_reset_control_get_optional(&pdev->dev, reset_name); +- if (IS_ERR(priv->rcdev)) +- return PTR_ERR(priv->rcdev); ++ if (priv->version == BRCM_SATA_BCM7216) { ++ priv->rcdev_rescal = devm_reset_control_get_optional_shared( ++ &pdev->dev, "rescal"); ++ if (IS_ERR(priv->rcdev_rescal)) ++ return PTR_ERR(priv->rcdev_rescal); ++ } ++ priv->rcdev_ahci = devm_reset_control_get_optional(&pdev->dev, "ahci"); ++ if (IS_ERR(priv->rcdev_ahci)) ++ return PTR_ERR(priv->rcdev_ahci); + + hpriv = ahci_platform_get_resources(pdev, 0); + if (IS_ERR(hpriv)) +@@ -485,10 +485,10 @@ static int brcm_ahci_probe(struct platform_device *pdev) + break; + } + +- if (priv->version == BRCM_SATA_BCM7216) +- ret = reset_control_reset(priv->rcdev); +- else +- ret = reset_control_deassert(priv->rcdev); ++ ret = reset_control_reset(priv->rcdev_rescal); ++ if (ret) ++ return ret; ++ ret = reset_control_deassert(priv->rcdev_ahci); + if (ret) + return ret; + +@@ -539,8 +539,8 @@ out_disable_regulators: + out_disable_clks: + ahci_platform_disable_clks(hpriv); + out_reset: +- if (priv->version != BRCM_SATA_BCM7216) +- reset_control_assert(priv->rcdev); ++ reset_control_assert(priv->rcdev_ahci); ++ reset_control_rearm(priv->rcdev_rescal); + return ret; + } + +diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c +index fe1dad68aee4d..ae011f2bc537d 100644 +--- a/drivers/base/power/runtime.c ++++ b/drivers/base/power/runtime.c +@@ -1637,6 +1637,7 @@ void pm_runtime_init(struct device *dev) + dev->power.request_pending = false; + dev->power.request = RPM_REQ_NONE; + dev->power.deferred_resume = false; ++ dev->power.needs_force_resume = 0; + INIT_WORK(&dev->power.work, pm_runtime_work); + + dev->power.timer_expires = 0; +@@ -1804,10 +1805,12 @@ int pm_runtime_force_suspend(struct device *dev) + * its parent, but set its status to RPM_SUSPENDED anyway in case this + * function will be called again for it in the meantime. 
+ */ +- if (pm_runtime_need_not_resume(dev)) ++ if (pm_runtime_need_not_resume(dev)) { + pm_runtime_set_suspended(dev); +- else ++ } else { + __update_runtime_status(dev, RPM_SUSPENDED); ++ dev->power.needs_force_resume = 1; ++ } + + return 0; + +@@ -1834,7 +1837,7 @@ int pm_runtime_force_resume(struct device *dev) + int (*callback)(struct device *); + int ret = 0; + +- if (!pm_runtime_status_suspended(dev) || pm_runtime_need_not_resume(dev)) ++ if (!pm_runtime_status_suspended(dev) || !dev->power.needs_force_resume) + goto out; + + /* +@@ -1853,6 +1856,7 @@ int pm_runtime_force_resume(struct device *dev) + + pm_runtime_mark_last_busy(dev); + out: ++ dev->power.needs_force_resume = 0; + pm_runtime_enable(dev); + return ret; + } +diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c +index 4ff71b579cfcc..974da561b8e5e 100644 +--- a/drivers/block/nbd.c ++++ b/drivers/block/nbd.c +@@ -1980,7 +1980,8 @@ static void nbd_disconnect_and_put(struct nbd_device *nbd) + * config ref and try to destroy the workqueue from inside the work + * queue. + */ +- flush_workqueue(nbd->recv_workq); ++ if (nbd->recv_workq) ++ flush_workqueue(nbd->recv_workq); + if (test_and_clear_bit(NBD_RT_HAS_CONFIG_REF, + &nbd->config->runtime_flags)) + nbd_config_put(nbd); +diff --git a/drivers/block/rnbd/rnbd-clt.c b/drivers/block/rnbd/rnbd-clt.c +index 45a4700766524..5ab7319ff2ead 100644 +--- a/drivers/block/rnbd/rnbd-clt.c ++++ b/drivers/block/rnbd/rnbd-clt.c +@@ -693,7 +693,11 @@ static void remap_devs(struct rnbd_clt_session *sess) + return; + } + +- rtrs_clt_query(sess->rtrs, &attrs); ++ err = rtrs_clt_query(sess->rtrs, &attrs); ++ if (err) { ++ pr_err("rtrs_clt_query(\"%s\"): %d\n", sess->sessname, err); ++ return; ++ } + mutex_lock(&sess->lock); + sess->max_io_size = attrs.max_io_size; + +@@ -1234,7 +1238,11 @@ find_and_get_or_create_sess(const char *sessname, + err = PTR_ERR(sess->rtrs); + goto wake_up_and_put; + } +- rtrs_clt_query(sess->rtrs, &attrs); ++ ++ err = rtrs_clt_query(sess->rtrs, &attrs); ++ if (err) ++ goto close_rtrs; ++ + sess->max_io_size = attrs.max_io_size; + sess->queue_depth = attrs.queue_depth; + +diff --git a/drivers/block/rnbd/rnbd-clt.h b/drivers/block/rnbd/rnbd-clt.h +index 537d499dad3b0..73d9808405310 100644 +--- a/drivers/block/rnbd/rnbd-clt.h ++++ b/drivers/block/rnbd/rnbd-clt.h +@@ -87,7 +87,7 @@ struct rnbd_clt_session { + DECLARE_BITMAP(cpu_queues_bm, NR_CPUS); + int __percpu *cpu_rr; /* per-cpu var for CPU round-robin */ + atomic_t busy; +- int queue_depth; ++ size_t queue_depth; + u32 max_io_size; + struct blk_mq_tag_set tag_set; + struct mutex lock; /* protects state and devs_list */ +diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c +index 5cbfbd948f676..4a901508e48e7 100644 +--- a/drivers/bluetooth/btusb.c ++++ b/drivers/bluetooth/btusb.c +@@ -399,7 +399,9 @@ static const struct usb_device_id blacklist_table[] = { + + /* MediaTek Bluetooth devices */ + { USB_VENDOR_AND_INTERFACE_INFO(0x0e8d, 0xe0, 0x01, 0x01), +- .driver_info = BTUSB_MEDIATEK }, ++ .driver_info = BTUSB_MEDIATEK | ++ BTUSB_WIDEBAND_SPEECH | ++ BTUSB_VALID_LE_STATES }, + + /* Additional MediaTek MT7615E Bluetooth devices */ + { USB_DEVICE(0x13d3, 0x3560), .driver_info = BTUSB_MEDIATEK}, +diff --git a/drivers/char/tpm/tpm2-cmd.c b/drivers/char/tpm/tpm2-cmd.c +index eff1f12d981ab..c84d239512197 100644 +--- a/drivers/char/tpm/tpm2-cmd.c ++++ b/drivers/char/tpm/tpm2-cmd.c +@@ -656,6 +656,7 @@ int tpm2_get_cc_attrs_tbl(struct tpm_chip *chip) + + if (nr_commands != + be32_to_cpup((__be32 
*)&buf.data[TPM_HEADER_SIZE + 5])) { ++ rc = -EFAULT; + tpm_buf_destroy(&buf); + goto out; + } +diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c +index a2e0395cbe618..55b9d3965ae1b 100644 +--- a/drivers/char/tpm/tpm_tis_core.c ++++ b/drivers/char/tpm/tpm_tis_core.c +@@ -709,16 +709,14 @@ static int tpm_tis_gen_interrupt(struct tpm_chip *chip) + cap_t cap; + int ret; + +- /* TPM 2.0 */ +- if (chip->flags & TPM_CHIP_FLAG_TPM2) +- return tpm2_get_tpm_pt(chip, 0x100, &cap2, desc); +- +- /* TPM 1.2 */ + ret = request_locality(chip, 0); + if (ret < 0) + return ret; + +- ret = tpm1_getcap(chip, TPM_CAP_PROP_TIS_TIMEOUT, &cap, desc, 0); ++ if (chip->flags & TPM_CHIP_FLAG_TPM2) ++ ret = tpm2_get_tpm_pt(chip, 0x100, &cap2, desc); ++ else ++ ret = tpm1_getcap(chip, TPM_CAP_PROP_TIS_TIMEOUT, &cap, desc, 0); + + release_locality(chip, 0); + +@@ -1127,12 +1125,20 @@ int tpm_tis_resume(struct device *dev) + if (ret) + return ret; + +- /* TPM 1.2 requires self-test on resume. This function actually returns ++ /* ++ * TPM 1.2 requires self-test on resume. This function actually returns + * an error code but for unknown reason it isn't handled. + */ +- if (!(chip->flags & TPM_CHIP_FLAG_TPM2)) ++ if (!(chip->flags & TPM_CHIP_FLAG_TPM2)) { ++ ret = request_locality(chip, 0); ++ if (ret < 0) ++ return ret; ++ + tpm1_do_selftest(chip); + ++ release_locality(chip, 0); ++ } ++ + return 0; + } + EXPORT_SYMBOL_GPL(tpm_tis_resume); +diff --git a/drivers/clk/samsung/clk-exynos7.c b/drivers/clk/samsung/clk-exynos7.c +index 87ee1bad9a9a8..4a5d2a914bd66 100644 +--- a/drivers/clk/samsung/clk-exynos7.c ++++ b/drivers/clk/samsung/clk-exynos7.c +@@ -537,8 +537,13 @@ static const struct samsung_gate_clock top1_gate_clks[] __initconst = { + GATE(CLK_ACLK_FSYS0_200, "aclk_fsys0_200", "dout_aclk_fsys0_200", + ENABLE_ACLK_TOP13, 28, CLK_SET_RATE_PARENT | + CLK_IS_CRITICAL, 0), ++ /* ++ * This clock is required for the CMU_FSYS1 registers access, keep it ++ * enabled permanently until proper runtime PM support is added. 
++ */ + GATE(CLK_ACLK_FSYS1_200, "aclk_fsys1_200", "dout_aclk_fsys1_200", +- ENABLE_ACLK_TOP13, 24, CLK_SET_RATE_PARENT, 0), ++ ENABLE_ACLK_TOP13, 24, CLK_SET_RATE_PARENT | ++ CLK_IS_CRITICAL, 0), + + GATE(CLK_SCLK_PHY_FSYS1_26M, "sclk_phy_fsys1_26m", + "dout_sclk_phy_fsys1_26m", ENABLE_SCLK_TOP1_FSYS11, +diff --git a/drivers/clocksource/timer-ti-dm-systimer.c b/drivers/clocksource/timer-ti-dm-systimer.c +index 3fae9ebb58b83..b6f97960d8ee0 100644 +--- a/drivers/clocksource/timer-ti-dm-systimer.c ++++ b/drivers/clocksource/timer-ti-dm-systimer.c +@@ -2,6 +2,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -530,17 +531,17 @@ static void omap_clockevent_unidle(struct clock_event_device *evt) + writel_relaxed(OMAP_TIMER_INT_OVERFLOW, t->base + t->wakeup); + } + +-static int __init dmtimer_clockevent_init(struct device_node *np) ++static int __init dmtimer_clkevt_init_common(struct dmtimer_clockevent *clkevt, ++ struct device_node *np, ++ unsigned int features, ++ const struct cpumask *cpumask, ++ const char *name, ++ int rating) + { +- struct dmtimer_clockevent *clkevt; + struct clock_event_device *dev; + struct dmtimer_systimer *t; + int error; + +- clkevt = kzalloc(sizeof(*clkevt), GFP_KERNEL); +- if (!clkevt) +- return -ENOMEM; +- + t = &clkevt->t; + dev = &clkevt->dev; + +@@ -548,25 +549,23 @@ static int __init dmtimer_clockevent_init(struct device_node *np) + * We mostly use cpuidle_coupled with ARM local timers for runtime, + * so there's probably no use for CLOCK_EVT_FEAT_DYNIRQ here. + */ +- dev->features = CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_FEAT_ONESHOT; +- dev->rating = 300; ++ dev->features = features; ++ dev->rating = rating; + dev->set_next_event = dmtimer_set_next_event; + dev->set_state_shutdown = dmtimer_clockevent_shutdown; + dev->set_state_periodic = dmtimer_set_periodic; + dev->set_state_oneshot = dmtimer_clockevent_shutdown; + dev->set_state_oneshot_stopped = dmtimer_clockevent_shutdown; + dev->tick_resume = dmtimer_clockevent_shutdown; +- dev->cpumask = cpu_possible_mask; ++ dev->cpumask = cpumask; + + dev->irq = irq_of_parse_and_map(np, 0); +- if (!dev->irq) { +- error = -ENXIO; +- goto err_out_free; +- } ++ if (!dev->irq) ++ return -ENXIO; + + error = dmtimer_systimer_setup(np, &clkevt->t); + if (error) +- goto err_out_free; ++ return error; + + clkevt->period = 0xffffffff - DIV_ROUND_CLOSEST(t->rate, HZ); + +@@ -578,38 +577,132 @@ static int __init dmtimer_clockevent_init(struct device_node *np) + writel_relaxed(OMAP_TIMER_CTRL_POSTED, t->base + t->ifctrl); + + error = request_irq(dev->irq, dmtimer_clockevent_interrupt, +- IRQF_TIMER, "clockevent", clkevt); ++ IRQF_TIMER, name, clkevt); + if (error) + goto err_out_unmap; + + writel_relaxed(OMAP_TIMER_INT_OVERFLOW, t->base + t->irq_ena); + writel_relaxed(OMAP_TIMER_INT_OVERFLOW, t->base + t->wakeup); + +- pr_info("TI gptimer clockevent: %s%lu Hz at %pOF\n", +- of_find_property(np, "ti,timer-alwon", NULL) ? ++ pr_info("TI gptimer %s: %s%lu Hz at %pOF\n", ++ name, of_find_property(np, "ti,timer-alwon", NULL) ? 
+ "always-on " : "", t->rate, np->parent); + +- clockevents_config_and_register(dev, t->rate, +- 3, /* Timer internal resynch latency */ ++ return 0; ++ ++err_out_unmap: ++ iounmap(t->base); ++ ++ return error; ++} ++ ++static int __init dmtimer_clockevent_init(struct device_node *np) ++{ ++ struct dmtimer_clockevent *clkevt; ++ int error; ++ ++ clkevt = kzalloc(sizeof(*clkevt), GFP_KERNEL); ++ if (!clkevt) ++ return -ENOMEM; ++ ++ error = dmtimer_clkevt_init_common(clkevt, np, ++ CLOCK_EVT_FEAT_PERIODIC | ++ CLOCK_EVT_FEAT_ONESHOT, ++ cpu_possible_mask, "clockevent", ++ 300); ++ if (error) ++ goto err_out_free; ++ ++ clockevents_config_and_register(&clkevt->dev, clkevt->t.rate, ++ 3, /* Timer internal resync latency */ + 0xffffffff); + + if (of_machine_is_compatible("ti,am33xx") || + of_machine_is_compatible("ti,am43")) { +- dev->suspend = omap_clockevent_idle; +- dev->resume = omap_clockevent_unidle; ++ clkevt->dev.suspend = omap_clockevent_idle; ++ clkevt->dev.resume = omap_clockevent_unidle; + } + + return 0; + +-err_out_unmap: +- iounmap(t->base); +- + err_out_free: + kfree(clkevt); + + return error; + } + ++/* Dmtimer as percpu timer. See dra7 ARM architected timer wrap erratum i940 */ ++static DEFINE_PER_CPU(struct dmtimer_clockevent, dmtimer_percpu_timer); ++ ++static int __init dmtimer_percpu_timer_init(struct device_node *np, int cpu) ++{ ++ struct dmtimer_clockevent *clkevt; ++ int error; ++ ++ if (!cpu_possible(cpu)) ++ return -EINVAL; ++ ++ if (!of_property_read_bool(np->parent, "ti,no-reset-on-init") || ++ !of_property_read_bool(np->parent, "ti,no-idle")) ++ pr_warn("Incomplete dtb for percpu dmtimer %pOF\n", np->parent); ++ ++ clkevt = per_cpu_ptr(&dmtimer_percpu_timer, cpu); ++ ++ error = dmtimer_clkevt_init_common(clkevt, np, CLOCK_EVT_FEAT_ONESHOT, ++ cpumask_of(cpu), "percpu-dmtimer", ++ 500); ++ if (error) ++ return error; ++ ++ return 0; ++} ++ ++/* See TRM for timer internal resynch latency */ ++static int omap_dmtimer_starting_cpu(unsigned int cpu) ++{ ++ struct dmtimer_clockevent *clkevt = per_cpu_ptr(&dmtimer_percpu_timer, cpu); ++ struct clock_event_device *dev = &clkevt->dev; ++ struct dmtimer_systimer *t = &clkevt->t; ++ ++ clockevents_config_and_register(dev, t->rate, 3, ULONG_MAX); ++ irq_force_affinity(dev->irq, cpumask_of(cpu)); ++ ++ return 0; ++} ++ ++static int __init dmtimer_percpu_timer_startup(void) ++{ ++ struct dmtimer_clockevent *clkevt = per_cpu_ptr(&dmtimer_percpu_timer, 0); ++ struct dmtimer_systimer *t = &clkevt->t; ++ ++ if (t->sysc) { ++ cpuhp_setup_state(CPUHP_AP_TI_GP_TIMER_STARTING, ++ "clockevents/omap/gptimer:starting", ++ omap_dmtimer_starting_cpu, NULL); ++ } ++ ++ return 0; ++} ++subsys_initcall(dmtimer_percpu_timer_startup); ++ ++static int __init dmtimer_percpu_quirk_init(struct device_node *np, u32 pa) ++{ ++ struct device_node *arm_timer; ++ ++ arm_timer = of_find_compatible_node(NULL, NULL, "arm,armv7-timer"); ++ if (of_device_is_available(arm_timer)) { ++ pr_warn_once("ARM architected timer wrap issue i940 detected\n"); ++ return 0; ++ } ++ ++ if (pa == 0x48034000) /* dra7 dmtimer3 */ ++ return dmtimer_percpu_timer_init(np, 0); ++ else if (pa == 0x48036000) /* dra7 dmtimer4 */ ++ return dmtimer_percpu_timer_init(np, 1); ++ ++ return 0; ++} ++ + /* Clocksource */ + static struct dmtimer_clocksource * + to_dmtimer_clocksource(struct clocksource *cs) +@@ -743,6 +836,9 @@ static int __init dmtimer_systimer_init(struct device_node *np) + if (clockevent == pa) + return dmtimer_clockevent_init(np); + ++ if 
(of_machine_is_compatible("ti,dra7")) ++ return dmtimer_percpu_quirk_init(np, pa); ++ + return 0; + } + +diff --git a/drivers/cpufreq/acpi-cpufreq.c b/drivers/cpufreq/acpi-cpufreq.c +index d1bbc16fba4b4..7e7450453714d 100644 +--- a/drivers/cpufreq/acpi-cpufreq.c ++++ b/drivers/cpufreq/acpi-cpufreq.c +@@ -646,7 +646,11 @@ static u64 get_max_boost_ratio(unsigned int cpu) + return 0; + } + +- highest_perf = perf_caps.highest_perf; ++ if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD) ++ highest_perf = amd_get_highest_perf(); ++ else ++ highest_perf = perf_caps.highest_perf; ++ + nominal_perf = perf_caps.nominal_perf; + + if (!highest_perf || !nominal_perf) { +diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c +index 5175ae3cac44b..34196c107de64 100644 +--- a/drivers/cpufreq/intel_pstate.c ++++ b/drivers/cpufreq/intel_pstate.c +@@ -3054,6 +3054,14 @@ static const struct x86_cpu_id hwp_support_ids[] __initconst = { + {} + }; + ++static bool intel_pstate_hwp_is_enabled(void) ++{ ++ u64 value; ++ ++ rdmsrl(MSR_PM_ENABLE, value); ++ return !!(value & 0x1); ++} ++ + static int __init intel_pstate_init(void) + { + const struct x86_cpu_id *id; +@@ -3072,8 +3080,12 @@ static int __init intel_pstate_init(void) + * Avoid enabling HWP for processors without EPP support, + * because that means incomplete HWP implementation which is a + * corner case and supporting it is generally problematic. ++ * ++ * If HWP is enabled already, though, there is no choice but to ++ * deal with it. + */ +- if (!no_hwp && boot_cpu_has(X86_FEATURE_HWP_EPP)) { ++ if ((!no_hwp && boot_cpu_has(X86_FEATURE_HWP_EPP)) || ++ intel_pstate_hwp_is_enabled()) { + hwp_active++; + hwp_mode_bdw = id->driver_data; + intel_pstate.attr = hwp_cpufreq_attrs; +diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c +index 8fd43c1acac13..3e0d1d6922bab 100644 +--- a/drivers/crypto/ccp/sev-dev.c ++++ b/drivers/crypto/ccp/sev-dev.c +@@ -990,7 +990,7 @@ int sev_dev_init(struct psp_device *psp) + if (!sev->vdata) { + ret = -ENODEV; + dev_err(dev, "sev: missing driver data\n"); +- goto e_err; ++ goto e_sev; + } + + psp_set_sev_irq_handler(psp, sev_irq_handler, sev); +@@ -1005,6 +1005,8 @@ int sev_dev_init(struct psp_device *psp) + + e_irq: + psp_clear_sev_irq_handler(psp); ++e_sev: ++ devm_kfree(dev, sev); + e_err: + psp->sev_data = NULL; + +diff --git a/drivers/dma/idxd/cdev.c b/drivers/dma/idxd/cdev.c +index 0db9b82ed8cf5..1d8a3876b7452 100644 +--- a/drivers/dma/idxd/cdev.c ++++ b/drivers/dma/idxd/cdev.c +@@ -39,15 +39,15 @@ struct idxd_user_context { + struct iommu_sva *sva; + }; + +-enum idxd_cdev_cleanup { +- CDEV_NORMAL = 0, +- CDEV_FAILED, +-}; +- + static void idxd_cdev_dev_release(struct device *dev) + { +- dev_dbg(dev, "releasing cdev device\n"); +- kfree(dev); ++ struct idxd_cdev *idxd_cdev = container_of(dev, struct idxd_cdev, dev); ++ struct idxd_cdev_context *cdev_ctx; ++ struct idxd_wq *wq = idxd_cdev->wq; ++ ++ cdev_ctx = &ictx[wq->idxd->type]; ++ ida_simple_remove(&cdev_ctx->minor_ida, idxd_cdev->minor); ++ kfree(idxd_cdev); + } + + static struct device_type idxd_cdev_device_type = { +@@ -62,14 +62,11 @@ static inline struct idxd_cdev *inode_idxd_cdev(struct inode *inode) + return container_of(cdev, struct idxd_cdev, cdev); + } + +-static inline struct idxd_wq *idxd_cdev_wq(struct idxd_cdev *idxd_cdev) +-{ +- return container_of(idxd_cdev, struct idxd_wq, idxd_cdev); +-} +- + static inline struct idxd_wq *inode_wq(struct inode *inode) + { +- return idxd_cdev_wq(inode_idxd_cdev(inode)); ++ struct 
idxd_cdev *idxd_cdev = inode_idxd_cdev(inode); ++ ++ return idxd_cdev->wq; + } + + static int idxd_cdev_open(struct inode *inode, struct file *filp) +@@ -220,11 +217,10 @@ static __poll_t idxd_cdev_poll(struct file *filp, + struct idxd_user_context *ctx = filp->private_data; + struct idxd_wq *wq = ctx->wq; + struct idxd_device *idxd = wq->idxd; +- struct idxd_cdev *idxd_cdev = &wq->idxd_cdev; + unsigned long flags; + __poll_t out = 0; + +- poll_wait(filp, &idxd_cdev->err_queue, wait); ++ poll_wait(filp, &wq->err_queue, wait); + spin_lock_irqsave(&idxd->dev_lock, flags); + if (idxd->sw_err.valid) + out = EPOLLIN | EPOLLRDNORM; +@@ -246,98 +242,67 @@ int idxd_cdev_get_major(struct idxd_device *idxd) + return MAJOR(ictx[idxd->type].devt); + } + +-static int idxd_wq_cdev_dev_setup(struct idxd_wq *wq) ++int idxd_wq_add_cdev(struct idxd_wq *wq) + { + struct idxd_device *idxd = wq->idxd; +- struct idxd_cdev *idxd_cdev = &wq->idxd_cdev; +- struct idxd_cdev_context *cdev_ctx; ++ struct idxd_cdev *idxd_cdev; ++ struct cdev *cdev; + struct device *dev; +- int minor, rc; ++ struct idxd_cdev_context *cdev_ctx; ++ int rc, minor; + +- idxd_cdev->dev = kzalloc(sizeof(*idxd_cdev->dev), GFP_KERNEL); +- if (!idxd_cdev->dev) ++ idxd_cdev = kzalloc(sizeof(*idxd_cdev), GFP_KERNEL); ++ if (!idxd_cdev) + return -ENOMEM; + +- dev = idxd_cdev->dev; +- dev->parent = &idxd->pdev->dev; +- dev_set_name(dev, "%s/wq%u.%u", idxd_get_dev_name(idxd), +- idxd->id, wq->id); +- dev->bus = idxd_get_bus_type(idxd); +- ++ idxd_cdev->wq = wq; ++ cdev = &idxd_cdev->cdev; ++ dev = &idxd_cdev->dev; + cdev_ctx = &ictx[wq->idxd->type]; + minor = ida_simple_get(&cdev_ctx->minor_ida, 0, MINORMASK, GFP_KERNEL); + if (minor < 0) { +- rc = minor; +- kfree(dev); +- goto ida_err; +- } +- +- dev->devt = MKDEV(MAJOR(cdev_ctx->devt), minor); +- dev->type = &idxd_cdev_device_type; +- rc = device_register(dev); +- if (rc < 0) { +- dev_err(&idxd->pdev->dev, "device register failed\n"); +- goto dev_reg_err; ++ kfree(idxd_cdev); ++ return minor; + } + idxd_cdev->minor = minor; + +- return 0; +- +- dev_reg_err: +- ida_simple_remove(&cdev_ctx->minor_ida, MINOR(dev->devt)); +- put_device(dev); +- ida_err: +- idxd_cdev->dev = NULL; +- return rc; +-} +- +-static void idxd_wq_cdev_cleanup(struct idxd_wq *wq, +- enum idxd_cdev_cleanup cdev_state) +-{ +- struct idxd_cdev *idxd_cdev = &wq->idxd_cdev; +- struct idxd_cdev_context *cdev_ctx; +- +- cdev_ctx = &ictx[wq->idxd->type]; +- if (cdev_state == CDEV_NORMAL) +- cdev_del(&idxd_cdev->cdev); +- device_unregister(idxd_cdev->dev); +- /* +- * The device_type->release() will be called on the device and free +- * the allocated struct device. We can just forget it. 
+- */ +- ida_simple_remove(&cdev_ctx->minor_ida, idxd_cdev->minor); +- idxd_cdev->dev = NULL; +- idxd_cdev->minor = -1; +-} +- +-int idxd_wq_add_cdev(struct idxd_wq *wq) +-{ +- struct idxd_cdev *idxd_cdev = &wq->idxd_cdev; +- struct cdev *cdev = &idxd_cdev->cdev; +- struct device *dev; +- int rc; ++ device_initialize(dev); ++ dev->parent = &wq->conf_dev; ++ dev->bus = idxd_get_bus_type(idxd); ++ dev->type = &idxd_cdev_device_type; ++ dev->devt = MKDEV(MAJOR(cdev_ctx->devt), minor); + +- rc = idxd_wq_cdev_dev_setup(wq); ++ rc = dev_set_name(dev, "%s/wq%u.%u", idxd_get_dev_name(idxd), ++ idxd->id, wq->id); + if (rc < 0) +- return rc; ++ goto err; + +- dev = idxd_cdev->dev; ++ wq->idxd_cdev = idxd_cdev; + cdev_init(cdev, &idxd_cdev_fops); +- cdev_set_parent(cdev, &dev->kobj); +- rc = cdev_add(cdev, dev->devt, 1); ++ rc = cdev_device_add(cdev, dev); + if (rc) { + dev_dbg(&wq->idxd->pdev->dev, "cdev_add failed: %d\n", rc); +- idxd_wq_cdev_cleanup(wq, CDEV_FAILED); +- return rc; ++ goto err; + } + +- init_waitqueue_head(&idxd_cdev->err_queue); + return 0; ++ ++ err: ++ put_device(dev); ++ wq->idxd_cdev = NULL; ++ return rc; + } + + void idxd_wq_del_cdev(struct idxd_wq *wq) + { +- idxd_wq_cdev_cleanup(wq, CDEV_NORMAL); ++ struct idxd_cdev *idxd_cdev; ++ struct idxd_cdev_context *cdev_ctx; ++ ++ cdev_ctx = &ictx[wq->idxd->type]; ++ idxd_cdev = wq->idxd_cdev; ++ wq->idxd_cdev = NULL; ++ cdev_device_del(&idxd_cdev->cdev, &idxd_cdev->dev); ++ put_device(&idxd_cdev->dev); + } + + int idxd_cdev_register(void) +diff --git a/drivers/dma/idxd/device.c b/drivers/dma/idxd/device.c +index 31c819544a229..4fef57717049e 100644 +--- a/drivers/dma/idxd/device.c ++++ b/drivers/dma/idxd/device.c +@@ -19,7 +19,7 @@ static void idxd_cmd_exec(struct idxd_device *idxd, int cmd_code, u32 operand, + /* Interrupt control bits */ + void idxd_mask_msix_vector(struct idxd_device *idxd, int vec_id) + { +- struct irq_data *data = irq_get_irq_data(idxd->msix_entries[vec_id].vector); ++ struct irq_data *data = irq_get_irq_data(idxd->irq_entries[vec_id].vector); + + pci_msi_mask_irq(data); + } +@@ -36,7 +36,7 @@ void idxd_mask_msix_vectors(struct idxd_device *idxd) + + void idxd_unmask_msix_vector(struct idxd_device *idxd, int vec_id) + { +- struct irq_data *data = irq_get_irq_data(idxd->msix_entries[vec_id].vector); ++ struct irq_data *data = irq_get_irq_data(idxd->irq_entries[vec_id].vector); + + pci_msi_unmask_irq(data); + } +@@ -186,8 +186,6 @@ int idxd_wq_alloc_resources(struct idxd_wq *wq) + desc->id = i; + desc->wq = wq; + desc->cpu = -1; +- dma_async_tx_descriptor_init(&desc->txd, &wq->dma_chan); +- desc->txd.tx_submit = idxd_dma_tx_submit; + } + + return 0; +@@ -451,7 +449,8 @@ static void idxd_cmd_exec(struct idxd_device *idxd, int cmd_code, u32 operand, + + if (idxd_device_is_halted(idxd)) { + dev_warn(&idxd->pdev->dev, "Device is HALTED!\n"); +- *status = IDXD_CMDSTS_HW_ERR; ++ if (status) ++ *status = IDXD_CMDSTS_HW_ERR; + return; + } + +@@ -521,7 +520,7 @@ void idxd_device_wqs_clear_state(struct idxd_device *idxd) + lockdep_assert_held(&idxd->dev_lock); + + for (i = 0; i < idxd->max_wqs; i++) { +- struct idxd_wq *wq = &idxd->wqs[i]; ++ struct idxd_wq *wq = idxd->wqs[i]; + + if (wq->state == IDXD_WQ_ENABLED) { + idxd_wq_disable_cleanup(wq); +@@ -660,7 +659,7 @@ static int idxd_groups_config_write(struct idxd_device *idxd) + ioread32(idxd->reg_base + IDXD_GENCFG_OFFSET)); + + for (i = 0; i < idxd->max_groups; i++) { +- struct idxd_group *group = &idxd->groups[i]; ++ struct idxd_group *group = idxd->groups[i]; + + 
idxd_group_config_write(group); + } +@@ -739,7 +738,7 @@ static int idxd_wqs_config_write(struct idxd_device *idxd) + int i, rc; + + for (i = 0; i < idxd->max_wqs; i++) { +- struct idxd_wq *wq = &idxd->wqs[i]; ++ struct idxd_wq *wq = idxd->wqs[i]; + + rc = idxd_wq_config_write(wq); + if (rc < 0) +@@ -755,7 +754,7 @@ static void idxd_group_flags_setup(struct idxd_device *idxd) + + /* TC-A 0 and TC-B 1 should be defaults */ + for (i = 0; i < idxd->max_groups; i++) { +- struct idxd_group *group = &idxd->groups[i]; ++ struct idxd_group *group = idxd->groups[i]; + + if (group->tc_a == -1) + group->tc_a = group->grpcfg.flags.tc_a = 0; +@@ -782,12 +781,12 @@ static int idxd_engines_setup(struct idxd_device *idxd) + struct idxd_group *group; + + for (i = 0; i < idxd->max_groups; i++) { +- group = &idxd->groups[i]; ++ group = idxd->groups[i]; + group->grpcfg.engines = 0; + } + + for (i = 0; i < idxd->max_engines; i++) { +- eng = &idxd->engines[i]; ++ eng = idxd->engines[i]; + group = eng->group; + + if (!group) +@@ -811,13 +810,13 @@ static int idxd_wqs_setup(struct idxd_device *idxd) + struct device *dev = &idxd->pdev->dev; + + for (i = 0; i < idxd->max_groups; i++) { +- group = &idxd->groups[i]; ++ group = idxd->groups[i]; + for (j = 0; j < 4; j++) + group->grpcfg.wqs[j] = 0; + } + + for (i = 0; i < idxd->max_wqs; i++) { +- wq = &idxd->wqs[i]; ++ wq = idxd->wqs[i]; + group = wq->group; + + if (!wq->group) +diff --git a/drivers/dma/idxd/dma.c b/drivers/dma/idxd/dma.c +index a15e50126434e..77439b6450448 100644 +--- a/drivers/dma/idxd/dma.c ++++ b/drivers/dma/idxd/dma.c +@@ -14,7 +14,10 @@ + + static inline struct idxd_wq *to_idxd_wq(struct dma_chan *c) + { +- return container_of(c, struct idxd_wq, dma_chan); ++ struct idxd_dma_chan *idxd_chan; ++ ++ idxd_chan = container_of(c, struct idxd_dma_chan, chan); ++ return idxd_chan->wq; + } + + void idxd_dma_complete_txd(struct idxd_desc *desc, +@@ -135,7 +138,7 @@ static void idxd_dma_issue_pending(struct dma_chan *dma_chan) + { + } + +-dma_cookie_t idxd_dma_tx_submit(struct dma_async_tx_descriptor *tx) ++static dma_cookie_t idxd_dma_tx_submit(struct dma_async_tx_descriptor *tx) + { + struct dma_chan *c = tx->chan; + struct idxd_wq *wq = to_idxd_wq(c); +@@ -156,14 +159,25 @@ dma_cookie_t idxd_dma_tx_submit(struct dma_async_tx_descriptor *tx) + + static void idxd_dma_release(struct dma_device *device) + { ++ struct idxd_dma_dev *idxd_dma = container_of(device, struct idxd_dma_dev, dma); ++ ++ kfree(idxd_dma); + } + + int idxd_register_dma_device(struct idxd_device *idxd) + { +- struct dma_device *dma = &idxd->dma_dev; ++ struct idxd_dma_dev *idxd_dma; ++ struct dma_device *dma; ++ struct device *dev = &idxd->pdev->dev; ++ int rc; + ++ idxd_dma = kzalloc_node(sizeof(*idxd_dma), GFP_KERNEL, dev_to_node(dev)); ++ if (!idxd_dma) ++ return -ENOMEM; ++ ++ dma = &idxd_dma->dma; + INIT_LIST_HEAD(&dma->channels); +- dma->dev = &idxd->pdev->dev; ++ dma->dev = dev; + + dma_cap_set(DMA_PRIVATE, dma->cap_mask); + dma_cap_set(DMA_COMPLETION_NO_ORDER, dma->cap_mask); +@@ -179,35 +193,72 @@ int idxd_register_dma_device(struct idxd_device *idxd) + dma->device_alloc_chan_resources = idxd_dma_alloc_chan_resources; + dma->device_free_chan_resources = idxd_dma_free_chan_resources; + +- return dma_async_device_register(&idxd->dma_dev); ++ rc = dma_async_device_register(dma); ++ if (rc < 0) { ++ kfree(idxd_dma); ++ return rc; ++ } ++ ++ idxd_dma->idxd = idxd; ++ /* ++ * This pointer is protected by the refs taken by the dma_chan. 
It will remain valid ++ * as long as there are outstanding channels. ++ */ ++ idxd->idxd_dma = idxd_dma; ++ return 0; + } + + void idxd_unregister_dma_device(struct idxd_device *idxd) + { +- dma_async_device_unregister(&idxd->dma_dev); ++ dma_async_device_unregister(&idxd->idxd_dma->dma); + } + + int idxd_register_dma_channel(struct idxd_wq *wq) + { + struct idxd_device *idxd = wq->idxd; +- struct dma_device *dma = &idxd->dma_dev; +- struct dma_chan *chan = &wq->dma_chan; +- int rc; ++ struct dma_device *dma = &idxd->idxd_dma->dma; ++ struct device *dev = &idxd->pdev->dev; ++ struct idxd_dma_chan *idxd_chan; ++ struct dma_chan *chan; ++ int rc, i; ++ ++ idxd_chan = kzalloc_node(sizeof(*idxd_chan), GFP_KERNEL, dev_to_node(dev)); ++ if (!idxd_chan) ++ return -ENOMEM; + +- memset(&wq->dma_chan, 0, sizeof(struct dma_chan)); ++ chan = &idxd_chan->chan; + chan->device = dma; + list_add_tail(&chan->device_node, &dma->channels); ++ ++ for (i = 0; i < wq->num_descs; i++) { ++ struct idxd_desc *desc = wq->descs[i]; ++ ++ dma_async_tx_descriptor_init(&desc->txd, chan); ++ desc->txd.tx_submit = idxd_dma_tx_submit; ++ } ++ + rc = dma_async_device_channel_register(dma, chan); +- if (rc < 0) ++ if (rc < 0) { ++ kfree(idxd_chan); + return rc; ++ } ++ ++ wq->idxd_chan = idxd_chan; ++ idxd_chan->wq = wq; ++ get_device(&wq->conf_dev); + + return 0; + } + + void idxd_unregister_dma_channel(struct idxd_wq *wq) + { +- struct dma_chan *chan = &wq->dma_chan; ++ struct idxd_dma_chan *idxd_chan = wq->idxd_chan; ++ struct dma_chan *chan = &idxd_chan->chan; ++ struct idxd_dma_dev *idxd_dma = wq->idxd->idxd_dma; + +- dma_async_device_channel_unregister(&wq->idxd->dma_dev, chan); ++ dma_async_device_channel_unregister(&idxd_dma->dma, chan); + list_del(&chan->device_node); ++ kfree(wq->idxd_chan); ++ wq->idxd_chan = NULL; ++ put_device(&wq->conf_dev); + } +diff --git a/drivers/dma/idxd/idxd.h b/drivers/dma/idxd/idxd.h +index 76014c14f4732..89daf746d1215 100644 +--- a/drivers/dma/idxd/idxd.h ++++ b/drivers/dma/idxd/idxd.h +@@ -8,12 +8,16 @@ + #include + #include + #include ++#include + #include "registers.h" + + #define IDXD_DRIVER_VERSION "1.00" + + extern struct kmem_cache *idxd_desc_pool; + ++struct idxd_device; ++struct idxd_wq; ++ + #define IDXD_REG_TIMEOUT 50 + #define IDXD_DRAIN_TIMEOUT 5000 + +@@ -33,6 +37,7 @@ struct idxd_device_driver { + struct idxd_irq_entry { + struct idxd_device *idxd; + int id; ++ int vector; + struct llist_head pending_llist; + struct list_head work_list; + /* +@@ -75,10 +80,10 @@ enum idxd_wq_type { + }; + + struct idxd_cdev { ++ struct idxd_wq *wq; + struct cdev cdev; +- struct device *dev; ++ struct device dev; + int minor; +- struct wait_queue_head err_queue; + }; + + #define IDXD_ALLOCATED_BATCH_SIZE 128U +@@ -96,10 +101,16 @@ enum idxd_complete_type { + IDXD_COMPLETE_DEV_FAIL, + }; + ++struct idxd_dma_chan { ++ struct dma_chan chan; ++ struct idxd_wq *wq; ++}; ++ + struct idxd_wq { + void __iomem *portal; + struct device conf_dev; +- struct idxd_cdev idxd_cdev; ++ struct idxd_cdev *idxd_cdev; ++ struct wait_queue_head err_queue; + struct idxd_device *idxd; + int id; + enum idxd_wq_type type; +@@ -125,7 +136,7 @@ struct idxd_wq { + int compls_size; + struct idxd_desc **descs; + struct sbitmap_queue sbq; +- struct dma_chan dma_chan; ++ struct idxd_dma_chan *idxd_chan; + char name[WQ_NAME_SIZE + 1]; + u64 max_xfer_bytes; + u32 max_batch_size; +@@ -162,6 +173,11 @@ enum idxd_device_flag { + IDXD_FLAG_PASID_ENABLED, + }; + ++struct idxd_dma_dev { ++ struct idxd_device *idxd; ++ struct 
dma_device dma; ++}; ++ + struct idxd_device { + enum idxd_type type; + struct device conf_dev; +@@ -178,9 +194,9 @@ struct idxd_device { + + spinlock_t dev_lock; /* spinlock for device */ + struct completion *cmd_done; +- struct idxd_group *groups; +- struct idxd_wq *wqs; +- struct idxd_engine *engines; ++ struct idxd_group **groups; ++ struct idxd_wq **wqs; ++ struct idxd_engine **engines; + + struct iommu_sva *sva; + unsigned int pasid; +@@ -206,11 +222,10 @@ struct idxd_device { + + union sw_err_reg sw_err; + wait_queue_head_t cmd_waitq; +- struct msix_entry *msix_entries; + int num_wq_irqs; + struct idxd_irq_entry *irq_entries; + +- struct dma_device dma_dev; ++ struct idxd_dma_dev *idxd_dma; + struct workqueue_struct *wq; + struct work_struct work; + }; +@@ -242,6 +257,43 @@ extern struct bus_type dsa_bus_type; + extern struct bus_type iax_bus_type; + + extern bool support_enqcmd; ++extern struct device_type dsa_device_type; ++extern struct device_type iax_device_type; ++extern struct device_type idxd_wq_device_type; ++extern struct device_type idxd_engine_device_type; ++extern struct device_type idxd_group_device_type; ++ ++static inline bool is_dsa_dev(struct device *dev) ++{ ++ return dev->type == &dsa_device_type; ++} ++ ++static inline bool is_iax_dev(struct device *dev) ++{ ++ return dev->type == &iax_device_type; ++} ++ ++static inline bool is_idxd_dev(struct device *dev) ++{ ++ return is_dsa_dev(dev) || is_iax_dev(dev); ++} ++ ++static inline bool is_idxd_wq_dev(struct device *dev) ++{ ++ return dev->type == &idxd_wq_device_type; ++} ++ ++static inline bool is_idxd_wq_dmaengine(struct idxd_wq *wq) ++{ ++ if (wq->type == IDXD_WQT_KERNEL && strcmp(wq->name, "dmaengine") == 0) ++ return true; ++ return false; ++} ++ ++static inline bool is_idxd_wq_cdev(struct idxd_wq *wq) ++{ ++ return wq->type == IDXD_WQT_USER; ++} + + static inline bool wq_dedicated(struct idxd_wq *wq) + { +@@ -279,18 +331,6 @@ static inline int idxd_get_wq_portal_full_offset(int wq_id, + return ((wq_id * 4) << PAGE_SHIFT) + idxd_get_wq_portal_offset(prot); + } + +-static inline void idxd_set_type(struct idxd_device *idxd) +-{ +- struct pci_dev *pdev = idxd->pdev; +- +- if (pdev->device == PCI_DEVICE_ID_INTEL_DSA_SPR0) +- idxd->type = IDXD_TYPE_DSA; +- else if (pdev->device == PCI_DEVICE_ID_INTEL_IAX_SPR0) +- idxd->type = IDXD_TYPE_IAX; +- else +- idxd->type = IDXD_TYPE_UNKNOWN; +-} +- + static inline void idxd_wq_get(struct idxd_wq *wq) + { + wq->client_count++; +@@ -306,14 +346,16 @@ static inline int idxd_wq_refcount(struct idxd_wq *wq) + return wq->client_count; + }; + ++struct ida *idxd_ida(struct idxd_device *idxd); + const char *idxd_get_dev_name(struct idxd_device *idxd); + int idxd_register_bus_type(void); + void idxd_unregister_bus_type(void); +-int idxd_setup_sysfs(struct idxd_device *idxd); +-void idxd_cleanup_sysfs(struct idxd_device *idxd); ++int idxd_register_devices(struct idxd_device *idxd); ++void idxd_unregister_devices(struct idxd_device *idxd); + int idxd_register_driver(void); + void idxd_unregister_driver(void); + struct bus_type *idxd_get_bus_type(struct idxd_device *idxd); ++struct device_type *idxd_get_device_type(struct idxd_device *idxd); + + /* device interrupt control */ + void idxd_msix_perm_setup(struct idxd_device *idxd); +@@ -363,7 +405,6 @@ void idxd_unregister_dma_channel(struct idxd_wq *wq); + void idxd_parse_completion_status(u8 status, enum dmaengine_tx_result *res); + void idxd_dma_complete_txd(struct idxd_desc *desc, + enum idxd_complete_type comp_type); +-dma_cookie_t 
idxd_dma_tx_submit(struct dma_async_tx_descriptor *tx); + + /* cdev */ + int idxd_cdev_register(void); +diff --git a/drivers/dma/idxd/init.c b/drivers/dma/idxd/init.c +index 6584b0ec07d54..07cf7977a0450 100644 +--- a/drivers/dma/idxd/init.c ++++ b/drivers/dma/idxd/init.c +@@ -34,8 +34,7 @@ MODULE_PARM_DESC(sva, "Toggle SVA support on/off"); + + bool support_enqcmd; + +-static struct idr idxd_idrs[IDXD_TYPE_MAX]; +-static DEFINE_MUTEX(idxd_idr_lock); ++static struct ida idxd_idas[IDXD_TYPE_MAX]; + + static struct pci_device_id idxd_pci_tbl[] = { + /* DSA ver 1.0 platforms */ +@@ -52,6 +51,11 @@ static char *idxd_name[] = { + "iax" + }; + ++struct ida *idxd_ida(struct idxd_device *idxd) ++{ ++ return &idxd_idas[idxd->type]; ++} ++ + const char *idxd_get_dev_name(struct idxd_device *idxd) + { + return idxd_name[idxd->type]; +@@ -61,7 +65,6 @@ static int idxd_setup_interrupts(struct idxd_device *idxd) + { + struct pci_dev *pdev = idxd->pdev; + struct device *dev = &pdev->dev; +- struct msix_entry *msix; + struct idxd_irq_entry *irq_entry; + int i, msixcnt; + int rc = 0; +@@ -69,23 +72,13 @@ static int idxd_setup_interrupts(struct idxd_device *idxd) + msixcnt = pci_msix_vec_count(pdev); + if (msixcnt < 0) { + dev_err(dev, "Not MSI-X interrupt capable.\n"); +- goto err_no_irq; +- } +- +- idxd->msix_entries = devm_kzalloc(dev, sizeof(struct msix_entry) * +- msixcnt, GFP_KERNEL); +- if (!idxd->msix_entries) { +- rc = -ENOMEM; +- goto err_no_irq; ++ return -ENOSPC; + } + +- for (i = 0; i < msixcnt; i++) +- idxd->msix_entries[i].entry = i; +- +- rc = pci_enable_msix_exact(pdev, idxd->msix_entries, msixcnt); +- if (rc) { +- dev_err(dev, "Failed enabling %d MSIX entries.\n", msixcnt); +- goto err_no_irq; ++ rc = pci_alloc_irq_vectors(pdev, msixcnt, msixcnt, PCI_IRQ_MSIX); ++ if (rc != msixcnt) { ++ dev_err(dev, "Failed enabling %d MSIX entries: %d\n", msixcnt, rc); ++ return -ENOSPC; + } + dev_dbg(dev, "Enabled %d msix vectors\n", msixcnt); + +@@ -93,119 +86,236 @@ static int idxd_setup_interrupts(struct idxd_device *idxd) + * We implement 1 completion list per MSI-X entry except for + * entry 0, which is for errors and others. 
+ */ +- idxd->irq_entries = devm_kcalloc(dev, msixcnt, +- sizeof(struct idxd_irq_entry), +- GFP_KERNEL); ++ idxd->irq_entries = kcalloc_node(msixcnt, sizeof(struct idxd_irq_entry), ++ GFP_KERNEL, dev_to_node(dev)); + if (!idxd->irq_entries) { + rc = -ENOMEM; +- goto err_no_irq; ++ goto err_irq_entries; + } + + for (i = 0; i < msixcnt; i++) { + idxd->irq_entries[i].id = i; + idxd->irq_entries[i].idxd = idxd; ++ idxd->irq_entries[i].vector = pci_irq_vector(pdev, i); + spin_lock_init(&idxd->irq_entries[i].list_lock); + } + +- msix = &idxd->msix_entries[0]; + irq_entry = &idxd->irq_entries[0]; +- rc = devm_request_threaded_irq(dev, msix->vector, idxd_irq_handler, +- idxd_misc_thread, 0, "idxd-misc", +- irq_entry); ++ rc = request_threaded_irq(irq_entry->vector, idxd_irq_handler, idxd_misc_thread, ++ 0, "idxd-misc", irq_entry); + if (rc < 0) { + dev_err(dev, "Failed to allocate misc interrupt.\n"); +- goto err_no_irq; ++ goto err_misc_irq; + } + +- dev_dbg(dev, "Allocated idxd-misc handler on msix vector %d\n", +- msix->vector); ++ dev_dbg(dev, "Allocated idxd-misc handler on msix vector %d\n", irq_entry->vector); + + /* first MSI-X entry is not for wq interrupts */ + idxd->num_wq_irqs = msixcnt - 1; + + for (i = 1; i < msixcnt; i++) { +- msix = &idxd->msix_entries[i]; + irq_entry = &idxd->irq_entries[i]; + + init_llist_head(&idxd->irq_entries[i].pending_llist); + INIT_LIST_HEAD(&idxd->irq_entries[i].work_list); +- rc = devm_request_threaded_irq(dev, msix->vector, +- idxd_irq_handler, +- idxd_wq_thread, 0, +- "idxd-portal", irq_entry); ++ rc = request_threaded_irq(irq_entry->vector, idxd_irq_handler, ++ idxd_wq_thread, 0, "idxd-portal", irq_entry); + if (rc < 0) { +- dev_err(dev, "Failed to allocate irq %d.\n", +- msix->vector); +- goto err_no_irq; ++ dev_err(dev, "Failed to allocate irq %d.\n", irq_entry->vector); ++ goto err_wq_irqs; + } +- dev_dbg(dev, "Allocated idxd-msix %d for vector %d\n", +- i, msix->vector); ++ dev_dbg(dev, "Allocated idxd-msix %d for vector %d\n", i, irq_entry->vector); + } + + idxd_unmask_error_interrupts(idxd); + idxd_msix_perm_setup(idxd); + return 0; + +- err_no_irq: ++ err_wq_irqs: ++ while (--i >= 0) { ++ irq_entry = &idxd->irq_entries[i]; ++ free_irq(irq_entry->vector, irq_entry); ++ } ++ err_misc_irq: + /* Disable error interrupt generation */ + idxd_mask_error_interrupts(idxd); +- pci_disable_msix(pdev); ++ err_irq_entries: ++ pci_free_irq_vectors(pdev); + dev_err(dev, "No usable interrupts\n"); + return rc; + } + +-static int idxd_setup_internals(struct idxd_device *idxd) ++static int idxd_setup_wqs(struct idxd_device *idxd) + { + struct device *dev = &idxd->pdev->dev; +- int i; +- +- init_waitqueue_head(&idxd->cmd_waitq); +- idxd->groups = devm_kcalloc(dev, idxd->max_groups, +- sizeof(struct idxd_group), GFP_KERNEL); +- if (!idxd->groups) +- return -ENOMEM; +- +- for (i = 0; i < idxd->max_groups; i++) { +- idxd->groups[i].idxd = idxd; +- idxd->groups[i].id = i; +- idxd->groups[i].tc_a = -1; +- idxd->groups[i].tc_b = -1; +- } ++ struct idxd_wq *wq; ++ int i, rc; + +- idxd->wqs = devm_kcalloc(dev, idxd->max_wqs, sizeof(struct idxd_wq), +- GFP_KERNEL); ++ idxd->wqs = kcalloc_node(idxd->max_wqs, sizeof(struct idxd_wq *), ++ GFP_KERNEL, dev_to_node(dev)); + if (!idxd->wqs) + return -ENOMEM; + +- idxd->engines = devm_kcalloc(dev, idxd->max_engines, +- sizeof(struct idxd_engine), GFP_KERNEL); +- if (!idxd->engines) +- return -ENOMEM; +- + for (i = 0; i < idxd->max_wqs; i++) { +- struct idxd_wq *wq = &idxd->wqs[i]; ++ wq = kzalloc_node(sizeof(*wq), GFP_KERNEL, 
dev_to_node(dev)); ++ if (!wq) { ++ rc = -ENOMEM; ++ goto err; ++ } + + wq->id = i; + wq->idxd = idxd; ++ device_initialize(&wq->conf_dev); ++ wq->conf_dev.parent = &idxd->conf_dev; ++ wq->conf_dev.bus = idxd_get_bus_type(idxd); ++ wq->conf_dev.type = &idxd_wq_device_type; ++ rc = dev_set_name(&wq->conf_dev, "wq%d.%d", idxd->id, wq->id); ++ if (rc < 0) { ++ put_device(&wq->conf_dev); ++ goto err; ++ } ++ + mutex_init(&wq->wq_lock); +- wq->idxd_cdev.minor = -1; ++ init_waitqueue_head(&wq->err_queue); + wq->max_xfer_bytes = idxd->max_xfer_bytes; + wq->max_batch_size = idxd->max_batch_size; +- wq->wqcfg = devm_kzalloc(dev, idxd->wqcfg_size, GFP_KERNEL); +- if (!wq->wqcfg) +- return -ENOMEM; ++ wq->wqcfg = kzalloc_node(idxd->wqcfg_size, GFP_KERNEL, dev_to_node(dev)); ++ if (!wq->wqcfg) { ++ put_device(&wq->conf_dev); ++ rc = -ENOMEM; ++ goto err; ++ } ++ idxd->wqs[i] = wq; + } + ++ return 0; ++ ++ err: ++ while (--i >= 0) ++ put_device(&idxd->wqs[i]->conf_dev); ++ return rc; ++} ++ ++static int idxd_setup_engines(struct idxd_device *idxd) ++{ ++ struct idxd_engine *engine; ++ struct device *dev = &idxd->pdev->dev; ++ int i, rc; ++ ++ idxd->engines = kcalloc_node(idxd->max_engines, sizeof(struct idxd_engine *), ++ GFP_KERNEL, dev_to_node(dev)); ++ if (!idxd->engines) ++ return -ENOMEM; ++ + for (i = 0; i < idxd->max_engines; i++) { +- idxd->engines[i].idxd = idxd; +- idxd->engines[i].id = i; ++ engine = kzalloc_node(sizeof(*engine), GFP_KERNEL, dev_to_node(dev)); ++ if (!engine) { ++ rc = -ENOMEM; ++ goto err; ++ } ++ ++ engine->id = i; ++ engine->idxd = idxd; ++ device_initialize(&engine->conf_dev); ++ engine->conf_dev.parent = &idxd->conf_dev; ++ engine->conf_dev.type = &idxd_engine_device_type; ++ rc = dev_set_name(&engine->conf_dev, "engine%d.%d", idxd->id, engine->id); ++ if (rc < 0) { ++ put_device(&engine->conf_dev); ++ goto err; ++ } ++ ++ idxd->engines[i] = engine; + } + +- idxd->wq = create_workqueue(dev_name(dev)); +- if (!idxd->wq) ++ return 0; ++ ++ err: ++ while (--i >= 0) ++ put_device(&idxd->engines[i]->conf_dev); ++ return rc; ++} ++ ++static int idxd_setup_groups(struct idxd_device *idxd) ++{ ++ struct device *dev = &idxd->pdev->dev; ++ struct idxd_group *group; ++ int i, rc; ++ ++ idxd->groups = kcalloc_node(idxd->max_groups, sizeof(struct idxd_group *), ++ GFP_KERNEL, dev_to_node(dev)); ++ if (!idxd->groups) + return -ENOMEM; + ++ for (i = 0; i < idxd->max_groups; i++) { ++ group = kzalloc_node(sizeof(*group), GFP_KERNEL, dev_to_node(dev)); ++ if (!group) { ++ rc = -ENOMEM; ++ goto err; ++ } ++ ++ group->id = i; ++ group->idxd = idxd; ++ device_initialize(&group->conf_dev); ++ group->conf_dev.parent = &idxd->conf_dev; ++ group->conf_dev.bus = idxd_get_bus_type(idxd); ++ group->conf_dev.type = &idxd_group_device_type; ++ rc = dev_set_name(&group->conf_dev, "group%d.%d", idxd->id, group->id); ++ if (rc < 0) { ++ put_device(&group->conf_dev); ++ goto err; ++ } ++ ++ idxd->groups[i] = group; ++ group->tc_a = -1; ++ group->tc_b = -1; ++ } ++ ++ return 0; ++ ++ err: ++ while (--i >= 0) ++ put_device(&idxd->groups[i]->conf_dev); ++ return rc; ++} ++ ++static int idxd_setup_internals(struct idxd_device *idxd) ++{ ++ struct device *dev = &idxd->pdev->dev; ++ int rc, i; ++ ++ init_waitqueue_head(&idxd->cmd_waitq); ++ ++ rc = idxd_setup_wqs(idxd); ++ if (rc < 0) ++ return rc; ++ ++ rc = idxd_setup_engines(idxd); ++ if (rc < 0) ++ goto err_engine; ++ ++ rc = idxd_setup_groups(idxd); ++ if (rc < 0) ++ goto err_group; ++ ++ idxd->wq = create_workqueue(dev_name(dev)); ++ if (!idxd->wq) { 
++ rc = -ENOMEM; ++ goto err_wkq_create; ++ } ++ + return 0; ++ ++ err_wkq_create: ++ for (i = 0; i < idxd->max_groups; i++) ++ put_device(&idxd->groups[i]->conf_dev); ++ err_group: ++ for (i = 0; i < idxd->max_engines; i++) ++ put_device(&idxd->engines[i]->conf_dev); ++ err_engine: ++ for (i = 0; i < idxd->max_wqs; i++) ++ put_device(&idxd->wqs[i]->conf_dev); ++ return rc; + } + + static void idxd_read_table_offsets(struct idxd_device *idxd) +@@ -275,16 +385,44 @@ static void idxd_read_caps(struct idxd_device *idxd) + } + } + ++static inline void idxd_set_type(struct idxd_device *idxd) ++{ ++ struct pci_dev *pdev = idxd->pdev; ++ ++ if (pdev->device == PCI_DEVICE_ID_INTEL_DSA_SPR0) ++ idxd->type = IDXD_TYPE_DSA; ++ else if (pdev->device == PCI_DEVICE_ID_INTEL_IAX_SPR0) ++ idxd->type = IDXD_TYPE_IAX; ++ else ++ idxd->type = IDXD_TYPE_UNKNOWN; ++} ++ + static struct idxd_device *idxd_alloc(struct pci_dev *pdev) + { + struct device *dev = &pdev->dev; + struct idxd_device *idxd; ++ int rc; + +- idxd = devm_kzalloc(dev, sizeof(struct idxd_device), GFP_KERNEL); ++ idxd = kzalloc_node(sizeof(*idxd), GFP_KERNEL, dev_to_node(dev)); + if (!idxd) + return NULL; + + idxd->pdev = pdev; ++ idxd_set_type(idxd); ++ idxd->id = ida_alloc(idxd_ida(idxd), GFP_KERNEL); ++ if (idxd->id < 0) ++ return NULL; ++ ++ device_initialize(&idxd->conf_dev); ++ idxd->conf_dev.parent = dev; ++ idxd->conf_dev.bus = idxd_get_bus_type(idxd); ++ idxd->conf_dev.type = idxd_get_device_type(idxd); ++ rc = dev_set_name(&idxd->conf_dev, "%s%d", idxd_get_dev_name(idxd), idxd->id); ++ if (rc < 0) { ++ put_device(&idxd->conf_dev); ++ return NULL; ++ } ++ + spin_lock_init(&idxd->dev_lock); + + return idxd; +@@ -352,31 +490,20 @@ static int idxd_probe(struct idxd_device *idxd) + + rc = idxd_setup_internals(idxd); + if (rc) +- goto err_setup; ++ goto err; + + rc = idxd_setup_interrupts(idxd); + if (rc) +- goto err_setup; ++ goto err; + + dev_dbg(dev, "IDXD interrupt setup complete.\n"); + +- mutex_lock(&idxd_idr_lock); +- idxd->id = idr_alloc(&idxd_idrs[idxd->type], idxd, 0, 0, GFP_KERNEL); +- mutex_unlock(&idxd_idr_lock); +- if (idxd->id < 0) { +- rc = -ENOMEM; +- goto err_idr_fail; +- } +- + idxd->major = idxd_cdev_get_major(idxd); + + dev_dbg(dev, "IDXD device %d probed successfully\n", idxd->id); + return 0; + +- err_idr_fail: +- idxd_mask_error_interrupts(idxd); +- idxd_mask_msix_vectors(idxd); +- err_setup: ++ err: + if (device_pasid_enabled(idxd)) + idxd_disable_system_pasid(idxd); + return rc; +@@ -396,34 +523,37 @@ static int idxd_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) + struct idxd_device *idxd; + int rc; + +- rc = pcim_enable_device(pdev); ++ rc = pci_enable_device(pdev); + if (rc) + return rc; + + dev_dbg(dev, "Alloc IDXD context\n"); + idxd = idxd_alloc(pdev); +- if (!idxd) +- return -ENOMEM; ++ if (!idxd) { ++ rc = -ENOMEM; ++ goto err_idxd_alloc; ++ } + + dev_dbg(dev, "Mapping BARs\n"); +- idxd->reg_base = pcim_iomap(pdev, IDXD_MMIO_BAR, 0); +- if (!idxd->reg_base) +- return -ENOMEM; ++ idxd->reg_base = pci_iomap(pdev, IDXD_MMIO_BAR, 0); ++ if (!idxd->reg_base) { ++ rc = -ENOMEM; ++ goto err_iomap; ++ } + + dev_dbg(dev, "Set DMA masks\n"); + rc = pci_set_dma_mask(pdev, DMA_BIT_MASK(64)); + if (rc) + rc = pci_set_dma_mask(pdev, DMA_BIT_MASK(32)); + if (rc) +- return rc; ++ goto err; + + rc = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64)); + if (rc) + rc = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32)); + if (rc) +- return rc; ++ goto err; + +- idxd_set_type(idxd); + + 
idxd_type_init(idxd); + +@@ -435,13 +565,13 @@ static int idxd_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) + rc = idxd_probe(idxd); + if (rc) { + dev_err(dev, "Intel(R) IDXD DMA Engine init failed\n"); +- return -ENODEV; ++ goto err; + } + +- rc = idxd_setup_sysfs(idxd); ++ rc = idxd_register_devices(idxd); + if (rc) { + dev_err(dev, "IDXD sysfs setup failed\n"); +- return -ENODEV; ++ goto err; + } + + idxd->state = IDXD_DEV_CONF_READY; +@@ -450,6 +580,14 @@ static int idxd_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) + idxd->hw.version); + + return 0; ++ ++ err: ++ pci_iounmap(pdev, idxd->reg_base); ++ err_iomap: ++ put_device(&idxd->conf_dev); ++ err_idxd_alloc: ++ pci_disable_device(pdev); ++ return rc; + } + + static void idxd_flush_pending_llist(struct idxd_irq_entry *ie) +@@ -495,7 +633,8 @@ static void idxd_shutdown(struct pci_dev *pdev) + + for (i = 0; i < msixcnt; i++) { + irq_entry = &idxd->irq_entries[i]; +- synchronize_irq(idxd->msix_entries[i].vector); ++ synchronize_irq(irq_entry->vector); ++ free_irq(irq_entry->vector, irq_entry); + if (i == 0) + continue; + idxd_flush_pending_llist(irq_entry); +@@ -503,6 +642,9 @@ static void idxd_shutdown(struct pci_dev *pdev) + } + + idxd_msix_perm_clear(idxd); ++ pci_free_irq_vectors(pdev); ++ pci_iounmap(pdev, idxd->reg_base); ++ pci_disable_device(pdev); + destroy_workqueue(idxd->wq); + } + +@@ -511,13 +653,10 @@ static void idxd_remove(struct pci_dev *pdev) + struct idxd_device *idxd = pci_get_drvdata(pdev); + + dev_dbg(&pdev->dev, "%s called\n", __func__); +- idxd_cleanup_sysfs(idxd); + idxd_shutdown(pdev); + if (device_pasid_enabled(idxd)) + idxd_disable_system_pasid(idxd); +- mutex_lock(&idxd_idr_lock); +- idr_remove(&idxd_idrs[idxd->type], idxd->id); +- mutex_unlock(&idxd_idr_lock); ++ idxd_unregister_devices(idxd); + } + + static struct pci_driver idxd_pci_driver = { +@@ -547,7 +686,7 @@ static int __init idxd_init_module(void) + support_enqcmd = true; + + for (i = 0; i < IDXD_TYPE_MAX; i++) +- idr_init(&idxd_idrs[i]); ++ ida_init(&idxd_idas[i]); + + err = idxd_register_bus_type(); + if (err < 0) +diff --git a/drivers/dma/idxd/irq.c b/drivers/dma/idxd/irq.c +index f1463fc581125..fc0781e3f36d4 100644 +--- a/drivers/dma/idxd/irq.c ++++ b/drivers/dma/idxd/irq.c +@@ -45,7 +45,7 @@ static void idxd_device_reinit(struct work_struct *work) + goto out; + + for (i = 0; i < idxd->max_wqs; i++) { +- struct idxd_wq *wq = &idxd->wqs[i]; ++ struct idxd_wq *wq = idxd->wqs[i]; + + if (wq->state == IDXD_WQ_ENABLED) { + rc = idxd_wq_enable(wq); +@@ -130,18 +130,18 @@ static int process_misc_interrupts(struct idxd_device *idxd, u32 cause) + + if (idxd->sw_err.valid && idxd->sw_err.wq_idx_valid) { + int id = idxd->sw_err.wq_idx; +- struct idxd_wq *wq = &idxd->wqs[id]; ++ struct idxd_wq *wq = idxd->wqs[id]; + + if (wq->type == IDXD_WQT_USER) +- wake_up_interruptible(&wq->idxd_cdev.err_queue); ++ wake_up_interruptible(&wq->err_queue); + } else { + int i; + + for (i = 0; i < idxd->max_wqs; i++) { +- struct idxd_wq *wq = &idxd->wqs[i]; ++ struct idxd_wq *wq = idxd->wqs[i]; + + if (wq->type == IDXD_WQT_USER) +- wake_up_interruptible(&wq->idxd_cdev.err_queue); ++ wake_up_interruptible(&wq->err_queue); + } + } + +diff --git a/drivers/dma/idxd/sysfs.c b/drivers/dma/idxd/sysfs.c +index 18bf4d1489890..9586b55abce56 100644 +--- a/drivers/dma/idxd/sysfs.c ++++ b/drivers/dma/idxd/sysfs.c +@@ -16,69 +16,6 @@ static char *idxd_wq_type_names[] = { + [IDXD_WQT_USER] = "user", + }; + +-static void idxd_conf_device_release(struct 
device *dev) +-{ +- dev_dbg(dev, "%s for %s\n", __func__, dev_name(dev)); +-} +- +-static struct device_type idxd_group_device_type = { +- .name = "group", +- .release = idxd_conf_device_release, +-}; +- +-static struct device_type idxd_wq_device_type = { +- .name = "wq", +- .release = idxd_conf_device_release, +-}; +- +-static struct device_type idxd_engine_device_type = { +- .name = "engine", +- .release = idxd_conf_device_release, +-}; +- +-static struct device_type dsa_device_type = { +- .name = "dsa", +- .release = idxd_conf_device_release, +-}; +- +-static struct device_type iax_device_type = { +- .name = "iax", +- .release = idxd_conf_device_release, +-}; +- +-static inline bool is_dsa_dev(struct device *dev) +-{ +- return dev ? dev->type == &dsa_device_type : false; +-} +- +-static inline bool is_iax_dev(struct device *dev) +-{ +- return dev ? dev->type == &iax_device_type : false; +-} +- +-static inline bool is_idxd_dev(struct device *dev) +-{ +- return is_dsa_dev(dev) || is_iax_dev(dev); +-} +- +-static inline bool is_idxd_wq_dev(struct device *dev) +-{ +- return dev ? dev->type == &idxd_wq_device_type : false; +-} +- +-static inline bool is_idxd_wq_dmaengine(struct idxd_wq *wq) +-{ +- if (wq->type == IDXD_WQT_KERNEL && +- strcmp(wq->name, "dmaengine") == 0) +- return true; +- return false; +-} +- +-static inline bool is_idxd_wq_cdev(struct idxd_wq *wq) +-{ +- return wq->type == IDXD_WQT_USER; +-} +- + static int idxd_config_bus_match(struct device *dev, + struct device_driver *drv) + { +@@ -322,7 +259,7 @@ static int idxd_config_bus_remove(struct device *dev) + dev_dbg(dev, "%s removing dev %s\n", __func__, + dev_name(&idxd->conf_dev)); + for (i = 0; i < idxd->max_wqs; i++) { +- struct idxd_wq *wq = &idxd->wqs[i]; ++ struct idxd_wq *wq = idxd->wqs[i]; + + if (wq->state == IDXD_WQ_DISABLED) + continue; +@@ -334,7 +271,7 @@ static int idxd_config_bus_remove(struct device *dev) + idxd_unregister_dma_device(idxd); + rc = idxd_device_disable(idxd); + for (i = 0; i < idxd->max_wqs; i++) { +- struct idxd_wq *wq = &idxd->wqs[i]; ++ struct idxd_wq *wq = idxd->wqs[i]; + + mutex_lock(&wq->wq_lock); + idxd_wq_disable_cleanup(wq); +@@ -405,7 +342,7 @@ struct bus_type *idxd_get_bus_type(struct idxd_device *idxd) + return idxd_bus_types[idxd->type]; + } + +-static struct device_type *idxd_get_device_type(struct idxd_device *idxd) ++struct device_type *idxd_get_device_type(struct idxd_device *idxd) + { + if (idxd->type == IDXD_TYPE_DSA) + return &dsa_device_type; +@@ -488,7 +425,7 @@ static ssize_t engine_group_id_store(struct device *dev, + + if (prevg) + prevg->num_engines--; +- engine->group = &idxd->groups[id]; ++ engine->group = idxd->groups[id]; + engine->group->num_engines++; + + return count; +@@ -512,6 +449,19 @@ static const struct attribute_group *idxd_engine_attribute_groups[] = { + NULL, + }; + ++static void idxd_conf_engine_release(struct device *dev) ++{ ++ struct idxd_engine *engine = container_of(dev, struct idxd_engine, conf_dev); ++ ++ kfree(engine); ++} ++ ++struct device_type idxd_engine_device_type = { ++ .name = "engine", ++ .release = idxd_conf_engine_release, ++ .groups = idxd_engine_attribute_groups, ++}; ++ + /* Group attributes */ + + static void idxd_set_free_tokens(struct idxd_device *idxd) +@@ -519,7 +469,7 @@ static void idxd_set_free_tokens(struct idxd_device *idxd) + int i, tokens; + + for (i = 0, tokens = 0; i < idxd->max_groups; i++) { +- struct idxd_group *g = &idxd->groups[i]; ++ struct idxd_group *g = idxd->groups[i]; + + tokens += g->tokens_reserved; + } 
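
The replacement release callbacks introduced above (idxd_conf_engine_release and friends) recover the containing object from the embedded struct device and free it, instead of the old no-op release paired with devm allocations. A self-contained sketch of that container_of pattern, in userspace C with invented struct names:

#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

/* Userspace equivalent of the kernel's container_of() */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct device_like { int refcount; };

struct engine_like {
	int id;
	struct device_like conf_dev;   /* embedded member, not a pointer */
};

static void engine_release(struct device_like *dev)
{
	struct engine_like *engine =
		container_of(dev, struct engine_like, conf_dev);

	printf("freeing engine %d\n", engine->id);
	free(engine);                  /* frees the whole wrapper object */
}

int main(void)
{
	struct engine_like *e = calloc(1, sizeof(*e));

	if (!e)
		return 1;
	e->id = 3;
	engine_release(&e->conf_dev);  /* what the driver core would call */
	return 0;
}

Because release now frees the wrapper, the object must never be freed by any other path; the last put_device() is the only legitimate destructor.
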
+@@ -674,7 +624,7 @@ static ssize_t group_engines_show(struct device *dev, + struct idxd_device *idxd = group->idxd; + + for (i = 0; i < idxd->max_engines; i++) { +- struct idxd_engine *engine = &idxd->engines[i]; ++ struct idxd_engine *engine = idxd->engines[i]; + + if (!engine->group) + continue; +@@ -703,7 +653,7 @@ static ssize_t group_work_queues_show(struct device *dev, + struct idxd_device *idxd = group->idxd; + + for (i = 0; i < idxd->max_wqs; i++) { +- struct idxd_wq *wq = &idxd->wqs[i]; ++ struct idxd_wq *wq = idxd->wqs[i]; + + if (!wq->group) + continue; +@@ -824,6 +774,19 @@ static const struct attribute_group *idxd_group_attribute_groups[] = { + NULL, + }; + ++static void idxd_conf_group_release(struct device *dev) ++{ ++ struct idxd_group *group = container_of(dev, struct idxd_group, conf_dev); ++ ++ kfree(group); ++} ++ ++struct device_type idxd_group_device_type = { ++ .name = "group", ++ .release = idxd_conf_group_release, ++ .groups = idxd_group_attribute_groups, ++}; ++ + /* IDXD work queue attribs */ + static ssize_t wq_clients_show(struct device *dev, + struct device_attribute *attr, char *buf) +@@ -896,7 +859,7 @@ static ssize_t wq_group_id_store(struct device *dev, + return count; + } + +- group = &idxd->groups[id]; ++ group = idxd->groups[id]; + prevg = wq->group; + + if (prevg) +@@ -960,7 +923,7 @@ static int total_claimed_wq_size(struct idxd_device *idxd) + int wq_size = 0; + + for (i = 0; i < idxd->max_wqs; i++) { +- struct idxd_wq *wq = &idxd->wqs[i]; ++ struct idxd_wq *wq = idxd->wqs[i]; + + wq_size += wq->size; + } +@@ -1206,8 +1169,16 @@ static ssize_t wq_cdev_minor_show(struct device *dev, + struct device_attribute *attr, char *buf) + { + struct idxd_wq *wq = container_of(dev, struct idxd_wq, conf_dev); ++ int minor = -1; + +- return sprintf(buf, "%d\n", wq->idxd_cdev.minor); ++ mutex_lock(&wq->wq_lock); ++ if (wq->idxd_cdev) ++ minor = wq->idxd_cdev->minor; ++ mutex_unlock(&wq->wq_lock); ++ ++ if (minor == -1) ++ return -ENXIO; ++ return sysfs_emit(buf, "%d\n", minor); + } + + static struct device_attribute dev_attr_wq_cdev_minor = +@@ -1356,6 +1327,20 @@ static const struct attribute_group *idxd_wq_attribute_groups[] = { + NULL, + }; + ++static void idxd_conf_wq_release(struct device *dev) ++{ ++ struct idxd_wq *wq = container_of(dev, struct idxd_wq, conf_dev); ++ ++ kfree(wq->wqcfg); ++ kfree(wq); ++} ++ ++struct device_type idxd_wq_device_type = { ++ .name = "wq", ++ .release = idxd_conf_wq_release, ++ .groups = idxd_wq_attribute_groups, ++}; ++ + /* IDXD device attribs */ + static ssize_t version_show(struct device *dev, struct device_attribute *attr, + char *buf) +@@ -1486,7 +1471,7 @@ static ssize_t clients_show(struct device *dev, + + spin_lock_irqsave(&idxd->dev_lock, flags); + for (i = 0; i < idxd->max_wqs; i++) { +- struct idxd_wq *wq = &idxd->wqs[i]; ++ struct idxd_wq *wq = idxd->wqs[i]; + + count += wq->client_count; + } +@@ -1644,183 +1629,160 @@ static const struct attribute_group *idxd_attribute_groups[] = { + NULL, + }; + +-static int idxd_setup_engine_sysfs(struct idxd_device *idxd) ++static void idxd_conf_device_release(struct device *dev) + { +- struct device *dev = &idxd->pdev->dev; +- int i, rc; ++ struct idxd_device *idxd = container_of(dev, struct idxd_device, conf_dev); ++ ++ kfree(idxd->groups); ++ kfree(idxd->wqs); ++ kfree(idxd->engines); ++ kfree(idxd->irq_entries); ++ ida_free(idxd_ida(idxd), idxd->id); ++ kfree(idxd); ++} ++ ++struct device_type dsa_device_type = { ++ .name = "dsa", ++ .release = idxd_conf_device_release, ++ 
.groups = idxd_attribute_groups, ++}; ++ ++struct device_type iax_device_type = { ++ .name = "iax", ++ .release = idxd_conf_device_release, ++ .groups = idxd_attribute_groups, ++}; ++ ++static int idxd_register_engine_devices(struct idxd_device *idxd) ++{ ++ int i, j, rc; + + for (i = 0; i < idxd->max_engines; i++) { +- struct idxd_engine *engine = &idxd->engines[i]; +- +- engine->conf_dev.parent = &idxd->conf_dev; +- dev_set_name(&engine->conf_dev, "engine%d.%d", +- idxd->id, engine->id); +- engine->conf_dev.bus = idxd_get_bus_type(idxd); +- engine->conf_dev.groups = idxd_engine_attribute_groups; +- engine->conf_dev.type = &idxd_engine_device_type; +- dev_dbg(dev, "Engine device register: %s\n", +- dev_name(&engine->conf_dev)); +- rc = device_register(&engine->conf_dev); +- if (rc < 0) { +- put_device(&engine->conf_dev); ++ struct idxd_engine *engine = idxd->engines[i]; ++ ++ rc = device_add(&engine->conf_dev); ++ if (rc < 0) + goto cleanup; +- } + } + + return 0; + + cleanup: +- while (i--) { +- struct idxd_engine *engine = &idxd->engines[i]; ++ j = i - 1; ++ for (; i < idxd->max_engines; i++) ++ put_device(&idxd->engines[i]->conf_dev); + +- device_unregister(&engine->conf_dev); +- } ++ while (j--) ++ device_unregister(&idxd->engines[j]->conf_dev); + return rc; + } + +-static int idxd_setup_group_sysfs(struct idxd_device *idxd) ++static int idxd_register_group_devices(struct idxd_device *idxd) + { +- struct device *dev = &idxd->pdev->dev; +- int i, rc; ++ int i, j, rc; + + for (i = 0; i < idxd->max_groups; i++) { +- struct idxd_group *group = &idxd->groups[i]; +- +- group->conf_dev.parent = &idxd->conf_dev; +- dev_set_name(&group->conf_dev, "group%d.%d", +- idxd->id, group->id); +- group->conf_dev.bus = idxd_get_bus_type(idxd); +- group->conf_dev.groups = idxd_group_attribute_groups; +- group->conf_dev.type = &idxd_group_device_type; +- dev_dbg(dev, "Group device register: %s\n", +- dev_name(&group->conf_dev)); +- rc = device_register(&group->conf_dev); +- if (rc < 0) { +- put_device(&group->conf_dev); ++ struct idxd_group *group = idxd->groups[i]; ++ ++ rc = device_add(&group->conf_dev); ++ if (rc < 0) + goto cleanup; +- } + } + + return 0; + + cleanup: +- while (i--) { +- struct idxd_group *group = &idxd->groups[i]; ++ j = i - 1; ++ for (; i < idxd->max_groups; i++) ++ put_device(&idxd->groups[i]->conf_dev); + +- device_unregister(&group->conf_dev); +- } ++ while (j--) ++ device_unregister(&idxd->groups[j]->conf_dev); + return rc; + } + +-static int idxd_setup_wq_sysfs(struct idxd_device *idxd) ++static int idxd_register_wq_devices(struct idxd_device *idxd) + { +- struct device *dev = &idxd->pdev->dev; +- int i, rc; ++ int i, rc, j; + + for (i = 0; i < idxd->max_wqs; i++) { +- struct idxd_wq *wq = &idxd->wqs[i]; +- +- wq->conf_dev.parent = &idxd->conf_dev; +- dev_set_name(&wq->conf_dev, "wq%d.%d", idxd->id, wq->id); +- wq->conf_dev.bus = idxd_get_bus_type(idxd); +- wq->conf_dev.groups = idxd_wq_attribute_groups; +- wq->conf_dev.type = &idxd_wq_device_type; +- dev_dbg(dev, "WQ device register: %s\n", +- dev_name(&wq->conf_dev)); +- rc = device_register(&wq->conf_dev); +- if (rc < 0) { +- put_device(&wq->conf_dev); ++ struct idxd_wq *wq = idxd->wqs[i]; ++ ++ rc = device_add(&wq->conf_dev); ++ if (rc < 0) + goto cleanup; +- } + } + + return 0; + + cleanup: +- while (i--) { +- struct idxd_wq *wq = &idxd->wqs[i]; ++ j = i - 1; ++ for (; i < idxd->max_wqs; i++) ++ put_device(&idxd->wqs[i]->conf_dev); + +- device_unregister(&wq->conf_dev); +- } ++ while (j--) ++ 
device_unregister(&idxd->wqs[j]->conf_dev); + return rc; + } + +-static int idxd_setup_device_sysfs(struct idxd_device *idxd) ++int idxd_register_devices(struct idxd_device *idxd) + { + struct device *dev = &idxd->pdev->dev; +- int rc; +- char devname[IDXD_NAME_SIZE]; +- +- sprintf(devname, "%s%d", idxd_get_dev_name(idxd), idxd->id); +- idxd->conf_dev.parent = dev; +- dev_set_name(&idxd->conf_dev, "%s", devname); +- idxd->conf_dev.bus = idxd_get_bus_type(idxd); +- idxd->conf_dev.groups = idxd_attribute_groups; +- idxd->conf_dev.type = idxd_get_device_type(idxd); +- +- dev_dbg(dev, "IDXD device register: %s\n", dev_name(&idxd->conf_dev)); +- rc = device_register(&idxd->conf_dev); +- if (rc < 0) { +- put_device(&idxd->conf_dev); +- return rc; +- } ++ int rc, i; + +- return 0; +-} +- +-int idxd_setup_sysfs(struct idxd_device *idxd) +-{ +- struct device *dev = &idxd->pdev->dev; +- int rc; +- +- rc = idxd_setup_device_sysfs(idxd); +- if (rc < 0) { +- dev_dbg(dev, "Device sysfs registering failed: %d\n", rc); ++ rc = device_add(&idxd->conf_dev); ++ if (rc < 0) + return rc; +- } + +- rc = idxd_setup_wq_sysfs(idxd); ++ rc = idxd_register_wq_devices(idxd); + if (rc < 0) { +- /* unregister conf dev */ +- dev_dbg(dev, "Work Queue sysfs registering failed: %d\n", rc); +- return rc; ++ dev_dbg(dev, "WQ devices registering failed: %d\n", rc); ++ goto err_wq; + } + +- rc = idxd_setup_group_sysfs(idxd); ++ rc = idxd_register_engine_devices(idxd); + if (rc < 0) { +- /* unregister conf dev */ +- dev_dbg(dev, "Group sysfs registering failed: %d\n", rc); +- return rc; ++ dev_dbg(dev, "Engine devices registering failed: %d\n", rc); ++ goto err_engine; + } + +- rc = idxd_setup_engine_sysfs(idxd); ++ rc = idxd_register_group_devices(idxd); + if (rc < 0) { +- /* unregister conf dev */ +- dev_dbg(dev, "Engine sysfs registering failed: %d\n", rc); +- return rc; ++ dev_dbg(dev, "Group device registering failed: %d\n", rc); ++ goto err_group; + } + + return 0; ++ ++ err_group: ++ for (i = 0; i < idxd->max_engines; i++) ++ device_unregister(&idxd->engines[i]->conf_dev); ++ err_engine: ++ for (i = 0; i < idxd->max_wqs; i++) ++ device_unregister(&idxd->wqs[i]->conf_dev); ++ err_wq: ++ device_del(&idxd->conf_dev); ++ return rc; + } + +-void idxd_cleanup_sysfs(struct idxd_device *idxd) ++void idxd_unregister_devices(struct idxd_device *idxd) + { + int i; + + for (i = 0; i < idxd->max_wqs; i++) { +- struct idxd_wq *wq = &idxd->wqs[i]; ++ struct idxd_wq *wq = idxd->wqs[i]; + + device_unregister(&wq->conf_dev); + } + + for (i = 0; i < idxd->max_engines; i++) { +- struct idxd_engine *engine = &idxd->engines[i]; ++ struct idxd_engine *engine = idxd->engines[i]; + + device_unregister(&engine->conf_dev); + } + + for (i = 0; i < idxd->max_groups; i++) { +- struct idxd_group *group = &idxd->groups[i]; ++ struct idxd_group *group = idxd->groups[i]; + + device_unregister(&group->conf_dev); + } +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c +index 7645223ea0ef2..97c11aa47ad0b 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c +@@ -77,6 +77,8 @@ int amdgpu_ib_get(struct amdgpu_device *adev, struct amdgpu_vm *vm, + } + + ib->ptr = amdgpu_sa_bo_cpu_addr(ib->sa_bo); ++ /* flush the cache before commit the IB */ ++ ib->flags = AMDGPU_IB_FLAG_EMIT_MEM_SYNC; + + if (!vm) + ib->gpu_addr = amdgpu_sa_bo_gpu_addr(ib->sa_bo); +diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c 
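
The idxd_register_*_devices() helpers above split their failure cleanup into two halves: objects that were only initialized get put_device(), while objects that made it through device_add() need a full device_unregister(). A simplified sketch of that split in userspace C; the add/unregister functions are stand-ins and the loop bounds are illustrative rather than copied line-for-line from the patch:

#include <stdlib.h>

struct dev_like { int refcount; int added; };

static void put_dev(struct dev_like *d)
{
	if (--d->refcount == 0)
		free(d);
}

static void unregister_dev(struct dev_like *d)
{
	d->added = 0;                  /* the device_del() half */
	put_dev(d);                    /* drop the initial reference */
}

/* Try to register n devices; on failure, unwind both halves. */
static int register_all(struct dev_like **v, int n, int fail_at)
{
	int i;

	for (i = 0; i < n; i++) {
		if (i == fail_at)
			goto cleanup;  /* device_add() failed for v[i] */
		v[i]->added = 1;
	}
	return 0;

cleanup:
	/* v[i..n-1] were initialized but never added: just drop the ref */
	for (int j = i; j < n; j++)
		put_dev(v[j]);
	/* v[0..i-1] were fully added: tear them down completely */
	for (int j = i - 1; j >= 0; j--)
		unregister_dev(v[j]);
	return -1;
}

int main(void)
{
	struct dev_like *v[3];

	for (int i = 0; i < 3; i++) {
		v[i] = calloc(1, sizeof(*v[i]));
		if (!v[i])
			return 1;
		v[i]->refcount = 1;    /* the device_initialize() reference */
	}
	register_all(v, 3, 2);         /* exercises the cleanup path */
	return 0;
}
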
+index 0cdbfcd475ec9..71a15f68514b0 100644 +--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c ++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c +@@ -644,6 +644,7 @@ struct hdcp_workqueue *hdcp_create_workqueue(struct amdgpu_device *adev, struct + + /* File created at /sys/class/drm/card0/device/hdcp_srm*/ + hdcp_work[0].attr = data_attr; ++ sysfs_bin_attr_init(&hdcp_work[0].attr); + + if (sysfs_create_bin_file(&adev->dev->kobj, &hdcp_work[0].attr)) + DRM_WARN("Failed to create device file hdcp_srm"); +diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c +index c0b827d162684..4781279024a96 100644 +--- a/drivers/gpu/drm/amd/display/dc/core/dc.c ++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c +@@ -2546,6 +2546,10 @@ static void commit_planes_for_stream(struct dc *dc, + plane_state->triplebuffer_flips = true; + } + } ++ if (update_type == UPDATE_TYPE_FULL) { ++ /* force vsync flip when reconfiguring pipes to prevent underflow */ ++ plane_state->flip_immediate = false; ++ } + } + } + +diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hubp.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hubp.c +index bec7059f6d5d1..a1318c31bcfa5 100644 +--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hubp.c ++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hubp.c +@@ -1,5 +1,5 @@ + /* +- * Copyright 2012-17 Advanced Micro Devices, Inc. ++ * Copyright 2012-2021 Advanced Micro Devices, Inc. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), +@@ -181,11 +181,14 @@ void hubp2_vready_at_or_After_vsync(struct hubp *hubp, + else + Set HUBP_VREADY_AT_OR_AFTER_VSYNC = 0 + */ +- if ((pipe_dest->vstartup_start - (pipe_dest->vready_offset+pipe_dest->vupdate_width +- + pipe_dest->vupdate_offset) / pipe_dest->htotal) <= pipe_dest->vblank_end) { +- value = 1; +- } else +- value = 0; ++ if (pipe_dest->htotal != 0) { ++ if ((pipe_dest->vstartup_start - (pipe_dest->vready_offset+pipe_dest->vupdate_width ++ + pipe_dest->vupdate_offset) / pipe_dest->htotal) <= pipe_dest->vblank_end) { ++ value = 1; ++ } else ++ value = 0; ++ } ++ + REG_UPDATE(DCHUBP_CNTL, HUBP_VREADY_AT_OR_AFTER_VSYNC, value); + } + +diff --git a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c +index 904ce9b88088e..afbe8856468a0 100644 +--- a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c ++++ b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c +@@ -791,6 +791,8 @@ enum mod_hdcp_status mod_hdcp_hdcp2_validate_rx_id_list(struct mod_hdcp *hdcp) + TA_HDCP2_MSG_AUTHENTICATION_STATUS__RECEIVERID_REVOKED) { + hdcp->connection.is_hdcp2_revoked = 1; + status = MOD_HDCP_STATUS_HDCP2_RX_ID_LIST_REVOKED; ++ } else { ++ status = MOD_HDCP_STATUS_HDCP2_VALIDATE_RX_ID_LIST_FAILURE; + } + } + mutex_unlock(&psp->hdcp_context.mutex); +diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c +index 775d89b6c3fc4..5a51036325647 100644 +--- a/drivers/gpu/drm/i915/display/intel_dp.c ++++ b/drivers/gpu/drm/i915/display/intel_dp.c +@@ -1174,44 +1174,6 @@ intel_dp_compute_link_config_wide(struct intel_dp *intel_dp, + return -EINVAL; + } + +-/* Optimize link config in order: max bpp, min lanes, min clock */ +-static int +-intel_dp_compute_link_config_fast(struct intel_dp *intel_dp, +- struct intel_crtc_state *pipe_config, +- const struct link_config_limits *limits) +-{ +- const struct drm_display_mode 
*adjusted_mode = &pipe_config->hw.adjusted_mode;
+- int bpp, clock, lane_count;
+- int mode_rate, link_clock, link_avail;
+-
+- for (bpp = limits->max_bpp; bpp >= limits->min_bpp; bpp -= 2 * 3) {
+- int output_bpp = intel_dp_output_bpp(pipe_config->output_format, bpp);
+-
+- mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock,
+- output_bpp);
+-
+- for (lane_count = limits->min_lane_count;
+- lane_count <= limits->max_lane_count;
+- lane_count <<= 1) {
+- for (clock = limits->min_clock; clock <= limits->max_clock; clock++) {
+- link_clock = intel_dp->common_rates[clock];
+- link_avail = intel_dp_max_data_rate(link_clock,
+- lane_count);
+-
+- if (mode_rate <= link_avail) {
+- pipe_config->lane_count = lane_count;
+- pipe_config->pipe_bpp = bpp;
+- pipe_config->port_clock = link_clock;
+-
+- return 0;
+- }
+- }
+- }
+- }
+-
+- return -EINVAL;
+-}
+-
+ static int intel_dp_dsc_compute_bpp(struct intel_dp *intel_dp, u8 dsc_max_bpc)
+ {
+ int i, num_bpc;
+@@ -1461,22 +1423,11 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
+ intel_dp_can_bigjoiner(intel_dp))
+ pipe_config->bigjoiner = true;
+
+- if (intel_dp_is_edp(intel_dp))
+- /*
+- * Optimize for fast and narrow. eDP 1.3 section 3.3 and eDP 1.4
+- * section A.1: "It is recommended that the minimum number of
+- * lanes be used, using the minimum link rate allowed for that
+- * lane configuration."
+- *
+- * Note that we fall back to the max clock and lane count for eDP
+- * panels that fail with the fast optimal settings (see
+- * intel_dp->use_max_params), in which case the fast vs. wide
+- * choice doesn't matter.
+- */
+- ret = intel_dp_compute_link_config_fast(intel_dp, pipe_config, &limits);
+- else
+- /* Optimize for slow and wide. */
+- ret = intel_dp_compute_link_config_wide(intel_dp, pipe_config, &limits);
++ /*
++ * Optimize for slow and wide for everything, because there are some
++ * eDP 1.3 and 1.4 panels that don't work well with fast and narrow.
++ */ ++ ret = intel_dp_compute_link_config_wide(intel_dp, pipe_config, &limits); + + /* enable compression if the mode doesn't fit available BW */ + drm_dbg_kms(&i915->drm, "Force DSC en = %d\n", intel_dp->force_dsc_en); +diff --git a/drivers/gpu/drm/i915/display/intel_overlay.c b/drivers/gpu/drm/i915/display/intel_overlay.c +index f455040fa989c..7cbc81da80b72 100644 +--- a/drivers/gpu/drm/i915/display/intel_overlay.c ++++ b/drivers/gpu/drm/i915/display/intel_overlay.c +@@ -383,7 +383,7 @@ static void intel_overlay_off_tail(struct intel_overlay *overlay) + i830_overlay_clock_gating(dev_priv, true); + } + +-static void ++__i915_active_call static void + intel_overlay_last_flip_retire(struct i915_active *active) + { + struct intel_overlay *overlay = +diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c +index ec28a6cde49bd..0b2434e29d002 100644 +--- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c ++++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c +@@ -189,7 +189,7 @@ compute_partial_view(const struct drm_i915_gem_object *obj, + struct i915_ggtt_view view; + + if (i915_gem_object_is_tiled(obj)) +- chunk = roundup(chunk, tile_row_pages(obj)); ++ chunk = roundup(chunk, tile_row_pages(obj) ?: 1); + + view.type = I915_GGTT_VIEW_PARTIAL; + view.partial.offset = rounddown(page_offset, chunk); +diff --git a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c +index 755522ced60de..3ae16945bd436 100644 +--- a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c ++++ b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c +@@ -630,7 +630,6 @@ static int gen8_preallocate_top_level_pdp(struct i915_ppgtt *ppgtt) + + err = pin_pt_dma(vm, pde->pt.base); + if (err) { +- i915_gem_object_put(pde->pt.base); + free_pd(vm, pde); + return err; + } +diff --git a/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c b/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c +index 67de2b1895989..4b09490c20c0a 100644 +--- a/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c ++++ b/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c +@@ -670,8 +670,8 @@ static void detect_bit_6_swizzle(struct i915_ggtt *ggtt) + * banks of memory are paired and unswizzled on the + * uneven portion, so leave that as unknown. 
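
The compute_partial_view() fix above guards a division: roundup() divides by its second argument, and tile_row_pages() can evaluate to zero, so the GNU "?:" shorthand substitutes 1. A tiny standalone demo of that construct (a gcc/clang extension the kernel relies on; the values are invented):

#include <stdio.h>

#define roundup(x, y) ((((x) + (y) - 1) / (y)) * (y))

int main(void)
{
	unsigned int chunk = 13, step = 0;

	/* "step ?: 1" means "step if non-zero, else 1" */
	printf("%u\n", roundup(chunk, step ?: 1)); /* 13, no div by zero */
	return 0;
}
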
+ */ +- if (intel_uncore_read(uncore, C0DRB3) == +- intel_uncore_read(uncore, C1DRB3)) { ++ if (intel_uncore_read16(uncore, C0DRB3) == ++ intel_uncore_read16(uncore, C1DRB3)) { + swizzle_x = I915_BIT_6_SWIZZLE_9_10; + swizzle_y = I915_BIT_6_SWIZZLE_9; + } +diff --git a/drivers/gpu/drm/i915/i915_active.c b/drivers/gpu/drm/i915/i915_active.c +index 3bc616cc1ad23..ea660e541c909 100644 +--- a/drivers/gpu/drm/i915/i915_active.c ++++ b/drivers/gpu/drm/i915/i915_active.c +@@ -1156,7 +1156,8 @@ static int auto_active(struct i915_active *ref) + return 0; + } + +-static void auto_retire(struct i915_active *ref) ++__i915_active_call static void ++auto_retire(struct i915_active *ref) + { + i915_active_put(ref); + } +diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +index d553f62f4eeb8..b4d8e1b01ee4f 100644 +--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c ++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +@@ -1153,10 +1153,6 @@ static void a6xx_llc_slices_init(struct platform_device *pdev, + { + struct device_node *phandle; + +- a6xx_gpu->llc_mmio = msm_ioremap(pdev, "cx_mem", "gpu_cx"); +- if (IS_ERR(a6xx_gpu->llc_mmio)) +- return; +- + /* + * There is a different programming path for targets with an mmu500 + * attached, so detect if that is the case +@@ -1166,6 +1162,11 @@ static void a6xx_llc_slices_init(struct platform_device *pdev, + of_device_is_compatible(phandle, "arm,mmu-500")); + of_node_put(phandle); + ++ if (a6xx_gpu->have_mmu500) ++ a6xx_gpu->llc_mmio = NULL; ++ else ++ a6xx_gpu->llc_mmio = msm_ioremap(pdev, "cx_mem", "gpu_cx"); ++ + a6xx_gpu->llc_slice = llcc_slice_getd(LLCC_GPU); + a6xx_gpu->htw_llc_slice = llcc_slice_getd(LLCC_GPUHTW); + +diff --git a/drivers/gpu/drm/msm/dp/dp_audio.c b/drivers/gpu/drm/msm/dp/dp_audio.c +index 82a8673ab8daf..d7e4a39a904e2 100644 +--- a/drivers/gpu/drm/msm/dp/dp_audio.c ++++ b/drivers/gpu/drm/msm/dp/dp_audio.c +@@ -527,6 +527,7 @@ int dp_audio_hw_params(struct device *dev, + dp_audio_setup_acr(audio); + dp_audio_safe_to_exit_level(audio); + dp_audio_enable(audio, true); ++ dp_display_signal_audio_start(dp_display); + dp_display->audio_enabled = true; + + end: +diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c +index 5a39da6e1eaf2..1784e119269b7 100644 +--- a/drivers/gpu/drm/msm/dp/dp_display.c ++++ b/drivers/gpu/drm/msm/dp/dp_display.c +@@ -178,6 +178,15 @@ static int dp_del_event(struct dp_display_private *dp_priv, u32 event) + return 0; + } + ++void dp_display_signal_audio_start(struct msm_dp *dp_display) ++{ ++ struct dp_display_private *dp; ++ ++ dp = container_of(dp_display, struct dp_display_private, dp_display); ++ ++ reinit_completion(&dp->audio_comp); ++} ++ + void dp_display_signal_audio_complete(struct msm_dp *dp_display) + { + struct dp_display_private *dp; +@@ -586,10 +595,8 @@ static int dp_connect_pending_timeout(struct dp_display_private *dp, u32 data) + mutex_lock(&dp->event_mutex); + + state = dp->hpd_state; +- if (state == ST_CONNECT_PENDING) { +- dp_display_enable(dp, 0); ++ if (state == ST_CONNECT_PENDING) + dp->hpd_state = ST_CONNECTED; +- } + + mutex_unlock(&dp->event_mutex); + +@@ -651,7 +658,6 @@ static int dp_hpd_unplug_handle(struct dp_display_private *dp, u32 data) + dp_add_event(dp, EV_DISCONNECT_PENDING_TIMEOUT, 0, DP_TIMEOUT_5_SECOND); + + /* signal the disconnect event early to ensure proper teardown */ +- reinit_completion(&dp->audio_comp); + dp_display_handle_plugged_change(g_dp_display, false); + + dp_catalog_hpd_config_intr(dp->catalog, 
DP_DP_HPD_PLUG_INT_MASK | +@@ -669,10 +675,8 @@ static int dp_disconnect_pending_timeout(struct dp_display_private *dp, u32 data + mutex_lock(&dp->event_mutex); + + state = dp->hpd_state; +- if (state == ST_DISCONNECT_PENDING) { +- dp_display_disable(dp, 0); ++ if (state == ST_DISCONNECT_PENDING) + dp->hpd_state = ST_DISCONNECTED; +- } + + mutex_unlock(&dp->event_mutex); + +@@ -898,7 +902,6 @@ static int dp_display_disable(struct dp_display_private *dp, u32 data) + /* wait only if audio was enabled */ + if (dp_display->audio_enabled) { + /* signal the disconnect event */ +- reinit_completion(&dp->audio_comp); + dp_display_handle_plugged_change(dp_display, false); + if (!wait_for_completion_timeout(&dp->audio_comp, + HZ * 5)) +@@ -1272,7 +1275,12 @@ static int dp_pm_resume(struct device *dev) + + status = dp_catalog_link_is_connected(dp->catalog); + +- if (status) ++ /* ++ * can not declared display is connected unless ++ * HDMI cable is plugged in and sink_count of ++ * dongle become 1 ++ */ ++ if (status && dp->link->sink_count) + dp->dp_display.is_connected = true; + else + dp->dp_display.is_connected = false; +diff --git a/drivers/gpu/drm/msm/dp/dp_display.h b/drivers/gpu/drm/msm/dp/dp_display.h +index 6092ba1ed85ed..5173c89eedf7e 100644 +--- a/drivers/gpu/drm/msm/dp/dp_display.h ++++ b/drivers/gpu/drm/msm/dp/dp_display.h +@@ -34,6 +34,7 @@ int dp_display_get_modes(struct msm_dp *dp_display, + int dp_display_request_irq(struct msm_dp *dp_display); + bool dp_display_check_video_test(struct msm_dp *dp_display); + int dp_display_get_test_bpp(struct msm_dp *dp_display); ++void dp_display_signal_audio_start(struct msm_dp *dp_display); + void dp_display_signal_audio_complete(struct msm_dp *dp_display); + + #endif /* _DP_DISPLAY_H_ */ +diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h +index 3effc8c71494c..ea44423376c4d 100644 +--- a/drivers/gpu/drm/radeon/radeon.h ++++ b/drivers/gpu/drm/radeon/radeon.h +@@ -1558,6 +1558,7 @@ struct radeon_dpm { + void *priv; + u32 new_active_crtcs; + int new_active_crtc_count; ++ int high_pixelclock_count; + u32 current_active_crtcs; + int current_active_crtc_count; + bool single_display; +diff --git a/drivers/gpu/drm/radeon/radeon_atombios.c b/drivers/gpu/drm/radeon/radeon_atombios.c +index 42301b4e56f50..28c4413f4dc8d 100644 +--- a/drivers/gpu/drm/radeon/radeon_atombios.c ++++ b/drivers/gpu/drm/radeon/radeon_atombios.c +@@ -2120,11 +2120,14 @@ static int radeon_atombios_parse_power_table_1_3(struct radeon_device *rdev) + return state_index; + /* last mode is usually default, array is low to high */ + for (i = 0; i < num_modes; i++) { +- rdev->pm.power_state[state_index].clock_info = +- kcalloc(1, sizeof(struct radeon_pm_clock_info), +- GFP_KERNEL); ++ /* avoid memory leaks from invalid modes or unknown frev. */ ++ if (!rdev->pm.power_state[state_index].clock_info) { ++ rdev->pm.power_state[state_index].clock_info = ++ kzalloc(sizeof(struct radeon_pm_clock_info), ++ GFP_KERNEL); ++ } + if (!rdev->pm.power_state[state_index].clock_info) +- return state_index; ++ goto out; + rdev->pm.power_state[state_index].num_clock_modes = 1; + rdev->pm.power_state[state_index].clock_info[0].voltage.type = VOLTAGE_NONE; + switch (frev) { +@@ -2243,17 +2246,24 @@ static int radeon_atombios_parse_power_table_1_3(struct radeon_device *rdev) + break; + } + } ++out: ++ /* free any unused clock_info allocation. 
*/ ++ if (state_index && state_index < num_modes) { ++ kfree(rdev->pm.power_state[state_index].clock_info); ++ rdev->pm.power_state[state_index].clock_info = NULL; ++ } ++ + /* last mode is usually default */ +- if (rdev->pm.default_power_state_index == -1) { ++ if (state_index && rdev->pm.default_power_state_index == -1) { + rdev->pm.power_state[state_index - 1].type = + POWER_STATE_TYPE_DEFAULT; + rdev->pm.default_power_state_index = state_index - 1; + rdev->pm.power_state[state_index - 1].default_clock_mode = + &rdev->pm.power_state[state_index - 1].clock_info[0]; +- rdev->pm.power_state[state_index].flags &= ++ rdev->pm.power_state[state_index - 1].flags &= + ~RADEON_PM_STATE_SINGLE_DISPLAY_ONLY; +- rdev->pm.power_state[state_index].misc = 0; +- rdev->pm.power_state[state_index].misc2 = 0; ++ rdev->pm.power_state[state_index - 1].misc = 0; ++ rdev->pm.power_state[state_index - 1].misc2 = 0; + } + return state_index; + } +diff --git a/drivers/gpu/drm/radeon/radeon_pm.c b/drivers/gpu/drm/radeon/radeon_pm.c +index 1995dad59dd09..2db4a8b1542d3 100644 +--- a/drivers/gpu/drm/radeon/radeon_pm.c ++++ b/drivers/gpu/drm/radeon/radeon_pm.c +@@ -1775,6 +1775,7 @@ static void radeon_pm_compute_clocks_dpm(struct radeon_device *rdev) + struct drm_device *ddev = rdev->ddev; + struct drm_crtc *crtc; + struct radeon_crtc *radeon_crtc; ++ struct radeon_connector *radeon_connector; + + if (!rdev->pm.dpm_enabled) + return; +@@ -1784,6 +1785,7 @@ static void radeon_pm_compute_clocks_dpm(struct radeon_device *rdev) + /* update active crtc counts */ + rdev->pm.dpm.new_active_crtcs = 0; + rdev->pm.dpm.new_active_crtc_count = 0; ++ rdev->pm.dpm.high_pixelclock_count = 0; + if (rdev->num_crtc && rdev->mode_info.mode_config_initialized) { + list_for_each_entry(crtc, + &ddev->mode_config.crtc_list, head) { +@@ -1791,6 +1793,12 @@ static void radeon_pm_compute_clocks_dpm(struct radeon_device *rdev) + if (crtc->enabled) { + rdev->pm.dpm.new_active_crtcs |= (1 << radeon_crtc->crtc_id); + rdev->pm.dpm.new_active_crtc_count++; ++ if (!radeon_crtc->connector) ++ continue; ++ ++ radeon_connector = to_radeon_connector(radeon_crtc->connector); ++ if (radeon_connector->pixelclock_for_modeset > 297000) ++ rdev->pm.dpm.high_pixelclock_count++; + } + } + } +diff --git a/drivers/gpu/drm/radeon/si_dpm.c b/drivers/gpu/drm/radeon/si_dpm.c +index 91bfc4762767b..43b63705d0737 100644 +--- a/drivers/gpu/drm/radeon/si_dpm.c ++++ b/drivers/gpu/drm/radeon/si_dpm.c +@@ -2979,6 +2979,9 @@ static void si_apply_state_adjust_rules(struct radeon_device *rdev, + (rdev->pdev->device == 0x6605)) { + max_sclk = 75000; + } ++ ++ if (rdev->pm.dpm.high_pixelclock_count > 1) ++ disable_sclk_switching = true; + } + + if (rps->vce_active) { +diff --git a/drivers/hwmon/ltc2992.c b/drivers/hwmon/ltc2992.c +index 4382105bf1420..2a4bed0ab226b 100644 +--- a/drivers/hwmon/ltc2992.c ++++ b/drivers/hwmon/ltc2992.c +@@ -900,11 +900,15 @@ static int ltc2992_parse_dt(struct ltc2992_state *st) + + fwnode_for_each_available_child_node(fwnode, child) { + ret = fwnode_property_read_u32(child, "reg", &addr); +- if (ret < 0) ++ if (ret < 0) { ++ fwnode_handle_put(child); + return ret; ++ } + +- if (addr > 1) ++ if (addr > 1) { ++ fwnode_handle_put(child); + return -EINVAL; ++ } + + ret = fwnode_property_read_u32(child, "shunt-resistor-micro-ohms", &val); + if (!ret) +diff --git a/drivers/hwmon/occ/common.c b/drivers/hwmon/occ/common.c +index 7a5e539b567bf..580e63d7daa00 100644 +--- a/drivers/hwmon/occ/common.c ++++ b/drivers/hwmon/occ/common.c +@@ -217,9 +217,9 @@ int 
occ_update_response(struct occ *occ) + return rc; + + /* limit the maximum rate of polling the OCC */ +- if (time_after(jiffies, occ->last_update + OCC_UPDATE_FREQUENCY)) { ++ if (time_after(jiffies, occ->next_update)) { + rc = occ_poll(occ); +- occ->last_update = jiffies; ++ occ->next_update = jiffies + OCC_UPDATE_FREQUENCY; + } else { + rc = occ->last_error; + } +@@ -1164,6 +1164,7 @@ int occ_setup(struct occ *occ, const char *name) + return rc; + } + ++ occ->next_update = jiffies + OCC_UPDATE_FREQUENCY; + occ_parse_poll_response(occ); + + rc = occ_setup_sensor_attrs(occ); +diff --git a/drivers/hwmon/occ/common.h b/drivers/hwmon/occ/common.h +index 67e6968b8978e..e6df719770e81 100644 +--- a/drivers/hwmon/occ/common.h ++++ b/drivers/hwmon/occ/common.h +@@ -99,7 +99,7 @@ struct occ { + u8 poll_cmd_data; /* to perform OCC poll command */ + int (*send_cmd)(struct occ *occ, u8 *cmd); + +- unsigned long last_update; ++ unsigned long next_update; + struct mutex lock; /* lock OCC access */ + + struct device *hwmon; +diff --git a/drivers/hwtracing/coresight/coresight-platform.c b/drivers/hwtracing/coresight/coresight-platform.c +index 3629b7885aca9..c594f45319fc5 100644 +--- a/drivers/hwtracing/coresight/coresight-platform.c ++++ b/drivers/hwtracing/coresight/coresight-platform.c +@@ -90,6 +90,12 @@ static void of_coresight_get_ports_legacy(const struct device_node *node, + struct of_endpoint endpoint; + int in = 0, out = 0; + ++ /* ++ * Avoid warnings in of_graph_get_next_endpoint() ++ * if the device doesn't have any graph connections ++ */ ++ if (!of_graph_is_present(node)) ++ return; + do { + ep = of_graph_get_next_endpoint(node, ep); + if (!ep) +diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c +index 4acee6f9e5a30..99d446763530e 100644 +--- a/drivers/i2c/busses/i2c-i801.c ++++ b/drivers/i2c/busses/i2c-i801.c +@@ -73,6 +73,7 @@ + * Comet Lake-V (PCH) 0xa3a3 32 hard yes yes yes + * Alder Lake-S (PCH) 0x7aa3 32 hard yes yes yes + * Alder Lake-P (PCH) 0x51a3 32 hard yes yes yes ++ * Alder Lake-M (PCH) 0x54a3 32 hard yes yes yes + * + * Features supported by this driver: + * Software PEC no +@@ -230,6 +231,7 @@ + #define PCI_DEVICE_ID_INTEL_ELKHART_LAKE_SMBUS 0x4b23 + #define PCI_DEVICE_ID_INTEL_JASPER_LAKE_SMBUS 0x4da3 + #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_P_SMBUS 0x51a3 ++#define PCI_DEVICE_ID_INTEL_ALDER_LAKE_M_SMBUS 0x54a3 + #define PCI_DEVICE_ID_INTEL_BROXTON_SMBUS 0x5ad4 + #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_S_SMBUS 0x7aa3 + #define PCI_DEVICE_ID_INTEL_LYNXPOINT_SMBUS 0x8c22 +@@ -1087,6 +1089,7 @@ static const struct pci_device_id i801_ids[] = { + { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_JASPER_LAKE_SMBUS) }, + { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ALDER_LAKE_S_SMBUS) }, + { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ALDER_LAKE_P_SMBUS) }, ++ { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ALDER_LAKE_M_SMBUS) }, + { 0, } + }; + +@@ -1771,6 +1774,7 @@ static int i801_probe(struct pci_dev *dev, const struct pci_device_id *id) + case PCI_DEVICE_ID_INTEL_EBG_SMBUS: + case PCI_DEVICE_ID_INTEL_ALDER_LAKE_S_SMBUS: + case PCI_DEVICE_ID_INTEL_ALDER_LAKE_P_SMBUS: ++ case PCI_DEVICE_ID_INTEL_ALDER_LAKE_M_SMBUS: + priv->features |= FEATURE_BLOCK_PROC; + priv->features |= FEATURE_I2C_BLOCK_READ; + priv->features |= FEATURE_IRQ; +diff --git a/drivers/i2c/busses/i2c-imx.c b/drivers/i2c/busses/i2c-imx.c +index dc9c4b4cc25a1..dc5ca71906dbc 100644 +--- a/drivers/i2c/busses/i2c-imx.c ++++ b/drivers/i2c/busses/i2c-imx.c +@@ -801,7 
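
The OCC change above rate-limits polling by comparing jiffies against a precomputed next_update deadline with time_after(), which stays correct when the counter wraps around. A standalone sketch of the wraparound-safe comparison it relies on (plain C; the macro body mirrors the kernel's jiffies helper, the values are invented to force a wrap):

#include <stdio.h>

/* "a is after b" on a free-running unsigned counter */
#define time_after(a, b) ((long)((b) - (a)) < 0)

int main(void)
{
	unsigned long next_update = (unsigned long)-5; /* just before wrap */
	unsigned long now = 3;                         /* counter wrapped */

	/* a naive now > next_update is false; the safe form is true */
	printf("naive=%d safe=%d\n",
	       now > next_update, time_after(now, next_update));
	return 0;
}

Storing the next deadline (rather than the last update time) also means the comparison needs no addition at poll time, which is why the struct field was renamed.
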
+801,7 @@ static int i2c_imx_reg_slave(struct i2c_client *client) + i2c_imx->last_slave_event = I2C_SLAVE_STOP; + + /* Resume */ +- ret = pm_runtime_get_sync(i2c_imx->adapter.dev.parent); ++ ret = pm_runtime_resume_and_get(i2c_imx->adapter.dev.parent); + if (ret < 0) { + dev_err(&i2c_imx->adapter.dev, "failed to resume i2c controller"); + return ret; +diff --git a/drivers/i2c/busses/i2c-mt65xx.c b/drivers/i2c/busses/i2c-mt65xx.c +index 86f70c7513192..bf25acba2ed53 100644 +--- a/drivers/i2c/busses/i2c-mt65xx.c ++++ b/drivers/i2c/busses/i2c-mt65xx.c +@@ -564,7 +564,7 @@ static const struct i2c_spec_values *mtk_i2c_get_spec(unsigned int speed) + + static int mtk_i2c_max_step_cnt(unsigned int target_speed) + { +- if (target_speed > I2C_MAX_FAST_MODE_FREQ) ++ if (target_speed > I2C_MAX_FAST_MODE_PLUS_FREQ) + return MAX_HS_STEP_CNT_DIV; + else + return MAX_STEP_CNT_DIV; +@@ -635,7 +635,7 @@ static int mtk_i2c_check_ac_timing(struct mtk_i2c *i2c, + if (sda_min > sda_max) + return -3; + +- if (check_speed > I2C_MAX_FAST_MODE_FREQ) { ++ if (check_speed > I2C_MAX_FAST_MODE_PLUS_FREQ) { + if (i2c->dev_comp->ltiming_adjust) { + i2c->ac_timing.hs = I2C_TIME_DEFAULT_VALUE | + (sample_cnt << 12) | (high_cnt << 8); +@@ -850,7 +850,7 @@ static int mtk_i2c_do_transfer(struct mtk_i2c *i2c, struct i2c_msg *msgs, + + control_reg = mtk_i2c_readw(i2c, OFFSET_CONTROL) & + ~(I2C_CONTROL_DIR_CHANGE | I2C_CONTROL_RS); +- if ((i2c->speed_hz > I2C_MAX_FAST_MODE_FREQ) || (left_num >= 1)) ++ if ((i2c->speed_hz > I2C_MAX_FAST_MODE_PLUS_FREQ) || (left_num >= 1)) + control_reg |= I2C_CONTROL_RS; + + if (i2c->op == I2C_MASTER_WRRD) +@@ -1067,7 +1067,8 @@ static int mtk_i2c_transfer(struct i2c_adapter *adap, + } + } + +- if (i2c->auto_restart && num >= 2 && i2c->speed_hz > I2C_MAX_FAST_MODE_FREQ) ++ if (i2c->auto_restart && num >= 2 && ++ i2c->speed_hz > I2C_MAX_FAST_MODE_PLUS_FREQ) + /* ignore the first restart irq after the master code, + * otherwise the first transfer will be discarded. 
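
The i2c-imx one-liner above matters because pm_runtime_get_sync() increments the usage count even when the resume fails, so a caller that returns early on error leaks a reference; pm_runtime_resume_and_get() drops it internally on failure. A behavioural sketch of the difference (userspace C, simplified to a bare counter, not the real PM API):

#include <stdio.h>

static int usage;                      /* stand-in for the PM usage counter */

static int get_sync(int resume_ok)
{
	usage++;                       /* bumped unconditionally */
	return resume_ok ? 0 : -1;     /* error still leaves the count bumped */
}

static int resume_and_get(int resume_ok)
{
	int ret = get_sync(resume_ok);

	if (ret < 0)
		usage--;               /* failure path drops the reference */
	return ret;
}

int main(void)
{
	get_sync(0);                   /* fails, but usage is now 1: leaked */
	printf("after get_sync failure: %d\n", usage);
	usage = 0;
	resume_and_get(0);             /* fails, usage back to 0 */
	printf("after resume_and_get failure: %d\n", usage);
	return 0;
}
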
+ */ +diff --git a/drivers/i2c/i2c-dev.c b/drivers/i2c/i2c-dev.c +index 6ceb11cc4be18..6ef38a8ee95cb 100644 +--- a/drivers/i2c/i2c-dev.c ++++ b/drivers/i2c/i2c-dev.c +@@ -440,8 +440,13 @@ static long i2cdev_ioctl(struct file *file, unsigned int cmd, unsigned long arg) + sizeof(rdwr_arg))) + return -EFAULT; + +- /* Put an arbitrary limit on the number of messages that can +- * be sent at once */ ++ if (!rdwr_arg.msgs || rdwr_arg.nmsgs == 0) ++ return -EINVAL; ++ ++ /* ++ * Put an arbitrary limit on the number of messages that can ++ * be sent at once ++ */ + if (rdwr_arg.nmsgs > I2C_RDWR_IOCTL_MAX_MSGS) + return -EINVAL; + +diff --git a/drivers/iio/accel/Kconfig b/drivers/iio/accel/Kconfig +index 2e0c62c391550..8acf277b8b258 100644 +--- a/drivers/iio/accel/Kconfig ++++ b/drivers/iio/accel/Kconfig +@@ -211,7 +211,6 @@ config DMARD10 + config HID_SENSOR_ACCEL_3D + depends on HID_SENSOR_HUB + select IIO_BUFFER +- select IIO_TRIGGERED_BUFFER + select HID_SENSOR_IIO_COMMON + select HID_SENSOR_IIO_TRIGGER + tristate "HID Accelerometers 3D" +diff --git a/drivers/iio/common/hid-sensors/Kconfig b/drivers/iio/common/hid-sensors/Kconfig +index 24d4925673363..2a3dd3b907bee 100644 +--- a/drivers/iio/common/hid-sensors/Kconfig ++++ b/drivers/iio/common/hid-sensors/Kconfig +@@ -19,6 +19,7 @@ config HID_SENSOR_IIO_TRIGGER + tristate "Common module (trigger) for all HID Sensor IIO drivers" + depends on HID_SENSOR_HUB && HID_SENSOR_IIO_COMMON && IIO_BUFFER + select IIO_TRIGGER ++ select IIO_TRIGGERED_BUFFER + help + Say yes here to build trigger support for HID sensors. + Triggers will be send if all requested attributes were read. +diff --git a/drivers/iio/gyro/Kconfig b/drivers/iio/gyro/Kconfig +index 5824f2edf9758..20b5ac7ab66af 100644 +--- a/drivers/iio/gyro/Kconfig ++++ b/drivers/iio/gyro/Kconfig +@@ -111,7 +111,6 @@ config FXAS21002C_SPI + config HID_SENSOR_GYRO_3D + depends on HID_SENSOR_HUB + select IIO_BUFFER +- select IIO_TRIGGERED_BUFFER + select HID_SENSOR_IIO_COMMON + select HID_SENSOR_IIO_TRIGGER + tristate "HID Gyroscope 3D" +diff --git a/drivers/iio/gyro/mpu3050-core.c b/drivers/iio/gyro/mpu3050-core.c +index ac90be03332af..f17a935195352 100644 +--- a/drivers/iio/gyro/mpu3050-core.c ++++ b/drivers/iio/gyro/mpu3050-core.c +@@ -272,7 +272,16 @@ static int mpu3050_read_raw(struct iio_dev *indio_dev, + case IIO_CHAN_INFO_OFFSET: + switch (chan->type) { + case IIO_TEMP: +- /* The temperature scaling is (x+23000)/280 Celsius */ ++ /* ++ * The temperature scaling is (x+23000)/280 Celsius ++ * for the "best fit straight line" temperature range ++ * of -30C..85C. The 23000 includes room temperature ++ * offset of +35C, 280 is the precision scale and x is ++ * the 16-bit signed integer reported by hardware. ++ * ++ * Temperature value itself represents temperature of ++ * the sensor die. 
++ */ + *val = 23000; + return IIO_VAL_INT; + default: +@@ -329,7 +338,7 @@ static int mpu3050_read_raw(struct iio_dev *indio_dev, + goto out_read_raw_unlock; + } + +- *val = be16_to_cpu(raw_val); ++ *val = (s16)be16_to_cpu(raw_val); + ret = IIO_VAL_INT; + + goto out_read_raw_unlock; +diff --git a/drivers/iio/humidity/Kconfig b/drivers/iio/humidity/Kconfig +index 6549fcf6db698..2de5494e7c225 100644 +--- a/drivers/iio/humidity/Kconfig ++++ b/drivers/iio/humidity/Kconfig +@@ -52,7 +52,6 @@ config HID_SENSOR_HUMIDITY + tristate "HID Environmental humidity sensor" + depends on HID_SENSOR_HUB + select IIO_BUFFER +- select IIO_TRIGGERED_BUFFER + select HID_SENSOR_IIO_COMMON + select HID_SENSOR_IIO_TRIGGER + help +diff --git a/drivers/iio/industrialio-core.c b/drivers/iio/industrialio-core.c +index 7db761afa578c..36f3a900878db 100644 +--- a/drivers/iio/industrialio-core.c ++++ b/drivers/iio/industrialio-core.c +@@ -1734,7 +1734,6 @@ static long iio_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) + if (!indio_dev->info) + goto out_unlock; + +- ret = -EINVAL; + list_for_each_entry(h, &iio_dev_opaque->ioctl_handlers, entry) { + ret = h->ioctl(indio_dev, filp, cmd, arg); + if (ret != IIO_IOCTL_UNHANDLED) +@@ -1742,7 +1741,7 @@ static long iio_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) + } + + if (ret == IIO_IOCTL_UNHANDLED) +- ret = -EINVAL; ++ ret = -ENODEV; + + out_unlock: + mutex_unlock(&indio_dev->info_exist_lock); +@@ -1864,9 +1863,6 @@ EXPORT_SYMBOL(__iio_device_register); + **/ + void iio_device_unregister(struct iio_dev *indio_dev) + { +- struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(indio_dev); +- struct iio_ioctl_handler *h, *t; +- + cdev_device_del(&indio_dev->chrdev, &indio_dev->dev); + + mutex_lock(&indio_dev->info_exist_lock); +@@ -1877,9 +1873,6 @@ void iio_device_unregister(struct iio_dev *indio_dev) + + indio_dev->info = NULL; + +- list_for_each_entry_safe(h, t, &iio_dev_opaque->ioctl_handlers, entry) +- list_del(&h->entry); +- + iio_device_wakeup_eventset(indio_dev); + iio_buffer_wakeup_poll(indio_dev); + +diff --git a/drivers/iio/light/Kconfig b/drivers/iio/light/Kconfig +index 33ad4dd0b5c7b..917f9becf9c75 100644 +--- a/drivers/iio/light/Kconfig ++++ b/drivers/iio/light/Kconfig +@@ -256,7 +256,6 @@ config ISL29125 + config HID_SENSOR_ALS + depends on HID_SENSOR_HUB + select IIO_BUFFER +- select IIO_TRIGGERED_BUFFER + select HID_SENSOR_IIO_COMMON + select HID_SENSOR_IIO_TRIGGER + tristate "HID ALS" +@@ -270,7 +269,6 @@ config HID_SENSOR_ALS + config HID_SENSOR_PROX + depends on HID_SENSOR_HUB + select IIO_BUFFER +- select IIO_TRIGGERED_BUFFER + select HID_SENSOR_IIO_COMMON + select HID_SENSOR_IIO_TRIGGER + tristate "HID PROX" +diff --git a/drivers/iio/light/gp2ap002.c b/drivers/iio/light/gp2ap002.c +index 7ba7aa59437c3..040d8429a6e00 100644 +--- a/drivers/iio/light/gp2ap002.c ++++ b/drivers/iio/light/gp2ap002.c +@@ -583,7 +583,7 @@ static int gp2ap002_probe(struct i2c_client *client, + "gp2ap002", indio_dev); + if (ret) { + dev_err(dev, "unable to request IRQ\n"); +- goto out_disable_vio; ++ goto out_put_pm; + } + gp2ap002->irq = client->irq; + +@@ -613,8 +613,9 @@ static int gp2ap002_probe(struct i2c_client *client, + + return 0; + +-out_disable_pm: ++out_put_pm: + pm_runtime_put_noidle(dev); ++out_disable_pm: + pm_runtime_disable(dev); + out_disable_vio: + regulator_disable(gp2ap002->vio); +diff --git a/drivers/iio/light/tsl2583.c b/drivers/iio/light/tsl2583.c +index 0f787bfc88fc4..c9d8f07a6fcdd 100644 +--- a/drivers/iio/light/tsl2583.c 
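
The mpu3050 one-liner above fixes a sign bug: be16_to_cpu() yields an unsigned 16-bit value, and assigning it straight to an int loses the sign of negative raw samples; the (s16) cast forces sign extension first. A self-contained illustration in standalone C with stdint types (the raw bytes are invented):

#include <stdint.h>
#include <stdio.h>

/* Big-endian 16-bit buffer -> host-order unsigned (like be16_to_cpu) */
static uint16_t be16_to_host(const uint8_t b[2])
{
	return (uint16_t)((b[0] << 8) | b[1]);
}

int main(void)
{
	const uint8_t raw[2] = { 0xff, 0x38 }; /* -200 in two's complement */
	uint16_t u = be16_to_host(raw);

	printf("without cast: %d\n", (int)u);          /* 65336: wrong */
	printf("with cast:    %d\n", (int)(int16_t)u); /* -200: right */
	return 0;
}
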
++++ b/drivers/iio/light/tsl2583.c +@@ -341,6 +341,14 @@ static int tsl2583_als_calibrate(struct iio_dev *indio_dev) + return lux_val; + } + ++ /* Avoid division by zero of lux_value later on */ ++ if (lux_val == 0) { ++ dev_err(&chip->client->dev, ++ "%s: lux_val of 0 will produce out of range trim_value\n", ++ __func__); ++ return -ENODATA; ++ } ++ + gain_trim_val = (unsigned int)(((chip->als_settings.als_cal_target) + * chip->als_settings.als_gain_trim) / lux_val); + if ((gain_trim_val < 250) || (gain_trim_val > 4000)) { +diff --git a/drivers/iio/magnetometer/Kconfig b/drivers/iio/magnetometer/Kconfig +index 5d4ffd66032e9..74ad5701c6c29 100644 +--- a/drivers/iio/magnetometer/Kconfig ++++ b/drivers/iio/magnetometer/Kconfig +@@ -95,7 +95,6 @@ config MAG3110 + config HID_SENSOR_MAGNETOMETER_3D + depends on HID_SENSOR_HUB + select IIO_BUFFER +- select IIO_TRIGGERED_BUFFER + select HID_SENSOR_IIO_COMMON + select HID_SENSOR_IIO_TRIGGER + tristate "HID Magenetometer 3D" +diff --git a/drivers/iio/orientation/Kconfig b/drivers/iio/orientation/Kconfig +index a505583cc2fda..396cbbb867f4c 100644 +--- a/drivers/iio/orientation/Kconfig ++++ b/drivers/iio/orientation/Kconfig +@@ -9,7 +9,6 @@ menu "Inclinometer sensors" + config HID_SENSOR_INCLINOMETER_3D + depends on HID_SENSOR_HUB + select IIO_BUFFER +- select IIO_TRIGGERED_BUFFER + select HID_SENSOR_IIO_COMMON + select HID_SENSOR_IIO_TRIGGER + tristate "HID Inclinometer 3D" +@@ -20,7 +19,6 @@ config HID_SENSOR_INCLINOMETER_3D + config HID_SENSOR_DEVICE_ROTATION + depends on HID_SENSOR_HUB + select IIO_BUFFER +- select IIO_TRIGGERED_BUFFER + select HID_SENSOR_IIO_COMMON + select HID_SENSOR_IIO_TRIGGER + tristate "HID Device Rotation" +diff --git a/drivers/iio/pressure/Kconfig b/drivers/iio/pressure/Kconfig +index 689b978db4f95..fc0d3cfca4186 100644 +--- a/drivers/iio/pressure/Kconfig ++++ b/drivers/iio/pressure/Kconfig +@@ -79,7 +79,6 @@ config DPS310 + config HID_SENSOR_PRESS + depends on HID_SENSOR_HUB + select IIO_BUFFER +- select IIO_TRIGGERED_BUFFER + select HID_SENSOR_IIO_COMMON + select HID_SENSOR_IIO_TRIGGER + tristate "HID PRESS" +diff --git a/drivers/iio/proximity/pulsedlight-lidar-lite-v2.c b/drivers/iio/proximity/pulsedlight-lidar-lite-v2.c +index c685f10b5ae48..cc206bfa09c78 100644 +--- a/drivers/iio/proximity/pulsedlight-lidar-lite-v2.c ++++ b/drivers/iio/proximity/pulsedlight-lidar-lite-v2.c +@@ -160,6 +160,7 @@ static int lidar_get_measurement(struct lidar_data *data, u16 *reg) + ret = lidar_write_control(data, LIDAR_REG_CONTROL_ACQUIRE); + if (ret < 0) { + dev_err(&client->dev, "cannot send start measurement command"); ++ pm_runtime_put_noidle(&client->dev); + return ret; + } + +diff --git a/drivers/iio/temperature/Kconfig b/drivers/iio/temperature/Kconfig +index f1f2a1499c9e2..4df60082c1fa8 100644 +--- a/drivers/iio/temperature/Kconfig ++++ b/drivers/iio/temperature/Kconfig +@@ -45,7 +45,6 @@ config HID_SENSOR_TEMP + tristate "HID Environmental temperature sensor" + depends on HID_SENSOR_HUB + select IIO_BUFFER +- select IIO_TRIGGERED_BUFFER + select HID_SENSOR_IIO_COMMON + select HID_SENSOR_IIO_TRIGGER + help +diff --git a/drivers/infiniband/hw/hfi1/ipoib.h b/drivers/infiniband/hw/hfi1/ipoib.h +index f650cac9d424c..d30c23b6527aa 100644 +--- a/drivers/infiniband/hw/hfi1/ipoib.h ++++ b/drivers/infiniband/hw/hfi1/ipoib.h +@@ -52,8 +52,9 @@ union hfi1_ipoib_flow { + * @producer_lock: producer sync lock + * @consumer_lock: consumer sync lock + */ ++struct ipoib_txreq; + struct hfi1_ipoib_circ_buf { +- void **items; ++ struct ipoib_txreq 
**items; + unsigned long head; + unsigned long tail; + unsigned long max_items; +diff --git a/drivers/infiniband/hw/hfi1/ipoib_tx.c b/drivers/infiniband/hw/hfi1/ipoib_tx.c +index edd4eeac8dd1d..cdc26ee3cf52d 100644 +--- a/drivers/infiniband/hw/hfi1/ipoib_tx.c ++++ b/drivers/infiniband/hw/hfi1/ipoib_tx.c +@@ -702,14 +702,14 @@ int hfi1_ipoib_txreq_init(struct hfi1_ipoib_dev_priv *priv) + + priv->tx_napis = kcalloc_node(dev->num_tx_queues, + sizeof(struct napi_struct), +- GFP_ATOMIC, ++ GFP_KERNEL, + priv->dd->node); + if (!priv->tx_napis) + goto free_txreq_cache; + + priv->txqs = kcalloc_node(dev->num_tx_queues, + sizeof(struct hfi1_ipoib_txq), +- GFP_ATOMIC, ++ GFP_KERNEL, + priv->dd->node); + if (!priv->txqs) + goto free_tx_napis; +@@ -741,9 +741,9 @@ int hfi1_ipoib_txreq_init(struct hfi1_ipoib_dev_priv *priv) + priv->dd->node); + + txq->tx_ring.items = +- vzalloc_node(array_size(tx_ring_size, +- sizeof(struct ipoib_txreq)), +- priv->dd->node); ++ kcalloc_node(tx_ring_size, ++ sizeof(struct ipoib_txreq *), ++ GFP_KERNEL, priv->dd->node); + if (!txq->tx_ring.items) + goto free_txqs; + +@@ -764,7 +764,7 @@ free_txqs: + struct hfi1_ipoib_txq *txq = &priv->txqs[i]; + + netif_napi_del(txq->napi); +- vfree(txq->tx_ring.items); ++ kfree(txq->tx_ring.items); + } + + kfree(priv->txqs); +@@ -817,7 +817,7 @@ void hfi1_ipoib_txreq_deinit(struct hfi1_ipoib_dev_priv *priv) + hfi1_ipoib_drain_tx_list(txq); + netif_napi_del(txq->napi); + (void)hfi1_ipoib_drain_tx_ring(txq, txq->tx_ring.max_items); +- vfree(txq->tx_ring.items); ++ kfree(txq->tx_ring.items); + } + + kfree(priv->txqs); +diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c +index f7e31018cd0b9..df7b19ff0a9ed 100644 +--- a/drivers/iommu/amd/init.c ++++ b/drivers/iommu/amd/init.c +@@ -12,7 +12,6 @@ + #include + #include + #include +-#include + #include + #include + #include +@@ -257,8 +256,6 @@ static enum iommu_init_state init_state = IOMMU_START_STATE; + static int amd_iommu_enable_interrupts(void); + static int __init iommu_go_to_state(enum iommu_init_state state); + static void init_device_table_dma(void); +-static int iommu_pc_get_set_reg(struct amd_iommu *iommu, u8 bank, u8 cntr, +- u8 fxn, u64 *value, bool is_write); + + static bool amd_iommu_pre_enabled = true; + +@@ -1717,53 +1714,16 @@ static int __init init_iommu_all(struct acpi_table_header *table) + return 0; + } + +-static void __init init_iommu_perf_ctr(struct amd_iommu *iommu) ++static void init_iommu_perf_ctr(struct amd_iommu *iommu) + { +- int retry; ++ u64 val; + struct pci_dev *pdev = iommu->dev; +- u64 val = 0xabcd, val2 = 0, save_reg, save_src; + + if (!iommu_feature(iommu, FEATURE_PC)) + return; + + amd_iommu_pc_present = true; + +- /* save the value to restore, if writable */ +- if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &save_reg, false) || +- iommu_pc_get_set_reg(iommu, 0, 0, 8, &save_src, false)) +- goto pc_false; +- +- /* +- * Disable power gating by programing the performance counter +- * source to 20 (i.e. counts the reads and writes from/to IOMMU +- * Reserved Register [MMIO Offset 1FF8h] that are ignored.), +- * which never get incremented during this init phase. +- * (Note: The event is also deprecated.) 
+- */ +- val = 20; +- if (iommu_pc_get_set_reg(iommu, 0, 0, 8, &val, true)) +- goto pc_false; +- +- /* Check if the performance counters can be written to */ +- val = 0xabcd; +- for (retry = 5; retry; retry--) { +- if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &val, true) || +- iommu_pc_get_set_reg(iommu, 0, 0, 0, &val2, false) || +- val2) +- break; +- +- /* Wait about 20 msec for power gating to disable and retry. */ +- msleep(20); +- } +- +- /* restore */ +- if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &save_reg, true) || +- iommu_pc_get_set_reg(iommu, 0, 0, 8, &save_src, true)) +- goto pc_false; +- +- if (val != val2) +- goto pc_false; +- + pci_info(pdev, "IOMMU performance counters supported\n"); + + val = readl(iommu->mmio_base + MMIO_CNTR_CONF_OFFSET); +@@ -1771,11 +1731,6 @@ static void __init init_iommu_perf_ctr(struct amd_iommu *iommu) + iommu->max_counters = (u8) ((val >> 7) & 0xf); + + return; +- +-pc_false: +- pci_err(pdev, "Unable to read/write to IOMMU perf counter.\n"); +- amd_iommu_pc_present = false; +- return; + } + + static ssize_t amd_iommu_show_cap(struct device *dev, +diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +index 8594b4a830437..941ba5484731e 100644 +--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c ++++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +@@ -2305,6 +2305,9 @@ static void arm_smmu_iotlb_sync(struct iommu_domain *domain, + { + struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); + ++ if (!gather->pgsize) ++ return; ++ + arm_smmu_tlb_inv_range_domain(gather->start, + gather->end - gather->start + 1, + gather->pgsize, true, smmu_domain); +diff --git a/drivers/leds/blink/Kconfig b/drivers/leds/blink/Kconfig +index 265b53476a80f..6dedc58c47b3e 100644 +--- a/drivers/leds/blink/Kconfig ++++ b/drivers/leds/blink/Kconfig +@@ -9,6 +9,7 @@ if LEDS_BLINK + + config LEDS_BLINK_LGM + tristate "LED support for Intel LGM SoC series" ++ depends on GPIOLIB + depends on LEDS_CLASS + depends on MFD_SYSCON + depends on OF +diff --git a/drivers/net/can/dev/skb.c b/drivers/net/can/dev/skb.c +index 6a64fe410987e..c3508109263ec 100644 +--- a/drivers/net/can/dev/skb.c ++++ b/drivers/net/can/dev/skb.c +@@ -151,7 +151,11 @@ void can_free_echo_skb(struct net_device *dev, unsigned int idx) + { + struct can_priv *priv = netdev_priv(dev); + +- BUG_ON(idx >= priv->echo_skb_max); ++ if (idx >= priv->echo_skb_max) { ++ netdev_err(dev, "%s: BUG! 
Trying to access can_priv::echo_skb out of bounds (%u/max %u)\n", ++ __func__, idx, priv->echo_skb_max); ++ return; ++ } + + if (priv->echo_skb[idx]) { + dev_kfree_skb_any(priv->echo_skb[idx]); +diff --git a/drivers/net/can/m_can/m_can.c b/drivers/net/can/m_can/m_can.c +index 0c8d36bc668c8..f71127229caf7 100644 +--- a/drivers/net/can/m_can/m_can.c ++++ b/drivers/net/can/m_can/m_can.c +@@ -1455,6 +1455,8 @@ static netdev_tx_t m_can_tx_handler(struct m_can_classdev *cdev) + int i; + int putidx; + ++ cdev->tx_skb = NULL; ++ + /* Generate ID field for TX buffer Element */ + /* Common to all supported M_CAN versions */ + if (cf->can_id & CAN_EFF_FLAG) { +@@ -1571,7 +1573,6 @@ static void m_can_tx_work_queue(struct work_struct *ws) + tx_work); + + m_can_tx_handler(cdev); +- cdev->tx_skb = NULL; + } + + static netdev_tx_t m_can_start_xmit(struct sk_buff *skb, +diff --git a/drivers/net/can/spi/mcp251x.c b/drivers/net/can/spi/mcp251x.c +index a57da43680d8f..bd7d0251be104 100644 +--- a/drivers/net/can/spi/mcp251x.c ++++ b/drivers/net/can/spi/mcp251x.c +@@ -956,8 +956,6 @@ static int mcp251x_stop(struct net_device *net) + + priv->force_quit = 1; + free_irq(spi->irq, priv); +- destroy_workqueue(priv->wq); +- priv->wq = NULL; + + mutex_lock(&priv->mcp_lock); + +@@ -1224,24 +1222,15 @@ static int mcp251x_open(struct net_device *net) + goto out_close; + } + +- priv->wq = alloc_workqueue("mcp251x_wq", WQ_FREEZABLE | WQ_MEM_RECLAIM, +- 0); +- if (!priv->wq) { +- ret = -ENOMEM; +- goto out_clean; +- } +- INIT_WORK(&priv->tx_work, mcp251x_tx_work_handler); +- INIT_WORK(&priv->restart_work, mcp251x_restart_work_handler); +- + ret = mcp251x_hw_wake(spi); + if (ret) +- goto out_free_wq; ++ goto out_free_irq; + ret = mcp251x_setup(net, spi); + if (ret) +- goto out_free_wq; ++ goto out_free_irq; + ret = mcp251x_set_normal_mode(spi); + if (ret) +- goto out_free_wq; ++ goto out_free_irq; + + can_led_event(net, CAN_LED_EVENT_OPEN); + +@@ -1250,9 +1239,7 @@ static int mcp251x_open(struct net_device *net) + + return 0; + +-out_free_wq: +- destroy_workqueue(priv->wq); +-out_clean: ++out_free_irq: + free_irq(spi->irq, priv); + mcp251x_hw_sleep(spi); + out_close: +@@ -1373,6 +1360,15 @@ static int mcp251x_can_probe(struct spi_device *spi) + if (ret) + goto out_clk; + ++ priv->wq = alloc_workqueue("mcp251x_wq", WQ_FREEZABLE | WQ_MEM_RECLAIM, ++ 0); ++ if (!priv->wq) { ++ ret = -ENOMEM; ++ goto out_clk; ++ } ++ INIT_WORK(&priv->tx_work, mcp251x_tx_work_handler); ++ INIT_WORK(&priv->restart_work, mcp251x_restart_work_handler); ++ + priv->spi = spi; + mutex_init(&priv->mcp_lock); + +@@ -1417,6 +1413,8 @@ static int mcp251x_can_probe(struct spi_device *spi) + return 0; + + error_probe: ++ destroy_workqueue(priv->wq); ++ priv->wq = NULL; + mcp251x_power_enable(priv->power, 0); + + out_clk: +@@ -1438,6 +1436,9 @@ static int mcp251x_can_remove(struct spi_device *spi) + + mcp251x_power_enable(priv->power, 0); + ++ destroy_workqueue(priv->wq); ++ priv->wq = NULL; ++ + clk_disable_unprepare(priv->clk); + + free_candev(net); +diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c +index 799e9d5d34813..4a742aa5c417b 100644 +--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c ++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c +@@ -2856,8 +2856,8 @@ static int mcp251xfd_probe(struct spi_device *spi) + + clk = devm_clk_get(&spi->dev, NULL); + if (IS_ERR(clk)) +- dev_err_probe(&spi->dev, PTR_ERR(clk), +- "Failed to get Oscillator (clock)!\n"); ++ return dev_err_probe(&spi->dev, 
PTR_ERR(clk), ++ "Failed to get Oscillator (clock)!\n"); + freq = clk_get_rate(clk); + + /* Sanity check */ +@@ -2957,10 +2957,12 @@ static int mcp251xfd_probe(struct spi_device *spi) + + err = mcp251xfd_register(priv); + if (err) +- goto out_free_candev; ++ goto out_can_rx_offload_del; + + return 0; + ++ out_can_rx_offload_del: ++ can_rx_offload_del(&priv->offload); + out_free_candev: + spi->max_speed_hz = priv->spi_max_speed_hz_orig; + +diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c +index 73239d3eaca15..cf4249d59383c 100644 +--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c ++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c +@@ -122,7 +122,10 @@ enum board_idx { + NETXTREME_E_VF, + NETXTREME_C_VF, + NETXTREME_S_VF, ++ NETXTREME_C_VF_HV, ++ NETXTREME_E_VF_HV, + NETXTREME_E_P5_VF, ++ NETXTREME_E_P5_VF_HV, + }; + + /* indexed by enum above */ +@@ -170,7 +173,10 @@ static const struct { + [NETXTREME_E_VF] = { "Broadcom NetXtreme-E Ethernet Virtual Function" }, + [NETXTREME_C_VF] = { "Broadcom NetXtreme-C Ethernet Virtual Function" }, + [NETXTREME_S_VF] = { "Broadcom NetXtreme-S Ethernet Virtual Function" }, ++ [NETXTREME_C_VF_HV] = { "Broadcom NetXtreme-C Virtual Function for Hyper-V" }, ++ [NETXTREME_E_VF_HV] = { "Broadcom NetXtreme-E Virtual Function for Hyper-V" }, + [NETXTREME_E_P5_VF] = { "Broadcom BCM5750X NetXtreme-E Ethernet Virtual Function" }, ++ [NETXTREME_E_P5_VF_HV] = { "Broadcom BCM5750X NetXtreme-E Virtual Function for Hyper-V" }, + }; + + static const struct pci_device_id bnxt_pci_tbl[] = { +@@ -222,15 +228,25 @@ static const struct pci_device_id bnxt_pci_tbl[] = { + { PCI_VDEVICE(BROADCOM, 0xd804), .driver_data = BCM58804 }, + #ifdef CONFIG_BNXT_SRIOV + { PCI_VDEVICE(BROADCOM, 0x1606), .driver_data = NETXTREME_E_VF }, ++ { PCI_VDEVICE(BROADCOM, 0x1607), .driver_data = NETXTREME_E_VF_HV }, ++ { PCI_VDEVICE(BROADCOM, 0x1608), .driver_data = NETXTREME_E_VF_HV }, + { PCI_VDEVICE(BROADCOM, 0x1609), .driver_data = NETXTREME_E_VF }, ++ { PCI_VDEVICE(BROADCOM, 0x16bd), .driver_data = NETXTREME_E_VF_HV }, + { PCI_VDEVICE(BROADCOM, 0x16c1), .driver_data = NETXTREME_E_VF }, ++ { PCI_VDEVICE(BROADCOM, 0x16c2), .driver_data = NETXTREME_C_VF_HV }, ++ { PCI_VDEVICE(BROADCOM, 0x16c3), .driver_data = NETXTREME_C_VF_HV }, ++ { PCI_VDEVICE(BROADCOM, 0x16c4), .driver_data = NETXTREME_E_VF_HV }, ++ { PCI_VDEVICE(BROADCOM, 0x16c5), .driver_data = NETXTREME_E_VF_HV }, + { PCI_VDEVICE(BROADCOM, 0x16cb), .driver_data = NETXTREME_C_VF }, + { PCI_VDEVICE(BROADCOM, 0x16d3), .driver_data = NETXTREME_E_VF }, + { PCI_VDEVICE(BROADCOM, 0x16dc), .driver_data = NETXTREME_E_VF }, + { PCI_VDEVICE(BROADCOM, 0x16e1), .driver_data = NETXTREME_C_VF }, + { PCI_VDEVICE(BROADCOM, 0x16e5), .driver_data = NETXTREME_C_VF }, ++ { PCI_VDEVICE(BROADCOM, 0x16e6), .driver_data = NETXTREME_C_VF_HV }, + { PCI_VDEVICE(BROADCOM, 0x1806), .driver_data = NETXTREME_E_P5_VF }, + { PCI_VDEVICE(BROADCOM, 0x1807), .driver_data = NETXTREME_E_P5_VF }, ++ { PCI_VDEVICE(BROADCOM, 0x1808), .driver_data = NETXTREME_E_P5_VF_HV }, ++ { PCI_VDEVICE(BROADCOM, 0x1809), .driver_data = NETXTREME_E_P5_VF_HV }, + { PCI_VDEVICE(BROADCOM, 0xd800), .driver_data = NETXTREME_S_VF }, + #endif + { 0 } +@@ -265,7 +281,8 @@ static struct workqueue_struct *bnxt_pf_wq; + static bool bnxt_vf_pciid(enum board_idx idx) + { + return (idx == NETXTREME_C_VF || idx == NETXTREME_E_VF || +- idx == NETXTREME_S_VF || idx == NETXTREME_E_P5_VF); ++ idx == NETXTREME_S_VF || idx == NETXTREME_C_VF_HV || ++ idx == 
NETXTREME_E_VF_HV || idx == NETXTREME_E_P5_VF); + } + + #define DB_CP_REARM_FLAGS (DB_KEY_CP | DB_IDX_VALID) +diff --git a/drivers/net/ethernet/cisco/enic/enic_main.c b/drivers/net/ethernet/cisco/enic/enic_main.c +index f04ec53544ae5..b1443ff439de9 100644 +--- a/drivers/net/ethernet/cisco/enic/enic_main.c ++++ b/drivers/net/ethernet/cisco/enic/enic_main.c +@@ -768,7 +768,7 @@ static inline int enic_queue_wq_skb_encap(struct enic *enic, struct vnic_wq *wq, + return err; + } + +-static inline void enic_queue_wq_skb(struct enic *enic, ++static inline int enic_queue_wq_skb(struct enic *enic, + struct vnic_wq *wq, struct sk_buff *skb) + { + unsigned int mss = skb_shinfo(skb)->gso_size; +@@ -814,6 +814,7 @@ static inline void enic_queue_wq_skb(struct enic *enic, + wq->to_use = buf->next; + dev_kfree_skb(skb); + } ++ return err; + } + + /* netif_tx_lock held, process context with BHs disabled, or BH */ +@@ -857,7 +858,8 @@ static netdev_tx_t enic_hard_start_xmit(struct sk_buff *skb, + return NETDEV_TX_BUSY; + } + +- enic_queue_wq_skb(enic, wq, skb); ++ if (enic_queue_wq_skb(enic, wq, skb)) ++ goto error; + + if (vnic_wq_desc_avail(wq) < MAX_SKB_FRAGS + ENIC_DESC_MAX_SPLITS) + netif_tx_stop_queue(txq); +@@ -865,6 +867,7 @@ static netdev_tx_t enic_hard_start_xmit(struct sk_buff *skb, + if (!netdev_xmit_more() || netif_xmit_stopped(txq)) + vnic_wq_doorbell(wq); + ++error: + spin_unlock(&enic->wq_lock[txq_map]); + + return NETDEV_TX_OK; +diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c +index 3db882322b2bd..70aea9c274fed 100644 +--- a/drivers/net/ethernet/freescale/fec_main.c ++++ b/drivers/net/ethernet/freescale/fec_main.c +@@ -2048,6 +2048,8 @@ static int fec_enet_mii_probe(struct net_device *ndev) + fep->link = 0; + fep->full_duplex = 0; + ++ phy_dev->mac_managed_pm = 1; ++ + phy_attached_info(phy_dev); + + return 0; +@@ -3864,6 +3866,7 @@ static int __maybe_unused fec_resume(struct device *dev) + netif_device_attach(ndev); + netif_tx_unlock_bh(ndev); + napi_enable(&fep->napi); ++ phy_init_hw(ndev->phydev); + phy_start(ndev->phydev); + } + rtnl_unlock(); +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c +index 65752f363f431..0f70158c2551d 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c +@@ -576,8 +576,8 @@ static int hns3_nic_net_stop(struct net_device *netdev) + if (h->ae_algo->ops->set_timer_task) + h->ae_algo->ops->set_timer_task(priv->ae_handle, false); + +- netif_tx_stop_all_queues(netdev); + netif_carrier_off(netdev); ++ netif_tx_disable(netdev); + + hns3_nic_net_down(netdev); + +@@ -823,7 +823,7 @@ static int hns3_get_l4_protocol(struct sk_buff *skb, u8 *ol4_proto, + * and it is udp packet, which has a dest port as the IANA assigned. + * the hardware is expected to do the checksum offload, but the + * hardware will not do the checksum offload when udp dest port is +- * 4789 or 6081. ++ * 4789, 4790 or 6081. 
+ */ + static bool hns3_tunnel_csum_bug(struct sk_buff *skb) + { +@@ -841,7 +841,8 @@ static bool hns3_tunnel_csum_bug(struct sk_buff *skb) + + if (!(!skb->encapsulation && + (l4.udp->dest == htons(IANA_VXLAN_UDP_PORT) || +- l4.udp->dest == htons(GENEVE_UDP_PORT)))) ++ l4.udp->dest == htons(GENEVE_UDP_PORT) || ++ l4.udp->dest == htons(4790)))) + return false; + + skb_checksum_help(skb); +@@ -1277,23 +1278,21 @@ static unsigned int hns3_skb_bd_num(struct sk_buff *skb, unsigned int *bd_size, + } + + static unsigned int hns3_tx_bd_num(struct sk_buff *skb, unsigned int *bd_size, +- u8 max_non_tso_bd_num) ++ u8 max_non_tso_bd_num, unsigned int bd_num, ++ unsigned int recursion_level) + { ++#define HNS3_MAX_RECURSION_LEVEL 24 ++ + struct sk_buff *frag_skb; +- unsigned int bd_num = 0; + + /* If the total len is within the max bd limit */ +- if (likely(skb->len <= HNS3_MAX_BD_SIZE && !skb_has_frag_list(skb) && ++ if (likely(skb->len <= HNS3_MAX_BD_SIZE && !recursion_level && ++ !skb_has_frag_list(skb) && + skb_shinfo(skb)->nr_frags < max_non_tso_bd_num)) + return skb_shinfo(skb)->nr_frags + 1U; + +- /* The below case will always be linearized, return +- * HNS3_MAX_BD_NUM_TSO + 1U to make sure it is linearized. +- */ +- if (unlikely(skb->len > HNS3_MAX_TSO_SIZE || +- (!skb_is_gso(skb) && skb->len > +- HNS3_MAX_NON_TSO_SIZE(max_non_tso_bd_num)))) +- return HNS3_MAX_TSO_BD_NUM + 1U; ++ if (unlikely(recursion_level >= HNS3_MAX_RECURSION_LEVEL)) ++ return UINT_MAX; + + bd_num = hns3_skb_bd_num(skb, bd_size, bd_num); + +@@ -1301,7 +1300,8 @@ static unsigned int hns3_tx_bd_num(struct sk_buff *skb, unsigned int *bd_size, + return bd_num; + + skb_walk_frags(skb, frag_skb) { +- bd_num = hns3_skb_bd_num(frag_skb, bd_size, bd_num); ++ bd_num = hns3_tx_bd_num(frag_skb, bd_size, max_non_tso_bd_num, ++ bd_num, recursion_level + 1); + if (bd_num > HNS3_MAX_TSO_BD_NUM) + return bd_num; + } +@@ -1361,6 +1361,43 @@ void hns3_shinfo_pack(struct skb_shared_info *shinfo, __u32 *size) + size[i] = skb_frag_size(&shinfo->frags[i]); + } + ++static int hns3_skb_linearize(struct hns3_enet_ring *ring, ++ struct sk_buff *skb, ++ u8 max_non_tso_bd_num, ++ unsigned int bd_num) ++{ ++ /* 'bd_num == UINT_MAX' means the skb' fraglist has a ++ * recursion level of over HNS3_MAX_RECURSION_LEVEL. ++ */ ++ if (bd_num == UINT_MAX) { ++ u64_stats_update_begin(&ring->syncp); ++ ring->stats.over_max_recursion++; ++ u64_stats_update_end(&ring->syncp); ++ return -ENOMEM; ++ } ++ ++ /* The skb->len has exceeded the hw limitation, linearization ++ * will not help. 
++ */ ++ if (skb->len > HNS3_MAX_TSO_SIZE || ++ (!skb_is_gso(skb) && skb->len > ++ HNS3_MAX_NON_TSO_SIZE(max_non_tso_bd_num))) { ++ u64_stats_update_begin(&ring->syncp); ++ ring->stats.hw_limitation++; ++ u64_stats_update_end(&ring->syncp); ++ return -ENOMEM; ++ } ++ ++ if (__skb_linearize(skb)) { ++ u64_stats_update_begin(&ring->syncp); ++ ring->stats.sw_err_cnt++; ++ u64_stats_update_end(&ring->syncp); ++ return -ENOMEM; ++ } ++ ++ return 0; ++} ++ + static int hns3_nic_maybe_stop_tx(struct hns3_enet_ring *ring, + struct net_device *netdev, + struct sk_buff *skb) +@@ -1370,7 +1407,7 @@ static int hns3_nic_maybe_stop_tx(struct hns3_enet_ring *ring, + unsigned int bd_size[HNS3_MAX_TSO_BD_NUM + 1U]; + unsigned int bd_num; + +- bd_num = hns3_tx_bd_num(skb, bd_size, max_non_tso_bd_num); ++ bd_num = hns3_tx_bd_num(skb, bd_size, max_non_tso_bd_num, 0, 0); + if (unlikely(bd_num > max_non_tso_bd_num)) { + if (bd_num <= HNS3_MAX_TSO_BD_NUM && skb_is_gso(skb) && + !hns3_skb_need_linearized(skb, bd_size, bd_num, +@@ -1379,16 +1416,11 @@ static int hns3_nic_maybe_stop_tx(struct hns3_enet_ring *ring, + goto out; + } + +- if (__skb_linearize(skb)) ++ if (hns3_skb_linearize(ring, skb, max_non_tso_bd_num, ++ bd_num)) + return -ENOMEM; + + bd_num = hns3_tx_bd_count(skb->len); +- if ((skb_is_gso(skb) && bd_num > HNS3_MAX_TSO_BD_NUM) || +- (!skb_is_gso(skb) && +- bd_num > max_non_tso_bd_num)) { +- trace_hns3_over_max_bd(skb); +- return -ENOMEM; +- } + + u64_stats_update_begin(&ring->syncp); + ring->stats.tx_copy++; +@@ -1412,6 +1444,10 @@ out: + return bd_num; + } + ++ u64_stats_update_begin(&ring->syncp); ++ ring->stats.tx_busy++; ++ u64_stats_update_end(&ring->syncp); ++ + return -EBUSY; + } + +@@ -1459,6 +1495,7 @@ static int hns3_fill_skb_to_desc(struct hns3_enet_ring *ring, + struct sk_buff *skb, enum hns_desc_type type) + { + unsigned int size = skb_headlen(skb); ++ struct sk_buff *frag_skb; + int i, ret, bd_num = 0; + + if (size) { +@@ -1483,6 +1520,15 @@ static int hns3_fill_skb_to_desc(struct hns3_enet_ring *ring, + bd_num += ret; + } + ++ skb_walk_frags(skb, frag_skb) { ++ ret = hns3_fill_skb_to_desc(ring, frag_skb, ++ DESC_TYPE_FRAGLIST_SKB); ++ if (unlikely(ret < 0)) ++ return ret; ++ ++ bd_num += ret; ++ } ++ + return bd_num; + } + +@@ -1513,8 +1559,6 @@ netdev_tx_t hns3_nic_net_xmit(struct sk_buff *skb, struct net_device *netdev) + struct hns3_enet_ring *ring = &priv->ring[skb->queue_mapping]; + struct netdev_queue *dev_queue; + int pre_ntu, next_to_use_head; +- struct sk_buff *frag_skb; +- int bd_num = 0; + bool doorbell; + int ret; + +@@ -1530,15 +1574,8 @@ netdev_tx_t hns3_nic_net_xmit(struct sk_buff *skb, struct net_device *netdev) + ret = hns3_nic_maybe_stop_tx(ring, netdev, skb); + if (unlikely(ret <= 0)) { + if (ret == -EBUSY) { +- u64_stats_update_begin(&ring->syncp); +- ring->stats.tx_busy++; +- u64_stats_update_end(&ring->syncp); + hns3_tx_doorbell(ring, 0, true); + return NETDEV_TX_BUSY; +- } else if (ret == -ENOMEM) { +- u64_stats_update_begin(&ring->syncp); +- ring->stats.sw_err_cnt++; +- u64_stats_update_end(&ring->syncp); + } + + hns3_rl_err(netdev, "xmit error: %d!\n", ret); +@@ -1551,21 +1588,14 @@ netdev_tx_t hns3_nic_net_xmit(struct sk_buff *skb, struct net_device *netdev) + if (unlikely(ret < 0)) + goto fill_err; + ++ /* 'ret < 0' means filling error, 'ret == 0' means skb->len is ++ * zero, which is unlikely, and 'ret > 0' means how many tx desc ++ * need to be notified to the hw. 
++ */ + ret = hns3_fill_skb_to_desc(ring, skb, DESC_TYPE_SKB); +- if (unlikely(ret < 0)) ++ if (unlikely(ret <= 0)) + goto fill_err; + +- bd_num += ret; +- +- skb_walk_frags(skb, frag_skb) { +- ret = hns3_fill_skb_to_desc(ring, frag_skb, +- DESC_TYPE_FRAGLIST_SKB); +- if (unlikely(ret < 0)) +- goto fill_err; +- +- bd_num += ret; +- } +- + pre_ntu = ring->next_to_use ? (ring->next_to_use - 1) : + (ring->desc_num - 1); + ring->desc[pre_ntu].tx.bdtp_fe_sc_vld_ra_ri |= +@@ -1576,7 +1606,7 @@ netdev_tx_t hns3_nic_net_xmit(struct sk_buff *skb, struct net_device *netdev) + dev_queue = netdev_get_tx_queue(netdev, ring->queue_index); + doorbell = __netdev_tx_sent_queue(dev_queue, skb->len, + netdev_xmit_more()); +- hns3_tx_doorbell(ring, bd_num, doorbell); ++ hns3_tx_doorbell(ring, ret, doorbell); + + return NETDEV_TX_OK; + +@@ -1748,11 +1778,15 @@ static void hns3_nic_get_stats64(struct net_device *netdev, + tx_drop += ring->stats.tx_l4_proto_err; + tx_drop += ring->stats.tx_l2l3l4_err; + tx_drop += ring->stats.tx_tso_err; ++ tx_drop += ring->stats.over_max_recursion; ++ tx_drop += ring->stats.hw_limitation; + tx_errors += ring->stats.sw_err_cnt; + tx_errors += ring->stats.tx_vlan_err; + tx_errors += ring->stats.tx_l4_proto_err; + tx_errors += ring->stats.tx_l2l3l4_err; + tx_errors += ring->stats.tx_tso_err; ++ tx_errors += ring->stats.over_max_recursion; ++ tx_errors += ring->stats.hw_limitation; + } while (u64_stats_fetch_retry_irq(&ring->syncp, start)); + + /* fetch the rx stats */ +@@ -4555,6 +4589,11 @@ static int hns3_reset_notify_up_enet(struct hnae3_handle *handle) + struct hns3_nic_priv *priv = netdev_priv(kinfo->netdev); + int ret = 0; + ++ if (!test_bit(HNS3_NIC_STATE_INITED, &priv->state)) { ++ netdev_err(kinfo->netdev, "device is not initialized yet\n"); ++ return -EFAULT; ++ } ++ + clear_bit(HNS3_NIC_STATE_RESETTING, &priv->state); + + if (netif_running(kinfo->netdev)) { +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h +index d069b04ee5873..e44224e233150 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h +@@ -376,6 +376,8 @@ struct ring_stats { + u64 tx_l4_proto_err; + u64 tx_l2l3l4_err; + u64 tx_tso_err; ++ u64 over_max_recursion; ++ u64 hw_limitation; + }; + struct { + u64 rx_pkts; +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c +index adcec4ea7cb91..d20f2e2460178 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c +@@ -44,6 +44,8 @@ static const struct hns3_stats hns3_txq_stats[] = { + HNS3_TQP_STAT("l4_proto_err", tx_l4_proto_err), + HNS3_TQP_STAT("l2l3l4_err", tx_l2l3l4_err), + HNS3_TQP_STAT("tso_err", tx_tso_err), ++ HNS3_TQP_STAT("over_max_recursion", over_max_recursion), ++ HNS3_TQP_STAT("hw_limitation", hw_limitation), + }; + + #define HNS3_TXQ_STATS_COUNT ARRAY_SIZE(hns3_txq_stats) +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c +index 0ca7f1b984bfb..78d3eb142df83 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c +@@ -753,8 +753,9 @@ static int hclge_config_igu_egu_hw_err_int(struct hclge_dev *hdev, bool en) + + /* configure IGU,EGU error interrupts */ + hclge_cmd_setup_basic_desc(&desc, HCLGE_IGU_COMMON_INT_EN, false); ++ desc.data[0] = 
cpu_to_le32(HCLGE_IGU_ERR_INT_TYPE); + if (en) +- desc.data[0] = cpu_to_le32(HCLGE_IGU_ERR_INT_EN); ++ desc.data[0] |= cpu_to_le32(HCLGE_IGU_ERR_INT_EN); + + desc.data[1] = cpu_to_le32(HCLGE_IGU_ERR_INT_EN_MASK); + +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.h +index 608fe26fc3fed..d647f3c841345 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.h ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.h +@@ -32,7 +32,8 @@ + #define HCLGE_TQP_ECC_ERR_INT_EN_MASK 0x0FFF + #define HCLGE_MSIX_SRAM_ECC_ERR_INT_EN_MASK 0x0F000000 + #define HCLGE_MSIX_SRAM_ECC_ERR_INT_EN 0x0F000000 +-#define HCLGE_IGU_ERR_INT_EN 0x0000066F ++#define HCLGE_IGU_ERR_INT_EN 0x0000000F ++#define HCLGE_IGU_ERR_INT_TYPE 0x00000660 + #define HCLGE_IGU_ERR_INT_EN_MASK 0x000F + #define HCLGE_IGU_TNL_ERR_INT_EN 0x0002AABF + #define HCLGE_IGU_TNL_ERR_INT_EN_MASK 0x003F +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c +index b0dbe6dcaa7b5..7a560d0e19b9c 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c +@@ -11379,7 +11379,6 @@ static int hclge_get_64_bit_regs(struct hclge_dev *hdev, u32 regs_num, + #define REG_LEN_PER_LINE (REG_NUM_PER_LINE * sizeof(u32)) + #define REG_SEPARATOR_LINE 1 + #define REG_NUM_REMAIN_MASK 3 +-#define BD_LIST_MAX_NUM 30 + + int hclge_query_bd_num_cmd_send(struct hclge_dev *hdev, struct hclge_desc *desc) + { +@@ -11473,15 +11472,19 @@ static int hclge_get_dfx_reg_len(struct hclge_dev *hdev, int *len) + { + u32 dfx_reg_type_num = ARRAY_SIZE(hclge_dfx_bd_offset_list); + int data_len_per_desc, bd_num, i; +- int bd_num_list[BD_LIST_MAX_NUM]; ++ int *bd_num_list; + u32 data_len; + int ret; + ++ bd_num_list = kcalloc(dfx_reg_type_num, sizeof(int), GFP_KERNEL); ++ if (!bd_num_list) ++ return -ENOMEM; ++ + ret = hclge_get_dfx_reg_bd_num(hdev, bd_num_list, dfx_reg_type_num); + if (ret) { + dev_err(&hdev->pdev->dev, + "Get dfx reg bd num fail, status is %d.\n", ret); +- return ret; ++ goto out; + } + + data_len_per_desc = sizeof_field(struct hclge_desc, data); +@@ -11492,6 +11495,8 @@ static int hclge_get_dfx_reg_len(struct hclge_dev *hdev, int *len) + *len += (data_len / REG_LEN_PER_LINE + 1) * REG_LEN_PER_LINE; + } + ++out: ++ kfree(bd_num_list); + return ret; + } + +@@ -11499,16 +11504,20 @@ static int hclge_get_dfx_reg(struct hclge_dev *hdev, void *data) + { + u32 dfx_reg_type_num = ARRAY_SIZE(hclge_dfx_bd_offset_list); + int bd_num, bd_num_max, buf_len, i; +- int bd_num_list[BD_LIST_MAX_NUM]; + struct hclge_desc *desc_src; ++ int *bd_num_list; + u32 *reg = data; + int ret; + ++ bd_num_list = kcalloc(dfx_reg_type_num, sizeof(int), GFP_KERNEL); ++ if (!bd_num_list) ++ return -ENOMEM; ++ + ret = hclge_get_dfx_reg_bd_num(hdev, bd_num_list, dfx_reg_type_num); + if (ret) { + dev_err(&hdev->pdev->dev, + "Get dfx reg bd num fail, status is %d.\n", ret); +- return ret; ++ goto out; + } + + bd_num_max = bd_num_list[0]; +@@ -11517,8 +11526,10 @@ static int hclge_get_dfx_reg(struct hclge_dev *hdev, void *data) + + buf_len = sizeof(*desc_src) * bd_num_max; + desc_src = kzalloc(buf_len, GFP_KERNEL); +- if (!desc_src) +- return -ENOMEM; ++ if (!desc_src) { ++ ret = -ENOMEM; ++ goto out; ++ } + + for (i = 0; i < dfx_reg_type_num; i++) { + bd_num = bd_num_list[i]; +@@ -11534,6 +11545,8 @@ static int hclge_get_dfx_reg(struct hclge_dev *hdev, void *data) + } + + 
kfree(desc_src); ++out: ++ kfree(bd_num_list); + return ret; + } + +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c +index 51a36e74f0881..c3bb16b1f0600 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c +@@ -535,7 +535,7 @@ static void hclge_get_link_mode(struct hclge_vport *vport, + unsigned long advertising; + unsigned long supported; + unsigned long send_data; +- u8 msg_data[10]; ++ u8 msg_data[10] = {}; + u8 dest_vfid; + + advertising = hdev->hw.mac.advertising[0]; +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c +index e898207025406..c194bba187d6c 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c +@@ -255,6 +255,8 @@ void hclge_mac_start_phy(struct hclge_dev *hdev) + if (!phydev) + return; + ++ phy_loopback(phydev, false); ++ + phy_start(phydev); + } + +diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h +index 15f93b3550990..5069f690cf0b8 100644 +--- a/drivers/net/ethernet/intel/i40e/i40e.h ++++ b/drivers/net/ethernet/intel/i40e/i40e.h +@@ -1142,7 +1142,6 @@ static inline bool i40e_is_sw_dcb(struct i40e_pf *pf) + return !!(pf->flags & I40E_FLAG_DISABLE_FW_LLDP); + } + +-void i40e_set_lldp_forwarding(struct i40e_pf *pf, bool enable); + #ifdef CONFIG_I40E_DCB + void i40e_dcbnl_flush_apps(struct i40e_pf *pf, + struct i40e_dcbx_config *old_cfg, +diff --git a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h +index ce626eace692a..140b677f114db 100644 +--- a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h ++++ b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h +@@ -1566,8 +1566,10 @@ enum i40e_aq_phy_type { + I40E_PHY_TYPE_25GBASE_LR = 0x22, + I40E_PHY_TYPE_25GBASE_AOC = 0x23, + I40E_PHY_TYPE_25GBASE_ACC = 0x24, +- I40E_PHY_TYPE_2_5GBASE_T = 0x30, +- I40E_PHY_TYPE_5GBASE_T = 0x31, ++ I40E_PHY_TYPE_2_5GBASE_T = 0x26, ++ I40E_PHY_TYPE_5GBASE_T = 0x27, ++ I40E_PHY_TYPE_2_5GBASE_T_LINK_STATUS = 0x30, ++ I40E_PHY_TYPE_5GBASE_T_LINK_STATUS = 0x31, + I40E_PHY_TYPE_MAX, + I40E_PHY_TYPE_NOT_SUPPORTED_HIGH_TEMP = 0xFD, + I40E_PHY_TYPE_EMPTY = 0xFE, +diff --git a/drivers/net/ethernet/intel/i40e/i40e_client.c b/drivers/net/ethernet/intel/i40e/i40e_client.c +index a2dba32383f63..32f3facbed1a5 100644 +--- a/drivers/net/ethernet/intel/i40e/i40e_client.c ++++ b/drivers/net/ethernet/intel/i40e/i40e_client.c +@@ -375,6 +375,7 @@ void i40e_client_subtask(struct i40e_pf *pf) + clear_bit(__I40E_CLIENT_INSTANCE_OPENED, + &cdev->state); + i40e_client_del_instance(pf); ++ return; + } + } + } +diff --git a/drivers/net/ethernet/intel/i40e/i40e_common.c b/drivers/net/ethernet/intel/i40e/i40e_common.c +index ec19e18305ecf..ce35e064cf607 100644 +--- a/drivers/net/ethernet/intel/i40e/i40e_common.c ++++ b/drivers/net/ethernet/intel/i40e/i40e_common.c +@@ -1154,8 +1154,8 @@ static enum i40e_media_type i40e_get_media_type(struct i40e_hw *hw) + break; + case I40E_PHY_TYPE_100BASE_TX: + case I40E_PHY_TYPE_1000BASE_T: +- case I40E_PHY_TYPE_2_5GBASE_T: +- case I40E_PHY_TYPE_5GBASE_T: ++ case I40E_PHY_TYPE_2_5GBASE_T_LINK_STATUS: ++ case I40E_PHY_TYPE_5GBASE_T_LINK_STATUS: + case I40E_PHY_TYPE_10GBASE_T: + media = I40E_MEDIA_TYPE_BASET; + break; +diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c 
b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c +index 0e92668012e36..93dd58fda272f 100644 +--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c ++++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c +@@ -841,8 +841,8 @@ static void i40e_get_settings_link_up(struct i40e_hw *hw, + 10000baseT_Full); + break; + case I40E_PHY_TYPE_10GBASE_T: +- case I40E_PHY_TYPE_5GBASE_T: +- case I40E_PHY_TYPE_2_5GBASE_T: ++ case I40E_PHY_TYPE_5GBASE_T_LINK_STATUS: ++ case I40E_PHY_TYPE_2_5GBASE_T_LINK_STATUS: + case I40E_PHY_TYPE_1000BASE_T: + case I40E_PHY_TYPE_100BASE_TX: + ethtool_link_ksettings_add_link_mode(ks, supported, Autoneg); +@@ -1409,7 +1409,8 @@ static int i40e_set_fec_cfg(struct net_device *netdev, u8 fec_cfg) + + memset(&config, 0, sizeof(config)); + config.phy_type = abilities.phy_type; +- config.abilities = abilities.abilities; ++ config.abilities = abilities.abilities | ++ I40E_AQ_PHY_ENABLE_ATOMIC_LINK; + config.phy_type_ext = abilities.phy_type_ext; + config.link_speed = abilities.link_speed; + config.eee_capability = abilities.eee_capability; +@@ -5287,7 +5288,6 @@ flags_complete: + i40e_aq_cfg_lldp_mib_change_event(&pf->hw, false, NULL); + i40e_aq_stop_lldp(&pf->hw, true, false, NULL); + } else { +- i40e_set_lldp_forwarding(pf, false); + status = i40e_aq_start_lldp(&pf->hw, false, NULL); + if (status) { + adq_err = pf->hw.aq.asq_last_status; +diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c +index 527023ee4c076..ac4b44fc19f17 100644 +--- a/drivers/net/ethernet/intel/i40e/i40e_main.c ++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c +@@ -6878,40 +6878,6 @@ out: + } + #endif /* CONFIG_I40E_DCB */ + +-/** +- * i40e_set_lldp_forwarding - set forwarding of lldp frames +- * @pf: PF being configured +- * @enable: if forwarding to OS shall be enabled +- * +- * Toggle forwarding of lldp frames behavior, +- * When passing DCB control from firmware to software +- * lldp frames must be forwarded to the software based +- * lldp agent. 
+- */ +-void i40e_set_lldp_forwarding(struct i40e_pf *pf, bool enable) +-{ +- if (pf->lan_vsi == I40E_NO_VSI) +- return; +- +- if (!pf->vsi[pf->lan_vsi]) +- return; +- +- /* No need to check the outcome, commands may fail +- * if desired value is already set +- */ +- i40e_aq_add_rem_control_packet_filter(&pf->hw, NULL, ETH_P_LLDP, +- I40E_AQC_ADD_CONTROL_PACKET_FLAGS_TX | +- I40E_AQC_ADD_CONTROL_PACKET_FLAGS_IGNORE_MAC, +- pf->vsi[pf->lan_vsi]->seid, 0, +- enable, NULL, NULL); +- +- i40e_aq_add_rem_control_packet_filter(&pf->hw, NULL, ETH_P_LLDP, +- I40E_AQC_ADD_CONTROL_PACKET_FLAGS_RX | +- I40E_AQC_ADD_CONTROL_PACKET_FLAGS_IGNORE_MAC, +- pf->vsi[pf->lan_vsi]->seid, 0, +- enable, NULL, NULL); +-} +- + /** + * i40e_print_link_message - print link up or down + * @vsi: the VSI for which link needs a message +@@ -10735,10 +10701,6 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired) + */ + i40e_add_filter_to_drop_tx_flow_control_frames(&pf->hw, + pf->main_vsi_seid); +-#ifdef CONFIG_I40E_DCB +- if (pf->flags & I40E_FLAG_DISABLE_FW_LLDP) +- i40e_set_lldp_forwarding(pf, true); +-#endif /* CONFIG_I40E_DCB */ + + /* restart the VSIs that were rebuilt and running before the reset */ + i40e_pf_unquiesce_all_vsi(pf); +@@ -15753,10 +15715,6 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent) + */ + i40e_add_filter_to_drop_tx_flow_control_frames(&pf->hw, + pf->main_vsi_seid); +-#ifdef CONFIG_I40E_DCB +- if (pf->flags & I40E_FLAG_DISABLE_FW_LLDP) +- i40e_set_lldp_forwarding(pf, true); +-#endif /* CONFIG_I40E_DCB */ + + if ((pf->hw.device_id == I40E_DEV_ID_10G_BASE_T) || + (pf->hw.device_id == I40E_DEV_ID_10G_BASE_T4)) +diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c +index 06b4271219b14..70b515049540f 100644 +--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c ++++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c +@@ -1961,10 +1961,6 @@ static bool i40e_cleanup_headers(struct i40e_ring *rx_ring, struct sk_buff *skb, + union i40e_rx_desc *rx_desc) + + { +- /* XDP packets use error pointer so abort at this point */ +- if (IS_ERR(skb)) +- return true; +- + /* ERR_MASK will only have valid bits if EOP set, and + * what we are doing here is actually checking + * I40E_RX_DESC_ERROR_RXE_SHIFT, since it is the zeroth bit in +@@ -2534,7 +2530,7 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget) + } + + /* exit if we failed to retrieve a buffer */ +- if (!skb) { ++ if (!xdp_res && !skb) { + rx_ring->rx_stats.alloc_buff_failed++; + rx_buffer->pagecnt_bias++; + break; +@@ -2547,7 +2543,7 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget) + if (i40e_is_non_eop(rx_ring, rx_desc)) + continue; + +- if (i40e_cleanup_headers(rx_ring, skb, rx_desc)) { ++ if (xdp_res || i40e_cleanup_headers(rx_ring, skb, rx_desc)) { + skb = NULL; + continue; + } +diff --git a/drivers/net/ethernet/intel/i40e/i40e_type.h b/drivers/net/ethernet/intel/i40e/i40e_type.h +index 5c10faaca790e..c81109a63e90c 100644 +--- a/drivers/net/ethernet/intel/i40e/i40e_type.h ++++ b/drivers/net/ethernet/intel/i40e/i40e_type.h +@@ -239,11 +239,8 @@ struct i40e_phy_info { + #define I40E_CAP_PHY_TYPE_25GBASE_ACC BIT_ULL(I40E_PHY_TYPE_25GBASE_ACC + \ + I40E_PHY_TYPE_OFFSET) + /* Offset for 2.5G/5G PHY Types value to bit number conversion */ +-#define I40E_PHY_TYPE_OFFSET2 (-10) +-#define I40E_CAP_PHY_TYPE_2_5GBASE_T BIT_ULL(I40E_PHY_TYPE_2_5GBASE_T + \ +- I40E_PHY_TYPE_OFFSET2) +-#define I40E_CAP_PHY_TYPE_5GBASE_T 
BIT_ULL(I40E_PHY_TYPE_5GBASE_T + \ +- I40E_PHY_TYPE_OFFSET2) ++#define I40E_CAP_PHY_TYPE_2_5GBASE_T BIT_ULL(I40E_PHY_TYPE_2_5GBASE_T) ++#define I40E_CAP_PHY_TYPE_5GBASE_T BIT_ULL(I40E_PHY_TYPE_5GBASE_T) + #define I40E_HW_CAP_MAX_GPIO 30 + /* Capabilities of a PF or a VF or the whole device */ + struct i40e_hw_capabilities { +diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c +index dc5b3c06d1e01..ebd08543791bd 100644 +--- a/drivers/net/ethernet/intel/iavf/iavf_main.c ++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c +@@ -3899,8 +3899,6 @@ static void iavf_remove(struct pci_dev *pdev) + + iounmap(hw->hw_addr); + pci_release_regions(pdev); +- iavf_free_all_tx_resources(adapter); +- iavf_free_all_rx_resources(adapter); + iavf_free_queues(adapter); + kfree(adapter->vf_res); + spin_lock_bh(&adapter->mac_vlan_list_lock); +diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c +index d13c7fc8fb0a2..195d122c9cb22 100644 +--- a/drivers/net/ethernet/intel/ice/ice_lib.c ++++ b/drivers/net/ethernet/intel/ice/ice_lib.c +@@ -2818,38 +2818,46 @@ int ice_vsi_release(struct ice_vsi *vsi) + } + + /** +- * ice_vsi_rebuild_update_coalesce - set coalesce for a q_vector ++ * ice_vsi_rebuild_update_coalesce_intrl - set interrupt rate limit for a q_vector + * @q_vector: pointer to q_vector which is being updated +- * @coalesce: pointer to array of struct with stored coalesce ++ * @stored_intrl_setting: original INTRL setting + * + * Set coalesce param in q_vector and update these parameters in HW. + */ + static void +-ice_vsi_rebuild_update_coalesce(struct ice_q_vector *q_vector, +- struct ice_coalesce_stored *coalesce) ++ice_vsi_rebuild_update_coalesce_intrl(struct ice_q_vector *q_vector, ++ u16 stored_intrl_setting) + { +- struct ice_ring_container *rx_rc = &q_vector->rx; +- struct ice_ring_container *tx_rc = &q_vector->tx; + struct ice_hw *hw = &q_vector->vsi->back->hw; + +- tx_rc->itr_setting = coalesce->itr_tx; +- rx_rc->itr_setting = coalesce->itr_rx; +- +- /* dynamic ITR values will be updated during Tx/Rx */ +- if (!ITR_IS_DYNAMIC(tx_rc->itr_setting)) +- wr32(hw, GLINT_ITR(tx_rc->itr_idx, q_vector->reg_idx), +- ITR_REG_ALIGN(tx_rc->itr_setting) >> +- ICE_ITR_GRAN_S); +- if (!ITR_IS_DYNAMIC(rx_rc->itr_setting)) +- wr32(hw, GLINT_ITR(rx_rc->itr_idx, q_vector->reg_idx), +- ITR_REG_ALIGN(rx_rc->itr_setting) >> +- ICE_ITR_GRAN_S); +- +- q_vector->intrl = coalesce->intrl; ++ q_vector->intrl = stored_intrl_setting; + wr32(hw, GLINT_RATE(q_vector->reg_idx), + ice_intrl_usec_to_reg(q_vector->intrl, hw->intrl_gran)); + } + ++/** ++ * ice_vsi_rebuild_update_coalesce_itr - set coalesce for a q_vector ++ * @q_vector: pointer to q_vector which is being updated ++ * @rc: pointer to ring container ++ * @stored_itr_setting: original ITR setting ++ * ++ * Set coalesce param in q_vector and update these parameters in HW. 
++ */ ++static void ++ice_vsi_rebuild_update_coalesce_itr(struct ice_q_vector *q_vector, ++ struct ice_ring_container *rc, ++ u16 stored_itr_setting) ++{ ++ struct ice_hw *hw = &q_vector->vsi->back->hw; ++ ++ rc->itr_setting = stored_itr_setting; ++ ++ /* dynamic ITR values will be updated during Tx/Rx */ ++ if (!ITR_IS_DYNAMIC(rc->itr_setting)) ++ wr32(hw, GLINT_ITR(rc->itr_idx, q_vector->reg_idx), ++ ITR_REG_ALIGN(rc->itr_setting) >> ICE_ITR_GRAN_S); ++} ++ + /** + * ice_vsi_rebuild_get_coalesce - get coalesce from all q_vectors + * @vsi: VSI connected with q_vectors +@@ -2869,6 +2877,11 @@ ice_vsi_rebuild_get_coalesce(struct ice_vsi *vsi, + coalesce[i].itr_tx = q_vector->tx.itr_setting; + coalesce[i].itr_rx = q_vector->rx.itr_setting; + coalesce[i].intrl = q_vector->intrl; ++ ++ if (i < vsi->num_txq) ++ coalesce[i].tx_valid = true; ++ if (i < vsi->num_rxq) ++ coalesce[i].rx_valid = true; + } + + return vsi->num_q_vectors; +@@ -2893,17 +2906,59 @@ ice_vsi_rebuild_set_coalesce(struct ice_vsi *vsi, + if ((size && !coalesce) || !vsi) + return; + +- for (i = 0; i < size && i < vsi->num_q_vectors; i++) +- ice_vsi_rebuild_update_coalesce(vsi->q_vectors[i], +- &coalesce[i]); +- +- /* number of q_vectors increased, so assume coalesce settings were +- * changed globally (i.e. ethtool -C eth0 instead of per-queue) and use +- * the previous settings from q_vector 0 for all of the new q_vectors ++ /* There are a couple of cases that have to be handled here: ++ * 1. The case where the number of queue vectors stays the same, but ++ * the number of Tx or Rx rings changes (the first for loop) ++ * 2. The case where the number of queue vectors increased (the ++ * second for loop) + */ +- for (; i < vsi->num_q_vectors; i++) +- ice_vsi_rebuild_update_coalesce(vsi->q_vectors[i], +- &coalesce[0]); ++ for (i = 0; i < size && i < vsi->num_q_vectors; i++) { ++ /* There are 2 cases to handle here and they are the same for ++ * both Tx and Rx: ++ * if the entry was valid previously (coalesce[i].[tr]x_valid ++ * and the loop variable is less than the number of rings ++ * allocated, then write the previous values ++ * ++ * if the entry was not valid previously, but the number of ++ * rings is less than are allocated (this means the number of ++ * rings increased from previously), then write out the ++ * values in the first element ++ */ ++ if (i < vsi->alloc_rxq && coalesce[i].rx_valid) ++ ice_vsi_rebuild_update_coalesce_itr(vsi->q_vectors[i], ++ &vsi->q_vectors[i]->rx, ++ coalesce[i].itr_rx); ++ else if (i < vsi->alloc_rxq) ++ ice_vsi_rebuild_update_coalesce_itr(vsi->q_vectors[i], ++ &vsi->q_vectors[i]->rx, ++ coalesce[0].itr_rx); ++ ++ if (i < vsi->alloc_txq && coalesce[i].tx_valid) ++ ice_vsi_rebuild_update_coalesce_itr(vsi->q_vectors[i], ++ &vsi->q_vectors[i]->tx, ++ coalesce[i].itr_tx); ++ else if (i < vsi->alloc_txq) ++ ice_vsi_rebuild_update_coalesce_itr(vsi->q_vectors[i], ++ &vsi->q_vectors[i]->tx, ++ coalesce[0].itr_tx); ++ ++ ice_vsi_rebuild_update_coalesce_intrl(vsi->q_vectors[i], ++ coalesce[i].intrl); ++ } ++ ++ /* the number of queue vectors increased so write whatever is in ++ * the first element ++ */ ++ for (; i < vsi->num_q_vectors; i++) { ++ ice_vsi_rebuild_update_coalesce_itr(vsi->q_vectors[i], ++ &vsi->q_vectors[i]->tx, ++ coalesce[0].itr_tx); ++ ice_vsi_rebuild_update_coalesce_itr(vsi->q_vectors[i], ++ &vsi->q_vectors[i]->rx, ++ coalesce[0].itr_rx); ++ ice_vsi_rebuild_update_coalesce_intrl(vsi->q_vectors[i], ++ coalesce[0].intrl); ++ } + } + + /** +@@ -2932,9 +2987,11 @@ int 
ice_vsi_rebuild(struct ice_vsi *vsi, bool init_vsi) + + coalesce = kcalloc(vsi->num_q_vectors, + sizeof(struct ice_coalesce_stored), GFP_KERNEL); +- if (coalesce) +- prev_num_q_vectors = ice_vsi_rebuild_get_coalesce(vsi, +- coalesce); ++ if (!coalesce) ++ return -ENOMEM; ++ ++ prev_num_q_vectors = ice_vsi_rebuild_get_coalesce(vsi, coalesce); ++ + ice_rm_vsi_lan_cfg(vsi->port_info, vsi->idx); + ice_vsi_free_q_vectors(vsi); + +diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h +index 5dab77504fa5b..672a7ff0ee364 100644 +--- a/drivers/net/ethernet/intel/ice/ice_txrx.h ++++ b/drivers/net/ethernet/intel/ice/ice_txrx.h +@@ -351,6 +351,8 @@ struct ice_coalesce_stored { + u16 itr_tx; + u16 itr_rx; + u8 intrl; ++ u8 tx_valid; ++ u8 rx_valid; + }; + + /* iterator for handling rings in ring container */ +diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c +index 01d3ee4b58292..bcd5e7ae8482f 100644 +--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c ++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c +@@ -1319,7 +1319,7 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget, + skb->protocol = eth_type_trans(skb, netdev); + + if (netdev->features & NETIF_F_HW_VLAN_CTAG_RX && +- RX_DMA_VID(trxd.rxd3)) ++ (trxd.rxd2 & RX_DMA_VTAG)) + __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), + RX_DMA_VID(trxd.rxd3)); + skb_record_rx_queue(skb, 0); +diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h +index fd3cec8f06bae..c47272100615d 100644 +--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h ++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h +@@ -296,6 +296,7 @@ + #define RX_DMA_LSO BIT(30) + #define RX_DMA_PLEN0(_x) (((_x) & 0x3fff) << 16) + #define RX_DMA_GET_PLEN0(_x) (((_x) >> 16) & 0x3fff) ++#define RX_DMA_VTAG BIT(15) + + /* QDMA descriptor rxd3 */ + #define RX_DMA_VID(_x) ((_x) & 0xfff) +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c +index bdbffe484fce4..d2efe24559555 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c +@@ -576,7 +576,7 @@ static void mlx5e_tx_mpwqe_session_start(struct mlx5e_txqsq *sq, + + pi = mlx5e_txqsq_get_next_pi(sq, MLX5E_TX_MPW_MAX_WQEBBS); + wqe = MLX5E_TX_FETCH_WQE(sq, pi); +- prefetchw(wqe->data); ++ net_prefetchw(wqe->data); + + *session = (struct mlx5e_tx_mpwqe) { + .wqe = wqe, +diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c +index bf3250e0e59ca..749585fe6fc96 100644 +--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c ++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c +@@ -352,6 +352,8 @@ static int ipq806x_gmac_probe(struct platform_device *pdev) + plat_dat->bsp_priv = gmac; + plat_dat->fix_mac_speed = ipq806x_gmac_fix_mac_speed; + plat_dat->multicast_filter_bins = 0; ++ plat_dat->tx_fifo_size = 8192; ++ plat_dat->rx_fifo_size = 8192; + + err = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res); + if (err) +diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c +index 29f765a246a05..aaf37598cbd3c 100644 +--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c ++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c +@@ -638,6 +638,7 @@ static void dwmac4_set_filter(struct mac_device_info *hw, + value &= ~GMAC_PACKET_FILTER_PCF; + value &= 
~GMAC_PACKET_FILTER_PM; + value &= ~GMAC_PACKET_FILTER_PR; ++ value &= ~GMAC_PACKET_FILTER_RA; + if (dev->flags & IFF_PROMISC) { + /* VLAN Tag Filter Fail Packets Queuing */ + if (hw->vlan_fail_q_en) { +diff --git a/drivers/net/ipa/gsi.c b/drivers/net/ipa/gsi.c +index 390d3403386aa..144892060718e 100644 +--- a/drivers/net/ipa/gsi.c ++++ b/drivers/net/ipa/gsi.c +@@ -211,8 +211,8 @@ static void gsi_irq_setup(struct gsi *gsi) + iowrite32(0, gsi->virt + GSI_CNTXT_SRC_IEOB_IRQ_MSK_OFFSET); + + /* The inter-EE registers are in the non-adjusted address range */ +- iowrite32(0, gsi->virt_raw + GSI_INTER_EE_SRC_CH_IRQ_OFFSET); +- iowrite32(0, gsi->virt_raw + GSI_INTER_EE_SRC_EV_CH_IRQ_OFFSET); ++ iowrite32(0, gsi->virt_raw + GSI_INTER_EE_SRC_CH_IRQ_MSK_OFFSET); ++ iowrite32(0, gsi->virt_raw + GSI_INTER_EE_SRC_EV_CH_IRQ_MSK_OFFSET); + + iowrite32(0, gsi->virt + GSI_CNTXT_GSI_IRQ_EN_OFFSET); + } +diff --git a/drivers/net/ipa/gsi_reg.h b/drivers/net/ipa/gsi_reg.h +index 1622d8cf8dea4..48ef04afab79f 100644 +--- a/drivers/net/ipa/gsi_reg.h ++++ b/drivers/net/ipa/gsi_reg.h +@@ -53,15 +53,15 @@ + #define GSI_EE_REG_ADJUST 0x0000d000 /* IPA v4.5+ */ + + /* The two inter-EE IRQ register offsets are relative to gsi->virt_raw */ +-#define GSI_INTER_EE_SRC_CH_IRQ_OFFSET \ +- GSI_INTER_EE_N_SRC_CH_IRQ_OFFSET(GSI_EE_AP) +-#define GSI_INTER_EE_N_SRC_CH_IRQ_OFFSET(ee) \ +- (0x0000c018 + 0x1000 * (ee)) +- +-#define GSI_INTER_EE_SRC_EV_CH_IRQ_OFFSET \ +- GSI_INTER_EE_N_SRC_EV_CH_IRQ_OFFSET(GSI_EE_AP) +-#define GSI_INTER_EE_N_SRC_EV_CH_IRQ_OFFSET(ee) \ +- (0x0000c01c + 0x1000 * (ee)) ++#define GSI_INTER_EE_SRC_CH_IRQ_MSK_OFFSET \ ++ GSI_INTER_EE_N_SRC_CH_IRQ_MSK_OFFSET(GSI_EE_AP) ++#define GSI_INTER_EE_N_SRC_CH_IRQ_MSK_OFFSET(ee) \ ++ (0x0000c020 + 0x1000 * (ee)) ++ ++#define GSI_INTER_EE_SRC_EV_CH_IRQ_MSK_OFFSET \ ++ GSI_INTER_EE_N_SRC_EV_CH_IRQ_MSK_OFFSET(GSI_EE_AP) ++#define GSI_INTER_EE_N_SRC_EV_CH_IRQ_MSK_OFFSET(ee) \ ++ (0x0000c024 + 0x1000 * (ee)) + + /* All other register offsets are relative to gsi->virt */ + #define GSI_CH_C_CNTXT_0_OFFSET(ch) \ +diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c +index cc38e326405a6..af2e1759b5231 100644 +--- a/drivers/net/phy/phy_device.c ++++ b/drivers/net/phy/phy_device.c +@@ -273,6 +273,9 @@ static __maybe_unused int mdio_bus_phy_suspend(struct device *dev) + { + struct phy_device *phydev = to_phy_device(dev); + ++ if (phydev->mac_managed_pm) ++ return 0; ++ + /* We must stop the state machine manually, otherwise it stops out of + * control, possibly with the phydev->lock held. 
Upon resume, netdev + * may call phy routines that try to grab the same lock, and that may +@@ -294,6 +297,9 @@ static __maybe_unused int mdio_bus_phy_resume(struct device *dev) + struct phy_device *phydev = to_phy_device(dev); + int ret; + ++ if (phydev->mac_managed_pm) ++ return 0; ++ + if (!phydev->suspended_by_mdio_bus) + goto no_resume; + +diff --git a/drivers/net/wireless/ath/ath11k/wmi.c b/drivers/net/wireless/ath/ath11k/wmi.c +index cccfd3bd4d27e..ca5cda890d58e 100644 +--- a/drivers/net/wireless/ath/ath11k/wmi.c ++++ b/drivers/net/wireless/ath/ath11k/wmi.c +@@ -5417,31 +5417,6 @@ int ath11k_wmi_pull_fw_stats(struct ath11k_base *ab, struct sk_buff *skb, + return 0; + } + +-static int +-ath11k_pull_pdev_temp_ev(struct ath11k_base *ab, u8 *evt_buf, +- u32 len, const struct wmi_pdev_temperature_event *ev) +-{ +- const void **tb; +- int ret; +- +- tb = ath11k_wmi_tlv_parse_alloc(ab, evt_buf, len, GFP_ATOMIC); +- if (IS_ERR(tb)) { +- ret = PTR_ERR(tb); +- ath11k_warn(ab, "failed to parse tlv: %d\n", ret); +- return ret; +- } +- +- ev = tb[WMI_TAG_PDEV_TEMPERATURE_EVENT]; +- if (!ev) { +- ath11k_warn(ab, "failed to fetch pdev temp ev"); +- kfree(tb); +- return -EPROTO; +- } +- +- kfree(tb); +- return 0; +-} +- + size_t ath11k_wmi_fw_stats_num_vdevs(struct list_head *head) + { + struct ath11k_fw_stats_vdev *i; +@@ -6849,23 +6824,37 @@ ath11k_wmi_pdev_temperature_event(struct ath11k_base *ab, + struct sk_buff *skb) + { + struct ath11k *ar; +- struct wmi_pdev_temperature_event ev = {0}; ++ const void **tb; ++ const struct wmi_pdev_temperature_event *ev; ++ int ret; ++ ++ tb = ath11k_wmi_tlv_parse_alloc(ab, skb->data, skb->len, GFP_ATOMIC); ++ if (IS_ERR(tb)) { ++ ret = PTR_ERR(tb); ++ ath11k_warn(ab, "failed to parse tlv: %d\n", ret); ++ return; ++ } + +- if (ath11k_pull_pdev_temp_ev(ab, skb->data, skb->len, &ev) != 0) { +- ath11k_warn(ab, "failed to extract pdev temperature event"); ++ ev = tb[WMI_TAG_PDEV_TEMPERATURE_EVENT]; ++ if (!ev) { ++ ath11k_warn(ab, "failed to fetch pdev temp ev"); ++ kfree(tb); + return; + } + + ath11k_dbg(ab, ATH11K_DBG_WMI, +- "pdev temperature ev temp %d pdev_id %d\n", ev.temp, ev.pdev_id); ++ "pdev temperature ev temp %d pdev_id %d\n", ev->temp, ev->pdev_id); + +- ar = ath11k_mac_get_ar_by_pdev_id(ab, ev.pdev_id); ++ ar = ath11k_mac_get_ar_by_pdev_id(ab, ev->pdev_id); + if (!ar) { +- ath11k_warn(ab, "invalid pdev id in pdev temperature ev %d", ev.pdev_id); ++ ath11k_warn(ab, "invalid pdev id in pdev temperature ev %d", ev->pdev_id); ++ kfree(tb); + return; + } + +- ath11k_thermal_event_temperature(ar, ev.temp); ++ ath11k_thermal_event_temperature(ar, ev->temp); ++ ++ kfree(tb); + } + + static void ath11k_fils_discovery_event(struct ath11k_base *ab, +diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-trans.c b/drivers/net/wireless/intel/iwlwifi/iwl-trans.c +index 60e0db4a5e201..9236f91068261 100644 +--- a/drivers/net/wireless/intel/iwlwifi/iwl-trans.c ++++ b/drivers/net/wireless/intel/iwlwifi/iwl-trans.c +@@ -2,7 +2,7 @@ + /* + * Copyright (C) 2015 Intel Mobile Communications GmbH + * Copyright (C) 2016-2017 Intel Deutschland GmbH +- * Copyright (C) 2019-2020 Intel Corporation ++ * Copyright (C) 2019-2021 Intel Corporation + */ + #include + #include +@@ -21,7 +21,6 @@ struct iwl_trans *iwl_trans_alloc(unsigned int priv_size, + const struct iwl_cfg_trans_params *cfg_trans) + { + struct iwl_trans *trans; +- int txcmd_size, txcmd_align; + #ifdef CONFIG_LOCKDEP + static struct lock_class_key __key; + #endif +@@ -31,10 +30,40 @@ struct iwl_trans 
*iwl_trans_alloc(unsigned int priv_size, + return NULL; + + trans->trans_cfg = cfg_trans; +- if (!cfg_trans->gen2) { ++ ++#ifdef CONFIG_LOCKDEP ++ lockdep_init_map(&trans->sync_cmd_lockdep_map, "sync_cmd_lockdep_map", ++ &__key, 0); ++#endif ++ ++ trans->dev = dev; ++ trans->ops = ops; ++ trans->num_rx_queues = 1; ++ ++ WARN_ON(!ops->wait_txq_empty && !ops->wait_tx_queues_empty); ++ ++ if (trans->trans_cfg->use_tfh) { ++ trans->txqs.tfd.addr_size = 64; ++ trans->txqs.tfd.max_tbs = IWL_TFH_NUM_TBS; ++ trans->txqs.tfd.size = sizeof(struct iwl_tfh_tfd); ++ } else { ++ trans->txqs.tfd.addr_size = 36; ++ trans->txqs.tfd.max_tbs = IWL_NUM_OF_TBS; ++ trans->txqs.tfd.size = sizeof(struct iwl_tfd); ++ } ++ trans->max_skb_frags = IWL_TRANS_MAX_FRAGS(trans); ++ ++ return trans; ++} ++ ++int iwl_trans_init(struct iwl_trans *trans) ++{ ++ int txcmd_size, txcmd_align; ++ ++ if (!trans->trans_cfg->gen2) { + txcmd_size = sizeof(struct iwl_tx_cmd); + txcmd_align = sizeof(void *); +- } else if (cfg_trans->device_family < IWL_DEVICE_FAMILY_AX210) { ++ } else if (trans->trans_cfg->device_family < IWL_DEVICE_FAMILY_AX210) { + txcmd_size = sizeof(struct iwl_tx_cmd_gen2); + txcmd_align = 64; + } else { +@@ -46,17 +75,8 @@ struct iwl_trans *iwl_trans_alloc(unsigned int priv_size, + txcmd_size += 36; /* biggest possible 802.11 header */ + + /* Ensure device TX cmd cannot reach/cross a page boundary in gen2 */ +- if (WARN_ON(cfg_trans->gen2 && txcmd_size >= txcmd_align)) +- return ERR_PTR(-EINVAL); +- +-#ifdef CONFIG_LOCKDEP +- lockdep_init_map(&trans->sync_cmd_lockdep_map, "sync_cmd_lockdep_map", +- &__key, 0); +-#endif +- +- trans->dev = dev; +- trans->ops = ops; +- trans->num_rx_queues = 1; ++ if (WARN_ON(trans->trans_cfg->gen2 && txcmd_size >= txcmd_align)) ++ return -EINVAL; + + if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210) + trans->txqs.bc_tbl_size = sizeof(struct iwl_gen3_bc_tbl); +@@ -68,23 +88,16 @@ struct iwl_trans *iwl_trans_alloc(unsigned int priv_size, + * allocate here. + */ + if (trans->trans_cfg->gen2) { +- trans->txqs.bc_pool = dmam_pool_create("iwlwifi:bc", dev, ++ trans->txqs.bc_pool = dmam_pool_create("iwlwifi:bc", trans->dev, + trans->txqs.bc_tbl_size, + 256, 0); + if (!trans->txqs.bc_pool) +- return NULL; ++ return -ENOMEM; + } + +- if (trans->trans_cfg->use_tfh) { +- trans->txqs.tfd.addr_size = 64; +- trans->txqs.tfd.max_tbs = IWL_TFH_NUM_TBS; +- trans->txqs.tfd.size = sizeof(struct iwl_tfh_tfd); +- } else { +- trans->txqs.tfd.addr_size = 36; +- trans->txqs.tfd.max_tbs = IWL_NUM_OF_TBS; +- trans->txqs.tfd.size = sizeof(struct iwl_tfd); +- } +- trans->max_skb_frags = IWL_TRANS_MAX_FRAGS(trans); ++ /* Some things must not change even if the config does */ ++ WARN_ON(trans->txqs.tfd.addr_size != ++ (trans->trans_cfg->use_tfh ? 
64 : 36)); + + snprintf(trans->dev_cmd_pool_name, sizeof(trans->dev_cmd_pool_name), + "iwl_cmd_pool:%s", dev_name(trans->dev)); +@@ -93,35 +106,35 @@ struct iwl_trans *iwl_trans_alloc(unsigned int priv_size, + txcmd_size, txcmd_align, + SLAB_HWCACHE_ALIGN, NULL); + if (!trans->dev_cmd_pool) +- return NULL; +- +- WARN_ON(!ops->wait_txq_empty && !ops->wait_tx_queues_empty); ++ return -ENOMEM; + + trans->txqs.tso_hdr_page = alloc_percpu(struct iwl_tso_hdr_page); + if (!trans->txqs.tso_hdr_page) { + kmem_cache_destroy(trans->dev_cmd_pool); +- return NULL; ++ return -ENOMEM; + } + + /* Initialize the wait queue for commands */ + init_waitqueue_head(&trans->wait_command_queue); + +- return trans; ++ return 0; + } + + void iwl_trans_free(struct iwl_trans *trans) + { + int i; + +- for_each_possible_cpu(i) { +- struct iwl_tso_hdr_page *p = +- per_cpu_ptr(trans->txqs.tso_hdr_page, i); ++ if (trans->txqs.tso_hdr_page) { ++ for_each_possible_cpu(i) { ++ struct iwl_tso_hdr_page *p = ++ per_cpu_ptr(trans->txqs.tso_hdr_page, i); + +- if (p->page) +- __free_page(p->page); +- } ++ if (p && p->page) ++ __free_page(p->page); ++ } + +- free_percpu(trans->txqs.tso_hdr_page); ++ free_percpu(trans->txqs.tso_hdr_page); ++ } + + kmem_cache_destroy(trans->dev_cmd_pool); + } +diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h +index 4a5822c1be136..3e0df6fbb6424 100644 +--- a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h ++++ b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h +@@ -1438,6 +1438,7 @@ struct iwl_trans *iwl_trans_alloc(unsigned int priv_size, + struct device *dev, + const struct iwl_trans_ops *ops, + const struct iwl_cfg_trans_params *cfg_trans); ++int iwl_trans_init(struct iwl_trans *trans); + void iwl_trans_free(struct iwl_trans *trans); + + /***************************************************** +diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c +index 558a0b2ef0fc8..66faf7914bd8c 100644 +--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c ++++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c +@@ -17,10 +17,20 @@ + #include "iwl-prph.h" + #include "internal.h" + ++#define TRANS_CFG_MARKER BIT(0) ++#define _IS_A(cfg, _struct) __builtin_types_compatible_p(typeof(cfg), \ ++ struct _struct) ++extern int _invalid_type; ++#define _TRANS_CFG_MARKER(cfg) \ ++ (__builtin_choose_expr(_IS_A(cfg, iwl_cfg_trans_params), \ ++ TRANS_CFG_MARKER, \ ++ __builtin_choose_expr(_IS_A(cfg, iwl_cfg), 0, _invalid_type))) ++#define _ASSIGN_CFG(cfg) (_TRANS_CFG_MARKER(cfg) + (kernel_ulong_t)&(cfg)) ++ + #define IWL_PCI_DEVICE(dev, subdev, cfg) \ + .vendor = PCI_VENDOR_ID_INTEL, .device = (dev), \ + .subvendor = PCI_ANY_ID, .subdevice = (subdev), \ +- .driver_data = (kernel_ulong_t)&(cfg) ++ .driver_data = _ASSIGN_CFG(cfg) + + /* Hardware specific file defines the PCI IDs table for that hardware module */ + static const struct pci_device_id iwl_hw_card_ids[] = { +@@ -1075,19 +1085,22 @@ static const struct iwl_dev_info iwl_dev_info_table[] = { + + static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent) + { +- const struct iwl_cfg_trans_params *trans = +- (struct iwl_cfg_trans_params *)(ent->driver_data); ++ const struct iwl_cfg_trans_params *trans; + const struct iwl_cfg *cfg_7265d __maybe_unused = NULL; + struct iwl_trans *iwl_trans; + struct iwl_trans_pcie *trans_pcie; + int i, ret; ++ const struct iwl_cfg *cfg; ++ ++ trans = (void *)(ent->driver_data & ~TRANS_CFG_MARKER); ++ + /* + * This is 
needed for backwards compatibility with the old + * tables, so we don't need to change all the config structs + * at the same time. The cfg is used to compare with the old + * full cfg structs. + */ +- const struct iwl_cfg *cfg = (struct iwl_cfg *)(ent->driver_data); ++ cfg = (void *)(ent->driver_data & ~TRANS_CFG_MARKER); + + /* make sure trans is the first element in iwl_cfg */ + BUILD_BUG_ON(offsetof(struct iwl_cfg, trans)); +@@ -1202,11 +1215,19 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent) + + #endif + /* +- * If we didn't set the cfg yet, assume the trans is actually +- * a full cfg from the old tables. ++ * If we didn't set the cfg yet, the PCI ID table entry should have ++ * been a full config - if yes, use it, otherwise fail. + */ +- if (!iwl_trans->cfg) ++ if (!iwl_trans->cfg) { ++ if (ent->driver_data & TRANS_CFG_MARKER) { ++ pr_err("No config found for PCI dev %04x/%04x, rev=0x%x, rfid=0x%x\n", ++ pdev->device, pdev->subsystem_device, ++ iwl_trans->hw_rev, iwl_trans->hw_rf_id); ++ ret = -EINVAL; ++ goto out_free_trans; ++ } + iwl_trans->cfg = cfg; ++ } + + /* if we don't have a name yet, copy name from the old cfg */ + if (!iwl_trans->name) +@@ -1222,6 +1243,10 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent) + trans_pcie->num_rx_bufs = RX_QUEUE_SIZE; + } + ++ ret = iwl_trans_init(iwl_trans); ++ if (ret) ++ goto out_free_trans; ++ + pci_set_drvdata(pdev, iwl_trans); + iwl_trans->drv = iwl_drv_start(iwl_trans); + +diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c +index 94ffc1ae484dc..af9412bd697ee 100644 +--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c ++++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c +@@ -1,7 +1,7 @@ + // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause + /* + * Copyright (C) 2017 Intel Deutschland GmbH +- * Copyright (C) 2018-2020 Intel Corporation ++ * Copyright (C) 2018-2021 Intel Corporation + */ + #include "iwl-trans.h" + #include "iwl-prph.h" +@@ -143,7 +143,7 @@ void _iwl_trans_pcie_gen2_stop_device(struct iwl_trans *trans) + if (test_and_clear_bit(STATUS_DEVICE_ENABLED, &trans->status)) { + IWL_DEBUG_INFO(trans, + "DEVICE_ENABLED bit was set and is now cleared\n"); +- iwl_txq_gen2_tx_stop(trans); ++ iwl_txq_gen2_tx_free(trans); + iwl_pcie_rx_stop(trans); + } + +diff --git a/drivers/net/wireless/intel/iwlwifi/queue/tx.c b/drivers/net/wireless/intel/iwlwifi/queue/tx.c +index 833f43d1ca7a0..810dcb3df242c 100644 +--- a/drivers/net/wireless/intel/iwlwifi/queue/tx.c ++++ b/drivers/net/wireless/intel/iwlwifi/queue/tx.c +@@ -13,30 +13,6 @@ + #include "iwl-scd.h" + #include + +-/* +- * iwl_txq_gen2_tx_stop - Stop all Tx DMA channels +- */ +-void iwl_txq_gen2_tx_stop(struct iwl_trans *trans) +-{ +- int txq_id; +- +- /* +- * This function can be called before the op_mode disabled the +- * queues. This happens when we have an rfkill interrupt. +- * Since we stop Tx altogether - mark the queues as stopped. 
+- */ +- memset(trans->txqs.queue_stopped, 0, +- sizeof(trans->txqs.queue_stopped)); +- memset(trans->txqs.queue_used, 0, sizeof(trans->txqs.queue_used)); +- +- /* Unmap DMA from host system and free skb's */ +- for (txq_id = 0; txq_id < ARRAY_SIZE(trans->txqs.txq); txq_id++) { +- if (!trans->txqs.txq[txq_id]) +- continue; +- iwl_txq_gen2_unmap(trans, txq_id); +- } +-} +- + /* + * iwl_txq_update_byte_tbl - Set up entry in Tx byte-count array + */ +@@ -1189,6 +1165,12 @@ static int iwl_txq_alloc_response(struct iwl_trans *trans, struct iwl_txq *txq, + goto error_free_resp; + } + ++ if (WARN_ONCE(trans->txqs.txq[qid], ++ "queue %d already allocated\n", qid)) { ++ ret = -EIO; ++ goto error_free_resp; ++ } ++ + txq->id = qid; + trans->txqs.txq[qid] = txq; + wr_ptr &= (trans->trans_cfg->base_params->max_tfd_queue_size - 1); +diff --git a/drivers/net/wireless/intel/iwlwifi/queue/tx.h b/drivers/net/wireless/intel/iwlwifi/queue/tx.h +index af1dbdf5617a0..20efc62acf133 100644 +--- a/drivers/net/wireless/intel/iwlwifi/queue/tx.h ++++ b/drivers/net/wireless/intel/iwlwifi/queue/tx.h +@@ -1,6 +1,6 @@ + /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */ + /* +- * Copyright (C) 2020 Intel Corporation ++ * Copyright (C) 2020-2021 Intel Corporation + */ + #ifndef __iwl_trans_queue_tx_h__ + #define __iwl_trans_queue_tx_h__ +@@ -123,7 +123,6 @@ int iwl_txq_gen2_tx(struct iwl_trans *trans, struct sk_buff *skb, + void iwl_txq_dyn_free(struct iwl_trans *trans, int queue); + void iwl_txq_gen2_free_tfd(struct iwl_trans *trans, struct iwl_txq *txq); + void iwl_txq_inc_wr_ptr(struct iwl_trans *trans, struct iwl_txq *txq); +-void iwl_txq_gen2_tx_stop(struct iwl_trans *trans); + void iwl_txq_gen2_tx_free(struct iwl_trans *trans); + int iwl_txq_init(struct iwl_trans *trans, struct iwl_txq *txq, int slots_num, + bool cmd_queue); +diff --git a/drivers/net/wireless/mediatek/mt76/mt76.h b/drivers/net/wireless/mediatek/mt76/mt76.h +index 8bf45497cfca1..36a430f09f64c 100644 +--- a/drivers/net/wireless/mediatek/mt76/mt76.h ++++ b/drivers/net/wireless/mediatek/mt76/mt76.h +@@ -222,6 +222,7 @@ struct mt76_wcid { + + u16 idx; + u8 hw_key_idx; ++ u8 hw_key_idx2; + + u8 sta:1; + u8 ext_phy:1; +diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/eeprom.c b/drivers/net/wireless/mediatek/mt76/mt7615/eeprom.c +index 2eab23898c778..6dbaaf95ee385 100644 +--- a/drivers/net/wireless/mediatek/mt76/mt7615/eeprom.c ++++ b/drivers/net/wireless/mediatek/mt76/mt7615/eeprom.c +@@ -86,6 +86,7 @@ static int mt7615_check_eeprom(struct mt76_dev *dev) + switch (val) { + case 0x7615: + case 0x7622: ++ case 0x7663: + return 0; + default: + return -EINVAL; +diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c +index d73841480544a..8dccb589b756d 100644 +--- a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c ++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c +@@ -1037,7 +1037,7 @@ EXPORT_SYMBOL_GPL(mt7615_mac_set_rates); + static int + mt7615_mac_wtbl_update_key(struct mt7615_dev *dev, struct mt76_wcid *wcid, + struct ieee80211_key_conf *key, +- enum mt7615_cipher_type cipher, ++ enum mt7615_cipher_type cipher, u16 cipher_mask, + enum set_key_cmd cmd) + { + u32 addr = mt7615_mac_wtbl_addr(dev, wcid->idx) + 30 * 4; +@@ -1054,22 +1054,22 @@ mt7615_mac_wtbl_update_key(struct mt7615_dev *dev, struct mt76_wcid *wcid, + memcpy(data + 16, key->key + 24, 8); + memcpy(data + 24, key->key + 16, 8); + } else { +- if (cipher != MT_CIPHER_BIP_CMAC_128 && wcid->cipher) +- memmove(data + 16, data, 16); 
+- if (cipher != MT_CIPHER_BIP_CMAC_128 || !wcid->cipher) ++ if (cipher_mask == BIT(cipher)) + memcpy(data, key->key, key->keylen); +- else if (cipher == MT_CIPHER_BIP_CMAC_128) ++ else if (cipher != MT_CIPHER_BIP_CMAC_128) ++ memcpy(data, key->key, 16); ++ if (cipher == MT_CIPHER_BIP_CMAC_128) + memcpy(data + 16, key->key, 16); + } + } else { +- if (wcid->cipher & ~BIT(cipher)) { +- if (cipher != MT_CIPHER_BIP_CMAC_128) +- memmove(data, data + 16, 16); ++ if (cipher == MT_CIPHER_BIP_CMAC_128) + memset(data + 16, 0, 16); +- } else { ++ else if (cipher_mask) ++ memset(data, 0, 16); ++ if (!cipher_mask) + memset(data, 0, sizeof(data)); +- } + } ++ + mt76_wr_copy(dev, addr, data, sizeof(data)); + + return 0; +@@ -1077,7 +1077,7 @@ mt7615_mac_wtbl_update_key(struct mt7615_dev *dev, struct mt76_wcid *wcid, + + static int + mt7615_mac_wtbl_update_pk(struct mt7615_dev *dev, struct mt76_wcid *wcid, +- enum mt7615_cipher_type cipher, ++ enum mt7615_cipher_type cipher, u16 cipher_mask, + int keyidx, enum set_key_cmd cmd) + { + u32 addr = mt7615_mac_wtbl_addr(dev, wcid->idx), w0, w1; +@@ -1087,20 +1087,23 @@ mt7615_mac_wtbl_update_pk(struct mt7615_dev *dev, struct mt76_wcid *wcid, + + w0 = mt76_rr(dev, addr); + w1 = mt76_rr(dev, addr + 4); +- if (cmd == SET_KEY) { +- w0 |= MT_WTBL_W0_RX_KEY_VALID | +- FIELD_PREP(MT_WTBL_W0_RX_IK_VALID, +- cipher == MT_CIPHER_BIP_CMAC_128); +- if (cipher != MT_CIPHER_BIP_CMAC_128 || +- !wcid->cipher) +- w0 |= FIELD_PREP(MT_WTBL_W0_KEY_IDX, keyidx); +- } else { +- if (!(wcid->cipher & ~BIT(cipher))) +- w0 &= ~(MT_WTBL_W0_RX_KEY_VALID | +- MT_WTBL_W0_KEY_IDX); +- if (cipher == MT_CIPHER_BIP_CMAC_128) +- w0 &= ~MT_WTBL_W0_RX_IK_VALID; ++ ++ if (cipher_mask) ++ w0 |= MT_WTBL_W0_RX_KEY_VALID; ++ else ++ w0 &= ~(MT_WTBL_W0_RX_KEY_VALID | MT_WTBL_W0_KEY_IDX); ++ if (cipher_mask & BIT(MT_CIPHER_BIP_CMAC_128)) ++ w0 |= MT_WTBL_W0_RX_IK_VALID; ++ else ++ w0 &= ~MT_WTBL_W0_RX_IK_VALID; ++ ++ if (cmd == SET_KEY && ++ (cipher != MT_CIPHER_BIP_CMAC_128 || ++ cipher_mask == BIT(cipher))) { ++ w0 &= ~MT_WTBL_W0_KEY_IDX; ++ w0 |= FIELD_PREP(MT_WTBL_W0_KEY_IDX, keyidx); + } ++ + mt76_wr(dev, MT_WTBL_RICR0, w0); + mt76_wr(dev, MT_WTBL_RICR1, w1); + +@@ -1113,24 +1116,25 @@ mt7615_mac_wtbl_update_pk(struct mt7615_dev *dev, struct mt76_wcid *wcid, + + static void + mt7615_mac_wtbl_update_cipher(struct mt7615_dev *dev, struct mt76_wcid *wcid, +- enum mt7615_cipher_type cipher, ++ enum mt7615_cipher_type cipher, u16 cipher_mask, + enum set_key_cmd cmd) + { + u32 addr = mt7615_mac_wtbl_addr(dev, wcid->idx); + +- if (cmd == SET_KEY) { +- if (cipher != MT_CIPHER_BIP_CMAC_128 || !wcid->cipher) +- mt76_rmw(dev, addr + 2 * 4, MT_WTBL_W2_KEY_TYPE, +- FIELD_PREP(MT_WTBL_W2_KEY_TYPE, cipher)); +- } else { +- if (cipher != MT_CIPHER_BIP_CMAC_128 && +- wcid->cipher & BIT(MT_CIPHER_BIP_CMAC_128)) +- mt76_rmw(dev, addr + 2 * 4, MT_WTBL_W2_KEY_TYPE, +- FIELD_PREP(MT_WTBL_W2_KEY_TYPE, +- MT_CIPHER_BIP_CMAC_128)); +- else if (!(wcid->cipher & ~BIT(cipher))) +- mt76_clear(dev, addr + 2 * 4, MT_WTBL_W2_KEY_TYPE); ++ if (!cipher_mask) { ++ mt76_clear(dev, addr + 2 * 4, MT_WTBL_W2_KEY_TYPE); ++ return; + } ++ ++ if (cmd != SET_KEY) ++ return; ++ ++ if (cipher == MT_CIPHER_BIP_CMAC_128 && ++ cipher_mask & ~BIT(MT_CIPHER_BIP_CMAC_128)) ++ return; ++ ++ mt76_rmw(dev, addr + 2 * 4, MT_WTBL_W2_KEY_TYPE, ++ FIELD_PREP(MT_WTBL_W2_KEY_TYPE, cipher)); + } + + int __mt7615_mac_wtbl_set_key(struct mt7615_dev *dev, +@@ -1139,25 +1143,30 @@ int __mt7615_mac_wtbl_set_key(struct mt7615_dev *dev, + enum set_key_cmd cmd) + { + 
enum mt7615_cipher_type cipher; ++ u16 cipher_mask = wcid->cipher; + int err; + + cipher = mt7615_mac_get_cipher(key->cipher); + if (cipher == MT_CIPHER_NONE) + return -EOPNOTSUPP; + +- mt7615_mac_wtbl_update_cipher(dev, wcid, cipher, cmd); +- err = mt7615_mac_wtbl_update_key(dev, wcid, key, cipher, cmd); ++ if (cmd == SET_KEY) ++ cipher_mask |= BIT(cipher); ++ else ++ cipher_mask &= ~BIT(cipher); ++ ++ mt7615_mac_wtbl_update_cipher(dev, wcid, cipher, cipher_mask, cmd); ++ err = mt7615_mac_wtbl_update_key(dev, wcid, key, cipher, cipher_mask, ++ cmd); + if (err < 0) + return err; + +- err = mt7615_mac_wtbl_update_pk(dev, wcid, cipher, key->keyidx, cmd); ++ err = mt7615_mac_wtbl_update_pk(dev, wcid, cipher, cipher_mask, ++ key->keyidx, cmd); + if (err < 0) + return err; + +- if (cmd == SET_KEY) +- wcid->cipher |= BIT(cipher); +- else +- wcid->cipher &= ~BIT(cipher); ++ wcid->cipher = cipher_mask; + + return 0; + } +diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/main.c b/drivers/net/wireless/mediatek/mt76/mt7615/main.c +index 6107e827b3836..d334491667a4e 100644 +--- a/drivers/net/wireless/mediatek/mt76/mt7615/main.c ++++ b/drivers/net/wireless/mediatek/mt76/mt7615/main.c +@@ -334,7 +334,8 @@ static int mt7615_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd, + struct mt7615_sta *msta = sta ? (struct mt7615_sta *)sta->drv_priv : + &mvif->sta; + struct mt76_wcid *wcid = &msta->wcid; +- int idx = key->keyidx, err; ++ int idx = key->keyidx, err = 0; ++ u8 *wcid_keyidx = &wcid->hw_key_idx; + + /* The hardware does not support per-STA RX GTK, fallback + * to software mode for these. +@@ -349,6 +350,7 @@ static int mt7615_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd, + /* fall back to sw encryption for unsupported ciphers */ + switch (key->cipher) { + case WLAN_CIPHER_SUITE_AES_CMAC: ++ wcid_keyidx = &wcid->hw_key_idx2; + key->flags |= IEEE80211_KEY_FLAG_GENERATE_MMIE; + break; + case WLAN_CIPHER_SUITE_TKIP: +@@ -366,12 +368,13 @@ static int mt7615_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd, + + mt7615_mutex_acquire(dev); + +- if (cmd == SET_KEY) { +- key->hw_key_idx = wcid->idx; +- wcid->hw_key_idx = idx; +- } else if (idx == wcid->hw_key_idx) { +- wcid->hw_key_idx = -1; +- } ++ if (cmd == SET_KEY) ++ *wcid_keyidx = idx; ++ else if (idx == *wcid_keyidx) ++ *wcid_keyidx = -1; ++ else ++ goto out; ++ + mt76_wcid_key_setup(&dev->mt76, wcid, + cmd == SET_KEY ? key : NULL); + +@@ -380,6 +383,7 @@ static int mt7615_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd, + else + err = __mt7615_mac_wtbl_set_key(dev, wcid, key, cmd); + ++out: + mt7615_mutex_release(dev); + + return err; +diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c +index 631596fc2f362..198e9025b6818 100644 +--- a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c ++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c +@@ -291,12 +291,20 @@ static int mt7615_mcu_drv_pmctrl(struct mt7615_dev *dev) + u32 addr; + int err; + +- addr = is_mt7663(mdev) ? MT_PCIE_DOORBELL_PUSH : MT_CFG_LPCR_HOST; ++ if (is_mt7663(mdev)) { ++ /* Clear firmware own via N9 eint */ ++ mt76_wr(dev, MT_PCIE_DOORBELL_PUSH, MT_CFG_LPCR_HOST_DRV_OWN); ++ mt76_poll(dev, MT_CONN_ON_MISC, MT_CFG_LPCR_HOST_FW_OWN, 0, 3000); ++ ++ addr = MT_CONN_HIF_ON_LPCTL; ++ } else { ++ addr = MT_CFG_LPCR_HOST; ++ } ++ + mt76_wr(dev, addr, MT_CFG_LPCR_HOST_DRV_OWN); + + mt7622_trigger_hif_int(dev, true); + +- addr = is_mt7663(mdev) ? 
MT_CONN_HIF_ON_LPCTL : MT_CFG_LPCR_HOST; + err = !mt76_poll_msec(dev, addr, MT_CFG_LPCR_HOST_FW_OWN, 0, 3000); + + mt7622_trigger_hif_int(dev, false); +@@ -1040,6 +1048,9 @@ mt7615_mcu_sta_ba(struct mt7615_dev *dev, + + wtbl_hdr = mt76_connac_mcu_alloc_wtbl_req(&dev->mt76, &msta->wcid, + WTBL_SET, sta_wtbl, &skb); ++ if (IS_ERR(wtbl_hdr)) ++ return PTR_ERR(wtbl_hdr); ++ + mt76_connac_mcu_wtbl_ba_tlv(&dev->mt76, skb, params, enable, tx, + sta_wtbl, wtbl_hdr); + +diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c +index 76a61e8b7fb96..cefd33b74a875 100644 +--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c ++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c +@@ -833,6 +833,9 @@ int mt76_connac_mcu_add_sta_cmd(struct mt76_phy *phy, + wtbl_hdr = mt76_connac_mcu_alloc_wtbl_req(dev, wcid, + WTBL_RESET_AND_SET, + sta_wtbl, &skb); ++ if (IS_ERR(wtbl_hdr)) ++ return PTR_ERR(wtbl_hdr); ++ + if (enable) { + mt76_connac_mcu_wtbl_generic_tlv(dev, skb, vif, sta, sta_wtbl, + wtbl_hdr); +diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_util.c b/drivers/net/wireless/mediatek/mt76/mt76x02_util.c +index ab671e21f8827..02db5d66735dd 100644 +--- a/drivers/net/wireless/mediatek/mt76/mt76x02_util.c ++++ b/drivers/net/wireless/mediatek/mt76/mt76x02_util.c +@@ -447,6 +447,10 @@ int mt76x02_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd, + !(key->flags & IEEE80211_KEY_FLAG_PAIRWISE)) + return -EOPNOTSUPP; + ++ /* MT76x0 GTK offloading does not work with more than one VIF */ ++ if (is_mt76x0(dev) && !(key->flags & IEEE80211_KEY_FLAG_PAIRWISE)) ++ return -EOPNOTSUPP; ++ + msta = sta ? (struct mt76x02_sta *)sta->drv_priv : NULL; + wcid = msta ? &msta->wcid : &mvif->group_wcid; + +diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c b/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c +index 660398ac53c24..738ecf8f4fa2f 100644 +--- a/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c ++++ b/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c +@@ -124,7 +124,7 @@ int mt7915_eeprom_get_target_power(struct mt7915_dev *dev, + struct ieee80211_channel *chan, + u8 chain_idx) + { +- int index; ++ int index, target_power; + bool tssi_on; + + if (chain_idx > 3) +@@ -133,15 +133,22 @@ int mt7915_eeprom_get_target_power(struct mt7915_dev *dev, + tssi_on = mt7915_tssi_enabled(dev, chan->band); + + if (chan->band == NL80211_BAND_2GHZ) { +- index = MT_EE_TX0_POWER_2G + chain_idx * 3 + !tssi_on; ++ index = MT_EE_TX0_POWER_2G + chain_idx * 3; ++ target_power = mt7915_eeprom_read(dev, index); ++ ++ if (!tssi_on) ++ target_power += mt7915_eeprom_read(dev, index + 1); + } else { +- int group = tssi_on ? 
+- mt7915_get_channel_group(chan->hw_value) : 8; ++ int group = mt7915_get_channel_group(chan->hw_value); ++ ++ index = MT_EE_TX0_POWER_5G + chain_idx * 12; ++ target_power = mt7915_eeprom_read(dev, index + group); + +- index = MT_EE_TX0_POWER_5G + chain_idx * 12 + group; ++ if (!tssi_on) ++ target_power += mt7915_eeprom_read(dev, index + 8); + } + +- return mt7915_eeprom_read(dev, index); ++ return target_power; + } + + static const u8 sku_cck_delta_map[] = { +diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/init.c b/drivers/net/wireless/mediatek/mt76/mt7915/init.c +index 894016fdcf070..c7d4268d860af 100644 +--- a/drivers/net/wireless/mediatek/mt76/mt7915/init.c ++++ b/drivers/net/wireless/mediatek/mt76/mt7915/init.c +@@ -4,6 +4,7 @@ + #include + #include "mt7915.h" + #include "mac.h" ++#include "mcu.h" + #include "eeprom.h" + + #define CCK_RATE(_idx, _rate) { \ +@@ -283,9 +284,50 @@ static void mt7915_init_work(struct work_struct *work) + mt7915_register_ext_phy(dev); + } + ++static void mt7915_wfsys_reset(struct mt7915_dev *dev) ++{ ++ u32 val = MT_TOP_PWR_KEY | MT_TOP_PWR_SW_PWR_ON | MT_TOP_PWR_PWR_ON; ++ u32 reg = mt7915_reg_map_l1(dev, MT_TOP_MISC); ++ ++#define MT_MCU_DUMMY_RANDOM GENMASK(15, 0) ++#define MT_MCU_DUMMY_DEFAULT GENMASK(31, 16) ++ ++ mt76_wr(dev, MT_MCU_WFDMA0_DUMMY_CR, MT_MCU_DUMMY_RANDOM); ++ ++ /* change to software control */ ++ val |= MT_TOP_PWR_SW_RST; ++ mt76_wr(dev, MT_TOP_PWR_CTRL, val); ++ ++ /* reset wfsys */ ++ val &= ~MT_TOP_PWR_SW_RST; ++ mt76_wr(dev, MT_TOP_PWR_CTRL, val); ++ ++ /* release wfsys then mcu re-excutes romcode */ ++ val |= MT_TOP_PWR_SW_RST; ++ mt76_wr(dev, MT_TOP_PWR_CTRL, val); ++ ++ /* switch to hw control */ ++ val &= ~MT_TOP_PWR_SW_RST; ++ val |= MT_TOP_PWR_HW_CTRL; ++ mt76_wr(dev, MT_TOP_PWR_CTRL, val); ++ ++ /* check whether mcu resets to default */ ++ if (!mt76_poll_msec(dev, MT_MCU_WFDMA0_DUMMY_CR, MT_MCU_DUMMY_DEFAULT, ++ MT_MCU_DUMMY_DEFAULT, 1000)) { ++ dev_err(dev->mt76.dev, "wifi subsystem reset failure\n"); ++ return; ++ } ++ ++ /* wfsys reset won't clear host registers */ ++ mt76_clear(dev, reg, MT_TOP_MISC_FW_STATE); ++ ++ msleep(100); ++} ++ + static int mt7915_init_hardware(struct mt7915_dev *dev) + { + int ret, idx; ++ u32 val; + + mt76_wr(dev, MT_INT_SOURCE_CSR, ~0); + +@@ -295,6 +337,12 @@ static int mt7915_init_hardware(struct mt7915_dev *dev) + + dev->dbdc_support = !!(mt7915_l1_rr(dev, MT_HW_BOUND) & BIT(5)); + ++ val = mt76_rr(dev, mt7915_reg_map_l1(dev, MT_TOP_MISC)); ++ ++ /* If MCU was already running, it is likely in a bad state */ ++ if (FIELD_GET(MT_TOP_MISC_FW_STATE, val) > FW_STATE_FW_DOWNLOAD) ++ mt7915_wfsys_reset(dev); ++ + ret = mt7915_dma_init(dev); + if (ret) + return ret; +@@ -308,8 +356,14 @@ static int mt7915_init_hardware(struct mt7915_dev *dev) + mt76_wr(dev, MT_SWDEF_MODE, MT_SWDEF_NORMAL_MODE); + + ret = mt7915_mcu_init(dev); +- if (ret) +- return ret; ++ if (ret) { ++ /* Reset and try again */ ++ mt7915_wfsys_reset(dev); ++ ++ ret = mt7915_mcu_init(dev); ++ if (ret) ++ return ret; ++ } + + ret = mt7915_eeprom_init(dev); + if (ret < 0) +diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/main.c b/drivers/net/wireless/mediatek/mt76/mt7915/main.c +index 98f4b49642a8c..bf032d943f744 100644 +--- a/drivers/net/wireless/mediatek/mt76/mt7915/main.c ++++ b/drivers/net/wireless/mediatek/mt76/mt7915/main.c +@@ -317,7 +317,9 @@ static int mt7915_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd, + struct mt7915_sta *msta = sta ? 
(struct mt7915_sta *)sta->drv_priv : + &mvif->sta; + struct mt76_wcid *wcid = &msta->wcid; ++ u8 *wcid_keyidx = &wcid->hw_key_idx; + int idx = key->keyidx; ++ int err = 0; + + /* The hardware does not support per-STA RX GTK, fallback + * to software mode for these. +@@ -332,6 +334,7 @@ static int mt7915_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd, + /* fall back to sw encryption for unsupported ciphers */ + switch (key->cipher) { + case WLAN_CIPHER_SUITE_AES_CMAC: ++ wcid_keyidx = &wcid->hw_key_idx2; + key->flags |= IEEE80211_KEY_FLAG_GENERATE_MMIE; + break; + case WLAN_CIPHER_SUITE_TKIP: +@@ -347,16 +350,24 @@ static int mt7915_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd, + return -EOPNOTSUPP; + } + +- if (cmd == SET_KEY) { +- key->hw_key_idx = wcid->idx; +- wcid->hw_key_idx = idx; +- } else if (idx == wcid->hw_key_idx) { +- wcid->hw_key_idx = -1; +- } ++ mutex_lock(&dev->mt76.mutex); ++ ++ if (cmd == SET_KEY) ++ *wcid_keyidx = idx; ++ else if (idx == *wcid_keyidx) ++ *wcid_keyidx = -1; ++ else ++ goto out; ++ + mt76_wcid_key_setup(&dev->mt76, wcid, + cmd == SET_KEY ? key : NULL); + +- return mt7915_mcu_add_key(dev, vif, msta, key, cmd); ++ err = mt7915_mcu_add_key(dev, vif, msta, key, cmd); ++ ++out: ++ mutex_unlock(&dev->mt76.mutex); ++ ++ return err; + } + + static int mt7915_config(struct ieee80211_hw *hw, u32 changed) +diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c +index 443cb09ae7cbd..f069a5a03e145 100644 +--- a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c ++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c +@@ -1198,6 +1198,9 @@ mt7915_mcu_sta_ba(struct mt7915_dev *dev, + + wtbl_hdr = mt7915_mcu_alloc_wtbl_req(dev, msta, WTBL_SET, sta_wtbl, + &skb); ++ if (IS_ERR(wtbl_hdr)) ++ return PTR_ERR(wtbl_hdr); ++ + mt7915_mcu_wtbl_ba_tlv(skb, params, enable, tx, sta_wtbl, wtbl_hdr); + + ret = mt76_mcu_skb_send_msg(&dev->mt76, skb, +@@ -1714,6 +1717,9 @@ int mt7915_mcu_sta_update_hdr_trans(struct mt7915_dev *dev, + return -ENOMEM; + + wtbl_hdr = mt7915_mcu_alloc_wtbl_req(dev, msta, WTBL_SET, NULL, &skb); ++ if (IS_ERR(wtbl_hdr)) ++ return PTR_ERR(wtbl_hdr); ++ + mt7915_mcu_wtbl_hdr_trans_tlv(skb, vif, sta, NULL, wtbl_hdr); + + return mt76_mcu_skb_send_msg(&dev->mt76, skb, MCU_EXT_CMD(WTBL_UPDATE), +@@ -1738,6 +1744,9 @@ int mt7915_mcu_add_smps(struct mt7915_dev *dev, struct ieee80211_vif *vif, + + wtbl_hdr = mt7915_mcu_alloc_wtbl_req(dev, msta, WTBL_SET, sta_wtbl, + &skb); ++ if (IS_ERR(wtbl_hdr)) ++ return PTR_ERR(wtbl_hdr); ++ + mt7915_mcu_wtbl_smps_tlv(skb, sta, sta_wtbl, wtbl_hdr); + + return mt76_mcu_skb_send_msg(&dev->mt76, skb, +@@ -2263,6 +2272,9 @@ int mt7915_mcu_add_sta(struct mt7915_dev *dev, struct ieee80211_vif *vif, + + wtbl_hdr = mt7915_mcu_alloc_wtbl_req(dev, msta, WTBL_RESET_AND_SET, + sta_wtbl, &skb); ++ if (IS_ERR(wtbl_hdr)) ++ return PTR_ERR(wtbl_hdr); ++ + if (enable) { + mt7915_mcu_wtbl_generic_tlv(skb, vif, sta, sta_wtbl, wtbl_hdr); + mt7915_mcu_wtbl_hdr_trans_tlv(skb, vif, sta, sta_wtbl, wtbl_hdr); +@@ -2752,21 +2764,8 @@ out: + + static int mt7915_load_firmware(struct mt7915_dev *dev) + { ++ u32 reg = mt7915_reg_map_l1(dev, MT_TOP_MISC); + int ret; +- u32 val, reg = mt7915_reg_map_l1(dev, MT_TOP_MISC); +- +- val = FIELD_PREP(MT_TOP_MISC_FW_STATE, FW_STATE_FW_DOWNLOAD); +- +- if (!mt76_poll_msec(dev, reg, MT_TOP_MISC_FW_STATE, val, 1000)) { +- /* restart firmware once */ +- __mt76_mcu_restart(&dev->mt76); +- if (!mt76_poll_msec(dev, reg, MT_TOP_MISC_FW_STATE, +- val, 1000)) { 
+- dev_err(dev->mt76.dev, +- "Firmware is not ready for download\n"); +- return -EIO; +- } +- } + + ret = mt7915_load_patch(dev); + if (ret) +diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/regs.h b/drivers/net/wireless/mediatek/mt76/mt7915/regs.h +index ed0c9a24bb53d..dfb8880657bf6 100644 +--- a/drivers/net/wireless/mediatek/mt76/mt7915/regs.h ++++ b/drivers/net/wireless/mediatek/mt76/mt7915/regs.h +@@ -4,6 +4,11 @@ + #ifndef __MT7915_REGS_H + #define __MT7915_REGS_H + ++/* MCU WFDMA0 */ ++#define MT_MCU_WFDMA0_BASE 0x2000 ++#define MT_MCU_WFDMA0(ofs) (MT_MCU_WFDMA0_BASE + (ofs)) ++#define MT_MCU_WFDMA0_DUMMY_CR MT_MCU_WFDMA0(0x120) ++ + /* MCU WFDMA1 */ + #define MT_MCU_WFDMA1_BASE 0x3000 + #define MT_MCU_WFDMA1(ofs) (MT_MCU_WFDMA1_BASE + (ofs)) +@@ -396,6 +401,14 @@ + #define MT_WFDMA1_PCIE1_BUSY_ENA_TX_FIFO1 BIT(1) + #define MT_WFDMA1_PCIE1_BUSY_ENA_RX_FIFO BIT(2) + ++#define MT_TOP_RGU_BASE 0xf0000 ++#define MT_TOP_PWR_CTRL (MT_TOP_RGU_BASE + (0x0)) ++#define MT_TOP_PWR_KEY (0x5746 << 16) ++#define MT_TOP_PWR_SW_RST BIT(0) ++#define MT_TOP_PWR_SW_PWR_ON GENMASK(3, 2) ++#define MT_TOP_PWR_HW_CTRL BIT(4) ++#define MT_TOP_PWR_PWR_ON BIT(7) ++ + #define MT_INFRA_CFG_BASE 0xf1000 + #define MT_INFRA(ofs) (MT_INFRA_CFG_BASE + (ofs)) + +diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/main.c b/drivers/net/wireless/mediatek/mt76/mt7921/main.c +index cd9fd0e24e3e6..ada943c7a9500 100644 +--- a/drivers/net/wireless/mediatek/mt76/mt7921/main.c ++++ b/drivers/net/wireless/mediatek/mt76/mt7921/main.c +@@ -413,7 +413,8 @@ static int mt7921_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd, + struct mt7921_sta *msta = sta ? (struct mt7921_sta *)sta->drv_priv : + &mvif->sta; + struct mt76_wcid *wcid = &msta->wcid; +- int idx = key->keyidx; ++ u8 *wcid_keyidx = &wcid->hw_key_idx; ++ int idx = key->keyidx, err = 0; + + /* The hardware does not support per-STA RX GTK, fallback + * to software mode for these. +@@ -429,6 +430,7 @@ static int mt7921_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd, + switch (key->cipher) { + case WLAN_CIPHER_SUITE_AES_CMAC: + key->flags |= IEEE80211_KEY_FLAG_GENERATE_MMIE; ++ wcid_keyidx = &wcid->hw_key_idx2; + break; + case WLAN_CIPHER_SUITE_TKIP: + case WLAN_CIPHER_SUITE_CCMP: +@@ -443,16 +445,23 @@ static int mt7921_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd, + return -EOPNOTSUPP; + } + +- if (cmd == SET_KEY) { +- key->hw_key_idx = wcid->idx; +- wcid->hw_key_idx = idx; +- } else if (idx == wcid->hw_key_idx) { +- wcid->hw_key_idx = -1; +- } ++ mt7921_mutex_acquire(dev); ++ ++ if (cmd == SET_KEY) ++ *wcid_keyidx = idx; ++ else if (idx == *wcid_keyidx) ++ *wcid_keyidx = -1; ++ else ++ goto out; ++ + mt76_wcid_key_setup(&dev->mt76, wcid, + cmd == SET_KEY ? 
key : NULL); + +- return mt7921_mcu_add_key(dev, vif, msta, key, cmd); ++ err = mt7921_mcu_add_key(dev, vif, msta, key, cmd); ++out: ++ mt7921_mutex_release(dev); ++ ++ return err; + } + + static int mt7921_config(struct ieee80211_hw *hw, u32 changed) +diff --git a/drivers/net/wireless/microchip/wilc1000/netdev.c b/drivers/net/wireless/microchip/wilc1000/netdev.c +index 1b205e7d97a81..37f40039e4caf 100644 +--- a/drivers/net/wireless/microchip/wilc1000/netdev.c ++++ b/drivers/net/wireless/microchip/wilc1000/netdev.c +@@ -575,7 +575,6 @@ static int wilc_mac_open(struct net_device *ndev) + { + struct wilc_vif *vif = netdev_priv(ndev); + struct wilc *wl = vif->wilc; +- unsigned char mac_add[ETH_ALEN] = {0}; + int ret = 0; + struct mgmt_frame_regs mgmt_regs = {}; + +@@ -598,9 +597,12 @@ static int wilc_mac_open(struct net_device *ndev) + + wilc_set_operation_mode(vif, wilc_get_vif_idx(vif), vif->iftype, + vif->idx); +- wilc_get_mac_address(vif, mac_add); +- netdev_dbg(ndev, "Mac address: %pM\n", mac_add); +- ether_addr_copy(ndev->dev_addr, mac_add); ++ ++ if (is_valid_ether_addr(ndev->dev_addr)) ++ wilc_set_mac_address(vif, ndev->dev_addr); ++ else ++ wilc_get_mac_address(vif, ndev->dev_addr); ++ netdev_dbg(ndev, "Mac address: %pM\n", ndev->dev_addr); + + if (!is_valid_ether_addr(ndev->dev_addr)) { + netdev_err(ndev, "Wrong MAC address\n"); +@@ -639,7 +641,14 @@ static int wilc_set_mac_addr(struct net_device *dev, void *p) + int srcu_idx; + + if (!is_valid_ether_addr(addr->sa_data)) +- return -EINVAL; ++ return -EADDRNOTAVAIL; ++ ++ if (!vif->mac_opened) { ++ eth_commit_mac_addr_change(dev, p); ++ return 0; ++ } ++ ++ /* Verify MAC Address is not already in use: */ + + srcu_idx = srcu_read_lock(&wilc->srcu); + list_for_each_entry_rcu(tmp_vif, &wilc->vif_list, list) { +@@ -647,7 +656,7 @@ static int wilc_set_mac_addr(struct net_device *dev, void *p) + if (ether_addr_equal(addr->sa_data, mac_addr)) { + if (vif != tmp_vif) { + srcu_read_unlock(&wilc->srcu, srcu_idx); +- return -EINVAL; ++ return -EADDRNOTAVAIL; + } + srcu_read_unlock(&wilc->srcu, srcu_idx); + return 0; +@@ -659,9 +668,7 @@ static int wilc_set_mac_addr(struct net_device *dev, void *p) + if (result) + return result; + +- ether_addr_copy(vif->bssid, addr->sa_data); +- ether_addr_copy(vif->ndev->dev_addr, addr->sa_data); +- ++ eth_commit_mac_addr_change(dev, p); + return result; + } + +diff --git a/drivers/net/wireless/quantenna/qtnfmac/event.c b/drivers/net/wireless/quantenna/qtnfmac/event.c +index c775c177933b2..8dc80574d08d9 100644 +--- a/drivers/net/wireless/quantenna/qtnfmac/event.c ++++ b/drivers/net/wireless/quantenna/qtnfmac/event.c +@@ -570,8 +570,10 @@ qtnf_event_handle_external_auth(struct qtnf_vif *vif, + return 0; + + if (ev->ssid_len) { +- memcpy(auth.ssid.ssid, ev->ssid, ev->ssid_len); +- auth.ssid.ssid_len = ev->ssid_len; ++ int len = clamp_val(ev->ssid_len, 0, IEEE80211_MAX_SSID_LEN); ++ ++ memcpy(auth.ssid.ssid, ev->ssid, len); ++ auth.ssid.ssid_len = len; + } + + auth.key_mgmt_suite = le32_to_cpu(ev->akm_suite); +diff --git a/drivers/net/wireless/realtek/rtw88/main.h b/drivers/net/wireless/realtek/rtw88/main.h +index 35afea91fd299..92b9cf1f95252 100644 +--- a/drivers/net/wireless/realtek/rtw88/main.h ++++ b/drivers/net/wireless/realtek/rtw88/main.h +@@ -1166,6 +1166,7 @@ struct rtw_chip_info { + bool en_dis_dpd; + u16 dpd_ratemask; + u8 iqk_threshold; ++ u8 lck_threshold; + const struct rtw_pwr_track_tbl *pwr_track_tbl; + + u8 bfer_su_max_num; +@@ -1534,6 +1535,7 @@ struct rtw_dm_info { + u32 rrsr_mask_min; + u8 
thermal_avg[RTW_RF_PATH_MAX]; + u8 thermal_meter_k; ++ u8 thermal_meter_lck; + s8 delta_power_index[RTW_RF_PATH_MAX]; + s8 delta_power_index_last[RTW_RF_PATH_MAX]; + u8 default_ofdm_index; +diff --git a/drivers/net/wireless/realtek/rtw88/phy.c b/drivers/net/wireless/realtek/rtw88/phy.c +index 0b3da5bef7036..21e77fcfa4d53 100644 +--- a/drivers/net/wireless/realtek/rtw88/phy.c ++++ b/drivers/net/wireless/realtek/rtw88/phy.c +@@ -2220,6 +2220,20 @@ s8 rtw_phy_pwrtrack_get_pwridx(struct rtw_dev *rtwdev, + } + EXPORT_SYMBOL(rtw_phy_pwrtrack_get_pwridx); + ++bool rtw_phy_pwrtrack_need_lck(struct rtw_dev *rtwdev) ++{ ++ struct rtw_dm_info *dm_info = &rtwdev->dm_info; ++ u8 delta_lck; ++ ++ delta_lck = abs(dm_info->thermal_avg[0] - dm_info->thermal_meter_lck); ++ if (delta_lck >= rtwdev->chip->lck_threshold) { ++ dm_info->thermal_meter_lck = dm_info->thermal_avg[0]; ++ return true; ++ } ++ return false; ++} ++EXPORT_SYMBOL(rtw_phy_pwrtrack_need_lck); ++ + bool rtw_phy_pwrtrack_need_iqk(struct rtw_dev *rtwdev) + { + struct rtw_dm_info *dm_info = &rtwdev->dm_info; +diff --git a/drivers/net/wireless/realtek/rtw88/phy.h b/drivers/net/wireless/realtek/rtw88/phy.h +index a4fcfb8785504..a0742a69446de 100644 +--- a/drivers/net/wireless/realtek/rtw88/phy.h ++++ b/drivers/net/wireless/realtek/rtw88/phy.h +@@ -55,6 +55,7 @@ u8 rtw_phy_pwrtrack_get_delta(struct rtw_dev *rtwdev, u8 path); + s8 rtw_phy_pwrtrack_get_pwridx(struct rtw_dev *rtwdev, + struct rtw_swing_table *swing_table, + u8 tbl_path, u8 therm_path, u8 delta); ++bool rtw_phy_pwrtrack_need_lck(struct rtw_dev *rtwdev); + bool rtw_phy_pwrtrack_need_iqk(struct rtw_dev *rtwdev); + void rtw_phy_config_swing_table(struct rtw_dev *rtwdev, + struct rtw_swing_table *swing_table); +diff --git a/drivers/net/wireless/realtek/rtw88/reg.h b/drivers/net/wireless/realtek/rtw88/reg.h +index ea518aa78552e..819af34dac34b 100644 +--- a/drivers/net/wireless/realtek/rtw88/reg.h ++++ b/drivers/net/wireless/realtek/rtw88/reg.h +@@ -652,8 +652,13 @@ + #define RF_TXATANK 0x64 + #define RF_TRXIQ 0x66 + #define RF_RXIQGEN 0x8d ++#define RF_SYN_PFD 0xb0 + #define RF_XTALX2 0xb8 ++#define RF_SYN_CTRL 0xbb + #define RF_MALSEL 0xbe ++#define RF_SYN_AAC 0xc9 ++#define RF_AAC_CTRL 0xca ++#define RF_FAST_LCK 0xcc + #define RF_RCKD 0xde + #define RF_TXADBG 0xde + #define RF_LUTDBG 0xdf +diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822c.c b/drivers/net/wireless/realtek/rtw88/rtw8822c.c +index dd560c28abb2f..448922cb2e63d 100644 +--- a/drivers/net/wireless/realtek/rtw88/rtw8822c.c ++++ b/drivers/net/wireless/realtek/rtw88/rtw8822c.c +@@ -1126,6 +1126,7 @@ static void rtw8822c_pwrtrack_init(struct rtw_dev *rtwdev) + + dm_info->pwr_trk_triggered = false; + dm_info->thermal_meter_k = rtwdev->efuse.thermal_meter_k; ++ dm_info->thermal_meter_lck = rtwdev->efuse.thermal_meter_k; + } + + static void rtw8822c_phy_set_param(struct rtw_dev *rtwdev) +@@ -2108,6 +2109,26 @@ static void rtw8822c_false_alarm_statistics(struct rtw_dev *rtwdev) + rtw_write32_set(rtwdev, REG_RX_BREAK, BIT_COM_RX_GCK_EN); + } + ++static void rtw8822c_do_lck(struct rtw_dev *rtwdev) ++{ ++ u32 val; ++ ++ rtw_write_rf(rtwdev, RF_PATH_A, RF_SYN_CTRL, RFREG_MASK, 0x80010); ++ rtw_write_rf(rtwdev, RF_PATH_A, RF_SYN_PFD, RFREG_MASK, 0x1F0FA); ++ fsleep(1); ++ rtw_write_rf(rtwdev, RF_PATH_A, RF_AAC_CTRL, RFREG_MASK, 0x80000); ++ rtw_write_rf(rtwdev, RF_PATH_A, RF_SYN_AAC, RFREG_MASK, 0x80001); ++ read_poll_timeout(rtw_read_rf, val, val != 0x1, 1000, 100000, ++ true, rtwdev, RF_PATH_A, RF_AAC_CTRL, 0x1000); ++ 
rtw_write_rf(rtwdev, RF_PATH_A, RF_SYN_PFD, RFREG_MASK, 0x1F0F8); ++ rtw_write_rf(rtwdev, RF_PATH_B, RF_SYN_CTRL, RFREG_MASK, 0x80010); ++ ++ rtw_write_rf(rtwdev, RF_PATH_A, RF_FAST_LCK, RFREG_MASK, 0x0f000); ++ rtw_write_rf(rtwdev, RF_PATH_A, RF_FAST_LCK, RFREG_MASK, 0x4f000); ++ fsleep(1); ++ rtw_write_rf(rtwdev, RF_PATH_A, RF_FAST_LCK, RFREG_MASK, 0x0f000); ++} ++ + static void rtw8822c_do_iqk(struct rtw_dev *rtwdev) + { + struct rtw_iqk_para para = {0}; +@@ -3538,11 +3559,12 @@ static void __rtw8822c_pwr_track(struct rtw_dev *rtwdev) + + rtw_phy_config_swing_table(rtwdev, &swing_table); + ++ if (rtw_phy_pwrtrack_need_lck(rtwdev)) ++ rtw8822c_do_lck(rtwdev); ++ + for (i = 0; i < rtwdev->hal.rf_path_num; i++) + rtw8822c_pwr_track_path(rtwdev, &swing_table, i); + +- if (rtw_phy_pwrtrack_need_iqk(rtwdev)) +- rtw8822c_do_iqk(rtwdev); + } + + static void rtw8822c_pwr_track(struct rtw_dev *rtwdev) +@@ -4351,6 +4373,7 @@ struct rtw_chip_info rtw8822c_hw_spec = { + .dpd_ratemask = DIS_DPD_RATEALL, + .pwr_track_tbl = &rtw8822c_rtw_pwr_track_tbl, + .iqk_threshold = 8, ++ .lck_threshold = 8, + .bfer_su_max_num = 2, + .bfer_mu_max_num = 1, + .rx_ldpc = true, +diff --git a/drivers/net/wireless/wl3501.h b/drivers/net/wireless/wl3501.h +index e98e04ee9a2c0..59b7b93c59636 100644 +--- a/drivers/net/wireless/wl3501.h ++++ b/drivers/net/wireless/wl3501.h +@@ -379,16 +379,7 @@ struct wl3501_get_confirm { + u8 mib_value[100]; + }; + +-struct wl3501_join_req { +- u16 next_blk; +- u8 sig_id; +- u8 reserved; +- struct iw_mgmt_data_rset operational_rset; +- u16 reserved2; +- u16 timeout; +- u16 probe_delay; +- u8 timestamp[8]; +- u8 local_time[8]; ++struct wl3501_req { + u16 beacon_period; + u16 dtim_period; + u16 cap_info; +@@ -401,6 +392,19 @@ struct wl3501_join_req { + struct iw_mgmt_data_rset bss_basic_rset; + }; + ++struct wl3501_join_req { ++ u16 next_blk; ++ u8 sig_id; ++ u8 reserved; ++ struct iw_mgmt_data_rset operational_rset; ++ u16 reserved2; ++ u16 timeout; ++ u16 probe_delay; ++ u8 timestamp[8]; ++ u8 local_time[8]; ++ struct wl3501_req req; ++}; ++ + struct wl3501_join_confirm { + u16 next_blk; + u8 sig_id; +@@ -443,16 +447,7 @@ struct wl3501_scan_confirm { + u16 status; + char timestamp[8]; + char localtime[8]; +- u16 beacon_period; +- u16 dtim_period; +- u16 cap_info; +- u8 bss_type; +- u8 bssid[ETH_ALEN]; +- struct iw_mgmt_essid_pset ssid; +- struct iw_mgmt_ds_pset ds_pset; +- struct iw_mgmt_cf_pset cf_pset; +- struct iw_mgmt_ibss_pset ibss_pset; +- struct iw_mgmt_data_rset bss_basic_rset; ++ struct wl3501_req req; + u8 rssi; + }; + +@@ -471,8 +466,10 @@ struct wl3501_md_req { + u16 size; + u8 pri; + u8 service_class; +- u8 daddr[ETH_ALEN]; +- u8 saddr[ETH_ALEN]; ++ struct { ++ u8 daddr[ETH_ALEN]; ++ u8 saddr[ETH_ALEN]; ++ } addr; + }; + + struct wl3501_md_ind { +@@ -484,8 +481,10 @@ struct wl3501_md_ind { + u8 reception; + u8 pri; + u8 service_class; +- u8 daddr[ETH_ALEN]; +- u8 saddr[ETH_ALEN]; ++ struct { ++ u8 daddr[ETH_ALEN]; ++ u8 saddr[ETH_ALEN]; ++ } addr; + }; + + struct wl3501_md_confirm { +diff --git a/drivers/net/wireless/wl3501_cs.c b/drivers/net/wireless/wl3501_cs.c +index 8ca5789c7b378..672f5d5f3f2c7 100644 +--- a/drivers/net/wireless/wl3501_cs.c ++++ b/drivers/net/wireless/wl3501_cs.c +@@ -469,6 +469,7 @@ static int wl3501_send_pkt(struct wl3501_card *this, u8 *data, u16 len) + struct wl3501_md_req sig = { + .sig_id = WL3501_SIG_MD_REQ, + }; ++ size_t sig_addr_len = sizeof(sig.addr); + u8 *pdata = (char *)data; + int rc = -EIO; + +@@ -484,9 +485,9 @@ static int 
wl3501_send_pkt(struct wl3501_card *this, u8 *data, u16 len) + goto out; + } + rc = 0; +- memcpy(&sig.daddr[0], pdata, 12); +- pktlen = len - 12; +- pdata += 12; ++ memcpy(&sig.addr, pdata, sig_addr_len); ++ pktlen = len - sig_addr_len; ++ pdata += sig_addr_len; + sig.data = bf; + if (((*pdata) * 256 + (*(pdata + 1))) > 1500) { + u8 addr4[ETH_ALEN] = { +@@ -589,7 +590,7 @@ static int wl3501_mgmt_join(struct wl3501_card *this, u16 stas) + struct wl3501_join_req sig = { + .sig_id = WL3501_SIG_JOIN_REQ, + .timeout = 10, +- .ds_pset = { ++ .req.ds_pset = { + .el = { + .id = IW_MGMT_INFO_ELEMENT_DS_PARAMETER_SET, + .len = 1, +@@ -598,7 +599,7 @@ static int wl3501_mgmt_join(struct wl3501_card *this, u16 stas) + }, + }; + +- memcpy(&sig.beacon_period, &this->bss_set[stas].beacon_period, 72); ++ memcpy(&sig.req, &this->bss_set[stas].req, sizeof(sig.req)); + return wl3501_esbq_exec(this, &sig, sizeof(sig)); + } + +@@ -666,35 +667,37 @@ static void wl3501_mgmt_scan_confirm(struct wl3501_card *this, u16 addr) + if (sig.status == WL3501_STATUS_SUCCESS) { + pr_debug("success"); + if ((this->net_type == IW_MODE_INFRA && +- (sig.cap_info & WL3501_MGMT_CAPABILITY_ESS)) || ++ (sig.req.cap_info & WL3501_MGMT_CAPABILITY_ESS)) || + (this->net_type == IW_MODE_ADHOC && +- (sig.cap_info & WL3501_MGMT_CAPABILITY_IBSS)) || ++ (sig.req.cap_info & WL3501_MGMT_CAPABILITY_IBSS)) || + this->net_type == IW_MODE_AUTO) { + if (!this->essid.el.len) + matchflag = 1; + else if (this->essid.el.len == 3 && + !memcmp(this->essid.essid, "ANY", 3)) + matchflag = 1; +- else if (this->essid.el.len != sig.ssid.el.len) ++ else if (this->essid.el.len != sig.req.ssid.el.len) + matchflag = 0; +- else if (memcmp(this->essid.essid, sig.ssid.essid, ++ else if (memcmp(this->essid.essid, sig.req.ssid.essid, + this->essid.el.len)) + matchflag = 0; + else + matchflag = 1; + if (matchflag) { + for (i = 0; i < this->bss_cnt; i++) { +- if (ether_addr_equal_unaligned(this->bss_set[i].bssid, sig.bssid)) { ++ if (ether_addr_equal_unaligned(this->bss_set[i].req.bssid, ++ sig.req.bssid)) { + matchflag = 0; + break; + } + } + } + if (matchflag && (i < 20)) { +- memcpy(&this->bss_set[i].beacon_period, +- &sig.beacon_period, 73); ++ memcpy(&this->bss_set[i].req, ++ &sig.req, sizeof(sig.req)); + this->bss_cnt++; + this->rssi = sig.rssi; ++ this->bss_set[i].rssi = sig.rssi; + } + } + } else if (sig.status == WL3501_STATUS_TIMEOUT) { +@@ -886,19 +889,19 @@ static void wl3501_mgmt_join_confirm(struct net_device *dev, u16 addr) + if (this->join_sta_bss < this->bss_cnt) { + const int i = this->join_sta_bss; + memcpy(this->bssid, +- this->bss_set[i].bssid, ETH_ALEN); +- this->chan = this->bss_set[i].ds_pset.chan; ++ this->bss_set[i].req.bssid, ETH_ALEN); ++ this->chan = this->bss_set[i].req.ds_pset.chan; + iw_copy_mgmt_info_element(&this->keep_essid.el, +- &this->bss_set[i].ssid.el); ++ &this->bss_set[i].req.ssid.el); + wl3501_mgmt_auth(this); + } + } else { + const int i = this->join_sta_bss; + +- memcpy(&this->bssid, &this->bss_set[i].bssid, ETH_ALEN); +- this->chan = this->bss_set[i].ds_pset.chan; ++ memcpy(&this->bssid, &this->bss_set[i].req.bssid, ETH_ALEN); ++ this->chan = this->bss_set[i].req.ds_pset.chan; + iw_copy_mgmt_info_element(&this->keep_essid.el, +- &this->bss_set[i].ssid.el); ++ &this->bss_set[i].req.ssid.el); + wl3501_online(dev); + } + } else { +@@ -980,7 +983,8 @@ static inline void wl3501_md_ind_interrupt(struct net_device *dev, + } else { + skb->dev = dev; + skb_reserve(skb, 2); /* IP headers on 16 bytes boundaries */ +- 
skb_copy_to_linear_data(skb, (unsigned char *)&sig.daddr, 12); ++ skb_copy_to_linear_data(skb, (unsigned char *)&sig.addr, ++ sizeof(sig.addr)); + wl3501_receive(this, skb->data, pkt_len); + skb_put(skb, pkt_len); + skb->protocol = eth_type_trans(skb, dev); +@@ -1571,30 +1575,30 @@ static int wl3501_get_scan(struct net_device *dev, struct iw_request_info *info, + for (i = 0; i < this->bss_cnt; ++i) { + iwe.cmd = SIOCGIWAP; + iwe.u.ap_addr.sa_family = ARPHRD_ETHER; +- memcpy(iwe.u.ap_addr.sa_data, this->bss_set[i].bssid, ETH_ALEN); ++ memcpy(iwe.u.ap_addr.sa_data, this->bss_set[i].req.bssid, ETH_ALEN); + current_ev = iwe_stream_add_event(info, current_ev, + extra + IW_SCAN_MAX_DATA, + &iwe, IW_EV_ADDR_LEN); + iwe.cmd = SIOCGIWESSID; + iwe.u.data.flags = 1; +- iwe.u.data.length = this->bss_set[i].ssid.el.len; ++ iwe.u.data.length = this->bss_set[i].req.ssid.el.len; + current_ev = iwe_stream_add_point(info, current_ev, + extra + IW_SCAN_MAX_DATA, + &iwe, +- this->bss_set[i].ssid.essid); ++ this->bss_set[i].req.ssid.essid); + iwe.cmd = SIOCGIWMODE; +- iwe.u.mode = this->bss_set[i].bss_type; ++ iwe.u.mode = this->bss_set[i].req.bss_type; + current_ev = iwe_stream_add_event(info, current_ev, + extra + IW_SCAN_MAX_DATA, + &iwe, IW_EV_UINT_LEN); + iwe.cmd = SIOCGIWFREQ; +- iwe.u.freq.m = this->bss_set[i].ds_pset.chan; ++ iwe.u.freq.m = this->bss_set[i].req.ds_pset.chan; + iwe.u.freq.e = 0; + current_ev = iwe_stream_add_event(info, current_ev, + extra + IW_SCAN_MAX_DATA, + &iwe, IW_EV_FREQ_LEN); + iwe.cmd = SIOCGIWENCODE; +- if (this->bss_set[i].cap_info & WL3501_MGMT_CAPABILITY_PRIVACY) ++ if (this->bss_set[i].req.cap_info & WL3501_MGMT_CAPABILITY_PRIVACY) + iwe.u.data.flags = IW_ENCODE_ENABLED | IW_ENCODE_NOKEY; + else + iwe.u.data.flags = IW_ENCODE_DISABLED; +diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c +index 0896e21642beb..d5d7e0cdd78d8 100644 +--- a/drivers/nvme/host/core.c ++++ b/drivers/nvme/host/core.c +@@ -2681,7 +2681,8 @@ static void nvme_set_latency_tolerance(struct device *dev, s32 val) + + if (ctrl->ps_max_latency_us != latency) { + ctrl->ps_max_latency_us = latency; +- nvme_configure_apst(ctrl); ++ if (ctrl->state == NVME_CTRL_LIVE) ++ nvme_configure_apst(ctrl); + } + } + +diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c +index 9a8b3726a37c4..429263ca9b978 100644 +--- a/drivers/nvme/target/io-cmd-bdev.c ++++ b/drivers/nvme/target/io-cmd-bdev.c +@@ -258,7 +258,7 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req) + + sector = nvmet_lba_to_sect(req->ns, req->cmd->rw.slba); + +- if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) { ++ if (nvmet_use_inline_bvec(req)) { + bio = &req->b.inline_bio; + bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec)); + } else { +diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h +index 4b84edb49f22c..5aad34b106dc2 100644 +--- a/drivers/nvme/target/nvmet.h ++++ b/drivers/nvme/target/nvmet.h +@@ -614,4 +614,10 @@ static inline sector_t nvmet_lba_to_sect(struct nvmet_ns *ns, __le64 lba) + return le64_to_cpu(lba) << (ns->blksize_shift - SECTOR_SHIFT); + } + ++static inline bool nvmet_use_inline_bvec(struct nvmet_req *req) ++{ ++ return req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN && ++ req->sg_cnt <= NVMET_MAX_INLINE_BIOVEC; ++} ++ + #endif /* _NVMET_H */ +diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c +index 2798944899b73..39b1473f7204e 100644 +--- a/drivers/nvme/target/passthru.c ++++ b/drivers/nvme/target/passthru.c +@@ -194,7 
+194,7 @@ static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq) + if (req->sg_cnt > BIO_MAX_VECS) + return -EINVAL; + +- if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) { ++ if (nvmet_use_inline_bvec(req)) { + bio = &req->p.inline_bio; + bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec)); + } else { +diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c +index 6c1f3ab7649c7..7d607f435e366 100644 +--- a/drivers/nvme/target/rdma.c ++++ b/drivers/nvme/target/rdma.c +@@ -700,7 +700,7 @@ static void nvmet_rdma_send_done(struct ib_cq *cq, struct ib_wc *wc) + { + struct nvmet_rdma_rsp *rsp = + container_of(wc->wr_cqe, struct nvmet_rdma_rsp, send_cqe); +- struct nvmet_rdma_queue *queue = cq->cq_context; ++ struct nvmet_rdma_queue *queue = wc->qp->qp_context; + + nvmet_rdma_release_rsp(rsp); + +@@ -786,7 +786,7 @@ static void nvmet_rdma_write_data_done(struct ib_cq *cq, struct ib_wc *wc) + { + struct nvmet_rdma_rsp *rsp = + container_of(wc->wr_cqe, struct nvmet_rdma_rsp, write_cqe); +- struct nvmet_rdma_queue *queue = cq->cq_context; ++ struct nvmet_rdma_queue *queue = wc->qp->qp_context; + struct rdma_cm_id *cm_id = rsp->queue->cm_id; + u16 status; + +diff --git a/drivers/pci/controller/pcie-brcmstb.c b/drivers/pci/controller/pcie-brcmstb.c +index e330e6811f0b6..08bc788d9422c 100644 +--- a/drivers/pci/controller/pcie-brcmstb.c ++++ b/drivers/pci/controller/pcie-brcmstb.c +@@ -1148,6 +1148,7 @@ static int brcm_pcie_suspend(struct device *dev) + + brcm_pcie_turn_off(pcie); + ret = brcm_phy_stop(pcie); ++ reset_control_rearm(pcie->rescal); + clk_disable_unprepare(pcie->clk); + + return ret; +@@ -1163,9 +1164,13 @@ static int brcm_pcie_resume(struct device *dev) + base = pcie->base; + clk_prepare_enable(pcie->clk); + ++ ret = reset_control_reset(pcie->rescal); ++ if (ret) ++ goto err_disable_clk; ++ + ret = brcm_phy_start(pcie); + if (ret) +- goto err; ++ goto err_reset; + + /* Take bridge out of reset so we can access the SERDES reg */ + pcie->bridge_sw_init_set(pcie, 0); +@@ -1180,14 +1185,16 @@ static int brcm_pcie_resume(struct device *dev) + + ret = brcm_pcie_setup(pcie); + if (ret) +- goto err; ++ goto err_reset; + + if (pcie->msi) + brcm_msi_set_regs(pcie->msi); + + return 0; + +-err: ++err_reset: ++ reset_control_rearm(pcie->rescal); ++err_disable_clk: + clk_disable_unprepare(pcie->clk); + return ret; + } +@@ -1197,7 +1204,7 @@ static void __brcm_pcie_remove(struct brcm_pcie *pcie) + brcm_msi_remove(pcie); + brcm_pcie_turn_off(pcie); + brcm_phy_stop(pcie); +- reset_control_assert(pcie->rescal); ++ reset_control_rearm(pcie->rescal); + clk_disable_unprepare(pcie->clk); + } + +@@ -1278,13 +1285,13 @@ static int brcm_pcie_probe(struct platform_device *pdev) + return PTR_ERR(pcie->perst_reset); + } + +- ret = reset_control_deassert(pcie->rescal); ++ ret = reset_control_reset(pcie->rescal); + if (ret) + dev_err(&pdev->dev, "failed to deassert 'rescal'\n"); + + ret = brcm_phy_start(pcie); + if (ret) { +- reset_control_assert(pcie->rescal); ++ reset_control_rearm(pcie->rescal); + clk_disable_unprepare(pcie->clk); + return ret; + } +@@ -1296,6 +1303,7 @@ static int brcm_pcie_probe(struct platform_device *pdev) + pcie->hw_rev = readl(pcie->base + PCIE_MISC_REVISION); + if (pcie->type == BCM4908 && pcie->hw_rev >= BRCM_PCIE_HW_REV_3_20) { + dev_err(pcie->dev, "hardware revision with unsupported PERST# setup\n"); ++ ret = -ENODEV; + goto fail; + } + +diff --git a/drivers/pci/controller/pcie-iproc-msi.c b/drivers/pci/controller/pcie-iproc-msi.c +index 
908475d27e0e7..eede4e8f3f75a 100644 +--- a/drivers/pci/controller/pcie-iproc-msi.c ++++ b/drivers/pci/controller/pcie-iproc-msi.c +@@ -271,7 +271,7 @@ static int iproc_msi_irq_domain_alloc(struct irq_domain *domain, + NULL, NULL); + } + +- return hwirq; ++ return 0; + } + + static void iproc_msi_irq_domain_free(struct irq_domain *domain, +diff --git a/drivers/pci/endpoint/functions/pci-epf-test.c b/drivers/pci/endpoint/functions/pci-epf-test.c +index c0ac4e9cbe72f..f9760e73d568c 100644 +--- a/drivers/pci/endpoint/functions/pci-epf-test.c ++++ b/drivers/pci/endpoint/functions/pci-epf-test.c +@@ -833,15 +833,18 @@ static int pci_epf_test_bind(struct pci_epf *epf) + return -EINVAL; + + epc_features = pci_epc_get_features(epc, epf->func_no); +- if (epc_features) { +- linkup_notifier = epc_features->linkup_notifier; +- core_init_notifier = epc_features->core_init_notifier; +- test_reg_bar = pci_epc_get_first_free_bar(epc_features); +- if (test_reg_bar < 0) +- return -EINVAL; +- pci_epf_configure_bar(epf, epc_features); ++ if (!epc_features) { ++ dev_err(&epf->dev, "epc_features not implemented\n"); ++ return -EOPNOTSUPP; + } + ++ linkup_notifier = epc_features->linkup_notifier; ++ core_init_notifier = epc_features->core_init_notifier; ++ test_reg_bar = pci_epc_get_first_free_bar(epc_features); ++ if (test_reg_bar < 0) ++ return -EINVAL; ++ pci_epf_configure_bar(epf, epc_features); ++ + epf_test->test_reg_bar = test_reg_bar; + epf_test->epc_features = epc_features; + +@@ -922,6 +925,7 @@ static int __init pci_epf_test_init(void) + + ret = pci_epf_register_driver(&test_driver); + if (ret) { ++ destroy_workqueue(kpcitest_workqueue); + pr_err("Failed to register pci epf test driver --> %d\n", ret); + return ret; + } +@@ -932,6 +936,8 @@ module_init(pci_epf_test_init); + + static void __exit pci_epf_test_exit(void) + { ++ if (kpcitest_workqueue) ++ destroy_workqueue(kpcitest_workqueue); + pci_epf_unregister_driver(&test_driver); + } + module_exit(pci_epf_test_exit); +diff --git a/drivers/pci/pcie/rcec.c b/drivers/pci/pcie/rcec.c +index 2c5c552994e4c..d0bcd141ac9c6 100644 +--- a/drivers/pci/pcie/rcec.c ++++ b/drivers/pci/pcie/rcec.c +@@ -32,7 +32,7 @@ static bool rcec_assoc_rciep(struct pci_dev *rcec, struct pci_dev *rciep) + + /* Same bus, so check bitmap */ + for_each_set_bit(devn, &bitmap, 32) +- if (devn == rciep->devfn) ++ if (devn == PCI_SLOT(rciep->devfn)) + return true; + + return false; +diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c +index 953f15abc850a..be51670572fa6 100644 +--- a/drivers/pci/probe.c ++++ b/drivers/pci/probe.c +@@ -2353,6 +2353,7 @@ static struct pci_dev *pci_scan_device(struct pci_bus *bus, int devfn) + pci_set_of_node(dev); + + if (pci_setup_device(dev)) { ++ pci_release_of_node(dev); + pci_bus_put(dev->bus); + kfree(dev); + return NULL; +diff --git a/drivers/pinctrl/samsung/pinctrl-exynos.c b/drivers/pinctrl/samsung/pinctrl-exynos.c +index 0cd7f33cdf257..2b99f4130e1e5 100644 +--- a/drivers/pinctrl/samsung/pinctrl-exynos.c ++++ b/drivers/pinctrl/samsung/pinctrl-exynos.c +@@ -55,7 +55,7 @@ static void exynos_irq_mask(struct irq_data *irqd) + struct exynos_irq_chip *our_chip = to_exynos_irq_chip(chip); + struct samsung_pin_bank *bank = irq_data_get_irq_chip_data(irqd); + unsigned long reg_mask = our_chip->eint_mask + bank->eint_offset; +- unsigned long mask; ++ unsigned int mask; + unsigned long flags; + + raw_spin_lock_irqsave(&bank->slock, flags); +@@ -83,7 +83,7 @@ static void exynos_irq_unmask(struct irq_data *irqd) + struct exynos_irq_chip *our_chip = 
to_exynos_irq_chip(chip); + struct samsung_pin_bank *bank = irq_data_get_irq_chip_data(irqd); + unsigned long reg_mask = our_chip->eint_mask + bank->eint_offset; +- unsigned long mask; ++ unsigned int mask; + unsigned long flags; + + /* +@@ -483,7 +483,7 @@ static void exynos_irq_eint0_15(struct irq_desc *desc) + chained_irq_exit(chip, desc); + } + +-static inline void exynos_irq_demux_eint(unsigned long pend, ++static inline void exynos_irq_demux_eint(unsigned int pend, + struct irq_domain *domain) + { + unsigned int irq; +@@ -500,8 +500,8 @@ static void exynos_irq_demux_eint16_31(struct irq_desc *desc) + { + struct irq_chip *chip = irq_desc_get_chip(desc); + struct exynos_muxed_weint_data *eintd = irq_desc_get_handler_data(desc); +- unsigned long pend; +- unsigned long mask; ++ unsigned int pend; ++ unsigned int mask; + int i; + + chained_irq_enter(chip, desc); +diff --git a/drivers/pwm/pwm-atmel.c b/drivers/pwm/pwm-atmel.c +index 5813339b597b9..3292158157b68 100644 +--- a/drivers/pwm/pwm-atmel.c ++++ b/drivers/pwm/pwm-atmel.c +@@ -319,7 +319,7 @@ static void atmel_pwm_get_state(struct pwm_chip *chip, struct pwm_device *pwm, + + cdty = atmel_pwm_ch_readl(atmel_pwm, pwm->hwpwm, + atmel_pwm->data->regs.duty); +- tmp = (u64)cdty * NSEC_PER_SEC; ++ tmp = (u64)(cprd - cdty) * NSEC_PER_SEC; + tmp <<= pres; + state->duty_cycle = DIV64_U64_ROUND_UP(tmp, rate); + +diff --git a/drivers/remoteproc/pru_rproc.c b/drivers/remoteproc/pru_rproc.c +index dcb380e868dfd..549ed3fed6259 100644 +--- a/drivers/remoteproc/pru_rproc.c ++++ b/drivers/remoteproc/pru_rproc.c +@@ -266,12 +266,17 @@ static void pru_rproc_create_debug_entries(struct rproc *rproc) + + static void pru_dispose_irq_mapping(struct pru_rproc *pru) + { +- while (pru->evt_count--) { ++ if (!pru->mapped_irq) ++ return; ++ ++ while (pru->evt_count) { ++ pru->evt_count--; + if (pru->mapped_irq[pru->evt_count] > 0) + irq_dispose_mapping(pru->mapped_irq[pru->evt_count]); + } + + kfree(pru->mapped_irq); ++ pru->mapped_irq = NULL; + } + + /* +@@ -284,7 +289,7 @@ static int pru_handle_intrmap(struct rproc *rproc) + struct pru_rproc *pru = rproc->priv; + struct pru_irq_rsc *rsc = pru->pru_interrupt_map; + struct irq_fwspec fwspec; +- struct device_node *irq_parent; ++ struct device_node *parent, *irq_parent; + int i, ret = 0; + + /* not having pru_interrupt_map is not an error */ +@@ -307,16 +312,31 @@ static int pru_handle_intrmap(struct rproc *rproc) + pru->evt_count = rsc->num_evts; + pru->mapped_irq = kcalloc(pru->evt_count, sizeof(unsigned int), + GFP_KERNEL); +- if (!pru->mapped_irq) ++ if (!pru->mapped_irq) { ++ pru->evt_count = 0; + return -ENOMEM; ++ } + + /* + * parse and fill in system event to interrupt channel and +- * channel-to-host mapping ++ * channel-to-host mapping. The interrupt controller to be used ++ * for these mappings for a given PRU remoteproc is always its ++ * corresponding sibling PRUSS INTC node. 
+ */ +- irq_parent = of_irq_find_parent(pru->dev->of_node); ++ parent = of_get_parent(dev_of_node(pru->dev)); ++ if (!parent) { ++ kfree(pru->mapped_irq); ++ pru->mapped_irq = NULL; ++ pru->evt_count = 0; ++ return -ENODEV; ++ } ++ ++ irq_parent = of_get_child_by_name(parent, "interrupt-controller"); ++ of_node_put(parent); + if (!irq_parent) { + kfree(pru->mapped_irq); ++ pru->mapped_irq = NULL; ++ pru->evt_count = 0; + return -ENODEV; + } + +@@ -332,16 +352,20 @@ static int pru_handle_intrmap(struct rproc *rproc) + + pru->mapped_irq[i] = irq_create_fwspec_mapping(&fwspec); + if (!pru->mapped_irq[i]) { +- dev_err(dev, "failed to get virq\n"); +- ret = pru->mapped_irq[i]; ++ dev_err(dev, "failed to get virq for fw mapping %d: event %d chnl %d host %d\n", ++ i, fwspec.param[0], fwspec.param[1], ++ fwspec.param[2]); ++ ret = -EINVAL; + goto map_fail; + } + } ++ of_node_put(irq_parent); + + return ret; + + map_fail: + pru_dispose_irq_mapping(pru); ++ of_node_put(irq_parent); + + return ret; + } +@@ -387,8 +411,7 @@ static int pru_rproc_stop(struct rproc *rproc) + pru_control_write_reg(pru, PRU_CTRL_CTRL, val); + + /* dispose irq mapping - new firmware can provide new mapping */ +- if (pru->mapped_irq) +- pru_dispose_irq_mapping(pru); ++ pru_dispose_irq_mapping(pru); + + return 0; + } +diff --git a/drivers/remoteproc/qcom_q6v5_mss.c b/drivers/remoteproc/qcom_q6v5_mss.c +index 66106ba25ba30..14e0ce5f18f5f 100644 +--- a/drivers/remoteproc/qcom_q6v5_mss.c ++++ b/drivers/remoteproc/qcom_q6v5_mss.c +@@ -1210,6 +1210,14 @@ static int q6v5_mpss_load(struct q6v5 *qproc) + goto release_firmware; + } + ++ if (phdr->p_filesz > phdr->p_memsz) { ++ dev_err(qproc->dev, ++ "refusing to load segment %d with p_filesz > p_memsz\n", ++ i); ++ ret = -EINVAL; ++ goto release_firmware; ++ } ++ + ptr = memremap(qproc->mpss_phys + offset, phdr->p_memsz, MEMREMAP_WC); + if (!ptr) { + dev_err(qproc->dev, +@@ -1241,6 +1249,16 @@ static int q6v5_mpss_load(struct q6v5 *qproc) + goto release_firmware; + } + ++ if (seg_fw->size != phdr->p_filesz) { ++ dev_err(qproc->dev, ++ "failed to load segment %d from truncated file %s\n", ++ i, fw_name); ++ ret = -EINVAL; ++ release_firmware(seg_fw); ++ memunmap(ptr); ++ goto release_firmware; ++ } ++ + release_firmware(seg_fw); + } + +diff --git a/drivers/rpmsg/qcom_glink_native.c b/drivers/rpmsg/qcom_glink_native.c +index 27a05167c18c3..4840886532ff7 100644 +--- a/drivers/rpmsg/qcom_glink_native.c ++++ b/drivers/rpmsg/qcom_glink_native.c +@@ -857,6 +857,7 @@ static int qcom_glink_rx_data(struct qcom_glink *glink, size_t avail) + dev_err(glink->dev, + "no intent found for channel %s intent %d", + channel->name, liid); ++ ret = -ENOENT; + goto advance_rx; + } + } +diff --git a/drivers/rtc/rtc-ds1307.c b/drivers/rtc/rtc-ds1307.c +index cd8e438bc9c46..8752620d8e34a 100644 +--- a/drivers/rtc/rtc-ds1307.c ++++ b/drivers/rtc/rtc-ds1307.c +@@ -296,7 +296,11 @@ static int ds1307_get_time(struct device *dev, struct rtc_time *t) + t->tm_min = bcd2bin(regs[DS1307_REG_MIN] & 0x7f); + tmp = regs[DS1307_REG_HOUR] & 0x3f; + t->tm_hour = bcd2bin(tmp); +- t->tm_wday = bcd2bin(regs[DS1307_REG_WDAY] & 0x07) - 1; ++ /* rx8130 is bit position, not BCD */ ++ if (ds1307->type == rx_8130) ++ t->tm_wday = fls(regs[DS1307_REG_WDAY] & 0x7f); ++ else ++ t->tm_wday = bcd2bin(regs[DS1307_REG_WDAY] & 0x07) - 1; + t->tm_mday = bcd2bin(regs[DS1307_REG_MDAY] & 0x3f); + tmp = regs[DS1307_REG_MONTH] & 0x1f; + t->tm_mon = bcd2bin(tmp) - 1; +@@ -343,7 +347,11 @@ static int ds1307_set_time(struct device *dev, struct 
rtc_time *t) + regs[DS1307_REG_SECS] = bin2bcd(t->tm_sec); + regs[DS1307_REG_MIN] = bin2bcd(t->tm_min); + regs[DS1307_REG_HOUR] = bin2bcd(t->tm_hour); +- regs[DS1307_REG_WDAY] = bin2bcd(t->tm_wday + 1); ++ /* rx8130 is bit position, not BCD */ ++ if (ds1307->type == rx_8130) ++ regs[DS1307_REG_WDAY] = 1 << t->tm_wday; ++ else ++ regs[DS1307_REG_WDAY] = bin2bcd(t->tm_wday + 1); + regs[DS1307_REG_MDAY] = bin2bcd(t->tm_mday); + regs[DS1307_REG_MONTH] = bin2bcd(t->tm_mon + 1); + +diff --git a/drivers/rtc/rtc-fsl-ftm-alarm.c b/drivers/rtc/rtc-fsl-ftm-alarm.c +index 57cc09d0a8067..c0df49fb978ce 100644 +--- a/drivers/rtc/rtc-fsl-ftm-alarm.c ++++ b/drivers/rtc/rtc-fsl-ftm-alarm.c +@@ -310,6 +310,7 @@ static const struct of_device_id ftm_rtc_match[] = { + { .compatible = "fsl,lx2160a-ftm-alarm", }, + { }, + }; ++MODULE_DEVICE_TABLE(of, ftm_rtc_match); + + static const struct acpi_device_id ftm_imx_acpi_ids[] = { + {"NXP0014",}, +diff --git a/drivers/rtc/rtc-tps65910.c b/drivers/rtc/rtc-tps65910.c +index 288abb1abdb8d..bc89c62ccb9b5 100644 +--- a/drivers/rtc/rtc-tps65910.c ++++ b/drivers/rtc/rtc-tps65910.c +@@ -18,6 +18,7 @@ + #include + #include + #include ++#include + #include + #include + #include +diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c +index f01f07116bd3e..8cb0574cfa91b 100644 +--- a/drivers/scsi/qla2xxx/qla_init.c ++++ b/drivers/scsi/qla2xxx/qla_init.c +@@ -1194,6 +1194,9 @@ static int qla24xx_post_prli_work(struct scsi_qla_host *vha, fc_port_t *fcport) + { + struct qla_work_evt *e; + ++ if (vha->host->active_mode == MODE_TARGET) ++ return QLA_FUNCTION_FAILED; ++ + e = qla2x00_alloc_work(vha, QLA_EVT_PRLI); + if (!e) + return QLA_FUNCTION_FAILED; +diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c +index d3d05e997c135..0c71a159d08f1 100644 +--- a/drivers/scsi/ufs/ufshcd.c ++++ b/drivers/scsi/ufs/ufshcd.c +@@ -8599,7 +8599,7 @@ static void ufshcd_vreg_set_lpm(struct ufs_hba *hba) + } else if (!ufshcd_is_ufs_dev_active(hba)) { + ufshcd_toggle_vreg(hba->dev, hba->vreg_info.vcc, false); + vcc_off = true; +- if (!ufshcd_is_link_active(hba)) { ++ if (ufshcd_is_link_hibern8(hba) || ufshcd_is_link_off(hba)) { + ufshcd_config_vreg_lpm(hba, hba->vreg_info.vccq); + ufshcd_config_vreg_lpm(hba, hba->vreg_info.vccq2); + } +@@ -8621,7 +8621,7 @@ static int ufshcd_vreg_set_hpm(struct ufs_hba *hba) + !hba->dev_info.is_lu_power_on_wp) { + ret = ufshcd_setup_vreg(hba, true); + } else if (!ufshcd_is_ufs_dev_active(hba)) { +- if (!ret && !ufshcd_is_link_active(hba)) { ++ if (!ufshcd_is_link_active(hba)) { + ret = ufshcd_config_vreg_hpm(hba, hba->vreg_info.vccq); + if (ret) + goto vcc_disable; +@@ -8978,10 +8978,13 @@ int ufshcd_system_suspend(struct ufs_hba *hba) + if (!hba->is_powered) + return 0; + ++ cancel_delayed_work_sync(&hba->rpm_dev_flush_recheck_work); ++ + if ((ufs_get_pm_lvl_to_dev_pwr_mode(hba->spm_lvl) == + hba->curr_dev_pwr_mode) && + (ufs_get_pm_lvl_to_link_pwr_state(hba->spm_lvl) == + hba->uic_link_state) && ++ pm_runtime_suspended(hba->dev) && + !hba->dev_info.b_rpm_dev_flush_capable) + goto out; + +diff --git a/drivers/soc/mediatek/mt8173-pm-domains.h b/drivers/soc/mediatek/mt8173-pm-domains.h +index 3e8ee5dabb437..654c717e54671 100644 +--- a/drivers/soc/mediatek/mt8173-pm-domains.h ++++ b/drivers/soc/mediatek/mt8173-pm-domains.h +@@ -12,24 +12,28 @@ + + static const struct scpsys_domain_data scpsys_domain_data_mt8173[] = { + [MT8173_POWER_DOMAIN_VDEC] = { ++ .name = "vdec", + .sta_mask = PWR_STATUS_VDEC, + .ctl_offs = SPM_VDE_PWR_CON, + 
.sram_pdn_bits = GENMASK(11, 8), + .sram_pdn_ack_bits = GENMASK(12, 12), + }, + [MT8173_POWER_DOMAIN_VENC] = { ++ .name = "venc", + .sta_mask = PWR_STATUS_VENC, + .ctl_offs = SPM_VEN_PWR_CON, + .sram_pdn_bits = GENMASK(11, 8), + .sram_pdn_ack_bits = GENMASK(15, 12), + }, + [MT8173_POWER_DOMAIN_ISP] = { ++ .name = "isp", + .sta_mask = PWR_STATUS_ISP, + .ctl_offs = SPM_ISP_PWR_CON, + .sram_pdn_bits = GENMASK(11, 8), + .sram_pdn_ack_bits = GENMASK(13, 12), + }, + [MT8173_POWER_DOMAIN_MM] = { ++ .name = "mm", + .sta_mask = PWR_STATUS_DISP, + .ctl_offs = SPM_DIS_PWR_CON, + .sram_pdn_bits = GENMASK(11, 8), +@@ -40,18 +44,21 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8173[] = { + }, + }, + [MT8173_POWER_DOMAIN_VENC_LT] = { ++ .name = "venc_lt", + .sta_mask = PWR_STATUS_VENC_LT, + .ctl_offs = SPM_VEN2_PWR_CON, + .sram_pdn_bits = GENMASK(11, 8), + .sram_pdn_ack_bits = GENMASK(15, 12), + }, + [MT8173_POWER_DOMAIN_AUDIO] = { ++ .name = "audio", + .sta_mask = PWR_STATUS_AUDIO, + .ctl_offs = SPM_AUDIO_PWR_CON, + .sram_pdn_bits = GENMASK(11, 8), + .sram_pdn_ack_bits = GENMASK(15, 12), + }, + [MT8173_POWER_DOMAIN_USB] = { ++ .name = "usb", + .sta_mask = PWR_STATUS_USB, + .ctl_offs = SPM_USB_PWR_CON, + .sram_pdn_bits = GENMASK(11, 8), +@@ -59,18 +66,21 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8173[] = { + .caps = MTK_SCPD_ACTIVE_WAKEUP, + }, + [MT8173_POWER_DOMAIN_MFG_ASYNC] = { ++ .name = "mfg_async", + .sta_mask = PWR_STATUS_MFG_ASYNC, + .ctl_offs = SPM_MFG_ASYNC_PWR_CON, + .sram_pdn_bits = GENMASK(11, 8), + .sram_pdn_ack_bits = 0, + }, + [MT8173_POWER_DOMAIN_MFG_2D] = { ++ .name = "mfg_2d", + .sta_mask = PWR_STATUS_MFG_2D, + .ctl_offs = SPM_MFG_2D_PWR_CON, + .sram_pdn_bits = GENMASK(11, 8), + .sram_pdn_ack_bits = GENMASK(13, 12), + }, + [MT8173_POWER_DOMAIN_MFG] = { ++ .name = "mfg", + .sta_mask = PWR_STATUS_MFG, + .ctl_offs = SPM_MFG_PWR_CON, + .sram_pdn_bits = GENMASK(13, 8), +diff --git a/drivers/soc/mediatek/mt8183-pm-domains.h b/drivers/soc/mediatek/mt8183-pm-domains.h +index aa5230e6c12f8..98a9940d05fbb 100644 +--- a/drivers/soc/mediatek/mt8183-pm-domains.h ++++ b/drivers/soc/mediatek/mt8183-pm-domains.h +@@ -12,12 +12,14 @@ + + static const struct scpsys_domain_data scpsys_domain_data_mt8183[] = { + [MT8183_POWER_DOMAIN_AUDIO] = { ++ .name = "audio", + .sta_mask = PWR_STATUS_AUDIO, + .ctl_offs = 0x0314, + .sram_pdn_bits = GENMASK(11, 8), + .sram_pdn_ack_bits = GENMASK(15, 12), + }, + [MT8183_POWER_DOMAIN_CONN] = { ++ .name = "conn", + .sta_mask = PWR_STATUS_CONN, + .ctl_offs = 0x032c, + .sram_pdn_bits = 0, +@@ -28,12 +30,14 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8183[] = { + }, + }, + [MT8183_POWER_DOMAIN_MFG_ASYNC] = { ++ .name = "mfg_async", + .sta_mask = PWR_STATUS_MFG_ASYNC, + .ctl_offs = 0x0334, + .sram_pdn_bits = 0, + .sram_pdn_ack_bits = 0, + }, + [MT8183_POWER_DOMAIN_MFG] = { ++ .name = "mfg", + .sta_mask = PWR_STATUS_MFG, + .ctl_offs = 0x0338, + .sram_pdn_bits = GENMASK(8, 8), +@@ -41,18 +45,21 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8183[] = { + .caps = MTK_SCPD_DOMAIN_SUPPLY, + }, + [MT8183_POWER_DOMAIN_MFG_CORE0] = { ++ .name = "mfg_core0", + .sta_mask = BIT(7), + .ctl_offs = 0x034c, + .sram_pdn_bits = GENMASK(8, 8), + .sram_pdn_ack_bits = GENMASK(12, 12), + }, + [MT8183_POWER_DOMAIN_MFG_CORE1] = { ++ .name = "mfg_core1", + .sta_mask = BIT(20), + .ctl_offs = 0x0310, + .sram_pdn_bits = GENMASK(8, 8), + .sram_pdn_ack_bits = GENMASK(12, 12), + }, + [MT8183_POWER_DOMAIN_MFG_2D] = { ++ .name = 
"mfg_2d", + .sta_mask = PWR_STATUS_MFG_2D, + .ctl_offs = 0x0348, + .sram_pdn_bits = GENMASK(8, 8), +@@ -65,6 +72,7 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8183[] = { + }, + }, + [MT8183_POWER_DOMAIN_DISP] = { ++ .name = "disp", + .sta_mask = PWR_STATUS_DISP, + .ctl_offs = 0x030c, + .sram_pdn_bits = GENMASK(8, 8), +@@ -83,6 +91,7 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8183[] = { + }, + }, + [MT8183_POWER_DOMAIN_CAM] = { ++ .name = "cam", + .sta_mask = BIT(25), + .ctl_offs = 0x0344, + .sram_pdn_bits = GENMASK(9, 8), +@@ -105,6 +114,7 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8183[] = { + }, + }, + [MT8183_POWER_DOMAIN_ISP] = { ++ .name = "isp", + .sta_mask = PWR_STATUS_ISP, + .ctl_offs = 0x0308, + .sram_pdn_bits = GENMASK(9, 8), +@@ -127,6 +137,7 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8183[] = { + }, + }, + [MT8183_POWER_DOMAIN_VDEC] = { ++ .name = "vdec", + .sta_mask = BIT(31), + .ctl_offs = 0x0300, + .sram_pdn_bits = GENMASK(8, 8), +@@ -139,6 +150,7 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8183[] = { + }, + }, + [MT8183_POWER_DOMAIN_VENC] = { ++ .name = "venc", + .sta_mask = PWR_STATUS_VENC, + .ctl_offs = 0x0304, + .sram_pdn_bits = GENMASK(11, 8), +@@ -151,6 +163,7 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8183[] = { + }, + }, + [MT8183_POWER_DOMAIN_VPU_TOP] = { ++ .name = "vpu_top", + .sta_mask = BIT(26), + .ctl_offs = 0x0324, + .sram_pdn_bits = GENMASK(8, 8), +@@ -177,6 +190,7 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8183[] = { + }, + }, + [MT8183_POWER_DOMAIN_VPU_CORE0] = { ++ .name = "vpu_core0", + .sta_mask = BIT(27), + .ctl_offs = 0x33c, + .sram_pdn_bits = GENMASK(11, 8), +@@ -194,6 +208,7 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8183[] = { + .caps = MTK_SCPD_SRAM_ISO, + }, + [MT8183_POWER_DOMAIN_VPU_CORE1] = { ++ .name = "vpu_core1", + .sta_mask = BIT(28), + .ctl_offs = 0x0340, + .sram_pdn_bits = GENMASK(11, 8), +diff --git a/drivers/soc/mediatek/mt8192-pm-domains.h b/drivers/soc/mediatek/mt8192-pm-domains.h +index 0fdf6dc6231f4..543dda70de014 100644 +--- a/drivers/soc/mediatek/mt8192-pm-domains.h ++++ b/drivers/soc/mediatek/mt8192-pm-domains.h +@@ -12,6 +12,7 @@ + + static const struct scpsys_domain_data scpsys_domain_data_mt8192[] = { + [MT8192_POWER_DOMAIN_AUDIO] = { ++ .name = "audio", + .sta_mask = BIT(21), + .ctl_offs = 0x0354, + .sram_pdn_bits = GENMASK(8, 8), +@@ -24,6 +25,7 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8192[] = { + }, + }, + [MT8192_POWER_DOMAIN_CONN] = { ++ .name = "conn", + .sta_mask = PWR_STATUS_CONN, + .ctl_offs = 0x0304, + .sram_pdn_bits = 0, +@@ -45,12 +47,14 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8192[] = { + .caps = MTK_SCPD_KEEP_DEFAULT_OFF, + }, + [MT8192_POWER_DOMAIN_MFG0] = { ++ .name = "mfg0", + .sta_mask = BIT(2), + .ctl_offs = 0x0308, + .sram_pdn_bits = GENMASK(8, 8), + .sram_pdn_ack_bits = GENMASK(12, 12), + }, + [MT8192_POWER_DOMAIN_MFG1] = { ++ .name = "mfg1", + .sta_mask = BIT(3), + .ctl_offs = 0x030c, + .sram_pdn_bits = GENMASK(8, 8), +@@ -75,36 +79,42 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8192[] = { + }, + }, + [MT8192_POWER_DOMAIN_MFG2] = { ++ .name = "mfg2", + .sta_mask = BIT(4), + .ctl_offs = 0x0310, + .sram_pdn_bits = GENMASK(8, 8), + .sram_pdn_ack_bits = GENMASK(12, 12), + }, + [MT8192_POWER_DOMAIN_MFG3] = { ++ .name = "mfg3", + .sta_mask = BIT(5), + .ctl_offs = 0x0314, + .sram_pdn_bits = 
GENMASK(8, 8), + .sram_pdn_ack_bits = GENMASK(12, 12), + }, + [MT8192_POWER_DOMAIN_MFG4] = { ++ .name = "mfg4", + .sta_mask = BIT(6), + .ctl_offs = 0x0318, + .sram_pdn_bits = GENMASK(8, 8), + .sram_pdn_ack_bits = GENMASK(12, 12), + }, + [MT8192_POWER_DOMAIN_MFG5] = { ++ .name = "mfg5", + .sta_mask = BIT(7), + .ctl_offs = 0x031c, + .sram_pdn_bits = GENMASK(8, 8), + .sram_pdn_ack_bits = GENMASK(12, 12), + }, + [MT8192_POWER_DOMAIN_MFG6] = { ++ .name = "mfg6", + .sta_mask = BIT(8), + .ctl_offs = 0x0320, + .sram_pdn_bits = GENMASK(8, 8), + .sram_pdn_ack_bits = GENMASK(12, 12), + }, + [MT8192_POWER_DOMAIN_DISP] = { ++ .name = "disp", + .sta_mask = BIT(20), + .ctl_offs = 0x0350, + .sram_pdn_bits = GENMASK(8, 8), +@@ -133,6 +143,7 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8192[] = { + }, + }, + [MT8192_POWER_DOMAIN_IPE] = { ++ .name = "ipe", + .sta_mask = BIT(14), + .ctl_offs = 0x0338, + .sram_pdn_bits = GENMASK(8, 8), +@@ -149,6 +160,7 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8192[] = { + }, + }, + [MT8192_POWER_DOMAIN_ISP] = { ++ .name = "isp", + .sta_mask = BIT(12), + .ctl_offs = 0x0330, + .sram_pdn_bits = GENMASK(8, 8), +@@ -165,6 +177,7 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8192[] = { + }, + }, + [MT8192_POWER_DOMAIN_ISP2] = { ++ .name = "isp2", + .sta_mask = BIT(13), + .ctl_offs = 0x0334, + .sram_pdn_bits = GENMASK(8, 8), +@@ -181,6 +194,7 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8192[] = { + }, + }, + [MT8192_POWER_DOMAIN_MDP] = { ++ .name = "mdp", + .sta_mask = BIT(19), + .ctl_offs = 0x034c, + .sram_pdn_bits = GENMASK(8, 8), +@@ -197,6 +211,7 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8192[] = { + }, + }, + [MT8192_POWER_DOMAIN_VENC] = { ++ .name = "venc", + .sta_mask = BIT(17), + .ctl_offs = 0x0344, + .sram_pdn_bits = GENMASK(8, 8), +@@ -213,6 +228,7 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8192[] = { + }, + }, + [MT8192_POWER_DOMAIN_VDEC] = { ++ .name = "vdec", + .sta_mask = BIT(15), + .ctl_offs = 0x033c, + .sram_pdn_bits = GENMASK(8, 8), +@@ -229,12 +245,14 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8192[] = { + }, + }, + [MT8192_POWER_DOMAIN_VDEC2] = { ++ .name = "vdec2", + .sta_mask = BIT(16), + .ctl_offs = 0x0340, + .sram_pdn_bits = GENMASK(8, 8), + .sram_pdn_ack_bits = GENMASK(12, 12), + }, + [MT8192_POWER_DOMAIN_CAM] = { ++ .name = "cam", + .sta_mask = BIT(23), + .ctl_offs = 0x035c, + .sram_pdn_bits = GENMASK(8, 8), +@@ -263,18 +281,21 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8192[] = { + }, + }, + [MT8192_POWER_DOMAIN_CAM_RAWA] = { ++ .name = "cam_rawa", + .sta_mask = BIT(24), + .ctl_offs = 0x0360, + .sram_pdn_bits = GENMASK(8, 8), + .sram_pdn_ack_bits = GENMASK(12, 12), + }, + [MT8192_POWER_DOMAIN_CAM_RAWB] = { ++ .name = "cam_rawb", + .sta_mask = BIT(25), + .ctl_offs = 0x0364, + .sram_pdn_bits = GENMASK(8, 8), + .sram_pdn_ack_bits = GENMASK(12, 12), + }, + [MT8192_POWER_DOMAIN_CAM_RAWC] = { ++ .name = "cam_rawc", + .sta_mask = BIT(26), + .ctl_offs = 0x0368, + .sram_pdn_bits = GENMASK(8, 8), +diff --git a/drivers/soc/mediatek/mtk-pm-domains.c b/drivers/soc/mediatek/mtk-pm-domains.c +index 06aaf03b194c0..0af00efa0ef83 100644 +--- a/drivers/soc/mediatek/mtk-pm-domains.c ++++ b/drivers/soc/mediatek/mtk-pm-domains.c +@@ -438,7 +438,11 @@ generic_pm_domain *scpsys_add_one_domain(struct scpsys *scpsys, struct device_no + goto err_unprepare_subsys_clocks; + } + +- pd->genpd.name = node->name; ++ if 
(!pd->data->name) ++ pd->genpd.name = node->name; ++ else ++ pd->genpd.name = pd->data->name; ++ + pd->genpd.power_off = scpsys_power_off; + pd->genpd.power_on = scpsys_power_on; + +diff --git a/drivers/soc/mediatek/mtk-pm-domains.h b/drivers/soc/mediatek/mtk-pm-domains.h +index 141dc76054e69..21a4e113bbecb 100644 +--- a/drivers/soc/mediatek/mtk-pm-domains.h ++++ b/drivers/soc/mediatek/mtk-pm-domains.h +@@ -76,6 +76,7 @@ struct scpsys_bus_prot_data { + + /** + * struct scpsys_domain_data - scp domain data for power on/off flow ++ * @name: The name of the power domain. + * @sta_mask: The mask for power on/off status bit. + * @ctl_offs: The offset for main power control register. + * @sram_pdn_bits: The mask for sram power control bits. +@@ -85,6 +86,7 @@ struct scpsys_bus_prot_data { + * @bp_smi: bus protection for smi subsystem + */ + struct scpsys_domain_data { ++ const char *name; + u32 sta_mask; + int ctl_offs; + u32 sram_pdn_bits; +diff --git a/drivers/staging/media/rkvdec/rkvdec.c b/drivers/staging/media/rkvdec/rkvdec.c +index 5f0219d117fb8..d821661d30f38 100644 +--- a/drivers/staging/media/rkvdec/rkvdec.c ++++ b/drivers/staging/media/rkvdec/rkvdec.c +@@ -1072,7 +1072,7 @@ static struct platform_driver rkvdec_driver = { + .remove = rkvdec_remove, + .driver = { + .name = "rkvdec", +- .of_match_table = of_match_ptr(of_rkvdec_match), ++ .of_match_table = of_rkvdec_match, + .pm = &rkvdec_pm_ops, + }, + }; +diff --git a/drivers/thermal/qcom/tsens.c b/drivers/thermal/qcom/tsens.c +index d8ce3a687b80d..3c4c0516e58ab 100644 +--- a/drivers/thermal/qcom/tsens.c ++++ b/drivers/thermal/qcom/tsens.c +@@ -755,8 +755,10 @@ int __init init_common(struct tsens_priv *priv) + for (i = VER_MAJOR; i <= VER_STEP; i++) { + priv->rf[i] = devm_regmap_field_alloc(dev, priv->srot_map, + priv->fields[i]); +- if (IS_ERR(priv->rf[i])) +- return PTR_ERR(priv->rf[i]); ++ if (IS_ERR(priv->rf[i])) { ++ ret = PTR_ERR(priv->rf[i]); ++ goto err_put_device; ++ } + } + ret = regmap_field_read(priv->rf[VER_MINOR], &ver_minor); + if (ret) +diff --git a/drivers/thermal/thermal_of.c b/drivers/thermal/thermal_of.c +index 69ef12f852b7d..5b76f9a1280d5 100644 +--- a/drivers/thermal/thermal_of.c ++++ b/drivers/thermal/thermal_of.c +@@ -704,14 +704,17 @@ static int thermal_of_populate_bind_params(struct device_node *np, + + count = of_count_phandle_with_args(np, "cooling-device", + "#cooling-cells"); +- if (!count) { ++ if (count <= 0) { + pr_err("Add a cooling_device property with at least one device\n"); ++ ret = -ENOENT; + goto end; + } + + __tcbp = kcalloc(count, sizeof(*__tcbp), GFP_KERNEL); +- if (!__tcbp) ++ if (!__tcbp) { ++ ret = -ENOMEM; + goto end; ++ } + + for (i = 0; i < count; i++) { + ret = of_parse_phandle_with_args(np, "cooling-device", +diff --git a/drivers/usb/class/cdc-wdm.c b/drivers/usb/class/cdc-wdm.c +index 508b1c3f8b731..d1e4a7379bebd 100644 +--- a/drivers/usb/class/cdc-wdm.c ++++ b/drivers/usb/class/cdc-wdm.c +@@ -321,12 +321,23 @@ exit: + + } + +-static void kill_urbs(struct wdm_device *desc) ++static void poison_urbs(struct wdm_device *desc) + { + /* the order here is essential */ +- usb_kill_urb(desc->command); +- usb_kill_urb(desc->validity); +- usb_kill_urb(desc->response); ++ usb_poison_urb(desc->command); ++ usb_poison_urb(desc->validity); ++ usb_poison_urb(desc->response); ++} ++ ++static void unpoison_urbs(struct wdm_device *desc) ++{ ++ /* ++ * the order here is not essential ++ * it is symmetrical just to be nice ++ */ ++ usb_unpoison_urb(desc->response); ++ usb_unpoison_urb(desc->validity); ++ 
usb_unpoison_urb(desc->command); + } + + static void free_urbs(struct wdm_device *desc) +@@ -741,11 +752,12 @@ static int wdm_release(struct inode *inode, struct file *file) + if (!desc->count) { + if (!test_bit(WDM_DISCONNECTING, &desc->flags)) { + dev_dbg(&desc->intf->dev, "wdm_release: cleanup\n"); +- kill_urbs(desc); ++ poison_urbs(desc); + spin_lock_irq(&desc->iuspin); + desc->resp_count = 0; + spin_unlock_irq(&desc->iuspin); + desc->manage_power(desc->intf, 0); ++ unpoison_urbs(desc); + } else { + /* must avoid dev_printk here as desc->intf is invalid */ + pr_debug(KBUILD_MODNAME " %s: device gone - cleaning up\n", __func__); +@@ -1037,9 +1049,9 @@ static void wdm_disconnect(struct usb_interface *intf) + wake_up_all(&desc->wait); + mutex_lock(&desc->rlock); + mutex_lock(&desc->wlock); ++ poison_urbs(desc); + cancel_work_sync(&desc->rxwork); + cancel_work_sync(&desc->service_outs_intr); +- kill_urbs(desc); + mutex_unlock(&desc->wlock); + mutex_unlock(&desc->rlock); + +@@ -1080,9 +1092,10 @@ static int wdm_suspend(struct usb_interface *intf, pm_message_t message) + set_bit(WDM_SUSPENDING, &desc->flags); + spin_unlock_irq(&desc->iuspin); + /* callback submits work - order is essential */ +- kill_urbs(desc); ++ poison_urbs(desc); + cancel_work_sync(&desc->rxwork); + cancel_work_sync(&desc->service_outs_intr); ++ unpoison_urbs(desc); + } + if (!PMSG_IS_AUTO(message)) { + mutex_unlock(&desc->wlock); +@@ -1140,7 +1153,7 @@ static int wdm_pre_reset(struct usb_interface *intf) + wake_up_all(&desc->wait); + mutex_lock(&desc->rlock); + mutex_lock(&desc->wlock); +- kill_urbs(desc); ++ poison_urbs(desc); + cancel_work_sync(&desc->rxwork); + cancel_work_sync(&desc->service_outs_intr); + return 0; +@@ -1151,6 +1164,7 @@ static int wdm_post_reset(struct usb_interface *intf) + struct wdm_device *desc = wdm_find_device(intf); + int rv; + ++ unpoison_urbs(desc); + clear_bit(WDM_OVERFLOW, &desc->flags); + clear_bit(WDM_RESETTING, &desc->flags); + rv = recover_from_urb_loss(desc); +diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c +index 404507d1b76f1..13fe37fbbd2c8 100644 +--- a/drivers/usb/core/hub.c ++++ b/drivers/usb/core/hub.c +@@ -3593,9 +3593,6 @@ int usb_port_resume(struct usb_device *udev, pm_message_t msg) + * sequence. + */ + status = hub_port_status(hub, port1, &portstatus, &portchange); +- +- /* TRSMRCY = 10 msec */ +- msleep(10); + } + + SuspendCleared: +@@ -3610,6 +3607,9 @@ int usb_port_resume(struct usb_device *udev, pm_message_t msg) + usb_clear_port_feature(hub->hdev, port1, + USB_PORT_FEAT_C_SUSPEND); + } ++ ++ /* TRSMRCY = 10 msec */ ++ msleep(10); + } + + if (udev->persist_enabled) +diff --git a/drivers/usb/dwc2/core.h b/drivers/usb/dwc2/core.h +index 7161344c65221..641e4251cb7f1 100644 +--- a/drivers/usb/dwc2/core.h ++++ b/drivers/usb/dwc2/core.h +@@ -112,6 +112,7 @@ struct dwc2_hsotg_req; + * @debugfs: File entry for debugfs file for this endpoint. + * @dir_in: Set to true if this endpoint is of the IN direction, which + * means that it is sending data to the Host. ++ * @map_dir: Set to the value of dir_in when the DMA buffer is mapped. + * @index: The index for the endpoint registers. + * @mc: Multi Count - number of transactions per microframe + * @interval: Interval for periodic endpoints, in frames or microframes. 
+@@ -161,6 +162,7 @@ struct dwc2_hsotg_ep { + unsigned short fifo_index; + + unsigned char dir_in; ++ unsigned char map_dir; + unsigned char index; + unsigned char mc; + u16 interval; +diff --git a/drivers/usb/dwc2/gadget.c b/drivers/usb/dwc2/gadget.c +index ad4c94366dadf..d2f623d83bf78 100644 +--- a/drivers/usb/dwc2/gadget.c ++++ b/drivers/usb/dwc2/gadget.c +@@ -422,7 +422,7 @@ static void dwc2_hsotg_unmap_dma(struct dwc2_hsotg *hsotg, + { + struct usb_request *req = &hs_req->req; + +- usb_gadget_unmap_request(&hsotg->gadget, req, hs_ep->dir_in); ++ usb_gadget_unmap_request(&hsotg->gadget, req, hs_ep->map_dir); + } + + /* +@@ -1242,6 +1242,7 @@ static int dwc2_hsotg_map_dma(struct dwc2_hsotg *hsotg, + { + int ret; + ++ hs_ep->map_dir = hs_ep->dir_in; + ret = usb_gadget_map_request(&hsotg->gadget, req, hs_ep->dir_in); + if (ret) + goto dma_error; +diff --git a/drivers/usb/dwc3/dwc3-imx8mp.c b/drivers/usb/dwc3/dwc3-imx8mp.c +index 75f0042b998b1..84c1a4ac24449 100644 +--- a/drivers/usb/dwc3/dwc3-imx8mp.c ++++ b/drivers/usb/dwc3/dwc3-imx8mp.c +@@ -167,6 +167,7 @@ static int dwc3_imx8mp_probe(struct platform_device *pdev) + + dwc3_np = of_get_child_by_name(node, "dwc3"); + if (!dwc3_np) { ++ err = -ENODEV; + dev_err(dev, "failed to find dwc3 core child\n"); + goto disable_rpm; + } +diff --git a/drivers/usb/dwc3/dwc3-omap.c b/drivers/usb/dwc3/dwc3-omap.c +index 3db17806e92e7..e196673f5c647 100644 +--- a/drivers/usb/dwc3/dwc3-omap.c ++++ b/drivers/usb/dwc3/dwc3-omap.c +@@ -437,8 +437,13 @@ static int dwc3_omap_extcon_register(struct dwc3_omap *omap) + + if (extcon_get_state(edev, EXTCON_USB) == true) + dwc3_omap_set_mailbox(omap, OMAP_DWC3_VBUS_VALID); ++ else ++ dwc3_omap_set_mailbox(omap, OMAP_DWC3_VBUS_OFF); ++ + if (extcon_get_state(edev, EXTCON_USB_HOST) == true) + dwc3_omap_set_mailbox(omap, OMAP_DWC3_ID_GROUND); ++ else ++ dwc3_omap_set_mailbox(omap, OMAP_DWC3_ID_FLOAT); + + omap->edev = edev; + } +diff --git a/drivers/usb/dwc3/dwc3-pci.c b/drivers/usb/dwc3/dwc3-pci.c +index 95f954a1d7be9..19789e94bbd00 100644 +--- a/drivers/usb/dwc3/dwc3-pci.c ++++ b/drivers/usb/dwc3/dwc3-pci.c +@@ -123,6 +123,7 @@ static const struct property_entry dwc3_pci_mrfld_properties[] = { + PROPERTY_ENTRY_STRING("linux,extcon-name", "mrfld_bcove_pwrsrc"), + PROPERTY_ENTRY_BOOL("snps,dis_u3_susphy_quirk"), + PROPERTY_ENTRY_BOOL("snps,dis_u2_susphy_quirk"), ++ PROPERTY_ENTRY_BOOL("snps,usb2-gadget-lpm-disable"), + PROPERTY_ENTRY_BOOL("linux,sysdev_is_parent"), + {} + }; +diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c +index f85eda6bc988e..8585b56d9f2df 100644 +--- a/drivers/usb/dwc3/gadget.c ++++ b/drivers/usb/dwc3/gadget.c +@@ -1676,7 +1676,9 @@ static int __dwc3_gadget_ep_queue(struct dwc3_ep *dep, struct dwc3_request *req) + } + } + +- return __dwc3_gadget_kick_transfer(dep); ++ __dwc3_gadget_kick_transfer(dep); ++ ++ return 0; + } + + static int dwc3_gadget_ep_queue(struct usb_ep *ep, struct usb_request *request, +@@ -2302,6 +2304,10 @@ static void dwc3_gadget_enable_irq(struct dwc3 *dwc) + if (DWC3_VER_IS_PRIOR(DWC3, 250A)) + reg |= DWC3_DEVTEN_ULSTCNGEN; + ++ /* On 2.30a and above this bit enables U3/L2-L1 Suspend Events */ ++ if (!DWC3_VER_IS_PRIOR(DWC3, 230A)) ++ reg |= DWC3_DEVTEN_EOPFEN; ++ + dwc3_writel(dwc->regs, DWC3_DEVTEN, reg); + } + +@@ -4024,8 +4030,9 @@ err0: + + void dwc3_gadget_exit(struct dwc3 *dwc) + { +- usb_del_gadget_udc(dwc->gadget); ++ usb_del_gadget(dwc->gadget); + dwc3_gadget_free_endpoints(dwc); ++ usb_put_gadget(dwc->gadget); + dma_free_coherent(dwc->sysdev, 
DWC3_BOUNCE_SIZE, dwc->bounce, + dwc->bounce_addr); + kfree(dwc->setup_buf); +diff --git a/drivers/usb/host/fotg210-hcd.c b/drivers/usb/host/fotg210-hcd.c +index 5617ef30530a6..f0e4a315cc81b 100644 +--- a/drivers/usb/host/fotg210-hcd.c ++++ b/drivers/usb/host/fotg210-hcd.c +@@ -5568,7 +5568,7 @@ static int fotg210_hcd_probe(struct platform_device *pdev) + struct usb_hcd *hcd; + struct resource *res; + int irq; +- int retval = -ENODEV; ++ int retval; + struct fotg210_hcd *fotg210; + + if (usb_disabled()) +@@ -5588,7 +5588,7 @@ static int fotg210_hcd_probe(struct platform_device *pdev) + hcd = usb_create_hcd(&fotg210_fotg210_hc_driver, dev, + dev_name(dev)); + if (!hcd) { +- dev_err(dev, "failed to create hcd with err %d\n", retval); ++ dev_err(dev, "failed to create hcd\n"); + retval = -ENOMEM; + goto fail_create_hcd; + } +diff --git a/drivers/usb/host/xhci-ext-caps.h b/drivers/usb/host/xhci-ext-caps.h +index fa59b242cd515..e8af0a125f84b 100644 +--- a/drivers/usb/host/xhci-ext-caps.h ++++ b/drivers/usb/host/xhci-ext-caps.h +@@ -7,8 +7,9 @@ + * Author: Sarah Sharp + * Some code borrowed from the Linux EHCI driver. + */ +-/* Up to 16 ms to halt an HC */ +-#define XHCI_MAX_HALT_USEC (16*1000) ++ ++/* HC should halt within 16 ms, but use 32 ms as some hosts take longer */ ++#define XHCI_MAX_HALT_USEC (32 * 1000) + /* HC not running - set to 1 when run/stop bit is cleared. */ + #define XHCI_STS_HALT (1<<0) + +diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c +index 5bbccc9a0179f..7bc18cf8042cc 100644 +--- a/drivers/usb/host/xhci-pci.c ++++ b/drivers/usb/host/xhci-pci.c +@@ -57,6 +57,7 @@ + #define PCI_DEVICE_ID_INTEL_CML_XHCI 0xa3af + #define PCI_DEVICE_ID_INTEL_TIGER_LAKE_XHCI 0x9a13 + #define PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI 0x1138 ++#define PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI 0x461e + + #define PCI_DEVICE_ID_AMD_PROMONTORYA_4 0x43b9 + #define PCI_DEVICE_ID_AMD_PROMONTORYA_3 0x43ba +@@ -166,8 +167,10 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci) + (pdev->device == 0x15e0 || pdev->device == 0x15e1)) + xhci->quirks |= XHCI_SNPS_BROKEN_SUSPEND; + +- if (pdev->vendor == PCI_VENDOR_ID_AMD && pdev->device == 0x15e5) ++ if (pdev->vendor == PCI_VENDOR_ID_AMD && pdev->device == 0x15e5) { + xhci->quirks |= XHCI_DISABLE_SPARSE; ++ xhci->quirks |= XHCI_RESET_ON_RESUME; ++ } + + if (pdev->vendor == PCI_VENDOR_ID_AMD) + xhci->quirks |= XHCI_TRUST_TX_LENGTH; +@@ -243,7 +246,8 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci) + pdev->device == PCI_DEVICE_ID_INTEL_TITAN_RIDGE_DD_XHCI || + pdev->device == PCI_DEVICE_ID_INTEL_ICE_LAKE_XHCI || + pdev->device == PCI_DEVICE_ID_INTEL_TIGER_LAKE_XHCI || +- pdev->device == PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI)) ++ pdev->device == PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI || ++ pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI)) + xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW; + + if (pdev->vendor == PCI_VENDOR_ID_ETRON && +diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c +index 59d41d2c200df..6cdea0d00d194 100644 +--- a/drivers/usb/host/xhci-ring.c ++++ b/drivers/usb/host/xhci-ring.c +@@ -863,7 +863,7 @@ done: + return ret; + } + +-static void xhci_handle_halted_endpoint(struct xhci_hcd *xhci, ++static int xhci_handle_halted_endpoint(struct xhci_hcd *xhci, + struct xhci_virt_ep *ep, unsigned int stream_id, + struct xhci_td *td, + enum xhci_ep_reset_type reset_type) +@@ -876,7 +876,7 @@ static void xhci_handle_halted_endpoint(struct xhci_hcd *xhci, + * Device will be reset 
soon to recover the link so don't do anything + */ + if (ep->vdev->flags & VDEV_PORT_ERROR) +- return; ++ return -ENODEV; + + /* add td to cancelled list and let reset ep handler take care of it */ + if (reset_type == EP_HARD_RESET) { +@@ -889,16 +889,18 @@ static void xhci_handle_halted_endpoint(struct xhci_hcd *xhci, + + if (ep->ep_state & EP_HALTED) { + xhci_dbg(xhci, "Reset ep command already pending\n"); +- return; ++ return 0; + } + + err = xhci_reset_halted_ep(xhci, slot_id, ep->ep_index, reset_type); + if (err) +- return; ++ return err; + + ep->ep_state |= EP_HALTED; + + xhci_ring_cmd_db(xhci); ++ ++ return 0; + } + + /* +@@ -1015,6 +1017,7 @@ static void xhci_handle_cmd_stop_ep(struct xhci_hcd *xhci, int slot_id, + struct xhci_td *td = NULL; + enum xhci_ep_reset_type reset_type; + struct xhci_command *command; ++ int err; + + if (unlikely(TRB_TO_SUSPEND_PORT(le32_to_cpu(trb->generic.field[3])))) { + if (!xhci->devs[slot_id]) +@@ -1059,7 +1062,10 @@ static void xhci_handle_cmd_stop_ep(struct xhci_hcd *xhci, int slot_id, + td->status = -EPROTO; + } + /* reset ep, reset handler cleans up cancelled tds */ +- xhci_handle_halted_endpoint(xhci, ep, 0, td, reset_type); ++ err = xhci_handle_halted_endpoint(xhci, ep, 0, td, ++ reset_type); ++ if (err) ++ break; + xhci_stop_watchdog_timer_in_irq(xhci, ep); + return; + case EP_STATE_RUNNING: +diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c +index 6672c2f403034..0d2f1c37ab745 100644 +--- a/drivers/usb/host/xhci.c ++++ b/drivers/usb/host/xhci.c +@@ -1514,7 +1514,7 @@ static int xhci_configure_endpoint(struct xhci_hcd *xhci, + * we need to issue an evaluate context command and wait on it. + */ + static int xhci_check_maxpacket(struct xhci_hcd *xhci, unsigned int slot_id, +- unsigned int ep_index, struct urb *urb) ++ unsigned int ep_index, struct urb *urb, gfp_t mem_flags) + { + struct xhci_container_ctx *out_ctx; + struct xhci_input_control_ctx *ctrl_ctx; +@@ -1545,7 +1545,7 @@ static int xhci_check_maxpacket(struct xhci_hcd *xhci, unsigned int slot_id, + * changes max packet sizes. 
+ */ + +- command = xhci_alloc_command(xhci, true, GFP_KERNEL); ++ command = xhci_alloc_command(xhci, true, mem_flags); + if (!command) + return -ENOMEM; + +@@ -1639,7 +1639,7 @@ static int xhci_urb_enqueue(struct usb_hcd *hcd, struct urb *urb, gfp_t mem_flag + */ + if (urb->dev->speed == USB_SPEED_FULL) { + ret = xhci_check_maxpacket(xhci, slot_id, +- ep_index, urb); ++ ep_index, urb, mem_flags); + if (ret < 0) { + xhci_urb_free_priv(urb_priv); + urb->hcpriv = NULL; +diff --git a/drivers/usb/musb/mediatek.c b/drivers/usb/musb/mediatek.c +index eebeadd269461..6b92d037d8fc8 100644 +--- a/drivers/usb/musb/mediatek.c ++++ b/drivers/usb/musb/mediatek.c +@@ -518,8 +518,8 @@ static int mtk_musb_probe(struct platform_device *pdev) + + glue->xceiv = devm_usb_get_phy(dev, USB_PHY_TYPE_USB2); + if (IS_ERR(glue->xceiv)) { +- dev_err(dev, "fail to getting usb-phy %d\n", ret); + ret = PTR_ERR(glue->xceiv); ++ dev_err(dev, "fail to getting usb-phy %d\n", ret); + goto err_unregister_usb_phy; + } + +diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c +index 1a086ba254d23..52acc884a61f1 100644 +--- a/drivers/usb/typec/tcpm/tcpm.c ++++ b/drivers/usb/typec/tcpm/tcpm.c +@@ -1877,7 +1877,6 @@ static void vdm_run_state_machine(struct tcpm_port *port) + } + + if (res < 0) { +- port->vdm_sm_running = false; + return; + } + } +@@ -1893,6 +1892,7 @@ static void vdm_run_state_machine(struct tcpm_port *port) + port->vdo_data[0] = port->vdo_retry; + port->vdo_count = 1; + port->vdm_state = VDM_STATE_READY; ++ tcpm_ams_finish(port); + break; + case VDM_STATE_BUSY: + port->vdm_state = VDM_STATE_ERR_TMOUT; +@@ -1958,7 +1958,7 @@ static void vdm_state_machine_work(struct kthread_work *work) + port->vdm_state != VDM_STATE_BUSY && + port->vdm_state != VDM_STATE_SEND_MESSAGE); + +- if (port->vdm_state == VDM_STATE_ERR_TMOUT) ++ if (port->vdm_state < VDM_STATE_READY) + port->vdm_sm_running = false; + + mutex_unlock(&port->lock); +@@ -2387,7 +2387,7 @@ static void tcpm_pd_data_request(struct tcpm_port *port, + port->nr_sink_caps = cnt; + port->sink_cap_done = true; + if (port->ams == GET_SINK_CAPABILITIES) +- tcpm_pd_handle_state(port, ready_state(port), NONE_AMS, 0); ++ tcpm_set_state(port, ready_state(port), 0); + /* Unexpected Sink Capabilities */ + else + tcpm_pd_handle_msg(port, +@@ -2549,6 +2549,16 @@ static void tcpm_pd_ctrl_request(struct tcpm_port *port, + port->sink_cap_done = true; + tcpm_set_state(port, ready_state(port), 0); + break; ++ case SRC_READY: ++ case SNK_READY: ++ if (port->vdm_state > VDM_STATE_READY) { ++ port->vdm_state = VDM_STATE_DONE; ++ if (tcpm_vdm_ams(port)) ++ tcpm_ams_finish(port); ++ mod_vdm_delayed_work(port, 0); ++ break; ++ } ++ fallthrough; + default: + tcpm_pd_handle_state(port, + port->pwr_role == TYPEC_SOURCE ? 
+@@ -3135,10 +3145,10 @@ static unsigned int tcpm_pd_select_pps_apdo(struct tcpm_port *port) + port->pps_data.req_max_volt = min(pdo_pps_apdo_max_voltage(src), + pdo_pps_apdo_max_voltage(snk)); + port->pps_data.req_max_curr = min_pps_apdo_current(src, snk); +- port->pps_data.req_out_volt = min(port->pps_data.max_volt, +- max(port->pps_data.min_volt, ++ port->pps_data.req_out_volt = min(port->pps_data.req_max_volt, ++ max(port->pps_data.req_min_volt, + port->pps_data.req_out_volt)); +- port->pps_data.req_op_curr = min(port->pps_data.max_curr, ++ port->pps_data.req_op_curr = min(port->pps_data.req_max_curr, + port->pps_data.req_op_curr); + } + +diff --git a/drivers/usb/typec/ucsi/ucsi.c b/drivers/usb/typec/ucsi/ucsi.c +index 244270755ae61..1e266f083bf8a 100644 +--- a/drivers/usb/typec/ucsi/ucsi.c ++++ b/drivers/usb/typec/ucsi/ucsi.c +@@ -495,7 +495,8 @@ static void ucsi_unregister_altmodes(struct ucsi_connector *con, u8 recipient) + } + } + +-static void ucsi_get_pdos(struct ucsi_connector *con, int is_partner) ++static int ucsi_get_pdos(struct ucsi_connector *con, int is_partner, ++ u32 *pdos, int offset, int num_pdos) + { + struct ucsi *ucsi = con->ucsi; + u64 command; +@@ -503,17 +504,39 @@ static void ucsi_get_pdos(struct ucsi_connector *con, int is_partner) + + command = UCSI_COMMAND(UCSI_GET_PDOS) | UCSI_CONNECTOR_NUMBER(con->num); + command |= UCSI_GET_PDOS_PARTNER_PDO(is_partner); +- command |= UCSI_GET_PDOS_NUM_PDOS(UCSI_MAX_PDOS - 1); ++ command |= UCSI_GET_PDOS_PDO_OFFSET(offset); ++ command |= UCSI_GET_PDOS_NUM_PDOS(num_pdos - 1); + command |= UCSI_GET_PDOS_SRC_PDOS; +- ret = ucsi_send_command(ucsi, command, con->src_pdos, +- sizeof(con->src_pdos)); +- if (ret < 0) { ++ ret = ucsi_send_command(ucsi, command, pdos + offset, ++ num_pdos * sizeof(u32)); ++ if (ret < 0) + dev_err(ucsi->dev, "UCSI_GET_PDOS failed (%d)\n", ret); ++ if (ret == 0 && offset == 0) ++ dev_warn(ucsi->dev, "UCSI_GET_PDOS returned 0 bytes\n"); ++ ++ return ret; ++} ++ ++static void ucsi_get_src_pdos(struct ucsi_connector *con, int is_partner) ++{ ++ int ret; ++ ++ /* UCSI max payload means only getting at most 4 PDOs at a time */ ++ ret = ucsi_get_pdos(con, 1, con->src_pdos, 0, UCSI_MAX_PDOS); ++ if (ret < 0) + return; +- } ++ + con->num_pdos = ret / sizeof(u32); /* number of bytes to 32-bit PDOs */ +- if (ret == 0) +- dev_warn(ucsi->dev, "UCSI_GET_PDOS returned 0 bytes\n"); ++ if (con->num_pdos < UCSI_MAX_PDOS) ++ return; ++ ++ /* get the remaining PDOs, if any */ ++ ret = ucsi_get_pdos(con, 1, con->src_pdos, UCSI_MAX_PDOS, ++ PDO_MAX_OBJECTS - UCSI_MAX_PDOS); ++ if (ret < 0) ++ return; ++ ++ con->num_pdos += ret / sizeof(u32); + } + + static void ucsi_pwr_opmode_change(struct ucsi_connector *con) +@@ -522,7 +545,7 @@ static void ucsi_pwr_opmode_change(struct ucsi_connector *con) + case UCSI_CONSTAT_PWR_OPMODE_PD: + con->rdo = con->status.request_data_obj; + typec_set_pwr_opmode(con->port, TYPEC_PWR_MODE_PD); +- ucsi_get_pdos(con, 1); ++ ucsi_get_src_pdos(con, 1); + break; + case UCSI_CONSTAT_PWR_OPMODE_TYPEC1_5: + con->rdo = 0; +@@ -999,6 +1022,7 @@ static const struct typec_operations ucsi_ops = { + .pr_set = ucsi_pr_swap + }; + ++/* Caller must call fwnode_handle_put() after use */ + static struct fwnode_handle *ucsi_find_fwnode(struct ucsi_connector *con) + { + struct fwnode_handle *fwnode; +@@ -1033,7 +1057,7 @@ static int ucsi_register_port(struct ucsi *ucsi, int index) + command |= UCSI_CONNECTOR_NUMBER(con->num); + ret = ucsi_send_command(ucsi, command, &con->cap, sizeof(con->cap)); + if (ret < 0) +- 
goto out; ++ goto out_unlock; + + if (con->cap.op_mode & UCSI_CONCAP_OPMODE_DRP) + cap->data = TYPEC_PORT_DRD; +@@ -1151,6 +1175,8 @@ static int ucsi_register_port(struct ucsi *ucsi, int index) + trace_ucsi_register_port(con->num, &con->status); + + out: ++ fwnode_handle_put(cap->fwnode); ++out_unlock: + mutex_unlock(&con->lock); + return ret; + } +diff --git a/drivers/usb/typec/ucsi/ucsi.h b/drivers/usb/typec/ucsi/ucsi.h +index 3920e20a9e9ef..cee666790907e 100644 +--- a/drivers/usb/typec/ucsi/ucsi.h ++++ b/drivers/usb/typec/ucsi/ucsi.h +@@ -8,6 +8,7 @@ + #include + #include + #include ++#include + #include + + /* -------------------------------------------------------------------------- */ +@@ -134,7 +135,9 @@ void ucsi_connector_change(struct ucsi *ucsi, u8 num); + + /* GET_PDOS command bits */ + #define UCSI_GET_PDOS_PARTNER_PDO(_r_) ((u64)(_r_) << 23) ++#define UCSI_GET_PDOS_PDO_OFFSET(_r_) ((u64)(_r_) << 24) + #define UCSI_GET_PDOS_NUM_PDOS(_r_) ((u64)(_r_) << 32) ++#define UCSI_MAX_PDOS (4) + #define UCSI_GET_PDOS_SRC_PDOS ((u64)1 << 34) + + /* -------------------------------------------------------------------------- */ +@@ -302,7 +305,6 @@ struct ucsi { + + #define UCSI_MAX_SVID 5 + #define UCSI_MAX_ALTMODES (UCSI_MAX_SVID * 6) +-#define UCSI_MAX_PDOS (4) + + #define UCSI_TYPEC_VSAFE5V 5000 + #define UCSI_TYPEC_1_5_CURRENT 1500 +@@ -330,7 +332,7 @@ struct ucsi_connector { + struct power_supply *psy; + struct power_supply_desc psy_desc; + u32 rdo; +- u32 src_pdos[UCSI_MAX_PDOS]; ++ u32 src_pdos[PDO_MAX_OBJECTS]; + int num_pdos; + + struct usb_role_switch *usb_role_sw; +diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c +index f01d58c7a042e..a3e7be96527d7 100644 +--- a/drivers/xen/gntdev.c ++++ b/drivers/xen/gntdev.c +@@ -1017,8 +1017,10 @@ static int gntdev_mmap(struct file *flip, struct vm_area_struct *vma) + err = mmu_interval_notifier_insert_locked( + &map->notifier, vma->vm_mm, vma->vm_start, + vma->vm_end - vma->vm_start, &gntdev_mmu_ops); +- if (err) ++ if (err) { ++ map->vma = NULL; + goto out_unlock_put; ++ } + } + mutex_unlock(&priv->lock); + +diff --git a/drivers/xen/unpopulated-alloc.c b/drivers/xen/unpopulated-alloc.c +index e64e6befc63b7..87e6b7db892f5 100644 +--- a/drivers/xen/unpopulated-alloc.c ++++ b/drivers/xen/unpopulated-alloc.c +@@ -39,8 +39,10 @@ static int fill_list(unsigned int nr_pages) + } + + pgmap = kzalloc(sizeof(*pgmap), GFP_KERNEL); +- if (!pgmap) ++ if (!pgmap) { ++ ret = -ENOMEM; + goto err_pgmap; ++ } + + pgmap->type = MEMORY_DEVICE_GENERIC; + pgmap->range = (struct range) { +diff --git a/fs/9p/vfs_file.c b/fs/9p/vfs_file.c +index 649f04f112dc2..59c32c9b799fc 100644 +--- a/fs/9p/vfs_file.c ++++ b/fs/9p/vfs_file.c +@@ -86,8 +86,8 @@ int v9fs_file_open(struct inode *inode, struct file *file) + * to work. 
+ */ + writeback_fid = v9fs_writeback_fid(file_dentry(file)); +- if (IS_ERR(fid)) { +- err = PTR_ERR(fid); ++ if (IS_ERR(writeback_fid)) { ++ err = PTR_ERR(writeback_fid); + mutex_unlock(&v9inode->v_mutex); + goto out_error; + } +diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h +index 9ae776ab39676..29ef969035df2 100644 +--- a/fs/btrfs/ctree.h ++++ b/fs/btrfs/ctree.h +@@ -3110,7 +3110,7 @@ int btrfs_truncate_inode_items(struct btrfs_trans_handle *trans, + struct btrfs_inode *inode, u64 new_size, + u32 min_type); + +-int btrfs_start_delalloc_snapshot(struct btrfs_root *root); ++int btrfs_start_delalloc_snapshot(struct btrfs_root *root, bool in_reclaim_context); + int btrfs_start_delalloc_roots(struct btrfs_fs_info *fs_info, long nr, + bool in_reclaim_context); + int btrfs_set_extent_delalloc(struct btrfs_inode *inode, u64 start, u64 end, +diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c +index 36a3c973fda10..5b82050b871a7 100644 +--- a/fs/btrfs/extent-tree.c ++++ b/fs/btrfs/extent-tree.c +@@ -1340,12 +1340,16 @@ int btrfs_discard_extent(struct btrfs_fs_info *fs_info, u64 bytenr, + stripe = bbio->stripes; + for (i = 0; i < bbio->num_stripes; i++, stripe++) { + u64 bytes; ++ struct btrfs_device *device = stripe->dev; + +- if (!stripe->dev->bdev) { ++ if (!device->bdev) { + ASSERT(btrfs_test_opt(fs_info, DEGRADED)); + continue; + } + ++ if (!test_bit(BTRFS_DEV_STATE_WRITEABLE, &device->dev_state)) ++ continue; ++ + ret = do_discard_extent(stripe, &bytes); + if (!ret) { + discarded_bytes += bytes; +diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c +index 6eb72c9b15a7d..abee4b62741da 100644 +--- a/fs/btrfs/file.c ++++ b/fs/btrfs/file.c +@@ -2067,6 +2067,30 @@ static int start_ordered_ops(struct inode *inode, loff_t start, loff_t end) + return ret; + } + ++static inline bool skip_inode_logging(const struct btrfs_log_ctx *ctx) ++{ ++ struct btrfs_inode *inode = BTRFS_I(ctx->inode); ++ struct btrfs_fs_info *fs_info = inode->root->fs_info; ++ ++ if (btrfs_inode_in_log(inode, fs_info->generation) && ++ list_empty(&ctx->ordered_extents)) ++ return true; ++ ++ /* ++ * If we are doing a fast fsync we can not bail out if the inode's ++ * last_trans is <= then the last committed transaction, because we only ++ * update the last_trans of the inode during ordered extent completion, ++ * and for a fast fsync we don't wait for that, we only wait for the ++ * writeback to complete. ++ */ ++ if (inode->last_trans <= fs_info->last_trans_committed && ++ (test_bit(BTRFS_INODE_NEEDS_FULL_SYNC, &inode->runtime_flags) || ++ list_empty(&ctx->ordered_extents))) ++ return true; ++ ++ return false; ++} ++ + /* + * fsync call for both files and directories. This logs the inode into + * the tree log instead of forcing full commits whenever possible. +@@ -2185,17 +2209,8 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync) + + atomic_inc(&root->log_batch); + +- /* +- * If we are doing a fast fsync we can not bail out if the inode's +- * last_trans is <= then the last committed transaction, because we only +- * update the last_trans of the inode during ordered extent completion, +- * and for a fast fsync we don't wait for that, we only wait for the +- * writeback to complete. 
+- */ + smp_mb(); +- if (btrfs_inode_in_log(BTRFS_I(inode), fs_info->generation) || +- (BTRFS_I(inode)->last_trans <= fs_info->last_trans_committed && +- (full_sync || list_empty(&ctx.ordered_extents)))) { ++ if (skip_inode_logging(&ctx)) { + /* + * We've had everything committed since the last time we were + * modified so clear this flag in case it was set for whatever +diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c +index 9988decd5717b..ac9c2691376d6 100644 +--- a/fs/btrfs/free-space-cache.c ++++ b/fs/btrfs/free-space-cache.c +@@ -3942,7 +3942,7 @@ static int cleanup_free_space_cache_v1(struct btrfs_fs_info *fs_info, + { + struct btrfs_block_group *block_group; + struct rb_node *node; +- int ret; ++ int ret = 0; + + btrfs_info(fs_info, "cleaning free space cache v1"); + +diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c +index a922c3bcb65e1..8c4d2eaa5d58b 100644 +--- a/fs/btrfs/inode.c ++++ b/fs/btrfs/inode.c +@@ -9672,7 +9672,7 @@ out: + return ret; + } + +-int btrfs_start_delalloc_snapshot(struct btrfs_root *root) ++int btrfs_start_delalloc_snapshot(struct btrfs_root *root, bool in_reclaim_context) + { + struct writeback_control wbc = { + .nr_to_write = LONG_MAX, +@@ -9685,7 +9685,7 @@ int btrfs_start_delalloc_snapshot(struct btrfs_root *root) + if (test_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state)) + return -EROFS; + +- return start_delalloc_inodes(root, &wbc, true, false); ++ return start_delalloc_inodes(root, &wbc, true, in_reclaim_context); + } + + int btrfs_start_delalloc_roots(struct btrfs_fs_info *fs_info, long nr, +diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c +index 5efd6c7fe6206..f9ecb6c0bf15e 100644 +--- a/fs/btrfs/ioctl.c ++++ b/fs/btrfs/ioctl.c +@@ -1046,7 +1046,7 @@ static noinline int btrfs_mksnapshot(const struct path *parent, + */ + btrfs_drew_read_lock(&root->snapshot_lock); + +- ret = btrfs_start_delalloc_snapshot(root); ++ ret = btrfs_start_delalloc_snapshot(root, false); + if (ret) + goto out; + +diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c +index 985a215584370..043e3fa961e03 100644 +--- a/fs/btrfs/ordered-data.c ++++ b/fs/btrfs/ordered-data.c +@@ -995,7 +995,7 @@ int btrfs_split_ordered_extent(struct btrfs_ordered_extent *ordered, u64 pre, + + if (pre) + ret = clone_ordered_extent(ordered, 0, pre); +- if (post) ++ if (ret == 0 && post) + ret = clone_ordered_extent(ordered, pre + ordered->disk_num_bytes, + post); + +diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c +index f0b9ef13153ad..2991287a71a87 100644 +--- a/fs/btrfs/qgroup.c ++++ b/fs/btrfs/qgroup.c +@@ -3579,7 +3579,7 @@ static int try_flush_qgroup(struct btrfs_root *root) + return 0; + } + +- ret = btrfs_start_delalloc_snapshot(root); ++ ret = btrfs_start_delalloc_snapshot(root, true); + if (ret < 0) + goto out; + btrfs_wait_ordered_extents(root, U64_MAX, 0, (u64)-1); +diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c +index 8f323859156b5..8ae8f1732fd25 100644 +--- a/fs/btrfs/send.c ++++ b/fs/btrfs/send.c +@@ -7139,7 +7139,7 @@ static int flush_delalloc_roots(struct send_ctx *sctx) + int i; + + if (root) { +- ret = btrfs_start_delalloc_snapshot(root); ++ ret = btrfs_start_delalloc_snapshot(root, false); + if (ret) + return ret; + btrfs_wait_ordered_extents(root, U64_MAX, 0, U64_MAX); +@@ -7147,7 +7147,7 @@ static int flush_delalloc_roots(struct send_ctx *sctx) + + for (i = 0; i < sctx->clone_roots_cnt; i++) { + root = sctx->clone_roots[i].root; +- ret = btrfs_start_delalloc_snapshot(root); ++ ret = btrfs_start_delalloc_snapshot(root, false); + if (ret) + return 
ret; + btrfs_wait_ordered_extents(root, U64_MAX, 0, U64_MAX); +diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c +index 72c4b66ed5163..47e76e79b3d6b 100644 +--- a/fs/btrfs/tree-log.c ++++ b/fs/btrfs/tree-log.c +@@ -6060,7 +6060,8 @@ static int btrfs_log_inode_parent(struct btrfs_trans_handle *trans, + * (since logging them is pointless, a link count of 0 means they + * will never be accessible). + */ +- if (btrfs_inode_in_log(inode, trans->transid) || ++ if ((btrfs_inode_in_log(inode, trans->transid) && ++ list_empty(&ctx->ordered_extents)) || + inode->vfs_inode.i_nlink == 0) { + ret = BTRFS_NO_LOG_SYNC; + goto end_no_trans; +diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c +index 70b23a0d03b10..304ce64c70a44 100644 +--- a/fs/btrfs/zoned.c ++++ b/fs/btrfs/zoned.c +@@ -1126,6 +1126,11 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new) + goto out; + } + ++ if (zone.type == BLK_ZONE_TYPE_CONVENTIONAL) { ++ ret = -EIO; ++ goto out; ++ } ++ + switch (zone.cond) { + case BLK_ZONE_COND_OFFLINE: + case BLK_ZONE_COND_READONLY: +diff --git a/fs/ceph/export.c b/fs/ceph/export.c +index e088843a7734c..baa6368bece59 100644 +--- a/fs/ceph/export.c ++++ b/fs/ceph/export.c +@@ -178,8 +178,10 @@ static struct dentry *__fh_to_dentry(struct super_block *sb, u64 ino) + return ERR_CAST(inode); + /* We need LINK caps to reliably check i_nlink */ + err = ceph_do_getattr(inode, CEPH_CAP_LINK_SHARED, false); +- if (err) ++ if (err) { ++ iput(inode); + return ERR_PTR(err); ++ } + /* -ESTALE if inode as been unlinked and no file is open */ + if ((inode->i_nlink == 0) && (atomic_read(&inode->i_count) == 1)) { + iput(inode); +diff --git a/fs/dax.c b/fs/dax.c +index b3d27fdc67752..df5485b4bddf1 100644 +--- a/fs/dax.c ++++ b/fs/dax.c +@@ -144,6 +144,16 @@ struct wait_exceptional_entry_queue { + struct exceptional_entry_key key; + }; + ++/** ++ * enum dax_wake_mode: waitqueue wakeup behaviour ++ * @WAKE_ALL: wake all waiters in the waitqueue ++ * @WAKE_NEXT: wake only the first waiter in the waitqueue ++ */ ++enum dax_wake_mode { ++ WAKE_ALL, ++ WAKE_NEXT, ++}; ++ + static wait_queue_head_t *dax_entry_waitqueue(struct xa_state *xas, + void *entry, struct exceptional_entry_key *key) + { +@@ -182,7 +192,8 @@ static int wake_exceptional_entry_func(wait_queue_entry_t *wait, + * The important information it's conveying is whether the entry at + * this index used to be a PMD entry. + */ +-static void dax_wake_entry(struct xa_state *xas, void *entry, bool wake_all) ++static void dax_wake_entry(struct xa_state *xas, void *entry, ++ enum dax_wake_mode mode) + { + struct exceptional_entry_key key; + wait_queue_head_t *wq; +@@ -196,7 +207,7 @@ static void dax_wake_entry(struct xa_state *xas, void *entry, bool wake_all) + * must be in the waitqueue and the following check will see them. + */ + if (waitqueue_active(wq)) +- __wake_up(wq, TASK_NORMAL, wake_all ? 0 : 1, &key); ++ __wake_up(wq, TASK_NORMAL, mode == WAKE_ALL ? 
0 : 1, &key); + } + + /* +@@ -264,11 +275,11 @@ static void wait_entry_unlocked(struct xa_state *xas, void *entry) + finish_wait(wq, &ewait.wait); + } + +-static void put_unlocked_entry(struct xa_state *xas, void *entry) ++static void put_unlocked_entry(struct xa_state *xas, void *entry, ++ enum dax_wake_mode mode) + { +- /* If we were the only waiter woken, wake the next one */ + if (entry && !dax_is_conflict(entry)) +- dax_wake_entry(xas, entry, false); ++ dax_wake_entry(xas, entry, mode); + } + + /* +@@ -286,7 +297,7 @@ static void dax_unlock_entry(struct xa_state *xas, void *entry) + old = xas_store(xas, entry); + xas_unlock_irq(xas); + BUG_ON(!dax_is_locked(old)); +- dax_wake_entry(xas, entry, false); ++ dax_wake_entry(xas, entry, WAKE_NEXT); + } + + /* +@@ -524,7 +535,7 @@ retry: + + dax_disassociate_entry(entry, mapping, false); + xas_store(xas, NULL); /* undo the PMD join */ +- dax_wake_entry(xas, entry, true); ++ dax_wake_entry(xas, entry, WAKE_ALL); + mapping->nrexceptional--; + entry = NULL; + xas_set(xas, index); +@@ -622,7 +633,7 @@ struct page *dax_layout_busy_page_range(struct address_space *mapping, + entry = get_unlocked_entry(&xas, 0); + if (entry) + page = dax_busy_page(entry); +- put_unlocked_entry(&xas, entry); ++ put_unlocked_entry(&xas, entry, WAKE_NEXT); + if (page) + break; + if (++scanned % XA_CHECK_SCHED) +@@ -664,7 +675,7 @@ static int __dax_invalidate_entry(struct address_space *mapping, + mapping->nrexceptional--; + ret = 1; + out: +- put_unlocked_entry(&xas, entry); ++ put_unlocked_entry(&xas, entry, WAKE_ALL); + xas_unlock_irq(&xas); + return ret; + } +@@ -937,13 +948,13 @@ static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev, + xas_lock_irq(xas); + xas_store(xas, entry); + xas_clear_mark(xas, PAGECACHE_TAG_DIRTY); +- dax_wake_entry(xas, entry, false); ++ dax_wake_entry(xas, entry, WAKE_NEXT); + + trace_dax_writeback_one(mapping->host, index, count); + return ret; + + put_unlocked: +- put_unlocked_entry(xas, entry); ++ put_unlocked_entry(xas, entry, WAKE_NEXT); + return ret; + } + +@@ -1684,7 +1695,7 @@ dax_insert_pfn_mkwrite(struct vm_fault *vmf, pfn_t pfn, unsigned int order) + /* Did we race with someone splitting entry or so? 
*/ + if (!entry || dax_is_conflict(entry) || + (order == 0 && !dax_is_pte_entry(entry))) { +- put_unlocked_entry(&xas, entry); ++ put_unlocked_entry(&xas, entry, WAKE_NEXT); + xas_unlock_irq(&xas); + trace_dax_insert_pfn_mkwrite_no_entry(mapping->host, vmf, + VM_FAULT_NOPAGE); +diff --git a/fs/debugfs/inode.c b/fs/debugfs/inode.c +index 22e86ae4dd5a8..1d252164d97b6 100644 +--- a/fs/debugfs/inode.c ++++ b/fs/debugfs/inode.c +@@ -35,7 +35,7 @@ + static struct vfsmount *debugfs_mount; + static int debugfs_mount_count; + static bool debugfs_registered; +-static unsigned int debugfs_allow = DEFAULT_DEBUGFS_ALLOW_BITS; ++static unsigned int debugfs_allow __ro_after_init = DEFAULT_DEBUGFS_ALLOW_BITS; + + /* + * Don't allow access attributes to be changed whilst the kernel is locked down +diff --git a/fs/dlm/config.c b/fs/dlm/config.c +index 49c5f9407098e..88d95d96e36c5 100644 +--- a/fs/dlm/config.c ++++ b/fs/dlm/config.c +@@ -125,7 +125,7 @@ static ssize_t cluster_cluster_name_store(struct config_item *item, + CONFIGFS_ATTR(cluster_, cluster_name); + + static ssize_t cluster_set(struct dlm_cluster *cl, unsigned int *cl_field, +- int *info_field, bool (*check_cb)(unsigned int x), ++ int *info_field, int (*check_cb)(unsigned int x), + const char *buf, size_t len) + { + unsigned int x; +@@ -137,8 +137,11 @@ static ssize_t cluster_set(struct dlm_cluster *cl, unsigned int *cl_field, + if (rc) + return rc; + +- if (check_cb && check_cb(x)) +- return -EINVAL; ++ if (check_cb) { ++ rc = check_cb(x); ++ if (rc) ++ return rc; ++ } + + *cl_field = x; + *info_field = x; +@@ -161,17 +164,53 @@ static ssize_t cluster_##name##_show(struct config_item *item, char *buf) \ + } \ + CONFIGFS_ATTR(cluster_, name); + +-static bool dlm_check_zero(unsigned int x) ++static int dlm_check_protocol_and_dlm_running(unsigned int x) ++{ ++ switch (x) { ++ case 0: ++ /* TCP */ ++ break; ++ case 1: ++ /* SCTP */ ++ break; ++ default: ++ return -EINVAL; ++ } ++ ++ if (dlm_allow_conn) ++ return -EBUSY; ++ ++ return 0; ++} ++ ++static int dlm_check_zero_and_dlm_running(unsigned int x) ++{ ++ if (!x) ++ return -EINVAL; ++ ++ if (dlm_allow_conn) ++ return -EBUSY; ++ ++ return 0; ++} ++ ++static int dlm_check_zero(unsigned int x) + { +- return !x; ++ if (!x) ++ return -EINVAL; ++ ++ return 0; + } + +-static bool dlm_check_buffer_size(unsigned int x) ++static int dlm_check_buffer_size(unsigned int x) + { +- return (x < DEFAULT_BUFFER_SIZE); ++ if (x < DEFAULT_BUFFER_SIZE) ++ return -EINVAL; ++ ++ return 0; + } + +-CLUSTER_ATTR(tcp_port, dlm_check_zero); ++CLUSTER_ATTR(tcp_port, dlm_check_zero_and_dlm_running); + CLUSTER_ATTR(buffer_size, dlm_check_buffer_size); + CLUSTER_ATTR(rsbtbl_size, dlm_check_zero); + CLUSTER_ATTR(recover_timer, dlm_check_zero); +@@ -179,7 +218,7 @@ CLUSTER_ATTR(toss_secs, dlm_check_zero); + CLUSTER_ATTR(scan_secs, dlm_check_zero); + CLUSTER_ATTR(log_debug, NULL); + CLUSTER_ATTR(log_info, NULL); +-CLUSTER_ATTR(protocol, NULL); ++CLUSTER_ATTR(protocol, dlm_check_protocol_and_dlm_running); + CLUSTER_ATTR(mark, NULL); + CLUSTER_ATTR(timewarn_cs, dlm_check_zero); + CLUSTER_ATTR(waitwarn_us, NULL); +@@ -688,6 +727,7 @@ static ssize_t comm_mark_show(struct config_item *item, char *buf) + static ssize_t comm_mark_store(struct config_item *item, const char *buf, + size_t len) + { ++ struct dlm_comm *comm; + unsigned int mark; + int rc; + +@@ -695,7 +735,15 @@ static ssize_t comm_mark_store(struct config_item *item, const char *buf, + if (rc) + return rc; + +- config_item_to_comm(item)->mark = mark; ++ if (mark == 0) ++ 
mark = dlm_config.ci_mark; ++ ++ comm = config_item_to_comm(item); ++ rc = dlm_lowcomms_nodes_set_mark(comm->nodeid, mark); ++ if (rc) ++ return rc; ++ ++ comm->mark = mark; + return len; + } + +@@ -870,24 +918,6 @@ int dlm_comm_seq(int nodeid, uint32_t *seq) + return 0; + } + +-void dlm_comm_mark(int nodeid, unsigned int *mark) +-{ +- struct dlm_comm *cm; +- +- cm = get_comm(nodeid); +- if (!cm) { +- *mark = dlm_config.ci_mark; +- return; +- } +- +- if (cm->mark) +- *mark = cm->mark; +- else +- *mark = dlm_config.ci_mark; +- +- put_comm(cm); +-} +- + int dlm_our_nodeid(void) + { + return local_comm ? local_comm->nodeid : 0; +diff --git a/fs/dlm/config.h b/fs/dlm/config.h +index c210250a25818..d2cd4bd20313f 100644 +--- a/fs/dlm/config.h ++++ b/fs/dlm/config.h +@@ -48,7 +48,6 @@ void dlm_config_exit(void); + int dlm_config_nodes(char *lsname, struct dlm_config_node **nodes_out, + int *count_out); + int dlm_comm_seq(int nodeid, uint32_t *seq); +-void dlm_comm_mark(int nodeid, unsigned int *mark); + int dlm_our_nodeid(void); + int dlm_our_addr(struct sockaddr_storage *addr, int num); + +diff --git a/fs/dlm/debug_fs.c b/fs/dlm/debug_fs.c +index d6bbccb0ed152..d5bd990bcab8b 100644 +--- a/fs/dlm/debug_fs.c ++++ b/fs/dlm/debug_fs.c +@@ -542,6 +542,7 @@ static void *table_seq_next(struct seq_file *seq, void *iter_ptr, loff_t *pos) + + if (bucket >= ls->ls_rsbtbl_size) { + kfree(ri); ++ ++*pos; + return NULL; + } + tree = toss ? &ls->ls_rsbtbl[bucket].toss : &ls->ls_rsbtbl[bucket].keep; +diff --git a/fs/dlm/lockspace.c b/fs/dlm/lockspace.c +index 561dcad08ad6e..c14cf2b7faab3 100644 +--- a/fs/dlm/lockspace.c ++++ b/fs/dlm/lockspace.c +@@ -404,12 +404,6 @@ static int threads_start(void) + return error; + } + +-static void threads_stop(void) +-{ +- dlm_scand_stop(); +- dlm_lowcomms_stop(); +-} +- + static int new_lockspace(const char *name, const char *cluster, + uint32_t flags, int lvblen, + const struct dlm_lockspace_ops *ops, void *ops_arg, +@@ -702,8 +696,11 @@ int dlm_new_lockspace(const char *name, const char *cluster, + ls_count++; + if (error > 0) + error = 0; +- if (!ls_count) +- threads_stop(); ++ if (!ls_count) { ++ dlm_scand_stop(); ++ dlm_lowcomms_shutdown(); ++ dlm_lowcomms_stop(); ++ } + out: + mutex_unlock(&ls_lock); + return error; +@@ -788,6 +785,11 @@ static int release_lockspace(struct dlm_ls *ls, int force) + + dlm_recoverd_stop(ls); + ++ if (ls_count == 1) { ++ dlm_scand_stop(); ++ dlm_lowcomms_shutdown(); ++ } ++ + dlm_callback_stop(ls); + + remove_lockspace(ls); +@@ -880,7 +882,7 @@ int dlm_release_lockspace(void *lockspace, int force) + if (!error) + ls_count--; + if (!ls_count) +- threads_stop(); ++ dlm_lowcomms_stop(); + mutex_unlock(&ls_lock); + + return error; +diff --git a/fs/dlm/lowcomms.c b/fs/dlm/lowcomms.c +index f7d2c52791f8f..45c2fdaf34c4d 100644 +--- a/fs/dlm/lowcomms.c ++++ b/fs/dlm/lowcomms.c +@@ -116,6 +116,7 @@ struct writequeue_entry { + struct dlm_node_addr { + struct list_head list; + int nodeid; ++ int mark; + int addr_count; + int curr_addr_index; + struct sockaddr_storage *addr[DLM_MAX_ADDR_COUNT]; +@@ -134,7 +135,7 @@ static DEFINE_SPINLOCK(dlm_node_addrs_spin); + static struct listen_connection listen_con; + static struct sockaddr_storage *dlm_local_addr[DLM_MAX_ADDR_COUNT]; + static int dlm_local_count; +-static int dlm_allow_conn; ++int dlm_allow_conn; + + /* Work queues */ + static struct workqueue_struct *recv_workqueue; +@@ -303,7 +304,8 @@ static int addr_compare(const struct sockaddr_storage *x, + } + + static int nodeid_to_addr(int nodeid, 
struct sockaddr_storage *sas_out, +- struct sockaddr *sa_out, bool try_new_addr) ++ struct sockaddr *sa_out, bool try_new_addr, ++ unsigned int *mark) + { + struct sockaddr_storage sas; + struct dlm_node_addr *na; +@@ -331,6 +333,8 @@ static int nodeid_to_addr(int nodeid, struct sockaddr_storage *sas_out, + if (!na->addr_count) + return -ENOENT; + ++ *mark = na->mark; ++ + if (sas_out) + memcpy(sas_out, &sas, sizeof(struct sockaddr_storage)); + +@@ -350,7 +354,8 @@ static int nodeid_to_addr(int nodeid, struct sockaddr_storage *sas_out, + return 0; + } + +-static int addr_to_nodeid(struct sockaddr_storage *addr, int *nodeid) ++static int addr_to_nodeid(struct sockaddr_storage *addr, int *nodeid, ++ unsigned int *mark) + { + struct dlm_node_addr *na; + int rv = -EEXIST; +@@ -364,6 +369,7 @@ static int addr_to_nodeid(struct sockaddr_storage *addr, int *nodeid) + for (addr_i = 0; addr_i < na->addr_count; addr_i++) { + if (addr_compare(na->addr[addr_i], addr)) { + *nodeid = na->nodeid; ++ *mark = na->mark; + rv = 0; + goto unlock; + } +@@ -412,6 +418,7 @@ int dlm_lowcomms_addr(int nodeid, struct sockaddr_storage *addr, int len) + new_node->nodeid = nodeid; + new_node->addr[0] = new_addr; + new_node->addr_count = 1; ++ new_node->mark = dlm_config.ci_mark; + list_add(&new_node->list, &dlm_node_addrs); + spin_unlock(&dlm_node_addrs_spin); + return 0; +@@ -519,6 +526,23 @@ int dlm_lowcomms_connect_node(int nodeid) + return 0; + } + ++int dlm_lowcomms_nodes_set_mark(int nodeid, unsigned int mark) ++{ ++ struct dlm_node_addr *na; ++ ++ spin_lock(&dlm_node_addrs_spin); ++ na = find_node_addr(nodeid); ++ if (!na) { ++ spin_unlock(&dlm_node_addrs_spin); ++ return -ENOENT; ++ } ++ ++ na->mark = mark; ++ spin_unlock(&dlm_node_addrs_spin); ++ ++ return 0; ++} ++ + static void lowcomms_error_report(struct sock *sk) + { + struct connection *con; +@@ -685,10 +709,7 @@ static void shutdown_connection(struct connection *con) + { + int ret; + +- if (cancel_work_sync(&con->swork)) { +- log_print("canceled swork for node %d", con->nodeid); +- clear_bit(CF_WRITE_PENDING, &con->flags); +- } ++ flush_work(&con->swork); + + mutex_lock(&con->sock_mutex); + /* nothing to shutdown */ +@@ -867,7 +888,7 @@ static int accept_from_sock(struct listen_connection *con) + + /* Get the new node's NODEID */ + make_sockaddr(&peeraddr, 0, &len); +- if (addr_to_nodeid(&peeraddr, &nodeid)) { ++ if (addr_to_nodeid(&peeraddr, &nodeid, &mark)) { + unsigned char *b=(unsigned char *)&peeraddr; + log_print("connect from non cluster node"); + print_hex_dump_bytes("ss: ", DUMP_PREFIX_NONE, +@@ -876,9 +897,6 @@ static int accept_from_sock(struct listen_connection *con) + return -1; + } + +- dlm_comm_mark(nodeid, &mark); +- sock_set_mark(newsock->sk, mark); +- + log_print("got connection from %d", nodeid); + + /* Check to see if we already have a connection to this node. 
This +@@ -892,6 +910,8 @@ static int accept_from_sock(struct listen_connection *con) + goto accept_err; + } + ++ sock_set_mark(newsock->sk, mark); ++ + mutex_lock(&newcon->sock_mutex); + if (newcon->sock) { + struct connection *othercon = newcon->othercon; +@@ -1016,8 +1036,6 @@ static void sctp_connect_to_sock(struct connection *con) + struct socket *sock; + unsigned int mark; + +- dlm_comm_mark(con->nodeid, &mark); +- + mutex_lock(&con->sock_mutex); + + /* Some odd races can cause double-connects, ignore them */ +@@ -1030,7 +1048,7 @@ static void sctp_connect_to_sock(struct connection *con) + } + + memset(&daddr, 0, sizeof(daddr)); +- result = nodeid_to_addr(con->nodeid, &daddr, NULL, true); ++ result = nodeid_to_addr(con->nodeid, &daddr, NULL, true, &mark); + if (result < 0) { + log_print("no address for nodeid %d", con->nodeid); + goto out; +@@ -1105,13 +1123,11 @@ out: + static void tcp_connect_to_sock(struct connection *con) + { + struct sockaddr_storage saddr, src_addr; ++ unsigned int mark; + int addr_len; + struct socket *sock = NULL; +- unsigned int mark; + int result; + +- dlm_comm_mark(con->nodeid, &mark); +- + mutex_lock(&con->sock_mutex); + if (con->retries++ > MAX_CONNECT_RETRIES) + goto out; +@@ -1126,15 +1142,15 @@ static void tcp_connect_to_sock(struct connection *con) + if (result < 0) + goto out_err; + +- sock_set_mark(sock->sk, mark); +- + memset(&saddr, 0, sizeof(saddr)); +- result = nodeid_to_addr(con->nodeid, &saddr, NULL, false); ++ result = nodeid_to_addr(con->nodeid, &saddr, NULL, false, &mark); + if (result < 0) { + log_print("no address for nodeid %d", con->nodeid); + goto out_err; + } + ++ sock_set_mark(sock->sk, mark); ++ + add_sock(sock, con); + + /* Bind to our cluster-known address connecting to avoid +@@ -1356,9 +1372,11 @@ void *dlm_lowcomms_get_buffer(int nodeid, int len, gfp_t allocation, char **ppc) + struct writequeue_entry *e; + int offset = 0; + +- if (len > LOWCOMMS_MAX_TX_BUFFER_LEN) { +- BUILD_BUG_ON(PAGE_SIZE < LOWCOMMS_MAX_TX_BUFFER_LEN); ++ if (len > DEFAULT_BUFFER_SIZE || ++ len < sizeof(struct dlm_header)) { ++ BUILD_BUG_ON(PAGE_SIZE < DEFAULT_BUFFER_SIZE); + log_print("failed to allocate a buffer of size %d", len); ++ WARN_ON(1); + return NULL; + } + +@@ -1590,6 +1608,29 @@ static int work_start(void) + return 0; + } + ++static void shutdown_conn(struct connection *con) ++{ ++ if (con->shutdown_action) ++ con->shutdown_action(con); ++} ++ ++void dlm_lowcomms_shutdown(void) ++{ ++ /* Set all the flags to prevent any ++ * socket activity. ++ */ ++ dlm_allow_conn = 0; ++ ++ if (recv_workqueue) ++ flush_workqueue(recv_workqueue); ++ if (send_workqueue) ++ flush_workqueue(send_workqueue); ++ ++ dlm_close_sock(&listen_con.sock); ++ ++ foreach_conn(shutdown_conn); ++} ++ + static void _stop_conn(struct connection *con, bool and_other) + { + mutex_lock(&con->sock_mutex); +@@ -1611,12 +1652,6 @@ static void stop_conn(struct connection *con) + _stop_conn(con, true); + } + +-static void shutdown_conn(struct connection *con) +-{ +- if (con->shutdown_action) +- con->shutdown_action(con); +-} +- + static void connection_release(struct rcu_head *rcu) + { + struct connection *con = container_of(rcu, struct connection, rcu); +@@ -1673,19 +1708,6 @@ static void work_flush(void) + + void dlm_lowcomms_stop(void) + { +- /* Set all the flags to prevent any +- socket activity. 
+- */ +- dlm_allow_conn = 0; +- +- if (recv_workqueue) +- flush_workqueue(recv_workqueue); +- if (send_workqueue) +- flush_workqueue(send_workqueue); +- +- dlm_close_sock(&listen_con.sock); +- +- foreach_conn(shutdown_conn); + work_flush(); + foreach_conn(free_conn); + work_stop(); +diff --git a/fs/dlm/lowcomms.h b/fs/dlm/lowcomms.h +index 0918f9376489f..48bbc4e187619 100644 +--- a/fs/dlm/lowcomms.h ++++ b/fs/dlm/lowcomms.h +@@ -14,13 +14,18 @@ + + #define LOWCOMMS_MAX_TX_BUFFER_LEN 4096 + ++/* switch to check if dlm is running */ ++extern int dlm_allow_conn; ++ + int dlm_lowcomms_start(void); ++void dlm_lowcomms_shutdown(void); + void dlm_lowcomms_stop(void); + void dlm_lowcomms_exit(void); + int dlm_lowcomms_close(int nodeid); + void *dlm_lowcomms_get_buffer(int nodeid, int len, gfp_t allocation, char **ppc); + void dlm_lowcomms_commit_buffer(void *mh); + int dlm_lowcomms_connect_node(int nodeid); ++int dlm_lowcomms_nodes_set_mark(int nodeid, unsigned int mark); + int dlm_lowcomms_addr(int nodeid, struct sockaddr_storage *addr, int len); + + #endif /* __LOWCOMMS_DOT_H__ */ +diff --git a/fs/dlm/midcomms.c b/fs/dlm/midcomms.c +index fde3a6afe4bea..0bedfa8606a26 100644 +--- a/fs/dlm/midcomms.c ++++ b/fs/dlm/midcomms.c +@@ -49,9 +49,10 @@ int dlm_process_incoming_buffer(int nodeid, unsigned char *buf, int len) + * cannot deliver this message to upper layers + */ + msglen = get_unaligned_le16(&hd->h_length); +- if (msglen > DEFAULT_BUFFER_SIZE) { +- log_print("received invalid length header: %u, will abort message parsing", +- msglen); ++ if (msglen > DEFAULT_BUFFER_SIZE || ++ msglen < sizeof(struct dlm_header)) { ++ log_print("received invalid length header: %u from node %d, will abort message parsing", ++ msglen, nodeid); + return -EBADMSG; + } + +diff --git a/fs/ext4/fast_commit.c b/fs/ext4/fast_commit.c +index 312273ed8a9fe..eda14f630def4 100644 +--- a/fs/ext4/fast_commit.c ++++ b/fs/ext4/fast_commit.c +@@ -1736,7 +1736,7 @@ static int ext4_fc_replay_add_range(struct super_block *sb, + } + + /* Range is mapped and needs a state change */ +- jbd_debug(1, "Converting from %d to %d %lld", ++ jbd_debug(1, "Converting from %ld to %d %lld", + map.m_flags & EXT4_MAP_UNWRITTEN, + ext4_ext_is_unwritten(ex), map.m_pblk); + ret = ext4_ext_replay_update_ex(inode, cur, map.m_len, +diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c +index 77fa342de38f1..66dd525a85542 100644 +--- a/fs/f2fs/compress.c ++++ b/fs/f2fs/compress.c +@@ -123,19 +123,6 @@ static void f2fs_unlock_rpages(struct compress_ctx *cc, int len) + f2fs_drop_rpages(cc, len, true); + } + +-static void f2fs_put_rpages_mapping(struct address_space *mapping, +- pgoff_t start, int len) +-{ +- int i; +- +- for (i = 0; i < len; i++) { +- struct page *page = find_get_page(mapping, start + i); +- +- put_page(page); +- put_page(page); +- } +-} +- + static void f2fs_put_rpages_wbc(struct compress_ctx *cc, + struct writeback_control *wbc, bool redirty, int unlock) + { +@@ -164,13 +151,14 @@ int f2fs_init_compress_ctx(struct compress_ctx *cc) + return cc->rpages ? 
0 : -ENOMEM; + } + +-void f2fs_destroy_compress_ctx(struct compress_ctx *cc) ++void f2fs_destroy_compress_ctx(struct compress_ctx *cc, bool reuse) + { + page_array_free(cc->inode, cc->rpages, cc->cluster_size); + cc->rpages = NULL; + cc->nr_rpages = 0; + cc->nr_cpages = 0; +- cc->cluster_idx = NULL_CLUSTER; ++ if (!reuse) ++ cc->cluster_idx = NULL_CLUSTER; + } + + void f2fs_compress_ctx_add_page(struct compress_ctx *cc, struct page *page) +@@ -1048,7 +1036,7 @@ retry: + } + + if (PageUptodate(page)) +- unlock_page(page); ++ f2fs_put_page(page, 1); + else + f2fs_compress_ctx_add_page(cc, page); + } +@@ -1058,33 +1046,35 @@ retry: + + ret = f2fs_read_multi_pages(cc, &bio, cc->cluster_size, + &last_block_in_bio, false, true); +- f2fs_destroy_compress_ctx(cc); ++ f2fs_put_rpages(cc); ++ f2fs_destroy_compress_ctx(cc, true); + if (ret) +- goto release_pages; ++ goto out; + if (bio) + f2fs_submit_bio(sbi, bio, DATA); + + ret = f2fs_init_compress_ctx(cc); + if (ret) +- goto release_pages; ++ goto out; + } + + for (i = 0; i < cc->cluster_size; i++) { + f2fs_bug_on(sbi, cc->rpages[i]); + + page = find_lock_page(mapping, start_idx + i); +- f2fs_bug_on(sbi, !page); ++ if (!page) { ++ /* page can be truncated */ ++ goto release_and_retry; ++ } + + f2fs_wait_on_page_writeback(page, DATA, true, true); +- + f2fs_compress_ctx_add_page(cc, page); +- f2fs_put_page(page, 0); + + if (!PageUptodate(page)) { ++release_and_retry: ++ f2fs_put_rpages(cc); + f2fs_unlock_rpages(cc, i + 1); +- f2fs_put_rpages_mapping(mapping, start_idx, +- cc->cluster_size); +- f2fs_destroy_compress_ctx(cc); ++ f2fs_destroy_compress_ctx(cc, true); + goto retry; + } + } +@@ -1115,10 +1105,10 @@ retry: + } + + unlock_pages: ++ f2fs_put_rpages(cc); + f2fs_unlock_rpages(cc, i); +-release_pages: +- f2fs_put_rpages_mapping(mapping, start_idx, i); +- f2fs_destroy_compress_ctx(cc); ++ f2fs_destroy_compress_ctx(cc, true); ++out: + return ret; + } + +@@ -1153,7 +1143,7 @@ bool f2fs_compress_write_end(struct inode *inode, void *fsdata, + set_cluster_dirty(&cc); + + f2fs_put_rpages_wbc(&cc, NULL, false, 1); +- f2fs_destroy_compress_ctx(&cc); ++ f2fs_destroy_compress_ctx(&cc, false); + + return first_index; + } +@@ -1372,7 +1362,7 @@ unlock_continue: + f2fs_put_rpages(cc); + page_array_free(cc->inode, cc->cpages, cc->nr_cpages); + cc->cpages = NULL; +- f2fs_destroy_compress_ctx(cc); ++ f2fs_destroy_compress_ctx(cc, false); + return 0; + + out_destroy_crypt: +@@ -1383,7 +1373,8 @@ out_destroy_crypt: + for (i = 0; i < cc->nr_cpages; i++) { + if (!cc->cpages[i]) + continue; +- f2fs_put_page(cc->cpages[i], 1); ++ f2fs_compress_free_page(cc->cpages[i]); ++ cc->cpages[i] = NULL; + } + out_put_cic: + kmem_cache_free(cic_entry_slab, cic); +@@ -1533,7 +1524,7 @@ write: + err = f2fs_write_raw_pages(cc, submitted, wbc, io_type); + f2fs_put_rpages_wbc(cc, wbc, false, 0); + destroy_out: +- f2fs_destroy_compress_ctx(cc); ++ f2fs_destroy_compress_ctx(cc, false); + return err; + } + +diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c +index 4e5257c763d01..8804a5d513801 100644 +--- a/fs/f2fs/data.c ++++ b/fs/f2fs/data.c +@@ -2276,7 +2276,7 @@ static int f2fs_mpage_readpages(struct inode *inode, + max_nr_pages, + &last_block_in_bio, + rac != NULL, false); +- f2fs_destroy_compress_ctx(&cc); ++ f2fs_destroy_compress_ctx(&cc, false); + if (ret) + goto set_error_page; + } +@@ -2321,7 +2321,7 @@ next_page: + max_nr_pages, + &last_block_in_bio, + rac != NULL, false); +- f2fs_destroy_compress_ctx(&cc); ++ f2fs_destroy_compress_ctx(&cc, false); + } + } + #endif +@@ -3022,7 
+3022,7 @@ next: + } + } + if (f2fs_compressed_file(inode)) +- f2fs_destroy_compress_ctx(&cc); ++ f2fs_destroy_compress_ctx(&cc, false); + #endif + if (retry) { + index = 0; +diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h +index e2d302ae3a46d..f3fabb1edfe97 100644 +--- a/fs/f2fs/f2fs.h ++++ b/fs/f2fs/f2fs.h +@@ -3376,6 +3376,7 @@ block_t f2fs_get_unusable_blocks(struct f2fs_sb_info *sbi); + int f2fs_disable_cp_again(struct f2fs_sb_info *sbi, block_t unusable); + void f2fs_release_discard_addrs(struct f2fs_sb_info *sbi); + int f2fs_npages_for_summary_flush(struct f2fs_sb_info *sbi, bool for_ra); ++bool f2fs_segment_has_free_slot(struct f2fs_sb_info *sbi, int segno); + void f2fs_init_inmem_curseg(struct f2fs_sb_info *sbi); + void f2fs_save_inmem_curseg(struct f2fs_sb_info *sbi); + void f2fs_restore_inmem_curseg(struct f2fs_sb_info *sbi); +@@ -3383,7 +3384,7 @@ void f2fs_get_new_segment(struct f2fs_sb_info *sbi, + unsigned int *newseg, bool new_sec, int dir); + void f2fs_allocate_segment_for_resize(struct f2fs_sb_info *sbi, int type, + unsigned int start, unsigned int end); +-void f2fs_allocate_new_segment(struct f2fs_sb_info *sbi, int type); ++void f2fs_allocate_new_section(struct f2fs_sb_info *sbi, int type); + void f2fs_allocate_new_segments(struct f2fs_sb_info *sbi); + int f2fs_trim_fs(struct f2fs_sb_info *sbi, struct fstrim_range *range); + bool f2fs_exist_trim_candidates(struct f2fs_sb_info *sbi, +@@ -3547,7 +3548,7 @@ void f2fs_destroy_post_read_wq(struct f2fs_sb_info *sbi); + int f2fs_start_gc_thread(struct f2fs_sb_info *sbi); + void f2fs_stop_gc_thread(struct f2fs_sb_info *sbi); + block_t f2fs_start_bidx_of_node(unsigned int node_ofs, struct inode *inode); +-int f2fs_gc(struct f2fs_sb_info *sbi, bool sync, bool background, ++int f2fs_gc(struct f2fs_sb_info *sbi, bool sync, bool background, bool force, + unsigned int segno); + void f2fs_build_gc_manager(struct f2fs_sb_info *sbi); + int f2fs_resize_fs(struct f2fs_sb_info *sbi, __u64 block_count); +@@ -3949,7 +3950,7 @@ struct decompress_io_ctx *f2fs_alloc_dic(struct compress_ctx *cc); + void f2fs_decompress_end_io(struct decompress_io_ctx *dic, bool failed); + void f2fs_put_page_dic(struct page *page); + int f2fs_init_compress_ctx(struct compress_ctx *cc); +-void f2fs_destroy_compress_ctx(struct compress_ctx *cc); ++void f2fs_destroy_compress_ctx(struct compress_ctx *cc, bool reuse); + void f2fs_init_compress_info(struct f2fs_sb_info *sbi); + int f2fs_init_page_array_cache(struct f2fs_sb_info *sbi); + void f2fs_destroy_page_array_cache(struct f2fs_sb_info *sbi); +diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c +index d26ff2ae3f5eb..dc79694e512ce 100644 +--- a/fs/f2fs/file.c ++++ b/fs/f2fs/file.c +@@ -1619,9 +1619,10 @@ static int expand_inode_data(struct inode *inode, loff_t offset, + struct f2fs_map_blocks map = { .m_next_pgofs = NULL, + .m_next_extent = NULL, .m_seg_type = NO_CHECK_TYPE, + .m_may_create = true }; +- pgoff_t pg_end; ++ pgoff_t pg_start, pg_end; + loff_t new_size = i_size_read(inode); + loff_t off_end; ++ block_t expanded = 0; + int err; + + err = inode_newsize_ok(inode, (len + offset)); +@@ -1634,11 +1635,12 @@ static int expand_inode_data(struct inode *inode, loff_t offset, + + f2fs_balance_fs(sbi, true); + ++ pg_start = ((unsigned long long)offset) >> PAGE_SHIFT; + pg_end = ((unsigned long long)offset + len) >> PAGE_SHIFT; + off_end = (offset + len) & (PAGE_SIZE - 1); + +- map.m_lblk = ((unsigned long long)offset) >> PAGE_SHIFT; +- map.m_len = pg_end - map.m_lblk; ++ map.m_lblk = pg_start; ++ map.m_len = pg_end - 
pg_start; + if (off_end) + map.m_len++; + +@@ -1646,19 +1648,15 @@ static int expand_inode_data(struct inode *inode, loff_t offset, + return 0; + + if (f2fs_is_pinned_file(inode)) { +- block_t len = (map.m_len >> sbi->log_blocks_per_seg) << +- sbi->log_blocks_per_seg; +- block_t done = 0; +- +- if (map.m_len % sbi->blocks_per_seg) +- len += sbi->blocks_per_seg; ++ block_t sec_blks = BLKS_PER_SEC(sbi); ++ block_t sec_len = roundup(map.m_len, sec_blks); + +- map.m_len = sbi->blocks_per_seg; ++ map.m_len = sec_blks; + next_alloc: + if (has_not_enough_free_secs(sbi, 0, + GET_SEC_FROM_SEG(sbi, overprovision_segments(sbi)))) { + down_write(&sbi->gc_lock); +- err = f2fs_gc(sbi, true, false, NULL_SEGNO); ++ err = f2fs_gc(sbi, true, false, false, NULL_SEGNO); + if (err && err != -ENODATA && err != -EAGAIN) + goto out_err; + } +@@ -1666,7 +1664,7 @@ next_alloc: + down_write(&sbi->pin_sem); + + f2fs_lock_op(sbi); +- f2fs_allocate_new_segment(sbi, CURSEG_COLD_DATA_PINNED); ++ f2fs_allocate_new_section(sbi, CURSEG_COLD_DATA_PINNED); + f2fs_unlock_op(sbi); + + map.m_seg_type = CURSEG_COLD_DATA_PINNED; +@@ -1674,24 +1672,25 @@ next_alloc: + + up_write(&sbi->pin_sem); + +- done += map.m_len; +- len -= map.m_len; ++ expanded += map.m_len; ++ sec_len -= map.m_len; + map.m_lblk += map.m_len; +- if (!err && len) ++ if (!err && sec_len) + goto next_alloc; + +- map.m_len = done; ++ map.m_len = expanded; + } else { + err = f2fs_map_blocks(inode, &map, 1, F2FS_GET_BLOCK_PRE_AIO); ++ expanded = map.m_len; + } + out_err: + if (err) { + pgoff_t last_off; + +- if (!map.m_len) ++ if (!expanded) + return err; + +- last_off = map.m_lblk + map.m_len - 1; ++ last_off = pg_start + expanded - 1; + + /* update new size to the failed position */ + new_size = (last_off == pg_end) ? offset + len : +@@ -2489,7 +2488,7 @@ static int f2fs_ioc_gc(struct file *filp, unsigned long arg) + down_write(&sbi->gc_lock); + } + +- ret = f2fs_gc(sbi, sync, true, NULL_SEGNO); ++ ret = f2fs_gc(sbi, sync, true, false, NULL_SEGNO); + out: + mnt_drop_write_file(filp); + return ret; +@@ -2525,7 +2524,8 @@ do_more: + down_write(&sbi->gc_lock); + } + +- ret = f2fs_gc(sbi, range->sync, true, GET_SEGNO(sbi, range->start)); ++ ret = f2fs_gc(sbi, range->sync, true, false, ++ GET_SEGNO(sbi, range->start)); + if (ret) { + if (ret == -EBUSY) + ret = -EAGAIN; +@@ -2978,7 +2978,7 @@ static int f2fs_ioc_flush_device(struct file *filp, unsigned long arg) + sm->last_victim[GC_CB] = end_segno + 1; + sm->last_victim[GC_GREEDY] = end_segno + 1; + sm->last_victim[ALLOC_NEXT] = end_segno + 1; +- ret = f2fs_gc(sbi, true, true, start_segno); ++ ret = f2fs_gc(sbi, true, true, true, start_segno); + if (ret == -EAGAIN) + ret = 0; + else if (ret < 0) +diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c +index 39330ad3c44e6..a8567cb476213 100644 +--- a/fs/f2fs/gc.c ++++ b/fs/f2fs/gc.c +@@ -112,7 +112,7 @@ do_gc: + sync_mode = F2FS_OPTION(sbi).bggc_mode == BGGC_MODE_SYNC; + + /* if return value is not zero, no victim was selected */ +- if (f2fs_gc(sbi, sync_mode, true, NULL_SEGNO)) ++ if (f2fs_gc(sbi, sync_mode, true, false, NULL_SEGNO)) + wait_ms = gc_th->no_gc_sleep_time; + + trace_f2fs_background_gc(sbi->sb, wait_ms, +@@ -392,10 +392,6 @@ static void add_victim_entry(struct f2fs_sb_info *sbi, + if (p->gc_mode == GC_AT && + get_valid_blocks(sbi, segno, true) == 0) + return; +- +- if (p->alloc_mode == AT_SSR && +- get_seg_entry(sbi, segno)->ckpt_valid_blocks == 0) +- return; + } + + for (i = 0; i < sbi->segs_per_sec; i++) +@@ -728,11 +724,27 @@ retry: + + if (sec_usage_check(sbi, 
secno)) + goto next; ++ + /* Don't touch checkpointed data */ +- if (unlikely(is_sbi_flag_set(sbi, SBI_CP_DISABLED) && +- get_ckpt_valid_blocks(sbi, segno) && +- p.alloc_mode == LFS)) +- goto next; ++ if (unlikely(is_sbi_flag_set(sbi, SBI_CP_DISABLED))) { ++ if (p.alloc_mode == LFS) { ++ /* ++ * LFS is set to find source section during GC. ++ * The victim should have no checkpointed data. ++ */ ++ if (get_ckpt_valid_blocks(sbi, segno, true)) ++ goto next; ++ } else { ++ /* ++ * SSR | AT_SSR are set to find target segment ++ * for writes which can be filled by checkpointed ++ * and newly written blocks. ++ */ ++ if (!f2fs_segment_has_free_slot(sbi, segno)) ++ goto next; ++ } ++ } ++ + if (gc_type == BG_GC && test_bit(secno, dirty_i->victim_secmap)) + goto next; + +@@ -1354,7 +1366,8 @@ out: + * the victim data block is ignored. + */ + static int gc_data_segment(struct f2fs_sb_info *sbi, struct f2fs_summary *sum, +- struct gc_inode_list *gc_list, unsigned int segno, int gc_type) ++ struct gc_inode_list *gc_list, unsigned int segno, int gc_type, ++ bool force_migrate) + { + struct super_block *sb = sbi->sb; + struct f2fs_summary *entry; +@@ -1383,8 +1396,8 @@ next_step: + * race condition along with SSR block allocation. + */ + if ((gc_type == BG_GC && has_not_enough_free_secs(sbi, 0, 0)) || +- get_valid_blocks(sbi, segno, true) == +- BLKS_PER_SEC(sbi)) ++ (!force_migrate && get_valid_blocks(sbi, segno, true) == ++ BLKS_PER_SEC(sbi))) + return submitted; + + if (check_valid_map(sbi, segno, off) == 0) +@@ -1519,7 +1532,8 @@ static int __get_victim(struct f2fs_sb_info *sbi, unsigned int *victim, + + static int do_garbage_collect(struct f2fs_sb_info *sbi, + unsigned int start_segno, +- struct gc_inode_list *gc_list, int gc_type) ++ struct gc_inode_list *gc_list, int gc_type, ++ bool force_migrate) + { + struct page *sum_page; + struct f2fs_summary_block *sum; +@@ -1606,7 +1620,8 @@ static int do_garbage_collect(struct f2fs_sb_info *sbi, + gc_type); + else + submitted += gc_data_segment(sbi, sum->entries, gc_list, +- segno, gc_type); ++ segno, gc_type, ++ force_migrate); + + stat_inc_seg_count(sbi, type, gc_type); + migrated++; +@@ -1634,7 +1649,7 @@ skip: + } + + int f2fs_gc(struct f2fs_sb_info *sbi, bool sync, +- bool background, unsigned int segno) ++ bool background, bool force, unsigned int segno) + { + int gc_type = sync ?
FG_GC : BG_GC; + int sec_freed = 0, seg_freed = 0, total_freed = 0; +@@ -1696,7 +1711,7 @@ gc_more: + if (ret) + goto stop; + +- seg_freed = do_garbage_collect(sbi, segno, &gc_list, gc_type); ++ seg_freed = do_garbage_collect(sbi, segno, &gc_list, gc_type, force); + if (gc_type == FG_GC && + seg_freed == f2fs_usable_segs_in_sec(sbi, segno)) + sec_freed++; +@@ -1835,7 +1850,7 @@ static int free_segment_range(struct f2fs_sb_info *sbi, + .iroot = RADIX_TREE_INIT(gc_list.iroot, GFP_NOFS), + }; + +- do_garbage_collect(sbi, segno, &gc_list, FG_GC); ++ do_garbage_collect(sbi, segno, &gc_list, FG_GC, true); + put_gc_inode(&gc_list); + + if (!gc_only && get_valid_blocks(sbi, segno, true)) { +@@ -1974,7 +1989,20 @@ int f2fs_resize_fs(struct f2fs_sb_info *sbi, __u64 block_count) + + /* stop CP to protect MAIN_SEC in free_segment_range */ + f2fs_lock_op(sbi); ++ ++ spin_lock(&sbi->stat_lock); ++ if (shrunk_blocks + valid_user_blocks(sbi) + ++ sbi->current_reserved_blocks + sbi->unusable_block_count + ++ F2FS_OPTION(sbi).root_reserved_blocks > sbi->user_block_count) ++ err = -ENOSPC; ++ spin_unlock(&sbi->stat_lock); ++ ++ if (err) ++ goto out_unlock; ++ + err = free_segment_range(sbi, secs, true); ++ ++out_unlock: + f2fs_unlock_op(sbi); + up_write(&sbi->gc_lock); + if (err) +diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c +index 993caefcd2bb0..92652ca7a7c8b 100644 +--- a/fs/f2fs/inline.c ++++ b/fs/f2fs/inline.c +@@ -219,7 +219,8 @@ out: + + f2fs_put_page(page, 1); + +- f2fs_balance_fs(sbi, dn.node_changed); ++ if (!err) ++ f2fs_balance_fs(sbi, dn.node_changed); + + return err; + } +diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c +index c2866561263e9..77456d228f2a1 100644 +--- a/fs/f2fs/segment.c ++++ b/fs/f2fs/segment.c +@@ -324,23 +324,27 @@ void f2fs_drop_inmem_pages(struct inode *inode) + struct f2fs_sb_info *sbi = F2FS_I_SB(inode); + struct f2fs_inode_info *fi = F2FS_I(inode); + +- while (!list_empty(&fi->inmem_pages)) { ++ do { + mutex_lock(&fi->inmem_lock); ++ if (list_empty(&fi->inmem_pages)) { ++ fi->i_gc_failures[GC_FAILURE_ATOMIC] = 0; ++ ++ spin_lock(&sbi->inode_lock[ATOMIC_FILE]); ++ if (!list_empty(&fi->inmem_ilist)) ++ list_del_init(&fi->inmem_ilist); ++ if (f2fs_is_atomic_file(inode)) { ++ clear_inode_flag(inode, FI_ATOMIC_FILE); ++ sbi->atomic_files--; ++ } ++ spin_unlock(&sbi->inode_lock[ATOMIC_FILE]); ++ ++ mutex_unlock(&fi->inmem_lock); ++ break; ++ } + __revoke_inmem_pages(inode, &fi->inmem_pages, + true, false, true); + mutex_unlock(&fi->inmem_lock); +- } +- +- fi->i_gc_failures[GC_FAILURE_ATOMIC] = 0; +- +- spin_lock(&sbi->inode_lock[ATOMIC_FILE]); +- if (!list_empty(&fi->inmem_ilist)) +- list_del_init(&fi->inmem_ilist); +- if (f2fs_is_atomic_file(inode)) { +- clear_inode_flag(inode, FI_ATOMIC_FILE); +- sbi->atomic_files--; +- } +- spin_unlock(&sbi->inode_lock[ATOMIC_FILE]); ++ } while (1); + } + + void f2fs_drop_inmem_page(struct inode *inode, struct page *page) +@@ -504,7 +508,7 @@ void f2fs_balance_fs(struct f2fs_sb_info *sbi, bool need) + */ + if (has_not_enough_free_secs(sbi, 0, 0)) { + down_write(&sbi->gc_lock); +- f2fs_gc(sbi, false, false, NULL_SEGNO); ++ f2fs_gc(sbi, false, false, false, NULL_SEGNO); + } + } + +@@ -861,7 +865,7 @@ static void locate_dirty_segment(struct f2fs_sb_info *sbi, unsigned int segno) + mutex_lock(&dirty_i->seglist_lock); + + valid_blocks = get_valid_blocks(sbi, segno, false); +- ckpt_valid_blocks = get_ckpt_valid_blocks(sbi, segno); ++ ckpt_valid_blocks = get_ckpt_valid_blocks(sbi, segno, false); + + if (valid_blocks == 0 && 
(!is_sbi_flag_set(sbi, SBI_CP_DISABLED) || + ckpt_valid_blocks == usable_blocks)) { +@@ -946,7 +950,7 @@ static unsigned int get_free_segment(struct f2fs_sb_info *sbi) + for_each_set_bit(segno, dirty_i->dirty_segmap[DIRTY], MAIN_SEGS(sbi)) { + if (get_valid_blocks(sbi, segno, false)) + continue; +- if (get_ckpt_valid_blocks(sbi, segno)) ++ if (get_ckpt_valid_blocks(sbi, segno, false)) + continue; + mutex_unlock(&dirty_i->seglist_lock); + return segno; +@@ -2636,6 +2640,23 @@ static void __refresh_next_blkoff(struct f2fs_sb_info *sbi, + seg->next_blkoff++; + } + ++bool f2fs_segment_has_free_slot(struct f2fs_sb_info *sbi, int segno) ++{ ++ struct seg_entry *se = get_seg_entry(sbi, segno); ++ int entries = SIT_VBLOCK_MAP_SIZE / sizeof(unsigned long); ++ unsigned long *target_map = SIT_I(sbi)->tmp_map; ++ unsigned long *ckpt_map = (unsigned long *)se->ckpt_valid_map; ++ unsigned long *cur_map = (unsigned long *)se->cur_valid_map; ++ int i, pos; ++ ++ for (i = 0; i < entries; i++) ++ target_map[i] = ckpt_map[i] | cur_map[i]; ++ ++ pos = __find_rev_next_zero_bit(target_map, sbi->blocks_per_seg, 0); ++ ++ return pos < sbi->blocks_per_seg; ++} ++ + /* + * This function always allocates a used segment(from dirty seglist) by SSR + * manner, so it should recover the existing segment information of valid blocks +@@ -2893,7 +2914,8 @@ unlock: + up_read(&SM_I(sbi)->curseg_lock); + } + +-static void __allocate_new_segment(struct f2fs_sb_info *sbi, int type) ++static void __allocate_new_segment(struct f2fs_sb_info *sbi, int type, ++ bool new_sec) + { + struct curseg_info *curseg = CURSEG_I(sbi, type); + unsigned int old_segno; +@@ -2901,32 +2923,42 @@ static void __allocate_new_segment(struct f2fs_sb_info *sbi, int type) + if (!curseg->inited) + goto alloc; + +- if (!curseg->next_blkoff && +- !get_valid_blocks(sbi, curseg->segno, false) && +- !get_ckpt_valid_blocks(sbi, curseg->segno)) +- return; ++ if (curseg->next_blkoff || ++ get_valid_blocks(sbi, curseg->segno, new_sec)) ++ goto alloc; + ++ if (!get_ckpt_valid_blocks(sbi, curseg->segno, new_sec)) ++ return; + alloc: + old_segno = curseg->segno; + SIT_I(sbi)->s_ops->allocate_segment(sbi, type, true); + locate_dirty_segment(sbi, old_segno); + } + +-void f2fs_allocate_new_segment(struct f2fs_sb_info *sbi, int type) ++static void __allocate_new_section(struct f2fs_sb_info *sbi, int type) + { ++ __allocate_new_segment(sbi, type, true); ++} ++ ++void f2fs_allocate_new_section(struct f2fs_sb_info *sbi, int type) ++{ ++ down_read(&SM_I(sbi)->curseg_lock); + down_write(&SIT_I(sbi)->sentry_lock); +- __allocate_new_segment(sbi, type); ++ __allocate_new_section(sbi, type); + up_write(&SIT_I(sbi)->sentry_lock); ++ up_read(&SM_I(sbi)->curseg_lock); + } + + void f2fs_allocate_new_segments(struct f2fs_sb_info *sbi) + { + int i; + ++ down_read(&SM_I(sbi)->curseg_lock); + down_write(&SIT_I(sbi)->sentry_lock); + for (i = CURSEG_HOT_DATA; i <= CURSEG_COLD_DATA; i++) +- __allocate_new_segment(sbi, i); ++ __allocate_new_segment(sbi, i, false); + up_write(&SIT_I(sbi)->sentry_lock); ++ up_read(&SM_I(sbi)->curseg_lock); + } + + static const struct segment_allocation default_salloc_ops = { +@@ -3365,12 +3397,12 @@ void f2fs_allocate_data_block(struct f2fs_sb_info *sbi, struct page *page, + f2fs_inode_chksum_set(sbi, page); + } + +- if (F2FS_IO_ALIGNED(sbi)) +- fio->retry = false; +- + if (fio) { + struct f2fs_bio_info *io; + ++ if (F2FS_IO_ALIGNED(sbi)) ++ fio->retry = false; ++ + INIT_LIST_HEAD(&fio->list); + fio->in_list = true; + io = sbi->write_io[fio->type] + fio->temp; 
+diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h +index e9a7a637d6887..afb175739de5e 100644 +--- a/fs/f2fs/segment.h ++++ b/fs/f2fs/segment.h +@@ -361,8 +361,20 @@ static inline unsigned int get_valid_blocks(struct f2fs_sb_info *sbi, + } + + static inline unsigned int get_ckpt_valid_blocks(struct f2fs_sb_info *sbi, +- unsigned int segno) ++ unsigned int segno, bool use_section) + { ++ if (use_section && __is_large_section(sbi)) { ++ unsigned int start_segno = START_SEGNO(segno); ++ unsigned int blocks = 0; ++ int i; ++ ++ for (i = 0; i < sbi->segs_per_sec; i++, start_segno++) { ++ struct seg_entry *se = get_seg_entry(sbi, start_segno); ++ ++ blocks += se->ckpt_valid_blocks; ++ } ++ return blocks; ++ } + return get_seg_entry(sbi, segno)->ckpt_valid_blocks; + } + +diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c +index 82592b19b4e02..d852d96773a32 100644 +--- a/fs/f2fs/super.c ++++ b/fs/f2fs/super.c +@@ -1865,7 +1865,7 @@ static int f2fs_disable_checkpoint(struct f2fs_sb_info *sbi) + + while (!f2fs_time_over(sbi, DISABLE_TIME)) { + down_write(&sbi->gc_lock); +- err = f2fs_gc(sbi, true, false, NULL_SEGNO); ++ err = f2fs_gc(sbi, true, false, false, NULL_SEGNO); + if (err == -ENODATA) { + err = 0; + break; +@@ -3929,10 +3929,18 @@ try_onemore: + * previous checkpoint was not done by clean system shutdown. + */ + if (f2fs_hw_is_readonly(sbi)) { +- if (!is_set_ckpt_flags(sbi, CP_UMOUNT_FLAG)) +- f2fs_err(sbi, "Need to recover fsync data, but write access unavailable"); +- else +- f2fs_info(sbi, "write access unavailable, skipping recovery"); ++ if (!is_set_ckpt_flags(sbi, CP_UMOUNT_FLAG)) { ++ err = f2fs_recover_fsync_data(sbi, true); ++ if (err > 0) { ++ err = -EROFS; ++ f2fs_err(sbi, "Need to recover fsync data, but " ++ "write access unavailable, please try " ++ "mount w/ disable_roll_forward or norecovery"); ++ } ++ if (err < 0) ++ goto free_meta; ++ } ++ f2fs_info(sbi, "write access unavailable, skipping recovery"); + goto reset_checkpoint; + } + +diff --git a/fs/fuse/cuse.c b/fs/fuse/cuse.c +index 45082269e6982..a37528b51798b 100644 +--- a/fs/fuse/cuse.c ++++ b/fs/fuse/cuse.c +@@ -627,6 +627,8 @@ static int __init cuse_init(void) + cuse_channel_fops.owner = THIS_MODULE; + cuse_channel_fops.open = cuse_channel_open; + cuse_channel_fops.release = cuse_channel_release; ++ /* CUSE is not prepared for FUSE_DEV_IOC_CLONE */ ++ cuse_channel_fops.unlocked_ioctl = NULL; + + cuse_class = class_create(THIS_MODULE, "cuse"); + if (IS_ERR(cuse_class)) +diff --git a/fs/fuse/file.c b/fs/fuse/file.c +index eff4abaa87da0..6e6d1e5998691 100644 +--- a/fs/fuse/file.c ++++ b/fs/fuse/file.c +@@ -1776,8 +1776,17 @@ static void fuse_writepage_end(struct fuse_mount *fm, struct fuse_args *args, + container_of(args, typeof(*wpa), ia.ap.args); + struct inode *inode = wpa->inode; + struct fuse_inode *fi = get_fuse_inode(inode); ++ struct fuse_conn *fc = get_fuse_conn(inode); + + mapping_set_error(inode->i_mapping, error); ++ /* ++ * A writeback finished and this might have updated mtime/ctime on ++ * server making local mtime/ctime stale. Hence invalidate attrs. ++ * Do this only if writeback_cache is not enabled. If writeback_cache ++ * is enabled, we trust local ctime/mtime. 
++ */ ++ if (!fc->writeback_cache) ++ fuse_invalidate_attr(inode); + spin_lock(&fi->lock); + rb_erase(&wpa->writepages_entry, &fi->writepages); + while (wpa->next) { +diff --git a/fs/fuse/virtio_fs.c b/fs/fuse/virtio_fs.c +index 1e5affed158e9..005209b1cd50e 100644 +--- a/fs/fuse/virtio_fs.c ++++ b/fs/fuse/virtio_fs.c +@@ -1437,8 +1437,7 @@ static int virtio_fs_get_tree(struct fs_context *fsc) + if (!fm) + goto out_err; + +- fuse_conn_init(fc, fm, get_user_ns(current_user_ns()), +- &virtio_fs_fiq_ops, fs); ++ fuse_conn_init(fc, fm, fsc->user_ns, &virtio_fs_fiq_ops, fs); + fc->release = fuse_free_conn; + fc->delete_stale = true; + fc->auto_submounts = true; +diff --git a/fs/hfsplus/extents.c b/fs/hfsplus/extents.c +index a930ddd156819..7054a542689f9 100644 +--- a/fs/hfsplus/extents.c ++++ b/fs/hfsplus/extents.c +@@ -598,13 +598,15 @@ void hfsplus_file_truncate(struct inode *inode) + res = __hfsplus_ext_cache_extent(&fd, inode, alloc_cnt); + if (res) + break; +- hfs_brec_remove(&fd); + +- mutex_unlock(&fd.tree->tree_lock); + start = hip->cached_start; ++ if (blk_cnt <= start) ++ hfs_brec_remove(&fd); ++ mutex_unlock(&fd.tree->tree_lock); + hfsplus_free_extents(sb, hip->cached_extents, + alloc_cnt - start, alloc_cnt - blk_cnt); + hfsplus_dump_extent(hip->cached_extents); ++ mutex_lock(&fd.tree->tree_lock); + if (blk_cnt > start) { + hip->extent_state |= HFSPLUS_EXT_DIRTY; + break; +@@ -612,7 +614,6 @@ void hfsplus_file_truncate(struct inode *inode) + alloc_cnt = start; + hip->cached_start = hip->cached_blocks = 0; + hip->extent_state &= ~(HFSPLUS_EXT_DIRTY | HFSPLUS_EXT_NEW); +- mutex_lock(&fd.tree->tree_lock); + } + hfs_find_exit(&fd); + +diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c +index 701c82c361383..99df69b848224 100644 +--- a/fs/hugetlbfs/inode.c ++++ b/fs/hugetlbfs/inode.c +@@ -131,6 +131,7 @@ static void huge_pagevec_release(struct pagevec *pvec) + static int hugetlbfs_file_mmap(struct file *file, struct vm_area_struct *vma) + { + struct inode *inode = file_inode(file); ++ struct hugetlbfs_inode_info *info = HUGETLBFS_I(inode); + loff_t len, vma_len; + int ret; + struct hstate *h = hstate_file(file); +@@ -146,6 +147,10 @@ static int hugetlbfs_file_mmap(struct file *file, struct vm_area_struct *vma) + vma->vm_flags |= VM_HUGETLB | VM_DONTEXPAND; + vma->vm_ops = &hugetlb_vm_ops; + ++ ret = seal_check_future_write(info->seals, vma); ++ if (ret) ++ return ret; ++ + /* + * page based offset in vm_pgoff could be sufficiently large to + * overflow a loff_t when converted to byte offset. 
This can +diff --git a/fs/jbd2/recovery.c b/fs/jbd2/recovery.c +index 69f18fe209236..d47a0d96bf309 100644 +--- a/fs/jbd2/recovery.c ++++ b/fs/jbd2/recovery.c +@@ -245,15 +245,14 @@ static int fc_do_one_pass(journal_t *journal, + return 0; + + while (next_fc_block <= journal->j_fc_last) { +- jbd_debug(3, "Fast commit replay: next block %ld", ++ jbd_debug(3, "Fast commit replay: next block %ld\n", + next_fc_block); + err = jread(&bh, journal, next_fc_block); + if (err) { +- jbd_debug(3, "Fast commit replay: read error"); ++ jbd_debug(3, "Fast commit replay: read error\n"); + break; + } + +- jbd_debug(3, "Processing fast commit blk with seq %d"); + err = journal->j_fc_replay_callback(journal, bh, pass, + next_fc_block - journal->j_fc_first, + expected_commit_id); +diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c +index f7786e00a6a7f..ed9d580826f5a 100644 +--- a/fs/nfs/callback_proc.c ++++ b/fs/nfs/callback_proc.c +@@ -137,12 +137,12 @@ static struct inode *nfs_layout_find_inode_by_stateid(struct nfs_client *clp, + list_for_each_entry_rcu(lo, &server->layouts, plh_layouts) { + if (!pnfs_layout_is_valid(lo)) + continue; +- if (stateid != NULL && +- !nfs4_stateid_match_other(stateid, &lo->plh_stateid)) ++ if (!nfs4_stateid_match_other(stateid, &lo->plh_stateid)) + continue; +- if (!nfs_sb_active(server->super)) +- continue; +- inode = igrab(lo->plh_inode); ++ if (nfs_sb_active(server->super)) ++ inode = igrab(lo->plh_inode); ++ else ++ inode = ERR_PTR(-EAGAIN); + rcu_read_unlock(); + if (inode) + return inode; +@@ -176,9 +176,10 @@ static struct inode *nfs_layout_find_inode_by_fh(struct nfs_client *clp, + continue; + if (nfsi->layout != lo) + continue; +- if (!nfs_sb_active(server->super)) +- continue; +- inode = igrab(lo->plh_inode); ++ if (nfs_sb_active(server->super)) ++ inode = igrab(lo->plh_inode); ++ else ++ inode = ERR_PTR(-EAGAIN); + rcu_read_unlock(); + if (inode) + return inode; +diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c +index fc4f490f2d785..0cd7c59a6601c 100644 +--- a/fs/nfs/dir.c ++++ b/fs/nfs/dir.c +@@ -866,6 +866,8 @@ static int nfs_readdir_xdr_to_array(struct nfs_readdir_descriptor *desc, + break; + } + ++ verf_arg = verf_res; ++ + status = nfs_readdir_page_filler(desc, entry, pages, pglen, + arrays, narrays); + } while (!status && nfs_readdir_page_needs_filling(page)); +@@ -927,7 +929,12 @@ static int find_and_lock_cache_page(struct nfs_readdir_descriptor *desc) + } + return res; + } +- memcpy(nfsi->cookieverf, verf, sizeof(nfsi->cookieverf)); ++ /* ++ * Set the cookie verifier if the page cache was empty ++ */ ++ if (desc->page_index == 0) ++ memcpy(nfsi->cookieverf, verf, ++ sizeof(nfsi->cookieverf)); + } + res = nfs_readdir_search_array(desc); + if (res == 0) { +@@ -974,10 +981,10 @@ static int readdir_search_pagecache(struct nfs_readdir_descriptor *desc) + /* + * Once we've found the start of the dirent within a page: fill 'er up... 
+ */ +-static void nfs_do_filldir(struct nfs_readdir_descriptor *desc) ++static void nfs_do_filldir(struct nfs_readdir_descriptor *desc, ++ const __be32 *verf) + { + struct file *file = desc->file; +- struct nfs_inode *nfsi = NFS_I(file_inode(file)); + struct nfs_cache_array *array; + unsigned int i = 0; + +@@ -991,7 +998,7 @@ static void nfs_do_filldir(struct nfs_readdir_descriptor *desc) + desc->eof = true; + break; + } +- memcpy(desc->verf, nfsi->cookieverf, sizeof(desc->verf)); ++ memcpy(desc->verf, verf, sizeof(desc->verf)); + if (i < (array->size-1)) + desc->dir_cookie = array->array[i+1].cookie; + else +@@ -1048,7 +1055,7 @@ static int uncached_readdir(struct nfs_readdir_descriptor *desc) + + for (i = 0; !desc->eof && i < sz && arrays[i]; i++) { + desc->page = arrays[i]; +- nfs_do_filldir(desc); ++ nfs_do_filldir(desc, verf); + } + desc->page = NULL; + +@@ -1069,6 +1076,7 @@ static int nfs_readdir(struct file *file, struct dir_context *ctx) + { + struct dentry *dentry = file_dentry(file); + struct inode *inode = d_inode(dentry); ++ struct nfs_inode *nfsi = NFS_I(inode); + struct nfs_open_dir_context *dir_ctx = file->private_data; + struct nfs_readdir_descriptor *desc; + int res; +@@ -1122,7 +1130,7 @@ static int nfs_readdir(struct file *file, struct dir_context *ctx) + break; + } + if (res == -ETOOSMALL && desc->plus) { +- clear_bit(NFS_INO_ADVISE_RDPLUS, &NFS_I(inode)->flags); ++ clear_bit(NFS_INO_ADVISE_RDPLUS, &nfsi->flags); + nfs_zap_caches(inode); + desc->page_index = 0; + desc->plus = false; +@@ -1132,7 +1140,7 @@ static int nfs_readdir(struct file *file, struct dir_context *ctx) + if (res < 0) + break; + +- nfs_do_filldir(desc); ++ nfs_do_filldir(desc, nfsi->cookieverf); + nfs_readdir_page_unlock_and_put_cached(desc); + } while (!desc->eof); + +diff --git a/fs/nfs/flexfilelayout/flexfilelayout.c b/fs/nfs/flexfilelayout/flexfilelayout.c +index 872112bffcab2..d383de00d4868 100644 +--- a/fs/nfs/flexfilelayout/flexfilelayout.c ++++ b/fs/nfs/flexfilelayout/flexfilelayout.c +@@ -106,7 +106,7 @@ static int decode_nfs_fh(struct xdr_stream *xdr, struct nfs_fh *fh) + if (unlikely(!p)) + return -ENOBUFS; + fh->size = be32_to_cpup(p++); +- if (fh->size > sizeof(struct nfs_fh)) { ++ if (fh->size > NFS_MAXFHSIZE) { + printk(KERN_ERR "NFS flexfiles: Too big fh received %d\n", + fh->size); + return -EOVERFLOW; +diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c +index a7fb076a5f44b..7cfeee3eeef7f 100644 +--- a/fs/nfs/inode.c ++++ b/fs/nfs/inode.c +@@ -1662,10 +1662,10 @@ EXPORT_SYMBOL_GPL(_nfs_display_fhandle); + */ + static int nfs_inode_attrs_need_update(const struct inode *inode, const struct nfs_fattr *fattr) + { +- const struct nfs_inode *nfsi = NFS_I(inode); ++ unsigned long attr_gencount = NFS_I(inode)->attr_gencount; + +- return ((long)fattr->gencount - (long)nfsi->attr_gencount) > 0 || +- ((long)nfsi->attr_gencount - (long)nfs_read_attr_generation_counter() > 0); ++ return (long)(fattr->gencount - attr_gencount) > 0 || ++ (long)(attr_gencount - nfs_read_attr_generation_counter()) > 0; + } + + static int nfs_refresh_inode_locked(struct inode *inode, struct nfs_fattr *fattr) +@@ -2094,7 +2094,7 @@ static int nfs_update_inode(struct inode *inode, struct nfs_fattr *fattr) + nfsi->attrtimeo_timestamp = now; + } + /* Set the barrier to be more recent than this fattr */ +- if ((long)fattr->gencount - (long)nfsi->attr_gencount > 0) ++ if ((long)(fattr->gencount - nfsi->attr_gencount) > 0) + nfsi->attr_gencount = fattr->gencount; + } + +diff --git a/fs/nfs/nfs42proc.c b/fs/nfs/nfs42proc.c +index 
094024b0aca19..3875120ef3ef0 100644 +--- a/fs/nfs/nfs42proc.c ++++ b/fs/nfs/nfs42proc.c +@@ -46,11 +46,12 @@ static int _nfs42_proc_fallocate(struct rpc_message *msg, struct file *filep, + { + struct inode *inode = file_inode(filep); + struct nfs_server *server = NFS_SERVER(inode); ++ u32 bitmask[3]; + struct nfs42_falloc_args args = { + .falloc_fh = NFS_FH(inode), + .falloc_offset = offset, + .falloc_length = len, +- .falloc_bitmask = nfs4_fattr_bitmap, ++ .falloc_bitmask = bitmask, + }; + struct nfs42_falloc_res res = { + .falloc_server = server, +@@ -68,6 +69,10 @@ static int _nfs42_proc_fallocate(struct rpc_message *msg, struct file *filep, + return status; + } + ++ memcpy(bitmask, server->cache_consistency_bitmask, sizeof(bitmask)); ++ if (server->attr_bitmask[1] & FATTR4_WORD1_SPACE_USED) ++ bitmask[1] |= FATTR4_WORD1_SPACE_USED; ++ + res.falloc_fattr = nfs_alloc_fattr(); + if (!res.falloc_fattr) + return -ENOMEM; +@@ -75,7 +80,8 @@ static int _nfs42_proc_fallocate(struct rpc_message *msg, struct file *filep, + status = nfs4_call_sync(server->client, server, msg, + &args.seq_args, &res.seq_res, 0); + if (status == 0) +- status = nfs_post_op_update_inode(inode, res.falloc_fattr); ++ status = nfs_post_op_update_inode_force_wcc(inode, ++ res.falloc_fattr); + + kfree(res.falloc_fattr); + return status; +@@ -84,7 +90,8 @@ static int _nfs42_proc_fallocate(struct rpc_message *msg, struct file *filep, + static int nfs42_proc_fallocate(struct rpc_message *msg, struct file *filep, + loff_t offset, loff_t len) + { +- struct nfs_server *server = NFS_SERVER(file_inode(filep)); ++ struct inode *inode = file_inode(filep); ++ struct nfs_server *server = NFS_SERVER(inode); + struct nfs4_exception exception = { }; + struct nfs_lock_context *lock; + int err; +@@ -93,9 +100,13 @@ static int nfs42_proc_fallocate(struct rpc_message *msg, struct file *filep, + if (IS_ERR(lock)) + return PTR_ERR(lock); + +- exception.inode = file_inode(filep); ++ exception.inode = inode; + exception.state = lock->open_context->state; + ++ err = nfs_sync_inode(inode); ++ if (err) ++ goto out; ++ + do { + err = _nfs42_proc_fallocate(msg, filep, lock, offset, len); + if (err == -ENOTSUPP) { +@@ -104,7 +115,7 @@ static int nfs42_proc_fallocate(struct rpc_message *msg, struct file *filep, + } + err = nfs4_handle_exception(server, err, &exception); + } while (exception.retry); +- ++out: + nfs_put_lock_context(lock); + return err; + } +@@ -142,16 +153,13 @@ int nfs42_proc_deallocate(struct file *filep, loff_t offset, loff_t len) + return -EOPNOTSUPP; + + inode_lock(inode); +- err = nfs_sync_inode(inode); +- if (err) +- goto out_unlock; + + err = nfs42_proc_fallocate(&msg, filep, offset, len); + if (err == 0) + truncate_pagecache_range(inode, offset, (offset + len) -1); + if (err == -EOPNOTSUPP) + NFS_SERVER(inode)->caps &= ~NFS_CAP_DEALLOCATE; +-out_unlock: ++ + inode_unlock(inode); + return err; + } +@@ -261,6 +269,33 @@ out: + return status; + } + ++/** ++ * nfs42_copy_dest_done - perform inode cache updates after clone/copy offload ++ * @inode: pointer to destination inode ++ * @pos: destination offset ++ * @len: copy length ++ * ++ * Punch a hole in the inode page cache, so that the NFS client will ++ * know to retrieve new data. ++ * Update the file size if necessary, and then mark the inode as having ++ * invalid cached values for change attribute, ctime, mtime and space used. 
++ */ ++static void nfs42_copy_dest_done(struct inode *inode, loff_t pos, loff_t len) ++{ ++ loff_t newsize = pos + len; ++ loff_t end = newsize - 1; ++ ++ truncate_pagecache_range(inode, pos, end); ++ spin_lock(&inode->i_lock); ++ if (newsize > i_size_read(inode)) ++ i_size_write(inode, newsize); ++ nfs_set_cache_invalid(inode, NFS_INO_INVALID_CHANGE | ++ NFS_INO_INVALID_CTIME | ++ NFS_INO_INVALID_MTIME | ++ NFS_INO_INVALID_BLOCKS); ++ spin_unlock(&inode->i_lock); ++} ++ + static ssize_t _nfs42_proc_copy(struct file *src, + struct nfs_lock_context *src_lock, + struct file *dst, +@@ -354,14 +389,8 @@ static ssize_t _nfs42_proc_copy(struct file *src, + goto out; + } + +- truncate_pagecache_range(dst_inode, pos_dst, +- pos_dst + res->write_res.count); +- spin_lock(&dst_inode->i_lock); +- nfs_set_cache_invalid( +- dst_inode, NFS_INO_REVAL_PAGECACHE | NFS_INO_REVAL_FORCED | +- NFS_INO_INVALID_SIZE | NFS_INO_INVALID_ATTR | +- NFS_INO_INVALID_DATA); +- spin_unlock(&dst_inode->i_lock); ++ nfs42_copy_dest_done(dst_inode, pos_dst, res->write_res.count); ++ + spin_lock(&src_inode->i_lock); + nfs_set_cache_invalid(src_inode, NFS_INO_REVAL_PAGECACHE | + NFS_INO_REVAL_FORCED | +@@ -659,7 +688,10 @@ static loff_t _nfs42_proc_llseek(struct file *filep, + if (status) + return status; + +- return vfs_setpos(filep, res.sr_offset, inode->i_sb->s_maxbytes); ++ if (whence == SEEK_DATA && res.sr_eof) ++ return -NFS4ERR_NXIO; ++ else ++ return vfs_setpos(filep, res.sr_offset, inode->i_sb->s_maxbytes); + } + + loff_t nfs42_proc_llseek(struct file *filep, loff_t offset, int whence) +@@ -1044,8 +1076,10 @@ static int _nfs42_proc_clone(struct rpc_message *msg, struct file *src_f, + + status = nfs4_call_sync(server->client, server, msg, + &args.seq_args, &res.seq_res, 0); +- if (status == 0) ++ if (status == 0) { ++ nfs42_copy_dest_done(dst_inode, dst_offset, count); + status = nfs_post_op_update_inode(dst_inode, res.dst_fattr); ++ } + + kfree(res.dst_fattr); + return status; +diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c +index c65c4b41e2c19..820abae88cf04 100644 +--- a/fs/nfs/nfs4proc.c ++++ b/fs/nfs/nfs4proc.c +@@ -108,9 +108,10 @@ static int nfs41_test_stateid(struct nfs_server *, nfs4_stateid *, + static int nfs41_free_stateid(struct nfs_server *, const nfs4_stateid *, + const struct cred *, bool); + #endif +-static void nfs4_bitmask_adjust(__u32 *bitmask, struct inode *inode, +- struct nfs_server *server, +- struct nfs4_label *label); ++static void nfs4_bitmask_set(__u32 bitmask[NFS4_BITMASK_SZ], ++ const __u32 *src, struct inode *inode, ++ struct nfs_server *server, ++ struct nfs4_label *label); + + #ifdef CONFIG_NFS_V4_SECURITY_LABEL + static inline struct nfs4_label * +@@ -3591,6 +3592,7 @@ static void nfs4_close_prepare(struct rpc_task *task, void *data) + struct nfs4_closedata *calldata = data; + struct nfs4_state *state = calldata->state; + struct inode *inode = calldata->inode; ++ struct nfs_server *server = NFS_SERVER(inode); + struct pnfs_layout_hdr *lo; + bool is_rdonly, is_wronly, is_rdwr; + int call_close = 0; +@@ -3647,8 +3649,10 @@ static void nfs4_close_prepare(struct rpc_task *task, void *data) + if (calldata->arg.fmode == 0 || calldata->arg.fmode == FMODE_READ) { + /* Close-to-open cache consistency revalidation */ + if (!nfs4_have_delegation(inode, FMODE_READ)) { +- calldata->arg.bitmask = NFS_SERVER(inode)->cache_consistency_bitmask; +- nfs4_bitmask_adjust(calldata->arg.bitmask, inode, NFS_SERVER(inode), NULL); ++ nfs4_bitmask_set(calldata->arg.bitmask_store, ++ 
server->cache_consistency_bitmask, ++ inode, server, NULL); ++ calldata->arg.bitmask = calldata->arg.bitmask_store; + } else + calldata->arg.bitmask = NULL; + } +@@ -5416,19 +5420,17 @@ bool nfs4_write_need_cache_consistency_data(struct nfs_pgio_header *hdr) + return nfs4_have_delegation(hdr->inode, FMODE_READ) == 0; + } + +-static void nfs4_bitmask_adjust(__u32 *bitmask, struct inode *inode, +- struct nfs_server *server, +- struct nfs4_label *label) ++static void nfs4_bitmask_set(__u32 bitmask[NFS4_BITMASK_SZ], const __u32 *src, ++ struct inode *inode, struct nfs_server *server, ++ struct nfs4_label *label) + { +- + unsigned long cache_validity = READ_ONCE(NFS_I(inode)->cache_validity); ++ unsigned int i; + +- if ((cache_validity & NFS_INO_INVALID_DATA) || +- (cache_validity & NFS_INO_REVAL_PAGECACHE) || +- (cache_validity & NFS_INO_REVAL_FORCED) || +- (cache_validity & NFS_INO_INVALID_OTHER)) +- nfs4_bitmap_copy_adjust(bitmask, nfs4_bitmask(server, label), inode); ++ memcpy(bitmask, src, sizeof(*bitmask) * NFS4_BITMASK_SZ); + ++ if (cache_validity & (NFS_INO_INVALID_CHANGE | NFS_INO_REVAL_PAGECACHE)) ++ bitmask[0] |= FATTR4_WORD0_CHANGE; + if (cache_validity & NFS_INO_INVALID_ATIME) + bitmask[1] |= FATTR4_WORD1_TIME_ACCESS; + if (cache_validity & NFS_INO_INVALID_OTHER) +@@ -5437,16 +5439,22 @@ static void nfs4_bitmask_adjust(__u32 *bitmask, struct inode *inode, + FATTR4_WORD1_NUMLINKS; + if (label && label->len && cache_validity & NFS_INO_INVALID_LABEL) + bitmask[2] |= FATTR4_WORD2_SECURITY_LABEL; +- if (cache_validity & NFS_INO_INVALID_CHANGE) +- bitmask[0] |= FATTR4_WORD0_CHANGE; + if (cache_validity & NFS_INO_INVALID_CTIME) + bitmask[1] |= FATTR4_WORD1_TIME_METADATA; + if (cache_validity & NFS_INO_INVALID_MTIME) + bitmask[1] |= FATTR4_WORD1_TIME_MODIFY; +- if (cache_validity & NFS_INO_INVALID_SIZE) +- bitmask[0] |= FATTR4_WORD0_SIZE; + if (cache_validity & NFS_INO_INVALID_BLOCKS) + bitmask[1] |= FATTR4_WORD1_SPACE_USED; ++ ++ if (nfs4_have_delegation(inode, FMODE_READ) && ++ !(cache_validity & NFS_INO_REVAL_FORCED)) ++ bitmask[0] &= ~FATTR4_WORD0_SIZE; ++ else if (cache_validity & ++ (NFS_INO_INVALID_SIZE | NFS_INO_REVAL_PAGECACHE)) ++ bitmask[0] |= FATTR4_WORD0_SIZE; ++ ++ for (i = 0; i < NFS4_BITMASK_SZ; i++) ++ bitmask[i] &= server->attr_bitmask[i]; + } + + static void nfs4_proc_write_setup(struct nfs_pgio_header *hdr, +@@ -5459,8 +5467,10 @@ static void nfs4_proc_write_setup(struct nfs_pgio_header *hdr, + hdr->args.bitmask = NULL; + hdr->res.fattr = NULL; + } else { +- hdr->args.bitmask = server->cache_consistency_bitmask; +- nfs4_bitmask_adjust(hdr->args.bitmask, hdr->inode, server, NULL); ++ nfs4_bitmask_set(hdr->args.bitmask_store, ++ server->cache_consistency_bitmask, ++ hdr->inode, server, NULL); ++ hdr->args.bitmask = hdr->args.bitmask_store; + } + + if (!hdr->pgio_done_cb) +@@ -6502,8 +6512,10 @@ static int _nfs4_proc_delegreturn(struct inode *inode, const struct cred *cred, + + data->args.fhandle = &data->fh; + data->args.stateid = &data->stateid; +- data->args.bitmask = server->cache_consistency_bitmask; +- nfs4_bitmask_adjust(data->args.bitmask, inode, server, NULL); ++ nfs4_bitmask_set(data->args.bitmask_store, ++ server->cache_consistency_bitmask, inode, server, ++ NULL); ++ data->args.bitmask = data->args.bitmask_store; + nfs_copy_fh(&data->fh, NFS_FH(inode)); + nfs4_stateid_copy(&data->stateid, stateid); + data->res.fattr = &data->fattr; +diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c +index 97447a64bad0b..886e50ed07c20 100644 +--- a/fs/nfsd/nfs4state.c ++++ 
b/fs/nfsd/nfs4state.c +@@ -4869,6 +4869,11 @@ static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp, + if (nf) + nfsd_file_put(nf); + ++ status = nfserrno(nfsd_open_break_lease(cur_fh->fh_dentry->d_inode, ++ access)); ++ if (status) ++ goto out_put_access; ++ + status = nfsd4_truncate(rqstp, cur_fh, open); + if (status) + goto out_put_access; +@@ -6849,11 +6854,20 @@ out: + static __be32 nfsd_test_lock(struct svc_rqst *rqstp, struct svc_fh *fhp, struct file_lock *lock) + { + struct nfsd_file *nf; +- __be32 err = nfsd_file_acquire(rqstp, fhp, NFSD_MAY_READ, &nf); +- if (!err) { +- err = nfserrno(vfs_test_lock(nf->nf_file, lock)); +- nfsd_file_put(nf); +- } ++ __be32 err; ++ ++ err = nfsd_file_acquire(rqstp, fhp, NFSD_MAY_READ, &nf); ++ if (err) ++ return err; ++ fh_lock(fhp); /* to block new leases till after test_lock: */ ++ err = nfserrno(nfsd_open_break_lease(fhp->fh_dentry->d_inode, ++ NFSD_MAY_READ)); ++ if (err) ++ goto out; ++ err = nfserrno(vfs_test_lock(nf->nf_file, lock)); ++out: ++ fh_unlock(fhp); ++ nfsd_file_put(nf); + return err; + } + +diff --git a/fs/proc/generic.c b/fs/proc/generic.c +index bc86aa87cc41a..5600da30e289b 100644 +--- a/fs/proc/generic.c ++++ b/fs/proc/generic.c +@@ -756,7 +756,7 @@ int remove_proc_subtree(const char *name, struct proc_dir_entry *parent) + while (1) { + next = pde_subdir_first(de); + if (next) { +- if (unlikely(pde_is_permanent(root))) { ++ if (unlikely(pde_is_permanent(next))) { + write_unlock(&proc_subdir_lock); + WARN(1, "removing permanent /proc entry '%s/%s'", + next->parent->name, next->name); +diff --git a/fs/squashfs/file.c b/fs/squashfs/file.c +index 7b1128398976e..89d492916deaf 100644 +--- a/fs/squashfs/file.c ++++ b/fs/squashfs/file.c +@@ -211,11 +211,11 @@ failure: + * If the skip factor is limited in this way then the file will use multiple + * slots. 
+ */ +-static inline int calculate_skip(int blocks) ++static inline int calculate_skip(u64 blocks) + { +- int skip = blocks / ((SQUASHFS_META_ENTRIES + 1) ++ u64 skip = blocks / ((SQUASHFS_META_ENTRIES + 1) + * SQUASHFS_META_INDEXES); +- return min(SQUASHFS_CACHED_BLKS - 1, skip + 1); ++ return min((u64) SQUASHFS_CACHED_BLKS - 1, skip + 1); + } + + +diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h +index f14adb8823381..cc7c3fda2aa6d 100644 +--- a/include/linux/cpuhotplug.h ++++ b/include/linux/cpuhotplug.h +@@ -135,6 +135,7 @@ enum cpuhp_state { + CPUHP_AP_RISCV_TIMER_STARTING, + CPUHP_AP_CLINT_TIMER_STARTING, + CPUHP_AP_CSKY_TIMER_STARTING, ++ CPUHP_AP_TI_GP_TIMER_STARTING, + CPUHP_AP_HYPERV_TIMER_STARTING, + CPUHP_AP_KVM_STARTING, + CPUHP_AP_KVM_ARM_VGIC_INIT_STARTING, +diff --git a/include/linux/elevator.h b/include/linux/elevator.h +index 1fe8e105b83bf..dcb2f9022c1df 100644 +--- a/include/linux/elevator.h ++++ b/include/linux/elevator.h +@@ -34,7 +34,7 @@ struct elevator_mq_ops { + void (*depth_updated)(struct blk_mq_hw_ctx *); + + bool (*allow_merge)(struct request_queue *, struct request *, struct bio *); +- bool (*bio_merge)(struct blk_mq_hw_ctx *, struct bio *, unsigned int); ++ bool (*bio_merge)(struct request_queue *, struct bio *, unsigned int); + int (*request_merge)(struct request_queue *q, struct request **, struct bio *); + void (*request_merged)(struct request_queue *, struct request *, enum elv_merge); + void (*requests_merged)(struct request_queue *, struct request *, struct request *); +diff --git a/include/linux/i2c.h b/include/linux/i2c.h +index 56622658b2158..a670ae129f4b9 100644 +--- a/include/linux/i2c.h ++++ b/include/linux/i2c.h +@@ -687,6 +687,8 @@ struct i2c_adapter_quirks { + #define I2C_AQ_NO_ZERO_LEN_READ BIT(5) + #define I2C_AQ_NO_ZERO_LEN_WRITE BIT(6) + #define I2C_AQ_NO_ZERO_LEN (I2C_AQ_NO_ZERO_LEN_READ | I2C_AQ_NO_ZERO_LEN_WRITE) ++/* adapter cannot do repeated START */ ++#define I2C_AQ_NO_REP_START BIT(7) + + /* + * i2c_adapter is the structure used to identify a physical i2c bus along +diff --git a/include/linux/mm.h b/include/linux/mm.h +index 8ba434287387b..6c1b29bb35636 100644 +--- a/include/linux/mm.h ++++ b/include/linux/mm.h +@@ -3170,5 +3170,37 @@ extern int sysctl_nr_trim_pages; + + void mem_dump_obj(void *object); + ++/** ++ * seal_check_future_write - Check for F_SEAL_FUTURE_WRITE flag and handle it ++ * @seals: the seals to check ++ * @vma: the vma to operate on ++ * ++ * Check whether F_SEAL_FUTURE_WRITE is set; if so, do proper check/handling on ++ * the vma flags. Return 0 if check pass, or <0 for errors. ++ */ ++static inline int seal_check_future_write(int seals, struct vm_area_struct *vma) ++{ ++ if (seals & F_SEAL_FUTURE_WRITE) { ++ /* ++ * New PROT_WRITE and MAP_SHARED mmaps are not allowed when ++ * "future write" seal active. ++ */ ++ if ((vma->vm_flags & VM_SHARED) && (vma->vm_flags & VM_WRITE)) ++ return -EPERM; ++ ++ /* ++ * Since an F_SEAL_FUTURE_WRITE sealed memfd can be mapped as ++ * MAP_SHARED and read-only, take care to not allow mprotect to ++ * revert protections on such mappings. Do this only for shared ++ * mappings. For private mappings, don't need to mask ++ * VM_MAYWRITE as we still want them to be COW-writable. 
++ */ ++ if (vma->vm_flags & VM_SHARED) ++ vma->vm_flags &= ~(VM_MAYWRITE); ++ } ++ ++ return 0; ++} ++ + #endif /* __KERNEL__ */ + #endif /* _LINUX_MM_H */ +diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h +index 6613b26a88946..5aacc1c10a45a 100644 +--- a/include/linux/mm_types.h ++++ b/include/linux/mm_types.h +@@ -97,10 +97,10 @@ struct page { + }; + struct { /* page_pool used by netstack */ + /** +- * @dma_addr: might require a 64-bit value even on ++ * @dma_addr: might require a 64-bit value on + * 32-bit architectures. + */ +- dma_addr_t dma_addr; ++ unsigned long dma_addr[2]; + }; + struct { /* slab, slob and slub */ + union { +diff --git a/include/linux/nfs_xdr.h b/include/linux/nfs_xdr.h +index 3327239fa2f9a..cc29dee508f74 100644 +--- a/include/linux/nfs_xdr.h ++++ b/include/linux/nfs_xdr.h +@@ -15,6 +15,8 @@ + #define NFS_DEF_FILE_IO_SIZE (4096U) + #define NFS_MIN_FILE_IO_SIZE (1024U) + ++#define NFS_BITMASK_SZ 3 ++ + struct nfs4_string { + unsigned int len; + char *data; +@@ -525,7 +527,8 @@ struct nfs_closeargs { + struct nfs_seqid * seqid; + fmode_t fmode; + u32 share_access; +- u32 * bitmask; ++ const u32 * bitmask; ++ u32 bitmask_store[NFS_BITMASK_SZ]; + struct nfs4_layoutreturn_args *lr_args; + }; + +@@ -608,7 +611,8 @@ struct nfs4_delegreturnargs { + struct nfs4_sequence_args seq_args; + const struct nfs_fh *fhandle; + const nfs4_stateid *stateid; +- u32 * bitmask; ++ const u32 *bitmask; ++ u32 bitmask_store[NFS_BITMASK_SZ]; + struct nfs4_layoutreturn_args *lr_args; + }; + +@@ -648,7 +652,8 @@ struct nfs_pgio_args { + union { + unsigned int replen; /* used by read */ + struct { +- u32 * bitmask; /* used by write */ ++ const u32 * bitmask; /* used by write */ ++ u32 bitmask_store[NFS_BITMASK_SZ]; /* used by write */ + enum nfs3_stable_how stable; /* used by write */ + }; + }; +diff --git a/include/linux/phy.h b/include/linux/phy.h +index 1a12e4436b5b0..8644b097dea30 100644 +--- a/include/linux/phy.h ++++ b/include/linux/phy.h +@@ -493,6 +493,7 @@ struct macsec_ops; + * @loopback_enabled: Set true if this PHY has been loopbacked successfully. + * @downshifted_rate: Set true if link speed has been downshifted. + * @is_on_sfp_module: Set true if PHY is located on an SFP module. ++ * @mac_managed_pm: Set true if MAC driver takes care of suspending/resuming PHY + * @state: State of the PHY for management purposes + * @dev_flags: Device-specific flags used by the PHY driver.
+ * @irq: IRQ number of the PHY's interrupt (-1 if none) +@@ -567,6 +568,7 @@ struct phy_device { + unsigned loopback_enabled:1; + unsigned downshifted_rate:1; + unsigned is_on_sfp_module:1; ++ unsigned mac_managed_pm:1; + + unsigned autoneg:1; + /* The most recently read link state */ +diff --git a/include/linux/pm.h b/include/linux/pm.h +index 482313a8ccfc1..628718697679a 100644 +--- a/include/linux/pm.h ++++ b/include/linux/pm.h +@@ -602,6 +602,7 @@ struct dev_pm_info { + unsigned int idle_notification:1; + unsigned int request_pending:1; + unsigned int deferred_resume:1; ++ unsigned int needs_force_resume:1; + unsigned int runtime_auto:1; + bool ignore_children:1; + unsigned int no_callbacks:1; +diff --git a/include/net/page_pool.h b/include/net/page_pool.h +index b5b1953053468..e05744b9a1bc2 100644 +--- a/include/net/page_pool.h ++++ b/include/net/page_pool.h +@@ -198,7 +198,17 @@ static inline void page_pool_recycle_direct(struct page_pool *pool, + + static inline dma_addr_t page_pool_get_dma_addr(struct page *page) + { +- return page->dma_addr; ++ dma_addr_t ret = page->dma_addr[0]; ++ if (sizeof(dma_addr_t) > sizeof(unsigned long)) ++ ret |= (dma_addr_t)page->dma_addr[1] << 16 << 16; ++ return ret; ++} ++ ++static inline void page_pool_set_dma_addr(struct page *page, dma_addr_t addr) ++{ ++ page->dma_addr[0] = addr; ++ if (sizeof(dma_addr_t) > sizeof(unsigned long)) ++ page->dma_addr[1] = upper_32_bits(addr); + } + + static inline bool is_page_pool_compiled_in(void) +diff --git a/include/trace/events/sunrpc.h b/include/trace/events/sunrpc.h +index 036eb1f5c1335..2f01314de73a6 100644 +--- a/include/trace/events/sunrpc.h ++++ b/include/trace/events/sunrpc.h +@@ -1141,7 +1141,6 @@ DECLARE_EVENT_CLASS(xprt_writelock_event, + + DEFINE_WRITELOCK_EVENT(reserve_xprt); + DEFINE_WRITELOCK_EVENT(release_xprt); +-DEFINE_WRITELOCK_EVENT(transmit_queued); + + DECLARE_EVENT_CLASS(xprt_cong_event, + TP_PROTO( +diff --git a/include/uapi/linux/netfilter/xt_SECMARK.h b/include/uapi/linux/netfilter/xt_SECMARK.h +index 1f2a708413f5d..beb2cadba8a9c 100644 +--- a/include/uapi/linux/netfilter/xt_SECMARK.h ++++ b/include/uapi/linux/netfilter/xt_SECMARK.h +@@ -20,4 +20,10 @@ struct xt_secmark_target_info { + char secctx[SECMARK_SECCTX_MAX]; + }; + ++struct xt_secmark_target_info_v1 { ++ __u8 mode; ++ char secctx[SECMARK_SECCTX_MAX]; ++ __u32 secid; ++}; ++ + #endif /*_XT_SECMARK_H_target */ +diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c +index c10e855a03bc1..fe4c01c14ab2c 100644 +--- a/kernel/dma/swiotlb.c ++++ b/kernel/dma/swiotlb.c +@@ -608,7 +608,8 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr, + enum dma_data_direction dir, unsigned long attrs) + { + unsigned int offset = swiotlb_align_offset(dev, orig_addr); +- unsigned int index, i; ++ unsigned int i; ++ int index; + phys_addr_t tlb_addr; + + if (no_iotlb_memory) +diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c +index 5c3447cf7ad58..33400ff051a84 100644 +--- a/kernel/kexec_file.c ++++ b/kernel/kexec_file.c +@@ -740,8 +740,10 @@ static int kexec_calculate_store_digests(struct kimage *image) + + sha_region_sz = KEXEC_SEGMENT_MAX * sizeof(struct kexec_sha_region); + sha_regions = vzalloc(sha_region_sz); +- if (!sha_regions) ++ if (!sha_regions) { ++ ret = -ENOMEM; + goto out_free_desc; ++ } + + desc->tfm = tfm; + +diff --git a/kernel/resource.c b/kernel/resource.c +index 627e61b0c1241..16e0c7e8ed241 100644 +--- a/kernel/resource.c ++++ b/kernel/resource.c +@@ -457,7 +457,7 @@ int walk_system_ram_res(u64 
start, u64 end, void *arg, + { + unsigned long flags = IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY; + +- return __walk_iomem_res_desc(start, end, flags, IORES_DESC_NONE, true, ++ return __walk_iomem_res_desc(start, end, flags, IORES_DESC_NONE, false, + arg, func); + } + +@@ -470,7 +470,7 @@ int walk_mem_res(u64 start, u64 end, void *arg, + { + unsigned long flags = IORESOURCE_MEM | IORESOURCE_BUSY; + +- return __walk_iomem_res_desc(start, end, flags, IORES_DESC_NONE, true, ++ return __walk_iomem_res_desc(start, end, flags, IORES_DESC_NONE, false, + arg, func); + } + +diff --git a/kernel/sched/core.c b/kernel/sched/core.c +index 17ad829a114c8..814200541f8f5 100644 +--- a/kernel/sched/core.c ++++ b/kernel/sched/core.c +@@ -928,7 +928,7 @@ DEFINE_STATIC_KEY_FALSE(sched_uclamp_used); + + static inline unsigned int uclamp_bucket_id(unsigned int clamp_value) + { +- return clamp_value / UCLAMP_BUCKET_DELTA; ++ return min_t(unsigned int, clamp_value / UCLAMP_BUCKET_DELTA, UCLAMP_BUCKETS - 1); + } + + static inline unsigned int uclamp_none(enum uclamp_id clamp_id) +diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c +index d078767f677f4..a073a839cd065 100644 +--- a/kernel/sched/fair.c ++++ b/kernel/sched/fair.c +@@ -6212,7 +6212,7 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool + } + + if (has_idle_core) +- set_idle_cores(this, false); ++ set_idle_cores(target, false); + + if (sched_feat(SIS_PROP) && !has_idle_core) { + time = cpu_clock(this) - time; +@@ -10915,16 +10915,22 @@ static void propagate_entity_cfs_rq(struct sched_entity *se) + { + struct cfs_rq *cfs_rq; + ++ list_add_leaf_cfs_rq(cfs_rq_of(se)); ++ + /* Start to propagate at parent */ + se = se->parent; + + for_each_sched_entity(se) { + cfs_rq = cfs_rq_of(se); + +- if (cfs_rq_throttled(cfs_rq)) +- break; ++ if (!cfs_rq_throttled(cfs_rq)){ ++ update_load_avg(cfs_rq, se, UPDATE_TG); ++ list_add_leaf_cfs_rq(cfs_rq); ++ continue; ++ } + +- update_load_avg(cfs_rq, se, UPDATE_TG); ++ if (list_add_leaf_cfs_rq(cfs_rq)) ++ break; + } + } + #else +diff --git a/kernel/time/alarmtimer.c b/kernel/time/alarmtimer.c +index 4d94e2b5499d8..a7924fedf479e 100644 +--- a/kernel/time/alarmtimer.c ++++ b/kernel/time/alarmtimer.c +@@ -92,7 +92,7 @@ static int alarmtimer_rtc_add_device(struct device *dev, + if (rtcdev) + return -EBUSY; + +- if (!rtc->ops->set_alarm) ++ if (!test_bit(RTC_FEATURE_ALARM, rtc->features)) + return -1; + if (!device_may_wakeup(rtc->dev.parent)) + return -1; +diff --git a/kernel/watchdog.c b/kernel/watchdog.c +index 107bc38b19450..8cf0678378d29 100644 +--- a/kernel/watchdog.c ++++ b/kernel/watchdog.c +@@ -154,7 +154,11 @@ static void lockup_detector_update_enable(void) + + #ifdef CONFIG_SOFTLOCKUP_DETECTOR + +-#define SOFTLOCKUP_RESET ULONG_MAX ++/* ++ * Delay the soflockup report when running a known slow code. ++ * It does _not_ affect the timestamp of the last successdul reschedule. ++ */ ++#define SOFTLOCKUP_DELAY_REPORT ULONG_MAX + + #ifdef CONFIG_SMP + int __read_mostly sysctl_softlockup_all_cpu_backtrace; +@@ -169,10 +173,12 @@ unsigned int __read_mostly softlockup_panic = + static bool softlockup_initialized __read_mostly; + static u64 __read_mostly sample_period; + ++/* Timestamp taken after the last successful reschedule. */ + static DEFINE_PER_CPU(unsigned long, watchdog_touch_ts); ++/* Timestamp of the last softlockup report. 
*/ ++static DEFINE_PER_CPU(unsigned long, watchdog_report_ts); + static DEFINE_PER_CPU(struct hrtimer, watchdog_hrtimer); + static DEFINE_PER_CPU(bool, softlockup_touch_sync); +-static DEFINE_PER_CPU(bool, soft_watchdog_warn); + static DEFINE_PER_CPU(unsigned long, hrtimer_interrupts); + static DEFINE_PER_CPU(unsigned long, hrtimer_interrupts_saved); + static unsigned long soft_lockup_nmi_warn; +@@ -235,10 +241,16 @@ static void set_sample_period(void) + watchdog_update_hrtimer_threshold(sample_period); + } + ++static void update_report_ts(void) ++{ ++ __this_cpu_write(watchdog_report_ts, get_timestamp()); ++} ++ + /* Commands for resetting the watchdog */ +-static void __touch_watchdog(void) ++static void update_touch_ts(void) + { + __this_cpu_write(watchdog_touch_ts, get_timestamp()); ++ update_report_ts(); + } + + /** +@@ -252,10 +264,10 @@ static void __touch_watchdog(void) + notrace void touch_softlockup_watchdog_sched(void) + { + /* +- * Preemption can be enabled. It doesn't matter which CPU's timestamp +- * gets zeroed here, so use the raw_ operation. ++ * Preemption can be enabled. It doesn't matter which CPU's watchdog ++ * report period gets restarted here, so use the raw_ operation. + */ +- raw_cpu_write(watchdog_touch_ts, SOFTLOCKUP_RESET); ++ raw_cpu_write(watchdog_report_ts, SOFTLOCKUP_DELAY_REPORT); + } + + notrace void touch_softlockup_watchdog(void) +@@ -279,7 +291,7 @@ void touch_all_softlockup_watchdogs(void) + * the softlockup check. + */ + for_each_cpu(cpu, &watchdog_allowed_mask) { +- per_cpu(watchdog_touch_ts, cpu) = SOFTLOCKUP_RESET; ++ per_cpu(watchdog_report_ts, cpu) = SOFTLOCKUP_DELAY_REPORT; + wq_watchdog_touch(cpu); + } + } +@@ -287,16 +299,16 @@ void touch_all_softlockup_watchdogs(void) + void touch_softlockup_watchdog_sync(void) + { + __this_cpu_write(softlockup_touch_sync, true); +- __this_cpu_write(watchdog_touch_ts, SOFTLOCKUP_RESET); ++ __this_cpu_write(watchdog_report_ts, SOFTLOCKUP_DELAY_REPORT); + } + +-static int is_softlockup(unsigned long touch_ts) ++static int is_softlockup(unsigned long touch_ts, unsigned long period_ts) + { + unsigned long now = get_timestamp(); + + if ((watchdog_enabled & SOFT_WATCHDOG_ENABLED) && watchdog_thresh){ + /* Warn about unreasonable delays. */ +- if (time_after(now, touch_ts + get_softlockup_thresh())) ++ if (time_after(now, period_ts + get_softlockup_thresh())) + return now - touch_ts; + } + return 0; +@@ -332,7 +344,7 @@ static DEFINE_PER_CPU(struct cpu_stop_work, softlockup_stop_work); + */ + static int softlockup_fn(void *data) + { +- __touch_watchdog(); ++ update_touch_ts(); + complete(this_cpu_ptr(&softlockup_completion)); + + return 0; +@@ -342,6 +354,7 @@ static int softlockup_fn(void *data) + static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer) + { + unsigned long touch_ts = __this_cpu_read(watchdog_touch_ts); ++ unsigned long period_ts = __this_cpu_read(watchdog_report_ts); + struct pt_regs *regs = get_irq_regs(); + int duration; + int softlockup_all_cpu_backtrace = sysctl_softlockup_all_cpu_backtrace; +@@ -363,7 +376,8 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer) + /* .. and repeat */ + hrtimer_forward_now(hrtimer, ns_to_ktime(sample_period)); + +- if (touch_ts == SOFTLOCKUP_RESET) { ++ /* Reset the interval when touched externally by a known slow code. 
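The watchdog rework above splits what used to be a single per-CPU timestamp into two: watchdog_touch_ts records the last successful reschedule and drives the reported stall duration, while watchdog_report_ts only gates when the next report may fire. A minimal userspace sketch of that two-timestamp logic (plain C, not kernel code; the threshold constant is illustrative):

#include <stdint.h>
#include <stdio.h>

#define SOFTLOCKUP_DELAY_REPORT UINT64_MAX
#define SOFTLOCKUP_THRESH       20

static uint64_t touch_ts;	/* last successful reschedule */
static uint64_t report_ts;	/* start of the current report period */

/* The watchdog task ran: both timestamps restart. */
static void update_touch_ts(uint64_t now)
{
	touch_ts = now;
	report_ts = now;
}

/* Known-slow code announces itself: only the report is delayed;
 * the reschedule timestamp is deliberately left alone. */
static void touch_watchdog(void)
{
	report_ts = SOFTLOCKUP_DELAY_REPORT;
}

/* Mirrors is_softlockup(): warn based on the report period, but
 * report the stall measured from the last real reschedule. */
static uint64_t is_softlockup(uint64_t now)
{
	if (report_ts != SOFTLOCKUP_DELAY_REPORT &&
	    now > report_ts + SOFTLOCKUP_THRESH)
		return now - touch_ts;
	return 0;
}

int main(void)
{
	uint64_t d;

	update_touch_ts(0);
	report_ts = 30;		/* a delayed report period was restarted */
	d = is_softlockup(60);
	if (d)
		printf("stuck for %llus\n", (unsigned long long)d);
	return 0;
}

The payoff is visible in the sketch: repeated touches push out the report without ever hiding how long the CPU has actually gone without rescheduling.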
*/ ++ if (period_ts == SOFTLOCKUP_DELAY_REPORT) { + if (unlikely(__this_cpu_read(softlockup_touch_sync))) { + /* + * If the time stamp was touched atomically +@@ -375,7 +389,8 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer) + + /* Clear the guest paused flag on watchdog reset */ + kvm_check_and_clear_guest_paused(); +- __touch_watchdog(); ++ update_report_ts(); ++ + return HRTIMER_RESTART; + } + +@@ -385,7 +400,7 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer) + * indicate it is getting cpu time. If it hasn't then + * this is a good indication some task is hogging the cpu + */ +- duration = is_softlockup(touch_ts); ++ duration = is_softlockup(touch_ts, period_ts); + if (unlikely(duration)) { + /* + * If a virtual machine is stopped by the host it can look to +@@ -395,21 +410,18 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer) + if (kvm_check_and_clear_guest_paused()) + return HRTIMER_RESTART; + +- /* only warn once */ +- if (__this_cpu_read(soft_watchdog_warn) == true) +- return HRTIMER_RESTART; +- ++ /* ++ * Prevent multiple soft-lockup reports if one cpu is already ++ * engaged in dumping all cpu back traces. ++ */ + if (softlockup_all_cpu_backtrace) { +- /* Prevent multiple soft-lockup reports if one cpu is already +- * engaged in dumping cpu back traces +- */ +- if (test_and_set_bit(0, &soft_lockup_nmi_warn)) { +- /* Someone else will report us. Let's give up */ +- __this_cpu_write(soft_watchdog_warn, true); ++ if (test_and_set_bit_lock(0, &soft_lockup_nmi_warn)) + return HRTIMER_RESTART; +- } + } + ++ /* Start period for the next softlockup warning. */ ++ update_report_ts(); ++ + pr_emerg("BUG: soft lockup - CPU#%d stuck for %us! [%s:%d]\n", + smp_processor_id(), duration, + current->comm, task_pid_nr(current)); +@@ -421,22 +433,14 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer) + dump_stack(); + + if (softlockup_all_cpu_backtrace) { +- /* Avoid generating two back traces for current +- * given that one is already made above +- */ + trigger_allbutself_cpu_backtrace(); +- +- clear_bit(0, &soft_lockup_nmi_warn); +- /* Barrier to sync with other cpus */ +- smp_mb__after_atomic(); ++ clear_bit_unlock(0, &soft_lockup_nmi_warn); + } + + add_taint(TAINT_SOFTLOCKUP, LOCKDEP_STILL_OK); + if (softlockup_panic) + panic("softlockup: hung tasks"); +- __this_cpu_write(soft_watchdog_warn, true); +- } else +- __this_cpu_write(soft_watchdog_warn, false); ++ } + + return HRTIMER_RESTART; + } +@@ -461,7 +465,7 @@ static void watchdog_enable(unsigned int cpu) + HRTIMER_MODE_REL_PINNED_HARD); + + /* Initialize timestamp */ +- __touch_watchdog(); ++ update_touch_ts(); + /* Enable the perf event */ + if (watchdog_enabled & NMI_WATCHDOG_ENABLED) + watchdog_nmi_enable(cpu); +diff --git a/lib/Kconfig.kfence b/lib/Kconfig.kfence +index 78f50ccb3b45f..e641add339475 100644 +--- a/lib/Kconfig.kfence ++++ b/lib/Kconfig.kfence +@@ -7,6 +7,7 @@ menuconfig KFENCE + bool "KFENCE: low-overhead sampling-based memory safety error detector" + depends on HAVE_ARCH_KFENCE && (SLAB || SLUB) + select STACKTRACE ++ select IRQ_WORK + help + KFENCE is a low-overhead sampling-based detector of heap out-of-bounds + access, use-after-free, and invalid-free errors. 
KFENCE is designed +diff --git a/lib/kobject_uevent.c b/lib/kobject_uevent.c +index 7998affa45d49..c87d5b6a8a55a 100644 +--- a/lib/kobject_uevent.c ++++ b/lib/kobject_uevent.c +@@ -251,12 +251,13 @@ static int kobj_usermode_filter(struct kobject *kobj) + + static int init_uevent_argv(struct kobj_uevent_env *env, const char *subsystem) + { ++ int buffer_size = sizeof(env->buf) - env->buflen; + int len; + +- len = strlcpy(&env->buf[env->buflen], subsystem, +- sizeof(env->buf) - env->buflen); +- if (len >= (sizeof(env->buf) - env->buflen)) { +- WARN(1, KERN_ERR "init_uevent_argv: buffer size too small\n"); ++ len = strlcpy(&env->buf[env->buflen], subsystem, buffer_size); ++ if (len >= buffer_size) { ++ pr_warn("init_uevent_argv: buffer size of %d too small, needed %d\n", ++ buffer_size, len); + return -ENOMEM; + } + +diff --git a/lib/nlattr.c b/lib/nlattr.c +index 5b6116e81f9f2..1d051ef66afe5 100644 +--- a/lib/nlattr.c ++++ b/lib/nlattr.c +@@ -828,7 +828,7 @@ int nla_strcmp(const struct nlattr *nla, const char *str) + int attrlen = nla_len(nla); + int d; + +- if (attrlen > 0 && buf[attrlen - 1] == '\0') ++ while (attrlen > 0 && buf[attrlen - 1] == '\0') + attrlen--; + + d = attrlen - len; +diff --git a/lib/test_kasan.c b/lib/test_kasan.c +index e5647d147b350..be69c3aa615a7 100644 +--- a/lib/test_kasan.c ++++ b/lib/test_kasan.c +@@ -646,8 +646,20 @@ static char global_array[10]; + + static void kasan_global_oob(struct kunit *test) + { +- volatile int i = 3; +- char *p = &global_array[ARRAY_SIZE(global_array) + i]; ++ /* ++ * Deliberate out-of-bounds access. To prevent CONFIG_UBSAN_LOCAL_BOUNDS ++ * from failing here and panicing the kernel, access the array via a ++ * volatile pointer, which will prevent the compiler from being able to ++ * determine the array bounds. ++ * ++ * This access uses a volatile pointer to char (char *volatile) rather ++ * than the more conventional pointer to volatile char (volatile char *) ++ * because we want to prevent the compiler from making inferences about ++ * the pointer itself (i.e. its array bounds), not the data that it ++ * refers to. ++ */ ++ char *volatile array = global_array; ++ char *p = &array[ARRAY_SIZE(global_array) + 3]; + + /* Only generic mode instruments globals. */ + KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_GENERIC); +@@ -695,8 +707,9 @@ static void ksize_uaf(struct kunit *test) + static void kasan_stack_oob(struct kunit *test) + { + char stack_array[10]; +- volatile int i = OOB_TAG_OFF; +- char *p = &stack_array[ARRAY_SIZE(stack_array) + i]; ++ /* See comment in kasan_global_oob. */ ++ char *volatile array = stack_array; ++ char *p = &array[ARRAY_SIZE(stack_array) + OOB_TAG_OFF]; + + KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_STACK); + +@@ -707,7 +720,9 @@ static void kasan_alloca_oob_left(struct kunit *test) + { + volatile int i = 10; + char alloca_array[i]; +- char *p = alloca_array - 1; ++ /* See comment in kasan_global_oob. */ ++ char *volatile array = alloca_array; ++ char *p = array - 1; + + /* Only generic mode instruments dynamic allocas. */ + KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_GENERIC); +@@ -720,7 +735,9 @@ static void kasan_alloca_oob_right(struct kunit *test) + { + volatile int i = 10; + char alloca_array[i]; +- char *p = alloca_array + i; ++ /* See comment in kasan_global_oob. */ ++ char *volatile array = alloca_array; ++ char *p = array + i; + + /* Only generic mode instruments dynamic allocas. 
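The long comment in the kasan_global_oob hunk above leans on a C distinction that is easy to misread: in `char *volatile` the qualifier applies to the pointer object itself, while in `volatile char *` it applies to the bytes pointed at. A tiny standalone illustration (no deliberate out-of-bounds access here, unlike the test):

#include <stddef.h>

static char buf[10];

int main(void)
{
	/* The pointer is volatile: the compiler must treat its value as
	 * unknown on every use, so it cannot prove where p points and
	 * cannot fold or reject an index on buf at compile time. */
	char *volatile p = buf;

	/* The pointee is volatile: every access through q is performed,
	 * but the compiler still knows exactly where q points. */
	volatile char *q = buf;

	q[0] = 1;			/* never optimized away */
	return (int)(p - buf);		/* forces a real read of p */
}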
*/ + KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_GENERIC); +diff --git a/mm/gup.c b/mm/gup.c +index ef7d2da9f03ff..333f5dfd89423 100644 +--- a/mm/gup.c ++++ b/mm/gup.c +@@ -1551,54 +1551,60 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm, + struct vm_area_struct **vmas, + unsigned int gup_flags) + { +- unsigned long i; +- unsigned long step; +- bool drain_allow = true; +- bool migrate_allow = true; ++ unsigned long i, isolation_error_count; ++ bool drain_allow; + LIST_HEAD(cma_page_list); + long ret = nr_pages; ++ struct page *prev_head, *head; + struct migration_target_control mtc = { + .nid = NUMA_NO_NODE, + .gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_NOWARN, + }; + + check_again: +- for (i = 0; i < nr_pages;) { +- +- struct page *head = compound_head(pages[i]); +- +- /* +- * gup may start from a tail page. Advance step by the left +- * part. +- */ +- step = compound_nr(head) - (pages[i] - head); ++ prev_head = NULL; ++ isolation_error_count = 0; ++ drain_allow = true; ++ for (i = 0; i < nr_pages; i++) { ++ head = compound_head(pages[i]); ++ if (head == prev_head) ++ continue; ++ prev_head = head; + /* + * If we get a page from the CMA zone, since we are going to + * be pinning these entries, we might as well move them out + * of the CMA zone if possible. + */ + if (is_migrate_cma_page(head)) { +- if (PageHuge(head)) +- isolate_huge_page(head, &cma_page_list); +- else { ++ if (PageHuge(head)) { ++ if (!isolate_huge_page(head, &cma_page_list)) ++ isolation_error_count++; ++ } else { + if (!PageLRU(head) && drain_allow) { + lru_add_drain_all(); + drain_allow = false; + } + +- if (!isolate_lru_page(head)) { +- list_add_tail(&head->lru, &cma_page_list); +- mod_node_page_state(page_pgdat(head), +- NR_ISOLATED_ANON + +- page_is_file_lru(head), +- thp_nr_pages(head)); ++ if (isolate_lru_page(head)) { ++ isolation_error_count++; ++ continue; + } ++ list_add_tail(&head->lru, &cma_page_list); ++ mod_node_page_state(page_pgdat(head), ++ NR_ISOLATED_ANON + ++ page_is_file_lru(head), ++ thp_nr_pages(head)); + } + } +- +- i += step; + } + ++ /* ++ * If list is empty, and no isolation errors, means that all pages are ++ * in the correct zone. ++ */ ++ if (list_empty(&cma_page_list) && !isolation_error_count) ++ return ret; ++ + if (!list_empty(&cma_page_list)) { + /* + * drop the above get_user_pages reference. +@@ -1609,34 +1615,28 @@ check_again: + for (i = 0; i < nr_pages; i++) + put_page(pages[i]); + +- if (migrate_pages(&cma_page_list, alloc_migration_target, NULL, +- (unsigned long)&mtc, MIGRATE_SYNC, MR_CONTIG_RANGE)) { +- /* +- * some of the pages failed migration. Do get_user_pages +- * without migration. +- */ +- migrate_allow = false; +- ++ ret = migrate_pages(&cma_page_list, alloc_migration_target, ++ NULL, (unsigned long)&mtc, MIGRATE_SYNC, ++ MR_CONTIG_RANGE); ++ if (ret) { + if (!list_empty(&cma_page_list)) + putback_movable_pages(&cma_page_list); ++ return ret > 0 ? -ENOMEM : ret; + } +- /* +- * We did migrate all the pages, Try to get the page references +- * again migrating any new CMA pages which we failed to isolate +- * earlier. 
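The rewritten check_and_migrate_cma_pages() loop above drops the old compound_nr()-based stride in favor of comparing each page's compound head with the previous one, so tail pages are simply skipped. The idiom works only because GUP fills the array in order, so tails of the same compound page are adjacent; a small sketch of the pattern with mock types (not kernel code):

#include <stdio.h>

struct page { struct page *head; int id; };

/* Visit each distinct compound head once; relies on tail pages of the
 * same compound page appearing consecutively, as in the GUP array. */
static void for_each_head(struct page **pages, int n)
{
	struct page *prev_head = NULL;
	int i;

	for (i = 0; i < n; i++) {
		struct page *head = pages[i]->head;

		if (head == prev_head)
			continue;	/* another tail of the same page */
		prev_head = head;
		printf("head %d\n", head->id);
	}
}

int main(void)
{
	struct page h0 = { &h0, 0 }, t1 = { &h0, 1 }, h2 = { &h2, 2 };
	struct page *pages[] = { &h0, &t1, &h2 };

	for_each_head(pages, 3);	/* prints head 0, head 2 */
	return 0;
}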
+- */ +- ret = __get_user_pages_locked(mm, start, nr_pages, +- pages, vmas, NULL, +- gup_flags); +- +- if ((ret > 0) && migrate_allow) { +- nr_pages = ret; +- drain_allow = true; +- goto check_again; +- } ++ ++ /* We unpinned pages before migration, pin them again */ ++ ret = __get_user_pages_locked(mm, start, nr_pages, pages, vmas, ++ NULL, gup_flags); ++ if (ret <= 0) ++ return ret; ++ nr_pages = ret; + } + +- return ret; ++ /* ++ * check again because pages were unpinned, and we also might have ++ * had isolation errors and need more pages to migrate. ++ */ ++ goto check_again; + } + #else + static long check_and_migrate_cma_pages(struct mm_struct *mm, +diff --git a/mm/hugetlb.c b/mm/hugetlb.c +index a86a58ef132d5..96b722af092e7 100644 +--- a/mm/hugetlb.c ++++ b/mm/hugetlb.c +@@ -743,13 +743,20 @@ void hugetlb_fix_reserve_counts(struct inode *inode) + { + struct hugepage_subpool *spool = subpool_inode(inode); + long rsv_adjust; ++ bool reserved = false; + + rsv_adjust = hugepage_subpool_get_pages(spool, 1); +- if (rsv_adjust) { ++ if (rsv_adjust > 0) { + struct hstate *h = hstate_inode(inode); + +- hugetlb_acct_memory(h, 1); ++ if (!hugetlb_acct_memory(h, 1)) ++ reserved = true; ++ } else if (!rsv_adjust) { ++ reserved = true; + } ++ ++ if (!reserved) ++ pr_warn("hugetlb: Huge Page Reserved count may go negative.\n"); + } + + /* +@@ -3898,6 +3905,7 @@ again: + * See Documentation/vm/mmu_notifier.rst + */ + huge_ptep_set_wrprotect(src, addr, src_pte); ++ entry = huge_pte_wrprotect(entry); + } + + page_dup_rmap(ptepage, true); +diff --git a/mm/kfence/core.c b/mm/kfence/core.c +index d53c91f881a41..f0be2c5038b5d 100644 +--- a/mm/kfence/core.c ++++ b/mm/kfence/core.c +@@ -10,6 +10,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -586,6 +587,17 @@ late_initcall(kfence_debugfs_init); + + /* === Allocation Gate Timer ================================================ */ + ++#ifdef CONFIG_KFENCE_STATIC_KEYS ++/* Wait queue to wake up allocation-gate timer task. */ ++static DECLARE_WAIT_QUEUE_HEAD(allocation_wait); ++ ++static void wake_up_kfence_timer(struct irq_work *work) ++{ ++ wake_up(&allocation_wait); ++} ++static DEFINE_IRQ_WORK(wake_up_kfence_timer_work, wake_up_kfence_timer); ++#endif ++ + /* + * Set up delayed work, which will enable and disable the static key. We need to + * use a work queue (rather than a simple timer), since enabling and disabling a +@@ -603,25 +615,13 @@ static void toggle_allocation_gate(struct work_struct *work) + if (!READ_ONCE(kfence_enabled)) + return; + +- /* Enable static key, and await allocation to happen. */ + atomic_set(&kfence_allocation_gate, 0); + #ifdef CONFIG_KFENCE_STATIC_KEYS ++ /* Enable static key, and await allocation to happen. */ + static_branch_enable(&kfence_allocation_key); +- /* +- * Await an allocation. Timeout after 1 second, in case the kernel stops +- * doing allocations, to avoid stalling this worker task for too long. +- */ +- { +- unsigned long end_wait = jiffies + HZ; +- +- do { +- set_current_state(TASK_UNINTERRUPTIBLE); +- if (atomic_read(&kfence_allocation_gate) != 0) +- break; +- schedule_timeout(1); +- } while (time_before(jiffies, end_wait)); +- __set_current_state(TASK_RUNNING); +- } ++ ++ wait_event_timeout(allocation_wait, atomic_read(&kfence_allocation_gate), HZ); ++ + /* Disable static key and reset timer. 
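The KFENCE hunk above replaces a schedule_timeout() polling loop with a proper waitqueue: the timer worker sleeps until either an allocation passes the gate or a one-second timeout expires, and the allocation path wakes it. A rough userspace analogue using a condition variable (pthreads; the one-second deadline matches the HZ timeout above, everything else is illustrative):

#include <pthread.h>
#include <stdbool.h>
#include <time.h>

static pthread_mutex_t gate_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t allocation_wait = PTHREAD_COND_INITIALIZER;
static bool allocation_seen;

/* Analogue of toggle_allocation_gate(): sleep until an allocation
 * happens instead of polling, but give up after one second in case
 * allocations have stopped entirely. */
static void await_allocation(void)
{
	struct timespec deadline;

	clock_gettime(CLOCK_REALTIME, &deadline);
	deadline.tv_sec += 1;

	pthread_mutex_lock(&gate_lock);
	while (!allocation_seen &&
	       pthread_cond_timedwait(&allocation_wait, &gate_lock,
				      &deadline) == 0)
		;
	allocation_seen = false;
	pthread_mutex_unlock(&gate_lock);
}

/* Analogue of the __kfence_alloc() side.  In the kernel the wake-up is
 * bounced through an irq_work because wake_up() can deadlock when the
 * allocation happens from within timer code. */
static void note_allocation(void)
{
	pthread_mutex_lock(&gate_lock);
	allocation_seen = true;
	pthread_cond_signal(&allocation_wait);
	pthread_mutex_unlock(&gate_lock);
}

int main(void)
{
	note_allocation();
	await_allocation();	/* returns immediately: gate already open */
	return 0;
}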
*/ + static_branch_disable(&kfence_allocation_key); + #endif +@@ -728,6 +728,19 @@ void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags) + */ + if (atomic_read(&kfence_allocation_gate) || atomic_inc_return(&kfence_allocation_gate) > 1) + return NULL; ++#ifdef CONFIG_KFENCE_STATIC_KEYS ++ /* ++ * waitqueue_active() is fully ordered after the update of ++ * kfence_allocation_gate per atomic_inc_return(). ++ */ ++ if (waitqueue_active(&allocation_wait)) { ++ /* ++ * Calling wake_up() here may deadlock when allocations happen ++ * from within timer code. Use an irq_work to defer it. ++ */ ++ irq_work_queue(&wake_up_kfence_timer_work); ++ } ++#endif + + if (!READ_ONCE(kfence_enabled)) + return NULL; +diff --git a/mm/khugepaged.c b/mm/khugepaged.c +index a7d6cb912b051..2680d5ffee7f2 100644 +--- a/mm/khugepaged.c ++++ b/mm/khugepaged.c +@@ -716,17 +716,17 @@ next: + if (pte_write(pteval)) + writable = true; + } +- if (likely(writable)) { +- if (likely(referenced)) { +- result = SCAN_SUCCEED; +- trace_mm_collapse_huge_page_isolate(page, none_or_zero, +- referenced, writable, result); +- return 1; +- } +- } else { ++ ++ if (unlikely(!writable)) { + result = SCAN_PAGE_RO; ++ } else if (unlikely(!referenced)) { ++ result = SCAN_LACK_REFERENCED_PAGE; ++ } else { ++ result = SCAN_SUCCEED; ++ trace_mm_collapse_huge_page_isolate(page, none_or_zero, ++ referenced, writable, result); ++ return 1; + } +- + out: + release_pte_pages(pte, _pte, compound_pagelist); + trace_mm_collapse_huge_page_isolate(page, none_or_zero, +diff --git a/mm/ksm.c b/mm/ksm.c +index 9694ee2c71de5..b32391ccf6d57 100644 +--- a/mm/ksm.c ++++ b/mm/ksm.c +@@ -794,6 +794,7 @@ static void remove_rmap_item_from_tree(struct rmap_item *rmap_item) + stable_node->rmap_hlist_len--; + + put_anon_vma(rmap_item->anon_vma); ++ rmap_item->head = NULL; + rmap_item->address &= PAGE_MASK; + + } else if (rmap_item->address & UNSTABLE_FLAG) { +diff --git a/mm/migrate.c b/mm/migrate.c +index 62b81d5257aaa..773622cffe779 100644 +--- a/mm/migrate.c ++++ b/mm/migrate.c +@@ -2973,6 +2973,13 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate, + + swp_entry = make_device_private_entry(page, vma->vm_flags & VM_WRITE); + entry = swp_entry_to_pte(swp_entry); ++ } else { ++ /* ++ * For now we only support migrating to un-addressable ++ * device memory. ++ */ ++ pr_warn_once("Unsupported ZONE_DEVICE page type.\n"); ++ goto abort; + } + } else { + entry = mk_pte(page, vma->vm_page_prot); +diff --git a/mm/shmem.c b/mm/shmem.c +index b2db4ed0fbc7c..6e99a4ad6e1f3 100644 +--- a/mm/shmem.c ++++ b/mm/shmem.c +@@ -2258,25 +2258,11 @@ out_nomem: + static int shmem_mmap(struct file *file, struct vm_area_struct *vma) + { + struct shmem_inode_info *info = SHMEM_I(file_inode(file)); ++ int ret; + +- if (info->seals & F_SEAL_FUTURE_WRITE) { +- /* +- * New PROT_WRITE and MAP_SHARED mmaps are not allowed when +- * "future write" seal active. +- */ +- if ((vma->vm_flags & VM_SHARED) && (vma->vm_flags & VM_WRITE)) +- return -EPERM; +- +- /* +- * Since an F_SEAL_FUTURE_WRITE sealed memfd can be mapped as +- * MAP_SHARED and read-only, take care to not allow mprotect to +- * revert protections on such mappings. Do this only for shared +- * mappings. For private mappings, don't need to mask +- * VM_MAYWRITE as we still want them to be COW-writable. 
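The shmem hunk above folds the open-coded F_SEAL_FUTURE_WRITE handling into the shared seal_check_future_write() helper; the user-visible contract is unchanged: once the seal is set, new shared writable mappings are refused while private mappings stay COW-writable. That contract can be observed from userspace (Linux-specific demo; the fallback #define covers older headers and error handling is omitted for brevity):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef F_SEAL_FUTURE_WRITE
#define F_SEAL_FUTURE_WRITE 0x0010
#endif

int main(void)
{
	int fd = memfd_create("demo", MFD_ALLOW_SEALING);
	void *shared, *priv;

	ftruncate(fd, 4096);
	fcntl(fd, F_ADD_SEALS, F_SEAL_FUTURE_WRITE);

	/* A new shared writable mapping must now fail (EPERM)... */
	shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
		      MAP_SHARED, fd, 0);
	printf("MAP_SHARED|PROT_WRITE:  %s\n",
	       shared == MAP_FAILED ? "rejected" : "allowed");

	/* ...while a private mapping stays COW-writable. */
	priv = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE, fd, 0);
	printf("MAP_PRIVATE|PROT_WRITE: %s\n",
	       priv == MAP_FAILED ? "rejected" : "allowed");
	return 0;
}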
+- */ +- if (vma->vm_flags & VM_SHARED) +- vma->vm_flags &= ~(VM_MAYWRITE); +- } ++ ret = seal_check_future_write(info->seals, vma); ++ if (ret) ++ return ret; + + /* arm64 - allow memory tagging on RAM-based files */ + vma->vm_flags |= VM_MTE_ALLOWED; +@@ -2375,8 +2361,18 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm, + pgoff_t offset, max_off; + + ret = -ENOMEM; +- if (!shmem_inode_acct_block(inode, 1)) ++ if (!shmem_inode_acct_block(inode, 1)) { ++ /* ++ * We may have got a page, returned -ENOENT triggering a retry, ++ * and now we find ourselves with -ENOMEM. Release the page, to ++ * avoid a BUG_ON in our caller. ++ */ ++ if (unlikely(*pagep)) { ++ put_page(*pagep); ++ *pagep = NULL; ++ } + goto out; ++ } + + if (!*pagep) { + page = shmem_alloc_page(gfp, info, pgoff); +diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c +index 7a3e42e752350..82f4973a011d9 100644 +--- a/net/bluetooth/hci_event.c ++++ b/net/bluetooth/hci_event.c +@@ -5912,7 +5912,7 @@ static void hci_le_phy_update_evt(struct hci_dev *hdev, struct sk_buff *skb) + + BT_DBG("%s status 0x%2.2x", hdev->name, ev->status); + +- if (!ev->status) ++ if (ev->status) + return; + + hci_dev_lock(hdev); +diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c +index 72c2f5226d673..53ddbee459b99 100644 +--- a/net/bluetooth/l2cap_core.c ++++ b/net/bluetooth/l2cap_core.c +@@ -451,6 +451,8 @@ struct l2cap_chan *l2cap_chan_create(void) + if (!chan) + return NULL; + ++ skb_queue_head_init(&chan->tx_q); ++ skb_queue_head_init(&chan->srej_q); + mutex_init(&chan->lock); + + /* Set default lock nesting level */ +@@ -516,7 +518,9 @@ void l2cap_chan_set_defaults(struct l2cap_chan *chan) + chan->flush_to = L2CAP_DEFAULT_FLUSH_TO; + chan->retrans_timeout = L2CAP_DEFAULT_RETRANS_TO; + chan->monitor_timeout = L2CAP_DEFAULT_MONITOR_TO; ++ + chan->conf_state = 0; ++ set_bit(CONF_NOT_COMPLETE, &chan->conf_state); + + set_bit(FLAG_FORCE_ACTIVE, &chan->flags); + } +diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c +index f1b1edd0b6974..c99d65ef13b1e 100644 +--- a/net/bluetooth/l2cap_sock.c ++++ b/net/bluetooth/l2cap_sock.c +@@ -179,9 +179,17 @@ static int l2cap_sock_connect(struct socket *sock, struct sockaddr *addr, + struct l2cap_chan *chan = l2cap_pi(sk)->chan; + struct sockaddr_l2 la; + int len, err = 0; ++ bool zapped; + + BT_DBG("sk %p", sk); + ++ lock_sock(sk); ++ zapped = sock_flag(sk, SOCK_ZAPPED); ++ release_sock(sk); ++ ++ if (zapped) ++ return -EINVAL; ++ + if (!addr || alen < offsetofend(struct sockaddr, sa_family) || + addr->sa_family != AF_BLUETOOTH) + return -EINVAL; +diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c +index 74971b4bd4570..939c6f77fecc2 100644 +--- a/net/bluetooth/mgmt.c ++++ b/net/bluetooth/mgmt.c +@@ -7976,7 +7976,6 @@ static int add_ext_adv_params(struct sock *sk, struct hci_dev *hdev, + goto unlock; + } + +- hdev->cur_adv_instance = cp->instance; + /* Submit request for advertising params if ext adv available */ + if (ext_adv_capable(hdev)) { + hci_req_init(&req, hdev); +diff --git a/net/bridge/br_arp_nd_proxy.c b/net/bridge/br_arp_nd_proxy.c +index dfec65eca8a6e..3db1def4437b3 100644 +--- a/net/bridge/br_arp_nd_proxy.c ++++ b/net/bridge/br_arp_nd_proxy.c +@@ -160,7 +160,9 @@ void br_do_proxy_suppress_arp(struct sk_buff *skb, struct net_bridge *br, + if (br_opt_get(br, BROPT_NEIGH_SUPPRESS_ENABLED)) { + if (p && (p->flags & BR_NEIGH_SUPPRESS)) + return; +- if (ipv4_is_zeronet(sip) || sip == tip) { ++ if (parp->ar_op != htons(ARPOP_RREQUEST) && ++ 
parp->ar_op != htons(ARPOP_RREPLY) && ++ (ipv4_is_zeronet(sip) || sip == tip)) { + /* prevent flooding to neigh suppress ports */ + BR_INPUT_SKB_CB(skb)->proxyarp_replied = 1; + return; +diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c +index 229309d7b4ff9..226bb05c3b42d 100644 +--- a/net/bridge/br_multicast.c ++++ b/net/bridge/br_multicast.c +@@ -1593,7 +1593,8 @@ out: + spin_unlock(&br->multicast_lock); + } + +-static void br_mc_disabled_update(struct net_device *dev, bool value) ++static int br_mc_disabled_update(struct net_device *dev, bool value, ++ struct netlink_ext_ack *extack) + { + struct switchdev_attr attr = { + .orig_dev = dev, +@@ -1602,11 +1603,13 @@ static void br_mc_disabled_update(struct net_device *dev, bool value) + .u.mc_disabled = !value, + }; + +- switchdev_port_attr_set(dev, &attr, NULL); ++ return switchdev_port_attr_set(dev, &attr, extack); + } + + int br_multicast_add_port(struct net_bridge_port *port) + { ++ int err; ++ + port->multicast_router = MDB_RTR_TYPE_TEMP_QUERY; + port->multicast_eht_hosts_limit = BR_MCAST_DEFAULT_EHT_HOSTS_LIMIT; + +@@ -1618,8 +1621,12 @@ int br_multicast_add_port(struct net_bridge_port *port) + timer_setup(&port->ip6_own_query.timer, + br_ip6_multicast_port_query_expired, 0); + #endif +- br_mc_disabled_update(port->dev, +- br_opt_get(port->br, BROPT_MULTICAST_ENABLED)); ++ err = br_mc_disabled_update(port->dev, ++ br_opt_get(port->br, ++ BROPT_MULTICAST_ENABLED), ++ NULL); ++ if (err && err != -EOPNOTSUPP) ++ return err; + + port->mcast_stats = netdev_alloc_pcpu_stats(struct bridge_mcast_stats); + if (!port->mcast_stats) +@@ -3543,16 +3550,23 @@ static void br_multicast_start_querier(struct net_bridge *br, + rcu_read_unlock(); + } + +-int br_multicast_toggle(struct net_bridge *br, unsigned long val) ++int br_multicast_toggle(struct net_bridge *br, unsigned long val, ++ struct netlink_ext_ack *extack) + { + struct net_bridge_port *port; + bool change_snoopers = false; ++ int err = 0; + + spin_lock_bh(&br->multicast_lock); + if (!!br_opt_get(br, BROPT_MULTICAST_ENABLED) == !!val) + goto unlock; + +- br_mc_disabled_update(br->dev, val); ++ err = br_mc_disabled_update(br->dev, val, extack); ++ if (err == -EOPNOTSUPP) ++ err = 0; ++ if (err) ++ goto unlock; ++ + br_opt_toggle(br, BROPT_MULTICAST_ENABLED, !!val); + if (!br_opt_get(br, BROPT_MULTICAST_ENABLED)) { + change_snoopers = true; +@@ -3590,7 +3604,7 @@ unlock: + br_multicast_leave_snoopers(br); + } + +- return 0; ++ return err; + } + + bool br_multicast_enabled(const struct net_device *dev) +diff --git a/net/bridge/br_netlink.c b/net/bridge/br_netlink.c +index f2b1343f8332a..0456593aceec1 100644 +--- a/net/bridge/br_netlink.c ++++ b/net/bridge/br_netlink.c +@@ -1293,7 +1293,9 @@ static int br_changelink(struct net_device *brdev, struct nlattr *tb[], + if (data[IFLA_BR_MCAST_SNOOPING]) { + u8 mcast_snooping = nla_get_u8(data[IFLA_BR_MCAST_SNOOPING]); + +- br_multicast_toggle(br, mcast_snooping); ++ err = br_multicast_toggle(br, mcast_snooping, extack); ++ if (err) ++ return err; + } + + if (data[IFLA_BR_MCAST_QUERY_USE_IFADDR]) { +diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h +index d7d167e10b705..af3430c2d6ea8 100644 +--- a/net/bridge/br_private.h ++++ b/net/bridge/br_private.h +@@ -810,7 +810,8 @@ void br_multicast_flood(struct net_bridge_mdb_entry *mdst, + struct sk_buff *skb, bool local_rcv, bool local_orig); + int br_multicast_set_router(struct net_bridge *br, unsigned long val); + int br_multicast_set_port_router(struct net_bridge_port *p, 
unsigned long val); +-int br_multicast_toggle(struct net_bridge *br, unsigned long val); ++int br_multicast_toggle(struct net_bridge *br, unsigned long val, ++ struct netlink_ext_ack *extack); + int br_multicast_set_querier(struct net_bridge *br, unsigned long val); + int br_multicast_set_hash_max(struct net_bridge *br, unsigned long val); + int br_multicast_set_igmp_version(struct net_bridge *br, unsigned long val); +diff --git a/net/bridge/br_sysfs_br.c b/net/bridge/br_sysfs_br.c +index 072e29840082a..381467b691d5f 100644 +--- a/net/bridge/br_sysfs_br.c ++++ b/net/bridge/br_sysfs_br.c +@@ -409,17 +409,11 @@ static ssize_t multicast_snooping_show(struct device *d, + return sprintf(buf, "%d\n", br_opt_get(br, BROPT_MULTICAST_ENABLED)); + } + +-static int toggle_multicast(struct net_bridge *br, unsigned long val, +- struct netlink_ext_ack *extack) +-{ +- return br_multicast_toggle(br, val); +-} +- + static ssize_t multicast_snooping_store(struct device *d, + struct device_attribute *attr, + const char *buf, size_t len) + { +- return store_bridge_parm(d, buf, len, toggle_multicast); ++ return store_bridge_parm(d, buf, len, br_multicast_toggle); + } + static DEVICE_ATTR_RW(multicast_snooping); + +diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c +index a96a4f5de0ce2..3f36b04d86a0a 100644 +--- a/net/core/flow_dissector.c ++++ b/net/core/flow_dissector.c +@@ -828,8 +828,10 @@ static void __skb_flow_bpf_to_target(const struct bpf_flow_keys *flow_keys, + key_addrs = skb_flow_dissector_target(flow_dissector, + FLOW_DISSECTOR_KEY_IPV6_ADDRS, + target_container); +- memcpy(&key_addrs->v6addrs, &flow_keys->ipv6_src, +- sizeof(key_addrs->v6addrs)); ++ memcpy(&key_addrs->v6addrs.src, &flow_keys->ipv6_src, ++ sizeof(key_addrs->v6addrs.src)); ++ memcpy(&key_addrs->v6addrs.dst, &flow_keys->ipv6_dst, ++ sizeof(key_addrs->v6addrs.dst)); + key_control->addr_type = FLOW_DISSECTOR_KEY_IPV6_ADDRS; + } + +diff --git a/net/core/page_pool.c b/net/core/page_pool.c +index ad8b0707af04b..f014fd8c19a6b 100644 +--- a/net/core/page_pool.c ++++ b/net/core/page_pool.c +@@ -174,8 +174,10 @@ static void page_pool_dma_sync_for_device(struct page_pool *pool, + struct page *page, + unsigned int dma_sync_size) + { ++ dma_addr_t dma_addr = page_pool_get_dma_addr(page); ++ + dma_sync_size = min(dma_sync_size, pool->p.max_len); +- dma_sync_single_range_for_device(pool->p.dev, page->dma_addr, ++ dma_sync_single_range_for_device(pool->p.dev, dma_addr, + pool->p.offset, dma_sync_size, + pool->p.dma_dir); + } +@@ -226,7 +228,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool, + put_page(page); + return NULL; + } +- page->dma_addr = dma; ++ page_pool_set_dma_addr(page, dma); + + if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV) + page_pool_dma_sync_for_device(pool, page, pool->p.max_len); +@@ -294,13 +296,13 @@ void page_pool_release_page(struct page_pool *pool, struct page *page) + */ + goto skip_dma_unmap; + +- dma = page->dma_addr; ++ dma = page_pool_get_dma_addr(page); + +- /* When page is unmapped, it cannot be returned our pool */ ++ /* When page is unmapped, it cannot be returned to our pool */ + dma_unmap_page_attrs(pool->p.dev, dma, + PAGE_SIZE << pool->p.order, pool->p.dma_dir, + DMA_ATTR_SKIP_CPU_SYNC); +- page->dma_addr = 0; ++ page_pool_set_dma_addr(page, 0); + skip_dma_unmap: + /* This may be the last page returned, releasing the pool, so + * it is not safe to reference pool afterwards. 
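The page_pool change above (together with the page_pool.h accessors earlier in this patch) stops touching page->dma_addr directly because, on 32-bit architectures with 64-bit dma_addr_t, the address now lives in two unsigned longs. A standalone sketch of the same split, with my reading of the odd `<< 16 << 16` idiom spelled out in the comment:

#include <stdint.h>
#include <stdio.h>

typedef uint64_t dma_addr_t;	/* assume 64-bit DMA addresses */

struct page { unsigned long dma_addr[2]; };

static dma_addr_t get_dma_addr(const struct page *page)
{
	dma_addr_t ret = page->dma_addr[0];

	/* Fold in the high half on 32-bit.  Two 16-bit shifts rather
	 * than "<< 32": in configurations where dma_addr_t is itself
	 * only 32 bits wide this branch is dead code, but a shift by
	 * the full type width would still be flagged as undefined at
	 * compile time; the split shift stays well-formed everywhere. */
	if (sizeof(dma_addr_t) > sizeof(unsigned long))
		ret |= (dma_addr_t)page->dma_addr[1] << 16 << 16;
	return ret;
}

static void set_dma_addr(struct page *page, dma_addr_t addr)
{
	page->dma_addr[0] = (unsigned long)addr;
	if (sizeof(dma_addr_t) > sizeof(unsigned long))
		page->dma_addr[1] = (unsigned long)(addr >> 32);
}

int main(void)
{
	struct page p;

	set_dma_addr(&p, 0x1234567890ULL);
	printf("0x%llx\n", (unsigned long long)get_dma_addr(&p));
	return 0;
}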
+diff --git a/net/ethtool/ioctl.c b/net/ethtool/ioctl.c +index 771688e1b0da9..2603966da904d 100644 +--- a/net/ethtool/ioctl.c ++++ b/net/ethtool/ioctl.c +@@ -489,7 +489,7 @@ store_link_ksettings_for_user(void __user *to, + { + struct ethtool_link_usettings link_usettings; + +- memcpy(&link_usettings.base, &from->base, sizeof(link_usettings)); ++ memcpy(&link_usettings, from, sizeof(link_usettings)); + bitmap_to_arr32(link_usettings.link_modes.supported, + from->link_modes.supported, + __ETHTOOL_LINK_MODE_MASK_NBITS); +diff --git a/net/ethtool/netlink.c b/net/ethtool/netlink.c +index 50d3c8896f917..25a55086d2b66 100644 +--- a/net/ethtool/netlink.c ++++ b/net/ethtool/netlink.c +@@ -384,7 +384,8 @@ static int ethnl_default_dump_one(struct sk_buff *skb, struct net_device *dev, + int ret; + + ehdr = genlmsg_put(skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq, +- ðtool_genl_family, 0, ctx->ops->reply_cmd); ++ ðtool_genl_family, NLM_F_MULTI, ++ ctx->ops->reply_cmd); + if (!ehdr) + return -EMSGSIZE; + +diff --git a/net/ipv6/ip6_vti.c b/net/ipv6/ip6_vti.c +index e0cc32e458801..932b15b13053d 100644 +--- a/net/ipv6/ip6_vti.c ++++ b/net/ipv6/ip6_vti.c +@@ -193,7 +193,6 @@ static int vti6_tnl_create2(struct net_device *dev) + + strcpy(t->parms.name, dev->name); + +- dev_hold(dev); + vti6_tnl_link(ip6n, t); + + return 0; +@@ -934,6 +933,7 @@ static inline int vti6_dev_init_gen(struct net_device *dev) + dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats); + if (!dev->tstats) + return -ENOMEM; ++ dev_hold(dev); + return 0; + } + +diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c +index 96f487fc00713..0fe91dc9817eb 100644 +--- a/net/mac80211/mlme.c ++++ b/net/mac80211/mlme.c +@@ -1295,6 +1295,11 @@ static void ieee80211_chswitch_post_beacon(struct ieee80211_sub_if_data *sdata) + + sdata->vif.csa_active = false; + ifmgd->csa_waiting_bcn = false; ++ /* ++ * If the CSA IE is still present on the beacon after the switch, ++ * we need to consider it as a new CSA (possibly to self). ++ */ ++ ifmgd->beacon_crc_valid = false; + + ret = drv_post_channel_switch(sdata); + if (ret) { +@@ -1400,11 +1405,8 @@ ieee80211_sta_process_chanswitch(struct ieee80211_sub_if_data *sdata, + ch_switch.delay = csa_ie.max_switch_time; + } + +- if (res < 0) { +- ieee80211_queue_work(&local->hw, +- &ifmgd->csa_connection_drop_work); +- return; +- } ++ if (res < 0) ++ goto lock_and_drop_connection; + + if (beacon && sdata->vif.csa_active && !ifmgd->csa_waiting_bcn) { + if (res) +diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c +index 3b3bcefbf6577..28422d6870967 100644 +--- a/net/mac80211/tx.c ++++ b/net/mac80211/tx.c +@@ -2267,17 +2267,6 @@ netdev_tx_t ieee80211_monitor_start_xmit(struct sk_buff *skb, + payload[7]); + } + +- /* Initialize skb->priority for QoS frames. If the DONT_REORDER flag +- * is set, stick to the default value for skb->priority to assure +- * frames injected with this flag are not reordered relative to each +- * other. +- */ +- if (ieee80211_is_data_qos(hdr->frame_control) && +- !(info->control.flags & IEEE80211_TX_CTRL_DONT_REORDER)) { +- u8 *p = ieee80211_get_qos_ctl(hdr); +- skb->priority = *p & IEEE80211_QOS_CTL_TAG1D_MASK; +- } +- + rcu_read_lock(); + + /* +@@ -2341,6 +2330,15 @@ netdev_tx_t ieee80211_monitor_start_xmit(struct sk_buff *skb, + + info->band = chandef->chan->band; + ++ /* Initialize skb->priority according to frame type and TID class, ++ * with respect to the sub interface that the frame will actually ++ * be transmitted on. 
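The ethtool ioctl change just below is a one-liner with a generic lesson: a memcpy() sized with sizeof(whole object) but pointed at an embedded member overruns both source and destination. A distilled before/after (struct names here are hypothetical stand-ins for the ethtool types):

#include <string.h>

struct base { int speed; int duplex; };
struct usettings {
	struct base base;
	unsigned int modes[3];
};

static void copy_usettings(struct usettings *dst, const struct base *src_base)
{
	/* Buggy form: starts at the member but copies the size of the
	 * enclosing struct, reading and writing past both members:
	 *
	 *	memcpy(&dst->base, src_base, sizeof(*dst));
	 *
	 * Correct: the length must match what the pointers address. */
	memcpy(&dst->base, src_base, sizeof(dst->base));
}

int main(void)
{
	struct base b = { 100, 1 };
	struct usettings u;

	copy_usettings(&u, &b);
	return u.base.speed == 100 ? 0 : 1;
}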
If the DONT_REORDER flag is set, the original ++ * skb-priority is preserved to assure frames injected with this ++ * flag are not reordered relative to each other. ++ */ ++ ieee80211_select_queue_80211(sdata, skb, hdr); ++ skb_set_queue_mapping(skb, ieee80211_ac_from_tid(skb->priority)); ++ + /* remove the injection radiotap header */ + skb_pull(skb, len_rthdr); + +diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c +index d17d39ccdf34b..4fe7acaa472f2 100644 +--- a/net/mptcp/subflow.c ++++ b/net/mptcp/subflow.c +@@ -524,8 +524,7 @@ static void mptcp_sock_destruct(struct sock *sk) + * ESTABLISHED state and will not have the SOCK_DEAD flag. + * Both result in warnings from inet_sock_destruct. + */ +- +- if (sk->sk_state == TCP_ESTABLISHED) { ++ if ((1 << sk->sk_state) & (TCPF_ESTABLISHED | TCPF_CLOSE_WAIT)) { + sk->sk_state = TCP_CLOSE; + WARN_ON_ONCE(sk->sk_socket); + sock_orphan(sk); +diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c +index 589d2f6978d38..878ed49d0c569 100644 +--- a/net/netfilter/nf_tables_api.c ++++ b/net/netfilter/nf_tables_api.c +@@ -6246,9 +6246,9 @@ err_obj_ht: + INIT_LIST_HEAD(&obj->list); + return err; + err_trans: +- kfree(obj->key.name); +-err_userdata: + kfree(obj->udata); ++err_userdata: ++ kfree(obj->key.name); + err_strdup: + if (obj->ops->destroy) + obj->ops->destroy(&ctx, obj); +diff --git a/net/netfilter/nfnetlink_osf.c b/net/netfilter/nfnetlink_osf.c +index 916a3c7f9eafe..79fbf37291f38 100644 +--- a/net/netfilter/nfnetlink_osf.c ++++ b/net/netfilter/nfnetlink_osf.c +@@ -186,6 +186,8 @@ static const struct tcphdr *nf_osf_hdr_ctx_init(struct nf_osf_hdr_ctx *ctx, + + ctx->optp = skb_header_pointer(skb, ip_hdrlen(skb) + + sizeof(struct tcphdr), ctx->optsize, opts); ++ if (!ctx->optp) ++ return NULL; + } + + return tcp; +diff --git a/net/netfilter/nft_set_hash.c b/net/netfilter/nft_set_hash.c +index bf618b7ec1aea..560c2cda52ee3 100644 +--- a/net/netfilter/nft_set_hash.c ++++ b/net/netfilter/nft_set_hash.c +@@ -406,9 +406,17 @@ static void nft_rhash_destroy(const struct nft_set *set) + (void *)set); + } + ++/* Number of buckets is stored in u32, so cap our result to 1U<<31 */ ++#define NFT_MAX_BUCKETS (1U << 31) ++ + static u32 nft_hash_buckets(u32 size) + { +- return roundup_pow_of_two(size * 4 / 3); ++ u64 val = div_u64((u64)size * 4, 3); ++ ++ if (val >= NFT_MAX_BUCKETS) ++ return NFT_MAX_BUCKETS; ++ ++ return roundup_pow_of_two(val); + } + + static bool nft_rhash_estimate(const struct nft_set_desc *desc, u32 features, +diff --git a/net/netfilter/xt_SECMARK.c b/net/netfilter/xt_SECMARK.c +index 75625d13e976c..498a0bf6f0444 100644 +--- a/net/netfilter/xt_SECMARK.c ++++ b/net/netfilter/xt_SECMARK.c +@@ -24,10 +24,9 @@ MODULE_ALIAS("ip6t_SECMARK"); + static u8 mode; + + static unsigned int +-secmark_tg(struct sk_buff *skb, const struct xt_action_param *par) ++secmark_tg(struct sk_buff *skb, const struct xt_secmark_target_info_v1 *info) + { + u32 secmark = 0; +- const struct xt_secmark_target_info *info = par->targinfo; + + switch (mode) { + case SECMARK_MODE_SEL: +@@ -41,7 +40,7 @@ secmark_tg(struct sk_buff *skb, const struct xt_action_param *par) + return XT_CONTINUE; + } + +-static int checkentry_lsm(struct xt_secmark_target_info *info) ++static int checkentry_lsm(struct xt_secmark_target_info_v1 *info) + { + int err; + +@@ -73,15 +72,15 @@ static int checkentry_lsm(struct xt_secmark_target_info *info) + return 0; + } + +-static int secmark_tg_check(const struct xt_tgchk_param *par) ++static int ++secmark_tg_check(const char 
*table, struct xt_secmark_target_info_v1 *info) + { +- struct xt_secmark_target_info *info = par->targinfo; + int err; + +- if (strcmp(par->table, "mangle") != 0 && +- strcmp(par->table, "security") != 0) { ++ if (strcmp(table, "mangle") != 0 && ++ strcmp(table, "security") != 0) { + pr_info_ratelimited("only valid in \'mangle\' or \'security\' table, not \'%s\'\n", +- par->table); ++ table); + return -EINVAL; + } + +@@ -116,25 +115,76 @@ static void secmark_tg_destroy(const struct xt_tgdtor_param *par) + } + } + +-static struct xt_target secmark_tg_reg __read_mostly = { +- .name = "SECMARK", +- .revision = 0, +- .family = NFPROTO_UNSPEC, +- .checkentry = secmark_tg_check, +- .destroy = secmark_tg_destroy, +- .target = secmark_tg, +- .targetsize = sizeof(struct xt_secmark_target_info), +- .me = THIS_MODULE, ++static int secmark_tg_check_v0(const struct xt_tgchk_param *par) ++{ ++ struct xt_secmark_target_info *info = par->targinfo; ++ struct xt_secmark_target_info_v1 newinfo = { ++ .mode = info->mode, ++ }; ++ int ret; ++ ++ memcpy(newinfo.secctx, info->secctx, SECMARK_SECCTX_MAX); ++ ++ ret = secmark_tg_check(par->table, &newinfo); ++ info->secid = newinfo.secid; ++ ++ return ret; ++} ++ ++static unsigned int ++secmark_tg_v0(struct sk_buff *skb, const struct xt_action_param *par) ++{ ++ const struct xt_secmark_target_info *info = par->targinfo; ++ struct xt_secmark_target_info_v1 newinfo = { ++ .secid = info->secid, ++ }; ++ ++ return secmark_tg(skb, &newinfo); ++} ++ ++static int secmark_tg_check_v1(const struct xt_tgchk_param *par) ++{ ++ return secmark_tg_check(par->table, par->targinfo); ++} ++ ++static unsigned int ++secmark_tg_v1(struct sk_buff *skb, const struct xt_action_param *par) ++{ ++ return secmark_tg(skb, par->targinfo); ++} ++ ++static struct xt_target secmark_tg_reg[] __read_mostly = { ++ { ++ .name = "SECMARK", ++ .revision = 0, ++ .family = NFPROTO_UNSPEC, ++ .checkentry = secmark_tg_check_v0, ++ .destroy = secmark_tg_destroy, ++ .target = secmark_tg_v0, ++ .targetsize = sizeof(struct xt_secmark_target_info), ++ .me = THIS_MODULE, ++ }, ++ { ++ .name = "SECMARK", ++ .revision = 1, ++ .family = NFPROTO_UNSPEC, ++ .checkentry = secmark_tg_check_v1, ++ .destroy = secmark_tg_destroy, ++ .target = secmark_tg_v1, ++ .targetsize = sizeof(struct xt_secmark_target_info_v1), ++ .usersize = offsetof(struct xt_secmark_target_info_v1, secid), ++ .me = THIS_MODULE, ++ }, + }; + + static int __init secmark_tg_init(void) + { +- return xt_register_target(&secmark_tg_reg); ++ return xt_register_targets(secmark_tg_reg, ARRAY_SIZE(secmark_tg_reg)); + } + + static void __exit secmark_tg_exit(void) + { +- xt_unregister_target(&secmark_tg_reg); ++ xt_unregister_targets(secmark_tg_reg, ARRAY_SIZE(secmark_tg_reg)); + } + + module_init(secmark_tg_init); +diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c +index c69a4ba9c33fd..3035f96c6e6c8 100644 +--- a/net/sched/cls_flower.c ++++ b/net/sched/cls_flower.c +@@ -209,16 +209,16 @@ static bool fl_range_port_dst_cmp(struct cls_fl_filter *filter, + struct fl_flow_key *key, + struct fl_flow_key *mkey) + { +- __be16 min_mask, max_mask, min_val, max_val; ++ u16 min_mask, max_mask, min_val, max_val; + +- min_mask = htons(filter->mask->key.tp_range.tp_min.dst); +- max_mask = htons(filter->mask->key.tp_range.tp_max.dst); +- min_val = htons(filter->key.tp_range.tp_min.dst); +- max_val = htons(filter->key.tp_range.tp_max.dst); ++ min_mask = ntohs(filter->mask->key.tp_range.tp_min.dst); ++ max_mask = ntohs(filter->mask->key.tp_range.tp_max.dst); ++ 
min_val = ntohs(filter->key.tp_range.tp_min.dst); ++ max_val = ntohs(filter->key.tp_range.tp_max.dst); + + if (min_mask && max_mask) { +- if (htons(key->tp_range.tp.dst) < min_val || +- htons(key->tp_range.tp.dst) > max_val) ++ if (ntohs(key->tp_range.tp.dst) < min_val || ++ ntohs(key->tp_range.tp.dst) > max_val) + return false; + + /* skb does not have min and max values */ +@@ -232,16 +232,16 @@ static bool fl_range_port_src_cmp(struct cls_fl_filter *filter, + struct fl_flow_key *key, + struct fl_flow_key *mkey) + { +- __be16 min_mask, max_mask, min_val, max_val; ++ u16 min_mask, max_mask, min_val, max_val; + +- min_mask = htons(filter->mask->key.tp_range.tp_min.src); +- max_mask = htons(filter->mask->key.tp_range.tp_max.src); +- min_val = htons(filter->key.tp_range.tp_min.src); +- max_val = htons(filter->key.tp_range.tp_max.src); ++ min_mask = ntohs(filter->mask->key.tp_range.tp_min.src); ++ max_mask = ntohs(filter->mask->key.tp_range.tp_max.src); ++ min_val = ntohs(filter->key.tp_range.tp_min.src); ++ max_val = ntohs(filter->key.tp_range.tp_max.src); + + if (min_mask && max_mask) { +- if (htons(key->tp_range.tp.src) < min_val || +- htons(key->tp_range.tp.src) > max_val) ++ if (ntohs(key->tp_range.tp.src) < min_val || ++ ntohs(key->tp_range.tp.src) > max_val) + return false; + + /* skb does not have min and max values */ +@@ -783,16 +783,16 @@ static int fl_set_key_port_range(struct nlattr **tb, struct fl_flow_key *key, + TCA_FLOWER_UNSPEC, sizeof(key->tp_range.tp_max.src)); + + if (mask->tp_range.tp_min.dst && mask->tp_range.tp_max.dst && +- htons(key->tp_range.tp_max.dst) <= +- htons(key->tp_range.tp_min.dst)) { ++ ntohs(key->tp_range.tp_max.dst) <= ++ ntohs(key->tp_range.tp_min.dst)) { + NL_SET_ERR_MSG_ATTR(extack, + tb[TCA_FLOWER_KEY_PORT_DST_MIN], + "Invalid destination port range (min must be strictly smaller than max)"); + return -EINVAL; + } + if (mask->tp_range.tp_min.src && mask->tp_range.tp_max.src && +- htons(key->tp_range.tp_max.src) <= +- htons(key->tp_range.tp_min.src)) { ++ ntohs(key->tp_range.tp_max.src) <= ++ ntohs(key->tp_range.tp_min.src)) { + NL_SET_ERR_MSG_ATTR(extack, + tb[TCA_FLOWER_KEY_PORT_SRC_MIN], + "Invalid source port range (min must be strictly smaller than max)"); +diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c +index 8287894541e3c..909c798b74030 100644 +--- a/net/sched/sch_taprio.c ++++ b/net/sched/sch_taprio.c +@@ -901,6 +901,12 @@ static int parse_taprio_schedule(struct taprio_sched *q, struct nlattr **tb, + + list_for_each_entry(entry, &new->entries, list) + cycle = ktime_add_ns(cycle, entry->interval); ++ ++ if (!cycle) { ++ NL_SET_ERR_MSG(extack, "'cycle_time' can never be 0"); ++ return -EINVAL; ++ } ++ + new->cycle_time = cycle; + } + +diff --git a/net/sctp/sm_make_chunk.c b/net/sctp/sm_make_chunk.c +index f77484df097b7..da4ce0947c3aa 100644 +--- a/net/sctp/sm_make_chunk.c ++++ b/net/sctp/sm_make_chunk.c +@@ -3147,7 +3147,7 @@ static __be16 sctp_process_asconf_param(struct sctp_association *asoc, + * primary. 
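The cls_flower port-range fix above is easy to gloss over: the min/max fields hold big-endian (__be16) values, and comparing them raw compares byte patterns rather than port numbers on little-endian hosts; converting with ntohs() first restores numeric order. A two-line demonstration:

#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint16_t min = htons(1);	/* stored in network byte order */
	uint16_t max = htons(256);

	/* On a little-endian host htons(1) is 0x0100 and htons(256) is
	 * 0x0001 as raw integers, so the untranslated comparison claims
	 * that port 1 is not less than port 256. */
	printf("raw:   %d\n", min < max);		/* 0 on LE */
	printf("ntohs: %d\n", ntohs(min) < ntohs(max));	/* always 1 */
	return 0;
}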
+ */ + if (af->is_any(&addr)) +- memcpy(&addr.v4, sctp_source(asconf), sizeof(addr)); ++ memcpy(&addr, sctp_source(asconf), sizeof(addr)); + + if (security_sctp_bind_connect(asoc->ep->base.sk, + SCTP_PARAM_SET_PRIMARY, +diff --git a/net/sctp/sm_statefuns.c b/net/sctp/sm_statefuns.c +index af2b7041fa4eb..73bb4c6e9201a 100644 +--- a/net/sctp/sm_statefuns.c ++++ b/net/sctp/sm_statefuns.c +@@ -1852,20 +1852,35 @@ static enum sctp_disposition sctp_sf_do_dupcook_a( + SCTP_TO(SCTP_EVENT_TIMEOUT_T4_RTO)); + sctp_add_cmd_sf(commands, SCTP_CMD_PURGE_ASCONF_QUEUE, SCTP_NULL()); + +- repl = sctp_make_cookie_ack(new_asoc, chunk); ++ /* Update the content of current association. */ ++ if (sctp_assoc_update((struct sctp_association *)asoc, new_asoc)) { ++ struct sctp_chunk *abort; ++ ++ abort = sctp_make_abort(asoc, NULL, sizeof(struct sctp_errhdr)); ++ if (abort) { ++ sctp_init_cause(abort, SCTP_ERROR_RSRC_LOW, 0); ++ sctp_add_cmd_sf(commands, SCTP_CMD_REPLY, SCTP_CHUNK(abort)); ++ } ++ sctp_add_cmd_sf(commands, SCTP_CMD_SET_SK_ERR, SCTP_ERROR(ECONNABORTED)); ++ sctp_add_cmd_sf(commands, SCTP_CMD_ASSOC_FAILED, ++ SCTP_PERR(SCTP_ERROR_RSRC_LOW)); ++ SCTP_INC_STATS(net, SCTP_MIB_ABORTEDS); ++ SCTP_DEC_STATS(net, SCTP_MIB_CURRESTAB); ++ goto nomem; ++ } ++ ++ repl = sctp_make_cookie_ack(asoc, chunk); + if (!repl) + goto nomem; + + /* Report association restart to upper layer. */ + ev = sctp_ulpevent_make_assoc_change(asoc, 0, SCTP_RESTART, 0, +- new_asoc->c.sinit_num_ostreams, +- new_asoc->c.sinit_max_instreams, ++ asoc->c.sinit_num_ostreams, ++ asoc->c.sinit_max_instreams, + NULL, GFP_ATOMIC); + if (!ev) + goto nomem_ev; + +- /* Update the content of current association. */ +- sctp_add_cmd_sf(commands, SCTP_CMD_UPDATE_ASSOC, SCTP_ASOC(new_asoc)); + sctp_add_cmd_sf(commands, SCTP_CMD_EVENT_ULP, SCTP_ULPEVENT(ev)); + if ((sctp_state(asoc, SHUTDOWN_PENDING) || + sctp_state(asoc, SHUTDOWN_SENT)) && +@@ -1929,7 +1944,8 @@ static enum sctp_disposition sctp_sf_do_dupcook_b( + sctp_add_cmd_sf(commands, SCTP_CMD_UPDATE_ASSOC, SCTP_ASOC(new_asoc)); + sctp_add_cmd_sf(commands, SCTP_CMD_NEW_STATE, + SCTP_STATE(SCTP_STATE_ESTABLISHED)); +- SCTP_INC_STATS(net, SCTP_MIB_CURRESTAB); ++ if (asoc->state < SCTP_STATE_ESTABLISHED) ++ SCTP_INC_STATS(net, SCTP_MIB_CURRESTAB); + sctp_add_cmd_sf(commands, SCTP_CMD_HB_TIMERS_START, SCTP_NULL()); + + repl = sctp_make_cookie_ack(new_asoc, chunk); +diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c +index 47340b3b514f3..cb23cca72c24c 100644 +--- a/net/smc/af_smc.c ++++ b/net/smc/af_smc.c +@@ -2162,6 +2162,9 @@ static int smc_setsockopt(struct socket *sock, int level, int optname, + struct smc_sock *smc; + int val, rc; + ++ if (level == SOL_TCP && optname == TCP_ULP) ++ return -EOPNOTSUPP; ++ + smc = smc_sk(sk); + + /* generic setsockopts reaching us here always apply to the +@@ -2186,7 +2189,6 @@ static int smc_setsockopt(struct socket *sock, int level, int optname, + if (rc || smc->use_fallback) + goto out; + switch (optname) { +- case TCP_ULP: + case TCP_FASTOPEN: + case TCP_FASTOPEN_CONNECT: + case TCP_FASTOPEN_KEY: +diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c +index 612f0a641f4cf..f555d335e910d 100644 +--- a/net/sunrpc/clnt.c ++++ b/net/sunrpc/clnt.c +@@ -1799,7 +1799,6 @@ call_allocate(struct rpc_task *task) + + status = xprt->ops->buf_alloc(task); + trace_rpc_buf_alloc(task, status); +- xprt_inject_disconnect(xprt); + if (status == 0) + return; + if (status != -ENOMEM) { +@@ -2457,12 +2456,6 @@ call_decode(struct rpc_task *task) + task->tk_flags &= ~RPC_CALL_MAJORSEEN; + 
} + +- /* +- * Ensure that we see all writes made by xprt_complete_rqst() +- * before it changed req->rq_reply_bytes_recvd. +- */ +- smp_rmb(); +- + /* + * Did we ever call xprt_complete_rqst()? If not, we should assume + * the message is incomplete. +@@ -2471,6 +2464,11 @@ call_decode(struct rpc_task *task) + if (!req->rq_reply_bytes_recvd) + goto out; + ++ /* Ensure that we see all writes made by xprt_complete_rqst() ++ * before it changed req->rq_reply_bytes_recvd. ++ */ ++ smp_rmb(); ++ + req->rq_rcv_buf.len = req->rq_private_buf.len; + trace_rpc_xdr_recvfrom(task, &req->rq_rcv_buf); + +diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c +index d76dc9d95d163..0de918cb3d90d 100644 +--- a/net/sunrpc/svc.c ++++ b/net/sunrpc/svc.c +@@ -846,7 +846,8 @@ void + svc_rqst_free(struct svc_rqst *rqstp) + { + svc_release_buffer(rqstp); +- put_page(rqstp->rq_scratch_page); ++ if (rqstp->rq_scratch_page) ++ put_page(rqstp->rq_scratch_page); + kfree(rqstp->rq_resp); + kfree(rqstp->rq_argp); + kfree(rqstp->rq_auth_data); +diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c +index 2e2f007dfc9f1..7cde41a936a43 100644 +--- a/net/sunrpc/svcsock.c ++++ b/net/sunrpc/svcsock.c +@@ -1171,7 +1171,7 @@ static int svc_tcp_sendto(struct svc_rqst *rqstp) + tcp_sock_set_cork(svsk->sk_sk, true); + err = svc_tcp_sendmsg(svsk->sk_sock, xdr, marker, &sent); + xdr_free_bvec(xdr); +- trace_svcsock_tcp_send(xprt, err < 0 ? err : sent); ++ trace_svcsock_tcp_send(xprt, err < 0 ? (long)err : sent); + if (err < 0 || sent != (xdr->len + sizeof(marker))) + goto out_close; + if (atomic_dec_and_test(&svsk->sk_sendqlen)) +diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c +index 691ccf8049a48..20fe31b1b776f 100644 +--- a/net/sunrpc/xprt.c ++++ b/net/sunrpc/xprt.c +@@ -698,9 +698,9 @@ int xprt_adjust_timeout(struct rpc_rqst *req) + const struct rpc_timeout *to = req->rq_task->tk_client->cl_timeout; + int status = 0; + +- if (time_before(jiffies, req->rq_minortimeo)) +- return status; + if (time_before(jiffies, req->rq_majortimeo)) { ++ if (time_before(jiffies, req->rq_minortimeo)) ++ return status; + if (to->to_exponential) + req->rq_timeout <<= 1; + else +@@ -1469,8 +1469,6 @@ bool xprt_prepare_transmit(struct rpc_task *task) + struct rpc_xprt *xprt = req->rq_xprt; + + if (!xprt_lock_write(xprt, task)) { +- trace_xprt_transmit_queued(xprt, task); +- + /* Race breaker: someone may have transmitted us */ + if (!test_bit(RPC_TASK_NEED_XMIT, &task->tk_runstate)) + rpc_wake_up_queued_task_set_status(&xprt->sending, +@@ -1483,7 +1481,10 @@ bool xprt_prepare_transmit(struct rpc_task *task) + + void xprt_end_transmit(struct rpc_task *task) + { +- xprt_release_write(task->tk_rqstp->rq_xprt, task); ++ struct rpc_xprt *xprt = task->tk_rqstp->rq_xprt; ++ ++ xprt_inject_disconnect(xprt); ++ xprt_release_write(xprt, task); + } + + /** +@@ -1885,7 +1886,6 @@ void xprt_release(struct rpc_task *task) + spin_unlock(&xprt->transport_lock); + if (req->rq_buffer) + xprt->ops->buf_free(task); +- xprt_inject_disconnect(xprt); + xdr_free_bvec(&req->rq_rcv_buf); + xdr_free_bvec(&req->rq_snd_buf); + if (req->rq_cred != NULL) +diff --git a/net/sunrpc/xprtrdma/frwr_ops.c b/net/sunrpc/xprtrdma/frwr_ops.c +index 766a1048a48ad..aca2228095db3 100644 +--- a/net/sunrpc/xprtrdma/frwr_ops.c ++++ b/net/sunrpc/xprtrdma/frwr_ops.c +@@ -257,6 +257,7 @@ int frwr_query_device(struct rpcrdma_ep *ep, const struct ib_device *device) + ep->re_attr.cap.max_send_wr += 1; /* for ib_drain_sq */ + ep->re_attr.cap.max_recv_wr = ep->re_max_requests; + ep->re_attr.cap.max_recv_wr 
+= RPCRDMA_BACKWARD_WRS; ++ ep->re_attr.cap.max_recv_wr += RPCRDMA_MAX_RECV_BATCH; + ep->re_attr.cap.max_recv_wr += 1; /* for ib_drain_rq */ + + ep->re_max_rdma_segs = +@@ -575,7 +576,6 @@ void frwr_unmap_sync(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req) + mr = container_of(frwr, struct rpcrdma_mr, frwr); + bad_wr = bad_wr->next; + +- list_del_init(&mr->mr_list); + frwr_mr_recycle(mr); + } + } +diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c +index 292f066d006e5..21ddd78a8c351 100644 +--- a/net/sunrpc/xprtrdma/rpc_rdma.c ++++ b/net/sunrpc/xprtrdma/rpc_rdma.c +@@ -1430,9 +1430,10 @@ void rpcrdma_reply_handler(struct rpcrdma_rep *rep) + credits = 1; /* don't deadlock */ + else if (credits > r_xprt->rx_ep->re_max_requests) + credits = r_xprt->rx_ep->re_max_requests; ++ rpcrdma_post_recvs(r_xprt, credits + (buf->rb_bc_srv_max_requests << 1), ++ false); + if (buf->rb_credits != credits) + rpcrdma_update_cwnd(r_xprt, credits); +- rpcrdma_post_recvs(r_xprt, false); + + req = rpcr_to_rdmar(rqst); + if (unlikely(req->rl_reply)) +diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c +index 78d29d1bcc203..09953597d055a 100644 +--- a/net/sunrpc/xprtrdma/transport.c ++++ b/net/sunrpc/xprtrdma/transport.c +@@ -262,8 +262,10 @@ xprt_rdma_connect_worker(struct work_struct *work) + * xprt_rdma_inject_disconnect - inject a connection fault + * @xprt: transport context + * +- * If @xprt is connected, disconnect it to simulate spurious connection +- * loss. ++ * If @xprt is connected, disconnect it to simulate spurious ++ * connection loss. Caller must hold @xprt's send lock to ++ * ensure that data structures and hardware resources are ++ * stable during the rdma_disconnect() call. + */ + static void + xprt_rdma_inject_disconnect(struct rpc_xprt *xprt) +diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c +index ec912cf9c618c..f3fffc74ab0fa 100644 +--- a/net/sunrpc/xprtrdma/verbs.c ++++ b/net/sunrpc/xprtrdma/verbs.c +@@ -535,7 +535,7 @@ int rpcrdma_xprt_connect(struct rpcrdma_xprt *r_xprt) + * outstanding Receives. 
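The call_decode() reordering a bit earlier in this stretch is a memory-ordering fix in miniature: the smp_rmb() is only needed, and now only executed, once the task has actually observed rq_reply_bytes_recvd becoming non-zero, pairing with the barrier on the xprt_complete_rqst() side that publishes the reply data first. A userspace rendering of the same payload-then-flag idiom, with C11 acquire/release standing in for the kernel's explicit barriers (a sketch of the pattern, not of the RPC code):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static int reply_data;			/* payload, written first */
static atomic_int reply_bytes_recvd;	/* flag, published last */

static void *writer(void *arg)
{
	reply_data = 42;
	/* release: the payload is visible before the flag */
	atomic_store_explicit(&reply_bytes_recvd, 4, memory_order_release);
	return NULL;
}

static void *reader(void *arg)
{
	/* acquire only once the flag is seen, mirroring the moved
	 * smp_rmb(): no barrier is paid for an incomplete reply. */
	if (atomic_load_explicit(&reply_bytes_recvd, memory_order_acquire))
		printf("reply: %d\n", reply_data);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, writer, NULL);
	pthread_create(&b, NULL, reader, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}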
+ */ + rpcrdma_ep_get(ep); +- rpcrdma_post_recvs(r_xprt, true); ++ rpcrdma_post_recvs(r_xprt, 1, true); + + rc = rdma_connect(ep->re_id, &ep->re_remote_cma); + if (rc) +@@ -1364,21 +1364,21 @@ int rpcrdma_post_sends(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req) + /** + * rpcrdma_post_recvs - Refill the Receive Queue + * @r_xprt: controlling transport instance +- * @temp: mark Receive buffers to be deleted after use ++ * @needed: current credit grant ++ * @temp: mark Receive buffers to be deleted after one use + * + */ +-void rpcrdma_post_recvs(struct rpcrdma_xprt *r_xprt, bool temp) ++void rpcrdma_post_recvs(struct rpcrdma_xprt *r_xprt, int needed, bool temp) + { + struct rpcrdma_buffer *buf = &r_xprt->rx_buf; + struct rpcrdma_ep *ep = r_xprt->rx_ep; + struct ib_recv_wr *wr, *bad_wr; + struct rpcrdma_rep *rep; +- int needed, count, rc; ++ int count, rc; + + rc = 0; + count = 0; + +- needed = buf->rb_credits + (buf->rb_bc_srv_max_requests << 1); + if (likely(ep->re_receive_count > needed)) + goto out; + needed -= ep->re_receive_count; +diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h +index fe3be985e239a..28af11fbe6438 100644 +--- a/net/sunrpc/xprtrdma/xprt_rdma.h ++++ b/net/sunrpc/xprtrdma/xprt_rdma.h +@@ -461,7 +461,7 @@ int rpcrdma_xprt_connect(struct rpcrdma_xprt *r_xprt); + void rpcrdma_xprt_disconnect(struct rpcrdma_xprt *r_xprt); + + int rpcrdma_post_sends(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req); +-void rpcrdma_post_recvs(struct rpcrdma_xprt *r_xprt, bool temp); ++void rpcrdma_post_recvs(struct rpcrdma_xprt *r_xprt, int needed, bool temp); + + /* + * Buffer calls - xprtrdma/verbs.c +diff --git a/net/tipc/netlink_compat.c b/net/tipc/netlink_compat.c +index 5a1ce64039f72..0749df80454d4 100644 +--- a/net/tipc/netlink_compat.c ++++ b/net/tipc/netlink_compat.c +@@ -696,7 +696,7 @@ static int tipc_nl_compat_link_dump(struct tipc_nl_compat_msg *msg, + if (err) + return err; + +- link_info.dest = nla_get_flag(link[TIPC_NLA_LINK_DEST]); ++ link_info.dest = htonl(nla_get_flag(link[TIPC_NLA_LINK_DEST])); + link_info.up = htonl(nla_get_flag(link[TIPC_NLA_LINK_UP])); + nla_strscpy(link_info.str, link[TIPC_NLA_LINK_NAME], + TIPC_MAX_LINK_NAME); +diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h +index 2823b7c3302d0..40f359bf20440 100644 +--- a/net/xdp/xsk_queue.h ++++ b/net/xdp/xsk_queue.h +@@ -128,13 +128,12 @@ static inline bool xskq_cons_read_addr_unchecked(struct xsk_queue *q, u64 *addr) + static inline bool xp_aligned_validate_desc(struct xsk_buff_pool *pool, + struct xdp_desc *desc) + { +- u64 chunk, chunk_end; ++ u64 chunk; + +- chunk = xp_aligned_extract_addr(pool, desc->addr); +- chunk_end = xp_aligned_extract_addr(pool, desc->addr + desc->len); +- if (chunk != chunk_end) ++ if (desc->len > pool->chunk_size) + return false; + ++ chunk = xp_aligned_extract_addr(pool, desc->addr); + if (chunk >= pool->addrs_cnt) + return false; + +diff --git a/samples/bpf/tracex1_kern.c b/samples/bpf/tracex1_kern.c +index 3f4599c9a2022..ef30d2b353b0f 100644 +--- a/samples/bpf/tracex1_kern.c ++++ b/samples/bpf/tracex1_kern.c +@@ -26,7 +26,7 @@ + SEC("kprobe/__netif_receive_skb_core") + int bpf_prog1(struct pt_regs *ctx) + { +- /* attaches to kprobe netif_receive_skb, ++ /* attaches to kprobe __netif_receive_skb_core, + * looks for packets on loobpack device and prints them + */ + char devname[IFNAMSIZ]; +@@ -35,7 +35,7 @@ int bpf_prog1(struct pt_regs *ctx) + int len; + + /* non-portable! 
works for the given kernel only */ +- skb = (struct sk_buff *) PT_REGS_PARM1(ctx); ++ bpf_probe_read_kernel(&skb, sizeof(skb), (void *)PT_REGS_PARM1(ctx)); + dev = _(skb->dev); + len = _(skb->len); + +diff --git a/scripts/Makefile.modpost b/scripts/Makefile.modpost +index 066beffca09af..4ca5579af4e4a 100644 +--- a/scripts/Makefile.modpost ++++ b/scripts/Makefile.modpost +@@ -68,7 +68,20 @@ else + ifeq ($(KBUILD_EXTMOD),) + + input-symdump := vmlinux.symvers +-output-symdump := Module.symvers ++output-symdump := modules-only.symvers ++ ++quiet_cmd_cat = GEN $@ ++ cmd_cat = cat $(real-prereqs) > $@ ++ ++ifneq ($(wildcard vmlinux.symvers),) ++ ++__modpost: Module.symvers ++Module.symvers: vmlinux.symvers modules-only.symvers FORCE ++ $(call if_changed,cat) ++ ++targets += Module.symvers ++ ++endif + + else + +diff --git a/scripts/kconfig/nconf.c b/scripts/kconfig/nconf.c +index e0f9655291665..af814b39b8765 100644 +--- a/scripts/kconfig/nconf.c ++++ b/scripts/kconfig/nconf.c +@@ -504,8 +504,8 @@ static int get_mext_match(const char *match_str, match_f flag) + else if (flag == FIND_NEXT_MATCH_UP) + --match_start; + ++ match_start = (match_start + items_num) % items_num; + index = match_start; +- index = (index + items_num) % items_num; + while (true) { + char *str = k_menu_items[index].str; + if (strcasestr(str, match_str) != NULL) +diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c +index 24725e50c7b4b..10c3fba26f03a 100644 +--- a/scripts/mod/modpost.c ++++ b/scripts/mod/modpost.c +@@ -2423,19 +2423,6 @@ fail: + fatal("parse error in symbol dump file\n"); + } + +-/* For normal builds always dump all symbols. +- * For external modules only dump symbols +- * that are not read from kernel Module.symvers. +- **/ +-static int dump_sym(struct symbol *sym) +-{ +- if (!external_module) +- return 1; +- if (sym->module->from_dump) +- return 0; +- return 1; +-} +- + static void write_dump(const char *fname) + { + struct buffer buf = { }; +@@ -2446,7 +2433,7 @@ static void write_dump(const char *fname) + for (n = 0; n < SYMBOL_HASH_SIZE ; n++) { + symbol = symbolhash[n]; + while (symbol) { +- if (dump_sym(symbol)) { ++ if (!symbol->module->from_dump) { + namespace = symbol->namespace; + buf_printf(&buf, "0x%08x\t%s\t%s\t%s\t%s\n", + symbol->crc, symbol->name, +diff --git a/security/keys/trusted-keys/trusted_tpm1.c b/security/keys/trusted-keys/trusted_tpm1.c +index 1e13c9f7ea8c1..56c9b48460d9e 100644 +--- a/security/keys/trusted-keys/trusted_tpm1.c ++++ b/security/keys/trusted-keys/trusted_tpm1.c +@@ -500,10 +500,12 @@ static int tpm_seal(struct tpm_buf *tb, uint16_t keytype, + + ret = tpm_get_random(chip, td->nonceodd, TPM_NONCE_SIZE); + if (ret < 0) +- return ret; ++ goto out; + +- if (ret != TPM_NONCE_SIZE) +- return -EIO; ++ if (ret != TPM_NONCE_SIZE) { ++ ret = -EIO; ++ goto out; ++ } + + ordinal = htonl(TPM_ORD_SEAL); + datsize = htonl(datalen); +diff --git a/sound/firewire/bebob/bebob_stream.c b/sound/firewire/bebob/bebob_stream.c +index bbae04793c50e..c18017e0a3d95 100644 +--- a/sound/firewire/bebob/bebob_stream.c ++++ b/sound/firewire/bebob/bebob_stream.c +@@ -517,20 +517,22 @@ int snd_bebob_stream_init_duplex(struct snd_bebob *bebob) + static int keep_resources(struct snd_bebob *bebob, struct amdtp_stream *stream, + unsigned int rate, unsigned int index) + { +- struct snd_bebob_stream_formation *formation; ++ unsigned int pcm_channels; ++ unsigned int midi_ports; + struct cmp_connection *conn; + int err; + + if (stream == &bebob->tx_stream) { +- formation = bebob->tx_stream_formations + 
index; ++ pcm_channels = bebob->tx_stream_formations[index].pcm; ++ midi_ports = bebob->midi_input_ports; + conn = &bebob->out_conn; + } else { +- formation = bebob->rx_stream_formations + index; ++ pcm_channels = bebob->rx_stream_formations[index].pcm; ++ midi_ports = bebob->midi_output_ports; + conn = &bebob->in_conn; + } + +- err = amdtp_am824_set_parameters(stream, rate, formation->pcm, +- formation->midi, false); ++ err = amdtp_am824_set_parameters(stream, rate, pcm_channels, midi_ports, false); + if (err < 0) + return err; + +diff --git a/sound/pci/hda/ideapad_s740_helper.c b/sound/pci/hda/ideapad_s740_helper.c +new file mode 100644 +index 0000000000000..564b9086e52db +--- /dev/null ++++ b/sound/pci/hda/ideapad_s740_helper.c +@@ -0,0 +1,492 @@ ++// SPDX-License-Identifier: GPL-2.0 ++/* Fixes for Lenovo Ideapad S740, to be included from codec driver */ ++ ++static const struct hda_verb alc285_ideapad_s740_coefs[] = { ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x10 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0320 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x24 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0041 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x24 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0041 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x007f }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x007f }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x003c }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0011 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x003c }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0011 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x000c }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001a }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, 
AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x000c }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001a }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x000f }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0042 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x000f }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0042 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0010 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0040 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0010 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0040 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0003 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0009 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0003 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0009 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001c }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x004c }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001c }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x004c }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001d }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x004e }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001d }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x004e }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001b }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001b }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0019 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0025 }, ++{ 0x20, 
AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0019 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0025 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0018 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0037 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0018 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0037 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001a }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0040 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001a }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0040 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0016 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0076 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0016 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0076 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0017 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0010 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0017 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0010 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0015 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0015 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0015 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0015 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0007 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0086 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0007 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0086 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0002 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, 
AC_VERB_SET_PROC_COEF, 0x0001 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0002 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0002 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0002 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x24 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0042 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x24 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0042 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x007f }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x007f }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x003c }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0011 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x003c }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0011 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x000c }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x002a }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x000c }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x002a }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, 
AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x000f }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0046 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x000f }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0046 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0010 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0044 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0010 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0044 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0003 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0009 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0003 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0009 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001c }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x004c }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001c }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x004c }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001b }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001b }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0019 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0025 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0019 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0025 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0018 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0037 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0018 }, ++{ 0x20, 
AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0037 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001a }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0040 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001a }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0040 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0016 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0076 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0016 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0076 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0017 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0010 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0017 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0010 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0015 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0015 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0015 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0015 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0007 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0086 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0007 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0086 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0002 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0002 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 }, ++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0002 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 }, ++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, ++{} ++}; ++ ++static void alc285_fixup_ideapad_s740_coef(struct 
hda_codec *codec, ++ const struct hda_fixup *fix, ++ int action) ++{ ++ switch (action) { ++ case HDA_FIXUP_ACT_PRE_PROBE: ++ snd_hda_add_verbs(codec, alc285_ideapad_s740_coefs); ++ break; ++ } ++} +diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c +index 45ae845e82df6..4b2cc8cb55c49 100644 +--- a/sound/pci/hda/patch_hdmi.c ++++ b/sound/pci/hda/patch_hdmi.c +@@ -1848,16 +1848,12 @@ static int hdmi_add_pin(struct hda_codec *codec, hda_nid_t pin_nid) + */ + if (spec->intel_hsw_fixup) { + /* +- * On Intel platforms, device entries number is +- * changed dynamically. If there is a DP MST +- * hub connected, the device entries number is 3. +- * Otherwise, it is 1. +- * Here we manually set dev_num to 3, so that +- * we can initialize all the device entries when +- * bootup statically. ++ * On Intel platforms, device entries count returned ++ * by AC_PAR_DEVLIST_LEN is dynamic, and depends on ++ * the type of receiver that is connected. Allocate pin ++ * structures based on worst case. + */ +- dev_num = 3; +- spec->dev_num = 3; ++ dev_num = spec->dev_num; + } else if (spec->dyn_pcm_assign && codec->dp_mst) { + dev_num = snd_hda_get_num_devices(codec, pin_nid) + 1; + /* +@@ -2658,7 +2654,7 @@ static void generic_acomp_pin_eld_notify(void *audio_ptr, int port, int dev_id) + /* skip notification during system suspend (but not in runtime PM); + * the state will be updated at resume + */ +- if (snd_power_get_state(codec->card) != SNDRV_CTL_POWER_D0) ++ if (codec->core.dev.power.power_state.event == PM_EVENT_SUSPEND) + return; + /* ditto during suspend/resume process itself */ + if (snd_hdac_is_in_pm(&codec->core)) +@@ -2844,7 +2840,7 @@ static void intel_pin_eld_notify(void *audio_ptr, int port, int pipe) + /* skip notification during system suspend (but not in runtime PM); + * the state will be updated at resume + */ +- if (snd_power_get_state(codec->card) != SNDRV_CTL_POWER_D0) ++ if (codec->core.dev.power.power_state.event == PM_EVENT_SUSPEND) + return; + /* ditto during suspend/resume process itself */ + if (snd_hdac_is_in_pm(&codec->core)) +@@ -2942,7 +2938,7 @@ static int parse_intel_hdmi(struct hda_codec *codec) + + /* Intel Haswell and onwards; audio component with eld notifier */ + static int intel_hsw_common_init(struct hda_codec *codec, hda_nid_t vendor_nid, +- const int *port_map, int port_num) ++ const int *port_map, int port_num, int dev_num) + { + struct hdmi_spec *spec; + int err; +@@ -2957,6 +2953,7 @@ static int intel_hsw_common_init(struct hda_codec *codec, hda_nid_t vendor_nid, + spec->port_map = port_map; + spec->port_num = port_num; + spec->intel_hsw_fixup = true; ++ spec->dev_num = dev_num; + + intel_haswell_enable_all_pins(codec, true); + intel_haswell_fixup_enable_dp12(codec); +@@ -2982,12 +2979,12 @@ static int intel_hsw_common_init(struct hda_codec *codec, hda_nid_t vendor_nid, + + static int patch_i915_hsw_hdmi(struct hda_codec *codec) + { +- return intel_hsw_common_init(codec, 0x08, NULL, 0); ++ return intel_hsw_common_init(codec, 0x08, NULL, 0, 3); + } + + static int patch_i915_glk_hdmi(struct hda_codec *codec) + { +- return intel_hsw_common_init(codec, 0x0b, NULL, 0); ++ return intel_hsw_common_init(codec, 0x0b, NULL, 0, 3); + } + + static int patch_i915_icl_hdmi(struct hda_codec *codec) +@@ -2998,7 +2995,7 @@ static int patch_i915_icl_hdmi(struct hda_codec *codec) + */ + static const int map[] = {0x0, 0x4, 0x6, 0x8, 0xa, 0xb}; + +- return intel_hsw_common_init(codec, 0x02, map, ARRAY_SIZE(map)); ++ return intel_hsw_common_init(codec, 0x02, map, 
ARRAY_SIZE(map), 3); + } + + static int patch_i915_tgl_hdmi(struct hda_codec *codec) +@@ -3010,7 +3007,7 @@ static int patch_i915_tgl_hdmi(struct hda_codec *codec) + static const int map[] = {0x4, 0x6, 0x8, 0xa, 0xb, 0xc, 0xd, 0xe, 0xf}; + int ret; + +- ret = intel_hsw_common_init(codec, 0x02, map, ARRAY_SIZE(map)); ++ ret = intel_hsw_common_init(codec, 0x02, map, ARRAY_SIZE(map), 4); + if (!ret) { + struct hdmi_spec *spec = codec->spec; + +diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c +index 8ec57bd351dfe..1fe70f2fe4fe8 100644 +--- a/sound/pci/hda/patch_realtek.c ++++ b/sound/pci/hda/patch_realtek.c +@@ -6282,6 +6282,9 @@ static void alc_fixup_thinkpad_acpi(struct hda_codec *codec, + /* for alc295_fixup_hp_top_speakers */ + #include "hp_x360_helper.c" + ++/* for alc285_fixup_ideapad_s740_coef() */ ++#include "ideapad_s740_helper.c" ++ + enum { + ALC269_FIXUP_GPIO2, + ALC269_FIXUP_SONY_VAIO, +@@ -6481,6 +6484,7 @@ enum { + ALC282_FIXUP_ACER_DISABLE_LINEOUT, + ALC255_FIXUP_ACER_LIMIT_INT_MIC_BOOST, + ALC256_FIXUP_ACER_HEADSET_MIC, ++ ALC285_FIXUP_IDEAPAD_S740_COEF, + }; + + static const struct hda_fixup alc269_fixups[] = { +@@ -7973,6 +7977,12 @@ static const struct hda_fixup alc269_fixups[] = { + .chained = true, + .chain_id = ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC + }, ++ [ALC285_FIXUP_IDEAPAD_S740_COEF] = { ++ .type = HDA_FIXUP_FUNC, ++ .v.func = alc285_fixup_ideapad_s740_coef, ++ .chained = true, ++ .chain_id = ALC269_FIXUP_THINKPAD_ACPI, ++ }, + }; + + static const struct snd_pci_quirk alc269_fixup_tbl[] = { +@@ -8320,6 +8330,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { + SND_PCI_QUIRK(0x17aa, 0x3176, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC), + SND_PCI_QUIRK(0x17aa, 0x3178, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC), + SND_PCI_QUIRK(0x17aa, 0x3818, "Lenovo C940", ALC298_FIXUP_LENOVO_SPK_VOLUME), ++ SND_PCI_QUIRK(0x17aa, 0x3827, "Ideapad S740", ALC285_FIXUP_IDEAPAD_S740_COEF), + SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI), + SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC), + SND_PCI_QUIRK(0x17aa, 0x3978, "Lenovo B50-70", ALC269_FIXUP_DMIC_THINKPAD_ACPI), +diff --git a/sound/pci/rme9652/hdsp.c b/sound/pci/rme9652/hdsp.c +index 4cf879c42dc4c..720297cbdf875 100644 +--- a/sound/pci/rme9652/hdsp.c ++++ b/sound/pci/rme9652/hdsp.c +@@ -5390,7 +5390,8 @@ static int snd_hdsp_free(struct hdsp *hdsp) + if (hdsp->port) + pci_release_regions(hdsp->pci); + +- pci_disable_device(hdsp->pci); ++ if (pci_is_enabled(hdsp->pci)) ++ pci_disable_device(hdsp->pci); + return 0; + } + +diff --git a/sound/pci/rme9652/hdspm.c b/sound/pci/rme9652/hdspm.c +index 8d900c132f0f4..97a0bff96b288 100644 +--- a/sound/pci/rme9652/hdspm.c ++++ b/sound/pci/rme9652/hdspm.c +@@ -6883,7 +6883,8 @@ static int snd_hdspm_free(struct hdspm * hdspm) + if (hdspm->port) + pci_release_regions(hdspm->pci); + +- pci_disable_device(hdspm->pci); ++ if (pci_is_enabled(hdspm->pci)) ++ pci_disable_device(hdspm->pci); + return 0; + } + +diff --git a/sound/pci/rme9652/rme9652.c b/sound/pci/rme9652/rme9652.c +index 4df992e846f23..7a4d395abceeb 100644 +--- a/sound/pci/rme9652/rme9652.c ++++ b/sound/pci/rme9652/rme9652.c +@@ -1731,7 +1731,8 @@ static int snd_rme9652_free(struct snd_rme9652 *rme9652) + if (rme9652->port) + pci_release_regions(rme9652->pci); + +- pci_disable_device(rme9652->pci); ++ if (pci_is_enabled(rme9652->pci)) ++ pci_disable_device(rme9652->pci); + return 0; + } + +diff --git a/sound/soc/codecs/rt286.c 
b/sound/soc/codecs/rt286.c +index 8abe232ca4a4c..ff23a7d4d2ac5 100644 +--- a/sound/soc/codecs/rt286.c ++++ b/sound/soc/codecs/rt286.c +@@ -171,6 +171,9 @@ static bool rt286_readable_register(struct device *dev, unsigned int reg) + case RT286_PROC_COEF: + case RT286_SET_AMP_GAIN_ADC_IN1: + case RT286_SET_AMP_GAIN_ADC_IN2: ++ case RT286_SET_GPIO_MASK: ++ case RT286_SET_GPIO_DIRECTION: ++ case RT286_SET_GPIO_DATA: + case RT286_SET_POWER(RT286_DAC_OUT1): + case RT286_SET_POWER(RT286_DAC_OUT2): + case RT286_SET_POWER(RT286_ADC_IN1): +@@ -1117,12 +1120,11 @@ static const struct dmi_system_id force_combo_jack_table[] = { + { } + }; + +-static const struct dmi_system_id dmi_dell_dino[] = { ++static const struct dmi_system_id dmi_dell[] = { + { +- .ident = "Dell Dino", ++ .ident = "Dell", + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."), +- DMI_MATCH(DMI_PRODUCT_NAME, "XPS 13 9343") + } + }, + { } +@@ -1133,7 +1135,7 @@ static int rt286_i2c_probe(struct i2c_client *i2c, + { + struct rt286_platform_data *pdata = dev_get_platdata(&i2c->dev); + struct rt286_priv *rt286; +- int i, ret, val; ++ int i, ret, vendor_id; + + rt286 = devm_kzalloc(&i2c->dev, sizeof(*rt286), + GFP_KERNEL); +@@ -1149,14 +1151,15 @@ static int rt286_i2c_probe(struct i2c_client *i2c, + } + + ret = regmap_read(rt286->regmap, +- RT286_GET_PARAM(AC_NODE_ROOT, AC_PAR_VENDOR_ID), &val); ++ RT286_GET_PARAM(AC_NODE_ROOT, AC_PAR_VENDOR_ID), &vendor_id); + if (ret != 0) { + dev_err(&i2c->dev, "I2C error %d\n", ret); + return ret; + } +- if (val != RT286_VENDOR_ID && val != RT288_VENDOR_ID) { ++ if (vendor_id != RT286_VENDOR_ID && vendor_id != RT288_VENDOR_ID) { + dev_err(&i2c->dev, +- "Device with ID register %#x is not rt286\n", val); ++ "Device with ID register %#x is not rt286\n", ++ vendor_id); + return -ENODEV; + } + +@@ -1180,8 +1183,8 @@ static int rt286_i2c_probe(struct i2c_client *i2c, + if (pdata) + rt286->pdata = *pdata; + +- if (dmi_check_system(force_combo_jack_table) || +- dmi_check_system(dmi_dell_dino)) ++ if ((vendor_id == RT288_VENDOR_ID && dmi_check_system(dmi_dell)) || ++ dmi_check_system(force_combo_jack_table)) + rt286->pdata.cbj_en = true; + + regmap_write(rt286->regmap, RT286_SET_AUDIO_POWER, AC_PWRST_D3); +@@ -1220,7 +1223,7 @@ static int rt286_i2c_probe(struct i2c_client *i2c, + regmap_update_bits(rt286->regmap, RT286_DEPOP_CTRL3, 0xf777, 0x4737); + regmap_update_bits(rt286->regmap, RT286_DEPOP_CTRL4, 0x00ff, 0x003f); + +- if (dmi_check_system(dmi_dell_dino)) { ++ if (vendor_id == RT288_VENDOR_ID && dmi_check_system(dmi_dell)) { + regmap_update_bits(rt286->regmap, + RT286_SET_GPIO_MASK, 0x40, 0x40); + regmap_update_bits(rt286->regmap, +diff --git a/sound/soc/codecs/rt5670.c b/sound/soc/codecs/rt5670.c +index 4063aac2a4431..dd69d874bad2a 100644 +--- a/sound/soc/codecs/rt5670.c ++++ b/sound/soc/codecs/rt5670.c +@@ -2980,6 +2980,18 @@ static const struct dmi_system_id dmi_platform_intel_quirks[] = { + RT5670_GPIO1_IS_IRQ | + RT5670_JD_MODE3), + }, ++ { ++ .callback = rt5670_quirk_cb, ++ .ident = "Dell Venue 10 Pro 5055", ++ .matches = { ++ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."), ++ DMI_MATCH(DMI_PRODUCT_NAME, "Venue 10 Pro 5055"), ++ }, ++ .driver_data = (unsigned long *)(RT5670_DMIC_EN | ++ RT5670_DMIC2_INR | ++ RT5670_GPIO1_IS_IRQ | ++ RT5670_JD_MODE1), ++ }, + { + .callback = rt5670_quirk_cb, + .ident = "Aegex 10 tablet (RU2)", +diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c +index 5d48cc359c3da..22912cab5e638 100644 +--- a/sound/soc/intel/boards/bytcr_rt5640.c 
++++ b/sound/soc/intel/boards/bytcr_rt5640.c +@@ -482,6 +482,9 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = { + DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "T100TAF"), + }, + .driver_data = (void *)(BYT_RT5640_IN1_MAP | ++ BYT_RT5640_JD_SRC_JD2_IN4N | ++ BYT_RT5640_OVCD_TH_2000UA | ++ BYT_RT5640_OVCD_SF_0P75 | + BYT_RT5640_MONO_SPEAKER | + BYT_RT5640_DIFF_MIC | + BYT_RT5640_SSP0_AIF2 | +@@ -515,6 +518,23 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = { + BYT_RT5640_SSP0_AIF1 | + BYT_RT5640_MCLK_EN), + }, ++ { ++ /* Chuwi Hi8 (CWI509) */ ++ .matches = { ++ DMI_MATCH(DMI_BOARD_VENDOR, "Hampoo"), ++ DMI_MATCH(DMI_BOARD_NAME, "BYT-PA03C"), ++ DMI_MATCH(DMI_SYS_VENDOR, "ilife"), ++ DMI_MATCH(DMI_PRODUCT_NAME, "S806"), ++ }, ++ .driver_data = (void *)(BYT_RT5640_IN1_MAP | ++ BYT_RT5640_JD_SRC_JD2_IN4N | ++ BYT_RT5640_OVCD_TH_2000UA | ++ BYT_RT5640_OVCD_SF_0P75 | ++ BYT_RT5640_MONO_SPEAKER | ++ BYT_RT5640_DIFF_MIC | ++ BYT_RT5640_SSP0_AIF1 | ++ BYT_RT5640_MCLK_EN), ++ }, + { + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "Circuitco"), +diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c +index 8adce6417b021..ecd3f90f4bbea 100644 +--- a/sound/soc/intel/boards/sof_sdw.c ++++ b/sound/soc/intel/boards/sof_sdw.c +@@ -187,6 +187,17 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = { + SOF_RT715_DAI_ID_FIX | + SOF_SDW_FOUR_SPK), + }, ++ /* AlderLake devices */ ++ { ++ .callback = sof_sdw_quirk_cb, ++ .matches = { ++ DMI_MATCH(DMI_SYS_VENDOR, "Intel Corporation"), ++ DMI_MATCH(DMI_PRODUCT_NAME, "Alder Lake Client Platform"), ++ }, ++ .driver_data = (void *)(SOF_RT711_JD_SRC_JD1 | ++ SOF_SDW_TGL_HDMI | ++ SOF_SDW_PCH_DMIC), ++ }, + {} + }; + +diff --git a/sound/soc/sh/rcar/core.c b/sound/soc/sh/rcar/core.c +index 1029d8d9d800a..d2b4632d9c2a4 100644 +--- a/sound/soc/sh/rcar/core.c ++++ b/sound/soc/sh/rcar/core.c +@@ -1428,8 +1428,75 @@ static int rsnd_hw_params(struct snd_soc_component *component, + } + if (io->converted_chan) + dev_dbg(dev, "convert channels = %d\n", io->converted_chan); +- if (io->converted_rate) ++ if (io->converted_rate) { ++ /* ++ * SRC supports convert rates from params_rate(hw_params)/k_down ++ * to params_rate(hw_params)*k_up, where k_up is always 6, and ++ * k_down depends on number of channels and SRC unit. ++ * So all SRC units can upsample audio up to 6 times regardless ++ * its number of channels. And all SRC units can downsample ++ * 2 channel audio up to 6 times too. ++ */ ++ int k_up = 6; ++ int k_down = 6; ++ int channel; ++ struct rsnd_mod *src_mod = rsnd_io_to_mod_src(io); ++ + dev_dbg(dev, "convert rate = %d\n", io->converted_rate); ++ ++ channel = io->converted_chan ? io->converted_chan : ++ params_channels(hw_params); ++ ++ switch (rsnd_mod_id(src_mod)) { ++ /* ++ * SRC0 can downsample 4, 6 and 8 channel audio up to 4 times. ++ * SRC1, SRC3 and SRC4 can downsample 4 channel audio ++ * up to 4 times. ++ * SRC1, SRC3 and SRC4 can downsample 6 and 8 channel audio ++ * no more than twice. 
++ */ ++ case 1: ++ case 3: ++ case 4: ++ if (channel > 4) { ++ k_down = 2; ++ break; ++ } ++ fallthrough; ++ case 0: ++ if (channel > 2) ++ k_down = 4; ++ break; ++ ++ /* Other SRC units do not support more than 2 channels */ ++ default: ++ if (channel > 2) ++ return -EINVAL; ++ } ++ ++ if (params_rate(hw_params) > io->converted_rate * k_down) { ++ hw_param_interval(hw_params, SNDRV_PCM_HW_PARAM_RATE)->min = ++ io->converted_rate * k_down; ++ hw_param_interval(hw_params, SNDRV_PCM_HW_PARAM_RATE)->max = ++ io->converted_rate * k_down; ++ hw_params->cmask |= SNDRV_PCM_HW_PARAM_RATE; ++ } else if (params_rate(hw_params) * k_up < io->converted_rate) { ++ hw_param_interval(hw_params, SNDRV_PCM_HW_PARAM_RATE)->min = ++ (io->converted_rate + k_up - 1) / k_up; ++ hw_param_interval(hw_params, SNDRV_PCM_HW_PARAM_RATE)->max = ++ (io->converted_rate + k_up - 1) / k_up; ++ hw_params->cmask |= SNDRV_PCM_HW_PARAM_RATE; ++ } ++ ++ /* ++ * TBD: Max SRC input and output rates also depend on number ++ * of channels and SRC unit: ++ * SRC1, SRC3 and SRC4 do not support more than 128kHz ++ * for 6 channel and 96kHz for 8 channel audio. ++ * Perhaps this function should return EINVAL if the input or ++ * the output rate exceeds the limitation. ++ */ ++ } + } + + return rsnd_dai_call(hw_params, io, substream, hw_params); +diff --git a/sound/soc/sh/rcar/ssi.c b/sound/soc/sh/rcar/ssi.c +index d0ded427a8363..042207c116514 100644 +--- a/sound/soc/sh/rcar/ssi.c ++++ b/sound/soc/sh/rcar/ssi.c +@@ -507,10 +507,15 @@ static int rsnd_ssi_init(struct rsnd_mod *mod, + struct rsnd_priv *priv) + { + struct rsnd_ssi *ssi = rsnd_mod_to_ssi(mod); ++ int ret; + + if (!rsnd_ssi_is_run_mods(mod, io)) + return 0; + ++ ret = rsnd_ssi_master_clk_start(mod, io); ++ if (ret < 0) ++ return ret; ++ + ssi->usrcnt++; + + rsnd_mod_power_on(mod); +@@ -792,7 +797,6 @@ static void __rsnd_ssi_interrupt(struct rsnd_mod *mod, + SSI_SYS_STATUS(i * 2), + 0xf << (id * 4)); + stop = true; +- break; + } + } + break; +@@ -810,7 +814,6 @@ static void __rsnd_ssi_interrupt(struct rsnd_mod *mod, + SSI_SYS_STATUS((i * 2) + 1), + 0xf << 4); + stop = true; +- break; + } + } + break; +@@ -1060,13 +1063,6 @@ static int rsnd_ssi_pio_pointer(struct rsnd_mod *mod, + return 0; + } + +-static int rsnd_ssi_prepare(struct rsnd_mod *mod, +- struct rsnd_dai_stream *io, +- struct rsnd_priv *priv) +-{ +- return rsnd_ssi_master_clk_start(mod, io); +-} +- + static struct rsnd_mod_ops rsnd_ssi_pio_ops = { + .name = SSI_NAME, + .probe = rsnd_ssi_common_probe, +@@ -1079,7 +1075,6 @@ static struct rsnd_mod_ops rsnd_ssi_pio_ops = { + .pointer = rsnd_ssi_pio_pointer, + .pcm_new = rsnd_ssi_pcm_new, + .hw_params = rsnd_ssi_hw_params, +- .prepare = rsnd_ssi_prepare, + .get_status = rsnd_ssi_get_status, + }; + +@@ -1166,7 +1161,6 @@ static struct rsnd_mod_ops rsnd_ssi_dma_ops = { + .pcm_new = rsnd_ssi_pcm_new, + .fallback = rsnd_ssi_fallback, + .hw_params = rsnd_ssi_hw_params, +- .prepare = rsnd_ssi_prepare, + .get_status = rsnd_ssi_get_status, + }; + +diff --git a/sound/soc/soc-compress.c b/sound/soc/soc-compress.c +index 246a5e32e22a2..b4810266f5e5d 100644 +--- a/sound/soc/soc-compress.c ++++ b/sound/soc/soc-compress.c +@@ -153,7 +153,9 @@ static int soc_compr_open_fe(struct snd_compr_stream *cstream) + fe->dpcm[stream].state = SND_SOC_DPCM_STATE_OPEN; + fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_NO; + ++ mutex_lock_nested(&fe->card->pcm_mutex, fe->card->pcm_subclass); + snd_soc_runtime_activate(fe, stream); ++ mutex_unlock(&fe->card->pcm_mutex); + + 
mutex_unlock(&fe->card->mutex); + +@@ -181,7 +183,9 @@ static int soc_compr_free_fe(struct snd_compr_stream *cstream) + + mutex_lock_nested(&fe->card->mutex, SND_SOC_CARD_CLASS_RUNTIME); + ++ mutex_lock_nested(&fe->card->pcm_mutex, fe->card->pcm_subclass); + snd_soc_runtime_deactivate(fe, stream); ++ mutex_unlock(&fe->card->pcm_mutex); + + fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_FE; + +diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h +index 48facd2626585..8a8fe2b980a18 100644 +--- a/sound/usb/quirks-table.h ++++ b/sound/usb/quirks-table.h +@@ -3827,6 +3827,69 @@ AU0828_DEVICE(0x2040, 0x7270, "Hauppauge", "HVR-950Q"), + } + } + }, ++{ ++ /* ++ * Pioneer DJ DJM-850 ++ * 8 channels playback and 8 channels capture @ 44.1/48/96kHz S24LE ++ * Playback on EP 0x05 ++ * Capture on EP 0x86 ++ */ ++ USB_DEVICE_VENDOR_SPEC(0x08e4, 0x0163), ++ .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) { ++ .ifnum = QUIRK_ANY_INTERFACE, ++ .type = QUIRK_COMPOSITE, ++ .data = (const struct snd_usb_audio_quirk[]) { ++ { ++ .ifnum = 0, ++ .type = QUIRK_AUDIO_FIXED_ENDPOINT, ++ .data = &(const struct audioformat) { ++ .formats = SNDRV_PCM_FMTBIT_S24_3LE, ++ .channels = 8, ++ .iface = 0, ++ .altsetting = 1, ++ .altset_idx = 1, ++ .endpoint = 0x05, ++ .ep_attr = USB_ENDPOINT_XFER_ISOC| ++ USB_ENDPOINT_SYNC_ASYNC| ++ USB_ENDPOINT_USAGE_DATA, ++ .rates = SNDRV_PCM_RATE_44100| ++ SNDRV_PCM_RATE_48000| ++ SNDRV_PCM_RATE_96000, ++ .rate_min = 44100, ++ .rate_max = 96000, ++ .nr_rates = 3, ++ .rate_table = (unsigned int[]) { 44100, 48000, 96000 } ++ } ++ }, ++ { ++ .ifnum = 0, ++ .type = QUIRK_AUDIO_FIXED_ENDPOINT, ++ .data = &(const struct audioformat) { ++ .formats = SNDRV_PCM_FMTBIT_S24_3LE, ++ .channels = 8, ++ .iface = 0, ++ .altsetting = 1, ++ .altset_idx = 1, ++ .endpoint = 0x86, ++ .ep_idx = 1, ++ .ep_attr = USB_ENDPOINT_XFER_ISOC| ++ USB_ENDPOINT_SYNC_ASYNC| ++ USB_ENDPOINT_USAGE_DATA, ++ .rates = SNDRV_PCM_RATE_44100| ++ SNDRV_PCM_RATE_48000| ++ SNDRV_PCM_RATE_96000, ++ .rate_min = 44100, ++ .rate_max = 96000, ++ .nr_rates = 3, ++ .rate_table = (unsigned int[]) { 44100, 48000, 96000 } ++ } ++ }, ++ { ++ .ifnum = -1 ++ } ++ } ++ } ++}, + { + /* + * Pioneer DJ DJM-450 +diff --git a/tools/lib/bpf/ringbuf.c b/tools/lib/bpf/ringbuf.c +index e7a8d847161f2..1d80ad4e0de8d 100644 +--- a/tools/lib/bpf/ringbuf.c ++++ b/tools/lib/bpf/ringbuf.c +@@ -202,9 +202,11 @@ static inline int roundup_len(__u32 len) + return (len + 7) / 8 * 8; + } + +-static int ringbuf_process_ring(struct ring* r) ++static int64_t ringbuf_process_ring(struct ring* r) + { +- int *len_ptr, len, err, cnt = 0; ++ int *len_ptr, len, err; ++ /* 64-bit to avoid overflow in case of extreme application behavior */ ++ int64_t cnt = 0; + unsigned long cons_pos, prod_pos; + bool got_new_data; + void *sample; +@@ -244,12 +246,14 @@ done: + } + + /* Consume available ring buffer(s) data without event polling. +- * Returns number of records consumed across all registered ring buffers, or +- * negative number if any of the callbacks return error. ++ * Returns number of records consumed across all registered ring buffers (or ++ * INT_MAX, whichever is less), or negative number if any of the callbacks ++ * return error. 
+ */ + int ring_buffer__consume(struct ring_buffer *rb) + { +- int i, err, res = 0; ++ int64_t err, res = 0; ++ int i; + + for (i = 0; i < rb->ring_cnt; i++) { + struct ring *ring = &rb->rings[i]; +@@ -259,18 +263,24 @@ int ring_buffer__consume(struct ring_buffer *rb) + return err; + res += err; + } ++ if (res > INT_MAX) ++ return INT_MAX; + return res; + } + + /* Poll for available data and consume records, if any are available. +- * Returns number of records consumed, or negative number, if any of the +- * registered callbacks returned error. ++ * Returns number of records consumed (or INT_MAX, whichever is less), or ++ * negative number, if any of the registered callbacks returned error. + */ + int ring_buffer__poll(struct ring_buffer *rb, int timeout_ms) + { +- int i, cnt, err, res = 0; ++ int i, cnt; ++ int64_t err, res = 0; + + cnt = epoll_wait(rb->epoll_fd, rb->events, rb->ring_cnt, timeout_ms); ++ if (cnt < 0) ++ return -errno; ++ + for (i = 0; i < cnt; i++) { + __u32 ring_id = rb->events[i].data.fd; + struct ring *ring = &rb->rings[ring_id]; +@@ -280,7 +290,9 @@ int ring_buffer__poll(struct ring_buffer *rb, int timeout_ms) + return err; + res += err; + } +- return cnt < 0 ? -errno : res; ++ if (res > INT_MAX) ++ return INT_MAX; ++ return res; + } + + /* Get an fd that can be used to sleep until data is available in the ring(s) */ +diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config +index d8e59d31399a5..c955cd683e220 100644 +--- a/tools/perf/Makefile.config ++++ b/tools/perf/Makefile.config +@@ -530,6 +530,7 @@ ifndef NO_LIBELF + ifdef LIBBPF_DYNAMIC + ifeq ($(feature-libbpf), 1) + EXTLIBS += -lbpf ++ $(call detected,CONFIG_LIBBPF_DYNAMIC) + else + dummy := $(error Error: No libbpf devel library found, please install libbpf-devel); + endif +diff --git a/tools/perf/util/Build b/tools/perf/util/Build +index e3e12f9d4733e..5a296ac694157 100644 +--- a/tools/perf/util/Build ++++ b/tools/perf/util/Build +@@ -141,7 +141,14 @@ perf-$(CONFIG_LIBELF) += symbol-elf.o + perf-$(CONFIG_LIBELF) += probe-file.o + perf-$(CONFIG_LIBELF) += probe-event.o + ++ifdef CONFIG_LIBBPF_DYNAMIC ++ hashmap := 1 ++endif + ifndef CONFIG_LIBBPF ++ hashmap := 1 ++endif ++ ++ifdef hashmap + perf-y += hashmap.o + endif + +diff --git a/tools/testing/selftests/drivers/net/mlxsw/mirror_gre_scale.sh b/tools/testing/selftests/drivers/net/mlxsw/mirror_gre_scale.sh +index 6f3a70df63bc6..e00435753008a 100644 +--- a/tools/testing/selftests/drivers/net/mlxsw/mirror_gre_scale.sh ++++ b/tools/testing/selftests/drivers/net/mlxsw/mirror_gre_scale.sh +@@ -120,12 +120,13 @@ __mirror_gre_test() + sleep 5 + + for ((i = 0; i < count; ++i)); do ++ local sip=$(mirror_gre_ipv6_addr 1 $i)::1 + local dip=$(mirror_gre_ipv6_addr 1 $i)::2 + local htun=h3-gt6-$i + local message + + icmp6_capture_install $htun +- mirror_test v$h1 "" $dip $htun 100 10 ++ mirror_test v$h1 $sip $dip $htun 100 10 + icmp6_capture_uninstall $htun + done + } +diff --git a/tools/testing/selftests/drivers/net/mlxsw/sch_red_core.sh b/tools/testing/selftests/drivers/net/mlxsw/sch_red_core.sh +index b0cb1aaffddab..33ddd01689bee 100644 +--- a/tools/testing/selftests/drivers/net/mlxsw/sch_red_core.sh ++++ b/tools/testing/selftests/drivers/net/mlxsw/sch_red_core.sh +@@ -507,8 +507,8 @@ do_red_test() + check_err $? "backlog $backlog / $limit Got $pct% marked packets, expected == 0." + local diff=$((limit - backlog)) + pct=$((100 * diff / limit)) +- ((0 <= pct && pct <= 5)) +- check_err $? 
"backlog $backlog / $limit expected <= 5% distance" ++ ((0 <= pct && pct <= 10)) ++ check_err $? "backlog $backlog / $limit expected <= 10% distance" + log_test "TC $((vlan - 10)): RED backlog > limit" + + stop_traffic +diff --git a/tools/testing/selftests/lib.mk b/tools/testing/selftests/lib.mk +index be17462fe1467..0af84ad48aa77 100644 +--- a/tools/testing/selftests/lib.mk ++++ b/tools/testing/selftests/lib.mk +@@ -1,6 +1,10 @@ + # This mimics the top-level Makefile. We do it explicitly here so that this + # Makefile can operate with or without the kbuild infrastructure. ++ifneq ($(LLVM),) ++CC := clang ++else + CC := $(CROSS_COMPILE)gcc ++endif + + ifeq (0,$(MAKELEVEL)) + ifeq ($(OUTPUT),) +diff --git a/tools/testing/selftests/net/forwarding/mirror_lib.sh b/tools/testing/selftests/net/forwarding/mirror_lib.sh +index 13db1cb50e57b..6406cd76a19d8 100644 +--- a/tools/testing/selftests/net/forwarding/mirror_lib.sh ++++ b/tools/testing/selftests/net/forwarding/mirror_lib.sh +@@ -20,6 +20,13 @@ mirror_uninstall() + tc filter del dev $swp1 $direction pref 1000 + } + ++is_ipv6() ++{ ++ local addr=$1; shift ++ ++ [[ -z ${addr//[0-9a-fA-F:]/} ]] ++} ++ + mirror_test() + { + local vrf_name=$1; shift +@@ -29,9 +36,17 @@ mirror_test() + local pref=$1; shift + local expect=$1; shift + ++ if is_ipv6 $dip; then ++ local proto=-6 ++ local type="icmp6 type=128" # Echo request. ++ else ++ local proto= ++ local type="icmp echoreq" ++ fi ++ + local t0=$(tc_rule_stats_get $dev $pref) +- $MZ $vrf_name ${sip:+-A $sip} -B $dip -a own -b bc -q \ +- -c 10 -d 100msec -t icmp type=8 ++ $MZ $proto $vrf_name ${sip:+-A $sip} -B $dip -a own -b bc -q \ ++ -c 10 -d 100msec -t $type + sleep 0.5 + local t1=$(tc_rule_stats_get $dev $pref) + local delta=$((t1 - t0)) +diff --git a/tools/testing/selftests/net/mptcp/diag.sh b/tools/testing/selftests/net/mptcp/diag.sh +index 39edce4f541c2..2674ba20d5249 100755 +--- a/tools/testing/selftests/net/mptcp/diag.sh ++++ b/tools/testing/selftests/net/mptcp/diag.sh +@@ -5,8 +5,9 @@ rndh=$(printf %x $sec)-$(mktemp -u XXXXXX) + ns="ns1-$rndh" + ksft_skip=4 + test_cnt=1 ++timeout_poll=100 ++timeout_test=$((timeout_poll * 2 + 1)) + ret=0 +-pids=() + + flush_pids() + { +@@ -14,18 +15,14 @@ flush_pids() + # give it some time + sleep 1.1 + +- for pid in ${pids[@]}; do +- [ -d /proc/$pid ] && kill -SIGUSR1 $pid >/dev/null 2>&1 +- done +- pids=() ++ ip netns pids "${ns}" | xargs --no-run-if-empty kill -SIGUSR1 &>/dev/null + } + + cleanup() + { ++ ip netns pids "${ns}" | xargs --no-run-if-empty kill -SIGKILL &>/dev/null ++ + ip netns del $ns +- for pid in ${pids[@]}; do +- [ -d /proc/$pid ] && kill -9 $pid >/dev/null 2>&1 +- done + } + + ip -Version > /dev/null 2>&1 +@@ -79,39 +76,57 @@ trap cleanup EXIT + ip netns add $ns + ip -n $ns link set dev lo up + +-echo "a" | ip netns exec $ns ./mptcp_connect -p 10000 -l 0.0.0.0 -t 100 >/dev/null & ++echo "a" | \ ++ timeout ${timeout_test} \ ++ ip netns exec $ns \ ++ ./mptcp_connect -p 10000 -l -t ${timeout_poll} \ ++ 0.0.0.0 >/dev/null & + sleep 0.1 +-pids[0]=$! + chk_msk_nr 0 "no msk on netns creation" + +-echo "b" | ip netns exec $ns ./mptcp_connect -p 10000 127.0.0.1 -j -t 100 >/dev/null & ++echo "b" | \ ++ timeout ${timeout_test} \ ++ ip netns exec $ns \ ++ ./mptcp_connect -p 10000 -j -t ${timeout_poll} \ ++ 127.0.0.1 >/dev/null & + sleep 0.1 +-pids[1]=$! 
+ chk_msk_nr 2 "after MPC handshake " + chk_msk_remote_key_nr 2 "....chk remote_key" + chk_msk_fallback_nr 0 "....chk no fallback" + flush_pids + + +-echo "a" | ip netns exec $ns ./mptcp_connect -p 10001 -s TCP -l 0.0.0.0 -t 100 >/dev/null & +-pids[0]=$! ++echo "a" | \ ++ timeout ${timeout_test} \ ++ ip netns exec $ns \ ++ ./mptcp_connect -p 10001 -l -s TCP -t ${timeout_poll} \ ++ 0.0.0.0 >/dev/null & + sleep 0.1 +-echo "b" | ip netns exec $ns ./mptcp_connect -p 10001 127.0.0.1 -j -t 100 >/dev/null & +-pids[1]=$! ++echo "b" | \ ++ timeout ${timeout_test} \ ++ ip netns exec $ns \ ++ ./mptcp_connect -p 10001 -j -t ${timeout_poll} \ ++ 127.0.0.1 >/dev/null & + sleep 0.1 + chk_msk_fallback_nr 1 "check fallback" + flush_pids + + NR_CLIENTS=100 + for I in `seq 1 $NR_CLIENTS`; do +- echo "a" | ip netns exec $ns ./mptcp_connect -p $((I+10001)) -l 0.0.0.0 -t 100 -w 10 >/dev/null & +- pids[$((I*2))]=$! ++ echo "a" | \ ++ timeout ${timeout_test} \ ++ ip netns exec $ns \ ++ ./mptcp_connect -p $((I+10001)) -l -w 10 \ ++ -t ${timeout_poll} 0.0.0.0 >/dev/null & + done + sleep 0.1 + + for I in `seq 1 $NR_CLIENTS`; do +- echo "b" | ip netns exec $ns ./mptcp_connect -p $((I+10001)) 127.0.0.1 -t 100 -w 10 >/dev/null & +- pids[$((I*2 + 1))]=$! ++ echo "b" | \ ++ timeout ${timeout_test} \ ++ ip netns exec $ns \ ++ ./mptcp_connect -p $((I+10001)) -w 10 \ ++ -t ${timeout_poll} 127.0.0.1 >/dev/null & + done + sleep 1.5 + +diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect.sh b/tools/testing/selftests/net/mptcp/mptcp_connect.sh +index 10a030b53b23e..65b3b983efc26 100755 +--- a/tools/testing/selftests/net/mptcp/mptcp_connect.sh ++++ b/tools/testing/selftests/net/mptcp/mptcp_connect.sh +@@ -11,7 +11,8 @@ cin="" + cout="" + ksft_skip=4 + capture=false +-timeout=30 ++timeout_poll=30 ++timeout_test=$((timeout_poll * 2 + 1)) + ipv6=true + ethtool_random_on=true + tc_delay="$((RANDOM%50))" +@@ -273,7 +274,7 @@ check_mptcp_disabled() + ip netns exec ${disabled_ns} sysctl -q net.mptcp.enabled=0 + + local err=0 +- LANG=C ip netns exec ${disabled_ns} ./mptcp_connect -t $timeout -p 10000 -s MPTCP 127.0.0.1 < "$cin" 2>&1 | \ ++ LANG=C ip netns exec ${disabled_ns} ./mptcp_connect -p 10000 -s MPTCP 127.0.0.1 < "$cin" 2>&1 | \ + grep -q "^socket: Protocol not available$" && err=1 + ip netns delete ${disabled_ns} + +@@ -430,14 +431,20 @@ do_transfer() + local stat_cookietx_last=$(get_mib_counter "${listener_ns}" "TcpExtSyncookiesSent") + local stat_cookierx_last=$(get_mib_counter "${listener_ns}" "TcpExtSyncookiesRecv") + +- ip netns exec ${listener_ns} ./mptcp_connect -t $timeout -l -p $port -s ${srv_proto} $extra_args $local_addr < "$sin" > "$sout" & ++ timeout ${timeout_test} \ ++ ip netns exec ${listener_ns} \ ++ ./mptcp_connect -t ${timeout_poll} -l -p $port -s ${srv_proto} \ ++ $extra_args $local_addr < "$sin" > "$sout" & + local spid=$! + + wait_local_port_listen "${listener_ns}" "${port}" + + local start + start=$(date +%s%3N) +- ip netns exec ${connector_ns} ./mptcp_connect -t $timeout -p $port -s ${cl_proto} $extra_args $connect_addr < "$cin" > "$cout" & ++ timeout ${timeout_test} \ ++ ip netns exec ${connector_ns} \ ++ ./mptcp_connect -t ${timeout_poll} -p $port -s ${cl_proto} \ ++ $extra_args $connect_addr < "$cin" > "$cout" & + local cpid=$! 
+ + wait $cpid +diff --git a/tools/testing/selftests/net/mptcp/mptcp_join.sh b/tools/testing/selftests/net/mptcp/mptcp_join.sh +index ad32240fbfdad..43ed99de77343 100755 +--- a/tools/testing/selftests/net/mptcp/mptcp_join.sh ++++ b/tools/testing/selftests/net/mptcp/mptcp_join.sh +@@ -8,7 +8,8 @@ cin="" + cinsent="" + cout="" + ksft_skip=4 +-timeout=30 ++timeout_poll=30 ++timeout_test=$((timeout_poll * 2 + 1)) + mptcp_connect="" + capture=0 + do_all_tests=1 +@@ -245,17 +246,26 @@ do_transfer() + local_addr="0.0.0.0" + fi + +- ip netns exec ${listener_ns} $mptcp_connect -t $timeout -l -p $port \ +- -s ${srv_proto} ${local_addr} < "$sin" > "$sout" & ++ timeout ${timeout_test} \ ++ ip netns exec ${listener_ns} \ ++ $mptcp_connect -t ${timeout_poll} -l -p $port -s ${srv_proto} \ ++ ${local_addr} < "$sin" > "$sout" & + spid=$! + + sleep 1 + + if [ "$test_link_fail" -eq 0 ];then +- ip netns exec ${connector_ns} $mptcp_connect -t $timeout -p $port -s ${cl_proto} $connect_addr < "$cin" > "$cout" & ++ timeout ${timeout_test} \ ++ ip netns exec ${connector_ns} \ ++ $mptcp_connect -t ${timeout_poll} -p $port -s ${cl_proto} \ ++ $connect_addr < "$cin" > "$cout" & + else +- ( cat "$cin" ; sleep 2; link_failure $listener_ns ; cat "$cin" ) | tee "$cinsent" | \ +- ip netns exec ${connector_ns} $mptcp_connect -t $timeout -p $port -s ${cl_proto} $connect_addr > "$cout" & ++ ( cat "$cin" ; sleep 2; link_failure $listener_ns ; cat "$cin" ) | \ ++ tee "$cinsent" | \ ++ timeout ${timeout_test} \ ++ ip netns exec ${connector_ns} \ ++ $mptcp_connect -t ${timeout_poll} -p $port -s ${cl_proto} \ ++ $connect_addr > "$cout" & + fi + cpid=$! + +diff --git a/tools/testing/selftests/net/mptcp/simult_flows.sh b/tools/testing/selftests/net/mptcp/simult_flows.sh +index f039ee57eb3c7..3aeef3bcb1018 100755 +--- a/tools/testing/selftests/net/mptcp/simult_flows.sh ++++ b/tools/testing/selftests/net/mptcp/simult_flows.sh +@@ -7,7 +7,8 @@ ns2="ns2-$rndh" + ns3="ns3-$rndh" + capture=false + ksft_skip=4 +-timeout=30 ++timeout_poll=30 ++timeout_test=$((timeout_poll * 2 + 1)) + test_cnt=1 + ret=0 + bail=0 +@@ -157,14 +158,20 @@ do_transfer() + sleep 1 + fi + +- ip netns exec ${ns3} ./mptcp_connect -jt $timeout -l -p $port 0.0.0.0 < "$sin" > "$sout" & ++ timeout ${timeout_test} \ ++ ip netns exec ${ns3} \ ++ ./mptcp_connect -jt ${timeout_poll} -l -p $port \ ++ 0.0.0.0 < "$sin" > "$sout" & + local spid=$! + + wait_local_port_listen "${ns3}" "${port}" + + local start + start=$(date +%s%3N) +- ip netns exec ${ns1} ./mptcp_connect -jt $timeout -p $port 10.0.3.3 < "$cin" > "$cout" & ++ timeout ${timeout_test} \ ++ ip netns exec ${ns1} \ ++ ./mptcp_connect -jt ${timeout_poll} -p $port \ ++ 10.0.3.3 < "$cin" > "$cout" & + local cpid=$! 
+ + wait $cpid +diff --git a/tools/testing/selftests/powerpc/security/entry_flush.c b/tools/testing/selftests/powerpc/security/entry_flush.c +index 78cf914fa3217..68ce377b205e9 100644 +--- a/tools/testing/selftests/powerpc/security/entry_flush.c ++++ b/tools/testing/selftests/powerpc/security/entry_flush.c +@@ -53,7 +53,7 @@ int entry_flush_test(void) + + entry_flush = entry_flush_orig; + +- fd = perf_event_open_counter(PERF_TYPE_RAW, /* L1d miss */ 0x400f0, -1); ++ fd = perf_event_open_counter(PERF_TYPE_HW_CACHE, PERF_L1D_READ_MISS_CONFIG, -1); + FAIL_IF(fd < 0); + + p = (char *)memalign(zero_size, CACHELINE_SIZE); +diff --git a/tools/testing/selftests/powerpc/security/flush_utils.h b/tools/testing/selftests/powerpc/security/flush_utils.h +index 07a5eb3014669..7a3d60292916e 100644 +--- a/tools/testing/selftests/powerpc/security/flush_utils.h ++++ b/tools/testing/selftests/powerpc/security/flush_utils.h +@@ -9,6 +9,10 @@ + + #define CACHELINE_SIZE 128 + ++#define PERF_L1D_READ_MISS_CONFIG ((PERF_COUNT_HW_CACHE_L1D) | \ ++ (PERF_COUNT_HW_CACHE_OP_READ << 8) | \ ++ (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)) ++ + void syscall_loop(char *p, unsigned long iterations, + unsigned long zero_size); + +diff --git a/tools/testing/selftests/powerpc/security/rfi_flush.c b/tools/testing/selftests/powerpc/security/rfi_flush.c +index 7565fd786640f..f73484a6470fa 100644 +--- a/tools/testing/selftests/powerpc/security/rfi_flush.c ++++ b/tools/testing/selftests/powerpc/security/rfi_flush.c +@@ -54,7 +54,7 @@ int rfi_flush_test(void) + + rfi_flush = rfi_flush_orig; + +- fd = perf_event_open_counter(PERF_TYPE_RAW, /* L1d miss */ 0x400f0, -1); ++ fd = perf_event_open_counter(PERF_TYPE_HW_CACHE, PERF_L1D_READ_MISS_CONFIG, -1); + FAIL_IF(fd < 0); + + p = (char *)memalign(zero_size, CACHELINE_SIZE); +diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c +index ab1fa6f92c825..5cabc6c748db1 100644 +--- a/virt/kvm/kvm_main.c ++++ b/virt/kvm/kvm_main.c +@@ -2758,8 +2758,8 @@ static void grow_halt_poll_ns(struct kvm_vcpu *vcpu) + if (val < grow_start) + val = grow_start; + +- if (val > halt_poll_ns) +- val = halt_poll_ns; ++ if (val > vcpu->kvm->max_halt_poll_ns) ++ val = vcpu->kvm->max_halt_poll_ns; + + vcpu->halt_poll_ns = val; + out: +@@ -2838,7 +2838,8 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu) + goto out; + } + poll_end = cur = ktime_get(); +- } while (single_task_running() && ktime_before(cur, stop)); ++ } while (single_task_running() && !need_resched() && ++ ktime_before(cur, stop)); + } + + prepare_to_rcuwait(&vcpu->wait);