From: "Mike Pagano" <mpagano@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/linux-patches:5.11 commit in: /
Date: Wed, 19 May 2021 12:25:04 +0000 (UTC)
Message-ID: <1621427094.338e6a0008513fbb06451cb85a66bcc37307837e.mpagano@gentoo>
commit: 338e6a0008513fbb06451cb85a66bcc37307837e
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed May 19 12:24:54 2021 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed May 19 12:24:54 2021 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=338e6a00
Linux patch 5.11.22
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1021_linux-5.11.22.patch | 14593 +++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 14597 insertions(+)
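
For readers applying this update outside the genpatches tooling: the 1021_ prefix slots the patch into the ordered series described by 0000_README, and each incremental stable patch applies on top of the previous release. A minimal sketch of doing that by hand, assuming a tree already at 5.11.21 with the patch saved next to it (the paths here are illustrative, not part of this commit):

    cd linux-5.11
    patch -p1 --dry-run < ../1021_linux-5.11.22.patch   # check it applies cleanly first
    patch -p1 < ../1021_linux-5.11.22.patch
    grep -E '^(VERSION|PATCHLEVEL|SUBLEVEL)' Makefile    # should now report 5 / 11 / 22

The -p1 strip level matches the a/ and b/ path prefixes used throughout the patch below.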
diff --git a/0000_README b/0000_README
index 0fbd0c9..cc25bf1 100644
--- a/0000_README
+++ b/0000_README
@@ -127,6 +127,10 @@ Patch: 1020_linux-5.11.21.patch
From: http://www.kernel.org
Desc: Linux 5.11.21
+Patch: 1021_linux-5.11.22.patch
+From: http://www.kernel.org
+Desc: Linux 5.11.22
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1021_linux-5.11.22.patch b/1021_linux-5.11.22.patch
new file mode 100644
index 0000000..0b01fb2
--- /dev/null
+++ b/1021_linux-5.11.22.patch
@@ -0,0 +1,14593 @@
+diff --git a/.gitignore b/.gitignore
+index d01cda8e11779..67d2f35031283 100644
+--- a/.gitignore
++++ b/.gitignore
+@@ -55,6 +55,7 @@ modules.order
+ /tags
+ /TAGS
+ /linux
++/modules-only.symvers
+ /vmlinux
+ /vmlinux.32
+ /vmlinux.symvers
+diff --git a/Documentation/devicetree/bindings/media/renesas,vin.yaml b/Documentation/devicetree/bindings/media/renesas,vin.yaml
+index ad2fe660364bd..c69cf8d0cb15b 100644
+--- a/Documentation/devicetree/bindings/media/renesas,vin.yaml
++++ b/Documentation/devicetree/bindings/media/renesas,vin.yaml
+@@ -278,23 +278,35 @@ required:
+ - interrupts
+ - clocks
+ - power-domains
+- - resets
+-
+-if:
+- properties:
+- compatible:
+- contains:
+- enum:
+- - renesas,vin-r8a7778
+- - renesas,vin-r8a7779
+- - renesas,rcar-gen2-vin
+-then:
+- required:
+- - port
+-else:
+- required:
+- - renesas,id
+- - ports
++
++allOf:
++ - if:
++ not:
++ properties:
++ compatible:
++ contains:
++ enum:
++ - renesas,vin-r8a7778
++ - renesas,vin-r8a7779
++ then:
++ required:
++ - resets
++
++ - if:
++ properties:
++ compatible:
++ contains:
++ enum:
++ - renesas,vin-r8a7778
++ - renesas,vin-r8a7779
++ - renesas,rcar-gen2-vin
++ then:
++ required:
++ - port
++ else:
++ required:
++ - renesas,id
++ - ports
+
+ additionalProperties: false
+
+diff --git a/Documentation/devicetree/bindings/pci/rcar-pci-host.yaml b/Documentation/devicetree/bindings/pci/rcar-pci-host.yaml
+index 4a2bcc0158e2d..8fdfbc763d704 100644
+--- a/Documentation/devicetree/bindings/pci/rcar-pci-host.yaml
++++ b/Documentation/devicetree/bindings/pci/rcar-pci-host.yaml
+@@ -17,6 +17,7 @@ allOf:
+ properties:
+ compatible:
+ oneOf:
++ - const: renesas,pcie-r8a7779 # R-Car H1
+ - items:
+ - enum:
+ - renesas,pcie-r8a7742 # RZ/G1H
+@@ -74,7 +75,16 @@ required:
+ - clocks
+ - clock-names
+ - power-domains
+- - resets
++
++if:
++ not:
++ properties:
++ compatible:
++ contains:
++ const: renesas,pcie-r8a7779
++then:
++ required:
++ - resets
+
+ unevaluatedProperties: false
+
+diff --git a/Documentation/devicetree/bindings/serial/8250.yaml b/Documentation/devicetree/bindings/serial/8250.yaml
+index f54cae9ff7b28..d3f87f2bfdc25 100644
+--- a/Documentation/devicetree/bindings/serial/8250.yaml
++++ b/Documentation/devicetree/bindings/serial/8250.yaml
+@@ -93,11 +93,6 @@ properties:
+ - mediatek,mt7622-btif
+ - mediatek,mt7623-btif
+ - const: mediatek,mtk-btif
+- - items:
+- - enum:
+- - mediatek,mt7622-btif
+- - mediatek,mt7623-btif
+- - const: mediatek,mtk-btif
+ - items:
+ - const: mrvl,mmp-uart
+ - const: intel,xscale-uart
+diff --git a/Documentation/devicetree/bindings/thermal/rcar-gen3-thermal.yaml b/Documentation/devicetree/bindings/thermal/rcar-gen3-thermal.yaml
+index b33a76eeac4e4..f963204e0b162 100644
+--- a/Documentation/devicetree/bindings/thermal/rcar-gen3-thermal.yaml
++++ b/Documentation/devicetree/bindings/thermal/rcar-gen3-thermal.yaml
+@@ -28,14 +28,7 @@ properties:
+ - renesas,r8a77980-thermal # R-Car V3H
+ - renesas,r8a779a0-thermal # R-Car V3U
+
+- reg:
+- minItems: 2
+- maxItems: 4
+- items:
+- - description: TSC1 registers
+- - description: TSC2 registers
+- - description: TSC3 registers
+- - description: TSC4 registers
++ reg: true
+
+ interrupts:
+ items:
+@@ -71,8 +64,25 @@ if:
+ enum:
+ - renesas,r8a779a0-thermal
+ then:
++ properties:
++ reg:
++ minItems: 2
++ maxItems: 3
++ items:
++ - description: TSC1 registers
++ - description: TSC2 registers
++ - description: TSC3 registers
+ required:
+ - interrupts
++else:
++ properties:
++ reg:
++ items:
++ - description: TSC0 registers
++ - description: TSC1 registers
++ - description: TSC2 registers
++ - description: TSC3 registers
++ - description: TSC4 registers
+
+ additionalProperties: false
+
+@@ -111,3 +121,20 @@ examples:
+ };
+ };
+ };
++ - |
++ #include <dt-bindings/clock/r8a779a0-cpg-mssr.h>
++ #include <dt-bindings/interrupt-controller/arm-gic.h>
++ #include <dt-bindings/power/r8a779a0-sysc.h>
++
++ tsc_r8a779a0: thermal@e6190000 {
++ compatible = "renesas,r8a779a0-thermal";
++ reg = <0xe6190000 0x200>,
++ <0xe6198000 0x200>,
++ <0xe61a0000 0x200>,
++ <0xe61a8000 0x200>,
++ <0xe61b0000 0x200>;
++ clocks = <&cpg CPG_MOD 919>;
++ power-domains = <&sysc R8A779A0_PD_ALWAYS_ON>;
++ resets = <&cpg 919>;
++ #thermal-sensor-cells = <1>;
++ };
+diff --git a/Documentation/dontdiff b/Documentation/dontdiff
+index e361fc95ca293..82e3eee7363b0 100644
+--- a/Documentation/dontdiff
++++ b/Documentation/dontdiff
+@@ -178,6 +178,7 @@ mktables
+ mktree
+ mkutf8data
+ modpost
++modules-only.symvers
+ modules.builtin
+ modules.builtin.modinfo
+ modules.nsdeps
+diff --git a/Makefile b/Makefile
+index 11ca74eabf47d..ff363cc6b11f1 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 11
+-SUBLEVEL = 21
++SUBLEVEL = 22
+ EXTRAVERSION =
+ NAME = 💕 Valentine's Day Edition 💕
+
+@@ -1482,7 +1482,7 @@ endif # CONFIG_MODULES
+ # make distclean Remove editor backup files, patch leftover files and the like
+
+ # Directories & files removed with 'make clean'
+-CLEAN_FILES += include/ksym vmlinux.symvers \
++CLEAN_FILES += include/ksym vmlinux.symvers modules-only.symvers \
+ modules.builtin modules.builtin.modinfo modules.nsdeps \
+ compile_commands.json
+
+diff --git a/arch/arc/include/asm/page.h b/arch/arc/include/asm/page.h
+index ad9b7fe4dba36..4a9d33372fe2b 100644
+--- a/arch/arc/include/asm/page.h
++++ b/arch/arc/include/asm/page.h
+@@ -7,6 +7,18 @@
+
+ #include <uapi/asm/page.h>
+
++#ifdef CONFIG_ARC_HAS_PAE40
++
++#define MAX_POSSIBLE_PHYSMEM_BITS 40
++#define PAGE_MASK_PHYS (0xff00000000ull | PAGE_MASK)
++
++#else /* CONFIG_ARC_HAS_PAE40 */
++
++#define MAX_POSSIBLE_PHYSMEM_BITS 32
++#define PAGE_MASK_PHYS PAGE_MASK
++
++#endif /* CONFIG_ARC_HAS_PAE40 */
++
+ #ifndef __ASSEMBLY__
+
+ #define clear_page(paddr) memset((paddr), 0, PAGE_SIZE)
+diff --git a/arch/arc/include/asm/pgtable.h b/arch/arc/include/asm/pgtable.h
+index 163641726a2b9..5878846f00cfe 100644
+--- a/arch/arc/include/asm/pgtable.h
++++ b/arch/arc/include/asm/pgtable.h
+@@ -107,8 +107,8 @@
+ #define ___DEF (_PAGE_PRESENT | _PAGE_CACHEABLE)
+
+ /* Set of bits not changed in pte_modify */
+-#define _PAGE_CHG_MASK (PAGE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY | _PAGE_SPECIAL)
+-
++#define _PAGE_CHG_MASK (PAGE_MASK_PHYS | _PAGE_ACCESSED | _PAGE_DIRTY | \
++ _PAGE_SPECIAL)
+ /* More Abbrevaited helpers */
+ #define PAGE_U_NONE __pgprot(___DEF)
+ #define PAGE_U_R __pgprot(___DEF | _PAGE_READ)
+@@ -132,13 +132,7 @@
+ #define PTE_BITS_IN_PD0 (_PAGE_GLOBAL | _PAGE_PRESENT | _PAGE_HW_SZ)
+ #define PTE_BITS_RWX (_PAGE_EXECUTE | _PAGE_WRITE | _PAGE_READ)
+
+-#ifdef CONFIG_ARC_HAS_PAE40
+-#define PTE_BITS_NON_RWX_IN_PD1 (0xff00000000 | PAGE_MASK | _PAGE_CACHEABLE)
+-#define MAX_POSSIBLE_PHYSMEM_BITS 40
+-#else
+-#define PTE_BITS_NON_RWX_IN_PD1 (PAGE_MASK | _PAGE_CACHEABLE)
+-#define MAX_POSSIBLE_PHYSMEM_BITS 32
+-#endif
++#define PTE_BITS_NON_RWX_IN_PD1 (PAGE_MASK_PHYS | _PAGE_CACHEABLE)
+
+ /**************************************************************************
+ * Mapping of vm_flags (Generic VM) to PTE flags (arch specific)
+diff --git a/arch/arc/include/uapi/asm/page.h b/arch/arc/include/uapi/asm/page.h
+index 2a97e2718a219..2a4ad619abfba 100644
+--- a/arch/arc/include/uapi/asm/page.h
++++ b/arch/arc/include/uapi/asm/page.h
+@@ -33,5 +33,4 @@
+
+ #define PAGE_MASK (~(PAGE_SIZE-1))
+
+-
+ #endif /* _UAPI__ASM_ARC_PAGE_H */
+diff --git a/arch/arc/kernel/entry.S b/arch/arc/kernel/entry.S
+index 1743506081da6..2cb8dfe866b66 100644
+--- a/arch/arc/kernel/entry.S
++++ b/arch/arc/kernel/entry.S
+@@ -177,7 +177,7 @@ tracesys:
+
+ ; Do the Sys Call as we normally would.
+ ; Validate the Sys Call number
+- cmp r8, NR_syscalls
++ cmp r8, NR_syscalls - 1
+ mov.hi r0, -ENOSYS
+ bhi tracesys_exit
+
+@@ -255,7 +255,7 @@ ENTRY(EV_Trap)
+ ;============ Normal syscall case
+
+ ; syscall num shd not exceed the total system calls avail
+- cmp r8, NR_syscalls
++ cmp r8, NR_syscalls - 1
+ mov.hi r0, -ENOSYS
+ bhi .Lret_from_system_call
+
+diff --git a/arch/arc/mm/init.c b/arch/arc/mm/init.c
+index ce07e697916c8..1bcc6985b9a0e 100644
+--- a/arch/arc/mm/init.c
++++ b/arch/arc/mm/init.c
+@@ -157,7 +157,16 @@ void __init setup_arch_memory(void)
+ min_high_pfn = PFN_DOWN(high_mem_start);
+ max_high_pfn = PFN_DOWN(high_mem_start + high_mem_sz);
+
+- max_zone_pfn[ZONE_HIGHMEM] = min_low_pfn;
++ /*
++ * max_high_pfn should be ok here for both HIGHMEM and HIGHMEM+PAE.
++ * For HIGHMEM without PAE max_high_pfn should be less than
++ * min_low_pfn to guarantee that these two regions don't overlap.
++ * For PAE case highmem is greater than lowmem, so it is natural
++ * to use max_high_pfn.
++ *
++ * In both cases, holes should be handled by pfn_valid().
++ */
++ max_zone_pfn[ZONE_HIGHMEM] = max_high_pfn;
+
+ high_memory = (void *)(min_high_pfn << PAGE_SHIFT);
+
+diff --git a/arch/arc/mm/ioremap.c b/arch/arc/mm/ioremap.c
+index fac4adc902044..95c649fbc95af 100644
+--- a/arch/arc/mm/ioremap.c
++++ b/arch/arc/mm/ioremap.c
+@@ -53,9 +53,10 @@ EXPORT_SYMBOL(ioremap);
+ void __iomem *ioremap_prot(phys_addr_t paddr, unsigned long size,
+ unsigned long flags)
+ {
++ unsigned int off;
+ unsigned long vaddr;
+ struct vm_struct *area;
+- phys_addr_t off, end;
++ phys_addr_t end;
+ pgprot_t prot = __pgprot(flags);
+
+ /* Don't allow wraparound, zero size */
+@@ -72,7 +73,7 @@ void __iomem *ioremap_prot(phys_addr_t paddr, unsigned long size,
+
+ /* Mappings have to be page-aligned */
+ off = paddr & ~PAGE_MASK;
+- paddr &= PAGE_MASK;
++ paddr &= PAGE_MASK_PHYS;
+ size = PAGE_ALIGN(end + 1) - paddr;
+
+ /*
+diff --git a/arch/arc/mm/tlb.c b/arch/arc/mm/tlb.c
+index 9bb3c24f36770..9c7c682472896 100644
+--- a/arch/arc/mm/tlb.c
++++ b/arch/arc/mm/tlb.c
+@@ -576,7 +576,7 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long vaddr_unaligned,
+ pte_t *ptep)
+ {
+ unsigned long vaddr = vaddr_unaligned & PAGE_MASK;
+- phys_addr_t paddr = pte_val(*ptep) & PAGE_MASK;
++ phys_addr_t paddr = pte_val(*ptep) & PAGE_MASK_PHYS;
+ struct page *page = pfn_to_page(pte_pfn(*ptep));
+
+ create_tlb(vma, vaddr, ptep);
+diff --git a/arch/arm/boot/dts/dra7-l4.dtsi b/arch/arm/boot/dts/dra7-l4.dtsi
+index 3bf90d9e33353..a294a02f2d232 100644
+--- a/arch/arm/boot/dts/dra7-l4.dtsi
++++ b/arch/arm/boot/dts/dra7-l4.dtsi
+@@ -1168,7 +1168,7 @@
+ };
+ };
+
+- target-module@34000 { /* 0x48034000, ap 7 46.0 */
++ timer3_target: target-module@34000 { /* 0x48034000, ap 7 46.0 */
+ compatible = "ti,sysc-omap4-timer", "ti,sysc";
+ reg = <0x34000 0x4>,
+ <0x34010 0x4>;
+@@ -1195,7 +1195,7 @@
+ };
+ };
+
+- target-module@36000 { /* 0x48036000, ap 9 4e.0 */
++ timer4_target: target-module@36000 { /* 0x48036000, ap 9 4e.0 */
+ compatible = "ti,sysc-omap4-timer", "ti,sysc";
+ reg = <0x36000 0x4>,
+ <0x36010 0x4>;
+diff --git a/arch/arm/boot/dts/dra7.dtsi b/arch/arm/boot/dts/dra7.dtsi
+index ce1194744f840..53d68786a61f2 100644
+--- a/arch/arm/boot/dts/dra7.dtsi
++++ b/arch/arm/boot/dts/dra7.dtsi
+@@ -46,6 +46,7 @@
+
+ timer {
+ compatible = "arm,armv7-timer";
++ status = "disabled"; /* See ARM architected timer wrap erratum i940 */
+ interrupts = <GIC_PPI 13 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>,
+ <GIC_PPI 14 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>,
+ <GIC_PPI 11 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>,
+@@ -1241,3 +1242,22 @@
+ assigned-clock-parents = <&sys_32k_ck>;
+ };
+ };
++
++/* Local timers, see ARM architected timer wrap erratum i940 */
++&timer3_target {
++ ti,no-reset-on-init;
++ ti,no-idle;
++ timer@0 {
++ assigned-clocks = <&l4per_clkctrl DRA7_L4PER_TIMER3_CLKCTRL 24>;
++ assigned-clock-parents = <&timer_sys_clk_div>;
++ };
++};
++
++&timer4_target {
++ ti,no-reset-on-init;
++ ti,no-idle;
++ timer@0 {
++ assigned-clocks = <&l4per_clkctrl DRA7_L4PER_TIMER4_CLKCTRL 24>;
++ assigned-clock-parents = <&timer_sys_clk_div>;
++ };
++};
+diff --git a/arch/arm/kernel/hw_breakpoint.c b/arch/arm/kernel/hw_breakpoint.c
+index 08660ae9dcbce..b1423fb130ea4 100644
+--- a/arch/arm/kernel/hw_breakpoint.c
++++ b/arch/arm/kernel/hw_breakpoint.c
+@@ -886,7 +886,7 @@ static void breakpoint_handler(unsigned long unknown, struct pt_regs *regs)
+ info->trigger = addr;
+ pr_debug("breakpoint fired: address = 0x%x\n", addr);
+ perf_bp_event(bp, regs);
+- if (!bp->overflow_handler)
++ if (is_default_overflow_handler(bp))
+ enable_single_step(bp, addr);
+ goto unlock;
+ }
+diff --git a/arch/arm64/include/asm/daifflags.h b/arch/arm64/include/asm/daifflags.h
+index 1c26d7baa67f8..cfdde3a568059 100644
+--- a/arch/arm64/include/asm/daifflags.h
++++ b/arch/arm64/include/asm/daifflags.h
+@@ -131,6 +131,9 @@ static inline void local_daif_inherit(struct pt_regs *regs)
+ if (interrupts_enabled(regs))
+ trace_hardirqs_on();
+
++ if (system_uses_irq_prio_masking())
++ gic_write_pmr(regs->pmr_save);
++
+ /*
+ * We can't use local_daif_restore(regs->pstate) here as
+ * system_has_prio_mask_debugging() won't restore the I bit if it can
+diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
+index 5346953e4382e..ead1ecffe054a 100644
+--- a/arch/arm64/kernel/entry-common.c
++++ b/arch/arm64/kernel/entry-common.c
+@@ -177,14 +177,6 @@ static void noinstr el1_dbg(struct pt_regs *regs, unsigned long esr)
+ {
+ unsigned long far = read_sysreg(far_el1);
+
+- /*
+- * The CPU masked interrupts, and we are leaving them masked during
+- * do_debug_exception(). Update PMR as if we had called
+- * local_daif_mask().
+- */
+- if (system_uses_irq_prio_masking())
+- gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
+-
+ arm64_enter_el1_dbg(regs);
+ do_debug_exception(far, esr, regs);
+ arm64_exit_el1_dbg(regs);
+@@ -348,9 +340,6 @@ static void noinstr el0_dbg(struct pt_regs *regs, unsigned long esr)
+ /* Only watchpoints write FAR_EL1, otherwise its UNKNOWN */
+ unsigned long far = read_sysreg(far_el1);
+
+- if (system_uses_irq_prio_masking())
+- gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
+-
+ enter_from_user_mode();
+ do_debug_exception(far, esr, regs);
+ local_daif_restore(DAIF_PROCCTX_NOIRQ);
+@@ -358,9 +347,6 @@ static void noinstr el0_dbg(struct pt_regs *regs, unsigned long esr)
+
+ static void noinstr el0_svc(struct pt_regs *regs)
+ {
+- if (system_uses_irq_prio_masking())
+- gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
+-
+ enter_from_user_mode();
+ do_el0_svc(regs);
+ }
+@@ -435,9 +421,6 @@ static void noinstr el0_cp15(struct pt_regs *regs, unsigned long esr)
+
+ static void noinstr el0_svc_compat(struct pt_regs *regs)
+ {
+- if (system_uses_irq_prio_masking())
+- gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
+-
+ enter_from_user_mode();
+ do_el0_svc_compat(regs);
+ }
+diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
+index 14d5119489fe1..0deb0194fcd23 100644
+--- a/arch/arm64/kernel/entry.S
++++ b/arch/arm64/kernel/entry.S
+@@ -292,6 +292,8 @@ alternative_else_nop_endif
+ alternative_if ARM64_HAS_IRQ_PRIO_MASKING
+ mrs_s x20, SYS_ICC_PMR_EL1
+ str x20, [sp, #S_PMR_SAVE]
++ mov x20, #GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET
++ msr_s SYS_ICC_PMR_EL1, x20
+ alternative_else_nop_endif
+
+ /* Re-enable tag checking (TCO set on exception entry) */
+@@ -493,8 +495,8 @@ tsk .req x28 // current thread_info
+ /*
+ * Interrupt handling.
+ */
+- .macro irq_handler
+- ldr_l x1, handle_arch_irq
++ .macro irq_handler, handler:req
++ ldr_l x1, \handler
+ mov x0, sp
+ irq_stack_entry
+ blr x1
+@@ -524,13 +526,41 @@ alternative_endif
+ #endif
+ .endm
+
+- .macro gic_prio_irq_setup, pmr:req, tmp:req
+-#ifdef CONFIG_ARM64_PSEUDO_NMI
+- alternative_if ARM64_HAS_IRQ_PRIO_MASKING
+- orr \tmp, \pmr, #GIC_PRIO_PSR_I_SET
+- msr_s SYS_ICC_PMR_EL1, \tmp
+- alternative_else_nop_endif
++ .macro el1_interrupt_handler, handler:req
++ enable_da_f
++
++ mov x0, sp
++ bl enter_el1_irq_or_nmi
++
++ irq_handler \handler
++
++#ifdef CONFIG_PREEMPTION
++ ldr x24, [tsk, #TSK_TI_PREEMPT] // get preempt count
++alternative_if ARM64_HAS_IRQ_PRIO_MASKING
++ /*
++ * DA_F were cleared at start of handling. If anything is set in DAIF,
++ * we come back from an NMI, so skip preemption
++ */
++ mrs x0, daif
++ orr x24, x24, x0
++alternative_else_nop_endif
++ cbnz x24, 1f // preempt count != 0 || NMI return path
++ bl arm64_preempt_schedule_irq // irq en/disable is done inside
++1:
+ #endif
++
++ mov x0, sp
++ bl exit_el1_irq_or_nmi
++ .endm
++
++ .macro el0_interrupt_handler, handler:req
++ user_exit_irqoff
++ enable_da_f
++
++ tbz x22, #55, 1f
++ bl do_el0_irq_bp_hardening
++1:
++ irq_handler \handler
+ .endm
+
+ .text
+@@ -662,32 +692,7 @@ SYM_CODE_END(el1_sync)
+ .align 6
+ SYM_CODE_START_LOCAL_NOALIGN(el1_irq)
+ kernel_entry 1
+- gic_prio_irq_setup pmr=x20, tmp=x1
+- enable_da_f
+-
+- mov x0, sp
+- bl enter_el1_irq_or_nmi
+-
+- irq_handler
+-
+-#ifdef CONFIG_PREEMPTION
+- ldr x24, [tsk, #TSK_TI_PREEMPT] // get preempt count
+-alternative_if ARM64_HAS_IRQ_PRIO_MASKING
+- /*
+- * DA_F were cleared at start of handling. If anything is set in DAIF,
+- * we come back from an NMI, so skip preemption
+- */
+- mrs x0, daif
+- orr x24, x24, x0
+-alternative_else_nop_endif
+- cbnz x24, 1f // preempt count != 0 || NMI return path
+- bl arm64_preempt_schedule_irq // irq en/disable is done inside
+-1:
+-#endif
+-
+- mov x0, sp
+- bl exit_el1_irq_or_nmi
+-
++ el1_interrupt_handler handle_arch_irq
+ kernel_exit 1
+ SYM_CODE_END(el1_irq)
+
+@@ -727,22 +732,13 @@ SYM_CODE_END(el0_error_compat)
+ SYM_CODE_START_LOCAL_NOALIGN(el0_irq)
+ kernel_entry 0
+ el0_irq_naked:
+- gic_prio_irq_setup pmr=x20, tmp=x0
+- user_exit_irqoff
+- enable_da_f
+-
+- tbz x22, #55, 1f
+- bl do_el0_irq_bp_hardening
+-1:
+- irq_handler
+-
++ el0_interrupt_handler handle_arch_irq
+ b ret_to_user
+ SYM_CODE_END(el0_irq)
+
+ SYM_CODE_START_LOCAL(el1_error)
+ kernel_entry 1
+ mrs x1, esr_el1
+- gic_prio_kentry_setup tmp=x2
+ enable_dbg
+ mov x0, sp
+ bl do_serror
+@@ -753,7 +749,6 @@ SYM_CODE_START_LOCAL(el0_error)
+ kernel_entry 0
+ el0_error_naked:
+ mrs x25, esr_el1
+- gic_prio_kentry_setup tmp=x2
+ user_exit_irqoff
+ enable_dbg
+ mov x0, sp
+diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
+index ac485163a4a76..6d44c028d1c9e 100644
+--- a/arch/arm64/mm/flush.c
++++ b/arch/arm64/mm/flush.c
+@@ -55,8 +55,10 @@ void __sync_icache_dcache(pte_t pte)
+ {
+ struct page *page = pte_page(pte);
+
+- if (!test_and_set_bit(PG_dcache_clean, &page->flags))
++ if (!test_bit(PG_dcache_clean, &page->flags)) {
+ sync_icache_aliases(page_address(page), page_size(page));
++ set_bit(PG_dcache_clean, &page->flags);
++ }
+ }
+ EXPORT_SYMBOL_GPL(__sync_icache_dcache);
+
+diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
+index 1f7ee8c8b7b81..434b2d9f570e2 100644
+--- a/arch/arm64/mm/proc.S
++++ b/arch/arm64/mm/proc.S
+@@ -454,6 +454,18 @@ SYM_FUNC_START(__cpu_setup)
+ mov x10, #(SYS_GCR_EL1_RRND | SYS_GCR_EL1_EXCL_MASK)
+ msr_s SYS_GCR_EL1, x10
+
++ /*
++ * If GCR_EL1.RRND=1 is implemented the same way as RRND=0, then
++ * RGSR_EL1.SEED must be non-zero for IRG to produce
++ * pseudorandom numbers. As RGSR_EL1 is UNKNOWN out of reset, we
++ * must initialize it.
++ */
++ mrs x10, CNTVCT_EL0
++ ands x10, x10, #SYS_RGSR_EL1_SEED_MASK
++ csinc x10, x10, xzr, ne
++ lsl x10, x10, #SYS_RGSR_EL1_SEED_SHIFT
++ msr_s SYS_RGSR_EL1, x10
++
+ /* clear any pending tag check faults in TFSR*_EL1 */
+ msr_s SYS_TFSR_EL1, xzr
+ msr_s SYS_TFSRE0_EL1, xzr
+diff --git a/arch/ia64/include/asm/module.h b/arch/ia64/include/asm/module.h
+index 5a29652e6defc..7271b9c5fc760 100644
+--- a/arch/ia64/include/asm/module.h
++++ b/arch/ia64/include/asm/module.h
+@@ -14,16 +14,20 @@
+ struct elf64_shdr; /* forward declration */
+
+ struct mod_arch_specific {
++ /* Used only at module load time. */
+ struct elf64_shdr *core_plt; /* core PLT section */
+ struct elf64_shdr *init_plt; /* init PLT section */
+ struct elf64_shdr *got; /* global offset table */
+ struct elf64_shdr *opd; /* official procedure descriptors */
+ struct elf64_shdr *unwind; /* unwind-table section */
+ unsigned long gp; /* global-pointer for module */
++ unsigned int next_got_entry; /* index of next available got entry */
+
++ /* Used at module run and cleanup time. */
+ void *core_unw_table; /* core unwind-table cookie returned by unwinder */
+ void *init_unw_table; /* init unwind-table cookie returned by unwinder */
+- unsigned int next_got_entry; /* index of next available got entry */
++ void *opd_addr; /* symbolize uses .opd to get to actual function */
++ unsigned long opd_size;
+ };
+
+ #define ARCH_SHF_SMALL SHF_IA_64_SHORT
+diff --git a/arch/ia64/kernel/module.c b/arch/ia64/kernel/module.c
+index 00a496cb346f6..2cba53c1da82e 100644
+--- a/arch/ia64/kernel/module.c
++++ b/arch/ia64/kernel/module.c
+@@ -905,9 +905,31 @@ register_unwind_table (struct module *mod)
+ int
+ module_finalize (const Elf_Ehdr *hdr, const Elf_Shdr *sechdrs, struct module *mod)
+ {
++ struct mod_arch_specific *mas = &mod->arch;
++
+ DEBUGP("%s: init: entry=%p\n", __func__, mod->init);
+- if (mod->arch.unwind)
++ if (mas->unwind)
+ register_unwind_table(mod);
++
++ /*
++ * ".opd" was already relocated to the final destination. Store
++ * it's address for use in symbolizer.
++ */
++ mas->opd_addr = (void *)mas->opd->sh_addr;
++ mas->opd_size = mas->opd->sh_size;
++
++ /*
++ * Module relocation was already done at this point. Section
++ * headers are about to be deleted. Wipe out load-time context.
++ */
++ mas->core_plt = NULL;
++ mas->init_plt = NULL;
++ mas->got = NULL;
++ mas->opd = NULL;
++ mas->unwind = NULL;
++ mas->gp = 0;
++ mas->next_got_entry = 0;
++
+ return 0;
+ }
+
+@@ -926,10 +948,9 @@ module_arch_cleanup (struct module *mod)
+
+ void *dereference_module_function_descriptor(struct module *mod, void *ptr)
+ {
+- Elf64_Shdr *opd = mod->arch.opd;
++ struct mod_arch_specific *mas = &mod->arch;
+
+- if (ptr < (void *)opd->sh_addr ||
+- ptr >= (void *)(opd->sh_addr + opd->sh_size))
++ if (ptr < mas->opd_addr || ptr >= mas->opd_addr + mas->opd_size)
+ return ptr;
+
+ return dereference_function_descriptor(ptr);
+diff --git a/arch/mips/include/asm/div64.h b/arch/mips/include/asm/div64.h
+index dc5ea57364408..ceece76fc971a 100644
+--- a/arch/mips/include/asm/div64.h
++++ b/arch/mips/include/asm/div64.h
+@@ -1,5 +1,5 @@
+ /*
+- * Copyright (C) 2000, 2004 Maciej W. Rozycki
++ * Copyright (C) 2000, 2004, 2021 Maciej W. Rozycki
+ * Copyright (C) 2003, 07 Ralf Baechle (ralf@linux-mips.org)
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+@@ -9,25 +9,18 @@
+ #ifndef __ASM_DIV64_H
+ #define __ASM_DIV64_H
+
+-#include <asm-generic/div64.h>
+-
+-#if BITS_PER_LONG == 64
++#include <asm/bitsperlong.h>
+
+-#include <linux/types.h>
++#if BITS_PER_LONG == 32
+
+ /*
+ * No traps on overflows for any of these...
+ */
+
+-#define __div64_32(n, base) \
+-({ \
++#define do_div64_32(res, high, low, base) ({ \
+ unsigned long __cf, __tmp, __tmp2, __i; \
+ unsigned long __quot32, __mod32; \
+- unsigned long __high, __low; \
+- unsigned long long __n; \
+ \
+- __high = *__n >> 32; \
+- __low = __n; \
+ __asm__( \
+ " .set push \n" \
+ " .set noat \n" \
+@@ -51,18 +44,48 @@
+ " subu %0, %0, %z6 \n" \
+ " addiu %2, %2, 1 \n" \
+ "3: \n" \
+- " bnez %4, 0b\n\t" \
+- " srl %5, %1, 0x1f\n\t" \
++ " bnez %4, 0b \n" \
++ " srl %5, %1, 0x1f \n" \
+ " .set pop" \
+ : "=&r" (__mod32), "=&r" (__tmp), \
+ "=&r" (__quot32), "=&r" (__cf), \
+ "=&r" (__i), "=&r" (__tmp2) \
+- : "Jr" (base), "0" (__high), "1" (__low)); \
++ : "Jr" (base), "0" (high), "1" (low)); \
+ \
+- (__n) = __quot32; \
++ (res) = __quot32; \
+ __mod32; \
+ })
+
+-#endif /* BITS_PER_LONG == 64 */
++#define __div64_32(n, base) ({ \
++ unsigned long __upper, __low, __high, __radix; \
++ unsigned long long __quot; \
++ unsigned long long __div; \
++ unsigned long __mod; \
++ \
++ __div = (*n); \
++ __radix = (base); \
++ \
++ __high = __div >> 32; \
++ __low = __div; \
++ \
++ if (__high < __radix) { \
++ __upper = __high; \
++ __high = 0; \
++ } else { \
++ __upper = __high % __radix; \
++ __high /= __radix; \
++ } \
++ \
++ __mod = do_div64_32(__low, __upper, __low, __radix); \
++ \
++ __quot = __high; \
++ __quot = __quot << 32 | __low; \
++ (*n) = __quot; \
++ __mod; \
++})
++
++#endif /* BITS_PER_LONG == 32 */
++
++#include <asm-generic/div64.h>
+
+ #endif /* __ASM_DIV64_H */
+diff --git a/arch/mips/kernel/cpu-probe.c b/arch/mips/kernel/cpu-probe.c
+index 21794db53c05a..8895eb6568cae 100644
+--- a/arch/mips/kernel/cpu-probe.c
++++ b/arch/mips/kernel/cpu-probe.c
+@@ -1743,7 +1743,6 @@ static inline void cpu_probe_loongson(struct cpuinfo_mips *c, unsigned int cpu)
+ set_isa(c, MIPS_CPU_ISA_M64R2);
+ break;
+ }
+- c->writecombine = _CACHE_UNCACHED_ACCELERATED;
+ c->ases |= (MIPS_ASE_LOONGSON_MMI | MIPS_ASE_LOONGSON_EXT |
+ MIPS_ASE_LOONGSON_EXT2);
+ break;
+@@ -1773,7 +1772,6 @@ static inline void cpu_probe_loongson(struct cpuinfo_mips *c, unsigned int cpu)
+ * register, we correct it here.
+ */
+ c->options |= MIPS_CPU_FTLB | MIPS_CPU_TLBINV | MIPS_CPU_LDPTE;
+- c->writecombine = _CACHE_UNCACHED_ACCELERATED;
+ c->ases |= (MIPS_ASE_LOONGSON_MMI | MIPS_ASE_LOONGSON_CAM |
+ MIPS_ASE_LOONGSON_EXT | MIPS_ASE_LOONGSON_EXT2);
+ c->ases &= ~MIPS_ASE_VZ; /* VZ of Loongson-3A2000/3000 is incomplete */
+@@ -1784,7 +1782,6 @@ static inline void cpu_probe_loongson(struct cpuinfo_mips *c, unsigned int cpu)
+ set_elf_platform(cpu, "loongson3a");
+ set_isa(c, MIPS_CPU_ISA_M64R2);
+ decode_cpucfg(c);
+- c->writecombine = _CACHE_UNCACHED_ACCELERATED;
+ break;
+ default:
+ panic("Unknown Loongson Processor ID!");
+diff --git a/arch/powerpc/kernel/head_32.h b/arch/powerpc/kernel/head_32.h
+index abc7b603ab65c..294dd0082ad2f 100644
+--- a/arch/powerpc/kernel/head_32.h
++++ b/arch/powerpc/kernel/head_32.h
+@@ -331,11 +331,7 @@ label:
+ lis r1, emergency_ctx@ha
+ #endif
+ lwz r1, emergency_ctx@l(r1)
+- cmpwi cr1, r1, 0
+- bne cr1, 1f
+- lis r1, init_thread_union@ha
+- addi r1, r1, init_thread_union@l
+-1: addi r1, r1, THREAD_SIZE - INT_FRAME_SIZE
++ addi r1, r1, THREAD_SIZE - INT_FRAME_SIZE
+ EXCEPTION_PROLOG_2
+ SAVE_NVGPRS(r11)
+ addi r3, r1, STACK_FRAME_OVERHEAD
+diff --git a/arch/powerpc/kernel/iommu.c b/arch/powerpc/kernel/iommu.c
+index 5b69a6a72a0e2..6806eefa52ceb 100644
+--- a/arch/powerpc/kernel/iommu.c
++++ b/arch/powerpc/kernel/iommu.c
+@@ -1050,7 +1050,7 @@ int iommu_take_ownership(struct iommu_table *tbl)
+
+ spin_lock_irqsave(&tbl->large_pool.lock, flags);
+ for (i = 0; i < tbl->nr_pools; i++)
+- spin_lock(&tbl->pools[i].lock);
++ spin_lock_nest_lock(&tbl->pools[i].lock, &tbl->large_pool.lock);
+
+ iommu_table_release_pages(tbl);
+
+@@ -1078,7 +1078,7 @@ void iommu_release_ownership(struct iommu_table *tbl)
+
+ spin_lock_irqsave(&tbl->large_pool.lock, flags);
+ for (i = 0; i < tbl->nr_pools; i++)
+- spin_lock(&tbl->pools[i].lock);
++ spin_lock_nest_lock(&tbl->pools[i].lock, &tbl->large_pool.lock);
+
+ memset(tbl->it_map, 0, sz);
+
+diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c
+index 8ba49a6bf5159..d7c1f92152af6 100644
+--- a/arch/powerpc/kernel/setup_32.c
++++ b/arch/powerpc/kernel/setup_32.c
+@@ -164,7 +164,7 @@ void __init irqstack_early_init(void)
+ }
+
+ #ifdef CONFIG_VMAP_STACK
+-void *emergency_ctx[NR_CPUS] __ro_after_init;
++void *emergency_ctx[NR_CPUS] __ro_after_init = {[0] = &init_stack};
+
+ void __init emergency_stack_init(void)
+ {
+diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
+index d1bc51a128b29..e285d55f9213a 100644
+--- a/arch/powerpc/kernel/smp.c
++++ b/arch/powerpc/kernel/smp.c
+@@ -1545,6 +1545,9 @@ void start_secondary(void *unused)
+
+ vdso_getcpu_init();
+ #endif
++ set_numa_node(numa_cpu_lookup_table[cpu]);
++ set_numa_mem(local_memory_node(numa_cpu_lookup_table[cpu]));
++
+ /* Update topology CPU masks */
+ add_cpu_to_masks(cpu);
+
+@@ -1563,9 +1566,6 @@ void start_secondary(void *unused)
+ shared_caches = true;
+ }
+
+- set_numa_node(numa_cpu_lookup_table[cpu]);
+- set_numa_mem(local_memory_node(numa_cpu_lookup_table[cpu]));
+-
+ smp_wmb();
+ notify_cpu_starting(cpu);
+ set_cpu_online(cpu, true);
+diff --git a/arch/powerpc/lib/feature-fixups.c b/arch/powerpc/lib/feature-fixups.c
+index 1fd31b4b0e139..0aefa6a4a259b 100644
+--- a/arch/powerpc/lib/feature-fixups.c
++++ b/arch/powerpc/lib/feature-fixups.c
+@@ -14,6 +14,7 @@
+ #include <linux/string.h>
+ #include <linux/init.h>
+ #include <linux/sched/mm.h>
++#include <linux/stop_machine.h>
+ #include <asm/cputable.h>
+ #include <asm/code-patching.h>
+ #include <asm/page.h>
+@@ -227,11 +228,25 @@ static void do_stf_exit_barrier_fixups(enum stf_barrier_type types)
+ : "unknown");
+ }
+
++static int __do_stf_barrier_fixups(void *data)
++{
++ enum stf_barrier_type *types = data;
++
++ do_stf_entry_barrier_fixups(*types);
++ do_stf_exit_barrier_fixups(*types);
++
++ return 0;
++}
+
+ void do_stf_barrier_fixups(enum stf_barrier_type types)
+ {
+- do_stf_entry_barrier_fixups(types);
+- do_stf_exit_barrier_fixups(types);
++ /*
++ * The call to the fallback entry flush, and the fallback/sync-ori exit
++ * flush can not be safely patched in/out while other CPUs are executing
++ * them. So call __do_stf_barrier_fixups() on one CPU while all other CPUs
++ * spin in the stop machine core with interrupts hard disabled.
++ */
++ stop_machine(__do_stf_barrier_fixups, &types, NULL);
+ }
+
+ void do_uaccess_flush_fixups(enum l1d_flush_type types)
+@@ -284,8 +299,9 @@ void do_uaccess_flush_fixups(enum l1d_flush_type types)
+ : "unknown");
+ }
+
+-void do_entry_flush_fixups(enum l1d_flush_type types)
++static int __do_entry_flush_fixups(void *data)
+ {
++ enum l1d_flush_type types = *(enum l1d_flush_type *)data;
+ unsigned int instrs[3], *dest;
+ long *start, *end;
+ int i;
+@@ -354,6 +370,19 @@ void do_entry_flush_fixups(enum l1d_flush_type types)
+ : "ori type" :
+ (types & L1D_FLUSH_MTTRIG) ? "mttrig type"
+ : "unknown");
++
++ return 0;
++}
++
++void do_entry_flush_fixups(enum l1d_flush_type types)
++{
++ /*
++ * The call to the fallback flush can not be safely patched in/out while
++ * other CPUs are executing it. So call __do_entry_flush_fixups() on one
++ * CPU while all other CPUs spin in the stop machine core with interrupts
++ * hard disabled.
++ */
++ stop_machine(__do_entry_flush_fixups, &types, NULL);
+ }
+
+ void do_rfi_flush_fixups(enum l1d_flush_type types)
+diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
+index 73b06adb6eebe..f81b09769e0b6 100644
+--- a/arch/powerpc/mm/book3s64/hash_utils.c
++++ b/arch/powerpc/mm/book3s64/hash_utils.c
+@@ -337,7 +337,7 @@ repeat:
+ int htab_remove_mapping(unsigned long vstart, unsigned long vend,
+ int psize, int ssize)
+ {
+- unsigned long vaddr;
++ unsigned long vaddr, time_limit;
+ unsigned int step, shift;
+ int rc;
+ int ret = 0;
+@@ -350,8 +350,19 @@ int htab_remove_mapping(unsigned long vstart, unsigned long vend,
+
+ /* Unmap the full range specificied */
+ vaddr = ALIGN_DOWN(vstart, step);
++ time_limit = jiffies + HZ;
++
+ for (;vaddr < vend; vaddr += step) {
+ rc = mmu_hash_ops.hpte_removebolted(vaddr, psize, ssize);
++
++ /*
++ * For large number of mappings introduce a cond_resched()
++ * to prevent softlockup warnings.
++ */
++ if (time_after(jiffies, time_limit)) {
++ cond_resched();
++ time_limit = jiffies + HZ;
++ }
+ if (rc == -ENOENT) {
+ ret = -ENOENT;
+ continue;
+diff --git a/arch/powerpc/platforms/pseries/hotplug-cpu.c b/arch/powerpc/platforms/pseries/hotplug-cpu.c
+index 12cbffd3c2e32..325f3b220f360 100644
+--- a/arch/powerpc/platforms/pseries/hotplug-cpu.c
++++ b/arch/powerpc/platforms/pseries/hotplug-cpu.c
+@@ -47,9 +47,6 @@ static void rtas_stop_self(void)
+
+ BUG_ON(rtas_stop_self_token == RTAS_UNKNOWN_SERVICE);
+
+- printk("cpu %u (hwid %u) Ready to die...\n",
+- smp_processor_id(), hard_smp_processor_id());
+-
+ rtas_call_unlocked(&args, rtas_stop_self_token, 0, 1, NULL);
+
+ panic("Alas, I survived.\n");
+diff --git a/arch/powerpc/sysdev/xive/common.c b/arch/powerpc/sysdev/xive/common.c
+index 5cacb632eb37a..31b657c377353 100644
+--- a/arch/powerpc/sysdev/xive/common.c
++++ b/arch/powerpc/sysdev/xive/common.c
+@@ -1341,17 +1341,14 @@ static int xive_prepare_cpu(unsigned int cpu)
+
+ xc = per_cpu(xive_cpu, cpu);
+ if (!xc) {
+- struct device_node *np;
+-
+ xc = kzalloc_node(sizeof(struct xive_cpu),
+ GFP_KERNEL, cpu_to_node(cpu));
+ if (!xc)
+ return -ENOMEM;
+- np = of_get_cpu_node(cpu, NULL);
+- if (np)
+- xc->chip_id = of_get_ibm_chip_id(np);
+- of_node_put(np);
+ xc->hw_ipi = XIVE_BAD_IRQ;
++ xc->chip_id = XIVE_INVALID_CHIP_ID;
++ if (xive_ops->prepare_cpu)
++ xive_ops->prepare_cpu(cpu, xc);
+
+ per_cpu(xive_cpu, cpu) = xc;
+ }
+diff --git a/arch/powerpc/sysdev/xive/native.c b/arch/powerpc/sysdev/xive/native.c
+index 05a800a3104ed..57e3f15404354 100644
+--- a/arch/powerpc/sysdev/xive/native.c
++++ b/arch/powerpc/sysdev/xive/native.c
+@@ -380,6 +380,11 @@ static void xive_native_update_pending(struct xive_cpu *xc)
+ }
+ }
+
++static void xive_native_prepare_cpu(unsigned int cpu, struct xive_cpu *xc)
++{
++ xc->chip_id = cpu_to_chip_id(cpu);
++}
++
+ static void xive_native_setup_cpu(unsigned int cpu, struct xive_cpu *xc)
+ {
+ s64 rc;
+@@ -462,6 +467,7 @@ static const struct xive_ops xive_native_ops = {
+ .match = xive_native_match,
+ .shutdown = xive_native_shutdown,
+ .update_pending = xive_native_update_pending,
++ .prepare_cpu = xive_native_prepare_cpu,
+ .setup_cpu = xive_native_setup_cpu,
+ .teardown_cpu = xive_native_teardown_cpu,
+ .sync_source = xive_native_sync_source,
+diff --git a/arch/powerpc/sysdev/xive/xive-internal.h b/arch/powerpc/sysdev/xive/xive-internal.h
+index 9cf57c722faa3..6478be19b4d36 100644
+--- a/arch/powerpc/sysdev/xive/xive-internal.h
++++ b/arch/powerpc/sysdev/xive/xive-internal.h
+@@ -46,6 +46,7 @@ struct xive_ops {
+ u32 *sw_irq);
+ int (*setup_queue)(unsigned int cpu, struct xive_cpu *xc, u8 prio);
+ void (*cleanup_queue)(unsigned int cpu, struct xive_cpu *xc, u8 prio);
++ void (*prepare_cpu)(unsigned int cpu, struct xive_cpu *xc);
+ void (*setup_cpu)(unsigned int cpu, struct xive_cpu *xc);
+ void (*teardown_cpu)(unsigned int cpu, struct xive_cpu *xc);
+ bool (*match)(struct device_node *np);
+diff --git a/arch/riscv/kernel/smp.c b/arch/riscv/kernel/smp.c
+index ea028d9e0d242..d44567490d911 100644
+--- a/arch/riscv/kernel/smp.c
++++ b/arch/riscv/kernel/smp.c
+@@ -54,7 +54,7 @@ int riscv_hartid_to_cpuid(int hartid)
+ return i;
+
+ pr_err("Couldn't find cpu id for hartid [%d]\n", hartid);
+- return i;
++ return -ENOENT;
+ }
+
+ void riscv_cpuid_to_hartid_mask(const struct cpumask *in, struct cpumask *out)
+diff --git a/arch/sh/kernel/traps.c b/arch/sh/kernel/traps.c
+index f5beecdac6938..e76b221570999 100644
+--- a/arch/sh/kernel/traps.c
++++ b/arch/sh/kernel/traps.c
+@@ -180,7 +180,6 @@ static inline void arch_ftrace_nmi_exit(void) { }
+
+ BUILD_TRAP_HANDLER(nmi)
+ {
+- unsigned int cpu = smp_processor_id();
+ TRAP_HANDLER_DECL;
+
+ arch_ftrace_nmi_enter();
+diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h
+index f656aabd1545c..0e3325790f3a9 100644
+--- a/arch/x86/include/asm/idtentry.h
++++ b/arch/x86/include/asm/idtentry.h
+@@ -588,6 +588,21 @@ DECLARE_IDTENTRY_RAW(X86_TRAP_MC, exc_machine_check);
+ #endif
+
+ /* NMI */
++
++#if defined(CONFIG_X86_64) && IS_ENABLED(CONFIG_KVM_INTEL)
++/*
++ * Special NOIST entry point for VMX which invokes this on the kernel
++ * stack. asm_exc_nmi() requires an IST to work correctly vs. the NMI
++ * 'executing' marker.
++ *
++ * On 32bit this just uses the regular NMI entry point because 32-bit does
++ * not have ISTs.
++ */
++DECLARE_IDTENTRY(X86_TRAP_NMI, exc_nmi_noist);
++#else
++#define asm_exc_nmi_noist asm_exc_nmi
++#endif
++
+ DECLARE_IDTENTRY_NMI(X86_TRAP_NMI, exc_nmi);
+ #ifdef CONFIG_XEN_PV
+ DECLARE_IDTENTRY_RAW(X86_TRAP_NMI, xenpv_exc_nmi);
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index e0cfd620b2934..d5b365e670ac0 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -358,8 +358,6 @@ struct kvm_mmu {
+ int (*sync_page)(struct kvm_vcpu *vcpu,
+ struct kvm_mmu_page *sp);
+ void (*invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa);
+- void (*update_pte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
+- u64 *spte, const void *pte);
+ hpa_t root_hpa;
+ gpa_t root_pgd;
+ union kvm_mmu_role mmu_role;
+@@ -1035,7 +1033,6 @@ struct kvm_arch {
+ struct kvm_vm_stat {
+ ulong mmu_shadow_zapped;
+ ulong mmu_pte_write;
+- ulong mmu_pte_updated;
+ ulong mmu_pde_zapped;
+ ulong mmu_flooded;
+ ulong mmu_recycled;
+@@ -1697,6 +1694,7 @@ int kvm_pv_send_ipi(struct kvm *kvm, unsigned long ipi_bitmap_low,
+ unsigned long icr, int op_64_bit);
+
+ void kvm_define_user_return_msr(unsigned index, u32 msr);
++int kvm_probe_user_return_msr(u32 msr);
+ int kvm_set_user_return_msr(unsigned index, u64 val, u64 mask);
+
+ u64 kvm_scale_tsc(struct kvm_vcpu *vcpu, u64 tsc);
+diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
+index c66df6368909f..ea72b3d83240a 100644
+--- a/arch/x86/include/asm/processor.h
++++ b/arch/x86/include/asm/processor.h
+@@ -805,8 +805,10 @@ DECLARE_PER_CPU(u64, msr_misc_features_shadow);
+
+ #ifdef CONFIG_CPU_SUP_AMD
+ extern u32 amd_get_nodes_per_socket(void);
++extern u32 amd_get_highest_perf(void);
+ #else
+ static inline u32 amd_get_nodes_per_socket(void) { return 0; }
++static inline u32 amd_get_highest_perf(void) { return 0; }
+ #endif
+
+ static inline uint32_t hypervisor_cpuid_base(const char *sig, uint32_t leaves)
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index 347a956f71ca0..eedb2b320946f 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -1170,3 +1170,19 @@ void set_dr_addr_mask(unsigned long mask, int dr)
+ break;
+ }
+ }
++
++u32 amd_get_highest_perf(void)
++{
++ struct cpuinfo_x86 *c = &boot_cpu_data;
++
++ if (c->x86 == 0x17 && ((c->x86_model >= 0x30 && c->x86_model < 0x40) ||
++ (c->x86_model >= 0x70 && c->x86_model < 0x80)))
++ return 166;
++
++ if (c->x86 == 0x19 && ((c->x86_model >= 0x20 && c->x86_model < 0x30) ||
++ (c->x86_model >= 0x40 && c->x86_model < 0x70)))
++ return 166;
++
++ return 255;
++}
++EXPORT_SYMBOL_GPL(amd_get_highest_perf);
+diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c
+index bf250a339655f..2ef961cf4cfc5 100644
+--- a/arch/x86/kernel/nmi.c
++++ b/arch/x86/kernel/nmi.c
+@@ -524,6 +524,16 @@ nmi_restart:
+ mds_user_clear_cpu_buffers();
+ }
+
++#if defined(CONFIG_X86_64) && IS_ENABLED(CONFIG_KVM_INTEL)
++DEFINE_IDTENTRY_RAW(exc_nmi_noist)
++{
++ exc_nmi(regs);
++}
++#endif
++#if IS_MODULE(CONFIG_KVM_INTEL)
++EXPORT_SYMBOL_GPL(asm_exc_nmi_noist);
++#endif
++
+ void stop_nmi(void)
+ {
+ ignore_nmis++;
+diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
+index 6b08d1eb173fd..363b36bbd791a 100644
+--- a/arch/x86/kernel/smpboot.c
++++ b/arch/x86/kernel/smpboot.c
+@@ -2046,7 +2046,7 @@ static bool amd_set_max_freq_ratio(void)
+ return false;
+ }
+
+- highest_perf = perf_caps.highest_perf;
++ highest_perf = amd_get_highest_perf();
+ nominal_perf = perf_caps.nominal_perf;
+
+ if (!highest_perf || !nominal_perf) {
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index 38172ca627d36..0bd815101ff48 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -573,7 +573,8 @@ static int __do_cpuid_func_emulated(struct kvm_cpuid_array *array, u32 func)
+ case 7:
+ entry->flags |= KVM_CPUID_FLAG_SIGNIFCANT_INDEX;
+ entry->eax = 0;
+- entry->ecx = F(RDPID);
++ if (kvm_cpu_cap_has(X86_FEATURE_RDTSCP))
++ entry->ecx = F(RDPID);
+ ++array->nent;
+ default:
+ break;
+diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
+index d3f2b63167451..e82151ba95c09 100644
+--- a/arch/x86/kvm/emulate.c
++++ b/arch/x86/kvm/emulate.c
+@@ -4502,7 +4502,7 @@ static const struct opcode group8[] = {
+ * from the register case of group9.
+ */
+ static const struct gprefix pfx_0f_c7_7 = {
+- N, N, N, II(DstMem | ModRM | Op3264 | EmulateOnUD, em_rdpid, rdtscp),
++ N, N, N, II(DstMem | ModRM | Op3264 | EmulateOnUD, em_rdpid, rdpid),
+ };
+
+
+diff --git a/arch/x86/kvm/kvm_emulate.h b/arch/x86/kvm/kvm_emulate.h
+index 43c93ffa76edf..7d5be04dc6616 100644
+--- a/arch/x86/kvm/kvm_emulate.h
++++ b/arch/x86/kvm/kvm_emulate.h
+@@ -468,6 +468,7 @@ enum x86_intercept {
+ x86_intercept_clgi,
+ x86_intercept_skinit,
+ x86_intercept_rdtscp,
++ x86_intercept_rdpid,
+ x86_intercept_icebp,
+ x86_intercept_wbinvd,
+ x86_intercept_monitor,
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index 570fa298083cd..70eb00f4317fe 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -1908,8 +1908,8 @@ void kvm_lapic_expired_hv_timer(struct kvm_vcpu *vcpu)
+ if (!apic->lapic_timer.hv_timer_in_use)
+ goto out;
+ WARN_ON(rcuwait_active(&vcpu->wait));
+- cancel_hv_timer(apic);
+ apic_timer_expired(apic, false);
++ cancel_hv_timer(apic);
+
+ if (apic_lvtt_period(apic) && apic->lapic_timer.period) {
+ advance_periodic_target_expiration(apic);
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 9dabd689a8129..b3987e338fbea 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -1723,13 +1723,6 @@ static int nonpaging_sync_page(struct kvm_vcpu *vcpu,
+ return 0;
+ }
+
+-static void nonpaging_update_pte(struct kvm_vcpu *vcpu,
+- struct kvm_mmu_page *sp, u64 *spte,
+- const void *pte)
+-{
+- WARN_ON(1);
+-}
+-
+ #define KVM_PAGE_ARRAY_NR 16
+
+ struct kvm_mmu_pages {
+@@ -3833,7 +3826,6 @@ static void nonpaging_init_context(struct kvm_vcpu *vcpu,
+ context->gva_to_gpa = nonpaging_gva_to_gpa;
+ context->sync_page = nonpaging_sync_page;
+ context->invlpg = NULL;
+- context->update_pte = nonpaging_update_pte;
+ context->root_level = 0;
+ context->shadow_root_level = PT32E_ROOT_LEVEL;
+ context->direct_map = true;
+@@ -4415,7 +4407,6 @@ static void paging64_init_context_common(struct kvm_vcpu *vcpu,
+ context->gva_to_gpa = paging64_gva_to_gpa;
+ context->sync_page = paging64_sync_page;
+ context->invlpg = paging64_invlpg;
+- context->update_pte = paging64_update_pte;
+ context->shadow_root_level = level;
+ context->direct_map = false;
+ }
+@@ -4444,7 +4435,6 @@ static void paging32_init_context(struct kvm_vcpu *vcpu,
+ context->gva_to_gpa = paging32_gva_to_gpa;
+ context->sync_page = paging32_sync_page;
+ context->invlpg = paging32_invlpg;
+- context->update_pte = paging32_update_pte;
+ context->shadow_root_level = PT32E_ROOT_LEVEL;
+ context->direct_map = false;
+ }
+@@ -4526,7 +4516,6 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
+ context->page_fault = kvm_tdp_page_fault;
+ context->sync_page = nonpaging_sync_page;
+ context->invlpg = NULL;
+- context->update_pte = nonpaging_update_pte;
+ context->shadow_root_level = kvm_mmu_get_tdp_level(vcpu);
+ context->direct_map = true;
+ context->get_guest_pgd = get_cr3;
+@@ -4703,7 +4692,6 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
+ context->gva_to_gpa = ept_gva_to_gpa;
+ context->sync_page = ept_sync_page;
+ context->invlpg = ept_invlpg;
+- context->update_pte = ept_update_pte;
+ context->root_level = level;
+ context->direct_map = false;
+ context->mmu_role.as_u64 = new_role.as_u64;
+@@ -4851,19 +4839,6 @@ void kvm_mmu_unload(struct kvm_vcpu *vcpu)
+ }
+ EXPORT_SYMBOL_GPL(kvm_mmu_unload);
+
+-static void mmu_pte_write_new_pte(struct kvm_vcpu *vcpu,
+- struct kvm_mmu_page *sp, u64 *spte,
+- const void *new)
+-{
+- if (sp->role.level != PG_LEVEL_4K) {
+- ++vcpu->kvm->stat.mmu_pde_zapped;
+- return;
+- }
+-
+- ++vcpu->kvm->stat.mmu_pte_updated;
+- vcpu->arch.mmu->update_pte(vcpu, sp, spte, new);
+-}
+-
+ static bool need_remote_flush(u64 old, u64 new)
+ {
+ if (!is_shadow_present_pte(old))
+@@ -4979,22 +4954,6 @@ static u64 *get_written_sptes(struct kvm_mmu_page *sp, gpa_t gpa, int *nspte)
+ return spte;
+ }
+
+-/*
+- * Ignore various flags when determining if a SPTE can be immediately
+- * overwritten for the current MMU.
+- * - level: explicitly checked in mmu_pte_write_new_pte(), and will never
+- * match the current MMU role, as MMU's level tracks the root level.
+- * - access: updated based on the new guest PTE
+- * - quadrant: handled by get_written_sptes()
+- * - invalid: always false (loop only walks valid shadow pages)
+- */
+-static const union kvm_mmu_page_role role_ign = {
+- .level = 0xf,
+- .access = 0x7,
+- .quadrant = 0x3,
+- .invalid = 0x1,
+-};
+-
+ static void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
+ const u8 *new, int bytes,
+ struct kvm_page_track_notifier_node *node)
+@@ -5045,14 +5004,10 @@ static void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
+
+ local_flush = true;
+ while (npte--) {
+- u32 base_role = vcpu->arch.mmu->mmu_role.base.word;
+-
+ entry = *spte;
+ mmu_page_zap_pte(vcpu->kvm, sp, spte, NULL);
+- if (gentry &&
+- !((sp->role.word ^ base_role) & ~role_ign.word) &&
+- rmap_can_add(vcpu))
+- mmu_pte_write_new_pte(vcpu, sp, spte, &gentry);
++ if (gentry && sp->role.level != PG_LEVEL_4K)
++ ++vcpu->kvm->stat.mmu_pde_zapped;
+ if (need_remote_flush(entry, *spte))
+ remote_flush = true;
+ ++spte;
+diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
+index 7c233c79c124d..965f1f901cf3a 100644
+--- a/arch/x86/kvm/svm/sev.c
++++ b/arch/x86/kvm/svm/sev.c
+@@ -2062,5 +2062,8 @@ void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector)
+ * the guest will set the CS and RIP. Set SW_EXIT_INFO_2 to a
+ * non-zero value.
+ */
++ if (!svm->ghcb)
++ return;
++
+ ghcb_set_sw_exit_info_2(svm->ghcb, 1);
+ }
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index 15a69500819d2..9006fe1230a11 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -2724,7 +2724,7 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ static int svm_complete_emulated_msr(struct kvm_vcpu *vcpu, int err)
+ {
+ struct vcpu_svm *svm = to_svm(vcpu);
+- if (!sev_es_guest(svm->vcpu.kvm) || !err)
++ if (!err || !sev_es_guest(vcpu->kvm) || WARN_ON_ONCE(!svm->ghcb))
+ return kvm_complete_insn_gp(&svm->vcpu, err);
+
+ ghcb_set_sw_exit_info_1(svm->ghcb, 1);
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index 0c41ffb7957f9..9aec6b4476cd9 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -3140,15 +3140,8 @@ static bool nested_get_evmcs_page(struct kvm_vcpu *vcpu)
+ nested_vmx_handle_enlightened_vmptrld(vcpu, false);
+
+ if (evmptrld_status == EVMPTRLD_VMFAIL ||
+- evmptrld_status == EVMPTRLD_ERROR) {
+- pr_debug_ratelimited("%s: enlightened vmptrld failed\n",
+- __func__);
+- vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+- vcpu->run->internal.suberror =
+- KVM_INTERNAL_ERROR_EMULATION;
+- vcpu->run->internal.ndata = 0;
++ evmptrld_status == EVMPTRLD_ERROR)
+ return false;
+- }
+ }
+
+ return true;
+@@ -3236,8 +3229,16 @@ static bool nested_get_vmcs12_pages(struct kvm_vcpu *vcpu)
+
+ static bool vmx_get_nested_state_pages(struct kvm_vcpu *vcpu)
+ {
+- if (!nested_get_evmcs_page(vcpu))
++ if (!nested_get_evmcs_page(vcpu)) {
++ pr_debug_ratelimited("%s: enlightened vmptrld failed\n",
++ __func__);
++ vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
++ vcpu->run->internal.suberror =
++ KVM_INTERNAL_ERROR_EMULATION;
++ vcpu->run->internal.ndata = 0;
++
+ return false;
++ }
+
+ if (is_guest_mode(vcpu) && !nested_get_vmcs12_pages(vcpu))
+ return false;
+@@ -4467,7 +4468,15 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
+ /* trying to cancel vmlaunch/vmresume is a bug */
+ WARN_ON_ONCE(vmx->nested.nested_run_pending);
+
+- kvm_clear_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu);
++ if (kvm_check_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu)) {
++ /*
++ * KVM_REQ_GET_NESTED_STATE_PAGES is also used to map
++ * Enlightened VMCS after migration and we still need to
++ * do that when something is forcing L2->L1 exit prior to
++ * the first L2 run.
++ */
++ (void)nested_get_evmcs_page(vcpu);
++ }
+
+ /* Service the TLB flush request for L2 before switching to L1. */
+ if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu))
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 852cfb4c063e8..d3ec6ba3acb5c 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -36,6 +36,7 @@
+ #include <asm/debugreg.h>
+ #include <asm/desc.h>
+ #include <asm/fpu/internal.h>
++#include <asm/idtentry.h>
+ #include <asm/io.h>
+ #include <asm/irq_remapping.h>
+ #include <asm/kexec.h>
+@@ -6334,18 +6335,17 @@ static void vmx_apicv_post_state_restore(struct kvm_vcpu *vcpu)
+
+ void vmx_do_interrupt_nmi_irqoff(unsigned long entry);
+
+-static void handle_interrupt_nmi_irqoff(struct kvm_vcpu *vcpu, u32 intr_info)
++static void handle_interrupt_nmi_irqoff(struct kvm_vcpu *vcpu,
++ unsigned long entry)
+ {
+- unsigned int vector = intr_info & INTR_INFO_VECTOR_MASK;
+- gate_desc *desc = (gate_desc *)host_idt_base + vector;
+-
+ kvm_before_interrupt(vcpu);
+- vmx_do_interrupt_nmi_irqoff(gate_offset(desc));
++ vmx_do_interrupt_nmi_irqoff(entry);
+ kvm_after_interrupt(vcpu);
+ }
+
+ static void handle_exception_nmi_irqoff(struct vcpu_vmx *vmx)
+ {
++ const unsigned long nmi_entry = (unsigned long)asm_exc_nmi_noist;
+ u32 intr_info = vmx_get_intr_info(&vmx->vcpu);
+
+ /* if exit due to PF check for async PF */
+@@ -6356,18 +6356,20 @@ static void handle_exception_nmi_irqoff(struct vcpu_vmx *vmx)
+ kvm_machine_check();
+ /* We need to handle NMIs before interrupts are enabled */
+ else if (is_nmi(intr_info))
+- handle_interrupt_nmi_irqoff(&vmx->vcpu, intr_info);
++ handle_interrupt_nmi_irqoff(&vmx->vcpu, nmi_entry);
+ }
+
+ static void handle_external_interrupt_irqoff(struct kvm_vcpu *vcpu)
+ {
+ u32 intr_info = vmx_get_intr_info(vcpu);
++ unsigned int vector = intr_info & INTR_INFO_VECTOR_MASK;
++ gate_desc *desc = (gate_desc *)host_idt_base + vector;
+
+ if (WARN_ONCE(!is_external_intr(intr_info),
+ "KVM: unexpected VM-Exit interrupt info: 0x%x", intr_info))
+ return;
+
+- handle_interrupt_nmi_irqoff(vcpu, intr_info);
++ handle_interrupt_nmi_irqoff(vcpu, gate_offset(desc));
+ }
+
+ static void vmx_handle_exit_irqoff(struct kvm_vcpu *vcpu)
+@@ -6848,12 +6850,9 @@ static int vmx_create_vcpu(struct kvm_vcpu *vcpu)
+
+ for (i = 0; i < ARRAY_SIZE(vmx_uret_msrs_list); ++i) {
+ u32 index = vmx_uret_msrs_list[i];
+- u32 data_low, data_high;
+ int j = vmx->nr_uret_msrs;
+
+- if (rdmsr_safe(index, &data_low, &data_high) < 0)
+- continue;
+- if (wrmsr_safe(index, data_low, data_high) < 0)
++ if (kvm_probe_user_return_msr(index))
+ continue;
+
+ vmx->guest_uret_msrs[j].slot = i;
+@@ -7286,9 +7285,11 @@ static __init void vmx_set_cpu_caps(void)
+ if (!cpu_has_vmx_xsaves())
+ kvm_cpu_cap_clear(X86_FEATURE_XSAVES);
+
+- /* CPUID 0x80000001 */
+- if (!cpu_has_vmx_rdtscp())
++ /* CPUID 0x80000001 and 0x7 (RDPID) */
++ if (!cpu_has_vmx_rdtscp()) {
+ kvm_cpu_cap_clear(X86_FEATURE_RDTSCP);
++ kvm_cpu_cap_clear(X86_FEATURE_RDPID);
++ }
+
+ if (cpu_has_vmx_waitpkg())
+ kvm_cpu_cap_check_and_set(X86_FEATURE_WAITPKG);
+@@ -7344,8 +7345,9 @@ static int vmx_check_intercept(struct kvm_vcpu *vcpu,
+ /*
+ * RDPID causes #UD if disabled through secondary execution controls.
+ * Because it is marked as EmulateOnUD, we need to intercept it here.
++ * Note, RDPID is hidden behind ENABLE_RDTSCP.
+ */
+- case x86_intercept_rdtscp:
++ case x86_intercept_rdpid:
+ if (!nested_cpu_has2(vmcs12, SECONDARY_EXEC_ENABLE_RDTSCP)) {
+ exception->vector = UD_VECTOR;
+ exception->error_code_valid = false;
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 38c3e7860aa90..95e28358f443a 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -234,7 +234,6 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
+ VCPU_STAT("halt_poll_fail_ns", halt_poll_fail_ns),
+ VM_STAT("mmu_shadow_zapped", mmu_shadow_zapped),
+ VM_STAT("mmu_pte_write", mmu_pte_write),
+- VM_STAT("mmu_pte_updated", mmu_pte_updated),
+ VM_STAT("mmu_pde_zapped", mmu_pde_zapped),
+ VM_STAT("mmu_flooded", mmu_flooded),
+ VM_STAT("mmu_recycled", mmu_recycled),
+@@ -324,6 +323,22 @@ static void kvm_on_user_return(struct user_return_notifier *urn)
+ }
+ }
+
++int kvm_probe_user_return_msr(u32 msr)
++{
++ u64 val;
++ int ret;
++
++ preempt_disable();
++ ret = rdmsrl_safe(msr, &val);
++ if (ret)
++ goto out;
++ ret = wrmsrl_safe(msr, val);
++out:
++ preempt_enable();
++ return ret;
++}
++EXPORT_SYMBOL_GPL(kvm_probe_user_return_msr);
++
+ void kvm_define_user_return_msr(unsigned slot, u32 msr)
+ {
+ BUG_ON(slot >= KVM_MAX_NR_USER_RETURN_MSRS);
+@@ -7873,6 +7888,18 @@ static void pvclock_gtod_update_fn(struct work_struct *work)
+
+ static DECLARE_WORK(pvclock_gtod_work, pvclock_gtod_update_fn);
+
++/*
++ * Indirection to move queue_work() out of the tk_core.seq write held
++ * region to prevent possible deadlocks against time accessors which
++ * are invoked with work related locks held.
++ */
++static void pvclock_irq_work_fn(struct irq_work *w)
++{
++ queue_work(system_long_wq, &pvclock_gtod_work);
++}
++
++static DEFINE_IRQ_WORK(pvclock_irq_work, pvclock_irq_work_fn);
++
+ /*
+ * Notification about pvclock gtod data update.
+ */
+@@ -7884,13 +7911,14 @@ static int pvclock_gtod_notify(struct notifier_block *nb, unsigned long unused,
+
+ update_pvclock_gtod(tk);
+
+- /* disable master clock if host does not trust, or does not
+- * use, TSC based clocksource.
++ /*
++ * Disable master clock if host does not trust, or does not use,
++ * TSC based clocksource. Delegate queue_work() to irq_work as
++ * this is invoked with tk_core.seq write held.
+ */
+ if (!gtod_is_based_on_tsc(gtod->clock.vclock_mode) &&
+ atomic_read(&kvm_guest_has_master_clock) != 0)
+- queue_work(system_long_wq, &pvclock_gtod_work);
+-
++ irq_work_queue(&pvclock_irq_work);
+ return 0;
+ }
+
+@@ -8006,6 +8034,8 @@ void kvm_arch_exit(void)
+ cpuhp_remove_state_nocalls(CPUHP_AP_X86_KVM_CLK_ONLINE);
+ #ifdef CONFIG_X86_64
+ pvclock_gtod_unregister_notifier(&pvclock_gtod_notifier);
++ irq_work_sync(&pvclock_irq_work);
++ cancel_work_sync(&pvclock_gtod_work);
+ #endif
+ kvm_x86_ops.hardware_enable = NULL;
+ kvm_mmu_module_exit();
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 5720978e4d09b..c91dca641eb46 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -2210,10 +2210,9 @@ static void bfq_remove_request(struct request_queue *q,
+
+ }
+
+-static bool bfq_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio,
++static bool bfq_bio_merge(struct request_queue *q, struct bio *bio,
+ unsigned int nr_segs)
+ {
+- struct request_queue *q = hctx->queue;
+ struct bfq_data *bfqd = q->elevator->elevator_data;
+ struct request *free = NULL;
+ /*
+diff --git a/block/blk-iocost.c b/block/blk-iocost.c
+index 98d656bdb42b7..4fbc875f7cb29 100644
+--- a/block/blk-iocost.c
++++ b/block/blk-iocost.c
+@@ -1073,7 +1073,17 @@ static void __propagate_weights(struct ioc_gq *iocg, u32 active, u32 inuse,
+
+ lockdep_assert_held(&ioc->lock);
+
+- inuse = clamp_t(u32, inuse, 1, active);
++ /*
++ * For an active leaf node, its inuse shouldn't be zero or exceed
++ * @active. An active internal node's inuse is solely determined by the
++ * inuse to active ratio of its children regardless of @inuse.
++ */
++ if (list_empty(&iocg->active_list) && iocg->child_active_sum) {
++ inuse = DIV64_U64_ROUND_UP(active * iocg->child_inuse_sum,
++ iocg->child_active_sum);
++ } else {
++ inuse = clamp_t(u32, inuse, 1, active);
++ }
+
+ iocg->last_inuse = iocg->inuse;
+ if (save)
+@@ -1090,7 +1100,7 @@ static void __propagate_weights(struct ioc_gq *iocg, u32 active, u32 inuse,
+ /* update the level sums */
+ parent->child_active_sum += (s32)(active - child->active);
+ parent->child_inuse_sum += (s32)(inuse - child->inuse);
+- /* apply the udpates */
++ /* apply the updates */
+ child->active = active;
+ child->inuse = inuse;
+
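
A worked example of the internal-node branch added above, with made-up numbers: if a group's children report child_inuse_sum = 30 against child_active_sum = 60, the parent's effective inuse is that same 1:2 ratio applied to its own active weight, rounded up.

    u32 active = 100;
    u64 child_inuse_sum = 30, child_active_sum = 60;
    u32 inuse = DIV64_U64_ROUND_UP((u64)active * child_inuse_sum,
                                   child_active_sum);  /* (3000+59)/60 = 50 */
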
+diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
+index deff4e826e234..d93b458347769 100644
+--- a/block/blk-mq-sched.c
++++ b/block/blk-mq-sched.c
+@@ -348,14 +348,16 @@ bool __blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio,
+ unsigned int nr_segs)
+ {
+ struct elevator_queue *e = q->elevator;
+- struct blk_mq_ctx *ctx = blk_mq_get_ctx(q);
+- struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, bio->bi_opf, ctx);
++ struct blk_mq_ctx *ctx;
++ struct blk_mq_hw_ctx *hctx;
+ bool ret = false;
+ enum hctx_type type;
+
+ if (e && e->type->ops.bio_merge)
+- return e->type->ops.bio_merge(hctx, bio, nr_segs);
++ return e->type->ops.bio_merge(q, bio, nr_segs);
+
++ ctx = blk_mq_get_ctx(q);
++ hctx = blk_mq_map_queue(q, bio->bi_opf, ctx);
+ type = hctx->type;
+ if (!(hctx->flags & BLK_MQ_F_SHOULD_MERGE) ||
+ list_empty_careful(&ctx->rq_lists[type]))
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index f285a9123a8b0..88c843fa8d134 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -2189,8 +2189,9 @@ blk_qc_t blk_mq_submit_bio(struct bio *bio)
+ /* Bypass scheduler for flush requests */
+ blk_insert_flush(rq);
+ blk_mq_run_hw_queue(data.hctx, true);
+- } else if (plug && (q->nr_hw_queues == 1 || q->mq_ops->commit_rqs ||
+- !blk_queue_nonrot(q))) {
++ } else if (plug && (q->nr_hw_queues == 1 ||
++ blk_mq_is_sbitmap_shared(rq->mq_hctx->flags) ||
++ q->mq_ops->commit_rqs || !blk_queue_nonrot(q))) {
+ /*
+ * Use plugging if we have a ->commit_rqs() hook as well, as
+ * we know the driver uses bd->last in a smart fashion.
+@@ -3243,10 +3244,12 @@ EXPORT_SYMBOL(blk_mq_init_allocated_queue);
+ /* tags can _not_ be used after returning from blk_mq_exit_queue */
+ void blk_mq_exit_queue(struct request_queue *q)
+ {
+- struct blk_mq_tag_set *set = q->tag_set;
++ struct blk_mq_tag_set *set = q->tag_set;
+
+- blk_mq_del_queue_tag_set(q);
++ /* Checks hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED. */
+ blk_mq_exit_hw_queues(q, set, set->nr_hw_queues);
++ /* May clear BLK_MQ_F_TAG_QUEUE_SHARED in hctx->flags. */
++ blk_mq_del_queue_tag_set(q);
+ }
+
+ static int __blk_mq_alloc_rq_maps(struct blk_mq_tag_set *set)
+diff --git a/block/kyber-iosched.c b/block/kyber-iosched.c
+index dc89199bc8c69..7f9ef773bf444 100644
+--- a/block/kyber-iosched.c
++++ b/block/kyber-iosched.c
+@@ -562,11 +562,12 @@ static void kyber_limit_depth(unsigned int op, struct blk_mq_alloc_data *data)
+ }
+ }
+
+-static bool kyber_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio,
++static bool kyber_bio_merge(struct request_queue *q, struct bio *bio,
+ unsigned int nr_segs)
+ {
++ struct blk_mq_ctx *ctx = blk_mq_get_ctx(q);
++ struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, bio->bi_opf, ctx);
+ struct kyber_hctx_data *khd = hctx->sched_data;
+- struct blk_mq_ctx *ctx = blk_mq_get_ctx(hctx->queue);
+ struct kyber_ctx_queue *kcq = &khd->kcqs[ctx->index_hw[hctx->type]];
+ unsigned int sched_domain = kyber_sched_domain(bio->bi_opf);
+ struct list_head *rq_list = &kcq->rq_list[sched_domain];
+diff --git a/block/mq-deadline.c b/block/mq-deadline.c
+index 800ac902809b8..2b9635d0dcba8 100644
+--- a/block/mq-deadline.c
++++ b/block/mq-deadline.c
+@@ -461,10 +461,9 @@ static int dd_request_merge(struct request_queue *q, struct request **rq,
+ return ELEVATOR_NO_MERGE;
+ }
+
+-static bool dd_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio,
++static bool dd_bio_merge(struct request_queue *q, struct bio *bio,
+ unsigned int nr_segs)
+ {
+- struct request_queue *q = hctx->queue;
+ struct deadline_data *dd = q->elevator->elevator_data;
+ struct request *free = NULL;
+ bool ret;
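
The bfq, kyber and mq-deadline hunks are all sides of one interface change: the elevator ->bio_merge() hook now takes the request_queue instead of a blk_mq_hw_ctx, and __blk_mq_sched_bio_merge() no longer computes ctx/hctx before knowing whether the scheduler needs them. A hypothetical scheduler adopting the new signature would look like the sketch below; blk_mq_get_ctx()/blk_mq_map_queue() are block-layer internals, shown only for shape.

    static bool demo_bio_merge(struct request_queue *q, struct bio *bio,
                               unsigned int nr_segs)
    {
            /* derive the sw/hw contexts only when this scheduler needs them */
            struct blk_mq_ctx *ctx = blk_mq_get_ctx(q);
            struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, bio->bi_opf, ctx);

            /* ... per-scheduler merge attempt against ctx/hctx state ... */
            return false;
    }
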
+diff --git a/drivers/acpi/device_pm.c b/drivers/acpi/device_pm.c
+index 3586434d0ded9..13bc4ed2a26a7 100644
+--- a/drivers/acpi/device_pm.c
++++ b/drivers/acpi/device_pm.c
+@@ -1314,6 +1314,7 @@ int acpi_dev_pm_attach(struct device *dev, bool power_on)
+ {"PNP0C0B", }, /* Generic ACPI fan */
+ {"INT3404", }, /* Fan */
+ {"INTC1044", }, /* Fan for Tiger Lake generation */
++ {"INTC1048", }, /* Fan for Alder Lake generation */
+ {}
+ };
+ struct acpi_device *adev = ACPI_COMPANION(dev);
+diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
+index 239eeeafc62f6..32a9bd8788526 100644
+--- a/drivers/acpi/scan.c
++++ b/drivers/acpi/scan.c
+@@ -705,6 +705,7 @@ int acpi_device_add(struct acpi_device *device,
+
+ result = acpi_device_set_name(device, acpi_device_bus_id);
+ if (result) {
++ kfree_const(acpi_device_bus_id->bus_id);
+ kfree(acpi_device_bus_id);
+ goto err_unlock;
+ }
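
The scan.c hunk plugs a leak on the name-setting failure path. kfree_const() is the matching release for bus_id because the string may have come from kstrdup_const(), which returns the original pointer for .rodata literals. A sketch of that pairing, with an illustrative string:

    const char *id = kstrdup_const("PNP0C0B", GFP_KERNEL);
    if (!id)
            return -ENOMEM;
    /* ... */
    kfree_const(id);    /* no-op if id still points into .rodata */
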
+diff --git a/drivers/ata/ahci_brcm.c b/drivers/ata/ahci_brcm.c
+index 5b32df5d33adc..6e9c5ade4c2ea 100644
+--- a/drivers/ata/ahci_brcm.c
++++ b/drivers/ata/ahci_brcm.c
+@@ -86,7 +86,8 @@ struct brcm_ahci_priv {
+ u32 port_mask;
+ u32 quirks;
+ enum brcm_ahci_version version;
+- struct reset_control *rcdev;
++ struct reset_control *rcdev_rescal;
++ struct reset_control *rcdev_ahci;
+ };
+
+ static inline u32 brcm_sata_readreg(void __iomem *addr)
+@@ -352,8 +353,8 @@ static int brcm_ahci_suspend(struct device *dev)
+ else
+ ret = 0;
+
+- if (priv->version != BRCM_SATA_BCM7216)
+- reset_control_assert(priv->rcdev);
++ reset_control_assert(priv->rcdev_ahci);
++ reset_control_rearm(priv->rcdev_rescal);
+
+ return ret;
+ }
+@@ -365,10 +366,10 @@ static int __maybe_unused brcm_ahci_resume(struct device *dev)
+ struct brcm_ahci_priv *priv = hpriv->plat_data;
+ int ret = 0;
+
+- if (priv->version == BRCM_SATA_BCM7216)
+- ret = reset_control_reset(priv->rcdev);
+- else
+- ret = reset_control_deassert(priv->rcdev);
++ ret = reset_control_deassert(priv->rcdev_ahci);
++ if (ret)
++ return ret;
++ ret = reset_control_reset(priv->rcdev_rescal);
+ if (ret)
+ return ret;
+
+@@ -434,7 +435,6 @@ static int brcm_ahci_probe(struct platform_device *pdev)
+ {
+ const struct of_device_id *of_id;
+ struct device *dev = &pdev->dev;
+- const char *reset_name = NULL;
+ struct brcm_ahci_priv *priv;
+ struct ahci_host_priv *hpriv;
+ struct resource *res;
+@@ -456,15 +456,15 @@ static int brcm_ahci_probe(struct platform_device *pdev)
+ if (IS_ERR(priv->top_ctrl))
+ return PTR_ERR(priv->top_ctrl);
+
+- /* Reset is optional depending on platform and named differently */
+- if (priv->version == BRCM_SATA_BCM7216)
+- reset_name = "rescal";
+- else
+- reset_name = "ahci";
+-
+- priv->rcdev = devm_reset_control_get_optional(&pdev->dev, reset_name);
+- if (IS_ERR(priv->rcdev))
+- return PTR_ERR(priv->rcdev);
++ if (priv->version == BRCM_SATA_BCM7216) {
++ priv->rcdev_rescal = devm_reset_control_get_optional_shared(
++ &pdev->dev, "rescal");
++ if (IS_ERR(priv->rcdev_rescal))
++ return PTR_ERR(priv->rcdev_rescal);
++ }
++ priv->rcdev_ahci = devm_reset_control_get_optional(&pdev->dev, "ahci");
++ if (IS_ERR(priv->rcdev_ahci))
++ return PTR_ERR(priv->rcdev_ahci);
+
+ hpriv = ahci_platform_get_resources(pdev, 0);
+ if (IS_ERR(hpriv))
+@@ -485,10 +485,10 @@ static int brcm_ahci_probe(struct platform_device *pdev)
+ break;
+ }
+
+- if (priv->version == BRCM_SATA_BCM7216)
+- ret = reset_control_reset(priv->rcdev);
+- else
+- ret = reset_control_deassert(priv->rcdev);
++ ret = reset_control_reset(priv->rcdev_rescal);
++ if (ret)
++ return ret;
++ ret = reset_control_deassert(priv->rcdev_ahci);
+ if (ret)
+ return ret;
+
+@@ -539,8 +539,8 @@ out_disable_regulators:
+ out_disable_clks:
+ ahci_platform_disable_clks(hpriv);
+ out_reset:
+- if (priv->version != BRCM_SATA_BCM7216)
+- reset_control_assert(priv->rcdev);
++ reset_control_assert(priv->rcdev_ahci);
++ reset_control_rearm(priv->rcdev_rescal);
+ return ret;
+ }
+
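
The ahci_brcm rework splits one conditional reset into two optional lines: a device-local "ahci" reset handled with deassert/assert, and a shared "rescal" calibration reset handled with the reset_control_reset()/reset_control_rearm() pair, which exists precisely for resets shared between consumers (here SATA and PCIe on BCM7216). A sketch of the shared-reset discipline; the device handle and flow are illustrative.

    struct reset_control *rescal =
            devm_reset_control_get_optional_shared(dev, "rescal");
    if (IS_ERR(rescal))
            return PTR_ERR(rescal);

    ret = reset_control_reset(rescal);  /* fires once for all sharers */
    if (ret)
            return ret;
    /* ... use the hardware ... */
    reset_control_rearm(rescal);        /* drop our trigger count again */
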
+diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
+index d6d73ff94e88f..bc649da4899a0 100644
+--- a/drivers/base/power/runtime.c
++++ b/drivers/base/power/runtime.c
+@@ -1637,6 +1637,7 @@ void pm_runtime_init(struct device *dev)
+ dev->power.request_pending = false;
+ dev->power.request = RPM_REQ_NONE;
+ dev->power.deferred_resume = false;
++ dev->power.needs_force_resume = 0;
+ INIT_WORK(&dev->power.work, pm_runtime_work);
+
+ dev->power.timer_expires = 0;
+@@ -1804,10 +1805,12 @@ int pm_runtime_force_suspend(struct device *dev)
+ * its parent, but set its status to RPM_SUSPENDED anyway in case this
+ * function will be called again for it in the meantime.
+ */
+- if (pm_runtime_need_not_resume(dev))
++ if (pm_runtime_need_not_resume(dev)) {
+ pm_runtime_set_suspended(dev);
+- else
++ } else {
+ __update_runtime_status(dev, RPM_SUSPENDED);
++ dev->power.needs_force_resume = 1;
++ }
+
+ return 0;
+
+@@ -1834,7 +1837,7 @@ int pm_runtime_force_resume(struct device *dev)
+ int (*callback)(struct device *);
+ int ret = 0;
+
+- if (!pm_runtime_status_suspended(dev) || pm_runtime_need_not_resume(dev))
++ if (!pm_runtime_status_suspended(dev) || !dev->power.needs_force_resume)
+ goto out;
+
+ /*
+@@ -1853,6 +1856,7 @@ int pm_runtime_force_resume(struct device *dev)
+
+ pm_runtime_mark_last_busy(dev);
+ out:
++ dev->power.needs_force_resume = 0;
+ pm_runtime_enable(dev);
+ return ret;
+ }
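
The runtime-PM hunks replace a recomputation with a recorded fact: pm_runtime_force_suspend() now notes in dev->power.needs_force_resume whether it actually forced the device down, and pm_runtime_force_resume() acts only on that record, so a device whose state was merely marked suspended is never powered up by mistake. Condensed to its skeleton, callbacks and error handling trimmed:

    /* suspend side: remember whether a forced resume will be needed */
    if (pm_runtime_need_not_resume(dev))
            pm_runtime_set_suspended(dev);          /* nothing to undo */
    else
            dev->power.needs_force_resume = 1;

    /* resume side: act only on the recorded decision */
    if (!pm_runtime_status_suspended(dev) || !dev->power.needs_force_resume)
            goto out;   /* we never forced this device down */
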
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index 0f3bab47c0d6c..b21eb58d6a455 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -2000,7 +2000,8 @@ static void nbd_disconnect_and_put(struct nbd_device *nbd)
+ * config ref and try to destroy the workqueue from inside the work
+ * queue.
+ */
+- flush_workqueue(nbd->recv_workq);
++ if (nbd->recv_workq)
++ flush_workqueue(nbd->recv_workq);
+ if (test_and_clear_bit(NBD_RT_HAS_CONFIG_REF,
+ &nbd->config->runtime_flags))
+ nbd_config_put(nbd);
+diff --git a/drivers/block/rnbd/rnbd-clt.c b/drivers/block/rnbd/rnbd-clt.c
+index 45a4700766524..5ab7319ff2ead 100644
+--- a/drivers/block/rnbd/rnbd-clt.c
++++ b/drivers/block/rnbd/rnbd-clt.c
+@@ -693,7 +693,11 @@ static void remap_devs(struct rnbd_clt_session *sess)
+ return;
+ }
+
+- rtrs_clt_query(sess->rtrs, &attrs);
++ err = rtrs_clt_query(sess->rtrs, &attrs);
++ if (err) {
++ pr_err("rtrs_clt_query(\"%s\"): %d\n", sess->sessname, err);
++ return;
++ }
+ mutex_lock(&sess->lock);
+ sess->max_io_size = attrs.max_io_size;
+
+@@ -1234,7 +1238,11 @@ find_and_get_or_create_sess(const char *sessname,
+ err = PTR_ERR(sess->rtrs);
+ goto wake_up_and_put;
+ }
+- rtrs_clt_query(sess->rtrs, &attrs);
++
++ err = rtrs_clt_query(sess->rtrs, &attrs);
++ if (err)
++ goto close_rtrs;
++
+ sess->max_io_size = attrs.max_io_size;
+ sess->queue_depth = attrs.queue_depth;
+
+diff --git a/drivers/block/rnbd/rnbd-clt.h b/drivers/block/rnbd/rnbd-clt.h
+index 537d499dad3b0..73d9808405310 100644
+--- a/drivers/block/rnbd/rnbd-clt.h
++++ b/drivers/block/rnbd/rnbd-clt.h
+@@ -87,7 +87,7 @@ struct rnbd_clt_session {
+ DECLARE_BITMAP(cpu_queues_bm, NR_CPUS);
+ int __percpu *cpu_rr; /* per-cpu var for CPU round-robin */
+ atomic_t busy;
+- int queue_depth;
++ size_t queue_depth;
+ u32 max_io_size;
+ struct blk_mq_tag_set tag_set;
+ struct mutex lock; /* protects state and devs_list */
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index a4f834a50a988..3620981e8b1c2 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -397,7 +397,9 @@ static const struct usb_device_id blacklist_table[] = {
+
+ /* MediaTek Bluetooth devices */
+ { USB_VENDOR_AND_INTERFACE_INFO(0x0e8d, 0xe0, 0x01, 0x01),
+- .driver_info = BTUSB_MEDIATEK },
++ .driver_info = BTUSB_MEDIATEK |
++ BTUSB_WIDEBAND_SPEECH |
++ BTUSB_VALID_LE_STATES },
+
+ /* Additional MediaTek MT7615E Bluetooth devices */
+ { USB_DEVICE(0x13d3, 0x3560), .driver_info = BTUSB_MEDIATEK},
+diff --git a/drivers/char/tpm/tpm2-cmd.c b/drivers/char/tpm/tpm2-cmd.c
+index eff1f12d981ab..c84d239512197 100644
+--- a/drivers/char/tpm/tpm2-cmd.c
++++ b/drivers/char/tpm/tpm2-cmd.c
+@@ -656,6 +656,7 @@ int tpm2_get_cc_attrs_tbl(struct tpm_chip *chip)
+
+ if (nr_commands !=
+ be32_to_cpup((__be32 *)&buf.data[TPM_HEADER_SIZE + 5])) {
++ rc = -EFAULT;
+ tpm_buf_destroy(&buf);
+ goto out;
+ }
+diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
+index a2e0395cbe618..55b9d3965ae1b 100644
+--- a/drivers/char/tpm/tpm_tis_core.c
++++ b/drivers/char/tpm/tpm_tis_core.c
+@@ -709,16 +709,14 @@ static int tpm_tis_gen_interrupt(struct tpm_chip *chip)
+ cap_t cap;
+ int ret;
+
+- /* TPM 2.0 */
+- if (chip->flags & TPM_CHIP_FLAG_TPM2)
+- return tpm2_get_tpm_pt(chip, 0x100, &cap2, desc);
+-
+- /* TPM 1.2 */
+ ret = request_locality(chip, 0);
+ if (ret < 0)
+ return ret;
+
+- ret = tpm1_getcap(chip, TPM_CAP_PROP_TIS_TIMEOUT, &cap, desc, 0);
++ if (chip->flags & TPM_CHIP_FLAG_TPM2)
++ ret = tpm2_get_tpm_pt(chip, 0x100, &cap2, desc);
++ else
++ ret = tpm1_getcap(chip, TPM_CAP_PROP_TIS_TIMEOUT, &cap, desc, 0);
+
+ release_locality(chip, 0);
+
+@@ -1127,12 +1125,20 @@ int tpm_tis_resume(struct device *dev)
+ if (ret)
+ return ret;
+
+- /* TPM 1.2 requires self-test on resume. This function actually returns
++ /*
++ * TPM 1.2 requires self-test on resume. This function actually returns
+ * an error code but for unknown reason it isn't handled.
+ */
+- if (!(chip->flags & TPM_CHIP_FLAG_TPM2))
++ if (!(chip->flags & TPM_CHIP_FLAG_TPM2)) {
++ ret = request_locality(chip, 0);
++ if (ret < 0)
++ return ret;
++
+ tpm1_do_selftest(chip);
+
++ release_locality(chip, 0);
++ }
++
+ return 0;
+ }
+ EXPORT_SYMBOL_GPL(tpm_tis_resume);
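
Both tpm_tis hunks converge on one rule: every access that reaches the TPM, capability reads included, must sit inside a request_locality()/release_locality() bracket, for TPM 2.0 as well as 1.2. The resulting shape, condensed from the code above:

    ret = request_locality(chip, 0);
    if (ret < 0)
            return ret;

    ret = (chip->flags & TPM_CHIP_FLAG_TPM2)
            ? tpm2_get_tpm_pt(chip, 0x100, &cap2, desc)
            : tpm1_getcap(chip, TPM_CAP_PROP_TIS_TIMEOUT, &cap, desc, 0);

    release_locality(chip, 0);
    return ret;
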
+diff --git a/drivers/clk/samsung/clk-exynos7.c b/drivers/clk/samsung/clk-exynos7.c
+index 87ee1bad9a9a8..4a5d2a914bd66 100644
+--- a/drivers/clk/samsung/clk-exynos7.c
++++ b/drivers/clk/samsung/clk-exynos7.c
+@@ -537,8 +537,13 @@ static const struct samsung_gate_clock top1_gate_clks[] __initconst = {
+ GATE(CLK_ACLK_FSYS0_200, "aclk_fsys0_200", "dout_aclk_fsys0_200",
+ ENABLE_ACLK_TOP13, 28, CLK_SET_RATE_PARENT |
+ CLK_IS_CRITICAL, 0),
++ /*
++ * This clock is required for the CMU_FSYS1 registers access, keep it
++ * enabled permanently until proper runtime PM support is added.
++ */
+ GATE(CLK_ACLK_FSYS1_200, "aclk_fsys1_200", "dout_aclk_fsys1_200",
+- ENABLE_ACLK_TOP13, 24, CLK_SET_RATE_PARENT, 0),
++ ENABLE_ACLK_TOP13, 24, CLK_SET_RATE_PARENT |
++ CLK_IS_CRITICAL, 0),
+
+ GATE(CLK_SCLK_PHY_FSYS1_26M, "sclk_phy_fsys1_26m",
+ "dout_sclk_phy_fsys1_26m", ENABLE_SCLK_TOP1_FSYS11,
+diff --git a/drivers/clocksource/timer-ti-dm-systimer.c b/drivers/clocksource/timer-ti-dm-systimer.c
+index 3fae9ebb58b83..b6f97960d8ee0 100644
+--- a/drivers/clocksource/timer-ti-dm-systimer.c
++++ b/drivers/clocksource/timer-ti-dm-systimer.c
+@@ -2,6 +2,7 @@
+ #include <linux/clk.h>
+ #include <linux/clocksource.h>
+ #include <linux/clockchips.h>
++#include <linux/cpuhotplug.h>
+ #include <linux/interrupt.h>
+ #include <linux/io.h>
+ #include <linux/iopoll.h>
+@@ -530,17 +531,17 @@ static void omap_clockevent_unidle(struct clock_event_device *evt)
+ writel_relaxed(OMAP_TIMER_INT_OVERFLOW, t->base + t->wakeup);
+ }
+
+-static int __init dmtimer_clockevent_init(struct device_node *np)
++static int __init dmtimer_clkevt_init_common(struct dmtimer_clockevent *clkevt,
++ struct device_node *np,
++ unsigned int features,
++ const struct cpumask *cpumask,
++ const char *name,
++ int rating)
+ {
+- struct dmtimer_clockevent *clkevt;
+ struct clock_event_device *dev;
+ struct dmtimer_systimer *t;
+ int error;
+
+- clkevt = kzalloc(sizeof(*clkevt), GFP_KERNEL);
+- if (!clkevt)
+- return -ENOMEM;
+-
+ t = &clkevt->t;
+ dev = &clkevt->dev;
+
+@@ -548,25 +549,23 @@ static int __init dmtimer_clockevent_init(struct device_node *np)
+ * We mostly use cpuidle_coupled with ARM local timers for runtime,
+ * so there's probably no use for CLOCK_EVT_FEAT_DYNIRQ here.
+ */
+- dev->features = CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_FEAT_ONESHOT;
+- dev->rating = 300;
++ dev->features = features;
++ dev->rating = rating;
+ dev->set_next_event = dmtimer_set_next_event;
+ dev->set_state_shutdown = dmtimer_clockevent_shutdown;
+ dev->set_state_periodic = dmtimer_set_periodic;
+ dev->set_state_oneshot = dmtimer_clockevent_shutdown;
+ dev->set_state_oneshot_stopped = dmtimer_clockevent_shutdown;
+ dev->tick_resume = dmtimer_clockevent_shutdown;
+- dev->cpumask = cpu_possible_mask;
++ dev->cpumask = cpumask;
+
+ dev->irq = irq_of_parse_and_map(np, 0);
+- if (!dev->irq) {
+- error = -ENXIO;
+- goto err_out_free;
+- }
++ if (!dev->irq)
++ return -ENXIO;
+
+ error = dmtimer_systimer_setup(np, &clkevt->t);
+ if (error)
+- goto err_out_free;
++ return error;
+
+ clkevt->period = 0xffffffff - DIV_ROUND_CLOSEST(t->rate, HZ);
+
+@@ -578,38 +577,132 @@ static int __init dmtimer_clockevent_init(struct device_node *np)
+ writel_relaxed(OMAP_TIMER_CTRL_POSTED, t->base + t->ifctrl);
+
+ error = request_irq(dev->irq, dmtimer_clockevent_interrupt,
+- IRQF_TIMER, "clockevent", clkevt);
++ IRQF_TIMER, name, clkevt);
+ if (error)
+ goto err_out_unmap;
+
+ writel_relaxed(OMAP_TIMER_INT_OVERFLOW, t->base + t->irq_ena);
+ writel_relaxed(OMAP_TIMER_INT_OVERFLOW, t->base + t->wakeup);
+
+- pr_info("TI gptimer clockevent: %s%lu Hz at %pOF\n",
+- of_find_property(np, "ti,timer-alwon", NULL) ?
++ pr_info("TI gptimer %s: %s%lu Hz at %pOF\n",
++ name, of_find_property(np, "ti,timer-alwon", NULL) ?
+ "always-on " : "", t->rate, np->parent);
+
+- clockevents_config_and_register(dev, t->rate,
+- 3, /* Timer internal resynch latency */
++ return 0;
++
++err_out_unmap:
++ iounmap(t->base);
++
++ return error;
++}
++
++static int __init dmtimer_clockevent_init(struct device_node *np)
++{
++ struct dmtimer_clockevent *clkevt;
++ int error;
++
++ clkevt = kzalloc(sizeof(*clkevt), GFP_KERNEL);
++ if (!clkevt)
++ return -ENOMEM;
++
++ error = dmtimer_clkevt_init_common(clkevt, np,
++ CLOCK_EVT_FEAT_PERIODIC |
++ CLOCK_EVT_FEAT_ONESHOT,
++ cpu_possible_mask, "clockevent",
++ 300);
++ if (error)
++ goto err_out_free;
++
++ clockevents_config_and_register(&clkevt->dev, clkevt->t.rate,
++ 3, /* Timer internal resync latency */
+ 0xffffffff);
+
+ if (of_machine_is_compatible("ti,am33xx") ||
+ of_machine_is_compatible("ti,am43")) {
+- dev->suspend = omap_clockevent_idle;
+- dev->resume = omap_clockevent_unidle;
++ clkevt->dev.suspend = omap_clockevent_idle;
++ clkevt->dev.resume = omap_clockevent_unidle;
+ }
+
+ return 0;
+
+-err_out_unmap:
+- iounmap(t->base);
+-
+ err_out_free:
+ kfree(clkevt);
+
+ return error;
+ }
+
++/* Dmtimer as percpu timer. See dra7 ARM architected timer wrap erratum i940 */
++static DEFINE_PER_CPU(struct dmtimer_clockevent, dmtimer_percpu_timer);
++
++static int __init dmtimer_percpu_timer_init(struct device_node *np, int cpu)
++{
++ struct dmtimer_clockevent *clkevt;
++ int error;
++
++ if (!cpu_possible(cpu))
++ return -EINVAL;
++
++ if (!of_property_read_bool(np->parent, "ti,no-reset-on-init") ||
++ !of_property_read_bool(np->parent, "ti,no-idle"))
++ pr_warn("Incomplete dtb for percpu dmtimer %pOF\n", np->parent);
++
++ clkevt = per_cpu_ptr(&dmtimer_percpu_timer, cpu);
++
++ error = dmtimer_clkevt_init_common(clkevt, np, CLOCK_EVT_FEAT_ONESHOT,
++ cpumask_of(cpu), "percpu-dmtimer",
++ 500);
++ if (error)
++ return error;
++
++ return 0;
++}
++
++/* See TRM for timer internal resync latency */
++static int omap_dmtimer_starting_cpu(unsigned int cpu)
++{
++ struct dmtimer_clockevent *clkevt = per_cpu_ptr(&dmtimer_percpu_timer, cpu);
++ struct clock_event_device *dev = &clkevt->dev;
++ struct dmtimer_systimer *t = &clkevt->t;
++
++ clockevents_config_and_register(dev, t->rate, 3, ULONG_MAX);
++ irq_force_affinity(dev->irq, cpumask_of(cpu));
++
++ return 0;
++}
++
++static int __init dmtimer_percpu_timer_startup(void)
++{
++ struct dmtimer_clockevent *clkevt = per_cpu_ptr(&dmtimer_percpu_timer, 0);
++ struct dmtimer_systimer *t = &clkevt->t;
++
++ if (t->sysc) {
++ cpuhp_setup_state(CPUHP_AP_TI_GP_TIMER_STARTING,
++ "clockevents/omap/gptimer:starting",
++ omap_dmtimer_starting_cpu, NULL);
++ }
++
++ return 0;
++}
++subsys_initcall(dmtimer_percpu_timer_startup);
++
++static int __init dmtimer_percpu_quirk_init(struct device_node *np, u32 pa)
++{
++ struct device_node *arm_timer;
++
++ arm_timer = of_find_compatible_node(NULL, NULL, "arm,armv7-timer");
++ if (of_device_is_available(arm_timer)) {
++ pr_warn_once("ARM architected timer wrap issue i940 detected\n");
++ return 0;
++ }
++
++ if (pa == 0x48034000) /* dra7 dmtimer3 */
++ return dmtimer_percpu_timer_init(np, 0);
++ else if (pa == 0x48036000) /* dra7 dmtimer4 */
++ return dmtimer_percpu_timer_init(np, 1);
++
++ return 0;
++}
++
+ /* Clocksource */
+ static struct dmtimer_clocksource *
+ to_dmtimer_clocksource(struct clocksource *cs)
+@@ -743,6 +836,9 @@ static int __init dmtimer_systimer_init(struct device_node *np)
+ if (clockevent == pa)
+ return dmtimer_clockevent_init(np);
+
++ if (of_machine_is_compatible("ti,dra7"))
++ return dmtimer_percpu_quirk_init(np, pa);
++
+ return 0;
+ }
+
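
The whole timer-ti-dm-systimer rework exists for one erratum: on dra7 the ARM architected timer can wrap (i940), so when it is absent from DT the driver presses two dmtimers into service as per-CPU oneshot clockevents. Registration is split: probe-time code prepares the per-CPU state, and a CPU-hotplug "starting" callback registers each clockevent on its own CPU and pins its interrupt there. The hotplug half in sketch form; the per-CPU variable and rate are illustrative, the cpuhp state is the one used above.

    static DEFINE_PER_CPU(struct clock_event_device, demo_evt);
    static unsigned long demo_rate;    /* filled in at probe time */

    static int demo_starting_cpu(unsigned int cpu)
    {
            struct clock_event_device *dev = per_cpu_ptr(&demo_evt, cpu);

            clockevents_config_and_register(dev, demo_rate, 3, ULONG_MAX);
            irq_force_affinity(dev->irq, cpumask_of(cpu));
            return 0;
    }

    /* from an initcall, once the per-CPU timers are prepared: */
    cpuhp_setup_state(CPUHP_AP_TI_GP_TIMER_STARTING,
                      "clockevents/omap/gptimer:starting",
                      demo_starting_cpu, NULL);
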
+diff --git a/drivers/cpufreq/acpi-cpufreq.c b/drivers/cpufreq/acpi-cpufreq.c
+index d1bbc16fba4b4..7e7450453714d 100644
+--- a/drivers/cpufreq/acpi-cpufreq.c
++++ b/drivers/cpufreq/acpi-cpufreq.c
+@@ -646,7 +646,11 @@ static u64 get_max_boost_ratio(unsigned int cpu)
+ return 0;
+ }
+
+- highest_perf = perf_caps.highest_perf;
++ if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD)
++ highest_perf = amd_get_highest_perf();
++ else
++ highest_perf = perf_caps.highest_perf;
++
+ nominal_perf = perf_caps.nominal_perf;
+
+ if (!highest_perf || !nominal_perf) {
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index c4d8a5126d611..d483383dcfb92 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -3053,6 +3053,14 @@ static const struct x86_cpu_id hwp_support_ids[] __initconst = {
+ {}
+ };
+
++static bool intel_pstate_hwp_is_enabled(void)
++{
++ u64 value;
++
++ rdmsrl(MSR_PM_ENABLE, value);
++ return !!(value & 0x1);
++}
++
+ static int __init intel_pstate_init(void)
+ {
+ const struct x86_cpu_id *id;
+@@ -3071,8 +3079,12 @@ static int __init intel_pstate_init(void)
+ * Avoid enabling HWP for processors without EPP support,
+ * because that means incomplete HWP implementation which is a
+ * corner case and supporting it is generally problematic.
++ *
++ * If HWP is enabled already, though, there is no choice but to
++ * deal with it.
+ */
+- if (!no_hwp && boot_cpu_has(X86_FEATURE_HWP_EPP)) {
++ if ((!no_hwp && boot_cpu_has(X86_FEATURE_HWP_EPP)) ||
++ intel_pstate_hwp_is_enabled()) {
+ hwp_active++;
+ hwp_mode_bdw = id->driver_data;
+ intel_pstate.attr = hwp_cpufreq_attrs;
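
The intel_pstate change covers firmware that flips HWP on before the kernel boots: bit 0 of IA32_PM_ENABLE (MSR 0x770), once set, cannot be cleared without a reset, so if it is already set the driver has to run in HWP mode even on parts without EPP. The probe reduces to one MSR read; use_hwp below is an illustrative stand-in for the driver's hwp_active bookkeeping.

    u64 value;

    rdmsrl(MSR_PM_ENABLE, value);   /* IA32_PM_ENABLE, MSR 0x770 */
    if (value & 0x1)
            use_hwp = true;         /* HWP on already; cannot be disabled */
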
+diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
+index 5b82ba7acc7cb..21caed429cc52 100644
+--- a/drivers/crypto/ccp/sev-dev.c
++++ b/drivers/crypto/ccp/sev-dev.c
+@@ -989,7 +989,7 @@ int sev_dev_init(struct psp_device *psp)
+ if (!sev->vdata) {
+ ret = -ENODEV;
+ dev_err(dev, "sev: missing driver data\n");
+- goto e_err;
++ goto e_sev;
+ }
+
+ psp_set_sev_irq_handler(psp, sev_irq_handler, sev);
+@@ -1004,6 +1004,8 @@ int sev_dev_init(struct psp_device *psp)
+
+ e_irq:
+ psp_clear_sev_irq_handler(psp);
++e_sev:
++ devm_kfree(dev, sev);
+ e_err:
+ psp->sev_data = NULL;
+
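
The sev-dev fix is a canonical goto-unwind repair: the missing-vdata failure happens after sev was allocated, so it must jump to a label that frees sev rather than to the bare-exit label. The convention in skeleton form, names illustrative:

    sev = devm_kzalloc(dev, sizeof(*sev), GFP_KERNEL);
    if (!sev)
            goto e_err;             /* nothing of ours to undo yet */
    if (!sev->vdata) {
            ret = -ENODEV;
            goto e_sev;             /* undo exactly the sev allocation */
    }
    /* ... */
    e_sev:
            devm_kfree(dev, sev);
    e_err:
            return ret;
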
+diff --git a/drivers/dma/idxd/cdev.c b/drivers/dma/idxd/cdev.c
+index 0db9b82ed8cf5..1d8a3876b7452 100644
+--- a/drivers/dma/idxd/cdev.c
++++ b/drivers/dma/idxd/cdev.c
+@@ -39,15 +39,15 @@ struct idxd_user_context {
+ struct iommu_sva *sva;
+ };
+
+-enum idxd_cdev_cleanup {
+- CDEV_NORMAL = 0,
+- CDEV_FAILED,
+-};
+-
+ static void idxd_cdev_dev_release(struct device *dev)
+ {
+- dev_dbg(dev, "releasing cdev device\n");
+- kfree(dev);
++ struct idxd_cdev *idxd_cdev = container_of(dev, struct idxd_cdev, dev);
++ struct idxd_cdev_context *cdev_ctx;
++ struct idxd_wq *wq = idxd_cdev->wq;
++
++ cdev_ctx = &ictx[wq->idxd->type];
++ ida_simple_remove(&cdev_ctx->minor_ida, idxd_cdev->minor);
++ kfree(idxd_cdev);
+ }
+
+ static struct device_type idxd_cdev_device_type = {
+@@ -62,14 +62,11 @@ static inline struct idxd_cdev *inode_idxd_cdev(struct inode *inode)
+ return container_of(cdev, struct idxd_cdev, cdev);
+ }
+
+-static inline struct idxd_wq *idxd_cdev_wq(struct idxd_cdev *idxd_cdev)
+-{
+- return container_of(idxd_cdev, struct idxd_wq, idxd_cdev);
+-}
+-
+ static inline struct idxd_wq *inode_wq(struct inode *inode)
+ {
+- return idxd_cdev_wq(inode_idxd_cdev(inode));
++ struct idxd_cdev *idxd_cdev = inode_idxd_cdev(inode);
++
++ return idxd_cdev->wq;
+ }
+
+ static int idxd_cdev_open(struct inode *inode, struct file *filp)
+@@ -220,11 +217,10 @@ static __poll_t idxd_cdev_poll(struct file *filp,
+ struct idxd_user_context *ctx = filp->private_data;
+ struct idxd_wq *wq = ctx->wq;
+ struct idxd_device *idxd = wq->idxd;
+- struct idxd_cdev *idxd_cdev = &wq->idxd_cdev;
+ unsigned long flags;
+ __poll_t out = 0;
+
+- poll_wait(filp, &idxd_cdev->err_queue, wait);
++ poll_wait(filp, &wq->err_queue, wait);
+ spin_lock_irqsave(&idxd->dev_lock, flags);
+ if (idxd->sw_err.valid)
+ out = EPOLLIN | EPOLLRDNORM;
+@@ -246,98 +242,67 @@ int idxd_cdev_get_major(struct idxd_device *idxd)
+ return MAJOR(ictx[idxd->type].devt);
+ }
+
+-static int idxd_wq_cdev_dev_setup(struct idxd_wq *wq)
++int idxd_wq_add_cdev(struct idxd_wq *wq)
+ {
+ struct idxd_device *idxd = wq->idxd;
+- struct idxd_cdev *idxd_cdev = &wq->idxd_cdev;
+- struct idxd_cdev_context *cdev_ctx;
++ struct idxd_cdev *idxd_cdev;
++ struct cdev *cdev;
+ struct device *dev;
+- int minor, rc;
++ struct idxd_cdev_context *cdev_ctx;
++ int rc, minor;
+
+- idxd_cdev->dev = kzalloc(sizeof(*idxd_cdev->dev), GFP_KERNEL);
+- if (!idxd_cdev->dev)
++ idxd_cdev = kzalloc(sizeof(*idxd_cdev), GFP_KERNEL);
++ if (!idxd_cdev)
+ return -ENOMEM;
+
+- dev = idxd_cdev->dev;
+- dev->parent = &idxd->pdev->dev;
+- dev_set_name(dev, "%s/wq%u.%u", idxd_get_dev_name(idxd),
+- idxd->id, wq->id);
+- dev->bus = idxd_get_bus_type(idxd);
+-
++ idxd_cdev->wq = wq;
++ cdev = &idxd_cdev->cdev;
++ dev = &idxd_cdev->dev;
+ cdev_ctx = &ictx[wq->idxd->type];
+ minor = ida_simple_get(&cdev_ctx->minor_ida, 0, MINORMASK, GFP_KERNEL);
+ if (minor < 0) {
+- rc = minor;
+- kfree(dev);
+- goto ida_err;
+- }
+-
+- dev->devt = MKDEV(MAJOR(cdev_ctx->devt), minor);
+- dev->type = &idxd_cdev_device_type;
+- rc = device_register(dev);
+- if (rc < 0) {
+- dev_err(&idxd->pdev->dev, "device register failed\n");
+- goto dev_reg_err;
++ kfree(idxd_cdev);
++ return minor;
+ }
+ idxd_cdev->minor = minor;
+
+- return 0;
+-
+- dev_reg_err:
+- ida_simple_remove(&cdev_ctx->minor_ida, MINOR(dev->devt));
+- put_device(dev);
+- ida_err:
+- idxd_cdev->dev = NULL;
+- return rc;
+-}
+-
+-static void idxd_wq_cdev_cleanup(struct idxd_wq *wq,
+- enum idxd_cdev_cleanup cdev_state)
+-{
+- struct idxd_cdev *idxd_cdev = &wq->idxd_cdev;
+- struct idxd_cdev_context *cdev_ctx;
+-
+- cdev_ctx = &ictx[wq->idxd->type];
+- if (cdev_state == CDEV_NORMAL)
+- cdev_del(&idxd_cdev->cdev);
+- device_unregister(idxd_cdev->dev);
+- /*
+- * The device_type->release() will be called on the device and free
+- * the allocated struct device. We can just forget it.
+- */
+- ida_simple_remove(&cdev_ctx->minor_ida, idxd_cdev->minor);
+- idxd_cdev->dev = NULL;
+- idxd_cdev->minor = -1;
+-}
+-
+-int idxd_wq_add_cdev(struct idxd_wq *wq)
+-{
+- struct idxd_cdev *idxd_cdev = &wq->idxd_cdev;
+- struct cdev *cdev = &idxd_cdev->cdev;
+- struct device *dev;
+- int rc;
++ device_initialize(dev);
++ dev->parent = &wq->conf_dev;
++ dev->bus = idxd_get_bus_type(idxd);
++ dev->type = &idxd_cdev_device_type;
++ dev->devt = MKDEV(MAJOR(cdev_ctx->devt), minor);
+
+- rc = idxd_wq_cdev_dev_setup(wq);
++ rc = dev_set_name(dev, "%s/wq%u.%u", idxd_get_dev_name(idxd),
++ idxd->id, wq->id);
+ if (rc < 0)
+- return rc;
++ goto err;
+
+- dev = idxd_cdev->dev;
++ wq->idxd_cdev = idxd_cdev;
+ cdev_init(cdev, &idxd_cdev_fops);
+- cdev_set_parent(cdev, &dev->kobj);
+- rc = cdev_add(cdev, dev->devt, 1);
++ rc = cdev_device_add(cdev, dev);
+ if (rc) {
+ dev_dbg(&wq->idxd->pdev->dev, "cdev_add failed: %d\n", rc);
+- idxd_wq_cdev_cleanup(wq, CDEV_FAILED);
+- return rc;
++ goto err;
+ }
+
+- init_waitqueue_head(&idxd_cdev->err_queue);
+ return 0;
++
++ err:
++ put_device(dev);
++ wq->idxd_cdev = NULL;
++ return rc;
+ }
+
+ void idxd_wq_del_cdev(struct idxd_wq *wq)
+ {
+- idxd_wq_cdev_cleanup(wq, CDEV_NORMAL);
++ struct idxd_cdev *idxd_cdev;
++ struct idxd_cdev_context *cdev_ctx;
++
++ cdev_ctx = &ictx[wq->idxd->type];
++ idxd_cdev = wq->idxd_cdev;
++ wq->idxd_cdev = NULL;
++ cdev_device_del(&idxd_cdev->cdev, &idxd_cdev->dev);
++ put_device(&idxd_cdev->dev);
+ }
+
+ int idxd_cdev_register(void)
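
The cdev rework above replaces the two-phase setup/cleanup (and its CDEV_NORMAL/CDEV_FAILED enum) with the reference-counted device lifetime model: struct device is embedded in struct idxd_cdev, published atomically with cdev_device_add(), and freed only from the ->release() callback when the last reference drops. The shape of that model, trimmed from the code above:

    c = kzalloc(sizeof(*c), GFP_KERNEL);
    device_initialize(&c->dev);             /* refcount starts at 1 */
    c->dev.type = &idxd_cdev_device_type;   /* supplies ->release()  */
    /* ... set parent, devt, name; cdev_init(&c->cdev, &fops) ... */
    rc = cdev_device_add(&c->cdev, &c->dev);
    if (rc)
            put_device(&c->dev);    /* drops to 0 -> release() frees c */
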
+diff --git a/drivers/dma/idxd/device.c b/drivers/dma/idxd/device.c
+index 31c819544a229..4fef57717049e 100644
+--- a/drivers/dma/idxd/device.c
++++ b/drivers/dma/idxd/device.c
+@@ -19,7 +19,7 @@ static void idxd_cmd_exec(struct idxd_device *idxd, int cmd_code, u32 operand,
+ /* Interrupt control bits */
+ void idxd_mask_msix_vector(struct idxd_device *idxd, int vec_id)
+ {
+- struct irq_data *data = irq_get_irq_data(idxd->msix_entries[vec_id].vector);
++ struct irq_data *data = irq_get_irq_data(idxd->irq_entries[vec_id].vector);
+
+ pci_msi_mask_irq(data);
+ }
+@@ -36,7 +36,7 @@ void idxd_mask_msix_vectors(struct idxd_device *idxd)
+
+ void idxd_unmask_msix_vector(struct idxd_device *idxd, int vec_id)
+ {
+- struct irq_data *data = irq_get_irq_data(idxd->msix_entries[vec_id].vector);
++ struct irq_data *data = irq_get_irq_data(idxd->irq_entries[vec_id].vector);
+
+ pci_msi_unmask_irq(data);
+ }
+@@ -186,8 +186,6 @@ int idxd_wq_alloc_resources(struct idxd_wq *wq)
+ desc->id = i;
+ desc->wq = wq;
+ desc->cpu = -1;
+- dma_async_tx_descriptor_init(&desc->txd, &wq->dma_chan);
+- desc->txd.tx_submit = idxd_dma_tx_submit;
+ }
+
+ return 0;
+@@ -451,7 +449,8 @@ static void idxd_cmd_exec(struct idxd_device *idxd, int cmd_code, u32 operand,
+
+ if (idxd_device_is_halted(idxd)) {
+ dev_warn(&idxd->pdev->dev, "Device is HALTED!\n");
+- *status = IDXD_CMDSTS_HW_ERR;
++ if (status)
++ *status = IDXD_CMDSTS_HW_ERR;
+ return;
+ }
+
+@@ -521,7 +520,7 @@ void idxd_device_wqs_clear_state(struct idxd_device *idxd)
+ lockdep_assert_held(&idxd->dev_lock);
+
+ for (i = 0; i < idxd->max_wqs; i++) {
+- struct idxd_wq *wq = &idxd->wqs[i];
++ struct idxd_wq *wq = idxd->wqs[i];
+
+ if (wq->state == IDXD_WQ_ENABLED) {
+ idxd_wq_disable_cleanup(wq);
+@@ -660,7 +659,7 @@ static int idxd_groups_config_write(struct idxd_device *idxd)
+ ioread32(idxd->reg_base + IDXD_GENCFG_OFFSET));
+
+ for (i = 0; i < idxd->max_groups; i++) {
+- struct idxd_group *group = &idxd->groups[i];
++ struct idxd_group *group = idxd->groups[i];
+
+ idxd_group_config_write(group);
+ }
+@@ -739,7 +738,7 @@ static int idxd_wqs_config_write(struct idxd_device *idxd)
+ int i, rc;
+
+ for (i = 0; i < idxd->max_wqs; i++) {
+- struct idxd_wq *wq = &idxd->wqs[i];
++ struct idxd_wq *wq = idxd->wqs[i];
+
+ rc = idxd_wq_config_write(wq);
+ if (rc < 0)
+@@ -755,7 +754,7 @@ static void idxd_group_flags_setup(struct idxd_device *idxd)
+
+ /* TC-A 0 and TC-B 1 should be defaults */
+ for (i = 0; i < idxd->max_groups; i++) {
+- struct idxd_group *group = &idxd->groups[i];
++ struct idxd_group *group = idxd->groups[i];
+
+ if (group->tc_a == -1)
+ group->tc_a = group->grpcfg.flags.tc_a = 0;
+@@ -782,12 +781,12 @@ static int idxd_engines_setup(struct idxd_device *idxd)
+ struct idxd_group *group;
+
+ for (i = 0; i < idxd->max_groups; i++) {
+- group = &idxd->groups[i];
++ group = idxd->groups[i];
+ group->grpcfg.engines = 0;
+ }
+
+ for (i = 0; i < idxd->max_engines; i++) {
+- eng = &idxd->engines[i];
++ eng = idxd->engines[i];
+ group = eng->group;
+
+ if (!group)
+@@ -811,13 +810,13 @@ static int idxd_wqs_setup(struct idxd_device *idxd)
+ struct device *dev = &idxd->pdev->dev;
+
+ for (i = 0; i < idxd->max_groups; i++) {
+- group = &idxd->groups[i];
++ group = idxd->groups[i];
+ for (j = 0; j < 4; j++)
+ group->grpcfg.wqs[j] = 0;
+ }
+
+ for (i = 0; i < idxd->max_wqs; i++) {
+- wq = &idxd->wqs[i];
++ wq = idxd->wqs[i];
+ group = wq->group;
+
+ if (!wq->group)
+diff --git a/drivers/dma/idxd/dma.c b/drivers/dma/idxd/dma.c
+index a15e50126434e..77439b6450448 100644
+--- a/drivers/dma/idxd/dma.c
++++ b/drivers/dma/idxd/dma.c
+@@ -14,7 +14,10 @@
+
+ static inline struct idxd_wq *to_idxd_wq(struct dma_chan *c)
+ {
+- return container_of(c, struct idxd_wq, dma_chan);
++ struct idxd_dma_chan *idxd_chan;
++
++ idxd_chan = container_of(c, struct idxd_dma_chan, chan);
++ return idxd_chan->wq;
+ }
+
+ void idxd_dma_complete_txd(struct idxd_desc *desc,
+@@ -135,7 +138,7 @@ static void idxd_dma_issue_pending(struct dma_chan *dma_chan)
+ {
+ }
+
+-dma_cookie_t idxd_dma_tx_submit(struct dma_async_tx_descriptor *tx)
++static dma_cookie_t idxd_dma_tx_submit(struct dma_async_tx_descriptor *tx)
+ {
+ struct dma_chan *c = tx->chan;
+ struct idxd_wq *wq = to_idxd_wq(c);
+@@ -156,14 +159,25 @@ dma_cookie_t idxd_dma_tx_submit(struct dma_async_tx_descriptor *tx)
+
+ static void idxd_dma_release(struct dma_device *device)
+ {
++ struct idxd_dma_dev *idxd_dma = container_of(device, struct idxd_dma_dev, dma);
++
++ kfree(idxd_dma);
+ }
+
+ int idxd_register_dma_device(struct idxd_device *idxd)
+ {
+- struct dma_device *dma = &idxd->dma_dev;
++ struct idxd_dma_dev *idxd_dma;
++ struct dma_device *dma;
++ struct device *dev = &idxd->pdev->dev;
++ int rc;
+
++ idxd_dma = kzalloc_node(sizeof(*idxd_dma), GFP_KERNEL, dev_to_node(dev));
++ if (!idxd_dma)
++ return -ENOMEM;
++
++ dma = &idxd_dma->dma;
+ INIT_LIST_HEAD(&dma->channels);
+- dma->dev = &idxd->pdev->dev;
++ dma->dev = dev;
+
+ dma_cap_set(DMA_PRIVATE, dma->cap_mask);
+ dma_cap_set(DMA_COMPLETION_NO_ORDER, dma->cap_mask);
+@@ -179,35 +193,72 @@ int idxd_register_dma_device(struct idxd_device *idxd)
+ dma->device_alloc_chan_resources = idxd_dma_alloc_chan_resources;
+ dma->device_free_chan_resources = idxd_dma_free_chan_resources;
+
+- return dma_async_device_register(&idxd->dma_dev);
++ rc = dma_async_device_register(dma);
++ if (rc < 0) {
++ kfree(idxd_dma);
++ return rc;
++ }
++
++ idxd_dma->idxd = idxd;
++ /*
++ * This pointer is protected by the refs taken by the dma_chan. It will remain valid
++ * as long as there are outstanding channels.
++ */
++ idxd->idxd_dma = idxd_dma;
++ return 0;
+ }
+
+ void idxd_unregister_dma_device(struct idxd_device *idxd)
+ {
+- dma_async_device_unregister(&idxd->dma_dev);
++ dma_async_device_unregister(&idxd->idxd_dma->dma);
+ }
+
+ int idxd_register_dma_channel(struct idxd_wq *wq)
+ {
+ struct idxd_device *idxd = wq->idxd;
+- struct dma_device *dma = &idxd->dma_dev;
+- struct dma_chan *chan = &wq->dma_chan;
+- int rc;
++ struct dma_device *dma = &idxd->idxd_dma->dma;
++ struct device *dev = &idxd->pdev->dev;
++ struct idxd_dma_chan *idxd_chan;
++ struct dma_chan *chan;
++ int rc, i;
++
++ idxd_chan = kzalloc_node(sizeof(*idxd_chan), GFP_KERNEL, dev_to_node(dev));
++ if (!idxd_chan)
++ return -ENOMEM;
+
+- memset(&wq->dma_chan, 0, sizeof(struct dma_chan));
++ chan = &idxd_chan->chan;
+ chan->device = dma;
+ list_add_tail(&chan->device_node, &dma->channels);
++
++ for (i = 0; i < wq->num_descs; i++) {
++ struct idxd_desc *desc = wq->descs[i];
++
++ dma_async_tx_descriptor_init(&desc->txd, chan);
++ desc->txd.tx_submit = idxd_dma_tx_submit;
++ }
++
+ rc = dma_async_device_channel_register(dma, chan);
+- if (rc < 0)
++ if (rc < 0) {
++ kfree(idxd_chan);
+ return rc;
++ }
++
++ wq->idxd_chan = idxd_chan;
++ idxd_chan->wq = wq;
++ get_device(&wq->conf_dev);
+
+ return 0;
+ }
+
+ void idxd_unregister_dma_channel(struct idxd_wq *wq)
+ {
+- struct dma_chan *chan = &wq->dma_chan;
++ struct idxd_dma_chan *idxd_chan = wq->idxd_chan;
++ struct dma_chan *chan = &idxd_chan->chan;
++ struct idxd_dma_dev *idxd_dma = wq->idxd->idxd_dma;
+
+- dma_async_device_channel_unregister(&wq->idxd->dma_dev, chan);
++ dma_async_device_channel_unregister(&idxd_dma->dma, chan);
+ list_del(&chan->device_node);
++ kfree(wq->idxd_chan);
++ wq->idxd_chan = NULL;
++ put_device(&wq->conf_dev);
+ }
+diff --git a/drivers/dma/idxd/idxd.h b/drivers/dma/idxd/idxd.h
+index 76014c14f4732..89daf746d1215 100644
+--- a/drivers/dma/idxd/idxd.h
++++ b/drivers/dma/idxd/idxd.h
+@@ -8,12 +8,16 @@
+ #include <linux/percpu-rwsem.h>
+ #include <linux/wait.h>
+ #include <linux/cdev.h>
++#include <linux/idr.h>
+ #include "registers.h"
+
+ #define IDXD_DRIVER_VERSION "1.00"
+
+ extern struct kmem_cache *idxd_desc_pool;
+
++struct idxd_device;
++struct idxd_wq;
++
+ #define IDXD_REG_TIMEOUT 50
+ #define IDXD_DRAIN_TIMEOUT 5000
+
+@@ -33,6 +37,7 @@ struct idxd_device_driver {
+ struct idxd_irq_entry {
+ struct idxd_device *idxd;
+ int id;
++ int vector;
+ struct llist_head pending_llist;
+ struct list_head work_list;
+ /*
+@@ -75,10 +80,10 @@ enum idxd_wq_type {
+ };
+
+ struct idxd_cdev {
++ struct idxd_wq *wq;
+ struct cdev cdev;
+- struct device *dev;
++ struct device dev;
+ int minor;
+- struct wait_queue_head err_queue;
+ };
+
+ #define IDXD_ALLOCATED_BATCH_SIZE 128U
+@@ -96,10 +101,16 @@ enum idxd_complete_type {
+ IDXD_COMPLETE_DEV_FAIL,
+ };
+
++struct idxd_dma_chan {
++ struct dma_chan chan;
++ struct idxd_wq *wq;
++};
++
+ struct idxd_wq {
+ void __iomem *portal;
+ struct device conf_dev;
+- struct idxd_cdev idxd_cdev;
++ struct idxd_cdev *idxd_cdev;
++ struct wait_queue_head err_queue;
+ struct idxd_device *idxd;
+ int id;
+ enum idxd_wq_type type;
+@@ -125,7 +136,7 @@ struct idxd_wq {
+ int compls_size;
+ struct idxd_desc **descs;
+ struct sbitmap_queue sbq;
+- struct dma_chan dma_chan;
++ struct idxd_dma_chan *idxd_chan;
+ char name[WQ_NAME_SIZE + 1];
+ u64 max_xfer_bytes;
+ u32 max_batch_size;
+@@ -162,6 +173,11 @@ enum idxd_device_flag {
+ IDXD_FLAG_PASID_ENABLED,
+ };
+
++struct idxd_dma_dev {
++ struct idxd_device *idxd;
++ struct dma_device dma;
++};
++
+ struct idxd_device {
+ enum idxd_type type;
+ struct device conf_dev;
+@@ -178,9 +194,9 @@ struct idxd_device {
+
+ spinlock_t dev_lock; /* spinlock for device */
+ struct completion *cmd_done;
+- struct idxd_group *groups;
+- struct idxd_wq *wqs;
+- struct idxd_engine *engines;
++ struct idxd_group **groups;
++ struct idxd_wq **wqs;
++ struct idxd_engine **engines;
+
+ struct iommu_sva *sva;
+ unsigned int pasid;
+@@ -206,11 +222,10 @@ struct idxd_device {
+
+ union sw_err_reg sw_err;
+ wait_queue_head_t cmd_waitq;
+- struct msix_entry *msix_entries;
+ int num_wq_irqs;
+ struct idxd_irq_entry *irq_entries;
+
+- struct dma_device dma_dev;
++ struct idxd_dma_dev *idxd_dma;
+ struct workqueue_struct *wq;
+ struct work_struct work;
+ };
+@@ -242,6 +257,43 @@ extern struct bus_type dsa_bus_type;
+ extern struct bus_type iax_bus_type;
+
+ extern bool support_enqcmd;
++extern struct device_type dsa_device_type;
++extern struct device_type iax_device_type;
++extern struct device_type idxd_wq_device_type;
++extern struct device_type idxd_engine_device_type;
++extern struct device_type idxd_group_device_type;
++
++static inline bool is_dsa_dev(struct device *dev)
++{
++ return dev->type == &dsa_device_type;
++}
++
++static inline bool is_iax_dev(struct device *dev)
++{
++ return dev->type == &iax_device_type;
++}
++
++static inline bool is_idxd_dev(struct device *dev)
++{
++ return is_dsa_dev(dev) || is_iax_dev(dev);
++}
++
++static inline bool is_idxd_wq_dev(struct device *dev)
++{
++ return dev->type == &idxd_wq_device_type;
++}
++
++static inline bool is_idxd_wq_dmaengine(struct idxd_wq *wq)
++{
++ if (wq->type == IDXD_WQT_KERNEL && strcmp(wq->name, "dmaengine") == 0)
++ return true;
++ return false;
++}
++
++static inline bool is_idxd_wq_cdev(struct idxd_wq *wq)
++{
++ return wq->type == IDXD_WQT_USER;
++}
+
+ static inline bool wq_dedicated(struct idxd_wq *wq)
+ {
+@@ -279,18 +331,6 @@ static inline int idxd_get_wq_portal_full_offset(int wq_id,
+ return ((wq_id * 4) << PAGE_SHIFT) + idxd_get_wq_portal_offset(prot);
+ }
+
+-static inline void idxd_set_type(struct idxd_device *idxd)
+-{
+- struct pci_dev *pdev = idxd->pdev;
+-
+- if (pdev->device == PCI_DEVICE_ID_INTEL_DSA_SPR0)
+- idxd->type = IDXD_TYPE_DSA;
+- else if (pdev->device == PCI_DEVICE_ID_INTEL_IAX_SPR0)
+- idxd->type = IDXD_TYPE_IAX;
+- else
+- idxd->type = IDXD_TYPE_UNKNOWN;
+-}
+-
+ static inline void idxd_wq_get(struct idxd_wq *wq)
+ {
+ wq->client_count++;
+@@ -306,14 +346,16 @@ static inline int idxd_wq_refcount(struct idxd_wq *wq)
+ return wq->client_count;
+ };
+
++struct ida *idxd_ida(struct idxd_device *idxd);
+ const char *idxd_get_dev_name(struct idxd_device *idxd);
+ int idxd_register_bus_type(void);
+ void idxd_unregister_bus_type(void);
+-int idxd_setup_sysfs(struct idxd_device *idxd);
+-void idxd_cleanup_sysfs(struct idxd_device *idxd);
++int idxd_register_devices(struct idxd_device *idxd);
++void idxd_unregister_devices(struct idxd_device *idxd);
+ int idxd_register_driver(void);
+ void idxd_unregister_driver(void);
+ struct bus_type *idxd_get_bus_type(struct idxd_device *idxd);
++struct device_type *idxd_get_device_type(struct idxd_device *idxd);
+
+ /* device interrupt control */
+ void idxd_msix_perm_setup(struct idxd_device *idxd);
+@@ -363,7 +405,6 @@ void idxd_unregister_dma_channel(struct idxd_wq *wq);
+ void idxd_parse_completion_status(u8 status, enum dmaengine_tx_result *res);
+ void idxd_dma_complete_txd(struct idxd_desc *desc,
+ enum idxd_complete_type comp_type);
+-dma_cookie_t idxd_dma_tx_submit(struct dma_async_tx_descriptor *tx);
+
+ /* cdev */
+ int idxd_cdev_register(void);
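
The header diff records the structural half of the idxd rework: arrays of embedded sub-objects (groups, wqs, engines) become arrays of pointers, each object carrying its own conf_dev so its lifetime is governed by its device refcount rather than by the parent allocation. In shorthand:

    /* before: one flat allocation, freed with the parent */
    struct idxd_wq *wqs;        /* idxd->wqs[i]                    */

    /* after: per-object allocations, each put_device()-managed */
    struct idxd_wq **wqs;       /* idxd->wqs[i] -> kzalloc'd wq    */
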
+diff --git a/drivers/dma/idxd/init.c b/drivers/dma/idxd/init.c
+index 8f3df64aa1be1..3d43c08f3a767 100644
+--- a/drivers/dma/idxd/init.c
++++ b/drivers/dma/idxd/init.c
+@@ -30,8 +30,7 @@ MODULE_AUTHOR("Intel Corporation");
+
+ bool support_enqcmd;
+
+-static struct idr idxd_idrs[IDXD_TYPE_MAX];
+-static struct mutex idxd_idr_lock;
++static struct ida idxd_idas[IDXD_TYPE_MAX];
+
+ static struct pci_device_id idxd_pci_tbl[] = {
+ /* DSA ver 1.0 platforms */
+@@ -48,6 +47,11 @@ static char *idxd_name[] = {
+ "iax"
+ };
+
++struct ida *idxd_ida(struct idxd_device *idxd)
++{
++ return &idxd_idas[idxd->type];
++}
++
+ const char *idxd_get_dev_name(struct idxd_device *idxd)
+ {
+ return idxd_name[idxd->type];
+@@ -57,7 +61,6 @@ static int idxd_setup_interrupts(struct idxd_device *idxd)
+ {
+ struct pci_dev *pdev = idxd->pdev;
+ struct device *dev = &pdev->dev;
+- struct msix_entry *msix;
+ struct idxd_irq_entry *irq_entry;
+ int i, msixcnt;
+ int rc = 0;
+@@ -65,23 +68,13 @@ static int idxd_setup_interrupts(struct idxd_device *idxd)
+ msixcnt = pci_msix_vec_count(pdev);
+ if (msixcnt < 0) {
+ dev_err(dev, "Not MSI-X interrupt capable.\n");
+- goto err_no_irq;
+- }
+-
+- idxd->msix_entries = devm_kzalloc(dev, sizeof(struct msix_entry) *
+- msixcnt, GFP_KERNEL);
+- if (!idxd->msix_entries) {
+- rc = -ENOMEM;
+- goto err_no_irq;
++ return -ENOSPC;
+ }
+
+- for (i = 0; i < msixcnt; i++)
+- idxd->msix_entries[i].entry = i;
+-
+- rc = pci_enable_msix_exact(pdev, idxd->msix_entries, msixcnt);
+- if (rc) {
+- dev_err(dev, "Failed enabling %d MSIX entries.\n", msixcnt);
+- goto err_no_irq;
++ rc = pci_alloc_irq_vectors(pdev, msixcnt, msixcnt, PCI_IRQ_MSIX);
++ if (rc != msixcnt) {
++ dev_err(dev, "Failed enabling %d MSIX entries: %d\n", msixcnt, rc);
++ return -ENOSPC;
+ }
+ dev_dbg(dev, "Enabled %d msix vectors\n", msixcnt);
+
+@@ -89,119 +82,236 @@ static int idxd_setup_interrupts(struct idxd_device *idxd)
+ * We implement 1 completion list per MSI-X entry except for
+ * entry 0, which is for errors and others.
+ */
+- idxd->irq_entries = devm_kcalloc(dev, msixcnt,
+- sizeof(struct idxd_irq_entry),
+- GFP_KERNEL);
++ idxd->irq_entries = kcalloc_node(msixcnt, sizeof(struct idxd_irq_entry),
++ GFP_KERNEL, dev_to_node(dev));
+ if (!idxd->irq_entries) {
+ rc = -ENOMEM;
+- goto err_no_irq;
++ goto err_irq_entries;
+ }
+
+ for (i = 0; i < msixcnt; i++) {
+ idxd->irq_entries[i].id = i;
+ idxd->irq_entries[i].idxd = idxd;
++ idxd->irq_entries[i].vector = pci_irq_vector(pdev, i);
+ spin_lock_init(&idxd->irq_entries[i].list_lock);
+ }
+
+- msix = &idxd->msix_entries[0];
+ irq_entry = &idxd->irq_entries[0];
+- rc = devm_request_threaded_irq(dev, msix->vector, idxd_irq_handler,
+- idxd_misc_thread, 0, "idxd-misc",
+- irq_entry);
++ rc = request_threaded_irq(irq_entry->vector, idxd_irq_handler, idxd_misc_thread,
++ 0, "idxd-misc", irq_entry);
+ if (rc < 0) {
+ dev_err(dev, "Failed to allocate misc interrupt.\n");
+- goto err_no_irq;
++ goto err_misc_irq;
+ }
+
+- dev_dbg(dev, "Allocated idxd-misc handler on msix vector %d\n",
+- msix->vector);
++ dev_dbg(dev, "Allocated idxd-misc handler on msix vector %d\n", irq_entry->vector);
+
+ /* first MSI-X entry is not for wq interrupts */
+ idxd->num_wq_irqs = msixcnt - 1;
+
+ for (i = 1; i < msixcnt; i++) {
+- msix = &idxd->msix_entries[i];
+ irq_entry = &idxd->irq_entries[i];
+
+ init_llist_head(&idxd->irq_entries[i].pending_llist);
+ INIT_LIST_HEAD(&idxd->irq_entries[i].work_list);
+- rc = devm_request_threaded_irq(dev, msix->vector,
+- idxd_irq_handler,
+- idxd_wq_thread, 0,
+- "idxd-portal", irq_entry);
++ rc = request_threaded_irq(irq_entry->vector, idxd_irq_handler,
++ idxd_wq_thread, 0, "idxd-portal", irq_entry);
+ if (rc < 0) {
+- dev_err(dev, "Failed to allocate irq %d.\n",
+- msix->vector);
+- goto err_no_irq;
++ dev_err(dev, "Failed to allocate irq %d.\n", irq_entry->vector);
++ goto err_wq_irqs;
+ }
+- dev_dbg(dev, "Allocated idxd-msix %d for vector %d\n",
+- i, msix->vector);
++ dev_dbg(dev, "Allocated idxd-msix %d for vector %d\n", i, irq_entry->vector);
+ }
+
+ idxd_unmask_error_interrupts(idxd);
+ idxd_msix_perm_setup(idxd);
+ return 0;
+
+- err_no_irq:
++ err_wq_irqs:
++ while (--i >= 0) {
++ irq_entry = &idxd->irq_entries[i];
++ free_irq(irq_entry->vector, irq_entry);
++ }
++ err_misc_irq:
+ /* Disable error interrupt generation */
+ idxd_mask_error_interrupts(idxd);
+- pci_disable_msix(pdev);
++ err_irq_entries:
++ pci_free_irq_vectors(pdev);
+ dev_err(dev, "No usable interrupts\n");
+ return rc;
+ }
+
+-static int idxd_setup_internals(struct idxd_device *idxd)
++static int idxd_setup_wqs(struct idxd_device *idxd)
+ {
+ struct device *dev = &idxd->pdev->dev;
+- int i;
+-
+- init_waitqueue_head(&idxd->cmd_waitq);
+- idxd->groups = devm_kcalloc(dev, idxd->max_groups,
+- sizeof(struct idxd_group), GFP_KERNEL);
+- if (!idxd->groups)
+- return -ENOMEM;
+-
+- for (i = 0; i < idxd->max_groups; i++) {
+- idxd->groups[i].idxd = idxd;
+- idxd->groups[i].id = i;
+- idxd->groups[i].tc_a = -1;
+- idxd->groups[i].tc_b = -1;
+- }
++ struct idxd_wq *wq;
++ int i, rc;
+
+- idxd->wqs = devm_kcalloc(dev, idxd->max_wqs, sizeof(struct idxd_wq),
+- GFP_KERNEL);
++ idxd->wqs = kcalloc_node(idxd->max_wqs, sizeof(struct idxd_wq *),
++ GFP_KERNEL, dev_to_node(dev));
+ if (!idxd->wqs)
+ return -ENOMEM;
+
+- idxd->engines = devm_kcalloc(dev, idxd->max_engines,
+- sizeof(struct idxd_engine), GFP_KERNEL);
+- if (!idxd->engines)
+- return -ENOMEM;
+-
+ for (i = 0; i < idxd->max_wqs; i++) {
+- struct idxd_wq *wq = &idxd->wqs[i];
++ wq = kzalloc_node(sizeof(*wq), GFP_KERNEL, dev_to_node(dev));
++ if (!wq) {
++ rc = -ENOMEM;
++ goto err;
++ }
+
+ wq->id = i;
+ wq->idxd = idxd;
++ device_initialize(&wq->conf_dev);
++ wq->conf_dev.parent = &idxd->conf_dev;
++ wq->conf_dev.bus = idxd_get_bus_type(idxd);
++ wq->conf_dev.type = &idxd_wq_device_type;
++ rc = dev_set_name(&wq->conf_dev, "wq%d.%d", idxd->id, wq->id);
++ if (rc < 0) {
++ put_device(&wq->conf_dev);
++ goto err;
++ }
++
+ mutex_init(&wq->wq_lock);
+- wq->idxd_cdev.minor = -1;
++ init_waitqueue_head(&wq->err_queue);
+ wq->max_xfer_bytes = idxd->max_xfer_bytes;
+ wq->max_batch_size = idxd->max_batch_size;
+- wq->wqcfg = devm_kzalloc(dev, idxd->wqcfg_size, GFP_KERNEL);
+- if (!wq->wqcfg)
+- return -ENOMEM;
++ wq->wqcfg = kzalloc_node(idxd->wqcfg_size, GFP_KERNEL, dev_to_node(dev));
++ if (!wq->wqcfg) {
++ put_device(&wq->conf_dev);
++ rc = -ENOMEM;
++ goto err;
++ }
++ idxd->wqs[i] = wq;
+ }
+
++ return 0;
++
++ err:
++ while (--i >= 0)
++ put_device(&idxd->wqs[i]->conf_dev);
++ return rc;
++}
++
++static int idxd_setup_engines(struct idxd_device *idxd)
++{
++ struct idxd_engine *engine;
++ struct device *dev = &idxd->pdev->dev;
++ int i, rc;
++
++ idxd->engines = kcalloc_node(idxd->max_engines, sizeof(struct idxd_engine *),
++ GFP_KERNEL, dev_to_node(dev));
++ if (!idxd->engines)
++ return -ENOMEM;
++
+ for (i = 0; i < idxd->max_engines; i++) {
+- idxd->engines[i].idxd = idxd;
+- idxd->engines[i].id = i;
++ engine = kzalloc_node(sizeof(*engine), GFP_KERNEL, dev_to_node(dev));
++ if (!engine) {
++ rc = -ENOMEM;
++ goto err;
++ }
++
++ engine->id = i;
++ engine->idxd = idxd;
++ device_initialize(&engine->conf_dev);
++ engine->conf_dev.parent = &idxd->conf_dev;
++ engine->conf_dev.type = &idxd_engine_device_type;
++ rc = dev_set_name(&engine->conf_dev, "engine%d.%d", idxd->id, engine->id);
++ if (rc < 0) {
++ put_device(&engine->conf_dev);
++ goto err;
++ }
++
++ idxd->engines[i] = engine;
+ }
+
+- idxd->wq = create_workqueue(dev_name(dev));
+- if (!idxd->wq)
++ return 0;
++
++ err:
++ while (--i >= 0)
++ put_device(&idxd->engines[i]->conf_dev);
++ return rc;
++}
++
++static int idxd_setup_groups(struct idxd_device *idxd)
++{
++ struct device *dev = &idxd->pdev->dev;
++ struct idxd_group *group;
++ int i, rc;
++
++ idxd->groups = kcalloc_node(idxd->max_groups, sizeof(struct idxd_group *),
++ GFP_KERNEL, dev_to_node(dev));
++ if (!idxd->groups)
+ return -ENOMEM;
+
++ for (i = 0; i < idxd->max_groups; i++) {
++ group = kzalloc_node(sizeof(*group), GFP_KERNEL, dev_to_node(dev));
++ if (!group) {
++ rc = -ENOMEM;
++ goto err;
++ }
++
++ group->id = i;
++ group->idxd = idxd;
++ device_initialize(&group->conf_dev);
++ group->conf_dev.parent = &idxd->conf_dev;
++ group->conf_dev.bus = idxd_get_bus_type(idxd);
++ group->conf_dev.type = &idxd_group_device_type;
++ rc = dev_set_name(&group->conf_dev, "group%d.%d", idxd->id, group->id);
++ if (rc < 0) {
++ put_device(&group->conf_dev);
++ goto err;
++ }
++
++ idxd->groups[i] = group;
++ group->tc_a = -1;
++ group->tc_b = -1;
++ }
++
++ return 0;
++
++ err:
++ while (--i >= 0)
++ put_device(&idxd->groups[i]->conf_dev);
++ return rc;
++}
++
++static int idxd_setup_internals(struct idxd_device *idxd)
++{
++ struct device *dev = &idxd->pdev->dev;
++ int rc, i;
++
++ init_waitqueue_head(&idxd->cmd_waitq);
++
++ rc = idxd_setup_wqs(idxd);
++ if (rc < 0)
++ return rc;
++
++ rc = idxd_setup_engines(idxd);
++ if (rc < 0)
++ goto err_engine;
++
++ rc = idxd_setup_groups(idxd);
++ if (rc < 0)
++ goto err_group;
++
++ idxd->wq = create_workqueue(dev_name(dev));
++ if (!idxd->wq) {
++ rc = -ENOMEM;
++ goto err_wkq_create;
++ }
++
+ return 0;
++
++ err_wkq_create:
++ for (i = 0; i < idxd->max_groups; i++)
++ put_device(&idxd->groups[i]->conf_dev);
++ err_group:
++ for (i = 0; i < idxd->max_engines; i++)
++ put_device(&idxd->engines[i]->conf_dev);
++ err_engine:
++ for (i = 0; i < idxd->max_wqs; i++)
++ put_device(&idxd->wqs[i]->conf_dev);
++ return rc;
+ }
+
+ static void idxd_read_table_offsets(struct idxd_device *idxd)
+@@ -271,16 +381,44 @@ static void idxd_read_caps(struct idxd_device *idxd)
+ }
+ }
+
++static inline void idxd_set_type(struct idxd_device *idxd)
++{
++ struct pci_dev *pdev = idxd->pdev;
++
++ if (pdev->device == PCI_DEVICE_ID_INTEL_DSA_SPR0)
++ idxd->type = IDXD_TYPE_DSA;
++ else if (pdev->device == PCI_DEVICE_ID_INTEL_IAX_SPR0)
++ idxd->type = IDXD_TYPE_IAX;
++ else
++ idxd->type = IDXD_TYPE_UNKNOWN;
++}
++
+ static struct idxd_device *idxd_alloc(struct pci_dev *pdev)
+ {
+ struct device *dev = &pdev->dev;
+ struct idxd_device *idxd;
++ int rc;
+
+- idxd = devm_kzalloc(dev, sizeof(struct idxd_device), GFP_KERNEL);
++ idxd = kzalloc_node(sizeof(*idxd), GFP_KERNEL, dev_to_node(dev));
+ if (!idxd)
+ return NULL;
+
+ idxd->pdev = pdev;
++ idxd_set_type(idxd);
++ idxd->id = ida_alloc(idxd_ida(idxd), GFP_KERNEL);
++ if (idxd->id < 0)
++ return NULL;
++
++ device_initialize(&idxd->conf_dev);
++ idxd->conf_dev.parent = dev;
++ idxd->conf_dev.bus = idxd_get_bus_type(idxd);
++ idxd->conf_dev.type = idxd_get_device_type(idxd);
++ rc = dev_set_name(&idxd->conf_dev, "%s%d", idxd_get_dev_name(idxd), idxd->id);
++ if (rc < 0) {
++ put_device(&idxd->conf_dev);
++ return NULL;
++ }
++
+ spin_lock_init(&idxd->dev_lock);
+
+ return idxd;
+@@ -346,31 +484,20 @@ static int idxd_probe(struct idxd_device *idxd)
+
+ rc = idxd_setup_internals(idxd);
+ if (rc)
+- goto err_setup;
++ goto err;
+
+ rc = idxd_setup_interrupts(idxd);
+ if (rc)
+- goto err_setup;
++ goto err;
+
+ dev_dbg(dev, "IDXD interrupt setup complete.\n");
+
+- mutex_lock(&idxd_idr_lock);
+- idxd->id = idr_alloc(&idxd_idrs[idxd->type], idxd, 0, 0, GFP_KERNEL);
+- mutex_unlock(&idxd_idr_lock);
+- if (idxd->id < 0) {
+- rc = -ENOMEM;
+- goto err_idr_fail;
+- }
+-
+ idxd->major = idxd_cdev_get_major(idxd);
+
+ dev_dbg(dev, "IDXD device %d probed successfully\n", idxd->id);
+ return 0;
+
+- err_idr_fail:
+- idxd_mask_error_interrupts(idxd);
+- idxd_mask_msix_vectors(idxd);
+- err_setup:
++ err:
+ if (device_pasid_enabled(idxd))
+ idxd_disable_system_pasid(idxd);
+ return rc;
+@@ -390,34 +517,37 @@ static int idxd_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ struct idxd_device *idxd;
+ int rc;
+
+- rc = pcim_enable_device(pdev);
++ rc = pci_enable_device(pdev);
+ if (rc)
+ return rc;
+
+ dev_dbg(dev, "Alloc IDXD context\n");
+ idxd = idxd_alloc(pdev);
+- if (!idxd)
+- return -ENOMEM;
++ if (!idxd) {
++ rc = -ENOMEM;
++ goto err_idxd_alloc;
++ }
+
+ dev_dbg(dev, "Mapping BARs\n");
+- idxd->reg_base = pcim_iomap(pdev, IDXD_MMIO_BAR, 0);
+- if (!idxd->reg_base)
+- return -ENOMEM;
++ idxd->reg_base = pci_iomap(pdev, IDXD_MMIO_BAR, 0);
++ if (!idxd->reg_base) {
++ rc = -ENOMEM;
++ goto err_iomap;
++ }
+
+ dev_dbg(dev, "Set DMA masks\n");
+ rc = pci_set_dma_mask(pdev, DMA_BIT_MASK(64));
+ if (rc)
+ rc = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
+ if (rc)
+- return rc;
++ goto err;
+
+ rc = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64));
+ if (rc)
+ rc = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32));
+ if (rc)
+- return rc;
++ goto err;
+
+- idxd_set_type(idxd);
+
+ idxd_type_init(idxd);
+
+@@ -429,13 +559,13 @@ static int idxd_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ rc = idxd_probe(idxd);
+ if (rc) {
+ dev_err(dev, "Intel(R) IDXD DMA Engine init failed\n");
+- return -ENODEV;
++ goto err;
+ }
+
+- rc = idxd_setup_sysfs(idxd);
++ rc = idxd_register_devices(idxd);
+ if (rc) {
+ dev_err(dev, "IDXD sysfs setup failed\n");
+- return -ENODEV;
++ goto err;
+ }
+
+ idxd->state = IDXD_DEV_CONF_READY;
+@@ -444,6 +574,14 @@ static int idxd_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ idxd->hw.version);
+
+ return 0;
++
++ err:
++ pci_iounmap(pdev, idxd->reg_base);
++ err_iomap:
++ put_device(&idxd->conf_dev);
++ err_idxd_alloc:
++ pci_disable_device(pdev);
++ return rc;
+ }
+
+ static void idxd_flush_pending_llist(struct idxd_irq_entry *ie)
+@@ -489,7 +627,8 @@ static void idxd_shutdown(struct pci_dev *pdev)
+
+ for (i = 0; i < msixcnt; i++) {
+ irq_entry = &idxd->irq_entries[i];
+- synchronize_irq(idxd->msix_entries[i].vector);
++ synchronize_irq(irq_entry->vector);
++ free_irq(irq_entry->vector, irq_entry);
+ if (i == 0)
+ continue;
+ idxd_flush_pending_llist(irq_entry);
+@@ -497,6 +636,9 @@ static void idxd_shutdown(struct pci_dev *pdev)
+ }
+
+ idxd_msix_perm_clear(idxd);
++ pci_free_irq_vectors(pdev);
++ pci_iounmap(pdev, idxd->reg_base);
++ pci_disable_device(pdev);
+ destroy_workqueue(idxd->wq);
+ }
+
+@@ -505,13 +647,10 @@ static void idxd_remove(struct pci_dev *pdev)
+ struct idxd_device *idxd = pci_get_drvdata(pdev);
+
+ dev_dbg(&pdev->dev, "%s called\n", __func__);
+- idxd_cleanup_sysfs(idxd);
+ idxd_shutdown(pdev);
+ if (device_pasid_enabled(idxd))
+ idxd_disable_system_pasid(idxd);
+- mutex_lock(&idxd_idr_lock);
+- idr_remove(&idxd_idrs[idxd->type], idxd->id);
+- mutex_unlock(&idxd_idr_lock);
++ idxd_unregister_devices(idxd);
+ }
+
+ static struct pci_driver idxd_pci_driver = {
+@@ -540,9 +679,8 @@ static int __init idxd_init_module(void)
+ else
+ support_enqcmd = true;
+
+- mutex_init(&idxd_idr_lock);
+ for (i = 0; i < IDXD_TYPE_MAX; i++)
+- idr_init(&idxd_idrs[i]);
++ ida_init(&idxd_idas[i]);
+
+ err = idxd_register_bus_type();
+ if (err < 0)
+diff --git a/drivers/dma/idxd/irq.c b/drivers/dma/idxd/irq.c
+index f1463fc581125..fc0781e3f36d4 100644
+--- a/drivers/dma/idxd/irq.c
++++ b/drivers/dma/idxd/irq.c
+@@ -45,7 +45,7 @@ static void idxd_device_reinit(struct work_struct *work)
+ goto out;
+
+ for (i = 0; i < idxd->max_wqs; i++) {
+- struct idxd_wq *wq = &idxd->wqs[i];
++ struct idxd_wq *wq = idxd->wqs[i];
+
+ if (wq->state == IDXD_WQ_ENABLED) {
+ rc = idxd_wq_enable(wq);
+@@ -130,18 +130,18 @@ static int process_misc_interrupts(struct idxd_device *idxd, u32 cause)
+
+ if (idxd->sw_err.valid && idxd->sw_err.wq_idx_valid) {
+ int id = idxd->sw_err.wq_idx;
+- struct idxd_wq *wq = &idxd->wqs[id];
++ struct idxd_wq *wq = idxd->wqs[id];
+
+ if (wq->type == IDXD_WQT_USER)
+- wake_up_interruptible(&wq->idxd_cdev.err_queue);
++ wake_up_interruptible(&wq->err_queue);
+ } else {
+ int i;
+
+ for (i = 0; i < idxd->max_wqs; i++) {
+- struct idxd_wq *wq = &idxd->wqs[i];
++ struct idxd_wq *wq = idxd->wqs[i];
+
+ if (wq->type == IDXD_WQT_USER)
+- wake_up_interruptible(&wq->idxd_cdev.err_queue);
++ wake_up_interruptible(&wq->err_queue);
+ }
+ }
+
+diff --git a/drivers/dma/idxd/sysfs.c b/drivers/dma/idxd/sysfs.c
+index 18bf4d1489890..9586b55abce56 100644
+--- a/drivers/dma/idxd/sysfs.c
++++ b/drivers/dma/idxd/sysfs.c
+@@ -16,69 +16,6 @@ static char *idxd_wq_type_names[] = {
+ [IDXD_WQT_USER] = "user",
+ };
+
+-static void idxd_conf_device_release(struct device *dev)
+-{
+- dev_dbg(dev, "%s for %s\n", __func__, dev_name(dev));
+-}
+-
+-static struct device_type idxd_group_device_type = {
+- .name = "group",
+- .release = idxd_conf_device_release,
+-};
+-
+-static struct device_type idxd_wq_device_type = {
+- .name = "wq",
+- .release = idxd_conf_device_release,
+-};
+-
+-static struct device_type idxd_engine_device_type = {
+- .name = "engine",
+- .release = idxd_conf_device_release,
+-};
+-
+-static struct device_type dsa_device_type = {
+- .name = "dsa",
+- .release = idxd_conf_device_release,
+-};
+-
+-static struct device_type iax_device_type = {
+- .name = "iax",
+- .release = idxd_conf_device_release,
+-};
+-
+-static inline bool is_dsa_dev(struct device *dev)
+-{
+- return dev ? dev->type == &dsa_device_type : false;
+-}
+-
+-static inline bool is_iax_dev(struct device *dev)
+-{
+- return dev ? dev->type == &iax_device_type : false;
+-}
+-
+-static inline bool is_idxd_dev(struct device *dev)
+-{
+- return is_dsa_dev(dev) || is_iax_dev(dev);
+-}
+-
+-static inline bool is_idxd_wq_dev(struct device *dev)
+-{
+- return dev ? dev->type == &idxd_wq_device_type : false;
+-}
+-
+-static inline bool is_idxd_wq_dmaengine(struct idxd_wq *wq)
+-{
+- if (wq->type == IDXD_WQT_KERNEL &&
+- strcmp(wq->name, "dmaengine") == 0)
+- return true;
+- return false;
+-}
+-
+-static inline bool is_idxd_wq_cdev(struct idxd_wq *wq)
+-{
+- return wq->type == IDXD_WQT_USER;
+-}
+-
+ static int idxd_config_bus_match(struct device *dev,
+ struct device_driver *drv)
+ {
+@@ -322,7 +259,7 @@ static int idxd_config_bus_remove(struct device *dev)
+ dev_dbg(dev, "%s removing dev %s\n", __func__,
+ dev_name(&idxd->conf_dev));
+ for (i = 0; i < idxd->max_wqs; i++) {
+- struct idxd_wq *wq = &idxd->wqs[i];
++ struct idxd_wq *wq = idxd->wqs[i];
+
+ if (wq->state == IDXD_WQ_DISABLED)
+ continue;
+@@ -334,7 +271,7 @@ static int idxd_config_bus_remove(struct device *dev)
+ idxd_unregister_dma_device(idxd);
+ rc = idxd_device_disable(idxd);
+ for (i = 0; i < idxd->max_wqs; i++) {
+- struct idxd_wq *wq = &idxd->wqs[i];
++ struct idxd_wq *wq = idxd->wqs[i];
+
+ mutex_lock(&wq->wq_lock);
+ idxd_wq_disable_cleanup(wq);
+@@ -405,7 +342,7 @@ struct bus_type *idxd_get_bus_type(struct idxd_device *idxd)
+ return idxd_bus_types[idxd->type];
+ }
+
+-static struct device_type *idxd_get_device_type(struct idxd_device *idxd)
++struct device_type *idxd_get_device_type(struct idxd_device *idxd)
+ {
+ if (idxd->type == IDXD_TYPE_DSA)
+ return &dsa_device_type;
+@@ -488,7 +425,7 @@ static ssize_t engine_group_id_store(struct device *dev,
+
+ if (prevg)
+ prevg->num_engines--;
+- engine->group = &idxd->groups[id];
++ engine->group = idxd->groups[id];
+ engine->group->num_engines++;
+
+ return count;
+@@ -512,6 +449,19 @@ static const struct attribute_group *idxd_engine_attribute_groups[] = {
+ NULL,
+ };
+
++static void idxd_conf_engine_release(struct device *dev)
++{
++ struct idxd_engine *engine = container_of(dev, struct idxd_engine, conf_dev);
++
++ kfree(engine);
++}
++
++struct device_type idxd_engine_device_type = {
++ .name = "engine",
++ .release = idxd_conf_engine_release,
++ .groups = idxd_engine_attribute_groups,
++};
++
+ /* Group attributes */
+
+ static void idxd_set_free_tokens(struct idxd_device *idxd)
+@@ -519,7 +469,7 @@ static void idxd_set_free_tokens(struct idxd_device *idxd)
+ int i, tokens;
+
+ for (i = 0, tokens = 0; i < idxd->max_groups; i++) {
+- struct idxd_group *g = &idxd->groups[i];
++ struct idxd_group *g = idxd->groups[i];
+
+ tokens += g->tokens_reserved;
+ }
+@@ -674,7 +624,7 @@ static ssize_t group_engines_show(struct device *dev,
+ struct idxd_device *idxd = group->idxd;
+
+ for (i = 0; i < idxd->max_engines; i++) {
+- struct idxd_engine *engine = &idxd->engines[i];
++ struct idxd_engine *engine = idxd->engines[i];
+
+ if (!engine->group)
+ continue;
+@@ -703,7 +653,7 @@ static ssize_t group_work_queues_show(struct device *dev,
+ struct idxd_device *idxd = group->idxd;
+
+ for (i = 0; i < idxd->max_wqs; i++) {
+- struct idxd_wq *wq = &idxd->wqs[i];
++ struct idxd_wq *wq = idxd->wqs[i];
+
+ if (!wq->group)
+ continue;
+@@ -824,6 +774,19 @@ static const struct attribute_group *idxd_group_attribute_groups[] = {
+ NULL,
+ };
+
++static void idxd_conf_group_release(struct device *dev)
++{
++ struct idxd_group *group = container_of(dev, struct idxd_group, conf_dev);
++
++ kfree(group);
++}
++
++struct device_type idxd_group_device_type = {
++ .name = "group",
++ .release = idxd_conf_group_release,
++ .groups = idxd_group_attribute_groups,
++};
++
+ /* IDXD work queue attribs */
+ static ssize_t wq_clients_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+@@ -896,7 +859,7 @@ static ssize_t wq_group_id_store(struct device *dev,
+ return count;
+ }
+
+- group = &idxd->groups[id];
++ group = idxd->groups[id];
+ prevg = wq->group;
+
+ if (prevg)
+@@ -960,7 +923,7 @@ static int total_claimed_wq_size(struct idxd_device *idxd)
+ int wq_size = 0;
+
+ for (i = 0; i < idxd->max_wqs; i++) {
+- struct idxd_wq *wq = &idxd->wqs[i];
++ struct idxd_wq *wq = idxd->wqs[i];
+
+ wq_size += wq->size;
+ }
+@@ -1206,8 +1169,16 @@ static ssize_t wq_cdev_minor_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+ {
+ struct idxd_wq *wq = container_of(dev, struct idxd_wq, conf_dev);
++ int minor = -1;
+
+- return sprintf(buf, "%d\n", wq->idxd_cdev.minor);
++ mutex_lock(&wq->wq_lock);
++ if (wq->idxd_cdev)
++ minor = wq->idxd_cdev->minor;
++ mutex_unlock(&wq->wq_lock);
++
++ if (minor == -1)
++ return -ENXIO;
++ return sysfs_emit(buf, "%d\n", minor);
+ }
+
+ static struct device_attribute dev_attr_wq_cdev_minor =
+@@ -1356,6 +1327,20 @@ static const struct attribute_group *idxd_wq_attribute_groups[] = {
+ NULL,
+ };
+
++static void idxd_conf_wq_release(struct device *dev)
++{
++ struct idxd_wq *wq = container_of(dev, struct idxd_wq, conf_dev);
++
++ kfree(wq->wqcfg);
++ kfree(wq);
++}
++
++struct device_type idxd_wq_device_type = {
++ .name = "wq",
++ .release = idxd_conf_wq_release,
++ .groups = idxd_wq_attribute_groups,
++};
++
+ /* IDXD device attribs */
+ static ssize_t version_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+@@ -1486,7 +1471,7 @@ static ssize_t clients_show(struct device *dev,
+
+ spin_lock_irqsave(&idxd->dev_lock, flags);
+ for (i = 0; i < idxd->max_wqs; i++) {
+- struct idxd_wq *wq = &idxd->wqs[i];
++ struct idxd_wq *wq = idxd->wqs[i];
+
+ count += wq->client_count;
+ }
+@@ -1644,183 +1629,160 @@ static const struct attribute_group *idxd_attribute_groups[] = {
+ NULL,
+ };
+
+-static int idxd_setup_engine_sysfs(struct idxd_device *idxd)
++static void idxd_conf_device_release(struct device *dev)
+ {
+- struct device *dev = &idxd->pdev->dev;
+- int i, rc;
++ struct idxd_device *idxd = container_of(dev, struct idxd_device, conf_dev);
++
++ kfree(idxd->groups);
++ kfree(idxd->wqs);
++ kfree(idxd->engines);
++ kfree(idxd->irq_entries);
++ ida_free(idxd_ida(idxd), idxd->id);
++ kfree(idxd);
++}
++
++struct device_type dsa_device_type = {
++ .name = "dsa",
++ .release = idxd_conf_device_release,
++ .groups = idxd_attribute_groups,
++};
++
++struct device_type iax_device_type = {
++ .name = "iax",
++ .release = idxd_conf_device_release,
++ .groups = idxd_attribute_groups,
++};
++
++static int idxd_register_engine_devices(struct idxd_device *idxd)
++{
++ int i, j, rc;
+
+ for (i = 0; i < idxd->max_engines; i++) {
+- struct idxd_engine *engine = &idxd->engines[i];
+-
+- engine->conf_dev.parent = &idxd->conf_dev;
+- dev_set_name(&engine->conf_dev, "engine%d.%d",
+- idxd->id, engine->id);
+- engine->conf_dev.bus = idxd_get_bus_type(idxd);
+- engine->conf_dev.groups = idxd_engine_attribute_groups;
+- engine->conf_dev.type = &idxd_engine_device_type;
+- dev_dbg(dev, "Engine device register: %s\n",
+- dev_name(&engine->conf_dev));
+- rc = device_register(&engine->conf_dev);
+- if (rc < 0) {
+- put_device(&engine->conf_dev);
++ struct idxd_engine *engine = idxd->engines[i];
++
++ rc = device_add(&engine->conf_dev);
++ if (rc < 0)
+ goto cleanup;
+- }
+ }
+
+ return 0;
+
+ cleanup:
+- while (i--) {
+- struct idxd_engine *engine = &idxd->engines[i];
++ j = i - 1;
++ for (; i < idxd->max_engines; i++)
++ put_device(&idxd->engines[i]->conf_dev);
+
+- device_unregister(&engine->conf_dev);
+- }
++ while (j--)
++ device_unregister(&idxd->engines[j]->conf_dev);
+ return rc;
+ }
+
+-static int idxd_setup_group_sysfs(struct idxd_device *idxd)
++static int idxd_register_group_devices(struct idxd_device *idxd)
+ {
+- struct device *dev = &idxd->pdev->dev;
+- int i, rc;
++ int i, j, rc;
+
+ for (i = 0; i < idxd->max_groups; i++) {
+- struct idxd_group *group = &idxd->groups[i];
+-
+- group->conf_dev.parent = &idxd->conf_dev;
+- dev_set_name(&group->conf_dev, "group%d.%d",
+- idxd->id, group->id);
+- group->conf_dev.bus = idxd_get_bus_type(idxd);
+- group->conf_dev.groups = idxd_group_attribute_groups;
+- group->conf_dev.type = &idxd_group_device_type;
+- dev_dbg(dev, "Group device register: %s\n",
+- dev_name(&group->conf_dev));
+- rc = device_register(&group->conf_dev);
+- if (rc < 0) {
+- put_device(&group->conf_dev);
++ struct idxd_group *group = idxd->groups[i];
++
++ rc = device_add(&group->conf_dev);
++ if (rc < 0)
+ goto cleanup;
+- }
+ }
+
+ return 0;
+
+ cleanup:
+- while (i--) {
+- struct idxd_group *group = &idxd->groups[i];
++ j = i - 1;
++ for (; i < idxd->max_groups; i++)
++ put_device(&idxd->groups[i]->conf_dev);
+
+- device_unregister(&group->conf_dev);
+- }
++ while (j--)
++ device_unregister(&idxd->groups[j]->conf_dev);
+ return rc;
+ }
+
+-static int idxd_setup_wq_sysfs(struct idxd_device *idxd)
++static int idxd_register_wq_devices(struct idxd_device *idxd)
+ {
+- struct device *dev = &idxd->pdev->dev;
+- int i, rc;
++ int i, rc, j;
+
+ for (i = 0; i < idxd->max_wqs; i++) {
+- struct idxd_wq *wq = &idxd->wqs[i];
+-
+- wq->conf_dev.parent = &idxd->conf_dev;
+- dev_set_name(&wq->conf_dev, "wq%d.%d", idxd->id, wq->id);
+- wq->conf_dev.bus = idxd_get_bus_type(idxd);
+- wq->conf_dev.groups = idxd_wq_attribute_groups;
+- wq->conf_dev.type = &idxd_wq_device_type;
+- dev_dbg(dev, "WQ device register: %s\n",
+- dev_name(&wq->conf_dev));
+- rc = device_register(&wq->conf_dev);
+- if (rc < 0) {
+- put_device(&wq->conf_dev);
++ struct idxd_wq *wq = idxd->wqs[i];
++
++ rc = device_add(&wq->conf_dev);
++ if (rc < 0)
+ goto cleanup;
+- }
+ }
+
+ return 0;
+
+ cleanup:
+- while (i--) {
+- struct idxd_wq *wq = &idxd->wqs[i];
++ j = i - 1;
++ for (; i < idxd->max_wqs; i++)
++ put_device(&idxd->wqs[i]->conf_dev);
+
+- device_unregister(&wq->conf_dev);
+- }
++ while (j--)
++ device_unregister(&idxd->wqs[j]->conf_dev);
+ return rc;
+ }
+
+-static int idxd_setup_device_sysfs(struct idxd_device *idxd)
++int idxd_register_devices(struct idxd_device *idxd)
+ {
+ struct device *dev = &idxd->pdev->dev;
+- int rc;
+- char devname[IDXD_NAME_SIZE];
+-
+- sprintf(devname, "%s%d", idxd_get_dev_name(idxd), idxd->id);
+- idxd->conf_dev.parent = dev;
+- dev_set_name(&idxd->conf_dev, "%s", devname);
+- idxd->conf_dev.bus = idxd_get_bus_type(idxd);
+- idxd->conf_dev.groups = idxd_attribute_groups;
+- idxd->conf_dev.type = idxd_get_device_type(idxd);
+-
+- dev_dbg(dev, "IDXD device register: %s\n", dev_name(&idxd->conf_dev));
+- rc = device_register(&idxd->conf_dev);
+- if (rc < 0) {
+- put_device(&idxd->conf_dev);
+- return rc;
+- }
++ int rc, i;
+
+- return 0;
+-}
+-
+-int idxd_setup_sysfs(struct idxd_device *idxd)
+-{
+- struct device *dev = &idxd->pdev->dev;
+- int rc;
+-
+- rc = idxd_setup_device_sysfs(idxd);
+- if (rc < 0) {
+- dev_dbg(dev, "Device sysfs registering failed: %d\n", rc);
++ rc = device_add(&idxd->conf_dev);
++ if (rc < 0)
+ return rc;
+- }
+
+- rc = idxd_setup_wq_sysfs(idxd);
++ rc = idxd_register_wq_devices(idxd);
+ if (rc < 0) {
+- /* unregister conf dev */
+- dev_dbg(dev, "Work Queue sysfs registering failed: %d\n", rc);
+- return rc;
++ dev_dbg(dev, "WQ devices registering failed: %d\n", rc);
++ goto err_wq;
+ }
+
+- rc = idxd_setup_group_sysfs(idxd);
++ rc = idxd_register_engine_devices(idxd);
+ if (rc < 0) {
+- /* unregister conf dev */
+- dev_dbg(dev, "Group sysfs registering failed: %d\n", rc);
+- return rc;
++ dev_dbg(dev, "Engine devices registering failed: %d\n", rc);
++ goto err_engine;
+ }
+
+- rc = idxd_setup_engine_sysfs(idxd);
++ rc = idxd_register_group_devices(idxd);
+ if (rc < 0) {
+- /* unregister conf dev */
+- dev_dbg(dev, "Engine sysfs registering failed: %d\n", rc);
+- return rc;
++ dev_dbg(dev, "Group device registering failed: %d\n", rc);
++ goto err_group;
+ }
+
+ return 0;
++
++ err_group:
++ for (i = 0; i < idxd->max_engines; i++)
++ device_unregister(&idxd->engines[i]->conf_dev);
++ err_engine:
++ for (i = 0; i < idxd->max_wqs; i++)
++ device_unregister(&idxd->wqs[i]->conf_dev);
++ err_wq:
++ device_del(&idxd->conf_dev);
++ return rc;
+ }
+
+-void idxd_cleanup_sysfs(struct idxd_device *idxd)
++void idxd_unregister_devices(struct idxd_device *idxd)
+ {
+ int i;
+
+ for (i = 0; i < idxd->max_wqs; i++) {
+- struct idxd_wq *wq = &idxd->wqs[i];
++ struct idxd_wq *wq = idxd->wqs[i];
+
+ device_unregister(&wq->conf_dev);
+ }
+
+ for (i = 0; i < idxd->max_engines; i++) {
+- struct idxd_engine *engine = &idxd->engines[i];
++ struct idxd_engine *engine = idxd->engines[i];
+
+ device_unregister(&engine->conf_dev);
+ }
+
+ for (i = 0; i < idxd->max_groups; i++) {
+- struct idxd_group *group = &idxd->groups[i];
++ struct idxd_group *group = idxd->groups[i];
+
+ device_unregister(&group->conf_dev);
+ }
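
Editor's note: the sysfs rework above replaces the empty release callback with per-type callbacks that free the containing object, and splits registration into device_initialize()/device_add() so a failed add can be unwound with put_device(). A minimal sketch of that lifetime pattern, with hypothetical demo_* names:

	#include <linux/device.h>
	#include <linux/slab.h>

	struct demo_obj {
		struct device conf_dev;
		int id;
	};

	static void demo_release(struct device *dev)
	{
		/* runs once the last reference is dropped */
		kfree(container_of(dev, struct demo_obj, conf_dev));
	}

	static struct demo_obj *demo_alloc(struct device *parent)
	{
		struct demo_obj *obj = kzalloc(sizeof(*obj), GFP_KERNEL);

		if (!obj)
			return NULL;
		device_initialize(&obj->conf_dev);	/* refcount is live from here */
		obj->conf_dev.parent = parent;
		obj->conf_dev.release = demo_release;
		return obj;
	}

After device_initialize(), error paths drop the object with put_device(), which ends in demo_release(); once device_add() has succeeded, teardown is device_unregister().
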
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+index 024d0a563a652..f41764cee6906 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+@@ -77,6 +77,8 @@ int amdgpu_ib_get(struct amdgpu_device *adev, struct amdgpu_vm *vm,
+ }
+
+ ib->ptr = amdgpu_sa_bo_cpu_addr(ib->sa_bo);
++ /* flush the cache before commit the IB */
++ ib->flags = AMDGPU_IB_FLAG_EMIT_MEM_SYNC;
+
+ if (!vm)
+ ib->gpu_addr = amdgpu_sa_bo_gpu_addr(ib->sa_bo);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
+index 79de68ac03f20..0c3b15992b814 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
+@@ -643,6 +643,7 @@ struct hdcp_workqueue *hdcp_create_workqueue(struct amdgpu_device *adev, struct
+
+ /* File created at /sys/class/drm/card0/device/hdcp_srm*/
+ hdcp_work[0].attr = data_attr;
++ sysfs_bin_attr_init(&hdcp_work[0].attr);
+
+ if (sysfs_create_bin_file(&adev->dev->kobj, &hdcp_work[0].attr))
+ DRM_WARN("Failed to create device file hdcp_srm");
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index ccac86347315d..2d4f8780922e8 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -2528,6 +2528,10 @@ static void commit_planes_for_stream(struct dc *dc,
+ plane_state->triplebuffer_flips = true;
+ }
+ }
++ if (update_type == UPDATE_TYPE_FULL) {
++ /* force vsync flip when reconfiguring pipes to prevent underflow */
++ plane_state->flip_immediate = false;
++ }
+ }
+ }
+
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hubp.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hubp.c
+index bec7059f6d5d1..a1318c31bcfa5 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hubp.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hubp.c
+@@ -1,5 +1,5 @@
+ /*
+- * Copyright 2012-17 Advanced Micro Devices, Inc.
++ * Copyright 2012-2021 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+@@ -181,11 +181,14 @@ void hubp2_vready_at_or_After_vsync(struct hubp *hubp,
+ else
+ Set HUBP_VREADY_AT_OR_AFTER_VSYNC = 0
+ */
+- if ((pipe_dest->vstartup_start - (pipe_dest->vready_offset+pipe_dest->vupdate_width
+- + pipe_dest->vupdate_offset) / pipe_dest->htotal) <= pipe_dest->vblank_end) {
+- value = 1;
+- } else
+- value = 0;
++ if (pipe_dest->htotal != 0) {
++ if ((pipe_dest->vstartup_start - (pipe_dest->vready_offset+pipe_dest->vupdate_width
++ + pipe_dest->vupdate_offset) / pipe_dest->htotal) <= pipe_dest->vblank_end) {
++ value = 1;
++ } else
++ value = 0;
++ }
++
+ REG_UPDATE(DCHUBP_CNTL, HUBP_VREADY_AT_OR_AFTER_VSYNC, value);
+ }
+
+diff --git a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c
+index 3a367a5968ae1..972f2600f967f 100644
+--- a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c
++++ b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c
+@@ -789,6 +789,8 @@ enum mod_hdcp_status mod_hdcp_hdcp2_validate_rx_id_list(struct mod_hdcp *hdcp)
+ TA_HDCP2_MSG_AUTHENTICATION_STATUS__RECEIVERID_REVOKED) {
+ hdcp->connection.is_hdcp2_revoked = 1;
+ status = MOD_HDCP_STATUS_HDCP2_RX_ID_LIST_REVOKED;
++ } else {
++ status = MOD_HDCP_STATUS_HDCP2_VALIDATE_RX_ID_LIST_FAILURE;
+ }
+ }
+ mutex_unlock(&psp->hdcp_context.mutex);
+diff --git a/drivers/gpu/drm/i915/display/intel_overlay.c b/drivers/gpu/drm/i915/display/intel_overlay.c
+index b73d51e766ce8..0e60aec0bb191 100644
+--- a/drivers/gpu/drm/i915/display/intel_overlay.c
++++ b/drivers/gpu/drm/i915/display/intel_overlay.c
+@@ -382,7 +382,7 @@ static void intel_overlay_off_tail(struct intel_overlay *overlay)
+ i830_overlay_clock_gating(dev_priv, true);
+ }
+
+-static void
++__i915_active_call static void
+ intel_overlay_last_flip_retire(struct i915_active *active)
+ {
+ struct intel_overlay *overlay =
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+index ec28a6cde49bd..0b2434e29d002 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+@@ -189,7 +189,7 @@ compute_partial_view(const struct drm_i915_gem_object *obj,
+ struct i915_ggtt_view view;
+
+ if (i915_gem_object_is_tiled(obj))
+- chunk = roundup(chunk, tile_row_pages(obj));
++ chunk = roundup(chunk, tile_row_pages(obj) ?: 1);
+
+ view.type = I915_GGTT_VIEW_PARTIAL;
+ view.partial.offset = rounddown(page_offset, chunk);
+diff --git a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
+index a37c968ef8f7c..787c0a93caefe 100644
+--- a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
++++ b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
+@@ -631,7 +631,6 @@ static int gen8_preallocate_top_level_pdp(struct i915_ppgtt *ppgtt)
+
+ err = pin_pt_dma(vm, pde->pt.base);
+ if (err) {
+- i915_gem_object_put(pde->pt.base);
+ free_pd(vm, pde);
+ return err;
+ }
+diff --git a/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c b/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c
+index 6614f67364862..b5937b39145a4 100644
+--- a/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c
++++ b/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c
+@@ -652,8 +652,8 @@ static void detect_bit_6_swizzle(struct i915_ggtt *ggtt)
+ * banks of memory are paired and unswizzled on the
+ * uneven portion, so leave that as unknown.
+ */
+- if (intel_uncore_read(uncore, C0DRB3) ==
+- intel_uncore_read(uncore, C1DRB3)) {
++ if (intel_uncore_read16(uncore, C0DRB3) ==
++ intel_uncore_read16(uncore, C1DRB3)) {
+ swizzle_x = I915_BIT_6_SWIZZLE_9_10;
+ swizzle_y = I915_BIT_6_SWIZZLE_9;
+ }
+diff --git a/drivers/gpu/drm/i915/i915_active.c b/drivers/gpu/drm/i915/i915_active.c
+index 9ed19b8bca600..c4c2d24dc5094 100644
+--- a/drivers/gpu/drm/i915/i915_active.c
++++ b/drivers/gpu/drm/i915/i915_active.c
+@@ -1159,7 +1159,8 @@ static int auto_active(struct i915_active *ref)
+ return 0;
+ }
+
+-static void auto_retire(struct i915_active *ref)
++__i915_active_call static void
++auto_retire(struct i915_active *ref)
+ {
+ i915_active_put(ref);
+ }
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+index b6e8ff2782da3..50ddc5834cabb 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+@@ -1152,10 +1152,6 @@ static void a6xx_llc_slices_init(struct platform_device *pdev,
+ {
+ struct device_node *phandle;
+
+- a6xx_gpu->llc_mmio = msm_ioremap(pdev, "cx_mem", "gpu_cx");
+- if (IS_ERR(a6xx_gpu->llc_mmio))
+- return;
+-
+ /*
+ * There is a different programming path for targets with an mmu500
+ * attached, so detect if that is the case
+@@ -1165,6 +1161,11 @@ static void a6xx_llc_slices_init(struct platform_device *pdev,
+ of_device_is_compatible(phandle, "arm,mmu-500"));
+ of_node_put(phandle);
+
++ if (a6xx_gpu->have_mmu500)
++ a6xx_gpu->llc_mmio = NULL;
++ else
++ a6xx_gpu->llc_mmio = msm_ioremap(pdev, "cx_mem", "gpu_cx");
++
+ a6xx_gpu->llc_slice = llcc_slice_getd(LLCC_GPU);
+ a6xx_gpu->htw_llc_slice = llcc_slice_getd(LLCC_GPUHTW);
+
+diff --git a/drivers/gpu/drm/msm/dp/dp_audio.c b/drivers/gpu/drm/msm/dp/dp_audio.c
+index 82a8673ab8daf..d7e4a39a904e2 100644
+--- a/drivers/gpu/drm/msm/dp/dp_audio.c
++++ b/drivers/gpu/drm/msm/dp/dp_audio.c
+@@ -527,6 +527,7 @@ int dp_audio_hw_params(struct device *dev,
+ dp_audio_setup_acr(audio);
+ dp_audio_safe_to_exit_level(audio);
+ dp_audio_enable(audio, true);
++ dp_display_signal_audio_start(dp_display);
+ dp_display->audio_enabled = true;
+
+ end:
+diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c
+index 81f6794a25100..9abb6bddb52b1 100644
+--- a/drivers/gpu/drm/msm/dp/dp_display.c
++++ b/drivers/gpu/drm/msm/dp/dp_display.c
+@@ -178,6 +178,15 @@ static int dp_del_event(struct dp_display_private *dp_priv, u32 event)
+ return 0;
+ }
+
++void dp_display_signal_audio_start(struct msm_dp *dp_display)
++{
++ struct dp_display_private *dp;
++
++ dp = container_of(dp_display, struct dp_display_private, dp_display);
++
++ reinit_completion(&dp->audio_comp);
++}
++
+ void dp_display_signal_audio_complete(struct msm_dp *dp_display)
+ {
+ struct dp_display_private *dp;
+@@ -586,10 +595,8 @@ static int dp_connect_pending_timeout(struct dp_display_private *dp, u32 data)
+ mutex_lock(&dp->event_mutex);
+
+ state = dp->hpd_state;
+- if (state == ST_CONNECT_PENDING) {
+- dp_display_enable(dp, 0);
++ if (state == ST_CONNECT_PENDING)
+ dp->hpd_state = ST_CONNECTED;
+- }
+
+ mutex_unlock(&dp->event_mutex);
+
+@@ -651,7 +658,6 @@ static int dp_hpd_unplug_handle(struct dp_display_private *dp, u32 data)
+ dp_add_event(dp, EV_DISCONNECT_PENDING_TIMEOUT, 0, DP_TIMEOUT_5_SECOND);
+
+ /* signal the disconnect event early to ensure proper teardown */
+- reinit_completion(&dp->audio_comp);
+ dp_display_handle_plugged_change(g_dp_display, false);
+
+ dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_PLUG_INT_MASK |
+@@ -669,10 +675,8 @@ static int dp_disconnect_pending_timeout(struct dp_display_private *dp, u32 data
+ mutex_lock(&dp->event_mutex);
+
+ state = dp->hpd_state;
+- if (state == ST_DISCONNECT_PENDING) {
+- dp_display_disable(dp, 0);
++ if (state == ST_DISCONNECT_PENDING)
+ dp->hpd_state = ST_DISCONNECTED;
+- }
+
+ mutex_unlock(&dp->event_mutex);
+
+@@ -891,7 +895,6 @@ static int dp_display_disable(struct dp_display_private *dp, u32 data)
+ /* wait only if audio was enabled */
+ if (dp_display->audio_enabled) {
+ /* signal the disconnect event */
+- reinit_completion(&dp->audio_comp);
+ dp_display_handle_plugged_change(dp_display, false);
+ if (!wait_for_completion_timeout(&dp->audio_comp,
+ HZ * 5))
+@@ -1265,7 +1268,12 @@ static int dp_pm_resume(struct device *dev)
+
+ status = dp_catalog_link_is_connected(dp->catalog);
+
+- if (status)
++ /*
++	 * cannot declare the display connected unless the
++	 * HDMI cable is plugged in and the dongle's
++	 * sink_count becomes 1
++ */
++ if (status && dp->link->sink_count)
+ dp->dp_display.is_connected = true;
+ else
+ dp->dp_display.is_connected = false;
+diff --git a/drivers/gpu/drm/msm/dp/dp_display.h b/drivers/gpu/drm/msm/dp/dp_display.h
+index 6092ba1ed85ed..5173c89eedf7e 100644
+--- a/drivers/gpu/drm/msm/dp/dp_display.h
++++ b/drivers/gpu/drm/msm/dp/dp_display.h
+@@ -34,6 +34,7 @@ int dp_display_get_modes(struct msm_dp *dp_display,
+ int dp_display_request_irq(struct msm_dp *dp_display);
+ bool dp_display_check_video_test(struct msm_dp *dp_display);
+ int dp_display_get_test_bpp(struct msm_dp *dp_display);
++void dp_display_signal_audio_start(struct msm_dp *dp_display);
+ void dp_display_signal_audio_complete(struct msm_dp *dp_display);
+
+ #endif /* _DP_DISPLAY_H_ */
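
Editor's note: the new dp_display_signal_audio_start() re-arms the completion when the audio session starts rather than just before the teardown wait, closing the window in which a completion from the other side could be wiped out by a late reinit. The underlying handshake, sketched with hypothetical demo_* names:

	#include <linux/completion.h>
	#include <linux/errno.h>
	#include <linux/jiffies.h>

	static DECLARE_COMPLETION(demo_comp);

	static void demo_start(void)
	{
		reinit_completion(&demo_comp);	/* re-arm before the session */
	}

	static void demo_finished(void)
	{
		complete_all(&demo_comp);	/* signalled by the consumer side */
	}

	static int demo_stop(void)
	{
		if (!wait_for_completion_timeout(&demo_comp, 5 * HZ))
			return -ETIMEDOUT;
		return 0;
	}
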
+diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
+index aa3b589f30a18..1d6130563fe51 100644
+--- a/drivers/gpu/drm/radeon/radeon.h
++++ b/drivers/gpu/drm/radeon/radeon.h
+@@ -1559,6 +1559,7 @@ struct radeon_dpm {
+ void *priv;
+ u32 new_active_crtcs;
+ int new_active_crtc_count;
++ int high_pixelclock_count;
+ u32 current_active_crtcs;
+ int current_active_crtc_count;
+ bool single_display;
+diff --git a/drivers/gpu/drm/radeon/radeon_atombios.c b/drivers/gpu/drm/radeon/radeon_atombios.c
+index be96d9b64e43b..70821d73f58ff 100644
+--- a/drivers/gpu/drm/radeon/radeon_atombios.c
++++ b/drivers/gpu/drm/radeon/radeon_atombios.c
+@@ -2119,11 +2119,14 @@ static int radeon_atombios_parse_power_table_1_3(struct radeon_device *rdev)
+ return state_index;
+ /* last mode is usually default, array is low to high */
+ for (i = 0; i < num_modes; i++) {
+- rdev->pm.power_state[state_index].clock_info =
+- kcalloc(1, sizeof(struct radeon_pm_clock_info),
+- GFP_KERNEL);
++ /* avoid memory leaks from invalid modes or unknown frev. */
++ if (!rdev->pm.power_state[state_index].clock_info) {
++ rdev->pm.power_state[state_index].clock_info =
++ kzalloc(sizeof(struct radeon_pm_clock_info),
++ GFP_KERNEL);
++ }
+ if (!rdev->pm.power_state[state_index].clock_info)
+- return state_index;
++ goto out;
+ rdev->pm.power_state[state_index].num_clock_modes = 1;
+ rdev->pm.power_state[state_index].clock_info[0].voltage.type = VOLTAGE_NONE;
+ switch (frev) {
+@@ -2242,17 +2245,24 @@ static int radeon_atombios_parse_power_table_1_3(struct radeon_device *rdev)
+ break;
+ }
+ }
++out:
++ /* free any unused clock_info allocation. */
++ if (state_index && state_index < num_modes) {
++ kfree(rdev->pm.power_state[state_index].clock_info);
++ rdev->pm.power_state[state_index].clock_info = NULL;
++ }
++
+ /* last mode is usually default */
+- if (rdev->pm.default_power_state_index == -1) {
++ if (state_index && rdev->pm.default_power_state_index == -1) {
+ rdev->pm.power_state[state_index - 1].type =
+ POWER_STATE_TYPE_DEFAULT;
+ rdev->pm.default_power_state_index = state_index - 1;
+ rdev->pm.power_state[state_index - 1].default_clock_mode =
+ &rdev->pm.power_state[state_index - 1].clock_info[0];
+- rdev->pm.power_state[state_index].flags &=
++ rdev->pm.power_state[state_index - 1].flags &=
+ ~RADEON_PM_STATE_SINGLE_DISPLAY_ONLY;
+- rdev->pm.power_state[state_index].misc = 0;
+- rdev->pm.power_state[state_index].misc2 = 0;
++ rdev->pm.power_state[state_index - 1].misc = 0;
++ rdev->pm.power_state[state_index - 1].misc2 = 0;
+ }
+ return state_index;
+ }
+diff --git a/drivers/gpu/drm/radeon/radeon_pm.c b/drivers/gpu/drm/radeon/radeon_pm.c
+index 1995dad59dd09..2db4a8b1542d3 100644
+--- a/drivers/gpu/drm/radeon/radeon_pm.c
++++ b/drivers/gpu/drm/radeon/radeon_pm.c
+@@ -1775,6 +1775,7 @@ static void radeon_pm_compute_clocks_dpm(struct radeon_device *rdev)
+ struct drm_device *ddev = rdev->ddev;
+ struct drm_crtc *crtc;
+ struct radeon_crtc *radeon_crtc;
++ struct radeon_connector *radeon_connector;
+
+ if (!rdev->pm.dpm_enabled)
+ return;
+@@ -1784,6 +1785,7 @@ static void radeon_pm_compute_clocks_dpm(struct radeon_device *rdev)
+ /* update active crtc counts */
+ rdev->pm.dpm.new_active_crtcs = 0;
+ rdev->pm.dpm.new_active_crtc_count = 0;
++ rdev->pm.dpm.high_pixelclock_count = 0;
+ if (rdev->num_crtc && rdev->mode_info.mode_config_initialized) {
+ list_for_each_entry(crtc,
+ &ddev->mode_config.crtc_list, head) {
+@@ -1791,6 +1793,12 @@ static void radeon_pm_compute_clocks_dpm(struct radeon_device *rdev)
+ if (crtc->enabled) {
+ rdev->pm.dpm.new_active_crtcs |= (1 << radeon_crtc->crtc_id);
+ rdev->pm.dpm.new_active_crtc_count++;
++ if (!radeon_crtc->connector)
++ continue;
++
++ radeon_connector = to_radeon_connector(radeon_crtc->connector);
++ if (radeon_connector->pixelclock_for_modeset > 297000)
++ rdev->pm.dpm.high_pixelclock_count++;
+ }
+ }
+ }
+diff --git a/drivers/gpu/drm/radeon/si_dpm.c b/drivers/gpu/drm/radeon/si_dpm.c
+index 91bfc4762767b..43b63705d0737 100644
+--- a/drivers/gpu/drm/radeon/si_dpm.c
++++ b/drivers/gpu/drm/radeon/si_dpm.c
+@@ -2979,6 +2979,9 @@ static void si_apply_state_adjust_rules(struct radeon_device *rdev,
+ (rdev->pdev->device == 0x6605)) {
+ max_sclk = 75000;
+ }
++
++ if (rdev->pm.dpm.high_pixelclock_count > 1)
++ disable_sclk_switching = true;
+ }
+
+ if (rps->vce_active) {
+diff --git a/drivers/hwmon/ltc2992.c b/drivers/hwmon/ltc2992.c
+index 4382105bf1420..2a4bed0ab226b 100644
+--- a/drivers/hwmon/ltc2992.c
++++ b/drivers/hwmon/ltc2992.c
+@@ -900,11 +900,15 @@ static int ltc2992_parse_dt(struct ltc2992_state *st)
+
+ fwnode_for_each_available_child_node(fwnode, child) {
+ ret = fwnode_property_read_u32(child, "reg", &addr);
+- if (ret < 0)
++ if (ret < 0) {
++ fwnode_handle_put(child);
+ return ret;
++ }
+
+- if (addr > 1)
++ if (addr > 1) {
++ fwnode_handle_put(child);
+ return -EINVAL;
++ }
+
+ ret = fwnode_property_read_u32(child, "shunt-resistor-micro-ohms", &val);
+ if (!ret)
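
Editor's note: the ltc2992 fix above illustrates a common leak. fwnode_for_each_available_child_node() takes a reference on each child it yields and only drops it when advancing to the next one, so any early exit from the loop must call fwnode_handle_put() itself. A minimal sketch, with a hypothetical property name:

	#include <linux/property.h>

	static int demo_parse(struct fwnode_handle *fwnode)
	{
		struct fwnode_handle *child;
		u32 reg;
		int ret;

		fwnode_for_each_available_child_node(fwnode, child) {
			ret = fwnode_property_read_u32(child, "reg", &reg);
			if (ret < 0) {
				/* leaving early: drop the iterator's reference */
				fwnode_handle_put(child);
				return ret;
			}
		}
		return 0;	/* normal exit has already dropped it */
	}
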
+diff --git a/drivers/hwmon/occ/common.c b/drivers/hwmon/occ/common.c
+index 7a5e539b567bf..580e63d7daa00 100644
+--- a/drivers/hwmon/occ/common.c
++++ b/drivers/hwmon/occ/common.c
+@@ -217,9 +217,9 @@ int occ_update_response(struct occ *occ)
+ return rc;
+
+ /* limit the maximum rate of polling the OCC */
+- if (time_after(jiffies, occ->last_update + OCC_UPDATE_FREQUENCY)) {
++ if (time_after(jiffies, occ->next_update)) {
+ rc = occ_poll(occ);
+- occ->last_update = jiffies;
++ occ->next_update = jiffies + OCC_UPDATE_FREQUENCY;
+ } else {
+ rc = occ->last_error;
+ }
+@@ -1164,6 +1164,7 @@ int occ_setup(struct occ *occ, const char *name)
+ return rc;
+ }
+
++ occ->next_update = jiffies + OCC_UPDATE_FREQUENCY;
+ occ_parse_poll_response(occ);
+
+ rc = occ_setup_sensor_attrs(occ);
+diff --git a/drivers/hwmon/occ/common.h b/drivers/hwmon/occ/common.h
+index 67e6968b8978e..e6df719770e81 100644
+--- a/drivers/hwmon/occ/common.h
++++ b/drivers/hwmon/occ/common.h
+@@ -99,7 +99,7 @@ struct occ {
+ u8 poll_cmd_data; /* to perform OCC poll command */
+ int (*send_cmd)(struct occ *occ, u8 *cmd);
+
+- unsigned long last_update;
++ unsigned long next_update;
+ struct mutex lock; /* lock OCC access */
+
+ struct device *hwmon;
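
Editor's note: the occ change above switches the rate limit from "time since last update", computed at check time, to a stored deadline, which also lets setup prime the first deadline so early requests are served from the just-parsed poll response. The core pattern, sketched with hypothetical names:

	#include <linux/jiffies.h>

	#define DEMO_UPDATE_FREQUENCY	(HZ / 10)	/* hypothetical: 100 ms */

	static unsigned long demo_next_update;

	/* hypothetical: issue one poll command to the hardware */
	static int demo_poll_hw(void)
	{
		return 0;
	}

	static int demo_update(void)
	{
		if (!time_after(jiffies, demo_next_update))
			return 0;			/* too soon: serve cached data */

		demo_next_update = jiffies + DEMO_UPDATE_FREQUENCY;
		return demo_poll_hw();
	}
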
+diff --git a/drivers/hwtracing/coresight/coresight-platform.c b/drivers/hwtracing/coresight/coresight-platform.c
+index 3629b7885aca9..c594f45319fc5 100644
+--- a/drivers/hwtracing/coresight/coresight-platform.c
++++ b/drivers/hwtracing/coresight/coresight-platform.c
+@@ -90,6 +90,12 @@ static void of_coresight_get_ports_legacy(const struct device_node *node,
+ struct of_endpoint endpoint;
+ int in = 0, out = 0;
+
++ /*
++ * Avoid warnings in of_graph_get_next_endpoint()
++ * if the device doesn't have any graph connections
++ */
++ if (!of_graph_is_present(node))
++ return;
+ do {
+ ep = of_graph_get_next_endpoint(node, ep);
+ if (!ep)
+diff --git a/drivers/i2c/busses/i2c-imx.c b/drivers/i2c/busses/i2c-imx.c
+index 8a694b2eebfdb..d6b3fdf09b8f0 100644
+--- a/drivers/i2c/busses/i2c-imx.c
++++ b/drivers/i2c/busses/i2c-imx.c
+@@ -763,7 +763,7 @@ static int i2c_imx_reg_slave(struct i2c_client *client)
+ i2c_imx->slave = client;
+
+ /* Resume */
+- ret = pm_runtime_get_sync(i2c_imx->adapter.dev.parent);
++ ret = pm_runtime_resume_and_get(i2c_imx->adapter.dev.parent);
+ if (ret < 0) {
+ dev_err(&i2c_imx->adapter.dev, "failed to resume i2c controller");
+ return ret;
+diff --git a/drivers/i2c/busses/i2c-mt65xx.c b/drivers/i2c/busses/i2c-mt65xx.c
+index 86f70c7513192..bf25acba2ed53 100644
+--- a/drivers/i2c/busses/i2c-mt65xx.c
++++ b/drivers/i2c/busses/i2c-mt65xx.c
+@@ -564,7 +564,7 @@ static const struct i2c_spec_values *mtk_i2c_get_spec(unsigned int speed)
+
+ static int mtk_i2c_max_step_cnt(unsigned int target_speed)
+ {
+- if (target_speed > I2C_MAX_FAST_MODE_FREQ)
++ if (target_speed > I2C_MAX_FAST_MODE_PLUS_FREQ)
+ return MAX_HS_STEP_CNT_DIV;
+ else
+ return MAX_STEP_CNT_DIV;
+@@ -635,7 +635,7 @@ static int mtk_i2c_check_ac_timing(struct mtk_i2c *i2c,
+ if (sda_min > sda_max)
+ return -3;
+
+- if (check_speed > I2C_MAX_FAST_MODE_FREQ) {
++ if (check_speed > I2C_MAX_FAST_MODE_PLUS_FREQ) {
+ if (i2c->dev_comp->ltiming_adjust) {
+ i2c->ac_timing.hs = I2C_TIME_DEFAULT_VALUE |
+ (sample_cnt << 12) | (high_cnt << 8);
+@@ -850,7 +850,7 @@ static int mtk_i2c_do_transfer(struct mtk_i2c *i2c, struct i2c_msg *msgs,
+
+ control_reg = mtk_i2c_readw(i2c, OFFSET_CONTROL) &
+ ~(I2C_CONTROL_DIR_CHANGE | I2C_CONTROL_RS);
+- if ((i2c->speed_hz > I2C_MAX_FAST_MODE_FREQ) || (left_num >= 1))
++ if ((i2c->speed_hz > I2C_MAX_FAST_MODE_PLUS_FREQ) || (left_num >= 1))
+ control_reg |= I2C_CONTROL_RS;
+
+ if (i2c->op == I2C_MASTER_WRRD)
+@@ -1067,7 +1067,8 @@ static int mtk_i2c_transfer(struct i2c_adapter *adap,
+ }
+ }
+
+- if (i2c->auto_restart && num >= 2 && i2c->speed_hz > I2C_MAX_FAST_MODE_FREQ)
++ if (i2c->auto_restart && num >= 2 &&
++ i2c->speed_hz > I2C_MAX_FAST_MODE_PLUS_FREQ)
+ /* ignore the first restart irq after the master code,
+ * otherwise the first transfer will be discarded.
+ */
+diff --git a/drivers/i2c/i2c-dev.c b/drivers/i2c/i2c-dev.c
+index 6ceb11cc4be18..6ef38a8ee95cb 100644
+--- a/drivers/i2c/i2c-dev.c
++++ b/drivers/i2c/i2c-dev.c
+@@ -440,8 +440,13 @@ static long i2cdev_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ sizeof(rdwr_arg)))
+ return -EFAULT;
+
+- /* Put an arbitrary limit on the number of messages that can
+- * be sent at once */
++ if (!rdwr_arg.msgs || rdwr_arg.nmsgs == 0)
++ return -EINVAL;
++
++ /*
++ * Put an arbitrary limit on the number of messages that can
++ * be sent at once
++ */
+ if (rdwr_arg.nmsgs > I2C_RDWR_IOCTL_MAX_MSGS)
+ return -EINVAL;
+
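
Editor's note: the i2c-dev hunk adds the missing sanity checks on user-supplied ioctl arguments; a NULL message pointer or a zero count must be rejected before anything is sized or dereferenced from those values. A minimal sketch of the same validation, with hypothetical struct and limit names:

	#include <linux/errno.h>
	#include <linux/types.h>
	#include <linux/uaccess.h>

	#define DEMO_MAX_MSGS	42		/* hypothetical upper bound */

	struct demo_rdwr_arg {
		void __user *msgs;
		__u32 nmsgs;
	};

	static long demo_ioctl(unsigned long arg)
	{
		struct demo_rdwr_arg a;

		if (copy_from_user(&a, (void __user *)arg, sizeof(a)))
			return -EFAULT;

		/* validate before anything is allocated or copied based
		 * on these user-controlled values */
		if (!a.msgs || a.nmsgs == 0 || a.nmsgs > DEMO_MAX_MSGS)
			return -EINVAL;

		/* ... copy in a.nmsgs messages and process them ... */
		return 0;
	}
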
+diff --git a/drivers/iio/accel/Kconfig b/drivers/iio/accel/Kconfig
+index 2e0c62c391550..8acf277b8b258 100644
+--- a/drivers/iio/accel/Kconfig
++++ b/drivers/iio/accel/Kconfig
+@@ -211,7 +211,6 @@ config DMARD10
+ config HID_SENSOR_ACCEL_3D
+ depends on HID_SENSOR_HUB
+ select IIO_BUFFER
+- select IIO_TRIGGERED_BUFFER
+ select HID_SENSOR_IIO_COMMON
+ select HID_SENSOR_IIO_TRIGGER
+ tristate "HID Accelerometers 3D"
+diff --git a/drivers/iio/common/hid-sensors/Kconfig b/drivers/iio/common/hid-sensors/Kconfig
+index 24d4925673363..2a3dd3b907bee 100644
+--- a/drivers/iio/common/hid-sensors/Kconfig
++++ b/drivers/iio/common/hid-sensors/Kconfig
+@@ -19,6 +19,7 @@ config HID_SENSOR_IIO_TRIGGER
+ tristate "Common module (trigger) for all HID Sensor IIO drivers"
+ depends on HID_SENSOR_HUB && HID_SENSOR_IIO_COMMON && IIO_BUFFER
+ select IIO_TRIGGER
++ select IIO_TRIGGERED_BUFFER
+ help
+ Say yes here to build trigger support for HID sensors.
+	  Triggers will be sent if all requested attributes were read.
+diff --git a/drivers/iio/gyro/Kconfig b/drivers/iio/gyro/Kconfig
+index 5824f2edf9758..20b5ac7ab66af 100644
+--- a/drivers/iio/gyro/Kconfig
++++ b/drivers/iio/gyro/Kconfig
+@@ -111,7 +111,6 @@ config FXAS21002C_SPI
+ config HID_SENSOR_GYRO_3D
+ depends on HID_SENSOR_HUB
+ select IIO_BUFFER
+- select IIO_TRIGGERED_BUFFER
+ select HID_SENSOR_IIO_COMMON
+ select HID_SENSOR_IIO_TRIGGER
+ tristate "HID Gyroscope 3D"
+diff --git a/drivers/iio/gyro/mpu3050-core.c b/drivers/iio/gyro/mpu3050-core.c
+index ac90be03332af..f17a935195352 100644
+--- a/drivers/iio/gyro/mpu3050-core.c
++++ b/drivers/iio/gyro/mpu3050-core.c
+@@ -272,7 +272,16 @@ static int mpu3050_read_raw(struct iio_dev *indio_dev,
+ case IIO_CHAN_INFO_OFFSET:
+ switch (chan->type) {
+ case IIO_TEMP:
+- /* The temperature scaling is (x+23000)/280 Celsius */
++ /*
++ * The temperature scaling is (x+23000)/280 Celsius
++ * for the "best fit straight line" temperature range
++ * of -30C..85C. The 23000 includes room temperature
++ * offset of +35C, 280 is the precision scale and x is
++ * the 16-bit signed integer reported by hardware.
++ *
++ * Temperature value itself represents temperature of
++ * the sensor die.
++ */
+ *val = 23000;
+ return IIO_VAL_INT;
+ default:
+@@ -329,7 +338,7 @@ static int mpu3050_read_raw(struct iio_dev *indio_dev,
+ goto out_read_raw_unlock;
+ }
+
+- *val = be16_to_cpu(raw_val);
++ *val = (s16)be16_to_cpu(raw_val);
+ ret = IIO_VAL_INT;
+
+ goto out_read_raw_unlock;
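
Editor's note: the one-character mpu3050 fix above matters because be16_to_cpu() returns an unsigned 16-bit value; assigned straight into an int it is zero-extended, so negative temperatures read back as large positives. Casting through s16 first keeps the sign bit. A sketch using the conversion from the comment above (assumed from that comment, not verified against the datasheet):

	#include <linux/types.h>
	#include <asm/byteorder.h>

	static int demo_raw_to_millicelsius(__be16 raw)
	{
		int x = (s16)be16_to_cpu(raw);	/* sign-extend, not zero-extend */

		/* (x + 23000) / 280 is whole degrees C per the comment
		 * above; scale by 1000 first for millidegrees */
		return (x + 23000) * 1000 / 280;
	}
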
+diff --git a/drivers/iio/humidity/Kconfig b/drivers/iio/humidity/Kconfig
+index 6549fcf6db698..2de5494e7c225 100644
+--- a/drivers/iio/humidity/Kconfig
++++ b/drivers/iio/humidity/Kconfig
+@@ -52,7 +52,6 @@ config HID_SENSOR_HUMIDITY
+ tristate "HID Environmental humidity sensor"
+ depends on HID_SENSOR_HUB
+ select IIO_BUFFER
+- select IIO_TRIGGERED_BUFFER
+ select HID_SENSOR_IIO_COMMON
+ select HID_SENSOR_IIO_TRIGGER
+ help
+diff --git a/drivers/iio/industrialio-core.c b/drivers/iio/industrialio-core.c
+index c2e4c267c36b2..4b3ecae0ae123 100644
+--- a/drivers/iio/industrialio-core.c
++++ b/drivers/iio/industrialio-core.c
+@@ -1698,7 +1698,6 @@ static long iio_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+ if (!indio_dev->info)
+ goto out_unlock;
+
+- ret = -EINVAL;
+ list_for_each_entry(h, &iio_dev_opaque->ioctl_handlers, entry) {
+ ret = h->ioctl(indio_dev, filp, cmd, arg);
+ if (ret != IIO_IOCTL_UNHANDLED)
+@@ -1706,7 +1705,7 @@ static long iio_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+ }
+
+ if (ret == IIO_IOCTL_UNHANDLED)
+- ret = -EINVAL;
++ ret = -ENODEV;
+
+ out_unlock:
+ mutex_unlock(&indio_dev->info_exist_lock);
+@@ -1828,9 +1827,6 @@ EXPORT_SYMBOL(__iio_device_register);
+ **/
+ void iio_device_unregister(struct iio_dev *indio_dev)
+ {
+- struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(indio_dev);
+- struct iio_ioctl_handler *h, *t;
+-
+ cdev_device_del(&indio_dev->chrdev, &indio_dev->dev);
+
+ mutex_lock(&indio_dev->info_exist_lock);
+@@ -1841,9 +1837,6 @@ void iio_device_unregister(struct iio_dev *indio_dev)
+
+ indio_dev->info = NULL;
+
+- list_for_each_entry_safe(h, t, &iio_dev_opaque->ioctl_handlers, entry)
+- list_del(&h->entry);
+-
+ iio_device_wakeup_eventset(indio_dev);
+ iio_buffer_wakeup_poll(indio_dev);
+
+diff --git a/drivers/iio/light/Kconfig b/drivers/iio/light/Kconfig
+index 33ad4dd0b5c7b..917f9becf9c75 100644
+--- a/drivers/iio/light/Kconfig
++++ b/drivers/iio/light/Kconfig
+@@ -256,7 +256,6 @@ config ISL29125
+ config HID_SENSOR_ALS
+ depends on HID_SENSOR_HUB
+ select IIO_BUFFER
+- select IIO_TRIGGERED_BUFFER
+ select HID_SENSOR_IIO_COMMON
+ select HID_SENSOR_IIO_TRIGGER
+ tristate "HID ALS"
+@@ -270,7 +269,6 @@ config HID_SENSOR_ALS
+ config HID_SENSOR_PROX
+ depends on HID_SENSOR_HUB
+ select IIO_BUFFER
+- select IIO_TRIGGERED_BUFFER
+ select HID_SENSOR_IIO_COMMON
+ select HID_SENSOR_IIO_TRIGGER
+ tristate "HID PROX"
+diff --git a/drivers/iio/light/gp2ap002.c b/drivers/iio/light/gp2ap002.c
+index 7ba7aa59437c3..040d8429a6e00 100644
+--- a/drivers/iio/light/gp2ap002.c
++++ b/drivers/iio/light/gp2ap002.c
+@@ -583,7 +583,7 @@ static int gp2ap002_probe(struct i2c_client *client,
+ "gp2ap002", indio_dev);
+ if (ret) {
+ dev_err(dev, "unable to request IRQ\n");
+- goto out_disable_vio;
++ goto out_put_pm;
+ }
+ gp2ap002->irq = client->irq;
+
+@@ -613,8 +613,9 @@ static int gp2ap002_probe(struct i2c_client *client,
+
+ return 0;
+
+-out_disable_pm:
++out_put_pm:
+ pm_runtime_put_noidle(dev);
++out_disable_pm:
+ pm_runtime_disable(dev);
+ out_disable_vio:
+ regulator_disable(gp2ap002->vio);
+diff --git a/drivers/iio/light/tsl2583.c b/drivers/iio/light/tsl2583.c
+index 9e5490b7473bd..40b7dd266b314 100644
+--- a/drivers/iio/light/tsl2583.c
++++ b/drivers/iio/light/tsl2583.c
+@@ -341,6 +341,14 @@ static int tsl2583_als_calibrate(struct iio_dev *indio_dev)
+ return lux_val;
+ }
+
++	/* Avoid division by zero when lux_val is used as divisor later on */
++ if (lux_val == 0) {
++ dev_err(&chip->client->dev,
++ "%s: lux_val of 0 will produce out of range trim_value\n",
++ __func__);
++ return -ENODATA;
++ }
++
+ gain_trim_val = (unsigned int)(((chip->als_settings.als_cal_target)
+ * chip->als_settings.als_gain_trim) / lux_val);
+ if ((gain_trim_val < 250) || (gain_trim_val > 4000)) {
+diff --git a/drivers/iio/magnetometer/Kconfig b/drivers/iio/magnetometer/Kconfig
+index 1697a8c03506c..7e9489a355714 100644
+--- a/drivers/iio/magnetometer/Kconfig
++++ b/drivers/iio/magnetometer/Kconfig
+@@ -95,7 +95,6 @@ config MAG3110
+ config HID_SENSOR_MAGNETOMETER_3D
+ depends on HID_SENSOR_HUB
+ select IIO_BUFFER
+- select IIO_TRIGGERED_BUFFER
+ select HID_SENSOR_IIO_COMMON
+ select HID_SENSOR_IIO_TRIGGER
+	tristate "HID Magnetometer 3D"
+diff --git a/drivers/iio/orientation/Kconfig b/drivers/iio/orientation/Kconfig
+index a505583cc2fda..396cbbb867f4c 100644
+--- a/drivers/iio/orientation/Kconfig
++++ b/drivers/iio/orientation/Kconfig
+@@ -9,7 +9,6 @@ menu "Inclinometer sensors"
+ config HID_SENSOR_INCLINOMETER_3D
+ depends on HID_SENSOR_HUB
+ select IIO_BUFFER
+- select IIO_TRIGGERED_BUFFER
+ select HID_SENSOR_IIO_COMMON
+ select HID_SENSOR_IIO_TRIGGER
+ tristate "HID Inclinometer 3D"
+@@ -20,7 +19,6 @@ config HID_SENSOR_INCLINOMETER_3D
+ config HID_SENSOR_DEVICE_ROTATION
+ depends on HID_SENSOR_HUB
+ select IIO_BUFFER
+- select IIO_TRIGGERED_BUFFER
+ select HID_SENSOR_IIO_COMMON
+ select HID_SENSOR_IIO_TRIGGER
+ tristate "HID Device Rotation"
+diff --git a/drivers/iio/pressure/Kconfig b/drivers/iio/pressure/Kconfig
+index 689b978db4f95..fc0d3cfca4186 100644
+--- a/drivers/iio/pressure/Kconfig
++++ b/drivers/iio/pressure/Kconfig
+@@ -79,7 +79,6 @@ config DPS310
+ config HID_SENSOR_PRESS
+ depends on HID_SENSOR_HUB
+ select IIO_BUFFER
+- select IIO_TRIGGERED_BUFFER
+ select HID_SENSOR_IIO_COMMON
+ select HID_SENSOR_IIO_TRIGGER
+ tristate "HID PRESS"
+diff --git a/drivers/iio/proximity/pulsedlight-lidar-lite-v2.c b/drivers/iio/proximity/pulsedlight-lidar-lite-v2.c
+index c685f10b5ae48..cc206bfa09c78 100644
+--- a/drivers/iio/proximity/pulsedlight-lidar-lite-v2.c
++++ b/drivers/iio/proximity/pulsedlight-lidar-lite-v2.c
+@@ -160,6 +160,7 @@ static int lidar_get_measurement(struct lidar_data *data, u16 *reg)
+ ret = lidar_write_control(data, LIDAR_REG_CONTROL_ACQUIRE);
+ if (ret < 0) {
+ dev_err(&client->dev, "cannot send start measurement command");
++ pm_runtime_put_noidle(&client->dev);
+ return ret;
+ }
+
+diff --git a/drivers/iio/temperature/Kconfig b/drivers/iio/temperature/Kconfig
+index f1f2a1499c9e2..4df60082c1fa8 100644
+--- a/drivers/iio/temperature/Kconfig
++++ b/drivers/iio/temperature/Kconfig
+@@ -45,7 +45,6 @@ config HID_SENSOR_TEMP
+ tristate "HID Environmental temperature sensor"
+ depends on HID_SENSOR_HUB
+ select IIO_BUFFER
+- select IIO_TRIGGERED_BUFFER
+ select HID_SENSOR_IIO_COMMON
+ select HID_SENSOR_IIO_TRIGGER
+ help
+diff --git a/drivers/infiniband/hw/hfi1/ipoib.h b/drivers/infiniband/hw/hfi1/ipoib.h
+index f650cac9d424c..d30c23b6527aa 100644
+--- a/drivers/infiniband/hw/hfi1/ipoib.h
++++ b/drivers/infiniband/hw/hfi1/ipoib.h
+@@ -52,8 +52,9 @@ union hfi1_ipoib_flow {
+ * @producer_lock: producer sync lock
+ * @consumer_lock: consumer sync lock
+ */
++struct ipoib_txreq;
+ struct hfi1_ipoib_circ_buf {
+- void **items;
++ struct ipoib_txreq **items;
+ unsigned long head;
+ unsigned long tail;
+ unsigned long max_items;
+diff --git a/drivers/infiniband/hw/hfi1/ipoib_tx.c b/drivers/infiniband/hw/hfi1/ipoib_tx.c
+index edd4eeac8dd1d..cdc26ee3cf52d 100644
+--- a/drivers/infiniband/hw/hfi1/ipoib_tx.c
++++ b/drivers/infiniband/hw/hfi1/ipoib_tx.c
+@@ -702,14 +702,14 @@ int hfi1_ipoib_txreq_init(struct hfi1_ipoib_dev_priv *priv)
+
+ priv->tx_napis = kcalloc_node(dev->num_tx_queues,
+ sizeof(struct napi_struct),
+- GFP_ATOMIC,
++ GFP_KERNEL,
+ priv->dd->node);
+ if (!priv->tx_napis)
+ goto free_txreq_cache;
+
+ priv->txqs = kcalloc_node(dev->num_tx_queues,
+ sizeof(struct hfi1_ipoib_txq),
+- GFP_ATOMIC,
++ GFP_KERNEL,
+ priv->dd->node);
+ if (!priv->txqs)
+ goto free_tx_napis;
+@@ -741,9 +741,9 @@ int hfi1_ipoib_txreq_init(struct hfi1_ipoib_dev_priv *priv)
+ priv->dd->node);
+
+ txq->tx_ring.items =
+- vzalloc_node(array_size(tx_ring_size,
+- sizeof(struct ipoib_txreq)),
+- priv->dd->node);
++ kcalloc_node(tx_ring_size,
++ sizeof(struct ipoib_txreq *),
++ GFP_KERNEL, priv->dd->node);
+ if (!txq->tx_ring.items)
+ goto free_txqs;
+
+@@ -764,7 +764,7 @@ free_txqs:
+ struct hfi1_ipoib_txq *txq = &priv->txqs[i];
+
+ netif_napi_del(txq->napi);
+- vfree(txq->tx_ring.items);
++ kfree(txq->tx_ring.items);
+ }
+
+ kfree(priv->txqs);
+@@ -817,7 +817,7 @@ void hfi1_ipoib_txreq_deinit(struct hfi1_ipoib_dev_priv *priv)
+ hfi1_ipoib_drain_tx_list(txq);
+ netif_napi_del(txq->napi);
+ (void)hfi1_ipoib_drain_tx_ring(txq, txq->tx_ring.max_items);
+- vfree(txq->tx_ring.items);
++ kfree(txq->tx_ring.items);
+ }
+
+ kfree(priv->txqs);
+diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
+index 9846b01a52140..735ad74e2c8f3 100644
+--- a/drivers/iommu/amd/init.c
++++ b/drivers/iommu/amd/init.c
+@@ -12,7 +12,6 @@
+ #include <linux/acpi.h>
+ #include <linux/list.h>
+ #include <linux/bitmap.h>
+-#include <linux/delay.h>
+ #include <linux/slab.h>
+ #include <linux/syscore_ops.h>
+ #include <linux/interrupt.h>
+@@ -255,8 +254,6 @@ static enum iommu_init_state init_state = IOMMU_START_STATE;
+ static int amd_iommu_enable_interrupts(void);
+ static int __init iommu_go_to_state(enum iommu_init_state state);
+ static void init_device_table_dma(void);
+-static int iommu_pc_get_set_reg(struct amd_iommu *iommu, u8 bank, u8 cntr,
+- u8 fxn, u64 *value, bool is_write);
+
+ static bool amd_iommu_pre_enabled = true;
+
+@@ -1715,53 +1712,16 @@ static int __init init_iommu_all(struct acpi_table_header *table)
+ return 0;
+ }
+
+-static void __init init_iommu_perf_ctr(struct amd_iommu *iommu)
++static void init_iommu_perf_ctr(struct amd_iommu *iommu)
+ {
+- int retry;
++ u64 val;
+ struct pci_dev *pdev = iommu->dev;
+- u64 val = 0xabcd, val2 = 0, save_reg, save_src;
+
+ if (!iommu_feature(iommu, FEATURE_PC))
+ return;
+
+ amd_iommu_pc_present = true;
+
+- /* save the value to restore, if writable */
+- if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &save_reg, false) ||
+- iommu_pc_get_set_reg(iommu, 0, 0, 8, &save_src, false))
+- goto pc_false;
+-
+- /*
+- * Disable power gating by programing the performance counter
+- * source to 20 (i.e. counts the reads and writes from/to IOMMU
+- * Reserved Register [MMIO Offset 1FF8h] that are ignored.),
+- * which never get incremented during this init phase.
+- * (Note: The event is also deprecated.)
+- */
+- val = 20;
+- if (iommu_pc_get_set_reg(iommu, 0, 0, 8, &val, true))
+- goto pc_false;
+-
+- /* Check if the performance counters can be written to */
+- val = 0xabcd;
+- for (retry = 5; retry; retry--) {
+- if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &val, true) ||
+- iommu_pc_get_set_reg(iommu, 0, 0, 0, &val2, false) ||
+- val2)
+- break;
+-
+- /* Wait about 20 msec for power gating to disable and retry. */
+- msleep(20);
+- }
+-
+- /* restore */
+- if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &save_reg, true) ||
+- iommu_pc_get_set_reg(iommu, 0, 0, 8, &save_src, true))
+- goto pc_false;
+-
+- if (val != val2)
+- goto pc_false;
+-
+ pci_info(pdev, "IOMMU performance counters supported\n");
+
+ val = readl(iommu->mmio_base + MMIO_CNTR_CONF_OFFSET);
+@@ -1769,11 +1729,6 @@ static void __init init_iommu_perf_ctr(struct amd_iommu *iommu)
+ iommu->max_counters = (u8) ((val >> 7) & 0xf);
+
+ return;
+-
+-pc_false:
+- pci_err(pdev, "Unable to read/write to IOMMU perf counter.\n");
+- amd_iommu_pc_present = false;
+- return;
+ }
+
+ static ssize_t amd_iommu_show_cap(struct device *dev,
+diff --git a/drivers/net/can/m_can/m_can.c b/drivers/net/can/m_can/m_can.c
+index 44b3f4b3aea5c..f6c135d0a35fb 100644
+--- a/drivers/net/can/m_can/m_can.c
++++ b/drivers/net/can/m_can/m_can.c
+@@ -1455,6 +1455,8 @@ static netdev_tx_t m_can_tx_handler(struct m_can_classdev *cdev)
+ int i;
+ int putidx;
+
++ cdev->tx_skb = NULL;
++
+ /* Generate ID field for TX buffer Element */
+ /* Common to all supported M_CAN versions */
+ if (cf->can_id & CAN_EFF_FLAG) {
+@@ -1571,7 +1573,6 @@ static void m_can_tx_work_queue(struct work_struct *ws)
+ tx_work);
+
+ m_can_tx_handler(cdev);
+- cdev->tx_skb = NULL;
+ }
+
+ static netdev_tx_t m_can_start_xmit(struct sk_buff *skb,
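
Editor's note: the m_can fix clears cdev->tx_skb at the top of the handler instead of after it returns, so the transmit path, which treats a non-NULL tx_skb as "slot busy", cannot keep seeing a stale pointer after the frame has been handed over. The ownership-transfer pattern, sketched with hypothetical names:

	#include <linux/skbuff.h>

	struct demo_dev {
		struct sk_buff *tx_skb;		/* non-NULL means "slot busy" */
	};

	/* hypothetical: hand the frame to the controller */
	static void demo_hw_send(struct demo_dev *d, struct sk_buff *skb)
	{
	}

	static void demo_tx_handler(struct demo_dev *d)
	{
		struct sk_buff *skb = d->tx_skb;

		d->tx_skb = NULL;		/* free the slot before slow work */
		demo_hw_send(d, skb);
	}
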
+diff --git a/drivers/net/can/spi/mcp251x.c b/drivers/net/can/spi/mcp251x.c
+index e7be36dc2159a..24ae221c2f107 100644
+--- a/drivers/net/can/spi/mcp251x.c
++++ b/drivers/net/can/spi/mcp251x.c
+@@ -956,8 +956,6 @@ static int mcp251x_stop(struct net_device *net)
+
+ priv->force_quit = 1;
+ free_irq(spi->irq, priv);
+- destroy_workqueue(priv->wq);
+- priv->wq = NULL;
+
+ mutex_lock(&priv->mcp_lock);
+
+@@ -1224,24 +1222,15 @@ static int mcp251x_open(struct net_device *net)
+ goto out_close;
+ }
+
+- priv->wq = alloc_workqueue("mcp251x_wq", WQ_FREEZABLE | WQ_MEM_RECLAIM,
+- 0);
+- if (!priv->wq) {
+- ret = -ENOMEM;
+- goto out_clean;
+- }
+- INIT_WORK(&priv->tx_work, mcp251x_tx_work_handler);
+- INIT_WORK(&priv->restart_work, mcp251x_restart_work_handler);
+-
+ ret = mcp251x_hw_wake(spi);
+ if (ret)
+- goto out_free_wq;
++ goto out_free_irq;
+ ret = mcp251x_setup(net, spi);
+ if (ret)
+- goto out_free_wq;
++ goto out_free_irq;
+ ret = mcp251x_set_normal_mode(spi);
+ if (ret)
+- goto out_free_wq;
++ goto out_free_irq;
+
+ can_led_event(net, CAN_LED_EVENT_OPEN);
+
+@@ -1250,9 +1239,7 @@ static int mcp251x_open(struct net_device *net)
+
+ return 0;
+
+-out_free_wq:
+- destroy_workqueue(priv->wq);
+-out_clean:
++out_free_irq:
+ free_irq(spi->irq, priv);
+ mcp251x_hw_sleep(spi);
+ out_close:
+@@ -1373,6 +1360,15 @@ static int mcp251x_can_probe(struct spi_device *spi)
+ if (ret)
+ goto out_clk;
+
++ priv->wq = alloc_workqueue("mcp251x_wq", WQ_FREEZABLE | WQ_MEM_RECLAIM,
++ 0);
++ if (!priv->wq) {
++ ret = -ENOMEM;
++ goto out_clk;
++ }
++ INIT_WORK(&priv->tx_work, mcp251x_tx_work_handler);
++ INIT_WORK(&priv->restart_work, mcp251x_restart_work_handler);
++
+ priv->spi = spi;
+ mutex_init(&priv->mcp_lock);
+
+@@ -1417,6 +1413,8 @@ static int mcp251x_can_probe(struct spi_device *spi)
+ return 0;
+
+ error_probe:
++ destroy_workqueue(priv->wq);
++ priv->wq = NULL;
+ mcp251x_power_enable(priv->power, 0);
+
+ out_clk:
+@@ -1438,6 +1436,9 @@ static int mcp251x_can_remove(struct spi_device *spi)
+
+ mcp251x_power_enable(priv->power, 0);
+
++ destroy_workqueue(priv->wq);
++ priv->wq = NULL;
++
+ clk_disable_unprepare(priv->clk);
+
+ free_candev(net);
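
Editor's note: the mcp251x rework above moves workqueue creation from open() to probe() and destruction from stop() to remove(), so restart work queued from error handling can never race against a queue that an open/stop cycle is tearing down. The lifecycle pairing, sketched:

	#include <linux/errno.h>
	#include <linux/workqueue.h>

	static struct workqueue_struct *demo_wq;

	static int demo_probe(void)
	{
		demo_wq = alloc_workqueue("demo_wq",
					  WQ_FREEZABLE | WQ_MEM_RECLAIM, 0);
		if (!demo_wq)
			return -ENOMEM;
		return 0;
	}

	static void demo_remove(void)
	{
		destroy_workqueue(demo_wq);	/* drains queued work first */
		demo_wq = NULL;
	}
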
+diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
+index ee39e79927efb..486dbd3357aaa 100644
+--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
+@@ -2947,10 +2947,12 @@ static int mcp251xfd_probe(struct spi_device *spi)
+
+ err = mcp251xfd_register(priv);
+ if (err)
+- goto out_free_candev;
++ goto out_can_rx_offload_del;
+
+ return 0;
+
++ out_can_rx_offload_del:
++ can_rx_offload_del(&priv->offload);
+ out_free_candev:
+ spi->max_speed_hz = priv->spi_max_speed_hz_orig;
+
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index f3c659bc6bb68..87406d85d9145 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -122,7 +122,10 @@ enum board_idx {
+ NETXTREME_E_VF,
+ NETXTREME_C_VF,
+ NETXTREME_S_VF,
++ NETXTREME_C_VF_HV,
++ NETXTREME_E_VF_HV,
+ NETXTREME_E_P5_VF,
++ NETXTREME_E_P5_VF_HV,
+ };
+
+ /* indexed by enum above */
+@@ -170,7 +173,10 @@ static const struct {
+ [NETXTREME_E_VF] = { "Broadcom NetXtreme-E Ethernet Virtual Function" },
+ [NETXTREME_C_VF] = { "Broadcom NetXtreme-C Ethernet Virtual Function" },
+ [NETXTREME_S_VF] = { "Broadcom NetXtreme-S Ethernet Virtual Function" },
++ [NETXTREME_C_VF_HV] = { "Broadcom NetXtreme-C Virtual Function for Hyper-V" },
++ [NETXTREME_E_VF_HV] = { "Broadcom NetXtreme-E Virtual Function for Hyper-V" },
+ [NETXTREME_E_P5_VF] = { "Broadcom BCM5750X NetXtreme-E Ethernet Virtual Function" },
++ [NETXTREME_E_P5_VF_HV] = { "Broadcom BCM5750X NetXtreme-E Virtual Function for Hyper-V" },
+ };
+
+ static const struct pci_device_id bnxt_pci_tbl[] = {
+@@ -222,15 +228,25 @@ static const struct pci_device_id bnxt_pci_tbl[] = {
+ { PCI_VDEVICE(BROADCOM, 0xd804), .driver_data = BCM58804 },
+ #ifdef CONFIG_BNXT_SRIOV
+ { PCI_VDEVICE(BROADCOM, 0x1606), .driver_data = NETXTREME_E_VF },
++ { PCI_VDEVICE(BROADCOM, 0x1607), .driver_data = NETXTREME_E_VF_HV },
++ { PCI_VDEVICE(BROADCOM, 0x1608), .driver_data = NETXTREME_E_VF_HV },
+ { PCI_VDEVICE(BROADCOM, 0x1609), .driver_data = NETXTREME_E_VF },
++ { PCI_VDEVICE(BROADCOM, 0x16bd), .driver_data = NETXTREME_E_VF_HV },
+ { PCI_VDEVICE(BROADCOM, 0x16c1), .driver_data = NETXTREME_E_VF },
++ { PCI_VDEVICE(BROADCOM, 0x16c2), .driver_data = NETXTREME_C_VF_HV },
++ { PCI_VDEVICE(BROADCOM, 0x16c3), .driver_data = NETXTREME_C_VF_HV },
++ { PCI_VDEVICE(BROADCOM, 0x16c4), .driver_data = NETXTREME_E_VF_HV },
++ { PCI_VDEVICE(BROADCOM, 0x16c5), .driver_data = NETXTREME_E_VF_HV },
+ { PCI_VDEVICE(BROADCOM, 0x16cb), .driver_data = NETXTREME_C_VF },
+ { PCI_VDEVICE(BROADCOM, 0x16d3), .driver_data = NETXTREME_E_VF },
+ { PCI_VDEVICE(BROADCOM, 0x16dc), .driver_data = NETXTREME_E_VF },
+ { PCI_VDEVICE(BROADCOM, 0x16e1), .driver_data = NETXTREME_C_VF },
+ { PCI_VDEVICE(BROADCOM, 0x16e5), .driver_data = NETXTREME_C_VF },
++ { PCI_VDEVICE(BROADCOM, 0x16e6), .driver_data = NETXTREME_C_VF_HV },
+ { PCI_VDEVICE(BROADCOM, 0x1806), .driver_data = NETXTREME_E_P5_VF },
+ { PCI_VDEVICE(BROADCOM, 0x1807), .driver_data = NETXTREME_E_P5_VF },
++ { PCI_VDEVICE(BROADCOM, 0x1808), .driver_data = NETXTREME_E_P5_VF_HV },
++ { PCI_VDEVICE(BROADCOM, 0x1809), .driver_data = NETXTREME_E_P5_VF_HV },
+ { PCI_VDEVICE(BROADCOM, 0xd800), .driver_data = NETXTREME_S_VF },
+ #endif
+ { 0 }
+@@ -263,7 +279,8 @@ static struct workqueue_struct *bnxt_pf_wq;
+ static bool bnxt_vf_pciid(enum board_idx idx)
+ {
+ return (idx == NETXTREME_C_VF || idx == NETXTREME_E_VF ||
+- idx == NETXTREME_S_VF || idx == NETXTREME_E_P5_VF);
++ idx == NETXTREME_S_VF || idx == NETXTREME_C_VF_HV ||
++ idx == NETXTREME_E_VF_HV || idx == NETXTREME_E_P5_VF);
+ }
+
+ #define DB_CP_REARM_FLAGS (DB_KEY_CP | DB_IDX_VALID)
+diff --git a/drivers/net/ethernet/cisco/enic/enic_main.c b/drivers/net/ethernet/cisco/enic/enic_main.c
+index fb269d587b741..548d8095c0a79 100644
+--- a/drivers/net/ethernet/cisco/enic/enic_main.c
++++ b/drivers/net/ethernet/cisco/enic/enic_main.c
+@@ -768,7 +768,7 @@ static inline int enic_queue_wq_skb_encap(struct enic *enic, struct vnic_wq *wq,
+ return err;
+ }
+
+-static inline void enic_queue_wq_skb(struct enic *enic,
++static inline int enic_queue_wq_skb(struct enic *enic,
+ struct vnic_wq *wq, struct sk_buff *skb)
+ {
+ unsigned int mss = skb_shinfo(skb)->gso_size;
+@@ -814,6 +814,7 @@ static inline void enic_queue_wq_skb(struct enic *enic,
+ wq->to_use = buf->next;
+ dev_kfree_skb(skb);
+ }
++ return err;
+ }
+
+ /* netif_tx_lock held, process context with BHs disabled, or BH */
+@@ -857,7 +858,8 @@ static netdev_tx_t enic_hard_start_xmit(struct sk_buff *skb,
+ return NETDEV_TX_BUSY;
+ }
+
+- enic_queue_wq_skb(enic, wq, skb);
++ if (enic_queue_wq_skb(enic, wq, skb))
++ goto error;
+
+ if (vnic_wq_desc_avail(wq) < MAX_SKB_FRAGS + ENIC_DESC_MAX_SPLITS)
+ netif_tx_stop_queue(txq);
+@@ -865,6 +867,7 @@ static netdev_tx_t enic_hard_start_xmit(struct sk_buff *skb,
+ if (!netdev_xmit_more() || netif_xmit_stopped(txq))
+ vnic_wq_doorbell(wq);
+
++error:
+ spin_unlock(&enic->wq_lock[txq_map]);
+
+ return NETDEV_TX_OK;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index c8a43a725ebcc..3b8074e83476f 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -576,8 +576,8 @@ static int hns3_nic_net_stop(struct net_device *netdev)
+ if (h->ae_algo->ops->set_timer_task)
+ h->ae_algo->ops->set_timer_task(priv->ae_handle, false);
+
+- netif_tx_stop_all_queues(netdev);
+ netif_carrier_off(netdev);
++ netif_tx_disable(netdev);
+
+ hns3_nic_net_down(netdev);
+
+@@ -823,7 +823,7 @@ static int hns3_get_l4_protocol(struct sk_buff *skb, u8 *ol4_proto,
+ * and it is udp packet, which has a dest port as the IANA assigned.
+ * the hardware is expected to do the checksum offload, but the
+ * hardware will not do the checksum offload when udp dest port is
+- * 4789 or 6081.
++ * 4789, 4790 or 6081.
+ */
+ static bool hns3_tunnel_csum_bug(struct sk_buff *skb)
+ {
+@@ -841,7 +841,8 @@ static bool hns3_tunnel_csum_bug(struct sk_buff *skb)
+
+ if (!(!skb->encapsulation &&
+ (l4.udp->dest == htons(IANA_VXLAN_UDP_PORT) ||
+- l4.udp->dest == htons(GENEVE_UDP_PORT))))
++ l4.udp->dest == htons(GENEVE_UDP_PORT) ||
++ l4.udp->dest == htons(4790))))
+ return false;
+
+ skb_checksum_help(skb);
+@@ -1277,23 +1278,21 @@ static unsigned int hns3_skb_bd_num(struct sk_buff *skb, unsigned int *bd_size,
+ }
+
+ static unsigned int hns3_tx_bd_num(struct sk_buff *skb, unsigned int *bd_size,
+- u8 max_non_tso_bd_num)
++ u8 max_non_tso_bd_num, unsigned int bd_num,
++ unsigned int recursion_level)
+ {
++#define HNS3_MAX_RECURSION_LEVEL 24
++
+ struct sk_buff *frag_skb;
+- unsigned int bd_num = 0;
+
+ /* If the total len is within the max bd limit */
+- if (likely(skb->len <= HNS3_MAX_BD_SIZE && !skb_has_frag_list(skb) &&
++ if (likely(skb->len <= HNS3_MAX_BD_SIZE && !recursion_level &&
++ !skb_has_frag_list(skb) &&
+ skb_shinfo(skb)->nr_frags < max_non_tso_bd_num))
+ return skb_shinfo(skb)->nr_frags + 1U;
+
+- /* The below case will always be linearized, return
+- * HNS3_MAX_BD_NUM_TSO + 1U to make sure it is linearized.
+- */
+- if (unlikely(skb->len > HNS3_MAX_TSO_SIZE ||
+- (!skb_is_gso(skb) && skb->len >
+- HNS3_MAX_NON_TSO_SIZE(max_non_tso_bd_num))))
+- return HNS3_MAX_TSO_BD_NUM + 1U;
++ if (unlikely(recursion_level >= HNS3_MAX_RECURSION_LEVEL))
++ return UINT_MAX;
+
+ bd_num = hns3_skb_bd_num(skb, bd_size, bd_num);
+
+@@ -1301,7 +1300,8 @@ static unsigned int hns3_tx_bd_num(struct sk_buff *skb, unsigned int *bd_size,
+ return bd_num;
+
+ skb_walk_frags(skb, frag_skb) {
+- bd_num = hns3_skb_bd_num(frag_skb, bd_size, bd_num);
++ bd_num = hns3_tx_bd_num(frag_skb, bd_size, max_non_tso_bd_num,
++ bd_num, recursion_level + 1);
+ if (bd_num > HNS3_MAX_TSO_BD_NUM)
+ return bd_num;
+ }
+@@ -1361,6 +1361,43 @@ void hns3_shinfo_pack(struct skb_shared_info *shinfo, __u32 *size)
+ size[i] = skb_frag_size(&shinfo->frags[i]);
+ }
+
++static int hns3_skb_linearize(struct hns3_enet_ring *ring,
++ struct sk_buff *skb,
++ u8 max_non_tso_bd_num,
++ unsigned int bd_num)
++{
++	/* 'bd_num == UINT_MAX' means the skb's fraglist has a
++ * recursion level of over HNS3_MAX_RECURSION_LEVEL.
++ */
++ if (bd_num == UINT_MAX) {
++ u64_stats_update_begin(&ring->syncp);
++ ring->stats.over_max_recursion++;
++ u64_stats_update_end(&ring->syncp);
++ return -ENOMEM;
++ }
++
++ /* The skb->len has exceeded the hw limitation, linearization
++ * will not help.
++ */
++ if (skb->len > HNS3_MAX_TSO_SIZE ||
++ (!skb_is_gso(skb) && skb->len >
++ HNS3_MAX_NON_TSO_SIZE(max_non_tso_bd_num))) {
++ u64_stats_update_begin(&ring->syncp);
++ ring->stats.hw_limitation++;
++ u64_stats_update_end(&ring->syncp);
++ return -ENOMEM;
++ }
++
++ if (__skb_linearize(skb)) {
++ u64_stats_update_begin(&ring->syncp);
++ ring->stats.sw_err_cnt++;
++ u64_stats_update_end(&ring->syncp);
++ return -ENOMEM;
++ }
++
++ return 0;
++}
++
+ static int hns3_nic_maybe_stop_tx(struct hns3_enet_ring *ring,
+ struct net_device *netdev,
+ struct sk_buff *skb)
+@@ -1370,7 +1407,7 @@ static int hns3_nic_maybe_stop_tx(struct hns3_enet_ring *ring,
+ unsigned int bd_size[HNS3_MAX_TSO_BD_NUM + 1U];
+ unsigned int bd_num;
+
+- bd_num = hns3_tx_bd_num(skb, bd_size, max_non_tso_bd_num);
++ bd_num = hns3_tx_bd_num(skb, bd_size, max_non_tso_bd_num, 0, 0);
+ if (unlikely(bd_num > max_non_tso_bd_num)) {
+ if (bd_num <= HNS3_MAX_TSO_BD_NUM && skb_is_gso(skb) &&
+ !hns3_skb_need_linearized(skb, bd_size, bd_num,
+@@ -1379,16 +1416,11 @@ static int hns3_nic_maybe_stop_tx(struct hns3_enet_ring *ring,
+ goto out;
+ }
+
+- if (__skb_linearize(skb))
++ if (hns3_skb_linearize(ring, skb, max_non_tso_bd_num,
++ bd_num))
+ return -ENOMEM;
+
+ bd_num = hns3_tx_bd_count(skb->len);
+- if ((skb_is_gso(skb) && bd_num > HNS3_MAX_TSO_BD_NUM) ||
+- (!skb_is_gso(skb) &&
+- bd_num > max_non_tso_bd_num)) {
+- trace_hns3_over_max_bd(skb);
+- return -ENOMEM;
+- }
+
+ u64_stats_update_begin(&ring->syncp);
+ ring->stats.tx_copy++;
+@@ -1412,6 +1444,10 @@ out:
+ return bd_num;
+ }
+
++ u64_stats_update_begin(&ring->syncp);
++ ring->stats.tx_busy++;
++ u64_stats_update_end(&ring->syncp);
++
+ return -EBUSY;
+ }
+
+@@ -1459,6 +1495,7 @@ static int hns3_fill_skb_to_desc(struct hns3_enet_ring *ring,
+ struct sk_buff *skb, enum hns_desc_type type)
+ {
+ unsigned int size = skb_headlen(skb);
++ struct sk_buff *frag_skb;
+ int i, ret, bd_num = 0;
+
+ if (size) {
+@@ -1483,6 +1520,15 @@ static int hns3_fill_skb_to_desc(struct hns3_enet_ring *ring,
+ bd_num += ret;
+ }
+
++ skb_walk_frags(skb, frag_skb) {
++ ret = hns3_fill_skb_to_desc(ring, frag_skb,
++ DESC_TYPE_FRAGLIST_SKB);
++ if (unlikely(ret < 0))
++ return ret;
++
++ bd_num += ret;
++ }
++
+ return bd_num;
+ }
+
+@@ -1513,8 +1559,6 @@ netdev_tx_t hns3_nic_net_xmit(struct sk_buff *skb, struct net_device *netdev)
+ struct hns3_enet_ring *ring = &priv->ring[skb->queue_mapping];
+ struct netdev_queue *dev_queue;
+ int pre_ntu, next_to_use_head;
+- struct sk_buff *frag_skb;
+- int bd_num = 0;
+ bool doorbell;
+ int ret;
+
+@@ -1530,15 +1574,8 @@ netdev_tx_t hns3_nic_net_xmit(struct sk_buff *skb, struct net_device *netdev)
+ ret = hns3_nic_maybe_stop_tx(ring, netdev, skb);
+ if (unlikely(ret <= 0)) {
+ if (ret == -EBUSY) {
+- u64_stats_update_begin(&ring->syncp);
+- ring->stats.tx_busy++;
+- u64_stats_update_end(&ring->syncp);
+ hns3_tx_doorbell(ring, 0, true);
+ return NETDEV_TX_BUSY;
+- } else if (ret == -ENOMEM) {
+- u64_stats_update_begin(&ring->syncp);
+- ring->stats.sw_err_cnt++;
+- u64_stats_update_end(&ring->syncp);
+ }
+
+ hns3_rl_err(netdev, "xmit error: %d!\n", ret);
+@@ -1551,21 +1588,14 @@ netdev_tx_t hns3_nic_net_xmit(struct sk_buff *skb, struct net_device *netdev)
+ if (unlikely(ret < 0))
+ goto fill_err;
+
++	/* 'ret < 0' means a filling error, 'ret == 0' means skb->len is
++	 * zero (which is unlikely), and 'ret > 0' is the number of tx
++	 * descriptors that need to be notified to the hw.
++	 */
+ ret = hns3_fill_skb_to_desc(ring, skb, DESC_TYPE_SKB);
+- if (unlikely(ret < 0))
++ if (unlikely(ret <= 0))
+ goto fill_err;
+
+- bd_num += ret;
+-
+- skb_walk_frags(skb, frag_skb) {
+- ret = hns3_fill_skb_to_desc(ring, frag_skb,
+- DESC_TYPE_FRAGLIST_SKB);
+- if (unlikely(ret < 0))
+- goto fill_err;
+-
+- bd_num += ret;
+- }
+-
+ pre_ntu = ring->next_to_use ? (ring->next_to_use - 1) :
+ (ring->desc_num - 1);
+ ring->desc[pre_ntu].tx.bdtp_fe_sc_vld_ra_ri |=
+@@ -1576,7 +1606,7 @@ netdev_tx_t hns3_nic_net_xmit(struct sk_buff *skb, struct net_device *netdev)
+ dev_queue = netdev_get_tx_queue(netdev, ring->queue_index);
+ doorbell = __netdev_tx_sent_queue(dev_queue, skb->len,
+ netdev_xmit_more());
+- hns3_tx_doorbell(ring, bd_num, doorbell);
++ hns3_tx_doorbell(ring, ret, doorbell);
+
+ return NETDEV_TX_OK;
+
+@@ -1748,11 +1778,15 @@ static void hns3_nic_get_stats64(struct net_device *netdev,
+ tx_drop += ring->stats.tx_l4_proto_err;
+ tx_drop += ring->stats.tx_l2l3l4_err;
+ tx_drop += ring->stats.tx_tso_err;
++ tx_drop += ring->stats.over_max_recursion;
++ tx_drop += ring->stats.hw_limitation;
+ tx_errors += ring->stats.sw_err_cnt;
+ tx_errors += ring->stats.tx_vlan_err;
+ tx_errors += ring->stats.tx_l4_proto_err;
+ tx_errors += ring->stats.tx_l2l3l4_err;
+ tx_errors += ring->stats.tx_tso_err;
++ tx_errors += ring->stats.over_max_recursion;
++ tx_errors += ring->stats.hw_limitation;
+ } while (u64_stats_fetch_retry_irq(&ring->syncp, start));
+
+ /* fetch the rx stats */
+@@ -4579,6 +4613,11 @@ static int hns3_reset_notify_up_enet(struct hnae3_handle *handle)
+ struct hns3_nic_priv *priv = netdev_priv(kinfo->netdev);
+ int ret = 0;
+
++ if (!test_bit(HNS3_NIC_STATE_INITED, &priv->state)) {
++ netdev_err(kinfo->netdev, "device is not initialized yet\n");
++ return -EFAULT;
++ }
++
+ clear_bit(HNS3_NIC_STATE_RESETTING, &priv->state);
+
+ if (netif_running(kinfo->netdev)) {
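
The hns3 series above is worth a second look: hns3_tx_bd_num() now recurses over the skb fraglist instead of counting only one level, caps the recursion at 24 levels, and returns UINT_MAX as an over-limit sentinel that hns3_skb_linearize() converts into a drop plus the new over_max_recursion counter. A self-contained sketch of that bounded-recursion pattern over a toy fragment tree (hypothetical types, not the kernel's):

	#include <limits.h>
	#include <stdio.h>

	#define MAX_RECURSION_LEVEL 24

	struct frag {			/* stand-in for a chained skb */
		unsigned int nr_bds;
		struct frag *children;	/* fraglist */
		struct frag *next;	/* sibling on the parent's list */
	};

	static unsigned int count_bds(const struct frag *f, unsigned int bd_num,
				      unsigned int level)
	{
		if (level >= MAX_RECURSION_LEVEL)
			return UINT_MAX;	/* sentinel: caller drops the skb */

		bd_num += f->nr_bds;
		for (const struct frag *c = f->children; c; c = c->next) {
			bd_num = count_bds(c, bd_num, level + 1);
			if (bd_num == UINT_MAX)
				return UINT_MAX;
		}
		return bd_num;
	}

	int main(void)
	{
		struct frag leaf = { 1, NULL, NULL };
		struct frag root = { 2, &leaf, NULL };

		printf("bds=%u\n", count_bds(&root, 0, 0));	/* bds=3 */
		return 0;
	}
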
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
+index 0a7b606e7c938..0b531e107e264 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
+@@ -377,6 +377,8 @@ struct ring_stats {
+ u64 tx_l4_proto_err;
+ u64 tx_l2l3l4_err;
+ u64 tx_tso_err;
++ u64 over_max_recursion;
++ u64 hw_limitation;
+ };
+ struct {
+ u64 rx_pkts;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+index e2fc443fe92ca..7276cfaa8c3b8 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+@@ -44,6 +44,8 @@ static const struct hns3_stats hns3_txq_stats[] = {
+ HNS3_TQP_STAT("l4_proto_err", tx_l4_proto_err),
+ HNS3_TQP_STAT("l2l3l4_err", tx_l2l3l4_err),
+ HNS3_TQP_STAT("tso_err", tx_tso_err),
++ HNS3_TQP_STAT("over_max_recursion", over_max_recursion),
++ HNS3_TQP_STAT("hw_limitation", hw_limitation),
+ };
+
+ #define HNS3_TXQ_STATS_COUNT ARRAY_SIZE(hns3_txq_stats)
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
+index 9ee55ee0487d9..3226ca1761556 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
+@@ -753,8 +753,9 @@ static int hclge_config_igu_egu_hw_err_int(struct hclge_dev *hdev, bool en)
+
+ /* configure IGU,EGU error interrupts */
+ hclge_cmd_setup_basic_desc(&desc, HCLGE_IGU_COMMON_INT_EN, false);
++ desc.data[0] = cpu_to_le32(HCLGE_IGU_ERR_INT_TYPE);
+ if (en)
+- desc.data[0] = cpu_to_le32(HCLGE_IGU_ERR_INT_EN);
++ desc.data[0] |= cpu_to_le32(HCLGE_IGU_ERR_INT_EN);
+
+ desc.data[1] = cpu_to_le32(HCLGE_IGU_ERR_INT_EN_MASK);
+
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.h
+index 608fe26fc3fed..d647f3c841345 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.h
+@@ -32,7 +32,8 @@
+ #define HCLGE_TQP_ECC_ERR_INT_EN_MASK 0x0FFF
+ #define HCLGE_MSIX_SRAM_ECC_ERR_INT_EN_MASK 0x0F000000
+ #define HCLGE_MSIX_SRAM_ECC_ERR_INT_EN 0x0F000000
+-#define HCLGE_IGU_ERR_INT_EN 0x0000066F
++#define HCLGE_IGU_ERR_INT_EN 0x0000000F
++#define HCLGE_IGU_ERR_INT_TYPE 0x00000660
+ #define HCLGE_IGU_ERR_INT_EN_MASK 0x000F
+ #define HCLGE_IGU_TNL_ERR_INT_EN 0x0002AABF
+ #define HCLGE_IGU_TNL_ERR_INT_EN_MASK 0x003F
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index 67764d9304355..1c13cf34ae9f6 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -11284,7 +11284,6 @@ static int hclge_get_64_bit_regs(struct hclge_dev *hdev, u32 regs_num,
+ #define REG_LEN_PER_LINE (REG_NUM_PER_LINE * sizeof(u32))
+ #define REG_SEPARATOR_LINE 1
+ #define REG_NUM_REMAIN_MASK 3
+-#define BD_LIST_MAX_NUM 30
+
+ int hclge_query_bd_num_cmd_send(struct hclge_dev *hdev, struct hclge_desc *desc)
+ {
+@@ -11378,15 +11377,19 @@ static int hclge_get_dfx_reg_len(struct hclge_dev *hdev, int *len)
+ {
+ u32 dfx_reg_type_num = ARRAY_SIZE(hclge_dfx_bd_offset_list);
+ int data_len_per_desc, bd_num, i;
+- int bd_num_list[BD_LIST_MAX_NUM];
++ int *bd_num_list;
+ u32 data_len;
+ int ret;
+
++ bd_num_list = kcalloc(dfx_reg_type_num, sizeof(int), GFP_KERNEL);
++ if (!bd_num_list)
++ return -ENOMEM;
++
+ ret = hclge_get_dfx_reg_bd_num(hdev, bd_num_list, dfx_reg_type_num);
+ if (ret) {
+ dev_err(&hdev->pdev->dev,
+ "Get dfx reg bd num fail, status is %d.\n", ret);
+- return ret;
++ goto out;
+ }
+
+ data_len_per_desc = sizeof_field(struct hclge_desc, data);
+@@ -11397,6 +11400,8 @@ static int hclge_get_dfx_reg_len(struct hclge_dev *hdev, int *len)
+ *len += (data_len / REG_LEN_PER_LINE + 1) * REG_LEN_PER_LINE;
+ }
+
++out:
++ kfree(bd_num_list);
+ return ret;
+ }
+
+@@ -11404,16 +11409,20 @@ static int hclge_get_dfx_reg(struct hclge_dev *hdev, void *data)
+ {
+ u32 dfx_reg_type_num = ARRAY_SIZE(hclge_dfx_bd_offset_list);
+ int bd_num, bd_num_max, buf_len, i;
+- int bd_num_list[BD_LIST_MAX_NUM];
+ struct hclge_desc *desc_src;
++ int *bd_num_list;
+ u32 *reg = data;
+ int ret;
+
++ bd_num_list = kcalloc(dfx_reg_type_num, sizeof(int), GFP_KERNEL);
++ if (!bd_num_list)
++ return -ENOMEM;
++
+ ret = hclge_get_dfx_reg_bd_num(hdev, bd_num_list, dfx_reg_type_num);
+ if (ret) {
+ dev_err(&hdev->pdev->dev,
+ "Get dfx reg bd num fail, status is %d.\n", ret);
+- return ret;
++ goto out;
+ }
+
+ bd_num_max = bd_num_list[0];
+@@ -11422,8 +11431,10 @@ static int hclge_get_dfx_reg(struct hclge_dev *hdev, void *data)
+
+ buf_len = sizeof(*desc_src) * bd_num_max;
+ desc_src = kzalloc(buf_len, GFP_KERNEL);
+- if (!desc_src)
+- return -ENOMEM;
++ if (!desc_src) {
++ ret = -ENOMEM;
++ goto out;
++ }
+
+ for (i = 0; i < dfx_reg_type_num; i++) {
+ bd_num = bd_num_list[i];
+@@ -11439,6 +11450,8 @@ static int hclge_get_dfx_reg(struct hclge_dev *hdev, void *data)
+ }
+
+ kfree(desc_src);
++out:
++ kfree(bd_num_list);
+ return ret;
+ }
+
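
Dropping BD_LIST_MAX_NUM in the two hclge hunks above trades a fixed 30-entry stack array for a kcalloc() sized from dfx_reg_type_num, so every exit path must now release the buffer; the patch funnels all failures through a single out: label. The same shape in portable C, with stand-in names:

	#include <stdio.h>
	#include <stdlib.h>

	static int fill(int *list, size_t n)	/* stand-in for the firmware query */
	{
		for (size_t i = 0; i < n; i++)
			list[i] = (int)i + 1;
		return 0;			/* 0 on success, negative on error */
	}

	static int get_total(size_t n, long *total)
	{
		int *bd_num_list;
		int ret;

		bd_num_list = calloc(n, sizeof(*bd_num_list));
		if (!bd_num_list)
			return -1;

		ret = fill(bd_num_list, n);
		if (ret)
			goto out;		/* one cleanup path, freed on every exit */

		*total = 0;
		for (size_t i = 0; i < n; i++)
			*total += bd_num_list[i];
	out:
		free(bd_num_list);
		return ret;
	}

	int main(void)
	{
		long total;

		if (!get_total(4, &total))
			printf("total=%ld\n", total);	/* total=10 */
		return 0;
	}
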
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
+index ffb416e088a97..cdd77430e4def 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
+@@ -535,7 +535,7 @@ static void hclge_get_link_mode(struct hclge_vport *vport,
+ unsigned long advertising;
+ unsigned long supported;
+ unsigned long send_data;
+- u8 msg_data[10];
++ u8 msg_data[10] = {};
+ u8 dest_vfid;
+
+ advertising = hdev->hw.mac.advertising[0];
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c
+index e898207025406..c194bba187d6c 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c
+@@ -255,6 +255,8 @@ void hclge_mac_start_phy(struct hclge_dev *hdev)
+ if (!phydev)
+ return;
+
++ phy_loopback(phydev, false);
++
+ phy_start(phydev);
+ }
+
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
+index 1e960c3c7ef05..e84054fb8213d 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
++++ b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
+@@ -1565,8 +1565,10 @@ enum i40e_aq_phy_type {
+ I40E_PHY_TYPE_25GBASE_LR = 0x22,
+ I40E_PHY_TYPE_25GBASE_AOC = 0x23,
+ I40E_PHY_TYPE_25GBASE_ACC = 0x24,
+- I40E_PHY_TYPE_2_5GBASE_T = 0x30,
+- I40E_PHY_TYPE_5GBASE_T = 0x31,
++ I40E_PHY_TYPE_2_5GBASE_T = 0x26,
++ I40E_PHY_TYPE_5GBASE_T = 0x27,
++ I40E_PHY_TYPE_2_5GBASE_T_LINK_STATUS = 0x30,
++ I40E_PHY_TYPE_5GBASE_T_LINK_STATUS = 0x31,
+ I40E_PHY_TYPE_MAX,
+ I40E_PHY_TYPE_NOT_SUPPORTED_HIGH_TEMP = 0xFD,
+ I40E_PHY_TYPE_EMPTY = 0xFE,
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_client.c b/drivers/net/ethernet/intel/i40e/i40e_client.c
+index a2dba32383f63..32f3facbed1a5 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_client.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_client.c
+@@ -375,6 +375,7 @@ void i40e_client_subtask(struct i40e_pf *pf)
+ clear_bit(__I40E_CLIENT_INSTANCE_OPENED,
+ &cdev->state);
+ i40e_client_del_instance(pf);
++ return;
+ }
+ }
+ }
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_common.c b/drivers/net/ethernet/intel/i40e/i40e_common.c
+index adc9e4fa47891..ba109073d6052 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_common.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_common.c
+@@ -1154,8 +1154,8 @@ static enum i40e_media_type i40e_get_media_type(struct i40e_hw *hw)
+ break;
+ case I40E_PHY_TYPE_100BASE_TX:
+ case I40E_PHY_TYPE_1000BASE_T:
+- case I40E_PHY_TYPE_2_5GBASE_T:
+- case I40E_PHY_TYPE_5GBASE_T:
++ case I40E_PHY_TYPE_2_5GBASE_T_LINK_STATUS:
++ case I40E_PHY_TYPE_5GBASE_T_LINK_STATUS:
+ case I40E_PHY_TYPE_10GBASE_T:
+ media = I40E_MEDIA_TYPE_BASET;
+ break;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+index 31d48a85cfaf0..5d48bc0c3f6c4 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+@@ -841,8 +841,8 @@ static void i40e_get_settings_link_up(struct i40e_hw *hw,
+ 10000baseT_Full);
+ break;
+ case I40E_PHY_TYPE_10GBASE_T:
+- case I40E_PHY_TYPE_5GBASE_T:
+- case I40E_PHY_TYPE_2_5GBASE_T:
++ case I40E_PHY_TYPE_5GBASE_T_LINK_STATUS:
++ case I40E_PHY_TYPE_2_5GBASE_T_LINK_STATUS:
+ case I40E_PHY_TYPE_1000BASE_T:
+ case I40E_PHY_TYPE_100BASE_TX:
+ ethtool_link_ksettings_add_link_mode(ks, supported, Autoneg);
+@@ -1409,7 +1409,8 @@ static int i40e_set_fec_cfg(struct net_device *netdev, u8 fec_cfg)
+
+ memset(&config, 0, sizeof(config));
+ config.phy_type = abilities.phy_type;
+- config.abilities = abilities.abilities;
++ config.abilities = abilities.abilities |
++ I40E_AQ_PHY_ENABLE_ATOMIC_LINK;
+ config.phy_type_ext = abilities.phy_type_ext;
+ config.link_speed = abilities.link_speed;
+ config.eee_capability = abilities.eee_capability;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+index 92ce835bc79e3..c779512f44f46 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+@@ -1821,10 +1821,6 @@ static bool i40e_cleanup_headers(struct i40e_ring *rx_ring, struct sk_buff *skb,
+ union i40e_rx_desc *rx_desc)
+
+ {
+- /* XDP packets use error pointer so abort at this point */
+- if (IS_ERR(skb))
+- return true;
+-
+ /* ERR_MASK will only have valid bits if EOP set, and
+ * what we are doing here is actually checking
+ * I40E_RX_DESC_ERROR_RXE_SHIFT, since it is the zeroth bit in
+@@ -2437,7 +2433,7 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
+ }
+
+ /* exit if we failed to retrieve a buffer */
+- if (!skb) {
++ if (!xdp_res && !skb) {
+ rx_ring->rx_stats.alloc_buff_failed++;
+ rx_buffer->pagecnt_bias++;
+ break;
+@@ -2449,7 +2445,7 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
+ if (i40e_is_non_eop(rx_ring, rx_desc, skb))
+ continue;
+
+- if (i40e_cleanup_headers(rx_ring, skb, rx_desc)) {
++ if (xdp_res || i40e_cleanup_headers(rx_ring, skb, rx_desc)) {
+ skb = NULL;
+ continue;
+ }
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_type.h b/drivers/net/ethernet/intel/i40e/i40e_type.h
+index c0bdc666f5571..add67f7b73e8b 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_type.h
++++ b/drivers/net/ethernet/intel/i40e/i40e_type.h
+@@ -239,11 +239,8 @@ struct i40e_phy_info {
+ #define I40E_CAP_PHY_TYPE_25GBASE_ACC BIT_ULL(I40E_PHY_TYPE_25GBASE_ACC + \
+ I40E_PHY_TYPE_OFFSET)
+ /* Offset for 2.5G/5G PHY Types value to bit number conversion */
+-#define I40E_PHY_TYPE_OFFSET2 (-10)
+-#define I40E_CAP_PHY_TYPE_2_5GBASE_T BIT_ULL(I40E_PHY_TYPE_2_5GBASE_T + \
+- I40E_PHY_TYPE_OFFSET2)
+-#define I40E_CAP_PHY_TYPE_5GBASE_T BIT_ULL(I40E_PHY_TYPE_5GBASE_T + \
+- I40E_PHY_TYPE_OFFSET2)
++#define I40E_CAP_PHY_TYPE_2_5GBASE_T BIT_ULL(I40E_PHY_TYPE_2_5GBASE_T)
++#define I40E_CAP_PHY_TYPE_5GBASE_T BIT_ULL(I40E_PHY_TYPE_5GBASE_T)
+ #define I40E_HW_CAP_MAX_GPIO 30
+ /* Capabilities of a PF or a VF or the whole device */
+ struct i40e_hw_capabilities {
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index dc5b3c06d1e01..ebd08543791bd 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -3899,8 +3899,6 @@ static void iavf_remove(struct pci_dev *pdev)
+
+ iounmap(hw->hw_addr);
+ pci_release_regions(pdev);
+- iavf_free_all_tx_resources(adapter);
+- iavf_free_all_rx_resources(adapter);
+ iavf_free_queues(adapter);
+ kfree(adapter->vf_res);
+ spin_lock_bh(&adapter->mac_vlan_list_lock);
+diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
+index 170367eaa95aa..e1384503dd4d5 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_lib.c
+@@ -2684,38 +2684,46 @@ int ice_vsi_release(struct ice_vsi *vsi)
+ }
+
+ /**
+- * ice_vsi_rebuild_update_coalesce - set coalesce for a q_vector
++ * ice_vsi_rebuild_update_coalesce_intrl - set interrupt rate limit for a q_vector
+ * @q_vector: pointer to q_vector which is being updated
+- * @coalesce: pointer to array of struct with stored coalesce
++ * @stored_intrl_setting: original INTRL setting
+ *
+ * Set coalesce param in q_vector and update these parameters in HW.
+ */
+ static void
+-ice_vsi_rebuild_update_coalesce(struct ice_q_vector *q_vector,
+- struct ice_coalesce_stored *coalesce)
++ice_vsi_rebuild_update_coalesce_intrl(struct ice_q_vector *q_vector,
++ u16 stored_intrl_setting)
+ {
+- struct ice_ring_container *rx_rc = &q_vector->rx;
+- struct ice_ring_container *tx_rc = &q_vector->tx;
+ struct ice_hw *hw = &q_vector->vsi->back->hw;
+
+- tx_rc->itr_setting = coalesce->itr_tx;
+- rx_rc->itr_setting = coalesce->itr_rx;
+-
+- /* dynamic ITR values will be updated during Tx/Rx */
+- if (!ITR_IS_DYNAMIC(tx_rc->itr_setting))
+- wr32(hw, GLINT_ITR(tx_rc->itr_idx, q_vector->reg_idx),
+- ITR_REG_ALIGN(tx_rc->itr_setting) >>
+- ICE_ITR_GRAN_S);
+- if (!ITR_IS_DYNAMIC(rx_rc->itr_setting))
+- wr32(hw, GLINT_ITR(rx_rc->itr_idx, q_vector->reg_idx),
+- ITR_REG_ALIGN(rx_rc->itr_setting) >>
+- ICE_ITR_GRAN_S);
+-
+- q_vector->intrl = coalesce->intrl;
++ q_vector->intrl = stored_intrl_setting;
+ wr32(hw, GLINT_RATE(q_vector->reg_idx),
+ ice_intrl_usec_to_reg(q_vector->intrl, hw->intrl_gran));
+ }
+
++/**
++ * ice_vsi_rebuild_update_coalesce_itr - set coalesce for a q_vector
++ * @q_vector: pointer to q_vector which is being updated
++ * @rc: pointer to ring container
++ * @stored_itr_setting: original ITR setting
++ *
++ * Set coalesce param in q_vector and update these parameters in HW.
++ */
++static void
++ice_vsi_rebuild_update_coalesce_itr(struct ice_q_vector *q_vector,
++ struct ice_ring_container *rc,
++ u16 stored_itr_setting)
++{
++ struct ice_hw *hw = &q_vector->vsi->back->hw;
++
++ rc->itr_setting = stored_itr_setting;
++
++ /* dynamic ITR values will be updated during Tx/Rx */
++ if (!ITR_IS_DYNAMIC(rc->itr_setting))
++ wr32(hw, GLINT_ITR(rc->itr_idx, q_vector->reg_idx),
++ ITR_REG_ALIGN(rc->itr_setting) >> ICE_ITR_GRAN_S);
++}
++
+ /**
+ * ice_vsi_rebuild_get_coalesce - get coalesce from all q_vectors
+ * @vsi: VSI connected with q_vectors
+@@ -2735,6 +2743,11 @@ ice_vsi_rebuild_get_coalesce(struct ice_vsi *vsi,
+ coalesce[i].itr_tx = q_vector->tx.itr_setting;
+ coalesce[i].itr_rx = q_vector->rx.itr_setting;
+ coalesce[i].intrl = q_vector->intrl;
++
++ if (i < vsi->num_txq)
++ coalesce[i].tx_valid = true;
++ if (i < vsi->num_rxq)
++ coalesce[i].rx_valid = true;
+ }
+
+ return vsi->num_q_vectors;
+@@ -2759,17 +2772,59 @@ ice_vsi_rebuild_set_coalesce(struct ice_vsi *vsi,
+ if ((size && !coalesce) || !vsi)
+ return;
+
+- for (i = 0; i < size && i < vsi->num_q_vectors; i++)
+- ice_vsi_rebuild_update_coalesce(vsi->q_vectors[i],
+- &coalesce[i]);
+-
+- /* number of q_vectors increased, so assume coalesce settings were
+- * changed globally (i.e. ethtool -C eth0 instead of per-queue) and use
+- * the previous settings from q_vector 0 for all of the new q_vectors
++ /* There are a couple of cases that have to be handled here:
++ * 1. The case where the number of queue vectors stays the same, but
++ * the number of Tx or Rx rings changes (the first for loop)
++ * 2. The case where the number of queue vectors increased (the
++ * second for loop)
+ */
+- for (; i < vsi->num_q_vectors; i++)
+- ice_vsi_rebuild_update_coalesce(vsi->q_vectors[i],
+- &coalesce[0]);
++ for (i = 0; i < size && i < vsi->num_q_vectors; i++) {
++ /* There are 2 cases to handle here and they are the same for
++ * both Tx and Rx:
++	 * if the entry was valid previously (coalesce[i].[tr]x_valid)
++	 * and the loop variable is less than the number of rings
++	 * allocated, then write the previous values
++	 *
++	 * if the entry was not valid previously but the loop variable
++	 * is less than the number of rings allocated (i.e. the ring
++	 * count increased from before), then write out the values
++	 * from the first element
++ */
++ if (i < vsi->alloc_rxq && coalesce[i].rx_valid)
++ ice_vsi_rebuild_update_coalesce_itr(vsi->q_vectors[i],
++ &vsi->q_vectors[i]->rx,
++ coalesce[i].itr_rx);
++ else if (i < vsi->alloc_rxq)
++ ice_vsi_rebuild_update_coalesce_itr(vsi->q_vectors[i],
++ &vsi->q_vectors[i]->rx,
++ coalesce[0].itr_rx);
++
++ if (i < vsi->alloc_txq && coalesce[i].tx_valid)
++ ice_vsi_rebuild_update_coalesce_itr(vsi->q_vectors[i],
++ &vsi->q_vectors[i]->tx,
++ coalesce[i].itr_tx);
++ else if (i < vsi->alloc_txq)
++ ice_vsi_rebuild_update_coalesce_itr(vsi->q_vectors[i],
++ &vsi->q_vectors[i]->tx,
++ coalesce[0].itr_tx);
++
++ ice_vsi_rebuild_update_coalesce_intrl(vsi->q_vectors[i],
++ coalesce[i].intrl);
++ }
++
++ /* the number of queue vectors increased so write whatever is in
++ * the first element
++ */
++ for (; i < vsi->num_q_vectors; i++) {
++ ice_vsi_rebuild_update_coalesce_itr(vsi->q_vectors[i],
++ &vsi->q_vectors[i]->tx,
++ coalesce[0].itr_tx);
++ ice_vsi_rebuild_update_coalesce_itr(vsi->q_vectors[i],
++ &vsi->q_vectors[i]->rx,
++ coalesce[0].itr_rx);
++ ice_vsi_rebuild_update_coalesce_intrl(vsi->q_vectors[i],
++ coalesce[0].intrl);
++ }
+ }
+
+ /**
+@@ -2798,9 +2853,11 @@ int ice_vsi_rebuild(struct ice_vsi *vsi, bool init_vsi)
+
+ coalesce = kcalloc(vsi->num_q_vectors,
+ sizeof(struct ice_coalesce_stored), GFP_KERNEL);
+- if (coalesce)
+- prev_num_q_vectors = ice_vsi_rebuild_get_coalesce(vsi,
+- coalesce);
++ if (!coalesce)
++ return -ENOMEM;
++
++ prev_num_q_vectors = ice_vsi_rebuild_get_coalesce(vsi, coalesce);
++
+ ice_rm_vsi_lan_cfg(vsi->port_info, vsi->idx);
+ ice_vsi_free_q_vectors(vsi);
+
+diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h
+index ff1a1cbd078e7..eab7ceae926b3 100644
+--- a/drivers/net/ethernet/intel/ice/ice_txrx.h
++++ b/drivers/net/ethernet/intel/ice/ice_txrx.h
+@@ -351,6 +351,8 @@ struct ice_coalesce_stored {
+ u16 itr_tx;
+ u16 itr_rx;
+ u8 intrl;
++ u8 tx_valid;
++ u8 rx_valid;
+ };
+
+ /* iterator for handling rings in ring container */
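
The new tx_valid/rx_valid flags above let ice_vsi_rebuild_set_coalesce() restore a saved ITR only when that queue actually existed before the rebuild, and fall back to entry 0 for rings that are new. The restore decision in isolation, with hypothetical types:

	#include <stdbool.h>
	#include <stdio.h>

	struct saved { unsigned short itr; bool valid; };

	/* pick the ITR to program for ring i after a rebuild */
	static unsigned short restore_itr(const struct saved *prev, size_t prev_n,
					  size_t i)
	{
		if (i < prev_n && prev[i].valid)
			return prev[i].itr;	/* ring existed before: keep it */
		return prev[0].itr;		/* new ring: inherit the global value */
	}

	int main(void)
	{
		struct saved prev[] = { { 100, true }, { 200, true }, { 0, false } };

		for (size_t i = 0; i < 5; i++)
			printf("ring %zu -> %u\n", i, restore_itr(prev, 3, i));
		return 0;
	}
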
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+index 6d2d60675ffd7..d930fcda9c3b6 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+@@ -1319,7 +1319,7 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget,
+ skb->protocol = eth_type_trans(skb, netdev);
+
+ if (netdev->features & NETIF_F_HW_VLAN_CTAG_RX &&
+- RX_DMA_VID(trxd.rxd3))
++ (trxd.rxd2 & RX_DMA_VTAG))
+ __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
+ RX_DMA_VID(trxd.rxd3));
+ skb_record_rx_queue(skb, 0);
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
+index 454cfcd465fda..73ce1f0f307a4 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
+@@ -295,6 +295,7 @@
+ #define RX_DMA_LSO BIT(30)
+ #define RX_DMA_PLEN0(_x) (((_x) & 0x3fff) << 16)
+ #define RX_DMA_GET_PLEN0(_x) (((_x) >> 16) & 0x3fff)
++#define RX_DMA_VTAG BIT(15)
+
+ /* QDMA descriptor rxd3 */
+ #define RX_DMA_VID(_x) ((_x) & 0xfff)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+index 61ed671fe741b..1b3c93c3fd23c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+@@ -553,7 +553,7 @@ static void mlx5e_tx_mpwqe_session_start(struct mlx5e_txqsq *sq,
+
+ pi = mlx5e_txqsq_get_next_pi(sq, MLX5E_TX_MPW_MAX_WQEBBS);
+ wqe = MLX5E_TX_FETCH_WQE(sq, pi);
+- prefetchw(wqe->data);
++ net_prefetchw(wqe->data);
+
+ *session = (struct mlx5e_tx_mpwqe) {
+ .wqe = wqe,
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c
+index bf3250e0e59ca..749585fe6fc96 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c
+@@ -352,6 +352,8 @@ static int ipq806x_gmac_probe(struct platform_device *pdev)
+ plat_dat->bsp_priv = gmac;
+ plat_dat->fix_mac_speed = ipq806x_gmac_fix_mac_speed;
+ plat_dat->multicast_filter_bins = 0;
++ plat_dat->tx_fifo_size = 8192;
++ plat_dat->rx_fifo_size = 8192;
+
+ err = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
+ if (err)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
+index 29f765a246a05..aaf37598cbd3c 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
+@@ -638,6 +638,7 @@ static void dwmac4_set_filter(struct mac_device_info *hw,
+ value &= ~GMAC_PACKET_FILTER_PCF;
+ value &= ~GMAC_PACKET_FILTER_PM;
+ value &= ~GMAC_PACKET_FILTER_PR;
++ value &= ~GMAC_PACKET_FILTER_RA;
+ if (dev->flags & IFF_PROMISC) {
+ /* VLAN Tag Filter Fail Packets Queuing */
+ if (hw->vlan_fail_q_en) {
+diff --git a/drivers/net/ipa/gsi.c b/drivers/net/ipa/gsi.c
+index febfac75dd6a1..537853b9301bd 100644
+--- a/drivers/net/ipa/gsi.c
++++ b/drivers/net/ipa/gsi.c
+@@ -205,8 +205,8 @@ static void gsi_irq_setup(struct gsi *gsi)
+ iowrite32(0, gsi->virt + GSI_CNTXT_SRC_IEOB_IRQ_MSK_OFFSET);
+
+ /* The inter-EE registers are in the non-adjusted address range */
+- iowrite32(0, gsi->virt_raw + GSI_INTER_EE_SRC_CH_IRQ_OFFSET);
+- iowrite32(0, gsi->virt_raw + GSI_INTER_EE_SRC_EV_CH_IRQ_OFFSET);
++ iowrite32(0, gsi->virt_raw + GSI_INTER_EE_SRC_CH_IRQ_MSK_OFFSET);
++ iowrite32(0, gsi->virt_raw + GSI_INTER_EE_SRC_EV_CH_IRQ_MSK_OFFSET);
+
+ iowrite32(0, gsi->virt + GSI_CNTXT_GSI_IRQ_EN_OFFSET);
+ }
+diff --git a/drivers/net/ipa/gsi_reg.h b/drivers/net/ipa/gsi_reg.h
+index 1622d8cf8dea4..48ef04afab79f 100644
+--- a/drivers/net/ipa/gsi_reg.h
++++ b/drivers/net/ipa/gsi_reg.h
+@@ -53,15 +53,15 @@
+ #define GSI_EE_REG_ADJUST 0x0000d000 /* IPA v4.5+ */
+
+ /* The two inter-EE IRQ register offsets are relative to gsi->virt_raw */
+-#define GSI_INTER_EE_SRC_CH_IRQ_OFFSET \
+- GSI_INTER_EE_N_SRC_CH_IRQ_OFFSET(GSI_EE_AP)
+-#define GSI_INTER_EE_N_SRC_CH_IRQ_OFFSET(ee) \
+- (0x0000c018 + 0x1000 * (ee))
+-
+-#define GSI_INTER_EE_SRC_EV_CH_IRQ_OFFSET \
+- GSI_INTER_EE_N_SRC_EV_CH_IRQ_OFFSET(GSI_EE_AP)
+-#define GSI_INTER_EE_N_SRC_EV_CH_IRQ_OFFSET(ee) \
+- (0x0000c01c + 0x1000 * (ee))
++#define GSI_INTER_EE_SRC_CH_IRQ_MSK_OFFSET \
++ GSI_INTER_EE_N_SRC_CH_IRQ_MSK_OFFSET(GSI_EE_AP)
++#define GSI_INTER_EE_N_SRC_CH_IRQ_MSK_OFFSET(ee) \
++ (0x0000c020 + 0x1000 * (ee))
++
++#define GSI_INTER_EE_SRC_EV_CH_IRQ_MSK_OFFSET \
++ GSI_INTER_EE_N_SRC_EV_CH_IRQ_MSK_OFFSET(GSI_EE_AP)
++#define GSI_INTER_EE_N_SRC_EV_CH_IRQ_MSK_OFFSET(ee) \
++ (0x0000c024 + 0x1000 * (ee))
+
+ /* All other register offsets are relative to gsi->virt */
+ #define GSI_CH_C_CNTXT_0_OFFSET(ch) \
+diff --git a/drivers/net/wireless/ath/ath11k/wmi.c b/drivers/net/wireless/ath/ath11k/wmi.c
+index 73869d445c5b3..f457a089b63ca 100644
+--- a/drivers/net/wireless/ath/ath11k/wmi.c
++++ b/drivers/net/wireless/ath/ath11k/wmi.c
+@@ -5190,31 +5190,6 @@ int ath11k_wmi_pull_fw_stats(struct ath11k_base *ab, struct sk_buff *skb,
+ return 0;
+ }
+
+-static int
+-ath11k_pull_pdev_temp_ev(struct ath11k_base *ab, u8 *evt_buf,
+- u32 len, const struct wmi_pdev_temperature_event *ev)
+-{
+- const void **tb;
+- int ret;
+-
+- tb = ath11k_wmi_tlv_parse_alloc(ab, evt_buf, len, GFP_ATOMIC);
+- if (IS_ERR(tb)) {
+- ret = PTR_ERR(tb);
+- ath11k_warn(ab, "failed to parse tlv: %d\n", ret);
+- return ret;
+- }
+-
+- ev = tb[WMI_TAG_PDEV_TEMPERATURE_EVENT];
+- if (!ev) {
+- ath11k_warn(ab, "failed to fetch pdev temp ev");
+- kfree(tb);
+- return -EPROTO;
+- }
+-
+- kfree(tb);
+- return 0;
+-}
+-
+ size_t ath11k_wmi_fw_stats_num_vdevs(struct list_head *head)
+ {
+ struct ath11k_fw_stats_vdev *i;
+@@ -6622,23 +6597,37 @@ ath11k_wmi_pdev_temperature_event(struct ath11k_base *ab,
+ struct sk_buff *skb)
+ {
+ struct ath11k *ar;
+- struct wmi_pdev_temperature_event ev = {0};
++ const void **tb;
++ const struct wmi_pdev_temperature_event *ev;
++ int ret;
++
++ tb = ath11k_wmi_tlv_parse_alloc(ab, skb->data, skb->len, GFP_ATOMIC);
++ if (IS_ERR(tb)) {
++ ret = PTR_ERR(tb);
++ ath11k_warn(ab, "failed to parse tlv: %d\n", ret);
++ return;
++ }
+
+- if (ath11k_pull_pdev_temp_ev(ab, skb->data, skb->len, &ev) != 0) {
+- ath11k_warn(ab, "failed to extract pdev temperature event");
++ ev = tb[WMI_TAG_PDEV_TEMPERATURE_EVENT];
++ if (!ev) {
++ ath11k_warn(ab, "failed to fetch pdev temp ev");
++ kfree(tb);
+ return;
+ }
+
+ ath11k_dbg(ab, ATH11K_DBG_WMI,
+- "pdev temperature ev temp %d pdev_id %d\n", ev.temp, ev.pdev_id);
++ "pdev temperature ev temp %d pdev_id %d\n", ev->temp, ev->pdev_id);
+
+- ar = ath11k_mac_get_ar_by_pdev_id(ab, ev.pdev_id);
++ ar = ath11k_mac_get_ar_by_pdev_id(ab, ev->pdev_id);
+ if (!ar) {
+- ath11k_warn(ab, "invalid pdev id in pdev temperature ev %d", ev.pdev_id);
++ ath11k_warn(ab, "invalid pdev id in pdev temperature ev %d", ev->pdev_id);
++ kfree(tb);
+ return;
+ }
+
+- ath11k_thermal_event_temperature(ar, ev.temp);
++ ath11k_thermal_event_temperature(ar, ev->temp);
++
++ kfree(tb);
+ }
+
+ static void ath11k_fils_discovery_event(struct ath11k_base *ab,
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+index 018daa84ddd28..70752f0c67b0d 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+@@ -17,10 +17,20 @@
+ #include "iwl-prph.h"
+ #include "internal.h"
+
++#define TRANS_CFG_MARKER BIT(0)
++#define _IS_A(cfg, _struct) __builtin_types_compatible_p(typeof(cfg), \
++ struct _struct)
++extern int _invalid_type;
++#define _TRANS_CFG_MARKER(cfg) \
++ (__builtin_choose_expr(_IS_A(cfg, iwl_cfg_trans_params), \
++ TRANS_CFG_MARKER, \
++ __builtin_choose_expr(_IS_A(cfg, iwl_cfg), 0, _invalid_type)))
++#define _ASSIGN_CFG(cfg) (_TRANS_CFG_MARKER(cfg) + (kernel_ulong_t)&(cfg))
++
+ #define IWL_PCI_DEVICE(dev, subdev, cfg) \
+ .vendor = PCI_VENDOR_ID_INTEL, .device = (dev), \
+ .subvendor = PCI_ANY_ID, .subdevice = (subdev), \
+- .driver_data = (kernel_ulong_t)&(cfg)
++ .driver_data = _ASSIGN_CFG(cfg)
+
+ /* Hardware specific file defines the PCI IDs table for that hardware module */
+ static const struct pci_device_id iwl_hw_card_ids[] = {
+@@ -988,19 +998,22 @@ static const struct iwl_dev_info iwl_dev_info_table[] = {
+
+ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ {
+- const struct iwl_cfg_trans_params *trans =
+- (struct iwl_cfg_trans_params *)(ent->driver_data);
++ const struct iwl_cfg_trans_params *trans;
+ const struct iwl_cfg *cfg_7265d __maybe_unused = NULL;
+ struct iwl_trans *iwl_trans;
+ struct iwl_trans_pcie *trans_pcie;
+ int i, ret;
++ const struct iwl_cfg *cfg;
++
++ trans = (void *)(ent->driver_data & ~TRANS_CFG_MARKER);
++
+ /*
+ * This is needed for backwards compatibility with the old
+ * tables, so we don't need to change all the config structs
+ * at the same time. The cfg is used to compare with the old
+ * full cfg structs.
+ */
+- const struct iwl_cfg *cfg = (struct iwl_cfg *)(ent->driver_data);
++ cfg = (void *)(ent->driver_data & ~TRANS_CFG_MARKER);
+
+ /* make sure trans is the first element in iwl_cfg */
+ BUILD_BUG_ON(offsetof(struct iwl_cfg, trans));
+@@ -1102,11 +1115,19 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+
+ #endif
+ /*
+- * If we didn't set the cfg yet, assume the trans is actually
+- * a full cfg from the old tables.
++ * If we didn't set the cfg yet, the PCI ID table entry should have
++ * been a full config - if yes, use it, otherwise fail.
+ */
+- if (!iwl_trans->cfg)
++ if (!iwl_trans->cfg) {
++ if (ent->driver_data & TRANS_CFG_MARKER) {
++ pr_err("No config found for PCI dev %04x/%04x, rev=0x%x, rfid=0x%x\n",
++ pdev->device, pdev->subsystem_device,
++ iwl_trans->hw_rev, iwl_trans->hw_rf_id);
++ ret = -EINVAL;
++ goto out_free_trans;
++ }
+ iwl_trans->cfg = cfg;
++ }
+
+ /* if we don't have a name yet, copy name from the old cfg */
+ if (!iwl_trans->name)
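
The iwlwifi fix above is the subtlest in this batch: driver_data can point at either a bare struct iwl_cfg_trans_params or a full struct iwl_cfg, and the patch tags the former with bit 0 at compile time (via __builtin_types_compatible_p and __builtin_choose_expr) so that probe can mask the bit off and refuse devices that never got a full config. A standalone GCC/Clang sketch of that tagged-pointer idiom — illustrative names, and it omits the kernel's _invalid_type link-error guard for unexpected types:

	#include <stdint.h>
	#include <stdio.h>

	struct trans_params { int gen; };
	struct full_cfg { struct trans_params trans; const char *name; };

	#define MARKER 1UL
	#define IS_A(p, T) __builtin_types_compatible_p(typeof(p), T)
	/* compile-time: trans-only entries get bit 0, full configs get 0;
	 * assumes the objects are at least 2-byte aligned so bit 0 is free */
	#define ASSIGN_CFG(c)						\
		(__builtin_choose_expr(IS_A(c, struct trans_params),	\
				       MARKER, 0UL) + (uintptr_t)&(c))

	static const struct trans_params tp = { 2 };
	static const struct full_cfg fc = { { 3 }, "full" };

	int main(void)
	{
		uintptr_t entries[] = { ASSIGN_CFG(tp), ASSIGN_CFG(fc) };

		for (size_t i = 0; i < 2; i++) {
			/* both types start with trans_params, so the masked
			 * pointer can always be read as one */
			const struct trans_params *t =
				(const void *)(entries[i] & ~MARKER);

			printf("gen=%d trans_only=%lu\n", t->gen,
			       (unsigned long)(entries[i] & MARKER));
		}
		return 0;
	}
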
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
+index 08788bc906830..fd7398daaf65b 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+ /*
+ * Copyright (C) 2017 Intel Deutschland GmbH
+- * Copyright (C) 2018-2020 Intel Corporation
++ * Copyright (C) 2018-2021 Intel Corporation
+ */
+ #include "iwl-trans.h"
+ #include "iwl-prph.h"
+@@ -141,7 +141,7 @@ void _iwl_trans_pcie_gen2_stop_device(struct iwl_trans *trans)
+ if (test_and_clear_bit(STATUS_DEVICE_ENABLED, &trans->status)) {
+ IWL_DEBUG_INFO(trans,
+ "DEVICE_ENABLED bit was set and is now cleared\n");
+- iwl_txq_gen2_tx_stop(trans);
++ iwl_txq_gen2_tx_free(trans);
+ iwl_pcie_rx_stop(trans);
+ }
+
+diff --git a/drivers/net/wireless/intel/iwlwifi/queue/tx.c b/drivers/net/wireless/intel/iwlwifi/queue/tx.c
+index 7ff1bb0ccc9cd..cd5b06ce3e9c2 100644
+--- a/drivers/net/wireless/intel/iwlwifi/queue/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/queue/tx.c
+@@ -13,30 +13,6 @@
+ #include "iwl-scd.h"
+ #include <linux/dmapool.h>
+
+-/*
+- * iwl_txq_gen2_tx_stop - Stop all Tx DMA channels
+- */
+-void iwl_txq_gen2_tx_stop(struct iwl_trans *trans)
+-{
+- int txq_id;
+-
+- /*
+- * This function can be called before the op_mode disabled the
+- * queues. This happens when we have an rfkill interrupt.
+- * Since we stop Tx altogether - mark the queues as stopped.
+- */
+- memset(trans->txqs.queue_stopped, 0,
+- sizeof(trans->txqs.queue_stopped));
+- memset(trans->txqs.queue_used, 0, sizeof(trans->txqs.queue_used));
+-
+- /* Unmap DMA from host system and free skb's */
+- for (txq_id = 0; txq_id < ARRAY_SIZE(trans->txqs.txq); txq_id++) {
+- if (!trans->txqs.txq[txq_id])
+- continue;
+- iwl_txq_gen2_unmap(trans, txq_id);
+- }
+-}
+-
+ /*
+ * iwl_txq_update_byte_tbl - Set up entry in Tx byte-count array
+ */
+@@ -1189,6 +1165,12 @@ static int iwl_txq_alloc_response(struct iwl_trans *trans, struct iwl_txq *txq,
+ goto error_free_resp;
+ }
+
++ if (WARN_ONCE(trans->txqs.txq[qid],
++ "queue %d already allocated\n", qid)) {
++ ret = -EIO;
++ goto error_free_resp;
++ }
++
+ txq->id = qid;
+ trans->txqs.txq[qid] = txq;
+ wr_ptr &= (trans->trans_cfg->base_params->max_tfd_queue_size - 1);
+diff --git a/drivers/net/wireless/intel/iwlwifi/queue/tx.h b/drivers/net/wireless/intel/iwlwifi/queue/tx.h
+index cff694c25cccf..d32256d78917d 100644
+--- a/drivers/net/wireless/intel/iwlwifi/queue/tx.h
++++ b/drivers/net/wireless/intel/iwlwifi/queue/tx.h
+@@ -1,6 +1,6 @@
+ /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
+ /*
+- * Copyright (C) 2020 Intel Corporation
++ * Copyright (C) 2020-2021 Intel Corporation
+ */
+ #ifndef __iwl_trans_queue_tx_h__
+ #define __iwl_trans_queue_tx_h__
+@@ -123,7 +123,6 @@ int iwl_txq_gen2_tx(struct iwl_trans *trans, struct sk_buff *skb,
+ void iwl_txq_dyn_free(struct iwl_trans *trans, int queue);
+ void iwl_txq_gen2_free_tfd(struct iwl_trans *trans, struct iwl_txq *txq);
+ void iwl_txq_inc_wr_ptr(struct iwl_trans *trans, struct iwl_txq *txq);
+-void iwl_txq_gen2_tx_stop(struct iwl_trans *trans);
+ void iwl_txq_gen2_tx_free(struct iwl_trans *trans);
+ int iwl_txq_init(struct iwl_trans *trans, struct iwl_txq *txq, int slots_num,
+ bool cmd_queue);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76.h b/drivers/net/wireless/mediatek/mt76/mt76.h
+index 5da6b74687ed6..7a551811d2034 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76.h
+@@ -222,6 +222,7 @@ struct mt76_wcid {
+
+ u16 idx;
+ u8 hw_key_idx;
++ u8 hw_key_idx2;
+
+ u8 sta:1;
+ u8 ext_phy:1;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/eeprom.c b/drivers/net/wireless/mediatek/mt76/mt7615/eeprom.c
+index 3232ebd5eda69..a31fa2017f52a 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/eeprom.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/eeprom.c
+@@ -86,6 +86,7 @@ static int mt7615_check_eeprom(struct mt76_dev *dev)
+ switch (val) {
+ case 0x7615:
+ case 0x7622:
++ case 0x7663:
+ return 0;
+ default:
+ return -EINVAL;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+index 2cb24c26a0745..b2f6cda5a6815 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+@@ -1031,7 +1031,7 @@ EXPORT_SYMBOL_GPL(mt7615_mac_set_rates);
+ static int
+ mt7615_mac_wtbl_update_key(struct mt7615_dev *dev, struct mt76_wcid *wcid,
+ struct ieee80211_key_conf *key,
+- enum mt7615_cipher_type cipher,
++ enum mt7615_cipher_type cipher, u16 cipher_mask,
+ enum set_key_cmd cmd)
+ {
+ u32 addr = mt7615_mac_wtbl_addr(dev, wcid->idx) + 30 * 4;
+@@ -1048,22 +1048,22 @@ mt7615_mac_wtbl_update_key(struct mt7615_dev *dev, struct mt76_wcid *wcid,
+ memcpy(data + 16, key->key + 24, 8);
+ memcpy(data + 24, key->key + 16, 8);
+ } else {
+- if (cipher != MT_CIPHER_BIP_CMAC_128 && wcid->cipher)
+- memmove(data + 16, data, 16);
+- if (cipher != MT_CIPHER_BIP_CMAC_128 || !wcid->cipher)
++ if (cipher_mask == BIT(cipher))
+ memcpy(data, key->key, key->keylen);
+- else if (cipher == MT_CIPHER_BIP_CMAC_128)
++ else if (cipher != MT_CIPHER_BIP_CMAC_128)
++ memcpy(data, key->key, 16);
++ if (cipher == MT_CIPHER_BIP_CMAC_128)
+ memcpy(data + 16, key->key, 16);
+ }
+ } else {
+- if (wcid->cipher & ~BIT(cipher)) {
+- if (cipher != MT_CIPHER_BIP_CMAC_128)
+- memmove(data, data + 16, 16);
++ if (cipher == MT_CIPHER_BIP_CMAC_128)
+ memset(data + 16, 0, 16);
+- } else {
++ else if (cipher_mask)
++ memset(data, 0, 16);
++ if (!cipher_mask)
+ memset(data, 0, sizeof(data));
+- }
+ }
++
+ mt76_wr_copy(dev, addr, data, sizeof(data));
+
+ return 0;
+@@ -1071,7 +1071,7 @@ mt7615_mac_wtbl_update_key(struct mt7615_dev *dev, struct mt76_wcid *wcid,
+
+ static int
+ mt7615_mac_wtbl_update_pk(struct mt7615_dev *dev, struct mt76_wcid *wcid,
+- enum mt7615_cipher_type cipher,
++ enum mt7615_cipher_type cipher, u16 cipher_mask,
+ int keyidx, enum set_key_cmd cmd)
+ {
+ u32 addr = mt7615_mac_wtbl_addr(dev, wcid->idx), w0, w1;
+@@ -1081,20 +1081,23 @@ mt7615_mac_wtbl_update_pk(struct mt7615_dev *dev, struct mt76_wcid *wcid,
+
+ w0 = mt76_rr(dev, addr);
+ w1 = mt76_rr(dev, addr + 4);
+- if (cmd == SET_KEY) {
+- w0 |= MT_WTBL_W0_RX_KEY_VALID |
+- FIELD_PREP(MT_WTBL_W0_RX_IK_VALID,
+- cipher == MT_CIPHER_BIP_CMAC_128);
+- if (cipher != MT_CIPHER_BIP_CMAC_128 ||
+- !wcid->cipher)
+- w0 |= FIELD_PREP(MT_WTBL_W0_KEY_IDX, keyidx);
+- } else {
+- if (!(wcid->cipher & ~BIT(cipher)))
+- w0 &= ~(MT_WTBL_W0_RX_KEY_VALID |
+- MT_WTBL_W0_KEY_IDX);
+- if (cipher == MT_CIPHER_BIP_CMAC_128)
+- w0 &= ~MT_WTBL_W0_RX_IK_VALID;
++
++ if (cipher_mask)
++ w0 |= MT_WTBL_W0_RX_KEY_VALID;
++ else
++ w0 &= ~(MT_WTBL_W0_RX_KEY_VALID | MT_WTBL_W0_KEY_IDX);
++ if (cipher_mask & BIT(MT_CIPHER_BIP_CMAC_128))
++ w0 |= MT_WTBL_W0_RX_IK_VALID;
++ else
++ w0 &= ~MT_WTBL_W0_RX_IK_VALID;
++
++ if (cmd == SET_KEY &&
++ (cipher != MT_CIPHER_BIP_CMAC_128 ||
++ cipher_mask == BIT(cipher))) {
++ w0 &= ~MT_WTBL_W0_KEY_IDX;
++ w0 |= FIELD_PREP(MT_WTBL_W0_KEY_IDX, keyidx);
+ }
++
+ mt76_wr(dev, MT_WTBL_RICR0, w0);
+ mt76_wr(dev, MT_WTBL_RICR1, w1);
+
+@@ -1107,24 +1110,25 @@ mt7615_mac_wtbl_update_pk(struct mt7615_dev *dev, struct mt76_wcid *wcid,
+
+ static void
+ mt7615_mac_wtbl_update_cipher(struct mt7615_dev *dev, struct mt76_wcid *wcid,
+- enum mt7615_cipher_type cipher,
++ enum mt7615_cipher_type cipher, u16 cipher_mask,
+ enum set_key_cmd cmd)
+ {
+ u32 addr = mt7615_mac_wtbl_addr(dev, wcid->idx);
+
+- if (cmd == SET_KEY) {
+- if (cipher != MT_CIPHER_BIP_CMAC_128 || !wcid->cipher)
+- mt76_rmw(dev, addr + 2 * 4, MT_WTBL_W2_KEY_TYPE,
+- FIELD_PREP(MT_WTBL_W2_KEY_TYPE, cipher));
+- } else {
+- if (cipher != MT_CIPHER_BIP_CMAC_128 &&
+- wcid->cipher & BIT(MT_CIPHER_BIP_CMAC_128))
+- mt76_rmw(dev, addr + 2 * 4, MT_WTBL_W2_KEY_TYPE,
+- FIELD_PREP(MT_WTBL_W2_KEY_TYPE,
+- MT_CIPHER_BIP_CMAC_128));
+- else if (!(wcid->cipher & ~BIT(cipher)))
+- mt76_clear(dev, addr + 2 * 4, MT_WTBL_W2_KEY_TYPE);
++ if (!cipher_mask) {
++ mt76_clear(dev, addr + 2 * 4, MT_WTBL_W2_KEY_TYPE);
++ return;
+ }
++
++ if (cmd != SET_KEY)
++ return;
++
++ if (cipher == MT_CIPHER_BIP_CMAC_128 &&
++ cipher_mask & ~BIT(MT_CIPHER_BIP_CMAC_128))
++ return;
++
++ mt76_rmw(dev, addr + 2 * 4, MT_WTBL_W2_KEY_TYPE,
++ FIELD_PREP(MT_WTBL_W2_KEY_TYPE, cipher));
+ }
+
+ int __mt7615_mac_wtbl_set_key(struct mt7615_dev *dev,
+@@ -1133,25 +1137,30 @@ int __mt7615_mac_wtbl_set_key(struct mt7615_dev *dev,
+ enum set_key_cmd cmd)
+ {
+ enum mt7615_cipher_type cipher;
++ u16 cipher_mask = wcid->cipher;
+ int err;
+
+ cipher = mt7615_mac_get_cipher(key->cipher);
+ if (cipher == MT_CIPHER_NONE)
+ return -EOPNOTSUPP;
+
+- mt7615_mac_wtbl_update_cipher(dev, wcid, cipher, cmd);
+- err = mt7615_mac_wtbl_update_key(dev, wcid, key, cipher, cmd);
++ if (cmd == SET_KEY)
++ cipher_mask |= BIT(cipher);
++ else
++ cipher_mask &= ~BIT(cipher);
++
++ mt7615_mac_wtbl_update_cipher(dev, wcid, cipher, cipher_mask, cmd);
++ err = mt7615_mac_wtbl_update_key(dev, wcid, key, cipher, cipher_mask,
++ cmd);
+ if (err < 0)
+ return err;
+
+- err = mt7615_mac_wtbl_update_pk(dev, wcid, cipher, key->keyidx, cmd);
++ err = mt7615_mac_wtbl_update_pk(dev, wcid, cipher, cipher_mask,
++ key->keyidx, cmd);
+ if (err < 0)
+ return err;
+
+- if (cmd == SET_KEY)
+- wcid->cipher |= BIT(cipher);
+- else
+- wcid->cipher &= ~BIT(cipher);
++ wcid->cipher = cipher_mask;
+
+ return 0;
+ }
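
The mt7615 rework above computes the post-update cipher_mask once (SET_KEY ors the bit in, key removal clears it) and threads that mask through all three WTBL helpers, rather than letting each helper re-derive state from the stale wcid->cipher; the wcid field is committed only after the hardware writes succeed. The bookkeeping on its own, with made-up cipher values:

	#include <stdio.h>

	enum cipher { CIPHER_TKIP = 1, CIPHER_CCMP = 2, CIPHER_BIP = 3 };
	enum cmd { DEL_KEY, SET_KEY };

	static unsigned short next_mask(unsigned short cur, enum cipher c,
					enum cmd cmd)
	{
		return cmd == SET_KEY ? cur | (1u << c) : cur & ~(1u << c);
	}

	int main(void)
	{
		unsigned short mask = 0;

		mask = next_mask(mask, CIPHER_CCMP, SET_KEY);
		mask = next_mask(mask, CIPHER_BIP, SET_KEY);
		/* program the hardware with 'mask'; commit it to the wcid
		 * only if those writes succeed */
		mask = next_mask(mask, CIPHER_CCMP, DEL_KEY);
		printf("mask=0x%x\n", mask);	/* 0x8: only BIP remains */
		return 0;
	}
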
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/main.c b/drivers/net/wireless/mediatek/mt76/mt7615/main.c
+index 0ec836af211c0..cbfcf00377dbe 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/main.c
+@@ -347,7 +347,8 @@ static int mt7615_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ struct mt7615_sta *msta = sta ? (struct mt7615_sta *)sta->drv_priv :
+ &mvif->sta;
+ struct mt76_wcid *wcid = &msta->wcid;
+- int idx = key->keyidx, err;
++ int idx = key->keyidx, err = 0;
++ u8 *wcid_keyidx = &wcid->hw_key_idx;
+
+ /* The hardware does not support per-STA RX GTK, fallback
+ * to software mode for these.
+@@ -362,6 +363,7 @@ static int mt7615_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ /* fall back to sw encryption for unsupported ciphers */
+ switch (key->cipher) {
+ case WLAN_CIPHER_SUITE_AES_CMAC:
++ wcid_keyidx = &wcid->hw_key_idx2;
+ key->flags |= IEEE80211_KEY_FLAG_GENERATE_MMIE;
+ break;
+ case WLAN_CIPHER_SUITE_TKIP:
+@@ -379,12 +381,13 @@ static int mt7615_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+
+ mt7615_mutex_acquire(dev);
+
+- if (cmd == SET_KEY) {
+- key->hw_key_idx = wcid->idx;
+- wcid->hw_key_idx = idx;
+- } else if (idx == wcid->hw_key_idx) {
+- wcid->hw_key_idx = -1;
+- }
++ if (cmd == SET_KEY)
++ *wcid_keyidx = idx;
++ else if (idx == *wcid_keyidx)
++ *wcid_keyidx = -1;
++ else
++ goto out;
++
+ mt76_wcid_key_setup(&dev->mt76, wcid,
+ cmd == SET_KEY ? key : NULL);
+
+@@ -393,6 +396,7 @@ static int mt7615_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ else
+ err = __mt7615_mac_wtbl_set_key(dev, wcid, key, cmd);
+
++out:
+ mt7615_mutex_release(dev);
+
+ return err;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
+index c13547841a4e9..4c7083d17418a 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
+@@ -291,12 +291,20 @@ static int mt7615_mcu_drv_pmctrl(struct mt7615_dev *dev)
+ u32 addr;
+ int err;
+
+- addr = is_mt7663(mdev) ? MT_PCIE_DOORBELL_PUSH : MT_CFG_LPCR_HOST;
++ if (is_mt7663(mdev)) {
++ /* Clear firmware own via N9 eint */
++ mt76_wr(dev, MT_PCIE_DOORBELL_PUSH, MT_CFG_LPCR_HOST_DRV_OWN);
++ mt76_poll(dev, MT_CONN_ON_MISC, MT_CFG_LPCR_HOST_FW_OWN, 0, 3000);
++
++ addr = MT_CONN_HIF_ON_LPCTL;
++ } else {
++ addr = MT_CFG_LPCR_HOST;
++ }
++
+ mt76_wr(dev, addr, MT_CFG_LPCR_HOST_DRV_OWN);
+
+ mt7622_trigger_hif_int(dev, true);
+
+- addr = is_mt7663(mdev) ? MT_CONN_HIF_ON_LPCTL : MT_CFG_LPCR_HOST;
+ err = !mt76_poll_msec(dev, addr, MT_CFG_LPCR_HOST_FW_OWN, 0, 3000);
+
+ mt7622_trigger_hif_int(dev, false);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_util.c b/drivers/net/wireless/mediatek/mt76/mt76x02_util.c
+index 7ac20d3c16d71..aaa597b941cd5 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x02_util.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x02_util.c
+@@ -447,6 +447,10 @@ int mt76x02_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ !(key->flags & IEEE80211_KEY_FLAG_PAIRWISE))
+ return -EOPNOTSUPP;
+
++ /* MT76x0 GTK offloading does not work with more than one VIF */
++ if (is_mt76x0(dev) && !(key->flags & IEEE80211_KEY_FLAG_PAIRWISE))
++ return -EOPNOTSUPP;
++
+ msta = sta ? (struct mt76x02_sta *)sta->drv_priv : NULL;
+ wcid = msta ? &msta->wcid : &mvif->group_wcid;
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c b/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c
+index 7a2be3f61398e..c3e32555cf242 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c
+@@ -114,7 +114,7 @@ int mt7915_eeprom_get_target_power(struct mt7915_dev *dev,
+ struct ieee80211_channel *chan,
+ u8 chain_idx)
+ {
+- int index;
++ int index, target_power;
+ bool tssi_on;
+
+ if (chain_idx > 3)
+@@ -123,15 +123,22 @@ int mt7915_eeprom_get_target_power(struct mt7915_dev *dev,
+ tssi_on = mt7915_tssi_enabled(dev, chan->band);
+
+ if (chan->band == NL80211_BAND_2GHZ) {
+- index = MT_EE_TX0_POWER_2G + chain_idx * 3 + !tssi_on;
++ index = MT_EE_TX0_POWER_2G + chain_idx * 3;
++ target_power = mt7915_eeprom_read(dev, index);
++
++ if (!tssi_on)
++ target_power += mt7915_eeprom_read(dev, index + 1);
+ } else {
+- int group = tssi_on ?
+- mt7915_get_channel_group(chan->hw_value) : 8;
++ int group = mt7915_get_channel_group(chan->hw_value);
++
++ index = MT_EE_TX0_POWER_5G + chain_idx * 12;
++ target_power = mt7915_eeprom_read(dev, index + group);
+
+- index = MT_EE_TX0_POWER_5G + chain_idx * 12 + group;
++ if (!tssi_on)
++ target_power += mt7915_eeprom_read(dev, index + 8);
+ }
+
+- return mt7915_eeprom_read(dev, index);
++ return target_power;
+ }
+
+ static const u8 sku_cck_delta_map[] = {
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/init.c b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
+index 148a92efdd4ee..76358f8d42a1d 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
+@@ -4,6 +4,7 @@
+ #include <linux/etherdevice.h>
+ #include "mt7915.h"
+ #include "mac.h"
++#include "mcu.h"
+ #include "eeprom.h"
+
+ #define CCK_RATE(_idx, _rate) { \
+@@ -282,9 +283,50 @@ static void mt7915_init_work(struct work_struct *work)
+ mt7915_register_ext_phy(dev);
+ }
+
++static void mt7915_wfsys_reset(struct mt7915_dev *dev)
++{
++ u32 val = MT_TOP_PWR_KEY | MT_TOP_PWR_SW_PWR_ON | MT_TOP_PWR_PWR_ON;
++ u32 reg = mt7915_reg_map_l1(dev, MT_TOP_MISC);
++
++#define MT_MCU_DUMMY_RANDOM GENMASK(15, 0)
++#define MT_MCU_DUMMY_DEFAULT GENMASK(31, 16)
++
++ mt76_wr(dev, MT_MCU_WFDMA0_DUMMY_CR, MT_MCU_DUMMY_RANDOM);
++
++ /* change to software control */
++ val |= MT_TOP_PWR_SW_RST;
++ mt76_wr(dev, MT_TOP_PWR_CTRL, val);
++
++ /* reset wfsys */
++ val &= ~MT_TOP_PWR_SW_RST;
++ mt76_wr(dev, MT_TOP_PWR_CTRL, val);
++
++ /* release wfsys then mcu re-excutes romcode */
++ val |= MT_TOP_PWR_SW_RST;
++ mt76_wr(dev, MT_TOP_PWR_CTRL, val);
++
++ /* switch to hw control */
++ val &= ~MT_TOP_PWR_SW_RST;
++ val |= MT_TOP_PWR_HW_CTRL;
++ mt76_wr(dev, MT_TOP_PWR_CTRL, val);
++
++ /* check whether mcu resets to default */
++ if (!mt76_poll_msec(dev, MT_MCU_WFDMA0_DUMMY_CR, MT_MCU_DUMMY_DEFAULT,
++ MT_MCU_DUMMY_DEFAULT, 1000)) {
++ dev_err(dev->mt76.dev, "wifi subsystem reset failure\n");
++ return;
++ }
++
++ /* wfsys reset won't clear host registers */
++ mt76_clear(dev, reg, MT_TOP_MISC_FW_STATE);
++
++ msleep(100);
++}
++
+ static int mt7915_init_hardware(struct mt7915_dev *dev)
+ {
+ int ret, idx;
++ u32 val;
+
+ mt76_wr(dev, MT_INT_SOURCE_CSR, ~0);
+
+@@ -294,6 +336,12 @@ static int mt7915_init_hardware(struct mt7915_dev *dev)
+
+ dev->dbdc_support = !!(mt7915_l1_rr(dev, MT_HW_BOUND) & BIT(5));
+
++ val = mt76_rr(dev, mt7915_reg_map_l1(dev, MT_TOP_MISC));
++
++ /* If MCU was already running, it is likely in a bad state */
++ if (FIELD_GET(MT_TOP_MISC_FW_STATE, val) > FW_STATE_FW_DOWNLOAD)
++ mt7915_wfsys_reset(dev);
++
+ ret = mt7915_dma_init(dev);
+ if (ret)
+ return ret;
+@@ -307,8 +355,14 @@ static int mt7915_init_hardware(struct mt7915_dev *dev)
+ mt76_wr(dev, MT_SWDEF_MODE, MT_SWDEF_NORMAL_MODE);
+
+ ret = mt7915_mcu_init(dev);
+- if (ret)
+- return ret;
++ if (ret) {
++ /* Reset and try again */
++ mt7915_wfsys_reset(dev);
++
++ ret = mt7915_mcu_init(dev);
++ if (ret)
++ return ret;
++ }
+
+ ret = mt7915_eeprom_init(dev);
+ if (ret < 0)
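
mt7915 init above gains two defensive moves: if the MCU is already past the firmware-download stage (stale state left by a previous driver instance), the WiFi subsystem is reset up front, and if mt7915_mcu_init() still fails it is reset once more and retried before probe gives up. That reset-once-and-retry shape, minus the hardware:

	#include <stdio.h>

	static int attempts;

	static int mcu_init(void)	/* pretend the first attempt fails */
	{
		return ++attempts == 1 ? -5 : 0;
	}

	static void wfsys_reset(void)
	{
		puts("resetting wifi subsystem");
	}

	int main(void)
	{
		int ret = mcu_init();

		if (ret) {
			wfsys_reset();	/* one reset, then one more try */
			ret = mcu_init();
		}
		printf("ret=%d after %d attempt(s)\n", ret, attempts);
		return ret;
	}
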
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/main.c b/drivers/net/wireless/mediatek/mt76/mt7915/main.c
+index 0721e9d85b655..2f3527179b7d6 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/main.c
+@@ -314,7 +314,9 @@ static int mt7915_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ struct mt7915_sta *msta = sta ? (struct mt7915_sta *)sta->drv_priv :
+ &mvif->sta;
+ struct mt76_wcid *wcid = &msta->wcid;
++ u8 *wcid_keyidx = &wcid->hw_key_idx;
+ int idx = key->keyidx;
++ int err = 0;
+
+ /* The hardware does not support per-STA RX GTK, fallback
+ * to software mode for these.
+@@ -329,6 +331,7 @@ static int mt7915_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ /* fall back to sw encryption for unsupported ciphers */
+ switch (key->cipher) {
+ case WLAN_CIPHER_SUITE_AES_CMAC:
++ wcid_keyidx = &wcid->hw_key_idx2;
+ key->flags |= IEEE80211_KEY_FLAG_GENERATE_MMIE;
+ break;
+ case WLAN_CIPHER_SUITE_TKIP:
+@@ -344,16 +347,24 @@ static int mt7915_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ return -EOPNOTSUPP;
+ }
+
+- if (cmd == SET_KEY) {
+- key->hw_key_idx = wcid->idx;
+- wcid->hw_key_idx = idx;
+- } else if (idx == wcid->hw_key_idx) {
+- wcid->hw_key_idx = -1;
+- }
++ mutex_lock(&dev->mt76.mutex);
++
++ if (cmd == SET_KEY)
++ *wcid_keyidx = idx;
++ else if (idx == *wcid_keyidx)
++ *wcid_keyidx = -1;
++ else
++ goto out;
++
+ mt76_wcid_key_setup(&dev->mt76, wcid,
+ cmd == SET_KEY ? key : NULL);
+
+- return mt7915_mcu_add_key(dev, vif, msta, key, cmd);
++ err = mt7915_mcu_add_key(dev, vif, msta, key, cmd);
++
++out:
++ mutex_unlock(&dev->mt76.mutex);
++
++ return err;
+ }
+
+ static int mt7915_config(struct ieee80211_hw *hw, u32 changed)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+index 35bfa197dff6d..db204cbcde960 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+@@ -1180,6 +1180,9 @@ mt7915_mcu_sta_ba(struct mt7915_dev *dev,
+
+ wtbl_hdr = mt7915_mcu_alloc_wtbl_req(dev, msta, WTBL_SET, sta_wtbl,
+ &skb);
++ if (IS_ERR(wtbl_hdr))
++ return PTR_ERR(wtbl_hdr);
++
+ mt7915_mcu_wtbl_ba_tlv(skb, params, enable, tx, sta_wtbl, wtbl_hdr);
+
+ ret = mt76_mcu_skb_send_msg(&dev->mt76, skb,
+@@ -1696,6 +1699,9 @@ int mt7915_mcu_sta_update_hdr_trans(struct mt7915_dev *dev,
+ return -ENOMEM;
+
+ wtbl_hdr = mt7915_mcu_alloc_wtbl_req(dev, msta, WTBL_SET, NULL, &skb);
++ if (IS_ERR(wtbl_hdr))
++ return PTR_ERR(wtbl_hdr);
++
+ mt7915_mcu_wtbl_hdr_trans_tlv(skb, vif, sta, NULL, wtbl_hdr);
+
+ return mt76_mcu_skb_send_msg(&dev->mt76, skb, MCU_EXT_CMD_WTBL_UPDATE,
+@@ -1720,6 +1726,9 @@ int mt7915_mcu_add_smps(struct mt7915_dev *dev, struct ieee80211_vif *vif,
+
+ wtbl_hdr = mt7915_mcu_alloc_wtbl_req(dev, msta, WTBL_SET, sta_wtbl,
+ &skb);
++ if (IS_ERR(wtbl_hdr))
++ return PTR_ERR(wtbl_hdr);
++
+ mt7915_mcu_wtbl_smps_tlv(skb, sta, sta_wtbl, wtbl_hdr);
+
+ return mt76_mcu_skb_send_msg(&dev->mt76, skb,
+@@ -2289,6 +2298,9 @@ int mt7915_mcu_add_sta(struct mt7915_dev *dev, struct ieee80211_vif *vif,
+
+ wtbl_hdr = mt7915_mcu_alloc_wtbl_req(dev, msta, WTBL_RESET_AND_SET,
+ sta_wtbl, &skb);
++ if (IS_ERR(wtbl_hdr))
++ return PTR_ERR(wtbl_hdr);
++
+ if (enable) {
+ mt7915_mcu_wtbl_generic_tlv(skb, vif, sta, sta_wtbl, wtbl_hdr);
+ mt7915_mcu_wtbl_hdr_trans_tlv(skb, vif, sta, sta_wtbl, wtbl_hdr);
+@@ -2778,21 +2790,8 @@ out:
+
+ static int mt7915_load_firmware(struct mt7915_dev *dev)
+ {
++ u32 reg = mt7915_reg_map_l1(dev, MT_TOP_MISC);
+ int ret;
+- u32 val, reg = mt7915_reg_map_l1(dev, MT_TOP_MISC);
+-
+- val = FIELD_PREP(MT_TOP_MISC_FW_STATE, FW_STATE_FW_DOWNLOAD);
+-
+- if (!mt76_poll_msec(dev, reg, MT_TOP_MISC_FW_STATE, val, 1000)) {
+- /* restart firmware once */
+- __mt76_mcu_restart(&dev->mt76);
+- if (!mt76_poll_msec(dev, reg, MT_TOP_MISC_FW_STATE,
+- val, 1000)) {
+- dev_err(dev->mt76.dev,
+- "Firmware is not ready for download\n");
+- return -EIO;
+- }
+- }
+
+ ret = mt7915_load_patch(dev);
+ if (ret)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/regs.h b/drivers/net/wireless/mediatek/mt76/mt7915/regs.h
+index 294cc07693315..12bbe565cdd17 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/regs.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/regs.h
+@@ -4,6 +4,11 @@
+ #ifndef __MT7915_REGS_H
+ #define __MT7915_REGS_H
+
++/* MCU WFDMA0 */
++#define MT_MCU_WFDMA0_BASE 0x2000
++#define MT_MCU_WFDMA0(ofs) (MT_MCU_WFDMA0_BASE + (ofs))
++#define MT_MCU_WFDMA0_DUMMY_CR MT_MCU_WFDMA0(0x120)
++
+ /* MCU WFDMA1 */
+ #define MT_MCU_WFDMA1_BASE 0x3000
+ #define MT_MCU_WFDMA1(ofs) (MT_MCU_WFDMA1_BASE + (ofs))
+@@ -376,6 +381,14 @@
+ #define MT_WFDMA1_PCIE1_BUSY_ENA_TX_FIFO1 BIT(1)
+ #define MT_WFDMA1_PCIE1_BUSY_ENA_RX_FIFO BIT(2)
+
++#define MT_TOP_RGU_BASE 0xf0000
++#define MT_TOP_PWR_CTRL (MT_TOP_RGU_BASE + (0x0))
++#define MT_TOP_PWR_KEY (0x5746 << 16)
++#define MT_TOP_PWR_SW_RST BIT(0)
++#define MT_TOP_PWR_SW_PWR_ON GENMASK(3, 2)
++#define MT_TOP_PWR_HW_CTRL BIT(4)
++#define MT_TOP_PWR_PWR_ON BIT(7)
++
+ #define MT_INFRA_CFG_BASE 0xf1000
+ #define MT_INFRA(ofs) (MT_INFRA_CFG_BASE + (ofs))
+
+diff --git a/drivers/net/wireless/microchip/wilc1000/netdev.c b/drivers/net/wireless/microchip/wilc1000/netdev.c
+index 0c188310919e1..acf7ed4bfe57b 100644
+--- a/drivers/net/wireless/microchip/wilc1000/netdev.c
++++ b/drivers/net/wireless/microchip/wilc1000/netdev.c
+@@ -575,7 +575,6 @@ static int wilc_mac_open(struct net_device *ndev)
+ {
+ struct wilc_vif *vif = netdev_priv(ndev);
+ struct wilc *wl = vif->wilc;
+- unsigned char mac_add[ETH_ALEN] = {0};
+ int ret = 0;
+ struct mgmt_frame_regs mgmt_regs = {};
+
+@@ -598,9 +597,12 @@ static int wilc_mac_open(struct net_device *ndev)
+
+ wilc_set_operation_mode(vif, wilc_get_vif_idx(vif), vif->iftype,
+ vif->idx);
+- wilc_get_mac_address(vif, mac_add);
+- netdev_dbg(ndev, "Mac address: %pM\n", mac_add);
+- ether_addr_copy(ndev->dev_addr, mac_add);
++
++ if (is_valid_ether_addr(ndev->dev_addr))
++ wilc_set_mac_address(vif, ndev->dev_addr);
++ else
++ wilc_get_mac_address(vif, ndev->dev_addr);
++ netdev_dbg(ndev, "Mac address: %pM\n", ndev->dev_addr);
+
+ if (!is_valid_ether_addr(ndev->dev_addr)) {
+ netdev_err(ndev, "Wrong MAC address\n");
+@@ -639,7 +641,14 @@ static int wilc_set_mac_addr(struct net_device *dev, void *p)
+ int srcu_idx;
+
+ if (!is_valid_ether_addr(addr->sa_data))
+- return -EINVAL;
++ return -EADDRNOTAVAIL;
++
++ if (!vif->mac_opened) {
++ eth_commit_mac_addr_change(dev, p);
++ return 0;
++ }
++
++	/* Verify the MAC address is not already in use */
+
+ srcu_idx = srcu_read_lock(&wilc->srcu);
+ list_for_each_entry_rcu(tmp_vif, &wilc->vif_list, list) {
+@@ -647,7 +656,7 @@ static int wilc_set_mac_addr(struct net_device *dev, void *p)
+ if (ether_addr_equal(addr->sa_data, mac_addr)) {
+ if (vif != tmp_vif) {
+ srcu_read_unlock(&wilc->srcu, srcu_idx);
+- return -EINVAL;
++ return -EADDRNOTAVAIL;
+ }
+ srcu_read_unlock(&wilc->srcu, srcu_idx);
+ return 0;
+@@ -659,9 +668,7 @@ static int wilc_set_mac_addr(struct net_device *dev, void *p)
+ if (result)
+ return result;
+
+- ether_addr_copy(vif->bssid, addr->sa_data);
+- ether_addr_copy(vif->ndev->dev_addr, addr->sa_data);
+-
++ eth_commit_mac_addr_change(dev, p);
+ return result;
+ }
+
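The change above defers MAC programming: an address set before the interface is opened is only recorded via eth_commit_mac_addr_change() and pushed to the chip later in open(). A rough userspace model of that ordering (names are illustrative, not the driver's):

#include <stdio.h>
#include <string.h>

struct netdev {
	unsigned char dev_addr[6];
	int opened;
};

static void set_mac(struct netdev *nd, const unsigned char *mac)
{
	if (!nd->opened) {
		memcpy(nd->dev_addr, mac, 6);	/* record only; open() programs it */
		return;
	}
	printf("programming firmware MAC\n");	/* opened: push to hardware first */
	memcpy(nd->dev_addr, mac, 6);
}

int main(void)
{
	struct netdev nd = { .opened = 0 };
	const unsigned char mac[6] = { 0x02, 0, 0, 0, 0, 1 };

	set_mac(&nd, mac);	/* no firmware call: interface still closed */
	printf("%02x:..:%02x\n", nd.dev_addr[0], nd.dev_addr[5]);
	return 0;
}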
+diff --git a/drivers/net/wireless/quantenna/qtnfmac/event.c b/drivers/net/wireless/quantenna/qtnfmac/event.c
+index c775c177933b2..8dc80574d08d9 100644
+--- a/drivers/net/wireless/quantenna/qtnfmac/event.c
++++ b/drivers/net/wireless/quantenna/qtnfmac/event.c
+@@ -570,8 +570,10 @@ qtnf_event_handle_external_auth(struct qtnf_vif *vif,
+ return 0;
+
+ if (ev->ssid_len) {
+- memcpy(auth.ssid.ssid, ev->ssid, ev->ssid_len);
+- auth.ssid.ssid_len = ev->ssid_len;
++ int len = clamp_val(ev->ssid_len, 0, IEEE80211_MAX_SSID_LEN);
++
++ memcpy(auth.ssid.ssid, ev->ssid, len);
++ auth.ssid.ssid_len = len;
+ }
+
+ auth.key_mgmt_suite = le32_to_cpu(ev->akm_suite);
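The fix above bounds an SSID length taken from a device event with clamp_val() before copying into the fixed 32-byte buffer. A minimal userspace model of the same guard (IEEE80211_MAX_SSID_LEN is 32 in mainline; the open-coded clamp stands in for the kernel macro):

#include <stdio.h>
#include <string.h>

#define IEEE80211_MAX_SSID_LEN 32

struct ssid_buf {
	unsigned char ssid[IEEE80211_MAX_SSID_LEN];
	unsigned char len;
};

static void copy_ssid(struct ssid_buf *dst, const unsigned char *src,
		      unsigned int src_len)
{
	unsigned int len = src_len > IEEE80211_MAX_SSID_LEN ?
			   IEEE80211_MAX_SSID_LEN : src_len;

	memcpy(dst->ssid, src, len);	/* cannot overrun dst->ssid */
	dst->len = len;
}

int main(void)
{
	struct ssid_buf buf;
	unsigned char evil[64] = "way-too-long-ssid";

	copy_ssid(&buf, evil, 64);	/* device claimed 64 bytes */
	printf("copied %u bytes\n", buf.len);
	return 0;
}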
+diff --git a/drivers/net/wireless/realtek/rtw88/main.h b/drivers/net/wireless/realtek/rtw88/main.h
+index 9a318dfd04f90..3d51394edb4a3 100644
+--- a/drivers/net/wireless/realtek/rtw88/main.h
++++ b/drivers/net/wireless/realtek/rtw88/main.h
+@@ -1157,6 +1157,7 @@ struct rtw_chip_info {
+ bool en_dis_dpd;
+ u16 dpd_ratemask;
+ u8 iqk_threshold;
++ u8 lck_threshold;
+ const struct rtw_pwr_track_tbl *pwr_track_tbl;
+
+ u8 bfer_su_max_num;
+@@ -1520,6 +1521,7 @@ struct rtw_dm_info {
+ u8 tx_rate;
+ u8 thermal_avg[RTW_RF_PATH_MAX];
+ u8 thermal_meter_k;
++ u8 thermal_meter_lck;
+ s8 delta_power_index[RTW_RF_PATH_MAX];
+ s8 delta_power_index_last[RTW_RF_PATH_MAX];
+ u8 default_ofdm_index;
+diff --git a/drivers/net/wireless/realtek/rtw88/phy.c b/drivers/net/wireless/realtek/rtw88/phy.c
+index a76aac514fc80..e655f6a76cc3a 100644
+--- a/drivers/net/wireless/realtek/rtw88/phy.c
++++ b/drivers/net/wireless/realtek/rtw88/phy.c
+@@ -2160,6 +2160,20 @@ s8 rtw_phy_pwrtrack_get_pwridx(struct rtw_dev *rtwdev,
+ }
+ EXPORT_SYMBOL(rtw_phy_pwrtrack_get_pwridx);
+
++bool rtw_phy_pwrtrack_need_lck(struct rtw_dev *rtwdev)
++{
++ struct rtw_dm_info *dm_info = &rtwdev->dm_info;
++ u8 delta_lck;
++
++ delta_lck = abs(dm_info->thermal_avg[0] - dm_info->thermal_meter_lck);
++ if (delta_lck >= rtwdev->chip->lck_threshold) {
++ dm_info->thermal_meter_lck = dm_info->thermal_avg[0];
++ return true;
++ }
++ return false;
++}
++EXPORT_SYMBOL(rtw_phy_pwrtrack_need_lck);
++
+ bool rtw_phy_pwrtrack_need_iqk(struct rtw_dev *rtwdev)
+ {
+ struct rtw_dm_info *dm_info = &rtwdev->dm_info;
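rtw_phy_pwrtrack_need_lck() follows the same drift-threshold pattern as the existing IQK trigger: recalibrate only when the averaged thermal reading moves at least lck_threshold steps from the value latched at the last calibration. A userspace sketch of that hysteresis, with made-up numbers:

#include <stdio.h>
#include <stdlib.h>

struct trk {
	unsigned char last_cal;		/* thermal at last calibration */
	unsigned char threshold;
};

static int need_recal(struct trk *t, unsigned char thermal_now)
{
	if (abs((int)thermal_now - (int)t->last_cal) >= t->threshold) {
		t->last_cal = thermal_now;	/* latch new baseline */
		return 1;
	}
	return 0;
}

int main(void)
{
	struct trk t = { .last_cal = 20, .threshold = 8 };

	printf("%d\n", need_recal(&t, 24));	/* drift 4 -> no recal */
	printf("%d\n", need_recal(&t, 29));	/* drift 9 -> recal, latch 29 */
	printf("%d\n", need_recal(&t, 30));	/* drift 1 -> no recal */
	return 0;
}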
+diff --git a/drivers/net/wireless/realtek/rtw88/phy.h b/drivers/net/wireless/realtek/rtw88/phy.h
+index b924ed07630a6..9623248c94667 100644
+--- a/drivers/net/wireless/realtek/rtw88/phy.h
++++ b/drivers/net/wireless/realtek/rtw88/phy.h
+@@ -55,6 +55,7 @@ u8 rtw_phy_pwrtrack_get_delta(struct rtw_dev *rtwdev, u8 path);
+ s8 rtw_phy_pwrtrack_get_pwridx(struct rtw_dev *rtwdev,
+ struct rtw_swing_table *swing_table,
+ u8 tbl_path, u8 therm_path, u8 delta);
++bool rtw_phy_pwrtrack_need_lck(struct rtw_dev *rtwdev);
+ bool rtw_phy_pwrtrack_need_iqk(struct rtw_dev *rtwdev);
+ void rtw_phy_config_swing_table(struct rtw_dev *rtwdev,
+ struct rtw_swing_table *swing_table);
+diff --git a/drivers/net/wireless/realtek/rtw88/reg.h b/drivers/net/wireless/realtek/rtw88/reg.h
+index cf9a3b674d303..767f7777d409f 100644
+--- a/drivers/net/wireless/realtek/rtw88/reg.h
++++ b/drivers/net/wireless/realtek/rtw88/reg.h
+@@ -650,8 +650,13 @@
+ #define RF_TXATANK 0x64
+ #define RF_TRXIQ 0x66
+ #define RF_RXIQGEN 0x8d
++#define RF_SYN_PFD 0xb0
+ #define RF_XTALX2 0xb8
++#define RF_SYN_CTRL 0xbb
+ #define RF_MALSEL 0xbe
++#define RF_SYN_AAC 0xc9
++#define RF_AAC_CTRL 0xca
++#define RF_FAST_LCK 0xcc
+ #define RF_RCKD 0xde
+ #define RF_TXADBG 0xde
+ #define RF_LUTDBG 0xdf
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822c.c b/drivers/net/wireless/realtek/rtw88/rtw8822c.c
+index dd560c28abb2f..448922cb2e63d 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8822c.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8822c.c
+@@ -1126,6 +1126,7 @@ static void rtw8822c_pwrtrack_init(struct rtw_dev *rtwdev)
+
+ dm_info->pwr_trk_triggered = false;
+ dm_info->thermal_meter_k = rtwdev->efuse.thermal_meter_k;
++ dm_info->thermal_meter_lck = rtwdev->efuse.thermal_meter_k;
+ }
+
+ static void rtw8822c_phy_set_param(struct rtw_dev *rtwdev)
+@@ -2108,6 +2109,26 @@ static void rtw8822c_false_alarm_statistics(struct rtw_dev *rtwdev)
+ rtw_write32_set(rtwdev, REG_RX_BREAK, BIT_COM_RX_GCK_EN);
+ }
+
++static void rtw8822c_do_lck(struct rtw_dev *rtwdev)
++{
++ u32 val;
++
++ rtw_write_rf(rtwdev, RF_PATH_A, RF_SYN_CTRL, RFREG_MASK, 0x80010);
++ rtw_write_rf(rtwdev, RF_PATH_A, RF_SYN_PFD, RFREG_MASK, 0x1F0FA);
++ fsleep(1);
++ rtw_write_rf(rtwdev, RF_PATH_A, RF_AAC_CTRL, RFREG_MASK, 0x80000);
++ rtw_write_rf(rtwdev, RF_PATH_A, RF_SYN_AAC, RFREG_MASK, 0x80001);
++ read_poll_timeout(rtw_read_rf, val, val != 0x1, 1000, 100000,
++ true, rtwdev, RF_PATH_A, RF_AAC_CTRL, 0x1000);
++ rtw_write_rf(rtwdev, RF_PATH_A, RF_SYN_PFD, RFREG_MASK, 0x1F0F8);
++ rtw_write_rf(rtwdev, RF_PATH_B, RF_SYN_CTRL, RFREG_MASK, 0x80010);
++
++ rtw_write_rf(rtwdev, RF_PATH_A, RF_FAST_LCK, RFREG_MASK, 0x0f000);
++ rtw_write_rf(rtwdev, RF_PATH_A, RF_FAST_LCK, RFREG_MASK, 0x4f000);
++ fsleep(1);
++ rtw_write_rf(rtwdev, RF_PATH_A, RF_FAST_LCK, RFREG_MASK, 0x0f000);
++}
++
+ static void rtw8822c_do_iqk(struct rtw_dev *rtwdev)
+ {
+ struct rtw_iqk_para para = {0};
+@@ -3538,11 +3559,12 @@ static void __rtw8822c_pwr_track(struct rtw_dev *rtwdev)
+
+ rtw_phy_config_swing_table(rtwdev, &swing_table);
+
++ if (rtw_phy_pwrtrack_need_lck(rtwdev))
++ rtw8822c_do_lck(rtwdev);
++
+ for (i = 0; i < rtwdev->hal.rf_path_num; i++)
+ rtw8822c_pwr_track_path(rtwdev, &swing_table, i);
+
+- if (rtw_phy_pwrtrack_need_iqk(rtwdev))
+- rtw8822c_do_iqk(rtwdev);
+ }
+
+ static void rtw8822c_pwr_track(struct rtw_dev *rtwdev)
+@@ -4351,6 +4373,7 @@ struct rtw_chip_info rtw8822c_hw_spec = {
+ .dpd_ratemask = DIS_DPD_RATEALL,
+ .pwr_track_tbl = &rtw8822c_rtw_pwr_track_tbl,
+ .iqk_threshold = 8,
++ .lck_threshold = 8,
+ .bfer_su_max_num = 2,
+ .bfer_mu_max_num = 1,
+ .rx_ldpc = true,
+diff --git a/drivers/net/wireless/wl3501.h b/drivers/net/wireless/wl3501.h
+index b446cb3695579..87195c1dadf2c 100644
+--- a/drivers/net/wireless/wl3501.h
++++ b/drivers/net/wireless/wl3501.h
+@@ -379,16 +379,7 @@ struct wl3501_get_confirm {
+ u8 mib_value[100];
+ };
+
+-struct wl3501_join_req {
+- u16 next_blk;
+- u8 sig_id;
+- u8 reserved;
+- struct iw_mgmt_data_rset operational_rset;
+- u16 reserved2;
+- u16 timeout;
+- u16 probe_delay;
+- u8 timestamp[8];
+- u8 local_time[8];
++struct wl3501_req {
+ u16 beacon_period;
+ u16 dtim_period;
+ u16 cap_info;
+@@ -401,6 +392,19 @@ struct wl3501_join_req {
+ struct iw_mgmt_data_rset bss_basic_rset;
+ };
+
++struct wl3501_join_req {
++ u16 next_blk;
++ u8 sig_id;
++ u8 reserved;
++ struct iw_mgmt_data_rset operational_rset;
++ u16 reserved2;
++ u16 timeout;
++ u16 probe_delay;
++ u8 timestamp[8];
++ u8 local_time[8];
++ struct wl3501_req req;
++};
++
+ struct wl3501_join_confirm {
+ u16 next_blk;
+ u8 sig_id;
+@@ -443,16 +447,7 @@ struct wl3501_scan_confirm {
+ u16 status;
+ char timestamp[8];
+ char localtime[8];
+- u16 beacon_period;
+- u16 dtim_period;
+- u16 cap_info;
+- u8 bss_type;
+- u8 bssid[ETH_ALEN];
+- struct iw_mgmt_essid_pset ssid;
+- struct iw_mgmt_ds_pset ds_pset;
+- struct iw_mgmt_cf_pset cf_pset;
+- struct iw_mgmt_ibss_pset ibss_pset;
+- struct iw_mgmt_data_rset bss_basic_rset;
++ struct wl3501_req req;
+ u8 rssi;
+ };
+
+@@ -471,8 +466,10 @@ struct wl3501_md_req {
+ u16 size;
+ u8 pri;
+ u8 service_class;
+- u8 daddr[ETH_ALEN];
+- u8 saddr[ETH_ALEN];
++ struct {
++ u8 daddr[ETH_ALEN];
++ u8 saddr[ETH_ALEN];
++ } addr;
+ };
+
+ struct wl3501_md_ind {
+@@ -484,8 +481,10 @@ struct wl3501_md_ind {
+ u8 reception;
+ u8 pri;
+ u8 service_class;
+- u8 daddr[ETH_ALEN];
+- u8 saddr[ETH_ALEN];
++ struct {
++ u8 daddr[ETH_ALEN];
++ u8 saddr[ETH_ALEN];
++ } addr;
+ };
+
+ struct wl3501_md_confirm {
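The restructuring above factors the fields shared by the join request and scan confirm into struct wl3501_req, and groups the daddr/saddr pairs, so later copies can use sizeof() instead of the hand-counted 12/72/73 byte lengths. A compact illustration of why that matters (hypothetical struct, not the driver's):

#include <stdio.h>
#include <string.h>

struct addrs {
	unsigned char daddr[6];
	unsigned char saddr[6];
};

struct md_req {
	unsigned short next_blk;
	struct addrs addr;	/* was: two bare arrays, copied as "12" */
};

int main(void)
{
	struct md_req sig;
	unsigned char frame[64] = { 0 };

	/* sizeof(sig.addr) is 12 today, but survives layout changes. */
	memcpy(&sig.addr, frame, sizeof(sig.addr));
	printf("copied %zu bytes\n", sizeof(sig.addr));
	return 0;
}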
+diff --git a/drivers/net/wireless/wl3501_cs.c b/drivers/net/wireless/wl3501_cs.c
+index 8ca5789c7b378..672f5d5f3f2c7 100644
+--- a/drivers/net/wireless/wl3501_cs.c
++++ b/drivers/net/wireless/wl3501_cs.c
+@@ -469,6 +469,7 @@ static int wl3501_send_pkt(struct wl3501_card *this, u8 *data, u16 len)
+ struct wl3501_md_req sig = {
+ .sig_id = WL3501_SIG_MD_REQ,
+ };
++ size_t sig_addr_len = sizeof(sig.addr);
+ u8 *pdata = (char *)data;
+ int rc = -EIO;
+
+@@ -484,9 +485,9 @@ static int wl3501_send_pkt(struct wl3501_card *this, u8 *data, u16 len)
+ goto out;
+ }
+ rc = 0;
+- memcpy(&sig.daddr[0], pdata, 12);
+- pktlen = len - 12;
+- pdata += 12;
++ memcpy(&sig.addr, pdata, sig_addr_len);
++ pktlen = len - sig_addr_len;
++ pdata += sig_addr_len;
+ sig.data = bf;
+ if (((*pdata) * 256 + (*(pdata + 1))) > 1500) {
+ u8 addr4[ETH_ALEN] = {
+@@ -589,7 +590,7 @@ static int wl3501_mgmt_join(struct wl3501_card *this, u16 stas)
+ struct wl3501_join_req sig = {
+ .sig_id = WL3501_SIG_JOIN_REQ,
+ .timeout = 10,
+- .ds_pset = {
++ .req.ds_pset = {
+ .el = {
+ .id = IW_MGMT_INFO_ELEMENT_DS_PARAMETER_SET,
+ .len = 1,
+@@ -598,7 +599,7 @@ static int wl3501_mgmt_join(struct wl3501_card *this, u16 stas)
+ },
+ };
+
+- memcpy(&sig.beacon_period, &this->bss_set[stas].beacon_period, 72);
++ memcpy(&sig.req, &this->bss_set[stas].req, sizeof(sig.req));
+ return wl3501_esbq_exec(this, &sig, sizeof(sig));
+ }
+
+@@ -666,35 +667,37 @@ static void wl3501_mgmt_scan_confirm(struct wl3501_card *this, u16 addr)
+ if (sig.status == WL3501_STATUS_SUCCESS) {
+ pr_debug("success");
+ if ((this->net_type == IW_MODE_INFRA &&
+- (sig.cap_info & WL3501_MGMT_CAPABILITY_ESS)) ||
++ (sig.req.cap_info & WL3501_MGMT_CAPABILITY_ESS)) ||
+ (this->net_type == IW_MODE_ADHOC &&
+- (sig.cap_info & WL3501_MGMT_CAPABILITY_IBSS)) ||
++ (sig.req.cap_info & WL3501_MGMT_CAPABILITY_IBSS)) ||
+ this->net_type == IW_MODE_AUTO) {
+ if (!this->essid.el.len)
+ matchflag = 1;
+ else if (this->essid.el.len == 3 &&
+ !memcmp(this->essid.essid, "ANY", 3))
+ matchflag = 1;
+- else if (this->essid.el.len != sig.ssid.el.len)
++ else if (this->essid.el.len != sig.req.ssid.el.len)
+ matchflag = 0;
+- else if (memcmp(this->essid.essid, sig.ssid.essid,
++ else if (memcmp(this->essid.essid, sig.req.ssid.essid,
+ this->essid.el.len))
+ matchflag = 0;
+ else
+ matchflag = 1;
+ if (matchflag) {
+ for (i = 0; i < this->bss_cnt; i++) {
+- if (ether_addr_equal_unaligned(this->bss_set[i].bssid, sig.bssid)) {
++ if (ether_addr_equal_unaligned(this->bss_set[i].req.bssid,
++ sig.req.bssid)) {
+ matchflag = 0;
+ break;
+ }
+ }
+ }
+ if (matchflag && (i < 20)) {
+- memcpy(&this->bss_set[i].beacon_period,
+- &sig.beacon_period, 73);
++ memcpy(&this->bss_set[i].req,
++ &sig.req, sizeof(sig.req));
+ this->bss_cnt++;
+ this->rssi = sig.rssi;
++ this->bss_set[i].rssi = sig.rssi;
+ }
+ }
+ } else if (sig.status == WL3501_STATUS_TIMEOUT) {
+@@ -886,19 +889,19 @@ static void wl3501_mgmt_join_confirm(struct net_device *dev, u16 addr)
+ if (this->join_sta_bss < this->bss_cnt) {
+ const int i = this->join_sta_bss;
+ memcpy(this->bssid,
+- this->bss_set[i].bssid, ETH_ALEN);
+- this->chan = this->bss_set[i].ds_pset.chan;
++ this->bss_set[i].req.bssid, ETH_ALEN);
++ this->chan = this->bss_set[i].req.ds_pset.chan;
+ iw_copy_mgmt_info_element(&this->keep_essid.el,
+- &this->bss_set[i].ssid.el);
++ &this->bss_set[i].req.ssid.el);
+ wl3501_mgmt_auth(this);
+ }
+ } else {
+ const int i = this->join_sta_bss;
+
+- memcpy(&this->bssid, &this->bss_set[i].bssid, ETH_ALEN);
+- this->chan = this->bss_set[i].ds_pset.chan;
++ memcpy(&this->bssid, &this->bss_set[i].req.bssid, ETH_ALEN);
++ this->chan = this->bss_set[i].req.ds_pset.chan;
+ iw_copy_mgmt_info_element(&this->keep_essid.el,
+- &this->bss_set[i].ssid.el);
++ &this->bss_set[i].req.ssid.el);
+ wl3501_online(dev);
+ }
+ } else {
+@@ -980,7 +983,8 @@ static inline void wl3501_md_ind_interrupt(struct net_device *dev,
+ } else {
+ skb->dev = dev;
+ skb_reserve(skb, 2); /* IP headers on 16 bytes boundaries */
+- skb_copy_to_linear_data(skb, (unsigned char *)&sig.daddr, 12);
++ skb_copy_to_linear_data(skb, (unsigned char *)&sig.addr,
++ sizeof(sig.addr));
+ wl3501_receive(this, skb->data, pkt_len);
+ skb_put(skb, pkt_len);
+ skb->protocol = eth_type_trans(skb, dev);
+@@ -1571,30 +1575,30 @@ static int wl3501_get_scan(struct net_device *dev, struct iw_request_info *info,
+ for (i = 0; i < this->bss_cnt; ++i) {
+ iwe.cmd = SIOCGIWAP;
+ iwe.u.ap_addr.sa_family = ARPHRD_ETHER;
+- memcpy(iwe.u.ap_addr.sa_data, this->bss_set[i].bssid, ETH_ALEN);
++ memcpy(iwe.u.ap_addr.sa_data, this->bss_set[i].req.bssid, ETH_ALEN);
+ current_ev = iwe_stream_add_event(info, current_ev,
+ extra + IW_SCAN_MAX_DATA,
+ &iwe, IW_EV_ADDR_LEN);
+ iwe.cmd = SIOCGIWESSID;
+ iwe.u.data.flags = 1;
+- iwe.u.data.length = this->bss_set[i].ssid.el.len;
++ iwe.u.data.length = this->bss_set[i].req.ssid.el.len;
+ current_ev = iwe_stream_add_point(info, current_ev,
+ extra + IW_SCAN_MAX_DATA,
+ &iwe,
+- this->bss_set[i].ssid.essid);
++ this->bss_set[i].req.ssid.essid);
+ iwe.cmd = SIOCGIWMODE;
+- iwe.u.mode = this->bss_set[i].bss_type;
++ iwe.u.mode = this->bss_set[i].req.bss_type;
+ current_ev = iwe_stream_add_event(info, current_ev,
+ extra + IW_SCAN_MAX_DATA,
+ &iwe, IW_EV_UINT_LEN);
+ iwe.cmd = SIOCGIWFREQ;
+- iwe.u.freq.m = this->bss_set[i].ds_pset.chan;
++ iwe.u.freq.m = this->bss_set[i].req.ds_pset.chan;
+ iwe.u.freq.e = 0;
+ current_ev = iwe_stream_add_event(info, current_ev,
+ extra + IW_SCAN_MAX_DATA,
+ &iwe, IW_EV_FREQ_LEN);
+ iwe.cmd = SIOCGIWENCODE;
+- if (this->bss_set[i].cap_info & WL3501_MGMT_CAPABILITY_PRIVACY)
++ if (this->bss_set[i].req.cap_info & WL3501_MGMT_CAPABILITY_PRIVACY)
+ iwe.u.data.flags = IW_ENCODE_ENABLED | IW_ENCODE_NOKEY;
+ else
+ iwe.u.data.flags = IW_ENCODE_DISABLED;
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 6199bce5d3a4f..36c5932bd3f22 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -2676,7 +2676,8 @@ static void nvme_set_latency_tolerance(struct device *dev, s32 val)
+
+ if (ctrl->ps_max_latency_us != latency) {
+ ctrl->ps_max_latency_us = latency;
+- nvme_configure_apst(ctrl);
++ if (ctrl->state == NVME_CTRL_LIVE)
++ nvme_configure_apst(ctrl);
+ }
+ }
+
+diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
+index 125dde3f410ee..6a9626ff07135 100644
+--- a/drivers/nvme/target/io-cmd-bdev.c
++++ b/drivers/nvme/target/io-cmd-bdev.c
+@@ -256,10 +256,9 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req)
+ if (is_pci_p2pdma_page(sg_page(req->sg)))
+ op |= REQ_NOMERGE;
+
+- sector = le64_to_cpu(req->cmd->rw.slba);
+- sector <<= (req->ns->blksize_shift - 9);
++ sector = nvmet_lba_to_sect(req->ns, req->cmd->rw.slba);
+
+- if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
++ if (nvmet_use_inline_bvec(req)) {
+ bio = &req->b.inline_bio;
+ bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
+ } else {
+@@ -345,7 +344,7 @@ static u16 nvmet_bdev_discard_range(struct nvmet_req *req,
+ int ret;
+
+ ret = __blkdev_issue_discard(ns->bdev,
+- le64_to_cpu(range->slba) << (ns->blksize_shift - 9),
++ nvmet_lba_to_sect(ns, range->slba),
+ le32_to_cpu(range->nlb) << (ns->blksize_shift - 9),
+ GFP_KERNEL, 0, bio);
+ if (ret && ret != -EOPNOTSUPP) {
+@@ -414,8 +413,7 @@ static void nvmet_bdev_execute_write_zeroes(struct nvmet_req *req)
+ if (!nvmet_check_transfer_len(req, 0))
+ return;
+
+- sector = le64_to_cpu(write_zeroes->slba) <<
+- (req->ns->blksize_shift - 9);
++ sector = nvmet_lba_to_sect(req->ns, write_zeroes->slba);
+ nr_sector = (((sector_t)le16_to_cpu(write_zeroes->length) + 1) <<
+ (req->ns->blksize_shift - 9));
+
+diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
+index 592763732065b..7f8712de77e02 100644
+--- a/drivers/nvme/target/nvmet.h
++++ b/drivers/nvme/target/nvmet.h
+@@ -603,4 +603,20 @@ static inline bool nvmet_ns_has_pi(struct nvmet_ns *ns)
+ return ns->pi_type && ns->metadata_size == sizeof(struct t10_pi_tuple);
+ }
+
++static inline __le64 nvmet_sect_to_lba(struct nvmet_ns *ns, sector_t sect)
++{
++ return cpu_to_le64(sect >> (ns->blksize_shift - SECTOR_SHIFT));
++}
++
++static inline sector_t nvmet_lba_to_sect(struct nvmet_ns *ns, __le64 lba)
++{
++ return le64_to_cpu(lba) << (ns->blksize_shift - SECTOR_SHIFT);
++}
++
++static inline bool nvmet_use_inline_bvec(struct nvmet_req *req)
++{
++ return req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN &&
++ req->sg_cnt <= NVMET_MAX_INLINE_BIOVEC;
++}
++
+ #endif /* _NVMET_H */
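The new nvmet_sect_to_lba()/nvmet_lba_to_sect() helpers centralize the shift between namespace blocks and 512-byte kernel sectors that the bdev backend previously open-coded. A userspace model of the arithmetic, assuming the kernel's SECTOR_SHIFT of 9 and eliding the __le64 conversions:

#include <stdint.h>
#include <stdio.h>

#define SECTOR_SHIFT 9

static uint64_t lba_to_sect(unsigned int blksize_shift, uint64_t lba)
{
	/* One LBA covers 2^(blksize_shift - 9) sectors. */
	return lba << (blksize_shift - SECTOR_SHIFT);
}

static uint64_t sect_to_lba(unsigned int blksize_shift, uint64_t sect)
{
	return sect >> (blksize_shift - SECTOR_SHIFT);
}

int main(void)
{
	/* Namespace formatted with 4096-byte blocks (shift 12):
	 * LBA 10 starts at sector 80, and back again.
	 */
	printf("%llu\n", (unsigned long long)lba_to_sect(12, 10));
	printf("%llu\n", (unsigned long long)sect_to_lba(12, 80));
	return 0;
}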
+diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
+index b9776fc8f08f4..df6f64870cec4 100644
+--- a/drivers/nvme/target/passthru.c
++++ b/drivers/nvme/target/passthru.c
+@@ -194,7 +194,7 @@ static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
+ if (req->sg_cnt > BIO_MAX_PAGES)
+ return -EINVAL;
+
+- if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
++ if (nvmet_use_inline_bvec(req)) {
+ bio = &req->p.inline_bio;
+ bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
+ } else {
+diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
+index 6c1f3ab7649c7..7d607f435e366 100644
+--- a/drivers/nvme/target/rdma.c
++++ b/drivers/nvme/target/rdma.c
+@@ -700,7 +700,7 @@ static void nvmet_rdma_send_done(struct ib_cq *cq, struct ib_wc *wc)
+ {
+ struct nvmet_rdma_rsp *rsp =
+ container_of(wc->wr_cqe, struct nvmet_rdma_rsp, send_cqe);
+- struct nvmet_rdma_queue *queue = cq->cq_context;
++ struct nvmet_rdma_queue *queue = wc->qp->qp_context;
+
+ nvmet_rdma_release_rsp(rsp);
+
+@@ -786,7 +786,7 @@ static void nvmet_rdma_write_data_done(struct ib_cq *cq, struct ib_wc *wc)
+ {
+ struct nvmet_rdma_rsp *rsp =
+ container_of(wc->wr_cqe, struct nvmet_rdma_rsp, write_cqe);
+- struct nvmet_rdma_queue *queue = cq->cq_context;
++ struct nvmet_rdma_queue *queue = wc->qp->qp_context;
+ struct rdma_cm_id *cm_id = rsp->queue->cm_id;
+ u16 status;
+
+diff --git a/drivers/pci/controller/pcie-brcmstb.c b/drivers/pci/controller/pcie-brcmstb.c
+index d41257f43a8f3..7cbd56d8a5ff7 100644
+--- a/drivers/pci/controller/pcie-brcmstb.c
++++ b/drivers/pci/controller/pcie-brcmstb.c
+@@ -1127,6 +1127,7 @@ static int brcm_pcie_suspend(struct device *dev)
+
+ brcm_pcie_turn_off(pcie);
+ ret = brcm_phy_stop(pcie);
++ reset_control_rearm(pcie->rescal);
+ clk_disable_unprepare(pcie->clk);
+
+ return ret;
+@@ -1142,9 +1143,13 @@ static int brcm_pcie_resume(struct device *dev)
+ base = pcie->base;
+ clk_prepare_enable(pcie->clk);
+
++ ret = reset_control_reset(pcie->rescal);
++ if (ret)
++ goto err_disable_clk;
++
+ ret = brcm_phy_start(pcie);
+ if (ret)
+- goto err;
++ goto err_reset;
+
+ /* Take bridge out of reset so we can access the SERDES reg */
+ pcie->bridge_sw_init_set(pcie, 0);
+@@ -1159,14 +1164,16 @@ static int brcm_pcie_resume(struct device *dev)
+
+ ret = brcm_pcie_setup(pcie);
+ if (ret)
+- goto err;
++ goto err_reset;
+
+ if (pcie->msi)
+ brcm_msi_set_regs(pcie->msi);
+
+ return 0;
+
+-err:
++err_reset:
++ reset_control_rearm(pcie->rescal);
++err_disable_clk:
+ clk_disable_unprepare(pcie->clk);
+ return ret;
+ }
+@@ -1176,7 +1183,7 @@ static void __brcm_pcie_remove(struct brcm_pcie *pcie)
+ brcm_msi_remove(pcie);
+ brcm_pcie_turn_off(pcie);
+ brcm_phy_stop(pcie);
+- reset_control_assert(pcie->rescal);
++ reset_control_rearm(pcie->rescal);
+ clk_disable_unprepare(pcie->clk);
+ }
+
+@@ -1251,13 +1258,13 @@ static int brcm_pcie_probe(struct platform_device *pdev)
+ return PTR_ERR(pcie->rescal);
+ }
+
+- ret = reset_control_deassert(pcie->rescal);
++ ret = reset_control_reset(pcie->rescal);
+ if (ret)
+ dev_err(&pdev->dev, "failed to deassert 'rescal'\n");
+
+ ret = brcm_phy_start(pcie);
+ if (ret) {
+- reset_control_assert(pcie->rescal);
++ reset_control_rearm(pcie->rescal);
+ clk_disable_unprepare(pcie->clk);
+ return ret;
+ }
+diff --git a/drivers/pci/controller/pcie-iproc-msi.c b/drivers/pci/controller/pcie-iproc-msi.c
+index 908475d27e0e7..eede4e8f3f75a 100644
+--- a/drivers/pci/controller/pcie-iproc-msi.c
++++ b/drivers/pci/controller/pcie-iproc-msi.c
+@@ -271,7 +271,7 @@ static int iproc_msi_irq_domain_alloc(struct irq_domain *domain,
+ NULL, NULL);
+ }
+
+- return hwirq;
++ return 0;
+ }
+
+ static void iproc_msi_irq_domain_free(struct irq_domain *domain,
+diff --git a/drivers/pci/endpoint/functions/pci-epf-test.c b/drivers/pci/endpoint/functions/pci-epf-test.c
+index e4e51d884553f..d41570715dc7f 100644
+--- a/drivers/pci/endpoint/functions/pci-epf-test.c
++++ b/drivers/pci/endpoint/functions/pci-epf-test.c
+@@ -830,13 +830,18 @@ static int pci_epf_test_bind(struct pci_epf *epf)
+ return -EINVAL;
+
+ epc_features = pci_epc_get_features(epc, epf->func_no);
+- if (epc_features) {
+- linkup_notifier = epc_features->linkup_notifier;
+- core_init_notifier = epc_features->core_init_notifier;
+- test_reg_bar = pci_epc_get_first_free_bar(epc_features);
+- pci_epf_configure_bar(epf, epc_features);
++ if (!epc_features) {
++ dev_err(&epf->dev, "epc_features not implemented\n");
++ return -EOPNOTSUPP;
+ }
+
++ linkup_notifier = epc_features->linkup_notifier;
++ core_init_notifier = epc_features->core_init_notifier;
++ test_reg_bar = pci_epc_get_first_free_bar(epc_features);
++ if (test_reg_bar < 0)
++ return -EINVAL;
++ pci_epf_configure_bar(epf, epc_features);
++
+ epf_test->test_reg_bar = test_reg_bar;
+ epf_test->epc_features = epc_features;
+
+@@ -917,6 +922,7 @@ static int __init pci_epf_test_init(void)
+
+ ret = pci_epf_register_driver(&test_driver);
+ if (ret) {
++ destroy_workqueue(kpcitest_workqueue);
+ pr_err("Failed to register pci epf test driver --> %d\n", ret);
+ return ret;
+ }
+@@ -927,6 +933,8 @@ module_init(pci_epf_test_init);
+
+ static void __exit pci_epf_test_exit(void)
+ {
++ if (kpcitest_workqueue)
++ destroy_workqueue(kpcitest_workqueue);
+ pci_epf_unregister_driver(&test_driver);
+ }
+ module_exit(pci_epf_test_exit);
+diff --git a/drivers/pci/endpoint/pci-epc-core.c b/drivers/pci/endpoint/pci-epc-core.c
+index cadd3db0cbb08..ea7e7465ce7a6 100644
+--- a/drivers/pci/endpoint/pci-epc-core.c
++++ b/drivers/pci/endpoint/pci-epc-core.c
+@@ -87,24 +87,50 @@ EXPORT_SYMBOL_GPL(pci_epc_get);
+ * pci_epc_get_first_free_bar() - helper to get first unreserved BAR
+ * @epc_features: pci_epc_features structure that holds the reserved bar bitmap
+ *
+- * Invoke to get the first unreserved BAR that can be used for endpoint
++ * Invoke to get the first unreserved BAR that can be used by the endpoint
+ * function. For any incorrect value in reserved_bar return '0'.
+ */
+-unsigned int pci_epc_get_first_free_bar(const struct pci_epc_features
+- *epc_features)
++enum pci_barno
++pci_epc_get_first_free_bar(const struct pci_epc_features *epc_features)
+ {
+- int free_bar;
++ return pci_epc_get_next_free_bar(epc_features, BAR_0);
++}
++EXPORT_SYMBOL_GPL(pci_epc_get_first_free_bar);
++
++/**
++ * pci_epc_get_next_free_bar() - helper to get unreserved BAR starting from @bar
++ * @epc_features: pci_epc_features structure that holds the reserved bar bitmap
++ * @bar: the starting BAR number from where unreserved BAR should be searched
++ *
++ * Invoke to get the next unreserved BAR starting from @bar that can be used
++ * for an endpoint function. Returns NO_BAR when no unreserved BAR is found.
++ */
++enum pci_barno pci_epc_get_next_free_bar(const struct pci_epc_features
++ *epc_features, enum pci_barno bar)
++{
++ unsigned long free_bar;
+
+ if (!epc_features)
+- return 0;
++ return BAR_0;
++
++ /* If 'bar - 1' is a 64-bit BAR, move to the next BAR */
++ if ((epc_features->bar_fixed_64bit << 1) & 1 << bar)
++ bar++;
++
++ /* Find if the reserved BAR is also a 64-bit BAR */
++ free_bar = epc_features->reserved_bar & epc_features->bar_fixed_64bit;
+
+- free_bar = ffz(epc_features->reserved_bar);
++ /* Set the adjacent bit if the reserved BAR is also a 64-bit BAR */
++ free_bar <<= 1;
++ free_bar |= epc_features->reserved_bar;
++
++ free_bar = find_next_zero_bit(&free_bar, 6, bar);
+ if (free_bar > 5)
+- return 0;
++ return NO_BAR;
+
+ return free_bar;
+ }
+-EXPORT_SYMBOL_GPL(pci_epc_get_first_free_bar);
++EXPORT_SYMBOL_GPL(pci_epc_get_next_free_bar);
+
+ /**
+ * pci_epc_get_features() - get the features supported by EPC
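pci_epc_get_next_free_bar() has to treat a 64-bit BAR as occupying two slots: the reserved mask is widened by OR-ing in its 64-bit subset shifted left by one, and the search skips @bar itself when the previous BAR is 64-bit. A userspace model of that masking (find_next_zero_bit() replaced by a plain loop):

#include <stdio.h>

static int next_free_bar(unsigned int reserved, unsigned int fixed_64bit,
			 int bar)
{
	unsigned int busy;

	/* If 'bar - 1' is a 64-bit BAR, 'bar' is its upper half. */
	if ((fixed_64bit << 1) & (1u << bar))
		bar++;

	/* Mark the second slot of every reserved 64-bit BAR busy too. */
	busy = (reserved & fixed_64bit) << 1;
	busy |= reserved;

	for (; bar < 6; bar++)
		if (!(busy & (1u << bar)))
			return bar;

	return -1;	/* NO_BAR */
}

int main(void)
{
	/* BAR0 reserved and 64-bit: BAR1 is implicitly busy,
	 * so BAR2 is the first free one.
	 */
	printf("%d\n", next_free_bar(0x1, 0x1, 0));
	return 0;
}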
+diff --git a/drivers/pci/pcie/rcec.c b/drivers/pci/pcie/rcec.c
+index 2c5c552994e4c..d0bcd141ac9c6 100644
+--- a/drivers/pci/pcie/rcec.c
++++ b/drivers/pci/pcie/rcec.c
+@@ -32,7 +32,7 @@ static bool rcec_assoc_rciep(struct pci_dev *rcec, struct pci_dev *rciep)
+
+ /* Same bus, so check bitmap */
+ for_each_set_bit(devn, &bitmap, 32)
+- if (devn == rciep->devfn)
++ if (devn == PCI_SLOT(rciep->devfn))
+ return true;
+
+ return false;
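The one-liner above matters because devfn packs the device and function numbers into one byte, while the RCEC association bitmap is indexed by device number alone. The standard macros, reproduced in a small standalone example:

#include <stdio.h>

#define PCI_SLOT(devfn)	(((devfn) >> 3) & 0x1f)
#define PCI_FUNC(devfn)	((devfn) & 0x07)

int main(void)
{
	unsigned int devfn = (4 << 3) | 2;	/* device 4, function 2 */

	/* Comparing devfn (34) against a device bitmap bit (4) never
	 * matches for functions other than 0 -- hence the fix.
	 */
	printf("devfn=%u slot=%u func=%u\n",
	       devfn, PCI_SLOT(devfn), PCI_FUNC(devfn));
	return 0;
}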
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index 953f15abc850a..be51670572fa6 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -2353,6 +2353,7 @@ static struct pci_dev *pci_scan_device(struct pci_bus *bus, int devfn)
+ pci_set_of_node(dev);
+
+ if (pci_setup_device(dev)) {
++ pci_release_of_node(dev);
+ pci_bus_put(dev->bus);
+ kfree(dev);
+ return NULL;
+diff --git a/drivers/pinctrl/samsung/pinctrl-exynos.c b/drivers/pinctrl/samsung/pinctrl-exynos.c
+index b9ea09fabf840..493079a47d054 100644
+--- a/drivers/pinctrl/samsung/pinctrl-exynos.c
++++ b/drivers/pinctrl/samsung/pinctrl-exynos.c
+@@ -55,7 +55,7 @@ static void exynos_irq_mask(struct irq_data *irqd)
+ struct exynos_irq_chip *our_chip = to_exynos_irq_chip(chip);
+ struct samsung_pin_bank *bank = irq_data_get_irq_chip_data(irqd);
+ unsigned long reg_mask = our_chip->eint_mask + bank->eint_offset;
+- unsigned long mask;
++ unsigned int mask;
+ unsigned long flags;
+
+ spin_lock_irqsave(&bank->slock, flags);
+@@ -83,7 +83,7 @@ static void exynos_irq_unmask(struct irq_data *irqd)
+ struct exynos_irq_chip *our_chip = to_exynos_irq_chip(chip);
+ struct samsung_pin_bank *bank = irq_data_get_irq_chip_data(irqd);
+ unsigned long reg_mask = our_chip->eint_mask + bank->eint_offset;
+- unsigned long mask;
++ unsigned int mask;
+ unsigned long flags;
+
+ /*
+@@ -483,7 +483,7 @@ static void exynos_irq_eint0_15(struct irq_desc *desc)
+ chained_irq_exit(chip, desc);
+ }
+
+-static inline void exynos_irq_demux_eint(unsigned long pend,
++static inline void exynos_irq_demux_eint(unsigned int pend,
+ struct irq_domain *domain)
+ {
+ unsigned int irq;
+@@ -500,8 +500,8 @@ static void exynos_irq_demux_eint16_31(struct irq_desc *desc)
+ {
+ struct irq_chip *chip = irq_desc_get_chip(desc);
+ struct exynos_muxed_weint_data *eintd = irq_desc_get_handler_data(desc);
+- unsigned long pend;
+- unsigned long mask;
++ unsigned int pend;
++ unsigned int mask;
+ int i;
+
+ chained_irq_enter(chip, desc);
+diff --git a/drivers/pwm/pwm-atmel.c b/drivers/pwm/pwm-atmel.c
+index 5813339b597b9..3292158157b68 100644
+--- a/drivers/pwm/pwm-atmel.c
++++ b/drivers/pwm/pwm-atmel.c
+@@ -319,7 +319,7 @@ static void atmel_pwm_get_state(struct pwm_chip *chip, struct pwm_device *pwm,
+
+ cdty = atmel_pwm_ch_readl(atmel_pwm, pwm->hwpwm,
+ atmel_pwm->data->regs.duty);
+- tmp = (u64)cdty * NSEC_PER_SEC;
++ tmp = (u64)(cprd - cdty) * NSEC_PER_SEC;
+ tmp <<= pres;
+ state->duty_cycle = DIV64_U64_ROUND_UP(tmp, rate);
+
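The readback fix reflects that on this controller CDTY holds the count for the inactive part of the period, so the active duty is cprd - cdty. A userspace check of the corrected formula (round-up division stands in for DIV64_U64_ROUND_UP):

#include <stdint.h>
#include <stdio.h>

#define NSEC_PER_SEC 1000000000ULL

static uint64_t duty_ns(uint32_t cprd, uint32_t cdty, unsigned int pres,
			uint64_t rate_hz)
{
	uint64_t tmp = (uint64_t)(cprd - cdty) * NSEC_PER_SEC;

	tmp <<= pres;				/* clock prescaler */
	return (tmp + rate_hz - 1) / rate_hz;	/* round up */
}

int main(void)
{
	/* 1 ms period (cprd=1000 at 1 MHz), cdty=750 -> 25% active,
	 * i.e. 250000 ns; the old formula would report 75%.
	 */
	printf("%llu ns\n",
	       (unsigned long long)duty_ns(1000, 750, 0, 1000000));
	return 0;
}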
+diff --git a/drivers/remoteproc/pru_rproc.c b/drivers/remoteproc/pru_rproc.c
+index dcb380e868dfd..549ed3fed6259 100644
+--- a/drivers/remoteproc/pru_rproc.c
++++ b/drivers/remoteproc/pru_rproc.c
+@@ -266,12 +266,17 @@ static void pru_rproc_create_debug_entries(struct rproc *rproc)
+
+ static void pru_dispose_irq_mapping(struct pru_rproc *pru)
+ {
+- while (pru->evt_count--) {
++ if (!pru->mapped_irq)
++ return;
++
++ while (pru->evt_count) {
++ pru->evt_count--;
+ if (pru->mapped_irq[pru->evt_count] > 0)
+ irq_dispose_mapping(pru->mapped_irq[pru->evt_count]);
+ }
+
+ kfree(pru->mapped_irq);
++ pru->mapped_irq = NULL;
+ }
+
+ /*
+@@ -284,7 +289,7 @@ static int pru_handle_intrmap(struct rproc *rproc)
+ struct pru_rproc *pru = rproc->priv;
+ struct pru_irq_rsc *rsc = pru->pru_interrupt_map;
+ struct irq_fwspec fwspec;
+- struct device_node *irq_parent;
++ struct device_node *parent, *irq_parent;
+ int i, ret = 0;
+
+ /* not having pru_interrupt_map is not an error */
+@@ -307,16 +312,31 @@ static int pru_handle_intrmap(struct rproc *rproc)
+ pru->evt_count = rsc->num_evts;
+ pru->mapped_irq = kcalloc(pru->evt_count, sizeof(unsigned int),
+ GFP_KERNEL);
+- if (!pru->mapped_irq)
++ if (!pru->mapped_irq) {
++ pru->evt_count = 0;
+ return -ENOMEM;
++ }
+
+ /*
+ * parse and fill in system event to interrupt channel and
+- * channel-to-host mapping
++ * channel-to-host mapping. The interrupt controller to be used
++ * for these mappings for a given PRU remoteproc is always its
++ * corresponding sibling PRUSS INTC node.
+ */
+- irq_parent = of_irq_find_parent(pru->dev->of_node);
++ parent = of_get_parent(dev_of_node(pru->dev));
++ if (!parent) {
++ kfree(pru->mapped_irq);
++ pru->mapped_irq = NULL;
++ pru->evt_count = 0;
++ return -ENODEV;
++ }
++
++ irq_parent = of_get_child_by_name(parent, "interrupt-controller");
++ of_node_put(parent);
+ if (!irq_parent) {
+ kfree(pru->mapped_irq);
++ pru->mapped_irq = NULL;
++ pru->evt_count = 0;
+ return -ENODEV;
+ }
+
+@@ -332,16 +352,20 @@ static int pru_handle_intrmap(struct rproc *rproc)
+
+ pru->mapped_irq[i] = irq_create_fwspec_mapping(&fwspec);
+ if (!pru->mapped_irq[i]) {
+- dev_err(dev, "failed to get virq\n");
+- ret = pru->mapped_irq[i];
++ dev_err(dev, "failed to get virq for fw mapping %d: event %d chnl %d host %d\n",
++ i, fwspec.param[0], fwspec.param[1],
++ fwspec.param[2]);
++ ret = -EINVAL;
+ goto map_fail;
+ }
+ }
++ of_node_put(irq_parent);
+
+ return ret;
+
+ map_fail:
+ pru_dispose_irq_mapping(pru);
++ of_node_put(irq_parent);
+
+ return ret;
+ }
+@@ -387,8 +411,7 @@ static int pru_rproc_stop(struct rproc *rproc)
+ pru_control_write_reg(pru, PRU_CTRL_CTRL, val);
+
+ /* dispose irq mapping - new firmware can provide new mapping */
+- if (pru->mapped_irq)
+- pru_dispose_irq_mapping(pru);
++ pru_dispose_irq_mapping(pru);
+
+ return 0;
+ }
+diff --git a/drivers/remoteproc/qcom_q6v5_mss.c b/drivers/remoteproc/qcom_q6v5_mss.c
+index 66106ba25ba30..14e0ce5f18f5f 100644
+--- a/drivers/remoteproc/qcom_q6v5_mss.c
++++ b/drivers/remoteproc/qcom_q6v5_mss.c
+@@ -1210,6 +1210,14 @@ static int q6v5_mpss_load(struct q6v5 *qproc)
+ goto release_firmware;
+ }
+
++ if (phdr->p_filesz > phdr->p_memsz) {
++ dev_err(qproc->dev,
++ "refusing to load segment %d with p_filesz > p_memsz\n",
++ i);
++ ret = -EINVAL;
++ goto release_firmware;
++ }
++
+ ptr = memremap(qproc->mpss_phys + offset, phdr->p_memsz, MEMREMAP_WC);
+ if (!ptr) {
+ dev_err(qproc->dev,
+@@ -1241,6 +1249,16 @@ static int q6v5_mpss_load(struct q6v5 *qproc)
+ goto release_firmware;
+ }
+
++ if (seg_fw->size != phdr->p_filesz) {
++ dev_err(qproc->dev,
++ "failed to load segment %d from truncated file %s\n",
++ i, fw_name);
++ ret = -EINVAL;
++ release_firmware(seg_fw);
++ memunmap(ptr);
++ goto release_firmware;
++ }
++
+ release_firmware(seg_fw);
+ }
+
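The two checks added above are standard firmware-loader hardening: an ELF segment's file data may legally be smaller than its memory image (the remainder is zero-filled) but never larger, and a segment file shorter than p_filesz means truncated firmware. A minimal model:

#include <stdint.h>
#include <stdio.h>

struct phdr {
	uint64_t p_filesz;
	uint64_t p_memsz;
};

static int segment_ok(const struct phdr *p, uint64_t bytes_read)
{
	if (p->p_filesz > p->p_memsz)
		return 0;		/* corrupt/malicious header */
	if (bytes_read != p->p_filesz)
		return 0;		/* truncated firmware file */
	return 1;
}

int main(void)
{
	struct phdr bad  = { .p_filesz = 4096, .p_memsz = 1024 };
	struct phdr good = { .p_filesz = 1024, .p_memsz = 4096 };

	printf("%d %d %d\n",
	       segment_ok(&bad, 4096),	/* 0: filesz > memsz */
	       segment_ok(&good, 512),	/* 0: short read */
	       segment_ok(&good, 1024));	/* 1: valid */
	return 0;
}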
+diff --git a/drivers/rpmsg/qcom_glink_native.c b/drivers/rpmsg/qcom_glink_native.c
+index 27a05167c18c3..4840886532ff7 100644
+--- a/drivers/rpmsg/qcom_glink_native.c
++++ b/drivers/rpmsg/qcom_glink_native.c
+@@ -857,6 +857,7 @@ static int qcom_glink_rx_data(struct qcom_glink *glink, size_t avail)
+ dev_err(glink->dev,
+ "no intent found for channel %s intent %d",
+ channel->name, liid);
++ ret = -ENOENT;
+ goto advance_rx;
+ }
+ }
+diff --git a/drivers/rtc/rtc-ds1307.c b/drivers/rtc/rtc-ds1307.c
+index 183cf7c01364c..c6c16961385bc 100644
+--- a/drivers/rtc/rtc-ds1307.c
++++ b/drivers/rtc/rtc-ds1307.c
+@@ -296,7 +296,11 @@ static int ds1307_get_time(struct device *dev, struct rtc_time *t)
+ t->tm_min = bcd2bin(regs[DS1307_REG_MIN] & 0x7f);
+ tmp = regs[DS1307_REG_HOUR] & 0x3f;
+ t->tm_hour = bcd2bin(tmp);
+- t->tm_wday = bcd2bin(regs[DS1307_REG_WDAY] & 0x07) - 1;
++ /* rx8130 is bit position, not BCD */
++ if (ds1307->type == rx_8130)
++ t->tm_wday = fls(regs[DS1307_REG_WDAY] & 0x7f);
++ else
++ t->tm_wday = bcd2bin(regs[DS1307_REG_WDAY] & 0x07) - 1;
+ t->tm_mday = bcd2bin(regs[DS1307_REG_MDAY] & 0x3f);
+ tmp = regs[DS1307_REG_MONTH] & 0x1f;
+ t->tm_mon = bcd2bin(tmp) - 1;
+@@ -343,7 +347,11 @@ static int ds1307_set_time(struct device *dev, struct rtc_time *t)
+ regs[DS1307_REG_SECS] = bin2bcd(t->tm_sec);
+ regs[DS1307_REG_MIN] = bin2bcd(t->tm_min);
+ regs[DS1307_REG_HOUR] = bin2bcd(t->tm_hour);
+- regs[DS1307_REG_WDAY] = bin2bcd(t->tm_wday + 1);
++ /* rx8130 is bit position, not BCD */
++ if (ds1307->type == rx_8130)
++ regs[DS1307_REG_WDAY] = 1 << t->tm_wday;
++ else
++ regs[DS1307_REG_WDAY] = bin2bcd(t->tm_wday + 1);
+ regs[DS1307_REG_MDAY] = bin2bcd(t->tm_mday);
+ regs[DS1307_REG_MONTH] = bin2bcd(t->tm_mon + 1);
+
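The RX8130 stores the weekday as a one-hot bit rather than BCD, hence the 1 << tm_wday write and the fls() read above. Note that fls() is 1-based (fls(0x08) == 4). A small standalone demonstration with a local fls substitute:

#include <stdio.h>

static int fls_u8(unsigned int x)
{
	int r = 0;

	while (x) {		/* 1-based index of highest set bit */
		r++;
		x >>= 1;
	}
	return r;
}

int main(void)
{
	int tm_wday = 3;			/* 0-based day of week */
	unsigned int reg = 1u << tm_wday;	/* write path: one-hot */

	printf("reg=0x%02x fls=%d\n", reg, fls_u8(reg & 0x7f));
	return 0;
}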
+diff --git a/drivers/rtc/rtc-fsl-ftm-alarm.c b/drivers/rtc/rtc-fsl-ftm-alarm.c
+index 57cc09d0a8067..c0df49fb978ce 100644
+--- a/drivers/rtc/rtc-fsl-ftm-alarm.c
++++ b/drivers/rtc/rtc-fsl-ftm-alarm.c
+@@ -310,6 +310,7 @@ static const struct of_device_id ftm_rtc_match[] = {
+ { .compatible = "fsl,lx2160a-ftm-alarm", },
+ { },
+ };
++MODULE_DEVICE_TABLE(of, ftm_rtc_match);
+
+ static const struct acpi_device_id ftm_imx_acpi_ids[] = {
+ {"NXP0014",},
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index dcc0f0d823db3..5d985d50eab73 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -1190,6 +1190,9 @@ static int qla24xx_post_prli_work(struct scsi_qla_host *vha, fc_port_t *fcport)
+ {
+ struct qla_work_evt *e;
+
++ if (vha->host->active_mode == MODE_TARGET)
++ return QLA_FUNCTION_FAILED;
++
+ e = qla2x00_alloc_work(vha, QLA_EVT_PRLI);
+ if (!e)
+ return QLA_FUNCTION_FAILED;
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index e53a3f89e8635..ab3a5c1b5723e 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -8577,7 +8577,7 @@ static void ufshcd_vreg_set_lpm(struct ufs_hba *hba)
+ } else if (!ufshcd_is_ufs_dev_active(hba)) {
+ ufshcd_toggle_vreg(hba->dev, hba->vreg_info.vcc, false);
+ vcc_off = true;
+- if (!ufshcd_is_link_active(hba)) {
++ if (ufshcd_is_link_hibern8(hba) || ufshcd_is_link_off(hba)) {
+ ufshcd_config_vreg_lpm(hba, hba->vreg_info.vccq);
+ ufshcd_config_vreg_lpm(hba, hba->vreg_info.vccq2);
+ }
+@@ -8599,7 +8599,7 @@ static int ufshcd_vreg_set_hpm(struct ufs_hba *hba)
+ !hba->dev_info.is_lu_power_on_wp) {
+ ret = ufshcd_setup_vreg(hba, true);
+ } else if (!ufshcd_is_ufs_dev_active(hba)) {
+- if (!ret && !ufshcd_is_link_active(hba)) {
++ if (!ufshcd_is_link_active(hba)) {
+ ret = ufshcd_config_vreg_hpm(hba, hba->vreg_info.vccq);
+ if (ret)
+ goto vcc_disable;
+@@ -8972,10 +8972,13 @@ int ufshcd_system_suspend(struct ufs_hba *hba)
+ if (!hba->is_powered)
+ return 0;
+
++ cancel_delayed_work_sync(&hba->rpm_dev_flush_recheck_work);
++
+ if ((ufs_get_pm_lvl_to_dev_pwr_mode(hba->spm_lvl) ==
+ hba->curr_dev_pwr_mode) &&
+ (ufs_get_pm_lvl_to_link_pwr_state(hba->spm_lvl) ==
+ hba->uic_link_state) &&
++ pm_runtime_suspended(hba->dev) &&
+ !hba->dev_info.b_rpm_dev_flush_capable)
+ goto out;
+
+diff --git a/drivers/soc/mediatek/mt8173-pm-domains.h b/drivers/soc/mediatek/mt8173-pm-domains.h
+index 3e8ee5dabb437..654c717e54671 100644
+--- a/drivers/soc/mediatek/mt8173-pm-domains.h
++++ b/drivers/soc/mediatek/mt8173-pm-domains.h
+@@ -12,24 +12,28 @@
+
+ static const struct scpsys_domain_data scpsys_domain_data_mt8173[] = {
+ [MT8173_POWER_DOMAIN_VDEC] = {
++ .name = "vdec",
+ .sta_mask = PWR_STATUS_VDEC,
+ .ctl_offs = SPM_VDE_PWR_CON,
+ .sram_pdn_bits = GENMASK(11, 8),
+ .sram_pdn_ack_bits = GENMASK(12, 12),
+ },
+ [MT8173_POWER_DOMAIN_VENC] = {
++ .name = "venc",
+ .sta_mask = PWR_STATUS_VENC,
+ .ctl_offs = SPM_VEN_PWR_CON,
+ .sram_pdn_bits = GENMASK(11, 8),
+ .sram_pdn_ack_bits = GENMASK(15, 12),
+ },
+ [MT8173_POWER_DOMAIN_ISP] = {
++ .name = "isp",
+ .sta_mask = PWR_STATUS_ISP,
+ .ctl_offs = SPM_ISP_PWR_CON,
+ .sram_pdn_bits = GENMASK(11, 8),
+ .sram_pdn_ack_bits = GENMASK(13, 12),
+ },
+ [MT8173_POWER_DOMAIN_MM] = {
++ .name = "mm",
+ .sta_mask = PWR_STATUS_DISP,
+ .ctl_offs = SPM_DIS_PWR_CON,
+ .sram_pdn_bits = GENMASK(11, 8),
+@@ -40,18 +44,21 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8173[] = {
+ },
+ },
+ [MT8173_POWER_DOMAIN_VENC_LT] = {
++ .name = "venc_lt",
+ .sta_mask = PWR_STATUS_VENC_LT,
+ .ctl_offs = SPM_VEN2_PWR_CON,
+ .sram_pdn_bits = GENMASK(11, 8),
+ .sram_pdn_ack_bits = GENMASK(15, 12),
+ },
+ [MT8173_POWER_DOMAIN_AUDIO] = {
++ .name = "audio",
+ .sta_mask = PWR_STATUS_AUDIO,
+ .ctl_offs = SPM_AUDIO_PWR_CON,
+ .sram_pdn_bits = GENMASK(11, 8),
+ .sram_pdn_ack_bits = GENMASK(15, 12),
+ },
+ [MT8173_POWER_DOMAIN_USB] = {
++ .name = "usb",
+ .sta_mask = PWR_STATUS_USB,
+ .ctl_offs = SPM_USB_PWR_CON,
+ .sram_pdn_bits = GENMASK(11, 8),
+@@ -59,18 +66,21 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8173[] = {
+ .caps = MTK_SCPD_ACTIVE_WAKEUP,
+ },
+ [MT8173_POWER_DOMAIN_MFG_ASYNC] = {
++ .name = "mfg_async",
+ .sta_mask = PWR_STATUS_MFG_ASYNC,
+ .ctl_offs = SPM_MFG_ASYNC_PWR_CON,
+ .sram_pdn_bits = GENMASK(11, 8),
+ .sram_pdn_ack_bits = 0,
+ },
+ [MT8173_POWER_DOMAIN_MFG_2D] = {
++ .name = "mfg_2d",
+ .sta_mask = PWR_STATUS_MFG_2D,
+ .ctl_offs = SPM_MFG_2D_PWR_CON,
+ .sram_pdn_bits = GENMASK(11, 8),
+ .sram_pdn_ack_bits = GENMASK(13, 12),
+ },
+ [MT8173_POWER_DOMAIN_MFG] = {
++ .name = "mfg",
+ .sta_mask = PWR_STATUS_MFG,
+ .ctl_offs = SPM_MFG_PWR_CON,
+ .sram_pdn_bits = GENMASK(13, 8),
+diff --git a/drivers/soc/mediatek/mt8183-pm-domains.h b/drivers/soc/mediatek/mt8183-pm-domains.h
+index 8d996c5d2682d..45dbaff4c14dd 100644
+--- a/drivers/soc/mediatek/mt8183-pm-domains.h
++++ b/drivers/soc/mediatek/mt8183-pm-domains.h
+@@ -12,12 +12,14 @@
+
+ static const struct scpsys_domain_data scpsys_domain_data_mt8183[] = {
+ [MT8183_POWER_DOMAIN_AUDIO] = {
++ .name = "audio",
+ .sta_mask = PWR_STATUS_AUDIO,
+ .ctl_offs = 0x0314,
+ .sram_pdn_bits = GENMASK(11, 8),
+ .sram_pdn_ack_bits = GENMASK(15, 12),
+ },
+ [MT8183_POWER_DOMAIN_CONN] = {
++ .name = "conn",
+ .sta_mask = PWR_STATUS_CONN,
+ .ctl_offs = 0x032c,
+ .sram_pdn_bits = 0,
+@@ -28,30 +30,35 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8183[] = {
+ },
+ },
+ [MT8183_POWER_DOMAIN_MFG_ASYNC] = {
++ .name = "mfg_async",
+ .sta_mask = PWR_STATUS_MFG_ASYNC,
+ .ctl_offs = 0x0334,
+ .sram_pdn_bits = 0,
+ .sram_pdn_ack_bits = 0,
+ },
+ [MT8183_POWER_DOMAIN_MFG] = {
++ .name = "mfg",
+ .sta_mask = PWR_STATUS_MFG,
+ .ctl_offs = 0x0338,
+ .sram_pdn_bits = GENMASK(8, 8),
+ .sram_pdn_ack_bits = GENMASK(12, 12),
+ },
+ [MT8183_POWER_DOMAIN_MFG_CORE0] = {
++ .name = "mfg_core0",
+ .sta_mask = BIT(7),
+ .ctl_offs = 0x034c,
+ .sram_pdn_bits = GENMASK(8, 8),
+ .sram_pdn_ack_bits = GENMASK(12, 12),
+ },
+ [MT8183_POWER_DOMAIN_MFG_CORE1] = {
++ .name = "mfg_core1",
+ .sta_mask = BIT(20),
+ .ctl_offs = 0x0310,
+ .sram_pdn_bits = GENMASK(8, 8),
+ .sram_pdn_ack_bits = GENMASK(12, 12),
+ },
+ [MT8183_POWER_DOMAIN_MFG_2D] = {
++ .name = "mfg_2d",
+ .sta_mask = PWR_STATUS_MFG_2D,
+ .ctl_offs = 0x0348,
+ .sram_pdn_bits = GENMASK(8, 8),
+@@ -64,6 +71,7 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8183[] = {
+ },
+ },
+ [MT8183_POWER_DOMAIN_DISP] = {
++ .name = "disp",
+ .sta_mask = PWR_STATUS_DISP,
+ .ctl_offs = 0x030c,
+ .sram_pdn_bits = GENMASK(8, 8),
+@@ -82,6 +90,7 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8183[] = {
+ },
+ },
+ [MT8183_POWER_DOMAIN_CAM] = {
++ .name = "cam",
+ .sta_mask = BIT(25),
+ .ctl_offs = 0x0344,
+ .sram_pdn_bits = GENMASK(9, 8),
+@@ -104,6 +113,7 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8183[] = {
+ },
+ },
+ [MT8183_POWER_DOMAIN_ISP] = {
++ .name = "isp",
+ .sta_mask = PWR_STATUS_ISP,
+ .ctl_offs = 0x0308,
+ .sram_pdn_bits = GENMASK(9, 8),
+@@ -126,6 +136,7 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8183[] = {
+ },
+ },
+ [MT8183_POWER_DOMAIN_VDEC] = {
++ .name = "vdec",
+ .sta_mask = BIT(31),
+ .ctl_offs = 0x0300,
+ .sram_pdn_bits = GENMASK(8, 8),
+@@ -138,6 +149,7 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8183[] = {
+ },
+ },
+ [MT8183_POWER_DOMAIN_VENC] = {
++ .name = "venc",
+ .sta_mask = PWR_STATUS_VENC,
+ .ctl_offs = 0x0304,
+ .sram_pdn_bits = GENMASK(11, 8),
+@@ -150,6 +162,7 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8183[] = {
+ },
+ },
+ [MT8183_POWER_DOMAIN_VPU_TOP] = {
++ .name = "vpu_top",
+ .sta_mask = BIT(26),
+ .ctl_offs = 0x0324,
+ .sram_pdn_bits = GENMASK(8, 8),
+@@ -176,6 +189,7 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8183[] = {
+ },
+ },
+ [MT8183_POWER_DOMAIN_VPU_CORE0] = {
++ .name = "vpu_core0",
+ .sta_mask = BIT(27),
+ .ctl_offs = 0x33c,
+ .sram_pdn_bits = GENMASK(11, 8),
+@@ -193,6 +207,7 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8183[] = {
+ .caps = MTK_SCPD_SRAM_ISO,
+ },
+ [MT8183_POWER_DOMAIN_VPU_CORE1] = {
++ .name = "vpu_core1",
+ .sta_mask = BIT(28),
+ .ctl_offs = 0x0340,
+ .sram_pdn_bits = GENMASK(11, 8),
+diff --git a/drivers/soc/mediatek/mt8192-pm-domains.h b/drivers/soc/mediatek/mt8192-pm-domains.h
+index 0fdf6dc6231f4..543dda70de014 100644
+--- a/drivers/soc/mediatek/mt8192-pm-domains.h
++++ b/drivers/soc/mediatek/mt8192-pm-domains.h
+@@ -12,6 +12,7 @@
+
+ static const struct scpsys_domain_data scpsys_domain_data_mt8192[] = {
+ [MT8192_POWER_DOMAIN_AUDIO] = {
++ .name = "audio",
+ .sta_mask = BIT(21),
+ .ctl_offs = 0x0354,
+ .sram_pdn_bits = GENMASK(8, 8),
+@@ -24,6 +25,7 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8192[] = {
+ },
+ },
+ [MT8192_POWER_DOMAIN_CONN] = {
++ .name = "conn",
+ .sta_mask = PWR_STATUS_CONN,
+ .ctl_offs = 0x0304,
+ .sram_pdn_bits = 0,
+@@ -45,12 +47,14 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8192[] = {
+ .caps = MTK_SCPD_KEEP_DEFAULT_OFF,
+ },
+ [MT8192_POWER_DOMAIN_MFG0] = {
++ .name = "mfg0",
+ .sta_mask = BIT(2),
+ .ctl_offs = 0x0308,
+ .sram_pdn_bits = GENMASK(8, 8),
+ .sram_pdn_ack_bits = GENMASK(12, 12),
+ },
+ [MT8192_POWER_DOMAIN_MFG1] = {
++ .name = "mfg1",
+ .sta_mask = BIT(3),
+ .ctl_offs = 0x030c,
+ .sram_pdn_bits = GENMASK(8, 8),
+@@ -75,36 +79,42 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8192[] = {
+ },
+ },
+ [MT8192_POWER_DOMAIN_MFG2] = {
++ .name = "mfg2",
+ .sta_mask = BIT(4),
+ .ctl_offs = 0x0310,
+ .sram_pdn_bits = GENMASK(8, 8),
+ .sram_pdn_ack_bits = GENMASK(12, 12),
+ },
+ [MT8192_POWER_DOMAIN_MFG3] = {
++ .name = "mfg3",
+ .sta_mask = BIT(5),
+ .ctl_offs = 0x0314,
+ .sram_pdn_bits = GENMASK(8, 8),
+ .sram_pdn_ack_bits = GENMASK(12, 12),
+ },
+ [MT8192_POWER_DOMAIN_MFG4] = {
++ .name = "mfg4",
+ .sta_mask = BIT(6),
+ .ctl_offs = 0x0318,
+ .sram_pdn_bits = GENMASK(8, 8),
+ .sram_pdn_ack_bits = GENMASK(12, 12),
+ },
+ [MT8192_POWER_DOMAIN_MFG5] = {
++ .name = "mfg5",
+ .sta_mask = BIT(7),
+ .ctl_offs = 0x031c,
+ .sram_pdn_bits = GENMASK(8, 8),
+ .sram_pdn_ack_bits = GENMASK(12, 12),
+ },
+ [MT8192_POWER_DOMAIN_MFG6] = {
++ .name = "mfg6",
+ .sta_mask = BIT(8),
+ .ctl_offs = 0x0320,
+ .sram_pdn_bits = GENMASK(8, 8),
+ .sram_pdn_ack_bits = GENMASK(12, 12),
+ },
+ [MT8192_POWER_DOMAIN_DISP] = {
++ .name = "disp",
+ .sta_mask = BIT(20),
+ .ctl_offs = 0x0350,
+ .sram_pdn_bits = GENMASK(8, 8),
+@@ -133,6 +143,7 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8192[] = {
+ },
+ },
+ [MT8192_POWER_DOMAIN_IPE] = {
++ .name = "ipe",
+ .sta_mask = BIT(14),
+ .ctl_offs = 0x0338,
+ .sram_pdn_bits = GENMASK(8, 8),
+@@ -149,6 +160,7 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8192[] = {
+ },
+ },
+ [MT8192_POWER_DOMAIN_ISP] = {
++ .name = "isp",
+ .sta_mask = BIT(12),
+ .ctl_offs = 0x0330,
+ .sram_pdn_bits = GENMASK(8, 8),
+@@ -165,6 +177,7 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8192[] = {
+ },
+ },
+ [MT8192_POWER_DOMAIN_ISP2] = {
++ .name = "isp2",
+ .sta_mask = BIT(13),
+ .ctl_offs = 0x0334,
+ .sram_pdn_bits = GENMASK(8, 8),
+@@ -181,6 +194,7 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8192[] = {
+ },
+ },
+ [MT8192_POWER_DOMAIN_MDP] = {
++ .name = "mdp",
+ .sta_mask = BIT(19),
+ .ctl_offs = 0x034c,
+ .sram_pdn_bits = GENMASK(8, 8),
+@@ -197,6 +211,7 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8192[] = {
+ },
+ },
+ [MT8192_POWER_DOMAIN_VENC] = {
++ .name = "venc",
+ .sta_mask = BIT(17),
+ .ctl_offs = 0x0344,
+ .sram_pdn_bits = GENMASK(8, 8),
+@@ -213,6 +228,7 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8192[] = {
+ },
+ },
+ [MT8192_POWER_DOMAIN_VDEC] = {
++ .name = "vdec",
+ .sta_mask = BIT(15),
+ .ctl_offs = 0x033c,
+ .sram_pdn_bits = GENMASK(8, 8),
+@@ -229,12 +245,14 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8192[] = {
+ },
+ },
+ [MT8192_POWER_DOMAIN_VDEC2] = {
++ .name = "vdec2",
+ .sta_mask = BIT(16),
+ .ctl_offs = 0x0340,
+ .sram_pdn_bits = GENMASK(8, 8),
+ .sram_pdn_ack_bits = GENMASK(12, 12),
+ },
+ [MT8192_POWER_DOMAIN_CAM] = {
++ .name = "cam",
+ .sta_mask = BIT(23),
+ .ctl_offs = 0x035c,
+ .sram_pdn_bits = GENMASK(8, 8),
+@@ -263,18 +281,21 @@ static const struct scpsys_domain_data scpsys_domain_data_mt8192[] = {
+ },
+ },
+ [MT8192_POWER_DOMAIN_CAM_RAWA] = {
++ .name = "cam_rawa",
+ .sta_mask = BIT(24),
+ .ctl_offs = 0x0360,
+ .sram_pdn_bits = GENMASK(8, 8),
+ .sram_pdn_ack_bits = GENMASK(12, 12),
+ },
+ [MT8192_POWER_DOMAIN_CAM_RAWB] = {
++ .name = "cam_rawb",
+ .sta_mask = BIT(25),
+ .ctl_offs = 0x0364,
+ .sram_pdn_bits = GENMASK(8, 8),
+ .sram_pdn_ack_bits = GENMASK(12, 12),
+ },
+ [MT8192_POWER_DOMAIN_CAM_RAWC] = {
++ .name = "cam_rawc",
+ .sta_mask = BIT(26),
+ .ctl_offs = 0x0368,
+ .sram_pdn_bits = GENMASK(8, 8),
+diff --git a/drivers/soc/mediatek/mtk-pm-domains.c b/drivers/soc/mediatek/mtk-pm-domains.c
+index fb70cb3b07b36..d85bf2ef95974 100644
+--- a/drivers/soc/mediatek/mtk-pm-domains.c
++++ b/drivers/soc/mediatek/mtk-pm-domains.c
+@@ -397,7 +397,11 @@ generic_pm_domain *scpsys_add_one_domain(struct scpsys *scpsys, struct device_no
+ goto err_unprepare_subsys_clocks;
+ }
+
+- pd->genpd.name = node->name;
++ if (!pd->data->name)
++ pd->genpd.name = node->name;
++ else
++ pd->genpd.name = pd->data->name;
++
+ pd->genpd.power_off = scpsys_power_off;
+ pd->genpd.power_on = scpsys_power_on;
+
+diff --git a/drivers/soc/mediatek/mtk-pm-domains.h b/drivers/soc/mediatek/mtk-pm-domains.h
+index a2f4d8f97e058..c275bbaa9b0d5 100644
+--- a/drivers/soc/mediatek/mtk-pm-domains.h
++++ b/drivers/soc/mediatek/mtk-pm-domains.h
+@@ -74,6 +74,7 @@ struct scpsys_bus_prot_data {
+
+ /**
+ * struct scpsys_domain_data - scp domain data for power on/off flow
++ * @name: The name of the power domain.
+ * @sta_mask: The mask for power on/off status bit.
+ * @ctl_offs: The offset for main power control register.
+ * @sram_pdn_bits: The mask for sram power control bits.
+@@ -83,6 +84,7 @@ struct scpsys_bus_prot_data {
+ * @bp_smi: bus protection for smi subsystem
+ */
+ struct scpsys_domain_data {
++ const char *name;
+ u32 sta_mask;
+ int ctl_offs;
+ u32 sram_pdn_bits;
+diff --git a/drivers/staging/media/rkvdec/rkvdec.c b/drivers/staging/media/rkvdec/rkvdec.c
+index b1507f29fcc56..c8305330b4580 100644
+--- a/drivers/staging/media/rkvdec/rkvdec.c
++++ b/drivers/staging/media/rkvdec/rkvdec.c
+@@ -1072,7 +1072,7 @@ static struct platform_driver rkvdec_driver = {
+ .remove = rkvdec_remove,
+ .driver = {
+ .name = "rkvdec",
+- .of_match_table = of_match_ptr(of_rkvdec_match),
++ .of_match_table = of_rkvdec_match,
+ .pm = &rkvdec_pm_ops,
+ },
+ };
+diff --git a/drivers/thermal/qcom/tsens.c b/drivers/thermal/qcom/tsens.c
+index d8ce3a687b80d..3c4c0516e58ab 100644
+--- a/drivers/thermal/qcom/tsens.c
++++ b/drivers/thermal/qcom/tsens.c
+@@ -755,8 +755,10 @@ int __init init_common(struct tsens_priv *priv)
+ for (i = VER_MAJOR; i <= VER_STEP; i++) {
+ priv->rf[i] = devm_regmap_field_alloc(dev, priv->srot_map,
+ priv->fields[i]);
+- if (IS_ERR(priv->rf[i]))
+- return PTR_ERR(priv->rf[i]);
++ if (IS_ERR(priv->rf[i])) {
++ ret = PTR_ERR(priv->rf[i]);
++ goto err_put_device;
++ }
+ }
+ ret = regmap_field_read(priv->rf[VER_MINOR], &ver_minor);
+ if (ret)
+diff --git a/drivers/thermal/thermal_of.c b/drivers/thermal/thermal_of.c
+index 69ef12f852b7d..5b76f9a1280d5 100644
+--- a/drivers/thermal/thermal_of.c
++++ b/drivers/thermal/thermal_of.c
+@@ -704,14 +704,17 @@ static int thermal_of_populate_bind_params(struct device_node *np,
+
+ count = of_count_phandle_with_args(np, "cooling-device",
+ "#cooling-cells");
+- if (!count) {
++ if (count <= 0) {
+ pr_err("Add a cooling_device property with at least one device\n");
++ ret = -ENOENT;
+ goto end;
+ }
+
+ __tcbp = kcalloc(count, sizeof(*__tcbp), GFP_KERNEL);
+- if (!__tcbp)
++ if (!__tcbp) {
++ ret = -ENOMEM;
+ goto end;
++ }
+
+ for (i = 0; i < count; i++) {
+ ret = of_parse_phandle_with_args(np, "cooling-device",
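The count <= 0 test above is the key: of_count_phandle_with_args() returns a negative errno on failure, which the old !count test silently let through to the allocation below. A tiny model of the difference:

#include <stdio.h>

static int count_phandles(int simulate_err)
{
	return simulate_err ? -22 /* -EINVAL */ : 2;
}

int main(void)
{
	int count = count_phandles(1);	/* returns -EINVAL */

	if (!count)
		printf("old check would trip here\n");
	else
		printf("old check misses count=%d\n", count);

	if (count <= 0)
		printf("new check rejects count=%d\n", count);
	return 0;
}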
+diff --git a/drivers/usb/class/cdc-wdm.c b/drivers/usb/class/cdc-wdm.c
+index 508b1c3f8b731..d1e4a7379bebd 100644
+--- a/drivers/usb/class/cdc-wdm.c
++++ b/drivers/usb/class/cdc-wdm.c
+@@ -321,12 +321,23 @@ exit:
+
+ }
+
+-static void kill_urbs(struct wdm_device *desc)
++static void poison_urbs(struct wdm_device *desc)
+ {
+ /* the order here is essential */
+- usb_kill_urb(desc->command);
+- usb_kill_urb(desc->validity);
+- usb_kill_urb(desc->response);
++ usb_poison_urb(desc->command);
++ usb_poison_urb(desc->validity);
++ usb_poison_urb(desc->response);
++}
++
++static void unpoison_urbs(struct wdm_device *desc)
++{
++ /*
++ * the order here is not essential
++ * it is symmetrical just to be nice
++ */
++ usb_unpoison_urb(desc->response);
++ usb_unpoison_urb(desc->validity);
++ usb_unpoison_urb(desc->command);
+ }
+
+ static void free_urbs(struct wdm_device *desc)
+@@ -741,11 +752,12 @@ static int wdm_release(struct inode *inode, struct file *file)
+ if (!desc->count) {
+ if (!test_bit(WDM_DISCONNECTING, &desc->flags)) {
+ dev_dbg(&desc->intf->dev, "wdm_release: cleanup\n");
+- kill_urbs(desc);
++ poison_urbs(desc);
+ spin_lock_irq(&desc->iuspin);
+ desc->resp_count = 0;
+ spin_unlock_irq(&desc->iuspin);
+ desc->manage_power(desc->intf, 0);
++ unpoison_urbs(desc);
+ } else {
+ /* must avoid dev_printk here as desc->intf is invalid */
+ pr_debug(KBUILD_MODNAME " %s: device gone - cleaning up\n", __func__);
+@@ -1037,9 +1049,9 @@ static void wdm_disconnect(struct usb_interface *intf)
+ wake_up_all(&desc->wait);
+ mutex_lock(&desc->rlock);
+ mutex_lock(&desc->wlock);
++ poison_urbs(desc);
+ cancel_work_sync(&desc->rxwork);
+ cancel_work_sync(&desc->service_outs_intr);
+- kill_urbs(desc);
+ mutex_unlock(&desc->wlock);
+ mutex_unlock(&desc->rlock);
+
+@@ -1080,9 +1092,10 @@ static int wdm_suspend(struct usb_interface *intf, pm_message_t message)
+ set_bit(WDM_SUSPENDING, &desc->flags);
+ spin_unlock_irq(&desc->iuspin);
+ /* callback submits work - order is essential */
+- kill_urbs(desc);
++ poison_urbs(desc);
+ cancel_work_sync(&desc->rxwork);
+ cancel_work_sync(&desc->service_outs_intr);
++ unpoison_urbs(desc);
+ }
+ if (!PMSG_IS_AUTO(message)) {
+ mutex_unlock(&desc->wlock);
+@@ -1140,7 +1153,7 @@ static int wdm_pre_reset(struct usb_interface *intf)
+ wake_up_all(&desc->wait);
+ mutex_lock(&desc->rlock);
+ mutex_lock(&desc->wlock);
+- kill_urbs(desc);
++ poison_urbs(desc);
+ cancel_work_sync(&desc->rxwork);
+ cancel_work_sync(&desc->service_outs_intr);
+ return 0;
+@@ -1151,6 +1164,7 @@ static int wdm_post_reset(struct usb_interface *intf)
+ struct wdm_device *desc = wdm_find_device(intf);
+ int rv;
+
++ unpoison_urbs(desc);
+ clear_bit(WDM_OVERFLOW, &desc->flags);
+ clear_bit(WDM_RESETTING, &desc->flags);
+ rv = recover_from_urb_loss(desc);
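Switching from usb_kill_urb() to usb_poison_urb() closes a resubmission race: a poisoned URB stays rejected (the kernel returns -EPERM on submit) until usb_unpoison_urb(), so work items completing after the cancel cannot re-arm it. A userspace model of the discipline, assuming submit/poison carry those semantics:

#include <stdio.h>

struct urb {
	int poisoned;
	int in_flight;
};

static int submit(struct urb *u)
{
	if (u->poisoned)
		return -1;	/* kernel returns -EPERM here */
	u->in_flight = 1;
	return 0;
}

static void poison(struct urb *u)
{
	u->poisoned = 1;	/* block future submissions... */
	u->in_flight = 0;	/* ...and cancel the current one */
}

static void unpoison(struct urb *u)
{
	u->poisoned = 0;
}

int main(void)
{
	struct urb u = { 0 };

	submit(&u);
	poison(&u);				/* e.g. suspend path */
	printf("resubmit: %d\n", submit(&u));	/* rejected: -1 */
	unpoison(&u);				/* e.g. resume path */
	printf("resubmit: %d\n", submit(&u));	/* accepted: 0 */
	return 0;
}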
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 404507d1b76f1..13fe37fbbd2c8 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -3593,9 +3593,6 @@ int usb_port_resume(struct usb_device *udev, pm_message_t msg)
+ * sequence.
+ */
+ status = hub_port_status(hub, port1, &portstatus, &portchange);
+-
+- /* TRSMRCY = 10 msec */
+- msleep(10);
+ }
+
+ SuspendCleared:
+@@ -3610,6 +3607,9 @@ int usb_port_resume(struct usb_device *udev, pm_message_t msg)
+ usb_clear_port_feature(hub->hdev, port1,
+ USB_PORT_FEAT_C_SUSPEND);
+ }
++
++ /* TRSMRCY = 10 msec */
++ msleep(10);
+ }
+
+ if (udev->persist_enabled)
+diff --git a/drivers/usb/dwc2/core.h b/drivers/usb/dwc2/core.h
+index 7161344c65221..641e4251cb7f1 100644
+--- a/drivers/usb/dwc2/core.h
++++ b/drivers/usb/dwc2/core.h
+@@ -112,6 +112,7 @@ struct dwc2_hsotg_req;
+ * @debugfs: File entry for debugfs file for this endpoint.
+ * @dir_in: Set to true if this endpoint is of the IN direction, which
+ * means that it is sending data to the Host.
++ * @map_dir: Set to the value of dir_in when the DMA buffer is mapped.
+ * @index: The index for the endpoint registers.
+ * @mc: Multi Count - number of transactions per microframe
+ * @interval: Interval for periodic endpoints, in frames or microframes.
+@@ -161,6 +162,7 @@ struct dwc2_hsotg_ep {
+ unsigned short fifo_index;
+
+ unsigned char dir_in;
++ unsigned char map_dir;
+ unsigned char index;
+ unsigned char mc;
+ u16 interval;
+diff --git a/drivers/usb/dwc2/gadget.c b/drivers/usb/dwc2/gadget.c
+index ad4c94366dadf..d2f623d83bf78 100644
+--- a/drivers/usb/dwc2/gadget.c
++++ b/drivers/usb/dwc2/gadget.c
+@@ -422,7 +422,7 @@ static void dwc2_hsotg_unmap_dma(struct dwc2_hsotg *hsotg,
+ {
+ struct usb_request *req = &hs_req->req;
+
+- usb_gadget_unmap_request(&hsotg->gadget, req, hs_ep->dir_in);
++ usb_gadget_unmap_request(&hsotg->gadget, req, hs_ep->map_dir);
+ }
+
+ /*
+@@ -1242,6 +1242,7 @@ static int dwc2_hsotg_map_dma(struct dwc2_hsotg *hsotg,
+ {
+ int ret;
+
++ hs_ep->map_dir = hs_ep->dir_in;
+ ret = usb_gadget_map_request(&hsotg->gadget, req, hs_ep->dir_in);
+ if (ret)
+ goto dma_error;
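
The dwc2 fix above is the capture-at-acquire pattern: the endpoint's direction can be flipped between mapping and unmapping a DMA buffer (ep0 alternates between IN and OUT), so the direction used for the map is recorded in the new map_dir field and reused at unmap time. A reduced sketch of the rule (toy types for illustration, not the real driver):

#include <assert.h>
#include <stdbool.h>

struct toy_ep {
        bool dir_in;    /* live direction, may change at any time */
        bool map_dir;   /* direction captured when the buffer was mapped */
};

static void toy_map(struct toy_ep *ep)
{
        ep->map_dir = ep->dir_in;       /* capture at map time */
        /* dma_map(..., ep->map_dir) would happen here */
}

static void toy_unmap(struct toy_ep *ep)
{
        /* unmap with the captured direction, never the live one */
        assert(ep->map_dir == true);    /* still the direction we mapped with */
}

int main(void)
{
        struct toy_ep ep = { .dir_in = true };

        toy_map(&ep);
        ep.dir_in = false;      /* direction flips mid-transfer */
        toy_unmap(&ep);         /* would be wrong if it used dir_in */
        return 0;
}
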
+diff --git a/drivers/usb/dwc3/dwc3-omap.c b/drivers/usb/dwc3/dwc3-omap.c
+index 3db17806e92e7..e196673f5c647 100644
+--- a/drivers/usb/dwc3/dwc3-omap.c
++++ b/drivers/usb/dwc3/dwc3-omap.c
+@@ -437,8 +437,13 @@ static int dwc3_omap_extcon_register(struct dwc3_omap *omap)
+
+ if (extcon_get_state(edev, EXTCON_USB) == true)
+ dwc3_omap_set_mailbox(omap, OMAP_DWC3_VBUS_VALID);
++ else
++ dwc3_omap_set_mailbox(omap, OMAP_DWC3_VBUS_OFF);
++
+ if (extcon_get_state(edev, EXTCON_USB_HOST) == true)
+ dwc3_omap_set_mailbox(omap, OMAP_DWC3_ID_GROUND);
++ else
++ dwc3_omap_set_mailbox(omap, OMAP_DWC3_ID_FLOAT);
+
+ omap->edev = edev;
+ }
+diff --git a/drivers/usb/dwc3/dwc3-pci.c b/drivers/usb/dwc3/dwc3-pci.c
+index 598daed8086f6..17117870f6cea 100644
+--- a/drivers/usb/dwc3/dwc3-pci.c
++++ b/drivers/usb/dwc3/dwc3-pci.c
+@@ -120,6 +120,7 @@ static const struct property_entry dwc3_pci_mrfld_properties[] = {
+ PROPERTY_ENTRY_STRING("linux,extcon-name", "mrfld_bcove_pwrsrc"),
+ PROPERTY_ENTRY_BOOL("snps,dis_u3_susphy_quirk"),
+ PROPERTY_ENTRY_BOOL("snps,dis_u2_susphy_quirk"),
++ PROPERTY_ENTRY_BOOL("snps,usb2-gadget-lpm-disable"),
+ PROPERTY_ENTRY_BOOL("linux,sysdev_is_parent"),
+ {}
+ };
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 84d1487e9f060..acf57a98969dc 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -1676,7 +1676,9 @@ static int __dwc3_gadget_ep_queue(struct dwc3_ep *dep, struct dwc3_request *req)
+ }
+ }
+
+- return __dwc3_gadget_kick_transfer(dep);
++ __dwc3_gadget_kick_transfer(dep);
++
++ return 0;
+ }
+
+ static int dwc3_gadget_ep_queue(struct usb_ep *ep, struct usb_request *request,
+@@ -2206,6 +2208,10 @@ static void dwc3_gadget_enable_irq(struct dwc3 *dwc)
+ if (DWC3_VER_IS_PRIOR(DWC3, 250A))
+ reg |= DWC3_DEVTEN_ULSTCNGEN;
+
++ /* On 2.30a and above this bit enables U3/L2-L1 Suspend Events */
++ if (!DWC3_VER_IS_PRIOR(DWC3, 230A))
++ reg |= DWC3_DEVTEN_EOPFEN;
++
+ dwc3_writel(dwc->regs, DWC3_DEVTEN, reg);
+ }
+
+@@ -3948,8 +3954,9 @@ err0:
+
+ void dwc3_gadget_exit(struct dwc3 *dwc)
+ {
+- usb_del_gadget_udc(dwc->gadget);
++ usb_del_gadget(dwc->gadget);
+ dwc3_gadget_free_endpoints(dwc);
++ usb_put_gadget(dwc->gadget);
+ dma_free_coherent(dwc->sysdev, DWC3_BOUNCE_SIZE, dwc->bounce,
+ dwc->bounce_addr);
+ kfree(dwc->setup_buf);
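
The last dwc3 hunk above splits teardown of a refcounted object in two: usb_del_gadget() unregisters the gadget but leaves it alive, the endpoints are freed while the pointer is still valid, and usb_put_gadget() drops the final reference afterwards. A minimal refcount sketch of that ordering (assumed semantics for illustration, not the real gadget API):

#include <stdio.h>
#include <stdlib.h>

struct gadget {
        int refs;
};

static struct gadget *gadget_new(void)
{
        struct gadget *g = malloc(sizeof(*g));

        g->refs = 1;            /* caller holds the initial reference */
        return g;
}

static void gadget_unregister(struct gadget *g)
{
        /* stop new users; the object itself stays valid */
        printf("unregistered, refs=%d\n", g->refs);
}

static void gadget_put(struct gadget *g)
{
        if (--g->refs == 0) {
                printf("freed\n");
                free(g);
        }
}

int main(void)
{
        struct gadget *g = gadget_new();

        gadget_unregister(g);           /* usb_del_gadget() */
        printf("free endpoints\n");     /* safe: g is still alive here */
        gadget_put(g);                  /* usb_put_gadget(): last ref */
        return 0;
}
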
+diff --git a/drivers/usb/host/fotg210-hcd.c b/drivers/usb/host/fotg210-hcd.c
+index 5617ef30530a6..f0e4a315cc81b 100644
+--- a/drivers/usb/host/fotg210-hcd.c
++++ b/drivers/usb/host/fotg210-hcd.c
+@@ -5568,7 +5568,7 @@ static int fotg210_hcd_probe(struct platform_device *pdev)
+ struct usb_hcd *hcd;
+ struct resource *res;
+ int irq;
+- int retval = -ENODEV;
++ int retval;
+ struct fotg210_hcd *fotg210;
+
+ if (usb_disabled())
+@@ -5588,7 +5588,7 @@ static int fotg210_hcd_probe(struct platform_device *pdev)
+ hcd = usb_create_hcd(&fotg210_fotg210_hc_driver, dev,
+ dev_name(dev));
+ if (!hcd) {
+- dev_err(dev, "failed to create hcd with err %d\n", retval);
++ dev_err(dev, "failed to create hcd\n");
+ retval = -ENOMEM;
+ goto fail_create_hcd;
+ }
+diff --git a/drivers/usb/host/xhci-ext-caps.h b/drivers/usb/host/xhci-ext-caps.h
+index fa59b242cd515..e8af0a125f84b 100644
+--- a/drivers/usb/host/xhci-ext-caps.h
++++ b/drivers/usb/host/xhci-ext-caps.h
+@@ -7,8 +7,9 @@
+ * Author: Sarah Sharp
+ * Some code borrowed from the Linux EHCI driver.
+ */
+-/* Up to 16 ms to halt an HC */
+-#define XHCI_MAX_HALT_USEC (16*1000)
++
++/* HC should halt within 16 ms, but use 32 ms as some hosts take longer */
++#define XHCI_MAX_HALT_USEC (32 * 1000)
+ /* HC not running - set to 1 when run/stop bit is cleared. */
+ #define XHCI_STS_HALT (1<<0)
+
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 5bbccc9a0179f..7bc18cf8042cc 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -57,6 +57,7 @@
+ #define PCI_DEVICE_ID_INTEL_CML_XHCI 0xa3af
+ #define PCI_DEVICE_ID_INTEL_TIGER_LAKE_XHCI 0x9a13
+ #define PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI 0x1138
++#define PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI 0x461e
+
+ #define PCI_DEVICE_ID_AMD_PROMONTORYA_4 0x43b9
+ #define PCI_DEVICE_ID_AMD_PROMONTORYA_3 0x43ba
+@@ -166,8 +167,10 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ (pdev->device == 0x15e0 || pdev->device == 0x15e1))
+ xhci->quirks |= XHCI_SNPS_BROKEN_SUSPEND;
+
+- if (pdev->vendor == PCI_VENDOR_ID_AMD && pdev->device == 0x15e5)
++ if (pdev->vendor == PCI_VENDOR_ID_AMD && pdev->device == 0x15e5) {
+ xhci->quirks |= XHCI_DISABLE_SPARSE;
++ xhci->quirks |= XHCI_RESET_ON_RESUME;
++ }
+
+ if (pdev->vendor == PCI_VENDOR_ID_AMD)
+ xhci->quirks |= XHCI_TRUST_TX_LENGTH;
+@@ -243,7 +246,8 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ pdev->device == PCI_DEVICE_ID_INTEL_TITAN_RIDGE_DD_XHCI ||
+ pdev->device == PCI_DEVICE_ID_INTEL_ICE_LAKE_XHCI ||
+ pdev->device == PCI_DEVICE_ID_INTEL_TIGER_LAKE_XHCI ||
+- pdev->device == PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI))
++ pdev->device == PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI ||
++ pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI))
+ xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
+
+ if (pdev->vendor == PCI_VENDOR_ID_ETRON &&
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 66147f9179e59..e81f4175e2ebc 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -1523,7 +1523,7 @@ static int xhci_configure_endpoint(struct xhci_hcd *xhci,
+ * we need to issue an evaluate context command and wait on it.
+ */
+ static int xhci_check_maxpacket(struct xhci_hcd *xhci, unsigned int slot_id,
+- unsigned int ep_index, struct urb *urb)
++ unsigned int ep_index, struct urb *urb, gfp_t mem_flags)
+ {
+ struct xhci_container_ctx *out_ctx;
+ struct xhci_input_control_ctx *ctrl_ctx;
+@@ -1554,7 +1554,7 @@ static int xhci_check_maxpacket(struct xhci_hcd *xhci, unsigned int slot_id,
+ * changes max packet sizes.
+ */
+
+- command = xhci_alloc_command(xhci, true, GFP_KERNEL);
++ command = xhci_alloc_command(xhci, true, mem_flags);
+ if (!command)
+ return -ENOMEM;
+
+@@ -1648,7 +1648,7 @@ static int xhci_urb_enqueue(struct usb_hcd *hcd, struct urb *urb, gfp_t mem_flag
+ */
+ if (urb->dev->speed == USB_SPEED_FULL) {
+ ret = xhci_check_maxpacket(xhci, slot_id,
+- ep_index, urb);
++ ep_index, urb, mem_flags);
+ if (ret < 0) {
+ xhci_urb_free_priv(urb_priv);
+ urb->hcpriv = NULL;
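
The xhci fix above threads the caller's allocation flags through xhci_check_maxpacket() instead of hard-coding GFP_KERNEL: the URB enqueue path can run in atomic context, where a sleeping allocation is forbidden. A toy before/after of the shape (the TOY_* flags are invented stand-ins for GFP_KERNEL/GFP_ATOMIC):

#include <stdio.h>
#include <stdlib.h>

#define TOY_MAY_SLEEP   0x1     /* like GFP_KERNEL */
#define TOY_ATOMIC      0x0     /* like GFP_ATOMIC */

static void *toy_alloc(size_t n, int flags)
{
        if (!(flags & TOY_MAY_SLEEP))
                printf("atomic allocation: must not block\n");
        return malloc(n);
}

/* before: the helper ignored the caller's context */
static void *check_maxpacket_old(void)
{
        return toy_alloc(64, TOY_MAY_SLEEP);    /* wrong in IRQ context */
}

/* after: the flags are forwarded from the enqueue path */
static void *check_maxpacket_new(int mem_flags)
{
        return toy_alloc(64, mem_flags);
}

int main(void)
{
        free(check_maxpacket_old());
        free(check_maxpacket_new(TOY_ATOMIC)); /* caller holds a spinlock */
        return 0;
}
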
+diff --git a/drivers/usb/musb/mediatek.c b/drivers/usb/musb/mediatek.c
+index eebeadd269461..6b92d037d8fc8 100644
+--- a/drivers/usb/musb/mediatek.c
++++ b/drivers/usb/musb/mediatek.c
+@@ -518,8 +518,8 @@ static int mtk_musb_probe(struct platform_device *pdev)
+
+ glue->xceiv = devm_usb_get_phy(dev, USB_PHY_TYPE_USB2);
+ if (IS_ERR(glue->xceiv)) {
+- dev_err(dev, "fail to getting usb-phy %d\n", ret);
+ ret = PTR_ERR(glue->xceiv);
++ dev_err(dev, "fail to getting usb-phy %d\n", ret);
+ goto err_unregister_usb_phy;
+ }
+
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index c2bdfeb60e4f3..b237ed8046fbb 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -2546,10 +2546,10 @@ static unsigned int tcpm_pd_select_pps_apdo(struct tcpm_port *port)
+ port->pps_data.req_max_volt = min(pdo_pps_apdo_max_voltage(src),
+ pdo_pps_apdo_max_voltage(snk));
+ port->pps_data.req_max_curr = min_pps_apdo_current(src, snk);
+- port->pps_data.req_out_volt = min(port->pps_data.max_volt,
+- max(port->pps_data.min_volt,
++ port->pps_data.req_out_volt = min(port->pps_data.req_max_volt,
++ max(port->pps_data.req_min_volt,
+ port->pps_data.req_out_volt));
+- port->pps_data.req_op_curr = min(port->pps_data.max_curr,
++ port->pps_data.req_op_curr = min(port->pps_data.req_max_curr,
+ port->pps_data.req_op_curr);
+ }
+
+diff --git a/drivers/usb/typec/ucsi/ucsi.c b/drivers/usb/typec/ucsi/ucsi.c
+index f02958927cbd8..89055a05aa41d 100644
+--- a/drivers/usb/typec/ucsi/ucsi.c
++++ b/drivers/usb/typec/ucsi/ucsi.c
+@@ -495,7 +495,8 @@ static void ucsi_unregister_altmodes(struct ucsi_connector *con, u8 recipient)
+ }
+ }
+
+-static void ucsi_get_pdos(struct ucsi_connector *con, int is_partner)
++static int ucsi_get_pdos(struct ucsi_connector *con, int is_partner,
++ u32 *pdos, int offset, int num_pdos)
+ {
+ struct ucsi *ucsi = con->ucsi;
+ u64 command;
+@@ -503,17 +504,39 @@ static void ucsi_get_pdos(struct ucsi_connector *con, int is_partner)
+
+ command = UCSI_COMMAND(UCSI_GET_PDOS) | UCSI_CONNECTOR_NUMBER(con->num);
+ command |= UCSI_GET_PDOS_PARTNER_PDO(is_partner);
+- command |= UCSI_GET_PDOS_NUM_PDOS(UCSI_MAX_PDOS - 1);
++ command |= UCSI_GET_PDOS_PDO_OFFSET(offset);
++ command |= UCSI_GET_PDOS_NUM_PDOS(num_pdos - 1);
+ command |= UCSI_GET_PDOS_SRC_PDOS;
+- ret = ucsi_send_command(ucsi, command, con->src_pdos,
+- sizeof(con->src_pdos));
+- if (ret < 0) {
++ ret = ucsi_send_command(ucsi, command, pdos + offset,
++ num_pdos * sizeof(u32));
++ if (ret < 0)
+ dev_err(ucsi->dev, "UCSI_GET_PDOS failed (%d)\n", ret);
++ if (ret == 0 && offset == 0)
++ dev_warn(ucsi->dev, "UCSI_GET_PDOS returned 0 bytes\n");
++
++ return ret;
++}
++
++static void ucsi_get_src_pdos(struct ucsi_connector *con, int is_partner)
++{
++ int ret;
++
++ /* UCSI max payload means only getting at most 4 PDOs at a time */
++ ret = ucsi_get_pdos(con, 1, con->src_pdos, 0, UCSI_MAX_PDOS);
++ if (ret < 0)
+ return;
+- }
++
+ con->num_pdos = ret / sizeof(u32); /* number of bytes to 32-bit PDOs */
+- if (ret == 0)
+- dev_warn(ucsi->dev, "UCSI_GET_PDOS returned 0 bytes\n");
++ if (con->num_pdos < UCSI_MAX_PDOS)
++ return;
++
++ /* get the remaining PDOs, if any */
++ ret = ucsi_get_pdos(con, 1, con->src_pdos, UCSI_MAX_PDOS,
++ PDO_MAX_OBJECTS - UCSI_MAX_PDOS);
++ if (ret < 0)
++ return;
++
++ con->num_pdos += ret / sizeof(u32);
+ }
+
+ static void ucsi_pwr_opmode_change(struct ucsi_connector *con)
+@@ -522,7 +545,7 @@ static void ucsi_pwr_opmode_change(struct ucsi_connector *con)
+ case UCSI_CONSTAT_PWR_OPMODE_PD:
+ con->rdo = con->status.request_data_obj;
+ typec_set_pwr_opmode(con->port, TYPEC_PWR_MODE_PD);
+- ucsi_get_pdos(con, 1);
++ ucsi_get_src_pdos(con, 1);
+ break;
+ case UCSI_CONSTAT_PWR_OPMODE_TYPEC1_5:
+ con->rdo = 0;
+@@ -972,6 +995,7 @@ static const struct typec_operations ucsi_ops = {
+ .pr_set = ucsi_pr_swap
+ };
+
++/* Caller must call fwnode_handle_put() after use */
+ static struct fwnode_handle *ucsi_find_fwnode(struct ucsi_connector *con)
+ {
+ struct fwnode_handle *fwnode;
+@@ -1005,7 +1029,7 @@ static int ucsi_register_port(struct ucsi *ucsi, int index)
+ command |= UCSI_CONNECTOR_NUMBER(con->num);
+ ret = ucsi_send_command(ucsi, command, &con->cap, sizeof(con->cap));
+ if (ret < 0)
+- goto out;
++ goto out_unlock;
+
+ if (con->cap.op_mode & UCSI_CONCAP_OPMODE_DRP)
+ cap->data = TYPEC_PORT_DRD;
+@@ -1101,6 +1125,8 @@ static int ucsi_register_port(struct ucsi *ucsi, int index)
+ trace_ucsi_register_port(con->num, &con->status);
+
+ out:
++ fwnode_handle_put(cap->fwnode);
++out_unlock:
+ mutex_unlock(&con->lock);
+ return ret;
+ }
+diff --git a/drivers/usb/typec/ucsi/ucsi.h b/drivers/usb/typec/ucsi/ucsi.h
+index dd9ba60ab4a30..fce23ad16c6d0 100644
+--- a/drivers/usb/typec/ucsi/ucsi.h
++++ b/drivers/usb/typec/ucsi/ucsi.h
+@@ -8,6 +8,7 @@
+ #include <linux/power_supply.h>
+ #include <linux/types.h>
+ #include <linux/usb/typec.h>
++#include <linux/usb/pd.h>
+
+ /* -------------------------------------------------------------------------- */
+
+@@ -133,7 +134,9 @@ void ucsi_connector_change(struct ucsi *ucsi, u8 num);
+
+ /* GET_PDOS command bits */
+ #define UCSI_GET_PDOS_PARTNER_PDO(_r_) ((u64)(_r_) << 23)
++#define UCSI_GET_PDOS_PDO_OFFSET(_r_) ((u64)(_r_) << 24)
+ #define UCSI_GET_PDOS_NUM_PDOS(_r_) ((u64)(_r_) << 32)
++#define UCSI_MAX_PDOS (4)
+ #define UCSI_GET_PDOS_SRC_PDOS ((u64)1 << 34)
+
+ /* -------------------------------------------------------------------------- */
+@@ -301,7 +304,6 @@ struct ucsi {
+
+ #define UCSI_MAX_SVID 5
+ #define UCSI_MAX_ALTMODES (UCSI_MAX_SVID * 6)
+-#define UCSI_MAX_PDOS (4)
+
+ #define UCSI_TYPEC_VSAFE5V 5000
+ #define UCSI_TYPEC_1_5_CURRENT 1500
+@@ -329,7 +331,7 @@ struct ucsi_connector {
+ struct power_supply *psy;
+ struct power_supply_desc psy_desc;
+ u32 rdo;
+- u32 src_pdos[UCSI_MAX_PDOS];
++ u32 src_pdos[PDO_MAX_OBJECTS];
+ int num_pdos;
+ };
+
+diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
+index 5447c5156b2e6..b9651f797676c 100644
+--- a/drivers/xen/gntdev.c
++++ b/drivers/xen/gntdev.c
+@@ -1005,8 +1005,10 @@ static int gntdev_mmap(struct file *flip, struct vm_area_struct *vma)
+ err = mmu_interval_notifier_insert_locked(
+ &map->notifier, vma->vm_mm, vma->vm_start,
+ vma->vm_end - vma->vm_start, &gntdev_mmu_ops);
+- if (err)
++ if (err) {
++ map->vma = NULL;
+ goto out_unlock_put;
++ }
+ }
+ mutex_unlock(&priv->lock);
+
+diff --git a/drivers/xen/unpopulated-alloc.c b/drivers/xen/unpopulated-alloc.c
+index e64e6befc63b7..87e6b7db892f5 100644
+--- a/drivers/xen/unpopulated-alloc.c
++++ b/drivers/xen/unpopulated-alloc.c
+@@ -39,8 +39,10 @@ static int fill_list(unsigned int nr_pages)
+ }
+
+ pgmap = kzalloc(sizeof(*pgmap), GFP_KERNEL);
+- if (!pgmap)
++ if (!pgmap) {
++ ret = -ENOMEM;
+ goto err_pgmap;
++ }
+
+ pgmap->type = MEMORY_DEVICE_GENERIC;
+ pgmap->range = (struct range) {
+diff --git a/fs/9p/vfs_file.c b/fs/9p/vfs_file.c
+index 649f04f112dc2..59c32c9b799fc 100644
+--- a/fs/9p/vfs_file.c
++++ b/fs/9p/vfs_file.c
+@@ -86,8 +86,8 @@ int v9fs_file_open(struct inode *inode, struct file *file)
+ * to work.
+ */
+ writeback_fid = v9fs_writeback_fid(file_dentry(file));
+- if (IS_ERR(fid)) {
+- err = PTR_ERR(fid);
++ if (IS_ERR(writeback_fid)) {
++ err = PTR_ERR(writeback_fid);
+ mutex_unlock(&v9inode->v_mutex);
+ goto out_error;
+ }
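
The 9p fix above is a copy-and-paste bug: the code allocated writeback_fid but then tested the older fid variable, so a failed allocation slipped through as success. A reduced user-space version of the broken and fixed shape (plain pointers stand in for fids):

#include <stdio.h>
#include <stdlib.h>

static void *get_writeback_fid(int fail)
{
        return fail ? NULL : malloc(16);
}

int main(void)
{
        void *fid = malloc(16);                 /* earlier, valid object */
        void *writeback_fid = get_writeback_fid(1);

        /* the buggy shape tested the wrong variable:
         *      if (!fid) ...
         * the fix tests the object that was just created: */
        if (!writeback_fid) {
                fprintf(stderr, "writeback fid allocation failed\n");
                free(fid);
                return 1;
        }
        free(writeback_fid);
        free(fid);
        return 0;
}
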
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index 0c8c55a41d7b2..c6e0f7a647cca 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -3104,7 +3104,7 @@ int btrfs_truncate_inode_items(struct btrfs_trans_handle *trans,
+ struct btrfs_inode *inode, u64 new_size,
+ u32 min_type);
+
+-int btrfs_start_delalloc_snapshot(struct btrfs_root *root);
++int btrfs_start_delalloc_snapshot(struct btrfs_root *root, bool in_reclaim_context);
+ int btrfs_start_delalloc_roots(struct btrfs_fs_info *fs_info, u64 nr,
+ bool in_reclaim_context);
+ int btrfs_set_extent_delalloc(struct btrfs_inode *inode, u64 start, u64 end,
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index f851a1a63833d..f6aae90f83e6e 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -2082,6 +2082,30 @@ static int start_ordered_ops(struct inode *inode, loff_t start, loff_t end)
+ return ret;
+ }
+
++static inline bool skip_inode_logging(const struct btrfs_log_ctx *ctx)
++{
++ struct btrfs_inode *inode = BTRFS_I(ctx->inode);
++ struct btrfs_fs_info *fs_info = inode->root->fs_info;
++
++ if (btrfs_inode_in_log(inode, fs_info->generation) &&
++ list_empty(&ctx->ordered_extents))
++ return true;
++
++ /*
++ * If we are doing a fast fsync we can not bail out if the inode's
++ * last_trans is <= then the last committed transaction, because we only
++ * update the last_trans of the inode during ordered extent completion,
++ * and for a fast fsync we don't wait for that, we only wait for the
++ * writeback to complete.
++ */
++ if (inode->last_trans <= fs_info->last_trans_committed &&
++ (test_bit(BTRFS_INODE_NEEDS_FULL_SYNC, &inode->runtime_flags) ||
++ list_empty(&ctx->ordered_extents)))
++ return true;
++
++ return false;
++}
++
+ /*
+ * fsync call for both files and directories. This logs the inode into
+ * the tree log instead of forcing full commits whenever possible.
+@@ -2097,7 +2121,6 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ {
+ struct dentry *dentry = file_dentry(file);
+ struct inode *inode = d_inode(dentry);
+- struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
+ struct btrfs_root *root = BTRFS_I(inode)->root;
+ struct btrfs_trans_handle *trans;
+ struct btrfs_log_ctx ctx;
+@@ -2196,17 +2219,8 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+
+ atomic_inc(&root->log_batch);
+
+- /*
+- * If we are doing a fast fsync we can not bail out if the inode's
+- * last_trans is <= then the last committed transaction, because we only
+- * update the last_trans of the inode during ordered extent completion,
+- * and for a fast fsync we don't wait for that, we only wait for the
+- * writeback to complete.
+- */
+ smp_mb();
+- if (btrfs_inode_in_log(BTRFS_I(inode), fs_info->generation) ||
+- (BTRFS_I(inode)->last_trans <= fs_info->last_trans_committed &&
+- (full_sync || list_empty(&ctx.ordered_extents)))) {
++ if (skip_inode_logging(&ctx)) {
+ /*
+ * We've had everything committed since the last time we were
+ * modified so clear this flag in case it was set for whatever
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index fe723eadced79..c4c26724a00c2 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -9475,7 +9475,7 @@ out:
+ return ret;
+ }
+
+-int btrfs_start_delalloc_snapshot(struct btrfs_root *root)
++int btrfs_start_delalloc_snapshot(struct btrfs_root *root, bool in_reclaim_context)
+ {
+ struct writeback_control wbc = {
+ .nr_to_write = LONG_MAX,
+@@ -9488,7 +9488,7 @@ int btrfs_start_delalloc_snapshot(struct btrfs_root *root)
+ if (test_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state))
+ return -EROFS;
+
+- return start_delalloc_inodes(root, &wbc, true, false);
++ return start_delalloc_inodes(root, &wbc, true, in_reclaim_context);
+ }
+
+ int btrfs_start_delalloc_roots(struct btrfs_fs_info *fs_info, u64 nr,
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index d06ad9a9abb33..1285837c27462 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -1042,7 +1042,7 @@ static noinline int btrfs_mksnapshot(const struct path *parent,
+ */
+ btrfs_drew_read_lock(&root->snapshot_lock);
+
+- ret = btrfs_start_delalloc_snapshot(root);
++ ret = btrfs_start_delalloc_snapshot(root, false);
+ if (ret)
+ goto out;
+
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index f0b9ef13153ad..2991287a71a87 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -3579,7 +3579,7 @@ static int try_flush_qgroup(struct btrfs_root *root)
+ return 0;
+ }
+
+- ret = btrfs_start_delalloc_snapshot(root);
++ ret = btrfs_start_delalloc_snapshot(root, true);
+ if (ret < 0)
+ goto out;
+ btrfs_wait_ordered_extents(root, U64_MAX, 0, (u64)-1);
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index 78a35374d4929..e405d68fe1e30 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -7159,7 +7159,7 @@ static int flush_delalloc_roots(struct send_ctx *sctx)
+ int i;
+
+ if (root) {
+- ret = btrfs_start_delalloc_snapshot(root);
++ ret = btrfs_start_delalloc_snapshot(root, false);
+ if (ret)
+ return ret;
+ btrfs_wait_ordered_extents(root, U64_MAX, 0, U64_MAX);
+@@ -7167,7 +7167,7 @@ static int flush_delalloc_roots(struct send_ctx *sctx)
+
+ for (i = 0; i < sctx->clone_roots_cnt; i++) {
+ root = sctx->clone_roots[i].root;
+- ret = btrfs_start_delalloc_snapshot(root);
++ ret = btrfs_start_delalloc_snapshot(root, false);
+ if (ret)
+ return ret;
+ btrfs_wait_ordered_extents(root, U64_MAX, 0, U64_MAX);
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 254c2ee43aae6..2fadd59748380 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -6066,7 +6066,8 @@ static int btrfs_log_inode_parent(struct btrfs_trans_handle *trans,
+ * (since logging them is pointless, a link count of 0 means they
+ * will never be accessible).
+ */
+- if (btrfs_inode_in_log(inode, trans->transid) ||
++ if ((btrfs_inode_in_log(inode, trans->transid) &&
++ list_empty(&ctx->ordered_extents)) ||
+ inode->vfs_inode.i_nlink == 0) {
+ ret = BTRFS_NO_LOG_SYNC;
+ goto end_no_trans;
+diff --git a/fs/ceph/export.c b/fs/ceph/export.c
+index e088843a7734c..baa6368bece59 100644
+--- a/fs/ceph/export.c
++++ b/fs/ceph/export.c
+@@ -178,8 +178,10 @@ static struct dentry *__fh_to_dentry(struct super_block *sb, u64 ino)
+ return ERR_CAST(inode);
+ /* We need LINK caps to reliably check i_nlink */
+ err = ceph_do_getattr(inode, CEPH_CAP_LINK_SHARED, false);
+- if (err)
++ if (err) {
++ iput(inode);
+ return ERR_PTR(err);
++ }
+ /* -ESTALE if inode as been unlinked and no file is open */
+ if ((inode->i_nlink == 0) && (atomic_read(&inode->i_count) == 1)) {
+ iput(inode);
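
The ceph fix above plugs a reference leak: the lookup takes an inode reference, so the new error return for a failed getattr has to drop it with iput(), just as the pre-existing early exits do. A toy refcount version of the rule that an error path undoes whatever already succeeded:

#include <stdio.h>

static int refs;

static void iget(void) { refs++; }
static void iput(void) { refs--; }

static int fh_to_dentry(int getattr_fails)
{
        iget();                 /* the lookup took a reference */
        if (getattr_fails) {
                iput();         /* the fix: release it before returning */
                return -1;
        }
        iput();                 /* normal path releases it too */
        return 0;
}

int main(void)
{
        fh_to_dentry(1);
        printf("leaked refs: %d\n", refs);      /* 0 after the fix */
        return 0;
}
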
+diff --git a/fs/dax.c b/fs/dax.c
+index b3d27fdc67752..df5485b4bddf1 100644
+--- a/fs/dax.c
++++ b/fs/dax.c
+@@ -144,6 +144,16 @@ struct wait_exceptional_entry_queue {
+ struct exceptional_entry_key key;
+ };
+
++/**
++ * enum dax_wake_mode: waitqueue wakeup behaviour
++ * @WAKE_ALL: wake all waiters in the waitqueue
++ * @WAKE_NEXT: wake only the first waiter in the waitqueue
++ */
++enum dax_wake_mode {
++ WAKE_ALL,
++ WAKE_NEXT,
++};
++
+ static wait_queue_head_t *dax_entry_waitqueue(struct xa_state *xas,
+ void *entry, struct exceptional_entry_key *key)
+ {
+@@ -182,7 +192,8 @@ static int wake_exceptional_entry_func(wait_queue_entry_t *wait,
+ * The important information it's conveying is whether the entry at
+ * this index used to be a PMD entry.
+ */
+-static void dax_wake_entry(struct xa_state *xas, void *entry, bool wake_all)
++static void dax_wake_entry(struct xa_state *xas, void *entry,
++ enum dax_wake_mode mode)
+ {
+ struct exceptional_entry_key key;
+ wait_queue_head_t *wq;
+@@ -196,7 +207,7 @@ static void dax_wake_entry(struct xa_state *xas, void *entry, bool wake_all)
+ * must be in the waitqueue and the following check will see them.
+ */
+ if (waitqueue_active(wq))
+- __wake_up(wq, TASK_NORMAL, wake_all ? 0 : 1, &key);
++ __wake_up(wq, TASK_NORMAL, mode == WAKE_ALL ? 0 : 1, &key);
+ }
+
+ /*
+@@ -264,11 +275,11 @@ static void wait_entry_unlocked(struct xa_state *xas, void *entry)
+ finish_wait(wq, &ewait.wait);
+ }
+
+-static void put_unlocked_entry(struct xa_state *xas, void *entry)
++static void put_unlocked_entry(struct xa_state *xas, void *entry,
++ enum dax_wake_mode mode)
+ {
+- /* If we were the only waiter woken, wake the next one */
+ if (entry && !dax_is_conflict(entry))
+- dax_wake_entry(xas, entry, false);
++ dax_wake_entry(xas, entry, mode);
+ }
+
+ /*
+@@ -286,7 +297,7 @@ static void dax_unlock_entry(struct xa_state *xas, void *entry)
+ old = xas_store(xas, entry);
+ xas_unlock_irq(xas);
+ BUG_ON(!dax_is_locked(old));
+- dax_wake_entry(xas, entry, false);
++ dax_wake_entry(xas, entry, WAKE_NEXT);
+ }
+
+ /*
+@@ -524,7 +535,7 @@ retry:
+
+ dax_disassociate_entry(entry, mapping, false);
+ xas_store(xas, NULL); /* undo the PMD join */
+- dax_wake_entry(xas, entry, true);
++ dax_wake_entry(xas, entry, WAKE_ALL);
+ mapping->nrexceptional--;
+ entry = NULL;
+ xas_set(xas, index);
+@@ -622,7 +633,7 @@ struct page *dax_layout_busy_page_range(struct address_space *mapping,
+ entry = get_unlocked_entry(&xas, 0);
+ if (entry)
+ page = dax_busy_page(entry);
+- put_unlocked_entry(&xas, entry);
++ put_unlocked_entry(&xas, entry, WAKE_NEXT);
+ if (page)
+ break;
+ if (++scanned % XA_CHECK_SCHED)
+@@ -664,7 +675,7 @@ static int __dax_invalidate_entry(struct address_space *mapping,
+ mapping->nrexceptional--;
+ ret = 1;
+ out:
+- put_unlocked_entry(&xas, entry);
++ put_unlocked_entry(&xas, entry, WAKE_ALL);
+ xas_unlock_irq(&xas);
+ return ret;
+ }
+@@ -937,13 +948,13 @@ static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
+ xas_lock_irq(xas);
+ xas_store(xas, entry);
+ xas_clear_mark(xas, PAGECACHE_TAG_DIRTY);
+- dax_wake_entry(xas, entry, false);
++ dax_wake_entry(xas, entry, WAKE_NEXT);
+
+ trace_dax_writeback_one(mapping->host, index, count);
+ return ret;
+
+ put_unlocked:
+- put_unlocked_entry(xas, entry);
++ put_unlocked_entry(xas, entry, WAKE_NEXT);
+ return ret;
+ }
+
+@@ -1684,7 +1695,7 @@ dax_insert_pfn_mkwrite(struct vm_fault *vmf, pfn_t pfn, unsigned int order)
+ /* Did we race with someone splitting entry or so? */
+ if (!entry || dax_is_conflict(entry) ||
+ (order == 0 && !dax_is_pte_entry(entry))) {
+- put_unlocked_entry(&xas, entry);
++ put_unlocked_entry(&xas, entry, WAKE_NEXT);
+ xas_unlock_irq(&xas);
+ trace_dax_insert_pfn_mkwrite_no_entry(mapping->host, vmf,
+ VM_FAULT_NOPAGE);
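
The dax rework above replaces a bare bool wake_all parameter with a two-value enum, so call sites read as dax_wake_entry(..., WAKE_NEXT) instead of an opaque false, and the mapping to __wake_up()'s nr_exclusive argument lives in one place. A standalone illustration of the same API choice:

#include <stdio.h>

enum wake_mode { WAKE_ALL, WAKE_NEXT };

static void wake(enum wake_mode mode)
{
        /* 0 means "wake every waiter", 1 means "wake one waiter" in
         * the __wake_up() convention this models */
        int nr_exclusive = (mode == WAKE_ALL) ? 0 : 1;

        printf("nr_exclusive=%d\n", nr_exclusive);
}

int main(void)
{
        wake(WAKE_NEXT);        /* self-documenting, unlike wake(false) */
        wake(WAKE_ALL);
        return 0;
}
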
+diff --git a/fs/debugfs/inode.c b/fs/debugfs/inode.c
+index 86c7f04896207..720d65f224f09 100644
+--- a/fs/debugfs/inode.c
++++ b/fs/debugfs/inode.c
+@@ -35,7 +35,7 @@
+ static struct vfsmount *debugfs_mount;
+ static int debugfs_mount_count;
+ static bool debugfs_registered;
+-static unsigned int debugfs_allow = DEFAULT_DEBUGFS_ALLOW_BITS;
++static unsigned int debugfs_allow __ro_after_init = DEFAULT_DEBUGFS_ALLOW_BITS;
+
+ /*
+ * Don't allow access attributes to be changed whilst the kernel is locked down
+diff --git a/fs/dlm/config.c b/fs/dlm/config.c
+index 49c5f9407098e..88d95d96e36c5 100644
+--- a/fs/dlm/config.c
++++ b/fs/dlm/config.c
+@@ -125,7 +125,7 @@ static ssize_t cluster_cluster_name_store(struct config_item *item,
+ CONFIGFS_ATTR(cluster_, cluster_name);
+
+ static ssize_t cluster_set(struct dlm_cluster *cl, unsigned int *cl_field,
+- int *info_field, bool (*check_cb)(unsigned int x),
++ int *info_field, int (*check_cb)(unsigned int x),
+ const char *buf, size_t len)
+ {
+ unsigned int x;
+@@ -137,8 +137,11 @@ static ssize_t cluster_set(struct dlm_cluster *cl, unsigned int *cl_field,
+ if (rc)
+ return rc;
+
+- if (check_cb && check_cb(x))
+- return -EINVAL;
++ if (check_cb) {
++ rc = check_cb(x);
++ if (rc)
++ return rc;
++ }
+
+ *cl_field = x;
+ *info_field = x;
+@@ -161,17 +164,53 @@ static ssize_t cluster_##name##_show(struct config_item *item, char *buf) \
+ } \
+ CONFIGFS_ATTR(cluster_, name);
+
+-static bool dlm_check_zero(unsigned int x)
++static int dlm_check_protocol_and_dlm_running(unsigned int x)
++{
++ switch (x) {
++ case 0:
++ /* TCP */
++ break;
++ case 1:
++ /* SCTP */
++ break;
++ default:
++ return -EINVAL;
++ }
++
++ if (dlm_allow_conn)
++ return -EBUSY;
++
++ return 0;
++}
++
++static int dlm_check_zero_and_dlm_running(unsigned int x)
++{
++ if (!x)
++ return -EINVAL;
++
++ if (dlm_allow_conn)
++ return -EBUSY;
++
++ return 0;
++}
++
++static int dlm_check_zero(unsigned int x)
+ {
+- return !x;
++ if (!x)
++ return -EINVAL;
++
++ return 0;
+ }
+
+-static bool dlm_check_buffer_size(unsigned int x)
++static int dlm_check_buffer_size(unsigned int x)
+ {
+- return (x < DEFAULT_BUFFER_SIZE);
++ if (x < DEFAULT_BUFFER_SIZE)
++ return -EINVAL;
++
++ return 0;
+ }
+
+-CLUSTER_ATTR(tcp_port, dlm_check_zero);
++CLUSTER_ATTR(tcp_port, dlm_check_zero_and_dlm_running);
+ CLUSTER_ATTR(buffer_size, dlm_check_buffer_size);
+ CLUSTER_ATTR(rsbtbl_size, dlm_check_zero);
+ CLUSTER_ATTR(recover_timer, dlm_check_zero);
+@@ -179,7 +218,7 @@ CLUSTER_ATTR(toss_secs, dlm_check_zero);
+ CLUSTER_ATTR(scan_secs, dlm_check_zero);
+ CLUSTER_ATTR(log_debug, NULL);
+ CLUSTER_ATTR(log_info, NULL);
+-CLUSTER_ATTR(protocol, NULL);
++CLUSTER_ATTR(protocol, dlm_check_protocol_and_dlm_running);
+ CLUSTER_ATTR(mark, NULL);
+ CLUSTER_ATTR(timewarn_cs, dlm_check_zero);
+ CLUSTER_ATTR(waitwarn_us, NULL);
+@@ -688,6 +727,7 @@ static ssize_t comm_mark_show(struct config_item *item, char *buf)
+ static ssize_t comm_mark_store(struct config_item *item, const char *buf,
+ size_t len)
+ {
++ struct dlm_comm *comm;
+ unsigned int mark;
+ int rc;
+
+@@ -695,7 +735,15 @@ static ssize_t comm_mark_store(struct config_item *item, const char *buf,
+ if (rc)
+ return rc;
+
+- config_item_to_comm(item)->mark = mark;
++ if (mark == 0)
++ mark = dlm_config.ci_mark;
++
++ comm = config_item_to_comm(item);
++ rc = dlm_lowcomms_nodes_set_mark(comm->nodeid, mark);
++ if (rc)
++ return rc;
++
++ comm->mark = mark;
+ return len;
+ }
+
+@@ -870,24 +918,6 @@ int dlm_comm_seq(int nodeid, uint32_t *seq)
+ return 0;
+ }
+
+-void dlm_comm_mark(int nodeid, unsigned int *mark)
+-{
+- struct dlm_comm *cm;
+-
+- cm = get_comm(nodeid);
+- if (!cm) {
+- *mark = dlm_config.ci_mark;
+- return;
+- }
+-
+- if (cm->mark)
+- *mark = cm->mark;
+- else
+- *mark = dlm_config.ci_mark;
+-
+- put_comm(cm);
+-}
+-
+ int dlm_our_nodeid(void)
+ {
+ return local_comm ? local_comm->nodeid : 0;
+diff --git a/fs/dlm/config.h b/fs/dlm/config.h
+index c210250a25818..d2cd4bd20313f 100644
+--- a/fs/dlm/config.h
++++ b/fs/dlm/config.h
+@@ -48,7 +48,6 @@ void dlm_config_exit(void);
+ int dlm_config_nodes(char *lsname, struct dlm_config_node **nodes_out,
+ int *count_out);
+ int dlm_comm_seq(int nodeid, uint32_t *seq);
+-void dlm_comm_mark(int nodeid, unsigned int *mark);
+ int dlm_our_nodeid(void);
+ int dlm_our_addr(struct sockaddr_storage *addr, int num);
+
+diff --git a/fs/dlm/debug_fs.c b/fs/dlm/debug_fs.c
+index d6bbccb0ed152..d5bd990bcab8b 100644
+--- a/fs/dlm/debug_fs.c
++++ b/fs/dlm/debug_fs.c
+@@ -542,6 +542,7 @@ static void *table_seq_next(struct seq_file *seq, void *iter_ptr, loff_t *pos)
+
+ if (bucket >= ls->ls_rsbtbl_size) {
+ kfree(ri);
++ ++*pos;
+ return NULL;
+ }
+ tree = toss ? &ls->ls_rsbtbl[bucket].toss : &ls->ls_rsbtbl[bucket].keep;
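
The one-line debug_fs fix above enforces the seq_file contract: a ->next() iterator must advance *pos even when it returns NULL at the end of the table, otherwise the seq_file core can call it again at the same position and spin. A simplified model of that contract (the real hook also receives the current element pointer, omitted here):

#include <stdio.h>

static const int data[3] = { 10, 20, 30 };

static const int *toy_next(long long *pos)
{
        ++*pos;                 /* always advance, even at end-of-table */
        if (*pos >= 3)
                return NULL;
        return &data[*pos];
}

int main(void)
{
        long long pos = -1;
        const int *p;

        while ((p = toy_next(&pos)))
                printf("%d\n", *p);     /* 10 20 30, then a clean stop */
        return 0;
}
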
+diff --git a/fs/dlm/lockspace.c b/fs/dlm/lockspace.c
+index 561dcad08ad6e..c14cf2b7faab3 100644
+--- a/fs/dlm/lockspace.c
++++ b/fs/dlm/lockspace.c
+@@ -404,12 +404,6 @@ static int threads_start(void)
+ return error;
+ }
+
+-static void threads_stop(void)
+-{
+- dlm_scand_stop();
+- dlm_lowcomms_stop();
+-}
+-
+ static int new_lockspace(const char *name, const char *cluster,
+ uint32_t flags, int lvblen,
+ const struct dlm_lockspace_ops *ops, void *ops_arg,
+@@ -702,8 +696,11 @@ int dlm_new_lockspace(const char *name, const char *cluster,
+ ls_count++;
+ if (error > 0)
+ error = 0;
+- if (!ls_count)
+- threads_stop();
++ if (!ls_count) {
++ dlm_scand_stop();
++ dlm_lowcomms_shutdown();
++ dlm_lowcomms_stop();
++ }
+ out:
+ mutex_unlock(&ls_lock);
+ return error;
+@@ -788,6 +785,11 @@ static int release_lockspace(struct dlm_ls *ls, int force)
+
+ dlm_recoverd_stop(ls);
+
++ if (ls_count == 1) {
++ dlm_scand_stop();
++ dlm_lowcomms_shutdown();
++ }
++
+ dlm_callback_stop(ls);
+
+ remove_lockspace(ls);
+@@ -880,7 +882,7 @@ int dlm_release_lockspace(void *lockspace, int force)
+ if (!error)
+ ls_count--;
+ if (!ls_count)
+- threads_stop();
++ dlm_lowcomms_stop();
+ mutex_unlock(&ls_lock);
+
+ return error;
+diff --git a/fs/dlm/lowcomms.c b/fs/dlm/lowcomms.c
+index f7d2c52791f8f..45c2fdaf34c4d 100644
+--- a/fs/dlm/lowcomms.c
++++ b/fs/dlm/lowcomms.c
+@@ -116,6 +116,7 @@ struct writequeue_entry {
+ struct dlm_node_addr {
+ struct list_head list;
+ int nodeid;
++ int mark;
+ int addr_count;
+ int curr_addr_index;
+ struct sockaddr_storage *addr[DLM_MAX_ADDR_COUNT];
+@@ -134,7 +135,7 @@ static DEFINE_SPINLOCK(dlm_node_addrs_spin);
+ static struct listen_connection listen_con;
+ static struct sockaddr_storage *dlm_local_addr[DLM_MAX_ADDR_COUNT];
+ static int dlm_local_count;
+-static int dlm_allow_conn;
++int dlm_allow_conn;
+
+ /* Work queues */
+ static struct workqueue_struct *recv_workqueue;
+@@ -303,7 +304,8 @@ static int addr_compare(const struct sockaddr_storage *x,
+ }
+
+ static int nodeid_to_addr(int nodeid, struct sockaddr_storage *sas_out,
+- struct sockaddr *sa_out, bool try_new_addr)
++ struct sockaddr *sa_out, bool try_new_addr,
++ unsigned int *mark)
+ {
+ struct sockaddr_storage sas;
+ struct dlm_node_addr *na;
+@@ -331,6 +333,8 @@ static int nodeid_to_addr(int nodeid, struct sockaddr_storage *sas_out,
+ if (!na->addr_count)
+ return -ENOENT;
+
++ *mark = na->mark;
++
+ if (sas_out)
+ memcpy(sas_out, &sas, sizeof(struct sockaddr_storage));
+
+@@ -350,7 +354,8 @@ static int nodeid_to_addr(int nodeid, struct sockaddr_storage *sas_out,
+ return 0;
+ }
+
+-static int addr_to_nodeid(struct sockaddr_storage *addr, int *nodeid)
++static int addr_to_nodeid(struct sockaddr_storage *addr, int *nodeid,
++ unsigned int *mark)
+ {
+ struct dlm_node_addr *na;
+ int rv = -EEXIST;
+@@ -364,6 +369,7 @@ static int addr_to_nodeid(struct sockaddr_storage *addr, int *nodeid)
+ for (addr_i = 0; addr_i < na->addr_count; addr_i++) {
+ if (addr_compare(na->addr[addr_i], addr)) {
+ *nodeid = na->nodeid;
++ *mark = na->mark;
+ rv = 0;
+ goto unlock;
+ }
+@@ -412,6 +418,7 @@ int dlm_lowcomms_addr(int nodeid, struct sockaddr_storage *addr, int len)
+ new_node->nodeid = nodeid;
+ new_node->addr[0] = new_addr;
+ new_node->addr_count = 1;
++ new_node->mark = dlm_config.ci_mark;
+ list_add(&new_node->list, &dlm_node_addrs);
+ spin_unlock(&dlm_node_addrs_spin);
+ return 0;
+@@ -519,6 +526,23 @@ int dlm_lowcomms_connect_node(int nodeid)
+ return 0;
+ }
+
++int dlm_lowcomms_nodes_set_mark(int nodeid, unsigned int mark)
++{
++ struct dlm_node_addr *na;
++
++ spin_lock(&dlm_node_addrs_spin);
++ na = find_node_addr(nodeid);
++ if (!na) {
++ spin_unlock(&dlm_node_addrs_spin);
++ return -ENOENT;
++ }
++
++ na->mark = mark;
++ spin_unlock(&dlm_node_addrs_spin);
++
++ return 0;
++}
++
+ static void lowcomms_error_report(struct sock *sk)
+ {
+ struct connection *con;
+@@ -685,10 +709,7 @@ static void shutdown_connection(struct connection *con)
+ {
+ int ret;
+
+- if (cancel_work_sync(&con->swork)) {
+- log_print("canceled swork for node %d", con->nodeid);
+- clear_bit(CF_WRITE_PENDING, &con->flags);
+- }
++ flush_work(&con->swork);
+
+ mutex_lock(&con->sock_mutex);
+ /* nothing to shutdown */
+@@ -867,7 +888,7 @@ static int accept_from_sock(struct listen_connection *con)
+
+ /* Get the new node's NODEID */
+ make_sockaddr(&peeraddr, 0, &len);
+- if (addr_to_nodeid(&peeraddr, &nodeid)) {
++ if (addr_to_nodeid(&peeraddr, &nodeid, &mark)) {
+ unsigned char *b=(unsigned char *)&peeraddr;
+ log_print("connect from non cluster node");
+ print_hex_dump_bytes("ss: ", DUMP_PREFIX_NONE,
+@@ -876,9 +897,6 @@ static int accept_from_sock(struct listen_connection *con)
+ return -1;
+ }
+
+- dlm_comm_mark(nodeid, &mark);
+- sock_set_mark(newsock->sk, mark);
+-
+ log_print("got connection from %d", nodeid);
+
+ /* Check to see if we already have a connection to this node. This
+@@ -892,6 +910,8 @@ static int accept_from_sock(struct listen_connection *con)
+ goto accept_err;
+ }
+
++ sock_set_mark(newsock->sk, mark);
++
+ mutex_lock(&newcon->sock_mutex);
+ if (newcon->sock) {
+ struct connection *othercon = newcon->othercon;
+@@ -1016,8 +1036,6 @@ static void sctp_connect_to_sock(struct connection *con)
+ struct socket *sock;
+ unsigned int mark;
+
+- dlm_comm_mark(con->nodeid, &mark);
+-
+ mutex_lock(&con->sock_mutex);
+
+ /* Some odd races can cause double-connects, ignore them */
+@@ -1030,7 +1048,7 @@ static void sctp_connect_to_sock(struct connection *con)
+ }
+
+ memset(&daddr, 0, sizeof(daddr));
+- result = nodeid_to_addr(con->nodeid, &daddr, NULL, true);
++ result = nodeid_to_addr(con->nodeid, &daddr, NULL, true, &mark);
+ if (result < 0) {
+ log_print("no address for nodeid %d", con->nodeid);
+ goto out;
+@@ -1105,13 +1123,11 @@ out:
+ static void tcp_connect_to_sock(struct connection *con)
+ {
+ struct sockaddr_storage saddr, src_addr;
++ unsigned int mark;
+ int addr_len;
+ struct socket *sock = NULL;
+- unsigned int mark;
+ int result;
+
+- dlm_comm_mark(con->nodeid, &mark);
+-
+ mutex_lock(&con->sock_mutex);
+ if (con->retries++ > MAX_CONNECT_RETRIES)
+ goto out;
+@@ -1126,15 +1142,15 @@ static void tcp_connect_to_sock(struct connection *con)
+ if (result < 0)
+ goto out_err;
+
+- sock_set_mark(sock->sk, mark);
+-
+ memset(&saddr, 0, sizeof(saddr));
+- result = nodeid_to_addr(con->nodeid, &saddr, NULL, false);
++ result = nodeid_to_addr(con->nodeid, &saddr, NULL, false, &mark);
+ if (result < 0) {
+ log_print("no address for nodeid %d", con->nodeid);
+ goto out_err;
+ }
+
++ sock_set_mark(sock->sk, mark);
++
+ add_sock(sock, con);
+
+ /* Bind to our cluster-known address connecting to avoid
+@@ -1356,9 +1372,11 @@ void *dlm_lowcomms_get_buffer(int nodeid, int len, gfp_t allocation, char **ppc)
+ struct writequeue_entry *e;
+ int offset = 0;
+
+- if (len > LOWCOMMS_MAX_TX_BUFFER_LEN) {
+- BUILD_BUG_ON(PAGE_SIZE < LOWCOMMS_MAX_TX_BUFFER_LEN);
++ if (len > DEFAULT_BUFFER_SIZE ||
++ len < sizeof(struct dlm_header)) {
++ BUILD_BUG_ON(PAGE_SIZE < DEFAULT_BUFFER_SIZE);
+ log_print("failed to allocate a buffer of size %d", len);
++ WARN_ON(1);
+ return NULL;
+ }
+
+@@ -1590,6 +1608,29 @@ static int work_start(void)
+ return 0;
+ }
+
++static void shutdown_conn(struct connection *con)
++{
++ if (con->shutdown_action)
++ con->shutdown_action(con);
++}
++
++void dlm_lowcomms_shutdown(void)
++{
++ /* Set all the flags to prevent any
++ * socket activity.
++ */
++ dlm_allow_conn = 0;
++
++ if (recv_workqueue)
++ flush_workqueue(recv_workqueue);
++ if (send_workqueue)
++ flush_workqueue(send_workqueue);
++
++ dlm_close_sock(&listen_con.sock);
++
++ foreach_conn(shutdown_conn);
++}
++
+ static void _stop_conn(struct connection *con, bool and_other)
+ {
+ mutex_lock(&con->sock_mutex);
+@@ -1611,12 +1652,6 @@ static void stop_conn(struct connection *con)
+ _stop_conn(con, true);
+ }
+
+-static void shutdown_conn(struct connection *con)
+-{
+- if (con->shutdown_action)
+- con->shutdown_action(con);
+-}
+-
+ static void connection_release(struct rcu_head *rcu)
+ {
+ struct connection *con = container_of(rcu, struct connection, rcu);
+@@ -1673,19 +1708,6 @@ static void work_flush(void)
+
+ void dlm_lowcomms_stop(void)
+ {
+- /* Set all the flags to prevent any
+- socket activity.
+- */
+- dlm_allow_conn = 0;
+-
+- if (recv_workqueue)
+- flush_workqueue(recv_workqueue);
+- if (send_workqueue)
+- flush_workqueue(send_workqueue);
+-
+- dlm_close_sock(&listen_con.sock);
+-
+- foreach_conn(shutdown_conn);
+ work_flush();
+ foreach_conn(free_conn);
+ work_stop();
+diff --git a/fs/dlm/lowcomms.h b/fs/dlm/lowcomms.h
+index 0918f9376489f..48bbc4e187619 100644
+--- a/fs/dlm/lowcomms.h
++++ b/fs/dlm/lowcomms.h
+@@ -14,13 +14,18 @@
+
+ #define LOWCOMMS_MAX_TX_BUFFER_LEN 4096
+
++/* switch to check if dlm is running */
++extern int dlm_allow_conn;
++
+ int dlm_lowcomms_start(void);
++void dlm_lowcomms_shutdown(void);
+ void dlm_lowcomms_stop(void);
+ void dlm_lowcomms_exit(void);
+ int dlm_lowcomms_close(int nodeid);
+ void *dlm_lowcomms_get_buffer(int nodeid, int len, gfp_t allocation, char **ppc);
+ void dlm_lowcomms_commit_buffer(void *mh);
+ int dlm_lowcomms_connect_node(int nodeid);
++int dlm_lowcomms_nodes_set_mark(int nodeid, unsigned int mark);
+ int dlm_lowcomms_addr(int nodeid, struct sockaddr_storage *addr, int len);
+
+ #endif /* __LOWCOMMS_DOT_H__ */
+diff --git a/fs/dlm/midcomms.c b/fs/dlm/midcomms.c
+index fde3a6afe4bea..0bedfa8606a26 100644
+--- a/fs/dlm/midcomms.c
++++ b/fs/dlm/midcomms.c
+@@ -49,9 +49,10 @@ int dlm_process_incoming_buffer(int nodeid, unsigned char *buf, int len)
+ * cannot deliver this message to upper layers
+ */
+ msglen = get_unaligned_le16(&hd->h_length);
+- if (msglen > DEFAULT_BUFFER_SIZE) {
+- log_print("received invalid length header: %u, will abort message parsing",
+- msglen);
++ if (msglen > DEFAULT_BUFFER_SIZE ||
++ msglen < sizeof(struct dlm_header)) {
++ log_print("received invalid length header: %u from node %d, will abort message parsing",
++ msglen, nodeid);
+ return -EBADMSG;
+ }
+
+diff --git a/fs/ext4/fast_commit.c b/fs/ext4/fast_commit.c
+index 0d3e67e7b00d9..585ecbd7061f1 100644
+--- a/fs/ext4/fast_commit.c
++++ b/fs/ext4/fast_commit.c
+@@ -1743,7 +1743,7 @@ static int ext4_fc_replay_add_range(struct super_block *sb,
+ }
+
+ /* Range is mapped and needs a state change */
+- jbd_debug(1, "Converting from %d to %d %lld",
++ jbd_debug(1, "Converting from %ld to %d %lld",
+ map.m_flags & EXT4_MAP_UNWRITTEN,
+ ext4_ext_is_unwritten(ex), map.m_pblk);
+ ret = ext4_ext_replay_update_ex(inode, cur, map.m_len,
+diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
+index 7a774c9e4cb89..3a503e5a8c113 100644
+--- a/fs/f2fs/compress.c
++++ b/fs/f2fs/compress.c
+@@ -123,19 +123,6 @@ static void f2fs_unlock_rpages(struct compress_ctx *cc, int len)
+ f2fs_drop_rpages(cc, len, true);
+ }
+
+-static void f2fs_put_rpages_mapping(struct address_space *mapping,
+- pgoff_t start, int len)
+-{
+- int i;
+-
+- for (i = 0; i < len; i++) {
+- struct page *page = find_get_page(mapping, start + i);
+-
+- put_page(page);
+- put_page(page);
+- }
+-}
+-
+ static void f2fs_put_rpages_wbc(struct compress_ctx *cc,
+ struct writeback_control *wbc, bool redirty, int unlock)
+ {
+@@ -164,13 +151,14 @@ int f2fs_init_compress_ctx(struct compress_ctx *cc)
+ return cc->rpages ? 0 : -ENOMEM;
+ }
+
+-void f2fs_destroy_compress_ctx(struct compress_ctx *cc)
++void f2fs_destroy_compress_ctx(struct compress_ctx *cc, bool reuse)
+ {
+ page_array_free(cc->inode, cc->rpages, cc->cluster_size);
+ cc->rpages = NULL;
+ cc->nr_rpages = 0;
+ cc->nr_cpages = 0;
+- cc->cluster_idx = NULL_CLUSTER;
++ if (!reuse)
++ cc->cluster_idx = NULL_CLUSTER;
+ }
+
+ void f2fs_compress_ctx_add_page(struct compress_ctx *cc, struct page *page)
+@@ -1008,7 +996,7 @@ retry:
+ }
+
+ if (PageUptodate(page))
+- unlock_page(page);
++ f2fs_put_page(page, 1);
+ else
+ f2fs_compress_ctx_add_page(cc, page);
+ }
+@@ -1018,33 +1006,35 @@ retry:
+
+ ret = f2fs_read_multi_pages(cc, &bio, cc->cluster_size,
+ &last_block_in_bio, false, true);
+- f2fs_destroy_compress_ctx(cc);
++ f2fs_put_rpages(cc);
++ f2fs_destroy_compress_ctx(cc, true);
+ if (ret)
+- goto release_pages;
++ goto out;
+ if (bio)
+ f2fs_submit_bio(sbi, bio, DATA);
+
+ ret = f2fs_init_compress_ctx(cc);
+ if (ret)
+- goto release_pages;
++ goto out;
+ }
+
+ for (i = 0; i < cc->cluster_size; i++) {
+ f2fs_bug_on(sbi, cc->rpages[i]);
+
+ page = find_lock_page(mapping, start_idx + i);
+- f2fs_bug_on(sbi, !page);
++ if (!page) {
++ /* page can be truncated */
++ goto release_and_retry;
++ }
+
+ f2fs_wait_on_page_writeback(page, DATA, true, true);
+-
+ f2fs_compress_ctx_add_page(cc, page);
+- f2fs_put_page(page, 0);
+
+ if (!PageUptodate(page)) {
++release_and_retry:
++ f2fs_put_rpages(cc);
+ f2fs_unlock_rpages(cc, i + 1);
+- f2fs_put_rpages_mapping(mapping, start_idx,
+- cc->cluster_size);
+- f2fs_destroy_compress_ctx(cc);
++ f2fs_destroy_compress_ctx(cc, true);
+ goto retry;
+ }
+ }
+@@ -1075,10 +1065,10 @@ retry:
+ }
+
+ unlock_pages:
++ f2fs_put_rpages(cc);
+ f2fs_unlock_rpages(cc, i);
+-release_pages:
+- f2fs_put_rpages_mapping(mapping, start_idx, i);
+- f2fs_destroy_compress_ctx(cc);
++ f2fs_destroy_compress_ctx(cc, true);
++out:
+ return ret;
+ }
+
+@@ -1113,7 +1103,7 @@ bool f2fs_compress_write_end(struct inode *inode, void *fsdata,
+ set_cluster_dirty(&cc);
+
+ f2fs_put_rpages_wbc(&cc, NULL, false, 1);
+- f2fs_destroy_compress_ctx(&cc);
++ f2fs_destroy_compress_ctx(&cc, false);
+
+ return first_index;
+ }
+@@ -1332,7 +1322,7 @@ unlock_continue:
+ f2fs_put_rpages(cc);
+ page_array_free(cc->inode, cc->cpages, cc->nr_cpages);
+ cc->cpages = NULL;
+- f2fs_destroy_compress_ctx(cc);
++ f2fs_destroy_compress_ctx(cc, false);
+ return 0;
+
+ out_destroy_crypt:
+@@ -1343,7 +1333,8 @@ out_destroy_crypt:
+ for (i = 0; i < cc->nr_cpages; i++) {
+ if (!cc->cpages[i])
+ continue;
+- f2fs_put_page(cc->cpages[i], 1);
++ f2fs_compress_free_page(cc->cpages[i]);
++ cc->cpages[i] = NULL;
+ }
+ out_put_cic:
+ kmem_cache_free(cic_entry_slab, cic);
+@@ -1493,7 +1484,7 @@ write:
+ err = f2fs_write_raw_pages(cc, submitted, wbc, io_type);
+ f2fs_put_rpages_wbc(cc, wbc, false, 0);
+ destroy_out:
+- f2fs_destroy_compress_ctx(cc);
++ f2fs_destroy_compress_ctx(cc, false);
+ return err;
+ }
+
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index 4d3ebf094f6d7..3802ad227a1e9 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -2405,7 +2405,7 @@ static int f2fs_mpage_readpages(struct inode *inode,
+ max_nr_pages,
+ &last_block_in_bio,
+ rac != NULL, false);
+- f2fs_destroy_compress_ctx(&cc);
++ f2fs_destroy_compress_ctx(&cc, false);
+ if (ret)
+ goto set_error_page;
+ }
+@@ -2450,7 +2450,7 @@ next_page:
+ max_nr_pages,
+ &last_block_in_bio,
+ rac != NULL, false);
+- f2fs_destroy_compress_ctx(&cc);
++ f2fs_destroy_compress_ctx(&cc, false);
+ }
+ }
+ #endif
+@@ -3154,7 +3154,7 @@ next:
+ }
+ }
+ if (f2fs_compressed_file(inode))
+- f2fs_destroy_compress_ctx(&cc);
++ f2fs_destroy_compress_ctx(&cc, false);
+ #endif
+ if (retry) {
+ index = 0;
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 1578402c58444..43e76529d6740 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -3322,6 +3322,7 @@ block_t f2fs_get_unusable_blocks(struct f2fs_sb_info *sbi);
+ int f2fs_disable_cp_again(struct f2fs_sb_info *sbi, block_t unusable);
+ void f2fs_release_discard_addrs(struct f2fs_sb_info *sbi);
+ int f2fs_npages_for_summary_flush(struct f2fs_sb_info *sbi, bool for_ra);
++bool f2fs_segment_has_free_slot(struct f2fs_sb_info *sbi, int segno);
+ void f2fs_init_inmem_curseg(struct f2fs_sb_info *sbi);
+ void f2fs_save_inmem_curseg(struct f2fs_sb_info *sbi);
+ void f2fs_restore_inmem_curseg(struct f2fs_sb_info *sbi);
+@@ -3329,7 +3330,7 @@ void f2fs_get_new_segment(struct f2fs_sb_info *sbi,
+ unsigned int *newseg, bool new_sec, int dir);
+ void f2fs_allocate_segment_for_resize(struct f2fs_sb_info *sbi, int type,
+ unsigned int start, unsigned int end);
+-void f2fs_allocate_new_segment(struct f2fs_sb_info *sbi, int type);
++void f2fs_allocate_new_section(struct f2fs_sb_info *sbi, int type);
+ void f2fs_allocate_new_segments(struct f2fs_sb_info *sbi);
+ int f2fs_trim_fs(struct f2fs_sb_info *sbi, struct fstrim_range *range);
+ bool f2fs_exist_trim_candidates(struct f2fs_sb_info *sbi,
+@@ -3490,7 +3491,7 @@ void f2fs_destroy_post_read_wq(struct f2fs_sb_info *sbi);
+ int f2fs_start_gc_thread(struct f2fs_sb_info *sbi);
+ void f2fs_stop_gc_thread(struct f2fs_sb_info *sbi);
+ block_t f2fs_start_bidx_of_node(unsigned int node_ofs, struct inode *inode);
+-int f2fs_gc(struct f2fs_sb_info *sbi, bool sync, bool background,
++int f2fs_gc(struct f2fs_sb_info *sbi, bool sync, bool background, bool force,
+ unsigned int segno);
+ void f2fs_build_gc_manager(struct f2fs_sb_info *sbi);
+ int f2fs_resize_fs(struct f2fs_sb_info *sbi, __u64 block_count);
+@@ -3893,7 +3894,7 @@ void f2fs_free_dic(struct decompress_io_ctx *dic);
+ void f2fs_decompress_end_io(struct page **rpages,
+ unsigned int cluster_size, bool err, bool verity);
+ int f2fs_init_compress_ctx(struct compress_ctx *cc);
+-void f2fs_destroy_compress_ctx(struct compress_ctx *cc);
++void f2fs_destroy_compress_ctx(struct compress_ctx *cc, bool reuse);
+ void f2fs_init_compress_info(struct f2fs_sb_info *sbi);
+ int f2fs_init_page_array_cache(struct f2fs_sb_info *sbi);
+ void f2fs_destroy_page_array_cache(struct f2fs_sb_info *sbi);
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index d5ebc67c7130b..42563d7c442d6 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -1616,9 +1616,10 @@ static int expand_inode_data(struct inode *inode, loff_t offset,
+ struct f2fs_map_blocks map = { .m_next_pgofs = NULL,
+ .m_next_extent = NULL, .m_seg_type = NO_CHECK_TYPE,
+ .m_may_create = true };
+- pgoff_t pg_end;
++ pgoff_t pg_start, pg_end;
+ loff_t new_size = i_size_read(inode);
+ loff_t off_end;
++ block_t expanded = 0;
+ int err;
+
+ err = inode_newsize_ok(inode, (len + offset));
+@@ -1631,11 +1632,12 @@ static int expand_inode_data(struct inode *inode, loff_t offset,
+
+ f2fs_balance_fs(sbi, true);
+
++ pg_start = ((unsigned long long)offset) >> PAGE_SHIFT;
+ pg_end = ((unsigned long long)offset + len) >> PAGE_SHIFT;
+ off_end = (offset + len) & (PAGE_SIZE - 1);
+
+- map.m_lblk = ((unsigned long long)offset) >> PAGE_SHIFT;
+- map.m_len = pg_end - map.m_lblk;
++ map.m_lblk = pg_start;
++ map.m_len = pg_end - pg_start;
+ if (off_end)
+ map.m_len++;
+
+@@ -1643,19 +1645,15 @@ static int expand_inode_data(struct inode *inode, loff_t offset,
+ return 0;
+
+ if (f2fs_is_pinned_file(inode)) {
+- block_t len = (map.m_len >> sbi->log_blocks_per_seg) <<
+- sbi->log_blocks_per_seg;
+- block_t done = 0;
+-
+- if (map.m_len % sbi->blocks_per_seg)
+- len += sbi->blocks_per_seg;
++ block_t sec_blks = BLKS_PER_SEC(sbi);
++ block_t sec_len = roundup(map.m_len, sec_blks);
+
+- map.m_len = sbi->blocks_per_seg;
++ map.m_len = sec_blks;
+ next_alloc:
+ if (has_not_enough_free_secs(sbi, 0,
+ GET_SEC_FROM_SEG(sbi, overprovision_segments(sbi)))) {
+ down_write(&sbi->gc_lock);
+- err = f2fs_gc(sbi, true, false, NULL_SEGNO);
++ err = f2fs_gc(sbi, true, false, false, NULL_SEGNO);
+ if (err && err != -ENODATA && err != -EAGAIN)
+ goto out_err;
+ }
+@@ -1663,7 +1661,7 @@ next_alloc:
+ down_write(&sbi->pin_sem);
+
+ f2fs_lock_op(sbi);
+- f2fs_allocate_new_segment(sbi, CURSEG_COLD_DATA_PINNED);
++ f2fs_allocate_new_section(sbi, CURSEG_COLD_DATA_PINNED);
+ f2fs_unlock_op(sbi);
+
+ map.m_seg_type = CURSEG_COLD_DATA_PINNED;
+@@ -1671,24 +1669,25 @@ next_alloc:
+
+ up_write(&sbi->pin_sem);
+
+- done += map.m_len;
+- len -= map.m_len;
++ expanded += map.m_len;
++ sec_len -= map.m_len;
+ map.m_lblk += map.m_len;
+- if (!err && len)
++ if (!err && sec_len)
+ goto next_alloc;
+
+- map.m_len = done;
++ map.m_len = expanded;
+ } else {
+ err = f2fs_map_blocks(inode, &map, 1, F2FS_GET_BLOCK_PRE_AIO);
++ expanded = map.m_len;
+ }
+ out_err:
+ if (err) {
+ pgoff_t last_off;
+
+- if (!map.m_len)
++ if (!expanded)
+ return err;
+
+- last_off = map.m_lblk + map.m_len - 1;
++ last_off = pg_start + expanded - 1;
+
+ /* update new size to the failed position */
+ new_size = (last_off == pg_end) ? offset + len :
+@@ -2486,7 +2485,7 @@ static int f2fs_ioc_gc(struct file *filp, unsigned long arg)
+ down_write(&sbi->gc_lock);
+ }
+
+- ret = f2fs_gc(sbi, sync, true, NULL_SEGNO);
++ ret = f2fs_gc(sbi, sync, true, false, NULL_SEGNO);
+ out:
+ mnt_drop_write_file(filp);
+ return ret;
+@@ -2522,7 +2521,8 @@ do_more:
+ down_write(&sbi->gc_lock);
+ }
+
+- ret = f2fs_gc(sbi, range->sync, true, GET_SEGNO(sbi, range->start));
++ ret = f2fs_gc(sbi, range->sync, true, false,
++ GET_SEGNO(sbi, range->start));
+ if (ret) {
+ if (ret == -EBUSY)
+ ret = -EAGAIN;
+@@ -2975,7 +2975,7 @@ static int f2fs_ioc_flush_device(struct file *filp, unsigned long arg)
+ sm->last_victim[GC_CB] = end_segno + 1;
+ sm->last_victim[GC_GREEDY] = end_segno + 1;
+ sm->last_victim[ALLOC_NEXT] = end_segno + 1;
+- ret = f2fs_gc(sbi, true, true, start_segno);
++ ret = f2fs_gc(sbi, true, true, true, start_segno);
+ if (ret == -EAGAIN)
+ ret = 0;
+ else if (ret < 0)
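
The expand_inode_data() rewrite above replaces open-coded segment rounding for pinned files with roundup() of the requested length to whole sections (BLKS_PER_SEC) and tracks progress in an explicit expanded counter. The rounding arithmetic in isolation (the geometry numbers are examples, not real superblock values):

#include <stdio.h>

/* integer round-up to a multiple of y, as roundup() does */
#define ROUNDUP(x, y)   ((((x) + (y) - 1) / (y)) * (y))

int main(void)
{
        unsigned int sec_blks = 512;    /* blocks per section, example */
        unsigned int m_len = 1300;      /* blocks requested */
        unsigned int sec_len = ROUNDUP(m_len, sec_blks);

        printf("%u blocks -> %u (whole sections)\n", m_len, sec_len);
        /* prints: 1300 blocks -> 1536 (whole sections) */
        return 0;
}
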
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index 3ef84e6ded411..f4e426352aadc 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -112,7 +112,7 @@ do_gc:
+ sync_mode = F2FS_OPTION(sbi).bggc_mode == BGGC_MODE_SYNC;
+
+ /* if return value is not zero, no victim was selected */
+- if (f2fs_gc(sbi, sync_mode, true, NULL_SEGNO))
++ if (f2fs_gc(sbi, sync_mode, true, false, NULL_SEGNO))
+ wait_ms = gc_th->no_gc_sleep_time;
+
+ trace_f2fs_background_gc(sbi->sb, wait_ms,
+@@ -392,10 +392,6 @@ static void add_victim_entry(struct f2fs_sb_info *sbi,
+ if (p->gc_mode == GC_AT &&
+ get_valid_blocks(sbi, segno, true) == 0)
+ return;
+-
+- if (p->alloc_mode == AT_SSR &&
+- get_seg_entry(sbi, segno)->ckpt_valid_blocks == 0)
+- return;
+ }
+
+ for (i = 0; i < sbi->segs_per_sec; i++)
+@@ -728,11 +724,27 @@ retry:
+
+ if (sec_usage_check(sbi, secno))
+ goto next;
++
+ /* Don't touch checkpointed data */
+- if (unlikely(is_sbi_flag_set(sbi, SBI_CP_DISABLED) &&
+- get_ckpt_valid_blocks(sbi, segno) &&
+- p.alloc_mode == LFS))
+- goto next;
++ if (unlikely(is_sbi_flag_set(sbi, SBI_CP_DISABLED))) {
++ if (p.alloc_mode == LFS) {
++ /*
++ * LFS is set to find source section during GC.
++ * The victim should have no checkpointed data.
++ */
++ if (get_ckpt_valid_blocks(sbi, segno, true))
++ goto next;
++ } else {
++ /*
++ * SSR | AT_SSR are set to find target segment
++ * for writes which can be full by checkpointed
++ * and newly written blocks.
++ */
++ if (!f2fs_segment_has_free_slot(sbi, segno))
++ goto next;
++ }
++ }
++
+ if (gc_type == BG_GC && test_bit(secno, dirty_i->victim_secmap))
+ goto next;
+
+@@ -1356,7 +1368,8 @@ out:
+ * the victim data block is ignored.
+ */
+ static int gc_data_segment(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
+- struct gc_inode_list *gc_list, unsigned int segno, int gc_type)
++ struct gc_inode_list *gc_list, unsigned int segno, int gc_type,
++ bool force_migrate)
+ {
+ struct super_block *sb = sbi->sb;
+ struct f2fs_summary *entry;
+@@ -1385,8 +1398,8 @@ next_step:
+ * race condition along with SSR block allocation.
+ */
+ if ((gc_type == BG_GC && has_not_enough_free_secs(sbi, 0, 0)) ||
+- get_valid_blocks(sbi, segno, true) ==
+- BLKS_PER_SEC(sbi))
++ (!force_migrate && get_valid_blocks(sbi, segno, true) ==
++ BLKS_PER_SEC(sbi)))
+ return submitted;
+
+ if (check_valid_map(sbi, segno, off) == 0)
+@@ -1521,7 +1534,8 @@ static int __get_victim(struct f2fs_sb_info *sbi, unsigned int *victim,
+
+ static int do_garbage_collect(struct f2fs_sb_info *sbi,
+ unsigned int start_segno,
+- struct gc_inode_list *gc_list, int gc_type)
++ struct gc_inode_list *gc_list, int gc_type,
++ bool force_migrate)
+ {
+ struct page *sum_page;
+ struct f2fs_summary_block *sum;
+@@ -1608,7 +1622,8 @@ static int do_garbage_collect(struct f2fs_sb_info *sbi,
+ gc_type);
+ else
+ submitted += gc_data_segment(sbi, sum->entries, gc_list,
+- segno, gc_type);
++ segno, gc_type,
++ force_migrate);
+
+ stat_inc_seg_count(sbi, type, gc_type);
+ migrated++;
+@@ -1636,7 +1651,7 @@ skip:
+ }
+
+ int f2fs_gc(struct f2fs_sb_info *sbi, bool sync,
+- bool background, unsigned int segno)
++ bool background, bool force, unsigned int segno)
+ {
+ int gc_type = sync ? FG_GC : BG_GC;
+ int sec_freed = 0, seg_freed = 0, total_freed = 0;
+@@ -1698,7 +1713,7 @@ gc_more:
+ if (ret)
+ goto stop;
+
+- seg_freed = do_garbage_collect(sbi, segno, &gc_list, gc_type);
++ seg_freed = do_garbage_collect(sbi, segno, &gc_list, gc_type, force);
+ if (gc_type == FG_GC &&
+ seg_freed == f2fs_usable_segs_in_sec(sbi, segno))
+ sec_freed++;
+@@ -1837,7 +1852,7 @@ static int free_segment_range(struct f2fs_sb_info *sbi,
+ .iroot = RADIX_TREE_INIT(gc_list.iroot, GFP_NOFS),
+ };
+
+- do_garbage_collect(sbi, segno, &gc_list, FG_GC);
++ do_garbage_collect(sbi, segno, &gc_list, FG_GC, true);
+ put_gc_inode(&gc_list);
+
+ if (!gc_only && get_valid_blocks(sbi, segno, true)) {
+@@ -1976,7 +1991,20 @@ int f2fs_resize_fs(struct f2fs_sb_info *sbi, __u64 block_count)
+
+ /* stop CP to protect MAIN_SEC in free_segment_range */
+ f2fs_lock_op(sbi);
++
++ spin_lock(&sbi->stat_lock);
++ if (shrunk_blocks + valid_user_blocks(sbi) +
++ sbi->current_reserved_blocks + sbi->unusable_block_count +
++ F2FS_OPTION(sbi).root_reserved_blocks > sbi->user_block_count)
++ err = -ENOSPC;
++ spin_unlock(&sbi->stat_lock);
++
++ if (err)
++ goto out_unlock;
++
+ err = free_segment_range(sbi, secs, true);
++
++out_unlock:
+ f2fs_unlock_op(sbi);
+ up_write(&sbi->gc_lock);
+ if (err)
+diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c
+index 993caefcd2bb0..92652ca7a7c8b 100644
+--- a/fs/f2fs/inline.c
++++ b/fs/f2fs/inline.c
+@@ -219,7 +219,8 @@ out:
+
+ f2fs_put_page(page, 1);
+
+- f2fs_balance_fs(sbi, dn.node_changed);
++ if (!err)
++ f2fs_balance_fs(sbi, dn.node_changed);
+
+ return err;
+ }
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index deca74cb17dfd..b053e3c32e1f1 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -327,23 +327,27 @@ void f2fs_drop_inmem_pages(struct inode *inode)
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ struct f2fs_inode_info *fi = F2FS_I(inode);
+
+- while (!list_empty(&fi->inmem_pages)) {
++ do {
+ mutex_lock(&fi->inmem_lock);
++ if (list_empty(&fi->inmem_pages)) {
++ fi->i_gc_failures[GC_FAILURE_ATOMIC] = 0;
++
++ spin_lock(&sbi->inode_lock[ATOMIC_FILE]);
++ if (!list_empty(&fi->inmem_ilist))
++ list_del_init(&fi->inmem_ilist);
++ if (f2fs_is_atomic_file(inode)) {
++ clear_inode_flag(inode, FI_ATOMIC_FILE);
++ sbi->atomic_files--;
++ }
++ spin_unlock(&sbi->inode_lock[ATOMIC_FILE]);
++
++ mutex_unlock(&fi->inmem_lock);
++ break;
++ }
+ __revoke_inmem_pages(inode, &fi->inmem_pages,
+ true, false, true);
+ mutex_unlock(&fi->inmem_lock);
+- }
+-
+- fi->i_gc_failures[GC_FAILURE_ATOMIC] = 0;
+-
+- spin_lock(&sbi->inode_lock[ATOMIC_FILE]);
+- if (!list_empty(&fi->inmem_ilist))
+- list_del_init(&fi->inmem_ilist);
+- if (f2fs_is_atomic_file(inode)) {
+- clear_inode_flag(inode, FI_ATOMIC_FILE);
+- sbi->atomic_files--;
+- }
+- spin_unlock(&sbi->inode_lock[ATOMIC_FILE]);
++ } while (1);
+ }
+
+ void f2fs_drop_inmem_page(struct inode *inode, struct page *page)
+@@ -507,7 +511,7 @@ void f2fs_balance_fs(struct f2fs_sb_info *sbi, bool need)
+ */
+ if (has_not_enough_free_secs(sbi, 0, 0)) {
+ down_write(&sbi->gc_lock);
+- f2fs_gc(sbi, false, false, NULL_SEGNO);
++ f2fs_gc(sbi, false, false, false, NULL_SEGNO);
+ }
+ }
+
+@@ -878,7 +882,7 @@ static void locate_dirty_segment(struct f2fs_sb_info *sbi, unsigned int segno)
+ mutex_lock(&dirty_i->seglist_lock);
+
+ valid_blocks = get_valid_blocks(sbi, segno, false);
+- ckpt_valid_blocks = get_ckpt_valid_blocks(sbi, segno);
++ ckpt_valid_blocks = get_ckpt_valid_blocks(sbi, segno, false);
+
+ if (valid_blocks == 0 && (!is_sbi_flag_set(sbi, SBI_CP_DISABLED) ||
+ ckpt_valid_blocks == usable_blocks)) {
+@@ -963,7 +967,7 @@ static unsigned int get_free_segment(struct f2fs_sb_info *sbi)
+ for_each_set_bit(segno, dirty_i->dirty_segmap[DIRTY], MAIN_SEGS(sbi)) {
+ if (get_valid_blocks(sbi, segno, false))
+ continue;
+- if (get_ckpt_valid_blocks(sbi, segno))
++ if (get_ckpt_valid_blocks(sbi, segno, false))
+ continue;
+ mutex_unlock(&dirty_i->seglist_lock);
+ return segno;
+@@ -2653,6 +2657,23 @@ static void __refresh_next_blkoff(struct f2fs_sb_info *sbi,
+ seg->next_blkoff++;
+ }
+
++bool f2fs_segment_has_free_slot(struct f2fs_sb_info *sbi, int segno)
++{
++ struct seg_entry *se = get_seg_entry(sbi, segno);
++ int entries = SIT_VBLOCK_MAP_SIZE / sizeof(unsigned long);
++ unsigned long *target_map = SIT_I(sbi)->tmp_map;
++ unsigned long *ckpt_map = (unsigned long *)se->ckpt_valid_map;
++ unsigned long *cur_map = (unsigned long *)se->cur_valid_map;
++ int i, pos;
++
++ for (i = 0; i < entries; i++)
++ target_map[i] = ckpt_map[i] | cur_map[i];
++
++ pos = __find_rev_next_zero_bit(target_map, sbi->blocks_per_seg, 0);
++
++ return pos < sbi->blocks_per_seg;
++}
++
+ /*
+ * This function always allocates a used segment(from dirty seglist) by SSR
+ * manner, so it should recover the existing segment information of valid blocks
+@@ -2910,7 +2931,8 @@ unlock:
+ up_read(&SM_I(sbi)->curseg_lock);
+ }
+
+-static void __allocate_new_segment(struct f2fs_sb_info *sbi, int type)
++static void __allocate_new_segment(struct f2fs_sb_info *sbi, int type,
++ bool new_sec)
+ {
+ struct curseg_info *curseg = CURSEG_I(sbi, type);
+ unsigned int old_segno;
+@@ -2918,32 +2940,42 @@ static void __allocate_new_segment(struct f2fs_sb_info *sbi, int type)
+ if (!curseg->inited)
+ goto alloc;
+
+- if (!curseg->next_blkoff &&
+- !get_valid_blocks(sbi, curseg->segno, false) &&
+- !get_ckpt_valid_blocks(sbi, curseg->segno))
+- return;
++ if (curseg->next_blkoff ||
++ get_valid_blocks(sbi, curseg->segno, new_sec))
++ goto alloc;
+
++ if (!get_ckpt_valid_blocks(sbi, curseg->segno, new_sec))
++ return;
+ alloc:
+ old_segno = curseg->segno;
+ SIT_I(sbi)->s_ops->allocate_segment(sbi, type, true);
+ locate_dirty_segment(sbi, old_segno);
+ }
+
+-void f2fs_allocate_new_segment(struct f2fs_sb_info *sbi, int type)
++static void __allocate_new_section(struct f2fs_sb_info *sbi, int type)
+ {
++ __allocate_new_segment(sbi, type, true);
++}
++
++void f2fs_allocate_new_section(struct f2fs_sb_info *sbi, int type)
++{
++ down_read(&SM_I(sbi)->curseg_lock);
+ down_write(&SIT_I(sbi)->sentry_lock);
+- __allocate_new_segment(sbi, type);
++ __allocate_new_section(sbi, type);
+ up_write(&SIT_I(sbi)->sentry_lock);
++ up_read(&SM_I(sbi)->curseg_lock);
+ }
+
+ void f2fs_allocate_new_segments(struct f2fs_sb_info *sbi)
+ {
+ int i;
+
++ down_read(&SM_I(sbi)->curseg_lock);
+ down_write(&SIT_I(sbi)->sentry_lock);
+ for (i = CURSEG_HOT_DATA; i <= CURSEG_COLD_DATA; i++)
+- __allocate_new_segment(sbi, i);
++ __allocate_new_segment(sbi, i, false);
+ up_write(&SIT_I(sbi)->sentry_lock);
++ up_read(&SM_I(sbi)->curseg_lock);
+ }
+
+ static const struct segment_allocation default_salloc_ops = {
+@@ -3382,12 +3414,12 @@ void f2fs_allocate_data_block(struct f2fs_sb_info *sbi, struct page *page,
+ f2fs_inode_chksum_set(sbi, page);
+ }
+
+- if (F2FS_IO_ALIGNED(sbi))
+- fio->retry = false;
+-
+ if (fio) {
+ struct f2fs_bio_info *io;
+
++ if (F2FS_IO_ALIGNED(sbi))
++ fio->retry = false;
++
+ INIT_LIST_HEAD(&fio->list);
+ fio->in_list = true;
+ io = sbi->write_io[fio->type] + fio->temp;
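
The f2fs_segment_has_free_slot() helper added above decides SSR eligibility by OR-ing the checkpointed and current validity bitmaps and probing the result for a zero bit. A minimal standalone sketch of that test, using a toy one-word bitmap in place of the real SIT_VBLOCK_MAP_SIZE maps and __find_rev_next_zero_bit():

#include <stdbool.h>
#include <stdio.h>

/* One 64-block segment per word, purely for the demo. */
static bool segment_has_free_slot(unsigned long ckpt_map, unsigned long cur_map)
{
	/* a block is busy if it is valid in either map */
	unsigned long target = ckpt_map | cur_map;

	/* any zero bit left means a slot can still be allocated by SSR */
	return target != ~0UL;
}

int main(void)
{
	printf("%d\n", segment_has_free_slot(0x0f, 0xf0)); /* 1: bits 8..63 free */
	printf("%d\n", segment_has_free_slot(~0UL, 0));    /* 0: fully valid */
	return 0;
}
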
+diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h
+index 229814b4f4a6c..1bf33fc27b8f8 100644
+--- a/fs/f2fs/segment.h
++++ b/fs/f2fs/segment.h
+@@ -361,8 +361,20 @@ static inline unsigned int get_valid_blocks(struct f2fs_sb_info *sbi,
+ }
+
+ static inline unsigned int get_ckpt_valid_blocks(struct f2fs_sb_info *sbi,
+- unsigned int segno)
++ unsigned int segno, bool use_section)
+ {
++ if (use_section && __is_large_section(sbi)) {
++ unsigned int start_segno = START_SEGNO(segno);
++ unsigned int blocks = 0;
++ int i;
++
++ for (i = 0; i < sbi->segs_per_sec; i++, start_segno++) {
++ struct seg_entry *se = get_seg_entry(sbi, start_segno);
++
++ blocks += se->ckpt_valid_blocks;
++ }
++ return blocks;
++ }
+ return get_seg_entry(sbi, segno)->ckpt_valid_blocks;
+ }
+
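
With large sections, the reworked get_ckpt_valid_blocks() above has to aggregate per-segment counters, which is what the new use_section path does. A compilable sketch of the same walk over toy data (SEGS_PER_SEC and the START_SEGNO computation are illustrative stand-ins, not the kernel's):

#include <stdio.h>

#define SEGS_PER_SEC 4  /* assumption for the demo */

struct seg_entry { unsigned int ckpt_valid_blocks; };

static unsigned int ckpt_valid_blocks(const struct seg_entry *segs,
				      unsigned int segno, int use_section)
{
	if (use_section) {
		unsigned int start = segno - segno % SEGS_PER_SEC; /* START_SEGNO */
		unsigned int blocks = 0;

		for (int i = 0; i < SEGS_PER_SEC; i++)
			blocks += segs[start + i].ckpt_valid_blocks;
		return blocks;
	}
	return segs[segno].ckpt_valid_blocks;
}

int main(void)
{
	struct seg_entry segs[] = { {10}, {0}, {5}, {0} };

	printf("%u\n", ckpt_valid_blocks(segs, 2, 0)); /* 5: one segment */
	printf("%u\n", ckpt_valid_blocks(segs, 2, 1)); /* 15: whole section */
	return 0;
}
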
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 972736d71fa4d..e89655285120a 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -1755,7 +1755,7 @@ static int f2fs_disable_checkpoint(struct f2fs_sb_info *sbi)
+
+ while (!f2fs_time_over(sbi, DISABLE_TIME)) {
+ down_write(&sbi->gc_lock);
+- err = f2fs_gc(sbi, true, false, NULL_SEGNO);
++ err = f2fs_gc(sbi, true, false, false, NULL_SEGNO);
+ if (err == -ENODATA) {
+ err = 0;
+ break;
+diff --git a/fs/fuse/cuse.c b/fs/fuse/cuse.c
+index 45082269e6982..a37528b51798b 100644
+--- a/fs/fuse/cuse.c
++++ b/fs/fuse/cuse.c
+@@ -627,6 +627,8 @@ static int __init cuse_init(void)
+ cuse_channel_fops.owner = THIS_MODULE;
+ cuse_channel_fops.open = cuse_channel_open;
+ cuse_channel_fops.release = cuse_channel_release;
++ /* CUSE is not prepared for FUSE_DEV_IOC_CLONE */
++ cuse_channel_fops.unlocked_ioctl = NULL;
+
+ cuse_class = class_create(THIS_MODULE, "cuse");
+ if (IS_ERR(cuse_class))
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index eff4abaa87da0..6e6d1e5998691 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -1776,8 +1776,17 @@ static void fuse_writepage_end(struct fuse_mount *fm, struct fuse_args *args,
+ container_of(args, typeof(*wpa), ia.ap.args);
+ struct inode *inode = wpa->inode;
+ struct fuse_inode *fi = get_fuse_inode(inode);
++ struct fuse_conn *fc = get_fuse_conn(inode);
+
+ mapping_set_error(inode->i_mapping, error);
++ /*
++ * A writeback finished and this might have updated mtime/ctime on
++ * server making local mtime/ctime stale. Hence invalidate attrs.
++ * Do this only if writeback_cache is not enabled. If writeback_cache
++ * is enabled, we trust local ctime/mtime.
++ */
++ if (!fc->writeback_cache)
++ fuse_invalidate_attr(inode);
+ spin_lock(&fi->lock);
+ rb_erase(&wpa->writepages_entry, &fi->writepages);
+ while (wpa->next) {
+diff --git a/fs/fuse/virtio_fs.c b/fs/fuse/virtio_fs.c
+index 1e5affed158e9..005209b1cd50e 100644
+--- a/fs/fuse/virtio_fs.c
++++ b/fs/fuse/virtio_fs.c
+@@ -1437,8 +1437,7 @@ static int virtio_fs_get_tree(struct fs_context *fsc)
+ if (!fm)
+ goto out_err;
+
+- fuse_conn_init(fc, fm, get_user_ns(current_user_ns()),
+- &virtio_fs_fiq_ops, fs);
++ fuse_conn_init(fc, fm, fsc->user_ns, &virtio_fs_fiq_ops, fs);
+ fc->release = fuse_free_conn;
+ fc->delete_stale = true;
+ fc->auto_submounts = true;
+diff --git a/fs/hfsplus/extents.c b/fs/hfsplus/extents.c
+index a930ddd156819..7054a542689f9 100644
+--- a/fs/hfsplus/extents.c
++++ b/fs/hfsplus/extents.c
+@@ -598,13 +598,15 @@ void hfsplus_file_truncate(struct inode *inode)
+ res = __hfsplus_ext_cache_extent(&fd, inode, alloc_cnt);
+ if (res)
+ break;
+- hfs_brec_remove(&fd);
+
+- mutex_unlock(&fd.tree->tree_lock);
+ start = hip->cached_start;
++ if (blk_cnt <= start)
++ hfs_brec_remove(&fd);
++ mutex_unlock(&fd.tree->tree_lock);
+ hfsplus_free_extents(sb, hip->cached_extents,
+ alloc_cnt - start, alloc_cnt - blk_cnt);
+ hfsplus_dump_extent(hip->cached_extents);
++ mutex_lock(&fd.tree->tree_lock);
+ if (blk_cnt > start) {
+ hip->extent_state |= HFSPLUS_EXT_DIRTY;
+ break;
+@@ -612,7 +614,6 @@ void hfsplus_file_truncate(struct inode *inode)
+ alloc_cnt = start;
+ hip->cached_start = hip->cached_blocks = 0;
+ hip->extent_state &= ~(HFSPLUS_EXT_DIRTY | HFSPLUS_EXT_NEW);
+- mutex_lock(&fd.tree->tree_lock);
+ }
+ hfs_find_exit(&fd);
+
+diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
+index 21c20fd5f9ee7..b7c24d152604d 100644
+--- a/fs/hugetlbfs/inode.c
++++ b/fs/hugetlbfs/inode.c
+@@ -131,6 +131,7 @@ static void huge_pagevec_release(struct pagevec *pvec)
+ static int hugetlbfs_file_mmap(struct file *file, struct vm_area_struct *vma)
+ {
+ struct inode *inode = file_inode(file);
++ struct hugetlbfs_inode_info *info = HUGETLBFS_I(inode);
+ loff_t len, vma_len;
+ int ret;
+ struct hstate *h = hstate_file(file);
+@@ -146,6 +147,10 @@ static int hugetlbfs_file_mmap(struct file *file, struct vm_area_struct *vma)
+ vma->vm_flags |= VM_HUGETLB | VM_DONTEXPAND;
+ vma->vm_ops = &hugetlb_vm_ops;
+
++ ret = seal_check_future_write(info->seals, vma);
++ if (ret)
++ return ret;
++
+ /*
+ * page based offset in vm_pgoff could be sufficiently large to
+ * overflow a loff_t when converted to byte offset. This can
+diff --git a/fs/jbd2/recovery.c b/fs/jbd2/recovery.c
+index dc0694fcfcd12..1e07dfac4d811 100644
+--- a/fs/jbd2/recovery.c
++++ b/fs/jbd2/recovery.c
+@@ -245,15 +245,14 @@ static int fc_do_one_pass(journal_t *journal,
+ return 0;
+
+ while (next_fc_block <= journal->j_fc_last) {
+- jbd_debug(3, "Fast commit replay: next block %ld",
++ jbd_debug(3, "Fast commit replay: next block %ld\n",
+ next_fc_block);
+ err = jread(&bh, journal, next_fc_block);
+ if (err) {
+- jbd_debug(3, "Fast commit replay: read error");
++ jbd_debug(3, "Fast commit replay: read error\n");
+ break;
+ }
+
+- jbd_debug(3, "Processing fast commit blk with seq %d");
+ err = journal->j_fc_replay_callback(journal, bh, pass,
+ next_fc_block - journal->j_fc_first,
+ expected_commit_id);
+diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c
+index f7786e00a6a7f..ed9d580826f5a 100644
+--- a/fs/nfs/callback_proc.c
++++ b/fs/nfs/callback_proc.c
+@@ -137,12 +137,12 @@ static struct inode *nfs_layout_find_inode_by_stateid(struct nfs_client *clp,
+ list_for_each_entry_rcu(lo, &server->layouts, plh_layouts) {
+ if (!pnfs_layout_is_valid(lo))
+ continue;
+- if (stateid != NULL &&
+- !nfs4_stateid_match_other(stateid, &lo->plh_stateid))
++ if (!nfs4_stateid_match_other(stateid, &lo->plh_stateid))
+ continue;
+- if (!nfs_sb_active(server->super))
+- continue;
+- inode = igrab(lo->plh_inode);
++ if (nfs_sb_active(server->super))
++ inode = igrab(lo->plh_inode);
++ else
++ inode = ERR_PTR(-EAGAIN);
+ rcu_read_unlock();
+ if (inode)
+ return inode;
+@@ -176,9 +176,10 @@ static struct inode *nfs_layout_find_inode_by_fh(struct nfs_client *clp,
+ continue;
+ if (nfsi->layout != lo)
+ continue;
+- if (!nfs_sb_active(server->super))
+- continue;
+- inode = igrab(lo->plh_inode);
++ if (nfs_sb_active(server->super))
++ inode = igrab(lo->plh_inode);
++ else
++ inode = ERR_PTR(-EAGAIN);
+ rcu_read_unlock();
+ if (inode)
+ return inode;
+diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
+index 4db3018776f68..d5f28a1f3671c 100644
+--- a/fs/nfs/dir.c
++++ b/fs/nfs/dir.c
+@@ -865,6 +865,8 @@ static int nfs_readdir_xdr_to_array(struct nfs_readdir_descriptor *desc,
+ break;
+ }
+
++ verf_arg = verf_res;
++
+ status = nfs_readdir_page_filler(desc, entry, pages, pglen,
+ arrays, narrays);
+ } while (!status && nfs_readdir_page_needs_filling(page));
+@@ -926,7 +928,12 @@ static int find_and_lock_cache_page(struct nfs_readdir_descriptor *desc)
+ }
+ return res;
+ }
+- memcpy(nfsi->cookieverf, verf, sizeof(nfsi->cookieverf));
++ /*
++ * Set the cookie verifier if the page cache was empty
++ */
++ if (desc->page_index == 0)
++ memcpy(nfsi->cookieverf, verf,
++ sizeof(nfsi->cookieverf));
+ }
+ res = nfs_readdir_search_array(desc);
+ if (res == 0) {
+@@ -973,10 +980,10 @@ static int readdir_search_pagecache(struct nfs_readdir_descriptor *desc)
+ /*
+ * Once we've found the start of the dirent within a page: fill 'er up...
+ */
+-static void nfs_do_filldir(struct nfs_readdir_descriptor *desc)
++static void nfs_do_filldir(struct nfs_readdir_descriptor *desc,
++ const __be32 *verf)
+ {
+ struct file *file = desc->file;
+- struct nfs_inode *nfsi = NFS_I(file_inode(file));
+ struct nfs_cache_array *array;
+ unsigned int i = 0;
+
+@@ -990,7 +997,7 @@ static void nfs_do_filldir(struct nfs_readdir_descriptor *desc)
+ desc->eof = true;
+ break;
+ }
+- memcpy(desc->verf, nfsi->cookieverf, sizeof(desc->verf));
++ memcpy(desc->verf, verf, sizeof(desc->verf));
+ if (i < (array->size-1))
+ desc->dir_cookie = array->array[i+1].cookie;
+ else
+@@ -1047,7 +1054,7 @@ static int uncached_readdir(struct nfs_readdir_descriptor *desc)
+
+ for (i = 0; !desc->eof && i < sz && arrays[i]; i++) {
+ desc->page = arrays[i];
+- nfs_do_filldir(desc);
++ nfs_do_filldir(desc, verf);
+ }
+ desc->page = NULL;
+
+@@ -1068,6 +1075,7 @@ static int nfs_readdir(struct file *file, struct dir_context *ctx)
+ {
+ struct dentry *dentry = file_dentry(file);
+ struct inode *inode = d_inode(dentry);
++ struct nfs_inode *nfsi = NFS_I(inode);
+ struct nfs_open_dir_context *dir_ctx = file->private_data;
+ struct nfs_readdir_descriptor *desc;
+ int res;
+@@ -1121,7 +1129,7 @@ static int nfs_readdir(struct file *file, struct dir_context *ctx)
+ break;
+ }
+ if (res == -ETOOSMALL && desc->plus) {
+- clear_bit(NFS_INO_ADVISE_RDPLUS, &NFS_I(inode)->flags);
++ clear_bit(NFS_INO_ADVISE_RDPLUS, &nfsi->flags);
+ nfs_zap_caches(inode);
+ desc->page_index = 0;
+ desc->plus = false;
+@@ -1131,7 +1139,7 @@ static int nfs_readdir(struct file *file, struct dir_context *ctx)
+ if (res < 0)
+ break;
+
+- nfs_do_filldir(desc);
++ nfs_do_filldir(desc, nfsi->cookieverf);
+ nfs_readdir_page_unlock_and_put_cached(desc);
+ } while (!desc->eof);
+
+diff --git a/fs/nfs/flexfilelayout/flexfilelayout.c b/fs/nfs/flexfilelayout/flexfilelayout.c
+index 872112bffcab2..d383de00d4868 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayout.c
++++ b/fs/nfs/flexfilelayout/flexfilelayout.c
+@@ -106,7 +106,7 @@ static int decode_nfs_fh(struct xdr_stream *xdr, struct nfs_fh *fh)
+ if (unlikely(!p))
+ return -ENOBUFS;
+ fh->size = be32_to_cpup(p++);
+- if (fh->size > sizeof(struct nfs_fh)) {
++ if (fh->size > NFS_MAXFHSIZE) {
+ printk(KERN_ERR "NFS flexfiles: Too big fh received %d\n",
+ fh->size);
+ return -EOVERFLOW;
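
The flexfiles hunk above tightens the bound from sizeof(struct nfs_fh) to NFS_MAXFHSIZE. The difference matters because the struct carries a length field in front of the payload, so its total size is slightly larger than the payload it can actually hold; a handle whose size falls between the two would have passed the old check and overflowed data[]. A quick standalone illustration (layout approximated from include/linux/nfs.h):

#include <stdio.h>

#define NFS_MAXFHSIZE 128

struct nfs_fh {
	unsigned short size;
	unsigned char  data[NFS_MAXFHSIZE];
};

int main(void)
{
	printf("capacity=%d sizeof=%zu\n", NFS_MAXFHSIZE, sizeof(struct nfs_fh));
	/* Typically prints "capacity=128 sizeof=130": fh->size == 129
	 * passed the old check, yet copying 129 bytes into data[]
	 * writes past the end of the array. */
	return 0;
}
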
+diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
+index 522aa10a1a3e7..fd073b1caf6c8 100644
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -1635,10 +1635,10 @@ EXPORT_SYMBOL_GPL(_nfs_display_fhandle);
+ */
+ static int nfs_inode_attrs_need_update(const struct inode *inode, const struct nfs_fattr *fattr)
+ {
+- const struct nfs_inode *nfsi = NFS_I(inode);
++ unsigned long attr_gencount = NFS_I(inode)->attr_gencount;
+
+- return ((long)fattr->gencount - (long)nfsi->attr_gencount) > 0 ||
+- ((long)nfsi->attr_gencount - (long)nfs_read_attr_generation_counter() > 0);
++ return (long)(fattr->gencount - attr_gencount) > 0 ||
++ (long)(attr_gencount - nfs_read_attr_generation_counter()) > 0;
+ }
+
+ static int nfs_refresh_inode_locked(struct inode *inode, struct nfs_fattr *fattr)
+@@ -2067,7 +2067,7 @@ static int nfs_update_inode(struct inode *inode, struct nfs_fattr *fattr)
+ nfsi->attrtimeo_timestamp = now;
+ }
+ /* Set the barrier to be more recent than this fattr */
+- if ((long)fattr->gencount - (long)nfsi->attr_gencount > 0)
++ if ((long)(fattr->gencount - nfsi->attr_gencount) > 0)
+ nfsi->attr_gencount = fattr->gencount;
+ }
+
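
Both inode.c hunks above switch to the wrap-safe serial-number idiom: subtract first in unsigned arithmetic, then interpret the difference as signed. That keeps the ordering correct when gencount wraps past zero, and avoids the signed-overflow hazard of casting each operand before subtracting. A standalone demonstration:

#include <stdio.h>

static int newer(unsigned long a, unsigned long b)
{
	return (long)(a - b) > 0;  /* true if a is "after" b, wrap-safe */
}

int main(void)
{
	printf("%d\n", newer(5, 3));    /* 1 */
	printf("%d\n", newer(2, ~0UL)); /* 1: 2 comes just after ULONG_MAX */
	printf("%d\n", newer(~0UL, 2)); /* 0 */
	return 0;
}
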
+diff --git a/fs/nfs/nfs42proc.c b/fs/nfs/nfs42proc.c
+index f3fd935620fcb..b85f7d56a155c 100644
+--- a/fs/nfs/nfs42proc.c
++++ b/fs/nfs/nfs42proc.c
+@@ -46,11 +46,12 @@ static int _nfs42_proc_fallocate(struct rpc_message *msg, struct file *filep,
+ {
+ struct inode *inode = file_inode(filep);
+ struct nfs_server *server = NFS_SERVER(inode);
++ u32 bitmask[3];
+ struct nfs42_falloc_args args = {
+ .falloc_fh = NFS_FH(inode),
+ .falloc_offset = offset,
+ .falloc_length = len,
+- .falloc_bitmask = nfs4_fattr_bitmap,
++ .falloc_bitmask = bitmask,
+ };
+ struct nfs42_falloc_res res = {
+ .falloc_server = server,
+@@ -68,6 +69,10 @@ static int _nfs42_proc_fallocate(struct rpc_message *msg, struct file *filep,
+ return status;
+ }
+
++ memcpy(bitmask, server->cache_consistency_bitmask, sizeof(bitmask));
++ if (server->attr_bitmask[1] & FATTR4_WORD1_SPACE_USED)
++ bitmask[1] |= FATTR4_WORD1_SPACE_USED;
++
+ res.falloc_fattr = nfs_alloc_fattr();
+ if (!res.falloc_fattr)
+ return -ENOMEM;
+@@ -75,7 +80,8 @@ static int _nfs42_proc_fallocate(struct rpc_message *msg, struct file *filep,
+ status = nfs4_call_sync(server->client, server, msg,
+ &args.seq_args, &res.seq_res, 0);
+ if (status == 0)
+- status = nfs_post_op_update_inode(inode, res.falloc_fattr);
++ status = nfs_post_op_update_inode_force_wcc(inode,
++ res.falloc_fattr);
+
+ kfree(res.falloc_fattr);
+ return status;
+@@ -84,7 +90,8 @@ static int _nfs42_proc_fallocate(struct rpc_message *msg, struct file *filep,
+ static int nfs42_proc_fallocate(struct rpc_message *msg, struct file *filep,
+ loff_t offset, loff_t len)
+ {
+- struct nfs_server *server = NFS_SERVER(file_inode(filep));
++ struct inode *inode = file_inode(filep);
++ struct nfs_server *server = NFS_SERVER(inode);
+ struct nfs4_exception exception = { };
+ struct nfs_lock_context *lock;
+ int err;
+@@ -93,9 +100,13 @@ static int nfs42_proc_fallocate(struct rpc_message *msg, struct file *filep,
+ if (IS_ERR(lock))
+ return PTR_ERR(lock);
+
+- exception.inode = file_inode(filep);
++ exception.inode = inode;
+ exception.state = lock->open_context->state;
+
++ err = nfs_sync_inode(inode);
++ if (err)
++ goto out;
++
+ do {
+ err = _nfs42_proc_fallocate(msg, filep, lock, offset, len);
+ if (err == -ENOTSUPP) {
+@@ -104,7 +115,7 @@ static int nfs42_proc_fallocate(struct rpc_message *msg, struct file *filep,
+ }
+ err = nfs4_handle_exception(server, err, &exception);
+ } while (exception.retry);
+-
++out:
+ nfs_put_lock_context(lock);
+ return err;
+ }
+@@ -142,16 +153,13 @@ int nfs42_proc_deallocate(struct file *filep, loff_t offset, loff_t len)
+ return -EOPNOTSUPP;
+
+ inode_lock(inode);
+- err = nfs_sync_inode(inode);
+- if (err)
+- goto out_unlock;
+
+ err = nfs42_proc_fallocate(&msg, filep, offset, len);
+ if (err == 0)
+ truncate_pagecache_range(inode, offset, (offset + len) -1);
+ if (err == -EOPNOTSUPP)
+ NFS_SERVER(inode)->caps &= ~NFS_CAP_DEALLOCATE;
+-out_unlock:
++
+ inode_unlock(inode);
+ return err;
+ }
+@@ -657,7 +665,10 @@ static loff_t _nfs42_proc_llseek(struct file *filep,
+ if (status)
+ return status;
+
+- return vfs_setpos(filep, res.sr_offset, inode->i_sb->s_maxbytes);
++ if (whence == SEEK_DATA && res.sr_eof)
++ return -NFS4ERR_NXIO;
++ else
++ return vfs_setpos(filep, res.sr_offset, inode->i_sb->s_maxbytes);
+ }
+
+ loff_t nfs42_proc_llseek(struct file *filep, loff_t offset, int whence)
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 95d3b8540f8ed..8b4d2fc0cb017 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -112,9 +112,10 @@ static int nfs41_test_stateid(struct nfs_server *, nfs4_stateid *,
+ static int nfs41_free_stateid(struct nfs_server *, const nfs4_stateid *,
+ const struct cred *, bool);
+ #endif
+-static void nfs4_bitmask_adjust(__u32 *bitmask, struct inode *inode,
+- struct nfs_server *server,
+- struct nfs4_label *label);
++static void nfs4_bitmask_set(__u32 bitmask[NFS4_BITMASK_SZ],
++ const __u32 *src, struct inode *inode,
++ struct nfs_server *server,
++ struct nfs4_label *label);
+
+ #ifdef CONFIG_NFS_V4_SECURITY_LABEL
+ static inline struct nfs4_label *
+@@ -3598,6 +3599,7 @@ static void nfs4_close_prepare(struct rpc_task *task, void *data)
+ struct nfs4_closedata *calldata = data;
+ struct nfs4_state *state = calldata->state;
+ struct inode *inode = calldata->inode;
++ struct nfs_server *server = NFS_SERVER(inode);
+ struct pnfs_layout_hdr *lo;
+ bool is_rdonly, is_wronly, is_rdwr;
+ int call_close = 0;
+@@ -3654,8 +3656,10 @@ static void nfs4_close_prepare(struct rpc_task *task, void *data)
+ if (calldata->arg.fmode == 0 || calldata->arg.fmode == FMODE_READ) {
+ /* Close-to-open cache consistency revalidation */
+ if (!nfs4_have_delegation(inode, FMODE_READ)) {
+- calldata->arg.bitmask = NFS_SERVER(inode)->cache_consistency_bitmask;
+- nfs4_bitmask_adjust(calldata->arg.bitmask, inode, NFS_SERVER(inode), NULL);
++ nfs4_bitmask_set(calldata->arg.bitmask_store,
++ server->cache_consistency_bitmask,
++ inode, server, NULL);
++ calldata->arg.bitmask = calldata->arg.bitmask_store;
+ } else
+ calldata->arg.bitmask = NULL;
+ }
+@@ -5423,19 +5427,17 @@ bool nfs4_write_need_cache_consistency_data(struct nfs_pgio_header *hdr)
+ return nfs4_have_delegation(hdr->inode, FMODE_READ) == 0;
+ }
+
+-static void nfs4_bitmask_adjust(__u32 *bitmask, struct inode *inode,
+- struct nfs_server *server,
+- struct nfs4_label *label)
++static void nfs4_bitmask_set(__u32 bitmask[NFS4_BITMASK_SZ], const __u32 *src,
++ struct inode *inode, struct nfs_server *server,
++ struct nfs4_label *label)
+ {
+-
+ unsigned long cache_validity = READ_ONCE(NFS_I(inode)->cache_validity);
++ unsigned int i;
+
+- if ((cache_validity & NFS_INO_INVALID_DATA) ||
+- (cache_validity & NFS_INO_REVAL_PAGECACHE) ||
+- (cache_validity & NFS_INO_REVAL_FORCED) ||
+- (cache_validity & NFS_INO_INVALID_OTHER))
+- nfs4_bitmap_copy_adjust(bitmask, nfs4_bitmask(server, label), inode);
++ memcpy(bitmask, src, sizeof(*bitmask) * NFS4_BITMASK_SZ);
+
++ if (cache_validity & (NFS_INO_INVALID_CHANGE | NFS_INO_REVAL_PAGECACHE))
++ bitmask[0] |= FATTR4_WORD0_CHANGE;
+ if (cache_validity & NFS_INO_INVALID_ATIME)
+ bitmask[1] |= FATTR4_WORD1_TIME_ACCESS;
+ if (cache_validity & NFS_INO_INVALID_OTHER)
+@@ -5444,16 +5446,22 @@ static void nfs4_bitmask_adjust(__u32 *bitmask, struct inode *inode,
+ FATTR4_WORD1_NUMLINKS;
+ if (label && label->len && cache_validity & NFS_INO_INVALID_LABEL)
+ bitmask[2] |= FATTR4_WORD2_SECURITY_LABEL;
+- if (cache_validity & NFS_INO_INVALID_CHANGE)
+- bitmask[0] |= FATTR4_WORD0_CHANGE;
+ if (cache_validity & NFS_INO_INVALID_CTIME)
+ bitmask[1] |= FATTR4_WORD1_TIME_METADATA;
+ if (cache_validity & NFS_INO_INVALID_MTIME)
+ bitmask[1] |= FATTR4_WORD1_TIME_MODIFY;
+- if (cache_validity & NFS_INO_INVALID_SIZE)
+- bitmask[0] |= FATTR4_WORD0_SIZE;
+ if (cache_validity & NFS_INO_INVALID_BLOCKS)
+ bitmask[1] |= FATTR4_WORD1_SPACE_USED;
++
++ if (nfs4_have_delegation(inode, FMODE_READ) &&
++ !(cache_validity & NFS_INO_REVAL_FORCED))
++ bitmask[0] &= ~FATTR4_WORD0_SIZE;
++ else if (cache_validity &
++ (NFS_INO_INVALID_SIZE | NFS_INO_REVAL_PAGECACHE))
++ bitmask[0] |= FATTR4_WORD0_SIZE;
++
++ for (i = 0; i < NFS4_BITMASK_SZ; i++)
++ bitmask[i] &= server->attr_bitmask[i];
+ }
+
+ static void nfs4_proc_write_setup(struct nfs_pgio_header *hdr,
+@@ -5466,8 +5474,10 @@ static void nfs4_proc_write_setup(struct nfs_pgio_header *hdr,
+ hdr->args.bitmask = NULL;
+ hdr->res.fattr = NULL;
+ } else {
+- hdr->args.bitmask = server->cache_consistency_bitmask;
+- nfs4_bitmask_adjust(hdr->args.bitmask, hdr->inode, server, NULL);
++ nfs4_bitmask_set(hdr->args.bitmask_store,
++ server->cache_consistency_bitmask,
++ hdr->inode, server, NULL);
++ hdr->args.bitmask = hdr->args.bitmask_store;
+ }
+
+ if (!hdr->pgio_done_cb)
+@@ -6509,8 +6519,10 @@ static int _nfs4_proc_delegreturn(struct inode *inode, const struct cred *cred,
+
+ data->args.fhandle = &data->fh;
+ data->args.stateid = &data->stateid;
+- data->args.bitmask = server->cache_consistency_bitmask;
+- nfs4_bitmask_adjust(data->args.bitmask, inode, server, NULL);
++ nfs4_bitmask_set(data->args.bitmask_store,
++ server->cache_consistency_bitmask, inode, server,
++ NULL);
++ data->args.bitmask = data->args.bitmask_store;
+ nfs_copy_fh(&data->fh, NFS_FH(inode));
+ nfs4_stateid_copy(&data->stateid, stateid);
+ data->res.fattr = &data->fattr;
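
The recurring pattern in these nfs4proc.c hunks is that the adjusted attribute mask now lives in per-call storage (the new bitmask_store arrays) instead of being OR-ed into the server-wide cache_consistency_bitmask, which concurrent calls share. A reduced model of the nfs4_bitmask_set() flow, with made-up flag values:

#include <stdio.h>
#include <string.h>

#define SZ 3
#define WORD0_CHANGE 0x1u
#define WORD0_SIZE   0x8u

static void bitmask_set(unsigned int dst[SZ], const unsigned int src[SZ],
			const unsigned int supported[SZ], int want_size)
{
	memcpy(dst, src, sizeof(unsigned int) * SZ); /* start from the template */
	dst[0] |= WORD0_CHANGE;
	if (want_size)
		dst[0] |= WORD0_SIZE;
	for (int i = 0; i < SZ; i++)
		dst[i] &= supported[i]; /* never request unsupported attrs */
}

int main(void)
{
	unsigned int shared[SZ]    = { WORD0_CHANGE, 0, 0 };
	unsigned int supported[SZ] = { ~0u, ~0u, ~0u };
	unsigned int mine[SZ];

	bitmask_set(mine, shared, supported, 1);
	printf("%#x (shared template still %#x)\n", mine[0], shared[0]);
	return 0;
}
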
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index a501bb9a2fac1..eca36d804158a 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -4874,6 +4874,11 @@ static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp,
+ if (nf)
+ nfsd_file_put(nf);
+
++ status = nfserrno(nfsd_open_break_lease(cur_fh->fh_dentry->d_inode,
++ access));
++ if (status)
++ goto out_put_access;
++
+ status = nfsd4_truncate(rqstp, cur_fh, open);
+ if (status)
+ goto out_put_access;
+@@ -6856,11 +6861,20 @@ out:
+ static __be32 nfsd_test_lock(struct svc_rqst *rqstp, struct svc_fh *fhp, struct file_lock *lock)
+ {
+ struct nfsd_file *nf;
+- __be32 err = nfsd_file_acquire(rqstp, fhp, NFSD_MAY_READ, &nf);
+- if (!err) {
+- err = nfserrno(vfs_test_lock(nf->nf_file, lock));
+- nfsd_file_put(nf);
+- }
++ __be32 err;
++
++ err = nfsd_file_acquire(rqstp, fhp, NFSD_MAY_READ, &nf);
++ if (err)
++ return err;
++ fh_lock(fhp); /* to block new leases till after test_lock: */
++ err = nfserrno(nfsd_open_break_lease(fhp->fh_dentry->d_inode,
++ NFSD_MAY_READ));
++ if (err)
++ goto out;
++ err = nfserrno(vfs_test_lock(nf->nf_file, lock));
++out:
++ fh_unlock(fhp);
++ nfsd_file_put(nf);
+ return err;
+ }
+
+diff --git a/fs/proc/generic.c b/fs/proc/generic.c
+index 6c0a05f55d6b1..09e4d8a499a38 100644
+--- a/fs/proc/generic.c
++++ b/fs/proc/generic.c
+@@ -754,7 +754,7 @@ int remove_proc_subtree(const char *name, struct proc_dir_entry *parent)
+ while (1) {
+ next = pde_subdir_first(de);
+ if (next) {
+- if (unlikely(pde_is_permanent(root))) {
++ if (unlikely(pde_is_permanent(next))) {
+ write_unlock(&proc_subdir_lock);
+ WARN(1, "removing permanent /proc entry '%s/%s'",
+ next->parent->name, next->name);
+diff --git a/fs/squashfs/file.c b/fs/squashfs/file.c
+index 7b1128398976e..89d492916deaf 100644
+--- a/fs/squashfs/file.c
++++ b/fs/squashfs/file.c
+@@ -211,11 +211,11 @@ failure:
+ * If the skip factor is limited in this way then the file will use multiple
+ * slots.
+ */
+-static inline int calculate_skip(int blocks)
++static inline int calculate_skip(u64 blocks)
+ {
+- int skip = blocks / ((SQUASHFS_META_ENTRIES + 1)
++ u64 skip = blocks / ((SQUASHFS_META_ENTRIES + 1)
+ * SQUASHFS_META_INDEXES);
+- return min(SQUASHFS_CACHED_BLKS - 1, skip + 1);
++ return min((u64) SQUASHFS_CACHED_BLKS - 1, skip + 1);
+ }
+
+
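
The calculate_skip() change above exists because the block count arrives as a 64-bit quantity; squeezing it through an int parameter let very large counts wrap negative, producing a negative skip. A standalone demo of the truncation the fix removes (the divisor is a stand-in for (SQUASHFS_META_ENTRIES + 1) * SQUASHFS_META_INDEXES, whose real values are not shown here):

#include <stdint.h>
#include <stdio.h>

#define META_SLOTS (128 * 65536) /* illustrative divisor only */

int main(void)
{
	uint64_t blocks = 3000000000ULL; /* a block count above INT_MAX */
	int      as_int = (int)blocks;   /* old int parameter: typically wraps negative */

	printf("int: %d -> skip %d\n", as_int, as_int / META_SLOTS);
	printf("u64: %llu -> skip %llu\n", (unsigned long long)blocks,
	       (unsigned long long)(blocks / META_SLOTS));
	return 0;
}
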
+diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
+index 0042ef362511d..5c0a0883b91ac 100644
+--- a/include/linux/cpuhotplug.h
++++ b/include/linux/cpuhotplug.h
+@@ -135,6 +135,7 @@ enum cpuhp_state {
+ CPUHP_AP_RISCV_TIMER_STARTING,
+ CPUHP_AP_CLINT_TIMER_STARTING,
+ CPUHP_AP_CSKY_TIMER_STARTING,
++ CPUHP_AP_TI_GP_TIMER_STARTING,
+ CPUHP_AP_HYPERV_TIMER_STARTING,
+ CPUHP_AP_KVM_STARTING,
+ CPUHP_AP_KVM_ARM_VGIC_INIT_STARTING,
+diff --git a/include/linux/elevator.h b/include/linux/elevator.h
+index bacc40a0bdf39..bc26b4e11f62f 100644
+--- a/include/linux/elevator.h
++++ b/include/linux/elevator.h
+@@ -34,7 +34,7 @@ struct elevator_mq_ops {
+ void (*depth_updated)(struct blk_mq_hw_ctx *);
+
+ bool (*allow_merge)(struct request_queue *, struct request *, struct bio *);
+- bool (*bio_merge)(struct blk_mq_hw_ctx *, struct bio *, unsigned int);
++ bool (*bio_merge)(struct request_queue *, struct bio *, unsigned int);
+ int (*request_merge)(struct request_queue *q, struct request **, struct bio *);
+ void (*request_merged)(struct request_queue *, struct request *, enum elv_merge);
+ void (*requests_merged)(struct request_queue *, struct request *, struct request *);
+diff --git a/include/linux/i2c.h b/include/linux/i2c.h
+index 56622658b2158..a670ae129f4b9 100644
+--- a/include/linux/i2c.h
++++ b/include/linux/i2c.h
+@@ -687,6 +687,8 @@ struct i2c_adapter_quirks {
+ #define I2C_AQ_NO_ZERO_LEN_READ BIT(5)
+ #define I2C_AQ_NO_ZERO_LEN_WRITE BIT(6)
+ #define I2C_AQ_NO_ZERO_LEN (I2C_AQ_NO_ZERO_LEN_READ | I2C_AQ_NO_ZERO_LEN_WRITE)
++/* adapter cannot do repeated START */
++#define I2C_AQ_NO_REP_START BIT(7)
+
+ /*
+ * i2c_adapter is the structure used to identify a physical i2c bus along
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index 992c18d5e85d7..ad8395cf1262d 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -3191,5 +3191,37 @@ unsigned long wp_shared_mapping_range(struct address_space *mapping,
+
+ extern int sysctl_nr_trim_pages;
+
++/**
++ * seal_check_future_write - Check for F_SEAL_FUTURE_WRITE flag and handle it
++ * @seals: the seals to check
++ * @vma: the vma to operate on
++ *
++ * Check whether F_SEAL_FUTURE_WRITE is set; if so, do proper check/handling on
++ * the vma flags. Return 0 if check pass, or <0 for errors.
++ */
++static inline int seal_check_future_write(int seals, struct vm_area_struct *vma)
++{
++ if (seals & F_SEAL_FUTURE_WRITE) {
++ /*
++ * New PROT_WRITE and MAP_SHARED mmaps are not allowed when
++ * "future write" seal active.
++ */
++ if ((vma->vm_flags & VM_SHARED) && (vma->vm_flags & VM_WRITE))
++ return -EPERM;
++
++ /*
++ * Since an F_SEAL_FUTURE_WRITE sealed memfd can be mapped as
++ * MAP_SHARED and read-only, take care to not allow mprotect to
++ * revert protections on such mappings. Do this only for shared
++ * mappings. For private mappings, don't need to mask
++ * VM_MAYWRITE as we still want them to be COW-writable.
++ */
++ if (vma->vm_flags & VM_SHARED)
++ vma->vm_flags &= ~(VM_MAYWRITE);
++ }
++
++ return 0;
++}
++
+ #endif /* __KERNEL__ */
+ #endif /* _LINUX_MM_H */
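
This new mm.h helper centralises the F_SEAL_FUTURE_WRITE handling that the hugetlbfs and shmem hunks elsewhere in this patch now call from their mmap handlers. A compilable model of the same two checks on plain integers (the VM_* values below are stand-ins, not the kernel's; -1 stands in for -EPERM):

#include <stdio.h>

#define F_SEAL_FUTURE_WRITE 0x10
#define VM_SHARED   0x1
#define VM_WRITE    0x2
#define VM_MAYWRITE 0x4

static int seal_check(int seals, unsigned long *vm_flags)
{
	if (seals & F_SEAL_FUTURE_WRITE) {
		/* reject new shared+writable mappings outright */
		if ((*vm_flags & VM_SHARED) && (*vm_flags & VM_WRITE))
			return -1;
		/* and stop mprotect() from re-adding write on shared ones */
		if (*vm_flags & VM_SHARED)
			*vm_flags &= ~VM_MAYWRITE;
	}
	return 0;
}

int main(void)
{
	unsigned long ro_shared = VM_SHARED | VM_MAYWRITE;

	printf("%d\n", seal_check(F_SEAL_FUTURE_WRITE, &ro_shared)); /* 0 */
	printf("%#lx\n", ro_shared); /* VM_MAYWRITE cleared */
	return 0;
}
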
+diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
+index 61c77cfff8c28..b4f85d8dd15e2 100644
+--- a/include/linux/mm_types.h
++++ b/include/linux/mm_types.h
+@@ -97,10 +97,10 @@ struct page {
+ };
+ struct { /* page_pool used by netstack */
+ /**
+- * @dma_addr: might require a 64-bit value even on
++ * @dma_addr: might require a 64-bit value on
+ * 32-bit architectures.
+ */
+- dma_addr_t dma_addr;
++ unsigned long dma_addr[2];
+ };
+ struct { /* slab, slob and slub */
+ union {
+diff --git a/include/linux/nfs_xdr.h b/include/linux/nfs_xdr.h
+index 3327239fa2f9a..cc29dee508f74 100644
+--- a/include/linux/nfs_xdr.h
++++ b/include/linux/nfs_xdr.h
+@@ -15,6 +15,8 @@
+ #define NFS_DEF_FILE_IO_SIZE (4096U)
+ #define NFS_MIN_FILE_IO_SIZE (1024U)
+
++#define NFS_BITMASK_SZ 3
++
+ struct nfs4_string {
+ unsigned int len;
+ char *data;
+@@ -525,7 +527,8 @@ struct nfs_closeargs {
+ struct nfs_seqid * seqid;
+ fmode_t fmode;
+ u32 share_access;
+- u32 * bitmask;
++ const u32 * bitmask;
++ u32 bitmask_store[NFS_BITMASK_SZ];
+ struct nfs4_layoutreturn_args *lr_args;
+ };
+
+@@ -608,7 +611,8 @@ struct nfs4_delegreturnargs {
+ struct nfs4_sequence_args seq_args;
+ const struct nfs_fh *fhandle;
+ const nfs4_stateid *stateid;
+- u32 * bitmask;
++ const u32 *bitmask;
++ u32 bitmask_store[NFS_BITMASK_SZ];
+ struct nfs4_layoutreturn_args *lr_args;
+ };
+
+@@ -648,7 +652,8 @@ struct nfs_pgio_args {
+ union {
+ unsigned int replen; /* used by read */
+ struct {
+- u32 * bitmask; /* used by write */
++ const u32 * bitmask; /* used by write */
++ u32 bitmask_store[NFS_BITMASK_SZ]; /* used by write */
+ enum nfs3_stable_how stable; /* used by write */
+ };
+ };
+diff --git a/include/linux/pci-epc.h b/include/linux/pci-epc.h
+index cc66bec8be905..88d311bad9846 100644
+--- a/include/linux/pci-epc.h
++++ b/include/linux/pci-epc.h
+@@ -201,8 +201,10 @@ int pci_epc_start(struct pci_epc *epc);
+ void pci_epc_stop(struct pci_epc *epc);
+ const struct pci_epc_features *pci_epc_get_features(struct pci_epc *epc,
+ u8 func_no);
+-unsigned int pci_epc_get_first_free_bar(const struct pci_epc_features
+- *epc_features);
++enum pci_barno
++pci_epc_get_first_free_bar(const struct pci_epc_features *epc_features);
++enum pci_barno pci_epc_get_next_free_bar(const struct pci_epc_features
++ *epc_features, enum pci_barno bar);
+ struct pci_epc *pci_epc_get(const char *epc_name);
+ void pci_epc_put(struct pci_epc *epc);
+
+diff --git a/include/linux/pci-epf.h b/include/linux/pci-epf.h
+index 6644ff3b07024..fa3aca43eb192 100644
+--- a/include/linux/pci-epf.h
++++ b/include/linux/pci-epf.h
+@@ -21,6 +21,7 @@ enum pci_notify_event {
+ };
+
+ enum pci_barno {
++ NO_BAR = -1,
+ BAR_0,
+ BAR_1,
+ BAR_2,
+diff --git a/include/linux/pm.h b/include/linux/pm.h
+index 47aca6bac1d6a..52d9724db9dc6 100644
+--- a/include/linux/pm.h
++++ b/include/linux/pm.h
+@@ -600,6 +600,7 @@ struct dev_pm_info {
+ unsigned int idle_notification:1;
+ unsigned int request_pending:1;
+ unsigned int deferred_resume:1;
++ unsigned int needs_force_resume:1;
+ unsigned int runtime_auto:1;
+ bool ignore_children:1;
+ unsigned int no_callbacks:1;
+diff --git a/include/net/page_pool.h b/include/net/page_pool.h
+index b5b1953053468..e05744b9a1bc2 100644
+--- a/include/net/page_pool.h
++++ b/include/net/page_pool.h
+@@ -198,7 +198,17 @@ static inline void page_pool_recycle_direct(struct page_pool *pool,
+
+ static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
+ {
+- return page->dma_addr;
++ dma_addr_t ret = page->dma_addr[0];
++ if (sizeof(dma_addr_t) > sizeof(unsigned long))
++ ret |= (dma_addr_t)page->dma_addr[1] << 16 << 16;
++ return ret;
++}
++
++static inline void page_pool_set_dma_addr(struct page *page, dma_addr_t addr)
++{
++ page->dma_addr[0] = addr;
++ if (sizeof(dma_addr_t) > sizeof(unsigned long))
++ page->dma_addr[1] = upper_32_bits(addr);
+ }
+
+ static inline bool is_page_pool_compiled_in(void)
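
Together with the mm_types.h hunk earlier, the getter/setter pair above lets page_pool round-trip a 64-bit DMA address through two unsigned longs on 32-bit architectures. A standalone model of the same packing, using uint32_t to stand in for a 32-bit unsigned long:

#include <stdint.h>
#include <stdio.h>

typedef uint64_t dma_addr_t; /* e.g. a 32-bit kernel with 64-bit DMA */

static dma_addr_t get_dma(const uint32_t dma_addr[2])
{
	dma_addr_t ret = dma_addr[0];

	/* "<< 16 << 16" rather than "<< 32": when dma_addr_t is only 32 bits
	 * this branch is dead but must still compile, and shifting a 32-bit
	 * value by 32 would be undefined. */
	if (sizeof(dma_addr_t) > sizeof(uint32_t))
		ret |= (dma_addr_t)dma_addr[1] << 16 << 16;
	return ret;
}

static void set_dma(uint32_t dma_addr[2], dma_addr_t addr)
{
	dma_addr[0] = (uint32_t)addr;
	if (sizeof(dma_addr_t) > sizeof(uint32_t))
		dma_addr[1] = (uint32_t)(addr >> 32); /* upper_32_bits() */
}

int main(void)
{
	uint32_t words[2];

	set_dma(words, 0x123456789abcdef0ULL);
	printf("%#llx\n", (unsigned long long)get_dma(words)); /* round-trips */
	return 0;
}
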
+diff --git a/include/trace/events/sunrpc.h b/include/trace/events/sunrpc.h
+index 6f89c27265f58..9db5702a84a58 100644
+--- a/include/trace/events/sunrpc.h
++++ b/include/trace/events/sunrpc.h
+@@ -1141,7 +1141,6 @@ DECLARE_EVENT_CLASS(xprt_writelock_event,
+
+ DEFINE_WRITELOCK_EVENT(reserve_xprt);
+ DEFINE_WRITELOCK_EVENT(release_xprt);
+-DEFINE_WRITELOCK_EVENT(transmit_queued);
+
+ DECLARE_EVENT_CLASS(xprt_cong_event,
+ TP_PROTO(
+diff --git a/include/uapi/linux/netfilter/xt_SECMARK.h b/include/uapi/linux/netfilter/xt_SECMARK.h
+index 1f2a708413f5d..beb2cadba8a9c 100644
+--- a/include/uapi/linux/netfilter/xt_SECMARK.h
++++ b/include/uapi/linux/netfilter/xt_SECMARK.h
+@@ -20,4 +20,10 @@ struct xt_secmark_target_info {
+ char secctx[SECMARK_SECCTX_MAX];
+ };
+
++struct xt_secmark_target_info_v1 {
++ __u8 mode;
++ char secctx[SECMARK_SECCTX_MAX];
++ __u32 secid;
++};
++
+ #endif /*_XT_SECMARK_H_target */
+diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
+index 33a2a702b152c..b897756202dff 100644
+--- a/kernel/dma/swiotlb.c
++++ b/kernel/dma/swiotlb.c
+@@ -579,7 +579,8 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
+ enum dma_data_direction dir, unsigned long attrs)
+ {
+ unsigned int offset = swiotlb_align_offset(dev, orig_addr);
+- unsigned int index, i;
++ unsigned int i;
++ int index;
+ phys_addr_t tlb_addr;
+
+ if (no_iotlb_memory)
+diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c
+index 5c3447cf7ad58..33400ff051a84 100644
+--- a/kernel/kexec_file.c
++++ b/kernel/kexec_file.c
+@@ -740,8 +740,10 @@ static int kexec_calculate_store_digests(struct kimage *image)
+
+ sha_region_sz = KEXEC_SEGMENT_MAX * sizeof(struct kexec_sha_region);
+ sha_regions = vzalloc(sha_region_sz);
+- if (!sha_regions)
++ if (!sha_regions) {
++ ret = -ENOMEM;
+ goto out_free_desc;
++ }
+
+ desc->tfm = tfm;
+
+diff --git a/kernel/resource.c b/kernel/resource.c
+index 833394f9c6085..c99cf3f35802f 100644
+--- a/kernel/resource.c
++++ b/kernel/resource.c
+@@ -454,7 +454,7 @@ int walk_system_ram_res(u64 start, u64 end, void *arg,
+ {
+ unsigned long flags = IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY;
+
+- return __walk_iomem_res_desc(start, end, flags, IORES_DESC_NONE, true,
++ return __walk_iomem_res_desc(start, end, flags, IORES_DESC_NONE, false,
+ arg, func);
+ }
+
+@@ -467,7 +467,7 @@ int walk_mem_res(u64 start, u64 end, void *arg,
+ {
+ unsigned long flags = IORESOURCE_MEM | IORESOURCE_BUSY;
+
+- return __walk_iomem_res_desc(start, end, flags, IORES_DESC_NONE, true,
++ return __walk_iomem_res_desc(start, end, flags, IORES_DESC_NONE, false,
+ arg, func);
+ }
+
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index c5fcb5ce21944..984456b431aa8 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -928,7 +928,7 @@ DEFINE_STATIC_KEY_FALSE(sched_uclamp_used);
+
+ static inline unsigned int uclamp_bucket_id(unsigned int clamp_value)
+ {
+- return clamp_value / UCLAMP_BUCKET_DELTA;
++ return min_t(unsigned int, clamp_value / UCLAMP_BUCKET_DELTA, UCLAMP_BUCKETS - 1);
+ }
+
+ static inline unsigned int uclamp_none(enum uclamp_id clamp_id)
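
The sched/core.c hunk above bounds the bucket index because round-to-closest sizing of the bucket width lets the maximum clamp value land one bucket past the end. The arithmetic, worked for a 20-bucket configuration (constants mirror the kernel's SCHED_CAPACITY_SCALE and the upper CONFIG_UCLAMP_BUCKETS_COUNT limit, but the sketch is only illustrative):

#include <stdio.h>

#define SCALE   1024 /* SCHED_CAPACITY_SCALE */
#define BUCKETS 20
#define DELTA   ((SCALE + BUCKETS / 2) / BUCKETS) /* round-closest: 51 */

int main(void)
{
	unsigned int old_id = SCALE / DELTA; /* 1024 / 51 = 20; valid ids are 0..19 */
	unsigned int new_id = old_id < BUCKETS - 1 ? old_id : BUCKETS - 1;

	printf("unclamped=%u clamped=%u\n", old_id, new_id);
	return 0;
}
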
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index f217e5251fb2f..10b8b133145df 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -10885,16 +10885,22 @@ static void propagate_entity_cfs_rq(struct sched_entity *se)
+ {
+ struct cfs_rq *cfs_rq;
+
++ list_add_leaf_cfs_rq(cfs_rq_of(se));
++
+ /* Start to propagate at parent */
+ se = se->parent;
+
+ for_each_sched_entity(se) {
+ cfs_rq = cfs_rq_of(se);
+
+- if (cfs_rq_throttled(cfs_rq))
+- break;
++ if (!cfs_rq_throttled(cfs_rq)){
++ update_load_avg(cfs_rq, se, UPDATE_TG);
++ list_add_leaf_cfs_rq(cfs_rq);
++ continue;
++ }
+
+- update_load_avg(cfs_rq, se, UPDATE_TG);
++ if (list_add_leaf_cfs_rq(cfs_rq))
++ break;
+ }
+ }
+ #else
+diff --git a/kernel/watchdog.c b/kernel/watchdog.c
+index 71109065bd8eb..01bf977090dc2 100644
+--- a/kernel/watchdog.c
++++ b/kernel/watchdog.c
+@@ -172,7 +172,6 @@ static u64 __read_mostly sample_period;
+ static DEFINE_PER_CPU(unsigned long, watchdog_touch_ts);
+ static DEFINE_PER_CPU(struct hrtimer, watchdog_hrtimer);
+ static DEFINE_PER_CPU(bool, softlockup_touch_sync);
+-static DEFINE_PER_CPU(bool, soft_watchdog_warn);
+ static DEFINE_PER_CPU(unsigned long, hrtimer_interrupts);
+ static DEFINE_PER_CPU(unsigned long, hrtimer_interrupts_saved);
+ static unsigned long soft_lockup_nmi_warn;
+@@ -236,7 +235,7 @@ static void set_sample_period(void)
+ }
+
+ /* Commands for resetting the watchdog */
+-static void __touch_watchdog(void)
++static void update_touch_ts(void)
+ {
+ __this_cpu_write(watchdog_touch_ts, get_timestamp());
+ }
+@@ -331,7 +330,7 @@ static DEFINE_PER_CPU(struct cpu_stop_work, softlockup_stop_work);
+ */
+ static int softlockup_fn(void *data)
+ {
+- __touch_watchdog();
++ update_touch_ts();
+ complete(this_cpu_ptr(&softlockup_completion));
+
+ return 0;
+@@ -374,7 +373,7 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
+
+ /* Clear the guest paused flag on watchdog reset */
+ kvm_check_and_clear_guest_paused();
+- __touch_watchdog();
++ update_touch_ts();
+ return HRTIMER_RESTART;
+ }
+
+@@ -394,21 +393,18 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
+ if (kvm_check_and_clear_guest_paused())
+ return HRTIMER_RESTART;
+
+- /* only warn once */
+- if (__this_cpu_read(soft_watchdog_warn) == true)
+- return HRTIMER_RESTART;
+-
++ /*
++ * Prevent multiple soft-lockup reports if one cpu is already
++ * engaged in dumping all cpu back traces.
++ */
+ if (softlockup_all_cpu_backtrace) {
+- /* Prevent multiple soft-lockup reports if one cpu is already
+- * engaged in dumping cpu back traces
+- */
+- if (test_and_set_bit(0, &soft_lockup_nmi_warn)) {
+- /* Someone else will report us. Let's give up */
+- __this_cpu_write(soft_watchdog_warn, true);
++ if (test_and_set_bit_lock(0, &soft_lockup_nmi_warn))
+ return HRTIMER_RESTART;
+- }
+ }
+
++ /* Start period for the next softlockup warning. */
++ update_touch_ts();
++
+ pr_emerg("BUG: soft lockup - CPU#%d stuck for %us! [%s:%d]\n",
+ smp_processor_id(), duration,
+ current->comm, task_pid_nr(current));
+@@ -420,22 +416,14 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
+ dump_stack();
+
+ if (softlockup_all_cpu_backtrace) {
+- /* Avoid generating two back traces for current
+- * given that one is already made above
+- */
+ trigger_allbutself_cpu_backtrace();
+-
+- clear_bit(0, &soft_lockup_nmi_warn);
+- /* Barrier to sync with other cpus */
+- smp_mb__after_atomic();
++ clear_bit_unlock(0, &soft_lockup_nmi_warn);
+ }
+
+ add_taint(TAINT_SOFTLOCKUP, LOCKDEP_STILL_OK);
+ if (softlockup_panic)
+ panic("softlockup: hung tasks");
+- __this_cpu_write(soft_watchdog_warn, true);
+- } else
+- __this_cpu_write(soft_watchdog_warn, false);
++ }
+
+ return HRTIMER_RESTART;
+ }
+@@ -460,7 +448,7 @@ static void watchdog_enable(unsigned int cpu)
+ HRTIMER_MODE_REL_PINNED_HARD);
+
+ /* Initialize timestamp */
+- __touch_watchdog();
++ update_touch_ts();
+ /* Enable the perf event */
+ if (watchdog_enabled & NMI_WATCHDOG_ENABLED)
+ watchdog_nmi_enable(cpu);
+diff --git a/lib/kobject_uevent.c b/lib/kobject_uevent.c
+index 7998affa45d49..c87d5b6a8a55a 100644
+--- a/lib/kobject_uevent.c
++++ b/lib/kobject_uevent.c
+@@ -251,12 +251,13 @@ static int kobj_usermode_filter(struct kobject *kobj)
+
+ static int init_uevent_argv(struct kobj_uevent_env *env, const char *subsystem)
+ {
++ int buffer_size = sizeof(env->buf) - env->buflen;
+ int len;
+
+- len = strlcpy(&env->buf[env->buflen], subsystem,
+- sizeof(env->buf) - env->buflen);
+- if (len >= (sizeof(env->buf) - env->buflen)) {
+- WARN(1, KERN_ERR "init_uevent_argv: buffer size too small\n");
++ len = strlcpy(&env->buf[env->buflen], subsystem, buffer_size);
++ if (len >= buffer_size) {
++ pr_warn("init_uevent_argv: buffer size of %d too small, needed %d\n",
++ buffer_size, len);
+ return -ENOMEM;
+ }
+
+diff --git a/lib/nlattr.c b/lib/nlattr.c
+index 5b6116e81f9f2..1d051ef66afe5 100644
+--- a/lib/nlattr.c
++++ b/lib/nlattr.c
+@@ -828,7 +828,7 @@ int nla_strcmp(const struct nlattr *nla, const char *str)
+ int attrlen = nla_len(nla);
+ int d;
+
+- if (attrlen > 0 && buf[attrlen - 1] == '\0')
++ while (attrlen > 0 && buf[attrlen - 1] == '\0')
+ attrlen--;
+
+ d = attrlen - len;
+diff --git a/lib/test_kasan.c b/lib/test_kasan.c
+index 5a2f104ca13f8..20f65b1b4ce59 100644
+--- a/lib/test_kasan.c
++++ b/lib/test_kasan.c
+@@ -449,8 +449,20 @@ static char global_array[10];
+
+ static void kasan_global_oob(struct kunit *test)
+ {
+- volatile int i = 3;
+- char *p = &global_array[ARRAY_SIZE(global_array) + i];
++ /*
++ * Deliberate out-of-bounds access. To prevent CONFIG_UBSAN_LOCAL_BOUNDS
++ * from failing here and panicing the kernel, access the array via a
++ * volatile pointer, which will prevent the compiler from being able to
++ * determine the array bounds.
++ *
++ * This access uses a volatile pointer to char (char *volatile) rather
++ * than the more conventional pointer to volatile char (volatile char *)
++ * because we want to prevent the compiler from making inferences about
++ * the pointer itself (i.e. its array bounds), not the data that it
++ * refers to.
++ */
++ char *volatile array = global_array;
++ char *p = &array[ARRAY_SIZE(global_array) + 3];
+
+ /* Only generic mode instruments globals. */
+ if (!IS_ENABLED(CONFIG_KASAN_GENERIC)) {
+@@ -479,8 +491,9 @@ static void ksize_unpoisons_memory(struct kunit *test)
+ static void kasan_stack_oob(struct kunit *test)
+ {
+ char stack_array[10];
+- volatile int i = OOB_TAG_OFF;
+- char *p = &stack_array[ARRAY_SIZE(stack_array) + i];
++ /* See comment in kasan_global_oob. */
++ char *volatile array = stack_array;
++ char *p = &array[ARRAY_SIZE(stack_array) + OOB_TAG_OFF];
+
+ if (!IS_ENABLED(CONFIG_KASAN_STACK)) {
+ kunit_info(test, "CONFIG_KASAN_STACK is not enabled");
+@@ -494,7 +507,9 @@ static void kasan_alloca_oob_left(struct kunit *test)
+ {
+ volatile int i = 10;
+ char alloca_array[i];
+- char *p = alloca_array - 1;
++ /* See comment in kasan_global_oob. */
++ char *volatile array = alloca_array;
++ char *p = array - 1;
+
+ /* Only generic mode instruments dynamic allocas. */
+ if (!IS_ENABLED(CONFIG_KASAN_GENERIC)) {
+@@ -514,7 +529,9 @@ static void kasan_alloca_oob_right(struct kunit *test)
+ {
+ volatile int i = 10;
+ char alloca_array[i];
+- char *p = alloca_array + i;
++ /* See comment in kasan_global_oob. */
++ char *volatile array = alloca_array;
++ char *p = array + i;
+
+ /* Only generic mode instruments dynamic allocas. */
+ if (!IS_ENABLED(CONFIG_KASAN_GENERIC)) {
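
The comments these test_kasan.c hunks add hinge on a declaration-reading subtlety: in "volatile char *p" the pointed-to data is volatile, while in "char *volatile p" the pointer object itself is, so the compiler may not infer anything about where it points or what bounds apply. A two-line demonstration (the difference is in what the compiler may assume, not in the output):

#include <stdio.h>

int main(void)
{
	char buf[4] = "abc";
	volatile char *data_is_volatile = buf; /* every *p load must be redone */
	char *volatile ptr_is_volatile = buf;  /* p itself reloaded on each use;
						  its target stays opaque */

	printf("%c %c\n", data_is_volatile[0], ptr_is_volatile[1]); /* a b */
	return 0;
}
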
+diff --git a/mm/gup.c b/mm/gup.c
+index e4c224cd9661f..0cdb93e98d007 100644
+--- a/mm/gup.c
++++ b/mm/gup.c
+@@ -1548,54 +1548,60 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
+ struct vm_area_struct **vmas,
+ unsigned int gup_flags)
+ {
+- unsigned long i;
+- unsigned long step;
+- bool drain_allow = true;
+- bool migrate_allow = true;
++ unsigned long i, isolation_error_count;
++ bool drain_allow;
+ LIST_HEAD(cma_page_list);
+ long ret = nr_pages;
++ struct page *prev_head, *head;
+ struct migration_target_control mtc = {
+ .nid = NUMA_NO_NODE,
+ .gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_NOWARN,
+ };
+
+ check_again:
+- for (i = 0; i < nr_pages;) {
+-
+- struct page *head = compound_head(pages[i]);
+-
+- /*
+- * gup may start from a tail page. Advance step by the left
+- * part.
+- */
+- step = compound_nr(head) - (pages[i] - head);
++ prev_head = NULL;
++ isolation_error_count = 0;
++ drain_allow = true;
++ for (i = 0; i < nr_pages; i++) {
++ head = compound_head(pages[i]);
++ if (head == prev_head)
++ continue;
++ prev_head = head;
+ /*
+ * If we get a page from the CMA zone, since we are going to
+ * be pinning these entries, we might as well move them out
+ * of the CMA zone if possible.
+ */
+ if (is_migrate_cma_page(head)) {
+- if (PageHuge(head))
+- isolate_huge_page(head, &cma_page_list);
+- else {
++ if (PageHuge(head)) {
++ if (!isolate_huge_page(head, &cma_page_list))
++ isolation_error_count++;
++ } else {
+ if (!PageLRU(head) && drain_allow) {
+ lru_add_drain_all();
+ drain_allow = false;
+ }
+
+- if (!isolate_lru_page(head)) {
+- list_add_tail(&head->lru, &cma_page_list);
+- mod_node_page_state(page_pgdat(head),
+- NR_ISOLATED_ANON +
+- page_is_file_lru(head),
+- thp_nr_pages(head));
++ if (isolate_lru_page(head)) {
++ isolation_error_count++;
++ continue;
+ }
++ list_add_tail(&head->lru, &cma_page_list);
++ mod_node_page_state(page_pgdat(head),
++ NR_ISOLATED_ANON +
++ page_is_file_lru(head),
++ thp_nr_pages(head));
+ }
+ }
+-
+- i += step;
+ }
+
++ /*
++ * If list is empty, and no isolation errors, means that all pages are
++ * in the correct zone.
++ */
++ if (list_empty(&cma_page_list) && !isolation_error_count)
++ return ret;
++
+ if (!list_empty(&cma_page_list)) {
+ /*
+ * drop the above get_user_pages reference.
+@@ -1606,34 +1612,28 @@ check_again:
+ for (i = 0; i < nr_pages; i++)
+ put_page(pages[i]);
+
+- if (migrate_pages(&cma_page_list, alloc_migration_target, NULL,
+- (unsigned long)&mtc, MIGRATE_SYNC, MR_CONTIG_RANGE)) {
+- /*
+- * some of the pages failed migration. Do get_user_pages
+- * without migration.
+- */
+- migrate_allow = false;
+-
++ ret = migrate_pages(&cma_page_list, alloc_migration_target,
++ NULL, (unsigned long)&mtc, MIGRATE_SYNC,
++ MR_CONTIG_RANGE);
++ if (ret) {
+ if (!list_empty(&cma_page_list))
+ putback_movable_pages(&cma_page_list);
++ return ret > 0 ? -ENOMEM : ret;
+ }
+- /*
+- * We did migrate all the pages, Try to get the page references
+- * again migrating any new CMA pages which we failed to isolate
+- * earlier.
+- */
+- ret = __get_user_pages_locked(mm, start, nr_pages,
+- pages, vmas, NULL,
+- gup_flags);
+-
+- if ((ret > 0) && migrate_allow) {
+- nr_pages = ret;
+- drain_allow = true;
+- goto check_again;
+- }
++
++ /* We unpinned pages before migration, pin them again */
++ ret = __get_user_pages_locked(mm, start, nr_pages, pages, vmas,
++ NULL, gup_flags);
++ if (ret <= 0)
++ return ret;
++ nr_pages = ret;
+ }
+
+- return ret;
++ /*
++ * check again because pages were unpinned, and we also might have
++ * had isolation errors and need more pages to migrate.
++ */
++ goto check_again;
+ }
+ #else
+ static long check_and_migrate_cma_pages(struct mm_struct *mm,
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 8e89b277ffcc3..19c245b96bd11 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -745,13 +745,20 @@ void hugetlb_fix_reserve_counts(struct inode *inode)
+ {
+ struct hugepage_subpool *spool = subpool_inode(inode);
+ long rsv_adjust;
++ bool reserved = false;
+
+ rsv_adjust = hugepage_subpool_get_pages(spool, 1);
+- if (rsv_adjust) {
++ if (rsv_adjust > 0) {
+ struct hstate *h = hstate_inode(inode);
+
+- hugetlb_acct_memory(h, 1);
++ if (!hugetlb_acct_memory(h, 1))
++ reserved = true;
++ } else if (!rsv_adjust) {
++ reserved = true;
+ }
++
++ if (!reserved)
++ pr_warn("hugetlb: Huge Page Reserved count may go negative.\n");
+ }
+
+ /*
+diff --git a/mm/khugepaged.c b/mm/khugepaged.c
+index 494d3cb0b58a3..897b91c5f1d29 100644
+--- a/mm/khugepaged.c
++++ b/mm/khugepaged.c
+@@ -716,17 +716,17 @@ next:
+ if (pte_write(pteval))
+ writable = true;
+ }
+- if (likely(writable)) {
+- if (likely(referenced)) {
+- result = SCAN_SUCCEED;
+- trace_mm_collapse_huge_page_isolate(page, none_or_zero,
+- referenced, writable, result);
+- return 1;
+- }
+- } else {
++
++ if (unlikely(!writable)) {
+ result = SCAN_PAGE_RO;
++ } else if (unlikely(!referenced)) {
++ result = SCAN_LACK_REFERENCED_PAGE;
++ } else {
++ result = SCAN_SUCCEED;
++ trace_mm_collapse_huge_page_isolate(page, none_or_zero,
++ referenced, writable, result);
++ return 1;
+ }
+-
+ out:
+ release_pte_pages(pte, _pte, compound_pagelist);
+ trace_mm_collapse_huge_page_isolate(page, none_or_zero,
+diff --git a/mm/ksm.c b/mm/ksm.c
+index 9694ee2c71de5..b32391ccf6d57 100644
+--- a/mm/ksm.c
++++ b/mm/ksm.c
+@@ -794,6 +794,7 @@ static void remove_rmap_item_from_tree(struct rmap_item *rmap_item)
+ stable_node->rmap_hlist_len--;
+
+ put_anon_vma(rmap_item->anon_vma);
++ rmap_item->head = NULL;
+ rmap_item->address &= PAGE_MASK;
+
+ } else if (rmap_item->address & UNSTABLE_FLAG) {
+diff --git a/mm/migrate.c b/mm/migrate.c
+index 20ca887ea7694..4754f2489d780 100644
+--- a/mm/migrate.c
++++ b/mm/migrate.c
+@@ -2967,6 +2967,13 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate,
+
+ swp_entry = make_device_private_entry(page, vma->vm_flags & VM_WRITE);
+ entry = swp_entry_to_pte(swp_entry);
++ } else {
++ /*
++ * For now we only support migrating to un-addressable
++ * device memory.
++ */
++ pr_warn_once("Unsupported ZONE_DEVICE page type.\n");
++ goto abort;
+ }
+ } else {
+ entry = mk_pte(page, vma->vm_page_prot);
+diff --git a/mm/shmem.c b/mm/shmem.c
+index 7c6b6d8f6c396..f4d24915d1f9e 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -2256,25 +2256,11 @@ out_nomem:
+ static int shmem_mmap(struct file *file, struct vm_area_struct *vma)
+ {
+ struct shmem_inode_info *info = SHMEM_I(file_inode(file));
++ int ret;
+
+- if (info->seals & F_SEAL_FUTURE_WRITE) {
+- /*
+- * New PROT_WRITE and MAP_SHARED mmaps are not allowed when
+- * "future write" seal active.
+- */
+- if ((vma->vm_flags & VM_SHARED) && (vma->vm_flags & VM_WRITE))
+- return -EPERM;
+-
+- /*
+- * Since an F_SEAL_FUTURE_WRITE sealed memfd can be mapped as
+- * MAP_SHARED and read-only, take care to not allow mprotect to
+- * revert protections on such mappings. Do this only for shared
+- * mappings. For private mappings, don't need to mask
+- * VM_MAYWRITE as we still want them to be COW-writable.
+- */
+- if (vma->vm_flags & VM_SHARED)
+- vma->vm_flags &= ~(VM_MAYWRITE);
+- }
++ ret = seal_check_future_write(info->seals, vma);
++ if (ret)
++ return ret;
+
+ /* arm64 - allow memory tagging on RAM-based files */
+ vma->vm_flags |= VM_MTE_ALLOWED;
+@@ -2373,8 +2359,18 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
+ pgoff_t offset, max_off;
+
+ ret = -ENOMEM;
+- if (!shmem_inode_acct_block(inode, 1))
++ if (!shmem_inode_acct_block(inode, 1)) {
++ /*
++ * We may have got a page, returned -ENOENT triggering a retry,
++ * and now we find ourselves with -ENOMEM. Release the page, to
++ * avoid a BUG_ON in our caller.
++ */
++ if (unlikely(*pagep)) {
++ put_page(*pagep);
++ *pagep = NULL;
++ }
+ goto out;
++ }
+
+ if (!*pagep) {
+ page = shmem_alloc_page(gfp, info, pgoff);
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 7a3e42e752350..82f4973a011d9 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -5912,7 +5912,7 @@ static void hci_le_phy_update_evt(struct hci_dev *hdev, struct sk_buff *skb)
+
+ BT_DBG("%s status 0x%2.2x", hdev->name, ev->status);
+
+- if (!ev->status)
++ if (ev->status)
+ return;
+
+ hci_dev_lock(hdev);
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index 17b87b57a1750..78776d0782c50 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -451,6 +451,8 @@ struct l2cap_chan *l2cap_chan_create(void)
+ if (!chan)
+ return NULL;
+
++ skb_queue_head_init(&chan->tx_q);
++ skb_queue_head_init(&chan->srej_q);
+ mutex_init(&chan->lock);
+
+ /* Set default lock nesting level */
+@@ -516,7 +518,9 @@ void l2cap_chan_set_defaults(struct l2cap_chan *chan)
+ chan->flush_to = L2CAP_DEFAULT_FLUSH_TO;
+ chan->retrans_timeout = L2CAP_DEFAULT_RETRANS_TO;
+ chan->monitor_timeout = L2CAP_DEFAULT_MONITOR_TO;
++
+ chan->conf_state = 0;
++ set_bit(CONF_NOT_COMPLETE, &chan->conf_state);
+
+ set_bit(FLAG_FORCE_ACTIVE, &chan->flags);
+ }
+diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c
+index f1b1edd0b6974..c99d65ef13b1e 100644
+--- a/net/bluetooth/l2cap_sock.c
++++ b/net/bluetooth/l2cap_sock.c
+@@ -179,9 +179,17 @@ static int l2cap_sock_connect(struct socket *sock, struct sockaddr *addr,
+ struct l2cap_chan *chan = l2cap_pi(sk)->chan;
+ struct sockaddr_l2 la;
+ int len, err = 0;
++ bool zapped;
+
+ BT_DBG("sk %p", sk);
+
++ lock_sock(sk);
++ zapped = sock_flag(sk, SOCK_ZAPPED);
++ release_sock(sk);
++
++ if (zapped)
++ return -EINVAL;
++
+ if (!addr || alen < offsetofend(struct sockaddr, sa_family) ||
+ addr->sa_family != AF_BLUETOOTH)
+ return -EINVAL;
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index fa0f7a4a1d2fc..01e143c2bbc04 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -7768,7 +7768,6 @@ static int add_ext_adv_params(struct sock *sk, struct hci_dev *hdev,
+ goto unlock;
+ }
+
+- hdev->cur_adv_instance = cp->instance;
+ /* Submit request for advertising params if ext adv available */
+ if (ext_adv_capable(hdev)) {
+ hci_req_init(&req, hdev);
+diff --git a/net/bridge/br_arp_nd_proxy.c b/net/bridge/br_arp_nd_proxy.c
+index dfec65eca8a6e..3db1def4437b3 100644
+--- a/net/bridge/br_arp_nd_proxy.c
++++ b/net/bridge/br_arp_nd_proxy.c
+@@ -160,7 +160,9 @@ void br_do_proxy_suppress_arp(struct sk_buff *skb, struct net_bridge *br,
+ if (br_opt_get(br, BROPT_NEIGH_SUPPRESS_ENABLED)) {
+ if (p && (p->flags & BR_NEIGH_SUPPRESS))
+ return;
+- if (ipv4_is_zeronet(sip) || sip == tip) {
++ if (parp->ar_op != htons(ARPOP_RREQUEST) &&
++ parp->ar_op != htons(ARPOP_RREPLY) &&
++ (ipv4_is_zeronet(sip) || sip == tip)) {
+ /* prevent flooding to neigh suppress ports */
+ BR_INPUT_SKB_CB(skb)->proxyarp_replied = 1;
+ return;
+diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
+index 180be5102efc5..aa997de1d44c0 100644
+--- a/net/core/flow_dissector.c
++++ b/net/core/flow_dissector.c
+@@ -822,8 +822,10 @@ static void __skb_flow_bpf_to_target(const struct bpf_flow_keys *flow_keys,
+ key_addrs = skb_flow_dissector_target(flow_dissector,
+ FLOW_DISSECTOR_KEY_IPV6_ADDRS,
+ target_container);
+- memcpy(&key_addrs->v6addrs, &flow_keys->ipv6_src,
+- sizeof(key_addrs->v6addrs));
++ memcpy(&key_addrs->v6addrs.src, &flow_keys->ipv6_src,
++ sizeof(key_addrs->v6addrs.src));
++ memcpy(&key_addrs->v6addrs.dst, &flow_keys->ipv6_dst,
++ sizeof(key_addrs->v6addrs.dst));
+ key_control->addr_type = FLOW_DISSECTOR_KEY_IPV6_ADDRS;
+ }
+
+diff --git a/net/core/page_pool.c b/net/core/page_pool.c
+index f3c690b8c8e36..7c3c0774a67c7 100644
+--- a/net/core/page_pool.c
++++ b/net/core/page_pool.c
+@@ -174,8 +174,10 @@ static void page_pool_dma_sync_for_device(struct page_pool *pool,
+ struct page *page,
+ unsigned int dma_sync_size)
+ {
++ dma_addr_t dma_addr = page_pool_get_dma_addr(page);
++
+ dma_sync_size = min(dma_sync_size, pool->p.max_len);
+- dma_sync_single_range_for_device(pool->p.dev, page->dma_addr,
++ dma_sync_single_range_for_device(pool->p.dev, dma_addr,
+ pool->p.offset, dma_sync_size,
+ pool->p.dma_dir);
+ }
+@@ -226,7 +228,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
+ put_page(page);
+ return NULL;
+ }
+- page->dma_addr = dma;
++ page_pool_set_dma_addr(page, dma);
+
+ if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
+ page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
+@@ -294,13 +296,13 @@ void page_pool_release_page(struct page_pool *pool, struct page *page)
+ */
+ goto skip_dma_unmap;
+
+- dma = page->dma_addr;
++ dma = page_pool_get_dma_addr(page);
+
+- /* When page is unmapped, it cannot be returned our pool */
++ /* When page is unmapped, it cannot be returned to our pool */
+ dma_unmap_page_attrs(pool->p.dev, dma,
+ PAGE_SIZE << pool->p.order, pool->p.dma_dir,
+ DMA_ATTR_SKIP_CPU_SYNC);
+- page->dma_addr = 0;
++ page_pool_set_dma_addr(page, 0);
+ skip_dma_unmap:
+ /* This may be the last page returned, releasing the pool, so
+ * it is not safe to reference pool afterwards.
+diff --git a/net/ethtool/ioctl.c b/net/ethtool/ioctl.c
+index 771688e1b0da9..2603966da904d 100644
+--- a/net/ethtool/ioctl.c
++++ b/net/ethtool/ioctl.c
+@@ -489,7 +489,7 @@ store_link_ksettings_for_user(void __user *to,
+ {
+ struct ethtool_link_usettings link_usettings;
+
+- memcpy(&link_usettings.base, &from->base, sizeof(link_usettings));
++ memcpy(&link_usettings, from, sizeof(link_usettings));
+ bitmap_to_arr32(link_usettings.link_modes.supported,
+ from->link_modes.supported,
+ __ETHTOOL_LINK_MODE_MASK_NBITS);
+diff --git a/net/ethtool/netlink.c b/net/ethtool/netlink.c
+index 50d3c8896f917..25a55086d2b66 100644
+--- a/net/ethtool/netlink.c
++++ b/net/ethtool/netlink.c
+@@ -384,7 +384,8 @@ static int ethnl_default_dump_one(struct sk_buff *skb, struct net_device *dev,
+ int ret;
+
+ ehdr = genlmsg_put(skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq,
+- &ethtool_genl_family, 0, ctx->ops->reply_cmd);
++ &ethtool_genl_family, NLM_F_MULTI,
++ ctx->ops->reply_cmd);
+ if (!ehdr)
+ return -EMSGSIZE;
+
+diff --git a/net/ipv6/ip6_vti.c b/net/ipv6/ip6_vti.c
+index f10e7a72ea624..a018afdb3e062 100644
+--- a/net/ipv6/ip6_vti.c
++++ b/net/ipv6/ip6_vti.c
+@@ -193,7 +193,6 @@ static int vti6_tnl_create2(struct net_device *dev)
+
+ strcpy(t->parms.name, dev->name);
+
+- dev_hold(dev);
+ vti6_tnl_link(ip6n, t);
+
+ return 0;
+@@ -932,6 +931,7 @@ static inline int vti6_dev_init_gen(struct net_device *dev)
+ dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats);
+ if (!dev->tstats)
+ return -ENOMEM;
++ dev_hold(dev);
+ return 0;
+ }
+
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index b7155b078b198..fe71c1ca984a6 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -1295,6 +1295,11 @@ static void ieee80211_chswitch_post_beacon(struct ieee80211_sub_if_data *sdata)
+
+ sdata->vif.csa_active = false;
+ ifmgd->csa_waiting_bcn = false;
++ /*
++ * If the CSA IE is still present on the beacon after the switch,
++ * we need to consider it as a new CSA (possibly to self).
++ */
++ ifmgd->beacon_crc_valid = false;
+
+ ret = drv_post_channel_switch(sdata);
+ if (ret) {
+@@ -1400,11 +1405,8 @@ ieee80211_sta_process_chanswitch(struct ieee80211_sub_if_data *sdata,
+ ch_switch.delay = csa_ie.max_switch_time;
+ }
+
+- if (res < 0) {
+- ieee80211_queue_work(&local->hw,
+- &ifmgd->csa_connection_drop_work);
+- return;
+- }
++ if (res < 0)
++ goto lock_and_drop_connection;
+
+ if (beacon && sdata->vif.csa_active && !ifmgd->csa_waiting_bcn) {
+ if (res)
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index 64fae4f645f52..f6bfa0ce262cb 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -2269,17 +2269,6 @@ netdev_tx_t ieee80211_monitor_start_xmit(struct sk_buff *skb,
+ payload[7]);
+ }
+
+- /* Initialize skb->priority for QoS frames. If the DONT_REORDER flag
+- * is set, stick to the default value for skb->priority to assure
+- * frames injected with this flag are not reordered relative to each
+- * other.
+- */
+- if (ieee80211_is_data_qos(hdr->frame_control) &&
+- !(info->control.flags & IEEE80211_TX_CTRL_DONT_REORDER)) {
+- u8 *p = ieee80211_get_qos_ctl(hdr);
+- skb->priority = *p & IEEE80211_QOS_CTL_TAG1D_MASK;
+- }
+-
+ rcu_read_lock();
+
+ /*
+@@ -2343,6 +2332,15 @@ netdev_tx_t ieee80211_monitor_start_xmit(struct sk_buff *skb,
+
+ info->band = chandef->chan->band;
+
++ /* Initialize skb->priority according to frame type and TID class,
++ * with respect to the sub interface that the frame will actually
++ * be transmitted on. If the DONT_REORDER flag is set, the original
++ * skb-priority is preserved to assure frames injected with this
++ * flag are not reordered relative to each other.
++ */
++ ieee80211_select_queue_80211(sdata, skb, hdr);
++ skb_set_queue_mapping(skb, ieee80211_ac_from_tid(skb->priority));
++
+ /* remove the injection radiotap header */
+ skb_pull(skb, len_rthdr);
+
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index f97f29df4505e..371a114f3a5fd 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -479,8 +479,7 @@ static void mptcp_sock_destruct(struct sock *sk)
+ * ESTABLISHED state and will not have the SOCK_DEAD flag.
+ * Both result in warnings from inet_sock_destruct.
+ */
+-
+- if (sk->sk_state == TCP_ESTABLISHED) {
++ if ((1 << sk->sk_state) & (TCPF_ESTABLISHED | TCPF_CLOSE_WAIT)) {
+ sk->sk_state = TCP_CLOSE;
+ WARN_ON_ONCE(sk->sk_socket);
+ sock_orphan(sk);
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index d6ec76a0fe62f..1380369d57871 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -6213,9 +6213,9 @@ err_obj_ht:
+ INIT_LIST_HEAD(&obj->list);
+ return err;
+ err_trans:
+- kfree(obj->key.name);
+-err_userdata:
+ kfree(obj->udata);
++err_userdata:
++ kfree(obj->key.name);
+ err_strdup:
+ if (obj->ops->destroy)
+ obj->ops->destroy(&ctx, obj);
+diff --git a/net/netfilter/nfnetlink_osf.c b/net/netfilter/nfnetlink_osf.c
+index 916a3c7f9eafe..79fbf37291f38 100644
+--- a/net/netfilter/nfnetlink_osf.c
++++ b/net/netfilter/nfnetlink_osf.c
+@@ -186,6 +186,8 @@ static const struct tcphdr *nf_osf_hdr_ctx_init(struct nf_osf_hdr_ctx *ctx,
+
+ ctx->optp = skb_header_pointer(skb, ip_hdrlen(skb) +
+ sizeof(struct tcphdr), ctx->optsize, opts);
++ if (!ctx->optp)
++ return NULL;
+ }
+
+ return tcp;
+diff --git a/net/netfilter/nft_set_hash.c b/net/netfilter/nft_set_hash.c
+index bf618b7ec1aea..560c2cda52ee3 100644
+--- a/net/netfilter/nft_set_hash.c
++++ b/net/netfilter/nft_set_hash.c
+@@ -406,9 +406,17 @@ static void nft_rhash_destroy(const struct nft_set *set)
+ (void *)set);
+ }
+
++/* Number of buckets is stored in u32, so cap our result to 1U<<31 */
++#define NFT_MAX_BUCKETS (1U << 31)
++
+ static u32 nft_hash_buckets(u32 size)
+ {
+- return roundup_pow_of_two(size * 4 / 3);
++ u64 val = div_u64((u64)size * 4, 3);
++
++ if (val >= NFT_MAX_BUCKETS)
++ return NFT_MAX_BUCKETS;
++
++ return roundup_pow_of_two(val);
+ }
+
+ static bool nft_rhash_estimate(const struct nft_set_desc *desc, u32 features,
+diff --git a/net/netfilter/xt_SECMARK.c b/net/netfilter/xt_SECMARK.c
+index 75625d13e976c..498a0bf6f0444 100644
+--- a/net/netfilter/xt_SECMARK.c
++++ b/net/netfilter/xt_SECMARK.c
+@@ -24,10 +24,9 @@ MODULE_ALIAS("ip6t_SECMARK");
+ static u8 mode;
+
+ static unsigned int
+-secmark_tg(struct sk_buff *skb, const struct xt_action_param *par)
++secmark_tg(struct sk_buff *skb, const struct xt_secmark_target_info_v1 *info)
+ {
+ u32 secmark = 0;
+- const struct xt_secmark_target_info *info = par->targinfo;
+
+ switch (mode) {
+ case SECMARK_MODE_SEL:
+@@ -41,7 +40,7 @@ secmark_tg(struct sk_buff *skb, const struct xt_action_param *par)
+ return XT_CONTINUE;
+ }
+
+-static int checkentry_lsm(struct xt_secmark_target_info *info)
++static int checkentry_lsm(struct xt_secmark_target_info_v1 *info)
+ {
+ int err;
+
+@@ -73,15 +72,15 @@ static int checkentry_lsm(struct xt_secmark_target_info *info)
+ return 0;
+ }
+
+-static int secmark_tg_check(const struct xt_tgchk_param *par)
++static int
++secmark_tg_check(const char *table, struct xt_secmark_target_info_v1 *info)
+ {
+- struct xt_secmark_target_info *info = par->targinfo;
+ int err;
+
+- if (strcmp(par->table, "mangle") != 0 &&
+- strcmp(par->table, "security") != 0) {
++ if (strcmp(table, "mangle") != 0 &&
++ strcmp(table, "security") != 0) {
+ pr_info_ratelimited("only valid in \'mangle\' or \'security\' table, not \'%s\'\n",
+- par->table);
++ table);
+ return -EINVAL;
+ }
+
+@@ -116,25 +115,76 @@ static void secmark_tg_destroy(const struct xt_tgdtor_param *par)
+ }
+ }
+
+-static struct xt_target secmark_tg_reg __read_mostly = {
+- .name = "SECMARK",
+- .revision = 0,
+- .family = NFPROTO_UNSPEC,
+- .checkentry = secmark_tg_check,
+- .destroy = secmark_tg_destroy,
+- .target = secmark_tg,
+- .targetsize = sizeof(struct xt_secmark_target_info),
+- .me = THIS_MODULE,
++static int secmark_tg_check_v0(const struct xt_tgchk_param *par)
++{
++ struct xt_secmark_target_info *info = par->targinfo;
++ struct xt_secmark_target_info_v1 newinfo = {
++ .mode = info->mode,
++ };
++ int ret;
++
++ memcpy(newinfo.secctx, info->secctx, SECMARK_SECCTX_MAX);
++
++ ret = secmark_tg_check(par->table, &newinfo);
++ info->secid = newinfo.secid;
++
++ return ret;
++}
++
++static unsigned int
++secmark_tg_v0(struct sk_buff *skb, const struct xt_action_param *par)
++{
++ const struct xt_secmark_target_info *info = par->targinfo;
++ struct xt_secmark_target_info_v1 newinfo = {
++ .secid = info->secid,
++ };
++
++ return secmark_tg(skb, &newinfo);
++}
++
++static int secmark_tg_check_v1(const struct xt_tgchk_param *par)
++{
++ return secmark_tg_check(par->table, par->targinfo);
++}
++
++static unsigned int
++secmark_tg_v1(struct sk_buff *skb, const struct xt_action_param *par)
++{
++ return secmark_tg(skb, par->targinfo);
++}
++
++static struct xt_target secmark_tg_reg[] __read_mostly = {
++ {
++ .name = "SECMARK",
++ .revision = 0,
++ .family = NFPROTO_UNSPEC,
++ .checkentry = secmark_tg_check_v0,
++ .destroy = secmark_tg_destroy,
++ .target = secmark_tg_v0,
++ .targetsize = sizeof(struct xt_secmark_target_info),
++ .me = THIS_MODULE,
++ },
++ {
++ .name = "SECMARK",
++ .revision = 1,
++ .family = NFPROTO_UNSPEC,
++ .checkentry = secmark_tg_check_v1,
++ .destroy = secmark_tg_destroy,
++ .target = secmark_tg_v1,
++ .targetsize = sizeof(struct xt_secmark_target_info_v1),
++ .usersize = offsetof(struct xt_secmark_target_info_v1, secid),
++ .me = THIS_MODULE,
++ },
+ };
+
+ static int __init secmark_tg_init(void)
+ {
+- return xt_register_target(&secmark_tg_reg);
++ return xt_register_targets(secmark_tg_reg, ARRAY_SIZE(secmark_tg_reg));
+ }
+
+ static void __exit secmark_tg_exit(void)
+ {
+- xt_unregister_target(&secmark_tg_reg);
++ xt_unregister_targets(secmark_tg_reg, ARRAY_SIZE(secmark_tg_reg));
+ }
+
+ module_init(secmark_tg_init);
+diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
+index 14316ba9b3b32..a5212a3f86e2f 100644
+--- a/net/sched/cls_flower.c
++++ b/net/sched/cls_flower.c
+@@ -209,16 +209,16 @@ static bool fl_range_port_dst_cmp(struct cls_fl_filter *filter,
+ struct fl_flow_key *key,
+ struct fl_flow_key *mkey)
+ {
+- __be16 min_mask, max_mask, min_val, max_val;
++ u16 min_mask, max_mask, min_val, max_val;
+
+- min_mask = htons(filter->mask->key.tp_range.tp_min.dst);
+- max_mask = htons(filter->mask->key.tp_range.tp_max.dst);
+- min_val = htons(filter->key.tp_range.tp_min.dst);
+- max_val = htons(filter->key.tp_range.tp_max.dst);
++ min_mask = ntohs(filter->mask->key.tp_range.tp_min.dst);
++ max_mask = ntohs(filter->mask->key.tp_range.tp_max.dst);
++ min_val = ntohs(filter->key.tp_range.tp_min.dst);
++ max_val = ntohs(filter->key.tp_range.tp_max.dst);
+
+ if (min_mask && max_mask) {
+- if (htons(key->tp_range.tp.dst) < min_val ||
+- htons(key->tp_range.tp.dst) > max_val)
++ if (ntohs(key->tp_range.tp.dst) < min_val ||
++ ntohs(key->tp_range.tp.dst) > max_val)
+ return false;
+
+ /* skb does not have min and max values */
+@@ -232,16 +232,16 @@ static bool fl_range_port_src_cmp(struct cls_fl_filter *filter,
+ struct fl_flow_key *key,
+ struct fl_flow_key *mkey)
+ {
+- __be16 min_mask, max_mask, min_val, max_val;
++ u16 min_mask, max_mask, min_val, max_val;
+
+- min_mask = htons(filter->mask->key.tp_range.tp_min.src);
+- max_mask = htons(filter->mask->key.tp_range.tp_max.src);
+- min_val = htons(filter->key.tp_range.tp_min.src);
+- max_val = htons(filter->key.tp_range.tp_max.src);
++ min_mask = ntohs(filter->mask->key.tp_range.tp_min.src);
++ max_mask = ntohs(filter->mask->key.tp_range.tp_max.src);
++ min_val = ntohs(filter->key.tp_range.tp_min.src);
++ max_val = ntohs(filter->key.tp_range.tp_max.src);
+
+ if (min_mask && max_mask) {
+- if (htons(key->tp_range.tp.src) < min_val ||
+- htons(key->tp_range.tp.src) > max_val)
++ if (ntohs(key->tp_range.tp.src) < min_val ||
++ ntohs(key->tp_range.tp.src) > max_val)
+ return false;
+
+ /* skb does not have min and max values */
+@@ -779,16 +779,16 @@ static int fl_set_key_port_range(struct nlattr **tb, struct fl_flow_key *key,
+ TCA_FLOWER_UNSPEC, sizeof(key->tp_range.tp_max.src));
+
+ if (mask->tp_range.tp_min.dst && mask->tp_range.tp_max.dst &&
+- htons(key->tp_range.tp_max.dst) <=
+- htons(key->tp_range.tp_min.dst)) {
++ ntohs(key->tp_range.tp_max.dst) <=
++ ntohs(key->tp_range.tp_min.dst)) {
+ NL_SET_ERR_MSG_ATTR(extack,
+ tb[TCA_FLOWER_KEY_PORT_DST_MIN],
+ "Invalid destination port range (min must be strictly smaller than max)");
+ return -EINVAL;
+ }
+ if (mask->tp_range.tp_min.src && mask->tp_range.tp_max.src &&
+- htons(key->tp_range.tp_max.src) <=
+- htons(key->tp_range.tp_min.src)) {
++ ntohs(key->tp_range.tp_max.src) <=
++ ntohs(key->tp_range.tp_min.src)) {
+ NL_SET_ERR_MSG_ATTR(extack,
+ tb[TCA_FLOWER_KEY_PORT_SRC_MIN],
+ "Invalid source port range (min must be strictly smaller than max)");
+diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
+index 6f775275826a4..c70f93d64483b 100644
+--- a/net/sched/sch_taprio.c
++++ b/net/sched/sch_taprio.c
+@@ -901,6 +901,12 @@ static int parse_taprio_schedule(struct taprio_sched *q, struct nlattr **tb,
+
+ list_for_each_entry(entry, &new->entries, list)
+ cycle = ktime_add_ns(cycle, entry->interval);
++
++ if (!cycle) {
++ NL_SET_ERR_MSG(extack, "'cycle_time' can never be 0");
++ return -EINVAL;
++ }
++
+ new->cycle_time = cycle;
+ }
+
+diff --git a/net/sctp/sm_make_chunk.c b/net/sctp/sm_make_chunk.c
+index f77484df097b7..da4ce0947c3aa 100644
+--- a/net/sctp/sm_make_chunk.c
++++ b/net/sctp/sm_make_chunk.c
+@@ -3147,7 +3147,7 @@ static __be16 sctp_process_asconf_param(struct sctp_association *asoc,
+ * primary.
+ */
+ if (af->is_any(&addr))
+- memcpy(&addr.v4, sctp_source(asconf), sizeof(addr));
++ memcpy(&addr, sctp_source(asconf), sizeof(addr));
+
+ if (security_sctp_bind_connect(asoc->ep->base.sk,
+ SCTP_PARAM_SET_PRIMARY,
+diff --git a/net/sctp/sm_statefuns.c b/net/sctp/sm_statefuns.c
+index af2b7041fa4eb..73bb4c6e9201a 100644
+--- a/net/sctp/sm_statefuns.c
++++ b/net/sctp/sm_statefuns.c
+@@ -1852,20 +1852,35 @@ static enum sctp_disposition sctp_sf_do_dupcook_a(
+ SCTP_TO(SCTP_EVENT_TIMEOUT_T4_RTO));
+ sctp_add_cmd_sf(commands, SCTP_CMD_PURGE_ASCONF_QUEUE, SCTP_NULL());
+
+- repl = sctp_make_cookie_ack(new_asoc, chunk);
++ /* Update the content of current association. */
++ if (sctp_assoc_update((struct sctp_association *)asoc, new_asoc)) {
++ struct sctp_chunk *abort;
++
++ abort = sctp_make_abort(asoc, NULL, sizeof(struct sctp_errhdr));
++ if (abort) {
++ sctp_init_cause(abort, SCTP_ERROR_RSRC_LOW, 0);
++ sctp_add_cmd_sf(commands, SCTP_CMD_REPLY, SCTP_CHUNK(abort));
++ }
++ sctp_add_cmd_sf(commands, SCTP_CMD_SET_SK_ERR, SCTP_ERROR(ECONNABORTED));
++ sctp_add_cmd_sf(commands, SCTP_CMD_ASSOC_FAILED,
++ SCTP_PERR(SCTP_ERROR_RSRC_LOW));
++ SCTP_INC_STATS(net, SCTP_MIB_ABORTEDS);
++ SCTP_DEC_STATS(net, SCTP_MIB_CURRESTAB);
++ goto nomem;
++ }
++
++ repl = sctp_make_cookie_ack(asoc, chunk);
+ if (!repl)
+ goto nomem;
+
+ /* Report association restart to upper layer. */
+ ev = sctp_ulpevent_make_assoc_change(asoc, 0, SCTP_RESTART, 0,
+- new_asoc->c.sinit_num_ostreams,
+- new_asoc->c.sinit_max_instreams,
++ asoc->c.sinit_num_ostreams,
++ asoc->c.sinit_max_instreams,
+ NULL, GFP_ATOMIC);
+ if (!ev)
+ goto nomem_ev;
+
+- /* Update the content of current association. */
+- sctp_add_cmd_sf(commands, SCTP_CMD_UPDATE_ASSOC, SCTP_ASOC(new_asoc));
+ sctp_add_cmd_sf(commands, SCTP_CMD_EVENT_ULP, SCTP_ULPEVENT(ev));
+ if ((sctp_state(asoc, SHUTDOWN_PENDING) ||
+ sctp_state(asoc, SHUTDOWN_SENT)) &&
+@@ -1929,7 +1944,8 @@ static enum sctp_disposition sctp_sf_do_dupcook_b(
+ sctp_add_cmd_sf(commands, SCTP_CMD_UPDATE_ASSOC, SCTP_ASOC(new_asoc));
+ sctp_add_cmd_sf(commands, SCTP_CMD_NEW_STATE,
+ SCTP_STATE(SCTP_STATE_ESTABLISHED));
+- SCTP_INC_STATS(net, SCTP_MIB_CURRESTAB);
++ if (asoc->state < SCTP_STATE_ESTABLISHED)
++ SCTP_INC_STATS(net, SCTP_MIB_CURRESTAB);
+ sctp_add_cmd_sf(commands, SCTP_CMD_HB_TIMERS_START, SCTP_NULL());
+
+ repl = sctp_make_cookie_ack(new_asoc, chunk);
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index 47340b3b514f3..cb23cca72c24c 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -2162,6 +2162,9 @@ static int smc_setsockopt(struct socket *sock, int level, int optname,
+ struct smc_sock *smc;
+ int val, rc;
+
++ if (level == SOL_TCP && optname == TCP_ULP)
++ return -EOPNOTSUPP;
++
+ smc = smc_sk(sk);
+
+ /* generic setsockopts reaching us here always apply to the
+@@ -2186,7 +2189,6 @@ static int smc_setsockopt(struct socket *sock, int level, int optname,
+ if (rc || smc->use_fallback)
+ goto out;
+ switch (optname) {
+- case TCP_ULP:
+ case TCP_FASTOPEN:
+ case TCP_FASTOPEN_CONNECT:
+ case TCP_FASTOPEN_KEY:
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index 612f0a641f4cf..f555d335e910d 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -1799,7 +1799,6 @@ call_allocate(struct rpc_task *task)
+
+ status = xprt->ops->buf_alloc(task);
+ trace_rpc_buf_alloc(task, status);
+- xprt_inject_disconnect(xprt);
+ if (status == 0)
+ return;
+ if (status != -ENOMEM) {
+@@ -2457,12 +2456,6 @@ call_decode(struct rpc_task *task)
+ task->tk_flags &= ~RPC_CALL_MAJORSEEN;
+ }
+
+- /*
+- * Ensure that we see all writes made by xprt_complete_rqst()
+- * before it changed req->rq_reply_bytes_recvd.
+- */
+- smp_rmb();
+-
+ /*
+ * Did we ever call xprt_complete_rqst()? If not, we should assume
+ * the message is incomplete.
+@@ -2471,6 +2464,11 @@ call_decode(struct rpc_task *task)
+ if (!req->rq_reply_bytes_recvd)
+ goto out;
+
++ /* Ensure that we see all writes made by xprt_complete_rqst()
++ * before it changed req->rq_reply_bytes_recvd.
++ */
++ smp_rmb();
++
+ req->rq_rcv_buf.len = req->rq_private_buf.len;
+ trace_rpc_xdr_recvfrom(task, &req->rq_rcv_buf);
+
+diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
+index 7034b4755fa18..16b6681a97ab1 100644
+--- a/net/sunrpc/svc.c
++++ b/net/sunrpc/svc.c
+@@ -846,7 +846,8 @@ void
+ svc_rqst_free(struct svc_rqst *rqstp)
+ {
+ svc_release_buffer(rqstp);
+- put_page(rqstp->rq_scratch_page);
++ if (rqstp->rq_scratch_page)
++ put_page(rqstp->rq_scratch_page);
+ kfree(rqstp->rq_resp);
+ kfree(rqstp->rq_argp);
+ kfree(rqstp->rq_auth_data);
+diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
+index 5a809c64dc7b9..42a400135d412 100644
+--- a/net/sunrpc/svcsock.c
++++ b/net/sunrpc/svcsock.c
+@@ -1176,7 +1176,7 @@ static int svc_tcp_sendto(struct svc_rqst *rqstp)
+ goto out_notconn;
+ err = svc_tcp_sendmsg(svsk->sk_sock, &msg, xdr, marker, &sent);
+ xdr_free_bvec(xdr);
+- trace_svcsock_tcp_send(xprt, err < 0 ? err : sent);
++ trace_svcsock_tcp_send(xprt, err < 0 ? (long)err : sent);
+ if (err < 0 || sent != (xdr->len + sizeof(marker)))
+ goto out_close;
+ mutex_unlock(&xprt->xpt_mutex);
+diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
+index 691ccf8049a48..20fe31b1b776f 100644
+--- a/net/sunrpc/xprt.c
++++ b/net/sunrpc/xprt.c
+@@ -698,9 +698,9 @@ int xprt_adjust_timeout(struct rpc_rqst *req)
+ const struct rpc_timeout *to = req->rq_task->tk_client->cl_timeout;
+ int status = 0;
+
+- if (time_before(jiffies, req->rq_minortimeo))
+- return status;
+ if (time_before(jiffies, req->rq_majortimeo)) {
++ if (time_before(jiffies, req->rq_minortimeo))
++ return status;
+ if (to->to_exponential)
+ req->rq_timeout <<= 1;
+ else
+@@ -1469,8 +1469,6 @@ bool xprt_prepare_transmit(struct rpc_task *task)
+ struct rpc_xprt *xprt = req->rq_xprt;
+
+ if (!xprt_lock_write(xprt, task)) {
+- trace_xprt_transmit_queued(xprt, task);
+-
+ /* Race breaker: someone may have transmitted us */
+ if (!test_bit(RPC_TASK_NEED_XMIT, &task->tk_runstate))
+ rpc_wake_up_queued_task_set_status(&xprt->sending,
+@@ -1483,7 +1481,10 @@ bool xprt_prepare_transmit(struct rpc_task *task)
+
+ void xprt_end_transmit(struct rpc_task *task)
+ {
+- xprt_release_write(task->tk_rqstp->rq_xprt, task);
++ struct rpc_xprt *xprt = task->tk_rqstp->rq_xprt;
++
++ xprt_inject_disconnect(xprt);
++ xprt_release_write(xprt, task);
+ }
+
+ /**
+@@ -1885,7 +1886,6 @@ void xprt_release(struct rpc_task *task)
+ spin_unlock(&xprt->transport_lock);
+ if (req->rq_buffer)
+ xprt->ops->buf_free(task);
+- xprt_inject_disconnect(xprt);
+ xdr_free_bvec(&req->rq_rcv_buf);
+ xdr_free_bvec(&req->rq_snd_buf);
+ if (req->rq_cred != NULL)
+diff --git a/net/sunrpc/xprtrdma/frwr_ops.c b/net/sunrpc/xprtrdma/frwr_ops.c
+index baca49fe83af2..0104430e4c8e1 100644
+--- a/net/sunrpc/xprtrdma/frwr_ops.c
++++ b/net/sunrpc/xprtrdma/frwr_ops.c
+@@ -257,6 +257,7 @@ int frwr_query_device(struct rpcrdma_ep *ep, const struct ib_device *device)
+ ep->re_attr.cap.max_send_wr += 1; /* for ib_drain_sq */
+ ep->re_attr.cap.max_recv_wr = ep->re_max_requests;
+ ep->re_attr.cap.max_recv_wr += RPCRDMA_BACKWARD_WRS;
++ ep->re_attr.cap.max_recv_wr += RPCRDMA_MAX_RECV_BATCH;
+ ep->re_attr.cap.max_recv_wr += 1; /* for ib_drain_rq */
+
+ ep->re_max_rdma_segs =
+@@ -581,7 +582,6 @@ void frwr_unmap_sync(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req)
+ mr = container_of(frwr, struct rpcrdma_mr, frwr);
+ bad_wr = bad_wr->next;
+
+- list_del_init(&mr->mr_list);
+ frwr_mr_recycle(mr);
+ }
+ }
+diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
+index 8f5d0cb683609..d40ace8a973d9 100644
+--- a/net/sunrpc/xprtrdma/rpc_rdma.c
++++ b/net/sunrpc/xprtrdma/rpc_rdma.c
+@@ -1459,9 +1459,10 @@ void rpcrdma_reply_handler(struct rpcrdma_rep *rep)
+ credits = 1; /* don't deadlock */
+ else if (credits > r_xprt->rx_ep->re_max_requests)
+ credits = r_xprt->rx_ep->re_max_requests;
++ rpcrdma_post_recvs(r_xprt, credits + (buf->rb_bc_srv_max_requests << 1),
++ false);
+ if (buf->rb_credits != credits)
+ rpcrdma_update_cwnd(r_xprt, credits);
+- rpcrdma_post_recvs(r_xprt, false);
+
+ req = rpcr_to_rdmar(rqst);
+ if (unlikely(req->rl_reply))
+diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
+index 78d29d1bcc203..09953597d055a 100644
+--- a/net/sunrpc/xprtrdma/transport.c
++++ b/net/sunrpc/xprtrdma/transport.c
+@@ -262,8 +262,10 @@ xprt_rdma_connect_worker(struct work_struct *work)
+ * xprt_rdma_inject_disconnect - inject a connection fault
+ * @xprt: transport context
+ *
+- * If @xprt is connected, disconnect it to simulate spurious connection
+- * loss.
++ * If @xprt is connected, disconnect it to simulate spurious
++ * connection loss. Caller must hold @xprt's send lock to
++ * ensure that data structures and hardware resources are
++ * stable during the rdma_disconnect() call.
+ */
+ static void
+ xprt_rdma_inject_disconnect(struct rpc_xprt *xprt)
+diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
+index ec912cf9c618c..f3fffc74ab0fa 100644
+--- a/net/sunrpc/xprtrdma/verbs.c
++++ b/net/sunrpc/xprtrdma/verbs.c
+@@ -535,7 +535,7 @@ int rpcrdma_xprt_connect(struct rpcrdma_xprt *r_xprt)
+ * outstanding Receives.
+ */
+ rpcrdma_ep_get(ep);
+- rpcrdma_post_recvs(r_xprt, true);
++ rpcrdma_post_recvs(r_xprt, 1, true);
+
+ rc = rdma_connect(ep->re_id, &ep->re_remote_cma);
+ if (rc)
+@@ -1364,21 +1364,21 @@ int rpcrdma_post_sends(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req)
+ /**
+ * rpcrdma_post_recvs - Refill the Receive Queue
+ * @r_xprt: controlling transport instance
+- * @temp: mark Receive buffers to be deleted after use
++ * @needed: current credit grant
++ * @temp: mark Receive buffers to be deleted after one use
+ *
+ */
+-void rpcrdma_post_recvs(struct rpcrdma_xprt *r_xprt, bool temp)
++void rpcrdma_post_recvs(struct rpcrdma_xprt *r_xprt, int needed, bool temp)
+ {
+ struct rpcrdma_buffer *buf = &r_xprt->rx_buf;
+ struct rpcrdma_ep *ep = r_xprt->rx_ep;
+ struct ib_recv_wr *wr, *bad_wr;
+ struct rpcrdma_rep *rep;
+- int needed, count, rc;
++ int count, rc;
+
+ rc = 0;
+ count = 0;
+
+- needed = buf->rb_credits + (buf->rb_bc_srv_max_requests << 1);
+ if (likely(ep->re_receive_count > needed))
+ goto out;
+ needed -= ep->re_receive_count;
+diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
+index 94b28657aeeb8..c3bcc84c16c4c 100644
+--- a/net/sunrpc/xprtrdma/xprt_rdma.h
++++ b/net/sunrpc/xprtrdma/xprt_rdma.h
+@@ -460,7 +460,7 @@ int rpcrdma_xprt_connect(struct rpcrdma_xprt *r_xprt);
+ void rpcrdma_xprt_disconnect(struct rpcrdma_xprt *r_xprt);
+
+ int rpcrdma_post_sends(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req);
+-void rpcrdma_post_recvs(struct rpcrdma_xprt *r_xprt, bool temp);
++void rpcrdma_post_recvs(struct rpcrdma_xprt *r_xprt, int needed, bool temp);
+
+ /*
+ * Buffer calls - xprtrdma/verbs.c
+diff --git a/net/tipc/netlink_compat.c b/net/tipc/netlink_compat.c
+index 5a1ce64039f72..0749df80454d4 100644
+--- a/net/tipc/netlink_compat.c
++++ b/net/tipc/netlink_compat.c
+@@ -696,7 +696,7 @@ static int tipc_nl_compat_link_dump(struct tipc_nl_compat_msg *msg,
+ if (err)
+ return err;
+
+- link_info.dest = nla_get_flag(link[TIPC_NLA_LINK_DEST]);
++ link_info.dest = htonl(nla_get_flag(link[TIPC_NLA_LINK_DEST]));
+ link_info.up = htonl(nla_get_flag(link[TIPC_NLA_LINK_UP]));
+ nla_strscpy(link_info.str, link[TIPC_NLA_LINK_NAME],
+ TIPC_MAX_LINK_NAME);
+diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
+index 2823b7c3302d0..40f359bf20440 100644
+--- a/net/xdp/xsk_queue.h
++++ b/net/xdp/xsk_queue.h
+@@ -128,13 +128,12 @@ static inline bool xskq_cons_read_addr_unchecked(struct xsk_queue *q, u64 *addr)
+ static inline bool xp_aligned_validate_desc(struct xsk_buff_pool *pool,
+ struct xdp_desc *desc)
+ {
+- u64 chunk, chunk_end;
++ u64 chunk;
+
+- chunk = xp_aligned_extract_addr(pool, desc->addr);
+- chunk_end = xp_aligned_extract_addr(pool, desc->addr + desc->len);
+- if (chunk != chunk_end)
++ if (desc->len > pool->chunk_size)
+ return false;
+
++ chunk = xp_aligned_extract_addr(pool, desc->addr);
+ if (chunk >= pool->addrs_cnt)
+ return false;
+
+diff --git a/samples/bpf/tracex1_kern.c b/samples/bpf/tracex1_kern.c
+index 3f4599c9a2022..ef30d2b353b0f 100644
+--- a/samples/bpf/tracex1_kern.c
++++ b/samples/bpf/tracex1_kern.c
+@@ -26,7 +26,7 @@
+ SEC("kprobe/__netif_receive_skb_core")
+ int bpf_prog1(struct pt_regs *ctx)
+ {
+- /* attaches to kprobe netif_receive_skb,
++ /* attaches to kprobe __netif_receive_skb_core,
+ * looks for packets on loobpack device and prints them
+ */
+ char devname[IFNAMSIZ];
+@@ -35,7 +35,7 @@ int bpf_prog1(struct pt_regs *ctx)
+ int len;
+
+ /* non-portable! works for the given kernel only */
+- skb = (struct sk_buff *) PT_REGS_PARM1(ctx);
++ bpf_probe_read_kernel(&skb, sizeof(skb), (void *)PT_REGS_PARM1(ctx));
+ dev = _(skb->dev);
+ len = _(skb->len);
+
+diff --git a/scripts/Makefile.modpost b/scripts/Makefile.modpost
+index f54b6ac37ac2e..12a87be0fb446 100644
+--- a/scripts/Makefile.modpost
++++ b/scripts/Makefile.modpost
+@@ -65,7 +65,20 @@ else
+ ifeq ($(KBUILD_EXTMOD),)
+
+ input-symdump := vmlinux.symvers
+-output-symdump := Module.symvers
++output-symdump := modules-only.symvers
++
++quiet_cmd_cat = GEN $@
++ cmd_cat = cat $(real-prereqs) > $@
++
++ifneq ($(wildcard vmlinux.symvers),)
++
++__modpost: Module.symvers
++Module.symvers: vmlinux.symvers modules-only.symvers FORCE
++ $(call if_changed,cat)
++
++targets += Module.symvers
++
++endif
+
+ else
+
+diff --git a/scripts/kconfig/nconf.c b/scripts/kconfig/nconf.c
+index e0f9655291665..af814b39b8765 100644
+--- a/scripts/kconfig/nconf.c
++++ b/scripts/kconfig/nconf.c
+@@ -504,8 +504,8 @@ static int get_mext_match(const char *match_str, match_f flag)
+ else if (flag == FIND_NEXT_MATCH_UP)
+ --match_start;
+
++ match_start = (match_start + items_num) % items_num;
+ index = match_start;
+- index = (index + items_num) % items_num;
+ while (true) {
+ char *str = k_menu_items[index].str;
+ if (strcasestr(str, match_str) != NULL)
+diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
+index d6c81657d6955..5f9d8d9147d0e 100644
+--- a/scripts/mod/modpost.c
++++ b/scripts/mod/modpost.c
+@@ -2469,19 +2469,6 @@ fail:
+ fatal("parse error in symbol dump file\n");
+ }
+
+-/* For normal builds always dump all symbols.
+- * For external modules only dump symbols
+- * that are not read from kernel Module.symvers.
+- **/
+-static int dump_sym(struct symbol *sym)
+-{
+- if (!external_module)
+- return 1;
+- if (sym->module->from_dump)
+- return 0;
+- return 1;
+-}
+-
+ static void write_dump(const char *fname)
+ {
+ struct buffer buf = { };
+@@ -2492,7 +2479,7 @@ static void write_dump(const char *fname)
+ for (n = 0; n < SYMBOL_HASH_SIZE ; n++) {
+ symbol = symbolhash[n];
+ while (symbol) {
+- if (dump_sym(symbol)) {
++ if (!symbol->module->from_dump) {
+ namespace = symbol->namespace;
+ buf_printf(&buf, "0x%08x\t%s\t%s\t%s\t%s\n",
+ symbol->crc, symbol->name,
+diff --git a/security/keys/trusted-keys/trusted_tpm1.c b/security/keys/trusted-keys/trusted_tpm1.c
+index 1e13c9f7ea8c1..56c9b48460d9e 100644
+--- a/security/keys/trusted-keys/trusted_tpm1.c
++++ b/security/keys/trusted-keys/trusted_tpm1.c
+@@ -500,10 +500,12 @@ static int tpm_seal(struct tpm_buf *tb, uint16_t keytype,
+
+ ret = tpm_get_random(chip, td->nonceodd, TPM_NONCE_SIZE);
+ if (ret < 0)
+- return ret;
++ goto out;
+
+- if (ret != TPM_NONCE_SIZE)
+- return -EIO;
++ if (ret != TPM_NONCE_SIZE) {
++ ret = -EIO;
++ goto out;
++ }
+
+ ordinal = htonl(TPM_ORD_SEAL);
+ datsize = htonl(datalen);
+diff --git a/sound/firewire/bebob/bebob_stream.c b/sound/firewire/bebob/bebob_stream.c
+index bbae04793c50e..c18017e0a3d95 100644
+--- a/sound/firewire/bebob/bebob_stream.c
++++ b/sound/firewire/bebob/bebob_stream.c
+@@ -517,20 +517,22 @@ int snd_bebob_stream_init_duplex(struct snd_bebob *bebob)
+ static int keep_resources(struct snd_bebob *bebob, struct amdtp_stream *stream,
+ unsigned int rate, unsigned int index)
+ {
+- struct snd_bebob_stream_formation *formation;
++ unsigned int pcm_channels;
++ unsigned int midi_ports;
+ struct cmp_connection *conn;
+ int err;
+
+ if (stream == &bebob->tx_stream) {
+- formation = bebob->tx_stream_formations + index;
++ pcm_channels = bebob->tx_stream_formations[index].pcm;
++ midi_ports = bebob->midi_input_ports;
+ conn = &bebob->out_conn;
+ } else {
+- formation = bebob->rx_stream_formations + index;
++ pcm_channels = bebob->rx_stream_formations[index].pcm;
++ midi_ports = bebob->midi_output_ports;
+ conn = &bebob->in_conn;
+ }
+
+- err = amdtp_am824_set_parameters(stream, rate, formation->pcm,
+- formation->midi, false);
++ err = amdtp_am824_set_parameters(stream, rate, pcm_channels, midi_ports, false);
+ if (err < 0)
+ return err;
+
+diff --git a/sound/pci/hda/ideapad_s740_helper.c b/sound/pci/hda/ideapad_s740_helper.c
+new file mode 100644
+index 0000000000000..564b9086e52db
+--- /dev/null
++++ b/sound/pci/hda/ideapad_s740_helper.c
+@@ -0,0 +1,492 @@
++// SPDX-License-Identifier: GPL-2.0
++/* Fixes for Lenovo Ideapad S740, to be included from codec driver */
++
++static const struct hda_verb alc285_ideapad_s740_coefs[] = {
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x10 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0320 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x24 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0041 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x24 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0041 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x007f },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x007f },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x003c },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0011 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x003c },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0011 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x000c },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001a },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x000c },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001a },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x000f },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0042 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x000f },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0042 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0010 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0040 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0010 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0040 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0003 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0009 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0003 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0009 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001c },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x004c },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001c },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x004c },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001d },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x004e },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001d },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x004e },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001b },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001b },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0019 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0025 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0019 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0025 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0018 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0037 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0018 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0037 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001a },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0040 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001a },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0040 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0016 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0076 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0016 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0076 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0017 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0010 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0017 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0010 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0015 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0015 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0015 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0015 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0007 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0086 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0007 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0086 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0002 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0002 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0002 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0002 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x24 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0042 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x24 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0042 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x007f },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x007f },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x003c },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0011 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x003c },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0011 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x000c },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x002a },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x000c },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x002a },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x000f },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0046 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x000f },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0046 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0010 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0044 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0010 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0044 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0003 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0009 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0003 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0009 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001c },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x004c },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001c },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x004c },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001b },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001b },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0019 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0025 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0019 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0025 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0018 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0037 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0018 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0037 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001a },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0040 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x001a },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0040 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0016 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0076 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0016 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0076 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0017 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0010 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0017 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0010 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0015 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0015 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0015 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0015 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0007 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0086 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0007 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0086 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0002 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0002 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0001 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x29 },
++{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0002 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0x0000 },
++{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++{}
++};
++
++static void alc285_fixup_ideapad_s740_coef(struct hda_codec *codec,
++ const struct hda_fixup *fix,
++ int action)
++{
++ switch (action) {
++ case HDA_FIXUP_ACT_PRE_PROBE:
++ snd_hda_add_verbs(codec, alc285_ideapad_s740_coefs);
++ break;
++ }
++}
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index d6387106619ff..7b0d9d7a1c383 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -2650,7 +2650,7 @@ static void generic_acomp_pin_eld_notify(void *audio_ptr, int port, int dev_id)
+ /* skip notification during system suspend (but not in runtime PM);
+ * the state will be updated at resume
+ */
+- if (snd_power_get_state(codec->card) != SNDRV_CTL_POWER_D0)
++ if (codec->core.dev.power.power_state.event == PM_EVENT_SUSPEND)
+ return;
+ /* ditto during suspend/resume process itself */
+ if (snd_hdac_is_in_pm(&codec->core))
+@@ -2836,7 +2836,7 @@ static void intel_pin_eld_notify(void *audio_ptr, int port, int pipe)
+ /* skip notification during system suspend (but not in runtime PM);
+ * the state will be updated at resume
+ */
+- if (snd_power_get_state(codec->card) != SNDRV_CTL_POWER_D0)
++ if (codec->core.dev.power.power_state.event == PM_EVENT_SUSPEND)
+ return;
+ /* ditto during suspend/resume process itself */
+ if (snd_hdac_is_in_pm(&codec->core))
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 8ec57bd351dfe..1fe70f2fe4fe8 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6282,6 +6282,9 @@ static void alc_fixup_thinkpad_acpi(struct hda_codec *codec,
+ /* for alc295_fixup_hp_top_speakers */
+ #include "hp_x360_helper.c"
+
++/* for alc285_fixup_ideapad_s740_coef() */
++#include "ideapad_s740_helper.c"
++
+ enum {
+ ALC269_FIXUP_GPIO2,
+ ALC269_FIXUP_SONY_VAIO,
+@@ -6481,6 +6484,7 @@ enum {
+ ALC282_FIXUP_ACER_DISABLE_LINEOUT,
+ ALC255_FIXUP_ACER_LIMIT_INT_MIC_BOOST,
+ ALC256_FIXUP_ACER_HEADSET_MIC,
++ ALC285_FIXUP_IDEAPAD_S740_COEF,
+ };
+
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -7973,6 +7977,12 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC
+ },
++ [ALC285_FIXUP_IDEAPAD_S740_COEF] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = alc285_fixup_ideapad_s740_coef,
++ .chained = true,
++ .chain_id = ALC269_FIXUP_THINKPAD_ACPI,
++ },
+ };
+
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -8320,6 +8330,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x17aa, 0x3176, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
+ SND_PCI_QUIRK(0x17aa, 0x3178, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
+ SND_PCI_QUIRK(0x17aa, 0x3818, "Lenovo C940", ALC298_FIXUP_LENOVO_SPK_VOLUME),
++ SND_PCI_QUIRK(0x17aa, 0x3827, "Ideapad S740", ALC285_FIXUP_IDEAPAD_S740_COEF),
+ SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
+ SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC),
+ SND_PCI_QUIRK(0x17aa, 0x3978, "Lenovo B50-70", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
+diff --git a/sound/pci/rme9652/hdsp.c b/sound/pci/rme9652/hdsp.c
+index cea53a878c360..4aee30db034dd 100644
+--- a/sound/pci/rme9652/hdsp.c
++++ b/sound/pci/rme9652/hdsp.c
+@@ -5321,7 +5321,8 @@ static int snd_hdsp_free(struct hdsp *hdsp)
+ if (hdsp->port)
+ pci_release_regions(hdsp->pci);
+
+- pci_disable_device(hdsp->pci);
++ if (pci_is_enabled(hdsp->pci))
++ pci_disable_device(hdsp->pci);
+ return 0;
+ }
+
+diff --git a/sound/pci/rme9652/hdspm.c b/sound/pci/rme9652/hdspm.c
+index 04e878a0f773b..49fee31ad9057 100644
+--- a/sound/pci/rme9652/hdspm.c
++++ b/sound/pci/rme9652/hdspm.c
+@@ -6884,7 +6884,8 @@ static int snd_hdspm_free(struct hdspm * hdspm)
+ if (hdspm->port)
+ pci_release_regions(hdspm->pci);
+
+- pci_disable_device(hdspm->pci);
++ if (pci_is_enabled(hdspm->pci))
++ pci_disable_device(hdspm->pci);
+ return 0;
+ }
+
+diff --git a/sound/pci/rme9652/rme9652.c b/sound/pci/rme9652/rme9652.c
+index 012fbec5e6a74..0f4ab86a29f6a 100644
+--- a/sound/pci/rme9652/rme9652.c
++++ b/sound/pci/rme9652/rme9652.c
+@@ -1733,7 +1733,8 @@ static int snd_rme9652_free(struct snd_rme9652 *rme9652)
+ if (rme9652->port)
+ pci_release_regions(rme9652->pci);
+
+- pci_disable_device(rme9652->pci);
++ if (pci_is_enabled(rme9652->pci))
++ pci_disable_device(rme9652->pci);
+ return 0;
+ }
+
+diff --git a/sound/soc/codecs/rt286.c b/sound/soc/codecs/rt286.c
+index 5fb9653d9131f..eec2dd93ecbb0 100644
+--- a/sound/soc/codecs/rt286.c
++++ b/sound/soc/codecs/rt286.c
+@@ -171,6 +171,9 @@ static bool rt286_readable_register(struct device *dev, unsigned int reg)
+ case RT286_PROC_COEF:
+ case RT286_SET_AMP_GAIN_ADC_IN1:
+ case RT286_SET_AMP_GAIN_ADC_IN2:
++ case RT286_SET_GPIO_MASK:
++ case RT286_SET_GPIO_DIRECTION:
++ case RT286_SET_GPIO_DATA:
+ case RT286_SET_POWER(RT286_DAC_OUT1):
+ case RT286_SET_POWER(RT286_DAC_OUT2):
+ case RT286_SET_POWER(RT286_ADC_IN1):
+@@ -1117,12 +1120,11 @@ static const struct dmi_system_id force_combo_jack_table[] = {
+ { }
+ };
+
+-static const struct dmi_system_id dmi_dell_dino[] = {
++static const struct dmi_system_id dmi_dell[] = {
+ {
+- .ident = "Dell Dino",
++ .ident = "Dell",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+- DMI_MATCH(DMI_PRODUCT_NAME, "XPS 13 9343")
+ }
+ },
+ { }
+@@ -1133,7 +1135,7 @@ static int rt286_i2c_probe(struct i2c_client *i2c,
+ {
+ struct rt286_platform_data *pdata = dev_get_platdata(&i2c->dev);
+ struct rt286_priv *rt286;
+- int i, ret, val;
++ int i, ret, vendor_id;
+
+ rt286 = devm_kzalloc(&i2c->dev, sizeof(*rt286),
+ GFP_KERNEL);
+@@ -1149,14 +1151,15 @@ static int rt286_i2c_probe(struct i2c_client *i2c,
+ }
+
+ ret = regmap_read(rt286->regmap,
+- RT286_GET_PARAM(AC_NODE_ROOT, AC_PAR_VENDOR_ID), &val);
++ RT286_GET_PARAM(AC_NODE_ROOT, AC_PAR_VENDOR_ID), &vendor_id);
+ if (ret != 0) {
+ dev_err(&i2c->dev, "I2C error %d\n", ret);
+ return ret;
+ }
+- if (val != RT286_VENDOR_ID && val != RT288_VENDOR_ID) {
++ if (vendor_id != RT286_VENDOR_ID && vendor_id != RT288_VENDOR_ID) {
+ dev_err(&i2c->dev,
+- "Device with ID register %#x is not rt286\n", val);
++ "Device with ID register %#x is not rt286\n",
++ vendor_id);
+ return -ENODEV;
+ }
+
+@@ -1180,8 +1183,8 @@ static int rt286_i2c_probe(struct i2c_client *i2c,
+ if (pdata)
+ rt286->pdata = *pdata;
+
+- if (dmi_check_system(force_combo_jack_table) ||
+- dmi_check_system(dmi_dell_dino))
++ if ((vendor_id == RT288_VENDOR_ID && dmi_check_system(dmi_dell)) ||
++ dmi_check_system(force_combo_jack_table))
+ rt286->pdata.cbj_en = true;
+
+ regmap_write(rt286->regmap, RT286_SET_AUDIO_POWER, AC_PWRST_D3);
+@@ -1220,7 +1223,7 @@ static int rt286_i2c_probe(struct i2c_client *i2c,
+ regmap_update_bits(rt286->regmap, RT286_DEPOP_CTRL3, 0xf777, 0x4737);
+ regmap_update_bits(rt286->regmap, RT286_DEPOP_CTRL4, 0x00ff, 0x003f);
+
+- if (dmi_check_system(dmi_dell_dino)) {
++ if (vendor_id == RT288_VENDOR_ID && dmi_check_system(dmi_dell)) {
+ regmap_update_bits(rt286->regmap,
+ RT286_SET_GPIO_MASK, 0x40, 0x40);
+ regmap_update_bits(rt286->regmap,
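
The rt286 change replaces the exact XPS 13 9343 DMI match with a vendor-wide Dell match, but compensates by keying the combo-jack and GPIO setup on the codec's own identity: the vendor ID read during probe is kept in vendor_id, and the Dell quirk only applies when the part is an RT288. Condensed, the gating reads (error handling omitted, identifiers as in the patch):

  unsigned int vendor_id;

  regmap_read(rt286->regmap,
              RT286_GET_PARAM(AC_NODE_ROOT, AC_PAR_VENDOR_ID), &vendor_id);

  if ((vendor_id == RT288_VENDOR_ID && dmi_check_system(dmi_dell)) ||
      dmi_check_system(force_combo_jack_table))
          rt286->pdata.cbj_en = true;   /* enable combo-jack handling */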
+diff --git a/sound/soc/codecs/rt5670.c b/sound/soc/codecs/rt5670.c
+index a0c8f58d729b3..47ce074289ca9 100644
+--- a/sound/soc/codecs/rt5670.c
++++ b/sound/soc/codecs/rt5670.c
+@@ -2908,6 +2908,18 @@ static const struct dmi_system_id dmi_platform_intel_quirks[] = {
+ RT5670_GPIO1_IS_IRQ |
+ RT5670_JD_MODE3),
+ },
++ {
++ .callback = rt5670_quirk_cb,
++ .ident = "Dell Venue 10 Pro 5055",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Venue 10 Pro 5055"),
++ },
++ .driver_data = (unsigned long *)(RT5670_DMIC_EN |
++ RT5670_DMIC2_INR |
++ RT5670_GPIO1_IS_IRQ |
++ RT5670_JD_MODE1),
++ },
+ {
+ .callback = rt5670_quirk_cb,
+ .ident = "Aegex 10 tablet (RU2)",
+diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
+index 21d2e1cba3803..d45f43290653e 100644
+--- a/sound/soc/intel/boards/bytcr_rt5640.c
++++ b/sound/soc/intel/boards/bytcr_rt5640.c
+@@ -478,6 +478,9 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = {
+ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "T100TAF"),
+ },
+ .driver_data = (void *)(BYT_RT5640_IN1_MAP |
++ BYT_RT5640_JD_SRC_JD2_IN4N |
++ BYT_RT5640_OVCD_TH_2000UA |
++ BYT_RT5640_OVCD_SF_0P75 |
+ BYT_RT5640_MONO_SPEAKER |
+ BYT_RT5640_DIFF_MIC |
+ BYT_RT5640_SSP0_AIF2 |
+@@ -511,6 +514,23 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = {
+ BYT_RT5640_SSP0_AIF1 |
+ BYT_RT5640_MCLK_EN),
+ },
++ {
++ /* Chuwi Hi8 (CWI509) */
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "Hampoo"),
++ DMI_MATCH(DMI_BOARD_NAME, "BYT-PA03C"),
++ DMI_MATCH(DMI_SYS_VENDOR, "ilife"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "S806"),
++ },
++ .driver_data = (void *)(BYT_RT5640_IN1_MAP |
++ BYT_RT5640_JD_SRC_JD2_IN4N |
++ BYT_RT5640_OVCD_TH_2000UA |
++ BYT_RT5640_OVCD_SF_0P75 |
++ BYT_RT5640_MONO_SPEAKER |
++ BYT_RT5640_DIFF_MIC |
++ BYT_RT5640_SSP0_AIF1 |
++ BYT_RT5640_MCLK_EN),
++ },
+ {
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Circuitco"),
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index 1d7677376e742..9dc982c2c7760 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -187,6 +187,17 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ SOF_RT715_DAI_ID_FIX |
+ SOF_SDW_FOUR_SPK),
+ },
++ /* AlderLake devices */
++ {
++ .callback = sof_sdw_quirk_cb,
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Intel Corporation"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Alder Lake Client Platform"),
++ },
++ .driver_data = (void *)(SOF_RT711_JD_SRC_JD1 |
++ SOF_SDW_TGL_HDMI |
++ SOF_SDW_PCH_DMIC),
++ },
+ {}
+ };
+
+diff --git a/sound/soc/sh/rcar/core.c b/sound/soc/sh/rcar/core.c
+index 6e670b3e92a00..289928d4c0c99 100644
+--- a/sound/soc/sh/rcar/core.c
++++ b/sound/soc/sh/rcar/core.c
+@@ -1428,8 +1428,75 @@ static int rsnd_hw_params(struct snd_soc_component *component,
+ }
+ if (io->converted_chan)
+ dev_dbg(dev, "convert channels = %d\n", io->converted_chan);
+- if (io->converted_rate)
++ if (io->converted_rate) {
++ /*
++ * SRC supports convert rates from params_rate(hw_params)/k_down
++ * to params_rate(hw_params)*k_up, where k_up is always 6, and
++ * k_down depends on number of channels and SRC unit.
++ * So all SRC units can upsample audio up to 6 times regardless
++ * its number of channels. And all SRC units can downsample
++ * 2 channel audio up to 6 times too.
++ */
++ int k_up = 6;
++ int k_down = 6;
++ int channel;
++ struct rsnd_mod *src_mod = rsnd_io_to_mod_src(io);
++
+ dev_dbg(dev, "convert rate = %d\n", io->converted_rate);
++
++ channel = io->converted_chan ? io->converted_chan :
++ params_channels(hw_params);
++
++ switch (rsnd_mod_id(src_mod)) {
++ /*
++ * SRC0 can downsample 4, 6 and 8 channel audio up to 4 times.
++ * SRC1, SRC3 and SRC4 can downsample 4 channel audio
++ * up to 4 times.
++ * SRC1, SRC3 and SRC4 can downsample 6 and 8 channel audio
++ * no more than twice.
++ */
++ case 1:
++ case 3:
++ case 4:
++ if (channel > 4) {
++ k_down = 2;
++ break;
++ }
++ fallthrough;
++ case 0:
++ if (channel > 2)
++ k_down = 4;
++ break;
++
++ /* Other SRC units do not support more than 2 channels */
++ default:
++ if (channel > 2)
++ return -EINVAL;
++ }
++
++ if (params_rate(hw_params) > io->converted_rate * k_down) {
++ hw_param_interval(hw_params, SNDRV_PCM_HW_PARAM_RATE)->min =
++ io->converted_rate * k_down;
++ hw_param_interval(hw_params, SNDRV_PCM_HW_PARAM_RATE)->max =
++ io->converted_rate * k_down;
++ hw_params->cmask |= SNDRV_PCM_HW_PARAM_RATE;
++ } else if (params_rate(hw_params) * k_up < io->converted_rate) {
++ hw_param_interval(hw_params, SNDRV_PCM_HW_PARAM_RATE)->min =
++ (io->converted_rate + k_up - 1) / k_up;
++ hw_param_interval(hw_params, SNDRV_PCM_HW_PARAM_RATE)->max =
++ (io->converted_rate + k_up - 1) / k_up;
++ hw_params->cmask |= SNDRV_PCM_HW_PARAM_RATE;
++ }
++
++ /*
++ * TBD: Max SRC input and output rates also depend on number
++ * of channels and SRC unit:
++ * SRC1, SRC3 and SRC4 do not support more than 128kHz
++ * for 6 channel and 96kHz for 8 channel audio.
++ * Perhaps this function should return EINVAL if the input or
++ * the output rate exceeds the limitation.
++ */
++ }
+ }
+
+ return rsnd_dai_call(hw_params, io, substream, hw_params);
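
The new block in rsnd_hw_params() boils down to a rate clamp: with upsampling limited to k_up = 6 and downsampling to a unit- and channel-dependent k_down, the hardware rate r is admissible only if ceil(converted_rate / k_up) <= r <= converted_rate * k_down, and the patch pins the hw_params rate interval to whichever bound was violated. The arithmetic in isolation:

  /* Standalone sketch of the clamp performed above; returns the rate
   * the hardware should be forced to, or the request unchanged when
   * the SRC can already reach it. */
  static unsigned int src_clamp_rate(unsigned int requested,
                                     unsigned int converted,
                                     unsigned int k_up,
                                     unsigned int k_down)
  {
          if (requested > converted * k_down)     /* too far to downsample */
                  return converted * k_down;
          if (requested * k_up < converted)       /* too far to upsample */
                  return (converted + k_up - 1) / k_up;   /* round up */
          return requested;
  }

For example, a 192 kHz request against a 44.1 kHz converted rate on a unit with k_down = 2 collapses to 88.2 kHz, which the SRC can then halve.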
+diff --git a/sound/soc/sh/rcar/ssi.c b/sound/soc/sh/rcar/ssi.c
+index d0ded427a8363..042207c116514 100644
+--- a/sound/soc/sh/rcar/ssi.c
++++ b/sound/soc/sh/rcar/ssi.c
+@@ -507,10 +507,15 @@ static int rsnd_ssi_init(struct rsnd_mod *mod,
+ struct rsnd_priv *priv)
+ {
+ struct rsnd_ssi *ssi = rsnd_mod_to_ssi(mod);
++ int ret;
+
+ if (!rsnd_ssi_is_run_mods(mod, io))
+ return 0;
+
++ ret = rsnd_ssi_master_clk_start(mod, io);
++ if (ret < 0)
++ return ret;
++
+ ssi->usrcnt++;
+
+ rsnd_mod_power_on(mod);
+@@ -792,7 +797,6 @@ static void __rsnd_ssi_interrupt(struct rsnd_mod *mod,
+ SSI_SYS_STATUS(i * 2),
+ 0xf << (id * 4));
+ stop = true;
+- break;
+ }
+ }
+ break;
+@@ -810,7 +814,6 @@ static void __rsnd_ssi_interrupt(struct rsnd_mod *mod,
+ SSI_SYS_STATUS((i * 2) + 1),
+ 0xf << 4);
+ stop = true;
+- break;
+ }
+ }
+ break;
+@@ -1060,13 +1063,6 @@ static int rsnd_ssi_pio_pointer(struct rsnd_mod *mod,
+ return 0;
+ }
+
+-static int rsnd_ssi_prepare(struct rsnd_mod *mod,
+- struct rsnd_dai_stream *io,
+- struct rsnd_priv *priv)
+-{
+- return rsnd_ssi_master_clk_start(mod, io);
+-}
+-
+ static struct rsnd_mod_ops rsnd_ssi_pio_ops = {
+ .name = SSI_NAME,
+ .probe = rsnd_ssi_common_probe,
+@@ -1079,7 +1075,6 @@ static struct rsnd_mod_ops rsnd_ssi_pio_ops = {
+ .pointer = rsnd_ssi_pio_pointer,
+ .pcm_new = rsnd_ssi_pcm_new,
+ .hw_params = rsnd_ssi_hw_params,
+- .prepare = rsnd_ssi_prepare,
+ .get_status = rsnd_ssi_get_status,
+ };
+
+@@ -1166,7 +1161,6 @@ static struct rsnd_mod_ops rsnd_ssi_dma_ops = {
+ .pcm_new = rsnd_ssi_pcm_new,
+ .fallback = rsnd_ssi_fallback,
+ .hw_params = rsnd_ssi_hw_params,
+- .prepare = rsnd_ssi_prepare,
+ .get_status = rsnd_ssi_get_status,
+ };
+
+diff --git a/sound/soc/soc-compress.c b/sound/soc/soc-compress.c
+index 246a5e32e22a2..b4810266f5e5d 100644
+--- a/sound/soc/soc-compress.c
++++ b/sound/soc/soc-compress.c
+@@ -153,7 +153,9 @@ static int soc_compr_open_fe(struct snd_compr_stream *cstream)
+ fe->dpcm[stream].state = SND_SOC_DPCM_STATE_OPEN;
+ fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_NO;
+
++ mutex_lock_nested(&fe->card->pcm_mutex, fe->card->pcm_subclass);
+ snd_soc_runtime_activate(fe, stream);
++ mutex_unlock(&fe->card->pcm_mutex);
+
+ mutex_unlock(&fe->card->mutex);
+
+@@ -181,7 +183,9 @@ static int soc_compr_free_fe(struct snd_compr_stream *cstream)
+
+ mutex_lock_nested(&fe->card->mutex, SND_SOC_CARD_CLASS_RUNTIME);
+
++ mutex_lock_nested(&fe->card->pcm_mutex, fe->card->pcm_subclass);
+ snd_soc_runtime_deactivate(fe, stream);
++ mutex_unlock(&fe->card->pcm_mutex);
+
+ fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_FE;
+
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index 48facd2626585..8a8fe2b980a18 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -3827,6 +3827,69 @@ AU0828_DEVICE(0x2040, 0x7270, "Hauppauge", "HVR-950Q"),
+ }
+ }
+ },
++{
++ /*
++ * Pioneer DJ DJM-850
++ * 8 channels playback and 8 channels capture @ 44.1/48/96kHz S24LE
++ * Playback on EP 0x05
++ * Capture on EP 0x86
++ */
++ USB_DEVICE_VENDOR_SPEC(0x08e4, 0x0163),
++ .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++ .ifnum = QUIRK_ANY_INTERFACE,
++ .type = QUIRK_COMPOSITE,
++ .data = (const struct snd_usb_audio_quirk[]) {
++ {
++ .ifnum = 0,
++ .type = QUIRK_AUDIO_FIXED_ENDPOINT,
++ .data = &(const struct audioformat) {
++ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
++ .channels = 8,
++ .iface = 0,
++ .altsetting = 1,
++ .altset_idx = 1,
++ .endpoint = 0x05,
++ .ep_attr = USB_ENDPOINT_XFER_ISOC|
++ USB_ENDPOINT_SYNC_ASYNC|
++ USB_ENDPOINT_USAGE_DATA,
++ .rates = SNDRV_PCM_RATE_44100|
++ SNDRV_PCM_RATE_48000|
++ SNDRV_PCM_RATE_96000,
++ .rate_min = 44100,
++ .rate_max = 96000,
++ .nr_rates = 3,
++ .rate_table = (unsigned int[]) { 44100, 48000, 96000 }
++ }
++ },
++ {
++ .ifnum = 0,
++ .type = QUIRK_AUDIO_FIXED_ENDPOINT,
++ .data = &(const struct audioformat) {
++ .formats = SNDRV_PCM_FMTBIT_S24_3LE,
++ .channels = 8,
++ .iface = 0,
++ .altsetting = 1,
++ .altset_idx = 1,
++ .endpoint = 0x86,
++ .ep_idx = 1,
++ .ep_attr = USB_ENDPOINT_XFER_ISOC|
++ USB_ENDPOINT_SYNC_ASYNC|
++ USB_ENDPOINT_USAGE_DATA,
++ .rates = SNDRV_PCM_RATE_44100|
++ SNDRV_PCM_RATE_48000|
++ SNDRV_PCM_RATE_96000,
++ .rate_min = 44100,
++ .rate_max = 96000,
++ .nr_rates = 3,
++ .rate_table = (unsigned int[]) { 44100, 48000, 96000 }
++ }
++ },
++ {
++ .ifnum = -1
++ }
++ }
++ }
++},
+ {
+ /*
+ * Pioneer DJ DJM-450
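
For scale, the DJM-850's fixed format above implies a modest isochronous load per direction: 8 channels of 3-byte S24_3LE samples at the 96 kHz maximum rate.

  /* Per-direction bandwidth at the DJM-850's maximum declared rate:
   * 8 channels x 3 bytes (S24_3LE) x 96000 frames/s. */
  unsigned int bytes_per_sec = 8 * 3 * 96000;   /* 2304000, ~2.3 MB/s */

That fits comfortably within a single high-speed USB isochronous endpoint.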
+diff --git a/tools/lib/bpf/ringbuf.c b/tools/lib/bpf/ringbuf.c
+index e7a8d847161f2..1d80ad4e0de8d 100644
+--- a/tools/lib/bpf/ringbuf.c
++++ b/tools/lib/bpf/ringbuf.c
+@@ -202,9 +202,11 @@ static inline int roundup_len(__u32 len)
+ return (len + 7) / 8 * 8;
+ }
+
+-static int ringbuf_process_ring(struct ring* r)
++static int64_t ringbuf_process_ring(struct ring* r)
+ {
+- int *len_ptr, len, err, cnt = 0;
++ int *len_ptr, len, err;
++ /* 64-bit to avoid overflow in case of extreme application behavior */
++ int64_t cnt = 0;
+ unsigned long cons_pos, prod_pos;
+ bool got_new_data;
+ void *sample;
+@@ -244,12 +246,14 @@ done:
+ }
+
+ /* Consume available ring buffer(s) data without event polling.
+- * Returns number of records consumed across all registered ring buffers, or
+- * negative number if any of the callbacks return error.
++ * Returns number of records consumed across all registered ring buffers (or
++ * INT_MAX, whichever is less), or negative number if any of the callbacks
++ * return error.
+ */
+ int ring_buffer__consume(struct ring_buffer *rb)
+ {
+- int i, err, res = 0;
++ int64_t err, res = 0;
++ int i;
+
+ for (i = 0; i < rb->ring_cnt; i++) {
+ struct ring *ring = &rb->rings[i];
+@@ -259,18 +263,24 @@ int ring_buffer__consume(struct ring_buffer *rb)
+ return err;
+ res += err;
+ }
++ if (res > INT_MAX)
++ return INT_MAX;
+ return res;
+ }
+
+ /* Poll for available data and consume records, if any are available.
+- * Returns number of records consumed, or negative number, if any of the
+- * registered callbacks returned error.
++ * Returns number of records consumed (or INT_MAX, whichever is less), or
++ * negative number, if any of the registered callbacks returned error.
+ */
+ int ring_buffer__poll(struct ring_buffer *rb, int timeout_ms)
+ {
+- int i, cnt, err, res = 0;
++ int i, cnt;
++ int64_t err, res = 0;
+
+ cnt = epoll_wait(rb->epoll_fd, rb->events, rb->ring_cnt, timeout_ms);
++ if (cnt < 0)
++ return -errno;
++
+ for (i = 0; i < cnt; i++) {
+ __u32 ring_id = rb->events[i].data.fd;
+ struct ring *ring = &rb->rings[ring_id];
+@@ -280,7 +290,9 @@ int ring_buffer__poll(struct ring_buffer *rb, int timeout_ms)
+ return err;
+ res += err;
+ }
+- return cnt < 0 ? -errno : res;
++ if (res > INT_MAX)
++ return INT_MAX;
++ return res;
+ }
+
+ /* Get an fd that can be used to sleep until data is available in the ring(s) */
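
The libbpf fix addresses two separate issues: the per-call record count now accumulates in an int64_t and is clamped to INT_MAX on return, so a pathological number of records cannot overflow the int API, and ring_buffer__poll() now returns -errno immediately when epoll_wait() fails instead of folding a negative cnt into the result. The saturating-return idiom in isolation:

  #include <limits.h>
  #include <stdint.h>

  /* Sketch of the accumulate-then-saturate pattern used above. */
  static int saturating_total(const int64_t *counts, int n)
  {
          int64_t res = 0;

          for (int i = 0; i < n; i++) {
                  if (counts[i] < 0)
                          return (int)counts[i];  /* propagate the error */
                  res += counts[i];
          }
          return res > INT_MAX ? INT_MAX : (int)res;
  }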
+diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
+index ce8516e4de34f..2abbd75fbf2e3 100644
+--- a/tools/perf/Makefile.config
++++ b/tools/perf/Makefile.config
+@@ -530,6 +530,7 @@ ifndef NO_LIBELF
+ ifdef LIBBPF_DYNAMIC
+ ifeq ($(feature-libbpf), 1)
+ EXTLIBS += -lbpf
++ $(call detected,CONFIG_LIBBPF_DYNAMIC)
+ else
+ dummy := $(error Error: No libbpf devel library found, please install libbpf-devel);
+ endif
+diff --git a/tools/perf/util/Build b/tools/perf/util/Build
+index e2563d0154eb6..0cf27354aa451 100644
+--- a/tools/perf/util/Build
++++ b/tools/perf/util/Build
+@@ -140,7 +140,14 @@ perf-$(CONFIG_LIBELF) += symbol-elf.o
+ perf-$(CONFIG_LIBELF) += probe-file.o
+ perf-$(CONFIG_LIBELF) += probe-event.o
+
++ifdef CONFIG_LIBBPF_DYNAMIC
++ hashmap := 1
++endif
+ ifndef CONFIG_LIBBPF
++ hashmap := 1
++endif
++
++ifdef hashmap
+ perf-y += hashmap.o
+ endif
+
+diff --git a/tools/testing/selftests/drivers/net/mlxsw/mirror_gre_scale.sh b/tools/testing/selftests/drivers/net/mlxsw/mirror_gre_scale.sh
+index 6f3a70df63bc6..e00435753008a 100644
+--- a/tools/testing/selftests/drivers/net/mlxsw/mirror_gre_scale.sh
++++ b/tools/testing/selftests/drivers/net/mlxsw/mirror_gre_scale.sh
+@@ -120,12 +120,13 @@ __mirror_gre_test()
+ sleep 5
+
+ for ((i = 0; i < count; ++i)); do
++ local sip=$(mirror_gre_ipv6_addr 1 $i)::1
+ local dip=$(mirror_gre_ipv6_addr 1 $i)::2
+ local htun=h3-gt6-$i
+ local message
+
+ icmp6_capture_install $htun
+- mirror_test v$h1 "" $dip $htun 100 10
++ mirror_test v$h1 $sip $dip $htun 100 10
+ icmp6_capture_uninstall $htun
+ done
+ }
+diff --git a/tools/testing/selftests/drivers/net/mlxsw/sch_red_core.sh b/tools/testing/selftests/drivers/net/mlxsw/sch_red_core.sh
+index b0cb1aaffddab..33ddd01689bee 100644
+--- a/tools/testing/selftests/drivers/net/mlxsw/sch_red_core.sh
++++ b/tools/testing/selftests/drivers/net/mlxsw/sch_red_core.sh
+@@ -507,8 +507,8 @@ do_red_test()
+ check_err $? "backlog $backlog / $limit Got $pct% marked packets, expected == 0."
+ local diff=$((limit - backlog))
+ pct=$((100 * diff / limit))
+- ((0 <= pct && pct <= 5))
+- check_err $? "backlog $backlog / $limit expected <= 5% distance"
++ ((0 <= pct && pct <= 10))
++ check_err $? "backlog $backlog / $limit expected <= 10% distance"
+ log_test "TC $((vlan - 10)): RED backlog > limit"
+
+ stop_traffic
+diff --git a/tools/testing/selftests/lib.mk b/tools/testing/selftests/lib.mk
+index be17462fe1467..0af84ad48aa77 100644
+--- a/tools/testing/selftests/lib.mk
++++ b/tools/testing/selftests/lib.mk
+@@ -1,6 +1,10 @@
+ # This mimics the top-level Makefile. We do it explicitly here so that this
+ # Makefile can operate with or without the kbuild infrastructure.
++ifneq ($(LLVM),)
++CC := clang
++else
+ CC := $(CROSS_COMPILE)gcc
++endif
+
+ ifeq (0,$(MAKELEVEL))
+ ifeq ($(OUTPUT),)
+diff --git a/tools/testing/selftests/net/forwarding/mirror_lib.sh b/tools/testing/selftests/net/forwarding/mirror_lib.sh
+index 13db1cb50e57b..6406cd76a19d8 100644
+--- a/tools/testing/selftests/net/forwarding/mirror_lib.sh
++++ b/tools/testing/selftests/net/forwarding/mirror_lib.sh
+@@ -20,6 +20,13 @@ mirror_uninstall()
+ tc filter del dev $swp1 $direction pref 1000
+ }
+
++is_ipv6()
++{
++ local addr=$1; shift
++
++ [[ -z ${addr//[0-9a-fA-F:]/} ]]
++}
++
+ mirror_test()
+ {
+ local vrf_name=$1; shift
+@@ -29,9 +36,17 @@ mirror_test()
+ local pref=$1; shift
+ local expect=$1; shift
+
++ if is_ipv6 $dip; then
++ local proto=-6
++ local type="icmp6 type=128" # Echo request.
++ else
++ local proto=
++ local type="icmp echoreq"
++ fi
++
+ local t0=$(tc_rule_stats_get $dev $pref)
+- $MZ $vrf_name ${sip:+-A $sip} -B $dip -a own -b bc -q \
+- -c 10 -d 100msec -t icmp type=8
++ $MZ $proto $vrf_name ${sip:+-A $sip} -B $dip -a own -b bc -q \
++ -c 10 -d 100msec -t $type
+ sleep 0.5
+ local t1=$(tc_rule_stats_get $dev $pref)
+ local delta=$((t1 - t0))
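
The new is_ipv6() helper classifies an address purely by character set: ${addr//[0-9a-fA-F:]/} deletes every hex digit and colon, and an empty remainder means the string can only be an IPv6 literal; mirror_test() then selects -6 plus ICMPv6 echo request (type 128) or plain ICMPv4 accordingly. The same heuristic, expressed in C for comparison:

  #include <ctype.h>
  #include <stdbool.h>

  /* Equivalent of the shell helper above: true iff the string contains
   * only hex digits and ':'. Deliberately as crude as the bash version
   * (e.g. "abc" also passes). */
  static bool is_ipv6(const char *addr)
  {
          for (; *addr; addr++)
                  if (!isxdigit((unsigned char)*addr) && *addr != ':')
                          return false;
          return true;
  }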
+diff --git a/tools/testing/selftests/net/mptcp/diag.sh b/tools/testing/selftests/net/mptcp/diag.sh
+index 39edce4f541c2..2674ba20d5249 100755
+--- a/tools/testing/selftests/net/mptcp/diag.sh
++++ b/tools/testing/selftests/net/mptcp/diag.sh
+@@ -5,8 +5,9 @@ rndh=$(printf %x $sec)-$(mktemp -u XXXXXX)
+ ns="ns1-$rndh"
+ ksft_skip=4
+ test_cnt=1
++timeout_poll=100
++timeout_test=$((timeout_poll * 2 + 1))
+ ret=0
+-pids=()
+
+ flush_pids()
+ {
+@@ -14,18 +15,14 @@ flush_pids()
+ # give it some time
+ sleep 1.1
+
+- for pid in ${pids[@]}; do
+- [ -d /proc/$pid ] && kill -SIGUSR1 $pid >/dev/null 2>&1
+- done
+- pids=()
++ ip netns pids "${ns}" | xargs --no-run-if-empty kill -SIGUSR1 &>/dev/null
+ }
+
+ cleanup()
+ {
++ ip netns pids "${ns}" | xargs --no-run-if-empty kill -SIGKILL &>/dev/null
++
+ ip netns del $ns
+- for pid in ${pids[@]}; do
+- [ -d /proc/$pid ] && kill -9 $pid >/dev/null 2>&1
+- done
+ }
+
+ ip -Version > /dev/null 2>&1
+@@ -79,39 +76,57 @@ trap cleanup EXIT
+ ip netns add $ns
+ ip -n $ns link set dev lo up
+
+-echo "a" | ip netns exec $ns ./mptcp_connect -p 10000 -l 0.0.0.0 -t 100 >/dev/null &
++echo "a" | \
++ timeout ${timeout_test} \
++ ip netns exec $ns \
++ ./mptcp_connect -p 10000 -l -t ${timeout_poll} \
++ 0.0.0.0 >/dev/null &
+ sleep 0.1
+-pids[0]=$!
+ chk_msk_nr 0 "no msk on netns creation"
+
+-echo "b" | ip netns exec $ns ./mptcp_connect -p 10000 127.0.0.1 -j -t 100 >/dev/null &
++echo "b" | \
++ timeout ${timeout_test} \
++ ip netns exec $ns \
++ ./mptcp_connect -p 10000 -j -t ${timeout_poll} \
++ 127.0.0.1 >/dev/null &
+ sleep 0.1
+-pids[1]=$!
+ chk_msk_nr 2 "after MPC handshake "
+ chk_msk_remote_key_nr 2 "....chk remote_key"
+ chk_msk_fallback_nr 0 "....chk no fallback"
+ flush_pids
+
+
+-echo "a" | ip netns exec $ns ./mptcp_connect -p 10001 -s TCP -l 0.0.0.0 -t 100 >/dev/null &
+-pids[0]=$!
++echo "a" | \
++ timeout ${timeout_test} \
++ ip netns exec $ns \
++ ./mptcp_connect -p 10001 -l -s TCP -t ${timeout_poll} \
++ 0.0.0.0 >/dev/null &
+ sleep 0.1
+-echo "b" | ip netns exec $ns ./mptcp_connect -p 10001 127.0.0.1 -j -t 100 >/dev/null &
+-pids[1]=$!
++echo "b" | \
++ timeout ${timeout_test} \
++ ip netns exec $ns \
++ ./mptcp_connect -p 10001 -j -t ${timeout_poll} \
++ 127.0.0.1 >/dev/null &
+ sleep 0.1
+ chk_msk_fallback_nr 1 "check fallback"
+ flush_pids
+
+ NR_CLIENTS=100
+ for I in `seq 1 $NR_CLIENTS`; do
+- echo "a" | ip netns exec $ns ./mptcp_connect -p $((I+10001)) -l 0.0.0.0 -t 100 -w 10 >/dev/null &
+- pids[$((I*2))]=$!
++ echo "a" | \
++ timeout ${timeout_test} \
++ ip netns exec $ns \
++ ./mptcp_connect -p $((I+10001)) -l -w 10 \
++ -t ${timeout_poll} 0.0.0.0 >/dev/null &
+ done
+ sleep 0.1
+
+ for I in `seq 1 $NR_CLIENTS`; do
+- echo "b" | ip netns exec $ns ./mptcp_connect -p $((I+10001)) 127.0.0.1 -t 100 -w 10 >/dev/null &
+- pids[$((I*2 + 1))]=$!
++ echo "b" | \
++ timeout ${timeout_test} \
++ ip netns exec $ns \
++ ./mptcp_connect -p $((I+10001)) -w 10 \
++ -t ${timeout_poll} 127.0.0.1 >/dev/null &
+ done
+ sleep 1.5
+
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect.sh b/tools/testing/selftests/net/mptcp/mptcp_connect.sh
+index e927df83efb91..c37acb790bd66 100755
+--- a/tools/testing/selftests/net/mptcp/mptcp_connect.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_connect.sh
+@@ -11,7 +11,8 @@ cin=""
+ cout=""
+ ksft_skip=4
+ capture=false
+-timeout=30
++timeout_poll=30
++timeout_test=$((timeout_poll * 2 + 1))
+ ipv6=true
+ ethtool_random_on=true
+ tc_delay="$((RANDOM%50))"
+@@ -272,7 +273,7 @@ check_mptcp_disabled()
+ ip netns exec ${disabled_ns} sysctl -q net.mptcp.enabled=0
+
+ local err=0
+- LANG=C ip netns exec ${disabled_ns} ./mptcp_connect -t $timeout -p 10000 -s MPTCP 127.0.0.1 < "$cin" 2>&1 | \
++ LANG=C ip netns exec ${disabled_ns} ./mptcp_connect -p 10000 -s MPTCP 127.0.0.1 < "$cin" 2>&1 | \
+ grep -q "^socket: Protocol not available$" && err=1
+ ip netns delete ${disabled_ns}
+
+@@ -414,14 +415,20 @@ do_transfer()
+ local stat_cookietx_last=$(ip netns exec ${listener_ns} nstat -z -a TcpExtSyncookiesSent | while read a count c rest ;do echo $count;done)
+ local stat_cookierx_last=$(ip netns exec ${listener_ns} nstat -z -a TcpExtSyncookiesRecv | while read a count c rest ;do echo $count;done)
+
+- ip netns exec ${listener_ns} ./mptcp_connect -t $timeout -l -p $port -s ${srv_proto} $extra_args $local_addr < "$sin" > "$sout" &
++ timeout ${timeout_test} \
++ ip netns exec ${listener_ns} \
++ ./mptcp_connect -t ${timeout_poll} -l -p $port -s ${srv_proto} \
++ $extra_args $local_addr < "$sin" > "$sout" &
+ local spid=$!
+
+ wait_local_port_listen "${listener_ns}" "${port}"
+
+ local start
+ start=$(date +%s%3N)
+- ip netns exec ${connector_ns} ./mptcp_connect -t $timeout -p $port -s ${cl_proto} $extra_args $connect_addr < "$cin" > "$cout" &
++ timeout ${timeout_test} \
++ ip netns exec ${connector_ns} \
++ ./mptcp_connect -t ${timeout_poll} -p $port -s ${cl_proto} \
++ $extra_args $connect_addr < "$cin" > "$cout" &
+ local cpid=$!
+
+ wait $cpid
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_join.sh b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+index 9aa9624cff972..99c5dc0eeb265 100755
+--- a/tools/testing/selftests/net/mptcp/mptcp_join.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+@@ -8,7 +8,8 @@ cin=""
+ cinsent=""
+ cout=""
+ ksft_skip=4
+-timeout=30
++timeout_poll=30
++timeout_test=$((timeout_poll * 2 + 1))
+ mptcp_connect=""
+ capture=0
+
+@@ -249,17 +250,26 @@ do_transfer()
+ local_addr="0.0.0.0"
+ fi
+
+- ip netns exec ${listener_ns} $mptcp_connect -t $timeout -l -p $port \
+- -s ${srv_proto} ${local_addr} < "$sin" > "$sout" &
++ timeout ${timeout_test} \
++ ip netns exec ${listener_ns} \
++ $mptcp_connect -t ${timeout_poll} -l -p $port -s ${srv_proto} \
++ ${local_addr} < "$sin" > "$sout" &
+ spid=$!
+
+ sleep 1
+
+ if [ "$test_link_fail" -eq 0 ];then
+- ip netns exec ${connector_ns} $mptcp_connect -t $timeout -p $port -s ${cl_proto} $connect_addr < "$cin" > "$cout" &
++ timeout ${timeout_test} \
++ ip netns exec ${connector_ns} \
++ $mptcp_connect -t ${timeout_poll} -p $port -s ${cl_proto} \
++ $connect_addr < "$cin" > "$cout" &
+ else
+- ( cat "$cin" ; sleep 2; link_failure $listener_ns ; cat "$cin" ) | tee "$cinsent" | \
+- ip netns exec ${connector_ns} $mptcp_connect -t $timeout -p $port -s ${cl_proto} $connect_addr > "$cout" &
++ ( cat "$cin" ; sleep 2; link_failure $listener_ns ; cat "$cin" ) | \
++ tee "$cinsent" | \
++ timeout ${timeout_test} \
++ ip netns exec ${connector_ns} \
++ $mptcp_connect -t ${timeout_poll} -p $port -s ${cl_proto} \
++ $connect_addr > "$cout" &
+ fi
+ cpid=$!
+
+diff --git a/tools/testing/selftests/net/mptcp/simult_flows.sh b/tools/testing/selftests/net/mptcp/simult_flows.sh
+index f039ee57eb3c7..3aeef3bcb1018 100755
+--- a/tools/testing/selftests/net/mptcp/simult_flows.sh
++++ b/tools/testing/selftests/net/mptcp/simult_flows.sh
+@@ -7,7 +7,8 @@ ns2="ns2-$rndh"
+ ns3="ns3-$rndh"
+ capture=false
+ ksft_skip=4
+-timeout=30
++timeout_poll=30
++timeout_test=$((timeout_poll * 2 + 1))
+ test_cnt=1
+ ret=0
+ bail=0
+@@ -157,14 +158,20 @@ do_transfer()
+ sleep 1
+ fi
+
+- ip netns exec ${ns3} ./mptcp_connect -jt $timeout -l -p $port 0.0.0.0 < "$sin" > "$sout" &
++ timeout ${timeout_test} \
++ ip netns exec ${ns3} \
++ ./mptcp_connect -jt ${timeout_poll} -l -p $port \
++ 0.0.0.0 < "$sin" > "$sout" &
+ local spid=$!
+
+ wait_local_port_listen "${ns3}" "${port}"
+
+ local start
+ start=$(date +%s%3N)
+- ip netns exec ${ns1} ./mptcp_connect -jt $timeout -p $port 10.0.3.3 < "$cin" > "$cout" &
++ timeout ${timeout_test} \
++ ip netns exec ${ns1} \
++ ./mptcp_connect -jt ${timeout_poll} -p $port \
++ 10.0.3.3 < "$cin" > "$cout" &
+ local cpid=$!
+
+ wait $cpid
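
All four MPTCP scripts switch from a single in-tool timeout to a two-layer scheme: mptcp_connect keeps its own poll timeout (timeout_poll), and the whole invocation is wrapped in timeout(1) with timeout_test = timeout_poll * 2 + 1 seconds, enough for both the connect and the transfer phase to each hit the poll limit plus one second of slack, so a wedged binary gets killed instead of hanging the selftest run. A hedged C sketch of the same outer-watchdog idea:

  #include <sys/wait.h>
  #include <unistd.h>

  /* Run a command, killing it after 2 * poll_timeout + 1 seconds
   * (mirrors the timeout(1) wrapping above; error handling omitted). */
  static int run_with_watchdog(char *const argv[], unsigned int poll_timeout)
  {
          pid_t pid = fork();

          if (pid == 0) {
                  alarm(2 * poll_timeout + 1);    /* SIGALRM terminates us */
                  execvp(argv[0], argv);
                  _exit(127);
          }
          int status;
          waitpid(pid, &status, 0);
          return status;
  }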
+diff --git a/tools/testing/selftests/powerpc/security/entry_flush.c b/tools/testing/selftests/powerpc/security/entry_flush.c
+index 78cf914fa3217..68ce377b205e9 100644
+--- a/tools/testing/selftests/powerpc/security/entry_flush.c
++++ b/tools/testing/selftests/powerpc/security/entry_flush.c
+@@ -53,7 +53,7 @@ int entry_flush_test(void)
+
+ entry_flush = entry_flush_orig;
+
+- fd = perf_event_open_counter(PERF_TYPE_RAW, /* L1d miss */ 0x400f0, -1);
++ fd = perf_event_open_counter(PERF_TYPE_HW_CACHE, PERF_L1D_READ_MISS_CONFIG, -1);
+ FAIL_IF(fd < 0);
+
+ p = (char *)memalign(zero_size, CACHELINE_SIZE);
+diff --git a/tools/testing/selftests/powerpc/security/flush_utils.h b/tools/testing/selftests/powerpc/security/flush_utils.h
+index 07a5eb3014669..7a3d60292916e 100644
+--- a/tools/testing/selftests/powerpc/security/flush_utils.h
++++ b/tools/testing/selftests/powerpc/security/flush_utils.h
+@@ -9,6 +9,10 @@
+
+ #define CACHELINE_SIZE 128
+
++#define PERF_L1D_READ_MISS_CONFIG ((PERF_COUNT_HW_CACHE_L1D) | \
++ (PERF_COUNT_HW_CACHE_OP_READ << 8) | \
++ (PERF_COUNT_HW_CACHE_RESULT_MISS << 16))
++
+ void syscall_loop(char *p, unsigned long iterations,
+ unsigned long zero_size);
+
+diff --git a/tools/testing/selftests/powerpc/security/rfi_flush.c b/tools/testing/selftests/powerpc/security/rfi_flush.c
+index 7565fd786640f..f73484a6470fa 100644
+--- a/tools/testing/selftests/powerpc/security/rfi_flush.c
++++ b/tools/testing/selftests/powerpc/security/rfi_flush.c
+@@ -54,7 +54,7 @@ int rfi_flush_test(void)
+
+ rfi_flush = rfi_flush_orig;
+
+- fd = perf_event_open_counter(PERF_TYPE_RAW, /* L1d miss */ 0x400f0, -1);
++ fd = perf_event_open_counter(PERF_TYPE_HW_CACHE, PERF_L1D_READ_MISS_CONFIG, -1);
+ FAIL_IF(fd < 0);
+
+ p = (char *)memalign(zero_size, CACHELINE_SIZE);
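
Replacing the raw event 0x400f0 with a generic hardware-cache event makes the L1d read-miss counter portable across CPU generations; the PERF_L1D_READ_MISS_CONFIG macro encodes it per the perf ABI as cache_id | (op << 8) | (result << 16). A self-contained sketch of opening that counter for the calling thread:

  #include <linux/perf_event.h>
  #include <string.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  /* Open a generic L1D read-miss counter on the current thread; the
   * config matches PERF_L1D_READ_MISS_CONFIG introduced above. */
  static int open_l1d_read_miss(void)
  {
          struct perf_event_attr attr;

          memset(&attr, 0, sizeof(attr));
          attr.type = PERF_TYPE_HW_CACHE;
          attr.size = sizeof(attr);
          attr.config = PERF_COUNT_HW_CACHE_L1D |
                        (PERF_COUNT_HW_CACHE_OP_READ << 8) |
                        (PERF_COUNT_HW_CACHE_RESULT_MISS << 16);
          return syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
  }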
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 2d2dfb8b51eab..7377346be8806 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -2734,8 +2734,8 @@ static void grow_halt_poll_ns(struct kvm_vcpu *vcpu)
+ if (val < grow_start)
+ val = grow_start;
+
+- if (val > halt_poll_ns)
+- val = halt_poll_ns;
++ if (val > vcpu->kvm->max_halt_poll_ns)
++ val = vcpu->kvm->max_halt_poll_ns;
+
+ vcpu->halt_poll_ns = val;
+ out:
+@@ -2814,7 +2814,8 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
+ goto out;
+ }
+ poll_end = cur = ktime_get();
+- } while (single_task_running() && ktime_before(cur, stop));
++ } while (single_task_running() && !need_resched() &&
++ ktime_before(cur, stop));
+ }
+
+ prepare_to_rcuwait(&vcpu->wait);
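
Two independent KVM fixes land here: grow_halt_poll_ns() now clamps the grown window to the per-VM vcpu->kvm->max_halt_poll_ns rather than the module-wide halt_poll_ns default, and the busy-poll loop additionally checks need_resched() so that halt polling never keeps the CPU away from a runnable task. The clamp in isolation:

  /* Sketch of the per-VM clamp applied in grow_halt_poll_ns() above. */
  static unsigned int grow_poll_ns(unsigned int old, unsigned int grow,
                                   unsigned int start, unsigned int vm_max)
  {
          unsigned int val = old * grow;

          if (val < start)
                  val = start;
          if (val > vm_max)       /* per-VM limit, not the module default */
                  val = vm_max;
          return val;
  }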