From: "Alice Ferrazzi" <alicef@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/linux-patches:6.0 commit in: /
Date: Wed, 16 Nov 2022 11:16:49 +0000 (UTC)
Message-ID: <1668597077.167e638e734a78f986f9cb92a872a2e5ed7b2a1f.alicef@gentoo>

commit:     167e638e734a78f986f9cb92a872a2e5ed7b2a1f
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Wed Nov 16 11:11:17 2022 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Wed Nov 16 11:11:17 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=167e638e

Linux patch 6.0.9

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README            |    4 +
 1008_linux-6.0.9.patch | 6943 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 6947 insertions(+)

diff --git a/0000_README b/0000_README
index 04880a09..bda6a465 100644
--- a/0000_README
+++ b/0000_README
@@ -75,6 +75,10 @@ Patch:  1007_linux-6.0.8.patch
 From:   http://www.kernel.org
 Desc:   Linux 6.0.8
 
+Patch:  1008_linux-6.0.9.patch
+From:   http://www.kernel.org
+Desc:   Linux 6.0.9
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1008_linux-6.0.9.patch b/1008_linux-6.0.9.patch
new file mode 100644
index 00000000..efa04169
--- /dev/null
+++ b/1008_linux-6.0.9.patch
@@ -0,0 +1,6943 @@
+diff --git a/Documentation/devicetree/bindings/net/engleder,tsnep.yaml b/Documentation/devicetree/bindings/net/engleder,tsnep.yaml
+index d0e1476e15b50..ccc42cb470dac 100644
+--- a/Documentation/devicetree/bindings/net/engleder,tsnep.yaml
++++ b/Documentation/devicetree/bindings/net/engleder,tsnep.yaml
+@@ -28,7 +28,7 @@ properties:
+ 
+   nvmem-cells: true
+ 
+-  nvmem-cells-names: true
++  nvmem-cell-names: true
+ 
+   phy-connection-type:
+     enum:
+diff --git a/Documentation/virt/kvm/devices/vm.rst b/Documentation/virt/kvm/devices/vm.rst
+index 0aa5b1cfd700c..60acc39e0e937 100644
+--- a/Documentation/virt/kvm/devices/vm.rst
++++ b/Documentation/virt/kvm/devices/vm.rst
+@@ -215,6 +215,7 @@ KVM_S390_VM_TOD_EXT).
+ :Parameters: address of a buffer in user space to store the data (u8) to
+ :Returns:   -EFAULT if the given address is not accessible from kernel space;
+ 	    -EINVAL if setting the TOD clock extension to != 0 is not supported
++	    -EOPNOTSUPP for a PV guest (TOD managed by the ultravisor)
+ 
+ 3.2. ATTRIBUTE: KVM_S390_VM_TOD_LOW
+ -----------------------------------
+@@ -224,6 +225,7 @@ the POP (u64).
+ 
+ :Parameters: address of a buffer in user space to store the data (u64) to
+ :Returns:    -EFAULT if the given address is not accessible from kernel space
++	     -EOPNOTSUPP for a PV guest (TOD managed by the ultravisor)
+ 
+ 3.3. ATTRIBUTE: KVM_S390_VM_TOD_EXT
+ -----------------------------------
+@@ -237,6 +239,7 @@ it, it is stored as 0 and not allowed to be set to a value != 0.
+ 	     (kvm_s390_vm_tod_clock) to
+ :Returns:   -EFAULT if the given address is not accessible from kernel space;
+ 	    -EINVAL if setting the TOD clock extension to != 0 is not supported
++	    -EOPNOTSUPP for a PV guest (TOD managed by the ultravisor)
+ 
+ 4. GROUP: KVM_S390_VM_CRYPTO
+ ============================
+diff --git a/Makefile b/Makefile
+index bcb76d4fdbc11..a234f16783ede 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 0
+-SUBLEVEL = 8
++SUBLEVEL = 9
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma ninja sloth
+ 
+diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c
+index e1be6c429810d..a908a37f03678 100644
+--- a/arch/arm64/kernel/efi.c
++++ b/arch/arm64/kernel/efi.c
+@@ -12,6 +12,14 @@
+ 
+ #include <asm/efi.h>
+ 
++static bool region_is_misaligned(const efi_memory_desc_t *md)
++{
++	if (PAGE_SIZE == EFI_PAGE_SIZE)
++		return false;
++	return !PAGE_ALIGNED(md->phys_addr) ||
++	       !PAGE_ALIGNED(md->num_pages << EFI_PAGE_SHIFT);
++}
++
+ /*
+  * Only regions of type EFI_RUNTIME_SERVICES_CODE need to be
+  * executable, everything else can be mapped with the XN bits
+@@ -25,14 +33,22 @@ static __init pteval_t create_mapping_protection(efi_memory_desc_t *md)
+ 	if (type == EFI_MEMORY_MAPPED_IO)
+ 		return PROT_DEVICE_nGnRE;
+ 
+-	if (WARN_ONCE(!PAGE_ALIGNED(md->phys_addr),
+-		      "UEFI Runtime regions are not aligned to 64 KB -- buggy firmware?"))
++	if (region_is_misaligned(md)) {
++		static bool __initdata code_is_misaligned;
++
+ 		/*
+-		 * If the region is not aligned to the page size of the OS, we
+-		 * can not use strict permissions, since that would also affect
+-		 * the mapping attributes of the adjacent regions.
++		 * Regions that are not aligned to the OS page size cannot be
++		 * mapped with strict permissions, as those might interfere
++		 * with the permissions that are needed by the adjacent
++		 * region's mapping. However, if we haven't encountered any
++		 * misaligned runtime code regions so far, we can safely use
++		 * non-executable permissions for non-code regions.
+ 		 */
+-		return pgprot_val(PAGE_KERNEL_EXEC);
++		code_is_misaligned |= (type == EFI_RUNTIME_SERVICES_CODE);
++
++		return code_is_misaligned ? pgprot_val(PAGE_KERNEL_EXEC)
++					  : pgprot_val(PAGE_KERNEL);
++	}
+ 
+ 	/* R-- */
+ 	if ((attr & (EFI_MEMORY_XP | EFI_MEMORY_RO)) ==
+@@ -63,19 +79,16 @@ int __init efi_create_mapping(struct mm_struct *mm, efi_memory_desc_t *md)
+ 	bool page_mappings_only = (md->type == EFI_RUNTIME_SERVICES_CODE ||
+ 				   md->type == EFI_RUNTIME_SERVICES_DATA);
+ 
+-	if (!PAGE_ALIGNED(md->phys_addr) ||
+-	    !PAGE_ALIGNED(md->num_pages << EFI_PAGE_SHIFT)) {
+-		/*
+-		 * If the end address of this region is not aligned to page
+-		 * size, the mapping is rounded up, and may end up sharing a
+-		 * page frame with the next UEFI memory region. If we create
+-		 * a block entry now, we may need to split it again when mapping
+-		 * the next region, and support for that is going to be removed
+-		 * from the MMU routines. So avoid block mappings altogether in
+-		 * that case.
+-		 */
++	/*
++	 * If this region is not aligned to the page size used by the OS, the
++	 * mapping will be rounded outwards, and may end up sharing a page
++	 * frame with an adjacent runtime memory region. Given that the page
++	 * table descriptor covering the shared page will be rewritten when the
++	 * adjacent region gets mapped, we must avoid block mappings here so we
++	 * don't have to worry about splitting them when that happens.
++	 */
++	if (region_is_misaligned(md))
+ 		page_mappings_only = true;
+-	}
+ 
+ 	create_pgd_mapping(mm, md->phys_addr, md->virt_addr,
+ 			   md->num_pages << EFI_PAGE_SHIFT,
+@@ -102,6 +115,9 @@ int __init efi_set_mapping_permissions(struct mm_struct *mm,
+ 	BUG_ON(md->type != EFI_RUNTIME_SERVICES_CODE &&
+ 	       md->type != EFI_RUNTIME_SERVICES_DATA);
+ 
++	if (region_is_misaligned(md))
++		return 0;
++
+ 	/*
+ 	 * Calling apply_to_page_range() is only safe on regions that are
+ 	 * guaranteed to be mapped down to pages. Since we are only called
+diff --git a/arch/m68k/include/uapi/asm/bootinfo-virt.h b/arch/m68k/include/uapi/asm/bootinfo-virt.h
+index b091ee9b06e05..7dbcd7bec1034 100644
+--- a/arch/m68k/include/uapi/asm/bootinfo-virt.h
++++ b/arch/m68k/include/uapi/asm/bootinfo-virt.h
+@@ -13,13 +13,8 @@
+ #define BI_VIRT_VIRTIO_BASE	0x8004
+ #define BI_VIRT_CTRL_BASE	0x8005
+ 
+-/*
+- * A random seed used to initialize the RNG. Record format:
+- *
+- *   - length       [ 2 bytes, 16-bit big endian ]
+- *   - seed data    [ `length` bytes, padded to preserve 2-byte alignment ]
+- */
+-#define BI_VIRT_RNG_SEED	0x8006
++/* No longer used -- replaced with BI_RNG_SEED -- but don't reuse this index:
++ * #define BI_VIRT_RNG_SEED	0x8006 */
+ 
+ #define VIRT_BOOTI_VERSION	MK_BI_VERSION(2, 0)
+ 
+diff --git a/arch/m68k/include/uapi/asm/bootinfo.h b/arch/m68k/include/uapi/asm/bootinfo.h
+index 95ecf3ae4c49f..024e87d7095f8 100644
+--- a/arch/m68k/include/uapi/asm/bootinfo.h
++++ b/arch/m68k/include/uapi/asm/bootinfo.h
+@@ -64,6 +64,13 @@ struct mem_info {
+ 					/* (struct mem_info) */
+ #define BI_COMMAND_LINE		0x0007	/* kernel command line parameters */
+ 					/* (string) */
++/*
++ * A random seed used to initialize the RNG. Record format:
++ *
++ *   - length       [ 2 bytes, 16-bit big endian ]
++ *   - seed data    [ `length` bytes, padded to preserve 4-byte struct alignment ]
++ */
++#define BI_RNG_SEED		0x0008
+ 
+ 
+     /*
+diff --git a/arch/m68k/kernel/setup_mm.c b/arch/m68k/kernel/setup_mm.c
+index 7e7ef67cff8b2..e45cc99237030 100644
+--- a/arch/m68k/kernel/setup_mm.c
++++ b/arch/m68k/kernel/setup_mm.c
+@@ -25,6 +25,7 @@
+ #include <linux/module.h>
+ #include <linux/nvram.h>
+ #include <linux/initrd.h>
++#include <linux/random.h>
+ 
+ #include <asm/bootinfo.h>
+ #include <asm/byteorder.h>
+@@ -151,6 +152,17 @@ static void __init m68k_parse_bootinfo(const struct bi_record *record)
+ 				sizeof(m68k_command_line));
+ 			break;
+ 
++		case BI_RNG_SEED: {
++			u16 len = be16_to_cpup(data);
++			add_bootloader_randomness(data + 2, len);
++			/*
++			 * Zero the data to preserve forward secrecy, and zero the
++			 * length to prevent kexec from using it.
++			 */
++			memzero_explicit((void *)data, len + 2);
++			break;
++		}
++
+ 		default:
+ 			if (MACH_IS_AMIGA)
+ 				unknown = amiga_parse_bootinfo(record);
+diff --git a/arch/m68k/virt/config.c b/arch/m68k/virt/config.c
+index 4ab22946ff68f..632ba200ad425 100644
+--- a/arch/m68k/virt/config.c
++++ b/arch/m68k/virt/config.c
+@@ -2,7 +2,6 @@
+ 
+ #include <linux/reboot.h>
+ #include <linux/serial_core.h>
+-#include <linux/random.h>
+ #include <clocksource/timer-goldfish.h>
+ 
+ #include <asm/bootinfo.h>
+@@ -93,16 +92,6 @@ int __init virt_parse_bootinfo(const struct bi_record *record)
+ 		data += 4;
+ 		virt_bi_data.virtio.irq = be32_to_cpup(data);
+ 		break;
+-	case BI_VIRT_RNG_SEED: {
+-		u16 len = be16_to_cpup(data);
+-		add_bootloader_randomness(data + 2, len);
+-		/*
+-		 * Zero the data to preserve forward secrecy, and zero the
+-		 * length to prevent kexec from using it.
+-		 */
+-		memzero_explicit((void *)data, len + 2);
+-		break;
+-	}
+ 	default:
+ 		unknown = 1;
+ 		break;
+diff --git a/arch/mips/kernel/jump_label.c b/arch/mips/kernel/jump_label.c
+index 71a882c8c6eb1..f7978d50a2ba5 100644
+--- a/arch/mips/kernel/jump_label.c
++++ b/arch/mips/kernel/jump_label.c
+@@ -56,7 +56,7 @@ void arch_jump_label_transform(struct jump_entry *e,
+ 			 * The branch offset must fit in the instruction's 26
+ 			 * bit field.
+ 			 */
+-			WARN_ON((offset >= BIT(25)) ||
++			WARN_ON((offset >= (long)BIT(25)) ||
+ 				(offset < -(long)BIT(25)));
+ 
+ 			insn.j_format.opcode = bc6_op;
+diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c
+index ceb9ebab6558c..52002d54b1637 100644
+--- a/arch/riscv/kernel/process.c
++++ b/arch/riscv/kernel/process.c
+@@ -164,6 +164,8 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args)
+ 	unsigned long tls = args->tls;
+ 	struct pt_regs *childregs = task_pt_regs(p);
+ 
++	memset(&p->thread.s, 0, sizeof(p->thread.s));
++
+ 	/* p->thread holds context to be restored by __switch_to() */
+ 	if (unlikely(args->fn)) {
+ 		/* Kernel thread */
+diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
+index ad76bb59b0590..67ec1fadcfe24 100644
+--- a/arch/riscv/kernel/setup.c
++++ b/arch/riscv/kernel/setup.c
+@@ -283,6 +283,7 @@ void __init setup_arch(char **cmdline_p)
+ 	else
+ 		pr_err("No DTB found in kernel mappings\n");
+ #endif
++	early_init_fdt_scan_reserved_mem();
+ 	misc_mem_init();
+ 
+ 	init_resources();
+diff --git a/arch/riscv/kernel/vdso/Makefile b/arch/riscv/kernel/vdso/Makefile
+index f2e065671e4d5..84ac0fe612e79 100644
+--- a/arch/riscv/kernel/vdso/Makefile
++++ b/arch/riscv/kernel/vdso/Makefile
+@@ -30,7 +30,7 @@ obj-y += vdso.o
+ CPPFLAGS_vdso.lds += -P -C -U$(ARCH)
+ 
+ # Disable -pg to prevent insert call site
+-CFLAGS_REMOVE_vgettimeofday.o = $(CC_FLAGS_FTRACE) -Os
++CFLAGS_REMOVE_vgettimeofday.o = $(CC_FLAGS_FTRACE)
+ 
+ # Disable profiling and instrumentation for VDSO code
+ GCOV_PROFILE := n
+diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
+index b56a0a75533fe..50a1b6edd4918 100644
+--- a/arch/riscv/mm/init.c
++++ b/arch/riscv/mm/init.c
+@@ -262,7 +262,6 @@ static void __init setup_bootmem(void)
+ 			memblock_reserve(dtb_early_pa, fdt_totalsize(dtb_early_va));
+ 	}
+ 
+-	early_init_fdt_scan_reserved_mem();
+ 	dma_contiguous_reserve(dma32_phys_limit);
+ 	if (IS_ENABLED(CONFIG_64BIT))
+ 		hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
+diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
+index b7ef0b71014df..2486281027c02 100644
+--- a/arch/s390/kvm/kvm-s390.c
++++ b/arch/s390/kvm/kvm-s390.c
+@@ -1207,6 +1207,8 @@ static int kvm_s390_vm_get_migration(struct kvm *kvm,
+ 	return 0;
+ }
+ 
++static void __kvm_s390_set_tod_clock(struct kvm *kvm, const struct kvm_s390_vm_tod_clock *gtod);
++
+ static int kvm_s390_set_tod_ext(struct kvm *kvm, struct kvm_device_attr *attr)
+ {
+ 	struct kvm_s390_vm_tod_clock gtod;
+@@ -1216,7 +1218,7 @@ static int kvm_s390_set_tod_ext(struct kvm *kvm, struct kvm_device_attr *attr)
+ 
+ 	if (!test_kvm_facility(kvm, 139) && gtod.epoch_idx)
+ 		return -EINVAL;
+-	kvm_s390_set_tod_clock(kvm, &gtod);
++	__kvm_s390_set_tod_clock(kvm, &gtod);
+ 
+ 	VM_EVENT(kvm, 3, "SET: TOD extension: 0x%x, TOD base: 0x%llx",
+ 		gtod.epoch_idx, gtod.tod);
+@@ -1247,7 +1249,7 @@ static int kvm_s390_set_tod_low(struct kvm *kvm, struct kvm_device_attr *attr)
+ 			   sizeof(gtod.tod)))
+ 		return -EFAULT;
+ 
+-	kvm_s390_set_tod_clock(kvm, &gtod);
++	__kvm_s390_set_tod_clock(kvm, &gtod);
+ 	VM_EVENT(kvm, 3, "SET: TOD base: 0x%llx", gtod.tod);
+ 	return 0;
+ }
+@@ -1259,6 +1261,16 @@ static int kvm_s390_set_tod(struct kvm *kvm, struct kvm_device_attr *attr)
+ 	if (attr->flags)
+ 		return -EINVAL;
+ 
++	mutex_lock(&kvm->lock);
++	/*
++	 * For protected guests, the TOD is managed by the ultravisor, so trying
++	 * to change it will never bring the expected results.
++	 */
++	if (kvm_s390_pv_is_protected(kvm)) {
++		ret = -EOPNOTSUPP;
++		goto out_unlock;
++	}
++
+ 	switch (attr->attr) {
+ 	case KVM_S390_VM_TOD_EXT:
+ 		ret = kvm_s390_set_tod_ext(kvm, attr);
+@@ -1273,6 +1285,9 @@ static int kvm_s390_set_tod(struct kvm *kvm, struct kvm_device_attr *attr)
+ 		ret = -ENXIO;
+ 		break;
+ 	}
++
++out_unlock:
++	mutex_unlock(&kvm->lock);
+ 	return ret;
+ }
+ 
+@@ -4379,13 +4394,6 @@ static void __kvm_s390_set_tod_clock(struct kvm *kvm, const struct kvm_s390_vm_t
+ 	preempt_enable();
+ }
+ 
+-void kvm_s390_set_tod_clock(struct kvm *kvm, const struct kvm_s390_vm_tod_clock *gtod)
+-{
+-	mutex_lock(&kvm->lock);
+-	__kvm_s390_set_tod_clock(kvm, gtod);
+-	mutex_unlock(&kvm->lock);
+-}
+-
+ int kvm_s390_try_set_tod_clock(struct kvm *kvm, const struct kvm_s390_vm_tod_clock *gtod)
+ {
+ 	if (!mutex_trylock(&kvm->lock))
+diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h
+index f6fd668f887e8..4755492dfabc6 100644
+--- a/arch/s390/kvm/kvm-s390.h
++++ b/arch/s390/kvm/kvm-s390.h
+@@ -363,7 +363,6 @@ int kvm_s390_handle_sigp(struct kvm_vcpu *vcpu);
+ int kvm_s390_handle_sigp_pei(struct kvm_vcpu *vcpu);
+ 
+ /* implemented in kvm-s390.c */
+-void kvm_s390_set_tod_clock(struct kvm *kvm, const struct kvm_s390_vm_tod_clock *gtod);
+ int kvm_s390_try_set_tod_clock(struct kvm *kvm, const struct kvm_s390_vm_tod_clock *gtod);
+ long kvm_arch_fault_in_page(struct kvm_vcpu *vcpu, gpa_t gpa, int writable);
+ int kvm_s390_store_status_unloaded(struct kvm_vcpu *vcpu, unsigned long addr);
+diff --git a/arch/s390/kvm/pci.c b/arch/s390/kvm/pci.c
+index c50c1645c0aec..ded1af2ddae99 100644
+--- a/arch/s390/kvm/pci.c
++++ b/arch/s390/kvm/pci.c
+@@ -126,7 +126,7 @@ int kvm_s390_pci_aen_init(u8 nisc)
+ 		return -EPERM;
+ 
+ 	mutex_lock(&aift->aift_lock);
+-	aift->kzdev = kcalloc(ZPCI_NR_DEVICES, sizeof(struct kvm_zdev),
++	aift->kzdev = kcalloc(ZPCI_NR_DEVICES, sizeof(struct kvm_zdev *),
+ 			      GFP_KERNEL);
+ 	if (!aift->kzdev) {
+ 		rc = -ENOMEM;
+diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
+index 1e086b37a3071..28e8e678c8357 100644
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -535,6 +535,11 @@
+ #define MSR_AMD64_CPUID_FN_1		0xc0011004
+ #define MSR_AMD64_LS_CFG		0xc0011020
+ #define MSR_AMD64_DC_CFG		0xc0011022
++
++#define MSR_AMD64_DE_CFG		0xc0011029
++#define MSR_AMD64_DE_CFG_LFENCE_SERIALIZE_BIT	 1
++#define MSR_AMD64_DE_CFG_LFENCE_SERIALIZE	BIT_ULL(MSR_AMD64_DE_CFG_LFENCE_SERIALIZE_BIT)
++
+ #define MSR_AMD64_BU_CFG2		0xc001102a
+ #define MSR_AMD64_IBSFETCHCTL		0xc0011030
+ #define MSR_AMD64_IBSFETCHLINAD		0xc0011031
+@@ -637,9 +642,6 @@
+ #define FAM10H_MMIO_CONF_BASE_MASK	0xfffffffULL
+ #define FAM10H_MMIO_CONF_BASE_SHIFT	20
+ #define MSR_FAM10H_NODE_ID		0xc001100c
+-#define MSR_F10H_DECFG			0xc0011029
+-#define MSR_F10H_DECFG_LFENCE_SERIALIZE_BIT	1
+-#define MSR_F10H_DECFG_LFENCE_SERIALIZE		BIT_ULL(MSR_F10H_DECFG_LFENCE_SERIALIZE_BIT)
+ 
+ /* K8 MSRs */
+ #define MSR_K8_TOP_MEM1			0xc001001a
+diff --git a/arch/x86/kernel/asm-offsets.c b/arch/x86/kernel/asm-offsets.c
+index cb50589a7102f..437308004ef2e 100644
+--- a/arch/x86/kernel/asm-offsets.c
++++ b/arch/x86/kernel/asm-offsets.c
+@@ -19,7 +19,6 @@
+ #include <asm/suspend.h>
+ #include <asm/tlbflush.h>
+ #include <asm/tdx.h>
+-#include "../kvm/vmx/vmx.h"
+ 
+ #ifdef CONFIG_XEN
+ #include <xen/interface/xen.h>
+@@ -108,9 +107,4 @@ static void __used common(void)
+ 	OFFSET(TSS_sp0, tss_struct, x86_tss.sp0);
+ 	OFFSET(TSS_sp1, tss_struct, x86_tss.sp1);
+ 	OFFSET(TSS_sp2, tss_struct, x86_tss.sp2);
+-
+-	if (IS_ENABLED(CONFIG_KVM_INTEL)) {
+-		BLANK();
+-		OFFSET(VMX_spec_ctrl, vcpu_vmx, spec_ctrl);
+-	}
+ }
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index 48276c0e479d8..500b1f9862b13 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -770,8 +770,6 @@ static void init_amd_gh(struct cpuinfo_x86 *c)
+ 		set_cpu_bug(c, X86_BUG_AMD_TLB_MMATCH);
+ }
+ 
+-#define MSR_AMD64_DE_CFG	0xC0011029
+-
+ static void init_amd_ln(struct cpuinfo_x86 *c)
+ {
+ 	/*
+@@ -965,8 +963,8 @@ static void init_amd(struct cpuinfo_x86 *c)
+ 		 * msr_set_bit() uses the safe accessors, too, even if the MSR
+ 		 * is not present.
+ 		 */
+-		msr_set_bit(MSR_F10H_DECFG,
+-			    MSR_F10H_DECFG_LFENCE_SERIALIZE_BIT);
++		msr_set_bit(MSR_AMD64_DE_CFG,
++			    MSR_AMD64_DE_CFG_LFENCE_SERIALIZE_BIT);
+ 
+ 		/* A serializing LFENCE stops RDTSC speculation */
+ 		set_cpu_cap(c, X86_FEATURE_LFENCE_RDTSC);
+diff --git a/arch/x86/kernel/cpu/hygon.c b/arch/x86/kernel/cpu/hygon.c
+index 21fd425088fe5..c393b8773ace6 100644
+--- a/arch/x86/kernel/cpu/hygon.c
++++ b/arch/x86/kernel/cpu/hygon.c
+@@ -326,8 +326,8 @@ static void init_hygon(struct cpuinfo_x86 *c)
+ 		 * msr_set_bit() uses the safe accessors, too, even if the MSR
+ 		 * is not present.
+ 		 */
+-		msr_set_bit(MSR_F10H_DECFG,
+-			    MSR_F10H_DECFG_LFENCE_SERIALIZE_BIT);
++		msr_set_bit(MSR_AMD64_DE_CFG,
++			    MSR_AMD64_DE_CFG_LFENCE_SERIALIZE_BIT);
+ 
+ 		/* A serializing LFENCE stops RDTSC speculation */
+ 		set_cpu_cap(c, X86_FEATURE_LFENCE_RDTSC);
+diff --git a/arch/x86/kvm/.gitignore b/arch/x86/kvm/.gitignore
+new file mode 100644
+index 0000000000000..615d6ff35c009
+--- /dev/null
++++ b/arch/x86/kvm/.gitignore
+@@ -0,0 +1,2 @@
++/kvm-asm-offsets.s
++/kvm-asm-offsets.h
+diff --git a/arch/x86/kvm/Makefile b/arch/x86/kvm/Makefile
+index 30f244b645234..f453a0f96e243 100644
+--- a/arch/x86/kvm/Makefile
++++ b/arch/x86/kvm/Makefile
+@@ -34,3 +34,15 @@ endif
+ obj-$(CONFIG_KVM)	+= kvm.o
+ obj-$(CONFIG_KVM_INTEL)	+= kvm-intel.o
+ obj-$(CONFIG_KVM_AMD)	+= kvm-amd.o
++
++AFLAGS_svm/vmenter.o    := -iquote $(obj)
++$(obj)/svm/vmenter.o: $(obj)/kvm-asm-offsets.h
++
++AFLAGS_vmx/vmenter.o    := -iquote $(obj)
++$(obj)/vmx/vmenter.o: $(obj)/kvm-asm-offsets.h
++
++$(obj)/kvm-asm-offsets.h: $(obj)/kvm-asm-offsets.s FORCE
++	$(call filechk,offsets,__KVM_ASM_OFFSETS_H__)
++
++targets += kvm-asm-offsets.s
++clean-files += kvm-asm-offsets.h
+diff --git a/arch/x86/kvm/kvm-asm-offsets.c b/arch/x86/kvm/kvm-asm-offsets.c
+new file mode 100644
+index 0000000000000..f83e88b85bf21
+--- /dev/null
++++ b/arch/x86/kvm/kvm-asm-offsets.c
+@@ -0,0 +1,27 @@
++// SPDX-License-Identifier: GPL-2.0
++/*
++ * Generate definitions needed by assembly language modules.
++ * This code generates raw asm output which is post-processed to extract
++ * and format the required data.
++ */
++#define COMPILE_OFFSETS
++
++#include <linux/kbuild.h>
++#include "vmx/vmx.h"
++#include "svm/svm.h"
++
++static void __used common(void)
++{
++	if (IS_ENABLED(CONFIG_KVM_AMD)) {
++		BLANK();
++		OFFSET(SVM_vcpu_arch_regs, vcpu_svm, vcpu.arch.regs);
++		OFFSET(SVM_current_vmcb, vcpu_svm, current_vmcb);
++		OFFSET(SVM_vmcb01, vcpu_svm, vmcb01);
++		OFFSET(KVM_VMCB_pa, kvm_vmcb_info, pa);
++	}
++
++	if (IS_ENABLED(CONFIG_KVM_INTEL)) {
++		BLANK();
++		OFFSET(VMX_spec_ctrl, vcpu_vmx, spec_ctrl);
++	}
++}
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 3552e6af36844..83e30e4db2ae0 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -6044,7 +6044,7 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
+ 
+ 	write_lock(&kvm->mmu_lock);
+ 
+-	kvm_mmu_invalidate_begin(kvm, gfn_start, gfn_end);
++	kvm_mmu_invalidate_begin(kvm, 0, -1ul);
+ 
+ 	flush = kvm_rmap_zap_gfn_range(kvm, gfn_start, gfn_end);
+ 
+@@ -6058,7 +6058,7 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
+ 		kvm_flush_remote_tlbs_with_address(kvm, gfn_start,
+ 						   gfn_end - gfn_start);
+ 
+-	kvm_mmu_invalidate_end(kvm, gfn_start, gfn_end);
++	kvm_mmu_invalidate_end(kvm, 0, -1ul);
+ 
+ 	write_unlock(&kvm->mmu_lock);
+ }
+diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
+index 28064060413ac..c9c9bd453a97d 100644
+--- a/arch/x86/kvm/svm/sev.c
++++ b/arch/x86/kvm/svm/sev.c
+@@ -605,7 +605,7 @@ static int sev_es_sync_vmsa(struct vcpu_svm *svm)
+ 	save->dr6  = svm->vcpu.arch.dr6;
+ 
+ 	pr_debug("Virtual Machine Save Area (VMSA):\n");
+-	print_hex_dump(KERN_CONT, "", DUMP_PREFIX_NONE, 16, 1, save, sizeof(*save), false);
++	print_hex_dump_debug("", DUMP_PREFIX_NONE, 16, 1, save, sizeof(*save), false);
+ 
+ 	return 0;
+ }
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index f3813dbacb9f1..454746641a483 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -2711,9 +2711,9 @@ static int svm_get_msr_feature(struct kvm_msr_entry *msr)
+ 	msr->data = 0;
+ 
+ 	switch (msr->index) {
+-	case MSR_F10H_DECFG:
+-		if (boot_cpu_has(X86_FEATURE_LFENCE_RDTSC))
+-			msr->data |= MSR_F10H_DECFG_LFENCE_SERIALIZE;
++	case MSR_AMD64_DE_CFG:
++		if (cpu_feature_enabled(X86_FEATURE_LFENCE_RDTSC))
++			msr->data |= MSR_AMD64_DE_CFG_LFENCE_SERIALIZE;
+ 		break;
+ 	case MSR_IA32_PERF_CAPABILITIES:
+ 		return 0;
+@@ -2814,7 +2814,7 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 			msr_info->data = 0x1E;
+ 		}
+ 		break;
+-	case MSR_F10H_DECFG:
++	case MSR_AMD64_DE_CFG:
+ 		msr_info->data = svm->msr_decfg;
+ 		break;
+ 	default:
+@@ -3043,7 +3043,7 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
+ 	case MSR_VM_IGNNE:
+ 		vcpu_unimpl(vcpu, "unimplemented wrmsr: 0x%x data 0x%llx\n", ecx, data);
+ 		break;
+-	case MSR_F10H_DECFG: {
++	case MSR_AMD64_DE_CFG: {
+ 		struct kvm_msr_entry msr_entry;
+ 
+ 		msr_entry.index = msr->index;
+@@ -3915,25 +3915,15 @@ static fastpath_t svm_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
+ static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu)
+ {
+ 	struct vcpu_svm *svm = to_svm(vcpu);
+-	unsigned long vmcb_pa = svm->current_vmcb->pa;
+ 
+ 	guest_state_enter_irqoff();
+ 
+ 	if (sev_es_guest(vcpu->kvm)) {
+-		__svm_sev_es_vcpu_run(vmcb_pa);
++		__svm_sev_es_vcpu_run(svm);
+ 	} else {
+ 		struct svm_cpu_data *sd = per_cpu(svm_data, vcpu->cpu);
+ 
+-		/*
+-		 * Use a single vmcb (vmcb01 because it's always valid) for
+-		 * context switching guest state via VMLOAD/VMSAVE, that way
+-		 * the state doesn't need to be copied between vmcb01 and
+-		 * vmcb02 when switching vmcbs for nested virtualization.
+-		 */
+-		vmload(svm->vmcb01.pa);
+-		__svm_vcpu_run(vmcb_pa, (unsigned long *)&vcpu->arch.regs);
+-		vmsave(svm->vmcb01.pa);
+-
++		__svm_vcpu_run(svm);
+ 		vmload(__sme_page_pa(sd->save_area));
+ 	}
+ 
+diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
+index 6a7686bf69000..7ff1879e73c56 100644
+--- a/arch/x86/kvm/svm/svm.h
++++ b/arch/x86/kvm/svm/svm.h
+@@ -683,7 +683,7 @@ void sev_es_unmap_ghcb(struct vcpu_svm *svm);
+ 
+ /* vmenter.S */
+ 
+-void __svm_sev_es_vcpu_run(unsigned long vmcb_pa);
+-void __svm_vcpu_run(unsigned long vmcb_pa, unsigned long *regs);
++void __svm_sev_es_vcpu_run(struct vcpu_svm *svm);
++void __svm_vcpu_run(struct vcpu_svm *svm);
+ 
+ #endif
+diff --git a/arch/x86/kvm/svm/vmenter.S b/arch/x86/kvm/svm/vmenter.S
+index 723f8534986c3..5bc2ed7d79c07 100644
+--- a/arch/x86/kvm/svm/vmenter.S
++++ b/arch/x86/kvm/svm/vmenter.S
+@@ -4,35 +4,37 @@
+ #include <asm/bitsperlong.h>
+ #include <asm/kvm_vcpu_regs.h>
+ #include <asm/nospec-branch.h>
++#include "kvm-asm-offsets.h"
+ 
+ #define WORD_SIZE (BITS_PER_LONG / 8)
+ 
+ /* Intentionally omit RAX as it's context switched by hardware */
+-#define VCPU_RCX	__VCPU_REGS_RCX * WORD_SIZE
+-#define VCPU_RDX	__VCPU_REGS_RDX * WORD_SIZE
+-#define VCPU_RBX	__VCPU_REGS_RBX * WORD_SIZE
++#define VCPU_RCX	(SVM_vcpu_arch_regs + __VCPU_REGS_RCX * WORD_SIZE)
++#define VCPU_RDX	(SVM_vcpu_arch_regs + __VCPU_REGS_RDX * WORD_SIZE)
++#define VCPU_RBX	(SVM_vcpu_arch_regs + __VCPU_REGS_RBX * WORD_SIZE)
+ /* Intentionally omit RSP as it's context switched by hardware */
+-#define VCPU_RBP	__VCPU_REGS_RBP * WORD_SIZE
+-#define VCPU_RSI	__VCPU_REGS_RSI * WORD_SIZE
+-#define VCPU_RDI	__VCPU_REGS_RDI * WORD_SIZE
++#define VCPU_RBP	(SVM_vcpu_arch_regs + __VCPU_REGS_RBP * WORD_SIZE)
++#define VCPU_RSI	(SVM_vcpu_arch_regs + __VCPU_REGS_RSI * WORD_SIZE)
++#define VCPU_RDI	(SVM_vcpu_arch_regs + __VCPU_REGS_RDI * WORD_SIZE)
+ 
+ #ifdef CONFIG_X86_64
+-#define VCPU_R8		__VCPU_REGS_R8  * WORD_SIZE
+-#define VCPU_R9		__VCPU_REGS_R9  * WORD_SIZE
+-#define VCPU_R10	__VCPU_REGS_R10 * WORD_SIZE
+-#define VCPU_R11	__VCPU_REGS_R11 * WORD_SIZE
+-#define VCPU_R12	__VCPU_REGS_R12 * WORD_SIZE
+-#define VCPU_R13	__VCPU_REGS_R13 * WORD_SIZE
+-#define VCPU_R14	__VCPU_REGS_R14 * WORD_SIZE
+-#define VCPU_R15	__VCPU_REGS_R15 * WORD_SIZE
++#define VCPU_R8		(SVM_vcpu_arch_regs + __VCPU_REGS_R8  * WORD_SIZE)
++#define VCPU_R9		(SVM_vcpu_arch_regs + __VCPU_REGS_R9  * WORD_SIZE)
++#define VCPU_R10	(SVM_vcpu_arch_regs + __VCPU_REGS_R10 * WORD_SIZE)
++#define VCPU_R11	(SVM_vcpu_arch_regs + __VCPU_REGS_R11 * WORD_SIZE)
++#define VCPU_R12	(SVM_vcpu_arch_regs + __VCPU_REGS_R12 * WORD_SIZE)
++#define VCPU_R13	(SVM_vcpu_arch_regs + __VCPU_REGS_R13 * WORD_SIZE)
++#define VCPU_R14	(SVM_vcpu_arch_regs + __VCPU_REGS_R14 * WORD_SIZE)
++#define VCPU_R15	(SVM_vcpu_arch_regs + __VCPU_REGS_R15 * WORD_SIZE)
+ #endif
+ 
++#define SVM_vmcb01_pa	(SVM_vmcb01 + KVM_VMCB_pa)
++
+ .section .noinstr.text, "ax"
+ 
+ /**
+  * __svm_vcpu_run - Run a vCPU via a transition to SVM guest mode
+- * @vmcb_pa:	unsigned long
+- * @regs:	unsigned long * (to guest registers)
++ * @svm:	struct vcpu_svm *
+  */
+ SYM_FUNC_START(__svm_vcpu_run)
+ 	push %_ASM_BP
+@@ -47,49 +49,54 @@ SYM_FUNC_START(__svm_vcpu_run)
+ #endif
+ 	push %_ASM_BX
+ 
+-	/* Save @regs. */
+-	push %_ASM_ARG2
+-
+-	/* Save @vmcb. */
++	/* Save @svm. */
+ 	push %_ASM_ARG1
+ 
+-	/* Move @regs to RAX. */
+-	mov %_ASM_ARG2, %_ASM_AX
++.ifnc _ASM_ARG1, _ASM_DI
++	/* Move @svm to RDI. */
++	mov %_ASM_ARG1, %_ASM_DI
++.endif
++
++	/*
++	 * Use a single vmcb (vmcb01 because it's always valid) for
++	 * context switching guest state via VMLOAD/VMSAVE, that way
++	 * the state doesn't need to be copied between vmcb01 and
++	 * vmcb02 when switching vmcbs for nested virtualization.
++	 */
++	mov SVM_vmcb01_pa(%_ASM_DI), %_ASM_AX
++1:	vmload %_ASM_AX
++2:
++
++	/* Get svm->current_vmcb->pa into RAX. */
++	mov SVM_current_vmcb(%_ASM_DI), %_ASM_AX
++	mov KVM_VMCB_pa(%_ASM_AX), %_ASM_AX
+ 
+ 	/* Load guest registers. */
+-	mov VCPU_RCX(%_ASM_AX), %_ASM_CX
+-	mov VCPU_RDX(%_ASM_AX), %_ASM_DX
+-	mov VCPU_RBX(%_ASM_AX), %_ASM_BX
+-	mov VCPU_RBP(%_ASM_AX), %_ASM_BP
+-	mov VCPU_RSI(%_ASM_AX), %_ASM_SI
+-	mov VCPU_RDI(%_ASM_AX), %_ASM_DI
++	mov VCPU_RCX(%_ASM_DI), %_ASM_CX
++	mov VCPU_RDX(%_ASM_DI), %_ASM_DX
++	mov VCPU_RBX(%_ASM_DI), %_ASM_BX
++	mov VCPU_RBP(%_ASM_DI), %_ASM_BP
++	mov VCPU_RSI(%_ASM_DI), %_ASM_SI
+ #ifdef CONFIG_X86_64
+-	mov VCPU_R8 (%_ASM_AX),  %r8
+-	mov VCPU_R9 (%_ASM_AX),  %r9
+-	mov VCPU_R10(%_ASM_AX), %r10
+-	mov VCPU_R11(%_ASM_AX), %r11
+-	mov VCPU_R12(%_ASM_AX), %r12
+-	mov VCPU_R13(%_ASM_AX), %r13
+-	mov VCPU_R14(%_ASM_AX), %r14
+-	mov VCPU_R15(%_ASM_AX), %r15
++	mov VCPU_R8 (%_ASM_DI),  %r8
++	mov VCPU_R9 (%_ASM_DI),  %r9
++	mov VCPU_R10(%_ASM_DI), %r10
++	mov VCPU_R11(%_ASM_DI), %r11
++	mov VCPU_R12(%_ASM_DI), %r12
++	mov VCPU_R13(%_ASM_DI), %r13
++	mov VCPU_R14(%_ASM_DI), %r14
++	mov VCPU_R15(%_ASM_DI), %r15
+ #endif
+-
+-	/* "POP" @vmcb to RAX. */
+-	pop %_ASM_AX
++	mov VCPU_RDI(%_ASM_DI), %_ASM_DI
+ 
+ 	/* Enter guest mode */
+ 	sti
+ 
+-1:	vmrun %_ASM_AX
+-
+-2:	cli
++3:	vmrun %_ASM_AX
++4:
++	cli
+ 
+-#ifdef CONFIG_RETPOLINE
+-	/* IMPORTANT: Stuff the RSB immediately after VM-Exit, before RET! */
+-	FILL_RETURN_BUFFER %_ASM_AX, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE
+-#endif
+-
+-	/* "POP" @regs to RAX. */
++	/* Pop @svm to RAX while it's the only available register. */
+ 	pop %_ASM_AX
+ 
+ 	/* Save all guest registers.  */
+@@ -110,6 +117,18 @@ SYM_FUNC_START(__svm_vcpu_run)
+ 	mov %r15, VCPU_R15(%_ASM_AX)
+ #endif
+ 
++	/* @svm can stay in RDI from now on.  */
++	mov %_ASM_AX, %_ASM_DI
++
++	mov SVM_vmcb01_pa(%_ASM_DI), %_ASM_AX
++5:	vmsave %_ASM_AX
++6:
++
++#ifdef CONFIG_RETPOLINE
++	/* IMPORTANT: Stuff the RSB immediately after VM-Exit, before RET! */
++	FILL_RETURN_BUFFER %_ASM_AX, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE
++#endif
++
+ 	/*
+ 	 * Mitigate RETBleed for AMD/Hygon Zen uarch. RET should be
+ 	 * untrained as soon as we exit the VM and are back to the
+@@ -159,17 +178,25 @@ SYM_FUNC_START(__svm_vcpu_run)
+ 	pop %_ASM_BP
+ 	RET
+ 
+-3:	cmpb $0, kvm_rebooting
++10:	cmpb $0, kvm_rebooting
+ 	jne 2b
+ 	ud2
++30:	cmpb $0, kvm_rebooting
++	jne 4b
++	ud2
++50:	cmpb $0, kvm_rebooting
++	jne 6b
++	ud2
+ 
+-	_ASM_EXTABLE(1b, 3b)
++	_ASM_EXTABLE(1b, 10b)
++	_ASM_EXTABLE(3b, 30b)
++	_ASM_EXTABLE(5b, 50b)
+ 
+ SYM_FUNC_END(__svm_vcpu_run)
+ 
+ /**
+  * __svm_sev_es_vcpu_run - Run a SEV-ES vCPU via a transition to SVM guest mode
+- * @vmcb_pa:	unsigned long
++ * @svm:	struct vcpu_svm *
+  */
+ SYM_FUNC_START(__svm_sev_es_vcpu_run)
+ 	push %_ASM_BP
+@@ -184,8 +211,9 @@ SYM_FUNC_START(__svm_sev_es_vcpu_run)
+ #endif
+ 	push %_ASM_BX
+ 
+-	/* Move @vmcb to RAX. */
+-	mov %_ASM_ARG1, %_ASM_AX
++	/* Get svm->current_vmcb->pa into RAX. */
++	mov SVM_current_vmcb(%_ASM_ARG1), %_ASM_AX
++	mov KVM_VMCB_pa(%_ASM_AX), %_ASM_AX
+ 
+ 	/* Enter guest mode */
+ 	sti
+diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
+index 6de96b9438044..660165065dfe8 100644
+--- a/arch/x86/kvm/vmx/vmenter.S
++++ b/arch/x86/kvm/vmx/vmenter.S
+@@ -1,12 +1,12 @@
+ /* SPDX-License-Identifier: GPL-2.0 */
+ #include <linux/linkage.h>
+ #include <asm/asm.h>
+-#include <asm/asm-offsets.h>
+ #include <asm/bitsperlong.h>
+ #include <asm/kvm_vcpu_regs.h>
+ #include <asm/nospec-branch.h>
+ #include <asm/percpu.h>
+ #include <asm/segment.h>
++#include "kvm-asm-offsets.h"
+ #include "run_flags.h"
+ 
+ #define WORD_SIZE (BITS_PER_LONG / 8)
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 05f4424eb0c52..71cbafd67319b 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -1431,20 +1431,10 @@ static const u32 msrs_to_save_all[] = {
+ 	MSR_ARCH_PERFMON_PERFCTR0 + 2, MSR_ARCH_PERFMON_PERFCTR0 + 3,
+ 	MSR_ARCH_PERFMON_PERFCTR0 + 4, MSR_ARCH_PERFMON_PERFCTR0 + 5,
+ 	MSR_ARCH_PERFMON_PERFCTR0 + 6, MSR_ARCH_PERFMON_PERFCTR0 + 7,
+-	MSR_ARCH_PERFMON_PERFCTR0 + 8, MSR_ARCH_PERFMON_PERFCTR0 + 9,
+-	MSR_ARCH_PERFMON_PERFCTR0 + 10, MSR_ARCH_PERFMON_PERFCTR0 + 11,
+-	MSR_ARCH_PERFMON_PERFCTR0 + 12, MSR_ARCH_PERFMON_PERFCTR0 + 13,
+-	MSR_ARCH_PERFMON_PERFCTR0 + 14, MSR_ARCH_PERFMON_PERFCTR0 + 15,
+-	MSR_ARCH_PERFMON_PERFCTR0 + 16, MSR_ARCH_PERFMON_PERFCTR0 + 17,
+ 	MSR_ARCH_PERFMON_EVENTSEL0, MSR_ARCH_PERFMON_EVENTSEL1,
+ 	MSR_ARCH_PERFMON_EVENTSEL0 + 2, MSR_ARCH_PERFMON_EVENTSEL0 + 3,
+ 	MSR_ARCH_PERFMON_EVENTSEL0 + 4, MSR_ARCH_PERFMON_EVENTSEL0 + 5,
+ 	MSR_ARCH_PERFMON_EVENTSEL0 + 6, MSR_ARCH_PERFMON_EVENTSEL0 + 7,
+-	MSR_ARCH_PERFMON_EVENTSEL0 + 8, MSR_ARCH_PERFMON_EVENTSEL0 + 9,
+-	MSR_ARCH_PERFMON_EVENTSEL0 + 10, MSR_ARCH_PERFMON_EVENTSEL0 + 11,
+-	MSR_ARCH_PERFMON_EVENTSEL0 + 12, MSR_ARCH_PERFMON_EVENTSEL0 + 13,
+-	MSR_ARCH_PERFMON_EVENTSEL0 + 14, MSR_ARCH_PERFMON_EVENTSEL0 + 15,
+-	MSR_ARCH_PERFMON_EVENTSEL0 + 16, MSR_ARCH_PERFMON_EVENTSEL0 + 17,
+ 	MSR_IA32_PEBS_ENABLE, MSR_IA32_DS_AREA, MSR_PEBS_DATA_CFG,
+ 
+ 	MSR_K7_EVNTSEL0, MSR_K7_EVNTSEL1, MSR_K7_EVNTSEL2, MSR_K7_EVNTSEL3,
+@@ -1551,7 +1541,7 @@ static const u32 msr_based_features_all[] = {
+ 	MSR_IA32_VMX_EPT_VPID_CAP,
+ 	MSR_IA32_VMX_VMFUNC,
+ 
+-	MSR_F10H_DECFG,
++	MSR_AMD64_DE_CFG,
+ 	MSR_IA32_UCODE_REV,
+ 	MSR_IA32_ARCH_CAPABILITIES,
+ 	MSR_IA32_PERF_CAPABILITIES,
+@@ -7005,12 +6995,12 @@ static void kvm_init_msr_list(void)
+ 				intel_pt_validate_hw_cap(PT_CAP_num_address_ranges) * 2)
+ 				continue;
+ 			break;
+-		case MSR_ARCH_PERFMON_PERFCTR0 ... MSR_ARCH_PERFMON_PERFCTR0 + 17:
++		case MSR_ARCH_PERFMON_PERFCTR0 ... MSR_ARCH_PERFMON_PERFCTR0 + 7:
+ 			if (msrs_to_save_all[i] - MSR_ARCH_PERFMON_PERFCTR0 >=
+ 			    min(INTEL_PMC_MAX_GENERIC, kvm_pmu_cap.num_counters_gp))
+ 				continue;
+ 			break;
+-		case MSR_ARCH_PERFMON_EVENTSEL0 ... MSR_ARCH_PERFMON_EVENTSEL0 + 17:
++		case MSR_ARCH_PERFMON_EVENTSEL0 ... MSR_ARCH_PERFMON_EVENTSEL0 + 7:
+ 			if (msrs_to_save_all[i] - MSR_ARCH_PERFMON_EVENTSEL0 >=
+ 			    min(INTEL_PMC_MAX_GENERIC, kvm_pmu_cap.num_counters_gp))
+ 				continue;
+diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
+index 6b3033845c6d3..5804bbae4f012 100644
+--- a/arch/x86/mm/hugetlbpage.c
++++ b/arch/x86/mm/hugetlbpage.c
+@@ -37,8 +37,12 @@ int pmd_huge(pmd_t pmd)
+  */
+ int pud_huge(pud_t pud)
+ {
++#if CONFIG_PGTABLE_LEVELS > 2
+ 	return !pud_none(pud) &&
+ 		(pud_val(pud) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;
++#else
++	return 0;
++#endif
+ }
+ 
+ #ifdef CONFIG_HUGETLB_PAGE
+diff --git a/arch/x86/power/cpu.c b/arch/x86/power/cpu.c
+index bb176c72891c9..4cd39f304e206 100644
+--- a/arch/x86/power/cpu.c
++++ b/arch/x86/power/cpu.c
+@@ -519,6 +519,7 @@ static void pm_save_spec_msr(void)
+ 		MSR_TSX_FORCE_ABORT,
+ 		MSR_IA32_MCU_OPT_CTRL,
+ 		MSR_AMD64_LS_CFG,
++		MSR_AMD64_DE_CFG,
+ 	};
+ 
+ 	msr_build_context(spec_msr_id, ARRAY_SIZE(spec_msr_id));
+diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
+index ff9602a0e54ef..b0e442a75690a 100644
+--- a/drivers/ata/libata-scsi.c
++++ b/drivers/ata/libata-scsi.c
+@@ -3266,6 +3266,7 @@ static unsigned int ata_scsiop_maint_in(struct ata_scsi_args *args, u8 *rbuf)
+ 	case REPORT_LUNS:
+ 	case REQUEST_SENSE:
+ 	case SYNCHRONIZE_CACHE:
++	case SYNCHRONIZE_CACHE_16:
+ 	case REZERO_UNIT:
+ 	case SEEK_6:
+ 	case SEEK_10:
+@@ -3924,6 +3925,7 @@ static inline ata_xlat_func_t ata_get_xlat_func(struct ata_device *dev, u8 cmd)
+ 		return ata_scsi_write_same_xlat;
+ 
+ 	case SYNCHRONIZE_CACHE:
++	case SYNCHRONIZE_CACHE_16:
+ 		if (ata_try_flush_cache(dev))
+ 			return ata_scsi_flush_xlat;
+ 		break;
+@@ -4147,6 +4149,7 @@ void ata_scsi_simulate(struct ata_device *dev, struct scsi_cmnd *cmd)
+ 	 * turning this into a no-op.
+ 	 */
+ 	case SYNCHRONIZE_CACHE:
++	case SYNCHRONIZE_CACHE_16:
+ 		fallthrough;
+ 
+ 	/* no-op's, complete with success */
+diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
+index 6b7fb955a05ac..78344e4d4215b 100644
+--- a/drivers/cxl/core/region.c
++++ b/drivers/cxl/core/region.c
+@@ -1533,9 +1533,24 @@ static const struct attribute_group *region_groups[] = {
+ 
+ static void cxl_region_release(struct device *dev)
+ {
++	struct cxl_root_decoder *cxlrd = to_cxl_root_decoder(dev->parent);
+ 	struct cxl_region *cxlr = to_cxl_region(dev);
++	int id = atomic_read(&cxlrd->region_id);
++
++	/*
++	 * Try to reuse the recently idled id rather than the cached
++	 * next id to prevent the region id space from increasing
++	 * unnecessarily.
++	 */
++	if (cxlr->id < id)
++		if (atomic_try_cmpxchg(&cxlrd->region_id, &id, cxlr->id)) {
++			memregion_free(id);
++			goto out;
++		}
+ 
+ 	memregion_free(cxlr->id);
++out:
++	put_device(dev->parent);
+ 	kfree(cxlr);
+ }
+ 
+@@ -1597,6 +1612,11 @@ static struct cxl_region *cxl_region_alloc(struct cxl_root_decoder *cxlrd, int i
+ 	device_initialize(dev);
+ 	lockdep_set_class(&dev->mutex, &cxl_region_key);
+ 	dev->parent = &cxlrd->cxlsd.cxld.dev;
++	/*
++	 * Keep root decoder pinned through cxl_region_release to fixup
++	 * region id allocations
++	 */
++	get_device(dev->parent);
+ 	device_set_pm_not_required(dev);
+ 	dev->bus = &cxl_bus_type;
+ 	dev->type = &cxl_region_type;
+diff --git a/drivers/dma/apple-admac.c b/drivers/dma/apple-admac.c
+index d1f74a3aa999d..6780761a16403 100644
+--- a/drivers/dma/apple-admac.c
++++ b/drivers/dma/apple-admac.c
+@@ -490,7 +490,7 @@ static struct dma_chan *admac_dma_of_xlate(struct of_phandle_args *dma_spec,
+ 		return NULL;
+ 	}
+ 
+-	return &ad->channels[index].chan;
++	return dma_get_slave_channel(&ad->channels[index].chan);
+ }
+ 
+ static int admac_drain_reports(struct admac_data *ad, int channo)
+diff --git a/drivers/dma/at_hdmac.c b/drivers/dma/at_hdmac.c
+index 5a50423b7378e..858bd64f13135 100644
+--- a/drivers/dma/at_hdmac.c
++++ b/drivers/dma/at_hdmac.c
+@@ -256,6 +256,8 @@ static void atc_dostart(struct at_dma_chan *atchan, struct at_desc *first)
+ 		       ATC_SPIP_BOUNDARY(first->boundary));
+ 	channel_writel(atchan, DPIP, ATC_DPIP_HOLE(first->dst_hole) |
+ 		       ATC_DPIP_BOUNDARY(first->boundary));
++	/* Don't allow CPU to reorder channel enable. */
++	wmb();
+ 	dma_writel(atdma, CHER, atchan->mask);
+ 
+ 	vdbg_dump_regs(atchan);
+@@ -316,7 +318,8 @@ static int atc_get_bytes_left(struct dma_chan *chan, dma_cookie_t cookie)
+ 	struct at_desc *desc_first = atc_first_active(atchan);
+ 	struct at_desc *desc;
+ 	int ret;
+-	u32 ctrla, dscr, trials;
++	u32 ctrla, dscr;
++	unsigned int i;
+ 
+ 	/*
+ 	 * If the cookie doesn't match to the currently running transfer then
+@@ -386,7 +389,7 @@ static int atc_get_bytes_left(struct dma_chan *chan, dma_cookie_t cookie)
+ 		dscr = channel_readl(atchan, DSCR);
+ 		rmb(); /* ensure DSCR is read before CTRLA */
+ 		ctrla = channel_readl(atchan, CTRLA);
+-		for (trials = 0; trials < ATC_MAX_DSCR_TRIALS; ++trials) {
++		for (i = 0; i < ATC_MAX_DSCR_TRIALS; ++i) {
+ 			u32 new_dscr;
+ 
+ 			rmb(); /* ensure DSCR is read after CTRLA */
+@@ -412,7 +415,7 @@ static int atc_get_bytes_left(struct dma_chan *chan, dma_cookie_t cookie)
+ 			rmb(); /* ensure DSCR is read before CTRLA */
+ 			ctrla = channel_readl(atchan, CTRLA);
+ 		}
+-		if (unlikely(trials >= ATC_MAX_DSCR_TRIALS))
++		if (unlikely(i == ATC_MAX_DSCR_TRIALS))
+ 			return -ETIMEDOUT;
+ 
+ 		/* for the first descriptor we can be more accurate */
+@@ -462,18 +465,6 @@ atc_chain_complete(struct at_dma_chan *atchan, struct at_desc *desc)
+ 	if (!atc_chan_is_cyclic(atchan))
+ 		dma_cookie_complete(txd);
+ 
+-	/* If the transfer was a memset, free our temporary buffer */
+-	if (desc->memset_buffer) {
+-		dma_pool_free(atdma->memset_pool, desc->memset_vaddr,
+-			      desc->memset_paddr);
+-		desc->memset_buffer = false;
+-	}
+-
+-	/* move children to free_list */
+-	list_splice_init(&desc->tx_list, &atchan->free_list);
+-	/* move myself to free_list */
+-	list_move(&desc->desc_node, &atchan->free_list);
+-
+ 	spin_unlock_irqrestore(&atchan->lock, flags);
+ 
+ 	dma_descriptor_unmap(txd);
+@@ -483,42 +474,20 @@ atc_chain_complete(struct at_dma_chan *atchan, struct at_desc *desc)
+ 		dmaengine_desc_get_callback_invoke(txd, NULL);
+ 
+ 	dma_run_dependencies(txd);
+-}
+-
+-/**
+- * atc_complete_all - finish work for all transactions
+- * @atchan: channel to complete transactions for
+- *
+- * Eventually submit queued descriptors if any
+- *
+- * Assume channel is idle while calling this function
+- * Called with atchan->lock held and bh disabled
+- */
+-static void atc_complete_all(struct at_dma_chan *atchan)
+-{
+-	struct at_desc *desc, *_desc;
+-	LIST_HEAD(list);
+-	unsigned long flags;
+-
+-	dev_vdbg(chan2dev(&atchan->chan_common), "complete all\n");
+ 
+ 	spin_lock_irqsave(&atchan->lock, flags);
+-
+-	/*
+-	 * Submit queued descriptors ASAP, i.e. before we go through
+-	 * the completed ones.
+-	 */
+-	if (!list_empty(&atchan->queue))
+-		atc_dostart(atchan, atc_first_queued(atchan));
+-	/* empty active_list now it is completed */
+-	list_splice_init(&atchan->active_list, &list);
+-	/* empty queue list by moving descriptors (if any) to active_list */
+-	list_splice_init(&atchan->queue, &atchan->active_list);
+-
++	/* move children to free_list */
++	list_splice_init(&desc->tx_list, &atchan->free_list);
++	/* add myself to free_list */
++	list_add(&desc->desc_node, &atchan->free_list);
+ 	spin_unlock_irqrestore(&atchan->lock, flags);
+ 
+-	list_for_each_entry_safe(desc, _desc, &list, desc_node)
+-		atc_chain_complete(atchan, desc);
++	/* If the transfer was a memset, free our temporary buffer */
++	if (desc->memset_buffer) {
++		dma_pool_free(atdma->memset_pool, desc->memset_vaddr,
++			      desc->memset_paddr);
++		desc->memset_buffer = false;
++	}
+ }
+ 
+ /**
+@@ -527,26 +496,28 @@ static void atc_complete_all(struct at_dma_chan *atchan)
+  */
+ static void atc_advance_work(struct at_dma_chan *atchan)
+ {
++	struct at_desc *desc;
+ 	unsigned long flags;
+-	int ret;
+ 
+ 	dev_vdbg(chan2dev(&atchan->chan_common), "advance_work\n");
+ 
+ 	spin_lock_irqsave(&atchan->lock, flags);
+-	ret = atc_chan_is_enabled(atchan);
+-	spin_unlock_irqrestore(&atchan->lock, flags);
+-	if (ret)
+-		return;
+-
+-	if (list_empty(&atchan->active_list) ||
+-	    list_is_singular(&atchan->active_list))
+-		return atc_complete_all(atchan);
++	if (atc_chan_is_enabled(atchan) || list_empty(&atchan->active_list))
++		return spin_unlock_irqrestore(&atchan->lock, flags);
+ 
+-	atc_chain_complete(atchan, atc_first_active(atchan));
++	desc = atc_first_active(atchan);
++	/* Remove the transfer node from the active list. */
++	list_del_init(&desc->desc_node);
++	spin_unlock_irqrestore(&atchan->lock, flags);
++	atc_chain_complete(atchan, desc);
+ 
+ 	/* advance work */
+ 	spin_lock_irqsave(&atchan->lock, flags);
+-	atc_dostart(atchan, atc_first_active(atchan));
++	if (!list_empty(&atchan->active_list)) {
++		desc = atc_first_queued(atchan);
++		list_move_tail(&desc->desc_node, &atchan->active_list);
++		atc_dostart(atchan, desc);
++	}
+ 	spin_unlock_irqrestore(&atchan->lock, flags);
+ }
+ 
+@@ -558,6 +529,7 @@ static void atc_advance_work(struct at_dma_chan *atchan)
+ static void atc_handle_error(struct at_dma_chan *atchan)
+ {
+ 	struct at_desc *bad_desc;
++	struct at_desc *desc;
+ 	struct at_desc *child;
+ 	unsigned long flags;
+ 
+@@ -570,13 +542,12 @@ static void atc_handle_error(struct at_dma_chan *atchan)
+ 	bad_desc = atc_first_active(atchan);
+ 	list_del_init(&bad_desc->desc_node);
+ 
+-	/* As we are stopped, take advantage to push queued descriptors
+-	 * in active_list */
+-	list_splice_init(&atchan->queue, atchan->active_list.prev);
+-
+ 	/* Try to restart the controller */
+-	if (!list_empty(&atchan->active_list))
+-		atc_dostart(atchan, atc_first_active(atchan));
++	if (!list_empty(&atchan->active_list)) {
++		desc = atc_first_queued(atchan);
++		list_move_tail(&desc->desc_node, &atchan->active_list);
++		atc_dostart(atchan, desc);
++	}
+ 
+ 	/*
+ 	 * KERN_CRITICAL may seem harsh, but since this only happens
+@@ -691,19 +662,11 @@ static dma_cookie_t atc_tx_submit(struct dma_async_tx_descriptor *tx)
+ 	spin_lock_irqsave(&atchan->lock, flags);
+ 	cookie = dma_cookie_assign(tx);
+ 
+-	if (list_empty(&atchan->active_list)) {
+-		dev_vdbg(chan2dev(tx->chan), "tx_submit: started %u\n",
+-				desc->txd.cookie);
+-		atc_dostart(atchan, desc);
+-		list_add_tail(&desc->desc_node, &atchan->active_list);
+-	} else {
+-		dev_vdbg(chan2dev(tx->chan), "tx_submit: queued %u\n",
+-				desc->txd.cookie);
+-		list_add_tail(&desc->desc_node, &atchan->queue);
+-	}
+-
++	list_add_tail(&desc->desc_node, &atchan->queue);
+ 	spin_unlock_irqrestore(&atchan->lock, flags);
+ 
++	dev_vdbg(chan2dev(tx->chan), "tx_submit: queued %u\n",
++		 desc->txd.cookie);
+ 	return cookie;
+ }
+ 
+@@ -1445,11 +1408,8 @@ static int atc_terminate_all(struct dma_chan *chan)
+ 	struct at_dma_chan	*atchan = to_at_dma_chan(chan);
+ 	struct at_dma		*atdma = to_at_dma(chan->device);
+ 	int			chan_id = atchan->chan_common.chan_id;
+-	struct at_desc		*desc, *_desc;
+ 	unsigned long		flags;
+ 
+-	LIST_HEAD(list);
+-
+ 	dev_vdbg(chan2dev(chan), "%s\n", __func__);
+ 
+ 	/*
+@@ -1468,19 +1428,15 @@ static int atc_terminate_all(struct dma_chan *chan)
+ 		cpu_relax();
+ 
+ 	/* active_list entries will end up before queued entries */
+-	list_splice_init(&atchan->queue, &list);
+-	list_splice_init(&atchan->active_list, &list);
+-
+-	spin_unlock_irqrestore(&atchan->lock, flags);
+-
+-	/* Flush all pending and queued descriptors */
+-	list_for_each_entry_safe(desc, _desc, &list, desc_node)
+-		atc_chain_complete(atchan, desc);
++	list_splice_tail_init(&atchan->queue, &atchan->free_list);
++	list_splice_tail_init(&atchan->active_list, &atchan->free_list);
+ 
+ 	clear_bit(ATC_IS_PAUSED, &atchan->status);
+ 	/* if channel dedicated to cyclic operations, free it */
+ 	clear_bit(ATC_IS_CYCLIC, &atchan->status);
+ 
++	spin_unlock_irqrestore(&atchan->lock, flags);
++
+ 	return 0;
+ }
+ 
+@@ -1535,20 +1491,26 @@ atc_tx_status(struct dma_chan *chan,
+ }
+ 
+ /**
+- * atc_issue_pending - try to finish work
++ * atc_issue_pending - takes the first transaction descriptor in the pending
++ * queue and starts the transfer.
+  * @chan: target DMA channel
+  */
+ static void atc_issue_pending(struct dma_chan *chan)
+ {
+-	struct at_dma_chan	*atchan = to_at_dma_chan(chan);
++	struct at_dma_chan *atchan = to_at_dma_chan(chan);
++	struct at_desc *desc;
++	unsigned long flags;
+ 
+ 	dev_vdbg(chan2dev(chan), "issue_pending\n");
+ 
+-	/* Not needed for cyclic transfers */
+-	if (atc_chan_is_cyclic(atchan))
+-		return;
++	spin_lock_irqsave(&atchan->lock, flags);
++	if (atc_chan_is_enabled(atchan) || list_empty(&atchan->queue))
++		return spin_unlock_irqrestore(&atchan->lock, flags);
+ 
+-	atc_advance_work(atchan);
++	desc = atc_first_queued(atchan);
++	list_move_tail(&desc->desc_node, &atchan->active_list);
++	atc_dostart(atchan, desc);
++	spin_unlock_irqrestore(&atchan->lock, flags);
+ }
+ 
+ /**
+@@ -1966,7 +1928,11 @@ static int __init at_dma_probe(struct platform_device *pdev)
+ 	  dma_has_cap(DMA_SLAVE, atdma->dma_common.cap_mask)  ? "slave " : "",
+ 	  plat_dat->nr_channels);
+ 
+-	dma_async_device_register(&atdma->dma_common);
++	err = dma_async_device_register(&atdma->dma_common);
++	if (err) {
++		dev_err(&pdev->dev, "Unable to register: %d.\n", err);
++		goto err_dma_async_device_register;
++	}
+ 
+ 	/*
+ 	 * Do not return an error if the dmac node is not present in order to
+@@ -1986,6 +1952,7 @@ static int __init at_dma_probe(struct platform_device *pdev)
+ 
+ err_of_dma_controller_register:
+ 	dma_async_device_unregister(&atdma->dma_common);
++err_dma_async_device_register:
+ 	dma_pool_destroy(atdma->memset_pool);
+ err_memset_pool_create:
+ 	dma_pool_destroy(atdma->dma_desc_pool);
+diff --git a/drivers/dma/at_hdmac_regs.h b/drivers/dma/at_hdmac_regs.h
+index 4d1ebc040031c..d4d382d746078 100644
+--- a/drivers/dma/at_hdmac_regs.h
++++ b/drivers/dma/at_hdmac_regs.h
+@@ -186,13 +186,13 @@
+ /* LLI == Linked List Item; aka DMA buffer descriptor */
+ struct at_lli {
+ 	/* values that are not changed by hardware */
+-	dma_addr_t	saddr;
+-	dma_addr_t	daddr;
++	u32 saddr;
++	u32 daddr;
+ 	/* value that may get written back: */
+-	u32		ctrla;
++	u32 ctrla;
+ 	/* more values that are not changed by hardware */
+-	u32		ctrlb;
+-	dma_addr_t	dscr;	/* chain to next lli */
++	u32 ctrlb;
++	u32 dscr;	/* chain to next lli */
+ };
+ 
+ /**
+diff --git a/drivers/dma/idxd/cdev.c b/drivers/dma/idxd/cdev.c
+index c2808fd081d65..a9b96b18772f3 100644
+--- a/drivers/dma/idxd/cdev.c
++++ b/drivers/dma/idxd/cdev.c
+@@ -312,6 +312,24 @@ static int idxd_user_drv_probe(struct idxd_dev *idxd_dev)
+ 	if (idxd->state != IDXD_DEV_ENABLED)
+ 		return -ENXIO;
+ 
++	/*
++	 * User type WQ is enabled only when SVA is enabled for two reasons:
++	 *   - If no IOMMU or IOMMU Passthrough without SVA, userspace
++	 *     can directly access physical address through the WQ.
++	 *   - The IDXD cdev driver does not provide any ways to pin
++	 *     user pages and translate the address from user VA to IOVA or
++	 *     PA without IOMMU SVA. Therefore the application has no way
++	 *     to instruct the device to perform DMA function. This makes
++	 *     the cdev not usable for normal application usage.
++	 */
++	if (!device_user_pasid_enabled(idxd)) {
++		idxd->cmd_status = IDXD_SCMD_WQ_USER_NO_IOMMU;
++		dev_dbg(&idxd->pdev->dev,
++			"User type WQ cannot be enabled without SVA.\n");
++
++		return -EOPNOTSUPP;
++	}
++
+ 	mutex_lock(&wq->wq_lock);
+ 	wq->type = IDXD_WQT_USER;
+ 	rc = drv_enable_wq(wq);
+diff --git a/drivers/dma/idxd/device.c b/drivers/dma/idxd/device.c
+index 5a8cc52c1abfd..bd6e50f795beb 100644
+--- a/drivers/dma/idxd/device.c
++++ b/drivers/dma/idxd/device.c
+@@ -388,7 +388,7 @@ static void idxd_wq_disable_cleanup(struct idxd_wq *wq)
+ 	clear_bit(WQ_FLAG_BLOCK_ON_FAULT, &wq->flags);
+ 	memset(wq->name, 0, WQ_NAME_SIZE);
+ 	wq->max_xfer_bytes = WQ_DEFAULT_MAX_XFER;
+-	wq->max_batch_size = WQ_DEFAULT_MAX_BATCH;
++	idxd_wq_set_max_batch_size(idxd->data->type, wq, WQ_DEFAULT_MAX_BATCH);
+ }
+ 
+ static void idxd_wq_device_reset_cleanup(struct idxd_wq *wq)
+@@ -724,13 +724,21 @@ static void idxd_device_wqs_clear_state(struct idxd_device *idxd)
+ 
+ void idxd_device_clear_state(struct idxd_device *idxd)
+ {
+-	if (!test_bit(IDXD_FLAG_CONFIGURABLE, &idxd->flags))
+-		return;
++	/* IDXD is always disabled. Other states are cleared only when IDXD is configurable. */
++	if (test_bit(IDXD_FLAG_CONFIGURABLE, &idxd->flags)) {
++		/*
++		 * Clearing wq state is protected by wq lock.
++		 * So no need to be protected by device lock.
++		 */
++		idxd_device_wqs_clear_state(idxd);
++
++		spin_lock(&idxd->dev_lock);
++		idxd_groups_clear_state(idxd);
++		idxd_engines_clear_state(idxd);
++	} else {
++		spin_lock(&idxd->dev_lock);
++	}
+ 
+-	idxd_device_wqs_clear_state(idxd);
+-	spin_lock(&idxd->dev_lock);
+-	idxd_groups_clear_state(idxd);
+-	idxd_engines_clear_state(idxd);
+ 	idxd->state = IDXD_DEV_DISABLED;
+ 	spin_unlock(&idxd->dev_lock);
+ }
+@@ -863,7 +871,7 @@ static int idxd_wq_config_write(struct idxd_wq *wq)
+ 
+ 	/* bytes 12-15 */
+ 	wq->wqcfg->max_xfer_shift = ilog2(wq->max_xfer_bytes);
+-	wq->wqcfg->max_batch_shift = ilog2(wq->max_batch_size);
++	idxd_wqcfg_set_max_batch_shift(idxd->data->type, wq->wqcfg, ilog2(wq->max_batch_size));
+ 
+ 	dev_dbg(dev, "WQ %d CFGs\n", wq->id);
+ 	for (i = 0; i < WQCFG_STRIDES(idxd); i++) {
+@@ -1031,7 +1039,7 @@ static int idxd_wq_load_config(struct idxd_wq *wq)
+ 	wq->priority = wq->wqcfg->priority;
+ 
+ 	wq->max_xfer_bytes = 1ULL << wq->wqcfg->max_xfer_shift;
+-	wq->max_batch_size = 1ULL << wq->wqcfg->max_batch_shift;
++	idxd_wq_set_max_batch_size(idxd->data->type, wq, 1U << wq->wqcfg->max_batch_shift);
+ 
+ 	for (i = 0; i < WQCFG_STRIDES(idxd); i++) {
+ 		wqcfg_offset = WQCFG_OFFSET(idxd, wq->id, i);
+diff --git a/drivers/dma/idxd/idxd.h b/drivers/dma/idxd/idxd.h
+index fed0dfc1eaa83..05c3f86944783 100644
+--- a/drivers/dma/idxd/idxd.h
++++ b/drivers/dma/idxd/idxd.h
+@@ -308,6 +308,8 @@ struct idxd_device {
+ 	struct work_struct work;
+ 
+ 	struct idxd_pmu *idxd_pmu;
++
++	unsigned long *opcap_bmap;
+ };
+ 
+ /* IDXD software descriptor */
+@@ -540,6 +542,38 @@ static inline int idxd_wq_refcount(struct idxd_wq *wq)
+ 	return wq->client_count;
+ };
+ 
++/*
++ * Intel IAA does not support batch processing.
++ * The max batch size of device, max batch size of wq and
++ * max batch shift of wqcfg should be always 0 on IAA.
++ */
++static inline void idxd_set_max_batch_size(int idxd_type, struct idxd_device *idxd,
++					   u32 max_batch_size)
++{
++	if (idxd_type == IDXD_TYPE_IAX)
++		idxd->max_batch_size = 0;
++	else
++		idxd->max_batch_size = max_batch_size;
++}
++
++static inline void idxd_wq_set_max_batch_size(int idxd_type, struct idxd_wq *wq,
++					      u32 max_batch_size)
++{
++	if (idxd_type == IDXD_TYPE_IAX)
++		wq->max_batch_size = 0;
++	else
++		wq->max_batch_size = max_batch_size;
++}
++
++static inline void idxd_wqcfg_set_max_batch_shift(int idxd_type, union wqcfg *wqcfg,
++						  u32 max_batch_shift)
++{
++	if (idxd_type == IDXD_TYPE_IAX)
++		wqcfg->max_batch_shift = 0;
++	else
++		wqcfg->max_batch_shift = max_batch_shift;
++}
++
+ int __must_check __idxd_driver_register(struct idxd_device_driver *idxd_drv,
+ 					struct module *module, const char *mod_name);
+ #define idxd_driver_register(driver) \
+diff --git a/drivers/dma/idxd/init.c b/drivers/dma/idxd/init.c
+index aa3478257ddb5..cf94795ca1afa 100644
+--- a/drivers/dma/idxd/init.c
++++ b/drivers/dma/idxd/init.c
+@@ -177,7 +177,7 @@ static int idxd_setup_wqs(struct idxd_device *idxd)
+ 		init_completion(&wq->wq_dead);
+ 		init_completion(&wq->wq_resurrect);
+ 		wq->max_xfer_bytes = WQ_DEFAULT_MAX_XFER;
+-		wq->max_batch_size = WQ_DEFAULT_MAX_BATCH;
++		idxd_wq_set_max_batch_size(idxd->data->type, wq, WQ_DEFAULT_MAX_BATCH);
+ 		wq->enqcmds_retries = IDXD_ENQCMDS_RETRIES;
+ 		wq->wqcfg = kzalloc_node(idxd->wqcfg_size, GFP_KERNEL, dev_to_node(dev));
+ 		if (!wq->wqcfg) {
+@@ -369,6 +369,19 @@ static void idxd_read_table_offsets(struct idxd_device *idxd)
+ 	dev_dbg(dev, "IDXD Perfmon Offset: %#x\n", idxd->perfmon_offset);
+ }
+ 
++static void multi_u64_to_bmap(unsigned long *bmap, u64 *val, int count)
++{
++	int i, j, nr;
++
++	for (i = 0, nr = 0; i < count; i++) {
++		for (j = 0; j < BITS_PER_LONG_LONG; j++) {
++			if (val[i] & BIT(j))
++				set_bit(nr, bmap);
++			nr++;
++		}
++	}
++}
++
+ static void idxd_read_caps(struct idxd_device *idxd)
+ {
+ 	struct device *dev = &idxd->pdev->dev;
+@@ -389,7 +402,7 @@ static void idxd_read_caps(struct idxd_device *idxd)
+ 
+ 	idxd->max_xfer_bytes = 1ULL << idxd->hw.gen_cap.max_xfer_shift;
+ 	dev_dbg(dev, "max xfer size: %llu bytes\n", idxd->max_xfer_bytes);
+-	idxd->max_batch_size = 1U << idxd->hw.gen_cap.max_batch_shift;
++	idxd_set_max_batch_size(idxd->data->type, idxd, 1U << idxd->hw.gen_cap.max_batch_shift);
+ 	dev_dbg(dev, "max batch size: %u\n", idxd->max_batch_size);
+ 	if (idxd->hw.gen_cap.config_en)
+ 		set_bit(IDXD_FLAG_CONFIGURABLE, &idxd->flags);
+@@ -427,6 +440,7 @@ static void idxd_read_caps(struct idxd_device *idxd)
+ 				IDXD_OPCAP_OFFSET + i * sizeof(u64));
+ 		dev_dbg(dev, "opcap[%d]: %#llx\n", i, idxd->hw.opcap.bits[i]);
+ 	}
++	multi_u64_to_bmap(idxd->opcap_bmap, &idxd->hw.opcap.bits[0], 4);
+ }
+ 
+ static struct idxd_device *idxd_alloc(struct pci_dev *pdev, struct idxd_driver_data *data)
+@@ -448,6 +462,12 @@ static struct idxd_device *idxd_alloc(struct pci_dev *pdev, struct idxd_driver_d
+ 	if (idxd->id < 0)
+ 		return NULL;
+ 
++	idxd->opcap_bmap = bitmap_zalloc_node(IDXD_MAX_OPCAP_BITS, GFP_KERNEL, dev_to_node(dev));
++	if (!idxd->opcap_bmap) {
++		ida_free(&idxd_ida, idxd->id);
++		return NULL;
++	}
++
+ 	device_initialize(conf_dev);
+ 	conf_dev->parent = dev;
+ 	conf_dev->bus = &dsa_bus_type;
+diff --git a/drivers/dma/idxd/registers.h b/drivers/dma/idxd/registers.h
+index 02449aa9c454f..4c96ea85f8435 100644
+--- a/drivers/dma/idxd/registers.h
++++ b/drivers/dma/idxd/registers.h
+@@ -90,6 +90,8 @@ struct opcap {
+ 	u64 bits[4];
+ };
+ 
++#define IDXD_MAX_OPCAP_BITS		256U
++
+ #define IDXD_OPCAP_OFFSET		0x40
+ 
+ #define IDXD_TABLE_OFFSET		0x60
+diff --git a/drivers/dma/idxd/sysfs.c b/drivers/dma/idxd/sysfs.c
+index 3f262a57441b4..82538622320a8 100644
+--- a/drivers/dma/idxd/sysfs.c
++++ b/drivers/dma/idxd/sysfs.c
+@@ -961,7 +961,7 @@ static ssize_t wq_max_batch_size_store(struct device *dev, struct device_attribu
+ 	if (batch_size > idxd->max_batch_size)
+ 		return -EINVAL;
+ 
+-	wq->max_batch_size = (u32)batch_size;
++	idxd_wq_set_max_batch_size(idxd->data->type, wq, (u32)batch_size);
+ 
+ 	return count;
+ }
+@@ -1177,14 +1177,8 @@ static ssize_t op_cap_show(struct device *dev,
+ 			   struct device_attribute *attr, char *buf)
+ {
+ 	struct idxd_device *idxd = confdev_to_idxd(dev);
+-	int i, rc = 0;
+-
+-	for (i = 0; i < 4; i++)
+-		rc += sysfs_emit_at(buf, rc, "%#llx ", idxd->hw.opcap.bits[i]);
+ 
+-	rc--;
+-	rc += sysfs_emit_at(buf, rc, "\n");
+-	return rc;
++	return sysfs_emit(buf, "%*pb\n", IDXD_MAX_OPCAP_BITS, idxd->opcap_bmap);
+ }
+ static DEVICE_ATTR_RO(op_cap);
+ 
+@@ -1408,6 +1402,7 @@ static void idxd_conf_device_release(struct device *dev)
+ 	kfree(idxd->wqs);
+ 	kfree(idxd->engines);
+ 	ida_free(&idxd_ida, idxd->id);
++	bitmap_free(idxd->opcap_bmap);
+ 	kfree(idxd);
+ }
+ 
+diff --git a/drivers/dma/mv_xor_v2.c b/drivers/dma/mv_xor_v2.c
+index f629ef6fd3c2a..113834e1167b6 100644
+--- a/drivers/dma/mv_xor_v2.c
++++ b/drivers/dma/mv_xor_v2.c
+@@ -893,6 +893,7 @@ static int mv_xor_v2_remove(struct platform_device *pdev)
+ 	tasklet_kill(&xor_dev->irq_tasklet);
+ 
+ 	clk_disable_unprepare(xor_dev->clk);
++	clk_disable_unprepare(xor_dev->reg_clk);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/dma/pxa_dma.c b/drivers/dma/pxa_dma.c
+index e7034f6f3994a..22a392fe6d32b 100644
+--- a/drivers/dma/pxa_dma.c
++++ b/drivers/dma/pxa_dma.c
+@@ -1247,14 +1247,14 @@ static int pxad_init_phys(struct platform_device *op,
+ 		return -ENOMEM;
+ 
+ 	for (i = 0; i < nb_phy_chans; i++)
+-		if (platform_get_irq(op, i) > 0)
++		if (platform_get_irq_optional(op, i) > 0)
+ 			nr_irq++;
+ 
+ 	for (i = 0; i < nb_phy_chans; i++) {
+ 		phy = &pdev->phys[i];
+ 		phy->base = pdev->base;
+ 		phy->idx = i;
+-		irq = platform_get_irq(op, i);
++		irq = platform_get_irq_optional(op, i);
+ 		if ((nr_irq > 1) && (irq > 0))
+ 			ret = devm_request_irq(&op->dev, irq,
+ 					       pxad_chan_handler,
+diff --git a/drivers/dma/stm32-dma.c b/drivers/dma/stm32-dma.c
+index adb25a11c70fe..5aeaaac846dfd 100644
+--- a/drivers/dma/stm32-dma.c
++++ b/drivers/dma/stm32-dma.c
+@@ -663,6 +663,8 @@ static void stm32_dma_handle_chan_paused(struct stm32_dma_chan *chan)
+ 
+ 	chan->chan_reg.dma_sndtr = stm32_dma_read(dmadev, STM32_DMA_SNDTR(chan->id));
+ 
++	chan->status = DMA_PAUSED;
++
+ 	dev_dbg(chan2dev(chan), "vchan %pK: paused\n", &chan->vchan);
+ }
+ 
+@@ -775,9 +777,7 @@ static irqreturn_t stm32_dma_chan_irq(int irq, void *devid)
+ 	if (status & STM32_DMA_TCI) {
+ 		stm32_dma_irq_clear(chan, STM32_DMA_TCI);
+ 		if (scr & STM32_DMA_SCR_TCIE) {
+-			if (chan->status == DMA_PAUSED && !(scr & STM32_DMA_SCR_EN))
+-				stm32_dma_handle_chan_paused(chan);
+-			else
++			if (chan->status != DMA_PAUSED)
+ 				stm32_dma_handle_chan_done(chan, scr);
+ 		}
+ 		status &= ~STM32_DMA_TCI;
+@@ -824,13 +824,11 @@ static int stm32_dma_pause(struct dma_chan *c)
+ 		return -EPERM;
+ 
+ 	spin_lock_irqsave(&chan->vchan.lock, flags);
++
+ 	ret = stm32_dma_disable_chan(chan);
+-	/*
+-	 * A transfer complete flag is set to indicate the end of transfer due to the stream
+-	 * interruption, so wait for interrupt
+-	 */
+ 	if (!ret)
+-		chan->status = DMA_PAUSED;
++		stm32_dma_handle_chan_paused(chan);
++
+ 	spin_unlock_irqrestore(&chan->vchan.lock, flags);
+ 
+ 	return ret;
+diff --git a/drivers/dma/ti/k3-udma-glue.c b/drivers/dma/ti/k3-udma-glue.c
+index 4fdd9f06b7235..4f1aeb81e9c7f 100644
+--- a/drivers/dma/ti/k3-udma-glue.c
++++ b/drivers/dma/ti/k3-udma-glue.c
+@@ -299,6 +299,7 @@ struct k3_udma_glue_tx_channel *k3_udma_glue_request_tx_chn(struct device *dev,
+ 	ret = device_register(&tx_chn->common.chan_dev);
+ 	if (ret) {
+ 		dev_err(dev, "Channel Device registration failed %d\n", ret);
++		put_device(&tx_chn->common.chan_dev);
+ 		tx_chn->common.chan_dev.parent = NULL;
+ 		goto err;
+ 	}
+@@ -917,6 +918,7 @@ k3_udma_glue_request_rx_chn_priv(struct device *dev, const char *name,
+ 	ret = device_register(&rx_chn->common.chan_dev);
+ 	if (ret) {
+ 		dev_err(dev, "Channel Device registration failed %d\n", ret);
++		put_device(&rx_chn->common.chan_dev);
+ 		rx_chn->common.chan_dev.parent = NULL;
+ 		goto err;
+ 	}
+@@ -1048,6 +1050,7 @@ k3_udma_glue_request_remote_rx_chn(struct device *dev, const char *name,
+ 	ret = device_register(&rx_chn->common.chan_dev);
+ 	if (ret) {
+ 		dev_err(dev, "Channel Device registration failed %d\n", ret);
++		put_device(&rx_chn->common.chan_dev);
+ 		rx_chn->common.chan_dev.parent = NULL;
+ 		goto err;
+ 	}
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+index 9ecb7f663e196..278512535b518 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+@@ -485,6 +485,21 @@ void amdgpu_debugfs_vm_bo_info(struct amdgpu_vm *vm, struct seq_file *m);
+  */
+ static inline uint64_t amdgpu_vm_tlb_seq(struct amdgpu_vm *vm)
+ {
++	unsigned long flags;
++	spinlock_t *lock;
++
++	/*
++	 * Workaround to stop racing between the fence signaling and handling
++	 * the cb. The lock is static after initially setting it up, just make
++	 * sure that the dma_fence structure isn't freed up.
++	 */
++	rcu_read_lock();
++	lock = vm->last_tlb_flush->lock;
++	rcu_read_unlock();
++
++	spin_lock_irqsave(lock, flags);
++	spin_unlock_irqrestore(lock, flags);
++
+ 	return atomic64_read(&vm->tlb_seq);
+ }
+ 
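The otherwise empty lock/unlock pair above works as a barrier: dma_fence
callbacks run under fence->lock, so once spin_unlock_irqrestore() returns,
any TLB-flush callback that was already executing has completed before the
atomic64_read() below samples the sequence number. The rcu_read_lock() only
keeps the fence structure from being freed while its lock pointer is read.
A condensed user-space sketch of the pattern, using pthreads in place of
the fence spinlock (illustrative, not kernel code):

#include <pthread.h>
#include <stdatomic.h>

static pthread_mutex_t cb_lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_long seq;

/* The signaler holds cb_lock while invoking this, like a dma_fence
 * callback running under fence->lock. */
static void flush_callback(void)
{
	atomic_fetch_add(&seq, 1);
}

/* An empty lock/unlock pair orders the read after any callback that was
 * already inside flush_callback(), mirroring the workaround above. */
static long read_seq(void)
{
	pthread_mutex_lock(&cb_lock);
	pthread_mutex_unlock(&cb_lock);
	return atomic_load(&seq);
}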
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+index 28ec5f8ac1c11..27159f1d112ec 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+@@ -435,7 +435,7 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
+ 	if (place->flags & TTM_PL_FLAG_TOPDOWN)
+ 		vres->flags |= DRM_BUDDY_TOPDOWN_ALLOCATION;
+ 
+-	if (fpfn || lpfn != man->size)
++	if (fpfn || lpfn != mgr->mm.size)
+ 		/* Allocate blocks in desired range */
+ 		vres->flags |= DRM_BUDDY_RANGE_ALLOCATION;
+ 
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+index dc774ddf34456..033fcd594edcb 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+@@ -1928,7 +1928,7 @@ static int criu_checkpoint(struct file *filep,
+ {
+ 	int ret;
+ 	uint32_t num_devices, num_bos, num_objects;
+-	uint64_t priv_size, priv_offset = 0;
++	uint64_t priv_size, priv_offset = 0, bo_priv_offset;
+ 
+ 	if (!args->devices || !args->bos || !args->priv_data)
+ 		return -EINVAL;
+@@ -1972,38 +1972,34 @@ static int criu_checkpoint(struct file *filep,
+ 	if (ret)
+ 		goto exit_unlock;
+ 
+-	ret = criu_checkpoint_bos(p, num_bos, (uint8_t __user *)args->bos,
+-			    (uint8_t __user *)args->priv_data, &priv_offset);
+-	if (ret)
+-		goto exit_unlock;
++	/* Leave room for BOs in the private data. They need to be restored
++	 * before events, but we checkpoint them last to simplify the error
++	 * handling.
++	 */
++	bo_priv_offset = priv_offset;
++	priv_offset += num_bos * sizeof(struct kfd_criu_bo_priv_data);
+ 
+ 	if (num_objects) {
+ 		ret = kfd_criu_checkpoint_queues(p, (uint8_t __user *)args->priv_data,
+ 						 &priv_offset);
+ 		if (ret)
+-			goto close_bo_fds;
++			goto exit_unlock;
+ 
+ 		ret = kfd_criu_checkpoint_events(p, (uint8_t __user *)args->priv_data,
+ 						 &priv_offset);
+ 		if (ret)
+-			goto close_bo_fds;
++			goto exit_unlock;
+ 
+ 		ret = kfd_criu_checkpoint_svm(p, (uint8_t __user *)args->priv_data, &priv_offset);
+ 		if (ret)
+-			goto close_bo_fds;
++			goto exit_unlock;
+ 	}
+ 
+-close_bo_fds:
+-	if (ret) {
+-		/* If IOCTL returns err, user assumes all FDs opened in criu_dump_bos are closed */
+-		uint32_t i;
+-		struct kfd_criu_bo_bucket *bo_buckets = (struct kfd_criu_bo_bucket *) args->bos;
+-
+-		for (i = 0; i < num_bos; i++) {
+-			if (bo_buckets[i].alloc_flags & KFD_IOC_ALLOC_MEM_FLAGS_VRAM)
+-				close_fd(bo_buckets[i].dmabuf_fd);
+-		}
+-	}
++	/* This must be the last thing in this function that can fail.
++	 * Otherwise we leak dmabuf file descriptors.
++	 */
++	ret = criu_checkpoint_bos(p, num_bos, (uint8_t __user *)args->bos,
++			   (uint8_t __user *)args->priv_data, &bo_priv_offset);
+ 
+ exit_unlock:
+ 	mutex_unlock(&p->mutex);
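The restructuring above reserves space for the BO records up front
(bo_priv_offset) and writes them only after every other step has succeeded:
criu_checkpoint_bos() is the one step that opens dmabuf file descriptors,
so moving it last removes the old close_bo_fds unwind path entirely. A
standalone sketch of this reserve-then-backfill shape (names hypothetical,
not the kfd API):

#include <stdint.h>
#include <string.h>

struct bo_record { uint64_t handle; };	/* stand-in for kfd_criu_bo_priv_data */

/* Hypothetical emitter standing in for the kfd_criu_checkpoint_*() calls. */
static int emit_at(uint8_t *buf, uint64_t *off, const void *src, size_t len)
{
	memcpy(buf + *off, src, len);
	*off += len;
	return 0;
}

/* Leave a hole for the fd-creating BO records, emit the fd-free steps,
 * then backfill the hole last so an early failure never leaks fds. */
static int checkpoint(uint8_t *buf, const struct bo_record *bos, int num_bos)
{
	uint64_t off = 0, bo_off;
	uint32_t queues = 0, events = 0;

	bo_off = off;
	off += (uint64_t)num_bos * sizeof(*bos);	/* reserved hole */

	if (emit_at(buf, &off, &queues, sizeof(queues)) ||
	    emit_at(buf, &off, &events, sizeof(events)))
		return -1;

	return emit_at(buf, &bo_off, bos, num_bos * sizeof(*bos));
}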
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_events.c b/drivers/gpu/drm/amd/amdkfd/kfd_events.c
+index 83e3ce9f60491..729d26d648af3 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_events.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_events.c
+@@ -506,6 +506,7 @@ int kfd_criu_restore_event(struct file *devkfd,
+ 		ret = create_other_event(p, ev, &ev_priv->event_id);
+ 		break;
+ 	}
++	mutex_unlock(&p->event_mutex);
+ 
+ exit:
+ 	if (ret)
+@@ -513,8 +514,6 @@ exit:
+ 
+ 	kfree(ev_priv);
+ 
+-	mutex_unlock(&p->event_mutex);
+-
+ 	return ret;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
+index b059a77b6081d..058dbb6782df6 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
+@@ -886,7 +886,7 @@ svm_migrate_to_vram(struct svm_range *prange, uint32_t best_loc,
+ static vm_fault_t svm_migrate_to_ram(struct vm_fault *vmf)
+ {
+ 	unsigned long addr = vmf->address;
+-	struct vm_area_struct *vma;
++	struct svm_range_bo *svm_bo;
+ 	enum svm_work_list_ops op;
+ 	struct svm_range *parent;
+ 	struct svm_range *prange;
+@@ -894,29 +894,42 @@ static vm_fault_t svm_migrate_to_ram(struct vm_fault *vmf)
+ 	struct mm_struct *mm;
+ 	int r = 0;
+ 
+-	vma = vmf->vma;
+-	mm = vma->vm_mm;
++	svm_bo = vmf->page->zone_device_data;
++	if (!svm_bo) {
++		pr_debug("failed get device page at addr 0x%lx\n", addr);
++		return VM_FAULT_SIGBUS;
++	}
++	if (!mmget_not_zero(svm_bo->eviction_fence->mm)) {
++		pr_debug("addr 0x%lx of process mm is destroyed\n", addr);
++		return VM_FAULT_SIGBUS;
++	}
+ 
+-	p = kfd_lookup_process_by_mm(vma->vm_mm);
++	mm = svm_bo->eviction_fence->mm;
++	if (mm != vmf->vma->vm_mm)
++		pr_debug("addr 0x%lx is COW mapping in child process\n", addr);
++
++	p = kfd_lookup_process_by_mm(mm);
+ 	if (!p) {
+ 		pr_debug("failed find process at fault address 0x%lx\n", addr);
+-		return VM_FAULT_SIGBUS;
++		r = VM_FAULT_SIGBUS;
++		goto out_mmput;
+ 	}
+ 	if (READ_ONCE(p->svms.faulting_task) == current) {
+ 		pr_debug("skipping ram migration\n");
+-		kfd_unref_process(p);
+-		return 0;
++		r = 0;
++		goto out_unref_process;
+ 	}
+-	addr >>= PAGE_SHIFT;
++
+ 	pr_debug("CPU page fault svms 0x%p address 0x%lx\n", &p->svms, addr);
++	addr >>= PAGE_SHIFT;
+ 
+ 	mutex_lock(&p->svms.lock);
+ 
+ 	prange = svm_range_from_addr(&p->svms, addr, &parent);
+ 	if (!prange) {
+-		pr_debug("cannot find svm range at 0x%lx\n", addr);
++		pr_debug("failed get range svms 0x%p addr 0x%lx\n", &p->svms, addr);
+ 		r = -EFAULT;
+-		goto out;
++		goto out_unlock_svms;
+ 	}
+ 
+ 	mutex_lock(&parent->migrate_mutex);
+@@ -938,10 +951,11 @@ static vm_fault_t svm_migrate_to_ram(struct vm_fault *vmf)
+ 		goto out_unlock_prange;
+ 	}
+ 
+-	r = svm_migrate_vram_to_ram(prange, mm, KFD_MIGRATE_TRIGGER_PAGEFAULT_CPU);
++	r = svm_migrate_vram_to_ram(prange, vmf->vma->vm_mm,
++				    KFD_MIGRATE_TRIGGER_PAGEFAULT_CPU);
+ 	if (r)
+-		pr_debug("failed %d migrate 0x%p [0x%lx 0x%lx] to ram\n", r,
+-			 prange, prange->start, prange->last);
++		pr_debug("failed %d migrate svms 0x%p range 0x%p [0x%lx 0x%lx]\n",
++			 r, prange->svms, prange, prange->start, prange->last);
+ 
+ 	/* xnack on, update mapping on GPUs with ACCESS_IN_PLACE */
+ 	if (p->xnack_enabled && parent == prange)
+@@ -955,12 +969,13 @@ out_unlock_prange:
+ 	if (prange != parent)
+ 		mutex_unlock(&prange->migrate_mutex);
+ 	mutex_unlock(&parent->migrate_mutex);
+-out:
++out_unlock_svms:
+ 	mutex_unlock(&p->svms.lock);
+-	kfd_unref_process(p);
+-
++out_unref_process:
+ 	pr_debug("CPU fault svms 0x%p address 0x%lx done\n", &p->svms, addr);
+-
++	kfd_unref_process(p);
++out_mmput:
++	mmput(mm);
+ 	return r ? VM_FAULT_SIGBUS : 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_clk_mgr.c
+index 4a15aa7a375fe..2680eecb3369d 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_clk_mgr.c
+@@ -356,32 +356,32 @@ static struct wm_table ddr5_wm_table = {
+ 			.wm_inst = WM_A,
+ 			.wm_type = WM_TYPE_PSTATE_CHG,
+ 			.pstate_latency_us = 11.72,
+-			.sr_exit_time_us = 9,
+-			.sr_enter_plus_exit_time_us = 11,
++			.sr_exit_time_us = 12.5,
++			.sr_enter_plus_exit_time_us = 14.5,
+ 			.valid = true,
+ 		},
+ 		{
+ 			.wm_inst = WM_B,
+ 			.wm_type = WM_TYPE_PSTATE_CHG,
+ 			.pstate_latency_us = 11.72,
+-			.sr_exit_time_us = 9,
+-			.sr_enter_plus_exit_time_us = 11,
++			.sr_exit_time_us = 12.5,
++			.sr_enter_plus_exit_time_us = 14.5,
+ 			.valid = true,
+ 		},
+ 		{
+ 			.wm_inst = WM_C,
+ 			.wm_type = WM_TYPE_PSTATE_CHG,
+ 			.pstate_latency_us = 11.72,
+-			.sr_exit_time_us = 9,
+-			.sr_enter_plus_exit_time_us = 11,
++			.sr_exit_time_us = 12.5,
++			.sr_enter_plus_exit_time_us = 14.5,
+ 			.valid = true,
+ 		},
+ 		{
+ 			.wm_inst = WM_D,
+ 			.wm_type = WM_TYPE_PSTATE_CHG,
+ 			.pstate_latency_us = 11.72,
+-			.sr_exit_time_us = 9,
+-			.sr_enter_plus_exit_time_us = 11,
++			.sr_exit_time_us = 12.5,
++			.sr_enter_plus_exit_time_us = 14.5,
+ 			.valid = true,
+ 		},
+ 	}
+@@ -393,32 +393,32 @@ static struct wm_table lpddr5_wm_table = {
+ 			.wm_inst = WM_A,
+ 			.wm_type = WM_TYPE_PSTATE_CHG,
+ 			.pstate_latency_us = 11.65333,
+-			.sr_exit_time_us = 11.5,
+-			.sr_enter_plus_exit_time_us = 14.5,
++			.sr_exit_time_us = 16.5,
++			.sr_enter_plus_exit_time_us = 18.5,
+ 			.valid = true,
+ 		},
+ 		{
+ 			.wm_inst = WM_B,
+ 			.wm_type = WM_TYPE_PSTATE_CHG,
+ 			.pstate_latency_us = 11.65333,
+-			.sr_exit_time_us = 11.5,
+-			.sr_enter_plus_exit_time_us = 14.5,
++			.sr_exit_time_us = 16.5,
++			.sr_enter_plus_exit_time_us = 18.5,
+ 			.valid = true,
+ 		},
+ 		{
+ 			.wm_inst = WM_C,
+ 			.wm_type = WM_TYPE_PSTATE_CHG,
+ 			.pstate_latency_us = 11.65333,
+-			.sr_exit_time_us = 11.5,
+-			.sr_enter_plus_exit_time_us = 14.5,
++			.sr_exit_time_us = 16.5,
++			.sr_enter_plus_exit_time_us = 18.5,
+ 			.valid = true,
+ 		},
+ 		{
+ 			.wm_inst = WM_D,
+ 			.wm_type = WM_TYPE_PSTATE_CHG,
+ 			.pstate_latency_us = 11.65333,
+-			.sr_exit_time_us = 11.5,
+-			.sr_enter_plus_exit_time_us = 14.5,
++			.sr_exit_time_us = 16.5,
++			.sr_enter_plus_exit_time_us = 18.5,
+ 			.valid = true,
+ 		},
+ 	}
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
+index f0f3f66629cc0..e7f1d5f8166f9 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
+@@ -156,7 +156,8 @@ void dcn32_init_clocks(struct clk_mgr *clk_mgr_base)
+ {
+ 	struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base);
+ 	unsigned int num_levels;
+-	unsigned int num_dcfclk_levels, num_dtbclk_levels, num_dispclk_levels;
++	struct clk_limit_num_entries *num_entries_per_clk = &clk_mgr_base->bw_params->clk_table.num_entries_per_clk;
++	unsigned int i;
+ 
+ 	memset(&(clk_mgr_base->clks), 0, sizeof(struct dc_clocks));
+ 	clk_mgr_base->clks.p_state_change_support = true;
+@@ -180,42 +181,42 @@ void dcn32_init_clocks(struct clk_mgr *clk_mgr_base)
+ 	/* DCFCLK */
+ 	dcn32_init_single_clock(clk_mgr, PPCLK_DCFCLK,
+ 			&clk_mgr_base->bw_params->clk_table.entries[0].dcfclk_mhz,
+-			&num_levels);
+-	num_dcfclk_levels = num_levels;
++			&num_entries_per_clk->num_dcfclk_levels);
+ 
+ 	/* SOCCLK */
+ 	dcn32_init_single_clock(clk_mgr, PPCLK_SOCCLK,
+ 					&clk_mgr_base->bw_params->clk_table.entries[0].socclk_mhz,
+-					&num_levels);
++					&num_entries_per_clk->num_socclk_levels);
++
+ 	/* DTBCLK */
+ 	if (!clk_mgr->base.ctx->dc->debug.disable_dtb_ref_clk_switch)
+ 		dcn32_init_single_clock(clk_mgr, PPCLK_DTBCLK,
+ 				&clk_mgr_base->bw_params->clk_table.entries[0].dtbclk_mhz,
+-				&num_levels);
+-	num_dtbclk_levels = num_levels;
++				&num_entries_per_clk->num_dtbclk_levels);
+ 
+ 	/* DISPCLK */
+ 	dcn32_init_single_clock(clk_mgr, PPCLK_DISPCLK,
+ 			&clk_mgr_base->bw_params->clk_table.entries[0].dispclk_mhz,
+-			&num_levels);
+-	num_dispclk_levels = num_levels;
++			&num_entries_per_clk->num_dispclk_levels);
++	num_levels = num_entries_per_clk->num_dispclk_levels;
+ 
+-	if (num_dcfclk_levels && num_dtbclk_levels && num_dispclk_levels)
++	if (num_entries_per_clk->num_dcfclk_levels &&
++			num_entries_per_clk->num_dtbclk_levels &&
++			num_entries_per_clk->num_dispclk_levels)
+ 		clk_mgr->dpm_present = true;
+ 
+ 	if (clk_mgr_base->ctx->dc->debug.min_disp_clk_khz) {
+-		unsigned int i;
+-
+ 		for (i = 0; i < num_levels; i++)
+ 			if (clk_mgr_base->bw_params->clk_table.entries[i].dispclk_mhz
+ 					< khz_to_mhz_ceil(clk_mgr_base->ctx->dc->debug.min_disp_clk_khz))
+ 				clk_mgr_base->bw_params->clk_table.entries[i].dispclk_mhz
+ 					= khz_to_mhz_ceil(clk_mgr_base->ctx->dc->debug.min_disp_clk_khz);
+ 	}
++	for (i = 0; i < num_levels; i++)
++		if (clk_mgr_base->bw_params->clk_table.entries[i].dispclk_mhz > 1950)
++			clk_mgr_base->bw_params->clk_table.entries[i].dispclk_mhz = 1950;
+ 
+ 	if (clk_mgr_base->ctx->dc->debug.min_dpp_clk_khz) {
+-		unsigned int i;
+-
+ 		for (i = 0; i < num_levels; i++)
+ 			if (clk_mgr_base->bw_params->clk_table.entries[i].dppclk_mhz
+ 					< khz_to_mhz_ceil(clk_mgr_base->ctx->dc->debug.min_dpp_clk_khz))
+@@ -370,7 +371,7 @@ static void dcn32_update_clocks(struct clk_mgr *clk_mgr_base,
+ 			/* to disable P-State switching, set UCLK min = max */
+ 			if (!clk_mgr_base->clks.p_state_change_support)
+ 				dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK,
+-						clk_mgr_base->bw_params->clk_table.entries[clk_mgr_base->bw_params->clk_table.num_entries - 1].memclk_mhz);
++						clk_mgr_base->bw_params->clk_table.entries[clk_mgr_base->bw_params->clk_table.num_entries_per_clk.num_memclk_levels - 1].memclk_mhz);
+ 		}
+ 
+ 		if (should_update_pstate_support(safe_to_lower, fclk_p_state_change_support, clk_mgr_base->clks.fclk_p_state_change_support) &&
+@@ -632,7 +633,7 @@ static void dcn32_set_hard_min_memclk(struct clk_mgr *clk_mgr_base, bool current
+ 					khz_to_mhz_ceil(clk_mgr_base->clks.dramclk_khz));
+ 		else
+ 			dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK,
+-					clk_mgr_base->bw_params->clk_table.entries[clk_mgr_base->bw_params->clk_table.num_entries - 1].memclk_mhz);
++					clk_mgr_base->bw_params->clk_table.entries[clk_mgr_base->bw_params->clk_table.num_entries_per_clk.num_memclk_levels - 1].memclk_mhz);
+ 	} else {
+ 		dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK,
+ 				clk_mgr_base->bw_params->clk_table.entries[0].memclk_mhz);
+@@ -648,22 +649,37 @@ static void dcn32_set_hard_max_memclk(struct clk_mgr *clk_mgr_base)
+ 		return;
+ 
+ 	dcn30_smu_set_hard_max_by_freq(clk_mgr, PPCLK_UCLK,
+-			clk_mgr_base->bw_params->clk_table.entries[clk_mgr_base->bw_params->clk_table.num_entries - 1].memclk_mhz);
++			clk_mgr_base->bw_params->clk_table.entries[clk_mgr_base->bw_params->clk_table.num_entries_per_clk.num_memclk_levels - 1].memclk_mhz);
+ }
+ 
+ /* Get current memclk states, update bounding box */
+ static void dcn32_get_memclk_states_from_smu(struct clk_mgr *clk_mgr_base)
+ {
+ 	struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base);
++	struct clk_limit_num_entries *num_entries_per_clk = &clk_mgr_base->bw_params->clk_table.num_entries_per_clk;
+ 	unsigned int num_levels;
+ 
+ 	if (!clk_mgr->smu_present)
+ 		return;
+ 
+-	/* Refresh memclk states */
++	/* Refresh memclk and fclk states */
+ 	dcn32_init_single_clock(clk_mgr, PPCLK_UCLK,
+ 			&clk_mgr_base->bw_params->clk_table.entries[0].memclk_mhz,
+-			&num_levels);
++			&num_entries_per_clk->num_memclk_levels);
++
++	/* memclk must have at least one level */
++	num_entries_per_clk->num_memclk_levels = num_entries_per_clk->num_memclk_levels ? num_entries_per_clk->num_memclk_levels : 1;
++
++	dcn32_init_single_clock(clk_mgr, PPCLK_FCLK,
++			&clk_mgr_base->bw_params->clk_table.entries[0].fclk_mhz,
++			&num_entries_per_clk->num_fclk_levels);
++
++	if (num_entries_per_clk->num_memclk_levels >= num_entries_per_clk->num_fclk_levels) {
++		num_levels = num_entries_per_clk->num_memclk_levels;
++	} else {
++		num_levels = num_entries_per_clk->num_fclk_levels;
++	}
++
+ 	clk_mgr_base->bw_params->clk_table.num_entries = num_levels ? num_levels : 1;
+ 
+ 	if (clk_mgr->dpm_present && !num_levels)
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_dio_stream_encoder.c b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_dio_stream_encoder.c
+index 8c0ab013764e3..e7f1bf0b04c57 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_dio_stream_encoder.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_dio_stream_encoder.c
+@@ -49,18 +49,30 @@
+ #define CTX \
+ 	enc1->base.ctx
+ 
++static void enc314_reset_fifo(struct stream_encoder *enc, bool reset)
++{
++	struct dcn10_stream_encoder *enc1 = DCN10STRENC_FROM_STRENC(enc);
++	uint32_t reset_val = reset ? 1 : 0;
++	uint32_t is_symclk_on;
++
++	REG_UPDATE(DIG_FIFO_CTRL0, DIG_FIFO_RESET, reset_val);
++	REG_GET(DIG_FE_CNTL, DIG_SYMCLK_FE_ON, &is_symclk_on);
++
++	if (is_symclk_on)
++		REG_WAIT(DIG_FIFO_CTRL0, DIG_FIFO_RESET_DONE, reset_val, 10, 5000);
++	else
++		udelay(10);
++}
+ 
+ static void enc314_enable_fifo(struct stream_encoder *enc)
+ {
+ 	struct dcn10_stream_encoder *enc1 = DCN10STRENC_FROM_STRENC(enc);
+ 
+-	/* TODO: Confirm if we need to wait for DIG_SYMCLK_FE_ON */
+-	REG_WAIT(DIG_FE_CNTL, DIG_SYMCLK_FE_ON, 1, 10, 5000);
+ 	REG_UPDATE(DIG_FIFO_CTRL0, DIG_FIFO_READ_START_LEVEL, 0x7);
+-	REG_UPDATE(DIG_FIFO_CTRL0, DIG_FIFO_RESET, 1);
+-	REG_WAIT(DIG_FIFO_CTRL0, DIG_FIFO_RESET_DONE, 1, 10, 5000);
+-	REG_UPDATE(DIG_FIFO_CTRL0, DIG_FIFO_RESET, 0);
+-	REG_WAIT(DIG_FIFO_CTRL0, DIG_FIFO_RESET_DONE, 0, 10, 5000);
++
++	enc314_reset_fifo(enc, true);
++	enc314_reset_fifo(enc, false);
++
+ 	REG_UPDATE(DIG_FIFO_CTRL0, DIG_FIFO_ENABLE, 1);
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c
+index 4bb3b31ea7e0c..60f43473d6d88 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c
+@@ -146,8 +146,8 @@ struct _vcs_dpi_soc_bounding_box_st dcn3_14_soc = {
+ 		},
+ 	},
+ 	.num_states = 5,
+-	.sr_exit_time_us = 9.0,
+-	.sr_enter_plus_exit_time_us = 11.0,
++	.sr_exit_time_us = 16.5,
++	.sr_enter_plus_exit_time_us = 18.5,
+ 	.sr_exit_z8_time_us = 442.0,
+ 	.sr_enter_plus_exit_z8_time_us = 560.0,
+ 	.writeback_latency_us = 12.0,
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr.h b/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr.h
+index d9f1b0a4fbd4a..591ab1389e3b3 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr.h
+@@ -95,10 +95,23 @@ struct clk_limit_table_entry {
+ 	unsigned int wck_ratio;
+ };
+ 
++struct clk_limit_num_entries {
++	unsigned int num_dcfclk_levels;
++	unsigned int num_fclk_levels;
++	unsigned int num_memclk_levels;
++	unsigned int num_socclk_levels;
++	unsigned int num_dtbclk_levels;
++	unsigned int num_dispclk_levels;
++	unsigned int num_dppclk_levels;
++	unsigned int num_phyclk_levels;
++	unsigned int num_phyclk_d18_levels;
++};
++
+ /* This table is contiguous */
+ struct clk_limit_table {
+ 	struct clk_limit_table_entry entries[MAX_NUM_DPM_LVL];
+-	unsigned int num_entries;
++	struct clk_limit_num_entries num_entries_per_clk;
++	unsigned int num_entries; /* highest populated dpm level for backwards compatibility */
+ };
+ 
+ struct wm_range_table_entry {
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/inc/pmfw_if/smu_v13_0_4_ppsmc.h b/drivers/gpu/drm/amd/pm/swsmu/inc/pmfw_if/smu_v13_0_4_ppsmc.h
+index d9b0cd7522006..f4d6c07b56ea7 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/inc/pmfw_if/smu_v13_0_4_ppsmc.h
++++ b/drivers/gpu/drm/amd/pm/swsmu/inc/pmfw_if/smu_v13_0_4_ppsmc.h
+@@ -54,14 +54,14 @@
+ #define PPSMC_MSG_TestMessage                   0x01 ///< To check if PMFW is alive and responding. Requirement specified by PMFW team
+ #define PPSMC_MSG_GetPmfwVersion                0x02 ///< Get PMFW version
+ #define PPSMC_MSG_GetDriverIfVersion            0x03 ///< Get PMFW_DRIVER_IF version
+-#define PPSMC_MSG_EnableGfxOff                  0x04 ///< Enable GFXOFF
+-#define PPSMC_MSG_DisableGfxOff                 0x05 ///< Disable GFXOFF
++#define PPSMC_MSG_SPARE0                        0x04 ///< SPARE
++#define PPSMC_MSG_SPARE1                        0x05 ///< SPARE
+ #define PPSMC_MSG_PowerDownVcn                  0x06 ///< Power down VCN
+ #define PPSMC_MSG_PowerUpVcn                    0x07 ///< Power up VCN; VCN is power gated by default
+ #define PPSMC_MSG_SetHardMinVcn                 0x08 ///< For wireless display
+ #define PPSMC_MSG_SetSoftMinGfxclk              0x09 ///< Set SoftMin for GFXCLK, argument is frequency in MHz
+-#define PPSMC_MSG_ActiveProcessNotify           0x0A ///< Needs update
+-#define PPSMC_MSG_ForcePowerDownGfx             0x0B ///< Force power down GFX, i.e. enter GFXOFF
++#define PPSMC_MSG_SPARE2                        0x0A ///< SPARE
++#define PPSMC_MSG_SPARE3                        0x0B ///< SPARE
+ #define PPSMC_MSG_PrepareMp1ForUnload           0x0C ///< Prepare PMFW for GFX driver unload
+ #define PPSMC_MSG_SetDriverDramAddrHigh         0x0D ///< Set high 32 bits of DRAM address for Driver table transfer
+ #define PPSMC_MSG_SetDriverDramAddrLow          0x0E ///< Set low 32 bits of DRAM address for Driver table transfer
+@@ -73,8 +73,7 @@
+ #define PPSMC_MSG_SetSoftMinFclk                0x14 ///< Set hard min for FCLK
+ #define PPSMC_MSG_SetSoftMinVcn                 0x15 ///< Set soft min for VCN clocks (VCLK and DCLK)
+ 
+-
+-#define PPSMC_MSG_EnableGfxImu                  0x16 ///< Needs update
++#define PPSMC_MSG_EnableGfxImu                  0x16 ///< Enable GFX IMU
+ 
+ #define PPSMC_MSG_GetGfxclkFrequency            0x17 ///< Get GFX clock frequency
+ #define PPSMC_MSG_GetFclkFrequency              0x18 ///< Get FCLK frequency
+@@ -102,8 +101,8 @@
+ #define PPSMC_MSG_SetHardMinIspxclkByFreq       0x2C ///< Set HardMin by frequency for ISPXCLK
+ #define PPSMC_MSG_PowerDownUmsch                0x2D ///< Power down VCN.UMSCH (aka VSCH) scheduler
+ #define PPSMC_MSG_PowerUpUmsch                  0x2E ///< Power up VCN.UMSCH (aka VSCH) scheduler
+-#define PPSMC_Message_IspStutterOn_MmhubPgDis   0x2F ///< ISP StutterOn mmHub PgDis
+-#define PPSMC_Message_IspStutterOff_MmhubPgEn   0x30 ///< ISP StufferOff mmHub PgEn
++#define PPSMC_MSG_IspStutterOn_MmhubPgDis       0x2F ///< ISP StutterOn mmHub PgDis
++#define PPSMC_MSG_IspStutterOff_MmhubPgEn       0x30 ///< ISP StutterOff mmHub PgEn
+ 
+ #define PPSMC_Message_Count                     0x31 ///< Total number of PPSMC messages
+ /** @}*/
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+index 644ea150e0751..8292839bc42a9 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+@@ -376,7 +376,9 @@ static void sienna_cichlid_check_bxco_support(struct smu_context *smu)
+ 		if (((adev->pdev->device == 0x73A1) &&
+ 		    (adev->pdev->revision == 0x00)) ||
+ 		    ((adev->pdev->device == 0x73BF) &&
+-		    (adev->pdev->revision == 0xCF)))
++		    (adev->pdev->revision == 0xCF)) ||
++		    ((adev->pdev->device == 0x7422) &&
++		    (adev->pdev->revision == 0x00)))
+ 			smu_baco->platform_support = false;
+ 
+ 	}
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
+index d4492b6d23d25..21ba510716b6c 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -5233,7 +5233,7 @@ static bool intel_edp_init_connector(struct intel_dp *intel_dp,
+ 			      encoder->devdata, IS_ERR(edid) ? NULL : edid);
+ 
+ 	intel_panel_add_edid_fixed_modes(intel_connector,
+-					 intel_connector->panel.vbt.drrs_type != DRRS_TYPE_NONE,
++					 intel_connector->panel.vbt.drrs_type != DRRS_TYPE_NONE ||
+ 					 intel_vrr_is_capable(intel_connector));
+ 
+ 	/* MSO requires information from the EDID */
+diff --git a/drivers/gpu/drm/i915/display/intel_lvds.c b/drivers/gpu/drm/i915/display/intel_lvds.c
+index 730480ac3300d..c0bec3e0f0ae9 100644
+--- a/drivers/gpu/drm/i915/display/intel_lvds.c
++++ b/drivers/gpu/drm/i915/display/intel_lvds.c
+@@ -972,8 +972,7 @@ void intel_lvds_init(struct drm_i915_private *dev_priv)
+ 
+ 	/* Try EDID first */
+ 	intel_panel_add_edid_fixed_modes(intel_connector,
+-					 intel_connector->panel.vbt.drrs_type != DRRS_TYPE_NONE,
+-					 false);
++					 intel_connector->panel.vbt.drrs_type != DRRS_TYPE_NONE);
+ 
+ 	/* Failed to get EDID, what about VBT? */
+ 	if (!intel_panel_preferred_fixed_mode(intel_connector))
+diff --git a/drivers/gpu/drm/i915/display/intel_panel.c b/drivers/gpu/drm/i915/display/intel_panel.c
+index 237a40623dd7b..1e008922b95d3 100644
+--- a/drivers/gpu/drm/i915/display/intel_panel.c
++++ b/drivers/gpu/drm/i915/display/intel_panel.c
+@@ -81,15 +81,14 @@ static bool is_alt_drrs_mode(const struct drm_display_mode *mode,
+ 		mode->clock != preferred_mode->clock;
+ }
+ 
+-static bool is_alt_vrr_mode(const struct drm_display_mode *mode,
+-			    const struct drm_display_mode *preferred_mode)
++static bool is_alt_fixed_mode(const struct drm_display_mode *mode,
++			      const struct drm_display_mode *preferred_mode)
+ {
+ 	return drm_mode_match(mode, preferred_mode,
+ 			      DRM_MODE_MATCH_FLAGS |
+ 			      DRM_MODE_MATCH_3D_FLAGS) &&
+ 		mode->hdisplay == preferred_mode->hdisplay &&
+-		mode->vdisplay == preferred_mode->vdisplay &&
+-		mode->clock != preferred_mode->clock;
++		mode->vdisplay == preferred_mode->vdisplay;
+ }
+ 
+ const struct drm_display_mode *
+@@ -172,19 +171,7 @@ int intel_panel_compute_config(struct intel_connector *connector,
+ 	return 0;
+ }
+ 
+-static bool is_alt_fixed_mode(const struct drm_display_mode *mode,
+-			      const struct drm_display_mode *preferred_mode,
+-			      bool has_vrr)
+-{
+-	/* is_alt_drrs_mode() is a subset of is_alt_vrr_mode() */
+-	if (has_vrr)
+-		return is_alt_vrr_mode(mode, preferred_mode);
+-	else
+-		return is_alt_drrs_mode(mode, preferred_mode);
+-}
+-
+-static void intel_panel_add_edid_alt_fixed_modes(struct intel_connector *connector,
+-						 bool has_vrr)
++static void intel_panel_add_edid_alt_fixed_modes(struct intel_connector *connector)
+ {
+ 	struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
+ 	const struct drm_display_mode *preferred_mode =
+@@ -192,7 +179,7 @@ static void intel_panel_add_edid_alt_fixed_modes(struct intel_connector *connect
+ 	struct drm_display_mode *mode, *next;
+ 
+ 	list_for_each_entry_safe(mode, next, &connector->base.probed_modes, head) {
+-		if (!is_alt_fixed_mode(mode, preferred_mode, has_vrr))
++		if (!is_alt_fixed_mode(mode, preferred_mode))
+ 			continue;
+ 
+ 		drm_dbg_kms(&dev_priv->drm,
+@@ -251,11 +238,11 @@ static void intel_panel_destroy_probed_modes(struct intel_connector *connector)
+ }
+ 
+ void intel_panel_add_edid_fixed_modes(struct intel_connector *connector,
+-				      bool has_drrs, bool has_vrr)
++				      bool use_alt_fixed_modes)
+ {
+ 	intel_panel_add_edid_preferred_mode(connector);
+-	if (intel_panel_preferred_fixed_mode(connector) && (has_drrs || has_vrr))
+-		intel_panel_add_edid_alt_fixed_modes(connector, has_vrr);
++	if (intel_panel_preferred_fixed_mode(connector) && use_alt_fixed_modes)
++		intel_panel_add_edid_alt_fixed_modes(connector);
+ 	intel_panel_destroy_probed_modes(connector);
+ }
+ 
+diff --git a/drivers/gpu/drm/i915/display/intel_panel.h b/drivers/gpu/drm/i915/display/intel_panel.h
+index b087c0c3cc6db..4a94bd0eae3bf 100644
+--- a/drivers/gpu/drm/i915/display/intel_panel.h
++++ b/drivers/gpu/drm/i915/display/intel_panel.h
+@@ -41,7 +41,7 @@ int intel_panel_fitting(struct intel_crtc_state *crtc_state,
+ int intel_panel_compute_config(struct intel_connector *connector,
+ 			       struct drm_display_mode *adjusted_mode);
+ void intel_panel_add_edid_fixed_modes(struct intel_connector *connector,
+-				      bool has_drrs, bool has_vrr);
++				      bool use_alt_fixed_modes);
+ void intel_panel_add_vbt_lfp_fixed_mode(struct intel_connector *connector);
+ void intel_panel_add_vbt_sdvo_fixed_mode(struct intel_connector *connector);
+ void intel_panel_add_encoder_fixed_mode(struct intel_connector *connector,
+diff --git a/drivers/gpu/drm/i915/display/intel_psr.c b/drivers/gpu/drm/i915/display/intel_psr.c
+index e6a870641cd25..fbe777e02ea7f 100644
+--- a/drivers/gpu/drm/i915/display/intel_psr.c
++++ b/drivers/gpu/drm/i915/display/intel_psr.c
+@@ -2188,8 +2188,11 @@ static void _psr_invalidate_handle(struct intel_dp *intel_dp)
+ 	if (intel_dp->psr.psr2_sel_fetch_enabled) {
+ 		u32 val;
+ 
+-		if (intel_dp->psr.psr2_sel_fetch_cff_enabled)
++		if (intel_dp->psr.psr2_sel_fetch_cff_enabled) {
++			/* Send one update, otherwise lag is observed on screen */
++			intel_de_write(dev_priv, CURSURFLIVE(intel_dp->psr.pipe), 0);
+ 			return;
++		}
+ 
+ 		val = man_trk_ctl_enable_bit_get(dev_priv) |
+ 		      man_trk_ctl_partial_frame_bit_get(dev_priv) |
+diff --git a/drivers/gpu/drm/i915/display/intel_sdvo.c b/drivers/gpu/drm/i915/display/intel_sdvo.c
+index b5f65b093c106..6b471fc297bd6 100644
+--- a/drivers/gpu/drm/i915/display/intel_sdvo.c
++++ b/drivers/gpu/drm/i915/display/intel_sdvo.c
+@@ -2900,8 +2900,12 @@ intel_sdvo_lvds_init(struct intel_sdvo *intel_sdvo, int device)
+ 	intel_panel_add_vbt_sdvo_fixed_mode(intel_connector);
+ 
+ 	if (!intel_panel_preferred_fixed_mode(intel_connector)) {
++		mutex_lock(&i915->drm.mode_config.mutex);
++
+ 		intel_ddc_get_modes(connector, &intel_sdvo->ddc);
+-		intel_panel_add_edid_fixed_modes(intel_connector, false, false);
++		intel_panel_add_edid_fixed_modes(intel_connector, false);
++
++		mutex_unlock(&i915->drm.mode_config.mutex);
+ 	}
+ 
+ 	intel_panel_init(intel_connector);
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
+index f5062d0c63336..824971a1ceece 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
+@@ -40,13 +40,13 @@ static struct sg_table *i915_gem_map_dma_buf(struct dma_buf_attachment *attachme
+ 		goto err;
+ 	}
+ 
+-	ret = sg_alloc_table(st, obj->mm.pages->nents, GFP_KERNEL);
++	ret = sg_alloc_table(st, obj->mm.pages->orig_nents, GFP_KERNEL);
+ 	if (ret)
+ 		goto err_free;
+ 
+ 	src = obj->mm.pages->sgl;
+ 	dst = st->sgl;
+-	for (i = 0; i < obj->mm.pages->nents; i++) {
++	for (i = 0; i < obj->mm.pages->orig_nents; i++) {
+ 		sg_set_page(dst, sg_page(src), src->length, 0);
+ 		dst = sg_next(dst);
+ 		src = sg_next(src);
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+index 34b9c76cd8e66..b5eb279a5c2c7 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+@@ -369,14 +369,14 @@ __i915_gem_object_release_shmem(struct drm_i915_gem_object *obj,
+ 
+ 	__start_cpu_write(obj);
+ 	/*
+-	 * On non-LLC platforms, force the flush-on-acquire if this is ever
++	 * On non-LLC igfx platforms, force the flush-on-acquire if this is ever
+ 	 * swapped-in. Our async flush path is not trust worthy enough yet(and
+ 	 * happens in the wrong order), and with some tricks it's conceivable
+ 	 * for userspace to change the cache-level to I915_CACHE_NONE after the
+ 	 * pages are swapped-in, and since execbuf binds the object before doing
+ 	 * the async flush, we have a race window.
+ 	 */
+-	if (!HAS_LLC(i915))
++	if (!HAS_LLC(i915) && !IS_DGFX(i915))
+ 		obj->cache_dirty = true;
+ }
+ 
+diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
+index e3cd589464777..de89946c4817f 100644
+--- a/drivers/gpu/drm/i915/gvt/kvmgt.c
++++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
+@@ -1595,6 +1595,9 @@ static void intel_vgpu_remove(struct mdev_device *mdev)
+ 
+ 	if (WARN_ON_ONCE(vgpu->attached))
+ 		return;
++
++	vfio_unregister_group_dev(&vgpu->vfio_device);
++	vfio_uninit_group_dev(&vgpu->vfio_device);
+ 	intel_gvt_destroy_vgpu(vgpu);
+ }
+ 
+diff --git a/drivers/gpu/drm/vc4/vc4_drv.c b/drivers/gpu/drm/vc4/vc4_drv.c
+index c186ace7f83b9..2064863a0fd32 100644
+--- a/drivers/gpu/drm/vc4/vc4_drv.c
++++ b/drivers/gpu/drm/vc4/vc4_drv.c
+@@ -476,7 +476,12 @@ static int __init vc4_drm_register(void)
+ 	if (ret)
+ 		return ret;
+ 
+-	return platform_driver_register(&vc4_platform_driver);
++	ret = platform_driver_register(&vc4_platform_driver);
++	if (ret)
++		platform_unregister_drivers(component_drivers,
++					    ARRAY_SIZE(component_drivers));
++
++	return ret;
+ }
+ 
+ static void __exit vc4_drm_unregister(void)
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index 874c6bd787c56..4e5bba0822a5d 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -2712,9 +2712,16 @@ static int vc4_hdmi_init_resources(struct vc4_hdmi *vc4_hdmi)
+ 		DRM_ERROR("Failed to get HDMI state machine clock\n");
+ 		return PTR_ERR(vc4_hdmi->hsm_clock);
+ 	}
++
+ 	vc4_hdmi->audio_clock = vc4_hdmi->hsm_clock;
+ 	vc4_hdmi->cec_clock = vc4_hdmi->hsm_clock;
+ 
++	vc4_hdmi->hsm_rpm_clock = devm_clk_get(dev, "hdmi");
++	if (IS_ERR(vc4_hdmi->hsm_rpm_clock)) {
++		DRM_ERROR("Failed to get HDMI state machine clock\n");
++		return PTR_ERR(vc4_hdmi->hsm_rpm_clock);
++	}
++
+ 	return 0;
+ }
+ 
+@@ -2796,6 +2803,12 @@ static int vc5_hdmi_init_resources(struct vc4_hdmi *vc4_hdmi)
+ 		return PTR_ERR(vc4_hdmi->hsm_clock);
+ 	}
+ 
++	vc4_hdmi->hsm_rpm_clock = devm_clk_get(dev, "hdmi");
++	if (IS_ERR(vc4_hdmi->hsm_rpm_clock)) {
++		DRM_ERROR("Failed to get HDMI state machine clock\n");
++		return PTR_ERR(vc4_hdmi->hsm_rpm_clock);
++	}
++
+ 	vc4_hdmi->pixel_bvb_clock = devm_clk_get(dev, "bvb");
+ 	if (IS_ERR(vc4_hdmi->pixel_bvb_clock)) {
+ 		DRM_ERROR("Failed to get pixel bvb clock\n");
+@@ -2859,7 +2872,7 @@ static int vc4_hdmi_runtime_suspend(struct device *dev)
+ {
+ 	struct vc4_hdmi *vc4_hdmi = dev_get_drvdata(dev);
+ 
+-	clk_disable_unprepare(vc4_hdmi->hsm_clock);
++	clk_disable_unprepare(vc4_hdmi->hsm_rpm_clock);
+ 
+ 	return 0;
+ }
+@@ -2877,11 +2890,11 @@ static int vc4_hdmi_runtime_resume(struct device *dev)
+ 	 * its frequency while the power domain is active so that it
+ 	 * keeps its rate.
+ 	 */
+-	ret = clk_set_min_rate(vc4_hdmi->hsm_clock, HSM_MIN_CLOCK_FREQ);
++	ret = clk_set_min_rate(vc4_hdmi->hsm_rpm_clock, HSM_MIN_CLOCK_FREQ);
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = clk_prepare_enable(vc4_hdmi->hsm_clock);
++	ret = clk_prepare_enable(vc4_hdmi->hsm_rpm_clock);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -2894,7 +2907,7 @@ static int vc4_hdmi_runtime_resume(struct device *dev)
+ 	 * case, it will lead to a silent CPU stall. Let's make sure we
+ 	 * prevent such a case.
+ 	 */
+-	rate = clk_get_rate(vc4_hdmi->hsm_clock);
++	rate = clk_get_rate(vc4_hdmi->hsm_rpm_clock);
+ 	if (!rate) {
+ 		ret = -EINVAL;
+ 		goto err_disable_clk;
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.h b/drivers/gpu/drm/vc4/vc4_hdmi.h
+index c3ed2b07df235..47f141ec8c40c 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.h
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.h
+@@ -171,6 +171,7 @@ struct vc4_hdmi {
+ 	struct clk *cec_clock;
+ 	struct clk *pixel_clock;
+ 	struct clk *hsm_clock;
++	struct clk *hsm_rpm_clock;
+ 	struct clk *audio_clock;
+ 	struct clk *pixel_bvb_clock;
+ 
+diff --git a/drivers/hid/hid-hyperv.c b/drivers/hid/hid-hyperv.c
+index e0bc731241960..ab57b49a44ed9 100644
+--- a/drivers/hid/hid-hyperv.c
++++ b/drivers/hid/hid-hyperv.c
+@@ -499,7 +499,7 @@ static int mousevsc_probe(struct hv_device *device,
+ 
+ 	ret = hid_add_device(hid_dev);
+ 	if (ret)
+-		goto probe_err1;
++		goto probe_err2;
+ 
+ 
+ 	ret = hid_parse(hid_dev);
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index d049239256a26..2bd1a43021c92 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -2522,11 +2522,12 @@ static void wacom_wac_pen_report(struct hid_device *hdev,
+ 
+ 	if (!delay_pen_events(wacom_wac) && wacom_wac->tool[0]) {
+ 		int id = wacom_wac->id[0];
+-		if (wacom_wac->features.quirks & WACOM_QUIRK_PEN_BUTTON3 &&
+-		    wacom_wac->hid_data.barrelswitch & wacom_wac->hid_data.barrelswitch2) {
+-			wacom_wac->hid_data.barrelswitch = 0;
+-			wacom_wac->hid_data.barrelswitch2 = 0;
+-			wacom_wac->hid_data.barrelswitch3 = 1;
++		if (wacom_wac->features.quirks & WACOM_QUIRK_PEN_BUTTON3) {
++			int sw_state = wacom_wac->hid_data.barrelswitch |
++				       (wacom_wac->hid_data.barrelswitch2 << 1);
++			wacom_wac->hid_data.barrelswitch = sw_state == 1;
++			wacom_wac->hid_data.barrelswitch2 = sw_state == 2;
++			wacom_wac->hid_data.barrelswitch3 = sw_state == 3;
+ 		}
+ 		input_report_key(input, BTN_STYLUS, wacom_wac->hid_data.barrelswitch);
+ 		input_report_key(input, BTN_STYLUS2, wacom_wac->hid_data.barrelswitch2);
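The rewritten quirk handling folds the two raw barrel-switch bits into a
single 2-bit state (bit 0 = barrelswitch, bit 1 = barrelswitch2) and then
reports exactly one logical button per state, where the old code's bitwise
AND only caught the both-pressed chord:

	sw2 sw1   sw_state   button reported
	 0   0       0       none
	 0   1       1       BTN_STYLUS
	 1   0       2       BTN_STYLUS2
	 1   1       3       BTN_STYLUS3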
+diff --git a/drivers/hwspinlock/qcom_hwspinlock.c b/drivers/hwspinlock/qcom_hwspinlock.c
+index 80ea45b3a815f..9734e149d981b 100644
+--- a/drivers/hwspinlock/qcom_hwspinlock.c
++++ b/drivers/hwspinlock/qcom_hwspinlock.c
+@@ -121,7 +121,7 @@ static const struct regmap_config tcsr_mutex_config = {
+ 	.reg_bits		= 32,
+ 	.reg_stride		= 4,
+ 	.val_bits		= 32,
+-	.max_register		= 0x40000,
++	.max_register		= 0x20000,
+ 	.fast_io		= true,
+ };
+ 
+diff --git a/drivers/mmc/host/sdhci-brcmstb.c b/drivers/mmc/host/sdhci-brcmstb.c
+index aff36a933ebec..55d8bd232695c 100644
+--- a/drivers/mmc/host/sdhci-brcmstb.c
++++ b/drivers/mmc/host/sdhci-brcmstb.c
+@@ -12,6 +12,7 @@
+ #include <linux/bitops.h>
+ #include <linux/delay.h>
+ 
++#include "sdhci-cqhci.h"
+ #include "sdhci-pltfm.h"
+ #include "cqhci.h"
+ 
+@@ -55,7 +56,7 @@ static void brcmstb_reset(struct sdhci_host *host, u8 mask)
+ 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+ 	struct sdhci_brcmstb_priv *priv = sdhci_pltfm_priv(pltfm_host);
+ 
+-	sdhci_reset(host, mask);
++	sdhci_and_cqhci_reset(host, mask);
+ 
+ 	/* Reset will clear this, so re-enable it */
+ 	if (priv->flags & BRCMSTB_PRIV_FLAGS_GATE_CLOCK)
+diff --git a/drivers/mmc/host/sdhci-cqhci.h b/drivers/mmc/host/sdhci-cqhci.h
+new file mode 100644
+index 0000000000000..cf8e7ba71bbd7
+--- /dev/null
++++ b/drivers/mmc/host/sdhci-cqhci.h
+@@ -0,0 +1,24 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++/*
++ * Copyright 2022 The Chromium OS Authors
++ *
++ * Support that applies to the combination of SDHCI and CQHCI, while not
++ * expressing a dependency between the two modules.
++ */
++
++#ifndef __MMC_HOST_SDHCI_CQHCI_H__
++#define __MMC_HOST_SDHCI_CQHCI_H__
++
++#include "cqhci.h"
++#include "sdhci.h"
++
++static inline void sdhci_and_cqhci_reset(struct sdhci_host *host, u8 mask)
++{
++	if ((host->mmc->caps2 & MMC_CAP2_CQE) && (mask & SDHCI_RESET_ALL) &&
++	    host->mmc->cqe_private)
++		cqhci_deactivate(host->mmc);
++
++	sdhci_reset(host, mask);
++}
++
++#endif /* __MMC_HOST_SDHCI_CQHCI_H__ */
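The hunks that follow use this helper in two ways: drivers that already had
a private reset wrapper (brcmstb, esdhc-imx, arasan, tegra, am654) swap the
inner sdhci_reset() call for sdhci_and_cqhci_reset(), and ops tables that
pointed straight at sdhci_reset retarget the callback, e.g. (sketch
mirroring the sdhci_am654 change below):

static const struct sdhci_ops example_ops = {
	/* CQHCI is deactivated automatically on a SDHCI_RESET_ALL */
	.reset = sdhci_and_cqhci_reset,
};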
+diff --git a/drivers/mmc/host/sdhci-esdhc-imx.c b/drivers/mmc/host/sdhci-esdhc-imx.c
+index 747df79d90eef..31ea0a2fce358 100644
+--- a/drivers/mmc/host/sdhci-esdhc-imx.c
++++ b/drivers/mmc/host/sdhci-esdhc-imx.c
+@@ -25,6 +25,7 @@
+ #include <linux/of_device.h>
+ #include <linux/pinctrl/consumer.h>
+ #include <linux/pm_runtime.h>
++#include "sdhci-cqhci.h"
+ #include "sdhci-pltfm.h"
+ #include "sdhci-esdhc.h"
+ #include "cqhci.h"
+@@ -1288,7 +1289,7 @@ static void esdhc_set_uhs_signaling(struct sdhci_host *host, unsigned timing)
+ 
+ static void esdhc_reset(struct sdhci_host *host, u8 mask)
+ {
+-	sdhci_reset(host, mask);
++	sdhci_and_cqhci_reset(host, mask);
+ 
+ 	sdhci_writel(host, host->ier, SDHCI_INT_ENABLE);
+ 	sdhci_writel(host, host->ier, SDHCI_SIGNAL_ENABLE);
+@@ -1671,14 +1672,14 @@ static int sdhci_esdhc_imx_probe(struct platform_device *pdev)
+ 	if (imx_data->socdata->flags & ESDHC_FLAG_ERR004536)
+ 		host->quirks |= SDHCI_QUIRK_BROKEN_ADMA;
+ 
+-	if (host->caps & MMC_CAP_8_BIT_DATA &&
++	if (host->mmc->caps & MMC_CAP_8_BIT_DATA &&
+ 	    imx_data->socdata->flags & ESDHC_FLAG_HS400)
+ 		host->mmc->caps2 |= MMC_CAP2_HS400;
+ 
+ 	if (imx_data->socdata->flags & ESDHC_FLAG_BROKEN_AUTO_CMD23)
+ 		host->quirks2 |= SDHCI_QUIRK2_ACMD23_BROKEN;
+ 
+-	if (host->caps & MMC_CAP_8_BIT_DATA &&
++	if (host->mmc->caps & MMC_CAP_8_BIT_DATA &&
+ 	    imx_data->socdata->flags & ESDHC_FLAG_HS400_ES) {
+ 		host->mmc->caps2 |= MMC_CAP2_HS400_ES;
+ 		host->mmc_host_ops.hs400_enhanced_strobe =
+diff --git a/drivers/mmc/host/sdhci-of-arasan.c b/drivers/mmc/host/sdhci-of-arasan.c
+index 3997cad1f793d..cfb891430174a 100644
+--- a/drivers/mmc/host/sdhci-of-arasan.c
++++ b/drivers/mmc/host/sdhci-of-arasan.c
+@@ -25,6 +25,7 @@
+ #include <linux/firmware/xlnx-zynqmp.h>
+ 
+ #include "cqhci.h"
++#include "sdhci-cqhci.h"
+ #include "sdhci-pltfm.h"
+ 
+ #define SDHCI_ARASAN_VENDOR_REGISTER	0x78
+@@ -366,7 +367,7 @@ static void sdhci_arasan_reset(struct sdhci_host *host, u8 mask)
+ 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+ 	struct sdhci_arasan_data *sdhci_arasan = sdhci_pltfm_priv(pltfm_host);
+ 
+-	sdhci_reset(host, mask);
++	sdhci_and_cqhci_reset(host, mask);
+ 
+ 	if (sdhci_arasan->quirks & SDHCI_ARASAN_QUIRK_FORCE_CDTEST) {
+ 		ctrl = sdhci_readb(host, SDHCI_HOST_CONTROL);
+diff --git a/drivers/mmc/host/sdhci-tegra.c b/drivers/mmc/host/sdhci-tegra.c
+index 413925bce0ca8..c71000a07656e 100644
+--- a/drivers/mmc/host/sdhci-tegra.c
++++ b/drivers/mmc/host/sdhci-tegra.c
+@@ -28,6 +28,7 @@
+ 
+ #include <soc/tegra/common.h>
+ 
++#include "sdhci-cqhci.h"
+ #include "sdhci-pltfm.h"
+ #include "cqhci.h"
+ 
+@@ -367,7 +368,7 @@ static void tegra_sdhci_reset(struct sdhci_host *host, u8 mask)
+ 	const struct sdhci_tegra_soc_data *soc_data = tegra_host->soc_data;
+ 	u32 misc_ctrl, clk_ctrl, pad_ctrl;
+ 
+-	sdhci_reset(host, mask);
++	sdhci_and_cqhci_reset(host, mask);
+ 
+ 	if (!(mask & SDHCI_RESET_ALL))
+ 		return;
+diff --git a/drivers/mmc/host/sdhci_am654.c b/drivers/mmc/host/sdhci_am654.c
+index e7ced1496a073..b82ab5f1fcf31 100644
+--- a/drivers/mmc/host/sdhci_am654.c
++++ b/drivers/mmc/host/sdhci_am654.c
+@@ -15,6 +15,7 @@
+ #include <linux/sys_soc.h>
+ 
+ #include "cqhci.h"
++#include "sdhci-cqhci.h"
+ #include "sdhci-pltfm.h"
+ 
+ /* CTL_CFG Registers */
+@@ -378,7 +379,7 @@ static void sdhci_am654_reset(struct sdhci_host *host, u8 mask)
+ 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+ 	struct sdhci_am654_data *sdhci_am654 = sdhci_pltfm_priv(pltfm_host);
+ 
+-	sdhci_reset(host, mask);
++	sdhci_and_cqhci_reset(host, mask);
+ 
+ 	if (sdhci_am654->quirks & SDHCI_AM654_QUIRK_FORCE_CDTEST) {
+ 		ctrl = sdhci_readb(host, SDHCI_HOST_CONTROL);
+@@ -464,7 +465,7 @@ static struct sdhci_ops sdhci_am654_ops = {
+ 	.set_clock = sdhci_am654_set_clock,
+ 	.write_b = sdhci_am654_write_b,
+ 	.irq = sdhci_am654_cqhci_irq,
+-	.reset = sdhci_reset,
++	.reset = sdhci_and_cqhci_reset,
+ };
+ 
+ static const struct sdhci_pltfm_data sdhci_am654_pdata = {
+@@ -494,7 +495,7 @@ static struct sdhci_ops sdhci_j721e_8bit_ops = {
+ 	.set_clock = sdhci_am654_set_clock,
+ 	.write_b = sdhci_am654_write_b,
+ 	.irq = sdhci_am654_cqhci_irq,
+-	.reset = sdhci_reset,
++	.reset = sdhci_and_cqhci_reset,
+ };
+ 
+ static const struct sdhci_pltfm_data sdhci_j721e_8bit_pdata = {
+diff --git a/drivers/net/can/at91_can.c b/drivers/net/can/at91_can.c
+index 3a2d109a3792f..199cb200f2bdd 100644
+--- a/drivers/net/can/at91_can.c
++++ b/drivers/net/can/at91_can.c
+@@ -452,7 +452,7 @@ static netdev_tx_t at91_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	unsigned int mb, prio;
+ 	u32 reg_mid, reg_mcr;
+ 
+-	if (can_dropped_invalid_skb(dev, skb))
++	if (can_dev_dropped_skb(dev, skb))
+ 		return NETDEV_TX_OK;
+ 
+ 	mb = get_tx_next_mb(priv);
+diff --git a/drivers/net/can/c_can/c_can_main.c b/drivers/net/can/c_can/c_can_main.c
+index d6605dbb7737b..c63f7fc1e6917 100644
+--- a/drivers/net/can/c_can/c_can_main.c
++++ b/drivers/net/can/c_can/c_can_main.c
+@@ -457,7 +457,7 @@ static netdev_tx_t c_can_start_xmit(struct sk_buff *skb,
+ 	struct c_can_tx_ring *tx_ring = &priv->tx;
+ 	u32 idx, obj, cmd = IF_COMM_TX;
+ 
+-	if (can_dropped_invalid_skb(dev, skb))
++	if (can_dev_dropped_skb(dev, skb))
+ 		return NETDEV_TX_OK;
+ 
+ 	if (c_can_tx_busy(priv, tx_ring))
+diff --git a/drivers/net/can/can327.c b/drivers/net/can/can327.c
+index 0aa1af31d0fe4..0941977807761 100644
+--- a/drivers/net/can/can327.c
++++ b/drivers/net/can/can327.c
+@@ -813,7 +813,7 @@ static netdev_tx_t can327_netdev_start_xmit(struct sk_buff *skb,
+ 	struct can327 *elm = netdev_priv(dev);
+ 	struct can_frame *frame = (struct can_frame *)skb->data;
+ 
+-	if (can_dropped_invalid_skb(dev, skb))
++	if (can_dev_dropped_skb(dev, skb))
+ 		return NETDEV_TX_OK;
+ 
+ 	/* We shouldn't get here after a hardware fault:
+diff --git a/drivers/net/can/cc770/cc770.c b/drivers/net/can/cc770/cc770.c
+index 0b9dfc76e769c..30909f3aab576 100644
+--- a/drivers/net/can/cc770/cc770.c
++++ b/drivers/net/can/cc770/cc770.c
+@@ -429,7 +429,7 @@ static netdev_tx_t cc770_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	struct cc770_priv *priv = netdev_priv(dev);
+ 	unsigned int mo = obj2msgobj(CC770_OBJ_TX);
+ 
+-	if (can_dropped_invalid_skb(dev, skb))
++	if (can_dev_dropped_skb(dev, skb))
+ 		return NETDEV_TX_OK;
+ 
+ 	netif_stop_queue(dev);
+diff --git a/drivers/net/can/ctucanfd/ctucanfd_base.c b/drivers/net/can/ctucanfd/ctucanfd_base.c
+index 3c18d028bd8ce..c2c51d7af1bd4 100644
+--- a/drivers/net/can/ctucanfd/ctucanfd_base.c
++++ b/drivers/net/can/ctucanfd/ctucanfd_base.c
+@@ -600,7 +600,7 @@ static netdev_tx_t ctucan_start_xmit(struct sk_buff *skb, struct net_device *nde
+ 	bool ok;
+ 	unsigned long flags;
+ 
+-	if (can_dropped_invalid_skb(ndev, skb))
++	if (can_dev_dropped_skb(ndev, skb))
+ 		return NETDEV_TX_OK;
+ 
+ 	if (unlikely(!CTU_CAN_FD_TXTNF(priv))) {
+diff --git a/drivers/net/can/dev/skb.c b/drivers/net/can/dev/skb.c
+index 07e0feac86292..3f37149d2c7aa 100644
+--- a/drivers/net/can/dev/skb.c
++++ b/drivers/net/can/dev/skb.c
+@@ -5,7 +5,6 @@
+  */
+ 
+ #include <linux/can/dev.h>
+-#include <linux/can/netlink.h>
+ #include <linux/module.h>
+ 
+ #define MOD_DESC "CAN device driver interface"
+@@ -300,7 +299,6 @@ static bool can_skb_headroom_valid(struct net_device *dev, struct sk_buff *skb)
+ bool can_dropped_invalid_skb(struct net_device *dev, struct sk_buff *skb)
+ {
+ 	const struct canfd_frame *cfd = (struct canfd_frame *)skb->data;
+-	struct can_priv *priv = netdev_priv(dev);
+ 
+ 	if (skb->protocol == htons(ETH_P_CAN)) {
+ 		if (unlikely(skb->len != CAN_MTU ||
+@@ -314,13 +312,8 @@ bool can_dropped_invalid_skb(struct net_device *dev, struct sk_buff *skb)
+ 		goto inval_skb;
+ 	}
+ 
+-	if (!can_skb_headroom_valid(dev, skb)) {
+-		goto inval_skb;
+-	} else if (priv->ctrlmode & CAN_CTRLMODE_LISTENONLY) {
+-		netdev_info_once(dev,
+-				 "interface in listen only mode, dropping skb\n");
++	if (!can_skb_headroom_valid(dev, skb))
+ 		goto inval_skb;
+-	}
+ 
+ 	return false;
+ 
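The listen-only check removed from can_dropped_invalid_skb() above does not
disappear: the tree-wide driver hunks below replace can_dropped_invalid_skb()
in the start_xmit paths with a new can_dev_dropped_skb() helper that performs
the device-state check first and then delegates to the existing frame
validation. Roughly, its shape in include/linux/can/dev.h is (paraphrased;
check the header for the authoritative version):

static inline bool can_dev_dropped_skb(struct net_device *dev,
				       struct sk_buff *skb)
{
	struct can_priv *priv = netdev_priv(dev);

	if (priv->ctrlmode & CAN_CTRLMODE_LISTENONLY) {
		netdev_info_once(dev,
				 "interface in listen only mode, dropping skb\n");
		kfree_skb(skb);
		dev->stats.tx_dropped++;
		return true;
	}

	return can_dropped_invalid_skb(dev, skb);
}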
+diff --git a/drivers/net/can/flexcan/flexcan-core.c b/drivers/net/can/flexcan/flexcan-core.c
+index ccb438eca517d..1fcc65350f102 100644
+--- a/drivers/net/can/flexcan/flexcan-core.c
++++ b/drivers/net/can/flexcan/flexcan-core.c
+@@ -742,7 +742,7 @@ static netdev_tx_t flexcan_start_xmit(struct sk_buff *skb, struct net_device *de
+ 	u32 ctrl = FLEXCAN_MB_CODE_TX_DATA | ((can_fd_len2dlc(cfd->len)) << 16);
+ 	int i;
+ 
+-	if (can_dropped_invalid_skb(dev, skb))
++	if (can_dev_dropped_skb(dev, skb))
+ 		return NETDEV_TX_OK;
+ 
+ 	netif_stop_queue(dev);
+diff --git a/drivers/net/can/grcan.c b/drivers/net/can/grcan.c
+index 6c37aab93eb3a..4bedcc3eea0d6 100644
+--- a/drivers/net/can/grcan.c
++++ b/drivers/net/can/grcan.c
+@@ -1345,7 +1345,7 @@ static netdev_tx_t grcan_start_xmit(struct sk_buff *skb,
+ 	unsigned long flags;
+ 	u32 oneshotmode = priv->can.ctrlmode & CAN_CTRLMODE_ONE_SHOT;
+ 
+-	if (can_dropped_invalid_skb(dev, skb))
++	if (can_dev_dropped_skb(dev, skb))
+ 		return NETDEV_TX_OK;
+ 
+ 	/* Trying to transmit in silent mode will generate error interrupts, but
+diff --git a/drivers/net/can/ifi_canfd/ifi_canfd.c b/drivers/net/can/ifi_canfd/ifi_canfd.c
+index ad7a89b95da71..78b6e31487cff 100644
+--- a/drivers/net/can/ifi_canfd/ifi_canfd.c
++++ b/drivers/net/can/ifi_canfd/ifi_canfd.c
+@@ -860,7 +860,7 @@ static netdev_tx_t ifi_canfd_start_xmit(struct sk_buff *skb,
+ 	u32 txst, txid, txdlc;
+ 	int i;
+ 
+-	if (can_dropped_invalid_skb(ndev, skb))
++	if (can_dev_dropped_skb(ndev, skb))
+ 		return NETDEV_TX_OK;
+ 
+ 	/* Check if the TX buffer is full */
+diff --git a/drivers/net/can/janz-ican3.c b/drivers/net/can/janz-ican3.c
+index 71a2caae07579..0732a50921418 100644
+--- a/drivers/net/can/janz-ican3.c
++++ b/drivers/net/can/janz-ican3.c
+@@ -1693,7 +1693,7 @@ static netdev_tx_t ican3_xmit(struct sk_buff *skb, struct net_device *ndev)
+ 	void __iomem *desc_addr;
+ 	unsigned long flags;
+ 
+-	if (can_dropped_invalid_skb(ndev, skb))
++	if (can_dev_dropped_skb(ndev, skb))
+ 		return NETDEV_TX_OK;
+ 
+ 	spin_lock_irqsave(&mod->lock, flags);
+diff --git a/drivers/net/can/kvaser_pciefd.c b/drivers/net/can/kvaser_pciefd.c
+index ed54c0b3c7d46..e7cc499c17733 100644
+--- a/drivers/net/can/kvaser_pciefd.c
++++ b/drivers/net/can/kvaser_pciefd.c
+@@ -775,7 +775,7 @@ static netdev_tx_t kvaser_pciefd_start_xmit(struct sk_buff *skb,
+ 	int nwords;
+ 	u8 count;
+ 
+-	if (can_dropped_invalid_skb(netdev, skb))
++	if (can_dev_dropped_skb(netdev, skb))
+ 		return NETDEV_TX_OK;
+ 
+ 	nwords = kvaser_pciefd_prepare_tx_packet(&packet, can, skb);
+diff --git a/drivers/net/can/m_can/m_can.c b/drivers/net/can/m_can/m_can.c
+index 4709c012b1dc9..4dc67fdfcdb9d 100644
+--- a/drivers/net/can/m_can/m_can.c
++++ b/drivers/net/can/m_can/m_can.c
+@@ -1722,7 +1722,7 @@ static netdev_tx_t m_can_start_xmit(struct sk_buff *skb,
+ {
+ 	struct m_can_classdev *cdev = netdev_priv(dev);
+ 
+-	if (can_dropped_invalid_skb(dev, skb))
++	if (can_dev_dropped_skb(dev, skb))
+ 		return NETDEV_TX_OK;
+ 
+ 	if (cdev->is_peripheral) {
+diff --git a/drivers/net/can/mscan/mscan.c b/drivers/net/can/mscan/mscan.c
+index 2119fbb287efc..a6829cdc0e81f 100644
+--- a/drivers/net/can/mscan/mscan.c
++++ b/drivers/net/can/mscan/mscan.c
+@@ -191,7 +191,7 @@ static netdev_tx_t mscan_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	int i, rtr, buf_id;
+ 	u32 can_id;
+ 
+-	if (can_dropped_invalid_skb(dev, skb))
++	if (can_dev_dropped_skb(dev, skb))
+ 		return NETDEV_TX_OK;
+ 
+ 	out_8(&regs->cantier, 0);
+diff --git a/drivers/net/can/pch_can.c b/drivers/net/can/pch_can.c
+index 0558ff67ec6ab..2a44b2803e555 100644
+--- a/drivers/net/can/pch_can.c
++++ b/drivers/net/can/pch_can.c
+@@ -882,7 +882,7 @@ static netdev_tx_t pch_xmit(struct sk_buff *skb, struct net_device *ndev)
+ 	int i;
+ 	u32 id2;
+ 
+-	if (can_dropped_invalid_skb(ndev, skb))
++	if (can_dev_dropped_skb(ndev, skb))
+ 		return NETDEV_TX_OK;
+ 
+ 	tx_obj_no = priv->tx_obj;
+diff --git a/drivers/net/can/peak_canfd/peak_canfd.c b/drivers/net/can/peak_canfd/peak_canfd.c
+index f8420cc1d9075..31c9c127e24bb 100644
+--- a/drivers/net/can/peak_canfd/peak_canfd.c
++++ b/drivers/net/can/peak_canfd/peak_canfd.c
+@@ -651,7 +651,7 @@ static netdev_tx_t peak_canfd_start_xmit(struct sk_buff *skb,
+ 	int room_left;
+ 	u8 len;
+ 
+-	if (can_dropped_invalid_skb(ndev, skb))
++	if (can_dev_dropped_skb(ndev, skb))
+ 		return NETDEV_TX_OK;
+ 
+ 	msg_size = ALIGN(sizeof(*msg) + cf->len, 4);
+diff --git a/drivers/net/can/rcar/rcar_can.c b/drivers/net/can/rcar/rcar_can.c
+index 6ee968c59ac90..cc43c9c5e38c5 100644
+--- a/drivers/net/can/rcar/rcar_can.c
++++ b/drivers/net/can/rcar/rcar_can.c
+@@ -590,7 +590,7 @@ static netdev_tx_t rcar_can_start_xmit(struct sk_buff *skb,
+ 	struct can_frame *cf = (struct can_frame *)skb->data;
+ 	u32 data, i;
+ 
+-	if (can_dropped_invalid_skb(ndev, skb))
++	if (can_dev_dropped_skb(ndev, skb))
+ 		return NETDEV_TX_OK;
+ 
+ 	if (cf->can_id & CAN_EFF_FLAG)	/* Extended frame format */
+diff --git a/drivers/net/can/rcar/rcar_canfd.c b/drivers/net/can/rcar/rcar_canfd.c
+index d77c8d6d191a6..26ba650a8cbc2 100644
+--- a/drivers/net/can/rcar/rcar_canfd.c
++++ b/drivers/net/can/rcar/rcar_canfd.c
+@@ -81,8 +81,7 @@ enum rcanfd_chip_id {
+ 
+ /* RSCFDnCFDGERFL / RSCFDnGERFL */
+ #define RCANFD_GERFL_EEF0_7		GENMASK(23, 16)
+-#define RCANFD_GERFL_EEF1		BIT(17)
+-#define RCANFD_GERFL_EEF0		BIT(16)
++#define RCANFD_GERFL_EEF(ch)		BIT(16 + (ch))
+ #define RCANFD_GERFL_CMPOF		BIT(3)	/* CAN FD only */
+ #define RCANFD_GERFL_THLES		BIT(2)
+ #define RCANFD_GERFL_MES		BIT(1)
+@@ -90,7 +89,7 @@ enum rcanfd_chip_id {
+ 
+ #define RCANFD_GERFL_ERR(gpriv, x) \
+ 	((x) & (reg_v3u(gpriv, RCANFD_GERFL_EEF0_7, \
+-			RCANFD_GERFL_EEF0 | RCANFD_GERFL_EEF1) | \
++			RCANFD_GERFL_EEF(0) | RCANFD_GERFL_EEF(1)) | \
+ 		RCANFD_GERFL_MES | \
+ 		((gpriv)->fdmode ? RCANFD_GERFL_CMPOF : 0)))
+ 
+@@ -936,12 +935,8 @@ static void rcar_canfd_global_error(struct net_device *ndev)
+ 	u32 ridx = ch + RCANFD_RFFIFO_IDX;
+ 
+ 	gerfl = rcar_canfd_read(priv->base, RCANFD_GERFL);
+-	if ((gerfl & RCANFD_GERFL_EEF0) && (ch == 0)) {
+-		netdev_dbg(ndev, "Ch0: ECC Error flag\n");
+-		stats->tx_dropped++;
+-	}
+-	if ((gerfl & RCANFD_GERFL_EEF1) && (ch == 1)) {
+-		netdev_dbg(ndev, "Ch1: ECC Error flag\n");
++	if (gerfl & RCANFD_GERFL_EEF(ch)) {
++		netdev_dbg(ndev, "Ch%u: ECC Error flag\n", ch);
+ 		stats->tx_dropped++;
+ 	}
+ 	if (gerfl & RCANFD_GERFL_MES) {
+@@ -1481,7 +1476,7 @@ static netdev_tx_t rcar_canfd_start_xmit(struct sk_buff *skb,
+ 	unsigned long flags;
+ 	u32 ch = priv->channel;
+ 
+-	if (can_dropped_invalid_skb(ndev, skb))
++	if (can_dev_dropped_skb(ndev, skb))
+ 		return NETDEV_TX_OK;
+ 
+ 	if (cf->can_id & CAN_EFF_FLAG) {
+diff --git a/drivers/net/can/sja1000/sja1000.c b/drivers/net/can/sja1000/sja1000.c
+index 98dfd5f295a71..a0a820174a00f 100644
+--- a/drivers/net/can/sja1000/sja1000.c
++++ b/drivers/net/can/sja1000/sja1000.c
+@@ -291,7 +291,7 @@ static netdev_tx_t sja1000_start_xmit(struct sk_buff *skb,
+ 	u8 cmd_reg_val = 0x00;
+ 	int i;
+ 
+-	if (can_dropped_invalid_skb(dev, skb))
++	if (can_dev_dropped_skb(dev, skb))
+ 		return NETDEV_TX_OK;
+ 
+ 	netif_stop_queue(dev);
+diff --git a/drivers/net/can/slcan/slcan-core.c b/drivers/net/can/slcan/slcan-core.c
+index 8d13fdf8c28a4..fbb34139daa1a 100644
+--- a/drivers/net/can/slcan/slcan-core.c
++++ b/drivers/net/can/slcan/slcan-core.c
+@@ -594,7 +594,7 @@ static netdev_tx_t slcan_netdev_xmit(struct sk_buff *skb,
+ {
+ 	struct slcan *sl = netdev_priv(dev);
+ 
+-	if (can_dropped_invalid_skb(dev, skb))
++	if (can_dev_dropped_skb(dev, skb))
+ 		return NETDEV_TX_OK;
+ 
+ 	spin_lock(&sl->lock);
+diff --git a/drivers/net/can/softing/softing_main.c b/drivers/net/can/softing/softing_main.c
+index a5ef57f415f73..c72f505d29fee 100644
+--- a/drivers/net/can/softing/softing_main.c
++++ b/drivers/net/can/softing/softing_main.c
+@@ -60,7 +60,7 @@ static netdev_tx_t softing_netdev_start_xmit(struct sk_buff *skb,
+ 	struct can_frame *cf = (struct can_frame *)skb->data;
+ 	uint8_t buf[DPRAM_TX_SIZE];
+ 
+-	if (can_dropped_invalid_skb(dev, skb))
++	if (can_dev_dropped_skb(dev, skb))
+ 		return NETDEV_TX_OK;
+ 
+ 	spin_lock(&card->spin);
+diff --git a/drivers/net/can/spi/hi311x.c b/drivers/net/can/spi/hi311x.c
+index b87dc420428d9..e1b8533a602e2 100644
+--- a/drivers/net/can/spi/hi311x.c
++++ b/drivers/net/can/spi/hi311x.c
+@@ -373,7 +373,7 @@ static netdev_tx_t hi3110_hard_start_xmit(struct sk_buff *skb,
+ 		return NETDEV_TX_BUSY;
+ 	}
+ 
+-	if (can_dropped_invalid_skb(net, skb))
++	if (can_dev_dropped_skb(net, skb))
+ 		return NETDEV_TX_OK;
+ 
+ 	netif_stop_queue(net);
+diff --git a/drivers/net/can/spi/mcp251x.c b/drivers/net/can/spi/mcp251x.c
+index 24883a65ca66a..79c4bab5f7246 100644
+--- a/drivers/net/can/spi/mcp251x.c
++++ b/drivers/net/can/spi/mcp251x.c
+@@ -789,7 +789,7 @@ static netdev_tx_t mcp251x_hard_start_xmit(struct sk_buff *skb,
+ 		return NETDEV_TX_BUSY;
+ 	}
+ 
+-	if (can_dropped_invalid_skb(net, skb))
++	if (can_dev_dropped_skb(net, skb))
+ 		return NETDEV_TX_OK;
+ 
+ 	netif_stop_queue(net);
+diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-tx.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-tx.c
+index ffb6c36b7d9bd..160528d3cc26b 100644
+--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-tx.c
++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-tx.c
+@@ -172,7 +172,7 @@ netdev_tx_t mcp251xfd_start_xmit(struct sk_buff *skb,
+ 	u8 tx_head;
+ 	int err;
+ 
+-	if (can_dropped_invalid_skb(ndev, skb))
++	if (can_dev_dropped_skb(ndev, skb))
+ 		return NETDEV_TX_OK;
+ 
+ 	if (mcp251xfd_tx_busy(priv, tx_ring))
+diff --git a/drivers/net/can/sun4i_can.c b/drivers/net/can/sun4i_can.c
+index 525309da1320a..2b78f9197681b 100644
+--- a/drivers/net/can/sun4i_can.c
++++ b/drivers/net/can/sun4i_can.c
+@@ -429,7 +429,7 @@ static netdev_tx_t sun4ican_start_xmit(struct sk_buff *skb, struct net_device *d
+ 	canid_t id;
+ 	int i;
+ 
+-	if (can_dropped_invalid_skb(dev, skb))
++	if (can_dev_dropped_skb(dev, skb))
+ 		return NETDEV_TX_OK;
+ 
+ 	netif_stop_queue(dev);
+diff --git a/drivers/net/can/ti_hecc.c b/drivers/net/can/ti_hecc.c
+index b218fb3c6b760..27700f72eac25 100644
+--- a/drivers/net/can/ti_hecc.c
++++ b/drivers/net/can/ti_hecc.c
+@@ -470,7 +470,7 @@ static netdev_tx_t ti_hecc_xmit(struct sk_buff *skb, struct net_device *ndev)
+ 	u32 mbxno, mbx_mask, data;
+ 	unsigned long flags;
+ 
+-	if (can_dropped_invalid_skb(ndev, skb))
++	if (can_dev_dropped_skb(ndev, skb))
+ 		return NETDEV_TX_OK;
+ 
+ 	mbxno = get_tx_head_mb(priv);
+diff --git a/drivers/net/can/usb/ems_usb.c b/drivers/net/can/usb/ems_usb.c
+index d31191686a549..050c0b49938a4 100644
+--- a/drivers/net/can/usb/ems_usb.c
++++ b/drivers/net/can/usb/ems_usb.c
+@@ -747,7 +747,7 @@ static netdev_tx_t ems_usb_start_xmit(struct sk_buff *skb, struct net_device *ne
+ 	size_t size = CPC_HEADER_SIZE + CPC_MSG_HEADER_LEN
+ 			+ sizeof(struct cpc_can_msg);
+ 
+-	if (can_dropped_invalid_skb(netdev, skb))
++	if (can_dev_dropped_skb(netdev, skb))
+ 		return NETDEV_TX_OK;
+ 
+ 	/* create a URB, and a buffer for it, and copy the data to the URB */
+diff --git a/drivers/net/can/usb/esd_usb.c b/drivers/net/can/usb/esd_usb.c
+index 1bcfad11b1e44..81b88e9e5bdc0 100644
+--- a/drivers/net/can/usb/esd_usb.c
++++ b/drivers/net/can/usb/esd_usb.c
+@@ -725,7 +725,7 @@ static netdev_tx_t esd_usb_start_xmit(struct sk_buff *skb,
+ 	int ret = NETDEV_TX_OK;
+ 	size_t size = sizeof(struct esd_usb_msg);
+ 
+-	if (can_dropped_invalid_skb(netdev, skb))
++	if (can_dev_dropped_skb(netdev, skb))
+ 		return NETDEV_TX_OK;
+ 
+ 	/* create a URB, and a buffer for it, and copy the data to the URB */
+diff --git a/drivers/net/can/usb/etas_es58x/es58x_core.c b/drivers/net/can/usb/etas_es58x/es58x_core.c
+index 51294b7170405..25f863b4f5f06 100644
+--- a/drivers/net/can/usb/etas_es58x/es58x_core.c
++++ b/drivers/net/can/usb/etas_es58x/es58x_core.c
+@@ -1913,7 +1913,7 @@ static netdev_tx_t es58x_start_xmit(struct sk_buff *skb,
+ 	unsigned int frame_len;
+ 	int ret;
+ 
+-	if (can_dropped_invalid_skb(netdev, skb)) {
++	if (can_dev_dropped_skb(netdev, skb)) {
+ 		if (priv->tx_urb)
+ 			goto xmit_commit;
+ 		return NETDEV_TX_OK;
+diff --git a/drivers/net/can/usb/gs_usb.c b/drivers/net/can/usb/gs_usb.c
+index c1ff3c046d62c..cd4115a1b81c6 100644
+--- a/drivers/net/can/usb/gs_usb.c
++++ b/drivers/net/can/usb/gs_usb.c
+@@ -605,7 +605,7 @@ static netdev_tx_t gs_can_start_xmit(struct sk_buff *skb,
+ 	unsigned int idx;
+ 	struct gs_tx_context *txc;
+ 
+-	if (can_dropped_invalid_skb(netdev, skb))
++	if (can_dev_dropped_skb(netdev, skb))
+ 		return NETDEV_TX_OK;
+ 
+ 	/* find an empty context to keep track of transmission */
+diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
+index e91648ed73862..802e27c0ecedb 100644
+--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
+@@ -570,7 +570,7 @@ static netdev_tx_t kvaser_usb_start_xmit(struct sk_buff *skb,
+ 	unsigned int i;
+ 	unsigned long flags;
+ 
+-	if (can_dropped_invalid_skb(netdev, skb))
++	if (can_dev_dropped_skb(netdev, skb))
+ 		return NETDEV_TX_OK;
+ 
+ 	urb = usb_alloc_urb(0, GFP_ATOMIC);
+diff --git a/drivers/net/can/usb/mcba_usb.c b/drivers/net/can/usb/mcba_usb.c
+index 69346c63021fe..218b098b261df 100644
+--- a/drivers/net/can/usb/mcba_usb.c
++++ b/drivers/net/can/usb/mcba_usb.c
+@@ -311,7 +311,7 @@ static netdev_tx_t mcba_usb_start_xmit(struct sk_buff *skb,
+ 		.cmd_id = MBCA_CMD_TRANSMIT_MESSAGE_EV
+ 	};
+ 
+-	if (can_dropped_invalid_skb(netdev, skb))
++	if (can_dev_dropped_skb(netdev, skb))
+ 		return NETDEV_TX_OK;
+ 
+ 	ctx = mcba_usb_get_free_ctx(priv, cf);
+diff --git a/drivers/net/can/usb/peak_usb/pcan_usb_core.c b/drivers/net/can/usb/peak_usb/pcan_usb_core.c
+index 8c9d53f6e24c3..ca92d027ba3ef 100644
+--- a/drivers/net/can/usb/peak_usb/pcan_usb_core.c
++++ b/drivers/net/can/usb/peak_usb/pcan_usb_core.c
+@@ -351,7 +351,7 @@ static netdev_tx_t peak_usb_ndo_start_xmit(struct sk_buff *skb,
+ 	int i, err;
+ 	size_t size = dev->adapter->tx_buffer_size;
+ 
+-	if (can_dropped_invalid_skb(netdev, skb))
++	if (can_dev_dropped_skb(netdev, skb))
+ 		return NETDEV_TX_OK;
+ 
+ 	for (i = 0; i < PCAN_USB_MAX_TX_URBS; i++)
+diff --git a/drivers/net/can/usb/ucan.c b/drivers/net/can/usb/ucan.c
+index 7c35f50fda4ee..67c2ff407d066 100644
+--- a/drivers/net/can/usb/ucan.c
++++ b/drivers/net/can/usb/ucan.c
+@@ -1120,7 +1120,7 @@ static netdev_tx_t ucan_start_xmit(struct sk_buff *skb,
+ 	struct can_frame *cf = (struct can_frame *)skb->data;
+ 
+ 	/* check skb */
+-	if (can_dropped_invalid_skb(netdev, skb))
++	if (can_dev_dropped_skb(netdev, skb))
+ 		return NETDEV_TX_OK;
+ 
+ 	/* allocate a context and slow down tx path, if fifo state is low */
+diff --git a/drivers/net/can/usb/usb_8dev.c b/drivers/net/can/usb/usb_8dev.c
+index 64c00abe91cf0..8a5596ce4e463 100644
+--- a/drivers/net/can/usb/usb_8dev.c
++++ b/drivers/net/can/usb/usb_8dev.c
+@@ -602,7 +602,7 @@ static netdev_tx_t usb_8dev_start_xmit(struct sk_buff *skb,
+ 	int i, err;
+ 	size_t size = sizeof(struct usb_8dev_tx_msg);
+ 
+-	if (can_dropped_invalid_skb(netdev, skb))
++	if (can_dev_dropped_skb(netdev, skb))
+ 		return NETDEV_TX_OK;
+ 
+ 	/* create a URB, and a buffer for it, and copy the data to the URB */
+diff --git a/drivers/net/can/xilinx_can.c b/drivers/net/can/xilinx_can.c
+index 5d3172795ad01..43c812ea1de02 100644
+--- a/drivers/net/can/xilinx_can.c
++++ b/drivers/net/can/xilinx_can.c
+@@ -743,7 +743,7 @@ static netdev_tx_t xcan_start_xmit(struct sk_buff *skb, struct net_device *ndev)
+ 	struct xcan_priv *priv = netdev_priv(ndev);
+ 	int ret;
+ 
+-	if (can_dropped_invalid_skb(ndev, skb))
++	if (can_dev_dropped_skb(ndev, skb))
+ 		return NETDEV_TX_OK;
+ 
+ 	if (priv->devtype.flags & XCAN_FLAG_TX_MAILBOXES)
+diff --git a/drivers/net/ethernet/apm/xgene/xgene_enet_main.c b/drivers/net/ethernet/apm/xgene/xgene_enet_main.c
+index 53dc8d5fede86..c51d9edb3828a 100644
+--- a/drivers/net/ethernet/apm/xgene/xgene_enet_main.c
++++ b/drivers/net/ethernet/apm/xgene/xgene_enet_main.c
+@@ -1004,8 +1004,10 @@ static int xgene_enet_open(struct net_device *ndev)
+ 
+ 	xgene_enet_napi_enable(pdata);
+ 	ret = xgene_enet_register_irq(ndev);
+-	if (ret)
++	if (ret) {
++		xgene_enet_napi_disable(pdata);
+ 		return ret;
++	}
+ 
+ 	if (ndev->phydev) {
+ 		phy_start(ndev->phydev);
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c b/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c
+index 8b53d6688a4bd..958b7f8c77d91 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c
+@@ -585,6 +585,7 @@ static int aq_update_txsa(struct aq_nic_s *nic, const unsigned int sc_idx,
+ 
+ 	ret = aq_mss_set_egress_sakey_record(hw, &key_rec, sa_idx);
+ 
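++	/* key_rec holds a stack copy of the SA key; memzero_explicit()
++	 * wipes it and, unlike a plain memset(), cannot be optimized away.
++	 */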
++	memzero_explicit(&key_rec, sizeof(key_rec));
+ 	return ret;
+ }
+ 
+@@ -932,6 +933,7 @@ static int aq_update_rxsa(struct aq_nic_s *nic, const unsigned int sc_idx,
+ 
+ 	ret = aq_mss_set_ingress_sakey_record(hw, &sa_key_record, sa_idx);
+ 
++	memzero_explicit(&sa_key_record, sizeof(sa_key_record));
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_api.c b/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_api.c
+index 36c7cf05630a1..4319249595207 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_api.c
++++ b/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_api.c
+@@ -757,6 +757,7 @@ set_ingress_sakey_record(struct aq_hw_s *hw,
+ 			 u16 table_index)
+ {
+ 	u16 packed_record[18];
++	int ret;
+ 
+ 	if (table_index >= NUMROWS_INGRESSSAKEYRECORD)
+ 		return -EINVAL;
+@@ -789,9 +790,12 @@ set_ingress_sakey_record(struct aq_hw_s *hw,
+ 
+ 	packed_record[16] = rec->key_len & 0x3;
+ 
+-	return set_raw_ingress_record(hw, packed_record, 18, 2,
+-				      ROWOFFSET_INGRESSSAKEYRECORD +
+-					      table_index);
++	ret = set_raw_ingress_record(hw, packed_record, 18, 2,
++				     ROWOFFSET_INGRESSSAKEYRECORD +
++				     table_index);
++
++	memzero_explicit(packed_record, sizeof(packed_record));
++	return ret;
+ }
+ 
+ int aq_mss_set_ingress_sakey_record(struct aq_hw_s *hw,
+@@ -1739,14 +1743,14 @@ static int set_egress_sakey_record(struct aq_hw_s *hw,
+ 	ret = set_raw_egress_record(hw, packed_record, 8, 2,
+ 				    ROWOFFSET_EGRESSSAKEYRECORD + table_index);
+ 	if (unlikely(ret))
+-		return ret;
++		goto clear_key;
+ 	ret = set_raw_egress_record(hw, packed_record + 8, 8, 2,
+ 				    ROWOFFSET_EGRESSSAKEYRECORD + table_index -
+ 					    32);
+-	if (unlikely(ret))
+-		return ret;
+ 
+-	return 0;
++clear_key:
++	memzero_explicit(packed_record, sizeof(packed_record));
++	return ret;
+ }
+ 
+ int aq_mss_set_egress_sakey_record(struct aq_hw_s *hw,
+diff --git a/drivers/net/ethernet/broadcom/Kconfig b/drivers/net/ethernet/broadcom/Kconfig
+index 56e0fb07aec7f..1cd3c289f49be 100644
+--- a/drivers/net/ethernet/broadcom/Kconfig
++++ b/drivers/net/ethernet/broadcom/Kconfig
+@@ -77,7 +77,7 @@ config BCMGENET
+ 	select BCM7XXX_PHY
+ 	select MDIO_BCM_UNIMAC
+ 	select DIMLIB
+-	select BROADCOM_PHY if ARCH_BCM2835
++	select BROADCOM_PHY if (ARCH_BCM2835 && PTP_1588_CLOCK_OPTIONAL)
+ 	help
+ 	  This driver supports the built-in Ethernet MACs found in the
+ 	  Broadcom BCM7xxx Set Top Box family chipset.
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 96da0ba3d5078..be5df8fca264e 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -12894,8 +12894,8 @@ static int bnxt_rx_flow_steer(struct net_device *dev, const struct sk_buff *skb,
+ 	rcu_read_lock();
+ 	hlist_for_each_entry_rcu(fltr, head, hash) {
+ 		if (bnxt_fltr_match(fltr, new_fltr)) {
++			rc = fltr->sw_id;
+ 			rcu_read_unlock();
+-			rc = 0;
+ 			goto err_free;
+ 		}
+ 	}
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+index 87eb5362ad70a..a182254569661 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+@@ -162,7 +162,7 @@ static int bnxt_set_coalesce(struct net_device *dev,
+ 	}
+ 
+ reset_coalesce:
+-	if (netif_running(dev)) {
++	if (test_bit(BNXT_STATE_OPEN, &bp->state)) {
+ 		if (update_stats) {
+ 			rc = bnxt_close_nic(bp, true, false);
+ 			if (!rc)
+diff --git a/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c b/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c
+index 174b1e156669e..481d85bfa483f 100644
+--- a/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c
++++ b/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c
+@@ -1302,6 +1302,7 @@ static int cxgb_up(struct adapter *adap)
+ 		if (ret < 0) {
+ 			CH_ERR(adap, "failed to bind qsets, err %d\n", ret);
+ 			t3_intr_disable(adap);
++			quiesce_rx(adap);
+ 			free_irq_resources(adap);
+ 			err = ret;
+ 			goto out;
+diff --git a/drivers/net/ethernet/chelsio/cxgb4vf/cxgb4vf_main.c b/drivers/net/ethernet/chelsio/cxgb4vf/cxgb4vf_main.c
+index c2822e635f896..40bb473ec3c89 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4vf/cxgb4vf_main.c
++++ b/drivers/net/ethernet/chelsio/cxgb4vf/cxgb4vf_main.c
+@@ -858,7 +858,7 @@ static int cxgb4vf_open(struct net_device *dev)
+ 	 */
+ 	err = t4vf_update_port_info(pi);
+ 	if (err < 0)
+-		return err;
++		goto err_unwind;
+ 
+ 	/*
+ 	 * Note that this interface is up and start everything up ...
+diff --git a/drivers/net/ethernet/freescale/fman/mac.c b/drivers/net/ethernet/freescale/fman/mac.c
+index 39ae965cd4f64..b0c756b65cc2e 100644
+--- a/drivers/net/ethernet/freescale/fman/mac.c
++++ b/drivers/net/ethernet/freescale/fman/mac.c
+@@ -882,12 +882,21 @@ _return:
+ 	return err;
+ }
+ 
++static int mac_remove(struct platform_device *pdev)
++{
++	struct mac_device *mac_dev = platform_get_drvdata(pdev);
++
++	platform_device_unregister(mac_dev->priv->eth_dev);
++	return 0;
++}
++
+ static struct platform_driver mac_driver = {
+ 	.driver = {
+ 		.name		= KBUILD_MODNAME,
+ 		.of_match_table	= mac_match,
+ 	},
+ 	.probe		= mac_probe,
++	.remove		= mac_remove,
+ };
+ 
+ builtin_platform_driver(mac_driver);
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+index 5a9e6563923eb..24a701fd140e9 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+@@ -2438,6 +2438,8 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
+ 		list_for_each_entry(f, &adapter->vlan_filter_list, list) {
+ 			if (f->is_new_vlan) {
+ 				f->is_new_vlan = false;
++				if (!f->vlan.vid)
++					continue;
+ 				if (f->vlan.tpid == ETH_P_8021Q)
+ 					set_bit(f->vlan.vid,
+ 						adapter->vsi.active_cvlans);
+diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
+index 1e32438081780..9ee022bb8ec21 100644
+--- a/drivers/net/ethernet/intel/ice/ice_base.c
++++ b/drivers/net/ethernet/intel/ice/ice_base.c
+@@ -959,7 +959,7 @@ ice_vsi_stop_tx_ring(struct ice_vsi *vsi, enum ice_disq_rst_src rst_src,
+ 	 * associated to the queue to schedule NAPI handler
+ 	 */
+ 	q_vector = ring->q_vector;
+-	if (q_vector)
++	if (q_vector && !(vsi->vf && ice_is_vf_disabled(vsi->vf)))
+ 		ice_trigger_sw_intr(hw, q_vector);
+ 
+ 	status = ice_dis_vsi_txq(vsi->port_info, txq_meta->vsi_idx,
+diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
+index 58d483e2f539e..11399f55e6476 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_lib.c
+@@ -2222,6 +2222,31 @@ int ice_vsi_stop_xdp_tx_rings(struct ice_vsi *vsi)
+ 	return ice_vsi_stop_tx_rings(vsi, ICE_NO_RESET, 0, vsi->xdp_rings, vsi->num_xdp_txq);
+ }
+ 
++/**
++ * ice_vsi_is_rx_queue_active - check whether any Rx queue of the VSI is active
++ * @vsi: the VSI to check
++ *
++ * Return: true if at least one Rx queue is active, false otherwise.
++ */
++bool ice_vsi_is_rx_queue_active(struct ice_vsi *vsi)
++{
++	struct ice_pf *pf = vsi->back;
++	struct ice_hw *hw = &pf->hw;
++	int i;
++
++	ice_for_each_rxq(vsi, i) {
++		u32 rx_reg;
++		int pf_q;
++
++		pf_q = vsi->rxq_map[i];
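++		/* QENA_STAT reflects whether the queue is still enabled
++		 * in hardware.
++		 */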
++		rx_reg = rd32(hw, QRX_CTRL(pf_q));
++		if (rx_reg & QRX_CTRL_QENA_STAT_M)
++			return true;
++	}
++
++	return false;
++}
++
+ /**
+  * ice_vsi_is_vlan_pruning_ena - check if VLAN pruning is enabled or not
+  * @vsi: VSI to check whether or not VLAN pruning is enabled.
+diff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h
+index 8712b1d2ceec9..441fb132f1941 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lib.h
++++ b/drivers/net/ethernet/intel/ice/ice_lib.h
+@@ -127,4 +127,5 @@ u16 ice_vsi_num_non_zero_vlans(struct ice_vsi *vsi);
+ bool ice_is_feature_supported(struct ice_pf *pf, enum ice_feature f);
+ void ice_clear_feature_support(struct ice_pf *pf, enum ice_feature f);
+ void ice_init_feature_support(struct ice_pf *pf);
++bool ice_vsi_is_rx_queue_active(struct ice_vsi *vsi);
+ #endif /* !_ICE_LIB_H_ */
+diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib.c b/drivers/net/ethernet/intel/ice/ice_vf_lib.c
+index 0abeed092de1d..1c51778db951b 100644
+--- a/drivers/net/ethernet/intel/ice/ice_vf_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_vf_lib.c
+@@ -576,7 +576,10 @@ int ice_reset_vf(struct ice_vf *vf, u32 flags)
+ 			return -EINVAL;
+ 		}
+ 		ice_vsi_stop_lan_tx_rings(vsi, ICE_NO_RESET, vf->vf_id);
+-		ice_vsi_stop_all_rx_rings(vsi);
++
++		if (ice_vsi_is_rx_queue_active(vsi))
++			ice_vsi_stop_all_rx_rings(vsi);
++
+ 		dev_dbg(dev, "VF is already disabled, there is no need for resetting it, telling VM, all is fine %d\n",
+ 			vf->vf_id);
+ 		return 0;
+diff --git a/drivers/net/ethernet/marvell/mv643xx_eth.c b/drivers/net/ethernet/marvell/mv643xx_eth.c
+index b6be0552a6c1d..40a5957b14493 100644
+--- a/drivers/net/ethernet/marvell/mv643xx_eth.c
++++ b/drivers/net/ethernet/marvell/mv643xx_eth.c
+@@ -2481,6 +2481,7 @@ out_free:
+ 	for (i = 0; i < mp->rxq_count; i++)
+ 		rxq_deinit(mp->rxq + i);
+ out:
++	napi_disable(&mp->napi);
+ 	free_irq(dev->irq, dev);
+ 
+ 	return err;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+index d686c7b6252f4..9c2baa437c231 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+@@ -863,6 +863,7 @@ static int otx2_sq_init(struct otx2_nic *pfvf, u16 qidx, u16 sqb_aura)
+ 	}
+ 
+ 	sq->head = 0;
++	sq->cons_head = 0;
+ 	sq->sqe_per_sqb = (pfvf->hw.sqb_size / sq->sqe_size) - 1;
+ 	sq->num_sqbs = (qset->sqe_cnt + sq->sqe_per_sqb) / sq->sqe_per_sqb;
+ 	/* Set SQE threshold to 10% of total SQEs */
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+index 9376d0e62914b..80fde101df96d 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+@@ -15,6 +15,7 @@
+ #include <net/ip.h>
+ #include <linux/bpf.h>
+ #include <linux/bpf_trace.h>
++#include <linux/bitfield.h>
+ 
+ #include "otx2_reg.h"
+ #include "otx2_common.h"
+@@ -1161,6 +1162,59 @@ int otx2_set_real_num_queues(struct net_device *netdev,
+ }
+ EXPORT_SYMBOL(otx2_set_real_num_queues);
+ 
++static char *nix_sqoperr_e_str[NIX_SQOPERR_MAX] = {
++	"NIX_SQOPERR_OOR",
++	"NIX_SQOPERR_CTX_FAULT",
++	"NIX_SQOPERR_CTX_POISON",
++	"NIX_SQOPERR_DISABLED",
++	"NIX_SQOPERR_SIZE_ERR",
++	"NIX_SQOPERR_OFLOW",
++	"NIX_SQOPERR_SQB_NULL",
++	"NIX_SQOPERR_SQB_FAULT",
++	"NIX_SQOPERR_SQE_SZ_ZERO",
++};
++
++static char *nix_mnqerr_e_str[NIX_MNQERR_MAX] = {
++	"NIX_MNQERR_SQ_CTX_FAULT",
++	"NIX_MNQERR_SQ_CTX_POISON",
++	"NIX_MNQERR_SQB_FAULT",
++	"NIX_MNQERR_SQB_POISON",
++	"NIX_MNQERR_TOTAL_ERR",
++	"NIX_MNQERR_LSO_ERR",
++	"NIX_MNQERR_CQ_QUERY_ERR",
++	"NIX_MNQERR_MAX_SQE_SIZE_ERR",
++	"NIX_MNQERR_MAXLEN_ERR",
++	"NIX_MNQERR_SQE_SIZEM1_ZERO",
++};
++
++/* The NIX_SND_STATUS_* codes are sparse (see enum nix_snd_status_e), so
++ * index the strings by code value with designated initializers.
++ */
++static char *nix_snd_status_e_str[NIX_SND_STATUS_MAX] = {
++	[NIX_SND_STATUS_GOOD] = "NIX_SND_STATUS_GOOD",
++	[NIX_SND_STATUS_SQ_CTX_FAULT] = "NIX_SND_STATUS_SQ_CTX_FAULT",
++	[NIX_SND_STATUS_SQ_CTX_POISON] = "NIX_SND_STATUS_SQ_CTX_POISON",
++	[NIX_SND_STATUS_SQB_FAULT] = "NIX_SND_STATUS_SQB_FAULT",
++	[NIX_SND_STATUS_SQB_POISON] = "NIX_SND_STATUS_SQB_POISON",
++	[NIX_SND_STATUS_HDR_ERR] = "NIX_SND_STATUS_HDR_ERR",
++	[NIX_SND_STATUS_EXT_ERR] = "NIX_SND_STATUS_EXT_ERR",
++	[NIX_SND_STATUS_JUMP_FAULT] = "NIX_SND_STATUS_JUMP_FAULT",
++	[NIX_SND_STATUS_JUMP_POISON] = "NIX_SND_STATUS_JUMP_POISON",
++	[NIX_SND_STATUS_CRC_ERR] = "NIX_SND_STATUS_CRC_ERR",
++	[NIX_SND_STATUS_IMM_ERR] = "NIX_SND_STATUS_IMM_ERR",
++	[NIX_SND_STATUS_SG_ERR] = "NIX_SND_STATUS_SG_ERR",
++	[NIX_SND_STATUS_MEM_ERR] = "NIX_SND_STATUS_MEM_ERR",
++	[NIX_SND_STATUS_INVALID_SUBDC] = "NIX_SND_STATUS_INVALID_SUBDC",
++	[NIX_SND_STATUS_SUBDC_ORDER_ERR] = "NIX_SND_STATUS_SUBDC_ORDER_ERR",
++	[NIX_SND_STATUS_DATA_FAULT] = "NIX_SND_STATUS_DATA_FAULT",
++	[NIX_SND_STATUS_DATA_POISON] = "NIX_SND_STATUS_DATA_POISON",
++	[NIX_SND_STATUS_NPC_DROP_ACTION] = "NIX_SND_STATUS_NPC_DROP_ACTION",
++	[NIX_SND_STATUS_LOCK_VIOL] = "NIX_SND_STATUS_LOCK_VIOL",
++	[NIX_SND_STATUS_NPC_UCAST_CHAN_ERR] = "NIX_SND_STATUS_NPC_UCAST_CHAN_ERR",
++	[NIX_SND_STATUS_NPC_MCAST_CHAN_ERR] = "NIX_SND_STATUS_NPC_MCAST_CHAN_ERR",
++	[NIX_SND_STATUS_NPC_MCAST_ABORT] = "NIX_SND_STATUS_NPC_MCAST_ABORT",
++	[NIX_SND_STATUS_NPC_VTAG_PTR_ERR] = "NIX_SND_STATUS_NPC_VTAG_PTR_ERR",
++	[NIX_SND_STATUS_NPC_VTAG_SIZE_ERR] = "NIX_SND_STATUS_NPC_VTAG_SIZE_ERR",
++	[NIX_SND_STATUS_SEND_MEM_FAULT] = "NIX_SND_STATUS_SEND_MEM_FAULT",
++	[NIX_SND_STATUS_SEND_STATS_ERR] = "NIX_SND_STATUS_SEND_STATS_ERR",
++};
++
+ static irqreturn_t otx2_q_intr_handler(int irq, void *data)
+ {
+ 	struct otx2_nic *pf = data;
+@@ -1194,46 +1248,67 @@ static irqreturn_t otx2_q_intr_handler(int irq, void *data)
+ 
+ 	/* SQ */
+ 	for (qidx = 0; qidx < pf->hw.tot_tx_queues; qidx++) {
++		u64 sq_op_err_dbg, mnq_err_dbg, snd_err_dbg;
++		u8 sq_op_err_code, mnq_err_code, snd_err_code;
++
++		/* The debug registers below capture the first error of each
++		 * type. We don't have to check against the SQ qid as these
++		 * are fatal errors.
++		 */
++
+ 		ptr = otx2_get_regaddr(pf, NIX_LF_SQ_OP_INT);
+ 		val = otx2_atomic64_add((qidx << 44), ptr);
+ 		otx2_write64(pf, NIX_LF_SQ_OP_INT, (qidx << 44) |
+ 			     (val & NIX_SQINT_BITS));
+ 
+-		if (!(val & (NIX_SQINT_BITS | BIT_ULL(42))))
+-			continue;
+-
+ 		if (val & BIT_ULL(42)) {
+ 			netdev_err(pf->netdev, "SQ%lld: error reading NIX_LF_SQ_OP_INT, NIX_LF_ERR_INT 0x%llx\n",
+ 				   qidx, otx2_read64(pf, NIX_LF_ERR_INT));
+-		} else {
+-			if (val & BIT_ULL(NIX_SQINT_LMT_ERR)) {
+-				netdev_err(pf->netdev, "SQ%lld: LMT store error NIX_LF_SQ_OP_ERR_DBG:0x%llx",
+-					   qidx,
+-					   otx2_read64(pf,
+-						       NIX_LF_SQ_OP_ERR_DBG));
+-				otx2_write64(pf, NIX_LF_SQ_OP_ERR_DBG,
+-					     BIT_ULL(44));
+-			}
+-			if (val & BIT_ULL(NIX_SQINT_MNQ_ERR)) {
+-				netdev_err(pf->netdev, "SQ%lld: Meta-descriptor enqueue error NIX_LF_MNQ_ERR_DGB:0x%llx\n",
+-					   qidx,
+-					   otx2_read64(pf, NIX_LF_MNQ_ERR_DBG));
+-				otx2_write64(pf, NIX_LF_MNQ_ERR_DBG,
+-					     BIT_ULL(44));
+-			}
+-			if (val & BIT_ULL(NIX_SQINT_SEND_ERR)) {
+-				netdev_err(pf->netdev, "SQ%lld: Send error, NIX_LF_SEND_ERR_DBG 0x%llx",
+-					   qidx,
+-					   otx2_read64(pf,
+-						       NIX_LF_SEND_ERR_DBG));
+-				otx2_write64(pf, NIX_LF_SEND_ERR_DBG,
+-					     BIT_ULL(44));
+-			}
+-			if (val & BIT_ULL(NIX_SQINT_SQB_ALLOC_FAIL))
+-				netdev_err(pf->netdev, "SQ%lld: SQB allocation failed",
+-					   qidx);
++			goto done;
+ 		}
+ 
++		sq_op_err_dbg = otx2_read64(pf, NIX_LF_SQ_OP_ERR_DBG);
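++		/* Bit 44 flags a valid capture and bits [7:0] hold the
++		 * error code; writing bit 44 back clears the register so
++		 * the next error can be latched.
++		 */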
++		if (!(sq_op_err_dbg & BIT_ULL(44)))
++			goto chk_mnq_err_dbg;
++
++		sq_op_err_code = FIELD_GET(GENMASK(7, 0), sq_op_err_dbg);
++		netdev_err(pf->netdev, "SQ%lld: NIX_LF_SQ_OP_ERR_DBG(%llx)  err=%s\n",
++			   qidx, sq_op_err_dbg, nix_sqoperr_e_str[sq_op_err_code]);
++
++		otx2_write64(pf, NIX_LF_SQ_OP_ERR_DBG, BIT_ULL(44));
++
++		if (sq_op_err_code == NIX_SQOPERR_SQB_NULL)
++			goto chk_mnq_err_dbg;
++
++		/* The error is not NIX_SQOPERR_SQB_NULL; an AQ call is needed
++		 * to read the SQ structure.
++		 * TODO: we are in IRQ context and cannot call the mbox
++		 * functions, which sleep.
++		 */
++
++chk_mnq_err_dbg:
++		mnq_err_dbg = otx2_read64(pf, NIX_LF_MNQ_ERR_DBG);
++		if (!(mnq_err_dbg & BIT_ULL(44)))
++			goto chk_snd_err_dbg;
++
++		mnq_err_code = FIELD_GET(GENMASK(7, 0), mnq_err_dbg);
++		netdev_err(pf->netdev, "SQ%lld: NIX_LF_MNQ_ERR_DBG(%llx)  err=%s\n",
++			   qidx, mnq_err_dbg, nix_mnqerr_e_str[mnq_err_code]);
++		otx2_write64(pf, NIX_LF_MNQ_ERR_DBG, BIT_ULL(44));
++
++chk_snd_err_dbg:
++		snd_err_dbg = otx2_read64(pf, NIX_LF_SEND_ERR_DBG);
++		if (snd_err_dbg & BIT_ULL(44)) {
++			snd_err_code = FIELD_GET(GENMASK(7, 0), snd_err_dbg);
++			netdev_err(pf->netdev, "SQ%lld: NIX_LF_SEND_ERR_DBG(%llx)  err=%s\n",
++				   qidx, snd_err_dbg, nix_snd_status_e_str[snd_err_code]);
++			otx2_write64(pf, NIX_LF_SEND_ERR_DBG, BIT_ULL(44));
++		}
++
++done:
++		/* Report SQB allocation failures and schedule a reset */
++		if (val & BIT_ULL(NIX_SQINT_SQB_ALLOC_FAIL))
++			netdev_err(pf->netdev, "SQ%lld: SQB allocation failed",
++				   qidx);
++
+ 		schedule_work(&pf->reset_task);
+ 	}
+ 
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_struct.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_struct.h
+index 4bbd12ff26e64..e5f30fd778fc1 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_struct.h
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_struct.h
+@@ -274,4 +274,61 @@ enum nix_sqint_e {
+ 			BIT_ULL(NIX_SQINT_SEND_ERR) | \
+ 			BIT_ULL(NIX_SQINT_SQB_ALLOC_FAIL))
+ 
++enum nix_sqoperr_e {
++	NIX_SQOPERR_OOR = 0,
++	NIX_SQOPERR_CTX_FAULT = 1,
++	NIX_SQOPERR_CTX_POISON = 2,
++	NIX_SQOPERR_DISABLED = 3,
++	NIX_SQOPERR_SIZE_ERR = 4,
++	NIX_SQOPERR_OFLOW = 5,
++	NIX_SQOPERR_SQB_NULL = 6,
++	NIX_SQOPERR_SQB_FAULT = 7,
++	NIX_SQOPERR_SQE_SZ_ZERO = 8,
++	NIX_SQOPERR_MAX,
++};
++
++enum nix_mnqerr_e {
++	NIX_MNQERR_SQ_CTX_FAULT = 0,
++	NIX_MNQERR_SQ_CTX_POISON = 1,
++	NIX_MNQERR_SQB_FAULT = 2,
++	NIX_MNQERR_SQB_POISON = 3,
++	NIX_MNQERR_TOTAL_ERR = 4,
++	NIX_MNQERR_LSO_ERR = 5,
++	NIX_MNQERR_CQ_QUERY_ERR = 6,
++	NIX_MNQERR_MAX_SQE_SIZE_ERR = 7,
++	NIX_MNQERR_MAXLEN_ERR = 8,
++	NIX_MNQERR_SQE_SIZEM1_ZERO = 9,
++	NIX_MNQERR_MAX,
++};
++
++enum nix_snd_status_e {
++	NIX_SND_STATUS_GOOD = 0x0,
++	NIX_SND_STATUS_SQ_CTX_FAULT = 0x1,
++	NIX_SND_STATUS_SQ_CTX_POISON = 0x2,
++	NIX_SND_STATUS_SQB_FAULT = 0x3,
++	NIX_SND_STATUS_SQB_POISON = 0x4,
++	NIX_SND_STATUS_HDR_ERR = 0x5,
++	NIX_SND_STATUS_EXT_ERR = 0x6,
++	NIX_SND_STATUS_JUMP_FAULT = 0x7,
++	NIX_SND_STATUS_JUMP_POISON = 0x8,
++	NIX_SND_STATUS_CRC_ERR = 0x9,
++	NIX_SND_STATUS_IMM_ERR = 0x10,
++	NIX_SND_STATUS_SG_ERR = 0x11,
++	NIX_SND_STATUS_MEM_ERR = 0x12,
++	NIX_SND_STATUS_INVALID_SUBDC = 0x13,
++	NIX_SND_STATUS_SUBDC_ORDER_ERR = 0x14,
++	NIX_SND_STATUS_DATA_FAULT = 0x15,
++	NIX_SND_STATUS_DATA_POISON = 0x16,
++	NIX_SND_STATUS_NPC_DROP_ACTION = 0x17,
++	NIX_SND_STATUS_LOCK_VIOL = 0x18,
++	NIX_SND_STATUS_NPC_UCAST_CHAN_ERR = 0x19,
++	NIX_SND_STATUS_NPC_MCAST_CHAN_ERR = 0x20,
++	NIX_SND_STATUS_NPC_MCAST_ABORT = 0x21,
++	NIX_SND_STATUS_NPC_VTAG_PTR_ERR = 0x22,
++	NIX_SND_STATUS_NPC_VTAG_SIZE_ERR = 0x23,
++	NIX_SND_STATUS_SEND_MEM_FAULT = 0x24,
++	NIX_SND_STATUS_SEND_STATS_ERR = 0x25,
++	NIX_SND_STATUS_MAX,
++};
++
+ #endif /* OTX2_STRUCT_H */
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
+index a18e8efd0f1ee..664f977433f4a 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
+@@ -435,6 +435,7 @@ static int otx2_tx_napi_handler(struct otx2_nic *pfvf,
+ 				struct otx2_cq_queue *cq, int budget)
+ {
+ 	int tx_pkts = 0, tx_bytes = 0, qidx;
++	struct otx2_snd_queue *sq;
+ 	struct nix_cqe_tx_s *cqe;
+ 	int processed_cqe = 0;
+ 
+@@ -445,6 +446,9 @@ static int otx2_tx_napi_handler(struct otx2_nic *pfvf,
+ 		return 0;
+ 
+ process_cqe:
++	qidx = cq->cq_idx - pfvf->hw.rx_queues;
++	sq = &pfvf->qset.sq[qidx];
++
+ 	while (likely(processed_cqe < budget) && cq->pend_cqe) {
+ 		cqe = (struct nix_cqe_tx_s *)otx2_get_next_cqe(cq);
+ 		if (unlikely(!cqe)) {
+@@ -452,18 +456,20 @@ process_cqe:
+ 				return 0;
+ 			break;
+ 		}
++
+ 		if (cq->cq_type == CQ_XDP) {
+-			qidx = cq->cq_idx - pfvf->hw.rx_queues;
+-			otx2_xdp_snd_pkt_handler(pfvf, &pfvf->qset.sq[qidx],
+-						 cqe);
++			otx2_xdp_snd_pkt_handler(pfvf, sq, cqe);
+ 		} else {
+-			otx2_snd_pkt_handler(pfvf, cq,
+-					     &pfvf->qset.sq[cq->cint_idx],
+-					     cqe, budget, &tx_pkts, &tx_bytes);
++			otx2_snd_pkt_handler(pfvf, cq, sq, cqe, budget,
++					     &tx_pkts, &tx_bytes);
+ 		}
++
+ 		cqe->hdr.cqe_type = NIX_XQE_TYPE_INVALID;
+ 		processed_cqe++;
+ 		cq->pend_cqe--;
++
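++		/* One send descriptor completed: advance the consumer
++		 * index, wrapping at the power-of-two ring size.
++		 */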
++		sq->cons_head++;
++		sq->cons_head &= (sq->sqe_cnt - 1);
+ 	}
+ 
+ 	/* Free CQEs to HW */
+@@ -972,17 +978,17 @@ bool otx2_sq_append_skb(struct net_device *netdev, struct otx2_snd_queue *sq,
+ {
+ 	struct netdev_queue *txq = netdev_get_tx_queue(netdev, qidx);
+ 	struct otx2_nic *pfvf = netdev_priv(netdev);
+-	int offset, num_segs, free_sqe;
++	int offset, num_segs, free_desc;
+ 	struct nix_sqe_hdr_s *sqe_hdr;
+ 
+-	/* Check if there is room for new SQE.
+-	 * 'Num of SQBs freed to SQ's pool - SQ's Aura count'
+-	 * will give free SQE count.
++	/* Check if there is enough room between the producer
++	 * and consumer indices.
+ 	 */
+-	free_sqe = (sq->num_sqbs - *sq->aura_fc_addr) * sq->sqe_per_sqb;
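++	/* The masking assumes sqe_cnt is a power of two; free_desc is then
++	 * (cons_head - head - 1) mod sqe_cnt, i.e. the number of free
++	 * descriptors, with one slot kept unused to distinguish a full
++	 * ring from an empty one.
++	 */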
++	free_desc = (sq->cons_head - sq->head - 1 + sq->sqe_cnt) & (sq->sqe_cnt - 1);
++	if (free_desc < sq->sqe_thresh)
++		return false;
+ 
+-	if (free_sqe < sq->sqe_thresh ||
+-	    free_sqe < otx2_get_sqe_count(pfvf, skb))
++	if (free_desc < otx2_get_sqe_count(pfvf, skb))
+ 		return false;
+ 
+ 	num_segs = skb_shinfo(skb)->nr_frags + 1;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h
+index fbe62bbfb789a..93cac2c2664c2 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h
+@@ -79,6 +79,7 @@ struct sg_list {
+ struct otx2_snd_queue {
+ 	u8			aura_id;
+ 	u16			head;
++	u16			cons_head;
+ 	u16			sqe_size;
+ 	u32			sqe_cnt;
+ 	u16			num_sqbs;
+diff --git a/drivers/net/ethernet/marvell/prestera/prestera_rxtx.c b/drivers/net/ethernet/marvell/prestera/prestera_rxtx.c
+index dc3e3ddc60bf5..faa5109a09d7a 100644
+--- a/drivers/net/ethernet/marvell/prestera/prestera_rxtx.c
++++ b/drivers/net/ethernet/marvell/prestera/prestera_rxtx.c
+@@ -776,6 +776,7 @@ tx_done:
+ int prestera_rxtx_switch_init(struct prestera_switch *sw)
+ {
+ 	struct prestera_rxtx *rxtx;
++	int err;
+ 
+ 	rxtx = kzalloc(sizeof(*rxtx), GFP_KERNEL);
+ 	if (!rxtx)
+@@ -783,7 +784,11 @@ int prestera_rxtx_switch_init(struct prestera_switch *sw)
+ 
+ 	sw->rxtx = rxtx;
+ 
+-	return prestera_sdma_switch_init(sw);
++	err = prestera_sdma_switch_init(sw);
++	if (err)
++		kfree(rxtx);
++
++	return err;
+ }
+ 
+ void prestera_rxtx_switch_fini(struct prestera_switch *sw)
+diff --git a/drivers/net/ethernet/mediatek/mtk_star_emac.c b/drivers/net/ethernet/mediatek/mtk_star_emac.c
+index 3f0e5e64de505..57f4373b30ba1 100644
+--- a/drivers/net/ethernet/mediatek/mtk_star_emac.c
++++ b/drivers/net/ethernet/mediatek/mtk_star_emac.c
+@@ -1026,6 +1026,8 @@ static int mtk_star_enable(struct net_device *ndev)
+ 	return 0;
+ 
+ err_free_irq:
++	napi_disable(&priv->rx_napi);
++	napi_disable(&priv->tx_napi);
+ 	free_irq(ndev->irq, ndev);
+ err_free_skbs:
+ 	mtk_star_free_rx_skbs(priv);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+index 46ba4c2faad21..2e0d59ca62b50 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+@@ -1770,12 +1770,17 @@ void mlx5_cmd_flush(struct mlx5_core_dev *dev)
+ 	struct mlx5_cmd *cmd = &dev->cmd;
+ 	int i;
+ 
+-	for (i = 0; i < cmd->max_reg_cmds; i++)
+-		while (down_trylock(&cmd->sem))
++	for (i = 0; i < cmd->max_reg_cmds; i++) {
++		while (down_trylock(&cmd->sem)) {
+ 			mlx5_cmd_trigger_completions(dev);
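++			/* Triggering completions for every pending command
++			 * can take a while; yield the CPU between retries
++			 * so the flush does not hog it.
++			 */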
++			cond_resched();
++		}
++	}
+ 
+-	while (down_trylock(&cmd->pages_sem))
++	while (down_trylock(&cmd->pages_sem)) {
+ 		mlx5_cmd_trigger_completions(dev);
++		cond_resched();
++	}
+ 
+ 	/* Unlock cmdif */
+ 	up(&cmd->pages_sem);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c
+index 39ef2a2561a30..8099a21e674c9 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c
+@@ -164,6 +164,36 @@ static int mlx5_esw_bridge_port_changeupper(struct notifier_block *nb, void *ptr
+ 	return err;
+ }
+ 
++static int
++mlx5_esw_bridge_changeupper_validate_netdev(void *ptr)
++{
++	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
++	struct netdev_notifier_changeupper_info *info = ptr;
++	struct net_device *upper = info->upper_dev;
++	struct net_device *lower;
++	struct list_head *iter;
++
++	if (!netif_is_bridge_master(upper) || !netif_is_lag_master(dev))
++		return 0;
++
++	netdev_for_each_lower_dev(dev, lower, iter) {
++		struct mlx5_core_dev *mdev;
++		struct mlx5e_priv *priv;
++
++		if (!mlx5e_eswitch_rep(lower))
++			continue;
++
++		priv = netdev_priv(lower);
++		mdev = priv->mdev;
++		if (!mlx5_lag_is_active(mdev))
++			return -EAGAIN;
++		if (!mlx5_lag_is_shared_fdb(mdev))
++			return -EOPNOTSUPP;
++	}
++
++	return 0;
++}
++
+ static int mlx5_esw_bridge_switchdev_port_event(struct notifier_block *nb,
+ 						unsigned long event, void *ptr)
+ {
+@@ -171,6 +201,7 @@ static int mlx5_esw_bridge_switchdev_port_event(struct notifier_block *nb,
+ 
+ 	switch (event) {
+ 	case NETDEV_PRECHANGEUPPER:
++		err = mlx5_esw_bridge_changeupper_validate_netdev(ptr);
+ 		break;
+ 
+ 	case NETDEV_CHANGEUPPER:
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/act.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/act.c
+index 305fde62a78de..3337241cfd84c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/act.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/act.c
+@@ -6,70 +6,42 @@
+ #include "en/tc_priv.h"
+ #include "mlx5_core.h"
+ 
+-/* Must be aligned with enum flow_action_id. */
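++/* Designated initializers tie each handler to its flow_action_id, so the
++ * table no longer has to mirror the enum order and unsupported actions
++ * are simply left NULL.
++ */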
+ static struct mlx5e_tc_act *tc_acts_fdb[NUM_FLOW_ACTIONS] = {
+-	&mlx5e_tc_act_accept,
+-	&mlx5e_tc_act_drop,
+-	&mlx5e_tc_act_trap,
+-	&mlx5e_tc_act_goto,
+-	&mlx5e_tc_act_mirred,
+-	&mlx5e_tc_act_mirred,
+-	&mlx5e_tc_act_redirect_ingress,
+-	NULL, /* FLOW_ACTION_MIRRED_INGRESS, */
+-	&mlx5e_tc_act_vlan,
+-	&mlx5e_tc_act_vlan,
+-	&mlx5e_tc_act_vlan_mangle,
+-	&mlx5e_tc_act_tun_encap,
+-	&mlx5e_tc_act_tun_decap,
+-	&mlx5e_tc_act_pedit,
+-	&mlx5e_tc_act_pedit,
+-	&mlx5e_tc_act_csum,
+-	NULL, /* FLOW_ACTION_MARK, */
+-	&mlx5e_tc_act_ptype,
+-	NULL, /* FLOW_ACTION_PRIORITY, */
+-	NULL, /* FLOW_ACTION_WAKE, */
+-	NULL, /* FLOW_ACTION_QUEUE, */
+-	&mlx5e_tc_act_sample,
+-	&mlx5e_tc_act_police,
+-	&mlx5e_tc_act_ct,
+-	NULL, /* FLOW_ACTION_CT_METADATA, */
+-	&mlx5e_tc_act_mpls_push,
+-	&mlx5e_tc_act_mpls_pop,
+-	NULL, /* FLOW_ACTION_MPLS_MANGLE, */
+-	NULL, /* FLOW_ACTION_GATE, */
+-	NULL, /* FLOW_ACTION_PPPOE_PUSH, */
+-	NULL, /* FLOW_ACTION_JUMP, */
+-	NULL, /* FLOW_ACTION_PIPE, */
+-	&mlx5e_tc_act_vlan,
+-	&mlx5e_tc_act_vlan,
++	[FLOW_ACTION_ACCEPT] = &mlx5e_tc_act_accept,
++	[FLOW_ACTION_DROP] = &mlx5e_tc_act_drop,
++	[FLOW_ACTION_TRAP] = &mlx5e_tc_act_trap,
++	[FLOW_ACTION_GOTO] = &mlx5e_tc_act_goto,
++	[FLOW_ACTION_REDIRECT] = &mlx5e_tc_act_mirred,
++	[FLOW_ACTION_MIRRED] = &mlx5e_tc_act_mirred,
++	[FLOW_ACTION_REDIRECT_INGRESS] = &mlx5e_tc_act_redirect_ingress,
++	[FLOW_ACTION_VLAN_PUSH] = &mlx5e_tc_act_vlan,
++	[FLOW_ACTION_VLAN_POP] = &mlx5e_tc_act_vlan,
++	[FLOW_ACTION_VLAN_MANGLE] = &mlx5e_tc_act_vlan_mangle,
++	[FLOW_ACTION_TUNNEL_ENCAP] = &mlx5e_tc_act_tun_encap,
++	[FLOW_ACTION_TUNNEL_DECAP] = &mlx5e_tc_act_tun_decap,
++	[FLOW_ACTION_MANGLE] = &mlx5e_tc_act_pedit,
++	[FLOW_ACTION_ADD] = &mlx5e_tc_act_pedit,
++	[FLOW_ACTION_CSUM] = &mlx5e_tc_act_csum,
++	[FLOW_ACTION_PTYPE] = &mlx5e_tc_act_ptype,
++	[FLOW_ACTION_SAMPLE] = &mlx5e_tc_act_sample,
++	[FLOW_ACTION_POLICE] = &mlx5e_tc_act_police,
++	[FLOW_ACTION_CT] = &mlx5e_tc_act_ct,
++	[FLOW_ACTION_MPLS_PUSH] = &mlx5e_tc_act_mpls_push,
++	[FLOW_ACTION_MPLS_POP] = &mlx5e_tc_act_mpls_pop,
++	[FLOW_ACTION_VLAN_PUSH_ETH] = &mlx5e_tc_act_vlan,
++	[FLOW_ACTION_VLAN_POP_ETH] = &mlx5e_tc_act_vlan,
+ };
+ 
+-/* Must be aligned with enum flow_action_id. */
+ static struct mlx5e_tc_act *tc_acts_nic[NUM_FLOW_ACTIONS] = {
+-	&mlx5e_tc_act_accept,
+-	&mlx5e_tc_act_drop,
+-	NULL, /* FLOW_ACTION_TRAP, */
+-	&mlx5e_tc_act_goto,
+-	&mlx5e_tc_act_mirred_nic,
+-	NULL, /* FLOW_ACTION_MIRRED, */
+-	NULL, /* FLOW_ACTION_REDIRECT_INGRESS, */
+-	NULL, /* FLOW_ACTION_MIRRED_INGRESS, */
+-	NULL, /* FLOW_ACTION_VLAN_PUSH, */
+-	NULL, /* FLOW_ACTION_VLAN_POP, */
+-	NULL, /* FLOW_ACTION_VLAN_MANGLE, */
+-	NULL, /* FLOW_ACTION_TUNNEL_ENCAP, */
+-	NULL, /* FLOW_ACTION_TUNNEL_DECAP, */
+-	&mlx5e_tc_act_pedit,
+-	&mlx5e_tc_act_pedit,
+-	&mlx5e_tc_act_csum,
+-	&mlx5e_tc_act_mark,
+-	NULL, /* FLOW_ACTION_PTYPE, */
+-	NULL, /* FLOW_ACTION_PRIORITY, */
+-	NULL, /* FLOW_ACTION_WAKE, */
+-	NULL, /* FLOW_ACTION_QUEUE, */
+-	NULL, /* FLOW_ACTION_SAMPLE, */
+-	NULL, /* FLOW_ACTION_POLICE, */
+-	&mlx5e_tc_act_ct,
++	[FLOW_ACTION_ACCEPT] = &mlx5e_tc_act_accept,
++	[FLOW_ACTION_DROP] = &mlx5e_tc_act_drop,
++	[FLOW_ACTION_GOTO] = &mlx5e_tc_act_goto,
++	[FLOW_ACTION_REDIRECT] = &mlx5e_tc_act_mirred_nic,
++	[FLOW_ACTION_MANGLE] = &mlx5e_tc_act_pedit,
++	[FLOW_ACTION_ADD] = &mlx5e_tc_act_pedit,
++	[FLOW_ACTION_CSUM] = &mlx5e_tc_act_csum,
++	[FLOW_ACTION_MARK] = &mlx5e_tc_act_mark,
++	[FLOW_ACTION_CT] = &mlx5e_tc_act_ct,
+ };
+ 
+ /**
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
+index ff8ca7a7e1036..aeed165a2dec2 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
+@@ -11,6 +11,27 @@
+ 
+ #define INL_HDR_START_SZ (sizeof(((struct mlx5_wqe_eth_seg *)NULL)->inline_hdr.start))
+ 
++/* IPSEC inline data includes:
++ * 1. ESP trailer: up to 255 bytes of padding, 1 byte for pad length, 1 byte for
++ *    next header.
++ * 2. ESP authentication data: 16 bytes for ICV.
++ */
++#define MLX5E_MAX_TX_IPSEC_DS DIV_ROUND_UP(sizeof(struct mlx5_wqe_inline_seg) + \
++					   255 + 1 + 1 + 16, MLX5_SEND_WQE_DS)
++
++/* 366 should be big enough to cover all L2, L3 and L4 headers with possible
++ * encapsulations.
++ */
++#define MLX5E_MAX_TX_INLINE_DS DIV_ROUND_UP(366 - INL_HDR_START_SZ + VLAN_HLEN, \
++					    MLX5_SEND_WQE_DS)
++
++/* Sync the calculation with mlx5e_sq_calc_wqe_attr. */
++#define MLX5E_MAX_TX_WQEBBS DIV_ROUND_UP(MLX5E_TX_WQE_EMPTY_DS_COUNT + \
++					 MLX5E_MAX_TX_INLINE_DS + \
++					 MLX5E_MAX_TX_IPSEC_DS + \
++					 MAX_SKB_FRAGS + 1, \
++					 MLX5_SEND_WQEBB_NUM_DS)
++
+ #define MLX5E_RX_ERR_CQE(cqe) (get_cqe_opcode(cqe) != MLX5_CQE_RESP_SEND)
+ 
+ static inline
+@@ -424,6 +445,8 @@ mlx5e_set_eseg_swp(struct sk_buff *skb, struct mlx5_wqe_eth_seg *eseg,
+ 
+ static inline u16 mlx5e_stop_room_for_wqe(struct mlx5_core_dev *mdev, u16 wqe_size)
+ {
++	WARN_ON_ONCE(PAGE_SIZE / MLX5_SEND_WQE_BB < mlx5e_get_max_sq_wqebbs(mdev));
++
+ 	/* A WQE must not cross the page boundary, hence two conditions:
+ 	 * 1. Its size must not exceed the page size.
+ 	 * 2. If the WQE size is X, and the space remaining in a page is less
+@@ -436,7 +459,6 @@ static inline u16 mlx5e_stop_room_for_wqe(struct mlx5_core_dev *mdev, u16 wqe_si
+ 		  "wqe_size %u is greater than max SQ WQEBBs %u",
+ 		  wqe_size, mlx5e_get_max_sq_wqebbs(mdev));
+ 
+-
+ 	return MLX5E_STOP_ROOM(wqe_size);
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 02eb2f0fa2ae7..6cf6a81775a85 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -5514,6 +5514,13 @@ int mlx5e_attach_netdev(struct mlx5e_priv *priv)
+ 	if (priv->fs)
+ 		priv->fs->state_destroy = !test_bit(MLX5E_STATE_DESTROYING, &priv->state);
+ 
++	/* Validate the max_wqe_size_sq capability. */
++	if (WARN_ON_ONCE(mlx5e_get_max_sq_wqebbs(priv->mdev) < MLX5E_MAX_TX_WQEBBS)) {
++		mlx5_core_warn(priv->mdev, "MLX5E: Max SQ WQEBBs firmware capability: %u, needed %lu\n",
++			       mlx5e_get_max_sq_wqebbs(priv->mdev), MLX5E_MAX_TX_WQEBBS);
++		return -EIO;
++	}
++
+ 	/* max number of channels may have changed */
+ 	max_nch = mlx5e_calc_max_nch(priv->mdev, priv->netdev, profile);
+ 	if (priv->channels.params.num_channels > max_nch) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index a687f047e3aeb..229c14b1af004 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -4739,12 +4739,6 @@ int mlx5e_policer_validate(const struct flow_action *action,
+ 		return -EOPNOTSUPP;
+ 	}
+ 
+-	if (act->police.rate_pkt_ps) {
+-		NL_SET_ERR_MSG_MOD(extack,
+-				   "QoS offload not support packets per second");
+-		return -EOPNOTSUPP;
+-	}
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+index 4d45150a3f8ef..a30e50d9969f7 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+@@ -304,6 +304,8 @@ static void mlx5e_sq_calc_wqe_attr(struct sk_buff *skb, const struct mlx5e_tx_at
+ 	u16 ds_cnt_inl = 0;
+ 	u16 ds_cnt_ids = 0;
+ 
++	/* Sync the calculation with MLX5E_MAX_TX_WQEBBS. */
++
+ 	if (attr->insz)
+ 		ds_cnt_ids = DIV_ROUND_UP(sizeof(struct mlx5_wqe_inline_seg) + attr->insz,
+ 					  MLX5_SEND_WQE_DS);
+@@ -316,6 +318,9 @@ static void mlx5e_sq_calc_wqe_attr(struct sk_buff *skb, const struct mlx5e_tx_at
+ 			inl += VLAN_HLEN;
+ 
+ 		ds_cnt_inl = DIV_ROUND_UP(inl, MLX5_SEND_WQE_DS);
++		if (WARN_ON_ONCE(ds_cnt_inl > MLX5E_MAX_TX_INLINE_DS))
++			netdev_warn(skb->dev, "ds_cnt_inl = %u > max %u\n", ds_cnt_inl,
++				    (u16)MLX5E_MAX_TX_INLINE_DS);
+ 		ds_cnt += ds_cnt_inl;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+index 6aa58044b949b..4d8b8f6143cc9 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+@@ -1388,12 +1388,14 @@ void mlx5_eswitch_disable_locked(struct mlx5_eswitch *esw)
+ 		 esw->mode == MLX5_ESWITCH_LEGACY ? "LEGACY" : "OFFLOADS",
+ 		 esw->esw_funcs.num_vfs, esw->enabled_vports);
+ 
+-	esw->fdb_table.flags &= ~MLX5_ESW_FDB_CREATED;
+-	if (esw->mode == MLX5_ESWITCH_OFFLOADS)
+-		esw_offloads_disable(esw);
+-	else if (esw->mode == MLX5_ESWITCH_LEGACY)
+-		esw_legacy_disable(esw);
+-	mlx5_esw_acls_ns_cleanup(esw);
++	if (esw->fdb_table.flags & MLX5_ESW_FDB_CREATED) {
++		esw->fdb_table.flags &= ~MLX5_ESW_FDB_CREATED;
++		if (esw->mode == MLX5_ESWITCH_OFFLOADS)
++			esw_offloads_disable(esw);
++		else if (esw->mode == MLX5_ESWITCH_LEGACY)
++			esw_legacy_disable(esw);
++		mlx5_esw_acls_ns_cleanup(esw);
++	}
+ 
+ 	if (esw->mode == MLX5_ESWITCH_OFFLOADS)
+ 		devl_rate_nodes_destroy(devlink);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+index a9f4c652f859c..3c68cac4a9c2c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+@@ -2207,7 +2207,7 @@ out_free:
+ static int esw_offloads_start(struct mlx5_eswitch *esw,
+ 			      struct netlink_ext_ack *extack)
+ {
+-	int err, err1;
++	int err;
+ 
+ 	esw->mode = MLX5_ESWITCH_OFFLOADS;
+ 	err = mlx5_eswitch_enable_locked(esw, esw->dev->priv.sriov.num_vfs);
+@@ -2215,11 +2215,6 @@ static int esw_offloads_start(struct mlx5_eswitch *esw,
+ 		NL_SET_ERR_MSG_MOD(extack,
+ 				   "Failed setting eswitch to offloads");
+ 		esw->mode = MLX5_ESWITCH_LEGACY;
+-		err1 = mlx5_eswitch_enable_locked(esw, MLX5_ESWITCH_IGNORE_NUM_VFS);
+-		if (err1) {
+-			NL_SET_ERR_MSG_MOD(extack,
+-					   "Failed setting eswitch back to legacy");
+-		}
+ 		mlx5_rescan_drivers(esw->dev);
+ 	}
+ 	if (esw->offloads.inline_mode == MLX5_INLINE_MODE_NONE) {
+@@ -3272,19 +3267,12 @@ err_metadata:
+ static int esw_offloads_stop(struct mlx5_eswitch *esw,
+ 			     struct netlink_ext_ack *extack)
+ {
+-	int err, err1;
++	int err;
+ 
+ 	esw->mode = MLX5_ESWITCH_LEGACY;
+ 	err = mlx5_eswitch_enable_locked(esw, MLX5_ESWITCH_IGNORE_NUM_VFS);
+-	if (err) {
++	if (err)
+ 		NL_SET_ERR_MSG_MOD(extack, "Failed setting eswitch to legacy");
+-		esw->mode = MLX5_ESWITCH_OFFLOADS;
+-		err1 = mlx5_eswitch_enable_locked(esw, MLX5_ESWITCH_IGNORE_NUM_VFS);
+-		if (err1) {
+-			NL_SET_ERR_MSG_MOD(extack,
+-					   "Failed setting eswitch back to offloads");
+-		}
+-	}
+ 
+ 	return err;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c
+index ee568bf34ae25..108a3503f413c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c
+@@ -30,9 +30,9 @@ mlx5_eswitch_termtbl_hash(struct mlx5_flow_act *flow_act,
+ 		     sizeof(dest->vport.num), hash);
+ 	hash = jhash((const void *)&dest->vport.vhca_id,
+ 		     sizeof(dest->vport.num), hash);
+-	if (dest->vport.pkt_reformat)
+-		hash = jhash(dest->vport.pkt_reformat,
+-			     sizeof(*dest->vport.pkt_reformat),
++	if (flow_act->pkt_reformat)
++		hash = jhash(flow_act->pkt_reformat,
++			     sizeof(*flow_act->pkt_reformat),
+ 			     hash);
+ 	return hash;
+ }
+@@ -53,9 +53,11 @@ mlx5_eswitch_termtbl_cmp(struct mlx5_flow_act *flow_act1,
+ 	if (ret)
+ 		return ret;
+ 
+-	return dest1->vport.pkt_reformat && dest2->vport.pkt_reformat ?
+-	       memcmp(dest1->vport.pkt_reformat, dest2->vport.pkt_reformat,
+-		      sizeof(*dest1->vport.pkt_reformat)) : 0;
++	if (flow_act1->pkt_reformat && flow_act2->pkt_reformat)
++		return memcmp(flow_act1->pkt_reformat, flow_act2->pkt_reformat,
++			      sizeof(*flow_act1->pkt_reformat));
++
++	return flow_act1->pkt_reformat != flow_act2->pkt_reformat;
+ }
+ 
+ static int
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
+index 07c583996c297..9d908a0ccfef1 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
+@@ -152,7 +152,8 @@ static void mlx5_fw_reset_complete_reload(struct mlx5_core_dev *dev)
+ 		mlx5_unload_one(dev);
+ 		if (mlx5_health_wait_pci_up(dev))
+ 			mlx5_core_err(dev, "reset reload flow aborted, PCI reads still not working\n");
+-		mlx5_load_one(dev, false);
++		else
++			mlx5_load_one(dev, false);
+ 		devlink_remote_reload_actions_performed(priv_to_devlink(dev), 0,
+ 							BIT(DEVLINK_RELOAD_ACTION_DRIVER_REINIT) |
+ 							BIT(DEVLINK_RELOAD_ACTION_FW_ACTIVATE));
+diff --git a/drivers/net/ethernet/neterion/s2io.c b/drivers/net/ethernet/neterion/s2io.c
+index 30f955efa8308..8f74c039b3be2 100644
+--- a/drivers/net/ethernet/neterion/s2io.c
++++ b/drivers/net/ethernet/neterion/s2io.c
+@@ -7128,9 +7128,8 @@ static int s2io_card_up(struct s2io_nic *sp)
+ 		if (ret) {
+ 			DBG_PRINT(ERR_DBG, "%s: Out of memory in Open\n",
+ 				  dev->name);
+-			s2io_reset(sp);
+-			free_rx_buffers(sp);
+-			return -ENOMEM;
++			ret = -ENOMEM;
++			goto err_fill_buff;
+ 		}
+ 		DBG_PRINT(INFO_DBG, "Buf in ring:%d is %d:\n", i,
+ 			  ring->rx_bufs_left);
+@@ -7168,18 +7167,16 @@ static int s2io_card_up(struct s2io_nic *sp)
+ 	/* Enable Rx Traffic and interrupts on the NIC */
+ 	if (start_nic(sp)) {
+ 		DBG_PRINT(ERR_DBG, "%s: Starting NIC failed\n", dev->name);
+-		s2io_reset(sp);
+-		free_rx_buffers(sp);
+-		return -ENODEV;
++		ret = -ENODEV;
++		goto err_out;
+ 	}
+ 
+ 	/* Add interrupt service routine */
+ 	if (s2io_add_isr(sp) != 0) {
+ 		if (sp->config.intr_type == MSI_X)
+ 			s2io_rem_isr(sp);
+-		s2io_reset(sp);
+-		free_rx_buffers(sp);
+-		return -ENODEV;
++		ret = -ENODEV;
++		goto err_out;
+ 	}
+ 
+ 	timer_setup(&sp->alarm_timer, s2io_alarm_handle, 0);
+@@ -7199,6 +7196,20 @@ static int s2io_card_up(struct s2io_nic *sp)
+ 	}
+ 
+ 	return 0;
++
++err_out:
++	if (config->napi) {
++		if (config->intr_type == MSI_X) {
++			for (i = 0; i < sp->config.rx_ring_num; i++)
++				napi_disable(&sp->mac_control.rings[i].napi);
++		} else {
++			napi_disable(&sp->napi);
++		}
++	}
++err_fill_buff:
++	s2io_reset(sp);
++	free_rx_buffers(sp);
++	return ret;
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/ni/nixge.c b/drivers/net/ethernet/ni/nixge.c
+index 4b3482ce90a1c..4fc279a175629 100644
+--- a/drivers/net/ethernet/ni/nixge.c
++++ b/drivers/net/ethernet/ni/nixge.c
+@@ -900,6 +900,7 @@ static int nixge_open(struct net_device *ndev)
+ err_rx_irq:
+ 	free_irq(priv->tx_irq, ndev);
+ err_tx_irq:
++	napi_disable(&priv->napi);
+ 	phy_stop(phy);
+ 	phy_disconnect(phy);
+ 	tasklet_kill(&priv->dma_err_tasklet);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
+index 9af25be424014..66c30a40a6a21 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
+@@ -630,7 +630,6 @@ static int ehl_common_data(struct pci_dev *pdev,
+ {
+ 	plat->rx_queues_to_use = 8;
+ 	plat->tx_queues_to_use = 8;
+-	plat->clk_ptp_rate = 200000000;
+ 	plat->use_phy_wol = 1;
+ 
+ 	plat->safety_feat_cfg->tsoee = 1;
+@@ -655,6 +654,8 @@ static int ehl_sgmii_data(struct pci_dev *pdev,
+ 	plat->serdes_powerup = intel_serdes_powerup;
+ 	plat->serdes_powerdown = intel_serdes_powerdown;
+ 
++	plat->clk_ptp_rate = 204800000;
++
+ 	return ehl_common_data(pdev, plat);
+ }
+ 
+@@ -668,6 +669,8 @@ static int ehl_rgmii_data(struct pci_dev *pdev,
+ 	plat->bus_id = 1;
+ 	plat->phy_interface = PHY_INTERFACE_MODE_RGMII;
+ 
++	plat->clk_ptp_rate = 204800000;
++
+ 	return ehl_common_data(pdev, plat);
+ }
+ 
+@@ -684,6 +687,8 @@ static int ehl_pse0_common_data(struct pci_dev *pdev,
+ 	plat->bus_id = 2;
+ 	plat->addr64 = 32;
+ 
++	plat->clk_ptp_rate = 200000000;
++
+ 	intel_mgbe_pse_crossts_adj(intel_priv, EHL_PSE_ART_MHZ);
+ 
+ 	return ehl_common_data(pdev, plat);
+@@ -723,6 +728,8 @@ static int ehl_pse1_common_data(struct pci_dev *pdev,
+ 	plat->bus_id = 3;
+ 	plat->addr64 = 32;
+ 
++	plat->clk_ptp_rate = 200000000;
++
+ 	intel_mgbe_pse_crossts_adj(intel_priv, EHL_PSE_ART_MHZ);
+ 
+ 	return ehl_common_data(pdev, plat);
+@@ -758,7 +765,7 @@ static int tgl_common_data(struct pci_dev *pdev,
+ {
+ 	plat->rx_queues_to_use = 6;
+ 	plat->tx_queues_to_use = 4;
+-	plat->clk_ptp_rate = 200000000;
++	plat->clk_ptp_rate = 204800000;
+ 	plat->speed_mode_2500 = intel_speed_mode_2500;
+ 
+ 	plat->safety_feat_cfg->tsoee = 1;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c
+index 79fa7870563b8..a25c187d31853 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c
+@@ -75,20 +75,24 @@ static int loongson_dwmac_probe(struct pci_dev *pdev, const struct pci_device_id
+ 		plat->mdio_bus_data = devm_kzalloc(&pdev->dev,
+ 						   sizeof(*plat->mdio_bus_data),
+ 						   GFP_KERNEL);
+-		if (!plat->mdio_bus_data)
+-			return -ENOMEM;
++		if (!plat->mdio_bus_data) {
++			ret = -ENOMEM;
++			goto err_put_node;
++		}
+ 		plat->mdio_bus_data->needs_reset = true;
+ 	}
+ 
+ 	plat->dma_cfg = devm_kzalloc(&pdev->dev, sizeof(*plat->dma_cfg), GFP_KERNEL);
+-	if (!plat->dma_cfg)
+-		return -ENOMEM;
++	if (!plat->dma_cfg) {
++		ret = -ENOMEM;
++		goto err_put_node;
++	}
+ 
+ 	/* Enable pci device */
+ 	ret = pci_enable_device(pdev);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "%s: ERROR: failed to enable device\n", __func__);
+-		return ret;
++		goto err_put_node;
+ 	}
+ 
+ 	/* Get the base address of device */
+@@ -97,7 +101,7 @@ static int loongson_dwmac_probe(struct pci_dev *pdev, const struct pci_device_id
+ 			continue;
+ 		ret = pcim_iomap_regions(pdev, BIT(0), pci_name(pdev));
+ 		if (ret)
+-			return ret;
++			goto err_disable_device;
+ 		break;
+ 	}
+ 
+@@ -108,7 +112,8 @@ static int loongson_dwmac_probe(struct pci_dev *pdev, const struct pci_device_id
+ 	phy_mode = device_get_phy_mode(&pdev->dev);
+ 	if (phy_mode < 0) {
+ 		dev_err(&pdev->dev, "phy_mode not found\n");
+-		return phy_mode;
++		ret = phy_mode;
++		goto err_disable_device;
+ 	}
+ 
+ 	plat->phy_interface = phy_mode;
+@@ -125,6 +130,7 @@ static int loongson_dwmac_probe(struct pci_dev *pdev, const struct pci_device_id
+ 	if (res.irq < 0) {
+ 		dev_err(&pdev->dev, "IRQ macirq not found\n");
+ 		ret = -ENODEV;
++		goto err_disable_msi;
+ 	}
+ 
+ 	res.wol_irq = of_irq_get_byname(np, "eth_wake_irq");
+@@ -137,15 +143,31 @@ static int loongson_dwmac_probe(struct pci_dev *pdev, const struct pci_device_id
+ 	if (res.lpi_irq < 0) {
+ 		dev_err(&pdev->dev, "IRQ eth_lpi not found\n");
+ 		ret = -ENODEV;
++		goto err_disable_msi;
+ 	}
+ 
+-	return stmmac_dvr_probe(&pdev->dev, plat, &res);
++	ret = stmmac_dvr_probe(&pdev->dev, plat, &res);
++	if (ret)
++		goto err_disable_msi;
++
++	return ret;
++
++err_disable_msi:
++	pci_disable_msi(pdev);
++err_disable_device:
++	pci_disable_device(pdev);
++err_put_node:
++	of_node_put(plat->mdio_node);
++	return ret;
+ }
+ 
+ static void loongson_dwmac_remove(struct pci_dev *pdev)
+ {
++	struct net_device *ndev = dev_get_drvdata(&pdev->dev);
++	struct stmmac_priv *priv = netdev_priv(ndev);
+ 	int i;
+ 
++	of_node_put(priv->plat->mdio_node);
+ 	stmmac_dvr_remove(&pdev->dev);
+ 
+ 	for (i = 0; i < PCI_STD_NUM_BARS; i++) {
+@@ -155,6 +177,7 @@ static void loongson_dwmac_remove(struct pci_dev *pdev)
+ 		break;
+ 	}
+ 
++	pci_disable_msi(pdev);
+ 	pci_disable_device(pdev);
+ }
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c
+index c7a6588d9398b..e8b507f88fbce 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c
+@@ -272,11 +272,9 @@ static int meson8b_devm_clk_prepare_enable(struct meson8b_dwmac *dwmac,
+ 	if (ret)
+ 		return ret;
+ 
+-	devm_add_action_or_reset(dwmac->dev,
+-				 (void(*)(void *))clk_disable_unprepare,
+-				 dwmac->rgmii_tx_clk);
+-
+-	return 0;
++	return devm_add_action_or_reset(dwmac->dev,
++					(void(*)(void *))clk_disable_unprepare,
++					clk);
+ }
+ 
+ static int meson8b_init_rgmii_delays(struct meson8b_dwmac *dwmac)
+diff --git a/drivers/net/ethernet/sunplus/spl2sw_driver.c b/drivers/net/ethernet/sunplus/spl2sw_driver.c
+index 61d1d07dc0704..d6f1fef4ff3af 100644
+--- a/drivers/net/ethernet/sunplus/spl2sw_driver.c
++++ b/drivers/net/ethernet/sunplus/spl2sw_driver.c
+@@ -286,7 +286,6 @@ static u32 spl2sw_init_netdev(struct platform_device *pdev, u8 *mac_addr,
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "Failed to register net device \"%s\"!\n",
+ 			ndev->name);
+-		free_netdev(ndev);
+ 		*r_ndev = NULL;
+ 		return ret;
+ 	}
+diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+index f4a6b590a1e39..348201e10d497 100644
+--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+@@ -2791,7 +2791,6 @@ static int am65_cpsw_nuss_remove(struct platform_device *pdev)
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	am65_cpsw_nuss_phylink_cleanup(common);
+ 	am65_cpsw_unregister_devlink(common);
+ 	am65_cpsw_unregister_notifiers(common);
+ 
+@@ -2799,6 +2798,7 @@ static int am65_cpsw_nuss_remove(struct platform_device *pdev)
+ 	 * dma_deconfigure(dev) before devres_release_all(dev)
+ 	 */
+ 	am65_cpsw_nuss_cleanup_ndev(common);
++	am65_cpsw_nuss_phylink_cleanup(common);
+ 
+ 	of_platform_device_destroy(common->mdio_dev, NULL);
+ 
+diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
+index ed66c4d4d8301..613e2c7c950ca 100644
+--- a/drivers/net/ethernet/ti/cpsw.c
++++ b/drivers/net/ethernet/ti/cpsw.c
+@@ -854,6 +854,8 @@ static int cpsw_ndo_open(struct net_device *ndev)
+ 
+ err_cleanup:
+ 	if (!cpsw->usage_count) {
++		napi_disable(&cpsw->napi_rx);
++		napi_disable(&cpsw->napi_tx);
+ 		cpdma_ctlr_stop(cpsw->dma);
+ 		cpsw_destroy_xdp_rxqs(cpsw);
+ 	}
+diff --git a/drivers/net/ethernet/tundra/tsi108_eth.c b/drivers/net/ethernet/tundra/tsi108_eth.c
+index 5251fc3242219..a2fe0534c769b 100644
+--- a/drivers/net/ethernet/tundra/tsi108_eth.c
++++ b/drivers/net/ethernet/tundra/tsi108_eth.c
+@@ -1303,12 +1303,15 @@ static int tsi108_open(struct net_device *dev)
+ 
+ 	data->rxring = dma_alloc_coherent(&data->pdev->dev, rxring_size,
+ 					  &data->rxdma, GFP_KERNEL);
+-	if (!data->rxring)
++	if (!data->rxring) {
++		free_irq(data->irq_num, dev);
+ 		return -ENOMEM;
++	}
+ 
+ 	data->txring = dma_alloc_coherent(&data->pdev->dev, txring_size,
+ 					  &data->txdma, GFP_KERNEL);
+ 	if (!data->txring) {
++		free_irq(data->irq_num, dev);
+ 		dma_free_coherent(&data->pdev->dev, rxring_size, data->rxring,
+ 				    data->rxdma);
+ 		return -ENOMEM;
+diff --git a/drivers/net/hamradio/bpqether.c b/drivers/net/hamradio/bpqether.c
+index 30af0081e2bef..83a16d10eedbc 100644
+--- a/drivers/net/hamradio/bpqether.c
++++ b/drivers/net/hamradio/bpqether.c
+@@ -533,7 +533,7 @@ static int bpq_device_event(struct notifier_block *this,
+ 	if (!net_eq(dev_net(dev), &init_net))
+ 		return NOTIFY_DONE;
+ 
+-	if (!dev_is_ethdev(dev))
++	if (!dev_is_ethdev(dev) && !bpq_get_ax25_dev(dev))
+ 		return NOTIFY_DONE;
+ 
+ 	switch (event) {
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index c6d271e5687e9..ddfa853ec9b53 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -1427,7 +1427,8 @@ static struct macsec_rx_sc *del_rx_sc(struct macsec_secy *secy, sci_t sci)
+ 	return NULL;
+ }
+ 
+-static struct macsec_rx_sc *create_rx_sc(struct net_device *dev, sci_t sci)
++static struct macsec_rx_sc *create_rx_sc(struct net_device *dev, sci_t sci,
++					 bool active)
+ {
+ 	struct macsec_rx_sc *rx_sc;
+ 	struct macsec_dev *macsec;
+@@ -1451,7 +1452,7 @@ static struct macsec_rx_sc *create_rx_sc(struct net_device *dev, sci_t sci)
+ 	}
+ 
+ 	rx_sc->sci = sci;
+-	rx_sc->active = true;
++	rx_sc->active = active;
+ 	refcount_set(&rx_sc->refcnt, 1);
+ 
+ 	secy = &macsec_priv(dev)->secy;
+@@ -1860,6 +1861,7 @@ static int macsec_add_rxsa(struct sk_buff *skb, struct genl_info *info)
+ 		       secy->key_len);
+ 
+ 		err = macsec_offload(ops->mdo_add_rxsa, &ctx);
++		memzero_explicit(ctx.sa.key, secy->key_len);
+ 		if (err)
+ 			goto cleanup;
+ 	}
+@@ -1904,7 +1906,7 @@ static int macsec_add_rxsc(struct sk_buff *skb, struct genl_info *info)
+ 	struct macsec_rx_sc *rx_sc;
+ 	struct nlattr *tb_rxsc[MACSEC_RXSC_ATTR_MAX + 1];
+ 	struct macsec_secy *secy;
+-	bool was_active;
++	bool active = true;
+ 	int ret;
+ 
+ 	if (!attrs[MACSEC_ATTR_IFINDEX])
+@@ -1926,16 +1928,15 @@ static int macsec_add_rxsc(struct sk_buff *skb, struct genl_info *info)
+ 	secy = &macsec_priv(dev)->secy;
+ 	sci = nla_get_sci(tb_rxsc[MACSEC_RXSC_ATTR_SCI]);
+ 
+-	rx_sc = create_rx_sc(dev, sci);
++	if (tb_rxsc[MACSEC_RXSC_ATTR_ACTIVE])
++		active = nla_get_u8(tb_rxsc[MACSEC_RXSC_ATTR_ACTIVE]);
++
++	rx_sc = create_rx_sc(dev, sci, active);
+ 	if (IS_ERR(rx_sc)) {
+ 		rtnl_unlock();
+ 		return PTR_ERR(rx_sc);
+ 	}
+ 
+-	was_active = rx_sc->active;
+-	if (tb_rxsc[MACSEC_RXSC_ATTR_ACTIVE])
+-		rx_sc->active = !!nla_get_u8(tb_rxsc[MACSEC_RXSC_ATTR_ACTIVE]);
+-
+ 	if (macsec_is_offloaded(netdev_priv(dev))) {
+ 		const struct macsec_ops *ops;
+ 		struct macsec_context ctx;
+@@ -1959,7 +1960,8 @@ static int macsec_add_rxsc(struct sk_buff *skb, struct genl_info *info)
+ 	return 0;
+ 
+ cleanup:
+-	rx_sc->active = was_active;
++	del_rx_sc(secy, sci);
++	free_rx_sc(rx_sc);
+ 	rtnl_unlock();
+ 	return ret;
+ }
+@@ -2102,6 +2104,7 @@ static int macsec_add_txsa(struct sk_buff *skb, struct genl_info *info)
+ 		       secy->key_len);
+ 
+ 		err = macsec_offload(ops->mdo_add_txsa, &ctx);
++		memzero_explicit(ctx.sa.key, secy->key_len);
+ 		if (err)
+ 			goto cleanup;
+ 	}
+@@ -2598,7 +2601,7 @@ static bool macsec_is_configured(struct macsec_dev *macsec)
+ 	struct macsec_tx_sc *tx_sc = &secy->tx_sc;
+ 	int i;
+ 
+-	if (secy->n_rx_sc > 0)
++	if (secy->rx_sc)
+ 		return true;
+ 
+ 	for (i = 0; i < MACSEC_NUM_AN; i++)
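
Both macsec_add_rxsa() and macsec_add_txsa() now wipe the key copy in the offload context with memzero_explicit() as soon as the mdo_add_*sa hook returns, and the mscc PHY hunk below does the same when freeing a flow. The point of memzero_explicit() over memset() is that the compiler may delete a plain memset() of a buffer that is never read again as a dead store. A hedged sketch, where hw_load_key() is a hypothetical offload hook:

#include <linux/errno.h>
#include <linux/string.h>
#include <linux/types.h>

int hw_load_key(const u8 *key, size_t len);	/* hypothetical */

/* Sketch: scrub key material once it has been handed to the hardware;
 * memzero_explicit() guarantees the wipe is not optimized away.
 */
static int example_program_key(const u8 *key, size_t len)
{
	u8 tmp[64];
	int err;

	if (len > sizeof(tmp))
		return -EINVAL;

	memcpy(tmp, key, len);
	err = hw_load_key(tmp, len);
	memzero_explicit(tmp, len);	/* wipe whether or not it failed */
	return err;
}
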
+diff --git a/drivers/net/macvlan.c b/drivers/net/macvlan.c
+index 1080d6ebff63b..9983d37ee87d9 100644
+--- a/drivers/net/macvlan.c
++++ b/drivers/net/macvlan.c
+@@ -1533,8 +1533,10 @@ destroy_macvlan_port:
+ 	/* the macvlan port may be freed by macvlan_uninit when registration
+ 	 * fails, so we destroy the macvlan port only when it's valid.

+ 	 */
+-	if (create && macvlan_port_get_rtnl(lowerdev))
++	if (create && macvlan_port_get_rtnl(lowerdev)) {
++		macvlan_flush_sources(port, vlan);
+ 		macvlan_port_destroy(port->dev);
++	}
+ 	return err;
+ }
+ EXPORT_SYMBOL_GPL(macvlan_common_newlink);
+diff --git a/drivers/net/phy/mscc/mscc_macsec.c b/drivers/net/phy/mscc/mscc_macsec.c
+index b7b2521c73fb6..c00eef457b850 100644
+--- a/drivers/net/phy/mscc/mscc_macsec.c
++++ b/drivers/net/phy/mscc/mscc_macsec.c
+@@ -632,6 +632,7 @@ static void vsc8584_macsec_free_flow(struct vsc8531_private *priv,
+ 
+ 	list_del(&flow->list);
+ 	clear_bit(flow->index, bitmap);
++	memzero_explicit(flow->key, sizeof(flow->key));
+ 	kfree(flow);
+ }
+ 
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index b02bd0a6c0a93..3387074a2bdb8 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -1967,17 +1967,25 @@ drop:
+ 					  skb_headlen(skb));
+ 
+ 		if (unlikely(headlen > skb_headlen(skb))) {
++			WARN_ON_ONCE(1);
++			err = -ENOMEM;
+ 			dev_core_stats_rx_dropped_inc(tun->dev);
++napi_busy:
+ 			napi_free_frags(&tfile->napi);
+ 			rcu_read_unlock();
+ 			mutex_unlock(&tfile->napi_mutex);
+-			WARN_ON(1);
+-			return -ENOMEM;
++			return err;
+ 		}
+ 
+-		local_bh_disable();
+-		napi_gro_frags(&tfile->napi);
+-		local_bh_enable();
++		if (likely(napi_schedule_prep(&tfile->napi))) {
++			local_bh_disable();
++			napi_gro_frags(&tfile->napi);
++			napi_complete(&tfile->napi);
++			local_bh_enable();
++		} else {
++			err = -EBUSY;
++			goto napi_busy;
++		}
+ 		mutex_unlock(&tfile->napi_mutex);
+ 	} else if (tfile->napi_enabled) {
+ 		struct sk_buff_head *queue = &tfile->sk.sk_write_queue;
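
The tun change above stops calling napi_gro_frags() unconditionally: napi_schedule_prep() first claims exclusive ownership of the NAPI instance, only then are the staged frags fed to GRO, and napi_complete() releases the claim; if the claim fails, the write errors out with -EBUSY via the shared napi_busy unwind instead of racing a concurrent poller. The ownership handshake in isolation, as a hedged kernel-style sketch (it assumes frags were staged beforehand with napi_get_frags()):

#include <linux/errno.h>
#include <linux/netdevice.h>

/* Sketch: napi_schedule_prep() atomically claims the NAPI instance
 * (failing if it is already scheduled or being polled) and
 * napi_complete() hands it back.
 */
static int example_feed_napi(struct napi_struct *napi)
{
	if (!napi_schedule_prep(napi))
		return -EBUSY;		/* someone else owns the context */

	local_bh_disable();
	napi_gro_frags(napi);		/* consume the staged frags */
	napi_complete(napi);
	local_bh_enable();
	return 0;
}
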
+diff --git a/drivers/net/wan/lapbether.c b/drivers/net/wan/lapbether.c
+index 960f1393595cc..d62a904d2e422 100644
+--- a/drivers/net/wan/lapbether.c
++++ b/drivers/net/wan/lapbether.c
+@@ -325,6 +325,7 @@ static int lapbeth_open(struct net_device *dev)
+ 
+ 	err = lapb_register(dev, &lapbeth_callbacks);
+ 	if (err != LAPB_OK) {
++		napi_disable(&lapbeth->napi);
+ 		pr_err("lapb_register error: %d\n", err);
+ 		return -ENODEV;
+ 	}
+@@ -446,7 +447,7 @@ static int lapbeth_device_event(struct notifier_block *this,
+ 	if (dev_net(dev) != &init_net)
+ 		return NOTIFY_DONE;
+ 
+-	if (!dev_is_ethdev(dev))
++	if (!dev_is_ethdev(dev) && !lapbeth_get_x25_dev(dev))
+ 		return NOTIFY_DONE;
+ 
+ 	switch (event) {
+diff --git a/drivers/net/wireless/ath/ath11k/reg.c b/drivers/net/wireless/ath/ath11k/reg.c
+index 7ee3ff69dfc85..6fae4e61ede7f 100644
+--- a/drivers/net/wireless/ath/ath11k/reg.c
++++ b/drivers/net/wireless/ath/ath11k/reg.c
+@@ -287,11 +287,7 @@ int ath11k_regd_update(struct ath11k *ar)
+ 		goto err;
+ 	}
+ 
+-	rtnl_lock();
+-	wiphy_lock(ar->hw->wiphy);
+-	ret = regulatory_set_wiphy_regd_sync(ar->hw->wiphy, regd_copy);
+-	wiphy_unlock(ar->hw->wiphy);
+-	rtnl_unlock();
++	ret = regulatory_set_wiphy_regd(ar->hw->wiphy, regd_copy);
+ 
+ 	kfree(regd_copy);
+ 
+diff --git a/drivers/net/wwan/iosm/iosm_ipc_imem_ops.c b/drivers/net/wwan/iosm/iosm_ipc_imem_ops.c
+index 57304a5adf68e..6e32eb91bba95 100644
+--- a/drivers/net/wwan/iosm/iosm_ipc_imem_ops.c
++++ b/drivers/net/wwan/iosm/iosm_ipc_imem_ops.c
+@@ -91,6 +91,14 @@ void ipc_imem_wwan_channel_init(struct iosm_imem *ipc_imem,
+ 	}
+ 
+ 	ipc_chnl_cfg_get(&chnl_cfg, ipc_imem->nr_of_channels);
++
++	if (ipc_imem->mmio->mux_protocol == MUX_AGGREGATION &&
++	    ipc_imem->nr_of_channels == IPC_MEM_IP_CHL_ID_0) {
++		chnl_cfg.ul_nr_of_entries = IPC_MEM_MAX_TDS_MUX_AGGR_UL;
++		chnl_cfg.dl_nr_of_entries = IPC_MEM_MAX_TDS_MUX_AGGR_DL;
++		chnl_cfg.dl_buf_size = IPC_MEM_MAX_ADB_BUF_SIZE;
++	}
++
+ 	ipc_imem_channel_init(ipc_imem, IPC_CTYPE_WWAN, chnl_cfg,
+ 			      IRQ_MOD_OFF);
+ 
+diff --git a/drivers/net/wwan/iosm/iosm_ipc_mux.h b/drivers/net/wwan/iosm/iosm_ipc_mux.h
+index cd9d74cc097f1..9968bb885c1f3 100644
+--- a/drivers/net/wwan/iosm/iosm_ipc_mux.h
++++ b/drivers/net/wwan/iosm/iosm_ipc_mux.h
+@@ -10,6 +10,7 @@
+ 
+ #define IPC_MEM_MAX_UL_DG_ENTRIES	100
+ #define IPC_MEM_MAX_TDS_MUX_AGGR_UL	60
++#define IPC_MEM_MAX_TDS_MUX_AGGR_DL	60
+ 
+ #define IPC_MEM_MAX_ADB_BUF_SIZE (16 * 1024)
+ #define IPC_MEM_MAX_UL_ADB_BUF_SIZE IPC_MEM_MAX_ADB_BUF_SIZE
+diff --git a/drivers/net/wwan/iosm/iosm_ipc_pcie.c b/drivers/net/wwan/iosm/iosm_ipc_pcie.c
+index 31f57b986df28..97cb6846c6ae2 100644
+--- a/drivers/net/wwan/iosm/iosm_ipc_pcie.c
++++ b/drivers/net/wwan/iosm/iosm_ipc_pcie.c
+@@ -232,6 +232,7 @@ static void ipc_pcie_config_init(struct iosm_pcie *ipc_pcie)
+  */
+ static enum ipc_pcie_sleep_state ipc_pcie_read_bios_cfg(struct device *dev)
+ {
++	enum ipc_pcie_sleep_state sleep_state = IPC_PCIE_D0L12;
+ 	union acpi_object *object;
+ 	acpi_handle handle_acpi;
+ 
+@@ -242,12 +243,16 @@ static enum ipc_pcie_sleep_state ipc_pcie_read_bios_cfg(struct device *dev)
+ 	}
+ 
+ 	object = acpi_evaluate_dsm(handle_acpi, &wwan_acpi_guid, 0, 3, NULL);
++	if (!object)
++		goto default_ret;
++
++	if (object->integer.value == 3)
++		sleep_state = IPC_PCIE_D3L2;
+ 
+-	if (object && object->integer.value == 3)
+-		return IPC_PCIE_D3L2;
++	kfree(object);
+ 
+ default_ret:
+-	return IPC_PCIE_D0L12;
++	return sleep_state;
+ }
+ 
+ static int ipc_pcie_probe(struct pci_dev *pci,
+diff --git a/drivers/net/wwan/iosm/iosm_ipc_wwan.c b/drivers/net/wwan/iosm/iosm_ipc_wwan.c
+index 4712f01a7e33e..3d70b34f96e31 100644
+--- a/drivers/net/wwan/iosm/iosm_ipc_wwan.c
++++ b/drivers/net/wwan/iosm/iosm_ipc_wwan.c
+@@ -168,6 +168,7 @@ static void ipc_wwan_setup(struct net_device *iosm_dev)
+ 	iosm_dev->max_mtu = ETH_MAX_MTU;
+ 
+ 	iosm_dev->flags = IFF_POINTOPOINT | IFF_NOARP;
++	iosm_dev->needs_free_netdev = true;
+ 
+ 	iosm_dev->netdev_ops = &ipc_inm_ops;
+ }
+diff --git a/drivers/net/wwan/mhi_wwan_mbim.c b/drivers/net/wwan/mhi_wwan_mbim.c
+index 6872782e8dd89..ef70bb7c88ad6 100644
+--- a/drivers/net/wwan/mhi_wwan_mbim.c
++++ b/drivers/net/wwan/mhi_wwan_mbim.c
+@@ -582,6 +582,7 @@ static void mhi_mbim_setup(struct net_device *ndev)
+ 	ndev->min_mtu = ETH_MIN_MTU;
+ 	ndev->max_mtu = MHI_MAX_BUF_SZ - ndev->needed_headroom;
+ 	ndev->tx_queue_len = 1000;
++	ndev->needs_free_netdev = true;
+ }
+ 
+ static const struct wwan_ops mhi_mbim_wwan_ops = {
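
Both WWAN hunks above set needs_free_netdev in the ->setup() callback: with that flag the core calls free_netdev() itself once the device is unregistered, which plugs the leak for netdevs the driver never explicitly frees. A hedged sketch of the lifecycle for a plain (non-devm) netdev, with hypothetical names:

#include <linux/netdevice.h>

static const struct net_device_ops example_netdev_ops;

/* Sketch: with needs_free_netdev set, the core frees the netdev after
 * unregister_netdev(); the driver frees it by hand only on paths where
 * registration never happened.
 */
static void example_setup(struct net_device *dev)
{
	dev->netdev_ops = &example_netdev_ops;
	dev->needs_free_netdev = true;
}

static struct net_device *example_create(void)
{
	struct net_device *dev;

	dev = alloc_netdev(0, "ex%d", NET_NAME_ENUM, example_setup);
	if (!dev)
		return NULL;

	if (register_netdev(dev)) {
		free_netdev(dev);	/* never registered: free by hand */
		return NULL;
	}
	return dev;	/* teardown: unregister_netdev() alone suffices */
}
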
+diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
+index e7c6f6629e7c5..ba64284eaf9fa 100644
+--- a/drivers/pci/controller/pci-hyperv.c
++++ b/drivers/pci/controller/pci-hyperv.c
+@@ -1614,7 +1614,7 @@ out:
+ 
+ static u32 hv_compose_msi_req_v1(
+ 	struct pci_create_interrupt *int_pkt, const struct cpumask *affinity,
+-	u32 slot, u8 vector, u8 vector_count)
++	u32 slot, u8 vector, u16 vector_count)
+ {
+ 	int_pkt->message_type.type = PCI_CREATE_INTERRUPT_MESSAGE;
+ 	int_pkt->wslot.slot = slot;
+@@ -1642,7 +1642,7 @@ static int hv_compose_msi_req_get_cpu(const struct cpumask *affinity)
+ 
+ static u32 hv_compose_msi_req_v2(
+ 	struct pci_create_interrupt2 *int_pkt, const struct cpumask *affinity,
+-	u32 slot, u8 vector, u8 vector_count)
++	u32 slot, u8 vector, u16 vector_count)
+ {
+ 	int cpu;
+ 
+@@ -1661,7 +1661,7 @@ static u32 hv_compose_msi_req_v2(
+ 
+ static u32 hv_compose_msi_req_v3(
+ 	struct pci_create_interrupt3 *int_pkt, const struct cpumask *affinity,
+-	u32 slot, u32 vector, u8 vector_count)
++	u32 slot, u32 vector, u16 vector_count)
+ {
+ 	int cpu;
+ 
+@@ -1701,7 +1701,12 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
+ 	struct compose_comp_ctxt comp;
+ 	struct tran_int_desc *int_desc;
+ 	struct msi_desc *msi_desc;
+-	u8 vector, vector_count;
++	/*
++	 * vector_count should be u16: see hv_msi_desc, hv_msi_desc2
++	 * and hv_msi_desc3. vector must be u32: see hv_msi_desc3.
++	 */
++	u16 vector_count;
++	u32 vector;
+ 	struct {
+ 		struct pci_packet pci_pkt;
+ 		union {
+@@ -1767,6 +1772,11 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
+ 		vector_count = 1;
+ 	}
+ 
++	/*
++	 * hv_compose_msi_req_v1 and v2 are for x86 only, meaning 'vector'
++	 * can't exceed u8. Cast 'vector' down to u8 for v1/v2 explicitly
++	 * for better readability.
++	 */
+ 	memset(&ctxt, 0, sizeof(ctxt));
+ 	init_completion(&comp.comp_pkt.host_event);
+ 	ctxt.pci_pkt.completion_func = hv_pci_compose_compl;
+@@ -1777,7 +1787,7 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
+ 		size = hv_compose_msi_req_v1(&ctxt.int_pkts.v1,
+ 					dest,
+ 					hpdev->desc.win_slot.slot,
+-					vector,
++					(u8)vector,
+ 					vector_count);
+ 		break;
+ 
+@@ -1786,7 +1796,7 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
+ 		size = hv_compose_msi_req_v2(&ctxt.int_pkts.v2,
+ 					dest,
+ 					hpdev->desc.win_slot.slot,
+-					vector,
++					(u8)vector,
+ 					vector_count);
+ 		break;
+ 
+diff --git a/drivers/phy/qualcomm/phy-qcom-qmp-combo.c b/drivers/phy/qualcomm/phy-qcom-qmp-combo.c
+index 4b18289761044..3e730c05ac3fb 100644
+--- a/drivers/phy/qualcomm/phy-qcom-qmp-combo.c
++++ b/drivers/phy/qualcomm/phy-qcom-qmp-combo.c
+@@ -1914,7 +1914,7 @@ static void qcom_qmp_phy_combo_enable_autonomous_mode(struct qmp_phy *qphy)
+ static void qcom_qmp_phy_combo_disable_autonomous_mode(struct qmp_phy *qphy)
+ {
+ 	const struct qmp_phy_cfg *cfg = qphy->cfg;
+-	void __iomem *pcs_usb = qphy->pcs_usb ?: qphy->pcs_usb;
++	void __iomem *pcs_usb = qphy->pcs_usb ?: qphy->pcs;
+ 	void __iomem *pcs_misc = qphy->pcs_misc;
+ 
+ 	/* Disable i/o clamp_n on resume for normal mode */
+diff --git a/drivers/phy/ralink/phy-mt7621-pci.c b/drivers/phy/ralink/phy-mt7621-pci.c
+index 5e6530f545b5c..85888ab2d307a 100644
+--- a/drivers/phy/ralink/phy-mt7621-pci.c
++++ b/drivers/phy/ralink/phy-mt7621-pci.c
+@@ -280,7 +280,8 @@ static struct phy *mt7621_pcie_phy_of_xlate(struct device *dev,
+ }
+ 
+ static const struct soc_device_attribute mt7621_pci_quirks_match[] = {
+-	{ .soc_id = "mt7621", .revision = "E2" }
++	{ .soc_id = "mt7621", .revision = "E2" },
++	{ /* sentinel */ }
+ };
+ 
+ static const struct regmap_config mt7621_pci_phy_regmap_config = {
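
The mt7621 fix above appends an empty sentinel entry to the quirks table: soc_device_match() walks the array until it reaches an entry with no match fields set, so a table without a terminator reads past its end. The convention in isolation, as a hedged sketch:

#include <linux/sys_soc.h>
#include <linux/types.h>

/* Sketch: match tables for soc_device_match() are terminated by an
 * all-zero sentinel; iteration stops at the first empty entry.
 */
static const struct soc_device_attribute example_quirks[] = {
	{ .soc_id = "mt7621", .revision = "E2" },
	{ /* sentinel */ }
};

static bool example_needs_quirk(void)
{
	return soc_device_match(example_quirks) != NULL;
}
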
+diff --git a/drivers/phy/st/phy-stm32-usbphyc.c b/drivers/phy/st/phy-stm32-usbphyc.c
+index a98c911cc37ae..5bb9647b078f1 100644
+--- a/drivers/phy/st/phy-stm32-usbphyc.c
++++ b/drivers/phy/st/phy-stm32-usbphyc.c
+@@ -710,6 +710,8 @@ static int stm32_usbphyc_probe(struct platform_device *pdev)
+ 		ret = of_property_read_u32(child, "reg", &index);
+ 		if (ret || index > usbphyc->nphys) {
+ 			dev_err(&phy->dev, "invalid reg property: %d\n", ret);
++			if (!ret)
++				ret = -EINVAL;
+ 			goto put_child;
+ 		}
+ 
+diff --git a/drivers/platform/x86/hp-wmi.c b/drivers/platform/x86/hp-wmi.c
+index fc8dbbd6fc7c2..4fbe91769c915 100644
+--- a/drivers/platform/x86/hp-wmi.c
++++ b/drivers/platform/x86/hp-wmi.c
+@@ -1298,8 +1298,16 @@ static int __init hp_wmi_bios_setup(struct platform_device *device)
+ 	wwan_rfkill = NULL;
+ 	rfkill2_count = 0;
+ 
+-	if (hp_wmi_rfkill_setup(device))
+-		hp_wmi_rfkill2_setup(device);
++	/*
++	 * In pre-2009 BIOS, command 1Bh returns 0x4 to indicate that
++	 * the BIOS no longer controls the power for the wireless
++	 * devices; all features backed by this command are therefore
++	 * unsupported.
++	 */
++	if (!hp_wmi_bios_2009_later()) {
++		if (hp_wmi_rfkill_setup(device))
++			hp_wmi_rfkill2_setup(device);
++	}
+ 
+ 	err = hp_wmi_hwmon_init();
+ 
+diff --git a/drivers/platform/x86/p2sb.c b/drivers/platform/x86/p2sb.c
+index 384d0962ae93a..1cf2471d54dde 100644
+--- a/drivers/platform/x86/p2sb.c
++++ b/drivers/platform/x86/p2sb.c
+@@ -19,26 +19,23 @@
+ #define P2SBC			0xe0
+ #define P2SBC_HIDE		BIT(8)
+ 
++#define P2SB_DEVFN_DEFAULT	PCI_DEVFN(31, 1)
++
+ static const struct x86_cpu_id p2sb_cpu_ids[] = {
+ 	X86_MATCH_INTEL_FAM6_MODEL(ATOM_GOLDMONT,	PCI_DEVFN(13, 0)),
+-	X86_MATCH_INTEL_FAM6_MODEL(ATOM_GOLDMONT_D,	PCI_DEVFN(31, 1)),
+-	X86_MATCH_INTEL_FAM6_MODEL(ATOM_SILVERMONT_D,	PCI_DEVFN(31, 1)),
+-	X86_MATCH_INTEL_FAM6_MODEL(KABYLAKE,		PCI_DEVFN(31, 1)),
+-	X86_MATCH_INTEL_FAM6_MODEL(KABYLAKE_L,		PCI_DEVFN(31, 1)),
+-	X86_MATCH_INTEL_FAM6_MODEL(SKYLAKE,		PCI_DEVFN(31, 1)),
+-	X86_MATCH_INTEL_FAM6_MODEL(SKYLAKE_L,		PCI_DEVFN(31, 1)),
+ 	{}
+ };
+ 
+ static int p2sb_get_devfn(unsigned int *devfn)
+ {
++	unsigned int fn = P2SB_DEVFN_DEFAULT;
+ 	const struct x86_cpu_id *id;
+ 
+ 	id = x86_match_cpu(p2sb_cpu_ids);
+-	if (!id)
+-		return -ENODEV;
++	if (id)
++		fn = (unsigned int)id->driver_data;
+ 
+-	*devfn = (unsigned int)id->driver_data;
++	*devfn = fn;
+ 	return 0;
+ }
+ 
+diff --git a/drivers/soundwire/qcom.c b/drivers/soundwire/qcom.c
+index 3a992a6478c30..6e5990611d834 100644
+--- a/drivers/soundwire/qcom.c
++++ b/drivers/soundwire/qcom.c
+@@ -344,6 +344,9 @@ static int qcom_swrm_cmd_fifo_wr_cmd(struct qcom_swrm_ctrl *swrm, u8 cmd_data,
+ 	if (swrm_wait_for_wr_fifo_avail(swrm))
+ 		return SDW_CMD_FAIL_OTHER;
+ 
++	if (cmd_id == SWR_BROADCAST_CMD_ID)
++		reinit_completion(&swrm->broadcast);
++
+ 	/* It's assumed that the write is okay as we do not get any status back */
+ 	swrm->reg_write(swrm, SWRM_CMD_FIFO_WR_CMD, val);
+ 
+@@ -377,6 +380,12 @@ static int qcom_swrm_cmd_fifo_rd_cmd(struct qcom_swrm_ctrl *swrm,
+ 
+ 	val = swrm_get_packed_reg_val(&swrm->rcmd_id, len, dev_addr, reg_addr);
+ 
++	/*
++	 * Check for outstanding commands against the write FIFO depth
++	 * to avoid overflow, as a read also increases the write FIFO
++	 * count.
++	 */
++	swrm_wait_for_wr_fifo_avail(swrm);
++
+ 	/* wait for FIFO RD to complete to avoid overflow */
+ 	usleep_range(100, 105);
+ 	swrm->reg_write(swrm, SWRM_CMD_FIFO_RD_CMD, val);
+diff --git a/drivers/spi/spi-intel.c b/drivers/spi/spi-intel.c
+index 66063687ae271..3f6db482b6c71 100644
+--- a/drivers/spi/spi-intel.c
++++ b/drivers/spi/spi-intel.c
+@@ -52,17 +52,17 @@
+ #define FRACC				0x50
+ 
+ #define FREG(n)				(0x54 + ((n) * 4))
+-#define FREG_BASE_MASK			0x3fff
++#define FREG_BASE_MASK			GENMASK(14, 0)
+ #define FREG_LIMIT_SHIFT		16
+-#define FREG_LIMIT_MASK			(0x03fff << FREG_LIMIT_SHIFT)
++#define FREG_LIMIT_MASK			GENMASK(30, 16)
+ 
+ /* Offset is from @ispi->pregs */
+ #define PR(n)				((n) * 4)
+ #define PR_WPE				BIT(31)
+ #define PR_LIMIT_SHIFT			16
+-#define PR_LIMIT_MASK			(0x3fff << PR_LIMIT_SHIFT)
++#define PR_LIMIT_MASK			GENMASK(30, 16)
+ #define PR_RPE				BIT(15)
+-#define PR_BASE_MASK			0x3fff
++#define PR_BASE_MASK			GENMASK(14, 0)
+ 
+ /* Offsets are from @ispi->sregs */
+ #define SSFSTS_CTL			0x00
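
The spi-intel hunks above do more than restyle the masks: the old FREG_BASE_MASK/PR_BASE_MASK value 0x3fff only covered bits 13:0 of a 15-bit field, while GENMASK(14, 0) yields the full 0x7fff (and GENMASK(30, 16) its shifted counterpart). Pairing GENMASK() with FIELD_GET() keeps mask and shift from drifting apart; a hedged sketch with hypothetical register names:

#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/types.h>

#define EXAMPLE_BASE_MASK	GENMASK(14, 0)	/* bits 14:0 */
#define EXAMPLE_LIMIT_MASK	GENMASK(30, 16)	/* bits 30:16 */

/* Sketch: FIELD_GET() masks and right-shifts in one step, deriving
 * the shift amount from the mask itself.
 */
static u32 example_decode_limit(u32 reg)
{
	return FIELD_GET(EXAMPLE_LIMIT_MASK, reg);
}
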
+diff --git a/drivers/spi/spi-mt65xx.c b/drivers/spi/spi-mt65xx.c
+index 0a3b9f7eed30f..cd9dc358d3967 100644
+--- a/drivers/spi/spi-mt65xx.c
++++ b/drivers/spi/spi-mt65xx.c
+@@ -551,14 +551,17 @@ static void mtk_spi_enable_transfer(struct spi_master *master)
+ 	writel(cmd, mdata->base + SPI_CMD_REG);
+ }
+ 
+-static int mtk_spi_get_mult_delta(u32 xfer_len)
++static int mtk_spi_get_mult_delta(struct mtk_spi *mdata, u32 xfer_len)
+ {
+-	u32 mult_delta;
++	u32 mult_delta = 0;
+ 
+-	if (xfer_len > MTK_SPI_PACKET_SIZE)
+-		mult_delta = xfer_len % MTK_SPI_PACKET_SIZE;
+-	else
+-		mult_delta = 0;
++	if (mdata->dev_comp->ipm_design) {
++		if (xfer_len > MTK_SPI_IPM_PACKET_SIZE)
++			mult_delta = xfer_len % MTK_SPI_IPM_PACKET_SIZE;
++	} else {
++		if (xfer_len > MTK_SPI_PACKET_SIZE)
++			mult_delta = xfer_len % MTK_SPI_PACKET_SIZE;
++	}
+ 
+ 	return mult_delta;
+ }
+@@ -570,22 +573,22 @@ static void mtk_spi_update_mdata_len(struct spi_master *master)
+ 
+ 	if (mdata->tx_sgl_len && mdata->rx_sgl_len) {
+ 		if (mdata->tx_sgl_len > mdata->rx_sgl_len) {
+-			mult_delta = mtk_spi_get_mult_delta(mdata->rx_sgl_len);
++			mult_delta = mtk_spi_get_mult_delta(mdata, mdata->rx_sgl_len);
+ 			mdata->xfer_len = mdata->rx_sgl_len - mult_delta;
+ 			mdata->rx_sgl_len = mult_delta;
+ 			mdata->tx_sgl_len -= mdata->xfer_len;
+ 		} else {
+-			mult_delta = mtk_spi_get_mult_delta(mdata->tx_sgl_len);
++			mult_delta = mtk_spi_get_mult_delta(mdata, mdata->tx_sgl_len);
+ 			mdata->xfer_len = mdata->tx_sgl_len - mult_delta;
+ 			mdata->tx_sgl_len = mult_delta;
+ 			mdata->rx_sgl_len -= mdata->xfer_len;
+ 		}
+ 	} else if (mdata->tx_sgl_len) {
+-		mult_delta = mtk_spi_get_mult_delta(mdata->tx_sgl_len);
++		mult_delta = mtk_spi_get_mult_delta(mdata, mdata->tx_sgl_len);
+ 		mdata->xfer_len = mdata->tx_sgl_len - mult_delta;
+ 		mdata->tx_sgl_len = mult_delta;
+ 	} else if (mdata->rx_sgl_len) {
+-		mult_delta = mtk_spi_get_mult_delta(mdata->rx_sgl_len);
++		mult_delta = mtk_spi_get_mult_delta(mdata, mdata->rx_sgl_len);
+ 		mdata->xfer_len = mdata->rx_sgl_len - mult_delta;
+ 		mdata->rx_sgl_len = mult_delta;
+ 	}
+diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
+index 9853f6c7e81d7..583c22df40403 100644
+--- a/drivers/thunderbolt/tb.c
++++ b/drivers/thunderbolt/tb.c
+@@ -105,6 +105,32 @@ static void tb_remove_dp_resources(struct tb_switch *sw)
+ 	}
+ }
+ 
++static void tb_discover_dp_resource(struct tb *tb, struct tb_port *port)
++{
++	struct tb_cm *tcm = tb_priv(tb);
++	struct tb_port *p;
++
++	list_for_each_entry(p, &tcm->dp_resources, list) {
++		if (p == port)
++			return;
++	}
++
++	tb_port_dbg(port, "DP %s resource available discovered\n",
++		    tb_port_is_dpin(port) ? "IN" : "OUT");
++	list_add_tail(&port->list, &tcm->dp_resources);
++}
++
++static void tb_discover_dp_resources(struct tb *tb)
++{
++	struct tb_cm *tcm = tb_priv(tb);
++	struct tb_tunnel *tunnel;
++
++	list_for_each_entry(tunnel, &tcm->tunnel_list, list) {
++		if (tb_tunnel_is_dp(tunnel))
++			tb_discover_dp_resource(tb, tunnel->dst_port);
++	}
++}
++
+ static void tb_switch_discover_tunnels(struct tb_switch *sw,
+ 				       struct list_head *list,
+ 				       bool alloc_hopids)
+@@ -1446,6 +1472,8 @@ static int tb_start(struct tb *tb)
+ 	tb_scan_switch(tb->root_switch);
+ 	/* Find out tunnels created by the boot firmware */
+ 	tb_discover_tunnels(tb);
++	/* Add DP resources from the DP tunnels created by the boot firmware */
++	tb_discover_dp_resources(tb);
+ 	/*
+ 	 * If the boot firmware did not create USB 3.x tunnels create them
+ 	 * now for the whole topology.
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 2633137c3e9f1..aa4bc213d301b 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -2544,7 +2544,9 @@ static int btrfs_read_roots(struct btrfs_fs_info *fs_info)
+ 		fs_info->dev_root = root;
+ 	}
+ 	/* Initialize fs_info for all devices in any case */
+-	btrfs_init_devices_late(fs_info);
++	ret = btrfs_init_devices_late(fs_info);
++	if (ret)
++		goto out;
+ 
+ 	/*
+ 	 * This tree can share blocks with some other fs tree during relocation
+diff --git a/fs/btrfs/tests/btrfs-tests.c b/fs/btrfs/tests/btrfs-tests.c
+index cc9377cf56a33..8c8e28dff2f12 100644
+--- a/fs/btrfs/tests/btrfs-tests.c
++++ b/fs/btrfs/tests/btrfs-tests.c
+@@ -200,7 +200,7 @@ void btrfs_free_dummy_fs_info(struct btrfs_fs_info *fs_info)
+ 
+ void btrfs_free_dummy_root(struct btrfs_root *root)
+ {
+-	if (!root)
++	if (IS_ERR_OR_NULL(root))
+ 		return;
+ 	/* Will be freed by btrfs_free_fs_roots */
+ 	if (WARN_ON(test_bit(BTRFS_ROOT_IN_RADIX, &root->state)))
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 5d004772ab493..7aa220742c61d 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -1009,6 +1009,18 @@ static struct btrfs_fs_devices *clone_fs_devices(struct btrfs_fs_devices *orig)
+ 			rcu_assign_pointer(device->name, name);
+ 		}
+ 
++		if (orig_dev->zone_info) {
++			struct btrfs_zoned_device_info *zone_info;
++
++			zone_info = btrfs_clone_dev_zone_info(orig_dev);
++			if (!zone_info) {
++				btrfs_free_device(device);
++				ret = -ENOMEM;
++				goto error;
++			}
++			device->zone_info = zone_info;
++		}
++
+ 		list_add(&device->dev_list, &fs_devices->devices);
+ 		device->fs_devices = fs_devices;
+ 		fs_devices->num_devices++;
+@@ -6805,18 +6817,18 @@ static bool dev_args_match_fs_devices(const struct btrfs_dev_lookup_args *args,
+ static bool dev_args_match_device(const struct btrfs_dev_lookup_args *args,
+ 				  const struct btrfs_device *device)
+ {
+-	ASSERT((args->devid != (u64)-1) || args->missing);
++	if (args->missing) {
++		if (test_bit(BTRFS_DEV_STATE_IN_FS_METADATA, &device->dev_state) &&
++		    !device->bdev)
++			return true;
++		return false;
++	}
+ 
+-	if ((args->devid != (u64)-1) && device->devid != args->devid)
++	if (device->devid != args->devid)
+ 		return false;
+ 	if (args->uuid && memcmp(device->uuid, args->uuid, BTRFS_UUID_SIZE) != 0)
+ 		return false;
+-	if (!args->missing)
+-		return true;
+-	if (test_bit(BTRFS_DEV_STATE_IN_FS_METADATA, &device->dev_state) &&
+-	    !device->bdev)
+-		return true;
+-	return false;
++	return true;
+ }
+ 
+ /*
+@@ -7631,10 +7643,11 @@ error:
+ 	return ret;
+ }
+ 
+-void btrfs_init_devices_late(struct btrfs_fs_info *fs_info)
++int btrfs_init_devices_late(struct btrfs_fs_info *fs_info)
+ {
+ 	struct btrfs_fs_devices *fs_devices = fs_info->fs_devices, *seed_devs;
+ 	struct btrfs_device *device;
++	int ret = 0;
+ 
+ 	fs_devices->fs_info = fs_info;
+ 
+@@ -7643,12 +7656,18 @@ void btrfs_init_devices_late(struct btrfs_fs_info *fs_info)
+ 		device->fs_info = fs_info;
+ 
+ 	list_for_each_entry(seed_devs, &fs_devices->seed_list, seed_list) {
+-		list_for_each_entry(device, &seed_devs->devices, dev_list)
++		list_for_each_entry(device, &seed_devs->devices, dev_list) {
+ 			device->fs_info = fs_info;
++			ret = btrfs_get_dev_zone_info(device, false);
++			if (ret)
++				break;
++		}
+ 
+ 		seed_devs->fs_info = fs_info;
+ 	}
+ 	mutex_unlock(&fs_devices->device_list_mutex);
++
++	return ret;
+ }
+ 
+ static u64 btrfs_dev_stats_value(const struct extent_buffer *eb,
+diff --git a/fs/btrfs/volumes.h b/fs/btrfs/volumes.h
+index 5639961b3626f..6d6cf77cd1b57 100644
+--- a/fs/btrfs/volumes.h
++++ b/fs/btrfs/volumes.h
+@@ -629,7 +629,7 @@ int find_free_dev_extent(struct btrfs_device *device, u64 num_bytes,
+ void btrfs_dev_stat_inc_and_print(struct btrfs_device *dev, int index);
+ int btrfs_get_dev_stats(struct btrfs_fs_info *fs_info,
+ 			struct btrfs_ioctl_get_dev_stats *stats);
+-void btrfs_init_devices_late(struct btrfs_fs_info *fs_info);
++int btrfs_init_devices_late(struct btrfs_fs_info *fs_info);
+ int btrfs_init_dev_stats(struct btrfs_fs_info *fs_info);
+ int btrfs_run_dev_stats(struct btrfs_trans_handle *trans);
+ void btrfs_rm_dev_replace_remove_srcdev(struct btrfs_device *srcdev);
+diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
+index 73c6929f7be66..a227d27a63bfd 100644
+--- a/fs/btrfs/zoned.c
++++ b/fs/btrfs/zoned.c
+@@ -639,6 +639,46 @@ void btrfs_destroy_dev_zone_info(struct btrfs_device *device)
+ 	device->zone_info = NULL;
+ }
+ 
++struct btrfs_zoned_device_info *btrfs_clone_dev_zone_info(struct btrfs_device *orig_dev)
++{
++	struct btrfs_zoned_device_info *zone_info;
++
++	zone_info = kmemdup(orig_dev->zone_info, sizeof(*zone_info), GFP_KERNEL);
++	if (!zone_info)
++		return NULL;
++
++	zone_info->seq_zones = bitmap_zalloc(zone_info->nr_zones, GFP_KERNEL);
++	if (!zone_info->seq_zones)
++		goto out;
++
++	bitmap_copy(zone_info->seq_zones, orig_dev->zone_info->seq_zones,
++		    zone_info->nr_zones);
++
++	zone_info->empty_zones = bitmap_zalloc(zone_info->nr_zones, GFP_KERNEL);
++	if (!zone_info->empty_zones)
++		goto out;
++
++	bitmap_copy(zone_info->empty_zones, orig_dev->zone_info->empty_zones,
++		    zone_info->nr_zones);
++
++	zone_info->active_zones = bitmap_zalloc(zone_info->nr_zones, GFP_KERNEL);
++	if (!zone_info->active_zones)
++		goto out;
++
++	bitmap_copy(zone_info->active_zones, orig_dev->zone_info->active_zones,
++		    zone_info->nr_zones);
++	zone_info->zone_cache = NULL;
++
++	return zone_info;
++
++out:
++	bitmap_free(zone_info->seq_zones);
++	bitmap_free(zone_info->empty_zones);
++	bitmap_free(zone_info->active_zones);
++	kfree(zone_info);
++	return NULL;
++}
++
+ int btrfs_get_dev_zone(struct btrfs_device *device, u64 pos,
+ 		       struct blk_zone *zone)
+ {
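
btrfs_clone_dev_zone_info() above deep-copies the zone info rather than sharing bitmaps between original and clone, and resets zone_cache so the clone never aliases the source's cache. One wrinkle worth noting in any kmemdup()-then-reallocate clone: the duplicated struct still holds the source's pointers until they are overwritten, so a shared error label must not free them. A hedged sketch that clears the copied pointers first (struct and field names hypothetical):

#include <linux/bitmap.h>
#include <linux/slab.h>
#include <linux/string.h>

struct example_info {
	unsigned int nr;
	unsigned long *a;
	unsigned long *b;
};

/* Sketch: deep copy with one error label. The pointers inherited from
 * kmemdup() are cleared up front so the label can never free the
 * source's bitmaps; bitmap_free()/kfree() accept NULL.
 */
static struct example_info *example_clone(const struct example_info *src)
{
	struct example_info *dst;

	dst = kmemdup(src, sizeof(*dst), GFP_KERNEL);
	if (!dst)
		return NULL;
	dst->a = NULL;
	dst->b = NULL;

	dst->a = bitmap_zalloc(dst->nr, GFP_KERNEL);
	if (!dst->a)
		goto out;
	bitmap_copy(dst->a, src->a, dst->nr);

	dst->b = bitmap_zalloc(dst->nr, GFP_KERNEL);
	if (!dst->b)
		goto out;
	bitmap_copy(dst->b, src->b, dst->nr);

	return dst;

out:
	bitmap_free(dst->a);
	bitmap_free(dst->b);
	kfree(dst);
	return NULL;
}
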
+diff --git a/fs/btrfs/zoned.h b/fs/btrfs/zoned.h
+index e17462db3a842..8bd16d40b7c65 100644
+--- a/fs/btrfs/zoned.h
++++ b/fs/btrfs/zoned.h
+@@ -36,6 +36,7 @@ int btrfs_get_dev_zone(struct btrfs_device *device, u64 pos,
+ int btrfs_get_dev_zone_info_all_devices(struct btrfs_fs_info *fs_info);
+ int btrfs_get_dev_zone_info(struct btrfs_device *device, bool populate_cache);
+ void btrfs_destroy_dev_zone_info(struct btrfs_device *device);
++struct btrfs_zoned_device_info *btrfs_clone_dev_zone_info(struct btrfs_device *orig_dev);
+ int btrfs_check_zoned_mode(struct btrfs_fs_info *fs_info);
+ int btrfs_check_mountopts_zoned(struct btrfs_fs_info *info);
+ int btrfs_sb_log_location_bdev(struct block_device *bdev, int mirror, int rw,
+@@ -103,6 +104,16 @@ static inline int btrfs_get_dev_zone_info(struct btrfs_device *device,
+ 
+ static inline void btrfs_destroy_dev_zone_info(struct btrfs_device *device) { }
+ 
++/*
++ * In case the kernel is compiled without CONFIG_BLK_DEV_ZONED we'll never call
++ * into btrfs_clone_dev_zone_info() so it's safe to return NULL here.
++ */
++static inline struct btrfs_zoned_device_info *btrfs_clone_dev_zone_info(
++						 struct btrfs_device *orig_dev)
++{
++	return NULL;
++}
++
+ static inline int btrfs_check_zoned_mode(const struct btrfs_fs_info *fs_info)
+ {
+ 	if (!btrfs_is_zoned(fs_info))
+diff --git a/fs/nilfs2/segment.c b/fs/nilfs2/segment.c
+index 56d2c6fc61753..8568c33d4a76f 100644
+--- a/fs/nilfs2/segment.c
++++ b/fs/nilfs2/segment.c
+@@ -317,7 +317,7 @@ void nilfs_relax_pressure_in_lock(struct super_block *sb)
+ 	struct the_nilfs *nilfs = sb->s_fs_info;
+ 	struct nilfs_sc_info *sci = nilfs->ns_writer;
+ 
+-	if (!sci || !sci->sc_flush_request)
++	if (sb_rdonly(sb) || unlikely(!sci) || !sci->sc_flush_request)
+ 		return;
+ 
+ 	set_bit(NILFS_SC_PRIOR_FLUSH, &sci->sc_flags);
+@@ -2243,7 +2243,7 @@ int nilfs_construct_segment(struct super_block *sb)
+ 	struct nilfs_transaction_info *ti;
+ 	int err;
+ 
+-	if (!sci)
++	if (sb_rdonly(sb) || unlikely(!sci))
+ 		return -EROFS;
+ 
+ 	/* A call inside transactions causes a deadlock. */
+@@ -2282,7 +2282,7 @@ int nilfs_construct_dsync_segment(struct super_block *sb, struct inode *inode,
+ 	struct nilfs_transaction_info ti;
+ 	int err = 0;
+ 
+-	if (!sci)
++	if (sb_rdonly(sb) || unlikely(!sci))
+ 		return -EROFS;
+ 
+ 	nilfs_transaction_lock(sb, &ti, 0);
+@@ -2778,11 +2778,12 @@ int nilfs_attach_log_writer(struct super_block *sb, struct nilfs_root *root)
+ 
+ 	if (nilfs->ns_writer) {
+ 		/*
+-		 * This happens if the filesystem was remounted
+-		 * read/write after nilfs_error degenerated it into a
+-		 * read-only mount.
++		 * This happens if the filesystem is made read-only by
++		 * __nilfs_error or nilfs_remount and then remounted
++		 * read/write.  In these cases, reuse the existing
++		 * writer.
+ 		 */
+-		nilfs_detach_log_writer(sb);
++		return 0;
+ 	}
+ 
+ 	nilfs->ns_writer = nilfs_segctor_new(sb, root);
+diff --git a/fs/nilfs2/super.c b/fs/nilfs2/super.c
+index ba108f915391e..6edb6e0dd61f7 100644
+--- a/fs/nilfs2/super.c
++++ b/fs/nilfs2/super.c
+@@ -1133,8 +1133,6 @@ static int nilfs_remount(struct super_block *sb, int *flags, char *data)
+ 	if ((bool)(*flags & SB_RDONLY) == sb_rdonly(sb))
+ 		goto out;
+ 	if (*flags & SB_RDONLY) {
+-		/* Shutting down log writer */
+-		nilfs_detach_log_writer(sb);
+ 		sb->s_flags |= SB_RDONLY;
+ 
+ 		/*
+diff --git a/fs/nilfs2/the_nilfs.c b/fs/nilfs2/the_nilfs.c
+index 3b4a079c9617c..c8b89b4f94e0e 100644
+--- a/fs/nilfs2/the_nilfs.c
++++ b/fs/nilfs2/the_nilfs.c
+@@ -690,9 +690,7 @@ int nilfs_count_free_blocks(struct the_nilfs *nilfs, sector_t *nblocks)
+ {
+ 	unsigned long ncleansegs;
+ 
+-	down_read(&NILFS_MDT(nilfs->ns_dat)->mi_sem);
+ 	ncleansegs = nilfs_sufile_get_ncleansegs(nilfs->ns_sufile);
+-	up_read(&NILFS_MDT(nilfs->ns_dat)->mi_sem);
+ 	*nblocks = (sector_t)ncleansegs * nilfs->ns_blocks_per_segment;
+ 	return 0;
+ }
+diff --git a/fs/udf/namei.c b/fs/udf/namei.c
+index b3d5f97f16cdb..865e658535b11 100644
+--- a/fs/udf/namei.c
++++ b/fs/udf/namei.c
+@@ -240,7 +240,7 @@ static struct fileIdentDesc *udf_find_entry(struct inode *dir,
+ 						      poffset - lfi);
+ 			else {
+ 				if (!copy_name) {
+-					copy_name = kmalloc(UDF_NAME_LEN,
++					copy_name = kmalloc(UDF_NAME_LEN_CS0,
+ 							    GFP_NOFS);
+ 					if (!copy_name) {
+ 						fi = ERR_PTR(-ENOMEM);
+diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
+index 7c90b1ab3e00d..594422890f8d8 100644
+--- a/include/asm-generic/vmlinux.lds.h
++++ b/include/asm-generic/vmlinux.lds.h
+@@ -333,6 +333,7 @@
+ #define DATA_DATA							\
+ 	*(.xiptext)							\
+ 	*(DATA_MAIN)							\
++	*(.data..decrypted)						\
+ 	*(.ref.data)							\
+ 	*(.data..shared_aligned) /* percpu related */			\
+ 	MEM_KEEP(init.data*)						\
+@@ -975,7 +976,6 @@
+ #ifdef CONFIG_AMD_MEM_ENCRYPT
+ #define PERCPU_DECRYPTED_SECTION					\
+ 	. = ALIGN(PAGE_SIZE);						\
+-	*(.data..decrypted)						\
+ 	*(.data..percpu..decrypted)					\
+ 	. = ALIGN(PAGE_SIZE);
+ #else
+diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
+index 1fdddbf3546b4..184b957e28ada 100644
+--- a/include/linux/bpf_verifier.h
++++ b/include/linux/bpf_verifier.h
+@@ -348,6 +348,27 @@ struct bpf_verifier_state {
+ 	     iter < frame->allocated_stack / BPF_REG_SIZE;		\
+ 	     iter++, reg = bpf_get_spilled_reg(iter, frame))
+ 
++/* Invoke __expr over registers in __vst, setting __state and __reg */
++#define bpf_for_each_reg_in_vstate(__vst, __state, __reg, __expr)   \
++	({                                                               \
++		struct bpf_verifier_state *___vstate = __vst;            \
++		int ___i, ___j;                                          \
++		for (___i = 0; ___i <= ___vstate->curframe; ___i++) {    \
++			struct bpf_reg_state *___regs;                   \
++			__state = ___vstate->frame[___i];                \
++			___regs = __state->regs;                         \
++			for (___j = 0; ___j < MAX_BPF_REG; ___j++) {     \
++				__reg = &___regs[___j];                  \
++				(void)(__expr);                          \
++			}                                                \
++			bpf_for_each_spilled_reg(___j, __state, __reg) { \
++				if (!__reg)                              \
++					continue;                        \
++				(void)(__expr);                          \
++			}                                                \
++		}                                                        \
++	})
++
+ /* linked list of verifier states used to prune search */
+ struct bpf_verifier_state_list {
+ 	struct bpf_verifier_state state;
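
bpf_for_each_reg_in_vstate() above is a GNU statement-expression macro: the caller passes __expr as a brace-wrapped expression that is evaluated once per register, with __state and __reg rebound on every iteration (the verifier.c hunks further down show the call sites). A toy standalone version of the same shape, assuming GCC/Clang statement expressions:

#include <stdio.h>

/* Toy sketch: iterate a container and evaluate a caller-supplied
 * expression with a loop variable bound for it; ({ ... }) is a GNU C
 * statement expression.
 */
#define for_each_item(__arr, __n, __it, __expr)		\
	({						\
		int ___i;				\
		for (___i = 0; ___i < (__n); ___i++) {	\
			__it = &(__arr)[___i];		\
			(void)(__expr);			\
		}					\
	})

int main(void)
{
	int vals[3] = { 1, 2, 3 }, sum = 0, *it;

	for_each_item(vals, 3, it, ({ sum += *it; }));
	printf("%d\n", sum);	/* prints 6 */
	return 0;
}
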
+diff --git a/include/linux/can/dev.h b/include/linux/can/dev.h
+index c3e50e537e397..20631a2463769 100644
+--- a/include/linux/can/dev.h
++++ b/include/linux/can/dev.h
+@@ -147,6 +147,22 @@ static inline u32 can_get_static_ctrlmode(struct can_priv *priv)
+ 	return priv->ctrlmode & ~priv->ctrlmode_supported;
+ }
+ 
++/* drop skb if it does not contain a valid CAN frame for sending */
++static inline bool can_dev_dropped_skb(struct net_device *dev, struct sk_buff *skb)
++{
++	struct can_priv *priv = netdev_priv(dev);
++
++	if (priv->ctrlmode & CAN_CTRLMODE_LISTENONLY) {
++		netdev_info_once(dev,
++				 "interface in listen only mode, dropping skb\n");
++		kfree_skb(skb);
++		dev->stats.tx_dropped++;
++		return true;
++	}
++
++	return can_dropped_invalid_skb(dev, skb);
++}
++
+ void can_setup(struct net_device *dev);
+ 
+ struct net_device *alloc_candev_mqs(int sizeof_priv, unsigned int echo_skb_max,
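
can_dev_dropped_skb() above is meant to replace direct can_dropped_invalid_skb() calls at the top of a CAN driver's xmit routine, adding the listen-only drop (with tx_dropped accounting) in one place. A hedged sketch of the intended call shape:

#include <linux/can/dev.h>
#include <linux/netdevice.h>

/* Sketch: when the helper returns true, the skb is already freed and
 * counted, so the driver just reports the slot as consumed.
 */
static netdev_tx_t example_can_start_xmit(struct sk_buff *skb,
					  struct net_device *dev)
{
	if (can_dev_dropped_skb(dev, skb))
		return NETDEV_TX_OK;

	/* ... hand the frame to the controller ... */
	return NETDEV_TX_OK;
}
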
+diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
+index 48f4b645193b7..70d6cb94e5802 100644
+--- a/include/linux/skmsg.h
++++ b/include/linux/skmsg.h
+@@ -376,7 +376,7 @@ static inline void sk_psock_report_error(struct sk_psock *psock, int err)
+ }
+ 
+ struct sk_psock *sk_psock_init(struct sock *sk, int node);
+-void sk_psock_stop(struct sk_psock *psock, bool wait);
++void sk_psock_stop(struct sk_psock *psock);
+ 
+ #if IS_ENABLED(CONFIG_BPF_STREAM_PARSER)
+ int sk_psock_init_strp(struct sock *sk, struct sk_psock *psock);
+diff --git a/include/uapi/linux/capability.h b/include/uapi/linux/capability.h
+index 463d1ba2232ac..3d61a0ae055d4 100644
+--- a/include/uapi/linux/capability.h
++++ b/include/uapi/linux/capability.h
+@@ -426,7 +426,7 @@ struct vfs_ns_cap_data {
+  */
+ 
+ #define CAP_TO_INDEX(x)     ((x) >> 5)        /* 1 << 5 == bits in __u32 */
+-#define CAP_TO_MASK(x)      (1 << ((x) & 31)) /* mask for indexed __u32 */
++#define CAP_TO_MASK(x)      (1U << ((x) & 31)) /* mask for indexed __u32 */
+ 
+ 
+ #endif /* _UAPI_LINUX_CAPABILITY_H */
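
The one-character capability fix above matters for CAP_SETFCAP, which is capability 31: with a signed literal, 1 << 31 overflows int (undefined behavior), and if the mask is widened to a 64-bit type the sign bit smears across the upper word. A runnable demonstration of the corrected macro:

#include <stdint.h>
#include <stdio.h>

#define CAP_SETFCAP	31
#define CAP_TO_MASK(x)	(1U << ((x) & 31))	/* unsigned: no UB at bit 31 */

int main(void)
{
	/* With a signed 1, shifting by 31 is undefined and would
	 * sign-extend when widened; 1U << 31 yields the intended
	 * 0x80000000 with zero upper bits.
	 */
	uint64_t mask = CAP_TO_MASK(CAP_SETFCAP);

	printf("%#llx\n", (unsigned long long)mask);	/* 0x80000000 */
	return 0;
}
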
+diff --git a/include/uapi/linux/idxd.h b/include/uapi/linux/idxd.h
+index 095299c75828c..2b9e7feba3f32 100644
+--- a/include/uapi/linux/idxd.h
++++ b/include/uapi/linux/idxd.h
+@@ -29,6 +29,7 @@ enum idxd_scmd_stat {
+ 	IDXD_SCMD_WQ_NO_SIZE = 0x800e0000,
+ 	IDXD_SCMD_WQ_NO_PRIV = 0x800f0000,
+ 	IDXD_SCMD_WQ_IRQ_ERR = 0x80100000,
++	IDXD_SCMD_WQ_USER_NO_IOMMU = 0x80110000,
+ };
+ 
+ #define IDXD_SCMD_SOFTERR_MASK	0x80000000
+diff --git a/io_uring/kbuf.c b/io_uring/kbuf.c
+index 25cd724ade184..e2c46889d5fab 100644
+--- a/io_uring/kbuf.c
++++ b/io_uring/kbuf.c
+@@ -346,6 +346,8 @@ int io_provide_buffers_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe
+ 	tmp = READ_ONCE(sqe->off);
+ 	if (tmp > USHRT_MAX)
+ 		return -E2BIG;
++	if (tmp + p->nbufs >= USHRT_MAX)
++		return -EINVAL;
+ 	p->bid = tmp;
+ 	return 0;
+ }
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 8b5ea7f6b5365..69fb46fdf7635 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -1011,12 +1011,17 @@ out:
+  */
+ static void *realloc_array(void *arr, size_t old_n, size_t new_n, size_t size)
+ {
++	void *new_arr;
++
+ 	if (!new_n || old_n == new_n)
+ 		goto out;
+ 
+-	arr = krealloc_array(arr, new_n, size, GFP_KERNEL);
+-	if (!arr)
++	new_arr = krealloc_array(arr, new_n, size, GFP_KERNEL);
++	if (!new_arr) {
++		kfree(arr);
+ 		return NULL;
++	}
++	arr = new_arr;
+ 
+ 	if (new_n > old_n)
+ 		memset(arr + old_n * size, 0, (new_n - old_n) * size);
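
The realloc_array() fix above is the classic realloc leak: krealloc_array() returns NULL on failure but leaves the old allocation intact, so assigning the result straight back to the only pointer orphans it (here the fix frees the old array and returns NULL, since callers treat NULL as a fresh start). The userspace analogue, as a runnable sketch that instead keeps the old block alive:

#include <stdlib.h>

/* Sketch: on failure the old block survives, so never overwrite the
 * only pointer to it with realloc()'s return value.
 */
static int grow(int **arr, size_t new_n)
{
	int *tmp = realloc(*arr, new_n * sizeof(**arr));

	if (!tmp)
		return -1;	/* *arr is still valid and still owned */
	*arr = tmp;
	return 0;
}

int main(void)
{
	int *v = NULL;

	if (grow(&v, 16))
		return 1;
	v[0] = 42;
	free(v);
	return 0;
}
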
+@@ -6495,31 +6500,15 @@ static int check_func_proto(const struct bpf_func_proto *fn, int func_id,
+ /* Packet data might have moved, any old PTR_TO_PACKET[_META,_END]
+  * are now invalid, so turn them into unknown SCALAR_VALUE.
+  */
+-static void __clear_all_pkt_pointers(struct bpf_verifier_env *env,
+-				     struct bpf_func_state *state)
++static void clear_all_pkt_pointers(struct bpf_verifier_env *env)
+ {
+-	struct bpf_reg_state *regs = state->regs, *reg;
+-	int i;
+-
+-	for (i = 0; i < MAX_BPF_REG; i++)
+-		if (reg_is_pkt_pointer_any(&regs[i]))
+-			mark_reg_unknown(env, regs, i);
++	struct bpf_func_state *state;
++	struct bpf_reg_state *reg;
+ 
+-	bpf_for_each_spilled_reg(i, state, reg) {
+-		if (!reg)
+-			continue;
++	bpf_for_each_reg_in_vstate(env->cur_state, state, reg, ({
+ 		if (reg_is_pkt_pointer_any(reg))
+ 			__mark_reg_unknown(env, reg);
+-	}
+-}
+-
+-static void clear_all_pkt_pointers(struct bpf_verifier_env *env)
+-{
+-	struct bpf_verifier_state *vstate = env->cur_state;
+-	int i;
+-
+-	for (i = 0; i <= vstate->curframe; i++)
+-		__clear_all_pkt_pointers(env, vstate->frame[i]);
++	}));
+ }
+ 
+ enum {
+@@ -6548,41 +6537,28 @@ static void mark_pkt_end(struct bpf_verifier_state *vstate, int regn, bool range
+ 		reg->range = AT_PKT_END;
+ }
+ 
+-static void release_reg_references(struct bpf_verifier_env *env,
+-				   struct bpf_func_state *state,
+-				   int ref_obj_id)
+-{
+-	struct bpf_reg_state *regs = state->regs, *reg;
+-	int i;
+-
+-	for (i = 0; i < MAX_BPF_REG; i++)
+-		if (regs[i].ref_obj_id == ref_obj_id)
+-			mark_reg_unknown(env, regs, i);
+-
+-	bpf_for_each_spilled_reg(i, state, reg) {
+-		if (!reg)
+-			continue;
+-		if (reg->ref_obj_id == ref_obj_id)
+-			__mark_reg_unknown(env, reg);
+-	}
+-}
+-
+ /* The pointer with the specified id has released its reference to kernel
+  * resources. Identify all copies of the same pointer and clear the reference.
+  */
+ static int release_reference(struct bpf_verifier_env *env,
+ 			     int ref_obj_id)
+ {
+-	struct bpf_verifier_state *vstate = env->cur_state;
++	struct bpf_func_state *state;
++	struct bpf_reg_state *reg;
+ 	int err;
+-	int i;
+ 
+ 	err = release_reference_state(cur_func(env), ref_obj_id);
+ 	if (err)
+ 		return err;
+ 
+-	for (i = 0; i <= vstate->curframe; i++)
+-		release_reg_references(env, vstate->frame[i], ref_obj_id);
++	bpf_for_each_reg_in_vstate(env->cur_state, state, reg, ({
++		if (reg->ref_obj_id == ref_obj_id) {
++			if (!env->allow_ptr_leaks)
++				__mark_reg_not_init(env, reg);
++			else
++				__mark_reg_unknown(env, reg);
++		}
++	}));
+ 
+ 	return 0;
+ }
+@@ -9278,34 +9254,14 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
+ 	return 0;
+ }
+ 
+-static void __find_good_pkt_pointers(struct bpf_func_state *state,
+-				     struct bpf_reg_state *dst_reg,
+-				     enum bpf_reg_type type, int new_range)
+-{
+-	struct bpf_reg_state *reg;
+-	int i;
+-
+-	for (i = 0; i < MAX_BPF_REG; i++) {
+-		reg = &state->regs[i];
+-		if (reg->type == type && reg->id == dst_reg->id)
+-			/* keep the maximum range already checked */
+-			reg->range = max(reg->range, new_range);
+-	}
+-
+-	bpf_for_each_spilled_reg(i, state, reg) {
+-		if (!reg)
+-			continue;
+-		if (reg->type == type && reg->id == dst_reg->id)
+-			reg->range = max(reg->range, new_range);
+-	}
+-}
+-
+ static void find_good_pkt_pointers(struct bpf_verifier_state *vstate,
+ 				   struct bpf_reg_state *dst_reg,
+ 				   enum bpf_reg_type type,
+ 				   bool range_right_open)
+ {
+-	int new_range, i;
++	struct bpf_func_state *state;
++	struct bpf_reg_state *reg;
++	int new_range;
+ 
+ 	if (dst_reg->off < 0 ||
+ 	    (dst_reg->off == 0 && range_right_open))
+@@ -9370,9 +9326,11 @@ static void find_good_pkt_pointers(struct bpf_verifier_state *vstate,
+ 	 * the range won't allow anything.
+ 	 * dst_reg->off is known < MAX_PACKET_OFF, therefore it fits in a u16.
+ 	 */
+-	for (i = 0; i <= vstate->curframe; i++)
+-		__find_good_pkt_pointers(vstate->frame[i], dst_reg, type,
+-					 new_range);
++	bpf_for_each_reg_in_vstate(vstate, state, reg, ({
++		if (reg->type == type && reg->id == dst_reg->id)
++			/* keep the maximum range already checked */
++			reg->range = max(reg->range, new_range);
++	}));
+ }
+ 
+ static int is_branch32_taken(struct bpf_reg_state *reg, u32 val, u8 opcode)
+@@ -9861,7 +9819,7 @@ static void mark_ptr_or_null_reg(struct bpf_func_state *state,
+ 
+ 		if (!reg_may_point_to_spin_lock(reg)) {
+ 			/* For not-NULL ptr, reg->ref_obj_id will be reset
+-			 * in release_reg_references().
++			 * in release_reference().
+ 			 *
+ 			 * reg->id is still used by spin_lock ptr. Other
+ 			 * than spin_lock ptr type, reg->id can be reset.
+@@ -9871,22 +9829,6 @@ static void mark_ptr_or_null_reg(struct bpf_func_state *state,
+ 	}
+ }
+ 
+-static void __mark_ptr_or_null_regs(struct bpf_func_state *state, u32 id,
+-				    bool is_null)
+-{
+-	struct bpf_reg_state *reg;
+-	int i;
+-
+-	for (i = 0; i < MAX_BPF_REG; i++)
+-		mark_ptr_or_null_reg(state, &state->regs[i], id, is_null);
+-
+-	bpf_for_each_spilled_reg(i, state, reg) {
+-		if (!reg)
+-			continue;
+-		mark_ptr_or_null_reg(state, reg, id, is_null);
+-	}
+-}
+-
+ /* The logic is similar to find_good_pkt_pointers(), both could eventually
+  * be folded together at some point.
+  */
+@@ -9894,10 +9836,9 @@ static void mark_ptr_or_null_regs(struct bpf_verifier_state *vstate, u32 regno,
+ 				  bool is_null)
+ {
+ 	struct bpf_func_state *state = vstate->frame[vstate->curframe];
+-	struct bpf_reg_state *regs = state->regs;
++	struct bpf_reg_state *regs = state->regs, *reg;
+ 	u32 ref_obj_id = regs[regno].ref_obj_id;
+ 	u32 id = regs[regno].id;
+-	int i;
+ 
+ 	if (ref_obj_id && ref_obj_id == id && is_null)
+ 		/* regs[regno] is in the " == NULL" branch.
+@@ -9906,8 +9847,9 @@ static void mark_ptr_or_null_regs(struct bpf_verifier_state *vstate, u32 regno,
+ 		 */
+ 		WARN_ON_ONCE(release_reference_state(state, id));
+ 
+-	for (i = 0; i <= vstate->curframe; i++)
+-		__mark_ptr_or_null_regs(vstate->frame[i], id, is_null);
++	bpf_for_each_reg_in_vstate(vstate, state, reg, ({
++		mark_ptr_or_null_reg(state, reg, id, is_null);
++	}));
+ }
+ 
+ static bool try_match_pkt_pointers(const struct bpf_insn *insn,
+@@ -10020,23 +9962,11 @@ static void find_equal_scalars(struct bpf_verifier_state *vstate,
+ {
+ 	struct bpf_func_state *state;
+ 	struct bpf_reg_state *reg;
+-	int i, j;
+ 
+-	for (i = 0; i <= vstate->curframe; i++) {
+-		state = vstate->frame[i];
+-		for (j = 0; j < MAX_BPF_REG; j++) {
+-			reg = &state->regs[j];
+-			if (reg->type == SCALAR_VALUE && reg->id == known_reg->id)
+-				*reg = *known_reg;
+-		}
+-
+-		bpf_for_each_spilled_reg(j, state, reg) {
+-			if (!reg)
+-				continue;
+-			if (reg->type == SCALAR_VALUE && reg->id == known_reg->id)
+-				*reg = *known_reg;
+-		}
+-	}
++	bpf_for_each_reg_in_vstate(vstate, state, reg, ({
++		if (reg->type == SCALAR_VALUE && reg->id == known_reg->id)
++			*reg = *known_reg;
++	}));
+ }
+ 
+ static int check_cond_jmp_op(struct bpf_verifier_env *env,
+diff --git a/mm/damon/dbgfs.c b/mm/damon/dbgfs.c
+index 4e51466c4e74d..dafe7e71329b8 100644
+--- a/mm/damon/dbgfs.c
++++ b/mm/damon/dbgfs.c
+@@ -882,6 +882,7 @@ out:
+ static int dbgfs_rm_context(char *name)
+ {
+ 	struct dentry *root, *dir, **new_dirs;
++	struct inode *inode;
+ 	struct damon_ctx **new_ctxs;
+ 	int i, j;
+ 	int ret = 0;
+@@ -897,6 +898,12 @@ static int dbgfs_rm_context(char *name)
+ 	if (!dir)
+ 		return -ENOENT;
+ 
++	inode = d_inode(dir);
++	if (!S_ISDIR(inode->i_mode)) {
++		ret = -EINVAL;
++		goto out_dput;
++	}
++
+ 	new_dirs = kmalloc_array(dbgfs_nr_ctxs - 1, sizeof(*dbgfs_dirs),
+ 			GFP_KERNEL);
+ 	if (!new_dirs) {
+diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
+index 20f414c0379f9..fffb78378d6c0 100644
+--- a/mm/hugetlb_vmemmap.c
++++ b/mm/hugetlb_vmemmap.c
+@@ -11,6 +11,7 @@
+ #define pr_fmt(fmt)	"HugeTLB: " fmt
+ 
+ #include <linux/pgtable.h>
++#include <linux/moduleparam.h>
+ #include <linux/bootmem_info.h>
+ #include <asm/pgalloc.h>
+ #include <asm/tlbflush.h>
+diff --git a/mm/memremap.c b/mm/memremap.c
+index 58b20c3c300b8..b893e37c95c1e 100644
+--- a/mm/memremap.c
++++ b/mm/memremap.c
+@@ -330,6 +330,7 @@ void *memremap_pages(struct dev_pagemap *pgmap, int nid)
+ 			WARN(1, "File system DAX not supported\n");
+ 			return ERR_PTR(-EINVAL);
+ 		}
++		params.pgprot = pgprot_decrypted(params.pgprot);
+ 		break;
+ 	case MEMORY_DEVICE_GENERIC:
+ 		break;
+diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
+index 7327b2573f7c2..9fb3a8bd21102 100644
+--- a/mm/userfaultfd.c
++++ b/mm/userfaultfd.c
+@@ -64,7 +64,7 @@ int mfill_atomic_install_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
+ 	pte_t _dst_pte, *dst_pte;
+ 	bool writable = dst_vma->vm_flags & VM_WRITE;
+ 	bool vm_shared = dst_vma->vm_flags & VM_SHARED;
+-	bool page_in_cache = page->mapping;
++	bool page_in_cache = page_mapping(page);
+ 	spinlock_t *ptl;
+ 	struct inode *inode;
+ 	pgoff_t offset, max_off;
+diff --git a/net/can/af_can.c b/net/can/af_can.c
+index 1fb49d51b25d6..e48ccf7cf2007 100644
+--- a/net/can/af_can.c
++++ b/net/can/af_can.c
+@@ -451,7 +451,7 @@ int can_rx_register(struct net *net, struct net_device *dev, canid_t can_id,
+ 
+ 	/* insert new receiver  (dev,canid,mask) -> (func,data) */
+ 
+-	if (dev && dev->type != ARPHRD_CAN)
++	if (dev && (dev->type != ARPHRD_CAN || !can_get_ml_priv(dev)))
+ 		return -ENODEV;
+ 
+ 	if (dev && !net_eq(net, dev_net(dev)))
+diff --git a/net/can/isotp.c b/net/can/isotp.c
+index 43a27d19cdacf..58e7d79ccd292 100644
+--- a/net/can/isotp.c
++++ b/net/can/isotp.c
+@@ -111,6 +111,9 @@ MODULE_ALIAS("can-proto-6");
+ #define ISOTP_FC_WT 1		/* wait */
+ #define ISOTP_FC_OVFLW 2	/* overflow */
+ 
++#define ISOTP_FC_TIMEOUT 1	/* 1 sec */
++#define ISOTP_ECHO_TIMEOUT 2	/* 2 secs */
++
+ enum {
+ 	ISOTP_IDLE = 0,
+ 	ISOTP_WAIT_FIRST_FC,
+@@ -258,7 +261,8 @@ static int isotp_send_fc(struct sock *sk, int ae, u8 flowstatus)
+ 	so->lastrxcf_tstamp = ktime_set(0, 0);
+ 
+ 	/* start rx timeout watchdog */
+-	hrtimer_start(&so->rxtimer, ktime_set(1, 0), HRTIMER_MODE_REL_SOFT);
++	hrtimer_start(&so->rxtimer, ktime_set(ISOTP_FC_TIMEOUT, 0),
++		      HRTIMER_MODE_REL_SOFT);
+ 	return 0;
+ }
+ 
+@@ -344,6 +348,8 @@ static int check_pad(struct isotp_sock *so, struct canfd_frame *cf,
+ 	return 0;
+ }
+ 
++static void isotp_send_cframe(struct isotp_sock *so);
++
+ static int isotp_rcv_fc(struct isotp_sock *so, struct canfd_frame *cf, int ae)
+ {
+ 	struct sock *sk = &so->sk;
+@@ -398,14 +404,15 @@ static int isotp_rcv_fc(struct isotp_sock *so, struct canfd_frame *cf, int ae)
+ 	case ISOTP_FC_CTS:
+ 		so->tx.bs = 0;
+ 		so->tx.state = ISOTP_SENDING;
+-		/* start cyclic timer for sending CF frame */
+-		hrtimer_start(&so->txtimer, so->tx_gap,
++		/* send CF frame and enable echo timeout handling */
++		hrtimer_start(&so->txtimer, ktime_set(ISOTP_ECHO_TIMEOUT, 0),
+ 			      HRTIMER_MODE_REL_SOFT);
++		isotp_send_cframe(so);
+ 		break;
+ 
+ 	case ISOTP_FC_WT:
+ 		/* start timer to wait for next FC frame */
+-		hrtimer_start(&so->txtimer, ktime_set(1, 0),
++		hrtimer_start(&so->txtimer, ktime_set(ISOTP_FC_TIMEOUT, 0),
+ 			      HRTIMER_MODE_REL_SOFT);
+ 		break;
+ 
+@@ -600,7 +607,7 @@ static int isotp_rcv_cf(struct sock *sk, struct canfd_frame *cf, int ae,
+ 	/* perform blocksize handling, if enabled */
+ 	if (!so->rxfc.bs || ++so->rx.bs < so->rxfc.bs) {
+ 		/* start rx timeout watchdog */
+-		hrtimer_start(&so->rxtimer, ktime_set(1, 0),
++		hrtimer_start(&so->rxtimer, ktime_set(ISOTP_FC_TIMEOUT, 0),
+ 			      HRTIMER_MODE_REL_SOFT);
+ 		return 0;
+ 	}
+@@ -829,7 +836,7 @@ static void isotp_rcv_echo(struct sk_buff *skb, void *data)
+ 	struct isotp_sock *so = isotp_sk(sk);
+ 	struct canfd_frame *cf = (struct canfd_frame *)skb->data;
+ 
+-	/* only handle my own local echo skb's */
++	/* only handle my own local echo CF/SF skb's (no FF!) */
+ 	if (skb->sk != sk || so->cfecho != *(u32 *)cf->data)
+ 		return;
+ 
+@@ -849,13 +856,16 @@ static void isotp_rcv_echo(struct sk_buff *skb, void *data)
+ 	if (so->txfc.bs && so->tx.bs >= so->txfc.bs) {
+ 		/* stop and wait for FC with timeout */
+ 		so->tx.state = ISOTP_WAIT_FC;
+-		hrtimer_start(&so->txtimer, ktime_set(1, 0),
++		hrtimer_start(&so->txtimer, ktime_set(ISOTP_FC_TIMEOUT, 0),
+ 			      HRTIMER_MODE_REL_SOFT);
+ 		return;
+ 	}
+ 
+ 	/* no gap between data frames needed => use burst mode */
+ 	if (!so->tx_gap) {
++		/* enable echo timeout handling */
++		hrtimer_start(&so->txtimer, ktime_set(ISOTP_ECHO_TIMEOUT, 0),
++			      HRTIMER_MODE_REL_SOFT);
+ 		isotp_send_cframe(so);
+ 		return;
+ 	}
+@@ -879,7 +889,7 @@ static enum hrtimer_restart isotp_tx_timer_handler(struct hrtimer *hrtimer)
+ 			/* start timeout for unlikely lost echo skb */
+ 			hrtimer_set_expires(&so->txtimer,
+ 					    ktime_add(ktime_get(),
+-						      ktime_set(2, 0)));
++						      ktime_set(ISOTP_ECHO_TIMEOUT, 0)));
+ 			restart = HRTIMER_RESTART;
+ 
+ 			/* push out the next consecutive frame */
+@@ -907,7 +917,8 @@ static enum hrtimer_restart isotp_tx_timer_handler(struct hrtimer *hrtimer)
+ 		break;
+ 
+ 	default:
+-		WARN_ON_ONCE(1);
++		WARN_ONCE(1, "can-isotp: tx timer state %08X cfecho %08X\n",
++			  so->tx.state, so->cfecho);
+ 	}
+ 
+ 	return restart;
+@@ -923,7 +934,7 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ 	struct canfd_frame *cf;
+ 	int ae = (so->opt.flags & CAN_ISOTP_EXTEND_ADDR) ? 1 : 0;
+ 	int wait_tx_done = (so->opt.flags & CAN_ISOTP_WAIT_TX_DONE) ? 1 : 0;
+-	s64 hrtimer_sec = 0;
++	s64 hrtimer_sec = ISOTP_ECHO_TIMEOUT;
+ 	int off;
+ 	int err;
+ 
+@@ -942,6 +953,8 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ 		err = wait_event_interruptible(so->wait, so->tx.state == ISOTP_IDLE);
+ 		if (err)
+ 			goto err_out;
++
++		so->tx.state = ISOTP_SENDING;
+ 	}
+ 
+ 	if (!size || size > MAX_MSG_LENGTH) {
+@@ -986,6 +999,10 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ 	cf = (struct canfd_frame *)skb->data;
+ 	skb_put_zero(skb, so->ll.mtu);
+ 
++	/* cfecho should have been zero'ed by init / former isotp_rcv_echo() */
++	if (so->cfecho)
++		pr_notice_once("can-isotp: uninit cfecho %08X\n", so->cfecho);
++
+ 	/* check for single frame transmission depending on TX_DL */
+ 	if (size <= so->tx.ll_dl - SF_PCI_SZ4 - ae - off) {
+ 		/* The message size generally fits into a SingleFrame - good.
+@@ -1011,11 +1028,8 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ 		else
+ 			cf->data[ae] |= size;
+ 
+-		so->tx.state = ISOTP_IDLE;
+-		wake_up_interruptible(&so->wait);
+-
+-		/* don't enable wait queue for a single frame transmission */
+-		wait_tx_done = 0;
++		/* set CF echo tag for isotp_rcv_echo() (SF-mode) */
++		so->cfecho = *(u32 *)cf->data;
+ 	} else {
+ 		/* send first frame */
+ 
+@@ -1031,31 +1045,23 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ 			/* disable wait for FCs due to activated block size */
+ 			so->txfc.bs = 0;
+ 
+-			/* cfecho should have been zero'ed by init */
+-			if (so->cfecho)
+-				pr_notice_once("can-isotp: no fc cfecho %08X\n",
+-					       so->cfecho);
+-
+-			/* set consecutive frame echo tag */
++			/* set CF echo tag for isotp_rcv_echo() (CF-mode) */
+ 			so->cfecho = *(u32 *)cf->data;
+-
+-			/* switch directly to ISOTP_SENDING state */
+-			so->tx.state = ISOTP_SENDING;
+-
+-			/* start timeout for unlikely lost echo skb */
+-			hrtimer_sec = 2;
+ 		} else {
+ 			/* standard flow control check */
+ 			so->tx.state = ISOTP_WAIT_FIRST_FC;
+ 
+ 			/* start timeout for FC */
+-			hrtimer_sec = 1;
+-		}
++			hrtimer_sec = ISOTP_FC_TIMEOUT;
+ 
+-		hrtimer_start(&so->txtimer, ktime_set(hrtimer_sec, 0),
+-			      HRTIMER_MODE_REL_SOFT);
++			/* no CF echo tag for isotp_rcv_echo() (FF-mode) */
++			so->cfecho = 0;
++		}
+ 	}
+ 
++	hrtimer_start(&so->txtimer, ktime_set(hrtimer_sec, 0),
++		      HRTIMER_MODE_REL_SOFT);
++
+ 	/* send the first or only CAN frame */
+ 	cf->flags = so->ll.tx_flags;
+ 
+@@ -1068,8 +1074,7 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ 			       __func__, ERR_PTR(err));
+ 
+ 		/* no transmission -> no timeout monitoring */
+-		if (hrtimer_sec)
+-			hrtimer_cancel(&so->txtimer);
++		hrtimer_cancel(&so->txtimer);
+ 
+ 		/* reset consecutive frame echo tag */
+ 		so->cfecho = 0;
+diff --git a/net/can/j1939/main.c b/net/can/j1939/main.c
+index 8452b0fbb78c9..82671a882716f 100644
+--- a/net/can/j1939/main.c
++++ b/net/can/j1939/main.c
+@@ -332,6 +332,9 @@ int j1939_send_one(struct j1939_priv *priv, struct sk_buff *skb)
+ 	/* re-claim the CAN_HDR from the SKB */
+ 	cf = skb_push(skb, J1939_CAN_HDR);
+ 
++	/* initialize header structure */
++	memset(cf, 0, J1939_CAN_HDR);
++
+ 	/* make it a full can frame again */
+ 	skb_put(skb, J1939_CAN_FTR + (8 - dlc));
+ 
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 5e1a8eeb5e322..d9c19ae05fe67 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -4031,23 +4031,25 @@ struct sk_buff *skb_segment(struct sk_buff *head_skb,
+ 	int i = 0;
+ 	int pos;
+ 
+-	if (list_skb && !list_skb->head_frag && skb_headlen(list_skb) &&
+-	    (skb_shinfo(head_skb)->gso_type & SKB_GSO_DODGY)) {
+-		/* gso_size is untrusted, and we have a frag_list with a linear
+-		 * non head_frag head.
+-		 *
+-		 * (we assume checking the first list_skb member suffices;
+-		 * i.e if either of the list_skb members have non head_frag
+-		 * head, then the first one has too).
+-		 *
+-		 * If head_skb's headlen does not fit requested gso_size, it
+-		 * means that the frag_list members do NOT terminate on exact
+-		 * gso_size boundaries. Hence we cannot perform skb_frag_t page
+-		 * sharing. Therefore we must fallback to copying the frag_list
+-		 * skbs; we do so by disabling SG.
+-		 */
+-		if (mss != GSO_BY_FRAGS && mss != skb_headlen(head_skb))
+-			features &= ~NETIF_F_SG;
++	if ((skb_shinfo(head_skb)->gso_type & SKB_GSO_DODGY) &&
++	    mss != GSO_BY_FRAGS && mss != skb_headlen(head_skb)) {
++		struct sk_buff *check_skb;
++
++		for (check_skb = list_skb; check_skb; check_skb = check_skb->next) {
++			if (skb_headlen(check_skb) && !check_skb->head_frag) {
++				/* gso_size is untrusted, and we have a frag_list with
++				 * a linear non head_frag item.
++				 *
++				 * If head_skb's headlen does not fit requested gso_size,
++				 * it means that the frag_list members do NOT terminate
++				 * on exact gso_size boundaries. Hence we cannot perform
++				 * skb_frag_t page sharing. Therefore we must fallback to
++				 * copying the frag_list skbs; we do so by disabling SG.
++				 */
++				features &= ~NETIF_F_SG;
++				break;
++			}
++		}
+ 	}
+ 
+ 	__skb_push(head_skb, doffset);
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index 1efdc47a999b4..e6b9ced3eda82 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -803,16 +803,13 @@ static void sk_psock_link_destroy(struct sk_psock *psock)
+ 	}
+ }
+ 
+-void sk_psock_stop(struct sk_psock *psock, bool wait)
++void sk_psock_stop(struct sk_psock *psock)
+ {
+ 	spin_lock_bh(&psock->ingress_lock);
+ 	sk_psock_clear_state(psock, SK_PSOCK_TX_ENABLED);
+ 	sk_psock_cork_free(psock);
+ 	__sk_psock_zap_ingress(psock);
+ 	spin_unlock_bh(&psock->ingress_lock);
+-
+-	if (wait)
+-		cancel_work_sync(&psock->work);
+ }
+ 
+ static void sk_psock_done_strp(struct sk_psock *psock);
+@@ -850,7 +847,7 @@ void sk_psock_drop(struct sock *sk, struct sk_psock *psock)
+ 		sk_psock_stop_verdict(sk, psock);
+ 	write_unlock_bh(&sk->sk_callback_lock);
+ 
+-	sk_psock_stop(psock, false);
++	sk_psock_stop(psock);
+ 
+ 	INIT_RCU_WORK(&psock->rwork, sk_psock_destroy);
+ 	queue_rcu_work(system_wq, &psock->rwork);
+diff --git a/net/core/sock_map.c b/net/core/sock_map.c
+index 9a9fb9487d636..632df0c525625 100644
+--- a/net/core/sock_map.c
++++ b/net/core/sock_map.c
+@@ -1596,7 +1596,7 @@ void sock_map_destroy(struct sock *sk)
+ 	saved_destroy = psock->saved_destroy;
+ 	sock_map_remove_links(sk, psock);
+ 	rcu_read_unlock();
+-	sk_psock_stop(psock, false);
++	sk_psock_stop(psock);
+ 	sk_psock_put(sk, psock);
+ 	saved_destroy(sk);
+ }
+@@ -1619,9 +1619,10 @@ void sock_map_close(struct sock *sk, long timeout)
+ 	saved_close = psock->saved_close;
+ 	sock_map_remove_links(sk, psock);
+ 	rcu_read_unlock();
+-	sk_psock_stop(psock, true);
+-	sk_psock_put(sk, psock);
++	sk_psock_stop(psock);
+ 	release_sock(sk);
++	cancel_work_sync(&psock->work);
++	sk_psock_put(sk, psock);
+ 	saved_close(sk, timeout);
+ }
+ EXPORT_SYMBOL_GPL(sock_map_close);
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 5fbd0a5b48f7e..cdd4f2f60f0c6 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -3648,7 +3648,7 @@ static int do_tcp_setsockopt(struct sock *sk, int level, int optname,
+ 	case TCP_REPAIR_OPTIONS:
+ 		if (!tp->repair)
+ 			err = -EINVAL;
+-		else if (sk->sk_state == TCP_ESTABLISHED)
++		else if (sk->sk_state == TCP_ESTABLISHED && !tp->bytes_sent)
+ 			err = tcp_repair_options_est(sk, optval, optlen);
+ 		else
+ 			err = -EPERM;
+diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
+index c501c329b1dbe..cf9c3e8f7ccbf 100644
+--- a/net/ipv4/tcp_bpf.c
++++ b/net/ipv4/tcp_bpf.c
+@@ -278,7 +278,7 @@ static int tcp_bpf_send_verdict(struct sock *sk, struct sk_psock *psock,
+ {
+ 	bool cork = false, enospc = sk_msg_full(msg);
+ 	struct sock *sk_redir;
+-	u32 tosend, delta = 0;
++	u32 tosend, origsize, sent, delta = 0;
+ 	u32 eval = __SK_NONE;
+ 	int ret;
+ 
+@@ -333,10 +333,12 @@ more_data:
+ 			cork = true;
+ 			psock->cork = NULL;
+ 		}
+-		sk_msg_return(sk, msg, msg->sg.size);
++		sk_msg_return(sk, msg, tosend);
+ 		release_sock(sk);
+ 
++		origsize = msg->sg.size;
+ 		ret = tcp_bpf_sendmsg_redir(sk_redir, msg, tosend, flags);
++		sent = origsize - msg->sg.size;
+ 
+ 		if (eval == __SK_REDIRECT)
+ 			sock_put(sk_redir);
+@@ -375,7 +377,7 @@ more_data:
+ 		    msg->sg.data[msg->sg.start].page_link &&
+ 		    msg->sg.data[msg->sg.start].length) {
+ 			if (eval == __SK_REDIRECT)
+-				sk_mem_charge(sk, msg->sg.size);
++				sk_mem_charge(sk, tosend - sent);
+ 			goto more_data;
+ 		}
+ 	}
+diff --git a/net/ipv6/addrlabel.c b/net/ipv6/addrlabel.c
+index 8a22486cf2702..17ac45aa7194c 100644
+--- a/net/ipv6/addrlabel.c
++++ b/net/ipv6/addrlabel.c
+@@ -437,6 +437,7 @@ static void ip6addrlbl_putmsg(struct nlmsghdr *nlh,
+ {
+ 	struct ifaddrlblmsg *ifal = nlmsg_data(nlh);
+ 	ifal->ifal_family = AF_INET6;
++	ifal->__ifal_reserved = 0;
+ 	ifal->ifal_prefixlen = prefixlen;
+ 	ifal->ifal_flags = 0;
+ 	ifal->ifal_index = ifindex;
+diff --git a/net/mac80211/s1g.c b/net/mac80211/s1g.c
+index 8ca7d45d6daae..c1f964e9991cd 100644
+--- a/net/mac80211/s1g.c
++++ b/net/mac80211/s1g.c
+@@ -112,6 +112,9 @@ ieee80211_s1g_rx_twt_setup(struct ieee80211_sub_if_data *sdata,
+ 		goto out;
+ 	}
+ 
++	/* TWT Information not supported yet */
++	twt->control |= IEEE80211_TWT_CONTROL_RX_DISABLED;
++
+ 	drv_add_twt_setup(sdata->local, sdata, &sta->sta, twt);
+ out:
+ 	ieee80211_s1g_send_twt_setup(sdata, mgmt->sa, sdata->vif.addr, twt);
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index 13249e97a0692..d2c4f9226f947 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -4379,6 +4379,11 @@ netdev_tx_t ieee80211_subif_start_xmit(struct sk_buff *skb,
+ 	if (likely(!is_multicast_ether_addr(eth->h_dest)))
+ 		goto normal;
+ 
++	if (unlikely(!ieee80211_sdata_running(sdata))) {
++		kfree_skb(skb);
++		return NETDEV_TX_OK;
++	}
++
+ 	if (unlikely(ieee80211_multicast_to_unicast(skb, dev))) {
+ 		struct sk_buff_head queue;
+ 
+diff --git a/net/mctp/af_mctp.c b/net/mctp/af_mctp.c
+index b6b5e496fa403..fc9e728b6333a 100644
+--- a/net/mctp/af_mctp.c
++++ b/net/mctp/af_mctp.c
+@@ -665,12 +665,14 @@ static __init int mctp_init(void)
+ 
+ 	rc = mctp_neigh_init();
+ 	if (rc)
+-		goto err_unreg_proto;
++		goto err_unreg_routes;
+ 
+ 	mctp_device_init();
+ 
+ 	return 0;
+ 
++err_unreg_routes:
++	mctp_routes_exit();
+ err_unreg_proto:
+ 	proto_unregister(&mctp_proto);
+ err_unreg_sock:
+diff --git a/net/mctp/route.c b/net/mctp/route.c
+index 2155f15a074cd..f9a80b82dc511 100644
+--- a/net/mctp/route.c
++++ b/net/mctp/route.c
+@@ -1400,7 +1400,7 @@ int __init mctp_routes_init(void)
+ 	return register_pernet_subsys(&mctp_net_ops);
+ }
+ 
+-void __exit mctp_routes_exit(void)
++void mctp_routes_exit(void)
+ {
+ 	unregister_pernet_subsys(&mctp_net_ops);
+ 	rtnl_unregister(PF_MCTP, RTM_DELROUTE);
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 879f4a1a27d54..42e370575c304 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -10090,7 +10090,8 @@ static void __net_exit nf_tables_exit_net(struct net *net)
+ 	struct nftables_pernet *nft_net = nft_pernet(net);
+ 
+ 	mutex_lock(&nft_net->commit_mutex);
+-	if (!list_empty(&nft_net->commit_list))
++	if (!list_empty(&nft_net->commit_list) ||
++	    !list_empty(&nft_net->module_list))
+ 		__nf_tables_abort(net, NFNL_ABORT_NONE);
+ 	__nft_release_tables(net);
+ 	mutex_unlock(&nft_net->commit_mutex);
+diff --git a/net/netfilter/nfnetlink.c b/net/netfilter/nfnetlink.c
+index 9c44518cb70ff..6d18fb3468683 100644
+--- a/net/netfilter/nfnetlink.c
++++ b/net/netfilter/nfnetlink.c
+@@ -294,6 +294,7 @@ replay:
+ 			nfnl_lock(subsys_id);
+ 			if (nfnl_dereference_protected(subsys_id) != ss ||
+ 			    nfnetlink_find_client(type, ss) != nc) {
++				nfnl_unlock(subsys_id);
+ 				err = -EAGAIN;
+ 				break;
+ 			}
+diff --git a/net/tipc/netlink_compat.c b/net/tipc/netlink_compat.c
+index 0749df80454d4..ce00f271ca6b2 100644
+--- a/net/tipc/netlink_compat.c
++++ b/net/tipc/netlink_compat.c
+@@ -880,7 +880,7 @@ static int tipc_nl_compat_name_table_dump_header(struct tipc_nl_compat_msg *msg)
+ 	};
+ 
+ 	ntq = (struct tipc_name_table_query *)TLV_DATA(msg->req);
+-	if (TLV_GET_DATA_LEN(msg->req) < sizeof(struct tipc_name_table_query))
++	if (TLV_GET_DATA_LEN(msg->req) < (int)sizeof(struct tipc_name_table_query))
+ 		return -EINVAL;
+ 
+ 	depth = ntohl(ntq->depth);
+diff --git a/net/wireless/reg.c b/net/wireless/reg.c
+index d5c7a5aa68532..c3d950d294329 100644
+--- a/net/wireless/reg.c
++++ b/net/wireless/reg.c
+@@ -1084,6 +1084,8 @@ MODULE_FIRMWARE("regulatory.db");
+ 
+ static int query_regdb_file(const char *alpha2)
+ {
++	int err;
++
+ 	ASSERT_RTNL();
+ 
+ 	if (regdb)
+@@ -1093,9 +1095,13 @@ static int query_regdb_file(const char *alpha2)
+ 	if (!alpha2)
+ 		return -ENOMEM;
+ 
+-	return request_firmware_nowait(THIS_MODULE, true, "regulatory.db",
+-				       &reg_pdev->dev, GFP_KERNEL,
+-				       (void *)alpha2, regdb_fw_cb);
++	err = request_firmware_nowait(THIS_MODULE, true, "regulatory.db",
++				      &reg_pdev->dev, GFP_KERNEL,
++				      (void *)alpha2, regdb_fw_cb);
++	if (err)
++		kfree(alpha2);
++
++	return err;
+ }
+ 
+ int reg_reload_regdb(void)
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index 39fb9cc25cdca..9067e4b70855a 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -1674,7 +1674,9 @@ cfg80211_update_known_bss(struct cfg80211_registered_device *rdev,
+ 		if (old == rcu_access_pointer(known->pub.ies))
+ 			rcu_assign_pointer(known->pub.ies, new->pub.beacon_ies);
+ 
+-		cfg80211_update_hidden_bsses(known, new->pub.beacon_ies, old);
++		cfg80211_update_hidden_bsses(known,
++					     rcu_access_pointer(new->pub.beacon_ies),
++					     old);
+ 
+ 		if (old)
+ 			kfree_rcu((struct cfg80211_bss_ies *)old, rcu_head);
+diff --git a/sound/arm/pxa2xx-ac97-lib.c b/sound/arm/pxa2xx-ac97-lib.c
+index e55c0421718b3..2ca33fd5a5757 100644
+--- a/sound/arm/pxa2xx-ac97-lib.c
++++ b/sound/arm/pxa2xx-ac97-lib.c
+@@ -402,8 +402,10 @@ int pxa2xx_ac97_hw_probe(struct platform_device *dev)
+ 		goto err_clk2;
+ 
+ 	irq = platform_get_irq(dev, 0);
+-	if (!irq)
++	if (irq < 0) {
++		ret = irq;
+ 		goto err_irq;
++	}
+ 
+ 	ret = request_irq(irq, pxa2xx_ac97_irq, 0, "AC97", NULL);
+ 	if (ret < 0)
+diff --git a/sound/core/memalloc.c b/sound/core/memalloc.c
+index cfcd8eff41398..d311cff8d5bef 100644
+--- a/sound/core/memalloc.c
++++ b/sound/core/memalloc.c
+@@ -9,6 +9,7 @@
+ #include <linux/slab.h>
+ #include <linux/mm.h>
+ #include <linux/dma-mapping.h>
++#include <linux/dma-map-ops.h>
+ #include <linux/genalloc.h>
+ #include <linux/highmem.h>
+ #include <linux/vmalloc.h>
+@@ -528,17 +529,17 @@ static void *snd_dma_noncontig_alloc(struct snd_dma_buffer *dmab, size_t size)
+ 
+ 	sgt = dma_alloc_noncontiguous(dmab->dev.dev, size, dmab->dev.dir,
+ 				      DEFAULT_GFP, 0);
+-	if (!sgt) {
+ #ifdef CONFIG_SND_DMA_SGBUF
++	if (!sgt && !get_dma_ops(dmab->dev.dev)) {
+ 		if (dmab->dev.type == SNDRV_DMA_TYPE_DEV_WC_SG)
+ 			dmab->dev.type = SNDRV_DMA_TYPE_DEV_WC_SG_FALLBACK;
+ 		else
+ 			dmab->dev.type = SNDRV_DMA_TYPE_DEV_SG_FALLBACK;
+ 		return snd_dma_sg_fallback_alloc(dmab, size);
+-#else
+-		return NULL;
+-#endif
+ 	}
++#endif
++	if (!sgt)
++		return NULL;
+ 
+ 	dmab->dev.need_sync = dma_need_sync(dmab->dev.dev,
+ 					    sg_dma_address(sgt->sgl));
+@@ -874,7 +875,7 @@ static const struct snd_malloc_ops snd_dma_noncoherent_ops = {
+ /*
+  * Entry points
+  */
+-static const struct snd_malloc_ops *dma_ops[] = {
++static const struct snd_malloc_ops *snd_dma_ops[] = {
+ 	[SNDRV_DMA_TYPE_CONTINUOUS] = &snd_dma_continuous_ops,
+ 	[SNDRV_DMA_TYPE_VMALLOC] = &snd_dma_vmalloc_ops,
+ #ifdef CONFIG_HAS_DMA
+@@ -900,7 +901,7 @@ static const struct snd_malloc_ops *snd_dma_get_ops(struct snd_dma_buffer *dmab)
+ 	if (WARN_ON_ONCE(!dmab))
+ 		return NULL;
+ 	if (WARN_ON_ONCE(dmab->dev.type <= SNDRV_DMA_TYPE_UNKNOWN ||
+-			 dmab->dev.type >= ARRAY_SIZE(dma_ops)))
++			 dmab->dev.type >= ARRAY_SIZE(snd_dma_ops)))
+ 		return NULL;
+-	return dma_ops[dmab->dev.type];
++	return snd_dma_ops[dmab->dev.type];
+ }
+diff --git a/sound/hda/hdac_sysfs.c b/sound/hda/hdac_sysfs.c
+index e47de49a32e3e..62a9615dcf529 100644
+--- a/sound/hda/hdac_sysfs.c
++++ b/sound/hda/hdac_sysfs.c
+@@ -346,8 +346,10 @@ static int add_widget_node(struct kobject *parent, hda_nid_t nid,
+ 		return -ENOMEM;
+ 	kobject_init(kobj, &widget_ktype);
+ 	err = kobject_add(kobj, parent, "%02x", nid);
+-	if (err < 0)
++	if (err < 0) {
++		kobject_put(kobj);
+ 		return err;
++	}
+ 	err = sysfs_create_group(kobj, group);
+ 	if (err < 0) {
+ 		kobject_put(kobj);
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 1631e1de84046..8f8b9ebe5c5ff 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2718,6 +2718,9 @@ static const struct pci_device_id azx_ids[] = {
+ 	{ PCI_DEVICE(0x1002, 0xab28),
+ 	  .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS |
+ 	  AZX_DCAPS_PM_RUNTIME },
++	{ PCI_DEVICE(0x1002, 0xab30),
++	  .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS |
++	  AZX_DCAPS_PM_RUNTIME },
+ 	{ PCI_DEVICE(0x1002, 0xab38),
+ 	  .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS |
+ 	  AZX_DCAPS_PM_RUNTIME },
+diff --git a/sound/pci/hda/patch_ca0132.c b/sound/pci/hda/patch_ca0132.c
+index 208933792787d..801dd8d44953b 100644
+--- a/sound/pci/hda/patch_ca0132.c
++++ b/sound/pci/hda/patch_ca0132.c
+@@ -1306,6 +1306,7 @@ static const struct snd_pci_quirk ca0132_quirks[] = {
+ 	SND_PCI_QUIRK(0x1458, 0xA026, "Gigabyte G1.Sniper Z97", QUIRK_R3DI),
+ 	SND_PCI_QUIRK(0x1458, 0xA036, "Gigabyte GA-Z170X-Gaming 7", QUIRK_R3DI),
+ 	SND_PCI_QUIRK(0x3842, 0x1038, "EVGA X99 Classified", QUIRK_R3DI),
++	SND_PCI_QUIRK(0x3842, 0x1055, "EVGA Z390 DARK", QUIRK_R3DI),
+ 	SND_PCI_QUIRK(0x1102, 0x0013, "Recon3D", QUIRK_R3D),
+ 	SND_PCI_QUIRK(0x1102, 0x0018, "Recon3D", QUIRK_R3D),
+ 	SND_PCI_QUIRK(0x1102, 0x0051, "Sound Blaster AE-5", QUIRK_AE5),
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 6e25a0f89f6b4..b7cccbef401ce 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -9414,6 +9414,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x1e8e, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA401),
+ 	SND_PCI_QUIRK(0x1043, 0x1c52, "ASUS Zephyrus G15 2022", ALC289_FIXUP_ASUS_GA401),
+ 	SND_PCI_QUIRK(0x1043, 0x1f11, "ASUS Zephyrus G14", ALC289_FIXUP_ASUS_GA401),
++	SND_PCI_QUIRK(0x1043, 0x1f12, "ASUS UM5302", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x1043, 0x1f92, "ASUS ROG Flow X16", ALC289_FIXUP_ASUS_GA401),
+ 	SND_PCI_QUIRK(0x1043, 0x3030, "ASUS ZN270IE", ALC256_FIXUP_ASUS_AIO_GPIO2),
+ 	SND_PCI_QUIRK(0x1043, 0x831a, "ASUS P901", ALC269_FIXUP_STEREO_DMIC),
+@@ -9618,6 +9619,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x511f, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x9e54, "LENOVO NB", ALC269_FIXUP_LENOVO_EAPD),
+ 	SND_PCI_QUIRK(0x1849, 0x1233, "ASRock NUC Box 1100", ALC233_FIXUP_NO_AUDIO_JACK),
++	SND_PCI_QUIRK(0x1849, 0xa233, "Positivo Master C6300", ALC269_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x19e5, 0x3204, "Huawei MACH-WX9", ALC256_FIXUP_HUAWEI_MACH_WX9_PINS),
+ 	SND_PCI_QUIRK(0x19e5, 0x320f, "Huawei WRT-WX9 ", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1b35, 0x1235, "CZC B20", ALC269_FIXUP_CZC_B20),
+diff --git a/sound/usb/card.c b/sound/usb/card.c
+index a5ed11ea11456..26268ffb82742 100644
+--- a/sound/usb/card.c
++++ b/sound/usb/card.c
+@@ -742,6 +742,18 @@ get_alias_quirk(struct usb_device *dev, unsigned int id)
+ 	return NULL;
+ }
+ 
++/* register card if we reach to the last interface or to the specified
++ * one given via option
++ */
++static int try_to_register_card(struct snd_usb_audio *chip, int ifnum)
++{
++	if (check_delayed_register_option(chip) == ifnum ||
++	    chip->last_iface == ifnum ||
++	    usb_interface_claimed(usb_ifnum_to_if(chip->dev, chip->last_iface)))
++		return snd_card_register(chip->card);
++	return 0;
++}
++
+ /*
+  * probe the active usb device
+  *
+@@ -880,15 +892,9 @@ static int usb_audio_probe(struct usb_interface *intf,
+ 		chip->need_delayed_register = false; /* clear again */
+ 	}
+ 
+-	/* register card if we reach to the last interface or to the specified
+-	 * one given via option
+-	 */
+-	if (check_delayed_register_option(chip) == ifnum ||
+-	    usb_interface_claimed(usb_ifnum_to_if(dev, chip->last_iface))) {
+-		err = snd_card_register(chip->card);
+-		if (err < 0)
+-			goto __error;
+-	}
++	err = try_to_register_card(chip, ifnum);
++	if (err < 0)
++		goto __error_no_register;
+ 
+ 	if (chip->quirk_flags & QUIRK_FLAG_SHARE_MEDIA_DEVICE) {
+ 		/* don't want to fail when snd_media_device_create() fails */
+@@ -907,6 +913,11 @@ static int usb_audio_probe(struct usb_interface *intf,
+ 	return 0;
+ 
+  __error:
++	/* in the case of error in secondary interface, still try to register */
++	if (chip)
++		try_to_register_card(chip, ifnum);
++
++ __error_no_register:
+ 	if (chip) {
+ 		/* chip->active is inside the chip->card object,
+ 		 * decrement before memory is possibly returned.
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index 06dfdd45cff8c..874fcf245747f 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -2049,6 +2049,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 		}
+ 	}
+ },
++{
++	/* M-Audio Micro */
++	USB_DEVICE_VENDOR_SPEC(0x0763, 0x201a),
++},
+ {
+ 	USB_DEVICE_VENDOR_SPEC(0x0763, 0x2030),
+ 	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index eadac586bcc83..250bda7cda075 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1913,6 +1913,7 @@ u64 snd_usb_interface_dsd_format_quirks(struct snd_usb_audio *chip,
+ 	/* XMOS based USB DACs */
+ 	switch (chip->usb_id) {
+ 	case USB_ID(0x1511, 0x0037): /* AURALiC VEGA */
++	case USB_ID(0x21ed, 0xd75a): /* Accuphase DAC-60 option card */
+ 	case USB_ID(0x2522, 0x0012): /* LH Labs VI DAC Infinity */
+ 	case USB_ID(0x2772, 0x0230): /* Pro-Ject Pre Box S2 Digital */
+ 		if (fp->altsetting == 2)
+diff --git a/tools/arch/x86/include/asm/msr-index.h b/tools/arch/x86/include/asm/msr-index.h
+index 6674bdb096f34..ee71f15eed7f2 100644
+--- a/tools/arch/x86/include/asm/msr-index.h
++++ b/tools/arch/x86/include/asm/msr-index.h
+@@ -530,6 +530,11 @@
+ #define MSR_AMD64_CPUID_FN_1		0xc0011004
+ #define MSR_AMD64_LS_CFG		0xc0011020
+ #define MSR_AMD64_DC_CFG		0xc0011022
++
++#define MSR_AMD64_DE_CFG		0xc0011029
++#define MSR_AMD64_DE_CFG_LFENCE_SERIALIZE_BIT	1
++#define MSR_AMD64_DE_CFG_LFENCE_SERIALIZE	BIT_ULL(MSR_AMD64_DE_CFG_LFENCE_SERIALIZE_BIT)
++
+ #define MSR_AMD64_BU_CFG2		0xc001102a
+ #define MSR_AMD64_IBSFETCHCTL		0xc0011030
+ #define MSR_AMD64_IBSFETCHLINAD		0xc0011031
+@@ -632,9 +637,6 @@
+ #define FAM10H_MMIO_CONF_BASE_MASK	0xfffffffULL
+ #define FAM10H_MMIO_CONF_BASE_SHIFT	20
+ #define MSR_FAM10H_NODE_ID		0xc001100c
+-#define MSR_F10H_DECFG			0xc0011029
+-#define MSR_F10H_DECFG_LFENCE_SERIALIZE_BIT	1
+-#define MSR_F10H_DECFG_LFENCE_SERIALIZE		BIT_ULL(MSR_F10H_DECFG_LFENCE_SERIALIZE_BIT)
+ 
+ /* K8 MSRs */
+ #define MSR_K8_TOP_MEM1			0xc001001a
+diff --git a/tools/bpf/bpftool/common.c b/tools/bpf/bpftool/common.c
+index 067e9ea59e3b0..3bdbc0ce75b15 100644
+--- a/tools/bpf/bpftool/common.c
++++ b/tools/bpf/bpftool/common.c
+@@ -300,6 +300,9 @@ int do_pin_any(int argc, char **argv, int (*get_fd)(int *, char ***))
+ 	int err;
+ 	int fd;
+ 
++	if (!REQ_ARGS(3))
++		return -EINVAL;
++
+ 	fd = get_fd(&argc, &argv);
+ 	if (fd < 0)
+ 		return fd;
+diff --git a/tools/perf/.gitignore b/tools/perf/.gitignore
+index 4b9c71faa01ad..f136309044dab 100644
+--- a/tools/perf/.gitignore
++++ b/tools/perf/.gitignore
+@@ -4,6 +4,7 @@ PERF-GUI-VARS
+ PERF-VERSION-FILE
+ FEATURE-DUMP
+ perf
++!include/perf/
+ perf-read-vdso32
+ perf-read-vdsox32
+ perf-help
+diff --git a/tools/perf/tests/shell/test_brstack.sh b/tools/perf/tests/shell/test_brstack.sh
+index ec801cffae6bc..d7ff5c4b4da4c 100755
+--- a/tools/perf/tests/shell/test_brstack.sh
++++ b/tools/perf/tests/shell/test_brstack.sh
+@@ -13,7 +13,10 @@ fi
+ 
+ # skip the test if the hardware doesn't support branch stack sampling
+ # and if the architecture doesn't support filter types: any,save_type,u
+-perf record -b -o- -B --branch-filter any,save_type,u true > /dev/null 2>&1 || exit 2
++if ! perf record -o- --no-buildid --branch-filter any,save_type,u -- true > /dev/null 2>&1 ; then
++	echo "skip: system doesn't support filter types: any,save_type,u"
++	exit 2
++fi
+ 
+ TMPDIR=$(mktemp -d /tmp/__perf_test.program.XXXXX)
+ 
+diff --git a/tools/perf/util/parse-branch-options.c b/tools/perf/util/parse-branch-options.c
+index bb4aa88c50a82..35264b5684d09 100644
+--- a/tools/perf/util/parse-branch-options.c
++++ b/tools/perf/util/parse-branch-options.c
+@@ -101,8 +101,10 @@ parse_branch_stack(const struct option *opt, const char *str, int unset)
+ 	/*
+ 	 * cannot set it twice, -b + --branch-filter for instance
+ 	 */
+-	if (*mode)
++	if (*mode) {
++		pr_err("Error: Can't use --branch-any (-b) with --branch-filter (-j).\n");
+ 		return -1;
++	}
+ 
+ 	return parse_branch_str(str, mode);
+ }
+diff --git a/tools/perf/util/stat-display.c b/tools/perf/util/stat-display.c
+index b82844cb0ce77..7c5f5219dbff2 100644
+--- a/tools/perf/util/stat-display.c
++++ b/tools/perf/util/stat-display.c
+@@ -273,7 +273,7 @@ static void new_line_csv(struct perf_stat_config *config, void *ctx)
+ 
+ 	fputc('\n', os->fh);
+ 	if (os->prefix)
+-		fprintf(os->fh, "%s%s", os->prefix, config->csv_sep);
++		fprintf(os->fh, "%s", os->prefix);
+ 	aggr_printout(config, os->evsel, os->id, os->nr);
+ 	for (i = 0; i < os->nfields; i++)
+ 		fputs(config->csv_sep, os->fh);
+@@ -556,7 +556,7 @@ static void printout(struct perf_stat_config *config, struct aggr_cpu_id id, int
+ 			[AGGR_CORE] = 2,
+ 			[AGGR_THREAD] = 1,
+ 			[AGGR_UNSET] = 0,
+-			[AGGR_NODE] = 0,
++			[AGGR_NODE] = 1,
+ 		};
+ 
+ 		pm = config->metric_only ? print_metric_only_csv : print_metric_csv;
+@@ -1126,6 +1126,7 @@ static int aggr_header_lens[] = {
+ 	[AGGR_SOCKET] = 12,
+ 	[AGGR_NONE] = 6,
+ 	[AGGR_THREAD] = 24,
++	[AGGR_NODE] = 6,
+ 	[AGGR_GLOBAL] = 0,
+ };
+ 
+@@ -1135,6 +1136,7 @@ static const char *aggr_header_csv[] = {
+ 	[AGGR_SOCKET] 	= 	"socket,cpus",
+ 	[AGGR_NONE] 	= 	"cpu,",
+ 	[AGGR_THREAD] 	= 	"comm-pid,",
++	[AGGR_NODE] 	= 	"node,",
+ 	[AGGR_GLOBAL] 	=	""
+ };
+ 
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 4c5259828efdc..b76c775f61f99 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -5404,6 +5404,7 @@ static int kvm_debugfs_open(struct inode *inode, struct file *file,
+ 			   int (*get)(void *, u64 *), int (*set)(void *, u64),
+ 			   const char *fmt)
+ {
++	int ret;
+ 	struct kvm_stat_data *stat_data = (struct kvm_stat_data *)
+ 					  inode->i_private;
+ 
+@@ -5415,15 +5416,13 @@ static int kvm_debugfs_open(struct inode *inode, struct file *file,
+ 	if (!kvm_get_kvm_safe(stat_data->kvm))
+ 		return -ENOENT;
+ 
+-	if (simple_attr_open(inode, file, get,
+-		    kvm_stats_debugfs_mode(stat_data->desc) & 0222
+-		    ? set : NULL,
+-		    fmt)) {
++	ret = simple_attr_open(inode, file, get,
++			       kvm_stats_debugfs_mode(stat_data->desc) & 0222
++			       ? set : NULL, fmt);
++	if (ret)
+ 		kvm_put_kvm(stat_data->kvm);
+-		return -ENOMEM;
+-	}
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static int kvm_debugfs_release(struct inode *inode, struct file *file)

