From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from lists.gentoo.org (pigeon.gentoo.org [208.92.234.80]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by finch.gentoo.org (Postfix) with ESMTPS id EFE851382C5 for ; Thu, 3 Jun 2021 10:22:28 +0000 (UTC) Received: from pigeon.gentoo.org (localhost [127.0.0.1]) by pigeon.gentoo.org (Postfix) with SMTP id 1D8CBE086D; Thu, 3 Jun 2021 10:22:28 +0000 (UTC) Received: from smtp.gentoo.org (mail.gentoo.org [IPv6:2001:470:ea4a:1:5054:ff:fec7:86e4]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by pigeon.gentoo.org (Postfix) with ESMTPS id C894AE086D for ; Thu, 3 Jun 2021 10:22:27 +0000 (UTC) Received: from oystercatcher.gentoo.org (unknown [IPv6:2a01:4f8:202:4333:225:90ff:fed9:fc84]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.gentoo.org (Postfix) with ESMTPS id 16616340D39 for ; Thu, 3 Jun 2021 10:22:26 +0000 (UTC) Received: from localhost.localdomain (localhost [IPv6:::1]) by oystercatcher.gentoo.org (Postfix) with ESMTP id 75D9847 for ; Thu, 3 Jun 2021 10:22:24 +0000 (UTC) From: "Alice Ferrazzi" To: gentoo-commits@lists.gentoo.org Content-Transfer-Encoding: 8bit Content-type: text/plain; charset=UTF-8 Reply-To: gentoo-dev@lists.gentoo.org, "Alice Ferrazzi" Message-ID: <1622715725.6b02df12d6c442e8b157b340954a4ca810844d95.alicef@gentoo> Subject: [gentoo-commits] proj/linux-patches:5.12 commit in: / X-VCS-Repository: proj/linux-patches X-VCS-Files: 0000_README 1008_linux-5.12.9.patch X-VCS-Directories: / X-VCS-Committer: alicef X-VCS-Committer-Name: Alice Ferrazzi X-VCS-Revision: 6b02df12d6c442e8b157b340954a4ca810844d95 X-VCS-Branch: 5.12 Date: Thu, 3 Jun 2021 10:22:24 +0000 (UTC) Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-Id: Gentoo Linux mail X-BeenThere: gentoo-commits@lists.gentoo.org X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply X-Archives-Salt: 6c1e8667-0b46-4a59-bd46-d99d5d2d89bc X-Archives-Hash: 057cad0f84906d98e227c5ac1dc1f482 commit: 6b02df12d6c442e8b157b340954a4ca810844d95 Author: Alice Ferrazzi gentoo org> AuthorDate: Thu Jun 3 10:21:39 2021 +0000 Commit: Alice Ferrazzi gentoo org> CommitDate: Thu Jun 3 10:22:05 2021 +0000 URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=6b02df12 Linux patch 5.12.9 Signed-off-by: Alice Ferrazzi gentoo.org> 0000_README | 4 + 1008_linux-5.12.9.patch | 10516 ++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 10520 insertions(+) diff --git a/0000_README b/0000_README index 90784e9..f16429c 100644 --- a/0000_README +++ b/0000_README @@ -75,6 +75,10 @@ Patch: 1007_linux-5.12.8.patch From: http://www.kernel.org Desc: Linux 5.12.8 +Patch: 1008_linux-5.12.9.patch +From: http://www.kernel.org +Desc: Linux 5.12.9 + Patch: 1500_XATTR_USER_PREFIX.patch From: https://bugs.gentoo.org/show_bug.cgi?id=470644 Desc: Support for namespace user.pax.* on tmpfs. 
diff --git a/1008_linux-5.12.9.patch b/1008_linux-5.12.9.patch new file mode 100644 index 0000000..f5223c7 --- /dev/null +++ b/1008_linux-5.12.9.patch @@ -0,0 +1,10516 @@ +diff --git a/Documentation/userspace-api/seccomp_filter.rst b/Documentation/userspace-api/seccomp_filter.rst +index bd9165241b6c8..6efb41cc80725 100644 +--- a/Documentation/userspace-api/seccomp_filter.rst ++++ b/Documentation/userspace-api/seccomp_filter.rst +@@ -250,14 +250,14 @@ Users can read via ``ioctl(SECCOMP_IOCTL_NOTIF_RECV)`` (or ``poll()``) on a + seccomp notification fd to receive a ``struct seccomp_notif``, which contains + five members: the input length of the structure, a unique-per-filter ``id``, + the ``pid`` of the task which triggered this request (which may be 0 if the +-task is in a pid ns not visible from the listener's pid namespace), a ``flags`` +-member which for now only has ``SECCOMP_NOTIF_FLAG_SIGNALED``, representing +-whether or not the notification is a result of a non-fatal signal, and the +-``data`` passed to seccomp. Userspace can then make a decision based on this +-information about what to do, and ``ioctl(SECCOMP_IOCTL_NOTIF_SEND)`` a +-response, indicating what should be returned to userspace. The ``id`` member of +-``struct seccomp_notif_resp`` should be the same ``id`` as in ``struct +-seccomp_notif``. ++task is in a pid ns not visible from the listener's pid namespace). The ++notification also contains the ``data`` passed to seccomp, and a filters flag. ++The structure should be zeroed out prior to calling the ioctl. ++ ++Userspace can then make a decision based on this information about what to do, ++and ``ioctl(SECCOMP_IOCTL_NOTIF_SEND)`` a response, indicating what should be ++returned to userspace. The ``id`` member of ``struct seccomp_notif_resp`` should ++be the same ``id`` as in ``struct seccomp_notif``. + + It is worth noting that ``struct seccomp_data`` contains the values of register + arguments to the syscall, but does not contain pointers to memory. 
The task's +diff --git a/Makefile b/Makefile +index a20afcb7d2bf4..d53577db10858 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,7 +1,7 @@ + # SPDX-License-Identifier: GPL-2.0 + VERSION = 5 + PATCHLEVEL = 12 +-SUBLEVEL = 8 ++SUBLEVEL = 9 + EXTRAVERSION = + NAME = Frozen Wasteland + +diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h +index a7ab84f781f73..a8578d650bb67 100644 +--- a/arch/arm64/include/asm/kvm_asm.h ++++ b/arch/arm64/include/asm/kvm_asm.h +@@ -192,6 +192,8 @@ extern void __kvm_timer_set_cntvoff(u64 cntvoff); + + extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu); + ++extern void __kvm_adjust_pc(struct kvm_vcpu *vcpu); ++ + extern u64 __vgic_v3_get_gic_config(void); + extern u64 __vgic_v3_read_vmcr(void); + extern void __vgic_v3_write_vmcr(u32 vmcr); +diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h +index f612c090f2e41..01b9857757f2a 100644 +--- a/arch/arm64/include/asm/kvm_emulate.h ++++ b/arch/arm64/include/asm/kvm_emulate.h +@@ -463,4 +463,9 @@ static __always_inline void kvm_incr_pc(struct kvm_vcpu *vcpu) + vcpu->arch.flags |= KVM_ARM64_INCREMENT_PC; + } + ++static inline bool vcpu_has_feature(struct kvm_vcpu *vcpu, int feature) ++{ ++ return test_bit(feature, vcpu->arch.features); ++} ++ + #endif /* __ARM64_KVM_EMULATE_H__ */ +diff --git a/arch/arm64/kvm/hyp/exception.c b/arch/arm64/kvm/hyp/exception.c +index 73629094f9030..0812a496725f6 100644 +--- a/arch/arm64/kvm/hyp/exception.c ++++ b/arch/arm64/kvm/hyp/exception.c +@@ -296,7 +296,7 @@ static void enter_exception32(struct kvm_vcpu *vcpu, u32 mode, u32 vect_offset) + *vcpu_pc(vcpu) = vect_offset; + } + +-void kvm_inject_exception(struct kvm_vcpu *vcpu) ++static void kvm_inject_exception(struct kvm_vcpu *vcpu) + { + if (vcpu_el1_is_32bit(vcpu)) { + switch (vcpu->arch.flags & KVM_ARM64_EXCEPT_MASK) { +@@ -329,3 +329,19 @@ void kvm_inject_exception(struct kvm_vcpu *vcpu) + } + } + } ++ ++/* ++ * Adjust the guest PC on entry, depending on flags provided by EL1 ++ * for the purpose of emulation (MMIO, sysreg) or exception injection. ++ */ ++void __kvm_adjust_pc(struct kvm_vcpu *vcpu) ++{ ++ if (vcpu->arch.flags & KVM_ARM64_PENDING_EXCEPTION) { ++ kvm_inject_exception(vcpu); ++ vcpu->arch.flags &= ~(KVM_ARM64_PENDING_EXCEPTION | ++ KVM_ARM64_EXCEPT_MASK); ++ } else if (vcpu->arch.flags & KVM_ARM64_INCREMENT_PC) { ++ kvm_skip_instr(vcpu); ++ vcpu->arch.flags &= ~KVM_ARM64_INCREMENT_PC; ++ } ++} +diff --git a/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h b/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h +index 61716359035d6..4fdfeabefeb43 100644 +--- a/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h ++++ b/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h +@@ -13,8 +13,6 @@ + #include + #include + +-void kvm_inject_exception(struct kvm_vcpu *vcpu); +- + static inline void kvm_skip_instr(struct kvm_vcpu *vcpu) + { + if (vcpu_mode_is_32bit(vcpu)) { +@@ -43,22 +41,6 @@ static inline void __kvm_skip_instr(struct kvm_vcpu *vcpu) + write_sysreg_el2(*vcpu_pc(vcpu), SYS_ELR); + } + +-/* +- * Adjust the guest PC on entry, depending on flags provided by EL1 +- * for the purpose of emulation (MMIO, sysreg) or exception injection. 
+- */ +-static inline void __adjust_pc(struct kvm_vcpu *vcpu) +-{ +- if (vcpu->arch.flags & KVM_ARM64_PENDING_EXCEPTION) { +- kvm_inject_exception(vcpu); +- vcpu->arch.flags &= ~(KVM_ARM64_PENDING_EXCEPTION | +- KVM_ARM64_EXCEPT_MASK); +- } else if (vcpu->arch.flags & KVM_ARM64_INCREMENT_PC) { +- kvm_skip_instr(vcpu); +- vcpu->arch.flags &= ~KVM_ARM64_INCREMENT_PC; +- } +-} +- + /* + * Skip an instruction while host sysregs are live. + * Assumes host is always 64-bit. +diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c +index 68ab6b4d51414..a343ef11736f9 100644 +--- a/arch/arm64/kvm/hyp/nvhe/switch.c ++++ b/arch/arm64/kvm/hyp/nvhe/switch.c +@@ -4,7 +4,6 @@ + * Author: Marc Zyngier + */ + +-#include + #include + #include + +@@ -201,7 +200,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu) + */ + __debug_save_host_buffers_nvhe(vcpu); + +- __adjust_pc(vcpu); ++ __kvm_adjust_pc(vcpu); + + /* + * We must restore the 32-bit state before the sysregs, thanks +diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c +index af8e940d0f035..ae65f6b5984e8 100644 +--- a/arch/arm64/kvm/hyp/vhe/switch.c ++++ b/arch/arm64/kvm/hyp/vhe/switch.c +@@ -4,7 +4,6 @@ + * Author: Marc Zyngier + */ + +-#include + #include + + #include +@@ -134,7 +133,7 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu) + __load_guest_stage2(vcpu->arch.hw_mmu); + __activate_traps(vcpu); + +- __adjust_pc(vcpu); ++ __kvm_adjust_pc(vcpu); + + sysreg_restore_guest_state_vhe(guest_ctxt); + __debug_switch_to_guest(vcpu); +diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c +index 4b5acd84b8c87..dd207565cd998 100644 +--- a/arch/arm64/kvm/reset.c ++++ b/arch/arm64/kvm/reset.c +@@ -170,6 +170,25 @@ static int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu) + return 0; + } + ++static bool vcpu_allowed_register_width(struct kvm_vcpu *vcpu) ++{ ++ struct kvm_vcpu *tmp; ++ bool is32bit; ++ int i; ++ ++ is32bit = vcpu_has_feature(vcpu, KVM_ARM_VCPU_EL1_32BIT); ++ if (!cpus_have_const_cap(ARM64_HAS_32BIT_EL1) && is32bit) ++ return false; ++ ++ /* Check that the vcpus are either all 32bit or all 64bit */ ++ kvm_for_each_vcpu(i, tmp, vcpu->kvm) { ++ if (vcpu_has_feature(tmp, KVM_ARM_VCPU_EL1_32BIT) != is32bit) ++ return false; ++ } ++ ++ return true; ++} ++ + /** + * kvm_reset_vcpu - sets core registers and sys_regs to reset value + * @vcpu: The VCPU pointer +@@ -221,13 +240,14 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu) + } + } + ++ if (!vcpu_allowed_register_width(vcpu)) { ++ ret = -EINVAL; ++ goto out; ++ } ++ + switch (vcpu->arch.target) { + default: + if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features)) { +- if (!cpus_have_const_cap(ARM64_HAS_32BIT_EL1)) { +- ret = -EINVAL; +- goto out; +- } + pstate = VCPU_RESET_PSTATE_SVC; + } else { + pstate = VCPU_RESET_PSTATE_EL1; +diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c +index 4f2f1e3145deb..cdd073d8ea324 100644 +--- a/arch/arm64/kvm/sys_regs.c ++++ b/arch/arm64/kvm/sys_regs.c +@@ -399,14 +399,14 @@ static bool trap_bvr(struct kvm_vcpu *vcpu, + struct sys_reg_params *p, + const struct sys_reg_desc *rd) + { +- u64 *dbg_reg = &vcpu->arch.vcpu_debug_state.dbg_bvr[rd->reg]; ++ u64 *dbg_reg = &vcpu->arch.vcpu_debug_state.dbg_bvr[rd->CRm]; + + if (p->is_write) + reg_to_dbg(vcpu, p, rd, dbg_reg); + else + dbg_to_reg(vcpu, p, rd, dbg_reg); + +- trace_trap_reg(__func__, rd->reg, p->is_write, *dbg_reg); ++ trace_trap_reg(__func__, rd->CRm, p->is_write, *dbg_reg); + + return true; + } +@@ -414,7 +414,7 @@ static bool 
trap_bvr(struct kvm_vcpu *vcpu, + static int set_bvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, + const struct kvm_one_reg *reg, void __user *uaddr) + { +- __u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bvr[rd->reg]; ++ __u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bvr[rd->CRm]; + + if (copy_from_user(r, uaddr, KVM_REG_SIZE(reg->id)) != 0) + return -EFAULT; +@@ -424,7 +424,7 @@ static int set_bvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, + static int get_bvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, + const struct kvm_one_reg *reg, void __user *uaddr) + { +- __u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bvr[rd->reg]; ++ __u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bvr[rd->CRm]; + + if (copy_to_user(uaddr, r, KVM_REG_SIZE(reg->id)) != 0) + return -EFAULT; +@@ -434,21 +434,21 @@ static int get_bvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, + static void reset_bvr(struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd) + { +- vcpu->arch.vcpu_debug_state.dbg_bvr[rd->reg] = rd->val; ++ vcpu->arch.vcpu_debug_state.dbg_bvr[rd->CRm] = rd->val; + } + + static bool trap_bcr(struct kvm_vcpu *vcpu, + struct sys_reg_params *p, + const struct sys_reg_desc *rd) + { +- u64 *dbg_reg = &vcpu->arch.vcpu_debug_state.dbg_bcr[rd->reg]; ++ u64 *dbg_reg = &vcpu->arch.vcpu_debug_state.dbg_bcr[rd->CRm]; + + if (p->is_write) + reg_to_dbg(vcpu, p, rd, dbg_reg); + else + dbg_to_reg(vcpu, p, rd, dbg_reg); + +- trace_trap_reg(__func__, rd->reg, p->is_write, *dbg_reg); ++ trace_trap_reg(__func__, rd->CRm, p->is_write, *dbg_reg); + + return true; + } +@@ -456,7 +456,7 @@ static bool trap_bcr(struct kvm_vcpu *vcpu, + static int set_bcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, + const struct kvm_one_reg *reg, void __user *uaddr) + { +- __u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bcr[rd->reg]; ++ __u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bcr[rd->CRm]; + + if (copy_from_user(r, uaddr, KVM_REG_SIZE(reg->id)) != 0) + return -EFAULT; +@@ -467,7 +467,7 @@ static int set_bcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, + static int get_bcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, + const struct kvm_one_reg *reg, void __user *uaddr) + { +- __u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bcr[rd->reg]; ++ __u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bcr[rd->CRm]; + + if (copy_to_user(uaddr, r, KVM_REG_SIZE(reg->id)) != 0) + return -EFAULT; +@@ -477,22 +477,22 @@ static int get_bcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, + static void reset_bcr(struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd) + { +- vcpu->arch.vcpu_debug_state.dbg_bcr[rd->reg] = rd->val; ++ vcpu->arch.vcpu_debug_state.dbg_bcr[rd->CRm] = rd->val; + } + + static bool trap_wvr(struct kvm_vcpu *vcpu, + struct sys_reg_params *p, + const struct sys_reg_desc *rd) + { +- u64 *dbg_reg = &vcpu->arch.vcpu_debug_state.dbg_wvr[rd->reg]; ++ u64 *dbg_reg = &vcpu->arch.vcpu_debug_state.dbg_wvr[rd->CRm]; + + if (p->is_write) + reg_to_dbg(vcpu, p, rd, dbg_reg); + else + dbg_to_reg(vcpu, p, rd, dbg_reg); + +- trace_trap_reg(__func__, rd->reg, p->is_write, +- vcpu->arch.vcpu_debug_state.dbg_wvr[rd->reg]); ++ trace_trap_reg(__func__, rd->CRm, p->is_write, ++ vcpu->arch.vcpu_debug_state.dbg_wvr[rd->CRm]); + + return true; + } +@@ -500,7 +500,7 @@ static bool trap_wvr(struct kvm_vcpu *vcpu, + static int set_wvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, + const struct kvm_one_reg *reg, void __user *uaddr) + { +- __u64 *r = &vcpu->arch.vcpu_debug_state.dbg_wvr[rd->reg]; ++ __u64 *r = 
&vcpu->arch.vcpu_debug_state.dbg_wvr[rd->CRm]; + + if (copy_from_user(r, uaddr, KVM_REG_SIZE(reg->id)) != 0) + return -EFAULT; +@@ -510,7 +510,7 @@ static int set_wvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, + static int get_wvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, + const struct kvm_one_reg *reg, void __user *uaddr) + { +- __u64 *r = &vcpu->arch.vcpu_debug_state.dbg_wvr[rd->reg]; ++ __u64 *r = &vcpu->arch.vcpu_debug_state.dbg_wvr[rd->CRm]; + + if (copy_to_user(uaddr, r, KVM_REG_SIZE(reg->id)) != 0) + return -EFAULT; +@@ -520,21 +520,21 @@ static int get_wvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, + static void reset_wvr(struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd) + { +- vcpu->arch.vcpu_debug_state.dbg_wvr[rd->reg] = rd->val; ++ vcpu->arch.vcpu_debug_state.dbg_wvr[rd->CRm] = rd->val; + } + + static bool trap_wcr(struct kvm_vcpu *vcpu, + struct sys_reg_params *p, + const struct sys_reg_desc *rd) + { +- u64 *dbg_reg = &vcpu->arch.vcpu_debug_state.dbg_wcr[rd->reg]; ++ u64 *dbg_reg = &vcpu->arch.vcpu_debug_state.dbg_wcr[rd->CRm]; + + if (p->is_write) + reg_to_dbg(vcpu, p, rd, dbg_reg); + else + dbg_to_reg(vcpu, p, rd, dbg_reg); + +- trace_trap_reg(__func__, rd->reg, p->is_write, *dbg_reg); ++ trace_trap_reg(__func__, rd->CRm, p->is_write, *dbg_reg); + + return true; + } +@@ -542,7 +542,7 @@ static bool trap_wcr(struct kvm_vcpu *vcpu, + static int set_wcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, + const struct kvm_one_reg *reg, void __user *uaddr) + { +- __u64 *r = &vcpu->arch.vcpu_debug_state.dbg_wcr[rd->reg]; ++ __u64 *r = &vcpu->arch.vcpu_debug_state.dbg_wcr[rd->CRm]; + + if (copy_from_user(r, uaddr, KVM_REG_SIZE(reg->id)) != 0) + return -EFAULT; +@@ -552,7 +552,7 @@ static int set_wcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, + static int get_wcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, + const struct kvm_one_reg *reg, void __user *uaddr) + { +- __u64 *r = &vcpu->arch.vcpu_debug_state.dbg_wcr[rd->reg]; ++ __u64 *r = &vcpu->arch.vcpu_debug_state.dbg_wcr[rd->CRm]; + + if (copy_to_user(uaddr, r, KVM_REG_SIZE(reg->id)) != 0) + return -EFAULT; +@@ -562,7 +562,7 @@ static int get_wcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, + static void reset_wcr(struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd) + { +- vcpu->arch.vcpu_debug_state.dbg_wcr[rd->reg] = rd->val; ++ vcpu->arch.vcpu_debug_state.dbg_wcr[rd->CRm] = rd->val; + } + + static void reset_amair_el1(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) +diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c +index 5d9550fdb9cf8..bd4739baa368d 100644 +--- a/arch/arm64/mm/mmu.c ++++ b/arch/arm64/mm/mmu.c +@@ -492,7 +492,8 @@ static void __init map_mem(pgd_t *pgdp) + int flags = 0; + u64 i; + +- if (rodata_full || crash_mem_map || debug_pagealloc_enabled()) ++ if (rodata_full || crash_mem_map || debug_pagealloc_enabled() || ++ IS_ENABLED(CONFIG_KFENCE)) + flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS; + + /* +diff --git a/arch/mips/alchemy/board-xxs1500.c b/arch/mips/alchemy/board-xxs1500.c +index b184baa4e56a6..f175bce2987fa 100644 +--- a/arch/mips/alchemy/board-xxs1500.c ++++ b/arch/mips/alchemy/board-xxs1500.c +@@ -18,6 +18,7 @@ + #include + #include + #include ++#include + #include + + const char *get_system_type(void) +diff --git a/arch/mips/ralink/of.c b/arch/mips/ralink/of.c +index 8286c3521476f..3c3d07cd80660 100644 +--- a/arch/mips/ralink/of.c ++++ b/arch/mips/ralink/of.c +@@ -8,6 +8,7 @@ + + #include + #include ++#include + 
#include + #include + #include +@@ -25,6 +26,7 @@ + + __iomem void *rt_sysc_membase; + __iomem void *rt_memc_membase; ++EXPORT_SYMBOL_GPL(rt_sysc_membase); + + __iomem void *plat_of_remap_node(const char *node) + { +diff --git a/arch/openrisc/include/asm/barrier.h b/arch/openrisc/include/asm/barrier.h +new file mode 100644 +index 0000000000000..7538294721bed +--- /dev/null ++++ b/arch/openrisc/include/asm/barrier.h +@@ -0,0 +1,9 @@ ++/* SPDX-License-Identifier: GPL-2.0 */ ++#ifndef __ASM_BARRIER_H ++#define __ASM_BARRIER_H ++ ++#define mb() asm volatile ("l.msync" ::: "memory") ++ ++#include ++ ++#endif /* __ASM_BARRIER_H */ +diff --git a/arch/riscv/kernel/stacktrace.c b/arch/riscv/kernel/stacktrace.c +index 2b3e0cb90d789..bde85fc53357f 100644 +--- a/arch/riscv/kernel/stacktrace.c ++++ b/arch/riscv/kernel/stacktrace.c +@@ -27,10 +27,10 @@ void notrace walk_stackframe(struct task_struct *task, struct pt_regs *regs, + fp = frame_pointer(regs); + sp = user_stack_pointer(regs); + pc = instruction_pointer(regs); +- } else if (task == NULL || task == current) { +- fp = (unsigned long)__builtin_frame_address(0); +- sp = sp_in_global; +- pc = (unsigned long)walk_stackframe; ++ } else if (task == current) { ++ fp = (unsigned long)__builtin_frame_address(1); ++ sp = (unsigned long)__builtin_frame_address(0); ++ pc = (unsigned long)__builtin_return_address(0); + } else { + /* task blocked in __switch_to */ + fp = task->thread.s[0]; +@@ -106,15 +106,15 @@ static bool print_trace_address(void *arg, unsigned long pc) + return true; + } + +-void dump_backtrace(struct pt_regs *regs, struct task_struct *task, ++noinline void dump_backtrace(struct pt_regs *regs, struct task_struct *task, + const char *loglvl) + { +- pr_cont("%sCall Trace:\n", loglvl); + walk_stackframe(task, regs, print_trace_address, (void *)loglvl); + } + + void show_stack(struct task_struct *task, unsigned long *sp, const char *loglvl) + { ++ pr_cont("%sCall Trace:\n", loglvl); + dump_backtrace(NULL, task, loglvl); + } + +@@ -139,7 +139,7 @@ unsigned long get_wchan(struct task_struct *task) + + #ifdef CONFIG_STACKTRACE + +-void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie, ++noinline void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie, + struct task_struct *task, struct pt_regs *regs) + { + walk_stackframe(task, regs, consume_entry, cookie); +diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c +index f98370a399361..f00830e5202fe 100644 +--- a/arch/x86/kvm/hyperv.c ++++ b/arch/x86/kvm/hyperv.c +@@ -1172,6 +1172,7 @@ void kvm_hv_invalidate_tsc_page(struct kvm *kvm) + { + struct kvm_hv *hv = to_kvm_hv(kvm); + u64 gfn; ++ int idx; + + if (hv->hv_tsc_page_status == HV_TSC_PAGE_BROKEN || + hv->hv_tsc_page_status == HV_TSC_PAGE_UNSET || +@@ -1190,9 +1191,16 @@ void kvm_hv_invalidate_tsc_page(struct kvm *kvm) + gfn = hv->hv_tsc_page >> HV_X64_MSR_TSC_REFERENCE_ADDRESS_SHIFT; + + hv->tsc_ref.tsc_sequence = 0; ++ ++ /* ++ * Take the srcu lock as memslots will be accessed to check the gfn ++ * cache generation against the memslots generation. 
++ */ ++ idx = srcu_read_lock(&kvm->srcu); + if (kvm_write_guest(kvm, gfn_to_gpa(gfn), + &hv->tsc_ref, sizeof(hv->tsc_ref.tsc_sequence))) + hv->hv_tsc_page_status = HV_TSC_PAGE_BROKEN; ++ srcu_read_unlock(&kvm->srcu, idx); + + out_unlock: + mutex_unlock(&hv->hv_lock); +diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c +index 86678f8b35020..822c23c3752c5 100644 +--- a/arch/x86/kvm/x86.c ++++ b/arch/x86/kvm/x86.c +@@ -3015,6 +3015,8 @@ static void record_steal_time(struct kvm_vcpu *vcpu) + st->preempted & KVM_VCPU_FLUSH_TLB); + if (xchg(&st->preempted, 0) & KVM_VCPU_FLUSH_TLB) + kvm_vcpu_flush_tlb_guest(vcpu); ++ } else { ++ st->preempted = 0; + } + + vcpu->arch.st.preempted = 0; +@@ -7113,6 +7115,11 @@ static void init_emulate_ctxt(struct kvm_vcpu *vcpu) + BUILD_BUG_ON(HF_SMM_MASK != X86EMUL_SMM_MASK); + BUILD_BUG_ON(HF_SMM_INSIDE_NMI_MASK != X86EMUL_SMM_INSIDE_NMI_MASK); + ++ ctxt->interruptibility = 0; ++ ctxt->have_exception = false; ++ ctxt->exception.vector = -1; ++ ctxt->perm_ok = false; ++ + init_decode_cache(ctxt); + vcpu->arch.emulate_regs_need_sync_from_vcpu = false; + } +@@ -7448,11 +7455,6 @@ int x86_decode_emulated_instruction(struct kvm_vcpu *vcpu, int emulation_type, + kvm_vcpu_check_breakpoint(vcpu, &r)) + return r; + +- ctxt->interruptibility = 0; +- ctxt->have_exception = false; +- ctxt->exception.vector = -1; +- ctxt->perm_ok = false; +- + ctxt->ud = emulation_type & EMULTYPE_TRAP_UD; + + r = x86_decode_insn(ctxt, insn, insn_len); +diff --git a/drivers/acpi/acpi_apd.c b/drivers/acpi/acpi_apd.c +index 39359ce0eb2c0..645e82a66bb07 100644 +--- a/drivers/acpi/acpi_apd.c ++++ b/drivers/acpi/acpi_apd.c +@@ -226,6 +226,7 @@ static const struct acpi_device_id acpi_apd_device_ids[] = { + { "AMDI0010", APD_ADDR(wt_i2c_desc) }, + { "AMD0020", APD_ADDR(cz_uart_desc) }, + { "AMDI0020", APD_ADDR(cz_uart_desc) }, ++ { "AMDI0022", APD_ADDR(cz_uart_desc) }, + { "AMD0030", }, + { "AMD0040", APD_ADDR(fch_misc_desc)}, + { "HYGO0010", APD_ADDR(wt_i2c_desc) }, +diff --git a/drivers/base/core.c b/drivers/base/core.c +index f29839382f816..dd77e566f2aa0 100644 +--- a/drivers/base/core.c ++++ b/drivers/base/core.c +@@ -192,6 +192,11 @@ int device_links_read_lock_held(void) + { + return srcu_read_lock_held(&device_links_srcu); + } ++ ++static void device_link_synchronize_removal(void) ++{ ++ synchronize_srcu(&device_links_srcu); ++} + #else /* !CONFIG_SRCU */ + static DECLARE_RWSEM(device_links_lock); + +@@ -222,6 +227,10 @@ int device_links_read_lock_held(void) + return lockdep_is_held(&device_links_lock); + } + #endif ++ ++static inline void device_link_synchronize_removal(void) ++{ ++} + #endif /* !CONFIG_SRCU */ + + static bool device_is_ancestor(struct device *dev, struct device *target) +@@ -443,8 +452,13 @@ static struct attribute *devlink_attrs[] = { + }; + ATTRIBUTE_GROUPS(devlink); + +-static void device_link_free(struct device_link *link) ++static void device_link_release_fn(struct work_struct *work) + { ++ struct device_link *link = container_of(work, struct device_link, rm_work); ++ ++ /* Ensure that all references to the link object have been dropped. 
*/ ++ device_link_synchronize_removal(); ++ + while (refcount_dec_not_one(&link->rpm_active)) + pm_runtime_put(link->supplier); + +@@ -453,24 +467,19 @@ static void device_link_free(struct device_link *link) + kfree(link); + } + +-#ifdef CONFIG_SRCU +-static void __device_link_free_srcu(struct rcu_head *rhead) +-{ +- device_link_free(container_of(rhead, struct device_link, rcu_head)); +-} +- + static void devlink_dev_release(struct device *dev) + { + struct device_link *link = to_devlink(dev); + +- call_srcu(&device_links_srcu, &link->rcu_head, __device_link_free_srcu); +-} +-#else +-static void devlink_dev_release(struct device *dev) +-{ +- device_link_free(to_devlink(dev)); ++ INIT_WORK(&link->rm_work, device_link_release_fn); ++ /* ++ * It may take a while to complete this work because of the SRCU ++ * synchronization in device_link_release_fn() and if the consumer or ++ * supplier devices get deleted when it runs, so put it into the "long" ++ * workqueue. ++ */ ++ queue_work(system_long_wq, &link->rm_work); + } +-#endif + + static struct class devlink_class = { + .name = "devlink", +diff --git a/drivers/char/hpet.c b/drivers/char/hpet.c +index ed3b7dab678db..8b55085650ad0 100644 +--- a/drivers/char/hpet.c ++++ b/drivers/char/hpet.c +@@ -984,6 +984,8 @@ static acpi_status hpet_resources(struct acpi_resource *res, void *data) + hdp->hd_phys_address = fixmem32->address; + hdp->hd_address = ioremap(fixmem32->address, + HPET_RANGE_SIZE); ++ if (!hdp->hd_address) ++ return AE_ERROR; + + if (hpet_is_known(hdp)) { + iounmap(hdp->hd_address); +diff --git a/drivers/crypto/cavium/nitrox/nitrox_main.c b/drivers/crypto/cavium/nitrox/nitrox_main.c +index facc8e6bc5801..d385daf2c71c3 100644 +--- a/drivers/crypto/cavium/nitrox/nitrox_main.c ++++ b/drivers/crypto/cavium/nitrox/nitrox_main.c +@@ -442,7 +442,6 @@ static int nitrox_probe(struct pci_dev *pdev, + err = pci_request_mem_regions(pdev, nitrox_driver_name); + if (err) { + pci_disable_device(pdev); +- dev_err(&pdev->dev, "Failed to request mem regions!\n"); + return err; + } + pci_set_master(pdev); +diff --git a/drivers/dma/qcom/hidma_mgmt.c b/drivers/dma/qcom/hidma_mgmt.c +index 806ca02c52d71..62026607f3f8b 100644 +--- a/drivers/dma/qcom/hidma_mgmt.c ++++ b/drivers/dma/qcom/hidma_mgmt.c +@@ -418,8 +418,23 @@ static int __init hidma_mgmt_init(void) + hidma_mgmt_of_populate_channels(child); + } + #endif +- return platform_driver_register(&hidma_mgmt_driver); ++ /* ++ * We do not check for return value here, as it is assumed that ++ * platform_driver_register must not fail. The reason for this is that ++ * the (potential) hidma_mgmt_of_populate_channels calls above are not ++ * cleaned up if it does fail, and to do this work is quite ++ * complicated. In particular, various calls of of_address_to_resource, ++ * of_irq_to_resource, platform_device_register_full, of_dma_configure, ++ * and of_msi_configure which then call other functions and so on, must ++ * be cleaned up - this is not a trivial exercise. ++ * ++ * Currently, this module is not intended to be unloaded, and there is ++ * no module_exit function defined which does the needed cleanup. For ++ * this reason, we have to assume success here. 
++ */ ++ platform_driver_register(&hidma_mgmt_driver); + ++ return 0; + } + module_init(hidma_mgmt_init); + MODULE_LICENSE("GPL v2"); +diff --git a/drivers/gpio/gpio-cadence.c b/drivers/gpio/gpio-cadence.c +index a4d3239d25944..4ab3fcd9b9ba6 100644 +--- a/drivers/gpio/gpio-cadence.c ++++ b/drivers/gpio/gpio-cadence.c +@@ -278,6 +278,7 @@ static const struct of_device_id cdns_of_ids[] = { + { .compatible = "cdns,gpio-r1p02" }, + { /* sentinel */ }, + }; ++MODULE_DEVICE_TABLE(of, cdns_of_ids); + + static struct platform_driver cdns_gpio_driver = { + .driver = { +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v10_3.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v10_3.c +index fad3b91f74f54..d39cff4a1fe38 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v10_3.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v10_3.c +@@ -156,16 +156,16 @@ static uint32_t get_sdma_rlc_reg_offset(struct amdgpu_device *adev, + mmSDMA0_RLC0_RB_CNTL) - mmSDMA0_RLC0_RB_CNTL; + break; + case 1: +- sdma_engine_reg_base = SOC15_REG_OFFSET(SDMA1, 0, ++ sdma_engine_reg_base = SOC15_REG_OFFSET(SDMA0, 0, + mmSDMA1_RLC0_RB_CNTL) - mmSDMA0_RLC0_RB_CNTL; + break; + case 2: +- sdma_engine_reg_base = SOC15_REG_OFFSET(SDMA2, 0, +- mmSDMA2_RLC0_RB_CNTL) - mmSDMA2_RLC0_RB_CNTL; ++ sdma_engine_reg_base = SOC15_REG_OFFSET(SDMA0, 0, ++ mmSDMA2_RLC0_RB_CNTL) - mmSDMA0_RLC0_RB_CNTL; + break; + case 3: +- sdma_engine_reg_base = SOC15_REG_OFFSET(SDMA3, 0, +- mmSDMA3_RLC0_RB_CNTL) - mmSDMA2_RLC0_RB_CNTL; ++ sdma_engine_reg_base = SOC15_REG_OFFSET(SDMA0, 0, ++ mmSDMA3_RLC0_RB_CNTL) - mmSDMA0_RLC0_RB_CNTL; + break; + } + +@@ -450,7 +450,7 @@ static int hqd_sdma_dump_v10_3(struct kgd_dev *kgd, + engine_id, queue_id); + uint32_t i = 0, reg; + #undef HQD_N_REGS +-#define HQD_N_REGS (19+6+7+10) ++#define HQD_N_REGS (19+6+7+12) + + *dump = kmalloc(HQD_N_REGS*2*sizeof(uint32_t), GFP_KERNEL); + if (*dump == NULL) +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c +index 5eee251e3335b..85d90e857693a 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c +@@ -4356,7 +4356,6 @@ out: + r = amdgpu_ib_ring_tests(tmp_adev); + if (r) { + dev_err(tmp_adev->dev, "ib ring test failed (%d).\n", r); +- r = amdgpu_device_ip_suspend(tmp_adev); + need_full_reset = true; + r = -EAGAIN; + goto end; +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c +index 24010cacf7d0e..813b96e233ba8 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c +@@ -290,10 +290,13 @@ out: + static int amdgpu_fbdev_destroy(struct drm_device *dev, struct amdgpu_fbdev *rfbdev) + { + struct amdgpu_framebuffer *rfb = &rfbdev->rfb; ++ int i; + + drm_fb_helper_unregister_fbi(&rfbdev->helper); + + if (rfb->base.obj[0]) { ++ for (i = 0; i < rfb->base.format->num_planes; i++) ++ drm_gem_object_put(rfb->base.obj[0]); + amdgpufb_destroy_pinned_object(rfb->base.obj[0]); + rfb->base.obj[0] = NULL; + drm_framebuffer_unregister_private(&rfb->base); +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c +index 6b14626c148ee..13ac2f1dcc2c7 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c +@@ -1287,6 +1287,7 @@ static void amdgpu_ttm_tt_unpopulate(struct ttm_bo_device *bdev, + if (gtt && gtt->userptr) { + amdgpu_ttm_tt_set_user_pages(ttm, NULL); + kfree(ttm->sg); ++ ttm->sg = NULL; + ttm->page_flags &= 
~TTM_PAGE_FLAG_SG; + return; + } +diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c +index 3b22953aa62e1..e1d3877ee3013 100644 +--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c ++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c +@@ -172,6 +172,8 @@ static int jpeg_v2_0_hw_fini(void *handle) + { + struct amdgpu_device *adev = (struct amdgpu_device *)handle; + ++ cancel_delayed_work_sync(&adev->vcn.idle_work); ++ + if (adev->jpeg.cur_state != AMD_PG_STATE_GATE && + RREG32_SOC15(JPEG, 0, mmUVD_JRBC_STATUS)) + jpeg_v2_0_set_powergating_state(adev, AMD_PG_STATE_GATE); +diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c +index c6724a0e0c436..dc947c8ffe213 100644 +--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c ++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c +@@ -198,8 +198,6 @@ static int jpeg_v2_5_hw_fini(void *handle) + if (adev->jpeg.cur_state != AMD_PG_STATE_GATE && + RREG32_SOC15(JPEG, i, mmUVD_JRBC_STATUS)) + jpeg_v2_5_set_powergating_state(adev, AMD_PG_STATE_GATE); +- +- ring->sched.ready = false; + } + + return 0; +diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c +index e8fbb2a0de340..1d354245678d5 100644 +--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c ++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c +@@ -166,8 +166,6 @@ static int jpeg_v3_0_hw_fini(void *handle) + RREG32_SOC15(JPEG, 0, mmUVD_JRBC_STATUS)) + jpeg_v3_0_set_powergating_state(adev, AMD_PG_STATE_GATE); + +- ring->sched.ready = false; +- + return 0; + } + +diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c +index 690a5090475a5..32c6aa03d2673 100644 +--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c ++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c +@@ -470,11 +470,6 @@ static void sdma_v5_2_gfx_stop(struct amdgpu_device *adev) + ib_cntl = REG_SET_FIELD(ib_cntl, SDMA0_GFX_IB_CNTL, IB_ENABLE, 0); + WREG32(sdma_v5_2_get_reg_offset(adev, i, mmSDMA0_GFX_IB_CNTL), ib_cntl); + } +- +- sdma0->sched.ready = false; +- sdma1->sched.ready = false; +- sdma2->sched.ready = false; +- sdma3->sched.ready = false; + } + + /** +diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c +index 6117931fa8d79..dd5eb61a5c906 100644 +--- a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c ++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c +@@ -231,9 +231,13 @@ static int vcn_v1_0_hw_fini(void *handle) + { + struct amdgpu_device *adev = (struct amdgpu_device *)handle; + ++ cancel_delayed_work_sync(&adev->vcn.idle_work); ++ + if ((adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG) || +- RREG32_SOC15(VCN, 0, mmUVD_STATUS)) ++ (adev->vcn.cur_state != AMD_PG_STATE_GATE && ++ RREG32_SOC15(VCN, 0, mmUVD_STATUS))) { + vcn_v1_0_set_powergating_state(adev, AMD_PG_STATE_GATE); ++ } + + return 0; + } +diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c +index d63198c945bf0..0a84f0b5a363c 100644 +--- a/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c ++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c +@@ -262,6 +262,8 @@ static int vcn_v2_0_hw_fini(void *handle) + { + struct amdgpu_device *adev = (struct amdgpu_device *)handle; + ++ cancel_delayed_work_sync(&adev->vcn.idle_work); ++ + if ((adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG) || + (adev->vcn.cur_state != AMD_PG_STATE_GATE && + RREG32_SOC15(VCN, 0, mmUVD_STATUS))) +diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c b/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c +index b6e0f4ba6272c..f09d37b90da92 100644 +--- a/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c 
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c +@@ -321,6 +321,8 @@ static int vcn_v2_5_hw_fini(void *handle) + struct amdgpu_device *adev = (struct amdgpu_device *)handle; + int i; + ++ cancel_delayed_work_sync(&adev->vcn.idle_work); ++ + for (i = 0; i < adev->vcn.num_vcn_inst; ++i) { + if (adev->vcn.harvest_config & (1 << i)) + continue; +diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c +index 9b844e9fb16ff..ebbc04ff5da06 100644 +--- a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c ++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c +@@ -368,7 +368,7 @@ static int vcn_v3_0_hw_fini(void *handle) + { + struct amdgpu_device *adev = (struct amdgpu_device *)handle; + struct amdgpu_ring *ring; +- int i, j; ++ int i; + + for (i = 0; i < adev->vcn.num_vcn_inst; ++i) { + if (adev->vcn.harvest_config & (1 << i)) +@@ -383,12 +383,6 @@ static int vcn_v3_0_hw_fini(void *handle) + vcn_v3_0_set_powergating_state(adev, AMD_PG_STATE_GATE); + } + } +- ring->sched.ready = false; +- +- for (j = 0; j < adev->vcn.num_enc_rings; ++j) { +- ring = &adev->vcn.inst[i].ring_enc[j]; +- ring->sched.ready = false; +- } + } + + return 0; +diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c +index 440bf0a0e12ae..204928479c958 100644 +--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c ++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c +@@ -1067,6 +1067,24 @@ static bool dc_link_detect_helper(struct dc_link *link, + dc_is_dvi_signal(link->connector_signal)) { + if (prev_sink) + dc_sink_release(prev_sink); ++ link_disconnect_sink(link); ++ ++ return false; ++ } ++ /* ++ * Abort detection for DP connectors if we have ++ * no EDID and connector is active converter ++ * as there are no display downstream ++ * ++ */ ++ if (dc_is_dp_sst_signal(link->connector_signal) && ++ (link->dpcd_caps.dongle_type == ++ DISPLAY_DONGLE_DP_VGA_CONVERTER || ++ link->dpcd_caps.dongle_type == ++ DISPLAY_DONGLE_DP_DVI_CONVERTER)) { ++ if (prev_sink) ++ dc_sink_release(prev_sink); ++ link_disconnect_sink(link); + + return false; + } +diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c +index fbff3df72e6c6..85cd69bbdec19 100644 +--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c ++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c +@@ -2366,6 +2366,8 @@ static ssize_t navi10_get_gpu_metrics(struct smu_context *smu, + + static int navi10_enable_mgpu_fan_boost(struct smu_context *smu) + { ++ struct smu_table_context *table_context = &smu->smu_table; ++ PPTable_t *smc_pptable = table_context->driver_pptable; + struct amdgpu_device *adev = smu->adev; + uint32_t param = 0; + +@@ -2373,6 +2375,13 @@ static int navi10_enable_mgpu_fan_boost(struct smu_context *smu) + if (adev->asic_type == CHIP_NAVI12) + return 0; + ++ /* ++ * Skip the MGpuFanBoost setting for those ASICs ++ * which do not support it ++ */ ++ if (!smc_pptable->MGpuFanBoostLimitRpm) ++ return 0; ++ + /* Workaround for WS SKU */ + if (adev->pdev->device == 0x7312 && + adev->pdev->revision == 0) +diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c +index 61438940c26e8..e4761bc161129 100644 +--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c ++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c +@@ -3015,6 +3015,16 @@ static ssize_t sienna_cichlid_get_gpu_metrics(struct smu_context *smu, + + static int sienna_cichlid_enable_mgpu_fan_boost(struct smu_context 
*smu) + { ++ struct smu_table_context *table_context = &smu->smu_table; ++ PPTable_t *smc_pptable = table_context->driver_pptable; ++ ++ /* ++ * Skip the MGpuFanBoost setting for those ASICs ++ * which do not support it ++ */ ++ if (!smc_pptable->MGpuFanBoostLimitRpm) ++ return 0; ++ + return smu_cmn_send_smc_msg_with_param(smu, + SMU_MSG_SetMGpuFanBoostLimitRpm, + 0, +diff --git a/drivers/gpu/drm/i915/display/intel_dp_link_training.c b/drivers/gpu/drm/i915/display/intel_dp_link_training.c +index 2ed309534e97a..4a74502977845 100644 +--- a/drivers/gpu/drm/i915/display/intel_dp_link_training.c ++++ b/drivers/gpu/drm/i915/display/intel_dp_link_training.c +@@ -127,49 +127,13 @@ intel_dp_set_lttpr_transparent_mode(struct intel_dp *intel_dp, bool enable) + return drm_dp_dpcd_write(&intel_dp->aux, DP_PHY_REPEATER_MODE, &val, 1) == 1; + } + +-/** +- * intel_dp_init_lttpr_and_dprx_caps - detect LTTPR and DPRX caps, init the LTTPR link training mode +- * @intel_dp: Intel DP struct +- * +- * Read the LTTPR common and DPRX capabilities and switch to non-transparent +- * link training mode if any is detected and read the PHY capabilities for all +- * detected LTTPRs. In case of an LTTPR detection error or if the number of +- * LTTPRs is more than is supported (8), fall back to the no-LTTPR, +- * transparent mode link training mode. +- * +- * Returns: +- * >0 if LTTPRs were detected and the non-transparent LT mode was set. The +- * DPRX capabilities are read out. +- * 0 if no LTTPRs or more than 8 LTTPRs were detected or in case of a +- * detection failure and the transparent LT mode was set. The DPRX +- * capabilities are read out. +- * <0 Reading out the DPRX capabilities failed. +- */ +-int intel_dp_init_lttpr_and_dprx_caps(struct intel_dp *intel_dp) ++static int intel_dp_init_lttpr(struct intel_dp *intel_dp) + { + int lttpr_count; +- bool ret; + int i; + +- ret = intel_dp_read_lttpr_common_caps(intel_dp); +- +- /* The DPTX shall read the DPRX caps after LTTPR detection. */ +- if (drm_dp_read_dpcd_caps(&intel_dp->aux, intel_dp->dpcd)) { +- intel_dp_reset_lttpr_common_caps(intel_dp); +- return -EIO; +- } +- +- if (!ret) +- return 0; +- +- /* +- * The 0xF0000-0xF02FF range is only valid if the DPCD revision is +- * at least 1.4. +- */ +- if (intel_dp->dpcd[DP_DPCD_REV] < 0x14) { +- intel_dp_reset_lttpr_common_caps(intel_dp); ++ if (!intel_dp_read_lttpr_common_caps(intel_dp)) + return 0; +- } + + lttpr_count = drm_dp_lttpr_count(intel_dp->lttpr_common_caps); + /* +@@ -210,6 +174,37 @@ int intel_dp_init_lttpr_and_dprx_caps(struct intel_dp *intel_dp) + + return lttpr_count; + } ++ ++/** ++ * intel_dp_init_lttpr_and_dprx_caps - detect LTTPR and DPRX caps, init the LTTPR link training mode ++ * @intel_dp: Intel DP struct ++ * ++ * Read the LTTPR common and DPRX capabilities and switch to non-transparent ++ * link training mode if any is detected and read the PHY capabilities for all ++ * detected LTTPRs. In case of an LTTPR detection error or if the number of ++ * LTTPRs is more than is supported (8), fall back to the no-LTTPR, ++ * transparent mode link training mode. ++ * ++ * Returns: ++ * >0 if LTTPRs were detected and the non-transparent LT mode was set. The ++ * DPRX capabilities are read out. ++ * 0 if no LTTPRs or more than 8 LTTPRs were detected or in case of a ++ * detection failure and the transparent LT mode was set. The DPRX ++ * capabilities are read out. ++ * <0 Reading out the DPRX capabilities failed. 
++ */ ++int intel_dp_init_lttpr_and_dprx_caps(struct intel_dp *intel_dp) ++{ ++ int lttpr_count = intel_dp_init_lttpr(intel_dp); ++ ++ /* The DPTX shall read the DPRX caps after LTTPR detection. */ ++ if (drm_dp_read_dpcd_caps(&intel_dp->aux, intel_dp->dpcd)) { ++ intel_dp_reset_lttpr_common_caps(intel_dp); ++ return -EIO; ++ } ++ ++ return lttpr_count; ++} + EXPORT_SYMBOL(intel_dp_init_lttpr_and_dprx_caps); + + static u8 dp_voltage_max(u8 preemph) +diff --git a/drivers/gpu/drm/meson/meson_drv.c b/drivers/gpu/drm/meson/meson_drv.c +index 453d8b4c5763d..07fcd12dca160 100644 +--- a/drivers/gpu/drm/meson/meson_drv.c ++++ b/drivers/gpu/drm/meson/meson_drv.c +@@ -485,11 +485,12 @@ static int meson_probe_remote(struct platform_device *pdev, + static void meson_drv_shutdown(struct platform_device *pdev) + { + struct meson_drm *priv = dev_get_drvdata(&pdev->dev); +- struct drm_device *drm = priv->drm; + +- DRM_DEBUG_DRIVER("\n"); +- drm_kms_helper_poll_fini(drm); +- drm_atomic_helper_shutdown(drm); ++ if (!priv) ++ return; ++ ++ drm_kms_helper_poll_fini(priv->drm); ++ drm_atomic_helper_shutdown(priv->drm); + } + + static int meson_drv_probe(struct platform_device *pdev) +diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c +index 99d446763530e..f9e1c2ceaac05 100644 +--- a/drivers/i2c/busses/i2c-i801.c ++++ b/drivers/i2c/busses/i2c-i801.c +@@ -395,11 +395,9 @@ static int i801_check_post(struct i801_priv *priv, int status) + dev_err(&priv->pci_dev->dev, "Transaction timeout\n"); + /* try to stop the current command */ + dev_dbg(&priv->pci_dev->dev, "Terminating the current operation\n"); +- outb_p(inb_p(SMBHSTCNT(priv)) | SMBHSTCNT_KILL, +- SMBHSTCNT(priv)); ++ outb_p(SMBHSTCNT_KILL, SMBHSTCNT(priv)); + usleep_range(1000, 2000); +- outb_p(inb_p(SMBHSTCNT(priv)) & (~SMBHSTCNT_KILL), +- SMBHSTCNT(priv)); ++ outb_p(0, SMBHSTCNT(priv)); + + /* Check if it worked */ + status = inb_p(SMBHSTSTS(priv)); +diff --git a/drivers/i2c/busses/i2c-mt65xx.c b/drivers/i2c/busses/i2c-mt65xx.c +index bf25acba2ed53..dcde71ae63419 100644 +--- a/drivers/i2c/busses/i2c-mt65xx.c ++++ b/drivers/i2c/busses/i2c-mt65xx.c +@@ -478,6 +478,11 @@ static void mtk_i2c_clock_disable(struct mtk_i2c *i2c) + static void mtk_i2c_init_hw(struct mtk_i2c *i2c) + { + u16 control_reg; ++ u16 intr_stat_reg; ++ ++ mtk_i2c_writew(i2c, I2C_CHN_CLR_FLAG, OFFSET_START); ++ intr_stat_reg = mtk_i2c_readw(i2c, OFFSET_INTR_STAT); ++ mtk_i2c_writew(i2c, intr_stat_reg, OFFSET_INTR_STAT); + + if (i2c->dev_comp->apdma_sync) { + writel(I2C_DMA_WARM_RST, i2c->pdmabase + OFFSET_RST); +diff --git a/drivers/i2c/busses/i2c-s3c2410.c b/drivers/i2c/busses/i2c-s3c2410.c +index 62a903fbe9121..7fff57889ff27 100644 +--- a/drivers/i2c/busses/i2c-s3c2410.c ++++ b/drivers/i2c/busses/i2c-s3c2410.c +@@ -483,7 +483,10 @@ static int i2c_s3c_irq_nextbyte(struct s3c24xx_i2c *i2c, unsigned long iicstat) + * forces us to send a new START + * when we change direction + */ ++ dev_dbg(i2c->dev, ++ "missing START before write->read\n"); + s3c24xx_i2c_stop(i2c, -EINVAL); ++ break; + } + + goto retry_write; +diff --git a/drivers/i2c/busses/i2c-sh_mobile.c b/drivers/i2c/busses/i2c-sh_mobile.c +index 3ae6ca21a02c6..2d2e630fd4387 100644 +--- a/drivers/i2c/busses/i2c-sh_mobile.c ++++ b/drivers/i2c/busses/i2c-sh_mobile.c +@@ -807,7 +807,7 @@ static const struct sh_mobile_dt_config r8a7740_dt_config = { + static const struct of_device_id sh_mobile_i2c_dt_ids[] = { + { .compatible = "renesas,iic-r8a73a4", .data = &fast_clock_dt_config }, + { .compatible = 
"renesas,iic-r8a7740", .data = &r8a7740_dt_config }, +- { .compatible = "renesas,iic-r8a774c0", .data = &fast_clock_dt_config }, ++ { .compatible = "renesas,iic-r8a774c0", .data = &v2_freq_calc_dt_config }, + { .compatible = "renesas,iic-r8a7790", .data = &v2_freq_calc_dt_config }, + { .compatible = "renesas,iic-r8a7791", .data = &v2_freq_calc_dt_config }, + { .compatible = "renesas,iic-r8a7792", .data = &v2_freq_calc_dt_config }, +diff --git a/drivers/iio/adc/ad7124.c b/drivers/iio/adc/ad7124.c +index 766c733336045..9c2401c5848ec 100644 +--- a/drivers/iio/adc/ad7124.c ++++ b/drivers/iio/adc/ad7124.c +@@ -616,6 +616,13 @@ static int ad7124_of_parse_channel_config(struct iio_dev *indio_dev, + if (ret) + goto err; + ++ if (channel >= indio_dev->num_channels) { ++ dev_err(indio_dev->dev.parent, ++ "Channel index >= number of channels\n"); ++ ret = -EINVAL; ++ goto err; ++ } ++ + ret = of_property_read_u32_array(child, "diff-channels", + ain, 2); + if (ret) +@@ -707,6 +714,11 @@ static int ad7124_setup(struct ad7124_state *st) + return ret; + } + ++static void ad7124_reg_disable(void *r) ++{ ++ regulator_disable(r); ++} ++ + static int ad7124_probe(struct spi_device *spi) + { + const struct ad7124_chip_info *info; +@@ -752,17 +764,20 @@ static int ad7124_probe(struct spi_device *spi) + ret = regulator_enable(st->vref[i]); + if (ret) + return ret; ++ ++ ret = devm_add_action_or_reset(&spi->dev, ad7124_reg_disable, ++ st->vref[i]); ++ if (ret) ++ return ret; + } + + st->mclk = devm_clk_get(&spi->dev, "mclk"); +- if (IS_ERR(st->mclk)) { +- ret = PTR_ERR(st->mclk); +- goto error_regulator_disable; +- } ++ if (IS_ERR(st->mclk)) ++ return PTR_ERR(st->mclk); + + ret = clk_prepare_enable(st->mclk); + if (ret < 0) +- goto error_regulator_disable; ++ return ret; + + ret = ad7124_soft_reset(st); + if (ret < 0) +@@ -792,11 +807,6 @@ error_remove_trigger: + ad_sd_cleanup_buffer_and_trigger(indio_dev); + error_clk_disable_unprepare: + clk_disable_unprepare(st->mclk); +-error_regulator_disable: +- for (i = ARRAY_SIZE(st->vref) - 1; i >= 0; i--) { +- if (!IS_ERR_OR_NULL(st->vref[i])) +- regulator_disable(st->vref[i]); +- } + + return ret; + } +@@ -805,17 +815,11 @@ static int ad7124_remove(struct spi_device *spi) + { + struct iio_dev *indio_dev = spi_get_drvdata(spi); + struct ad7124_state *st = iio_priv(indio_dev); +- int i; + + iio_device_unregister(indio_dev); + ad_sd_cleanup_buffer_and_trigger(indio_dev); + clk_disable_unprepare(st->mclk); + +- for (i = ARRAY_SIZE(st->vref) - 1; i >= 0; i--) { +- if (!IS_ERR_OR_NULL(st->vref[i])) +- regulator_disable(st->vref[i]); +- } +- + return 0; + } + +diff --git a/drivers/iio/adc/ad7192.c b/drivers/iio/adc/ad7192.c +index 2ed580521d815..1141cc13a1249 100644 +--- a/drivers/iio/adc/ad7192.c ++++ b/drivers/iio/adc/ad7192.c +@@ -912,7 +912,7 @@ static int ad7192_probe(struct spi_device *spi) + { + struct ad7192_state *st; + struct iio_dev *indio_dev; +- int ret, voltage_uv = 0; ++ int ret; + + if (!spi->irq) { + dev_err(&spi->dev, "no IRQ?\n"); +@@ -949,15 +949,12 @@ static int ad7192_probe(struct spi_device *spi) + goto error_disable_avdd; + } + +- voltage_uv = regulator_get_voltage(st->avdd); +- +- if (voltage_uv > 0) { +- st->int_vref_mv = voltage_uv / 1000; +- } else { +- ret = voltage_uv; ++ ret = regulator_get_voltage(st->avdd); ++ if (ret < 0) { + dev_err(&spi->dev, "Device tree error, reference voltage undefined\n"); + goto error_disable_avdd; + } ++ st->int_vref_mv = ret / 1000; + + spi_set_drvdata(spi, indio_dev); + st->chip_info = 
of_device_get_match_data(&spi->dev); +@@ -1014,7 +1011,9 @@ static int ad7192_probe(struct spi_device *spi) + return 0; + + error_disable_clk: +- clk_disable_unprepare(st->mclk); ++ if (st->clock_sel == AD7192_CLK_EXT_MCLK1_2 || ++ st->clock_sel == AD7192_CLK_EXT_MCLK2) ++ clk_disable_unprepare(st->mclk); + error_remove_trigger: + ad_sd_cleanup_buffer_and_trigger(indio_dev); + error_disable_dvdd: +@@ -1031,7 +1030,9 @@ static int ad7192_remove(struct spi_device *spi) + struct ad7192_state *st = iio_priv(indio_dev); + + iio_device_unregister(indio_dev); +- clk_disable_unprepare(st->mclk); ++ if (st->clock_sel == AD7192_CLK_EXT_MCLK1_2 || ++ st->clock_sel == AD7192_CLK_EXT_MCLK2) ++ clk_disable_unprepare(st->mclk); + ad_sd_cleanup_buffer_and_trigger(indio_dev); + + regulator_disable(st->dvdd); +diff --git a/drivers/iio/adc/ad7768-1.c b/drivers/iio/adc/ad7768-1.c +index 5c0cbee032308..293d18f0b7fe0 100644 +--- a/drivers/iio/adc/ad7768-1.c ++++ b/drivers/iio/adc/ad7768-1.c +@@ -167,6 +167,10 @@ struct ad7768_state { + * transfer buffers to live in their own cache lines. + */ + union { ++ struct { ++ __be32 chan; ++ s64 timestamp; ++ } scan; + __be32 d32; + u8 d8[2]; + } data ____cacheline_aligned; +@@ -469,11 +473,11 @@ static irqreturn_t ad7768_trigger_handler(int irq, void *p) + + mutex_lock(&st->lock); + +- ret = spi_read(st->spi, &st->data.d32, 3); ++ ret = spi_read(st->spi, &st->data.scan.chan, 3); + if (ret < 0) + goto err_unlock; + +- iio_push_to_buffers_with_timestamp(indio_dev, &st->data.d32, ++ iio_push_to_buffers_with_timestamp(indio_dev, &st->data.scan, + iio_get_time_ns(indio_dev)); + + iio_trigger_notify_done(indio_dev->trig); +diff --git a/drivers/iio/adc/ad7793.c b/drivers/iio/adc/ad7793.c +index 5e980a06258e6..440ef4c7be074 100644 +--- a/drivers/iio/adc/ad7793.c ++++ b/drivers/iio/adc/ad7793.c +@@ -279,6 +279,7 @@ static int ad7793_setup(struct iio_dev *indio_dev, + id &= AD7793_ID_MASK; + + if (id != st->chip_info->id) { ++ ret = -ENODEV; + dev_err(&st->sd.spi->dev, "device ID query failed\n"); + goto out; + } +diff --git a/drivers/iio/adc/ad7923.c b/drivers/iio/adc/ad7923.c +index a2cc966580540..8c1e866f72e85 100644 +--- a/drivers/iio/adc/ad7923.c ++++ b/drivers/iio/adc/ad7923.c +@@ -59,8 +59,10 @@ struct ad7923_state { + /* + * DMA (thus cache coherency maintenance) requires the + * transfer buffers to live in their own cache lines. 
++ * Ensure rx_buf can be directly used in iio_push_to_buffers_with_timetamp ++ * Length = 8 channels + 4 extra for 8 byte timestamp + */ +- __be16 rx_buf[4] ____cacheline_aligned; ++ __be16 rx_buf[12] ____cacheline_aligned; + __be16 tx_buf[4]; + }; + +diff --git a/drivers/iio/dac/ad5770r.c b/drivers/iio/dac/ad5770r.c +index 84dcf149261f9..42decba1463cc 100644 +--- a/drivers/iio/dac/ad5770r.c ++++ b/drivers/iio/dac/ad5770r.c +@@ -524,23 +524,29 @@ static int ad5770r_channel_config(struct ad5770r_state *st) + device_for_each_child_node(&st->spi->dev, child) { + ret = fwnode_property_read_u32(child, "num", &num); + if (ret) +- return ret; +- if (num >= AD5770R_MAX_CHANNELS) +- return -EINVAL; ++ goto err_child_out; ++ if (num >= AD5770R_MAX_CHANNELS) { ++ ret = -EINVAL; ++ goto err_child_out; ++ } + + ret = fwnode_property_read_u32_array(child, + "adi,range-microamp", + tmp, 2); + if (ret) +- return ret; ++ goto err_child_out; + + min = tmp[0] / 1000; + max = tmp[1] / 1000; + ret = ad5770r_store_output_range(st, min, max, num); + if (ret) +- return ret; ++ goto err_child_out; + } + ++ return 0; ++ ++err_child_out: ++ fwnode_handle_put(child); + return ret; + } + +diff --git a/drivers/iio/gyro/fxas21002c_core.c b/drivers/iio/gyro/fxas21002c_core.c +index 129eead8febc0..b7523357d8eba 100644 +--- a/drivers/iio/gyro/fxas21002c_core.c ++++ b/drivers/iio/gyro/fxas21002c_core.c +@@ -399,6 +399,7 @@ static int fxas21002c_temp_get(struct fxas21002c_data *data, int *val) + ret = regmap_field_read(data->regmap_fields[F_TEMP], &temp); + if (ret < 0) { + dev_err(dev, "failed to read temp: %d\n", ret); ++ fxas21002c_pm_put(data); + goto data_unlock; + } + +@@ -432,6 +433,7 @@ static int fxas21002c_axis_get(struct fxas21002c_data *data, + &axis_be, sizeof(axis_be)); + if (ret < 0) { + dev_err(dev, "failed to read axis: %d: %d\n", index, ret); ++ fxas21002c_pm_put(data); + goto data_unlock; + } + +diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c +index ea8f068a6da3e..e0dec6f9d8d8f 100644 +--- a/drivers/infiniband/hw/mlx5/mr.c ++++ b/drivers/infiniband/hw/mlx5/mr.c +@@ -764,10 +764,10 @@ int mlx5_mr_cache_init(struct mlx5_ib_dev *dev) + ent->xlt = (1 << ent->order) * sizeof(struct mlx5_mtt) / + MLX5_IB_UMR_OCTOWORD; + ent->access_mode = MLX5_MKC_ACCESS_MODE_MTT; +- if ((dev->mdev->profile->mask & MLX5_PROF_MASK_MR_CACHE) && ++ if ((dev->mdev->profile.mask & MLX5_PROF_MASK_MR_CACHE) && + !dev->is_rep && mlx5_core_is_pf(dev->mdev) && + mlx5_ib_can_load_pas_with_umr(dev, 0)) +- ent->limit = dev->mdev->profile->mr_cache[i].limit; ++ ent->limit = dev->mdev->profile.mr_cache[i].limit; + else + ent->limit = 0; + spin_lock_irq(&ent->lock); +diff --git a/drivers/interconnect/qcom/bcm-voter.c b/drivers/interconnect/qcom/bcm-voter.c +index 1cc565bce2f4d..1da6cea8ecbc4 100644 +--- a/drivers/interconnect/qcom/bcm-voter.c ++++ b/drivers/interconnect/qcom/bcm-voter.c +@@ -1,6 +1,6 @@ + // SPDX-License-Identifier: GPL-2.0 + /* +- * Copyright (c) 2020, The Linux Foundation. All rights reserved. ++ * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. 
+ */ + + #include +@@ -205,6 +205,7 @@ struct bcm_voter *of_bcm_voter_get(struct device *dev, const char *name) + } + mutex_unlock(&bcm_voter_lock); + ++ of_node_put(node); + return voter; + } + EXPORT_SYMBOL_GPL(of_bcm_voter_get); +@@ -362,6 +363,7 @@ static const struct of_device_id bcm_voter_of_match[] = { + { .compatible = "qcom,bcm-voter" }, + { } + }; ++MODULE_DEVICE_TABLE(of, bcm_voter_of_match); + + static struct platform_driver qcom_icc_bcm_voter_driver = { + .probe = qcom_icc_bcm_voter_probe, +diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c +index a69a8b573e40d..7de7a260706b3 100644 +--- a/drivers/iommu/amd/iommu.c ++++ b/drivers/iommu/amd/iommu.c +@@ -1747,6 +1747,8 @@ static void amd_iommu_probe_finalize(struct device *dev) + domain = iommu_get_domain_for_dev(dev); + if (domain->type == IOMMU_DOMAIN_DMA) + iommu_setup_dma_ops(dev, IOVA_START_PFN << PAGE_SHIFT, 0); ++ else ++ set_dma_ops(dev, NULL); + } + + static void amd_iommu_release_device(struct device *dev) +diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c +index d5c51b5c20aff..c2bfccb19e24d 100644 +--- a/drivers/iommu/intel/dmar.c ++++ b/drivers/iommu/intel/dmar.c +@@ -1144,7 +1144,7 @@ static int alloc_iommu(struct dmar_drhd_unit *drhd) + + err = iommu_device_register(&iommu->iommu); + if (err) +- goto err_unmap; ++ goto err_sysfs; + } + + drhd->iommu = iommu; +@@ -1152,6 +1152,8 @@ static int alloc_iommu(struct dmar_drhd_unit *drhd) + + return 0; + ++err_sysfs: ++ iommu_device_sysfs_remove(&iommu->iommu); + err_unmap: + unmap_iommu(iommu); + error_free_seq_id: +diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c +index 7e551da6c1fbd..56930e0b8f590 100644 +--- a/drivers/iommu/intel/iommu.c ++++ b/drivers/iommu/intel/iommu.c +@@ -2525,9 +2525,9 @@ static int domain_setup_first_level(struct intel_iommu *iommu, + struct device *dev, + u32 pasid) + { +- int flags = PASID_FLAG_SUPERVISOR_MODE; + struct dma_pte *pgd = domain->pgd; + int agaw, level; ++ int flags = 0; + + /* + * Skip top levels of page tables for iommu which has +@@ -2543,7 +2543,10 @@ static int domain_setup_first_level(struct intel_iommu *iommu, + if (level != 4 && level != 5) + return -EINVAL; + +- flags |= (level == 5) ? PASID_FLAG_FL5LP : 0; ++ if (pasid != PASID_RID2PASID) ++ flags |= PASID_FLAG_SUPERVISOR_MODE; ++ if (level == 5) ++ flags |= PASID_FLAG_FL5LP; + + if (domain->domain.type == IOMMU_DOMAIN_UNMANAGED) + flags |= PASID_FLAG_PAGE_SNOOP; +@@ -4626,6 +4629,8 @@ static int auxiliary_link_device(struct dmar_domain *domain, + + if (!sinfo) { + sinfo = kzalloc(sizeof(*sinfo), GFP_ATOMIC); ++ if (!sinfo) ++ return -ENOMEM; + sinfo->domain = domain; + sinfo->pdev = dev; + list_add(&sinfo->link_phys, &info->subdevices); +diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c +index 5093d317ff1a2..77fbe9908abdc 100644 +--- a/drivers/iommu/intel/pasid.c ++++ b/drivers/iommu/intel/pasid.c +@@ -663,7 +663,8 @@ int intel_pasid_setup_second_level(struct intel_iommu *iommu, + * Since it is a second level only translation setup, we should + * set SRE bit as well (addresses are expected to be GPAs). 
+ */ +- pasid_set_sre(pte); ++ if (pasid != PASID_RID2PASID) ++ pasid_set_sre(pte); + pasid_set_present(pte); + pasid_flush_caches(iommu, pte, pasid, did); + +diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c +index 2bfdd57348443..81dea4caf561f 100644 +--- a/drivers/iommu/virtio-iommu.c ++++ b/drivers/iommu/virtio-iommu.c +@@ -1138,6 +1138,7 @@ static struct virtio_device_id id_table[] = { + { VIRTIO_ID_IOMMU, VIRTIO_DEV_ANY_ID }, + { 0 }, + }; ++MODULE_DEVICE_TABLE(virtio, id_table); + + static struct virtio_driver virtio_iommu_drv = { + .driver.name = KBUILD_MODNAME, +diff --git a/drivers/isdn/hardware/mISDN/hfcsusb.c b/drivers/isdn/hardware/mISDN/hfcsusb.c +index 70061991915a5..cd5642cef01fd 100644 +--- a/drivers/isdn/hardware/mISDN/hfcsusb.c ++++ b/drivers/isdn/hardware/mISDN/hfcsusb.c +@@ -46,7 +46,7 @@ static void hfcsusb_start_endpoint(struct hfcsusb *hw, int channel); + static void hfcsusb_stop_endpoint(struct hfcsusb *hw, int channel); + static int hfcsusb_setup_bch(struct bchannel *bch, int protocol); + static void deactivate_bchannel(struct bchannel *bch); +-static void hfcsusb_ph_info(struct hfcsusb *hw); ++static int hfcsusb_ph_info(struct hfcsusb *hw); + + /* start next background transfer for control channel */ + static void +@@ -241,7 +241,7 @@ hfcusb_l2l1B(struct mISDNchannel *ch, struct sk_buff *skb) + * send full D/B channel status information + * as MPH_INFORMATION_IND + */ +-static void ++static int + hfcsusb_ph_info(struct hfcsusb *hw) + { + struct ph_info *phi; +@@ -250,7 +250,7 @@ hfcsusb_ph_info(struct hfcsusb *hw) + + phi = kzalloc(struct_size(phi, bch, dch->dev.nrbchan), GFP_ATOMIC); + if (!phi) +- return; ++ return -ENOMEM; + + phi->dch.ch.protocol = hw->protocol; + phi->dch.ch.Flags = dch->Flags; +@@ -263,6 +263,8 @@ hfcsusb_ph_info(struct hfcsusb *hw) + _queue_data(&dch->dev.D, MPH_INFORMATION_IND, MISDN_ID_ANY, + struct_size(phi, bch, dch->dev.nrbchan), phi, GFP_ATOMIC); + kfree(phi); ++ ++ return 0; + } + + /* +@@ -347,8 +349,7 @@ hfcusb_l2l1D(struct mISDNchannel *ch, struct sk_buff *skb) + ret = l1_event(dch->l1, hh->prim); + break; + case MPH_INFORMATION_REQ: +- hfcsusb_ph_info(hw); +- ret = 0; ++ ret = hfcsusb_ph_info(hw); + break; + } + +@@ -403,8 +404,7 @@ hfc_l1callback(struct dchannel *dch, u_int cmd) + hw->name, __func__, cmd); + return -1; + } +- hfcsusb_ph_info(hw); +- return 0; ++ return hfcsusb_ph_info(hw); + } + + static int +@@ -746,8 +746,7 @@ hfcsusb_setup_bch(struct bchannel *bch, int protocol) + handle_led(hw, (bch->nr == 1) ? 
LED_B1_OFF : + LED_B2_OFF); + } +- hfcsusb_ph_info(hw); +- return 0; ++ return hfcsusb_ph_info(hw); + } + + static void +diff --git a/drivers/isdn/hardware/mISDN/mISDNinfineon.c b/drivers/isdn/hardware/mISDN/mISDNinfineon.c +index a16c7a2a7f3d0..88d592bafdb02 100644 +--- a/drivers/isdn/hardware/mISDN/mISDNinfineon.c ++++ b/drivers/isdn/hardware/mISDN/mISDNinfineon.c +@@ -630,17 +630,19 @@ static void + release_io(struct inf_hw *hw) + { + if (hw->cfg.mode) { +- if (hw->cfg.p) { ++ if (hw->cfg.mode == AM_MEMIO) { + release_mem_region(hw->cfg.start, hw->cfg.size); +- iounmap(hw->cfg.p); ++ if (hw->cfg.p) ++ iounmap(hw->cfg.p); + } else + release_region(hw->cfg.start, hw->cfg.size); + hw->cfg.mode = AM_NONE; + } + if (hw->addr.mode) { +- if (hw->addr.p) { ++ if (hw->addr.mode == AM_MEMIO) { + release_mem_region(hw->addr.start, hw->addr.size); +- iounmap(hw->addr.p); ++ if (hw->addr.p) ++ iounmap(hw->addr.p); + } else + release_region(hw->addr.start, hw->addr.size); + hw->addr.mode = AM_NONE; +@@ -670,9 +672,12 @@ setup_io(struct inf_hw *hw) + (ulong)hw->cfg.start, (ulong)hw->cfg.size); + return err; + } +- if (hw->ci->cfg_mode == AM_MEMIO) +- hw->cfg.p = ioremap(hw->cfg.start, hw->cfg.size); + hw->cfg.mode = hw->ci->cfg_mode; ++ if (hw->ci->cfg_mode == AM_MEMIO) { ++ hw->cfg.p = ioremap(hw->cfg.start, hw->cfg.size); ++ if (!hw->cfg.p) ++ return -ENOMEM; ++ } + if (debug & DEBUG_HW) + pr_notice("%s: IO cfg %lx (%lu bytes) mode%d\n", + hw->name, (ulong)hw->cfg.start, +@@ -697,12 +702,12 @@ setup_io(struct inf_hw *hw) + (ulong)hw->addr.start, (ulong)hw->addr.size); + return err; + } ++ hw->addr.mode = hw->ci->addr_mode; + if (hw->ci->addr_mode == AM_MEMIO) { + hw->addr.p = ioremap(hw->addr.start, hw->addr.size); +- if (unlikely(!hw->addr.p)) ++ if (!hw->addr.p) + return -ENOMEM; + } +- hw->addr.mode = hw->ci->addr_mode; + if (debug & DEBUG_HW) + pr_notice("%s: IO addr %lx (%lu bytes) mode%d\n", + hw->name, (ulong)hw->addr.start, +diff --git a/drivers/md/dm-snap.c b/drivers/md/dm-snap.c +index 962f7df0691ef..41735a25d50aa 100644 +--- a/drivers/md/dm-snap.c ++++ b/drivers/md/dm-snap.c +@@ -854,7 +854,7 @@ static int dm_add_exception(void *context, chunk_t old, chunk_t new) + static uint32_t __minimum_chunk_size(struct origin *o) + { + struct dm_snapshot *snap; +- unsigned chunk_size = 0; ++ unsigned chunk_size = rounddown_pow_of_two(UINT_MAX); + + if (o) + list_for_each_entry(snap, &o->snapshots, list) +diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c +index 5d57a5bd171fa..c959e0234871b 100644 +--- a/drivers/md/raid5.c ++++ b/drivers/md/raid5.c +@@ -5310,8 +5310,6 @@ static int in_chunk_boundary(struct mddev *mddev, struct bio *bio) + unsigned int chunk_sectors; + unsigned int bio_sectors = bio_sectors(bio); + +- WARN_ON_ONCE(bio->bi_bdev->bd_partno); +- + chunk_sectors = min(conf->chunk_sectors, conf->prev_chunk_sectors); + return chunk_sectors >= + ((sector & (chunk_sectors - 1)) + bio_sectors); +diff --git a/drivers/media/dvb-frontends/sp8870.c b/drivers/media/dvb-frontends/sp8870.c +index 655db8272268d..9767159aeb9b2 100644 +--- a/drivers/media/dvb-frontends/sp8870.c ++++ b/drivers/media/dvb-frontends/sp8870.c +@@ -281,7 +281,7 @@ static int sp8870_set_frontend_parameters(struct dvb_frontend *fe) + + // read status reg in order to clear pending irqs + err = sp8870_readreg(state, 0x200); +- if (err) ++ if (err < 0) + return err; + + // system controller start +diff --git a/drivers/media/usb/gspca/cpia1.c b/drivers/media/usb/gspca/cpia1.c +index a4f7431486f31..d93d384286c16 100644 +--- 
a/drivers/media/usb/gspca/cpia1.c ++++ b/drivers/media/usb/gspca/cpia1.c +@@ -1424,7 +1424,6 @@ static int sd_config(struct gspca_dev *gspca_dev, + { + struct sd *sd = (struct sd *) gspca_dev; + struct cam *cam; +- int ret; + + sd->mainsFreq = FREQ_DEF == V4L2_CID_POWER_LINE_FREQUENCY_60HZ; + reset_camera_params(gspca_dev); +@@ -1436,10 +1435,7 @@ static int sd_config(struct gspca_dev *gspca_dev, + cam->cam_mode = mode; + cam->nmodes = ARRAY_SIZE(mode); + +- ret = goto_low_power(gspca_dev); +- if (ret) +- gspca_err(gspca_dev, "Cannot go to low power mode: %d\n", +- ret); ++ goto_low_power(gspca_dev); + /* Check the firmware version. */ + sd->params.version.firmwareVersion = 0; + get_version_information(gspca_dev); +diff --git a/drivers/media/usb/gspca/m5602/m5602_mt9m111.c b/drivers/media/usb/gspca/m5602/m5602_mt9m111.c +index bfa3b381d8a26..bf1af6ed9131e 100644 +--- a/drivers/media/usb/gspca/m5602/m5602_mt9m111.c ++++ b/drivers/media/usb/gspca/m5602/m5602_mt9m111.c +@@ -195,7 +195,7 @@ static const struct v4l2_ctrl_config mt9m111_greenbal_cfg = { + int mt9m111_probe(struct sd *sd) + { + u8 data[2] = {0x00, 0x00}; +- int i, rc = 0; ++ int i, err; + struct gspca_dev *gspca_dev = (struct gspca_dev *)sd; + + if (force_sensor) { +@@ -213,18 +213,18 @@ int mt9m111_probe(struct sd *sd) + /* Do the preinit */ + for (i = 0; i < ARRAY_SIZE(preinit_mt9m111); i++) { + if (preinit_mt9m111[i][0] == BRIDGE) { +- rc |= m5602_write_bridge(sd, +- preinit_mt9m111[i][1], +- preinit_mt9m111[i][2]); ++ err = m5602_write_bridge(sd, ++ preinit_mt9m111[i][1], ++ preinit_mt9m111[i][2]); + } else { + data[0] = preinit_mt9m111[i][2]; + data[1] = preinit_mt9m111[i][3]; +- rc |= m5602_write_sensor(sd, +- preinit_mt9m111[i][1], data, 2); ++ err = m5602_write_sensor(sd, ++ preinit_mt9m111[i][1], data, 2); + } ++ if (err < 0) ++ return err; + } +- if (rc < 0) +- return rc; + + if (m5602_read_sensor(sd, MT9M111_SC_CHIPVER, data, 2)) + return -ENODEV; +diff --git a/drivers/media/usb/gspca/m5602/m5602_po1030.c b/drivers/media/usb/gspca/m5602/m5602_po1030.c +index d680b777f097f..8fd99ceee4b67 100644 +--- a/drivers/media/usb/gspca/m5602/m5602_po1030.c ++++ b/drivers/media/usb/gspca/m5602/m5602_po1030.c +@@ -154,8 +154,8 @@ static const struct v4l2_ctrl_config po1030_greenbal_cfg = { + + int po1030_probe(struct sd *sd) + { +- int rc = 0; + u8 dev_id_h = 0, i; ++ int err; + struct gspca_dev *gspca_dev = (struct gspca_dev *)sd; + + if (force_sensor) { +@@ -174,14 +174,14 @@ int po1030_probe(struct sd *sd) + for (i = 0; i < ARRAY_SIZE(preinit_po1030); i++) { + u8 data = preinit_po1030[i][2]; + if (preinit_po1030[i][0] == SENSOR) +- rc |= m5602_write_sensor(sd, +- preinit_po1030[i][1], &data, 1); ++ err = m5602_write_sensor(sd, preinit_po1030[i][1], ++ &data, 1); + else +- rc |= m5602_write_bridge(sd, preinit_po1030[i][1], +- data); ++ err = m5602_write_bridge(sd, preinit_po1030[i][1], ++ data); ++ if (err < 0) ++ return err; + } +- if (rc < 0) +- return rc; + + if (m5602_read_sensor(sd, PO1030_DEVID_H, &dev_id_h, 1)) + return -ENODEV; +diff --git a/drivers/misc/kgdbts.c b/drivers/misc/kgdbts.c +index 2e081a58da6c5..49489153cd162 100644 +--- a/drivers/misc/kgdbts.c ++++ b/drivers/misc/kgdbts.c +@@ -100,8 +100,9 @@ + printk(KERN_INFO a); \ + } while (0) + #define v2printk(a...) do { \ +- if (verbose > 1) \ ++ if (verbose > 1) { \ + printk(KERN_INFO a); \ ++ } \ + touch_nmi_watchdog(); \ + } while (0) + #define eprintk(a...) 
do { \ +diff --git a/drivers/misc/lis3lv02d/lis3lv02d.h b/drivers/misc/lis3lv02d/lis3lv02d.h +index c394c0b08519a..7ac788fae1b86 100644 +--- a/drivers/misc/lis3lv02d/lis3lv02d.h ++++ b/drivers/misc/lis3lv02d/lis3lv02d.h +@@ -271,6 +271,7 @@ struct lis3lv02d { + int regs_size; + u8 *reg_cache; + bool regs_stored; ++ bool init_required; + u8 odr_mask; /* ODR bit mask */ + u8 whoami; /* indicates measurement precision */ + s16 (*read_data) (struct lis3lv02d *lis3, int reg); +diff --git a/drivers/misc/mei/interrupt.c b/drivers/misc/mei/interrupt.c +index a98f6b895af71..aab3ebfa9fc4d 100644 +--- a/drivers/misc/mei/interrupt.c ++++ b/drivers/misc/mei/interrupt.c +@@ -277,6 +277,9 @@ static int mei_cl_irq_read(struct mei_cl *cl, struct mei_cl_cb *cb, + return ret; + } + ++ pm_runtime_mark_last_busy(dev->dev); ++ pm_request_autosuspend(dev->dev); ++ + list_move_tail(&cb->list, &cl->rd_pending); + + return 0; + } +diff --git a/drivers/mtd/nand/raw/cs553x_nand.c b/drivers/mtd/nand/raw/cs553x_nand.c +index 6edf78c16fc8b..df40927e56788 100644 +--- a/drivers/mtd/nand/raw/cs553x_nand.c ++++ b/drivers/mtd/nand/raw/cs553x_nand.c +@@ -18,6 +18,7 @@ + #include + #include + #include ++#include <linux/mtd/nand-ecc-sw-hamming.h> + #include + #include + #include +@@ -240,6 +241,15 @@ static int cs_calculate_ecc(struct nand_chip *this, const u_char *dat, + return 0; + } + ++static int cs553x_ecc_correct(struct nand_chip *chip, ++ unsigned char *buf, ++ unsigned char *read_ecc, ++ unsigned char *calc_ecc) ++{ ++ return ecc_sw_hamming_correct(buf, read_ecc, calc_ecc, ++ chip->ecc.size, false); ++} ++ + static struct cs553x_nand_controller *controllers[4]; + + static int cs553x_attach_chip(struct nand_chip *chip) +@@ -251,7 +261,7 @@ static int cs553x_attach_chip(struct nand_chip *chip) + chip->ecc.bytes = 3; + chip->ecc.hwctl = cs_enable_hwecc; + chip->ecc.calculate = cs_calculate_ecc; +- chip->ecc.correct = rawnand_sw_hamming_correct; ++ chip->ecc.correct = cs553x_ecc_correct; + chip->ecc.strength = 1; + + return 0; +diff --git a/drivers/mtd/nand/raw/fsmc_nand.c b/drivers/mtd/nand/raw/fsmc_nand.c +index a24e2f57fa68a..99a4dde9825af 100644 +--- a/drivers/mtd/nand/raw/fsmc_nand.c ++++ b/drivers/mtd/nand/raw/fsmc_nand.c +@@ -25,6 +25,7 @@ + #include + #include + #include ++#include <linux/mtd/nand-ecc-sw-hamming.h> + #include + #include + #include +@@ -432,6 +433,15 @@ static int fsmc_read_hwecc_ecc1(struct nand_chip *chip, const u8 *data, + return 0; + } + ++static int fsmc_correct_ecc1(struct nand_chip *chip, ++ unsigned char *buf, ++ unsigned char *read_ecc, ++ unsigned char *calc_ecc) ++{ ++ return ecc_sw_hamming_correct(buf, read_ecc, calc_ecc, ++ chip->ecc.size, false); ++} ++ + /* Count the number of 0's in buff upto a max of max_bits */ + static int count_written_bits(u8 *buff, int size, int max_bits) + { +@@ -917,7 +927,7 @@ static int fsmc_nand_attach_chip(struct nand_chip *nand) + case NAND_ECC_ENGINE_TYPE_ON_HOST: + dev_info(host->dev, "Using 1-bit HW ECC scheme\n"); + nand->ecc.calculate = fsmc_read_hwecc_ecc1; +- nand->ecc.correct = rawnand_sw_hamming_correct; ++ nand->ecc.correct = fsmc_correct_ecc1; + nand->ecc.hwctl = fsmc_enable_hwecc; + nand->ecc.bytes = 3; + nand->ecc.strength = 1; +diff --git a/drivers/mtd/nand/raw/lpc32xx_slc.c b/drivers/mtd/nand/raw/lpc32xx_slc.c +index 6b7269cfb7d83..d7dfc6fd85ca7 100644 +--- a/drivers/mtd/nand/raw/lpc32xx_slc.c ++++ b/drivers/mtd/nand/raw/lpc32xx_slc.c +@@ -27,6 +27,7 @@ + #include + #include + #include ++#include <linux/mtd/nand-ecc-sw-hamming.h> + + #define LPC32XX_MODNAME "lpc32xx-nand" + +@@ -344,6 +345,18 @@ static int lpc32xx_nand_ecc_calculate(struct
nand_chip *chip, + return 0; + } + ++/* ++ * Corrects the data ++ */ ++static int lpc32xx_nand_ecc_correct(struct nand_chip *chip, ++ unsigned char *buf, ++ unsigned char *read_ecc, ++ unsigned char *calc_ecc) ++{ ++ return ecc_sw_hamming_correct(buf, read_ecc, calc_ecc, ++ chip->ecc.size, false); ++} ++ + /* + * Read a single byte from NAND device + */ +@@ -802,7 +815,7 @@ static int lpc32xx_nand_attach_chip(struct nand_chip *chip) + chip->ecc.write_oob = lpc32xx_nand_write_oob_syndrome; + chip->ecc.read_oob = lpc32xx_nand_read_oob_syndrome; + chip->ecc.calculate = lpc32xx_nand_ecc_calculate; +- chip->ecc.correct = rawnand_sw_hamming_correct; ++ chip->ecc.correct = lpc32xx_nand_ecc_correct; + chip->ecc.hwctl = lpc32xx_nand_ecc_enable; + + /* +diff --git a/drivers/mtd/nand/raw/ndfc.c b/drivers/mtd/nand/raw/ndfc.c +index 338d6b1a189eb..98d5a94c3a242 100644 +--- a/drivers/mtd/nand/raw/ndfc.c ++++ b/drivers/mtd/nand/raw/ndfc.c +@@ -22,6 +22,7 @@ + #include + #include + #include ++#include <linux/mtd/nand-ecc-sw-hamming.h> + #include + #include + #include +@@ -100,6 +101,15 @@ static int ndfc_calculate_ecc(struct nand_chip *chip, + return 0; + } + ++static int ndfc_correct_ecc(struct nand_chip *chip, ++ unsigned char *buf, ++ unsigned char *read_ecc, ++ unsigned char *calc_ecc) ++{ ++ return ecc_sw_hamming_correct(buf, read_ecc, calc_ecc, ++ chip->ecc.size, false); ++} ++ + /* + * Speedups for buffer read/write/verify + * +@@ -145,7 +155,7 @@ static int ndfc_chip_init(struct ndfc_controller *ndfc, + chip->controller = &ndfc->ndfc_control; + chip->legacy.read_buf = ndfc_read_buf; + chip->legacy.write_buf = ndfc_write_buf; +- chip->ecc.correct = rawnand_sw_hamming_correct; ++ chip->ecc.correct = ndfc_correct_ecc; + chip->ecc.hwctl = ndfc_enable_hwecc; + chip->ecc.calculate = ndfc_calculate_ecc; + chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST; +diff --git a/drivers/mtd/nand/raw/sharpsl.c b/drivers/mtd/nand/raw/sharpsl.c +index 5612ee628425b..2f1fe464e6637 100644 +--- a/drivers/mtd/nand/raw/sharpsl.c ++++ b/drivers/mtd/nand/raw/sharpsl.c +@@ -11,6 +11,7 @@ + #include + #include + #include ++#include <linux/mtd/nand-ecc-sw-hamming.h> + #include + #include + #include +@@ -96,6 +97,15 @@ static int sharpsl_nand_calculate_ecc(struct nand_chip *chip, + return readb(sharpsl->io + ECCCNTR) != 0; + } + ++static int sharpsl_nand_correct_ecc(struct nand_chip *chip, ++ unsigned char *buf, ++ unsigned char *read_ecc, ++ unsigned char *calc_ecc) ++{ ++ return ecc_sw_hamming_correct(buf, read_ecc, calc_ecc, ++ chip->ecc.size, false); ++} ++ + static int sharpsl_attach_chip(struct nand_chip *chip) + { + if (chip->ecc.engine_type != NAND_ECC_ENGINE_TYPE_ON_HOST) +@@ -106,7 +116,7 @@ static int sharpsl_attach_chip(struct nand_chip *chip) + chip->ecc.strength = 1; + chip->ecc.hwctl = sharpsl_nand_enable_hwecc; + chip->ecc.calculate = sharpsl_nand_calculate_ecc; +- chip->ecc.correct = rawnand_sw_hamming_correct; ++ chip->ecc.correct = sharpsl_nand_correct_ecc; + + return 0; + } +diff --git a/drivers/mtd/nand/raw/tmio_nand.c b/drivers/mtd/nand/raw/tmio_nand.c +index de8e919d0ebe6..6d93dd31969b2 100644 +--- a/drivers/mtd/nand/raw/tmio_nand.c ++++ b/drivers/mtd/nand/raw/tmio_nand.c +@@ -34,6 +34,7 @@ + #include + #include + #include ++#include <linux/mtd/nand-ecc-sw-hamming.h> + #include + #include + #include +@@ -292,11 +293,12 @@ static int tmio_nand_correct_data(struct nand_chip *chip, unsigned char *buf, + int r0, r1; + + /* assume ecc.size = 512 and ecc.bytes = 6 */ +- r0 = rawnand_sw_hamming_correct(chip, buf, read_ecc, calc_ecc); ++ r0 = ecc_sw_hamming_correct(buf, read_ecc, calc_ecc, ++ chip->ecc.size,
false); + if (r0 < 0) + return r0; +- r1 = rawnand_sw_hamming_correct(chip, buf + 256, read_ecc + 3, +- calc_ecc + 3); ++ r1 = ecc_sw_hamming_correct(buf + 256, read_ecc + 3, calc_ecc + 3, ++ chip->ecc.size, false); + if (r1 < 0) + return r1; + return r0 + r1; +diff --git a/drivers/mtd/nand/raw/txx9ndfmc.c b/drivers/mtd/nand/raw/txx9ndfmc.c +index 1a9449e53bf9d..b8894ac27073c 100644 +--- a/drivers/mtd/nand/raw/txx9ndfmc.c ++++ b/drivers/mtd/nand/raw/txx9ndfmc.c +@@ -13,6 +13,7 @@ + #include + #include + #include ++#include <linux/mtd/nand-ecc-sw-hamming.h> + #include + #include + #include +@@ -193,8 +194,8 @@ static int txx9ndfmc_correct_data(struct nand_chip *chip, unsigned char *buf, + int stat; + + for (eccsize = chip->ecc.size; eccsize > 0; eccsize -= 256) { +- stat = rawnand_sw_hamming_correct(chip, buf, read_ecc, +- calc_ecc); ++ stat = ecc_sw_hamming_correct(buf, read_ecc, calc_ecc, ++ chip->ecc.size, false); + if (stat < 0) + return stat; + corrected += stat; +diff --git a/drivers/net/caif/caif_serial.c b/drivers/net/caif/caif_serial.c +index 8215cd77301f5..9f30748da4ab9 100644 +--- a/drivers/net/caif/caif_serial.c ++++ b/drivers/net/caif/caif_serial.c +@@ -269,9 +269,6 @@ static netdev_tx_t caif_xmit(struct sk_buff *skb, struct net_device *dev) + { + struct ser_device *ser; + +- if (WARN_ON(!dev)) +- return -EINVAL; +- + ser = netdev_priv(dev); + + /* Send flow off once, on high water mark */ +diff --git a/drivers/net/dsa/bcm_sf2.c b/drivers/net/dsa/bcm_sf2.c +index 9c86cacc4a727..23c008e9653b5 100644 +--- a/drivers/net/dsa/bcm_sf2.c ++++ b/drivers/net/dsa/bcm_sf2.c +@@ -775,11 +775,9 @@ static void bcm_sf2_sw_mac_link_up(struct dsa_switch *ds, int port, + bcm_sf2_sw_mac_link_set(ds, port, interface, true); + + if (port != core_readl(priv, CORE_IMP0_PRT_ID)) { +- u32 reg_rgmii_ctrl; ++ u32 reg_rgmii_ctrl = 0; + u32 reg, offset; + +- reg_rgmii_ctrl = bcm_sf2_reg_rgmii_cntrl(priv, port); +- + if (priv->type == BCM4908_DEVICE_ID || + priv->type == BCM7445_DEVICE_ID) + offset = CORE_STS_OVERRIDE_GMIIP_PORT(port); +@@ -790,6 +788,7 @@ static void bcm_sf2_sw_mac_link_up(struct dsa_switch *ds, int port, + interface == PHY_INTERFACE_MODE_RGMII_TXID || + interface == PHY_INTERFACE_MODE_MII || + interface == PHY_INTERFACE_MODE_REVMII) { ++ reg_rgmii_ctrl = bcm_sf2_reg_rgmii_cntrl(priv, port); + reg = reg_readl(priv, reg_rgmii_ctrl); + reg &= ~(RX_PAUSE_EN | TX_PAUSE_EN); + +diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c +index 9871d7cff93a8..2e3a482e29201 100644 +--- a/drivers/net/dsa/mt7530.c ++++ b/drivers/net/dsa/mt7530.c +@@ -1214,14 +1214,6 @@ mt7530_port_set_vlan_aware(struct dsa_switch *ds, int port) + { + struct mt7530_priv *priv = ds->priv; + +- /* The real fabric path would be decided on the membership in the +- * entry of VLAN table. PCR_MATRIX set up here with ALL_MEMBERS +- * means potential VLAN can be consisting of certain subset of all +- * ports. +- */ +- mt7530_rmw(priv, MT7530_PCR_P(port), +- PCR_MATRIX_MASK, PCR_MATRIX(MT7530_ALL_MEMBERS)); +- + /* Trapped into security mode allows packet forwarding through VLAN + * table lookup. CPU port is set to fallback mode to let untagged + * frames pass through.
+diff --git a/drivers/net/dsa/sja1105/sja1105_dynamic_config.c b/drivers/net/dsa/sja1105/sja1105_dynamic_config.c +index b777d3f375736..12cd04b568030 100644 +--- a/drivers/net/dsa/sja1105/sja1105_dynamic_config.c ++++ b/drivers/net/dsa/sja1105/sja1105_dynamic_config.c +@@ -167,9 +167,10 @@ enum sja1105_hostcmd { + SJA1105_HOSTCMD_INVALIDATE = 4, + }; + ++/* Command and entry overlap */ + static void +-sja1105_vl_lookup_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd, +- enum packing_op op) ++sja1105et_vl_lookup_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd, ++ enum packing_op op) + { + const int size = SJA1105_SIZE_DYN_CMD; + +@@ -179,6 +180,20 @@ sja1105_vl_lookup_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd, + sja1105_packing(buf, &cmd->index, 9, 0, size, op); + } + ++/* Command and entry are separate */ ++static void ++sja1105pqrs_vl_lookup_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd, ++ enum packing_op op) ++{ ++ u8 *p = buf + SJA1105_SIZE_VL_LOOKUP_ENTRY; ++ const int size = SJA1105_SIZE_DYN_CMD; ++ ++ sja1105_packing(p, &cmd->valid, 31, 31, size, op); ++ sja1105_packing(p, &cmd->errors, 30, 30, size, op); ++ sja1105_packing(p, &cmd->rdwrset, 29, 29, size, op); ++ sja1105_packing(p, &cmd->index, 9, 0, size, op); ++} ++ + static size_t sja1105et_vl_lookup_entry_packing(void *buf, void *entry_ptr, + enum packing_op op) + { +@@ -641,7 +656,7 @@ static size_t sja1105pqrs_cbs_entry_packing(void *buf, void *entry_ptr, + const struct sja1105_dynamic_table_ops sja1105et_dyn_ops[BLK_IDX_MAX_DYN] = { + [BLK_IDX_VL_LOOKUP] = { + .entry_packing = sja1105et_vl_lookup_entry_packing, +- .cmd_packing = sja1105_vl_lookup_cmd_packing, ++ .cmd_packing = sja1105et_vl_lookup_cmd_packing, + .access = OP_WRITE, + .max_entry_count = SJA1105_MAX_VL_LOOKUP_COUNT, + .packed_size = SJA1105ET_SIZE_VL_LOOKUP_DYN_CMD, +@@ -725,7 +740,7 @@ const struct sja1105_dynamic_table_ops sja1105et_dyn_ops[BLK_IDX_MAX_DYN] = { + const struct sja1105_dynamic_table_ops sja1105pqrs_dyn_ops[BLK_IDX_MAX_DYN] = { + [BLK_IDX_VL_LOOKUP] = { + .entry_packing = sja1105_vl_lookup_entry_packing, +- .cmd_packing = sja1105_vl_lookup_cmd_packing, ++ .cmd_packing = sja1105pqrs_vl_lookup_cmd_packing, + .access = (OP_READ | OP_WRITE), + .max_entry_count = SJA1105_MAX_VL_LOOKUP_COUNT, + .packed_size = SJA1105PQRS_SIZE_VL_LOOKUP_DYN_CMD, +diff --git a/drivers/net/dsa/sja1105/sja1105_main.c b/drivers/net/dsa/sja1105/sja1105_main.c +index 51ea104c63bbb..926544440f02a 100644 +--- a/drivers/net/dsa/sja1105/sja1105_main.c ++++ b/drivers/net/dsa/sja1105/sja1105_main.c +@@ -26,6 +26,7 @@ + #include "sja1105_tas.h" + + #define SJA1105_UNKNOWN_MULTICAST 0x010000000000ull ++#define SJA1105_DEFAULT_VLAN (VLAN_N_VID - 1) + + static const struct dsa_switch_ops sja1105_switch_ops; + +@@ -207,6 +208,7 @@ static int sja1105_init_mii_settings(struct sja1105_private *priv, + default: + dev_err(dev, "Unsupported PHY mode %s!\n", + phy_modes(ports[i].phy_mode)); ++ return -EINVAL; + } + + /* Even though the SerDes port is able to drive SGMII autoneg +@@ -321,6 +323,13 @@ static int sja1105_init_l2_lookup_params(struct sja1105_private *priv) + return 0; + } + ++/* Set up a default VLAN for untagged traffic injected from the CPU ++ * using management routes (e.g. STP, PTP) as opposed to tag_8021q. ++ * All DT-defined ports are members of this VLAN, and there are no ++ * restrictions on forwarding (since the CPU selects the destination). 
++ * Frames from this VLAN will always be transmitted as untagged, and ++ * neither the bridge nor the 8021q module can create this VLAN ID. ++ */ + static int sja1105_init_static_vlan(struct sja1105_private *priv) + { + struct sja1105_table *table; +@@ -330,17 +339,13 @@ static int sja1105_init_static_vlan(struct sja1105_private *priv) + .vmemb_port = 0, + .vlan_bc = 0, + .tag_port = 0, +- .vlanid = 1, ++ .vlanid = SJA1105_DEFAULT_VLAN, + }; + struct dsa_switch *ds = priv->ds; + int port; + + table = &priv->static_config.tables[BLK_IDX_VLAN_LOOKUP]; + +- /* The static VLAN table will only contain the initial pvid of 1. +- * All other VLANs are to be configured through dynamic entries, +- * and kept in the static configuration table as backing memory. +- */ + if (table->entry_count) { + kfree(table->entries); + table->entry_count = 0; +@@ -353,9 +358,6 @@ static int sja1105_init_static_vlan(struct sja1105_private *priv) + + table->entry_count = 1; + +- /* VLAN 1: all DT-defined ports are members; no restrictions on +- * forwarding; always transmit as untagged. +- */ + for (port = 0; port < ds->num_ports; port++) { + struct sja1105_bridge_vlan *v; + +@@ -366,15 +368,12 @@ static int sja1105_init_static_vlan(struct sja1105_private *priv) + pvid.vlan_bc |= BIT(port); + pvid.tag_port &= ~BIT(port); + +- /* Let traffic that don't need dsa_8021q (e.g. STP, PTP) be +- * transmitted as untagged. +- */ + v = kzalloc(sizeof(*v), GFP_KERNEL); + if (!v) + return -ENOMEM; + + v->port = port; +- v->vid = 1; ++ v->vid = SJA1105_DEFAULT_VLAN; + v->untagged = true; + if (dsa_is_cpu_port(ds, port)) + v->pvid = true; +@@ -2817,11 +2816,22 @@ static int sja1105_vlan_add_one(struct dsa_switch *ds, int port, u16 vid, + bool pvid = flags & BRIDGE_VLAN_INFO_PVID; + struct sja1105_bridge_vlan *v; + +- list_for_each_entry(v, vlan_list, list) +- if (v->port == port && v->vid == vid && +- v->untagged == untagged && v->pvid == pvid) ++ list_for_each_entry(v, vlan_list, list) { ++ if (v->port == port && v->vid == vid) { + /* Already added */ +- return 0; ++ if (v->untagged == untagged && v->pvid == pvid) ++ /* Nothing changed */ ++ return 0; ++ ++ /* It's the same VLAN, but some of the flags changed ++ * and the user did not bother to delete it first. ++ * Update it and trigger sja1105_build_vlan_table. ++ */ ++ v->untagged = untagged; ++ v->pvid = pvid; ++ return 1; ++ } ++ } + + v = kzalloc(sizeof(*v), GFP_KERNEL); + if (!v) { +@@ -2976,13 +2986,13 @@ static int sja1105_setup(struct dsa_switch *ds) + rc = sja1105_static_config_load(priv, ports); + if (rc < 0) { + dev_err(ds->dev, "Failed to load static config: %d\n", rc); +- return rc; ++ goto out_ptp_clock_unregister; + } + /* Configure the CGU (PHY link modes and speeds) */ + rc = sja1105_clocking_setup(priv); + if (rc < 0) { + dev_err(ds->dev, "Failed to configure MII clocking: %d\n", rc); +- return rc; ++ goto out_static_config_free; + } + /* On SJA1105, VLAN filtering per se is always enabled in hardware.
+ * The only thing we can do to disable it is lie about what the 802.1Q +@@ -3003,7 +3013,7 @@ static int sja1105_setup(struct dsa_switch *ds) + + rc = sja1105_devlink_setup(ds); + if (rc < 0) +- return rc; ++ goto out_static_config_free; + + /* The DSA/switchdev model brings up switch ports in standalone mode by + * default, and that means vlan_filtering is 0 since they're not under +@@ -3012,6 +3022,17 @@ static int sja1105_setup(struct dsa_switch *ds) + rtnl_lock(); + rc = sja1105_setup_8021q_tagging(ds, true); + rtnl_unlock(); ++ if (rc) ++ goto out_devlink_teardown; ++ ++ return 0; ++ ++out_devlink_teardown: ++ sja1105_devlink_teardown(ds); ++out_ptp_clock_unregister: ++ sja1105_ptp_clock_unregister(ds); ++out_static_config_free: ++ sja1105_static_config_free(&priv->static_config); + + return rc; + } +@@ -3662,8 +3683,10 @@ static int sja1105_probe(struct spi_device *spi) + priv->cbs = devm_kcalloc(dev, priv->info->num_cbs_shapers, + sizeof(struct sja1105_cbs_entry), + GFP_KERNEL); +- if (!priv->cbs) +- return -ENOMEM; ++ if (!priv->cbs) { ++ rc = -ENOMEM; ++ goto out_unregister_switch; ++ } + } + + /* Connections between dsa_port and sja1105_port */ +@@ -3688,7 +3711,7 @@ static int sja1105_probe(struct spi_device *spi) + dev_err(ds->dev, + "failed to create deferred xmit thread: %d\n", + rc); +- goto out; ++ goto out_destroy_workers; + } + skb_queue_head_init(&sp->xmit_queue); + sp->xmit_tpid = ETH_P_SJA1105; +@@ -3698,7 +3721,8 @@ static int sja1105_probe(struct spi_device *spi) + } + + return 0; +-out: ++ ++out_destroy_workers: + while (port-- > 0) { + struct sja1105_port *sp = &priv->ports[port]; + +@@ -3707,6 +3731,10 @@ out: + + kthread_destroy_worker(sp->xmit_worker); + } ++ ++out_unregister_switch: ++ dsa_unregister_switch(ds); ++ + return rc; + } + +diff --git a/drivers/net/ethernet/broadcom/bnx2.c b/drivers/net/ethernet/broadcom/bnx2.c +index 3e8a179f39db4..633b103896535 100644 +--- a/drivers/net/ethernet/broadcom/bnx2.c ++++ b/drivers/net/ethernet/broadcom/bnx2.c +@@ -8247,9 +8247,9 @@ bnx2_init_board(struct pci_dev *pdev, struct net_device *dev) + BNX2_WR(bp, PCI_COMMAND, reg); + } else if ((BNX2_CHIP_ID(bp) == BNX2_CHIP_ID_5706_A1) && + !(bp->flags & BNX2_FLAG_PCIX)) { +- + dev_err(&pdev->dev, + "5706 A1 can only be used in a PCIX bus, aborting\n"); ++ rc = -EPERM; + goto err_out_unmap; + } + +diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c +index cf4249d59383c..027997c711aba 100644 +--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c ++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c +@@ -282,7 +282,8 @@ static bool bnxt_vf_pciid(enum board_idx idx) + { + return (idx == NETXTREME_C_VF || idx == NETXTREME_E_VF || + idx == NETXTREME_S_VF || idx == NETXTREME_C_VF_HV || +- idx == NETXTREME_E_VF_HV || idx == NETXTREME_E_P5_VF); ++ idx == NETXTREME_E_VF_HV || idx == NETXTREME_E_P5_VF || ++ idx == NETXTREME_E_P5_VF_HV); + } + + #define DB_CP_REARM_FLAGS (DB_KEY_CP | DB_IDX_VALID) +@@ -6919,17 +6920,10 @@ ctx_err: + static void bnxt_hwrm_set_pg_attr(struct bnxt_ring_mem_info *rmem, u8 *pg_attr, + __le64 *pg_dir) + { +- u8 pg_size = 0; +- + if (!rmem->nr_pages) + return; + +- if (BNXT_PAGE_SHIFT == 13) +- pg_size = 1 << 4; +- else if (BNXT_PAGE_SIZE == 16) +- pg_size = 2 << 4; +- +- *pg_attr = pg_size; ++ BNXT_SET_CTX_PAGE_ATTR(*pg_attr); + if (rmem->depth >= 1) { + if (rmem->depth == 2) + *pg_attr |= 2; +diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h +index 
1259e68cba2a7..f6bf69b02de94 100644 +--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h ++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h +@@ -1456,6 +1456,16 @@ struct bnxt_ctx_pg_info { + + #define BNXT_BACKING_STORE_CFG_LEGACY_LEN 256 + ++#define BNXT_SET_CTX_PAGE_ATTR(attr) \ ++do { \ ++ if (BNXT_PAGE_SIZE == 0x2000) \ ++ attr = FUNC_BACKING_STORE_CFG_REQ_SRQ_PG_SIZE_PG_8K; \ ++ else if (BNXT_PAGE_SIZE == 0x10000) \ ++ attr = FUNC_BACKING_STORE_CFG_REQ_QPC_PG_SIZE_PG_64K; \ ++ else \ ++ attr = FUNC_BACKING_STORE_CFG_REQ_QPC_PG_SIZE_PG_4K; \ ++} while (0) ++ + struct bnxt_ctx_mem_info { + u32 qp_max_entries; + u16 qp_min_qp1_entries; +diff --git a/drivers/net/ethernet/cavium/liquidio/lio_main.c b/drivers/net/ethernet/cavium/liquidio/lio_main.c +index 7c5af4beedc6d..591229b96257e 100644 +--- a/drivers/net/ethernet/cavium/liquidio/lio_main.c ++++ b/drivers/net/ethernet/cavium/liquidio/lio_main.c +@@ -1153,7 +1153,7 @@ static void octeon_destroy_resources(struct octeon_device *oct) + * @lio: per-network private data + * @start_stop: whether to start or stop + */ +-static void send_rx_ctrl_cmd(struct lio *lio, int start_stop) ++static int send_rx_ctrl_cmd(struct lio *lio, int start_stop) + { + struct octeon_soft_command *sc; + union octnet_cmd *ncmd; +@@ -1161,15 +1161,15 @@ static void send_rx_ctrl_cmd(struct lio *lio, int start_stop) + int retval; + + if (oct->props[lio->ifidx].rx_on == start_stop) +- return; ++ return 0; + + sc = (struct octeon_soft_command *) + octeon_alloc_soft_command(oct, OCTNET_CMD_SIZE, + 16, 0); + if (!sc) { + netif_info(lio, rx_err, lio->netdev, +- "Failed to allocate octeon_soft_command\n"); +- return; ++ "Failed to allocate octeon_soft_command struct\n"); ++ return -ENOMEM; + } + + ncmd = (union octnet_cmd *)sc->virtdptr; +@@ -1192,18 +1192,19 @@ static void send_rx_ctrl_cmd(struct lio *lio, int start_stop) + if (retval == IQ_SEND_FAILED) { + netif_info(lio, rx_err, lio->netdev, "Failed to send RX Control message\n"); + octeon_free_soft_command(oct, sc); +- return; + } else { + /* Sleep on a wait queue till the cond flag indicates that the + * response arrived or timed-out. 
+ */ + retval = wait_for_sc_completion_timeout(oct, sc, 0); + if (retval) +- return; ++ return retval; + + oct->props[lio->ifidx].rx_on = start_stop; + WRITE_ONCE(sc->caller_is_done, true); + } ++ ++ return retval; + } + + /** +@@ -1778,6 +1779,7 @@ static int liquidio_open(struct net_device *netdev) + struct octeon_device_priv *oct_priv = + (struct octeon_device_priv *)oct->priv; + struct napi_struct *napi, *n; ++ int ret = 0; + + if (oct->props[lio->ifidx].napi_enabled == 0) { + tasklet_disable(&oct_priv->droq_tasklet); +@@ -1813,7 +1815,9 @@ static int liquidio_open(struct net_device *netdev) + netif_info(lio, ifup, lio->netdev, "Interface Open, ready for traffic\n"); + + /* tell Octeon to start forwarding packets to host */ +- send_rx_ctrl_cmd(lio, 1); ++ ret = send_rx_ctrl_cmd(lio, 1); ++ if (ret) ++ return ret; + + /* start periodical statistics fetch */ + INIT_DELAYED_WORK(&lio->stats_wk.work, lio_fetch_stats); +@@ -1824,7 +1828,7 @@ static int liquidio_open(struct net_device *netdev) + dev_info(&oct->pci_dev->dev, "%s interface is opened\n", + netdev->name); + +- return 0; ++ return ret; + } + + /** +@@ -1838,6 +1842,7 @@ static int liquidio_stop(struct net_device *netdev) + struct octeon_device_priv *oct_priv = + (struct octeon_device_priv *)oct->priv; + struct napi_struct *napi, *n; ++ int ret = 0; + + ifstate_reset(lio, LIO_IFSTATE_RUNNING); + +@@ -1854,7 +1859,9 @@ static int liquidio_stop(struct net_device *netdev) + lio->link_changes++; + + /* Tell Octeon that nic interface is down. */ +- send_rx_ctrl_cmd(lio, 0); ++ ret = send_rx_ctrl_cmd(lio, 0); ++ if (ret) ++ return ret; + + if (OCTEON_CN23XX_PF(oct)) { + if (!oct->msix_on) +@@ -1889,7 +1896,7 @@ static int liquidio_stop(struct net_device *netdev) + + dev_info(&oct->pci_dev->dev, "%s interface is stopped\n", netdev->name); + +- return 0; ++ return ret; + } + + /** +diff --git a/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c b/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c +index 516f166ceff8c..ffddb3126a323 100644 +--- a/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c ++++ b/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c +@@ -595,7 +595,7 @@ static void octeon_destroy_resources(struct octeon_device *oct) + * @lio: per-network private data + * @start_stop: whether to start or stop + */ +-static void send_rx_ctrl_cmd(struct lio *lio, int start_stop) ++static int send_rx_ctrl_cmd(struct lio *lio, int start_stop) + { + struct octeon_device *oct = (struct octeon_device *)lio->oct_dev; + struct octeon_soft_command *sc; +@@ -603,11 +603,16 @@ static void send_rx_ctrl_cmd(struct lio *lio, int start_stop) + int retval; + + if (oct->props[lio->ifidx].rx_on == start_stop) +- return; ++ return 0; + + sc = (struct octeon_soft_command *) + octeon_alloc_soft_command(oct, OCTNET_CMD_SIZE, + 16, 0); ++ if (!sc) { ++ netif_info(lio, rx_err, lio->netdev, ++ "Failed to allocate octeon_soft_command struct\n"); ++ return -ENOMEM; ++ } + + ncmd = (union octnet_cmd *)sc->virtdptr; + +@@ -635,11 +640,13 @@ static void send_rx_ctrl_cmd(struct lio *lio, int start_stop) + */ + retval = wait_for_sc_completion_timeout(oct, sc, 0); + if (retval) +- return; ++ return retval; + + oct->props[lio->ifidx].rx_on = start_stop; + WRITE_ONCE(sc->caller_is_done, true); + } ++ ++ return retval; + } + + /** +@@ -906,6 +913,7 @@ static int liquidio_open(struct net_device *netdev) + struct octeon_device_priv *oct_priv = + (struct octeon_device_priv *)oct->priv; + struct napi_struct *napi, *n; ++ int ret = 0; + + if 
(!oct->props[lio->ifidx].napi_enabled) { + tasklet_disable(&oct_priv->droq_tasklet); +@@ -932,11 +940,13 @@ static int liquidio_open(struct net_device *netdev) + (LIQUIDIO_NDEV_STATS_POLL_TIME_MS)); + + /* tell Octeon to start forwarding packets to host */ +- send_rx_ctrl_cmd(lio, 1); ++ ret = send_rx_ctrl_cmd(lio, 1); ++ if (ret) ++ return ret; + + dev_info(&oct->pci_dev->dev, "%s interface is opened\n", netdev->name); + +- return 0; ++ return ret; + } + + /** +@@ -950,9 +960,12 @@ static int liquidio_stop(struct net_device *netdev) + struct octeon_device_priv *oct_priv = + (struct octeon_device_priv *)oct->priv; + struct napi_struct *napi, *n; ++ int ret = 0; + + /* tell Octeon to stop forwarding packets to host */ +- send_rx_ctrl_cmd(lio, 0); ++ ret = send_rx_ctrl_cmd(lio, 0); ++ if (ret) ++ return ret; + + netif_info(lio, ifdown, lio->netdev, "Stopping interface!\n"); + /* Inform that netif carrier is down */ +@@ -986,7 +999,7 @@ static int liquidio_stop(struct net_device *netdev) + + dev_info(&oct->pci_dev->dev, "%s interface is stopped\n", netdev->name); + +- return 0; ++ return ret; + } + + /** +diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c +index bde8494215c41..e664e05b9f026 100644 +--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c ++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c +@@ -1042,7 +1042,7 @@ void clear_all_filters(struct adapter *adapter) + cxgb4_del_filter(dev, f->tid, &f->fs); + } + +- sb = t4_read_reg(adapter, LE_DB_SRVR_START_INDEX_A); ++ sb = adapter->tids.stid_base; + for (i = 0; i < sb; i++) { + f = (struct filter_entry *)adapter->tids.tid_tab[i]; + +diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c +index 6264bc66a4fc9..421bd9b88028d 100644 +--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c ++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c +@@ -6480,9 +6480,9 @@ static void cxgb4_ktls_dev_del(struct net_device *netdev, + + adap->uld[CXGB4_ULD_KTLS].tlsdev_ops->tls_dev_del(netdev, tls_ctx, + direction); +- cxgb4_set_ktls_feature(adap, FW_PARAMS_PARAM_DEV_KTLS_HW_DISABLE); + + out_unlock: ++ cxgb4_set_ktls_feature(adap, FW_PARAMS_PARAM_DEV_KTLS_HW_DISABLE); + mutex_unlock(&uld_mutex); + } + +diff --git a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c +index a3f5b80888e5a..82b0aa3d0d887 100644 +--- a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c ++++ b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c +@@ -60,6 +60,7 @@ static int chcr_get_nfrags_to_send(struct sk_buff *skb, u32 start, u32 len) + } + + static int chcr_init_tcb_fields(struct chcr_ktls_info *tx_info); ++static void clear_conn_resources(struct chcr_ktls_info *tx_info); + /* + * chcr_ktls_save_keys: calculate and save crypto keys. + * @tx_info - driver specific tls info. 
+@@ -365,10 +366,14 @@ static void chcr_ktls_dev_del(struct net_device *netdev, + chcr_get_ktls_tx_context(tls_ctx); + struct chcr_ktls_info *tx_info = tx_ctx->chcr_info; + struct ch_ktls_port_stats_debug *port_stats; ++ struct chcr_ktls_uld_ctx *u_ctx; + + if (!tx_info) + return; + ++ u_ctx = tx_info->adap->uld[CXGB4_ULD_KTLS].handle; ++ if (u_ctx && u_ctx->detach) ++ return; + /* clear l2t entry */ + if (tx_info->l2te) + cxgb4_l2t_release(tx_info->l2te); +@@ -385,6 +390,8 @@ static void chcr_ktls_dev_del(struct net_device *netdev, + if (tx_info->tid != -1) { + cxgb4_remove_tid(&tx_info->adap->tids, tx_info->tx_chan, + tx_info->tid, tx_info->ip_family); ++ ++ xa_erase(&u_ctx->tid_list, tx_info->tid); + } + + port_stats = &tx_info->adap->ch_ktls_stats.ktls_port[tx_info->port_id]; +@@ -412,6 +419,7 @@ static int chcr_ktls_dev_add(struct net_device *netdev, struct sock *sk, + struct tls_context *tls_ctx = tls_get_ctx(sk); + struct ch_ktls_port_stats_debug *port_stats; + struct chcr_ktls_ofld_ctx_tx *tx_ctx; ++ struct chcr_ktls_uld_ctx *u_ctx; + struct chcr_ktls_info *tx_info; + struct dst_entry *dst; + struct adapter *adap; +@@ -426,6 +434,7 @@ static int chcr_ktls_dev_add(struct net_device *netdev, struct sock *sk, + adap = pi->adapter; + port_stats = &adap->ch_ktls_stats.ktls_port[pi->port_id]; + atomic64_inc(&port_stats->ktls_tx_connection_open); ++ u_ctx = adap->uld[CXGB4_ULD_KTLS].handle; + + if (direction == TLS_OFFLOAD_CTX_DIR_RX) { + pr_err("not expecting for RX direction\n"); +@@ -435,6 +444,9 @@ static int chcr_ktls_dev_add(struct net_device *netdev, struct sock *sk, + if (tx_ctx->chcr_info) + goto out; + ++ if (u_ctx && u_ctx->detach) ++ goto out; ++ + tx_info = kvzalloc(sizeof(*tx_info), GFP_KERNEL); + if (!tx_info) + goto out; +@@ -570,6 +582,8 @@ free_tid: + cxgb4_remove_tid(&tx_info->adap->tids, tx_info->tx_chan, + tx_info->tid, tx_info->ip_family); + ++ xa_erase(&u_ctx->tid_list, tx_info->tid); ++ + put_module: + /* release module refcount */ + module_put(THIS_MODULE); +@@ -634,8 +648,12 @@ static int chcr_ktls_cpl_act_open_rpl(struct adapter *adap, + { + const struct cpl_act_open_rpl *p = (void *)input; + struct chcr_ktls_info *tx_info = NULL; ++ struct chcr_ktls_ofld_ctx_tx *tx_ctx; ++ struct chcr_ktls_uld_ctx *u_ctx; + unsigned int atid, tid, status; ++ struct tls_context *tls_ctx; + struct tid_info *t; ++ int ret = 0; + + tid = GET_TID(p); + status = AOPEN_STATUS_G(ntohl(p->atid_status)); +@@ -667,14 +685,29 @@ static int chcr_ktls_cpl_act_open_rpl(struct adapter *adap, + if (!status) { + tx_info->tid = tid; + cxgb4_insert_tid(t, tx_info, tx_info->tid, tx_info->ip_family); ++ /* Adding tid */ ++ tls_ctx = tls_get_ctx(tx_info->sk); ++ tx_ctx = chcr_get_ktls_tx_context(tls_ctx); ++ u_ctx = adap->uld[CXGB4_ULD_KTLS].handle; ++ if (u_ctx) { ++ ret = xa_insert_bh(&u_ctx->tid_list, tid, tx_ctx, ++ GFP_NOWAIT); ++ if (ret < 0) { ++ pr_err("%s: Failed to allocate tid XA entry = %d\n", ++ __func__, tx_info->tid); ++ tx_info->open_state = CH_KTLS_OPEN_FAILURE; ++ goto out; ++ } ++ } + tx_info->open_state = CH_KTLS_OPEN_SUCCESS; + } else { + tx_info->open_state = CH_KTLS_OPEN_FAILURE; + } ++out: + spin_unlock(&tx_info->lock); + + complete(&tx_info->completion); +- return 0; ++ return ret; + } + + /* +@@ -2092,6 +2125,8 @@ static void *chcr_ktls_uld_add(const struct cxgb4_lld_info *lldi) + goto out; + } + u_ctx->lldi = *lldi; ++ u_ctx->detach = false; ++ xa_init_flags(&u_ctx->tid_list, XA_FLAGS_LOCK_BH); + out: + return u_ctx; + } +@@ -2125,6 +2160,45 @@ static int 
chcr_ktls_uld_rx_handler(void *handle, const __be64 *rsp, + return 0; + } + ++static void clear_conn_resources(struct chcr_ktls_info *tx_info) ++{ ++ /* clear l2t entry */ ++ if (tx_info->l2te) ++ cxgb4_l2t_release(tx_info->l2te); ++ ++#if IS_ENABLED(CONFIG_IPV6) ++ /* clear clip entry */ ++ if (tx_info->ip_family == AF_INET6) ++ cxgb4_clip_release(tx_info->netdev, (const u32 *) ++ &tx_info->sk->sk_v6_rcv_saddr, ++ 1); ++#endif ++ ++ /* clear tid */ ++ if (tx_info->tid != -1) ++ cxgb4_remove_tid(&tx_info->adap->tids, tx_info->tx_chan, ++ tx_info->tid, tx_info->ip_family); ++} ++ ++static void ch_ktls_reset_all_conn(struct chcr_ktls_uld_ctx *u_ctx) ++{ ++ struct ch_ktls_port_stats_debug *port_stats; ++ struct chcr_ktls_ofld_ctx_tx *tx_ctx; ++ struct chcr_ktls_info *tx_info; ++ unsigned long index; ++ ++ xa_for_each(&u_ctx->tid_list, index, tx_ctx) { ++ tx_info = tx_ctx->chcr_info; ++ clear_conn_resources(tx_info); ++ port_stats = &tx_info->adap->ch_ktls_stats.ktls_port[tx_info->port_id]; ++ atomic64_inc(&port_stats->ktls_tx_connection_close); ++ kvfree(tx_info); ++ tx_ctx->chcr_info = NULL; ++ /* release module refcount */ ++ module_put(THIS_MODULE); ++ } ++} ++ + static int chcr_ktls_uld_state_change(void *handle, enum cxgb4_state new_state) + { + struct chcr_ktls_uld_ctx *u_ctx = handle; +@@ -2141,7 +2215,10 @@ static int chcr_ktls_uld_state_change(void *handle, enum cxgb4_state new_state) + case CXGB4_STATE_DETACH: + pr_info("%s: Down\n", pci_name(u_ctx->lldi.pdev)); + mutex_lock(&dev_mutex); ++ u_ctx->detach = true; + list_del(&u_ctx->entry); ++ ch_ktls_reset_all_conn(u_ctx); ++ xa_destroy(&u_ctx->tid_list); + mutex_unlock(&dev_mutex); + break; + default: +@@ -2180,6 +2257,7 @@ static void __exit chcr_ktls_exit(void) + adap = pci_get_drvdata(u_ctx->lldi.pdev); + memset(&adap->ch_ktls_stats, 0, sizeof(adap->ch_ktls_stats)); + list_del(&u_ctx->entry); ++ xa_destroy(&u_ctx->tid_list); + kfree(u_ctx); + } + mutex_unlock(&dev_mutex); +diff --git a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.h b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.h +index 18b3b1f024156..10572dc55365a 100644 +--- a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.h ++++ b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.h +@@ -75,6 +75,8 @@ struct chcr_ktls_ofld_ctx_tx { + struct chcr_ktls_uld_ctx { + struct list_head entry; + struct cxgb4_lld_info lldi; ++ struct xarray tid_list; ++ bool detach; + }; + + static inline struct chcr_ktls_ofld_ctx_tx * +diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c +index 188d871f6b8cd..c320cc8ca68d6 100644 +--- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c ++++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c +@@ -1564,8 +1564,10 @@ found_ok_skb: + cerr = put_cmsg(msg, SOL_TLS, TLS_GET_RECORD_TYPE, + sizeof(thdr->type), &thdr->type); + +- if (cerr && thdr->type != TLS_RECORD_TYPE_DATA) +- return -EIO; ++ if (cerr && thdr->type != TLS_RECORD_TYPE_DATA) { ++ copied = -EIO; ++ break; ++ } + /* don't send tls header, skip copy */ + goto skip_copy; + } +diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c +index 70aea9c274fed..89393fbc726f6 100644 +--- a/drivers/net/ethernet/freescale/fec_main.c ++++ b/drivers/net/ethernet/freescale/fec_main.c +@@ -3282,7 +3282,9 @@ static int fec_enet_init(struct net_device *ndev) + return ret; + } + +- fec_enet_alloc_queue(ndev); ++ ret = 
fec_enet_alloc_queue(ndev); ++ if (ret) ++ return ret; + + bd_size = (fep->total_tx_ring_size + fep->total_rx_ring_size) * dsize; + +@@ -3290,7 +3292,8 @@ static int fec_enet_init(struct net_device *ndev) + cbd_base = dmam_alloc_coherent(&fep->pdev->dev, bd_size, &bd_dma, + GFP_KERNEL); + if (!cbd_base) { +- return -ENOMEM; ++ ret = -ENOMEM; ++ goto free_queue_mem; + } + + /* Get the Ethernet address */ +@@ -3368,6 +3371,10 @@ static int fec_enet_init(struct net_device *ndev) + fec_enet_update_ethtool_stats(ndev); + + return 0; ++ ++free_queue_mem: ++ fec_enet_free_queue(ndev); ++ return ret; + } + + #ifdef CONFIG_OF +diff --git a/drivers/net/ethernet/fujitsu/fmvj18x_cs.c b/drivers/net/ethernet/fujitsu/fmvj18x_cs.c +index a7b7a4aace791..b0c0504950d81 100644 +--- a/drivers/net/ethernet/fujitsu/fmvj18x_cs.c ++++ b/drivers/net/ethernet/fujitsu/fmvj18x_cs.c +@@ -548,8 +548,8 @@ static int fmvj18x_get_hwinfo(struct pcmcia_device *link, u_char *node_id) + + base = ioremap(link->resource[2]->start, resource_size(link->resource[2])); + if (!base) { +- pcmcia_release_window(link, link->resource[2]); +- return -ENOMEM; ++ pcmcia_release_window(link, link->resource[2]); ++ return -1; + } + + pcmcia_map_mem_page(link, link->resource[2], 0); +diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c +index 7302498c6df36..bbc423e931223 100644 +--- a/drivers/net/ethernet/google/gve/gve_main.c ++++ b/drivers/net/ethernet/google/gve/gve_main.c +@@ -180,7 +180,7 @@ static int gve_napi_poll(struct napi_struct *napi, int budget) + /* Double check we have no extra work. + * Ensure unmask synchronizes with checking for work. + */ +- dma_rmb(); ++ mb(); + if (block->tx) + reschedule |= gve_tx_poll(block, -1); + if (block->rx) +@@ -220,6 +220,7 @@ static int gve_alloc_notify_blocks(struct gve_priv *priv) + int vecs_left = new_num_ntfy_blks % 2; + + priv->num_ntfy_blks = new_num_ntfy_blks; ++ priv->mgmt_msix_idx = priv->num_ntfy_blks; + priv->tx_cfg.max_queues = min_t(int, priv->tx_cfg.max_queues, + vecs_per_type); + priv->rx_cfg.max_queues = min_t(int, priv->rx_cfg.max_queues, +@@ -300,20 +301,22 @@ static void gve_free_notify_blocks(struct gve_priv *priv) + { + int i; + +- /* Free the irqs */ +- for (i = 0; i < priv->num_ntfy_blks; i++) { +- struct gve_notify_block *block = &priv->ntfy_blocks[i]; +- int msix_idx = i; ++ if (priv->msix_vectors) { ++ /* Free the irqs */ ++ for (i = 0; i < priv->num_ntfy_blks; i++) { ++ struct gve_notify_block *block = &priv->ntfy_blocks[i]; ++ int msix_idx = i; + +- irq_set_affinity_hint(priv->msix_vectors[msix_idx].vector, +- NULL); +- free_irq(priv->msix_vectors[msix_idx].vector, block); ++ irq_set_affinity_hint(priv->msix_vectors[msix_idx].vector, ++ NULL); ++ free_irq(priv->msix_vectors[msix_idx].vector, block); ++ } ++ free_irq(priv->msix_vectors[priv->mgmt_msix_idx].vector, priv); + } + dma_free_coherent(&priv->pdev->dev, + priv->num_ntfy_blks * sizeof(*priv->ntfy_blocks), + priv->ntfy_blocks, priv->ntfy_block_bus); + priv->ntfy_blocks = NULL; +- free_irq(priv->msix_vectors[priv->mgmt_msix_idx].vector, priv); + pci_disable_msix(priv->pdev); + kvfree(priv->msix_vectors); + priv->msix_vectors = NULL; +diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c +index 6938f3a939d64..3e04a3973d680 100644 +--- a/drivers/net/ethernet/google/gve/gve_tx.c ++++ b/drivers/net/ethernet/google/gve/gve_tx.c +@@ -212,10 +212,11 @@ static int gve_tx_alloc_ring(struct gve_priv *priv, int idx) + tx->dev = 
&priv->pdev->dev; + if (!tx->raw_addressing) { + tx->tx_fifo.qpl = gve_assign_tx_qpl(priv); +- ++ if (!tx->tx_fifo.qpl) ++ goto abort_with_desc; + /* map Tx FIFO */ + if (gve_tx_fifo_init(priv, &tx->tx_fifo)) +- goto abort_with_desc; ++ goto abort_with_qpl; + } + + tx->q_resources = +@@ -236,6 +237,9 @@ static int gve_tx_alloc_ring(struct gve_priv *priv, int idx) + abort_with_fifo: + if (!tx->raw_addressing) + gve_tx_fifo_release(priv, &tx->tx_fifo); ++abort_with_qpl: ++ if (!tx->raw_addressing) ++ gve_unassign_qpl(priv, tx->tx_fifo.qpl->id); + abort_with_desc: + dma_free_coherent(hdev, bytes, tx->desc, tx->bus); + tx->desc = NULL; +@@ -589,7 +593,7 @@ netdev_tx_t gve_tx(struct sk_buff *skb, struct net_device *dev) + struct gve_tx_ring *tx; + int nsegs; + +- WARN(skb_get_queue_mapping(skb) > priv->tx_cfg.num_queues, ++ WARN(skb_get_queue_mapping(skb) >= priv->tx_cfg.num_queues, + "skb queue index out of range"); + tx = &priv->tx[skb_get_queue_mapping(skb)]; + if (unlikely(gve_maybe_stop_tx(tx, skb))) { +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c +index 0f70158c2551d..49ea1e4a08a99 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c +@@ -265,22 +265,17 @@ static void hns3_vector_coalesce_init(struct hns3_enet_tqp_vector *tqp_vector, + struct hnae3_ae_dev *ae_dev = pci_get_drvdata(priv->ae_handle->pdev); + struct hns3_enet_coalesce *tx_coal = &tqp_vector->tx_group.coal; + struct hns3_enet_coalesce *rx_coal = &tqp_vector->rx_group.coal; ++ struct hns3_enet_coalesce *ptx_coal = &priv->tx_coal; ++ struct hns3_enet_coalesce *prx_coal = &priv->rx_coal; + +- /* initialize the configuration for interrupt coalescing. +- * 1. GL (Interrupt Gap Limiter) +- * 2. RL (Interrupt Rate Limiter) +- * 3. QL (Interrupt Quantity Limiter) +- * +- * Default: enable interrupt coalescing self-adaptive and GL +- */ +- tx_coal->adapt_enable = 1; +- rx_coal->adapt_enable = 1; ++ tx_coal->adapt_enable = ptx_coal->adapt_enable; ++ rx_coal->adapt_enable = prx_coal->adapt_enable; + +- tx_coal->int_gl = HNS3_INT_GL_50K; +- rx_coal->int_gl = HNS3_INT_GL_50K; ++ tx_coal->int_gl = ptx_coal->int_gl; ++ rx_coal->int_gl = prx_coal->int_gl; + +- rx_coal->flow_level = HNS3_FLOW_LOW; +- tx_coal->flow_level = HNS3_FLOW_LOW; ++ rx_coal->flow_level = prx_coal->flow_level; ++ tx_coal->flow_level = ptx_coal->flow_level; + + /* device version above V3(include V3), GL can configure 1us + * unit, so uses 1us unit. +@@ -295,8 +290,8 @@ static void hns3_vector_coalesce_init(struct hns3_enet_tqp_vector *tqp_vector, + rx_coal->ql_enable = 1; + tx_coal->int_ql_max = ae_dev->dev_specs.int_ql_max; + rx_coal->int_ql_max = ae_dev->dev_specs.int_ql_max; +- tx_coal->int_ql = HNS3_INT_QL_DEFAULT_CFG; +- rx_coal->int_ql = HNS3_INT_QL_DEFAULT_CFG; ++ tx_coal->int_ql = ptx_coal->int_ql; ++ rx_coal->int_ql = prx_coal->int_ql; + } + } + +@@ -845,8 +840,6 @@ static bool hns3_tunnel_csum_bug(struct sk_buff *skb) + l4.udp->dest == htons(4790)))) + return false; + +- skb_checksum_help(skb); +- + return true; + } + +@@ -924,8 +917,7 @@ static int hns3_set_l2l3l4(struct sk_buff *skb, u8 ol4_proto, + /* the stack computes the IP header already, + * driver calculate l4 checksum when not TSO. 
+ */ +- skb_checksum_help(skb); +- return 0; ++ return skb_checksum_help(skb); + } + + hns3_set_outer_l2l3l4(skb, ol4_proto, ol_type_vlan_len_msec); +@@ -970,7 +962,7 @@ static int hns3_set_l2l3l4(struct sk_buff *skb, u8 ol4_proto, + break; + case IPPROTO_UDP: + if (hns3_tunnel_csum_bug(skb)) +- break; ++ return skb_checksum_help(skb); + + hns3_set_field(*type_cs_vlan_tso, HNS3_TXD_L4CS_B, 1); + hns3_set_field(*type_cs_vlan_tso, HNS3_TXD_L4T_S, +@@ -995,8 +987,7 @@ static int hns3_set_l2l3l4(struct sk_buff *skb, u8 ol4_proto, + /* the stack computes the IP header already, + * driver calculate l4 checksum when not TSO. + */ +- skb_checksum_help(skb); +- return 0; ++ return skb_checksum_help(skb); + } + + return 0; +@@ -3805,6 +3796,34 @@ map_ring_fail: + return ret; + } + ++static void hns3_nic_init_coal_cfg(struct hns3_nic_priv *priv) ++{ ++ struct hnae3_ae_dev *ae_dev = pci_get_drvdata(priv->ae_handle->pdev); ++ struct hns3_enet_coalesce *tx_coal = &priv->tx_coal; ++ struct hns3_enet_coalesce *rx_coal = &priv->rx_coal; ++ ++ /* initialize the configuration for interrupt coalescing. ++ * 1. GL (Interrupt Gap Limiter) ++ * 2. RL (Interrupt Rate Limiter) ++ * 3. QL (Interrupt Quantity Limiter) ++ * ++ * Default: enable interrupt coalescing self-adaptive and GL ++ */ ++ tx_coal->adapt_enable = 1; ++ rx_coal->adapt_enable = 1; ++ ++ tx_coal->int_gl = HNS3_INT_GL_50K; ++ rx_coal->int_gl = HNS3_INT_GL_50K; ++ ++ rx_coal->flow_level = HNS3_FLOW_LOW; ++ tx_coal->flow_level = HNS3_FLOW_LOW; ++ ++ if (ae_dev->dev_specs.int_ql_max) { ++ tx_coal->int_ql = HNS3_INT_QL_DEFAULT_CFG; ++ rx_coal->int_ql = HNS3_INT_QL_DEFAULT_CFG; ++ } ++} ++ + static int hns3_nic_alloc_vector_data(struct hns3_nic_priv *priv) + { + struct hnae3_handle *h = priv->ae_handle; +@@ -4265,6 +4284,8 @@ static int hns3_client_init(struct hnae3_handle *handle) + goto out_get_ring_cfg; + } + ++ hns3_nic_init_coal_cfg(priv); ++ + ret = hns3_nic_alloc_vector_data(priv); + if (ret) { + ret = -ENOMEM; +@@ -4287,12 +4308,6 @@ static int hns3_client_init(struct hnae3_handle *handle) + if (ret) + goto out_init_phy; + +- ret = register_netdev(netdev); +- if (ret) { +- dev_err(priv->dev, "probe register netdev fail!\n"); +- goto out_reg_netdev_fail; +- } +- + /* the device can work without cpu rmap, only aRFS needs it */ + ret = hns3_set_rx_cpu_rmap(netdev); + if (ret) +@@ -4325,17 +4340,23 @@ static int hns3_client_init(struct hnae3_handle *handle) + if (ae_dev->dev_version >= HNAE3_DEVICE_VERSION_V3) + set_bit(HNAE3_PFLAG_LIMIT_PROMISC, &handle->supported_pflags); + ++ ret = register_netdev(netdev); ++ if (ret) { ++ dev_err(priv->dev, "probe register netdev fail!\n"); ++ goto out_reg_netdev_fail; ++ } ++ + if (netif_msg_drv(handle)) + hns3_info_show(priv); + + return ret; + ++out_reg_netdev_fail: ++ hns3_dbg_uninit(handle); + out_client_start: + hns3_free_rx_cpu_rmap(netdev); + hns3_nic_uninit_irq(priv); + out_init_irq_fail: +- unregister_netdev(netdev); +-out_reg_netdev_fail: + hns3_uninit_phy(netdev); + out_init_phy: + hns3_uninit_all_ring(priv); +@@ -4543,31 +4564,6 @@ int hns3_nic_reset_all_ring(struct hnae3_handle *h) + return 0; + } + +-static void hns3_store_coal(struct hns3_nic_priv *priv) +-{ +- /* ethtool only support setting and querying one coal +- * configuration for now, so save the vector 0' coal +- * configuration here in order to restore it. 
+- */ +- memcpy(&priv->tx_coal, &priv->tqp_vector[0].tx_group.coal, +- sizeof(struct hns3_enet_coalesce)); +- memcpy(&priv->rx_coal, &priv->tqp_vector[0].rx_group.coal, +- sizeof(struct hns3_enet_coalesce)); +-} +- +-static void hns3_restore_coal(struct hns3_nic_priv *priv) +-{ +- u16 vector_num = priv->vector_num; +- int i; +- +- for (i = 0; i < vector_num; i++) { +- memcpy(&priv->tqp_vector[i].tx_group.coal, &priv->tx_coal, +- sizeof(struct hns3_enet_coalesce)); +- memcpy(&priv->tqp_vector[i].rx_group.coal, &priv->rx_coal, +- sizeof(struct hns3_enet_coalesce)); +- } +-} +- + static int hns3_reset_notify_down_enet(struct hnae3_handle *handle) + { + struct hnae3_knic_private_info *kinfo = &handle->kinfo; +@@ -4626,8 +4622,6 @@ static int hns3_reset_notify_init_enet(struct hnae3_handle *handle) + if (ret) + goto err_put_ring; + +- hns3_restore_coal(priv); +- + ret = hns3_nic_init_vector_data(priv); + if (ret) + goto err_dealloc_vector; +@@ -4693,8 +4687,6 @@ static int hns3_reset_notify_uninit_enet(struct hnae3_handle *handle) + + hns3_nic_uninit_vector_data(priv); + +- hns3_store_coal(priv); +- + hns3_nic_dealloc_vector_data(priv); + + hns3_uninit_all_ring(priv); +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c +index d20f2e2460178..5b051df3429f8 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c +@@ -1119,50 +1119,32 @@ static void hns3_get_channels(struct net_device *netdev, + h->ae_algo->ops->get_channels(h, ch); + } + +-static int hns3_get_coalesce_per_queue(struct net_device *netdev, u32 queue, +- struct ethtool_coalesce *cmd) ++static int hns3_get_coalesce(struct net_device *netdev, ++ struct ethtool_coalesce *cmd) + { +- struct hns3_enet_tqp_vector *tx_vector, *rx_vector; + struct hns3_nic_priv *priv = netdev_priv(netdev); ++ struct hns3_enet_coalesce *tx_coal = &priv->tx_coal; ++ struct hns3_enet_coalesce *rx_coal = &priv->rx_coal; + struct hnae3_handle *h = priv->ae_handle; +- u16 queue_num = h->kinfo.num_tqps; + + if (hns3_nic_resetting(netdev)) + return -EBUSY; + +- if (queue >= queue_num) { +- netdev_err(netdev, +- "Invalid queue value %u! 
Queue max id=%u\n", +- queue, queue_num - 1); +- return -EINVAL; +- } +- +- tx_vector = priv->ring[queue].tqp_vector; +- rx_vector = priv->ring[queue_num + queue].tqp_vector; ++ cmd->use_adaptive_tx_coalesce = tx_coal->adapt_enable; ++ cmd->use_adaptive_rx_coalesce = rx_coal->adapt_enable; + +- cmd->use_adaptive_tx_coalesce = +- tx_vector->tx_group.coal.adapt_enable; +- cmd->use_adaptive_rx_coalesce = +- rx_vector->rx_group.coal.adapt_enable; +- +- cmd->tx_coalesce_usecs = tx_vector->tx_group.coal.int_gl; +- cmd->rx_coalesce_usecs = rx_vector->rx_group.coal.int_gl; ++ cmd->tx_coalesce_usecs = tx_coal->int_gl; ++ cmd->rx_coalesce_usecs = rx_coal->int_gl; + + cmd->tx_coalesce_usecs_high = h->kinfo.int_rl_setting; + cmd->rx_coalesce_usecs_high = h->kinfo.int_rl_setting; + +- cmd->tx_max_coalesced_frames = tx_vector->tx_group.coal.int_ql; +- cmd->rx_max_coalesced_frames = rx_vector->rx_group.coal.int_ql; ++ cmd->tx_max_coalesced_frames = tx_coal->int_ql; ++ cmd->rx_max_coalesced_frames = rx_coal->int_ql; + + return 0; + } + +-static int hns3_get_coalesce(struct net_device *netdev, +- struct ethtool_coalesce *cmd) +-{ +- return hns3_get_coalesce_per_queue(netdev, 0, cmd); +-} +- + static int hns3_check_gl_coalesce_para(struct net_device *netdev, + struct ethtool_coalesce *cmd) + { +@@ -1277,19 +1259,7 @@ static int hns3_check_coalesce_para(struct net_device *netdev, + return ret; + } + +- ret = hns3_check_ql_coalesce_param(netdev, cmd); +- if (ret) +- return ret; +- +- if (cmd->use_adaptive_tx_coalesce == 1 || +- cmd->use_adaptive_rx_coalesce == 1) { +- netdev_info(netdev, +- "adaptive-tx=%u and adaptive-rx=%u, tx_usecs or rx_usecs will changed dynamically.\n", +- cmd->use_adaptive_tx_coalesce, +- cmd->use_adaptive_rx_coalesce); +- } +- +- return 0; ++ return hns3_check_ql_coalesce_param(netdev, cmd); + } + + static void hns3_set_coalesce_per_queue(struct net_device *netdev, +@@ -1335,6 +1305,9 @@ static int hns3_set_coalesce(struct net_device *netdev, + struct ethtool_coalesce *cmd) + { + struct hnae3_handle *h = hns3_get_handle(netdev); ++ struct hns3_nic_priv *priv = netdev_priv(netdev); ++ struct hns3_enet_coalesce *tx_coal = &priv->tx_coal; ++ struct hns3_enet_coalesce *rx_coal = &priv->rx_coal; + u16 queue_num = h->kinfo.num_tqps; + int ret; + int i; +@@ -1349,6 +1322,15 @@ static int hns3_set_coalesce(struct net_device *netdev, + h->kinfo.int_rl_setting = + hns3_rl_round_down(cmd->rx_coalesce_usecs_high); + ++ tx_coal->adapt_enable = cmd->use_adaptive_tx_coalesce; ++ rx_coal->adapt_enable = cmd->use_adaptive_rx_coalesce; ++ ++ tx_coal->int_gl = cmd->tx_coalesce_usecs; ++ rx_coal->int_gl = cmd->rx_coalesce_usecs; ++ ++ tx_coal->int_ql = cmd->tx_max_coalesced_frames; ++ rx_coal->int_ql = cmd->rx_max_coalesced_frames; ++ + for (i = 0; i < queue_num; i++) + hns3_set_coalesce_per_queue(netdev, cmd, i); + +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c +index c3bb16b1f0600..9265a00de18ed 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c +@@ -694,7 +694,6 @@ void hclge_mbx_handler(struct hclge_dev *hdev) + unsigned int flag; + int ret = 0; + +- memset(&resp_msg, 0, sizeof(resp_msg)); + /* handle all the mailbox requests in the queue */ + while (!hclge_cmd_crq_empty(&hdev->hw)) { + if (test_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state)) { +@@ -722,6 +721,9 @@ void hclge_mbx_handler(struct hclge_dev *hdev) + + trace_hclge_pf_mbx_get(hdev, 
req); + ++ /* clear the resp_msg before processing every mailbox message */ ++ memset(&resp_msg, 0, sizeof(resp_msg)); ++ + switch (req->msg.code) { + case HCLGE_MBX_MAP_RING_TO_VECTOR: + ret = hclge_map_unmap_ring_to_vf_vector(vport, true, +diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c +index 988db46bff0ee..214a38de3f415 100644 +--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c ++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c +@@ -467,12 +467,16 @@ static int ixgbe_set_vf_vlan(struct ixgbe_adapter *adapter, int add, int vid, + return err; + } + +-static s32 ixgbe_set_vf_lpe(struct ixgbe_adapter *adapter, u32 *msgbuf, u32 vf) ++static int ixgbe_set_vf_lpe(struct ixgbe_adapter *adapter, u32 max_frame, u32 vf) + { + struct ixgbe_hw *hw = &adapter->hw; +- int max_frame = msgbuf[1]; + u32 max_frs; + ++ if (max_frame < ETH_MIN_MTU || max_frame > IXGBE_MAX_JUMBO_FRAME_SIZE) { ++ e_err(drv, "VF max_frame %d out of range\n", max_frame); ++ return -EINVAL; ++ } ++ + /* + * For 82599EB we have to keep all PFs and VFs operating with + * the same max_frame value in order to avoid sending an oversize +@@ -533,12 +537,6 @@ static s32 ixgbe_set_vf_lpe(struct ixgbe_adapter *adapter, u32 *msgbuf, u32 vf) + } + } + +- /* MTU < 68 is an error and causes problems on some kernels */ +- if (max_frame > IXGBE_MAX_JUMBO_FRAME_SIZE) { +- e_err(drv, "VF max_frame %d out of range\n", max_frame); +- return -EINVAL; +- } +- + /* pull current max frame size from hardware */ + max_frs = IXGBE_READ_REG(hw, IXGBE_MAXFRS); + max_frs &= IXGBE_MHADD_MFS_MASK; +@@ -1249,7 +1247,7 @@ static int ixgbe_rcv_msg_from_vf(struct ixgbe_adapter *adapter, u32 vf) + retval = ixgbe_set_vf_vlan_msg(adapter, msgbuf, vf); + break; + case IXGBE_VF_SET_LPE: +- retval = ixgbe_set_vf_lpe(adapter, msgbuf, vf); ++ retval = ixgbe_set_vf_lpe(adapter, msgbuf[1], vf); + break; + case IXGBE_VF_SET_MACVLAN: + retval = ixgbe_set_vf_macvlan_msg(adapter, msgbuf, vf); +diff --git a/drivers/net/ethernet/lantiq_xrx200.c b/drivers/net/ethernet/lantiq_xrx200.c +index 51ed8a54d3801..135ba5b6ae980 100644 +--- a/drivers/net/ethernet/lantiq_xrx200.c ++++ b/drivers/net/ethernet/lantiq_xrx200.c +@@ -154,6 +154,7 @@ static int xrx200_close(struct net_device *net_dev) + + static int xrx200_alloc_skb(struct xrx200_chan *ch) + { ++ dma_addr_t mapping; + int ret = 0; + + ch->skb[ch->dma.desc] = netdev_alloc_skb_ip_align(ch->priv->net_dev, +@@ -163,16 +164,17 @@ static int xrx200_alloc_skb(struct xrx200_chan *ch) + goto skip; + } + +- ch->dma.desc_base[ch->dma.desc].addr = dma_map_single(ch->priv->dev, +- ch->skb[ch->dma.desc]->data, XRX200_DMA_DATA_LEN, +- DMA_FROM_DEVICE); +- if (unlikely(dma_mapping_error(ch->priv->dev, +- ch->dma.desc_base[ch->dma.desc].addr))) { ++ mapping = dma_map_single(ch->priv->dev, ch->skb[ch->dma.desc]->data, ++ XRX200_DMA_DATA_LEN, DMA_FROM_DEVICE); ++ if (unlikely(dma_mapping_error(ch->priv->dev, mapping))) { + dev_kfree_skb_any(ch->skb[ch->dma.desc]); + ret = -ENOMEM; + goto skip; + } + ++ ch->dma.desc_base[ch->dma.desc].addr = mapping; ++ /* Make sure the address is written before we give it to HW */ ++ wmb(); + skip: + ch->dma.desc_base[ch->dma.desc].ctl = + LTQ_DMA_OWN | LTQ_DMA_RX_OFFSET(NET_IP_ALIGN) | +@@ -196,6 +198,8 @@ static int xrx200_hw_receive(struct xrx200_chan *ch) + ch->dma.desc %= LTQ_DESC_NUM; + + if (ret) { ++ ch->skb[ch->dma.desc] = skb; ++ net_dev->stats.rx_dropped++; + netdev_err(net_dev, "failed to allocate new rx buffer\n"); + return ret; + } 
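The lantiq_xrx200 hunks above close a window in the RX refill path: the DMA address was written straight into a live descriptor before dma_mapping_error() had been checked, and the address write was not ordered against handing the descriptor back to hardware. A minimal sketch of the corrected pattern follows; the my_chan, my_desc and MY_* names are hypothetical stand-ins for the driver's real types, so this illustrates the idiom rather than the driver's actual code:

static int my_rx_refill(struct my_chan *ch, struct my_desc *desc)
{
	struct sk_buff *skb;
	dma_addr_t mapping;

	skb = netdev_alloc_skb_ip_align(ch->net_dev, MY_BUF_LEN);
	if (!skb)
		return -ENOMEM;

	mapping = dma_map_single(ch->dev, skb->data, MY_BUF_LEN,
				 DMA_FROM_DEVICE);
	if (unlikely(dma_mapping_error(ch->dev, mapping))) {
		/* The descriptor still holds its previous, valid address,
		 * so only the freshly allocated skb has to be thrown away.
		 */
		dev_kfree_skb_any(skb);
		return -ENOMEM;
	}

	desc->addr = mapping;
	/* Make sure the address is visible before the descriptor is
	 * handed back to the hardware.
	 */
	wmb();
	desc->ctl = MY_DESC_OWN;

	return 0;
}

Publishing the mapping into the descriptor only after the error check, and fencing it with wmb() before flipping ownership, means a failed dma_map_single() can never leave the hardware chasing a stale or invalid address; the companion xrx200_hw_receive() hunk additionally keeps the old skb in place and counts the event in rx_dropped when refill fails.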
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h +index 8edba5ea90f03..4a61c90003b5e 100644 +--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h ++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h +@@ -993,6 +993,14 @@ enum mvpp22_ptp_packet_format { + + #define MVPP2_DESC_DMA_MASK DMA_BIT_MASK(40) + ++/* Buffer header info bits */ ++#define MVPP2_B_HDR_INFO_MC_ID_MASK 0xfff ++#define MVPP2_B_HDR_INFO_MC_ID(info) ((info) & MVPP2_B_HDR_INFO_MC_ID_MASK) ++#define MVPP2_B_HDR_INFO_LAST_OFFS 12 ++#define MVPP2_B_HDR_INFO_LAST_MASK BIT(12) ++#define MVPP2_B_HDR_INFO_IS_LAST(info) \ ++ (((info) & MVPP2_B_HDR_INFO_LAST_MASK) >> MVPP2_B_HDR_INFO_LAST_OFFS) ++ + struct mvpp2_tai; + + /* Definitions */ +@@ -1002,6 +1010,20 @@ struct mvpp2_rss_table { + u32 indir[MVPP22_RSS_TABLE_ENTRIES]; + }; + ++struct mvpp2_buff_hdr { ++ __le32 next_phys_addr; ++ __le32 next_dma_addr; ++ __le16 byte_count; ++ __le16 info; ++ __le16 reserved1; /* bm_qset (for future use, BM) */ ++ u8 next_phys_addr_high; ++ u8 next_dma_addr_high; ++ __le16 reserved2; ++ __le16 reserved3; ++ __le16 reserved4; ++ __le16 reserved5; ++}; ++ + /* Shared Packet Processor resources */ + struct mvpp2 { + /* Shared registers' base addresses */ +diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c +index 1767c60056c5a..6c81e4f175ac6 100644 +--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c ++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c +@@ -3840,6 +3840,35 @@ mvpp2_run_xdp(struct mvpp2_port *port, struct mvpp2_rx_queue *rxq, + return ret; + } + ++static void mvpp2_buff_hdr_pool_put(struct mvpp2_port *port, struct mvpp2_rx_desc *rx_desc, ++ int pool, u32 rx_status) ++{ ++ phys_addr_t phys_addr, phys_addr_next; ++ dma_addr_t dma_addr, dma_addr_next; ++ struct mvpp2_buff_hdr *buff_hdr; ++ ++ phys_addr = mvpp2_rxdesc_dma_addr_get(port, rx_desc); ++ dma_addr = mvpp2_rxdesc_cookie_get(port, rx_desc); ++ ++ do { ++ buff_hdr = (struct mvpp2_buff_hdr *)phys_to_virt(phys_addr); ++ ++ phys_addr_next = le32_to_cpu(buff_hdr->next_phys_addr); ++ dma_addr_next = le32_to_cpu(buff_hdr->next_dma_addr); ++ ++ if (port->priv->hw_version >= MVPP22) { ++ phys_addr_next |= ((u64)buff_hdr->next_phys_addr_high << 32); ++ dma_addr_next |= ((u64)buff_hdr->next_dma_addr_high << 32); ++ } ++ ++ mvpp2_bm_pool_put(port, pool, dma_addr, phys_addr); ++ ++ phys_addr = phys_addr_next; ++ dma_addr = dma_addr_next; ++ ++ } while (!MVPP2_B_HDR_INFO_IS_LAST(le16_to_cpu(buff_hdr->info))); ++} ++ + /* Main rx processing */ + static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi, + int rx_todo, struct mvpp2_rx_queue *rxq) +@@ -3886,14 +3915,6 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi, + MVPP2_RXD_BM_POOL_ID_OFFS; + bm_pool = &port->priv->bm_pools[pool]; + +- /* In case of an error, release the requested buffer pointer +- * to the Buffer Manager. This request process is controlled +- * by the hardware, and the information about the buffer is +- * comprised by the RX descriptor. 
+- */ +- if (rx_status & MVPP2_RXD_ERR_SUMMARY) +- goto err_drop_frame; +- + if (port->priv->percpu_pools) { + pp = port->priv->page_pool[pool]; + dma_dir = page_pool_get_dma_dir(pp); +@@ -3905,6 +3926,18 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi, + rx_bytes + MVPP2_MH_SIZE, + dma_dir); + ++ /* Buffer header not supported */ ++ if (rx_status & MVPP2_RXD_BUF_HDR) ++ goto err_drop_frame; ++ ++ /* In case of an error, release the requested buffer pointer ++ * to the Buffer Manager. This request process is controlled ++ * by the hardware, and the information about the buffer is ++ * comprised by the RX descriptor. ++ */ ++ if (rx_status & MVPP2_RXD_ERR_SUMMARY) ++ goto err_drop_frame; ++ + /* Prefetch header */ + prefetch(data); + +@@ -3986,7 +4019,10 @@ err_drop_frame: + dev->stats.rx_errors++; + mvpp2_rx_error(port, rx_desc); + /* Return the buffer to the pool */ +- mvpp2_bm_pool_put(port, pool, dma_addr, phys_addr); ++ if (rx_status & MVPP2_RXD_BUF_HDR) ++ mvpp2_buff_hdr_pool_put(port, rx_desc, pool, rx_status); ++ else ++ mvpp2_bm_pool_put(port, pool, dma_addr, phys_addr); + } + + rcu_read_unlock(); +diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c +index f4962a97a0757..9d9a2e438acfc 100644 +--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c ++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c +@@ -786,6 +786,10 @@ static int otx2_set_rxfh_context(struct net_device *dev, const u32 *indir, + if (hfunc != ETH_RSS_HASH_NO_CHANGE && hfunc != ETH_RSS_HASH_TOP) + return -EOPNOTSUPP; + ++ if (*rss_context != ETH_RXFH_CONTEXT_ALLOC && ++ *rss_context >= MAX_RSS_GROUPS) ++ return -EINVAL; ++ + rss = &pfvf->hw.rss_info; + + if (!rss->enable) { +diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c +index bcd5e7ae8482f..10e30efa775d4 100644 +--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c ++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c +@@ -679,32 +679,53 @@ static int mtk_set_mac_address(struct net_device *dev, void *p) + void mtk_stats_update_mac(struct mtk_mac *mac) + { + struct mtk_hw_stats *hw_stats = mac->hw_stats; +- unsigned int base = MTK_GDM1_TX_GBCNT; +- u64 stats; +- +- base += hw_stats->reg_offset; ++ struct mtk_eth *eth = mac->hw; + + u64_stats_update_begin(&hw_stats->syncp); + +- hw_stats->rx_bytes += mtk_r32(mac->hw, base); +- stats = mtk_r32(mac->hw, base + 0x04); +- if (stats) +- hw_stats->rx_bytes += (stats << 32); +- hw_stats->rx_packets += mtk_r32(mac->hw, base + 0x08); +- hw_stats->rx_overflow += mtk_r32(mac->hw, base + 0x10); +- hw_stats->rx_fcs_errors += mtk_r32(mac->hw, base + 0x14); +- hw_stats->rx_short_errors += mtk_r32(mac->hw, base + 0x18); +- hw_stats->rx_long_errors += mtk_r32(mac->hw, base + 0x1c); +- hw_stats->rx_checksum_errors += mtk_r32(mac->hw, base + 0x20); +- hw_stats->rx_flow_control_packets += +- mtk_r32(mac->hw, base + 0x24); +- hw_stats->tx_skip += mtk_r32(mac->hw, base + 0x28); +- hw_stats->tx_collisions += mtk_r32(mac->hw, base + 0x2c); +- hw_stats->tx_bytes += mtk_r32(mac->hw, base + 0x30); +- stats = mtk_r32(mac->hw, base + 0x34); +- if (stats) +- hw_stats->tx_bytes += (stats << 32); +- hw_stats->tx_packets += mtk_r32(mac->hw, base + 0x38); ++ if (MTK_HAS_CAPS(eth->soc->caps, MTK_SOC_MT7628)) { ++ hw_stats->tx_packets += mtk_r32(mac->hw, MT7628_SDM_TPCNT); ++ hw_stats->tx_bytes += mtk_r32(mac->hw, MT7628_SDM_TBCNT); ++ hw_stats->rx_packets += mtk_r32(mac->hw, 
MT7628_SDM_RPCNT); ++ hw_stats->rx_bytes += mtk_r32(mac->hw, MT7628_SDM_RBCNT); ++ hw_stats->rx_checksum_errors += ++ mtk_r32(mac->hw, MT7628_SDM_CS_ERR); ++ } else { ++ unsigned int offs = hw_stats->reg_offset; ++ u64 stats; ++ ++ hw_stats->rx_bytes += mtk_r32(mac->hw, ++ MTK_GDM1_RX_GBCNT_L + offs); ++ stats = mtk_r32(mac->hw, MTK_GDM1_RX_GBCNT_H + offs); ++ if (stats) ++ hw_stats->rx_bytes += (stats << 32); ++ hw_stats->rx_packets += ++ mtk_r32(mac->hw, MTK_GDM1_RX_GPCNT + offs); ++ hw_stats->rx_overflow += ++ mtk_r32(mac->hw, MTK_GDM1_RX_OERCNT + offs); ++ hw_stats->rx_fcs_errors += ++ mtk_r32(mac->hw, MTK_GDM1_RX_FERCNT + offs); ++ hw_stats->rx_short_errors += ++ mtk_r32(mac->hw, MTK_GDM1_RX_SERCNT + offs); ++ hw_stats->rx_long_errors += ++ mtk_r32(mac->hw, MTK_GDM1_RX_LENCNT + offs); ++ hw_stats->rx_checksum_errors += ++ mtk_r32(mac->hw, MTK_GDM1_RX_CERCNT + offs); ++ hw_stats->rx_flow_control_packets += ++ mtk_r32(mac->hw, MTK_GDM1_RX_FCCNT + offs); ++ hw_stats->tx_skip += ++ mtk_r32(mac->hw, MTK_GDM1_TX_SKIPCNT + offs); ++ hw_stats->tx_collisions += ++ mtk_r32(mac->hw, MTK_GDM1_TX_COLCNT + offs); ++ hw_stats->tx_bytes += ++ mtk_r32(mac->hw, MTK_GDM1_TX_GBCNT_L + offs); ++ stats = mtk_r32(mac->hw, MTK_GDM1_TX_GBCNT_H + offs); ++ if (stats) ++ hw_stats->tx_bytes += (stats << 32); ++ hw_stats->tx_packets += ++ mtk_r32(mac->hw, MTK_GDM1_TX_GPCNT + offs); ++ } ++ + u64_stats_update_end(&hw_stats->syncp); + } + +diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h +index c47272100615d..e02e21cba8289 100644 +--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h ++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h +@@ -267,8 +267,21 @@ + /* QDMA FQ Free Page Buffer Length Register */ + #define MTK_QDMA_FQ_BLEN 0x1B2C + +-/* GMA1 Received Good Byte Count Register */ +-#define MTK_GDM1_TX_GBCNT 0x2400 ++/* GMA1 counter / statics register */ ++#define MTK_GDM1_RX_GBCNT_L 0x2400 ++#define MTK_GDM1_RX_GBCNT_H 0x2404 ++#define MTK_GDM1_RX_GPCNT 0x2408 ++#define MTK_GDM1_RX_OERCNT 0x2410 ++#define MTK_GDM1_RX_FERCNT 0x2414 ++#define MTK_GDM1_RX_SERCNT 0x2418 ++#define MTK_GDM1_RX_LENCNT 0x241c ++#define MTK_GDM1_RX_CERCNT 0x2420 ++#define MTK_GDM1_RX_FCCNT 0x2424 ++#define MTK_GDM1_TX_SKIPCNT 0x2428 ++#define MTK_GDM1_TX_COLCNT 0x242c ++#define MTK_GDM1_TX_GBCNT_L 0x2430 ++#define MTK_GDM1_TX_GBCNT_H 0x2434 ++#define MTK_GDM1_TX_GPCNT 0x2438 + #define MTK_STAT_OFFSET 0x40 + + /* QDMA descriptor txd4 */ +@@ -484,6 +497,13 @@ + #define MT7628_SDM_MAC_ADRL (MT7628_SDM_OFFSET + 0x0c) + #define MT7628_SDM_MAC_ADRH (MT7628_SDM_OFFSET + 0x10) + ++/* Counter / stat register */ ++#define MT7628_SDM_TPCNT (MT7628_SDM_OFFSET + 0x100) ++#define MT7628_SDM_TBCNT (MT7628_SDM_OFFSET + 0x104) ++#define MT7628_SDM_RPCNT (MT7628_SDM_OFFSET + 0x108) ++#define MT7628_SDM_RBCNT (MT7628_SDM_OFFSET + 0x10c) ++#define MT7628_SDM_CS_ERR (MT7628_SDM_OFFSET + 0x110) ++ + struct mtk_rx_dma { + unsigned int rxd1; + unsigned int rxd2; +diff --git a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c +index 1434df66fcf2e..3616b77caa0ad 100644 +--- a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c ++++ b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c +@@ -2027,8 +2027,6 @@ static int mlx4_en_set_tunable(struct net_device *dev, + return ret; + } + +-#define MLX4_EEPROM_PAGE_LEN 256 +- + static int mlx4_en_get_module_info(struct net_device *dev, + struct ethtool_modinfo *modinfo) + { +@@ -2063,7 +2061,7 @@ static int 
mlx4_en_get_module_info(struct net_device *dev, + break; + case MLX4_MODULE_ID_SFP: + modinfo->type = ETH_MODULE_SFF_8472; +- modinfo->eeprom_len = MLX4_EEPROM_PAGE_LEN; ++ modinfo->eeprom_len = ETH_MODULE_SFF_8472_LEN; + break; + default: + return -EINVAL; +diff --git a/drivers/net/ethernet/mellanox/mlx4/port.c b/drivers/net/ethernet/mellanox/mlx4/port.c +index ba6ac31a339dc..256a06b3c096b 100644 +--- a/drivers/net/ethernet/mellanox/mlx4/port.c ++++ b/drivers/net/ethernet/mellanox/mlx4/port.c +@@ -1973,6 +1973,7 @@ EXPORT_SYMBOL(mlx4_get_roce_gid_from_slave); + #define I2C_ADDR_LOW 0x50 + #define I2C_ADDR_HIGH 0x51 + #define I2C_PAGE_SIZE 256 ++#define I2C_HIGH_PAGE_SIZE 128 + + /* Module Info Data */ + struct mlx4_cable_info { +@@ -2026,6 +2027,88 @@ static inline const char *cable_info_mad_err_str(u16 mad_status) + return "Unknown Error"; + } + ++static int mlx4_get_module_id(struct mlx4_dev *dev, u8 port, u8 *module_id) ++{ ++ struct mlx4_cmd_mailbox *inbox, *outbox; ++ struct mlx4_mad_ifc *inmad, *outmad; ++ struct mlx4_cable_info *cable_info; ++ int ret; ++ ++ inbox = mlx4_alloc_cmd_mailbox(dev); ++ if (IS_ERR(inbox)) ++ return PTR_ERR(inbox); ++ ++ outbox = mlx4_alloc_cmd_mailbox(dev); ++ if (IS_ERR(outbox)) { ++ mlx4_free_cmd_mailbox(dev, inbox); ++ return PTR_ERR(outbox); ++ } ++ ++ inmad = (struct mlx4_mad_ifc *)(inbox->buf); ++ outmad = (struct mlx4_mad_ifc *)(outbox->buf); ++ ++ inmad->method = 0x1; /* Get */ ++ inmad->class_version = 0x1; ++ inmad->mgmt_class = 0x1; ++ inmad->base_version = 0x1; ++ inmad->attr_id = cpu_to_be16(0xFF60); /* Module Info */ ++ ++ cable_info = (struct mlx4_cable_info *)inmad->data; ++ cable_info->dev_mem_address = 0; ++ cable_info->page_num = 0; ++ cable_info->i2c_addr = I2C_ADDR_LOW; ++ cable_info->size = cpu_to_be16(1); ++ ++ ret = mlx4_cmd_box(dev, inbox->dma, outbox->dma, port, 3, ++ MLX4_CMD_MAD_IFC, MLX4_CMD_TIME_CLASS_C, ++ MLX4_CMD_NATIVE); ++ if (ret) ++ goto out; ++ ++ if (be16_to_cpu(outmad->status)) { ++ /* Mad returned with bad status */ ++ ret = be16_to_cpu(outmad->status); ++ mlx4_warn(dev, ++ "MLX4_CMD_MAD_IFC Get Module ID attr(%x) port(%d) i2c_addr(%x) offset(%d) size(%d): Response Mad Status(%x) - %s\n", ++ 0xFF60, port, I2C_ADDR_LOW, 0, 1, ret, ++ cable_info_mad_err_str(ret)); ++ ret = -ret; ++ goto out; ++ } ++ cable_info = (struct mlx4_cable_info *)outmad->data; ++ *module_id = cable_info->data[0]; ++out: ++ mlx4_free_cmd_mailbox(dev, inbox); ++ mlx4_free_cmd_mailbox(dev, outbox); ++ return ret; ++} ++ ++static void mlx4_sfp_eeprom_params_set(u8 *i2c_addr, u8 *page_num, u16 *offset) ++{ ++ *i2c_addr = I2C_ADDR_LOW; ++ *page_num = 0; ++ ++ if (*offset < I2C_PAGE_SIZE) ++ return; ++ ++ *i2c_addr = I2C_ADDR_HIGH; ++ *offset -= I2C_PAGE_SIZE; ++} ++ ++static void mlx4_qsfp_eeprom_params_set(u8 *i2c_addr, u8 *page_num, u16 *offset) ++{ ++ /* Offsets 0-255 belong to page 0. ++ * Offsets 256-639 belong to pages 01, 02, 03. ++ * For example, offset 400 is page 02: 1 + (400 - 256) / 128 = 2 ++ */ ++ if (*offset < I2C_PAGE_SIZE) ++ *page_num = 0; ++ else ++ *page_num = 1 + (*offset - I2C_PAGE_SIZE) / I2C_HIGH_PAGE_SIZE; ++ *i2c_addr = I2C_ADDR_LOW; ++ *offset -= *page_num * I2C_HIGH_PAGE_SIZE; ++} ++ + /** + * mlx4_get_module_info - Read cable module eeprom data + * @dev: mlx4_dev. 
+@@ -2045,12 +2128,30 @@ int mlx4_get_module_info(struct mlx4_dev *dev, u8 port, + struct mlx4_cmd_mailbox *inbox, *outbox; + struct mlx4_mad_ifc *inmad, *outmad; + struct mlx4_cable_info *cable_info; +- u16 i2c_addr; ++ u8 module_id, i2c_addr, page_num; + int ret; + + if (size > MODULE_INFO_MAX_READ) + size = MODULE_INFO_MAX_READ; + ++ ret = mlx4_get_module_id(dev, port, &module_id); ++ if (ret) ++ return ret; ++ ++ switch (module_id) { ++ case MLX4_MODULE_ID_SFP: ++ mlx4_sfp_eeprom_params_set(&i2c_addr, &page_num, &offset); ++ break; ++ case MLX4_MODULE_ID_QSFP: ++ case MLX4_MODULE_ID_QSFP_PLUS: ++ case MLX4_MODULE_ID_QSFP28: ++ mlx4_qsfp_eeprom_params_set(&i2c_addr, &page_num, &offset); ++ break; ++ default: ++ mlx4_err(dev, "Module ID not recognized: %#x\n", module_id); ++ return -EINVAL; ++ } ++ + inbox = mlx4_alloc_cmd_mailbox(dev); + if (IS_ERR(inbox)) + return PTR_ERR(inbox); +@@ -2076,11 +2177,9 @@ int mlx4_get_module_info(struct mlx4_dev *dev, u8 port, + */ + size -= offset + size - I2C_PAGE_SIZE; + +- i2c_addr = I2C_ADDR_LOW; +- + cable_info = (struct mlx4_cable_info *)inmad->data; + cable_info->dev_mem_address = cpu_to_be16(offset); +- cable_info->page_num = 0; ++ cable_info->page_num = page_num; + cable_info->i2c_addr = i2c_addr; + cable_info->size = cpu_to_be16(size); + +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bond.c b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bond.c +index 95f2b26a3ee31..9c076aa20306a 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bond.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bond.c +@@ -223,6 +223,8 @@ static void mlx5e_rep_changelowerstate_event(struct net_device *netdev, void *pt + rpriv = priv->ppriv; + fwd_vport_num = rpriv->rep->vport; + lag_dev = netdev_master_upper_dev_get(netdev); ++ if (!lag_dev) ++ return; + + netdev_dbg(netdev, "lag_dev(%s)'s slave vport(%d) is txable(%d)\n", + lag_dev->name, fwd_vport_num, net_lag_port_dev_txable(netdev)); +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c +index 065126370acd2..96ba027dbef3d 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c +@@ -645,7 +645,7 @@ bool mlx5e_rep_tc_update_skb(struct mlx5_cqe64 *cqe, + } + + if (chain) { +- tc_skb_ext = skb_ext_add(skb, TC_SKB_EXT); ++ tc_skb_ext = tc_skb_ext_alloc(skb); + if (!tc_skb_ext) { + WARN_ON(1); + return false; +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c +index 9f16ad2c0710b..1560fcbf4ac7c 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c +@@ -1504,7 +1504,7 @@ mlx5e_init_fib_work_ipv4(struct mlx5e_priv *priv, + + fen_info = container_of(info, struct fib_entry_notifier_info, info); + fib_dev = fib_info_nh(fen_info->fi, 0)->fib_nh_dev; +- if (fib_dev->netdev_ops != &mlx5e_netdev_ops || ++ if (!fib_dev || fib_dev->netdev_ops != &mlx5e_netdev_ops || + fen_info->dst_len != 32) + return NULL; + +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c +index 16ce7756ac43f..bbe9b1dbb7aca 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c +@@ -35,6 +35,7 @@ + #include + #include + #include ++#include + #include "en.h" + #include "lib/mpfs.h" + +diff --git 
a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +index 5db63b9f3b70d..99dc9f2beed5b 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +@@ -3027,7 +3027,7 @@ static int mlx5e_update_netdev_queues(struct mlx5e_priv *priv) + int err; + + old_num_txqs = netdev->real_num_tx_queues; +- old_ntc = netdev->num_tc; ++ old_ntc = netdev->num_tc ? : 1; + + nch = priv->channels.params.num_channels; + ntc = priv->channels.params.num_tc; +@@ -5604,6 +5604,11 @@ static void mlx5e_update_features(struct net_device *netdev) + rtnl_unlock(); + } + ++static void mlx5e_reset_channels(struct net_device *netdev) ++{ ++ netdev_reset_tc(netdev); ++} ++ + int mlx5e_attach_netdev(struct mlx5e_priv *priv) + { + const bool take_rtnl = priv->netdev->reg_state == NETREG_REGISTERED; +@@ -5658,6 +5663,7 @@ err_cleanup_tx: + profile->cleanup_tx(priv); + + out: ++ mlx5e_reset_channels(priv->netdev); + set_bit(MLX5E_STATE_DESTROYING, &priv->state); + cancel_work_sync(&priv->update_stats_work); + return err; +@@ -5675,6 +5681,7 @@ void mlx5e_detach_netdev(struct mlx5e_priv *priv) + + profile->cleanup_rx(priv); + profile->cleanup_tx(priv); ++ mlx5e_reset_channels(priv->netdev); + cancel_work_sync(&priv->update_stats_work); + } + +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c +index d675107d9ecab..78a1403c98026 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c +@@ -1276,10 +1276,10 @@ mlx5e_tc_add_fdb_flow(struct mlx5e_priv *priv, + struct netlink_ext_ack *extack) + { + struct mlx5_eswitch *esw = priv->mdev->priv.eswitch; +- struct net_device *out_dev, *encap_dev = NULL; + struct mlx5e_tc_flow_parse_attr *parse_attr; + struct mlx5_flow_attr *attr = flow->attr; + bool vf_tun = false, encap_valid = true; ++ struct net_device *encap_dev = NULL; + struct mlx5_esw_flow_attr *esw_attr; + struct mlx5_fc *counter = NULL; + struct mlx5e_rep_priv *rpriv; +@@ -1325,16 +1325,22 @@ mlx5e_tc_add_fdb_flow(struct mlx5e_priv *priv, + esw_attr = attr->esw_attr; + + for (out_index = 0; out_index < MLX5_MAX_FLOW_FWD_VPORTS; out_index++) { ++ struct net_device *out_dev; + int mirred_ifindex; + + if (!(esw_attr->dests[out_index].flags & MLX5_ESW_DEST_ENCAP)) + continue; + + mirred_ifindex = parse_attr->mirred_ifindex[out_index]; +- out_dev = __dev_get_by_index(dev_net(priv->netdev), +- mirred_ifindex); ++ out_dev = dev_get_by_index(dev_net(priv->netdev), mirred_ifindex); ++ if (!out_dev) { ++ NL_SET_ERR_MSG_MOD(extack, "Requested mirred device not found"); ++ err = -ENODEV; ++ goto err_out; ++ } + err = mlx5e_attach_encap(priv, flow, out_dev, out_index, + extack, &encap_dev, &encap_valid); ++ dev_put(out_dev); + if (err) + goto err_out; + +@@ -1347,6 +1353,12 @@ mlx5e_tc_add_fdb_flow(struct mlx5e_priv *priv, + esw_attr->dests[out_index].mdev = out_priv->mdev; + } + ++ if (vf_tun && esw_attr->out_count > 1) { ++ NL_SET_ERR_MSG_MOD(extack, "VF tunnel encap with mirroring is not supported"); ++ err = -EOPNOTSUPP; ++ goto err_out; ++ } ++ + err = mlx5_eswitch_add_vlan_action(esw, attr); + if (err) + goto err_out; +@@ -3424,8 +3436,12 @@ static int add_vlan_push_action(struct mlx5e_priv *priv, + if (err) + return err; + +- *out_dev = dev_get_by_index_rcu(dev_net(vlan_dev), +- dev_get_iflink(vlan_dev)); ++ rcu_read_lock(); ++ *out_dev = dev_get_by_index_rcu(dev_net(vlan_dev), dev_get_iflink(vlan_dev)); 
++ rcu_read_unlock(); ++ if (!*out_dev) ++ return -ENODEV; ++ + if (is_vlan_dev(*out_dev)) + err = add_vlan_push_action(priv, attr, out_dev, action); + +@@ -4908,7 +4924,7 @@ bool mlx5e_tc_update_skb(struct mlx5_cqe64 *cqe, + } + + if (chain) { +- tc_skb_ext = skb_ext_add(skb, TC_SKB_EXT); ++ tc_skb_ext = tc_skb_ext_alloc(skb); + if (WARN_ON(!tc_skb_ext)) + return false; + +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c +index aba17835465b4..2c6d95900e3c9 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c +@@ -35,6 +35,7 @@ + #include + #include + #include ++#include + #include "esw/acl/lgcy.h" + #include "mlx5_core.h" + #include "lib/eq.h" +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c +index ec679560a95d0..1cbb330b9f42b 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c +@@ -76,10 +76,11 @@ mlx5_eswitch_termtbl_create(struct mlx5_core_dev *dev, + /* As this is the terminating action then the termination table is the + * same prio as the slow path + */ +- ft_attr.flags = MLX5_FLOW_TABLE_TERMINATION | ++ ft_attr.flags = MLX5_FLOW_TABLE_TERMINATION | MLX5_FLOW_TABLE_UNMANAGED | + MLX5_FLOW_TABLE_TUNNEL_EN_REFORMAT; +- ft_attr.prio = FDB_SLOW_PATH; ++ ft_attr.prio = FDB_TC_OFFLOAD; + ft_attr.max_fte = 1; ++ ft_attr.level = 1; + ft_attr.autogroup.max_num_groups = 1; + tt->termtbl = mlx5_create_auto_grouped_flow_table(root_ns, &ft_attr); + if (IS_ERR(tt->termtbl)) { +@@ -171,19 +172,6 @@ mlx5_eswitch_termtbl_put(struct mlx5_eswitch *esw, + } + } + +-static bool mlx5_eswitch_termtbl_is_encap_reformat(struct mlx5_pkt_reformat *rt) +-{ +- switch (rt->reformat_type) { +- case MLX5_REFORMAT_TYPE_L2_TO_VXLAN: +- case MLX5_REFORMAT_TYPE_L2_TO_NVGRE: +- case MLX5_REFORMAT_TYPE_L2_TO_L2_TUNNEL: +- case MLX5_REFORMAT_TYPE_L2_TO_L3_TUNNEL: +- return true; +- default: +- return false; +- } +-} +- + static void + mlx5_eswitch_termtbl_actions_move(struct mlx5_flow_act *src, + struct mlx5_flow_act *dst) +@@ -201,14 +189,6 @@ mlx5_eswitch_termtbl_actions_move(struct mlx5_flow_act *src, + memset(&src->vlan[1], 0, sizeof(src->vlan[1])); + } + } +- +- if (src->action & MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT && +- mlx5_eswitch_termtbl_is_encap_reformat(src->pkt_reformat)) { +- src->action &= ~MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT; +- dst->action |= MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT; +- dst->pkt_reformat = src->pkt_reformat; +- src->pkt_reformat = NULL; +- } + } + + static bool mlx5_eswitch_offload_is_uplink_port(const struct mlx5_eswitch *esw, +@@ -237,6 +217,7 @@ mlx5_eswitch_termtbl_required(struct mlx5_eswitch *esw, + int i; + + if (!MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, termination_table) || ++ !MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, ignore_flow_level) || + attr->flags & MLX5_ESW_ATTR_FLAG_SLOW_PATH || + !mlx5_eswitch_offload_is_uplink_port(esw, spec)) + return false; +@@ -278,6 +259,14 @@ mlx5_eswitch_add_termtbl_rule(struct mlx5_eswitch *esw, + if (dest[i].type != MLX5_FLOW_DESTINATION_TYPE_VPORT) + continue; + ++ if (attr->dests[num_vport_dests].flags & MLX5_ESW_DEST_ENCAP) { ++ term_tbl_act.action |= MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT; ++ term_tbl_act.pkt_reformat = attr->dests[num_vport_dests].pkt_reformat; ++ } else { ++ term_tbl_act.action &= 
~MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT; ++ term_tbl_act.pkt_reformat = NULL; ++ } ++ + /* get the terminating table for the action list */ + tt = mlx5_eswitch_termtbl_get_create(esw, &term_tbl_act, + &dest[i], attr); +@@ -299,6 +288,9 @@ mlx5_eswitch_add_termtbl_rule(struct mlx5_eswitch *esw, + goto revert_changes; + + /* create the FTE */ ++ flow_act->action &= ~MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT; ++ flow_act->pkt_reformat = NULL; ++ flow_act->flags |= FLOW_ACT_IGNORE_FLOW_LEVEL; + rule = mlx5_add_flow_rules(fdb, spec, flow_act, dest, num_dest); + if (IS_ERR(rule)) + goto revert_changes; +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c b/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c +index 88e58ac902def..15c3a9058e728 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c +@@ -307,6 +307,11 @@ int mlx5_lag_mp_init(struct mlx5_lag *ldev) + struct lag_mp *mp = &ldev->lag_mp; + int err; + ++ /* always clear mfi, as it might become stale when a route delete event ++ * has been missed ++ */ ++ mp->mfi = NULL; ++ + if (mp->fib_nb.notifier_call) + return 0; + +@@ -335,4 +340,5 @@ void mlx5_lag_mp_cleanup(struct mlx5_lag *ldev) + unregister_fib_notifier(&init_net, &mp->fib_nb); + destroy_workqueue(mp->wq); + mp->fib_nb.notifier_call = NULL; ++ mp->mfi = NULL; + } +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/mpfs.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/mpfs.c +index fd8449ff9e176..839a01da110f3 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/mpfs.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/mpfs.c +@@ -33,6 +33,7 @@ + #include + #include + #include ++#include + #include + #include "mlx5_core.h" + #include "lib/mpfs.h" +@@ -175,6 +176,7 @@ out: + mutex_unlock(&mpfs->lock); + return err; + } ++EXPORT_SYMBOL(mlx5_mpfs_add_mac); + + int mlx5_mpfs_del_mac(struct mlx5_core_dev *dev, u8 *mac) + { +@@ -206,3 +208,4 @@ unlock: + mutex_unlock(&mpfs->lock); + return err; + } ++EXPORT_SYMBOL(mlx5_mpfs_del_mac); +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/mpfs.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/mpfs.h +index 4a7b2c3203a7e..4a293542a7aa1 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/mpfs.h ++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/mpfs.h +@@ -84,12 +84,9 @@ struct l2addr_node { + #ifdef CONFIG_MLX5_MPFS + int mlx5_mpfs_init(struct mlx5_core_dev *dev); + void mlx5_mpfs_cleanup(struct mlx5_core_dev *dev); +-int mlx5_mpfs_add_mac(struct mlx5_core_dev *dev, u8 *mac); +-int mlx5_mpfs_del_mac(struct mlx5_core_dev *dev, u8 *mac); + #else /* #ifndef CONFIG_MLX5_MPFS */ + static inline int mlx5_mpfs_init(struct mlx5_core_dev *dev) { return 0; } + static inline void mlx5_mpfs_cleanup(struct mlx5_core_dev *dev) {} +-static inline int mlx5_mpfs_add_mac(struct mlx5_core_dev *dev, u8 *mac) { return 0; } +-static inline int mlx5_mpfs_del_mac(struct mlx5_core_dev *dev, u8 *mac) { return 0; } + #endif ++ + #endif +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c +index c568896cfb231..efb93d63e54cb 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c +@@ -503,7 +503,7 @@ static int handle_hca_cap_odp(struct mlx5_core_dev *dev, void *set_ctx) + + static int handle_hca_cap(struct mlx5_core_dev *dev, void *set_ctx) + { +- struct mlx5_profile *prof = dev->profile; ++ struct mlx5_profile *prof = &dev->profile; + void *set_hca_cap; + int 
err; + +@@ -524,11 +524,11 @@ static int handle_hca_cap(struct mlx5_core_dev *dev, void *set_ctx) + to_fw_pkey_sz(dev, 128)); + + /* Check log_max_qp from HCA caps to set in current profile */ +- if (MLX5_CAP_GEN_MAX(dev, log_max_qp) < profile[prof_sel].log_max_qp) { ++ if (MLX5_CAP_GEN_MAX(dev, log_max_qp) < prof->log_max_qp) { + mlx5_core_warn(dev, "log_max_qp value in current profile is %d, changing it to HCA capability limit (%d)\n", +- profile[prof_sel].log_max_qp, ++ prof->log_max_qp, + MLX5_CAP_GEN_MAX(dev, log_max_qp)); +- profile[prof_sel].log_max_qp = MLX5_CAP_GEN_MAX(dev, log_max_qp); ++ prof->log_max_qp = MLX5_CAP_GEN_MAX(dev, log_max_qp); + } + if (prof->mask & MLX5_PROF_MASK_QP_SIZE) + MLX5_SET(cmd_hca_cap, set_hca_cap, log_max_qp, +@@ -1335,8 +1335,7 @@ int mlx5_mdev_init(struct mlx5_core_dev *dev, int profile_idx) + struct mlx5_priv *priv = &dev->priv; + int err; + +- dev->profile = &profile[profile_idx]; +- ++ memcpy(&dev->profile, &profile[profile_idx], sizeof(dev->profile)); + INIT_LIST_HEAD(&priv->ctx_list); + spin_lock_init(&priv->ctx_lock); + mutex_init(&dev->intf_state_mutex); +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c b/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c +index c2ba41bb7a701..96c4509e58381 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c +@@ -128,10 +128,10 @@ static enum devlink_port_fn_state mlx5_sf_to_devlink_state(u8 hw_state) + switch (hw_state) { + case MLX5_VHCA_STATE_ACTIVE: + case MLX5_VHCA_STATE_IN_USE: +- case MLX5_VHCA_STATE_TEARDOWN_REQUEST: + return DEVLINK_PORT_FN_STATE_ACTIVE; + case MLX5_VHCA_STATE_INVALID: + case MLX5_VHCA_STATE_ALLOCATED: ++ case MLX5_VHCA_STATE_TEARDOWN_REQUEST: + default: + return DEVLINK_PORT_FN_STATE_INACTIVE; + } +@@ -184,14 +184,17 @@ sf_err: + return err; + } + +-static int mlx5_sf_activate(struct mlx5_core_dev *dev, struct mlx5_sf *sf) ++static int mlx5_sf_activate(struct mlx5_core_dev *dev, struct mlx5_sf *sf, ++ struct netlink_ext_ack *extack) + { + int err; + + if (mlx5_sf_is_active(sf)) + return 0; +- if (sf->hw_state != MLX5_VHCA_STATE_ALLOCATED) +- return -EINVAL; ++ if (sf->hw_state != MLX5_VHCA_STATE_ALLOCATED) { ++ NL_SET_ERR_MSG_MOD(extack, "SF is inactivated but it is still attached"); ++ return -EBUSY; ++ } + + err = mlx5_cmd_sf_enable_hca(dev, sf->hw_fn_id); + if (err) +@@ -218,7 +221,8 @@ static int mlx5_sf_deactivate(struct mlx5_core_dev *dev, struct mlx5_sf *sf) + + static int mlx5_sf_state_set(struct mlx5_core_dev *dev, struct mlx5_sf_table *table, + struct mlx5_sf *sf, +- enum devlink_port_fn_state state) ++ enum devlink_port_fn_state state, ++ struct netlink_ext_ack *extack) + { + int err = 0; + +@@ -226,7 +230,7 @@ static int mlx5_sf_state_set(struct mlx5_core_dev *dev, struct mlx5_sf_table *ta + if (state == mlx5_sf_to_devlink_state(sf->hw_state)) + goto out; + if (state == DEVLINK_PORT_FN_STATE_ACTIVE) +- err = mlx5_sf_activate(dev, sf); ++ err = mlx5_sf_activate(dev, sf, extack); + else if (state == DEVLINK_PORT_FN_STATE_INACTIVE) + err = mlx5_sf_deactivate(dev, sf); + else +@@ -257,7 +261,7 @@ int mlx5_devlink_sf_port_fn_state_set(struct devlink *devlink, struct devlink_po + goto out; + } + +- err = mlx5_sf_state_set(dev, table, sf, state); ++ err = mlx5_sf_state_set(dev, table, sf, state, extack); + out: + mlx5_sf_table_put(table); + return err; +diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c +index 
369d7cde39933..492415790b13f 100644 +--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c ++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c +@@ -1076,7 +1076,6 @@ static void stmmac_check_pcs_mode(struct stmmac_priv *priv) + */ + static int stmmac_init_phy(struct net_device *dev) + { +- struct ethtool_wolinfo wol = { .cmd = ETHTOOL_GWOL }; + struct stmmac_priv *priv = netdev_priv(dev); + struct device_node *node; + int ret; +@@ -1102,8 +1101,12 @@ static int stmmac_init_phy(struct net_device *dev) + ret = phylink_connect_phy(priv->phylink, phydev); + } + +- phylink_ethtool_get_wol(priv->phylink, &wol); +- device_set_wakeup_capable(priv->device, !!wol.supported); ++ if (!priv->plat->pmt) { ++ struct ethtool_wolinfo wol = { .cmd = ETHTOOL_GWOL }; ++ ++ phylink_ethtool_get_wol(priv->phylink, &wol); ++ device_set_wakeup_capable(priv->device, !!wol.supported); ++ } + + return ret; + } +diff --git a/drivers/net/ethernet/ti/netcp_core.c b/drivers/net/ethernet/ti/netcp_core.c +index d7a144b4a09f0..dc50e948195de 100644 +--- a/drivers/net/ethernet/ti/netcp_core.c ++++ b/drivers/net/ethernet/ti/netcp_core.c +@@ -1350,8 +1350,8 @@ int netcp_txpipe_open(struct netcp_tx_pipe *tx_pipe) + tx_pipe->dma_queue = knav_queue_open(name, tx_pipe->dma_queue_id, + KNAV_QUEUE_SHARED); + if (IS_ERR(tx_pipe->dma_queue)) { +- dev_err(dev, "Could not open DMA queue for channel \"%s\": %d\n", +- name, ret); ++ dev_err(dev, "Could not open DMA queue for channel \"%s\": %pe\n", ++ name, tx_pipe->dma_queue); + ret = PTR_ERR(tx_pipe->dma_queue); + goto err; + } +diff --git a/drivers/net/ipa/ipa.h b/drivers/net/ipa/ipa.h +index 8020776313716..2734a5164b927 100644 +--- a/drivers/net/ipa/ipa.h ++++ b/drivers/net/ipa/ipa.h +@@ -56,6 +56,7 @@ enum ipa_flag { + * @mem_virt: Virtual address of IPA-local memory space + * @mem_offset: Offset from @mem_virt used for access to IPA memory + * @mem_size: Total size (bytes) of memory at @mem_virt ++ * @mem_count: Number of entries in the mem array + * @mem: Array of IPA-local memory region descriptors + * @imem_iova: I/O virtual address of IPA region in IMEM + * @imem_size; Size of IMEM region +@@ -102,6 +103,7 @@ struct ipa { + void *mem_virt; + u32 mem_offset; + u32 mem_size; ++ u32 mem_count; + const struct ipa_mem *mem; + + unsigned long imem_iova; +diff --git a/drivers/net/ipa/ipa_mem.c b/drivers/net/ipa/ipa_mem.c +index f25029b9ec857..c1294d7ebdad4 100644 +--- a/drivers/net/ipa/ipa_mem.c ++++ b/drivers/net/ipa/ipa_mem.c +@@ -181,7 +181,7 @@ int ipa_mem_config(struct ipa *ipa) + * for the region, write "canary" values in the space prior to + * the region's base address. 
+ */ +- for (mem_id = 0; mem_id < IPA_MEM_COUNT; mem_id++) { ++ for (mem_id = 0; mem_id < ipa->mem_count; mem_id++) { + const struct ipa_mem *mem = &ipa->mem[mem_id]; + u16 canary_count; + __le32 *canary; +@@ -488,6 +488,7 @@ int ipa_mem_init(struct ipa *ipa, const struct ipa_mem_data *mem_data) + ipa->mem_size = resource_size(res); + + /* The ipa->mem[] array is indexed by enum ipa_mem_id values */ ++ ipa->mem_count = mem_data->local_count; + ipa->mem = mem_data->local; + + ret = ipa_imem_init(ipa, mem_data->imem_addr, mem_data->imem_size); +diff --git a/drivers/net/mdio/mdio-octeon.c b/drivers/net/mdio/mdio-octeon.c +index d1e1009d51afe..6faf39314ac93 100644 +--- a/drivers/net/mdio/mdio-octeon.c ++++ b/drivers/net/mdio/mdio-octeon.c +@@ -71,7 +71,6 @@ static int octeon_mdiobus_probe(struct platform_device *pdev) + + return 0; + fail_register: +- mdiobus_free(bus->mii_bus); + smi_en.u64 = 0; + oct_mdio_writeq(smi_en.u64, bus->register_base + SMI_EN); + return err; +@@ -85,7 +84,6 @@ static int octeon_mdiobus_remove(struct platform_device *pdev) + bus = platform_get_drvdata(pdev); + + mdiobus_unregister(bus->mii_bus); +- mdiobus_free(bus->mii_bus); + smi_en.u64 = 0; + oct_mdio_writeq(smi_en.u64, bus->register_base + SMI_EN); + return 0; +diff --git a/drivers/net/mdio/mdio-thunder.c b/drivers/net/mdio/mdio-thunder.c +index 3d7eda99d34e2..dd7430c998a2a 100644 +--- a/drivers/net/mdio/mdio-thunder.c ++++ b/drivers/net/mdio/mdio-thunder.c +@@ -126,7 +126,6 @@ static void thunder_mdiobus_pci_remove(struct pci_dev *pdev) + continue; + + mdiobus_unregister(bus->mii_bus); +- mdiobus_free(bus->mii_bus); + oct_mdio_writeq(0, bus->register_base + SMI_EN); + } + pci_release_regions(pdev); +diff --git a/drivers/net/usb/hso.c b/drivers/net/usb/hso.c +index 3ef4b2841402c..5c779cc0ea112 100644 +--- a/drivers/net/usb/hso.c ++++ b/drivers/net/usb/hso.c +@@ -1689,7 +1689,7 @@ static int hso_serial_tiocmset(struct tty_struct *tty, + spin_unlock_irqrestore(&serial->serial_lock, flags); + + return usb_control_msg(serial->parent->usb, +- usb_rcvctrlpipe(serial->parent->usb, 0), 0x22, ++ usb_sndctrlpipe(serial->parent->usb, 0), 0x22, + 0x21, val, if_num, NULL, 0, + USB_CTRL_SET_TIMEOUT); + } +@@ -2436,7 +2436,7 @@ static int hso_rfkill_set_block(void *data, bool blocked) + if (hso_dev->usb_gone) + rv = 0; + else +- rv = usb_control_msg(hso_dev->usb, usb_rcvctrlpipe(hso_dev->usb, 0), ++ rv = usb_control_msg(hso_dev->usb, usb_sndctrlpipe(hso_dev->usb, 0), + enabled ? 0x82 : 0x81, 0x40, 0, 0, NULL, 0, + USB_CTRL_SET_TIMEOUT); + mutex_unlock(&hso_dev->mutex); +@@ -2618,32 +2618,31 @@ static struct hso_device *hso_create_bulk_serial_device( + num_urbs = 2; + serial->tiocmget = kzalloc(sizeof(struct hso_tiocmget), + GFP_KERNEL); ++ if (!serial->tiocmget) ++ goto exit; + serial->tiocmget->serial_state_notification + = kzalloc(sizeof(struct hso_serial_state_notification), + GFP_KERNEL); +- /* it isn't going to break our heart if serial->tiocmget +- * allocation fails don't bother checking this. 
+- */ +- if (serial->tiocmget && serial->tiocmget->serial_state_notification) { +- tiocmget = serial->tiocmget; +- tiocmget->endp = hso_get_ep(interface, +- USB_ENDPOINT_XFER_INT, +- USB_DIR_IN); +- if (!tiocmget->endp) { +- dev_err(&interface->dev, "Failed to find INT IN ep\n"); +- goto exit; +- } +- +- tiocmget->urb = usb_alloc_urb(0, GFP_KERNEL); +- if (tiocmget->urb) { +- mutex_init(&tiocmget->mutex); +- init_waitqueue_head(&tiocmget->waitq); +- } else +- hso_free_tiomget(serial); ++ if (!serial->tiocmget->serial_state_notification) ++ goto exit; ++ tiocmget = serial->tiocmget; ++ tiocmget->endp = hso_get_ep(interface, ++ USB_ENDPOINT_XFER_INT, ++ USB_DIR_IN); ++ if (!tiocmget->endp) { ++ dev_err(&interface->dev, "Failed to find INT IN ep\n"); ++ goto exit; + } +- } +- else ++ ++ tiocmget->urb = usb_alloc_urb(0, GFP_KERNEL); ++ if (!tiocmget->urb) ++ goto exit; ++ ++ mutex_init(&tiocmget->mutex); ++ init_waitqueue_head(&tiocmget->waitq); ++ } else { + num_urbs = 1; ++ } + + if (hso_serial_common_create(serial, num_urbs, BULK_URB_RX_SIZE, + BULK_URB_TX_SIZE)) +diff --git a/drivers/net/usb/smsc75xx.c b/drivers/net/usb/smsc75xx.c +index 4353b370249f1..76ed79bb1e3f1 100644 +--- a/drivers/net/usb/smsc75xx.c ++++ b/drivers/net/usb/smsc75xx.c +@@ -1483,7 +1483,7 @@ static int smsc75xx_bind(struct usbnet *dev, struct usb_interface *intf) + ret = smsc75xx_wait_ready(dev, 0); + if (ret < 0) { + netdev_warn(dev->net, "device not ready in smsc75xx_bind\n"); +- return ret; ++ goto err; + } + + smsc75xx_init_mac_address(dev); +@@ -1492,7 +1492,7 @@ static int smsc75xx_bind(struct usbnet *dev, struct usb_interface *intf) + ret = smsc75xx_reset(dev); + if (ret < 0) { + netdev_warn(dev->net, "smsc75xx_reset error %d\n", ret); +- return ret; ++ goto err; + } + + dev->net->netdev_ops = &smsc75xx_netdev_ops; +@@ -1502,6 +1502,10 @@ static int smsc75xx_bind(struct usbnet *dev, struct usb_interface *intf) + dev->hard_mtu = dev->net->mtu + dev->net->hard_header_len; + dev->net->max_mtu = MAX_SINGLE_PACKET_SIZE; + return 0; ++ ++err: ++ kfree(pdata); ++ return ret; + } + + static void smsc75xx_unbind(struct usbnet *dev, struct usb_interface *intf) +diff --git a/drivers/net/wireless/ath/ath10k/htt.h b/drivers/net/wireless/ath/ath10k/htt.h +index 956157946106c..dbc8aef82a65f 100644 +--- a/drivers/net/wireless/ath/ath10k/htt.h ++++ b/drivers/net/wireless/ath/ath10k/htt.h +@@ -845,6 +845,7 @@ enum htt_security_types { + + #define ATH10K_HTT_TXRX_PEER_SECURITY_MAX 2 + #define ATH10K_TXRX_NUM_EXT_TIDS 19 ++#define ATH10K_TXRX_NON_QOS_TID 16 + + enum htt_security_flags { + #define HTT_SECURITY_TYPE_MASK 0x7F +diff --git a/drivers/net/wireless/ath/ath10k/htt_rx.c b/drivers/net/wireless/ath/ath10k/htt_rx.c +index 1a08156d5011d..7ffb5d5b2a70e 100644 +--- a/drivers/net/wireless/ath/ath10k/htt_rx.c ++++ b/drivers/net/wireless/ath/ath10k/htt_rx.c +@@ -1746,16 +1746,97 @@ static void ath10k_htt_rx_h_csum_offload(struct sk_buff *msdu) + msdu->ip_summed = ath10k_htt_rx_get_csum_state(msdu); + } + ++static u64 ath10k_htt_rx_h_get_pn(struct ath10k *ar, struct sk_buff *skb, ++ u16 offset, ++ enum htt_rx_mpdu_encrypt_type enctype) ++{ ++ struct ieee80211_hdr *hdr; ++ u64 pn = 0; ++ u8 *ehdr; ++ ++ hdr = (struct ieee80211_hdr *)(skb->data + offset); ++ ehdr = skb->data + offset + ieee80211_hdrlen(hdr->frame_control); ++ ++ if (enctype == HTT_RX_MPDU_ENCRYPT_AES_CCM_WPA2) { ++ pn = ehdr[0]; ++ pn |= (u64)ehdr[1] << 8; ++ pn |= (u64)ehdr[4] << 16; ++ pn |= (u64)ehdr[5] << 24; ++ pn |= (u64)ehdr[6] << 32; ++ pn |= (u64)ehdr[7] << 
40; ++ } ++ return pn; ++} ++ ++static bool ath10k_htt_rx_h_frag_multicast_check(struct ath10k *ar, ++ struct sk_buff *skb, ++ u16 offset) ++{ ++ struct ieee80211_hdr *hdr; ++ ++ hdr = (struct ieee80211_hdr *)(skb->data + offset); ++ return !is_multicast_ether_addr(hdr->addr1); ++} ++ ++static bool ath10k_htt_rx_h_frag_pn_check(struct ath10k *ar, ++ struct sk_buff *skb, ++ u16 peer_id, ++ u16 offset, ++ enum htt_rx_mpdu_encrypt_type enctype) ++{ ++ struct ath10k_peer *peer; ++ union htt_rx_pn_t *last_pn, new_pn = {0}; ++ struct ieee80211_hdr *hdr; ++ bool more_frags; ++ u8 tid, frag_number; ++ u32 seq; ++ ++ peer = ath10k_peer_find_by_id(ar, peer_id); ++ if (!peer) { ++ ath10k_dbg(ar, ATH10K_DBG_HTT, "invalid peer for frag pn check\n"); ++ return false; ++ } ++ ++ hdr = (struct ieee80211_hdr *)(skb->data + offset); ++ if (ieee80211_is_data_qos(hdr->frame_control)) ++ tid = ieee80211_get_tid(hdr); ++ else ++ tid = ATH10K_TXRX_NON_QOS_TID; ++ ++ last_pn = &peer->frag_tids_last_pn[tid]; ++ new_pn.pn48 = ath10k_htt_rx_h_get_pn(ar, skb, offset, enctype); ++ more_frags = ieee80211_has_morefrags(hdr->frame_control); ++ frag_number = le16_to_cpu(hdr->seq_ctrl) & IEEE80211_SCTL_FRAG; ++ seq = (__le16_to_cpu(hdr->seq_ctrl) & IEEE80211_SCTL_SEQ) >> 4; ++ ++ if (frag_number == 0) { ++ last_pn->pn48 = new_pn.pn48; ++ peer->frag_tids_seq[tid] = seq; ++ } else { ++ if (seq != peer->frag_tids_seq[tid]) ++ return false; ++ ++ if (new_pn.pn48 != last_pn->pn48 + 1) ++ return false; ++ ++ last_pn->pn48 = new_pn.pn48; ++ } ++ ++ return true; ++} ++ + static void ath10k_htt_rx_h_mpdu(struct ath10k *ar, + struct sk_buff_head *amsdu, + struct ieee80211_rx_status *status, + bool fill_crypt_header, + u8 *rx_hdr, +- enum ath10k_pkt_rx_err *err) ++ enum ath10k_pkt_rx_err *err, ++ u16 peer_id, ++ bool frag) + { + struct sk_buff *first; + struct sk_buff *last; +- struct sk_buff *msdu; ++ struct sk_buff *msdu, *temp; + struct htt_rx_desc *rxd; + struct ieee80211_hdr *hdr; + enum htt_rx_mpdu_encrypt_type enctype; +@@ -1768,6 +1849,7 @@ static void ath10k_htt_rx_h_mpdu(struct ath10k *ar, + bool is_decrypted; + bool is_mgmt; + u32 attention; ++ bool frag_pn_check = true, multicast_check = true; + + if (skb_queue_empty(amsdu)) + return; +@@ -1866,7 +1948,37 @@ static void ath10k_htt_rx_h_mpdu(struct ath10k *ar, + } + + skb_queue_walk(amsdu, msdu) { ++ if (frag && !fill_crypt_header && is_decrypted && ++ enctype == HTT_RX_MPDU_ENCRYPT_AES_CCM_WPA2) ++ frag_pn_check = ath10k_htt_rx_h_frag_pn_check(ar, ++ msdu, ++ peer_id, ++ 0, ++ enctype); ++ ++ if (frag) ++ multicast_check = ath10k_htt_rx_h_frag_multicast_check(ar, ++ msdu, ++ 0); ++ ++ if (!frag_pn_check || !multicast_check) { ++ /* Discard the fragment with invalid PN or multicast DA ++ */ ++ temp = msdu->prev; ++ __skb_unlink(msdu, amsdu); ++ dev_kfree_skb_any(msdu); ++ msdu = temp; ++ frag_pn_check = true; ++ multicast_check = true; ++ continue; ++ } ++ + ath10k_htt_rx_h_csum_offload(msdu); ++ ++ if (frag && !fill_crypt_header && ++ enctype == HTT_RX_MPDU_ENCRYPT_TKIP_WPA) ++ status->flag &= ~RX_FLAG_MMIC_STRIPPED; ++ + ath10k_htt_rx_h_undecap(ar, msdu, status, first_hdr, enctype, + is_decrypted); + +@@ -1884,6 +1996,11 @@ static void ath10k_htt_rx_h_mpdu(struct ath10k *ar, + + hdr = (void *)msdu->data; + hdr->frame_control &= ~__cpu_to_le16(IEEE80211_FCTL_PROTECTED); ++ ++ if (frag && !fill_crypt_header && ++ enctype == HTT_RX_MPDU_ENCRYPT_TKIP_WPA) ++ status->flag &= ~RX_FLAG_IV_STRIPPED & ++ ~RX_FLAG_MMIC_STRIPPED; + } + } + +@@ -1991,14 +2108,62 @@ static void 
ath10k_htt_rx_h_unchain(struct ath10k *ar, + ath10k_unchain_msdu(amsdu, unchain_cnt); + } + ++static bool ath10k_htt_rx_validate_amsdu(struct ath10k *ar, ++ struct sk_buff_head *amsdu) ++{ ++ u8 *subframe_hdr; ++ struct sk_buff *first; ++ bool is_first, is_last; ++ struct htt_rx_desc *rxd; ++ struct ieee80211_hdr *hdr; ++ size_t hdr_len, crypto_len; ++ enum htt_rx_mpdu_encrypt_type enctype; ++ int bytes_aligned = ar->hw_params.decap_align_bytes; ++ ++ first = skb_peek(amsdu); ++ ++ rxd = (void *)first->data - sizeof(*rxd); ++ hdr = (void *)rxd->rx_hdr_status; ++ ++ is_first = !!(rxd->msdu_end.common.info0 & ++ __cpu_to_le32(RX_MSDU_END_INFO0_FIRST_MSDU)); ++ is_last = !!(rxd->msdu_end.common.info0 & ++ __cpu_to_le32(RX_MSDU_END_INFO0_LAST_MSDU)); ++ ++ /* Return in case of non-aggregated msdu */ ++ if (is_first && is_last) ++ return true; ++ ++ /* First msdu flag is not set for the first msdu of the list */ ++ if (!is_first) ++ return false; ++ ++ enctype = MS(__le32_to_cpu(rxd->mpdu_start.info0), ++ RX_MPDU_START_INFO0_ENCRYPT_TYPE); ++ ++ hdr_len = ieee80211_hdrlen(hdr->frame_control); ++ crypto_len = ath10k_htt_rx_crypto_param_len(ar, enctype); ++ ++ subframe_hdr = (u8 *)hdr + round_up(hdr_len, bytes_aligned) + ++ crypto_len; ++ ++ /* Validate if the amsdu has a proper first subframe. ++ * There are chances a single msdu can be received as amsdu when ++ * the unauthenticated amsdu flag of a QoS header ++ * gets flipped in non-SPP AMSDU's, in such cases the first ++ * subframe has llc/snap header in place of a valid da. ++ * return false if the da matches rfc1042 pattern ++ */ ++ if (ether_addr_equal(subframe_hdr, rfc1042_header)) ++ return false; ++ ++ return true; ++} ++ + static bool ath10k_htt_rx_amsdu_allowed(struct ath10k *ar, + struct sk_buff_head *amsdu, + struct ieee80211_rx_status *rx_status) + { +- /* FIXME: It might be a good idea to do some fuzzy-testing to drop +- * invalid/dangerous frames. +- */ +- + if (!rx_status->freq) { + ath10k_dbg(ar, ATH10K_DBG_HTT, "no channel configured; ignoring frame(s)!\n"); + return false; +@@ -2009,6 +2174,11 @@ static bool ath10k_htt_rx_amsdu_allowed(struct ath10k *ar, + return false; + } + ++ if (!ath10k_htt_rx_validate_amsdu(ar, amsdu)) { ++ ath10k_dbg(ar, ATH10K_DBG_HTT, "invalid amsdu received\n"); ++ return false; ++ } ++ + return true; + } + +@@ -2071,7 +2241,8 @@ static int ath10k_htt_rx_handle_amsdu(struct ath10k_htt *htt) + ath10k_htt_rx_h_unchain(ar, &amsdu, &drop_cnt, &unchain_cnt); + + ath10k_htt_rx_h_filter(ar, &amsdu, rx_status, &drop_cnt_filter); +- ath10k_htt_rx_h_mpdu(ar, &amsdu, rx_status, true, first_hdr, &err); ++ ath10k_htt_rx_h_mpdu(ar, &amsdu, rx_status, true, first_hdr, &err, 0, ++ false); + msdus_to_queue = skb_queue_len(&amsdu); + ath10k_htt_rx_h_enqueue(ar, &amsdu, rx_status); + +@@ -2204,6 +2375,11 @@ static bool ath10k_htt_rx_proc_rx_ind_hl(struct ath10k_htt *htt, + fw_desc = &rx->fw_desc; + rx_desc_len = fw_desc->len; + ++ if (fw_desc->u.bits.discard) { ++ ath10k_dbg(ar, ATH10K_DBG_HTT, "htt discard mpdu\n"); ++ goto err; ++ } ++ + /* I have not yet seen any case where num_mpdu_ranges > 1. + * qcacld does not seem handle that case either, so we introduce the + * same limitiation here as well. 
+@@ -2509,6 +2685,13 @@ static bool ath10k_htt_rx_proc_rx_frag_ind_hl(struct ath10k_htt *htt, + rx_desc = (struct htt_hl_rx_desc *)(skb->data + tot_hdr_len); + rx_desc_info = __le32_to_cpu(rx_desc->info); + ++ hdr = (struct ieee80211_hdr *)((u8 *)rx_desc + rx_hl->fw_desc.len); ++ ++ if (is_multicast_ether_addr(hdr->addr1)) { ++ /* Discard the fragment with multicast DA */ ++ goto err; ++ } ++ + if (!MS(rx_desc_info, HTT_RX_DESC_HL_INFO_ENCRYPTED)) { + spin_unlock_bh(&ar->data_lock); + return ath10k_htt_rx_proc_rx_ind_hl(htt, &resp->rx_ind_hl, skb, +@@ -2516,8 +2699,6 @@ static bool ath10k_htt_rx_proc_rx_frag_ind_hl(struct ath10k_htt *htt, + HTT_RX_NON_TKIP_MIC); + } + +- hdr = (struct ieee80211_hdr *)((u8 *)rx_desc + rx_hl->fw_desc.len); +- + if (ieee80211_has_retry(hdr->frame_control)) + goto err; + +@@ -3027,7 +3208,7 @@ static int ath10k_htt_rx_in_ord_ind(struct ath10k *ar, struct sk_buff *skb) + ath10k_htt_rx_h_ppdu(ar, &amsdu, status, vdev_id); + ath10k_htt_rx_h_filter(ar, &amsdu, status, NULL); + ath10k_htt_rx_h_mpdu(ar, &amsdu, status, false, NULL, +- NULL); ++ NULL, peer_id, frag); + ath10k_htt_rx_h_enqueue(ar, &amsdu, status); + break; + case -EAGAIN: +diff --git a/drivers/net/wireless/ath/ath10k/rx_desc.h b/drivers/net/wireless/ath/ath10k/rx_desc.h +index f2b6bf8f0d60d..705b6295e4663 100644 +--- a/drivers/net/wireless/ath/ath10k/rx_desc.h ++++ b/drivers/net/wireless/ath/ath10k/rx_desc.h +@@ -1282,7 +1282,19 @@ struct fw_rx_desc_base { + #define FW_RX_DESC_UDP (1 << 6) + + struct fw_rx_desc_hl { +- u8 info0; ++ union { ++ struct { ++ u8 discard:1, ++ forward:1, ++ any_err:1, ++ dup_err:1, ++ reserved:1, ++ inspect:1, ++ extension:2; ++ } bits; ++ u8 info0; ++ } u; ++ + u8 version; + u8 len; + u8 flags; +diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.c b/drivers/net/wireless/ath/ath11k/dp_rx.c +index 850ad38b888f4..e3b620dd4d928 100644 +--- a/drivers/net/wireless/ath/ath11k/dp_rx.c ++++ b/drivers/net/wireless/ath/ath11k/dp_rx.c +@@ -854,6 +854,24 @@ static void ath11k_dp_rx_frags_cleanup(struct dp_rx_tid *rx_tid, bool rel_link_d + __skb_queue_purge(&rx_tid->rx_frags); + } + ++void ath11k_peer_frags_flush(struct ath11k *ar, struct ath11k_peer *peer) ++{ ++ struct dp_rx_tid *rx_tid; ++ int i; ++ ++ lockdep_assert_held(&ar->ab->base_lock); ++ ++ for (i = 0; i <= IEEE80211_NUM_TIDS; i++) { ++ rx_tid = &peer->rx_tid[i]; ++ ++ spin_unlock_bh(&ar->ab->base_lock); ++ del_timer_sync(&rx_tid->frag_timer); ++ spin_lock_bh(&ar->ab->base_lock); ++ ++ ath11k_dp_rx_frags_cleanup(rx_tid, true); ++ } ++} ++ + void ath11k_peer_rx_tid_cleanup(struct ath11k *ar, struct ath11k_peer *peer) + { + struct dp_rx_tid *rx_tid; +diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.h b/drivers/net/wireless/ath/ath11k/dp_rx.h +index bf399312b5ff5..623da3bf9dc81 100644 +--- a/drivers/net/wireless/ath/ath11k/dp_rx.h ++++ b/drivers/net/wireless/ath/ath11k/dp_rx.h +@@ -49,6 +49,7 @@ int ath11k_dp_peer_rx_pn_replay_config(struct ath11k_vif *arvif, + const u8 *peer_addr, + enum set_key_cmd key_cmd, + struct ieee80211_key_conf *key); ++void ath11k_peer_frags_flush(struct ath11k *ar, struct ath11k_peer *peer); + void ath11k_peer_rx_tid_cleanup(struct ath11k *ar, struct ath11k_peer *peer); + void ath11k_peer_rx_tid_delete(struct ath11k *ar, + struct ath11k_peer *peer, u8 tid); +diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c +index faa2e678e63ef..7ad0383affcba 100644 +--- a/drivers/net/wireless/ath/ath11k/mac.c ++++ b/drivers/net/wireless/ath/ath11k/mac.c +@@ -2711,6 +2711,12 
@@ static int ath11k_mac_op_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd, + */ + spin_lock_bh(&ab->base_lock); + peer = ath11k_peer_find(ab, arvif->vdev_id, peer_addr); ++ ++ /* flush the fragments cache during key (re)install to ++ * ensure all frags in the new frag list belong to the same key. ++ */ ++ if (peer && cmd == SET_KEY) ++ ath11k_peer_frags_flush(ar, peer); + spin_unlock_bh(&ab->base_lock); + + if (!peer) { +diff --git a/drivers/net/wireless/ath/ath6kl/debug.c b/drivers/net/wireless/ath/ath6kl/debug.c +index 7506cea46f589..433a047f3747b 100644 +--- a/drivers/net/wireless/ath/ath6kl/debug.c ++++ b/drivers/net/wireless/ath/ath6kl/debug.c +@@ -1027,14 +1027,17 @@ static ssize_t ath6kl_lrssi_roam_write(struct file *file, + { + struct ath6kl *ar = file->private_data; + unsigned long lrssi_roam_threshold; ++ int ret; + + if (kstrtoul_from_user(user_buf, count, 0, &lrssi_roam_threshold)) + return -EINVAL; + + ar->lrssi_roam_threshold = lrssi_roam_threshold; + +- ath6kl_wmi_set_roam_lrssi_cmd(ar->wmi, ar->lrssi_roam_threshold); ++ ret = ath6kl_wmi_set_roam_lrssi_cmd(ar->wmi, ar->lrssi_roam_threshold); + ++ if (ret) ++ return ret; + return count; + } + +diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c +index ce8c102df7b3e..633d0ab190314 100644 +--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c ++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c +@@ -1217,13 +1217,9 @@ static struct sdio_driver brcmf_sdmmc_driver = { + }, + }; + +-void brcmf_sdio_register(void) ++int brcmf_sdio_register(void) + { +- int ret; +- +- ret = sdio_register_driver(&brcmf_sdmmc_driver); +- if (ret) +- brcmf_err("sdio_register_driver failed: %d\n", ret); ++ return sdio_register_driver(&brcmf_sdmmc_driver); + } + + void brcmf_sdio_exit(void) +diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bus.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bus.h +index 08f9d47f2e5ca..3f5da3bb6aa59 100644 +--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bus.h ++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bus.h +@@ -275,11 +275,26 @@ void brcmf_bus_add_txhdrlen(struct device *dev, uint len); + + #ifdef CONFIG_BRCMFMAC_SDIO + void brcmf_sdio_exit(void); +-void brcmf_sdio_register(void); ++int brcmf_sdio_register(void); ++#else ++static inline void brcmf_sdio_exit(void) { } ++static inline int brcmf_sdio_register(void) { return 0; } + #endif ++ + #ifdef CONFIG_BRCMFMAC_USB + void brcmf_usb_exit(void); +-void brcmf_usb_register(void); ++int brcmf_usb_register(void); ++#else ++static inline void brcmf_usb_exit(void) { } ++static inline int brcmf_usb_register(void) { return 0; } ++#endif ++ ++#ifdef CONFIG_BRCMFMAC_PCIE ++void brcmf_pcie_exit(void); ++int brcmf_pcie_register(void); ++#else ++static inline void brcmf_pcie_exit(void) { } ++static inline int brcmf_pcie_register(void) { return 0; } + #endif + + #endif /* BRCMFMAC_BUS_H */ +diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c +index ea78fe527c5dc..7e528833aafa0 100644 +--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c ++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c +@@ -1518,40 +1518,34 @@ void brcmf_bus_change_state(struct brcmf_bus *bus, enum brcmf_bus_state state) + } + } + +-static void brcmf_driver_register(struct work_struct *work) +-{ +-#ifdef CONFIG_BRCMFMAC_SDIO +- brcmf_sdio_register(); +-#endif +-#ifdef 
CONFIG_BRCMFMAC_USB +- brcmf_usb_register(); +-#endif +-#ifdef CONFIG_BRCMFMAC_PCIE +- brcmf_pcie_register(); +-#endif +-} +-static DECLARE_WORK(brcmf_driver_work, brcmf_driver_register); +- + int __init brcmf_core_init(void) + { +- if (!schedule_work(&brcmf_driver_work)) +- return -EBUSY; ++ int err; + ++ err = brcmf_sdio_register(); ++ if (err) ++ return err; ++ ++ err = brcmf_usb_register(); ++ if (err) ++ goto error_usb_register; ++ ++ err = brcmf_pcie_register(); ++ if (err) ++ goto error_pcie_register; + return 0; ++ ++error_pcie_register: ++ brcmf_usb_exit(); ++error_usb_register: ++ brcmf_sdio_exit(); ++ return err; + } + + void __exit brcmf_core_exit(void) + { +- cancel_work_sync(&brcmf_driver_work); +- +-#ifdef CONFIG_BRCMFMAC_SDIO + brcmf_sdio_exit(); +-#endif +-#ifdef CONFIG_BRCMFMAC_USB + brcmf_usb_exit(); +-#endif +-#ifdef CONFIG_BRCMFMAC_PCIE + brcmf_pcie_exit(); +-#endif + } + +diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c +index ad79e3b7e74a3..143a705b5cb3a 100644 +--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c ++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c +@@ -2140,15 +2140,10 @@ static struct pci_driver brcmf_pciedrvr = { + }; + + +-void brcmf_pcie_register(void) ++int brcmf_pcie_register(void) + { +- int err; +- + brcmf_dbg(PCIE, "Enter\n"); +- err = pci_register_driver(&brcmf_pciedrvr); +- if (err) +- brcmf_err(NULL, "PCIE driver registration failed, err=%d\n", +- err); ++ return pci_register_driver(&brcmf_pciedrvr); + } + + +diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.h +index d026401d20010..8e6c227e8315c 100644 +--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.h ++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.h +@@ -11,9 +11,4 @@ struct brcmf_pciedev { + struct brcmf_pciedev_info *devinfo; + }; + +- +-void brcmf_pcie_exit(void); +-void brcmf_pcie_register(void); +- +- + #endif /* BRCMFMAC_PCIE_H */ +diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.c +index 586f4dfc638b9..9fb68c2dc7e39 100644 +--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.c ++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.c +@@ -1584,12 +1584,8 @@ void brcmf_usb_exit(void) + usb_deregister(&brcmf_usbdrvr); + } + +-void brcmf_usb_register(void) ++int brcmf_usb_register(void) + { +- int ret; +- + brcmf_dbg(USB, "Enter\n"); +- ret = usb_register(&brcmf_usbdrvr); +- if (ret) +- brcmf_err("usb_register failed %d\n", ret); ++ return usb_register(&brcmf_usbdrvr); + } +diff --git a/drivers/net/wireless/marvell/libertas/mesh.c b/drivers/net/wireless/marvell/libertas/mesh.c +index f5b78257d5518..c68814841583f 100644 +--- a/drivers/net/wireless/marvell/libertas/mesh.c ++++ b/drivers/net/wireless/marvell/libertas/mesh.c +@@ -801,24 +801,6 @@ static const struct attribute_group mesh_ie_group = { + .attrs = mesh_ie_attrs, + }; + +-static void lbs_persist_config_init(struct net_device *dev) +-{ +- int ret; +- ret = sysfs_create_group(&(dev->dev.kobj), &boot_opts_group); +- if (ret) +- pr_err("failed to create boot_opts_group.\n"); +- +- ret = sysfs_create_group(&(dev->dev.kobj), &mesh_ie_group); +- if (ret) +- pr_err("failed to create mesh_ie_group.\n"); +-} +- +-static void lbs_persist_config_remove(struct net_device *dev) +-{ +- sysfs_remove_group(&(dev->dev.kobj), &boot_opts_group); +- 
sysfs_remove_group(&(dev->dev.kobj), &mesh_ie_group); +-} +- + + /*************************************************************************** + * Initializing and starting, stopping mesh +@@ -1014,6 +996,10 @@ static int lbs_add_mesh(struct lbs_private *priv) + SET_NETDEV_DEV(priv->mesh_dev, priv->dev->dev.parent); + + mesh_dev->flags |= IFF_BROADCAST | IFF_MULTICAST; ++ mesh_dev->sysfs_groups[0] = &lbs_mesh_attr_group; ++ mesh_dev->sysfs_groups[1] = &boot_opts_group; ++ mesh_dev->sysfs_groups[2] = &mesh_ie_group; ++ + /* Register virtual mesh interface */ + ret = register_netdev(mesh_dev); + if (ret) { +@@ -1021,19 +1007,10 @@ static int lbs_add_mesh(struct lbs_private *priv) + goto err_free_netdev; + } + +- ret = sysfs_create_group(&(mesh_dev->dev.kobj), &lbs_mesh_attr_group); +- if (ret) +- goto err_unregister; +- +- lbs_persist_config_init(mesh_dev); +- + /* Everything successful */ + ret = 0; + goto done; + +-err_unregister: +- unregister_netdev(mesh_dev); +- + err_free_netdev: + free_netdev(mesh_dev); + +@@ -1054,8 +1031,6 @@ void lbs_remove_mesh(struct lbs_private *priv) + + netif_stop_queue(mesh_dev); + netif_carrier_off(mesh_dev); +- sysfs_remove_group(&(mesh_dev->dev.kobj), &lbs_mesh_attr_group); +- lbs_persist_config_remove(mesh_dev); + unregister_netdev(mesh_dev); + priv->mesh_dev = NULL; + kfree(mesh_dev->ieee80211_ptr); +diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c +index d958b5da9b88a..4df4f37e6b895 100644 +--- a/drivers/nvme/target/tcp.c ++++ b/drivers/nvme/target/tcp.c +@@ -538,7 +538,7 @@ static void nvmet_tcp_queue_response(struct nvmet_req *req) + * nvmet_req_init is completed. + */ + if (queue->rcv_state == NVMET_TCP_RECV_PDU && +- len && len < cmd->req.port->inline_data_size && ++ len && len <= cmd->req.port->inline_data_size && + nvme_is_write(cmd->req.cmd)) + return; + } +diff --git a/drivers/platform/x86/hp-wireless.c b/drivers/platform/x86/hp-wireless.c +index 12c31fd5d5ae2..0753ef18e7211 100644 +--- a/drivers/platform/x86/hp-wireless.c ++++ b/drivers/platform/x86/hp-wireless.c +@@ -17,12 +17,14 @@ MODULE_LICENSE("GPL"); + MODULE_AUTHOR("Alex Hung"); + MODULE_ALIAS("acpi*:HPQ6001:*"); + MODULE_ALIAS("acpi*:WSTADEF:*"); ++MODULE_ALIAS("acpi*:AMDI0051:*"); + + static struct input_dev *hpwl_input_dev; + + static const struct acpi_device_id hpwl_ids[] = { + {"HPQ6001", 0}, + {"WSTADEF", 0}, ++ {"AMDI0051", 0}, + {"", 0}, + }; + +diff --git a/drivers/platform/x86/hp_accel.c b/drivers/platform/x86/hp_accel.c +index 799cbe2ffcf36..8c0867bda8280 100644 +--- a/drivers/platform/x86/hp_accel.c ++++ b/drivers/platform/x86/hp_accel.c +@@ -88,6 +88,9 @@ MODULE_DEVICE_TABLE(acpi, lis3lv02d_device_ids); + static int lis3lv02d_acpi_init(struct lis3lv02d *lis3) + { + struct acpi_device *dev = lis3->bus_priv; ++ if (!lis3->init_required) ++ return 0; ++ + if (acpi_evaluate_object(dev->handle, METHOD_NAME__INI, + NULL, NULL) != AE_OK) + return -EINVAL; +@@ -356,6 +359,7 @@ static int lis3lv02d_add(struct acpi_device *device) + } + + /* call the core layer do its init */ ++ lis3_dev.init_required = true; + ret = lis3lv02d_init_device(&lis3_dev); + if (ret) + return ret; +@@ -403,11 +407,27 @@ static int lis3lv02d_suspend(struct device *dev) + + static int lis3lv02d_resume(struct device *dev) + { ++ lis3_dev.init_required = false; ++ lis3lv02d_poweron(&lis3_dev); ++ return 0; ++} ++ ++static int lis3lv02d_restore(struct device *dev) ++{ ++ lis3_dev.init_required = true; + lis3lv02d_poweron(&lis3_dev); + return 0; + } + +-static SIMPLE_DEV_PM_OPS(hp_accel_pm, 
lis3lv02d_suspend, lis3lv02d_resume); ++static const struct dev_pm_ops hp_accel_pm = { ++ .suspend = lis3lv02d_suspend, ++ .resume = lis3lv02d_resume, ++ .freeze = lis3lv02d_suspend, ++ .thaw = lis3lv02d_resume, ++ .poweroff = lis3lv02d_suspend, ++ .restore = lis3lv02d_restore, ++}; ++ + #define HP_ACCEL_PM (&hp_accel_pm) + #else + #define HP_ACCEL_PM NULL +diff --git a/drivers/platform/x86/intel_punit_ipc.c b/drivers/platform/x86/intel_punit_ipc.c +index 05cced59e251a..f58b8543f6ac5 100644 +--- a/drivers/platform/x86/intel_punit_ipc.c ++++ b/drivers/platform/x86/intel_punit_ipc.c +@@ -312,6 +312,7 @@ static const struct acpi_device_id punit_ipc_acpi_ids[] = { + { "INT34D4", 0 }, + { } + }; ++MODULE_DEVICE_TABLE(acpi, punit_ipc_acpi_ids); + + static struct platform_driver intel_punit_ipc_driver = { + .probe = intel_punit_ipc_probe, +diff --git a/drivers/platform/x86/touchscreen_dmi.c b/drivers/platform/x86/touchscreen_dmi.c +index c44a6e8dceb8c..8618c44106c2b 100644 +--- a/drivers/platform/x86/touchscreen_dmi.c ++++ b/drivers/platform/x86/touchscreen_dmi.c +@@ -115,6 +115,32 @@ static const struct ts_dmi_data chuwi_hi10_plus_data = { + .properties = chuwi_hi10_plus_props, + }; + ++static const struct property_entry chuwi_hi10_pro_props[] = { ++ PROPERTY_ENTRY_U32("touchscreen-min-x", 8), ++ PROPERTY_ENTRY_U32("touchscreen-min-y", 8), ++ PROPERTY_ENTRY_U32("touchscreen-size-x", 1912), ++ PROPERTY_ENTRY_U32("touchscreen-size-y", 1272), ++ PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), ++ PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-chuwi-hi10-pro.fw"), ++ PROPERTY_ENTRY_U32("silead,max-fingers", 10), ++ PROPERTY_ENTRY_BOOL("silead,home-button"), ++ { } ++}; ++ ++static const struct ts_dmi_data chuwi_hi10_pro_data = { ++ .embedded_fw = { ++ .name = "silead/gsl1680-chuwi-hi10-pro.fw", ++ .prefix = { 0xf0, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00 }, ++ .length = 42504, ++ .sha256 = { 0xdb, 0x92, 0x68, 0xa8, 0xdb, 0x81, 0x31, 0x00, ++ 0x1f, 0x58, 0x89, 0xdb, 0x19, 0x1b, 0x15, 0x8c, ++ 0x05, 0x14, 0xf4, 0x95, 0xba, 0x15, 0x45, 0x98, ++ 0x42, 0xa3, 0xbb, 0x65, 0xe3, 0x30, 0xa5, 0x93 }, ++ }, ++ .acpi_name = "MSSL1680:00", ++ .properties = chuwi_hi10_pro_props, ++}; ++ + static const struct property_entry chuwi_vi8_props[] = { + PROPERTY_ENTRY_U32("touchscreen-min-x", 4), + PROPERTY_ENTRY_U32("touchscreen-min-y", 6), +@@ -889,6 +915,15 @@ const struct dmi_system_id touchscreen_dmi_table[] = { + DMI_MATCH(DMI_BOARD_NAME, "Cherry Trail CR"), + }, + }, ++ { ++ /* Chuwi Hi10 Prus (CWI597) */ ++ .driver_data = (void *)&chuwi_hi10_pro_data, ++ .matches = { ++ DMI_MATCH(DMI_BOARD_VENDOR, "Hampoo"), ++ DMI_MATCH(DMI_PRODUCT_NAME, "Hi10 pro tablet"), ++ DMI_MATCH(DMI_BOARD_NAME, "Cherry Trail CR"), ++ }, ++ }, + { + /* Chuwi Vi8 (CWI506) */ + .driver_data = (void *)&chuwi_vi8_data, +@@ -1070,6 +1105,14 @@ const struct dmi_system_id touchscreen_dmi_table[] = { + DMI_MATCH(DMI_BIOS_VERSION, "jumperx.T87.KFBNEEA"), + }, + }, ++ { ++ /* Mediacom WinPad 7.0 W700 (same hw as Wintron surftab 7") */ ++ .driver_data = (void *)&trekstor_surftab_wintron70_data, ++ .matches = { ++ DMI_MATCH(DMI_SYS_VENDOR, "MEDIACOM"), ++ DMI_MATCH(DMI_PRODUCT_NAME, "WinPad 7 W10 - WPW700"), ++ }, ++ }, + { + /* Mediacom Flexbook Edge 11 (same hw as TS Primebook C11) */ + .driver_data = (void *)&trekstor_primebook_c11_data, +diff --git a/drivers/ptp/ptp_ocp.c b/drivers/ptp/ptp_ocp.c +index 530e5f90095e6..0d1034e3ed0f2 100644 +--- a/drivers/ptp/ptp_ocp.c ++++ b/drivers/ptp/ptp_ocp.c +@@ -324,7 +324,7 @@ ptp_ocp_probe(struct pci_dev 
*pdev, const struct pci_device_id *id) + if (!bp->base) { + dev_err(&pdev->dev, "io_remap bar0\n"); + err = -ENOMEM; +- goto out; ++ goto out_release_regions; + } + bp->reg = bp->base + OCP_REGISTER_OFFSET; + bp->tod = bp->base + TOD_REGISTER_OFFSET; +@@ -347,6 +347,8 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id) + return 0; + + out: ++ pci_iounmap(pdev, bp->base); ++out_release_regions: + pci_release_regions(pdev); + out_disable: + pci_disable_device(pdev); +diff --git a/drivers/s390/block/dasd_diag.c b/drivers/s390/block/dasd_diag.c +index 1b9e1442e6a50..fd42a5fffaed1 100644 +--- a/drivers/s390/block/dasd_diag.c ++++ b/drivers/s390/block/dasd_diag.c +@@ -642,12 +642,18 @@ static void dasd_diag_setup_blk_queue(struct dasd_block *block) + blk_queue_segment_boundary(q, PAGE_SIZE - 1); + } + ++static int dasd_diag_pe_handler(struct dasd_device *device, ++ __u8 tbvpm, __u8 fcsecpm) ++{ ++ return dasd_generic_verify_path(device, tbvpm); ++} ++ + static struct dasd_discipline dasd_diag_discipline = { + .owner = THIS_MODULE, + .name = "DIAG", + .ebcname = "DIAG", + .check_device = dasd_diag_check_device, +- .verify_path = dasd_generic_verify_path, ++ .pe_handler = dasd_diag_pe_handler, + .fill_geometry = dasd_diag_fill_geometry, + .setup_blk_queue = dasd_diag_setup_blk_queue, + .start_IO = dasd_start_diag, +diff --git a/drivers/s390/block/dasd_fba.c b/drivers/s390/block/dasd_fba.c +index 1aeb68794ce8b..417437844d343 100644 +--- a/drivers/s390/block/dasd_fba.c ++++ b/drivers/s390/block/dasd_fba.c +@@ -800,13 +800,19 @@ static void dasd_fba_setup_blk_queue(struct dasd_block *block) + blk_queue_flag_set(QUEUE_FLAG_DISCARD, q); + } + ++static int dasd_fba_pe_handler(struct dasd_device *device, ++ __u8 tbvpm, __u8 fcsecpm) ++{ ++ return dasd_generic_verify_path(device, tbvpm); ++} ++ + static struct dasd_discipline dasd_fba_discipline = { + .owner = THIS_MODULE, + .name = "FBA ", + .ebcname = "FBA ", + .check_device = dasd_fba_check_characteristics, + .do_analysis = dasd_fba_do_analysis, +- .verify_path = dasd_generic_verify_path, ++ .pe_handler = dasd_fba_pe_handler, + .setup_blk_queue = dasd_fba_setup_blk_queue, + .fill_geometry = dasd_fba_fill_geometry, + .start_IO = dasd_start_IO, +diff --git a/drivers/s390/block/dasd_int.h b/drivers/s390/block/dasd_int.h +index b8a04c42d1d2e..015239c7d5139 100644 +--- a/drivers/s390/block/dasd_int.h ++++ b/drivers/s390/block/dasd_int.h +@@ -297,7 +297,6 @@ struct dasd_discipline { + * e.g. verify that new path is compatible with the current + * configuration. + */ +- int (*verify_path)(struct dasd_device *, __u8); + int (*pe_handler)(struct dasd_device *, __u8, __u8); + + /* +diff --git a/drivers/s390/cio/vfio_ccw_cp.c b/drivers/s390/cio/vfio_ccw_cp.c +index b9febc581b1f4..8d1b2771c1aa0 100644 +--- a/drivers/s390/cio/vfio_ccw_cp.c ++++ b/drivers/s390/cio/vfio_ccw_cp.c +@@ -638,6 +638,10 @@ int cp_init(struct channel_program *cp, struct device *mdev, union orb *orb) + static DEFINE_RATELIMIT_STATE(ratelimit_state, 5 * HZ, 1); + int ret; + ++ /* this is an error in the caller */ ++ if (cp->initialized) ++ return -EBUSY; ++ + /* + * We only support prefetching the channel program. We assume all channel + * programs executed by supported guests likewise support prefetching. 
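
/*
 * Editor's illustrative sketch (C) — not part of the upstream 5.12.9
 * patch. The vfio_ccw cp_init() hunk above rejects re-initialization of
 * a live channel program with -EBUSY before any state is touched. A
 * minimal standalone restatement of that guard pattern, using assumed
 * names (example_cp, example_cp_init):
 */
#include <errno.h>
#include <stdbool.h>

struct example_cp {
	bool initialized;
	/* ... remaining channel-program state ... */
};

static int example_cp_init(struct example_cp *cp)
{
	/* re-entry is a caller bug, not a transient condition */
	if (cp->initialized)
		return -EBUSY;

	/* ... one-time setup would go here ... */
	cp->initialized = true;
	return 0;
}
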
+diff --git a/drivers/scsi/BusLogic.c b/drivers/scsi/BusLogic.c +index ccb061ab0a0ad..7231de2767a96 100644 +--- a/drivers/scsi/BusLogic.c ++++ b/drivers/scsi/BusLogic.c +@@ -3078,11 +3078,11 @@ static int blogic_qcmd_lck(struct scsi_cmnd *command, + ccb->opcode = BLOGIC_INITIATOR_CCB_SG; + ccb->datalen = count * sizeof(struct blogic_sg_seg); + if (blogic_multimaster_type(adapter)) +- ccb->data = (void *)((unsigned int) ccb->dma_handle + ++ ccb->data = (unsigned int) ccb->dma_handle + + ((unsigned long) &ccb->sglist - +- (unsigned long) ccb)); ++ (unsigned long) ccb); + else +- ccb->data = ccb->sglist; ++ ccb->data = virt_to_32bit_virt(ccb->sglist); + + scsi_for_each_sg(command, sg, count, i) { + ccb->sglist[i].segbytes = sg_dma_len(sg); +diff --git a/drivers/scsi/BusLogic.h b/drivers/scsi/BusLogic.h +index 6182cc8a0344a..e081ad47d1cf4 100644 +--- a/drivers/scsi/BusLogic.h ++++ b/drivers/scsi/BusLogic.h +@@ -814,7 +814,7 @@ struct blogic_ccb { + unsigned char cdblen; /* Byte 2 */ + unsigned char sense_datalen; /* Byte 3 */ + u32 datalen; /* Bytes 4-7 */ +- void *data; /* Bytes 8-11 */ ++ u32 data; /* Bytes 8-11 */ + unsigned char:8; /* Byte 12 */ + unsigned char:8; /* Byte 13 */ + enum blogic_adapter_status adapter_status; /* Byte 14 */ +diff --git a/drivers/scsi/aic7xxx/scsi_message.h b/drivers/scsi/aic7xxx/scsi_message.h +index a7515c3039edb..53343a6d8ae19 100644 +--- a/drivers/scsi/aic7xxx/scsi_message.h ++++ b/drivers/scsi/aic7xxx/scsi_message.h +@@ -3,6 +3,17 @@ + * $FreeBSD: src/sys/cam/scsi/scsi_message.h,v 1.2 2000/05/01 20:21:29 peter Exp $ + */ + ++/* Messages (1 byte) */ /* I/T (M)andatory or (O)ptional */ ++#define MSG_SAVEDATAPOINTER 0x02 /* O/O */ ++#define MSG_RESTOREPOINTERS 0x03 /* O/O */ ++#define MSG_DISCONNECT 0x04 /* O/O */ ++#define MSG_MESSAGE_REJECT 0x07 /* M/M */ ++#define MSG_NOOP 0x08 /* M/M */ ++ ++/* Messages (2 byte) */ ++#define MSG_SIMPLE_Q_TAG 0x20 /* O/O */ ++#define MSG_IGN_WIDE_RESIDUE 0x23 /* O/O */ ++ + /* Identify message */ /* M/M */ + #define MSG_IDENTIFYFLAG 0x80 + #define MSG_IDENTIFY_DISCFLAG 0x40 +diff --git a/drivers/scsi/libsas/sas_port.c b/drivers/scsi/libsas/sas_port.c +index 19cf418928faa..e3d03d744713d 100644 +--- a/drivers/scsi/libsas/sas_port.c ++++ b/drivers/scsi/libsas/sas_port.c +@@ -25,7 +25,7 @@ static bool phy_is_wideport_member(struct asd_sas_port *port, struct asd_sas_phy + + static void sas_resume_port(struct asd_sas_phy *phy) + { +- struct domain_device *dev; ++ struct domain_device *dev, *n; + struct asd_sas_port *port = phy->port; + struct sas_ha_struct *sas_ha = phy->ha; + struct sas_internal *si = to_sas_internal(sas_ha->core.shost->transportt); +@@ -44,7 +44,7 @@ static void sas_resume_port(struct asd_sas_phy *phy) + * 1/ presume every device came back + * 2/ force the next revalidation to check all expander phys + */ +- list_for_each_entry(dev, &port->dev_list, dev_list_node) { ++ list_for_each_entry_safe(dev, n, &port->dev_list, dev_list_node) { + int i, rc; + + rc = sas_notify_lldd_dev_found(dev); +diff --git a/drivers/scsi/pm8001/pm8001_hwi.c b/drivers/scsi/pm8001/pm8001_hwi.c +index 1b1a57f46989a..c2a38a172904c 100644 +--- a/drivers/scsi/pm8001/pm8001_hwi.c ++++ b/drivers/scsi/pm8001/pm8001_hwi.c +@@ -3709,11 +3709,13 @@ static int mpi_hw_event(struct pm8001_hba_info *pm8001_ha, void* piomb) + case HW_EVENT_PHY_START_STATUS: + pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_PHY_START_STATUS status = %x\n", + status); +- if (status == 0) { ++ if (status == 0) + phy->phy_state = 1; +- if (pm8001_ha->flags == PM8001F_RUN_TIME && +- 
phy->enable_completion != NULL) +- complete(phy->enable_completion); ++ ++ if (pm8001_ha->flags == PM8001F_RUN_TIME && ++ phy->enable_completion != NULL) { ++ complete(phy->enable_completion); ++ phy->enable_completion = NULL; + } + break; + case HW_EVENT_SAS_PHY_UP: +diff --git a/drivers/scsi/pm8001/pm8001_init.c b/drivers/scsi/pm8001/pm8001_init.c +index bd626ef876dac..4f3ec2bba8c95 100644 +--- a/drivers/scsi/pm8001/pm8001_init.c ++++ b/drivers/scsi/pm8001/pm8001_init.c +@@ -1144,8 +1144,8 @@ static int pm8001_pci_probe(struct pci_dev *pdev, + goto err_out_shost; + } + list_add_tail(&pm8001_ha->list, &hba_list); +- scsi_scan_host(pm8001_ha->shost); + pm8001_ha->flags = PM8001F_RUN_TIME; ++ scsi_scan_host(pm8001_ha->shost); + return 0; + + err_out_shost: +diff --git a/drivers/scsi/pm8001/pm8001_sas.c b/drivers/scsi/pm8001/pm8001_sas.c +index a98d4496ff8b6..0a637609504e8 100644 +--- a/drivers/scsi/pm8001/pm8001_sas.c ++++ b/drivers/scsi/pm8001/pm8001_sas.c +@@ -264,12 +264,17 @@ void pm8001_scan_start(struct Scsi_Host *shost) + int i; + struct pm8001_hba_info *pm8001_ha; + struct sas_ha_struct *sha = SHOST_TO_SAS_HA(shost); ++ DECLARE_COMPLETION_ONSTACK(completion); + pm8001_ha = sha->lldd_ha; + /* SAS_RE_INITIALIZATION not available in SPCv/ve */ + if (pm8001_ha->chip_id == chip_8001) + PM8001_CHIP_DISP->sas_re_init_req(pm8001_ha); +- for (i = 0; i < pm8001_ha->chip->n_phy; ++i) ++ for (i = 0; i < pm8001_ha->chip->n_phy; ++i) { ++ pm8001_ha->phy[i].enable_completion = &completion; + PM8001_CHIP_DISP->phy_start_req(pm8001_ha, i); ++ wait_for_completion(&completion); ++ msleep(300); ++ } + } + + int pm8001_scan_finished(struct Scsi_Host *shost, unsigned long time) +diff --git a/drivers/scsi/pm8001/pm80xx_hwi.c b/drivers/scsi/pm8001/pm80xx_hwi.c +index c6b0834e38061..5de7adfabd57b 100644 +--- a/drivers/scsi/pm8001/pm80xx_hwi.c ++++ b/drivers/scsi/pm8001/pm80xx_hwi.c +@@ -3485,13 +3485,13 @@ static int mpi_phy_start_resp(struct pm8001_hba_info *pm8001_ha, void *piomb) + pm8001_dbg(pm8001_ha, INIT, + "phy start resp status:0x%x, phyid:0x%x\n", + status, phy_id); +- if (status == 0) { ++ if (status == 0) + phy->phy_state = PHY_LINK_DOWN; +- if (pm8001_ha->flags == PM8001F_RUN_TIME && +- phy->enable_completion != NULL) { +- complete(phy->enable_completion); +- phy->enable_completion = NULL; +- } ++ ++ if (pm8001_ha->flags == PM8001F_RUN_TIME && ++ phy->enable_completion != NULL) { ++ complete(phy->enable_completion); ++ phy->enable_completion = NULL; + } + return 0; + +diff --git a/drivers/scsi/ufs/ufs-mediatek.c b/drivers/scsi/ufs/ufs-mediatek.c +index a981f261b3043..aee3cfc7142a4 100644 +--- a/drivers/scsi/ufs/ufs-mediatek.c ++++ b/drivers/scsi/ufs/ufs-mediatek.c +@@ -922,6 +922,7 @@ static void ufs_mtk_vreg_set_lpm(struct ufs_hba *hba, bool lpm) + static int ufs_mtk_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op) + { + int err; ++ struct arm_smccc_res res; + + if (ufshcd_is_link_hibern8(hba)) { + err = ufs_mtk_link_set_lpm(hba); +@@ -941,6 +942,9 @@ static int ufs_mtk_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op) + goto fail; + } + ++ if (ufshcd_is_link_off(hba)) ++ ufs_mtk_device_reset_ctrl(0, res); ++ + return 0; + fail: + /* +diff --git a/drivers/spi/spi-fsl-dspi.c b/drivers/spi/spi-fsl-dspi.c +index 0287366874882..fb45e6af66381 100644 +--- a/drivers/spi/spi-fsl-dspi.c ++++ b/drivers/spi/spi-fsl-dspi.c +@@ -1375,11 +1375,13 @@ poll_mode: + ret = spi_register_controller(ctlr); + if (ret != 0) { + dev_err(&pdev->dev, "Problem registering DSPI ctlr\n"); +- goto out_free_irq; ++ goto 
out_release_dma; + } + + return ret; + ++out_release_dma: ++ dspi_release_dma(dspi); + out_free_irq: + if (dspi->irq) + free_irq(dspi->irq, dspi); +diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c +index 8da4fe475b84b..6ae7418f648ca 100644 +--- a/drivers/spi/spi.c ++++ b/drivers/spi/spi.c +@@ -823,16 +823,29 @@ static void spi_set_cs(struct spi_device *spi, bool enable, bool force) + + if (spi->cs_gpiod || gpio_is_valid(spi->cs_gpio)) { + if (!(spi->mode & SPI_NO_CS)) { +- if (spi->cs_gpiod) +- /* polarity handled by gpiolib */ +- gpiod_set_value_cansleep(spi->cs_gpiod, +- enable1); +- else ++ if (spi->cs_gpiod) { ++ /* ++ * Historically ACPI has no means of the GPIO polarity and ++ * thus the SPISerialBus() resource defines it on the per-chip ++ * basis. In order to avoid a chain of negations, the GPIO ++ * polarity is considered being Active High. Even for the cases ++ * when _DSD() is involved (in the updated versions of ACPI) ++ * the GPIO CS polarity must be defined Active High to avoid ++ * ambiguity. That's why we use enable, that takes SPI_CS_HIGH ++ * into account. ++ */ ++ if (has_acpi_companion(&spi->dev)) ++ gpiod_set_value_cansleep(spi->cs_gpiod, !enable); ++ else ++ /* Polarity handled by GPIO library */ ++ gpiod_set_value_cansleep(spi->cs_gpiod, enable1); ++ } else { + /* + * invert the enable line, as active low is + * default for SPI. + */ + gpio_set_value_cansleep(spi->cs_gpio, !enable); ++ } + } + /* Some SPI masters need both GPIO CS & slave_select */ + if ((spi->controller->flags & SPI_MASTER_GPIO_SS) && +@@ -3463,9 +3476,12 @@ int spi_set_cs_timing(struct spi_device *spi, struct spi_delay *setup, + + if (spi->controller->set_cs_timing && + !(spi->cs_gpiod || gpio_is_valid(spi->cs_gpio))) { ++ mutex_lock(&spi->controller->io_mutex); ++ + if (spi->controller->auto_runtime_pm) { + status = pm_runtime_get_sync(parent); + if (status < 0) { ++ mutex_unlock(&spi->controller->io_mutex); + pm_runtime_put_noidle(parent); + dev_err(&spi->controller->dev, "Failed to power device: %d\n", + status); +@@ -3476,11 +3492,13 @@ int spi_set_cs_timing(struct spi_device *spi, struct spi_delay *setup, + hold, inactive); + pm_runtime_mark_last_busy(parent); + pm_runtime_put_autosuspend(parent); +- return status; + } else { +- return spi->controller->set_cs_timing(spi, setup, hold, ++ status = spi->controller->set_cs_timing(spi, setup, hold, + inactive); + } ++ ++ mutex_unlock(&spi->controller->io_mutex); ++ return status; + } + + if ((setup && setup->unit == SPI_DELAY_UNIT_SCK) || +diff --git a/drivers/staging/emxx_udc/emxx_udc.c b/drivers/staging/emxx_udc/emxx_udc.c +index 3536c03ff5235..0d50c1e190bf4 100644 +--- a/drivers/staging/emxx_udc/emxx_udc.c ++++ b/drivers/staging/emxx_udc/emxx_udc.c +@@ -2065,7 +2065,7 @@ static int _nbu2ss_nuke(struct nbu2ss_udc *udc, + struct nbu2ss_ep *ep, + int status) + { +- struct nbu2ss_req *req; ++ struct nbu2ss_req *req, *n; + + /* Endpoint Disable */ + _nbu2ss_epn_exit(udc, ep); +@@ -2077,7 +2077,7 @@ static int _nbu2ss_nuke(struct nbu2ss_udc *udc, + return 0; + + /* called with irqs blocked */ +- list_for_each_entry(req, &ep->queue, queue) { ++ list_for_each_entry_safe(req, n, &ep->queue, queue) { + _nbu2ss_ep_done(ep, req, status); + } + +diff --git a/drivers/staging/iio/cdc/ad7746.c b/drivers/staging/iio/cdc/ad7746.c +index dfd71e99e872e..eab534dc4bcc0 100644 +--- a/drivers/staging/iio/cdc/ad7746.c ++++ b/drivers/staging/iio/cdc/ad7746.c +@@ -700,7 +700,6 @@ static int ad7746_probe(struct i2c_client *client, + indio_dev->num_channels = 
ARRAY_SIZE(ad7746_channels); + else + indio_dev->num_channels = ARRAY_SIZE(ad7746_channels) - 2; +- indio_dev->num_channels = ARRAY_SIZE(ad7746_channels); + indio_dev->modes = INDIO_DIRECT_MODE; + + if (pdata) { +diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c +index 5ecb9f18a53de..9e8cd07179d71 100644 +--- a/drivers/target/target_core_transport.c ++++ b/drivers/target/target_core_transport.c +@@ -1400,7 +1400,7 @@ void transport_init_se_cmd( + cmd->orig_fe_lun = unpacked_lun; + + if (!(cmd->se_cmd_flags & SCF_USE_CPUID)) +- cmd->cpuid = smp_processor_id(); ++ cmd->cpuid = raw_smp_processor_id(); + + cmd->state_active = false; + } +diff --git a/drivers/thermal/intel/int340x_thermal/int340x_thermal_zone.c b/drivers/thermal/intel/int340x_thermal/int340x_thermal_zone.c +index d1248ba943a4e..62c0aa5d07837 100644 +--- a/drivers/thermal/intel/int340x_thermal/int340x_thermal_zone.c ++++ b/drivers/thermal/intel/int340x_thermal/int340x_thermal_zone.c +@@ -237,6 +237,8 @@ struct int34x_thermal_zone *int340x_thermal_zone_add(struct acpi_device *adev, + if (ACPI_FAILURE(status)) + trip_cnt = 0; + else { ++ int i; ++ + int34x_thermal_zone->aux_trips = + kcalloc(trip_cnt, + sizeof(*int34x_thermal_zone->aux_trips), +@@ -247,6 +249,8 @@ struct int34x_thermal_zone *int340x_thermal_zone_add(struct acpi_device *adev, + } + trip_mask = BIT(trip_cnt) - 1; + int34x_thermal_zone->aux_trip_nr = trip_cnt; ++ for (i = 0; i < trip_cnt; ++i) ++ int34x_thermal_zone->aux_trips[i] = THERMAL_TEMP_INVALID; + } + + trip_cnt = int340x_thermal_read_trips(int34x_thermal_zone); +diff --git a/drivers/thermal/intel/x86_pkg_temp_thermal.c b/drivers/thermal/intel/x86_pkg_temp_thermal.c +index 295742e839602..4d8edc61a78b2 100644 +--- a/drivers/thermal/intel/x86_pkg_temp_thermal.c ++++ b/drivers/thermal/intel/x86_pkg_temp_thermal.c +@@ -166,7 +166,7 @@ static int sys_get_trip_temp(struct thermal_zone_device *tzd, + if (thres_reg_value) + *temp = zonedev->tj_max - thres_reg_value * 1000; + else +- *temp = 0; ++ *temp = THERMAL_TEMP_INVALID; + pr_debug("sys_get_trip_temp %d\n", *temp); + + return 0; +diff --git a/drivers/thermal/qcom/qcom-spmi-adc-tm5.c b/drivers/thermal/qcom/qcom-spmi-adc-tm5.c +index b460b56e981cc..232fd0b333251 100644 +--- a/drivers/thermal/qcom/qcom-spmi-adc-tm5.c ++++ b/drivers/thermal/qcom/qcom-spmi-adc-tm5.c +@@ -441,7 +441,7 @@ static int adc_tm5_get_dt_channel_data(struct adc_tm5_chip *adc_tm, + + if (args.args_count != 1 || args.args[0] >= ADC5_MAX_CHANNEL) { + dev_err(dev, "%s: invalid ADC channel number %d\n", name, chan); +- return ret; ++ return -EINVAL; + } + channel->adc_channel = args.args[0]; + +diff --git a/drivers/thunderbolt/dma_port.c b/drivers/thunderbolt/dma_port.c +index 7288aaf01ae6a..5631319f7b205 100644 +--- a/drivers/thunderbolt/dma_port.c ++++ b/drivers/thunderbolt/dma_port.c +@@ -366,15 +366,15 @@ int dma_port_flash_read(struct tb_dma_port *dma, unsigned int address, + void *buf, size_t size) + { + unsigned int retries = DMA_PORT_RETRIES; +- unsigned int offset; +- +- offset = address & 3; +- address = address & ~3; + + do { +- u32 nbytes = min_t(u32, size, MAIL_DATA_DWORDS * 4); ++ unsigned int offset; ++ size_t nbytes; + int ret; + ++ offset = address & 3; ++ nbytes = min_t(size_t, size + offset, MAIL_DATA_DWORDS * 4); ++ + ret = dma_port_flash_read_block(dma, address, dma->buf, + ALIGN(nbytes, 4)); + if (ret) { +@@ -386,6 +386,7 @@ int dma_port_flash_read(struct tb_dma_port *dma, unsigned int address, + return ret; + } + ++ nbytes -= 
offset; + memcpy(buf, dma->buf + offset, nbytes); + + size -= nbytes; +diff --git a/drivers/thunderbolt/usb4.c b/drivers/thunderbolt/usb4.c +index 680bc738dd66d..671d72af8ba13 100644 +--- a/drivers/thunderbolt/usb4.c ++++ b/drivers/thunderbolt/usb4.c +@@ -68,15 +68,15 @@ static int usb4_do_read_data(u16 address, void *buf, size_t size, + unsigned int retries = USB4_DATA_RETRIES; + unsigned int offset; + +- offset = address & 3; +- address = address & ~3; +- + do { +- size_t nbytes = min_t(size_t, size, USB4_DATA_DWORDS * 4); + unsigned int dwaddress, dwords; + u8 data[USB4_DATA_DWORDS * 4]; ++ size_t nbytes; + int ret; + ++ offset = address & 3; ++ nbytes = min_t(size_t, size + offset, USB4_DATA_DWORDS * 4); ++ + dwaddress = address / 4; + dwords = ALIGN(nbytes, 4) / 4; + +@@ -87,6 +87,7 @@ static int usb4_do_read_data(u16 address, void *buf, size_t size, + return ret; + } + ++ nbytes -= offset; + memcpy(buf, data + offset, nbytes); + + size -= nbytes; +diff --git a/drivers/tty/serial/8250/8250.h b/drivers/tty/serial/8250/8250.h +index 52bb21205bb68..34aa2714f3c93 100644 +--- a/drivers/tty/serial/8250/8250.h ++++ b/drivers/tty/serial/8250/8250.h +@@ -88,6 +88,7 @@ struct serial8250_config { + #define UART_BUG_NOMSR (1 << 2) /* UART has buggy MSR status bits (Au1x00) */ + #define UART_BUG_THRE (1 << 3) /* UART has buggy THRE reassertion */ + #define UART_BUG_PARITY (1 << 4) /* UART mishandles parity if FIFO enabled */ ++#define UART_BUG_TXRACE (1 << 5) /* UART Tx fails to set remote DR */ + + + #ifdef CONFIG_SERIAL_8250_SHARE_IRQ +diff --git a/drivers/tty/serial/8250/8250_aspeed_vuart.c b/drivers/tty/serial/8250/8250_aspeed_vuart.c +index c33e02cbde930..ec0d1da71a203 100644 +--- a/drivers/tty/serial/8250/8250_aspeed_vuart.c ++++ b/drivers/tty/serial/8250/8250_aspeed_vuart.c +@@ -403,6 +403,7 @@ static int aspeed_vuart_probe(struct platform_device *pdev) + port.port.status = UPSTAT_SYNC_FIFO; + port.port.dev = &pdev->dev; + port.port.has_sysrq = IS_ENABLED(CONFIG_SERIAL_8250_CONSOLE); ++ port.bugs |= UART_BUG_TXRACE; + + rc = sysfs_create_group(&vuart->dev->kobj, &aspeed_vuart_attr_group); + if (rc < 0) +diff --git a/drivers/tty/serial/8250/8250_dw.c b/drivers/tty/serial/8250/8250_dw.c +index 9e204f9b799a1..a3a0154da567d 100644 +--- a/drivers/tty/serial/8250/8250_dw.c ++++ b/drivers/tty/serial/8250/8250_dw.c +@@ -714,6 +714,7 @@ static const struct acpi_device_id dw8250_acpi_match[] = { + { "APMC0D08", 0}, + { "AMD0020", 0 }, + { "AMDI0020", 0 }, ++ { "AMDI0022", 0 }, + { "BRCM2032", 0 }, + { "HISI0031", 0 }, + { }, +diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c +index 689d8227f95f7..780cc99732b62 100644 +--- a/drivers/tty/serial/8250/8250_pci.c ++++ b/drivers/tty/serial/8250/8250_pci.c +@@ -56,6 +56,8 @@ struct serial_private { + int line[]; + }; + ++#define PCI_DEVICE_ID_HPE_PCI_SERIAL 0x37e ++ + static const struct pci_device_id pci_use_msi[] = { + { PCI_DEVICE_SUB(PCI_VENDOR_ID_NETMOS, PCI_DEVICE_ID_NETMOS_9900, + 0xA000, 0x1000) }, +@@ -63,6 +65,8 @@ static const struct pci_device_id pci_use_msi[] = { + 0xA000, 0x1000) }, + { PCI_DEVICE_SUB(PCI_VENDOR_ID_NETMOS, PCI_DEVICE_ID_NETMOS_9922, + 0xA000, 0x1000) }, ++ { PCI_DEVICE_SUB(PCI_VENDOR_ID_HP_3PAR, PCI_DEVICE_ID_HPE_PCI_SERIAL, ++ PCI_ANY_ID, PCI_ANY_ID) }, + { } + }; + +@@ -1997,6 +2001,16 @@ static struct pci_serial_quirk pci_serial_quirks[] = { + .init = pci_hp_diva_init, + .setup = pci_hp_diva_setup, + }, ++ /* ++ * HPE PCI serial device ++ */ ++ { ++ .vendor = PCI_VENDOR_ID_HP_3PAR, ++ 
.device = PCI_DEVICE_ID_HPE_PCI_SERIAL, ++ .subvendor = PCI_ANY_ID, ++ .subdevice = PCI_ANY_ID, ++ .setup = pci_hp_diva_setup, ++ }, + /* + * Intel + */ +@@ -3944,21 +3958,26 @@ pciserial_init_ports(struct pci_dev *dev, const struct pciserial_board *board) + uart.port.flags = UPF_SKIP_TEST | UPF_BOOT_AUTOCONF | UPF_SHARE_IRQ; + uart.port.uartclk = board->base_baud * 16; + +- if (pci_match_id(pci_use_msi, dev)) { +- dev_dbg(&dev->dev, "Using MSI(-X) interrupts\n"); +- pci_set_master(dev); +- rc = pci_alloc_irq_vectors(dev, 1, 1, PCI_IRQ_ALL_TYPES); ++ if (board->flags & FL_NOIRQ) { ++ uart.port.irq = 0; + } else { +- dev_dbg(&dev->dev, "Using legacy interrupts\n"); +- rc = pci_alloc_irq_vectors(dev, 1, 1, PCI_IRQ_LEGACY); +- } +- if (rc < 0) { +- kfree(priv); +- priv = ERR_PTR(rc); +- goto err_deinit; ++ if (pci_match_id(pci_use_msi, dev)) { ++ dev_dbg(&dev->dev, "Using MSI(-X) interrupts\n"); ++ pci_set_master(dev); ++ rc = pci_alloc_irq_vectors(dev, 1, 1, PCI_IRQ_ALL_TYPES); ++ } else { ++ dev_dbg(&dev->dev, "Using legacy interrupts\n"); ++ rc = pci_alloc_irq_vectors(dev, 1, 1, PCI_IRQ_LEGACY); ++ } ++ if (rc < 0) { ++ kfree(priv); ++ priv = ERR_PTR(rc); ++ goto err_deinit; ++ } ++ ++ uart.port.irq = pci_irq_vector(dev, 0); + } + +- uart.port.irq = pci_irq_vector(dev, 0); + uart.port.dev = &dev->dev; + + for (i = 0; i < nr_ports; i++) { +@@ -4973,6 +4992,10 @@ static const struct pci_device_id serial_pci_tbl[] = { + { PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_DIVA_AUX, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, + pbn_b2_1_115200 }, ++ /* HPE PCI serial device */ ++ { PCI_VENDOR_ID_HP_3PAR, PCI_DEVICE_ID_HPE_PCI_SERIAL, ++ PCI_ANY_ID, PCI_ANY_ID, 0, 0, ++ pbn_b1_1_115200 }, + + { PCI_VENDOR_ID_DCI, PCI_DEVICE_ID_DCI_PCCOM2, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, +diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c +index b0af13074cd36..6e141429c9808 100644 +--- a/drivers/tty/serial/8250/8250_port.c ++++ b/drivers/tty/serial/8250/8250_port.c +@@ -1815,6 +1815,18 @@ void serial8250_tx_chars(struct uart_8250_port *up) + count = up->tx_loadsz; + do { + serial_out(up, UART_TX, xmit->buf[xmit->tail]); ++ if (up->bugs & UART_BUG_TXRACE) { ++ /* ++ * The Aspeed BMC virtual UARTs have a bug where data ++ * may get stuck in the BMC's Tx FIFO from bursts of ++ * writes on the APB interface. ++ * ++ * Delay back-to-back writes by a read cycle to avoid ++ * stalling the VUART. Read a register that won't have ++ * side-effects and discard the result. 
++ */ ++ serial_in(up, UART_SCR); ++ } + xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1); + port->icount.tx++; + if (uart_circ_empty(xmit)) +diff --git a/drivers/tty/serial/max310x.c b/drivers/tty/serial/max310x.c +index 1b61d26bb7afe..43e55e6abea66 100644 +--- a/drivers/tty/serial/max310x.c ++++ b/drivers/tty/serial/max310x.c +@@ -1519,6 +1519,8 @@ static int __init max310x_uart_init(void) + + #ifdef CONFIG_SPI_MASTER + ret = spi_register_driver(&max310x_spi_driver); ++ if (ret) ++ uart_unregister_driver(&max310x_uart); + #endif + + return ret; +diff --git a/drivers/tty/serial/rp2.c b/drivers/tty/serial/rp2.c +index 5690c09cc0417..944a4c0105795 100644 +--- a/drivers/tty/serial/rp2.c ++++ b/drivers/tty/serial/rp2.c +@@ -195,7 +195,6 @@ struct rp2_card { + void __iomem *bar0; + void __iomem *bar1; + spinlock_t card_lock; +- struct completion fw_loaded; + }; + + #define RP_ID(prod) PCI_VDEVICE(RP, (prod)) +@@ -664,17 +663,10 @@ static void rp2_remove_ports(struct rp2_card *card) + card->initialized_ports = 0; + } + +-static void rp2_fw_cb(const struct firmware *fw, void *context) ++static int rp2_load_firmware(struct rp2_card *card, const struct firmware *fw) + { +- struct rp2_card *card = context; + resource_size_t phys_base; +- int i, rc = -ENOENT; +- +- if (!fw) { +- dev_err(&card->pdev->dev, "cannot find '%s' firmware image\n", +- RP2_FW_NAME); +- goto no_fw; +- } ++ int i, rc = 0; + + phys_base = pci_resource_start(card->pdev, 1); + +@@ -720,23 +712,13 @@ static void rp2_fw_cb(const struct firmware *fw, void *context) + card->initialized_ports++; + } + +- release_firmware(fw); +-no_fw: +- /* +- * rp2_fw_cb() is called from a workqueue long after rp2_probe() +- * has already returned success. So if something failed here, +- * we'll just leave the now-dormant device in place until somebody +- * unbinds it. +- */ +- if (rc) +- dev_warn(&card->pdev->dev, "driver initialization failed\n"); +- +- complete(&card->fw_loaded); ++ return rc; + } + + static int rp2_probe(struct pci_dev *pdev, + const struct pci_device_id *id) + { ++ const struct firmware *fw; + struct rp2_card *card; + struct rp2_uart_port *ports; + void __iomem * const *bars; +@@ -747,7 +729,6 @@ static int rp2_probe(struct pci_dev *pdev, + return -ENOMEM; + pci_set_drvdata(pdev, card); + spin_lock_init(&card->card_lock); +- init_completion(&card->fw_loaded); + + rc = pcim_enable_device(pdev); + if (rc) +@@ -780,21 +761,23 @@ static int rp2_probe(struct pci_dev *pdev, + return -ENOMEM; + card->ports = ports; + +- rc = devm_request_irq(&pdev->dev, pdev->irq, rp2_uart_interrupt, +- IRQF_SHARED, DRV_NAME, card); +- if (rc) ++ rc = request_firmware(&fw, RP2_FW_NAME, &pdev->dev); ++ if (rc < 0) { ++ dev_err(&pdev->dev, "cannot find '%s' firmware image\n", ++ RP2_FW_NAME); + return rc; ++ } + +- /* +- * Only catastrophic errors (e.g. ENOMEM) are reported here. +- * If the FW image is missing, we'll find out in rp2_fw_cb() +- * and print an error message. 
+- */ +- rc = request_firmware_nowait(THIS_MODULE, 1, RP2_FW_NAME, &pdev->dev, +- GFP_KERNEL, card, rp2_fw_cb); ++ rc = rp2_load_firmware(card, fw); ++ ++ release_firmware(fw); ++ if (rc < 0) ++ return rc; ++ ++ rc = devm_request_irq(&pdev->dev, pdev->irq, rp2_uart_interrupt, ++ IRQF_SHARED, DRV_NAME, card); + if (rc) + return rc; +- dev_dbg(&pdev->dev, "waiting for firmware blob...\n"); + + return 0; + } +@@ -803,7 +786,6 @@ static void rp2_remove(struct pci_dev *pdev) + { + struct rp2_card *card = pci_get_drvdata(pdev); + +- wait_for_completion(&card->fw_loaded); + rp2_remove_ports(card); + } + +diff --git a/drivers/tty/serial/serial-tegra.c b/drivers/tty/serial/serial-tegra.c +index bbae072a125db..222032792d6c2 100644 +--- a/drivers/tty/serial/serial-tegra.c ++++ b/drivers/tty/serial/serial-tegra.c +@@ -338,7 +338,7 @@ static void tegra_uart_fifo_reset(struct tegra_uart_port *tup, u8 fcr_bits) + + do { + lsr = tegra_uart_read(tup, UART_LSR); +- if ((lsr | UART_LSR_TEMT) && !(lsr & UART_LSR_DR)) ++ if ((lsr & UART_LSR_TEMT) && !(lsr & UART_LSR_DR)) + break; + udelay(1); + } while (--tmout); +diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c +index 43f02ed055d5e..18255eb90df3d 100644 +--- a/drivers/tty/serial/serial_core.c ++++ b/drivers/tty/serial/serial_core.c +@@ -865,9 +865,11 @@ static int uart_set_info(struct tty_struct *tty, struct tty_port *port, + goto check_and_exit; + } + +- retval = security_locked_down(LOCKDOWN_TIOCSSERIAL); +- if (retval && (change_irq || change_port)) +- goto exit; ++ if (change_irq || change_port) { ++ retval = security_locked_down(LOCKDOWN_TIOCSSERIAL); ++ if (retval) ++ goto exit; ++ } + + /* + * Ask the low level driver to verify the settings. +diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c +index e1179e74a2b89..3b1aaa93d750e 100644 +--- a/drivers/tty/serial/sh-sci.c ++++ b/drivers/tty/serial/sh-sci.c +@@ -1023,10 +1023,10 @@ static int scif_set_rtrg(struct uart_port *port, int rx_trig) + { + unsigned int bits; + ++ if (rx_trig >= port->fifosize) ++ rx_trig = port->fifosize - 1; + if (rx_trig < 1) + rx_trig = 1; +- if (rx_trig >= port->fifosize) +- rx_trig = port->fifosize; + + /* HSCIF can be set to an arbitrary level. */ + if (sci_getreg(port, HSRTRGR)->size) { +diff --git a/drivers/usb/cdns3/cdnsp-gadget.c b/drivers/usb/cdns3/cdnsp-gadget.c +index 56707b6b0f57c..c083985e387b2 100644 +--- a/drivers/usb/cdns3/cdnsp-gadget.c ++++ b/drivers/usb/cdns3/cdnsp-gadget.c +@@ -422,17 +422,17 @@ unmap: + int cdnsp_ep_dequeue(struct cdnsp_ep *pep, struct cdnsp_request *preq) + { + struct cdnsp_device *pdev = pep->pdev; +- int ret; ++ int ret_stop = 0; ++ int ret_rem; + + trace_cdnsp_request_dequeue(preq); + +- if (GET_EP_CTX_STATE(pep->out_ctx) == EP_STATE_RUNNING) { +- ret = cdnsp_cmd_stop_ep(pdev, pep); +- if (ret) +- return ret; +- } ++ if (GET_EP_CTX_STATE(pep->out_ctx) == EP_STATE_RUNNING) ++ ret_stop = cdnsp_cmd_stop_ep(pdev, pep); ++ ++ ret_rem = cdnsp_remove_request(pdev, preq, pep); + +- return cdnsp_remove_request(pdev, preq, pep); ++ return ret_rem ? 
ret_rem : ret_stop; + } + + static void cdnsp_zero_in_ctx(struct cdnsp_device *pdev) +diff --git a/drivers/usb/core/devio.c b/drivers/usb/core/devio.c +index 533236366a03b..2218941d35a3f 100644 +--- a/drivers/usb/core/devio.c ++++ b/drivers/usb/core/devio.c +@@ -1218,7 +1218,12 @@ static int do_proc_bulk(struct usb_dev_state *ps, + ret = usbfs_increase_memory_usage(len1 + sizeof(struct urb)); + if (ret) + return ret; +- tbuf = kmalloc(len1, GFP_KERNEL); ++ ++ /* ++ * len1 can be almost arbitrarily large. Don't WARN if it's ++ * too big, just fail the request. ++ */ ++ tbuf = kmalloc(len1, GFP_KERNEL | __GFP_NOWARN); + if (!tbuf) { + ret = -ENOMEM; + goto done; +@@ -1696,7 +1701,7 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb + if (num_sgs) { + as->urb->sg = kmalloc_array(num_sgs, + sizeof(struct scatterlist), +- GFP_KERNEL); ++ GFP_KERNEL | __GFP_NOWARN); + if (!as->urb->sg) { + ret = -ENOMEM; + goto error; +@@ -1731,7 +1736,7 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb + (uurb_start - as->usbm->vm_start); + } else { + as->urb->transfer_buffer = kmalloc(uurb->buffer_length, +- GFP_KERNEL); ++ GFP_KERNEL | __GFP_NOWARN); + if (!as->urb->transfer_buffer) { + ret = -ENOMEM; + goto error; +diff --git a/drivers/usb/core/hub.h b/drivers/usb/core/hub.h +index 73f4482d833a7..22ea1f4f2d66d 100644 +--- a/drivers/usb/core/hub.h ++++ b/drivers/usb/core/hub.h +@@ -148,8 +148,10 @@ static inline unsigned hub_power_on_good_delay(struct usb_hub *hub) + { + unsigned delay = hub->descriptor->bPwrOn2PwrGood * 2; + +- /* Wait at least 100 msec for power to become stable */ +- return max(delay, 100U); ++ if (!hub->hdev->parent) /* root hub */ ++ return delay; ++ else /* Wait at least 100 msec for power to become stable */ ++ return max(delay, 100U); + } + + static inline int hub_port_debounce_be_connected(struct usb_hub *hub, +diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c +index 8585b56d9f2df..a721d0e4d9994 100644 +--- a/drivers/usb/dwc3/gadget.c ++++ b/drivers/usb/dwc3/gadget.c +@@ -1236,6 +1236,7 @@ static int dwc3_prepare_trbs_sg(struct dwc3_ep *dep, + req->start_sg = sg_next(s); + + req->num_queued_sgs++; ++ req->num_pending_sgs--; + + /* + * The number of pending SG entries may not correspond to the +@@ -1243,7 +1244,7 @@ static int dwc3_prepare_trbs_sg(struct dwc3_ep *dep, + * don't include unused SG entries. 
+ */ + if (length == 0) { +- req->num_pending_sgs -= req->request.num_mapped_sgs - req->num_queued_sgs; ++ req->num_pending_sgs = 0; + break; + } + +@@ -2839,15 +2840,15 @@ static int dwc3_gadget_ep_reclaim_trb_sg(struct dwc3_ep *dep, + struct dwc3_trb *trb = &dep->trb_pool[dep->trb_dequeue]; + struct scatterlist *sg = req->sg; + struct scatterlist *s; +- unsigned int pending = req->num_pending_sgs; ++ unsigned int num_queued = req->num_queued_sgs; + unsigned int i; + int ret = 0; + +- for_each_sg(sg, s, pending, i) { ++ for_each_sg(sg, s, num_queued, i) { + trb = &dep->trb_pool[dep->trb_dequeue]; + + req->sg = sg_next(s); +- req->num_pending_sgs--; ++ req->num_queued_sgs--; + + ret = dwc3_gadget_ep_reclaim_completed_trb(dep, req, + trb, event, status, true); +@@ -2870,7 +2871,7 @@ static int dwc3_gadget_ep_reclaim_trb_linear(struct dwc3_ep *dep, + + static bool dwc3_gadget_ep_request_completed(struct dwc3_request *req) + { +- return req->num_pending_sgs == 0; ++ return req->num_pending_sgs == 0 && req->num_queued_sgs == 0; + } + + static int dwc3_gadget_ep_cleanup_completed_request(struct dwc3_ep *dep, +@@ -2879,7 +2880,7 @@ static int dwc3_gadget_ep_cleanup_completed_request(struct dwc3_ep *dep, + { + int ret; + +- if (req->num_pending_sgs) ++ if (req->request.num_mapped_sgs) + ret = dwc3_gadget_ep_reclaim_trb_sg(dep, req, event, + status); + else +diff --git a/drivers/usb/gadget/udc/renesas_usb3.c b/drivers/usb/gadget/udc/renesas_usb3.c +index 0c418ce50ba0f..f1b35a39d1ba8 100644 +--- a/drivers/usb/gadget/udc/renesas_usb3.c ++++ b/drivers/usb/gadget/udc/renesas_usb3.c +@@ -1488,7 +1488,7 @@ static void usb3_start_pipen(struct renesas_usb3_ep *usb3_ep, + struct renesas_usb3_request *usb3_req) + { + struct renesas_usb3 *usb3 = usb3_ep_to_usb3(usb3_ep); +- struct renesas_usb3_request *usb3_req_first = usb3_get_request(usb3_ep); ++ struct renesas_usb3_request *usb3_req_first; + unsigned long flags; + int ret = -EAGAIN; + u32 enable_bits = 0; +@@ -1496,7 +1496,8 @@ static void usb3_start_pipen(struct renesas_usb3_ep *usb3_ep, + spin_lock_irqsave(&usb3->lock, flags); + if (usb3_ep->halt || usb3_ep->started) + goto out; +- if (usb3_req != usb3_req_first) ++ usb3_req_first = __usb3_get_request(usb3_ep); ++ if (!usb3_req_first || usb3_req != usb3_req_first) + goto out; + + if (usb3_pn_change(usb3, usb3_ep->num) < 0) +diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c +index 6cdea0d00d194..82de51c3db676 100644 +--- a/drivers/usb/host/xhci-ring.c ++++ b/drivers/usb/host/xhci-ring.c +@@ -829,14 +829,10 @@ static void xhci_giveback_invalidated_tds(struct xhci_virt_ep *ep) + list_for_each_entry_safe(td, tmp_td, &ep->cancelled_td_list, + cancelled_td_list) { + +- /* +- * Doesn't matter what we pass for status, since the core will +- * just overwrite it (because the URB has been unlinked). +- */ + ring = xhci_urb_to_transfer_ring(ep->xhci, td->urb); + + if (td->cancel_status == TD_CLEARED) +- xhci_td_cleanup(ep->xhci, td, ring, 0); ++ xhci_td_cleanup(ep->xhci, td, ring, td->status); + + if (ep->xhci->xhc_state & XHCI_STATE_DYING) + return; +@@ -938,14 +934,18 @@ static int xhci_invalidate_cancelled_tds(struct xhci_virt_ep *ep) + continue; + } + /* +- * If ring stopped on the TD we need to cancel, then we have to ++ * If a ring stopped on the TD we need to cancel then we have to + * move the xHC endpoint ring dequeue pointer past this TD. ++ * Rings halted due to STALL may show hw_deq is past the stalled ++ * TD, but still require a set TR Deq command to flush xHC cache. 
+ */ + hw_deq = xhci_get_hw_deq(xhci, ep->vdev, ep->ep_index, + td->urb->stream_id); + hw_deq &= ~0xf; + +- if (trb_in_td(xhci, td->start_seg, td->first_trb, ++ if (td->cancel_status == TD_HALTED) { ++ cached_td = td; ++ } else if (trb_in_td(xhci, td->start_seg, td->first_trb, + td->last_trb, hw_deq, false)) { + switch (td->cancel_status) { + case TD_CLEARED: /* TD is already no-op */ +diff --git a/drivers/usb/misc/trancevibrator.c b/drivers/usb/misc/trancevibrator.c +index a3dfc77578ea1..26baba3ab7d73 100644 +--- a/drivers/usb/misc/trancevibrator.c ++++ b/drivers/usb/misc/trancevibrator.c +@@ -61,9 +61,9 @@ static ssize_t speed_store(struct device *dev, struct device_attribute *attr, + /* Set speed */ + retval = usb_control_msg(tv->udev, usb_sndctrlpipe(tv->udev, 0), + 0x01, /* vendor request: set speed */ +- USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_OTHER, ++ USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_OTHER, + tv->speed, /* speed value */ +- 0, NULL, 0, USB_CTRL_GET_TIMEOUT); ++ 0, NULL, 0, USB_CTRL_SET_TIMEOUT); + if (retval) { + tv->speed = old; + dev_dbg(&tv->udev->dev, "retval = %d\n", retval); +diff --git a/drivers/usb/misc/uss720.c b/drivers/usb/misc/uss720.c +index b5d6616442635..748139d262633 100644 +--- a/drivers/usb/misc/uss720.c ++++ b/drivers/usb/misc/uss720.c +@@ -736,6 +736,7 @@ static int uss720_probe(struct usb_interface *intf, + parport_announce_port(pp); + + usb_set_intfdata(intf, pp); ++ usb_put_dev(usbdev); + return 0; + + probe_abort: +diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c +index c867592477c9c..5a725ffb76756 100644 +--- a/drivers/usb/serial/ftdi_sio.c ++++ b/drivers/usb/serial/ftdi_sio.c +@@ -1034,6 +1034,9 @@ static const struct usb_device_id id_table_combined[] = { + /* Sienna devices */ + { USB_DEVICE(FTDI_VID, FTDI_SIENNA_PID) }, + { USB_DEVICE(ECHELON_VID, ECHELON_U20_PID) }, ++ /* IDS GmbH devices */ ++ { USB_DEVICE(IDS_VID, IDS_SI31A_PID) }, ++ { USB_DEVICE(IDS_VID, IDS_CM31A_PID) }, + /* U-Blox devices */ + { USB_DEVICE(UBLOX_VID, UBLOX_C099F9P_ZED_PID) }, + { USB_DEVICE(UBLOX_VID, UBLOX_C099F9P_ODIN_PID) }, +diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h +index 3d47c6d72256e..d854e04a4286e 100644 +--- a/drivers/usb/serial/ftdi_sio_ids.h ++++ b/drivers/usb/serial/ftdi_sio_ids.h +@@ -1567,6 +1567,13 @@ + #define UNJO_VID 0x22B7 + #define UNJO_ISODEBUG_V1_PID 0x150D + ++/* ++ * IDS GmbH ++ */ ++#define IDS_VID 0x2CAF ++#define IDS_SI31A_PID 0x13A2 ++#define IDS_CM31A_PID 0x13A3 ++ + /* + * U-Blox products (http://www.u-blox.com). 
+ */ +diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c +index c6969ca728390..61d94641ddc08 100644 +--- a/drivers/usb/serial/option.c ++++ b/drivers/usb/serial/option.c +@@ -1240,6 +1240,10 @@ static const struct usb_device_id option_ids[] = { + .driver_info = NCTRL(0) | RSVD(1) }, + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1901, 0xff), /* Telit LN940 (MBIM) */ + .driver_info = NCTRL(0) }, ++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x7010, 0xff), /* Telit LE910-S1 (RNDIS) */ ++ .driver_info = NCTRL(2) }, ++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x7011, 0xff), /* Telit LE910-S1 (ECM) */ ++ .driver_info = NCTRL(2) }, + { USB_DEVICE(TELIT_VENDOR_ID, 0x9010), /* Telit SBL FN980 flashing device */ + .driver_info = NCTRL(0) | ZLP }, + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_MF622, 0xff, 0xff, 0xff) }, /* ZTE WCDMA products */ +diff --git a/drivers/usb/serial/pl2303.c b/drivers/usb/serial/pl2303.c +index eed9acd1ae089..f2895c274e19d 100644 +--- a/drivers/usb/serial/pl2303.c ++++ b/drivers/usb/serial/pl2303.c +@@ -113,6 +113,7 @@ static const struct usb_device_id id_table[] = { + { USB_DEVICE(SONY_VENDOR_ID, SONY_QN3USB_PRODUCT_ID) }, + { USB_DEVICE(SANWA_VENDOR_ID, SANWA_PRODUCT_ID) }, + { USB_DEVICE(ADLINK_VENDOR_ID, ADLINK_ND6530_PRODUCT_ID) }, ++ { USB_DEVICE(ADLINK_VENDOR_ID, ADLINK_ND6530GC_PRODUCT_ID) }, + { USB_DEVICE(SMART_VENDOR_ID, SMART_PRODUCT_ID) }, + { USB_DEVICE(AT_VENDOR_ID, AT_VTKIT3_PRODUCT_ID) }, + { } /* Terminating entry */ +diff --git a/drivers/usb/serial/pl2303.h b/drivers/usb/serial/pl2303.h +index 0f681ddbfd288..6097ee8fccb25 100644 +--- a/drivers/usb/serial/pl2303.h ++++ b/drivers/usb/serial/pl2303.h +@@ -158,6 +158,7 @@ + /* ADLINK ND-6530 RS232,RS485 and RS422 adapter */ + #define ADLINK_VENDOR_ID 0x0b63 + #define ADLINK_ND6530_PRODUCT_ID 0x6530 ++#define ADLINK_ND6530GC_PRODUCT_ID 0x653a + + /* SMART USB Serial Adapter */ + #define SMART_VENDOR_ID 0x0b8c +diff --git a/drivers/usb/serial/ti_usb_3410_5052.c b/drivers/usb/serial/ti_usb_3410_5052.c +index fe1c13a8849cc..f94d7b20d32e7 100644 +--- a/drivers/usb/serial/ti_usb_3410_5052.c ++++ b/drivers/usb/serial/ti_usb_3410_5052.c +@@ -37,6 +37,7 @@ + /* Vendor and product ids */ + #define TI_VENDOR_ID 0x0451 + #define IBM_VENDOR_ID 0x04b3 ++#define STARTECH_VENDOR_ID 0x14b0 + #define TI_3410_PRODUCT_ID 0x3410 + #define IBM_4543_PRODUCT_ID 0x4543 + #define IBM_454B_PRODUCT_ID 0x454b +@@ -372,6 +373,7 @@ static const struct usb_device_id ti_id_table_3410[] = { + { USB_DEVICE(MXU1_VENDOR_ID, MXU1_1131_PRODUCT_ID) }, + { USB_DEVICE(MXU1_VENDOR_ID, MXU1_1150_PRODUCT_ID) }, + { USB_DEVICE(MXU1_VENDOR_ID, MXU1_1151_PRODUCT_ID) }, ++ { USB_DEVICE(STARTECH_VENDOR_ID, TI_3410_PRODUCT_ID) }, + { } /* terminator */ + }; + +@@ -410,6 +412,7 @@ static const struct usb_device_id ti_id_table_combined[] = { + { USB_DEVICE(MXU1_VENDOR_ID, MXU1_1131_PRODUCT_ID) }, + { USB_DEVICE(MXU1_VENDOR_ID, MXU1_1150_PRODUCT_ID) }, + { USB_DEVICE(MXU1_VENDOR_ID, MXU1_1151_PRODUCT_ID) }, ++ { USB_DEVICE(STARTECH_VENDOR_ID, TI_3410_PRODUCT_ID) }, + { } /* terminator */ + }; + +diff --git a/drivers/usb/typec/mux.c b/drivers/usb/typec/mux.c +index cf720e944aaaa..42acdc8b684fe 100644 +--- a/drivers/usb/typec/mux.c ++++ b/drivers/usb/typec/mux.c +@@ -191,6 +191,7 @@ static void *typec_mux_match(struct fwnode_handle *fwnode, const char *id, + bool match; + int nval; + u16 *val; ++ int ret; + int i; + + /* +@@ -218,10 +219,10 @@ static void *typec_mux_match(struct fwnode_handle *fwnode, const char *id, 
+ if (!val) + return ERR_PTR(-ENOMEM); + +- nval = fwnode_property_read_u16_array(fwnode, "svid", val, nval); +- if (nval < 0) { ++ ret = fwnode_property_read_u16_array(fwnode, "svid", val, nval); ++ if (ret < 0) { + kfree(val); +- return ERR_PTR(nval); ++ return ERR_PTR(ret); + } + + for (i = 0; i < nval; i++) { +diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c +index 52acc884a61f1..b3e3dff464917 100644 +--- a/drivers/usb/typec/tcpm/tcpm.c ++++ b/drivers/usb/typec/tcpm/tcpm.c +@@ -1534,6 +1534,8 @@ static int tcpm_pd_svdm(struct tcpm_port *port, struct typec_altmode *adev, + if (PD_VDO_SVDM_VER(p[0]) < svdm_version) + typec_partner_set_svdm_version(port->partner, + PD_VDO_SVDM_VER(p[0])); ++ ++ tcpm_ams_start(port, DISCOVER_IDENTITY); + /* 6.4.4.3.1: Only respond as UFP (device) */ + if (port->data_role == TYPEC_DEVICE && + port->nr_snk_vdo) { +@@ -1552,14 +1554,19 @@ static int tcpm_pd_svdm(struct tcpm_port *port, struct typec_altmode *adev, + } + break; + case CMD_DISCOVER_SVID: ++ tcpm_ams_start(port, DISCOVER_SVIDS); + break; + case CMD_DISCOVER_MODES: ++ tcpm_ams_start(port, DISCOVER_MODES); + break; + case CMD_ENTER_MODE: ++ tcpm_ams_start(port, DFP_TO_UFP_ENTER_MODE); + break; + case CMD_EXIT_MODE: ++ tcpm_ams_start(port, DFP_TO_UFP_EXIT_MODE); + break; + case CMD_ATTENTION: ++ tcpm_ams_start(port, ATTENTION); + /* Attention command does not have response */ + *adev_action = ADEV_ATTENTION; + return 0; +@@ -2267,6 +2274,12 @@ static void tcpm_pd_data_request(struct tcpm_port *port, + bool frs_enable; + int ret; + ++ if (tcpm_vdm_ams(port) && type != PD_DATA_VENDOR_DEF) { ++ port->vdm_state = VDM_STATE_ERR_BUSY; ++ tcpm_ams_finish(port); ++ mod_vdm_delayed_work(port, 0); ++ } ++ + switch (type) { + case PD_DATA_SOURCE_CAP: + for (i = 0; i < cnt; i++) +@@ -2397,7 +2410,10 @@ static void tcpm_pd_data_request(struct tcpm_port *port, + NONE_AMS); + break; + case PD_DATA_VENDOR_DEF: +- tcpm_handle_vdm_request(port, msg->payload, cnt); ++ if (tcpm_vdm_ams(port) || port->nr_snk_vdo) ++ tcpm_handle_vdm_request(port, msg->payload, cnt); ++ else if (port->negotiated_rev > PD_REV20) ++ tcpm_pd_handle_msg(port, PD_MSG_CTRL_NOT_SUPP, NONE_AMS); + break; + case PD_DATA_BIST: + port->bist_request = le32_to_cpu(msg->payload[0]); +@@ -2439,6 +2455,16 @@ static void tcpm_pd_ctrl_request(struct tcpm_port *port, + enum pd_ctrl_msg_type type = pd_header_type_le(msg->header); + enum tcpm_state next_state; + ++ /* ++ * Stop VDM state machine if interrupted by other Messages while NOT_SUPP is allowed in ++ * VDM AMS if waiting for VDM responses and will be handled later. 
++ */ ++ if (tcpm_vdm_ams(port) && type != PD_CTRL_NOT_SUPP && type != PD_CTRL_GOOD_CRC) { ++ port->vdm_state = VDM_STATE_ERR_BUSY; ++ tcpm_ams_finish(port); ++ mod_vdm_delayed_work(port, 0); ++ } ++ + switch (type) { + case PD_CTRL_GOOD_CRC: + case PD_CTRL_PING: +@@ -2697,7 +2723,14 @@ static void tcpm_pd_ext_msg_request(struct tcpm_port *port, + enum pd_ext_msg_type type = pd_header_type_le(msg->header); + unsigned int data_size = pd_ext_header_data_size_le(msg->ext_msg.header); + +- if (!(msg->ext_msg.header & PD_EXT_HDR_CHUNKED)) { ++ /* stopping VDM state machine if interrupted by other Messages */ ++ if (tcpm_vdm_ams(port)) { ++ port->vdm_state = VDM_STATE_ERR_BUSY; ++ tcpm_ams_finish(port); ++ mod_vdm_delayed_work(port, 0); ++ } ++ ++ if (!(le16_to_cpu(msg->ext_msg.header) & PD_EXT_HDR_CHUNKED)) { + tcpm_pd_handle_msg(port, PD_MSG_CTRL_NOT_SUPP, NONE_AMS); + tcpm_log(port, "Unchunked extended messages unsupported"); + return; +@@ -2791,7 +2824,7 @@ static void tcpm_pd_rx_handler(struct kthread_work *work) + "Data role mismatch, initiating error recovery"); + tcpm_set_state(port, ERROR_RECOVERY, 0); + } else { +- if (msg->header & PD_HEADER_EXT_HDR) ++ if (le16_to_cpu(msg->header) & PD_HEADER_EXT_HDR) + tcpm_pd_ext_msg_request(port, msg); + else if (cnt) + tcpm_pd_data_request(port, msg); +diff --git a/drivers/usb/typec/ucsi/ucsi.c b/drivers/usb/typec/ucsi/ucsi.c +index 1e266f083bf8a..7104ddb9696d6 100644 +--- a/drivers/usb/typec/ucsi/ucsi.c ++++ b/drivers/usb/typec/ucsi/ucsi.c +@@ -717,8 +717,8 @@ static void ucsi_handle_connector_change(struct work_struct *work) + ucsi_send_command(con->ucsi, command, NULL, 0); + + /* 3. ACK connector change */ +- clear_bit(EVENT_PENDING, &ucsi->flags); + ret = ucsi_acknowledge_connector_change(ucsi); ++ clear_bit(EVENT_PENDING, &ucsi->flags); + if (ret) { + dev_err(ucsi->dev, "%s: ACK failed (%d)", __func__, ret); + goto out_unlock; +diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c +index 4d2809c7d4e32..a0e86c5d7cd7a 100644 +--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c ++++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c +@@ -15,6 +15,7 @@ + #include + #include + #include ++#include + #include "mlx5_vdpa.h" + + MODULE_AUTHOR("Eli Cohen "); +@@ -1854,11 +1855,16 @@ static int mlx5_vdpa_set_map(struct vdpa_device *vdev, struct vhost_iotlb *iotlb + static void mlx5_vdpa_free(struct vdpa_device *vdev) + { + struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev); ++ struct mlx5_core_dev *pfmdev; + struct mlx5_vdpa_net *ndev; + + ndev = to_mlx5_vdpa_ndev(mvdev); + + free_resources(ndev); ++ if (!is_zero_ether_addr(ndev->config.mac)) { ++ pfmdev = pci_get_drvdata(pci_physfn(mvdev->mdev->pdev)); ++ mlx5_mpfs_del_mac(pfmdev, ndev->config.mac); ++ } + mlx5_vdpa_free_resources(&ndev->mvdev); + mutex_destroy(&ndev->reslock); + } +@@ -1980,6 +1986,7 @@ static int mlx5v_probe(struct auxiliary_device *adev, + struct mlx5_adev *madev = container_of(adev, struct mlx5_adev, adev); + struct mlx5_core_dev *mdev = madev->mdev; + struct virtio_net_config *config; ++ struct mlx5_core_dev *pfmdev; + struct mlx5_vdpa_dev *mvdev; + struct mlx5_vdpa_net *ndev; + u32 max_vqs; +@@ -2008,10 +2015,17 @@ static int mlx5v_probe(struct auxiliary_device *adev, + if (err) + goto err_mtu; + ++ if (!is_zero_ether_addr(config->mac)) { ++ pfmdev = pci_get_drvdata(pci_physfn(mdev->pdev)); ++ err = mlx5_mpfs_add_mac(pfmdev, config->mac); ++ if (err) ++ goto err_mtu; ++ } ++ + mvdev->vdev.dma_dev = mdev->device; + err = mlx5_vdpa_alloc_resources(&ndev->mvdev); + if (err) +- goto 
err_mtu;
++ goto err_mpfs;
+
+ err = alloc_resources(ndev);
+ if (err)
+@@ -2028,6 +2042,9 @@ err_reg:
+ free_resources(ndev);
+ err_res:
+ mlx5_vdpa_free_resources(&ndev->mvdev);
++err_mpfs:
++ if (!is_zero_ether_addr(config->mac))
++ mlx5_mpfs_del_mac(pfmdev, config->mac);
+ err_mtu:
+ mutex_destroy(&ndev->reslock);
+ put_device(&mvdev->vdev.dev);
+diff --git a/fs/afs/dir.c b/fs/afs/dir.c
+index 31251d11d576c..a3d42f29d0235 100644
+--- a/fs/afs/dir.c
++++ b/fs/afs/dir.c
+@@ -1842,7 +1842,9 @@ static void afs_rename_edit_dir(struct afs_operation *op)
+ new_inode = d_inode(new_dentry);
+ if (new_inode) {
+ spin_lock(&new_inode->i_lock);
+- if (new_inode->i_nlink > 0)
++ if (S_ISDIR(new_inode->i_mode))
++ clear_nlink(new_inode);
++ else if (new_inode->i_nlink > 0)
+ drop_nlink(new_inode);
+ spin_unlock(&new_inode->i_lock);
+ }
+diff --git a/fs/block_dev.c b/fs/block_dev.c
+index a5a6a7930e5ed..389609bc5aa3f 100644
+--- a/fs/block_dev.c
++++ b/fs/block_dev.c
+@@ -1244,6 +1244,9 @@ int bdev_disk_changed(struct block_device *bdev, bool invalidate)
+
+ lockdep_assert_held(&bdev->bd_mutex);
+
++ if (!(disk->flags & GENHD_FL_UP))
++ return -ENXIO;
++
+ rescan:
+ ret = blk_drop_partitions(bdev);
+ if (ret)
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index 910769d5fcdb4..1eb5d22d53735 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -4975,7 +4975,7 @@ int extent_fiemap(struct btrfs_inode *inode, struct fiemap_extent_info *fieinfo,
+ u64 start, u64 len)
+ {
+ int ret = 0;
+- u64 off = start;
++ u64 off;
+ u64 max = start + len;
+ u32 flags = 0;
+ u32 found_type;
+@@ -5010,6 +5010,11 @@ int extent_fiemap(struct btrfs_inode *inode, struct fiemap_extent_info *fieinfo,
+ goto out_free_ulist;
+ }
+
++ /*
++ * We can't initialize that to 'start' as this could miss extents due
++ * to extent item merging
++ */
++ off = 0;
+ start = round_down(start, btrfs_inode_sectorsize(inode));
+ len = round_up(max, btrfs_inode_sectorsize(inode)) - start;
+
+diff --git a/fs/btrfs/reflink.c b/fs/btrfs/reflink.c
+index 0abbf050580d1..53ee17f5e382c 100644
+--- a/fs/btrfs/reflink.c
++++ b/fs/btrfs/reflink.c
+@@ -285,6 +285,11 @@ copy_inline_extent:
+ ret = btrfs_inode_set_file_extent_range(BTRFS_I(dst), 0, aligned_end);
+ out:
+ if (!ret && !trans) {
++ /*
++ * Release path before starting a new transaction so we don't
++ * hold locks that would confuse lockdep.
++ */
++ btrfs_release_path(path);
+ /*
+ * No transaction here means we copied the inline extent into a
+ * page of the destination inode.
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 53624fca0747a..d7f1599e69b1f 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -1858,8 +1858,6 @@ static noinline int link_to_fixup_dir(struct btrfs_trans_handle *trans,
+ ret = btrfs_update_inode(trans, root, BTRFS_I(inode));
+ } else if (ret == -EEXIST) {
+ ret = 0;
+- } else {
+- BUG(); /* Logic Error */
+ }
+ iput(inode);
+
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 29272d99102c5..7606ce344de56 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -958,6 +958,13 @@ SMB2_negotiate(const unsigned int xid, struct cifs_ses *ses)
+ /* Internal types */
+ server->capabilities |= SMB2_NT_FIND | SMB2_LARGE_FILES;
+
++ /*
++ * SMB3.0 supports only 1 cipher and doesn't have an encryption neg context.
++ * Set the cipher type manually.
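++ * (SMB3.1.1 instead negotiates the cipher through an encryption
++ * negotiate context.)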
++ */ ++ if (server->dialect == SMB30_PROT_ID && (server->capabilities & SMB2_GLOBAL_CAP_ENCRYPTION)) ++ server->cipher_type = SMB2_ENCRYPTION_AES128_CCM; ++ + security_blob = smb2_get_data_area_len(&blob_offset, &blob_length, + (struct smb2_sync_hdr *)rsp); + /* +@@ -3900,10 +3907,10 @@ smb2_new_read_req(void **buf, unsigned int *total_len, + * Related requests use info from previous read request + * in chain. + */ +- shdr->SessionId = 0xFFFFFFFF; ++ shdr->SessionId = 0xFFFFFFFFFFFFFFFF; + shdr->TreeId = 0xFFFFFFFF; +- req->PersistentFileId = 0xFFFFFFFF; +- req->VolatileFileId = 0xFFFFFFFF; ++ req->PersistentFileId = 0xFFFFFFFFFFFFFFFF; ++ req->VolatileFileId = 0xFFFFFFFFFFFFFFFF; + } + } + if (remaining_bytes > io_parms->length) +diff --git a/fs/cifs/trace.h b/fs/cifs/trace.h +index d6df908dccade..dafcb6ab050dd 100644 +--- a/fs/cifs/trace.h ++++ b/fs/cifs/trace.h +@@ -12,6 +12,11 @@ + + #include + ++/* ++ * Please use this 3-part article as a reference for writing new tracepoints: ++ * https://lwn.net/Articles/379903/ ++ */ ++ + /* For logging errors in read or write */ + DECLARE_EVENT_CLASS(smb3_rw_err_class, + TP_PROTO(unsigned int xid, +@@ -529,16 +534,16 @@ DECLARE_EVENT_CLASS(smb3_exit_err_class, + TP_ARGS(xid, func_name, rc), + TP_STRUCT__entry( + __field(unsigned int, xid) +- __field(const char *, func_name) ++ __string(func_name, func_name) + __field(int, rc) + ), + TP_fast_assign( + __entry->xid = xid; +- __entry->func_name = func_name; ++ __assign_str(func_name, func_name); + __entry->rc = rc; + ), + TP_printk("\t%s: xid=%u rc=%d", +- __entry->func_name, __entry->xid, __entry->rc) ++ __get_str(func_name), __entry->xid, __entry->rc) + ) + + #define DEFINE_SMB3_EXIT_ERR_EVENT(name) \ +@@ -583,14 +588,14 @@ DECLARE_EVENT_CLASS(smb3_enter_exit_class, + TP_ARGS(xid, func_name), + TP_STRUCT__entry( + __field(unsigned int, xid) +- __field(const char *, func_name) ++ __string(func_name, func_name) + ), + TP_fast_assign( + __entry->xid = xid; +- __entry->func_name = func_name; ++ __assign_str(func_name, func_name); + ), + TP_printk("\t%s: xid=%u", +- __entry->func_name, __entry->xid) ++ __get_str(func_name), __entry->xid) + ) + + #define DEFINE_SMB3_ENTER_EXIT_EVENT(name) \ +@@ -857,16 +862,16 @@ DECLARE_EVENT_CLASS(smb3_reconnect_class, + TP_STRUCT__entry( + __field(__u64, currmid) + __field(__u64, conn_id) +- __field(char *, hostname) ++ __string(hostname, hostname) + ), + TP_fast_assign( + __entry->currmid = currmid; + __entry->conn_id = conn_id; +- __entry->hostname = hostname; ++ __assign_str(hostname, hostname); + ), + TP_printk("conn_id=0x%llx server=%s current_mid=%llu", + __entry->conn_id, +- __entry->hostname, ++ __get_str(hostname), + __entry->currmid) + ) + +@@ -891,7 +896,7 @@ DECLARE_EVENT_CLASS(smb3_credit_class, + TP_STRUCT__entry( + __field(__u64, currmid) + __field(__u64, conn_id) +- __field(char *, hostname) ++ __string(hostname, hostname) + __field(int, credits) + __field(int, credits_to_add) + __field(int, in_flight) +@@ -899,7 +904,7 @@ DECLARE_EVENT_CLASS(smb3_credit_class, + TP_fast_assign( + __entry->currmid = currmid; + __entry->conn_id = conn_id; +- __entry->hostname = hostname; ++ __assign_str(hostname, hostname); + __entry->credits = credits; + __entry->credits_to_add = credits_to_add; + __entry->in_flight = in_flight; +@@ -907,7 +912,7 @@ DECLARE_EVENT_CLASS(smb3_credit_class, + TP_printk("conn_id=0x%llx server=%s current_mid=%llu " + "credits=%d credit_change=%d in_flight=%d", + __entry->conn_id, +- __entry->hostname, ++ __get_str(hostname), + 
__entry->currmid, + __entry->credits, + __entry->credits_to_add, +diff --git a/fs/debugfs/inode.c b/fs/debugfs/inode.c +index 1d252164d97b6..8129a430d789d 100644 +--- a/fs/debugfs/inode.c ++++ b/fs/debugfs/inode.c +@@ -45,10 +45,13 @@ static unsigned int debugfs_allow __ro_after_init = DEFAULT_DEBUGFS_ALLOW_BITS; + static int debugfs_setattr(struct user_namespace *mnt_userns, + struct dentry *dentry, struct iattr *ia) + { +- int ret = security_locked_down(LOCKDOWN_DEBUGFS); ++ int ret; + +- if (ret && (ia->ia_valid & (ATTR_MODE | ATTR_UID | ATTR_GID))) +- return ret; ++ if (ia->ia_valid & (ATTR_MODE | ATTR_UID | ATTR_GID)) { ++ ret = security_locked_down(LOCKDOWN_DEBUGFS); ++ if (ret) ++ return ret; ++ } + return simple_setattr(&init_user_ns, dentry, ia); + } + +diff --git a/fs/nfs/filelayout/filelayout.c b/fs/nfs/filelayout/filelayout.c +index d158a500c25c6..d2103852475fa 100644 +--- a/fs/nfs/filelayout/filelayout.c ++++ b/fs/nfs/filelayout/filelayout.c +@@ -718,7 +718,7 @@ filelayout_decode_layout(struct pnfs_layout_hdr *flo, + if (unlikely(!p)) + goto out_err; + fl->fh_array[i]->size = be32_to_cpup(p++); +- if (sizeof(struct nfs_fh) < fl->fh_array[i]->size) { ++ if (fl->fh_array[i]->size > NFS_MAXFHSIZE) { + printk(KERN_ERR "NFS: Too big fh %d received %d\n", + i, fl->fh_array[i]->size); + goto out_err; +diff --git a/fs/nfs/nfs4file.c b/fs/nfs/nfs4file.c +index 441a2fa073c8f..2bcbafded19ec 100644 +--- a/fs/nfs/nfs4file.c ++++ b/fs/nfs/nfs4file.c +@@ -211,7 +211,7 @@ static loff_t nfs4_file_llseek(struct file *filep, loff_t offset, int whence) + case SEEK_HOLE: + case SEEK_DATA: + ret = nfs42_proc_llseek(filep, offset, whence); +- if (ret != -ENOTSUPP) ++ if (ret != -EOPNOTSUPP) + return ret; + fallthrough; + default: +diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c +index 820abae88cf04..0b809cc6ad1df 100644 +--- a/fs/nfs/nfs4proc.c ++++ b/fs/nfs/nfs4proc.c +@@ -1682,7 +1682,7 @@ static void nfs_set_open_stateid_locked(struct nfs4_state *state, + rcu_read_unlock(); + trace_nfs4_open_stateid_update_wait(state->inode, stateid, 0); + +- if (!signal_pending(current)) { ++ if (!fatal_signal_pending(current)) { + if (schedule_timeout(5*HZ) == 0) + status = -EAGAIN; + else +@@ -3458,7 +3458,7 @@ static bool nfs4_refresh_open_old_stateid(nfs4_stateid *dst, + write_sequnlock(&state->seqlock); + trace_nfs4_close_stateid_update_wait(state->inode, dst, 0); + +- if (signal_pending(current)) ++ if (fatal_signal_pending(current)) + status = -EINTR; + else + if (schedule_timeout(5*HZ) != 0) +diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c +index 78c9c4bdef2b6..98b9c1ed366ee 100644 +--- a/fs/nfs/pagelist.c ++++ b/fs/nfs/pagelist.c +@@ -1094,15 +1094,16 @@ nfs_pageio_do_add_request(struct nfs_pageio_descriptor *desc, + struct nfs_page *prev = NULL; + unsigned int size; + +- if (mirror->pg_count != 0) { +- prev = nfs_list_entry(mirror->pg_list.prev); +- } else { ++ if (list_empty(&mirror->pg_list)) { + if (desc->pg_ops->pg_init) + desc->pg_ops->pg_init(desc, req); + if (desc->pg_error < 0) + return 0; + mirror->pg_base = req->wb_pgbase; +- } ++ mirror->pg_count = 0; ++ mirror->pg_recoalesce = 0; ++ } else ++ prev = nfs_list_entry(mirror->pg_list.prev); + + if (desc->pg_maxretrans && req->wb_nio > desc->pg_maxretrans) { + if (NFS_SERVER(desc->pg_inode)->flags & NFS_MOUNT_SOFTERR) +@@ -1127,17 +1128,16 @@ static void nfs_pageio_doio(struct nfs_pageio_descriptor *desc) + { + struct nfs_pgio_mirror *mirror = nfs_pgio_current_mirror(desc); + +- + if (!list_empty(&mirror->pg_list)) { + int error = 
desc->pg_ops->pg_doio(desc); + if (error < 0) + desc->pg_error = error; +- else ++ if (list_empty(&mirror->pg_list)) { + mirror->pg_bytes_written += mirror->pg_count; +- } +- if (list_empty(&mirror->pg_list)) { +- mirror->pg_count = 0; +- mirror->pg_base = 0; ++ mirror->pg_count = 0; ++ mirror->pg_base = 0; ++ mirror->pg_recoalesce = 0; ++ } + } + } + +@@ -1227,7 +1227,6 @@ static int nfs_do_recoalesce(struct nfs_pageio_descriptor *desc) + + do { + list_splice_init(&mirror->pg_list, &head); +- mirror->pg_bytes_written -= mirror->pg_count; + mirror->pg_count = 0; + mirror->pg_base = 0; + mirror->pg_recoalesce = 0; +diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c +index f726f8b12b7e8..a5e3628a52733 100644 +--- a/fs/nfs/pnfs.c ++++ b/fs/nfs/pnfs.c +@@ -1317,6 +1317,11 @@ _pnfs_return_layout(struct inode *ino) + { + struct pnfs_layout_hdr *lo = NULL; + struct nfs_inode *nfsi = NFS_I(ino); ++ struct pnfs_layout_range range = { ++ .iomode = IOMODE_ANY, ++ .offset = 0, ++ .length = NFS4_MAX_UINT64, ++ }; + LIST_HEAD(tmp_list); + const struct cred *cred; + nfs4_stateid stateid; +@@ -1344,16 +1349,10 @@ _pnfs_return_layout(struct inode *ino) + } + valid_layout = pnfs_layout_is_valid(lo); + pnfs_clear_layoutcommit(ino, &tmp_list); +- pnfs_mark_matching_lsegs_return(lo, &tmp_list, NULL, 0); ++ pnfs_mark_matching_lsegs_return(lo, &tmp_list, &range, 0); + +- if (NFS_SERVER(ino)->pnfs_curr_ld->return_range) { +- struct pnfs_layout_range range = { +- .iomode = IOMODE_ANY, +- .offset = 0, +- .length = NFS4_MAX_UINT64, +- }; ++ if (NFS_SERVER(ino)->pnfs_curr_ld->return_range) + NFS_SERVER(ino)->pnfs_curr_ld->return_range(lo, &range); +- } + + /* Don't send a LAYOUTRETURN if list was initially empty */ + if (!test_bit(NFS_LAYOUT_RETURN_REQUESTED, &lo->plh_flags) || +diff --git a/fs/proc/base.c b/fs/proc/base.c +index 3851bfcdba56e..58bbf334265b7 100644 +--- a/fs/proc/base.c ++++ b/fs/proc/base.c +@@ -2703,6 +2703,10 @@ static ssize_t proc_pid_attr_write(struct file * file, const char __user * buf, + void *page; + int rv; + ++ /* A task may only write when it was the opener. */ ++ if (file->f_cred != current_real_cred()) ++ return -EPERM; ++ + rcu_read_lock(); + task = pid_task(proc_pid(inode), PIDTYPE_PID); + if (!task) { +diff --git a/include/linux/bits.h b/include/linux/bits.h +index 7f475d59a0974..87d112650dfbb 100644 +--- a/include/linux/bits.h ++++ b/include/linux/bits.h +@@ -22,7 +22,7 @@ + #include + #define GENMASK_INPUT_CHECK(h, l) \ + (BUILD_BUG_ON_ZERO(__builtin_choose_expr( \ +- __builtin_constant_p((l) > (h)), (l) > (h), 0))) ++ __is_constexpr((l) > (h)), (l) > (h), 0))) + #else + /* + * BUILD_BUG_ON_ZERO is not available in h files included from asm files, +diff --git a/include/linux/const.h b/include/linux/const.h +index 81b8aae5a8559..435ddd72d2c46 100644 +--- a/include/linux/const.h ++++ b/include/linux/const.h +@@ -3,4 +3,12 @@ + + #include + ++/* ++ * This returns a constant expression while determining if an argument is ++ * a constant expression, most importantly without evaluating the argument. ++ * Glory to Martin Uecker ++ */ ++#define __is_constexpr(x) \ ++ (sizeof(int) == sizeof(*(8 ? ((void *)((long)(x) * 0l)) : (int *)8))) ++ + #endif /* _LINUX_CONST_H */ +diff --git a/include/linux/device.h b/include/linux/device.h +index ba660731bd258..bec8d09014daa 100644 +--- a/include/linux/device.h ++++ b/include/linux/device.h +@@ -566,7 +566,7 @@ struct device { + * @flags: Link flags. + * @rpm_active: Whether or not the consumer device is runtime-PM-active. 
+ * @kref: Count repeated addition of the same link. +- * @rcu_head: An RCU head to use for deferred execution of SRCU callbacks. ++ * @rm_work: Work structure used for removing the link. + * @supplier_preactivated: Supplier has been made active before consumer probe. + */ + struct device_link { +@@ -579,9 +579,7 @@ struct device_link { + u32 flags; + refcount_t rpm_active; + struct kref kref; +-#ifdef CONFIG_SRCU +- struct rcu_head rcu_head; +-#endif ++ struct work_struct rm_work; + bool supplier_preactivated; /* Owned by consumer probe. */ + }; + +diff --git a/include/linux/minmax.h b/include/linux/minmax.h +index c0f57b0c64d90..5433c08fcc685 100644 +--- a/include/linux/minmax.h ++++ b/include/linux/minmax.h +@@ -2,6 +2,8 @@ + #ifndef _LINUX_MINMAX_H + #define _LINUX_MINMAX_H + ++#include ++ + /* + * min()/max()/clamp() macros must accomplish three things: + * +@@ -17,14 +19,6 @@ + #define __typecheck(x, y) \ + (!!(sizeof((typeof(x) *)1 == (typeof(y) *)1))) + +-/* +- * This returns a constant expression while determining if an argument is +- * a constant expression, most importantly without evaluating the argument. +- * Glory to Martin Uecker +- */ +-#define __is_constexpr(x) \ +- (sizeof(int) == sizeof(*(8 ? ((void *)((long)(x) * 0l)) : (int *)8))) +- + #define __no_side_effects(x, y) \ + (__is_constexpr(x) && __is_constexpr(y)) + +diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h +index ab07f09f2bad9..133967c40214b 100644 +--- a/include/linux/mlx5/driver.h ++++ b/include/linux/mlx5/driver.h +@@ -698,6 +698,27 @@ struct mlx5_hv_vhca; + #define MLX5_LOG_SW_ICM_BLOCK_SIZE(dev) (MLX5_CAP_DEV_MEM(dev, log_sw_icm_alloc_granularity)) + #define MLX5_SW_ICM_BLOCK_SIZE(dev) (1 << MLX5_LOG_SW_ICM_BLOCK_SIZE(dev)) + ++enum { ++ MLX5_PROF_MASK_QP_SIZE = (u64)1 << 0, ++ MLX5_PROF_MASK_MR_CACHE = (u64)1 << 1, ++}; ++ ++enum { ++ MR_CACHE_LAST_STD_ENTRY = 20, ++ MLX5_IMR_MTT_CACHE_ENTRY, ++ MLX5_IMR_KSM_CACHE_ENTRY, ++ MAX_MR_CACHE_ENTRIES ++}; ++ ++struct mlx5_profile { ++ u64 mask; ++ u8 log_max_qp; ++ struct { ++ int size; ++ int limit; ++ } mr_cache[MAX_MR_CACHE_ENTRIES]; ++}; ++ + struct mlx5_core_dev { + struct device *device; + enum mlx5_coredev_type coredev_type; +@@ -726,7 +747,7 @@ struct mlx5_core_dev { + struct mutex intf_state_mutex; + unsigned long intf_state; + struct mlx5_priv priv; +- struct mlx5_profile *profile; ++ struct mlx5_profile profile; + u32 issi; + struct mlx5e_resources mlx5e_res; + struct mlx5_dm *dm; +@@ -1073,18 +1094,6 @@ static inline u8 mlx5_mkey_variant(u32 mkey) + return mkey & 0xff; + } + +-enum { +- MLX5_PROF_MASK_QP_SIZE = (u64)1 << 0, +- MLX5_PROF_MASK_MR_CACHE = (u64)1 << 1, +-}; +- +-enum { +- MR_CACHE_LAST_STD_ENTRY = 20, +- MLX5_IMR_MTT_CACHE_ENTRY, +- MLX5_IMR_KSM_CACHE_ENTRY, +- MAX_MR_CACHE_ENTRIES +-}; +- + /* Async-atomic event notifier used by mlx5 core to forward FW + * evetns recived from event queue to mlx5 consumers. + * Optimise event queue dipatching. 
+@@ -1138,15 +1147,6 @@ int mlx5_rdma_rn_get_params(struct mlx5_core_dev *mdev, + struct ib_device *device, + struct rdma_netdev_alloc_params *params); + +-struct mlx5_profile { +- u64 mask; +- u8 log_max_qp; +- struct { +- int size; +- int limit; +- } mr_cache[MAX_MR_CACHE_ENTRIES]; +-}; +- + enum { + MLX5_PCI_DEV_IS_VF = 1 << 0, + }; +diff --git a/include/linux/mlx5/mpfs.h b/include/linux/mlx5/mpfs.h +new file mode 100644 +index 0000000000000..bf700c8d55164 +--- /dev/null ++++ b/include/linux/mlx5/mpfs.h +@@ -0,0 +1,18 @@ ++/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB ++ * Copyright (c) 2021 Mellanox Technologies Ltd. ++ */ ++ ++#ifndef _MLX5_MPFS_ ++#define _MLX5_MPFS_ ++ ++struct mlx5_core_dev; ++ ++#ifdef CONFIG_MLX5_MPFS ++int mlx5_mpfs_add_mac(struct mlx5_core_dev *dev, u8 *mac); ++int mlx5_mpfs_del_mac(struct mlx5_core_dev *dev, u8 *mac); ++#else /* #ifndef CONFIG_MLX5_MPFS */ ++static inline int mlx5_mpfs_add_mac(struct mlx5_core_dev *dev, u8 *mac) { return 0; } ++static inline int mlx5_mpfs_del_mac(struct mlx5_core_dev *dev, u8 *mac) { return 0; } ++#endif ++ ++#endif +diff --git a/include/linux/sunrpc/xprt.h b/include/linux/sunrpc/xprt.h +index d2e97ee802af5..8a063b2b944d1 100644 +--- a/include/linux/sunrpc/xprt.h ++++ b/include/linux/sunrpc/xprt.h +@@ -367,6 +367,8 @@ struct rpc_xprt * xprt_alloc(struct net *net, size_t size, + unsigned int num_prealloc, + unsigned int max_req); + void xprt_free(struct rpc_xprt *); ++void xprt_add_backlog(struct rpc_xprt *xprt, struct rpc_task *task); ++bool xprt_wake_up_backlog(struct rpc_xprt *xprt, struct rpc_rqst *req); + + static inline int + xprt_enable_swap(struct rpc_xprt *xprt) +diff --git a/include/net/cfg80211.h b/include/net/cfg80211.h +index 911fae42b0c0e..394c4269901cb 100644 +--- a/include/net/cfg80211.h ++++ b/include/net/cfg80211.h +@@ -5756,7 +5756,7 @@ unsigned int ieee80211_get_mesh_hdrlen(struct ieee80211s_hdr *meshhdr); + */ + int ieee80211_data_to_8023_exthdr(struct sk_buff *skb, struct ethhdr *ehdr, + const u8 *addr, enum nl80211_iftype iftype, +- u8 data_offset); ++ u8 data_offset, bool is_amsdu); + + /** + * ieee80211_data_to_8023 - convert an 802.11 data frame to 802.3 +@@ -5768,7 +5768,7 @@ int ieee80211_data_to_8023_exthdr(struct sk_buff *skb, struct ethhdr *ehdr, + static inline int ieee80211_data_to_8023(struct sk_buff *skb, const u8 *addr, + enum nl80211_iftype iftype) + { +- return ieee80211_data_to_8023_exthdr(skb, NULL, addr, iftype, 0); ++ return ieee80211_data_to_8023_exthdr(skb, NULL, addr, iftype, 0, false); + } + + /** +diff --git a/include/net/netfilter/nf_flow_table.h b/include/net/netfilter/nf_flow_table.h +index 54c4d5c908a52..233f47f4189c7 100644 +--- a/include/net/netfilter/nf_flow_table.h ++++ b/include/net/netfilter/nf_flow_table.h +@@ -130,7 +130,6 @@ enum nf_flow_flags { + NF_FLOW_HW, + NF_FLOW_HW_DYING, + NF_FLOW_HW_DEAD, +- NF_FLOW_HW_REFRESH, + NF_FLOW_HW_PENDING, + }; + +diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h +index 255e4f4b521f4..ec7823921bd26 100644 +--- a/include/net/pkt_cls.h ++++ b/include/net/pkt_cls.h +@@ -709,6 +709,17 @@ tc_cls_common_offload_init(struct flow_cls_common_offload *cls_common, + cls_common->extack = extack; + } + ++#if IS_ENABLED(CONFIG_NET_TC_SKB_EXT) ++static inline struct tc_skb_ext *tc_skb_ext_alloc(struct sk_buff *skb) ++{ ++ struct tc_skb_ext *tc_skb_ext = skb_ext_add(skb, TC_SKB_EXT); ++ ++ if (tc_skb_ext) ++ memset(tc_skb_ext, 0, sizeof(*tc_skb_ext)); ++ return tc_skb_ext; ++} ++#endif ++ + enum tc_matchall_command { + 
TC_CLSMATCHALL_REPLACE, + TC_CLSMATCHALL_DESTROY, +diff --git a/include/net/pkt_sched.h b/include/net/pkt_sched.h +index 15b1b30f454e4..4ba757aba9b2d 100644 +--- a/include/net/pkt_sched.h ++++ b/include/net/pkt_sched.h +@@ -128,12 +128,7 @@ void __qdisc_run(struct Qdisc *q); + static inline void qdisc_run(struct Qdisc *q) + { + if (qdisc_run_begin(q)) { +- /* NOLOCK qdisc must check 'state' under the qdisc seqlock +- * to avoid racing with dev_qdisc_reset() +- */ +- if (!(q->flags & TCQ_F_NOLOCK) || +- likely(!test_bit(__QDISC_STATE_DEACTIVATED, &q->state))) +- __qdisc_run(q); ++ __qdisc_run(q); + qdisc_run_end(q); + } + } +diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h +index 2d6eb60c58c81..2c4f3527cc098 100644 +--- a/include/net/sch_generic.h ++++ b/include/net/sch_generic.h +@@ -36,6 +36,7 @@ struct qdisc_rate_table { + enum qdisc_state_t { + __QDISC_STATE_SCHED, + __QDISC_STATE_DEACTIVATED, ++ __QDISC_STATE_MISSED, + }; + + struct qdisc_size_table { +@@ -159,8 +160,33 @@ static inline bool qdisc_is_empty(const struct Qdisc *qdisc) + static inline bool qdisc_run_begin(struct Qdisc *qdisc) + { + if (qdisc->flags & TCQ_F_NOLOCK) { ++ if (spin_trylock(&qdisc->seqlock)) ++ goto nolock_empty; ++ ++ /* If the MISSED flag is set, it means other thread has ++ * set the MISSED flag before second spin_trylock(), so ++ * we can return false here to avoid multi cpus doing ++ * the set_bit() and second spin_trylock() concurrently. ++ */ ++ if (test_bit(__QDISC_STATE_MISSED, &qdisc->state)) ++ return false; ++ ++ /* Set the MISSED flag before the second spin_trylock(), ++ * if the second spin_trylock() return false, it means ++ * other cpu holding the lock will do dequeuing for us ++ * or it will see the MISSED flag set after releasing ++ * lock and reschedule the net_tx_action() to do the ++ * dequeuing. ++ */ ++ set_bit(__QDISC_STATE_MISSED, &qdisc->state); ++ ++ /* Retry again in case other CPU may not see the new flag ++ * after it releases the lock at the end of qdisc_run_end(). 
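++ * If this second spin_trylock() fails as well, the CPU holding the
++ * lock will see the MISSED flag in qdisc_run_end() and reschedule
++ * the qdisc via __netif_schedule().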
++ */ + if (!spin_trylock(&qdisc->seqlock)) + return false; ++ ++nolock_empty: + WRITE_ONCE(qdisc->empty, false); + } else if (qdisc_is_running(qdisc)) { + return false; +@@ -176,8 +202,15 @@ static inline bool qdisc_run_begin(struct Qdisc *qdisc) + static inline void qdisc_run_end(struct Qdisc *qdisc) + { + write_seqcount_end(&qdisc->running); +- if (qdisc->flags & TCQ_F_NOLOCK) ++ if (qdisc->flags & TCQ_F_NOLOCK) { + spin_unlock(&qdisc->seqlock); ++ ++ if (unlikely(test_bit(__QDISC_STATE_MISSED, ++ &qdisc->state))) { ++ clear_bit(__QDISC_STATE_MISSED, &qdisc->state); ++ __netif_schedule(qdisc); ++ } ++ } + } + + static inline bool qdisc_may_bulk(const struct Qdisc *qdisc) +diff --git a/include/net/sock.h b/include/net/sock.h +index 8487f58da36d2..62e3811e95a78 100644 +--- a/include/net/sock.h ++++ b/include/net/sock.h +@@ -2225,13 +2225,15 @@ static inline void skb_set_owner_r(struct sk_buff *skb, struct sock *sk) + sk_mem_charge(sk, skb->truesize); + } + +-static inline void skb_set_owner_sk_safe(struct sk_buff *skb, struct sock *sk) ++static inline __must_check bool skb_set_owner_sk_safe(struct sk_buff *skb, struct sock *sk) + { + if (sk && refcount_inc_not_zero(&sk->sk_refcnt)) { + skb_orphan(skb); + skb->destructor = sock_efree; + skb->sk = sk; ++ return true; + } ++ return false; + } + + void sk_reset_timer(struct sock *sk, struct timer_list *timer, +diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h +index f6afee209620d..26e6d94d64ed4 100644 +--- a/include/uapi/linux/kvm.h ++++ b/include/uapi/linux/kvm.h +@@ -8,6 +8,7 @@ + * Note: you must update KVM_API_VERSION if you change this interface. + */ + ++#include + #include + #include + #include +@@ -1834,8 +1835,8 @@ struct kvm_hyperv_eventfd { + * conversion after harvesting an entry. Also, it must not skip any + * dirty bits, so that dirty bits are always harvested in sequence. + */ +-#define KVM_DIRTY_GFN_F_DIRTY BIT(0) +-#define KVM_DIRTY_GFN_F_RESET BIT(1) ++#define KVM_DIRTY_GFN_F_DIRTY _BITUL(0) ++#define KVM_DIRTY_GFN_F_RESET _BITUL(1) + #define KVM_DIRTY_GFN_F_MASK 0x3 + + /* +diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c +index 21247e49fe82b..b186d852fe3df 100644 +--- a/kernel/bpf/verifier.c ++++ b/kernel/bpf/verifier.c +@@ -12714,12 +12714,6 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, + if (is_priv) + env->test_state_freq = attr->prog_flags & BPF_F_TEST_STATE_FREQ; + +- if (bpf_prog_is_dev_bound(env->prog->aux)) { +- ret = bpf_prog_offload_verifier_prep(env->prog); +- if (ret) +- goto skip_full_check; +- } +- + env->explored_states = kvcalloc(state_htab_size(env), + sizeof(struct bpf_verifier_state_list *), + GFP_USER); +@@ -12743,6 +12737,12 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, + if (ret < 0) + goto skip_full_check; + ++ if (bpf_prog_is_dev_bound(env->prog->aux)) { ++ ret = bpf_prog_offload_verifier_prep(env->prog); ++ if (ret) ++ goto skip_full_check; ++ } ++ + ret = check_cfg(env); + if (ret < 0) + goto skip_full_check; +diff --git a/kernel/seccomp.c b/kernel/seccomp.c +index 1d60fc2c99871..93684cc63285a 100644 +--- a/kernel/seccomp.c ++++ b/kernel/seccomp.c +@@ -1098,28 +1098,30 @@ static int seccomp_do_user_notification(int this_syscall, + + up(&match->notif->request); + wake_up_poll(&match->wqh, EPOLLIN | EPOLLRDNORM); +- mutex_unlock(&match->notify_lock); + + /* + * This is where we wait for a reply from userspace. 
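+ * The notify_lock is dropped around each sleep and re-acquired before
+ * the notification state is inspected.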
+ */
+-wait:
+- err = wait_for_completion_interruptible(&n.ready);
+- mutex_lock(&match->notify_lock);
+- if (err == 0) {
+- /* Check if we were woken up by a addfd message */
++ do {
++ mutex_unlock(&match->notify_lock);
++ err = wait_for_completion_interruptible(&n.ready);
++ mutex_lock(&match->notify_lock);
++ if (err != 0)
++ goto interrupted;
++
+ addfd = list_first_entry_or_null(&n.addfd,
+ struct seccomp_kaddfd, list);
+- if (addfd && n.state != SECCOMP_NOTIFY_REPLIED) {
++ /* Check if we were woken up by an addfd message */
++ if (addfd)
+ seccomp_handle_addfd(addfd);
+- mutex_unlock(&match->notify_lock);
+- goto wait;
+- }
+- ret = n.val;
+- err = n.error;
+- flags = n.flags;
+- }
+
++ } while (n.state != SECCOMP_NOTIFY_REPLIED);
++
++ ret = n.val;
++ err = n.error;
++ flags = n.flags;
++
++interrupted:
+ /* If there were any pending addfd calls, clear them out */
+ list_for_each_entry_safe(addfd, tmp, &n.addfd, list) {
+ /* The process went away before we got a chance to handle it */
+diff --git a/net/bluetooth/cmtp/core.c b/net/bluetooth/cmtp/core.c
+index 07cfa3249f83a..0a2d78e811cf5 100644
+--- a/net/bluetooth/cmtp/core.c
++++ b/net/bluetooth/cmtp/core.c
+@@ -392,6 +392,11 @@ int cmtp_add_connection(struct cmtp_connadd_req *req, struct socket *sock)
+ if (!(session->flags & BIT(CMTP_LOOPBACK))) {
+ err = cmtp_attach_device(session);
+ if (err < 0) {
++ /* Caller will call fput in case of failure, and so
++ * will cmtp_session kthread.
++ */
++ get_file(session->sock->file);
++
+ atomic_inc(&session->terminate);
+ wake_up_interruptible(sk_sleep(session->sock->sk));
+ up_write(&cmtp_session_sem);
+diff --git a/net/can/isotp.c b/net/can/isotp.c
+index 9f94ad3caee92..253b24417c8e5 100644
+--- a/net/can/isotp.c
++++ b/net/can/isotp.c
+@@ -1062,27 +1062,31 @@ static int isotp_bind(struct socket *sock, struct sockaddr *uaddr, int len)
+ if (len < ISOTP_MIN_NAMELEN)
+ return -EINVAL;
+
++ if (addr->can_addr.tp.tx_id & (CAN_ERR_FLAG | CAN_RTR_FLAG))
++ return -EADDRNOTAVAIL;
++
++ if (!addr->can_ifindex)
++ return -ENODEV;
++
++ lock_sock(sk);
++
+ /* do not register frame reception for functional addressing */
+ if (so->opt.flags & CAN_ISOTP_SF_BROADCAST)
+ do_rx_reg = 0;
+
+ /* do not validate rx address for functional addressing */
+ if (do_rx_reg) {
+- if (addr->can_addr.tp.rx_id == addr->can_addr.tp.tx_id)
+- return -EADDRNOTAVAIL;
++ if (addr->can_addr.tp.rx_id == addr->can_addr.tp.tx_id) {
++ err = -EADDRNOTAVAIL;
++ goto out;
++ }
+
+- if (addr->can_addr.tp.rx_id & (CAN_ERR_FLAG | CAN_RTR_FLAG))
+- return -EADDRNOTAVAIL;
++ if (addr->can_addr.tp.rx_id & (CAN_ERR_FLAG | CAN_RTR_FLAG)) {
++ err = -EADDRNOTAVAIL;
++ goto out;
++ }
+ }
+
+- if (addr->can_addr.tp.tx_id & (CAN_ERR_FLAG | CAN_RTR_FLAG))
+- return -EADDRNOTAVAIL;
+-
+- if (!addr->can_ifindex)
+- return -ENODEV;
+-
+- lock_sock(sk);
+-
+ if (so->bound && addr->can_ifindex == so->ifindex &&
+ addr->can_addr.tp.rx_id == so->rxid &&
+ addr->can_addr.tp.tx_id == so->txid)
+@@ -1164,16 +1168,13 @@ static int isotp_getname(struct socket *sock, struct sockaddr *uaddr, int peer)
+ return ISOTP_MIN_NAMELEN;
+ }
+
+-static int isotp_setsockopt(struct socket *sock, int level, int optname,
++static int isotp_setsockopt_locked(struct socket *sock, int level, int optname,
+ sockptr_t optval, unsigned int optlen)
+ {
+ struct sock *sk = sock->sk;
+ struct isotp_sock *so = isotp_sk(sk);
+ int ret = 0;
+
+- if (level != SOL_CAN_ISOTP)
+- return -EINVAL;
+-
+ if (so->bound)
+ return -EISCONN;
+
+@@ -1248,6 +1249,22 @@ static int
isotp_setsockopt(struct socket *sock, int level, int optname, + return ret; + } + ++static int isotp_setsockopt(struct socket *sock, int level, int optname, ++ sockptr_t optval, unsigned int optlen) ++ ++{ ++ struct sock *sk = sock->sk; ++ int ret; ++ ++ if (level != SOL_CAN_ISOTP) ++ return -EINVAL; ++ ++ lock_sock(sk); ++ ret = isotp_setsockopt_locked(sock, level, optname, optval, optlen); ++ release_sock(sk); ++ return ret; ++} ++ + static int isotp_getsockopt(struct socket *sock, int level, int optname, + char __user *optval, int __user *optlen) + { +diff --git a/net/core/dev.c b/net/core/dev.c +index 70829c568645c..9631944740586 100644 +--- a/net/core/dev.c ++++ b/net/core/dev.c +@@ -3804,7 +3804,8 @@ static inline int __dev_xmit_skb(struct sk_buff *skb, struct Qdisc *q, + + if (q->flags & TCQ_F_NOLOCK) { + rc = q->enqueue(skb, q, &to_free) & NET_XMIT_MASK; +- qdisc_run(q); ++ if (likely(!netif_xmit_frozen_or_stopped(txq))) ++ qdisc_run(q); + + if (unlikely(to_free)) + kfree_skb_list(to_free); +@@ -4974,25 +4975,43 @@ static __latent_entropy void net_tx_action(struct softirq_action *h) + sd->output_queue_tailp = &sd->output_queue; + local_irq_enable(); + ++ rcu_read_lock(); ++ + while (head) { + struct Qdisc *q = head; + spinlock_t *root_lock = NULL; + + head = head->next_sched; + +- if (!(q->flags & TCQ_F_NOLOCK)) { +- root_lock = qdisc_lock(q); +- spin_lock(root_lock); +- } + /* We need to make sure head->next_sched is read + * before clearing __QDISC_STATE_SCHED + */ + smp_mb__before_atomic(); ++ ++ if (!(q->flags & TCQ_F_NOLOCK)) { ++ root_lock = qdisc_lock(q); ++ spin_lock(root_lock); ++ } else if (unlikely(test_bit(__QDISC_STATE_DEACTIVATED, ++ &q->state))) { ++ /* There is a synchronize_net() between ++ * STATE_DEACTIVATED flag being set and ++ * qdisc_reset()/some_qdisc_is_busy() in ++ * dev_deactivate(), so we can safely bail out ++ * early here to avoid data race between ++ * qdisc_deactivate() and some_qdisc_is_busy() ++ * for lockless qdisc. 
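++ * It is enough to clear __QDISC_STATE_SCHED and skip qdisc_run()
++ * here; dev_deactivate() takes care of resetting the qdisc and
++ * flushing any pending skbs.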
++ */ ++ clear_bit(__QDISC_STATE_SCHED, &q->state); ++ continue; ++ } ++ + clear_bit(__QDISC_STATE_SCHED, &q->state); + qdisc_run(q); + if (root_lock) + spin_unlock(root_lock); + } ++ ++ rcu_read_unlock(); + } + + xfrm_dev_backlog(sd); +diff --git a/net/core/filter.c b/net/core/filter.c +index 9323d34d34ccc..52f4359efbd2b 100644 +--- a/net/core/filter.c ++++ b/net/core/filter.c +@@ -3782,6 +3782,7 @@ static inline int __bpf_skb_change_head(struct sk_buff *skb, u32 head_room, + __skb_push(skb, head_room); + memset(skb->data, 0, head_room); + skb_reset_mac_header(skb); ++ skb_reset_mac_len(skb); + } + + return ret; +diff --git a/net/core/neighbour.c b/net/core/neighbour.c +index 8379719d1dcef..98f20efbfadf2 100644 +--- a/net/core/neighbour.c ++++ b/net/core/neighbour.c +@@ -131,6 +131,9 @@ static void neigh_update_gc_list(struct neighbour *n) + write_lock_bh(&n->tbl->lock); + write_lock(&n->lock); + ++ if (n->dead) ++ goto out; ++ + /* remove from the gc list if new state is permanent or if neighbor + * is externally learned; otherwise entry should be on the gc list + */ +@@ -147,6 +150,7 @@ static void neigh_update_gc_list(struct neighbour *n) + atomic_inc(&n->tbl->gc_entries); + } + ++out: + write_unlock(&n->lock); + write_unlock_bh(&n->tbl->lock); + } +diff --git a/net/core/sock.c b/net/core/sock.c +index 5ec90f99e1028..9c7b143e7a964 100644 +--- a/net/core/sock.c ++++ b/net/core/sock.c +@@ -2132,10 +2132,10 @@ void skb_orphan_partial(struct sk_buff *skb) + if (skb_is_tcp_pure_ack(skb)) + return; + +- if (can_skb_orphan_partial(skb)) +- skb_set_owner_sk_safe(skb, skb->sk); +- else +- skb_orphan(skb); ++ if (can_skb_orphan_partial(skb) && skb_set_owner_sk_safe(skb, skb->sk)) ++ return; ++ ++ skb_orphan(skb); + } + EXPORT_SYMBOL(skb_orphan_partial); + +diff --git a/net/dsa/master.c b/net/dsa/master.c +index 052a977914a6d..63adbc21a735a 100644 +--- a/net/dsa/master.c ++++ b/net/dsa/master.c +@@ -147,8 +147,7 @@ static void dsa_master_get_strings(struct net_device *dev, uint32_t stringset, + struct dsa_switch *ds = cpu_dp->ds; + int port = cpu_dp->index; + int len = ETH_GSTRING_LEN; +- int mcount = 0, count; +- unsigned int i; ++ int mcount = 0, count, i; + uint8_t pfx[4]; + uint8_t *ndata; + +@@ -178,6 +177,8 @@ static void dsa_master_get_strings(struct net_device *dev, uint32_t stringset, + */ + ds->ops->get_strings(ds, port, stringset, ndata); + count = ds->ops->get_sset_count(ds, port, stringset); ++ if (count < 0) ++ return; + for (i = 0; i < count; i++) { + memmove(ndata + (i * len + sizeof(pfx)), + ndata + i * len, len - sizeof(pfx)); +diff --git a/net/dsa/slave.c b/net/dsa/slave.c +index 992fcab4b5529..8dd7c8e84a65a 100644 +--- a/net/dsa/slave.c ++++ b/net/dsa/slave.c +@@ -787,13 +787,15 @@ static int dsa_slave_get_sset_count(struct net_device *dev, int sset) + struct dsa_switch *ds = dp->ds; + + if (sset == ETH_SS_STATS) { +- int count; ++ int count = 0; + +- count = 4; +- if (ds->ops->get_sset_count) +- count += ds->ops->get_sset_count(ds, dp->index, sset); ++ if (ds->ops->get_sset_count) { ++ count = ds->ops->get_sset_count(ds, dp->index, sset); ++ if (count < 0) ++ return count; ++ } + +- return count; ++ return count + 4; + } + + return -EOPNOTSUPP; +diff --git a/net/hsr/hsr_device.c b/net/hsr/hsr_device.c +index bfcdc75fc01e6..26c32407f0290 100644 +--- a/net/hsr/hsr_device.c ++++ b/net/hsr/hsr_device.c +@@ -218,6 +218,7 @@ static netdev_tx_t hsr_dev_xmit(struct sk_buff *skb, struct net_device *dev) + if (master) { + skb->dev = master->dev; + skb_reset_mac_header(skb); ++ 
skb_reset_mac_len(skb); + hsr_forward_skb(skb, master); + } else { + atomic_long_inc(&dev->tx_dropped); +@@ -259,6 +260,7 @@ static struct sk_buff *hsr_init_skb(struct hsr_port *master) + goto out; + + skb_reset_mac_header(skb); ++ skb_reset_mac_len(skb); + skb_reset_network_header(skb); + skb_reset_transport_header(skb); + +diff --git a/net/hsr/hsr_forward.c b/net/hsr/hsr_forward.c +index 6852e9bccf5b8..ceb8afb2a62f4 100644 +--- a/net/hsr/hsr_forward.c ++++ b/net/hsr/hsr_forward.c +@@ -474,8 +474,8 @@ static void handle_std_frame(struct sk_buff *skb, + } + } + +-void hsr_fill_frame_info(__be16 proto, struct sk_buff *skb, +- struct hsr_frame_info *frame) ++int hsr_fill_frame_info(__be16 proto, struct sk_buff *skb, ++ struct hsr_frame_info *frame) + { + struct hsr_port *port = frame->port_rcv; + struct hsr_priv *hsr = port->hsr; +@@ -483,20 +483,26 @@ void hsr_fill_frame_info(__be16 proto, struct sk_buff *skb, + /* HSRv0 supervisory frames double as a tag so treat them as tagged. */ + if ((!hsr->prot_version && proto == htons(ETH_P_PRP)) || + proto == htons(ETH_P_HSR)) { ++ /* Check if skb contains hsr_ethhdr */ ++ if (skb->mac_len < sizeof(struct hsr_ethhdr)) ++ return -EINVAL; ++ + /* HSR tagged frame :- Data or Supervision */ + frame->skb_std = NULL; + frame->skb_prp = NULL; + frame->skb_hsr = skb; + frame->sequence_nr = hsr_get_skb_sequence_nr(skb); +- return; ++ return 0; + } + + /* Standard frame or PRP from master port */ + handle_std_frame(skb, frame); ++ ++ return 0; + } + +-void prp_fill_frame_info(__be16 proto, struct sk_buff *skb, +- struct hsr_frame_info *frame) ++int prp_fill_frame_info(__be16 proto, struct sk_buff *skb, ++ struct hsr_frame_info *frame) + { + /* Supervision frame */ + struct prp_rct *rct = skb_get_PRP_rct(skb); +@@ -507,9 +513,11 @@ void prp_fill_frame_info(__be16 proto, struct sk_buff *skb, + frame->skb_std = NULL; + frame->skb_prp = skb; + frame->sequence_nr = prp_get_skb_sequence_nr(rct); +- return; ++ return 0; + } + handle_std_frame(skb, frame); ++ ++ return 0; + } + + static int fill_frame_info(struct hsr_frame_info *frame, +@@ -519,9 +527,10 @@ static int fill_frame_info(struct hsr_frame_info *frame, + struct hsr_vlan_ethhdr *vlan_hdr; + struct ethhdr *ethhdr; + __be16 proto; ++ int ret; + +- /* Check if skb contains hsr_ethhdr */ +- if (skb->mac_len < sizeof(struct hsr_ethhdr)) ++ /* Check if skb contains ethhdr */ ++ if (skb->mac_len < sizeof(struct ethhdr)) + return -EINVAL; + + memset(frame, 0, sizeof(*frame)); +@@ -548,7 +557,10 @@ static int fill_frame_info(struct hsr_frame_info *frame, + + frame->is_from_san = false; + frame->port_rcv = port; +- hsr->proto_ops->fill_frame_info(proto, skb, frame); ++ ret = hsr->proto_ops->fill_frame_info(proto, skb, frame); ++ if (ret) ++ return ret; ++ + check_local_dest(port->hsr, skb, frame); + + return 0; +diff --git a/net/hsr/hsr_forward.h b/net/hsr/hsr_forward.h +index b6acaafa83fc2..206636750b300 100644 +--- a/net/hsr/hsr_forward.h ++++ b/net/hsr/hsr_forward.h +@@ -24,8 +24,8 @@ struct sk_buff *prp_get_untagged_frame(struct hsr_frame_info *frame, + struct hsr_port *port); + bool prp_drop_frame(struct hsr_frame_info *frame, struct hsr_port *port); + bool hsr_drop_frame(struct hsr_frame_info *frame, struct hsr_port *port); +-void prp_fill_frame_info(__be16 proto, struct sk_buff *skb, +- struct hsr_frame_info *frame); +-void hsr_fill_frame_info(__be16 proto, struct sk_buff *skb, +- struct hsr_frame_info *frame); ++int prp_fill_frame_info(__be16 proto, struct sk_buff *skb, ++ struct hsr_frame_info *frame); 
++int hsr_fill_frame_info(__be16 proto, struct sk_buff *skb, ++ struct hsr_frame_info *frame); + #endif /* __HSR_FORWARD_H */ +diff --git a/net/hsr/hsr_main.h b/net/hsr/hsr_main.h +index 8f264672b70bd..53d1f7a824630 100644 +--- a/net/hsr/hsr_main.h ++++ b/net/hsr/hsr_main.h +@@ -186,8 +186,8 @@ struct hsr_proto_ops { + struct hsr_port *port); + struct sk_buff * (*create_tagged_frame)(struct hsr_frame_info *frame, + struct hsr_port *port); +- void (*fill_frame_info)(__be16 proto, struct sk_buff *skb, +- struct hsr_frame_info *frame); ++ int (*fill_frame_info)(__be16 proto, struct sk_buff *skb, ++ struct hsr_frame_info *frame); + bool (*invalid_dan_ingress_frame)(__be16 protocol); + void (*update_san_info)(struct hsr_node *node, bool is_sup); + }; +diff --git a/net/hsr/hsr_slave.c b/net/hsr/hsr_slave.c +index c5227d42faf56..b70e6bbf6021f 100644 +--- a/net/hsr/hsr_slave.c ++++ b/net/hsr/hsr_slave.c +@@ -60,12 +60,11 @@ static rx_handler_result_t hsr_handle_frame(struct sk_buff **pskb) + goto finish_pass; + + skb_push(skb, ETH_HLEN); +- +- if (skb_mac_header(skb) != skb->data) { +- WARN_ONCE(1, "%s:%d: Malformed frame at source port %s)\n", +- __func__, __LINE__, port->dev->name); +- goto finish_consume; +- } ++ skb_reset_mac_header(skb); ++ if ((!hsr->prot_version && protocol == htons(ETH_P_PRP)) || ++ protocol == htons(ETH_P_HSR)) ++ skb_set_network_header(skb, ETH_HLEN + HSR_HLEN); ++ skb_reset_mac_len(skb); + + hsr_forward_skb(skb, port); + +diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c +index 6c8604390266f..2cab0c563214c 100644 +--- a/net/ipv6/mcast.c ++++ b/net/ipv6/mcast.c +@@ -1601,10 +1601,7 @@ static struct sk_buff *mld_newpack(struct inet6_dev *idev, unsigned int mtu) + IPV6_TLV_PADN, 0 }; + + /* we assume size > sizeof(ra) here */ +- /* limit our allocations to order-0 page */ +- size = min_t(int, size, SKB_MAX_ORDER(0, 0)); + skb = sock_alloc_send_skb(sk, size, 1, &err); +- + if (!skb) + return NULL; + +diff --git a/net/ipv6/reassembly.c b/net/ipv6/reassembly.c +index 47a0dc46cbdb0..28e44782c94d1 100644 +--- a/net/ipv6/reassembly.c ++++ b/net/ipv6/reassembly.c +@@ -343,7 +343,7 @@ static int ipv6_frag_rcv(struct sk_buff *skb) + hdr = ipv6_hdr(skb); + fhdr = (struct frag_hdr *)skb_transport_header(skb); + +- if (!(fhdr->frag_off & htons(0xFFF9))) { ++ if (!(fhdr->frag_off & htons(IP6_OFFSET | IP6_MF))) { + /* It is not a fragmented frame */ + skb->transport_header += sizeof(struct frag_hdr); + __IP6_INC_STATS(net, +@@ -351,6 +351,8 @@ static int ipv6_frag_rcv(struct sk_buff *skb) + + IP6CB(skb)->nhoff = (u8 *)fhdr - skb_network_header(skb); + IP6CB(skb)->flags |= IP6SKB_FRAGMENTED; ++ IP6CB(skb)->frag_max_size = ntohs(hdr->payload_len) + ++ sizeof(struct ipv6hdr); + return 1; + } + +diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h +index ecda126a70266..02e818d740f60 100644 +--- a/net/mac80211/ieee80211_i.h ++++ b/net/mac80211/ieee80211_i.h +@@ -50,12 +50,6 @@ struct ieee80211_local; + #define IEEE80211_ENCRYPT_HEADROOM 8 + #define IEEE80211_ENCRYPT_TAILROOM 18 + +-/* IEEE 802.11 (Ch. 9.5 Defragmentation) requires support for concurrent +- * reception of at least three fragmented frames. This limit can be increased +- * by changing this define, at the cost of slower frame reassembly and +- * increased memory use (about 2 kB of RAM per entry). 
*/ +-#define IEEE80211_FRAGMENT_MAX 4 +- + /* power level hasn't been configured (or set to automatic) */ + #define IEEE80211_UNSET_POWER_LEVEL INT_MIN + +@@ -88,18 +82,6 @@ extern const u8 ieee80211_ac_to_qos_mask[IEEE80211_NUM_ACS]; + + #define IEEE80211_MAX_NAN_INSTANCE_ID 255 + +-struct ieee80211_fragment_entry { +- struct sk_buff_head skb_list; +- unsigned long first_frag_time; +- u16 seq; +- u16 extra_len; +- u16 last_frag; +- u8 rx_queue; +- bool check_sequential_pn; /* needed for CCMP/GCMP */ +- u8 last_pn[6]; /* PN of the last fragment if CCMP was used */ +-}; +- +- + struct ieee80211_bss { + u32 device_ts_beacon, device_ts_presp; + +@@ -241,8 +223,15 @@ struct ieee80211_rx_data { + */ + int security_idx; + +- u32 tkip_iv32; +- u16 tkip_iv16; ++ union { ++ struct { ++ u32 iv32; ++ u16 iv16; ++ } tkip; ++ struct { ++ u8 pn[IEEE80211_CCMP_PN_LEN]; ++ } ccm_gcm; ++ }; + }; + + struct ieee80211_csa_settings { +@@ -902,9 +891,7 @@ struct ieee80211_sub_if_data { + + char name[IFNAMSIZ]; + +- /* Fragment table for host-based reassembly */ +- struct ieee80211_fragment_entry fragments[IEEE80211_FRAGMENT_MAX]; +- unsigned int fragment_next; ++ struct ieee80211_fragment_cache frags; + + /* TID bitmap for NoAck policy */ + u16 noack_map; +@@ -2318,4 +2305,7 @@ u32 ieee80211_calc_expected_tx_airtime(struct ieee80211_hw *hw, + #define debug_noinline + #endif + ++void ieee80211_init_frag_cache(struct ieee80211_fragment_cache *cache); ++void ieee80211_destroy_frag_cache(struct ieee80211_fragment_cache *cache); ++ + #endif /* IEEE80211_I_H */ +diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c +index b80c9b016b2be..6f8885766cbaa 100644 +--- a/net/mac80211/iface.c ++++ b/net/mac80211/iface.c +@@ -8,7 +8,7 @@ + * Copyright 2008, Johannes Berg + * Copyright 2013-2014 Intel Mobile Communications GmbH + * Copyright (c) 2016 Intel Deutschland GmbH +- * Copyright (C) 2018-2020 Intel Corporation ++ * Copyright (C) 2018-2021 Intel Corporation + */ + #include + #include +@@ -676,16 +676,12 @@ static void ieee80211_set_multicast_list(struct net_device *dev) + */ + static void ieee80211_teardown_sdata(struct ieee80211_sub_if_data *sdata) + { +- int i; +- + /* free extra data */ + ieee80211_free_keys(sdata, false); + + ieee80211_debugfs_remove_netdev(sdata); + +- for (i = 0; i < IEEE80211_FRAGMENT_MAX; i++) +- __skb_queue_purge(&sdata->fragments[i].skb_list); +- sdata->fragment_next = 0; ++ ieee80211_destroy_frag_cache(&sdata->frags); + + if (ieee80211_vif_is_mesh(&sdata->vif)) + ieee80211_mesh_teardown_sdata(sdata); +@@ -1928,8 +1924,7 @@ int ieee80211_if_add(struct ieee80211_local *local, const char *name, + sdata->wdev.wiphy = local->hw.wiphy; + sdata->local = local; + +- for (i = 0; i < IEEE80211_FRAGMENT_MAX; i++) +- skb_queue_head_init(&sdata->fragments[i].skb_list); ++ ieee80211_init_frag_cache(&sdata->frags); + + INIT_LIST_HEAD(&sdata->key_list); + +diff --git a/net/mac80211/key.c b/net/mac80211/key.c +index 56c068cb49c4d..f695fc80088bc 100644 +--- a/net/mac80211/key.c ++++ b/net/mac80211/key.c +@@ -799,6 +799,7 @@ int ieee80211_key_link(struct ieee80211_key *key, + struct ieee80211_sub_if_data *sdata, + struct sta_info *sta) + { ++ static atomic_t key_color = ATOMIC_INIT(0); + struct ieee80211_key *old_key; + int idx = key->conf.keyidx; + bool pairwise = key->conf.flags & IEEE80211_KEY_FLAG_PAIRWISE; +@@ -850,6 +851,12 @@ int ieee80211_key_link(struct ieee80211_key *key, + key->sdata = sdata; + key->sta = sta; + ++ /* ++ * Assign a unique ID to every key so we can easily prevent mixed ++ * key 
and fragment cache attacks. ++ */ ++ key->color = atomic_inc_return(&key_color); ++ + increment_tailroom_need_count(sdata); + + ret = ieee80211_key_replace(sdata, sta, pairwise, old_key, key); +diff --git a/net/mac80211/key.h b/net/mac80211/key.h +index 7ad72e9b4991d..1e326c89d7217 100644 +--- a/net/mac80211/key.h ++++ b/net/mac80211/key.h +@@ -128,6 +128,8 @@ struct ieee80211_key { + } debugfs; + #endif + ++ unsigned int color; ++ + /* + * key config, must be last because it contains key + * material as variable length member +diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c +index c1343c028b767..59de7a86599dc 100644 +--- a/net/mac80211/rx.c ++++ b/net/mac80211/rx.c +@@ -6,7 +6,7 @@ + * Copyright 2007-2010 Johannes Berg + * Copyright 2013-2014 Intel Mobile Communications GmbH + * Copyright(c) 2015 - 2017 Intel Deutschland GmbH +- * Copyright (C) 2018-2020 Intel Corporation ++ * Copyright (C) 2018-2021 Intel Corporation + */ + + #include +@@ -2122,19 +2122,34 @@ ieee80211_rx_h_decrypt(struct ieee80211_rx_data *rx) + return result; + } + ++void ieee80211_init_frag_cache(struct ieee80211_fragment_cache *cache) ++{ ++ int i; ++ ++ for (i = 0; i < ARRAY_SIZE(cache->entries); i++) ++ skb_queue_head_init(&cache->entries[i].skb_list); ++} ++ ++void ieee80211_destroy_frag_cache(struct ieee80211_fragment_cache *cache) ++{ ++ int i; ++ ++ for (i = 0; i < ARRAY_SIZE(cache->entries); i++) ++ __skb_queue_purge(&cache->entries[i].skb_list); ++} ++ + static inline struct ieee80211_fragment_entry * +-ieee80211_reassemble_add(struct ieee80211_sub_if_data *sdata, ++ieee80211_reassemble_add(struct ieee80211_fragment_cache *cache, + unsigned int frag, unsigned int seq, int rx_queue, + struct sk_buff **skb) + { + struct ieee80211_fragment_entry *entry; + +- entry = &sdata->fragments[sdata->fragment_next++]; +- if (sdata->fragment_next >= IEEE80211_FRAGMENT_MAX) +- sdata->fragment_next = 0; ++ entry = &cache->entries[cache->next++]; ++ if (cache->next >= IEEE80211_FRAGMENT_MAX) ++ cache->next = 0; + +- if (!skb_queue_empty(&entry->skb_list)) +- __skb_queue_purge(&entry->skb_list); ++ __skb_queue_purge(&entry->skb_list); + + __skb_queue_tail(&entry->skb_list, *skb); /* no need for locking */ + *skb = NULL; +@@ -2149,14 +2164,14 @@ ieee80211_reassemble_add(struct ieee80211_sub_if_data *sdata, + } + + static inline struct ieee80211_fragment_entry * +-ieee80211_reassemble_find(struct ieee80211_sub_if_data *sdata, ++ieee80211_reassemble_find(struct ieee80211_fragment_cache *cache, + unsigned int frag, unsigned int seq, + int rx_queue, struct ieee80211_hdr *hdr) + { + struct ieee80211_fragment_entry *entry; + int i, idx; + +- idx = sdata->fragment_next; ++ idx = cache->next; + for (i = 0; i < IEEE80211_FRAGMENT_MAX; i++) { + struct ieee80211_hdr *f_hdr; + struct sk_buff *f_skb; +@@ -2165,7 +2180,7 @@ ieee80211_reassemble_find(struct ieee80211_sub_if_data *sdata, + if (idx < 0) + idx = IEEE80211_FRAGMENT_MAX - 1; + +- entry = &sdata->fragments[idx]; ++ entry = &cache->entries[idx]; + if (skb_queue_empty(&entry->skb_list) || entry->seq != seq || + entry->rx_queue != rx_queue || + entry->last_frag + 1 != frag) +@@ -2193,15 +2208,27 @@ ieee80211_reassemble_find(struct ieee80211_sub_if_data *sdata, + return NULL; + } + ++static bool requires_sequential_pn(struct ieee80211_rx_data *rx, __le16 fc) ++{ ++ return rx->key && ++ (rx->key->conf.cipher == WLAN_CIPHER_SUITE_CCMP || ++ rx->key->conf.cipher == WLAN_CIPHER_SUITE_CCMP_256 || ++ rx->key->conf.cipher == WLAN_CIPHER_SUITE_GCMP || ++ rx->key->conf.cipher == 
WLAN_CIPHER_SUITE_GCMP_256) && ++ ieee80211_has_protected(fc); ++} ++ + static ieee80211_rx_result debug_noinline + ieee80211_rx_h_defragment(struct ieee80211_rx_data *rx) + { ++ struct ieee80211_fragment_cache *cache = &rx->sdata->frags; + struct ieee80211_hdr *hdr; + u16 sc; + __le16 fc; + unsigned int frag, seq; + struct ieee80211_fragment_entry *entry; + struct sk_buff *skb; ++ struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(rx->skb); + + hdr = (struct ieee80211_hdr *)rx->skb->data; + fc = hdr->frame_control; +@@ -2217,6 +2244,9 @@ ieee80211_rx_h_defragment(struct ieee80211_rx_data *rx) + goto out_no_led; + } + ++ if (rx->sta) ++ cache = &rx->sta->frags; ++ + if (likely(!ieee80211_has_morefrags(fc) && frag == 0)) + goto out; + +@@ -2235,20 +2265,17 @@ ieee80211_rx_h_defragment(struct ieee80211_rx_data *rx) + + if (frag == 0) { + /* This is the first fragment of a new frame. */ +- entry = ieee80211_reassemble_add(rx->sdata, frag, seq, ++ entry = ieee80211_reassemble_add(cache, frag, seq, + rx->seqno_idx, &(rx->skb)); +- if (rx->key && +- (rx->key->conf.cipher == WLAN_CIPHER_SUITE_CCMP || +- rx->key->conf.cipher == WLAN_CIPHER_SUITE_CCMP_256 || +- rx->key->conf.cipher == WLAN_CIPHER_SUITE_GCMP || +- rx->key->conf.cipher == WLAN_CIPHER_SUITE_GCMP_256) && +- ieee80211_has_protected(fc)) { ++ if (requires_sequential_pn(rx, fc)) { + int queue = rx->security_idx; + + /* Store CCMP/GCMP PN so that we can verify that the + * next fragment has a sequential PN value. + */ + entry->check_sequential_pn = true; ++ entry->is_protected = true; ++ entry->key_color = rx->key->color; + memcpy(entry->last_pn, + rx->key->u.ccmp.rx_pn[queue], + IEEE80211_CCMP_PN_LEN); +@@ -2260,6 +2287,11 @@ ieee80211_rx_h_defragment(struct ieee80211_rx_data *rx) + sizeof(rx->key->u.gcmp.rx_pn[queue])); + BUILD_BUG_ON(IEEE80211_CCMP_PN_LEN != + IEEE80211_GCMP_PN_LEN); ++ } else if (rx->key && ++ (ieee80211_has_protected(fc) || ++ (status->flag & RX_FLAG_DECRYPTED))) { ++ entry->is_protected = true; ++ entry->key_color = rx->key->color; + } + return RX_QUEUED; + } +@@ -2267,7 +2299,7 @@ ieee80211_rx_h_defragment(struct ieee80211_rx_data *rx) + /* This is a fragment for a frame that should already be pending in + * fragment cache. Add this fragment to the end of the pending entry. 
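+ * Entries are matched on the frame's addresses, sequence number, RX
+ * queue and the expected next fragment number.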
+ */ +- entry = ieee80211_reassemble_find(rx->sdata, frag, seq, ++ entry = ieee80211_reassemble_find(cache, frag, seq, + rx->seqno_idx, hdr); + if (!entry) { + I802_DEBUG_INC(rx->local->rx_handlers_drop_defrag); +@@ -2282,25 +2314,39 @@ ieee80211_rx_h_defragment(struct ieee80211_rx_data *rx) + if (entry->check_sequential_pn) { + int i; + u8 pn[IEEE80211_CCMP_PN_LEN], *rpn; +- int queue; + +- if (!rx->key || +- (rx->key->conf.cipher != WLAN_CIPHER_SUITE_CCMP && +- rx->key->conf.cipher != WLAN_CIPHER_SUITE_CCMP_256 && +- rx->key->conf.cipher != WLAN_CIPHER_SUITE_GCMP && +- rx->key->conf.cipher != WLAN_CIPHER_SUITE_GCMP_256)) ++ if (!requires_sequential_pn(rx, fc)) ++ return RX_DROP_UNUSABLE; ++ ++ /* Prevent mixed key and fragment cache attacks */ ++ if (entry->key_color != rx->key->color) + return RX_DROP_UNUSABLE; ++ + memcpy(pn, entry->last_pn, IEEE80211_CCMP_PN_LEN); + for (i = IEEE80211_CCMP_PN_LEN - 1; i >= 0; i--) { + pn[i]++; + if (pn[i]) + break; + } +- queue = rx->security_idx; +- rpn = rx->key->u.ccmp.rx_pn[queue]; ++ ++ rpn = rx->ccm_gcm.pn; + if (memcmp(pn, rpn, IEEE80211_CCMP_PN_LEN)) + return RX_DROP_UNUSABLE; + memcpy(entry->last_pn, pn, IEEE80211_CCMP_PN_LEN); ++ } else if (entry->is_protected && ++ (!rx->key || ++ (!ieee80211_has_protected(fc) && ++ !(status->flag & RX_FLAG_DECRYPTED)) || ++ rx->key->color != entry->key_color)) { ++ /* Drop this as a mixed key or fragment cache attack, even ++ * if for TKIP Michael MIC should protect us, and WEP is a ++ * lost cause anyway. ++ */ ++ return RX_DROP_UNUSABLE; ++ } else if (entry->is_protected && rx->key && ++ entry->key_color != rx->key->color && ++ (status->flag & RX_FLAG_DECRYPTED)) { ++ return RX_DROP_UNUSABLE; + } + + skb_pull(rx->skb, ieee80211_hdrlen(fc)); +@@ -2493,13 +2539,13 @@ static bool ieee80211_frame_allowed(struct ieee80211_rx_data *rx, __le16 fc) + struct ethhdr *ehdr = (struct ethhdr *) rx->skb->data; + + /* +- * Allow EAPOL frames to us/the PAE group address regardless +- * of whether the frame was encrypted or not. ++ * Allow EAPOL frames to us/the PAE group address regardless of ++ * whether the frame was encrypted or not, and always disallow ++ * all other destination addresses for them. + */ +- if (ehdr->h_proto == rx->sdata->control_port_protocol && +- (ether_addr_equal(ehdr->h_dest, rx->sdata->vif.addr) || +- ether_addr_equal(ehdr->h_dest, pae_group_addr))) +- return true; ++ if (unlikely(ehdr->h_proto == rx->sdata->control_port_protocol)) ++ return ether_addr_equal(ehdr->h_dest, rx->sdata->vif.addr) || ++ ether_addr_equal(ehdr->h_dest, pae_group_addr); + + if (ieee80211_802_1x_port_control(rx) || + ieee80211_drop_unencrypted(rx, fc)) +@@ -2524,8 +2570,28 @@ static void ieee80211_deliver_skb_to_local_stack(struct sk_buff *skb, + cfg80211_rx_control_port(dev, skb, noencrypt); + dev_kfree_skb(skb); + } else { ++ struct ethhdr *ehdr = (void *)skb_mac_header(skb); ++ + memset(skb->cb, 0, sizeof(skb->cb)); + ++ /* ++ * 802.1X over 802.11 requires that the authenticator address ++ * be used for EAPOL frames. However, 802.1X allows the use of ++ * the PAE group address instead. If the interface is part of ++ * a bridge and we pass the frame with the PAE group address, ++ * then the bridge will forward it to the network (even if the ++ * client was not associated yet), which isn't supposed to ++ * happen. ++ * To avoid that, rewrite the destination address to our own ++ * address, so that the authenticator (e.g. hostapd) will see ++ * the frame, but bridge won't forward it anywhere else. 
Note ++ * that due to earlier filtering, the only other address can ++ * be the PAE group address. ++ */ ++ if (unlikely(skb->protocol == sdata->control_port_protocol && ++ !ether_addr_equal(ehdr->h_dest, sdata->vif.addr))) ++ ether_addr_copy(ehdr->h_dest, sdata->vif.addr); ++ + /* deliver to local stack */ + if (rx->list) + list_add_tail(&skb->list, rx->list); +@@ -2565,6 +2631,7 @@ ieee80211_deliver_skb(struct ieee80211_rx_data *rx) + if ((sdata->vif.type == NL80211_IFTYPE_AP || + sdata->vif.type == NL80211_IFTYPE_AP_VLAN) && + !(sdata->flags & IEEE80211_SDATA_DONT_BRIDGE_PACKETS) && ++ ehdr->h_proto != rx->sdata->control_port_protocol && + (sdata->vif.type != NL80211_IFTYPE_AP_VLAN || !sdata->u.vlan.sta)) { + if (is_multicast_ether_addr(ehdr->h_dest) && + ieee80211_vif_get_num_mcast_if(sdata) != 0) { +@@ -2674,7 +2741,7 @@ __ieee80211_rx_h_amsdu(struct ieee80211_rx_data *rx, u8 data_offset) + if (ieee80211_data_to_8023_exthdr(skb, ðhdr, + rx->sdata->vif.addr, + rx->sdata->vif.type, +- data_offset)) ++ data_offset, true)) + return RX_DROP_UNUSABLE; + + ieee80211_amsdu_to_8023s(skb, &frame_list, dev->dev_addr, +@@ -2731,6 +2798,23 @@ ieee80211_rx_h_amsdu(struct ieee80211_rx_data *rx) + if (is_multicast_ether_addr(hdr->addr1)) + return RX_DROP_UNUSABLE; + ++ if (rx->key) { ++ /* ++ * We should not receive A-MSDUs on pre-HT connections, ++ * and HT connections cannot use old ciphers. Thus drop ++ * them, as in those cases we couldn't even have SPP ++ * A-MSDUs or such. ++ */ ++ switch (rx->key->conf.cipher) { ++ case WLAN_CIPHER_SUITE_WEP40: ++ case WLAN_CIPHER_SUITE_WEP104: ++ case WLAN_CIPHER_SUITE_TKIP: ++ return RX_DROP_UNUSABLE; ++ default: ++ break; ++ } ++ } ++ + return __ieee80211_rx_h_amsdu(rx, 0); + } + +diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c +index ec6973ee88ef4..f2fb69da9b6e1 100644 +--- a/net/mac80211/sta_info.c ++++ b/net/mac80211/sta_info.c +@@ -4,7 +4,7 @@ + * Copyright 2006-2007 Jiri Benc + * Copyright 2013-2014 Intel Mobile Communications GmbH + * Copyright (C) 2015 - 2017 Intel Deutschland GmbH +- * Copyright (C) 2018-2020 Intel Corporation ++ * Copyright (C) 2018-2021 Intel Corporation + */ + + #include +@@ -392,6 +392,8 @@ struct sta_info *sta_info_alloc(struct ieee80211_sub_if_data *sdata, + + u64_stats_init(&sta->rx_stats.syncp); + ++ ieee80211_init_frag_cache(&sta->frags); ++ + sta->sta_state = IEEE80211_STA_NONE; + + /* Mark TID as unreserved */ +@@ -1102,6 +1104,8 @@ static void __sta_info_destroy_part2(struct sta_info *sta) + + ieee80211_sta_debugfs_remove(sta); + ++ ieee80211_destroy_frag_cache(&sta->frags); ++ + cleanup_single_sta(sta); + } + +diff --git a/net/mac80211/sta_info.h b/net/mac80211/sta_info.h +index 78b9d0c7cc583..0333072ebd982 100644 +--- a/net/mac80211/sta_info.h ++++ b/net/mac80211/sta_info.h +@@ -3,7 +3,7 @@ + * Copyright 2002-2005, Devicescape Software, Inc. + * Copyright 2013-2014 Intel Mobile Communications GmbH + * Copyright(c) 2015-2017 Intel Deutschland GmbH +- * Copyright(c) 2020 Intel Corporation ++ * Copyright(c) 2020-2021 Intel Corporation + */ + + #ifndef STA_INFO_H +@@ -438,6 +438,34 @@ struct ieee80211_sta_rx_stats { + u64 msdu[IEEE80211_NUM_TIDS + 1]; + }; + ++/* ++ * IEEE 802.11-2016 (10.6 "Defragmentation") recommends support for "concurrent ++ * reception of at least one MSDU per access category per associated STA" ++ * on APs, or "at least one MSDU per access category" on other interface types. 
++ * ++ * This limit can be increased by changing this define, at the cost of slower ++ * frame reassembly and increased memory use while fragments are pending. ++ */ ++#define IEEE80211_FRAGMENT_MAX 4 ++ ++struct ieee80211_fragment_entry { ++ struct sk_buff_head skb_list; ++ unsigned long first_frag_time; ++ u16 seq; ++ u16 extra_len; ++ u16 last_frag; ++ u8 rx_queue; ++ u8 check_sequential_pn:1, /* needed for CCMP/GCMP */ ++ is_protected:1; ++ u8 last_pn[6]; /* PN of the last fragment if CCMP was used */ ++ unsigned int key_color; ++}; ++ ++struct ieee80211_fragment_cache { ++ struct ieee80211_fragment_entry entries[IEEE80211_FRAGMENT_MAX]; ++ unsigned int next; ++}; ++ + /* + * The bandwidth threshold below which the per-station CoDel parameters will be + * scaled to be more lenient (to prevent starvation of slow stations). This +@@ -531,6 +559,7 @@ struct ieee80211_sta_rx_stats { + * @status_stats.last_ack_signal: last ACK signal + * @status_stats.ack_signal_filled: last ACK signal validity + * @status_stats.avg_ack_signal: average ACK signal ++ * @frags: fragment cache + */ + struct sta_info { + /* General information, mostly static */ +@@ -639,6 +668,8 @@ struct sta_info { + + struct cfg80211_chan_def tdls_chandef; + ++ struct ieee80211_fragment_cache frags; ++ + /* keep last! */ + struct ieee80211_sta sta; + }; +diff --git a/net/mac80211/wpa.c b/net/mac80211/wpa.c +index 91bf32af55e9a..bca47fad5a162 100644 +--- a/net/mac80211/wpa.c ++++ b/net/mac80211/wpa.c +@@ -3,6 +3,7 @@ + * Copyright 2002-2004, Instant802 Networks, Inc. + * Copyright 2008, Jouni Malinen + * Copyright (C) 2016-2017 Intel Deutschland GmbH ++ * Copyright (C) 2020-2021 Intel Corporation + */ + + #include +@@ -167,8 +168,8 @@ ieee80211_rx_h_michael_mic_verify(struct ieee80211_rx_data *rx) + + update_iv: + /* update IV in key information to be able to detect replays */ +- rx->key->u.tkip.rx[rx->security_idx].iv32 = rx->tkip_iv32; +- rx->key->u.tkip.rx[rx->security_idx].iv16 = rx->tkip_iv16; ++ rx->key->u.tkip.rx[rx->security_idx].iv32 = rx->tkip.iv32; ++ rx->key->u.tkip.rx[rx->security_idx].iv16 = rx->tkip.iv16; + + return RX_CONTINUE; + +@@ -294,8 +295,8 @@ ieee80211_crypto_tkip_decrypt(struct ieee80211_rx_data *rx) + key, skb->data + hdrlen, + skb->len - hdrlen, rx->sta->sta.addr, + hdr->addr1, hwaccel, rx->security_idx, +- &rx->tkip_iv32, +- &rx->tkip_iv16); ++ &rx->tkip.iv32, ++ &rx->tkip.iv16); + if (res != TKIP_DECRYPT_OK) + return RX_DROP_UNUSABLE; + +@@ -553,6 +554,8 @@ ieee80211_crypto_ccmp_decrypt(struct ieee80211_rx_data *rx, + } + + memcpy(key->u.ccmp.rx_pn[queue], pn, IEEE80211_CCMP_PN_LEN); ++ if (unlikely(ieee80211_is_frag(hdr))) ++ memcpy(rx->ccm_gcm.pn, pn, IEEE80211_CCMP_PN_LEN); + } + + /* Remove CCMP header and MIC */ +@@ -781,6 +784,8 @@ ieee80211_crypto_gcmp_decrypt(struct ieee80211_rx_data *rx) + } + + memcpy(key->u.gcmp.rx_pn[queue], pn, IEEE80211_GCMP_PN_LEN); ++ if (unlikely(ieee80211_is_frag(hdr))) ++ memcpy(rx->ccm_gcm.pn, pn, IEEE80211_CCMP_PN_LEN); + } + + /* Remove GCMP header and MIC */ +diff --git a/net/mptcp/options.c b/net/mptcp/options.c +index 89a4225ed3211..8848a9e2a95b1 100644 +--- a/net/mptcp/options.c ++++ b/net/mptcp/options.c +@@ -127,7 +127,6 @@ static void mptcp_parse_option(const struct sk_buff *skb, + memcpy(mp_opt->hmac, ptr, MPTCPOPT_HMAC_LEN); + pr_debug("MP_JOIN hmac"); + } else { +- pr_warn("MP_JOIN bad option size"); + mp_opt->mp_join = 0; + } + break; +diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c +index 65e5d3eb10789..228dd40828c4b 100644 +--- 
a/net/mptcp/protocol.c ++++ b/net/mptcp/protocol.c +@@ -869,12 +869,18 @@ static bool mptcp_skb_can_collapse_to(u64 write_seq, + !mpext->frozen; + } + ++/* we can append data to the given data frag if: ++ * - there is space available in the backing page_frag ++ * - the data frag tail matches the current page_frag free offset ++ * - the data frag end sequence number matches the current write seq ++ */ + static bool mptcp_frag_can_collapse_to(const struct mptcp_sock *msk, + const struct page_frag *pfrag, + const struct mptcp_data_frag *df) + { + return df && pfrag->page == df->page && + pfrag->size - pfrag->offset > 0 && ++ pfrag->offset == (df->offset + df->data_len) && + df->data_seq + df->data_len == msk->write_seq; + } + +diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c +index 4fe7acaa472f2..1936db3574d2e 100644 +--- a/net/mptcp/subflow.c ++++ b/net/mptcp/subflow.c +@@ -839,7 +839,6 @@ static enum mapping_status get_mapping_status(struct sock *ssk, + + data_len = mpext->data_len; + if (data_len == 0) { +- pr_err("Infinite mapping not handled"); + MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_INFINITEMAPRX); + return MAPPING_INVALID; + } +diff --git a/net/netfilter/nf_flow_table_core.c b/net/netfilter/nf_flow_table_core.c +index c77ba8690ed8d..434a50bfb5b7a 100644 +--- a/net/netfilter/nf_flow_table_core.c ++++ b/net/netfilter/nf_flow_table_core.c +@@ -259,8 +259,7 @@ void flow_offload_refresh(struct nf_flowtable *flow_table, + { + flow->timeout = nf_flowtable_time_stamp + NF_FLOW_TIMEOUT; + +- if (likely(!nf_flowtable_hw_offload(flow_table) || +- !test_and_clear_bit(NF_FLOW_HW_REFRESH, &flow->flags))) ++ if (likely(!nf_flowtable_hw_offload(flow_table))) + return; + + nf_flow_offload_add(flow_table, flow); +diff --git a/net/netfilter/nf_flow_table_offload.c b/net/netfilter/nf_flow_table_offload.c +index 1c5460e7bce87..92047cea3c170 100644 +--- a/net/netfilter/nf_flow_table_offload.c ++++ b/net/netfilter/nf_flow_table_offload.c +@@ -753,10 +753,11 @@ static void flow_offload_work_add(struct flow_offload_work *offload) + + err = flow_offload_rule_add(offload, flow_rule); + if (err < 0) +- set_bit(NF_FLOW_HW_REFRESH, &offload->flow->flags); +- else +- set_bit(IPS_HW_OFFLOAD_BIT, &offload->flow->ct->status); ++ goto out; ++ ++ set_bit(IPS_HW_OFFLOAD_BIT, &offload->flow->ct->status); + ++out: + nf_flow_offload_destroy(flow_rule); + } + +diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c +index 9944523f5c2c3..2d73f265b12c9 100644 +--- a/net/netfilter/nft_set_pipapo.c ++++ b/net/netfilter/nft_set_pipapo.c +@@ -408,8 +408,8 @@ int pipapo_refill(unsigned long *map, int len, int rules, unsigned long *dst, + * + * Return: true on match, false otherwise. 
+ */ +-static bool nft_pipapo_lookup(const struct net *net, const struct nft_set *set, +- const u32 *key, const struct nft_set_ext **ext) ++bool nft_pipapo_lookup(const struct net *net, const struct nft_set *set, ++ const u32 *key, const struct nft_set_ext **ext) + { + struct nft_pipapo *priv = nft_set_priv(set); + unsigned long *res_map, *fill_map; +diff --git a/net/netfilter/nft_set_pipapo.h b/net/netfilter/nft_set_pipapo.h +index 25a75591583eb..d84afb8fa79a1 100644 +--- a/net/netfilter/nft_set_pipapo.h ++++ b/net/netfilter/nft_set_pipapo.h +@@ -178,6 +178,8 @@ struct nft_pipapo_elem { + + int pipapo_refill(unsigned long *map, int len, int rules, unsigned long *dst, + union nft_pipapo_map_bucket *mt, bool match_only); ++bool nft_pipapo_lookup(const struct net *net, const struct nft_set *set, ++ const u32 *key, const struct nft_set_ext **ext); + + /** + * pipapo_and_field_buckets_4bit() - Intersect 4-bit buckets +diff --git a/net/netfilter/nft_set_pipapo_avx2.c b/net/netfilter/nft_set_pipapo_avx2.c +index d65ae0e23028d..eabdb8d552eef 100644 +--- a/net/netfilter/nft_set_pipapo_avx2.c ++++ b/net/netfilter/nft_set_pipapo_avx2.c +@@ -1131,6 +1131,9 @@ bool nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set, + bool map_index; + int i, ret = 0; + ++ if (unlikely(!irq_fpu_usable())) ++ return nft_pipapo_lookup(net, set, key, ext); ++ + m = rcu_dereference(priv->match); + + /* This also protects access to all data related to scratch maps */ +diff --git a/net/openvswitch/meter.c b/net/openvswitch/meter.c +index 15424d26e85df..ca3c37f2f1e67 100644 +--- a/net/openvswitch/meter.c ++++ b/net/openvswitch/meter.c +@@ -611,6 +611,14 @@ bool ovs_meter_execute(struct datapath *dp, struct sk_buff *skb, + spin_lock(&meter->lock); + + long_delta_ms = (now_ms - meter->used); /* ms */ ++ if (long_delta_ms < 0) { ++ /* This condition means that we have several threads fighting ++ * for a meter lock, and the one who received the packets a ++ * bit later wins. Assuming that all racing threads received ++ * packets at the same time to avoid overflow. ++ */ ++ long_delta_ms = 0; ++ } + + /* Make sure delta_ms will not be too large, so that bucket will not + * wrap around below. +diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c +index 9611e41c7b8be..c52557ec7fb33 100644 +--- a/net/packet/af_packet.c ++++ b/net/packet/af_packet.c +@@ -422,7 +422,8 @@ static __u32 tpacket_get_timestamp(struct sk_buff *skb, struct timespec64 *ts, + ktime_to_timespec64_cond(shhwtstamps->hwtstamp, ts)) + return TP_STATUS_TS_RAW_HARDWARE; + +- if (ktime_to_timespec64_cond(skb->tstamp, ts)) ++ if ((flags & SOF_TIMESTAMPING_SOFTWARE) && ++ ktime_to_timespec64_cond(skb->tstamp, ts)) + return TP_STATUS_TS_SOFTWARE; + + return 0; +@@ -2340,7 +2341,12 @@ static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev, + + skb_copy_bits(skb, 0, h.raw + macoff, snaplen); + +- if (!(ts_status = tpacket_get_timestamp(skb, &ts, po->tp_tstamp))) ++ /* Always timestamp; prefer an existing software timestamp taken ++ * closer to the time of capture. 
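An aside on the af_packet hunk just above: tpacket_rcv() now always ORs
SOF_TIMESTAMPING_SOFTWARE into the flags handed to tpacket_get_timestamp(),
so an existing software timestamp (taken closer to capture time) is
preferred over a freshly read clock. The effective source priority, as a
condensed sketch rather than the literal code path:

    if (hw_raw_wanted && shhwtstamps->hwtstamp)
            ts = hwtstamp;                  /* TP_STATUS_TS_RAW_HARDWARE */
    else if (skb->tstamp)
            ts = skb->tstamp;               /* TP_STATUS_TS_SOFTWARE */
    else
            ktime_get_real_ts64(&ts);       /* late fallback, no TS status bit */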
++ */ ++ ts_status = tpacket_get_timestamp(skb, &ts, ++ po->tp_tstamp | SOF_TIMESTAMPING_SOFTWARE); ++ if (!ts_status) + ktime_get_real_ts64(&ts); + + status |= ts_status; +diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c +index 340d5af86e87f..94f6942d7ec11 100644 +--- a/net/sched/cls_api.c ++++ b/net/sched/cls_api.c +@@ -1624,7 +1624,7 @@ int tcf_classify_ingress(struct sk_buff *skb, + + /* If we missed on some chain */ + if (ret == TC_ACT_UNSPEC && last_executed_chain) { +- ext = skb_ext_add(skb, TC_SKB_EXT); ++ ext = tc_skb_ext_alloc(skb); + if (WARN_ON_ONCE(!ext)) + return TC_ACT_SHOT; + ext->chain = last_executed_chain; +diff --git a/net/sched/sch_dsmark.c b/net/sched/sch_dsmark.c +index cd2748e2d4a20..d320bcfb2da2c 100644 +--- a/net/sched/sch_dsmark.c ++++ b/net/sched/sch_dsmark.c +@@ -407,7 +407,8 @@ static void dsmark_reset(struct Qdisc *sch) + struct dsmark_qdisc_data *p = qdisc_priv(sch); + + pr_debug("%s(sch %p,[qdisc %p])\n", __func__, sch, p); +- qdisc_reset(p->q); ++ if (p->q) ++ qdisc_reset(p->q); + sch->qstats.backlog = 0; + sch->q.qlen = 0; + } +diff --git a/net/sched/sch_fq_pie.c b/net/sched/sch_fq_pie.c +index 949163fe68afd..cac684952edc5 100644 +--- a/net/sched/sch_fq_pie.c ++++ b/net/sched/sch_fq_pie.c +@@ -138,8 +138,15 @@ static int fq_pie_qdisc_enqueue(struct sk_buff *skb, struct Qdisc *sch, + + /* Classifies packet into corresponding flow */ + idx = fq_pie_classify(skb, sch, &ret); +- sel_flow = &q->flows[idx]; ++ if (idx == 0) { ++ if (ret & __NET_XMIT_BYPASS) ++ qdisc_qstats_drop(sch); ++ __qdisc_drop(skb, to_free); ++ return ret; ++ } ++ idx--; + ++ sel_flow = &q->flows[idx]; + /* Checks whether adding a new packet would exceed memory limit */ + get_pie_cb(skb)->mem_usage = skb->truesize; + memory_limited = q->memory_usage > q->memory_limit + skb->truesize; +@@ -297,9 +304,9 @@ static int fq_pie_change(struct Qdisc *sch, struct nlattr *opt, + goto flow_error; + } + q->flows_cnt = nla_get_u32(tb[TCA_FQ_PIE_FLOWS]); +- if (!q->flows_cnt || q->flows_cnt >= 65536) { ++ if (!q->flows_cnt || q->flows_cnt > 65536) { + NL_SET_ERR_MSG_MOD(extack, +- "Number of flows must range in [1..65535]"); ++ "Number of flows must range in [1..65536]"); + goto flow_error; + } + } +@@ -367,7 +374,7 @@ static void fq_pie_timer(struct timer_list *t) + struct fq_pie_sched_data *q = from_timer(q, t, adapt_timer); + struct Qdisc *sch = q->sch; + spinlock_t *root_lock; /* to lock qdisc for probability calculations */ +- u16 idx; ++ u32 idx; + + root_lock = qdisc_lock(qdisc_root_sleeping(sch)); + spin_lock(root_lock); +@@ -388,7 +395,7 @@ static int fq_pie_init(struct Qdisc *sch, struct nlattr *opt, + { + struct fq_pie_sched_data *q = qdisc_priv(sch); + int err; +- u16 idx; ++ u32 idx; + + pie_params_init(&q->p_params); + sch->limit = 10 * 1024; +@@ -500,7 +507,7 @@ static int fq_pie_dump_stats(struct Qdisc *sch, struct gnet_dump *d) + static void fq_pie_reset(struct Qdisc *sch) + { + struct fq_pie_sched_data *q = qdisc_priv(sch); +- u16 idx; ++ u32 idx; + + INIT_LIST_HEAD(&q->new_flows); + INIT_LIST_HEAD(&q->old_flows); +diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c +index 49eae93d1489d..854d2b38db856 100644 +--- a/net/sched/sch_generic.c ++++ b/net/sched/sch_generic.c +@@ -35,6 +35,25 @@ + const struct Qdisc_ops *default_qdisc_ops = &pfifo_fast_ops; + EXPORT_SYMBOL(default_qdisc_ops); + ++static void qdisc_maybe_clear_missed(struct Qdisc *q, ++ const struct netdev_queue *txq) ++{ ++ clear_bit(__QDISC_STATE_MISSED, &q->state); ++ ++ /* Make sure the below 
netif_xmit_frozen_or_stopped() ++ * checking happens after clearing STATE_MISSED. ++ */ ++ smp_mb__after_atomic(); ++ ++ /* Checking netif_xmit_frozen_or_stopped() again to ++ * make sure STATE_MISSED is set if the STATE_MISSED ++ * set by netif_tx_wake_queue()'s rescheduling of ++ * net_tx_action() is cleared by the above clear_bit(). ++ */ ++ if (!netif_xmit_frozen_or_stopped(txq)) ++ set_bit(__QDISC_STATE_MISSED, &q->state); ++} ++ + /* Main transmission queue. */ + + /* Modifications to data participating in scheduling must be protected with +@@ -74,6 +93,7 @@ static inline struct sk_buff *__skb_dequeue_bad_txq(struct Qdisc *q) + } + } else { + skb = SKB_XOFF_MAGIC; ++ qdisc_maybe_clear_missed(q, txq); + } + } + +@@ -242,6 +262,7 @@ static struct sk_buff *dequeue_skb(struct Qdisc *q, bool *validate, + } + } else { + skb = NULL; ++ qdisc_maybe_clear_missed(q, txq); + } + if (lock) + spin_unlock(lock); +@@ -251,8 +272,10 @@ validate: + *validate = true; + + if ((q->flags & TCQ_F_ONETXQUEUE) && +- netif_xmit_frozen_or_stopped(txq)) ++ netif_xmit_frozen_or_stopped(txq)) { ++ qdisc_maybe_clear_missed(q, txq); + return skb; ++ } + + skb = qdisc_dequeue_skb_bad_txq(q); + if (unlikely(skb)) { +@@ -311,6 +334,8 @@ bool sch_direct_xmit(struct sk_buff *skb, struct Qdisc *q, + HARD_TX_LOCK(dev, txq, smp_processor_id()); + if (!netif_xmit_frozen_or_stopped(txq)) + skb = dev_hard_start_xmit(skb, dev, txq, &ret); ++ else ++ qdisc_maybe_clear_missed(q, txq); + + HARD_TX_UNLOCK(dev, txq); + } else { +@@ -640,8 +665,10 @@ static struct sk_buff *pfifo_fast_dequeue(struct Qdisc *qdisc) + { + struct pfifo_fast_priv *priv = qdisc_priv(qdisc); + struct sk_buff *skb = NULL; ++ bool need_retry = true; + int band; + ++retry: + for (band = 0; band < PFIFO_FAST_BANDS && !skb; band++) { + struct skb_array *q = band2list(priv, band); + +@@ -652,6 +679,23 @@ static struct sk_buff *pfifo_fast_dequeue(struct Qdisc *qdisc) + } + if (likely(skb)) { + qdisc_update_stats_at_dequeue(qdisc, skb); ++ } else if (need_retry && ++ test_bit(__QDISC_STATE_MISSED, &qdisc->state)) { ++ /* Delay clearing the STATE_MISSED here to reduce ++ * the overhead of the second spin_trylock() in ++ * qdisc_run_begin() and __netif_schedule() calling ++ * in qdisc_run_end(). ++ */ ++ clear_bit(__QDISC_STATE_MISSED, &qdisc->state); ++ ++ /* Make sure dequeuing happens after clearing ++ * STATE_MISSED. 
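The __QDISC_STATE_MISSED handling in this hunk follows the standard
lost-wakeup pattern: clear the flag, order the clear against the following
load with a full barrier, then re-check the condition the producer may have
changed in the meantime. Generic shape of the pattern, with illustrative
names rather than the patch code:

    clear_bit(FLAG_PENDING, &state);
    smp_mb__after_atomic();         /* the re-check must not precede the clear */
    if (still_has_work())           /* re-check after clearing */
            set_bit(FLAG_PENDING, &state);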
++ */ ++ smp_mb__after_atomic(); ++ ++ need_retry = false; ++ ++ goto retry; + } else { + WRITE_ONCE(qdisc->empty, true); + } +@@ -1158,8 +1202,10 @@ static void dev_reset_queue(struct net_device *dev, + qdisc_reset(qdisc); + + spin_unlock_bh(qdisc_lock(qdisc)); +- if (nolock) ++ if (nolock) { ++ clear_bit(__QDISC_STATE_MISSED, &qdisc->state); + spin_unlock_bh(&qdisc->seqlock); ++ } + } + + static bool some_qdisc_is_busy(struct net_device *dev) +diff --git a/net/sctp/socket.c b/net/sctp/socket.c +index 4ae428f2f2c57..413266eb83b64 100644 +--- a/net/sctp/socket.c ++++ b/net/sctp/socket.c +@@ -4473,6 +4473,7 @@ static int sctp_setsockopt_encap_port(struct sock *sk, + transports) + t->encap_port = encap_port; + ++ asoc->encap_port = encap_port; + return 0; + } + +diff --git a/net/sctp/sysctl.c b/net/sctp/sysctl.c +index e92df779af733..55871b277f475 100644 +--- a/net/sctp/sysctl.c ++++ b/net/sctp/sysctl.c +@@ -307,7 +307,7 @@ static struct ctl_table sctp_net_table[] = { + .data = &init_net.sctp.encap_port, + .maxlen = sizeof(int), + .mode = 0644, +- .proc_handler = proc_dointvec, ++ .proc_handler = proc_dointvec_minmax, + .extra1 = SYSCTL_ZERO, + .extra2 = &udp_port_max, + }, +diff --git a/net/smc/smc_ism.c b/net/smc/smc_ism.c +index 9c6e95882553e..967712ba52a0d 100644 +--- a/net/smc/smc_ism.c ++++ b/net/smc/smc_ism.c +@@ -402,6 +402,14 @@ struct smcd_dev *smcd_alloc_dev(struct device *parent, const char *name, + return NULL; + } + ++ smcd->event_wq = alloc_ordered_workqueue("ism_evt_wq-%s)", ++ WQ_MEM_RECLAIM, name); ++ if (!smcd->event_wq) { ++ kfree(smcd->conn); ++ kfree(smcd); ++ return NULL; ++ } ++ + smcd->dev.parent = parent; + smcd->dev.release = smcd_release; + device_initialize(&smcd->dev); +@@ -415,19 +423,14 @@ struct smcd_dev *smcd_alloc_dev(struct device *parent, const char *name, + INIT_LIST_HEAD(&smcd->vlan); + INIT_LIST_HEAD(&smcd->lgr_list); + init_waitqueue_head(&smcd->lgrs_deleted); +- smcd->event_wq = alloc_ordered_workqueue("ism_evt_wq-%s)", +- WQ_MEM_RECLAIM, name); +- if (!smcd->event_wq) { +- kfree(smcd->conn); +- kfree(smcd); +- return NULL; +- } + return smcd; + } + EXPORT_SYMBOL_GPL(smcd_alloc_dev); + + int smcd_register_dev(struct smcd_dev *smcd) + { ++ int rc; ++ + mutex_lock(&smcd_dev_list.mutex); + if (list_empty(&smcd_dev_list.list)) { + u8 *system_eid = NULL; +@@ -447,7 +450,14 @@ int smcd_register_dev(struct smcd_dev *smcd) + dev_name(&smcd->dev), smcd->pnetid, + smcd->pnetid_by_user ? " (user defined)" : ""); + +- return device_add(&smcd->dev); ++ rc = device_add(&smcd->dev); ++ if (rc) { ++ mutex_lock(&smcd_dev_list.mutex); ++ list_del(&smcd->list); ++ mutex_unlock(&smcd_dev_list.mutex); ++ } ++ ++ return rc; + } + EXPORT_SYMBOL_GPL(smcd_register_dev); + +diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c +index f555d335e910d..42623d6b8f0ec 100644 +--- a/net/sunrpc/clnt.c ++++ b/net/sunrpc/clnt.c +@@ -1677,13 +1677,6 @@ call_reserveresult(struct rpc_task *task) + return; + } + +- /* +- * Even though there was an error, we may have acquired +- * a request slot somehow. Make sure not to leak it. 
+- */ +- if (task->tk_rqstp) +- xprt_release(task); +- + switch (status) { + case -ENOMEM: + rpc_delay(task, HZ >> 2); +diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c +index 20fe31b1b776f..dc9207c1ff078 100644 +--- a/net/sunrpc/xprt.c ++++ b/net/sunrpc/xprt.c +@@ -70,6 +70,7 @@ + static void xprt_init(struct rpc_xprt *xprt, struct net *net); + static __be32 xprt_alloc_xid(struct rpc_xprt *xprt); + static void xprt_destroy(struct rpc_xprt *xprt); ++static void xprt_request_init(struct rpc_task *task); + + static DEFINE_SPINLOCK(xprt_list_lock); + static LIST_HEAD(xprt_list); +@@ -1602,17 +1603,40 @@ xprt_transmit(struct rpc_task *task) + spin_unlock(&xprt->queue_lock); + } + +-static void xprt_add_backlog(struct rpc_xprt *xprt, struct rpc_task *task) ++static void xprt_complete_request_init(struct rpc_task *task) ++{ ++ if (task->tk_rqstp) ++ xprt_request_init(task); ++} ++ ++void xprt_add_backlog(struct rpc_xprt *xprt, struct rpc_task *task) + { + set_bit(XPRT_CONGESTED, &xprt->state); +- rpc_sleep_on(&xprt->backlog, task, NULL); ++ rpc_sleep_on(&xprt->backlog, task, xprt_complete_request_init); ++} ++EXPORT_SYMBOL_GPL(xprt_add_backlog); ++ ++static bool __xprt_set_rq(struct rpc_task *task, void *data) ++{ ++ struct rpc_rqst *req = data; ++ ++ if (task->tk_rqstp == NULL) { ++ memset(req, 0, sizeof(*req)); /* mark unused */ ++ task->tk_rqstp = req; ++ return true; ++ } ++ return false; + } + +-static void xprt_wake_up_backlog(struct rpc_xprt *xprt) ++bool xprt_wake_up_backlog(struct rpc_xprt *xprt, struct rpc_rqst *req) + { +- if (rpc_wake_up_next(&xprt->backlog) == NULL) ++ if (rpc_wake_up_first(&xprt->backlog, __xprt_set_rq, req) == NULL) { + clear_bit(XPRT_CONGESTED, &xprt->state); ++ return false; ++ } ++ return true; + } ++EXPORT_SYMBOL_GPL(xprt_wake_up_backlog); + + static bool xprt_throttle_congested(struct rpc_xprt *xprt, struct rpc_task *task) + { +@@ -1622,7 +1646,7 @@ static bool xprt_throttle_congested(struct rpc_xprt *xprt, struct rpc_task *task + goto out; + spin_lock(&xprt->reserve_lock); + if (test_bit(XPRT_CONGESTED, &xprt->state)) { +- rpc_sleep_on(&xprt->backlog, task, NULL); ++ xprt_add_backlog(xprt, task); + ret = true; + } + spin_unlock(&xprt->reserve_lock); +@@ -1699,11 +1723,11 @@ EXPORT_SYMBOL_GPL(xprt_alloc_slot); + void xprt_free_slot(struct rpc_xprt *xprt, struct rpc_rqst *req) + { + spin_lock(&xprt->reserve_lock); +- if (!xprt_dynamic_free_slot(xprt, req)) { ++ if (!xprt_wake_up_backlog(xprt, req) && ++ !xprt_dynamic_free_slot(xprt, req)) { + memset(req, 0, sizeof(*req)); /* mark unused */ + list_add(&req->rq_list, &xprt->free); + } +- xprt_wake_up_backlog(xprt); + spin_unlock(&xprt->reserve_lock); + } + EXPORT_SYMBOL_GPL(xprt_free_slot); +@@ -1890,10 +1914,10 @@ void xprt_release(struct rpc_task *task) + xdr_free_bvec(&req->rq_snd_buf); + if (req->rq_cred != NULL) + put_rpccred(req->rq_cred); +- task->tk_rqstp = NULL; + if (req->rq_release_snd_buf) + req->rq_release_snd_buf(req); + ++ task->tk_rqstp = NULL; + if (likely(!bc_prealloc(req))) + xprt->ops->free_slot(xprt, req); + else +diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c +index 21ddd78a8c351..15e0c4e7b24ad 100644 +--- a/net/sunrpc/xprtrdma/rpc_rdma.c ++++ b/net/sunrpc/xprtrdma/rpc_rdma.c +@@ -628,8 +628,9 @@ out_mapping_err: + return false; + } + +-/* The tail iovec might not reside in the same page as the +- * head iovec. 
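An aside on the sunrpc changes above: xprt_wake_up_backlog() now passes the
slot being released into rpc_wake_up_first(), whose callback assigns it to
the first waiting task under the wait-queue lock. A woken task therefore
already owns a request slot and can no longer race with slot exhaustion,
which appears to be why call_reserveresult() no longer needs the defensive
xprt_release(). Rough shape of the hand-off callback (a simplified sketch
mirroring __xprt_set_rq, not a drop-in replacement):

    static bool give_slot_to_waiter(struct rpc_task *task, void *data)
    {
            struct rpc_rqst *req = data;

            if (!task->tk_rqstp) {
                    memset(req, 0, sizeof(*req));   /* mark unused */
                    task->tk_rqstp = req;           /* waiter owns it now */
                    return true;
            }
            return false;                           /* try the next waiter */
    }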
++/* The tail iovec may include an XDR pad for the page list, ++ * as well as additional content, and may not reside in the ++ * same page as the head iovec. + */ + static bool rpcrdma_prepare_tail_iov(struct rpcrdma_req *req, + struct xdr_buf *xdr, +@@ -747,19 +748,27 @@ static bool rpcrdma_prepare_readch(struct rpcrdma_xprt *r_xprt, + struct rpcrdma_req *req, + struct xdr_buf *xdr) + { +- struct kvec *tail = &xdr->tail[0]; +- + if (!rpcrdma_prepare_head_iov(r_xprt, req, xdr->head[0].iov_len)) + return false; + +- /* If there is a Read chunk, the page list is handled ++ /* If there is a Read chunk, the page list is being handled + * via explicit RDMA, and thus is skipped here. + */ + +- if (tail->iov_len) { +- if (!rpcrdma_prepare_tail_iov(req, xdr, +- offset_in_page(tail->iov_base), +- tail->iov_len)) ++ /* Do not include the tail if it is only an XDR pad */ ++ if (xdr->tail[0].iov_len > 3) { ++ unsigned int page_base, len; ++ ++ /* If the content in the page list is an odd length, ++ * xdr_write_pages() adds a pad at the beginning of ++ * the tail iovec. Force the tail's non-pad content to ++ * land at the next XDR position in the Send message. ++ */ ++ page_base = offset_in_page(xdr->tail[0].iov_base); ++ len = xdr->tail[0].iov_len; ++ page_base += len & 3; ++ len -= len & 3; ++ if (!rpcrdma_prepare_tail_iov(req, xdr, page_base, len)) + return false; + kref_get(&req->rl_kref); + } +diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c +index 09953597d055a..19a49d26b1e41 100644 +--- a/net/sunrpc/xprtrdma/transport.c ++++ b/net/sunrpc/xprtrdma/transport.c +@@ -520,9 +520,8 @@ xprt_rdma_alloc_slot(struct rpc_xprt *xprt, struct rpc_task *task) + return; + + out_sleep: +- set_bit(XPRT_CONGESTED, &xprt->state); +- rpc_sleep_on(&xprt->backlog, task, NULL); + task->tk_status = -EAGAIN; ++ xprt_add_backlog(xprt, task); + } + + /** +@@ -537,10 +536,11 @@ xprt_rdma_free_slot(struct rpc_xprt *xprt, struct rpc_rqst *rqst) + struct rpcrdma_xprt *r_xprt = + container_of(xprt, struct rpcrdma_xprt, rx_xprt); + +- memset(rqst, 0, sizeof(*rqst)); +- rpcrdma_buffer_put(&r_xprt->rx_buf, rpcr_to_rdmar(rqst)); +- if (unlikely(!rpc_wake_up_next(&xprt->backlog))) +- clear_bit(XPRT_CONGESTED, &xprt->state); ++ rpcrdma_reply_put(&r_xprt->rx_buf, rpcr_to_rdmar(rqst)); ++ if (!xprt_wake_up_backlog(xprt, rqst)) { ++ memset(rqst, 0, sizeof(*rqst)); ++ rpcrdma_buffer_put(&r_xprt->rx_buf, rpcr_to_rdmar(rqst)); ++ } + } + + static bool rpcrdma_check_regbuf(struct rpcrdma_xprt *r_xprt, +diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c +index f3fffc74ab0fa..0731b4756c491 100644 +--- a/net/sunrpc/xprtrdma/verbs.c ++++ b/net/sunrpc/xprtrdma/verbs.c +@@ -1184,6 +1184,20 @@ rpcrdma_mr_get(struct rpcrdma_xprt *r_xprt) + return mr; + } + ++/** ++ * rpcrdma_reply_put - Put reply buffers back into pool ++ * @buffers: buffer pool ++ * @req: object to return ++ * ++ */ ++void rpcrdma_reply_put(struct rpcrdma_buffer *buffers, struct rpcrdma_req *req) ++{ ++ if (req->rl_reply) { ++ rpcrdma_rep_put(buffers, req->rl_reply); ++ req->rl_reply = NULL; ++ } ++} ++ + /** + * rpcrdma_buffer_get - Get a request buffer + * @buffers: Buffer pool from which to obtain a buffer +@@ -1212,9 +1226,7 @@ rpcrdma_buffer_get(struct rpcrdma_buffer *buffers) + */ + void rpcrdma_buffer_put(struct rpcrdma_buffer *buffers, struct rpcrdma_req *req) + { +- if (req->rl_reply) +- rpcrdma_rep_put(buffers, req->rl_reply); +- req->rl_reply = NULL; ++ rpcrdma_reply_put(buffers, req); + + spin_lock(&buffers->rb_lock); + 
list_add(&req->rl_list, &buffers->rb_send_bufs); +diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h +index 28af11fbe6438..ac13b93af9bd1 100644 +--- a/net/sunrpc/xprtrdma/xprt_rdma.h ++++ b/net/sunrpc/xprtrdma/xprt_rdma.h +@@ -481,6 +481,7 @@ struct rpcrdma_req *rpcrdma_buffer_get(struct rpcrdma_buffer *); + void rpcrdma_buffer_put(struct rpcrdma_buffer *buffers, + struct rpcrdma_req *req); + void rpcrdma_recv_buffer_put(struct rpcrdma_rep *); ++void rpcrdma_reply_put(struct rpcrdma_buffer *buffers, struct rpcrdma_req *req); + + bool rpcrdma_regbuf_realloc(struct rpcrdma_regbuf *rb, size_t size, + gfp_t flags); +diff --git a/net/tipc/core.c b/net/tipc/core.c +index 5cc1f03072150..72f3ac73779bf 100644 +--- a/net/tipc/core.c ++++ b/net/tipc/core.c +@@ -119,6 +119,8 @@ static void __net_exit tipc_exit_net(struct net *net) + #ifdef CONFIG_TIPC_CRYPTO + tipc_crypto_stop(&tipc_net(net)->crypto_tx); + #endif ++ while (atomic_read(&tn->wq_count)) ++ cond_resched(); + } + + static void __net_exit tipc_pernet_pre_exit(struct net *net) +diff --git a/net/tipc/core.h b/net/tipc/core.h +index 03de7b213f553..5741ae488bb56 100644 +--- a/net/tipc/core.h ++++ b/net/tipc/core.h +@@ -149,6 +149,8 @@ struct tipc_net { + #endif + /* Work item for net finalize */ + struct tipc_net_work final_work; ++ /* The numbers of work queues in schedule */ ++ atomic_t wq_count; + }; + + static inline struct tipc_net *tipc_net(struct net *net) +diff --git a/net/tipc/msg.c b/net/tipc/msg.c +index e9263280a2d4a..d0fc5fadbc680 100644 +--- a/net/tipc/msg.c ++++ b/net/tipc/msg.c +@@ -149,18 +149,13 @@ int tipc_buf_append(struct sk_buff **headbuf, struct sk_buff **buf) + if (unlikely(head)) + goto err; + *buf = NULL; ++ if (skb_has_frag_list(frag) && __skb_linearize(frag)) ++ goto err; + frag = skb_unshare(frag, GFP_ATOMIC); + if (unlikely(!frag)) + goto err; + head = *headbuf = frag; + TIPC_SKB_CB(head)->tail = NULL; +- if (skb_is_nonlinear(head)) { +- skb_walk_frags(head, tail) { +- TIPC_SKB_CB(head)->tail = tail; +- } +- } else { +- skb_frag_list_init(head); +- } + return 0; + } + +diff --git a/net/tipc/socket.c b/net/tipc/socket.c +index 022999e0202d7..5792f4891dc18 100644 +--- a/net/tipc/socket.c ++++ b/net/tipc/socket.c +@@ -1265,7 +1265,10 @@ void tipc_sk_mcast_rcv(struct net *net, struct sk_buff_head *arrvq, + spin_lock_bh(&inputq->lock); + if (skb_peek(arrvq) == skb) { + skb_queue_splice_tail_init(&tmpq, inputq); +- __skb_dequeue(arrvq); ++ /* Decrease the skb's refcnt as increasing in the ++ * function tipc_skb_peek ++ */ ++ kfree_skb(__skb_dequeue(arrvq)); + } + spin_unlock_bh(&inputq->lock); + __skb_queue_purge(&tmpq); +diff --git a/net/tipc/udp_media.c b/net/tipc/udp_media.c +index 21e75e28e86ad..4c66e22b52f77 100644 +--- a/net/tipc/udp_media.c ++++ b/net/tipc/udp_media.c +@@ -812,6 +812,7 @@ static void cleanup_bearer(struct work_struct *work) + kfree_rcu(rcast, rcu); + } + ++ atomic_dec(&tipc_net(sock_net(ub->ubsock->sk))->wq_count); + dst_cache_destroy(&ub->rcast.dst_cache); + udp_tunnel_sock_release(ub->ubsock); + synchronize_net(); +@@ -832,6 +833,7 @@ static void tipc_udp_disable(struct tipc_bearer *b) + RCU_INIT_POINTER(ub->bearer, NULL); + + /* sock_release need to be done outside of rtnl lock */ ++ atomic_inc(&tipc_net(sock_net(ub->ubsock->sk))->wq_count); + INIT_WORK(&ub->work, cleanup_bearer); + schedule_work(&ub->work); + } +diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c +index 01d933ae5f164..6086cf4f10a7c 100644 +--- a/net/tls/tls_sw.c ++++ b/net/tls/tls_sw.c +@@ -37,6 +37,7 
@@ + + #include + #include ++#include + #include + + #include +@@ -1281,7 +1282,7 @@ int tls_sw_sendpage(struct sock *sk, struct page *page, + } + + static struct sk_buff *tls_wait_data(struct sock *sk, struct sk_psock *psock, +- int flags, long timeo, int *err) ++ bool nonblock, long timeo, int *err) + { + struct tls_context *tls_ctx = tls_get_ctx(sk); + struct tls_sw_context_rx *ctx = tls_sw_ctx_rx(tls_ctx); +@@ -1306,7 +1307,7 @@ static struct sk_buff *tls_wait_data(struct sock *sk, struct sk_psock *psock, + if (sock_flag(sk, SOCK_DONE)) + return NULL; + +- if ((flags & MSG_DONTWAIT) || !timeo) { ++ if (nonblock || !timeo) { + *err = -EAGAIN; + return NULL; + } +@@ -1786,7 +1787,7 @@ int tls_sw_recvmsg(struct sock *sk, + bool async_capable; + bool async = false; + +- skb = tls_wait_data(sk, psock, flags, timeo, &err); ++ skb = tls_wait_data(sk, psock, flags & MSG_DONTWAIT, timeo, &err); + if (!skb) { + if (psock) { + int ret = __tcp_bpf_recvmsg(sk, psock, +@@ -1990,9 +1991,9 @@ ssize_t tls_sw_splice_read(struct socket *sock, loff_t *ppos, + + lock_sock(sk); + +- timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT); ++ timeo = sock_rcvtimeo(sk, flags & SPLICE_F_NONBLOCK); + +- skb = tls_wait_data(sk, NULL, flags, timeo, &err); ++ skb = tls_wait_data(sk, NULL, flags & SPLICE_F_NONBLOCK, timeo, &err); + if (!skb) + goto splice_read_end; + +diff --git a/net/wireless/util.c b/net/wireless/util.c +index 1bf0200f562ab..f342b61476754 100644 +--- a/net/wireless/util.c ++++ b/net/wireless/util.c +@@ -542,7 +542,7 @@ EXPORT_SYMBOL(ieee80211_get_mesh_hdrlen); + + int ieee80211_data_to_8023_exthdr(struct sk_buff *skb, struct ethhdr *ehdr, + const u8 *addr, enum nl80211_iftype iftype, +- u8 data_offset) ++ u8 data_offset, bool is_amsdu) + { + struct ieee80211_hdr *hdr = (struct ieee80211_hdr *) skb->data; + struct { +@@ -629,7 +629,7 @@ int ieee80211_data_to_8023_exthdr(struct sk_buff *skb, struct ethhdr *ehdr, + skb_copy_bits(skb, hdrlen, &payload, sizeof(payload)); + tmp.h_proto = payload.proto; + +- if (likely((ether_addr_equal(payload.hdr, rfc1042_header) && ++ if (likely((!is_amsdu && ether_addr_equal(payload.hdr, rfc1042_header) && + tmp.h_proto != htons(ETH_P_AARP) && + tmp.h_proto != htons(ETH_P_IPX)) || + ether_addr_equal(payload.hdr, bridge_tunnel_header))) +@@ -771,6 +771,9 @@ void ieee80211_amsdu_to_8023s(struct sk_buff *skb, struct sk_buff_head *list, + remaining = skb->len - offset; + if (subframe_len > remaining) + goto purge; ++ /* mitigate A-MSDU aggregation injection attacks */ ++ if (ether_addr_equal(eth.h_dest, rfc1042_header)) ++ goto purge; + + offset += sizeof(struct ethhdr); + last = remaining <= subframe_len + padding; +diff --git a/samples/bpf/xdpsock_user.c b/samples/bpf/xdpsock_user.c +index 1e2a1105d0e67..1278b8f03fc45 100644 +--- a/samples/bpf/xdpsock_user.c ++++ b/samples/bpf/xdpsock_user.c +@@ -1282,7 +1282,7 @@ static void tx_only(struct xsk_socket_info *xsk, u32 *frame_nb, int batch_size) + for (i = 0; i < batch_size; i++) { + struct xdp_desc *tx_desc = xsk_ring_prod__tx_desc(&xsk->tx, + idx + i); +- tx_desc->addr = (*frame_nb + i) << XSK_UMEM__DEFAULT_FRAME_SHIFT; ++ tx_desc->addr = (*frame_nb + i) * opt_xsk_frame_size; + tx_desc->len = PKT_SIZE; + } + +diff --git a/sound/firewire/dice/dice-pcm.c b/sound/firewire/dice/dice-pcm.c +index af8a90ee40f39..a69ca1111b033 100644 +--- a/sound/firewire/dice/dice-pcm.c ++++ b/sound/firewire/dice/dice-pcm.c +@@ -218,7 +218,7 @@ static int pcm_open(struct snd_pcm_substream *substream) + + if (frames_per_period > 0) { + // For 
double_pcm_frame quirk. +- if (rate > 96000) { ++ if (rate > 96000 && !dice->disable_double_pcm_frames) { + frames_per_period *= 2; + frames_per_buffer *= 2; + } +@@ -273,7 +273,7 @@ static int pcm_hw_params(struct snd_pcm_substream *substream, + + mutex_lock(&dice->mutex); + // For double_pcm_frame quirk. +- if (rate > 96000) { ++ if (rate > 96000 && !dice->disable_double_pcm_frames) { + events_per_period /= 2; + events_per_buffer /= 2; + } +diff --git a/sound/firewire/dice/dice-stream.c b/sound/firewire/dice/dice-stream.c +index 1a14c083e8cea..c4dfe76500c29 100644 +--- a/sound/firewire/dice/dice-stream.c ++++ b/sound/firewire/dice/dice-stream.c +@@ -181,7 +181,7 @@ static int keep_resources(struct snd_dice *dice, struct amdtp_stream *stream, + // as 'Dual Wire'. + // For this quirk, blocking mode is required and PCM buffer size should + // be aligned to SYT_INTERVAL. +- double_pcm_frames = rate > 96000; ++ double_pcm_frames = (rate > 96000 && !dice->disable_double_pcm_frames); + if (double_pcm_frames) { + rate /= 2; + pcm_chs *= 2; +diff --git a/sound/firewire/dice/dice.c b/sound/firewire/dice/dice.c +index 107a81691f0e8..239d164b0eea8 100644 +--- a/sound/firewire/dice/dice.c ++++ b/sound/firewire/dice/dice.c +@@ -21,6 +21,7 @@ MODULE_LICENSE("GPL v2"); + #define OUI_SSL 0x0050c2 // Actually ID reserved by IEEE. + #define OUI_PRESONUS 0x000a92 + #define OUI_HARMAN 0x000fd7 ++#define OUI_AVID 0x00a07e + + #define DICE_CATEGORY_ID 0x04 + #define WEISS_CATEGORY_ID 0x00 +@@ -222,6 +223,14 @@ static int dice_probe(struct fw_unit *unit, + (snd_dice_detect_formats_t)entry->driver_data; + } + ++ // Below models are compliant to IEC 61883-1/6 and have no quirk at high sampling transfer ++ // frequency. ++ // * Avid M-Box 3 Pro ++ // * M-Audio Profire 610 ++ // * M-Audio Profire 2626 ++ if (entry->vendor_id == OUI_MAUDIO || entry->vendor_id == OUI_AVID) ++ dice->disable_double_pcm_frames = true; ++ + spin_lock_init(&dice->lock); + mutex_init(&dice->mutex); + init_completion(&dice->clock_accepted); +@@ -278,7 +287,22 @@ static void dice_bus_reset(struct fw_unit *unit) + + #define DICE_INTERFACE 0x000001 + ++#define DICE_DEV_ENTRY_TYPICAL(vendor, model, data) \ ++ { \ ++ .match_flags = IEEE1394_MATCH_VENDOR_ID | \ ++ IEEE1394_MATCH_MODEL_ID | \ ++ IEEE1394_MATCH_SPECIFIER_ID | \ ++ IEEE1394_MATCH_VERSION, \ ++ .vendor_id = (vendor), \ ++ .model_id = (model), \ ++ .specifier_id = (vendor), \ ++ .version = DICE_INTERFACE, \ ++ .driver_data = (kernel_ulong_t)(data), \ ++ } ++ + static const struct ieee1394_device_id dice_id_table[] = { ++ // Avid M-Box 3 Pro. To match in probe function. ++ DICE_DEV_ENTRY_TYPICAL(OUI_AVID, 0x000004, snd_dice_detect_extension_formats), + /* M-Audio Profire 2626 has a different value in version field. 
*/ + { + .match_flags = IEEE1394_MATCH_VENDOR_ID | +diff --git a/sound/firewire/dice/dice.h b/sound/firewire/dice/dice.h +index adc6f7c844609..3c967d1b3605d 100644 +--- a/sound/firewire/dice/dice.h ++++ b/sound/firewire/dice/dice.h +@@ -109,7 +109,8 @@ struct snd_dice { + struct fw_iso_resources rx_resources[MAX_STREAMS]; + struct amdtp_stream tx_stream[MAX_STREAMS]; + struct amdtp_stream rx_stream[MAX_STREAMS]; +- bool global_enabled; ++ bool global_enabled:1; ++ bool disable_double_pcm_frames:1; + struct completion clock_accepted; + unsigned int substreams_counter; + +diff --git a/sound/isa/gus/gus_main.c b/sound/isa/gus/gus_main.c +index afc088f0377ce..b7518122a10d6 100644 +--- a/sound/isa/gus/gus_main.c ++++ b/sound/isa/gus/gus_main.c +@@ -77,17 +77,8 @@ static const struct snd_kcontrol_new snd_gus_joystick_control = { + + static void snd_gus_init_control(struct snd_gus_card *gus) + { +- int ret; +- +- if (!gus->ace_flag) { +- ret = +- snd_ctl_add(gus->card, +- snd_ctl_new1(&snd_gus_joystick_control, +- gus)); +- if (ret) +- snd_printk(KERN_ERR "gus: snd_ctl_add failed: %d\n", +- ret); +- } ++ if (!gus->ace_flag) ++ snd_ctl_add(gus->card, snd_ctl_new1(&snd_gus_joystick_control, gus)); + } + + /* +diff --git a/sound/isa/sb/sb16_main.c b/sound/isa/sb/sb16_main.c +index 38dc1fde25f3c..aa48705310231 100644 +--- a/sound/isa/sb/sb16_main.c ++++ b/sound/isa/sb/sb16_main.c +@@ -846,14 +846,10 @@ int snd_sb16dsp_pcm(struct snd_sb *chip, int device) + snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK, &snd_sb16_playback_ops); + snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_CAPTURE, &snd_sb16_capture_ops); + +- if (chip->dma16 >= 0 && chip->dma8 != chip->dma16) { +- err = snd_ctl_add(card, snd_ctl_new1( +- &snd_sb16_dma_control, chip)); +- if (err) +- return err; +- } else { ++ if (chip->dma16 >= 0 && chip->dma8 != chip->dma16) ++ snd_ctl_add(card, snd_ctl_new1(&snd_sb16_dma_control, chip)); ++ else + pcm->info_flags = SNDRV_PCM_INFO_HALF_DUPLEX; +- } + + snd_pcm_set_managed_buffer_all(pcm, SNDRV_DMA_TYPE_DEV, + card->dev, 64*1024, 128*1024); +diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c +index 43a63db4ab6ad..d8424d226714f 100644 +--- a/sound/pci/hda/patch_realtek.c ++++ b/sound/pci/hda/patch_realtek.c +@@ -2603,6 +2603,28 @@ static const struct hda_model_fixup alc882_fixup_models[] = { + {} + }; + ++static const struct snd_hda_pin_quirk alc882_pin_fixup_tbl[] = { ++ SND_HDA_PIN_QUIRK(0x10ec1220, 0x1043, "ASUS", ALC1220_FIXUP_CLEVO_P950, ++ {0x14, 0x01014010}, ++ {0x15, 0x01011012}, ++ {0x16, 0x01016011}, ++ {0x18, 0x01a19040}, ++ {0x19, 0x02a19050}, ++ {0x1a, 0x0181304f}, ++ {0x1b, 0x0221401f}, ++ {0x1e, 0x01456130}), ++ SND_HDA_PIN_QUIRK(0x10ec1220, 0x1462, "MS-7C35", ALC1220_FIXUP_CLEVO_P950, ++ {0x14, 0x01015010}, ++ {0x15, 0x01011012}, ++ {0x16, 0x01011011}, ++ {0x18, 0x01a11040}, ++ {0x19, 0x02a19050}, ++ {0x1a, 0x0181104f}, ++ {0x1b, 0x0221401f}, ++ {0x1e, 0x01451130}), ++ {} ++}; ++ + /* + * BIOS auto configuration + */ +@@ -2644,6 +2666,7 @@ static int patch_alc882(struct hda_codec *codec) + + snd_hda_pick_fixup(codec, alc882_fixup_models, alc882_fixup_tbl, + alc882_fixups); ++ snd_hda_pick_pin_fixup(codec, alc882_pin_fixup_tbl, alc882_fixups, true); + snd_hda_apply_fixup(codec, HDA_FIXUP_ACT_PRE_PROBE); + + alc_auto_parse_customize_define(codec); +@@ -6535,6 +6558,8 @@ enum { + ALC295_FIXUP_ASUS_DACS, + ALC295_FIXUP_HP_OMEN, + ALC285_FIXUP_HP_SPECTRE_X360, ++ ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP, ++ ALC623_FIXUP_LENOVO_THINKSTATION_P340, + }; + + static const struct 
hda_fixup alc269_fixups[] = { +@@ -8095,6 +8120,18 @@ static const struct hda_fixup alc269_fixups[] = { + .chained = true, + .chain_id = ALC285_FIXUP_SPEAKER2_TO_DAC1, + }, ++ [ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP] = { ++ .type = HDA_FIXUP_FUNC, ++ .v.func = alc285_fixup_ideapad_s740_coef, ++ .chained = true, ++ .chain_id = ALC285_FIXUP_THINKPAD_HEADSET_JACK, ++ }, ++ [ALC623_FIXUP_LENOVO_THINKSTATION_P340] = { ++ .type = HDA_FIXUP_FUNC, ++ .v.func = alc_fixup_no_shutup, ++ .chained = true, ++ .chain_id = ALC283_FIXUP_HEADSET_MIC, ++ }, + }; + + static const struct snd_pci_quirk alc269_fixup_tbl[] = { +@@ -8277,6 +8314,10 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { + SND_PCI_QUIRK(0x103c, 0x87f7, "HP Spectre x360 14", ALC245_FIXUP_HP_X360_AMP), + SND_PCI_QUIRK(0x103c, 0x8846, "HP EliteBook 850 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED), + SND_PCI_QUIRK(0x103c, 0x884c, "HP EliteBook 840 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED), ++ SND_PCI_QUIRK(0x103c, 0x886d, "HP ZBook Fury 17.3 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT), ++ SND_PCI_QUIRK(0x103c, 0x8870, "HP ZBook Fury 15.6 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT), ++ SND_PCI_QUIRK(0x103c, 0x8873, "HP ZBook Studio 15.6 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT), ++ SND_PCI_QUIRK(0x103c, 0x8896, "HP EliteBook 855 G8 Notebook PC", ALC285_FIXUP_HP_MUTE_LED), + SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC), + SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300), + SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), +@@ -8412,7 +8453,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { + SND_PCI_QUIRK(0x1558, 0xc019, "Clevo NH77D[BE]Q", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), + SND_PCI_QUIRK(0x1558, 0xc022, "Clevo NH77[DC][QW]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), + SND_PCI_QUIRK(0x17aa, 0x1036, "Lenovo P520", ALC233_FIXUP_LENOVO_MULTI_CODECS), +- SND_PCI_QUIRK(0x17aa, 0x1048, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC), ++ SND_PCI_QUIRK(0x17aa, 0x1048, "ThinkCentre Station", ALC623_FIXUP_LENOVO_THINKSTATION_P340), + SND_PCI_QUIRK(0x17aa, 0x20f2, "Thinkpad SL410/510", ALC269_FIXUP_SKU_IGNORE), + SND_PCI_QUIRK(0x17aa, 0x215e, "Thinkpad L512", ALC269_FIXUP_SKU_IGNORE), + SND_PCI_QUIRK(0x17aa, 0x21b8, "Thinkpad Edge 14", ALC269_FIXUP_SKU_IGNORE), +@@ -8462,6 +8503,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { + SND_PCI_QUIRK(0x17aa, 0x3178, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC), + SND_PCI_QUIRK(0x17aa, 0x3818, "Lenovo C940", ALC298_FIXUP_LENOVO_SPK_VOLUME), + SND_PCI_QUIRK(0x17aa, 0x3827, "Ideapad S740", ALC285_FIXUP_IDEAPAD_S740_COEF), ++ SND_PCI_QUIRK(0x17aa, 0x3843, "Yoga 9i", ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP), + SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI), + SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC), + SND_PCI_QUIRK(0x17aa, 0x3978, "Lenovo B50-70", ALC269_FIXUP_DMIC_THINKPAD_ACPI), +@@ -8677,6 +8719,8 @@ static const struct hda_model_fixup alc269_fixup_models[] = { + {.id = ALC245_FIXUP_HP_X360_AMP, .name = "alc245-hp-x360-amp"}, + {.id = ALC295_FIXUP_HP_OMEN, .name = "alc295-hp-omen"}, + {.id = ALC285_FIXUP_HP_SPECTRE_X360, .name = "alc285-hp-spectre-x360"}, ++ {.id = ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP, .name = "alc287-ideapad-bass-spk-amp"}, ++ {.id = ALC623_FIXUP_LENOVO_THINKSTATION_P340, .name = "alc623-lenovo-thinkstation-p340"}, + {} + }; + #define ALC225_STANDARD_PINS \ +diff --git 
a/sound/soc/codecs/cs35l33.c b/sound/soc/codecs/cs35l33.c +index 7ad7b733af9b6..e8f3dcfd144da 100644 +--- a/sound/soc/codecs/cs35l33.c ++++ b/sound/soc/codecs/cs35l33.c +@@ -1201,6 +1201,7 @@ static int cs35l33_i2c_probe(struct i2c_client *i2c_client, + dev_err(&i2c_client->dev, + "CS35L33 Device ID (%X). Expected ID %X\n", + devid, CS35L33_CHIP_ID); ++ ret = -EINVAL; + goto err_enable; + } + +diff --git a/sound/soc/codecs/cs42l42.c b/sound/soc/codecs/cs42l42.c +index 811b7b1c9732e..323a4f7f384cd 100644 +--- a/sound/soc/codecs/cs42l42.c ++++ b/sound/soc/codecs/cs42l42.c +@@ -398,6 +398,9 @@ static const struct regmap_config cs42l42_regmap = { + .reg_defaults = cs42l42_reg_defaults, + .num_reg_defaults = ARRAY_SIZE(cs42l42_reg_defaults), + .cache_type = REGCACHE_RBTREE, ++ ++ .use_single_read = true, ++ .use_single_write = true, + }; + + static DECLARE_TLV_DB_SCALE(adc_tlv, -9600, 100, false); +diff --git a/sound/soc/codecs/cs43130.c b/sound/soc/codecs/cs43130.c +index 80bc7c10ed757..80cd3ea0c1577 100644 +--- a/sound/soc/codecs/cs43130.c ++++ b/sound/soc/codecs/cs43130.c +@@ -1735,6 +1735,14 @@ static DEVICE_ATTR(hpload_dc_r, 0444, cs43130_show_dc_r, NULL); + static DEVICE_ATTR(hpload_ac_l, 0444, cs43130_show_ac_l, NULL); + static DEVICE_ATTR(hpload_ac_r, 0444, cs43130_show_ac_r, NULL); + ++static struct attribute *hpload_attrs[] = { ++ &dev_attr_hpload_dc_l.attr, ++ &dev_attr_hpload_dc_r.attr, ++ &dev_attr_hpload_ac_l.attr, ++ &dev_attr_hpload_ac_r.attr, ++}; ++ATTRIBUTE_GROUPS(hpload); ++ + static struct reg_sequence hp_en_cal_seq[] = { + {CS43130_INT_MASK_4, CS43130_INT_MASK_ALL}, + {CS43130_HP_MEAS_LOAD_1, 0}, +@@ -2302,25 +2310,15 @@ static int cs43130_probe(struct snd_soc_component *component) + + cs43130->hpload_done = false; + if (cs43130->dc_meas) { +- ret = device_create_file(component->dev, &dev_attr_hpload_dc_l); +- if (ret < 0) +- return ret; +- +- ret = device_create_file(component->dev, &dev_attr_hpload_dc_r); +- if (ret < 0) +- return ret; +- +- ret = device_create_file(component->dev, &dev_attr_hpload_ac_l); +- if (ret < 0) +- return ret; +- +- ret = device_create_file(component->dev, &dev_attr_hpload_ac_r); +- if (ret < 0) ++ ret = sysfs_create_groups(&component->dev->kobj, hpload_groups); ++ if (ret) + return ret; + + cs43130->wq = create_singlethread_workqueue("cs43130_hp"); +- if (!cs43130->wq) ++ if (!cs43130->wq) { ++ sysfs_remove_groups(&component->dev->kobj, hpload_groups); + return -ENOMEM; ++ } + INIT_WORK(&cs43130->work, cs43130_imp_meas); + } + +diff --git a/sound/soc/qcom/lpass-cpu.c b/sound/soc/qcom/lpass-cpu.c +index be360a402b67f..936384a94f25e 100644 +--- a/sound/soc/qcom/lpass-cpu.c ++++ b/sound/soc/qcom/lpass-cpu.c +@@ -835,18 +835,8 @@ int asoc_qcom_lpass_cpu_platform_probe(struct platform_device *pdev) + if (dai_id == LPASS_DP_RX) + continue; + +- drvdata->mi2s_osr_clk[dai_id] = devm_clk_get(dev, ++ drvdata->mi2s_osr_clk[dai_id] = devm_clk_get_optional(dev, + variant->dai_osr_clk_names[i]); +- if (IS_ERR(drvdata->mi2s_osr_clk[dai_id])) { +- dev_warn(dev, +- "%s() error getting optional %s: %ld\n", +- __func__, +- variant->dai_osr_clk_names[i], +- PTR_ERR(drvdata->mi2s_osr_clk[dai_id])); +- +- drvdata->mi2s_osr_clk[dai_id] = NULL; +- } +- + drvdata->mi2s_bit_clk[dai_id] = devm_clk_get(dev, + variant->dai_bit_clk_names[i]); + if (IS_ERR(drvdata->mi2s_bit_clk[dai_id])) { +diff --git a/sound/usb/format.c b/sound/usb/format.c +index e6ff317a67852..2287f8c653150 100644 +--- a/sound/usb/format.c ++++ b/sound/usb/format.c +@@ -436,7 +436,7 @@ static bool 
check_valid_altsetting_v2v3(struct snd_usb_audio *chip, int iface, + if (snd_BUG_ON(altsetting >= 64 - 8)) + return false; + +- err = snd_usb_ctl_msg(dev, usb_sndctrlpipe(dev, 0), UAC2_CS_CUR, ++ err = snd_usb_ctl_msg(dev, usb_rcvctrlpipe(dev, 0), UAC2_CS_CUR, + USB_TYPE_CLASS | USB_RECIP_INTERFACE | USB_DIR_IN, + UAC2_AS_VAL_ALT_SETTINGS << 8, + iface, &raw_data, sizeof(raw_data)); +diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c +index ffd922327ae4a..805873d0cd553 100644 +--- a/sound/usb/mixer_quirks.c ++++ b/sound/usb/mixer_quirks.c +@@ -3017,7 +3017,7 @@ int snd_usb_mixer_apply_create_quirk(struct usb_mixer_interface *mixer) + case USB_ID(0x1235, 0x8203): /* Focusrite Scarlett 6i6 2nd Gen */ + case USB_ID(0x1235, 0x8204): /* Focusrite Scarlett 18i8 2nd Gen */ + case USB_ID(0x1235, 0x8201): /* Focusrite Scarlett 18i20 2nd Gen */ +- err = snd_scarlett_gen2_controls_create(mixer); ++ err = snd_scarlett_gen2_init(mixer); + break; + + case USB_ID(0x041e, 0x323b): /* Creative Sound Blaster E1 */ +diff --git a/sound/usb/mixer_scarlett_gen2.c b/sound/usb/mixer_scarlett_gen2.c +index 560c2ade829d0..4caf379d5b991 100644 +--- a/sound/usb/mixer_scarlett_gen2.c ++++ b/sound/usb/mixer_scarlett_gen2.c +@@ -635,7 +635,7 @@ static int scarlett2_usb( + /* send a second message to get the response */ + + err = snd_usb_ctl_msg(mixer->chip->dev, +- usb_sndctrlpipe(mixer->chip->dev, 0), ++ usb_rcvctrlpipe(mixer->chip->dev, 0), + SCARLETT2_USB_VENDOR_SPECIFIC_CMD_RESP, + USB_RECIP_INTERFACE | USB_TYPE_CLASS | USB_DIR_IN, + 0, +@@ -1997,38 +1997,11 @@ static int scarlett2_mixer_status_create(struct usb_mixer_interface *mixer) + return usb_submit_urb(mixer->urb, GFP_KERNEL); + } + +-/* Entry point */ +-int snd_scarlett_gen2_controls_create(struct usb_mixer_interface *mixer) ++static int snd_scarlett_gen2_controls_create(struct usb_mixer_interface *mixer, ++ const struct scarlett2_device_info *info) + { +- const struct scarlett2_device_info *info; + int err; + +- /* only use UAC_VERSION_2 */ +- if (!mixer->protocol) +- return 0; +- +- switch (mixer->chip->usb_id) { +- case USB_ID(0x1235, 0x8203): +- info = &s6i6_gen2_info; +- break; +- case USB_ID(0x1235, 0x8204): +- info = &s18i8_gen2_info; +- break; +- case USB_ID(0x1235, 0x8201): +- info = &s18i20_gen2_info; +- break; +- default: /* device not (yet) supported */ +- return -EINVAL; +- } +- +- if (!(mixer->chip->setup & SCARLETT2_ENABLE)) { +- usb_audio_err(mixer->chip, +- "Focusrite Scarlett Gen 2 Mixer Driver disabled; " +- "use options snd_usb_audio device_setup=1 " +- "to enable and report any issues to g@b4.vu"); +- return 0; +- } +- + /* Initialise private data, routing, sequence number */ + err = scarlett2_init_private(mixer, info); + if (err < 0) +@@ -2073,3 +2046,51 @@ int snd_scarlett_gen2_controls_create(struct usb_mixer_interface *mixer) + + return 0; + } ++ ++int snd_scarlett_gen2_init(struct usb_mixer_interface *mixer) ++{ ++ struct snd_usb_audio *chip = mixer->chip; ++ const struct scarlett2_device_info *info; ++ int err; ++ ++ /* only use UAC_VERSION_2 */ ++ if (!mixer->protocol) ++ return 0; ++ ++ switch (chip->usb_id) { ++ case USB_ID(0x1235, 0x8203): ++ info = &s6i6_gen2_info; ++ break; ++ case USB_ID(0x1235, 0x8204): ++ info = &s18i8_gen2_info; ++ break; ++ case USB_ID(0x1235, 0x8201): ++ info = &s18i20_gen2_info; ++ break; ++ default: /* device not (yet) supported */ ++ return -EINVAL; ++ } ++ ++ if (!(chip->setup & SCARLETT2_ENABLE)) { ++ usb_audio_info(chip, ++ "Focusrite Scarlett Gen 2 Mixer Driver disabled; " ++ "use 
options snd_usb_audio vid=0x%04x pid=0x%04x " ++ "device_setup=1 to enable and report any issues " ++ "to g@b4.vu", ++ USB_ID_VENDOR(chip->usb_id), ++ USB_ID_PRODUCT(chip->usb_id)); ++ return 0; ++ } ++ ++ usb_audio_info(chip, ++ "Focusrite Scarlett Gen 2 Mixer Driver enabled pid=0x%04x", ++ USB_ID_PRODUCT(chip->usb_id)); ++ ++ err = snd_scarlett_gen2_controls_create(mixer, info); ++ if (err < 0) ++ usb_audio_err(mixer->chip, ++ "Error initialising Scarlett Mixer Driver: %d", ++ err); ++ ++ return err; ++} +diff --git a/sound/usb/mixer_scarlett_gen2.h b/sound/usb/mixer_scarlett_gen2.h +index 52e1dad77afd4..668c6b0cb50a6 100644 +--- a/sound/usb/mixer_scarlett_gen2.h ++++ b/sound/usb/mixer_scarlett_gen2.h +@@ -2,6 +2,6 @@ + #ifndef __USB_MIXER_SCARLETT_GEN2_H + #define __USB_MIXER_SCARLETT_GEN2_H + +-int snd_scarlett_gen2_controls_create(struct usb_mixer_interface *mixer); ++int snd_scarlett_gen2_init(struct usb_mixer_interface *mixer); + + #endif /* __USB_MIXER_SCARLETT_GEN2_H */ +diff --git a/tools/bpf/bpftool/Documentation/bpftool-cgroup.rst b/tools/bpf/bpftool/Documentation/bpftool-cgroup.rst +index 790944c356025..baee8591ac76a 100644 +--- a/tools/bpf/bpftool/Documentation/bpftool-cgroup.rst ++++ b/tools/bpf/bpftool/Documentation/bpftool-cgroup.rst +@@ -30,7 +30,8 @@ CGROUP COMMANDS + | *ATTACH_TYPE* := { **ingress** | **egress** | **sock_create** | **sock_ops** | **device** | + | **bind4** | **bind6** | **post_bind4** | **post_bind6** | **connect4** | **connect6** | + | **getpeername4** | **getpeername6** | **getsockname4** | **getsockname6** | **sendmsg4** | +-| **sendmsg6** | **recvmsg4** | **recvmsg6** | **sysctl** | **getsockopt** | **setsockopt** } ++| **sendmsg6** | **recvmsg4** | **recvmsg6** | **sysctl** | **getsockopt** | **setsockopt** | ++| **sock_release** } + | *ATTACH_FLAGS* := { **multi** | **override** } + + DESCRIPTION +@@ -106,6 +107,7 @@ DESCRIPTION + **getpeername6** call to getpeername(2) for an inet6 socket (since 5.8); + **getsockname4** call to getsockname(2) for an inet4 socket (since 5.8); + **getsockname6** call to getsockname(2) for an inet6 socket (since 5.8). ++ **sock_release** closing an userspace inet socket (since 5.9). 
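For reference, a minimal program for the cgroup/sock_release attach type
that bpftool learns about above might look like this; an illustrative
sketch using libbpf conventions, with invented names:

    /* sock_release_min.bpf.c */
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("cgroup/sock_release")
    int observe_release(struct bpf_sock *sk)
    {
            return 1;       /* always allow; the hook runs as the socket closes */
    }

    char LICENSE[] SEC("license") = "GPL";

Once pinned, it would be attached with something like:
bpftool cgroup attach /sys/fs/cgroup/app sock_release pinned /sys/fs/bpf/observe_release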
+ + **bpftool cgroup detach** *CGROUP* *ATTACH_TYPE* *PROG* + Detach *PROG* from the cgroup *CGROUP* and attach type +diff --git a/tools/bpf/bpftool/Documentation/bpftool-prog.rst b/tools/bpf/bpftool/Documentation/bpftool-prog.rst +index 358c7309d4191..fe1b38e7e887d 100644 +--- a/tools/bpf/bpftool/Documentation/bpftool-prog.rst ++++ b/tools/bpf/bpftool/Documentation/bpftool-prog.rst +@@ -44,7 +44,7 @@ PROG COMMANDS + | **cgroup/connect4** | **cgroup/connect6** | **cgroup/getpeername4** | **cgroup/getpeername6** | + | **cgroup/getsockname4** | **cgroup/getsockname6** | **cgroup/sendmsg4** | **cgroup/sendmsg6** | + | **cgroup/recvmsg4** | **cgroup/recvmsg6** | **cgroup/sysctl** | +-| **cgroup/getsockopt** | **cgroup/setsockopt** | ++| **cgroup/getsockopt** | **cgroup/setsockopt** | **cgroup/sock_release** | + | **struct_ops** | **fentry** | **fexit** | **freplace** | **sk_lookup** + | } + | *ATTACH_TYPE* := { +diff --git a/tools/bpf/bpftool/bash-completion/bpftool b/tools/bpf/bpftool/bash-completion/bpftool +index fdffbc64c65c6..c10807914f7b6 100644 +--- a/tools/bpf/bpftool/bash-completion/bpftool ++++ b/tools/bpf/bpftool/bash-completion/bpftool +@@ -478,7 +478,7 @@ _bpftool() + cgroup/recvmsg4 cgroup/recvmsg6 \ + cgroup/post_bind4 cgroup/post_bind6 \ + cgroup/sysctl cgroup/getsockopt \ +- cgroup/setsockopt struct_ops \ ++ cgroup/setsockopt cgroup/sock_release struct_ops \ + fentry fexit freplace sk_lookup" -- \ + "$cur" ) ) + return 0 +@@ -1008,7 +1008,7 @@ _bpftool() + device bind4 bind6 post_bind4 post_bind6 connect4 connect6 \ + getpeername4 getpeername6 getsockname4 getsockname6 \ + sendmsg4 sendmsg6 recvmsg4 recvmsg6 sysctl getsockopt \ +- setsockopt' ++ setsockopt sock_release' + local ATTACH_FLAGS='multi override' + local PROG_TYPE='id pinned tag name' + case $prev in +@@ -1019,7 +1019,7 @@ _bpftool() + ingress|egress|sock_create|sock_ops|device|bind4|bind6|\ + post_bind4|post_bind6|connect4|connect6|getpeername4|\ + getpeername6|getsockname4|getsockname6|sendmsg4|sendmsg6|\ +- recvmsg4|recvmsg6|sysctl|getsockopt|setsockopt) ++ recvmsg4|recvmsg6|sysctl|getsockopt|setsockopt|sock_release) + COMPREPLY=( $( compgen -W "$PROG_TYPE" -- \ + "$cur" ) ) + return 0 +diff --git a/tools/bpf/bpftool/cgroup.c b/tools/bpf/bpftool/cgroup.c +index d901cc1b904af..6e53b1d393f4a 100644 +--- a/tools/bpf/bpftool/cgroup.c ++++ b/tools/bpf/bpftool/cgroup.c +@@ -28,7 +28,8 @@ + " connect6 | getpeername4 | getpeername6 |\n" \ + " getsockname4 | getsockname6 | sendmsg4 |\n" \ + " sendmsg6 | recvmsg4 | recvmsg6 |\n" \ +- " sysctl | getsockopt | setsockopt }" ++ " sysctl | getsockopt | setsockopt |\n" \ ++ " sock_release }" + + static unsigned int query_flags; + +diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c +index f2b915b205464..0430dec6e288e 100644 +--- a/tools/bpf/bpftool/prog.c ++++ b/tools/bpf/bpftool/prog.c +@@ -2137,7 +2137,7 @@ static int do_help(int argc, char **argv) + " cgroup/getpeername4 | cgroup/getpeername6 |\n" + " cgroup/getsockname4 | cgroup/getsockname6 | cgroup/sendmsg4 |\n" + " cgroup/sendmsg6 | cgroup/recvmsg4 | cgroup/recvmsg6 |\n" +- " cgroup/getsockopt | cgroup/setsockopt |\n" ++ " cgroup/getsockopt | cgroup/setsockopt | cgroup/sock_release |\n" + " struct_ops | fentry | fexit | freplace | sk_lookup }\n" + " ATTACH_TYPE := { msg_verdict | stream_verdict | stream_parser |\n" + " flow_dissector }\n" +diff --git a/tools/include/linux/bits.h b/tools/include/linux/bits.h +index 7f475d59a0974..87d112650dfbb 100644 +--- a/tools/include/linux/bits.h ++++ 
b/tools/include/linux/bits.h +@@ -22,7 +22,7 @@ + #include <linux/build_bug.h> + #define GENMASK_INPUT_CHECK(h, l) \ + (BUILD_BUG_ON_ZERO(__builtin_choose_expr( \ +- __builtin_constant_p((l) > (h)), (l) > (h), 0))) ++ __is_constexpr((l) > (h)), (l) > (h), 0))) + #else + /* + * BUILD_BUG_ON_ZERO is not available in h files included from asm files, +diff --git a/tools/include/linux/const.h b/tools/include/linux/const.h +index 81b8aae5a8559..435ddd72d2c46 100644 +--- a/tools/include/linux/const.h ++++ b/tools/include/linux/const.h +@@ -3,4 +3,12 @@ + + #include <vdso/const.h> + ++/* ++ * This returns a constant expression while determining if an argument is ++ * a constant expression, most importantly without evaluating the argument. ++ * Glory to Martin Uecker ++ */ ++#define __is_constexpr(x) \ ++ (sizeof(int) == sizeof(*(8 ? ((void *)((long)(x) * 0l)) : (int *)8))) ++ + #endif /* _LINUX_CONST_H */ +diff --git a/tools/include/uapi/linux/kvm.h b/tools/include/uapi/linux/kvm.h +index f6afee209620d..26e6d94d64ed4 100644 +--- a/tools/include/uapi/linux/kvm.h ++++ b/tools/include/uapi/linux/kvm.h +@@ -8,6 +8,7 @@ + * Note: you must update KVM_API_VERSION if you change this interface. + */ + ++#include <linux/const.h> + #include <linux/types.h> + #include <linux/compiler.h> + #include <linux/ioctl.h> +@@ -1834,8 +1835,8 @@ struct kvm_hyperv_eventfd { + * conversion after harvesting an entry. Also, it must not skip any + * dirty bits, so that dirty bits are always harvested in sequence. + */ +-#define KVM_DIRTY_GFN_F_DIRTY BIT(0) +-#define KVM_DIRTY_GFN_F_RESET BIT(1) ++#define KVM_DIRTY_GFN_F_DIRTY _BITUL(0) ++#define KVM_DIRTY_GFN_F_RESET _BITUL(1) + #define KVM_DIRTY_GFN_F_MASK 0x3 + + /* +diff --git a/tools/perf/perf.c b/tools/perf/perf.c +index 20cb91ef06ffc..2f6b67189b426 100644 +--- a/tools/perf/perf.c ++++ b/tools/perf/perf.c +@@ -443,6 +443,8 @@ int main(int argc, const char **argv) + const char *cmd; + char sbuf[STRERR_BUFSIZE]; + ++ perf_debug_setup(); ++ + /* libsubcmd init */ + exec_cmd_init("perf", PREFIX, PERF_EXEC_PATH, EXEC_PATH_ENVIRONMENT); + pager_init(PERF_PAGER_ENVIRONMENT); +@@ -531,8 +533,6 @@ int main(int argc, const char **argv) + */ + pthread__block_sigwinch(); + +- perf_debug_setup(); +- + while (1) { + static int done_help; + +diff --git a/tools/perf/pmu-events/jevents.c b/tools/perf/pmu-events/jevents.c +index e1f3f5c8c550c..9cda076693aee 100644 +--- a/tools/perf/pmu-events/jevents.c ++++ b/tools/perf/pmu-events/jevents.c +@@ -958,7 +958,7 @@ static int get_maxfds(void) + struct rlimit rlim; + + if (getrlimit(RLIMIT_NOFILE, &rlim) == 0) +- return min((int)rlim.rlim_max / 2, 512); ++ return min(rlim.rlim_max / 2, (rlim_t)512); + + return 512; + } +diff --git a/tools/perf/scripts/python/exported-sql-viewer.py b/tools/perf/scripts/python/exported-sql-viewer.py +index 7daa8bb70a5a0..711d4f9f5645c 100755 +--- a/tools/perf/scripts/python/exported-sql-viewer.py ++++ b/tools/perf/scripts/python/exported-sql-viewer.py +@@ -91,6 +91,11 @@ + from __future__ import print_function + + import sys ++# Only change warnings if the python -W option was not used ++if not sys.warnoptions: ++ import warnings ++ # PySide2 causes deprecation warnings, ignore them.
++ warnings.filterwarnings("ignore", category=DeprecationWarning) + import argparse + import weakref + import threading +@@ -125,8 +130,9 @@ if pyside_version_1: + from PySide.QtGui import * + from PySide.QtSql import * + +-from decimal import * +-from ctypes import * ++from decimal import Decimal, ROUND_HALF_UP ++from ctypes import CDLL, Structure, create_string_buffer, addressof, sizeof, \ ++ c_void_p, c_bool, c_byte, c_char, c_int, c_uint, c_longlong, c_ulonglong + from multiprocessing import Process, Array, Value, Event + + # xrange is range in Python3 +@@ -3868,7 +3874,7 @@ def CopyTableCellsToClipboard(view, as_csv=False, with_hdr=False): + if with_hdr: + model = indexes[0].model() + for col in range(min_col, max_col + 1): +- val = model.headerData(col, Qt.Horizontal) ++ val = model.headerData(col, Qt.Horizontal, Qt.DisplayRole) + if as_csv: + text += sep + ToCSValue(val) + sep = "," +diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c +index 8c59677bee130..20ad663978cc4 100644 +--- a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c ++++ b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c +@@ -1146,6 +1146,8 @@ static bool intel_pt_fup_event(struct intel_pt_decoder *decoder) + decoder->set_fup_tx_flags = false; + decoder->tx_flags = decoder->fup_tx_flags; + decoder->state.type = INTEL_PT_TRANSACTION; ++ if (decoder->fup_tx_flags & INTEL_PT_ABORT_TX) ++ decoder->state.type |= INTEL_PT_BRANCH; + decoder->state.from_ip = decoder->ip; + decoder->state.to_ip = 0; + decoder->state.flags = decoder->fup_tx_flags; +@@ -1220,8 +1222,10 @@ static int intel_pt_walk_fup(struct intel_pt_decoder *decoder) + return 0; + if (err == -EAGAIN || + intel_pt_fup_with_nlip(decoder, &intel_pt_insn, ip, err)) { ++ bool no_tip = decoder->pkt_state != INTEL_PT_STATE_FUP; ++ + decoder->pkt_state = INTEL_PT_STATE_IN_SYNC; +- if (intel_pt_fup_event(decoder)) ++ if (intel_pt_fup_event(decoder) && no_tip) + return 0; + return -EAGAIN; + } +diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c +index f6e28ac231b79..05a83637c9758 100644 +--- a/tools/perf/util/intel-pt.c ++++ b/tools/perf/util/intel-pt.c +@@ -707,8 +707,10 @@ static int intel_pt_walk_next_insn(struct intel_pt_insn *intel_pt_insn, + + *ip += intel_pt_insn->length; + +- if (to_ip && *ip == to_ip) ++ if (to_ip && *ip == to_ip) { ++ intel_pt_insn->length = 0; + goto out_no_cache; ++ } + + if (*ip >= al.map->end) + break; +@@ -1198,6 +1200,7 @@ static void intel_pt_set_pid_tid_cpu(struct intel_pt *pt, + + static void intel_pt_sample_flags(struct intel_pt_queue *ptq) + { ++ ptq->insn_len = 0; + if (ptq->state->flags & INTEL_PT_ABORT_TX) { + ptq->flags = PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_TX_ABORT; + } else if (ptq->state->flags & INTEL_PT_ASYNC) { +diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h +index 0f4258eaa629e..61ab17238715a 100644 +--- a/tools/testing/selftests/kvm/include/kvm_util.h ++++ b/tools/testing/selftests/kvm/include/kvm_util.h +@@ -295,7 +295,7 @@ bool vm_is_unrestricted_guest(struct kvm_vm *vm); + + unsigned int vm_get_page_size(struct kvm_vm *vm); + unsigned int vm_get_page_shift(struct kvm_vm *vm); +-unsigned int vm_get_max_gfn(struct kvm_vm *vm); ++uint64_t vm_get_max_gfn(struct kvm_vm *vm); + int vm_get_fd(struct kvm_vm *vm); + + unsigned int vm_calc_num_guest_pages(enum vm_guest_mode mode, size_t size); +diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c 
b/tools/testing/selftests/kvm/lib/kvm_util.c +index b8849a1aca792..2f0e4365f61bd 100644 +--- a/tools/testing/selftests/kvm/lib/kvm_util.c ++++ b/tools/testing/selftests/kvm/lib/kvm_util.c +@@ -1969,7 +1969,7 @@ unsigned int vm_get_page_shift(struct kvm_vm *vm) + return vm->page_shift; + } + +-unsigned int vm_get_max_gfn(struct kvm_vm *vm) ++uint64_t vm_get_max_gfn(struct kvm_vm *vm) + { + return vm->max_gfn; + } +diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c +index 81490b9b4e32a..abf381800a590 100644 +--- a/tools/testing/selftests/kvm/lib/perf_test_util.c ++++ b/tools/testing/selftests/kvm/lib/perf_test_util.c +@@ -2,6 +2,7 @@ + /* + * Copyright (C) 2020, Google LLC. + */ ++#include <inttypes.h> + + #include "kvm_util.h" + #include "perf_test_util.h" +@@ -80,7 +81,8 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus, + */ + TEST_ASSERT(guest_num_pages < vm_get_max_gfn(vm), + "Requested more guest memory than address space allows.\n" +- " guest pages: %lx max gfn: %x vcpus: %d wss: %lx]\n", ++ " guest pages: %" PRIx64 " max gfn: %" PRIx64 ++ " vcpus: %d wss: %" PRIx64 "]\n", + guest_num_pages, vm_get_max_gfn(vm), vcpus, + vcpu_memory_bytes); + +diff --git a/tools/testing/selftests/kvm/memslot_modification_stress_test.c b/tools/testing/selftests/kvm/memslot_modification_stress_test.c +index 6096bf0a5b34f..98351ba0933cd 100644 +--- a/tools/testing/selftests/kvm/memslot_modification_stress_test.c ++++ b/tools/testing/selftests/kvm/memslot_modification_stress_test.c +@@ -71,14 +71,22 @@ struct memslot_antagonist_args { + }; + + static void add_remove_memslot(struct kvm_vm *vm, useconds_t delay, +- uint64_t nr_modifications, uint64_t gpa) ++ uint64_t nr_modifications) + { ++ const uint64_t pages = 1; ++ uint64_t gpa; + int i; + ++ /* ++ * Add the dummy memslot just below the perf_test_util memslot, which is ++ * at the top of the guest physical address space.
++ */ ++ gpa = guest_test_phys_mem - pages * vm_get_page_size(vm); ++ + for (i = 0; i < nr_modifications; i++) { + usleep(delay); + vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, gpa, +- DUMMY_MEMSLOT_INDEX, 1, 0); ++ DUMMY_MEMSLOT_INDEX, pages, 0); + + vm_mem_region_delete(vm, DUMMY_MEMSLOT_INDEX); + } +@@ -120,11 +128,7 @@ static void run_test(enum vm_guest_mode mode, void *arg) + pr_info("Started all vCPUs\n"); + + add_remove_memslot(vm, p->memslot_modification_delay, +- p->nr_memslot_modifications, +- guest_test_phys_mem + +- (guest_percpu_mem_size * nr_vcpus) + +- perf_test_args.host_page_size + +- perf_test_args.guest_page_size); ++ p->nr_memslot_modifications); + + run_vcpus = false; + +diff --git a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq_pie.json b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq_pie.json +index 1cda2e11b3ad9..773c5027553d2 100644 +--- a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq_pie.json ++++ b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq_pie.json +@@ -9,11 +9,11 @@ + "setup": [ + "$IP link add dev $DUMMY type dummy || /bin/true" + ], +- "cmdUnderTest": "$TC qdisc add dev $DUMMY root fq_pie flows 65536", +- "expExitCode": "2", ++ "cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root fq_pie flows 65536", ++ "expExitCode": "0", + "verifyCmd": "$TC qdisc show dev $DUMMY", +- "matchPattern": "qdisc", +- "matchCount": "0", ++ "matchPattern": "qdisc fq_pie 1: root refcnt 2 limit 10240p flows 65536", ++ "matchCount": "1", + "teardown": [ + "$IP link del dev $DUMMY" + ] +diff --git a/virt/lib/irqbypass.c b/virt/lib/irqbypass.c +index c9bb3957f58a7..28fda42e471bb 100644 +--- a/virt/lib/irqbypass.c ++++ b/virt/lib/irqbypass.c +@@ -40,21 +40,17 @@ static int __connect(struct irq_bypass_producer *prod, + if (prod->add_consumer) + ret = prod->add_consumer(prod, cons); + +- if (ret) +- goto err_add_consumer; +- +- ret = cons->add_producer(cons, prod); +- if (ret) +- goto err_add_producer; ++ if (!ret) { ++ ret = cons->add_producer(cons, prod); ++ if (ret && prod->del_consumer) ++ prod->del_consumer(prod, cons); ++ } + + if (cons->start) + cons->start(cons); + if (prod->start) + prod->start(prod); +-err_add_producer: +- if (prod->del_consumer) +- prod->del_consumer(prod, cons); +-err_add_consumer: ++ + return ret; + } +
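A few of the hunks above reward a closer look. The tools/include/linux/const.h hunk backports __is_constexpr(), which GENMASK_INPUT_CHECK() in tools/include/linux/bits.h now uses instead of __builtin_constant_p(). The distinction matters because __builtin_constant_p() is not guaranteed to fold to an integer constant expression, which broke GENMASK() in contexts that require one; __is_constexpr() always yields one, and it never evaluates its argument. A standalone illustration of the trick (GNU C, since it relies on sizeof(void) == 1; the prints are only for demonstration):

    #include <stdio.h>

    /* From linux/const.h: ((long)(x) * 0l) is a null pointer constant
     * only when x is an integer constant expression.  If it is, the ?:
     * has type int * and sizeof(*...) == sizeof(int); if it is not,
     * the ?: has type void * and sizeof(*...) == 1 under GNU C. */
    #define __is_constexpr(x) \
            (sizeof(int) == sizeof(*(8 ? ((void *)((long)(x) * 0l)) : (int *)8)))

    int main(void)
    {
            int n = 5;

            printf("%d\n", (int)__is_constexpr(5));     /* 1 */
            printf("%d\n", (int)__is_constexpr(n));     /* 0 */
            printf("%d\n", (int)__is_constexpr(n + 1)); /* 0 */
            return 0;
    }

The same header provides _BITUL(), which the kvm.h hunk switches to because uapi headers cannot use the kernel-internal BIT() macro.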
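The jevents.c one-liner is an integer-width fix rather than a style change: when RLIMIT_NOFILE is unlimited, rlim_max is RLIM_INFINITY (all bits set), so casting it to int before dividing yields -1, and the old min((int)rlim.rlim_max / 2, 512) computed 0 usable file descriptors. Doing the division in rlim_t and comparing against (rlim_t)512 keeps the value intact. A small demonstration of the truncation, assuming a typical LP64 glibc target:

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
            struct rlimit rlim = { .rlim_cur = RLIM_INFINITY,
                                   .rlim_max = RLIM_INFINITY };

            int buggy = (int)rlim.rlim_max / 2; /* (int)~0UL is -1, so 0 */
            rlim_t ok = rlim.rlim_max / 2;      /* still a huge value */

            printf("buggy: %d\n", buggy);
            printf("fixed: %llu\n",
                   (unsigned long long)(ok < 512 ? ok : 512)); /* 512 */
            return 0;
    }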
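Finally, the irqbypass.c hunk repairs a goto-cleanup fallthrough: in the old __connect(), the success path ran straight into the err_add_producer and err_add_consumer labels, so a fully successful connect immediately unregistered the consumer it had just added. The fix nests the second registration under the first and rolls back only on failure. A minimal sketch of the before and after shapes of the pattern (names are made up):

    #include <stdio.h>

    static int add_consumer(void) { return 0; }
    static int add_producer(void) { return 0; }
    static void del_consumer(void) { puts("rollback"); }

    /* Old shape: no "return 0;" before the labels, so the
     * rollback runs even when both steps succeeded. */
    static int connect_buggy(void)
    {
            int ret = add_consumer();

            if (ret)
                    goto err_add_consumer;
            ret = add_producer();
            if (ret)
                    goto err_add_producer;
    err_add_producer:
            del_consumer();          /* also runs on success! */
    err_add_consumer:
            return ret;
    }

    /* New shape, as in the patch: attempt the second step only if
     * the first succeeded, and undo the first only when the second
     * fails. */
    static int connect_fixed(void)
    {
            int ret = add_consumer();

            if (!ret) {
                    ret = add_producer();
                    if (ret)
                            del_consumer();
            }
            return ret;
    }

    int main(void)
    {
            printf("buggy -> %d\n", connect_buggy());
            printf("fixed -> %d\n", connect_fixed());
            return 0;
    }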