From: "Mike Pagano" <mpagano@gentoo.org> To: gentoo-commits@lists.gentoo.org Message-ID: <1540038981.15ce58de63de9683dd3a0076e8833daf45fb2992.mpagano@gentoo> Subject: [gentoo-commits] proj/linux-patches:4.18 commit in: / X-VCS-Repository: proj/linux-patches X-VCS-Files: 0000_README 1015_linux-4.18.16.patch X-VCS-Directories: / X-VCS-Committer: mpagano X-VCS-Committer-Name: Mike Pagano X-VCS-Revision: 15ce58de63de9683dd3a0076e8833daf45fb2992 X-VCS-Branch: 4.18 Date: Sat, 20 Oct 2018 12:36:53 +0000 (UTC) commit: 15ce58de63de9683dd3a0076e8833daf45fb2992 Author: Mike Pagano <mpagano <AT> gentoo <DOT> org> AuthorDate: Sat Oct 20 12:36:21 2018 +0000 Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org> CommitDate: Sat Oct 20 12:36:21 2018 +0000 URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=15ce58de Linux patch 4.18.16 Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org> 0000_README | 4 + 1015_linux-4.18.16.patch | 2439 ++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 2443 insertions(+) diff --git a/0000_README b/0000_README index 5676b13..52e9ca9 100644 --- a/0000_README +++ b/0000_README @@ -103,6 +103,10 @@ Patch: 1014_linux-4.18.15.patch From: http://www.kernel.org Desc: Linux 4.18.15 +Patch: 1015_linux-4.18.16.patch +From: http://www.kernel.org +Desc: Linux 4.18.16 + Patch: 1500_XATTR_USER_PREFIX.patch From: https://bugs.gentoo.org/show_bug.cgi?id=470644 Desc: Support for namespace user.pax.* on tmpfs. 
diff --git a/1015_linux-4.18.16.patch b/1015_linux-4.18.16.patch new file mode 100644 index 0000000..9bc7017 --- /dev/null +++ b/1015_linux-4.18.16.patch @@ -0,0 +1,2439 @@ +diff --git a/Makefile b/Makefile +index 968eb96a0553..034dd990b0ae 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,7 +1,7 @@ + # SPDX-License-Identifier: GPL-2.0 + VERSION = 4 + PATCHLEVEL = 18 +-SUBLEVEL = 15 ++SUBLEVEL = 16 + EXTRAVERSION = + NAME = Merciless Moray + +diff --git a/arch/arc/Makefile b/arch/arc/Makefile +index 6c1b20dd76ad..7c6c97782022 100644 +--- a/arch/arc/Makefile ++++ b/arch/arc/Makefile +@@ -6,34 +6,12 @@ + # published by the Free Software Foundation. + # + +-ifeq ($(CROSS_COMPILE),) +-ifndef CONFIG_CPU_BIG_ENDIAN +-CROSS_COMPILE := arc-linux- +-else +-CROSS_COMPILE := arceb-linux- +-endif +-endif +- + KBUILD_DEFCONFIG := nsim_700_defconfig + + cflags-y += -fno-common -pipe -fno-builtin -mmedium-calls -D__linux__ + cflags-$(CONFIG_ISA_ARCOMPACT) += -mA7 + cflags-$(CONFIG_ISA_ARCV2) += -mcpu=archs + +-is_700 = $(shell $(CC) -dM -E - < /dev/null | grep -q "ARC700" && echo 1 || echo 0) +- +-ifdef CONFIG_ISA_ARCOMPACT +-ifeq ($(is_700), 0) +- $(error Toolchain not configured for ARCompact builds) +-endif +-endif +- +-ifdef CONFIG_ISA_ARCV2 +-ifeq ($(is_700), 1) +- $(error Toolchain not configured for ARCv2 builds) +-endif +-endif +- + ifdef CONFIG_ARC_CURR_IN_REG + # For a global register defintion, make sure it gets passed to every file + # We had a customer reported bug where some code built in kernel was NOT using +@@ -87,7 +65,7 @@ ldflags-$(CONFIG_CPU_BIG_ENDIAN) += -EB + # --build-id w/o "-marclinux". Default arc-elf32-ld is OK + ldflags-$(upto_gcc44) += -marclinux + +-LIBGCC := $(shell $(CC) $(cflags-y) --print-libgcc-file-name) ++LIBGCC = $(shell $(CC) $(cflags-y) --print-libgcc-file-name) + + # Modules with short calls might break for calls into builtin-kernel + KBUILD_CFLAGS_MODULE += -mlong-calls -mno-millicode +diff --git a/arch/powerpc/kernel/tm.S b/arch/powerpc/kernel/tm.S +index ff12f47a96b6..09d347b61218 100644 +--- a/arch/powerpc/kernel/tm.S ++++ b/arch/powerpc/kernel/tm.S +@@ -175,13 +175,27 @@ _GLOBAL(tm_reclaim) + std r1, PACATMSCRATCH(r13) + ld r1, PACAR1(r13) + +- /* Store the PPR in r11 and reset to decent value */ + std r11, GPR11(r1) /* Temporary stash */ + ++ /* ++ * Move the saved user r1 to the kernel stack in case PACATMSCRATCH is ++ * clobbered by an exception once we turn on MSR_RI below. ++ */ ++ ld r11, PACATMSCRATCH(r13) ++ std r11, GPR1(r1) ++ ++ /* ++ * Store r13 away so we can free up the scratch SPR for the SLB fault ++ * handler (needed once we start accessing the thread_struct). 
++ */ ++ GET_SCRATCH0(r11) ++ std r11, GPR13(r1) ++ + /* Reset MSR RI so we can take SLB faults again */ + li r11, MSR_RI + mtmsrd r11, 1 + ++ /* Store the PPR in r11 and reset to decent value */ + mfspr r11, SPRN_PPR + HMT_MEDIUM + +@@ -206,11 +220,11 @@ _GLOBAL(tm_reclaim) + SAVE_GPR(8, r7) /* user r8 */ + SAVE_GPR(9, r7) /* user r9 */ + SAVE_GPR(10, r7) /* user r10 */ +- ld r3, PACATMSCRATCH(r13) /* user r1 */ ++ ld r3, GPR1(r1) /* user r1 */ + ld r4, GPR7(r1) /* user r7 */ + ld r5, GPR11(r1) /* user r11 */ + ld r6, GPR12(r1) /* user r12 */ +- GET_SCRATCH0(8) /* user r13 */ ++ ld r8, GPR13(r1) /* user r13 */ + std r3, GPR1(r7) + std r4, GPR7(r7) + std r5, GPR11(r7) +diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c +index b5a71baedbc2..59d07bd5374a 100644 +--- a/arch/powerpc/mm/numa.c ++++ b/arch/powerpc/mm/numa.c +@@ -1204,7 +1204,9 @@ int find_and_online_cpu_nid(int cpu) + int new_nid; + + /* Use associativity from first thread for all siblings */ +- vphn_get_associativity(cpu, associativity); ++ if (vphn_get_associativity(cpu, associativity)) ++ return cpu_to_node(cpu); ++ + new_nid = associativity_to_nid(associativity); + if (new_nid < 0 || !node_possible(new_nid)) + new_nid = first_online_node; +diff --git a/arch/riscv/include/asm/asm-prototypes.h b/arch/riscv/include/asm/asm-prototypes.h +new file mode 100644 +index 000000000000..c9fecd120d18 +--- /dev/null ++++ b/arch/riscv/include/asm/asm-prototypes.h +@@ -0,0 +1,7 @@ ++/* SPDX-License-Identifier: GPL-2.0 */ ++#ifndef _ASM_RISCV_PROTOTYPES_H ++ ++#include <linux/ftrace.h> ++#include <asm-generic/asm-prototypes.h> ++ ++#endif /* _ASM_RISCV_PROTOTYPES_H */ +diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S +index eaa843a52907..a480356e0ed8 100644 +--- a/arch/x86/boot/compressed/mem_encrypt.S ++++ b/arch/x86/boot/compressed/mem_encrypt.S +@@ -25,20 +25,6 @@ ENTRY(get_sev_encryption_bit) + push %ebx + push %ecx + push %edx +- push %edi +- +- /* +- * RIP-relative addressing is needed to access the encryption bit +- * variable. Since we are running in 32-bit mode we need this call/pop +- * sequence to get the proper relative addressing. 
+- */ +- call 1f +-1: popl %edi +- subl $1b, %edi +- +- movl enc_bit(%edi), %eax +- cmpl $0, %eax +- jge .Lsev_exit + + /* Check if running under a hypervisor */ + movl $1, %eax +@@ -69,15 +55,12 @@ ENTRY(get_sev_encryption_bit) + + movl %ebx, %eax + andl $0x3f, %eax /* Return the encryption bit location */ +- movl %eax, enc_bit(%edi) + jmp .Lsev_exit + + .Lno_sev: + xor %eax, %eax +- movl %eax, enc_bit(%edi) + + .Lsev_exit: +- pop %edi + pop %edx + pop %ecx + pop %ebx +@@ -113,8 +96,6 @@ ENTRY(set_sev_encryption_mask) + ENDPROC(set_sev_encryption_mask) + + .data +-enc_bit: +- .int 0xffffffff + + #ifdef CONFIG_AMD_MEM_ENCRYPT + .balign 8 +diff --git a/drivers/clocksource/timer-fttmr010.c b/drivers/clocksource/timer-fttmr010.c +index c020038ebfab..cf93f6419b51 100644 +--- a/drivers/clocksource/timer-fttmr010.c ++++ b/drivers/clocksource/timer-fttmr010.c +@@ -130,13 +130,17 @@ static int fttmr010_timer_set_next_event(unsigned long cycles, + cr &= ~fttmr010->t1_enable_val; + writel(cr, fttmr010->base + TIMER_CR); + +- /* Setup the match register forward/backward in time */ +- cr = readl(fttmr010->base + TIMER1_COUNT); +- if (fttmr010->count_down) +- cr -= cycles; +- else +- cr += cycles; +- writel(cr, fttmr010->base + TIMER1_MATCH1); ++ if (fttmr010->count_down) { ++ /* ++ * ASPEED Timer Controller will load TIMER1_LOAD register ++ * into TIMER1_COUNT register when the timer is re-enabled. ++ */ ++ writel(cycles, fttmr010->base + TIMER1_LOAD); ++ } else { ++ /* Setup the match register forward in time */ ++ cr = readl(fttmr010->base + TIMER1_COUNT); ++ writel(cr + cycles, fttmr010->base + TIMER1_MATCH1); ++ } + + /* Start */ + cr = readl(fttmr010->base + TIMER_CR); +diff --git a/drivers/clocksource/timer-ti-32k.c b/drivers/clocksource/timer-ti-32k.c +index 880a861ab3c8..713214d085e0 100644 +--- a/drivers/clocksource/timer-ti-32k.c ++++ b/drivers/clocksource/timer-ti-32k.c +@@ -98,6 +98,9 @@ static int __init ti_32k_timer_init(struct device_node *np) + return -ENXIO; + } + ++ if (!of_machine_is_compatible("ti,am43")) ++ ti_32k_timer.cs.flags |= CLOCK_SOURCE_SUSPEND_NONSTOP; ++ + ti_32k_timer.counter = ti_32k_timer.base; + + /* +diff --git a/drivers/gpu/drm/arm/malidp_drv.c b/drivers/gpu/drm/arm/malidp_drv.c +index 0a788d76ed5f..0ec4659795f1 100644 +--- a/drivers/gpu/drm/arm/malidp_drv.c ++++ b/drivers/gpu/drm/arm/malidp_drv.c +@@ -615,6 +615,7 @@ static int malidp_bind(struct device *dev) + drm->irq_enabled = true; + + ret = drm_vblank_init(drm, drm->mode_config.num_crtc); ++ drm_crtc_vblank_reset(&malidp->crtc); + if (ret < 0) { + DRM_ERROR("failed to initialise vblank\n"); + goto vblank_fail; +diff --git a/drivers/hwtracing/intel_th/pci.c b/drivers/hwtracing/intel_th/pci.c +index c2e55e5d97f6..1cf6290d6435 100644 +--- a/drivers/hwtracing/intel_th/pci.c ++++ b/drivers/hwtracing/intel_th/pci.c +@@ -160,6 +160,11 @@ static const struct pci_device_id intel_th_pci_id_table[] = { + PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x18e1), + .driver_data = (kernel_ulong_t)&intel_th_2x, + }, ++ { ++ /* Ice Lake PCH */ ++ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x34a6), ++ .driver_data = (kernel_ulong_t)&intel_th_2x, ++ }, + { 0 }, + }; + +diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c +index 0e5eb0f547d3..b83348416885 100644 +--- a/drivers/infiniband/core/uverbs_cmd.c ++++ b/drivers/infiniband/core/uverbs_cmd.c +@@ -2048,33 +2048,55 @@ static int modify_qp(struct ib_uverbs_file *file, + + if ((cmd->base.attr_mask & IB_QP_CUR_STATE && + cmd->base.cur_qp_state > IB_QPS_ERR) || +- 
cmd->base.qp_state > IB_QPS_ERR) { ++ (cmd->base.attr_mask & IB_QP_STATE && ++ cmd->base.qp_state > IB_QPS_ERR)) { + ret = -EINVAL; + goto release_qp; + } + +- attr->qp_state = cmd->base.qp_state; +- attr->cur_qp_state = cmd->base.cur_qp_state; +- attr->path_mtu = cmd->base.path_mtu; +- attr->path_mig_state = cmd->base.path_mig_state; +- attr->qkey = cmd->base.qkey; +- attr->rq_psn = cmd->base.rq_psn; +- attr->sq_psn = cmd->base.sq_psn; +- attr->dest_qp_num = cmd->base.dest_qp_num; +- attr->qp_access_flags = cmd->base.qp_access_flags; +- attr->pkey_index = cmd->base.pkey_index; +- attr->alt_pkey_index = cmd->base.alt_pkey_index; +- attr->en_sqd_async_notify = cmd->base.en_sqd_async_notify; +- attr->max_rd_atomic = cmd->base.max_rd_atomic; +- attr->max_dest_rd_atomic = cmd->base.max_dest_rd_atomic; +- attr->min_rnr_timer = cmd->base.min_rnr_timer; +- attr->port_num = cmd->base.port_num; +- attr->timeout = cmd->base.timeout; +- attr->retry_cnt = cmd->base.retry_cnt; +- attr->rnr_retry = cmd->base.rnr_retry; +- attr->alt_port_num = cmd->base.alt_port_num; +- attr->alt_timeout = cmd->base.alt_timeout; +- attr->rate_limit = cmd->rate_limit; ++ if (cmd->base.attr_mask & IB_QP_STATE) ++ attr->qp_state = cmd->base.qp_state; ++ if (cmd->base.attr_mask & IB_QP_CUR_STATE) ++ attr->cur_qp_state = cmd->base.cur_qp_state; ++ if (cmd->base.attr_mask & IB_QP_PATH_MTU) ++ attr->path_mtu = cmd->base.path_mtu; ++ if (cmd->base.attr_mask & IB_QP_PATH_MIG_STATE) ++ attr->path_mig_state = cmd->base.path_mig_state; ++ if (cmd->base.attr_mask & IB_QP_QKEY) ++ attr->qkey = cmd->base.qkey; ++ if (cmd->base.attr_mask & IB_QP_RQ_PSN) ++ attr->rq_psn = cmd->base.rq_psn; ++ if (cmd->base.attr_mask & IB_QP_SQ_PSN) ++ attr->sq_psn = cmd->base.sq_psn; ++ if (cmd->base.attr_mask & IB_QP_DEST_QPN) ++ attr->dest_qp_num = cmd->base.dest_qp_num; ++ if (cmd->base.attr_mask & IB_QP_ACCESS_FLAGS) ++ attr->qp_access_flags = cmd->base.qp_access_flags; ++ if (cmd->base.attr_mask & IB_QP_PKEY_INDEX) ++ attr->pkey_index = cmd->base.pkey_index; ++ if (cmd->base.attr_mask & IB_QP_EN_SQD_ASYNC_NOTIFY) ++ attr->en_sqd_async_notify = cmd->base.en_sqd_async_notify; ++ if (cmd->base.attr_mask & IB_QP_MAX_QP_RD_ATOMIC) ++ attr->max_rd_atomic = cmd->base.max_rd_atomic; ++ if (cmd->base.attr_mask & IB_QP_MAX_DEST_RD_ATOMIC) ++ attr->max_dest_rd_atomic = cmd->base.max_dest_rd_atomic; ++ if (cmd->base.attr_mask & IB_QP_MIN_RNR_TIMER) ++ attr->min_rnr_timer = cmd->base.min_rnr_timer; ++ if (cmd->base.attr_mask & IB_QP_PORT) ++ attr->port_num = cmd->base.port_num; ++ if (cmd->base.attr_mask & IB_QP_TIMEOUT) ++ attr->timeout = cmd->base.timeout; ++ if (cmd->base.attr_mask & IB_QP_RETRY_CNT) ++ attr->retry_cnt = cmd->base.retry_cnt; ++ if (cmd->base.attr_mask & IB_QP_RNR_RETRY) ++ attr->rnr_retry = cmd->base.rnr_retry; ++ if (cmd->base.attr_mask & IB_QP_ALT_PATH) { ++ attr->alt_port_num = cmd->base.alt_port_num; ++ attr->alt_timeout = cmd->base.alt_timeout; ++ attr->alt_pkey_index = cmd->base.alt_pkey_index; ++ } ++ if (cmd->base.attr_mask & IB_QP_RATE_LIMIT) ++ attr->rate_limit = cmd->rate_limit; + + if (cmd->base.attr_mask & IB_QP_AV) + copy_ah_attr_from_uverbs(qp->device, &attr->ah_attr, +diff --git a/drivers/infiniband/hw/bnxt_re/main.c b/drivers/infiniband/hw/bnxt_re/main.c +index 20b9f31052bf..85cd1a3593d6 100644 +--- a/drivers/infiniband/hw/bnxt_re/main.c ++++ b/drivers/infiniband/hw/bnxt_re/main.c +@@ -78,7 +78,7 @@ static struct list_head bnxt_re_dev_list = LIST_HEAD_INIT(bnxt_re_dev_list); + /* Mutex to protect the list of bnxt_re devices 
added */ + static DEFINE_MUTEX(bnxt_re_dev_lock); + static struct workqueue_struct *bnxt_re_wq; +-static void bnxt_re_ib_unreg(struct bnxt_re_dev *rdev, bool lock_wait); ++static void bnxt_re_ib_unreg(struct bnxt_re_dev *rdev); + + /* SR-IOV helper functions */ + +@@ -182,7 +182,7 @@ static void bnxt_re_shutdown(void *p) + if (!rdev) + return; + +- bnxt_re_ib_unreg(rdev, false); ++ bnxt_re_ib_unreg(rdev); + } + + static void bnxt_re_stop_irq(void *handle) +@@ -251,7 +251,7 @@ static struct bnxt_ulp_ops bnxt_re_ulp_ops = { + /* Driver registration routines used to let the networking driver (bnxt_en) + * to know that the RoCE driver is now installed + */ +-static int bnxt_re_unregister_netdev(struct bnxt_re_dev *rdev, bool lock_wait) ++static int bnxt_re_unregister_netdev(struct bnxt_re_dev *rdev) + { + struct bnxt_en_dev *en_dev; + int rc; +@@ -260,14 +260,9 @@ static int bnxt_re_unregister_netdev(struct bnxt_re_dev *rdev, bool lock_wait) + return -EINVAL; + + en_dev = rdev->en_dev; +- /* Acquire rtnl lock if it is not invokded from netdev event */ +- if (lock_wait) +- rtnl_lock(); + + rc = en_dev->en_ops->bnxt_unregister_device(rdev->en_dev, + BNXT_ROCE_ULP); +- if (lock_wait) +- rtnl_unlock(); + return rc; + } + +@@ -281,14 +276,12 @@ static int bnxt_re_register_netdev(struct bnxt_re_dev *rdev) + + en_dev = rdev->en_dev; + +- rtnl_lock(); + rc = en_dev->en_ops->bnxt_register_device(en_dev, BNXT_ROCE_ULP, + &bnxt_re_ulp_ops, rdev); +- rtnl_unlock(); + return rc; + } + +-static int bnxt_re_free_msix(struct bnxt_re_dev *rdev, bool lock_wait) ++static int bnxt_re_free_msix(struct bnxt_re_dev *rdev) + { + struct bnxt_en_dev *en_dev; + int rc; +@@ -298,13 +291,9 @@ static int bnxt_re_free_msix(struct bnxt_re_dev *rdev, bool lock_wait) + + en_dev = rdev->en_dev; + +- if (lock_wait) +- rtnl_lock(); + + rc = en_dev->en_ops->bnxt_free_msix(rdev->en_dev, BNXT_ROCE_ULP); + +- if (lock_wait) +- rtnl_unlock(); + return rc; + } + +@@ -320,7 +309,6 @@ static int bnxt_re_request_msix(struct bnxt_re_dev *rdev) + + num_msix_want = min_t(u32, BNXT_RE_MAX_MSIX, num_online_cpus()); + +- rtnl_lock(); + num_msix_got = en_dev->en_ops->bnxt_request_msix(en_dev, BNXT_ROCE_ULP, + rdev->msix_entries, + num_msix_want); +@@ -335,7 +323,6 @@ static int bnxt_re_request_msix(struct bnxt_re_dev *rdev) + } + rdev->num_msix = num_msix_got; + done: +- rtnl_unlock(); + return rc; + } + +@@ -358,24 +345,18 @@ static void bnxt_re_fill_fw_msg(struct bnxt_fw_msg *fw_msg, void *msg, + fw_msg->timeout = timeout; + } + +-static int bnxt_re_net_ring_free(struct bnxt_re_dev *rdev, u16 fw_ring_id, +- bool lock_wait) ++static int bnxt_re_net_ring_free(struct bnxt_re_dev *rdev, u16 fw_ring_id) + { + struct bnxt_en_dev *en_dev = rdev->en_dev; + struct hwrm_ring_free_input req = {0}; + struct hwrm_ring_free_output resp; + struct bnxt_fw_msg fw_msg; +- bool do_unlock = false; + int rc = -EINVAL; + + if (!en_dev) + return rc; + + memset(&fw_msg, 0, sizeof(fw_msg)); +- if (lock_wait) { +- rtnl_lock(); +- do_unlock = true; +- } + + bnxt_re_init_hwrm_hdr(rdev, (void *)&req, HWRM_RING_FREE, -1, -1); + req.ring_type = RING_ALLOC_REQ_RING_TYPE_L2_CMPL; +@@ -386,8 +367,6 @@ static int bnxt_re_net_ring_free(struct bnxt_re_dev *rdev, u16 fw_ring_id, + if (rc) + dev_err(rdev_to_dev(rdev), + "Failed to free HW ring:%d :%#x", req.ring_id, rc); +- if (do_unlock) +- rtnl_unlock(); + return rc; + } + +@@ -405,7 +384,6 @@ static int bnxt_re_net_ring_alloc(struct bnxt_re_dev *rdev, dma_addr_t *dma_arr, + return rc; + + memset(&fw_msg, 0, sizeof(fw_msg)); +- 
rtnl_lock(); + bnxt_re_init_hwrm_hdr(rdev, (void *)&req, HWRM_RING_ALLOC, -1, -1); + req.enables = 0; + req.page_tbl_addr = cpu_to_le64(dma_arr[0]); +@@ -426,27 +404,21 @@ static int bnxt_re_net_ring_alloc(struct bnxt_re_dev *rdev, dma_addr_t *dma_arr, + if (!rc) + *fw_ring_id = le16_to_cpu(resp.ring_id); + +- rtnl_unlock(); + return rc; + } + + static int bnxt_re_net_stats_ctx_free(struct bnxt_re_dev *rdev, +- u32 fw_stats_ctx_id, bool lock_wait) ++ u32 fw_stats_ctx_id) + { + struct bnxt_en_dev *en_dev = rdev->en_dev; + struct hwrm_stat_ctx_free_input req = {0}; + struct bnxt_fw_msg fw_msg; +- bool do_unlock = false; + int rc = -EINVAL; + + if (!en_dev) + return rc; + + memset(&fw_msg, 0, sizeof(fw_msg)); +- if (lock_wait) { +- rtnl_lock(); +- do_unlock = true; +- } + + bnxt_re_init_hwrm_hdr(rdev, (void *)&req, HWRM_STAT_CTX_FREE, -1, -1); + req.stat_ctx_id = cpu_to_le32(fw_stats_ctx_id); +@@ -457,8 +429,6 @@ static int bnxt_re_net_stats_ctx_free(struct bnxt_re_dev *rdev, + dev_err(rdev_to_dev(rdev), + "Failed to free HW stats context %#x", rc); + +- if (do_unlock) +- rtnl_unlock(); + return rc; + } + +@@ -478,7 +448,6 @@ static int bnxt_re_net_stats_ctx_alloc(struct bnxt_re_dev *rdev, + return rc; + + memset(&fw_msg, 0, sizeof(fw_msg)); +- rtnl_lock(); + + bnxt_re_init_hwrm_hdr(rdev, (void *)&req, HWRM_STAT_CTX_ALLOC, -1, -1); + req.update_period_ms = cpu_to_le32(1000); +@@ -490,7 +459,6 @@ static int bnxt_re_net_stats_ctx_alloc(struct bnxt_re_dev *rdev, + if (!rc) + *fw_stats_ctx_id = le32_to_cpu(resp.stat_ctx_id); + +- rtnl_unlock(); + return rc; + } + +@@ -929,19 +897,19 @@ fail: + return rc; + } + +-static void bnxt_re_free_nq_res(struct bnxt_re_dev *rdev, bool lock_wait) ++static void bnxt_re_free_nq_res(struct bnxt_re_dev *rdev) + { + int i; + + for (i = 0; i < rdev->num_msix - 1; i++) { +- bnxt_re_net_ring_free(rdev, rdev->nq[i].ring_id, lock_wait); ++ bnxt_re_net_ring_free(rdev, rdev->nq[i].ring_id); + bnxt_qplib_free_nq(&rdev->nq[i]); + } + } + +-static void bnxt_re_free_res(struct bnxt_re_dev *rdev, bool lock_wait) ++static void bnxt_re_free_res(struct bnxt_re_dev *rdev) + { +- bnxt_re_free_nq_res(rdev, lock_wait); ++ bnxt_re_free_nq_res(rdev); + + if (rdev->qplib_res.dpi_tbl.max) { + bnxt_qplib_dealloc_dpi(&rdev->qplib_res, +@@ -1219,7 +1187,7 @@ static int bnxt_re_setup_qos(struct bnxt_re_dev *rdev) + return 0; + } + +-static void bnxt_re_ib_unreg(struct bnxt_re_dev *rdev, bool lock_wait) ++static void bnxt_re_ib_unreg(struct bnxt_re_dev *rdev) + { + int i, rc; + +@@ -1234,28 +1202,27 @@ static void bnxt_re_ib_unreg(struct bnxt_re_dev *rdev, bool lock_wait) + cancel_delayed_work(&rdev->worker); + + bnxt_re_cleanup_res(rdev); +- bnxt_re_free_res(rdev, lock_wait); ++ bnxt_re_free_res(rdev); + + if (test_and_clear_bit(BNXT_RE_FLAG_RCFW_CHANNEL_EN, &rdev->flags)) { + rc = bnxt_qplib_deinit_rcfw(&rdev->rcfw); + if (rc) + dev_warn(rdev_to_dev(rdev), + "Failed to deinitialize RCFW: %#x", rc); +- bnxt_re_net_stats_ctx_free(rdev, rdev->qplib_ctx.stats.fw_id, +- lock_wait); ++ bnxt_re_net_stats_ctx_free(rdev, rdev->qplib_ctx.stats.fw_id); + bnxt_qplib_free_ctx(rdev->en_dev->pdev, &rdev->qplib_ctx); + bnxt_qplib_disable_rcfw_channel(&rdev->rcfw); +- bnxt_re_net_ring_free(rdev, rdev->rcfw.creq_ring_id, lock_wait); ++ bnxt_re_net_ring_free(rdev, rdev->rcfw.creq_ring_id); + bnxt_qplib_free_rcfw_channel(&rdev->rcfw); + } + if (test_and_clear_bit(BNXT_RE_FLAG_GOT_MSIX, &rdev->flags)) { +- rc = bnxt_re_free_msix(rdev, lock_wait); ++ rc = bnxt_re_free_msix(rdev); + if (rc) + 
dev_warn(rdev_to_dev(rdev), + "Failed to free MSI-X vectors: %#x", rc); + } + if (test_and_clear_bit(BNXT_RE_FLAG_NETDEV_REGISTERED, &rdev->flags)) { +- rc = bnxt_re_unregister_netdev(rdev, lock_wait); ++ rc = bnxt_re_unregister_netdev(rdev); + if (rc) + dev_warn(rdev_to_dev(rdev), + "Failed to unregister with netdev: %#x", rc); +@@ -1276,6 +1243,12 @@ static int bnxt_re_ib_reg(struct bnxt_re_dev *rdev) + { + int i, j, rc; + ++ bool locked; ++ ++ /* Acquire rtnl lock through out this function */ ++ rtnl_lock(); ++ locked = true; ++ + /* Registered a new RoCE device instance to netdev */ + rc = bnxt_re_register_netdev(rdev); + if (rc) { +@@ -1374,12 +1347,16 @@ static int bnxt_re_ib_reg(struct bnxt_re_dev *rdev) + schedule_delayed_work(&rdev->worker, msecs_to_jiffies(30000)); + } + ++ rtnl_unlock(); ++ locked = false; ++ + /* Register ib dev */ + rc = bnxt_re_register_ib(rdev); + if (rc) { + pr_err("Failed to register with IB: %#x\n", rc); + goto fail; + } ++ set_bit(BNXT_RE_FLAG_IBDEV_REGISTERED, &rdev->flags); + dev_info(rdev_to_dev(rdev), "Device registered successfully"); + for (i = 0; i < ARRAY_SIZE(bnxt_re_attributes); i++) { + rc = device_create_file(&rdev->ibdev.dev, +@@ -1395,7 +1372,6 @@ static int bnxt_re_ib_reg(struct bnxt_re_dev *rdev) + goto fail; + } + } +- set_bit(BNXT_RE_FLAG_IBDEV_REGISTERED, &rdev->flags); + ib_get_eth_speed(&rdev->ibdev, 1, &rdev->active_speed, + &rdev->active_width); + set_bit(BNXT_RE_FLAG_ISSUE_ROCE_STATS, &rdev->flags); +@@ -1404,17 +1380,21 @@ static int bnxt_re_ib_reg(struct bnxt_re_dev *rdev) + + return 0; + free_sctx: +- bnxt_re_net_stats_ctx_free(rdev, rdev->qplib_ctx.stats.fw_id, true); ++ bnxt_re_net_stats_ctx_free(rdev, rdev->qplib_ctx.stats.fw_id); + free_ctx: + bnxt_qplib_free_ctx(rdev->en_dev->pdev, &rdev->qplib_ctx); + disable_rcfw: + bnxt_qplib_disable_rcfw_channel(&rdev->rcfw); + free_ring: +- bnxt_re_net_ring_free(rdev, rdev->rcfw.creq_ring_id, true); ++ bnxt_re_net_ring_free(rdev, rdev->rcfw.creq_ring_id); + free_rcfw: + bnxt_qplib_free_rcfw_channel(&rdev->rcfw); + fail: +- bnxt_re_ib_unreg(rdev, true); ++ if (!locked) ++ rtnl_lock(); ++ bnxt_re_ib_unreg(rdev); ++ rtnl_unlock(); ++ + return rc; + } + +@@ -1567,7 +1547,7 @@ static int bnxt_re_netdev_event(struct notifier_block *notifier, + */ + if (atomic_read(&rdev->sched_count) > 0) + goto exit; +- bnxt_re_ib_unreg(rdev, false); ++ bnxt_re_ib_unreg(rdev); + bnxt_re_remove_one(rdev); + bnxt_re_dev_unreg(rdev); + break; +@@ -1646,7 +1626,10 @@ static void __exit bnxt_re_mod_exit(void) + */ + flush_workqueue(bnxt_re_wq); + bnxt_re_dev_stop(rdev); +- bnxt_re_ib_unreg(rdev, true); ++ /* Acquire the rtnl_lock as the L2 resources are freed here */ ++ rtnl_lock(); ++ bnxt_re_ib_unreg(rdev); ++ rtnl_unlock(); + bnxt_re_remove_one(rdev); + bnxt_re_dev_unreg(rdev); + } +diff --git a/drivers/input/keyboard/atakbd.c b/drivers/input/keyboard/atakbd.c +index f1235831283d..fdeda0b0fbd6 100644 +--- a/drivers/input/keyboard/atakbd.c ++++ b/drivers/input/keyboard/atakbd.c +@@ -79,8 +79,7 @@ MODULE_LICENSE("GPL"); + */ + + +-static unsigned char atakbd_keycode[0x72] = { /* American layout */ +- [0] = KEY_GRAVE, ++static unsigned char atakbd_keycode[0x73] = { /* American layout */ + [1] = KEY_ESC, + [2] = KEY_1, + [3] = KEY_2, +@@ -121,9 +120,9 @@ static unsigned char atakbd_keycode[0x72] = { /* American layout */ + [38] = KEY_L, + [39] = KEY_SEMICOLON, + [40] = KEY_APOSTROPHE, +- [41] = KEY_BACKSLASH, /* FIXME, '#' */ ++ [41] = KEY_GRAVE, + [42] = KEY_LEFTSHIFT, +- [43] = KEY_GRAVE, /* FIXME: '~' */ ++ 
[43] = KEY_BACKSLASH, + [44] = KEY_Z, + [45] = KEY_X, + [46] = KEY_C, +@@ -149,45 +148,34 @@ static unsigned char atakbd_keycode[0x72] = { /* American layout */ + [66] = KEY_F8, + [67] = KEY_F9, + [68] = KEY_F10, +- [69] = KEY_ESC, +- [70] = KEY_DELETE, +- [71] = KEY_KP7, +- [72] = KEY_KP8, +- [73] = KEY_KP9, ++ [71] = KEY_HOME, ++ [72] = KEY_UP, + [74] = KEY_KPMINUS, +- [75] = KEY_KP4, +- [76] = KEY_KP5, +- [77] = KEY_KP6, ++ [75] = KEY_LEFT, ++ [77] = KEY_RIGHT, + [78] = KEY_KPPLUS, +- [79] = KEY_KP1, +- [80] = KEY_KP2, +- [81] = KEY_KP3, +- [82] = KEY_KP0, +- [83] = KEY_KPDOT, +- [90] = KEY_KPLEFTPAREN, +- [91] = KEY_KPRIGHTPAREN, +- [92] = KEY_KPASTERISK, /* FIXME */ +- [93] = KEY_KPASTERISK, +- [94] = KEY_KPPLUS, +- [95] = KEY_HELP, ++ [80] = KEY_DOWN, ++ [82] = KEY_INSERT, ++ [83] = KEY_DELETE, + [96] = KEY_102ND, +- [97] = KEY_KPASTERISK, /* FIXME */ +- [98] = KEY_KPSLASH, ++ [97] = KEY_UNDO, ++ [98] = KEY_HELP, + [99] = KEY_KPLEFTPAREN, + [100] = KEY_KPRIGHTPAREN, + [101] = KEY_KPSLASH, + [102] = KEY_KPASTERISK, +- [103] = KEY_UP, +- [104] = KEY_KPASTERISK, /* FIXME */ +- [105] = KEY_LEFT, +- [106] = KEY_RIGHT, +- [107] = KEY_KPASTERISK, /* FIXME */ +- [108] = KEY_DOWN, +- [109] = KEY_KPASTERISK, /* FIXME */ +- [110] = KEY_KPASTERISK, /* FIXME */ +- [111] = KEY_KPASTERISK, /* FIXME */ +- [112] = KEY_KPASTERISK, /* FIXME */ +- [113] = KEY_KPASTERISK /* FIXME */ ++ [103] = KEY_KP7, ++ [104] = KEY_KP8, ++ [105] = KEY_KP9, ++ [106] = KEY_KP4, ++ [107] = KEY_KP5, ++ [108] = KEY_KP6, ++ [109] = KEY_KP1, ++ [110] = KEY_KP2, ++ [111] = KEY_KP3, ++ [112] = KEY_KP0, ++ [113] = KEY_KPDOT, ++ [114] = KEY_KPENTER, + }; + + static struct input_dev *atakbd_dev; +@@ -195,21 +183,15 @@ static struct input_dev *atakbd_dev; + static void atakbd_interrupt(unsigned char scancode, char down) + { + +- if (scancode < 0x72) { /* scancodes < 0xf2 are keys */ ++ if (scancode < 0x73) { /* scancodes < 0xf3 are keys */ + + // report raw events here? 
+ + scancode = atakbd_keycode[scancode]; + +- if (scancode == KEY_CAPSLOCK) { /* CapsLock is a toggle switch key on Amiga */ +- input_report_key(atakbd_dev, scancode, 1); +- input_report_key(atakbd_dev, scancode, 0); +- input_sync(atakbd_dev); +- } else { +- input_report_key(atakbd_dev, scancode, down); +- input_sync(atakbd_dev); +- } +- } else /* scancodes >= 0xf2 are mouse data, most likely */ ++ input_report_key(atakbd_dev, scancode, down); ++ input_sync(atakbd_dev); ++ } else /* scancodes >= 0xf3 are mouse data, most likely */ + printk(KERN_INFO "atakbd: unhandled scancode %x\n", scancode); + + return; +diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c +index c53363443280..c2b511a16b0e 100644 +--- a/drivers/iommu/amd_iommu.c ++++ b/drivers/iommu/amd_iommu.c +@@ -246,7 +246,13 @@ static u16 get_alias(struct device *dev) + + /* The callers make sure that get_device_id() does not fail here */ + devid = get_device_id(dev); ++ ++ /* For ACPI HID devices, we simply return the devid as such */ ++ if (!dev_is_pci(dev)) ++ return devid; ++ + ivrs_alias = amd_iommu_alias_table[devid]; ++ + pci_for_each_dma_alias(pdev, __last_alias, &pci_alias); + + if (ivrs_alias == pci_alias) +diff --git a/drivers/iommu/rockchip-iommu.c b/drivers/iommu/rockchip-iommu.c +index 2b1724e8d307..701820b39fd1 100644 +--- a/drivers/iommu/rockchip-iommu.c ++++ b/drivers/iommu/rockchip-iommu.c +@@ -1242,6 +1242,12 @@ err_unprepare_clocks: + + static void rk_iommu_shutdown(struct platform_device *pdev) + { ++ struct rk_iommu *iommu = platform_get_drvdata(pdev); ++ int i = 0, irq; ++ ++ while ((irq = platform_get_irq(pdev, i++)) != -ENXIO) ++ devm_free_irq(iommu->dev, irq, iommu); ++ + pm_runtime_force_suspend(&pdev->dev); + } + +diff --git a/drivers/media/usb/dvb-usb-v2/af9035.c b/drivers/media/usb/dvb-usb-v2/af9035.c +index 666d319d3d1a..1f6c1eefe389 100644 +--- a/drivers/media/usb/dvb-usb-v2/af9035.c ++++ b/drivers/media/usb/dvb-usb-v2/af9035.c +@@ -402,8 +402,10 @@ static int af9035_i2c_master_xfer(struct i2c_adapter *adap, + if (msg[0].addr == state->af9033_i2c_addr[1]) + reg |= 0x100000; + +- ret = af9035_wr_regs(d, reg, &msg[0].buf[3], +- msg[0].len - 3); ++ ret = (msg[0].len >= 3) ? af9035_wr_regs(d, reg, ++ &msg[0].buf[3], ++ msg[0].len - 3) ++ : -EOPNOTSUPP; + } else { + /* I2C write */ + u8 buf[MAX_XFER_SIZE]; +diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h b/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h +index 09e38f0733bd..10b9cb2185b1 100644 +--- a/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h ++++ b/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h +@@ -753,7 +753,6 @@ struct cpl_abort_req_rss { + }; + + struct cpl_abort_req_rss6 { +- WR_HDR; + union opcode_tid ot; + __u32 srqidx_status; + }; +diff --git a/drivers/net/ethernet/ibm/emac/core.c b/drivers/net/ethernet/ibm/emac/core.c +index 372664686309..129f4e9f38da 100644 +--- a/drivers/net/ethernet/ibm/emac/core.c ++++ b/drivers/net/ethernet/ibm/emac/core.c +@@ -2677,12 +2677,17 @@ static int emac_init_phy(struct emac_instance *dev) + if (of_phy_is_fixed_link(np)) { + int res = emac_dt_mdio_probe(dev); + +- if (!res) { +- res = of_phy_register_fixed_link(np); +- if (res) +- mdiobus_unregister(dev->mii_bus); ++ if (res) ++ return res; ++ ++ res = of_phy_register_fixed_link(np); ++ dev->phy_dev = of_phy_find_device(np); ++ if (res || !dev->phy_dev) { ++ mdiobus_unregister(dev->mii_bus); ++ return res ? 
res : -EINVAL; + } +- return res; ++ emac_adjust_link(dev->ndev); ++ put_device(&dev->phy_dev->mdio.dev); + } + return 0; + } +diff --git a/drivers/net/ethernet/mellanox/mlx4/eq.c b/drivers/net/ethernet/mellanox/mlx4/eq.c +index 1f3372c1802e..2df92dbd38e1 100644 +--- a/drivers/net/ethernet/mellanox/mlx4/eq.c ++++ b/drivers/net/ethernet/mellanox/mlx4/eq.c +@@ -240,7 +240,8 @@ static void mlx4_set_eq_affinity_hint(struct mlx4_priv *priv, int vec) + struct mlx4_dev *dev = &priv->dev; + struct mlx4_eq *eq = &priv->eq_table.eq[vec]; + +- if (!eq->affinity_mask || cpumask_empty(eq->affinity_mask)) ++ if (!cpumask_available(eq->affinity_mask) || ++ cpumask_empty(eq->affinity_mask)) + return; + + hint_err = irq_set_affinity_hint(eq->irq, eq->affinity_mask); +diff --git a/drivers/net/ethernet/qlogic/qed/qed_dcbx.c b/drivers/net/ethernet/qlogic/qed/qed_dcbx.c +index e0680ce91328..09ed0ba4225a 100644 +--- a/drivers/net/ethernet/qlogic/qed/qed_dcbx.c ++++ b/drivers/net/ethernet/qlogic/qed/qed_dcbx.c +@@ -190,6 +190,7 @@ qed_dcbx_dp_protocol(struct qed_hwfn *p_hwfn, struct qed_dcbx_results *p_data) + + static void + qed_dcbx_set_params(struct qed_dcbx_results *p_data, ++ struct qed_hwfn *p_hwfn, + struct qed_hw_info *p_info, + bool enable, + u8 prio, +@@ -206,6 +207,11 @@ qed_dcbx_set_params(struct qed_dcbx_results *p_data, + else + p_data->arr[type].update = DONT_UPDATE_DCB_DSCP; + ++ /* Do not add vlan tag 0 when DCB is enabled and port in UFP/OV mode */ ++ if ((test_bit(QED_MF_8021Q_TAGGING, &p_hwfn->cdev->mf_bits) || ++ test_bit(QED_MF_8021AD_TAGGING, &p_hwfn->cdev->mf_bits))) ++ p_data->arr[type].dont_add_vlan0 = true; ++ + /* QM reconf data */ + if (p_info->personality == personality) + p_info->offload_tc = tc; +@@ -233,7 +239,7 @@ qed_dcbx_update_app_info(struct qed_dcbx_results *p_data, + personality = qed_dcbx_app_update[i].personality; + name = qed_dcbx_app_update[i].name; + +- qed_dcbx_set_params(p_data, p_info, enable, ++ qed_dcbx_set_params(p_data, p_hwfn, p_info, enable, + prio, tc, type, personality); + } + } +@@ -956,6 +962,7 @@ static void qed_dcbx_update_protocol_data(struct protocol_dcb_data *p_data, + p_data->dcb_enable_flag = p_src->arr[type].enable; + p_data->dcb_priority = p_src->arr[type].priority; + p_data->dcb_tc = p_src->arr[type].tc; ++ p_data->dcb_dont_add_vlan0 = p_src->arr[type].dont_add_vlan0; + } + + /* Set pf update ramrod command params */ +diff --git a/drivers/net/ethernet/qlogic/qed/qed_dcbx.h b/drivers/net/ethernet/qlogic/qed/qed_dcbx.h +index 5feb90e049e0..d950d836858c 100644 +--- a/drivers/net/ethernet/qlogic/qed/qed_dcbx.h ++++ b/drivers/net/ethernet/qlogic/qed/qed_dcbx.h +@@ -55,6 +55,7 @@ struct qed_dcbx_app_data { + u8 update; /* Update indication */ + u8 priority; /* Priority */ + u8 tc; /* Traffic Class */ ++ bool dont_add_vlan0; /* Do not insert a vlan tag with id 0 */ + }; + + #define QED_DCBX_VERSION_DISABLED 0 +diff --git a/drivers/net/ethernet/qlogic/qed/qed_dev.c b/drivers/net/ethernet/qlogic/qed/qed_dev.c +index e5249b4741d0..194f4dbe57d3 100644 +--- a/drivers/net/ethernet/qlogic/qed/qed_dev.c ++++ b/drivers/net/ethernet/qlogic/qed/qed_dev.c +@@ -1636,7 +1636,7 @@ static int qed_vf_start(struct qed_hwfn *p_hwfn, + int qed_hw_init(struct qed_dev *cdev, struct qed_hw_init_params *p_params) + { + struct qed_load_req_params load_req_params; +- u32 load_code, param, drv_mb_param; ++ u32 load_code, resp, param, drv_mb_param; + bool b_default_mtu = true; + struct qed_hwfn *p_hwfn; + int rc = 0, mfw_rc, i; +@@ -1782,6 +1782,19 @@ int qed_hw_init(struct 
qed_dev *cdev, struct qed_hw_init_params *p_params) + + if (IS_PF(cdev)) { + p_hwfn = QED_LEADING_HWFN(cdev); ++ ++ /* Get pre-negotiated values for stag, bandwidth etc. */ ++ DP_VERBOSE(p_hwfn, ++ QED_MSG_SPQ, ++ "Sending GET_OEM_UPDATES command to trigger stag/bandwidth attention handling\n"); ++ drv_mb_param = 1 << DRV_MB_PARAM_DUMMY_OEM_UPDATES_OFFSET; ++ rc = qed_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt, ++ DRV_MSG_CODE_GET_OEM_UPDATES, ++ drv_mb_param, &resp, &param); ++ if (rc) ++ DP_NOTICE(p_hwfn, ++ "Failed to send GET_OEM_UPDATES attention request\n"); ++ + drv_mb_param = STORM_FW_VERSION; + rc = qed_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt, + DRV_MSG_CODE_OV_UPDATE_STORM_FW_VER, +diff --git a/drivers/net/ethernet/qlogic/qed/qed_hsi.h b/drivers/net/ethernet/qlogic/qed/qed_hsi.h +index 463ffa83685f..ec5de7cf1af4 100644 +--- a/drivers/net/ethernet/qlogic/qed/qed_hsi.h ++++ b/drivers/net/ethernet/qlogic/qed/qed_hsi.h +@@ -12415,6 +12415,7 @@ struct public_drv_mb { + #define DRV_MSG_SET_RESOURCE_VALUE_MSG 0x35000000 + #define DRV_MSG_CODE_OV_UPDATE_WOL 0x38000000 + #define DRV_MSG_CODE_OV_UPDATE_ESWITCH_MODE 0x39000000 ++#define DRV_MSG_CODE_GET_OEM_UPDATES 0x41000000 + + #define DRV_MSG_CODE_BW_UPDATE_ACK 0x32000000 + #define DRV_MSG_CODE_NIG_DRAIN 0x30000000 +@@ -12540,6 +12541,9 @@ struct public_drv_mb { + #define DRV_MB_PARAM_ESWITCH_MODE_VEB 0x1 + #define DRV_MB_PARAM_ESWITCH_MODE_VEPA 0x2 + ++#define DRV_MB_PARAM_DUMMY_OEM_UPDATES_MASK 0x1 ++#define DRV_MB_PARAM_DUMMY_OEM_UPDATES_OFFSET 0 ++ + #define DRV_MB_PARAM_SET_LED_MODE_OPER 0x0 + #define DRV_MB_PARAM_SET_LED_MODE_ON 0x1 + #define DRV_MB_PARAM_SET_LED_MODE_OFF 0x2 +diff --git a/drivers/net/ethernet/renesas/ravb.h b/drivers/net/ethernet/renesas/ravb.h +index b81f4faf7b10..1c40989479bd 100644 +--- a/drivers/net/ethernet/renesas/ravb.h ++++ b/drivers/net/ethernet/renesas/ravb.h +@@ -431,6 +431,7 @@ enum EIS_BIT { + EIS_CULF1 = 0x00000080, + EIS_TFFF = 0x00000100, + EIS_QFS = 0x00010000, ++ EIS_RESERVED = (GENMASK(31, 17) | GENMASK(15, 11)), + }; + + /* RIC0 */ +@@ -475,6 +476,7 @@ enum RIS0_BIT { + RIS0_FRF15 = 0x00008000, + RIS0_FRF16 = 0x00010000, + RIS0_FRF17 = 0x00020000, ++ RIS0_RESERVED = GENMASK(31, 18), + }; + + /* RIC1 */ +@@ -531,6 +533,7 @@ enum RIS2_BIT { + RIS2_QFF16 = 0x00010000, + RIS2_QFF17 = 0x00020000, + RIS2_RFFF = 0x80000000, ++ RIS2_RESERVED = GENMASK(30, 18), + }; + + /* TIC */ +@@ -547,6 +550,7 @@ enum TIS_BIT { + TIS_FTF1 = 0x00000002, /* Undocumented? */ + TIS_TFUF = 0x00000100, + TIS_TFWF = 0x00000200, ++ TIS_RESERVED = (GENMASK(31, 20) | GENMASK(15, 12) | GENMASK(7, 4)) + }; + + /* ISS */ +@@ -620,6 +624,7 @@ enum GIC_BIT { + enum GIS_BIT { + GIS_PTCF = 0x00000001, /* Undocumented? 
*/ + GIS_PTMF = 0x00000004, ++ GIS_RESERVED = GENMASK(15, 10), + }; + + /* GIE (R-Car Gen3 only) */ +diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c +index 0d811c02ff34..db4e306ca996 100644 +--- a/drivers/net/ethernet/renesas/ravb_main.c ++++ b/drivers/net/ethernet/renesas/ravb_main.c +@@ -742,10 +742,11 @@ static void ravb_error_interrupt(struct net_device *ndev) + u32 eis, ris2; + + eis = ravb_read(ndev, EIS); +- ravb_write(ndev, ~EIS_QFS, EIS); ++ ravb_write(ndev, ~(EIS_QFS | EIS_RESERVED), EIS); + if (eis & EIS_QFS) { + ris2 = ravb_read(ndev, RIS2); +- ravb_write(ndev, ~(RIS2_QFF0 | RIS2_RFFF), RIS2); ++ ravb_write(ndev, ~(RIS2_QFF0 | RIS2_RFFF | RIS2_RESERVED), ++ RIS2); + + /* Receive Descriptor Empty int */ + if (ris2 & RIS2_QFF0) +@@ -798,7 +799,7 @@ static bool ravb_timestamp_interrupt(struct net_device *ndev) + u32 tis = ravb_read(ndev, TIS); + + if (tis & TIS_TFUF) { +- ravb_write(ndev, ~TIS_TFUF, TIS); ++ ravb_write(ndev, ~(TIS_TFUF | TIS_RESERVED), TIS); + ravb_get_tx_tstamp(ndev); + return true; + } +@@ -933,7 +934,7 @@ static int ravb_poll(struct napi_struct *napi, int budget) + /* Processing RX Descriptor Ring */ + if (ris0 & mask) { + /* Clear RX interrupt */ +- ravb_write(ndev, ~mask, RIS0); ++ ravb_write(ndev, ~(mask | RIS0_RESERVED), RIS0); + if (ravb_rx(ndev, &quota, q)) + goto out; + } +@@ -941,7 +942,7 @@ static int ravb_poll(struct napi_struct *napi, int budget) + if (tis & mask) { + spin_lock_irqsave(&priv->lock, flags); + /* Clear TX interrupt */ +- ravb_write(ndev, ~mask, TIS); ++ ravb_write(ndev, ~(mask | TIS_RESERVED), TIS); + ravb_tx_free(ndev, q, true); + netif_wake_subqueue(ndev, q); + mmiowb(); +diff --git a/drivers/net/ethernet/renesas/ravb_ptp.c b/drivers/net/ethernet/renesas/ravb_ptp.c +index eede70ec37f8..9e3222fd69f9 100644 +--- a/drivers/net/ethernet/renesas/ravb_ptp.c ++++ b/drivers/net/ethernet/renesas/ravb_ptp.c +@@ -319,7 +319,7 @@ void ravb_ptp_interrupt(struct net_device *ndev) + } + } + +- ravb_write(ndev, ~gis, GIS); ++ ravb_write(ndev, ~(gis | GIS_RESERVED), GIS); + } + + void ravb_ptp_init(struct net_device *ndev, struct platform_device *pdev) +diff --git a/drivers/pci/controller/dwc/pcie-designware.c b/drivers/pci/controller/dwc/pcie-designware.c +index 778c4f76a884..2153956a0b20 100644 +--- a/drivers/pci/controller/dwc/pcie-designware.c ++++ b/drivers/pci/controller/dwc/pcie-designware.c +@@ -135,7 +135,7 @@ static void dw_pcie_prog_outbound_atu_unroll(struct dw_pcie *pci, int index, + if (val & PCIE_ATU_ENABLE) + return; + +- usleep_range(LINK_WAIT_IATU_MIN, LINK_WAIT_IATU_MAX); ++ mdelay(LINK_WAIT_IATU); + } + dev_err(pci->dev, "Outbound iATU is not being enabled\n"); + } +@@ -178,7 +178,7 @@ void dw_pcie_prog_outbound_atu(struct dw_pcie *pci, int index, int type, + if (val & PCIE_ATU_ENABLE) + return; + +- usleep_range(LINK_WAIT_IATU_MIN, LINK_WAIT_IATU_MAX); ++ mdelay(LINK_WAIT_IATU); + } + dev_err(pci->dev, "Outbound iATU is not being enabled\n"); + } +@@ -236,7 +236,7 @@ static int dw_pcie_prog_inbound_atu_unroll(struct dw_pcie *pci, int index, + if (val & PCIE_ATU_ENABLE) + return 0; + +- usleep_range(LINK_WAIT_IATU_MIN, LINK_WAIT_IATU_MAX); ++ mdelay(LINK_WAIT_IATU); + } + dev_err(pci->dev, "Inbound iATU is not being enabled\n"); + +@@ -282,7 +282,7 @@ int dw_pcie_prog_inbound_atu(struct dw_pcie *pci, int index, int bar, + if (val & PCIE_ATU_ENABLE) + return 0; + +- usleep_range(LINK_WAIT_IATU_MIN, LINK_WAIT_IATU_MAX); ++ mdelay(LINK_WAIT_IATU); + } + dev_err(pci->dev, "Inbound iATU is 
not being enabled\n"); + +diff --git a/drivers/pci/controller/dwc/pcie-designware.h b/drivers/pci/controller/dwc/pcie-designware.h +index bee4e2535a61..b99d1d72dd12 100644 +--- a/drivers/pci/controller/dwc/pcie-designware.h ++++ b/drivers/pci/controller/dwc/pcie-designware.h +@@ -26,8 +26,7 @@ + + /* Parameters for the waiting for iATU enabled routine */ + #define LINK_WAIT_MAX_IATU_RETRIES 5 +-#define LINK_WAIT_IATU_MIN 9000 +-#define LINK_WAIT_IATU_MAX 10000 ++#define LINK_WAIT_IATU 9 + + /* Synopsys-specific PCIe configuration registers */ + #define PCIE_PORT_LINK_CONTROL 0x710 +diff --git a/drivers/pinctrl/pinctrl-amd.c b/drivers/pinctrl/pinctrl-amd.c +index b91db89eb924..d3ba867d01f0 100644 +--- a/drivers/pinctrl/pinctrl-amd.c ++++ b/drivers/pinctrl/pinctrl-amd.c +@@ -348,21 +348,12 @@ static void amd_gpio_irq_enable(struct irq_data *d) + unsigned long flags; + struct gpio_chip *gc = irq_data_get_irq_chip_data(d); + struct amd_gpio *gpio_dev = gpiochip_get_data(gc); +- u32 mask = BIT(INTERRUPT_ENABLE_OFF) | BIT(INTERRUPT_MASK_OFF); + + raw_spin_lock_irqsave(&gpio_dev->lock, flags); + pin_reg = readl(gpio_dev->base + (d->hwirq)*4); + pin_reg |= BIT(INTERRUPT_ENABLE_OFF); + pin_reg |= BIT(INTERRUPT_MASK_OFF); + writel(pin_reg, gpio_dev->base + (d->hwirq)*4); +- /* +- * When debounce logic is enabled it takes ~900 us before interrupts +- * can be enabled. During this "debounce warm up" period the +- * "INTERRUPT_ENABLE" bit will read as 0. Poll the bit here until it +- * reads back as 1, signaling that interrupts are now enabled. +- */ +- while ((readl(gpio_dev->base + (d->hwirq)*4) & mask) != mask) +- continue; + raw_spin_unlock_irqrestore(&gpio_dev->lock, flags); + } + +@@ -426,7 +417,7 @@ static void amd_gpio_irq_eoi(struct irq_data *d) + static int amd_gpio_irq_set_type(struct irq_data *d, unsigned int type) + { + int ret = 0; +- u32 pin_reg; ++ u32 pin_reg, pin_reg_irq_en, mask; + unsigned long flags, irq_flags; + struct gpio_chip *gc = irq_data_get_irq_chip_data(d); + struct amd_gpio *gpio_dev = gpiochip_get_data(gc); +@@ -495,6 +486,28 @@ static int amd_gpio_irq_set_type(struct irq_data *d, unsigned int type) + } + + pin_reg |= CLR_INTR_STAT << INTERRUPT_STS_OFF; ++ /* ++ * If WAKE_INT_MASTER_REG.MaskStsEn is set, a software write to the ++ * debounce registers of any GPIO will block wake/interrupt status ++ * generation for *all* GPIOs for a lenght of time that depends on ++ * WAKE_INT_MASTER_REG.MaskStsLength[11:0]. During this period the ++ * INTERRUPT_ENABLE bit will read as 0. ++ * ++ * We temporarily enable irq for the GPIO whose configuration is ++ * changing, and then wait for it to read back as 1 to know when ++ * debounce has settled and then disable the irq again. ++ * We do this polling with the spinlock held to ensure other GPIO ++ * access routines do not read an incorrect value for the irq enable ++ * bit of other GPIOs. We keep the GPIO masked while polling to avoid ++ * spurious irqs, and disable the irq again after polling. 
++ */ ++ mask = BIT(INTERRUPT_ENABLE_OFF); ++ pin_reg_irq_en = pin_reg; ++ pin_reg_irq_en |= mask; ++ pin_reg_irq_en &= ~BIT(INTERRUPT_MASK_OFF); ++ writel(pin_reg_irq_en, gpio_dev->base + (d->hwirq)*4); ++ while ((readl(gpio_dev->base + (d->hwirq)*4) & mask) != mask) ++ continue; + writel(pin_reg, gpio_dev->base + (d->hwirq)*4); + raw_spin_unlock_irqrestore(&gpio_dev->lock, flags); + +diff --git a/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c b/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c +index c3a76af9f5fa..ada1ebebd325 100644 +--- a/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c ++++ b/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c +@@ -3475,11 +3475,10 @@ static int ibmvscsis_probe(struct vio_dev *vdev, + vscsi->dds.window[LOCAL].liobn, + vscsi->dds.window[REMOTE].liobn); + +- strcpy(vscsi->eye, "VSCSI "); +- strncat(vscsi->eye, vdev->name, MAX_EYE); ++ snprintf(vscsi->eye, sizeof(vscsi->eye), "VSCSI %s", vdev->name); + + vscsi->dds.unit_id = vdev->unit_address; +- strncpy(vscsi->dds.partition_name, partition_name, ++ strscpy(vscsi->dds.partition_name, partition_name, + sizeof(vscsi->dds.partition_name)); + vscsi->dds.partition_num = partition_number; + +diff --git a/drivers/scsi/ipr.c b/drivers/scsi/ipr.c +index 02d65dce74e5..2e8a91341254 100644 +--- a/drivers/scsi/ipr.c ++++ b/drivers/scsi/ipr.c +@@ -3310,6 +3310,65 @@ static void ipr_release_dump(struct kref *kref) + LEAVE; + } + ++static void ipr_add_remove_thread(struct work_struct *work) ++{ ++ unsigned long lock_flags; ++ struct ipr_resource_entry *res; ++ struct scsi_device *sdev; ++ struct ipr_ioa_cfg *ioa_cfg = ++ container_of(work, struct ipr_ioa_cfg, scsi_add_work_q); ++ u8 bus, target, lun; ++ int did_work; ++ ++ ENTER; ++ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags); ++ ++restart: ++ do { ++ did_work = 0; ++ if (!ioa_cfg->hrrq[IPR_INIT_HRRQ].allow_cmds) { ++ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags); ++ return; ++ } ++ ++ list_for_each_entry(res, &ioa_cfg->used_res_q, queue) { ++ if (res->del_from_ml && res->sdev) { ++ did_work = 1; ++ sdev = res->sdev; ++ if (!scsi_device_get(sdev)) { ++ if (!res->add_to_ml) ++ list_move_tail(&res->queue, &ioa_cfg->free_res_q); ++ else ++ res->del_from_ml = 0; ++ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags); ++ scsi_remove_device(sdev); ++ scsi_device_put(sdev); ++ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags); ++ } ++ break; ++ } ++ } ++ } while (did_work); ++ ++ list_for_each_entry(res, &ioa_cfg->used_res_q, queue) { ++ if (res->add_to_ml) { ++ bus = res->bus; ++ target = res->target; ++ lun = res->lun; ++ res->add_to_ml = 0; ++ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags); ++ scsi_add_device(ioa_cfg->host, bus, target, lun); ++ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags); ++ goto restart; ++ } ++ } ++ ++ ioa_cfg->scan_done = 1; ++ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags); ++ kobject_uevent(&ioa_cfg->host->shost_dev.kobj, KOBJ_CHANGE); ++ LEAVE; ++} ++ + /** + * ipr_worker_thread - Worker thread + * @work: ioa config struct +@@ -3324,13 +3383,9 @@ static void ipr_release_dump(struct kref *kref) + static void ipr_worker_thread(struct work_struct *work) + { + unsigned long lock_flags; +- struct ipr_resource_entry *res; +- struct scsi_device *sdev; + struct ipr_dump *dump; + struct ipr_ioa_cfg *ioa_cfg = + container_of(work, struct ipr_ioa_cfg, work_q); +- u8 bus, target, lun; +- int did_work; + + ENTER; + spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags); +@@ -3368,49 +3423,9 @@ static void 
ipr_worker_thread(struct work_struct *work) + return; + } + +-restart: +- do { +- did_work = 0; +- if (!ioa_cfg->hrrq[IPR_INIT_HRRQ].allow_cmds) { +- spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags); +- return; +- } ++ schedule_work(&ioa_cfg->scsi_add_work_q); + +- list_for_each_entry(res, &ioa_cfg->used_res_q, queue) { +- if (res->del_from_ml && res->sdev) { +- did_work = 1; +- sdev = res->sdev; +- if (!scsi_device_get(sdev)) { +- if (!res->add_to_ml) +- list_move_tail(&res->queue, &ioa_cfg->free_res_q); +- else +- res->del_from_ml = 0; +- spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags); +- scsi_remove_device(sdev); +- scsi_device_put(sdev); +- spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags); +- } +- break; +- } +- } +- } while (did_work); +- +- list_for_each_entry(res, &ioa_cfg->used_res_q, queue) { +- if (res->add_to_ml) { +- bus = res->bus; +- target = res->target; +- lun = res->lun; +- res->add_to_ml = 0; +- spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags); +- scsi_add_device(ioa_cfg->host, bus, target, lun); +- spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags); +- goto restart; +- } +- } +- +- ioa_cfg->scan_done = 1; + spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags); +- kobject_uevent(&ioa_cfg->host->shost_dev.kobj, KOBJ_CHANGE); + LEAVE; + } + +@@ -9908,6 +9923,7 @@ static void ipr_init_ioa_cfg(struct ipr_ioa_cfg *ioa_cfg, + INIT_LIST_HEAD(&ioa_cfg->free_res_q); + INIT_LIST_HEAD(&ioa_cfg->used_res_q); + INIT_WORK(&ioa_cfg->work_q, ipr_worker_thread); ++ INIT_WORK(&ioa_cfg->scsi_add_work_q, ipr_add_remove_thread); + init_waitqueue_head(&ioa_cfg->reset_wait_q); + init_waitqueue_head(&ioa_cfg->msi_wait_q); + init_waitqueue_head(&ioa_cfg->eeh_wait_q); +diff --git a/drivers/scsi/ipr.h b/drivers/scsi/ipr.h +index 93570734cbfb..a98cfd24035a 100644 +--- a/drivers/scsi/ipr.h ++++ b/drivers/scsi/ipr.h +@@ -1568,6 +1568,7 @@ struct ipr_ioa_cfg { + u8 saved_mode_page_len; + + struct work_struct work_q; ++ struct work_struct scsi_add_work_q; + struct workqueue_struct *reset_work_q; + + wait_queue_head_t reset_wait_q; +diff --git a/drivers/scsi/lpfc/lpfc_attr.c b/drivers/scsi/lpfc/lpfc_attr.c +index 729d343861f4..de64cbb0e3d5 100644 +--- a/drivers/scsi/lpfc/lpfc_attr.c ++++ b/drivers/scsi/lpfc/lpfc_attr.c +@@ -320,12 +320,12 @@ lpfc_nvme_info_show(struct device *dev, struct device_attribute *attr, + localport->port_id, statep); + + list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) { ++ nrport = NULL; ++ spin_lock(&vport->phba->hbalock); + rport = lpfc_ndlp_get_nrport(ndlp); +- if (!rport) +- continue; +- +- /* local short-hand pointer. 
*/ +- nrport = rport->remoteport; ++ if (rport) ++ nrport = rport->remoteport; ++ spin_unlock(&vport->phba->hbalock); + if (!nrport) + continue; + +@@ -3304,6 +3304,7 @@ lpfc_update_rport_devloss_tmo(struct lpfc_vport *vport) + struct lpfc_nodelist *ndlp; + #if (IS_ENABLED(CONFIG_NVME_FC)) + struct lpfc_nvme_rport *rport; ++ struct nvme_fc_remote_port *remoteport = NULL; + #endif + + shost = lpfc_shost_from_vport(vport); +@@ -3314,8 +3315,12 @@ lpfc_update_rport_devloss_tmo(struct lpfc_vport *vport) + if (ndlp->rport) + ndlp->rport->dev_loss_tmo = vport->cfg_devloss_tmo; + #if (IS_ENABLED(CONFIG_NVME_FC)) ++ spin_lock(&vport->phba->hbalock); + rport = lpfc_ndlp_get_nrport(ndlp); + if (rport) ++ remoteport = rport->remoteport; ++ spin_unlock(&vport->phba->hbalock); ++ if (remoteport) + nvme_fc_set_remoteport_devloss(rport->remoteport, + vport->cfg_devloss_tmo); + #endif +diff --git a/drivers/scsi/lpfc/lpfc_debugfs.c b/drivers/scsi/lpfc/lpfc_debugfs.c +index 9df0c051349f..aec5b10a8c85 100644 +--- a/drivers/scsi/lpfc/lpfc_debugfs.c ++++ b/drivers/scsi/lpfc/lpfc_debugfs.c +@@ -551,7 +551,7 @@ lpfc_debugfs_nodelist_data(struct lpfc_vport *vport, char *buf, int size) + unsigned char *statep; + struct nvme_fc_local_port *localport; + struct lpfc_nvmet_tgtport *tgtp; +- struct nvme_fc_remote_port *nrport; ++ struct nvme_fc_remote_port *nrport = NULL; + struct lpfc_nvme_rport *rport; + + cnt = (LPFC_NODELIST_SIZE / LPFC_NODELIST_ENTRY_SIZE); +@@ -696,11 +696,11 @@ lpfc_debugfs_nodelist_data(struct lpfc_vport *vport, char *buf, int size) + len += snprintf(buf + len, size - len, "\tRport List:\n"); + list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) { + /* local short-hand pointer. */ ++ spin_lock(&phba->hbalock); + rport = lpfc_ndlp_get_nrport(ndlp); +- if (!rport) +- continue; +- +- nrport = rport->remoteport; ++ if (rport) ++ nrport = rport->remoteport; ++ spin_unlock(&phba->hbalock); + if (!nrport) + continue; + +diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c +index cab1fb087e6a..0960dcaf1684 100644 +--- a/drivers/scsi/lpfc/lpfc_nvme.c ++++ b/drivers/scsi/lpfc/lpfc_nvme.c +@@ -2718,7 +2718,9 @@ lpfc_nvme_register_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) + rpinfo.port_name = wwn_to_u64(ndlp->nlp_portname.u.wwn); + rpinfo.node_name = wwn_to_u64(ndlp->nlp_nodename.u.wwn); + ++ spin_lock_irq(&vport->phba->hbalock); + oldrport = lpfc_ndlp_get_nrport(ndlp); ++ spin_unlock_irq(&vport->phba->hbalock); + if (!oldrport) + lpfc_nlp_get(ndlp); + +@@ -2833,7 +2835,7 @@ lpfc_nvme_unregister_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) + struct nvme_fc_local_port *localport; + struct lpfc_nvme_lport *lport; + struct lpfc_nvme_rport *rport; +- struct nvme_fc_remote_port *remoteport; ++ struct nvme_fc_remote_port *remoteport = NULL; + + localport = vport->localport; + +@@ -2847,11 +2849,14 @@ lpfc_nvme_unregister_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) + if (!lport) + goto input_err; + ++ spin_lock_irq(&vport->phba->hbalock); + rport = lpfc_ndlp_get_nrport(ndlp); +- if (!rport) ++ if (rport) ++ remoteport = rport->remoteport; ++ spin_unlock_irq(&vport->phba->hbalock); ++ if (!remoteport) + goto input_err; + +- remoteport = rport->remoteport; + lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_DISC, + "6033 Unreg nvme remoteport %p, portname x%llx, " + "port_id x%06x, portstate x%x port type x%x\n", +diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c +index 9421d9877730..0949d3db56e7 100644 +--- a/drivers/scsi/sd.c ++++ 
b/drivers/scsi/sd.c +@@ -1277,7 +1277,8 @@ static int sd_init_command(struct scsi_cmnd *cmd) + case REQ_OP_ZONE_RESET: + return sd_zbc_setup_reset_cmnd(cmd); + default: +- BUG(); ++ WARN_ON_ONCE(1); ++ return BLKPREP_KILL; + } + } + +diff --git a/drivers/soundwire/stream.c b/drivers/soundwire/stream.c +index 4b5e250e8615..e5c7e1ef6318 100644 +--- a/drivers/soundwire/stream.c ++++ b/drivers/soundwire/stream.c +@@ -899,9 +899,10 @@ static void sdw_release_master_stream(struct sdw_stream_runtime *stream) + struct sdw_master_runtime *m_rt = stream->m_rt; + struct sdw_slave_runtime *s_rt, *_s_rt; + +- list_for_each_entry_safe(s_rt, _s_rt, +- &m_rt->slave_rt_list, m_rt_node) +- sdw_stream_remove_slave(s_rt->slave, stream); ++ list_for_each_entry_safe(s_rt, _s_rt, &m_rt->slave_rt_list, m_rt_node) { ++ sdw_slave_port_release(s_rt->slave->bus, s_rt->slave, stream); ++ sdw_release_slave_stream(s_rt->slave, stream); ++ } + + list_del(&m_rt->bus_node); + } +@@ -1112,7 +1113,7 @@ int sdw_stream_add_master(struct sdw_bus *bus, + "Master runtime config failed for stream:%s", + stream->name); + ret = -ENOMEM; +- goto error; ++ goto unlock; + } + + ret = sdw_config_stream(bus->dev, stream, stream_config, false); +@@ -1123,11 +1124,11 @@ int sdw_stream_add_master(struct sdw_bus *bus, + if (ret) + goto stream_error; + +- stream->state = SDW_STREAM_CONFIGURED; ++ goto unlock; + + stream_error: + sdw_release_master_stream(stream); +-error: ++unlock: + mutex_unlock(&bus->bus_lock); + return ret; + } +@@ -1141,6 +1142,10 @@ EXPORT_SYMBOL(sdw_stream_add_master); + * @stream: SoundWire stream + * @port_config: Port configuration for audio stream + * @num_ports: Number of ports ++ * ++ * It is expected that Slave is added before adding Master ++ * to the Stream. ++ * + */ + int sdw_stream_add_slave(struct sdw_slave *slave, + struct sdw_stream_config *stream_config, +@@ -1186,6 +1191,12 @@ int sdw_stream_add_slave(struct sdw_slave *slave, + if (ret) + goto stream_error; + ++ /* ++ * Change stream state to CONFIGURED on first Slave add. ++ * Bus is not aware of number of Slave(s) in a stream at this ++ * point so cannot depend on all Slave(s) to be added in order to ++ * change stream state to CONFIGURED. 
++ */ + stream->state = SDW_STREAM_CONFIGURED; + goto error; + +diff --git a/drivers/spi/spi-gpio.c b/drivers/spi/spi-gpio.c +index 6ae92d4dca19..3b518ead504e 100644 +--- a/drivers/spi/spi-gpio.c ++++ b/drivers/spi/spi-gpio.c +@@ -287,8 +287,8 @@ static int spi_gpio_request(struct device *dev, + *mflags |= SPI_MASTER_NO_RX; + + spi_gpio->sck = devm_gpiod_get(dev, "sck", GPIOD_OUT_LOW); +- if (IS_ERR(spi_gpio->mosi)) +- return PTR_ERR(spi_gpio->mosi); ++ if (IS_ERR(spi_gpio->sck)) ++ return PTR_ERR(spi_gpio->sck); + + for (i = 0; i < num_chipselects; i++) { + spi_gpio->cs_gpios[i] = devm_gpiod_get_index(dev, "cs", +diff --git a/fs/namespace.c b/fs/namespace.c +index 1949e0939d40..bd2f4c68506a 100644 +--- a/fs/namespace.c ++++ b/fs/namespace.c +@@ -446,10 +446,10 @@ int mnt_want_write_file_path(struct file *file) + { + int ret; + +- sb_start_write(file_inode(file)->i_sb); ++ sb_start_write(file->f_path.mnt->mnt_sb); + ret = __mnt_want_write_file(file); + if (ret) +- sb_end_write(file_inode(file)->i_sb); ++ sb_end_write(file->f_path.mnt->mnt_sb); + return ret; + } + +@@ -540,8 +540,7 @@ void __mnt_drop_write_file(struct file *file) + + void mnt_drop_write_file_path(struct file *file) + { +- __mnt_drop_write_file(file); +- sb_end_write(file_inode(file)->i_sb); ++ mnt_drop_write(file->f_path.mnt); + } + + void mnt_drop_write_file(struct file *file) +diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h +index a8a126259bc4..0bec79ae4c2d 100644 +--- a/include/linux/huge_mm.h ++++ b/include/linux/huge_mm.h +@@ -42,7 +42,7 @@ extern int mincore_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd, + unsigned char *vec); + extern bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr, + unsigned long new_addr, unsigned long old_end, +- pmd_t *old_pmd, pmd_t *new_pmd, bool *need_flush); ++ pmd_t *old_pmd, pmd_t *new_pmd); + extern int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd, + unsigned long addr, pgprot_t newprot, + int prot_numa); +diff --git a/kernel/bpf/sockmap.c b/kernel/bpf/sockmap.c +index f833a60699ad..e60078ffb302 100644 +--- a/kernel/bpf/sockmap.c ++++ b/kernel/bpf/sockmap.c +@@ -132,6 +132,7 @@ struct smap_psock { + struct work_struct gc_work; + + struct proto *sk_proto; ++ void (*save_unhash)(struct sock *sk); + void (*save_close)(struct sock *sk, long timeout); + void (*save_data_ready)(struct sock *sk); + void (*save_write_space)(struct sock *sk); +@@ -143,6 +144,7 @@ static int bpf_tcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, + static int bpf_tcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t size); + static int bpf_tcp_sendpage(struct sock *sk, struct page *page, + int offset, size_t size, int flags); ++static void bpf_tcp_unhash(struct sock *sk); + static void bpf_tcp_close(struct sock *sk, long timeout); + + static inline struct smap_psock *smap_psock_sk(const struct sock *sk) +@@ -184,6 +186,7 @@ static void build_protos(struct proto prot[SOCKMAP_NUM_CONFIGS], + struct proto *base) + { + prot[SOCKMAP_BASE] = *base; ++ prot[SOCKMAP_BASE].unhash = bpf_tcp_unhash; + prot[SOCKMAP_BASE].close = bpf_tcp_close; + prot[SOCKMAP_BASE].recvmsg = bpf_tcp_recvmsg; + prot[SOCKMAP_BASE].stream_memory_read = bpf_tcp_stream_read; +@@ -217,6 +220,7 @@ static int bpf_tcp_init(struct sock *sk) + return -EBUSY; + } + ++ psock->save_unhash = sk->sk_prot->unhash; + psock->save_close = sk->sk_prot->close; + psock->sk_proto = sk->sk_prot; + +@@ -305,30 +309,12 @@ static struct smap_psock_map_entry *psock_map_pop(struct sock *sk, + return e; + } + 
+-static void bpf_tcp_close(struct sock *sk, long timeout) ++static void bpf_tcp_remove(struct sock *sk, struct smap_psock *psock) + { +- void (*close_fun)(struct sock *sk, long timeout); + struct smap_psock_map_entry *e; + struct sk_msg_buff *md, *mtmp; +- struct smap_psock *psock; + struct sock *osk; + +- lock_sock(sk); +- rcu_read_lock(); +- psock = smap_psock_sk(sk); +- if (unlikely(!psock)) { +- rcu_read_unlock(); +- release_sock(sk); +- return sk->sk_prot->close(sk, timeout); +- } +- +- /* The psock may be destroyed anytime after exiting the RCU critial +- * section so by the time we use close_fun the psock may no longer +- * be valid. However, bpf_tcp_close is called with the sock lock +- * held so the close hook and sk are still valid. +- */ +- close_fun = psock->save_close; +- + if (psock->cork) { + free_start_sg(psock->sock, psock->cork, true); + kfree(psock->cork); +@@ -379,6 +365,42 @@ static void bpf_tcp_close(struct sock *sk, long timeout) + kfree(e); + e = psock_map_pop(sk, psock); + } ++} ++ ++static void bpf_tcp_unhash(struct sock *sk) ++{ ++ void (*unhash_fun)(struct sock *sk); ++ struct smap_psock *psock; ++ ++ rcu_read_lock(); ++ psock = smap_psock_sk(sk); ++ if (unlikely(!psock)) { ++ rcu_read_unlock(); ++ if (sk->sk_prot->unhash) ++ sk->sk_prot->unhash(sk); ++ return; ++ } ++ unhash_fun = psock->save_unhash; ++ bpf_tcp_remove(sk, psock); ++ rcu_read_unlock(); ++ unhash_fun(sk); ++} ++ ++static void bpf_tcp_close(struct sock *sk, long timeout) ++{ ++ void (*close_fun)(struct sock *sk, long timeout); ++ struct smap_psock *psock; ++ ++ lock_sock(sk); ++ rcu_read_lock(); ++ psock = smap_psock_sk(sk); ++ if (unlikely(!psock)) { ++ rcu_read_unlock(); ++ release_sock(sk); ++ return sk->sk_prot->close(sk, timeout); ++ } ++ close_fun = psock->save_close; ++ bpf_tcp_remove(sk, psock); + rcu_read_unlock(); + release_sock(sk); + close_fun(sk, timeout); +@@ -2100,8 +2122,12 @@ static int sock_map_update_elem(struct bpf_map *map, + return -EINVAL; + } + ++ /* ULPs are currently supported only for TCP sockets in ESTABLISHED ++ * state. ++ */ + if (skops.sk->sk_type != SOCK_STREAM || +- skops.sk->sk_protocol != IPPROTO_TCP) { ++ skops.sk->sk_protocol != IPPROTO_TCP || ++ skops.sk->sk_state != TCP_ESTABLISHED) { + fput(socket->file); + return -EOPNOTSUPP; + } +@@ -2456,6 +2482,16 @@ static int sock_hash_update_elem(struct bpf_map *map, + return -EINVAL; + } + ++ /* ULPs are currently supported only for TCP sockets in ESTABLISHED ++ * state. ++ */ ++ if (skops.sk->sk_type != SOCK_STREAM || ++ skops.sk->sk_protocol != IPPROTO_TCP || ++ skops.sk->sk_state != TCP_ESTABLISHED) { ++ fput(socket->file); ++ return -EOPNOTSUPP; ++ } ++ + lock_sock(skops.sk); + preempt_disable(); + rcu_read_lock(); +@@ -2544,10 +2580,22 @@ const struct bpf_map_ops sock_hash_ops = { + .map_release_uref = sock_map_release, + }; + ++static bool bpf_is_valid_sock_op(struct bpf_sock_ops_kern *ops) ++{ ++ return ops->op == BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB || ++ ops->op == BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB; ++} + BPF_CALL_4(bpf_sock_map_update, struct bpf_sock_ops_kern *, bpf_sock, + struct bpf_map *, map, void *, key, u64, flags) + { + WARN_ON_ONCE(!rcu_read_lock_held()); ++ ++ /* ULPs are currently supported only for TCP sockets in ESTABLISHED ++ * state. This checks that the sock ops triggering the update is ++ * one indicating we are (or will be soon) in an ESTABLISHED state. 
++ */ ++ if (!bpf_is_valid_sock_op(bpf_sock)) ++ return -EOPNOTSUPP; + return sock_map_ctx_update_elem(bpf_sock, map, key, flags); + } + +@@ -2566,6 +2614,9 @@ BPF_CALL_4(bpf_sock_hash_update, struct bpf_sock_ops_kern *, bpf_sock, + struct bpf_map *, map, void *, key, u64, flags) + { + WARN_ON_ONCE(!rcu_read_lock_held()); ++ ++ if (!bpf_is_valid_sock_op(bpf_sock)) ++ return -EOPNOTSUPP; + return sock_hash_ctx_update_elem(bpf_sock, map, key, flags); + } + +diff --git a/mm/huge_memory.c b/mm/huge_memory.c +index f7274e0c8bdc..3238bb2d0c93 100644 +--- a/mm/huge_memory.c ++++ b/mm/huge_memory.c +@@ -1778,7 +1778,7 @@ static pmd_t move_soft_dirty_pmd(pmd_t pmd) + + bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr, + unsigned long new_addr, unsigned long old_end, +- pmd_t *old_pmd, pmd_t *new_pmd, bool *need_flush) ++ pmd_t *old_pmd, pmd_t *new_pmd) + { + spinlock_t *old_ptl, *new_ptl; + pmd_t pmd; +@@ -1809,7 +1809,7 @@ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr, + if (new_ptl != old_ptl) + spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING); + pmd = pmdp_huge_get_and_clear(mm, old_addr, old_pmd); +- if (pmd_present(pmd) && pmd_dirty(pmd)) ++ if (pmd_present(pmd)) + force_flush = true; + VM_BUG_ON(!pmd_none(*new_pmd)); + +@@ -1820,12 +1820,10 @@ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr, + } + pmd = move_soft_dirty_pmd(pmd); + set_pmd_at(mm, new_addr, new_pmd, pmd); +- if (new_ptl != old_ptl) +- spin_unlock(new_ptl); + if (force_flush) + flush_tlb_range(vma, old_addr, old_addr + PMD_SIZE); +- else +- *need_flush = true; ++ if (new_ptl != old_ptl) ++ spin_unlock(new_ptl); + spin_unlock(old_ptl); + return true; + } +diff --git a/mm/mremap.c b/mm/mremap.c +index 5c2e18505f75..a9617e72e6b7 100644 +--- a/mm/mremap.c ++++ b/mm/mremap.c +@@ -115,7 +115,7 @@ static pte_t move_soft_dirty_pte(pte_t pte) + static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd, + unsigned long old_addr, unsigned long old_end, + struct vm_area_struct *new_vma, pmd_t *new_pmd, +- unsigned long new_addr, bool need_rmap_locks, bool *need_flush) ++ unsigned long new_addr, bool need_rmap_locks) + { + struct mm_struct *mm = vma->vm_mm; + pte_t *old_pte, *new_pte, pte; +@@ -163,15 +163,17 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd, + + pte = ptep_get_and_clear(mm, old_addr, old_pte); + /* +- * If we are remapping a dirty PTE, make sure ++ * If we are remapping a valid PTE, make sure + * to flush TLB before we drop the PTL for the +- * old PTE or we may race with page_mkclean(). ++ * PTE. + * +- * This check has to be done after we removed the +- * old PTE from page tables or another thread may +- * dirty it after the check and before the removal. ++ * NOTE! Both old and new PTL matter: the old one ++ * for racing with page_mkclean(), the new one to ++ * make sure the physical page stays valid until ++ * the TLB entry for the old mapping has been ++ * flushed. 
+ */ +- if (pte_present(pte) && pte_dirty(pte)) ++ if (pte_present(pte)) + force_flush = true; + pte = move_pte(pte, new_vma->vm_page_prot, old_addr, new_addr); + pte = move_soft_dirty_pte(pte); +@@ -179,13 +181,11 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd, + } + + arch_leave_lazy_mmu_mode(); ++ if (force_flush) ++ flush_tlb_range(vma, old_end - len, old_end); + if (new_ptl != old_ptl) + spin_unlock(new_ptl); + pte_unmap(new_pte - 1); +- if (force_flush) +- flush_tlb_range(vma, old_end - len, old_end); +- else +- *need_flush = true; + pte_unmap_unlock(old_pte - 1, old_ptl); + if (need_rmap_locks) + drop_rmap_locks(vma); +@@ -198,7 +198,6 @@ unsigned long move_page_tables(struct vm_area_struct *vma, + { + unsigned long extent, next, old_end; + pmd_t *old_pmd, *new_pmd; +- bool need_flush = false; + unsigned long mmun_start; /* For mmu_notifiers */ + unsigned long mmun_end; /* For mmu_notifiers */ + +@@ -229,8 +228,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma, + if (need_rmap_locks) + take_rmap_locks(vma); + moved = move_huge_pmd(vma, old_addr, new_addr, +- old_end, old_pmd, new_pmd, +- &need_flush); ++ old_end, old_pmd, new_pmd); + if (need_rmap_locks) + drop_rmap_locks(vma); + if (moved) +@@ -246,10 +244,8 @@ unsigned long move_page_tables(struct vm_area_struct *vma, + if (extent > next - new_addr) + extent = next - new_addr; + move_ptes(vma, old_pmd, old_addr, old_addr + extent, new_vma, +- new_pmd, new_addr, need_rmap_locks, &need_flush); ++ new_pmd, new_addr, need_rmap_locks); + } +- if (need_flush) +- flush_tlb_range(vma, old_end-len, old_addr); + + mmu_notifier_invalidate_range_end(vma->vm_mm, mmun_start, mmun_end); + +diff --git a/net/batman-adv/bat_v_elp.c b/net/batman-adv/bat_v_elp.c +index 71c20c1d4002..9f481cfdf77d 100644 +--- a/net/batman-adv/bat_v_elp.c ++++ b/net/batman-adv/bat_v_elp.c +@@ -241,7 +241,7 @@ batadv_v_elp_wifi_neigh_probe(struct batadv_hardif_neigh_node *neigh) + * the packet to be exactly of that size to make the link + * throughput estimation effective. + */ +- skb_put(skb, probe_len - hard_iface->bat_v.elp_skb->len); ++ skb_put_zero(skb, probe_len - hard_iface->bat_v.elp_skb->len); + + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "Sending unicast (probe) ELP packet on interface %s to %pM\n", +@@ -268,6 +268,7 @@ static void batadv_v_elp_periodic_work(struct work_struct *work) + struct batadv_priv *bat_priv; + struct sk_buff *skb; + u32 elp_interval; ++ bool ret; + + bat_v = container_of(work, struct batadv_hard_iface_bat_v, elp_wq.work); + hard_iface = container_of(bat_v, struct batadv_hard_iface, bat_v); +@@ -329,8 +330,11 @@ static void batadv_v_elp_periodic_work(struct work_struct *work) + * may sleep and that is not allowed in an rcu protected + * context. Therefore schedule a task for that. 
+ */ +- queue_work(batadv_event_workqueue, +- &hardif_neigh->bat_v.metric_work); ++ ret = queue_work(batadv_event_workqueue, ++ &hardif_neigh->bat_v.metric_work); ++ ++ if (!ret) ++ batadv_hardif_neigh_put(hardif_neigh); + } + rcu_read_unlock(); + +diff --git a/net/batman-adv/bridge_loop_avoidance.c b/net/batman-adv/bridge_loop_avoidance.c +index a2de5a44bd41..58c093caf49e 100644 +--- a/net/batman-adv/bridge_loop_avoidance.c ++++ b/net/batman-adv/bridge_loop_avoidance.c +@@ -1772,6 +1772,7 @@ batadv_bla_loopdetect_check(struct batadv_priv *bat_priv, struct sk_buff *skb, + { + struct batadv_bla_backbone_gw *backbone_gw; + struct ethhdr *ethhdr; ++ bool ret; + + ethhdr = eth_hdr(skb); + +@@ -1795,8 +1796,13 @@ batadv_bla_loopdetect_check(struct batadv_priv *bat_priv, struct sk_buff *skb, + if (unlikely(!backbone_gw)) + return true; + +- queue_work(batadv_event_workqueue, &backbone_gw->report_work); +- /* backbone_gw is unreferenced in the report work function function */ ++ ret = queue_work(batadv_event_workqueue, &backbone_gw->report_work); ++ ++ /* backbone_gw is unreferenced in the report work function function ++ * if queue_work() call was successful ++ */ ++ if (!ret) ++ batadv_backbone_gw_put(backbone_gw); + + return true; + } +diff --git a/net/batman-adv/gateway_client.c b/net/batman-adv/gateway_client.c +index 8b198ee798c9..140c61a3f1ec 100644 +--- a/net/batman-adv/gateway_client.c ++++ b/net/batman-adv/gateway_client.c +@@ -32,6 +32,7 @@ + #include <linux/kernel.h> + #include <linux/kref.h> + #include <linux/list.h> ++#include <linux/lockdep.h> + #include <linux/netdevice.h> + #include <linux/netlink.h> + #include <linux/rculist.h> +@@ -348,6 +349,9 @@ out: + * @bat_priv: the bat priv with all the soft interface information + * @orig_node: originator announcing gateway capabilities + * @gateway: announced bandwidth information ++ * ++ * Has to be called with the appropriate locks being acquired ++ * (gw.list_lock). 
+ */ + static void batadv_gw_node_add(struct batadv_priv *bat_priv, + struct batadv_orig_node *orig_node, +@@ -355,6 +359,8 @@ static void batadv_gw_node_add(struct batadv_priv *bat_priv, + { + struct batadv_gw_node *gw_node; + ++ lockdep_assert_held(&bat_priv->gw.list_lock); ++ + if (gateway->bandwidth_down == 0) + return; + +@@ -369,10 +375,8 @@ static void batadv_gw_node_add(struct batadv_priv *bat_priv, + gw_node->bandwidth_down = ntohl(gateway->bandwidth_down); + gw_node->bandwidth_up = ntohl(gateway->bandwidth_up); + +- spin_lock_bh(&bat_priv->gw.list_lock); + kref_get(&gw_node->refcount); + hlist_add_head_rcu(&gw_node->list, &bat_priv->gw.gateway_list); +- spin_unlock_bh(&bat_priv->gw.list_lock); + + batadv_dbg(BATADV_DBG_BATMAN, bat_priv, + "Found new gateway %pM -> gw bandwidth: %u.%u/%u.%u MBit\n", +@@ -428,11 +432,14 @@ void batadv_gw_node_update(struct batadv_priv *bat_priv, + { + struct batadv_gw_node *gw_node, *curr_gw = NULL; + ++ spin_lock_bh(&bat_priv->gw.list_lock); + gw_node = batadv_gw_node_get(bat_priv, orig_node); + if (!gw_node) { + batadv_gw_node_add(bat_priv, orig_node, gateway); ++ spin_unlock_bh(&bat_priv->gw.list_lock); + goto out; + } ++ spin_unlock_bh(&bat_priv->gw.list_lock); + + if (gw_node->bandwidth_down == ntohl(gateway->bandwidth_down) && + gw_node->bandwidth_up == ntohl(gateway->bandwidth_up)) +diff --git a/net/batman-adv/network-coding.c b/net/batman-adv/network-coding.c +index c3578444f3cb..34caf129a9bf 100644 +--- a/net/batman-adv/network-coding.c ++++ b/net/batman-adv/network-coding.c +@@ -854,16 +854,27 @@ batadv_nc_get_nc_node(struct batadv_priv *bat_priv, + spinlock_t *lock; /* Used to lock list selected by "int in_coding" */ + struct list_head *list; + ++ /* Select ingoing or outgoing coding node */ ++ if (in_coding) { ++ lock = &orig_neigh_node->in_coding_list_lock; ++ list = &orig_neigh_node->in_coding_list; ++ } else { ++ lock = &orig_neigh_node->out_coding_list_lock; ++ list = &orig_neigh_node->out_coding_list; ++ } ++ ++ spin_lock_bh(lock); ++ + /* Check if nc_node is already added */ + nc_node = batadv_nc_find_nc_node(orig_node, orig_neigh_node, in_coding); + + /* Node found */ + if (nc_node) +- return nc_node; ++ goto unlock; + + nc_node = kzalloc(sizeof(*nc_node), GFP_ATOMIC); + if (!nc_node) +- return NULL; ++ goto unlock; + + /* Initialize nc_node */ + INIT_LIST_HEAD(&nc_node->list); +@@ -872,22 +883,14 @@ batadv_nc_get_nc_node(struct batadv_priv *bat_priv, + kref_get(&orig_neigh_node->refcount); + nc_node->orig_node = orig_neigh_node; + +- /* Select ingoing or outgoing coding node */ +- if (in_coding) { +- lock = &orig_neigh_node->in_coding_list_lock; +- list = &orig_neigh_node->in_coding_list; +- } else { +- lock = &orig_neigh_node->out_coding_list_lock; +- list = &orig_neigh_node->out_coding_list; +- } +- + batadv_dbg(BATADV_DBG_NC, bat_priv, "Adding nc_node %pM -> %pM\n", + nc_node->addr, nc_node->orig_node->orig); + + /* Add nc_node to orig_node */ +- spin_lock_bh(lock); + kref_get(&nc_node->refcount); + list_add_tail_rcu(&nc_node->list, list); ++ ++unlock: + spin_unlock_bh(lock); + + return nc_node; +diff --git a/net/batman-adv/soft-interface.c b/net/batman-adv/soft-interface.c +index 1485263a348b..626ddca332db 100644 +--- a/net/batman-adv/soft-interface.c ++++ b/net/batman-adv/soft-interface.c +@@ -574,15 +574,20 @@ int batadv_softif_create_vlan(struct batadv_priv *bat_priv, unsigned short vid) + struct batadv_softif_vlan *vlan; + int err; + ++ spin_lock_bh(&bat_priv->softif_vlan_list_lock); ++ + vlan = 
batadv_softif_vlan_get(bat_priv, vid); + if (vlan) { + batadv_softif_vlan_put(vlan); ++ spin_unlock_bh(&bat_priv->softif_vlan_list_lock); + return -EEXIST; + } + + vlan = kzalloc(sizeof(*vlan), GFP_ATOMIC); +- if (!vlan) ++ if (!vlan) { ++ spin_unlock_bh(&bat_priv->softif_vlan_list_lock); + return -ENOMEM; ++ } + + vlan->bat_priv = bat_priv; + vlan->vid = vid; +@@ -590,17 +595,23 @@ int batadv_softif_create_vlan(struct batadv_priv *bat_priv, unsigned short vid) + + atomic_set(&vlan->ap_isolation, 0); + ++ kref_get(&vlan->refcount); ++ hlist_add_head_rcu(&vlan->list, &bat_priv->softif_vlan_list); ++ spin_unlock_bh(&bat_priv->softif_vlan_list_lock); ++ ++ /* batadv_sysfs_add_vlan cannot be in the spinlock section due to the ++ * sleeping behavior of the sysfs functions and the fs_reclaim lock ++ */ + err = batadv_sysfs_add_vlan(bat_priv->soft_iface, vlan); + if (err) { +- kfree(vlan); ++ /* ref for the function */ ++ batadv_softif_vlan_put(vlan); ++ ++ /* ref for the list */ ++ batadv_softif_vlan_put(vlan); + return err; + } + +- spin_lock_bh(&bat_priv->softif_vlan_list_lock); +- kref_get(&vlan->refcount); +- hlist_add_head_rcu(&vlan->list, &bat_priv->softif_vlan_list); +- spin_unlock_bh(&bat_priv->softif_vlan_list_lock); +- + /* add a new TT local entry. This one will be marked with the NOPURGE + * flag + */ +diff --git a/net/batman-adv/sysfs.c b/net/batman-adv/sysfs.c +index f2eef43bd2ec..09427fc6494a 100644 +--- a/net/batman-adv/sysfs.c ++++ b/net/batman-adv/sysfs.c +@@ -188,7 +188,8 @@ ssize_t batadv_store_##_name(struct kobject *kobj, \ + \ + return __batadv_store_uint_attr(buff, count, _min, _max, \ + _post_func, attr, \ +- &bat_priv->_var, net_dev); \ ++ &bat_priv->_var, net_dev, \ ++ NULL); \ + } + + #define BATADV_ATTR_SIF_SHOW_UINT(_name, _var) \ +@@ -262,7 +263,9 @@ ssize_t batadv_store_##_name(struct kobject *kobj, \ + \ + length = __batadv_store_uint_attr(buff, count, _min, _max, \ + _post_func, attr, \ +- &hard_iface->_var, net_dev); \ ++ &hard_iface->_var, \ ++ hard_iface->soft_iface, \ ++ net_dev); \ + \ + batadv_hardif_put(hard_iface); \ + return length; \ +@@ -356,10 +359,12 @@ __batadv_store_bool_attr(char *buff, size_t count, + + static int batadv_store_uint_attr(const char *buff, size_t count, + struct net_device *net_dev, ++ struct net_device *slave_dev, + const char *attr_name, + unsigned int min, unsigned int max, + atomic_t *attr) + { ++ char ifname[IFNAMSIZ + 3] = ""; + unsigned long uint_val; + int ret; + +@@ -385,8 +390,11 @@ static int batadv_store_uint_attr(const char *buff, size_t count, + if (atomic_read(attr) == uint_val) + return count; + +- batadv_info(net_dev, "%s: Changing from: %i to: %lu\n", +- attr_name, atomic_read(attr), uint_val); ++ if (slave_dev) ++ snprintf(ifname, sizeof(ifname), "%s: ", slave_dev->name); ++ ++ batadv_info(net_dev, "%s: %sChanging from: %i to: %lu\n", ++ attr_name, ifname, atomic_read(attr), uint_val); + + atomic_set(attr, uint_val); + return count; +@@ -397,12 +405,13 @@ static ssize_t __batadv_store_uint_attr(const char *buff, size_t count, + void (*post_func)(struct net_device *), + const struct attribute *attr, + atomic_t *attr_store, +- struct net_device *net_dev) ++ struct net_device *net_dev, ++ struct net_device *slave_dev) + { + int ret; + +- ret = batadv_store_uint_attr(buff, count, net_dev, attr->name, min, max, +- attr_store); ++ ret = batadv_store_uint_attr(buff, count, net_dev, slave_dev, ++ attr->name, min, max, attr_store); + if (post_func && ret) + post_func(net_dev); + +@@ -571,7 +580,7 @@ static ssize_t 
batadv_store_gw_sel_class(struct kobject *kobj, + return __batadv_store_uint_attr(buff, count, 1, BATADV_TQ_MAX_VALUE, + batadv_post_gw_reselect, attr, + &bat_priv->gw.sel_class, +- bat_priv->soft_iface); ++ bat_priv->soft_iface, NULL); + } + + static ssize_t batadv_show_gw_bwidth(struct kobject *kobj, +@@ -1090,8 +1099,9 @@ static ssize_t batadv_store_throughput_override(struct kobject *kobj, + if (old_tp_override == tp_override) + goto out; + +- batadv_info(net_dev, "%s: Changing from: %u.%u MBit to: %u.%u MBit\n", +- "throughput_override", ++ batadv_info(hard_iface->soft_iface, ++ "%s: %s: Changing from: %u.%u MBit to: %u.%u MBit\n", ++ "throughput_override", net_dev->name, + old_tp_override / 10, old_tp_override % 10, + tp_override / 10, tp_override % 10); + +diff --git a/net/batman-adv/translation-table.c b/net/batman-adv/translation-table.c +index 12a2b7d21376..d21624c44665 100644 +--- a/net/batman-adv/translation-table.c ++++ b/net/batman-adv/translation-table.c +@@ -1613,6 +1613,8 @@ batadv_tt_global_orig_entry_add(struct batadv_tt_global_entry *tt_global, + { + struct batadv_tt_orig_list_entry *orig_entry; + ++ spin_lock_bh(&tt_global->list_lock); ++ + orig_entry = batadv_tt_global_orig_entry_find(tt_global, orig_node); + if (orig_entry) { + /* refresh the ttvn: the current value could be a bogus one that +@@ -1635,11 +1637,9 @@ batadv_tt_global_orig_entry_add(struct batadv_tt_global_entry *tt_global, + orig_entry->flags = flags; + kref_init(&orig_entry->refcount); + +- spin_lock_bh(&tt_global->list_lock); + kref_get(&orig_entry->refcount); + hlist_add_head_rcu(&orig_entry->list, + &tt_global->orig_list); +- spin_unlock_bh(&tt_global->list_lock); + atomic_inc(&tt_global->orig_list_count); + + sync_flags: +@@ -1647,6 +1647,8 @@ sync_flags: + out: + if (orig_entry) + batadv_tt_orig_list_entry_put(orig_entry); ++ ++ spin_unlock_bh(&tt_global->list_lock); + } + + /** +diff --git a/net/batman-adv/tvlv.c b/net/batman-adv/tvlv.c +index a637458205d1..40e69c9346d2 100644 +--- a/net/batman-adv/tvlv.c ++++ b/net/batman-adv/tvlv.c +@@ -529,15 +529,20 @@ void batadv_tvlv_handler_register(struct batadv_priv *bat_priv, + { + struct batadv_tvlv_handler *tvlv_handler; + ++ spin_lock_bh(&bat_priv->tvlv.handler_list_lock); ++ + tvlv_handler = batadv_tvlv_handler_get(bat_priv, type, version); + if (tvlv_handler) { ++ spin_unlock_bh(&bat_priv->tvlv.handler_list_lock); + batadv_tvlv_handler_put(tvlv_handler); + return; + } + + tvlv_handler = kzalloc(sizeof(*tvlv_handler), GFP_ATOMIC); +- if (!tvlv_handler) ++ if (!tvlv_handler) { ++ spin_unlock_bh(&bat_priv->tvlv.handler_list_lock); + return; ++ } + + tvlv_handler->ogm_handler = optr; + tvlv_handler->unicast_handler = uptr; +@@ -547,7 +552,6 @@ void batadv_tvlv_handler_register(struct batadv_priv *bat_priv, + kref_init(&tvlv_handler->refcount); + INIT_HLIST_NODE(&tvlv_handler->list); + +- spin_lock_bh(&bat_priv->tvlv.handler_list_lock); + kref_get(&tvlv_handler->refcount); + hlist_add_head_rcu(&tvlv_handler->list, &bat_priv->tvlv.handler_list); + spin_unlock_bh(&bat_priv->tvlv.handler_list_lock); +diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c +index e7de5f282722..effa87858b21 100644 +--- a/net/smc/af_smc.c ++++ b/net/smc/af_smc.c +@@ -612,7 +612,10 @@ static void smc_connect_work(struct work_struct *work) + smc->sk.sk_err = -rc; + + out: +- smc->sk.sk_state_change(&smc->sk); ++ if (smc->sk.sk_err) ++ smc->sk.sk_state_change(&smc->sk); ++ else ++ smc->sk.sk_write_space(&smc->sk); + kfree(smc->connect_info); + smc->connect_info = NULL; + 
release_sock(&smc->sk); +@@ -1345,7 +1348,7 @@ static __poll_t smc_poll(struct file *file, struct socket *sock, + return EPOLLNVAL; + + smc = smc_sk(sock->sk); +- if ((sk->sk_state == SMC_INIT) || smc->use_fallback) { ++ if (smc->use_fallback) { + /* delegate to CLC child sock */ + mask = smc->clcsock->ops->poll(file, smc->clcsock, wait); + sk->sk_err = smc->clcsock->sk->sk_err; +diff --git a/net/smc/smc_clc.c b/net/smc/smc_clc.c +index ae5d168653ce..086157555ac3 100644 +--- a/net/smc/smc_clc.c ++++ b/net/smc/smc_clc.c +@@ -405,14 +405,12 @@ int smc_clc_send_proposal(struct smc_sock *smc, + vec[i++].iov_len = sizeof(trl); + /* due to the few bytes needed for clc-handshake this cannot block */ + len = kernel_sendmsg(smc->clcsock, &msg, vec, i, plen); +- if (len < sizeof(pclc)) { +- if (len >= 0) { +- reason_code = -ENETUNREACH; +- smc->sk.sk_err = -reason_code; +- } else { +- smc->sk.sk_err = smc->clcsock->sk->sk_err; +- reason_code = -smc->sk.sk_err; +- } ++ if (len < 0) { ++ smc->sk.sk_err = smc->clcsock->sk->sk_err; ++ reason_code = -smc->sk.sk_err; ++ } else if (len < (int)sizeof(pclc)) { ++ reason_code = -ENETUNREACH; ++ smc->sk.sk_err = -reason_code; + } + + return reason_code; +diff --git a/tools/testing/selftests/bpf/test_maps.c b/tools/testing/selftests/bpf/test_maps.c +index 6c253343a6f9..70d18d0d39ff 100644 +--- a/tools/testing/selftests/bpf/test_maps.c ++++ b/tools/testing/selftests/bpf/test_maps.c +@@ -566,7 +566,11 @@ static void test_sockmap(int tasks, void *data) + /* Test update without programs */ + for (i = 0; i < 6; i++) { + err = bpf_map_update_elem(fd, &i, &sfd[i], BPF_ANY); +- if (err) { ++ if (i < 2 && !err) { ++ printf("Allowed update sockmap '%i:%i' not in ESTABLISHED\n", ++ i, sfd[i]); ++ goto out_sockmap; ++ } else if (i >= 2 && err) { + printf("Failed noprog update sockmap '%i:%i'\n", + i, sfd[i]); + goto out_sockmap; +@@ -727,7 +731,7 @@ static void test_sockmap(int tasks, void *data) + } + + /* Test map update elem afterwards fd lives in fd and map_fd */ +- for (i = 0; i < 6; i++) { ++ for (i = 2; i < 6; i++) { + err = bpf_map_update_elem(map_fd_rx, &i, &sfd[i], BPF_ANY); + if (err) { + printf("Failed map_fd_rx update sockmap %i '%i:%i'\n", +@@ -831,7 +835,7 @@ static void test_sockmap(int tasks, void *data) + } + + /* Delete the elems without programs */ +- for (i = 0; i < 6; i++) { ++ for (i = 2; i < 6; i++) { + err = bpf_map_delete_elem(fd, &i); + if (err) { + printf("Failed delete sockmap %i '%i:%i'\n", +diff --git a/tools/testing/selftests/net/pmtu.sh b/tools/testing/selftests/net/pmtu.sh +index 32a194e3e07a..0ab9423d009f 100755 +--- a/tools/testing/selftests/net/pmtu.sh ++++ b/tools/testing/selftests/net/pmtu.sh +@@ -178,8 +178,8 @@ setup() { + + cleanup() { + [ ${cleanup_done} -eq 1 ] && return +- ip netns del ${NS_A} 2 > /dev/null +- ip netns del ${NS_B} 2 > /dev/null ++ ip netns del ${NS_A} 2> /dev/null ++ ip netns del ${NS_B} 2> /dev/null + cleanup_done=1 + } +