From: "Mike Pagano" <mpagano@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/linux-patches:5.3 commit in: /
Date: Sun, 10 Nov 2019 16:22:12 +0000 (UTC)
Message-ID: <1573402912.0daea8e2bb22888ab05d48a2e6486bedbbb18a61.mpagano@gentoo>
commit: 0daea8e2bb22888ab05d48a2e6486bedbbb18a61
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Nov 10 16:21:52 2019 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Nov 10 16:21:52 2019 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0daea8e2
Linux patch 5.3.10
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1009_linux-5.3.10.patch | 6701 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 6705 insertions(+)
diff --git a/0000_README b/0000_README
index c1a5896..6cbaced 100644
--- a/0000_README
+++ b/0000_README
@@ -79,6 +79,10 @@ Patch: 1008_linux-5.3.9.patch
From: http://www.kernel.org
Desc: Linux 5.3.9
+Patch: 1009_linux-5.3.10.patch
+From: http://www.kernel.org
+Desc: Linux 5.3.10
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1009_linux-5.3.10.patch b/1009_linux-5.3.10.patch
new file mode 100644
index 0000000..5f59953
--- /dev/null
+++ b/1009_linux-5.3.10.patch
@@ -0,0 +1,6701 @@
+diff --git a/Makefile b/Makefile
+index ad5f5230bbbe..e2a8b4534da5 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 3
+-SUBLEVEL = 9
++SUBLEVEL = 10
+ EXTRAVERSION =
+ NAME = Bobtail Squid
+
+diff --git a/arch/arm/boot/dts/am3874-iceboard.dts b/arch/arm/boot/dts/am3874-iceboard.dts
+index 883fb85135d4..1b4b2b0500e4 100644
+--- a/arch/arm/boot/dts/am3874-iceboard.dts
++++ b/arch/arm/boot/dts/am3874-iceboard.dts
+@@ -111,13 +111,13 @@
+ reg = <0x70>;
+ #address-cells = <1>;
+ #size-cells = <0>;
++ i2c-mux-idle-disconnect;
+
+ i2c@0 {
+ /* FMC A */
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <0>;
+- i2c-mux-idle-disconnect;
+ };
+
+ i2c@1 {
+@@ -125,7 +125,6 @@
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <1>;
+- i2c-mux-idle-disconnect;
+ };
+
+ i2c@2 {
+@@ -133,7 +132,6 @@
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <2>;
+- i2c-mux-idle-disconnect;
+ };
+
+ i2c@3 {
+@@ -141,7 +139,6 @@
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <3>;
+- i2c-mux-idle-disconnect;
+ };
+
+ i2c@4 {
+@@ -149,14 +146,12 @@
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <4>;
+- i2c-mux-idle-disconnect;
+ };
+
+ i2c@5 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <5>;
+- i2c-mux-idle-disconnect;
+
+ ina230@40 { compatible = "ti,ina230"; reg = <0x40>; shunt-resistor = <5000>; };
+ ina230@41 { compatible = "ti,ina230"; reg = <0x41>; shunt-resistor = <5000>; };
+@@ -182,14 +177,12 @@
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <6>;
+- i2c-mux-idle-disconnect;
+ };
+
+ i2c@7 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <7>;
+- i2c-mux-idle-disconnect;
+
+ u41: pca9575@20 {
+ compatible = "nxp,pca9575";
+diff --git a/arch/arm/boot/dts/bcm2837-rpi-cm3.dtsi b/arch/arm/boot/dts/bcm2837-rpi-cm3.dtsi
+index 81399b2c5af9..d4f0e455612d 100644
+--- a/arch/arm/boot/dts/bcm2837-rpi-cm3.dtsi
++++ b/arch/arm/boot/dts/bcm2837-rpi-cm3.dtsi
+@@ -8,6 +8,14 @@
+ reg = <0 0x40000000>;
+ };
+
++ leds {
++ /*
++ * Since there is no upstream GPIO driver yet,
++ * remove the incomplete node.
++ */
++ /delete-node/ act;
++ };
++
+ reg_3v3: fixed-regulator {
+ compatible = "regulator-fixed";
+ regulator-name = "3V3";
+diff --git a/arch/arm/boot/dts/imx6-logicpd-som.dtsi b/arch/arm/boot/dts/imx6-logicpd-som.dtsi
+index 7ceae3573248..547fb141ec0c 100644
+--- a/arch/arm/boot/dts/imx6-logicpd-som.dtsi
++++ b/arch/arm/boot/dts/imx6-logicpd-som.dtsi
+@@ -207,6 +207,10 @@
+ vin-supply = <&sw1c_reg>;
+ };
+
++&snvs_poweroff {
++ status = "okay";
++};
++
+ &iomuxc {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_hog>;
+diff --git a/arch/arm/boot/dts/imx7s.dtsi b/arch/arm/boot/dts/imx7s.dtsi
+index c1a4fff5ceda..6323a9462afa 100644
+--- a/arch/arm/boot/dts/imx7s.dtsi
++++ b/arch/arm/boot/dts/imx7s.dtsi
+@@ -448,7 +448,7 @@
+ compatible = "fsl,imx7d-gpt", "fsl,imx6sx-gpt";
+ reg = <0x302d0000 0x10000>;
+ interrupts = <GIC_SPI 55 IRQ_TYPE_LEVEL_HIGH>;
+- clocks = <&clks IMX7D_CLK_DUMMY>,
++ clocks = <&clks IMX7D_GPT1_ROOT_CLK>,
+ <&clks IMX7D_GPT1_ROOT_CLK>;
+ clock-names = "ipg", "per";
+ };
+@@ -457,7 +457,7 @@
+ compatible = "fsl,imx7d-gpt", "fsl,imx6sx-gpt";
+ reg = <0x302e0000 0x10000>;
+ interrupts = <GIC_SPI 54 IRQ_TYPE_LEVEL_HIGH>;
+- clocks = <&clks IMX7D_CLK_DUMMY>,
++ clocks = <&clks IMX7D_GPT2_ROOT_CLK>,
+ <&clks IMX7D_GPT2_ROOT_CLK>;
+ clock-names = "ipg", "per";
+ status = "disabled";
+@@ -467,7 +467,7 @@
+ compatible = "fsl,imx7d-gpt", "fsl,imx6sx-gpt";
+ reg = <0x302f0000 0x10000>;
+ interrupts = <GIC_SPI 53 IRQ_TYPE_LEVEL_HIGH>;
+- clocks = <&clks IMX7D_CLK_DUMMY>,
++ clocks = <&clks IMX7D_GPT3_ROOT_CLK>,
+ <&clks IMX7D_GPT3_ROOT_CLK>;
+ clock-names = "ipg", "per";
+ status = "disabled";
+@@ -477,7 +477,7 @@
+ compatible = "fsl,imx7d-gpt", "fsl,imx6sx-gpt";
+ reg = <0x30300000 0x10000>;
+ interrupts = <GIC_SPI 52 IRQ_TYPE_LEVEL_HIGH>;
+- clocks = <&clks IMX7D_CLK_DUMMY>,
++ clocks = <&clks IMX7D_GPT4_ROOT_CLK>,
+ <&clks IMX7D_GPT4_ROOT_CLK>;
+ clock-names = "ipg", "per";
+ status = "disabled";
+diff --git a/arch/arm/boot/dts/logicpd-torpedo-som.dtsi b/arch/arm/boot/dts/logicpd-torpedo-som.dtsi
+index 3fdd0a72f87f..506b118e511a 100644
+--- a/arch/arm/boot/dts/logicpd-torpedo-som.dtsi
++++ b/arch/arm/boot/dts/logicpd-torpedo-som.dtsi
+@@ -192,3 +192,7 @@
+ &twl_gpio {
+ ti,use-leds;
+ };
++
++&twl_keypad {
++ status = "disabled";
++};
+diff --git a/arch/arm/boot/dts/omap4-droid4-xt894.dts b/arch/arm/boot/dts/omap4-droid4-xt894.dts
+index 4454449de00c..a40fe8d49da6 100644
+--- a/arch/arm/boot/dts/omap4-droid4-xt894.dts
++++ b/arch/arm/boot/dts/omap4-droid4-xt894.dts
+@@ -369,7 +369,7 @@
+ compatible = "ti,wl1285", "ti,wl1283";
+ reg = <2>;
+ /* gpio_100 with gpmc_wait2 pad as wakeirq */
+- interrupts-extended = <&gpio4 4 IRQ_TYPE_EDGE_RISING>,
++ interrupts-extended = <&gpio4 4 IRQ_TYPE_LEVEL_HIGH>,
+ <&omap4_pmx_core 0x4e>;
+ interrupt-names = "irq", "wakeup";
+ ref-clock-frequency = <26000000>;
+diff --git a/arch/arm/boot/dts/omap4-panda-common.dtsi b/arch/arm/boot/dts/omap4-panda-common.dtsi
+index 14be2ecb62b1..55ea8b6189af 100644
+--- a/arch/arm/boot/dts/omap4-panda-common.dtsi
++++ b/arch/arm/boot/dts/omap4-panda-common.dtsi
+@@ -474,7 +474,7 @@
+ compatible = "ti,wl1271";
+ reg = <2>;
+ /* gpio_53 with gpmc_ncs3 pad as wakeup */
+- interrupts-extended = <&gpio2 21 IRQ_TYPE_EDGE_RISING>,
++ interrupts-extended = <&gpio2 21 IRQ_TYPE_LEVEL_HIGH>,
+ <&omap4_pmx_core 0x3a>;
+ interrupt-names = "irq", "wakeup";
+ ref-clock-frequency = <38400000>;
+diff --git a/arch/arm/boot/dts/omap4-sdp.dts b/arch/arm/boot/dts/omap4-sdp.dts
+index 3c274965ff40..91480ac1f328 100644
+--- a/arch/arm/boot/dts/omap4-sdp.dts
++++ b/arch/arm/boot/dts/omap4-sdp.dts
+@@ -512,7 +512,7 @@
+ compatible = "ti,wl1281";
+ reg = <2>;
+ interrupt-parent = <&gpio1>;
+- interrupts = <21 IRQ_TYPE_EDGE_RISING>; /* gpio 53 */
++ interrupts = <21 IRQ_TYPE_LEVEL_HIGH>; /* gpio 53 */
+ ref-clock-frequency = <26000000>;
+ tcxo-clock-frequency = <26000000>;
+ };
+diff --git a/arch/arm/boot/dts/omap4-var-som-om44-wlan.dtsi b/arch/arm/boot/dts/omap4-var-som-om44-wlan.dtsi
+index 6dbbc9b3229c..d0032213101e 100644
+--- a/arch/arm/boot/dts/omap4-var-som-om44-wlan.dtsi
++++ b/arch/arm/boot/dts/omap4-var-som-om44-wlan.dtsi
+@@ -69,7 +69,7 @@
+ compatible = "ti,wl1271";
+ reg = <2>;
+ interrupt-parent = <&gpio2>;
+- interrupts = <9 IRQ_TYPE_EDGE_RISING>; /* gpio 41 */
++ interrupts = <9 IRQ_TYPE_LEVEL_HIGH>; /* gpio 41 */
+ ref-clock-frequency = <38400000>;
+ };
+ };
+diff --git a/arch/arm/boot/dts/omap5-board-common.dtsi b/arch/arm/boot/dts/omap5-board-common.dtsi
+index 7fff555ee394..68ac04641bdb 100644
+--- a/arch/arm/boot/dts/omap5-board-common.dtsi
++++ b/arch/arm/boot/dts/omap5-board-common.dtsi
+@@ -362,7 +362,7 @@
+ pinctrl-names = "default";
+ pinctrl-0 = <&wlcore_irq_pin>;
+ interrupt-parent = <&gpio1>;
+- interrupts = <14 IRQ_TYPE_EDGE_RISING>; /* gpio 14 */
++ interrupts = <14 IRQ_TYPE_LEVEL_HIGH>; /* gpio 14 */
+ ref-clock-frequency = <26000000>;
+ };
+ };
+diff --git a/arch/arm/boot/dts/vf610-zii-scu4-aib.dts b/arch/arm/boot/dts/vf610-zii-scu4-aib.dts
+index d7019e89f588..8136e0ca10d5 100644
+--- a/arch/arm/boot/dts/vf610-zii-scu4-aib.dts
++++ b/arch/arm/boot/dts/vf610-zii-scu4-aib.dts
+@@ -600,6 +600,7 @@
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <0x70>;
++ i2c-mux-idle-disconnect;
+
+ sff0_i2c: i2c@1 {
+ #address-cells = <1>;
+@@ -638,6 +639,7 @@
+ reg = <0x71>;
+ #address-cells = <1>;
+ #size-cells = <0>;
++ i2c-mux-idle-disconnect;
+
+ sff5_i2c: i2c@1 {
+ #address-cells = <1>;
+diff --git a/arch/arm/include/asm/domain.h b/arch/arm/include/asm/domain.h
+index 567dbede4785..f1d0a7807cd0 100644
+--- a/arch/arm/include/asm/domain.h
++++ b/arch/arm/include/asm/domain.h
+@@ -82,7 +82,7 @@
+ #ifndef __ASSEMBLY__
+
+ #ifdef CONFIG_CPU_CP15_MMU
+-static inline unsigned int get_domain(void)
++static __always_inline unsigned int get_domain(void)
+ {
+ unsigned int domain;
+
+@@ -94,7 +94,7 @@ static inline unsigned int get_domain(void)
+ return domain;
+ }
+
+-static inline void set_domain(unsigned val)
++static __always_inline void set_domain(unsigned int val)
+ {
+ asm volatile(
+ "mcr p15, 0, %0, c3, c0 @ set domain"
+@@ -102,12 +102,12 @@ static inline void set_domain(unsigned val)
+ isb();
+ }
+ #else
+-static inline unsigned int get_domain(void)
++static __always_inline unsigned int get_domain(void)
+ {
+ return 0;
+ }
+
+-static inline void set_domain(unsigned val)
++static __always_inline void set_domain(unsigned int val)
+ {
+ }
+ #endif
+diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
+index 303248e5b990..98c6b91be4a8 100644
+--- a/arch/arm/include/asm/uaccess.h
++++ b/arch/arm/include/asm/uaccess.h
+@@ -22,7 +22,7 @@
+ * perform such accesses (eg, via list poison values) which could then
+ * be exploited for priviledge escalation.
+ */
+-static inline unsigned int uaccess_save_and_enable(void)
++static __always_inline unsigned int uaccess_save_and_enable(void)
+ {
+ #ifdef CONFIG_CPU_SW_DOMAIN_PAN
+ unsigned int old_domain = get_domain();
+@@ -37,7 +37,7 @@ static inline unsigned int uaccess_save_and_enable(void)
+ #endif
+ }
+
+-static inline void uaccess_restore(unsigned int flags)
++static __always_inline void uaccess_restore(unsigned int flags)
+ {
+ #ifdef CONFIG_CPU_SW_DOMAIN_PAN
+ /* Restore the user access mask */
+diff --git a/arch/arm/kernel/head-common.S b/arch/arm/kernel/head-common.S
+index a7810be07da1..4a3982812a40 100644
+--- a/arch/arm/kernel/head-common.S
++++ b/arch/arm/kernel/head-common.S
+@@ -68,7 +68,7 @@ ENDPROC(__vet_atags)
+ * The following fragment of code is executed with the MMU on in MMU mode,
+ * and uses absolute addresses; this is not position independent.
+ *
+- * r0 = cp#15 control register
++ * r0 = cp#15 control register (exc_ret for M-class)
+ * r1 = machine ID
+ * r2 = atags/dtb pointer
+ * r9 = processor ID
+@@ -137,7 +137,8 @@ __mmap_switched_data:
+ #ifdef CONFIG_CPU_CP15
+ .long cr_alignment @ r3
+ #else
+- .long 0 @ r3
++M_CLASS(.long exc_ret) @ r3
++AR_CLASS(.long 0) @ r3
+ #endif
+ .size __mmap_switched_data, . - __mmap_switched_data
+
+diff --git a/arch/arm/kernel/head-nommu.S b/arch/arm/kernel/head-nommu.S
+index afa350f44dea..0fc814bbc34b 100644
+--- a/arch/arm/kernel/head-nommu.S
++++ b/arch/arm/kernel/head-nommu.S
+@@ -201,6 +201,8 @@ M_CLASS(streq r3, [r12, #PMSAv8_MAIR1])
+ bic r0, r0, #V7M_SCB_CCR_IC
+ #endif
+ str r0, [r12, V7M_SCB_CCR]
++ /* Pass exc_ret to __mmap_switched */
++ mov r0, r10
+ #endif /* CONFIG_CPU_CP15 elif CONFIG_CPU_V7M */
+ ret lr
+ ENDPROC(__after_proc_init)
+diff --git a/arch/arm/mach-davinci/dm365.c b/arch/arm/mach-davinci/dm365.c
+index 2f9ae6431bf5..cebab6af31a2 100644
+--- a/arch/arm/mach-davinci/dm365.c
++++ b/arch/arm/mach-davinci/dm365.c
+@@ -462,8 +462,8 @@ static s8 dm365_queue_priority_mapping[][2] = {
+ };
+
+ static const struct dma_slave_map dm365_edma_map[] = {
+- { "davinci-mcbsp.0", "tx", EDMA_FILTER_PARAM(0, 2) },
+- { "davinci-mcbsp.0", "rx", EDMA_FILTER_PARAM(0, 3) },
++ { "davinci-mcbsp", "tx", EDMA_FILTER_PARAM(0, 2) },
++ { "davinci-mcbsp", "rx", EDMA_FILTER_PARAM(0, 3) },
+ { "davinci_voicecodec", "tx", EDMA_FILTER_PARAM(0, 2) },
+ { "davinci_voicecodec", "rx", EDMA_FILTER_PARAM(0, 3) },
+ { "spi_davinci.2", "tx", EDMA_FILTER_PARAM(0, 10) },
+diff --git a/arch/arm/mm/alignment.c b/arch/arm/mm/alignment.c
+index 04b36436cbc0..6587432faf05 100644
+--- a/arch/arm/mm/alignment.c
++++ b/arch/arm/mm/alignment.c
+@@ -767,6 +767,36 @@ do_alignment_t32_to_handler(unsigned long *pinstr, struct pt_regs *regs,
+ return NULL;
+ }
+
++static int alignment_get_arm(struct pt_regs *regs, u32 *ip, unsigned long *inst)
++{
++ u32 instr = 0;
++ int fault;
++
++ if (user_mode(regs))
++ fault = get_user(instr, ip);
++ else
++ fault = probe_kernel_address(ip, instr);
++
++ *inst = __mem_to_opcode_arm(instr);
++
++ return fault;
++}
++
++static int alignment_get_thumb(struct pt_regs *regs, u16 *ip, u16 *inst)
++{
++ u16 instr = 0;
++ int fault;
++
++ if (user_mode(regs))
++ fault = get_user(instr, ip);
++ else
++ fault = probe_kernel_address(ip, instr);
++
++ *inst = __mem_to_opcode_thumb16(instr);
++
++ return fault;
++}
++
+ static int
+ do_alignment(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
+ {
+@@ -774,10 +804,10 @@ do_alignment(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
+ unsigned long instr = 0, instrptr;
+ int (*handler)(unsigned long addr, unsigned long instr, struct pt_regs *regs);
+ unsigned int type;
+- unsigned int fault;
+ u16 tinstr = 0;
+ int isize = 4;
+ int thumb2_32b = 0;
++ int fault;
+
+ if (interrupts_enabled(regs))
+ local_irq_enable();
+@@ -786,15 +816,14 @@ do_alignment(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
+
+ if (thumb_mode(regs)) {
+ u16 *ptr = (u16 *)(instrptr & ~1);
+- fault = probe_kernel_address(ptr, tinstr);
+- tinstr = __mem_to_opcode_thumb16(tinstr);
++
++ fault = alignment_get_thumb(regs, ptr, &tinstr);
+ if (!fault) {
+ if (cpu_architecture() >= CPU_ARCH_ARMv7 &&
+ IS_T32(tinstr)) {
+ /* Thumb-2 32-bit */
+- u16 tinst2 = 0;
+- fault = probe_kernel_address(ptr + 1, tinst2);
+- tinst2 = __mem_to_opcode_thumb16(tinst2);
++ u16 tinst2;
++ fault = alignment_get_thumb(regs, ptr + 1, &tinst2);
+ instr = __opcode_thumb32_compose(tinstr, tinst2);
+ thumb2_32b = 1;
+ } else {
+@@ -803,8 +832,7 @@ do_alignment(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
+ }
+ }
+ } else {
+- fault = probe_kernel_address((void *)instrptr, instr);
+- instr = __mem_to_opcode_arm(instr);
++ fault = alignment_get_arm(regs, (void *)instrptr, &instr);
+ }
+
+ if (fault) {
+diff --git a/arch/arm/mm/proc-v7m.S b/arch/arm/mm/proc-v7m.S
+index 1448f144e7fb..1a49d503eafc 100644
+--- a/arch/arm/mm/proc-v7m.S
++++ b/arch/arm/mm/proc-v7m.S
+@@ -132,13 +132,11 @@ __v7m_setup_cont:
+ dsb
+ mov r6, lr @ save LR
+ ldr sp, =init_thread_union + THREAD_START_SP
+- stmia sp, {r0-r3, r12}
+ cpsie i
+ svc #0
+ 1: cpsid i
+- ldr r0, =exc_ret
+- orr lr, lr, #EXC_RET_THREADMODE_PROCESSSTACK
+- str lr, [r0]
++ /* Calculate exc_ret */
++ orr r10, lr, #EXC_RET_THREADMODE_PROCESSSTACK
+ ldmia sp, {r0-r3, r12}
+ str r5, [r12, #11 * 4] @ restore the original SVC vector entry
+ mov lr, r6 @ restore LR
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-a64-pine64-plus.dts b/arch/arm64/boot/dts/allwinner/sun50i-a64-pine64-plus.dts
+index 24f1aac366d6..d5b6e8159a33 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-a64-pine64-plus.dts
++++ b/arch/arm64/boot/dts/allwinner/sun50i-a64-pine64-plus.dts
+@@ -63,3 +63,12 @@
+ reg = <1>;
+ };
+ };
++
+&reg_dc1sw {
++ /*
++ * Ethernet PHY needs 30ms to properly power up and some more
++ * to initialize. 100ms should be plenty of time to finish
++ * whole process.
++ */
++ regulator-enable-ramp-delay = <100000>;
++};
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-a64-sopine-baseboard.dts b/arch/arm64/boot/dts/allwinner/sun50i-a64-sopine-baseboard.dts
+index e6fb9683f213..25099202c52c 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-a64-sopine-baseboard.dts
++++ b/arch/arm64/boot/dts/allwinner/sun50i-a64-sopine-baseboard.dts
+@@ -159,6 +159,12 @@
+ };
+
+ &reg_dc1sw {
++ /*
++ * Ethernet PHY needs 30ms to properly power up and some more
++ * to initialize. 100ms should be plenty of time to finish
++ * whole process.
++ */
++ regulator-enable-ramp-delay = <100000>;
+ regulator-name = "vcc-phy";
+ };
+
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-a64.dtsi b/arch/arm64/boot/dts/allwinner/sun50i-a64.dtsi
+index 9cc9bdde81ac..cd92f546c483 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-a64.dtsi
++++ b/arch/arm64/boot/dts/allwinner/sun50i-a64.dtsi
+@@ -142,15 +142,6 @@
+ clock-output-names = "ext-osc32k";
+ };
+
+- pmu {
+- compatible = "arm,cortex-a53-pmu";
+- interrupts = <GIC_SPI 152 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 153 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 154 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 155 IRQ_TYPE_LEVEL_HIGH>;
+- interrupt-affinity = <&cpu0>, <&cpu1>, <&cpu2>, <&cpu3>;
+- };
+-
+ psci {
+ compatible = "arm,psci-0.2";
+ method = "smc";
+diff --git a/arch/arm64/boot/dts/broadcom/stingray/stingray-pinctrl.dtsi b/arch/arm64/boot/dts/broadcom/stingray/stingray-pinctrl.dtsi
+index 8a3a770e8f2c..56789ccf9454 100644
+--- a/arch/arm64/boot/dts/broadcom/stingray/stingray-pinctrl.dtsi
++++ b/arch/arm64/boot/dts/broadcom/stingray/stingray-pinctrl.dtsi
+@@ -42,13 +42,14 @@
+
+ pinmux: pinmux@14029c {
+ compatible = "pinctrl-single";
+- reg = <0x0014029c 0x250>;
++ reg = <0x0014029c 0x26c>;
+ #address-cells = <1>;
+ #size-cells = <1>;
+ pinctrl-single,register-width = <32>;
+ pinctrl-single,function-mask = <0xf>;
+ pinctrl-single,gpio-range = <
+- &range 0 154 MODE_GPIO
++ &range 0 91 MODE_GPIO
++ &range 95 60 MODE_GPIO
+ >;
+ range: gpio-range {
+ #pinctrl-single,gpio-range-cells = <3>;
+diff --git a/arch/arm64/boot/dts/broadcom/stingray/stingray.dtsi b/arch/arm64/boot/dts/broadcom/stingray/stingray.dtsi
+index 71e2e34400d4..0098dfdef96c 100644
+--- a/arch/arm64/boot/dts/broadcom/stingray/stingray.dtsi
++++ b/arch/arm64/boot/dts/broadcom/stingray/stingray.dtsi
+@@ -464,8 +464,7 @@
+ <&pinmux 108 16 27>,
+ <&pinmux 135 77 6>,
+ <&pinmux 141 67 4>,
+- <&pinmux 145 149 6>,
+- <&pinmux 151 91 4>;
++ <&pinmux 145 149 6>;
+ };
+
+ i2c1: i2c@e0000 {
+diff --git a/arch/arm64/boot/dts/freescale/fsl-lx2160a.dtsi b/arch/arm64/boot/dts/freescale/fsl-lx2160a.dtsi
+index e6fdba39453c..228ab83037d0 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-lx2160a.dtsi
++++ b/arch/arm64/boot/dts/freescale/fsl-lx2160a.dtsi
+@@ -33,7 +33,7 @@
+ i-cache-line-size = <64>;
+ i-cache-sets = <192>;
+ next-level-cache = <&cluster0_l2>;
+- cpu-idle-states = <&cpu_pw20>;
++ cpu-idle-states = <&cpu_pw15>;
+ };
+
+ cpu@1 {
+@@ -49,7 +49,7 @@
+ i-cache-line-size = <64>;
+ i-cache-sets = <192>;
+ next-level-cache = <&cluster0_l2>;
+- cpu-idle-states = <&cpu_pw20>;
++ cpu-idle-states = <&cpu_pw15>;
+ };
+
+ cpu@100 {
+@@ -65,7 +65,7 @@
+ i-cache-line-size = <64>;
+ i-cache-sets = <192>;
+ next-level-cache = <&cluster1_l2>;
+- cpu-idle-states = <&cpu_pw20>;
++ cpu-idle-states = <&cpu_pw15>;
+ };
+
+ cpu@101 {
+@@ -81,7 +81,7 @@
+ i-cache-line-size = <64>;
+ i-cache-sets = <192>;
+ next-level-cache = <&cluster1_l2>;
+- cpu-idle-states = <&cpu_pw20>;
++ cpu-idle-states = <&cpu_pw15>;
+ };
+
+ cpu@200 {
+@@ -97,7 +97,7 @@
+ i-cache-line-size = <64>;
+ i-cache-sets = <192>;
+ next-level-cache = <&cluster2_l2>;
+- cpu-idle-states = <&cpu_pw20>;
++ cpu-idle-states = <&cpu_pw15>;
+ };
+
+ cpu@201 {
+@@ -113,7 +113,7 @@
+ i-cache-line-size = <64>;
+ i-cache-sets = <192>;
+ next-level-cache = <&cluster2_l2>;
+- cpu-idle-states = <&cpu_pw20>;
++ cpu-idle-states = <&cpu_pw15>;
+ };
+
+ cpu@300 {
+@@ -129,7 +129,7 @@
+ i-cache-line-size = <64>;
+ i-cache-sets = <192>;
+ next-level-cache = <&cluster3_l2>;
+- cpu-idle-states = <&cpu_pw20>;
++ cpu-idle-states = <&cpu_pw15>;
+ };
+
+ cpu@301 {
+@@ -145,7 +145,7 @@
+ i-cache-line-size = <64>;
+ i-cache-sets = <192>;
+ next-level-cache = <&cluster3_l2>;
+- cpu-idle-states = <&cpu_pw20>;
++ cpu-idle-states = <&cpu_pw15>;
+ };
+
+ cpu@400 {
+@@ -161,7 +161,7 @@
+ i-cache-line-size = <64>;
+ i-cache-sets = <192>;
+ next-level-cache = <&cluster4_l2>;
+- cpu-idle-states = <&cpu_pw20>;
++ cpu-idle-states = <&cpu_pw15>;
+ };
+
+ cpu@401 {
+@@ -177,7 +177,7 @@
+ i-cache-line-size = <64>;
+ i-cache-sets = <192>;
+ next-level-cache = <&cluster4_l2>;
+- cpu-idle-states = <&cpu_pw20>;
++ cpu-idle-states = <&cpu_pw15>;
+ };
+
+ cpu@500 {
+@@ -193,7 +193,7 @@
+ i-cache-line-size = <64>;
+ i-cache-sets = <192>;
+ next-level-cache = <&cluster5_l2>;
+- cpu-idle-states = <&cpu_pw20>;
++ cpu-idle-states = <&cpu_pw15>;
+ };
+
+ cpu@501 {
+@@ -209,7 +209,7 @@
+ i-cache-line-size = <64>;
+ i-cache-sets = <192>;
+ next-level-cache = <&cluster5_l2>;
+- cpu-idle-states = <&cpu_pw20>;
++ cpu-idle-states = <&cpu_pw15>;
+ };
+
+ cpu@600 {
+@@ -225,7 +225,7 @@
+ i-cache-line-size = <64>;
+ i-cache-sets = <192>;
+ next-level-cache = <&cluster6_l2>;
+- cpu-idle-states = <&cpu_pw20>;
++ cpu-idle-states = <&cpu_pw15>;
+ };
+
+ cpu@601 {
+@@ -241,7 +241,7 @@
+ i-cache-line-size = <64>;
+ i-cache-sets = <192>;
+ next-level-cache = <&cluster6_l2>;
+- cpu-idle-states = <&cpu_pw20>;
++ cpu-idle-states = <&cpu_pw15>;
+ };
+
+ cpu@700 {
+@@ -257,7 +257,7 @@
+ i-cache-line-size = <64>;
+ i-cache-sets = <192>;
+ next-level-cache = <&cluster7_l2>;
+- cpu-idle-states = <&cpu_pw20>;
++ cpu-idle-states = <&cpu_pw15>;
+ };
+
+ cpu@701 {
+@@ -273,7 +273,7 @@
+ i-cache-line-size = <64>;
+ i-cache-sets = <192>;
+ next-level-cache = <&cluster7_l2>;
+- cpu-idle-states = <&cpu_pw20>;
++ cpu-idle-states = <&cpu_pw15>;
+ };
+
+ cluster0_l2: l2-cache0 {
+@@ -340,9 +340,9 @@
+ cache-level = <2>;
+ };
+
+- cpu_pw20: cpu-pw20 {
++ cpu_pw15: cpu-pw15 {
+ compatible = "arm,idle-state";
+- idle-state-name = "PW20";
++ idle-state-name = "PW15";
+ arm,psci-suspend-param = <0x0>;
+ entry-latency-us = <2000>;
+ exit-latency-us = <2000>;
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm.dtsi b/arch/arm64/boot/dts/freescale/imx8mm.dtsi
+index 232a7412755a..0d0a6543e5db 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm.dtsi
+@@ -650,7 +650,7 @@
+ compatible = "fsl,imx8mm-usdhc", "fsl,imx7d-usdhc";
+ reg = <0x30b40000 0x10000>;
+ interrupts = <GIC_SPI 22 IRQ_TYPE_LEVEL_HIGH>;
+- clocks = <&clk IMX8MM_CLK_DUMMY>,
++ clocks = <&clk IMX8MM_CLK_IPG_ROOT>,
+ <&clk IMX8MM_CLK_NAND_USDHC_BUS>,
+ <&clk IMX8MM_CLK_USDHC1_ROOT>;
+ clock-names = "ipg", "ahb", "per";
+@@ -666,7 +666,7 @@
+ compatible = "fsl,imx8mm-usdhc", "fsl,imx7d-usdhc";
+ reg = <0x30b50000 0x10000>;
+ interrupts = <GIC_SPI 23 IRQ_TYPE_LEVEL_HIGH>;
+- clocks = <&clk IMX8MM_CLK_DUMMY>,
++ clocks = <&clk IMX8MM_CLK_IPG_ROOT>,
+ <&clk IMX8MM_CLK_NAND_USDHC_BUS>,
+ <&clk IMX8MM_CLK_USDHC2_ROOT>;
+ clock-names = "ipg", "ahb", "per";
+@@ -680,7 +680,7 @@
+ compatible = "fsl,imx8mm-usdhc", "fsl,imx7d-usdhc";
+ reg = <0x30b60000 0x10000>;
+ interrupts = <GIC_SPI 24 IRQ_TYPE_LEVEL_HIGH>;
+- clocks = <&clk IMX8MM_CLK_DUMMY>,
++ clocks = <&clk IMX8MM_CLK_IPG_ROOT>,
+ <&clk IMX8MM_CLK_NAND_USDHC_BUS>,
+ <&clk IMX8MM_CLK_USDHC3_ROOT>;
+ clock-names = "ipg", "ahb", "per";
+diff --git a/arch/arm64/boot/dts/freescale/imx8mq-zii-ultra.dtsi b/arch/arm64/boot/dts/freescale/imx8mq-zii-ultra.dtsi
+index 7a1706f969f0..3faa652fdf20 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mq-zii-ultra.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mq-zii-ultra.dtsi
+@@ -101,8 +101,8 @@
+ regulator-min-microvolt = <900000>;
+ regulator-max-microvolt = <1000000>;
+ gpios = <&gpio3 19 GPIO_ACTIVE_HIGH>;
+- states = <1000000 0x0
+- 900000 0x1>;
++ states = <1000000 0x1
++ 900000 0x0>;
+ regulator-always-on;
+ };
+ };
+diff --git a/arch/arm64/boot/dts/freescale/imx8mq.dtsi b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
+index d1f4eb197af2..32c270c4c22b 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mq.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
+@@ -782,7 +782,7 @@
+ "fsl,imx7d-usdhc";
+ reg = <0x30b40000 0x10000>;
+ interrupts = <GIC_SPI 22 IRQ_TYPE_LEVEL_HIGH>;
+- clocks = <&clk IMX8MQ_CLK_DUMMY>,
++ clocks = <&clk IMX8MQ_CLK_IPG_ROOT>,
+ <&clk IMX8MQ_CLK_NAND_USDHC_BUS>,
+ <&clk IMX8MQ_CLK_USDHC1_ROOT>;
+ clock-names = "ipg", "ahb", "per";
+@@ -799,7 +799,7 @@
+ "fsl,imx7d-usdhc";
+ reg = <0x30b50000 0x10000>;
+ interrupts = <GIC_SPI 23 IRQ_TYPE_LEVEL_HIGH>;
+- clocks = <&clk IMX8MQ_CLK_DUMMY>,
++ clocks = <&clk IMX8MQ_CLK_IPG_ROOT>,
+ <&clk IMX8MQ_CLK_NAND_USDHC_BUS>,
+ <&clk IMX8MQ_CLK_USDHC2_ROOT>;
+ clock-names = "ipg", "ahb", "per";
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-hugsun-x99.dts b/arch/arm64/boot/dts/rockchip/rk3399-hugsun-x99.dts
+index 0d1f5f9a0de9..c133e8d64b2a 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-hugsun-x99.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3399-hugsun-x99.dts
+@@ -644,7 +644,7 @@
+ status = "okay";
+
+ u2phy0_host: host-port {
+- phy-supply = <&vcc5v0_host>;
++ phy-supply = <&vcc5v0_typec>;
+ status = "okay";
+ };
+
+@@ -712,7 +712,7 @@
+
+ &usbdrd_dwc3_0 {
+ status = "okay";
+- dr_mode = "otg";
++ dr_mode = "host";
+ };
+
+ &usbdrd3_1 {
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-rockpro64.dts b/arch/arm64/boot/dts/rockchip/rk3399-rockpro64.dts
+index eb5594062006..99d65d2fca5e 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-rockpro64.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3399-rockpro64.dts
+@@ -166,7 +166,7 @@
+ regulator-always-on;
+ regulator-boot-on;
+ regulator-min-microvolt = <800000>;
+- regulator-max-microvolt = <1400000>;
++ regulator-max-microvolt = <1700000>;
+ vin-supply = <&vcc5v0_sys>;
+ };
+ };
+@@ -240,8 +240,8 @@
+ rk808: pmic@1b {
+ compatible = "rockchip,rk808";
+ reg = <0x1b>;
+- interrupt-parent = <&gpio1>;
+- interrupts = <21 IRQ_TYPE_LEVEL_LOW>;
++ interrupt-parent = <&gpio3>;
++ interrupts = <10 IRQ_TYPE_LEVEL_LOW>;
+ #clock-cells = <1>;
+ clock-output-names = "xin32k", "rk808-clkout2";
+ pinctrl-names = "default";
+@@ -567,7 +567,7 @@
+
+ pmic {
+ pmic_int_l: pmic-int-l {
+- rockchip,pins = <1 RK_PC5 RK_FUNC_GPIO &pcfg_pull_up>;
++ rockchip,pins = <3 RK_PB2 RK_FUNC_GPIO &pcfg_pull_up>;
+ };
+
+ vsel1_gpio: vsel1-gpio {
+@@ -613,7 +613,6 @@
+
+ &sdmmc {
+ bus-width = <4>;
+- cap-mmc-highspeed;
+ cap-sd-highspeed;
+ cd-gpios = <&gpio0 7 GPIO_ACTIVE_LOW>;
+ disable-wp;
+@@ -625,8 +624,7 @@
+
+ &sdhci {
+ bus-width = <8>;
+- mmc-hs400-1_8v;
+- mmc-hs400-enhanced-strobe;
++ mmc-hs200-1_8v;
+ non-removable;
+ status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/ti/k3-am65-main.dtsi b/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
+index ca70ff73f171..38c75fb3f232 100644
+--- a/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
+@@ -42,7 +42,7 @@
+ */
+ interrupts = <GIC_PPI 9 IRQ_TYPE_LEVEL_HIGH>;
+
+- gic_its: gic-its@18200000 {
++ gic_its: gic-its@1820000 {
+ compatible = "arm,gic-v3-its";
+ reg = <0x00 0x01820000 0x00 0x10000>;
+ socionext,synquacer-pre-its = <0x1000000 0x400000>;
+diff --git a/arch/mips/bcm63xx/prom.c b/arch/mips/bcm63xx/prom.c
+index 77a836e661c9..df69eaa453a1 100644
+--- a/arch/mips/bcm63xx/prom.c
++++ b/arch/mips/bcm63xx/prom.c
+@@ -84,7 +84,7 @@ void __init prom_init(void)
+ * Here we will start up CPU1 in the background and ask it to
+ * reconfigure itself then go back to sleep.
+ */
+- memcpy((void *)0xa0000200, &bmips_smp_movevec, 0x20);
++ memcpy((void *)0xa0000200, bmips_smp_movevec, 0x20);
+ __sync();
+ set_c0_cause(C_SW0);
+ cpumask_set_cpu(1, &bmips_booted_mask);
+diff --git a/arch/mips/include/asm/bmips.h b/arch/mips/include/asm/bmips.h
+index bf6a8afd7ad2..581a6a3c66e4 100644
+--- a/arch/mips/include/asm/bmips.h
++++ b/arch/mips/include/asm/bmips.h
+@@ -75,11 +75,11 @@ static inline int register_bmips_smp_ops(void)
+ #endif
+ }
+
+-extern char bmips_reset_nmi_vec;
+-extern char bmips_reset_nmi_vec_end;
+-extern char bmips_smp_movevec;
+-extern char bmips_smp_int_vec;
+-extern char bmips_smp_int_vec_end;
++extern char bmips_reset_nmi_vec[];
++extern char bmips_reset_nmi_vec_end[];
++extern char bmips_smp_movevec[];
++extern char bmips_smp_int_vec[];
++extern char bmips_smp_int_vec_end[];
+
+ extern int bmips_smp_enabled;
+ extern int bmips_cpu_offset;
+diff --git a/arch/mips/kernel/smp-bmips.c b/arch/mips/kernel/smp-bmips.c
+index 76fae9b79f13..712c15de6ab9 100644
+--- a/arch/mips/kernel/smp-bmips.c
++++ b/arch/mips/kernel/smp-bmips.c
+@@ -464,10 +464,10 @@ static void bmips_wr_vec(unsigned long dst, char *start, char *end)
+
+ static inline void bmips_nmi_handler_setup(void)
+ {
+- bmips_wr_vec(BMIPS_NMI_RESET_VEC, &bmips_reset_nmi_vec,
+- &bmips_reset_nmi_vec_end);
+- bmips_wr_vec(BMIPS_WARM_RESTART_VEC, &bmips_smp_int_vec,
+- &bmips_smp_int_vec_end);
++ bmips_wr_vec(BMIPS_NMI_RESET_VEC, bmips_reset_nmi_vec,
++ bmips_reset_nmi_vec_end);
++ bmips_wr_vec(BMIPS_WARM_RESTART_VEC, bmips_smp_int_vec,
++ bmips_smp_int_vec_end);
+ }
+
+ struct reset_vec_info {
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index 9650777d0aaf..5f9d12ce91e5 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -351,17 +351,16 @@ static enum blk_eh_timer_return nbd_xmit_timeout(struct request *req,
+ struct nbd_device *nbd = cmd->nbd;
+ struct nbd_config *config;
+
++ if (!mutex_trylock(&cmd->lock))
++ return BLK_EH_RESET_TIMER;
++
+ if (!refcount_inc_not_zero(&nbd->config_refs)) {
+ cmd->status = BLK_STS_TIMEOUT;
++ mutex_unlock(&cmd->lock);
+ goto done;
+ }
+ config = nbd->config;
+
+- if (!mutex_trylock(&cmd->lock)) {
+- nbd_config_put(nbd);
+- return BLK_EH_RESET_TIMER;
+- }
+-
+ if (config->num_connections > 1) {
+ dev_err_ratelimited(nbd_to_dev(nbd),
+ "Connection timed out, retrying (%d/%d alive)\n",
+@@ -674,6 +673,12 @@ static struct nbd_cmd *nbd_read_stat(struct nbd_device *nbd, int index)
+ ret = -ENOENT;
+ goto out;
+ }
++ if (cmd->status != BLK_STS_OK) {
++ dev_err(disk_to_dev(nbd->disk), "Command already handled %p\n",
++ req);
++ ret = -ENOENT;
++ goto out;
++ }
+ if (test_bit(NBD_CMD_REQUEUED, &cmd->flags)) {
+ dev_err(disk_to_dev(nbd->disk), "Raced with timeout on req %p\n",
+ req);
+@@ -755,7 +760,10 @@ static bool nbd_clear_req(struct request *req, void *data, bool reserved)
+ {
+ struct nbd_cmd *cmd = blk_mq_rq_to_pdu(req);
+
++ mutex_lock(&cmd->lock);
+ cmd->status = BLK_STS_IOERR;
++ mutex_unlock(&cmd->lock);
++
+ blk_mq_complete_request(req);
+ return true;
+ }
+diff --git a/drivers/crypto/chelsio/chtls/chtls_cm.c b/drivers/crypto/chelsio/chtls/chtls_cm.c
+index 774d991d7cca..aca75237bbcf 100644
+--- a/drivers/crypto/chelsio/chtls/chtls_cm.c
++++ b/drivers/crypto/chelsio/chtls/chtls_cm.c
+@@ -1297,7 +1297,7 @@ static void make_established(struct sock *sk, u32 snd_isn, unsigned int opt)
+ tp->write_seq = snd_isn;
+ tp->snd_nxt = snd_isn;
+ tp->snd_una = snd_isn;
+- inet_sk(sk)->inet_id = tp->write_seq ^ jiffies;
++ inet_sk(sk)->inet_id = prandom_u32();
+ assign_rxopt(sk, opt);
+
+ if (tp->rcv_wnd > (RCV_BUFSIZ_M << 10))
+diff --git a/drivers/crypto/chelsio/chtls/chtls_io.c b/drivers/crypto/chelsio/chtls/chtls_io.c
+index 551bca6fef24..f382c2b23d75 100644
+--- a/drivers/crypto/chelsio/chtls/chtls_io.c
++++ b/drivers/crypto/chelsio/chtls/chtls_io.c
+@@ -1701,7 +1701,7 @@ int chtls_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
+ return peekmsg(sk, msg, len, nonblock, flags);
+
+ if (sk_can_busy_loop(sk) &&
+- skb_queue_empty(&sk->sk_receive_queue) &&
++ skb_queue_empty_lockless(&sk->sk_receive_queue) &&
+ sk->sk_state == TCP_ESTABLISHED)
+ sk_busy_loop(sk, nonblock);
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
+index 61e38e43ad1d..85b0515c0fdc 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
+@@ -140,7 +140,12 @@ int amdgpu_bo_list_create(struct amdgpu_device *adev, struct drm_file *filp,
+ return 0;
+
+ error_free:
+- while (i--) {
++ for (i = 0; i < last_entry; ++i) {
++ struct amdgpu_bo *bo = ttm_to_amdgpu_bo(array[i].tv.bo);
++
++ amdgpu_bo_unref(&bo);
++ }
++ for (i = first_userptr; i < num_entries; ++i) {
+ struct amdgpu_bo *bo = ttm_to_amdgpu_bo(array[i].tv.bo);
+
+ amdgpu_bo_unref(&bo);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+index bea6f298dfdc..0ff786dec8c4 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+@@ -421,7 +421,8 @@ static int amdgpu_bo_do_create(struct amdgpu_device *adev,
+ .interruptible = (bp->type != ttm_bo_type_kernel),
+ .no_wait_gpu = false,
+ .resv = bp->resv,
+- .flags = TTM_OPT_FLAG_ALLOW_RES_EVICT
++ .flags = bp->type != ttm_bo_type_kernel ?
++ TTM_OPT_FLAG_ALLOW_RES_EVICT : 0
+ };
+ struct amdgpu_bo *bo;
+ unsigned long page_align, size = bp->size;
+diff --git a/drivers/gpu/drm/arm/display/komeda/komeda_kms.c b/drivers/gpu/drm/arm/display/komeda/komeda_kms.c
+index 69d9e26c60c8..9e110d51dc1f 100644
+--- a/drivers/gpu/drm/arm/display/komeda/komeda_kms.c
++++ b/drivers/gpu/drm/arm/display/komeda/komeda_kms.c
+@@ -85,7 +85,8 @@ static void komeda_kms_commit_tail(struct drm_atomic_state *old_state)
+
+ drm_atomic_helper_commit_modeset_disables(dev, old_state);
+
+- drm_atomic_helper_commit_planes(dev, old_state, 0);
++ drm_atomic_helper_commit_planes(dev, old_state,
++ DRM_PLANE_COMMIT_ACTIVE_ONLY);
+
+ drm_atomic_helper_commit_modeset_enables(dev, old_state);
+
+diff --git a/drivers/i2c/busses/i2c-aspeed.c b/drivers/i2c/busses/i2c-aspeed.c
+index fa66951b05d0..7b098ff5f5dd 100644
+--- a/drivers/i2c/busses/i2c-aspeed.c
++++ b/drivers/i2c/busses/i2c-aspeed.c
+@@ -108,6 +108,12 @@
+ #define ASPEED_I2CD_S_TX_CMD BIT(2)
+ #define ASPEED_I2CD_M_TX_CMD BIT(1)
+ #define ASPEED_I2CD_M_START_CMD BIT(0)
++#define ASPEED_I2CD_MASTER_CMDS_MASK \
++ (ASPEED_I2CD_M_STOP_CMD | \
++ ASPEED_I2CD_M_S_RX_CMD_LAST | \
++ ASPEED_I2CD_M_RX_CMD | \
++ ASPEED_I2CD_M_TX_CMD | \
++ ASPEED_I2CD_M_START_CMD)
+
+ /* 0x18 : I2CD Slave Device Address Register */
+ #define ASPEED_I2CD_DEV_ADDR_MASK GENMASK(6, 0)
+@@ -336,18 +342,19 @@ static void aspeed_i2c_do_start(struct aspeed_i2c_bus *bus)
+ struct i2c_msg *msg = &bus->msgs[bus->msgs_index];
+ u8 slave_addr = i2c_8bit_addr_from_msg(msg);
+
+- bus->master_state = ASPEED_I2C_MASTER_START;
+-
+ #if IS_ENABLED(CONFIG_I2C_SLAVE)
+ /*
+ * If it's requested in the middle of a slave session, set the master
+ * state to 'pending' then H/W will continue handling this master
+ * command when the bus comes back to the idle state.
+ */
+- if (bus->slave_state != ASPEED_I2C_SLAVE_INACTIVE)
++ if (bus->slave_state != ASPEED_I2C_SLAVE_INACTIVE) {
+ bus->master_state = ASPEED_I2C_MASTER_PENDING;
++ return;
++ }
+ #endif /* CONFIG_I2C_SLAVE */
+
++ bus->master_state = ASPEED_I2C_MASTER_START;
+ bus->buf_index = 0;
+
+ if (msg->flags & I2C_M_RD) {
+@@ -422,20 +429,6 @@ static u32 aspeed_i2c_master_irq(struct aspeed_i2c_bus *bus, u32 irq_status)
+ }
+ }
+
+-#if IS_ENABLED(CONFIG_I2C_SLAVE)
+- /*
+- * A pending master command will be started by H/W when the bus comes
+- * back to idle state after completing a slave operation so change the
+- * master state from 'pending' to 'start' at here if slave is inactive.
+- */
+- if (bus->master_state == ASPEED_I2C_MASTER_PENDING) {
+- if (bus->slave_state != ASPEED_I2C_SLAVE_INACTIVE)
+- goto out_no_complete;
+-
+- bus->master_state = ASPEED_I2C_MASTER_START;
+- }
+-#endif /* CONFIG_I2C_SLAVE */
+-
+ /* Master is not currently active, irq was for someone else. */
+ if (bus->master_state == ASPEED_I2C_MASTER_INACTIVE ||
+ bus->master_state == ASPEED_I2C_MASTER_PENDING)
+@@ -462,11 +455,15 @@ static u32 aspeed_i2c_master_irq(struct aspeed_i2c_bus *bus, u32 irq_status)
+ #if IS_ENABLED(CONFIG_I2C_SLAVE)
+ /*
+ * If a peer master starts a xfer immediately after it queues a
+- * master command, change its state to 'pending' then H/W will
+- * continue the queued master xfer just after completing the
+- * slave mode session.
++ * master command, clear the queued master command and change
++ * its state to 'pending'. To simplify handling of pending
++ * cases, it uses S/W solution instead of H/W command queue
++ * handling.
+ */
+ if (unlikely(irq_status & ASPEED_I2CD_INTR_SLAVE_MATCH)) {
++ writel(readl(bus->base + ASPEED_I2C_CMD_REG) &
++ ~ASPEED_I2CD_MASTER_CMDS_MASK,
++ bus->base + ASPEED_I2C_CMD_REG);
+ bus->master_state = ASPEED_I2C_MASTER_PENDING;
+ dev_dbg(bus->dev,
+ "master goes pending due to a slave start\n");
+@@ -629,6 +626,14 @@ static irqreturn_t aspeed_i2c_bus_irq(int irq, void *dev_id)
+ irq_handled |= aspeed_i2c_master_irq(bus,
+ irq_remaining);
+ }
++
++ /*
++ * Start a pending master command at here if a slave operation is
++ * completed.
++ */
++ if (bus->master_state == ASPEED_I2C_MASTER_PENDING &&
++ bus->slave_state == ASPEED_I2C_SLAVE_INACTIVE)
++ aspeed_i2c_do_start(bus);
+ #else
+ irq_handled = aspeed_i2c_master_irq(bus, irq_remaining);
+ #endif /* CONFIG_I2C_SLAVE */
+@@ -691,6 +696,15 @@ static int aspeed_i2c_master_xfer(struct i2c_adapter *adap,
+ ASPEED_I2CD_BUS_BUSY_STS))
+ aspeed_i2c_recover_bus(bus);
+
++ /*
++ * If timed out and the state is still pending, drop the pending
++ * master command.
++ */
++ spin_lock_irqsave(&bus->lock, flags);
++ if (bus->master_state == ASPEED_I2C_MASTER_PENDING)
++ bus->master_state = ASPEED_I2C_MASTER_INACTIVE;
++ spin_unlock_irqrestore(&bus->lock, flags);
++
+ return -ETIMEDOUT;
+ }
+
+diff --git a/drivers/i2c/busses/i2c-mt65xx.c b/drivers/i2c/busses/i2c-mt65xx.c
+index 29eae1bf4f86..2152ec5f535c 100644
+--- a/drivers/i2c/busses/i2c-mt65xx.c
++++ b/drivers/i2c/busses/i2c-mt65xx.c
+@@ -875,7 +875,7 @@ static irqreturn_t mtk_i2c_irq(int irqno, void *dev_id)
+
+ static u32 mtk_i2c_functionality(struct i2c_adapter *adap)
+ {
+- if (adap->quirks->flags & I2C_AQ_NO_ZERO_LEN)
++ if (i2c_check_quirks(adap, I2C_AQ_NO_ZERO_LEN))
+ return I2C_FUNC_I2C |
+ (I2C_FUNC_SMBUS_EMUL & ~I2C_FUNC_SMBUS_QUICK);
+ else
+diff --git a/drivers/i2c/busses/i2c-stm32f7.c b/drivers/i2c/busses/i2c-stm32f7.c
+index 266d1c269b83..1fac7344ae9c 100644
+--- a/drivers/i2c/busses/i2c-stm32f7.c
++++ b/drivers/i2c/busses/i2c-stm32f7.c
+@@ -305,7 +305,7 @@ struct stm32f7_i2c_dev {
+ struct regmap *regmap;
+ };
+
+-/**
++/*
+ * All these values are coming from I2C Specification, Version 6.0, 4th of
+ * April 2014.
+ *
+@@ -1192,6 +1192,8 @@ static void stm32f7_i2c_slave_start(struct stm32f7_i2c_dev *i2c_dev)
+ STM32F7_I2C_CR1_TXIE;
+ stm32f7_i2c_set_bits(base + STM32F7_I2C_CR1, mask);
+
++ /* Write 1st data byte */
++ writel_relaxed(value, base + STM32F7_I2C_TXDR);
+ } else {
+ /* Notify i2c slave that new write transfer is starting */
+ i2c_slave_event(slave, I2C_SLAVE_WRITE_REQUESTED, &value);
+@@ -1501,7 +1503,7 @@ static irqreturn_t stm32f7_i2c_isr_error(int irq, void *data)
+ void __iomem *base = i2c_dev->base;
+ struct device *dev = i2c_dev->dev;
+ struct stm32_i2c_dma *dma = i2c_dev->dma;
+- u32 mask, status;
++ u32 status;
+
+ status = readl_relaxed(i2c_dev->base + STM32F7_I2C_ISR);
+
+@@ -1526,12 +1528,15 @@ static irqreturn_t stm32f7_i2c_isr_error(int irq, void *data)
+ f7_msg->result = -EINVAL;
+ }
+
+- /* Disable interrupts */
+- if (stm32f7_i2c_is_slave_registered(i2c_dev))
+- mask = STM32F7_I2C_XFER_IRQ_MASK;
+- else
+- mask = STM32F7_I2C_ALL_IRQ_MASK;
+- stm32f7_i2c_disable_irq(i2c_dev, mask);
++ if (!i2c_dev->slave_running) {
++ u32 mask;
++ /* Disable interrupts */
++ if (stm32f7_i2c_is_slave_registered(i2c_dev))
++ mask = STM32F7_I2C_XFER_IRQ_MASK;
++ else
++ mask = STM32F7_I2C_ALL_IRQ_MASK;
++ stm32f7_i2c_disable_irq(i2c_dev, mask);
++ }
+
+ /* Disable dma */
+ if (i2c_dev->use_dma) {
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index c3a8d732805f..868c356fbf49 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -175,6 +175,22 @@ static DEFINE_IDA(its_vpeid_ida);
+ #define gic_data_rdist_rd_base() (gic_data_rdist()->rd_base)
+ #define gic_data_rdist_vlpi_base() (gic_data_rdist_rd_base() + SZ_128K)
+
++static u16 get_its_list(struct its_vm *vm)
++{
++ struct its_node *its;
++ unsigned long its_list = 0;
++
++ list_for_each_entry(its, &its_nodes, entry) {
++ if (!its->is_v4)
++ continue;
++
++ if (vm->vlpi_count[its->list_nr])
++ __set_bit(its->list_nr, &its_list);
++ }
++
++ return (u16)its_list;
++}
++
+ static struct its_collection *dev_event_to_col(struct its_device *its_dev,
+ u32 event)
+ {
+@@ -976,17 +992,15 @@ static void its_send_vmapp(struct its_node *its,
+
+ static void its_send_vmovp(struct its_vpe *vpe)
+ {
+- struct its_cmd_desc desc;
++ struct its_cmd_desc desc = {};
+ struct its_node *its;
+ unsigned long flags;
+ int col_id = vpe->col_idx;
+
+ desc.its_vmovp_cmd.vpe = vpe;
+- desc.its_vmovp_cmd.its_list = (u16)its_list_map;
+
+ if (!its_list_map) {
+ its = list_first_entry(&its_nodes, struct its_node, entry);
+- desc.its_vmovp_cmd.seq_num = 0;
+ desc.its_vmovp_cmd.col = &its->collections[col_id];
+ its_send_single_vcommand(its, its_build_vmovp_cmd, &desc);
+ return;
+@@ -1003,6 +1017,7 @@ static void its_send_vmovp(struct its_vpe *vpe)
+ raw_spin_lock_irqsave(&vmovp_lock, flags);
+
+ desc.its_vmovp_cmd.seq_num = vmovp_seq_num++;
++ desc.its_vmovp_cmd.its_list = get_its_list(vpe->its_vm);
+
+ /* Emit VMOVPs */
+ list_for_each_entry(its, &its_nodes, entry) {
+diff --git a/drivers/irqchip/irq-sifive-plic.c b/drivers/irqchip/irq-sifive-plic.c
+index daefc52b0ec5..7d0a12fe2714 100644
+--- a/drivers/irqchip/irq-sifive-plic.c
++++ b/drivers/irqchip/irq-sifive-plic.c
+@@ -252,8 +252,8 @@ static int __init plic_init(struct device_node *node,
+ continue;
+ }
+
+- /* skip context holes */
+- if (parent.args[0] == -1)
++ /* skip contexts other than supervisor external interrupt */
++ if (parent.args[0] != IRQ_S_EXT)
+ continue;
+
+ hartid = plic_find_hart_id(parent.np);
+diff --git a/drivers/isdn/capi/capi.c b/drivers/isdn/capi/capi.c
+index c92b405b7646..ba8619524231 100644
+--- a/drivers/isdn/capi/capi.c
++++ b/drivers/isdn/capi/capi.c
+@@ -744,7 +744,7 @@ capi_poll(struct file *file, poll_table *wait)
+
+ poll_wait(file, &(cdev->recvwait), wait);
+ mask = EPOLLOUT | EPOLLWRNORM;
+- if (!skb_queue_empty(&cdev->recvqueue))
++ if (!skb_queue_empty_lockless(&cdev->recvqueue))
+ mask |= EPOLLIN | EPOLLRDNORM;
+ return mask;
+ }
+diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c
+index 907af62846ba..0721c22e2bc8 100644
+--- a/drivers/net/dsa/b53/b53_common.c
++++ b/drivers/net/dsa/b53/b53_common.c
+@@ -1808,7 +1808,6 @@ int b53_mirror_add(struct dsa_switch *ds, int port,
+ loc = B53_EG_MIR_CTL;
+
+ b53_read16(dev, B53_MGMT_PAGE, loc, &reg);
+- reg &= ~MIRROR_MASK;
+ reg |= BIT(port);
+ b53_write16(dev, B53_MGMT_PAGE, loc, reg);
+
+diff --git a/drivers/net/dsa/bcm_sf2.c b/drivers/net/dsa/bcm_sf2.c
+index 28c963a21dac..9f05bf714ba2 100644
+--- a/drivers/net/dsa/bcm_sf2.c
++++ b/drivers/net/dsa/bcm_sf2.c
+@@ -37,22 +37,11 @@ static void bcm_sf2_imp_setup(struct dsa_switch *ds, int port)
+ unsigned int i;
+ u32 reg, offset;
+
+- if (priv->type == BCM7445_DEVICE_ID)
+- offset = CORE_STS_OVERRIDE_IMP;
+- else
+- offset = CORE_STS_OVERRIDE_IMP2;
+-
+ /* Enable the port memories */
+ reg = core_readl(priv, CORE_MEM_PSM_VDD_CTRL);
+ reg &= ~P_TXQ_PSM_VDD(port);
+ core_writel(priv, reg, CORE_MEM_PSM_VDD_CTRL);
+
+- /* Enable Broadcast, Multicast, Unicast forwarding to IMP port */
+- reg = core_readl(priv, CORE_IMP_CTL);
+- reg |= (RX_BCST_EN | RX_MCST_EN | RX_UCST_EN);
+- reg &= ~(RX_DIS | TX_DIS);
+- core_writel(priv, reg, CORE_IMP_CTL);
+-
+ /* Enable forwarding */
+ core_writel(priv, SW_FWDG_EN, CORE_SWMODE);
+
+@@ -71,10 +60,27 @@ static void bcm_sf2_imp_setup(struct dsa_switch *ds, int port)
+
+ b53_brcm_hdr_setup(ds, port);
+
+- /* Force link status for IMP port */
+- reg = core_readl(priv, offset);
+- reg |= (MII_SW_OR | LINK_STS);
+- core_writel(priv, reg, offset);
++ if (port == 8) {
++ if (priv->type == BCM7445_DEVICE_ID)
++ offset = CORE_STS_OVERRIDE_IMP;
++ else
++ offset = CORE_STS_OVERRIDE_IMP2;
++
++ /* Force link status for IMP port */
++ reg = core_readl(priv, offset);
++ reg |= (MII_SW_OR | LINK_STS);
++ core_writel(priv, reg, offset);
++
++ /* Enable Broadcast, Multicast, Unicast forwarding to IMP port */
++ reg = core_readl(priv, CORE_IMP_CTL);
++ reg |= (RX_BCST_EN | RX_MCST_EN | RX_UCST_EN);
++ reg &= ~(RX_DIS | TX_DIS);
++ core_writel(priv, reg, CORE_IMP_CTL);
++ } else {
++ reg = core_readl(priv, CORE_G_PCTL_PORT(port));
++ reg &= ~(RX_DIS | TX_DIS);
++ core_writel(priv, reg, CORE_G_PCTL_PORT(port));
++ }
+ }
+
+ static void bcm_sf2_gphy_enable_set(struct dsa_switch *ds, bool enable)
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index b22196880d6d..06e2581b28ea 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -2018,6 +2018,8 @@ static void bcmgenet_link_intr_enable(struct bcmgenet_priv *priv)
+ */
+ if (priv->internal_phy) {
+ int0_enable |= UMAC_IRQ_LINK_EVENT;
++ if (GENET_IS_V1(priv) || GENET_IS_V2(priv) || GENET_IS_V3(priv))
++ int0_enable |= UMAC_IRQ_PHY_DET_R;
+ } else if (priv->ext_phy) {
+ int0_enable |= UMAC_IRQ_LINK_EVENT;
+ } else if (priv->phy_interface == PHY_INTERFACE_MODE_MOCA) {
+@@ -2616,11 +2618,14 @@ static void bcmgenet_irq_task(struct work_struct *work)
+ priv->irq0_stat = 0;
+ spin_unlock_irq(&priv->lock);
+
++ if (status & UMAC_IRQ_PHY_DET_R &&
++ priv->dev->phydev->autoneg != AUTONEG_ENABLE)
++ phy_init_hw(priv->dev->phydev);
++
+ /* Link UP/DOWN event */
+- if (status & UMAC_IRQ_LINK_EVENT) {
+- priv->dev->phydev->link = !!(status & UMAC_IRQ_LINK_UP);
++ if (status & UMAC_IRQ_LINK_EVENT)
+ phy_mac_interrupt(priv->dev->phydev);
+- }
++
+ }
+
+ /* bcmgenet_isr1: handle Rx and Tx priority queues */
+@@ -2715,7 +2720,7 @@ static irqreturn_t bcmgenet_isr0(int irq, void *dev_id)
+ }
+
+ /* all other interested interrupts handled in bottom half */
+- status &= UMAC_IRQ_LINK_EVENT;
++ status &= (UMAC_IRQ_LINK_EVENT | UMAC_IRQ_PHY_DET_R);
+ if (status) {
+ /* Save irq status for bottom-half processing. */
+ spin_lock_irqsave(&priv->lock, flags);
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c
+index a4dead4ab0ed..86b528d8364c 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c
+@@ -695,10 +695,10 @@ static void uld_init(struct adapter *adap, struct cxgb4_lld_info *lld)
+ lld->write_cmpl_support = adap->params.write_cmpl_support;
+ }
+
+-static void uld_attach(struct adapter *adap, unsigned int uld)
++static int uld_attach(struct adapter *adap, unsigned int uld)
+ {
+- void *handle;
+ struct cxgb4_lld_info lli;
++ void *handle;
+
+ uld_init(adap, &lli);
+ uld_queue_init(adap, uld, &lli);
+@@ -708,7 +708,7 @@ static void uld_attach(struct adapter *adap, unsigned int uld)
+ dev_warn(adap->pdev_dev,
+ "could not attach to the %s driver, error %ld\n",
+ adap->uld[uld].name, PTR_ERR(handle));
+- return;
++ return PTR_ERR(handle);
+ }
+
+ adap->uld[uld].handle = handle;
+@@ -716,22 +716,22 @@ static void uld_attach(struct adapter *adap, unsigned int uld)
+
+ if (adap->flags & CXGB4_FULL_INIT_DONE)
+ adap->uld[uld].state_change(handle, CXGB4_STATE_UP);
++
++ return 0;
+ }
+
+-/**
+- * cxgb4_register_uld - register an upper-layer driver
+- * @type: the ULD type
+- * @p: the ULD methods
++/* cxgb4_register_uld - register an upper-layer driver
++ * @type: the ULD type
++ * @p: the ULD methods
+ *
+- * Registers an upper-layer driver with this driver and notifies the ULD
+- * about any presently available devices that support its type. Returns
+- * %-EBUSY if a ULD of the same type is already registered.
++ * Registers an upper-layer driver with this driver and notifies the ULD
++ * about any presently available devices that support its type.
+ */
+ void cxgb4_register_uld(enum cxgb4_uld type,
+ const struct cxgb4_uld_info *p)
+ {
+- int ret = 0;
+ struct adapter *adap;
++ int ret = 0;
+
+ if (type >= CXGB4_ULD_MAX)
+ return;
+@@ -763,8 +763,12 @@ void cxgb4_register_uld(enum cxgb4_uld type,
+ if (ret)
+ goto free_irq;
+ adap->uld[type] = *p;
+- uld_attach(adap, type);
++ ret = uld_attach(adap, type);
++ if (ret)
++ goto free_txq;
+ continue;
++free_txq:
++ release_sge_txq_uld(adap, type);
+ free_irq:
+ if (adap->flags & CXGB4_FULL_INIT_DONE)
+ quiesce_rx_uld(adap, type);
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/sge.c b/drivers/net/ethernet/chelsio/cxgb4/sge.c
+index b3da81e90132..928bfea5457b 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/sge.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/sge.c
+@@ -3791,15 +3791,11 @@ int t4_sge_alloc_eth_txq(struct adapter *adap, struct sge_eth_txq *txq,
+ * write the CIDX Updates into the Status Page at the end of the
+ * TX Queue.
+ */
+- c.autoequiqe_to_viid = htonl((dbqt
+- ? FW_EQ_ETH_CMD_AUTOEQUIQE_F
+- : FW_EQ_ETH_CMD_AUTOEQUEQE_F) |
++ c.autoequiqe_to_viid = htonl(FW_EQ_ETH_CMD_AUTOEQUEQE_F |
+ FW_EQ_ETH_CMD_VIID_V(pi->viid));
+
+ c.fetchszm_to_iqid =
+- htonl(FW_EQ_ETH_CMD_HOSTFCMODE_V(dbqt
+- ? HOSTFCMODE_INGRESS_QUEUE_X
+- : HOSTFCMODE_STATUS_PAGE_X) |
++ htonl(FW_EQ_ETH_CMD_HOSTFCMODE_V(HOSTFCMODE_STATUS_PAGE_X) |
+ FW_EQ_ETH_CMD_PCIECHN_V(pi->tx_chan) |
+ FW_EQ_ETH_CMD_FETCHRO_F | FW_EQ_ETH_CMD_IQID_V(iqid));
+
+diff --git a/drivers/net/ethernet/faraday/ftgmac100.c b/drivers/net/ethernet/faraday/ftgmac100.c
+index 030fed65393e..713dc30f9dbb 100644
+--- a/drivers/net/ethernet/faraday/ftgmac100.c
++++ b/drivers/net/ethernet/faraday/ftgmac100.c
+@@ -726,6 +726,18 @@ static netdev_tx_t ftgmac100_hard_start_xmit(struct sk_buff *skb,
+ */
+ nfrags = skb_shinfo(skb)->nr_frags;
+
++ /* Setup HW checksumming */
++ csum_vlan = 0;
++ if (skb->ip_summed == CHECKSUM_PARTIAL &&
++ !ftgmac100_prep_tx_csum(skb, &csum_vlan))
++ goto drop;
++
++ /* Add VLAN tag */
++ if (skb_vlan_tag_present(skb)) {
++ csum_vlan |= FTGMAC100_TXDES1_INS_VLANTAG;
++ csum_vlan |= skb_vlan_tag_get(skb) & 0xffff;
++ }
++
+ /* Get header len */
+ len = skb_headlen(skb);
+
+@@ -752,19 +764,6 @@ static netdev_tx_t ftgmac100_hard_start_xmit(struct sk_buff *skb,
+ if (nfrags == 0)
+ f_ctl_stat |= FTGMAC100_TXDES0_LTS;
+ txdes->txdes3 = cpu_to_le32(map);
+-
+- /* Setup HW checksumming */
+- csum_vlan = 0;
+- if (skb->ip_summed == CHECKSUM_PARTIAL &&
+- !ftgmac100_prep_tx_csum(skb, &csum_vlan))
+- goto drop;
+-
+- /* Add VLAN tag */
+- if (skb_vlan_tag_present(skb)) {
+- csum_vlan |= FTGMAC100_TXDES1_INS_VLANTAG;
+- csum_vlan |= skb_vlan_tag_get(skb) & 0xffff;
+- }
+-
+ txdes->txdes1 = cpu_to_le32(csum_vlan);
+
+ /* Next descriptor */
+diff --git a/drivers/net/ethernet/hisilicon/hip04_eth.c b/drivers/net/ethernet/hisilicon/hip04_eth.c
+index c84167447abe..f51bc0255556 100644
+--- a/drivers/net/ethernet/hisilicon/hip04_eth.c
++++ b/drivers/net/ethernet/hisilicon/hip04_eth.c
+@@ -237,6 +237,7 @@ struct hip04_priv {
+ dma_addr_t rx_phys[RX_DESC_NUM];
+ unsigned int rx_head;
+ unsigned int rx_buf_size;
++ unsigned int rx_cnt_remaining;
+
+ struct device_node *phy_node;
+ struct phy_device *phy;
+@@ -575,7 +576,6 @@ static int hip04_rx_poll(struct napi_struct *napi, int budget)
+ struct hip04_priv *priv = container_of(napi, struct hip04_priv, napi);
+ struct net_device *ndev = priv->ndev;
+ struct net_device_stats *stats = &ndev->stats;
+- unsigned int cnt = hip04_recv_cnt(priv);
+ struct rx_desc *desc;
+ struct sk_buff *skb;
+ unsigned char *buf;
+@@ -588,8 +588,8 @@ static int hip04_rx_poll(struct napi_struct *napi, int budget)
+
+ /* clean up tx descriptors */
+ tx_remaining = hip04_tx_reclaim(ndev, false);
+-
+- while (cnt && !last) {
++ priv->rx_cnt_remaining += hip04_recv_cnt(priv);
++ while (priv->rx_cnt_remaining && !last) {
+ buf = priv->rx_buf[priv->rx_head];
+ skb = build_skb(buf, priv->rx_buf_size);
+ if (unlikely(!skb)) {
+@@ -635,11 +635,13 @@ refill:
+ hip04_set_recv_desc(priv, phys);
+
+ priv->rx_head = RX_NEXT(priv->rx_head);
+- if (rx >= budget)
++ if (rx >= budget) {
++ --priv->rx_cnt_remaining;
+ goto done;
++ }
+
+- if (--cnt == 0)
+- cnt = hip04_recv_cnt(priv);
++ if (--priv->rx_cnt_remaining == 0)
++ priv->rx_cnt_remaining += hip04_recv_cnt(priv);
+ }
+
+ if (!(priv->reg_inten & RCV_INT)) {
+@@ -724,6 +726,7 @@ static int hip04_mac_open(struct net_device *ndev)
+ int i;
+
+ priv->rx_head = 0;
++ priv->rx_cnt_remaining = 0;
+ priv->tx_head = 0;
+ priv->tx_tail = 0;
+ hip04_reset_ppe(priv);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.h b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+index 48c7b70fc2c4..58a7d62b38de 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+@@ -32,6 +32,8 @@
+
+ #define HNAE3_MOD_VERSION "1.0"
+
++#define HNAE3_MIN_VECTOR_NUM 2 /* first one for misc, another for IO */
++
+ /* Device IDs */
+ #define HNAE3_DEV_ID_GE 0xA220
+ #define HNAE3_DEV_ID_25GE 0xA221
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index 3fde5471e1c0..65b53ec1d9ca 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -800,6 +800,9 @@ static int hclge_query_pf_resource(struct hclge_dev *hdev)
+ hnae3_get_field(__le16_to_cpu(req->pf_intr_vector_number),
+ HCLGE_PF_VEC_NUM_M, HCLGE_PF_VEC_NUM_S);
+
++ /* nic's msix numbers is always equals to the roce's. */
++ hdev->num_nic_msi = hdev->num_roce_msi;
++
+ /* PF should have NIC vectors and Roce vectors,
+ * NIC vectors are queued before Roce vectors.
+ */
+@@ -809,6 +812,15 @@ static int hclge_query_pf_resource(struct hclge_dev *hdev)
+ hdev->num_msi =
+ hnae3_get_field(__le16_to_cpu(req->pf_intr_vector_number),
+ HCLGE_PF_VEC_NUM_M, HCLGE_PF_VEC_NUM_S);
++
++ hdev->num_nic_msi = hdev->num_msi;
++ }
++
++ if (hdev->num_nic_msi < HNAE3_MIN_VECTOR_NUM) {
++ dev_err(&hdev->pdev->dev,
++ "Just %u msi resources, not enough for pf(min:2).\n",
++ hdev->num_nic_msi);
++ return -EINVAL;
+ }
+
+ return 0;
+@@ -1394,6 +1406,10 @@ static int hclge_assign_tqp(struct hclge_vport *vport, u16 num_tqps)
+ kinfo->rss_size = min_t(u16, hdev->rss_size_max,
+ vport->alloc_tqps / hdev->tm_info.num_tc);
+
++ /* ensure one to one mapping between irq and queue at default */
++ kinfo->rss_size = min_t(u16, kinfo->rss_size,
++ (hdev->num_nic_msi - 1) / hdev->tm_info.num_tc);
++
+ return 0;
+ }
+
+@@ -2172,7 +2188,8 @@ static int hclge_init_msi(struct hclge_dev *hdev)
+ int vectors;
+ int i;
+
+- vectors = pci_alloc_irq_vectors(pdev, 1, hdev->num_msi,
++ vectors = pci_alloc_irq_vectors(pdev, HNAE3_MIN_VECTOR_NUM,
++ hdev->num_msi,
+ PCI_IRQ_MSI | PCI_IRQ_MSIX);
+ if (vectors < 0) {
+ dev_err(&pdev->dev,
+@@ -2187,6 +2204,7 @@ static int hclge_init_msi(struct hclge_dev *hdev)
+
+ hdev->num_msi = vectors;
+ hdev->num_msi_left = vectors;
++
+ hdev->base_msi_vector = pdev->irq;
+ hdev->roce_base_vector = hdev->base_msi_vector +
+ hdev->roce_base_msix_offset;
+@@ -3644,6 +3662,7 @@ static int hclge_get_vector(struct hnae3_handle *handle, u16 vector_num,
+ int alloc = 0;
+ int i, j;
+
++ vector_num = min_t(u16, hdev->num_nic_msi - 1, vector_num);
+ vector_num = min(hdev->num_msi_left, vector_num);
+
+ for (j = 0; j < vector_num; j++) {
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
+index 6a12285f4c76..6dc66d3f8408 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
+@@ -795,6 +795,7 @@ struct hclge_dev {
+ u32 base_msi_vector;
+ u16 *vector_status;
+ int *vector_irq;
++ u16 num_nic_msi; /* Num of nic vectors for this PF */
+ u16 num_roce_msi; /* Num of roce vectors for this PF */
+ int roce_base_vector;
+
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+index 3f41fa2bc414..856337705949 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+@@ -540,9 +540,16 @@ static void hclge_tm_vport_tc_info_update(struct hclge_vport *vport)
+ kinfo->rss_size = kinfo->req_rss_size;
+ } else if (kinfo->rss_size > max_rss_size ||
+ (!kinfo->req_rss_size && kinfo->rss_size < max_rss_size)) {
++ /* if the user has not set rss, rss_size should be compared with
++ * the valid msi numbers to ensure a one-to-one mapping between
++ * tqp and irq by default.
++ */
++ if (!kinfo->req_rss_size)
++ max_rss_size = min_t(u16, max_rss_size,
++ (hdev->num_nic_msi - 1) /
++ kinfo->num_tc);
++
+ /* Set to the maximum specification value (max_rss_size). */
+- dev_info(&hdev->pdev->dev, "rss changes from %d to %d\n",
+- kinfo->rss_size, max_rss_size);
+ kinfo->rss_size = max_rss_size;
+ }
+
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index a13a0e101c3b..b094d4e9ba2d 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -411,6 +411,13 @@ static int hclgevf_knic_setup(struct hclgevf_dev *hdev)
+ kinfo->tqp[i] = &hdev->htqp[i].q;
+ }
+
++ /* after initializing the max rss_size and tqps, adjust the default
++ * tqp number and rss size according to the actual vector number
++ */
++ kinfo->num_tqps = min_t(u16, hdev->num_nic_msix - 1, kinfo->num_tqps);
++ kinfo->rss_size = min_t(u16, kinfo->num_tqps / kinfo->num_tc,
++ kinfo->rss_size);
++
+ return 0;
+ }
+
+@@ -502,6 +509,7 @@ static int hclgevf_get_vector(struct hnae3_handle *handle, u16 vector_num,
+ int alloc = 0;
+ int i, j;
+
++ vector_num = min_t(u16, hdev->num_nic_msix - 1, vector_num);
+ vector_num = min(hdev->num_msi_left, vector_num);
+
+ for (j = 0; j < vector_num; j++) {
+@@ -2208,13 +2216,14 @@ static int hclgevf_init_msi(struct hclgevf_dev *hdev)
+ int vectors;
+ int i;
+
+- if (hnae3_get_bit(hdev->ae_dev->flag, HNAE3_DEV_SUPPORT_ROCE_B))
++ if (hnae3_dev_roce_supported(hdev))
+ vectors = pci_alloc_irq_vectors(pdev,
+ hdev->roce_base_msix_offset + 1,
+ hdev->num_msi,
+ PCI_IRQ_MSIX);
+ else
+- vectors = pci_alloc_irq_vectors(pdev, 1, hdev->num_msi,
++ vectors = pci_alloc_irq_vectors(pdev, HNAE3_MIN_VECTOR_NUM,
++ hdev->num_msi,
+ PCI_IRQ_MSI | PCI_IRQ_MSIX);
+
+ if (vectors < 0) {
+@@ -2230,6 +2239,7 @@ static int hclgevf_init_msi(struct hclgevf_dev *hdev)
+
+ hdev->num_msi = vectors;
+ hdev->num_msi_left = vectors;
++
+ hdev->base_msi_vector = pdev->irq;
+ hdev->roce_base_vector = pdev->irq + hdev->roce_base_msix_offset;
+
+@@ -2495,7 +2505,7 @@ static int hclgevf_query_vf_resource(struct hclgevf_dev *hdev)
+
+ req = (struct hclgevf_query_res_cmd *)desc.data;
+
+- if (hnae3_get_bit(hdev->ae_dev->flag, HNAE3_DEV_SUPPORT_ROCE_B)) {
++ if (hnae3_dev_roce_supported(hdev)) {
+ hdev->roce_base_msix_offset =
+ hnae3_get_field(__le16_to_cpu(req->msixcap_localid_ba_rocee),
+ HCLGEVF_MSIX_OFT_ROCEE_M,
+@@ -2504,6 +2514,9 @@ static int hclgevf_query_vf_resource(struct hclgevf_dev *hdev)
+ hnae3_get_field(__le16_to_cpu(req->vf_intr_vector_number),
+ HCLGEVF_VEC_NUM_M, HCLGEVF_VEC_NUM_S);
+
++ /* nic's msix number is always equal to the roce's. */
++ hdev->num_nic_msix = hdev->num_roce_msix;
++
+ /* VF should have NIC vectors and Roce vectors, NIC vectors
+ * are queued before Roce vectors. The offset is fixed to 64.
+ */
+@@ -2513,6 +2526,15 @@ static int hclgevf_query_vf_resource(struct hclgevf_dev *hdev)
+ hdev->num_msi =
+ hnae3_get_field(__le16_to_cpu(req->vf_intr_vector_number),
+ HCLGEVF_VEC_NUM_M, HCLGEVF_VEC_NUM_S);
++
++ hdev->num_nic_msix = hdev->num_msi;
++ }
++
++ if (hdev->num_nic_msix < HNAE3_MIN_VECTOR_NUM) {
++ dev_err(&hdev->pdev->dev,
++ "Just %u msi resources, not enough for vf(min:2).\n",
++ hdev->num_nic_msix);
++ return -EINVAL;
+ }
+
+ return 0;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
+index 5a9e30998a8f..3c90cff0e43a 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
+@@ -265,6 +265,7 @@ struct hclgevf_dev {
+ u16 num_msi;
+ u16 num_msi_left;
+ u16 num_msi_used;
++ u16 num_nic_msix; /* Num of nic vectors for this VF */
+ u16 num_roce_msix; /* Num of roce vectors for this VF */
+ u16 roce_base_msix_offset;
+ int roce_base_vector;
+diff --git a/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c b/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
+index 4356f3a58002..1187ef1375e2 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
++++ b/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
+@@ -471,12 +471,31 @@ void mlx4_init_quotas(struct mlx4_dev *dev)
+ priv->mfunc.master.res_tracker.res_alloc[RES_MPT].quota[pf];
+ }
+
+-static int get_max_gauranteed_vfs_counter(struct mlx4_dev *dev)
++static int
++mlx4_calc_res_counter_guaranteed(struct mlx4_dev *dev,
++ struct resource_allocator *res_alloc,
++ int vf)
+ {
+- /* reduce the sink counter */
+- return (dev->caps.max_counters - 1 -
+- (MLX4_PF_COUNTERS_PER_PORT * MLX4_MAX_PORTS))
+- / MLX4_MAX_PORTS;
++ struct mlx4_active_ports actv_ports;
++ int ports, counters_guaranteed;
++
++ /* For master, only allocate according to the number of phys ports */
++ if (vf == mlx4_master_func_num(dev))
++ return MLX4_PF_COUNTERS_PER_PORT * dev->caps.num_ports;
++
++ /* calculate real number of ports for the VF */
++ actv_ports = mlx4_get_active_ports(dev, vf);
++ ports = bitmap_weight(actv_ports.ports, dev->caps.num_ports);
++ counters_guaranteed = ports * MLX4_VF_COUNTERS_PER_PORT;
++
++ /* If we do not have enough counters for this VF, do not
++ * allocate any for it. The '-1' accounts for the sink counter.
++ */
++ if ((res_alloc->res_reserved + counters_guaranteed) >
++ (dev->caps.max_counters - 1))
++ return 0;
++
++ return counters_guaranteed;
+ }
+
+ int mlx4_init_resource_tracker(struct mlx4_dev *dev)
+@@ -484,7 +503,6 @@ int mlx4_init_resource_tracker(struct mlx4_dev *dev)
+ struct mlx4_priv *priv = mlx4_priv(dev);
+ int i, j;
+ int t;
+- int max_vfs_guarantee_counter = get_max_gauranteed_vfs_counter(dev);
+
+ priv->mfunc.master.res_tracker.slave_list =
+ kcalloc(dev->num_slaves, sizeof(struct slave_list),
+@@ -603,16 +621,8 @@ int mlx4_init_resource_tracker(struct mlx4_dev *dev)
+ break;
+ case RES_COUNTER:
+ res_alloc->quota[t] = dev->caps.max_counters;
+- if (t == mlx4_master_func_num(dev))
+- res_alloc->guaranteed[t] =
+- MLX4_PF_COUNTERS_PER_PORT *
+- MLX4_MAX_PORTS;
+- else if (t <= max_vfs_guarantee_counter)
+- res_alloc->guaranteed[t] =
+- MLX4_VF_COUNTERS_PER_PORT *
+- MLX4_MAX_PORTS;
+- else
+- res_alloc->guaranteed[t] = 0;
++ res_alloc->guaranteed[t] =
++ mlx4_calc_res_counter_guaranteed(dev, res_alloc, t);
+ break;
+ default:
+ break;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
+index a6a52806be45..310f65ef5446 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
+@@ -90,15 +90,19 @@ static int mlx5e_route_lookup_ipv4(struct mlx5e_priv *priv,
+ if (ret)
+ return ret;
+
+- if (mlx5_lag_is_multipath(mdev) && rt->rt_gw_family != AF_INET)
++ if (mlx5_lag_is_multipath(mdev) && rt->rt_gw_family != AF_INET) {
++ ip_rt_put(rt);
+ return -ENETUNREACH;
++ }
+ #else
+ return -EOPNOTSUPP;
+ #endif
+
+ ret = get_route_and_out_devs(priv, rt->dst.dev, route_dev, out_dev);
+- if (ret < 0)
++ if (ret < 0) {
++ ip_rt_put(rt);
+ return ret;
++ }
+
+ if (!(*out_ttl))
+ *out_ttl = ip4_dst_hoplimit(&rt->dst);
+@@ -142,8 +146,10 @@ static int mlx5e_route_lookup_ipv6(struct mlx5e_priv *priv,
+ *out_ttl = ip6_dst_hoplimit(dst);
+
+ ret = get_route_and_out_devs(priv, dst->dev, route_dev, out_dev);
+- if (ret < 0)
++ if (ret < 0) {
++ dst_release(dst);
+ return ret;
++ }
+ #else
+ return -EOPNOTSUPP;
+ #endif
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+index 20e628c907e5..a9bb8e2b34a7 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+@@ -1021,7 +1021,7 @@ static bool ext_link_mode_requested(const unsigned long *adver)
+ {
+ #define MLX5E_MIN_PTYS_EXT_LINK_MODE_BIT ETHTOOL_LINK_MODE_50000baseKR_Full_BIT
+ int size = __ETHTOOL_LINK_MODE_MASK_NBITS - MLX5E_MIN_PTYS_EXT_LINK_MODE_BIT;
+- __ETHTOOL_DECLARE_LINK_MODE_MASK(modes);
++ __ETHTOOL_DECLARE_LINK_MODE_MASK(modes) = {0,};
+
+ bitmap_set(modes, MLX5E_MIN_PTYS_EXT_LINK_MODE_BIT, size);
+ return bitmap_intersects(modes, adver, __ETHTOOL_LINK_MODE_MASK_NBITS);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+index ac6e586d403d..fb139f8b9acf 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+@@ -1367,8 +1367,11 @@ int mlx5e_poll_rx_cq(struct mlx5e_cq *cq, int budget)
+ if (unlikely(!test_bit(MLX5E_RQ_STATE_ENABLED, &rq->state)))
+ return 0;
+
+- if (rq->cqd.left)
++ if (rq->cqd.left) {
+ work_done += mlx5e_decompress_cqes_cont(rq, cqwq, 0, budget);
++ if (rq->cqd.left || work_done >= budget)
++ goto out;
++ }
+
+ cqe = mlx5_cqwq_get_cqe(cqwq);
+ if (!cqe) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c b/drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c
+index 840ec945ccba..bbff8d8ded76 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c
+@@ -35,6 +35,7 @@
+ #include <linux/udp.h>
+ #include <net/udp.h>
+ #include "en.h"
++#include "en/port.h"
+
+ enum {
+ MLX5E_ST_LINK_STATE,
+@@ -80,22 +81,12 @@ static int mlx5e_test_link_state(struct mlx5e_priv *priv)
+
+ static int mlx5e_test_link_speed(struct mlx5e_priv *priv)
+ {
+- u32 out[MLX5_ST_SZ_DW(ptys_reg)];
+- u32 eth_proto_oper;
+- int i;
++ u32 speed;
+
+ if (!netif_carrier_ok(priv->netdev))
+ return 1;
+
+- if (mlx5_query_port_ptys(priv->mdev, out, sizeof(out), MLX5_PTYS_EN, 1))
+- return 1;
+-
+- eth_proto_oper = MLX5_GET(ptys_reg, out, eth_proto_oper);
+- for (i = 0; i < MLX5E_LINK_MODES_NUMBER; i++) {
+- if (eth_proto_oper & MLX5E_PROT_MASK(i))
+- return 0;
+- }
+- return 1;
++ return mlx5e_port_linkspeed(priv->mdev, &speed);
+ }
+
+ struct mlx5ehdr {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+index 0323fd078271..35945cdd0a61 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+@@ -285,7 +285,6 @@ mlx5_eswitch_add_fwd_rule(struct mlx5_eswitch *esw,
+
+ mlx5_eswitch_set_rule_source_port(esw, spec, attr);
+
+- spec->match_criteria_enable |= MLX5_MATCH_MISC_PARAMETERS;
+ if (attr->outer_match_level != MLX5_MATCH_NONE)
+ spec->match_criteria_enable |= MLX5_MATCH_OUTER_HEADERS;
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c
+index 1d55a324a17e..7879e1746297 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c
+@@ -177,22 +177,32 @@ mlx5_eswitch_termtbl_actions_move(struct mlx5_flow_act *src,
+ memset(&src->vlan[1], 0, sizeof(src->vlan[1]));
+ }
+
++static bool mlx5_eswitch_offload_is_uplink_port(const struct mlx5_eswitch *esw,
++ const struct mlx5_flow_spec *spec)
++{
++ u32 port_mask, port_value;
++
++ if (MLX5_CAP_ESW_FLOWTABLE(esw->dev, flow_source))
++ return spec->flow_context.flow_source == MLX5_VPORT_UPLINK;
++
++ port_mask = MLX5_GET(fte_match_param, spec->match_criteria,
++ misc_parameters.source_port);
++ port_value = MLX5_GET(fte_match_param, spec->match_value,
++ misc_parameters.source_port);
++ return (port_mask & port_value & 0xffff) == MLX5_VPORT_UPLINK;
++}
++
+ bool
+ mlx5_eswitch_termtbl_required(struct mlx5_eswitch *esw,
+ struct mlx5_flow_act *flow_act,
+ struct mlx5_flow_spec *spec)
+ {
+- u32 port_mask = MLX5_GET(fte_match_param, spec->match_criteria,
+- misc_parameters.source_port);
+- u32 port_value = MLX5_GET(fte_match_param, spec->match_value,
+- misc_parameters.source_port);
+-
+ if (!MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, termination_table))
+ return false;
+
+ /* push vlan on RX */
+ return (flow_act->action & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH) &&
+- ((port_mask & port_value) == MLX5_VPORT_UPLINK);
++ mlx5_eswitch_offload_is_uplink_port(esw, spec);
+ }
+
+ struct mlx5_flow_handle *
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.c b/drivers/net/ethernet/mellanox/mlxsw/core.c
+index 17ceac7505e5..b94cdbd7bb18 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/core.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/core.c
+@@ -1128,7 +1128,7 @@ __mlxsw_core_bus_device_register(const struct mlxsw_bus_info *mlxsw_bus_info,
+ if (err)
+ goto err_thermal_init;
+
+- if (mlxsw_driver->params_register && !reload)
++ if (mlxsw_driver->params_register)
+ devlink_params_publish(devlink);
+
+ return 0;
+@@ -1201,7 +1201,7 @@ void mlxsw_core_bus_device_unregister(struct mlxsw_core *mlxsw_core,
+ return;
+ }
+
+- if (mlxsw_core->driver->params_unregister && !reload)
++ if (mlxsw_core->driver->params_unregister)
+ devlink_params_unpublish(devlink);
+ mlxsw_thermal_fini(mlxsw_core->thermal);
+ mlxsw_hwmon_fini(mlxsw_core->hwmon);
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index bae0074ab9aa..00c86c7dd42d 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -976,6 +976,10 @@ static int r8168dp_2_mdio_read(struct rtl8169_private *tp, int reg)
+ {
+ int value;
+
++ /* Work around issue with chip reporting wrong PHY ID */
++ if (reg == MII_PHYSID2)
++ return 0xc912;
++
+ r8168dp_2_mdio_start(tp);
+
+ value = r8169_mdio_read(tp, reg);
+diff --git a/drivers/net/phy/bcm7xxx.c b/drivers/net/phy/bcm7xxx.c
+index 8fc33867e524..af8eabe7a6d4 100644
+--- a/drivers/net/phy/bcm7xxx.c
++++ b/drivers/net/phy/bcm7xxx.c
+@@ -572,6 +572,7 @@ static int bcm7xxx_28nm_probe(struct phy_device *phydev)
+ .name = _name, \
+ /* PHY_BASIC_FEATURES */ \
+ .flags = PHY_IS_INTERNAL, \
++ .soft_reset = genphy_soft_reset, \
+ .config_init = bcm7xxx_config_init, \
+ .suspend = bcm7xxx_suspend, \
+ .resume = bcm7xxx_config_init, \
+diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c
+index a5a57ca94c1a..26a13fd3c463 100644
+--- a/drivers/net/phy/phylink.c
++++ b/drivers/net/phy/phylink.c
+@@ -87,8 +87,24 @@ struct phylink {
+ phylink_printk(KERN_WARNING, pl, fmt, ##__VA_ARGS__)
+ #define phylink_info(pl, fmt, ...) \
+ phylink_printk(KERN_INFO, pl, fmt, ##__VA_ARGS__)
++#if defined(CONFIG_DYNAMIC_DEBUG)
+ #define phylink_dbg(pl, fmt, ...) \
++do { \
++ if ((pl)->config->type == PHYLINK_NETDEV) \
++ netdev_dbg((pl)->netdev, fmt, ##__VA_ARGS__); \
++ else if ((pl)->config->type == PHYLINK_DEV) \
++ dev_dbg((pl)->dev, fmt, ##__VA_ARGS__); \
++} while (0)
++#elif defined(DEBUG)
++#define phylink_dbg(pl, fmt, ...) \
+ phylink_printk(KERN_DEBUG, pl, fmt, ##__VA_ARGS__)
++#else
++#define phylink_dbg(pl, fmt, ...) \
++({ \
++ if (0) \
++ phylink_printk(KERN_DEBUG, pl, fmt, ##__VA_ARGS__); \
++})
++#endif
+
+ /**
+ * phylink_set_port_modes() - set the port type modes in the ethtool mask
+diff --git a/drivers/net/usb/cdc_ether.c b/drivers/net/usb/cdc_ether.c
+index 32f53de5b1fe..fe630438f67b 100644
+--- a/drivers/net/usb/cdc_ether.c
++++ b/drivers/net/usb/cdc_ether.c
+@@ -787,6 +787,13 @@ static const struct usb_device_id products[] = {
+ .driver_info = 0,
+ },
+
++/* ThinkPad USB-C Dock Gen 2 (based on Realtek RTL8153) */
++{
++ USB_DEVICE_AND_INTERFACE_INFO(LENOVO_VENDOR_ID, 0xa387, USB_CLASS_COMM,
++ USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
++ .driver_info = 0,
++},
++
+ /* NVIDIA Tegra USB 3.0 Ethernet Adapters (based on Realtek RTL8153) */
+ {
+ USB_DEVICE_AND_INTERFACE_INFO(NVIDIA_VENDOR_ID, 0x09ff, USB_CLASS_COMM,
+diff --git a/drivers/net/usb/lan78xx.c b/drivers/net/usb/lan78xx.c
+index f033fee225a1..7dd6289b1ffc 100644
+--- a/drivers/net/usb/lan78xx.c
++++ b/drivers/net/usb/lan78xx.c
+@@ -1265,8 +1265,11 @@ static void lan78xx_status(struct lan78xx_net *dev, struct urb *urb)
+ netif_dbg(dev, link, dev->net, "PHY INTR: 0x%08x\n", intdata);
+ lan78xx_defer_kevent(dev, EVENT_LINK_RESET);
+
+- if (dev->domain_data.phyirq > 0)
++ if (dev->domain_data.phyirq > 0) {
++ local_irq_disable();
+ generic_handle_irq(dev->domain_data.phyirq);
++ local_irq_enable();
++ }
+ } else
+ netdev_warn(dev->net,
+ "unexpected interrupt: 0x%08x\n", intdata);
+@@ -3789,10 +3792,14 @@ static int lan78xx_probe(struct usb_interface *intf,
+ /* driver requires remote-wakeup capability during autosuspend. */
+ intf->needs_remote_wakeup = 1;
+
++ ret = lan78xx_phy_init(dev);
++ if (ret < 0)
++ goto out4;
++
+ ret = register_netdev(netdev);
+ if (ret != 0) {
+ netif_err(dev, probe, netdev, "couldn't register the device\n");
+- goto out4;
++ goto out5;
+ }
+
+ usb_set_intfdata(intf, dev);
+@@ -3805,14 +3812,10 @@ static int lan78xx_probe(struct usb_interface *intf,
+ pm_runtime_set_autosuspend_delay(&udev->dev,
+ DEFAULT_AUTOSUSPEND_DELAY);
+
+- ret = lan78xx_phy_init(dev);
+- if (ret < 0)
+- goto out5;
+-
+ return 0;
+
+ out5:
+- unregister_netdev(netdev);
++ phy_disconnect(netdev->phydev);
+ out4:
+ usb_free_urb(dev->urb_intr);
+ out3:
+diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
+index 9eedc0714422..7661d7475c2a 100644
+--- a/drivers/net/usb/r8152.c
++++ b/drivers/net/usb/r8152.c
+@@ -5402,6 +5402,7 @@ static const struct usb_device_id rtl8152_table[] = {
+ {REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x7205)},
+ {REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x720c)},
+ {REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x7214)},
++ {REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0xa387)},
+ {REALTEK_USB_DEVICE(VENDOR_ID_LINKSYS, 0x0041)},
+ {REALTEK_USB_DEVICE(VENDOR_ID_NVIDIA, 0x09ff)},
+ {REALTEK_USB_DEVICE(VENDOR_ID_TPLINK, 0x0601)},
+diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
+index 3d9bcc957f7d..e07872869266 100644
+--- a/drivers/net/vxlan.c
++++ b/drivers/net/vxlan.c
+@@ -2487,9 +2487,11 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
+ vni = tunnel_id_to_key32(info->key.tun_id);
+ ifindex = 0;
+ dst_cache = &info->dst_cache;
+- if (info->options_len &&
+- info->key.tun_flags & TUNNEL_VXLAN_OPT)
++ if (info->key.tun_flags & TUNNEL_VXLAN_OPT) {
++ if (info->options_len < sizeof(*md))
++ goto drop;
+ md = ip_tunnel_info_opts(info);
++ }
+ ttl = info->key.ttl;
+ tos = info->key.tos;
+ label = info->key.label;
+diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
+index e6b175370f2e..8b7bd4822465 100644
+--- a/drivers/of/unittest.c
++++ b/drivers/of/unittest.c
+@@ -1205,6 +1205,7 @@ static int __init unittest_data_add(void)
+ of_fdt_unflatten_tree(unittest_data, NULL, &unittest_data_node);
+ if (!unittest_data_node) {
+ pr_warn("%s: No tree to attach; not running tests\n", __func__);
++ kfree(unittest_data);
+ return -ENODATA;
+ }
+
+diff --git a/drivers/pinctrl/bcm/pinctrl-ns2-mux.c b/drivers/pinctrl/bcm/pinctrl-ns2-mux.c
+index 2bf6af7df7d9..9fabc451550e 100644
+--- a/drivers/pinctrl/bcm/pinctrl-ns2-mux.c
++++ b/drivers/pinctrl/bcm/pinctrl-ns2-mux.c
+@@ -640,8 +640,8 @@ static int ns2_pinmux_enable(struct pinctrl_dev *pctrl_dev,
+ const struct ns2_pin_function *func;
+ const struct ns2_pin_group *grp;
+
+- if (grp_select > pinctrl->num_groups ||
+- func_select > pinctrl->num_functions)
++ if (grp_select >= pinctrl->num_groups ||
++ func_select >= pinctrl->num_functions)
+ return -EINVAL;
+
+ func = &pinctrl->functions[func_select];
+diff --git a/drivers/pinctrl/intel/pinctrl-intel.c b/drivers/pinctrl/intel/pinctrl-intel.c
+index a18d6eefe672..4323796cbe11 100644
+--- a/drivers/pinctrl/intel/pinctrl-intel.c
++++ b/drivers/pinctrl/intel/pinctrl-intel.c
+@@ -96,6 +96,7 @@ struct intel_pinctrl_context {
+ * @pctldesc: Pin controller description
+ * @pctldev: Pointer to the pin controller device
+ * @chip: GPIO chip in this pin controller
++ * @irqchip: IRQ chip in this pin controller
+ * @soc: SoC/PCH specific pin configuration data
+ * @communities: All communities in this pin controller
+ * @ncommunities: Number of communities in this pin controller
+@@ -108,6 +109,7 @@ struct intel_pinctrl {
+ struct pinctrl_desc pctldesc;
+ struct pinctrl_dev *pctldev;
+ struct gpio_chip chip;
++ struct irq_chip irqchip;
+ const struct intel_pinctrl_soc_data *soc;
+ struct intel_community *communities;
+ size_t ncommunities;
+@@ -1081,16 +1083,6 @@ static irqreturn_t intel_gpio_irq(int irq, void *data)
+ return ret;
+ }
+
+-static struct irq_chip intel_gpio_irqchip = {
+- .name = "intel-gpio",
+- .irq_ack = intel_gpio_irq_ack,
+- .irq_mask = intel_gpio_irq_mask,
+- .irq_unmask = intel_gpio_irq_unmask,
+- .irq_set_type = intel_gpio_irq_type,
+- .irq_set_wake = intel_gpio_irq_wake,
+- .flags = IRQCHIP_MASK_ON_SUSPEND,
+-};
+-
+ static int intel_gpio_add_pin_ranges(struct intel_pinctrl *pctrl,
+ const struct intel_community *community)
+ {
+@@ -1140,12 +1132,22 @@ static int intel_gpio_probe(struct intel_pinctrl *pctrl, int irq)
+
+ pctrl->chip = intel_gpio_chip;
+
++ /* Setup GPIO chip */
+ pctrl->chip.ngpio = intel_gpio_ngpio(pctrl);
+ pctrl->chip.label = dev_name(pctrl->dev);
+ pctrl->chip.parent = pctrl->dev;
+ pctrl->chip.base = -1;
+ pctrl->irq = irq;
+
++ /* Setup IRQ chip */
++ pctrl->irqchip.name = dev_name(pctrl->dev);
++ pctrl->irqchip.irq_ack = intel_gpio_irq_ack;
++ pctrl->irqchip.irq_mask = intel_gpio_irq_mask;
++ pctrl->irqchip.irq_unmask = intel_gpio_irq_unmask;
++ pctrl->irqchip.irq_set_type = intel_gpio_irq_type;
++ pctrl->irqchip.irq_set_wake = intel_gpio_irq_wake;
++ pctrl->irqchip.flags = IRQCHIP_MASK_ON_SUSPEND;
++
+ ret = devm_gpiochip_add_data(pctrl->dev, &pctrl->chip, pctrl);
+ if (ret) {
+ dev_err(pctrl->dev, "failed to register gpiochip\n");
+@@ -1175,15 +1177,14 @@ static int intel_gpio_probe(struct intel_pinctrl *pctrl, int irq)
+ return ret;
+ }
+
+- ret = gpiochip_irqchip_add(&pctrl->chip, &intel_gpio_irqchip, 0,
++ ret = gpiochip_irqchip_add(&pctrl->chip, &pctrl->irqchip, 0,
+ handle_bad_irq, IRQ_TYPE_NONE);
+ if (ret) {
+ dev_err(pctrl->dev, "failed to add irqchip\n");
+ return ret;
+ }
+
+- gpiochip_set_chained_irqchip(&pctrl->chip, &intel_gpio_irqchip, irq,
+- NULL);
++ gpiochip_set_chained_irqchip(&pctrl->chip, &pctrl->irqchip, irq, NULL);
+ return 0;
+ }
+
+diff --git a/drivers/pinctrl/pinctrl-stmfx.c b/drivers/pinctrl/pinctrl-stmfx.c
+index 31b6e511670f..b7c7f24699c9 100644
+--- a/drivers/pinctrl/pinctrl-stmfx.c
++++ b/drivers/pinctrl/pinctrl-stmfx.c
+@@ -697,7 +697,7 @@ static int stmfx_pinctrl_probe(struct platform_device *pdev)
+
+ static int stmfx_pinctrl_remove(struct platform_device *pdev)
+ {
+- struct stmfx *stmfx = dev_get_platdata(&pdev->dev);
++ struct stmfx *stmfx = dev_get_drvdata(pdev->dev.parent);
+
+ return stmfx_function_disable(stmfx,
+ STMFX_FUNC_GPIO |
+diff --git a/drivers/platform/x86/pmc_atom.c b/drivers/platform/x86/pmc_atom.c
+index aa53648a2214..9aca5e7ce6d0 100644
+--- a/drivers/platform/x86/pmc_atom.c
++++ b/drivers/platform/x86/pmc_atom.c
+@@ -415,6 +415,13 @@ static const struct dmi_system_id critclk_systems[] = {
+ DMI_MATCH(DMI_BOARD_NAME, "CB6363"),
+ },
+ },
++ {
++ .ident = "SIMATIC IPC227E",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "SIEMENS AG"),
++ DMI_MATCH(DMI_PRODUCT_VERSION, "6ES7647-8B"),
++ },
++ },
+ { /*sentinel*/ }
+ };
+
+diff --git a/drivers/regulator/da9062-regulator.c b/drivers/regulator/da9062-regulator.c
+index 2ffc64622451..9b2ca472f70c 100644
+--- a/drivers/regulator/da9062-regulator.c
++++ b/drivers/regulator/da9062-regulator.c
+@@ -136,7 +136,6 @@ static int da9062_buck_set_mode(struct regulator_dev *rdev, unsigned mode)
+ static unsigned da9062_buck_get_mode(struct regulator_dev *rdev)
+ {
+ struct da9062_regulator *regl = rdev_get_drvdata(rdev);
+- struct regmap_field *field;
+ unsigned int val, mode = 0;
+ int ret;
+
+@@ -158,18 +157,7 @@ static unsigned da9062_buck_get_mode(struct regulator_dev *rdev)
+ return REGULATOR_MODE_NORMAL;
+ }
+
+- /* Detect current regulator state */
+- ret = regmap_field_read(regl->suspend, &val);
+- if (ret < 0)
+- return 0;
+-
+- /* Read regulator mode from proper register, depending on state */
+- if (val)
+- field = regl->suspend_sleep;
+- else
+- field = regl->sleep;
+-
+- ret = regmap_field_read(field, &val);
++ ret = regmap_field_read(regl->sleep, &val);
+ if (ret < 0)
+ return 0;
+
+@@ -208,21 +196,9 @@ static int da9062_ldo_set_mode(struct regulator_dev *rdev, unsigned mode)
+ static unsigned da9062_ldo_get_mode(struct regulator_dev *rdev)
+ {
+ struct da9062_regulator *regl = rdev_get_drvdata(rdev);
+- struct regmap_field *field;
+ int ret, val;
+
+- /* Detect current regulator state */
+- ret = regmap_field_read(regl->suspend, &val);
+- if (ret < 0)
+- return 0;
+-
+- /* Read regulator mode from proper register, depending on state */
+- if (val)
+- field = regl->suspend_sleep;
+- else
+- field = regl->sleep;
+-
+- ret = regmap_field_read(field, &val);
++ ret = regmap_field_read(regl->sleep, &val);
+ if (ret < 0)
+ return 0;
+
+@@ -408,10 +384,10 @@ static const struct da9062_regulator_info local_da9061_regulator_info[] = {
+ __builtin_ffs((int)DA9062AA_BUCK1_MODE_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+ __builtin_clz((DA9062AA_BUCK1_MODE_MASK)) - 1),
+- .suspend = REG_FIELD(DA9062AA_DVC_1,
+- __builtin_ffs((int)DA9062AA_VBUCK1_SEL_MASK) - 1,
++ .suspend = REG_FIELD(DA9062AA_BUCK1_CONT,
++ __builtin_ffs((int)DA9062AA_BUCK1_CONF_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+- __builtin_clz((DA9062AA_VBUCK1_SEL_MASK)) - 1),
++ __builtin_clz(DA9062AA_BUCK1_CONF_MASK) - 1),
+ },
+ {
+ .desc.id = DA9061_ID_BUCK2,
+@@ -444,10 +420,10 @@ static const struct da9062_regulator_info local_da9061_regulator_info[] = {
+ __builtin_ffs((int)DA9062AA_BUCK3_MODE_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+ __builtin_clz((DA9062AA_BUCK3_MODE_MASK)) - 1),
+- .suspend = REG_FIELD(DA9062AA_DVC_1,
+- __builtin_ffs((int)DA9062AA_VBUCK3_SEL_MASK) - 1,
++ .suspend = REG_FIELD(DA9062AA_BUCK3_CONT,
++ __builtin_ffs((int)DA9062AA_BUCK3_CONF_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+- __builtin_clz((DA9062AA_VBUCK3_SEL_MASK)) - 1),
++ __builtin_clz(DA9062AA_BUCK3_CONF_MASK) - 1),
+ },
+ {
+ .desc.id = DA9061_ID_BUCK3,
+@@ -480,10 +456,10 @@ static const struct da9062_regulator_info local_da9061_regulator_info[] = {
+ __builtin_ffs((int)DA9062AA_BUCK4_MODE_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+ __builtin_clz((DA9062AA_BUCK4_MODE_MASK)) - 1),
+- .suspend = REG_FIELD(DA9062AA_DVC_1,
+- __builtin_ffs((int)DA9062AA_VBUCK4_SEL_MASK) - 1,
++ .suspend = REG_FIELD(DA9062AA_BUCK4_CONT,
++ __builtin_ffs((int)DA9062AA_BUCK4_CONF_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+- __builtin_clz((DA9062AA_VBUCK4_SEL_MASK)) - 1),
++ __builtin_clz(DA9062AA_BUCK4_CONF_MASK) - 1),
+ },
+ {
+ .desc.id = DA9061_ID_LDO1,
+@@ -509,10 +485,10 @@ static const struct da9062_regulator_info local_da9061_regulator_info[] = {
+ sizeof(unsigned int) * 8 -
+ __builtin_clz((DA9062AA_LDO1_SL_B_MASK)) - 1),
+ .suspend_vsel_reg = DA9062AA_VLDO1_B,
+- .suspend = REG_FIELD(DA9062AA_DVC_1,
+- __builtin_ffs((int)DA9062AA_VLDO1_SEL_MASK) - 1,
++ .suspend = REG_FIELD(DA9062AA_LDO1_CONT,
++ __builtin_ffs((int)DA9062AA_LDO1_CONF_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+- __builtin_clz((DA9062AA_VLDO1_SEL_MASK)) - 1),
++ __builtin_clz(DA9062AA_LDO1_CONF_MASK) - 1),
+ .oc_event = REG_FIELD(DA9062AA_STATUS_D,
+ __builtin_ffs((int)DA9062AA_LDO1_ILIM_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+@@ -542,10 +518,10 @@ static const struct da9062_regulator_info local_da9061_regulator_info[] = {
+ sizeof(unsigned int) * 8 -
+ __builtin_clz((DA9062AA_LDO2_SL_B_MASK)) - 1),
+ .suspend_vsel_reg = DA9062AA_VLDO2_B,
+- .suspend = REG_FIELD(DA9062AA_DVC_1,
+- __builtin_ffs((int)DA9062AA_VLDO2_SEL_MASK) - 1,
++ .suspend = REG_FIELD(DA9062AA_LDO2_CONT,
++ __builtin_ffs((int)DA9062AA_LDO2_CONF_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+- __builtin_clz((DA9062AA_VLDO2_SEL_MASK)) - 1),
++ __builtin_clz(DA9062AA_LDO2_CONF_MASK) - 1),
+ .oc_event = REG_FIELD(DA9062AA_STATUS_D,
+ __builtin_ffs((int)DA9062AA_LDO2_ILIM_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+@@ -575,10 +551,10 @@ static const struct da9062_regulator_info local_da9061_regulator_info[] = {
+ sizeof(unsigned int) * 8 -
+ __builtin_clz((DA9062AA_LDO3_SL_B_MASK)) - 1),
+ .suspend_vsel_reg = DA9062AA_VLDO3_B,
+- .suspend = REG_FIELD(DA9062AA_DVC_1,
+- __builtin_ffs((int)DA9062AA_VLDO3_SEL_MASK) - 1,
++ .suspend = REG_FIELD(DA9062AA_LDO3_CONT,
++ __builtin_ffs((int)DA9062AA_LDO3_CONF_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+- __builtin_clz((DA9062AA_VLDO3_SEL_MASK)) - 1),
++ __builtin_clz(DA9062AA_LDO3_CONF_MASK) - 1),
+ .oc_event = REG_FIELD(DA9062AA_STATUS_D,
+ __builtin_ffs((int)DA9062AA_LDO3_ILIM_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+@@ -608,10 +584,10 @@ static const struct da9062_regulator_info local_da9061_regulator_info[] = {
+ sizeof(unsigned int) * 8 -
+ __builtin_clz((DA9062AA_LDO4_SL_B_MASK)) - 1),
+ .suspend_vsel_reg = DA9062AA_VLDO4_B,
+- .suspend = REG_FIELD(DA9062AA_DVC_1,
+- __builtin_ffs((int)DA9062AA_VLDO4_SEL_MASK) - 1,
++ .suspend = REG_FIELD(DA9062AA_LDO4_CONT,
++ __builtin_ffs((int)DA9062AA_LDO4_CONF_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+- __builtin_clz((DA9062AA_VLDO4_SEL_MASK)) - 1),
++ __builtin_clz(DA9062AA_LDO4_CONF_MASK) - 1),
+ .oc_event = REG_FIELD(DA9062AA_STATUS_D,
+ __builtin_ffs((int)DA9062AA_LDO4_ILIM_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+@@ -652,10 +628,10 @@ static const struct da9062_regulator_info local_da9062_regulator_info[] = {
+ __builtin_ffs((int)DA9062AA_BUCK1_MODE_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+ __builtin_clz((DA9062AA_BUCK1_MODE_MASK)) - 1),
+- .suspend = REG_FIELD(DA9062AA_DVC_1,
+- __builtin_ffs((int)DA9062AA_VBUCK1_SEL_MASK) - 1,
++ .suspend = REG_FIELD(DA9062AA_BUCK1_CONT,
++ __builtin_ffs((int)DA9062AA_BUCK1_CONF_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+- __builtin_clz((DA9062AA_VBUCK1_SEL_MASK)) - 1),
++ __builtin_clz(DA9062AA_BUCK1_CONF_MASK) - 1),
+ },
+ {
+ .desc.id = DA9062_ID_BUCK2,
+@@ -688,10 +664,10 @@ static const struct da9062_regulator_info local_da9062_regulator_info[] = {
+ __builtin_ffs((int)DA9062AA_BUCK2_MODE_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+ __builtin_clz((DA9062AA_BUCK2_MODE_MASK)) - 1),
+- .suspend = REG_FIELD(DA9062AA_DVC_1,
+- __builtin_ffs((int)DA9062AA_VBUCK2_SEL_MASK) - 1,
++ .suspend = REG_FIELD(DA9062AA_BUCK2_CONT,
++ __builtin_ffs((int)DA9062AA_BUCK2_CONF_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+- __builtin_clz((DA9062AA_VBUCK2_SEL_MASK)) - 1),
++ __builtin_clz(DA9062AA_BUCK2_CONF_MASK) - 1),
+ },
+ {
+ .desc.id = DA9062_ID_BUCK3,
+@@ -724,10 +700,10 @@ static const struct da9062_regulator_info local_da9062_regulator_info[] = {
+ __builtin_ffs((int)DA9062AA_BUCK3_MODE_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+ __builtin_clz((DA9062AA_BUCK3_MODE_MASK)) - 1),
+- .suspend = REG_FIELD(DA9062AA_DVC_1,
+- __builtin_ffs((int)DA9062AA_VBUCK3_SEL_MASK) - 1,
++ .suspend = REG_FIELD(DA9062AA_BUCK3_CONT,
++ __builtin_ffs((int)DA9062AA_BUCK3_CONF_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+- __builtin_clz((DA9062AA_VBUCK3_SEL_MASK)) - 1),
++ __builtin_clz(DA9062AA_BUCK3_CONF_MASK) - 1),
+ },
+ {
+ .desc.id = DA9062_ID_BUCK4,
+@@ -760,10 +736,10 @@ static const struct da9062_regulator_info local_da9062_regulator_info[] = {
+ __builtin_ffs((int)DA9062AA_BUCK4_MODE_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+ __builtin_clz((DA9062AA_BUCK4_MODE_MASK)) - 1),
+- .suspend = REG_FIELD(DA9062AA_DVC_1,
+- __builtin_ffs((int)DA9062AA_VBUCK4_SEL_MASK) - 1,
++ .suspend = REG_FIELD(DA9062AA_BUCK4_CONT,
++ __builtin_ffs((int)DA9062AA_BUCK4_CONF_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+- __builtin_clz((DA9062AA_VBUCK4_SEL_MASK)) - 1),
++ __builtin_clz(DA9062AA_BUCK4_CONF_MASK) - 1),
+ },
+ {
+ .desc.id = DA9062_ID_LDO1,
+@@ -789,10 +765,10 @@ static const struct da9062_regulator_info local_da9062_regulator_info[] = {
+ sizeof(unsigned int) * 8 -
+ __builtin_clz((DA9062AA_LDO1_SL_B_MASK)) - 1),
+ .suspend_vsel_reg = DA9062AA_VLDO1_B,
+- .suspend = REG_FIELD(DA9062AA_DVC_1,
+- __builtin_ffs((int)DA9062AA_VLDO1_SEL_MASK) - 1,
++ .suspend = REG_FIELD(DA9062AA_LDO1_CONT,
++ __builtin_ffs((int)DA9062AA_LDO1_CONF_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+- __builtin_clz((DA9062AA_VLDO1_SEL_MASK)) - 1),
++ __builtin_clz(DA9062AA_LDO1_CONF_MASK) - 1),
+ .oc_event = REG_FIELD(DA9062AA_STATUS_D,
+ __builtin_ffs((int)DA9062AA_LDO1_ILIM_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+@@ -822,10 +798,10 @@ static const struct da9062_regulator_info local_da9062_regulator_info[] = {
+ sizeof(unsigned int) * 8 -
+ __builtin_clz((DA9062AA_LDO2_SL_B_MASK)) - 1),
+ .suspend_vsel_reg = DA9062AA_VLDO2_B,
+- .suspend = REG_FIELD(DA9062AA_DVC_1,
+- __builtin_ffs((int)DA9062AA_VLDO2_SEL_MASK) - 1,
++ .suspend = REG_FIELD(DA9062AA_LDO2_CONT,
++ __builtin_ffs((int)DA9062AA_LDO2_CONF_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+- __builtin_clz((DA9062AA_VLDO2_SEL_MASK)) - 1),
++ __builtin_clz(DA9062AA_LDO2_CONF_MASK) - 1),
+ .oc_event = REG_FIELD(DA9062AA_STATUS_D,
+ __builtin_ffs((int)DA9062AA_LDO2_ILIM_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+@@ -855,10 +831,10 @@ static const struct da9062_regulator_info local_da9062_regulator_info[] = {
+ sizeof(unsigned int) * 8 -
+ __builtin_clz((DA9062AA_LDO3_SL_B_MASK)) - 1),
+ .suspend_vsel_reg = DA9062AA_VLDO3_B,
+- .suspend = REG_FIELD(DA9062AA_DVC_1,
+- __builtin_ffs((int)DA9062AA_VLDO3_SEL_MASK) - 1,
++ .suspend = REG_FIELD(DA9062AA_LDO3_CONT,
++ __builtin_ffs((int)DA9062AA_LDO3_CONF_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+- __builtin_clz((DA9062AA_VLDO3_SEL_MASK)) - 1),
++ __builtin_clz(DA9062AA_LDO3_CONF_MASK) - 1),
+ .oc_event = REG_FIELD(DA9062AA_STATUS_D,
+ __builtin_ffs((int)DA9062AA_LDO3_ILIM_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+@@ -888,10 +864,10 @@ static const struct da9062_regulator_info local_da9062_regulator_info[] = {
+ sizeof(unsigned int) * 8 -
+ __builtin_clz((DA9062AA_LDO4_SL_B_MASK)) - 1),
+ .suspend_vsel_reg = DA9062AA_VLDO4_B,
+- .suspend = REG_FIELD(DA9062AA_DVC_1,
+- __builtin_ffs((int)DA9062AA_VLDO4_SEL_MASK) - 1,
++ .suspend = REG_FIELD(DA9062AA_LDO4_CONT,
++ __builtin_ffs((int)DA9062AA_LDO4_CONF_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+- __builtin_clz((DA9062AA_VLDO4_SEL_MASK)) - 1),
++ __builtin_clz(DA9062AA_LDO4_CONF_MASK) - 1),
+ .oc_event = REG_FIELD(DA9062AA_STATUS_D,
+ __builtin_ffs((int)DA9062AA_LDO4_ILIM_MASK) - 1,
+ sizeof(unsigned int) * 8 -
+diff --git a/drivers/regulator/of_regulator.c b/drivers/regulator/of_regulator.c
+index 9112faa6a9a0..38dd06fbab38 100644
+--- a/drivers/regulator/of_regulator.c
++++ b/drivers/regulator/of_regulator.c
+@@ -231,12 +231,12 @@ static int of_get_regulation_constraints(struct device *dev,
+ "regulator-off-in-suspend"))
+ suspend_state->enabled = DISABLE_IN_SUSPEND;
+
+- if (!of_property_read_u32(np, "regulator-suspend-min-microvolt",
+- &pval))
++ if (!of_property_read_u32(suspend_np,
++ "regulator-suspend-min-microvolt", &pval))
+ suspend_state->min_uV = pval;
+
+- if (!of_property_read_u32(np, "regulator-suspend-max-microvolt",
+- &pval))
++ if (!of_property_read_u32(suspend_np,
++ "regulator-suspend-max-microvolt", &pval))
+ suspend_state->max_uV = pval;
+
+ if (!of_property_read_u32(suspend_np,
+diff --git a/drivers/regulator/pfuze100-regulator.c b/drivers/regulator/pfuze100-regulator.c
+index df5df1c495ad..689537927f6f 100644
+--- a/drivers/regulator/pfuze100-regulator.c
++++ b/drivers/regulator/pfuze100-regulator.c
+@@ -788,7 +788,13 @@ static int pfuze100_regulator_probe(struct i2c_client *client,
+
+ /* SW2~SW4 high bit check and modify the voltage value table */
+ if (i >= sw_check_start && i <= sw_check_end) {
+- regmap_read(pfuze_chip->regmap, desc->vsel_reg, &val);
++ ret = regmap_read(pfuze_chip->regmap,
++ desc->vsel_reg, &val);
++ if (ret) {
++ dev_err(&client->dev, "Fails to read from the register.\n");
++ return ret;
++ }
++
+ if (val & sw_hi) {
+ if (pfuze_chip->chip_id == PFUZE3000 ||
+ pfuze_chip->chip_id == PFUZE3001) {
+diff --git a/drivers/regulator/ti-abb-regulator.c b/drivers/regulator/ti-abb-regulator.c
+index cced1ffb896c..89b9314d64c9 100644
+--- a/drivers/regulator/ti-abb-regulator.c
++++ b/drivers/regulator/ti-abb-regulator.c
+@@ -173,19 +173,14 @@ static int ti_abb_wait_txdone(struct device *dev, struct ti_abb *abb)
+ while (timeout++ <= abb->settling_time) {
+ status = ti_abb_check_txdone(abb);
+ if (status)
+- break;
++ return 0;
+
+ udelay(1);
+ }
+
+- if (timeout > abb->settling_time) {
+- dev_warn_ratelimited(dev,
+- "%s:TRANXDONE timeout(%duS) int=0x%08x\n",
+- __func__, timeout, readl(abb->int_base));
+- return -ETIMEDOUT;
+- }
+-
+- return 0;
++ dev_warn_ratelimited(dev, "%s:TRANXDONE timeout(%duS) int=0x%08x\n",
++ __func__, timeout, readl(abb->int_base));
++ return -ETIMEDOUT;
+ }
+
+ /**
+@@ -205,19 +200,14 @@ static int ti_abb_clear_all_txdone(struct device *dev, const struct ti_abb *abb)
+
+ status = ti_abb_check_txdone(abb);
+ if (!status)
+- break;
++ return 0;
+
+ udelay(1);
+ }
+
+- if (timeout > abb->settling_time) {
+- dev_warn_ratelimited(dev,
+- "%s:TRANXDONE timeout(%duS) int=0x%08x\n",
+- __func__, timeout, readl(abb->int_base));
+- return -ETIMEDOUT;
+- }
+-
+- return 0;
++ dev_warn_ratelimited(dev, "%s:TRANXDONE timeout(%duS) int=0x%08x\n",
++ __func__, timeout, readl(abb->int_base));
++ return -ETIMEDOUT;
+ }
+
+ /**
+diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig
+index 1b92f3c19ff3..90cf4691b8c3 100644
+--- a/drivers/scsi/Kconfig
++++ b/drivers/scsi/Kconfig
+@@ -898,7 +898,7 @@ config SCSI_SNI_53C710
+
+ config 53C700_LE_ON_BE
+ bool
+- depends on SCSI_LASI700
++ depends on SCSI_LASI700 || SCSI_SNI_53C710
+ default y
+
+ config SCSI_STEX
+diff --git a/drivers/scsi/device_handler/scsi_dh_alua.c b/drivers/scsi/device_handler/scsi_dh_alua.c
+index 4971104b1817..f32da0ca529e 100644
+--- a/drivers/scsi/device_handler/scsi_dh_alua.c
++++ b/drivers/scsi/device_handler/scsi_dh_alua.c
+@@ -512,6 +512,7 @@ static int alua_rtpg(struct scsi_device *sdev, struct alua_port_group *pg)
+ unsigned int tpg_desc_tbl_off;
+ unsigned char orig_transition_tmo;
+ unsigned long flags;
++ bool transitioning_sense = false;
+
+ if (!pg->expiry) {
+ unsigned long transition_tmo = ALUA_FAILOVER_TIMEOUT * HZ;
+@@ -572,13 +573,19 @@ static int alua_rtpg(struct scsi_device *sdev, struct alua_port_group *pg)
+ goto retry;
+ }
+ /*
+- * Retry on ALUA state transition or if any
+- * UNIT ATTENTION occurred.
++ * If the array returns the 'ALUA state transition' sense
++ * code here, it cannot return RTPG data during the transition,
++ * so set the state to 'transitioning' directly.
+ */
+ if (sense_hdr.sense_key == NOT_READY &&
+- sense_hdr.asc == 0x04 && sense_hdr.ascq == 0x0a)
+- err = SCSI_DH_RETRY;
+- else if (sense_hdr.sense_key == UNIT_ATTENTION)
++ sense_hdr.asc == 0x04 && sense_hdr.ascq == 0x0a) {
++ transitioning_sense = true;
++ goto skip_rtpg;
++ }
++ /*
++ * Retry if any other UNIT ATTENTION occurred.
++ */
++ if (sense_hdr.sense_key == UNIT_ATTENTION)
+ err = SCSI_DH_RETRY;
+ if (err == SCSI_DH_RETRY &&
+ pg->expiry != 0 && time_before(jiffies, pg->expiry)) {
+@@ -666,7 +673,11 @@ static int alua_rtpg(struct scsi_device *sdev, struct alua_port_group *pg)
+ off = 8 + (desc[7] * 4);
+ }
+
++ skip_rtpg:
+ spin_lock_irqsave(&pg->lock, flags);
++ if (transitioning_sense)
++ pg->state = SCSI_ACCESS_STATE_TRANSITIONING;
++
+ sdev_printk(KERN_INFO, sdev,
+ "%s: port group %02x state %c %s supports %c%c%c%c%c%c%c\n",
+ ALUA_DH_NAME, pg->group_id, print_alua_state(pg->state),
+diff --git a/drivers/scsi/hpsa.c b/drivers/scsi/hpsa.c
+index 1bb6aada93fa..a4519710b3fc 100644
+--- a/drivers/scsi/hpsa.c
++++ b/drivers/scsi/hpsa.c
+@@ -5478,6 +5478,8 @@ static int hpsa_ciss_submit(struct ctlr_info *h,
+ return SCSI_MLQUEUE_HOST_BUSY;
+ }
+
++ c->device = dev;
++
+ enqueue_cmd_and_start_io(h, c);
+ /* the cmd'll come back via intr handler in complete_scsi_command() */
+ return 0;
+@@ -5549,6 +5551,7 @@ static int hpsa_ioaccel_submit(struct ctlr_info *h,
+ hpsa_cmd_init(h, c->cmdindex, c);
+ c->cmd_type = CMD_SCSI;
+ c->scsi_cmd = cmd;
++ c->device = dev;
+ rc = hpsa_scsi_ioaccel_raid_map(h, c);
+ if (rc < 0) /* scsi_dma_map failed. */
+ rc = SCSI_MLQUEUE_HOST_BUSY;
+@@ -5556,6 +5559,7 @@ static int hpsa_ioaccel_submit(struct ctlr_info *h,
+ hpsa_cmd_init(h, c->cmdindex, c);
+ c->cmd_type = CMD_SCSI;
+ c->scsi_cmd = cmd;
++ c->device = dev;
+ rc = hpsa_scsi_ioaccel_direct_map(h, c);
+ if (rc < 0) /* scsi_dma_map failed. */
+ rc = SCSI_MLQUEUE_HOST_BUSY;
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index 2835afbd2edc..04cf6986eb8e 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -3233,6 +3233,10 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
+ req->req_q_in, req->req_q_out, rsp->rsp_q_in, rsp->rsp_q_out);
+
+ ha->wq = alloc_workqueue("qla2xxx_wq", 0, 0);
++ if (unlikely(!ha->wq)) {
++ ret = -ENOMEM;
++ goto probe_failed;
++ }
+
+ if (ha->isp_ops->initialize_adapter(base_vha)) {
+ ql_log(ql_log_fatal, base_vha, 0x00d6,
+diff --git a/drivers/scsi/sni_53c710.c b/drivers/scsi/sni_53c710.c
+index aef4881d8e21..a85d52b5dc32 100644
+--- a/drivers/scsi/sni_53c710.c
++++ b/drivers/scsi/sni_53c710.c
+@@ -66,10 +66,8 @@ static int snirm710_probe(struct platform_device *dev)
+
+ base = res->start;
+ hostdata = kzalloc(sizeof(*hostdata), GFP_KERNEL);
+- if (!hostdata) {
+- dev_printk(KERN_ERR, dev, "Failed to allocate host data\n");
++ if (!hostdata)
+ return -ENOMEM;
+- }
+
+ hostdata->dev = &dev->dev;
+ dma_set_mask(&dev->dev, DMA_BIT_MASK(32));
+diff --git a/drivers/target/target_core_device.c b/drivers/target/target_core_device.c
+index 04bf2acd3800..2d19f0e332b0 100644
+--- a/drivers/target/target_core_device.c
++++ b/drivers/target/target_core_device.c
+@@ -1074,27 +1074,6 @@ passthrough_parse_cdb(struct se_cmd *cmd,
+ struct se_device *dev = cmd->se_dev;
+ unsigned int size;
+
+- /*
+- * Clear a lun set in the cdb if the initiator talking to use spoke
+- * and old standards version, as we can't assume the underlying device
+- * won't choke up on it.
+- */
+- switch (cdb[0]) {
+- case READ_10: /* SBC - RDProtect */
+- case READ_12: /* SBC - RDProtect */
+- case READ_16: /* SBC - RDProtect */
+- case SEND_DIAGNOSTIC: /* SPC - SELF-TEST Code */
+- case VERIFY: /* SBC - VRProtect */
+- case VERIFY_16: /* SBC - VRProtect */
+- case WRITE_VERIFY: /* SBC - VRProtect */
+- case WRITE_VERIFY_12: /* SBC - VRProtect */
+- case MAINTENANCE_IN: /* SPC - Parameter Data Format for SA RTPG */
+- break;
+- default:
+- cdb[1] &= 0x1f; /* clear logical unit number */
+- break;
+- }
+-
+ /*
+ * For REPORT LUNS we always need to emulate the response, for everything
+ * else, pass it up.
+diff --git a/drivers/tty/serial/8250/8250_men_mcb.c b/drivers/tty/serial/8250/8250_men_mcb.c
+index 02c5aff58a74..8df89e9cd254 100644
+--- a/drivers/tty/serial/8250/8250_men_mcb.c
++++ b/drivers/tty/serial/8250/8250_men_mcb.c
+@@ -72,8 +72,8 @@ static int serial_8250_men_mcb_probe(struct mcb_device *mdev,
+ {
+ struct serial_8250_men_mcb_data *data;
+ struct resource *mem;
+- unsigned int num_ports;
+- unsigned int i;
++ int num_ports;
++ int i;
+ void __iomem *membase;
+
+ mem = mcb_get_resource(mdev, IORESOURCE_MEM);
+@@ -88,7 +88,7 @@ static int serial_8250_men_mcb_probe(struct mcb_device *mdev,
+ dev_dbg(&mdev->dev, "found a 16z%03u with %u ports\n",
+ mdev->id, num_ports);
+
+- if (num_ports == 0 || num_ports > 4) {
++ if (num_ports <= 0 || num_ports > 4) {
+ dev_err(&mdev->dev, "unexpected number of ports: %u\n",
+ num_ports);
+ return -ENODEV;
+@@ -133,7 +133,7 @@ static int serial_8250_men_mcb_probe(struct mcb_device *mdev,
+
+ static void serial_8250_men_mcb_remove(struct mcb_device *mdev)
+ {
+- unsigned int num_ports, i;
++ int num_ports, i;
+ struct serial_8250_men_mcb_data *data = mcb_get_drvdata(mdev);
+
+ if (!data)
+diff --git a/drivers/usb/gadget/udc/core.c b/drivers/usb/gadget/udc/core.c
+index 4d8f8f4ecf98..51fa614b4079 100644
+--- a/drivers/usb/gadget/udc/core.c
++++ b/drivers/usb/gadget/udc/core.c
+@@ -1154,7 +1154,7 @@ static int check_pending_gadget_drivers(struct usb_udc *udc)
+ dev_name(&udc->dev)) == 0) {
+ ret = udc_bind_to_driver(udc, driver);
+ if (ret != -EPROBE_DEFER)
+- list_del(&driver->pending);
++ list_del_init(&driver->pending);
+ break;
+ }
+
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index 5ef5a16c01d2..7289d443bfb3 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -1379,6 +1379,11 @@ void cifsFileInfo_put(struct cifsFileInfo *cifs_file);
+ struct cifsInodeInfo {
+ bool can_cache_brlcks;
+ struct list_head llist; /* locks helb by this inode */
++ /*
++ * NOTE: Some code paths call down_read(lock_sem) twice, so
++ * we must always use cifs_down_write() instead of down_write()
++ * for this semaphore to avoid deadlocks.
++ */
+ struct rw_semaphore lock_sem; /* protect the fields above */
+ /* BB add in lists for dirty pages i.e. write caching info for oplock */
+ struct list_head openFileList;
+diff --git a/fs/cifs/cifsproto.h b/fs/cifs/cifsproto.h
+index 592a6cea2b79..65b07f92bc71 100644
+--- a/fs/cifs/cifsproto.h
++++ b/fs/cifs/cifsproto.h
+@@ -166,6 +166,7 @@ extern int cifs_unlock_range(struct cifsFileInfo *cfile,
+ struct file_lock *flock, const unsigned int xid);
+ extern int cifs_push_mandatory_locks(struct cifsFileInfo *cfile);
+
++extern void cifs_down_write(struct rw_semaphore *sem);
+ extern struct cifsFileInfo *cifs_new_fileinfo(struct cifs_fid *fid,
+ struct file *file,
+ struct tcon_link *tlink,
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 8ee57d1f507f..8995c03056e3 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -556,9 +556,11 @@ cifs_reconnect(struct TCP_Server_Info *server)
+ spin_lock(&GlobalMid_Lock);
+ list_for_each_safe(tmp, tmp2, &server->pending_mid_q) {
+ mid_entry = list_entry(tmp, struct mid_q_entry, qhead);
++ kref_get(&mid_entry->refcount);
+ if (mid_entry->mid_state == MID_REQUEST_SUBMITTED)
+ mid_entry->mid_state = MID_RETRY_NEEDED;
+ list_move(&mid_entry->qhead, &retry_list);
++ mid_entry->mid_flags |= MID_DELETED;
+ }
+ spin_unlock(&GlobalMid_Lock);
+ mutex_unlock(&server->srv_mutex);
+@@ -568,6 +570,7 @@ cifs_reconnect(struct TCP_Server_Info *server)
+ mid_entry = list_entry(tmp, struct mid_q_entry, qhead);
+ list_del_init(&mid_entry->qhead);
+ mid_entry->callback(mid_entry);
++ cifs_mid_q_entry_release(mid_entry);
+ }
+
+ if (cifs_rdma_enabled(server)) {
+@@ -887,8 +890,10 @@ dequeue_mid(struct mid_q_entry *mid, bool malformed)
+ if (mid->mid_flags & MID_DELETED)
+ printk_once(KERN_WARNING
+ "trying to dequeue a deleted mid\n");
+- else
++ else {
+ list_del_init(&mid->qhead);
++ mid->mid_flags |= MID_DELETED;
++ }
+ spin_unlock(&GlobalMid_Lock);
+ }
+
+@@ -958,8 +963,10 @@ static void clean_demultiplex_info(struct TCP_Server_Info *server)
+ list_for_each_safe(tmp, tmp2, &server->pending_mid_q) {
+ mid_entry = list_entry(tmp, struct mid_q_entry, qhead);
+ cifs_dbg(FYI, "Clearing mid 0x%llx\n", mid_entry->mid);
++ kref_get(&mid_entry->refcount);
+ mid_entry->mid_state = MID_SHUTDOWN;
+ list_move(&mid_entry->qhead, &dispose_list);
++ mid_entry->mid_flags |= MID_DELETED;
+ }
+ spin_unlock(&GlobalMid_Lock);
+
+@@ -969,6 +976,7 @@ static void clean_demultiplex_info(struct TCP_Server_Info *server)
+ cifs_dbg(FYI, "Callback mid 0x%llx\n", mid_entry->mid);
+ list_del_init(&mid_entry->qhead);
+ mid_entry->callback(mid_entry);
++ cifs_mid_q_entry_release(mid_entry);
+ }
+ /* 1/8th of sec is more than enough time for them to exit */
+ msleep(125);
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index 53dbb6e0d390..facb52d37d19 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -281,6 +281,13 @@ cifs_has_mand_locks(struct cifsInodeInfo *cinode)
+ return has_locks;
+ }
+
++void
++cifs_down_write(struct rw_semaphore *sem)
++{
++ while (!down_write_trylock(sem))
++ msleep(10);
++}
++
+ struct cifsFileInfo *
+ cifs_new_fileinfo(struct cifs_fid *fid, struct file *file,
+ struct tcon_link *tlink, __u32 oplock)
+@@ -306,7 +313,7 @@ cifs_new_fileinfo(struct cifs_fid *fid, struct file *file,
+ INIT_LIST_HEAD(&fdlocks->locks);
+ fdlocks->cfile = cfile;
+ cfile->llist = fdlocks;
+- down_write(&cinode->lock_sem);
++ cifs_down_write(&cinode->lock_sem);
+ list_add(&fdlocks->llist, &cinode->llist);
+ up_write(&cinode->lock_sem);
+
+@@ -464,7 +471,7 @@ void _cifsFileInfo_put(struct cifsFileInfo *cifs_file, bool wait_oplock_handler)
+ * Delete any outstanding lock records. We'll lose them when the file
+ * is closed anyway.
+ */
+- down_write(&cifsi->lock_sem);
++ cifs_down_write(&cifsi->lock_sem);
+ list_for_each_entry_safe(li, tmp, &cifs_file->llist->locks, llist) {
+ list_del(&li->llist);
+ cifs_del_lock_waiters(li);
+@@ -1027,7 +1034,7 @@ static void
+ cifs_lock_add(struct cifsFileInfo *cfile, struct cifsLockInfo *lock)
+ {
+ struct cifsInodeInfo *cinode = CIFS_I(d_inode(cfile->dentry));
+- down_write(&cinode->lock_sem);
++ cifs_down_write(&cinode->lock_sem);
+ list_add_tail(&lock->llist, &cfile->llist->locks);
+ up_write(&cinode->lock_sem);
+ }
+@@ -1049,7 +1056,7 @@ cifs_lock_add_if(struct cifsFileInfo *cfile, struct cifsLockInfo *lock,
+
+ try_again:
+ exist = false;
+- down_write(&cinode->lock_sem);
++ cifs_down_write(&cinode->lock_sem);
+
+ exist = cifs_find_lock_conflict(cfile, lock->offset, lock->length,
+ lock->type, lock->flags, &conf_lock,
+@@ -1072,7 +1079,7 @@ try_again:
+ (lock->blist.next == &lock->blist));
+ if (!rc)
+ goto try_again;
+- down_write(&cinode->lock_sem);
++ cifs_down_write(&cinode->lock_sem);
+ list_del_init(&lock->blist);
+ }
+
+@@ -1125,7 +1132,7 @@ cifs_posix_lock_set(struct file *file, struct file_lock *flock)
+ return rc;
+
+ try_again:
+- down_write(&cinode->lock_sem);
++ cifs_down_write(&cinode->lock_sem);
+ if (!cinode->can_cache_brlcks) {
+ up_write(&cinode->lock_sem);
+ return rc;
+@@ -1331,7 +1338,7 @@ cifs_push_locks(struct cifsFileInfo *cfile)
+ int rc = 0;
+
+ /* we are going to update can_cache_brlcks here - need a write access */
+- down_write(&cinode->lock_sem);
++ cifs_down_write(&cinode->lock_sem);
+ if (!cinode->can_cache_brlcks) {
+ up_write(&cinode->lock_sem);
+ return rc;
+@@ -1522,7 +1529,7 @@ cifs_unlock_range(struct cifsFileInfo *cfile, struct file_lock *flock,
+ if (!buf)
+ return -ENOMEM;
+
+- down_write(&cinode->lock_sem);
++ cifs_down_write(&cinode->lock_sem);
+ for (i = 0; i < 2; i++) {
+ cur = buf;
+ num = 0;
+diff --git a/fs/cifs/smb2file.c b/fs/cifs/smb2file.c
+index e6a1fc72018f..8b0b512c5792 100644
+--- a/fs/cifs/smb2file.c
++++ b/fs/cifs/smb2file.c
+@@ -145,7 +145,7 @@ smb2_unlock_range(struct cifsFileInfo *cfile, struct file_lock *flock,
+
+ cur = buf;
+
+- down_write(&cinode->lock_sem);
++ cifs_down_write(&cinode->lock_sem);
+ list_for_each_entry_safe(li, tmp, &cfile->llist->locks, llist) {
+ if (flock->fl_start > li->offset ||
+ (flock->fl_start + length) <
+diff --git a/fs/cifs/transport.c b/fs/cifs/transport.c
+index 5d6d44bfe10a..bb52751ba783 100644
+--- a/fs/cifs/transport.c
++++ b/fs/cifs/transport.c
+@@ -86,22 +86,8 @@ AllocMidQEntry(const struct smb_hdr *smb_buffer, struct TCP_Server_Info *server)
+
+ static void _cifs_mid_q_entry_release(struct kref *refcount)
+ {
+- struct mid_q_entry *mid = container_of(refcount, struct mid_q_entry,
+- refcount);
+-
+- mempool_free(mid, cifs_mid_poolp);
+-}
+-
+-void cifs_mid_q_entry_release(struct mid_q_entry *midEntry)
+-{
+- spin_lock(&GlobalMid_Lock);
+- kref_put(&midEntry->refcount, _cifs_mid_q_entry_release);
+- spin_unlock(&GlobalMid_Lock);
+-}
+-
+-void
+-DeleteMidQEntry(struct mid_q_entry *midEntry)
+-{
++ struct mid_q_entry *midEntry =
++ container_of(refcount, struct mid_q_entry, refcount);
+ #ifdef CONFIG_CIFS_STATS2
+ __le16 command = midEntry->server->vals->lock_cmd;
+ __u16 smb_cmd = le16_to_cpu(midEntry->command);
+@@ -166,6 +152,19 @@ DeleteMidQEntry(struct mid_q_entry *midEntry)
+ }
+ }
+ #endif
++
++ mempool_free(midEntry, cifs_mid_poolp);
++}
++
++void cifs_mid_q_entry_release(struct mid_q_entry *midEntry)
++{
++ spin_lock(&GlobalMid_Lock);
++ kref_put(&midEntry->refcount, _cifs_mid_q_entry_release);
++ spin_unlock(&GlobalMid_Lock);
++}
++
++void DeleteMidQEntry(struct mid_q_entry *midEntry)
++{
+ cifs_mid_q_entry_release(midEntry);
+ }
+
+@@ -173,8 +172,10 @@ void
+ cifs_delete_mid(struct mid_q_entry *mid)
+ {
+ spin_lock(&GlobalMid_Lock);
+- list_del_init(&mid->qhead);
+- mid->mid_flags |= MID_DELETED;
++ if (!(mid->mid_flags & MID_DELETED)) {
++ list_del_init(&mid->qhead);
++ mid->mid_flags |= MID_DELETED;
++ }
+ spin_unlock(&GlobalMid_Lock);
+
+ DeleteMidQEntry(mid);
+@@ -868,7 +869,10 @@ cifs_sync_mid_result(struct mid_q_entry *mid, struct TCP_Server_Info *server)
+ rc = -EHOSTDOWN;
+ break;
+ default:
+- list_del_init(&mid->qhead);
++ if (!(mid->mid_flags & MID_DELETED)) {
++ list_del_init(&mid->qhead);
++ mid->mid_flags |= MID_DELETED;
++ }
+ cifs_dbg(VFS, "%s: invalid mid state mid=%llu state=%d\n",
+ __func__, mid->mid, mid->mid_state);
+ rc = -EIO;
+diff --git a/include/linux/gfp.h b/include/linux/gfp.h
+index f33881688f42..ff1c96b8ae92 100644
+--- a/include/linux/gfp.h
++++ b/include/linux/gfp.h
+@@ -325,6 +325,29 @@ static inline bool gfpflags_allow_blocking(const gfp_t gfp_flags)
+ return !!(gfp_flags & __GFP_DIRECT_RECLAIM);
+ }
+
++/**
++ * gfpflags_normal_context - is gfp_flags a normal sleepable context?
++ * @gfp_flags: gfp_flags to test
++ *
++ * Test whether @gfp_flags indicates that the allocation is from the
++ * %current context and allowed to sleep.
++ *
++ * An allocation being allowed to block doesn't mean it owns the %current
++ * context. When the direct reclaim path tries to allocate memory, the
++ * allocation context is nested inside whatever %current was doing at the
++ * time of the original allocation. The nested allocation may be allowed
++ * to block, but modifying anything %current owns can corrupt the outer
++ * context's expectations.
++ *
++ * A %true result from this function indicates that the allocation context
++ * can sleep and use anything that's associated with %current.
++ */
++static inline bool gfpflags_normal_context(const gfp_t gfp_flags)
++{
++ return (gfp_flags & (__GFP_DIRECT_RECLAIM | __GFP_MEMALLOC)) ==
++ __GFP_DIRECT_RECLAIM;
++}
++
+ #ifdef CONFIG_HIGHMEM
+ #define OPT_ZONE_HIGHMEM ZONE_HIGHMEM
+ #else
+diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
+index b8b570c30b5e..e4b323e4db8f 100644
+--- a/include/linux/mlx5/mlx5_ifc.h
++++ b/include/linux/mlx5/mlx5_ifc.h
+@@ -1437,9 +1437,8 @@ struct mlx5_ifc_extended_dest_format_bits {
+ };
+
+ union mlx5_ifc_dest_format_struct_flow_counter_list_auto_bits {
+- struct mlx5_ifc_dest_format_struct_bits dest_format_struct;
++ struct mlx5_ifc_extended_dest_format_bits extended_dest_format;
+ struct mlx5_ifc_flow_counter_list_bits flow_counter_list;
+- u8 reserved_at_0[0x40];
+ };
+
+ struct mlx5_ifc_fte_match_param_bits {
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index 9b18d33681c2..7647beaac2d2 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -1360,7 +1360,8 @@ static inline __u32 skb_get_hash_flowi6(struct sk_buff *skb, const struct flowi6
+ return skb->hash;
+ }
+
+-__u32 skb_get_hash_perturb(const struct sk_buff *skb, u32 perturb);
++__u32 skb_get_hash_perturb(const struct sk_buff *skb,
++ const siphash_key_t *perturb);
+
+ static inline __u32 skb_get_hash_raw(const struct sk_buff *skb)
+ {
+@@ -1500,6 +1501,19 @@ static inline int skb_queue_empty(const struct sk_buff_head *list)
+ return list->next == (const struct sk_buff *) list;
+ }
+
++/**
++ * skb_queue_empty_lockless - check if a queue is empty
++ * @list: queue head
++ *
++ * Returns true if the queue is empty, false otherwise.
++ * This variant can be used in lockless contexts.
++ */
++static inline bool skb_queue_empty_lockless(const struct sk_buff_head *list)
++{
++ return READ_ONCE(list->next) == (const struct sk_buff *) list;
++}
++
++
+ /**
+ * skb_queue_is_last - check if skb is the last entry in the queue
+ * @list: queue head
+@@ -1853,9 +1867,11 @@ static inline void __skb_insert(struct sk_buff *newsk,
+ struct sk_buff *prev, struct sk_buff *next,
+ struct sk_buff_head *list)
+ {
+- newsk->next = next;
+- newsk->prev = prev;
+- next->prev = prev->next = newsk;
++ /* see skb_queue_empty_lockless() for the opposite READ_ONCE() */
++ WRITE_ONCE(newsk->next, next);
++ WRITE_ONCE(newsk->prev, prev);
++ WRITE_ONCE(next->prev, newsk);
++ WRITE_ONCE(prev->next, newsk);
+ list->qlen++;
+ }
+
+@@ -1866,11 +1882,11 @@ static inline void __skb_queue_splice(const struct sk_buff_head *list,
+ struct sk_buff *first = list->next;
+ struct sk_buff *last = list->prev;
+
+- first->prev = prev;
+- prev->next = first;
++ WRITE_ONCE(first->prev, prev);
++ WRITE_ONCE(prev->next, first);
+
+- last->next = next;
+- next->prev = last;
++ WRITE_ONCE(last->next, next);
++ WRITE_ONCE(next->prev, last);
+ }
+
+ /**
+@@ -2011,8 +2027,8 @@ static inline void __skb_unlink(struct sk_buff *skb, struct sk_buff_head *list)
+ next = skb->next;
+ prev = skb->prev;
+ skb->next = skb->prev = NULL;
+- next->prev = prev;
+- prev->next = next;
++ WRITE_ONCE(next->prev, prev);
++ WRITE_ONCE(prev->next, next);
+ }
+
+ /**
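The skb_queue_empty_lockless() helper and the WRITE_ONCE() conversions in __skb_insert()/__skb_unlink() above work as a pair: writers publish the list pointers with single-copy-atomic stores, and a reader that holds no queue lock loads ->next exactly once. The sketch below is a self-contained userspace rendering of that pairing; the ONCE macros are simplified volatile accesses standing in for the kernel's compiler.h versions, and the skb types are cut down to the pointers that matter.

#define WRITE_ONCE(x, val) (*(volatile __typeof__(x) *)&(x) = (val))
#define READ_ONCE(x)       (*(volatile __typeof__(x) *)&(x))

#include <stdbool.h>

struct sk_buff { struct sk_buff *next, *prev; };
/* Like the kernel's sk_buff_head, the head starts with next/prev so it can
 * be cast to a struct sk_buff * for the empty check. */
struct sk_buff_head { struct sk_buff *next, *prev; unsigned int qlen; };

/* Reader side: may run without the lock, so load the pointer once, untorn. */
static bool skb_queue_empty_lockless(const struct sk_buff_head *list)
{
	return READ_ONCE(list->next) == (const struct sk_buff *)list;
}

/* Writer side (normally under the queue lock): publish the new node with
 * stores the lockless reader can only observe fully written. */
static void __skb_insert(struct sk_buff *newsk, struct sk_buff *prev,
			 struct sk_buff *next, struct sk_buff_head *list)
{
	WRITE_ONCE(newsk->next, next);
	WRITE_ONCE(newsk->prev, prev);
	WRITE_ONCE(next->prev, newsk);
	WRITE_ONCE(prev->next, newsk);
	list->qlen++;
}

int main(void)
{
	struct sk_buff_head q = { (struct sk_buff *)&q, (struct sk_buff *)&q, 0 };
	struct sk_buff skb = { 0 };

	if (!skb_queue_empty_lockless(&q))
		return 1;
	__skb_insert(&skb, (struct sk_buff *)&q, (struct sk_buff *)&q, &q);
	return skb_queue_empty_lockless(&q); /* queue is non-empty now, so 0 */
}

The poll()/recvmsg() hunks later in this patch are exactly the readers this is for: they peek at sk_receive_queue without taking its spinlock.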
+diff --git a/include/net/busy_poll.h b/include/net/busy_poll.h
+index 127a5c4e3699..86e028388bad 100644
+--- a/include/net/busy_poll.h
++++ b/include/net/busy_poll.h
+@@ -122,7 +122,7 @@ static inline void skb_mark_napi_id(struct sk_buff *skb,
+ static inline void sk_mark_napi_id(struct sock *sk, const struct sk_buff *skb)
+ {
+ #ifdef CONFIG_NET_RX_BUSY_POLL
+- sk->sk_napi_id = skb->napi_id;
++ WRITE_ONCE(sk->sk_napi_id, skb->napi_id);
+ #endif
+ sk_rx_queue_set(sk, skb);
+ }
+@@ -132,8 +132,8 @@ static inline void sk_mark_napi_id_once(struct sock *sk,
+ const struct sk_buff *skb)
+ {
+ #ifdef CONFIG_NET_RX_BUSY_POLL
+- if (!sk->sk_napi_id)
+- sk->sk_napi_id = skb->napi_id;
++ if (!READ_ONCE(sk->sk_napi_id))
++ WRITE_ONCE(sk->sk_napi_id, skb->napi_id);
+ #endif
+ }
+
+diff --git a/include/net/flow_dissector.h b/include/net/flow_dissector.h
+index 90bd210be060..5cd12276ae21 100644
+--- a/include/net/flow_dissector.h
++++ b/include/net/flow_dissector.h
+@@ -4,6 +4,7 @@
+
+ #include <linux/types.h>
+ #include <linux/in6.h>
++#include <linux/siphash.h>
+ #include <uapi/linux/if_ether.h>
+
+ /**
+@@ -276,7 +277,7 @@ struct flow_keys_basic {
+ struct flow_keys {
+ struct flow_dissector_key_control control;
+ #define FLOW_KEYS_HASH_START_FIELD basic
+- struct flow_dissector_key_basic basic;
++ struct flow_dissector_key_basic basic __aligned(SIPHASH_ALIGNMENT);
+ struct flow_dissector_key_tags tags;
+ struct flow_dissector_key_vlan vlan;
+ struct flow_dissector_key_vlan cvlan;
+diff --git a/include/net/fq.h b/include/net/fq.h
+index d126b5d20261..2ad85e683041 100644
+--- a/include/net/fq.h
++++ b/include/net/fq.h
+@@ -69,7 +69,7 @@ struct fq {
+ struct list_head backlogs;
+ spinlock_t lock;
+ u32 flows_cnt;
+- u32 perturbation;
++ siphash_key_t perturbation;
+ u32 limit;
+ u32 memory_limit;
+ u32 memory_usage;
+diff --git a/include/net/fq_impl.h b/include/net/fq_impl.h
+index be40a4b327e3..107c0d700ed6 100644
+--- a/include/net/fq_impl.h
++++ b/include/net/fq_impl.h
+@@ -108,7 +108,7 @@ begin:
+
+ static u32 fq_flow_idx(struct fq *fq, struct sk_buff *skb)
+ {
+- u32 hash = skb_get_hash_perturb(skb, fq->perturbation);
++ u32 hash = skb_get_hash_perturb(skb, &fq->perturbation);
+
+ return reciprocal_scale(hash, fq->flows_cnt);
+ }
+@@ -308,7 +308,7 @@ static int fq_init(struct fq *fq, int flows_cnt)
+ INIT_LIST_HEAD(&fq->backlogs);
+ spin_lock_init(&fq->lock);
+ fq->flows_cnt = max_t(u32, flows_cnt, 1);
+- fq->perturbation = prandom_u32();
++ get_random_bytes(&fq->perturbation, sizeof(fq->perturbation));
+ fq->quantum = 300;
+ fq->limit = 8192;
+ fq->memory_limit = 16 << 20; /* 16 MBytes */
+diff --git a/include/net/ip.h b/include/net/ip.h
+index 29d89de39822..e6609ab69161 100644
+--- a/include/net/ip.h
++++ b/include/net/ip.h
+@@ -184,7 +184,7 @@ static inline struct sk_buff *ip_fraglist_next(struct ip_fraglist_iter *iter)
+ }
+
+ struct ip_frag_state {
+- struct iphdr *iph;
++ bool DF;
+ unsigned int hlen;
+ unsigned int ll_rs;
+ unsigned int mtu;
+@@ -195,7 +195,7 @@ struct ip_frag_state {
+ };
+
+ void ip_frag_init(struct sk_buff *skb, unsigned int hlen, unsigned int ll_rs,
+- unsigned int mtu, struct ip_frag_state *state);
++ unsigned int mtu, bool DF, struct ip_frag_state *state);
+ struct sk_buff *ip_frag_next(struct sk_buff *skb,
+ struct ip_frag_state *state);
+
+diff --git a/include/net/net_namespace.h b/include/net/net_namespace.h
+index ab40d7afdc54..8f8b37198f9b 100644
+--- a/include/net/net_namespace.h
++++ b/include/net/net_namespace.h
+@@ -52,6 +52,9 @@ struct bpf_prog;
+ #define NETDEV_HASHENTRIES (1 << NETDEV_HASHBITS)
+
+ struct net {
++ /* First cache line can be often dirtied.
++ * Do not place here read-mostly fields.
++ */
+ refcount_t passive; /* To decide when the network
+ * namespace should be freed.
+ */
+@@ -60,7 +63,13 @@ struct net {
+ */
+ spinlock_t rules_mod_lock;
+
+- u32 hash_mix;
++ unsigned int dev_unreg_count;
++
++ unsigned int dev_base_seq; /* protected by rtnl_mutex */
++ int ifindex;
++
++ spinlock_t nsid_lock;
++ atomic_t fnhe_genid;
+
+ struct list_head list; /* list of network namespaces */
+ struct list_head exit_list; /* To linked to call pernet exit
+@@ -76,11 +85,11 @@ struct net {
+ #endif
+ struct user_namespace *user_ns; /* Owning user namespace */
+ struct ucounts *ucounts;
+- spinlock_t nsid_lock;
+ struct idr netns_ids;
+
+ struct ns_common ns;
+
++ struct list_head dev_base_head;
+ struct proc_dir_entry *proc_net;
+ struct proc_dir_entry *proc_net_stat;
+
+@@ -93,12 +102,14 @@ struct net {
+
+ struct uevent_sock *uevent_sock; /* uevent socket */
+
+- struct list_head dev_base_head;
+ struct hlist_head *dev_name_head;
+ struct hlist_head *dev_index_head;
+- unsigned int dev_base_seq; /* protected by rtnl_mutex */
+- int ifindex;
+- unsigned int dev_unreg_count;
++ /* Note that @hash_mix can be read millions of times per second,
++ * so it is critical that it is on a read_mostly cache line.
++ */
++ u32 hash_mix;
++
++ struct net_device *loopback_dev; /* The loopback */
+
+ /* core fib_rules */
+ struct list_head rules_ops;
+@@ -106,7 +117,6 @@ struct net {
+ struct list_head fib_notifier_ops; /* Populated by
+ * register_pernet_subsys()
+ */
+- struct net_device *loopback_dev; /* The loopback */
+ struct netns_core core;
+ struct netns_mib mib;
+ struct netns_packet packet;
+@@ -171,7 +181,6 @@ struct net {
+ struct netns_xdp xdp;
+ #endif
+ struct sock *diag_nlsk;
+- atomic_t fnhe_genid;
+ } __randomize_layout;
+
+ #include <linux/seq_file_net.h>
+@@ -333,7 +342,7 @@ static inline struct net *read_pnet(const possible_net_t *pnet)
+ #define __net_initconst __initconst
+ #endif
+
+-int peernet2id_alloc(struct net *net, struct net *peer);
++int peernet2id_alloc(struct net *net, struct net *peer, gfp_t gfp);
+ int peernet2id(struct net *net, struct net *peer);
+ bool peernet_has_id(struct net *net, struct net *peer);
+ struct net *get_net_ns_by_id(struct net *net, int id);
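The struct net reshuffle above groups the frequently written fields (refcount, dev_unreg_count, dev_base_seq, ifindex, nsid_lock, fnhe_genid) away from read-mostly ones such as hash_mix and loopback_dev, so the hot writers stop dirtying the cache line that fast-path readers hit. The snippet below only illustrates that layout idea with made-up fields; it is not the kernel's struct net.

#include <stdalign.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define CACHELINE 64

struct netns_like {
	/* write-hot: bumped on device register/unregister and id allocation */
	int          passive_refs;
	unsigned int dev_unreg_count;
	unsigned int dev_base_seq;
	int          ifindex;

	/* read-mostly: start a fresh cache line so the writers above never
	 * invalidate it on other CPUs */
	alignas(CACHELINE) uint32_t hash_mix;
	void        *loopback_dev;
};

int main(void)
{
	printf("hash_mix offset: %zu (cache line %zu)\n",
	       offsetof(struct netns_like, hash_mix),
	       offsetof(struct netns_like, hash_mix) / CACHELINE);
	return 0;
}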
+diff --git a/include/net/sock.h b/include/net/sock.h
+index 2c53f1a1d905..b03f96370f8e 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -949,8 +949,8 @@ static inline void sk_incoming_cpu_update(struct sock *sk)
+ {
+ int cpu = raw_smp_processor_id();
+
+- if (unlikely(sk->sk_incoming_cpu != cpu))
+- sk->sk_incoming_cpu = cpu;
++ if (unlikely(READ_ONCE(sk->sk_incoming_cpu) != cpu))
++ WRITE_ONCE(sk->sk_incoming_cpu, cpu);
+ }
+
+ static inline void sock_rps_record_flow_hash(__u32 hash)
+@@ -2233,12 +2233,17 @@ struct sk_buff *sk_stream_alloc_skb(struct sock *sk, int size, gfp_t gfp,
+ * sk_page_frag - return an appropriate page_frag
+ * @sk: socket
+ *
+- * If socket allocation mode allows current thread to sleep, it means its
+- * safe to use the per task page_frag instead of the per socket one.
++ * Use the per task page_frag instead of the per socket one for
++ * optimization when we know that we're in the normal context and own
++ * everything that's associated with %current.
++ *
++ * gfpflags_allow_blocking() isn't enough here as direct reclaim may nest
++ * inside other socket operations and end up recursing into sk_page_frag()
++ * while it's already in use.
+ */
+ static inline struct page_frag *sk_page_frag(struct sock *sk)
+ {
+- if (gfpflags_allow_blocking(sk->sk_allocation))
++ if (gfpflags_normal_context(sk->sk_allocation))
+ return ¤t->task_frag;
+
+ return &sk->sk_frag;
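sk_page_frag() above now keys the "borrow current->task_frag" optimization off gfpflags_normal_context() rather than gfpflags_allow_blocking(), because a __GFP_MEMALLOC allocation made from direct reclaim can also block yet must not reuse the task_frag the interrupted sendmsg() may still be filling. A self-contained, simplified sketch of that decision follows; the flag values and the task/sock/page_frag types are stand-ins, not the real kernel structures.

#include <stdbool.h>
#include <stdio.h>

typedef unsigned int gfp_t;
#define __GFP_DIRECT_RECLAIM 0x400u /* placeholder bit: may block */
#define __GFP_MEMALLOC       0x800u /* placeholder bit: reclaim-internal */

struct page_frag { unsigned int offset, size; };
struct task_like { struct page_frag task_frag; };
struct sock_like { gfp_t sk_allocation; struct page_frag sk_frag; };

static struct task_like current_task; /* stands in for %current */

static bool gfpflags_normal_context(gfp_t flags)
{
	return (flags & (__GFP_DIRECT_RECLAIM | __GFP_MEMALLOC)) ==
	       __GFP_DIRECT_RECLAIM;
}

static struct page_frag *sk_page_frag(struct sock_like *sk)
{
	/* Only a normal, non-reclaim context may borrow the per-task frag;
	 * everything else falls back to the per-socket one. */
	if (gfpflags_normal_context(sk->sk_allocation))
		return &current_task.task_frag;
	return &sk->sk_frag;
}

int main(void)
{
	struct sock_like normal   = { .sk_allocation = __GFP_DIRECT_RECLAIM };
	struct sock_like memalloc = { .sk_allocation = __GFP_DIRECT_RECLAIM |
						       __GFP_MEMALLOC };

	printf("normal uses task_frag:   %d\n",
	       sk_page_frag(&normal) == &current_task.task_frag);   /* 1 */
	printf("memalloc uses task_frag: %d\n",
	       sk_page_frag(&memalloc) == &current_task.task_frag); /* 0 */
	return 0;
}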
+diff --git a/include/sound/simple_card_utils.h b/include/sound/simple_card_utils.h
+index 985a5f583de4..31f76b6abf71 100644
+--- a/include/sound/simple_card_utils.h
++++ b/include/sound/simple_card_utils.h
+@@ -135,9 +135,9 @@ int asoc_simple_init_priv(struct asoc_simple_priv *priv,
+ struct link_info *li);
+
+ #ifdef DEBUG
+-inline void asoc_simple_debug_dai(struct asoc_simple_priv *priv,
+- char *name,
+- struct asoc_simple_dai *dai)
++static inline void asoc_simple_debug_dai(struct asoc_simple_priv *priv,
++ char *name,
++ struct asoc_simple_dai *dai)
+ {
+ struct device *dev = simple_priv_to_dev(priv);
+
+@@ -167,7 +167,7 @@ inline void asoc_simple_debug_dai(struct asoc_simple_priv *priv,
+ dev_dbg(dev, "%s clk %luHz\n", name, clk_get_rate(dai->clk));
+ }
+
+-inline void asoc_simple_debug_info(struct asoc_simple_priv *priv)
++static inline void asoc_simple_debug_info(struct asoc_simple_priv *priv)
+ {
+ struct snd_soc_card *card = simple_priv_to_card(priv);
+ struct device *dev = simple_priv_to_dev(priv);
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index dd310d3b5843..725b9b35f933 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -674,6 +674,8 @@ static bool synth_field_signed(char *type)
+ {
+ if (str_has_prefix(type, "u"))
+ return false;
++ if (strcmp(type, "gfp_t") == 0)
++ return false;
+
+ return true;
+ }
+diff --git a/net/atm/common.c b/net/atm/common.c
+index b7528e77997c..0ce530af534d 100644
+--- a/net/atm/common.c
++++ b/net/atm/common.c
+@@ -668,7 +668,7 @@ __poll_t vcc_poll(struct file *file, struct socket *sock, poll_table *wait)
+ mask |= EPOLLHUP;
+
+ /* readable? */
+- if (!skb_queue_empty(&sk->sk_receive_queue))
++ if (!skb_queue_empty_lockless(&sk->sk_receive_queue))
+ mask |= EPOLLIN | EPOLLRDNORM;
+
+ /* writable? */
+diff --git a/net/bluetooth/af_bluetooth.c b/net/bluetooth/af_bluetooth.c
+index 94ddf19998c7..5f508c50649d 100644
+--- a/net/bluetooth/af_bluetooth.c
++++ b/net/bluetooth/af_bluetooth.c
+@@ -460,7 +460,7 @@ __poll_t bt_sock_poll(struct file *file, struct socket *sock,
+ if (sk->sk_state == BT_LISTEN)
+ return bt_accept_poll(sk);
+
+- if (sk->sk_err || !skb_queue_empty(&sk->sk_error_queue))
++ if (sk->sk_err || !skb_queue_empty_lockless(&sk->sk_error_queue))
+ mask |= EPOLLERR |
+ (sock_flag(sk, SOCK_SELECT_ERR_QUEUE) ? EPOLLPRI : 0);
+
+@@ -470,7 +470,7 @@ __poll_t bt_sock_poll(struct file *file, struct socket *sock,
+ if (sk->sk_shutdown == SHUTDOWN_MASK)
+ mask |= EPOLLHUP;
+
+- if (!skb_queue_empty(&sk->sk_receive_queue))
++ if (!skb_queue_empty_lockless(&sk->sk_receive_queue))
+ mask |= EPOLLIN | EPOLLRDNORM;
+
+ if (sk->sk_state == BT_CLOSED)
+diff --git a/net/bridge/netfilter/nf_conntrack_bridge.c b/net/bridge/netfilter/nf_conntrack_bridge.c
+index 4f5444d2a526..a48cb1baeac6 100644
+--- a/net/bridge/netfilter/nf_conntrack_bridge.c
++++ b/net/bridge/netfilter/nf_conntrack_bridge.c
+@@ -34,6 +34,7 @@ static int nf_br_ip_fragment(struct net *net, struct sock *sk,
+ {
+ int frag_max_size = BR_INPUT_SKB_CB(skb)->frag_max_size;
+ unsigned int hlen, ll_rs, mtu;
++ ktime_t tstamp = skb->tstamp;
+ struct ip_frag_state state;
+ struct iphdr *iph;
+ int err;
+@@ -81,6 +82,7 @@ static int nf_br_ip_fragment(struct net *net, struct sock *sk,
+ if (iter.frag)
+ ip_fraglist_prepare(skb, &iter);
+
++ skb->tstamp = tstamp;
+ err = output(net, sk, data, skb);
+ if (err || !iter.frag)
+ break;
+@@ -94,7 +96,7 @@ slow_path:
+ * This may also be a clone skbuff, we could preserve the geometry for
+ * the copies but probably not worth the effort.
+ */
+- ip_frag_init(skb, hlen, ll_rs, frag_max_size, &state);
++ ip_frag_init(skb, hlen, ll_rs, frag_max_size, false, &state);
+
+ while (state.left > 0) {
+ struct sk_buff *skb2;
+@@ -105,6 +107,7 @@ slow_path:
+ goto blackhole;
+ }
+
++ skb2->tstamp = tstamp;
+ err = output(net, sk, data, skb2);
+ if (err)
+ goto blackhole;
+diff --git a/net/caif/caif_socket.c b/net/caif/caif_socket.c
+index 13ea920600ae..ef14da50a981 100644
+--- a/net/caif/caif_socket.c
++++ b/net/caif/caif_socket.c
+@@ -953,7 +953,7 @@ static __poll_t caif_poll(struct file *file,
+ mask |= EPOLLRDHUP;
+
+ /* readable? */
+- if (!skb_queue_empty(&sk->sk_receive_queue) ||
++ if (!skb_queue_empty_lockless(&sk->sk_receive_queue) ||
+ (sk->sk_shutdown & RCV_SHUTDOWN))
+ mask |= EPOLLIN | EPOLLRDNORM;
+
+diff --git a/net/core/datagram.c b/net/core/datagram.c
+index 45a162ef5e02..5dc112ec7286 100644
+--- a/net/core/datagram.c
++++ b/net/core/datagram.c
+@@ -97,7 +97,7 @@ int __skb_wait_for_more_packets(struct sock *sk, int *err, long *timeo_p,
+ if (error)
+ goto out_err;
+
+- if (sk->sk_receive_queue.prev != skb)
++ if (READ_ONCE(sk->sk_receive_queue.prev) != skb)
+ goto out;
+
+ /* Socket shut down? */
+@@ -278,7 +278,7 @@ struct sk_buff *__skb_try_recv_datagram(struct sock *sk, unsigned int flags,
+ break;
+
+ sk_busy_loop(sk, flags & MSG_DONTWAIT);
+- } while (sk->sk_receive_queue.prev != *last);
++ } while (READ_ONCE(sk->sk_receive_queue.prev) != *last);
+
+ error = -EAGAIN;
+
+@@ -767,7 +767,7 @@ __poll_t datagram_poll(struct file *file, struct socket *sock,
+ mask = 0;
+
+ /* exceptional events? */
+- if (sk->sk_err || !skb_queue_empty(&sk->sk_error_queue))
++ if (sk->sk_err || !skb_queue_empty_lockless(&sk->sk_error_queue))
+ mask |= EPOLLERR |
+ (sock_flag(sk, SOCK_SELECT_ERR_QUEUE) ? EPOLLPRI : 0);
+
+@@ -777,7 +777,7 @@ __poll_t datagram_poll(struct file *file, struct socket *sock,
+ mask |= EPOLLHUP;
+
+ /* readable? */
+- if (!skb_queue_empty(&sk->sk_receive_queue))
++ if (!skb_queue_empty_lockless(&sk->sk_receive_queue))
+ mask |= EPOLLIN | EPOLLRDNORM;
+
+ /* Connection-based need to check for termination and startup */
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 4ed9df74eb8a..33b278b826b5 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -9411,7 +9411,7 @@ int dev_change_net_namespace(struct net_device *dev, struct net *net, const char
+ call_netdevice_notifiers(NETDEV_UNREGISTER, dev);
+ rcu_barrier();
+
+- new_nsid = peernet2id_alloc(dev_net(dev), net);
++ new_nsid = peernet2id_alloc(dev_net(dev), net, GFP_KERNEL);
+ /* If there is an ifindex conflict assign a new one */
+ if (__dev_get_by_index(net, dev->ifindex))
+ new_ifindex = dev_new_index(net);
+diff --git a/net/core/ethtool.c b/net/core/ethtool.c
+index 6288e69e94fc..563a48c3df36 100644
+--- a/net/core/ethtool.c
++++ b/net/core/ethtool.c
+@@ -1395,11 +1395,13 @@ static int ethtool_reset(struct net_device *dev, char __user *useraddr)
+
+ static int ethtool_get_wol(struct net_device *dev, char __user *useraddr)
+ {
+- struct ethtool_wolinfo wol = { .cmd = ETHTOOL_GWOL };
++ struct ethtool_wolinfo wol;
+
+ if (!dev->ethtool_ops->get_wol)
+ return -EOPNOTSUPP;
+
++ memset(&wol, 0, sizeof(struct ethtool_wolinfo));
++ wol.cmd = ETHTOOL_GWOL;
+ dev->ethtool_ops->get_wol(dev, &wol);
+
+ if (copy_to_user(useraddr, &wol, sizeof(wol)))
+diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
+index 2470b4b404e6..2f5326a82465 100644
+--- a/net/core/flow_dissector.c
++++ b/net/core/flow_dissector.c
+@@ -1333,30 +1333,21 @@ out_bad:
+ }
+ EXPORT_SYMBOL(__skb_flow_dissect);
+
+-static u32 hashrnd __read_mostly;
++static siphash_key_t hashrnd __read_mostly;
+ static __always_inline void __flow_hash_secret_init(void)
+ {
+ net_get_random_once(&hashrnd, sizeof(hashrnd));
+ }
+
+-static __always_inline u32 __flow_hash_words(const u32 *words, u32 length,
+- u32 keyval)
++static const void *flow_keys_hash_start(const struct flow_keys *flow)
+ {
+- return jhash2(words, length, keyval);
+-}
+-
+-static inline const u32 *flow_keys_hash_start(const struct flow_keys *flow)
+-{
+- const void *p = flow;
+-
+- BUILD_BUG_ON(FLOW_KEYS_HASH_OFFSET % sizeof(u32));
+- return (const u32 *)(p + FLOW_KEYS_HASH_OFFSET);
++ BUILD_BUG_ON(FLOW_KEYS_HASH_OFFSET % SIPHASH_ALIGNMENT);
++ return &flow->FLOW_KEYS_HASH_START_FIELD;
+ }
+
+ static inline size_t flow_keys_hash_length(const struct flow_keys *flow)
+ {
+ size_t diff = FLOW_KEYS_HASH_OFFSET + sizeof(flow->addrs);
+- BUILD_BUG_ON((sizeof(*flow) - FLOW_KEYS_HASH_OFFSET) % sizeof(u32));
+ BUILD_BUG_ON(offsetof(typeof(*flow), addrs) !=
+ sizeof(*flow) - sizeof(flow->addrs));
+
+@@ -1371,7 +1362,7 @@ static inline size_t flow_keys_hash_length(const struct flow_keys *flow)
+ diff -= sizeof(flow->addrs.tipckey);
+ break;
+ }
+- return (sizeof(*flow) - diff) / sizeof(u32);
++ return sizeof(*flow) - diff;
+ }
+
+ __be32 flow_get_u32_src(const struct flow_keys *flow)
+@@ -1437,14 +1428,15 @@ static inline void __flow_hash_consistentify(struct flow_keys *keys)
+ }
+ }
+
+-static inline u32 __flow_hash_from_keys(struct flow_keys *keys, u32 keyval)
++static inline u32 __flow_hash_from_keys(struct flow_keys *keys,
++ const siphash_key_t *keyval)
+ {
+ u32 hash;
+
+ __flow_hash_consistentify(keys);
+
+- hash = __flow_hash_words(flow_keys_hash_start(keys),
+- flow_keys_hash_length(keys), keyval);
++ hash = siphash(flow_keys_hash_start(keys),
++ flow_keys_hash_length(keys), keyval);
+ if (!hash)
+ hash = 1;
+
+@@ -1454,12 +1446,13 @@ static inline u32 __flow_hash_from_keys(struct flow_keys *keys, u32 keyval)
+ u32 flow_hash_from_keys(struct flow_keys *keys)
+ {
+ __flow_hash_secret_init();
+- return __flow_hash_from_keys(keys, hashrnd);
++ return __flow_hash_from_keys(keys, &hashrnd);
+ }
+ EXPORT_SYMBOL(flow_hash_from_keys);
+
+ static inline u32 ___skb_get_hash(const struct sk_buff *skb,
+- struct flow_keys *keys, u32 keyval)
++ struct flow_keys *keys,
++ const siphash_key_t *keyval)
+ {
+ skb_flow_dissect_flow_keys(skb, keys,
+ FLOW_DISSECTOR_F_STOP_AT_FLOW_LABEL);
+@@ -1507,7 +1500,7 @@ u32 __skb_get_hash_symmetric(const struct sk_buff *skb)
+ &keys, NULL, 0, 0, 0,
+ FLOW_DISSECTOR_F_STOP_AT_FLOW_LABEL);
+
+- return __flow_hash_from_keys(&keys, hashrnd);
++ return __flow_hash_from_keys(&keys, &hashrnd);
+ }
+ EXPORT_SYMBOL_GPL(__skb_get_hash_symmetric);
+
+@@ -1527,13 +1520,14 @@ void __skb_get_hash(struct sk_buff *skb)
+
+ __flow_hash_secret_init();
+
+- hash = ___skb_get_hash(skb, &keys, hashrnd);
++ hash = ___skb_get_hash(skb, &keys, &hashrnd);
+
+ __skb_set_sw_hash(skb, hash, flow_keys_have_l4(&keys));
+ }
+ EXPORT_SYMBOL(__skb_get_hash);
+
+-__u32 skb_get_hash_perturb(const struct sk_buff *skb, u32 perturb)
++__u32 skb_get_hash_perturb(const struct sk_buff *skb,
++ const siphash_key_t *perturb)
+ {
+ struct flow_keys keys;
+
+diff --git a/net/core/net_namespace.c b/net/core/net_namespace.c
+index a0e0d298c991..87c32ab63304 100644
+--- a/net/core/net_namespace.c
++++ b/net/core/net_namespace.c
+@@ -245,11 +245,11 @@ static int __peernet2id(struct net *net, struct net *peer)
+ return __peernet2id_alloc(net, peer, &no);
+ }
+
+-static void rtnl_net_notifyid(struct net *net, int cmd, int id);
++static void rtnl_net_notifyid(struct net *net, int cmd, int id, gfp_t gfp);
+ /* This function returns the id of a peer netns. If no id is assigned, one will
+ * be allocated and returned.
+ */
+-int peernet2id_alloc(struct net *net, struct net *peer)
++int peernet2id_alloc(struct net *net, struct net *peer, gfp_t gfp)
+ {
+ bool alloc = false, alive = false;
+ int id;
+@@ -268,7 +268,7 @@ int peernet2id_alloc(struct net *net, struct net *peer)
+ id = __peernet2id_alloc(net, peer, &alloc);
+ spin_unlock_bh(&net->nsid_lock);
+ if (alloc && id >= 0)
+- rtnl_net_notifyid(net, RTM_NEWNSID, id);
++ rtnl_net_notifyid(net, RTM_NEWNSID, id, gfp);
+ if (alive)
+ put_net(peer);
+ return id;
+@@ -478,6 +478,7 @@ struct net *copy_net_ns(unsigned long flags,
+
+ if (rv < 0) {
+ put_userns:
++ key_remove_domain(net->key_domain);
+ put_user_ns(user_ns);
+ net_drop_ns(net);
+ dec_ucounts:
+@@ -532,7 +533,8 @@ static void unhash_nsid(struct net *net, struct net *last)
+ idr_remove(&tmp->netns_ids, id);
+ spin_unlock_bh(&tmp->nsid_lock);
+ if (id >= 0)
+- rtnl_net_notifyid(tmp, RTM_DELNSID, id);
++ rtnl_net_notifyid(tmp, RTM_DELNSID, id,
++ GFP_KERNEL);
+ if (tmp == last)
+ break;
+ }
+@@ -764,7 +766,7 @@ static int rtnl_net_newid(struct sk_buff *skb, struct nlmsghdr *nlh,
+ err = alloc_netid(net, peer, nsid);
+ spin_unlock_bh(&net->nsid_lock);
+ if (err >= 0) {
+- rtnl_net_notifyid(net, RTM_NEWNSID, err);
++ rtnl_net_notifyid(net, RTM_NEWNSID, err, GFP_KERNEL);
+ err = 0;
+ } else if (err == -ENOSPC && nsid >= 0) {
+ err = -EEXIST;
+@@ -1051,7 +1053,7 @@ end:
+ return err < 0 ? err : skb->len;
+ }
+
+-static void rtnl_net_notifyid(struct net *net, int cmd, int id)
++static void rtnl_net_notifyid(struct net *net, int cmd, int id, gfp_t gfp)
+ {
+ struct net_fill_args fillargs = {
+ .cmd = cmd,
+@@ -1060,7 +1062,7 @@ static void rtnl_net_notifyid(struct net *net, int cmd, int id)
+ struct sk_buff *msg;
+ int err = -ENOMEM;
+
+- msg = nlmsg_new(rtnl_net_get_size(), GFP_KERNEL);
++ msg = nlmsg_new(rtnl_net_get_size(), gfp);
+ if (!msg)
+ goto out;
+
+@@ -1068,7 +1070,7 @@ static void rtnl_net_notifyid(struct net *net, int cmd, int id)
+ if (err < 0)
+ goto err_out;
+
+- rtnl_notify(msg, net, 0, RTNLGRP_NSID, NULL, 0);
++ rtnl_notify(msg, net, 0, RTNLGRP_NSID, NULL, gfp);
+ return;
+
+ err_out:
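peernet2id_alloc() and rtnl_net_notifyid() now take the allocation mask from the caller, so paths that run under rcu_read_lock() (the openvswitch vport get/dump later in this patch) can pass GFP_ATOMIC while process-context callers keep GFP_KERNEL. The toy program below only shows the shape of such gfp_t plumbing with stand-in types and names; it is not the kernel API.

#include <stdio.h>
#include <stdlib.h>

typedef unsigned int gfp_t;
#define GFP_KERNEL 0x1u /* caller may sleep */
#define GFP_ATOMIC 0x2u /* caller must not sleep, e.g. under rcu_read_lock() */

/* Stand-in for nlmsg_new(): in the kernel the mask decides whether the
 * allocator is allowed to block. */
static void *msg_alloc(size_t size, gfp_t gfp)
{
	(void)gfp;
	return malloc(size);
}

static int notify_nsid(int id, gfp_t gfp)
{
	void *msg = msg_alloc(128, gfp);

	if (!msg)
		return -1;
	printf("notify nsid %d with %s\n", id,
	       gfp == GFP_ATOMIC ? "GFP_ATOMIC" : "GFP_KERNEL");
	free(msg);
	return 0;
}

int main(void)
{
	notify_nsid(1, GFP_KERNEL); /* process context, e.g. rtnl_net_newid() */
	notify_nsid(2, GFP_ATOMIC); /* atomic context, e.g. a dump under RCU */
	return 0;
}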
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index 1ee6460f8275..868a768f7300 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -1523,7 +1523,7 @@ static noinline_for_stack int nla_put_ifalias(struct sk_buff *skb,
+
+ static int rtnl_fill_link_netnsid(struct sk_buff *skb,
+ const struct net_device *dev,
+- struct net *src_net)
++ struct net *src_net, gfp_t gfp)
+ {
+ bool put_iflink = false;
+
+@@ -1531,7 +1531,7 @@ static int rtnl_fill_link_netnsid(struct sk_buff *skb,
+ struct net *link_net = dev->rtnl_link_ops->get_link_net(dev);
+
+ if (!net_eq(dev_net(dev), link_net)) {
+- int id = peernet2id_alloc(src_net, link_net);
++ int id = peernet2id_alloc(src_net, link_net, gfp);
+
+ if (nla_put_s32(skb, IFLA_LINK_NETNSID, id))
+ return -EMSGSIZE;
+@@ -1589,7 +1589,7 @@ static int rtnl_fill_ifinfo(struct sk_buff *skb,
+ int type, u32 pid, u32 seq, u32 change,
+ unsigned int flags, u32 ext_filter_mask,
+ u32 event, int *new_nsid, int new_ifindex,
+- int tgt_netnsid)
++ int tgt_netnsid, gfp_t gfp)
+ {
+ struct ifinfomsg *ifm;
+ struct nlmsghdr *nlh;
+@@ -1681,7 +1681,7 @@ static int rtnl_fill_ifinfo(struct sk_buff *skb,
+ goto nla_put_failure;
+ }
+
+- if (rtnl_fill_link_netnsid(skb, dev, src_net))
++ if (rtnl_fill_link_netnsid(skb, dev, src_net, gfp))
+ goto nla_put_failure;
+
+ if (new_nsid &&
+@@ -2001,7 +2001,7 @@ walk_entries:
+ NETLINK_CB(cb->skb).portid,
+ nlh->nlmsg_seq, 0, flags,
+ ext_filter_mask, 0, NULL, 0,
+- netnsid);
++ netnsid, GFP_KERNEL);
+
+ if (err < 0) {
+ if (likely(skb->len))
+@@ -3359,7 +3359,7 @@ static int rtnl_getlink(struct sk_buff *skb, struct nlmsghdr *nlh,
+ err = rtnl_fill_ifinfo(nskb, dev, net,
+ RTM_NEWLINK, NETLINK_CB(skb).portid,
+ nlh->nlmsg_seq, 0, 0, ext_filter_mask,
+- 0, NULL, 0, netnsid);
++ 0, NULL, 0, netnsid, GFP_KERNEL);
+ if (err < 0) {
+ /* -EMSGSIZE implies BUG in if_nlmsg_size */
+ WARN_ON(err == -EMSGSIZE);
+@@ -3471,7 +3471,7 @@ struct sk_buff *rtmsg_ifinfo_build_skb(int type, struct net_device *dev,
+
+ err = rtnl_fill_ifinfo(skb, dev, dev_net(dev),
+ type, 0, 0, change, 0, 0, event,
+- new_nsid, new_ifindex, -1);
++ new_nsid, new_ifindex, -1, flags);
+ if (err < 0) {
+ /* -EMSGSIZE implies BUG in if_nlmsg_size() */
+ WARN_ON(err == -EMSGSIZE);
+@@ -3916,7 +3916,7 @@ static int valid_fdb_dump_strict(const struct nlmsghdr *nlh,
+ ndm = nlmsg_data(nlh);
+ if (ndm->ndm_pad1 || ndm->ndm_pad2 || ndm->ndm_state ||
+ ndm->ndm_flags || ndm->ndm_type) {
+- NL_SET_ERR_MSG(extack, "Invalid values in header for fbd dump request");
++ NL_SET_ERR_MSG(extack, "Invalid values in header for fdb dump request");
+ return -EINVAL;
+ }
+
+diff --git a/net/core/sock.c b/net/core/sock.c
+index 3aa93af51d48..b4247635c4a2 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -1125,7 +1125,7 @@ set_rcvbuf:
+ break;
+ }
+ case SO_INCOMING_CPU:
+- sk->sk_incoming_cpu = val;
++ WRITE_ONCE(sk->sk_incoming_cpu, val);
+ break;
+
+ case SO_CNX_ADVICE:
+@@ -1474,7 +1474,7 @@ int sock_getsockopt(struct socket *sock, int level, int optname,
+ break;
+
+ case SO_INCOMING_CPU:
+- v.val = sk->sk_incoming_cpu;
++ v.val = READ_ONCE(sk->sk_incoming_cpu);
+ break;
+
+ case SO_MEMINFO:
+@@ -3593,7 +3593,7 @@ bool sk_busy_loop_end(void *p, unsigned long start_time)
+ {
+ struct sock *sk = p;
+
+- return !skb_queue_empty(&sk->sk_receive_queue) ||
++ return !skb_queue_empty_lockless(&sk->sk_receive_queue) ||
+ sk_busy_loop_timeout(sk, start_time);
+ }
+ EXPORT_SYMBOL(sk_busy_loop_end);
+diff --git a/net/dccp/ipv4.c b/net/dccp/ipv4.c
+index b685bc82f8d0..6b8a602849dd 100644
+--- a/net/dccp/ipv4.c
++++ b/net/dccp/ipv4.c
+@@ -117,7 +117,7 @@ int dccp_v4_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+ inet->inet_daddr,
+ inet->inet_sport,
+ inet->inet_dport);
+- inet->inet_id = dp->dccps_iss ^ jiffies;
++ inet->inet_id = prandom_u32();
+
+ err = dccp_connect(sk);
+ rt = NULL;
+@@ -416,7 +416,7 @@ struct sock *dccp_v4_request_recv_sock(const struct sock *sk,
+ RCU_INIT_POINTER(newinet->inet_opt, rcu_dereference(ireq->ireq_opt));
+ newinet->mc_index = inet_iif(skb);
+ newinet->mc_ttl = ip_hdr(skb)->ttl;
+- newinet->inet_id = jiffies;
++ newinet->inet_id = prandom_u32();
+
+ if (dst == NULL && (dst = inet_csk_route_child_sock(sk, newsk, req)) == NULL)
+ goto put_and_exit;
+diff --git a/net/decnet/af_decnet.c b/net/decnet/af_decnet.c
+index 0ea75286abf4..3349ea81f901 100644
+--- a/net/decnet/af_decnet.c
++++ b/net/decnet/af_decnet.c
+@@ -1205,7 +1205,7 @@ static __poll_t dn_poll(struct file *file, struct socket *sock, poll_table *wai
+ struct dn_scp *scp = DN_SK(sk);
+ __poll_t mask = datagram_poll(file, sock, wait);
+
+- if (!skb_queue_empty(&scp->other_receive_queue))
++ if (!skb_queue_empty_lockless(&scp->other_receive_queue))
+ mask |= EPOLLRDBAND;
+
+ return mask;
+diff --git a/net/dsa/dsa2.c b/net/dsa/dsa2.c
+index 96f787cf9b6e..130f1a343abb 100644
+--- a/net/dsa/dsa2.c
++++ b/net/dsa/dsa2.c
+@@ -46,7 +46,7 @@ static struct dsa_switch_tree *dsa_tree_alloc(int index)
+ dst->index = index;
+
+ INIT_LIST_HEAD(&dst->list);
+- list_add_tail(&dsa_tree_list, &dst->list);
++ list_add_tail(&dst->list, &dsa_tree_list);
+
+ kref_init(&dst->refcount);
+
+diff --git a/net/ipv4/datagram.c b/net/ipv4/datagram.c
+index 9a0fe0c2fa02..4a8550c49202 100644
+--- a/net/ipv4/datagram.c
++++ b/net/ipv4/datagram.c
+@@ -73,7 +73,7 @@ int __ip4_datagram_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len
+ reuseport_has_conns(sk, true);
+ sk->sk_state = TCP_ESTABLISHED;
+ sk_set_txhash(sk);
+- inet->inet_id = jiffies;
++ inet->inet_id = prandom_u32();
+
+ sk_dst_set(sk, &rt->dst);
+ err = 0;
+diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c
+index e8bc939b56dd..fb4162943fae 100644
+--- a/net/ipv4/fib_frontend.c
++++ b/net/ipv4/fib_frontend.c
+@@ -1147,7 +1147,7 @@ void fib_modify_prefix_metric(struct in_ifaddr *ifa, u32 new_metric)
+ if (!(dev->flags & IFF_UP) ||
+ ifa->ifa_flags & (IFA_F_SECONDARY | IFA_F_NOPREFIXROUTE) ||
+ ipv4_is_zeronet(prefix) ||
+- prefix == ifa->ifa_local || ifa->ifa_prefixlen == 32)
++ (prefix == ifa->ifa_local && ifa->ifa_prefixlen == 32))
+ return;
+
+ /* add the new */
+diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
+index 97824864e40d..83fb00153018 100644
+--- a/net/ipv4/inet_hashtables.c
++++ b/net/ipv4/inet_hashtables.c
+@@ -240,7 +240,7 @@ static inline int compute_score(struct sock *sk, struct net *net,
+ return -1;
+
+ score = sk->sk_family == PF_INET ? 2 : 1;
+- if (sk->sk_incoming_cpu == raw_smp_processor_id())
++ if (READ_ONCE(sk->sk_incoming_cpu) == raw_smp_processor_id())
+ score++;
+ }
+ return score;
+diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
+index 52690bb3e40f..10636fb6093e 100644
+--- a/net/ipv4/ip_gre.c
++++ b/net/ipv4/ip_gre.c
+@@ -509,9 +509,9 @@ static void erspan_fb_xmit(struct sk_buff *skb, struct net_device *dev)
+ key = &tun_info->key;
+ if (!(tun_info->key.tun_flags & TUNNEL_ERSPAN_OPT))
+ goto err_free_skb;
+- md = ip_tunnel_info_opts(tun_info);
+- if (!md)
++ if (tun_info->options_len < sizeof(*md))
+ goto err_free_skb;
++ md = ip_tunnel_info_opts(tun_info);
+
+ /* ERSPAN has fixed 8 byte GRE header */
+ version = md->version;
+diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
+index da521790cd63..e780ceab16e1 100644
+--- a/net/ipv4/ip_output.c
++++ b/net/ipv4/ip_output.c
+@@ -645,11 +645,12 @@ void ip_fraglist_prepare(struct sk_buff *skb, struct ip_fraglist_iter *iter)
+ EXPORT_SYMBOL(ip_fraglist_prepare);
+
+ void ip_frag_init(struct sk_buff *skb, unsigned int hlen,
+- unsigned int ll_rs, unsigned int mtu,
++ unsigned int ll_rs, unsigned int mtu, bool DF,
+ struct ip_frag_state *state)
+ {
+ struct iphdr *iph = ip_hdr(skb);
+
++ state->DF = DF;
+ state->hlen = hlen;
+ state->ll_rs = ll_rs;
+ state->mtu = mtu;
+@@ -668,9 +669,6 @@ static void ip_frag_ipcb(struct sk_buff *from, struct sk_buff *to,
+ /* Copy the flags to each fragment. */
+ IPCB(to)->flags = IPCB(from)->flags;
+
+- if (IPCB(from)->flags & IPSKB_FRAG_PMTU)
+- state->iph->frag_off |= htons(IP_DF);
+-
+ /* ANK: dirty, but effective trick. Upgrade options only if
+ * the segment to be fragmented was THE FIRST (otherwise,
+ * options are already fixed) and make it ONCE
+@@ -738,6 +736,8 @@ struct sk_buff *ip_frag_next(struct sk_buff *skb, struct ip_frag_state *state)
+ */
+ iph = ip_hdr(skb2);
+ iph->frag_off = htons((state->offset >> 3));
++ if (state->DF)
++ iph->frag_off |= htons(IP_DF);
+
+ /*
+ * Added AC : If we are fragmenting a fragment that's not the
+@@ -771,6 +771,7 @@ int ip_do_fragment(struct net *net, struct sock *sk, struct sk_buff *skb,
+ struct rtable *rt = skb_rtable(skb);
+ unsigned int mtu, hlen, ll_rs;
+ struct ip_fraglist_iter iter;
++ ktime_t tstamp = skb->tstamp;
+ struct ip_frag_state state;
+ int err = 0;
+
+@@ -846,6 +847,7 @@ int ip_do_fragment(struct net *net, struct sock *sk, struct sk_buff *skb,
+ ip_fraglist_prepare(skb, &iter);
+ }
+
++ skb->tstamp = tstamp;
+ err = output(net, sk, skb);
+
+ if (!err)
+@@ -881,7 +883,8 @@ slow_path:
+ * Fragment the datagram.
+ */
+
+- ip_frag_init(skb, hlen, ll_rs, mtu, &state);
++ ip_frag_init(skb, hlen, ll_rs, mtu, IPCB(skb)->flags & IPSKB_FRAG_PMTU,
++ &state);
+
+ /*
+ * Keep copying data until we run out.
+@@ -900,6 +903,7 @@ slow_path:
+ /*
+ * Put this fragment into the sending queue.
+ */
++ skb2->tstamp = tstamp;
+ err = output(net, sk, skb2);
+ if (err)
+ goto fail;
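Two things happen in the ip_output.c hunks above: every generated fragment now inherits the original skb->tstamp, and the slow path sets IP_DF on each fragment from the new state->DF flag (carried over from IPSKB_FRAG_PMTU) instead of patching the parent header. The sketch below shows only how a fragment's frag_off field is assembled; the helper name and packet sizes are invented, but IP_DF/IP_MF are the real on-wire flag bits of that field.

#include <arpa/inet.h> /* htons()/ntohs() */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define IP_DF 0x4000 /* "don't fragment" bit of iphdr->frag_off */
#define IP_MF 0x2000 /* "more fragments" bit */

static uint16_t build_frag_off(unsigned int byte_offset, bool df, bool more)
{
	uint16_t v = byte_offset >> 3; /* the offset is carried in 8-byte units */

	if (df)
		v |= IP_DF;
	if (more)
		v |= IP_MF;
	return htons(v);
}

int main(void)
{
	/* second fragment of a PMTU-limited packet: offset 1480, DF set, MF set */
	printf("frag_off = 0x%04x\n",
	       (unsigned)ntohs(build_frag_off(1480, true, true)));
	return 0;
}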
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 61082065b26a..cf79ab96c2df 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -584,7 +584,7 @@ __poll_t tcp_poll(struct file *file, struct socket *sock, poll_table *wait)
+ }
+ /* This barrier is coupled with smp_wmb() in tcp_reset() */
+ smp_rmb();
+- if (sk->sk_err || !skb_queue_empty(&sk->sk_error_queue))
++ if (sk->sk_err || !skb_queue_empty_lockless(&sk->sk_error_queue))
+ mask |= EPOLLERR;
+
+ return mask;
+@@ -1961,7 +1961,7 @@ int tcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, int nonblock,
+ if (unlikely(flags & MSG_ERRQUEUE))
+ return inet_recv_error(sk, msg, len, addr_len);
+
+- if (sk_can_busy_loop(sk) && skb_queue_empty(&sk->sk_receive_queue) &&
++ if (sk_can_busy_loop(sk) && skb_queue_empty_lockless(&sk->sk_receive_queue) &&
+ (sk->sk_state == TCP_ESTABLISHED))
+ sk_busy_loop(sk, nonblock);
+
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index d57641cb3477..54320ef35405 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -300,7 +300,7 @@ int tcp_v4_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+ inet->inet_daddr);
+ }
+
+- inet->inet_id = tp->write_seq ^ jiffies;
++ inet->inet_id = prandom_u32();
+
+ if (tcp_fastopen_defer_connect(sk, &err))
+ return err;
+@@ -1443,7 +1443,7 @@ struct sock *tcp_v4_syn_recv_sock(const struct sock *sk, struct sk_buff *skb,
+ inet_csk(newsk)->icsk_ext_hdr_len = 0;
+ if (inet_opt)
+ inet_csk(newsk)->icsk_ext_hdr_len = inet_opt->opt.optlen;
+- newinet->inet_id = newtp->write_seq ^ jiffies;
++ newinet->inet_id = prandom_u32();
+
+ if (!dst) {
+ dst = inet_csk_route_child_sock(sk, newsk, req);
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index 5e5d0575a43c..5487b43b8a56 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -388,7 +388,7 @@ static int compute_score(struct sock *sk, struct net *net,
+ return -1;
+ score += 4;
+
+- if (sk->sk_incoming_cpu == raw_smp_processor_id())
++ if (READ_ONCE(sk->sk_incoming_cpu) == raw_smp_processor_id())
+ score++;
+ return score;
+ }
+@@ -1316,6 +1316,20 @@ static void udp_set_dev_scratch(struct sk_buff *skb)
+ scratch->_tsize_state |= UDP_SKB_IS_STATELESS;
+ }
+
++static void udp_skb_csum_unnecessary_set(struct sk_buff *skb)
++{
++ /* We come here after udp_lib_checksum_complete() returned 0.
++ * This means that __skb_checksum_complete() might have
++ * set skb->csum_valid to 1.
++ * On 64bit platforms, we can set csum_unnecessary
++ * to true, but only if the skb is not shared.
++ */
++#if BITS_PER_LONG == 64
++ if (!skb_shared(skb))
++ udp_skb_scratch(skb)->csum_unnecessary = true;
++#endif
++}
++
+ static int udp_skb_truesize(struct sk_buff *skb)
+ {
+ return udp_skb_scratch(skb)->_tsize_state & ~UDP_SKB_IS_STATELESS;
+@@ -1550,10 +1564,7 @@ static struct sk_buff *__first_packet_length(struct sock *sk,
+ *total += skb->truesize;
+ kfree_skb(skb);
+ } else {
+- /* the csum related bits could be changed, refresh
+- * the scratch area
+- */
+- udp_set_dev_scratch(skb);
++ udp_skb_csum_unnecessary_set(skb);
+ break;
+ }
+ }
+@@ -1577,7 +1588,7 @@ static int first_packet_length(struct sock *sk)
+
+ spin_lock_bh(&rcvq->lock);
+ skb = __first_packet_length(sk, rcvq, &total);
+- if (!skb && !skb_queue_empty(sk_queue)) {
++ if (!skb && !skb_queue_empty_lockless(sk_queue)) {
+ spin_lock(&sk_queue->lock);
+ skb_queue_splice_tail_init(sk_queue, rcvq);
+ spin_unlock(&sk_queue->lock);
+@@ -1650,7 +1661,7 @@ struct sk_buff *__skb_recv_udp(struct sock *sk, unsigned int flags,
+ return skb;
+ }
+
+- if (skb_queue_empty(sk_queue)) {
++ if (skb_queue_empty_lockless(sk_queue)) {
+ spin_unlock_bh(&queue->lock);
+ goto busy_check;
+ }
+@@ -1676,7 +1687,7 @@ busy_check:
+ break;
+
+ sk_busy_loop(sk, flags & MSG_DONTWAIT);
+- } while (!skb_queue_empty(sk_queue));
++ } while (!skb_queue_empty_lockless(sk_queue));
+
+ /* sk_queue is empty, reader_queue may contain peeked packets */
+ } while (timeo &&
+@@ -2712,7 +2723,7 @@ __poll_t udp_poll(struct file *file, struct socket *sock, poll_table *wait)
+ __poll_t mask = datagram_poll(file, sock, wait);
+ struct sock *sk = sock->sk;
+
+- if (!skb_queue_empty(&udp_sk(sk)->reader_queue))
++ if (!skb_queue_empty_lockless(&udp_sk(sk)->reader_queue))
+ mask |= EPOLLIN | EPOLLRDNORM;
+
+ /* Check for false positives due to checksum errors */
+diff --git a/net/ipv6/inet6_hashtables.c b/net/ipv6/inet6_hashtables.c
+index cf60fae9533b..fbe9d4295eac 100644
+--- a/net/ipv6/inet6_hashtables.c
++++ b/net/ipv6/inet6_hashtables.c
+@@ -105,7 +105,7 @@ static inline int compute_score(struct sock *sk, struct net *net,
+ return -1;
+
+ score = 1;
+- if (sk->sk_incoming_cpu == raw_smp_processor_id())
++ if (READ_ONCE(sk->sk_incoming_cpu) == raw_smp_processor_id())
+ score++;
+ }
+ return score;
+diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
+index d5779d6a6065..4efc272c6027 100644
+--- a/net/ipv6/ip6_gre.c
++++ b/net/ipv6/ip6_gre.c
+@@ -980,9 +980,9 @@ static netdev_tx_t ip6erspan_tunnel_xmit(struct sk_buff *skb,
+ dsfield = key->tos;
+ if (!(tun_info->key.tun_flags & TUNNEL_ERSPAN_OPT))
+ goto tx_err;
+- md = ip_tunnel_info_opts(tun_info);
+- if (!md)
++ if (tun_info->options_len < sizeof(*md))
+ goto tx_err;
++ md = ip_tunnel_info_opts(tun_info);
+
+ tun_id = tunnel_id_to_key32(key->tun_id);
+ if (md->version == 1) {
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index 8e49fd62eea9..e71568f730f9 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -768,6 +768,7 @@ int ip6_fragment(struct net *net, struct sock *sk, struct sk_buff *skb,
+ inet6_sk(skb->sk) : NULL;
+ struct ip6_frag_state state;
+ unsigned int mtu, hlen, nexthdr_offset;
++ ktime_t tstamp = skb->tstamp;
+ int hroom, err = 0;
+ __be32 frag_id;
+ u8 *prevhdr, nexthdr = 0;
+@@ -855,6 +856,7 @@ int ip6_fragment(struct net *net, struct sock *sk, struct sk_buff *skb,
+ if (iter.frag)
+ ip6_fraglist_prepare(skb, &iter);
+
++ skb->tstamp = tstamp;
+ err = output(net, sk, skb);
+ if (!err)
+ IP6_INC_STATS(net, ip6_dst_idev(&rt->dst),
+@@ -913,6 +915,7 @@ slow_path:
+ /*
+ * Put this fragment into the sending queue.
+ */
++ frag->tstamp = tstamp;
+ err = output(net, sk, frag);
+ if (err)
+ goto fail;
+diff --git a/net/ipv6/netfilter.c b/net/ipv6/netfilter.c
+index 61819ed858b1..7e75d01464fb 100644
+--- a/net/ipv6/netfilter.c
++++ b/net/ipv6/netfilter.c
+@@ -119,6 +119,7 @@ int br_ip6_fragment(struct net *net, struct sock *sk, struct sk_buff *skb,
+ struct sk_buff *))
+ {
+ int frag_max_size = BR_INPUT_SKB_CB(skb)->frag_max_size;
++ ktime_t tstamp = skb->tstamp;
+ struct ip6_frag_state state;
+ u8 *prevhdr, nexthdr = 0;
+ unsigned int mtu, hlen;
+@@ -183,6 +184,7 @@ int br_ip6_fragment(struct net *net, struct sock *sk, struct sk_buff *skb,
+ if (iter.frag)
+ ip6_fraglist_prepare(skb, &iter);
+
++ skb->tstamp = tstamp;
+ err = output(net, sk, data, skb);
+ if (err || !iter.frag)
+ break;
+@@ -215,6 +217,7 @@ slow_path:
+ goto blackhole;
+ }
+
++ skb2->tstamp = tstamp;
+ err = output(net, sk, data, skb2);
+ if (err)
+ goto blackhole;
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index 0454a8a3b39c..bea3bdad0369 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -135,7 +135,7 @@ static int compute_score(struct sock *sk, struct net *net,
+ return -1;
+ score++;
+
+- if (sk->sk_incoming_cpu == raw_smp_processor_id())
++ if (READ_ONCE(sk->sk_incoming_cpu) == raw_smp_processor_id())
+ score++;
+
+ return score;
+diff --git a/net/nfc/llcp_sock.c b/net/nfc/llcp_sock.c
+index ccdd790e163a..28604414dec1 100644
+--- a/net/nfc/llcp_sock.c
++++ b/net/nfc/llcp_sock.c
+@@ -554,11 +554,11 @@ static __poll_t llcp_sock_poll(struct file *file, struct socket *sock,
+ if (sk->sk_state == LLCP_LISTEN)
+ return llcp_accept_poll(sk);
+
+- if (sk->sk_err || !skb_queue_empty(&sk->sk_error_queue))
++ if (sk->sk_err || !skb_queue_empty_lockless(&sk->sk_error_queue))
+ mask |= EPOLLERR |
+ (sock_flag(sk, SOCK_SELECT_ERR_QUEUE) ? EPOLLPRI : 0);
+
+- if (!skb_queue_empty(&sk->sk_receive_queue))
++ if (!skb_queue_empty_lockless(&sk->sk_receive_queue))
+ mask |= EPOLLIN | EPOLLRDNORM;
+
+ if (sk->sk_state == LLCP_CLOSED)
+diff --git a/net/openvswitch/datapath.c b/net/openvswitch/datapath.c
+index f1e7041a5a60..43aeca12208c 100644
+--- a/net/openvswitch/datapath.c
++++ b/net/openvswitch/datapath.c
+@@ -1850,7 +1850,7 @@ static struct genl_family dp_datapath_genl_family __ro_after_init = {
+ /* Called with ovs_mutex or RCU read lock. */
+ static int ovs_vport_cmd_fill_info(struct vport *vport, struct sk_buff *skb,
+ struct net *net, u32 portid, u32 seq,
+- u32 flags, u8 cmd)
++ u32 flags, u8 cmd, gfp_t gfp)
+ {
+ struct ovs_header *ovs_header;
+ struct ovs_vport_stats vport_stats;
+@@ -1871,7 +1871,7 @@ static int ovs_vport_cmd_fill_info(struct vport *vport, struct sk_buff *skb,
+ goto nla_put_failure;
+
+ if (!net_eq(net, dev_net(vport->dev))) {
+- int id = peernet2id_alloc(net, dev_net(vport->dev));
++ int id = peernet2id_alloc(net, dev_net(vport->dev), gfp);
+
+ if (nla_put_s32(skb, OVS_VPORT_ATTR_NETNSID, id))
+ goto nla_put_failure;
+@@ -1912,11 +1912,12 @@ struct sk_buff *ovs_vport_cmd_build_info(struct vport *vport, struct net *net,
+ struct sk_buff *skb;
+ int retval;
+
+- skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_ATOMIC);
++ skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
+ if (!skb)
+ return ERR_PTR(-ENOMEM);
+
+- retval = ovs_vport_cmd_fill_info(vport, skb, net, portid, seq, 0, cmd);
++ retval = ovs_vport_cmd_fill_info(vport, skb, net, portid, seq, 0, cmd,
++ GFP_KERNEL);
+ BUG_ON(retval < 0);
+
+ return skb;
+@@ -2058,7 +2059,7 @@ restart:
+
+ err = ovs_vport_cmd_fill_info(vport, reply, genl_info_net(info),
+ info->snd_portid, info->snd_seq, 0,
+- OVS_VPORT_CMD_NEW);
++ OVS_VPORT_CMD_NEW, GFP_KERNEL);
+
+ new_headroom = netdev_get_fwd_headroom(vport->dev);
+
+@@ -2119,7 +2120,7 @@ static int ovs_vport_cmd_set(struct sk_buff *skb, struct genl_info *info)
+
+ err = ovs_vport_cmd_fill_info(vport, reply, genl_info_net(info),
+ info->snd_portid, info->snd_seq, 0,
+- OVS_VPORT_CMD_SET);
++ OVS_VPORT_CMD_SET, GFP_KERNEL);
+ BUG_ON(err < 0);
+
+ ovs_unlock();
+@@ -2159,7 +2160,7 @@ static int ovs_vport_cmd_del(struct sk_buff *skb, struct genl_info *info)
+
+ err = ovs_vport_cmd_fill_info(vport, reply, genl_info_net(info),
+ info->snd_portid, info->snd_seq, 0,
+- OVS_VPORT_CMD_DEL);
++ OVS_VPORT_CMD_DEL, GFP_KERNEL);
+ BUG_ON(err < 0);
+
+ /* the vport deletion may trigger dp headroom update */
+@@ -2206,7 +2207,7 @@ static int ovs_vport_cmd_get(struct sk_buff *skb, struct genl_info *info)
+ goto exit_unlock_free;
+ err = ovs_vport_cmd_fill_info(vport, reply, genl_info_net(info),
+ info->snd_portid, info->snd_seq, 0,
+- OVS_VPORT_CMD_GET);
++ OVS_VPORT_CMD_GET, GFP_ATOMIC);
+ BUG_ON(err < 0);
+ rcu_read_unlock();
+
+@@ -2242,7 +2243,8 @@ static int ovs_vport_cmd_dump(struct sk_buff *skb, struct netlink_callback *cb)
+ NETLINK_CB(cb->skb).portid,
+ cb->nlh->nlmsg_seq,
+ NLM_F_MULTI,
+- OVS_VPORT_CMD_GET) < 0)
++ OVS_VPORT_CMD_GET,
++ GFP_ATOMIC) < 0)
+ goto out;
+
+ j++;
+diff --git a/net/phonet/socket.c b/net/phonet/socket.c
+index 96ea9f254ae9..76d499f6af9a 100644
+--- a/net/phonet/socket.c
++++ b/net/phonet/socket.c
+@@ -338,9 +338,9 @@ static __poll_t pn_socket_poll(struct file *file, struct socket *sock,
+
+ if (sk->sk_state == TCP_CLOSE)
+ return EPOLLERR;
+- if (!skb_queue_empty(&sk->sk_receive_queue))
++ if (!skb_queue_empty_lockless(&sk->sk_receive_queue))
+ mask |= EPOLLIN | EPOLLRDNORM;
+- if (!skb_queue_empty(&pn->ctrlreq_queue))
++ if (!skb_queue_empty_lockless(&pn->ctrlreq_queue))
+ mask |= EPOLLPRI;
+ if (!mask && sk->sk_state == TCP_CLOSE_WAIT)
+ return EPOLLHUP;
+diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
+index 8051dfdcf26d..b23a13c69019 100644
+--- a/net/rxrpc/ar-internal.h
++++ b/net/rxrpc/ar-internal.h
+@@ -596,6 +596,7 @@ struct rxrpc_call {
+ int debug_id; /* debug ID for printks */
+ unsigned short rx_pkt_offset; /* Current recvmsg packet offset */
+ unsigned short rx_pkt_len; /* Current recvmsg packet len */
++ bool rx_pkt_last; /* Current recvmsg packet is last */
+
+ /* Rx/Tx circular buffer, depending on phase.
+ *
+diff --git a/net/rxrpc/recvmsg.c b/net/rxrpc/recvmsg.c
+index 3b0becb12041..08d4b4b9283a 100644
+--- a/net/rxrpc/recvmsg.c
++++ b/net/rxrpc/recvmsg.c
+@@ -267,11 +267,13 @@ static int rxrpc_verify_packet(struct rxrpc_call *call, struct sk_buff *skb,
+ */
+ static int rxrpc_locate_data(struct rxrpc_call *call, struct sk_buff *skb,
+ u8 *_annotation,
+- unsigned int *_offset, unsigned int *_len)
++ unsigned int *_offset, unsigned int *_len,
++ bool *_last)
+ {
+ struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
+ unsigned int offset = sizeof(struct rxrpc_wire_header);
+ unsigned int len;
++ bool last = false;
+ int ret;
+ u8 annotation = *_annotation;
+ u8 subpacket = annotation & RXRPC_RX_ANNO_SUBPACKET;
+@@ -281,6 +283,8 @@ static int rxrpc_locate_data(struct rxrpc_call *call, struct sk_buff *skb,
+ len = skb->len - offset;
+ if (subpacket < sp->nr_subpackets - 1)
+ len = RXRPC_JUMBO_DATALEN;
++ else if (sp->rx_flags & RXRPC_SKB_INCL_LAST)
++ last = true;
+
+ if (!(annotation & RXRPC_RX_ANNO_VERIFIED)) {
+ ret = rxrpc_verify_packet(call, skb, annotation, offset, len);
+@@ -291,6 +295,7 @@ static int rxrpc_locate_data(struct rxrpc_call *call, struct sk_buff *skb,
+
+ *_offset = offset;
+ *_len = len;
++ *_last = last;
+ call->conn->security->locate_data(call, skb, _offset, _len);
+ return 0;
+ }
+@@ -309,7 +314,7 @@ static int rxrpc_recvmsg_data(struct socket *sock, struct rxrpc_call *call,
+ rxrpc_serial_t serial;
+ rxrpc_seq_t hard_ack, top, seq;
+ size_t remain;
+- bool last;
++ bool rx_pkt_last;
+ unsigned int rx_pkt_offset, rx_pkt_len;
+ int ix, copy, ret = -EAGAIN, ret2;
+
+@@ -319,6 +324,7 @@ static int rxrpc_recvmsg_data(struct socket *sock, struct rxrpc_call *call,
+
+ rx_pkt_offset = call->rx_pkt_offset;
+ rx_pkt_len = call->rx_pkt_len;
++ rx_pkt_last = call->rx_pkt_last;
+
+ if (call->state >= RXRPC_CALL_SERVER_ACK_REQUEST) {
+ seq = call->rx_hard_ack;
+@@ -329,6 +335,7 @@ static int rxrpc_recvmsg_data(struct socket *sock, struct rxrpc_call *call,
+ /* Barriers against rxrpc_input_data(). */
+ hard_ack = call->rx_hard_ack;
+ seq = hard_ack + 1;
++
+ while (top = smp_load_acquire(&call->rx_top),
+ before_eq(seq, top)
+ ) {
+@@ -356,7 +363,8 @@ static int rxrpc_recvmsg_data(struct socket *sock, struct rxrpc_call *call,
+ if (rx_pkt_offset == 0) {
+ ret2 = rxrpc_locate_data(call, skb,
+ &call->rxtx_annotations[ix],
+- &rx_pkt_offset, &rx_pkt_len);
++ &rx_pkt_offset, &rx_pkt_len,
++ &rx_pkt_last);
+ trace_rxrpc_recvmsg(call, rxrpc_recvmsg_next, seq,
+ rx_pkt_offset, rx_pkt_len, ret2);
+ if (ret2 < 0) {
+@@ -396,13 +404,12 @@ static int rxrpc_recvmsg_data(struct socket *sock, struct rxrpc_call *call,
+ }
+
+ /* The whole packet has been transferred. */
+- last = sp->hdr.flags & RXRPC_LAST_PACKET;
+ if (!(flags & MSG_PEEK))
+ rxrpc_rotate_rx_window(call);
+ rx_pkt_offset = 0;
+ rx_pkt_len = 0;
+
+- if (last) {
++ if (rx_pkt_last) {
+ ASSERTCMP(seq, ==, READ_ONCE(call->rx_top));
+ ret = 1;
+ goto out;
+@@ -415,6 +422,7 @@ out:
+ if (!(flags & MSG_PEEK)) {
+ call->rx_pkt_offset = rx_pkt_offset;
+ call->rx_pkt_len = rx_pkt_len;
++ call->rx_pkt_last = rx_pkt_last;
+ }
+ done:
+ trace_rxrpc_recvmsg(call, rxrpc_recvmsg_data_return, seq,
+diff --git a/net/sched/sch_hhf.c b/net/sched/sch_hhf.c
+index 23cd1c873a2c..be35f03b657b 100644
+--- a/net/sched/sch_hhf.c
++++ b/net/sched/sch_hhf.c
+@@ -5,11 +5,11 @@
+ * Copyright (C) 2013 Nandita Dukkipati <nanditad@google.com>
+ */
+
+-#include <linux/jhash.h>
+ #include <linux/jiffies.h>
+ #include <linux/module.h>
+ #include <linux/skbuff.h>
+ #include <linux/vmalloc.h>
++#include <linux/siphash.h>
+ #include <net/pkt_sched.h>
+ #include <net/sock.h>
+
+@@ -126,7 +126,7 @@ struct wdrr_bucket {
+
+ struct hhf_sched_data {
+ struct wdrr_bucket buckets[WDRR_BUCKET_CNT];
+- u32 perturbation; /* hash perturbation */
++ siphash_key_t perturbation; /* hash perturbation */
+ u32 quantum; /* psched_mtu(qdisc_dev(sch)); */
+ u32 drop_overlimit; /* number of times max qdisc packet
+ * limit was hit
+@@ -264,7 +264,7 @@ static enum wdrr_bucket_idx hhf_classify(struct sk_buff *skb, struct Qdisc *sch)
+ }
+
+ /* Get hashed flow-id of the skb. */
+- hash = skb_get_hash_perturb(skb, q->perturbation);
++ hash = skb_get_hash_perturb(skb, &q->perturbation);
+
+ /* Check if this packet belongs to an already established HH flow. */
+ flow_pos = hash & HHF_BIT_MASK;
+@@ -582,7 +582,7 @@ static int hhf_init(struct Qdisc *sch, struct nlattr *opt,
+
+ sch->limit = 1000;
+ q->quantum = psched_mtu(qdisc_dev(sch));
+- q->perturbation = prandom_u32();
++ get_random_bytes(&q->perturbation, sizeof(q->perturbation));
+ INIT_LIST_HEAD(&q->new_buckets);
+ INIT_LIST_HEAD(&q->old_buckets);
+
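sch_hhf above (and sfb/sfq below) stop deriving their flow hash from a single prandom_u32() seed and instead keep a full siphash_key_t filled by get_random_bytes(), so the mapping from packets to buckets can no longer be recovered from a guessable 32-bit value. A userspace sketch of just the seeding step; getrandom(2) stands in for get_random_bytes(), and the two-u64 struct mirrors the layout of siphash_key_t.

#include <stdint.h>
#include <stdio.h>
#include <sys/random.h> /* getrandom(), glibc >= 2.25 */

typedef struct { uint64_t key[2]; } siphash_key_t;

int main(void)
{
	siphash_key_t perturbation;

	if (getrandom(&perturbation, sizeof(perturbation), 0) < 0)
		return 1;
	/* skb_get_hash_perturb(skb, &perturbation) now mixes these 128 bits
	 * through SipHash instead of jhash'ing with a 32-bit seed. */
	printf("perturbation = %016llx%016llx\n",
	       (unsigned long long)perturbation.key[0],
	       (unsigned long long)perturbation.key[1]);
	return 0;
}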
+diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
+index 0e44039e729c..42e557d48e4e 100644
+--- a/net/sched/sch_netem.c
++++ b/net/sched/sch_netem.c
+@@ -509,6 +509,7 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ if (skb->ip_summed == CHECKSUM_PARTIAL &&
+ skb_checksum_help(skb)) {
+ qdisc_drop(skb, sch, to_free);
++ skb = NULL;
+ goto finish_segs;
+ }
+
+@@ -593,9 +594,10 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ finish_segs:
+ if (segs) {
+ unsigned int len, last_len;
+- int nb = 0;
++ int nb;
+
+- len = skb->len;
++ len = skb ? skb->len : 0;
++ nb = skb ? 1 : 0;
+
+ while (segs) {
+ skb2 = segs->next;
+@@ -612,7 +614,10 @@ finish_segs:
+ }
+ segs = skb2;
+ }
+- qdisc_tree_reduce_backlog(sch, -nb, prev_len - len);
++ /* Parent qdiscs accounted for 1 skb of size @prev_len */
++ qdisc_tree_reduce_backlog(sch, -(nb - 1), -(len - prev_len));
++ } else if (!skb) {
++ return NET_XMIT_DROP;
+ }
+ return NET_XMIT_SUCCESS;
+ }
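The netem hunk above repairs the parent-qdisc accounting after GSO segmentation: the parents already counted one skb of prev_len bytes, so once nb segments totalling len bytes have actually been enqueued the correction is (nb - 1) packets and (len - prev_len) bytes, and if even the first segment was dropped the function now returns NET_XMIT_DROP. A worked numeric example with made-up sizes:

#include <stdio.h>

int main(void)
{
	int prev_len = 3000;            /* size the parent qdiscs accounted for */
	int seg_len[] = { 1500, 1514 }; /* what netem actually enqueued */
	int nb = 0, len = 0;

	for (int i = 0; i < 2; i++) {
		len += seg_len[i];
		nb++;
	}
	/* qdisc_tree_reduce_backlog(sch, -(nb - 1), -(len - prev_len));
	 * the negative arguments make the parents' counters grow. */
	printf("extra packets: %d, extra bytes: %d\n", nb - 1, len - prev_len);
	return 0;
}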
+diff --git a/net/sched/sch_sfb.c b/net/sched/sch_sfb.c
+index d448fe3068e5..4074c50ac3d7 100644
+--- a/net/sched/sch_sfb.c
++++ b/net/sched/sch_sfb.c
+@@ -18,7 +18,7 @@
+ #include <linux/errno.h>
+ #include <linux/skbuff.h>
+ #include <linux/random.h>
+-#include <linux/jhash.h>
++#include <linux/siphash.h>
+ #include <net/ip.h>
+ #include <net/pkt_sched.h>
+ #include <net/pkt_cls.h>
+@@ -45,7 +45,7 @@ struct sfb_bucket {
+ * (Section 4.4 of SFB reference : moving hash functions)
+ */
+ struct sfb_bins {
+- u32 perturbation; /* jhash perturbation */
++ siphash_key_t perturbation; /* siphash key */
+ struct sfb_bucket bins[SFB_LEVELS][SFB_NUMBUCKETS];
+ };
+
+@@ -217,7 +217,8 @@ static u32 sfb_compute_qlen(u32 *prob_r, u32 *avgpm_r, const struct sfb_sched_da
+
+ static void sfb_init_perturbation(u32 slot, struct sfb_sched_data *q)
+ {
+- q->bins[slot].perturbation = prandom_u32();
++ get_random_bytes(&q->bins[slot].perturbation,
++ sizeof(q->bins[slot].perturbation));
+ }
+
+ static void sfb_swap_slot(struct sfb_sched_data *q)
+@@ -314,9 +315,9 @@ static int sfb_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ /* If using external classifiers, get result and record it. */
+ if (!sfb_classify(skb, fl, &ret, &salt))
+ goto other_drop;
+- sfbhash = jhash_1word(salt, q->bins[slot].perturbation);
++ sfbhash = siphash_1u32(salt, &q->bins[slot].perturbation);
+ } else {
+- sfbhash = skb_get_hash_perturb(skb, q->bins[slot].perturbation);
++ sfbhash = skb_get_hash_perturb(skb, &q->bins[slot].perturbation);
+ }
+
+
+@@ -352,7 +353,7 @@ static int sfb_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ /* Inelastic flow */
+ if (q->double_buffering) {
+ sfbhash = skb_get_hash_perturb(skb,
+- q->bins[slot].perturbation);
++ &q->bins[slot].perturbation);
+ if (!sfbhash)
+ sfbhash = 1;
+ sfb_skb_cb(skb)->hashes[slot] = sfbhash;
+diff --git a/net/sched/sch_sfq.c b/net/sched/sch_sfq.c
+index 68404a9d2ce4..c787d4d46017 100644
+--- a/net/sched/sch_sfq.c
++++ b/net/sched/sch_sfq.c
+@@ -14,7 +14,7 @@
+ #include <linux/errno.h>
+ #include <linux/init.h>
+ #include <linux/skbuff.h>
+-#include <linux/jhash.h>
++#include <linux/siphash.h>
+ #include <linux/slab.h>
+ #include <linux/vmalloc.h>
+ #include <net/netlink.h>
+@@ -117,7 +117,7 @@ struct sfq_sched_data {
+ u8 headdrop;
+ u8 maxdepth; /* limit of packets per flow */
+
+- u32 perturbation;
++ siphash_key_t perturbation;
+ u8 cur_depth; /* depth of longest slot */
+ u8 flags;
+ unsigned short scaled_quantum; /* SFQ_ALLOT_SIZE(quantum) */
+@@ -157,7 +157,7 @@ static inline struct sfq_head *sfq_dep_head(struct sfq_sched_data *q, sfq_index
+ static unsigned int sfq_hash(const struct sfq_sched_data *q,
+ const struct sk_buff *skb)
+ {
+- return skb_get_hash_perturb(skb, q->perturbation) & (q->divisor - 1);
++ return skb_get_hash_perturb(skb, &q->perturbation) & (q->divisor - 1);
+ }
+
+ static unsigned int sfq_classify(struct sk_buff *skb, struct Qdisc *sch,
+@@ -607,9 +607,11 @@ static void sfq_perturbation(struct timer_list *t)
+ struct sfq_sched_data *q = from_timer(q, t, perturb_timer);
+ struct Qdisc *sch = q->sch;
+ spinlock_t *root_lock = qdisc_lock(qdisc_root_sleeping(sch));
++ siphash_key_t nkey;
+
++ get_random_bytes(&nkey, sizeof(nkey));
+ spin_lock(root_lock);
+- q->perturbation = prandom_u32();
++ q->perturbation = nkey;
+ if (!q->filter_list && q->tail)
+ sfq_rehash(sch);
+ spin_unlock(root_lock);
+@@ -688,7 +690,7 @@ static int sfq_change(struct Qdisc *sch, struct nlattr *opt)
+ del_timer(&q->perturb_timer);
+ if (q->perturb_period) {
+ mod_timer(&q->perturb_timer, jiffies + q->perturb_period);
+- q->perturbation = prandom_u32();
++ get_random_bytes(&q->perturbation, sizeof(q->perturbation));
+ }
+ sch_tree_unlock(sch);
+ kfree(p);
+@@ -745,7 +747,7 @@ static int sfq_init(struct Qdisc *sch, struct nlattr *opt,
+ q->quantum = psched_mtu(qdisc_dev(sch));
+ q->scaled_quantum = SFQ_ALLOT_SIZE(q->quantum);
+ q->perturb_period = 0;
+- q->perturbation = prandom_u32();
++ get_random_bytes(&q->perturbation, sizeof(q->perturbation));
+
+ if (opt) {
+ int err = sfq_change(sch, opt);
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index 8fd7b0e6ce9f..b81d7673634c 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -8329,7 +8329,7 @@ __poll_t sctp_poll(struct file *file, struct socket *sock, poll_table *wait)
+ mask = 0;
+
+ /* Is there any exceptional events? */
+- if (sk->sk_err || !skb_queue_empty(&sk->sk_error_queue))
++ if (sk->sk_err || !skb_queue_empty_lockless(&sk->sk_error_queue))
+ mask |= EPOLLERR |
+ (sock_flag(sk, SOCK_SELECT_ERR_QUEUE) ? EPOLLPRI : 0);
+ if (sk->sk_shutdown & RCV_SHUTDOWN)
+@@ -8338,7 +8338,7 @@ __poll_t sctp_poll(struct file *file, struct socket *sock, poll_table *wait)
+ mask |= EPOLLHUP;
+
+ /* Is it readable? Reconsider this code with TCP-style support. */
+- if (!skb_queue_empty(&sk->sk_receive_queue))
++ if (!skb_queue_empty_lockless(&sk->sk_receive_queue))
+ mask |= EPOLLIN | EPOLLRDNORM;
+
+ /* The association is either gone or not ready. */
+@@ -8724,7 +8724,7 @@ struct sk_buff *sctp_skb_recv_datagram(struct sock *sk, int flags,
+ if (sk_can_busy_loop(sk)) {
+ sk_busy_loop(sk, noblock);
+
+- if (!skb_queue_empty(&sk->sk_receive_queue))
++ if (!skb_queue_empty_lockless(&sk->sk_receive_queue))
+ continue;
+ }
+
+@@ -9159,7 +9159,7 @@ void sctp_copy_sock(struct sock *newsk, struct sock *sk,
+ newinet->inet_rcv_saddr = inet->inet_rcv_saddr;
+ newinet->inet_dport = htons(asoc->peer.port);
+ newinet->pmtudisc = inet->pmtudisc;
+- newinet->inet_id = asoc->next_tsn ^ jiffies;
++ newinet->inet_id = prandom_u32();
+
+ newinet->uc_ttl = inet->uc_ttl;
+ newinet->mc_loop = 1;
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index 5b932583e407..47946f489fd4 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -123,6 +123,12 @@ struct proto smc_proto6 = {
+ };
+ EXPORT_SYMBOL_GPL(smc_proto6);
+
++static void smc_restore_fallback_changes(struct smc_sock *smc)
++{
++ smc->clcsock->file->private_data = smc->sk.sk_socket;
++ smc->clcsock->file = NULL;
++}
++
+ static int __smc_release(struct smc_sock *smc)
+ {
+ struct sock *sk = &smc->sk;
+@@ -141,6 +147,7 @@ static int __smc_release(struct smc_sock *smc)
+ }
+ sk->sk_state = SMC_CLOSED;
+ sk->sk_state_change(sk);
++ smc_restore_fallback_changes(smc);
+ }
+
+ sk->sk_prot->unhash(sk);
+@@ -700,8 +707,6 @@ static int __smc_connect(struct smc_sock *smc)
+ int smc_type;
+ int rc = 0;
+
+- sock_hold(&smc->sk); /* sock put in passive closing */
+-
+ if (smc->use_fallback)
+ return smc_connect_fallback(smc, smc->fallback_rsn);
+
+@@ -846,6 +851,8 @@ static int smc_connect(struct socket *sock, struct sockaddr *addr,
+ rc = kernel_connect(smc->clcsock, addr, alen, flags);
+ if (rc && rc != -EINPROGRESS)
+ goto out;
++
++ sock_hold(&smc->sk); /* sock put in passive closing */
+ if (flags & O_NONBLOCK) {
+ if (schedule_work(&smc->connect_work))
+ smc->connect_nonblock = 1;
+@@ -1291,8 +1298,8 @@ static void smc_listen_work(struct work_struct *work)
+ /* check if RDMA is available */
+ if (!ism_supported) { /* SMC_TYPE_R or SMC_TYPE_B */
+ /* prepare RDMA check */
+- memset(&ini, 0, sizeof(ini));
+ ini.is_smcd = false;
++ ini.ism_dev = NULL;
+ ini.ib_lcl = &pclc->lcl;
+ rc = smc_find_rdma_device(new_smc, &ini);
+ if (rc) {
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index 83ae41d7e554..90ecca988d12 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -740,7 +740,7 @@ static __poll_t tipc_poll(struct file *file, struct socket *sock,
+ /* fall through */
+ case TIPC_LISTEN:
+ case TIPC_CONNECTING:
+- if (!skb_queue_empty(&sk->sk_receive_queue))
++ if (!skb_queue_empty_lockless(&sk->sk_receive_queue))
+ revents |= EPOLLIN | EPOLLRDNORM;
+ break;
+ case TIPC_OPEN:
+@@ -748,7 +748,7 @@ static __poll_t tipc_poll(struct file *file, struct socket *sock,
+ revents |= EPOLLOUT;
+ if (!tipc_sk_type_connectionless(sk))
+ break;
+- if (skb_queue_empty(&sk->sk_receive_queue))
++ if (skb_queue_empty_lockless(&sk->sk_receive_queue))
+ break;
+ revents |= EPOLLIN | EPOLLRDNORM;
+ break;
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index 67e87db5877f..0d8da809bea2 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -2599,7 +2599,7 @@ static __poll_t unix_poll(struct file *file, struct socket *sock, poll_table *wa
+ mask |= EPOLLRDHUP | EPOLLIN | EPOLLRDNORM;
+
+ /* readable? */
+- if (!skb_queue_empty(&sk->sk_receive_queue))
++ if (!skb_queue_empty_lockless(&sk->sk_receive_queue))
+ mask |= EPOLLIN | EPOLLRDNORM;
+
+ /* Connection-based need to check for termination and startup */
+@@ -2628,7 +2628,7 @@ static __poll_t unix_dgram_poll(struct file *file, struct socket *sock,
+ mask = 0;
+
+ /* exceptional events? */
+- if (sk->sk_err || !skb_queue_empty(&sk->sk_error_queue))
++ if (sk->sk_err || !skb_queue_empty_lockless(&sk->sk_error_queue))
+ mask |= EPOLLERR |
+ (sock_flag(sk, SOCK_SELECT_ERR_QUEUE) ? EPOLLPRI : 0);
+
+@@ -2638,7 +2638,7 @@ static __poll_t unix_dgram_poll(struct file *file, struct socket *sock,
+ mask |= EPOLLHUP;
+
+ /* readable? */
+- if (!skb_queue_empty(&sk->sk_receive_queue))
++ if (!skb_queue_empty_lockless(&sk->sk_receive_queue))
+ mask |= EPOLLIN | EPOLLRDNORM;
+
+ /* Connection-based need to check for termination and startup */
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index 2ab43b2bba31..582a3e4dfce2 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -870,7 +870,7 @@ static __poll_t vsock_poll(struct file *file, struct socket *sock,
+ * the queue and write as long as the socket isn't shutdown for
+ * sending.
+ */
+- if (!skb_queue_empty(&sk->sk_receive_queue) ||
++ if (!skb_queue_empty_lockless(&sk->sk_receive_queue) ||
+ (sk->sk_shutdown & RCV_SHUTDOWN)) {
+ mask |= EPOLLIN | EPOLLRDNORM;
+ }
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index b0de3e3b33e5..e1791d01ccc0 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2431,6 +2431,12 @@ static const struct pci_device_id azx_ids[] = {
+ /* Icelake */
+ { PCI_DEVICE(0x8086, 0x34c8),
+ .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
++ /* Jasperlake */
++ { PCI_DEVICE(0x8086, 0x38c8),
++ .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
++ /* Tigerlake */
++ { PCI_DEVICE(0x8086, 0xa0c8),
++ .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
+ /* Elkhart Lake */
+ { PCI_DEVICE(0x8086, 0x4b55),
+ .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
+diff --git a/sound/soc/codecs/msm8916-wcd-digital.c b/sound/soc/codecs/msm8916-wcd-digital.c
+index 1db7e43ec203..5963d170df43 100644
+--- a/sound/soc/codecs/msm8916-wcd-digital.c
++++ b/sound/soc/codecs/msm8916-wcd-digital.c
+@@ -243,6 +243,10 @@ static const char *const rx_mix1_text[] = {
+ "ZERO", "IIR1", "IIR2", "RX1", "RX2", "RX3"
+ };
+
++static const char * const rx_mix2_text[] = {
++ "ZERO", "IIR1", "IIR2"
++};
++
+ static const char *const dec_mux_text[] = {
+ "ZERO", "ADC1", "ADC2", "ADC3", "DMIC1", "DMIC2"
+ };
+@@ -270,6 +274,16 @@ static const struct soc_enum rx3_mix1_inp_enum[] = {
+ SOC_ENUM_SINGLE(LPASS_CDC_CONN_RX3_B2_CTL, 0, 6, rx_mix1_text),
+ };
+
++/* RX1 MIX2 */
++static const struct soc_enum rx_mix2_inp1_chain_enum =
++ SOC_ENUM_SINGLE(LPASS_CDC_CONN_RX1_B3_CTL,
++ 0, 3, rx_mix2_text);
++
++/* RX2 MIX2 */
++static const struct soc_enum rx2_mix2_inp1_chain_enum =
++ SOC_ENUM_SINGLE(LPASS_CDC_CONN_RX2_B3_CTL,
++ 0, 3, rx_mix2_text);
++
+ /* DEC */
+ static const struct soc_enum dec1_mux_enum = SOC_ENUM_SINGLE(
+ LPASS_CDC_CONN_TX_B1_CTL, 0, 6, dec_mux_text);
+@@ -309,6 +323,10 @@ static const struct snd_kcontrol_new rx3_mix1_inp2_mux = SOC_DAPM_ENUM(
+ "RX3 MIX1 INP2 Mux", rx3_mix1_inp_enum[1]);
+ static const struct snd_kcontrol_new rx3_mix1_inp3_mux = SOC_DAPM_ENUM(
+ "RX3 MIX1 INP3 Mux", rx3_mix1_inp_enum[2]);
++static const struct snd_kcontrol_new rx1_mix2_inp1_mux = SOC_DAPM_ENUM(
++ "RX1 MIX2 INP1 Mux", rx_mix2_inp1_chain_enum);
++static const struct snd_kcontrol_new rx2_mix2_inp1_mux = SOC_DAPM_ENUM(
++ "RX2 MIX2 INP1 Mux", rx2_mix2_inp1_chain_enum);
+
+ /* Digital Gain control -38.4 dB to +38.4 dB in 0.3 dB steps */
+ static const DECLARE_TLV_DB_SCALE(digital_gain, -3840, 30, 0);
+@@ -740,6 +758,10 @@ static const struct snd_soc_dapm_widget msm8916_wcd_digital_dapm_widgets[] = {
+ &rx3_mix1_inp2_mux),
+ SND_SOC_DAPM_MUX("RX3 MIX1 INP3", SND_SOC_NOPM, 0, 0,
+ &rx3_mix1_inp3_mux),
++ SND_SOC_DAPM_MUX("RX1 MIX2 INP1", SND_SOC_NOPM, 0, 0,
++ &rx1_mix2_inp1_mux),
++ SND_SOC_DAPM_MUX("RX2 MIX2 INP1", SND_SOC_NOPM, 0, 0,
++ &rx2_mix2_inp1_mux),
+
+ SND_SOC_DAPM_MUX("CIC1 MUX", SND_SOC_NOPM, 0, 0, &cic1_mux),
+ SND_SOC_DAPM_MUX("CIC2 MUX", SND_SOC_NOPM, 0, 0, &cic2_mux),
+diff --git a/sound/soc/codecs/pcm3168a.c b/sound/soc/codecs/pcm3168a.c
+index f1104d7d6426..b31997075a50 100644
+--- a/sound/soc/codecs/pcm3168a.c
++++ b/sound/soc/codecs/pcm3168a.c
+@@ -21,8 +21,7 @@
+
+ #define PCM3168A_FORMATS (SNDRV_PCM_FMTBIT_S16_LE | \
+ SNDRV_PCM_FMTBIT_S24_3LE | \
+- SNDRV_PCM_FMTBIT_S24_LE | \
+- SNDRV_PCM_FMTBIT_S32_LE)
++ SNDRV_PCM_FMTBIT_S24_LE)
+
+ #define PCM3168A_FMT_I2S 0x0
+ #define PCM3168A_FMT_LEFT_J 0x1
+diff --git a/sound/soc/codecs/rt5651.c b/sound/soc/codecs/rt5651.c
+index 762595de956c..c506c9305043 100644
+--- a/sound/soc/codecs/rt5651.c
++++ b/sound/soc/codecs/rt5651.c
+@@ -1770,6 +1770,9 @@ static int rt5651_detect_headset(struct snd_soc_component *component)
+
+ static bool rt5651_support_button_press(struct rt5651_priv *rt5651)
+ {
++ if (!rt5651->hp_jack)
++ return false;
++
+ /* Button press support only works with internal jack-detection */
+ return (rt5651->hp_jack->status & SND_JACK_MICROPHONE) &&
+ rt5651->gpiod_hp_det == NULL;
+diff --git a/sound/soc/codecs/rt5682.c b/sound/soc/codecs/rt5682.c
+index 1ef470700ed5..c50b75ce82e0 100644
+--- a/sound/soc/codecs/rt5682.c
++++ b/sound/soc/codecs/rt5682.c
+@@ -995,6 +995,16 @@ static int rt5682_set_jack_detect(struct snd_soc_component *component,
+ {
+ struct rt5682_priv *rt5682 = snd_soc_component_get_drvdata(component);
+
++ rt5682->hs_jack = hs_jack;
++
++ if (!hs_jack) {
++ regmap_update_bits(rt5682->regmap, RT5682_IRQ_CTRL_2,
++ RT5682_JD1_EN_MASK, RT5682_JD1_DIS);
++ regmap_update_bits(rt5682->regmap, RT5682_RC_CLK_CTRL,
++ RT5682_POW_JDH | RT5682_POW_JDL, 0);
++ return 0;
++ }
++
+ switch (rt5682->pdata.jd_src) {
+ case RT5682_JD1:
+ snd_soc_component_update_bits(component, RT5682_CBJ_CTRL_2,
+@@ -1032,8 +1042,6 @@ static int rt5682_set_jack_detect(struct snd_soc_component *component,
+ break;
+ }
+
+- rt5682->hs_jack = hs_jack;
+-
+ return 0;
+ }
+
+diff --git a/sound/soc/codecs/wm8994.c b/sound/soc/codecs/wm8994.c
+index c3d06e8bc54f..d5fb7f5dd551 100644
+--- a/sound/soc/codecs/wm8994.c
++++ b/sound/soc/codecs/wm8994.c
+@@ -533,13 +533,10 @@ static SOC_ENUM_SINGLE_DECL(dac_osr,
+ static SOC_ENUM_SINGLE_DECL(adc_osr,
+ WM8994_OVERSAMPLING, 1, osr_text);
+
+-static const struct snd_kcontrol_new wm8994_snd_controls[] = {
++static const struct snd_kcontrol_new wm8994_common_snd_controls[] = {
+ SOC_DOUBLE_R_TLV("AIF1ADC1 Volume", WM8994_AIF1_ADC1_LEFT_VOLUME,
+ WM8994_AIF1_ADC1_RIGHT_VOLUME,
+ 1, 119, 0, digital_tlv),
+-SOC_DOUBLE_R_TLV("AIF1ADC2 Volume", WM8994_AIF1_ADC2_LEFT_VOLUME,
+- WM8994_AIF1_ADC2_RIGHT_VOLUME,
+- 1, 119, 0, digital_tlv),
+ SOC_DOUBLE_R_TLV("AIF2ADC Volume", WM8994_AIF2_ADC_LEFT_VOLUME,
+ WM8994_AIF2_ADC_RIGHT_VOLUME,
+ 1, 119, 0, digital_tlv),
+@@ -556,8 +553,6 @@ SOC_ENUM("AIF2DACR Source", aif2dacr_src),
+
+ SOC_DOUBLE_R_TLV("AIF1DAC1 Volume", WM8994_AIF1_DAC1_LEFT_VOLUME,
+ WM8994_AIF1_DAC1_RIGHT_VOLUME, 1, 96, 0, digital_tlv),
+-SOC_DOUBLE_R_TLV("AIF1DAC2 Volume", WM8994_AIF1_DAC2_LEFT_VOLUME,
+- WM8994_AIF1_DAC2_RIGHT_VOLUME, 1, 96, 0, digital_tlv),
+ SOC_DOUBLE_R_TLV("AIF2DAC Volume", WM8994_AIF2_DAC_LEFT_VOLUME,
+ WM8994_AIF2_DAC_RIGHT_VOLUME, 1, 96, 0, digital_tlv),
+
+@@ -565,17 +560,12 @@ SOC_SINGLE_TLV("AIF1 Boost Volume", WM8994_AIF1_CONTROL_2, 10, 3, 0, aif_tlv),
+ SOC_SINGLE_TLV("AIF2 Boost Volume", WM8994_AIF2_CONTROL_2, 10, 3, 0, aif_tlv),
+
+ SOC_SINGLE("AIF1DAC1 EQ Switch", WM8994_AIF1_DAC1_EQ_GAINS_1, 0, 1, 0),
+-SOC_SINGLE("AIF1DAC2 EQ Switch", WM8994_AIF1_DAC2_EQ_GAINS_1, 0, 1, 0),
+ SOC_SINGLE("AIF2 EQ Switch", WM8994_AIF2_EQ_GAINS_1, 0, 1, 0),
+
+ WM8994_DRC_SWITCH("AIF1DAC1 DRC Switch", WM8994_AIF1_DRC1_1, 2),
+ WM8994_DRC_SWITCH("AIF1ADC1L DRC Switch", WM8994_AIF1_DRC1_1, 1),
+ WM8994_DRC_SWITCH("AIF1ADC1R DRC Switch", WM8994_AIF1_DRC1_1, 0),
+
+-WM8994_DRC_SWITCH("AIF1DAC2 DRC Switch", WM8994_AIF1_DRC2_1, 2),
+-WM8994_DRC_SWITCH("AIF1ADC2L DRC Switch", WM8994_AIF1_DRC2_1, 1),
+-WM8994_DRC_SWITCH("AIF1ADC2R DRC Switch", WM8994_AIF1_DRC2_1, 0),
+-
+ WM8994_DRC_SWITCH("AIF2DAC DRC Switch", WM8994_AIF2_DRC_1, 2),
+ WM8994_DRC_SWITCH("AIF2ADCL DRC Switch", WM8994_AIF2_DRC_1, 1),
+ WM8994_DRC_SWITCH("AIF2ADCR DRC Switch", WM8994_AIF2_DRC_1, 0),
+@@ -594,9 +584,6 @@ SOC_SINGLE("Sidetone HPF Switch", WM8994_SIDETONE, 6, 1, 0),
+ SOC_ENUM("AIF1ADC1 HPF Mode", aif1adc1_hpf),
+ SOC_DOUBLE("AIF1ADC1 HPF Switch", WM8994_AIF1_ADC1_FILTERS, 12, 11, 1, 0),
+
+-SOC_ENUM("AIF1ADC2 HPF Mode", aif1adc2_hpf),
+-SOC_DOUBLE("AIF1ADC2 HPF Switch", WM8994_AIF1_ADC2_FILTERS, 12, 11, 1, 0),
+-
+ SOC_ENUM("AIF2ADC HPF Mode", aif2adc_hpf),
+ SOC_DOUBLE("AIF2ADC HPF Switch", WM8994_AIF2_ADC_FILTERS, 12, 11, 1, 0),
+
+@@ -637,6 +624,24 @@ SOC_SINGLE("AIF2DAC 3D Stereo Switch", WM8994_AIF2_DAC_FILTERS_2,
+ 8, 1, 0),
+ };
+
++/* Controls not available on WM1811 */
++static const struct snd_kcontrol_new wm8994_snd_controls[] = {
++SOC_DOUBLE_R_TLV("AIF1ADC2 Volume", WM8994_AIF1_ADC2_LEFT_VOLUME,
++ WM8994_AIF1_ADC2_RIGHT_VOLUME,
++ 1, 119, 0, digital_tlv),
++SOC_DOUBLE_R_TLV("AIF1DAC2 Volume", WM8994_AIF1_DAC2_LEFT_VOLUME,
++ WM8994_AIF1_DAC2_RIGHT_VOLUME, 1, 96, 0, digital_tlv),
++
++SOC_SINGLE("AIF1DAC2 EQ Switch", WM8994_AIF1_DAC2_EQ_GAINS_1, 0, 1, 0),
++
++WM8994_DRC_SWITCH("AIF1DAC2 DRC Switch", WM8994_AIF1_DRC2_1, 2),
++WM8994_DRC_SWITCH("AIF1ADC2L DRC Switch", WM8994_AIF1_DRC2_1, 1),
++WM8994_DRC_SWITCH("AIF1ADC2R DRC Switch", WM8994_AIF1_DRC2_1, 0),
++
++SOC_ENUM("AIF1ADC2 HPF Mode", aif1adc2_hpf),
++SOC_DOUBLE("AIF1ADC2 HPF Switch", WM8994_AIF1_ADC2_FILTERS, 12, 11, 1, 0),
++};
++
+ static const struct snd_kcontrol_new wm8994_eq_controls[] = {
+ SOC_SINGLE_TLV("AIF1DAC1 EQ1 Volume", WM8994_AIF1_DAC1_EQ_GAINS_1, 11, 31, 0,
+ eq_tlv),
+@@ -4258,13 +4263,15 @@ static int wm8994_component_probe(struct snd_soc_component *component)
+ wm8994_handle_pdata(wm8994);
+
+ wm_hubs_add_analogue_controls(component);
+- snd_soc_add_component_controls(component, wm8994_snd_controls,
+- ARRAY_SIZE(wm8994_snd_controls));
++ snd_soc_add_component_controls(component, wm8994_common_snd_controls,
++ ARRAY_SIZE(wm8994_common_snd_controls));
+ snd_soc_dapm_new_controls(dapm, wm8994_dapm_widgets,
+ ARRAY_SIZE(wm8994_dapm_widgets));
+
+ switch (control->type) {
+ case WM8994:
++ snd_soc_add_component_controls(component, wm8994_snd_controls,
++ ARRAY_SIZE(wm8994_snd_controls));
+ snd_soc_dapm_new_controls(dapm, wm8994_specific_dapm_widgets,
+ ARRAY_SIZE(wm8994_specific_dapm_widgets));
+ if (control->revision < 4) {
+@@ -4284,8 +4291,10 @@ static int wm8994_component_probe(struct snd_soc_component *component)
+ }
+ break;
+ case WM8958:
++ snd_soc_add_component_controls(component, wm8994_snd_controls,
++ ARRAY_SIZE(wm8994_snd_controls));
+ snd_soc_add_component_controls(component, wm8958_snd_controls,
+- ARRAY_SIZE(wm8958_snd_controls));
++ ARRAY_SIZE(wm8958_snd_controls));
+ snd_soc_dapm_new_controls(dapm, wm8958_dapm_widgets,
+ ARRAY_SIZE(wm8958_dapm_widgets));
+ if (control->revision < 1) {
+diff --git a/sound/soc/codecs/wm_adsp.c b/sound/soc/codecs/wm_adsp.c
+index f5fbadc5e7e2..914fb3be5fea 100644
+--- a/sound/soc/codecs/wm_adsp.c
++++ b/sound/soc/codecs/wm_adsp.c
+@@ -1259,8 +1259,7 @@ static unsigned int wmfw_convert_flags(unsigned int in, unsigned int len)
+ }
+
+ if (in) {
+- if (in & WMFW_CTL_FLAG_READABLE)
+- out |= rd;
++ out |= rd;
+ if (in & WMFW_CTL_FLAG_WRITEABLE)
+ out |= wr;
+ if (in & WMFW_CTL_FLAG_VOLATILE)
+diff --git a/sound/soc/intel/boards/sof_rt5682.c b/sound/soc/intel/boards/sof_rt5682.c
+index daeaa396d928..9e59586e03ba 100644
+--- a/sound/soc/intel/boards/sof_rt5682.c
++++ b/sound/soc/intel/boards/sof_rt5682.c
+@@ -573,6 +573,15 @@ static int sof_audio_probe(struct platform_device *pdev)
+ /* need to get main clock from pmc */
+ if (sof_rt5682_quirk & SOF_RT5682_MCLK_BYTCHT_EN) {
+ ctx->mclk = devm_clk_get(&pdev->dev, "pmc_plt_clk_3");
++ if (IS_ERR(ctx->mclk)) {
++ ret = PTR_ERR(ctx->mclk);
++
++ dev_err(&pdev->dev,
++ "Failed to get MCLK from pmc_plt_clk_3: %d\n",
++ ret);
++ return ret;
++ }
++
+ ret = clk_prepare_enable(ctx->mclk);
+ if (ret < 0) {
+ dev_err(&pdev->dev,
+@@ -618,8 +627,24 @@ static int sof_audio_probe(struct platform_device *pdev)
+ &sof_audio_card_rt5682);
+ }
+
++static int sof_rt5682_remove(struct platform_device *pdev)
++{
++ struct snd_soc_card *card = platform_get_drvdata(pdev);
++ struct snd_soc_component *component = NULL;
++
++ for_each_card_components(card, component) {
++ if (!strcmp(component->name, rt5682_component[0].name)) {
++ snd_soc_component_set_jack(component, NULL, NULL);
++ break;
++ }
++ }
++
++ return 0;
++}
++
+ static struct platform_driver sof_audio = {
+ .probe = sof_audio_probe,
++ .remove = sof_rt5682_remove,
+ .driver = {
+ .name = "sof_rt5682",
+ .pm = &snd_soc_pm_ops,
+diff --git a/sound/soc/rockchip/rockchip_i2s.c b/sound/soc/rockchip/rockchip_i2s.c
+index 88ebaf6e1880..a0506e554c98 100644
+--- a/sound/soc/rockchip/rockchip_i2s.c
++++ b/sound/soc/rockchip/rockchip_i2s.c
+@@ -674,7 +674,7 @@ static int rockchip_i2s_probe(struct platform_device *pdev)
+ ret = rockchip_pcm_platform_register(&pdev->dev);
+ if (ret) {
+ dev_err(&pdev->dev, "Could not register PCM\n");
+- return ret;
++ goto err_suspend;
+ }
+
+ return 0;
+diff --git a/sound/soc/samsung/arndale_rt5631.c b/sound/soc/samsung/arndale_rt5631.c
+index c213913eb984..fd8c6642fb0d 100644
+--- a/sound/soc/samsung/arndale_rt5631.c
++++ b/sound/soc/samsung/arndale_rt5631.c
+@@ -5,6 +5,7 @@
+ // Author: Claude <claude@insginal.co.kr>
+
+ #include <linux/module.h>
++#include <linux/of_device.h>
+ #include <linux/platform_device.h>
+ #include <linux/clk.h>
+
+@@ -74,6 +75,17 @@ static struct snd_soc_card arndale_rt5631 = {
+ .num_links = ARRAY_SIZE(arndale_rt5631_dai),
+ };
+
++static void arndale_put_of_nodes(struct snd_soc_card *card)
++{
++ struct snd_soc_dai_link *dai_link;
++ int i;
++
++ for_each_card_prelinks(card, i, dai_link) {
++ of_node_put(dai_link->cpus->of_node);
++ of_node_put(dai_link->codecs->of_node);
++ }
++}
++
+ static int arndale_audio_probe(struct platform_device *pdev)
+ {
+ int n, ret;
+@@ -103,18 +115,31 @@ static int arndale_audio_probe(struct platform_device *pdev)
+ if (!arndale_rt5631_dai[0].codecs->of_node) {
+ dev_err(&pdev->dev,
+ "Property 'samsung,audio-codec' missing or invalid\n");
+- return -EINVAL;
++ ret = -EINVAL;
++ goto err_put_of_nodes;
+ }
+ }
+
+ ret = devm_snd_soc_register_card(card->dev, card);
++ if (ret) {
++ dev_err(&pdev->dev, "snd_soc_register_card() failed: %d\n", ret);
++ goto err_put_of_nodes;
++ }
++ return 0;
+
+- if (ret)
+- dev_err(&pdev->dev, "snd_soc_register_card() failed:%d\n", ret);
+-
++err_put_of_nodes:
++ arndale_put_of_nodes(card);
+ return ret;
+ }
+
++static int arndale_audio_remove(struct platform_device *pdev)
++{
++ struct snd_soc_card *card = platform_get_drvdata(pdev);
++
++ arndale_put_of_nodes(card);
++ return 0;
++}
++
+ static const struct of_device_id samsung_arndale_rt5631_of_match[] __maybe_unused = {
+ { .compatible = "samsung,arndale-rt5631", },
+ { .compatible = "samsung,arndale-alc5631", },
+@@ -129,6 +154,7 @@ static struct platform_driver arndale_audio_driver = {
+ .of_match_table = of_match_ptr(samsung_arndale_rt5631_of_match),
+ },
+ .probe = arndale_audio_probe,
++ .remove = arndale_audio_remove,
+ };
+
+ module_platform_driver(arndale_audio_driver);
+diff --git a/sound/soc/soc-topology.c b/sound/soc/soc-topology.c
+index dc463f1a9e24..1cc5a07a2f5c 100644
+--- a/sound/soc/soc-topology.c
++++ b/sound/soc/soc-topology.c
+@@ -1588,7 +1588,7 @@ static int soc_tplg_dapm_widget_create(struct soc_tplg *tplg,
+
+ /* map user to kernel widget ID */
+ template.id = get_widget_id(le32_to_cpu(w->id));
+- if (template.id < 0)
++ if ((int)template.id < 0)
+ return template.id;
+
+ /* strings are allocated here, but used and freed by the widget */
+diff --git a/sound/soc/sof/control.c b/sound/soc/sof/control.c
+index a4983f90ff5b..2b8711eda362 100644
+--- a/sound/soc/sof/control.c
++++ b/sound/soc/sof/control.c
+@@ -60,13 +60,16 @@ int snd_sof_volume_put(struct snd_kcontrol *kcontrol,
+ struct snd_sof_dev *sdev = scontrol->sdev;
+ struct sof_ipc_ctrl_data *cdata = scontrol->control_data;
+ unsigned int i, channels = scontrol->num_channels;
++ bool change = false;
++ u32 value;
+
+ /* update each channel */
+ for (i = 0; i < channels; i++) {
+- cdata->chanv[i].value =
+- mixer_to_ipc(ucontrol->value.integer.value[i],
++ value = mixer_to_ipc(ucontrol->value.integer.value[i],
+ scontrol->volume_table, sm->max + 1);
++ change = change || (value != cdata->chanv[i].value);
+ cdata->chanv[i].channel = i;
++ cdata->chanv[i].value = value;
+ }
+
+ /* notify DSP of mixer updates */
+@@ -76,8 +79,7 @@ int snd_sof_volume_put(struct snd_kcontrol *kcontrol,
+ SOF_CTRL_TYPE_VALUE_CHAN_GET,
+ SOF_CTRL_CMD_VOLUME,
+ true);
+-
+- return 0;
++ return change;
+ }
+
+ int snd_sof_switch_get(struct snd_kcontrol *kcontrol,
+@@ -105,11 +107,15 @@ int snd_sof_switch_put(struct snd_kcontrol *kcontrol,
+ struct snd_sof_dev *sdev = scontrol->sdev;
+ struct sof_ipc_ctrl_data *cdata = scontrol->control_data;
+ unsigned int i, channels = scontrol->num_channels;
++ bool change = false;
++ u32 value;
+
+ /* update each channel */
+ for (i = 0; i < channels; i++) {
+- cdata->chanv[i].value = ucontrol->value.integer.value[i];
++ value = ucontrol->value.integer.value[i];
++ change = change || (value != cdata->chanv[i].value);
+ cdata->chanv[i].channel = i;
++ cdata->chanv[i].value = value;
+ }
+
+ /* notify DSP of mixer updates */
+@@ -120,7 +126,7 @@ int snd_sof_switch_put(struct snd_kcontrol *kcontrol,
+ SOF_CTRL_CMD_SWITCH,
+ true);
+
+- return 0;
++ return change;
+ }
+
+ int snd_sof_enum_get(struct snd_kcontrol *kcontrol,
+@@ -148,11 +154,15 @@ int snd_sof_enum_put(struct snd_kcontrol *kcontrol,
+ struct snd_sof_dev *sdev = scontrol->sdev;
+ struct sof_ipc_ctrl_data *cdata = scontrol->control_data;
+ unsigned int i, channels = scontrol->num_channels;
++ bool change = false;
++ u32 value;
+
+ /* update each channel */
+ for (i = 0; i < channels; i++) {
+- cdata->chanv[i].value = ucontrol->value.enumerated.item[i];
++ value = ucontrol->value.enumerated.item[i];
++ change = change || (value != cdata->chanv[i].value);
+ cdata->chanv[i].channel = i;
++ cdata->chanv[i].value = value;
+ }
+
+ /* notify DSP of enum updates */
+@@ -163,7 +173,7 @@ int snd_sof_enum_put(struct snd_kcontrol *kcontrol,
+ SOF_CTRL_CMD_ENUM,
+ true);
+
+- return 0;
++ return change;
+ }
+
+ int snd_sof_bytes_get(struct snd_kcontrol *kcontrol,
+diff --git a/sound/soc/sof/intel/Kconfig b/sound/soc/sof/intel/Kconfig
+index dd14ce92fe10..a5fd356776ee 100644
+--- a/sound/soc/sof/intel/Kconfig
++++ b/sound/soc/sof/intel/Kconfig
+@@ -241,6 +241,16 @@ config SND_SOC_SOF_HDA_AUDIO_CODEC
+ Say Y if you want to enable HDAudio codecs with SOF.
+ If unsure select "N".
+
++config SND_SOC_SOF_HDA_ALWAYS_ENABLE_DMI_L1
++ bool "SOF enable DMI Link L1"
++ help
++ This option enables DMI L1 for both playback and capture
++ and disables known workarounds for specific HDaudio platforms.
++ Only use to look into power optimizations on platforms not
++ affected by DMI L1 issues. This option is not recommended.
++ Say Y if you want to enable DMI Link L1
++ If unsure, select "N".
++
+ endif ## SND_SOC_SOF_HDA_COMMON
+
+ config SND_SOC_SOF_HDA_LINK_BASELINE
+diff --git a/sound/soc/sof/intel/bdw.c b/sound/soc/sof/intel/bdw.c
+index 70d524ef9bc0..0ca3c1b55eeb 100644
+--- a/sound/soc/sof/intel/bdw.c
++++ b/sound/soc/sof/intel/bdw.c
+@@ -37,6 +37,7 @@
+ #define MBOX_SIZE 0x1000
+ #define MBOX_DUMP_SIZE 0x30
+ #define EXCEPT_OFFSET 0x800
++#define EXCEPT_MAX_HDR_SIZE 0x400
+
+ /* DSP peripherals */
+ #define DMAC0_OFFSET 0xFE000
+@@ -228,6 +229,11 @@ static void bdw_get_registers(struct snd_sof_dev *sdev,
+ /* note: variable AR register array is not read */
+
+ /* then get panic info */
++ if (xoops->arch_hdr.totalsize > EXCEPT_MAX_HDR_SIZE) {
++ dev_err(sdev->dev, "invalid header size 0x%x. FW oops is bogus\n",
++ xoops->arch_hdr.totalsize);
++ return;
++ }
+ offset += xoops->arch_hdr.totalsize;
+ sof_mailbox_read(sdev, offset, panic_info, sizeof(*panic_info));
+
+@@ -588,6 +594,7 @@ static int bdw_probe(struct snd_sof_dev *sdev)
+ /* TODO: add offsets */
+ sdev->mmio_bar = BDW_DSP_BAR;
+ sdev->mailbox_bar = BDW_DSP_BAR;
++ sdev->dsp_oops_offset = MBOX_OFFSET;
+
+ /* PCI base */
+ mmio = platform_get_resource(pdev, IORESOURCE_MEM,
+diff --git a/sound/soc/sof/intel/byt.c b/sound/soc/sof/intel/byt.c
+index 107d711efc3f..96faaa8fa5a3 100644
+--- a/sound/soc/sof/intel/byt.c
++++ b/sound/soc/sof/intel/byt.c
+@@ -28,6 +28,7 @@
+ #define MBOX_OFFSET 0x144000
+ #define MBOX_SIZE 0x1000
+ #define EXCEPT_OFFSET 0x800
++#define EXCEPT_MAX_HDR_SIZE 0x400
+
+ /* DSP peripherals */
+ #define DMAC0_OFFSET 0x098000
+@@ -273,6 +274,11 @@ static void byt_get_registers(struct snd_sof_dev *sdev,
+ /* note: variable AR register array is not read */
+
+ /* then get panic info */
++ if (xoops->arch_hdr.totalsize > EXCEPT_MAX_HDR_SIZE) {
++ dev_err(sdev->dev, "invalid header size 0x%x. FW oops is bogus\n",
++ xoops->arch_hdr.totalsize);
++ return;
++ }
+ offset += xoops->arch_hdr.totalsize;
+ sof_mailbox_read(sdev, offset, panic_info, sizeof(*panic_info));
+
+diff --git a/sound/soc/sof/intel/hda-ctrl.c b/sound/soc/sof/intel/hda-ctrl.c
+index ea63f83a509b..760094d49f18 100644
+--- a/sound/soc/sof/intel/hda-ctrl.c
++++ b/sound/soc/sof/intel/hda-ctrl.c
+@@ -139,20 +139,16 @@ void hda_dsp_ctrl_misc_clock_gating(struct snd_sof_dev *sdev, bool enable)
+ */
+ int hda_dsp_ctrl_clock_power_gating(struct snd_sof_dev *sdev, bool enable)
+ {
+-#if IS_ENABLED(CONFIG_SND_SOC_SOF_HDA)
+- struct hdac_bus *bus = sof_to_bus(sdev);
+-#endif
+ u32 val;
+
+ /* enable/disable audio dsp clock gating */
+ val = enable ? PCI_CGCTL_ADSPDCGE : 0;
+ snd_sof_pci_update_bits(sdev, PCI_CGCTL, PCI_CGCTL_ADSPDCGE, val);
+
+-#if IS_ENABLED(CONFIG_SND_SOC_SOF_HDA)
+- /* enable/disable L1 support */
+- val = enable ? SOF_HDA_VS_EM2_L1SEN : 0;
+- snd_hdac_chip_updatel(bus, VS_EM2, SOF_HDA_VS_EM2_L1SEN, val);
+-#endif
++ /* enable/disable DMI Link L1 support */
++ val = enable ? HDA_VS_INTEL_EM2_L1SEN : 0;
++ snd_sof_dsp_update_bits(sdev, HDA_DSP_HDA_BAR, HDA_VS_INTEL_EM2,
++ HDA_VS_INTEL_EM2_L1SEN, val);
+
+ /* enable/disable audio dsp power gating */
+ val = enable ? 0 : PCI_PGCTL_ADSPPGD;
+diff --git a/sound/soc/sof/intel/hda-loader.c b/sound/soc/sof/intel/hda-loader.c
+index 6427f0b3a2f1..65c2af3fcaab 100644
+--- a/sound/soc/sof/intel/hda-loader.c
++++ b/sound/soc/sof/intel/hda-loader.c
+@@ -44,6 +44,7 @@ static int cl_stream_prepare(struct snd_sof_dev *sdev, unsigned int format,
+ return -ENODEV;
+ }
+ hstream = &dsp_stream->hstream;
++ hstream->substream = NULL;
+
+ /* allocate DMA buffer */
+ ret = snd_dma_alloc_pages(SNDRV_DMA_TYPE_DEV_SG, &pci->dev, size, dmab);
+diff --git a/sound/soc/sof/intel/hda-stream.c b/sound/soc/sof/intel/hda-stream.c
+index ad8d41f22e92..2c7447188402 100644
+--- a/sound/soc/sof/intel/hda-stream.c
++++ b/sound/soc/sof/intel/hda-stream.c
+@@ -185,6 +185,17 @@ hda_dsp_stream_get(struct snd_sof_dev *sdev, int direction)
+ direction == SNDRV_PCM_STREAM_PLAYBACK ?
+ "playback" : "capture");
+
++ /*
++ * Disable DMI Link L1 entry when capture stream is opened.
++ * Workaround to address a known issue with host DMA that results
++ * in xruns during pause/release in capture scenarios.
++ */
++ if (!IS_ENABLED(SND_SOC_SOF_HDA_ALWAYS_ENABLE_DMI_L1))
++ if (stream && direction == SNDRV_PCM_STREAM_CAPTURE)
++ snd_sof_dsp_update_bits(sdev, HDA_DSP_HDA_BAR,
++ HDA_VS_INTEL_EM2,
++ HDA_VS_INTEL_EM2_L1SEN, 0);
++
+ return stream;
+ }
+
+@@ -193,23 +204,43 @@ int hda_dsp_stream_put(struct snd_sof_dev *sdev, int direction, int stream_tag)
+ {
+ struct hdac_bus *bus = sof_to_bus(sdev);
+ struct hdac_stream *s;
++ bool active_capture_stream = false;
++ bool found = false;
+
+ spin_lock_irq(&bus->reg_lock);
+
+- /* find used stream */
++ /*
++ * close stream matching the stream tag
++ * and check if there are any open capture streams.
++ */
+ list_for_each_entry(s, &bus->stream_list, list) {
+- if (s->direction == direction &&
+- s->opened && s->stream_tag == stream_tag) {
++ if (!s->opened)
++ continue;
++
++ if (s->direction == direction && s->stream_tag == stream_tag) {
+ s->opened = false;
+- spin_unlock_irq(&bus->reg_lock);
+- return 0;
++ found = true;
++ } else if (s->direction == SNDRV_PCM_STREAM_CAPTURE) {
++ active_capture_stream = true;
+ }
+ }
+
+ spin_unlock_irq(&bus->reg_lock);
+
+- dev_dbg(sdev->dev, "stream_tag %d not opened!\n", stream_tag);
+- return -ENODEV;
++ /* Enable DMI L1 entry if there are no capture streams open */
++ if (!IS_ENABLED(SND_SOC_SOF_HDA_ALWAYS_ENABLE_DMI_L1))
++ if (!active_capture_stream)
++ snd_sof_dsp_update_bits(sdev, HDA_DSP_HDA_BAR,
++ HDA_VS_INTEL_EM2,
++ HDA_VS_INTEL_EM2_L1SEN,
++ HDA_VS_INTEL_EM2_L1SEN);
++
++ if (!found) {
++ dev_dbg(sdev->dev, "stream_tag %d not opened!\n", stream_tag);
++ return -ENODEV;
++ }
++
++ return 0;
+ }
+
+ int hda_dsp_stream_trigger(struct snd_sof_dev *sdev,
+diff --git a/sound/soc/sof/intel/hda.c b/sound/soc/sof/intel/hda.c
+index 7f665392618f..f2d45d62dfa5 100644
+--- a/sound/soc/sof/intel/hda.c
++++ b/sound/soc/sof/intel/hda.c
+@@ -37,6 +37,8 @@
+ #define IS_CFL(pci) ((pci)->vendor == 0x8086 && (pci)->device == 0xa348)
+ #define IS_CNL(pci) ((pci)->vendor == 0x8086 && (pci)->device == 0x9dc8)
+
++#define EXCEPT_MAX_HDR_SIZE 0x400
++
+ /*
+ * Debug
+ */
+@@ -121,6 +123,11 @@ static void hda_dsp_get_registers(struct snd_sof_dev *sdev,
+ /* note: variable AR register array is not read */
+
+ /* then get panic info */
++ if (xoops->arch_hdr.totalsize > EXCEPT_MAX_HDR_SIZE) {
++ dev_err(sdev->dev, "invalid header size 0x%x. FW oops is bogus\n",
++ xoops->arch_hdr.totalsize);
++ return;
++ }
+ offset += xoops->arch_hdr.totalsize;
+ sof_block_read(sdev, sdev->mmio_bar, offset,
+ panic_info, sizeof(*panic_info));
+diff --git a/sound/soc/sof/intel/hda.h b/sound/soc/sof/intel/hda.h
+index d9c17146200b..2cc789f0e83c 100644
+--- a/sound/soc/sof/intel/hda.h
++++ b/sound/soc/sof/intel/hda.h
+@@ -39,7 +39,6 @@
+ #define SOF_HDA_WAKESTS 0x0E
+ #define SOF_HDA_WAKESTS_INT_MASK ((1 << 8) - 1)
+ #define SOF_HDA_RIRBSTS 0x5d
+-#define SOF_HDA_VS_EM2_L1SEN BIT(13)
+
+ /* SOF_HDA_GCTL register bist */
+ #define SOF_HDA_GCTL_RESET BIT(0)
+@@ -228,6 +227,10 @@
+ #define HDA_DSP_REG_HIPCIE (HDA_DSP_IPC_BASE + 0x0C)
+ #define HDA_DSP_REG_HIPCCTL (HDA_DSP_IPC_BASE + 0x10)
+
++/* Intel Vendor Specific Registers */
++#define HDA_VS_INTEL_EM2 0x1030
++#define HDA_VS_INTEL_EM2_L1SEN BIT(13)
++
+ /* HIPCI */
+ #define HDA_DSP_REG_HIPCI_BUSY BIT(31)
+ #define HDA_DSP_REG_HIPCI_MSG_MASK 0x7FFFFFFF
+diff --git a/sound/soc/sof/loader.c b/sound/soc/sof/loader.c
+index 952a19091c58..01775231f2b8 100644
+--- a/sound/soc/sof/loader.c
++++ b/sound/soc/sof/loader.c
+@@ -370,10 +370,10 @@ int snd_sof_run_firmware(struct snd_sof_dev *sdev)
+ msecs_to_jiffies(sdev->boot_timeout));
+ if (ret == 0) {
+ dev_err(sdev->dev, "error: firmware boot failure\n");
+- /* after this point FW_READY msg should be ignored */
+- sdev->boot_complete = true;
+ snd_sof_dsp_dbg_dump(sdev, SOF_DBG_REGS | SOF_DBG_MBOX |
+ SOF_DBG_TEXT | SOF_DBG_PCI);
++ /* after this point FW_READY msg should be ignored */
++ sdev->boot_complete = true;
+ return -EIO;
+ }
+
+diff --git a/sound/soc/sof/topology.c b/sound/soc/sof/topology.c
+index 432ae343f960..96230329e678 100644
+--- a/sound/soc/sof/topology.c
++++ b/sound/soc/sof/topology.c
+@@ -907,7 +907,9 @@ static void sof_parse_word_tokens(struct snd_soc_component *scomp,
+ for (j = 0; j < count; j++) {
+ /* match token type */
+ if (!(tokens[j].type == SND_SOC_TPLG_TUPLE_TYPE_WORD ||
+- tokens[j].type == SND_SOC_TPLG_TUPLE_TYPE_SHORT))
++ tokens[j].type == SND_SOC_TPLG_TUPLE_TYPE_SHORT ||
++ tokens[j].type == SND_SOC_TPLG_TUPLE_TYPE_BYTE ||
++ tokens[j].type == SND_SOC_TPLG_TUPLE_TYPE_BOOL))
+ continue;
+
+ /* match token id */
+diff --git a/tools/perf/builtin-c2c.c b/tools/perf/builtin-c2c.c
+index e3776f5c2e01..637e18142658 100644
+--- a/tools/perf/builtin-c2c.c
++++ b/tools/perf/builtin-c2c.c
+@@ -2627,6 +2627,7 @@ static int build_cl_output(char *cl_sort, bool no_source)
+ bool add_sym = false;
+ bool add_dso = false;
+ bool add_src = false;
++ int ret = 0;
+
+ if (!buf)
+ return -ENOMEM;
+@@ -2645,7 +2646,8 @@ static int build_cl_output(char *cl_sort, bool no_source)
+ add_dso = true;
+ } else if (strcmp(tok, "offset")) {
+ pr_err("unrecognized sort token: %s\n", tok);
+- return -EINVAL;
++ ret = -EINVAL;
++ goto err;
+ }
+ }
+
+@@ -2668,13 +2670,15 @@ static int build_cl_output(char *cl_sort, bool no_source)
+ add_sym ? "symbol," : "",
+ add_dso ? "dso," : "",
+ add_src ? "cl_srcline," : "",
+- "node") < 0)
+- return -ENOMEM;
++ "node") < 0) {
++ ret = -ENOMEM;
++ goto err;
++ }
+
+ c2c.show_src = add_src;
+-
++err:
+ free(buf);
+- return 0;
++ return ret;
+ }
+
+ static int setup_coalesce(const char *coalesce, bool no_source)
+diff --git a/tools/perf/builtin-kmem.c b/tools/perf/builtin-kmem.c
+index 9e5e60898083..353c9417e864 100644
+--- a/tools/perf/builtin-kmem.c
++++ b/tools/perf/builtin-kmem.c
+@@ -688,6 +688,7 @@ static char *compact_gfp_flags(char *gfp_flags)
+ new = realloc(new_flags, len + strlen(cpt) + 2);
+ if (new == NULL) {
+ free(new_flags);
++ free(orig_flags);
+ return NULL;
+ }
+
+diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c
+index e95a2a26c40a..277cdf1fc5ac 100644
+--- a/tools/perf/util/header.c
++++ b/tools/perf/util/header.c
+@@ -1282,8 +1282,10 @@ static int build_mem_topology(struct memory_node *nodes, u64 size, u64 *cntp)
+ continue;
+
+ if (WARN_ONCE(cnt >= size,
+- "failed to write MEM_TOPOLOGY, way too many nodes\n"))
++ "failed to write MEM_TOPOLOGY, way too many nodes\n")) {
++ closedir(dir);
+ return -1;
++ }
+
+ ret = memory_node__read(&nodes[cnt++], idx);
+ }
+diff --git a/tools/perf/util/util.c b/tools/perf/util/util.c
+index a61535cf1bca..d0930c38e147 100644
+--- a/tools/perf/util/util.c
++++ b/tools/perf/util/util.c
+@@ -176,8 +176,10 @@ static int rm_rf_depth_pat(const char *path, int depth, const char **pat)
+ if (!strcmp(d->d_name, ".") || !strcmp(d->d_name, ".."))
+ continue;
+
+- if (!match_pat(d->d_name, pat))
+- return -2;
++ if (!match_pat(d->d_name, pat)) {
++ ret = -2;
++ break;
++ }
+
+ scnprintf(namebuf, sizeof(namebuf), "%s/%s",
+ path, d->d_name);
+diff --git a/tools/testing/selftests/kvm/x86_64/sync_regs_test.c b/tools/testing/selftests/kvm/x86_64/sync_regs_test.c
+index 11c2a70a7b87..5c8224256294 100644
+--- a/tools/testing/selftests/kvm/x86_64/sync_regs_test.c
++++ b/tools/testing/selftests/kvm/x86_64/sync_regs_test.c
+@@ -22,18 +22,19 @@
+
+ #define VCPU_ID 5
+
++#define UCALL_PIO_PORT ((uint16_t)0x1000)
++
++/*
++ * ucall is embedded here to protect against compiler reshuffling registers
++ * before calling a function. In this test we only need to get KVM_EXIT_IO
++ * vmexit and preserve RBX, no additional information is needed.
++ */
+ void guest_code(void)
+ {
+- /*
+- * use a callee-save register, otherwise the compiler
+- * saves it around the call to GUEST_SYNC.
+- */
+- register u32 stage asm("rbx");
+- for (;;) {
+- GUEST_SYNC(0);
+- stage++;
+- asm volatile ("" : : "r" (stage));
+- }
++ asm volatile("1: in %[port], %%al\n"
++ "add $0x1, %%rbx\n"
++ "jmp 1b"
++ : : [port] "d" (UCALL_PIO_PORT) : "rax", "rbx");
+ }
+
+ static void compare_regs(struct kvm_regs *left, struct kvm_regs *right)
+diff --git a/tools/testing/selftests/kvm/x86_64/vmx_set_nested_state_test.c b/tools/testing/selftests/kvm/x86_64/vmx_set_nested_state_test.c
+index 853e370e8a39..a6d85614ae4d 100644
+--- a/tools/testing/selftests/kvm/x86_64/vmx_set_nested_state_test.c
++++ b/tools/testing/selftests/kvm/x86_64/vmx_set_nested_state_test.c
+@@ -271,12 +271,7 @@ int main(int argc, char *argv[])
+ state.flags = KVM_STATE_NESTED_RUN_PENDING;
+ test_nested_state_expect_einval(vm, &state);
+
+- /*
+- * TODO: When SVM support is added for KVM_SET_NESTED_STATE
+- * add tests here to support it like VMX.
+- */
+- if (entry->ecx & CPUID_VMX)
+- test_vmx_nested_state(vm);
++ test_vmx_nested_state(vm);
+
+ kvm_vm_free(vm);
+ return 0;
+diff --git a/tools/testing/selftests/net/fib_tests.sh b/tools/testing/selftests/net/fib_tests.sh
+index c4ba0ff4a53f..76c1897e6352 100755
+--- a/tools/testing/selftests/net/fib_tests.sh
++++ b/tools/testing/selftests/net/fib_tests.sh
+@@ -1438,6 +1438,27 @@ ipv4_addr_metric_test()
+ fi
+ log_test $rc 0 "Prefix route with metric on link up"
+
++ # explicitly check for metric changes on edge scenarios
++ run_cmd "$IP addr flush dev dummy2"
++ run_cmd "$IP addr add dev dummy2 172.16.104.0/24 metric 259"
++ run_cmd "$IP addr change dev dummy2 172.16.104.0/24 metric 260"
++ rc=$?
++ if [ $rc -eq 0 ]; then
++ check_route "172.16.104.0/24 dev dummy2 proto kernel scope link src 172.16.104.0 metric 260"
++ rc=$?
++ fi
++ log_test $rc 0 "Modify metric of .0/24 address"
++
++ run_cmd "$IP addr flush dev dummy2"
++ run_cmd "$IP addr add dev dummy2 172.16.104.1/32 peer 172.16.104.2 metric 260"
++ run_cmd "$IP addr change dev dummy2 172.16.104.1/32 peer 172.16.104.2 metric 261"
++ rc=$?
++ if [ $rc -eq 0 ]; then
++ check_route "172.16.104.2 dev dummy2 proto kernel scope link src 172.16.104.1 metric 261"
++ rc=$?
++ fi
++ log_test $rc 0 "Modify metric of address with peer route"
++
+ $IP li del dummy1
+ $IP li del dummy2
+ cleanup
+diff --git a/tools/testing/selftests/net/reuseport_dualstack.c b/tools/testing/selftests/net/reuseport_dualstack.c
+index fe3230c55986..fb7a59ed759e 100644
+--- a/tools/testing/selftests/net/reuseport_dualstack.c
++++ b/tools/testing/selftests/net/reuseport_dualstack.c
+@@ -129,7 +129,7 @@ static void test(int *rcv_fds, int count, int proto)
+ {
+ struct epoll_event ev;
+ int epfd, i, test_fd;
+- uint16_t test_family;
++ int test_family;
+ socklen_t len;
+
+ epfd = epoll_create(1);
+@@ -146,6 +146,7 @@ static void test(int *rcv_fds, int count, int proto)
+ send_from_v4(proto);
+
+ test_fd = receive_once(epfd, proto);
++ len = sizeof(test_family);
+ if (getsockopt(test_fd, SOL_SOCKET, SO_DOMAIN, &test_family, &len))
+ error(1, errno, "failed to read socket domain");
+ if (test_family != AF_INET)
+diff --git a/tools/testing/selftests/powerpc/mm/Makefile b/tools/testing/selftests/powerpc/mm/Makefile
+index f1fbc15800c4..ed1565809d2b 100644
+--- a/tools/testing/selftests/powerpc/mm/Makefile
++++ b/tools/testing/selftests/powerpc/mm/Makefile
+@@ -4,6 +4,7 @@ noarg:
+
+ TEST_GEN_PROGS := hugetlb_vs_thp_test subpage_prot prot_sao segv_errors wild_bctr \
+ large_vm_fork_separation
++TEST_GEN_PROGS_EXTENDED := tlbie_test
+ TEST_GEN_FILES := tempfile
+
+ top_srcdir = ../../../../..
+@@ -19,3 +20,4 @@ $(OUTPUT)/large_vm_fork_separation: CFLAGS += -m64
+ $(OUTPUT)/tempfile:
+ dd if=/dev/zero of=$@ bs=64k count=1
+
++$(OUTPUT)/tlbie_test: LDLIBS += -lpthread
+diff --git a/tools/testing/selftests/powerpc/mm/tlbie_test.c b/tools/testing/selftests/powerpc/mm/tlbie_test.c
+new file mode 100644
+index 000000000000..f85a0938ab25
+--- /dev/null
++++ b/tools/testing/selftests/powerpc/mm/tlbie_test.c
+@@ -0,0 +1,734 @@
++// SPDX-License-Identifier: GPL-2.0
++
++/*
++ * Copyright 2019, Nick Piggin, Gautham R. Shenoy, Aneesh Kumar K.V, IBM Corp.
++ */
++
++/*
++ *
++ * Test tlbie/mtpidr race. We have 4 threads doing flush/load/compare/store
++ * sequence in a loop. The same threads also rung a context switch task
++ * that does sched_yield() in loop.
++ *
++ * The snapshot thread mark the mmap area PROT_READ in between, make a copy
++ * and copy it back to the original area. This helps us to detect if any
++ * store continued to happen after we marked the memory PROT_READ.
++ */
++
++#define _GNU_SOURCE
++#include <stdio.h>
++#include <sys/mman.h>
++#include <sys/types.h>
++#include <sys/wait.h>
++#include <sys/ipc.h>
++#include <sys/shm.h>
++#include <sys/stat.h>
++#include <sys/time.h>
++#include <linux/futex.h>
++#include <unistd.h>
++#include <asm/unistd.h>
++#include <string.h>
++#include <stdlib.h>
++#include <fcntl.h>
++#include <sched.h>
++#include <time.h>
++#include <stdarg.h>
++#include <sched.h>
++#include <pthread.h>
++#include <signal.h>
++#include <sys/prctl.h>
++
++static inline void dcbf(volatile unsigned int *addr)
++{
++ __asm__ __volatile__ ("dcbf %y0; sync" : : "Z"(*(unsigned char *)addr) : "memory");
++}
++
++static void err_msg(char *msg)
++{
++
++ time_t now;
++ time(&now);
++ printf("=================================\n");
++ printf(" Error: %s\n", msg);
++ printf(" %s", ctime(&now));
++ printf("=================================\n");
++ exit(1);
++}
++
++static char *map1;
++static char *map2;
++static pid_t rim_process_pid;
++
++/*
++ * A "rim-sequence" is defined to be the sequence of the following
++ * operations performed on a memory word:
++ * 1) FLUSH the contents of that word.
++ * 2) LOAD the contents of that word.
++ * 3) COMPARE the contents of that word with the content that was
++ * previously stored at that word
++ * 4) STORE new content into that word.
++ *
++ * The threads in this test that perform the rim-sequence are termed
++ * as rim_threads.
++ */
++
++/*
++ * A "corruption" is defined to be the failed COMPARE operation in a
++ * rim-sequence.
++ *
++ * A rim_thread that detects a corruption informs about it to all the
++ * other rim_threads, and the mem_snapshot thread.
++ */
++static volatile unsigned int corruption_found;
++
++/*
++ * This defines the maximum number of rim_threads in this test.
++ *
++ * The THREAD_ID_BITS denote the number of bits required
++ * to represent the thread_ids [0..MAX_THREADS - 1].
++ * We are being a bit paranoid here and set it to 8 bits,
++ * though 6 bits suffice.
++ *
++ */
++#define MAX_THREADS 64
++#define THREAD_ID_BITS 8
++#define THREAD_ID_MASK ((1 << THREAD_ID_BITS) - 1)
++static unsigned int rim_thread_ids[MAX_THREADS];
++static pthread_t rim_threads[MAX_THREADS];
++
++
++/*
++ * Each rim_thread works on an exclusive "chunk" of size
++ * RIM_CHUNK_SIZE.
++ *
++ * The ith rim_thread works on the ith chunk.
++ *
++ * The ith chunk begins at
++ * map1 + (i * RIM_CHUNK_SIZE)
++ */
++#define RIM_CHUNK_SIZE 1024
++#define BITS_PER_BYTE 8
++#define WORD_SIZE (sizeof(unsigned int))
++#define WORD_BITS (WORD_SIZE * BITS_PER_BYTE)
++#define WORDS_PER_CHUNK (RIM_CHUNK_SIZE/WORD_SIZE)
++
++static inline char *compute_chunk_start_addr(unsigned int thread_id)
++{
++ char *chunk_start;
++
++ chunk_start = (char *)((unsigned long)map1 +
++ (thread_id * RIM_CHUNK_SIZE));
++
++ return chunk_start;
++}
++
++/*
++ * The "word-offset" of a word-aligned address inside a chunk, is
++ * defined to be the number of words that precede the address in that
++ * chunk.
++ *
++ * WORD_OFFSET_BITS denote the number of bits required to represent
++ * the word-offsets of all the word-aligned addresses of a chunk.
++ */
++#define WORD_OFFSET_BITS (__builtin_ctz(WORDS_PER_CHUNK))
++#define WORD_OFFSET_MASK ((1 << WORD_OFFSET_BITS) - 1)
++
++static inline unsigned int compute_word_offset(char *start, unsigned int *addr)
++{
++ unsigned int delta_bytes, ret;
++ delta_bytes = (unsigned long)addr - (unsigned long)start;
++
++ ret = delta_bytes/WORD_SIZE;
++
++ return ret;
++}
++
++/*
++ * A "sweep" is defined to be the sequential execution of the
++ * rim-sequence by a rim_thread on its chunk one word at a time,
++ * starting from the first word of its chunk and ending with the last
++ * word of its chunk.
++ *
++ * Each sweep of a rim_thread is uniquely identified by a sweep_id.
++ * SWEEP_ID_BITS denote the number of bits required to represent
++ * the sweep_ids of rim_threads.
++ *
++ * As to why SWEEP_ID_BITS are computed as a function of THREAD_ID_BITS,
++ * WORD_OFFSET_BITS, and WORD_BITS, see the "store-pattern" below.
++ */
++#define SWEEP_ID_BITS (WORD_BITS - (THREAD_ID_BITS + WORD_OFFSET_BITS))
++#define SWEEP_ID_MASK ((1 << SWEEP_ID_BITS) - 1)
++
++/*
++ * A "store-pattern" is the word-pattern that is stored into a word
++ * location in the 4)STORE step of the rim-sequence.
++ *
++ * In the store-pattern, we shall encode:
++ *
++ * - The thread-id of the rim_thread performing the store
++ * (The most significant THREAD_ID_BITS)
++ *
++ * - The word-offset of the address into which the store is being
++ * performed (The next WORD_OFFSET_BITS)
++ *
++ * - The sweep_id of the current sweep in which the store is
++ * being performed. (The lower SWEEP_ID_BITS)
++ *
++ * Store Pattern: 32 bits
++ * |------------------|--------------------|---------------------------------|
++ * | Thread id | Word offset | sweep_id |
++ * |------------------|--------------------|---------------------------------|
++ * THREAD_ID_BITS WORD_OFFSET_BITS SWEEP_ID_BITS
++ *
++ * In the store pattern, the (Thread-id + Word-offset) uniquely identify the
++ * address to which the store is being performed i.e,
++ * address == map1 +
++ * (Thread-id * RIM_CHUNK_SIZE) + (Word-offset * WORD_SIZE)
++ *
++ * And the sweep_id in the store pattern identifies the time when the
++ * store was performed by the rim_thread.
++ *
++ * We shall use this property in the 3)COMPARE step of the
++ * rim-sequence.
++ */
++#define SWEEP_ID_SHIFT 0
++#define WORD_OFFSET_SHIFT (SWEEP_ID_BITS)
++#define THREAD_ID_SHIFT (WORD_OFFSET_BITS + SWEEP_ID_BITS)
++
++/*
++ * Compute the store pattern for a given thread with id @tid, at
++ * location @addr in the sweep identified by @sweep_id
++ */
++static inline unsigned int compute_store_pattern(unsigned int tid,
++ unsigned int *addr,
++ unsigned int sweep_id)
++{
++ unsigned int ret = 0;
++ char *start = compute_chunk_start_addr(tid);
++ unsigned int word_offset = compute_word_offset(start, addr);
++
++ ret += (tid & THREAD_ID_MASK) << THREAD_ID_SHIFT;
++ ret += (word_offset & WORD_OFFSET_MASK) << WORD_OFFSET_SHIFT;
++ ret += (sweep_id & SWEEP_ID_MASK) << SWEEP_ID_SHIFT;
++ return ret;
++}
++
++/* Extract the thread-id from the given store-pattern */
++static inline unsigned int extract_tid(unsigned int pattern)
++{
++ unsigned int ret;
++
++ ret = (pattern >> THREAD_ID_SHIFT) & THREAD_ID_MASK;
++ return ret;
++}
++
++/* Extract the word-offset from the given store-pattern */
++static inline unsigned int extract_word_offset(unsigned int pattern)
++{
++ unsigned int ret;
++
++ ret = (pattern >> WORD_OFFSET_SHIFT) & WORD_OFFSET_MASK;
++
++ return ret;
++}
++
++/* Extract the sweep-id from the given store-pattern */
++static inline unsigned int extract_sweep_id(unsigned int pattern)
++
++{
++ unsigned int ret;
++
++ ret = (pattern >> SWEEP_ID_SHIFT) & SWEEP_ID_MASK;
++
++ return ret;
++}
++
++/************************************************************
++ * *
++ * Logging the output of the verification *
++ * *
++ ************************************************************/
++#define LOGDIR_NAME_SIZE 100
++static char logdir[LOGDIR_NAME_SIZE];
++
++static FILE *fp[MAX_THREADS];
++static const char logfilename[] ="Thread-%02d-Chunk";
++
++static inline void start_verification_log(unsigned int tid,
++ unsigned int *addr,
++ unsigned int cur_sweep_id,
++ unsigned int prev_sweep_id)
++{
++ FILE *f;
++ char logfile[30];
++ char path[LOGDIR_NAME_SIZE + 30];
++ char separator[2] = "/";
++ char *chunk_start = compute_chunk_start_addr(tid);
++ unsigned int size = RIM_CHUNK_SIZE;
++
++ sprintf(logfile, logfilename, tid);
++ strcpy(path, logdir);
++ strcat(path, separator);
++ strcat(path, logfile);
++ f = fopen(path, "w");
++
++ if (!f) {
++ err_msg("Unable to create logfile\n");
++ }
++
++ fp[tid] = f;
++
++ fprintf(f, "----------------------------------------------------------\n");
++ fprintf(f, "PID = %d\n", rim_process_pid);
++ fprintf(f, "Thread id = %02d\n", tid);
++ fprintf(f, "Chunk Start Addr = 0x%016lx\n", (unsigned long)chunk_start);
++ fprintf(f, "Chunk Size = %d\n", size);
++ fprintf(f, "Next Store Addr = 0x%016lx\n", (unsigned long)addr);
++ fprintf(f, "Current sweep-id = 0x%08x\n", cur_sweep_id);
++ fprintf(f, "Previous sweep-id = 0x%08x\n", prev_sweep_id);
++ fprintf(f, "----------------------------------------------------------\n");
++}
++
++static inline void log_anamoly(unsigned int tid, unsigned int *addr,
++ unsigned int expected, unsigned int observed)
++{
++ FILE *f = fp[tid];
++
++ fprintf(f, "Thread %02d: Addr 0x%lx: Expected 0x%x, Observed 0x%x\n",
++ tid, (unsigned long)addr, expected, observed);
++ fprintf(f, "Thread %02d: Expected Thread id = %02d\n", tid, extract_tid(expected));
++ fprintf(f, "Thread %02d: Observed Thread id = %02d\n", tid, extract_tid(observed));
++ fprintf(f, "Thread %02d: Expected Word offset = %03d\n", tid, extract_word_offset(expected));
++ fprintf(f, "Thread %02d: Observed Word offset = %03d\n", tid, extract_word_offset(observed));
++ fprintf(f, "Thread %02d: Expected sweep-id = 0x%x\n", tid, extract_sweep_id(expected));
++ fprintf(f, "Thread %02d: Observed sweep-id = 0x%x\n", tid, extract_sweep_id(observed));
++ fprintf(f, "----------------------------------------------------------\n");
++}
++
++static inline void end_verification_log(unsigned int tid, unsigned nr_anamolies)
++{
++ FILE *f = fp[tid];
++ char logfile[30];
++ char path[LOGDIR_NAME_SIZE + 30];
++ char separator[] = "/";
++
++ fclose(f);
++
++ if (nr_anamolies == 0) {
++ remove(path);
++ return;
++ }
++
++ sprintf(logfile, logfilename, tid);
++ strcpy(path, logdir);
++ strcat(path, separator);
++ strcat(path, logfile);
++
++ printf("Thread %02d chunk has %d corrupted words. For details check %s\n",
++ tid, nr_anamolies, path);
++}
++
++/*
++ * When a COMPARE step of a rim-sequence fails, the rim_thread informs
++ * everyone else via the shared_memory pointed to by
++ * corruption_found variable. On seeing this, every thread verifies the
++ * content of its chunk as follows.
++ *
++ * Suppose a thread identified with @tid was about to store (but not
++ * yet stored) to @next_store_addr in its current sweep identified
++ * @cur_sweep_id. Let @prev_sweep_id indicate the previous sweep_id.
++ *
++ * This implies that for all the addresses @addr < @next_store_addr,
++ * Thread @tid has already performed a store as part of its current
++ * sweep. Hence we expect the content of such @addr to be:
++ * |-------------------------------------------------|
++ * | tid | word_offset(addr) | cur_sweep_id |
++ * |-------------------------------------------------|
++ *
++ * Since Thread @tid is yet to perform stores on address
++ * @next_store_addr and above, we expect the content of such an
++ * address @addr to be:
++ * |-------------------------------------------------|
++ * | tid | word_offset(addr) | prev_sweep_id |
++ * |-------------------------------------------------|
++ *
++ * The verifier function @verify_chunk does this verification and logs
++ * any anamolies that it finds.
++ */
++static void verify_chunk(unsigned int tid, unsigned int *next_store_addr,
++ unsigned int cur_sweep_id,
++ unsigned int prev_sweep_id)
++{
++ unsigned int *iter_ptr;
++ unsigned int size = RIM_CHUNK_SIZE;
++ unsigned int expected;
++ unsigned int observed;
++ char *chunk_start = compute_chunk_start_addr(tid);
++
++ int nr_anamolies = 0;
++
++ start_verification_log(tid, next_store_addr,
++ cur_sweep_id, prev_sweep_id);
++
++ for (iter_ptr = (unsigned int *)chunk_start;
++ (unsigned long)iter_ptr < (unsigned long)chunk_start + size;
++ iter_ptr++) {
++ unsigned int expected_sweep_id;
++
++ if (iter_ptr < next_store_addr) {
++ expected_sweep_id = cur_sweep_id;
++ } else {
++ expected_sweep_id = prev_sweep_id;
++ }
++
++ expected = compute_store_pattern(tid, iter_ptr, expected_sweep_id);
++
++ dcbf((volatile unsigned int*)iter_ptr); //Flush before reading
++ observed = *iter_ptr;
++
++ if (observed != expected) {
++ nr_anamolies++;
++ log_anamoly(tid, iter_ptr, expected, observed);
++ }
++ }
++
++ end_verification_log(tid, nr_anamolies);
++}
++
++static void set_pthread_cpu(pthread_t th, int cpu)
++{
++ cpu_set_t run_cpu_mask;
++ struct sched_param param;
++
++ CPU_ZERO(&run_cpu_mask);
++ CPU_SET(cpu, &run_cpu_mask);
++ pthread_setaffinity_np(th, sizeof(cpu_set_t), &run_cpu_mask);
++
++ param.sched_priority = 1;
++ if (0 && sched_setscheduler(0, SCHED_FIFO, ¶m) == -1) {
++ /* haven't reproduced with this setting, it kills random preemption which may be a factor */
++ fprintf(stderr, "could not set SCHED_FIFO, run as root?\n");
++ }
++}
++
++static void set_mycpu(int cpu)
++{
++ cpu_set_t run_cpu_mask;
++ struct sched_param param;
++
++ CPU_ZERO(&run_cpu_mask);
++ CPU_SET(cpu, &run_cpu_mask);
++ sched_setaffinity(0, sizeof(cpu_set_t), &run_cpu_mask);
++
++ param.sched_priority = 1;
++ if (0 && sched_setscheduler(0, SCHED_FIFO, ¶m) == -1) {
++ fprintf(stderr, "could not set SCHED_FIFO, run as root?\n");
++ }
++}
++
++static volatile int segv_wait;
++
++static void segv_handler(int signo, siginfo_t *info, void *extra)
++{
++ while (segv_wait) {
++ sched_yield();
++ }
++
++}
++
++static void set_segv_handler(void)
++{
++ struct sigaction sa;
++
++ sa.sa_flags = SA_SIGINFO;
++ sa.sa_sigaction = segv_handler;
++
++ if (sigaction(SIGSEGV, &sa, NULL) == -1) {
++ perror("sigaction");
++ exit(EXIT_FAILURE);
++ }
++}
++
++int timeout = 0;
++/*
++ * This function is executed by every rim_thread.
++ *
++ * This function performs sweeps over the exclusive chunks of the
++ * rim_threads executing the rim-sequence one word at a time.
++ */
++static void *rim_fn(void *arg)
++{
++ unsigned int tid = *((unsigned int *)arg);
++
++ int size = RIM_CHUNK_SIZE;
++ char *chunk_start = compute_chunk_start_addr(tid);
++
++ unsigned int prev_sweep_id;
++ unsigned int cur_sweep_id = 0;
++
++ /* word access */
++ unsigned int pattern = cur_sweep_id;
++ unsigned int *pattern_ptr = &pattern;
++ unsigned int *w_ptr, read_data;
++
++ set_segv_handler();
++
++ /*
++ * Let us initialize the chunk:
++ *
++ * Each word-aligned address addr in the chunk,
++ * is initialized to :
++ * |-------------------------------------------------|
++ * | tid | word_offset(addr) | 0 |
++ * |-------------------------------------------------|
++ */
++ for (w_ptr = (unsigned int *)chunk_start;
++ (unsigned long)w_ptr < (unsigned long)(chunk_start) + size;
++ w_ptr++) {
++
++ *pattern_ptr = compute_store_pattern(tid, w_ptr, cur_sweep_id);
++ *w_ptr = *pattern_ptr;
++ }
++
++ while (!corruption_found && !timeout) {
++ prev_sweep_id = cur_sweep_id;
++ cur_sweep_id = cur_sweep_id + 1;
++
++ for (w_ptr = (unsigned int *)chunk_start;
++ (unsigned long)w_ptr < (unsigned long)(chunk_start) + size;
++ w_ptr++) {
++ unsigned int old_pattern;
++
++ /*
++ * Compute the pattern that we would have
++ * stored at this location in the previous
++ * sweep.
++ */
++ old_pattern = compute_store_pattern(tid, w_ptr, prev_sweep_id);
++
++ /*
++ * FLUSH:Ensure that we flush the contents of
++ * the cache before loading
++ */
++ dcbf((volatile unsigned int*)w_ptr); //Flush
++
++ /* LOAD: Read the value */
++ read_data = *w_ptr; //Load
++
++ /*
++ * COMPARE: Is it the same as what we had stored
++ * in the previous sweep ? It better be!
++ */
++ if (read_data != old_pattern) {
++ /* No it isn't! Tell everyone */
++ corruption_found = 1;
++ }
++
++ /*
++ * Before performing a store, let us check if
++ * any rim_thread has found a corruption.
++ */
++ if (corruption_found || timeout) {
++ /*
++ * Yes. Someone (including us!) has found
++ * a corruption :(
++ *
++ * Let us verify that our chunk is
++ * correct.
++ */
++ /* But first, let us allow the dust to settle down! */
++ verify_chunk(tid, w_ptr, cur_sweep_id, prev_sweep_id);
++
++ return 0;
++ }
++
++ /*
++ * Compute the new pattern that we are going
++ * to write to this location
++ */
++ *pattern_ptr = compute_store_pattern(tid, w_ptr, cur_sweep_id);
++
++ /*
++ * STORE: Now let us write this pattern into
++ * the location
++ */
++ *w_ptr = *pattern_ptr;
++ }
++ }
++
++ return NULL;
++}
++
++
++static unsigned long start_cpu = 0;
++static unsigned long nrthreads = 4;
++
++static pthread_t mem_snapshot_thread;
++
++static void *mem_snapshot_fn(void *arg)
++{
++ int page_size = getpagesize();
++ size_t size = page_size;
++ void *tmp = malloc(size);
++
++ while (!corruption_found && !timeout) {
++ /* Stop memory migration once corruption is found */
++ segv_wait = 1;
++
++ mprotect(map1, size, PROT_READ);
++
++ /*
++ * Load from the working alias (map1). Loading from map2
++ * also fails.
++ */
++ memcpy(tmp, map1, size);
++
++ /*
++ * Stores must go via map2 which has write permissions, but
++ * the corrupted data tends to be seen in the snapshot buffer,
++ * so corruption does not appear to be introduced at the
++ * copy-back via map2 alias here.
++ */
++ memcpy(map2, tmp, size);
++ /*
++ * Before releasing other threads, must ensure the copy
++ * back to
++ */
++ asm volatile("sync" ::: "memory");
++ mprotect(map1, size, PROT_READ|PROT_WRITE);
++ asm volatile("sync" ::: "memory");
++ segv_wait = 0;
++
++ usleep(1); /* This value makes a big difference */
++ }
++
++ return 0;
++}
++
++void alrm_sighandler(int sig)
++{
++ timeout = 1;
++}
++
++int main(int argc, char *argv[])
++{
++ int c;
++ int page_size = getpagesize();
++ time_t now;
++ int i, dir_error;
++ pthread_attr_t attr;
++ key_t shm_key = (key_t) getpid();
++ int shmid, run_time = 20 * 60;
++ struct sigaction sa_alrm;
++
++ snprintf(logdir, LOGDIR_NAME_SIZE,
++ "/tmp/logdir-%u", (unsigned int)getpid());
++ while ((c = getopt(argc, argv, "r:hn:l:t:")) != -1) {
++ switch(c) {
++ case 'r':
++ start_cpu = strtoul(optarg, NULL, 10);
++ break;
++ case 'h':
++ printf("%s [-r <start_cpu>] [-n <nrthreads>] [-l <logdir>] [-t <timeout>]\n", argv[0]);
++ exit(0);
++ break;
++ case 'n':
++ nrthreads = strtoul(optarg, NULL, 10);
++ break;
++ case 'l':
++ strncpy(logdir, optarg, LOGDIR_NAME_SIZE - 1);
++ break;
++ case 't':
++ run_time = strtoul(optarg, NULL, 10);
++ break;
++ default:
++ printf("invalid option\n");
++ exit(0);
++ break;
++ }
++ }
++
++ if (nrthreads > MAX_THREADS)
++ nrthreads = MAX_THREADS;
++
++ shmid = shmget(shm_key, page_size, IPC_CREAT|0666);
++ if (shmid < 0) {
++ err_msg("Failed shmget\n");
++ }
++
++ map1 = shmat(shmid, NULL, 0);
++ if (map1 == (void *) -1) {
++ err_msg("Failed shmat");
++ }
++
++ map2 = shmat(shmid, NULL, 0);
++ if (map2 == (void *) -1) {
++ err_msg("Failed shmat");
++ }
++
++ dir_error = mkdir(logdir, 0755);
++
++ if (dir_error) {
++ err_msg("Failed mkdir");
++ }
++
++ printf("start_cpu list:%lu\n", start_cpu);
++ printf("number of worker threads:%lu + 1 snapshot thread\n", nrthreads);
++ printf("Allocated address:0x%016lx + secondary map:0x%016lx\n", (unsigned long)map1, (unsigned long)map2);
++ printf("logdir at : %s\n", logdir);
++ printf("Timeout: %d seconds\n", run_time);
++
++ time(&now);
++ printf("=================================\n");
++ printf(" Starting Test\n");
++ printf(" %s", ctime(&now));
++ printf("=================================\n");
++
++ for (i = 0; i < nrthreads; i++) {
++ if (1 && !fork()) {
++ prctl(PR_SET_PDEATHSIG, SIGKILL);
++ set_mycpu(start_cpu + i);
++ for (;;)
++ sched_yield();
++ exit(0);
++ }
++ }
++
++
++ sa_alrm.sa_handler = &alrm_sighandler;
++ sigemptyset(&sa_alrm.sa_mask);
++ sa_alrm.sa_flags = 0;
++
++ if (sigaction(SIGALRM, &sa_alrm, 0) == -1) {
++ err_msg("Failed signal handler registration\n");
++ }
++
++ alarm(run_time);
++
++ pthread_attr_init(&attr);
++ for (i = 0; i < nrthreads; i++) {
++ rim_thread_ids[i] = i;
++ pthread_create(&rim_threads[i], &attr, rim_fn, &rim_thread_ids[i]);
++ set_pthread_cpu(rim_threads[i], start_cpu + i);
++ }
++
++ pthread_create(&mem_snapshot_thread, &attr, mem_snapshot_fn, map1);
++ set_pthread_cpu(mem_snapshot_thread, start_cpu + i);
++
++
++ pthread_join(mem_snapshot_thread, NULL);
++ for (i = 0; i < nrthreads; i++) {
++ pthread_join(rim_threads[i], NULL);
++ }
++
++ if (!timeout) {
++ time(&now);
++ printf("=================================\n");
++ printf(" Data Corruption Detected\n");
++ printf(" %s", ctime(&now));
++ printf(" See logfiles in %s\n", logdir);
++ printf("=================================\n");
++ return 1;
++ }
++ return 0;
++}