From: "Mike Pagano" <mpagano@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/linux-patches:4.18 commit in: /
Date: Thu, 4 Oct 2018 10:44:23 +0000 (UTC)
Message-ID: <1538649847.2e88d7f9eaffea37f89cf24b95cf17b308c058e9.mpagano@gentoo>
commit: 2e88d7f9eaffea37f89cf24b95cf17b308c058e9
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Oct 4 10:44:07 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Oct 4 10:44:07 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=2e88d7f9

Linux patch 4.18.12

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1011_linux-4.18.12.patch | 7724 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 7728 insertions(+)
diff --git a/0000_README b/0000_README
index cccbd63..ff87445 100644
--- a/0000_README
+++ b/0000_README
@@ -87,6 +87,10 @@ Patch: 1010_linux-4.18.11.patch
From: http://www.kernel.org
Desc: Linux 4.18.11
+Patch: 1011_linux-4.18.12.patch
+From: http://www.kernel.org
+Desc: Linux 4.18.12
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1011_linux-4.18.12.patch b/1011_linux-4.18.12.patch
new file mode 100644
index 0000000..0851ea8
--- /dev/null
+++ b/1011_linux-4.18.12.patch
@@ -0,0 +1,7724 @@
+diff --git a/Documentation/hwmon/ina2xx b/Documentation/hwmon/ina2xx
+index 72d16f08e431..b8df81f6d6bc 100644
+--- a/Documentation/hwmon/ina2xx
++++ b/Documentation/hwmon/ina2xx
+@@ -32,7 +32,7 @@ Supported chips:
+ Datasheet: Publicly available at the Texas Instruments website
+ http://www.ti.com/
+
+-Author: Lothar Felten <l-felten@ti.com>
++Author: Lothar Felten <lothar.felten@gmail.com>
+
+ Description
+ -----------
+diff --git a/Documentation/process/2.Process.rst b/Documentation/process/2.Process.rst
+index a9c46dd0706b..51d0349c7809 100644
+--- a/Documentation/process/2.Process.rst
++++ b/Documentation/process/2.Process.rst
+@@ -134,7 +134,7 @@ and their maintainers are:
+ 4.4 Greg Kroah-Hartman (very long-term stable kernel)
+ 4.9 Greg Kroah-Hartman
+ 4.14 Greg Kroah-Hartman
+- ====== ====================== ===========================
++ ====== ====================== ==============================
+
+ The selection of a kernel for long-term support is purely a matter of a
+ maintainer having the need and the time to maintain that release. There
+diff --git a/Makefile b/Makefile
+index de0ecace693a..466e07af8473 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 11
++SUBLEVEL = 12
+ EXTRAVERSION =
+ NAME = Merciless Moray
+
+diff --git a/arch/arm/boot/dts/dra7.dtsi b/arch/arm/boot/dts/dra7.dtsi
+index e03495a799ce..a0ddf497e8cd 100644
+--- a/arch/arm/boot/dts/dra7.dtsi
++++ b/arch/arm/boot/dts/dra7.dtsi
+@@ -1893,7 +1893,7 @@
+ };
+ };
+
+- dcan1: can@481cc000 {
++ dcan1: can@4ae3c000 {
+ compatible = "ti,dra7-d_can";
+ ti,hwmods = "dcan1";
+ reg = <0x4ae3c000 0x2000>;
+@@ -1903,7 +1903,7 @@
+ status = "disabled";
+ };
+
+- dcan2: can@481d0000 {
++ dcan2: can@48480000 {
+ compatible = "ti,dra7-d_can";
+ ti,hwmods = "dcan2";
+ reg = <0x48480000 0x2000>;
+diff --git a/arch/arm/boot/dts/imx7d.dtsi b/arch/arm/boot/dts/imx7d.dtsi
+index 8d3d123d0a5c..37f0a5afe348 100644
+--- a/arch/arm/boot/dts/imx7d.dtsi
++++ b/arch/arm/boot/dts/imx7d.dtsi
+@@ -125,10 +125,14 @@
+ interrupt-names = "msi";
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0x7>;
+- interrupt-map = <0 0 0 1 &intc GIC_SPI 122 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 0 2 &intc GIC_SPI 123 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 0 3 &intc GIC_SPI 124 IRQ_TYPE_LEVEL_HIGH>,
+- <0 0 0 4 &intc GIC_SPI 125 IRQ_TYPE_LEVEL_HIGH>;
++ /*
++ * Reference manual lists pci irqs incorrectly
++ * Real hardware ordering is same as imx6: D+MSI, C, B, A
++ */
++ interrupt-map = <0 0 0 1 &intc GIC_SPI 125 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 0 2 &intc GIC_SPI 124 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 0 3 &intc GIC_SPI 123 IRQ_TYPE_LEVEL_HIGH>,
++ <0 0 0 4 &intc GIC_SPI 122 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clks IMX7D_PCIE_CTRL_ROOT_CLK>,
+ <&clks IMX7D_PLL_ENET_MAIN_100M_CLK>,
+ <&clks IMX7D_PCIE_PHY_ROOT_CLK>;
+diff --git a/arch/arm/boot/dts/ls1021a.dtsi b/arch/arm/boot/dts/ls1021a.dtsi
+index c55d479971cc..f18490548c78 100644
+--- a/arch/arm/boot/dts/ls1021a.dtsi
++++ b/arch/arm/boot/dts/ls1021a.dtsi
+@@ -84,6 +84,7 @@
+ device_type = "cpu";
+ reg = <0xf01>;
+ clocks = <&clockgen 1 0>;
++ #cooling-cells = <2>;
+ };
+ };
+
+diff --git a/arch/arm/boot/dts/mt7623.dtsi b/arch/arm/boot/dts/mt7623.dtsi
+index d1eb123bc73b..1cdc346a05e8 100644
+--- a/arch/arm/boot/dts/mt7623.dtsi
++++ b/arch/arm/boot/dts/mt7623.dtsi
+@@ -92,6 +92,7 @@
+ <&apmixedsys CLK_APMIXED_MAINPLL>;
+ clock-names = "cpu", "intermediate";
+ operating-points-v2 = <&cpu_opp_table>;
++ #cooling-cells = <2>;
+ clock-frequency = <1300000000>;
+ };
+
+@@ -103,6 +104,7 @@
+ <&apmixedsys CLK_APMIXED_MAINPLL>;
+ clock-names = "cpu", "intermediate";
+ operating-points-v2 = <&cpu_opp_table>;
++ #cooling-cells = <2>;
+ clock-frequency = <1300000000>;
+ };
+
+@@ -114,6 +116,7 @@
+ <&apmixedsys CLK_APMIXED_MAINPLL>;
+ clock-names = "cpu", "intermediate";
+ operating-points-v2 = <&cpu_opp_table>;
++ #cooling-cells = <2>;
+ clock-frequency = <1300000000>;
+ };
+ };
+diff --git a/arch/arm/boot/dts/omap4-droid4-xt894.dts b/arch/arm/boot/dts/omap4-droid4-xt894.dts
+index e7c3c563ff8f..5f27518561c4 100644
+--- a/arch/arm/boot/dts/omap4-droid4-xt894.dts
++++ b/arch/arm/boot/dts/omap4-droid4-xt894.dts
+@@ -351,7 +351,7 @@
+ &mmc2 {
+ vmmc-supply = <&vsdio>;
+ bus-width = <8>;
+- non-removable;
++ ti,non-removable;
+ };
+
+ &mmc3 {
+@@ -618,15 +618,6 @@
+ OMAP4_IOPAD(0x10c, PIN_INPUT | MUX_MODE1) /* abe_mcbsp3_fsx */
+ >;
+ };
+-};
+-
+-&omap4_pmx_wkup {
+- usb_gpio_mux_sel2: pinmux_usb_gpio_mux_sel2_pins {
+- /* gpio_wk0 */
+- pinctrl-single,pins = <
+- OMAP4_IOPAD(0x040, PIN_OUTPUT_PULLDOWN | MUX_MODE3)
+- >;
+- };
+
+ vibrator_direction_pin: pinmux_vibrator_direction_pin {
+ pinctrl-single,pins = <
+@@ -641,6 +632,15 @@
+ };
+ };
+
++&omap4_pmx_wkup {
++ usb_gpio_mux_sel2: pinmux_usb_gpio_mux_sel2_pins {
++ /* gpio_wk0 */
++ pinctrl-single,pins = <
++ OMAP4_IOPAD(0x040, PIN_OUTPUT_PULLDOWN | MUX_MODE3)
++ >;
++ };
++};
++
+ /*
+ * As uart1 is wired to mdm6600 with rts and cts, we can use the cts pin for
+ * uart1 wakeirq.
+diff --git a/arch/arm/mach-mvebu/pmsu.c b/arch/arm/mach-mvebu/pmsu.c
+index 27a78c80e5b1..73d5d72dfc3e 100644
+--- a/arch/arm/mach-mvebu/pmsu.c
++++ b/arch/arm/mach-mvebu/pmsu.c
+@@ -116,8 +116,8 @@ void mvebu_pmsu_set_cpu_boot_addr(int hw_cpu, void *boot_addr)
+ PMSU_BOOT_ADDR_REDIRECT_OFFSET(hw_cpu));
+ }
+
+-extern unsigned char mvebu_boot_wa_start;
+-extern unsigned char mvebu_boot_wa_end;
++extern unsigned char mvebu_boot_wa_start[];
++extern unsigned char mvebu_boot_wa_end[];
+
+ /*
+ * This function sets up the boot address workaround needed for SMP
+@@ -130,7 +130,7 @@ int mvebu_setup_boot_addr_wa(unsigned int crypto_eng_target,
+ phys_addr_t resume_addr_reg)
+ {
+ void __iomem *sram_virt_base;
+- u32 code_len = &mvebu_boot_wa_end - &mvebu_boot_wa_start;
++ u32 code_len = mvebu_boot_wa_end - mvebu_boot_wa_start;
+
+ mvebu_mbus_del_window(BOOTROM_BASE, BOOTROM_SIZE);
+ mvebu_mbus_add_window_by_id(crypto_eng_target, crypto_eng_attribute,
+diff --git a/arch/arm/mach-omap2/omap_hwmod.c b/arch/arm/mach-omap2/omap_hwmod.c
+index 2ceffd85dd3d..cd65ea4e9c54 100644
+--- a/arch/arm/mach-omap2/omap_hwmod.c
++++ b/arch/arm/mach-omap2/omap_hwmod.c
+@@ -2160,6 +2160,37 @@ static int of_dev_hwmod_lookup(struct device_node *np,
+ return -ENODEV;
+ }
+
++/**
++ * omap_hwmod_fix_mpu_rt_idx - fix up mpu_rt_idx register offsets
++ *
++ * @oh: struct omap_hwmod *
++ * @np: struct device_node *
++ *
++ * Fix up module register offsets for modules with mpu_rt_idx.
++ * Only needed for cpsw with interconnect target module defined
++ * in device tree while still using legacy hwmod platform data
++ * for rev, sysc and syss registers.
++ *
++ * Can be removed when all cpsw hwmod platform data has been
++ * dropped.
++ */
++static void omap_hwmod_fix_mpu_rt_idx(struct omap_hwmod *oh,
++ struct device_node *np,
++ struct resource *res)
++{
++ struct device_node *child = NULL;
++ int error;
++
++ child = of_get_next_child(np, child);
++ if (!child)
++ return;
++
++ error = of_address_to_resource(child, oh->mpu_rt_idx, res);
++ if (error)
++ pr_err("%s: error mapping mpu_rt_idx: %i\n",
++ __func__, error);
++}
++
+ /**
+ * omap_hwmod_parse_module_range - map module IO range from device tree
+ * @oh: struct omap_hwmod *
+@@ -2220,7 +2251,13 @@ int omap_hwmod_parse_module_range(struct omap_hwmod *oh,
+ size = be32_to_cpup(ranges);
+
+ pr_debug("omap_hwmod: %s %s at 0x%llx size 0x%llx\n",
+- oh->name, np->name, base, size);
++ oh ? oh->name : "", np->name, base, size);
++
++ if (oh && oh->mpu_rt_idx) {
++ omap_hwmod_fix_mpu_rt_idx(oh, np, res);
++
++ return 0;
++ }
+
+ res->start = base;
+ res->end = base + size - 1;
+diff --git a/arch/arm/mach-omap2/omap_hwmod_reset.c b/arch/arm/mach-omap2/omap_hwmod_reset.c
+index b68f9c0aff0b..d5ddba00bb73 100644
+--- a/arch/arm/mach-omap2/omap_hwmod_reset.c
++++ b/arch/arm/mach-omap2/omap_hwmod_reset.c
+@@ -92,11 +92,13 @@ static void omap_rtc_wait_not_busy(struct omap_hwmod *oh)
+ */
+ void omap_hwmod_rtc_unlock(struct omap_hwmod *oh)
+ {
+- local_irq_disable();
++ unsigned long flags;
++
++ local_irq_save(flags);
+ omap_rtc_wait_not_busy(oh);
+ omap_hwmod_write(OMAP_RTC_KICK0_VALUE, oh, OMAP_RTC_KICK0_REG);
+ omap_hwmod_write(OMAP_RTC_KICK1_VALUE, oh, OMAP_RTC_KICK1_REG);
+- local_irq_enable();
++ local_irq_restore(flags);
+ }
+
+ /**
+@@ -110,9 +112,11 @@ void omap_hwmod_rtc_unlock(struct omap_hwmod *oh)
+ */
+ void omap_hwmod_rtc_lock(struct omap_hwmod *oh)
+ {
+- local_irq_disable();
++ unsigned long flags;
++
++ local_irq_save(flags);
+ omap_rtc_wait_not_busy(oh);
+ omap_hwmod_write(0x0, oh, OMAP_RTC_KICK0_REG);
+ omap_hwmod_write(0x0, oh, OMAP_RTC_KICK1_REG);
+- local_irq_enable();
++ local_irq_restore(flags);
+ }
+diff --git a/arch/arm64/boot/dts/renesas/r8a7795-es1.dtsi b/arch/arm64/boot/dts/renesas/r8a7795-es1.dtsi
+index e19dcd6cb767..0a42b016f257 100644
+--- a/arch/arm64/boot/dts/renesas/r8a7795-es1.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a7795-es1.dtsi
+@@ -80,7 +80,7 @@
+
+ vspd3: vsp@fea38000 {
+ compatible = "renesas,vsp2";
+- reg = <0 0xfea38000 0 0x8000>;
++ reg = <0 0xfea38000 0 0x5000>;
+ interrupts = <GIC_SPI 469 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&cpg CPG_MOD 620>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a7795.dtsi b/arch/arm64/boot/dts/renesas/r8a7795.dtsi
+index d842940b2f43..91c392f879f9 100644
+--- a/arch/arm64/boot/dts/renesas/r8a7795.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a7795.dtsi
+@@ -2530,7 +2530,7 @@
+
+ vspd0: vsp@fea20000 {
+ compatible = "renesas,vsp2";
+- reg = <0 0xfea20000 0 0x8000>;
++ reg = <0 0xfea20000 0 0x5000>;
+ interrupts = <GIC_SPI 466 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&cpg CPG_MOD 623>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
+@@ -2541,7 +2541,7 @@
+
+ vspd1: vsp@fea28000 {
+ compatible = "renesas,vsp2";
+- reg = <0 0xfea28000 0 0x8000>;
++ reg = <0 0xfea28000 0 0x5000>;
+ interrupts = <GIC_SPI 467 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&cpg CPG_MOD 622>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
+@@ -2552,7 +2552,7 @@
+
+ vspd2: vsp@fea30000 {
+ compatible = "renesas,vsp2";
+- reg = <0 0xfea30000 0 0x8000>;
++ reg = <0 0xfea30000 0 0x5000>;
+ interrupts = <GIC_SPI 468 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&cpg CPG_MOD 621>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a7796.dtsi b/arch/arm64/boot/dts/renesas/r8a7796.dtsi
+index 7c25be6b5af3..a3653f9f4627 100644
+--- a/arch/arm64/boot/dts/renesas/r8a7796.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a7796.dtsi
+@@ -2212,7 +2212,7 @@
+
+ vspd0: vsp@fea20000 {
+ compatible = "renesas,vsp2";
+- reg = <0 0xfea20000 0 0x8000>;
++ reg = <0 0xfea20000 0 0x5000>;
+ interrupts = <GIC_SPI 466 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&cpg CPG_MOD 623>;
+ power-domains = <&sysc R8A7796_PD_ALWAYS_ON>;
+@@ -2223,7 +2223,7 @@
+
+ vspd1: vsp@fea28000 {
+ compatible = "renesas,vsp2";
+- reg = <0 0xfea28000 0 0x8000>;
++ reg = <0 0xfea28000 0 0x5000>;
+ interrupts = <GIC_SPI 467 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&cpg CPG_MOD 622>;
+ power-domains = <&sysc R8A7796_PD_ALWAYS_ON>;
+@@ -2234,7 +2234,7 @@
+
+ vspd2: vsp@fea30000 {
+ compatible = "renesas,vsp2";
+- reg = <0 0xfea30000 0 0x8000>;
++ reg = <0 0xfea30000 0 0x5000>;
+ interrupts = <GIC_SPI 468 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&cpg CPG_MOD 621>;
+ power-domains = <&sysc R8A7796_PD_ALWAYS_ON>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77965.dtsi b/arch/arm64/boot/dts/renesas/r8a77965.dtsi
+index 486aecacb22a..ca618228fce1 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77965.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77965.dtsi
+@@ -1397,7 +1397,7 @@
+
+ vspd0: vsp@fea20000 {
+ compatible = "renesas,vsp2";
+- reg = <0 0xfea20000 0 0x8000>;
++ reg = <0 0xfea20000 0 0x5000>;
+ interrupts = <GIC_SPI 466 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&cpg CPG_MOD 623>;
+ power-domains = <&sysc R8A77965_PD_ALWAYS_ON>;
+@@ -1416,7 +1416,7 @@
+
+ vspd1: vsp@fea28000 {
+ compatible = "renesas,vsp2";
+- reg = <0 0xfea28000 0 0x8000>;
++ reg = <0 0xfea28000 0 0x5000>;
+ interrupts = <GIC_SPI 467 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&cpg CPG_MOD 622>;
+ power-domains = <&sysc R8A77965_PD_ALWAYS_ON>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77970.dtsi b/arch/arm64/boot/dts/renesas/r8a77970.dtsi
+index 98a2317a16c4..89dc4e343b7c 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77970.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77970.dtsi
+@@ -776,7 +776,7 @@
+
+ vspd0: vsp@fea20000 {
+ compatible = "renesas,vsp2";
+- reg = <0 0xfea20000 0 0x8000>;
++ reg = <0 0xfea20000 0 0x5000>;
+ interrupts = <GIC_SPI 169 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&cpg CPG_MOD 623>;
+ power-domains = <&sysc R8A77970_PD_ALWAYS_ON>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77995.dtsi b/arch/arm64/boot/dts/renesas/r8a77995.dtsi
+index 2506f46293e8..ac9aadf2723c 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77995.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77995.dtsi
+@@ -699,7 +699,7 @@
+
+ vspd0: vsp@fea20000 {
+ compatible = "renesas,vsp2";
+- reg = <0 0xfea20000 0 0x8000>;
++ reg = <0 0xfea20000 0 0x5000>;
+ interrupts = <GIC_SPI 466 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&cpg CPG_MOD 623>;
+ power-domains = <&sysc R8A77995_PD_ALWAYS_ON>;
+@@ -709,7 +709,7 @@
+
+ vspd1: vsp@fea28000 {
+ compatible = "renesas,vsp2";
+- reg = <0 0xfea28000 0 0x8000>;
++ reg = <0 0xfea28000 0 0x5000>;
+ interrupts = <GIC_SPI 467 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&cpg CPG_MOD 622>;
+ power-domains = <&sysc R8A77995_PD_ALWAYS_ON>;
+diff --git a/arch/arm64/boot/dts/renesas/salvator-common.dtsi b/arch/arm64/boot/dts/renesas/salvator-common.dtsi
+index 9256fbaaab7f..5853f5177b4b 100644
+--- a/arch/arm64/boot/dts/renesas/salvator-common.dtsi
++++ b/arch/arm64/boot/dts/renesas/salvator-common.dtsi
+@@ -440,7 +440,7 @@
+ };
+ };
+
+- port@10 {
++ port@a {
+ reg = <10>;
+
+ adv7482_txa: endpoint {
+@@ -450,7 +450,7 @@
+ };
+ };
+
+- port@11 {
++ port@b {
+ reg = <11>;
+
+ adv7482_txb: endpoint {
+diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
+index 56a0260ceb11..d5c6bb1562d8 100644
+--- a/arch/arm64/kvm/guest.c
++++ b/arch/arm64/kvm/guest.c
+@@ -57,6 +57,45 @@ static u64 core_reg_offset_from_id(u64 id)
+ return id & ~(KVM_REG_ARCH_MASK | KVM_REG_SIZE_MASK | KVM_REG_ARM_CORE);
+ }
+
++static int validate_core_offset(const struct kvm_one_reg *reg)
++{
++ u64 off = core_reg_offset_from_id(reg->id);
++ int size;
++
++ switch (off) {
++ case KVM_REG_ARM_CORE_REG(regs.regs[0]) ...
++ KVM_REG_ARM_CORE_REG(regs.regs[30]):
++ case KVM_REG_ARM_CORE_REG(regs.sp):
++ case KVM_REG_ARM_CORE_REG(regs.pc):
++ case KVM_REG_ARM_CORE_REG(regs.pstate):
++ case KVM_REG_ARM_CORE_REG(sp_el1):
++ case KVM_REG_ARM_CORE_REG(elr_el1):
++ case KVM_REG_ARM_CORE_REG(spsr[0]) ...
++ KVM_REG_ARM_CORE_REG(spsr[KVM_NR_SPSR - 1]):
++ size = sizeof(__u64);
++ break;
++
++ case KVM_REG_ARM_CORE_REG(fp_regs.vregs[0]) ...
++ KVM_REG_ARM_CORE_REG(fp_regs.vregs[31]):
++ size = sizeof(__uint128_t);
++ break;
++
++ case KVM_REG_ARM_CORE_REG(fp_regs.fpsr):
++ case KVM_REG_ARM_CORE_REG(fp_regs.fpcr):
++ size = sizeof(__u32);
++ break;
++
++ default:
++ return -EINVAL;
++ }
++
++ if (KVM_REG_SIZE(reg->id) == size &&
++ IS_ALIGNED(off, size / sizeof(__u32)))
++ return 0;
++
++ return -EINVAL;
++}
++
+ static int get_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+ {
+ /*
+@@ -76,6 +115,9 @@ static int get_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+ (off + (KVM_REG_SIZE(reg->id) / sizeof(__u32))) >= nr_regs)
+ return -ENOENT;
+
++ if (validate_core_offset(reg))
++ return -EINVAL;
++
+ if (copy_to_user(uaddr, ((u32 *)regs) + off, KVM_REG_SIZE(reg->id)))
+ return -EFAULT;
+
+@@ -98,6 +140,9 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+ (off + (KVM_REG_SIZE(reg->id) / sizeof(__u32))) >= nr_regs)
+ return -ENOENT;
+
++ if (validate_core_offset(reg))
++ return -EINVAL;
++
+ if (KVM_REG_SIZE(reg->id) > sizeof(tmp))
+ return -EINVAL;
+
+@@ -107,17 +152,25 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+ }
+
+ if (off == KVM_REG_ARM_CORE_REG(regs.pstate)) {
+- u32 mode = (*(u32 *)valp) & COMPAT_PSR_MODE_MASK;
++ u64 mode = (*(u64 *)valp) & COMPAT_PSR_MODE_MASK;
+ switch (mode) {
+ case COMPAT_PSR_MODE_USR:
++ if (!system_supports_32bit_el0())
++ return -EINVAL;
++ break;
+ case COMPAT_PSR_MODE_FIQ:
+ case COMPAT_PSR_MODE_IRQ:
+ case COMPAT_PSR_MODE_SVC:
+ case COMPAT_PSR_MODE_ABT:
+ case COMPAT_PSR_MODE_UND:
++ if (!vcpu_el1_is_32bit(vcpu))
++ return -EINVAL;
++ break;
+ case PSR_MODE_EL0t:
+ case PSR_MODE_EL1t:
+ case PSR_MODE_EL1h:
++ if (vcpu_el1_is_32bit(vcpu))
++ return -EINVAL;
+ break;
+ default:
+ err = -EINVAL;
+diff --git a/arch/mips/boot/Makefile b/arch/mips/boot/Makefile
+index c22da16d67b8..5c7bfa8478e7 100644
+--- a/arch/mips/boot/Makefile
++++ b/arch/mips/boot/Makefile
+@@ -118,10 +118,12 @@ ifeq ($(ADDR_BITS),64)
+ itb_addr_cells = 2
+ endif
+
++targets += vmlinux.its.S
++
+ quiet_cmd_its_cat = CAT $@
+- cmd_its_cat = cat $^ >$@
++ cmd_its_cat = cat $(filter-out $(PHONY), $^) >$@
+
+-$(obj)/vmlinux.its.S: $(addprefix $(srctree)/arch/mips/$(PLATFORM)/,$(ITS_INPUTS))
++$(obj)/vmlinux.its.S: $(addprefix $(srctree)/arch/mips/$(PLATFORM)/,$(ITS_INPUTS)) FORCE
+ $(call if_changed,its_cat)
+
+ quiet_cmd_cpp_its_S = ITS $@
+diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
+index f817342aab8f..53729220b48d 100644
+--- a/arch/powerpc/kernel/exceptions-64s.S
++++ b/arch/powerpc/kernel/exceptions-64s.S
+@@ -1321,9 +1321,7 @@ EXC_REAL_BEGIN(denorm_exception_hv, 0x1500, 0x100)
+
+ #ifdef CONFIG_PPC_DENORMALISATION
+ mfspr r10,SPRN_HSRR1
+- mfspr r11,SPRN_HSRR0 /* save HSRR0 */
+ andis. r10,r10,(HSRR1_DENORM)@h /* denorm? */
+- addi r11,r11,-4 /* HSRR0 is next instruction */
+ bne+ denorm_assist
+ #endif
+
+@@ -1389,6 +1387,8 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S)
+ */
+ XVCPSGNDP32(32)
+ denorm_done:
++ mfspr r11,SPRN_HSRR0
++ subi r11,r11,4
+ mtspr SPRN_HSRR0,r11
+ mtcrf 0x80,r9
+ ld r9,PACA_EXGEN+EX_R9(r13)
+diff --git a/arch/powerpc/kernel/machine_kexec.c b/arch/powerpc/kernel/machine_kexec.c
+index 936c7e2d421e..b53401334e81 100644
+--- a/arch/powerpc/kernel/machine_kexec.c
++++ b/arch/powerpc/kernel/machine_kexec.c
+@@ -188,7 +188,12 @@ void __init reserve_crashkernel(void)
+ (unsigned long)(crashk_res.start >> 20),
+ (unsigned long)(memblock_phys_mem_size() >> 20));
+
+- memblock_reserve(crashk_res.start, crash_size);
++ if (!memblock_is_region_memory(crashk_res.start, crash_size) ||
++ memblock_reserve(crashk_res.start, crash_size)) {
++ pr_err("Failed to reserve memory for crashkernel!\n");
++ crashk_res.start = crashk_res.end = 0;
++ return;
++ }
+ }
+
+ int overlaps_crashkernel(unsigned long start, unsigned long size)
+diff --git a/arch/powerpc/lib/checksum_64.S b/arch/powerpc/lib/checksum_64.S
+index 886ed94b9c13..d05c8af4ac51 100644
+--- a/arch/powerpc/lib/checksum_64.S
++++ b/arch/powerpc/lib/checksum_64.S
+@@ -443,6 +443,9 @@ _GLOBAL(csum_ipv6_magic)
+ addc r0, r8, r9
+ ld r10, 0(r4)
+ ld r11, 8(r4)
++#ifdef CONFIG_CPU_LITTLE_ENDIAN
++ rotldi r5, r5, 8
++#endif
+ adde r0, r0, r10
+ add r5, r5, r7
+ adde r0, r0, r11
+diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
+index 35ac5422903a..b5a71baedbc2 100644
+--- a/arch/powerpc/mm/numa.c
++++ b/arch/powerpc/mm/numa.c
+@@ -1452,7 +1452,8 @@ static struct timer_list topology_timer;
+
+ static void reset_topology_timer(void)
+ {
+- mod_timer(&topology_timer, jiffies + topology_timer_secs * HZ);
++ if (vphn_enabled)
++ mod_timer(&topology_timer, jiffies + topology_timer_secs * HZ);
+ }
+
+ #ifdef CONFIG_SMP
+diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c
+index 0e7810ccd1ae..c18d17d830a1 100644
+--- a/arch/powerpc/mm/pkeys.c
++++ b/arch/powerpc/mm/pkeys.c
+@@ -44,7 +44,7 @@ static void scan_pkey_feature(void)
+ * Since any pkey can be used for data or execute, we will just treat
+ * all keys as equal and track them as one entity.
+ */
+- pkeys_total = be32_to_cpu(vals[0]);
++ pkeys_total = vals[0];
+ pkeys_devtree_defined = true;
+ }
+
+diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
+index a2cdf358a3ac..0976049d3365 100644
+--- a/arch/powerpc/platforms/powernv/pci-ioda.c
++++ b/arch/powerpc/platforms/powernv/pci-ioda.c
+@@ -2841,7 +2841,7 @@ static long pnv_pci_ioda2_table_alloc_pages(int nid, __u64 bus_offset,
+ level_shift = entries_shift + 3;
+ level_shift = max_t(unsigned, level_shift, PAGE_SHIFT);
+
+- if ((level_shift - 3) * levels + page_shift >= 60)
++ if ((level_shift - 3) * levels + page_shift >= 55)
+ return -EINVAL;
+
+ /* Allocate TCE table */
+diff --git a/arch/s390/kernel/sysinfo.c b/arch/s390/kernel/sysinfo.c
+index 54f5496913fa..12f80d1f0415 100644
+--- a/arch/s390/kernel/sysinfo.c
++++ b/arch/s390/kernel/sysinfo.c
+@@ -59,6 +59,8 @@ int stsi(void *sysinfo, int fc, int sel1, int sel2)
+ }
+ EXPORT_SYMBOL(stsi);
+
++#ifdef CONFIG_PROC_FS
++
+ static bool convert_ext_name(unsigned char encoding, char *name, size_t len)
+ {
+ switch (encoding) {
+@@ -301,6 +303,8 @@ static int __init sysinfo_create_proc(void)
+ }
+ device_initcall(sysinfo_create_proc);
+
++#endif /* CONFIG_PROC_FS */
++
+ /*
+ * Service levels interface.
+ */
+diff --git a/arch/s390/mm/extmem.c b/arch/s390/mm/extmem.c
+index 6ad15d3fab81..84111a43ea29 100644
+--- a/arch/s390/mm/extmem.c
++++ b/arch/s390/mm/extmem.c
+@@ -80,7 +80,7 @@ struct qin64 {
+ struct dcss_segment {
+ struct list_head list;
+ char dcss_name[8];
+- char res_name[15];
++ char res_name[16];
+ unsigned long start_addr;
+ unsigned long end;
+ atomic_t ref_count;
+@@ -433,7 +433,7 @@ __segment_load (char *name, int do_nonshared, unsigned long *addr, unsigned long
+ memcpy(&seg->res_name, seg->dcss_name, 8);
+ EBCASC(seg->res_name, 8);
+ seg->res_name[8] = '\0';
+- strncat(seg->res_name, " (DCSS)", 7);
++ strlcat(seg->res_name, " (DCSS)", sizeof(seg->res_name));
+ seg->res->name = seg->res_name;
+ rc = seg->vm_segtype;
+ if (rc == SEG_TYPE_SC ||
+diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c
+index e3bd5627afef..76d89ee8b428 100644
+--- a/arch/s390/mm/pgalloc.c
++++ b/arch/s390/mm/pgalloc.c
+@@ -28,7 +28,7 @@ static struct ctl_table page_table_sysctl[] = {
+ .data = &page_table_allocate_pgste,
+ .maxlen = sizeof(int),
+ .mode = S_IRUGO | S_IWUSR,
+- .proc_handler = proc_dointvec,
++ .proc_handler = proc_dointvec_minmax,
+ .extra1 = &page_table_allocate_pgste_min,
+ .extra2 = &page_table_allocate_pgste_max,
+ },
+diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
+index 8ae7ffda8f98..0ab33af41fbd 100644
+--- a/arch/x86/entry/entry_64.S
++++ b/arch/x86/entry/entry_64.S
+@@ -92,7 +92,7 @@ END(native_usergs_sysret64)
+ .endm
+
+ .macro TRACE_IRQS_IRETQ_DEBUG
+- bt $9, EFLAGS(%rsp) /* interrupts off? */
++ btl $9, EFLAGS(%rsp) /* interrupts off? */
+ jnc 1f
+ TRACE_IRQS_ON_DEBUG
+ 1:
+@@ -701,7 +701,7 @@ retint_kernel:
+ #ifdef CONFIG_PREEMPT
+ /* Interrupts are off */
+ /* Check if we need preemption */
+- bt $9, EFLAGS(%rsp) /* were interrupts off? */
++ btl $9, EFLAGS(%rsp) /* were interrupts off? */
+ jnc 1f
+ 0: cmpl $0, PER_CPU_VAR(__preempt_count)
+ jnz 1f
+diff --git a/arch/x86/events/intel/lbr.c b/arch/x86/events/intel/lbr.c
+index cf372b90557e..a4170048a30b 100644
+--- a/arch/x86/events/intel/lbr.c
++++ b/arch/x86/events/intel/lbr.c
+@@ -346,7 +346,7 @@ static void __intel_pmu_lbr_restore(struct x86_perf_task_context *task_ctx)
+
+ mask = x86_pmu.lbr_nr - 1;
+ tos = task_ctx->tos;
+- for (i = 0; i < tos; i++) {
++ for (i = 0; i < task_ctx->valid_lbrs; i++) {
+ lbr_idx = (tos - i) & mask;
+ wrlbr_from(lbr_idx, task_ctx->lbr_from[i]);
+ wrlbr_to (lbr_idx, task_ctx->lbr_to[i]);
+@@ -354,6 +354,15 @@ static void __intel_pmu_lbr_restore(struct x86_perf_task_context *task_ctx)
+ if (x86_pmu.intel_cap.lbr_format == LBR_FORMAT_INFO)
+ wrmsrl(MSR_LBR_INFO_0 + lbr_idx, task_ctx->lbr_info[i]);
+ }
++
++ for (; i < x86_pmu.lbr_nr; i++) {
++ lbr_idx = (tos - i) & mask;
++ wrlbr_from(lbr_idx, 0);
++ wrlbr_to(lbr_idx, 0);
++ if (x86_pmu.intel_cap.lbr_format == LBR_FORMAT_INFO)
++ wrmsrl(MSR_LBR_INFO_0 + lbr_idx, 0);
++ }
++
+ wrmsrl(x86_pmu.lbr_tos, tos);
+ task_ctx->lbr_stack_state = LBR_NONE;
+ }
+@@ -361,7 +370,7 @@ static void __intel_pmu_lbr_restore(struct x86_perf_task_context *task_ctx)
+ static void __intel_pmu_lbr_save(struct x86_perf_task_context *task_ctx)
+ {
+ unsigned lbr_idx, mask;
+- u64 tos;
++ u64 tos, from;
+ int i;
+
+ if (task_ctx->lbr_callstack_users == 0) {
+@@ -371,13 +380,17 @@ static void __intel_pmu_lbr_save(struct x86_perf_task_context *task_ctx)
+
+ mask = x86_pmu.lbr_nr - 1;
+ tos = intel_pmu_lbr_tos();
+- for (i = 0; i < tos; i++) {
++ for (i = 0; i < x86_pmu.lbr_nr; i++) {
+ lbr_idx = (tos - i) & mask;
+- task_ctx->lbr_from[i] = rdlbr_from(lbr_idx);
++ from = rdlbr_from(lbr_idx);
++ if (!from)
++ break;
++ task_ctx->lbr_from[i] = from;
+ task_ctx->lbr_to[i] = rdlbr_to(lbr_idx);
+ if (x86_pmu.intel_cap.lbr_format == LBR_FORMAT_INFO)
+ rdmsrl(MSR_LBR_INFO_0 + lbr_idx, task_ctx->lbr_info[i]);
+ }
++ task_ctx->valid_lbrs = i;
+ task_ctx->tos = tos;
+ task_ctx->lbr_stack_state = LBR_VALID;
+ }
+@@ -531,7 +544,7 @@ static void intel_pmu_lbr_read_32(struct cpu_hw_events *cpuc)
+ */
+ static void intel_pmu_lbr_read_64(struct cpu_hw_events *cpuc)
+ {
+- bool need_info = false;
++ bool need_info = false, call_stack = false;
+ unsigned long mask = x86_pmu.lbr_nr - 1;
+ int lbr_format = x86_pmu.intel_cap.lbr_format;
+ u64 tos = intel_pmu_lbr_tos();
+@@ -542,7 +555,7 @@ static void intel_pmu_lbr_read_64(struct cpu_hw_events *cpuc)
+ if (cpuc->lbr_sel) {
+ need_info = !(cpuc->lbr_sel->config & LBR_NO_INFO);
+ if (cpuc->lbr_sel->config & LBR_CALL_STACK)
+- num = tos;
++ call_stack = true;
+ }
+
+ for (i = 0; i < num; i++) {
+@@ -555,6 +568,13 @@ static void intel_pmu_lbr_read_64(struct cpu_hw_events *cpuc)
+ from = rdlbr_from(lbr_idx);
+ to = rdlbr_to(lbr_idx);
+
++ /*
++ * Read LBR call stack entries
++ * until invalid entry (0s) is detected.
++ */
++ if (call_stack && !from)
++ break;
++
+ if (lbr_format == LBR_FORMAT_INFO && need_info) {
+ u64 info;
+
+diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
+index 9f3711470ec1..6b72a92069fd 100644
+--- a/arch/x86/events/perf_event.h
++++ b/arch/x86/events/perf_event.h
+@@ -648,6 +648,7 @@ struct x86_perf_task_context {
+ u64 lbr_to[MAX_LBR_ENTRIES];
+ u64 lbr_info[MAX_LBR_ENTRIES];
+ int tos;
++ int valid_lbrs;
+ int lbr_callstack_users;
+ int lbr_stack_state;
+ };
+diff --git a/arch/x86/include/asm/fixmap.h b/arch/x86/include/asm/fixmap.h
+index e203169931c7..6390bd8c141b 100644
+--- a/arch/x86/include/asm/fixmap.h
++++ b/arch/x86/include/asm/fixmap.h
+@@ -14,6 +14,16 @@
+ #ifndef _ASM_X86_FIXMAP_H
+ #define _ASM_X86_FIXMAP_H
+
++/*
++ * Exposed to assembly code for setting up initial page tables. Cannot be
++ * calculated in assembly code (fixmap entries are an enum), but is sanity
++ * checked in the actual fixmap C code to make sure that the fixmap is
++ * covered fully.
++ */
++#define FIXMAP_PMD_NUM 2
++/* fixmap starts downwards from the 507th entry in level2_fixmap_pgt */
++#define FIXMAP_PMD_TOP 507
++
+ #ifndef __ASSEMBLY__
+ #include <linux/kernel.h>
+ #include <asm/acpi.h>
+diff --git a/arch/x86/include/asm/pgtable_64.h b/arch/x86/include/asm/pgtable_64.h
+index 82ff20b0ae45..20127d551ab5 100644
+--- a/arch/x86/include/asm/pgtable_64.h
++++ b/arch/x86/include/asm/pgtable_64.h
+@@ -14,6 +14,7 @@
+ #include <asm/processor.h>
+ #include <linux/bitops.h>
+ #include <linux/threads.h>
++#include <asm/fixmap.h>
+
+ extern p4d_t level4_kernel_pgt[512];
+ extern p4d_t level4_ident_pgt[512];
+@@ -22,7 +23,7 @@ extern pud_t level3_ident_pgt[512];
+ extern pmd_t level2_kernel_pgt[512];
+ extern pmd_t level2_fixmap_pgt[512];
+ extern pmd_t level2_ident_pgt[512];
+-extern pte_t level1_fixmap_pgt[512];
++extern pte_t level1_fixmap_pgt[512 * FIXMAP_PMD_NUM];
+ extern pgd_t init_top_pgt[];
+
+ #define swapper_pg_dir init_top_pgt
+diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
+index 8047379e575a..11455200ae66 100644
+--- a/arch/x86/kernel/head64.c
++++ b/arch/x86/kernel/head64.c
+@@ -35,6 +35,7 @@
+ #include <asm/bootparam_utils.h>
+ #include <asm/microcode.h>
+ #include <asm/kasan.h>
++#include <asm/fixmap.h>
+
+ /*
+ * Manage page tables very early on.
+@@ -165,7 +166,8 @@ unsigned long __head __startup_64(unsigned long physaddr,
+ pud[511] += load_delta;
+
+ pmd = fixup_pointer(level2_fixmap_pgt, physaddr);
+- pmd[506] += load_delta;
++ for (i = FIXMAP_PMD_TOP; i > FIXMAP_PMD_TOP - FIXMAP_PMD_NUM; i--)
++ pmd[i] += load_delta;
+
+ /*
+ * Set up the identity mapping for the switchover. These
+diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
+index 8344dd2f310a..6bc215c15ce0 100644
+--- a/arch/x86/kernel/head_64.S
++++ b/arch/x86/kernel/head_64.S
+@@ -24,6 +24,7 @@
+ #include "../entry/calling.h"
+ #include <asm/export.h>
+ #include <asm/nospec-branch.h>
++#include <asm/fixmap.h>
+
+ #ifdef CONFIG_PARAVIRT
+ #include <asm/asm-offsets.h>
+@@ -445,13 +446,20 @@ NEXT_PAGE(level2_kernel_pgt)
+ KERNEL_IMAGE_SIZE/PMD_SIZE)
+
+ NEXT_PAGE(level2_fixmap_pgt)
+- .fill 506,8,0
+- .quad level1_fixmap_pgt - __START_KERNEL_map + _PAGE_TABLE_NOENC
+- /* 8MB reserved for vsyscalls + a 2MB hole = 4 + 1 entries */
+- .fill 5,8,0
++ .fill (512 - 4 - FIXMAP_PMD_NUM),8,0
++ pgtno = 0
++ .rept (FIXMAP_PMD_NUM)
++ .quad level1_fixmap_pgt + (pgtno << PAGE_SHIFT) - __START_KERNEL_map \
++ + _PAGE_TABLE_NOENC;
++ pgtno = pgtno + 1
++ .endr
++ /* 6 MB reserved space + a 2MB hole */
++ .fill 4,8,0
+
+ NEXT_PAGE(level1_fixmap_pgt)
++ .rept (FIXMAP_PMD_NUM)
+ .fill 512,8,0
++ .endr
+
+ #undef PMDS
+
+diff --git a/arch/x86/kernel/tsc_msr.c b/arch/x86/kernel/tsc_msr.c
+index 19afdbd7d0a7..5532d1be7687 100644
+--- a/arch/x86/kernel/tsc_msr.c
++++ b/arch/x86/kernel/tsc_msr.c
+@@ -12,6 +12,7 @@
+ #include <asm/setup.h>
+ #include <asm/apic.h>
+ #include <asm/param.h>
++#include <asm/tsc.h>
+
+ #define MAX_NUM_FREQS 9
+
+diff --git a/arch/x86/mm/numa_emulation.c b/arch/x86/mm/numa_emulation.c
+index 34a2a3bfde9c..22cbad56acab 100644
+--- a/arch/x86/mm/numa_emulation.c
++++ b/arch/x86/mm/numa_emulation.c
+@@ -61,7 +61,7 @@ static int __init emu_setup_memblk(struct numa_meminfo *ei,
+ eb->nid = nid;
+
+ if (emu_nid_to_phys[nid] == NUMA_NO_NODE)
+- emu_nid_to_phys[nid] = nid;
++ emu_nid_to_phys[nid] = pb->nid;
+
+ pb->start += size;
+ if (pb->start >= pb->end) {
+diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
+index e3deefb891da..a300ffeece9b 100644
+--- a/arch/x86/mm/pgtable.c
++++ b/arch/x86/mm/pgtable.c
+@@ -577,6 +577,15 @@ void __native_set_fixmap(enum fixed_addresses idx, pte_t pte)
+ {
+ unsigned long address = __fix_to_virt(idx);
+
++#ifdef CONFIG_X86_64
++ /*
++ * Ensure that the static initial page tables are covering the
++ * fixmap completely.
++ */
++ BUILD_BUG_ON(__end_of_permanent_fixed_addresses >
++ (FIXMAP_PMD_NUM * PTRS_PER_PTE));
++#endif
++
+ if (idx >= __end_of_fixed_addresses) {
+ BUG();
+ return;
+diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
+index 1d2106d83b4e..019da252a04f 100644
+--- a/arch/x86/mm/pti.c
++++ b/arch/x86/mm/pti.c
+@@ -239,7 +239,7 @@ static pmd_t *pti_user_pagetable_walk_pmd(unsigned long address)
+ *
+ * Returns a pointer to a PTE on success, or NULL on failure.
+ */
+-static __init pte_t *pti_user_pagetable_walk_pte(unsigned long address)
++static pte_t *pti_user_pagetable_walk_pte(unsigned long address)
+ {
+ gfp_t gfp = (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO);
+ pmd_t *pmd;
+diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
+index 071d82ec9abb..2473eaca3468 100644
+--- a/arch/x86/xen/mmu_pv.c
++++ b/arch/x86/xen/mmu_pv.c
+@@ -1908,7 +1908,7 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
+ /* L3_k[511] -> level2_fixmap_pgt */
+ convert_pfn_mfn(level3_kernel_pgt);
+
+- /* L3_k[511][506] -> level1_fixmap_pgt */
++ /* L3_k[511][508-FIXMAP_PMD_NUM ... 507] -> level1_fixmap_pgt */
+ convert_pfn_mfn(level2_fixmap_pgt);
+
+ /* We get [511][511] and have Xen's version of level2_kernel_pgt */
+@@ -1953,7 +1953,11 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
+ set_page_prot(level2_ident_pgt, PAGE_KERNEL_RO);
+ set_page_prot(level2_kernel_pgt, PAGE_KERNEL_RO);
+ set_page_prot(level2_fixmap_pgt, PAGE_KERNEL_RO);
+- set_page_prot(level1_fixmap_pgt, PAGE_KERNEL_RO);
++
++ for (i = 0; i < FIXMAP_PMD_NUM; i++) {
++ set_page_prot(level1_fixmap_pgt + i * PTRS_PER_PTE,
++ PAGE_KERNEL_RO);
++ }
+
+ /* Pin down new L4 */
+ pin_pagetable_pfn(MMUEXT_PIN_L4_TABLE,
+diff --git a/block/elevator.c b/block/elevator.c
+index fa828b5bfd4b..89a48a3a8c12 100644
+--- a/block/elevator.c
++++ b/block/elevator.c
+@@ -609,7 +609,7 @@ void elv_drain_elevator(struct request_queue *q)
+
+ while (e->type->ops.sq.elevator_dispatch_fn(q, 1))
+ ;
+- if (q->nr_sorted && printed++ < 10) {
++ if (q->nr_sorted && !blk_queue_is_zoned(q) && printed++ < 10 ) {
+ printk(KERN_ERR "%s: forced dispatching is broken "
+ "(nr_sorted=%u), please report this\n",
+ q->elevator->type->elevator_name, q->nr_sorted);
+diff --git a/crypto/ablkcipher.c b/crypto/ablkcipher.c
+index 4ee7c041bb82..8882e90e868e 100644
+--- a/crypto/ablkcipher.c
++++ b/crypto/ablkcipher.c
+@@ -368,6 +368,7 @@ static int crypto_ablkcipher_report(struct sk_buff *skb, struct crypto_alg *alg)
+ strncpy(rblkcipher.type, "ablkcipher", sizeof(rblkcipher.type));
+ strncpy(rblkcipher.geniv, alg->cra_ablkcipher.geniv ?: "<default>",
+ sizeof(rblkcipher.geniv));
++ rblkcipher.geniv[sizeof(rblkcipher.geniv) - 1] = '\0';
+
+ rblkcipher.blocksize = alg->cra_blocksize;
+ rblkcipher.min_keysize = alg->cra_ablkcipher.min_keysize;
+@@ -442,6 +443,7 @@ static int crypto_givcipher_report(struct sk_buff *skb, struct crypto_alg *alg)
+ strncpy(rblkcipher.type, "givcipher", sizeof(rblkcipher.type));
+ strncpy(rblkcipher.geniv, alg->cra_ablkcipher.geniv ?: "<built-in>",
+ sizeof(rblkcipher.geniv));
++ rblkcipher.geniv[sizeof(rblkcipher.geniv) - 1] = '\0';
+
+ rblkcipher.blocksize = alg->cra_blocksize;
+ rblkcipher.min_keysize = alg->cra_ablkcipher.min_keysize;
+diff --git a/crypto/blkcipher.c b/crypto/blkcipher.c
+index 77b5fa293f66..f93abf13b5d4 100644
+--- a/crypto/blkcipher.c
++++ b/crypto/blkcipher.c
+@@ -510,6 +510,7 @@ static int crypto_blkcipher_report(struct sk_buff *skb, struct crypto_alg *alg)
+ strncpy(rblkcipher.type, "blkcipher", sizeof(rblkcipher.type));
+ strncpy(rblkcipher.geniv, alg->cra_blkcipher.geniv ?: "<default>",
+ sizeof(rblkcipher.geniv));
++ rblkcipher.geniv[sizeof(rblkcipher.geniv) - 1] = '\0';
+
+ rblkcipher.blocksize = alg->cra_blocksize;
+ rblkcipher.min_keysize = alg->cra_blkcipher.min_keysize;
+diff --git a/drivers/acpi/button.c b/drivers/acpi/button.c
+index 2345a5ee2dbb..40ed3ec9fc94 100644
+--- a/drivers/acpi/button.c
++++ b/drivers/acpi/button.c
+@@ -235,9 +235,6 @@ static int acpi_lid_notify_state(struct acpi_device *device, int state)
+ button->last_time = ktime_get();
+ }
+
+- if (state)
+- acpi_pm_wakeup_event(&device->dev);
+-
+ ret = blocking_notifier_call_chain(&acpi_lid_notifier, state, device);
+ if (ret == NOTIFY_DONE)
+ ret = blocking_notifier_call_chain(&acpi_lid_notifier, state,
+@@ -366,7 +363,8 @@ int acpi_lid_open(void)
+ }
+ EXPORT_SYMBOL(acpi_lid_open);
+
+-static int acpi_lid_update_state(struct acpi_device *device)
++static int acpi_lid_update_state(struct acpi_device *device,
++ bool signal_wakeup)
+ {
+ int state;
+
+@@ -374,6 +372,9 @@ static int acpi_lid_update_state(struct acpi_device *device)
+ if (state < 0)
+ return state;
+
++ if (state && signal_wakeup)
++ acpi_pm_wakeup_event(&device->dev);
++
+ return acpi_lid_notify_state(device, state);
+ }
+
+@@ -384,7 +385,7 @@ static void acpi_lid_initialize_state(struct acpi_device *device)
+ (void)acpi_lid_notify_state(device, 1);
+ break;
+ case ACPI_BUTTON_LID_INIT_METHOD:
+- (void)acpi_lid_update_state(device);
++ (void)acpi_lid_update_state(device, false);
+ break;
+ case ACPI_BUTTON_LID_INIT_IGNORE:
+ default:
+@@ -409,7 +410,7 @@ static void acpi_button_notify(struct acpi_device *device, u32 event)
+ users = button->input->users;
+ mutex_unlock(&button->input->mutex);
+ if (users)
+- acpi_lid_update_state(device);
++ acpi_lid_update_state(device, true);
+ } else {
+ int keycode;
+
+diff --git a/drivers/ata/pata_ftide010.c b/drivers/ata/pata_ftide010.c
+index 5d4b72e21161..569a4a662dcd 100644
+--- a/drivers/ata/pata_ftide010.c
++++ b/drivers/ata/pata_ftide010.c
+@@ -256,14 +256,12 @@ static struct ata_port_operations pata_ftide010_port_ops = {
+ .qc_issue = ftide010_qc_issue,
+ };
+
+-static struct ata_port_info ftide010_port_info[] = {
+- {
+- .flags = ATA_FLAG_SLAVE_POSS,
+- .mwdma_mask = ATA_MWDMA2,
+- .udma_mask = ATA_UDMA6,
+- .pio_mask = ATA_PIO4,
+- .port_ops = &pata_ftide010_port_ops,
+- },
++static struct ata_port_info ftide010_port_info = {
++ .flags = ATA_FLAG_SLAVE_POSS,
++ .mwdma_mask = ATA_MWDMA2,
++ .udma_mask = ATA_UDMA6,
++ .pio_mask = ATA_PIO4,
++ .port_ops = &pata_ftide010_port_ops,
+ };
+
+ #if IS_ENABLED(CONFIG_SATA_GEMINI)
+@@ -349,6 +347,7 @@ static int pata_ftide010_gemini_cable_detect(struct ata_port *ap)
+ }
+
+ static int pata_ftide010_gemini_init(struct ftide010 *ftide,
++ struct ata_port_info *pi,
+ bool is_ata1)
+ {
+ struct device *dev = ftide->dev;
+@@ -373,7 +372,13 @@ static int pata_ftide010_gemini_init(struct ftide010 *ftide,
+
+ /* Flag port as SATA-capable */
+ if (gemini_sata_bridge_enabled(sg, is_ata1))
+- ftide010_port_info[0].flags |= ATA_FLAG_SATA;
++ pi->flags |= ATA_FLAG_SATA;
++
++ /* This device has broken DMA, only PIO works */
++ if (of_machine_is_compatible("itian,sq201")) {
++ pi->mwdma_mask = 0;
++ pi->udma_mask = 0;
++ }
+
+ /*
+ * We assume that a simple 40-wire cable is used in the PATA mode.
+@@ -435,6 +440,7 @@ static int pata_ftide010_gemini_init(struct ftide010 *ftide,
+ }
+ #else
+ static int pata_ftide010_gemini_init(struct ftide010 *ftide,
++ struct ata_port_info *pi,
+ bool is_ata1)
+ {
+ return -ENOTSUPP;
+@@ -446,7 +452,7 @@ static int pata_ftide010_probe(struct platform_device *pdev)
+ {
+ struct device *dev = &pdev->dev;
+ struct device_node *np = dev->of_node;
+- const struct ata_port_info pi = ftide010_port_info[0];
++ struct ata_port_info pi = ftide010_port_info;
+ const struct ata_port_info *ppi[] = { &pi, NULL };
+ struct ftide010 *ftide;
+ struct resource *res;
+@@ -490,6 +496,7 @@ static int pata_ftide010_probe(struct platform_device *pdev)
+ * are ATA0. This will also set up the cable types.
+ */
+ ret = pata_ftide010_gemini_init(ftide,
++ &pi,
+ (res->start == 0x63400000));
+ if (ret)
+ goto err_dis_clk;
+diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
+index 8871b5044d9e..7d7c698c0213 100644
+--- a/drivers/block/floppy.c
++++ b/drivers/block/floppy.c
+@@ -3470,6 +3470,9 @@ static int fd_locked_ioctl(struct block_device *bdev, fmode_t mode, unsigned int
+ (struct floppy_struct **)&outparam);
+ if (ret)
+ return ret;
++ memcpy(&inparam.g, outparam,
++ offsetof(struct floppy_struct, name));
++ outparam = &inparam.g;
+ break;
+ case FDMSGON:
+ UDP->flags |= FTD_MSG;
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index f73a27ea28cc..75947f04fc75 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -374,6 +374,7 @@ static const struct usb_device_id blacklist_table[] = {
+ { USB_DEVICE(0x7392, 0xa611), .driver_info = BTUSB_REALTEK },
+
+ /* Additional Realtek 8723DE Bluetooth devices */
++ { USB_DEVICE(0x0bda, 0xb009), .driver_info = BTUSB_REALTEK },
+ { USB_DEVICE(0x2ff8, 0xb011), .driver_info = BTUSB_REALTEK },
+
+ /* Additional Realtek 8821AE Bluetooth devices */
+diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
+index 80d60f43db56..4576a1268e0e 100644
+--- a/drivers/bus/ti-sysc.c
++++ b/drivers/bus/ti-sysc.c
+@@ -490,32 +490,29 @@ static int sysc_check_registers(struct sysc *ddata)
+
+ /**
+ * syc_ioremap - ioremap register space for the interconnect target module
+- * @ddata: deviec driver data
++ * @ddata: device driver data
+ *
+ * Note that the interconnect target module registers can be anywhere
+- * within the first child device address space. For example, SGX has
+- * them at offset 0x1fc00 in the 32MB module address space. We just
+- * what we need around the interconnect target module registers.
++ * within the interconnect target module range. For example, SGX has
++ * them at offset 0x1fc00 in the 32MB module address space. And cpsw
++ * has them at offset 0x1200 in the CPSW_WR child. Usually the
++ * the interconnect target module registers are at the beginning of
++ * the module range though.
+ */
+ static int sysc_ioremap(struct sysc *ddata)
+ {
+- u32 size = 0;
+-
+- if (ddata->offsets[SYSC_SYSSTATUS] >= 0)
+- size = ddata->offsets[SYSC_SYSSTATUS];
+- else if (ddata->offsets[SYSC_SYSCONFIG] >= 0)
+- size = ddata->offsets[SYSC_SYSCONFIG];
+- else if (ddata->offsets[SYSC_REVISION] >= 0)
+- size = ddata->offsets[SYSC_REVISION];
+- else
+- return -EINVAL;
++ int size;
+
+- size &= 0xfff00;
+- size += SZ_256;
++ size = max3(ddata->offsets[SYSC_REVISION],
++ ddata->offsets[SYSC_SYSCONFIG],
++ ddata->offsets[SYSC_SYSSTATUS]);
++
++ if (size < 0 || (size + sizeof(u32)) > ddata->module_size)
++ return -EINVAL;
+
+ ddata->module_va = devm_ioremap(ddata->dev,
+ ddata->module_pa,
+- size);
++ size + sizeof(u32));
+ if (!ddata->module_va)
+ return -EIO;
+
+@@ -1178,10 +1175,10 @@ static int sysc_child_suspend_noirq(struct device *dev)
+ if (!pm_runtime_status_suspended(dev)) {
+ error = pm_generic_runtime_suspend(dev);
+ if (error) {
+- dev_err(dev, "%s error at %i: %i\n",
+- __func__, __LINE__, error);
++ dev_warn(dev, "%s busy at %i: %i\n",
++ __func__, __LINE__, error);
+
+- return error;
++ return 0;
+ }
+
+ error = sysc_runtime_suspend(ddata->dev);
+diff --git a/drivers/clk/x86/clk-st.c b/drivers/clk/x86/clk-st.c
+index fb62f3938008..3a0996f2d556 100644
+--- a/drivers/clk/x86/clk-st.c
++++ b/drivers/clk/x86/clk-st.c
+@@ -46,7 +46,7 @@ static int st_clk_probe(struct platform_device *pdev)
+ clk_oscout1_parents, ARRAY_SIZE(clk_oscout1_parents),
+ 0, st_data->base + CLKDRVSTR2, OSCOUT1CLK25MHZ, 3, 0, NULL);
+
+- clk_set_parent(hws[ST_CLK_MUX]->clk, hws[ST_CLK_25M]->clk);
++ clk_set_parent(hws[ST_CLK_MUX]->clk, hws[ST_CLK_48M]->clk);
+
+ hws[ST_CLK_GATE] = clk_hw_register_gate(NULL, "oscout1", "oscout1_mux",
+ 0, st_data->base + MISCCLKCNTL1, OSCCLKENB,
+diff --git a/drivers/crypto/cavium/nitrox/nitrox_dev.h b/drivers/crypto/cavium/nitrox/nitrox_dev.h
+index 9a476bb6d4c7..af596455b420 100644
+--- a/drivers/crypto/cavium/nitrox/nitrox_dev.h
++++ b/drivers/crypto/cavium/nitrox/nitrox_dev.h
+@@ -35,6 +35,7 @@ struct nitrox_cmdq {
+ /* requests in backlog queues */
+ atomic_t backlog_count;
+
++ int write_idx;
+ /* command size 32B/64B */
+ u8 instr_size;
+ u8 qno;
+@@ -87,7 +88,7 @@ struct nitrox_bh {
+ struct bh_data *slc;
+ };
+
+-/* NITROX-5 driver state */
++/* NITROX-V driver state */
+ #define NITROX_UCODE_LOADED 0
+ #define NITROX_READY 1
+
+diff --git a/drivers/crypto/cavium/nitrox/nitrox_lib.c b/drivers/crypto/cavium/nitrox/nitrox_lib.c
+index 4fdc921ba611..9906c0086647 100644
+--- a/drivers/crypto/cavium/nitrox/nitrox_lib.c
++++ b/drivers/crypto/cavium/nitrox/nitrox_lib.c
+@@ -36,6 +36,7 @@ static int cmdq_common_init(struct nitrox_cmdq *cmdq)
+ cmdq->head = PTR_ALIGN(cmdq->head_unaligned, PKT_IN_ALIGN);
+ cmdq->dma = PTR_ALIGN(cmdq->dma_unaligned, PKT_IN_ALIGN);
+ cmdq->qsize = (qsize + PKT_IN_ALIGN);
++ cmdq->write_idx = 0;
+
+ spin_lock_init(&cmdq->response_lock);
+ spin_lock_init(&cmdq->cmdq_lock);
+diff --git a/drivers/crypto/cavium/nitrox/nitrox_reqmgr.c b/drivers/crypto/cavium/nitrox/nitrox_reqmgr.c
+index deaefd532aaa..4a362fc22f62 100644
+--- a/drivers/crypto/cavium/nitrox/nitrox_reqmgr.c
++++ b/drivers/crypto/cavium/nitrox/nitrox_reqmgr.c
+@@ -42,6 +42,16 @@
+ * Invalid flag options in AES-CCM IV.
+ */
+
++static inline int incr_index(int index, int count, int max)
++{
++ if ((index + count) >= max)
++ index = index + count - max;
++ else
++ index += count;
++
++ return index;
++}
++
+ /**
+ * dma_free_sglist - unmap and free the sg lists.
+ * @ndev: N5 device
+@@ -426,30 +436,29 @@ static void post_se_instr(struct nitrox_softreq *sr,
+ struct nitrox_cmdq *cmdq)
+ {
+ struct nitrox_device *ndev = sr->ndev;
+- union nps_pkt_in_instr_baoff_dbell pkt_in_baoff_dbell;
+- u64 offset;
++ int idx;
+ u8 *ent;
+
+ spin_lock_bh(&cmdq->cmdq_lock);
+
+- /* get the next write offset */
+- offset = NPS_PKT_IN_INSTR_BAOFF_DBELLX(cmdq->qno);
+- pkt_in_baoff_dbell.value = nitrox_read_csr(ndev, offset);
++ idx = cmdq->write_idx;
+ /* copy the instruction */
+- ent = cmdq->head + pkt_in_baoff_dbell.s.aoff;
++ ent = cmdq->head + (idx * cmdq->instr_size);
+ memcpy(ent, &sr->instr, cmdq->instr_size);
+- /* flush the command queue updates */
+- dma_wmb();
+
+- sr->tstamp = jiffies;
+ atomic_set(&sr->status, REQ_POSTED);
+ response_list_add(sr, cmdq);
++ sr->tstamp = jiffies;
++ /* flush the command queue updates */
++ dma_wmb();
+
+ /* Ring doorbell with count 1 */
+ writeq(1, cmdq->dbell_csr_addr);
+ /* orders the doorbell rings */
+ mmiowb();
+
++ cmdq->write_idx = incr_index(idx, 1, ndev->qlen);
++
+ spin_unlock_bh(&cmdq->cmdq_lock);
+ }
+
+@@ -459,6 +468,9 @@ static int post_backlog_cmds(struct nitrox_cmdq *cmdq)
+ struct nitrox_softreq *sr, *tmp;
+ int ret = 0;
+
++ if (!atomic_read(&cmdq->backlog_count))
++ return 0;
++
+ spin_lock_bh(&cmdq->backlog_lock);
+
+ list_for_each_entry_safe(sr, tmp, &cmdq->backlog_head, backlog) {
+@@ -466,7 +478,7 @@ static int post_backlog_cmds(struct nitrox_cmdq *cmdq)
+
+ /* submit until space available */
+ if (unlikely(cmdq_full(cmdq, ndev->qlen))) {
+- ret = -EBUSY;
++ ret = -ENOSPC;
+ break;
+ }
+ /* delete from backlog list */
+@@ -491,23 +503,20 @@ static int nitrox_enqueue_request(struct nitrox_softreq *sr)
+ {
+ struct nitrox_cmdq *cmdq = sr->cmdq;
+ struct nitrox_device *ndev = sr->ndev;
+- int ret = -EBUSY;
++
++ /* try to post backlog requests */
++ post_backlog_cmds(cmdq);
+
+ if (unlikely(cmdq_full(cmdq, ndev->qlen))) {
+ if (!(sr->flags & CRYPTO_TFM_REQ_MAY_BACKLOG))
+- return -EAGAIN;
+-
++ return -ENOSPC;
++ /* add to backlog list */
+ backlog_list_add(sr, cmdq);
+- } else {
+- ret = post_backlog_cmds(cmdq);
+- if (ret) {
+- backlog_list_add(sr, cmdq);
+- return ret;
+- }
+- post_se_instr(sr, cmdq);
+- ret = -EINPROGRESS;
++ return -EBUSY;
+ }
+- return ret;
++ post_se_instr(sr, cmdq);
++
++ return -EINPROGRESS;
+ }
+
+ /**
+@@ -624,11 +633,9 @@ int nitrox_process_se_request(struct nitrox_device *ndev,
+ */
+ sr->instr.fdata[0] = *((u64 *)&req->gph);
+ sr->instr.fdata[1] = 0;
+- /* flush the soft_req changes before posting the cmd */
+- wmb();
+
+ ret = nitrox_enqueue_request(sr);
+- if (ret == -EAGAIN)
++ if (ret == -ENOSPC)
+ goto send_fail;
+
+ return ret;
+diff --git a/drivers/crypto/chelsio/chtls/chtls.h b/drivers/crypto/chelsio/chtls/chtls.h
+index a53a0e6ba024..7725b6ee14ef 100644
+--- a/drivers/crypto/chelsio/chtls/chtls.h
++++ b/drivers/crypto/chelsio/chtls/chtls.h
+@@ -96,6 +96,10 @@ enum csk_flags {
+ CSK_CONN_INLINE, /* Connection on HW */
+ };
+
++enum chtls_cdev_state {
++ CHTLS_CDEV_STATE_UP = 1
++};
++
+ struct listen_ctx {
+ struct sock *lsk;
+ struct chtls_dev *cdev;
+@@ -146,6 +150,7 @@ struct chtls_dev {
+ unsigned int send_page_order;
+ int max_host_sndbuf;
+ struct key_map kmap;
++ unsigned int cdev_state;
+ };
+
+ struct chtls_hws {
+diff --git a/drivers/crypto/chelsio/chtls/chtls_main.c b/drivers/crypto/chelsio/chtls/chtls_main.c
+index 9b07f9165658..f59b044ebd25 100644
+--- a/drivers/crypto/chelsio/chtls/chtls_main.c
++++ b/drivers/crypto/chelsio/chtls/chtls_main.c
+@@ -160,6 +160,7 @@ static void chtls_register_dev(struct chtls_dev *cdev)
+ tlsdev->hash = chtls_create_hash;
+ tlsdev->unhash = chtls_destroy_hash;
+ tls_register_device(&cdev->tlsdev);
++ cdev->cdev_state = CHTLS_CDEV_STATE_UP;
+ }
+
+ static void chtls_unregister_dev(struct chtls_dev *cdev)
+@@ -281,8 +282,10 @@ static void chtls_free_all_uld(void)
+ struct chtls_dev *cdev, *tmp;
+
+ mutex_lock(&cdev_mutex);
+- list_for_each_entry_safe(cdev, tmp, &cdev_list, list)
+- chtls_free_uld(cdev);
++ list_for_each_entry_safe(cdev, tmp, &cdev_list, list) {
++ if (cdev->cdev_state == CHTLS_CDEV_STATE_UP)
++ chtls_free_uld(cdev);
++ }
+ mutex_unlock(&cdev_mutex);
+ }
+
+diff --git a/drivers/edac/altera_edac.c b/drivers/edac/altera_edac.c
+index d0d5c4dbe097..5762c3c383f2 100644
+--- a/drivers/edac/altera_edac.c
++++ b/drivers/edac/altera_edac.c
+@@ -730,7 +730,8 @@ static int altr_s10_sdram_probe(struct platform_device *pdev)
+ S10_DDR0_IRQ_MASK)) {
+ edac_printk(KERN_ERR, EDAC_MC,
+ "Error clearing SDRAM ECC count\n");
+- return -ENODEV;
++ ret = -ENODEV;
++ goto err2;
+ }
+
+ if (regmap_update_bits(drvdata->mc_vbase, priv->ecc_irq_en_offset,
+diff --git a/drivers/edac/edac_mc_sysfs.c b/drivers/edac/edac_mc_sysfs.c
+index 7481955160a4..20374b8248f0 100644
+--- a/drivers/edac/edac_mc_sysfs.c
++++ b/drivers/edac/edac_mc_sysfs.c
+@@ -1075,14 +1075,14 @@ int __init edac_mc_sysfs_init(void)
+
+ err = device_add(mci_pdev);
+ if (err < 0)
+- goto out_dev_free;
++ goto out_put_device;
+
+ edac_dbg(0, "device %s created\n", dev_name(mci_pdev));
+
+ return 0;
+
+- out_dev_free:
+- kfree(mci_pdev);
++ out_put_device:
++ put_device(mci_pdev);
+ out:
+ return err;
+ }
+diff --git a/drivers/edac/i7core_edac.c b/drivers/edac/i7core_edac.c
+index 8ed4dd9c571b..8e120bf60624 100644
+--- a/drivers/edac/i7core_edac.c
++++ b/drivers/edac/i7core_edac.c
+@@ -1177,15 +1177,14 @@ static int i7core_create_sysfs_devices(struct mem_ctl_info *mci)
+
+ rc = device_add(pvt->addrmatch_dev);
+ if (rc < 0)
+- return rc;
++ goto err_put_addrmatch;
+
+ if (!pvt->is_registered) {
+ pvt->chancounts_dev = kzalloc(sizeof(*pvt->chancounts_dev),
+ GFP_KERNEL);
+ if (!pvt->chancounts_dev) {
+- put_device(pvt->addrmatch_dev);
+- device_del(pvt->addrmatch_dev);
+- return -ENOMEM;
++ rc = -ENOMEM;
++ goto err_del_addrmatch;
+ }
+
+ pvt->chancounts_dev->type = &all_channel_counts_type;
+@@ -1199,9 +1198,18 @@ static int i7core_create_sysfs_devices(struct mem_ctl_info *mci)
+
+ rc = device_add(pvt->chancounts_dev);
+ if (rc < 0)
+- return rc;
++ goto err_put_chancounts;
+ }
+ return 0;
++
++err_put_chancounts:
++ put_device(pvt->chancounts_dev);
++err_del_addrmatch:
++ device_del(pvt->addrmatch_dev);
++err_put_addrmatch:
++ put_device(pvt->addrmatch_dev);
++
++ return rc;
+ }
+
+ static void i7core_delete_sysfs_devices(struct mem_ctl_info *mci)
+@@ -1211,11 +1219,11 @@ static void i7core_delete_sysfs_devices(struct mem_ctl_info *mci)
+ edac_dbg(1, "\n");
+
+ if (!pvt->is_registered) {
+- put_device(pvt->chancounts_dev);
+ device_del(pvt->chancounts_dev);
++ put_device(pvt->chancounts_dev);
+ }
+- put_device(pvt->addrmatch_dev);
+ device_del(pvt->addrmatch_dev);
++ put_device(pvt->addrmatch_dev);
+ }
+
+ /****************************************************************************
+diff --git a/drivers/gpio/gpio-menz127.c b/drivers/gpio/gpio-menz127.c
+index e1037582e34d..b2635326546e 100644
+--- a/drivers/gpio/gpio-menz127.c
++++ b/drivers/gpio/gpio-menz127.c
+@@ -56,9 +56,9 @@ static int men_z127_debounce(struct gpio_chip *gc, unsigned gpio,
+ rnd = fls(debounce) - 1;
+
+ if (rnd && (debounce & BIT(rnd - 1)))
+- debounce = round_up(debounce, MEN_Z127_DB_MIN_US);
++ debounce = roundup(debounce, MEN_Z127_DB_MIN_US);
+ else
+- debounce = round_down(debounce, MEN_Z127_DB_MIN_US);
++ debounce = rounddown(debounce, MEN_Z127_DB_MIN_US);
+
+ if (debounce > MEN_Z127_DB_MAX_US)
+ debounce = MEN_Z127_DB_MAX_US;
+diff --git a/drivers/gpio/gpio-tegra.c b/drivers/gpio/gpio-tegra.c
+index d5d79727c55d..d9e4da146227 100644
+--- a/drivers/gpio/gpio-tegra.c
++++ b/drivers/gpio/gpio-tegra.c
+@@ -323,13 +323,6 @@ static int tegra_gpio_irq_set_type(struct irq_data *d, unsigned int type)
+ return -EINVAL;
+ }
+
+- ret = gpiochip_lock_as_irq(&tgi->gc, gpio);
+- if (ret) {
+- dev_err(tgi->dev,
+- "unable to lock Tegra GPIO %u as IRQ\n", gpio);
+- return ret;
+- }
+-
+ spin_lock_irqsave(&bank->lvl_lock[port], flags);
+
+ val = tegra_gpio_readl(tgi, GPIO_INT_LVL(tgi, gpio));
+@@ -342,6 +335,14 @@ static int tegra_gpio_irq_set_type(struct irq_data *d, unsigned int type)
+ tegra_gpio_mask_write(tgi, GPIO_MSK_OE(tgi, gpio), gpio, 0);
+ tegra_gpio_enable(tgi, gpio);
+
++ ret = gpiochip_lock_as_irq(&tgi->gc, gpio);
++ if (ret) {
++ dev_err(tgi->dev,
++ "unable to lock Tegra GPIO %u as IRQ\n", gpio);
++ tegra_gpio_disable(tgi, gpio);
++ return ret;
++ }
++
+ if (type & (IRQ_TYPE_LEVEL_LOW | IRQ_TYPE_LEVEL_HIGH))
+ irq_set_handler_locked(d, handle_level_irq);
+ else if (type & (IRQ_TYPE_EDGE_FALLING | IRQ_TYPE_EDGE_RISING))
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+index 5a196ec49be8..7200eea4f918 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+@@ -975,13 +975,9 @@ static int amdgpu_cs_ib_fill(struct amdgpu_device *adev,
+ if (r)
+ return r;
+
+- if (chunk_ib->flags & AMDGPU_IB_FLAG_PREAMBLE) {
+- parser->job->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT;
+- if (!parser->ctx->preamble_presented) {
+- parser->job->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT_FIRST;
+- parser->ctx->preamble_presented = true;
+- }
+- }
++ if (chunk_ib->flags & AMDGPU_IB_FLAG_PREAMBLE)
++ parser->job->preamble_status |=
++ AMDGPU_PREAMBLE_IB_PRESENT;
+
+ if (parser->job->ring && parser->job->ring != ring)
+ return -EINVAL;
+@@ -1206,6 +1202,12 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
+
+ amdgpu_cs_post_dependencies(p);
+
++ if ((job->preamble_status & AMDGPU_PREAMBLE_IB_PRESENT) &&
++ !p->ctx->preamble_presented) {
++ job->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT_FIRST;
++ p->ctx->preamble_presented = true;
++ }
++
+ cs->out.handle = seq;
+ job->uf_sequence = seq;
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+index 7aaa263ad8c7..6b5d4a20860d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+@@ -164,8 +164,10 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs,
+ return r;
+ }
+
++ need_ctx_switch = ring->current_ctx != fence_ctx;
+ if (ring->funcs->emit_pipeline_sync && job &&
+ ((tmp = amdgpu_sync_get_fence(&job->sched_sync, NULL)) ||
++ (amdgpu_sriov_vf(adev) && need_ctx_switch) ||
+ amdgpu_vm_need_pipeline_sync(ring, job))) {
+ need_pipe_sync = true;
+ dma_fence_put(tmp);
+@@ -196,7 +198,6 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs,
+ }
+
+ skip_preamble = ring->current_ctx == fence_ctx;
+- need_ctx_switch = ring->current_ctx != fence_ctx;
+ if (job && ring->funcs->emit_cntxcntl) {
+ if (need_ctx_switch)
+ status |= AMDGPU_HAVE_CTX_SWITCH;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+index fdcb498f6d19..c31fff32a321 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+@@ -123,6 +123,7 @@ static void amdgpu_vm_bo_base_init(struct amdgpu_vm_bo_base *base,
+ * is validated on next vm use to avoid fault.
+ * */
+ list_move_tail(&base->vm_status, &vm->evicted);
++ base->moved = true;
+ }
+
+ /**
+@@ -303,7 +304,6 @@ static int amdgpu_vm_clear_bo(struct amdgpu_device *adev,
+ uint64_t addr;
+ int r;
+
+- addr = amdgpu_bo_gpu_offset(bo);
+ entries = amdgpu_bo_size(bo) / 8;
+
+ if (pte_support_ats) {
+@@ -335,6 +335,7 @@ static int amdgpu_vm_clear_bo(struct amdgpu_device *adev,
+ if (r)
+ goto error;
+
++ addr = amdgpu_bo_gpu_offset(bo);
+ if (ats_entries) {
+ uint64_t ats_value;
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
+index 818874b13c99..9057a5adb31b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
+@@ -5614,6 +5614,11 @@ static int gfx_v8_0_set_powergating_state(void *handle,
+ if (amdgpu_sriov_vf(adev))
+ return 0;
+
++ if (adev->pg_flags & (AMD_PG_SUPPORT_GFX_SMG |
++ AMD_PG_SUPPORT_RLC_SMU_HS |
++ AMD_PG_SUPPORT_CP |
++ AMD_PG_SUPPORT_GFX_DMG))
++ adev->gfx.rlc.funcs->enter_safe_mode(adev);
+ switch (adev->asic_type) {
+ case CHIP_CARRIZO:
+ case CHIP_STONEY:
+@@ -5663,7 +5668,11 @@ static int gfx_v8_0_set_powergating_state(void *handle,
+ default:
+ break;
+ }
+-
++ if (adev->pg_flags & (AMD_PG_SUPPORT_GFX_SMG |
++ AMD_PG_SUPPORT_RLC_SMU_HS |
++ AMD_PG_SUPPORT_CP |
++ AMD_PG_SUPPORT_GFX_DMG))
++ adev->gfx.rlc.funcs->exit_safe_mode(adev);
+ return 0;
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/kv_dpm.c b/drivers/gpu/drm/amd/amdgpu/kv_dpm.c
+index 7a1e77c93bf1..d8e469c594bb 100644
+--- a/drivers/gpu/drm/amd/amdgpu/kv_dpm.c
++++ b/drivers/gpu/drm/amd/amdgpu/kv_dpm.c
+@@ -1354,8 +1354,6 @@ static int kv_dpm_enable(struct amdgpu_device *adev)
+ return ret;
+ }
+
+- kv_update_current_ps(adev, adev->pm.dpm.boot_ps);
+-
+ if (adev->irq.installed &&
+ amdgpu_is_internal_thermal_sensor(adev->pm.int_thermal_type)) {
+ ret = kv_set_thermal_temperature_range(adev, KV_TEMP_RANGE_MIN, KV_TEMP_RANGE_MAX);
+@@ -3061,7 +3059,7 @@ static int kv_dpm_hw_init(void *handle)
+ else
+ adev->pm.dpm_enabled = true;
+ mutex_unlock(&adev->pm.mutex);
+-
++ amdgpu_pm_compute_clocks(adev);
+ return ret;
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/si_dpm.c b/drivers/gpu/drm/amd/amdgpu/si_dpm.c
+index 5c97a3671726..606f461dce49 100644
+--- a/drivers/gpu/drm/amd/amdgpu/si_dpm.c
++++ b/drivers/gpu/drm/amd/amdgpu/si_dpm.c
+@@ -6887,7 +6887,6 @@ static int si_dpm_enable(struct amdgpu_device *adev)
+
+ si_enable_auto_throttle_source(adev, AMDGPU_DPM_AUTO_THROTTLE_SRC_THERMAL, true);
+ si_thermal_start_thermal_controller(adev);
+- ni_update_current_ps(adev, boot_ps);
+
+ return 0;
+ }
+@@ -7764,7 +7763,7 @@ static int si_dpm_hw_init(void *handle)
+ else
+ adev->pm.dpm_enabled = true;
+ mutex_unlock(&adev->pm.mutex);
+-
++ amdgpu_pm_compute_clocks(adev);
+ return ret;
+ }
+
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c b/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c
+index 88b09dd758ba..ca137757a69e 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c
+@@ -133,7 +133,7 @@ static bool calculate_fb_and_fractional_fb_divider(
+ uint64_t feedback_divider;
+
+ feedback_divider =
+- (uint64_t)(target_pix_clk_khz * ref_divider * post_divider);
++ (uint64_t)target_pix_clk_khz * ref_divider * post_divider;
+ feedback_divider *= 10;
+ /* additional factor, since we divide by 10 afterwards */
+ feedback_divider *= (uint64_t)(calc_pll_cs->fract_fb_divider_factor);
+@@ -145,8 +145,8 @@ static bool calculate_fb_and_fractional_fb_divider(
+ * of fractional feedback decimal point and the fractional FB Divider precision
+ * is 2 then the equation becomes (ullfeedbackDivider + 5*100) / (10*100))*/
+
+- feedback_divider += (uint64_t)
+- (5 * calc_pll_cs->fract_fb_divider_precision_factor);
++ feedback_divider += 5ULL *
++ calc_pll_cs->fract_fb_divider_precision_factor;
+ feedback_divider =
+ div_u64(feedback_divider,
+ calc_pll_cs->fract_fb_divider_precision_factor * 10);
+@@ -203,8 +203,8 @@ static bool calc_fb_divider_checking_tolerance(
+ &fract_feedback_divider);
+
+ /*Actual calculated value*/
+- actual_calc_clk_khz = (uint64_t)(feedback_divider *
+- calc_pll_cs->fract_fb_divider_factor) +
++ actual_calc_clk_khz = (uint64_t)feedback_divider *
++ calc_pll_cs->fract_fb_divider_factor +
+ fract_feedback_divider;
+ actual_calc_clk_khz *= calc_pll_cs->ref_freq_khz;
+ actual_calc_clk_khz =
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c b/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c
+index c2037daa8e66..0efbf411667a 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c
+@@ -239,6 +239,8 @@ void dml1_extract_rq_regs(
+ extract_rq_sizing_regs(mode_lib, &(rq_regs->rq_regs_l), rq_param.sizing.rq_l);
+ if (rq_param.yuv420)
+ extract_rq_sizing_regs(mode_lib, &(rq_regs->rq_regs_c), rq_param.sizing.rq_c);
++ else
++ memset(&(rq_regs->rq_regs_c), 0, sizeof(rq_regs->rq_regs_c));
+
+ rq_regs->rq_regs_l.swath_height = dml_log2(rq_param.dlg.rq_l.swath_height);
+ rq_regs->rq_regs_c.swath_height = dml_log2(rq_param.dlg.rq_c.swath_height);
+diff --git a/drivers/gpu/drm/omapdrm/omap_debugfs.c b/drivers/gpu/drm/omapdrm/omap_debugfs.c
+index b42e286616b0..84da7a5b84f3 100644
+--- a/drivers/gpu/drm/omapdrm/omap_debugfs.c
++++ b/drivers/gpu/drm/omapdrm/omap_debugfs.c
+@@ -37,7 +37,9 @@ static int gem_show(struct seq_file *m, void *arg)
+ return ret;
+
+ seq_printf(m, "All Objects:\n");
++ mutex_lock(&priv->list_lock);
+ omap_gem_describe_objects(&priv->obj_list, m);
++ mutex_unlock(&priv->list_lock);
+
+ mutex_unlock(&dev->struct_mutex);
+
+diff --git a/drivers/gpu/drm/omapdrm/omap_drv.c b/drivers/gpu/drm/omapdrm/omap_drv.c
+index ef3b0e3571ec..5fcf9eaf3eaf 100644
+--- a/drivers/gpu/drm/omapdrm/omap_drv.c
++++ b/drivers/gpu/drm/omapdrm/omap_drv.c
+@@ -540,7 +540,7 @@ static int omapdrm_init(struct omap_drm_private *priv, struct device *dev)
+ priv->omaprev = soc ? (unsigned int)soc->data : 0;
+ priv->wq = alloc_ordered_workqueue("omapdrm", 0);
+
+- spin_lock_init(&priv->list_lock);
++ mutex_init(&priv->list_lock);
+ INIT_LIST_HEAD(&priv->obj_list);
+
+ /* Allocate and initialize the DRM device. */
+diff --git a/drivers/gpu/drm/omapdrm/omap_drv.h b/drivers/gpu/drm/omapdrm/omap_drv.h
+index 6eaee4df4559..f27c8e216adf 100644
+--- a/drivers/gpu/drm/omapdrm/omap_drv.h
++++ b/drivers/gpu/drm/omapdrm/omap_drv.h
+@@ -71,7 +71,7 @@ struct omap_drm_private {
+ struct workqueue_struct *wq;
+
+ /* lock for obj_list below */
+- spinlock_t list_lock;
++ struct mutex list_lock;
+
+ /* list of GEM objects: */
+ struct list_head obj_list;
+diff --git a/drivers/gpu/drm/omapdrm/omap_gem.c b/drivers/gpu/drm/omapdrm/omap_gem.c
+index 17a53d207978..7a029b892a37 100644
+--- a/drivers/gpu/drm/omapdrm/omap_gem.c
++++ b/drivers/gpu/drm/omapdrm/omap_gem.c
+@@ -1001,6 +1001,7 @@ int omap_gem_resume(struct drm_device *dev)
+ struct omap_gem_object *omap_obj;
+ int ret = 0;
+
++ mutex_lock(&priv->list_lock);
+ list_for_each_entry(omap_obj, &priv->obj_list, mm_list) {
+ if (omap_obj->block) {
+ struct drm_gem_object *obj = &omap_obj->base;
+@@ -1012,12 +1013,14 @@ int omap_gem_resume(struct drm_device *dev)
+ omap_obj->roll, true);
+ if (ret) {
+ dev_err(dev->dev, "could not repin: %d\n", ret);
+- return ret;
++ goto done;
+ }
+ }
+ }
+
+- return 0;
++done:
++ mutex_unlock(&priv->list_lock);
++ return ret;
+ }
+ #endif
+
+@@ -1085,9 +1088,9 @@ void omap_gem_free_object(struct drm_gem_object *obj)
+
+ WARN_ON(!mutex_is_locked(&dev->struct_mutex));
+
+- spin_lock(&priv->list_lock);
++ mutex_lock(&priv->list_lock);
+ list_del(&omap_obj->mm_list);
+- spin_unlock(&priv->list_lock);
++ mutex_unlock(&priv->list_lock);
+
+ /* this means the object is still pinned.. which really should
+ * not happen. I think..
+@@ -1206,9 +1209,9 @@ struct drm_gem_object *omap_gem_new(struct drm_device *dev,
+ goto err_release;
+ }
+
+- spin_lock(&priv->list_lock);
++ mutex_lock(&priv->list_lock);
+ list_add(&omap_obj->mm_list, &priv->obj_list);
+- spin_unlock(&priv->list_lock);
++ mutex_unlock(&priv->list_lock);
+
+ return obj;
+
+diff --git a/drivers/gpu/drm/sun4i/sun4i_drv.c b/drivers/gpu/drm/sun4i/sun4i_drv.c
+index 50d19605c38f..e15fa2389e3f 100644
+--- a/drivers/gpu/drm/sun4i/sun4i_drv.c
++++ b/drivers/gpu/drm/sun4i/sun4i_drv.c
+@@ -283,7 +283,6 @@ static int sun4i_drv_add_endpoints(struct device *dev,
+ remote = of_graph_get_remote_port_parent(ep);
+ if (!remote) {
+ DRM_DEBUG_DRIVER("Error retrieving the output node\n");
+- of_node_put(remote);
+ continue;
+ }
+
+@@ -297,11 +296,13 @@ static int sun4i_drv_add_endpoints(struct device *dev,
+
+ if (of_graph_parse_endpoint(ep, &endpoint)) {
+ DRM_DEBUG_DRIVER("Couldn't parse endpoint\n");
++ of_node_put(remote);
+ continue;
+ }
+
+ if (!endpoint.id) {
+ DRM_DEBUG_DRIVER("Endpoint is our panel... skipping\n");
++ of_node_put(remote);
+ continue;
+ }
+ }
+diff --git a/drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c b/drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c
+index 5a52fc489a9d..966688f04741 100644
+--- a/drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c
++++ b/drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c
+@@ -477,13 +477,15 @@ int sun8i_hdmi_phy_probe(struct sun8i_dw_hdmi *hdmi, struct device_node *node)
+ dev_err(dev, "Couldn't create the PHY clock\n");
+ goto err_put_clk_pll0;
+ }
++
++ clk_prepare_enable(phy->clk_phy);
+ }
+
+ phy->rst_phy = of_reset_control_get_shared(node, "phy");
+ if (IS_ERR(phy->rst_phy)) {
+ dev_err(dev, "Could not get phy reset control\n");
+ ret = PTR_ERR(phy->rst_phy);
+- goto err_put_clk_pll0;
++ goto err_disable_clk_phy;
+ }
+
+ ret = reset_control_deassert(phy->rst_phy);
+@@ -514,6 +516,8 @@ err_deassert_rst_phy:
+ reset_control_assert(phy->rst_phy);
+ err_put_rst_phy:
+ reset_control_put(phy->rst_phy);
++err_disable_clk_phy:
++ clk_disable_unprepare(phy->clk_phy);
+ err_put_clk_pll0:
+ if (phy->variant->has_phy_clk)
+ clk_put(phy->clk_pll0);
+@@ -531,6 +535,7 @@ void sun8i_hdmi_phy_remove(struct sun8i_dw_hdmi *hdmi)
+
+ clk_disable_unprepare(phy->clk_mod);
+ clk_disable_unprepare(phy->clk_bus);
++ clk_disable_unprepare(phy->clk_phy);
+
+ reset_control_assert(phy->rst_phy);
+
+diff --git a/drivers/gpu/drm/v3d/v3d_drv.h b/drivers/gpu/drm/v3d/v3d_drv.h
+index a043ac3aae98..26005abd9c5d 100644
+--- a/drivers/gpu/drm/v3d/v3d_drv.h
++++ b/drivers/gpu/drm/v3d/v3d_drv.h
+@@ -85,6 +85,11 @@ struct v3d_dev {
+ */
+ struct mutex reset_lock;
+
++ /* Lock taken when creating and pushing the GPU scheduler
++ * jobs, to keep the sched-fence seqnos in order.
++ */
++ struct mutex sched_lock;
++
+ struct {
+ u32 num_allocated;
+ u32 pages_allocated;
+diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c
+index b513f9189caf..269fe16379c0 100644
+--- a/drivers/gpu/drm/v3d/v3d_gem.c
++++ b/drivers/gpu/drm/v3d/v3d_gem.c
+@@ -550,6 +550,7 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data,
+ if (ret)
+ goto fail;
+
++ mutex_lock(&v3d->sched_lock);
+ if (exec->bin.start != exec->bin.end) {
+ ret = drm_sched_job_init(&exec->bin.base,
+ &v3d->queue[V3D_BIN].sched,
+@@ -576,6 +577,7 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data,
+ kref_get(&exec->refcount); /* put by scheduler job completion */
+ drm_sched_entity_push_job(&exec->render.base,
+ &v3d_priv->sched_entity[V3D_RENDER]);
++ mutex_unlock(&v3d->sched_lock);
+
+ v3d_attach_object_fences(exec);
+
+@@ -594,6 +596,7 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data,
+ return 0;
+
+ fail_unreserve:
++ mutex_unlock(&v3d->sched_lock);
+ v3d_unlock_bo_reservations(dev, exec, &acquire_ctx);
+ fail:
+ v3d_exec_put(exec);
+@@ -615,6 +618,7 @@ v3d_gem_init(struct drm_device *dev)
+ spin_lock_init(&v3d->job_lock);
+ mutex_init(&v3d->bo_lock);
+ mutex_init(&v3d->reset_lock);
++ mutex_init(&v3d->sched_lock);
+
+ /* Note: We don't allocate address 0. Various bits of HW
+ * treat 0 as special, such as the occlusion query counters
+diff --git a/drivers/gpu/drm/vc4/vc4_plane.c b/drivers/gpu/drm/vc4/vc4_plane.c
+index cf5aea1d6488..203ddf5723e8 100644
+--- a/drivers/gpu/drm/vc4/vc4_plane.c
++++ b/drivers/gpu/drm/vc4/vc4_plane.c
+@@ -543,6 +543,7 @@ static int vc4_plane_mode_set(struct drm_plane *plane,
+ /* Control word */
+ vc4_dlist_write(vc4_state,
+ SCALER_CTL0_VALID |
++ VC4_SET_FIELD(SCALER_CTL0_RGBA_EXPAND_ROUND, SCALER_CTL0_RGBA_EXPAND) |
+ (format->pixel_order << SCALER_CTL0_ORDER_SHIFT) |
+ (format->hvs << SCALER_CTL0_PIXEL_FORMAT_SHIFT) |
+ VC4_SET_FIELD(tiling, SCALER_CTL0_TILING) |
+@@ -874,7 +875,9 @@ static bool vc4_format_mod_supported(struct drm_plane *plane,
+ case DRM_FORMAT_YUV420:
+ case DRM_FORMAT_YVU420:
+ case DRM_FORMAT_NV12:
++ case DRM_FORMAT_NV21:
+ case DRM_FORMAT_NV16:
++ case DRM_FORMAT_NV61:
+ default:
+ return (modifier == DRM_FORMAT_MOD_LINEAR);
+ }
+diff --git a/drivers/hid/hid-ntrig.c b/drivers/hid/hid-ntrig.c
+index 43b1c7234316..9bc6f4867cb3 100644
+--- a/drivers/hid/hid-ntrig.c
++++ b/drivers/hid/hid-ntrig.c
+@@ -955,6 +955,8 @@ static int ntrig_probe(struct hid_device *hdev, const struct hid_device_id *id)
+
+ ret = sysfs_create_group(&hdev->dev.kobj,
+ &ntrig_attribute_group);
++ if (ret)
++ hid_err(hdev, "cannot create sysfs group\n");
+
+ return 0;
+ err_free:
+diff --git a/drivers/hid/i2c-hid/i2c-hid.c b/drivers/hid/i2c-hid/i2c-hid.c
+index 5fd1159fc095..64773433b947 100644
+--- a/drivers/hid/i2c-hid/i2c-hid.c
++++ b/drivers/hid/i2c-hid/i2c-hid.c
+@@ -1004,18 +1004,18 @@ static int i2c_hid_probe(struct i2c_client *client,
+ return client->irq;
+ }
+
+- ihid = kzalloc(sizeof(struct i2c_hid), GFP_KERNEL);
++ ihid = devm_kzalloc(&client->dev, sizeof(*ihid), GFP_KERNEL);
+ if (!ihid)
+ return -ENOMEM;
+
+ if (client->dev.of_node) {
+ ret = i2c_hid_of_probe(client, &ihid->pdata);
+ if (ret)
+- goto err;
++ return ret;
+ } else if (!platform_data) {
+ ret = i2c_hid_acpi_pdata(client, &ihid->pdata);
+ if (ret)
+- goto err;
++ return ret;
+ } else {
+ ihid->pdata = *platform_data;
+ }
+@@ -1128,7 +1128,6 @@ err_regulator:
+
+ err:
+ i2c_hid_free_buffers(ihid);
+- kfree(ihid);
+ return ret;
+ }
+
+@@ -1152,8 +1151,6 @@ static int i2c_hid_remove(struct i2c_client *client)
+
+ regulator_disable(ihid->pdata.supply);
+
+- kfree(ihid);
+-
+ return 0;
+ }
+
+diff --git a/drivers/hwmon/adt7475.c b/drivers/hwmon/adt7475.c
+index 9ef84998c7f3..37db2eb66ed7 100644
+--- a/drivers/hwmon/adt7475.c
++++ b/drivers/hwmon/adt7475.c
+@@ -303,14 +303,18 @@ static inline u16 volt2reg(int channel, long volt, u8 bypass_attn)
+ return clamp_val(reg, 0, 1023) & (0xff << 2);
+ }
+
+-static u16 adt7475_read_word(struct i2c_client *client, int reg)
++static int adt7475_read_word(struct i2c_client *client, int reg)
+ {
+- u16 val;
++ int val1, val2;
+
+- val = i2c_smbus_read_byte_data(client, reg);
+- val |= (i2c_smbus_read_byte_data(client, reg + 1) << 8);
++ val1 = i2c_smbus_read_byte_data(client, reg);
++ if (val1 < 0)
++ return val1;
++ val2 = i2c_smbus_read_byte_data(client, reg + 1);
++ if (val2 < 0)
++ return val2;
+
+- return val;
++ return val1 | (val2 << 8);
+ }
+
+ static void adt7475_write_word(struct i2c_client *client, int reg, u16 val)
+diff --git a/drivers/hwmon/ina2xx.c b/drivers/hwmon/ina2xx.c
+index e9e6aeabbf84..71d3445ba869 100644
+--- a/drivers/hwmon/ina2xx.c
++++ b/drivers/hwmon/ina2xx.c
+@@ -17,7 +17,7 @@
+ * Bi-directional Current/Power Monitor with I2C Interface
+ * Datasheet: http://www.ti.com/product/ina230
+ *
+- * Copyright (C) 2012 Lothar Felten <l-felten@ti.com>
++ * Copyright (C) 2012 Lothar Felten <lothar.felten@gmail.com>
+ * Thanks to Jan Volkering
+ *
+ * This program is free software; you can redistribute it and/or modify
+@@ -329,6 +329,15 @@ static int ina2xx_set_shunt(struct ina2xx_data *data, long val)
+ return 0;
+ }
+
++static ssize_t ina2xx_show_shunt(struct device *dev,
++ struct device_attribute *da,
++ char *buf)
++{
++ struct ina2xx_data *data = dev_get_drvdata(dev);
++
++ return snprintf(buf, PAGE_SIZE, "%li\n", data->rshunt);
++}
++
+ static ssize_t ina2xx_store_shunt(struct device *dev,
+ struct device_attribute *da,
+ const char *buf, size_t count)
+@@ -403,7 +412,7 @@ static SENSOR_DEVICE_ATTR(power1_input, S_IRUGO, ina2xx_show_value, NULL,
+
+ /* shunt resistance */
+ static SENSOR_DEVICE_ATTR(shunt_resistor, S_IRUGO | S_IWUSR,
+- ina2xx_show_value, ina2xx_store_shunt,
++ ina2xx_show_shunt, ina2xx_store_shunt,
+ INA2XX_CALIBRATION);
+
+ /* update interval (ina226 only) */
+diff --git a/drivers/hwtracing/intel_th/core.c b/drivers/hwtracing/intel_th/core.c
+index da962aa2cef5..fc6b7f8b62fb 100644
+--- a/drivers/hwtracing/intel_th/core.c
++++ b/drivers/hwtracing/intel_th/core.c
+@@ -139,7 +139,8 @@ static int intel_th_remove(struct device *dev)
+ th->thdev[i] = NULL;
+ }
+
+- th->num_thdevs = lowest;
++ if (lowest >= 0)
++ th->num_thdevs = lowest;
+ }
+
+ if (thdrv->attr_group)
+@@ -487,7 +488,7 @@ static const struct intel_th_subdevice {
+ .flags = IORESOURCE_MEM,
+ },
+ {
+- .start = TH_MMIO_SW,
++ .start = 1, /* use resource[1] */
+ .end = 0,
+ .flags = IORESOURCE_MEM,
+ },
+@@ -580,6 +581,7 @@ intel_th_subdevice_alloc(struct intel_th *th,
+ struct intel_th_device *thdev;
+ struct resource res[3];
+ unsigned int req = 0;
++ bool is64bit = false;
+ int r, err;
+
+ thdev = intel_th_device_alloc(th, subdev->type, subdev->name,
+@@ -589,12 +591,18 @@ intel_th_subdevice_alloc(struct intel_th *th,
+
+ thdev->drvdata = th->drvdata;
+
++ for (r = 0; r < th->num_resources; r++)
++ if (th->resource[r].flags & IORESOURCE_MEM_64) {
++ is64bit = true;
++ break;
++ }
++
+ memcpy(res, subdev->res,
+ sizeof(struct resource) * subdev->nres);
+
+ for (r = 0; r < subdev->nres; r++) {
+ struct resource *devres = th->resource;
+- int bar = TH_MMIO_CONFIG;
++ int bar = 0; /* cut subdevices' MMIO from resource[0] */
+
+ /*
+ * Take .end == 0 to mean 'take the whole bar',
+@@ -603,6 +611,8 @@ intel_th_subdevice_alloc(struct intel_th *th,
+ */
+ if (!res[r].end && res[r].flags == IORESOURCE_MEM) {
+ bar = res[r].start;
++ if (is64bit)
++ bar *= 2;
+ res[r].start = 0;
+ res[r].end = resource_size(&devres[bar]) - 1;
+ }
+diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c
+index 45fcf0c37a9e..2806cdeda053 100644
+--- a/drivers/i2c/busses/i2c-i801.c
++++ b/drivers/i2c/busses/i2c-i801.c
+@@ -1417,6 +1417,13 @@ static void i801_add_tco(struct i801_priv *priv)
+ }
+
+ #ifdef CONFIG_ACPI
++static bool i801_acpi_is_smbus_ioport(const struct i801_priv *priv,
++ acpi_physical_address address)
++{
++ return address >= priv->smba &&
++ address <= pci_resource_end(priv->pci_dev, SMBBAR);
++}
++
+ static acpi_status
+ i801_acpi_io_handler(u32 function, acpi_physical_address address, u32 bits,
+ u64 *value, void *handler_context, void *region_context)
+@@ -1432,7 +1439,7 @@ i801_acpi_io_handler(u32 function, acpi_physical_address address, u32 bits,
+ */
+ mutex_lock(&priv->acpi_lock);
+
+- if (!priv->acpi_reserved) {
++ if (!priv->acpi_reserved && i801_acpi_is_smbus_ioport(priv, address)) {
+ priv->acpi_reserved = true;
+
+ dev_warn(&pdev->dev, "BIOS is accessing SMBus registers\n");
+diff --git a/drivers/iio/accel/adxl345_core.c b/drivers/iio/accel/adxl345_core.c
+index 7251d0e63d74..98080e05ac6d 100644
+--- a/drivers/iio/accel/adxl345_core.c
++++ b/drivers/iio/accel/adxl345_core.c
+@@ -21,6 +21,8 @@
+ #define ADXL345_REG_DATAX0 0x32
+ #define ADXL345_REG_DATAY0 0x34
+ #define ADXL345_REG_DATAZ0 0x36
++#define ADXL345_REG_DATA_AXIS(index) \
++ (ADXL345_REG_DATAX0 + (index) * sizeof(__le16))
+
+ #define ADXL345_POWER_CTL_MEASURE BIT(3)
+ #define ADXL345_POWER_CTL_STANDBY 0x00
+@@ -47,19 +49,19 @@ struct adxl345_data {
+ u8 data_range;
+ };
+
+-#define ADXL345_CHANNEL(reg, axis) { \
++#define ADXL345_CHANNEL(index, axis) { \
+ .type = IIO_ACCEL, \
+ .modified = 1, \
+ .channel2 = IIO_MOD_##axis, \
+- .address = reg, \
++ .address = index, \
+ .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), \
+ .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), \
+ }
+
+ static const struct iio_chan_spec adxl345_channels[] = {
+- ADXL345_CHANNEL(ADXL345_REG_DATAX0, X),
+- ADXL345_CHANNEL(ADXL345_REG_DATAY0, Y),
+- ADXL345_CHANNEL(ADXL345_REG_DATAZ0, Z),
++ ADXL345_CHANNEL(0, X),
++ ADXL345_CHANNEL(1, Y),
++ ADXL345_CHANNEL(2, Z),
+ };
+
+ static int adxl345_read_raw(struct iio_dev *indio_dev,
+@@ -67,7 +69,7 @@ static int adxl345_read_raw(struct iio_dev *indio_dev,
+ int *val, int *val2, long mask)
+ {
+ struct adxl345_data *data = iio_priv(indio_dev);
+- __le16 regval;
++ __le16 accel;
+ int ret;
+
+ switch (mask) {
+@@ -77,12 +79,13 @@ static int adxl345_read_raw(struct iio_dev *indio_dev,
+ * ADXL345_REG_DATA(X0/Y0/Z0) contain the least significant byte
+ * and ADXL345_REG_DATA(X0/Y0/Z0) + 1 the most significant byte
+ */
+- ret = regmap_bulk_read(data->regmap, chan->address, &regval,
+- sizeof(regval));
++ ret = regmap_bulk_read(data->regmap,
++ ADXL345_REG_DATA_AXIS(chan->address),
++ &accel, sizeof(accel));
+ if (ret < 0)
+ return ret;
+
+- *val = sign_extend32(le16_to_cpu(regval), 12);
++ *val = sign_extend32(le16_to_cpu(accel), 12);
+ return IIO_VAL_INT;
+ case IIO_CHAN_INFO_SCALE:
+ *val = 0;
+diff --git a/drivers/iio/adc/ina2xx-adc.c b/drivers/iio/adc/ina2xx-adc.c
+index 0635a79864bf..d1239624187d 100644
+--- a/drivers/iio/adc/ina2xx-adc.c
++++ b/drivers/iio/adc/ina2xx-adc.c
+@@ -30,6 +30,7 @@
+ #include <linux/module.h>
+ #include <linux/of_device.h>
+ #include <linux/regmap.h>
++#include <linux/sched/task.h>
+ #include <linux/util_macros.h>
+
+ #include <linux/platform_data/ina2xx.h>
+@@ -826,6 +827,7 @@ static int ina2xx_buffer_enable(struct iio_dev *indio_dev)
+ {
+ struct ina2xx_chip_info *chip = iio_priv(indio_dev);
+ unsigned int sampling_us = SAMPLING_PERIOD(chip);
++ struct task_struct *task;
+
+ dev_dbg(&indio_dev->dev, "Enabling buffer w/ scan_mask %02x, freq = %d, avg =%u\n",
+ (unsigned int)(*indio_dev->active_scan_mask),
+@@ -835,11 +837,17 @@ static int ina2xx_buffer_enable(struct iio_dev *indio_dev)
+ dev_dbg(&indio_dev->dev, "Async readout mode: %d\n",
+ chip->allow_async_readout);
+
+- chip->task = kthread_run(ina2xx_capture_thread, (void *)indio_dev,
+- "%s:%d-%uus", indio_dev->name, indio_dev->id,
+- sampling_us);
++ task = kthread_create(ina2xx_capture_thread, (void *)indio_dev,
++ "%s:%d-%uus", indio_dev->name, indio_dev->id,
++ sampling_us);
++ if (IS_ERR(task))
++ return PTR_ERR(task);
++
++ get_task_struct(task);
++ wake_up_process(task);
++ chip->task = task;
+
+- return PTR_ERR_OR_ZERO(chip->task);
++ return 0;
+ }
+
+ static int ina2xx_buffer_disable(struct iio_dev *indio_dev)
+@@ -848,6 +856,7 @@ static int ina2xx_buffer_disable(struct iio_dev *indio_dev)
+
+ if (chip->task) {
+ kthread_stop(chip->task);
++ put_task_struct(chip->task);
+ chip->task = NULL;
+ }
+
+diff --git a/drivers/iio/counter/104-quad-8.c b/drivers/iio/counter/104-quad-8.c
+index b56985078d8c..4be85ec54af4 100644
+--- a/drivers/iio/counter/104-quad-8.c
++++ b/drivers/iio/counter/104-quad-8.c
+@@ -138,7 +138,7 @@ static int quad8_write_raw(struct iio_dev *indio_dev,
+ outb(val >> (8 * i), base_offset);
+
+ /* Reset Borrow, Carry, Compare, and Sign flags */
+- outb(0x02, base_offset + 1);
++ outb(0x04, base_offset + 1);
+ /* Reset Error flag */
+ outb(0x06, base_offset + 1);
+
+diff --git a/drivers/infiniband/core/rw.c b/drivers/infiniband/core/rw.c
+index c8963e91f92a..3ee0adfb45e9 100644
+--- a/drivers/infiniband/core/rw.c
++++ b/drivers/infiniband/core/rw.c
+@@ -87,7 +87,7 @@ static int rdma_rw_init_one_mr(struct ib_qp *qp, u8 port_num,
+ }
+
+ ret = ib_map_mr_sg(reg->mr, sg, nents, &offset, PAGE_SIZE);
+- if (ret < nents) {
++ if (ret < 0 || ret < nents) {
+ ib_mr_pool_put(qp, &qp->rdma_mrs, reg->mr);
+ return -EINVAL;
+ }
+diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
+index 583d3a10b940..0e5eb0f547d3 100644
+--- a/drivers/infiniband/core/uverbs_cmd.c
++++ b/drivers/infiniband/core/uverbs_cmd.c
+@@ -2812,6 +2812,9 @@ static struct ib_uflow_resources *flow_resources_alloc(size_t num_specs)
+ if (!resources)
+ goto err_res;
+
++ if (!num_specs)
++ goto out;
++
+ resources->counters =
+ kcalloc(num_specs, sizeof(*resources->counters), GFP_KERNEL);
+
+@@ -2824,8 +2827,8 @@ static struct ib_uflow_resources *flow_resources_alloc(size_t num_specs)
+ if (!resources->collection)
+ goto err_collection;
+
++out:
+ resources->max = num_specs;
+-
+ return resources;
+
+ err_collection:
+diff --git a/drivers/infiniband/core/uverbs_main.c b/drivers/infiniband/core/uverbs_main.c
+index 2094d136513d..92d8469e28f3 100644
+--- a/drivers/infiniband/core/uverbs_main.c
++++ b/drivers/infiniband/core/uverbs_main.c
+@@ -429,6 +429,7 @@ static int ib_uverbs_comp_event_close(struct inode *inode, struct file *filp)
+ list_del(&entry->obj_list);
+ kfree(entry);
+ }
++ file->ev_queue.is_closed = 1;
+ spin_unlock_irq(&file->ev_queue.lock);
+
+ uverbs_close_fd(filp);
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+index 50d8f1fc98d5..e426b990c1dd 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+@@ -2354,7 +2354,7 @@ static int bnxt_qplib_cq_process_res_rc(struct bnxt_qplib_cq *cq,
+ srq = qp->srq;
+ if (!srq)
+ return -EINVAL;
+- if (wr_id_idx > srq->hwq.max_elements) {
++ if (wr_id_idx >= srq->hwq.max_elements) {
+ dev_err(&cq->hwq.pdev->dev,
+ "QPLIB: FP: CQ Process RC ");
+ dev_err(&cq->hwq.pdev->dev,
+@@ -2369,7 +2369,7 @@ static int bnxt_qplib_cq_process_res_rc(struct bnxt_qplib_cq *cq,
+ *pcqe = cqe;
+ } else {
+ rq = &qp->rq;
+- if (wr_id_idx > rq->hwq.max_elements) {
++ if (wr_id_idx >= rq->hwq.max_elements) {
+ dev_err(&cq->hwq.pdev->dev,
+ "QPLIB: FP: CQ Process RC ");
+ dev_err(&cq->hwq.pdev->dev,
+@@ -2437,7 +2437,7 @@ static int bnxt_qplib_cq_process_res_ud(struct bnxt_qplib_cq *cq,
+ if (!srq)
+ return -EINVAL;
+
+- if (wr_id_idx > srq->hwq.max_elements) {
++ if (wr_id_idx >= srq->hwq.max_elements) {
+ dev_err(&cq->hwq.pdev->dev,
+ "QPLIB: FP: CQ Process UD ");
+ dev_err(&cq->hwq.pdev->dev,
+@@ -2452,7 +2452,7 @@ static int bnxt_qplib_cq_process_res_ud(struct bnxt_qplib_cq *cq,
+ *pcqe = cqe;
+ } else {
+ rq = &qp->rq;
+- if (wr_id_idx > rq->hwq.max_elements) {
++ if (wr_id_idx >= rq->hwq.max_elements) {
+ dev_err(&cq->hwq.pdev->dev,
+ "QPLIB: FP: CQ Process UD ");
+ dev_err(&cq->hwq.pdev->dev,
+@@ -2546,7 +2546,7 @@ static int bnxt_qplib_cq_process_res_raweth_qp1(struct bnxt_qplib_cq *cq,
+ "QPLIB: FP: SRQ used but not defined??");
+ return -EINVAL;
+ }
+- if (wr_id_idx > srq->hwq.max_elements) {
++ if (wr_id_idx >= srq->hwq.max_elements) {
+ dev_err(&cq->hwq.pdev->dev,
+ "QPLIB: FP: CQ Process Raw/QP1 ");
+ dev_err(&cq->hwq.pdev->dev,
+@@ -2561,7 +2561,7 @@ static int bnxt_qplib_cq_process_res_raweth_qp1(struct bnxt_qplib_cq *cq,
+ *pcqe = cqe;
+ } else {
+ rq = &qp->rq;
+- if (wr_id_idx > rq->hwq.max_elements) {
++ if (wr_id_idx >= rq->hwq.max_elements) {
+ dev_err(&cq->hwq.pdev->dev,
+ "QPLIB: FP: CQ Process Raw/QP1 RQ wr_id ");
+ dev_err(&cq->hwq.pdev->dev,
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_sp.c b/drivers/infiniband/hw/bnxt_re/qplib_sp.c
+index 2f3f32eaa1d5..4097f3fa25c5 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_sp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_sp.c
+@@ -197,7 +197,7 @@ int bnxt_qplib_get_sgid(struct bnxt_qplib_res *res,
+ struct bnxt_qplib_sgid_tbl *sgid_tbl, int index,
+ struct bnxt_qplib_gid *gid)
+ {
+- if (index > sgid_tbl->max) {
++ if (index >= sgid_tbl->max) {
+ dev_err(&res->pdev->dev,
+ "QPLIB: Index %d exceeded SGID table max (%d)",
+ index, sgid_tbl->max);
+@@ -402,7 +402,7 @@ int bnxt_qplib_get_pkey(struct bnxt_qplib_res *res,
+ *pkey = 0xFFFF;
+ return 0;
+ }
+- if (index > pkey_tbl->max) {
++ if (index >= pkey_tbl->max) {
+ dev_err(&res->pdev->dev,
+ "QPLIB: Index %d exceeded PKEY table max (%d)",
+ index, pkey_tbl->max);
+diff --git a/drivers/infiniband/hw/hfi1/chip.c b/drivers/infiniband/hw/hfi1/chip.c
+index 6deb101cdd43..b49351914feb 100644
+--- a/drivers/infiniband/hw/hfi1/chip.c
++++ b/drivers/infiniband/hw/hfi1/chip.c
+@@ -6733,6 +6733,7 @@ void start_freeze_handling(struct hfi1_pportdata *ppd, int flags)
+ struct hfi1_devdata *dd = ppd->dd;
+ struct send_context *sc;
+ int i;
++ int sc_flags;
+
+ if (flags & FREEZE_SELF)
+ write_csr(dd, CCE_CTRL, CCE_CTRL_SPC_FREEZE_SMASK);
+@@ -6743,11 +6744,13 @@ void start_freeze_handling(struct hfi1_pportdata *ppd, int flags)
+ /* notify all SDMA engines that they are going into a freeze */
+ sdma_freeze_notify(dd, !!(flags & FREEZE_LINK_DOWN));
+
++ sc_flags = SCF_FROZEN | SCF_HALTED | (flags & FREEZE_LINK_DOWN ?
++ SCF_LINK_DOWN : 0);
+ /* do halt pre-handling on all enabled send contexts */
+ for (i = 0; i < dd->num_send_contexts; i++) {
+ sc = dd->send_contexts[i].sc;
+ if (sc && (sc->flags & SCF_ENABLED))
+- sc_stop(sc, SCF_FROZEN | SCF_HALTED);
++ sc_stop(sc, sc_flags);
+ }
+
+ /* Send context are frozen. Notify user space */
+@@ -10665,6 +10668,7 @@ int set_link_state(struct hfi1_pportdata *ppd, u32 state)
+ add_rcvctrl(dd, RCV_CTRL_RCV_PORT_ENABLE_SMASK);
+
+ handle_linkup_change(dd, 1);
++ pio_kernel_linkup(dd);
+
+ /*
+ * After link up, a new link width will have been set.
+diff --git a/drivers/infiniband/hw/hfi1/pio.c b/drivers/infiniband/hw/hfi1/pio.c
+index 9cac15d10c4f..81f7cd7abcc5 100644
+--- a/drivers/infiniband/hw/hfi1/pio.c
++++ b/drivers/infiniband/hw/hfi1/pio.c
+@@ -86,6 +86,7 @@ void pio_send_control(struct hfi1_devdata *dd, int op)
+ unsigned long flags;
+ int write = 1; /* write sendctrl back */
+ int flush = 0; /* re-read sendctrl to make sure it is flushed */
++ int i;
+
+ spin_lock_irqsave(&dd->sendctrl_lock, flags);
+
+@@ -95,9 +96,13 @@ void pio_send_control(struct hfi1_devdata *dd, int op)
+ reg |= SEND_CTRL_SEND_ENABLE_SMASK;
+ /* Fall through */
+ case PSC_DATA_VL_ENABLE:
++ mask = 0;
++ for (i = 0; i < ARRAY_SIZE(dd->vld); i++)
++ if (!dd->vld[i].mtu)
++ mask |= BIT_ULL(i);
+ /* Disallow sending on VLs not enabled */
+- mask = (((~0ull) << num_vls) & SEND_CTRL_UNSUPPORTED_VL_MASK) <<
+- SEND_CTRL_UNSUPPORTED_VL_SHIFT;
++ mask = (mask & SEND_CTRL_UNSUPPORTED_VL_MASK) <<
++ SEND_CTRL_UNSUPPORTED_VL_SHIFT;
+ reg = (reg & ~SEND_CTRL_UNSUPPORTED_VL_SMASK) | mask;
+ break;
+ case PSC_GLOBAL_DISABLE:
+@@ -921,20 +926,18 @@ void sc_free(struct send_context *sc)
+ void sc_disable(struct send_context *sc)
+ {
+ u64 reg;
+- unsigned long flags;
+ struct pio_buf *pbuf;
+
+ if (!sc)
+ return;
+
+ /* do all steps, even if already disabled */
+- spin_lock_irqsave(&sc->alloc_lock, flags);
++ spin_lock_irq(&sc->alloc_lock);
+ reg = read_kctxt_csr(sc->dd, sc->hw_context, SC(CTRL));
+ reg &= ~SC(CTRL_CTXT_ENABLE_SMASK);
+ sc->flags &= ~SCF_ENABLED;
+ sc_wait_for_packet_egress(sc, 1);
+ write_kctxt_csr(sc->dd, sc->hw_context, SC(CTRL), reg);
+- spin_unlock_irqrestore(&sc->alloc_lock, flags);
+
+ /*
+ * Flush any waiters. Once the context is disabled,
+@@ -944,7 +947,7 @@ void sc_disable(struct send_context *sc)
+ * proceed with the flush.
+ */
+ udelay(1);
+- spin_lock_irqsave(&sc->release_lock, flags);
++ spin_lock(&sc->release_lock);
+ if (sc->sr) { /* this context has a shadow ring */
+ while (sc->sr_tail != sc->sr_head) {
+ pbuf = &sc->sr[sc->sr_tail].pbuf;
+@@ -955,7 +958,8 @@ void sc_disable(struct send_context *sc)
+ sc->sr_tail = 0;
+ }
+ }
+- spin_unlock_irqrestore(&sc->release_lock, flags);
++ spin_unlock(&sc->release_lock);
++ spin_unlock_irq(&sc->alloc_lock);
+ }
+
+ /* return SendEgressCtxtStatus.PacketOccupancy */
+@@ -1178,11 +1182,39 @@ void pio_kernel_unfreeze(struct hfi1_devdata *dd)
+ sc = dd->send_contexts[i].sc;
+ if (!sc || !(sc->flags & SCF_FROZEN) || sc->type == SC_USER)
+ continue;
++ if (sc->flags & SCF_LINK_DOWN)
++ continue;
+
+ sc_enable(sc); /* will clear the sc frozen flag */
+ }
+ }
+
++/**
++ * pio_kernel_linkup() - Re-enable send contexts after linkup event
++ * @dd: valid device data
++ *
++ * When the link goes down, the freeze path is taken. However, a link down
++ * event is different from a freeze because if the send context is re-enabled
++ * whoever is sending data will start sending data again, which will hang
++ * any QP that is sending data.
++ *
++ * The freeze path now looks at the type of event that occurs and takes this
++ * path for link down event.
++ */
++void pio_kernel_linkup(struct hfi1_devdata *dd)
++{
++ struct send_context *sc;
++ int i;
++
++ for (i = 0; i < dd->num_send_contexts; i++) {
++ sc = dd->send_contexts[i].sc;
++ if (!sc || !(sc->flags & SCF_LINK_DOWN) || sc->type == SC_USER)
++ continue;
++
++ sc_enable(sc); /* will clear the sc link down flag */
++ }
++}
++
+ /*
+ * Wait for the SendPioInitCtxt.PioInitInProgress bit to clear.
+ * Returns:
+@@ -1382,11 +1414,10 @@ void sc_stop(struct send_context *sc, int flag)
+ {
+ unsigned long flags;
+
+- /* mark the context */
+- sc->flags |= flag;
+-
+ /* stop buffer allocations */
+ spin_lock_irqsave(&sc->alloc_lock, flags);
++ /* mark the context */
++ sc->flags |= flag;
+ sc->flags &= ~SCF_ENABLED;
+ spin_unlock_irqrestore(&sc->alloc_lock, flags);
+ wake_up(&sc->halt_wait);
+diff --git a/drivers/infiniband/hw/hfi1/pio.h b/drivers/infiniband/hw/hfi1/pio.h
+index 058b08f459ab..aaf372c3e5d6 100644
+--- a/drivers/infiniband/hw/hfi1/pio.h
++++ b/drivers/infiniband/hw/hfi1/pio.h
+@@ -139,6 +139,7 @@ struct send_context {
+ #define SCF_IN_FREE 0x02
+ #define SCF_HALTED 0x04
+ #define SCF_FROZEN 0x08
++#define SCF_LINK_DOWN 0x10
+
+ struct send_context_info {
+ struct send_context *sc; /* allocated working context */
+@@ -306,6 +307,7 @@ void set_pio_integrity(struct send_context *sc);
+ void pio_reset_all(struct hfi1_devdata *dd);
+ void pio_freeze(struct hfi1_devdata *dd);
+ void pio_kernel_unfreeze(struct hfi1_devdata *dd);
++void pio_kernel_linkup(struct hfi1_devdata *dd);
+
+ /* global PIO send control operations */
+ #define PSC_GLOBAL_ENABLE 0
+diff --git a/drivers/infiniband/hw/hfi1/user_sdma.c b/drivers/infiniband/hw/hfi1/user_sdma.c
+index a3a7b33196d6..5c88706121c1 100644
+--- a/drivers/infiniband/hw/hfi1/user_sdma.c
++++ b/drivers/infiniband/hw/hfi1/user_sdma.c
+@@ -828,7 +828,7 @@ static int user_sdma_send_pkts(struct user_sdma_request *req, unsigned maxpkts)
+ if (READ_ONCE(iovec->offset) == iovec->iov.iov_len) {
+ if (++req->iov_idx == req->data_iovs) {
+ ret = -EFAULT;
+- goto free_txreq;
++ goto free_tx;
+ }
+ iovec = &req->iovs[req->iov_idx];
+ WARN_ON(iovec->offset);
+diff --git a/drivers/infiniband/hw/hfi1/verbs.c b/drivers/infiniband/hw/hfi1/verbs.c
+index 08991874c0e2..a1040a142aac 100644
+--- a/drivers/infiniband/hw/hfi1/verbs.c
++++ b/drivers/infiniband/hw/hfi1/verbs.c
+@@ -1590,6 +1590,7 @@ static int hfi1_check_ah(struct ib_device *ibdev, struct rdma_ah_attr *ah_attr)
+ struct hfi1_pportdata *ppd;
+ struct hfi1_devdata *dd;
+ u8 sc5;
++ u8 sl;
+
+ if (hfi1_check_mcast(rdma_ah_get_dlid(ah_attr)) &&
+ !(rdma_ah_get_ah_flags(ah_attr) & IB_AH_GRH))
+@@ -1598,8 +1599,13 @@ static int hfi1_check_ah(struct ib_device *ibdev, struct rdma_ah_attr *ah_attr)
+ /* test the mapping for validity */
+ ibp = to_iport(ibdev, rdma_ah_get_port_num(ah_attr));
+ ppd = ppd_from_ibp(ibp);
+- sc5 = ibp->sl_to_sc[rdma_ah_get_sl(ah_attr)];
+ dd = dd_from_ppd(ppd);
++
++ sl = rdma_ah_get_sl(ah_attr);
++ if (sl >= ARRAY_SIZE(ibp->sl_to_sc))
++ return -EINVAL;
++
++ sc5 = ibp->sl_to_sc[sl];
+ if (sc_to_vlt(dd, sc5) > num_vls && sc_to_vlt(dd, sc5) != 0xf)
+ return -EINVAL;
+ return 0;
+diff --git a/drivers/infiniband/hw/i40iw/i40iw_verbs.c b/drivers/infiniband/hw/i40iw/i40iw_verbs.c
+index 68679ad4c6da..937899fea01d 100644
+--- a/drivers/infiniband/hw/i40iw/i40iw_verbs.c
++++ b/drivers/infiniband/hw/i40iw/i40iw_verbs.c
+@@ -1409,6 +1409,7 @@ static void i40iw_set_hugetlb_values(u64 addr, struct i40iw_mr *iwmr)
+ struct vm_area_struct *vma;
+ struct hstate *h;
+
++ down_read(&current->mm->mmap_sem);
+ vma = find_vma(current->mm, addr);
+ if (vma && is_vm_hugetlb_page(vma)) {
+ h = hstate_vma(vma);
+@@ -1417,6 +1418,7 @@ static void i40iw_set_hugetlb_values(u64 addr, struct i40iw_mr *iwmr)
+ iwmr->page_msk = huge_page_mask(h);
+ }
+ }
++ up_read(&current->mm->mmap_sem);
+ }
+
+ /**
+diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c
+index 3b8045fd23ed..b94e33a56e97 100644
+--- a/drivers/infiniband/hw/mlx4/qp.c
++++ b/drivers/infiniband/hw/mlx4/qp.c
+@@ -4047,9 +4047,9 @@ static void to_rdma_ah_attr(struct mlx4_ib_dev *ibdev,
+ u8 port_num = path->sched_queue & 0x40 ? 2 : 1;
+
+ memset(ah_attr, 0, sizeof(*ah_attr));
+- ah_attr->type = rdma_ah_find_type(&ibdev->ib_dev, port_num);
+ if (port_num == 0 || port_num > dev->caps.num_ports)
+ return;
++ ah_attr->type = rdma_ah_find_type(&ibdev->ib_dev, port_num);
+
+ if (ah_attr->type == RDMA_AH_ATTR_TYPE_ROCE)
+ rdma_ah_set_sl(ah_attr, ((path->sched_queue >> 3) & 0x7) |
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index cbeae4509359..85677afa6f77 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -2699,7 +2699,7 @@ static int parse_flow_attr(struct mlx5_core_dev *mdev, u32 *match_c,
+ IPPROTO_GRE);
+
+ MLX5_SET(fte_match_set_misc, misc_params_c, gre_protocol,
+- 0xffff);
++ ntohs(ib_spec->gre.mask.protocol));
+ MLX5_SET(fte_match_set_misc, misc_params_v, gre_protocol,
+ ntohs(ib_spec->gre.val.protocol));
+
+diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
+index 9786b24b956f..2b8cc76bb77e 100644
+--- a/drivers/infiniband/ulp/srp/ib_srp.c
++++ b/drivers/infiniband/ulp/srp/ib_srp.c
+@@ -2954,7 +2954,7 @@ static int srp_reset_device(struct scsi_cmnd *scmnd)
+ {
+ struct srp_target_port *target = host_to_target(scmnd->device->host);
+ struct srp_rdma_ch *ch;
+- int i;
++ int i, j;
+ u8 status;
+
+ shost_printk(KERN_ERR, target->scsi_host, "SRP reset_device called\n");
+@@ -2968,8 +2968,8 @@ static int srp_reset_device(struct scsi_cmnd *scmnd)
+
+ for (i = 0; i < target->ch_count; i++) {
+ ch = &target->ch[i];
+- for (i = 0; i < target->req_ring_size; ++i) {
+- struct srp_request *req = &ch->req_ring[i];
++ for (j = 0; j < target->req_ring_size; ++j) {
++ struct srp_request *req = &ch->req_ring[j];
+
+ srp_finish_req(ch, req, scmnd->device, DID_RESET << 16);
+ }
+diff --git a/drivers/input/misc/xen-kbdfront.c b/drivers/input/misc/xen-kbdfront.c
+index d91f3b1c5375..92d739649022 100644
+--- a/drivers/input/misc/xen-kbdfront.c
++++ b/drivers/input/misc/xen-kbdfront.c
+@@ -229,7 +229,7 @@ static int xenkbd_probe(struct xenbus_device *dev,
+ }
+ }
+
+- touch = xenbus_read_unsigned(dev->nodename,
++ touch = xenbus_read_unsigned(dev->otherend,
+ XENKBD_FIELD_FEAT_MTOUCH, 0);
+ if (touch) {
+ ret = xenbus_write(XBT_NIL, dev->nodename,
+@@ -304,13 +304,13 @@ static int xenkbd_probe(struct xenbus_device *dev,
+ if (!mtouch)
+ goto error_nomem;
+
+- num_cont = xenbus_read_unsigned(info->xbdev->nodename,
++ num_cont = xenbus_read_unsigned(info->xbdev->otherend,
+ XENKBD_FIELD_MT_NUM_CONTACTS,
+ 1);
+- width = xenbus_read_unsigned(info->xbdev->nodename,
++ width = xenbus_read_unsigned(info->xbdev->otherend,
+ XENKBD_FIELD_MT_WIDTH,
+ XENFB_WIDTH);
+- height = xenbus_read_unsigned(info->xbdev->nodename,
++ height = xenbus_read_unsigned(info->xbdev->otherend,
+ XENKBD_FIELD_MT_HEIGHT,
+ XENFB_HEIGHT);
+
+diff --git a/drivers/input/mouse/elantech.c b/drivers/input/mouse/elantech.c
+index dd85b16dc6f8..88564f729e93 100644
+--- a/drivers/input/mouse/elantech.c
++++ b/drivers/input/mouse/elantech.c
+@@ -1178,6 +1178,8 @@ static const struct dmi_system_id elantech_dmi_has_middle_button[] = {
+ static const char * const middle_button_pnp_ids[] = {
+ "LEN2131", /* ThinkPad P52 w/ NFC */
+ "LEN2132", /* ThinkPad P52 */
++ "LEN2133", /* ThinkPad P72 w/ NFC */
++ "LEN2134", /* ThinkPad P72 */
+ NULL
+ };
+
+diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
+index 596b95c50051..d77c97fe4a23 100644
+--- a/drivers/iommu/amd_iommu.c
++++ b/drivers/iommu/amd_iommu.c
+@@ -2405,9 +2405,9 @@ static void __unmap_single(struct dma_ops_domain *dma_dom,
+ }
+
+ if (amd_iommu_unmap_flush) {
+- dma_ops_free_iova(dma_dom, dma_addr, pages);
+ domain_flush_tlb(&dma_dom->domain);
+ domain_flush_complete(&dma_dom->domain);
++ dma_ops_free_iova(dma_dom, dma_addr, pages);
+ } else {
+ pages = __roundup_pow_of_two(pages);
+ queue_iova(&dma_dom->iovad, dma_addr >> PAGE_SHIFT, pages, 0);
+diff --git a/drivers/iommu/msm_iommu.c b/drivers/iommu/msm_iommu.c
+index 0d3350463a3f..9a95c9b9d0d8 100644
+--- a/drivers/iommu/msm_iommu.c
++++ b/drivers/iommu/msm_iommu.c
+@@ -395,20 +395,15 @@ static int msm_iommu_add_device(struct device *dev)
+ struct msm_iommu_dev *iommu;
+ struct iommu_group *group;
+ unsigned long flags;
+- int ret = 0;
+
+ spin_lock_irqsave(&msm_iommu_lock, flags);
+-
+ iommu = find_iommu_for_dev(dev);
++ spin_unlock_irqrestore(&msm_iommu_lock, flags);
++
+ if (iommu)
+ iommu_device_link(&iommu->iommu, dev);
+ else
+- ret = -ENODEV;
+-
+- spin_unlock_irqrestore(&msm_iommu_lock, flags);
+-
+- if (ret)
+- return ret;
++ return -ENODEV;
+
+ group = iommu_group_get_for_dev(dev);
+ if (IS_ERR(group))
+@@ -425,13 +420,12 @@ static void msm_iommu_remove_device(struct device *dev)
+ unsigned long flags;
+
+ spin_lock_irqsave(&msm_iommu_lock, flags);
+-
+ iommu = find_iommu_for_dev(dev);
++ spin_unlock_irqrestore(&msm_iommu_lock, flags);
++
+ if (iommu)
+ iommu_device_unlink(&iommu->iommu, dev);
+
+- spin_unlock_irqrestore(&msm_iommu_lock, flags);
+-
+ iommu_group_remove_device(dev);
+ }
+
+diff --git a/drivers/md/md-cluster.c b/drivers/md/md-cluster.c
+index 021cbf9ef1bf..1ac945f7a3c2 100644
+--- a/drivers/md/md-cluster.c
++++ b/drivers/md/md-cluster.c
+@@ -304,15 +304,6 @@ static void recover_bitmaps(struct md_thread *thread)
+ while (cinfo->recovery_map) {
+ slot = fls64((u64)cinfo->recovery_map) - 1;
+
+- /* Clear suspend_area associated with the bitmap */
+- spin_lock_irq(&cinfo->suspend_lock);
+- list_for_each_entry_safe(s, tmp, &cinfo->suspend_list, list)
+- if (slot == s->slot) {
+- list_del(&s->list);
+- kfree(s);
+- }
+- spin_unlock_irq(&cinfo->suspend_lock);
+-
+ snprintf(str, 64, "bitmap%04d", slot);
+ bm_lockres = lockres_init(mddev, str, NULL, 1);
+ if (!bm_lockres) {
+@@ -331,6 +322,16 @@ static void recover_bitmaps(struct md_thread *thread)
+ pr_err("md-cluster: Could not copy data from bitmap %d\n", slot);
+ goto clear_bit;
+ }
++
++ /* Clear suspend_area associated with the bitmap */
++ spin_lock_irq(&cinfo->suspend_lock);
++ list_for_each_entry_safe(s, tmp, &cinfo->suspend_list, list)
++ if (slot == s->slot) {
++ list_del(&s->list);
++ kfree(s);
++ }
++ spin_unlock_irq(&cinfo->suspend_lock);
++
+ if (hi > 0) {
+ if (lo < mddev->recovery_cp)
+ mddev->recovery_cp = lo;
+diff --git a/drivers/media/i2c/ov772x.c b/drivers/media/i2c/ov772x.c
+index e2550708abc8..3fdbe644648a 100644
+--- a/drivers/media/i2c/ov772x.c
++++ b/drivers/media/i2c/ov772x.c
+@@ -542,9 +542,19 @@ static struct ov772x_priv *to_ov772x(struct v4l2_subdev *sd)
+ return container_of(sd, struct ov772x_priv, subdev);
+ }
+
+-static inline int ov772x_read(struct i2c_client *client, u8 addr)
++static int ov772x_read(struct i2c_client *client, u8 addr)
+ {
+- return i2c_smbus_read_byte_data(client, addr);
++ int ret;
++ u8 val;
++
++ ret = i2c_master_send(client, &addr, 1);
++ if (ret < 0)
++ return ret;
++ ret = i2c_master_recv(client, &val, 1);
++ if (ret < 0)
++ return ret;
++
++ return val;
+ }
+
+ static inline int ov772x_write(struct i2c_client *client, u8 addr, u8 value)
+@@ -1136,7 +1146,7 @@ static int ov772x_set_fmt(struct v4l2_subdev *sd,
+ static int ov772x_video_probe(struct ov772x_priv *priv)
+ {
+ struct i2c_client *client = v4l2_get_subdevdata(&priv->subdev);
+- u8 pid, ver;
++ int pid, ver, midh, midl;
+ const char *devname;
+ int ret;
+
+@@ -1146,7 +1156,11 @@ static int ov772x_video_probe(struct ov772x_priv *priv)
+
+ /* Check and show product ID and manufacturer ID. */
+ pid = ov772x_read(client, PID);
++ if (pid < 0)
++ return pid;
+ ver = ov772x_read(client, VER);
++ if (ver < 0)
++ return ver;
+
+ switch (VERSION(pid, ver)) {
+ case OV7720:
+@@ -1162,13 +1176,17 @@ static int ov772x_video_probe(struct ov772x_priv *priv)
+ goto done;
+ }
+
++ midh = ov772x_read(client, MIDH);
++ if (midh < 0)
++ return midh;
++ midl = ov772x_read(client, MIDL);
++ if (midl < 0)
++ return midl;
++
+ dev_info(&client->dev,
+ "%s Product ID %0x:%0x Manufacturer ID %x:%x\n",
+- devname,
+- pid,
+- ver,
+- ov772x_read(client, MIDH),
+- ov772x_read(client, MIDL));
++ devname, pid, ver, midh, midl);
++
+ ret = v4l2_ctrl_handler_setup(&priv->hdl);
+
+ done:
+@@ -1255,13 +1273,11 @@ static int ov772x_probe(struct i2c_client *client,
+ return -EINVAL;
+ }
+
+- if (!i2c_check_functionality(adapter, I2C_FUNC_SMBUS_BYTE_DATA |
+- I2C_FUNC_PROTOCOL_MANGLING)) {
++ if (!i2c_check_functionality(adapter, I2C_FUNC_SMBUS_BYTE_DATA)) {
+ dev_err(&adapter->dev,
+- "I2C-Adapter doesn't support SMBUS_BYTE_DATA or PROTOCOL_MANGLING\n");
++ "I2C-Adapter doesn't support SMBUS_BYTE_DATA\n");
+ return -EIO;
+ }
+- client->flags |= I2C_CLIENT_SCCB;
+
+ priv = devm_kzalloc(&client->dev, sizeof(*priv), GFP_KERNEL);
+ if (!priv)
+diff --git a/drivers/media/i2c/soc_camera/ov772x.c b/drivers/media/i2c/soc_camera/ov772x.c
+index 806383500313..14377af7c888 100644
+--- a/drivers/media/i2c/soc_camera/ov772x.c
++++ b/drivers/media/i2c/soc_camera/ov772x.c
+@@ -834,7 +834,7 @@ static int ov772x_set_params(struct ov772x_priv *priv,
+ * set COM8
+ */
+ if (priv->band_filter) {
+- ret = ov772x_mask_set(client, COM8, BNDF_ON_OFF, 1);
++ ret = ov772x_mask_set(client, COM8, BNDF_ON_OFF, BNDF_ON_OFF);
+ if (!ret)
+ ret = ov772x_mask_set(client, BDBASE,
+ 0xff, 256 - priv->band_filter);
+diff --git a/drivers/media/platform/exynos4-is/fimc-isp-video.c b/drivers/media/platform/exynos4-is/fimc-isp-video.c
+index 55ba696b8cf4..a920164f53f1 100644
+--- a/drivers/media/platform/exynos4-is/fimc-isp-video.c
++++ b/drivers/media/platform/exynos4-is/fimc-isp-video.c
+@@ -384,12 +384,17 @@ static void __isp_video_try_fmt(struct fimc_isp *isp,
+ struct v4l2_pix_format_mplane *pixm,
+ const struct fimc_fmt **fmt)
+ {
+- *fmt = fimc_isp_find_format(&pixm->pixelformat, NULL, 2);
++ const struct fimc_fmt *__fmt;
++
++ __fmt = fimc_isp_find_format(&pixm->pixelformat, NULL, 2);
++
++ if (fmt)
++ *fmt = __fmt;
+
+ pixm->colorspace = V4L2_COLORSPACE_SRGB;
+ pixm->field = V4L2_FIELD_NONE;
+- pixm->num_planes = (*fmt)->memplanes;
+- pixm->pixelformat = (*fmt)->fourcc;
++ pixm->num_planes = __fmt->memplanes;
++ pixm->pixelformat = __fmt->fourcc;
+ /*
+ * TODO: double check with the docmentation these width/height
+ * constraints are correct.
+diff --git a/drivers/media/platform/fsl-viu.c b/drivers/media/platform/fsl-viu.c
+index e41510ce69a4..0273302aa741 100644
+--- a/drivers/media/platform/fsl-viu.c
++++ b/drivers/media/platform/fsl-viu.c
+@@ -1414,7 +1414,7 @@ static int viu_of_probe(struct platform_device *op)
+ sizeof(struct viu_reg), DRV_NAME)) {
+ dev_err(&op->dev, "Error while requesting mem region\n");
+ ret = -EBUSY;
+- goto err;
++ goto err_irq;
+ }
+
+ /* remap registers */
+@@ -1422,7 +1422,7 @@ static int viu_of_probe(struct platform_device *op)
+ if (!viu_regs) {
+ dev_err(&op->dev, "Can't map register set\n");
+ ret = -ENOMEM;
+- goto err;
++ goto err_irq;
+ }
+
+ /* Prepare our private structure */
+@@ -1430,7 +1430,7 @@ static int viu_of_probe(struct platform_device *op)
+ if (!viu_dev) {
+ dev_err(&op->dev, "Can't allocate private structure\n");
+ ret = -ENOMEM;
+- goto err;
++ goto err_irq;
+ }
+
+ viu_dev->vr = viu_regs;
+@@ -1446,16 +1446,21 @@ static int viu_of_probe(struct platform_device *op)
+ ret = v4l2_device_register(viu_dev->dev, &viu_dev->v4l2_dev);
+ if (ret < 0) {
+ dev_err(&op->dev, "v4l2_device_register() failed: %d\n", ret);
+- goto err;
++ goto err_irq;
+ }
+
+ ad = i2c_get_adapter(0);
++ if (!ad) {
++ ret = -EFAULT;
++ dev_err(&op->dev, "couldn't get i2c adapter\n");
++ goto err_v4l2;
++ }
+
+ v4l2_ctrl_handler_init(&viu_dev->hdl, 5);
+ if (viu_dev->hdl.error) {
+ ret = viu_dev->hdl.error;
+ dev_err(&op->dev, "couldn't register control\n");
+- goto err_vdev;
++ goto err_i2c;
+ }
+ /* This control handler will inherit the control(s) from the
+ sub-device(s). */
+@@ -1471,7 +1476,7 @@ static int viu_of_probe(struct platform_device *op)
+ vdev = video_device_alloc();
+ if (vdev == NULL) {
+ ret = -ENOMEM;
+- goto err_vdev;
++ goto err_hdl;
+ }
+
+ *vdev = viu_template;
+@@ -1492,7 +1497,7 @@ static int viu_of_probe(struct platform_device *op)
+ ret = video_register_device(viu_dev->vdev, VFL_TYPE_GRABBER, -1);
+ if (ret < 0) {
+ video_device_release(viu_dev->vdev);
+- goto err_vdev;
++ goto err_unlock;
+ }
+
+ /* enable VIU clock */
+@@ -1500,12 +1505,12 @@ static int viu_of_probe(struct platform_device *op)
+ if (IS_ERR(clk)) {
+ dev_err(&op->dev, "failed to lookup the clock!\n");
+ ret = PTR_ERR(clk);
+- goto err_clk;
++ goto err_vdev;
+ }
+ ret = clk_prepare_enable(clk);
+ if (ret) {
+ dev_err(&op->dev, "failed to enable the clock!\n");
+- goto err_clk;
++ goto err_vdev;
+ }
+ viu_dev->clk = clk;
+
+@@ -1516,7 +1521,7 @@ static int viu_of_probe(struct platform_device *op)
+ if (request_irq(viu_dev->irq, viu_intr, 0, "viu", (void *)viu_dev)) {
+ dev_err(&op->dev, "Request VIU IRQ failed.\n");
+ ret = -ENODEV;
+- goto err_irq;
++ goto err_clk;
+ }
+
+ mutex_unlock(&viu_dev->lock);
+@@ -1524,16 +1529,19 @@ static int viu_of_probe(struct platform_device *op)
+ dev_info(&op->dev, "Freescale VIU Video Capture Board\n");
+ return ret;
+
+-err_irq:
+- clk_disable_unprepare(viu_dev->clk);
+ err_clk:
+- video_unregister_device(viu_dev->vdev);
++ clk_disable_unprepare(viu_dev->clk);
+ err_vdev:
+- v4l2_ctrl_handler_free(&viu_dev->hdl);
++ video_unregister_device(viu_dev->vdev);
++err_unlock:
+ mutex_unlock(&viu_dev->lock);
++err_hdl:
++ v4l2_ctrl_handler_free(&viu_dev->hdl);
++err_i2c:
+ i2c_put_adapter(ad);
++err_v4l2:
+ v4l2_device_unregister(&viu_dev->v4l2_dev);
+-err:
++err_irq:
+ irq_dispose_mapping(viu_irq);
+ return ret;
+ }
+diff --git a/drivers/media/platform/omap3isp/isp.c b/drivers/media/platform/omap3isp/isp.c
+index f22cf351e3ee..ae0ef8b241a7 100644
+--- a/drivers/media/platform/omap3isp/isp.c
++++ b/drivers/media/platform/omap3isp/isp.c
+@@ -300,7 +300,7 @@ static struct clk *isp_xclk_src_get(struct of_phandle_args *clkspec, void *data)
+ static int isp_xclk_init(struct isp_device *isp)
+ {
+ struct device_node *np = isp->dev->of_node;
+- struct clk_init_data init;
++ struct clk_init_data init = { 0 };
+ unsigned int i;
+
+ for (i = 0; i < ARRAY_SIZE(isp->xclks); ++i)
+diff --git a/drivers/media/platform/s3c-camif/camif-capture.c b/drivers/media/platform/s3c-camif/camif-capture.c
+index 9ab8e7ee2e1e..b1d9f3857d3d 100644
+--- a/drivers/media/platform/s3c-camif/camif-capture.c
++++ b/drivers/media/platform/s3c-camif/camif-capture.c
+@@ -117,6 +117,8 @@ static int sensor_set_power(struct camif_dev *camif, int on)
+
+ if (camif->sensor.power_count == !on)
+ err = v4l2_subdev_call(sensor->sd, core, s_power, on);
++ if (err == -ENOIOCTLCMD)
++ err = 0;
+ if (!err)
+ sensor->power_count += on ? 1 : -1;
+
+diff --git a/drivers/media/usb/tm6000/tm6000-dvb.c b/drivers/media/usb/tm6000/tm6000-dvb.c
+index c811fc6cf48a..3a4e545c6037 100644
+--- a/drivers/media/usb/tm6000/tm6000-dvb.c
++++ b/drivers/media/usb/tm6000/tm6000-dvb.c
+@@ -266,6 +266,11 @@ static int register_dvb(struct tm6000_core *dev)
+
+ ret = dvb_register_adapter(&dvb->adapter, "Trident TVMaster 6000 DVB-T",
+ THIS_MODULE, &dev->udev->dev, adapter_nr);
++ if (ret < 0) {
++ pr_err("tm6000: couldn't register the adapter!\n");
++ goto err;
++ }
++
+ dvb->adapter.priv = dev;
+
+ if (dvb->frontend) {
+diff --git a/drivers/media/v4l2-core/v4l2-event.c b/drivers/media/v4l2-core/v4l2-event.c
+index 127fe6eb91d9..a3ef1f50a4b3 100644
+--- a/drivers/media/v4l2-core/v4l2-event.c
++++ b/drivers/media/v4l2-core/v4l2-event.c
+@@ -115,14 +115,6 @@ static void __v4l2_event_queue_fh(struct v4l2_fh *fh, const struct v4l2_event *e
+ if (sev == NULL)
+ return;
+
+- /*
+- * If the event has been added to the fh->subscribed list, but its
+- * add op has not completed yet elems will be 0, treat this as
+- * not being subscribed.
+- */
+- if (!sev->elems)
+- return;
+-
+ /* Increase event sequence number on fh. */
+ fh->sequence++;
+
+@@ -208,6 +200,7 @@ int v4l2_event_subscribe(struct v4l2_fh *fh,
+ struct v4l2_subscribed_event *sev, *found_ev;
+ unsigned long flags;
+ unsigned i;
++ int ret = 0;
+
+ if (sub->type == V4L2_EVENT_ALL)
+ return -EINVAL;
+@@ -225,31 +218,36 @@ int v4l2_event_subscribe(struct v4l2_fh *fh,
+ sev->flags = sub->flags;
+ sev->fh = fh;
+ sev->ops = ops;
++ sev->elems = elems;
++
++ mutex_lock(&fh->subscribe_lock);
+
+ spin_lock_irqsave(&fh->vdev->fh_lock, flags);
+ found_ev = v4l2_event_subscribed(fh, sub->type, sub->id);
+- if (!found_ev)
+- list_add(&sev->list, &fh->subscribed);
+ spin_unlock_irqrestore(&fh->vdev->fh_lock, flags);
+
+ if (found_ev) {
++ /* Already listening */
+ kvfree(sev);
+- return 0; /* Already listening */
++ goto out_unlock;
+ }
+
+ if (sev->ops && sev->ops->add) {
+- int ret = sev->ops->add(sev, elems);
++ ret = sev->ops->add(sev, elems);
+ if (ret) {
+- sev->ops = NULL;
+- v4l2_event_unsubscribe(fh, sub);
+- return ret;
++ kvfree(sev);
++ goto out_unlock;
+ }
+ }
+
+- /* Mark as ready for use */
+- sev->elems = elems;
++ spin_lock_irqsave(&fh->vdev->fh_lock, flags);
++ list_add(&sev->list, &fh->subscribed);
++ spin_unlock_irqrestore(&fh->vdev->fh_lock, flags);
+
+- return 0;
++out_unlock:
++ mutex_unlock(&fh->subscribe_lock);
++
++ return ret;
+ }
+ EXPORT_SYMBOL_GPL(v4l2_event_subscribe);
+
+@@ -288,6 +286,8 @@ int v4l2_event_unsubscribe(struct v4l2_fh *fh,
+ return 0;
+ }
+
++ mutex_lock(&fh->subscribe_lock);
++
+ spin_lock_irqsave(&fh->vdev->fh_lock, flags);
+
+ sev = v4l2_event_subscribed(fh, sub->type, sub->id);
+@@ -305,6 +305,8 @@ int v4l2_event_unsubscribe(struct v4l2_fh *fh,
+ if (sev && sev->ops && sev->ops->del)
+ sev->ops->del(sev);
+
++ mutex_unlock(&fh->subscribe_lock);
++
+ kvfree(sev);
+
+ return 0;
+diff --git a/drivers/media/v4l2-core/v4l2-fh.c b/drivers/media/v4l2-core/v4l2-fh.c
+index 3895999bf880..c91a7bd3ecfc 100644
+--- a/drivers/media/v4l2-core/v4l2-fh.c
++++ b/drivers/media/v4l2-core/v4l2-fh.c
+@@ -45,6 +45,7 @@ void v4l2_fh_init(struct v4l2_fh *fh, struct video_device *vdev)
+ INIT_LIST_HEAD(&fh->available);
+ INIT_LIST_HEAD(&fh->subscribed);
+ fh->sequence = -1;
++ mutex_init(&fh->subscribe_lock);
+ }
+ EXPORT_SYMBOL_GPL(v4l2_fh_init);
+
+@@ -90,6 +91,7 @@ void v4l2_fh_exit(struct v4l2_fh *fh)
+ return;
+ v4l_disable_media_source(fh->vdev);
+ v4l2_event_unsubscribe_all(fh);
++ mutex_destroy(&fh->subscribe_lock);
+ fh->vdev = NULL;
+ }
+ EXPORT_SYMBOL_GPL(v4l2_fh_exit);
+diff --git a/drivers/misc/ibmvmc.c b/drivers/misc/ibmvmc.c
+index 50d82c3d032a..b8aaa684c397 100644
+--- a/drivers/misc/ibmvmc.c
++++ b/drivers/misc/ibmvmc.c
+@@ -273,7 +273,7 @@ static void *alloc_dma_buffer(struct vio_dev *vdev, size_t size,
+ dma_addr_t *dma_handle)
+ {
+ /* allocate memory */
+- void *buffer = kzalloc(size, GFP_KERNEL);
++ void *buffer = kzalloc(size, GFP_ATOMIC);
+
+ if (!buffer) {
+ *dma_handle = 0;
+diff --git a/drivers/misc/sram.c b/drivers/misc/sram.c
+index 679647713e36..74b183baf044 100644
+--- a/drivers/misc/sram.c
++++ b/drivers/misc/sram.c
+@@ -391,23 +391,23 @@ static int sram_probe(struct platform_device *pdev)
+ if (IS_ERR(sram->pool))
+ return PTR_ERR(sram->pool);
+
+- ret = sram_reserve_regions(sram, res);
+- if (ret)
+- return ret;
+-
+ sram->clk = devm_clk_get(sram->dev, NULL);
+ if (IS_ERR(sram->clk))
+ sram->clk = NULL;
+ else
+ clk_prepare_enable(sram->clk);
+
++ ret = sram_reserve_regions(sram, res);
++ if (ret)
++ goto err_disable_clk;
++
+ platform_set_drvdata(pdev, sram);
+
+ init_func = of_device_get_match_data(&pdev->dev);
+ if (init_func) {
+ ret = init_func();
+ if (ret)
+- goto err_disable_clk;
++ goto err_free_partitions;
+ }
+
+ dev_dbg(sram->dev, "SRAM pool: %zu KiB @ 0x%p\n",
+@@ -415,10 +415,11 @@ static int sram_probe(struct platform_device *pdev)
+
+ return 0;
+
++err_free_partitions:
++ sram_free_partitions(sram);
+ err_disable_clk:
+ if (sram->clk)
+ clk_disable_unprepare(sram->clk);
+- sram_free_partitions(sram);
+
+ return ret;
+ }
+diff --git a/drivers/misc/tsl2550.c b/drivers/misc/tsl2550.c
+index adf46072cb37..3fce3b6a3624 100644
+--- a/drivers/misc/tsl2550.c
++++ b/drivers/misc/tsl2550.c
+@@ -177,7 +177,7 @@ static int tsl2550_calculate_lux(u8 ch0, u8 ch1)
+ } else
+ lux = 0;
+ else
+- return -EAGAIN;
++ return 0;
+
+ /* LUX range check */
+ return lux > TSL2550_MAX_LUX ? TSL2550_MAX_LUX : lux;
+diff --git a/drivers/misc/vmw_vmci/vmci_queue_pair.c b/drivers/misc/vmw_vmci/vmci_queue_pair.c
+index b4d7774cfe07..d95e8648e7b3 100644
+--- a/drivers/misc/vmw_vmci/vmci_queue_pair.c
++++ b/drivers/misc/vmw_vmci/vmci_queue_pair.c
+@@ -668,7 +668,7 @@ static int qp_host_get_user_memory(u64 produce_uva,
+ retval = get_user_pages_fast((uintptr_t) produce_uva,
+ produce_q->kernel_if->num_pages, 1,
+ produce_q->kernel_if->u.h.header_page);
+- if (retval < produce_q->kernel_if->num_pages) {
++ if (retval < (int)produce_q->kernel_if->num_pages) {
+ pr_debug("get_user_pages_fast(produce) failed (retval=%d)",
+ retval);
+ qp_release_pages(produce_q->kernel_if->u.h.header_page,
+@@ -680,7 +680,7 @@ static int qp_host_get_user_memory(u64 produce_uva,
+ retval = get_user_pages_fast((uintptr_t) consume_uva,
+ consume_q->kernel_if->num_pages, 1,
+ consume_q->kernel_if->u.h.header_page);
+- if (retval < consume_q->kernel_if->num_pages) {
++ if (retval < (int)consume_q->kernel_if->num_pages) {
+ pr_debug("get_user_pages_fast(consume) failed (retval=%d)",
+ retval);
+ qp_release_pages(consume_q->kernel_if->u.h.header_page,
+diff --git a/drivers/mmc/host/android-goldfish.c b/drivers/mmc/host/android-goldfish.c
+index 294de177632c..61e4e2a213c9 100644
+--- a/drivers/mmc/host/android-goldfish.c
++++ b/drivers/mmc/host/android-goldfish.c
+@@ -217,7 +217,7 @@ static void goldfish_mmc_xfer_done(struct goldfish_mmc_host *host,
+ * We don't really have DMA, so we need
+ * to copy from our platform driver buffer
+ */
+- sg_copy_to_buffer(data->sg, 1, host->virt_base,
++ sg_copy_from_buffer(data->sg, 1, host->virt_base,
+ data->sg->length);
+ }
+ host->data->bytes_xfered += data->sg->length;
+@@ -393,7 +393,7 @@ static void goldfish_mmc_prepare_data(struct goldfish_mmc_host *host,
+ * We don't really have DMA, so we need to copy to our
+ * platform driver buffer
+ */
+- sg_copy_from_buffer(data->sg, 1, host->virt_base,
++ sg_copy_to_buffer(data->sg, 1, host->virt_base,
+ data->sg->length);
+ }
+ }
+diff --git a/drivers/mmc/host/atmel-mci.c b/drivers/mmc/host/atmel-mci.c
+index 5aa2c9404e92..be53044086c7 100644
+--- a/drivers/mmc/host/atmel-mci.c
++++ b/drivers/mmc/host/atmel-mci.c
+@@ -1976,7 +1976,7 @@ static void atmci_read_data_pio(struct atmel_mci *host)
+ do {
+ value = atmci_readl(host, ATMCI_RDR);
+ if (likely(offset + 4 <= sg->length)) {
+- sg_pcopy_to_buffer(sg, 1, &value, sizeof(u32), offset);
++ sg_pcopy_from_buffer(sg, 1, &value, sizeof(u32), offset);
+
+ offset += 4;
+ nbytes += 4;
+@@ -1993,7 +1993,7 @@ static void atmci_read_data_pio(struct atmel_mci *host)
+ } else {
+ unsigned int remaining = sg->length - offset;
+
+- sg_pcopy_to_buffer(sg, 1, &value, remaining, offset);
++ sg_pcopy_from_buffer(sg, 1, &value, remaining, offset);
+ nbytes += remaining;
+
+ flush_dcache_page(sg_page(sg));
+@@ -2003,7 +2003,7 @@ static void atmci_read_data_pio(struct atmel_mci *host)
+ goto done;
+
+ offset = 4 - remaining;
+- sg_pcopy_to_buffer(sg, 1, (u8 *)&value + remaining,
++ sg_pcopy_from_buffer(sg, 1, (u8 *)&value + remaining,
+ offset, 0);
+ nbytes += offset;
+ }
+@@ -2042,7 +2042,7 @@ static void atmci_write_data_pio(struct atmel_mci *host)
+
+ do {
+ if (likely(offset + 4 <= sg->length)) {
+- sg_pcopy_from_buffer(sg, 1, &value, sizeof(u32), offset);
++ sg_pcopy_to_buffer(sg, 1, &value, sizeof(u32), offset);
+ atmci_writel(host, ATMCI_TDR, value);
+
+ offset += 4;
+@@ -2059,7 +2059,7 @@ static void atmci_write_data_pio(struct atmel_mci *host)
+ unsigned int remaining = sg->length - offset;
+
+ value = 0;
+- sg_pcopy_from_buffer(sg, 1, &value, remaining, offset);
++ sg_pcopy_to_buffer(sg, 1, &value, remaining, offset);
+ nbytes += remaining;
+
+ host->sg = sg = sg_next(sg);
+@@ -2070,7 +2070,7 @@ static void atmci_write_data_pio(struct atmel_mci *host)
+ }
+
+ offset = 4 - remaining;
+- sg_pcopy_from_buffer(sg, 1, (u8 *)&value + remaining,
++ sg_pcopy_to_buffer(sg, 1, (u8 *)&value + remaining,
+ offset, 0);
+ atmci_writel(host, ATMCI_TDR, value);
+ nbytes += offset;
+diff --git a/drivers/mtd/nand/raw/atmel/nand-controller.c b/drivers/mtd/nand/raw/atmel/nand-controller.c
+index 12f6753d47ae..e686fe73159e 100644
+--- a/drivers/mtd/nand/raw/atmel/nand-controller.c
++++ b/drivers/mtd/nand/raw/atmel/nand-controller.c
+@@ -129,6 +129,11 @@
+ #define DEFAULT_TIMEOUT_MS 1000
+ #define MIN_DMA_LEN 128
+
++static bool atmel_nand_avoid_dma __read_mostly;
++
++MODULE_PARM_DESC(avoiddma, "Avoid using DMA");
++module_param_named(avoiddma, atmel_nand_avoid_dma, bool, 0400);
++
+ enum atmel_nand_rb_type {
+ ATMEL_NAND_NO_RB,
+ ATMEL_NAND_NATIVE_RB,
+@@ -1977,7 +1982,7 @@ static int atmel_nand_controller_init(struct atmel_nand_controller *nc,
+ return ret;
+ }
+
+- if (nc->caps->has_dma) {
++ if (nc->caps->has_dma && !atmel_nand_avoid_dma) {
+ dma_cap_mask_t mask;
+
+ dma_cap_zero(mask);
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+index a8926e97935e..c5d387be6cfe 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+@@ -5705,7 +5705,7 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ if (t4_read_reg(adapter, LE_DB_CONFIG_A) & HASHEN_F) {
+ u32 hash_base, hash_reg;
+
+- if (chip <= CHELSIO_T5) {
++ if (chip_ver <= CHELSIO_T5) {
+ hash_reg = LE_DB_TID_HASHBASE_A;
+ hash_base = t4_read_reg(adapter, hash_reg);
+ adapter->tids.hash_base = hash_base / 4;
+diff --git a/drivers/net/ethernet/hisilicon/hns/hnae.h b/drivers/net/ethernet/hisilicon/hns/hnae.h
+index fa5b30f547f6..cad52bd331f7 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hnae.h
++++ b/drivers/net/ethernet/hisilicon/hns/hnae.h
+@@ -220,10 +220,10 @@ struct hnae_desc_cb {
+
+ /* priv data for the desc, e.g. skb when use with ip stack*/
+ void *priv;
+- u16 page_offset;
+- u16 reuse_flag;
++ u32 page_offset;
++ u32 length; /* length of the buffer */
+
+- u16 length; /* length of the buffer */
++ u16 reuse_flag;
+
+ /* desc type, used by the ring user to mark the type of the priv data */
+ u16 type;
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_enet.c b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
+index ef9ef703d13a..ef994a715f93 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
+@@ -530,7 +530,7 @@ static void hns_nic_reuse_page(struct sk_buff *skb, int i,
+ }
+
+ skb_add_rx_frag(skb, i, desc_cb->priv, desc_cb->page_offset + pull_len,
+- size - pull_len, truesize - pull_len);
++ size - pull_len, truesize);
+
+ /* avoid re-using remote pages,flag default unreuse */
+ if (unlikely(page_to_nid(desc_cb->priv) != numa_node_id()))
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
+index 3b083d5ae9ce..c84c09053640 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
+@@ -290,11 +290,11 @@ struct hns3_desc_cb {
+
+ /* priv data for the desc, e.g. skb when use with ip stack*/
+ void *priv;
+- u16 page_offset;
+- u16 reuse_flag;
+-
++ u32 page_offset;
+ u32 length; /* length of the buffer */
+
++ u16 reuse_flag;
++
+ /* desc type, used by the ring user to mark the type of the priv data */
+ u16 type;
+ };
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+index 40c0425b4023..11620e003a8e 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+@@ -201,7 +201,9 @@ static u32 hns3_lb_check_rx_ring(struct hns3_nic_priv *priv, u32 budget)
+ rx_group = &ring->tqp_vector->rx_group;
+ pre_rx_pkt = rx_group->total_packets;
+
++ preempt_disable();
+ hns3_clean_rx_ring(ring, budget, hns3_lb_check_skb_data);
++ preempt_enable();
+
+ rcv_good_pkt_total += (rx_group->total_packets - pre_rx_pkt);
+ rx_group->total_packets = pre_rx_pkt;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+index 262c125f8137..f027fceea548 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+@@ -1223,6 +1223,10 @@ static int hclge_mac_pause_setup_hw(struct hclge_dev *hdev)
+ tx_en = true;
+ rx_en = true;
+ break;
++ case HCLGE_FC_PFC:
++ tx_en = false;
++ rx_en = false;
++ break;
+ default:
+ tx_en = true;
+ rx_en = true;
+@@ -1240,8 +1244,9 @@ int hclge_pause_setup_hw(struct hclge_dev *hdev)
+ if (ret)
+ return ret;
+
+- if (hdev->tm_info.fc_mode != HCLGE_FC_PFC)
+- return hclge_mac_pause_setup_hw(hdev);
++ ret = hclge_mac_pause_setup_hw(hdev);
++ if (ret)
++ return ret;
+
+ /* Only DCB-supported dev supports qset back pressure and pfc cmd */
+ if (!hnae3_dev_dcb_supported(hdev))
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index a17872aab168..12aa1f1b99ef 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -648,8 +648,17 @@ static int hclgevf_unmap_ring_from_vector(
+ static int hclgevf_put_vector(struct hnae3_handle *handle, int vector)
+ {
+ struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
++ int vector_id;
+
+- hclgevf_free_vector(hdev, vector);
++ vector_id = hclgevf_get_vector_index(hdev, vector);
++ if (vector_id < 0) {
++ dev_err(&handle->pdev->dev,
++ "hclgevf_put_vector get vector index fail. ret =%d\n",
++ vector_id);
++ return vector_id;
++ }
++
++ hclgevf_free_vector(hdev, vector_id);
+
+ return 0;
+ }
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c
+index b598c06af8e0..cd246f906150 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c
+@@ -208,7 +208,8 @@ void hclgevf_mbx_handler(struct hclgevf_dev *hdev)
+
+ /* tail the async message in arq */
+ msg_q = hdev->arq.msg_q[hdev->arq.tail];
+- memcpy(&msg_q[0], req->msg, HCLGE_MBX_MAX_ARQ_MSG_SIZE);
++ memcpy(&msg_q[0], req->msg,
++ HCLGE_MBX_MAX_ARQ_MSG_SIZE * sizeof(u16));
+ hclge_mbx_tail_ptr_move_arq(hdev->arq);
+ hdev->arq.count++;
+
+diff --git a/drivers/net/ethernet/intel/e1000/e1000_ethtool.c b/drivers/net/ethernet/intel/e1000/e1000_ethtool.c
+index bdb3f8e65ed4..2569a168334c 100644
+--- a/drivers/net/ethernet/intel/e1000/e1000_ethtool.c
++++ b/drivers/net/ethernet/intel/e1000/e1000_ethtool.c
+@@ -624,14 +624,14 @@ static int e1000_set_ringparam(struct net_device *netdev,
+ adapter->tx_ring = tx_old;
+ e1000_free_all_rx_resources(adapter);
+ e1000_free_all_tx_resources(adapter);
+- kfree(tx_old);
+- kfree(rx_old);
+ adapter->rx_ring = rxdr;
+ adapter->tx_ring = txdr;
+ err = e1000_up(adapter);
+ if (err)
+ goto err_setup;
+ }
++ kfree(tx_old);
++ kfree(rx_old);
+
+ clear_bit(__E1000_RESETTING, &adapter->flags);
+ return 0;
+@@ -644,7 +644,8 @@ err_setup_rx:
+ err_alloc_rx:
+ kfree(txdr);
+ err_alloc_tx:
+- e1000_up(adapter);
++ if (netif_running(adapter->netdev))
++ e1000_up(adapter);
+ err_setup:
+ clear_bit(__E1000_RESETTING, &adapter->flags);
+ return err;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+index 6947a2a571cb..5d670f4ce5ac 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+@@ -1903,7 +1903,7 @@ static void i40e_get_stat_strings(struct net_device *netdev, u8 *data)
+ data += ETH_GSTRING_LEN;
+ }
+
+- WARN_ONCE(p - data != i40e_get_stats_count(netdev) * ETH_GSTRING_LEN,
++ WARN_ONCE(data - p != i40e_get_stats_count(netdev) * ETH_GSTRING_LEN,
+ "stat strings count mismatch!");
+ }
+
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index c944bd10b03d..5f105bc68c6a 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -5121,15 +5121,17 @@ static int i40e_vsi_configure_bw_alloc(struct i40e_vsi *vsi, u8 enabled_tc,
+ u8 *bw_share)
+ {
+ struct i40e_aqc_configure_vsi_tc_bw_data bw_data;
++ struct i40e_pf *pf = vsi->back;
+ i40e_status ret;
+ int i;
+
+- if (vsi->back->flags & I40E_FLAG_TC_MQPRIO)
++ /* There is no need to reset BW when mqprio mode is on. */
++ if (pf->flags & I40E_FLAG_TC_MQPRIO)
+ return 0;
+- if (!vsi->mqprio_qopt.qopt.hw) {
++ if (!vsi->mqprio_qopt.qopt.hw && !(pf->flags & I40E_FLAG_DCB_ENABLED)) {
+ ret = i40e_set_bw_limit(vsi, vsi->seid, 0);
+ if (ret)
+- dev_info(&vsi->back->pdev->dev,
++ dev_info(&pf->pdev->dev,
+ "Failed to reset tx rate for vsi->seid %u\n",
+ vsi->seid);
+ return ret;
+@@ -5138,12 +5140,11 @@ static int i40e_vsi_configure_bw_alloc(struct i40e_vsi *vsi, u8 enabled_tc,
+ for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++)
+ bw_data.tc_bw_credits[i] = bw_share[i];
+
+- ret = i40e_aq_config_vsi_tc_bw(&vsi->back->hw, vsi->seid, &bw_data,
+- NULL);
++ ret = i40e_aq_config_vsi_tc_bw(&pf->hw, vsi->seid, &bw_data, NULL);
+ if (ret) {
+- dev_info(&vsi->back->pdev->dev,
++ dev_info(&pf->pdev->dev,
+ "AQ command Config VSI BW allocation per TC failed = %d\n",
+- vsi->back->hw.aq.asq_last_status);
++ pf->hw.aq.asq_last_status);
+ return -EINVAL;
+ }
+
+diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
+index d8b5fff581e7..ed071ea75f20 100644
+--- a/drivers/net/ethernet/intel/ice/ice.h
++++ b/drivers/net/ethernet/intel/ice/ice.h
+@@ -89,6 +89,13 @@ extern const char ice_drv_ver[];
+ #define ice_for_each_rxq(vsi, i) \
+ for ((i) = 0; (i) < (vsi)->num_rxq; (i)++)
+
++/* Macros for each allocated tx/rx ring whether used or not in a VSI */
++#define ice_for_each_alloc_txq(vsi, i) \
++ for ((i) = 0; (i) < (vsi)->alloc_txq; (i)++)
++
++#define ice_for_each_alloc_rxq(vsi, i) \
++ for ((i) = 0; (i) < (vsi)->alloc_rxq; (i)++)
++
+ struct ice_tc_info {
+ u16 qoffset;
+ u16 qcount;
+diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+index 7541ec2270b3..a0614f472658 100644
+--- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
++++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+@@ -329,19 +329,19 @@ struct ice_aqc_vsi_props {
+ /* VLAN section */
+ __le16 pvid; /* VLANS include priority bits */
+ u8 pvlan_reserved[2];
+- u8 port_vlan_flags;
+-#define ICE_AQ_VSI_PVLAN_MODE_S 0
+-#define ICE_AQ_VSI_PVLAN_MODE_M (0x3 << ICE_AQ_VSI_PVLAN_MODE_S)
+-#define ICE_AQ_VSI_PVLAN_MODE_UNTAGGED 0x1
+-#define ICE_AQ_VSI_PVLAN_MODE_TAGGED 0x2
+-#define ICE_AQ_VSI_PVLAN_MODE_ALL 0x3
++ u8 vlan_flags;
++#define ICE_AQ_VSI_VLAN_MODE_S 0
++#define ICE_AQ_VSI_VLAN_MODE_M (0x3 << ICE_AQ_VSI_VLAN_MODE_S)
++#define ICE_AQ_VSI_VLAN_MODE_UNTAGGED 0x1
++#define ICE_AQ_VSI_VLAN_MODE_TAGGED 0x2
++#define ICE_AQ_VSI_VLAN_MODE_ALL 0x3
+ #define ICE_AQ_VSI_PVLAN_INSERT_PVID BIT(2)
+-#define ICE_AQ_VSI_PVLAN_EMOD_S 3
+-#define ICE_AQ_VSI_PVLAN_EMOD_M (0x3 << ICE_AQ_VSI_PVLAN_EMOD_S)
+-#define ICE_AQ_VSI_PVLAN_EMOD_STR_BOTH (0x0 << ICE_AQ_VSI_PVLAN_EMOD_S)
+-#define ICE_AQ_VSI_PVLAN_EMOD_STR_UP (0x1 << ICE_AQ_VSI_PVLAN_EMOD_S)
+-#define ICE_AQ_VSI_PVLAN_EMOD_STR (0x2 << ICE_AQ_VSI_PVLAN_EMOD_S)
+-#define ICE_AQ_VSI_PVLAN_EMOD_NOTHING (0x3 << ICE_AQ_VSI_PVLAN_EMOD_S)
++#define ICE_AQ_VSI_VLAN_EMOD_S 3
++#define ICE_AQ_VSI_VLAN_EMOD_M (0x3 << ICE_AQ_VSI_VLAN_EMOD_S)
++#define ICE_AQ_VSI_VLAN_EMOD_STR_BOTH (0x0 << ICE_AQ_VSI_VLAN_EMOD_S)
++#define ICE_AQ_VSI_VLAN_EMOD_STR_UP (0x1 << ICE_AQ_VSI_VLAN_EMOD_S)
++#define ICE_AQ_VSI_VLAN_EMOD_STR (0x2 << ICE_AQ_VSI_VLAN_EMOD_S)
++#define ICE_AQ_VSI_VLAN_EMOD_NOTHING (0x3 << ICE_AQ_VSI_VLAN_EMOD_S)
+ u8 pvlan_reserved2[3];
+ /* ingress egress up sections */
+ __le32 ingress_table; /* bitmap, 3 bits per up */
+@@ -594,6 +594,7 @@ struct ice_sw_rule_lg_act {
+ #define ICE_LG_ACT_GENERIC_OFFSET_M (0x7 << ICE_LG_ACT_GENERIC_OFFSET_S)
+ #define ICE_LG_ACT_GENERIC_PRIORITY_S 22
+ #define ICE_LG_ACT_GENERIC_PRIORITY_M (0x7 << ICE_LG_ACT_GENERIC_PRIORITY_S)
++#define ICE_LG_ACT_GENERIC_OFF_RX_DESC_PROF_IDX 7
+
+ /* Action = 7 - Set Stat count */
+ #define ICE_LG_ACT_STAT_COUNT 0x7
+diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
+index 71d032cc5fa7..ebd701ac9428 100644
+--- a/drivers/net/ethernet/intel/ice/ice_common.c
++++ b/drivers/net/ethernet/intel/ice/ice_common.c
+@@ -1483,7 +1483,7 @@ enum ice_status ice_get_link_status(struct ice_port_info *pi, bool *link_up)
+ struct ice_phy_info *phy_info;
+ enum ice_status status = 0;
+
+- if (!pi)
++ if (!pi || !link_up)
+ return ICE_ERR_PARAM;
+
+ phy_info = &pi->phy;
+@@ -1619,20 +1619,23 @@ __ice_aq_get_set_rss_lut(struct ice_hw *hw, u16 vsi_id, u8 lut_type, u8 *lut,
+ }
+
+ /* LUT size is only valid for Global and PF table types */
+- if (lut_size == ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128) {
+- flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128_FLAG <<
+- ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
+- ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
+- } else if (lut_size == ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512) {
++ switch (lut_size) {
++ case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128:
++ break;
++ case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512:
+ flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512_FLAG <<
+ ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
+ ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
+- } else if ((lut_size == ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K) &&
+- (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF)) {
+- flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K_FLAG <<
+- ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
+- ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
+- } else {
++ break;
++ case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K:
++ if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF) {
++ flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K_FLAG <<
++ ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
++ ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
++ break;
++ }
++ /* fall-through */
++ default:
+ status = ICE_ERR_PARAM;
+ goto ice_aq_get_set_rss_lut_exit;
+ }
+diff --git a/drivers/net/ethernet/intel/ice/ice_controlq.c b/drivers/net/ethernet/intel/ice/ice_controlq.c
+index 7c511f144ed6..62be72fdc8f3 100644
+--- a/drivers/net/ethernet/intel/ice/ice_controlq.c
++++ b/drivers/net/ethernet/intel/ice/ice_controlq.c
+@@ -597,10 +597,14 @@ static enum ice_status ice_init_check_adminq(struct ice_hw *hw)
+ return 0;
+
+ init_ctrlq_free_rq:
+- ice_shutdown_rq(hw, cq);
+- ice_shutdown_sq(hw, cq);
+- mutex_destroy(&cq->sq_lock);
+- mutex_destroy(&cq->rq_lock);
++ if (cq->rq.head) {
++ ice_shutdown_rq(hw, cq);
++ mutex_destroy(&cq->rq_lock);
++ }
++ if (cq->sq.head) {
++ ice_shutdown_sq(hw, cq);
++ mutex_destroy(&cq->sq_lock);
++ }
+ return status;
+ }
+
+@@ -706,10 +710,14 @@ static void ice_shutdown_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
+ return;
+ }
+
+- ice_shutdown_sq(hw, cq);
+- ice_shutdown_rq(hw, cq);
+- mutex_destroy(&cq->sq_lock);
+- mutex_destroy(&cq->rq_lock);
++ if (cq->sq.head) {
++ ice_shutdown_sq(hw, cq);
++ mutex_destroy(&cq->sq_lock);
++ }
++ if (cq->rq.head) {
++ ice_shutdown_rq(hw, cq);
++ mutex_destroy(&cq->rq_lock);
++ }
+ }
+
+ /**
+@@ -1057,8 +1065,11 @@ ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq,
+
+ clean_rq_elem_out:
+ /* Set pending if needed, unlock and return */
+- if (pending)
++ if (pending) {
++ /* re-read HW head to calculate actual pending messages */
++ ntu = (u16)(rd32(hw, cq->rq.head) & cq->rq.head_mask);
+ *pending = (u16)((ntc > ntu ? cq->rq.count : 0) + (ntu - ntc));
++ }
+ clean_rq_elem_err:
+ mutex_unlock(&cq->rq_lock);
+
+diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+index 1db304c01d10..c71a9b528d6d 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
++++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+@@ -26,7 +26,7 @@ static int ice_q_stats_len(struct net_device *netdev)
+ {
+ struct ice_netdev_priv *np = netdev_priv(netdev);
+
+- return ((np->vsi->num_txq + np->vsi->num_rxq) *
++ return ((np->vsi->alloc_txq + np->vsi->alloc_rxq) *
+ (sizeof(struct ice_q_stats) / sizeof(u64)));
+ }
+
+@@ -218,7 +218,7 @@ static void ice_get_strings(struct net_device *netdev, u32 stringset, u8 *data)
+ p += ETH_GSTRING_LEN;
+ }
+
+- ice_for_each_txq(vsi, i) {
++ ice_for_each_alloc_txq(vsi, i) {
+ snprintf(p, ETH_GSTRING_LEN,
+ "tx-queue-%u.tx_packets", i);
+ p += ETH_GSTRING_LEN;
+@@ -226,7 +226,7 @@ static void ice_get_strings(struct net_device *netdev, u32 stringset, u8 *data)
+ p += ETH_GSTRING_LEN;
+ }
+
+- ice_for_each_rxq(vsi, i) {
++ ice_for_each_alloc_rxq(vsi, i) {
+ snprintf(p, ETH_GSTRING_LEN,
+ "rx-queue-%u.rx_packets", i);
+ p += ETH_GSTRING_LEN;
+@@ -253,6 +253,24 @@ static int ice_get_sset_count(struct net_device *netdev, int sset)
+ {
+ switch (sset) {
+ case ETH_SS_STATS:
++ /* The number (and order) of strings reported *must* remain
++ * constant for a given netdevice. This function must not
++ * report a different number based on run time parameters
++ * (such as the number of queues in use, or the setting of
++ * a private ethtool flag). This is due to the nature of the
++ * ethtool stats API.
++ *
++ * User space programs such as ethtool must make 3 separate
++ * ioctl requests, one for size, one for the strings, and
++ * finally one for the stats. Since these cross into
++ * user space, changes to the number or size could result in
++ * undefined memory access or incorrect string<->value
++ * correlations for statistics.
++ *
++ * Even if it appears to be safe, changes to the size or
++ * order of strings will suffer from race conditions and are
++ * not safe.
++ */
+ return ICE_ALL_STATS_LEN(netdev);
+ default:
+ return -EOPNOTSUPP;
+@@ -280,18 +298,26 @@ ice_get_ethtool_stats(struct net_device *netdev,
+ /* populate per queue stats */
+ rcu_read_lock();
+
+- ice_for_each_txq(vsi, j) {
++ ice_for_each_alloc_txq(vsi, j) {
+ ring = READ_ONCE(vsi->tx_rings[j]);
+- if (!ring)
+- continue;
+- data[i++] = ring->stats.pkts;
+- data[i++] = ring->stats.bytes;
++ if (ring) {
++ data[i++] = ring->stats.pkts;
++ data[i++] = ring->stats.bytes;
++ } else {
++ data[i++] = 0;
++ data[i++] = 0;
++ }
+ }
+
+- ice_for_each_rxq(vsi, j) {
++ ice_for_each_alloc_rxq(vsi, j) {
+ ring = READ_ONCE(vsi->rx_rings[j]);
+- data[i++] = ring->stats.pkts;
+- data[i++] = ring->stats.bytes;
++ if (ring) {
++ data[i++] = ring->stats.pkts;
++ data[i++] = ring->stats.bytes;
++ } else {
++ data[i++] = 0;
++ data[i++] = 0;
++ }
+ }
+
+ rcu_read_unlock();
+@@ -519,7 +545,7 @@ ice_set_ringparam(struct net_device *netdev, struct ethtool_ringparam *ring)
+ goto done;
+ }
+
+- for (i = 0; i < vsi->num_txq; i++) {
++ for (i = 0; i < vsi->alloc_txq; i++) {
+ /* clone ring and setup updated count */
+ tx_rings[i] = *vsi->tx_rings[i];
+ tx_rings[i].count = new_tx_cnt;
+@@ -551,7 +577,7 @@ process_rx:
+ goto done;
+ }
+
+- for (i = 0; i < vsi->num_rxq; i++) {
++ for (i = 0; i < vsi->alloc_rxq; i++) {
+ /* clone ring and setup updated count */
+ rx_rings[i] = *vsi->rx_rings[i];
+ rx_rings[i].count = new_rx_cnt;
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 5299caf55a7f..27c9aa31b248 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -916,6 +916,21 @@ static int __ice_clean_ctrlq(struct ice_pf *pf, enum ice_ctl_q q_type)
+ return pending && (i == ICE_DFLT_IRQ_WORK);
+ }
+
++/**
++ * ice_ctrlq_pending - check if there is a difference between ntc and ntu
++ * @hw: pointer to hardware info
++ * @cq: control queue information
++ *
++ * returns true if there are pending messages in a queue, false if there aren't
++ */
++static bool ice_ctrlq_pending(struct ice_hw *hw, struct ice_ctl_q_info *cq)
++{
++ u16 ntu;
++
++ ntu = (u16)(rd32(hw, cq->rq.head) & cq->rq.head_mask);
++ return cq->rq.next_to_clean != ntu;
++}
++
+ /**
+ * ice_clean_adminq_subtask - clean the AdminQ rings
+ * @pf: board private structure
+@@ -923,7 +938,6 @@ static int __ice_clean_ctrlq(struct ice_pf *pf, enum ice_ctl_q q_type)
+ static void ice_clean_adminq_subtask(struct ice_pf *pf)
+ {
+ struct ice_hw *hw = &pf->hw;
+- u32 val;
+
+ if (!test_bit(__ICE_ADMINQ_EVENT_PENDING, pf->state))
+ return;
+@@ -933,9 +947,13 @@ static void ice_clean_adminq_subtask(struct ice_pf *pf)
+
+ clear_bit(__ICE_ADMINQ_EVENT_PENDING, pf->state);
+
+- /* re-enable Admin queue interrupt causes */
+- val = rd32(hw, PFINT_FW_CTL);
+- wr32(hw, PFINT_FW_CTL, (val | PFINT_FW_CTL_CAUSE_ENA_M));
++ /* There might be a situation where new messages arrive to a control
++ * queue between processing the last message and clearing the
++ * EVENT_PENDING bit. So before exiting, check queue head again (using
++ * ice_ctrlq_pending) and process new messages if any.
++ */
++ if (ice_ctrlq_pending(hw, &hw->adminq))
++ __ice_clean_ctrlq(pf, ICE_CTL_Q_ADMIN);
+
+ ice_flush(hw);
+ }
+@@ -1295,11 +1313,8 @@ static void ice_vsi_setup_q_map(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt)
+ qcount = numq_tc;
+ }
+
+- /* find higher power-of-2 of qcount */
+- pow = ilog2(qcount);
+-
+- if (!is_power_of_2(qcount))
+- pow++;
++ /* find the (rounded up) power-of-2 of qcount */
++ pow = order_base_2(qcount);
+
+ for (i = 0; i < ICE_MAX_TRAFFIC_CLASS; i++) {
+ if (!(vsi->tc_cfg.ena_tc & BIT(i))) {
+@@ -1352,14 +1367,15 @@ static void ice_set_dflt_vsi_ctx(struct ice_vsi_ctx *ctxt)
+ ctxt->info.sw_flags = ICE_AQ_VSI_SW_FLAG_SRC_PRUNE;
+ /* Traffic from VSI can be sent to LAN */
+ ctxt->info.sw_flags2 = ICE_AQ_VSI_SW_FLAG_LAN_ENA;
+- /* Allow all packets untagged/tagged */
+- ctxt->info.port_vlan_flags = ((ICE_AQ_VSI_PVLAN_MODE_ALL &
+- ICE_AQ_VSI_PVLAN_MODE_M) >>
+- ICE_AQ_VSI_PVLAN_MODE_S);
+- /* Show VLAN/UP from packets in Rx descriptors */
+- ctxt->info.port_vlan_flags |= ((ICE_AQ_VSI_PVLAN_EMOD_STR_BOTH &
+- ICE_AQ_VSI_PVLAN_EMOD_M) >>
+- ICE_AQ_VSI_PVLAN_EMOD_S);
++
++ /* By default bits 3 and 4 in vlan_flags are 0's which results in legacy
++ * behavior (show VLAN, DEI, and UP) in descriptor. Also, allow all
++ * packets untagged/tagged.
++ */
++ ctxt->info.vlan_flags = ((ICE_AQ_VSI_VLAN_MODE_ALL &
++ ICE_AQ_VSI_VLAN_MODE_M) >>
++ ICE_AQ_VSI_VLAN_MODE_S);
++
+ /* Have 1:1 UP mapping for both ingress/egress tables */
+ table |= ICE_UP_TABLE_TRANSLATE(0, 0);
+ table |= ICE_UP_TABLE_TRANSLATE(1, 1);
+@@ -2058,15 +2074,13 @@ static int ice_req_irq_msix_misc(struct ice_pf *pf)
+ skip_req_irq:
+ ice_ena_misc_vector(pf);
+
+- val = (pf->oicr_idx & PFINT_OICR_CTL_MSIX_INDX_M) |
+- (ICE_RX_ITR & PFINT_OICR_CTL_ITR_INDX_M) |
+- PFINT_OICR_CTL_CAUSE_ENA_M;
++ val = ((pf->oicr_idx & PFINT_OICR_CTL_MSIX_INDX_M) |
++ PFINT_OICR_CTL_CAUSE_ENA_M);
+ wr32(hw, PFINT_OICR_CTL, val);
+
+ /* This enables Admin queue Interrupt causes */
+- val = (pf->oicr_idx & PFINT_FW_CTL_MSIX_INDX_M) |
+- (ICE_RX_ITR & PFINT_FW_CTL_ITR_INDX_M) |
+- PFINT_FW_CTL_CAUSE_ENA_M;
++ val = ((pf->oicr_idx & PFINT_FW_CTL_MSIX_INDX_M) |
++ PFINT_FW_CTL_CAUSE_ENA_M);
+ wr32(hw, PFINT_FW_CTL, val);
+
+ itr_gran = hw->itr_gran_200;
+@@ -3246,8 +3260,10 @@ static void ice_clear_interrupt_scheme(struct ice_pf *pf)
+ if (test_bit(ICE_FLAG_MSIX_ENA, pf->flags))
+ ice_dis_msix(pf);
+
+- devm_kfree(&pf->pdev->dev, pf->irq_tracker);
+- pf->irq_tracker = NULL;
++ if (pf->irq_tracker) {
++ devm_kfree(&pf->pdev->dev, pf->irq_tracker);
++ pf->irq_tracker = NULL;
++ }
+ }
+
+ /**
+@@ -3720,10 +3736,10 @@ static int ice_vsi_manage_vlan_insertion(struct ice_vsi *vsi)
+ enum ice_status status;
+
+ /* Here we are configuring the VSI to let the driver add VLAN tags by
+- * setting port_vlan_flags to ICE_AQ_VSI_PVLAN_MODE_ALL. The actual VLAN
+- * tag insertion happens in the Tx hot path, in ice_tx_map.
++ * setting vlan_flags to ICE_AQ_VSI_VLAN_MODE_ALL. The actual VLAN tag
++ * insertion happens in the Tx hot path, in ice_tx_map.
+ */
+- ctxt.info.port_vlan_flags = ICE_AQ_VSI_PVLAN_MODE_ALL;
++ ctxt.info.vlan_flags = ICE_AQ_VSI_VLAN_MODE_ALL;
+
+ ctxt.info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_VLAN_VALID);
+ ctxt.vsi_num = vsi->vsi_num;
+@@ -3735,7 +3751,7 @@ static int ice_vsi_manage_vlan_insertion(struct ice_vsi *vsi)
+ return -EIO;
+ }
+
+- vsi->info.port_vlan_flags = ctxt.info.port_vlan_flags;
++ vsi->info.vlan_flags = ctxt.info.vlan_flags;
+ return 0;
+ }
+
+@@ -3757,12 +3773,15 @@ static int ice_vsi_manage_vlan_stripping(struct ice_vsi *vsi, bool ena)
+ */
+ if (ena) {
+ /* Strip VLAN tag from Rx packet and put it in the desc */
+- ctxt.info.port_vlan_flags = ICE_AQ_VSI_PVLAN_EMOD_STR_BOTH;
++ ctxt.info.vlan_flags = ICE_AQ_VSI_VLAN_EMOD_STR_BOTH;
+ } else {
+ /* Disable stripping. Leave tag in packet */
+- ctxt.info.port_vlan_flags = ICE_AQ_VSI_PVLAN_EMOD_NOTHING;
++ ctxt.info.vlan_flags = ICE_AQ_VSI_VLAN_EMOD_NOTHING;
+ }
+
++ /* Allow all packets untagged/tagged */
++ ctxt.info.vlan_flags |= ICE_AQ_VSI_VLAN_MODE_ALL;
++
+ ctxt.info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_VLAN_VALID);
+ ctxt.vsi_num = vsi->vsi_num;
+
+@@ -3773,7 +3792,7 @@ static int ice_vsi_manage_vlan_stripping(struct ice_vsi *vsi, bool ena)
+ return -EIO;
+ }
+
+- vsi->info.port_vlan_flags = ctxt.info.port_vlan_flags;
++ vsi->info.vlan_flags = ctxt.info.vlan_flags;
+ return 0;
+ }
+
+@@ -4098,11 +4117,12 @@ static int ice_vsi_cfg(struct ice_vsi *vsi)
+ {
+ int err;
+
+- ice_set_rx_mode(vsi->netdev);
+-
+- err = ice_restore_vlan(vsi);
+- if (err)
+- return err;
++ if (vsi->netdev) {
++ ice_set_rx_mode(vsi->netdev);
++ err = ice_restore_vlan(vsi);
++ if (err)
++ return err;
++ }
+
+ err = ice_vsi_cfg_txqs(vsi);
+ if (!err)
+@@ -4868,7 +4888,7 @@ int ice_down(struct ice_vsi *vsi)
+ */
+ static int ice_vsi_setup_tx_rings(struct ice_vsi *vsi)
+ {
+- int i, err;
++ int i, err = 0;
+
+ if (!vsi->num_txq) {
+ dev_err(&vsi->back->pdev->dev, "VSI %d has 0 Tx queues\n",
+@@ -4893,7 +4913,7 @@ static int ice_vsi_setup_tx_rings(struct ice_vsi *vsi)
+ */
+ static int ice_vsi_setup_rx_rings(struct ice_vsi *vsi)
+ {
+- int i, err;
++ int i, err = 0;
+
+ if (!vsi->num_rxq) {
+ dev_err(&vsi->back->pdev->dev, "VSI %d has 0 Rx queues\n",
+diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c
+index 723d15f1e90b..6b7ec2ae5ad6 100644
+--- a/drivers/net/ethernet/intel/ice/ice_switch.c
++++ b/drivers/net/ethernet/intel/ice/ice_switch.c
+@@ -645,14 +645,14 @@ ice_add_marker_act(struct ice_hw *hw, struct ice_fltr_mgmt_list_entry *m_ent,
+ act |= (1 << ICE_LG_ACT_GENERIC_VALUE_S) & ICE_LG_ACT_GENERIC_VALUE_M;
+ lg_act->pdata.lg_act.act[1] = cpu_to_le32(act);
+
+- act = (7 << ICE_LG_ACT_GENERIC_OFFSET_S) & ICE_LG_ACT_GENERIC_VALUE_M;
++ act = (ICE_LG_ACT_GENERIC_OFF_RX_DESC_PROF_IDX <<
++ ICE_LG_ACT_GENERIC_OFFSET_S) & ICE_LG_ACT_GENERIC_OFFSET_M;
+
+ /* Third action Marker value */
+ act |= ICE_LG_ACT_GENERIC;
+ act |= (sw_marker << ICE_LG_ACT_GENERIC_VALUE_S) &
+ ICE_LG_ACT_GENERIC_VALUE_M;
+
+- act |= (0 << ICE_LG_ACT_GENERIC_OFFSET_S) & ICE_LG_ACT_GENERIC_VALUE_M;
+ lg_act->pdata.lg_act.act[2] = cpu_to_le32(act);
+
+ /* call the fill switch rule to fill the lookup tx rx structure */
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+index 6f59933cdff7..2bc4fe475f28 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+@@ -688,8 +688,13 @@ static int ixgbe_set_vf_macvlan(struct ixgbe_adapter *adapter,
+ static inline void ixgbe_vf_reset_event(struct ixgbe_adapter *adapter, u32 vf)
+ {
+ struct ixgbe_hw *hw = &adapter->hw;
++ struct ixgbe_ring_feature *vmdq = &adapter->ring_feature[RING_F_VMDQ];
+ struct vf_data_storage *vfinfo = &adapter->vfinfo[vf];
++ u32 q_per_pool = __ALIGN_MASK(1, ~vmdq->mask);
+ u8 num_tcs = adapter->hw_tcs;
++ u32 reg_val;
++ u32 queue;
++ u32 word;
+
+ /* remove VLAN filters beloning to this VF */
+ ixgbe_clear_vf_vlans(adapter, vf);
+@@ -726,6 +731,27 @@ static inline void ixgbe_vf_reset_event(struct ixgbe_adapter *adapter, u32 vf)
+
+ /* reset VF api back to unknown */
+ adapter->vfinfo[vf].vf_api = ixgbe_mbox_api_10;
++
++ /* Restart each queue for given VF */
++ for (queue = 0; queue < q_per_pool; queue++) {
++ unsigned int reg_idx = (vf * q_per_pool) + queue;
++
++ reg_val = IXGBE_READ_REG(hw, IXGBE_PVFTXDCTL(reg_idx));
++
++ /* Re-enabling only configured queues */
++ if (reg_val) {
++ reg_val |= IXGBE_TXDCTL_ENABLE;
++ IXGBE_WRITE_REG(hw, IXGBE_PVFTXDCTL(reg_idx), reg_val);
++ reg_val &= ~IXGBE_TXDCTL_ENABLE;
++ IXGBE_WRITE_REG(hw, IXGBE_PVFTXDCTL(reg_idx), reg_val);
++ }
++ }
++
++ /* Clear VF's mailbox memory */
++ for (word = 0; word < IXGBE_VFMAILBOX_SIZE; word++)
++ IXGBE_WRITE_REG_ARRAY(hw, IXGBE_PFMBMEM(vf), word, 0);
++
++ IXGBE_WRITE_FLUSH(hw);
+ }
+
+ static int ixgbe_set_vf_mac(struct ixgbe_adapter *adapter,
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h
+index 44cfb2021145..41bcbb337e83 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h
+@@ -2518,6 +2518,7 @@ enum {
+ /* Translated register #defines */
+ #define IXGBE_PVFTDH(P) (0x06010 + (0x40 * (P)))
+ #define IXGBE_PVFTDT(P) (0x06018 + (0x40 * (P)))
++#define IXGBE_PVFTXDCTL(P) (0x06028 + (0x40 * (P)))
+ #define IXGBE_PVFTDWBAL(P) (0x06038 + (0x40 * (P)))
+ #define IXGBE_PVFTDWBAH(P) (0x0603C + (0x40 * (P)))
+
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_mcp.c b/drivers/net/ethernet/qlogic/qed/qed_mcp.c
+index cdd645024a32..ad6826b5f758 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_mcp.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_mcp.c
+@@ -48,7 +48,7 @@
+ #include "qed_reg_addr.h"
+ #include "qed_sriov.h"
+
+-#define CHIP_MCP_RESP_ITER_US 10
++#define QED_MCP_RESP_ITER_US 10
+
+ #define QED_DRV_MB_MAX_RETRIES (500 * 1000) /* Account for 5 sec */
+ #define QED_MCP_RESET_RETRIES (50 * 1000) /* Account for 500 msec */
+@@ -183,18 +183,57 @@ int qed_mcp_free(struct qed_hwfn *p_hwfn)
+ return 0;
+ }
+
++/* Maximum of 1 sec to wait for the SHMEM ready indication */
++#define QED_MCP_SHMEM_RDY_MAX_RETRIES 20
++#define QED_MCP_SHMEM_RDY_ITER_MS 50
++
+ static int qed_load_mcp_offsets(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+ {
+ struct qed_mcp_info *p_info = p_hwfn->mcp_info;
++ u8 cnt = QED_MCP_SHMEM_RDY_MAX_RETRIES;
++ u8 msec = QED_MCP_SHMEM_RDY_ITER_MS;
+ u32 drv_mb_offsize, mfw_mb_offsize;
+ u32 mcp_pf_id = MCP_PF_ID(p_hwfn);
+
+ p_info->public_base = qed_rd(p_hwfn, p_ptt, MISC_REG_SHARED_MEM_ADDR);
+- if (!p_info->public_base)
+- return 0;
++ if (!p_info->public_base) {
++ DP_NOTICE(p_hwfn,
++ "The address of the MCP scratch-pad is not configured\n");
++ return -EINVAL;
++ }
+
+ p_info->public_base |= GRCBASE_MCP;
+
++ /* Get the MFW MB address and number of supported messages */
++ mfw_mb_offsize = qed_rd(p_hwfn, p_ptt,
++ SECTION_OFFSIZE_ADDR(p_info->public_base,
++ PUBLIC_MFW_MB));
++ p_info->mfw_mb_addr = SECTION_ADDR(mfw_mb_offsize, mcp_pf_id);
++ p_info->mfw_mb_length = (u16)qed_rd(p_hwfn, p_ptt,
++ p_info->mfw_mb_addr +
++ offsetof(struct public_mfw_mb,
++ sup_msgs));
++
++ /* The driver can notify that there was an MCP reset, and might read the
++ * SHMEM values before the MFW has completed initializing them.
++ * To avoid this, the "sup_msgs" field in the MFW mailbox is used as a
++ * data ready indication.
++ */
++ while (!p_info->mfw_mb_length && --cnt) {
++ msleep(msec);
++ p_info->mfw_mb_length =
++ (u16)qed_rd(p_hwfn, p_ptt,
++ p_info->mfw_mb_addr +
++ offsetof(struct public_mfw_mb, sup_msgs));
++ }
++
++ if (!cnt) {
++ DP_NOTICE(p_hwfn,
++ "Failed to get the SHMEM ready notification after %d msec\n",
++ QED_MCP_SHMEM_RDY_MAX_RETRIES * msec);
++ return -EBUSY;
++ }
++
+ /* Calculate the driver and MFW mailbox address */
+ drv_mb_offsize = qed_rd(p_hwfn, p_ptt,
+ SECTION_OFFSIZE_ADDR(p_info->public_base,
+@@ -204,13 +243,6 @@ static int qed_load_mcp_offsets(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+ "drv_mb_offsiz = 0x%x, drv_mb_addr = 0x%x mcp_pf_id = 0x%x\n",
+ drv_mb_offsize, p_info->drv_mb_addr, mcp_pf_id);
+
+- /* Set the MFW MB address */
+- mfw_mb_offsize = qed_rd(p_hwfn, p_ptt,
+- SECTION_OFFSIZE_ADDR(p_info->public_base,
+- PUBLIC_MFW_MB));
+- p_info->mfw_mb_addr = SECTION_ADDR(mfw_mb_offsize, mcp_pf_id);
+- p_info->mfw_mb_length = (u16)qed_rd(p_hwfn, p_ptt, p_info->mfw_mb_addr);
+-
+ /* Get the current driver mailbox sequence before sending
+ * the first command
+ */
+@@ -285,9 +317,15 @@ static void qed_mcp_reread_offsets(struct qed_hwfn *p_hwfn,
+
+ int qed_mcp_reset(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+ {
+- u32 org_mcp_reset_seq, seq, delay = CHIP_MCP_RESP_ITER_US, cnt = 0;
++ u32 org_mcp_reset_seq, seq, delay = QED_MCP_RESP_ITER_US, cnt = 0;
+ int rc = 0;
+
++ if (p_hwfn->mcp_info->b_block_cmd) {
++ DP_NOTICE(p_hwfn,
++ "The MFW is not responsive. Avoid sending MCP_RESET mailbox command.\n");
++ return -EBUSY;
++ }
++
+ /* Ensure that only a single thread is accessing the mailbox */
+ spin_lock_bh(&p_hwfn->mcp_info->cmd_lock);
+
+@@ -413,14 +451,41 @@ static void __qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ (p_mb_params->cmd | seq_num), p_mb_params->param);
+ }
+
++static void qed_mcp_cmd_set_blocking(struct qed_hwfn *p_hwfn, bool block_cmd)
++{
++ p_hwfn->mcp_info->b_block_cmd = block_cmd;
++
++ DP_INFO(p_hwfn, "%s sending of mailbox commands to the MFW\n",
++ block_cmd ? "Block" : "Unblock");
++}
++
++static void qed_mcp_print_cpu_info(struct qed_hwfn *p_hwfn,
++ struct qed_ptt *p_ptt)
++{
++ u32 cpu_mode, cpu_state, cpu_pc_0, cpu_pc_1, cpu_pc_2;
++ u32 delay = QED_MCP_RESP_ITER_US;
++
++ cpu_mode = qed_rd(p_hwfn, p_ptt, MCP_REG_CPU_MODE);
++ cpu_state = qed_rd(p_hwfn, p_ptt, MCP_REG_CPU_STATE);
++ cpu_pc_0 = qed_rd(p_hwfn, p_ptt, MCP_REG_CPU_PROGRAM_COUNTER);
++ udelay(delay);
++ cpu_pc_1 = qed_rd(p_hwfn, p_ptt, MCP_REG_CPU_PROGRAM_COUNTER);
++ udelay(delay);
++ cpu_pc_2 = qed_rd(p_hwfn, p_ptt, MCP_REG_CPU_PROGRAM_COUNTER);
++
++ DP_NOTICE(p_hwfn,
++ "MCP CPU info: mode 0x%08x, state 0x%08x, pc {0x%08x, 0x%08x, 0x%08x}\n",
++ cpu_mode, cpu_state, cpu_pc_0, cpu_pc_1, cpu_pc_2);
++}
++
+ static int
+ _qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ struct qed_mcp_mb_params *p_mb_params,
+- u32 max_retries, u32 delay)
++ u32 max_retries, u32 usecs)
+ {
++ u32 cnt = 0, msecs = DIV_ROUND_UP(usecs, 1000);
+ struct qed_mcp_cmd_elem *p_cmd_elem;
+- u32 cnt = 0;
+ u16 seq_num;
+ int rc = 0;
+
+@@ -443,7 +508,11 @@ _qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ goto err;
+
+ spin_unlock_bh(&p_hwfn->mcp_info->cmd_lock);
+- udelay(delay);
++
++ if (QED_MB_FLAGS_IS_SET(p_mb_params, CAN_SLEEP))
++ msleep(msecs);
++ else
++ udelay(usecs);
+ } while (++cnt < max_retries);
+
+ if (cnt >= max_retries) {
+@@ -472,7 +541,11 @@ _qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ * The spinlock stays locked until the list element is removed.
+ */
+
+- udelay(delay);
++ if (QED_MB_FLAGS_IS_SET(p_mb_params, CAN_SLEEP))
++ msleep(msecs);
++ else
++ udelay(usecs);
++
+ spin_lock_bh(&p_hwfn->mcp_info->cmd_lock);
+
+ if (p_cmd_elem->b_is_completed)
+@@ -491,11 +564,15 @@ _qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ DP_NOTICE(p_hwfn,
+ "The MFW failed to respond to command 0x%08x [param 0x%08x].\n",
+ p_mb_params->cmd, p_mb_params->param);
++ qed_mcp_print_cpu_info(p_hwfn, p_ptt);
+
+ spin_lock_bh(&p_hwfn->mcp_info->cmd_lock);
+ qed_mcp_cmd_del_elem(p_hwfn, p_cmd_elem);
+ spin_unlock_bh(&p_hwfn->mcp_info->cmd_lock);
+
++ if (!QED_MB_FLAGS_IS_SET(p_mb_params, AVOID_BLOCK))
++ qed_mcp_cmd_set_blocking(p_hwfn, true);
++
+ return -EAGAIN;
+ }
+
+@@ -507,7 +584,7 @@ _qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ "MFW mailbox: response 0x%08x param 0x%08x [after %d.%03d ms]\n",
+ p_mb_params->mcp_resp,
+ p_mb_params->mcp_param,
+- (cnt * delay) / 1000, (cnt * delay) % 1000);
++ (cnt * usecs) / 1000, (cnt * usecs) % 1000);
+
+ /* Clear the sequence number from the MFW response */
+ p_mb_params->mcp_resp &= FW_MSG_CODE_MASK;
+@@ -525,7 +602,7 @@ static int qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ {
+ size_t union_data_size = sizeof(union drv_union_data);
+ u32 max_retries = QED_DRV_MB_MAX_RETRIES;
+- u32 delay = CHIP_MCP_RESP_ITER_US;
++ u32 usecs = QED_MCP_RESP_ITER_US;
+
+ /* MCP not initialized */
+ if (!qed_mcp_is_init(p_hwfn)) {
+@@ -533,6 +610,13 @@ static int qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ return -EBUSY;
+ }
+
++ if (p_hwfn->mcp_info->b_block_cmd) {
++ DP_NOTICE(p_hwfn,
++ "The MFW is not responsive. Avoid sending mailbox command 0x%08x [param 0x%08x].\n",
++ p_mb_params->cmd, p_mb_params->param);
++ return -EBUSY;
++ }
++
+ if (p_mb_params->data_src_size > union_data_size ||
+ p_mb_params->data_dst_size > union_data_size) {
+ DP_ERR(p_hwfn,
+@@ -542,8 +626,13 @@ static int qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ return -EINVAL;
+ }
+
++ if (QED_MB_FLAGS_IS_SET(p_mb_params, CAN_SLEEP)) {
++ max_retries = DIV_ROUND_UP(max_retries, 1000);
++ usecs *= 1000;
++ }
++
+ return _qed_mcp_cmd_and_union(p_hwfn, p_ptt, p_mb_params, max_retries,
+- delay);
++ usecs);
+ }
+
+ int qed_mcp_cmd(struct qed_hwfn *p_hwfn,
+@@ -760,6 +849,7 @@ __qed_mcp_load_req(struct qed_hwfn *p_hwfn,
+ mb_params.data_src_size = sizeof(load_req);
+ mb_params.p_data_dst = &load_rsp;
+ mb_params.data_dst_size = sizeof(load_rsp);
++ mb_params.flags = QED_MB_FLAG_CAN_SLEEP | QED_MB_FLAG_AVOID_BLOCK;
+
+ DP_VERBOSE(p_hwfn, QED_MSG_SP,
+ "Load Request: param 0x%08x [init_hw %d, drv_type %d, hsi_ver %d, pda 0x%04x]\n",
+@@ -981,7 +1071,8 @@ int qed_mcp_load_req(struct qed_hwfn *p_hwfn,
+
+ int qed_mcp_unload_req(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+ {
+- u32 wol_param, mcp_resp, mcp_param;
++ struct qed_mcp_mb_params mb_params;
++ u32 wol_param;
+
+ switch (p_hwfn->cdev->wol_config) {
+ case QED_OV_WOL_DISABLED:
+@@ -999,8 +1090,12 @@ int qed_mcp_unload_req(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+ wol_param = DRV_MB_PARAM_UNLOAD_WOL_MCP;
+ }
+
+- return qed_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_UNLOAD_REQ, wol_param,
+- &mcp_resp, &mcp_param);
++ memset(&mb_params, 0, sizeof(mb_params));
++ mb_params.cmd = DRV_MSG_CODE_UNLOAD_REQ;
++ mb_params.param = wol_param;
++ mb_params.flags = QED_MB_FLAG_CAN_SLEEP | QED_MB_FLAG_AVOID_BLOCK;
++
++ return qed_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+ }
+
+ int qed_mcp_unload_done(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+@@ -2075,31 +2170,65 @@ qed_mcp_send_drv_version(struct qed_hwfn *p_hwfn,
+ return rc;
+ }
+
++/* A maximal 100 msec waiting time for the MCP to halt */
++#define QED_MCP_HALT_SLEEP_MS 10
++#define QED_MCP_HALT_MAX_RETRIES 10
++
+ int qed_mcp_halt(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+ {
+- u32 resp = 0, param = 0;
++ u32 resp = 0, param = 0, cpu_state, cnt = 0;
+ int rc;
+
+ rc = qed_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MCP_HALT, 0, &resp,
+ ¶m);
+- if (rc)
++ if (rc) {
+ DP_ERR(p_hwfn, "MCP response failure, aborting\n");
++ return rc;
++ }
+
+- return rc;
++ do {
++ msleep(QED_MCP_HALT_SLEEP_MS);
++ cpu_state = qed_rd(p_hwfn, p_ptt, MCP_REG_CPU_STATE);
++ if (cpu_state & MCP_REG_CPU_STATE_SOFT_HALTED)
++ break;
++ } while (++cnt < QED_MCP_HALT_MAX_RETRIES);
++
++ if (cnt == QED_MCP_HALT_MAX_RETRIES) {
++ DP_NOTICE(p_hwfn,
++ "Failed to halt the MCP [CPU_MODE = 0x%08x, CPU_STATE = 0x%08x]\n",
++ qed_rd(p_hwfn, p_ptt, MCP_REG_CPU_MODE), cpu_state);
++ return -EBUSY;
++ }
++
++ qed_mcp_cmd_set_blocking(p_hwfn, true);
++
++ return 0;
+ }
+
++#define QED_MCP_RESUME_SLEEP_MS 10
++
+ int qed_mcp_resume(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+ {
+- u32 value, cpu_mode;
++ u32 cpu_mode, cpu_state;
+
+ qed_wr(p_hwfn, p_ptt, MCP_REG_CPU_STATE, 0xffffffff);
+
+- value = qed_rd(p_hwfn, p_ptt, MCP_REG_CPU_MODE);
+- value &= ~MCP_REG_CPU_MODE_SOFT_HALT;
+- qed_wr(p_hwfn, p_ptt, MCP_REG_CPU_MODE, value);
+ cpu_mode = qed_rd(p_hwfn, p_ptt, MCP_REG_CPU_MODE);
++ cpu_mode &= ~MCP_REG_CPU_MODE_SOFT_HALT;
++ qed_wr(p_hwfn, p_ptt, MCP_REG_CPU_MODE, cpu_mode);
++ msleep(QED_MCP_RESUME_SLEEP_MS);
++ cpu_state = qed_rd(p_hwfn, p_ptt, MCP_REG_CPU_STATE);
+
+- return (cpu_mode & MCP_REG_CPU_MODE_SOFT_HALT) ? -EAGAIN : 0;
++ if (cpu_state & MCP_REG_CPU_STATE_SOFT_HALTED) {
++ DP_NOTICE(p_hwfn,
++ "Failed to resume the MCP [CPU_MODE = 0x%08x, CPU_STATE = 0x%08x]\n",
++ cpu_mode, cpu_state);
++ return -EBUSY;
++ }
++
++ qed_mcp_cmd_set_blocking(p_hwfn, false);
++
++ return 0;
+ }
+
+ int qed_mcp_ov_update_current_config(struct qed_hwfn *p_hwfn,
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_mcp.h b/drivers/net/ethernet/qlogic/qed/qed_mcp.h
+index 632a838f1fe3..ce2e617d2cab 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_mcp.h
++++ b/drivers/net/ethernet/qlogic/qed/qed_mcp.h
+@@ -635,11 +635,14 @@ struct qed_mcp_info {
+ */
+ spinlock_t cmd_lock;
+
++ /* Flag to indicate whether sending a MFW mailbox command is blocked */
++ bool b_block_cmd;
++
+ /* Spinlock used for syncing SW link-changes and link-changes
+ * originating from attention context.
+ */
+ spinlock_t link_lock;
+- bool block_mb_sending;
++
+ u32 public_base;
+ u32 drv_mb_addr;
+ u32 mfw_mb_addr;
+@@ -660,14 +663,20 @@ struct qed_mcp_info {
+ };
+
+ struct qed_mcp_mb_params {
+- u32 cmd;
+- u32 param;
+- void *p_data_src;
+- u8 data_src_size;
+- void *p_data_dst;
+- u8 data_dst_size;
+- u32 mcp_resp;
+- u32 mcp_param;
++ u32 cmd;
++ u32 param;
++ void *p_data_src;
++ void *p_data_dst;
++ u8 data_src_size;
++ u8 data_dst_size;
++ u32 mcp_resp;
++ u32 mcp_param;
++ u32 flags;
++#define QED_MB_FLAG_CAN_SLEEP (0x1 << 0)
++#define QED_MB_FLAG_AVOID_BLOCK (0x1 << 1)
++#define QED_MB_FLAGS_IS_SET(params, flag) \
++ ({ typeof(params) __params = (params); \
++ (__params && (__params->flags & QED_MB_FLAG_ ## flag)); })
+ };
+
+ struct qed_drv_tlv_hdr {
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h b/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h
+index d8ad2dcad8d5..f736f70956fd 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h
++++ b/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h
+@@ -562,8 +562,10 @@
+ 0
+ #define MCP_REG_CPU_STATE \
+ 0xe05004UL
++#define MCP_REG_CPU_STATE_SOFT_HALTED (0x1UL << 10)
+ #define MCP_REG_CPU_EVENT_MASK \
+ 0xe05008UL
++#define MCP_REG_CPU_PROGRAM_COUNTER 0xe0501cUL
+ #define PGLUE_B_REG_PF_BAR0_SIZE \
+ 0x2aae60UL
+ #define PGLUE_B_REG_PF_BAR1_SIZE \
+diff --git a/drivers/net/phy/xilinx_gmii2rgmii.c b/drivers/net/phy/xilinx_gmii2rgmii.c
+index 2e5150b0b8d5..7a14e8170e82 100644
+--- a/drivers/net/phy/xilinx_gmii2rgmii.c
++++ b/drivers/net/phy/xilinx_gmii2rgmii.c
+@@ -40,8 +40,11 @@ static int xgmiitorgmii_read_status(struct phy_device *phydev)
+ {
+ struct gmii2rgmii *priv = phydev->priv;
+ u16 val = 0;
++ int err;
+
+- priv->phy_drv->read_status(phydev);
++ err = priv->phy_drv->read_status(phydev);
++ if (err < 0)
++ return err;
+
+ val = mdiobus_read(phydev->mdio.bus, priv->addr, XILINX_GMII2RGMII_REG);
+ val &= ~XILINX_GMII2RGMII_SPEED_MASK;
+@@ -81,6 +84,11 @@ static int xgmiitorgmii_probe(struct mdio_device *mdiodev)
+ return -EPROBE_DEFER;
+ }
+
++ if (!priv->phy_dev->drv) {
++ dev_info(dev, "Attached phy not ready\n");
++ return -EPROBE_DEFER;
++ }
++
+ priv->addr = mdiodev->addr;
+ priv->phy_drv = priv->phy_dev->drv;
+ memcpy(&priv->conv_phy_drv, priv->phy_dev->drv,
+diff --git a/drivers/net/wireless/ath/ath10k/ce.c b/drivers/net/wireless/ath/ath10k/ce.c
+index 3b96a43fbda4..18c709c484e7 100644
+--- a/drivers/net/wireless/ath/ath10k/ce.c
++++ b/drivers/net/wireless/ath/ath10k/ce.c
+@@ -1512,7 +1512,7 @@ ath10k_ce_alloc_src_ring_64(struct ath10k *ar, unsigned int ce_id,
+ ret = ath10k_ce_alloc_shadow_base(ar, src_ring, nentries);
+ if (ret) {
+ dma_free_coherent(ar->dev,
+- (nentries * sizeof(struct ce_desc) +
++ (nentries * sizeof(struct ce_desc_64) +
+ CE_DESC_RING_ALIGN),
+ src_ring->base_addr_owner_space_unaligned,
+ base_addr);
+diff --git a/drivers/net/wireless/ath/ath10k/htt_rx.c b/drivers/net/wireless/ath/ath10k/htt_rx.c
+index c72d8af122a2..4d1cd90d6d27 100644
+--- a/drivers/net/wireless/ath/ath10k/htt_rx.c
++++ b/drivers/net/wireless/ath/ath10k/htt_rx.c
+@@ -268,11 +268,12 @@ int ath10k_htt_rx_ring_refill(struct ath10k *ar)
+ spin_lock_bh(&htt->rx_ring.lock);
+ ret = ath10k_htt_rx_ring_fill_n(htt, (htt->rx_ring.fill_level -
+ htt->rx_ring.fill_cnt));
+- spin_unlock_bh(&htt->rx_ring.lock);
+
+ if (ret)
+ ath10k_htt_rx_ring_free(htt);
+
++ spin_unlock_bh(&htt->rx_ring.lock);
++
+ return ret;
+ }
+
+@@ -284,7 +285,9 @@ void ath10k_htt_rx_free(struct ath10k_htt *htt)
+ skb_queue_purge(&htt->rx_in_ord_compl_q);
+ skb_queue_purge(&htt->tx_fetch_ind_q);
+
++ spin_lock_bh(&htt->rx_ring.lock);
+ ath10k_htt_rx_ring_free(htt);
++ spin_unlock_bh(&htt->rx_ring.lock);
+
+ dma_free_coherent(htt->ar->dev,
+ ath10k_htt_get_rx_ring_size(htt),
+@@ -1089,7 +1092,7 @@ static void ath10k_htt_rx_h_queue_msdu(struct ath10k *ar,
+ status = IEEE80211_SKB_RXCB(skb);
+ *status = *rx_status;
+
+- __skb_queue_tail(&ar->htt.rx_msdus_q, skb);
++ skb_queue_tail(&ar->htt.rx_msdus_q, skb);
+ }
+
+ static void ath10k_process_rx(struct ath10k *ar, struct sk_buff *skb)
+@@ -2810,7 +2813,7 @@ bool ath10k_htt_t2h_msg_handler(struct ath10k *ar, struct sk_buff *skb)
+ break;
+ }
+ case HTT_T2H_MSG_TYPE_RX_IN_ORD_PADDR_IND: {
+- __skb_queue_tail(&htt->rx_in_ord_compl_q, skb);
++ skb_queue_tail(&htt->rx_in_ord_compl_q, skb);
+ return false;
+ }
+ case HTT_T2H_MSG_TYPE_TX_CREDIT_UPDATE_IND:
+@@ -2874,7 +2877,7 @@ static int ath10k_htt_rx_deliver_msdu(struct ath10k *ar, int quota, int budget)
+ if (skb_queue_empty(&ar->htt.rx_msdus_q))
+ break;
+
+- skb = __skb_dequeue(&ar->htt.rx_msdus_q);
++ skb = skb_dequeue(&ar->htt.rx_msdus_q);
+ if (!skb)
+ break;
+ ath10k_process_rx(ar, skb);
+@@ -2905,7 +2908,7 @@ int ath10k_htt_txrx_compl_task(struct ath10k *ar, int budget)
+ goto exit;
+ }
+
+- while ((skb = __skb_dequeue(&htt->rx_in_ord_compl_q))) {
++ while ((skb = skb_dequeue(&htt->rx_in_ord_compl_q))) {
+ spin_lock_bh(&htt->rx_ring.lock);
+ ret = ath10k_htt_rx_in_ord_ind(ar, skb);
+ spin_unlock_bh(&htt->rx_ring.lock);
+diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
+index 747c6951b5c1..e0b9f7d0dfd3 100644
+--- a/drivers/net/wireless/ath/ath10k/mac.c
++++ b/drivers/net/wireless/ath/ath10k/mac.c
+@@ -4054,6 +4054,7 @@ void ath10k_mac_tx_push_pending(struct ath10k *ar)
+ rcu_read_unlock();
+ spin_unlock_bh(&ar->txqs_lock);
+ }
++EXPORT_SYMBOL(ath10k_mac_tx_push_pending);
+
+ /************/
+ /* Scanning */
+diff --git a/drivers/net/wireless/ath/ath10k/sdio.c b/drivers/net/wireless/ath/ath10k/sdio.c
+index d612ce8c9cff..299db8b1c9ba 100644
+--- a/drivers/net/wireless/ath/ath10k/sdio.c
++++ b/drivers/net/wireless/ath/ath10k/sdio.c
+@@ -30,6 +30,7 @@
+ #include "debug.h"
+ #include "hif.h"
+ #include "htc.h"
++#include "mac.h"
+ #include "targaddrs.h"
+ #include "trace.h"
+ #include "sdio.h"
+@@ -396,6 +397,7 @@ static int ath10k_sdio_mbox_rx_process_packet(struct ath10k *ar,
+ int ret;
+
+ payload_len = le16_to_cpu(htc_hdr->len);
++ skb->len = payload_len + sizeof(struct ath10k_htc_hdr);
+
+ if (trailer_present) {
+ trailer = skb->data + sizeof(*htc_hdr) +
+@@ -434,12 +436,14 @@ static int ath10k_sdio_mbox_rx_process_packets(struct ath10k *ar,
+ enum ath10k_htc_ep_id id;
+ int ret, i, *n_lookahead_local;
+ u32 *lookaheads_local;
++ int lookahead_idx = 0;
+
+ for (i = 0; i < ar_sdio->n_rx_pkts; i++) {
+ lookaheads_local = lookaheads;
+ n_lookahead_local = n_lookahead;
+
+- id = ((struct ath10k_htc_hdr *)&lookaheads[i])->eid;
++ id = ((struct ath10k_htc_hdr *)
++ &lookaheads[lookahead_idx++])->eid;
+
+ if (id >= ATH10K_HTC_EP_COUNT) {
+ ath10k_warn(ar, "invalid endpoint in look-ahead: %d\n",
+@@ -462,6 +466,7 @@ static int ath10k_sdio_mbox_rx_process_packets(struct ath10k *ar,
+ /* Only read lookahead's from RX trailers
+ * for the last packet in a bundle.
+ */
++ lookahead_idx--;
+ lookaheads_local = NULL;
+ n_lookahead_local = NULL;
+ }
+@@ -1342,6 +1347,8 @@ static void ath10k_sdio_irq_handler(struct sdio_func *func)
+ break;
+ } while (time_before(jiffies, timeout) && !done);
+
++ ath10k_mac_tx_push_pending(ar);
++
+ sdio_claim_host(ar_sdio->func);
+
+ if (ret && ret != -ECANCELED)
+diff --git a/drivers/net/wireless/ath/ath10k/snoc.c b/drivers/net/wireless/ath/ath10k/snoc.c
+index a3a7042fe13a..aa621bf50a91 100644
+--- a/drivers/net/wireless/ath/ath10k/snoc.c
++++ b/drivers/net/wireless/ath/ath10k/snoc.c
+@@ -449,7 +449,7 @@ static void ath10k_snoc_htt_rx_cb(struct ath10k_ce_pipe *ce_state)
+
+ static void ath10k_snoc_rx_replenish_retry(struct timer_list *t)
+ {
+- struct ath10k_pci *ar_snoc = from_timer(ar_snoc, t, rx_post_retry);
++ struct ath10k_snoc *ar_snoc = from_timer(ar_snoc, t, rx_post_retry);
+ struct ath10k *ar = ar_snoc->ar;
+
+ ath10k_snoc_rx_post(ar);
+diff --git a/drivers/net/wireless/ath/ath10k/wmi.c b/drivers/net/wireless/ath/ath10k/wmi.c
+index f97ab795cf2e..2319f79b34f0 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi.c
++++ b/drivers/net/wireless/ath/ath10k/wmi.c
+@@ -4602,10 +4602,6 @@ void ath10k_wmi_event_pdev_tpc_config(struct ath10k *ar, struct sk_buff *skb)
+
+ ev = (struct wmi_pdev_tpc_config_event *)skb->data;
+
+- tpc_stats = kzalloc(sizeof(*tpc_stats), GFP_ATOMIC);
+- if (!tpc_stats)
+- return;
+-
+ num_tx_chain = __le32_to_cpu(ev->num_tx_chain);
+
+ if (num_tx_chain > WMI_TPC_TX_N_CHAIN) {
+@@ -4614,6 +4610,10 @@ void ath10k_wmi_event_pdev_tpc_config(struct ath10k *ar, struct sk_buff *skb)
+ return;
+ }
+
++ tpc_stats = kzalloc(sizeof(*tpc_stats), GFP_ATOMIC);
++ if (!tpc_stats)
++ return;
++
+ ath10k_wmi_tpc_config_get_rate_code(rate_code, pream_table,
+ num_tx_chain);
+
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_qmath.c b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_qmath.c
+index b9672da24a9d..b24bc57ca91b 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_qmath.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_qmath.c
+@@ -213,7 +213,7 @@ static const s16 log_table[] = {
+ 30498,
+ 31267,
+ 32024,
+- 32768
++ 32767
+ };
+
+ #define LOG_TABLE_SIZE 32 /* log_table size */
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2_mac.c b/drivers/net/wireless/mediatek/mt76/mt76x2_mac.c
+index b49aea4da2d6..8985446570bd 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x2_mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x2_mac.c
+@@ -439,15 +439,13 @@ mt76x2_mac_fill_tx_status(struct mt76x2_dev *dev,
+ if (last_rate < IEEE80211_TX_MAX_RATES - 1)
+ rate[last_rate + 1].idx = -1;
+
+- cur_idx = rate[last_rate].idx + st->retry;
++ cur_idx = rate[last_rate].idx + last_rate;
+ for (i = 0; i <= last_rate; i++) {
+ rate[i].flags = rate[last_rate].flags;
+ rate[i].idx = max_t(int, 0, cur_idx - i);
+ rate[i].count = 1;
+ }
+-
+- if (last_rate > 0)
+- rate[last_rate - 1].count = st->retry + 1 - last_rate;
++ rate[last_rate].count = st->retry + 1 - last_rate;
+
+ info->status.ampdu_len = n_frames;
+ info->status.ampdu_ack_len = st->success ? n_frames : 0;
+diff --git a/drivers/net/wireless/rndis_wlan.c b/drivers/net/wireless/rndis_wlan.c
+index 9935bd09db1f..d4947e3a909e 100644
+--- a/drivers/net/wireless/rndis_wlan.c
++++ b/drivers/net/wireless/rndis_wlan.c
+@@ -2928,6 +2928,8 @@ static void rndis_wlan_auth_indication(struct usbnet *usbdev,
+
+ while (buflen >= sizeof(*auth_req)) {
+ auth_req = (void *)buf;
++ if (buflen < le32_to_cpu(auth_req->length))
++ return;
+ type = "unknown";
+ flags = le32_to_cpu(auth_req->flags);
+ pairwise_error = false;
+diff --git a/drivers/net/wireless/ti/wlcore/cmd.c b/drivers/net/wireless/ti/wlcore/cmd.c
+index 761cf8573a80..f48c3f62966d 100644
+--- a/drivers/net/wireless/ti/wlcore/cmd.c
++++ b/drivers/net/wireless/ti/wlcore/cmd.c
+@@ -35,6 +35,7 @@
+ #include "wl12xx_80211.h"
+ #include "cmd.h"
+ #include "event.h"
++#include "ps.h"
+ #include "tx.h"
+ #include "hw_ops.h"
+
+@@ -191,6 +192,10 @@ int wlcore_cmd_wait_for_event_or_timeout(struct wl1271 *wl,
+
+ timeout_time = jiffies + msecs_to_jiffies(WL1271_EVENT_TIMEOUT);
+
++ ret = wl1271_ps_elp_wakeup(wl);
++ if (ret < 0)
++ return ret;
++
+ do {
+ if (time_after(jiffies, timeout_time)) {
+ wl1271_debug(DEBUG_CMD, "timeout waiting for event %d",
+@@ -222,6 +227,7 @@ int wlcore_cmd_wait_for_event_or_timeout(struct wl1271 *wl,
+ } while (!event);
+
+ out:
++ wl1271_ps_elp_sleep(wl);
+ kfree(events_vector);
+ return ret;
+ }
+diff --git a/drivers/nvme/target/fcloop.c b/drivers/nvme/target/fcloop.c
+index 34712def81b1..5251689a1d9a 100644
+--- a/drivers/nvme/target/fcloop.c
++++ b/drivers/nvme/target/fcloop.c
+@@ -311,7 +311,7 @@ fcloop_tgt_lsrqst_done_work(struct work_struct *work)
+ struct fcloop_tport *tport = tls_req->tport;
+ struct nvmefc_ls_req *lsreq = tls_req->lsreq;
+
+- if (tport->remoteport)
++ if (!tport || tport->remoteport)
+ lsreq->done(lsreq, tls_req->status);
+ }
+
+@@ -329,6 +329,7 @@ fcloop_ls_req(struct nvme_fc_local_port *localport,
+
+ if (!rport->targetport) {
+ tls_req->status = -ECONNREFUSED;
++ tls_req->tport = NULL;
+ schedule_work(&tls_req->work);
+ return ret;
+ }
+diff --git a/drivers/pci/hotplug/acpiphp_glue.c b/drivers/pci/hotplug/acpiphp_glue.c
+index ef0b1b6ba86f..12afa7fdf77e 100644
+--- a/drivers/pci/hotplug/acpiphp_glue.c
++++ b/drivers/pci/hotplug/acpiphp_glue.c
+@@ -457,17 +457,18 @@ static void acpiphp_native_scan_bridge(struct pci_dev *bridge)
+ /**
+ * enable_slot - enable, configure a slot
+ * @slot: slot to be enabled
++ * @bridge: true if enable is for the whole bridge (not a single slot)
+ *
+ * This function should be called per *physical slot*,
+ * not per each slot object in ACPI namespace.
+ */
+-static void enable_slot(struct acpiphp_slot *slot)
++static void enable_slot(struct acpiphp_slot *slot, bool bridge)
+ {
+ struct pci_dev *dev;
+ struct pci_bus *bus = slot->bus;
+ struct acpiphp_func *func;
+
+- if (bus->self && hotplug_is_native(bus->self)) {
++ if (bridge && bus->self && hotplug_is_native(bus->self)) {
+ /*
+ * If native hotplug is used, it will take care of hotplug
+ * slot management and resource allocation for hotplug
+@@ -701,7 +702,7 @@ static void acpiphp_check_bridge(struct acpiphp_bridge *bridge)
+ trim_stale_devices(dev);
+
+ /* configure all functions */
+- enable_slot(slot);
++ enable_slot(slot, true);
+ } else {
+ disable_slot(slot);
+ }
+@@ -785,7 +786,7 @@ static void hotplug_event(u32 type, struct acpiphp_context *context)
+ if (bridge)
+ acpiphp_check_bridge(bridge);
+ else if (!(slot->flags & SLOT_IS_GOING_AWAY))
+- enable_slot(slot);
++ enable_slot(slot, false);
+
+ break;
+
+@@ -973,7 +974,7 @@ int acpiphp_enable_slot(struct acpiphp_slot *slot)
+
+ /* configure all functions */
+ if (!(slot->flags & SLOT_ENABLED))
+- enable_slot(slot);
++ enable_slot(slot, false);
+
+ pci_unlock_rescan_remove();
+ return 0;
+diff --git a/drivers/platform/x86/asus-wireless.c b/drivers/platform/x86/asus-wireless.c
+index 6afd011de9e5..b8e35a8d65cf 100644
+--- a/drivers/platform/x86/asus-wireless.c
++++ b/drivers/platform/x86/asus-wireless.c
+@@ -52,13 +52,12 @@ static const struct acpi_device_id device_ids[] = {
+ };
+ MODULE_DEVICE_TABLE(acpi, device_ids);
+
+-static u64 asus_wireless_method(acpi_handle handle, const char *method,
+- int param)
++static acpi_status asus_wireless_method(acpi_handle handle, const char *method,
++ int param, u64 *ret)
+ {
+ struct acpi_object_list p;
+ union acpi_object obj;
+ acpi_status s;
+- u64 ret;
+
+ acpi_handle_debug(handle, "Evaluating method %s, parameter %#x\n",
+ method, param);
+@@ -67,24 +66,27 @@ static u64 asus_wireless_method(acpi_handle handle, const char *method,
+ p.count = 1;
+ p.pointer = &obj;
+
+- s = acpi_evaluate_integer(handle, (acpi_string) method, &p, &ret);
++ s = acpi_evaluate_integer(handle, (acpi_string) method, &p, ret);
+ if (ACPI_FAILURE(s))
+ acpi_handle_err(handle,
+ "Failed to eval method %s, param %#x (%d)\n",
+ method, param, s);
+- acpi_handle_debug(handle, "%s returned %#llx\n", method, ret);
+- return ret;
++ else
++ acpi_handle_debug(handle, "%s returned %#llx\n", method, *ret);
++
++ return s;
+ }
+
+ static enum led_brightness led_state_get(struct led_classdev *led)
+ {
+ struct asus_wireless_data *data;
+- int s;
++ acpi_status s;
++ u64 ret;
+
+ data = container_of(led, struct asus_wireless_data, led);
+ s = asus_wireless_method(acpi_device_handle(data->adev), "HSWC",
+- data->hswc_params->status);
+- if (s == data->hswc_params->on)
++ data->hswc_params->status, &ret);
++ if (ACPI_SUCCESS(s) && ret == data->hswc_params->on)
+ return LED_FULL;
+ return LED_OFF;
+ }
+@@ -92,10 +94,11 @@ static enum led_brightness led_state_get(struct led_classdev *led)
+ static void led_state_update(struct work_struct *work)
+ {
+ struct asus_wireless_data *data;
++ u64 ret;
+
+ data = container_of(work, struct asus_wireless_data, led_work);
+ asus_wireless_method(acpi_device_handle(data->adev), "HSWC",
+- data->led_state);
++ data->led_state, &ret);
+ }
+
+ static void led_state_set(struct led_classdev *led, enum led_brightness value)
+diff --git a/drivers/power/reset/vexpress-poweroff.c b/drivers/power/reset/vexpress-poweroff.c
+index 102f95a09460..e9e749f87517 100644
+--- a/drivers/power/reset/vexpress-poweroff.c
++++ b/drivers/power/reset/vexpress-poweroff.c
+@@ -35,6 +35,7 @@ static void vexpress_reset_do(struct device *dev, const char *what)
+ }
+
+ static struct device *vexpress_power_off_device;
++static atomic_t vexpress_restart_nb_refcnt = ATOMIC_INIT(0);
+
+ static void vexpress_power_off(void)
+ {
+@@ -99,10 +100,13 @@ static int _vexpress_register_restart_handler(struct device *dev)
+ int err;
+
+ vexpress_restart_device = dev;
+- err = register_restart_handler(&vexpress_restart_nb);
+- if (err) {
+- dev_err(dev, "cannot register restart handler (err=%d)\n", err);
+- return err;
++ if (atomic_inc_return(&vexpress_restart_nb_refcnt) == 1) {
++ err = register_restart_handler(&vexpress_restart_nb);
++ if (err) {
++ dev_err(dev, "cannot register restart handler (err=%d)\n", err);
++ atomic_dec(&vexpress_restart_nb_refcnt);
++ return err;
++ }
+ }
+ device_create_file(dev, &dev_attr_active);
+
+diff --git a/drivers/power/supply/axp288_charger.c b/drivers/power/supply/axp288_charger.c
+index 6e1bc14c3304..735658ee1c60 100644
+--- a/drivers/power/supply/axp288_charger.c
++++ b/drivers/power/supply/axp288_charger.c
+@@ -718,7 +718,7 @@ static int charger_init_hw_regs(struct axp288_chrg_info *info)
+ }
+
+ /* Determine charge current limit */
+- cc = (ret & CHRG_CCCV_CC_MASK) >> CHRG_CCCV_CC_BIT_POS;
++ cc = (val & CHRG_CCCV_CC_MASK) >> CHRG_CCCV_CC_BIT_POS;
+ cc = (cc * CHRG_CCCV_CC_LSB_RES) + CHRG_CCCV_CC_OFFSET;
+ info->cc = cc;
+
+diff --git a/drivers/power/supply/power_supply_core.c b/drivers/power/supply/power_supply_core.c
+index d21f478741c1..e85361878450 100644
+--- a/drivers/power/supply/power_supply_core.c
++++ b/drivers/power/supply/power_supply_core.c
+@@ -14,6 +14,7 @@
+ #include <linux/types.h>
+ #include <linux/init.h>
+ #include <linux/slab.h>
++#include <linux/delay.h>
+ #include <linux/device.h>
+ #include <linux/notifier.h>
+ #include <linux/err.h>
+@@ -140,8 +141,13 @@ static void power_supply_deferred_register_work(struct work_struct *work)
+ struct power_supply *psy = container_of(work, struct power_supply,
+ deferred_register_work.work);
+
+- if (psy->dev.parent)
+- mutex_lock(&psy->dev.parent->mutex);
++ if (psy->dev.parent) {
++ while (!mutex_trylock(&psy->dev.parent->mutex)) {
++ if (psy->removing)
++ return;
++ msleep(10);
++ }
++ }
+
+ power_supply_changed(psy);
+
+@@ -1082,6 +1088,7 @@ EXPORT_SYMBOL_GPL(devm_power_supply_register_no_ws);
+ void power_supply_unregister(struct power_supply *psy)
+ {
+ WARN_ON(atomic_dec_return(&psy->use_cnt));
++ psy->removing = true;
+ cancel_work_sync(&psy->changed_work);
+ cancel_delayed_work_sync(&psy->deferred_register_work);
+ sysfs_remove_link(&psy->dev.kobj, "powers");
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index 6ed568b96c0e..cc1450c53fb2 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -3147,7 +3147,7 @@ static inline int regulator_suspend_toggle(struct regulator_dev *rdev,
+ if (!rstate->changeable)
+ return -EPERM;
+
+- rstate->enabled = en;
++ rstate->enabled = (en) ? ENABLE_IN_SUSPEND : DISABLE_IN_SUSPEND;
+
+ return 0;
+ }
+@@ -4381,13 +4381,13 @@ regulator_register(const struct regulator_desc *regulator_desc,
+ !rdev->desc->fixed_uV)
+ rdev->is_switch = true;
+
++ dev_set_drvdata(&rdev->dev, rdev);
+ ret = device_register(&rdev->dev);
+ if (ret != 0) {
+ put_device(&rdev->dev);
+ goto unset_supplies;
+ }
+
+- dev_set_drvdata(&rdev->dev, rdev);
+ rdev_init_debugfs(rdev);
+
+ /* try to resolve regulators supply since a new one was registered */
+diff --git a/drivers/regulator/of_regulator.c b/drivers/regulator/of_regulator.c
+index 638f17d4c848..210fc20f7de7 100644
+--- a/drivers/regulator/of_regulator.c
++++ b/drivers/regulator/of_regulator.c
+@@ -213,8 +213,6 @@ static void of_get_regulation_constraints(struct device_node *np,
+ else if (of_property_read_bool(suspend_np,
+ "regulator-off-in-suspend"))
+ suspend_state->enabled = DISABLE_IN_SUSPEND;
+- else
+- suspend_state->enabled = DO_NOTHING_IN_SUSPEND;
+
+ if (!of_property_read_u32(np, "regulator-suspend-min-microvolt",
+ &pval))
+diff --git a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c
+index a9f60d0ee02e..7c732414367f 100644
+--- a/drivers/s390/block/dasd.c
++++ b/drivers/s390/block/dasd.c
+@@ -3127,6 +3127,7 @@ static int dasd_alloc_queue(struct dasd_block *block)
+ block->tag_set.nr_hw_queues = nr_hw_queues;
+ block->tag_set.queue_depth = queue_depth;
+ block->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
++ block->tag_set.numa_node = NUMA_NO_NODE;
+
+ rc = blk_mq_alloc_tag_set(&block->tag_set);
+ if (rc)
+diff --git a/drivers/s390/block/scm_blk.c b/drivers/s390/block/scm_blk.c
+index b1fcb76dd272..98f66b7b6794 100644
+--- a/drivers/s390/block/scm_blk.c
++++ b/drivers/s390/block/scm_blk.c
+@@ -455,6 +455,7 @@ int scm_blk_dev_setup(struct scm_blk_dev *bdev, struct scm_device *scmdev)
+ bdev->tag_set.nr_hw_queues = nr_requests;
+ bdev->tag_set.queue_depth = nr_requests_per_io * nr_requests;
+ bdev->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
++ bdev->tag_set.numa_node = NUMA_NO_NODE;
+
+ ret = blk_mq_alloc_tag_set(&bdev->tag_set);
+ if (ret)
+diff --git a/drivers/scsi/bnx2i/bnx2i_hwi.c b/drivers/scsi/bnx2i/bnx2i_hwi.c
+index 8f03a869ac98..e9e669a6c2bc 100644
+--- a/drivers/scsi/bnx2i/bnx2i_hwi.c
++++ b/drivers/scsi/bnx2i/bnx2i_hwi.c
+@@ -2727,6 +2727,8 @@ int bnx2i_map_ep_dbell_regs(struct bnx2i_endpoint *ep)
+ BNX2X_DOORBELL_PCI_BAR);
+ reg_off = (1 << BNX2X_DB_SHIFT) * (cid_num & 0x1FFFF);
+ ep->qp.ctx_base = ioremap_nocache(reg_base + reg_off, 4);
++ if (!ep->qp.ctx_base)
++ return -ENOMEM;
+ goto arm_cq;
+ }
+
+diff --git a/drivers/scsi/hisi_sas/hisi_sas.h b/drivers/scsi/hisi_sas/hisi_sas.h
+index 7052a5d45f7f..78e5a9254143 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas.h
++++ b/drivers/scsi/hisi_sas/hisi_sas.h
+@@ -277,6 +277,7 @@ struct hisi_hba {
+
+ int n_phy;
+ spinlock_t lock;
++ struct semaphore sem;
+
+ struct timer_list timer;
+ struct workqueue_struct *wq;
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_main.c b/drivers/scsi/hisi_sas/hisi_sas_main.c
+index 6f562974f8f6..bfbd2fb7e69e 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_main.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_main.c
+@@ -914,7 +914,9 @@ static void hisi_sas_dev_gone(struct domain_device *device)
+
+ hisi_sas_dereg_device(hisi_hba, device);
+
++ down(&hisi_hba->sem);
+ hisi_hba->hw->clear_itct(hisi_hba, sas_dev);
++ up(&hisi_hba->sem);
+ device->lldd_dev = NULL;
+ }
+
+@@ -1364,6 +1366,7 @@ static int hisi_sas_controller_reset(struct hisi_hba *hisi_hba)
+ if (test_and_set_bit(HISI_SAS_RESET_BIT, &hisi_hba->flags))
+ return -1;
+
++ down(&hisi_hba->sem);
+ dev_info(dev, "controller resetting...\n");
+ old_state = hisi_hba->hw->get_phys_state(hisi_hba);
+
+@@ -1378,6 +1381,7 @@ static int hisi_sas_controller_reset(struct hisi_hba *hisi_hba)
+ if (rc) {
+ dev_warn(dev, "controller reset failed (%d)\n", rc);
+ clear_bit(HISI_SAS_REJECT_CMD_BIT, &hisi_hba->flags);
++ up(&hisi_hba->sem);
+ scsi_unblock_requests(shost);
+ goto out;
+ }
+@@ -1388,6 +1392,7 @@ static int hisi_sas_controller_reset(struct hisi_hba *hisi_hba)
+ hisi_hba->hw->phys_init(hisi_hba);
+ msleep(1000);
+ hisi_sas_refresh_port_id(hisi_hba);
++ up(&hisi_hba->sem);
+
+ if (hisi_hba->reject_stp_links_msk)
+ hisi_sas_terminate_stp_reject(hisi_hba);
+@@ -2016,6 +2021,7 @@ int hisi_sas_alloc(struct hisi_hba *hisi_hba, struct Scsi_Host *shost)
+ struct device *dev = hisi_hba->dev;
+ int i, s, max_command_entries = hisi_hba->hw->max_command_entries;
+
++ sema_init(&hisi_hba->sem, 1);
+ spin_lock_init(&hisi_hba->lock);
+ for (i = 0; i < hisi_hba->n_phy; i++) {
+ hisi_sas_phy_init(hisi_hba, i);
+diff --git a/drivers/scsi/ibmvscsi/ibmvscsi.c b/drivers/scsi/ibmvscsi/ibmvscsi.c
+index 17df76f0be3c..67a2c844e30d 100644
+--- a/drivers/scsi/ibmvscsi/ibmvscsi.c
++++ b/drivers/scsi/ibmvscsi/ibmvscsi.c
+@@ -93,7 +93,7 @@ static int max_requests = IBMVSCSI_MAX_REQUESTS_DEFAULT;
+ static int max_events = IBMVSCSI_MAX_REQUESTS_DEFAULT + 2;
+ static int fast_fail = 1;
+ static int client_reserve = 1;
+-static char partition_name[97] = "UNKNOWN";
++static char partition_name[96] = "UNKNOWN";
+ static unsigned int partition_number = -1;
+ static LIST_HEAD(ibmvscsi_head);
+
+@@ -262,7 +262,7 @@ static void gather_partition_info(void)
+
+ ppartition_name = of_get_property(of_root, "ibm,partition-name", NULL);
+ if (ppartition_name)
+- strncpy(partition_name, ppartition_name,
++ strlcpy(partition_name, ppartition_name,
+ sizeof(partition_name));
+ p_number_ptr = of_get_property(of_root, "ibm,partition-no", NULL);
+ if (p_number_ptr)
+diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
+index 71d97573a667..8e84e3fb648a 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_base.c
++++ b/drivers/scsi/megaraid/megaraid_sas_base.c
+@@ -6789,6 +6789,9 @@ megasas_resume(struct pci_dev *pdev)
+ goto fail_init_mfi;
+ }
+
++ if (megasas_get_ctrl_info(instance) != DCMD_SUCCESS)
++ goto fail_init_mfi;
++
+ tasklet_init(&instance->isr_tasklet, instance->instancet->tasklet,
+ (unsigned long)instance);
+
+diff --git a/drivers/siox/siox-core.c b/drivers/siox/siox-core.c
+index 16590dfaafa4..cef307c0399c 100644
+--- a/drivers/siox/siox-core.c
++++ b/drivers/siox/siox-core.c
+@@ -715,17 +715,17 @@ int siox_master_register(struct siox_master *smaster)
+
+ dev_set_name(&smaster->dev, "siox-%d", smaster->busno);
+
++ mutex_init(&smaster->lock);
++ INIT_LIST_HEAD(&smaster->devices);
++
+ smaster->last_poll = jiffies;
+- smaster->poll_thread = kthread_create(siox_poll_thread, smaster,
+- "siox-%d", smaster->busno);
++ smaster->poll_thread = kthread_run(siox_poll_thread, smaster,
++ "siox-%d", smaster->busno);
+ if (IS_ERR(smaster->poll_thread)) {
+ smaster->active = 0;
+ return PTR_ERR(smaster->poll_thread);
+ }
+
+- mutex_init(&smaster->lock);
+- INIT_LIST_HEAD(&smaster->devices);
+-
+ ret = device_add(&smaster->dev);
+ if (ret)
+ kthread_stop(smaster->poll_thread);
+diff --git a/drivers/spi/spi-orion.c b/drivers/spi/spi-orion.c
+index d01a6adc726e..47ef6b1a2e76 100644
+--- a/drivers/spi/spi-orion.c
++++ b/drivers/spi/spi-orion.c
+@@ -20,6 +20,7 @@
+ #include <linux/of.h>
+ #include <linux/of_address.h>
+ #include <linux/of_device.h>
++#include <linux/of_gpio.h>
+ #include <linux/clk.h>
+ #include <linux/sizes.h>
+ #include <linux/gpio.h>
+@@ -681,9 +682,9 @@ static int orion_spi_probe(struct platform_device *pdev)
+ goto out_rel_axi_clk;
+ }
+
+- /* Scan all SPI devices of this controller for direct mapped devices */
+ for_each_available_child_of_node(pdev->dev.of_node, np) {
+ u32 cs;
++ int cs_gpio;
+
+ /* Get chip-select number from the "reg" property */
+ status = of_property_read_u32(np, "reg", &cs);
+@@ -694,6 +695,44 @@ static int orion_spi_probe(struct platform_device *pdev)
+ continue;
+ }
+
++ /*
++ * Initialize the CS GPIO:
++ * - properly request the actual GPIO signal
++ * - de-assert the logical signal so that all GPIO CS lines
++ * are inactive when probing for slaves
++ * - find an unused physical CS which will be driven for any
++ * slave which uses a CS GPIO
++ */
++ cs_gpio = of_get_named_gpio(pdev->dev.of_node, "cs-gpios", cs);
++ if (cs_gpio > 0) {
++ char *gpio_name;
++ int cs_flags;
++
++ if (spi->unused_hw_gpio == -1) {
++ dev_info(&pdev->dev,
++ "Selected unused HW CS#%d for any GPIO CSes\n",
++ cs);
++ spi->unused_hw_gpio = cs;
++ }
++
++ gpio_name = devm_kasprintf(&pdev->dev, GFP_KERNEL,
++ "%s-CS%d", dev_name(&pdev->dev), cs);
++ if (!gpio_name) {
++ status = -ENOMEM;
++ goto out_rel_axi_clk;
++ }
++
++ cs_flags = of_property_read_bool(np, "spi-cs-high") ?
++ GPIOF_OUT_INIT_LOW : GPIOF_OUT_INIT_HIGH;
++ status = devm_gpio_request_one(&pdev->dev, cs_gpio,
++ cs_flags, gpio_name);
++ if (status) {
++ dev_err(&pdev->dev,
++ "Can't request GPIO for CS %d\n", cs);
++ goto out_rel_axi_clk;
++ }
++ }
++
+ /*
+ * Check if an address is configured for this SPI device. If
+ * not, the MBus mapping via the 'ranges' property in the 'soc'
+@@ -740,44 +779,8 @@ static int orion_spi_probe(struct platform_device *pdev)
+ if (status < 0)
+ goto out_rel_pm;
+
+- if (master->cs_gpios) {
+- int i;
+- for (i = 0; i < master->num_chipselect; ++i) {
+- char *gpio_name;
+-
+- if (!gpio_is_valid(master->cs_gpios[i])) {
+- continue;
+- }
+-
+- gpio_name = devm_kasprintf(&pdev->dev, GFP_KERNEL,
+- "%s-CS%d", dev_name(&pdev->dev), i);
+- if (!gpio_name) {
+- status = -ENOMEM;
+- goto out_rel_master;
+- }
+-
+- status = devm_gpio_request(&pdev->dev,
+- master->cs_gpios[i], gpio_name);
+- if (status) {
+- dev_err(&pdev->dev,
+- "Can't request GPIO for CS %d\n",
+- master->cs_gpios[i]);
+- goto out_rel_master;
+- }
+- if (spi->unused_hw_gpio == -1) {
+- dev_info(&pdev->dev,
+- "Selected unused HW CS#%d for any GPIO CSes\n",
+- i);
+- spi->unused_hw_gpio = i;
+- }
+- }
+- }
+-
+-
+ return status;
+
+-out_rel_master:
+- spi_unregister_master(master);
+ out_rel_pm:
+ pm_runtime_disable(&pdev->dev);
+ out_rel_axi_clk:
+diff --git a/drivers/spi/spi-rspi.c b/drivers/spi/spi-rspi.c
+index 95dc4d78618d..b37de1d991d6 100644
+--- a/drivers/spi/spi-rspi.c
++++ b/drivers/spi/spi-rspi.c
+@@ -598,11 +598,13 @@ static int rspi_dma_transfer(struct rspi_data *rspi, struct sg_table *tx,
+
+ ret = wait_event_interruptible_timeout(rspi->wait,
+ rspi->dma_callbacked, HZ);
+- if (ret > 0 && rspi->dma_callbacked)
++ if (ret > 0 && rspi->dma_callbacked) {
+ ret = 0;
+- else if (!ret) {
+- dev_err(&rspi->master->dev, "DMA timeout\n");
+- ret = -ETIMEDOUT;
++ } else {
++ if (!ret) {
++ dev_err(&rspi->master->dev, "DMA timeout\n");
++ ret = -ETIMEDOUT;
++ }
+ if (tx)
+ dmaengine_terminate_all(rspi->master->dma_tx);
+ if (rx)
+@@ -1350,12 +1352,36 @@ static const struct platform_device_id spi_driver_ids[] = {
+
+ MODULE_DEVICE_TABLE(platform, spi_driver_ids);
+
++#ifdef CONFIG_PM_SLEEP
++static int rspi_suspend(struct device *dev)
++{
++ struct platform_device *pdev = to_platform_device(dev);
++ struct rspi_data *rspi = platform_get_drvdata(pdev);
++
++ return spi_master_suspend(rspi->master);
++}
++
++static int rspi_resume(struct device *dev)
++{
++ struct platform_device *pdev = to_platform_device(dev);
++ struct rspi_data *rspi = platform_get_drvdata(pdev);
++
++ return spi_master_resume(rspi->master);
++}
++
++static SIMPLE_DEV_PM_OPS(rspi_pm_ops, rspi_suspend, rspi_resume);
++#define DEV_PM_OPS &rspi_pm_ops
++#else
++#define DEV_PM_OPS NULL
++#endif /* CONFIG_PM_SLEEP */
++
+ static struct platform_driver rspi_driver = {
+ .probe = rspi_probe,
+ .remove = rspi_remove,
+ .id_table = spi_driver_ids,
+ .driver = {
+ .name = "renesas_spi",
++ .pm = DEV_PM_OPS,
+ .of_match_table = of_match_ptr(rspi_of_match),
+ },
+ };
+diff --git a/drivers/spi/spi-sh-msiof.c b/drivers/spi/spi-sh-msiof.c
+index 0e74cbf9929d..37364c634fef 100644
+--- a/drivers/spi/spi-sh-msiof.c
++++ b/drivers/spi/spi-sh-msiof.c
+@@ -396,7 +396,8 @@ static void sh_msiof_spi_set_mode_regs(struct sh_msiof_spi_priv *p,
+
+ static void sh_msiof_reset_str(struct sh_msiof_spi_priv *p)
+ {
+- sh_msiof_write(p, STR, sh_msiof_read(p, STR));
++ sh_msiof_write(p, STR,
++ sh_msiof_read(p, STR) & ~(STR_TDREQ | STR_RDREQ));
+ }
+
+ static void sh_msiof_spi_write_fifo_8(struct sh_msiof_spi_priv *p,
+@@ -1421,12 +1422,37 @@ static const struct platform_device_id spi_driver_ids[] = {
+ };
+ MODULE_DEVICE_TABLE(platform, spi_driver_ids);
+
++#ifdef CONFIG_PM_SLEEP
++static int sh_msiof_spi_suspend(struct device *dev)
++{
++ struct platform_device *pdev = to_platform_device(dev);
++ struct sh_msiof_spi_priv *p = platform_get_drvdata(pdev);
++
++ return spi_master_suspend(p->master);
++}
++
++static int sh_msiof_spi_resume(struct device *dev)
++{
++ struct platform_device *pdev = to_platform_device(dev);
++ struct sh_msiof_spi_priv *p = platform_get_drvdata(pdev);
++
++ return spi_master_resume(p->master);
++}
++
++static SIMPLE_DEV_PM_OPS(sh_msiof_spi_pm_ops, sh_msiof_spi_suspend,
++ sh_msiof_spi_resume);
++#define DEV_PM_OPS &sh_msiof_spi_pm_ops
++#else
++#define DEV_PM_OPS NULL
++#endif /* CONFIG_PM_SLEEP */
++
+ static struct platform_driver sh_msiof_spi_drv = {
+ .probe = sh_msiof_spi_probe,
+ .remove = sh_msiof_spi_remove,
+ .id_table = spi_driver_ids,
+ .driver = {
+ .name = "spi_sh_msiof",
++ .pm = DEV_PM_OPS,
+ .of_match_table = of_match_ptr(sh_msiof_match),
+ },
+ };
+diff --git a/drivers/spi/spi-tegra20-slink.c b/drivers/spi/spi-tegra20-slink.c
+index 6f7b946b5ced..1427f343b39a 100644
+--- a/drivers/spi/spi-tegra20-slink.c
++++ b/drivers/spi/spi-tegra20-slink.c
+@@ -1063,6 +1063,24 @@ static int tegra_slink_probe(struct platform_device *pdev)
+ goto exit_free_master;
+ }
+
++ /* disabled clock may cause interrupt storm upon request */
++ tspi->clk = devm_clk_get(&pdev->dev, NULL);
++ if (IS_ERR(tspi->clk)) {
++ ret = PTR_ERR(tspi->clk);
++ dev_err(&pdev->dev, "Can not get clock %d\n", ret);
++ goto exit_free_master;
++ }
++ ret = clk_prepare(tspi->clk);
++ if (ret < 0) {
++ dev_err(&pdev->dev, "Clock prepare failed %d\n", ret);
++ goto exit_free_master;
++ }
++ ret = clk_enable(tspi->clk);
++ if (ret < 0) {
++ dev_err(&pdev->dev, "Clock enable failed %d\n", ret);
++ goto exit_free_master;
++ }
++
+ spi_irq = platform_get_irq(pdev, 0);
+ tspi->irq = spi_irq;
+ ret = request_threaded_irq(tspi->irq, tegra_slink_isr,
+@@ -1071,14 +1089,7 @@ static int tegra_slink_probe(struct platform_device *pdev)
+ if (ret < 0) {
+ dev_err(&pdev->dev, "Failed to register ISR for IRQ %d\n",
+ tspi->irq);
+- goto exit_free_master;
+- }
+-
+- tspi->clk = devm_clk_get(&pdev->dev, NULL);
+- if (IS_ERR(tspi->clk)) {
+- dev_err(&pdev->dev, "can not get clock\n");
+- ret = PTR_ERR(tspi->clk);
+- goto exit_free_irq;
++ goto exit_clk_disable;
+ }
+
+ tspi->rst = devm_reset_control_get_exclusive(&pdev->dev, "spi");
+@@ -1138,6 +1149,8 @@ exit_rx_dma_free:
+ tegra_slink_deinit_dma_param(tspi, true);
+ exit_free_irq:
+ free_irq(spi_irq, tspi);
++exit_clk_disable:
++ clk_disable(tspi->clk);
+ exit_free_master:
+ spi_master_put(master);
+ return ret;
+@@ -1150,6 +1163,8 @@ static int tegra_slink_remove(struct platform_device *pdev)
+
+ free_irq(tspi->irq, tspi);
+
++ clk_disable(tspi->clk);
++
+ if (tspi->tx_dma_chan)
+ tegra_slink_deinit_dma_param(tspi, false);
+
+diff --git a/drivers/staging/android/ashmem.c b/drivers/staging/android/ashmem.c
+index d5d33e12e952..716573c21579 100644
+--- a/drivers/staging/android/ashmem.c
++++ b/drivers/staging/android/ashmem.c
+@@ -366,6 +366,12 @@ static int ashmem_mmap(struct file *file, struct vm_area_struct *vma)
+ goto out;
+ }
+
++ /* requested mapping size larger than object size */
++ if (vma->vm_end - vma->vm_start > PAGE_ALIGN(asma->size)) {
++ ret = -EINVAL;
++ goto out;
++ }
++
+ /* requested protection bits must match our allowed protection mask */
+ if (unlikely((vma->vm_flags & ~calc_vm_prot_bits(asma->prot_mask, 0)) &
+ calc_vm_prot_bits(PROT_MASK, 0))) {
+diff --git a/drivers/staging/media/imx/imx-ic-prpencvf.c b/drivers/staging/media/imx/imx-ic-prpencvf.c
+index ae453fd422f0..ffeb017c73b2 100644
+--- a/drivers/staging/media/imx/imx-ic-prpencvf.c
++++ b/drivers/staging/media/imx/imx-ic-prpencvf.c
+@@ -210,6 +210,7 @@ static void prp_vb2_buf_done(struct prp_priv *priv, struct ipuv3_channel *ch)
+
+ done = priv->active_vb2_buf[priv->ipu_buf_num];
+ if (done) {
++ done->vbuf.field = vdev->fmt.fmt.pix.field;
+ vb = &done->vbuf.vb2_buf;
+ vb->timestamp = ktime_get_ns();
+ vb2_buffer_done(vb, priv->nfb4eof ?
+diff --git a/drivers/staging/media/imx/imx-media-csi.c b/drivers/staging/media/imx/imx-media-csi.c
+index 95d7805f3485..0e963c24af37 100644
+--- a/drivers/staging/media/imx/imx-media-csi.c
++++ b/drivers/staging/media/imx/imx-media-csi.c
+@@ -236,6 +236,7 @@ static void csi_vb2_buf_done(struct csi_priv *priv)
+
+ done = priv->active_vb2_buf[priv->ipu_buf_num];
+ if (done) {
++ done->vbuf.field = vdev->fmt.fmt.pix.field;
+ vb = &done->vbuf.vb2_buf;
+ vb->timestamp = ktime_get_ns();
+ vb2_buffer_done(vb, priv->nfb4eof ?
+diff --git a/drivers/staging/mt7621-dts/gbpc1.dts b/drivers/staging/mt7621-dts/gbpc1.dts
+index 6b13d85d9d34..87555600195f 100644
+--- a/drivers/staging/mt7621-dts/gbpc1.dts
++++ b/drivers/staging/mt7621-dts/gbpc1.dts
+@@ -113,6 +113,8 @@
+ };
+
+ &pcie {
++ pinctrl-names = "default";
++ pinctrl-0 = <&pcie_pins>;
+ status = "okay";
+ };
+
+diff --git a/drivers/staging/mt7621-dts/mt7621.dtsi b/drivers/staging/mt7621-dts/mt7621.dtsi
+index eb3966b7f033..ce6b43639079 100644
+--- a/drivers/staging/mt7621-dts/mt7621.dtsi
++++ b/drivers/staging/mt7621-dts/mt7621.dtsi
+@@ -447,31 +447,28 @@
+ clocks = <&clkctrl 24 &clkctrl 25 &clkctrl 26>;
+ clock-names = "pcie0", "pcie1", "pcie2";
+
+- pcie0 {
++ pcie@0,0 {
+ reg = <0x0000 0 0 0 0>;
+-
+ #address-cells = <3>;
+ #size-cells = <2>;
+-
+- device_type = "pci";
++ ranges;
++ bus-range = <0x00 0xff>;
+ };
+
+- pcie1 {
++ pcie@1,0 {
+ reg = <0x0800 0 0 0 0>;
+-
+ #address-cells = <3>;
+ #size-cells = <2>;
+-
+- device_type = "pci";
++ ranges;
++ bus-range = <0x00 0xff>;
+ };
+
+- pcie2 {
++ pcie@2,0 {
+ reg = <0x1000 0 0 0 0>;
+-
+ #address-cells = <3>;
+ #size-cells = <2>;
+-
+- device_type = "pci";
++ ranges;
++ bus-range = <0x00 0xff>;
+ };
+ };
+ };
+diff --git a/drivers/staging/mt7621-eth/mtk_eth_soc.c b/drivers/staging/mt7621-eth/mtk_eth_soc.c
+index 2c7a2e666bfb..381d9d270bf5 100644
+--- a/drivers/staging/mt7621-eth/mtk_eth_soc.c
++++ b/drivers/staging/mt7621-eth/mtk_eth_soc.c
+@@ -2012,8 +2012,10 @@ static int mtk_add_mac(struct mtk_eth *eth, struct device_node *np)
+ mac->hw_stats = devm_kzalloc(eth->dev,
+ sizeof(*mac->hw_stats),
+ GFP_KERNEL);
+- if (!mac->hw_stats)
+- return -ENOMEM;
++ if (!mac->hw_stats) {
++ err = -ENOMEM;
++ goto free_netdev;
++ }
+ spin_lock_init(&mac->hw_stats->stats_lock);
+ mac->hw_stats->reg_offset = id * MTK_STAT_OFFSET;
+ }
+@@ -2037,7 +2039,8 @@ static int mtk_add_mac(struct mtk_eth *eth, struct device_node *np)
+ err = register_netdev(eth->netdev[id]);
+ if (err) {
+ dev_err(eth->dev, "error bringing up device\n");
+- return err;
++ err = -ENOMEM;
++ goto free_netdev;
+ }
+ eth->netdev[id]->irq = eth->irq;
+ netif_info(eth, probe, eth->netdev[id],
+@@ -2045,6 +2048,10 @@ static int mtk_add_mac(struct mtk_eth *eth, struct device_node *np)
+ eth->netdev[id]->base_addr, eth->netdev[id]->irq);
+
+ return 0;
++
++free_netdev:
++ free_netdev(eth->netdev[id]);
++ return err;
+ }
+
+ static int mtk_probe(struct platform_device *pdev)
+diff --git a/drivers/staging/pi433/pi433_if.c b/drivers/staging/pi433/pi433_if.c
+index b061f77dda41..94e0bfcec991 100644
+--- a/drivers/staging/pi433/pi433_if.c
++++ b/drivers/staging/pi433/pi433_if.c
+@@ -880,6 +880,7 @@ pi433_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+ int retval = 0;
+ struct pi433_instance *instance;
+ struct pi433_device *device;
++ struct pi433_tx_cfg tx_cfg;
+ void __user *argp = (void __user *)arg;
+
+ /* Check type and command number */
+@@ -902,9 +903,11 @@ pi433_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+ return -EFAULT;
+ break;
+ case PI433_IOC_WR_TX_CFG:
+- if (copy_from_user(&instance->tx_cfg, argp,
+- sizeof(struct pi433_tx_cfg)))
++ if (copy_from_user(&tx_cfg, argp, sizeof(struct pi433_tx_cfg)))
+ return -EFAULT;
++ mutex_lock(&device->tx_fifo_lock);
++ memcpy(&instance->tx_cfg, &tx_cfg, sizeof(struct pi433_tx_cfg));
++ mutex_unlock(&device->tx_fifo_lock);
+ break;
+ case PI433_IOC_RD_RX_CFG:
+ if (copy_to_user(argp, &device->rx_cfg,
+diff --git a/drivers/staging/rts5208/sd.c b/drivers/staging/rts5208/sd.c
+index d548bc695f9e..0421dd9277a8 100644
+--- a/drivers/staging/rts5208/sd.c
++++ b/drivers/staging/rts5208/sd.c
+@@ -4996,7 +4996,7 @@ int sd_execute_write_data(struct scsi_cmnd *srb, struct rtsx_chip *chip)
+ goto sd_execute_write_cmd_failed;
+ }
+
+- rtsx_write_register(chip, SD_BYTE_CNT_L, 0xFF, 0x00);
++ retval = rtsx_write_register(chip, SD_BYTE_CNT_L, 0xFF, 0x00);
+ if (retval != STATUS_SUCCESS) {
+ rtsx_trace(chip);
+ goto sd_execute_write_cmd_failed;
+diff --git a/drivers/target/iscsi/iscsi_target_tpg.c b/drivers/target/iscsi/iscsi_target_tpg.c
+index 4b34f71547c6..101d62105c93 100644
+--- a/drivers/target/iscsi/iscsi_target_tpg.c
++++ b/drivers/target/iscsi/iscsi_target_tpg.c
+@@ -636,8 +636,7 @@ int iscsit_ta_authentication(struct iscsi_portal_group *tpg, u32 authentication)
+ none = strstr(buf1, NONE);
+ if (none)
+ goto out;
+- strncat(buf1, ",", strlen(","));
+- strncat(buf1, NONE, strlen(NONE));
++ strlcat(buf1, "," NONE, sizeof(buf1));
+ if (iscsi_update_param_value(param, buf1) < 0)
+ return -EINVAL;
+ }
+diff --git a/drivers/target/target_core_device.c b/drivers/target/target_core_device.c
+index e27db4d45a9d..06c9886e556c 100644
+--- a/drivers/target/target_core_device.c
++++ b/drivers/target/target_core_device.c
+@@ -904,14 +904,20 @@ struct se_device *target_find_device(int id, bool do_depend)
+ EXPORT_SYMBOL(target_find_device);
+
+ struct devices_idr_iter {
++ struct config_item *prev_item;
+ int (*fn)(struct se_device *dev, void *data);
+ void *data;
+ };
+
+ static int target_devices_idr_iter(int id, void *p, void *data)
++ __must_hold(&device_mutex)
+ {
+ struct devices_idr_iter *iter = data;
+ struct se_device *dev = p;
++ int ret;
++
++ config_item_put(iter->prev_item);
++ iter->prev_item = NULL;
+
+ /*
+ * We add the device early to the idr, so it can be used
+@@ -922,7 +928,15 @@ static int target_devices_idr_iter(int id, void *p, void *data)
+ if (!(dev->dev_flags & DF_CONFIGURED))
+ return 0;
+
+- return iter->fn(dev, iter->data);
++ iter->prev_item = config_item_get_unless_zero(&dev->dev_group.cg_item);
++ if (!iter->prev_item)
++ return 0;
++ mutex_unlock(&device_mutex);
++
++ ret = iter->fn(dev, iter->data);
++
++ mutex_lock(&device_mutex);
++ return ret;
+ }
+
+ /**
+@@ -936,15 +950,13 @@ static int target_devices_idr_iter(int id, void *p, void *data)
+ int target_for_each_device(int (*fn)(struct se_device *dev, void *data),
+ void *data)
+ {
+- struct devices_idr_iter iter;
++ struct devices_idr_iter iter = { .fn = fn, .data = data };
+ int ret;
+
+- iter.fn = fn;
+- iter.data = data;
+-
+ mutex_lock(&device_mutex);
+ ret = idr_for_each(&devices_idr, target_devices_idr_iter, &iter);
+ mutex_unlock(&device_mutex);
++ config_item_put(iter.prev_item);
+ return ret;
+ }
+
+diff --git a/drivers/thermal/imx_thermal.c b/drivers/thermal/imx_thermal.c
+index 334d98be03b9..b1f82d64253e 100644
+--- a/drivers/thermal/imx_thermal.c
++++ b/drivers/thermal/imx_thermal.c
+@@ -604,7 +604,10 @@ static int imx_init_from_nvmem_cells(struct platform_device *pdev)
+ ret = nvmem_cell_read_u32(&pdev->dev, "calib", &val);
+ if (ret)
+ return ret;
+- imx_init_calib(pdev, val);
++
++ ret = imx_init_calib(pdev, val);
++ if (ret)
++ return ret;
+
+ ret = nvmem_cell_read_u32(&pdev->dev, "temp_grade", &val);
+ if (ret)
+diff --git a/drivers/thermal/of-thermal.c b/drivers/thermal/of-thermal.c
+index 977a8307fbb1..4f2816559205 100644
+--- a/drivers/thermal/of-thermal.c
++++ b/drivers/thermal/of-thermal.c
+@@ -260,10 +260,13 @@ static int of_thermal_set_mode(struct thermal_zone_device *tz,
+
+ mutex_lock(&tz->lock);
+
+- if (mode == THERMAL_DEVICE_ENABLED)
++ if (mode == THERMAL_DEVICE_ENABLED) {
+ tz->polling_delay = data->polling_delay;
+- else
++ tz->passive_delay = data->passive_delay;
++ } else {
+ tz->polling_delay = 0;
++ tz->passive_delay = 0;
++ }
+
+ mutex_unlock(&tz->lock);
+
+diff --git a/drivers/tty/serial/8250/serial_cs.c b/drivers/tty/serial/8250/serial_cs.c
+index 9963a766dcfb..c8186a05a453 100644
+--- a/drivers/tty/serial/8250/serial_cs.c
++++ b/drivers/tty/serial/8250/serial_cs.c
+@@ -638,8 +638,10 @@ static int serial_config(struct pcmcia_device *link)
+ (link->has_func_id) &&
+ (link->socket->pcmcia_pfc == 0) &&
+ ((link->func_id == CISTPL_FUNCID_MULTI) ||
+- (link->func_id == CISTPL_FUNCID_SERIAL)))
+- pcmcia_loop_config(link, serial_check_for_multi, info);
++ (link->func_id == CISTPL_FUNCID_SERIAL))) {
++ if (pcmcia_loop_config(link, serial_check_for_multi, info))
++ goto failed;
++ }
+
+ /*
+ * Apply any multi-port quirk.
+diff --git a/drivers/tty/serial/cpm_uart/cpm_uart_core.c b/drivers/tty/serial/cpm_uart/cpm_uart_core.c
+index 24a5f05e769b..e5389591bb4f 100644
+--- a/drivers/tty/serial/cpm_uart/cpm_uart_core.c
++++ b/drivers/tty/serial/cpm_uart/cpm_uart_core.c
+@@ -1054,8 +1054,8 @@ static int poll_wait_key(char *obuf, struct uart_cpm_port *pinfo)
+ /* Get the address of the host memory buffer.
+ */
+ bdp = pinfo->rx_cur;
+- while (bdp->cbd_sc & BD_SC_EMPTY)
+- ;
++ if (bdp->cbd_sc & BD_SC_EMPTY)
++ return NO_POLL_CHAR;
+
+ /* If the buffer address is in the CPM DPRAM, don't
+ * convert it.
+@@ -1090,7 +1090,11 @@ static int cpm_get_poll_char(struct uart_port *port)
+ poll_chars = 0;
+ }
+ if (poll_chars <= 0) {
+- poll_chars = poll_wait_key(poll_buf, pinfo);
++ int ret = poll_wait_key(poll_buf, pinfo);
++
++ if (ret == NO_POLL_CHAR)
++ return ret;
++ poll_chars = ret;
+ pollp = poll_buf;
+ }
+ poll_chars--;
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index 51e47a63d61a..3f8d1274fc85 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -979,7 +979,8 @@ static inline int lpuart_start_rx_dma(struct lpuart_port *sport)
+ struct circ_buf *ring = &sport->rx_ring;
+ int ret, nent;
+ int bits, baud;
+- struct tty_struct *tty = tty_port_tty_get(&sport->port.state->port);
++ struct tty_port *port = &sport->port.state->port;
++ struct tty_struct *tty = port->tty;
+ struct ktermios *termios = &tty->termios;
+
+ baud = tty_get_baud_rate(tty);
+diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c
+index 4e853570ea80..554a69db1bca 100644
+--- a/drivers/tty/serial/imx.c
++++ b/drivers/tty/serial/imx.c
+@@ -2350,6 +2350,14 @@ static int imx_uart_probe(struct platform_device *pdev)
+ ret);
+ return ret;
+ }
++
++ ret = devm_request_irq(&pdev->dev, rtsirq, imx_uart_rtsint, 0,
++ dev_name(&pdev->dev), sport);
++ if (ret) {
++ dev_err(&pdev->dev, "failed to request rts irq: %d\n",
++ ret);
++ return ret;
++ }
+ } else {
+ ret = devm_request_irq(&pdev->dev, rxirq, imx_uart_int, 0,
+ dev_name(&pdev->dev), sport);
+diff --git a/drivers/tty/serial/mvebu-uart.c b/drivers/tty/serial/mvebu-uart.c
+index d04b5eeea3c6..170e446a2f62 100644
+--- a/drivers/tty/serial/mvebu-uart.c
++++ b/drivers/tty/serial/mvebu-uart.c
+@@ -511,6 +511,7 @@ static void mvebu_uart_set_termios(struct uart_port *port,
+ termios->c_iflag |= old->c_iflag & ~(INPCK | IGNPAR);
+ termios->c_cflag &= CREAD | CBAUD;
+ termios->c_cflag |= old->c_cflag & ~(CREAD | CBAUD);
++ termios->c_cflag |= CS8;
+ }
+
+ spin_unlock_irqrestore(&port->lock, flags);
+diff --git a/drivers/tty/serial/pxa.c b/drivers/tty/serial/pxa.c
+index eda3c7710d6a..4932b674f7ef 100644
+--- a/drivers/tty/serial/pxa.c
++++ b/drivers/tty/serial/pxa.c
+@@ -887,7 +887,8 @@ static int serial_pxa_probe(struct platform_device *dev)
+ goto err_clk;
+ if (sport->port.line >= ARRAY_SIZE(serial_pxa_ports)) {
+ dev_err(&dev->dev, "serial%d out of range\n", sport->port.line);
+- return -EINVAL;
++ ret = -EINVAL;
++ goto err_clk;
+ }
+ snprintf(sport->name, PXA_NAME_LEN - 1, "UART%d", sport->port.line + 1);
+
+diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
+index c181eb37f985..3c55600a8236 100644
+--- a/drivers/tty/serial/sh-sci.c
++++ b/drivers/tty/serial/sh-sci.c
+@@ -2099,6 +2099,8 @@ static void sci_shutdown(struct uart_port *port)
+ }
+ #endif
+
++ if (s->rx_trigger > 1 && s->rx_fifo_timeout > 0)
++ del_timer_sync(&s->rx_fifo_timer);
+ sci_free_irq(s);
+ sci_free_dma(port);
+ }
+diff --git a/drivers/usb/class/cdc-wdm.c b/drivers/usb/class/cdc-wdm.c
+index 632a2bfabc08..a0d284ef3f40 100644
+--- a/drivers/usb/class/cdc-wdm.c
++++ b/drivers/usb/class/cdc-wdm.c
+@@ -458,7 +458,7 @@ static int service_outstanding_interrupt(struct wdm_device *desc)
+
+ set_bit(WDM_RESPONDING, &desc->flags);
+ spin_unlock_irq(&desc->iuspin);
+- rv = usb_submit_urb(desc->response, GFP_ATOMIC);
++ rv = usb_submit_urb(desc->response, GFP_KERNEL);
+ spin_lock_irq(&desc->iuspin);
+ if (rv) {
+ dev_err(&desc->intf->dev,
+diff --git a/drivers/usb/common/roles.c b/drivers/usb/common/roles.c
+index 15cc76e22123..99116af07f1d 100644
+--- a/drivers/usb/common/roles.c
++++ b/drivers/usb/common/roles.c
+@@ -109,8 +109,15 @@ static void *usb_role_switch_match(struct device_connection *con, int ep,
+ */
+ struct usb_role_switch *usb_role_switch_get(struct device *dev)
+ {
+- return device_connection_find_match(dev, "usb-role-switch", NULL,
+- usb_role_switch_match);
++ struct usb_role_switch *sw;
++
++ sw = device_connection_find_match(dev, "usb-role-switch", NULL,
++ usb_role_switch_match);
++
++ if (!IS_ERR_OR_NULL(sw))
++ WARN_ON(!try_module_get(sw->dev.parent->driver->owner));
++
++ return sw;
+ }
+ EXPORT_SYMBOL_GPL(usb_role_switch_get);
+
+@@ -122,8 +129,10 @@ EXPORT_SYMBOL_GPL(usb_role_switch_get);
+ */
+ void usb_role_switch_put(struct usb_role_switch *sw)
+ {
+- if (!IS_ERR_OR_NULL(sw))
++ if (!IS_ERR_OR_NULL(sw)) {
+ put_device(&sw->dev);
++ module_put(sw->dev.parent->driver->owner);
++ }
+ }
+ EXPORT_SYMBOL_GPL(usb_role_switch_put);
+
+diff --git a/drivers/usb/core/devio.c b/drivers/usb/core/devio.c
+index 476dcc5f2da3..e1e0c90ce569 100644
+--- a/drivers/usb/core/devio.c
++++ b/drivers/usb/core/devio.c
+@@ -1433,10 +1433,13 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb
+ struct async *as = NULL;
+ struct usb_ctrlrequest *dr = NULL;
+ unsigned int u, totlen, isofrmlen;
+- int i, ret, is_in, num_sgs = 0, ifnum = -1;
++ int i, ret, num_sgs = 0, ifnum = -1;
+ int number_of_packets = 0;
+ unsigned int stream_id = 0;
+ void *buf;
++ bool is_in;
++ bool allow_short = false;
++ bool allow_zero = false;
+ unsigned long mask = USBDEVFS_URB_SHORT_NOT_OK |
+ USBDEVFS_URB_BULK_CONTINUATION |
+ USBDEVFS_URB_NO_FSBR |
+@@ -1470,6 +1473,8 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb
+ u = 0;
+ switch (uurb->type) {
+ case USBDEVFS_URB_TYPE_CONTROL:
++ if (is_in)
++ allow_short = true;
+ if (!usb_endpoint_xfer_control(&ep->desc))
+ return -EINVAL;
+ /* min 8 byte setup packet */
+@@ -1510,6 +1515,10 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb
+ break;
+
+ case USBDEVFS_URB_TYPE_BULK:
++ if (!is_in)
++ allow_zero = true;
++ else
++ allow_short = true;
+ switch (usb_endpoint_type(&ep->desc)) {
+ case USB_ENDPOINT_XFER_CONTROL:
+ case USB_ENDPOINT_XFER_ISOC:
+@@ -1530,6 +1539,10 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb
+ if (!usb_endpoint_xfer_int(&ep->desc))
+ return -EINVAL;
+ interrupt_urb:
++ if (!is_in)
++ allow_zero = true;
++ else
++ allow_short = true;
+ break;
+
+ case USBDEVFS_URB_TYPE_ISO:
+@@ -1675,14 +1688,19 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb
+ u = (is_in ? URB_DIR_IN : URB_DIR_OUT);
+ if (uurb->flags & USBDEVFS_URB_ISO_ASAP)
+ u |= URB_ISO_ASAP;
+- if (uurb->flags & USBDEVFS_URB_SHORT_NOT_OK && is_in)
++ if (allow_short && uurb->flags & USBDEVFS_URB_SHORT_NOT_OK)
+ u |= URB_SHORT_NOT_OK;
+- if (uurb->flags & USBDEVFS_URB_ZERO_PACKET)
++ if (allow_zero && uurb->flags & USBDEVFS_URB_ZERO_PACKET)
+ u |= URB_ZERO_PACKET;
+ if (uurb->flags & USBDEVFS_URB_NO_INTERRUPT)
+ u |= URB_NO_INTERRUPT;
+ as->urb->transfer_flags = u;
+
++ if (!allow_short && uurb->flags & USBDEVFS_URB_SHORT_NOT_OK)
++ dev_warn(&ps->dev->dev, "Requested nonsensical USBDEVFS_URB_SHORT_NOT_OK.\n");
++ if (!allow_zero && uurb->flags & USBDEVFS_URB_ZERO_PACKET)
++ dev_warn(&ps->dev->dev, "Requested nonsensical USBDEVFS_URB_ZERO_PACKET.\n");
++
+ as->urb->transfer_buffer_length = uurb->buffer_length;
+ as->urb->setup_packet = (unsigned char *)dr;
+ dr = NULL;
+diff --git a/drivers/usb/core/driver.c b/drivers/usb/core/driver.c
+index e76e95f62f76..a1f225f077cd 100644
+--- a/drivers/usb/core/driver.c
++++ b/drivers/usb/core/driver.c
+@@ -512,7 +512,6 @@ int usb_driver_claim_interface(struct usb_driver *driver,
+ struct device *dev;
+ struct usb_device *udev;
+ int retval = 0;
+- int lpm_disable_error = -ENODEV;
+
+ if (!iface)
+ return -ENODEV;
+@@ -533,16 +532,6 @@ int usb_driver_claim_interface(struct usb_driver *driver,
+
+ iface->condition = USB_INTERFACE_BOUND;
+
+- /* See the comment about disabling LPM in usb_probe_interface(). */
+- if (driver->disable_hub_initiated_lpm) {
+- lpm_disable_error = usb_unlocked_disable_lpm(udev);
+- if (lpm_disable_error) {
+- dev_err(&iface->dev, "%s Failed to disable LPM for driver %s\n",
+- __func__, driver->name);
+- return -ENOMEM;
+- }
+- }
+-
+ /* Claimed interfaces are initially inactive (suspended) and
+ * runtime-PM-enabled, but only if the driver has autosuspend
+ * support. Otherwise they are marked active, to prevent the
+@@ -561,9 +550,20 @@ int usb_driver_claim_interface(struct usb_driver *driver,
+ if (device_is_registered(dev))
+ retval = device_bind_driver(dev);
+
+- /* Attempt to re-enable USB3 LPM, if the disable was successful. */
+- if (!lpm_disable_error)
+- usb_unlocked_enable_lpm(udev);
++ if (retval) {
++ dev->driver = NULL;
++ usb_set_intfdata(iface, NULL);
++ iface->needs_remote_wakeup = 0;
++ iface->condition = USB_INTERFACE_UNBOUND;
++
++ /*
++ * Unbound interfaces are always runtime-PM-disabled
++ * and runtime-PM-suspended
++ */
++ if (driver->supports_autosuspend)
++ pm_runtime_disable(dev);
++ pm_runtime_set_suspended(dev);
++ }
+
+ return retval;
+ }
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index e77dfe5ed5ec..178d6c6063c0 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -58,6 +58,7 @@ static int quirks_param_set(const char *val, const struct kernel_param *kp)
+ quirk_list = kcalloc(quirk_count, sizeof(struct quirk_entry),
+ GFP_KERNEL);
+ if (!quirk_list) {
++ quirk_count = 0;
+ mutex_unlock(&quirk_mutex);
+ return -ENOMEM;
+ }
+@@ -154,7 +155,7 @@ static struct kparam_string quirks_param_string = {
+ .string = quirks_param,
+ };
+
+-module_param_cb(quirks, &quirks_param_ops, &quirks_param_string, 0644);
++device_param_cb(quirks, &quirks_param_ops, &quirks_param_string, 0644);
+ MODULE_PARM_DESC(quirks, "Add/modify USB quirks by specifying quirks=vendorID:productID:quirks");
+
+ /* Lists of quirky USB devices, split in device quirks and interface quirks.
+diff --git a/drivers/usb/core/usb.c b/drivers/usb/core/usb.c
+index 623be3174fb3..79d8bd7a612e 100644
+--- a/drivers/usb/core/usb.c
++++ b/drivers/usb/core/usb.c
+@@ -228,6 +228,8 @@ struct usb_host_interface *usb_find_alt_setting(
+ struct usb_interface_cache *intf_cache = NULL;
+ int i;
+
++ if (!config)
++ return NULL;
+ for (i = 0; i < config->desc.bNumInterfaces; i++) {
+ if (config->intf_cache[i]->altsetting[0].desc.bInterfaceNumber
+ == iface_num) {
+diff --git a/drivers/usb/musb/musb_dsps.c b/drivers/usb/musb/musb_dsps.c
+index fb871eabcc10..a129d601a0c3 100644
+--- a/drivers/usb/musb/musb_dsps.c
++++ b/drivers/usb/musb/musb_dsps.c
+@@ -658,16 +658,6 @@ dsps_dma_controller_create(struct musb *musb, void __iomem *base)
+ return controller;
+ }
+
+-static void dsps_dma_controller_destroy(struct dma_controller *c)
+-{
+- struct musb *musb = c->musb;
+- struct dsps_glue *glue = dev_get_drvdata(musb->controller->parent);
+- void __iomem *usbss_base = glue->usbss_base;
+-
+- musb_writel(usbss_base, USBSS_IRQ_CLEARR, USBSS_IRQ_PD_COMP);
+- cppi41_dma_controller_destroy(c);
+-}
+-
+ #ifdef CONFIG_PM_SLEEP
+ static void dsps_dma_controller_suspend(struct dsps_glue *glue)
+ {
+@@ -697,7 +687,7 @@ static struct musb_platform_ops dsps_ops = {
+
+ #ifdef CONFIG_USB_TI_CPPI41_DMA
+ .dma_init = dsps_dma_controller_create,
+- .dma_exit = dsps_dma_controller_destroy,
++ .dma_exit = cppi41_dma_controller_destroy,
+ #endif
+ .enable = dsps_musb_enable,
+ .disable = dsps_musb_disable,
+diff --git a/drivers/usb/serial/kobil_sct.c b/drivers/usb/serial/kobil_sct.c
+index a31ea7e194dd..a6ebed1e0f20 100644
+--- a/drivers/usb/serial/kobil_sct.c
++++ b/drivers/usb/serial/kobil_sct.c
+@@ -393,12 +393,20 @@ static int kobil_tiocmget(struct tty_struct *tty)
+ transfer_buffer_length,
+ KOBIL_TIMEOUT);
+
+- dev_dbg(&port->dev, "%s - Send get_status_line_state URB returns: %i. Statusline: %02x\n",
+- __func__, result, transfer_buffer[0]);
++ dev_dbg(&port->dev, "Send get_status_line_state URB returns: %i\n",
++ result);
++ if (result < 1) {
++ if (result >= 0)
++ result = -EIO;
++ goto out_free;
++ }
++
++ dev_dbg(&port->dev, "Statusline: %02x\n", transfer_buffer[0]);
+
+ result = 0;
+ if ((transfer_buffer[0] & SUSBCR_GSL_DSR) != 0)
+ result = TIOCM_DSR;
++out_free:
+ kfree(transfer_buffer);
+ return result;
+ }
+diff --git a/drivers/usb/wusbcore/security.c b/drivers/usb/wusbcore/security.c
+index 33d2f5d7f33b..14ac8c98ac9e 100644
+--- a/drivers/usb/wusbcore/security.c
++++ b/drivers/usb/wusbcore/security.c
+@@ -217,7 +217,7 @@ int wusb_dev_sec_add(struct wusbhc *wusbhc,
+
+ result = usb_get_descriptor(usb_dev, USB_DT_SECURITY,
+ 0, secd, sizeof(*secd));
+- if (result < sizeof(*secd)) {
++ if (result < (int)sizeof(*secd)) {
+ dev_err(dev, "Can't read security descriptor or "
+ "not enough data: %d\n", result);
+ goto out;
+diff --git a/drivers/uwb/hwa-rc.c b/drivers/uwb/hwa-rc.c
+index 9a53912bdfe9..5d3ba747ae17 100644
+--- a/drivers/uwb/hwa-rc.c
++++ b/drivers/uwb/hwa-rc.c
+@@ -873,6 +873,7 @@ error_get_version:
+ error_rc_add:
+ usb_put_intf(iface);
+ usb_put_dev(hwarc->usb_dev);
++ kfree(hwarc);
+ error_alloc:
+ uwb_rc_put(uwb_rc);
+ error_rc_alloc:
+diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
+index 29756d88799b..6b86ca8772fb 100644
+--- a/drivers/vhost/net.c
++++ b/drivers/vhost/net.c
+@@ -396,13 +396,10 @@ static inline unsigned long busy_clock(void)
+ return local_clock() >> 10;
+ }
+
+-static bool vhost_can_busy_poll(struct vhost_dev *dev,
+- unsigned long endtime)
++static bool vhost_can_busy_poll(unsigned long endtime)
+ {
+- return likely(!need_resched()) &&
+- likely(!time_after(busy_clock(), endtime)) &&
+- likely(!signal_pending(current)) &&
+- !vhost_has_work(dev);
++ return likely(!need_resched() && !time_after(busy_clock(), endtime) &&
++ !signal_pending(current));
+ }
+
+ static void vhost_net_disable_vq(struct vhost_net *n,
+@@ -434,7 +431,8 @@ static int vhost_net_enable_vq(struct vhost_net *n,
+ static int vhost_net_tx_get_vq_desc(struct vhost_net *net,
+ struct vhost_virtqueue *vq,
+ struct iovec iov[], unsigned int iov_size,
+- unsigned int *out_num, unsigned int *in_num)
++ unsigned int *out_num, unsigned int *in_num,
++ bool *busyloop_intr)
+ {
+ unsigned long uninitialized_var(endtime);
+ int r = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov),
+@@ -443,9 +441,15 @@ static int vhost_net_tx_get_vq_desc(struct vhost_net *net,
+ if (r == vq->num && vq->busyloop_timeout) {
+ preempt_disable();
+ endtime = busy_clock() + vq->busyloop_timeout;
+- while (vhost_can_busy_poll(vq->dev, endtime) &&
+- vhost_vq_avail_empty(vq->dev, vq))
++ while (vhost_can_busy_poll(endtime)) {
++ if (vhost_has_work(vq->dev)) {
++ *busyloop_intr = true;
++ break;
++ }
++ if (!vhost_vq_avail_empty(vq->dev, vq))
++ break;
+ cpu_relax();
++ }
+ preempt_enable();
+ r = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov),
+ out_num, in_num, NULL, NULL);
+@@ -501,20 +505,24 @@ static void handle_tx(struct vhost_net *net)
+ zcopy = nvq->ubufs;
+
+ for (;;) {
++ bool busyloop_intr;
++
+ /* Release DMAs done buffers first */
+ if (zcopy)
+ vhost_zerocopy_signal_used(net, vq);
+
+-
++ busyloop_intr = false;
+ head = vhost_net_tx_get_vq_desc(net, vq, vq->iov,
+ ARRAY_SIZE(vq->iov),
+- &out, &in);
++ &out, &in, &busyloop_intr);
+ /* On error, stop handling until the next kick. */
+ if (unlikely(head < 0))
+ break;
+ /* Nothing new? Wait for eventfd to tell us they refilled. */
+ if (head == vq->num) {
+- if (unlikely(vhost_enable_notify(&net->dev, vq))) {
++ if (unlikely(busyloop_intr)) {
++ vhost_poll_queue(&vq->poll);
++ } else if (unlikely(vhost_enable_notify(&net->dev, vq))) {
+ vhost_disable_notify(&net->dev, vq);
+ continue;
+ }
+@@ -663,7 +671,8 @@ static int vhost_net_rx_peek_head_len(struct vhost_net *net, struct sock *sk)
+ preempt_disable();
+ endtime = busy_clock() + vq->busyloop_timeout;
+
+- while (vhost_can_busy_poll(&net->dev, endtime) &&
++ while (vhost_can_busy_poll(endtime) &&
++ !vhost_has_work(&net->dev) &&
+ !sk_has_rx_data(sk) &&
+ vhost_vq_avail_empty(&net->dev, vq))
+ cpu_relax();
+diff --git a/fs/dax.c b/fs/dax.c
+index 641192808bb6..94f9fe002b12 100644
+--- a/fs/dax.c
++++ b/fs/dax.c
+@@ -1007,21 +1007,12 @@ static vm_fault_t dax_load_hole(struct address_space *mapping, void *entry,
+ {
+ struct inode *inode = mapping->host;
+ unsigned long vaddr = vmf->address;
+- vm_fault_t ret = VM_FAULT_NOPAGE;
+- struct page *zero_page;
+- pfn_t pfn;
+-
+- zero_page = ZERO_PAGE(0);
+- if (unlikely(!zero_page)) {
+- ret = VM_FAULT_OOM;
+- goto out;
+- }
++ pfn_t pfn = pfn_to_pfn_t(my_zero_pfn(vaddr));
++ vm_fault_t ret;
+
+- pfn = page_to_pfn_t(zero_page);
+ dax_insert_mapping_entry(mapping, vmf, entry, pfn, RADIX_DAX_ZERO_PAGE,
+ false);
+ ret = vmf_insert_mixed(vmf->vma, vaddr, pfn);
+-out:
+ trace_dax_load_hole(inode, vmf, ret);
+ return ret;
+ }
+diff --git a/fs/ext2/inode.c b/fs/ext2/inode.c
+index 71635909df3b..b4e0501bcba1 100644
+--- a/fs/ext2/inode.c
++++ b/fs/ext2/inode.c
+@@ -1448,6 +1448,7 @@ struct inode *ext2_iget (struct super_block *sb, unsigned long ino)
+ }
+ inode->i_blocks = le32_to_cpu(raw_inode->i_blocks);
+ ei->i_flags = le32_to_cpu(raw_inode->i_flags);
++ ext2_set_inode_flags(inode);
+ ei->i_faddr = le32_to_cpu(raw_inode->i_faddr);
+ ei->i_frag_no = raw_inode->i_frag;
+ ei->i_frag_size = raw_inode->i_fsize;
+@@ -1517,7 +1518,6 @@ struct inode *ext2_iget (struct super_block *sb, unsigned long ino)
+ new_decode_dev(le32_to_cpu(raw_inode->i_block[1])));
+ }
+ brelse (bh);
+- ext2_set_inode_flags(inode);
+ unlock_new_inode(inode);
+ return inode;
+
+diff --git a/fs/iomap.c b/fs/iomap.c
+index 0d0bd8845586..af6144fd4919 100644
+--- a/fs/iomap.c
++++ b/fs/iomap.c
+@@ -811,6 +811,7 @@ struct iomap_dio {
+ atomic_t ref;
+ unsigned flags;
+ int error;
++ bool wait_for_completion;
+
+ union {
+ /* used during submission and for synchronous completion: */
+@@ -914,9 +915,8 @@ static void iomap_dio_bio_end_io(struct bio *bio)
+ iomap_dio_set_error(dio, blk_status_to_errno(bio->bi_status));
+
+ if (atomic_dec_and_test(&dio->ref)) {
+- if (is_sync_kiocb(dio->iocb)) {
++ if (dio->wait_for_completion) {
+ struct task_struct *waiter = dio->submit.waiter;
+-
+ WRITE_ONCE(dio->submit.waiter, NULL);
+ wake_up_process(waiter);
+ } else if (dio->flags & IOMAP_DIO_WRITE) {
+@@ -1131,13 +1131,12 @@ iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
+ dio->end_io = end_io;
+ dio->error = 0;
+ dio->flags = 0;
++ dio->wait_for_completion = is_sync_kiocb(iocb);
+
+ dio->submit.iter = iter;
+- if (is_sync_kiocb(iocb)) {
+- dio->submit.waiter = current;
+- dio->submit.cookie = BLK_QC_T_NONE;
+- dio->submit.last_queue = NULL;
+- }
++ dio->submit.waiter = current;
++ dio->submit.cookie = BLK_QC_T_NONE;
++ dio->submit.last_queue = NULL;
+
+ if (iov_iter_rw(iter) == READ) {
+ if (pos >= dio->i_size)
+@@ -1187,7 +1186,7 @@ iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
+ dio_warn_stale_pagecache(iocb->ki_filp);
+ ret = 0;
+
+- if (iov_iter_rw(iter) == WRITE && !is_sync_kiocb(iocb) &&
++ if (iov_iter_rw(iter) == WRITE && !dio->wait_for_completion &&
+ !inode->i_sb->s_dio_done_wq) {
+ ret = sb_init_dio_done_wq(inode->i_sb);
+ if (ret < 0)
+@@ -1202,8 +1201,10 @@ iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
+ iomap_dio_actor);
+ if (ret <= 0) {
+ /* magic error code to fall back to buffered I/O */
+- if (ret == -ENOTBLK)
++ if (ret == -ENOTBLK) {
++ dio->wait_for_completion = true;
+ ret = 0;
++ }
+ break;
+ }
+ pos += ret;
+@@ -1224,7 +1225,7 @@ iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
+ dio->flags &= ~IOMAP_DIO_NEED_SYNC;
+
+ if (!atomic_dec_and_test(&dio->ref)) {
+- if (!is_sync_kiocb(iocb))
++ if (!dio->wait_for_completion)
+ return -EIOCBQUEUED;
+
+ for (;;) {
+diff --git a/fs/isofs/inode.c b/fs/isofs/inode.c
+index ec3fba7d492f..488a9e7f8f66 100644
+--- a/fs/isofs/inode.c
++++ b/fs/isofs/inode.c
+@@ -24,6 +24,7 @@
+ #include <linux/mpage.h>
+ #include <linux/user_namespace.h>
+ #include <linux/seq_file.h>
++#include <linux/blkdev.h>
+
+ #include "isofs.h"
+ #include "zisofs.h"
+@@ -653,6 +654,12 @@ static int isofs_fill_super(struct super_block *s, void *data, int silent)
+ /*
+ * What if bugger tells us to go beyond page size?
+ */
++ if (bdev_logical_block_size(s->s_bdev) > 2048) {
++ printk(KERN_WARNING
++ "ISOFS: unsupported/invalid hardware sector size %d\n",
++ bdev_logical_block_size(s->s_bdev));
++ goto out_freesbi;
++ }
+ opt.blocksize = sb_min_blocksize(s, opt.blocksize);
+
+ sbi->s_high_sierra = 0; /* default is iso9660 */
+diff --git a/fs/locks.c b/fs/locks.c
+index db7b6917d9c5..fafce5a8d74f 100644
+--- a/fs/locks.c
++++ b/fs/locks.c
+@@ -2072,6 +2072,13 @@ static pid_t locks_translate_pid(struct file_lock *fl, struct pid_namespace *ns)
+ return -1;
+ if (IS_REMOTELCK(fl))
+ return fl->fl_pid;
++ /*
++ * If the flock owner process is dead and its pid has been already
++ * freed, the translation below won't work, but we still want to show
++ * flock owner pid number in init pidns.
++ */
++ if (ns == &init_pid_ns)
++ return (pid_t)fl->fl_pid;
+
+ rcu_read_lock();
+ pid = find_pid_ns(fl->fl_pid, &init_pid_ns);
+diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
+index 5d99e8810b85..0dded931f119 100644
+--- a/fs/nfsd/nfs4proc.c
++++ b/fs/nfsd/nfs4proc.c
+@@ -1726,6 +1726,7 @@ nfsd4_proc_compound(struct svc_rqst *rqstp)
+ if (status) {
+ op = &args->ops[0];
+ op->status = status;
++ resp->opcnt = 1;
+ goto encode_op;
+ }
+
+diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h
+index ca1d2cc2cdfa..18863d56273c 100644
+--- a/include/linux/arm-smccc.h
++++ b/include/linux/arm-smccc.h
+@@ -199,47 +199,57 @@ asmlinkage void __arm_smccc_hvc(unsigned long a0, unsigned long a1,
+
+ #define __declare_arg_0(a0, res) \
+ struct arm_smccc_res *___res = res; \
+- register u32 r0 asm("r0") = a0; \
++ register unsigned long r0 asm("r0") = (u32)a0; \
+ register unsigned long r1 asm("r1"); \
+ register unsigned long r2 asm("r2"); \
+ register unsigned long r3 asm("r3")
+
+ #define __declare_arg_1(a0, a1, res) \
++ typeof(a1) __a1 = a1; \
+ struct arm_smccc_res *___res = res; \
+- register u32 r0 asm("r0") = a0; \
+- register typeof(a1) r1 asm("r1") = a1; \
++ register unsigned long r0 asm("r0") = (u32)a0; \
++ register unsigned long r1 asm("r1") = __a1; \
+ register unsigned long r2 asm("r2"); \
+ register unsigned long r3 asm("r3")
+
+ #define __declare_arg_2(a0, a1, a2, res) \
++ typeof(a1) __a1 = a1; \
++ typeof(a2) __a2 = a2; \
+ struct arm_smccc_res *___res = res; \
+- register u32 r0 asm("r0") = a0; \
+- register typeof(a1) r1 asm("r1") = a1; \
+- register typeof(a2) r2 asm("r2") = a2; \
++ register unsigned long r0 asm("r0") = (u32)a0; \
++ register unsigned long r1 asm("r1") = __a1; \
++ register unsigned long r2 asm("r2") = __a2; \
+ register unsigned long r3 asm("r3")
+
+ #define __declare_arg_3(a0, a1, a2, a3, res) \
++ typeof(a1) __a1 = a1; \
++ typeof(a2) __a2 = a2; \
++ typeof(a3) __a3 = a3; \
+ struct arm_smccc_res *___res = res; \
+- register u32 r0 asm("r0") = a0; \
+- register typeof(a1) r1 asm("r1") = a1; \
+- register typeof(a2) r2 asm("r2") = a2; \
+- register typeof(a3) r3 asm("r3") = a3
++ register unsigned long r0 asm("r0") = (u32)a0; \
++ register unsigned long r1 asm("r1") = __a1; \
++ register unsigned long r2 asm("r2") = __a2; \
++ register unsigned long r3 asm("r3") = __a3
+
+ #define __declare_arg_4(a0, a1, a2, a3, a4, res) \
++ typeof(a4) __a4 = a4; \
+ __declare_arg_3(a0, a1, a2, a3, res); \
+- register typeof(a4) r4 asm("r4") = a4
++ register unsigned long r4 asm("r4") = __a4
+
+ #define __declare_arg_5(a0, a1, a2, a3, a4, a5, res) \
++ typeof(a5) __a5 = a5; \
+ __declare_arg_4(a0, a1, a2, a3, a4, res); \
+- register typeof(a5) r5 asm("r5") = a5
++ register unsigned long r5 asm("r5") = __a5
+
+ #define __declare_arg_6(a0, a1, a2, a3, a4, a5, a6, res) \
++ typeof(a6) __a6 = a6; \
+ __declare_arg_5(a0, a1, a2, a3, a4, a5, res); \
+- register typeof(a6) r6 asm("r6") = a6
++ register unsigned long r6 asm("r6") = __a6
+
+ #define __declare_arg_7(a0, a1, a2, a3, a4, a5, a6, a7, res) \
++ typeof(a7) __a7 = a7; \
+ __declare_arg_6(a0, a1, a2, a3, a4, a5, a6, res); \
+- register typeof(a7) r7 asm("r7") = a7
++ register unsigned long r7 asm("r7") = __a7
+
+ #define ___declare_args(count, ...) __declare_arg_ ## count(__VA_ARGS__)
+ #define __declare_args(count, ...) ___declare_args(count, __VA_ARGS__)
+diff --git a/include/linux/bitfield.h b/include/linux/bitfield.h
+index cf2588d81148..147a7bb341dd 100644
+--- a/include/linux/bitfield.h
++++ b/include/linux/bitfield.h
+@@ -104,7 +104,7 @@
+ (typeof(_mask))(((_reg) & (_mask)) >> __bf_shf(_mask)); \
+ })
+
+-extern void __compiletime_warning("value doesn't fit into mask")
++extern void __compiletime_error("value doesn't fit into mask")
+ __field_overflow(void);
+ extern void __compiletime_error("bad bitfield mask")
+ __bad_mask(void);
+@@ -121,8 +121,8 @@ static __always_inline u64 field_mask(u64 field)
+ #define ____MAKE_OP(type,base,to,from) \
+ static __always_inline __##type type##_encode_bits(base v, base field) \
+ { \
+- if (__builtin_constant_p(v) && (v & ~field_multiplier(field))) \
+- __field_overflow(); \
++ if (__builtin_constant_p(v) && (v & ~field_mask(field))) \
++ __field_overflow(); \
+ return to((v & field_mask(field)) * field_multiplier(field)); \
+ } \
+ static __always_inline __##type type##_replace_bits(__##type old, \
+diff --git a/include/linux/platform_data/ina2xx.h b/include/linux/platform_data/ina2xx.h
+index 9abc0ca7259b..9f0aa1b48c78 100644
+--- a/include/linux/platform_data/ina2xx.h
++++ b/include/linux/platform_data/ina2xx.h
+@@ -1,7 +1,7 @@
+ /*
+ * Driver for Texas Instruments INA219, INA226 power monitor chips
+ *
+- * Copyright (C) 2012 Lothar Felten <l-felten@ti.com>
++ * Copyright (C) 2012 Lothar Felten <lothar.felten@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+diff --git a/include/linux/posix-timers.h b/include/linux/posix-timers.h
+index c85704fcdbd2..ee7e987ea1b4 100644
+--- a/include/linux/posix-timers.h
++++ b/include/linux/posix-timers.h
+@@ -95,8 +95,8 @@ struct k_itimer {
+ clockid_t it_clock;
+ timer_t it_id;
+ int it_active;
+- int it_overrun;
+- int it_overrun_last;
++ s64 it_overrun;
++ s64 it_overrun_last;
+ int it_requeue_pending;
+ int it_sigev_notify;
+ ktime_t it_interval;
+diff --git a/include/linux/power_supply.h b/include/linux/power_supply.h
+index b21c4bd96b84..f80769175c56 100644
+--- a/include/linux/power_supply.h
++++ b/include/linux/power_supply.h
+@@ -269,6 +269,7 @@ struct power_supply {
+ spinlock_t changed_lock;
+ bool changed;
+ bool initialized;
++ bool removing;
+ atomic_t use_cnt;
+ #ifdef CONFIG_THERMAL
+ struct thermal_zone_device *tzd;
+diff --git a/include/linux/regulator/machine.h b/include/linux/regulator/machine.h
+index 3468703d663a..a459a5e973a7 100644
+--- a/include/linux/regulator/machine.h
++++ b/include/linux/regulator/machine.h
+@@ -48,9 +48,9 @@ struct regulator;
+ * DISABLE_IN_SUSPEND - turn off regulator in suspend states
+ * ENABLE_IN_SUSPEND - keep regulator on in suspend states
+ */
+-#define DO_NOTHING_IN_SUSPEND (-1)
+-#define DISABLE_IN_SUSPEND 0
+-#define ENABLE_IN_SUSPEND 1
++#define DO_NOTHING_IN_SUSPEND 0
++#define DISABLE_IN_SUSPEND 1
++#define ENABLE_IN_SUSPEND 2
+
+ /* Regulator active discharge flags */
+ enum regulator_active_discharge {
+diff --git a/include/linux/uio.h b/include/linux/uio.h
+index 409c845d4cd3..422b1c01ee0d 100644
+--- a/include/linux/uio.h
++++ b/include/linux/uio.h
+@@ -172,7 +172,7 @@ size_t copy_from_iter_flushcache(void *addr, size_t bytes, struct iov_iter *i)
+ static __always_inline __must_check
+ size_t copy_to_iter_mcsafe(void *addr, size_t bytes, struct iov_iter *i)
+ {
+- if (unlikely(!check_copy_size(addr, bytes, false)))
++ if (unlikely(!check_copy_size(addr, bytes, true)))
+ return 0;
+ else
+ return _copy_to_iter_mcsafe(addr, bytes, i);
+diff --git a/include/media/v4l2-fh.h b/include/media/v4l2-fh.h
+index ea73fef8bdc0..8586cfb49828 100644
+--- a/include/media/v4l2-fh.h
++++ b/include/media/v4l2-fh.h
+@@ -38,10 +38,13 @@ struct v4l2_ctrl_handler;
+ * @prio: priority of the file handler, as defined by &enum v4l2_priority
+ *
+ * @wait: event' s wait queue
++ * @subscribe_lock: serialise changes to the subscribed list; guarantee that
++ * the add and del event callbacks are orderly called
+ * @subscribed: list of subscribed events
+ * @available: list of events waiting to be dequeued
+ * @navailable: number of available events at @available list
+ * @sequence: event sequence number
++ *
+ * @m2m_ctx: pointer to &struct v4l2_m2m_ctx
+ */
+ struct v4l2_fh {
+@@ -52,6 +55,7 @@ struct v4l2_fh {
+
+ /* Events */
+ wait_queue_head_t wait;
++ struct mutex subscribe_lock;
+ struct list_head subscribed;
+ struct list_head available;
+ unsigned int navailable;
+diff --git a/include/rdma/opa_addr.h b/include/rdma/opa_addr.h
+index 2bbb7a67e643..66d4393d339c 100644
+--- a/include/rdma/opa_addr.h
++++ b/include/rdma/opa_addr.h
+@@ -120,7 +120,7 @@ static inline bool rdma_is_valid_unicast_lid(struct rdma_ah_attr *attr)
+ if (attr->type == RDMA_AH_ATTR_TYPE_IB) {
+ if (!rdma_ah_get_dlid(attr) ||
+ rdma_ah_get_dlid(attr) >=
+- be32_to_cpu(IB_MULTICAST_LID_BASE))
++ be16_to_cpu(IB_MULTICAST_LID_BASE))
+ return false;
+ } else if (attr->type == RDMA_AH_ATTR_TYPE_OPA) {
+ if (!rdma_ah_get_dlid(attr) ||
+diff --git a/kernel/bpf/sockmap.c b/kernel/bpf/sockmap.c
+index 58899601fccf..ed707b21d152 100644
+--- a/kernel/bpf/sockmap.c
++++ b/kernel/bpf/sockmap.c
+@@ -1430,12 +1430,15 @@ out:
+ static void smap_write_space(struct sock *sk)
+ {
+ struct smap_psock *psock;
++ void (*write_space)(struct sock *sk);
+
+ rcu_read_lock();
+ psock = smap_psock_sk(sk);
+ if (likely(psock && test_bit(SMAP_TX_RUNNING, &psock->state)))
+ schedule_work(&psock->tx_work);
++ write_space = psock->save_write_space;
+ rcu_read_unlock();
++ write_space(sk);
+ }
+
+ static void smap_stop_sock(struct smap_psock *psock, struct sock *sk)
+@@ -2143,7 +2146,9 @@ static struct bpf_map *sock_hash_alloc(union bpf_attr *attr)
+ return ERR_PTR(-EPERM);
+
+ /* check sanity of attributes */
+- if (attr->max_entries == 0 || attr->value_size != 4 ||
++ if (attr->max_entries == 0 ||
++ attr->key_size == 0 ||
++ attr->value_size != 4 ||
+ attr->map_flags & ~SOCK_CREATE_FLAG_MASK)
+ return ERR_PTR(-EINVAL);
+
+@@ -2270,8 +2275,10 @@ static struct htab_elem *alloc_sock_hash_elem(struct bpf_htab *htab,
+ }
+ l_new = kmalloc_node(htab->elem_size, GFP_ATOMIC | __GFP_NOWARN,
+ htab->map.numa_node);
+- if (!l_new)
++ if (!l_new) {
++ atomic_dec(&htab->count);
+ return ERR_PTR(-ENOMEM);
++ }
+
+ memcpy(l_new->key, key, key_size);
+ l_new->sk = sk;
+diff --git a/kernel/events/hw_breakpoint.c b/kernel/events/hw_breakpoint.c
+index 6e28d2866be5..314e2a9040c7 100644
+--- a/kernel/events/hw_breakpoint.c
++++ b/kernel/events/hw_breakpoint.c
+@@ -400,16 +400,35 @@ int dbg_release_bp_slot(struct perf_event *bp)
+ return 0;
+ }
+
+-static int validate_hw_breakpoint(struct perf_event *bp)
++#ifndef hw_breakpoint_arch_parse
++int hw_breakpoint_arch_parse(struct perf_event *bp,
++ const struct perf_event_attr *attr,
++ struct arch_hw_breakpoint *hw)
+ {
+- int ret;
++ int err;
+
+- ret = arch_validate_hwbkpt_settings(bp);
+- if (ret)
+- return ret;
++ err = arch_validate_hwbkpt_settings(bp);
++ if (err)
++ return err;
++
++ *hw = bp->hw.info;
++
++ return 0;
++}
++#endif
++
++static int hw_breakpoint_parse(struct perf_event *bp,
++ const struct perf_event_attr *attr,
++ struct arch_hw_breakpoint *hw)
++{
++ int err;
++
++ err = hw_breakpoint_arch_parse(bp, attr, hw);
++ if (err)
++ return err;
+
+ if (arch_check_bp_in_kernelspace(bp)) {
+- if (bp->attr.exclude_kernel)
++ if (attr->exclude_kernel)
+ return -EINVAL;
+ /*
+ * Don't let unprivileged users set a breakpoint in the trap
+@@ -424,19 +443,22 @@ static int validate_hw_breakpoint(struct perf_event *bp)
+
+ int register_perf_hw_breakpoint(struct perf_event *bp)
+ {
+- int ret;
+-
+- ret = reserve_bp_slot(bp);
+- if (ret)
+- return ret;
++ struct arch_hw_breakpoint hw;
++ int err;
+
+- ret = validate_hw_breakpoint(bp);
++ err = reserve_bp_slot(bp);
++ if (err)
++ return err;
+
+- /* if arch_validate_hwbkpt_settings() fails then release bp slot */
+- if (ret)
++ err = hw_breakpoint_parse(bp, &bp->attr, &hw);
++ if (err) {
+ release_bp_slot(bp);
++ return err;
++ }
+
+- return ret;
++ bp->hw.info = hw;
++
++ return 0;
+ }
+
+ /**
+@@ -464,6 +486,7 @@ modify_user_hw_breakpoint_check(struct perf_event *bp, struct perf_event_attr *a
+ u64 old_len = bp->attr.bp_len;
+ int old_type = bp->attr.bp_type;
+ bool modify = attr->bp_type != old_type;
++ struct arch_hw_breakpoint hw;
+ int err = 0;
+
+ bp->attr.bp_addr = attr->bp_addr;
+@@ -473,7 +496,7 @@ modify_user_hw_breakpoint_check(struct perf_event *bp, struct perf_event_attr *a
+ if (check && memcmp(&bp->attr, attr, sizeof(*attr)))
+ return -EINVAL;
+
+- err = validate_hw_breakpoint(bp);
++ err = hw_breakpoint_parse(bp, attr, &hw);
+ if (!err && modify)
+ err = modify_bp_slot(bp, old_type);
+
+@@ -484,7 +507,9 @@ modify_user_hw_breakpoint_check(struct perf_event *bp, struct perf_event_attr *a
+ return err;
+ }
+
++ bp->hw.info = hw;
+ bp->attr.disabled = attr->disabled;
++
+ return 0;
+ }
+
+diff --git a/kernel/module.c b/kernel/module.c
+index f475f30eed8c..4a6b9c6d5f2c 100644
+--- a/kernel/module.c
++++ b/kernel/module.c
+@@ -4067,7 +4067,7 @@ static unsigned long mod_find_symname(struct module *mod, const char *name)
+
+ for (i = 0; i < kallsyms->num_symtab; i++)
+ if (strcmp(name, symname(kallsyms, i)) == 0 &&
+- kallsyms->symtab[i].st_info != 'U')
++ kallsyms->symtab[i].st_shndx != SHN_UNDEF)
+ return kallsyms->symtab[i].st_value;
+ return 0;
+ }
+@@ -4113,6 +4113,10 @@ int module_kallsyms_on_each_symbol(int (*fn)(void *, const char *,
+ if (mod->state == MODULE_STATE_UNFORMED)
+ continue;
+ for (i = 0; i < kallsyms->num_symtab; i++) {
++
++ if (kallsyms->symtab[i].st_shndx == SHN_UNDEF)
++ continue;
++
+ ret = fn(data, symname(kallsyms, i),
+ mod, kallsyms->symtab[i].st_value);
+ if (ret != 0)
+diff --git a/kernel/time/alarmtimer.c b/kernel/time/alarmtimer.c
+index 639321bf2e39..fa5de5e8de61 100644
+--- a/kernel/time/alarmtimer.c
++++ b/kernel/time/alarmtimer.c
+@@ -581,11 +581,11 @@ static void alarm_timer_rearm(struct k_itimer *timr)
+ * @timr: Pointer to the posixtimer data struct
+ * @now: Current time to forward the timer against
+ */
+-static int alarm_timer_forward(struct k_itimer *timr, ktime_t now)
++static s64 alarm_timer_forward(struct k_itimer *timr, ktime_t now)
+ {
+ struct alarm *alarm = &timr->it.alarm.alarmtimer;
+
+- return (int) alarm_forward(alarm, timr->it_interval, now);
++ return alarm_forward(alarm, timr->it_interval, now);
+ }
+
+ /**
+@@ -808,7 +808,8 @@ static int alarm_timer_nsleep(const clockid_t which_clock, int flags,
+ /* Convert (if necessary) to absolute time */
+ if (flags != TIMER_ABSTIME) {
+ ktime_t now = alarm_bases[type].gettime();
+- exp = ktime_add(now, exp);
++
++ exp = ktime_add_safe(now, exp);
+ }
+
+ ret = alarmtimer_do_nsleep(&alarm, exp, type);
+diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
+index 9cdf54b04ca8..294d7b65af33 100644
+--- a/kernel/time/posix-cpu-timers.c
++++ b/kernel/time/posix-cpu-timers.c
+@@ -85,7 +85,7 @@ static void bump_cpu_timer(struct k_itimer *timer, u64 now)
+ continue;
+
+ timer->it.cpu.expires += incr;
+- timer->it_overrun += 1 << i;
++ timer->it_overrun += 1LL << i;
+ delta -= incr;
+ }
+ }
+diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c
+index e08ce3f27447..e475012bff7e 100644
+--- a/kernel/time/posix-timers.c
++++ b/kernel/time/posix-timers.c
+@@ -283,6 +283,17 @@ static __init int init_posix_timers(void)
+ }
+ __initcall(init_posix_timers);
+
++/*
++ * The siginfo si_overrun field and the return value of timer_getoverrun(2)
++ * are of type int. Clamp the overrun value to INT_MAX
++ */
++static inline int timer_overrun_to_int(struct k_itimer *timr, int baseval)
++{
++ s64 sum = timr->it_overrun_last + (s64)baseval;
++
++ return sum > (s64)INT_MAX ? INT_MAX : (int)sum;
++}
++
+ static void common_hrtimer_rearm(struct k_itimer *timr)
+ {
+ struct hrtimer *timer = &timr->it.real.timer;
+@@ -290,9 +301,8 @@ static void common_hrtimer_rearm(struct k_itimer *timr)
+ if (!timr->it_interval)
+ return;
+
+- timr->it_overrun += (unsigned int) hrtimer_forward(timer,
+- timer->base->get_time(),
+- timr->it_interval);
++ timr->it_overrun += hrtimer_forward(timer, timer->base->get_time(),
++ timr->it_interval);
+ hrtimer_restart(timer);
+ }
+
+@@ -321,10 +331,10 @@ void posixtimer_rearm(struct siginfo *info)
+
+ timr->it_active = 1;
+ timr->it_overrun_last = timr->it_overrun;
+- timr->it_overrun = -1;
++ timr->it_overrun = -1LL;
+ ++timr->it_requeue_pending;
+
+- info->si_overrun += timr->it_overrun_last;
++ info->si_overrun = timer_overrun_to_int(timr, info->si_overrun);
+ }
+
+ unlock_timer(timr, flags);
+@@ -418,9 +428,8 @@ static enum hrtimer_restart posix_timer_fn(struct hrtimer *timer)
+ now = ktime_add(now, kj);
+ }
+ #endif
+- timr->it_overrun += (unsigned int)
+- hrtimer_forward(timer, now,
+- timr->it_interval);
++ timr->it_overrun += hrtimer_forward(timer, now,
++ timr->it_interval);
+ ret = HRTIMER_RESTART;
+ ++timr->it_requeue_pending;
+ timr->it_active = 1;
+@@ -524,7 +533,7 @@ static int do_timer_create(clockid_t which_clock, struct sigevent *event,
+ new_timer->it_id = (timer_t) new_timer_id;
+ new_timer->it_clock = which_clock;
+ new_timer->kclock = kc;
+- new_timer->it_overrun = -1;
++ new_timer->it_overrun = -1LL;
+
+ if (event) {
+ rcu_read_lock();
+@@ -645,11 +654,11 @@ static ktime_t common_hrtimer_remaining(struct k_itimer *timr, ktime_t now)
+ return __hrtimer_expires_remaining_adjusted(timer, now);
+ }
+
+-static int common_hrtimer_forward(struct k_itimer *timr, ktime_t now)
++static s64 common_hrtimer_forward(struct k_itimer *timr, ktime_t now)
+ {
+ struct hrtimer *timer = &timr->it.real.timer;
+
+- return (int)hrtimer_forward(timer, now, timr->it_interval);
++ return hrtimer_forward(timer, now, timr->it_interval);
+ }
+
+ /*
+@@ -789,7 +798,7 @@ SYSCALL_DEFINE1(timer_getoverrun, timer_t, timer_id)
+ if (!timr)
+ return -EINVAL;
+
+- overrun = timr->it_overrun_last;
++ overrun = timer_overrun_to_int(timr, 0);
+ unlock_timer(timr, flags);
+
+ return overrun;
+diff --git a/kernel/time/posix-timers.h b/kernel/time/posix-timers.h
+index 151e28f5bf30..ddb21145211a 100644
+--- a/kernel/time/posix-timers.h
++++ b/kernel/time/posix-timers.h
+@@ -19,7 +19,7 @@ struct k_clock {
+ void (*timer_get)(struct k_itimer *timr,
+ struct itimerspec64 *cur_setting);
+ void (*timer_rearm)(struct k_itimer *timr);
+- int (*timer_forward)(struct k_itimer *timr, ktime_t now);
++ s64 (*timer_forward)(struct k_itimer *timr, ktime_t now);
+ ktime_t (*timer_remaining)(struct k_itimer *timr, ktime_t now);
+ int (*timer_try_to_cancel)(struct k_itimer *timr);
+ void (*timer_arm)(struct k_itimer *timr, ktime_t expires,
+diff --git a/lib/klist.c b/lib/klist.c
+index 0507fa5d84c5..f6b547812fe3 100644
+--- a/lib/klist.c
++++ b/lib/klist.c
+@@ -336,8 +336,9 @@ struct klist_node *klist_prev(struct klist_iter *i)
+ void (*put)(struct klist_node *) = i->i_klist->put;
+ struct klist_node *last = i->i_cur;
+ struct klist_node *prev;
++ unsigned long flags;
+
+- spin_lock(&i->i_klist->k_lock);
++ spin_lock_irqsave(&i->i_klist->k_lock, flags);
+
+ if (last) {
+ prev = to_klist_node(last->n_node.prev);
+@@ -356,7 +357,7 @@ struct klist_node *klist_prev(struct klist_iter *i)
+ prev = to_klist_node(prev->n_node.prev);
+ }
+
+- spin_unlock(&i->i_klist->k_lock);
++ spin_unlock_irqrestore(&i->i_klist->k_lock, flags);
+
+ if (put && last)
+ put(last);
+@@ -377,8 +378,9 @@ struct klist_node *klist_next(struct klist_iter *i)
+ void (*put)(struct klist_node *) = i->i_klist->put;
+ struct klist_node *last = i->i_cur;
+ struct klist_node *next;
++ unsigned long flags;
+
+- spin_lock(&i->i_klist->k_lock);
++ spin_lock_irqsave(&i->i_klist->k_lock, flags);
+
+ if (last) {
+ next = to_klist_node(last->n_node.next);
+@@ -397,7 +399,7 @@ struct klist_node *klist_next(struct klist_iter *i)
+ next = to_klist_node(next->n_node.next);
+ }
+
+- spin_unlock(&i->i_klist->k_lock);
++ spin_unlock_irqrestore(&i->i_klist->k_lock, flags);
+
+ if (put && last)
+ put(last);
+diff --git a/net/6lowpan/iphc.c b/net/6lowpan/iphc.c
+index 6b1042e21656..52fad5dad9f7 100644
+--- a/net/6lowpan/iphc.c
++++ b/net/6lowpan/iphc.c
+@@ -770,6 +770,7 @@ int lowpan_header_decompress(struct sk_buff *skb, const struct net_device *dev,
+ hdr.hop_limit, &hdr.daddr);
+
+ skb_push(skb, sizeof(hdr));
++ skb_reset_mac_header(skb);
+ skb_reset_network_header(skb);
+ skb_copy_to_linear_data(skb, &hdr, sizeof(hdr));
+
+diff --git a/net/ipv4/tcp_bbr.c b/net/ipv4/tcp_bbr.c
+index 4bfff3c87e8e..e99d6afb70ef 100644
+--- a/net/ipv4/tcp_bbr.c
++++ b/net/ipv4/tcp_bbr.c
+@@ -95,11 +95,10 @@ struct bbr {
+ u32 mode:3, /* current bbr_mode in state machine */
+ prev_ca_state:3, /* CA state on previous ACK */
+ packet_conservation:1, /* use packet conservation? */
+- restore_cwnd:1, /* decided to revert cwnd to old value */
+ round_start:1, /* start of packet-timed tx->ack round? */
+ idle_restart:1, /* restarting after idle? */
+ probe_rtt_round_done:1, /* a BBR_PROBE_RTT round at 4 pkts? */
+- unused:12,
++ unused:13,
+ lt_is_sampling:1, /* taking long-term ("LT") samples now? */
+ lt_rtt_cnt:7, /* round trips in long-term interval */
+ lt_use_bw:1; /* use lt_bw as our bw estimate? */
+@@ -175,6 +174,8 @@ static const u32 bbr_lt_bw_diff = 4000 / 8;
+ /* If we estimate we're policed, use lt_bw for this many round trips: */
+ static const u32 bbr_lt_bw_max_rtts = 48;
+
++static void bbr_check_probe_rtt_done(struct sock *sk);
++
+ /* Do we estimate that STARTUP filled the pipe? */
+ static bool bbr_full_bw_reached(const struct sock *sk)
+ {
+@@ -305,6 +306,8 @@ static void bbr_cwnd_event(struct sock *sk, enum tcp_ca_event event)
+ */
+ if (bbr->mode == BBR_PROBE_BW)
+ bbr_set_pacing_rate(sk, bbr_bw(sk), BBR_UNIT);
++ else if (bbr->mode == BBR_PROBE_RTT)
++ bbr_check_probe_rtt_done(sk);
+ }
+ }
+
+@@ -392,17 +395,11 @@ static bool bbr_set_cwnd_to_recover_or_restore(
+ cwnd = tcp_packets_in_flight(tp) + acked;
+ } else if (prev_state >= TCP_CA_Recovery && state < TCP_CA_Recovery) {
+ /* Exiting loss recovery; restore cwnd saved before recovery. */
+- bbr->restore_cwnd = 1;
++ cwnd = max(cwnd, bbr->prior_cwnd);
+ bbr->packet_conservation = 0;
+ }
+ bbr->prev_ca_state = state;
+
+- if (bbr->restore_cwnd) {
+- /* Restore cwnd after exiting loss recovery or PROBE_RTT. */
+- cwnd = max(cwnd, bbr->prior_cwnd);
+- bbr->restore_cwnd = 0;
+- }
+-
+ if (bbr->packet_conservation) {
+ *new_cwnd = max(cwnd, tcp_packets_in_flight(tp) + acked);
+ return true; /* yes, using packet conservation */
+@@ -744,6 +741,20 @@ static void bbr_check_drain(struct sock *sk, const struct rate_sample *rs)
+ bbr_reset_probe_bw_mode(sk); /* we estimate queue is drained */
+ }
+
++static void bbr_check_probe_rtt_done(struct sock *sk)
++{
++ struct tcp_sock *tp = tcp_sk(sk);
++ struct bbr *bbr = inet_csk_ca(sk);
++
++ if (!(bbr->probe_rtt_done_stamp &&
++ after(tcp_jiffies32, bbr->probe_rtt_done_stamp)))
++ return;
++
++ bbr->min_rtt_stamp = tcp_jiffies32; /* wait a while until PROBE_RTT */
++ tp->snd_cwnd = max(tp->snd_cwnd, bbr->prior_cwnd);
++ bbr_reset_mode(sk);
++}
++
+ /* The goal of PROBE_RTT mode is to have BBR flows cooperatively and
+ * periodically drain the bottleneck queue, to converge to measure the true
+ * min_rtt (unloaded propagation delay). This allows the flows to keep queues
+@@ -802,12 +813,8 @@ static void bbr_update_min_rtt(struct sock *sk, const struct rate_sample *rs)
+ } else if (bbr->probe_rtt_done_stamp) {
+ if (bbr->round_start)
+ bbr->probe_rtt_round_done = 1;
+- if (bbr->probe_rtt_round_done &&
+- after(tcp_jiffies32, bbr->probe_rtt_done_stamp)) {
+- bbr->min_rtt_stamp = tcp_jiffies32;
+- bbr->restore_cwnd = 1; /* snap to prior_cwnd */
+- bbr_reset_mode(sk);
+- }
++ if (bbr->probe_rtt_round_done)
++ bbr_check_probe_rtt_done(sk);
+ }
+ }
+ /* Restart after idle ends only once we process a new S/ACK for data */
+@@ -858,7 +865,6 @@ static void bbr_init(struct sock *sk)
+ bbr->has_seen_rtt = 0;
+ bbr_init_pacing_rate_from_rtt(sk);
+
+- bbr->restore_cwnd = 0;
+ bbr->round_start = 0;
+ bbr->idle_restart = 0;
+ bbr->full_bw_reached = 0;
+diff --git a/net/ncsi/ncsi-netlink.c b/net/ncsi/ncsi-netlink.c
+index 82e6edf9c5d9..45f33d6dedf7 100644
+--- a/net/ncsi/ncsi-netlink.c
++++ b/net/ncsi/ncsi-netlink.c
+@@ -100,7 +100,7 @@ static int ncsi_write_package_info(struct sk_buff *skb,
+ bool found;
+ int rc;
+
+- if (id > ndp->package_num) {
++ if (id > ndp->package_num - 1) {
+ netdev_info(ndp->ndev.dev, "NCSI: No package with id %u\n", id);
+ return -ENODEV;
+ }
+@@ -240,7 +240,7 @@ static int ncsi_pkg_info_all_nl(struct sk_buff *skb,
+ return 0; /* done */
+
+ hdr = genlmsg_put(skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq,
+- &ncsi_genl_family, 0, NCSI_CMD_PKG_INFO);
++ &ncsi_genl_family, NLM_F_MULTI, NCSI_CMD_PKG_INFO);
+ if (!hdr) {
+ rc = -EMSGSIZE;
+ goto err;
+diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c
+index 2ccf194c3ebb..8015e50e8d0a 100644
+--- a/net/tls/tls_main.c
++++ b/net/tls/tls_main.c
+@@ -222,9 +222,14 @@ static void tls_write_space(struct sock *sk)
+ {
+ struct tls_context *ctx = tls_get_ctx(sk);
+
+- /* We are already sending pages, ignore notification */
+- if (ctx->in_tcp_sendpages)
++ /* If in_tcp_sendpages call lower protocol write space handler
++ * to ensure we wake up any waiting operations there. For example
++ * if do_tcp_sendpages where to call sk_wait_event.
++ */
++ if (ctx->in_tcp_sendpages) {
++ ctx->sk_write_space(sk);
+ return;
++ }
+
+ if (!sk->sk_write_pending && tls_is_pending_closed_record(ctx)) {
+ gfp_t sk_allocation = sk->sk_allocation;
+diff --git a/sound/aoa/core/gpio-feature.c b/sound/aoa/core/gpio-feature.c
+index 71960089e207..65557421fe0b 100644
+--- a/sound/aoa/core/gpio-feature.c
++++ b/sound/aoa/core/gpio-feature.c
+@@ -88,8 +88,10 @@ static struct device_node *get_gpio(char *name,
+ }
+
+ reg = of_get_property(np, "reg", NULL);
+- if (!reg)
++ if (!reg) {
++ of_node_put(np);
+ return NULL;
++ }
+
+ *gpioptr = *reg;
+
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 647ae1a71e10..28dc5e124995 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2535,7 +2535,8 @@ static const struct pci_device_id azx_ids[] = {
+ .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_SB },
+ /* AMD Raven */
+ { PCI_DEVICE(0x1022, 0x15e3),
+- .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_SB },
++ .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_SB |
++ AZX_DCAPS_PM_RUNTIME },
+ /* ATI HDMI */
+ { PCI_DEVICE(0x1002, 0x0002),
+ .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS },
+diff --git a/sound/soc/codecs/rt1305.c b/sound/soc/codecs/rt1305.c
+index f4c8c45f4010..421b8fb2fa04 100644
+--- a/sound/soc/codecs/rt1305.c
++++ b/sound/soc/codecs/rt1305.c
+@@ -1066,7 +1066,7 @@ static void rt1305_calibrate(struct rt1305_priv *rt1305)
+ pr_debug("Left_rhl = 0x%x rh=0x%x rl=0x%x\n", rhl, rh, rl);
+ pr_info("Left channel %d.%dohm\n", (r0ohm/10), (r0ohm%10));
+
+- r0l = 562949953421312;
++ r0l = 562949953421312ULL;
+ if (rhl != 0)
+ do_div(r0l, rhl);
+ pr_debug("Left_r0 = 0x%llx\n", r0l);
+@@ -1083,7 +1083,7 @@ static void rt1305_calibrate(struct rt1305_priv *rt1305)
+ pr_debug("Right_rhl = 0x%x rh=0x%x rl=0x%x\n", rhl, rh, rl);
+ pr_info("Right channel %d.%dohm\n", (r0ohm/10), (r0ohm%10));
+
+- r0r = 562949953421312;
++ r0r = 562949953421312ULL;
+ if (rhl != 0)
+ do_div(r0r, rhl);
+ pr_debug("Right_r0 = 0x%llx\n", r0r);
+diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
+index 33065ba294a9..d2c9d7865bde 100644
+--- a/sound/soc/intel/boards/bytcr_rt5640.c
++++ b/sound/soc/intel/boards/bytcr_rt5640.c
+@@ -404,7 +404,7 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = {
+ },
+ .driver_data = (void *)(BYT_RT5640_DMIC1_MAP |
+ BYT_RT5640_JD_SRC_JD1_IN4P |
+- BYT_RT5640_OVCD_TH_2000UA |
++ BYT_RT5640_OVCD_TH_1500UA |
+ BYT_RT5640_OVCD_SF_0P75 |
+ BYT_RT5640_SSP0_AIF1 |
+ BYT_RT5640_MCLK_EN),
+diff --git a/sound/soc/qcom/qdsp6/q6afe.c b/sound/soc/qcom/qdsp6/q6afe.c
+index 01f43218984b..69a7896cb713 100644
+--- a/sound/soc/qcom/qdsp6/q6afe.c
++++ b/sound/soc/qcom/qdsp6/q6afe.c
+@@ -777,7 +777,7 @@ static int q6afe_callback(struct apr_device *adev, struct apr_resp_pkt *data)
+ */
+ int q6afe_get_port_id(int index)
+ {
+- if (index < 0 || index > AFE_PORT_MAX)
++ if (index < 0 || index >= AFE_PORT_MAX)
+ return -EINVAL;
+
+ return port_maps[index].port_id;
+@@ -1014,7 +1014,7 @@ int q6afe_port_stop(struct q6afe_port *port)
+
+ port_id = port->id;
+ index = port->token;
+- if (index < 0 || index > AFE_PORT_MAX) {
++ if (index < 0 || index >= AFE_PORT_MAX) {
+ dev_err(afe->dev, "AFE port index[%d] invalid!\n", index);
+ return -EINVAL;
+ }
+@@ -1355,7 +1355,7 @@ struct q6afe_port *q6afe_port_get_from_id(struct device *dev, int id)
+ unsigned long flags;
+ int cfg_type;
+
+- if (id < 0 || id > AFE_PORT_MAX) {
++ if (id < 0 || id >= AFE_PORT_MAX) {
+ dev_err(dev, "AFE port token[%d] invalid!\n", id);
+ return ERR_PTR(-EINVAL);
+ }
+diff --git a/sound/soc/sh/rcar/ssi.c b/sound/soc/sh/rcar/ssi.c
+index cf4b40d376e5..c675058b908b 100644
+--- a/sound/soc/sh/rcar/ssi.c
++++ b/sound/soc/sh/rcar/ssi.c
+@@ -37,6 +37,7 @@
+ #define CHNL_4 (1 << 22) /* Channels */
+ #define CHNL_6 (2 << 22) /* Channels */
+ #define CHNL_8 (3 << 22) /* Channels */
++#define DWL_MASK (7 << 19) /* Data Word Length mask */
+ #define DWL_8 (0 << 19) /* Data Word Length */
+ #define DWL_16 (1 << 19) /* Data Word Length */
+ #define DWL_18 (2 << 19) /* Data Word Length */
+@@ -353,21 +354,18 @@ static void rsnd_ssi_config_init(struct rsnd_mod *mod,
+ struct rsnd_dai *rdai = rsnd_io_to_rdai(io);
+ struct snd_pcm_runtime *runtime = rsnd_io_to_runtime(io);
+ struct rsnd_ssi *ssi = rsnd_mod_to_ssi(mod);
+- u32 cr_own;
+- u32 cr_mode;
+- u32 wsr;
++ u32 cr_own = ssi->cr_own;
++ u32 cr_mode = ssi->cr_mode;
++ u32 wsr = ssi->wsr;
+ int is_tdm;
+
+- if (rsnd_ssi_is_parent(mod, io))
+- return;
+-
+ is_tdm = rsnd_runtime_is_ssi_tdm(io);
+
+ /*
+ * always use 32bit system word.
+ * see also rsnd_ssi_master_clk_enable()
+ */
+- cr_own = FORCE | SWL_32;
++ cr_own |= FORCE | SWL_32;
+
+ if (rdai->bit_clk_inv)
+ cr_own |= SCKP;
+@@ -377,9 +375,18 @@ static void rsnd_ssi_config_init(struct rsnd_mod *mod,
+ cr_own |= SDTA;
+ if (rdai->sys_delay)
+ cr_own |= DEL;
++
++ /*
++ * We shouldn't exchange SWSP after running.
++ * This means, parent needs to care it.
++ */
++ if (rsnd_ssi_is_parent(mod, io))
++ goto init_end;
++
+ if (rsnd_io_is_play(io))
+ cr_own |= TRMD;
+
++ cr_own &= ~DWL_MASK;
+ switch (snd_pcm_format_width(runtime->format)) {
+ case 16:
+ cr_own |= DWL_16;
+@@ -406,7 +413,7 @@ static void rsnd_ssi_config_init(struct rsnd_mod *mod,
+ wsr |= WS_MODE;
+ cr_own |= CHNL_8;
+ }
+-
++init_end:
+ ssi->cr_own = cr_own;
+ ssi->cr_mode = cr_mode;
+ ssi->wsr = wsr;
+@@ -465,15 +472,18 @@ static int rsnd_ssi_quit(struct rsnd_mod *mod,
+ return -EIO;
+ }
+
+- if (!rsnd_ssi_is_parent(mod, io))
+- ssi->cr_own = 0;
+-
+ rsnd_ssi_master_clk_stop(mod, io);
+
+ rsnd_mod_power_off(mod);
+
+ ssi->usrcnt--;
+
++ if (!ssi->usrcnt) {
++ ssi->cr_own = 0;
++ ssi->cr_mode = 0;
++ ssi->wsr = 0;
++ }
++
+ return 0;
+ }
+
+diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
+index 229c12349803..a099c3e45504 100644
+--- a/sound/soc/soc-dapm.c
++++ b/sound/soc/soc-dapm.c
+@@ -4073,6 +4073,13 @@ int snd_soc_dapm_link_dai_widgets(struct snd_soc_card *card)
+ continue;
+ }
+
++ /* let users know there is no DAI to link */
++ if (!dai_w->priv) {
++ dev_dbg(card->dev, "dai widget %s has no DAI\n",
++ dai_w->name);
++ continue;
++ }
++
+ dai = dai_w->priv;
+
+ /* ...find all widgets with the same stream and link them */
+diff --git a/tools/bpf/bpftool/map_perf_ring.c b/tools/bpf/bpftool/map_perf_ring.c
+index 1832100d1b27..6d41323be291 100644
+--- a/tools/bpf/bpftool/map_perf_ring.c
++++ b/tools/bpf/bpftool/map_perf_ring.c
+@@ -194,8 +194,10 @@ int do_event_pipe(int argc, char **argv)
+ }
+
+ while (argc) {
+- if (argc < 2)
++ if (argc < 2) {
+ BAD_ARG();
++ goto err_close_map;
++ }
+
+ if (is_prefix(*argv, "cpu")) {
+ char *endptr;
+@@ -221,6 +223,7 @@ int do_event_pipe(int argc, char **argv)
+ NEXT_ARG();
+ } else {
+ BAD_ARG();
++ goto err_close_map;
+ }
+
+ do_all = false;
+diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
+index 4f5de8245b32..6631b0b8b4ab 100644
+--- a/tools/perf/tests/builtin-test.c
++++ b/tools/perf/tests/builtin-test.c
+@@ -385,7 +385,7 @@ static int test_and_print(struct test *t, bool force_skip, int subtest)
+ if (!t->subtest.get_nr)
+ pr_debug("%s:", t->desc);
+ else
+- pr_debug("%s subtest %d:", t->desc, subtest);
++ pr_debug("%s subtest %d:", t->desc, subtest + 1);
+
+ switch (err) {
+ case TEST_OK:
+diff --git a/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1d_vlan.sh b/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1d_vlan.sh
+index 3bb4c2ba7b14..197e769c2ed1 100755
+--- a/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1d_vlan.sh
++++ b/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1d_vlan.sh
+@@ -74,12 +74,14 @@ test_vlan_match()
+
+ test_gretap()
+ {
+- test_vlan_match gt4 'vlan_id 555 vlan_ethtype ip' "mirror to gretap"
++ test_vlan_match gt4 'skip_hw vlan_id 555 vlan_ethtype ip' \
++ "mirror to gretap"
+ }
+
+ test_ip6gretap()
+ {
+- test_vlan_match gt6 'vlan_id 555 vlan_ethtype ipv6' "mirror to ip6gretap"
++ test_vlan_match gt6 'skip_hw vlan_id 555 vlan_ethtype ip' \
++ "mirror to ip6gretap"
+ }
+
+ test_gretap_stp()
+diff --git a/tools/testing/selftests/net/forwarding/mirror_gre_lib.sh b/tools/testing/selftests/net/forwarding/mirror_gre_lib.sh
+index 619b469365be..1c18e332cd4f 100644
+--- a/tools/testing/selftests/net/forwarding/mirror_gre_lib.sh
++++ b/tools/testing/selftests/net/forwarding/mirror_gre_lib.sh
+@@ -62,7 +62,7 @@ full_test_span_gre_dir_vlan_ips()
+ "$backward_type" "$ip1" "$ip2"
+
+ tc filter add dev $h3 ingress pref 77 prot 802.1q \
+- flower $vlan_match ip_proto 0x2f \
++ flower $vlan_match \
+ action pass
+ mirror_test v$h1 $ip1 $ip2 $h3 77 10
+ tc filter del dev $h3 ingress pref 77
+diff --git a/tools/testing/selftests/net/forwarding/mirror_gre_vlan_bridge_1q.sh b/tools/testing/selftests/net/forwarding/mirror_gre_vlan_bridge_1q.sh
+index 5dbc7a08f4bd..a12274776116 100755
+--- a/tools/testing/selftests/net/forwarding/mirror_gre_vlan_bridge_1q.sh
++++ b/tools/testing/selftests/net/forwarding/mirror_gre_vlan_bridge_1q.sh
+@@ -79,12 +79,14 @@ test_vlan_match()
+
+ test_gretap()
+ {
+- test_vlan_match gt4 'vlan_id 555 vlan_ethtype ip' "mirror to gretap"
++ test_vlan_match gt4 'skip_hw vlan_id 555 vlan_ethtype ip' \
++ "mirror to gretap"
+ }
+
+ test_ip6gretap()
+ {
+- test_vlan_match gt6 'vlan_id 555 vlan_ethtype ipv6' "mirror to ip6gretap"
++ test_vlan_match gt6 'skip_hw vlan_id 555 vlan_ethtype ip' \
++ "mirror to ip6gretap"
+ }
+
+ test_span_gre_forbidden_cpu()